AMD on Tuesday formally announced its next-generation EPYC processor, code-named Rome. The new server CPU will feature up to 64 cores based on the Zen 2 microarchitecture, promising at least two times the per-socket performance of existing EPYC chips.

As discussed in a separate story covering AMD’s new ‘chiplet’ design approach, the AMD EPYC ‘Rome’ processor will carry multiple CPU chiplets manufactured using TSMC’s 7 nm fabrication process, as well as an I/O die produced on a 14 nm node. As it appears, high-performance ‘Rome’ processors will use eight CPU chiplets offering 64 x86 cores in total, along with an eight-channel DDR4 memory controller supporting up to 4 TB of DRAM per socket. In addition, the new processor supports 128 PCIe 4.0 lanes to connect next-generation accelerators, such as the Radeon Instinct MI60 based on the 7 nm Vega GPU.

Because the Zen 2 microarchitecture is expected to generally increase the performance of CPU cores (especially floating point performance, which AMD expects to double), the Rome processors will boost server performance quite dramatically compared to existing machines. In particular, AMD expects performance per socket to double as a result of the higher core count, and predicts that floating point performance per socket will quadruple because of architectural IPC improvements combined with the increased core count.
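The scaling AMD is claiming can be sanity-checked with some simple arithmetic. A minimal sketch, assuming the top Naples part has 32 cores and taking AMD's stated 2× per-core floating point uplift at face value:

```python
# Per-socket scaling implied by AMD's claims. The core counts are from
# the announcement; the per-core FP doubling is AMD's own projection.
naples_cores = 32          # top EPYC 'Naples' part
rome_cores = 64            # top EPYC 'Rome' part
fp_speedup_per_core = 2.0  # AMD's projected FP doubling per Zen 2 core

core_scaling = rome_cores / naples_cores         # 2x the cores per socket
fp_scaling = core_scaling * fp_speedup_per_core  # 2x cores * 2x FP per core

print(f"Per-socket throughput scaling from core count: {core_scaling:.0f}x")
print(f"Per-socket floating point scaling:             {fp_scaling:.0f}x")
```

The 4× floating point figure is thus simply the product of the doubled core count and the doubled per-core FP throughput.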

One important peculiarity of AMD’s EPYC ‘Rome’ processor is that it is socket-compatible with the existing EPYC ‘Naples’ platform and will be forward-compatible with AMD’s future ‘Milan’ platform featuring CPUs powered by the Zen 3 microarchitecture. This will greatly simplify development of AMD-based servers and enable server makers to reuse their existing designs for future machines, which is important for AMD as it tries to capture market share from Intel. To do that, it has to simplify the job of server builders by keeping its platforms consistent.

AMD is currently sampling its EPYC ‘Rome’ processor with server makers and customers. The company plans to launch ‘Rome’ products sometime in 2019, but it is not disclosing a launch schedule at this time.

This is breaking news; we are updating the story with more details.

Comments Locked



  • shabby - Tuesday, November 6, 2018 - link

    Those 8 core dies look so small and inexpensive, how can Intel compete with those massive 28 core 14nm dies?
  • FriendlyUser - Tuesday, November 6, 2018 - link

    That is exactly the point.
  • rahvin - Wednesday, November 7, 2018 - link

    Those little chiplets probably cost a few dollars per chiplet; at that size they will have fantastic yields. Add in the slightly bigger IO chip on the older process and likely low power use. Do have to wonder what assembling the package costs, though. AMD made a good choice; Rome may pull them into some really good cloud contracts with a TCO that's much lower than Intel's.
  • edzieba - Wednesday, November 7, 2018 - link

    On the flipside, that means instead of one die that needs to pass binning, you need 5 dies. If - like with HBM-on-substrate devices - that requires binning /after/ packaging - you have 5 chances for a defect to fail a package, and also 1 in 5 of that dead die killing the entire CPU assembly.

    And post-packaging binning appears to be what is used for current Threadripper binning (for example), so there's a good chance that if individual dual-CCX dies cannot be independently binned pre-packaging, a bare northbridge-less CCX die cannot either.
  • Topweasel - Wednesday, November 7, 2018 - link

    Enough testing and control on wafer production and you can be confident of the capabilities of each die in each location.

    When AMD first started disabling cores after testing with Phenom, Intel made a statement along the lines of: if we tested a die and it had any issues, we would trash it. Part of the reason was that Intel was much more limited; they didn't really have a product for disabled cores on a 2c Core 2 CPU. The other reason was that they had recently made a big push to ensure all fabs used exactly the same manufacturing steps for a given process. They had gotten everything down to a perfect science and could pre-bin chips before testing, so chips wouldn't "fail" after being lasered; when one did, it could be trashed. But they were basically recognizing chips' limits just like AMD was, just before testing instead of after.

    QC at other Fabs has gone waaaay up. They have to, to get this small. My guess is they know bad chips before they do the packaging.
  • psychobriggsy - Friday, November 9, 2018 - link

    On the flipflipside, you can bin chiplets by speed as per usual, and then match them in assembly.

    This tightens the clock speed distribution for the final product. Instead of needing a lucky monolithic 64-core die where all 64 cores can run at speed X, you only need eight 8-core dies that bin at that speed (which is far more common). The chances of all 64 cores on one die reaching X are low, but the chances of 8 cores on a die reaching X are fairly reasonable. There's a good video on YouTube by AdoredTV that explains this, it has graphs and everything :)

    I would also presume that assembly yields are very high - it's hardly as if MCM technology is new.
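The binning argument in the comments above can be sketched numerically. A minimal illustration, assuming each core independently hits the target bin with some hypothetical probability `p` (the 0.97 figure is made up purely for illustration, not a real yield number):

```python
# Why small chiplets bin better than one big die: if each core hits the
# target clock independently with probability p, a monolithic 64-core die
# needs all 64 cores to pass, while an 8-core chiplet needs only 8.
p = 0.97  # hypothetical chance a single core bins at speed X

p_mono_64 = p ** 64   # monolithic 64-core die: all 64 cores must pass
p_chiplet_8 = p ** 8  # single 8-core chiplet: only 8 cores must pass

print(f"Monolithic 64-core die bins at X: {p_mono_64:.1%}")
print(f"Single 8-core chiplet bins at X:  {p_chiplet_8:.1%}")
# With chiplets, the dies that fail the bin can be discarded (or sold in
# a lower bin) individually, and eight passing chiplets are then matched
# together in assembly, so no "golden" monolithic die is ever required.
```

Under this independence assumption the per-die pass rate is dramatically higher for the 8-core chiplet, which is exactly the "match them in assembly" advantage described above.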
  • abufrejoval - Wednesday, November 7, 2018 - link

    Physical size can be misleading: These are EUV chips and there are wonderful articles on this site on just how crazy difficult and expensive that technology is.... Use the search box and make sure to collect your jaw, before you get up after reading that.
  • psychobriggsy - Friday, November 9, 2018 - link

    These are not EUV chips, that's next year with TSMC's 7N+ process, this is the plain 7N process.
  • jospoortvliet - Sunday, November 11, 2018 - link

    Plain still means 4x patterning, itself very expensive and low yield. Samsung even claims to do EUV on 7nm to SAVE on cost.
  • colinstu - Tuesday, November 6, 2018 - link

    something something burning rome
