Core to Core to Core: Design Trade Offs

AMD’s approach to these big processors is to take a small repeating unit, such as the 4-core complex or 8-core silicon die (which has two complexes on it), and put several on a package to get the required number of cores and threads. The upside of this is that there are a lot of replicated units, such as memory channels and PCIe lanes. The downside is how cores and memory have to talk to each other.

In a standard monolithic (single) silicon design, each core sits on an internal interconnect to the memory controller and can hop out to main memory with low latency. The routing mechanism between the cores and the memory controller (a ring or a mesh) determines the balance of bandwidth, latency, and scalability, and the final performance is usually a trade-off between the three.

In a multiple silicon design, where each die has access to some memory locally but must make an extra jump to reach the rest, we come across a non-uniform memory architecture, known in the business as a NUMA design. Performance can be limited by this non-uniform memory delay, and software has to be ‘NUMA-aware’ in order to minimize the latency and maximize the bandwidth. The extra jumps between silicon and memory controllers also burn some power.
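
As a concrete illustration, here is a minimal sketch of what ‘NUMA-aware’ can mean in practice, assuming a Linux system with libnuma installed (compiled and linked with -lnuma); the buffer size and the choice of the current node are example values, not anything specific to Threadripper:

```c
// Minimal sketch of NUMA-aware allocation (assumes Linux with libnuma;
// build with: gcc -O2 numa_alloc.c -lnuma). The idea is to keep a
// thread's working set in the memory attached to the die it runs on.
#define _GNU_SOURCE
#include <numa.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not supported on this system\n");
        return 1;
    }

    // Which NUMA node is this thread currently running on?
    int node = numa_node_of_cpu(sched_getcpu());

    // Back a 64 MiB buffer (arbitrary size) with that node's local memory,
    // instead of leaving placement to the default policy.
    size_t len = 64 * 1024 * 1024;
    void *buf = numa_alloc_onnode(len, node);
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }

    printf("Allocated %zu bytes on NUMA node %d\n", len, node);
    numa_free(buf, len);
    return 0;
}
```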

We saw this before with the first generation Threadripper: having two active silicon dies on the package meant that there was a hop if the data required was in the memory attached to the other silicon. With the second generation Threadripper, it gets a lot more complex.

On the left is the 1950X/2950X design, with two active silicon dies. Each die has direct access to 32 PCIe lanes and two memory channels, which when combined gives 60/64 PCIe lanes and four memory channels. Cores accessing the memory/PCIe connected to their own die are faster than those going off-die.

For the 2990WX and 2970WX, the two previously ‘inactive’ dies are now enabled, but they do not gain extra access to memory or PCIe. For these cores, there is no ‘local’ memory or connectivity: every access to main memory requires an extra hop. There are also extra die-to-die interconnects using AMD’s Infinity Fabric (IF), which consume power.
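
One way to see this extra hop on a running system is the NUMA distance table the operating system exposes. The sketch below, again assuming Linux with libnuma (linked with -lnuma), simply prints that matrix; local accesses are reported as 10, and larger values reflect additional hops such as the die-to-die Infinity Fabric link:

```c
// Minimal sketch (Linux + libnuma, link with -lnuma): print the NUMA
// distance matrix reported by the kernel. A value of 10 means local
// access; larger values indicate extra hops between dies.
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not supported on this system\n");
        return 1;
    }

    int max_node = numa_max_node();
    printf("NUMA distance matrix (%d nodes):\n", max_node + 1);
    for (int from = 0; from <= max_node; from++) {
        for (int to = 0; to <= max_node; to++)
            printf("%4d", numa_distance(from, to));
        printf("\n");
    }
    return 0;
}
```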

The reason that these extra cores do not have direct access is down to the platform: the TR4 platform for the Threadripper processors is set at quad-channel memory and 60 PCIe lanes. If the other two dies had their memory and PCIe enabled, it would require new motherboards and memory arrangements.

Users might ask: can we not change it so each silicon die has one memory channel and one set of 16 PCIe lanes? The answer is yes, this change could occur. However, the platform is somewhat locked in how the pins and traces are managed on the socket and motherboards: the firmware is expecting two memory channels per die, and for electrical and power reasons the current motherboards on the market are not set up in this way. This is going to be an important point when we get into the performance in the review, so keep this in mind.

It is worth noting that this new second generation of Threadripper and AMD’s server platform, EPYC, are cousins. They are both built from the same package layout and socket, but EPYC has all the memory channels (eight) and all the PCIe lanes (128) enabled:

Where Threadripper 2 falls down by having some cores without direct access to memory, EPYC has direct memory available everywhere. This has the downside of requiring more power, but it offers a more homogeneous core-to-core traffic layout.

Going back to Threadripper 2, it is important to understand how the chip is going to be loaded. We confirmed this with AMD: for the most part, the scheduler will load up the cores that are directly attached to memory first, before using the other cores. Each core has a priority weighting based on performance, thermals, and power – the ones closest to memory get a higher priority, however as those fill up, the cores nearby get demoted due to thermal inefficiencies. This means that while the CPU will likely fill up the cores close to memory first, it will not be a simple case of filling up all of those cores before moving on – the system may get to 12-14 cores loaded before going out to the two new bits of silicon.
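
For software that wants to mirror this behaviour rather than rely entirely on the OS scheduler, the first step is knowing which nodes actually have memory behind them. The sketch below, assuming Linux with libnuma (linked with -lnuma), lists the memory-backed nodes and their CPUs; binding the first worker threads there (for example with pthread_setaffinity_np) and only spilling onto the compute-only dies afterwards is one way an application could follow a similar priority order:

```c
// Minimal sketch (Linux + libnuma, link with -lnuma): list which NUMA
// nodes have local memory and which CPUs belong to them. An application
// could bind its first worker threads to CPUs on the memory-backed nodes
// and use the compute-only dies only once those are busy.
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not supported on this system\n");
        return 1;
    }

    int max_node = numa_max_node();
    for (int node = 0; node <= max_node; node++) {
        long long free_mem = 0;
        long long size = numa_node_size64(node, &free_mem); // <= 0: no local memory

        struct bitmask *cpus = numa_allocate_cpumask();
        numa_node_to_cpus(node, cpus);

        printf("node %d: %s local memory, CPUs:", node,
               size > 0 ? "has" : "no");
        for (unsigned int cpu = 0; cpu < cpus->size; cpu++)
            if (numa_bitmask_isbitset(cpus, cpu))
                printf(" %u", cpu);
        printf("\n");

        numa_free_cpumask(cpus);
    }
    return 0;
}
```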

Comments

  • Lolimaster - Monday, August 13, 2018 - link

I don't really see a point in OCing the 2990WX; it seems quite efficient at stock settings with an average of 170W fully loaded. Why go all the way to 400W+ for just 30% extra performance? It already destroys the 2950X/7980XE OCed to hell beyond repair.
  • Lolimaster - Monday, August 13, 2018 - link

    Threadripper 2990WX = Raid Boss
  • yeeeeman - Monday, August 13, 2018 - link

Amazing performance on AMD's part. If you want to see a real review of the 2990WX from a reviewer who understands how this CPU will be used, please check https://www.phoronix.com/scan.php?page=article&...
  • mapesdhs - Monday, August 13, 2018 - link

    Figured it would be those guys. 8) I talked to them way back when they started using C-ray for testing, after the original benchmark author handed it over to me for general public usage, though it's kinda spread all over the place since then. Yes, they did a good writeup. It's amusing when elsewhere one will see someone say something like, these CPUs are not best for gaming! Well, oh my, what a surprise, I could never have guessed. :D

    In the future though, who knows. Fancy a full D-day simulator with thousands of players? 10 to 20 years from now, CPUs like this might be the norm.
  • eva02langley - Tuesday, August 14, 2018 - link

    It is exactly what I said. If we don't have a proper test bed for a unique product like this, then the results we are going to provide are not going to be representative of the true potential of a CPU like this.

    Sites will need to update their benchmarks suites, or propose new review systems.
  • Gideon - Monday, August 13, 2018 - link

    Great article overall. The Fabric Power part was the most interesting one! Though you might want to check The Stilt's comments regarding that:

    https://forums.anandtech.com/threads/2990wx-review...
    and:
    https://forums.anandtech.com/threads/2990wx-review...
  • Icehawk - Monday, August 13, 2018 - link

Ian, for in-progress articles, can they please be labelled that way? I would rather wait for the article to be complete than read just a few pages and have to check back hoping it has been updated.
  • mapesdhs - Monday, August 13, 2018 - link

Ian, can you add C-ray to the multithreaded testing mix please? It's becoming quite a popular test these days as it can scale to hundreds of threads. Just run at 8K res using the sphract scene file with a deep recursion depth (at least 8), to give a test that's complicated enough to last a decent amount of time and push out to main RAM a fair bit as well.
  • abufrejoval - Monday, August 13, 2018 - link

OK, I understand we are all enjoying this pay-back moment: Intel getting it on the nose for trying to starve AMD and Nvidia by putting chipsets and GPUs into surplus transistors from process shrinks, transistors that couldn’t do anything meaningful for Excel. (Thing is: spreadsheets would actually be ideal for multi-core, even GPGPU, you just need to rewrite them completely…)

    But actually, this article does its best to prepare y’all for the worst: Twice the cores won’t be twice the value, not this time around, nor the next… or the one after that.

    Please take a moment and consider the stark future ahead of us: From now on PCs will be worse than middle class smartphones with ten cores, where it’s cheaper to cut & paste more cores than to think of something useful.
  • KAlmquist - Monday, August 13, 2018 - link

I'm not sure AMD would have bothered with the 2990WX if it weren't for the Intel Core i9 7980XE. With 18 cores, the 7980XE beats the 16-core Threadripper 2950X pretty much across the board. On the other hand, if you're running software that scales well across lots of cores--and you probably are if you're considering shelling out the money for a 7980XE--the 32-core 2990WX will be faster, for about $100 less.

    These are niche processors; I doubt either of them will sell in enough volume to make a significant difference to the bottom line at Intel or AMD. My guess is that both the 2990WX and the 7980XE were released more for the bragging rights than for the sales revenue they will produce.
