Core to Core to Core: Design Trade-Offs

AMD’s approach to these big processors is to take a small repeating unit, such as the 4-core complex or the 8-core silicon die (which contains two complexes), and put several on a package to reach the required number of cores and threads. The upside is that replicating the die also replicates its resources, such as memory channels and PCIe lanes. The downside is how the cores and memory have to talk to each other.

In a standard monolithic (single-die) design, each core sits on an internal interconnect to the memory controller and can hop out to main memory with low latency. The latency between the cores and the memory controller is usually low, and the routing mechanism (a ring or a mesh) determines the trade-off between bandwidth, latency, and scalability.

In a multi-die design, where each die has direct access to some memory locally but has to make a jump to reach the rest, we end up with a non-uniform memory architecture, known in the business as a NUMA design. Performance can be limited by this non-uniform memory latency, and software has to be ‘NUMA-aware’ in order to minimize latency and maximize bandwidth. The extra hops between silicon and memory controllers also burn some power.
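To make the ‘NUMA-aware’ point concrete, the sketch below (our illustration, not AMD code) shows how a Linux program can use libnuma to allocate a buffer on the node local to the core it runs on, so accesses avoid a die-to-die hop; the node choice and buffer size are illustrative assumptions.

```c
// Minimal sketch of NUMA-aware allocation on Linux using libnuma.
// Build with: gcc numa_alloc.c -lnuma
// The node choice and buffer size are illustrative, not prescriptive.
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int node = numa_node_of_cpu(0);      // NUMA node that CPU 0 belongs to
    size_t size = 64 * 1024 * 1024;      // 64 MB working buffer

    // Allocate the buffer on that node so accesses stay on the local die.
    char *buf = numa_alloc_onnode(size, node);
    if (!buf) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }

    // Keep the calling thread on the same node so "local" stays local.
    numa_run_on_node(node);

    memset(buf, 0, size);                // touching the pages commits them on the chosen node
    printf("allocated %zu bytes on node %d\n", size, node);

    numa_free(buf, size);
    return 0;
}
```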

We saw this before with the first-generation Threadripper: having two active silicon dies on the package meant that there was an extra hop if the required data sat in the memory attached to the other die. With the second-generation Threadripper, it gets a lot more complex.

On the left is the 1950X/2950X design, with two active silicon dies. Each die has direct access to 32 PCIe lanes and two memory channels, which combined gives 60/64 PCIe lanes and four memory channels. Cores accessing the memory or PCIe devices attached to their own die are faster than those that have to go off-die.

For the 2990WX and 2970WX, the two previously ‘inactive’ dies are now enabled, but they do not bring extra memory or PCIe with them. For these cores there is no ‘local’ memory or connectivity: every access to main memory requires an extra hop. There is also extra die-to-die traffic over AMD’s Infinity Fabric (IF) interconnect, which consumes power.
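That extra hop is visible in the node distance table the firmware exposes to the operating system. As a rough illustration (ours, assuming a Linux system with libnuma installed), the snippet below prints that matrix; on a 2990WX, two of the four nodes also report no local memory at all.

```c
// Sketch: print the node distance matrix via libnuma, one way to see
// which nodes are "local" (distance 10) and which need an extra hop.
// Build with: gcc numa_dist.c -lnuma
#include <numa.h>
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int max = numa_max_node();
    printf("node distances (lower is closer):\n");
    for (int i = 0; i <= max; i++) {
        for (int j = 0; j <= max; j++)
            printf("%4d", numa_distance(i, j));   // 10 = local, larger = extra hop
        printf("\n");
    }
    return 0;
}
```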

The reason that these extra cores do not have direct access is down to the platform: the TR4 platform for the Threadripper processors is set at quad-channel memory and 60 PCIe lanes. If the other two dies had their memory and PCIe enabled, it would require new motherboards and memory arrangements.

Users might ask: could AMD not change it so that each silicon die has one memory channel and one set of 16 PCIe lanes? In principle, yes. However, the platform is somewhat locked in how the pins and traces are managed on the socket and motherboards: the firmware expects two memory channels per die, and for electrical and power reasons the current motherboards on the market are not wired this way. This is going to be an important point when we get into the performance results in this review, so keep it in mind.

It is worth noting that this second generation of Threadripper and AMD’s server platform, EPYC, are cousins: they are built on the same package layout and socket, but EPYC has all eight memory channels and all 128 PCIe lanes enabled:

Where Threadripper 2 falls down by having some cores without direct access to memory, EPYC has direct memory available everywhere. This has the downside of requiring more power, but it offers a more homogeneous core-to-core traffic layout.

Going back to Threadripper 2, it is important to understand how the chip is going to be loaded. We confirmed with AMD that, for the most part, the scheduler will load up the cores directly attached to memory before using the others. Each core has a priority weighting based on performance, thermals, and power: the cores closest to memory get a higher priority, but as those fill up, neighbouring cores get demoted due to thermal constraints. This means that while the CPU will likely fill up the cores close to memory first, it is not a simple case of filling all of those cores before anything else; the system may get to 12-14 cores loaded before spilling over to the two memory-less dies.
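Software that does not want to rely on this weighting can pin its threads to the memory-attached dies itself. The sketch below is a hypothetical example using the GNU pthread affinity extensions; core 0 is merely assumed to sit on a die with local memory, and real code would query the topology (for example with libnuma, as above) before choosing cores.

```c
// Sketch: pin a worker thread to a specific core so it stays on a
// memory-attached die rather than relying on the scheduler's weighting.
// Core numbering is illustrative and depends on the system's topology.
// Build with: gcc pin_thread.c -pthread
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    // Memory-bound work would go here; it now runs on the pinned core.
    printf("worker running on CPU %d\n", sched_getcpu());
    return NULL;
}

int main(void) {
    // Pin the worker to core 0, assumed here to have local memory.
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(set), &set);

    pthread_t t;
    if (pthread_create(&t, &attr, worker, NULL) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```

Combined with node-local allocation, this keeps both a thread and its working set on the same die, which is roughly what the scheduler’s priority weighting is trying to approximate automatically.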

Comments

  • jospoortvliet - Saturday, August 18, 2018 - link

    https://www.phoronix.com/scan.php?page=article&... has some.
  • nul0b - Monday, August 13, 2018 - link

    Ian, please define how exactly you're calculating and deriving the uncore and IF power utilization.
  • alpha754293 - Monday, August 13, 2018 - link

    I vote that from now on, all of the CPU reviews should be like this.

    Just raw data.
  • Lolimaster - Monday, August 13, 2018 - link

    To summarize:

    Intel's TDP is a blatant lie; it barely stays at TDP at 6c/6t, meanwhile AMD sticks on point or below TDP with their chips, boost included :D
  • Lolimaster - Monday, August 13, 2018 - link

    Selling more shares from $1.65 now to $19 :D

    AMD Threadripper 2, ripping the blue hole.
  • Lolimaster - Monday, August 13, 2018 - link

    It seems Geekbench can't scale beyond 16 cores.
  • Lolimaster - Monday, August 13, 2018 - link

    WHERE IS CINEBENCH?
  • Lolimaster - Monday, August 13, 2018 - link

    And I mean CB15

    Also, for some reason CB11.5 MT seems to be broken for TR; it stops scaling at 12 cores.
  • mapesdhs - Monday, August 13, 2018 - link

    CB R15 is suffering issues as well these days; at this level it can exhibit huge variance from one run to another.
  • eastcoast_pete - Monday, August 13, 2018 - link

    Thanks Ian, great article, look forward to seeing the full final version!

    My conclusions: All these are workstation processors, NOT for gaming; the Ryzen 2700X and the upcoming Intel octacore 9000 series are/will be better for gaming, both in value for money and absolute performance. That being said, the TR 2950X could be a great choice if your productivity software can make good use of the 16 cores/32 threads, and if that same software isn't written to make strong use of AVX-512. If the applications you buy these monsters for can make heavy use of AVX-512, Intel's chips are currently hard or impossible to beat, even at the same price point. That being said, a 2950X workstation with 128 or 256 GB of RAM (in quad channel, of course), plus some fast PCIe/NVMe SSDs and a big & fast graphics card, would make an awesome video editing setup; and the 60 PCIe lanes would come in really handy for add-in boards. One fly in the ointment: AMD, since you're hamstringing TR with only quad-channel memory, at least let us use faster DDR4; how about officially supporting > 3.2 GHz?

    Unrelated: Love the testing setup where the system storage SSD (1 TB) is the same size as the working memory (1 TB). With one of these, you know you're in the heavyweight division.

    @Ian: I also really appreciate the testing of power draw by the cores vs. the interconnecting fabric. I also believe (as you wrote) that this is a much underappreciated hurdle in simply escalating the number of cores. I also wonder: a. How is that affecting ARM-based multicore chips, especially once we are talking 32 cores and up, as with the chips intended for servers? and b. Is that one of the reasons (or THE reason) why ARM-based manycore solutions have not been getting much traction, and why companies like Qualcomm have stopped their development? Yes, the cores might be very efficient, but if those power savings are being gobbled up by the interconnects, fewer but broader and deeper cores might still end up winning the performance/Wh race.
    If you and/or Ryan (or any of your colleagues) could do a deep dive into the general issue of power use by the interconnecting fabric in the different architectures, I would really appreciate it.
