Feed Me: Infinity Fabric Requires More Power

When moving from a ring bus, which shuttles data between the cores and the memory controllers, to a mesh or chiplet system, communication between the cores gets a lot more complex. At this point each core or core subset has to act like a router, deciding the best path for the data to take when multiple hops are required to reach the intended target. As we saw with Intel’s MoDe-X mesh at the launch of Skylake-X, the key here is both to avoid contention for the best performance and to reduce wire lengths to decrease power. It turns out that in these systems, the inter-core communication starts eating up a lot of power, and can consume more than the cores themselves.

To describe chip power, all consumer processors carry a rated ‘TDP’, or thermal design power. Intel and AMD measure this value differently, based on workloads and temperatures, but the technical definition of TDP is the amount of thermal energy the cooler must be able to dissipate when the processor is fully loaded (usually defined at base frequency, not all-core turbo). Actual power consumption might be higher, due to losses in the power delivery or thermal dissipation through the board, but for most situations TDP and power consumption are broadly considered equal.

This means that the TDP ratings on modern processors, such as 65W, 95W, 105W, 140W, 180W, and now 250W, should broadly indicate the peak sustained power consumption. However, as explained above, not all of that power goes toward pushing frequency in the cores. Some has to be spent in the memory controllers, in the IO, in the integrated graphics (for the chips that have them), and now the core-to-core interconnect becomes a big part of it. Just how much turns out to be eye-opening.

For most CPUs, we have the ability to measure either per-core or all-core power, as well as the power of the whole chip. If we subtract the 'core' power from the 'chip' power, we are left with everything else: DRAM controller power, interconnect power, and in some cases L3 power as well.
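As a rough sketch of that subtraction in practice, the snippet below samples the Linux powercap (RAPL) energy counters and reports the difference. The paths follow the common intel-rapl layout and are an assumption: AMD exposes its energy counters differently depending on platform and kernel, and the figures in this article come from our own test tooling rather than this script.

```python
# Minimal sketch: estimate 'non-core' power as package power minus core power.
# Assumes the Linux powercap (RAPL) interface. These paths are the common
# intel-rapl layout and are an assumption; availability varies by platform.
import time

PKG = "/sys/class/powercap/intel-rapl:0/energy_uj"      # package domain
CORES = "/sys/class/powercap/intel-rapl:0:0/energy_uj"  # core (PP0) domain

def read_uj(path):
    with open(path) as f:
        return int(f.read())

def sample(interval_s=1.0):
    """Return (package W, core W, non-core W) over one interval."""
    pkg0, core0 = read_uj(PKG), read_uj(CORES)
    time.sleep(interval_s)
    pkg1, core1 = read_uj(PKG), read_uj(CORES)
    # Counters report microjoules; ignoring counter wraparound for brevity.
    pkg_w = (pkg1 - pkg0) / 1e6 / interval_s
    core_w = (core1 - core0) / 1e6 / interval_s
    return pkg_w, core_w, pkg_w - core_w

if __name__ == "__main__":
    pkg_w, core_w, noncore_w = sample()
    print(f"package {pkg_w:.1f} W, cores {core_w:.1f} W, non-core {noncore_w:.1f} W")
```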

To see the scale of this, let us start with something straightforward and known to most users. Intel’s latest Coffee Lake processors, such as the Core i7-8700K, use what is known as a ring-bus design. These processors use a single ring to connect each of the cores and the memory controller – if data has to be moved, it gets placed onto the ring and shuttled along to where it is needed. This system has historically been called the ‘Uncore’, and can run at a different frequency to the main cores, allowing its power to scale with demand. The power distribution looks like this:

Despite the 95W TDP, this processor at stock frequencies consumes around 125 W at load, beyond its TDP (which is defined at base frequency). However, it is the ratio of uncore to total power that concerns us here: at light loading, the uncore is only 4% of the total power, rising to 7-9% as we load up the cores. For argument's sake, let us call this a maximum of 10%.
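To see why a single ring eventually runs out of steam, consider hop count: on a bidirectional ring, the average number of stops between two agents grows linearly with the number of agents. A toy sketch, with hypothetical stop indexing rather than Intel's actual ring protocol:

```python
# Toy sketch of hop counts on a bidirectional ring (hypothetical stop
# indexing, not Intel's actual ring implementation).
def ring_hops(src, dst, n_stops):
    """Fewest stops between two agents on a bidirectional ring."""
    d = abs(src - dst) % n_stops
    return min(d, n_stops - d)

# Average hops grow linearly with stop count, which is why rings stop
# scaling gracefully at high core counts.
for n in (8, 12, 20, 28):
    pairs = [(a, b) for a in range(n) for b in range(n) if a != b]
    avg = sum(ring_hops(a, b, n) for a, b in pairs) / len(pairs)
    print(f"{n} stops: average {avg:.2f} hops, worst case {n // 2}")
```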

Now let us go into something more meaty: Intel’s Skylake-X processors. In this design, Intel uses its new ‘mesh’ architecture, similar to MoDe-X, whereby each subset of the processor has a small router / crossbar partition that can direct a data packet to the cores around it, or to itself, as required.
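The textbook way to route on a mesh like this is dimension-ordered routing, where a packet resolves one axis before the other. The sketch below illustrates the concept only; it is not a claim about Intel's actual MoDe-X routing tables:

```python
# Toy sketch of dimension-ordered ('X then Y') mesh routing. This shows the
# textbook scheme for meshes of this kind, not Intel's actual implementation.
def xy_route(src, dst):
    """Yield each (x, y) tile a packet visits from src to dst."""
    x, y = src
    dx, dy = dst
    yield (x, y)
    while x != dx:              # resolve the X dimension first...
        x += 1 if dx > x else -1
        yield (x, y)
    while y != dy:              # ...then the Y dimension
        y += 1 if dy > y else -1
        yield (x, y)

# A tile at (0, 0) sending to (3, 2) takes five hops:
print(list(xy_route((0, 0), (3, 2))))
# [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
```

Fixing the dimension order keeps paths predictable and avoids routing deadlock, at the cost of never opportunistically taking the less congested of two equal-length routes, which is part of why contention management matters on these designs.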

This design allows the processor to scale, given that ring-based systems incur additional latency beyond about 14 cores or so (going by when Intel introduced the mesh design). However, the mesh runs at a lower frequency than the rings Intel used to use, and it also consumes a lot more power.

In this setup, we see the power that isn't for the cores starting at 20% of the total chip power, rising to 25-30% as more cores are loaded. As a result, around one-quarter to one-third of the power of the chip is being used for core-to-core and core-to-memory communication, as well as IO. This is despite the mesh often being cited as one of the key criticisms of this processor's performance: the ability to scale properly beyond 24 or so cores is the reason Intel has gone down this path.

For AMD, the situation is a bit mix-and-match. Within a single four-core complex, communication between cores is relatively simple and uses a centralized crossbar. When dealing with so few cores, the communication method is simple and light. However, between the two complexes on the same silicon, or out to the memory controller, the interconnect comes into play. This is not so much a ring, but is based on an internal version of the Infinity Fabric (IF).

The IF is designed to be scalable across cores, silicon, and sockets. We can probe what it does within a single piece of silicon by looking directly at the Ryzen 7 2700X, which has a TDP of 105W.


*IF Power should be 'Non-Core' power, which includes IF + DRAM controller + IO

AMD’s product here gives two interesting data points. Firstly, when the cores are lightly loaded, the IF + DRAM controller + IO accounts for a massive 43% of the total power of the processor, compared to 4% for the i7-8700K and 19% for the i9-7980XE. As the cores load up, that 43% reduces to around 25% of the full chip power, but this is still on par with Intel's bigger mesh-based processor.

Another interesting point is that the combined non-core power doesn’t change much as the core loading scales up, going from ~17.6W to ~25.7W. For the big Intel chip, we saw it rise from ~13.8W up to beyond 40W in some cases. This raises the question of whether Intel’s offering can scale its non-core power down at the low end, and whether AMD’s non-core power is an initial ‘power penalty’ to pay before the cores start getting loaded up.

With this data, let us kick it up a notch to the real story here. The Ryzen Threadripper 2950X is the updated 16-core version of Threadripper, which uses a single IF link between the two silicon dies to talk between the sets of core complexes.

As shown in the diagram, the red line represents the IF link combined with the DRAM controller and IO. In this case, the non-core power we measure includes both the intra-silicon and the inter-silicon interconnect.


*IF Power should be 'Non-Core' power, which includes IF + DRAM controller + IO

As a percentage, the non-core power consumes 59% of the chip's total power consumption when loaded with two threads. So even though both threads are located on the same core in the same CCX, because that core needs access to all of the memory in the system, the die-to-die link as well as all the intra-silicon links are fired up.

However, the power consumed by the IF + DRAM controller + IO does not increase all that much as the core loading scales, going from 34W to 43W, slowly falling to around 25% of the total chip power, similar to the 2700X. It is just that initial bump that stands out, because that one active core still needs access to all the memory bandwidth available.
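Before tackling the bigger chips, it is worth spelling out how quickly the link count grows once every die has to talk to every other die directly. The 2950X's two dies share a single link; a fully connected four-die package needs six:

```python
# Back-of-the-envelope: links needed when every die connects to every other.
def if_links(n_dies):
    return n_dies * (n_dies - 1) // 2

print(if_links(2))  # 1 link  (two active dies, as on the 2950X)
print(if_links(4))  # 6 links (four active dies)
print(if_links(8))  # 28 links: full connectivity grows quadratically
```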

Now we should consider the 2990WX. Because all four silicon dies are enabled on the package, and each die needs an IF interconnect to each other die, there are now six IF links to fire up:

There’s a lot of red in that diagram. It is noteworthy that two of the silicon dies do not have DRAM attached to them, so when only a few cores are active, AMD should theoretically be able to power down their IF links, which are only useful as extra hops when the other IF links are congested. In fact, we get something very odd indeed.


*IF Power should be 'Non-Core' power, which includes IF + DRAM controller + IO

First, let us start with the low-loading metrics. Here the non-core power consumes 56.1W of the total 76.7W power consumption, a massive 73% of the processor's total power consumption. If a single link on the 2950X was only 34W, the 56W here makes it clear that more than a single IF link is being fired up. There are perhaps additional power management opportunities here.

Moving through the stack, you will notice that our 2990WX sample never goes near its rated 250W TDP, at times barely hitting 180W. We are unsure why this is. What we can say is that as loading increases, the non-core contribution to total power does decrease, slowly settling around 36%, varying between 35% and 40% depending on the specific workload. This is a rise from the 25% we saw on the 2700X and 2950X.

So given that this is the first review with our EPYC 7601 numbers, how about we take it up another notch? While based on the older first-generation Zen cores, EPYC has additional memory controllers and IO to worry about, all of which fall under the non-core power category.

Moving into the power consumption numbers: similar to the 2990WX, the values get a little bit squirrely as we load up all the cores. However, the proportions are staggering.


*IF Power should be 'Non-Core' power, which includes IF + DRAM controller + IO

At low loading, out of a total package power of 74.1W, the non-core power consumes 66.2W, a staggering 89%! As we go up through the cores, that 66.2W rises to as much as 90W in places, but even at its lowest point, the non-core accounts for 50% of the power of the total chip. The cores are barely getting 90W out of the 180W TDP!
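Pulling the low-load figures from these pages into one place makes the trend stark. The quick sketch below uses only the numbers quoted above, and only computes percentages where the package totals are given:

```python
# Low-load data points quoted in this piece, gathered in one place. Package
# totals are included only where the text gives them, never guessed.
low_load = {
    "Ryzen 7 2700X":       (17.6, None),   # (non-core W, package W)
    "Threadripper 2950X":  (34.0, None),
    "Threadripper 2990WX": (56.1, 76.7),
    "EPYC 7601":           (66.2, 74.1),
}

for chip, (noncore, total) in low_load.items():
    if total:
        print(f"{chip}: {noncore} W non-core = {noncore / total:.0%} of package")
    else:
        print(f"{chip}: {noncore} W non-core at low load")
```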

This raises an interesting point: if we are purely considering the academic merits of one core compared to another, does the uncore power count toward that contribution? For a real-world analysis, yes; for a purely academic one, perhaps not. It also means I can make the following prophecy:

After core counts, the next battle will be on the interconnect. Low power, scalable, and high performance: process node scaling will mean nothing if the interconnect becomes 90% of the total chip power.

Comments

  • plonk420 - Tuesday, August 14, 2018 - link

    worse for efficiency?

    https://techreport.com/r.x/2018_08_13_AMD_s_Ryzen_...
  • Railgun - Monday, August 13, 2018 - link

    How can you tell? The article isn’t even finished.
  • mapesdhs - Monday, August 13, 2018 - link

    People will argue a lot here about performance per watt and suchlike, but in the real world the cost of the software and the annual license renewal is often far more than the base hw cost, resulting in a long-term TCO that dwarfs any difference in CPU cost. I'm referring here to the kind of user that would find the 32c option relevant.

    Also missing from the article is the notion of being able to run multiple medium scale tasks on the same system, eg. 3 or 4 tasks each of which is using 8 to 10 cores. This is quite common practice. An article can only test so much though, at this level of hw the number of different parameters to consider can be very large.

    Most people on tech forums of this kind will default to tasks like 3D rendering and video conversion when thinking about compute loads that can use a lot of cores, but those are very different to QCD, FEA and dozens of other tasks in research and data crunching. Some will match the arch AMD is using, others won't; some could be tweaked to run better, others will be fine with 6 to 10 cores and just run 4 instances testing different things. It varies.

    Talking to an admin at COSMOS years ago, I was told that even coders with seemingly unlimited cores to play with found it quite hard to scale relevant code beyond about 512 cores, so instead for the sort of work they were doing, the centre would run multiple simulations at the same time, which on the hw platform in question worked very nicely indeed (1856 cores of the SandyBridge-EP era, 14.5TB of globally shared memory, used primarily for research in cosmology, astrophysics and particle physics; squish it all into a laptop and I'm sure Sheldon would be happy. :D) That was back in 2012, but the same concepts apply today.

    For TR2, the tricky part is getting the OS to play nice, along with the BIOS, and optimised sw. It'll be interesting to see how 2990WX performance evolves over time as BIOS updates come out and AMD gets feedback on how best to exploit the design, new optimisations from sw vendors (activate TR2 mode!) and so on.

    SGI dealt with a lot of these same issues when evolving its Origin design 20 years ago. For some tasks it absolutely obliterated the competition (eg. weather modelling and QCD), while for others in an unoptimised state it was terrible (animation rendering, not something that needs shared memory, but ILM wrote custom sw to reuse bits of a frame already calculated for future frames, the data able to fly between CPUs very fast, increasing throughput by 80% and making the 32-CPU systems very competitive, but in the long run it was easier to brute force on x86 and save the coder salary costs).

    There are so many different tasks in the professional space, the variety is vast. It's too easy to think cores are all that matter, but sometimes having oodles of RAM is more important, or massive I/O (defense imaging, medical and GIS are good examples).

    I'm just delighted to see this kind of tech finally filter down to the prosumer/consumer, but alas much of the nuance will be lost, and sadly some will undoubtedly buy based on the marketing, as opposed to the golden rule of any tech at this level: ignore the published benchmarks, the only test that actually matters is your specific intended task and data, so try to test it with that before making a purchasing decision.

    Ian.
  • AbRASiON - Monday, August 13, 2018 - link

    Really? I can't tell if posts like these are facetious or kidding or what?

    I want AMD to compete so badly long term for all of us, but Intel have such immense resources, such huge infrastructure, they have ties to so many big business for high end server solutions. They have the bottom end of the low power market sealed up.

    Even if their 10nm is delayed another 3 years, AMD will only just begin to start to really make a genuine long term dent in Intel.

    I'd love to see us at a 50/50 situation here, heck I'd be happy with a 25/75 situation. As it stands, Intel isn't finished, not even close.
  • imaheadcase - Monday, August 13, 2018 - link

    Are you looking at the same benchmarks as everyone else? I mean, AMD's ass was handed to it in encoding tests and it even went neck and neck against some 6c Intel products. If AMD got one of these out every 6 months with better improvements, sure, but they never do.
  • imaheadcase - Monday, August 13, 2018 - link

    Especially when you consider they are using double the core count to get the numbers they do have; it's not a very efficient way to get better performance.
  • crotach - Tuesday, August 14, 2018 - link

    It's happened before. AMD trashes Intel. Intel takes it on the chin. AMD leads for 1-2 years and celebrates. Then Intel releases a new platform and AMD plays catch-up for 10 years and tries hard not to go bankrupt.

    I dearly hope they've learned a lesson the last time, but I have my doubts. I will support them and my next machine will be AMD, which makes perfect sense, but I won't be investing heavily in the platform, so no X399 for me.
  • boozed - Tuesday, August 14, 2018 - link

    We're talking about CPUs that cost more than most complete PCs. Willy-waving aside, they are irrelevant to the market.
  • Ian Cutress - Monday, August 13, 2018 - link

    Hey everyone, sorry for leaving a few pages blank right now. Jet lag hit me hard over the weekend from Flash Memory Summit. Will be filling in the blanks and the analysis throughout today.

    But here's what there is to look forward to:

    - Our new test suite
    - Analysis of Overclocking Results at 4G
    - Direct Comparison to EPYC
    - Me being an idiot and leaving the plastic cover on my cooler, but it completed a set of benchmarks. I pick through the data to see if it was as bad as I expected

    The benchmark data should now be in Bench, under the CPU 2019 section, as our new suite will go into next year as well.

    Thoughts and commentary welcome!
  • Tamz_msc - Monday, August 13, 2018 - link

    Are the numbers for the LuxMark C++ test correct? Seems they've been swapped (2990WX and 2950X).
