Investigating Performance of Multi-Threading on Zen 3 and AMD Ryzen 5000
by Dr. Ian Cutress on December 3, 2020 10:00 AM EST - Posted in
- CPUs
- AMD
- Zen 3
- X570
- Ryzen 5000
- Ryzen 9 5950X
- SMT
- Multi-Threading
One of the stories around AMD’s initial generations of Zen processors was the effect of Simultaneous Multi-Threading (SMT) on performance. With this mode enabled, as it is by default in most situations, users saw significant performance gains in workloads that could take advantage of it. Those gains raise two competing questions: is the core simply so underutilized by a single thread that a second one fits almost for free, or has an efficient SMT strategy been engineered to convert spare capacity into throughput? In this review, we take a look at AMD’s latest Zen 3 architecture to observe the benefits of SMT.
What is Simultaneous Multi-Threading (SMT)?
We often consider each CPU core as being able to process one stream of serial instructions for whatever program is being run. Simultaneous Multi-Threading, or SMT, enables a processor to run two concurrent streams of instructions on the same processor core, sharing resources and optimizing potential downtime on one set of instructions by having a secondary set to come in and take advantage of the underutilization. Two of the limiting factors in most computing models are either compute or memory latency, and SMT is designed to interleave sets of instructions to optimize compute throughput while hiding memory latency.
An old slide from Intel, which has its own marketing term for SMT: Hyper-Threading
When SMT is enabled, depending on the processor, it will allow two, four, or eight threads to run on that core (we have seen some esoteric compute-in-memory solutions with 24 threads per core). Instructions from any thread are rearranged to be processed in the same cycle and keep utilization of the core resources high. Because multiple threads are used, this is known as extracting thread-level parallelism (TLP) from a workload, whereas a single thread with instructions that can run concurrently is instruction-level parallelism (ILP).
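The interleaving idea described above can be sketched with a toy model (this is purely illustrative, not how any real scheduler works): two streams that alternate compute work with long memory stalls can overlap, each stream's compute filling the other's stall cycles.

```python
def run(streams):
    """Toy in-order model: each stream is a list of ('compute',) ops
    (1 cycle of useful work) or ('stall', n) ops (n cycles waiting on
    memory). One compute op can issue per cycle, from any ready stream."""
    pc = [0] * len(streams)        # next op index per stream
    ready_at = [0] * len(streams)  # cycle at which each stream may issue again
    cycle = busy = 0
    while any(pc[i] < len(s) for i, s in enumerate(streams)):
        for i, s in enumerate(streams):
            if pc[i] < len(s) and ready_at[i] <= cycle:
                op = s[pc[i]]
                pc[i] += 1
                if op[0] == 'stall':
                    ready_at[i] = cycle + op[1]  # wait out the memory latency
                else:
                    busy += 1                    # issue slot used this cycle
                    break
        cycle += 1
    return cycle, busy

# A stream that computes, stalls on memory, computes, stalls, computes.
stream = [('compute',), ('stall', 4), ('compute',), ('stall', 4), ('compute',)]
print(run([stream]))          # one thread:  (11, 3) - mostly idle cycles
print(run([stream, stream]))  # two threads: (12, 6) - stalls overlap
```

In this sketch, running both streams together takes barely longer than running one alone (12 cycles versus 11), because each stream's memory stalls are hidden behind the other's compute — the essence of extracting TLP.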
Is SMT A Good Thing?
It depends on who you ask.
SMT2 (two threads per core) involves building core structures sufficient to hold and manage two instruction streams, as well as managing how those structures share resources. For example, if one particular buffer in a core design can hold up to 64 instructions in a queue, but the average occupancy is lower than that (say, 40), then the buffer is underutilized, and an SMT design can keep it fed much closer to capacity. That buffer might be increased to 96 entries to account for this, ensuring that if both instruction streams are running at an ‘average’ load, then both have sufficient headroom. That is two threads’ worth of work for only 1.5 times the buffer size — if all else works out, double the performance for less than double the die area. But in single-thread (ST) mode, where that 96-entry buffer is only around 40% filled yet has to be powered all the time, it may be wasting power.
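The buffer arithmetic above can be made concrete. These figures are the hypothetical numbers from the example (a 64-entry queue averaging 40 entries, grown to 96 entries for SMT2), not actual Zen 3 structure sizes.

```python
# Hypothetical numbers from the text's example, not real Zen 3 sizes.
st_buffer, st_avg = 64, 40
smt_buffer = 96                     # grown 1.5x to hold two average streams

st_util = st_avg / st_buffer        # one thread in the ST-sized buffer
smt_util = 2 * st_avg / smt_buffer  # two average threads in the SMT buffer
st_in_smt = st_avg / smt_buffer     # one thread alone in the big buffer

print(f"ST buffer utilization:       {st_util:.0%}")
print(f"SMT buffer, two threads:     {smt_util:.0%}")
print(f"SMT buffer, one thread only: {st_in_smt:.0%}")
```

The numbers show both sides of the trade: two threads push the enlarged buffer to ~83% full, but a single thread leaves the same powered-on structure at ~42%.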
But, if a core design benefits greatly from SMT, then perhaps the core hasn’t been designed optimally for single-thread performance in the first place. If enabling SMT gave exactly double performance and perfect scaling across the board, as if there were two cores, then perhaps there is a direct issue with how the core is designed, from execution units to buffers to cache hierarchy. It has been known for users to complain that they only get a 5-10% gain in performance with SMT enabled, stating that it doesn't work properly — but this may simply mean the core is already well designed for a single thread. Similarly, a +70% performance gain from SMT may be less a sign that SMT is working well and more a signal of an unbalanced core design that wastes power.
This is the dichotomy of Simultaneous Multi-Threading. If it works well, then a user gets extra performance. But if it works too well, perhaps this is indicative of a core not suited to a particular workload. The answer to the question ‘Is SMT a good thing?’ is more complicated than it appears at first glance.
We can split up the systems that use SMT:
- High-performance x86 from Intel
- High-performance x86 from AMD
- High-performance POWER/z from IBM
- Some High-Performance Arm-based designs
- High-Performance Compute-In-Memory Designs
- High-Performance AI Hardware
Compared to those that do not:
- High-efficiency x86 from Intel
- All smartphone-class Arm processors
- Successful High-Performance Arm-based designs
- Highly focused HPC workloads on x86 with compute bottlenecks
(Note that Intel calls its SMT implementation ‘HyperThreading’, which is a marketing term specifically for Intel).
At this point, we've only been discussing SMT with two threads per core, known as SMT2. Some of the more esoteric hardware designs go beyond two threads per core and use up to eight; you will see this stylized in documentation as SMT4 or SMT8, compared to SMT2. This is how IBM approaches some of its designs. Some compute-in-memory applications go as far as SMT24!
There is a clear split between SMT-enabled systems and no-SMT systems, and SMT appears to be a marker of high performance. The one exception is the recent Apple M1 processor and its Firestorm cores.
It should be noted that for systems that do support SMT, it can be disabled to force it down to one thread per core, to run in SMT1 mode. This has a few major benefits:
- It gives each thread access to a full core’s worth of resources. With two threads on the same core, shared resources can cause additional unintended latency, which matters for latency-critical workloads where deterministic (consistent) performance is required.
- It reduces the number of threads competing for L3 capacity, should that be a limiting factor.
- Should any software need to probe every other thread for data, on a 16-core processor like the 5950X each thread reaches out to only 15 other threads rather than 31, reducing potential crosstalk limited by core-to-core connectivity.
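To put numbers on that last point, here is a quick sketch of how the peer count (and the total number of communicating pairs, for a hypothetical all-to-all pattern) doubles and quadruples respectively when SMT doubles the logical CPU count:

```python
def peers_and_pairs(cores, smt2=True):
    """Logical CPUs, peers each thread must probe, and total thread
    pairs for an all-to-all communication pattern. Assumes SMT2."""
    threads = cores * 2 if smt2 else cores
    peers = threads - 1
    pairs = threads * peers // 2
    return threads, peers, pairs

# 16-core Ryzen 9 5950X, SMT off vs on:
print(peers_and_pairs(16, smt2=False))  # (16, 15, 120)
print(peers_and_pairs(16, smt2=True))   # (32, 31, 496)
```

Going from 120 to 496 thread pairs is why chatty, synchronization-heavy software can see SMT hurt more than it helps.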
The other aspect is power. With a single thread on a core and no other thread to jump in if resources are underutilized, when there is a delay caused by pulling something from main memory, then the power of the core would be lower, providing budget for other cores to ramp up in frequency. This is a bit of a double-edged sword if the core is still at a high voltage while waiting for data in an SMT disabled mode. SMT in this way can help improve performance per Watt, assuming that enabling SMT doesn’t cause competition for resources and arguably longer stalls waiting for data.
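The performance-per-watt trade-off can be sketched with invented numbers (these are illustrative only, not measured Ryzen data): suppose enabling SMT lifts throughput by 30% while package power rises by 10%.

```python
# Hypothetical figures for illustration, not measured Ryzen results.
st_score, st_watts = 100.0, 120.0    # SMT off: baseline score and power
smt_score, smt_watts = 130.0, 132.0  # SMT on: +30% throughput, +10% power

st_eff = st_score / st_watts
smt_eff = smt_score / smt_watts
print(f"ST  perf/W: {st_eff:.3f}")
print(f"SMT perf/W: {smt_eff:.3f} ({smt_eff / st_eff - 1:+.1%})")
```

Under these assumed numbers SMT wins on efficiency by about 18% — but the sign of the result flips if resource contention stretches the workload enough that power rises faster than throughput.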
Mission critical enterprise workloads that require deterministic performance, and some HPC codes that require large amounts of memory per thread often disable SMT on their deployed systems. Consumer workloads are often not as critical (at least in terms of scale and $$$), and so the topic isn’t often covered in detail.
Most modern processors, when in SMT-enabled mode but running only a single instruction stream, will operate as if in SMT-off mode and give that stream full access to resources. Some software takes advantage of this, spawning only one thread for each physical core on the system. Because core structures can be dynamically partitioned (resources adjusted per thread while threads are in progress) or statically shared (set before a workload starts), situations where the two threads on a core create their own bottleneck would benefit from having only a single thread per core active. Knowing how a workload uses a core helps when designing software meant to scale across multiple cores.
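A minimal sketch of the "one thread per physical core" approach. The `physical_core_estimate` helper here is hypothetical: it assumes an SMT2 system and simply halves the logical CPU count, whereas robust detection needs a library such as psutil or platform-specific queries.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def physical_core_estimate(logical=None, threads_per_core=2):
    """Estimate physical cores from the logical CPU count, assuming
    SMT2; pass threads_per_core=1 for an SMT-off system. (Hypothetical
    helper: real code should query the topology properly.)"""
    logical = logical or os.cpu_count() or 1
    return max(1, logical // threads_per_core)

# A 5950X with SMT on reports 32 logical CPUs -> 16 workers.
workers = physical_core_estimate(logical=32)
print(workers)  # 16

# Size the pool to physical cores so paired SMT threads don't compete.
pool = ThreadPoolExecutor(max_workers=workers)
pool.shutdown()
```

Sizing the pool this way trades peak throughput for less intra-core contention, which suits workloads where two sibling threads would bottleneck on shared structures.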
Here is an example of a Zen3 core, showing all the structures. One of the progress points with every new generation of hardware is to reduce the number of statically allocated structures within a core, as dynamic structures often give the best flexibility and peak performance. In the case of Zen3, only three structures are still statically partitioned: the store queue, the retire queue, and the micro-op queue. This is the same as Zen2.
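The difference between those two allocation schemes can be shown with a toy model (illustrative only: hardware queues are obviously not Python objects, and the 96-entry size reuses the hypothetical buffer from earlier, not a real Zen 3 queue size).

```python
class StaticQueue:
    """Statically partitioned: each thread owns a fixed share of the
    entries, even when the other thread is idle (as with Zen 3's store,
    retire, and micro-op queues)."""
    def __init__(self, entries, threads=2):
        self.per_thread = entries // threads

    def capacity(self, active_threads):
        return self.per_thread        # an idle partner's share sits unused

class DynamicQueue:
    """Dynamically shared: entries are handed out on demand, so a lone
    thread can use the whole structure."""
    def __init__(self, entries):
        self.entries = entries

    def capacity(self, active_threads):
        return self.entries // active_threads

print(StaticQueue(96).capacity(1), DynamicQueue(96).capacity(1))  # 48 96
print(StaticQueue(96).capacity(2), DynamicQueue(96).capacity(2))  # 48 48
```

With both threads active the two schemes look identical; the dynamic design only pulls ahead when one thread runs alone, which is exactly why each generation tries to convert static structures to dynamic ones.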
SMT on AMD Zen3 and Ryzen 5000
Much like AMD’s previous Zen-based processors, the Ryzen 5000 series with Zen3 cores also has an SMT2 design. By default this is enabled in every consumer BIOS, although users can choose to disable it through the firmware options.
For this article, we have run our AMD Ryzen 9 5950X processor, a 16-core high-performance Zen3 processor, in both SMT Off and SMT On modes through our test suite and through some industry standard benchmarks. The goal of these tests is to answer the following questions:
- Is there a single-thread benefit to disabling SMT?
- How much performance increase does enabling SMT provide?
- Is there a change in performance per watt in enabling SMT?
- Does having SMT enabled result in a higher workload latency?*
*more important for enterprise/database/AI workloads
The best argument for enabling SMT would be a No-Lots-Yes-No result. Conversely the best argument against SMT would be a Yes-None-No-Yes. But because the core structures were built with having SMT enabled in mind, the answers are rarely that clear.
Test System
For our test suite, due to obtaining new 32 GB DDR4-3200 memory modules for Ryzen testing, we re-ran our standard test suite on the Ryzen 9 5950X with SMT On and SMT Off. As per our usual testing methodology, we test memory at official rated JEDEC specifications for each processor at hand.
Test Setup

| Component | Details |
|---|---|
| CPU | AMD Ryzen 9 5950X (Socket AM4) |
| Motherboard | MSI X570 Godlike (BIOS 1.B3T13, AGESA 1100) |
| CPU Cooler | Noctua NH-U12S |
| Memory | ADATA 4x32 GB DDR4-3200 |
| GPU | Sapphire RX 460 2GB (CPU tests), NVIDIA RTX 2080 Ti |
| PSU | OCZ 1250W Gold |
| SSD | Crucial MX500 2TB |
| OS | Windows 10 x64 1909, Spectre and Meltdown patched |
| VRM Cooling | Supplemented with Silverstone SST-FHP141-VF 173 CFM fans |
Also many thanks to the companies that have donated hardware for our test systems, including the following:
Hardware Providers for CPU and Motherboard Reviews

- Sapphire RX 460 Nitro
- NVIDIA RTX 2080 Ti
- Crucial SSDs
- Corsair PSUs
- G.Skill DDR4
- ADATA DDR4-3200 32GB
- Silverstone Ancillaries
- Noctua Coolers
126 Comments
Oxford Guy - Friday, December 4, 2020 - link
Suggestions: Compare with Zen 2 and Zen 1, particularly in games.
Explain SMT vs. CMT. Also, is SMT + CMT possible?
AntonErtl - Sunday, December 6, 2020 - link
CMT has at least two meanings. Sun's UltraSparc T1 has in-order cores that run several threads alternatingly on the functional units. This is probably the closest thing to SMT that makes sense on an in-order core. Combining this with SMT proper makes no sense; if you can execute instructions from different threads in the same cycle, there is no need for an additional mechanism for processing them in alternate cycles. Instruction fetch on some SMT cores processes instructions in alternate cycles, though.
The AMD Bulldozer and family have pairs of cores that share more than cores in other designs share (but less than with SMT): They share the I-cache, front end and FPU. As a result, running code on both cores of a pair is often not as fast as when running it on two cores of different pairs. You can combine this scheme with SMT, but given that it was not such a shining success, I doubt anybody is going to do it.
Looking at roughly contemporary CPUs (Athlon X4 845 3.5GHz Excavator and Core i7 6700K 4.2Ghz Skylake), when running the same application twice one after the other on the same core/thread vs. running it on two cores of the same pair or two threads of the same core, using two cores was faster by a factor 1.65 on the Excavator (so IMO calling them cores is justified), and using two threads was faster by a factor 1.11 on the Skylake. But Skylake was faster by a factor 1.28 with two threads than Excavator with two cores, and by a factor 1.9 when running only a single core/thread, so even on multi-threaded workloads a 4c/8t Skylake can beat an 8c Excavator (but AFAIK Excavators were not built in 8c configurations). The benchmark was running LaTeX.
Oxford Guy - Sunday, December 6, 2020 - link
AMD's design was very inefficient in large part because the company didn't invest much into improving it. The decision was made, for instance, to stall high-performance with Piledriver in favor of a very very long wait for Zen. Excavator was made on a low-quality process and was designed to be cheap to make. Comparing a 2011/2012 design that was bad when it came out with Skylake is a bit of a stretch, in terms of what the basic architectural philosophy is capable of.
I couldn't remember that fourth type (the first being standard multi-die CPU multiprocessing) so thanks for mentioning it (Sun's).
USGroup1 - Saturday, December 5, 2020 - link
So yCruncher is far away from real world use cases and 3DPMavx isn't.

pc8086 - Sunday, December 6, 2020 - link
Many congratulations to Dr. Ian Cutress for the excellent analysis carried out. If possible, it would be extremely interesting to repeat a similar rigorous analysis (at least on the multi-threaded subsection of chosen benchmarks) on the following platforms:
- 5900X (Zen 3, but fewer cores for each chiplet, maybe with more thermal headroom)
- 5800X (Zen 3, only a single computational chiplet, so no inter-CCX latency troubles)
- 3950X (same cores and configuration, but with Zen 2, to check if the new, beefier core improved SMT support)
- 2950X (Threadripper 2, same number of cores but Zen+, with 4 memory channels; useful especially for tests such as AIBench, which have gotten worse with SMT)
- 3960X (Threadripper 3, more cores, but Zen 2 and with 4 memory channels)
Obviously, it would also be interesting to check the impact of Intel HyperThreading on recent Comet Lake, Tiger Lake and Cascade Lake-X.
For the time being, Apple has decided not to use any form of SMT on its own CPUs, so it is useful to fully understand the usefulness of SMT technologies for notebooks, high-end PCs and prosumer platforms.
Thank you very much.
eastcoast_pete - Sunday, December 6, 2020 - link
Thanks Ian! Given some of your comments about memory access limiting performance in some cases, how much additional performance would a quad-channel memory setup give compared to the dual-channel consumer setups (like these, or mine)? Now, I know that servers and actual workstations usually have 4 or more memory channels, and for good reason. So, in the time of 12 and 16 core CPUs, is it time for quad channel memory access for the rest of us, or would that break the bank?

mapesdhs - Thursday, December 10, 2020 - link
That's a good question. As time moves on and we keep getting more cores, with people doing more things that make use of them (such as gaming and streaming at the same time, with browser/tabs open, livechat, perhaps an encode too), perhaps indeed the plethora of cores does need better memory bandwidth and parallelism, but maybe the end user would not yet tolerate the cost. Something I noticed about certain dual-socket S2011 mbds on Aliexpress is that they don't have as many memory channels as they claim, which with two CPUs does hurt performance of even consumer grade tasks such as video encoding:
http://www.sgidepot.co.uk/misc/kllisre_analysis.tx...
bez5dva - Monday, December 7, 2020 - link
Hi Dr. Cutress! Thanks for these interesting tests!
Perhaps SMT is something that could drastically improve more budget CPUs' performance? Your CPU has more than enough shiny cores for these games, but what if you take a Ryzen 3100? I believe the percentage would be different, as it was in my real world case :)
Back then i had 6600k@4500 and in some FPS games with a huge maps and a lot of players (Heroes and Generals; Planetside 2) i started to receive stutters in a tight fights, but when i switched to 6700@4500 it wasn't my case anymore. So i do believe that Hyperthreading worked in my case, cuz my CPUs were identical aside of virtual threads in the last one.
Would be super interesting to have this post updated with results from a cheaper sample 😇
peevee - Monday, December 7, 2020 - link
It is clear that the 16-core Ryzen is power, memory and thermally limited. I bet SMT results on an 8-core Ryzen 7 5800X would be much better for more loads.

naive dev - Tuesday, December 8, 2020 - link
The slide states that Zen 3 decodes 4 instructions/cycle. Are there two independent decoders, each decoding those 4 instructions for a thread? Or is there a single decoder that switches between the program counters of both threads but only decodes instructions of one thread per cycle?