Test Bed and Setup

As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer's highest officially supported frequency. This is also typically run at JEDEC subtimings where possible. It is noted that some users are not keen on this policy, stating that sometimes the highest official frequency is quite low, that faster memory is available at a similar price, or that JEDEC timings can hold back performance.

While these comments make sense, ultimately very few users apply memory profiles (either XMP or other) as they require interaction with the BIOS, and most users will fall back on JEDEC-supported speeds - this includes home users as well as businesses that might want to shave off a cent or two from the cost or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules, either at the same time as the review or at a later date.

The Current CPU Test Suite

For our AMD Ryzen 9000 testing, we are using the following test system:

AMD Ryzen 9000 Series (Zen 5) System

CPU: Ryzen 7 9700X ($359) - 8 cores / 16 threads, 65 W TDP
     Ryzen 5 9600X ($279) - 6 cores / 12 threads, 65 W TDP
Motherboard: ASRock X670E Taichi
Memory: 2x16 GB SK Hynix DDR5-5600B CL46
Cooling: MSI MAG Coreliquid E360 360mm AIO
Storage: SK Hynix Platinum P41 2TB PCIe 4.0 x4
Power Supply: MSI A1000G 1000W
GPU: MSI NVIDIA RTX 4080 Gaming X Trio
Operating System: Windows 11 23H2

Our CPU 2024 Suite: What to Expect

We recently updated the CPU test suite for 2023, but we've decided to update it again as we head into 2024. Our new suite has a more diverse selection of tests and benchmarks, focusing on real-world instruction sets and newer encoding and decoding libraries such as AV1, VP9, and HEVC. We have also included a range of AI-focused workloads and benchmarks, as we're seeing a clear shift from manufacturers toward incorporating some form of on-chip AI processing, such as Ryzen AI and Intel's Meteor Lake NPU.

While we've kept some of the more popular benchmarks, such as CineBench R23, we've added Maxon's latest CineBench 2024 benchmark to our test suite. We have also updated to the latest versions (at the time of incorporating the suite) of benchmarks such as Blender, V-Ray, and y-Cruncher.

With our processor reviews, especially on a new generational product such as AMD's Ryzen 9000 series, we also include SPEC2017 data to account for any increases (or decreases) in generational single-threaded and multi-threaded performance. It should be noted that, per the terms of the SPEC license, because our benchmark results are not vetted directly by the SPEC consortium, they are officially classified as ‘estimated’ scores.

We've also carried over some older (but still relevant/enlightening) benchmarks from our CPU 2023 suite. This includes benchmarks such as Dwarf Fortress, Factorio, Dr. Ian Cutress's 3DPMv2 benchmark, and Blender 3.6. We've also kept UL's Procyon office suite in as a more holistic, system-wide test, but we've omitted the AI portion as it isn't meaningful when not testing or utilizing NPUs.

As for gaming, we've updated our suite to include Company of Heroes 3, Cyberpunk 2077, F1 2023, Returnal, and Total War: Warhammer 3. We opt for our usual, time-tested methodology, which includes testing at 720p, 1080p, and 4K.

We've also taken the opportunity to update to NVIDIA's latest generation GeForce RTX 4080 video cards. And a big thank you to MSI for providing the Gaming X Trio cards we're using.

The CPU-focused tests featured specifically in this review are as follows:

Power

  • Peak Power (y-Cruncher using AVX)
  • Power analysis with Cinebench 2024

Office & Web

  • UL Procyon Office: Various office-based tasks using various Microsoft Office applications
  • JetStream 2.1 Benchmark: Measures various levels of web performance within a browser (we use the latest available Chrome)
  • Timed Linux Kernel Compilation: How long it takes to compile a Linux build with the standard settings (a minimal timing sketch follows this list)
  • Timed PHP Compilation: How long it takes to compile PHP
  • Timed Node.js Compilation: Same as above, but with Node.js
  • MariaDB: A MySQL database benchmark using mysqlslap
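
To make the compilation workloads above a little more concrete, below is a minimal sketch of how a timed build can be measured in Python. The source directory name is a placeholder, and this is illustrative rather than our exact harness:

    # Minimal sketch of a timed compilation benchmark (the source tree path is
    # a placeholder; this is illustrative, not our exact test harness).
    import multiprocessing
    import subprocess
    import time

    def timed_build(source_dir: str) -> float:
        """Run 'make' with one job per logical CPU; return wall time in seconds."""
        jobs = multiprocessing.cpu_count()
        start = time.perf_counter()
        subprocess.run(["make", f"-j{jobs}"], cwd=source_dir, check=True)
        return time.perf_counter() - start

    if __name__ == "__main__":
        elapsed = timed_build("linux-6.x")  # hypothetical kernel source directory
        print(f"Build completed in {elapsed:.1f} s")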

Encoding

  • WebP2 Image Encode: Encoding benchmark using the WebP2 format
  • SVT AV1 Encoding: Encoding using AV1 at both 1080p and 4K, at different settings
  • dav1d AV1 Benchmark: A simple AV1 decoding benchmark
  • SVT-HEVC Encoding: Same as SVT AV1, but with HEVC, at both 1080p and 4K
  • SVT-VP9 Encoding: Same as other SVT benchmarks, but using VP9, both at 1080p and 4K
  • FFmpeg 6.0 Benchmark: Benchmarking with x264 and x265 using a live scenario (see the encode-timing sketch after this list)
  • FLAC Audio Encoding: Benchmarking audio encoding from WAV to FLAC
  • 7-Zip: A fabled benchmark we've used before, but updated to the latest version
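
As a rough illustration of how an encode run can be timed from a script, here is a minimal Python sketch driving FFmpeg's libx264 encoder. The input filename and preset are assumptions rather than our actual test parameters:

    # Rough sketch of timing an x264 software encode with FFmpeg (the input
    # file and preset are placeholders, not our exact test settings).
    import subprocess
    import time

    def time_x264_encode(source: str, preset: str = "medium") -> float:
        """Encode 'source' with libx264, discard the output, return seconds taken."""
        cmd = [
            "ffmpeg", "-y",
            "-i", source,          # source clip
            "-c:v", "libx264",     # software H.264 encoder
            "-preset", preset,     # speed/quality trade-off
            "-f", "null", "-",     # discard the output; we only care about time
        ]
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        return time.perf_counter() - start

    if __name__ == "__main__":
        print(f"x264 encode took {time_x264_encode('test_clip_4k.mkv'):.1f} s")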

Rendering

  • Blender 3.6: Popular rendering program
  • CineBench R23: The fabled Cinema4D Rendering engine
  • CineBench 2024: The latest Cinema4D Rendering engine
  • V-Ray: Another popular renderer
  • POV-Ray: The Persistence of Vision Raytracer benchmark

Science & Simulation

  • y-Cruncher 0.8.2.9523: Calculating Pi to 5M digits, both ST and MT
  • 3D Particle Movement v2.1 (Non-AVX + AVX2/AVX512)
  • OpenFOAM: A Computational Fluid Dynamics (CFD) benchmark using the drivaerFastback test case to analyze automotive aerodynamics.
  • Dwarf Fortress 0.44.12: Fantasy world creation and time passage
  • Factorio v1.1.26 Test: A game-based benchmark that is largely consistent for measuring overall CPU and memory performance
  • 3D Mark CPU Profile: Benchmark testing just the CPU with multiple levels of thread usage

AI and Inferencing

  • ONNX Runtime: A Microsoft-developed, open-source machine learning inferencing accelerator (a minimal CPU inference sketch follows this list)
  • DeepSpeech: A Mozilla-based speech-to-text engine benchmark powered by TensorFlow
  • TensorFlow 2.12: A benchmark using the TensorFlow deep learning framework
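
To give a sense of what this kind of CPU inferencing test looks like in practice, here is a minimal ONNX Runtime sketch. The model file and input shape are hypothetical placeholders, not the networks used in the suite:

    # Minimal ONNX Runtime CPU inference sketch (the model path and input shape
    # are hypothetical; the suite uses its own networks and datasets).
    import time
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy image-sized input

    session.run(None, {input_name: batch})  # warm-up pass
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, {input_name: batch})
    elapsed = time.perf_counter() - start
    print(f"{runs / elapsed:.1f} inferences per second")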

We are now using the new games from our 2024 suite, an update that has been long overdue. The games currently in our CPU testing, and those featured in this review, are as follows (a short sketch showing how the average and 95th percentile figures are derived follows the list):

  • Company of Heroes 3: 720p, 1080p, and 4K (both avg and 95th percentile)
  • Cyberpunk 2077: 720p, 1080p, and 4K (both avg and 95th percentile)
  • F1 2023: 720p, 1080p, and 4K (both avg and 95th percentile)
  • Returnal: 720p, 1080p, and 4K (both avg and 95th percentile)
  • Total War: Warhammer 3: 720p, 1080p, and 4K (both avg and 95th percentile)
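
For clarity on how the average and 95th percentile numbers relate, here is a small sketch of how such figures can be derived from a frame-time log. The sample values are made up for illustration; real capture tools produce their own log formats:

    # Sketch of deriving average FPS and a 95th-percentile figure from per-frame
    # times (the sample values are made up; capture tools have their own logs).
    import numpy as np

    def summarize(frame_times_ms: np.ndarray) -> tuple[float, float]:
        """Return (average FPS, 95th-percentile FPS) from per-frame times in ms."""
        avg_fps = 1000.0 / frame_times_ms.mean()
        # The percentile figure reflects the slower frames: take the 95th-percentile
        # frame time and convert it back into an FPS value.
        p95_frame_time = np.percentile(frame_times_ms, 95)
        return avg_fps, 1000.0 / p95_frame_time

    if __name__ == "__main__":
        times = np.array([6.9, 7.1, 7.4, 8.0, 9.5, 12.3])  # example frame times in ms
        avg, p95 = summarize(times)
        print(f"avg: {avg:.1f} FPS, 95th percentile: {p95:.1f} FPS")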

On Intel Woes & Raptor Lake Stability

Even though they're not the focus of today's review, there's no dancing around the subject of Intel's recent chip stability and longevity woes. The long-term stability of the company's high-end Raptor Lake silicon, used to power the 13th and 14th Generation Core desktop series, has come into question based on an increasing number of reports of initially stable chips becoming unstable months and years down the line. These complaints reached a crescendo earlier this year, kicking off a detailed investigation from Intel that is ultimately resulting in new microcode, new suggested motherboard settings, and an extended warranty for Intel's high-end desktop chips.

Intel has narrowed down the issue to “elevated operating voltages,” which, at its heart, stems from a flawed algorithm in Intel’s microcode that requested the wrong voltage. The good news is that Intel will be able to prevent further damage through a new microcode update that will put an end to excessive voltage requests. Pending validation, this microcode is expected to be released in the middle of August.

From what we’re hearing, the performance impact of this microcode patch will be minimal-to-nonexistent. Though it goes without saying that it’s something we’ll want to validate ourselves.

In the meantime, however, that leaves Intel’s Raptor Lake desktop chips slowly singeing themselves to death ahead of the necessary microcode update. So what does this mean for our Ryzen review?

For the moment, we’re essentially in a holding pattern when it comes to Intel chips. While we are reasonably confident that Intel will be able to fix the problem with their microcode update later this month, we don’t know what the performance impact will be. And while it’s incredibly unlikely anyone will be able to toast a new Raptor Lake chip in the span of just a couple of weeks, we cannot in good conscience recommend buying Intel’s high-end Raptor Lake chips until that microcode update is available.

But this problem will eventually get fixed. And in the meantime, we need to provide some kind of performance comparison for the Ryzen 9000 chips against both AMD’s previous-generation parts, as well as AMD’s competitors (spoiler: AMD is already winning, so pulling Intel’s chips now doesn’t do them any favors). Consequently, we’ve decided to include Intel’s chips anyhow, which are being marked with the classic and time-honored anomalous data symbol, the asterisk, to indicate that these are results from unfixed chips.

Hopefully, you’ll agree that this is a fair way to include the data for Intel’s chips so that we can compare their performance, while acknowledging that performance could very well change for the worse here in a matter of weeks.

Microcode fixes aside, there’s also one other change we’re making with Intel chips going forward. Which, although coming from the shadow of a major chip stability scandal, is something we’ve been wanting Intel to address for years now: stock power limits.

As part of their investigation into the Raptor Lake stability issue, Intel finally took their motherboard partners to task for shipping their motherboards with ridiculous out-of-the-box power settings. These elevated settings essentially allowed Intel’s chips to run in their highest boost state at all times, power consumption be damned. Which makes for great benchmarks, but a poor user experience overall when those chips are drawing 400 Watts of power.

The end result of those efforts is that Intel has published official guidance for power settings for motherboard vendors and users alike to follow. These “Intel Default Settings” are Intel’s formal recommendation for BIOS default power settings, and while Intel isn’t forcing anyone to use them, they are strongly encouraging everyone to use them.

The settings, overall, are very reasonable. And ideally, how Intel motherboards should have been shipping all along. This includes keeping current limits and other protective measures enabled, enabling enhanced Thermal Velocity Boost (eTVB), and actually following the TDP guidelines for Intel’s chips.

Intel’s default settings also include multiple potential power delivery profiles. These align mostly to the capabilities of the motherboard, reflecting the fact that high-end boards are typically specifically engineered to allow for higher TDP and current limits. To that end, Intel’s recommendation is always to use the highest profile a board/chip combo can support – Performance for 600K and 700K processors, and Extreme for 900K chips. All settings still adhere to Intel’s PL2 power limits, but Extreme allows for higher currents still, and higher power limits on Intel’s ridiculously binned KS processors.

Going forward, our testing with Intel processors will be following the Intel Default Settings. This means we’ve reined in our Intel chips a bit so that their power consumption is a bit more down to earth, but so is their performance. Though this also means that our Intel performance results going forward are not comparable to previous data; we’re wiping the board and starting from scratch.

Ultimately, this is being done in accordance with our longstanding policy to prefer testing hardware as it operates out of the box, without overclocking or other warranty-breaking changes. Now that Intel (finally) has a proper and reasonable set of default motherboard power settings, we are going to make sure our testing adheres to them, just as we do with other default settings.

Comments

  • Ryan Smith - Wednesday, August 7, 2024 - link

    This is an area where technical specifications and casual nomenclature have drifted apart.

    DDR5 channels are 32-bits each. A DIMM offers two 32-bit channels, for 64-bits altogether.

    So AM5 takes two DIMMs. But it's technically four independent DDR5 channels.
  • Kevin G - Wednesday, August 7, 2024 - link

    This is going to be more divergent with DDR6, as the draft specs have four 16-bit sub-channels for a 64-bit DIMM. However, the DIMM format might only be seen in servers, with consumer products likely moving to a version of CAMM, which would ultimately have eight 16-bit sub-channels for a 128-bit wide CAMM product.
  • phoenix_rizzen - Tuesday, August 13, 2024 - link

    If CAMM is ever going to take off in desktops they're going to have to come up with a vertical-oriented version (similar to how DIMMs are inserted vertically into a motherboard). There's just not enough horizontal space on ATX motherboards for multiple CAMM boards to be attached.

    Would also be nice if someone came up with a vertical M.2 slot for NVMe drives.

    Either that or extended ATX (or larger) motherboards are going to have to make a comeback. :)
  • 'nar - Friday, August 16, 2024 - link

    Get off my lawn! Geez I feel old now. These are "Dual Inline Memory Modules," but otherwise just as Ryan already explained. What threw me was that they've been called dual channel for so long that calling them quad channel now is misleading.

    The DIMMs started back in the x486 days I think, maybe the Pentium? From the 8086, 286, 386, 486, Intel increased the channel width (or the word length that the CPU can process): 8-bit, 16-bit, 32-bit, then 32-bit x2. Processors calculated smaller chunks back then, but have mostly stayed at 32-bit, which is four 8-bit bytes, so it is a 4-byte word that equals 32-bit. The processor is much faster than the memory, so they decided to double up on the data, hence, Dual Inline Memory Modules. Before this they were SIMMs. But I don't believe we got "dual channel" (where we got A and B channels) until 64-bit CPUs, which use twice the data. And quad channel is mostly found in servers and HEDT systems. So, in the end this seems to be a marketing decision meant to confuse people. Sales guys don't need us to understand, just buy, and we all like MOWAR Power eh? Even if we just think it is.
  • Terry_Craig - Wednesday, August 7, 2024 - link

    This architecture has some very serious bottlenecks. It performs slightly better than Zen4, or the same, in almost everything, except where AVX512 is used (DL/AI software), where the performance shoots ahead.

    Disappointed to see such a wide design not deliver what it promised.
  • yeeeeman - Wednesday, August 7, 2024 - link

    yes, exactly
  • Bulat Ziganshin - Wednesday, August 7, 2024 - link

    It was my first thought, but just look at Zen1-4 history. It kept the same width, but increased IPC 1.5x by going deeper (i.e. larger ROB and so on). It's the way AMD reduces their expenses - they increase width once and then slowly make the CPU deeper to get small gains every year. So, I expect that Zen8 or so will be 1.5x faster than Zen4 by finally making it as wide as Apple M1.
  • Bulat Ziganshin - Wednesday, August 7, 2024 - link

    sorry, I meant "Zen8 will be 1.5x faster than Zen4 by making it as DEEP as M1 while keeping the same width as Zen5"
  • Khanan - Wednesday, August 7, 2024 - link

    Did we read the same article? Maybe you shouldn’t comment if you didn’t read or understand the article.

    And if you’re only about games, 5 games are nothing, go for the reviews where 20 games are tested (at least).
  • Bulat Ziganshin - Wednesday, August 7, 2024 - link

    Zen5 has 6 ALUs - 1.5x more than Zen4. Apple M1 also had 6 ALUs, for example, but Zen5 is nowhere near M1 IPC or 1.5x Zen4 IPC. Even in the official benchmarks, IPC improved only by 16% on average.
