Precision Boost 2

Exact per-core turbo frequencies for the new processors are determined by AMD’s voltage-frequency scaling functionality, Precision Boost 2. This feature, which we covered extensively in our Ryzen 7 2700X review, relies on available power and current to determine frequency, rather than a discrete look-up table of voltage and frequency pairs based on core loading. Depending on the system’s default capabilities, the frequency and voltage will dynamically shift to use more of the power budget available at any given level of processor loading.

The idea is that the processor can use more of the power budget available to it than with a fixed look-up table, which has to be conservative enough to cover every piece of silicon stamped with that SKU number.

Precision Boost 2 also works in conjunction with XFR2 (eXtreme Frequency Range), which reacts to additional thermal headroom. If there is thermal budget to spare, typically thanks to a top-line cooler, the processor is allowed to draw more power, up to the thermal limit, and gain additional frequency. AMD claims that a good cooler in a low-ambient environment can deliver >10% better performance in selected tests as a result of XFR2.
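As a rough mental model (this is not AMD's actual firmware algorithm; all names, power models, and limit values below are illustrative), the difference from a fixed turbo table can be sketched as the processor picking the highest frequency whose estimated power, current, and thermal cost still fits inside every budget:

```python
# Hypothetical sketch of opportunistic boost in the style of PB2/XFR2.
# The limits and the linear power/current models are placeholders, not
# AMD's real numbers or behavior for any SKU.

def boost_frequency_mhz(power_at_base_w, current_at_base_a, tcase_c,
                        base_mhz=3000, max_boost_mhz=4200,
                        ppt_w=250, tdc_a=160, tjmax_c=68):
    """Pick the highest frequency that fits every budget, instead of
    reading a fixed per-core-count turbo table."""
    freq = max_boost_mhz
    # Step down in 25 MHz increments until all budgets are satisfied.
    while freq > base_mhz:
        # Crude model: power and current scale linearly with frequency.
        est_power = power_at_base_w * (freq / base_mhz)
        est_current = current_at_base_a * (freq / base_mhz)
        # XFR2 angle: a cooler chip (lower tcase_c) keeps boost available.
        if est_power <= ppt_w and est_current <= tdc_a and tcase_c < tjmax_c:
            return freq
        freq -= 25
    return base_mhz

# A lightly loaded, cool chip reaches full boost; a hot chip falls to base.
print(boost_frequency_mhz(50, 40, tcase_c=40))   # full boost
print(boost_frequency_mhz(200, 150, tcase_c=70)) # thermally limited
```

This also illustrates why two otherwise identical chips can land at different frequencies: their power draw at a given voltage differs, so the same budgets bind at different points.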

Ultimately this makes testing Threadripper 2 somewhat difficult. With a fixed turbo table, performance is identical across the varying quality of each bit of silicon, making power consumption the only differentiator. With PB2 and XFR2, no two processors will perform exactly the same. AMD has also hit a bit of a snag with these features, choosing to launch Threadripper 2 in the middle of a European heatwave. Europe is famously short on air conditioning, and when the ambient temperature climbs above 30ºC, those additional performance gains are curtailed. It means that a review from a Nordic publication might see substantially better results than one from the tropics.

Luckily for us, we tested most of our benchmarks in an air-conditioned hotel, thanks to Intel’s Data-Centric Innovation Summit taking place the week before launch.

Precision Boost Overdrive

The new processors also support a feature called Precision Boost Overdrive, which looks at three key limits: package power, thermal design current, and electrical design current. If any of these three areas has additional headroom, the system will attempt to raise both the frequency and the voltage for increased performance. PBO blends the all-core boost of a ‘standard’ overclock with the benefits of stock operation: it keeps the single-core frequency uplift, and Precision Boost can still raise the frequency in mid-sized workloads, both of which are typically lost with a fixed all-core overclock. PBO also retains idle power saving alongside standard performance. PBO is enabled through Ryzen Master.

The three key areas are defined by AMD as follows:

  • Package (CPU) Power, or PPT: The socket power consumption permitted across the voltage rails supplying the socket
  • Thermal Design Current, or TDC: The maximum current that can be delivered by the motherboard voltage regulator after warming to a steady-state temperature
  • Electrical Design Current, or EDC: The maximum current that can be delivered by the motherboard voltage regulator in a peak/spike condition

By extending these limits, PBO gives PB2 more headroom, letting it push the system harder and for longer. AMD quotes PBO as supplying up to +16% performance beyond standard operation.
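To illustrate the relationship (with placeholder numbers, not AMD's actual limits for any SKU), PBO can be thought of as simply widening the three budgets that PB2 operates within, while PB2's logic of backing off at whichever limit binds first stays unchanged:

```python
# Illustrative sketch only: the stock limit values and the uniform 1.3x
# scale factor are hypothetical, not AMD's real figures.

STOCK_LIMITS = {"PPT_W": 250, "TDC_A": 160, "EDC_A": 190}

def apply_pbo(limits, scale=1.3):
    """Return a new limit set with every budget raised by `scale`.
    PB2 itself is unchanged; it just has more room before throttling."""
    return {name: value * scale for name, value in limits.items()}

def binding_limit(telemetry, limits):
    """Return (name, utilization) of the tightest budget. PB2 backs off
    frequency/voltage as any utilization approaches 1.0."""
    util = {name: telemetry[name] / limits[name] for name in limits}
    name = max(util, key=util.get)
    return name, util[name]

# A package-power-limited workload at stock gains headroom under PBO.
telemetry = {"PPT_W": 240, "TDC_A": 120, "EDC_A": 150}
print(binding_limit(telemetry, STOCK_LIMITS))       # PPT is nearly full
print(binding_limit(telemetry, apply_pbo(STOCK_LIMITS)))
```

The design point worth noting: because PBO only moves the fences, idle behavior and light-load boosting still follow the stock algorithm, which is what a fixed all-core overclock gives up.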

AMD also clarifies that PBO pushes the processor beyond its rated specifications and counts as an overclock: any damage incurred as a result will not be covered by the warranty.

StoreMI

Also available with the new Ryzen Threadripper 2 processors is StoreMI, AMD’s caching solution that offers configurable tiered storage for users who want to combine DRAM, SSD, and HDD storage into a single unified volume. The software implementation dynamically moves data between up to 2 GB of DRAM, up to 256 GB of SSD (NVMe or SATA), and a spinning hard drive, to give the best read and write experience when there isn’t enough fast storage to hold everything.

AMD initially offered this software as a $20 add-on to the Ryzen APU platform, then made it free (up to a 256 GB SSD) for the Ryzen 2000-series processors. That offer now extends to Threadripper. AMD cites a best-case improvement of 90% in loading times.
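As a toy illustration of the tiering idea, real StoreMI operates on blocks inside a storage driver, and everything below is simplified and hypothetical: frequently accessed data migrates to the fast tier, and cold data gets evicted back to the slow tier.

```python
# Toy sketch of tiered storage in the spirit of StoreMI. Class, method
# names, and the promotion policy are invented for illustration.
from collections import Counter

class TieredStore:
    def __init__(self, fast_capacity_blocks):
        self.fast_capacity = fast_capacity_blocks
        self.fast_tier = set()   # blocks currently on DRAM/SSD
        self.heat = Counter()    # access counts per block

    def read(self, block):
        """Serve a read, tracking heat and promoting hot blocks."""
        self.heat[block] += 1
        if block in self.fast_tier:
            return "fast"
        self._maybe_promote(block)
        return "slow"            # this access still came from the HDD

    def _maybe_promote(self, block):
        if len(self.fast_tier) < self.fast_capacity:
            self.fast_tier.add(block)
            return
        # Fast tier full: evict the coldest block if this one is hotter.
        coldest = min(self.fast_tier, key=lambda b: self.heat[b])
        if self.heat[block] > self.heat[coldest]:
            self.fast_tier.remove(coldest)
            self.fast_tier.add(block)

store = TieredStore(fast_capacity_blocks=2)
print(store.read(1))  # first access served from the slow tier
print(store.read(1))  # repeat access now hits the fast tier
```

The claimed loading-time gains come from exactly this pattern: game and application launches re-read the same blocks, so after the first run they are served from the SSD/DRAM tier.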


171 Comments


  • T1beriu - Monday, August 13, 2018 - link

    > We confirmed this with AMD, but for the most part the scheduler will load up the cores that are directly attached to memory first, before using the other cores. [...]

    It seems that Tomshardware says the opposite:

    > AMD continues working with Microsoft to route threads to the die with direct-attached memory first, and then spill remaining threads over to the compute dies. Unfortunately, the scheduler currently treats all dies as equal, operating in Round Robin mode. [...] According to AMD, Microsoft has not committed to a timeline for updating its scheduler.
  • Ian Cutress - Monday, August 13, 2018 - link

    Yeah, Paul and I were discussing this. It is a round robin mode, but it's weighted based on available resources, thermal performance, proximity of busy threads, etc.
  • JoeyJoJo123 - Monday, August 13, 2018 - link

    Maybe it's just user error, but all the article pages from Test Setup and Comparison Results through to Going up Against Epyc just have the text "Still writing...". I'm unsure if the article is actually still being written and was meant to be published in this partial state, or if something was possibly lost between writing and upload.

    In any case, kind of crazy how the infinity fabric is consuming so much power. The cores look super-efficient, but if the uncore can get efficiency improvements, that can help the Zen architecture stay even more efficient under load. Intel's uncore consumes a fraction of the wattage, but doesn't scale as well for multiple threads.
  • Ian Cutress - Monday, August 13, 2018 - link

    Still being written. See my comment at the top. Unfortunately travel back and forth from UK to SF bit me over the weekend and I lost a couple of days testing, along with having to take a full benchmark set up with me to SF to test in the hotel room.
  • JoeyJoJo123 - Monday, August 13, 2018 - link

    I understand, take your rest. You don't need to reply to me, I actually saw the reason after I posted.
  • compilerdev2 - Monday, August 13, 2018 - link

    Hi Ian,
    I have some questions about the Chromium compilation benchmark, since I was hoping to get the 2990WX for compiling large C++ apps. What version of Chromium is used? Is the compiler being used Clang-CL or Visual C++? Is the build in debug or release (optimized) mode? If it's release mode with Visual C++, does it use LTCG? (link-time code generation, the equivalent of LTO of gcc/clang). For example, if the build is Visual C++ LTCG, the entire code optimization, code generation and linking is by default limited to 4 threads. Thanks!
  • Ian Cutress - Monday, August 13, 2018 - link

    It's the standard Windows walkthrough available online. So we use a build of Chrome 62 (it was relevant when we pulled), VC++, build in release. It's done in the command line via ninja, and yes it does use LTCG.

    Instructions are here. They might be updated a little from when I wrote the benchmark. Our test is automated to keep consistency.

    https://chromium.googlesource.com/chromium/src/+/m...
  • compilerdev2 - Monday, August 13, 2018 - link

    With LTCG those strange results make sense - it's spending a lot of time on just 4 threads. Actually, the majority of the time is on one thread in the Chromium case; it hits some current limitations of the VC++ compiler regarding CPU/memory usage that make scaling worse for Chromium (but not for smaller programs or for non-LTCG builds). Increasing the number of threads from the default of 4 is possible, but will not help here. The frontend (parsing) work is well parallelized by Ninja, which is probably why the Threadrippers do end up ahead of the faster single-core Intel CPUs. It would be interesting to see the benchmarks without LTCG, or even better, more compilation benchmarks, since these CPUs are really great for C/C++/Rust programmers.
  • Nexus-7 - Monday, August 13, 2018 - link

    Cool write-up on the uncore power usage! I especially enjoyed that part of the article.
  • johnny_boy - Monday, August 13, 2018 - link

    The Phoronix articles are more telling for the sort of workloads a 64-thread processor would be used for.
