Our New Testing Suite for 2018 and 2019

Spectre and Meltdown Hardened

In order to keep our testing up to date, we have to update our software every so often to stay relevant. In these updates we typically move to the latest operating system, the latest patches, the latest software revisions, and the newest graphics drivers, as well as add new tests or remove old ones. As regular readers will know, our CPU testing revolves around an automated test suite, and depending on how the newest software works, the suite either needs to change, be updated, have tests removed, or be rewritten completely. The last time we did a full rewrite, it took the best part of a month, including regression testing (re-running older processors).

One of the key elements of our testing update for 2018 (and 2019) is that our scripts and systems are designed to be hardened for Spectre and Meltdown. This means making sure that all of our BIOSes are updated with the latest microcode, and that our operating system has all the relevant patches in place. In this case we are using Windows 10 x64 Enterprise 1709 with the April security updates, which enforces the Smeltdown (our combined name) mitigations. Users might ask why we are not running Windows 10 x64 RS4, the latest major update – this is due to some new features that are giving uneven results. Rather than spend a few weeks learning how to disable them, we are going ahead with RS3, which has been widely used.

Our previous benchmark suite was split into several segments depending on how each test is usually perceived. Our new test suite follows similar lines, and we group the tests as follows:

  • Power
  • Memory
  • Office
  • System
  • Render
  • Encoding
  • Web
  • Legacy
  • Integrated Gaming
  • CPU Gaming

Depending on the focus of the review, the order of these benchmarks might change, or some may be left out of the main review. All of our data will reside in our benchmark database, Bench, which has a new ‘CPU 2019’ section for all of our new tests.

Within each section, we will have the following tests:

Power

Our power tests consist of running a substantial workload for every thread in the system, and then probing the power registers on the chip to find out details such as core power, package power, DRAM power, IO power, and per-core power. This all depends on how much information is given by the manufacturer of the chip: sometimes a lot, sometimes not at all.

We are currently running POV-Ray as our main power test, as it loads the system heavily and gives very consistent results. In order to limit the number of cores in use for power measurements, we use an affinity mask driven from the command line.
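As a rough illustration (not our actual harness), the sketch below launches a rendering workload pinned to a chosen set of cores; the executable path and arguments are placeholders, and the affinity is applied via psutil rather than the exact mechanism our scripts use.

```python
import subprocess

import psutil  # third-party: pip install psutil


def run_with_affinity(cmd, core_ids):
    """Launch a workload and pin it to the given logical cores.

    psutil's cpu_affinity() wraps SetProcessAffinityMask on Windows, which is
    the same idea as driving an affinity mask from the command line.
    """
    proc = subprocess.Popen(cmd)
    psutil.Process(proc.pid).cpu_affinity(list(core_ids))
    proc.wait()
    return proc.returncode


# Hypothetical example: run a POV-Ray style render on the first four cores;
# stepping through 1, 2, 4, 8... cores builds up a per-core power picture.
run_with_affinity([r"C:\bench\povray\pvengine64.exe", "/EXIT"], range(4))
```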

Memory

These tests involve disabling all turbo modes in the system, forcing it to run at base frequency, and then running both a memory latency checker (Intel’s Memory Latency Checker works equally well on both platforms) and AIDA64 to probe cache bandwidth.
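For readers who want to reproduce the latency side at home, a minimal sketch of driving Intel's Memory Latency Checker from a script is below; the install path is an assumption, the flags follow MLC's published command-line options, and the tool generally needs administrator rights.

```python
import subprocess

MLC = r"C:\bench\mlc\mlc.exe"  # install path is an assumption; adjust as needed


def run_mlc(args):
    """Run Intel's Memory Latency Checker and return its raw text output."""
    result = subprocess.run([MLC] + args, capture_output=True, text=True, check=True)
    return result.stdout


# Unloaded latency and peak bandwidth, matching what we report in this section.
print(run_mlc(["--idle_latency"]))
print(run_mlc(["--max_bandwidth"]))
```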

Office

  • Chromium Compile: Windows VC++ Compile of Chrome 56 (same as 2017)
  • PCMark10: Primary data will be the overview results – subtest results will be in Bench
  • 3DMark Physics: We test every physics sub-test for Bench, and report the major ones (new)
  • GeekBench4: By request (new)
  • SYSmark 2018: Recently released by BAPCo, currently automating it into our suite (new, when feasible)

System

  • Application Load: Time to load GIMP 2.10.4 (new)
  • FCAT: Time to process a 90 second ROTR 1440p recording (same as 2017)
  • 3D Particle Movement: Particle distribution test (same as 2017) – we also have AVX2 and AVX512 versions of this, which may be added later
  • Dolphin 5.0: Console emulation test (same as 2017)
  • DigiCortex: Sea Slug Brain simulation (same as 2017)
  • y-Cruncher v0.7.6: Pi calculation with optimized instruction sets for new CPUs (new)
  • Agisoft Photoscan 1.3.3: 2D image to 3D modelling tool (updated)

Render

  • Corona 1.3: Performance renderer for 3dsMax, Cinema4D (same as 2017)
  • Blender 2.79b: Render of bmw27 on CPU (updated to 2.79b)
  • LuxMark v3.1 C++ and OpenCL: Test of different rendering code paths (same as 2017)
  • POV-Ray 3.7.1: Built-in benchmark (updated)
  • CineBench R15: Older Cinema4D test, will likely remain in Bench (same as 2017)

Encoding

  • 7-zip 1805: Built-in benchmark (updated to v1805)
  • WinRAR 5.60b3: Compression test of directory with video and web files (updated to 5.60b3)
  • AES Encryption: In-memory AES performance. Slightly older test. (same as 2017)
  • Handbrake 1.1.0: Logitech C920 1080p60 input file, transcoded into three formats for streaming/storage (illustrative command lines are shown after this list):
    • 720p60, x264, 6000 kbps CBR, Fast, High Profile
    • 1080p60, x264, 3500 kbps CBR, Faster, Main Profile
    • 1080p60, HEVC, 3500 kbps VBR, Fast, 2-Pass Main Profile
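As a rough guide to these settings, the sketch below drives HandBrakeCLI with approximate equivalents of the three targets. HandBrake has no true CBR switch, so constant bitrate is emulated here with VBV encoder options, and the file paths are placeholders rather than our actual capture.

```python
import subprocess

HANDBRAKE = r"C:\bench\HandBrakeCLI.exe"     # path is an assumption
SOURCE = r"C:\bench\video\c920_1080p60.mp4"  # stand-in for the Logitech C920 capture

# Approximate equivalents of the three transcode targets described above.
jobs = {
    "720p60_x264_fast_high": [
        "-e", "x264", "-b", "6000", "-w", "1280", "-l", "720", "-r", "60",
        "--encoder-preset", "fast", "--encoder-profile", "high",
        "-x", "vbv-maxrate=6000:vbv-bufsize=6000",   # emulates CBR
    ],
    "1080p60_x264_faster_main": [
        "-e", "x264", "-b", "3500", "-w", "1920", "-l", "1080", "-r", "60",
        "--encoder-preset", "faster", "--encoder-profile", "main",
        "-x", "vbv-maxrate=3500:vbv-bufsize=3500",   # emulates CBR
    ],
    "1080p60_hevc_fast_2pass_main": [
        "-e", "x265", "-b", "3500", "-w", "1920", "-l", "1080", "-r", "60",
        "--encoder-preset", "fast", "--encoder-profile", "main", "--two-pass",
    ],
}

for name, args in jobs.items():
    subprocess.run([HANDBRAKE, "-i", SOURCE, "-o", f"{name}.mp4"] + args, check=True)
```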

Web

  • WebXPRT3: The latest WebXPRT test (updated)
  • WebXPRT15: Similar to 3, but slightly older. (same as 2017)
  • Speedometer2: Javascript Framework test (new)
  • Google Octane 2.0: Deprecated but still popular web test (same as 2017)
  • Mozilla Kraken 1.1: Deprecated but still popular web test (same as 2017)

Legacy (same as 2017)

  • 3DPM v1: Older version of 3DPM, very naïve code
  • x264 HD 3.0: Older transcode benchmark
  • Cinebench R11.5 and R10: Representative of different coding methodologies

Linux (when feasible)

When the new suite is in full swing, we wish to return to running LinuxBench 1.0. This was part of our 2016 suite, but was dropped in 2017 as it added an extra layer of complication to our automation. By popular request, we are going to run it again.

Integrated and CPU Gaming

We have recently automated around a dozen games at four different performance levels. A good number of games will have frame time data; however, due to automation complications, some will not. The idea is that we get a good overview of a number of different genres and engines for testing.

For our CPU Gaming tests, we will be running on an NVIDIA GTX 1080. For the non-gaming CPU benchmarks, we use an RX 460, as we now have several units for concurrent testing.

In previous years we tested multiple GPUs on a small number of games – this time around, due to a Twitter poll I ran that came out exactly 50:50, we are doing it the other way around: more games, fewer GPUs.

Scale Up vs Scale Out: Benefits of Automation

One comment we get every now and again is that automation isn’t the best way of testing – there’s a higher barrier to entry, and it limits the tests that can be done. From our perspective, despite taking a little while to program properly (and get it right), automation means we can do several things:

  1. Guarantee consistent breaks between tests for cooldown to occur, rather than variable cooldown times based on ‘if I’m looking at the screen’ (see the sketch after this list)
  2. It allows us to test several systems at once. I currently run five systems in my office (limited by the number of 4K monitors, and by space), which means we can process more hardware at the same time
  3. We can leave tests to run overnight, very useful for a deadline
  4. With a good enough script, tests can be added very easily
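The cooldown point in (1) is easy to see in script form. Below is a minimal, hypothetical runner that enforces a fixed idle period after every test so each one starts from a comparable thermal state; the test commands and the cooldown length are placeholders, not our actual values.

```python
import subprocess
import time

COOLDOWN_SECONDS = 120  # fixed idle gap between tests; the value here is illustrative

# Each entry is a (name, command) pair; the commands below are placeholders.
TESTS = [
    ("povray", [r"C:\bench\povray\pvengine64.exe", "/EXIT"]),
    ("7zip",   [r"C:\bench\7zip\7z.exe", "b"]),
]


def run_suite(tests):
    results = {}
    for name, cmd in tests:
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        results[name] = time.perf_counter() - start
        # The same cooldown after every test, regardless of whether anyone is watching.
        time.sleep(COOLDOWN_SECONDS)
    return results


print(run_suite(TESTS))
```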

Our benchmark suite collates all the results and spits out data to a central storage platform as the tests are running, which I can probe mid-run to update data as it comes through. This also acts as a sanity check in case any of the data looks abnormal.
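As a simple sketch of that idea (the share path, file layout, and values below are all of our own invention for illustration), each system could append results to a central location as they are produced:

```python
import csv
import datetime
import os

RESULTS_SHARE = r"\\bench-server\results"  # hypothetical central share


def record_result(system_id, test_name, score):
    """Append a single result row so the central file can be probed mid-run."""
    path = os.path.join(RESULTS_SHARE, f"{system_id}.csv")
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "test", "score"])
        writer.writerow([datetime.datetime.now().isoformat(), test_name, score])


record_result("testbed-03", "POV-Ray 3.7.1", 1234.5)  # illustrative values only
```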

We do have one major limitation, and that rests on the side of our gaming tests. We are running multiple tests through one Steam account, some of which (like GTA) are online only. As Steam only lets one system play on an account at once, our gaming script probes Steam’s own APIs to determine if we are ‘online’ or not, and to run offline tests until the account is free to be logged in on that system. Depending on the number of games we test that absolutely require online mode, it can be a bit of a bottleneck.
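We have not detailed exactly which of Steam's APIs the script uses, but the public Steam Web API offers one way to do this kind of check: GetPlayerSummaries reports a persona state for an account, so a sketch along these lines (with a placeholder API key and SteamID) could gate the online-only titles:

```python
import requests  # third-party: pip install requests

STEAM_API_KEY = "YOUR_WEB_API_KEY"       # placeholder
SHARED_ACCOUNT_ID = "76561198000000000"  # placeholder 64-bit SteamID


def account_is_online():
    """Check the shared account's persona state via the Steam Web API.

    GetPlayerSummaries returns personastate 0 for offline; anything else means
    the account is signed in somewhere, so online-only games should wait.
    """
    url = "https://api.steampowered.com/ISteamUser/GetPlayerSummaries/v2/"
    resp = requests.get(url, params={"key": STEAM_API_KEY,
                                     "steamids": SHARED_ACCOUNT_ID}, timeout=10)
    resp.raise_for_status()
    players = resp.json()["response"]["players"]
    return bool(players) and players[0].get("personastate", 0) != 0


if account_is_online():
    print("Account busy - run offline-capable titles first")
else:
    print("Account free - safe to start the online-only games")
```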

Benchmark Suite Updates

As always, we do take requests. It helps us understand the workloads that everyone is running and plan accordingly.

A side note on software packages: we have had requests for tests on software such as ANSYS, or other professional grade software. The downside of testing this software is licensing and scale. Most of these companies do not particularly care about us running tests, and state that it is not part of their goals. Others, like Agisoft, are more than willing to help. If you are involved in these software packages, the best way to see us benchmark them is to reach out. We have special versions of software for some of our tests, and if we can get something that works and is relevant to the audience, then we shouldn’t have too much difficulty adding it to the suite.

Comments

  • GTVic - Monday, November 5, 2018 - link

    I'm wondering what the status is on the W-Series. Seems like no updates/launches for over 1 year?
  • CallumS - Monday, November 5, 2018 - link

    These look great for SMB finance/inventory management/ERP applications where low latency and high single thread performance is often most beneficial. Or where software is licensed per core. Particularly if they are available within servers with OK remote management functionality at decent price points. I'd love to be able to recommend 3 or 4 of them, and the presumably upcoming 8 core configuration when it is available, in 1U servers to SMBs rather than Xeon-SP configurations.

    The Intel Xeon-SP configurations are obviously still going to be the best performing and value for a lot of large enterprise/scale workloads, but for smaller organisations and applications only used by under 100 users, having the simplicity (i.e. no NUMA configuration/consideration requirements) and the performance benefits of a leaner configuration would be great. Plus, having 3 or 4 identical servers with SSD drives in RAID1 could dramatically simplify and improve a lot of local hardware related DR capabilities for organisations with moderate budgets and requirements (essentially an unplug of production SSD drives and move to another/test server).

    From a market competition perspective, unfortunately it doesn't look like there are any other decent options for entry level server usage at this price at the moment. The AMD EPYC platform and CPUs are too expensive and at too low clock speeds for a lot of business applications requiring quick response times/low latency and/or licensing per core. And while AMD Ryzen CPUs are great for desktops, particularly where a dedicated GPU was already going to be required, this is actually one area where the Intel solutions can often end up cheaper and better when factoring platform costs - while also having far better support and availability. Therefore, it's really just Intel competing with themselves at the moment and enticing businesses to upgrade/invest. While not hopeful, it would be great if AMD and partners could change this.

    Given that an 8700k in one of my desktops is already quicker than a lot of the 12 to 16 core Intel Xeon-SP configurations that we've also used, even for heavy load tests, due to frequency, latency, and IPC benefits, I'm really looking forward to these CPUs, and the 8 core version, hitting the market. Just the saving in per core licensing costs would probably make it cheaper to buy new servers with these CPUs than to configure new VMs on existing Xeon-SP servers for new setups.
  • Cooe - Tuesday, November 6, 2018 - link

    Uhh... You seem to have entirely forgotten that X399 & Ryzen Threadripper exist. Plenty good single-core performance, but absolutely barnstorming multi-core for the price, ECC support, AND 64x PCIe 3.0 lanes.
  • CallumS - Tuesday, November 6, 2018 - link

    Agreed about Ryzen Threadripper CPUs being great for multi-threaded workloads and also having pretty good single core performance. I didn't forget, it's what will likely be in my next workstation, I just didn't go into that detail for the purpose of brevity.

    For production server purposes, at least a basic remote management interface and support from the major vendors is generally required, though. If we could get a Ryzen Threadripper 2950X or equivalent EPYC CPU with similar frequencies in a 1RU chassis from a major vendor with decent support and management interfaces at a good price, we'd be all over it. Perhaps with the new Zen 2 EPYC CPUs about to be announced, AMD will offer something like it. I certainly hope so.
  • Spunjji - Tuesday, November 6, 2018 - link

    It has nothing to do with what AMD are offering, unfortunately, and everything to do with what system integrators are prepared to put out there. As long as Intel is filling their pockets with plenty of MDF then I wouldn't expect to see anything soon. Hell, HP even took the iLO out of their MicroServer when they switched back to using AMD CPUs because "reasons".
  • CallumS - Tuesday, November 6, 2018 - link

    I think that it's far more likely to be a combination. System integrators still require support from manufacturers/vendors for the products/solutions that they are selling. And both AMD and Intel definitely put in mechanisms/differences to protect product lines/profit. It's not like the major vendors aren't selling EPYC systems now. A new EPYC SKU by AMD with 2950X like performance would in itself provide us with the option for a higher frequency server/EPYC CPU. Given the TDP of the Epyc 7601, it should be quite easy and practical to do from engineering and manufacturing perspectives. Or, alternatively, it should be easy enough for AMD to provide capabilities for, and to encourage, board partners to release 'server' orientated Threadripper boards. Either of which I'd love to see - but would still much prefer higher frequency EPYC SKUs due to memory and platform advantages - particularly with major system integrators already having validated EPYC server platforms.
  • Dusk_Star - Monday, November 5, 2018 - link

    Corsair Ballistix
    4x4GB
    DDR4-2666

    I feel like this should be *Crucial Ballistix* to match the rest of the "Test Setup" table.
  • watersb - Tuesday, November 6, 2018 - link

    Awesome review, many thanks.

    I usually build my systems with ECC DRAM, whenever possible, but that has become a huge pain point over the past few generations.

    I prefer to hear the news on these parts from AnandTech. ServeTheHome is fantastic, but seeing nothing but $10,000+ systems gets a bit discouraging.
  • mkaibear - Tuesday, November 6, 2018 - link

    Can I just say how much the header text (EEEEEEEE) made me laugh?

    Not sure why, think it just appealed to my inner surrealist.

    Cheers!
  • CyrIng - Thursday, November 8, 2018 - link

    Nice review and thanks for the Chromium results, but these are professional processors which, from my PoV, will also be employed in Linux/BSD/database/backend frameworks and so on, where games don't really matter.

    For example, x86 and arm cross compilations such as buildroot would be great to read.

    As an engineer, Windows is out of scope for me.
