AnandTech Storage Bench - Heavy

Our Heavy storage benchmark is proportionally more write-heavy than The Destroyer, but much shorter overall. The total writes in the Heavy test aren't enough to fill the drive, so performance never drops down to steady state. This test is far more representative of a power user's day-to-day usage, and is heavily influenced by the drive's peak performance. The Heavy workload test details can be found here. This test is run twice, once on a freshly erased drive and once after filling the drive with sequential writes.

ATSB - Heavy (Data Rate)

The Toshiba XG6 brings a healthy boost to the full-drive average data rate on the Heavy test, but only improves performance on the empty-drive test run by about 5% over the XG5. Toshiba is definitely starting to fall behind the fastest high-end drives on this test, but the XG6 is still comfortably ahead of most entry-level NVMe products and more than twice as fast as the Crucial MX500 SATA SSD.

ATSB - Heavy (Average Latency)
ATSB - Heavy (99th Percentile Latency)

The Toshiba XG6 brings very small regressions to the latency scores on the empty-drive test runs, but makes up for them with substantially improved average and 99th percentile latency when the Heavy test is run on a full drive.

ATSB - Heavy (Average Read Latency)
ATSB - Heavy (Average Write Latency)

The slight regression in average latency for the empty-drive test runs comes from an increase in average write latency. Read latency has improved substantially, and write latency for the full-drive test runs doesn't stand out for the XG6 the way it did for the XG5.

ATSB - Heavy (99th Percentile Read Latency)
ATSB - Heavy (99th Percentile Write Latency)

For 99th percentile latency, both read and write performance are slightly worse on the XG6 than on the XG5 when the Heavy test is run on an empty drive. But full-drive latency QoS has improved markedly for both read and write operations.

ATSB - Heavy (Power)

The Toshiba XG6 uses slightly more energy over the course of the Heavy test than the XG5 does when the test is run on an empty drive. The improved full-drive performance helps the XG6 come out ahead on energy usage for that test run. Either way, the XG6's efficiency is comparable to that of SATA drives, and the WD Black is the only other high-end NVMe drive that offers this kind of power efficiency.
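
To put numbers on that reasoning: total energy is average power multiplied by test duration, so a drive that draws slightly more power but finishes the workload sooner can still come out ahead. A minimal sketch with hypothetical figures (these are not measured values from our testing):

```python
# Hypothetical power/duration figures for illustration only;
# not measured values.
drives = {
    "Drive A (full drive)": {"avg_power_w": 2.4, "test_seconds": 5200},
    "Drive B (full drive)": {"avg_power_w": 2.5, "test_seconds": 4400},
}

for name, d in drives.items():
    # Energy (joules) = average power (watts) * duration (seconds)
    joules = d["avg_power_w"] * d["test_seconds"]
    print(f"{name}: {joules:.0f} J")

# Drive B draws marginally more power, but finishes sooner and
# therefore uses less total energy for the run.
```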

Comments

  • Spoelie - Thursday, September 6, 2018 - link

    Two short questions:
    - What happened to the Plextor M9Pe? Performance is hugely different from the review back in March.
    - I know this has already been the case for a year or so, but what happened to the performance consistency graphs? Where can I deduce the same information from?
  • hyno111 - Thursday, September 6, 2018 - link

    The M9Pe had firmware updates; not sure if they were applied or related, though.
  • DanNeely - Thursday, September 6, 2018 - link

    I don't recall the details, but something went wrong with generating the performance consistency data, and the graphs were pulled over concerns that they were no longer valid, pending a fix. If you have the patience to dig through the archive, IIRC the situation was explained in the first review without them.
  • Billy Tallis - Thursday, September 6, 2018 - link

    I think both of those are a result of me switching to a new version of the test suite at the same time that I applied the Spectre/Meltdown patches and re-tested everything. The Windows and Linux installations were updated, and a few tweaks were made to the synthetic test configuration (such as separating the sequential read results according to whether the test data was written sequentially or randomly). I also applied all the drive firmware updates I could find in the April-May timeframe.

    The steady-state random write test as it existed a few years ago is gone for good, because it really doesn't say anything relevant about drives that use SLC caching, which is now basically every consumer SSD (except Optane and Samsung MLC drives). I also wasn't too happy with the standard deviation-based consistency metric, because I don't think a drive should be penalized for occasionally being much faster than normal, only much slower than normal.
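
    To make that concrete, here is a toy comparison (illustrative only, not AnandTech's actual metric): a standard-deviation-based score penalizes a single unusually fast sample just as it would a stall, while a downside-only variant ignores anything faster than normal.

    ```python
    import statistics

    def stddev_consistency(iops):
        # Classic metric: 1 - (stddev / mean). Any deviation from
        # the mean lowers the score, including samples that are
        # *faster* than normal.
        return 1 - statistics.pstdev(iops) / statistics.mean(iops)

    def downside_consistency(iops):
        # Only samples *slower* than the mean count against the drive.
        mean = statistics.mean(iops)
        downside = [(mean - x) ** 2 for x in iops if x < mean]
        semidev = (sum(downside) / len(iops)) ** 0.5
        return 1 - semidev / mean

    samples = [10_000] * 59 + [30_000]  # steady drive, one fast burst
    print(stddev_consistency(samples))    # ~0.75: dinged for the burst
    print(downside_consistency(samples))  # ~0.97: barely affected
    ```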

    To judge performance consistency, I prefer to look at the 99th percentile latencies for the ATSB real-world workload traces. Those tend to clearly identify which drives are subject to stuttering performance under load, without exaggerating things as much as an hour-long steady-state torture test.
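
    For reference, the 99th percentile figure is simply the latency below which 99% of the trace's I/Os completed; a minimal sketch, assuming a plain list of per-I/O latencies (illustrative, not the actual ATSB tooling):

    ```python
    def percentile(latencies_us, pct=99.0):
        # Sort all completed-I/O latencies and return the value
        # below which pct percent of operations finished.
        ordered = sorted(latencies_us)
        idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
        return ordered[idx]

    # A drive that stutters shows up here even if its mean looks fine:
    trace = [80] * 990 + [5_000] * 10  # microseconds; 1% slow outliers
    print(percentile(trace))           # 5000 us: the stutter is visible
    ```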

    I may eventually introduce some more QoS measures for the synthetic tests, but at the moment most of them aren't set up to produce meaningful latency statistics. (Testing at a fixed queue depth leads to the coordinated omission problem, potentially drastically understating the severity of things like garbage collection pauses.) At some point I'll also start graphing the performance as a drive is filled, but with the intention of observing things like SLC cache sizes, not for the sake of seeing how the drive behaves when you keep torturing it after it's full.
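
    A toy illustration of the coordinated omission problem (assumed numbers, not the benchmark's actual methodology): a fixed-queue-depth tester records a one-second garbage collection pause as a single slow sample, while an open-loop correction also counts all the requests that should have been issued during the stall.

    ```python
    def fixed_qd_view(service_times_us):
        # Closed-loop (fixed QD) tester: the next I/O is issued only
        # after the previous one completes, so a long pause appears
        # as just one slow sample.
        return list(service_times_us)

    def open_loop_view(service_times_us, interval_us):
        # Open-loop correction: if requests should arrive every
        # interval_us, a stall also delays the I/Os that would have
        # been issued in the meantime; back-fill their wait times.
        corrected = []
        for t in service_times_us:
            corrected.append(t)
            wait = t - interval_us
            while wait > 0:
                corrected.append(wait)
                wait -= interval_us
        return corrected

    def p99(xs):
        s = sorted(xs)
        return s[int(len(s) * 0.99)]

    trace = [100] * 999 + [1_000_000]  # one 1-second GC pause, in us
    print(p99(fixed_qd_view(trace)))        # 100 us: pause hidden
    print(p99(open_loop_view(trace, 100)))  # ~989,000 us: exposed
    ```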

    I will be testing a few consumer SSDs for one of my upcoming enterprise SSD reviews, and that will include steady-state full drive performance for every test.
  • svan1971 - Thursday, September 6, 2018 - link

    I wish current reviews would use current hardware; the 970 Pro replaced the 960 Pro months ago.
  • Billy Tallis - Thursday, September 6, 2018 - link

    I've had trouble getting a sample of that one; Samsung's consumer SSD sampling has been very erratic this year. But the 970 Pro is definitely a different class of product from a mainstream TLC-based drive like the XG6. I would only include 970 Pro results here for the same reason that I include Optane results. They're both products for people who don't really care about price at all. There's no sensible reason to be considering a 970 Pro and an XG6-like retail drive as both potential choices for the same purchasing decision.
  • mapesdhs - Thursday, September 6, 2018 - link

    Please never stop including older models; the comparisons are always useful. Kinda wish the 950 Pro was in there too.
  • Spunjji - Friday, September 7, 2018 - link

    I second this. I know that I am (and suspect most other savvy consumers are) more likely to compare an older high-end product to a newer mid-range product, partly to see if it's worth buying the older gear at a discount and partly to see when there is no performance trade-off in dropping a cost tier.
  • jajig - Friday, September 7, 2018 - link

    I third it. I want to know if an upgrade is worthwhile.
  • dave_the_nerd - Sunday, September 9, 2018 - link

    Very much this. And not all of us upgrade our gear every year or two.
