The Western Digital WD Black 3D NAND SSD Review: EVO Meets Its Match
by Ganesh T S & Billy Tallis on April 5, 2018 9:45 AM EST
AnandTech Storage Bench - The Destroyer
The Destroyer is an extremely long test replicating the access patterns of very IO-intensive desktop usage. A detailed breakdown can be found in this article. Like real-world usage, the drives do get the occasional break that allows for some background garbage collection and flushing caches, but those idle times are limited to 25ms so that it doesn't take all week to run the test. These AnandTech Storage Bench (ATSB) tests do not involve running the actual applications that generated the workloads, so the scores are relatively insensitive to changes in CPU performance and RAM from our new testbed, but the jump to a newer version of Windows and the newer storage drivers can have an impact.
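The idle-time capping described above can be sketched in a few lines. This is a hypothetical illustration of the technique, not AnandTech's actual replay harness; the function name and trace format are assumptions:

```python
# Sketch of clamping recorded idle gaps in an I/O trace replay, as the
# ATSB tests do with their 25 ms cap. Names and trace format are
# hypothetical, not the actual test harness.
IDLE_CAP_S = 0.025  # 25 ms maximum idle time between replayed I/Os

def capped_idle_times(inter_arrival_gaps_s):
    """Clamp each recorded gap between I/Os to the idle cap."""
    return [min(gap, IDLE_CAP_S) for gap in inter_arrival_gaps_s]

# A multi-second lull in the recorded workload shrinks to 25 ms, so the
# full run finishes in hours instead of days while still leaving short
# windows for garbage collection and cache flushing.
gaps = [0.001, 2.0, 0.0005, 0.5]
print(capped_idle_times(gaps))  # [0.001, 0.025, 0.0005, 0.025]
```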
We quantify performance on this test by reporting the drive's average data throughput, the average latency of the I/O operations, and the total energy used by the drive over the course of the test.
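For clarity, the reported metrics could be computed from per-I/O records along these lines (a minimal sketch with assumed field names; the actual harness's bookkeeping is not public):

```python
import statistics

def summarize(io_bytes, io_latencies_s, wall_time_s, energy_joules):
    """Compute the metrics reported for the ATSB tests: average data
    rate (MB/s), mean and 99th percentile latency (ms), and total
    energy (J). Inputs are hypothetical per-I/O records."""
    data_rate_mbps = sum(io_bytes) / wall_time_s / 1e6
    avg_latency_ms = statistics.mean(io_latencies_s) * 1e3
    # 99th percentile: sort latencies and index 99% of the way in.
    lat_sorted = sorted(io_latencies_s)
    p99_ms = lat_sorted[int(0.99 * (len(lat_sorted) - 1))] * 1e3
    return data_rate_mbps, avg_latency_ms, p99_ms, energy_joules
```

The 99th percentile matters because a drive can post a good average while still stalling badly on a small fraction of I/Os, which is exactly what the consistency discussion below is about.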
The average data rate from the new WD Black on The Destroyer is almost as fast as Samsung's TLC-based 960 EVO and their newer PM981 OEM drive. Where the original WD Black NVMe SSD was clearly a low-end NVMe drive and no faster than SATA SSDs on this test, the new WD Black is competitive at the high end.
The average latencies from the WD Black are competitive with Samsung's TLC drives, and the 99th percentile latencies are the fastest we've seen from any flash-based SSD for this capacity class.
The average read latencies from the WD Black on The Destroyer are as good as any flash-based SSD we've tested. Average write latencies are great but Samsung's top drives are still clearly faster.
The WD Black has the best 99th percentile read latency scores aside from Intel's Optane SSD 900P, but the 99th percentile write latency scores are only in the second tier of drives.
The load power consumption of the new WD Black is a huge improvement over the previous SSD to bear this name. The new model uses less than half as much energy over the course of The Destroyer, putting it in first place slightly ahead of the Toshiba XG5.
Comments
Chaitanya - Thursday, April 5, 2018 - Nice to see some good competition for Samsung products in the SSD space. Would like to see durability testing on these drives.
HStewart - Thursday, April 5, 2018 - Yes, it's nice to have competition in this area, and the important thing to notice here is that a long-time disk drive manufacturer is changing its technology to keep up with changes in storage technology.
Samus - Thursday, April 5, 2018 - Looks like WD's purchase of SanDisk is showing some payoff. If only Toshiba had taken advantage of OCZ's (who purchased Indilinx) in-house talent. The Barefoot controller showed a lot of promise and could have easily been updated to support low power states and TLC NAND. But they shelved it. I don't really know why Toshiba bought OCZ.
haukionkannel - Friday, April 6, 2018 - Indeed! Samsung held performance supremacy for too long, and that let the company push up prices (a natural development, though).
Hopefully this improved competition helps us customers within a reasonable time frame. There has been too much bad news for consumers in recent years where prices are concerned.
XabanakFanatik - Thursday, April 5, 2018 - Whatever happened to performance consistency testing?
Billy Tallis - Thursday, April 5, 2018 - The steady state QD32 random write test doesn't say anything meaningful about how modern SSDs will behave on real client workloads. It used to be a half-decent test before everything was TLC with SLC caching and the potential for thermal throttling on M.2 NVMe drives. Now, it's impossible to run a sustained workload for an hour and claim that it tells you something about how your drive will handle a bursty real world workload. The only purpose that benchmark can serve today is to tell you how suitable a consumer drive is for (ab)use as an enterprise drive.
iter - Thursday, April 5, 2018 - Most of the tests don't say anything meaningful about "how modern SSDs will behave on real client workloads". You can spend 400% more money on storage that will only get you a 4% performance improvement in real world tasks.
So why not omit synthetic tests altogether while you are at it?
Billy Tallis - Thursday, April 5, 2018 - You're alluding to the difference between storage performance and whole system/application performance. A storage benchmark doesn't necessarily give you a direct measurement of whole system or application performance, but done properly it will tell you how the choice of an SSD will affect the portion of your workload that is storage-dependent. Much like Amdahl's law, speeding up storage doesn't affect the non-storage bottlenecks in your workload.
That's not the problem with the steady-state random write test. The problem with the steady state random write test is that real world usage doesn't put the drive in steady state, and the steady state behavior is completely different from the behavior when writing in bursts to the SLC cache. So that benchmark isn't even applicable to the 5% or 1% of your desktop usage that is spent waiting on storage.
On the other hand, I have tried to ensure that the synthetic benchmarks I include actually are representative of real-world client storage workloads, by focusing primarily on low queue depths, limiting the benchmark duration to realistic quantities of data transferred, and giving the drive idle time instead of running everything back to back. Synthetic benchmarks don't have to be the misleading marketing tests designed to produce the biggest numbers possible.
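The burst-versus-steady-state distinction in this exchange can be illustrated with a toy model. All of the numbers and names below are hypothetical, not measurements of any real drive; the point is only the shape of the behavior:

```python
# Toy model of a drive with an SLC write cache (all figures invented).
# Writes land fast while the cache has room, then fall to the native
# TLC rate once it fills; idle time lets background flushing drain it.
class SlcCache:
    def __init__(self, cache_gb=20.0, burst_gbps=2.0,
                 tlc_gbps=0.6, drain_gbps=0.5):
        self.cache_gb = cache_gb    # SLC cache capacity
        self.level_gb = 0.0         # current cache fill level
        self.burst_gbps = burst_gbps
        self.tlc_gbps = tlc_gbps
        self.drain_gbps = drain_gbps

    def write(self, gb):
        """Return seconds needed to absorb `gb` of writes."""
        fast_gb = min(gb, self.cache_gb - self.level_gb)
        slow_gb = gb - fast_gb
        self.level_gb += fast_gb
        return fast_gb / self.burst_gbps + slow_gb / self.tlc_gbps

    def idle(self, seconds):
        """Background flushing empties the cache during idle time."""
        self.level_gb = max(0.0, self.level_gb - seconds * self.drain_gbps)

drive = SlcCache()
burst_s = drive.write(5)       # a 5 GB burst fits in the cache: 2.5 s
drive.idle(30)                 # idle time drains the cache completely
sustained_s = drive.write(60)  # 60 GB sustained overruns it: ~77 s
```

A sustained-write benchmark only ever sees the slow post-cache rate, while a bursty client workload with idle gaps lives almost entirely in the fast region, which is the core of the argument above.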
MrSpadge - Thursday, April 5, 2018 - linkGood answer, Billy. It won't please everyone here, but that's impossible anyway.
iter - Thursday, April 5, 2018 - People do want to see how much time it takes before the cache gives out. Don't presume to know what all people do with their systems.
As I mentioned 99% of the tests are already useless when it comes to indicating overall system performance. 99% of the people don't need anything above mainstream SATA SSD. So your point on excluding that one test is rather moot.
All in all, it seems you are intentionally hiding the weakness of certain products. Not cool. Run the tests, post the numbers, that's what you get paid for; I don't think it is unreasonable to expect that you do your job. Two people pointed out the absence of that test, which is two more than those who explicitly stated they don't care about it, much less have anything against it. Statistically speaking, the test is of interest, and I highly doubt it will kill you to include it.