Random Read Performance

Our first test of random read performance uses very short bursts of operations issued one at a time with no queuing. The drives are given enough idle time between bursts to yield an overall duty cycle of 20%, so thermal throttling is not a factor. Each burst consists of a total of 32MB of 4kB random reads from a 16GB span of the disk. The total data read is 1GB.
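
For illustration, the structure of this burst test can be sketched in a few lines of Python. This is a hedged sketch, not our actual test harness: it assumes a pre-filled scratch file at a hypothetical path and omits O_DIRECT, so the OS page cache would inflate the numbers on real hardware.

    import os
    import random
    import time

    TEST_FILE = "/tmp/ssd_span.bin"   # hypothetical pre-filled scratch file, at least 16GB
    SPAN_BYTES = 16 * 1024**3         # 16GB span of the disk
    BURST_BYTES = 32 * 1024**2        # 32MB of reads per burst
    BLOCK = 4096                      # 4kB random reads

    def run_burst(fd):
        """One burst of 4kB random reads issued one at a time (queue depth 1)."""
        start = time.perf_counter()
        for _ in range(BURST_BYTES // BLOCK):
            offset = random.randrange(SPAN_BYTES // BLOCK) * BLOCK
            os.pread(fd, BLOCK, offset)
        return time.perf_counter() - start

    # 32 bursts x 32MB = 1GB of data read in total.
    fd = os.open(TEST_FILE, os.O_RDONLY)
    for _ in range(32):
        busy = run_burst(fd)
        print(f"burst: {BURST_BYTES / busy / 1e6:.1f} MB/s")
        time.sleep(busy * 4)          # idle 4x the busy time for a ~20% duty cycle
    os.close(fd)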

Burst 4kB Random Read (Queue Depth 1)

The burst random read performance of the Plextor M9Pe is good at either of the two tested capacities. Only a handful of flash-based SSDs outperform the 1TB M9Pe, and the 512GB model is just over 5% slower.

Our sustained random read performance test is similar to the random read test from our 2015 test suite: queue depths from 1 to 32 are tested, and the average performance and power efficiency across QD1, QD2 and QD4 are reported as the primary scores. Each queue depth is tested for one minute or 32GB of data transferred, whichever is shorter. After each queue depth is tested, the drive is given up to one minute to cool off so that the higher queue depths are unlikely to be affected by accumulated heat. The individual read operations are again 4kB, and cover a 64GB span of the drive.
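
The queue depth sweep can likewise be sketched as a thin wrapper around fio. The device path, job name, and use of fio itself are illustrative assumptions rather than our actual scripts, but the per-step limits and the scoring follow the description above.

    import json
    import statistics
    import subprocess
    import time

    DEVICE = "/dev/nvme0n1"   # hypothetical target device; adjust before use

    def fio_randread(qd):
        """One queue-depth step: 4kB random reads for up to 60s or 32GB over a 64GB span."""
        cmd = ["fio", "--name=randread", f"--filename={DEVICE}",
               "--rw=randread", "--bs=4k", f"--iodepth={qd}",
               "--ioengine=libaio", "--direct=1",
               "--size=64g", "--io_size=32g", "--runtime=60",
               "--output-format=json"]
        out = json.loads(subprocess.run(cmd, capture_output=True, text=True).stdout)
        return out["jobs"][0]["read"]["bw"] / 1024   # KiB/s -> MiB/s

    speeds = {}
    for qd in (1, 2, 4, 8, 16, 32):
        speeds[qd] = fio_randread(qd)
        time.sleep(60)   # cool-off window between queue depths

    print("primary score (average of QD1/QD2/QD4): "
          f"{statistics.mean(speeds[qd] for qd in (1, 2, 4)):.1f} MiB/s")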

Sustained 4kB Random Read

The sustained random read speeds of the M9Pe are better than most TLC drives, though the Samsung 970 EVO is a bit faster still. The MLC-based M8Pe is also slightly faster than the M9Pe on this test.

Sustained 4kB Random Read (Power Efficiency in MB/s/W)

The power efficiency of the Plextor M9Pe during random reads is quite poor, but it's only slightly worse than the Samsung 970 EVO and the previous generation of Plextor drives.

At low queue depths, the random read performance of the two tested capacities of the Plextor M9Pe is nearly identical, and the larger model doesn't gain a significant lead until queue depths are well above 8.

Random Write Performance

Our test of random write burst performance is structured similarly to the random read burst test, but each burst is only 4MB and the total test length is 128MB. The 4kB random write operations are distributed over a 16GB span of the drive, and the operations are issued one at a time with no queuing.
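
A write-side sketch follows the same pattern as the read burst code above, swapping pread for pwrite. The scratch-file path and the use of O_DSYNC on Linux (to keep the OS write-back cache from absorbing the bursts) are illustrative assumptions; never point this at a drive holding data you care about.

    import os
    import random
    import time

    TEST_FILE = "/tmp/ssd_span.bin"   # hypothetical scratch file covering a 16GB span
    SPAN_BYTES = 16 * 1024**3
    BURST_BYTES = 4 * 1024**2         # 4MB written per burst
    BLOCK = 4096
    DATA = os.urandom(BLOCK)          # incompressible 4kB payload

    fd = os.open(TEST_FILE, os.O_WRONLY | os.O_DSYNC)
    for _ in range(32):               # 32 bursts x 4MB = 128MB written in total
        start = time.perf_counter()
        for _ in range(BURST_BYTES // BLOCK):
            offset = random.randrange(SPAN_BYTES // BLOCK) * BLOCK
            os.pwrite(fd, DATA, offset)
        busy = time.perf_counter() - start
        print(f"burst: {BURST_BYTES / busy / 1e6:.1f} MB/s")
        time.sleep(busy * 4)          # idle between bursts, as in the read burst test
    os.close(fd)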

Burst 4kB Random Write (Queue Depth 1)

The burst random write speeds from the Plextor M9Pe are about average for a high-end NVMe SSD. Several Samsung drives score about the same as the M9Pe. The WD Black and Intel 760p lead with the fastest write caches.

As with the sustained random read test, our sustained 4kB random write test runs for up to one minute or 32GB per queue depth, covering a 64GB span of the drive. The drive is given up to one minute of idle time between queue depths to allow write caches to be flushed and the drive to cool down.
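
The sustained write sweep reuses the fio-based pattern from the read sweep above, with the idle window between queue depths made explicit. Again, the device path and fio invocation are assumptions for illustration, and random writes to a raw device are destructive.

    import json
    import subprocess
    import time

    DEVICE = "/dev/nvme0n1"   # hypothetical target; do not run against a drive in use

    def fio_randwrite(qd):
        """One queue-depth step: 4kB random writes for up to 60s or 32GB over a 64GB span."""
        cmd = ["fio", "--name=randwrite", f"--filename={DEVICE}",
               "--rw=randwrite", "--bs=4k", f"--iodepth={qd}",
               "--ioengine=libaio", "--direct=1",
               "--size=64g", "--io_size=32g", "--runtime=60",
               "--output-format=json"]
        out = json.loads(subprocess.run(cmd, capture_output=True, text=True).stdout)
        return out["jobs"][0]["write"]["bw"] / 1024   # KiB/s -> MiB/s

    for qd in (1, 2, 4, 8, 16, 32):
        print(f"QD{qd}: {fio_randwrite(qd):.1f} MiB/s")
        time.sleep(60)   # give write caches time to flush and the drive time to cool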

Sustained 4kB Random Write

On the longer random write test, the Plextor drives all deliver very similar performance, though it appears the heatsink helps a little here. The Samsung NVMe drives all perform much better than the Plextor drives, and the WD Black holds the spot as the top flash-based SSD on this test.

Sustained 4kB Random Write (Power Efficiency in MB/s/W)

Even with 3D NAND, the Plextor drives still deliver relatively poor power efficiency during random writes. The controller and DRAM both need to be updated to newer, lower-power designs.

The 512GB M9Pe hits its full random write speed at QD4, while the 1TB model continues to increase performance up to QD8, leaving it with a much higher overall limit. Both capacities are well-behaved once they reach saturation, and don't appear to have major garbage collection issues.

Comments

  • Yuriman - Thursday, May 24, 2018 - link

    Looks like that heatspreader does it a lot of good.
  • peevee - Tuesday, May 29, 2018 - link

    But the price of it? I understand it for $4 on the 256GB model. But why is the same thing closer to $40 on the 1TB?
  • romrunning - Thursday, May 24, 2018 - link

    Regarding the testing platform: "The Windows 10 version will still be 1709, because Microsoft has not yet fixed all the new bugs introduced in the NVMe driver in Windows 10 version 1803."

    If you're referring to the issues with Intel 600p drives in the April Update (version 1803), Microsoft released a new patch (KB4100403) that "Addresses an issue with power regression on systems with NVMe devices from certain vendors."

    So it sounds like you should be able to update Windows to 1803 as long as you include that patch.
  • Billy Tallis - Thursday, May 24, 2018 - link

    That's not the only problem that's been reported with 1803's NVMe driver. I don't trust that they've even found all the new bugs yet, let alone patched them all. And I actually started running the new tests almost a month ago, to try to minimize the interruption to our review schedule.
  • Drazick - Thursday, May 24, 2018 - link

    Are you sure it is Microsoft's issue and not the firmware of those drives?
  • Billy Tallis - Thursday, May 24, 2018 - link

    In the absence of a proper changelog from Microsoft, I assume the new issues are mostly their fault. At the very least, they're responsible for upsetting whatever fragile balance of bugs the SSD manufacturers have achieved by testing against previous versions of Windows 10. I want to freeze my testbed software configuration for at least a year, and there's sufficient reason to consider 1803 as still being essentially beta-quality and thus a bad choice for the 2018 SSD test suite.
  • GeorgeH - Thursday, May 24, 2018 - link

    FWIW that's very reasonable. It's utterly foolish to update to any Windows 10 version until at least 6 months after release (unless your time is worthless and you'd like to do free QA for Microsoft, of course).
  • lmcd - Thursday, May 24, 2018 - link

    Not even close to true. In fact, it's because I value my time that I upgraded to 1803 immediately. 1803 adds the "Windows Hypervisor Platform" to its features, which (as a primary effect) allows Docker for Windows and a buggy-but-usable Xamarin variant of AVD to run side-by-side (along with other Hyper-V images). It's possible we'll even see VirtualBox run on this excellent feature, though I don't know if it's on their roadmap yet.
  • smilingcrow - Friday, May 25, 2018 - link

    Which is an irrelevant feature for most home users so your post is myopic.
  • Death666Angel - Friday, May 25, 2018 - link

    If you are running normal consumer grade hardware, I don't think that is the case.
