Whole-Drive Fill

This test starts with a freshly-erased drive and fills it with 128kB sequential writes at queue depth 32, recording the write speed for each 1GB segment. This test is not representative of any ordinary client/consumer usage pattern, but it does allow us to observe transitions in the drive's behavior as it fills up. This can allow us to estimate the size of any SLC write cache, and get a sense for how much performance remains on the rare occasions where real-world usage keeps writing data after filling the cache.
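
To make the methodology concrete, here is a minimal Python sketch of the same idea (not the tool used for this review): it writes 128kB blocks sequentially and reports the average speed of each 1GB segment. It runs synchronously at queue depth 1 against a file rather than at QD32 against the raw drive, so treat it purely as an illustration of the measurement.

```python
import os
import sys
import time

BLOCK = 128 * 1024        # 128kB writes, matching the test description
SEGMENT = 1024 ** 3       # record average throughput for each 1GB segment

def fill_and_measure(path, total_bytes):
    """Sequentially fill `path` and print MB/s for every 1GB segment.

    Simplified stand-in for the review's fill test: synchronous writes at
    queue depth 1 instead of asynchronous writes at QD32, targeting a file
    rather than the raw device, so absolute numbers will differ.
    """
    buf = os.urandom(BLOCK)                       # incompressible data
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    written = segment_start = 0
    t0 = time.monotonic()
    try:
        while written < total_bytes:
            os.write(fd, buf)
            written += BLOCK
            if written - segment_start >= SEGMENT:
                os.fsync(fd)                      # flush so timing reflects the drive
                elapsed = time.monotonic() - t0
                mbps = (written - segment_start) / elapsed / 1e6
                print(f"{written // SEGMENT:4d} GB  {mbps:8.1f} MB/s")
                segment_start = written
                t0 = time.monotonic()
    finally:
        os.close(fd)

if __name__ == "__main__":
    # e.g. python fill_test.py /mnt/target/fill.bin 64   (fill 64GB)
    fill_and_measure(sys.argv[1], int(sys.argv[2]) * SEGMENT)
```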

The Sabrent Rocket Q takes the strategy of providing the largest practical SLC cache size, which in this case is a whopping 2TB. The Samsung 870 QVO takes the opposite (and less common for QLC drives) approach of limiting the SLC cache to just 78GB, the same as on the 2TB and 4TB models.

[Charts: Sustained 128kB Sequential Write and Power Efficiency; metrics shown are average throughput for the last 16 GB and overall average throughput]

Both drives maintain fairly steady write performance after their caches run out, but the Sabrent Rocket Q's post-cache write speed is twice as high. Even so, the Rocket Q is still a bit slower than a TLC SATA drive, and manages just a fraction of what's typical for TLC NVMe SSDs.

On paper, Samsung's 92L QLC is capable of a program throughput of 18MB/s per die, and the 8TB 870 QVO has 64 of those dies, for an aggregate theoretical write throughput of over 1GB/s. SLC caching can account for some of the performance loss, but the lack of performance scaling beyond the 2TB model is a controller limitation rather than a NAND limitation. The Rocket Q is affected by a similar limitation, but also benefits from QLC NAND with a considerably higher program throughput of 30MB/s per die.
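
As a quick sanity check on that arithmetic, here is the back-of-envelope calculation using only the per-die throughput and die count quoted above; SLC caching and controller overheads are ignored.

```python
# Theoretical aggregate QLC program throughput for the 8TB 870 QVO,
# using only the figures quoted above; real-world writes are limited
# by the controller well before this point.
dies = 64                  # NAND dies in the 8TB 870 QVO
mb_per_s_per_die = 18      # quoted program throughput of Samsung's 92L QLC

print(f"{dies * mb_per_s_per_die} MB/s")   # 1152 MB/s, i.e. over 1GB/s
```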

Working Set Size

Most mainstream SSDs have enough DRAM to store the entire mapping table that translates logical block addresses into physical flash memory addresses. DRAMless drives only have small buffers to cache a portion of this mapping information. Some NVMe SSDs support the Host Memory Buffer feature and can borrow a piece of the host system's DRAM for this cache rather than needing lots of on-controller memory.

When accessing a logical block whose mapping is not cached, the drive needs to read the mapping from the full table stored on the flash memory before it can read the user data stored at that logical block. This adds extra latency to read operations and in the worst case may double random read latency.
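
As a rough illustration of why misses hurt, the toy model below charges one flash read for a cached mapping and two for an uncached one. The cache size, latency figure, and LRU eviction policy are illustrative assumptions, not details of either drive's firmware.

```python
from collections import OrderedDict

FLASH_READ_US = 60   # illustrative flash read latency; not a measured value

class MappingCache:
    """Toy logical-to-physical (L2P) mapping cache with LRU eviction."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.entries = OrderedDict()        # logical block -> physical address

    def read_latency(self, lba):
        """Simulated latency (in microseconds) of reading one logical block."""
        if lba in self.entries:
            self.entries.move_to_end(lba)   # hit: only the data read is needed
            return FLASH_READ_US
        # Miss: fetch the mapping from the full table on flash, then read the
        # data itself, roughly doubling the read latency as described above.
        self.entries[lba] = "physical address looked up from flash"
        if len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)   # evict the least-recently-used entry
        return 2 * FLASH_READ_US
```

Sweeping the range of logical blocks touched while holding the cache size fixed reproduces the qualitative pattern measured below: flat latency while the working set's mappings fit in the cache, then a step up once they no longer do.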

We can see the effect of the mapping cache's size by performing random reads from different-sized portions of the drive. When reads are confined to a small slice of the drive, we expect the mappings to all fit in the cache; when reads span the entire drive, we expect mostly cache misses.

When performing this test on mainstream drives with a full-sized DRAM cache, we expect performance to be generally constant regardless of the working set size, or for performance to drop only slightly as the working set size increases.
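
For anyone who wants to approximate the sweep themselves, the hypothetical sketch below issues synchronous 4kB random reads confined to progressively larger slices of the drive (the device path is a placeholder). It is not the review's test suite: it runs at queue depth 1 and does not bypass the OS page cache, so absolute IOPS will be far lower than the charts, but the drop-off once the mapping cache overflows should still be visible.

```python
import os
import random
import time

READ_SIZE = 4096          # 4kB random reads
GiB = 1024 ** 3

def random_read_iops(path, working_set_bytes, duration_s=10):
    """Issue random 4kB reads confined to the first `working_set_bytes` of
    `path` (a big file or block device) and return the achieved IOPS."""
    fd = os.open(path, os.O_RDONLY)
    blocks = working_set_bytes // READ_SIZE
    reads = 0
    deadline = time.monotonic() + duration_s
    try:
        while time.monotonic() < deadline:
            os.pread(fd, READ_SIZE, random.randrange(blocks) * READ_SIZE)
            reads += 1
    finally:
        os.close(fd)
    return reads / duration_s

if __name__ == "__main__":
    # Sweep progressively larger working sets; a real run would use O_DIRECT
    # (or drop caches between sizes) so the OS page cache doesn't serve hits.
    for gib in (1, 4, 16, 64):
        print(f"{gib:3d} GiB working set: {random_read_iops('/dev/sdX', gib * GiB):.0f} IOPS")
```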

The Sabrent Rocket Q's random read performance is unusually unsteady at small working set sizes, but levels out at a bit over 8k IOPS for working set sizes of at least 16GB. Reads scattered across the entire drive do show a substantial drop in performance, due to the limited size of the DRAM buffer on this drive.

The Samsung drive has the full 8GB of DRAM and can keep the entire drive's address mapping table in RAM, so its random read performance does not vary with working set size. However, it's clearly slower than the smaller capacities of the 870 QVO; there's some extra overhead in connecting this much flash to a 4-channel controller.

Comments

  • Oxford Guy - Monday, December 7, 2020 - link

    I have three OCZ 240 GB Vertex 2 drives. They're all bricked. Two of them were replacements for bricked drives. One of them bricked within 24 hours of being used. They bricked in four different machines.

    Pure garbage. OCZ pulled a bait and switch, where it substituted 64Gbit NAND dies for the 32Gbit dies the drives were reviewed/tested with and rated for on the box. The horrendously bad SandForce controller choked on the 64Gbit NAND and OCZ never stabilized it with its plethora of firmware spew. The company also didn't include the 240 GB model in its later exchange program even though it was the most expensive in the lineup. SandForce was more interested in protecting the secrets of its garbage design than protecting users from data loss, so the drives would brick as soon as the tiniest problem was encountered, and no tool was ever released to the public to retrieve the data. It was designed to make that impossible for anyone who wasn't in spycraft/forensics or working for a costly drive recovery service. I think there was even an announced partnership between OCZ and a drive recovery company for SandForce drives, which isn't at all suspicious.
  • Oxford Guy - Monday, December 7, 2020 - link

    The SandForce controller also was apparently incompatible with the TRIM command, but customers were never warned about that. So TRIM didn't cause performance to rebound as it should have.
  • UltraWide - Saturday, December 5, 2020 - link

    AMEN for silence. I have a 6 x 8TB NAS and even with 5,400RPM HDDs it's quite loud.
  • TheinsanegamerN - Saturday, December 5, 2020 - link

    I really want to like the slim, and would love one that I could load up with 2TB SATA SSDs in RAID, but they've dragged their feet on a 10G version. 1G or even 2.5G is totally pointless for SSD NASes.
  • bsd228 - Friday, December 4, 2020 - link

    Sequential transfer speed isn't all that matters.

    Two mirrored SSDs on a 10G connection can get you better read performance than any SATA SSD locally, and that storage can be shared across the whole home network.
  • david87600 - Friday, December 4, 2020 - link

    My thoughts exactly. SSD rarely makes sense for NAS.
  • Hulk - Friday, December 4, 2020 - link

    What do we know about the long term data retention of these QLC storage devices?
  • Oxford Guy - Friday, December 4, 2020 - link

    16 voltage states to deal with for QLC. 8 voltage states for TLC. 4 for 2-bit MLC. 2 for SLC.

    More voltage states = bad. The only good thing about QLC is density. Everything else is worse.
  • Spunjji - Monday, December 7, 2020 - link

    It's not entirely worse. More voltage states are more difficult to read, for sure, but they've also begun implementing more robust ECC systems with each new variant of NAND to counteract that.

    I'd trust one of these QLC drives more than I'd trust my old 120GB 840 drive in that regard.
  • Oxford Guy - Tuesday, December 8, 2020 - link

    Apples and oranges. More robust workarounds for a shortcoming aren't the same as the shortcoming not existing.
