Miscellaneous Drive Features

Enterprise SSDs can be distinguished from client/consumer SSDs by far more than just their performance profile and price. There is a wide variety of features that enterprise SSDs can implement to improve reliability, security, and ease of management, and the scope of possibilities continues to grow as standards like NVMe evolve. The drives in this review are all relatively 'mainstream' enterprise SSD products that don't target any particular niche requiring obscure features, but there is still some variety in which optional features they provide.

Reliability Features
| Model | Samsung 860 DCT | Samsung 883 DCT | Samsung 983 DCT | Intel DC P4510 | Intel Optane DC P4800X | Memblaze PBlaze5 |
|---|---|---|---|---|---|---|
| Power Loss Protection | No | Yes | Yes | Yes | Yes | Yes |
| T10 Data Integrity | No | No | No | No | Yes | No |
| Multipath IO | No | No | No | No | No | Yes |

Power loss protection is often considered a mandatory feature for a drive to be considered server-grade, but there are many use cases where losing a bit of data during an unexpected power failure isn't a serious concern. The Samsung 860 DCT is still unusual in omitting power loss protection, but this may become more common as low-end enterprise SSDs push into hard drive price territory.

Support for multipath IO and T10 Data Integrity Field are features commonly found on SAS drives, but they have been appearing more often in NVMe drives as that ecosystem matures toward fully replacing SAS. The T10 Data Integrity Field enables end-to-end data protection by augmenting each sector with a few extra bytes of checksum and metadata that are carried along with the normal payload data. This metadata effectively causes the drive's sector size to expand from 512 bytes to 520 or 528 bytes. All of the NVMe drives in this review already support switching between 512-byte and 4kB sector sizes, but only the Optane SSD supports the extended metadata sector formats.
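
For illustration, switching an NVMe namespace between sector formats is done with a Format command; below is a minimal sketch using the nvme-cli tool. The device path and LBA format index are placeholder assumptions, since the index corresponding to a 512+8 protection-information format varies by drive:

```python
import subprocess

DEV = "/dev/nvme0n1"  # placeholder namespace device

# List the LBA formats this namespace supports; each "lbaf" entry shows
# its data size (lbads) and metadata size (ms).
subprocess.run(["nvme", "id-ns", DEV, "--human-readable"], check=True)

# Reformat to a hypothetical 512+8 metadata format with Protection
# Information Type 1 enabled. This destroys all data on the namespace.
subprocess.run(
    ["nvme", "format", DEV, "--lbaf=4", "--pi=1"],  # --lbaf index is an assumption
    check=True,
)
```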

Multipath IO allows a drive to remain accessible even if one of the switches/port expanders or HBAs between it and the host system fails. Support for two port interfaces is standard for SAS drives, impossible for SATA drives, and rare for NVMe drives. The Microsemi Flashtec controller used by the Memblaze PBlaze5 supports dual-port operation, and Memblaze's firmware exposes that capability. This feature isn't useful for drives that are directly attached to CPU PCIe lanes, but is an important high-availability feature for large arrays that rely on PCIe fanout switches (and there are a lot of those).
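
Whether an NVMe controller advertises multi-port capability can be read from the CMIC field of its Identify Controller data. A minimal sketch using nvme-cli's JSON output (the device path is a placeholder):

```python
import json
import subprocess

# Read the Identify Controller structure and check CMIC (Controller
# Multi-Path I/O and Namespace Sharing Capabilities).
out = subprocess.run(
    ["nvme", "id-ctrl", "/dev/nvme0", "--output-format=json"],
    capture_output=True, text=True, check=True,
)
cmic = json.loads(out.stdout)["cmic"]
print("multi-port:", bool(cmic & 0x1))        # bit 0: more than one port
print("multi-controller:", bool(cmic & 0x2))  # bit 1: more than one controller
```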

Security Features
| Model | Samsung 860 DCT | Samsung 883 DCT | Samsung 983 DCT | Intel DC P4510 | Intel Optane DC P4800X | Memblaze PBlaze5 |
|---|---|---|---|---|---|---|
| TCG Opal | No* | No | Yes | No | Yes | No |
| Sanitize | No | Yes | No | No | No | No |

The TCG Opal standard defines a command set for managing self-encrypting drives. Samsung and Crucial are the only two consumer SSD brands that commonly implement TCG Opal, though it was recently revealed that their early implementations suffer from several severe flaws. In the enterprise space, demand for self-encrypting drives is largely confined to certain customer bases that have regulatory obligations to protect customer data. Some market segments actively prefer non-encrypting drives, such as when selling to (or from) certain countries that regulate strong cryptography.

In most cases, SSDs that support TCG Opal can be identified by the presence of a PSID on the drive label. This is a long serial number unique to the drive that can be used to reset and unlock it if the password/keys are forgotten. The PSID cannot be determined by electronically querying the drive, so resetting a drive with the PSID requires physical access to the label. The Samsung 860 DCT's label includes a PSID, but the drive does not respond to TCG Opal commands and is not listed by Samsung as supporting TCG Opal. The Samsung 983 DCT and Intel Optane DC P4800X both implement TCG Opal. (The consumer counterparts of the 983 DCT also support TCG Opal, but the consumer Optane SSDs do not.)
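
For drives that do support Opal, a PSID revert is typically performed with a tool like the open-source sedutil-cli; a hedged sketch follows, where the PSID value and device path are placeholders (and note that the revert destroys all data on the drive):

```python
import subprocess

# A PSID revert erases the drive and resets its security state; the PSID
# must be copied from the physical label, as it cannot be queried.
PSID = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"  # placeholder value from the label
subprocess.run(["sedutil-cli", "--PSIDrevert", PSID, "/dev/nvme0"], check=True)
```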

Sanitize commands were introduced to the ATA, SCSI and NVMe standards as an erase method that comes with stronger guarantees than an ATA Secure Erase command. Sanitize operations are required to purge user data from all caches and buffers and from flash memory that is awaiting garbage collection. A Sanitize operation cannot be cancelled and is required to run to completion and resume after a power loss. Sanitize commands also make it clear whether data is destroyed through block erase operations, overwriting, or destroying the key necessary to decrypt the data. Most SSDs already implement adequate erase operations through ATA Secure Erase or NVMe Format commands, but a few also provide the Sanitize command interface. Among this batch of drives, only the Samsung 883 DCT implements this feature.
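
As a sketch of how this looks in practice on an NVMe drive, nvme-cli exposes the Sanitize command and its status log (the device path is a placeholder, and a block-erase sanitize destroys all user data):

```python
import subprocess

DEV = "/dev/nvme0"  # placeholder controller device

# Start a block-erase sanitize (SANACT=2); it cannot be cancelled and
# will resume after a power loss until complete.
subprocess.run(["nvme", "sanitize", DEV, "--sanact=2"], check=True)

# Poll the sanitize status log; the progress field counts up to 65535.
subprocess.run(["nvme", "sanitize-log", DEV], check=True)
```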

Other NVMe Features
| Model | Samsung 983 DCT | Intel DC P4510 | Intel Optane DC P4800X | Memblaze PBlaze5 |
|---|---|---|---|---|
| Firmware Slots | 2+1 | 1 | 1 | 2+1 |
| Multiple Namespaces | No | No | No | Yes |
| Active Power States | 1 | 1 | 1 | 3 |
| Temperature Sensors | 3 | 1 | 1 | 4 |

The NVMe standard has grown to encompass a wide range of optional features, and the list gets longer every year. NVMe drives can support multiple firmware slots, allowing a new firmware upgrade to be flashed to the drive without overwriting the currently in-use version. The Samsung 983 DCT and Memblaze PBlaze5 both implement three firmware slots, one of which is a permanently read-only fallback.
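
A typical slot-based update flow with nvme-cli might look like the following sketch (the device path, image name, and slot number are assumptions; slot numbering and which slot is read-only vary by drive):

```python
import subprocess

DEV = "/dev/nvme0"  # placeholder controller device

# Stage the new firmware image in the controller's transfer buffer.
subprocess.run(["nvme", "fw-download", DEV, "--fw=new_firmware.bin"], check=True)

# Commit the staged image to slot 2 and mark it for activation on the
# next reset (action 1), leaving the running firmware as a fallback.
subprocess.run(["nvme", "fw-commit", DEV, "--slot=2", "--action=1"], check=True)
```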

The Memblaze PBlaze5 is the first SSD we have tested that implements support for multiple namespaces. At a high level, namespaces are a way of partitioning the drive's storage space. Most interesting use cases involve pairing this feature with something else: for example, support for different sector sizes/formats can allow one namespace to provide T10 Data Integrity Field support while another uses plain 4k sectors. Multiple namespace support also has numerous uses in tandem with virtualization support or NVMe over Fabrics.
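
As a rough sketch of namespace management with nvme-cli (all sizes, IDs, and the controller list here are illustrative assumptions):

```python
import subprocess

DEV = "/dev/nvme0"  # placeholder controller device

# Create a namespace; sizes are in logical blocks, so 26214400 blocks of
# 4KiB is 100GiB. The LBA format index (--flbas) is an assumption.
subprocess.run(
    ["nvme", "create-ns", DEV, "--nsze=26214400", "--ncap=26214400", "--flbas=0"],
    check=True,
)

# Attach namespace 1 to controller 0 so the host can use it.
subprocess.run(
    ["nvme", "attach-ns", DEV, "--namespace-id=1", "--controllers=0"],
    check=True,
)
```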

In client computing, SSD power management is primarily about putting the drive to sleep during idle times. In a server, high wake-up latencies make such sleep states relatively useless, but the baseline idle power consumption of an enterprise SSD without sleep states still contributes to the operating cost of the server. There are also some scenarios where the maximum power draw of an SSD needs to be capped due to limitations on airflow or power delivery. In the client space, this is usually only seen in fanless battery-powered systems. In servers, it can happen if the system design provides less airflow than usual for a particular form factor, or if the rack as a whole is pushing the limits of what the datacenter can handle. The PBlaze5 is the most power-hungry drive in this bunch, but it provides lower-power states that limit it to 20W or 15W instead of the default 25W.
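
Power states are selected through the NVMe Power Management feature (feature ID 0x02); a minimal nvme-cli sketch follows, where the device path and the mapping of state numbers to wattages are assumptions:

```python
import subprocess

DEV = "/dev/nvme0"  # placeholder controller device

# Feature 0x02 is Power Management; its value selects an entry in the
# drive's power state table (state 0 is the highest-power state).
subprocess.run(["nvme", "get-feature", DEV, "--feature-id=2"], check=True)

# Hypothetically select power state 1 (e.g. a 20W limit, if the drive's
# states map 0->25W, 1->20W, 2->15W as on the PBlaze5).
subprocess.run(["nvme", "set-feature", DEV, "--feature-id=2", "--value=1"], check=True)
```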

The PBlaze5 and the Samsung 983 DCT both provide access to multiple temperature sensors. These are also aggregated in a drive-specific way to produce a composite temperature readout that indicates how close the drive is to its thermal throttling threshold(s). The Intel drives only report the composite temperature.
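
These readings are exposed through the standard SMART / Health Information log; a one-line nvme-cli sketch (the device path is a placeholder):

```python
import subprocess

# The SMART / Health log reports the composite temperature along with any
# of the optional Temperature Sensor 1-8 readings the drive implements.
subprocess.run(["nvme", "smart-log", "/dev/nvme0"], check=True)
```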

Comments

  • ZeDestructor - Friday, January 4, 2019 - link

    Could you do the MemBlaze drives too? I'm really curious how those behave under consumer workloads.
  • mode_13h - Thursday, January 3, 2019 - link

    At 13 ms, the Peak 4k Random Read (Latency) chart is likely showing the overhead of a pair of context switches for 3 of those drives. I'd be surprised if that result were reproducible.
  • Billy Tallis - Thursday, January 3, 2019 - link

    Those tail latencies are the result of far more than just a pair of context switches. The problem with those three drives is that they need really high queue depths to reach full throughput. Since that test used many threads each issuing one IO at a time, tail latencies get much worse once the number of threads outnumbers the number of (virtual) cores. The 64-thread latencies are reasonable, but the 99.9th and higher percentiles are many times worse for the 96+ thread iterations of the test. (The machine has 72 virtual cores.)

    The only way to max out those drives' throughput while avoiding the thrashing of too many threads is to rewrite the application to use fewer threads issuing IO requests in batches through asynchronous APIs. That's not always an easy change to make in the real world, and for benchmarking purposes it's an extra variable that I didn't really want to dig into for this review (especially given how it complicates measuring latency).

    I'm comfortable with some of the results being less than ideal as a reflection of how the CPU can sometimes bottleneck the fastest SSDs. Optimizing the benchmarks to reduce CPU usage doesn't necessarily make them more realistic.
  • CheapSushi - Friday, January 4, 2019 - link

    Hey Billy, this is a bit of a tangent, but do you think SSHDs will have any kind of resurgence? There hasn't been a refresh at all. The 2.5" SSHDs max out at about 2TB, I believe, with 8GB of MLC(?) NAND. Now that QLC is being pushed out and with fairly good SLC caching schemes, do you think SSHDs could still fill a gap in price + capacity + performance? Say, at least a modest bump to 6TB of platter with 128GB of QLC/SLC-turbo NAND? Or some kind of increase along those lines? I know most folks don't care about them anymore, but there's still something appealing to me about the combination.
  • leexgx - Friday, January 4, 2019 - link

    SSHDs tend to use MLC. The only interesting ones have been the Toshiba second-gen SSHDs, as they use some of the 8GB for write caching (from some basic tests I have seen), whereas Seagate only caches commonly read locations.
  • leexgx - Friday, January 4, 2019 - link

    The page reloading is very annoying.

    I want to test the second-gen Toshiba, but finding the right part number is difficult since they use cryptic part numbers.
  • CheapSushi - Friday, January 4, 2019 - link

    Ah, I was not aware of the ones from Toshiba, thanks for the heads up. Write caching seems the way to go for such a setup. Did the WD SSHD's do the same as Seagates?
  • leexgx - Friday, January 11, 2019 - link

    I have obtained the Toshiba MQ01, MQ02, and their H200 SSHDs, all 500GB, to test whether write caching works (limiting testing to 500MB of writes at the start and seeing how it goes from there).
  • thiagotech - Friday, January 4, 2019 - link

    Can someone help me understand which scenarios are considered QD1 and higher? Does anyone have a guide for dummies on what queue depth is? Let's suppose I start Windows and there are 200 files of 4K each; is that QD1 or QD64? I ask because I was copying a folder with a large number of tiny files and my Samsung 960 Pro reached only about 70MB/s copy speed, which is a really bad number...
  • Greg100 - Saturday, January 5, 2019 - link

    thiagotech,

    For queue depth during Windows boot, check the last post in this thread: https://forums.anandtech.com/threads/qd-1-workload...

    For optimizing Samsung 960 Pro performance, check "The SSD Reviewers Guide to SSD Optimization 2018" on thessdreview.
