Intel SSD DC P4510

When Intel launched the DC P4510 earlier this year, our initial review focused on using it to test out Intel's Virtual RAID on CPU (VROC) feature. Now, we're focusing on individual drive performance and adding in power efficiency testing that wasn't practical for multi-drive RAID configurations.

Since the P4510 launched, Intel has announced a new naming scheme for their enterprise SSDs. Their long-term plan is to push a combination of QLC drives for capacity and Optane drives for performance, but TLC drives like the P4510 aren't going away anytime soon and the P4510 is still a current-generation product.

The Intel P4510 is the middle tier of their enterprise NVMe drives with 64-layer 3D NAND. Below it sits the D5-P4320 QLC NAND SSD, and above it is the DC P4610. The Px700 product tier is empty this generation, with the P4610 pitched as the replacement for both the P4600 and the first-generation P3700. With the P4500 and P4600, Intel introduced their second-generation enterprise NVMe controller and paired it with their first-generation (32-layer) 3D NAND. The P4510 is an update to use the current 64-layer 3D TLC.

Intel SSD DC P4510 Specifications

Capacity            1 TB        2 TB        4 TB         8 TB
Form Factor         2.5" 15mm U.2
Interface           PCIe 3.1 x4, NVMe 1.2
Memory              Intel 512Gb 64-layer 3D TLC
Sequential Read     2850 MB/s   3200 MB/s   3000 MB/s    3200 MB/s
Sequential Write    1100 MB/s   2000 MB/s   2900 MB/s    3000 MB/s
Random Read         469k IOPS   624k IOPS   625.5k IOPS  620k IOPS
Random Write        72k IOPS    79k IOPS    113.5k IOPS  139.5k IOPS
Active Power (Max)  10 W        10 W        14 W         16 W
Idle Power          5 W         5 W         5 W          5 W
Write Endurance     1.1 DWPD    0.7 DWPD    0.9 DWPD     1.0 DWPD
Warranty            5 years

With the P4510, we're finally starting to see performance specs that look comparable to high-end consumer SSDs, but unlike consumer drives the P4510 can be expected to maintain this performance indefinitely. The write endurance rating varies a bit with capacity, from a low of 0.7 DWPD up to 1.1 DWPD. Overall, this puts it in the same endurance class as the Samsung 983 DCT, but the P4510 is in a slightly higher class for performance and power consumption.
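For readers who prefer endurance figures in absolute terms, DWPD converts directly to total bytes written over the warranty period. A minimal sketch of the arithmetic, using figures from the table above (the helper name is ours, not Intel's):

# Convert a DWPD (drive writes per day) endurance rating into total
# petabytes written over the warranty period. Figures from the table above.

def dwpd_to_pbw(capacity_tb: float, dwpd: float, warranty_years: int = 5) -> float:
    """Total petabytes written implied by a DWPD rating."""
    days = warranty_years * 365
    return capacity_tb * dwpd * days / 1000  # TB written -> PB written

print(f"2TB @ 0.7 DWPD: {dwpd_to_pbw(2, 0.7):.2f} PBW")  # ~2.56 PBW
print(f"1TB @ 1.1 DWPD: {dwpd_to_pbw(1, 1.1):.2f} PBW")  # ~2.01 PBW

So despite the lower DWPD figure, the 2TB model can absorb more total writes over its life than the 1TB model.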

The P4510 uses a 15mm 2.5" U.2 form factor. Inside are two stacked PCBs, connected by a semi-flexible joint. The 12-channel controller and most of the DRAM are at the bottom of the drive, with thermal pads dissipating heat to the case's heatsink surface. The back side of the primary PCB has room for more DRAM (to enable the 8TB model) and a bit more NAND. But most of the NAND is on the secondary PCB, with 10 packages on each side. Due to the extra thickness of the 15mm form factor, a single large power loss protection capacitor is used instead of an array of smaller caps. The NAND and DRAM on the inner faces of the two PCBs do not get any airflow or thermal pads bridging them to the case. This lack of cooling is one of the major motivators of Intel's Ruler form factor, now standardized as EDSFF.

Intel Optane SSD DC P4800X

Intel's Optane SSD DC P4800X is the flagship model not just of Intel's SSD family, but the entire SSD market. Built around 3D XPoint memory instead of NAND flash, the Optane SSD sets the standard for low latency and high endurance. We first tested the 375GB model through remote access before the P4800X was ready for widespread release, then later reviewed the 750GB model hands-on. More recently, Intel has introduced a 1.5TB model and doubled the write endurance rating of new models to 60 DWPD.

Intel Optane SSD DC P4800X Specifications

Capacity                               375 GB   750 GB   1.5 TB
Form Factor                            PCIe HHHL or 2.5" 15mm U.2
Interface                              PCIe 3.0 x4, NVMe
Controller                             Intel SLL3D
Memory                                 128Gb 20nm Intel 3D XPoint
Typical Latency (R/W)                  <10 µs
Random Read (4 kB) IOPS (QD16)         550,000
Random Read 99.999% Latency            60 µs (QD1), 150 µs (QD16)
Random Write (4 kB) IOPS (QD16)        500,000
Random Write 99.999% Latency           100 µs (QD1), 200 µs (QD16)
Mixed 70/30 (4 kB) Random IOPS (QD16)  500,000
Sequential Read (64 kB)                2400 MB/s
Sequential Write (64 kB)               2000 MB/s
Active Power (Read)                    8 W      10 W     18 W
Active Power (Write)                   13 W     15 W     –
Idle Power                             5 W      6 W      7 W
Endurance                              30 DWPD (60 DWPD for newer units)
Warranty                               5 years


Memblaze PBlaze5

Memblaze is not one of the biggest names in the SSD market, but they supplied two of the most powerful SSDs in this review. Memblaze is one of many companies that offer high-end enterprise SSDs without manufacturing their own memory or controllers. Instead, Memblaze uses the most powerful SSD controllers available on the open market: the Flashtec NVMe controller family, which originated at IDT, was used in one of the first PCIe SSDs, and has since changed hands through a series of corporate acquisitions, to PMC-Sierra, then Microsemi, and now Microchip. Major players like Intel and Samsung have their own controller ASICs that can compete in this product segment, but most other companies go for either the Flashtec controllers or a big Xilinx FPGA. Familiar controller vendors like Marvell and Silicon Motion have been eyeing this market segment, but so far they only offer solutions that more or less pair up two of their smaller 8-channel controllers, rather than monolithic 16- or 32-channel designs.

The PBlaze5 900 series SSDs use the Flashtec NVMe2016 controller, which provides 16 channels for NAND and up to 8 PCIe lanes for the host interface. The PBlaze5 D900 is the U.2 version; it can operate either with a single PCIe 3.0 x4 host interface or in dual-port x2+x2 mode, for systems whose PCIe fabric switches support redundant multipath connections. For our testing, the D900 has a direct PCIe x4 connection to one of the test system's CPUs.
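As an aside, the negotiated link can be verified on a Linux test system through the standard PCI sysfs attributes. A minimal sketch, assuming the drive enumerates as nvme0:

import pathlib

# The nvme class device links back to its underlying PCI device, which
# exposes the standard PCIe link attributes.
dev = pathlib.Path("/sys/class/nvme/nvme0/device")
for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    print(attr, "=", (dev / attr).read_text().strip())

On the direct x4 connection used for our testing, current_link_width should read 4 at 8 GT/s; a D900 running in dual-port mode would instead present each port as a separate x2 link.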

Memblaze PBlaze5 Series Specifications

                          PBlaze5 D900               PBlaze5 C900
Form Factor               2.5" 15mm U.2              HHHL AIC
Interface                 PCIe 3.0 x4                PCIe 3.0 x8
Controller                Microsemi Flashtec PM8607 NVMe2016
Protocol                  NVMe 1.2a
NAND                      Micron 384Gb 32L 3D TLC
Capacities                2 / 3.2 / 4 / 8 TB         2 / 3.2 / 4 / 8 TB
Sequential Read (GB/s)    3.5                        5.3 / 6.0 / 5.9 / 5.5
Sequential Write (GB/s)   2.2 / 3.2 / 3.4 / 3.5      2.2 / 3.2 / 3.8 / 3.8
Random Read (4 kB) IOPS   825k / 835k / 823k         1005k / 1010k / 1001k
Random Write (4 kB) IOPS  255k / 280k / 347k / 328k  235k / 288k / 335k / 348k
Read Latency (4 kB)       94 µs                      93 µs
Write Latency (4 kB)      16 µs                      15 µs
Idle Power                7 W
Operating Power           25 W
Endurance                 3 DWPD
Warranty                  5 years

The PBlaze5 C900 is a half-height half-length add-in card with a PCIe x8 connection, letting it exceed the roughly 4GB/s ceiling of a x4 link: our 4TB sample is rated for 5.9GB/s sequential reads and slightly over 1M random read IOPS.
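The arithmetic behind that ceiling is straightforward: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding, so a x4 link tops out just under 4GB/s even before protocol overhead. A quick back-of-the-envelope check:

# Theoretical PCIe 3.0 link bandwidth, ignoring packet/protocol overhead.
GT_PER_SEC = 8          # PCIe 3.0 raw transfer rate per lane
ENCODING = 128 / 130    # 128b/130b line-code efficiency

def pcie3_gbytes_per_sec(lanes: int) -> float:
    """Usable GB/s for a PCIe 3.0 link of the given width."""
    return GT_PER_SEC * ENCODING * lanes / 8  # bits -> bytes

print(f"x4: {pcie3_gbytes_per_sec(4):.2f} GB/s")  # ~3.94 GB/s
print(f"x8: {pcie3_gbytes_per_sec(8):.2f} GB/s")  # ~7.88 GB/s, headroom for 5.9 GB/s reads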

The PBlaze5 900-series uses Micron's 32-layer 3D TLC NAND. The somewhat awkward 384Gb per-die capacity lends itself well to building a drive with large overprovisioning: these 4TB drives have 6TiB of NAND onboard, plenty to enable a 3 DWPD endurance rating and support very high performance despite the older and slower 3D NAND. Memblaze has also introduced newer PBlaze5 models that move to Micron's 64L TLC, but due to less extreme overprovisioning ratios the drive performance ratings aren't significantly higher.
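Putting rough numbers on that overprovisioning claim (a simplified sketch that ignores ECC, metadata, and bad-block reserves; the 6TiB raw and 4TB usable figures are from the paragraph above):

# Overprovisioning math for the 4TB PBlaze5: 384Gb dies, 6TiB of raw NAND,
# and 4TB (decimal) of user-visible capacity.
DIE_BYTES = 384 / 8 * 2**30   # one 384Gb die = 48 GiB
raw_bytes = 6 * 2**40          # 6 TiB of NAND on board
user_bytes = 4 * 10**12        # 4 TB exposed to the host

dies = raw_bytes / DIE_BYTES
overprovision = (raw_bytes - user_bytes) / user_bytes
print(f"{dies:.0f} dies, {overprovision:.0%} spare area")  # 128 dies, ~65%

That roughly 65% spare area is far beyond the single-digit percentages typical of consumer drives, which is what buys the sustained write performance and the 3 DWPD rating.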

All this performance comes at the cost of power consumption of up to 25W. The C900's add-in card form factor can easily dissipate this much with its large heatsink, but even with the bottleneck of a narrower PCIe x4 link the D900 is still very power-hungry. Memblaze uses a 2.5" 15mm U.2 case design that allows for some airflow through the drive between the two PCBs, though a typical hot-swap backplane won't provide a very clear path for this flow.

Comments

  • ZeDestructor - Friday, January 4, 2019 - link

    Could you do the MemBlaze drives too? I'm really curious how those behave under consumer workloads.
  • mode_13h - Thursday, January 3, 2019 - link

    At 13 ms, the Peak 4k Random Read (Latency) chart is likely showing the overhead of a pair of context switches for 3 of those drives. I'd be surprised if that result were reproducible.
  • Billy Tallis - Thursday, January 3, 2019 - link

    Those tail latencies are the result of far more than just a pair of context switches. The problem with those three drives is that they need really high queue depths to reach full throughput. Since that test used many threads each issuing one IO at a time, tail latencies get much worse once the threads outnumber the (virtual) cores. The 64-thread latencies are reasonable, but the 99.9th and higher percentiles are many times worse for the 96+ thread iterations of the test. (The machine has 72 virtual cores.)

    The only way to max out those drives' throughput while avoiding the thrashing of too many threads is to re-write an application to use fewer threads that issue IO requests in batches with asynchronous APIs (a sketch of the threaded QD1 pattern appears after the comments). That's not always an easy change to make in the real world, and for benchmarking purposes it's an extra variable that I didn't really want to dig into for this review (especially given how it complicates measuring latency).

    I'm comfortable with some of the results being less than ideal as a reflection of how the CPU can sometimes bottleneck the fastest SSDs. Optimizing the benchmarks to reduce CPU usage doesn't necessarily make them more realistic.
  • CheapSushi - Friday, January 4, 2019 - link

    Hey Billy, this is a bit of a tangent, but do you think SSHDs will have any kind of resurgence? There hasn't been a refresh at all. The 2.5" SSHDs max out at about 2TB, I believe, with 8GB of MLC(?) NAND. Now that QLC is being pushed out and with fairly good SLC caching schemes, do you think SSHDs could still fill a gap in price + capacity + performance? Say, at least a modest bump to 6TB of platter with 128GB of QLC/SLC-turbo NAND? Or some kind of increase along those lines? I know most folks don't care about them anymore, but there's still something appealing to me about the combination.
  • leexgx - Friday, January 4, 2019 - link

    SSHDs tend to use MLC. The only ones that have been interesting are the second-gen Toshiba SSHDs, as they use some of the 8GB for write caching (from some basic tests I have seen), whereas Seagate only caches commonly read locations.
  • leexgx - Friday, January 4, 2019 - link

    The page reloading is very annoying.

    I want to test the second-gen Toshiba, but finding the right part number is hard as they use cryptic part numbers.
  • CheapSushi - Friday, January 4, 2019 - link

    Ah, I was not aware of the ones from Toshiba, thanks for the heads up. Write caching seems the way to go for such a setup. Did the WD SSHDs do the same as Seagate's?
  • leexgx - Friday, January 11, 2019 - link

    I have obtained the Toshiba MQ01, MQ02, and their H200 SSHDs, all 500GB, to test whether write caching works (limiting testing to 500MB of writes at the start and seeing how it goes from there).
  • thiagotech - Friday, January 4, 2019 - link

    Can someone help me understand which scenarios count as QD1 and higher? Does anyone have a guide for dummies on what queue depth is? Let's suppose I start Windows and it reads 200 files of 4kB each: is that QD1 or QD64? I ask because I was copying a folder with a large number of tiny files and my Samsung 960 Pro reached about 70MB/s of copy speed, which is a really bad number...
  • Greg100 - Saturday, January 5, 2019 - link

    thiagotech,

    About queue depth during Windows boot, check the last post in this thread: https://forums.anandtech.com/threads/qd-1-workload...

    About optimizing Samsung 960 Pro performance, check "The SSD Reviewers Guide to SSD Optimization 2018" on thessdreview.
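To make the queue-depth mechanics discussed in the comments above concrete, here is a toy sketch of the "many threads, each issuing one synchronous read" load pattern. It is an illustration only: the device path is hypothetical, and this is not the review's actual test harness.

import concurrent.futures, os, random

# Each worker keeps exactly one 4kB random read in flight, so the effective
# queue depth seen by the drive is roughly the number of runnable threads.
DEV = "/dev/nvme0n1"   # hypothetical device path; only test a disposable drive
BLOCK, IOS, THREADS = 4096, 100_000, 96

fd = os.open(DEV, os.O_RDONLY)       # raw block access requires root
size = os.lseek(fd, 0, os.SEEK_END)

def qd1_read(_):
    offset = random.randrange(size // BLOCK) * BLOCK
    os.pread(fd, BLOCK, offset)      # one synchronous IO at QD1 per thread

# With 96 threads on a 72-vCPU machine, throughput saturates but threads
# fight for cores, and 99.9th+ percentile latencies degrade sharply. A real
# benchmark would also use O_DIRECT with aligned buffers (as fio does), or
# batch submissions from fewer threads through an asynchronous API.
with concurrent.futures.ThreadPoolExecutor(max_workers=THREADS) as pool:
    list(pool.map(qd1_read, range(IOS)))
os.close(fd)

Tools like fio express the same idea through their numjobs and iodepth parameters; an asynchronous engine lets a single thread keep many IOs in flight, which is the restructuring described in the comments above.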
