Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next, we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour, and we record instantaneous IOPS every second.
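
As a rough illustration of the workload (not the tooling used for the review's numbers), here is a minimal sketch of how the preconditioning and write test could be reproduced on Linux with fio, assuming fio is installed and that the hypothetical /dev/sdX is the drive under test. Running it destroys all data on the device:

    import subprocess

    DEV = "/dev/sdX"  # hypothetical target device; replace with the drive under test

    # Step 1: precondition by filling every user-accessible LBA with sequential data.
    subprocess.run([
        "fio", "--name=precondition", f"--filename={DEV}",
        "--ioengine=libaio", "--direct=1",
        "--rw=write", "--bs=128k", "--iodepth=32",
    ], check=True)

    # Step 2: 4KB random writes across all LBAs at QD32 with fio's default
    # (incompressible) random data for ~35 minutes, logging the average IOPS
    # once per second.
    subprocess.run([
        "fio", "--name=consistency", f"--filename={DEV}",
        "--ioengine=libaio", "--direct=1",
        "--rw=randwrite", "--bs=4k", "--iodepth=32",
        "--time_based", "--runtime=2100",
        "--log_avg_msec=1000", "--write_iops_log=consistency",
    ], check=True)

fio drops the per-second IOPS averages into a consistency_iops.*.log file that can be plotted directly.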

We also test the drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive's behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
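
The arithmetic behind this is simple; a hypothetical helper (not from the article) that computes the writable LBA span for a given amount of added spare area might look like this:

    # Hypothetical helper: restrict the tested LBA span so that a fixed
    # fraction of the drive is never written and effectively serves as
    # extra over-provisioning.
    def test_range_lbas(total_lbas: int, spare_fraction: float) -> int:
        # Only LBAs 0 .. return value - 1 are touched; the rest stays empty.
        return int(total_lbas * (1.0 - spare_fraction))

    # Example: a 240GB drive with 512-byte sectors and 25% added spare area.
    total = 240_000_000_000 // 512          # 468,750,000 LBAs
    print(test_range_lbas(total, 0.25))     # 351562500, i.e. ~75% of the drive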

Each of the three graphs has its own purpose. The first one covers the whole duration of the test on a logarithmic scale. The second and third ones zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the buttons below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: 4KB random write IOPS over the full test duration (log scale). Drives: SanDisk Extreme Pro, SanDisk Extreme II, Intel SSD 730, Intel SSD 530, OCZ Vector 150. Selectable data: Default / 25% Spare Area]

Similar to the Extreme II, the IO consistency is just awesome. SanDisk's firmware design is unique in the sense that instead of pushing high IOPS at the beginning, performance drops to close to 10K IOPS at first and then rises to over 50K, where it stays for a period of time. The higher the capacity, the longer the high-IOPS period: the 960GB Extreme Pro takes ~800 seconds before IOPS drops to 10K (i.e. the drive reaches steady-state). I do not know why SanDisk's behavior is so different (maybe it has something to do with nCache?) but it definitely works well. Furthermore, SanDisk seems to be the only manufacturer that has really nailed IO consistency with a Marvell controller; Crucial/Micron and Plextor have had some difficulties, and their performance is not even close to SanDisk's.

However, I would not say that the Extreme Pro is unique. Both the Intel SSD 730 and the OCZ Vector 150 provide the same or even better performance at steady-state, and with added over-provisioning the difference is even more significant. That is not to say that the Extreme Pro is inconsistent, not at all, but for a pure 4KB random write workload there are drives that offer (slightly) better performance.

[Graph: 4KB random write IOPS from t=1400s, steady-state (log scale). Same drive selection; Default / 25% Spare Area]

[Graph: 4KB random write IOPS from t=1400s, steady-state (linear scale). Same drive selection; Default / 25% Spare Area]

TRIM Validation

To test TRIM, I filled the drive with sequential data and proceeded with 60 minutes of 4KB random writes at a queue depth of 32. I then measured performance with HD Tach after issuing a single TRIM pass to the drive.
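
For reference, on Linux a single whole-drive TRIM pass can be issued with util-linux's blkdiscard. A minimal sketch, assuming the hypothetical /dev/sdX is the drive (this is not the procedure used in the review, and it destroys all data on the device):

    import subprocess

    # blkdiscard (util-linux) discards every sector of the device in one pass,
    # i.e. a whole-drive TRIM. WARNING: destroys all data on the hypothetical
    # /dev/sdX.
    subprocess.run(["blkdiscard", "/dev/sdX"], check=True)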

TRIM clearly works, as the write speed returns to a steady 400MB/s.

Comments

  • fackamato - Monday, June 16, 2014

    Nice. Time to replace the old Intel 320 in RAID0 perhaps.
  • MikeMurphy - Monday, June 16, 2014

    I always like a good review, but I'm finding SSD benchmarks difficult to respect when the real-world difference between this drive and the MX100 will be invisible to most users.
  • Samus - Tuesday, June 17, 2014

    I agree, Ferraris vs. Lamborghinis. Anybody coming from a hard drive or even a last-gen SSD (like an Intel X25) isn't going to notice the difference between a $100 MX100 and a $200 SanDisk Extreme Pro.
  • nathanddrews - Tuesday, June 17, 2014

    No one will notice... except people who can and do distinguish between Ferraris and Lambos. I would imagine that someone who could tell the difference between a WD Velociraptor and a Seagate Barracuda would notice the difference between these two drives. Different users have different needs; that should be obvious.
  • MyrddinE - Tuesday, June 17, 2014

    The issue is that many power users *think* they can tell the difference, but fail to in blind tests. This has been proven true frequently, usually in relation to more subjective domains like audio, but it applies everywhere. Sit a user at two computers, one overclocked 5%, one not, and it's likely that not a single power user will be able to tell without an FPS meter or perf test result.
  • nathanddrews - Tuesday, June 17, 2014

    Depends on what operation is boosted by 5%. If 5% allows you to maintain solid vsync vs dips, then you sure as heck will be able to tell. If 5% is the difference between completing 5% more editing projects in the same amount of time, then people who spend more will see a benefit. There's always a case to be made for measurable improvements.

    I'm sorry, but the audiophile straw man doesn't apply here.
  • Chaser - Tuesday, June 17, 2014

    A 5% performance difference in selective benchmarks using higher-end SSDs WON'T be noticed in real-world user experience. No need to apologize.
  • Kristian Vättö - Tuesday, June 17, 2014

    5%? The difference between 240GB Extreme Pro and 256GB MX100 is more like 162% in the Storage Bench 2013...

    http://www.anandtech.com/bench/product/1240?vs=122...

    Bear in mind that that's real-world IOs played back on the drive, so it's not as synthetic as e.g. Iometer tests are.
  • TheWrongChristian - Wednesday, June 18, 2014

    It's entirely synthetic, even if derived from real trace data.

    As I understand it, the trace is replayed as fast as possible. In the real world, the trace would probably have been collected over a period of hours or days. In those time frames, different levels of "near instantaneous" are all the same if they're too quick for human perception. Consider the microcontroller controlling your washing machine: it does all it needs to do fast enough that adding a 10000% faster CPU won't make your washing clean any quicker.

    Plus, in the real world, other factors come into play. Had the trace been replayed in real time (as in, taking as long to replay as to collect, pauses and all), different drives would do things like background GC, which improves performance of the next burst of activity. A drive that takes 162% as long to replay the trace at full speed may complete the real-time trace within milliseconds of the faster drive. Result: no perceptible difference to the user.
  • Kristian Vättö - Wednesday, June 18, 2014

    Maximum idle time (i.e. when QD=0) is set to 25 seconds; otherwise the trace is played back as it was collected. Sure, that's still not the same as playing it back in real time, but it's still quite a bit of time for the SSD to do GC.
