AnandTech Storage Bench 2011

Back in 2011 (which seems like so long ago now!), we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. The MOASB, officially called AnandTech Storage Bench 2011 – Heavy Workload, mainly focuses on peak IO performance and basic garbage collection routines. A lot of downloading and application installing happens during the course of this test. Our thinking was that application installs, file copies, downloads and multitasking with all of the above are where you can really notice performance differences between drives. The full description of the Heavy test can be found here, while the Light workload details are here.
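
For readers curious how trace-based playback works in principle, here is a minimal sketch (this is not the actual AnandTech tooling; the CSV trace format, file names and buffered-I/O behavior are all assumptions for illustration):

    # Minimal sketch of trace-based storage benchmarking.
    # Assumes a hypothetical CSV trace with one I/O per line:
    # operation (R or W), byte offset, transfer size in bytes.
    import csv
    import os
    import time

    def replay_trace(trace_path: str, target_path: str) -> float:
        """Replay a captured I/O trace against a target file/device
        and return the average data rate in MB/s."""
        total_bytes = 0
        fd = os.open(target_path, os.O_RDWR)
        # Plain buffered I/O keeps the sketch portable; a real tool
        # would bypass the page cache for accurate device numbers.
        start = time.perf_counter()
        with open(trace_path, newline="") as f:
            for op, offset, size in csv.reader(f):
                offset, size = int(offset), int(size)
                if op == "R":
                    os.pread(fd, size, offset)
                else:
                    os.pwrite(fd, b"\x00" * size, offset)
                total_bytes += size
        elapsed = time.perf_counter() - start
        os.close(fd)
        return total_bytes / elapsed / 1e6

    # Hypothetical usage: replay_trace("heavy_2011.csv", "/dev/sdX")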

Heavy Workload 2011 - Average Data Rate

The Phoenix Blade continues to perform strongly in our 2011 Storage Benches, although the XP941 reclaims its crown as the fastest client drive. My guess is that the XP941 is more optimized for typical client workloads, which the 2011 suites represent, whereas the 2013 workload is much, much heavier and only applies to users with very IO-intensive workloads.

Light Workload 2011 - Average Data Rate

62 Comments

  • Havor - Sunday, December 14, 2014 - link

    What really sucks is that Intel continues attaching the PCH to the host processor through a four-lane DMI 2.0 connection, even on the X99. You only get 2 GB/s of bi-directional throughput.

    So a 3-disk RAID 0 or a 4-disk RAID 5 is all it takes to saturate the DMI connection between chipset and CPU, even though you've got 10x SATA3 connectors.

    At the moment, M.2 and PCIe are the only options for a faster storage solution.

    And for the desktop, only M.2 with native PCIe 3.x x4 will be able to deliver cost-effective solutions, once good SSD controllers are finally developed.
  • alacard - Sunday, December 14, 2014 - link

    You're preaching to the choir on that one. 2GB per second (actually only ~1800MB/s after overhead) divided between 10 SATA ports, 14 USB ports (6 of them USB 3.0), Gigabit LAN, and 8 PCI Express lanes is an absolute joke.
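
    A quick back-of-the-envelope check of the figures in this thread (a sketch; the ~10% protocol-overhead factor and the 550MB/s per-drive figure are assumptions):

        # DMI 2.0 is electrically PCIe 2.0: 5 GT/s per lane, 8b/10b encoded.
        lanes = 4
        gt_per_s = 5e9                             # transfers/s per lane
        raw_bw = lanes * gt_per_s * (8 / 10) / 8   # bytes/s per direction
        print(f"raw DMI 2.0: {raw_bw / 1e9:.1f} GB/s")          # ~2.0 GB/s

        usable = raw_bw * 0.9            # assume ~10% protocol overhead
        sata_ssd = 550e6                 # assumed fast SATA 6Gbps SSD
        print(f"usable: {usable / 1e6:.0f} MB/s")               # ~1800 MB/s
        print(f"SSDs to saturate it: {usable / sata_ssd:.1f}")  # ~3.3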
  • TheWrongChristian - Monday, December 15, 2014 - link

    What you're missing is that while an SSD at peak speed can saturate a SATA 3 link, and three such drives can saturate a 2GB/s DMI connection, even the best SSDs rarely reach such speeds with normal workloads.

    Random workloads (especially low queue depth 4K random) tend to be limited to much lower speeds, and random IO is much more representative of typical workloads. Sequential workloads are usually bulk file copy operations, and how often do you do that?

    So, given your 10x SATA 3 connectors, what workload do you possibly envisage that would require that combined bandwidth? And benchmark dick swinging doesn't count.
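
    To put numbers on that point (the QD1 IOPS figure below is an assumed ballpark for a SATA SSD of this era, not a measured result):

        # Random throughput = IOPS x transfer size, so low queue depth
        # 4K random barely touches the ~1800 MB/s DMI budget.
        qd1_4k_iops = 10_000     # assumed ballpark for QD1 on a SATA SSD
        xfer = 4 * 1024          # 4KiB per I/O
        print(f"{qd1_4k_iops * xfer / 1e6:.0f} MB/s")   # ~41 MB/s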
  • personne - Sunday, December 14, 2014 - link

    My tasks are varied, but they often involve opening large data sets and importing them into an inverted index store, while at the same time running many process agents on the incoming data and visualizing it. This host is also used for virtualization. Programs loading faster is the least of my concerns.
  • AllanMoore - Saturday, December 13, 2014 - link

    Well, you can see the blistering speed of the 480GB version compared to the 240GB one in this table: http://picoolio.net/image/e4O
  • EzioAs - Saturday, December 13, 2014 - link

    I know RAID 0 (especially with 4 drives) theoretically gives high performance, but is it really worth the data risk? I do question the laptop manufacturers and PC OEMs that build RAID 0 arrays of SSDs for their customers; it's just not good practice imo.
  • personne - Monday, December 15, 2014 - link

    RAM is much more volatile than flash or spinning storage, yet it has its place. SSDs are in a sense always RAID arrays, since many chips are used in parallel. And it's been posted that the failure rate of a good SSD is much lower than that of an HDD, so even multiple SSDs combined can fail less often than a single HDD. And one should always have good backups regardless. So if the speed is worth it, it's not at all unreasonable.
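
    The arithmetic behind that claim, as a sketch (the annual failure rates are illustrative assumptions, not measured figures): a RAID 0 array loses data if any member fails, so its failure probability is 1 - (1 - p)^N.

        # Illustrative annual failure rates (assumptions, not data):
        ssd_afr = 0.015          # assumed 1.5% per SSD per year
        hdd_afr = 0.05           # assumed 5% per HDD per year
        for n in (2, 3, 4):
            p_array = 1 - (1 - ssd_afr) ** n
            print(f"{n}x SSD RAID 0: {p_array:.1%} vs single HDD: {hdd_afr:.0%}")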
  • Symbolik - Sunday, December 14, 2014 - link

    I have 3x Kingston HyperX 240GB in RAID 0. I actually have 4 of them, but 3 maxes out my AMD RAID gains; the jump over 2 drives is significant, at around 1000/1100 MB/s r/w (ATTO Disk Benchmark). I have tried 4, and the gain was minimal. To get further gains with the 4th, I'd probably need to put in an actual RAID card. I know it's not Intel, but it is SandForce.
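
    That plateau is what you'd expect if the array is hitting the chipset uplink; a simple model (both bandwidth figures below are illustrative assumptions):

        # Array throughput is capped by the chipset-to-CPU link.
        drive_bw = 380e6         # assumed per-drive sequential bytes/s
        link_bw = 1.2e9          # assumed usable uplink bandwidth
        for n in (1, 2, 3, 4):
            print(f"{n} drives: {min(n * drive_bw, link_bw) / 1e6:.0f} MB/s")
        # 1140 MB/s at three drives, capped at ~1200 with four.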
  • Dug - Friday, December 12, 2014 - link

    You say - "As a result the XP941 will remain as my recommendation for users that have compatible setups (PCIe M.2 and boot support for the XP941) because I'd say it's slightly better performance wise and at $200 less there is just no reason to choose the Phoenix Blade over the XP941, except for compatibility"

    I'm curious, what are you using to determine the XP941 has slightly better performance? It just seems to me most of the benchmarks favor the Phoenix Blade.
  • Kristian Vättö - Friday, December 12, 2014 - link

    It's the 2011 Heavy Workload in particular where the XP941 performs considerably better than the Phoenix Blade, whereas in the 2013 and 2011 Light suites the difference between the two is quite small. The XP941 also has better low queue depth random performance, which is typically important for desktop workloads.
