Buying an SSD for your notebook or desktop is nice. You get more consistent performance, applications launch extremely fast, and if you choose the right SSD you curb the painful slowdown your PC suffers over time. I’ve argued that an SSD is the single best upgrade you can make to your computer, and I still believe that to be the case. At the end of the day, however, it’s a luxury item. It’s like saying that buying a Ferrari will help you accelerate more quickly. That may be true, but it’s not necessary.

In the enterprise world, however, SSDs are even more important. Our own Johan de Gelas had his first experience with an SSD in one of his enterprise workloads a year ago. His OLTP test looked at the performance difference between 15K RPM SAS drives and SSDs in a database server; Johan experimented with each as both the data and log drives.

Using a single SSD (Intel’s X25-E) as a data drive and a single SSD as a log drive is faster than running eight 15,000 RPM SAS drives in RAID 10 plus another two in RAID 0 as a logging drive.

Not only is performance higher, but total power consumption is much lower. Under full load eight SAS drives use 153W, compared to 2 - 4W for a single Intel X25-E. There are also reliability benefits. While mechanical storage requires redundancy in case of a failed disk, SSDs don’t. As long as you’ve properly matched your controller, NAND and ultimately your capacity to your workload, an SSD should fail predictably.
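
To put that power gap in perspective, here’s a rough back-of-the-envelope sketch in Python; the 24/7 duty cycle and $0.10/kWh electricity rate are my own illustrative assumptions, not figures from the original test:

```python
# Rough annual power-cost comparison: eight 15K RPM SAS drives vs. a
# single Intel X25-E. Assumes 24/7 operation at an illustrative $0.10/kWh.
SAS_WATTS = 153            # eight SAS drives under full load (measured)
SSD_WATTS = 4              # single X25-E, worst case of the 2-4W range
HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.10        # assumed electricity price in USD

def annual_cost(watts: float) -> float:
    return watts / 1000 * HOURS_PER_YEAR * RATE_PER_KWH

print(f"SAS array: ${annual_cost(SAS_WATTS):.2f}/year")   # ~$134
print(f"X25-E:     ${annual_cost(SSD_WATTS):.2f}/year")   # ~$3.50
```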

The overwhelming number of poorly designed SSDs on the market today is one reason most enterprise customers are unwilling to consider them. The high margins available in the enterprise market are the main reason SSD makers are so eager to conquer it.

Micron’s Attempt

Just six months ago we were first introduced to Crucial’s RealSSD C300. Not only was it the first SSD we tested with a native 6Gbps SATA interface, it was also one of the first to truly outperform Intel across the board. A few missteps later, we found the C300 to be a good contender, but our second choice behind SandForce-based drives like the Corsair Force or OCZ Vertex 2.

Earlier this week Micron, Crucial’s parent company, called me up to talk about a new SSD. This drive would only ship under the Micron name as it’s aimed squarely at the enterprise market. It’s the Micron RealSSD P300.

The biggest difference between the P300 and the C300 is that the former uses SLC (Single Level Cell) NAND Flash instead of MLC NAND. As you may remember from my earlier SSD articles, SLC and MLC NAND are nearly identical - they just store different amounts of data per NAND cell (1 bit vs. 2 bits).


SLC (left) vs. MLC (right) NAND

The benefits of SLC are higher performance and a longer lifespan. The downside is cost: SLC NAND is at least 2x the price of MLC NAND. A die of the same area stores half as much data, and SLC is produced in lower volumes, so you pay at least twice as much per gigabyte.

              SLC NAND flash    MLC NAND flash
Random Read   25 µs             50 µs
Erase         2 ms per block    2 ms per block
Programming   250 µs            900 µs
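
Those program times also hint at why SLC writes so much faster. Here’s a quick sketch of the per-die write ceiling each program time implies; the 4KB page size is my assumption, since the table doesn’t specify one:

```python
# Per-die write throughput implied by NAND page program time.
# Assumes a 4KB page (not specified in Micron's table).
PAGE_SIZE_KB = 4

for nand, program_us in [("SLC", 250), ("MLC", 900)]:
    mb_per_s = (PAGE_SIZE_KB / 1024) / (program_us / 1_000_000)
    print(f"{nand}: ~{mb_per_s:.1f} MB/s per die")
# SLC: ~15.6 MB/s, MLC: ~4.3 MB/s. Drives reach their rated write
# speeds by programming many dies in parallel.
```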

Micron wouldn’t share pricing, but it expects drives to be priced under $10/GB. That’s actually cheaper than Intel’s X25-E, despite being 2-3x what we pay for consumer MLC drives. Even if we’re talking $9/GB, that’s a bargain for enterprise customers who can replace a whole stack of 15K RPM HDDs with just one or two of these.
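
At those rates, per-drive pricing is easy to ballpark; a quick sketch using the hypothetical $9/GB figure above:

```python
# Ballpark P300 pricing at the hypothetical $9/GB discussed above.
PRICE_PER_GB = 9.0

for capacity_gb in (50, 100, 200):
    print(f"{capacity_gb}GB: ~${capacity_gb * PRICE_PER_GB:,.0f}")
# 50GB: ~$450, 100GB: ~$900, 200GB: ~$1,800 - weigh that against a
# shelf of 15K RPM SAS drives (plus their power and rack space).
```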

The controller in the P300 is nearly identical to the one in the C300. The main differences are twofold. First, the P300’s controller supports ECC/CRC from the controller down into the NAND; Micron was unable to go into any more specifics on what is protected via ECC vs. CRC. Second, in order to deal with the faster write speed of SLC NAND, the P300’s internal buffers and pathways operate at a quicker rate. Think of the P300’s controller as a slightly evolved version of what we have in the C300, with ECC/CRC and SLC NAND support.


The C300

The rest of the controller specs are identical. We still have the massive 256MB of external DRAM and an unchanged on-die cache size. The Marvell controller still supports 6Gbps SATA, although the P300 doesn’t offer SAS support.

Micron P300 Specifications
                                  50GB                     100GB                    200GB
Formatted Capacity                46.5GB                   93.1GB                   186.3GB
NAND Capacity                     64GB SLC                 128GB SLC                256GB SLC
Endurance (Total Bytes Written)   1 Petabyte               1.5 Petabytes            3.5 Petabytes
MTBF                              2 million device hours   2 million device hours   2 million device hours
Power Consumption                 < 3.8W                   < 3.8W                   < 3.8W
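
One way to make sense of those endurance ratings is to translate them into full drive writes per day; the 5-year service life below is my own illustrative assumption, not a Micron spec:

```python
# Endurance (total bytes written) expressed as full drive writes per day
# over an assumed 5-year service life (illustrative, not a Micron spec).
ENDURANCE_TB = {50: 1000, 100: 1500, 200: 3500}  # capacity GB -> TBW in TB
DAYS = 5 * 365

for capacity_gb, tbw in ENDURANCE_TB.items():
    fills_per_day = tbw * 1000 / capacity_gb / DAYS
    print(f"{capacity_gb}GB: ~{fills_per_day:.0f} drive writes/day")
# Roughly 11, 8, and 10 complete drive writes per day, respectively.
```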

The P300 will be available in three capacities: 50GB, 100GB and 200GB. The drives ship with 64GB, 128GB and 256GB of SLC NAND on them by default. Roughly 27% of the drive capacity is designated as spare area for wear leveling and bad block replacement. This is in line with other enterprise drives like the original 50/100/200GB SandForce drives and the Intel X25-E. Micron’s P300 datasheet seems to imply that the drive will dynamically use unpartitioned LBAs as spare area. In other words, if you need more capacity or have a heavier workload you can change the ratio of user area to spare area accordingly.
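
For reference, the ~27% figure falls straight out of the raw vs. formatted capacities in the spec table:

```python
# Spare area implied by raw NAND vs. formatted capacity on the P300.
CAPACITIES = {64: 46.5, 128: 93.1, 256: 186.3}  # raw GB -> formatted GB

for raw_gb, formatted_gb in CAPACITIES.items():
    spare_pct = (raw_gb - formatted_gb) / raw_gb * 100
    print(f"{raw_gb}GB NAND, {formatted_gb}GB usable: {spare_pct:.1f}% spare")
# Each configuration works out to roughly 27% spare area.
```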

Micron shared some P300 performance data with me:

Micron P300 Performance Specifications
                         Peak                Sustained
4KB Random Read          Up to 60K IOPS      Up to 44K IOPS
4KB Random Write         Up to 45.2K IOPS    Up to 16K IOPS
128KB Sequential Read    Up to 360MB/s       Up to 360MB/s
128KB Sequential Write   Up to 275MB/s       Up to 255MB/s
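
To put the random I/O numbers in bandwidth terms, here’s a quick conversion at 4KB per operation:

```python
# Convert Micron's 4KB random I/O ratings into approximate bandwidth.
IO_SIZE_KB = 4
RATINGS = {
    "4KB random read (peak)": 60_000,
    "4KB random read (sustained)": 44_000,
    "4KB random write (peak)": 45_200,
    "4KB random write (sustained)": 16_000,
}

for name, iops in RATINGS.items():
    print(f"{name}: ~{iops * IO_SIZE_KB / 1024:.0f} MB/s")
# Peak 4KB random reads (~234 MB/s) get surprisingly close to the
# drive's 360 MB/s sequential ceiling.
```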

The data looks good, but I’m working on our Enterprise SSD test suite right now, so I’ll hold off on any judgment until we get a drive to test. Micron is sampling drives today and expects to begin mass production in October.

Comments

  • bah12 - Friday, August 13, 2010 - link

    "SSDs have more predictable failure modes than hard drives"

More predictable <> 0% unpredictable. In other words, even if the risk of unpredictable failure is very slight, it is still there, and therefore an unpredictable failure can occur.

Also, no one has addressed timing: what good is a "predictable" failure if it is only "predicted" 30 seconds before the failure?

When Intel/Micron guarantee 0 failures, and their driver accurately predicts failures and notifies an admin, then I'll buy it. Until then it is just theory. Trust me, if Intel could predict failures early enough with 100% certainty, they would be marketing the hell out of that. Bottom line is if they won't stick their neck out on it, then I won't either.
  • Justin Time - Friday, August 13, 2010 - link

    EVERYTHING in an enterprise system requires redundancy. When down-time costs you $/min you simply can't afford to accept anything less. An SSD may have a predictable life-span, but that doesn't rule out unexpected failure.

However, the point the article seems to miss about replacing a bunch of 15K drives with a single SSD is that the primary reason for multi-drive RAID is typically capacity as well as redundancy. You don't replace a RAID array of 1TB+ drives with a single SSD of ANY kind.
  • JHoov - Monday, August 30, 2010 - link


"However, the point the article seems to miss about replacing a bunch of 15K drives with a single SSD is that the primary reason for multi-drive RAID is typically capacity as well as redundancy. You don't replace a RAID array of 1TB+ drives with a single SSD of ANY kind."


The primary reason for a multi-drive RAID is rarely capacity in my experience, and if capacity is your primary reason, SSDs aren't the right path for you. Redundancy is number 1, with IOPS (performance) being a close second. In some cases that is reversed, or redundancy is not a concern, e.g. for temp tables on a database where the data is transient and unimportant but the speed at which it can move in and out of storage is paramount.

In many cases, I've seen 10 or more (sometimes a lot more) 146GB or 300GB 15K RPM SAS HDDs in an array that have been short-stroked to keep the data on the fastest portion of the disk, thereby increasing throughput. So you've got between 1.4 and 3TB of raw space being used to hold a couple hundred GB of data, with the rest wasted. Often, a pair of good SSDs in RAID 1 would provide the same or better random IOPS performance as the disks, at about the same cost, without needing the external array and everything that goes with it (rack space, power, cooling, etc.)

    I recently looked at a database server that was completely I/O bound, and determined that the traditional hard disk method would have needed ~40 15K RPM drives to satisfy the IOPS requirements. Instead of that, an array of 6 SSDs was specified, and thus far, benchmarking on them looks like it will easily meet the needs, with some headroom for future expansion.

    On your second point, of course you would never replace an array of 1TB+ HDDs with SSD (one or many) unless you've got a printing press churning out money in the back. But then, you shouldn't be using an array of 1TB+ disks if performance is your concern either, seeing as all of the 1TB+ disks I've seen have been SATA (or 'midline' SAS), and <=7200 RPM. SSDs can, do, and will continue to serve a definite purpose in applications where performance is the absolute highest priority, such as the OLTP example given in the original article.
  • cdillon - Thursday, August 12, 2010 - link

"While mechanical storage requires redundancy in case of a failed disk, SSDs don’t. As long as you’ve properly matched your controller, NAND and ultimately your capacity to your workload, an SSD should fail predictably."


    No "enterprise" SSD will fail unpredictably, ever? I want to be able to keep my data integrity and uptime high by being able to swap user-replaceable components out as soon as they fail or are predicted to fail, and some kind of N+1 (or better) redundancy of your data-storage modules is required to do that. Whether they are HDDs or SSDs or holographic cubes or whatever is irrelevant.
  • andersx - Thursday, August 12, 2010 - link

SSDs are put to very good use as scratch disks if you are in that sweet spot where the data you're fetching is too large to fit in memory but not large enough to fill several TBs of disk space. If you can get by with as few as 3-4 SSDs in your RAID array, you can potentially save a lot of time, depending on your program. But it's a cost/benefit trade-off: for the same price you can get more storage by buying 300GB 10K or 15K RPM disks.

Our servers run five 300GB 15K RPM disks in RAID 0 (no redundancy) for swapping only. If we could fit our data onto SSDs, execution times would probably improve a fair bit, but then again we couldn't afford to buy as many nodes, since SSDs are way more costly.

    It's hard to generalize "enterprise" or "servers" as a whole.
  • mapesdhs - Thursday, August 12, 2010 - link


I would love to use SSDs as scratch storage for movie processing, and a good one for a system disk, but the cost & lowish capacity still put me off. Though obviously nothing like as fast, it was a lot cheaper to buy a couple of used 450GB 15K SAS drives, for which I get 187 to 294MB/sec sequential read (HDTach). Not tested random read/write yet, should be interesting.

There's also the problem of XP vs. Win7, which controllers perform better when TRIM isn't available, e.g. SandForce vs. whatever Crucial RealSSD C300 uses (forget offhand, Micron?)

    IMO the technology still needs to mature a bit.

    Ian.
  • Out of Box Experience - Thursday, August 12, 2010 - link

Re: There's also the problem of XP vs. Win7, which controllers perform better when TRIM isn't available, e.g. SandForce vs. whatever Crucial RealSSD C300 uses (forget offhand, Micron?)
-----------------------
TRIM is not available on SandForce-controlled SSDs when using XP.
TRIM is available on SandForce-controlled SSDs when using Windows 7.

Here is how a 40GB OCZ Vertex 2 performs on a lowly dual-core Atom 510-series computer in the WORST CASE scenario:

Boot time from the Windows logo to a functioning desktop is 7 seconds after installing XP SP2 without ANY drivers or software installed (just XP)

    Boot time with all drivers and AVG Antivirus is about 14 seconds

The difference in boot times is due to the software you have installed and NOT an SSD-related issue (this should be covered when explaining SSD performance)

Read performance in a worst-case scenario (without TRIM) using XP SP2 on a lowly Atom begins at 120 MB/sec and quickly levels off at about 240 MB/sec after just a few seconds (HD Tach)

Encrypted partition read speed is HORRIBLE!
Read speed of a DriveCrypt v4 partition on a Vertex 2 was just over 14 MB/sec in XP

I think this may be due to DriveCrypt, however, and not the SSD
Further testing is needed

I chose worst-case scenarios for this drive to get a feel for real-world performance without TRIM

I did notice that Atom computers feel like completely different animals when using a fast SSD compared to a hard drive, and the performance gain feels much greater on an Atom than when starting off with an already fast computer

NOTE: Other encryption software or a newer version of DriveCrypt might yield better results, but the real performance gains on a SandForce controller come from hardware compression, which might not work with encrypted drives!

    Hope that helps
  • retnuh - Friday, August 13, 2010 - link

Ahhh... SandForce drives support hardware 128-bit AES encryption, why not use that instead of DriveCrypt?
  • Out of Box Experience - Friday, August 13, 2010 - link

    Thanks for the info..

I was not aware that it had hardware encryption until now.

    http://www.ocztechnologyforum.com/forum/showthread...

    Guess I need to run a few more tests today!
  • Out of Box Experience - Friday, August 13, 2010 - link

    Nope!

    Guess not

    The OCZ Toolbox is still unavailable due to bugs

    Can't wait to test the hardware encryption
