SSDs are beginning to challenge conventional drive form factors in a major way. On the consumer side we're seeing more systems adopt new SSD form factors, enabled by mSATA - the gumstick form factor used in the MacBook Air and ASUS UX series comes to mind. SSDs can deliver full performance in a much smaller package, helping scale down the size of notebooks.


The 11-inch MacBook Air SSD, courtesy of iFixit

The enterprise market has seen a form factor transition of its own. While 2.5" SSDs are still immensely common, there's a lot of interest in PCIe solutions.
 
The quick and easy way to get a PCIe SSD is to take a bunch of SSDs and RAID them together on a single PCIe card. You don't get any performance benefit over building the same array out of discrete drives, but it does let you pack a lot of performance into a single slot without being drive-bay limited. This is what we typically see from companies like OCZ.
 
The other alternative is a native PCIe solution. In the RAID-on-a-card approach above, you typically have a couple of SATA SSD controllers paired with a SATA-to-PCIe RAID controller. A native solution skips the RAID controller entirely and uses a custom SSD controller that interfaces directly with PCIe - in other words, an SSD that avoids SATA and any bottlenecks it might introduce. Today Micron is announcing its first native PCIe SSD: the P320h.
 

 
The P320h is Micron's first PCIe SSD as well as its first in-house controller design. You'll remember from our C300/C400/m4 reviews that Micron typically buys its controllers from Marvell and does only firmware development in house. The P320h changes that. While it's too early to assume we'll see Micron-designed controllers in consumer drives as well, that's clearly a step the company is willing to take.
 
The P320h's controller is a beast. With 32 parallel channels and a PCIe gen 2 x8 interface, the P320h is built for bandwidth, and Micron's peak performance specs speak for themselves.
 

 
Sequential read/write performance is up to 3GB/s and 2GB/s respectively. Random 4KB read performance is a staggering 750,000 IOPS, while random write performance peaks at 341,000 IOPS. The former is unmatched by anything I've seen on a single card, while the latter is a number OCZ's recently announced Z-Drive R4 88 promises as well. Note that these aren't steady-state numbers, nor are the details of the testing methodology known, so treat them accordingly.
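To put those peak numbers in perspective, it's worth sanity checking them against the interface. The sketch below uses the usual ~500MB/s-per-lane rule of thumb for PCIe 2.0 after 8b/10b encoding overhead - my figure, not a Micron spec:

```python
# Back-of-the-envelope check of the P320h's headline specs against its
# PCIe gen 2 x8 interface. 500MB/s per lane per direction is the common
# rule of thumb for PCIe 2.0 after 8b/10b encoding overhead.
PCIE2_LANE_MBPS = 500
LANES = 8

link_gbps = PCIE2_LANE_MBPS * LANES / 1000.0  # usable GB/s per direction
print(f"PCIe 2.0 x8: ~{link_gbps:.1f} GB/s per direction")

# Express the 750K random 4KB read IOPS spec as raw bandwidth
iops = 750_000
transfer_kb = 4
random_read_gbps = iops * transfer_kb / 1_000_000
print(f"750K 4KB IOPS: ~{random_read_gbps:.1f} GB/s of small-block reads")
```

In other words, 750K 4KB IOPS works out to roughly 3GB/s of reads - the random read spec effectively matches the sequential read spec and sits comfortably within the x8 link's ~4GB/s per direction.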
 
There is of course support for NAND redundancy, which Micron calls RAIN (Redundant Array of Independent NAND). Micron describes RAIN as very similar to RAID-7 with one parity channel; however, it hasn't released details on what sorts of failures are recoverable as a result. RAIN, on top of the usual enterprise-level write amplification concerns, results in some pretty heavy overprovisioning on the drive, as you'll see below.
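Micron hasn't published how the raw capacity is divided up, but we can make a rough, hypothetical estimate: if one of the 32 channels' worth of capacity goes to parity, the remainder of the gap between raw and usable NAND (using the 700GB board's roughly 1TB of NAND, detailed below) would be spare area:

```python
# Hypothetical breakdown of the 700GB P320h's raw NAND, assuming RAIN
# reserves one of the 32 channels' worth of capacity for parity. Micron
# hasn't published the actual split; spare area here is simply whatever
# is left over after parity and user capacity.
raw_gb = 1024       # ~1TB of NAND on the 700GB board
usable_gb = 700
channels = 32
parity_channels = 1

parity_gb = raw_gb * parity_channels / channels
spare_gb = raw_gb - parity_gb - usable_gb
overprovisioning = (raw_gb - usable_gb) / usable_gb

print(f"parity: ~{parity_gb:.0f}GB, remaining spare area: ~{spare_gb:.0f}GB")
print(f"overprovisioning: ~{overprovisioning:.0%} of user capacity")
```

Under those assumptions you'd be looking at roughly 32GB of parity, nearly 300GB of spare area, and about 46% total overprovisioning - heavy indeed by consumer standards.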
 

 
Micron will offer the P320h in two capacities: 350GB and 700GB. The drives use 16Gbit 34nm SLC NAND (ONFI 2.1). The 700GB drive features 64 package placements with 8 die per package - at 2GB per die that works out to 16GB per package, or 1TB of NAND on the card.
 
The 350GB version has the same number of package placements (64) but only 4 die per package, which works out to 512GB of NAND on board. With twice as many die per package, the 700GB drive gets some extra interleaving benefits, which is why its 4KB random write performance is better.
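The capacity math is simple enough to verify - a 16Gbit die is 2GB, so raw NAND is just packages x die per package x die capacity:

```python
# Raw NAND capacity of both P320h configurations from the die geometry:
# 16Gbit (2GB) 34nm SLC dies, 64 package placements per board.
DIE_GB = 16 / 8    # 16Gbit ONFI 2.1 SLC die = 2GB
PACKAGES = 64

for usable_gb, die_per_package in ((700, 8), (350, 4)):
    raw_gb = PACKAGES * die_per_package * DIE_GB
    print(f"{usable_gb}GB board: {PACKAGES} packages x {die_per_package} die "
          f"x {DIE_GB:.0f}GB = {raw_gb:.0f}GB of raw NAND")
```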
 
Pricing is unknown at this point, although Micron pointed out that it expects cost to be somewhere south of $16 per GB (at $16/GB that would be $5,600 for the 350GB board and $11,200 for the 700GB board).
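For reference, here's that $16/GB figure applied to both boards - keep in mind it's Micron's stated ceiling, not an announced list price:

```python
# Ceiling prices implied by Micron's "south of $16 per GB" guidance.
# These are upper bounds, not announced list prices.
PRICE_PER_GB_CEILING = 16
for capacity_gb in (350, 700):
    print(f"{capacity_gb}GB board: up to ${capacity_gb * PRICE_PER_GB_CEILING:,}")
```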
Comments

  • davegraham - Thursday, June 2, 2011

    Actually, SNIA is doing the majority of the standards definition here. Intel != standards body.
  • peternelson - Thursday, June 2, 2011

    Well, certainly SNIA are doing important work, e.g. SNIA-CTP 1.4.

    I don't really mean Intel standards per se - I agree Intel is not a standards body - but some of what it is involved in does become a de facto or finalized standard over time: USB did, and Thunderbolt likely will. Some standardization tracks take a long time - witness WiFi router manufacturers shipping "draft N" gear for years prior to ratification. I mean things that have broad agreement and adoption (or will have), whether or not they have been fully standardized through some external ratification process yet, or are still emerging in draft form.

    Certainly vendors seem to be cooperating in the field of SSD storage, and when 70+ leading players are talking to each other - including names like IBM, Dell, EMC, Fujitsu, Amphenol, Emulex, Fusion-io, IDT, Marvell, Micron, Molex, PLX, QLogic, SandForce etc. - we can all benefit from their deliberations, regardless of who initiated or ultimately "owns" the result. This seems to be a "coalition of the winning" rather than something necessarily led by Intel. Intel themselves are already producing SSD products, but they were also involved in the development of PCI Express, and in wider PC platform direction, roadmaps and technologies, so they have significant influence.

    The latest NVMHCI spec seems to offer a direct PCIe connection bypassing the SATA interface bottleneck, which can deliver higher bandwidth and lower latency. I believe that if vendors agree to do this in some common way, it is an improvement for customers who buy these products, because it will likely be easier to swap or mix and match between such products, and it results in drivers that are better tested, with lower development cost (which could potentially translate into more competitive pricing).

    If there exists such a specification (let's not use the word "standard" prematurely) which achieves good industry adoption, then it's worth knowing about when discussing related products, and worth noting if some manufacturer decides to pursue their own proprietary way of doing things instead. If they do, they need to spell out the benefits of their approach.
  • tygrus - Thursday, June 2, 2011

    The P320h may require a queue depth of >64 for maximum read IOPS.
    This has 32 channels of SLC, which beats 64 channels of MLC (a RAID0 of 8 controllers with 8 channels each, e.g. the Z-Drive R4 m88). Or does it really matter how many dies there are in total?
    The 700GB card could be closer to $9,000 if the main controller cost is not duplicated, i.e. $1,700 base plus $7.50/GB x 512GB = $5,540 (350GB card); $1,900 base plus $7.50/GB x 1024GB = $9,580 (700GB card).
    That's still 4-6x the cost of other MLC-based cards.
    At that rate SAS datasets could be read/written faster than they can be processed by the CPU. Take that, you data steps, PROC SORT and PROC FREQ.
  • Kevin G - Thursday, June 2, 2011

    For enterprise-class SLC-based flash, the pricing isn't bad. I do think Micron should add a low-profile ~350GB version to the lineup and they'd be set (not all servers accept full-height cards).

    Just need a consumer-centric version that'll be bootable and have a ton of MLC flash on it.
  • Shadowmaster625 - Thursday, June 2, 2011

    The controller chip and its associated NRE (non-recurring engineering) costs are too high for the consumer market.
  • murray13 - Thursday, June 2, 2011

    Yes, the development costs ARE high, but if you can recoup them in the enterprise sector you're free to move to the consumer market for not much more than manufacturing cost. It would depend more on what the manufacturing costs are than on the development costs, which is where enterprise pays - a LOT.
  • Demon-Xanth - Thursday, June 2, 2011

    ...but I want one.

    *posts a picture of this card up next to the Countach from his high school days*
  • Olternaut - Thursday, June 2, 2011

    I skipped right to the part about the read/write speeds. I was like "3GB/s ?????". Then I saw the pricing. :(

    If it weren't for the pricing I would definitely have gotten this instead of the OCZ RevoDrive 3 for my new build.
    But obviously, the pricing is meant for servers somewhere in some company's IT department.

    Oh well, OCZ RevoDrive 3 it is then. 1.5GB/s is nothing to sneeze at.
  • TEAMSWITCHER - Friday, June 3, 2011

    The biggest problem with the PC world is that awesome new technology like this can take forever to reach the mainstream. Why isn't every SSD just a PCI Express card? There is no reason to ship it in a case that makes it look like a hard disk drive. I love the speed of SSDs, but the implementation feels so legacy. I hope technology like this gets adopted fast.
  • FREEPAT75014 - Saturday, June 4, 2011

    Please clarify for all such SSD assemblies whether they are BOOTABLE or not. I'm chasing an SSD first and foremost to accelerate Windows 7, so a non-bootable device would be useless for me.
