A few weeks ago a very smart friend of mine sent me an email asking why we haven’t seen more PCIe SSDs by now. While you can make the argument for keeping SATA around as an interface for traditional hard drives, it ends up being a bottleneck when it comes to SSDs. The move to 6Gbps SATA should alleviate that bottleneck for a short period, but it’s easy enough to put NAND in parallel that you could quickly saturate that interface as well. So why not a higher bandwidth interface like PCIe?
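To put rough numbers on the bottleneck, here’s a minimal back-of-the-envelope sketch in Python. The per-channel NAND figure is an assumption (roughly what 2010-era asynchronous NAND sustains per channel), and the interface figures account for 8b/10b encoding overhead:

```python
# Back-of-the-envelope: how many parallel NAND channels does it take to
# saturate each host interface? The 40 MB/s per-channel figure is an
# assumption, not a spec from any particular drive.
import math

NAND_CHANNEL_MBPS = 40  # assumed sustained read per NAND channel

interfaces_mbps = {
    "SATA 3Gbps":  300,   # ~300 MB/s usable after 8b/10b encoding
    "SATA 6Gbps":  600,   # ~600 MB/s usable after 8b/10b encoding
    "PCIe 1.x x4": 1000,  # 4 lanes x ~250 MB/s per direction
    "PCIe 1.x x8": 2000,
}

for name, mbps in interfaces_mbps.items():
    channels = math.ceil(mbps / NAND_CHANNEL_MBPS)
    print(f"{name:<12} ~{mbps:>4} MB/s, saturated by ~{channels} parallel NAND channels")
```

Even at these conservative numbers, a modest handful of channels fills a SATA link, while a PCIe x8 slot leaves plenty of headroom.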

The primary reason appears to be cost. While PCIe can offer much more bandwidth than SATA, the amount of NAND you’d need to get there and the controllers necessary would be cost prohibitive. The unfortunate reality is that good SSDs launched at the worst possible time. The market would’ve been ripe in 2006 - 2007, but in the post-recession period getting companies to spend even more money on PCs wasn’t very easy. A slower-than-expected SSD ramp put the brakes on a lot of development of exotic PCIe SSDs.

We have seen a turnaround, however. At last year’s IDF Intel showed off a proof-of-concept PCIe SSD that could push 1 million IOPS. And with the consumer SSD market dominated by a few companies, the smaller players turned to building their own PCIe SSDs to go after the higher-margin enterprise market. Enterprise customers had the budget and the desire to push even more bandwidth. Throw a handful of Indilinx controllers on a PCB, give it a good warranty and you had something you could sell to customers for over a thousand dollars.

OCZ was one of the most eager in this space. We first met their Z-Drive last year:

The PCIe x8 card was made up of four Indilinx Barefoot controllers configured in RAID-0, delivering up to four times the performance of a single Indilinx SSD on a single card. That card would set you back anywhere between $900 and $3500 depending on capacity and configuration.
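Conceptually the RAID-0 arrangement is just striping: consecutive chunks of the logical address space rotate across the member SSDs, so large transfers hit all four controllers at once. A minimal sketch of the mapping, assuming a 64KB stripe size (OCZ doesn’t publish the actual value):

```python
# Minimal RAID-0 address mapping: a logical byte offset is split into a
# stripe-sized chunk, and consecutive chunks rotate across the member
# drives. The 64KB stripe size is an assumed example value.
STRIPE_SIZE = 64 * 1024
NUM_DRIVES = 4

def raid0_map(logical_offset: int):
    chunk = logical_offset // STRIPE_SIZE       # which stripe-sized chunk
    drive = chunk % NUM_DRIVES                  # chunks rotate across drives
    stripe_row = chunk // NUM_DRIVES            # how deep into each drive
    offset_on_drive = stripe_row * STRIPE_SIZE + (logical_offset % STRIPE_SIZE)
    return drive, offset_on_drive

# A 1MB sequential transfer touches all four drives, which is where the
# "up to 4x" scaling for large transfers comes from.
touched = {raid0_map(off)[0] for off in range(0, 1024 * 1024, STRIPE_SIZE)}
print(sorted(touched))  # [0, 1, 2, 3]
```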

With the SSD controllers behind an LSI Logic RAID controller there was no way to pass TRIM commands through to the drives. OCZ instead relied on idle garbage collection to keep Z-Drive owners happy. Even today the company is still working on bringing a TRIM driver to Z-Drive owners.
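The underlying problem is easy to picture: without TRIM, the controller never learns which flash pages hold data the filesystem has already deleted, so garbage collection keeps copying that stale data around. A toy model, with purely illustrative page counts:

```python
# Toy model of why TRIM matters behind a RAID controller. When the OS
# deletes a file, only a TRIM command tells the SSD those pages are
# invalid; if TRIM never arrives, garbage collection still copies those
# "dead" pages when it reclaims a block, inflating writes.
def gc_copies(valid_pages, deleted_pages, trim_reached_drive):
    # Pages the controller believes are still valid in the block it erases.
    if trim_reached_drive:
        return valid_pages                  # deleted pages were marked invalid
    return valid_pages + deleted_pages      # deleted pages still look live

block = {"valid_pages": 40, "deleted_pages": 60}  # remaining pages already invalid

print("copies with TRIM:   ", gc_copies(**block, trim_reached_drive=True))   # 40
print("copies without TRIM:", gc_copies(**block, trim_reached_drive=False))  # 100
```

Idle garbage collection helps recover performance, but it can’t avoid that extra copying; it just schedules it when you’re not looking.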

The Z-Drive apparently sold reasonably well. Well enough for OCZ to create a follow-on drive: the Z-Drive R2. This card uses custom NAND modules that allow users to upgrade their drive capacity down the line. The modules are SO-DIMMs populated with NAND, available only through OCZ. The new Z-Drive still carries the hefty price tag of the original.

Ryan Petersen, OCZ’s CEO, hopes to change that with a new PCIe SSD: the OCZ RevoDrive. Announced at Computex 2010, the RevoDrive uses SandForce controllers instead of the Indilinx controllers of the Z-Drives. The first incarnation uses two SandForce controllers in RAID-0 on a PCIe x4 card. As far as attacking price: how does $369 for 120GB sound? And it is of course bootable.

OCZ sent us the more expensive $699.99 240GB version but the sort of performance scaling we'll show here today should apply to the smaller, more affordable card as well. Below is a shot of our RevoDrive sample:

The genius isn’t in the product, but in how OCZ made it affordable. Looking at the RevoDrive you’ll see the two SandForce SF-1200 controllers that drive the NAND, but you’ll also see a Silicon Image RAID controller and a Pericom PI7C9X130 bridge chip.

The Silicon Image chip is a SiI3124, a PCI-X to 4-port 3Gbps SATA controller. The controller supports up to four SATA devices, which means OCZ could make an even faster version of the RevoDrive with four SF-1200 controllers in RAID.

Astute readers will note that I said the SiI3124 is a PCI-X to SATA controller. The Pericom bridge converts that PCI-X interface to the PCIe x4 interface you see at the bottom of the card.



The Pericom PCI-X to PCIe Bridge

Why go from SATA to PCI-X and then to PCIe? Cost. These Silicon Image PCI-X controllers are dirt cheap compared to native PCIe SATA controllers, and the Pericom bridge chip doesn’t add much either. Bottom line? OCZ is able to offer a single card at very little of a premium over a standalone drive. A standard OCZ Vertex 2 E 120GB (13% spare area instead of 22%) will set you back $349.99. A 120GB RevoDrive will sell for $369.99 ($389.99 MSRP), but deliver much higher performance thanks to the two SF-1200 controllers in RAID-0 on the card.
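As a rough sanity check that the older silicon isn’t a bottleneck, here are approximate ceilings for each link in the chain. The figures are estimates: SATA and PCIe after 8b/10b encoding, PCI-X at 64-bit/133MHz, and SandForce’s quoted ~285 MB/s sequential read per SF-1200:

```python
# Approximate ceilings for each link in the RevoDrive's chain, to show
# the older PCI-X silicon plus bridge isn't the limiting factor for two
# SF-1200 controllers. All figures are rough estimates.
links_mbps = {
    "2x SF-1200 controllers":   2 * 285,   # SandForce's quoted sequential read
    "2x SATA 3Gbps ports":      2 * 300,   # after 8b/10b encoding
    "PCI-X 64-bit/133MHz":      1066,      # SiI3124's host interface
    "PCIe 1.x x4 (via bridge)": 1000,      # 4 lanes x ~250 MB/s
}

bottleneck = min(links_mbps, key=links_mbps.get)
for name, mbps in links_mbps.items():
    print(f"{name:<26} ~{mbps} MB/s")
print(f"\nLimiting link: {bottleneck}")
```

In other words, the SF-1200 drives themselves set the ceiling; the PCI-X controller and the bridge have headroom to spare.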

You’ll also notice that at $369.99 a 120GB RevoDrive is barely any more expensive than a single SF-1200 SSD, and it’s actually cheaper than two smaller capacity drives in RAID. If OCZ is actually able to deliver the RevoDrive at these prices then the market is going to have a brand new force to reckon with. Do you get a standard SATA SSD or pay a little more for a much faster PCIe SSD? I suspect many will choose the latter, especially since, unlike the Z-Drive, the RevoDrive is stupidly fast in desktop workloads.

If you’re wondering how this is any different from a pair of SF-1200 based SSDs in RAID-0 using your motherboard’s RAID controller, it isn’t, really. The OCZ RevoDrive will offer lower CPU utilization than an on-board software based RAID solution thanks to its Silicon Image RAID controller, but the advantage isn’t huge. The only reasons you’d opt for this over a standard RAID setup are cost and, to a lesser extent, simplicity.

What’s that Connector?

When I first published photos of the Revo a number of readers wondered what the little connector next to the Silicon Image RAID controller was. Those who guessed it was for expansion were right: it is.

Unfortunately that connector won’t be present on the final RevoDrive shipped for mass production. At some point we may see another version of the Revo with that connector. The idea is to be able to add a daughterboard with another pair of SF-1200 controllers and NAND to increase the capacity and performance of the Revo down the line. Remember, the Silicon Image controller has four native SATA ports stemming off of it; only two are currently in use.

Comments

  • shin0bi272 - Friday, June 25, 2010 - link

    Please tell me you're going to give this away in your 13 year anniversary loot. I really want an SSD but Im unemployed and this one would last me a LONG time since its expandable (such a great idea btw).
  • fwibbler - Friday, June 25, 2010 - link

    Since a lot of people may be upgrading from older SSDs (like Vertex 1) it might be an idea to show one of them in a performance chart when you review the release version (in particular 2x Vertex 30GB ;-)
  • SL_Eric - Friday, June 25, 2010 - link

    Does using it on an NF200-equipped board (and the appropriate through-the-NF200 PCIe lanes) have any impact on performance?
  • bji - Friday, June 25, 2010 - link

    All this talk about upgradable flash using NAND chips on SODIMM cards inspired me to think about what the future of storage will look like.

    Is it likely that eventually, the SSD controllers will follow the same path that memory controllers have? Starting with external devices (which I would bet is the way that core memory was done back in the mainframe days) as we have now with external SSD controllers, then moving to controllers built into the motherboard, and finally moving to on-die controllers. All the while, with NAND flash becoming a raw commodity part that you just plug into SODIMM slots or similar on your motherboard?

    So you'd buy RAM and put it in your RAM SODIMM slots, and then buy some NAND flash SODIMMS and plug those into your NAND SODIMM slots, each being handled by an on-board on on-die controller.

    Is this the eventuality of solid state storage? It actually sounds really good to me. I suspect that breaking the devices apart like this, and making the SSD controllers a separate part of the system from the raw NAND flash, would allow for greater efficiency for both, and drive prices down and capacities up.

    One drawback would be that the NAND flash sticks would have persisted data on them so to switch to new ones, you'd have to have somewhere to copy that data and then copy the data back onto the new sticks after you'd upgraded. Of course, that's really no different than what you have to do now when you buy a new SSD drive or (heaven forbid) spinning platter drive. The major difference being with external connector standards like SATA, you can add extra drives easily, whereas with SODIMM slots on the motherboard, you are more limited in the number of 'devices' that you can support at one time, making adding a temporary drive for the purposes of copying data around a bit harder.
  • allingm - Friday, June 25, 2010 - link

    You know I was thinking exactly the same thing. It would be so much better if hard drive space was upgraded just like RAM. I think the RAM standard group should come up with a persistent memory storage standard.
  • Shadowmaster625 - Monday, June 28, 2010 - link

    You would just clone the SSD DIMM(s) drive image onto an external usb hard drive. Then restore the image onto the new flash DIMM(s) and, if necessary, resize the partition.
  • nurd - Friday, June 25, 2010 - link

    It seems to me that since, from the OS's standpoint, it's just a SiI 3124, if you broke the RAID and just treated it as two drives at the OS level (maybe striping them at THAT layer), you could TRIM just fine (assuming your 3124 driver supports it of course).
  • ggathagan - Friday, June 25, 2010 - link

    My guess is that you will never find TRIM support for SiL3124's.

    As Anand mentioned, one of the keys to the device's low cost is the use of relatively old silicon.
    It would surprise me if Silicon Image was still actively doing any work on the 3124 drivers.
    The latest Windows RAID drivers listed on their website are dated 2006, with a note for the 64-bit version that it supports Vista beta 2.

    That part aside, TRIM support would depend on what it is about RAID that TRIM doesn't like.
    Even if the controller allows you to break up the pair to be seen as individual drives, the controller still considers each of them to be RAID0 devices.

    I've no idea as to the issue between RAID and TRIM, but if the very fact that it's a RAID device disallows TRIM, then you're out of luck.
    If the issue with TRIM and RAID has to do with data striping, then you might be correct, assuming that SI ever developed TRIM support for the 3124.
