OCZ has been teasing the Vector 180 for quite some time. The first hint of the drive came over nine months ago at Computex 2014, where OCZ displayed a Vector SSD with power loss protection, but the concept of 'full power loss protection for the enterprise segment' as it existed back then never made it to market. Instead, OCZ adapted part of the concept for its new flagship client drive: the Vector 180.

OCZ calls the power loss protection feature in the Vector 180 'Power Failure Management Plus', or PFM+ for short. For cost reasons, OCZ didn't go with full power loss protection similar to enterprise SSDs, and hence PFM+ is limited to protecting data-at-rest. In other words, PFM+ protects data that has already been written to the NAND, but any user data still sitting in the DRAM buffer waiting to be written will be lost in the event of a sudden power loss.

The purpose of PFM+ is to protect the mapping table and reduce the risk of bricking due to a sudden power loss. Since the mapping table is stored in the DRAM for faster access, all SSDs without some sort of power loss protection are inherently vulnerable to mapping table corruption in case of a sudden power loss. In its other SSDs OCZ tries to protect the mapping table by frequently flushing the table from DRAM to NAND, but with higher capacities (like the 960GB) there's more metadata involved and thus more data at risk, which is why OCZ is introducing PFM+ to the Vector 180.
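To make the failure mode concrete, here is a toy sketch (in Python, with invented names; this is not OCZ's actual firmware design) of a mapping table that lives in DRAM and is checkpointed to NAND every few writes. The at-risk window is everything written since the last checkpoint, and it grows with both the flush interval and the table size:

```python
# Toy model of an SSD flash translation layer (FTL): the logical-to-physical
# mapping table lives in "DRAM" (a dict) and is checkpointed to "NAND"
# (a saved copy) every FLUSH_INTERVAL writes. All names are illustrative.

FLUSH_INTERVAL = 4  # real firmware tunes this against write amplification

class ToyFTL:
    def __init__(self):
        self.dram_table = {}       # live mapping: logical page -> physical page
        self.nand_checkpoint = {}  # last copy flushed to stable storage
        self.writes_since_flush = 0
        self.next_phys = 0

    def write(self, logical_page):
        self.dram_table[logical_page] = self.next_phys
        self.next_phys += 1
        self.writes_since_flush += 1
        if self.writes_since_flush >= FLUSH_INTERVAL:
            self.nand_checkpoint = dict(self.dram_table)  # flush to "NAND"
            self.writes_since_flush = 0

    def mappings_at_risk(self):
        # Entries that would be lost or stale after a sudden power cut:
        # everything written since the last checkpoint.
        return {lp: pp for lp, pp in self.dram_table.items()
                if self.nand_checkpoint.get(lp) != pp}

ftl = ToyFTL()
for page in range(10):
    ftl.write(page)
# Checkpoints landed after the 4th and 8th writes, so only the last two
# mappings are exposed to a power cut.
print(sorted(ftl.mappings_at_risk()))  # [8, 9]
```

The point of PFM+ is that capacitors keep the controller powered just long enough to complete the in-flight table flush, so the "at risk" window never hits the NAND in a half-written state.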

That said, while drive bricking due to mapping table corruption has always been a concern, I don't think it has been significant enough to warrant physical power loss protection in all client SSDs. It makes sense for the Vector 180 given its high-end focus, as professional users are less tolerant of downtime, and it also grants OCZ some differentiation in the highly competitive client market.

Aside from PFM+, the other new thing OCZ is bringing to the market with the Vector 180 is a 960GB model. The higher capacity is enabled by the use of 128Gbit NAND, whereas in the past OCZ has only used 64Gbit dies in its products. It seems that Toshiba's switch to the 128Gbit die has been rather slow, as I have not seen many products with 128Gbit Toshiba NAND - perhaps there have been yield issues, or maybe Toshiba's partners are simply more willing to use the 64Gbit die for performance reasons (a higher capacity die always costs some performance due to reduced parallelism).
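The parallelism trade-off is simple arithmetic: at a fixed capacity, doubling the die density halves the number of dies the controller can interleave across. A quick back-of-the-envelope illustration (the 8-channel figure matches the Barefoot 3; the per-channel split is simplified):

```python
# At a fixed drive capacity, denser dies mean fewer dies to interleave,
# which reduces the parallelism the controller can exploit.

def dies(capacity_gb, die_gbit):
    die_gb = die_gbit / 8          # 64Gbit -> 8GB, 128Gbit -> 16GB per die
    return int(capacity_gb / die_gb)

channels = 8                        # Barefoot 3 is an 8-channel controller
for die_gbit in (64, 128):
    n = dies(960, die_gbit)
    print(f"{die_gbit}Gbit dies: {n} dies, {n // channels} per channel")
# 64Gbit dies: 120 dies, 15 per channel
# 128Gbit dies: 60 dies, 7 per channel
```

Either die works fine for interleaving at 960GB, but at smaller capacities (say, 120GB) the 128Gbit die would leave only seven or eight dies total, which is why the lower capacities stick with 64Gbit NAND.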

OCZ Vector 180 Specifications
Capacity                       120GB       240GB       480GB       960GB
Controller                     OCZ Barefoot 3 M00
NAND                           Toshiba A19nm MLC
NAND Density                   64Gbit per die (120GB-480GB), 128Gbit per die (960GB)
DRAM Cache                     512MB / 1GB
Sequential Read                550MB/s     550MB/s     550MB/s     550MB/s
Sequential Write               450MB/s     530MB/s     530MB/s     530MB/s
4KB Random Read                85K IOPS    95K IOPS    100K IOPS   100K IOPS
4KB Random Write               90K IOPS    90K IOPS    95K IOPS    95K IOPS
Steady-State 4KB Random Write  12K IOPS    20K IOPS    23K IOPS    20K IOPS
Idle Power                     0.85W
Max Power                      3.7W
Encryption                     AES-256
Endurance                      50GB/day for 5 years
Warranty                       Five years
MSRP                           $90         $150        $275        $500

The retail package includes a 3.5" desktop adapter and a license for Acronis True Image HD 2013 cloning software. Like some of OCZ's recent SSDs, the Vector 180 includes a 5-year ShieldPlus Warranty.

OCZ has two flavors of the Barefoot 3 controller and obviously the Vector 180 is using the faster M00 bin, which runs at 397MHz (whereas the M10 as used in the ARC 100 and Vertex 460(a) is clocked at 352MHz). 

OCZ's other SSDs have already made the switch to Toshiba's latest A19nm MLC, and with the Vector 180 the Vector series is the last of OCZ's lineups to make that jump. Given that Vector is OCZ's SATA 6Gbps flagship, the delay makes sense, since NAND endurance and performance tend to increase as the process matures.

The Vector 180 review is the second to be based on our new 2015 SSD Suite, and I suggest reading the introduction article (i.e. the Samsung SM951 review) for the full details. Due to several NDAs and travel, I unfortunately don't have many comparison drives yet, but I'm running tests non-stop to add more drives for more accurate conclusions.

AnandTech 2015 SSD Test System
CPU                 Intel Core i7-4770K running at 3.5GHz (Turbo & EIST enabled, C-states disabled)
Motherboard         ASUS Z97 Deluxe (BIOS 2205)
Chipset             Intel Z97
Chipset Drivers     Intel 10.0.24 + Intel RST
Memory              Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics            Intel HD Graphics 4600
Graphics Drivers
Desktop Resolution  1920 x 1080
OS                  Windows 8.1 x64
Comments

  • nils_ - Wednesday, March 25, 2015 - link

    It's an interesting concept (especially when the datacenter uses DC distribution instead of AC), but I don't know if I would be comfortable with batteries in everything. A capacitor holds less of a charge but doesn't deteriorate over time, and the only component that really needs to stay on is the drive (or the RAID controller, if you're into that).
  • nils_ - Wednesday, March 25, 2015 - link

    "I don't think it has been significant enough to warrant physical power loss protection for all client SSDs."

    If a drive reports a flush as complete, the operating system must be able to trust that the data has actually been written to the underlying device. Any drive that doesn't deliver this is quite simply defective by design. Back in the day this was already a problem with some IDE and SATA drives, which reported a write operation as complete once the data hit the drive cache. Just because something is rated consumer grade does not mean it should ship defective.

    Even worse is that instead of losing the last few writes you'll potentially lose all the data stored on the drive.

    If I don't care whether the data makes it to the drive I can solve that in software.
  • shodanshok - Wednesday, March 25, 2015 - link

    If a drive receives an ATA FLUSH command, it _will_ write to stable storage (HDD platters or NAND chips) before returning. For unimportant writes (the ones not marked with FUA or covered by an ATA FLUSH), the drive is allowed to store data in its cache and return _before_ the data hits the actual permanent storage.

    SSDs add another problem: by the very nature of MLC and TLC cells, data at rest (already committed to stable storage) is endangered by the partial page write effect. So PFM+ and the power loss protection on Crucial's consumer drives are _required_ for reliable use of the drive. Drives without at least partial power loss protection should use a write-through (read-only cache) approach at least for the NAND mapping table, or very frequent flushes of the mapping table (e.g. SanDisk).
  • mapesdhs - Wednesday, March 25, 2015 - link

    How do the 850 EVO & Pro deal with this scenario atm?

  • Oxford Guy - Wednesday, March 25, 2015 - link

    "That said, while drive bricking due to mapping table corruption has always been a concern, I don't think it has been significant enough to warrant physical power loss protection for all client SSDs."

    I see you never owned 240 GB Vertex 2 drives with 25nm NAND.
  • prasun - Wednesday, March 25, 2015 - link

    "PFM+ will protect data that has already been written to the NAND"

    They should be able to do this by scanning the NAND. The capacitor probably makes life easier, but with better firmware design it should not be necessary.

    With the capacitor, steady-state performance should be consistent, as they won't need to flush the mapping table to NAND regularly.

    Since this is also not the case, it points to bad firmware design.
  • marraco - Wednesday, March 25, 2015 - link

    I have a bricked Vertex 2 resting a meter away. It was so expensive that I can't bring myself to throw it in the trash.

    I will never buy another OCZ product, ever.

    OCZ refused to release the software needed to unbrick it. It's just a software problem. OCZ got my money, but refuses to make it work.

    Do NOT EVER buy anything from OCZ.
  • ocztosh - Wednesday, March 25, 2015 - link

    Hello Marraco, thank you for your feedback and sorry to hear that you had an issue with the Vertex 2. That particular drive was Sandforce based and there was no software to unbrick it unfortunately, nor did the previous organization have the source code for firmware. This was actually one of the reasons that drove the company to push to develop in-house controllers and firmware, so we could control these elements which ultimately impacts product design and support.

    Please do contact our support team and reference this thread. Even though this is a legacy product we would be more than happy to help and provide support. Thank you again for your comments and we look forward to supporting you.
  • mapesdhs - Wednesday, March 25, 2015 - link

    Indeed, the Vertex4 and Vector series are massively more reliable, but the OCZ haters
    ignore them entirely, focusing on the old Vertex2 series, etc. OCZ could have handled
    some of the support issues back then better, but the later products were more reliable
    anyway so it was much less of an issue. With the newer warranty structure, Toshiba
    ownership & NAND, etc., it's a very different company.

    Irony is, I have over two dozen Vertex2E units and they're all working fine (most are
    120s, with a sprinkling of 60s and 240s). One of them is an early 3.5" V2E 120GB,
    used in an SGI Fuel for several years, never a problem (recently replaced with a
    2.5" V2E 240GB).

    Btw ocztosh, I've been talking to some OCZ people recently about why certain models
    force a 3Gbit SAS controller to negotiate only a 1.5Gbit link when connected to a SATA3
    SSD. This occurs with the Vertex3/4, Vector, etc., whereas connecting the SATA2 V2E
    correctly results in a 3Gbit link. Note I've observed similar behaviour with other brands,
    ditto other SATA2 SSDs (eg. SF-based Corsair F60, 3Gbit link selected ok). The OCZ
    people I talked to said there's nothing they can do to fix whatever the issue might be,
    but what I'm interested in is why it happens; if I can find that out then maybe I can
    figure a workaround. I'm using LSI 1030-based PCIe cards, eg. SAS3442, SAS3800,
    SAS3041, etc. I'd welcome your thoughts on the issue. Would be nice to get a Vertex4
    running with a 3Gbit link in a Fuel, Tezro or Origin/Onyx.

    Note I've been using the Vertex4 as a replacement for ancient 1GB SCSI disks in
    Stoll/SIRIX systems used by textile manufacturers, works rather well. Despite the
    low bandwidth limit of FastSCSI2 (10MB/sec), it still cut the time for a full backup
    from 30 mins to just 6 mins (tens of thousands of small pattern files). Alas, with
    the Vertex4 no longer available, I switched to the Crucial M550 (since it does have
    proper PLP). I'd been hoping to use the V180 instead, but its lack of full PLP is an issue.

  • alacard - Wednesday, March 25, 2015 - link

    In my view the performance consistency results basically blow the lid off of OCZ and the reliability of their Barefoot controller. Despite the reporting from most outlets, drives based on this technology have for years suffered massive failure rates due to sudden power loss. Here we have definitive evidence of those flaws and the lengths OCZ is going to in order to work around them (note, I didn't say 'fix' them).

    The fact that they were willing to go to the extra cost of adding the power loss module, in addition to crippling the sustained performance of their flagship drive in order to flush the cache out of DRAM, speaks VOLUMES about how bad their reliability was before. You don't take such extreme, potentially kiss-of-death measures without a good boot up your ass pushing you headlong toward them. In this case said boot was constructed purely out of OCZ's fear that releasing yet ANOTHER poorly constructed drive would finally put their reputation out of its misery for good and kill any chance of future sales.

    OCZ has cornered themselves in a no-win scenario:

    1) They don't bother making the drive reliable, saving the cost of the power loss module and keeping the sustained speed of the Vector 180 high. The drive reviews well with no craters in performance, and the few customers OCZ has left buy another doomed Barefoot SSD that's practically guaranteed to brick on them within a few months. As a result they lose those customers for good, along with their company.

    2) They go to the expense of adding the power loss module and cripple the drive's performance to ensure it's reliable. The drive reviews horribly and no one buys it.

    This is their position. Kiss of death indeed.

    Ultimately, I think it speaks to how complicated controller development is, and that if you don't have a huge company with millions in R&D funding at your disposal, it's probably best not to throw your hat into that ring. It's a shame, but it seems to be the way high tech works. (Global oligopoly, here we come.)

    All things considered, it's nice that this is finally all out in the open.
