HighPoint Technologies has updated its NVMe RAID solutions with PCIe 4.0 support and adapter cards supporting up to eight NVMe drives. The new HighPoint SSD7500 series adapter cards are the PCIe 4.0 successors to the SSD7100 and SSD7200 series products. These cards are aimed primarily at the workstation market, as the server market has largely moved on from traditional RAID arrays, especially for NVMe SSDs, for which traditional hardware RAID controllers do not exist. HighPoint's PCIe gen4 lineup currently consists of cards with four or eight M.2 slots, plus one with eight SFF-8654 ports for connecting to U.2 SSDs. The company also recently added an 8x M.2 card to its PCIe gen3 family, with the Mac Pro specifically in mind as a popular workstation platform that won't be getting PCIe gen4 support anytime soon.

HighPoint's NVMe RAID is implemented as software RAID bundled with adapter cards featuring Broadcom/PLX PCIe switches. HighPoint provides RAID drivers and management utilities for Windows, macOS and Linux. Competing software NVMe RAID solutions like Intel RST or VROC achieve boot support by bundling a UEFI driver with the rest of the motherboard's firmware. HighPoint's recent 4-drive cards instead include their UEFI driver on an option ROM to provide boot support for Windows and Linux systems, and all of their cards allow booting from an SSD that is not part of a RAID array. HighPoint's NVMe RAID supports RAID 0/1/10 modes, but does not implement any parity RAID options.
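
To illustrate what a striped software RAID layer actually does under the hood, here is a minimal Python sketch of RAID 0 block mapping. This is a generic illustration of the striping technique, not HighPoint's proprietary driver code:

    # Minimal sketch of RAID 0 striping: map a logical byte offset to a
    # member drive. Generic illustration only -- not HighPoint's internals.
    STRIPE_SIZE = 128 * 1024  # bytes per stripe unit (a common default)

    def locate(logical_offset: int, num_drives: int):
        """Return (drive index, byte offset on that drive) for a logical offset."""
        stripe = logical_offset // STRIPE_SIZE
        within = logical_offset % STRIPE_SIZE
        drive = stripe % num_drives  # stripe units rotate across the drives
        return drive, (stripe // num_drives) * STRIPE_SIZE + within

    # A 1 MiB sequential read on a 4-drive array touches every drive twice,
    # which is why sequential throughput scales with drive count.
    for off in range(0, 1024 * 1024, STRIPE_SIZE):
        print(off, locate(off, num_drives=4))

RAID 1 and RAID 10 add mirroring on top of this mapping: writes are duplicated to a mirror pair, halving usable capacity in exchange for redundancy.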

HighPoint has also improved the cooling on their RAID cards. Putting several high-performance M.2 SSDs and a power-hungry PCIe switch on one card generally requires active cooling, and HighPoint's early NVMe RAID cards could be pretty noisy. Their newer heatsink design lets the cards benefit from airflow provided by case fans instead of relying solely on the card's own fan (two fans, on the 8x M.2 cards), and the fans they are now using are a bit larger and quieter.

In the PCIe 2.0 era, PLX PCIe switches were common on high-end consumer motherboards to provide multi-GPU connectivity. In the PCIe 3.0 era, the switches were priced for the server market and almost completely disappeared from consumer/enthusiast products. In the PCIe 4.0 era, it looks like prices have gone up again. These cards are the best way to attach lots of M.2 PCIe SSDs to mainstream consumer platforms that don't support the PCIe port bifurcation required by passive quad M.2 riser boards. Even so, the pricing makes it very unlikely that they'll ever see much use in systems less high-end than a Threadripper or Xeon workstation. That said, HighPoint has actually tested on the AMD X570 platform and achieved 20 GB/s throughput using Phison E16 SSDs, and almost 28 GB/s on an AMD EPYC platform (out of a theoretical limit of 31.5 GB/s). These numbers should improve a bit as faster, lower-latency PCIe 4.0 SSDs become available.
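
The 31.5 GB/s ceiling follows directly from the PCIe 4.0 link parameters, and is easy to verify with a quick back-of-the-envelope calculation:

    # Back-of-the-envelope check of the PCIe 4.0 x16 theoretical limit.
    RAW_RATE = 16e9       # 16 GT/s raw signaling rate per lane (PCIe 4.0)
    ENCODING = 128 / 130  # 128b/130b line encoding overhead
    LANES = 16

    per_lane = RAW_RATE * ENCODING / 8  # ~1.97 GB/s per lane
    link = per_lane * LANES / 1e9       # ~31.5 GB/s for an x16 link
    print(f"{link:.1f} GB/s")                              # -> 31.5 GB/s
    print(f"EPYC result: {28 / link:.0%} of theoretical")  # -> ~89%

Protocol overhead from TLP headers and flow control means real-world numbers will always land somewhat below that ceiling, so 28 GB/s is already a strong result.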

HighPoint NVMe RAID Adapters

Model               SSD7505        SSD7540        SSD7580        SSD7140
Host Interface      PCIe 4.0 x16   PCIe 4.0 x16   PCIe 4.0 x16   PCIe 3.0 x16
Downstream Ports    4x M.2         8x M.2         8x U.2         8x M.2
MSRP                $599           $999           $999           $699

Now that consumer M.2 NVMe SSDs are available in 4 TB and 8 TB capacities, these RAID products can accommodate up to 64 TB of storage at a much lower price per TB than using enterprise SSDs, and without requiring a system with U.2 drive bays. For tasks like audio and video editing workstations, that's an impressive amount of local storage capacity and throughput. The lower write endurance of consumer SSDs (even QLC drives) is generally less of a concern for workstations than for servers that are busy around the clock, and for many use cases having a capacity of tens of TB means the array as a whole has plenty of write endurance even if the individual drives have low DWPD ratings. Using consumer SSDs also means that peak performance is higher than for many enterprise SSDs, and a large RAID 0 array of consumer SSDs will have a total SLC cache size in the TB range.
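
To put the endurance argument in rough numbers (the per-drive capacity and DWPD figures below are illustrative assumptions, not the specs of any particular drive):

    # Rough endurance math for a large consumer-SSD array.
    # 8 TB per drive and 0.1 DWPD are illustrative assumptions only.
    drives = 8
    capacity_tb = 8  # per-drive capacity in TB
    dwpd = 0.1       # drive writes per day, typical of low-endurance QLC

    array_tb = drives * capacity_tb  # 64 TB raw in RAID 0
    budget = array_tb * dwpd         # sustainable TB written per day
    print(f"{array_tb} TB array, ~{budget:.1f} TB/day write budget")
    # -> 64 TB array, ~6.4 TB/day: far beyond a typical editing workload.

Because RAID 0 spreads writes evenly across all members, the array's daily write budget is effectively the sum of the individual drives' budgets.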

The SSD7140 (8x M.2, PCIe gen3) and the SSD7505 (4x M.2, PCIe gen4) have already hit the market, and the SSD7540 (8x M.2, PCIe gen4) is shipping this month. The SSD7580 (8x U.2, PCIe gen4) is planned to be available next month.

Comments

  • Unashamed_unoriginal_username_x86 - Friday, November 13, 2020 - link

    You're still writing at the same speed per drive (though for half as much useful data on mirrored arrays), so your drives will be utilized the same amount.
  • croc - Friday, November 13, 2020 - link

    The issue that I have with HighPoint is that their cards act more as aggregators than true controllers. They offer no RAID level that the CPU does not do natively. So, just what are they then controlling? And, given that they have to be aimed at the AMD market (whose software RAID includes MORE levels...), why would anyone buy them when ASUS and Gigabyte offer an aggregator for free? Maybe if they offered hardware RAID at levels 5 and 6... But only having levels 0, 1 and 10 is pretty limiting. Especially since you would get the same RAID levels just using AMD's software, with direct-to-CPU levels of performance.
  • Billy Tallis - Friday, November 13, 2020 - link

    Your complaint applies to almost all NVMe RAID solutions. There simply aren't hardware RAID controllers for NVMe that are comparable to what we're accustomed to seeing for SATA and SAS. What ASUS, Gigabyte and similar offer are passive riser cards that rely on the host motherboard to support bifurcation, and on the CPU/chipset vendor to provide a software RAID implementation that may or may not be any faster or more flexible than HighPoint's software RAID implementation. HighPoint's NVMe RAID cards get rid of both of those requirements, but in practice those two issues aren't often a deal-breaker in this price range. The 8-drive HighPoint cards do have a clearer advantage.
  • Hul8 - Sunday, November 15, 2020 - link

    Since this card uses a PLX chip, it should be possible to drive it with a PCIe x8 link and still retain use of all its M.2 slots. That's not possible with the passive cards: if you only have x8 lanes from the motherboard, even with bifurcation you get just 2 usable M.2 slots.
  • Hul8 - Sunday, November 15, 2020 - link

    Never mind - you can't bifurcate x16 into eight x4 links without a PLX chip.
  • saratoga4 - Saturday, November 14, 2020 - link

    >The issue that I have with HighPoint is that their cards act more as aggregators than true controllers.

    Hardware RAID died with SATA.
  • Duncan Macdonald - Friday, November 13, 2020 - link

    If you have enough PCIe lanes (e.g. on a Threadripper or EPYC platform), then the MUCH cheaper ASUS Hyper M.2 card is a better option: no expensive PCIe switch, just a fan to keep everything cool, and it's available from Amazon UK for £65.30. This is a passive card which directly connects up to 4 NVMe SSDs to a PCIe x16 socket; the CPU and motherboard must support splitting the x16 into 4 x4 connections.
  • bernstein - Friday, November 13, 2020 - link

    Was going to make the exact same point. The irony is that these are the only platforms the target market for these cards buys... so this is aimed at the niche of Xeon/EPYC/Threadripper buyers that still don't have enough PCIe lanes... which in the EPYC case is basically no one ;-)
  • TootsieNootan - Saturday, November 14, 2020 - link

    The Threadripper 3960X and higher have 72 PCIe lanes - plenty for RAID controllers. Even the Threadripper 2 series did fine on PCIe lanes. I had an older SSD7102 from HighPoint in my old machine and it did fine as well. I looked at the i9 X-series chips, which have more lanes than mainstream parts, but not nearly as many. The Xeon and EPYC CPUs have a lot of lanes, but I needed a workstation and not a server.
  • TootsieNootan - Saturday, November 14, 2020 - link

    One of the big factors to consider when running PCIe 4.0 M.2 drives is excessive heat. I looked at the ASUS card but was worried it had inadequate cooling: a fan and no heatsinks. The HighPoint is one giant heatsink with a fan. I went with HighPoint because I didn't want to risk expensive M.2 drives overheating and frying.
