The Samsung XP941

In June last year, Samsung announced that it was mass producing the industry's first native PCIe SSD: the XP941. In the enterprise world, native PCIe SSDs were nothing new, but in the consumer market every PCIe SSD up to that point had simply been several SATA SSDs behind a PCIe bridge. Samsung didn't reveal many details about the XP941; all we knew was that it was capable of speeds of up to 1,400MB/s and would be available in capacities of 128GB, 256GB and 512GB. Given that the XP941 was (and still is) an OEM-only product, that wasn't surprising. OEM clients often don't want the components they are buying to be put under a public microscope, as that may lead to confusion among their customers (e.g. why isn't the drive in my laptop performing as well as the drive in the review?).

Photography by Juha Kokkonen

The XP941 is available only in the M.2 2280 form factor. The four-digit code refers to the size of the drive in millimeters, and 2280 is currently the second largest size according to the M.2 standard. The first two digits give the width, which for M.2 SSDs is always 22mm (hence the 22 in the code), and the last two digits give the length, which in this case is 80mm. There are four common lengths for M.2 SSDs: 42mm, 60mm, 80mm and 110mm, and the purpose of the different lengths is to allow manufacturers to design drives for multiple uses.
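
For those who want the naming rule spelled out, here's a tiny illustrative sketch of how the code decodes; the helper function is my own and not something defined by the M.2 spec itself:

```python
# Illustrative only: decode an M.2 form factor code such as "2280"
# into its width and length in millimeters.
def decode_m2_code(code: str) -> tuple[int, int]:
    width = int(code[:2])    # first two digits: width in mm (22 for M.2 SSDs)
    length = int(code[2:])   # remaining digits: length in mm (42, 60, 80 or 110)
    return width, length

for code in ("2242", "2260", "2280", "22110"):
    width, length = decode_m2_code(code)
    print(f"M.2 {code}: {width}mm wide, {length}mm long")
```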

The reason for the variable lengths goes back to mSATA, where the industry had problems with the fixed size; that's why Apple, ASUS and others went with custom designs in order to fit more NAND packages on the PCB. I believe 80mm (i.e. 2280) will be the most popular form factor as it's capable of holding up to eight NAND packages, but drives aimed at tablets and other ultra-mobile devices may utilize the smaller 2242 and 2260 form factors.

As the XP941 isn't available at retail, Samsung isn't sampling it to the media. Fortunately an Australian Samsung OEM retailer, RamCity, was kind enough to send us a review sample. The SSD we received is the highest capacity 512GB model, so we should be able to reach the drive's maximum potential performance. RamCity actually sent us two 512GB drives, and I couldn't resist putting the two in a RAID 0 configuration to see what kind of throughput a pair of drives can offer.

RamCity also sent us a PCIe 3.0 x4 adapter for connecting the drive to our testbed. The adapter in question is a Lycom DT-120, which retails for around $25. There are other adapters on the market as well, such as Bplus' dual PCIe 2.0 x4 adapter, but the Lycom is the cheapest I've seen and is likely the best option for the average buyer.

Samsung SSD XP941 Specifications
Capacity: 128GB | 256GB | 512GB
Controller: Samsung S4LN053X01 (PCIe 2.0 x4)
NAND: Samsung 19nm MLC
Sequential Read: 1,000MB/s | 1,080MB/s | 1,170MB/s
Sequential Write: 450MB/s | 800MB/s | 950MB/s
4KB Random Read: 110K IOPS | 120K IOPS | 122K IOPS
4KB Random Write: 40K IOPS | 60K IOPS | 72K IOPS
Power (idle / active): 0.08W / 5.8W
Warranty: Three years (from RamCity)
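
To put the random IOPS ratings in the same units as the sequential figures, here's a quick back-of-the-envelope conversion (treating 4KB as 4,096 bytes and 1MB as 10^6 bytes; this is just spec sheet arithmetic, not measured performance):

```python
# Convert the spec table's 4KB random IOPS ratings into MB/s,
# assuming 4KB = 4,096 bytes and 1MB = 10^6 bytes.
IO_SIZE_BYTES = 4096

random_specs = {  # capacity: (random read IOPS, random write IOPS)
    "128GB": (110_000, 40_000),
    "256GB": (120_000, 60_000),
    "512GB": (122_000, 72_000),
}

for capacity, (read_iops, write_iops) in random_specs.items():
    read_mb_s = read_iops * IO_SIZE_BYTES / 1e6
    write_mb_s = write_iops * IO_SIZE_BYTES / 1e6
    print(f"{capacity}: ~{read_mb_s:.0f}MB/s random read, "
          f"~{write_mb_s:.0f}MB/s random write")
```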

The XP941 is based on Samsung's in-house PCIe controller. Samsung isn't willing to disclose any exact details of the controller but I would expect it to be quite similar to the MEX controller found in the 840 EVO, except it utilizes a PCIe interface instead of SATA. The controller supports up to four PCIe 2.0 lanes, so in practice it should be good for up to ~1560MB/s without playing with the PCIe clock settings (it's possible to overclock the PCIe interface for even higher bandwidths). In terms of the software interface, the XP941 is still AHCI based but Samsung does have an NVMe based SSD for the enterprise market. I would say that AHCI is a better solution for the consumer market because the state of NVMe drivers is still a bit of a question and the gains of NVMe are much more significant in the enterprise space (see here why NVMe matters).
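
For the curious, here's roughly where a figure like ~1,560MB/s comes from; the 78% protocol-efficiency factor below is my own rough assumption for packet and flow-control overhead, not a Samsung or PCI-SIG number:

```python
# Rough estimate of usable PCIe 2.0 x4 bandwidth.
# The protocol efficiency factor is an assumption, not a spec value.
LANES = 4
TRANSFER_RATE = 5.0e9        # PCIe 2.0: 5 GT/s per lane
ENCODING = 8 / 10            # 8b/10b line encoding
PROTOCOL_EFFICIENCY = 0.78   # assumed packet/flow-control overhead

raw_mb_s = LANES * TRANSFER_RATE * ENCODING / 8 / 1e6   # bits -> bytes -> MB/s
usable_mb_s = raw_mb_s * PROTOCOL_EFFICIENCY

print(f"Raw PCIe 2.0 x4 bandwidth:  {raw_mb_s:.0f}MB/s")     # ~2000MB/s
print(f"Estimated usable bandwidth: {usable_mb_s:.0f}MB/s")  # ~1560MB/s
```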

NAND-wise, the XP941 uses Samsung's own 64Gbit 19nm Toggle-Mode 2.0 MLC NAND. There are only four packages on the PCB, two on each side, which means we are dealing with 16-die packages. I'm not sure if Samsung has a proprietary technology for NAND die stacking, but it seems to be the only manufacturer packing more than eight dies into a single package. From what I have heard, other manufacturers don't want to go above eight dies for signal integrity reasons, which directly impacts performance, but Samsung doesn't seem to have issues with this.
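
The 16-die conclusion is just arithmetic, ignoring the usual GB/GiB and spare-area distinctions:

```python
# How the 16-die-per-package figure falls out of the numbers
# (ignoring GB vs GiB and spare-area distinctions).
DIE_CAPACITY_GBIT = 64
PACKAGES_ON_PCB = 4
DRIVE_CAPACITY_GB = 512

gb_per_die = DIE_CAPACITY_GBIT / 8                 # 64Gbit = 8GB per die
total_dies = DRIVE_CAPACITY_GB / gb_per_die        # 512GB / 8GB = 64 dies
dies_per_package = total_dies / PACKAGES_ON_PCB    # 64 / 4 = 16 dies per package

print(f"{gb_per_die:.0f}GB per die, {total_dies:.0f} dies in total, "
      f"{dies_per_package:.0f} dies per package")
```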

Again, no details such as the P/E cycle count are known, but I'm guessing the NAND is still rated at the same 3,000 P/E cycles as its 21nm counterpart. Samsung's datasheet doesn't list any specific endurance for the drive, though that makes sense since it's an OEM drive and the warranty will be provided for the whole product (e.g. a laptop) in which the drive is used.
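
To put that guess into perspective, here's a purely hypothetical endurance estimate for the 512GB model; the P/E cycle count is the guess above, and the write amplification and daily write volume are my own assumptions, not Samsung figures:

```python
# Hypothetical endurance estimate for the 512GB XP941.
# P/E cycles, write amplification and daily writes are all assumptions.
CAPACITY_GB = 512
PE_CYCLES = 3000              # guessed rating, same as Samsung's 21nm MLC
WRITE_AMPLIFICATION = 1.5     # assumed typical consumer workload
HOST_WRITES_GB_PER_DAY = 50   # assumed fairly heavy consumer usage

total_nand_writes_gb = CAPACITY_GB * PE_CYCLES
total_host_writes_gb = total_nand_writes_gb / WRITE_AMPLIFICATION
lifespan_years = total_host_writes_gb / HOST_WRITES_GB_PER_DAY / 365

print(f"~{total_host_writes_gb / 1000:.0f}TB of host writes, "
      f"or roughly {lifespan_years:.0f} years at {HOST_WRITES_GB_PER_DAY}GB/day")
```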

The XP941 shows up in Samsung's Magician software but it's not recognized as a Samsung SSD. This has been typical for Samsung's OEM SSDs: even though they sometimes share the same hardware and firmware as their retail counterparts, the Magician functionality is disabled. It makes sense because PC OEMs often source their drives from more than one SSD vendor, so it wouldn't be fair to buyers if one drive had additional functionality while another didn't.

The SSD in 2013 Mac Pro - courtesy of iFixit

Hardware-wise, the XP941 is the same drive that you can find inside the 2013 Macs, although most of the laptop versions are limited to a PCIe 2.0 x2 interface. However, the form factor and interface Apple uses are custom, and the larger form factor allows for eight NAND packages and thus support for a 1TB model.

Test System

CPU: Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled)
Motherboard: ASRock Z68 Pro3
Chipset: Intel Z68
Chipset Drivers: Intel 9.1.1.1015 + Intel RST 10.2
Memory: G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)
Video Card: Palit GeForce GTX 770 JetStream 2GB GDDR5 (1150MHz core clock; 3505MHz GDDR5 effective)
Video Drivers: NVIDIA GeForce 332.21 WHQL
Desktop Resolution: 1920 x 1080
OS: Windows 7 x64

Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit

110 Comments

  • McTeags - Thursday, May 15, 2014

    I think there is a spelling mistake in the first sentence. Did you mean SATA instead of PATA? I don't know all of the tech lingo so maybe I'm mistaken.
  • McTeags - Thursday, May 15, 2014

    Please disregard my comment. I googled it...
  • BMNify - Thursday, May 15, 2014

    sata-e [serial], sata [serial], pata [parallel], SCSI [several, and chainable to 15+ drives on one cable, we should have used that as generic], shugart - these are all drive interfaces and there are more too going back in the day....
  • metayoshi - Thursday, May 15, 2014

    "It's simply much faster to move electrons around a silicon chip than it is to rotate a heavy metal disk."

    While SSD performance blows HDDs out of the water, the quoted statement is technically not correct. If you take a single channel NAND part and put it up against today's mechanical HDDs, the HDD will probably blow the NAND part out of the water in everything except for random reads.

    What really kills HDD performance isn't just the rotational speed as much as it is the track-to-track seek + rotational latency of a random workload. A sequential workload will reduce the seek and rotational latency so much that the areal density of today's 5 TB HDDs will give you pretty good numbers. In a random workload, however, the next block of data you want to read is most likely on a different track, different platter, and different head. Now it has to seek the heads to the correct track, perform a head switch because only 1 head can be on at a time, and then wait for the rotation of the disk for that data block to be under the head.

    A NAND part with a low number of channels will give you pretty crappy performance. Just look at the NAND in smartphones and tablets of today, and in the SD cards and USB thumb drives of yesteryear. What really makes SSDs shine is that they have multiple NAND parts on these things, and that they stripe the data across a huge number of channels. Just think RAID 0 with HDDs, except this time, it's done by the SSD controller itself, so the motherboard only needs 1 SATA (or other, like PCIe) interface to the SSD. That really put SSDs on the map, and if a single NAND chip can do 40 MB/s writes, think about 16 of them doing it at the same time.

    So yes, there's no question that the main advantage of SSDs vs HDDs is an electrical vs mechanical thing. It's just simply not true that reading the electrical signals off of a single NAND part is faster than reading the bits off of a sequential track in an HDD. It's a lot of different things working together.
  • iwod - Friday, May 16, 2014

    I skim read it. A few things I notice: no power usage testing. But 0.05W idle is pretty amazing. Since the PCI-E slot supplies the power as well, I guess it could be much better fine grained? Although active was 5.6W. So at the same time we want more performance == a faster controller while using much lower power; it seems there could be more work to do.

    I wonder if the relatively slow random I/O was due to Samsung betting its use on NVMe instead of AHCI.
  • iwod - Friday, May 16, 2014

    It also proves my point about random I/O. We see how the random I/O of the XP941 sits at the bottom of the chart while it still gets much better benchmark results. Seq I/O matters! And it matters a lot. The PCI-E x4 interface will once again become a bottleneck until we move to PCI-E 3.0, which I hope we do in 2015.
    Although I have this suspicious feeling Intel is delaying or slowing down our progression.
  • nevertell - Friday, May 16, 2014

    Can't you place the bootloader on a hard drive, yet have it load the OS up from the SSD?
  • rxzlmn - Friday, May 16, 2014

    'Boot Support: Mac? Yes. PC? Mostly No.'

    Uh, a Mac is a PC. On a serious tech site I don't expect lines like that.
  • Penti - Friday, May 16, 2014

    Firmware differences.
  • Haravikk - Friday, May 16, 2014

    It still surprises me that PCs can have so many hurdles when it comes to booting from various devices; for years now Macs have been able to boot from just about anything you plug into them (that can store data of course). I have one machine already that uses an internal drive combined with an external USB drive as a Fusion Drive, and it not only boots just fine, but the Fusion setup really helps eliminate the USB performance issues.

    Anyway, it's good to see PCIe storage properly reaching general release; it's probably going to be a while before I adopt it on PCs, as I'm still finding regular SATA or M.2 flash storage runs just fine for my needs, but having tried Apple's new Mac Pros, the PCIe flash really is awesome. Hopefully the next generation of Mac Pros will have connectors for two, as combined in a RAID-0 or RAID-1 the read performance can be absolutely staggering.
