Intel today announced the availability of their long-awaited Optane DIMMs, bringing 3D XPoint memory onto the DDR4 memory bus. The modules that have been known under the Apache Pass codename will be branded as Optane DC Persistent Memory, to contrast with Optane DC SSDs, and not to be confused with the consumer-oriented Optane Memory caching SSDs.

The new Optane DC Persistent Memory modules will be initially available in three capacities: 128GB, 256GB and 512GB per module. This implies that they are probably still based on the same 128Gb 3D XPoint memory dies used in all other Optane products so far. The modules are pin-compatible with standard DDR4 DIMMs and will be supported by the next generation of Intel's Xeon server platforms.

The Optane DC Persistent Memory modules Intel is currently showing off have heatspreaders covering the interesting bits, but they appear to feature ten packages of 3D XPoint memory. This suggests that the 512GB module features a raw capacity of 640GB and that Optane DC Persistent Memory DIMMs have twice the error correction overhead of ECC DRAM modules.
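For reference, the arithmetic behind that estimate (our own, assuming all ten packages hold the same amount of 3D XPoint): exposing 512GB of user capacity from ten packages means 512GB × 10/8 = 640GB raw, or 25% overhead, versus the 12.5% overhead (72 bits stored per 64 bits of data) on a standard ECC DIMM.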

The Optane DC Persistent Memory modules are currently sampling and will be shipping for revenue later this year, but only to select customers. Broad availability is planned for 2019. In a similar strategy to how Intel brought Optane SSDs to market, Intel will be offering remote access to systems equipped with Optane DC Persistent Memory so that developers can prepare their software to make full use of the new memory. Intel is currently taking applications for access to this program. The preview systems will feature 192GB of DRAM and 1TB of Optane Persistent Memory, plus SATA and NVMe SSDs. The preview program will run from June through August. Participants will be required to keep their findings secret until Intel gives permission for publication.

Intel is not officially disclosing whether it will be possible to mix and match DRAM and Optane Persistent Memory on the same memory controller channel, but the 192GB DRAM capacity for the development preview systems indicates that they are equipped with a 16GB DRAM DIMM on every memory channel. Also not disclosed in today's briefing: power consumption, clock speeds, specific endurance ratings, and whether Optane DC Persistent Memory will be supported across the Xeon product line or only on certain SKUs. Intel did vaguely promise that Optane DIMMs will be operational for the normal lifetime of a DIMM, but we don't know what assumptions Intel is making about workload.

Intel has been laying the groundwork for application-level persistent memory support for years through their open-source Persistent Memory Development Kit (PMDK) project, known until recently as NVM Library. This project implements the SNIA NVM Programming Model, an industry standard for the abstract interface between applications and operating systems that provide access to persistent memory. The PMDK project currently includes libraries to support several usage models, such as a transactional object store or log storage. These libraries build on top of existing DAX capabilities in Windows and Linux for direct memory-mapped access to files residing on persistent memory devices.
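As a rough illustration of that programming model, below is a minimal sketch using PMDK's low-level libpmem library. The file path, sizes, and build command are our own assumptions, and the higher-level PMDK libraries (such as libpmemobj's transactional object store) layer on top of this same DAX mapping.

    /*
     * Minimal sketch of DAX-style persistent memory access with PMDK's libpmem.
     * Assumes PMDK is installed and a DAX-capable filesystem is mounted at
     * /mnt/pmem (hypothetical path). Build with: cc pmem_example.c -lpmem
     */
    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    #define REGION_SIZE (4 * 1024 * 1024)   /* 4MiB example region */

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        /* Create (or open) a file on the persistent memory filesystem and map
         * it directly into our address space -- no page cache in the data path. */
        char *addr = pmem_map_file("/mnt/pmem/example", REGION_SIZE,
                                   PMEM_FILE_CREATE, 0666,
                                   &mapped_len, &is_pmem);
        if (addr == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        /* Ordinary stores write straight into the mapped region... */
        strcpy(addr, "hello, persistent memory");

        /* ...but the data is only guaranteed durable after an explicit flush.
         * On real persistent memory pmem_persist() uses CPU cache-flush
         * instructions; on ordinary mmap'd storage we fall back to msync(). */
        if (is_pmem)
            pmem_persist(addr, strlen(addr) + 1);
        else
            pmem_msync(addr, strlen(addr) + 1);

        pmem_unmap(addr, mapped_len);
        return 0;
    }

After a reboot, mapping the same file again would find the string still in place, which is the property the persistent memory programming model is built around.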

Optane SSD Endurance Boost

The enterprise Optane SSD DC P4800X initially launched with a write endurance rating of 30 drive writes per day (DWPD) for three years, and when it hit widespread availability Intel extended that to 30 DWPD for five years. Intel is now preparing to introduce new Optane SSDs with a 60 DWPD rating, still based on first-generation 3D XPoint memory. Another endurance rating increase isn't too surprising: Intel has been accumulating real-world reliability data on their 3D XPoint memory, and they have been under some pressure from competitors like Samsung's Z-NAND, which also offers 30 DWPD using more conventional flash-based memory.
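To put those ratings in perspective (our arithmetic, using the P4800X's 750GB capacity point as an example): 30 DWPD for five years works out to roughly 750GB × 30 × 365 × 5 ≈ 41PB of total writes, and a 60 DWPD rating over the same period would double that to about 82PB.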

Comments (73)

  • CaptCalamity - Wednesday, May 30, 2018 - link

    Strike "With this,"
  • tomatotree - Thursday, May 31, 2018 - link

    Honestly, where I see this going, "bootable" could become a thing of the past. If your working memory never loses data, then when you power cycle you should still be in the exact same state as you left it, with the same apps running. No need to reinitialize everything -- that's an artifact of losing RAM data.
  • tmbm50 - Wednesday, May 30, 2018 - link

    I wonder how latency compares to existing nvdimms?

    Several companies offer DDR4 dimms with an external battery backup and flash backup.

    For example, the motherboard used on NetApp filers has 2 DIMM sockets per CPU that are battery backed up and used to log/journal disk writes. Intel Xeon chipsets already support this. It does not go through the PCIe bus, though some vendors do make NVDIMM PCIe add-on cards.

    While not truly persistent, you can get days of standby on a battery and flush the contents to a flash drive for long-term lights-out.

    Seems it would be way faster (true DIMM speed) and might be cost competitive, using commodity DIMMs instead of paying Optane premiums.

    This article suggests pricing between Optane SSDs and DDR4, but DDR4 pricing really varies.

    Just makes me wonder if the folks that need super fast persistent storage already have a faster option than Optane, and whether the price difference will scale down enough relative to the performance loss (compared to NVDIMMs).
  • Peter2k - Wednesday, May 30, 2018 - link

    I would like to think that part of the reason for Optane's existence is the fact that you can have a lot more GB per stick than traditional RAM
    Not sure how interesting "persistent" memory is in the equation
    I'm sure there are some use cases (off the top of my head, none really, not when you have to sacrifice RAM speed/latency just to get persistence)
    Servers aren't exactly shut down usually

    Having several TB of "RAM" might be more useful, even if it's slower than normal RAM, as long as you don't have to access any drives
    Maybe
  • invasmani - Wednesday, May 30, 2018 - link

    That part is easy enough to understand, but you could just use RAM to cache traditional storage with things like AMD's SenseMI, Samsung's Magician Magic, SuperCache/SuperVolume, or FancyCache, and they are way more cost effective and probably quicker alternatives anyway. It doesn't take much actual DRAM to cache other storage with the right software for massive performance gains, and it scales pretty linearly with RAM bandwidth as well, meaning quad/octa channel is even more insane, especially with faster/higher-quality DRAM.
  • peevee - Friday, June 1, 2018 - link

    "but you could just use RAM to cache traditional storage with things like AMD's SenseMI, Samsung's Magician Magic, SuperCache/SuperVolume, or FancyCache "

    Which all make zero sense with RAM, because every OS since time immemorial (even DOS with the right driver) already uses RAM to cache traditional disks.
  • invasmani - Wednesday, May 30, 2018 - link

    It's a big maybe, but perhaps for certain workloads. Though why not just use DRAM to cache a mechanical HDD, SSD, or NVMe drive? If you're really limited by storage speed and need massive storage density, I'd think EPYC and SuperCache/FancyCache would be the clear winner.
  • invasmani - Wednesday, May 30, 2018 - link

    Yeah, I've been wondering this as well. In any case, why would I want to lose DRAM DIMM slots in the first place? Moreover, if you also have to reduce DRAM speed to that of Optane or NVDIMM, that's a huge negative as well. Personally I like the SenseMI approach of using a tiny % of RAM to cache traditional storage and greatly increase its performance, in a very cost-effective manner by contrast.
  • tomatotree - Thursday, May 31, 2018 - link

    NVDIMMs are on the market and are indeed faster since they're just DRAM when powered, but they're *extremely* expensive. Even regular DRAM is very expensive compared to optane, especially if you need a lot in one server, since that usually means adding more CPUs as well. Just being able to get 512G in a single DIMM is a huge advantage.
  • eastcoast_pete - Wednesday, May 30, 2018 - link

    If I read this and the companion live blog correctly, the real use scenario targeted here is very large, high-availability databases. Intel used Cassandra (and HANA) in their presentation. Somebody running SAP's HANA might want to take this for a spin, once Tier-1 OEMs have systems ready. Having your precious database in a non-volatile
