Everspin recently announced they have begun pilot production of their 1Gb STT-MRAM (Spin-transfer Torque Magnetoresistive RAM) nonvolatile memory, after shipping the first pre-production samples in December. The new MRAM parts are fabbed on a GlobalFoundries 28nm process and are a significant advance in density and capacity compared to their current 40nm 256Mb parts. Production will be ramping up through the second half of this year.

The new parts offer 8-bit or 16-bit DDR4 interfaces at 1333 MT/s (667 MHz), but as with the older DDR3-based MRAM components, timing differences mean they're not necessarily a drop-in replacement for DRAM. Low capacities have kept discrete MRAM components largely confined to embedded systems, where SoCs and ASICs can more easily be designed with compatible DDR controllers. The new 1Gb capacity will significantly broaden the appeal of MRAM, but Everspin still has a long way to go before catching up to the density of DRAM. We don't expect to see many MRAM-only storage devices (SSDs or NVDIMMs) yet, but we will probably see more designs along the lines of the hybrid SSDs that still use NAND flash as the primary storage medium while replacing some or all of the capacitor-backed DRAM with MRAM, such as IBM's FlashCore Module introduced last year and a Seagate prototype shown at FMS 2017.
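
For a quick sense of scale, the peak interface bandwidth works out to roughly 1.3 GB/s for an x8 device and 2.7 GB/s for an x16 device. A minimal back-of-envelope sketch (interface peak only, ignoring protocol overhead and the timing caveats above):

```python
# Back-of-envelope peak bandwidth for a DDR4-style interface:
# transfers per second times bus width in bytes. Interface peak only;
# protocol overhead and real-world timings are ignored.

def peak_bandwidth_mb_s(transfer_rate_mt_s: float, bus_width_bits: int) -> float:
    """Peak interface bandwidth in MB/s (decimal megabytes)."""
    return transfer_rate_mt_s * (bus_width_bits / 8)

for width in (8, 16):
    print(f"x{width}: {peak_bandwidth_mb_s(1333, width):.0f} MB/s peak")
# x8: 1333 MB/s peak
# x16: 2666 MB/s peak
```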

Everspin isn't the only company working on MRAM technology, but they are the only supplier of discrete MRAM parts, as opposed to embedded on-die MRAM for ASICs. This is the second generation of discrete MRAM parts Everspin has produced in partnership with GlobalFoundries, and they also have embedded MRAM on the roadmap for GloFo's 22nm FD-SOI process. Since GlobalFoundries cancelled its plans for 7nm and smaller nodes, specialized processes incorporating embedded memories like Everspin's MRAM have become crucial to the foundry's future.

Source: Everspin

19 Comments

  • nandnandnand - Monday, June 24, 2019 - link

    So it's non-volatile, but less dense and slower than DRAM.

    We could use something that is non-volatile, near DRAM speeds, and as dense as or denser than NAND. Then we can have universal memory.
  • SaberKOG91 - Monday, June 24, 2019 - link

    Which is literally the goal of all of these new memory technologies: to become Storage Class Memories, with all of the performance of DRAM but without the volatility.

    Most technologies are still a lower density than DRAM because DRAM is so simple. Each bit is a single MOS transistor tied to a reasonable capacitance. CMOS is insanely mature from a process standpoint, unlike all of these new devices like MRAM, memristors, and phase-change memories. As all of these technologies catch up to CMOS, they will definitely exceed the density of DRAM per unit area. For now though, DRAM density scales easily with each new process node and we really haven't taken advantage of 3D dies.
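
    As a rough illustration of how cell area translates into density, here is a minimal sketch using ballpark cell sizes in F² (multiples of the squared process feature size); the figures are assumptions for illustration, not Everspin's or any vendor's numbers, and peripheral circuitry is ignored:

```python
# Illustrative array-density comparison from cell area, expressed in F^2
# (multiples of the squared process feature size). Cell sizes below are
# ballpark assumptions for illustration, not vendor-specific figures,
# and peripheral circuitry is ignored.

FEATURE_NM = 28  # assumed process feature size for this sketch

cell_area_f2 = {
    "DRAM (~6 F^2)": 6,
    "STT-MRAM (~30 F^2, assumed)": 30,
}

for name, area_f2 in cell_area_f2.items():
    cell_area_nm2 = area_f2 * FEATURE_NM ** 2
    bits_per_mm2 = 1e12 / cell_area_nm2   # 1 mm^2 = 1e12 nm^2
    print(f"{name}: ~{bits_per_mm2 / 1e6:.0f} Mbit per mm^2 of cell array")
```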
  • rpg1966 - Monday, June 24, 2019 - link

    I don't know much about these new memories (obviously), but how will they exceed the density of DRAM, given that (as you note) DRAM is such a simple structure?
  • SaberKOG91 - Tuesday, June 25, 2019 - link

    The building blocks are physically smaller than DRAM cells on the same node. Right now it's more of an issue of how to reliably improve the matching between these new elements. We are very good at doing this with CMOS devices, but new tech is still behind.

    There are also other interesting aspects of these new devices that allow them to store multiple bits per cell. Memristors have so far been shown to have dozens of states, which makes it easier to consider 4+ bits per cell. They are also a lot easier to design crossbar structures for, which means that 3D storage can be accomplished without expensive die-stacking. This is not dissimilar to 3D NAND devices, just at higher densities and theoretically lower power.
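
    For reference, the usable bits per cell is just the base-2 logarithm of the number of reliably distinguishable states, rounded down; a quick sketch:

```python
# Bits per cell as a function of reliably distinguishable cell states,
# e.g. 16 separable resistance levels can encode 4 bits.
import math

for states in (2, 4, 16, 32, 64):
    print(f"{states:>2} states -> {int(math.log2(states))} bits per cell")
```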
  • rpg1966 - Tuesday, June 25, 2019 - link

    Excellent, thank you.
  • FunBunny2 - Tuesday, June 25, 2019 - link

    "The building blocks are physically smaller than DRAM cells on the same node. "

    Not just that, when/if we get there, but also that by eliminating one or more of the hops to 'cold' storage you eliminate other devices, their power, their controllers, etc. Also, for apps that use controlled data storage, e.g. an RDBMS, many (most?) of those apps could run on-line without all of those caches to manage. Periodic backup to SSD/HDD/CDROM/tape will still be part of the protocol, of course.

    The biggest hurdle for system building will be extending CPU/motherboard address space support all the way to 64 bits. Imagine DB2 running on a single-level datastore: just what Codd had in mind 50 years ago.
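
    For scale, a flat 64-bit byte-addressable space covers 16 EiB, far more than any single-level datastore we could populate today; a trivial calculation:

```python
# Size of a flat 64-bit byte-addressable space (the single-level
# datastore scenario described above).
addressable_bytes = 2 ** 64
print(f"{addressable_bytes:,} bytes = {addressable_bytes / 2**60:.0f} EiB")
# 18,446,744,073,709,551,616 bytes = 16 EiB
```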
  • sonicmerlin - Tuesday, June 25, 2019 - link

    Then why are they still selling 4 or 8 GB RAM modules while SSDs have scaled to multiple terabytes?
  • SaberKOG91 - Wednesday, June 26, 2019 - link

    Because FLASH on its own isn't fast enough and its latency is too high. Most of the current generation of NVDIMMs (excluding 3D XPoint) use DRAM most of the time and then dump that data to FLASH on a power loss. They serve to prevent data loss, not to speed up workloads. Everything else is still catching up to FLASH in density and cost.
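
    A minimal toy sketch of that NVDIMM-N style behavior (illustrative only, not any vendor's actual controller logic):

```python
# Toy model of NVDIMM-N style behavior: serve all normal traffic from
# DRAM, and only copy DRAM contents to flash when power is lost.
# Illustrative only; real modules do this in hardware/firmware.

class NvdimmN:
    def __init__(self, size: int):
        self.dram = bytearray(size)   # fast, volatile working memory
        self.flash = bytearray(size)  # slow, non-volatile backup

    def read(self, addr: int) -> int:
        return self.dram[addr]        # reads never touch flash

    def write(self, addr: int, value: int) -> None:
        self.dram[addr] = value       # writes never touch flash

    def on_power_loss(self) -> None:
        # A supercap/battery keeps the module alive just long enough
        # to dump the DRAM image into flash.
        self.flash[:] = self.dram

    def on_power_restore(self) -> None:
        # Restore the saved image so software sees its old data.
        self.dram[:] = self.flash
```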
  • sonicmerlin - Friday, June 28, 2019 - link

    But DRAM has existed for decades; why hasn't it scaled up over the last several years at the pace of Moore's Law?
  • Santoval - Tuesday, June 25, 2019 - link

    "As all of these technologies catch up to CMOS, they will definitely exceed the density of DRAM per unit area."
    3D XPoint is a type of phase-change memory (or, according to others, a ReRAM memory) and due to the way it's designed it can have 4 times the density of DRAM at the same node. It's nowhere close in latency though. High density alone is not enough to displace DRAM. High density, DRAM-rivaling latency *and* comparable cost are required for that (non-volatility as well, of course, which all of these new kinds of memory tend to have).
