66 Comments

  • tspacie - Friday, August 26, 2016 - link

    Did they say how the "software defined cache hierarchy" was different from a paging file and OS virtual memory ?
  • Omoronovo - Friday, August 26, 2016 - link

    The difference is in abstraction. Paging files or swap space are explicitly advertised to software; software which knows there's 8GB of ram and an 8GB swap area/page file will likely modify its behaviour to fit within the RAM envelope instead of relying on the effective (16gb) working space.

    The idea of the software defined cache hierarchy is that everything below the hypervisor/driver layer only sees one contiguous space. It's completely transparent to the application. In the above example, that would mean the application would see 8GB+*size of xpoint* as ram, then whatever else on top as paging area.
  • saratoga4 - Friday, August 26, 2016 - link

    >The difference is in abstraction. Paging files or swap space are explicitly advertised to software; software which knows there's 8GB of ram and an 8GB swap area/page file will likely modify its behaviour to fit within the RAM envelope instead of relying on the effective (16gb) working space.

    While you can of course query the operating system to figure out how much physical memory it has, for all intents and purposes the page file and physical memory form one unified continuous memory space (literally, that is what the virtual address space is for). Where a page resides (physical memory or page file) is NOT advertised to software.

    >The idea of the software defined cache hierarchy is that everything below the hypervisor/driver layer only sees one contiguous space. It's completely transparent to the application.

    Which is how virtual memory works right now. The question is what is actually different.
  • Omoronovo - Friday, August 26, 2016 - link

    The virtual address space - which has multiple definitions depending on your software environment, so let's assume Windows - is limited by the bit depth of the processor and has nothing directly to do with Virtual Memory. Applications that have any kind of realistic performance requirement - such as those likely to be used in early adopter industries of this technology - most certainly need to know how much physical memory is installed.

    The abstraction (and the point I was trying to make, apologies for any lack of clarity) is that right now an application can know exactly how much physical address space there is, simply by querying for it. That same query in a system with 8GB of ram and a theoretical 20GB Xpoint device configured as in the article will return a value of 28GB of ram to the application; the application does not know it's 8GB+20, and will therefore treat it completely the same as if there were 28GB of physical ram. This sits above Virtual Memory in the system hierarchy, so any paging/swap space in addition to this will be available as well, and treated as such by the application.

    The primary benefit to this approach is that no applications need to be rewritten to take advantage of the potentially large performance improvements from having a larger working set outside of traditional storage - even if it isn't quite as good as having that entire amount as actual physical RAM.

    Hopefully that makes more sense.
  • saratoga4 - Friday, August 26, 2016 - link

    >The virtual address space - which has multiple definitions depending on your software environment, so let's assume Windows - is limited by the bit depth of the processor and has nothing directly to do with Virtual Memory.

    Actually virtual address space and virtual memory are the same thing. Take a look at the wikipedia article.

    > Applications that have any kind of realistic performance requirement - such as those likely to be used in early adopter industries of this technology - most certainly need to know how much physical memory is installed.

    This is actually not how software works on Windows. You can query how much physical memory you have, but it is quite rare.
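
    If you really do want the split, something like the untested sketch below is about the extent of what the Win32 API offers - physical RAM and the commit limit are a separate, explicit query, not something an app normally sees for its own pages:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            MEMORYSTATUSEX ms;
            ms.dwLength = sizeof(ms);            /* must be set before the call */
            if (GlobalMemoryStatusEx(&ms)) {
                /* Physical RAM and the commit limit (RAM + page file) are reported
                   separately, but nothing here tells a program which of its own
                   pages currently live where. */
                printf("Physical RAM : %llu MiB\n", ms.ullTotalPhys / (1024 * 1024));
                printf("Commit limit : %llu MiB\n", ms.ullTotalPageFile / (1024 * 1024));
            }
            return 0;
        }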

    Dumb question: have you ever done any Windows programming? I get the impression that you are not aware of how VirtualAlloc works on Windows...

    >The primary benefit to this approach is that no applications need to be rewritten to take advantage of the potentially large performance improvements from having a larger working set outside of traditional storage - even if it isn't quite as good as having that entire amount as actual physical RAM.

    This is actually how Windows has worked for the last several decades. Pages can be backed by different types of memory or none at all, and software has no idea at all about it. What you are describing isn't above or below virtual memory, it is EXACTLY virtual memory :)
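
    To put it in code terms, a rough, illustrative VirtualAlloc sketch like the one below commits address space, and whether those pages end up in RAM, the page file, or (presumably, in the future) something like XPoint is entirely the kernel's business - the program never finds out:

        #include <windows.h>

        int main(void)
        {
            /* Commit 1 GiB of address space. This is charged against the commit
               limit (RAM + page file); the program is never told which backing
               store any particular page ends up in. */
            SIZE_T size = (SIZE_T)1 << 30;
            char *p = (char *)VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
            if (p != NULL) {
                p[0] = 42;                       /* touching a page is what faults it into physical RAM */
                VirtualFree(p, 0, MEM_RELEASE);  /* size must be 0 with MEM_RELEASE */
            }
            return 0;
        }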

    This is what the original question was asking, what is changing vs the current system where we have different types of memory hidden from software using a virtual address space.
  • dave_s25 - Friday, August 26, 2016 - link

    Exactly. Windows even has a catchy marketing term for "Software Defined Cache": ReadyBoost. Linux, the BSDs, and all major server OSes can do the exact same thing with the virtual page file.
  • Omoronovo - Friday, August 26, 2016 - link

    Sorry guys, that's about as well as I can describe it since I'm a hardware engineer, not a software engineer, and this is how it was described to me at a high level when I was at Computex earlier this year.

    I'm unlikely to be able to describe it with enough specificity to satisfy you, but hopefully Intel themselves can do a better job of explaining it when the stack is finalised; until then a lot of us are having to make a few assumptions (including me, and I should have made this clearer in my first post).

    Of particular interest is hypervisors being able to provision substantially more RAM than the system physically contains without the VM contention that happens with current memory overcommitment technologies like VMware's. A standard virtual memory system cannot handle this kind of situation with any kind of granularity; deduplication can only take you so far in a multi-platform environment.

    Exciting times ahead for everyone. I'm just really glad the storage market isn't going to repeat the decades of stagnation we had with hard disks, now that SSDs are mainstream.
  • jab701 - Saturday, August 27, 2016 - link

    Hi,

    I have been reading this thread and I thought I might answer, as I have experience designing hardware that works in virtual memory environments.

    Personally, I think this Optane stuff fits nicely into the current virtual memory system. Programs don't know (or care) where their data is stored, provided that when they load from/store to memory the data is where it is meant to be.

    Optane could behave like an extra level of the cache hierarchy, or just like extra memory. The OS will have to know what to do with it: #1, if it is like a cache then it needs to ensure that it is managed properly and flushed as required; #2, if it is like normal memory then the OS can map virtual pages to the physical memory address space where Optane sits, depending on what is required.

    Situation #1 is more suited to a device that looks like a drive, where the whole drive isn't completely IO-mapped and is accessed through "ports". #2 is suited to the situation where the device appears as memory usable by the OS.
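
    To make #2 a bit more concrete from the software side: the end result for an application would look roughly like memory-mapping a region of the device and then using ordinary loads and stores on it, skipping the block-I/O path entirely. A minimal Linux-flavoured sketch (the path below is made up; the real plumbing depends on how the device ends up being exposed):

        #include <fcntl.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            /* Hypothetical file on a persistent-memory-aware mount, standing in
               for an Optane-style region exposed as memory (scenario #2). */
            int fd = open("/mnt/pmem/region0", O_RDWR);
            if (fd < 0)
                return 1;

            size_t len = 1 << 20;  /* map 1 MiB of it */
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) {
                close(fd);
                return 1;
            }

            /* From here on, plain loads/stores hit the mapping directly;
               no read()/write() block-I/O calls are involved. */
            memset(p, 0xAB, len);

            munmap(p, len);
            close(fd);
            return 0;
        }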

    A software-defined cache hierarchy would mean the device is like ReadyBoost... but the cache is managed by the OS and not transparently by the hardware...

    Sorry a bit rambley :)
  • Klimax - Saturday, August 27, 2016 - link

    Sorry, but Virtual Address Space is NOT Virtual Memory. See:
    https://blogs.msdn.microsoft.com/oldnewthing/20040...
  • Omoronovo - Saturday, August 27, 2016 - link

    This is what I was referring to in a broader context, but since I'm not any kind of software engineer (beyond implementing it of course), I didn't feel justified in trying to argue the point with someone who actually does it for a living. Thanks for clarifying. For what it's worth, I don't think much of what I said was too far off base, but when Intel releases the implementation details we'll be able to know for sure. Hopefully sooner rather than later, I doubt Western Digital or Sandisk/HP are likely to stay quiet about their own SCM products for long.
  • bcronce - Friday, August 26, 2016 - link

    I assume it's similar, but a big issue with really high-bandwidth, low-latency swap files is that talking to a block device has to go through many, many IO-related abstraction layers. If this new "Optane" interface can remove most of the layers and mostly let the OS talk to the device as if it were just another bank of memory, that's a big win. Probably similar to how PAE works on 32-bit CPUs.
  • JoeyJoJo123 - Friday, August 26, 2016 - link

    "Facebook and Intel are collaborating closely on Intel Optane technology..."

    That's all I needed to read. Never buying this crap. Facebook destroyed Oculus Rift and its goal of being an open VR platform. They'll destroy XPoint, as well.
  • Omoronovo - Friday, August 26, 2016 - link

    That's an incredibly short-sighted (and, frankly, incorrect) view to have. Facebook has been a major force in the open hardware industry: they founded the Open Compute Project, open-sourced their server designs, and welcomed improvements and input from other companies.

    That Intel is collaborating with them - arguably on one of the largest hot data sets in the world - simply means that these devices are going to have far more useful real-world performance tuning before release instead of being optimised for marketing materials.
  • JoeyJoJo123 - Friday, August 26, 2016 - link

    Let's be real here, Facebook isn't winning any awards for humanism or receiving Nobel Peace Prizes for their outstanding dedication to user privacy. They're just about as evil as it gets for a corporation.

    Secondly, I am not incorrect, for two reasons: 1) My opinion that I dislike Facebook and everything they stand for cannot be incorrect, as it is just an opinion. 2) It is a fact that Facebook has altered Oculus Rift from being an open platform to being a closed platform.

    Oculus Rift was intended to be an open PC VR platform. They were bought out by Facebook. Then, behind closed doors, they offered large sums of money to VR game developers (such as Croteam and their Serious Sam IP) to develop their games exclusively for Oculus Rift (with no support for HTC's Vive or Razer's OSVR). Based Croteam turned down these offers so as to create open games for the VR community. This underhandedness was not a part of the Oculus Rift platform until Facebook stepped in. The move was done for business (i.e. $$$) so as to lock people into the Oculus Rift platform and reap the benefit of future sales of VR headsets and Oculus Rift-funded VR games.

    See: https://duckduckgo.com/?q=croteam+turns+down+oculu...

    I'm not at all very trusting of Intel to begin with. But no way in hell do I trust Facebook with ANYTHING, and anything Facebook touches turns to utter garbage. Intel + Facebook? Not touching it at all, not even with a 10 foot pole, as knowing Facebook, they'd turn right around and give me advertisements trying to sell me 11 foot poles.
  • Friendly0Fire - Friday, August 26, 2016 - link

    Oculus was *bought* by Facebook. 3D XPoint remains firmly an Intel project, they're just using Facebook for real life data and testing.

    So yes, you are entirely incorrect in the assumption that Facebook can somehow "destroy" XPoint.
  • Omoronovo - Friday, August 26, 2016 - link

    I don't bring my emotions into discussions about companies for this very reason - it makes you lose focus of the points at hand.

    Intel is working with a company that has an avid interest in helping them make a great product. Intel gets the benefit of some amazing engineering talent at facebook, a proper real-world data set to work out performance problems and get real world feedback, and Facebook gets to have a head start on building support for this kind of device to make their software and hardware environment better.

    I'm sorry to say, but your personal feelings towards either of these companies is completely immaterial to discussions about the technology itself. When the tech launches it can just as easily be used by humanitarian organizations, it doesn't mean Intel is going to ask PETA for engineering advice because of their higher moral ground (in your opinion).
  • JoeyJoJo123 - Friday, August 26, 2016 - link

    Completely missing the point that I, personally, will not be touching XPoint technology from here on out.

    >I'm sorry to say, but your personal feelings towards either of these companies is completely immaterial to discussions about the technology itself.
    You keep changing the subject here, dude. I'm allowed to post my opinions on technology articles. I don't really care if you're here to disagree with my opinions. And right now your opinion is that opinions are immaterial to technology articles. I'd disagree and say opinions have a fair place in any article, technology or not. If a technology article gets posted tomorrow stating that Intel's releasing a new chip called the 7990X Hyper-Extreme Edition for $1999, it is perfectly acceptable and reasonable for someone to post their _opinion_ that the chip is overpriced and isn't worth $1999. As a result, it's perfectly acceptable and reasonable for someone to post that they'd be abstaining from purchasing XPoint technology due to Intel's involvement with Facebook.

    Also, I've never implied that it made any sense for any engineering firm to ask animal rights groups such as PETA for engineering advice, so take your strawman back with you, because I don't want it.
  • DanNeely - Friday, August 26, 2016 - link

    fArseBook is a big enough company that they almost certainly use Intel CPUs, AMD CPUs, and ARM SoCs, along with both AMD and NVidia GPUs in various products; and they are a big enough customer that they probably have access to pre-release hardware for testing purposes. Are you also going to dump all of your mainstream computing hardware for some sort of oddball device with a MIPS cpu that requires you to compile 99.9% of your software from source because there aren't any binaries available to download?
  • woggs - Friday, August 26, 2016 - link

    Wow. Your loss. Please don't touch it.
  • CaedenV - Friday, August 26, 2016 - link

    exactly right. You, personally, will not be able to buy or use 3DxPt technology. Ever. Period. Get over it.
    It is an extremely expensive, extremely high-end RAM/storage consolidation technology for servers. I don't understand why people keep thinking they are going to be able to wake up one morning 2-3 years from now and just pick up one of these drives and expect it to work in a general-purpose PC. You won't. You will have to buy it through a vendor, or already integrated into a server. Even if you could pop over to your local Microcenter and pick one up... what on earth are you going to use it for? To have a 1-second boot time on your PC? To load games a little faster? It is a complete misunderstanding of what the technology offers and what it will be really good at: running large multi-user databases. For pretty much any other application you will be able to find much cheaper and nearly as fast (for single-user workloads) traditional storage options.
    Down the road (like 5-10 years from now) we will see the next iteration of this come out for consumer use and it will be built into the motherboard for devices like performance tablets and laptops. For desktops we will see other solutions that will be more appropriate, and probably more similar to HMC technology being developed in Japan right now.
  • Khato - Friday, August 26, 2016 - link

    Except for that minor detail where Intel stated that one of the target markets for 3D XPoint was PC gaming. It is valid to question how long it's going to be until we actually see it available in that space though, since it's basically a given that Intel's going to be selling as much as it can at the higher server margins before releasing consumer products. But if I had to guess, I'd put it closer to a year from now rather than 5+.
  • ewitte - Sunday, August 28, 2016 - link

    The first devices on the roadmap to be released are in the mainstream performance segment, with enterprise coming at the beginning of next year. There tend to be additional components, reliability requirements, and testing in enterprise markets.
  • Murloc - Saturday, August 27, 2016 - link

    you're making links that do not exist.
  • Morawka - Saturday, August 27, 2016 - link

    Who cares what Facebook did to Oculus. It still sells like hot cakes, and you can still run software and games from any company/developer and use it on the HMD.

    Most of the exclusive money was justified because Facebook footed the entire bill for development. Sure, there are 2 or 3 games that signed exclusivity near the end of development, but you need to be mad at those developers and not Facebook over that one.

    If you are unhappy with Facebook using your info to sell ads, then just don't use it, man. It's a choice, and just like an opinion, everyone has a choice.

    Oculus was open all the way until CV1 launch. When the consumer version hit, that's when openness ended. It was completely open from the initial prototype, all the way through the end of DK2's lifespan.

    HTC has its own exclusives like Dota Spectator VR, LAB, etc., where they purposely omitted Xbox controller support, therefore making them unusable. Maybe they will work once Touch is out, but that's not today, so it's exclusive to HTC without using the word "exclusive".
  • edzieba - Friday, August 26, 2016 - link

    "Then, behind closed doors, offer large sums of money to VR game developers (such as Croteam and their Serious Sam IP) to develop their games exclusive to Oculus Rift (with no support to HTC's Vive or Razer's OSVR). "

    A commonly repeated talking point, but not actually true.

    Croteam: http://uploadvr.com/oculus-denies-seeking-exclusiv...

    Other Ocean (Giant cop): http://uploadvr.com/giant-cop-speaks-oculus-exclus...

    Super Hot: http://superhotgame.com/2016/06/15/3-years-of-vr-h...

    It makes for a nice story to get riled up over, but it hasn't actually happened.
  • Shadow7037932 - Friday, August 26, 2016 - link

    That's fine. This isn't going to be a consumer level tech any time soon.
  • MamiyaOtaru - Sunday, August 28, 2016 - link

    facebook arguably turned oculus into a closed platform, and you can worry about oculus stuff spying on you or whatever, but worrying about a storage technology? You think Intel's going to let them add in phone home or ad serving stuff to the firmware or something?
  • Michael Bay - Monday, August 29, 2016 - link

    Once upon a time, there was talk about adding what would now be called a paywall to CPU extensions.
  • fanofanand - Monday, August 29, 2016 - link

    I remember this, that was one of the ways they were looking at "product differentiation". Competition prevents stuff like that.
  • CaedenV - Friday, August 26, 2016 - link

    um... wut?
    1) Facebook has not destroyed OR. In fact it is doing better than most people expected.
    2) "Collaborating closely" means that FB is telling Intel what specific feature sets, performance levels, and system hooks they want, and Intel is trying to build that into the device. It doesn't mean that FB is designing the device. It means they are putting out requests, Intel is designing a device, and FB is doing some real-world testing for Intel. This is a perfectly normal and healthy business relationship and it will help several other companies that buy into this tech.

    3) OR is a consumer product. 3DxPt is a business-class (and very large business-class) product that the general public probably won't see for another 10 years or more, if at all, because it will likely morph into a completely different product before the tech trickles down to us. It is an apples and bananas comparison.
  • versesuvius - Friday, August 26, 2016 - link

    It is a hard disk after all. Samsung and WD don't do this kind of theatrics when they want to make a hard disk. For a company like Intel to go to Facebook for its data sets is stupid to begin with. No need for reminders as to what scumbags Facebook is.
  • Omoronovo - Saturday, August 27, 2016 - link

    That's not really the same situation. I can't think of a single non-evolutionary change to hard disk manufacturing in 20 years - fundamentally, new hard disks work the same as old hard disks, and hard disks in general have a very well defined place in the current, past, and future hierarchy of data storage on modern computing devices.

    These new SCM devices are a whole new tier of storage devices, and it's important that they get as much real-world feedback before release as possible. You can't make assumptions about the performance and implementation characteristics of a class of device that hasn't been readily available before in a commodity environment.
  • Michael Bay - Saturday, August 27, 2016 - link

    OR was planned from the beginning as a sale to some sugar daddy, in this case Faceberg. That is how tech startups work nowadays.
  • K_Space - Monday, August 29, 2016 - link

    +1
  • jann5s - Friday, August 26, 2016 - link

    There was some coverage of XPoint and Optane at the Flash Memory Summit; PCPer posted a few articles about it.
  • plopke - Friday, August 26, 2016 - link

    I always wonder if Optane is going to be locked out of POWER9, AMD, ... infrastructure? I assume not, since Micron is also a supplier?
  • Omoronovo - Friday, August 26, 2016 - link

    It seems highly unlikely, at least for the storage versions of Xpoint, that it will be locked to Intel platforms. If the storage products operate over PCI-E instead of some kind of custom interconnect (like the ones shown at IDF), it makes zero sense for Intel to try to lock this tech to their own platforms.

    Stuff like the software defined cache hierarchy might require specific extensions included on Intel platforms to maximise performance (or work at all without problems), but fundamentally they should operate like any other PCI-E based storage device that simply has different performance characteristics.

    As for the down-the-line DRAM hybrid parts, that's entirely unknown. Intel could quite easily lock the market for this type of device down to only Intel platforms, because it will most certainly require the memory controller to support it - hence new processor designs. If Intel opens the specification, AMD will be able to add support later, but Intel will definitely have a significant time (and hence potential market share) advantage in either case.
  • jjj - Friday, August 26, 2016 - link

    Very unlikely that XPoint has a long-term future, as it was a stop-gap solution before a "proper" 3D SCM.
    Micron's roadmap in 2015 was showing gen 2 in late 2016 and next-gen memory in 2017. Things got delayed a bit, but as WD and everybody else come up with better solutions, XPoint will end. They'll keep making it for a while to recoup costs, and they could even rebrand the next technology as XPoint, but XPoint itself is not aimed to have a long life.
    As for retail vs server, more likely we see it in retail first, as server qualification takes more time. In consumer they can ship it even if it's a bit iffy and figure it out later.
  • Omoronovo - Friday, August 26, 2016 - link

    I'm not sure I agree with that. There are too many unknowns right now to really say one way or the other how successful Xpoint will be in the marketplace. Intel and Micron are working together on Xpoint - who's to say that their "proper" SCM technologies aren't being implemented into Xpoint?

    Also, I don't see this as a technology that has a small niche to fill, and if it fails, it's all over... if the characteristics of Xpoint in the performance trifecta (performance/cost/reliability) can be tweaked, like for example making for cheaper, higher-performance replacements for NAND, then it could simply be moved into a different part of the product stack and keep on going.
  • jjj - Friday, August 26, 2016 - link

    It doesn't scale well, it's not designed for that, and that's why, as stated by Intel and Micron, it was aimed as a stop-gap solution.
    A proper solution for 3D anything (ReRAM, MRAM, PCM or w/e else) will be designed to scale much better on the vertical and will easily win in cost. Perf could also be better, but let's not count on that.

    There are also 2 different strategies going forward:
    1. A single type of memory aimed at both DRAM and NAND.
    2. Target DRAM and NAND with 2 different types of new memory. One much better than DRAM and the other much better (including cost) than NAND.
    How each major player will go about it is hard to say. WD/Toshiba are going with 3D ReRAM, that much has been clear for many years. Micron might go with 2 different solutions, neither of those being XPoint. Hynix, Samsung and China, no idea. IBM might be a player too, but they'll license it. The foundries and ARM will also care about embedded memory, for IoT and more. ARM seems to like CeRAM.
  • Omoronovo - Friday, August 26, 2016 - link

    Unless you have some Intel insider knowledge, I don't think we can really be sure yet exactly how far Xpoint will scale, or even if it eventually turns into some other form of technology (like ReRAM, as you said). Intel certainly hasn't gone into that much depth yet, and even this teaser was just a side event at IDF, so clearly Intel doesn't want to let us all know yet.

    Honestly, I'm just really excited to see how this technology evolves, and to find out more about it. I'm guessing the low-key nature of this means Intel won't be wanting to give any hard, in-depth looks at the technology for quite a while yet.
  • jjj - Friday, August 26, 2016 - link

    I've mentioned a rebrand in my first comment so don't insist on that. Yes, they could keep the name even if it's a substantially different solution.
    The difference between XPoint and a "proper" solution is like the difference between 2D NAND with more than 1 layer and 3D NAND.
    With 3D XPoint they just add layers, and that's why costs aren't scaling great. AT in their first article on XPoint, I think, mentions that Intel/Micron hope that EUV arrives soon and helps them scale.
    3D NAND is designed so you can process multiple layers in 1 step. Fewer steps = lower costs and shorter cycles. Shorter cycles help costs further.
    Here's how they make their 3D NAND: http://www.chipworks.com/about-chipworks/overview/...
    They deposit the layers, they etch, and then you've got a few more steps where many layers are processed in 1. That's where the bulk of the cost savings are supposed to come from, and this is what makes it a "true" 3D.
    XPoint is not as smart as that, they just add layer after layer and that's why it's not ideal from a cost perspective.

    Went and found the AT article I mentioned; the first section is relevant: "The Technology: How Does 3D XPoint Work?" http://www.anandtech.com/show/9470/intel-and-micro...
  • saratoga4 - Friday, August 26, 2016 - link

    >3D NAND is designed so you can process multiple layers in 1 step. Fewer steps = lower costs and shorter cycles. Shorter cycles help costs further.

    Are you sure about that? Micron I think was supposed to have a slightly more efficient process than Samsung, but as far as I know scaling is still O(N) masks where N is the number of layers in the memory array.
  • jjj - Friday, August 26, 2016 - link

    Isn't this the entire point of 3D NAND? Otherwise they would just add layers of 2D NAND. Guess BeSang is trying to just add high density layers, we'll see how it goes.

    Anyway, Micron and Intel are somewhat constrained on the vertical and on the horizontal, and it's hard to say how easy it might be to add bits per cell.
    A "proper" solution would have more freedom in at least 2 of those.
    XPoint would have served its purpose without the delays: owning the market for a few years and scaling a little bit to stay alive for a while even after the competition responds.
    With the delays, we might see some volumes next year and real volumes in 2018, so its future is rather murky.
  • saratoga4 - Friday, August 26, 2016 - link

    >Isn't this the entire point of 3D NAND?

    The point of 3D NAND is to allow stacking of NAND cells vertically so that you can improve density without having to pay for multi-patterning. It doesn't necessarily reduce the number of steps (actually I think it requires a lot more) as compared to planar, but it does result in lower costs per bit since you don't need multipatterning.
  • jjj - Friday, August 26, 2016 - link

    On WD's thingy the author completely misses the point but you have this quote " Western Digital indicated that it would use some of the things it has learnt while developing its BiCS 3D NAND to produce its ReRAM chips. " http://www.anandtech.com/show/10562/western-digita...

    The presentation was called "Creating Storage Class Memory: Learning from 3D NAND" so it's very clear that their entire point was this ,how to make it scale properly.
  • floobit - Friday, August 26, 2016 - link

    Their latency numbers seem a bit different than your analysis of the P3700 in http://www.anandtech.com/show/8104/intel-ssd-dc-p3... Can you explain? ~20 us @QD1, then 200 us @QD32, and 800 us @QD128 in your article, and they have 1800 us by "thread".
  • Krysto - Friday, August 26, 2016 - link

    We already know the price - it's 4x higher than NVMe drives per GB, which themselves are about 3x higher than mainstream SATA 3 SSDs. So it should be at least 10-12x more expensive than a 140GB SATA 3 SSD. If it's meant for "enterprise customers", then you can easily double that price yet again.

    So my guess is at the very least $1,000 for 140GB drive. Potentially on the higher-end of that $1,000, too.
  • Omoronovo - Friday, August 26, 2016 - link

    There are very few directly-comparable AHCI and NVMe devices, but the ones I looked at directly (the Samsung SM951 drives) have almost zero difference in price. In fact, on average, the NVMe 256/512GB version of that drive is actually about 5% cheaper, possibly just due to supply/demand.

    Fundamentally, there is no reason why the signalling interface should have any bearing on the cost of the device; I think you are probably comparing m.2 drives with 2.5" drives, which is not a fair cost comparison since m.2 drives generally need denser (and hence costlier) NAND modules to reach the same capacities whilst fitting into a smaller physical space.

    Assuming these storage devices from Intel ship as standard PCI-E devices (either directly or via m.2 form factor), then hopefully the 4x figure is the only one that will matter. There is no way one of these will use AHCI as the signalling protocol since the cpu overhead would be enormous at anything approaching the iops figures quoted in the article.
  • TheinsanegamerN - Monday, August 29, 2016 - link

    A 512GB Samsung 950 Pro is over $300, $350 ATM. An SM951 is $339.

    A 512GB SATA III SSD is only $150 for a high-end model. A 1TB is only $239 for a Mushkin Reactor. A Samsung M.2 SATA drive at 500GB is only $169.

    NVME certainly commands a price premium.
  • Omoronovo - Monday, August 29, 2016 - link

    Those aren't apples-to-apples comparisons, so you can't point to NVME as the factor causing the difference in price.

    For example, you're comparing a 512GB Samsung 950 pro (for completeness sake I'll assume you meant the m.2 one) and an SM951. Not only are these devices going to cost a different amount regardless of their technical merits - one is an OEM device, the other is retail for a start - they aren't even the same market segment.

    The rest of your post doesn't have enough specifics to actually compare the drives, let alone enough to point to a specific feature as the differentiator in price. My example in my previous post is fair because it's comparing *two exact same drives* where the only difference is that one is AHCI and the other is NVMe.

    Just for completeness sake, don't forget that literally no 2.5" sata drives use NVMe. Feel free to try to dispute that, though. This means that you *must* compare only m.2 drives if you want to actually find out what the specific cost overhead NVMe adds versus AHCI.

    2.5" drives can use substantially cheaper nand in terms of cost per gigabyte; they can fit generally 16 packages onto the pcb, meaning that full capacity can be made up of many smaller capacity dies. These are cheaper to design and manufacture and have higher yields. An M.2 drive will have substantially higher density NAND due to space constraints, which means that - at least for the moment - m.2 versus 2.5" sata form factors are always going to have a substantial price disparity. This has nothing to do with NVMe.
  • ewitte - Sunday, August 28, 2016 - link

    Based off the prices they quoted compared to DRAM, I'd guess closer to $500. If they go too high, people will mostly just get a 960 Pro, which will have 2-3 times the IOPS performance and a new NVMe controller and costs about the same as a 950 Pro.
  • Pork@III - Friday, August 26, 2016 - link

    I'll wait for 100+TB Optane in future generations. In these times of slow, piece-by-piece progress and big steps backward... maybe I will have to wait until at least Y2027.
  • Vlad_Da_Great - Friday, August 26, 2016 - link

    "...we’re still crying out for more information about 3D XPoint, ..". That is all you guys are good for, crying and whining like babies, creating drama and conspiracy. I tell you what, go to Intel and say we are looking to buy 10 000, 1TB Optane SSD's here is $1M non refundable deposit. After that you will get any detail specification and abilities of those drivers. Can you dig it?
    Second, how low IBM has fallen to advertise on AnandTech, man-o-man, shaking my head, rolling my eyes.
  • Ian Cutress - Friday, August 26, 2016 - link

    Can't tell if troll or...
  • fanofanand - Monday, August 29, 2016 - link

    The last few comments made by Vlad on Anandtech all scream "Troll". Better to just ignore this one.
  • iwod - Friday, August 26, 2016 - link

    Once we get next-generation DDR5 and TSV-stacked DRAM, I presume we should be able to get 128GB to 512GB per DIMM. That is the same range as XPoint capacity, maybe at 2-3x the price/GB, but a massive win in latency. Not to mention limitless reads/writes; maybe the TCO in the long term is better.

    That is reaching 4-8TB of memory per server, and then you add in the ever-increasing speed of SSDs as a 2nd tier. What *exactly* does XPoint provide in this future?
  • Lolimaster - Friday, August 26, 2016 - link

    It seems SSDs are a dead end in terms of latency and 4K random read performance.

    Just in latency, DRAM is 1000x faster; XPoint should reduce that to 10-100x.
  • ddriver - Saturday, August 27, 2016 - link

    SSDs can go a long way in those regards, but need MOAR CASH... hot data needs to be kept in cache, and only flushed to NAND on shutdown. It may well turn out that Intel are doing that same thing with XPoint, thus the big secrecy. There is no word on the actual volume of touched data in those tests they present, and I suspect it is not all that much to begin with, and a lot of the performance they are bragging about is just cache hits for a workload small enough to stay in cache in its entirety. And the only reason it is slower than DRAM is that it is on PCI-e...
  • nils_ - Friday, September 9, 2016 - link

    This sort of technology already exists as NVDIMM. Basically you have the same size of RAM + NAND on a DIMM module (there are also PCIe cards that come as block devices) and an array of capacitors + a controller that will flush the RAM to NAND on power loss.
  • Meteor2 - Saturday, August 27, 2016 - link

    So you could have 1TB of DRAM on a server, or 1TB of DRAM+3DXPoint. What's the benefit of the latter??
  • Omoronovo - Saturday, August 27, 2016 - link

    Main benefit: cost. If the capacity for the application in use - like databases, in the example Intel themselves gave - is more important than pure performance, then an example 256GB DRAM plus 768GB XPoint system has the potential to be substantially cheaper than a system with 1TB of DRAM.

    We'll really need to know the exact performance and price characteristics to know how large the niche for Xpoint is, but the potential is definitely there.
  • Ian Cutress - Monday, August 29, 2016 - link

    Also, capacity. If you're limited at the socket to X amount of DRAM, but can add 10 times as much with Optane and use software defined DRAM management to cover both, then you can dump a large database in there with code without much issue.
  • Meteor2 - Monday, August 29, 2016 - link

    Thanks Ian, that sounds much more like it.
  • Meteor2 - Monday, August 29, 2016 - link

    I always wonder about this because we're talking, what a few tens of thousands of dollars in hardware? But that's only a fraction of the programmers' payroll, and of course even less of company revenue. I can't see how such a small saving can be significant for the bottom line.
  • Motion2082 - Saturday, September 3, 2016 - link

    So how much will the industry price gouge us on this one?
