33 Comments
Pissedoffyouth - Tuesday, January 12, 2016 - link
>it might be standard for low-end SSDs to borrow a bit of your system's RAM

You could say that it's like the soft modem of old, but I really don't have an issue with this. My GPU uses system memory as it's integrated, and RAM in a desktop or laptop is plentiful.
Bring on cheap TLC 2TB drives!
nathanddrews - Tuesday, January 12, 2016 - link
Or the network cards of today, the sound cards of today, etc. borrowing CPU cycles...

Whatever it takes to get these 2TB+ SSDs in my hands, bring it on!
ddriver - Tuesday, January 12, 2016 - link
You mean the lousy network or sound card of today. Good hardware is hardware accelerated.

I don't see this as a move to bring down prices, but as a move to bring up profit margins by skimping on the hardware. SSD cache is best placed in the SSD itself; in system RAM it still has to be transferred over the bus. Also, operating systems already have a RAM cache for the file system.
All in all, a cheap move.
nathanddrews - Tuesday, January 12, 2016 - link
Yes, but if it's cheap and it works... then who cares? This isn't for bleeding edge hardware in the first place. For most people, onboard networking and audio is just fine and they'll never appreciate the difference.

ddriver - Tuesday, January 12, 2016 - link
As long as the purchase is the product of an informed decision, it should be OK in this particular case. But generally it is a bad practice; cutting corners often results in serious damages or even casualties.

Justwow - Wednesday, January 13, 2016 - link
"cutting corners often results in serious damages or even casualties."Are you going to kill yourself over your SSD using a bit of system RAM? What are you on about.
ddriver - Thursday, January 14, 2016 - link
Improve your reading and reasoning skills; as I said, "it should be OK in this particular case".

But when a car manufacturer cuts corners and that results in lower - yet still within the limits of "acceptable" - reliability, that kills people, which is bad, even if the industry and regulators have deemed it acceptable.
icrf - Tuesday, January 12, 2016 - link
The only way they get to bring up profit margins is if no one else does this and it's a competitive advantage. Once more than one supplier does, I think we'll see some price flexibility. If it bumps up performance enough, it may end up being a premium budget drive on that aspect alone, and priced accordingly.

close - Tuesday, January 12, 2016 - link
You don't have L3/4 cache on all CPUs, you don't have dedicated memory for all GPUs, you don't have dedicated cache for all SSDs. There's room for expensive hardware and for cheap hardware. You can't have only the best on the market.

A 5.25" SSD might help though.
extide - Tuesday, January 12, 2016 - link
No, a 5.25" SSD wont help bring prices down. SSD's just don't take up much space, and they don;'t need to use alternate, more expensive technologies or production methods to make an ssd fit into a 2.5" form factor. A 5.25" ssd would just have the same pcb as a 2.5" ssd and a ton of extra space. You could decide to fill all that space with more flash .. but then you would end up with a drive costing many thousands of dollars, and there is essentially no market for such a device in the consumer space. Same goes for 3.5" ssd's.See, people have this idea in their head that smaller costs more, but the thing is a 2.5" ssd is not the small version, is it the default size!
close - Wednesday, January 13, 2016 - link
No, the 5.25" SSD was a joke for ddriver :). We need products for every segment because not all people can afford the high end models. It doesn't mean they should not have at least a lower end SSD with no cache. As long as it does the job calling them lousy is a gross exaggeration.Flunk - Tuesday, January 12, 2016 - link
Windows dropped support for all hardware sound acceleration as of Vista. The only way to utilize hardware acceleration in sound devices is to use the fairly unpopular OpenAL standard. No current model sound cards support hardware acceleration, the last card to do so being the Creative Labs X-Fi, which is 2 generations behind now.

You can get hardware network cards, but they're normally only used in servers because they cost at least $200.
extide - Tuesday, January 12, 2016 - link
While you are correct about the sound cards (sadly), pretty much all decent client network cards (i.e. stuff from Intel, Broadcom, Qualcomm Atheros, etc.) will support several forms of hardware offloading (like TCP/IP checksums, and a few other things).

ddriver - Tuesday, January 12, 2016 - link
Professional sound cards (also known as audio interfaces) completely bypass that pile of rubbish known as "Windows APIs".

ddriver - Tuesday, January 12, 2016 - link
Thus MS decided to stop supporting hardware accelerated audio, because hardware accelerated audio has long bypassed Windows completely. There is no need to support something that bypasses your APIs. That doesn't mean there is no hardware accelerated audio and no benefit from it.

smilingcrow - Tuesday, January 12, 2016 - link
"Bring on cheap TLC 2TB drives!"Yeah, because it's the cost of that 1GB of RAM per TB that's keeping SSD prices high for sure.
RAM at retail is currently ~£4 per GB. 1TB SSD at retail is £200+. You do the maths. ;)
eldakka - Tuesday, January 12, 2016 - link
As the article stated, it's not just the BoM of the RAM itself. Putting the RAM on the SSD itself incurs costs in:

1) PCB space to put the RAM on (which could be used for more flash; some M.2 and mSATA SSDs can't physically fit more flash on them);
2) Pinouts on the SSD controller chip to interface with the RAM, taking up more space, more pins, therefore more $$;
3) the traces between the controller and the RAM, again taking up more space and adding complexity, having to be applied to the PCB and designed in, etc.
So that $6 RAM chip might cost another $4 to actually install on the PCB and increase the cost of the controller (due to extra pinouts etc.) by another $2. That can effectively shift the sale price of the consumer SSD by $20, which is quite significant when a drive might be $130 rather than $150.
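As a back-of-the-envelope illustration of that argument (the BoM-to-retail multiplier below is a hypothetical overhead/margin factor, not a figure from the article or this comment):

```python
# Back-of-the-envelope: how a small bill-of-materials (BoM) saving can
# compound into a lower shelf price. The 1.6x BoM-to-retail multiplier is a
# hypothetical overhead/margin factor, not a number from the article.

def retail_delta(ram_chip=6.0, assembly=4.0, controller_pins=2.0, markup=1.6):
    """Estimated retail price difference from dropping the on-board DRAM."""
    bom_saving = ram_chip + assembly + controller_pins  # ~$12 in this example
    return bom_saving * markup                          # ~$19 at the shelf

print(f"Estimated retail saving: ${retail_delta():.2f}")
```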
ddriver - Tuesday, January 12, 2016 - link
Oh wow, you boosted the cost by like 50 cents. It is hard to argue with such massive numbers :)

Today SSDs borrow RAM from your PC, can't wait for the bright future, when SSDs will borrow storage space from your PC. I bet it will drop end user prices by at least 5% and profit margins by at least 5000%. A brilliant business strategy.
ddriver - Tuesday, January 12, 2016 - link
*and BOOST profit margins by at least 5000%

Come on AT, it is the 21st century, where is the edit button? Or are you saving that for the upcoming century?
ImSpartacus - Tuesday, January 12, 2016 - link
This sounds like a reasonable way to get SSDs into places that they aren't currently occupying. Very neat.

The_Assimilator - Tuesday, January 12, 2016 - link
The only way this will help DRAM-less SSDs is if they're using system RAM for storing their page tables, and that's already a bad idea.

extide - Tuesday, January 12, 2016 - link
Yeah, that's exactly the point: storing the page table data in system RAM. You could do this in ways where it would be a good idea, not a bad one.

bug77 - Tuesday, January 12, 2016 - link
I'm not sure I want to find out my data was still in RAM when the power went out.

extide - Tuesday, January 12, 2016 - link
It's not a data cache, but a metadata storage location.

name99 - Tuesday, January 12, 2016 - link
Obviously I don't know what they are doing exactly, but this sort of thing is not completely unprecedented.

For example, fairly recently (in 10.10 or 10.11) Apple have changed the tree data structure they use to describe the contents of JHFS+ volumes. In an ideal world, this better data structure would also be stored on the volume, but that would be a non-backwards-compatible change; so instead they construct the tree at mount time and use it in RAM, but it has no persistence. This makes mounting a little slower, but what can you do.
So in principle the SSD could do the same thing --- use compressed state within flash to describe data layout along with a faster in-RAM version of the same data. The issue then is simply to ensure that any PERSISTENT change to the RAM version of the data structure is pushed out to flash in a timely and safe manner. That's not trivial, of course, but it's the standard file system problem (and in principle easier for the SSD because it has more control over the exact ordering of writes than a file system does).
Time will show whether Marvell solved it with the robustness of a modern file system or with the "robustness" of FAT.
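A toy sketch of that "authoritative copy in flash, fast copy in RAM" idea - purely illustrative, not how Marvell's controller or Apple's JHFS+ code actually works. A journal file stands in for flash-resident state, and every name below is made up:

```python
# Toy model: the authoritative mapping state lives in persistent storage
# (a journal file stands in for flash here), while lookups are served from
# a fast in-RAM copy rebuilt at mount time. All names are hypothetical.

import json
import os

class MappingTable:
    def __init__(self, journal_path):
        self.ram_map = {}                 # fast in-RAM logical->physical map
        self.journal_path = journal_path  # stands in for flash-resident state

    def mount(self):
        """Rebuild the in-RAM structure from persistent state (slower mount)."""
        if not os.path.exists(self.journal_path):
            return
        with open(self.journal_path) as f:
            for line in f:
                rec = json.loads(line)
                self.ram_map[rec["lba"]] = rec["pba"]

    def remap(self, lba, pba):
        """Persist the change before updating the in-RAM copy, so a power
        loss never leaves the durable state behind what the host was told."""
        with open(self.journal_path, "a") as f:
            f.write(json.dumps({"lba": lba, "pba": pba}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.ram_map[lba] = pba

    def lookup(self, lba):
        return self.ram_map.get(lba)

table = MappingTable("mapping_journal.log")
table.mount()
table.remap(42, 1337)
print(table.lookup(42))
```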
zodiacfml - Tuesday, January 12, 2016 - link
Nice. But I'm not sure this will be utilized in the cheap devices which need it, while the more expensive drives will boast more RAM as a marketing tool.

Visual - Tuesday, January 12, 2016 - link
Can this even happen in OS-agnostic ways, or will it need drivers for each OS to tell it not to touch that part of RAM etc.?

And sure, move the controller's working memory to my system RAM. What's next... move the controller's logic itself to my system RAM and have it run by my system CPU instead of making a controller at all? Then call it "revolutionary new progress".
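For reference, "moving the controller's logic to the host" would essentially mean the OS driver keeping the flash translation layer itself - logical-to-physical remapping plus wear levelling. A bare-bones, entirely hypothetical sketch of that bookkeeping (no real driver is this simple):

```python
# Bare-bones host-side flash translation layer: logical->physical remapping
# plus naive wear levelling, of the kind a ring-0 driver would have to keep.
# Entirely illustrative; a real FTL also needs garbage collection, caching,
# power-fail safety, bad-block handling, and so on.

class HostFTL:
    def __init__(self, physical_blocks):
        self.l2p = {}                              # logical -> physical map
        self.erase_counts = [0] * physical_blocks  # wear per physical block
        self.free = set(range(physical_blocks))

    def write(self, logical_block):
        # Naive wear levelling: always pick the least-worn free block.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        old = self.l2p.get(logical_block)
        if old is not None:            # the stale copy is erased and freed
            self.erase_counts[old] += 1
            self.free.add(old)
        self.l2p[logical_block] = target
        return target

ftl = HostFTL(physical_blocks=8)
for _ in range(5):
    print("logical block 0 now lives in physical block", ftl.write(0))
```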
extide - Tuesday, January 12, 2016 - link
Yes, it needs the NVMe driver at least supporting rev 1.2.

DanNeely - Tuesday, January 12, 2016 - link
It'll need driver support, but that will consist of little more than doing a memory allocation to hold it.

Moving the controller logic to the CPU won't work on anything this side of a real-time OS; CPU scheduling's way too unpredictable, and the size of the controller is probably limited by the number of bumps it needs for IO pins to talk to the PCIe bus and flash chips anyway, so it wouldn't help. The reason why devices can offload memory to the main system without major latency penalties is that for the last 20+ years, the CPU/memory controller/etc. platforms have all supported direct memory access, which lets devices on an IO bus talk to the memory controller directly without having to raise interrupts and wait until the CPU gets around to handling them some time later.
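If the mechanism here is the NVMe 1.2 Host Memory Buffer feature (my reading, not something stated in the thread), that allocation boils down to the driver pinning a few memory regions and handing the controller a short descriptor list of their addresses and sizes. A rough sketch follows; the 16-byte entry layout is recalled from memory and should be checked against the NVMe 1.2 spec before being relied on:

```python
# Hypothetical sketch of an NVMe 1.2 Host Memory Buffer descriptor list.
# Assumed entry layout (16 bytes): 8-byte buffer address, 4-byte buffer size
# in memory-page units, 4 reserved bytes. Verify against the spec.

import struct

PAGE_SIZE = 4096  # assumed host memory page size

def build_hmb_descriptor_list(regions):
    """regions: list of (physical_address, size_in_bytes) pinned by the driver."""
    entries = b""
    for addr, size in regions:
        pages = size // PAGE_SIZE
        entries += struct.pack("<QII", addr, pages, 0)  # address, size, reserved
    return entries

# e.g. two pinned regions, ~96 MB in total, offered to a DRAM-less SSD
desc = build_hmb_descriptor_list([(0x1_0000_0000, 64 * 1024 * 1024),
                                  (0x2_0000_0000, 32 * 1024 * 1024)])
print(len(desc), "bytes of descriptors for", len(desc) // 16, "regions")
```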
Visual - Wednesday, January 13, 2016 - link
Uh, what? Of course moving the controller logic to the CPU can work; CPU scheduling is a non-issue when your code is a ring-0 driver. Well, a simpler "controller" will still remain in hardware, but it will not have to deal with any block remapping, wear levelling, caching and whatever else. Just give raw 1:1 access to all the blocks. This is nothing new; it could have been done in the very first SSDs, and just like back then it was deemed a bad idea, I think it is still a bad idea today, and that was my entire point. Depending on OS-specific drivers, where so many things can go wrong so easily, is not worth the small cost savings.

rocky12345 - Tuesday, January 12, 2016 - link
Bad idea on so many levels. If I spend the money on an SSD that costs double the price of a normal hard drive, I expect it to have its own complete hardware and not use the RAM in my system as a cache or whatever. If I were to get a 2TB SSD and it used 2GB of my system memory, that is memory my OS could have been using for its own needs. On a lower end system with less memory this would pretty much kill system performance everywhere else - but hey, I got that new fancy SSD in there, big whoop if the system is struggling everywhere else now.

I would say the minimum spec for these types of SSDs would be at least 8GB of system RAM, better 10GB, but with the extra 2GB over the 8GB partitioned off on a separate memory channel just for the cheap SSDs. It would mean Intel and AMD having to add an extra memory channel that could be filled if a crap SSD like these is used in a system, which most OEMs will do just to fill the check mark on the spec sheets. By adding the extra memory channel and making it only usable by these SSDs when installed, you do not lose system memory or bandwidth that the crap SSDs would normally use/steal.

Most new basic systems nowadays come equipped with 6GB or 8GB, which is good for most everyday tasks but not enough for heavy use - like people who never close Facebook pages, have 15 to 20 tabs open, and music playing from YouTube videos, etc. That 6 or 8GB is pretty much all used up once Windows takes its share as well. Oh yeah, we all remember how soft modems worked out - most times not so great. I see SSDs going this route, and if so their future is bleak for sure. Just my input on this.

Frihed - Friday, January 15, 2016 - link
In a world where 16GB is about to become standard I don't see it as a problem. We don't need that much memory anyway.

JoeDuarte - Monday, February 22, 2016 - link
This reminds me of the work Baidu did in dumbing down their flash drives to improve data center performance. I think they had the OS take over from the flash controller and removed the flash DRAM. Their context was much different from a PC user's, but it's an interesting piece of work: http://www.zdnet.com/article/baidu-chooses-dumb-ss...