21 Comments
AshlayW - Monday, June 10, 2019 - link
This could be the reason I finally upgrade to X570. Part of me really wants three of the 1TB versions of these in RAID0. That would give me 3TB with ~10 GB/s+ sequential read/write performance, assuming a bit of overhead. Basically RAMdisk performance (I always notice huge software/driver overheads with the RAMdisk I use; I can't actually get more than around 10 GB/s even with 3200 MHz DDR4).

It would be completely pointless, since my current NVMe Gen3 x4 isn't actually bottlenecking me at all realistically. Games don't really benefit from all this sequential performance, but it's cool to have.
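As a sanity check on that estimate, here is a quick sketch. The ~5 GB/s per-drive figure and the 10% striping/software overhead are assumptions for illustration, not measured numbers:

```python
# Rough RAID0 sequential-throughput estimate for the setup described above.
# Assumptions (not from the article): ~5.0 GB/s sequential per Gen4 drive,
# ~10% striping/software overhead.
def raid0_throughput(per_drive_gbps: float, drives: int, overhead: float = 0.10) -> float:
    """Aggregate sequential throughput in GB/s for an n-drive RAID0 stripe."""
    return per_drive_gbps * drives * (1.0 - overhead)

total = raid0_throughput(5.0, 3)
print(f"~{total:.1f} GB/s from 3 drives")  # ~13.5 GB/s
```

Even with pessimistic overhead, three Gen4 drives striped would comfortably clear the ~10 GB/s the commenter is hoping for.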
DanNeely - Monday, June 10, 2019 - link
At most I suspect you might notice a very slight improvement in loading and install times.

PCIe 3.0 x4 vs SATA is noticeable: my NVMe work laptop is noticeably faster at launching apps (to the shown-on-taskbar state) during my post-login mass start than my SATA desktop at home, despite the latter having a faster-clocked CPU. The difference is only a second or two, definitely not worth spending money on an upgrade at home for, but enough to be clearly seen. It's roughly app 1's icon popping up while I'm clicking on app 2 vs app 3.
I noticed a much larger gain while installing stuff on the new laptop vs my old laptop (SATA SSD); but since my old laptop had a much slower CPU and I haven't installed MS dev tools/databases on my home system, I'm not sure how much of the difference is IO and how much is elsewhere.
halcyon - Monday, June 10, 2019 - link
When you run complex web apps and your browser, its cache, and its profile all from a RAMdisk, you will NEVER go back to running them from an HD/SATA SSD. The difference can be big.

Andy Chow - Monday, June 10, 2019 - link
There is no difference running web apps or a browser, since all that data is in RAM; using a RAMdisk won't speed it up at all.

deil - Monday, June 10, 2019 - link
Not for gaming, but at work I had to use 50 GB of data for some computations; with this, that pull would finally take just seconds... Still, two of these in any RAID means all your coworkers will hate you...

DanNeely - Monday, June 10, 2019 - link
I wonder how long until we have controllers bottlenecked on the 4.0 interface.

Ian Cutress - Monday, June 10, 2019 - link
The E16 controller is already the bottleneck. It's the PCIe 3.0 version with the PHYs swapped. They're having to develop a new front end for the controllers in order to process data at >5 GB/s. The NAND still has plenty of speed left.

A5 - Monday, June 10, 2019 - link
Huh. I know the economics don't work out, but it would be interesting to see a modern SSD with fully populated channels of lower-density NAND to cut size instead of just cutting channels.
DanNeely - Monday, June 10, 2019 - link
That was my point. The E16 isn't fast enough to max out four 4.0 lanes in sequential IO, which is why I'm wondering when the first controllers fast enough to fully utilize the bandwidth will be out.

TheUnhandledException - Monday, June 10, 2019 - link
I would say at least a year. Companies are going to use existing controllers for this generation; they will want to recover R&D and design costs. The next-gen drives in 2020 will be based on controllers which can saturate PCIe 4.0 x4. Then the bottleneck moves back to the raw flash, which will eventually get fast enough that drives all stall out at around 7 to 7.5 GB/s. Just in time for the whole cycle to repeat with PCIe 5.

Skeptical123 - Monday, June 10, 2019 - link
This will be something to watch. If we saturate the Gen4 bus with new controllers and existing NAND in about a year, this cycle will repeat, as you say, for PCIe Gen5. If it happens a little slower in the future (if the NAND isn't there yet), then to save cost and maybe power we might see consumer devices drop to two Gen5 PCIe lanes instead of four. Even if drives aren't constrained by NAND, there's still a chance consumer drives will go back to two lanes, since the bandwidth of PCIe Gen5 is so great. Also, the difference between a two- and four-lane drive won't be noticeable to most users, and thus won't matter, unless it becomes common for the Dells and HPs to use NVMe drives as a RAM cache to save on RAM costs...

Santoval - Tuesday, June 11, 2019 - link
I'd say at least two years. One year seems too short a time to me.

DigitalFreak - Monday, June 10, 2019 - link
I wouldn't touch PNY products with a 10' pole. Every flash memory product I've ever bought from them has died an early death.

deil - Monday, June 10, 2019 - link
Well, I have two 128GB pendrives and a 256GB one that have helped me migrate at least a few TB already; I think it's been over 2 years now, at least for the smaller ones. Maybe it's something with a specific series or something...
A5 - Monday, June 10, 2019 - link
Should be easy to avoid, since this will probably be the same price as all the other Phison E16 clones.

halcyon - Monday, June 10, 2019 - link
Exactly my experience as well.

ksec - Monday, June 10, 2019 - link
They should start designing a PCIe 5.0 SSD controller aiming at 10+ GB/s.

alpha754293 - Monday, June 10, 2019 - link
I wonder what the random read/write IOPS are.

DigitalFreak - Monday, June 10, 2019 - link
42

qlum - Monday, June 10, 2019 - link
I see what you did there.

WaltC - Tuesday, June 11, 2019 - link
What I like about PCIe 4.x is the fact that, in terms of MB/s, it's a significant jump from PCIe 3.x, a much larger performance jump than moving from 2.x to 3.x. Duh... ;)
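The absolute per-lane numbers bear that out. A quick sketch using the standard PCIe line rates and encoding efficiencies (MB/s here means 10^6 bytes/s):

```python
# Per-lane PCIe bandwidth: transfer rate (GT/s) x encoding efficiency / 8 bits.
gens = {
    "2.0": (5, 8 / 10),     # 8b/10b encoding
    "3.0": (8, 128 / 130),  # 128b/130b encoding
    "4.0": (16, 128 / 130),
    "5.0": (32, 128 / 130),
}

mbps = {g: gt * 1000 * eff / 8 for g, (gt, eff) in gens.items()}
for g, bw in mbps.items():
    print(f"PCIe {g}: ~{bw:.0f} MB/s per lane")

print(f"2.0 -> 3.0 gain: ~{mbps['3.0'] - mbps['2.0']:.0f} MB/s per lane")
print(f"3.0 -> 4.0 gain: ~{mbps['4.0'] - mbps['3.0']:.0f} MB/s per lane")
```

The relative jump is roughly 2x each generation, but the absolute gain going 3.0 to 4.0 (~985 MB/s per lane) is about double the gain going 2.0 to 3.0 (~485 MB/s per lane), which is the jump the comment is describing.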