118 Comments

  • FatFlatulentGit - Thursday, May 13, 2021 - link

    Is there a reason the Sabrent Rocket 4 Plus never shows up in the benches? I'm baffled that AT didn't review that one as it uses the Phison E18 and came out about six months ago. You don't even include it in your bench results and it's a flagship PCIe4 M.2 drive. What gives?
  • Linustechtips12#6900xt - Thursday, May 13, 2021 - link

    Could just be that they didn't think of it.
  • Death666Angel - Thursday, May 13, 2021 - link

    They probably didn't get sampled one. Sabrent seems more like a non-brand type option and thus likely does not have much of a PR presence.
  • Billy Tallis - Thursday, May 13, 2021 - link

    The timing of the first round of review samples of the Rocket 4 Plus didn't work out for us, because I was still putting the finishing touches on the new test suite and had a bit of a backlog of other Gen4 drives to review first. By the time I was ready to start testing an E18 drive, Inland was offering to sample several of their drives, and they're also a frequently requested brand. It wouldn't make sense to review the Rocket 4 Plus now because it's the same underlying hardware as the Inland Performance Plus. We'll wait for the 176L NAND before doing another E18 review.
  • Chaser - Thursday, May 13, 2021 - link

    Until Microsoft unshackles Windows from the magnetic hard disk era, the perceptible difference between most SSDs on a Windows desktop is minuscule. But I suppose these reviews and their hairsplitting synthetic benchmarks get clicks.
  • Marlin1975 - Thursday, May 13, 2021 - link

    What are you talking about? Performance between platter disk and SSD/M.2 is day and night.
  • evilpaul666 - Thursday, May 13, 2021 - link

    There is a huge difference between rust and flash, but we could use some major software and architectural improvements to PCs to move them forward. Maybe persistent memory? A better file system that uses something like compression and checksums on everything, similar to ZFS?
  • GeoffreyA - Thursday, May 13, 2021 - link

    NTFS has had compression since NT 3.51 or 4, and ReFS has checksums.
  • DougMcC - Sunday, May 16, 2021 - link

    It's great and all, but Linux/Mac do most disk-intensive activities close to twice as fast. Mac and Linux have completely taken over at my company because of this.
  • GeoffreyA - Sunday, May 16, 2021 - link

    Haven't got much experience with Linux but yes, I believe its performance beats Windows on a lot of points.
  • Samus - Sunday, May 16, 2021 - link

    Microsoft really has to get with the times and launch ReFS on the client end already. NTFS is a joke compared to even legacy file systems like EXT3 and hasn't been updated in 20 years (unless you consider the journaling update starting with Windows 8)
  • GeoffreyA - Monday, May 17, 2021 - link

    Well, NTFS might not have been updated much, but you know what they say, if it ain't broke, don't fix it. It was quite advanced for its time. Still is solid. Had journalling from the start, Unicode, high-precision time, etc. Compression came next. Then in NT 5, encryption, sparse files, quotas, and all that. Today, the main things it's lacking are copy-on-write, de-duplication, and checksums for data. Microsoft seems to have downplayed ReFS, owing to some technical issues.
  • MyRandomUsername - Tuesday, May 18, 2021 - link

    Have you tried compression on NTFS (particularly on small files)? I/O performance on a high-end NVMe drive plummets to first-gen SSD level. Absolutely unusable.
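
    For anyone wanting to reproduce this: besides compact.exe, per-file NTFS compression can be toggled from code. A rough sketch using the documented FSCTL_SET_COMPRESSION ioctl (error handling omitted; the path is just a placeholder):

        #include <windows.h>
        #include <winioctl.h>

        // Enable NTFS compression on one file via the FSCTL_SET_COMPRESSION ioctl
        // (the same thing "compact /c" does). Error handling omitted for brevity.
        void CompressOneFile(const wchar_t* path)
        {
            HANDLE h = CreateFileW(path, GENERIC_READ | GENERIC_WRITE, 0, nullptr,
                                   OPEN_EXISTING, 0, nullptr);
            USHORT format = COMPRESSION_FORMAT_DEFAULT;   // LZNT1
            DWORD bytesReturned = 0;
            DeviceIoControl(h, FSCTL_SET_COMPRESSION, &format, sizeof(format),
                            nullptr, 0, &bytesReturned, nullptr);
            CloseHandle(h);
        }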
  • GeoffreyA - Wednesday, May 19, 2021 - link

    Haven't got an NVMe drive, but I'll try some experiments and see how it goes. Could be that many small files stagger any SSD.
  • mode_13h - Tuesday, May 18, 2021 - link

    > copy-on-write, de-duplication

    A huge use case for that is snapshots. They're my favorite feature of BTRFS.
  • GeoffreyA - Wednesday, May 19, 2021 - link

    Glancing over it, Btrfs looks impressive.
  • mode_13h - Thursday, May 20, 2021 - link

    Copy-on-write can cause problems, in some cases. BTRFS lets you disable it on a per-file, per-directory, or per-subvolume basis.

    One feature of BTRFS I haven't touched is its built-in RAID functionality. I've always used it atop a hardware RAID controller or even a software RAID. And if you're using mechanical disks, software RAID is plenty fast, these days.
  • GeoffreyA - Thursday, May 20, 2021 - link

    Whenever there's sharing of this sort, there's always trouble round the corner.
  • mode_13h - Friday, May 21, 2021 - link

    > Whenever there's sharing of this sort, there's always trouble round the corner.

    Maybe. I think the issue is really around pretending you have a unique copy, when it's really not. In that sense, it's a little like caches -- usually a good optimization, but there's pretty much always some corner case where you hit a thrashing effect and they do more harm than good.
  • GeoffreyA - Sunday, May 23, 2021 - link

    "I think the issue is really around pretending you have a unique copy, when it's really not."

    You hit the nail there. A breakdown between concept ("I've got a unique copy") and implementation. And so the outside world, tying itself to the concept, runs into occasional trouble.
  • DominionSeraph - Thursday, May 13, 2021 - link

    Try an optimized OS like XP. There's really no difference.
  • philehidiot - Thursday, May 13, 2021 - link

    I do actually have windows 95 installed as a VM, running off an SSD. If you want to really understand how bloated and sluggish Windows 10 is, try using Windows 95 and see how far they have regressed in pursuit of looking pretty.
  • GeoffreyA - Friday, May 14, 2021 - link

    Even XP feels faster, on an older computer, than 10. Vista is where the sluggishness crept in.
  • GeoffreyA - Friday, May 14, 2021 - link

    Also, software in general has become more sluggish, owing to excessive use of abstractions, frameworks, and modern languages.
  • jospoortvliet - Friday, May 14, 2021 - link

    Software has become vastly more complex as users demand more and more features and slick interfaces. Platforms also evolve faster, and more of them need to be supported. Developers have less time per feature, so more abstractions and higher-level languages are needed. You can't write code running in a browser that's as efficient as good old assembly, since it has to run everywhere; and even if you could, you'd lose to a competitor who shipped more features with fewer developers.

    So yeah, you are right but it is a trend that is hard to reverse.
  • GeoffreyA - Friday, May 14, 2021 - link

    Quite true, but one can't help feeling a pang of regret when looking at today's applications vs. those rare C/C++ Win32 ones that, as they say, just fly.
  • FunBunny2 - Saturday, May 15, 2021 - link

    "Quite true, but one can't help feeling a pang of regret when looking at today's applications vs. those rare C/C++ Win32 ones that, as they say, just fly."

    True fact. I used 1-2-3 pretty much from version 1, which was x86 assembler, as was DOS. Somewhere around 2.4 it was re-written in C (C++ didn't yet exist). The first time I fired up 1-2-3 2.4 (on a 640K 8088), what had been instant screen updates were now slow as molasses uphill in winter; you could see individual elements change, one by one.

    It appears to be the fact that the constant push and pull between node shrinks, more transistors, phatter CPUs, and more memory on the one hand, and software bloat on the other, doesn't balance out. I've always been sceptical of the ever-increasing number of 'tiers' in the memory hierarchy paired with load-store architectures. Mayhaps persistent memory will give us a true Single Level Storage that's more performant than just virtual storage/memory. Have to work out a new version of transaction control, though.
  • GeoffreyA - Saturday, May 15, 2021 - link

    Well, soon they'll need some big changes, when the quantum limits set by Nature are hit. As for the software, yes, it tends to get slower as time goes by. Any gains in hardware are quickly reversed. I think there's been a view inculcated against C++, instigated by Java perhaps, that it's not safe, it's bad, and so one needs to use a better, more modern language; or if C++, do things in an excessive object-oriented way, away from the lighter C sort of style. As in all of life, even "good" programming principles can be taken too far. So moderation is best.
  • FunBunny2 - Saturday, May 15, 2021 - link

    "if C++, do things in an excessive object-oriented way, away from the lighter C sort of style."

    C has been described as the universal assembler. Pretty much true, esp. if you limit the description to the bare language w/o the many libraries. A C program can be blazingly fast, if the code treats the machine as a Control Program would. But that's how the PC world was nearly extinguished in the late 80s and early 90s by viruses of all kinds. I'm among those who spent more time than I wanted editing with Norton Disk Doctor. Not an era I miss.
  • GeoffreyA - Sunday, May 16, 2021 - link

    Oh, yes, programs were doing their own thing, till OS's began to clamp down. As years went by, security got more attention too, as it should, and newer languages guaranteed different types of safety. An important point in this era where so much of our information is handled electronically. Or portability made easier, or maintenance.
  • mode_13h - Sunday, May 16, 2021 - link

    > programs were doing their own thing, till OS's began to clamp down.

    DOS was really PCs' biggest Achilles heel. It wasn't until Windows 2000 that MS finally offered a mainstream OS that really provided all the protections available since the 386 (some, even dating back to the 286).

    Even then, it took them till Vista to figure out that ordinary users having admin privileges was a bad idea.

    In the Mac world, Apple was doing even worse. I was shocked to learn that MacOS had *no* memory protection until OS X! Of course, OS X is BSD-derived and a fully-decent OS.
  • FunBunny2 - Monday, May 17, 2021 - link

    " I was shocked to learn that MacOS had *no* memory protection until OS X! "

    IIRC, until Apple went the *nix way, it was just co-operative multi-tasking, which is worth a box of Kleenex.
  • Oxford Guy - Tuesday, May 18, 2021 - link

    Apple had protected memory long before Microsoft did — and before Motorola had made a non-buggy well-functioning MMU to get it working at good speed.

    One of the reasons the Lisa platform was slow was because Apple had to kludge protected memory support.

    The Mac was originally envisioned as a $500 home computer, which was just above toy pricing in those days. It wasn’t designed to be a minicomputer on one’s desk like the Lisa system, which also had a bunch of other data-safety features like ECC and redundant storage of file system data/critical files — for hard disks and floppies.

    The first Mac had a paltry amount of RAM, no hard disk support, no multitasking, no ECC, no protected memory, worse resolution, a poor-quality file system, etc. But, it did have a GUI that was many many years ahead of what MS showed itself to be capable of producing.
  • mode_13h - Tuesday, May 18, 2021 - link

    > Apple had protected memory long before Microsoft did

    I mean in a mainstream product, enabled by default. Through MacOS 8, Apple didn't even enable virtual memory by default!

    > The first Mac

    I'm not talking about the first Mac. I'm talking about the late 90's, when Macs were PowerPC-based and MS had Win 9x & NT 4. Linux was already at 2.x (with SMP-support), BeOS was shipping, and OS/2 was sadly well on its way out.
  • mode_13h - Sunday, May 16, 2021 - link

    > C has been described as the universal assembler.

    It was created as a cross-platform alternative to writing operating systems in assembly language!

    > a C program can be blazingly fast, if the code treats the machine as a Control Program would.

    No, that's just DOS. C came out of the UNIX world, where C programs are necessarily as well-behaved as anything else. The distinction you're thinking of is really DOS vs. real operating systems!

    > I'm among those who spent more time than I wanted, editing with Norton Disk Doctor.

    That's cuz you be on those shady BBS' dog!
  • mode_13h - Sunday, May 16, 2021 - link

    > I think there's been a view inculcated against C++

    C++ is a messy topic, because it's been around for so long. It's a little hard to work out what someone means by it. STL, C++11, and generally modern C++ style have done a lot to alleviate the grievances many had with it. Before the template facility worked well, inheritance was the main abstraction mechanism. That forced more heap allocations, and the use of virtual functions often defeated compilers' ability to perform function inlining.

    It's still the case that C++ tends to hide lots of heap allocations. Where a C programmer would tend to use stack memory for string buffers (simply because it's easiest), the easiest thing in C++ is basically to put it on the heap. Now, an interesting twist is that heap overrun bugs are both easier to find and less susceptible to exploits than ones on the stack. So, what used to be seen as a common inefficiency of C++ code is now regarded as providing reliability and security benefits.

    Another thing I've noticed about C code is that it tends to do a lot of work in-place, whereas C++ does more copying. This makes C++ easier to debug, and compilers can optimize away some of those copies, but it does work to the benefit of C. The reason is simple: if a C programmer wants to copy anything beyond a built-in datatype, they have to explicitly write code to do it. In C++ the compiler generally emits that code for you.

    The last point I'll mention is restricted pointers. C has them (since C99), while C++ left them out. Allegedly, nearly all of the purported performance benefits of Fortran disappear, when compared against C written with restricted pointers. That said, every C++ compiler I've used has a non-standard extension for enabling them.
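
    A trivial sketch of what restrict buys the optimizer, using the non-standard __restrict spelling that GCC, Clang, and MSVC all accept (in C99 the keyword is restrict):

        // With __restrict the compiler may assume dst, a, and b never alias,
        // so the loop can be vectorized without reload-after-store checks.
        void add(float* __restrict dst,
                 const float* __restrict a,
                 const float* __restrict b,
                 int n)
        {
            for (int i = 0; i < n; ++i)
                dst[i] = a[i] + b[i];
        }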

    > if C++, do things in an excessive object-oriented way

    Before templates came into more common use, and especially before C++11, you would typically see people over-relying on inheritance. Since then, it's a lot more common to see functional-style code. When the two styles are mixed judiciously, the combination can be very powerful.
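
    As a rough sketch of the stylistic shift (names made up): where the old approach would define a Comparator interface with a virtual compare(), post-C++11 code usually just hands std::sort a lambda:

        #include <algorithm>
        #include <string>
        #include <vector>

        int main()
        {
            std::vector<std::string> names{"banana", "apple", "cherry"};

            // The comparison is a lambda the compiler can inline, rather than
            // a virtual call through some Comparator base class.
            std::sort(names.begin(), names.end(),
                      [](const std::string& a, const std::string& b) {
                          return a.size() < b.size();   // sort by length
                      });
        }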
  • GeoffreyA - Monday, May 17, 2021 - link

    Yes! I was brought up like that, using inheritance, though templates worked as well. Generally, if a class had some undefined procedure, it seemed natural to define it as a pure virtual function (or even a blank body), and let the inherited class define what it did. Passing a function object, using templates, was possible but felt strange. And, as you said, virtual functions came at a cost, because they had to be resolved at run-time.

    Concerning allocation on the heap, oh yes, another concern back then because of its overhead. Arrays on the stack are so fast (and combine those buggers with memcpy or memmove, and one's code just burns). I first started off using string classes, but as I went on, switched to char/wchar_t buffers as much as possible---and that meant you ended up writing a lot of string functions to do x, y, z. And learning about buffer overruns, had to go back and rewrite everything, so buffer sizes were respected. (Unicode brought more hassle too.)

    "whereas C++ does more copying"

    I think it's a tendency in C++ code, too much is returned by value/copy, simply because of ease. One can even be guilty of returning a whole container by value, when the facility is there to pass by reference or pointer. But I think the compiler can optimise a lot of that away. Still, not good practice.
  • mode_13h - Tuesday, May 18, 2021 - link

    > though templates worked as well

    It actually took a while for compilers (particularly MSVC) to be fully conformant in their template implementations. That's one reason they took longer to catch on -- many programmers had gotten burned in early attempts to use templates.

    > Passing a function object, using templates, was possible but felt strange.

    Templates give you another way to factor out common code, so that you don't have to force otherwise unrelated data types into an inheritance relationship.
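
    For example, a hypothetical maxOf() works on any type that defines operator<, with no inheritance relationship between those types:

        // Works for int, double, std::string, or any user-defined type with
        // operator<, and none of them need to share a base class.
        template <typename T>
        const T& maxOf(const T& a, const T& b)
        {
            return (a < b) ? b : a;
        }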

    > I think it's a tendency in C++ code, too much is returned by value/copy, simply because of ease.

    Oh yes. It's clean, side effect-free and avoids questions about what happens to any existing container elements.

    > One can even be guilty of returning a whole container by value, when the facility is there
    > to pass by reference or pointer. But I think the compiler can optimise a lot of that away.

    It's called (N)RVO and C++11 took it to a new level, with the introduction of move-constructors.

    > Still, not good practice.

    In a post-C++11 world, it's now preferred. The only time I avoid it is when I need a function to append some additional values to a container. Then, it's most efficient to pass in a reference to the container.
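
    Roughly the two patterns I mean (names are made up):

        #include <vector>

        // Preferred post-C++11: return by value. NRVO or the move constructor
        // makes this cheap -- no deep copy of the vector's storage.
        std::vector<int> makeSquares(int n)
        {
            std::vector<int> v;
            for (int i = 0; i < n; ++i)
                v.push_back(i * i);
            return v;
        }

        // The exception: appending to an existing container, where taking a
        // reference avoids rebuilding the whole thing.
        void appendSquares(std::vector<int>& out, int n)
        {
            for (int i = 0; i < n; ++i)
                out.push_back(i * i);
        }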
  • GeoffreyA - Wednesday, May 19, 2021 - link

    "many programmers had gotten burned in early attempts to use templates"

    It could be tricky getting them to work with classes and compile. If I remember rightly, the notation became quite unwieldy.

    "C++11 took it to a new level, with the introduction of move-constructors"

    Interesting. I suppose those are the counterparts of copy constructors for an object that's about to sink into oblivion. Likely, just a copying over of the pointers (or of all the variables if the compiler handles it)?
  • mode_13h - Thursday, May 20, 2021 - link

    > > "many programmers had gotten burned in early attempts to use templates"

    > It could be tricky getting them to work with classes and compile.

    I meant that early compiler implementations of C++ templates were riddled with bugs. After people started getting bitten by some of these bugs, I think templates got a bad reputation, for a while.

    Apart from that, it *is* a complex language feature that probably could've been done a bit better. Most people are simply template consumers and maybe write a few simple ones.

    If you really get into it, templates can do some crazy stuff. Looking up SFINAE will quickly take you down the rabbit hole.
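
    A small taste of it, using std::enable_if (C++11); the function names here are made up:

        #include <type_traits>

        // A failed substitution in the enable_if just removes that overload from
        // the candidate set -- it is not a compile error (SFINAE).
        template <typename T,
                  typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
        bool isOdd(T x) { return x % 2 != 0; }

        template <typename T,
                  typename std::enable_if<std::is_floating_point<T>::value, int>::type = 0>
        bool isOdd(T) { return false; }   // a float is never "odd"

        // isOdd(3) -> true, isOdd(2.5) -> false,
        // isOdd("hi") -> no viable overload at all.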

    > If I remember rightly, the notation became quite unwieldy.

    I always used a few typedefs, to deal with that. Now, C++ expanded the "using" keyword to serve as a sort of templatable typedef. The repurposed "auto" keyword is another huge help, although some people definitely use it too liberally.
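
    Something along these lines (made-up names):

        #include <map>
        #include <string>
        #include <vector>

        // The old typedef and the C++11 "using" form; unlike typedef,
        // a using-alias can itself be a template.
        typedef std::map<std::string, std::vector<int>> IndexOld;
        using Index = std::map<std::string, std::vector<int>>;

        template <typename T>
        using Grid = std::vector<std::vector<T>>;   // alias template

        int main()
        {
            Index idx;
            idx["primes"] = {2, 3, 5, 7};

            // auto spares us spelling out the iterator type.
            for (auto it = idx.begin(); it != idx.end(); ++it)
                it->second.push_back(11);
        }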
  • GeoffreyA - Thursday, May 20, 2021 - link

    "templates can do some crazy stuff. Looking up SFINAE will quickly take you down the rabbit
    hole"

    I gave it a try and, Great Scott, it's already looking like Mad Hatter territory. Will take a while to decipher all that. Even "using" and "auto" are starting to look puzzling.

    "I always used a few typedef"

    typedefs were a must to combat all those colons and endless right angle brackets.
  • mode_13h - Thursday, May 20, 2021 - link

    > > move-constructors"

    > I suppose those are the counterparts of copy constructors
    > for an object that's about to sink into oblivion.

    This touches on something very interesting about C++, which is that certain operations on objects have well-specified semantics and the compiler is allowed to make substitutions on that basis. This is very un-C-like, where the compiler only calls the functions you tell it to. Sure, it can optimize *out* some functions, but there's never a case where it just decides to call something different (but semantically equivalent) to what you coded.

    A move constructor (or move assignment) is allowed to assume that the only subsequent operation on the original object is destruction. So, if an existing object owns some heap-allocated memory, it can be transferred to the new object. However, it's not required to do so -- copying the data is also valid. In any case, the original object must be left in some state that still allows its destructor to successfully execute.
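
    A bare-bones sketch with a made-up Buffer class:

        #include <cstddef>

        class Buffer {
            char*       data_ = nullptr;
            std::size_t size_ = 0;
        public:
            explicit Buffer(std::size_t n) : data_(new char[n]), size_(n) {}

            // Move constructor: steal the allocation, then leave the source in
            // a valid empty state so its destructor still runs safely.
            Buffer(Buffer&& other) noexcept
                : data_(other.data_), size_(other.size_)
            {
                other.data_ = nullptr;
                other.size_ = 0;
            }

            ~Buffer() { delete[] data_; }   // deleting nullptr is a no-op

            Buffer(const Buffer&) = delete;             // keep the sketch minimal
            Buffer& operator=(const Buffer&) = delete;
        };

        // Usage: Buffer a(1024); Buffer b(std::move(a));
        // b now owns the memory; a is empty but still destructible.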
  • GeoffreyA - Thursday, May 20, 2021 - link

    "the original object must be left in some state that still allows its destructor to successfully execute"

    I think I'd copy the pointers or handles over and set them to null in the original object. That ought to do it. Let's hope they don't use move constructors when they start "copying" people or things. Might be painful.
  • mode_13h - Friday, May 21, 2021 - link

    > I think I'd copy the pointers or handles over and set them to null in the original object.

    Exactly. Transfer ownership to the new object and set the original to its empty state. That's the typical approach. And, for any data members that have their own move constructors, you invoke those.

    > Let's hope they don't use move constructors when they start "copying" people or things. Might be painful.

    Kind of like the Star Trek "transporter", though. Getting back into the familiar realm of metaphysics, I'd never send myself through one. I believe you'd die and simply create a copy who thinks they're you.
  • GeoffreyA - Sunday, May 23, 2021 - link

    I think so too and don't like the idea of copy + destroy == teleport. The "clone thinks it's me" motif brings up moral questions. If I were cloned, who is the real me? Certainly, the original; but from the clone's point of view, he's the main fellow and is out to prove it. I suspect cloning hints at a breaking down of our everyday notion of self as unique instance. Three Eiffel Towers aren't a problem but would be a strange sight.

    I feel this whole thing hints at something deeper in reality. Conceivably, "move" might be impossible to implement at some primitive level. Perhaps all moves, in the universe, were implemented as copy + delete (or reassigning pointers). Even the flow of time could have been done this way, constantly copying, with changes, and throwing away the old. Taken further, I reckon that "move" could be a high-level concept; and at some pre-spacetime level, there's no location/locality.
  • mode_13h - Sunday, May 23, 2021 - link

    > I feel this whole thing hints at something deeper in reality

    I'm not qualified to comment on that, but it reminds me of the FSA theory of spacetime.

    Also, reminds me of the recent discovery that quantum leaps aren't instantaneous, as previously thought. I'm pretty sure I didn't even know they were supposed to be instantaneous.
  • GeoffreyA - Sunday, May 23, 2021 - link

    I remember reading about that, not too long ago, and being pleasantly surprised that there was some touch of determinism to it as well. That was a revelation.

    "instantaneous, as previously thought"

    Not too sure about quantum leaps but think that comes from collapse of the wave function, which is supposedly an instantaneous, non-local process. Some interpretations reject collapse though.
  • GeoffreyA - Monday, May 17, 2021 - link

    I am pre-C++11 and out of touch with programming in general, sadly. And this may seem madness but I'm still using VC++ 6.0, during those rare times I touch a bit of code.

    I see C++ as a beautiful, potent language (along with the STL), despite its messiness. Its data abstraction and hiding mechanisms offer real advances over C. But a tincture of the latter's philosophy will add much to any C++ program. And I reckon that templates are where its real power lies. I mean, the idea of some function or iterator knowing nothing about some object, yet being able to operate on it. Some QuickSort could be sorting something it hasn't got a clue about, yet works because the objects defined the comparison operators. I've felt there's something strangely haunting about some of these mechanisms in C++, especially templates and virtual functions, as if they bore some elusive analogy to the mechanisms underlying reality. Who knows?
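
    Something like this, say, with a made-up Album type -- std::sort never learns what an Album is, only that two of them can be compared:

        #include <algorithm>
        #include <string>
        #include <vector>

        struct Album {
            std::string title;
            int year;
            // The only thing the sort ever asks of our type.
            bool operator<(const Album& other) const { return year < other.year; }
        };

        int main()
        {
            std::vector<Album> shelf;
            shelf.push_back(Album{"Abbey Road", 1969});
            shelf.push_back(Album{"Kind of Blue", 1959});
            std::sort(shelf.begin(), shelf.end());   // generic code, our comparison
        }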
  • mode_13h - Tuesday, May 18, 2021 - link

    > I'm still using VC++ 6.0

    OMG. Do yourself a favor and check out MS Visual Studio Community Edition.

    https://visualstudio.microsoft.com/vs/community/

    I don't have any experience with it, as I use GCC (and now Clang) directly, but I'm betting you'll never go back to VC++ 6.0, after you try it.

    > I am pre-C++11

    https://en.cppreference.com/w/

    It actually has references for both C and C++. MSDN now has all their C & C++ language + standard library references online, too.

    However, when I want to write something simple, I usually reach for Python. It's not the simple language you could learn in an afternoon, like it was 2 decades ago, but you can quickly pick up enough to be off and running in very little time, indeed.
  • GeoffreyA - Wednesday, May 19, 2021 - link

    Much obliged!

    I actually tried VC++ 2010 some years ago, the Express version. Heard of the Community Edition too, and thought it was just another Express; but looking at it now, I see that's not the case. Who knew MS had got so generous? Well, I'm excited and will certainly give it a try when I can. Hopefully, import or recreate my 6.0 projects. And thanks for that language reference as well. I had always overlooked it and relied on the MSDN docs. It looks good. (Funny enough, I see that C++11 added an array<T, n> class. I remember I wrote my own long ago and the interface turns out to be roughly the same as that one.)
  • mode_13h - Sunday, May 16, 2021 - link

    > somewhere around 2.4 it was re-written in C (C++ didn't yet exist). the first time
    > I fired up 2.4 1-2-3 (on a 640K 8088) what had been instant screen updates were now slow

    Early C compilers weren't good at optimizations. Also, 16-bit x86 had that mess with NEAR and FAR pointers. Basically, you needed a segment + offset (each 16 bits) to address beyond 64k. Where the ASM had probably been doing a lot of memory optimizations to pack lots of stuff into a single segment, maybe the C version just used FAR pointers and heap-allocated memory, for most things.

    Back in the day, I preferred 320x200 VGA resolution over 320x240, even though the latter had square pixels, precisely because I could fit the former in a single 64k segment.

    > it appears to be the fact that the constant push and pull between node shrink, more transistors,
    > phatter cpu, more memory on the one hand and software bloat on the other doesn't balance out.

    There are also software optimizations happening at the same time as hardware. Compilers are already on a different planet, compared to those days! Even in the mid-90's, I knew a guy in the MIPS compiler group at SGI who said they considered it a bug if you could write assembly that was faster than the equivalent C.

    Moving on from C, just-in-time compilation in browsers has been the norm for more than a decade. And performance-intensive software like games and video codecs often gets special attention paid to finding and optimizing its performance hot spots.

    However, we have more and higher-level languages than ever before, and you do see people using them for things they'd have previously done with C or C++. Then again, C lacks good support for abstract data structures, which means that either it uses worse algorithms, it's more buggy, or it takes a lot longer to write (sometimes all 3).

    Even as progress on hardware performance continues to slow, I think software optimizations will continue. That doesn't mean everything will get uniformly faster, as some key software is already close to the theoretical limits of the hardware. It does mean that the overall experience should still improve, a bit.
  • Reflex - Saturday, May 15, 2021 - link

    This is a common misconception about XP. Yes, it 'feels' faster. The main reason for that is it lies about what it's doing. It has no concept of large caches on drives, network cards, and CPUs, so the GUI shows tasks as complete before the cache is flushed. A large part of the perceived sluggishness of Vista was the major update to dialog boxes like file copy, to ensure that when a job was reported as done, it was done. Reporting complete when a cache is not flushed is a way to end up with corrupted data.
  • GeoffreyA - Saturday, May 15, 2021 - link

    You're right. Excellent point. I forgot about Vista's improved accuracy in reporting. Having said that, I'd still say XP, being a simpler, more primitive OS, was lighter on the whole. Also, the DWM doubtless added a lot more overhead than GDI.
  • Spunjji - Monday, May 17, 2021 - link

    @GeoffreyA - DWM is undoubtedly heavier than GDI, but GDI was pretty buggy and slow in its own ways. I still remember the revelation of moving a window around at speed in Vista and having it just move over things, rather than leaving behind big white gaps to be filled in at leisure 😅
  • GeoffreyA - Monday, May 17, 2021 - link

    "rather than leaving behind big white gaps to be filled in at leisure"

    Oh, yes, when we were youngsters, we used to consider it a mark of a fast computer to move a window about with ease. Most left those delayed-action white gaps in their wake.
  • Spunjji - Friday, May 14, 2021 - link

    I see these comments a lot, but having used every Windows OS from 3.11 onwards, I would take "bloated and sluggish" Windows 10 over anything that preceded it - whether it's the half-DOS configuration nightmare of 95, the blue-screen happiness of 98, XP's inability to recognise now-basic hardware like SATA and WiFi controllers, or 7's inability to boot on anything other than the exact hardware on which it was installed.

    It's all a lovely happy dream when it's abstracted behind a VM, but setting up 95 on actual hardware was (and remains) an extended nightmare of CDs, floppy disks and low-level tweaking.
  • GeoffreyA - Friday, May 14, 2021 - link

    Spunjji, I generally agree and am happy using Windows 10; and I say this as one who used to hate it. Truth is, 10 is Windows all the way through, along with many improvements (especially the copy dialog and Task Manager of 8). I wouldn't say it's bloated. It's lighter, relatively speaking, than Vista; and concerning its appearance, I'm glad they got rid of Aero. Easier on the eye. Actually, it looks closer to XP. Of course, it's not as "snappy" as XP, but the culprit there is Vista. And we'd hope that loss in speed was made up for in the security department. Personally, though, my favourite was XP. I think it'll go down in history as a classic.
  • Spunjji - Monday, May 17, 2021 - link

    @GeoffreyA - XP was a revelation on launch, and it does retain some charm to this day. I think it wore thin for me simply because it outstayed its welcome; I had the unenviable experience of hacking it onto new systems for business customers long after everyone with an ounce of sanity had already migrated to Windows 7. XP definitely has more of a sense of immediacy in use than 10, but then 10 boots like it has a rocket strapped to it!
  • GeoffreyA - Monday, May 17, 2021 - link

    Agreed; and as for booting, full marks for Windows 10! It's fantastic in that regard. After XP, 10 is my second favourite, actually (after tweaking, that is).
  • mode_13h - Tuesday, May 18, 2021 - link

    > 10 boots like it has a rocket strapped to it!

    Isn't it really just like coming out of hibernation, unless you force it to do a full boot?
  • GeoffreyA - Friday, May 14, 2021 - link

    By the way, methinks the worst nightmare was ME.
  • Spunjji - Monday, May 17, 2021 - link

    @GeoffreyA - agreed, it was somehow even worse than the first release of 98.
  • mode_13h - Tuesday, May 18, 2021 - link

    Lucky for me, I skipped all the bad Windows releases: 98, ME, Vista, and Win 8.

    I went 95 -> NT 4 -> 2k -> XP -> 7 -> 10.
  • sonny73n - Friday, May 14, 2021 - link

    There's nothing pretty about Windows 10. Just more and more malicious code, aka spyware, added for Big Brother.
  • Mdarrish - Friday, May 14, 2021 - link

    It’s the old classic: Intel (and even more so now, AMD) giveth and Microsoft taketh away. If they would stick to making a better operating system, they might be able to reduce the bloat and giveth back. They never will do that, of course. They are too busy trying to kill off anything remotely resembling a competitor to any of their products to put out the best products.
  • GeoffreyA - Friday, May 14, 2021 - link

    Don't worry. Intel and AMD will raise performance, and software developers will take it back in no time.
  • pSupaNova - Saturday, May 15, 2021 - link

    Total rubbish, try out WSL 2 and then repeat what you just said with a straight face. Windows 10 is a powerhouse of an operating system!
  • Billy Tallis - Saturday, May 15, 2021 - link

    How is WSL 2 any kind of argument in favor of Windows? It only exists because Microsoft couldn't fix all the fundamental Windows problems that were getting in the way of WSL 1 working well, so they had to switch to a full VM instead.
  • mode_13h - Saturday, May 15, 2021 - link

    Microsoft is now less focused on being anti-competitive and more focused on selling cloud services and harvesting its users' data.

    Contrary to my experience on Win 7, I've hated every minute of 10. It's far more heavy-weight and the instability problems never end.

    My previous approach to Windows upgrades was to wait until a couple service packs got released and things generally settled down. With Windows 10, you cannot do that. They basically force you to take their rolling releases.

    And I don't know what they're adding in these releases, but I know most of it is of no benefit to me. It's all just so much spy-ware and support for cloud services that I don't want or need.
  • GeoffreyA - Sunday, May 16, 2021 - link

    That's the bad part about 10, all the cloud nonsense they're trying to force onto people and things running in the background. At any rate, ShutUp10 is a nice tool that can help to curtail some of that; and from an aesthetic point of view, to get it looking more like < Windows 8, I installed Open-Shell, which re-creates the classic Start menu. Also got rid of that search box from the taskbar, brought back Quick Launch, sent Cortana into exile, renamed "This PC" to "My Computer," minimised the Ribbon, and it feels almost like home again :)
  • teldar - Thursday, May 13, 2021 - link

    This seems ridiculous? OS queries are typically non-sequential. This limits the performance of everything. It's not like it's unknown that the next real step should be combined storage and RAM, something that Optane was supposed to do.
  • Billy Tallis - Friday, May 14, 2021 - link

    > OS queries are typically non sequential.

    To quantify this a bit, when I had Windows Explorer tally up the disk usage of my photos folders, about 35% of the IOs were sequential, and just over half at QD1. That should have been operating mostly on file metadata, rather than sequentially reading the actual file contents.
  • Valantar - Thursday, May 13, 2021 - link

    Synthetic? What? The main ATSB test suite here consists of 100% real-world application traces. Sure, there are synthetics too, but the reviews are pretty explicit in pointing out that the value of those tests is mostly academic; they serve to illustrate why we see the differences we do in the ATSB tests.
  • mode_13h - Thursday, May 13, 2021 - link

    Please ignore the troll.

    Synthetics are useful for testing manufacturer claims, exploring corner cases, boundary conditions, and the outer performance envelope.

    Real-world benchmarks give users some idea of what they can expect and show how the characteristics revealed in the synthetics relate to practical user experience.

    Without reviews like the ones at Anandtech, manufacturers would be trying to get away with a lot more sketchy stuff, I fear, and making even more outlandish claims.
  • jabber - Thursday, May 13, 2021 - link

    I have to say the upgrade from SSD to NVMe was one of the biggest disappointments in my computing life. There is a law of diminishing returns in day to day performance once you reach and go past 500MBps.
  • mode_13h - Thursday, May 13, 2021 - link

    Do you use antivirus? On the Windows PC I use for work, they have it so bogged down with security software that I might as well be using a hard drive!

    On Linux, I saw a marked difference between SATA and NVMe.
  • Tomatotech - Friday, May 14, 2021 - link

    It depends on your OS & system. On my MacBooks, I saw a large difference going from an old but adequate 2013-era 128GB SSD to a 1TB 2019 NVMe drive. The new NVMe was literally 3x faster in every way than the old SSD, and it made a huge difference to the laptop. Felt like new again.

    Like most things in tech, it takes around a doubling in speed for the difference to be noticeable to the user.
  • mode_13h - Thursday, May 13, 2021 - link

    > Until Microsoft unshackles Windows from the magnetic Hard disk era

    Please explain what you mean by this. Also, not all of us are using Windows.

    > I suppose these reviews and their hairsplitting synthetic benchmarks get clicks.

    If you feel it's not relevant to you, then please *don't* click. And spare us your whining, as well!

    I'm happy to have this review. I appreciate seeing the detailed performance analysis, as well as how much light it sheds on SSDs, their inner workings, and the overall SSD industry.
  • FunBunny2 - Saturday, May 15, 2021 - link

    "Also, not all of us are using Windows."

    So far as AT management goes, yes we are. How much ink is devoted to anything else, aka Linux?
  • Billy Tallis - Saturday, May 15, 2021 - link

    All of the synthetic tests in our SSD reviews are using Linux.
  • FunBunny2 - Saturday, May 15, 2021 - link

    "All of the synthetic tests in our SSD reviews are using Linux."

    I was specifically asking about user-space tests.
  • Billy Tallis - Saturday, May 15, 2021 - link

    > user-space

    No definition of that term that I'm aware of is relevant here. What did you mean?
  • FunBunny2 - Sunday, May 16, 2021 - link

    "What did you mean?"

    comment/question wasn't SSD specific, but testing regime generally.
  • Mdarrish - Friday, May 14, 2021 - link

    That is nonsense. I've replaced spinning disks with SSDs, and even SATA SSDs significantly speed up performance.
  • GeoffreyA - Saturday, May 15, 2021 - link

    Big difference going from hard drive to ordinary SSD, especially booting.
  • pugster - Monday, May 17, 2021 - link

    Why is it Microsoft's fault? Magnetic hard drives will be used for at least a few more years because they are cheaper than SSDs.
  • Spunjji - Thursday, May 13, 2021 - link

    High-performance SSDs just don't seem worth the extra cost for most of us. I'll be keeping an eye on how things develop with DirectStorage and the like, but for the time being I'm feeling happy enough even with my old Samsung PM951.
  • GeoffreyA - Thursday, May 13, 2021 - link

    Yes. Ordinary SSDs covered most of the ground, advancing from hard drives, and now the gains aren't doing much.
  • boozed - Thursday, May 13, 2021 - link

    The big advance for consumer applications was access time; everything else is just icing.
  • mode_13h - Thursday, May 13, 2021 - link

    > High-performance SSDs just don't seem worth the extra cost for most of us.

    It depends on what you do.
  • Spunjji - Friday, May 14, 2021 - link

    Yes, that's why I said "most of us". I know there are use cases that benefit from them, but the majority of the time they don't apply to a majority of users.
  • jabber - Friday, May 14, 2021 - link

    Yeah you are correct, 'most users' do not notice and most don't need the speed but you will always get the one or two who, for some reason, choose to work with 8K video all day and think they are 'most users'.
  • James5mith - Thursday, May 13, 2021 - link

    Please add one of the new 5800x Optanes into the mix. I would like to see what Optane + PCIe4.0 can do as far as sustained performance.

    It's really sad that none of these drives can maintain performance except for the old 970 Pro and the Optane.
  • mode_13h - Thursday, May 13, 2021 - link

    > Please add one of the new 5800x Optanes into the mix.

    It would be amusing to see a 4.3M IOPS drive compared to consumer SSDs. However, the pricing on those things is pretty nuts. If such performance is worthwhile for you, you already know you want one.

    That said, I'm always interested in seeing cutting edge tech put to the test.
  • Tomatotech - Friday, May 14, 2021 - link

    > maintain performance

    Absolutely not needed for the average price-sensitive user. It’s like buying a 40-ton truck instead of an average family car. And the average family car will outperform the truck in acceleration, top speed, fuel economy, and ease of storage / parking.

    There’s many drives available for people who need 100% sustained performance or other industrial metrics and they’re going to cost a bit more.
  • DigitalFreak - Thursday, May 13, 2021 - link

    I'm waiting on DirectStorage before I upgrade.
  • mode_13h - Thursday, May 13, 2021 - link

    Why?
  • RSAUser - Friday, May 14, 2021 - link

    Because there is no real-world gain for most people going from a normal PCIe 3 SSD to PCIe 4 SSD right now, they're within ms of each other.

    If DirectStorage starts taking off, it could maybe have a more noticeable impact in terms of texture loading, etc.
  • mode_13h - Saturday, May 15, 2021 - link

    I guess it also depends on what you're upgrading from.

    It amazes me how some people complain that it's not worth upgrading a CPU (for example), when they have one from the preceding generation! I'm like: well, join the crowd! Most of us keep hardware for *multiple* generations.

    So, I find it's also worth keeping in mind that people complaining that it's not worth upgrading might be bringing unrealistic expectations.
  • thestryker - Thursday, May 13, 2021 - link

    I'm not sure of the feasibility, but I'd be really curious to see how PCIe 4 SSD performance varies between AMD and Intel. I know some of Intel's nebulous marketing materials suggested RKL was better than the Ryzen alternatives, and while I highly doubt that, I haven't seen any reviewers test the same drive on both platforms.

    I'd also like to second that dream that you'll be able to run a P5800X through the test suite.
  • Billy Tallis - Thursday, May 13, 2021 - link

    Single-core performance can help with a lot of synthetic storage benchmarks, by making for faster context switches and system calls. But if you care about such marginal improvements, I suspect we would find that dropping Windows and using Linux instead will have a far greater impact on storage performance and OS overhead.

    I don't recall any of the PCIe 4.0 SSD controller vendors complaining about AMD's PCIe implementation being a bottleneck.
  • mode_13h - Thursday, May 13, 2021 - link

    @thestryker is right that Intel claimed faster PCIe 4 SSD performance than the competition, in one of their Rocket Lake slides. I think it was like 20%, but now I can't find the slide.

    I was so struck by it that I clearly remember it, and was wondering if they were talking about a PCIe 4.0 drive connected to Ryzen via its chipset link. Because that's the only way it made sense to me.
  • GeoffreyA - Friday, May 14, 2021 - link

    "connected to Ryzen via its chipset link"

    That's a possibility.
  • Spunjji - Friday, May 14, 2021 - link

    Ryan Shrout released the information in February, and it was 11%. The claim was based on performance from PCMark 10's "quick" storage benchmark. Apparently the drives being tested were connected to a riser card in a secondary PCIe slot, which was an odd decision, as X570 supports connecting the SSD directly to the CPU via the M.2 slots.

    It looks like they found a benchmark that favoured their setup specifically and went with it.
  • Slash3 - Friday, May 14, 2021 - link

    Rocket Lake itself also has a dedicated CPU connected NVMe M.2 slot. The whole setup was just absurd.
  • carcakes - Thursday, May 13, 2021 - link

    Experience the Best of Both Worlds: 8x M.2 Ports @ x16 PCIe 4.0 Speed!

    1x HighPoint SSD7540 PCIe Gen4 x16 8-Port M.2 NVMe RAID Controller + 8x ASRock Legacy M.2 Graphics Card.
  • mode_13h - Thursday, May 13, 2021 - link

    That's more expensive, chews up PCIe lanes, and can only hurt read latency. Plus, having faster SSDs to put in a RAID makes such configurations even faster!
  • Dug - Thursday, May 13, 2021 - link

    So what you are really saying is, buy the WD SN850 instead of this.
  • Oxford Guy - Friday, May 14, 2021 - link

    Looks like the ADATA is the price-performance winner for budget buyers.
  • Alexvrb - Thursday, May 13, 2021 - link

    When the 176L-equipped models with tuned firmware roll around, they just might take the crown.

    Then again, until hardware-accelerated DirectStorage titles start coming out, I don't think there's much benefit for me. Even then, only for titles that have some extremely large assets that need to be streamed in and don't fit in RAM... DS is far more beneficial for consoles since they need to save money wherever possible - mainly RAM.
  • RSAUser - Friday, May 14, 2021 - link

    Even then, storage is substantially cheaper than RAM, but it will be interesting to see if e.g. 64-128GB RAM configs will become a more common thing (since DDR5 allows 64Gb per memory die vs 16Gb per die on DDR4).
  • oRAirwolf - Thursday, May 13, 2021 - link

    Great article as always. I do wish Anandtech would add some real world performance numbers like Windows load times, game load times, file transfer speeds, etc.
  • jospoortvliet - Friday, May 14, 2021 - link

    That is exactly what the trace tests on page 2 are.
  • Spunjji - Friday, May 14, 2021 - link

    Those kind of tests aren't going to show any noticeable differences. I'm saying this as someone who has personally messed around with configurations like having 6 SATA 3 SSDs in RAID-0, various flavours of NVMe, etc.
  • mode_13h - Saturday, May 15, 2021 - link

    > having 6 SATA 3 SSDs in RAID-0

    Depends on your controller and how it's connected. I have a fileserver with 3x SATA SSDs in a RAID-5, and my bottleneck is the DMI link.
  • Spunjji - Monday, May 17, 2021 - link

    Sort-of, and sort-of not - you'll get lower performance on devices connected over a chipset link than directly, but in terms of Windows and game load times you're rarely going to see more than single-second differences.

    For the record, my 6-drive array was connected directly to the CPU via a PCIe 3.0 8x RAID card. It would be handily outperformed by a modern ~1TB NVMe drive, and the RAID BIOS initialization time easily eclipsed the minor difference it made to Windows load times over a single drive. I didn't keep it around for long - it was just a thing I tried because I ended up with a bunch of 256GB SATA SSDs and some spare time.
  • edzieba - Monday, May 17, 2021 - link

    I'd love to see the recent crop of "New, Faster PCIe 4.0!" drives be tested on both PCIe 4.0 and PCIe 3.0 (on the same system, just with the bus capped) to separate meaningful improvements in drive controller performance from improvements due to the link rate increase.
    I suspect that the majority of the performance gain from new drives is down to using newer controllers, and those without PCIe 4.0-capable boards would see near identical performance.
  • KarlKastor - Tuesday, May 18, 2021 - link

    @Billy Tallis
    Can you please write the NAND manufacturer in the lists? You just write the number of layers. The difference between Toshiba and Micron NAND was sometimes quite huge in the past.
