The NVIDIA GeForce RTX 2080 Super Review: Memories of the Future
by Ryan Smith on July 23, 2019 9:00 AM EST
The Test
For the launch of the RTX 2080 Super, NVIDIA has rolled out a new set of drivers to enable the card: 431.56. These drivers don’t offer any performance improvements over the 431.15 drivers in our games, so the results are fully comparable.
Meanwhile, I've gone ahead and tossed the Radeon RX 5700 XT into our graphs. While it's aimed at a distinctly lower market with its $399 price tag, it has the potential to be a very strong spoiler here, especially for 1440p gaming.
CPU: Intel Core i9-9900K @ 5.0GHz
Motherboard: ASRock Z390 Taichi
Power Supply: Corsair AX1200i
Hard Disk: Phison E12 PCIe NVMe SSD (960GB)
Memory: G.Skill Trident Z RGB DDR4-3600 2 x 16GB (17-18-18-38)
Case: NZXT Phantom 630 Windowed Edition
Monitor: Asus PQ321
Video Cards: NVIDIA GeForce RTX 2080 Super Founders Edition, NVIDIA GeForce RTX 2080 Ti Founders Edition, NVIDIA GeForce RTX 2080 Founders Edition, NVIDIA GeForce RTX 2070 Super Founders Edition, NVIDIA GeForce GTX 1080, NVIDIA GeForce GTX 980, AMD Radeon VII, AMD Radeon RX 5700 XT, AMD Radeon RX Vega 64, AMD Radeon R9 390X
Video Drivers: NVIDIA Release 431.15, NVIDIA Release 431.56, AMD Radeon Software Adrenalin 2019 Edition 19.7.1
OS: Windows 10 Pro (1903)
Comments
Cellar Door - Tuesday, July 23, 2019 - link
The delta compression used by Nvidia is lossless.
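To illustrate the idea: below is a minimal sketch of lossless delta encoding, the general principle behind delta color compression. The actual hardware scheme is proprietary and operates on tiles of pixels; the function names here are purely illustrative.

```python
# Toy sketch of lossless delta encoding. Neighboring pixels tend to be
# similar, so the differences are small numbers that pack into fewer bits.

def delta_encode(pixels):
    """Store the first value, then only each difference from the previous one."""
    if not pixels:
        return []
    deltas = [pixels[0]]
    for prev, cur in zip(pixels, pixels[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    """Reverse the encoding exactly -- no information is lost."""
    pixels, acc = [], 0
    for d in deltas:
        acc += d
        pixels.append(acc)
    return pixels

scanline = [118, 119, 119, 120, 118, 118, 121]
encoded = delta_encode(scanline)          # [118, 1, 0, 1, -2, 0, 3]
assert delta_decode(encoded) == scanline  # round-trips perfectly: lossless
```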
notashill - Tuesday, July 23, 2019 - link
If memory bandwidth were "the" bottleneck, then the Radeon VII would be the fastest consumer-level GPU on the market by an enormous margin.
Samus - Tuesday, July 23, 2019 - link
Sadly, I think you are right. While it's commendable that AMD has always pushed higher memory capacities to the mainstream, their focus on memory bandwidth has never really paid off, and at a huge expense in die area for the larger memory controller, and an obvious energy efficiency deficit. This is why 3-channel memory was dropped in favor of a reversion back to two channels after the Intel X58 chipset. It would be years before we would move beyond two channels again, and even then, quad channel never became mainstream.

The reason is simple. Even on a single channel, Intel CPUs in particular show extraordinary memory performance. The controller is well optimized and cache hit rates are high. Likewise, Nvidia's use of excellent compression and optimized caches makes high memory bandwidth unnecessary.
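A rough back-of-envelope model of that point: caching and compression both cut the traffic that actually has to cross the memory bus. The hit rate and compression ratio below are hypothetical assumptions for illustration, not measured figures; only the 496 GB/s rated bandwidth is the card's public spec.

```python
# Sketch: how caching and compression multiply effective bandwidth.
raw_bw   = 496.0   # GB/s -- RTX 2080 Super's rated GDDR6 bandwidth
ratio    = 1.3     # assumed average delta-compression ratio (hypothetical)
hit_rate = 0.5     # assumed fraction of requests served on-chip (hypothetical)

# Only (1 - hit_rate) of shader requests reach DRAM, and those move
# compressed data, so the bus can sustain far more raw demand:
sustainable_demand = raw_bw * ratio / (1 - hit_rate)
print(f"~{sustainable_demand:.0f} GB/s of uncompressed, uncached demand")  # ~1290
```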
willis936 - Tuesday, July 23, 2019 - link
SISD benefits greatly from caching and ILP. SIMD doesn't need to run ILP to keep execution units busy, so it chews through memory bandwidth by comparison. There are also quickly diminishing returns on GPU cache size. GPUs have 20x the memory bandwidth of CPUs for a good reason: they use it.
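A quick roofline-style sanity check of that claim, using rounded public specs for the testbed CPU and the 2080 Super (the exact figures depend on clocks and are approximations):

```python
# Arithmetic intensity (FLOPs per byte) a kernel needs to stay compute-bound.
cpu_tflops, cpu_bw = 1.3, 57.6     # i9-9900K FP32 peak w/ AVX2, dual-channel DDR4-3600
gpu_tflops, gpu_bw = 11.1, 496.0   # RTX 2080 Super FP32 peak, 256-bit GDDR6

print(cpu_tflops * 1e12 / (cpu_bw * 1e9))   # ~23 FLOPs/byte
print(gpu_tflops * 1e12 / (gpu_bw * 1e9))   # ~22 FLOPs/byte

# The balance point is nearly identical -- the GPU's ~9x bandwidth exists to
# feed ~9x the compute. Starved to CPU-class bandwidth, it would stall on memory.
```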
flyingpants265 - Monday, July 29, 2019 - link
Somewhat related to the subject of compression... adaptive resolution is by far the best graphics technology I have ever seen. Render at 1800p, drop down to 1440p when below the target framerate, and upscale everything to 4K. No need to buy the highest-end graphics card anymore. If we had adaptive resolution when Far Cry 1 came out, there would have been no market for the 6800; you'd just use a 6600.

Combine that with checkerboarding for consoles, which is impressive in its own right by near-halving the workload. So render half of 1800p every other frame (the equivalent of about 2300x1300 pixels, so 1.44x the work of 1080p, not 4.0x) and get a generated 4K image.
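The arithmetic here roughly checks out, as the quick calculation below shows (assuming 1800p = 3200x1800 and 4K = 3840x2160):

```python
# Checking the pixel math in the comment above (width x height counts).
native_4k   = 3840 * 2160        # 8,294,400 px
full_1800p  = 3200 * 1800        # 5,760,000 px
native_1080 = 1920 * 1080        # 2,073,600 px

# Checkerboard rendering shades roughly half the samples each frame:
checkerboard_1800p = full_1800p / 2
print(checkerboard_1800p / native_1080)   # ~1.39x -- close to the cited 1.44x
print(native_4k / native_1080)            # 4.00x -- the cost of native 4K
```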
notashill - Tuesday, July 23, 2019 - link
Radeon VII has double the bandwidth for the same price, but it doesn't really help performance, at least in games. I think there has been more focus on effectively utilizing bandwidth because making the buses wider can get really expensive.
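For reference, the "double the bandwidth" figure falls out of the standard peak-bandwidth formula applied to the two cards' public memory specs:

```python
# Peak memory bandwidth = (per-pin data rate in Gbps) x (bus width in bits) / 8.
def peak_bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

print(peak_bandwidth_gbs(15.5, 256))    # RTX 2080 Super, GDDR6:  496 GB/s
print(peak_bandwidth_gbs(2.0, 4096))    # Radeon VII, HBM2:      1024 GB/s
```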
Smell This - Tuesday, July 23, 2019 - link
Hard to say . . . GDDR6 still has a good deal of *theoretical* bandwidth on the table, there is the economical 'ghetto-HBM2' from Sammy, and HBM3 in the short term.
We are likely to hear about Radeon **Navi-Instinct** pro cards this quarter, in addition to a Titan/Ampere 7nm HPC update. I'm thinking the trend will continue toward more efficient, 'wider' bandwidth and advances in compression algorithms, too.
wr3zzz - Tuesday, July 23, 2019 - link
How do these new cards draw so much more power than the GTX 980 under load, yet have lower load temperatures and noise? Are the new fans that good?

Ryan Smith - Tuesday, July 23, 2019 - link
Blower versus open-air (axial) cooler.

Betonmischer - Tuesday, July 23, 2019 - link
Absolutely, if you compare against the reference blower that Nvidia used prior to the RTX 20 series.