For AMD’s Radeon Technologies Group, 2018 was a bit of a breather year. After launching the Polaris architecture in 2016 and the Vega architecture in 2017, for 2018 AMD set about enjoying their first full year of Vega. Instead of having to launch a third architecture in three years, the company would focus on further expanding the family by bringing Vega's laptop and server variants to market. And while AMD's laptop efforts have gone in an odd direction, their Radeon Instinct server efforts have put some pep back in their figurative step, giving the company claim to the first 7nm GPU.

Following the launch of a late-generation product refresh in November, in the form of the Radeon RX 590, we had expected AMD's consumer side to be done for a while. Instead, AMD made a rather unexpected announcement at CES 2019 last month: the company would be releasing a new high-end consumer card, the Radeon VII (Seven). Based on their aforementioned server GPU and positioned as their latest flagship graphics card for gamers and content creators alike, the Radeon VII marks AMD's return to courting enthusiast gamers. The card launches today – on the 7th, appropriately enough – so we're taking a look at how AMD's latest measures up to the challenge.

On the surface, the Radeon VII would seem to be straightforward. The silicon underpinning the card is AMD's Vega 20 GPU, a derivative of the original Vega 10 that has been enhanced for scientific compute and machine learning, and built on TSMC's cutting-edge 7nm process for improved performance. An important milestone for AMD's server GPU efforts – it's essentially their first high-end server-class GPU since Hawaii all the way back in 2013 – Vega 20 is a chip AMD has been eager to show off throughout the latter part of its bring-up, as this is the GPU at the heart of AMD’s relatively new Radeon Instinct MI50 and MI60 server accelerators.

First and foremost designed for servers, then, Vega 20 is not the class of GPU that could cheaply make its way to consumers. Or at least, that would seem to have been AMD's original thinking. But across the aisle, something unexpected has happened: NVIDIA hasn't moved the needle very much in terms of performance-per-dollar. The new Turing-based GeForce RTX cards are instead all about features, looking to usher in a new paradigm of rendering games with real-time raytracing effects, and in the process allocating large parts of the already-large Turing GPUs to this purpose. The end result has been relatively high prices for the GeForce RTX 20 series cards, all the while their performance gains in conventional games are much less than the usual generational uplift.

Faced with a less hostile pricing environment than many were first expecting, AMD has decided to bring Vega 20 to consumers after all, dueling with NVIDIA at one of these higher price points. Hitting the streets at $699, the Radeon VII squares up with the GeForce RTX 2080 as the new flagship Radeon gaming card.

AMD Radeon Series Specification Comparison

| | AMD Radeon VII | AMD Radeon RX Vega 64 | AMD Radeon RX 590 | AMD Radeon R9 Fury X |
|---|---|---|---|---|
| Stream Processors | 3840 (60 CUs) | 4096 (64 CUs) | 2304 (36 CUs) | 4096 (64 CUs) |
| ROPs | 64 | 64 | 32 | 64 |
| Base Clock | 1400MHz | 1247MHz | 1469MHz | N/A |
| Boost Clock | 1750MHz | 1546MHz | 1545MHz | 1050MHz |
| Memory Clock | 2.0Gbps HBM2 | 1.89Gbps HBM2 | 8Gbps GDDR5 | 1Gbps HBM |
| Memory Bus Width | 4096-bit | 2048-bit | 256-bit | 4096-bit |
| VRAM | 16GB | 8GB | 8GB | 4GB |
| Single Precision | 13.8 TFLOPS | 12.7 TFLOPS | 7.1 TFLOPS | 8.6 TFLOPS |
| Double Precision | 3.5 TFLOPS (1/4 rate) | 794 GFLOPS (1/16 rate) | 445 GFLOPS (1/16 rate) | 538 GFLOPS (1/16 rate) |
| Board Power | 300W | 295W | 225W | 275W |
| Reference Cooling | Open-air triple-fan | Blower | N/A | AIO CLC |
| Manufacturing Process | TSMC 7nm | GloFo 14nm | GloFo/Samsung 12nm | TSMC 28nm |
| GPU | Vega 20 (331 mm²) | Vega 10 (495 mm²) | Polaris 30 (232 mm²) | Fiji (596 mm²) |
| Architecture | Vega (GCN 5) | Vega (GCN 5) | GCN 4 | GCN 3 |
| Transistor Count | 13.2B | 12.5B | 5.7B | 8.9B |
| Launch Date | 02/07/2019 | 08/14/2017 | 11/15/2018 | 06/24/2015 |
| Launch Price | $699 | $499 | $279 | $649 |

Looking at our specification table, the Radeon VII ships with a "peak engine clock" of 1800MHz, while the official boost clock is 1750MHz. This compares favorably to the RX Vega 64's peak engine clock of just 1630MHz, giving AMD another 10% or so in peak clockspeed to play with. And thanks to an open-air cooler and a revised SMU, the Radeon VII should be able to reach and sustain its higher clockspeeds more often as well. So while AMD's latest card doesn't add more ROPs or CUs (it's actually a small step down from the RX Vega 64), it gains throughput across the board.
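
As a quick sanity check on those figures, peak FP32 throughput falls straight out of the shader count and the clockspeed, since each GCN stream processor can execute one FMA (two floating point operations) per clock. The sketch below is our own back-of-the-envelope math, not AMD's; note that AMD quotes the Radeon VII's single precision figure against its 1800MHz peak engine clock rather than the 1750MHz boost clock.

```python
# Peak FP32 throughput for a GCN GPU: one FMA (2 FLOPs) per stream
# processor per clock, quoted at the card's highest rated clockspeed.
def fp32_tflops(stream_processors: int, clock_mhz: float) -> float:
    return stream_processors * 2 * clock_mhz / 1e6

print(fp32_tflops(3840, 1800))  # Radeon VII @ 1800MHz peak  -> ~13.8 TFLOPS
print(fp32_tflops(4096, 1546))  # RX Vega 64 @ 1546MHz boost -> ~12.7 TFLOPS
```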

However, if anything, the biggest change compared to the RX Vega 64 is that AMD has doubled their memory capacity and more than doubled their memory bandwidth. This comes courtesy of the 7nm die shrink, which sees AMD's latest GPU come in at a relatively modest 331mm². The extra space has given AMD room on their interposer for two more HBM2 stacks, allowing for more VRAM and a wider memory bus. AMD has also been able to turn the memory clockspeed up a bit as well, from 1.89 Gbps/pin on the RX Vega 64 to a flat 2 Gbps/pin for the Radeon VII.
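
The bandwidth math itself is simple – bus width in bits, divided by eight, times the per-pin data rate – and a minimal sketch of it (our illustration, not the article's) shows where "more than doubled" comes from:

```python
# Peak memory bandwidth = (bus width in bits / 8) * per-pin data rate (Gbps).
def mem_bw_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

vega_64    = mem_bw_gbps(2048, 1.89)  # two HBM2 stacks  -> ~484 GB/s
radeon_vii = mem_bw_gbps(4096, 2.0)   # four HBM2 stacks -> 1024 GB/s
print(radeon_vii / vega_64)           # ~2.12x
```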

Interestingly, going by its base specifications, the Radeon VII is essentially a Radeon Instinct MI50 at heart. So for AMD, there's the potential to cannibalize Instinct sales if the Radeon VII's performance proves too good for professional compute users. As a result, AMD has cut back on some of the chip's features just a bit to better differentiate the products. We'll go into more detail a bit later, but chief among these changes is that the card operates at a less-than-native FP64 rate, loses its full-chip ECC support, and, naturally for a consumer product, uses the Radeon Software gaming drivers instead of the professional Instinct driver stack.
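
To put the FP64 cut in numbers: Vega 20 natively supports half-rate FP64, which the Instinct MI50/MI60 expose in full, while the Radeon VII is capped at a quarter of its FP32 rate. A quick sketch of the rate math, using the table's figures (our arithmetic, not the article's):

```python
# FP64 throughput as a fraction of FP32. Vega 20 is natively 1/2 rate
# (exposed on the Instinct MI50/MI60); the Radeon VII is capped at 1/4,
# while consumer Vega 10 runs at 1/16.
def fp64_tflops(fp32_tflops: float, rate_fraction: float) -> float:
    return fp32_tflops * rate_fraction

print(fp64_tflops(13.8, 1/4))   # Radeon VII -> ~3.5 TFLOPS
print(fp64_tflops(12.7, 1/16))  # RX Vega 64 -> ~0.79 TFLOPS (794 GFLOPS)
```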

Of course, any time you're talking about putting a server GPU into a consumer or prosumer card, you're talking about the potential for a powerful card, and this certainly applies to the Radeon VII. Ultimately, AMD is gunning for their latest flagship card to win on the merits of its competitive performance, combined with its class-leading 16GB of HBM2 memory. As one of AMD's few clear-cut specification advantages over the NVIDIA competition, VRAM capacity is a big part of AMD's marketing angle; they are going to be heavily emphasizing content creation and VRAM-intensive gaming. Also new to this card, and something AMD will be keen to call out, is their triple-fan cooler, replacing the poorly received blower on the Radeon RX Vega 64/56 cards.

Furthermore, in a neat change of pace, AMD is throwing their hat into the retail ring as a board vendor, directly selling the new card at the same $699 MSRP. And with AIB partners also launching their branded reference cards today, buyers have an option for avoiding inflated launch prices.

Meanwhile, looking at the competitive landscape, there are a few items to tackle today. A big part of the mix is (as has become common lately) a game bundle. The ongoing Raise the Game Fully Loaded pack sees Devil May Cry 5, The Division 2, and Resident Evil 2 included for free with the Radeon VII, RX Vega and RX 590 cards. Meanwhile the RX 580 and RX 570 cards qualify for two out of the three. Normally, a bundle would be a straightforward value-add against a direct competitor – in this case, the RTX 2080 – but NVIDIA has their own dueling Game On bundle with Anthem and Battlefield V. In a scenario where the Radeon VII is expected to trade blows with the RTX 2080 rather than win outright, these value-adds become more and more important.

The launch of the Radeon VII also marks the first product launch since the recent shift in the competitive landscape for variable refresh monitor technologies. Variable refresh rate monitors have turned into a must-have for gamers, and since the launch of variable refresh technology earlier this decade, there's been a clear split between AMD and NVIDIA cards. AMD cards have supported VESA Adaptive Sync – better known under AMD's FreeSync branding – while NVIDIA desktop cards have only supported their proprietary G-Sync. But last month, NVIDIA made the surprise announcement that their cards would support VESA Adaptive Sync on the desktop, under the label of 'G-Sync Compatibility.' Details are sparse on how this program is structured, but at the end of the day, adaptive sync is usable in NVIDIA drivers even if a FreeSync panel isn't 'G-Sync Compatible' certified.

The net result is that while NVIDIA's announcement doesn't hinder AMD as far as features go, it does undermine AMD's FreeSync advantage – all of the cheap VESA Adaptive Sync monitors that used to only be useful on AMD cards are now potentially useful on NVIDIA cards as well. AMD of course has been quite happy to emphasize the "free" part of FreeSync, so as a weapon to use against NVIDIA, it has been significantly blunted. AMD's official line is to consider this a win for FreeSync and for freedom of consumer choice, though the practical impact on AMD's competitive position remains to be seen.

The launch of the Radeon VII and its competitive positioning against the GeForce RTX 2080 means that AMD also has to crystallize their stance on the current feature gap between their cards and NVIDIA's latest Turing hardware. To this end, AMD's position on DirectX Raytracing (DXR) and AI-based image quality/performance techniques such as DLSS has remained the same: they believe that the performance hit and price premium for these features isn't worth the overall image quality difference. In the meantime, AMD isn't standing still; along with DXR fallback drivers, they are working on WinML and DirectML support for their cards. The risk to AMD, of course, is that if DXR or NVIDIA's DLSS efforts end up taking off quickly, the feature gap will become more than a theoretical annoyance.

All told, pushing out a large 7nm gaming GPU for consumers this early in the process' lifecycle is a very aggressive move, especially as, on a cyclical basis, Q1 is typically flat-to-down for GPU sales and Q2 is down. But in context, AMD doesn't have much time to wait and see. The only major obstacle is pricing the card at a level consumers will accept.

That brings us to today's launch. At $699, NVIDIA has already done the price-bracket shifting, on the terms of dedicated hardware for accelerating raytracing and machine learning workloads. For the Radeon VII, the terms instead revolve around 16GB of HBM2 and prosumer/content creator value. All that remains to be seen is its gaming performance.

2/2019 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| | $1299 | GeForce RTX 2080 Ti (Game On Bundle) |
| Radeon VII (Raise the Game Bundle) | $699/$719 | GeForce RTX 2080 (Game On Bundle) |
| | $499 | GeForce RTX 2070 (Game On Bundle, 1 game) |
| Radeon RX Vega 64 or Radeon RX Vega 56 (Raise the Game Bundle) | $399 | |
| | $349 | GeForce RTX 2060 (Game On Bundle, 1 game) |
Comments
  • repoman27 - Thursday, February 7, 2019

    The Radeon Pro WX 7100 is Polaris 10, which does not do DSC. DSC requires fixed function encoding blocks that are not present in any of the Polaris or Vega variants. They do support DisplayPort 1.3 / 1.4 and HBR3, but DSC is an optional feature of the DP spec. AFAIK, the only GPUs currently shipping that have DSC support are NVIDIA's Turing chips.

    The CPU in the iMac Pro is a normal, socketed Xeon W, and you can max the RAM out at 512 GB using LRDIMMs if you're willing to crack the sucker open and shell out the cash. So making those things user accessible would be the only benefit to a modular Mac Pro. CPU upgrades are highly unlikely for that platform though, and I doubt Apple will even provide two DIMM slots per channel in the new Mac Pro. However, if they have to go LGA3647 to get an XCC based Xeon W, then they'd go with six slots to populate all of the memory channels. And the back of a display that is also 440 square inches of aluminum radiator is not necessarily a bad place to be, thermally. Nothing is open about Thunderbolt yet, by the way, but of course Apple could still add existing Intel TB3 controllers to an AMD design if they wanted to.

    So yeah, in order to have a product, they need to beat the iMac Pro in some meaningful way. And simply offering user accessible RAM and PCIe slots in a box that's separate from the display isn't really that, in the eyes of Apple at least. Especially since PCIe slots are far from guaranteed, if not unlikely.
  • halcyon - Friday, February 8, 2019

    Apple cannot ship a Mac Pro with a vacuum cleaner. That 43 dBA is insane. Even if Apple downclocked and undervolted it via the BIOS, I doubt they could make it very quiet.

    Also, I doubt AMD is willing to sell tons of them at a loss.
  • dark_light - Thursday, February 7, 2019

    Well written, balanced and comprehensive review that covers all the bases with just the right amount of detail.

    Thanks Nate Oh.

    Anandtech is still arguably the best site for this content. Kudos guys.
  • drgigolo - Thursday, February 7, 2019

    So I got a 1080Ti at launch, because there was no other alternative at 4K. Finally we have an answer from AMD; unfortunately it's no faster than my almost 2-year-old GPU, and priced the same no less.

    I really think this would've benefitted from 128 ROPs, or 96.

    If they had priced this at 500 dollars, it would've been a much better bargain.

    I can't think of anyone who I would recommend this to.
  • sing_electric - Thursday, February 7, 2019

    To be fair, you could almost say the same thing about the 2080, "I got a 1080 Ti at launch and 2 years later, Nvidia released a GPU that barely performs better if you don't care about gimmicks like ray tracing."

    People who do gaming and compute might be very well tempted, people who don't like Nvidia (or just do like AMD) might be tempted.

    Unfortunately, the cost of the RAM in this thing alone is probably nearly $350, so there's no way AMD could sell this thing for $500 (but it wouldn't surprise me if we see it selling a little under MSRP if there is plentiful supply and Nvidia can crank out enough 2080s).
  • eva02langley - Thursday, February 7, 2019

    That was the whole point of RTX. Besides the 2080 Ti, there was nothing new. You were getting the same performance for around the same price as the last generation. There was no price disruption.
  • Oxford Guy - Thursday, February 7, 2019

    Poor AMD.

    We're supposed to buy a clearly inferior product (look at that noise) just so they can sell leftover and defective Instincts?

    We're supposed to buy an inferior product because AMD's bad business moves have resulted in Nvidia being able to devalue the GPU market with Turing?

    Nope. We're supposed to either buy the best product for the money or sit out and wait for something better. Personally, I would jump for joy if everyone would put their money into a crowdfunded company, with management that refuses to be eaten alive by a megacorp, to take on Nvidia and AMD in the pure gaming space. There was once space for three players and there is space today. I am not holding my breath for Intel to do anything particularly valuable.

    Wouldn't it be nice to have a return to pure no-nonsense gaming designs, instead of this "you can buy our defective parts for high prices and feel like you're giving to charity" and "you can buy our white elephant feature well before its time has come and pay through the nose for it" situation?

    Capitalism has had a bad showing for some time now in the tech space. Monopolies and duopolies reign supreme.
  • eva02langley - Friday, February 8, 2019

    Honestly, besides an RX 570/580, no GPUs make sense right now.

    Funny that Polaris is still the best bang for the $ today.
  • drgigolo - Saturday, February 9, 2019

    Well, at least you can buy a 2080Ti, even though the 2080 is of course at the same price point as the 1080Ti. But I won't buy a 2080Ti either; it's too expensive and the performance increase is too small.

    The last decent AMD card I had was the R9 290X. Had that for a few years until the 1080 came out, and then replaced that with a 1080Ti when I got an Acer Predator XB321HK.

    I will wait until something better comes along. Would really like HDMI 2.1 output, so that I can use VRR on the upcoming LG OLED C9.
  • sing_electric - Thursday, February 7, 2019

    Oh, also, FWIW: The other way of looking at it is "damn, that 1080 Ti was a good buy. Here I am 2 years later and there's very little reason for me to upgrade."
