Meet The Radeon VII

First things first is the design and build, and for the AMD Radeon VII the biggest change is immediately apparent: an open-air cooler. While keeping the sleek brushed-metal look of the previous RX Vega 64 Limited Edition and Liquid variants, AMD has forgone the blower for a triple axial fan setup, the configuration that has become standard for custom high-end AIB cards.

While NVIDIA's GeForce RTX series went this way with open-air dual-fan coolers, AMD is no stranger to changing things up itself. Aside from the RX Vega 64 Liquid, the R9 Fury X's AIO CLC was also quite impressive for a reference design. But as we mentioned with the Founders Edition cards, moving away from blowers to open-air means adopting a cooling configuration that can no longer guarantee complete self-cooling. That is, cooling effectiveness is no longer independent of chassis airflow, or the lack thereof. This is usually an issue for large OEMs that configure machines assuming blower-style cards, but it is less of a concern for the highest-end cards, which in pre-built systems tend to come from boutique system integrators.

The move to open-air does benefit cards with higher TDPs, and at 300W TBP the Radeon VII is indeed one with higher power consumption. While that is only 5W more than the RX Vega 64, there is presumably more localized heat with two additional HBM2 stacks, plus the fact that roughly the same amount of power is being consumed on a smaller die area. At 300W TBP, this would also mean that all of the power savings from the smaller process were re-invested into performance. If higher clockspeeds are where the Radeon VII gets the majority of its speedup over the RX Vega 64, then there would be little alternative to abandoning the blower.
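To put the density point in perspective, a back-of-the-envelope comparison is below. It assumes the commonly cited die sizes of ~495 mm² for Vega 10 and ~331 mm² for Vega 20, and naively treats the full TBP as heat at the die, which overstates the absolute numbers (board power also covers HBM2 and VRM losses) but illustrates the trend:

```python
# Back-of-the-envelope power density, naively attributing the full
# board power (TBP) to the GPU die. Die sizes are the commonly cited
# figures for Vega 10 (14nm) and Vega 20 (7nm); illustrative only.

boards = {
    "RX Vega 64 (Vega 10)": {"tbp_w": 295, "die_mm2": 495},
    "Radeon VII (Vega 20)": {"tbp_w": 300, "die_mm2": 331},
}

for name, b in boards.items():
    density = b["tbp_w"] / b["die_mm2"]
    print(f"{name}: {b['tbp_w']} W / {b['die_mm2']} mm^2 "
          f"= ~{density:.2f} W/mm^2")
```

Roughly 0.6 W/mm² versus 0.9 W/mm²: the same class of heat output, concentrated into about two-thirds the area.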

Returning to the Radeon VII build, then, the card naturally has dual 8-pin PCIe connectors, but lacks the BIOS switch of the RX Vega cards that toggled a lower-power BIOS. And with the customary LEDs, the 'Radeon' on the side lights up, as does the 'R' cube in the corner.

In terms of display outputs, there are no surprises here with 3x DisplayPort and 1x HDMI.

A few teardowns of the card elsewhere revealed a vapor chamber configuration with a thermal pad for the TIM, rather than the usual paste. While a pad performs worse than paste in terms of heat transfer, we know that the RX Vega cards ended up having molded and unmolded package variants, requiring specific instructions to manufacturers on the matter. So this might be a way to head off potential ASIC height difference issues.
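As a rough illustration of the pad-versus-paste tradeoff, the 1-D conduction estimate below compares thermal resistance across the TIM layer, R = t / (k·A). The thicknesses and conductivities are generic illustrative assumptions, not measured Radeon VII figures:

```python
# Rough 1-D conduction across the TIM layer: R = t / (k * A).
# All values are generic illustrative assumptions, not Radeon VII data.

die_area_m2 = 331e-6  # ~331 mm^2 Vega 20 die, expressed in m^2

tims = {
    "paste, ~50 um bond line, k ~8.5 W/mK": {"t_m": 50e-6, "k": 8.5},
    "pad, ~0.2 mm compressed, k ~6.0 W/mK": {"t_m": 0.2e-3, "k": 6.0},
}

for name, tim in tims.items():
    r = tim["t_m"] / (tim["k"] * die_area_m2)  # kelvin per watt
    print(f"{name}: ~{r:.3f} K/W")
```

The pad gives up some thermal headroom on paper, but its extra thickness and compliance absorb small package height differences that a thin paste bond line cannot.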

Comments

  • mapesdhs - Friday, February 8, 2019

    It's going to be hilariously funny if Ryzen 3000 series reverses this accepted norm. :)
  • mkaibear - Saturday, February 9, 2019

    I'd not be surprised - given AnandTech's love for AMD (take a look at the "best gaming CPUs" article released today...)

    Not really "hilariously funny", though. More "logical and methodical"
  • thesavvymage - Thursday, February 7, 2019

    It's not like it'll perform any better though... Intel still has generally better gaming performance. There's no reason to artificially hamstring the card, as it introduces a CPU bottleneck.
  • brokerdavelhr - Thursday, February 7, 2019

    Once again - in gaming for the most part.... Try again with other apps and there is a marked difference, many of which are in AMD's favor. Try again.....
  • jordanclock - Thursday, February 7, 2019

    In every scenario that is worth testing a VIDEO CARD, Intel CPUs offer the best performance.
  • ballsystemlord - Thursday, February 7, 2019

    Their choice of processor is kind of strange. An 8-core Intel CPU on *plain* 14nm, now 2! years old, with rather low clocks at 4.3GHz, is not ideal for a gaming setup. I would have used a 9900K or 2700X personally[1].
    For a content creator I'd be using a Threadripper or similar.
    Re-testing would be an undertaking for AT though. Probably too much to ask. Maybe next time they'll choose some saner processor.
    [1] The 9900K is 4.7GHz all-core. The 2700X runs at 4.0GHz turbo, so you'd lose frequency, but then you could use faster RAM.
    For citations see:
    https://www.intel.com/content/www/us/en/products/p...
    https://images.anandtech.com/doci/12625/2nd%20Gen%...
    https://images.anandtech.com/doci/13400/9thGenTurb...
  • ToTTenTranz - Thursday, February 7, 2019

    Page 3 table:
    - The MI50 uses a Vega 20, not a Vega 10.
  • Ryan Smith - Thursday, February 7, 2019

    Thanks!
  • FreckledTrout - Thursday, February 7, 2019

    I wonder why this card absolutely dominates in the "LuxMark 3.1 - LuxBall and Hotel" HDR test? It's pulling in numbers 1.7x higher than the RTX 2080 on that test. That's a funky outlier.
  • Targon - Thursday, February 7, 2019

    How much video memory is used? That is the key. Since many games and benchmarks are set up to test with a fairly low amount of video memory being needed (so those 3GB 1050 cards can run the test), what happens when you try to load 10-15GB into video memory for rendering? Cards with 8GB and under (the majority) will suddenly look a lot slower in comparison.
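The VRAM ceiling described in the comment above is straightforward to check on any card. Below is a minimal sketch, assuming the third-party pyopencl package (an assumption for illustration, not a tool used in the review), that reports each GPU's total global memory:

```python
# Minimal sketch: list each OpenCL GPU and its total global memory.
# Assumes the third-party pyopencl package is installed.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices(device_type=cl.device_type.GPU):
        gib = dev.global_mem_size / 2**30
        print(f"{dev.name}: {gib:.1f} GiB of global memory")
```

A Radeon VII would report 16 GiB here; a benchmark whose working set fits comfortably in 3 GB will never exercise that headroom.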
