A bit over two weeks ago AMD launched their new flagship video card, the Radeon R9 Fury X. Based on the company’s new Fiji GPU, the R9 Fury X brought significant performance improvements to AMD’s lineup, with the massive Fiji greatly increasing the card’s shading resources. Fiji also marked the introduction of High Bandwidth Memory (HBM) into consumer products, giving the R9 Fury X a significant leg up in memory bandwidth. Overall AMD put together a very impressive card; however, at $649 it fell just short of the GeForce GTX 980 Ti that AMD needed it to beat.

Alongside the announcement of the R9 Fury X, AMD also announced three other Fiji-based cards: the R9 Fury, the R9 Nano, and a yet-to-be-named dual-GPU Fiji card. Today we will be taking a look at the first of those remaining cards to launch, the R9 Fury, the obligatory lower-tier sibling to AMD’s flagship, which goes on sale next week.

While the R9 Fury X remains the fastest Fiji card – and by virtue of being introduced first, the groundbreaking card – the impending launch of the R9 Fury brings with it a whole slew of changes that make it an interesting card in its own right. From a performance standpoint it is a lower-performing card, featuring a cut-down Fiji GPU, but at the same time it is $100 cheaper than the R9 Fury X. Meanwhile in terms of construction, unlike the R9 Fury X, which is only available in its reference closed-loop liquid cooling design, the R9 Fury is available as semi-custom and fully-custom cards from AMD’s board partners, built using traditional air coolers, making this the first air-cooled Fiji card. As a result the R9 Fury at times ends up being a very different take on Fiji, for all of the benefits and drawbacks that come with it.

AMD GPU Specification Comparison
| | AMD Radeon R9 Fury X | AMD Radeon R9 Fury | AMD Radeon R9 290X | AMD Radeon R9 290 |
|---|---|---|---|---|
| Stream Processors | 4096 | 3584 | 2816 | 2560 |
| Texture Units | 256 | 224 | 176 | 160 |
| ROPs | 64 | 64 | 64 | 64 |
| Boost Clock | 1050MHz | 1000MHz | 1000MHz | 947MHz |
| Memory Clock | 1Gbps HBM | 1Gbps HBM | 5Gbps GDDR5 | 5Gbps GDDR5 |
| Memory Bus Width | 4096-bit | 4096-bit | 512-bit | 512-bit |
| VRAM | 4GB | 4GB | 4GB | 4GB |
| FP64 Rate | 1/16 | 1/16 | 1/8 | 1/8 |
| TrueAudio | Y | Y | Y | Y |
| Transistor Count | 8.9B | 8.9B | 6.2B | 6.2B |
| Typical Board Power | 275W | 275W | 250W | 250W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Architecture | GCN 1.2 | GCN 1.2 | GCN 1.1 | GCN 1.1 |
| GPU | Fiji | Fiji | Hawaii | Hawaii |
| Launch Date | 06/24/15 | 07/14/15 | 10/24/13 | 11/05/13 |
| Launch Price | $649 | $549 | $549 | $399 |

Starting things off, let’s take a look at the specifications of the R9 Fury. As we mentioned in our R9 Fury X review, we have known since the initial R9 Fury series announcement that the R9 Fury utilizes a cut-down Fiji GPU, and we can now reveal just how it has been cut down. As is usually the case for these second-tier cards, the R9 Fury features both a GPU with some functional units disabled and a slightly reduced clockspeed, allowing AMD to recover partially defective GPUs while easing up on the clockspeed requirements.

The Fiji GPU in the R9 Fury ends up having 56 of 64 CUs enabled, which brings the total stream processor count down from 4,096 to 3,584. This in turn is the full extent of the R9 Fury’s disabled functional units: AMD has not touched the front-end or back-end, meaning the number of geometry units and the number of ROPs remain unchanged.
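
For those keeping track of the underlying math, this follows directly from GCN’s organization: each compute unit contains 64 stream processors and 4 texture units, so disabling 8 CUs trims both figures proportionally. A quick sketch of the arithmetic, using the figures from the spec table above:

```python
# GCN organization: each compute unit (CU) contains 64 stream
# processors and 4 texture units.
SPS_PER_CU = 64
TMUS_PER_CU = 4

for name, cus in (("R9 Fury X", 64), ("R9 Fury", 56)):
    print(f"{name}: {cus * SPS_PER_CU} SPs, {cus * TMUS_PER_CU} texture units")

# R9 Fury X: 4096 SPs, 256 texture units
# R9 Fury:   3584 SPs, 224 texture units
```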

Also unchanged is the memory subsystem. All Fiji-based cards, including the R9 Fury, will be shipping with a fully enabled memory subsystem, meaning we’re looking at 4GB of HBM attached to the GPU over a 4096-bit memory bus. With Fiji topping out at just 4GB of memory in the first place – one of the drawbacks faced by the $650 R9 Fury X – cutting back to a smaller capacity is not a real option for AMD, so every Fiji card will come with that much memory.

As for clockspeeds, the R9 Fury takes a slight trim to its GPU clockspeed. The reference clockspeed for the R9 Fury is a flat 1000MHz, a 5% reduction from the R9 Fury X. On the other hand the memory clock remains unchanged at 500MHz DDR, for an effective memory rate of 1Gbps/pin.
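
Putting the bus width and per-pin data rate together gives the same peak bandwidth as the R9 Fury X; a quick back-of-the-envelope check using the standard peak-bandwidth formula:

```python
bus_width_bits = 4096     # Fiji's HBM memory bus width
rate_gbps_per_pin = 1.0   # 500MHz DDR = 1Gbps effective per pin

# Peak bandwidth = bus width x per-pin data rate, converted bits -> bytes
peak_gb_per_sec = bus_width_bits * rate_gbps_per_pin / 8
print(f"{peak_gb_per_sec:.0f} GB/sec")  # 512 GB/sec
```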

All told then, on paper the performance difference between the R9 Fury and R9 Fury X will stand to be between 0% and 17%; that is, the R9 Fury will be up to 17% slower than the R9 Fury X. In the best-case scenario for the R9 Fury – a memory bandwidth bottleneck – it has the same 512GB/sec of memory bandwidth as the R9 Fury X. At the other end of the spectrum, in a shader-bound scenario, the combination of the reduction in shader hardware and clockspeed is where the R9 Fury will be hit the hardest, as its total FP32 throughput drops from 8.6 TFLOPs to 7.17 TFLOPs. Finally, in the middle, workloads that are front-end or back-end bound will see a much smaller drop, since those units haven’t been cut down at all, leading to just a 5% performance drop. As for the real-world performance drop, as we’ll see it’s around 7%.
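
The throughput figures above follow from the standard GCN rule of two FP32 operations (one fused multiply-add) per stream processor per clock; a minimal sketch of how the 0-17% window falls out:

```python
def fp32_tflops(stream_processors, clock_mhz):
    # 2 FLOPs per clock per stream processor (one fused multiply-add)
    return stream_processors * 2 * clock_mhz * 1e6 / 1e12

fury_x = fp32_tflops(4096, 1050)  # ~8.60 TFLOPs
fury = fp32_tflops(3584, 1000)    # ~7.17 TFLOPs

print(f"Shader-bound deficit: {1 - fury / fury_x:.1%}")        # 16.7%
print(f"Front/back-end-bound deficit: {1 - 1000 / 1050:.1%}")  # 4.8%
# Memory-bandwidth-bound deficit: 0% (both cards offer 512GB/sec)
```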

Power consumption on the other hand is going to be fairly similar to the R9 Fury X’s. AMD’s official Typical Board Power (TBP) for the R9 Fury is 275W, the same as its older sibling. Comparing the two products, the R9 Fury sees some power savings from the disabled CUs; however, as a second-tier part it uses lower-quality chips overall. Meanwhile the use of air cooling means that operating temperatures are higher than the R9 Fury X’s cool 65C, and as a result power loss from leakage is higher as well. At the end of the day this means the R9 Fury is going to lose some power efficiency compared to the R9 Fury X, as any reduction in power consumption is going to be met with a larger decrease in performance.

Moving on, let’s talk about the cards themselves. With the R9 Fury X, AMD has restricted vendors to selling the reference card, and we have been told it will be staying this way, just as it was for the R9 295X2. For the R9 Fury, on the other hand, AMD has not even put together a complete reference design, leaving the final cards up to their partners. As a result next week’s launch will be a “virtual” launch, with all cards being semi-custom or fully-custom.

Out of the gate the only partners launching cards are Sapphire and Asus, AMD’s closest and largest partners respectively. Sapphire will be releasing stock and overclocked SKUs based on a semi-custom design that couples the AMD reference PCB with Sapphire’s Tri-X cooler. Asus on the other hand has gone fully-custom from the start, pairing a new custom PCB with one of their DirectCU III coolers. Cards from additional partners will eventually hit the market, but not until later in the quarter.

The R9 Fury will be launching with an MSRP of $549, $100 below the R9 Fury X. This price puts the R9 Fury up against very different competition than its older sibling; instead of going up against NVIDIA’s GeForce GTX 980 Ti, its closest competition will be the older GeForce GTX 980. The official MSRP on that card is $499, so the R9 Fury is more expensive, but in turn AMD is promising better performance than the GTX 980. Otherwise NVIDIA’s partners serve to fill that $50 gap with their higher-end, factory-overclocked GTX 980 cards.

Finally, today’s reviews of the R9 Fury are coming slightly ahead of the launch of the card itself. As previously announced, the card goes on sale on Tuesday the 14th; however, the embargo on reviews is being lifted today. AMD has not officially commented on launch supply, but once cards do go on sale we’re expecting a repeat of the R9 Fury X launch, with limited quantities that will sell out within a day. After that, it seems likely that R9 Fury cards will remain in short supply for the time being, also similar to the R9 Fury X. R9 Fury X cards have come back in stock several times since launch, but have sold out within an hour or so, and there’s currently no reason to expect anything different for R9 Fury cards.

Summer 2015 GPU Pricing Comparison
| AMD | Price | NVIDIA |
|---|---|---|
| Radeon R9 Fury X | $649 | GeForce GTX 980 Ti |
| Radeon R9 Fury | $549 | |
| | $499 | GeForce GTX 980 |
| Radeon R9 390X | $429 | |
| Radeon R9 290X / Radeon R9 390 | $329 | GeForce GTX 970 |
| Radeon R9 290 | $250 | |
| Radeon R9 380 | $200 | GeForce GTX 960 |
| Radeon R7 370 / Radeon R9 270 | $150 | |
| | $130 | GeForce GTX 750 Ti |
| Radeon R7 360 | $110 | |
Comments

  • nightbringer57 - Friday, July 10, 2015 - link

    Intel kept it in stock for a while but it didn't sell. So the management decided to get rid of it, gave it away to a few colleagues (Dell, HP – many OEMs used BTX for quite a while, both because it was a good user lock-down solution and because the inconveniences of BTX didn't matter in OEM computers, while the advantages were still there) and no one ever heard of it on the retail market again.
  • nightbringer57 - Friday, July 10, 2015 - link

    Damn those non-editable comments...
    I forgot to add: with the switch from the NetBurst/Prescott architecture to Conroe (and its successors), CPU cooling became much less of a hassle for mainstream models, so Intel did not have anything left to gain from the effort put into BTX.
  • xenol - Friday, July 10, 2015 - link

    It survived in OEMs. I remember cracking open Dell computers in the latter half of the 2000s and finding out they were BTX.
  • yuhong - Friday, July 10, 2015 - link

    I wonder if a BTX2 standard that fixes the problems of the original BTX would be a good idea.
  • onewingedangel - Friday, July 10, 2015 - link

    With the introduction of HBM, perhaps it's time to move to socketed GPUs.

    It seems ridiculous for the industry-standard spec to devote so much space to the comparatively low-power CPU whilst the high-power GPU has to fit within the confines of (multiple) PCIe expansion slots.

    Is it not time to move beyond the confines of ATX?
  • DanNeely - Friday, July 10, 2015 - link

    Even with the smaller PCB footprint allowed by HBM, filling up the area currently taken by expansion cards would only give you room for a single GPU plus support components on an mATX-sized board (most of the space between the PCIe slots and the edge of the mobo is used for other stuff that would need to be kept, not replaced with GPU bits), and the tower cooler on top of it would be a major obstruction for any non-GPU PCIe cards you might want to put into the system.
  • soccerballtux - Friday, July 10, 2015 - link

    man, the convenience of the socketed GPU is great, but just think of how much power we could have if it had its own dedicated card!
  • meacupla - Friday, July 10, 2015 - link

    The clever design trend, or at least what I think is clever, is where the GPU and CPU heatsinks are connected together, so that instead of many smaller heatsinks trying to cool one chip each, you have one giant heatsink doing all the work, which can result in less space (as opposed to volume) being occupied by the heatsink.

    You can see this sort of design in high-end gaming laptops, the Mac Pro, and custom water cooling builds. The only catch is, they're all expensive. Laptops and the Mac Pro are pretty much completely proprietary, while custom water cooling requires time and effort.

    If all ATX mobos and GPUs had their core and heatsink mounting holes in the exact same spot, it would be much easier to design a 'universal multi-core heatsink' that you could just attach to everything that needs it.
  • Peichen - Saturday, July 11, 2015 - link

    That's quite a good idea. With heat-pipes, distance doesn't really matter so if there is a CPU heatsink that can extend 4x 8mm/10mm heatpipes over the videocard to cooled the GPU, it would be far quieter than the 3x 90mm can cooler on videocard now.
  • FlushedBubblyJock - Wednesday, July 15, 2015 - link

    330 watts transferred to the low-lying motherboard, with PINS attached to AMD's core failure next...
    Slap that monster heat onto the motherboard, then you can have a giant green plastic enclosure like Dell towers to try to move that heat outside the case... oh, plus a whole 'nother giant VRM setup on the motherboard... yeah, they sure will be doing that soon... just lay down that extra 50 bucks on every motherboard with some 6X VRMs just in case an AMD fanboy decides he wants to buy the megawatt AMD rebranded chip...

    Yep, NOT HAPPENING!
