The AMD Radeon R9 290X Review
by Ryan Smith on October 24, 2013 12:01 AM EST
To say it’s been a busy month for AMD is probably something of an understatement. After hosting a public GPU showcase in Hawaii just under a month ago, the company has already launched the first 5 cards in the Radeon 200 series – the 280X, 270X, 260X, 250, and 240 – and AMD isn’t done yet. Riding a wave of anticipation and saving the best for last, today AMD is finally launching the Big Kahuna: the Radeon R9 290X.
The 290X is not only the fastest card in AMD’s 200 series lineup; the 290 series is also the only part of AMD’s latest generation of video cards built on a new GPU. With that GPU, dubbed Hawaii, AMD is looking to get their second wind between manufacturing node launches. By taking what they learned from Tahiti and building a refined GPU against a much more mature 28nm process – something that also opens the door to a less conservative design – AMD has been able to build a bigger, better Tahiti that continues down the path laid out by their Graphics Core Next architecture while bringing some new features to the family.
Bigger and better isn’t just a figure of speech, either. The GPU really is bigger, and the performance is unquestionably better. After vying with NVIDIA for the GPU performance crown for the better part of a year, AMD fell out of the running for it earlier this year after the release of NVIDIA’s GK110 powered GTX Titan, and now AMD wants that crown back.
AMD GPU Specification Comparison

| | AMD Radeon R9 290X | AMD Radeon R9 280X | AMD Radeon HD 7970 | AMD Radeon HD 6970 |
|---|---|---|---|---|
| Memory Clock | 5GHz GDDR5 | 6GHz GDDR5 | 5.5GHz GDDR5 | 5.5GHz GDDR5 |
| Memory Bus Width | 512-bit | 384-bit | 384-bit | 256-bit |
| Typical Board Power | ~300W (Unofficial) | 250W | 250W | 250W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 40nm |
| Architecture | GCN 1.1 | GCN 1.0 | GCN 1.0 | VLIW4 |
We’ll dive into the full architectural details of Hawaii a bit later, but as usual let’s open up with a quick look at the specs of today’s card. Hawaii is a GCN 1.1 part – the second such part from AMD – and because of that comparisons with older GCN parts are very straightforward. For gaming workloads in particular we’re looking at a GCN GPU with even more functional blocks than Tahiti and even more memory bandwidth to feed it, and 290X performs accordingly.
Compared to Tahiti, AMD has significantly bulked up both the front end and the back end of the GPU, doubling each of them. The front end now contains 4 geometry processor and rasterizer pairs, up from the 2 geometry processors and 2 rasterizers on Tahiti, while on the back end we’re now looking at 64 ROPs versus Tahiti’s 32. Meanwhile in the computational core AMD has gone from 32 CUs to 44, increasing the amount of shading/texturing hardware by 38%.
On the other hand GPU clockspeeds on 290X are being held consistent versus the recently released 280X, with AMD shipping the card with a maximum boost clock of 1GHz (they’re unfortunately still not telling us the base GPU clockspeed), which means any significant performance gains will come from the larger number of functional units. With that in mind we’re looking at a video card that has 200% of 280X’s geometry/ROP performance and 138% of its shader/texturing performance. In the real world performance will trend closer to the increased shader/texturing performance – ROP/geometry bottlenecks don’t easily scale out like shading bottlenecks – so for most scenarios the upper bound for performance increases is that 38%.
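To make the scaling math above concrete, here is a quick sketch in plain Python, with the unit counts taken from the figures above and the simplifying assumption that both cards run at the same ~1GHz boost clock, so throughput scales with unit counts alone:

```python
# Functional-unit ratios for 290X vs. 280X at identical boost clocks.
geometry_rop_ratio = 64 / 32    # 64 ROPs vs. 32 -> 200%
shader_texture_ratio = 44 / 32  # 44 CUs vs. 32 -> 137.5%, i.e. ~38% more

print(f"geometry/ROP: {geometry_rop_ratio:.0%} of 280X")
print(f"shader/texture: {shader_texture_ratio:.1%} of 280X")
# Real-world gains trend toward the smaller shader/texture figure, so ~38%
# is the practical upper bound for most scenarios.
```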
Meanwhile the job of feeding Hawaii comes down to AMD’s fastest memory bus to date. With 280X and other Tahiti cards already shipping with a 384-bit memory bus running at 6GHz – and consuming quite a bit of die space to get there – AMD has opted to rebalance their memory configuration in favor of a wider, lower clockspeed bus in order to increase their available memory bandwidth. For Hawaii we’re looking at a 512-bit memory bus paired with 5GHz GDDR5, which brings the total memory bandwidth to 320GB/sec. The reduced clockspeed means that AMD’s total memory bandwidth gains aren’t quite as large as the increase in the memory bus size itself, but compared to the 288GB/sec on 280X this is still an 11% increase in memory bandwidth, and a move very much needed to feed the larger number of ROPs that come with Hawaii. More interesting, however, is that in spite of the larger memory bus the total size of AMD’s memory interface has gone down compared to Tahiti, and we’ll see why in a bit.
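Those bandwidth figures fall out of a simple formula – bus width in bytes times the effective GDDR5 data rate. A minimal sketch (the function name here is ours, purely for illustration):

```python
def gddr5_bandwidth_gbps(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak GDDR5 bandwidth in GB/sec: (bus width in bytes) x (effective data rate)."""
    return (bus_width_bits / 8) * data_rate_gtps

hawaii = gddr5_bandwidth_gbps(512, 5.0)  # R9 290X: 320.0 GB/sec
tahiti = gddr5_bandwidth_gbps(384, 6.0)  # R9 280X: 288.0 GB/sec
print(hawaii, tahiti, f"{hawaii / tahiti - 1:.1%}")  # the ~11% gain cited above
```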
At the same time, because AMD’s memory interface is so compact they’ve been able to move to a 512-bit memory bus without requiring too large a GPU. At 438mm2 and composed of 6.2B transistors, Hawaii is the largest GPU ever produced by AMD – 18mm2 bigger than R600 (HD 2900) – but compared to the 365mm2, 4.31B transistor Tahiti, AMD has been able to pack a larger memory bus and a much larger number of functional units into the GPU for only a 73mm2 (20%) increase in die size. The end result is that AMD is once again able to significantly improve their efficiency on a die size basis while remaining on the same process node. AMD is no stranger to producing these highly optimized second wind designs, having done something similar for the 40nm era with Cayman (HD 6900), and as with Cayman the payoff is the ability to increase performance and efficiency between new manufacturing nodes, something that will become increasingly important for GPU manufacturers as the rate of fab improvements continues to slow.
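As a back-of-the-envelope check on those die figures (numbers taken from the specs above; this is just arithmetic, not official AMD data):

```python
# Die size and transistor counts from the review's spec comparison.
hawaii_mm2, hawaii_transistors = 438, 6.2e9
tahiti_mm2, tahiti_transistors = 365, 4.31e9

die_growth = hawaii_mm2 / tahiti_mm2 - 1          # ~0.20 -> the 20% increase
hawaii_density = hawaii_transistors / hawaii_mm2  # ~14.2M transistors/mm^2
tahiti_density = tahiti_transistors / tahiti_mm2  # ~11.8M transistors/mm^2

print(f"{die_growth:.0%} larger die; "
      f"{hawaii_density / 1e6:.1f}M/mm^2 vs {tahiti_density / 1e6:.1f}M/mm^2")
```

The higher transistor density is the per-area efficiency gain the paragraph above describes.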
Moving on, let’s quickly talk about power consumption. With Hawaii AMD has made a number of smaller changes, both to the power consumption of the silicon itself and to how that consumption is defined. On the tech side of matters AMD has been able to reduce transistor leakage compared to Tahiti, directly reducing power consumption of the GPU as a result, and this is being paired with changes to certain aspects of their power management system, implementing advanced power/performance management abilities that vastly improve the granularity of their power states (more on this later).
However at the same time how power consumption is being defined is getting far murkier: AMD doesn’t list the power consumption of the 290X in any of their documentation or specifications, and after asking them directly we’re only being told that the “average gaming scenario power” is 250W. We’ll dive into this more when we do a breakdown of the changes to PowerTune on 290X, but in short AMD is likely underreporting the 290X’s power consumption. Based on our test results we’re seeing 290X draw more power than any other “250W” card in our collection, and in reality the TDP of the card is almost certainly closer to 300W. There are limits to how long the card can sustain that level of power draw due to cooling requirements, but given sufficient cooling the power limit of the card appears to be around 300W, and for the moment we’re labeling it as such.
Left To Right: 6970, 7970, 290X
Finally, let’s talk about pricing, availability, and product positioning. As AMD already launched the rest of the 200 series 2 weeks ago, the launch of the 290X is primarily filling out the opening at the top of AMD’s product lineup that the rest of the 200 series created. The 7000 series is in the middle of its phase out – and the 7990 can’t be too much farther behind – so the 290X is quickly going to become AMD’s de-facto top tier card.
The price AMD will be charging for this top tier is $549, which happens to be the same price as the 7970 when it launched in 2012. This is about $100-$150 more expensive than the outgoing 7970GE and $250 more expensive than 280X, with the 290X offering an average performance increase of 30% over the 280X. Meanwhile when placed against NVIDIA’s lineup the primary competition for 290X will be the $650 GeForce GTX 780, a card that the 290X can consistently beat, making AMD the immediate value proposition at the high-end. At the same time however NVIDIA will have their 3 game Holiday GeForce Bundle starting on the 28th, making this an interesting inversion of earlier this year, when it was AMD offering large game bundles to improve the competitive positioning of their products versus NVIDIA’s. As always, the value of a bundle is ultimately up to the buyer, especially in this case since we’re looking at a rather significant $100 price gap between the 290X and the GTX 780.
Finally, unlike the 280X this is going to be a very hard launch. As part of their promotional activities for the 290X retailers have already been listing the cards while other retailers have been taking pre-orders, and cards will officially go on sale tomorrow. Note that this is a full reference launch, so everyone will be shipping identical reference cards for the time being. Customized cards, including the inevitable open air cooled ones, will come later.
Fall 2013 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| | $650 | GeForce GTX 780 |
| Radeon R9 290X | $550 | |
| | $400 | GeForce GTX 770 |
| Radeon R9 280X | $300 | |
| | $250 | GeForce GTX 760 |
| Radeon R9 270X | $200 | |
| | $180 | GeForce GTX 660 |
| | $150 | GeForce GTX 650 Ti Boost |
| Radeon R7 260X | $140 | |
kyuu - Friday, October 25, 2013
I agree. Ignore all the complainers; it's great to have the benchmark data available without having to wait for the rest of the article to be complete. Those who don't want anything at all until it's 100% done can always just come back later.
AnotherGuy - Friday, October 25, 2013
What a beast
zodiacsoulmate - Friday, October 25, 2013
Dunno, all the GeForce cards look like sh!t in this review, and the 280X/7970 and 290X look like heaven's gods...
but my 6990 and 7970 never really made me happier than my GTX 670 system...
TheJian - Friday, October 25, 2013
While we have a great card here, it appears it doesn't always beat 780, and gets toppled consistently by Titan in OTHER games:
World of Warcraft (spanked again all resolutions by both 780/titan even at 5760x1080)
Splinter Cell Blacklist (smacked by 780 even, of course titan)
StarCraft 2 (by both 780/titan, even 5760x1080)
Titan adds more victories (780 also depending on res, remember 98.75% of us run 1920x1200 or less):
Skyrim (all res, titan victory at techpowerup) Ooops, 780 wins all res but 1600p also skyrim.
Assassins creed3, COD Black Ops2, Diablo3, FarCry3 (though uber ekes a victory at 1600p, reg gets beat handily in fc3, however hardocp shows 780 & titan winning apples-apples min & avg, techspot shows loss to 780/titan also in fc3)
Hardocp & guru3d both show Bioshock infinite, Crysis 3 (titan 10% faster all res) and BF3 winning on Titan. Hardocp also show in apples-apples Tombraider and MetroLL winning on titan.
Guild wars 2 at techreport win for both 780/titan big also (both over 12%).
Also tweaktown shows lost planet 2 loss to the lowly 770, let alone 780/titan.
I guess there's a reason why most of these quite popular games are NOT tested here :)
So while it's a great card, again not overwhelming and quite the loser depending on what you play. In UBER mode as compared above I wouldn't even want the card (heat, noise, watts loser). Down it to regular and there are far more losses than I'm listing above to 780 and titan especially. Considering the overclocks from all sites, you are pretty much getting almost everything in uber mode (sites have hit 6-12% max for OCing, I think that means they'll be shipping uber as OC cards, not much more). So NV just needs to kick up 780TI which should knock out almost all 290x uber wins, and just make the wins they already have even worse, thus keeping $620-650 price. Also drop 780 to $500-550 (they do have great games now 3 AAA worth $100 or more on it).
Looking at 1080p here (a res 98.75% of us play at 1920x1200 or lower, remember that), 780 does pretty well already even at anandtech. Most people playing above this have 2 cards or more.

While you can jockey your settings around all day per game to play above 1920x1200, you won't be MAXING much stuff out at 1600p with any single card. It's just not going to happen until maybe 20nm (big maybe). Most of us don't have large monitors YET or 1600p+ and I'm guessing all new purchases will be looking at gsync monitors now anyway.

Very few of us will fork over $550 and have the cash for a new 1440p/1600p monitor ALSO. So a good portion of us would buy this card and still be 1920x1200 or lower until we have another $550-700 for a good 1440/1600p monitor (and I say $550+ since I don't believe in these korean junk no-namers and the cheapest 1440p newegg itself sells is $550 acer). Do you have $1100 in your pocket?

Making that kind of monitor investment right now I wait out Gsync no matter what. If they get it AMD compatible before 20nm maxwell hits, maybe AMD gets my money for a card. Otherwise Gsync wins hands down for NV for me. I have no interest in anything but a Gsync monitor at this point and a card that works with it.
Guru3D OC: 1075/6000
Hardwarecanucks OC: 1115/5684
Hardwareheaven OC: 1100/5500
PCPerspective OC: 1100/5000
TweakTown OC: 1065/5252
TechpowerUp OC: 1125/6300
Techspot OC: 1090/6400
Bit-tech OC: 1120/5600
Left off direct links to these sites regarding OCing but I'm sure you can all figure out how to get there (don't want post flagged as spam with too many links).
b3nzint - Friday, October 25, 2013
"So NV just needs to kick up 780TI which should knock out almost all 290x uber wins, and just make the wins they already have even worse, thus keeping $620-650 price. Also drop 780 to $500-550"
we're talking about titan killer here.
titan vs titan killer, at res 3840, at high or ultra :
coh2 - 30%
metro - 30%
bio - (10%) but win 3% at medium
bf3 - 15%
crysis 3 - tie
crysis - 10%
totalwar - tie
hitman - 20%
grid 2 - 10%+
2816 sp, 64 rop, 176 tmu, 4gb 512-bit. 780 or 780 Ti won't stand a chance. this is a titan killer dude, wake up. only then are we talking CF, SLI and res 5760. But for a single card I go for this titan killer. good luck with gsync, I'm not giving up my dell u2711 yet.
just4U - Friday, October 25, 2013
Well.. you have to put this in context. Those guys gave it their editor's choice award and an overall score of 9.3. They summed it up with this:
"The real highlight of AMD's R9 290X is certainly the price. What has been rumored to cost around $700 (and got people excited at that price), will actually retail for $549! $549 is an amazing price for this card, making it the price/performance king in the high-end segment. NVIDIA's $1000 GTX Titan is completely irrelevant now, even the GTX 780 with its $625 price will be a tough sale."
theuglyman0war - Thursday, October 31, 2013
The flagship GTX *80 MSRP has been $499 for every upgrade I have ever made. After waiting out the 104 for the 110 chip, only to get the insult of the 780's pricing, I will be holding off to see if everything returns to normal with Maxwell. Kind of depressing when others are excited for $550? As far as I know the market still dictates pricing, and my price is $499 if AMD is offering up decent competition to keep the market healthy and respectful.
ToTTenTranz - Friday, October 25, 2013
How isn't this viral?
nader21007 - Friday, October 25, 2013
Radeon R9 290X received Tom’s Hardware’s Elite award—the first time a graphics card has received this honor. Nvidia: Why?
Wiseman: Because it outperformed a card that is nearly double its price (your Titan).
Do you hear me Nvidia? Please don't gouge consumers again.
doggghouse - Friday, October 25, 2013
I don't think the Titan was ever considered to be a gamer's card... it was more like a "prosumer" card for compute. But it was also marketed to people who build EXTREME! machines for maximum OC scores. The 780 was basically the gamer's card... it has 90-95% of the Titan's gaming capability, but for only $650 (still expensive).
If you want to compare the R9 290X to the Titan, I would look at the compute benchmarks. And in that, it seems to be an apples to oranges comparison... AMD and nVIDIA seem to trade blows depending on the type of compute.
Compared to the 780, the 290X pretty much beats it hands down in performance. If I hadn't already purchased a 780 last month ($595 yay), I would consider the 290X... though I'd definitely wait for 3rd party cards with better heat solutions. A stock card on "Uber" setting is simply way too hot, and too loud!