The AMD Radeon R9 295X2 Review
by Ryan Smith on April 8, 2014 8:00 AM EST
Meet the Radeon R9 295X2: Build Quality & Performance Expectations
Moving on, let’s talk about the build quality of the card itself. With the 7990 AMD did not explicitly chase the luxury market despite its $1000 price tag, choosing to use a standard mixture of metal and plastic parts as part of their more aggressive pricing and positioning of that card. Even with that choice, the 7990 was a solid card that was no worse for the use of plastic, reflecting the fact that while metal adds a degree of sturdiness to a card, it’s not strictly necessary for a well-designed card.
However with the R9 295X2 priced at $1500 and AMD choosing to go after the luxury market, AMD has stepped up their build quality in order to meet the higher expectations NVIDIA has set with their GTX Titan series of cards. The end result is that while the R9 295X2 isn’t a carbon copy of the GTX Titan by any means, it does successfully implement the metal finish that we’ve seen with NVIDIA’s luxury cards, and in the process ends up being a very sturdy card that can stand toe-to-toe with Titan.
Overall AMD is using a two-piece design here. Mounted on top of the PCB is the baseplate, which runs the length of the card. Meanwhile a series of screws around the edge of the metal shroud holds it to the baseplate, making it similar to the shroud AMD used for their reference 290X cards. The bolts seen at the top of the card, despite appearances, are for all practical purposes decorative, with the aforementioned screws being the real key to holding the card together.
Elsewhere, at the top of the card we can see that AMD has taken a page from NVIDIA’s playbook and invested in red LED lighting for the fan on the card and the Radeon logo. This is one of those gimmicks we mentioned earlier: features that don’t improve the functionality of the card at all, but have become popular with luxury buyers for showing off their cards.
Wrapping up our discussion of the R9 295X2’s construction, let’s quickly discuss the card’s physical limits and AMD’s own overclocking limits. While we’ll get into the heart of the matter in our look at power/temperature/noise, this is as good a time as any to point out the card’s various limits and why they are what they are.
Starting with the fans, because AMD is relying on a split cooling design with both an on-board fan and a CLLC, AMD doesn’t offer any fan control options for this card. In the case of the CLLC this is because the CLLC itself controls its own fan speed, with the single 120mm fan slaved into the pumps and the pumps in turn adjusting the fan based on their own sensors. Since the 120mm fan is under the control of the pumps and not tied into the board’s fan controller, there’s no way to control this fan short of introducing a physical fan controller in the middle.
The 120mm fan in question is a simple 2-pin fan, which is rated to operate between 1200 RPM and 2000 RPM (+/- 10%). Using an optical tachometer we measured the fan operating between 1340 RPM at idle and 1860 RPM under load, which is a bit high for idle based on AMD’s specifications, but within the 10% window for load operation. The fan does get up to full speed before the GPUs reach their 75C temperature limit, so whether it’s FurMark or Crysis, in our experience the 120mm fan maxes out at the same speed and hence the same noise levels.
AMD Radeon R9 295X2 120mm Radiator Fan Speeds

|           | Spec             | Measured  |
|-----------|------------------|-----------|
| Idle      | 1200 RPM +/- 10% | ~1340 RPM |
| Full Load | 2000 RPM +/- 10% | ~1860 RPM |
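For anyone who wants to check the math on those tolerance windows, here is a minimal Python sketch. The RPM figures are taken from the table above; reading the +/- 10% as applying to each rated endpoint is our own interpretation of AMD's spec.

```python
# Sanity check of measured radiator fan speeds against AMD's rated window.
# RPM figures are from the table above; the +/- 10% interpretation is ours.
SPEC_RPM = {"idle": 1200, "full load": 2000}
MEASURED_RPM = {"idle": 1340, "full load": 1860}
TOLERANCE = 0.10

for state, rated in SPEC_RPM.items():
    lo, hi = rated * (1 - TOLERANCE), rated * (1 + TOLERANCE)
    rpm = MEASURED_RPM[state]
    verdict = "within spec" if lo <= rpm <= hi else "outside spec"
    print(f"{state}: {rpm} RPM vs {lo:.0f}-{hi:.0f} RPM -> {verdict}")
```

Running this flags the ~1340 RPM idle reading as slightly above the 1080-1320 RPM window, while the ~1860 RPM load reading falls comfortably inside 1800-2200 RPM, matching what we observed.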
Meanwhile the smaller fan on the card itself is rated for 1350 RPM to 2050 RPM, and is tied into the board’s fan controller. However since this fan is responding to board component temperatures rather than GPU temperatures, AMD has not made this fan controllable. AMD’s APIs do not directly expose the temperatures of these components (MSI AB was only able to tell us the GPU temperatures), so it’s not unexpected that this fan can’t be directly controlled.
While fan controls aren’t exposed, AMD does expose the other overclocking controls in their Overdrive control panel. Power limits, temperature limits, GPU clockspeed, and memory clockspeed are all exposed and can be adjusted. However unlike the 290 series, which allowed a GPU temperature of up to 95C, AMD has clamped down on the R9 295X2’s GPUs, only allowing them to reach 75C before throttling. Upon finding this we asked AMD why they were using such a relatively low temperature limit, and the response we received is that it’s due to a combination of factors, including the operational requirements of the CLLC itself and what AMD considers the best temperature for optimal performance. As we briefly discussed in our 290X review, leakage increases with temperature, and while Hawaii is supposed to be a lower leakage part, leakage is still going to occur. To that end our best guess is that 75C is as warm as Hawaii can get before leakage starts becoming a problem for this card.
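To illustrate what a hard 75C limit means in practice, below is a rough Python sketch of temperature-limit throttling. This is purely illustrative and is not AMD's PowerTune implementation, which is firmware-driven and far more sophisticated; only the 75C limit comes from this article, and the clock step size is made up.

```python
# Illustrative sketch only - not AMD's actual PowerTune logic.
# The 75C limit is from the article; the step size is hypothetical.
TEMP_LIMIT_C = 75        # R9 295X2 GPU limit (vs. 95C on the 290 series)
BOOST_CLOCK_MHZ = 1018   # the card's rated boost clock
MIN_CLOCK_MHZ = 300
STEP_MHZ = 13            # hypothetical throttle/recovery step

def next_clock(current_mhz: int, gpu_temp_c: float) -> int:
    """Step the clock down while at/over the temperature limit, back up when under it."""
    if gpu_temp_c >= TEMP_LIMIT_C:
        return max(MIN_CLOCK_MHZ, current_mhz - STEP_MHZ)
    return min(BOOST_CLOCK_MHZ, current_mhz + STEP_MHZ)

print(next_clock(1018, 76))  # 1005 - over the limit, so the clock steps down
print(next_clock(1005, 73))  # 1018 - back under the limit, so it recovers
```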
That brings us to our final point: how close AMD is operating to the limits of the Asetek CLLC. For the nearly 500W that the R9 295X2’s CLLC needs to dissipate, a single 120mm radiator is relatively small by CLLC standards. CPU CLLCs are easily found in larger sizes, including 140mm, 2x120mm, and 2x140mm (which is what we use for our CPU), and in all of those cases the CPU will generate less heat than a pair of GPUs, which means the R9 295X2’s radiator is operating under quite a bit of load.
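As a very rough way to visualize how heavily loaded that single 120mm radiator is, the sketch below compares heat load per unit of radiator frontal area. Only the ~500W figure comes from this review; the ~150W CPU figure and the use of frontal area as a stand-in for cooling capacity are our own simplifications (fin density, radiator thickness, airflow, and pump flow all matter just as much).

```python
# Back-of-the-envelope comparison of heat load per unit of radiator frontal
# area. Only the ~500W GPU figure is from the review; the rest is assumed.
def frontal_area_cm2(fan_mm: int, fans: int = 1) -> float:
    side_cm = fan_mm / 10.0
    return side_cm * side_cm * fans

loads_w = {
    "R9 295X2, 1x120mm radiator": (500, frontal_area_cm2(120)),
    "CPU loop, 2x140mm radiator": (150, frontal_area_cm2(140, fans=2)),
}

for name, (watts, area) in loads_w.items():
    print(f"{name}: {watts / area:.1f} W/cm^2")
```

The difference is stark: roughly 3.5 W/cm^2 for the R9 295X2 versus around 0.4 W/cm^2 for a typical large CPU loop, which is why the card's radiator has so little headroom.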
Based on our testing the CLLC is big enough to handle the load at stock, but only just. Under our most strenuous gaming workloads we’re hitting temperatures in the low 70s, all the while the 120mm fan is running at its maximum speed. This level of cooling performance is enough to keep the card from throttling, but it’s clear that the CLLC can’t handle much more heat, which ties in to what we said earlier about AMD not designing this card for overclocking. Even without voltage adjustments, just significantly increasing the power limit would cause the card to throttle on GPU temperatures. To that end the 120mm CLLC gets the job done, but this is clearly a card that’s best suited for running at stock.
131 Comments
mickulty - Wednesday, April 9, 2014 - link
Well, Arctic's 6990 cooler wasn't far off. The Arctic Mono is good for 300W and it should be possible to fit two such heatsinks on one card. So it's possible. The resulting card would be absolutely huge though, and wouldn't be nearly as popular with gaming PC boutiques (i.e. the target market). Oh, VRM cooling might be an issue too. I guess a Thermaltake-style heatpipe arrangement would fix that.
SunLord - Tuesday, April 8, 2014 - link
Huh, looking at that board and the layout of the cooling setup, you could swap in two independent closed loop coolers pretty easily and try to overclock it if you want, and since you're rich if you buy this, it's totally viable for any owner.
nsiboro - Tuesday, April 8, 2014 - link
Ryan, thank you for a wonderfully written and informative review. Appreciate much.
behrouz - Tuesday, April 8, 2014 - link
Ryan Smith, please confirm this: the new NVIDIA driver does overclock the GTX 780 Ti, from 928 to 1019MHz. If so, temps should be increased.
behrouz - Tuesday, April 8, 2014 - link
and also power consumption
Ryan Smith - Tuesday, April 8, 2014 - link
Overclock GTX 780 Ti? No. I did not see any changes in clockspeeds or temperatures that I can recall.
PWRuser - Tuesday, April 8, 2014 - link
I have an Antec Signature 850W sitting in the closet. 295X2 too much for it? It's this one: http://www.jonnyguru.com/modules.php?name=NDReview...
Dustin Sklavos - Tuesday, April 8, 2014 - link
Word of warning: do not use daisy-chained PCIe power connectors (i.e. one connection to the power supply and two 8-pins to the graphics card). If AMD wasn't going over the per-connector power spec it wouldn't be an issue, but they are, which means you can melt the connector at the power supply end. Those daisy-chained PCIe connectors are meant for 300W max, not 425W. We've been hearing about this from a bunch of partners and I believe end users should be warned.
PWRuser - Tuesday, April 8, 2014 - link
Thank you. According to specs my PSU could handle these GPUs separately, I guess utilizing 2 PCIe slots via 2 separate cards alleviates the strain.
extide - Tuesday, April 8, 2014 - link
No, it has nothing to do with how many cards or slots. It's how many CABLES from the PSU. Sometimes you can have a single cable with two PCIe connectors on the end, one daisy-chained off the other. What he is saying is, don't use connectors like that, use two individual cables instead.
Although, unless the PSU you are using has really crappy (thin) power cables, it should be OK even with a single cable. But yeah, it's definitely a good idea to use two!