The AMD Radeon RX 5600 XT Review, Feat. Sapphire Pulse: A New Challenger For Mainstream Gaming
by Ryan Smith on January 21, 2020 9:01 AM EST
While at this point we’ve pretty much reached the “mid-generation” point in the GPU space, that doesn’t mean activity in the GPU market is slowing down. Indeed, just three weeks into 2020 and AMD is already up to bat with a new video card, the Radeon RX 5600 XT. Announced a couple of weeks back at CES 2020, the Radeon RX 5600 XT is AMD’s answer to the $200-$300 mainstream video card segment, and is designed to be their ultimate 1080p gaming card. And of all of the AMD RX 5000 series video card launches in the last six months, it’s quickly shaping up to be the biggest fight yet between AMD and NVIDIA.
The reasons for such a competitive and contested atmosphere are two-fold. First, AMD is bringing some fairly powerful hardware to anchor its position in the sub-$300 market. Even as cut-down as it is, the Navi 10 GPU used in the RX 5600 XT is quite powerful, easily outclassing NVIDIA’s GeForce GTX 1660 series. The second factor is just how badly both AMD and NVIDIA want this market: AMD has determined that it’s essential to break NVIDIA’s virtual domination of the mainstream, while NVIDIA is quite happy with the status quo. As a result, over the last week alone both parties have been making moves to reposition and counter each other in an effort to come out on top, including price cuts and last-minute BIOS updates. So in many ways the launch of the Radeon RX 5600 XT feels like the biggest slugfest between AMD and NVIDIA in quite some time.
Overall, the $279 RX 5600 XT is designed to fill that gap between the RX 5700 series and the RX 5500 series, employing a further cut-down version of AMD’s class-leading Navi 10 GPU. For their newest card, AMD is dialing back on the amount of enabled hardware in order to offer a card with performance between the existing Radeon RX 5000 series cards, and with a price to match.
**AMD Radeon RX Series Specification Comparison**

| | AMD Radeon RX 5600 XT | AMD Radeon RX 5700 | AMD Radeon RX 5500 XT | AMD Radeon RX 590 |
|---|---|---|---|---|
| Throughput (FP32) | 7.2 TFLOPs | 7.95 TFLOPs | 5.2 TFLOPs | 7.1 TFLOPs |
| Memory Clock | 12 Gbps GDDR6 | 14 Gbps GDDR6 | 14 Gbps GDDR6 | 8 Gbps GDDR5 |
| Memory Bus Width | 192-bit | 256-bit | 128-bit | 256-bit |
| Typical Board Power | 150W | 180W | 130W | 225W |
| Manufacturing Process | TSMC 7nm | TSMC 7nm | TSMC 7nm | GloFo/Samsung 12nm |
| Architecture | RDNA (1) | RDNA (1) | RDNA (1) | GCN 4 |
| GPU | Navi 10 | Navi 10 | Navi 14 | Polaris 30 |
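As a sanity check on the throughput figures in the table above, FP32 throughput follows from counting 2 FLOPs (one fused multiply-add) per stream processor per clock. The shader counts and boost clocks used below aren't in the table shown here; they're taken from the cards' public spec sheets, so treat them as assumptions:

```python
# Back-of-the-envelope FP32 throughput: 2 FLOPs (FMA) per stream processor per clock.
# Shader counts and boost clocks are assumed from public spec sheets, not the table above.
def fp32_tflops(stream_processors: int, boost_clock_mhz: int) -> float:
    return 2 * stream_processors * boost_clock_mhz * 1e6 / 1e12

cards = {
    "RX 5600 XT": (2304, 1560),  # 36 CUs x 64 SPs/CU, 1560 MHz boost
    "RX 5700":    (2304, 1725),  # same 36-CU Navi 10, higher boost
    "RX 5500 XT": (1408, 1845),  # 22 CUs (Navi 14)
    "RX 590":     (2304, 1545),  # Polaris 30
}

for name, (sps, clock) in cards.items():
    print(f"{name}: {fp32_tflops(sps, clock):.2f} TFLOPs")
```

The results land within rounding distance of the table's 7.2 / 7.95 / 5.2 / 7.1 TFLOPs figures, and make it clear the RX 5600 XT's deficit versus the RX 5700 is purely a matter of clockspeed.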
As mentioned earlier, AMD hasn’t minted a new GPU for the Radeon RX 5600 XT. Instead, the company is using a further cut-down version of Navi 10, which is already used for the Radeon RX 5700 series. And while AMD generally avoids talking about salvaging when discussing lower-tier products, it's difficult to imagine that's not a factor here, even if the official line is focused more on it being a conscientious choice to bring Navi 10 down to compete with NVIDIA's GTX 1660 series. Either way, the result is that the RX 5600 XT is essentially a third-tier Navi 10-based card.
All told, it is rather rare for AMD to offer a third-tier part on the desktop; normally the company only offers two cards/configurations before moving up or down to the next GPU. Part of their current situation stems from sheer necessity on the GPU side of matters – Navi 14 isn't powerful enough for the market AMD is chasing – though I imagine yields on the 7nm process also have room to improve since it's still relatively young. At the end of the day, it means AMD can kill two birds with one stone by putting chips that would otherwise be discarded to use in a card to counter the GeForce GTX 1660 series.
Digging into the specifications, as it’s configured, the Radeon RX 5600 XT is set to be firmly between the RX 5700 and RX 5500 XT in performance. AMD’s interesting choice of functional blocks to cut (and not to cut) means that the RX 5600 XT actually retains a lot of the RX 5700’s "core" GPU performance: on paper, the card's shading, texturing, pixel throughput, and compute performance are all quite close to the RX 5700, with the only notable difference being roughly 15% lower average clockspeeds. AMD has even retained Navi 10's 64 ROPs, meaning that the card has plenty of pixel pushing power.
Instead, the big tradeoff in specing out the RX 5600 XT has been in memory bandwidth, and to a lesser extent memory capacity. The RX 5600 XT ships with 6GB of GDDR6 on a 192-bit memory bus, which is to say that 1/4 of Navi 10's MC/L2 partitions have been disabled. Furthermore, the card ships with 12Gbps GDDR6 instead of the 14Gbps chips found on the RX 5700 and RX 5500 XT. As a result, the RX 5600 XT has 36% less memory bandwidth than the RX 5700, coming in at 288GB/sec, making this the biggest factor differentiating the RX 5600 XT from its RX 5700 sibling.
The flip side of the coin, however, is that a 192-bit memory bus means that the RX 5600 XT has a lot more memory bandwidth than the RX 5500 XT, with the new card delivering 29% more memory bandwidth. And with significantly more ROPs and CUs to feed than the RX 5500 XT, the RX 5600 XT is poised to shoot well past AMD's cheaper 1080p card.
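The bandwidth figures above fall straight out of bus width times per-pin data rate; a quick sketch of the arithmetic:

```python
def mem_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    # Bandwidth in GB/s: (bus width in bits / 8 bits per byte) * per-pin data rate
    return bus_width_bits / 8 * data_rate_gbps

rx5600xt = mem_bandwidth_gbps(192, 12)  # 192-bit bus, 12 Gbps GDDR6
rx5700   = mem_bandwidth_gbps(256, 14)  # 256-bit bus, 14 Gbps GDDR6
rx5500xt = mem_bandwidth_gbps(128, 14)  # 128-bit bus, 14 Gbps GDDR6

print(f"RX 5600 XT: {rx5600xt:.0f} GB/s")            # 288 GB/s
print(f"vs RX 5700: {1 - rx5600xt / rx5700:.0%} less")      # ~36%
print(f"vs RX 5500 XT: {rx5600xt / rx5500xt - 1:.0%} more")  # ~29%
```

Losing a quarter of the bus and dropping from 14Gbps to 12Gbps memory compound, which is why the gap to the RX 5700 (36%) is so much larger than the loss of bus width alone would suggest.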
Cutting back on memory bandwidth makes a lot of sense for AMD, both in terms of differentiating products with regards to performance as well as bringing down the card’s manufacturing cost. That said, this is the first time in a long while that we’ve seen an AMD card with a 192-bit memory bus; the company normally just doesn’t do it. Power-of-two configurations are easier to deal with for various reasons, and they’ve always allowed AMD to pack in more memory than NVIDIA’s competing mid-range cards. Ultimately the use of a 192-bit bus means that AMD can put to work Navi 10 GPUs with a bad memory controller, all the while using fewer (and cheaper) memory chips than in a 256-bit card. But it also means they are only able to outfit the card with 6GB of memory, as opposed to 8GB.
Without getting entirely sidetracked here, I went into this matter a bit in the RX 5500 XT review, but I do have some concerns about cards with less than 8GB of VRAM. With the next-gen consoles set to launch this year, the bar on hardware requirements is about to be significantly raised in one fell swoop, and I worry that sub-8GB cards won’t have enough memory capacity. With all of that said, however, whatever the ramifications are, it will mean AMD and NVIDIA are on equal footing: both the RX 5600 XT and GTX 1660 series ship with 6GB of VRAM, so both cards will thrive or falter at the same time. Explaining a 6GB RX 5600 XT versus an 8GB RX 5500 XT is going to be a trickier matter for AMD's marketing department, however.
Rounding out the hardware package, AMD tells us that the new cards – or at least, those cards that meet the reference specifications and aren’t factory overclocked – will ship with a total board power (TBP) of 150 watts, which like everything else is right between the RX 5700 and RX 5500 XT. The smaller VRAM amount and lower clockspeeds help to bring down power consumption versus the RX 5700, which helps to keep AMD's energy efficiency up. In fact, AMD tells us that their 150W TBP value for the retail card is a bit on the conservative side of matters, and that real-world power usage will sometimes be less, something we’ll see in a bit more detail later on.
Product Positioning & The Competition
Since the RX 5600 XT is built from the same Navi 10 GPU as AMD’s RX 5700 cards, AMD’s board partners are hitting the ground running here in terms of board designs. Some are outright reusing RX 5700 designs, while others have tweaked their designs to account for the narrower memory bus, lower TBP, and lower sticker price.
This also means that this is a board partner-driven launch. AMD doesn’t have a (public) reference card, so it is the board partners who are shipping custom designs from day one. As these are sub-$300 cards we won’t be seeing anything too exotic – mostly the usual variations on two and three fan designs – though it looks like factory overclocked cards will be heavily peppered into the mix.
As mentioned previously, AMD’s target market for the RX 5600 XT is mainstream gamers, with the company offering a card that is supposed to be the final word in 1080p gaming. 1080p is still the single largest segment of the gaming market, and after watching rival NVIDIA sell GTX 1060 and GTX 1660 cards by the millions over the last few years, it's a market that AMD isn't content to leave alone. Within North America and Europe, the mainstream ($200-$300) market segment is the single largest in terms of volume. And while it doesn’t come with the halo mindshare effect that the winner of the high-end market enjoys, AMD believes competing here is still important for overall mindshare and market share, as it makes the company a viable and visible competitor in this big market.
Within AMD’s product stack, the RX 5600 XT doesn’t have a particularly good direct predecessor. The closest analog would be the Radeon RX 590, which hails from the lineup that the RX 5700 served to replace. However, as the RX 5600 XT is not as powerful as the RX 5700, there's an argument to be had about whether it delivers a “proper” generational performance update over the RX 590. Instead, this launch is defined more by the price point and the current-generation products it competes with. AMD wants the mainstream market, and the RX 5600 XT is the card that will deliver it.
Of course, NVIDIA isn’t going to surrender without a fight. Over the past couple of years they have sold GTX 1660 and GTX 1060 cards by the boatload, giving them tight control of this market segment, and they’d like to keep it. This has led to the normally conservative company making a surprising move: cutting the price on an existing card. The lowest-tier GeForce RTX card, the RTX 2060, now starts at $299 – and far from being a one-off matter, even NVIDIA’s Founders Edition card received a price cut. And while NVIDIA will downplay this a bit so that they aren’t seen as being reactionary, there are only so many ways to interpret a Founders Edition price cut less than a week before a known AMD launch.
At any rate, AMD designed the RX 5600 XT to beat the GTX 1660, and as we’ll see, this is a battle it handily wins. So NVIDIA has made the most logical move on their part and brought in a more powerful card to compete.
What we end up with then is a calculus of performance and prices, with each GPU maker trying to one-up the other without compromising on pricing by too much. At $299 the RTX 2060 isn’t a perfect competitor to the RX 5600 XT, but then again it’s a bit faster. Meanwhile the $279 RX 5600 XT clobbers the $279 GeForce GTX 1660 Ti, which is what AMD wanted all along. But the GTX 1660 Ti itself has been a dead card walking for the last three months as far as value goes; NVIDIA’s best option here is the $229 GeForce GTX 1660 Super, which delivers most of the Ti’s performance for a lot less.
So for better or worse, there are no head-to-head matchups to speak of today. Instead, everyone is banking on price tiers and total value. This means that the big challenge for AMD isn't even raw performance; rather, it's all about beating NVIDIA on overall value. The Radeon RX 5600 XT needs to be fast enough to justify its price premium over the GTX 1660 Super, and close enough to the RTX 2060 to overcome the kind of inertia that has helped NVIDIA rule the mainstream market for the last three years. As we’ll see, AMD can do it, but this is a game of inches and cents.
Finally, like virtually all of AMD’s other products launched in the last six months, the new RX 5600 XT will also qualify for AMD and Microsoft’s ongoing Xbox Game Pass offer. So video card buyers will get a three-month free trial to Microsoft’s games subscription program.
**Q1 2020 GPU Pricing Comparison**

| AMD | Price | NVIDIA |
|---|---|---|
| Radeon RX 5700 | $329 | |
| | $299 | GeForce RTX 2060 |
| Radeon RX 5600 XT | $279 | GeForce GTX 1660 Ti |
| | $229 | GeForce GTX 1660 Super |
| Radeon RX 5500 XT 8GB | $199/$209 | GeForce GTX 1660 |
| Radeon RX 5500 XT 4GB | $169/$159 | GeForce GTX 1650 Super |
| | $149 | GeForce GTX 1650 |
Comments
Targon - Tuesday, January 21, 2020
Performance per dollar within a given price category makes sense, but in many situations, lower-end cards will end up being better when it comes to performance per dollar. Beyond $400, your performance per dollar does drop, but you can't argue when people want a $600+ card because they want to game at 4K resolutions and the $400 cards just can't handle that resolution.
thecoolnamesweretaken - Tuesday, January 21, 2020
While I agree with you, I wish more benchmarks included the 1070 Ti rather than the 1070. I imagine as an owner of such a card I must be in the extreme minority, or perhaps reviewers never bothered to acquire one since it was released so late in the cycle before the move to the RTX 20xx architecture.
Retycint - Tuesday, January 21, 2020
Seconded. The 1070 Ti was a relatively popular mining card (at least in my country) and hence the local used market is flooded with used 1070 Ti's for about $170-180, which is an absolute steal for the performance and basically renders the entire mid-range market obsolete.
Krayzieka - Tuesday, January 21, 2020
Is this with the new driver boost?
Ryan Smith - Tuesday, January 21, 2020
This is with the latest drivers. AMD's Radeon Boost (dynamic resolution) feature is not enabled.
maroon1 - Tuesday, January 21, 2020
AMD's Radeon Boost feature is horrible, especially if you're running below 4K.
Watch some reviews, like Hardware Unboxed's coverage of it. They even recommend not using it at 1080p because you sacrifice a lot of image quality.
Duckferd - Tuesday, January 21, 2020
All contextual. If you are running at 1080p on certain games with an APU, for example, it's still worthwhile with a minimal amount of boost (83%) because it keeps frametimes consistent when you're already constrained and most need it (i.e. panning in FPS games).
How 'horrible' it is also depends on whether you can perceive the dynamic resolution changes as well. This is going to vary quite a bit depending on user configuration and tolerance, but I think the feature is worthwhile to include.
Cooe - Tuesday, January 21, 2020
Did you even watch that Hardware Unboxed video? They were extremely impressed with the performance/visuals at 4K (and using a resolution downscaler of ANY KIND at lower resolutions is an inherently bad idea, not anything wrong with AMD's approach), though of course, the algorithm still has its issues and was more like a proof of concept than anything you'd want to daily drive yet. But your original comment is absolutely NOT the point they ended at, so please don't spread nonsense.
Spunjji - Wednesday, January 22, 2020
It's possible that spreading nonsense is maroon1's actual job :/
Irata - Tuesday, January 21, 2020
Well, Techspot really liked it and found it a lot better than DLSS in their review.