With the launch of the Radeon RX 5700 series and Radeon RX 5500 XT under their collective belt, AMD is now getting ready to fill in the divide between the card families. With the RX 5500 XT carrying a $199 price tag and the next step up being the $349 RX 5700, there’s a significant gap in price and performance between the two cards. All of which has left AMD at a disadvantage in the popular $200-$300 mainstream segment, where NVIDIA’s GeForce GTX 1660 cards dominate. To that end, today AMD is announcing the third sub-series of the Radeon RX 5000 family, the Radeon RX 5600 series. These latest cards will be coming to retail, OEM, and mobile, with the retail Radeon RX 5600 XT leading the charge.

Designed to fill that gap between the RX 5700 series and the RX 5500 series, the $279 RX 5600 XT employs a further cut-down version of AMD’s Navi 10 GPU. For their latest card, AMD is dialing back on the amount of enabled hardware in order to offer a card with performance between the existing Radeon RX 5000 series cards, and with a price to match. The net result is a card that, in AMD's words, is intended to be the ultimate 1080p gaming card, and, just as importantly, to go head-to-head with NVIDIA’s GeForce GTX 1660 cards.

AMD Radeon RX Series Specification Comparison

                        RX 5600 XT          RX 5700             RX 5500 XT          RX 590
CUs (SPs)               36 (2304)           36 (2304)           22 (1408)           36 (2304)
Texture Units           144                 144                 88                  144
ROPs                    64                  64                  32                  32
Base Clock              1235MHz?            1465MHz             1607MHz             1469MHz
Game Clock              1375MHz             1625MHz             1717MHz             N/A
Boost Clock             1560MHz             1725MHz             1845MHz             1545MHz
Throughput (FP32)       7.2 TFLOPs          7.95 TFLOPs         5.2 TFLOPs          7.1 TFLOPs
Memory Clock            12 Gbps GDDR6       14 Gbps GDDR6       14 Gbps GDDR6       8 Gbps GDDR5
Memory Bus Width        192-bit             256-bit             128-bit             256-bit
VRAM                    6GB                 8GB                 4GB/8GB             8GB
Transistor Count        10.3B               10.3B               6.4B                5.7B
Typical Board Power     150W                180W                130W                225W
Manufacturing Process   TSMC 7nm            TSMC 7nm            TSMC 7nm            GloFo/Samsung 12nm
Architecture            RDNA (1)            RDNA (1)            RDNA (1)            GCN 4
GPU                     Navi 10             Navi 10             Navi 14             Polaris 30
Launch Date             01/21/2020          07/07/2019          12/12/2019          11/15/2018
Launch Price            $279                $349                $199/$169           $279
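
As a side note, the peak FP32 throughput figures in the table fall straight out of the shader count and boost clock. The sketch below is purely illustrative, assuming the usual convention of 2 FLOPs per stream processor per clock, rated at the boost clock:

```python
# Peak FP32 throughput = stream processors x 2 FLOPs/clock x boost clock
# (illustrative sketch; assumes rated figures use the boost clock)

def peak_fp32_tflops(stream_processors: int, boost_clock_mhz: int) -> float:
    """Peak single-precision throughput in TFLOPs (FMA counted as 2 FLOPs)."""
    return stream_processors * 2 * boost_clock_mhz * 1e6 / 1e12

print(f"RX 5600 XT: {peak_fp32_tflops(2304, 1560):.2f} TFLOPs")  # ~7.2
print(f"RX 5700:    {peak_fp32_tflops(2304, 1725):.2f} TFLOPs")  # ~7.95
print(f"RX 5500 XT: {peak_fp32_tflops(1408, 1845):.2f} TFLOPs")  # ~5.2
```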

As mentioned earlier, AMD hasn’t minted a new GPU for the Radeon RX 5600 XT. Instead, the company is using a further cut-down version of Navi 10, which is already used for the Radeon RX 5700 series. And while AMD generally avoids talking about salvaging when discussing lower-tier products, it's difficult to imagine that's not a factor here, even if the official line is focused more on it being a conscientious choice to bring Navi 10 down to compete with NVIDIA's GTX 1660 series. Either way, the net result is that the RX 5600 XT is essentially a third-tier Navi 10-based card.

All told, it is rather rare for AMD to offer a third-tier part on the desktop; normally the company only offers two cards/configurations before moving up or down to the next GPU. Part of their current situation comes down to sheer necessity on the GPU side of matters – Navi 14 isn't powerful enough for the market AMD is chasing – though I imagine yields on the 7nm process also have room to improve, since it's still relatively young. At the end of the day, it means AMD can kill two birds with one stone by putting chips that would otherwise be discarded to use in a card to counter the GeForce GTX 1660 series.

Digging into the specifications, as it’s configured, the Radeon RX 5600 XT is set to land firmly between the RX 5700 and RX 5500 XT in performance. AMD’s interesting choice of functional blocks to cut (and not to cut) means that the RX 5600 XT actually retains a lot of the RX 5700’s "core" GPU performance: on paper, the card's shading, texturing, pixel throughput, and compute performance are all quite close to the RX 5700, with the only major difference being roughly 15% lower average clockspeeds. AMD has even retained Navi 10's 64 ROPs, meaning that the card has plenty of pixel pushing power.

Instead, the big tradeoff in speccing out the RX 5600 XT has been in memory bandwidth, and to a lesser extent memory capacity. The RX 5600 XT ships with 6GB of GDDR6 on a 192-bit memory bus, which is to say that 1/4 of Navi 10's MC/L2 partitions have been disabled. Furthermore, the card ships with 12Gbps GDDR6 instead of the 14Gbps chips found on the RX 5700/5500. As a result, the RX 5600 XT has 36% less memory bandwidth than the RX 5700, coming in at 288GB/sec, making this the biggest factor differentiating the RX 5600 XT from its RX 5700 sibling.

The flip side of the coin, however, is that a 192-bit memory bus means that the RX 5600 XT has a lot more memory bandwidth than the RX 5500 XT, with the new card delivering 29% more memory bandwidth here. And with significantly more ROPs and CUs to feed than the RX 5500 XT, the RX 5600 XT is poised to shoot well past AMD's cheaper 1080p card.
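
For those keeping score, the bandwidth math works out as follows. This is a quick illustrative sketch using only the memory specs from the table above (bus width times per-pin data rate, divided by 8 bits per byte):

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits-per-byte
# (illustrative sketch using the specs quoted above)

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/sec."""
    return bus_width_bits * data_rate_gbps / 8

rx5700   = bandwidth_gb_s(256, 14)  # 448 GB/sec
rx5600xt = bandwidth_gb_s(192, 12)  # 288 GB/sec
rx5500xt = bandwidth_gb_s(128, 14)  # 224 GB/sec

print(f"RX 5600 XT vs RX 5700:    {1 - rx5600xt / rx5700:.0%} less bandwidth")    # ~36% less
print(f"RX 5600 XT vs RX 5500 XT: {rx5600xt / rx5500xt - 1:.0%} more bandwidth")  # ~29% more
```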

Cutting back on memory bandwidth makes a lot of sense for AMD, both in terms of differentiating products with regards to performance as well as bringing down the card’s manufacturing costs. That said, this is the first time in a long while that we’ve seen an AMD card with a 192-bit memory bus; the company normally just doesn’t do it. Power-of-two configurations are easier to deal with for various reasons, and they’ve always allowed AMD to pack in more memory than NVIDIA’s competing mid-range cards. Ultimately, the use of a 192-bit bus means that AMD can put to work Navi 10 GPUs with a bad memory controller, all the while using fewer (and cheaper) memory chips than in a 256-bit card. But it also means they are only able to outfit the card with 6GB of memory, as opposed to 8GB.
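
To put the chip count angle in more concrete terms, here's a rough sketch of how bus width translates into memory chips and capacity. It assumes the typical configuration for cards in this class – one 8Gb (1GB) GDDR6 chip per 32-bit channel, no clamshell/double-density layouts – so treat the numbers as illustrative rather than a board teardown:

```python
# Bus width fixes the chip count (GDDR6 chips use 32-bit interfaces), and the
# chip count times per-chip density fixes the capacity.
# (rough sketch; assumes 8Gb/1GB chips, one per channel, no clamshell)

def gddr6_config(bus_width_bits: int, chip_density_gbyte: int = 1):
    """Return (chip count, total VRAM in GB)."""
    chips = bus_width_bits // 32
    return chips, chips * chip_density_gbyte

print(gddr6_config(192))  # (6, 6) -> 6 chips, 6GB  (RX 5600 XT)
print(gddr6_config(256))  # (8, 8) -> 8 chips, 8GB  (RX 5700)
print(gddr6_config(128))  # (4, 4) -> 4 chips, 4GB  (RX 5500 XT 4GB)
```

The 8GB RX 5500 XT gets around the same limit by using denser chips, which is also an option for a 192-bit card, but only in 12GB form.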

Without getting entirely sidetracked here, I went into this matter a bit in our RX 5500 XT launch coverage, but I do have some concerns about cards with less than 8GB of VRAM. With the next-gen consoles set to launch this year, the bar on hardware requirements is about to be significantly raised in one fell swoop, and I worry that sub-8GB cards won’t have enough memory capacity. With all of that said, however, whatever the ramifications are, AMD and NVIDIA will be on equal footing: both the RX 5600 XT and GTX 1660 series ship with 6GB of VRAM, so both cards will thrive or falter at the same time. Clearly explaining a 6GB RX 5600 XT versus an 8GB RX 5500 XT is going to be a trickier matter for AMD's marketing department, however.

Rounding out the hardware package, AMD tells us that the new cards will ship with a TBP of 150 watts, which, like everything else, is right between the RX 5700 and RX 5500 XT. The smaller VRAM pool and lower clockspeeds help to bring down power consumption versus the RX 5700, which helps to keep AMD's energy efficiency up. In fact, AMD tells us that their 150W TBP value for the retail card is a bit on the conservative side of matters, and that real-world power usage will sometimes be less, so I’ll be interested in seeing just what our benchmarks turn up for the new card in a couple of weeks.

Product Positioning & The Competition

Since the RX 5600 XT is built from the same Navi 10 GPU as AMD’s RX 5700 cards, AMD’s board partners will be hitting the ground running here in terms of board designs. Some will just outright be reusing RX 5700 designs, while others will be tweaking their designs to account for the narrower memory bus, lower TBP, and lower sticker price.

This also means that this is a board partner-driven launch. AMD doesn’t have a (public) reference card, and it's the board partners who will be shipping custom designs from day one. As these are sub-$300 cards we won’t be seeing anything too exotic – mostly the usual variations on two- and three-fan designs – though factory overclocked cards will be peppered into the mix.

As mentioned previously, AMD’s target market for the RX 5600 XT is mainstream gamers, with the company offering a card that is supposed to be the final word in 1080p gaming. 1080p is still the single largest segment of the gaming market, and after watching rival NVIDIA sell GTX 1060 cards by the millions over the last few years, it's a market that AMD isn't content to leave alone.

Within AMD’s product stack, the RX 5600 XT doesn’t have a particularly good direct predecessor. The closest analog would be the Radeon RX 580 or RX 590, which is somewhat the same lineup that the RX 5700 served to replace. However, as the RX 5600 XT isn't as powerful as the RX 5700, there's an argument to be had about whether it can deliver a full generational performance update over the RX 580/590. Still, as an upgrade to older RX 300-era systems it would be a gigantic step up, and it slots in as a natural step within AMD’s expanding product stack for new system builds.

The competition for the RX 5600 XT then is both NVIDIA’s GeForce GTX 1660 series and their older GTX 1060 cards. AMD is especially keen on winning over GTX 1060 owners who are looking for an upgrade, but aren't satisfied with NVIDIA's offerings. None of the Turing cards have been true generational upgrades over their Pascal predecessors, so this is a sentiment that AMD is hoping to build off of by offering a better 1080p card than NVIDIA.

And with a cut-down Navi 10 part, AMD should be able to get there. While all vendor numbers should be taken with a large grain of salt, AMD's internal benchmarks have them ahead of the entire GTX 1660 family, and given the RX 5600 XT's specs (and what RX 5700 can do), this is fairly believable. In fact the biggest challenge for AMD isn't even raw performance, but rather it's beating NVIDIA on overall value.

While NVIDIA’s recent mid-generation Super refresh has kind of made a mess of their product stack here, the $229 GTX 1660 Super has become the clear front-runner of the GTX 1660 family, as its great perf-per-dollar ratio has made the other GTX 1660 cards (e.g. GTX 1660 Ti) rather redundant. So although AMD can handily beat the entire GTX 1660 series on performance, it's not the $279 GTX 1660 Ti where AMD's stiffest competition lies. Instead, they need to be able to convince potential buyers that the RX 5600 XT is worth the $50 premium over the competitively priced GTX 1660 Super.

Overall, AMD is rolling out the Radeon RX 5600 XT with high hopes. The company has a clear goal in mind: to dethrone NVIDIA at the high end of 1080p gaming, and they have hardware that should be able to do it. Now we'll just have to see where the cards come to rest when the Radeon RX 5600 XT launches on January 21st. So stay tuned for our full look at AMD’s latest video card later this month.

Q1 2020 GPU Pricing Comparison

AMD                        Price        NVIDIA
Radeon RX 5700             $329         GeForce RTX 2060
Radeon RX 5600 XT          $279         GeForce GTX 1660 Ti
                           $229         GeForce GTX 1660 Super
Radeon RX 5500 XT 8GB      $199/$209    GeForce GTX 1660
Radeon RX 5500 XT 4GB      $169/$159    GeForce GTX 1650 Super
                           $149         GeForce GTX 1650
Comments

  • neblogai - Tuesday, January 7, 2020 - link

    Only for the hardware – because with the prices of games and services they more than recoup it over their lifetime. That is why consoles are successful only in the richest countries, not in poor ones, where people count their money hard.
  • Gigaplex - Wednesday, January 8, 2020 - link

    I don't see that happening. When big console changes came about (vastly different architectures, e.g. PS2->PS3->PS4) the porting requirements changed, which led to poorly optimised code. The current leading consoles are pretty close to PC architectures, and the next generation will be largely the same but with higher specs - but still lower than high end PCs.
  • azazel1024 - Tuesday, January 7, 2020 - link

    Many older games I can play at 1080p with my i3570 and GTX750. Not even a GTX750ti. Newer ones, especially with the various feature knobs turned up...not a chance.

    As GPUs are able to handle the demands, and especially as consoles improve, you'll find what games demand at 1080p continuing to increase.
  • Spunjji - Tuesday, January 7, 2020 - link

    What's with the 30W limitation? The 1650 can do everything you asked, but at 75W. That's seriously impressive from any perspective besides "it arbitrarily has to be 30W".

    The 1030 was impressive in that regard when it was new, but time moves on, and neither AMD nor Nvidia have any pressing drive to replace their bottom-of-the-range GPU on a regular basis.
  • Hul8 - Tuesday, January 7, 2020 - link

    One consideration would be that if you try to dissipate that 75W with a single slot cooler, the GPU will either be unbearably loud, be clocked really low, or throttle (or multiple of them). And even worse clocks if you want a card with a passive cooler.
  • Hul8 - Tuesday, January 7, 2020 - link

    Single slot *was* one of @PeachNCream's requirements.
  • SirPerro - Wednesday, January 8, 2020 - link

    I find interesting the "can handle 1080p" mentality. As if games 4 years from now will be the same as current ones.

    One must be ready to accept the fact that a card which runs current games in ultra at 60fps will not run future games in ultra.

    And it's perfectly fine and normal. This is not the console world. There's no "GPU power freeze" here. Small steps in the market every few months.
  • flyingpants265 - Saturday, January 11, 2020 - link

    IMO the time for 1080p is completely over. Games should be targeting either 1440p@240hz, or 75-90hz.

    Screw 60hz honestly, I have a Samsung 19" 75hz from like 2006 or something and it works great.
  • Alexvrb - Wednesday, January 8, 2020 - link

    I mean ASIDE from the fact that 30W half height single slot graphics cards are in such HIGH demand from desktop gamers (nearly .00001% of users want one!)… why stop there? I think while you're asking for something that doesn't exist, you really need to lean into it. Sleep until they can do 16K full raycasting in a 5W power envelope...
  • flyingpants265 - Saturday, January 11, 2020 - link

    Well, 1050ti/1650 is pretty good at 75 watts. No power connector needed!

    Not sure why you care about single-slot coolers, maybe you have a personal reason for doing so.

    If anything, GPU coolers should be a lot bigger. GPUs are 300W+. Dual 140mm towers on both CPU/GPU. Linus has a recent video about this, and he (somewhat bizarrely) underreacts to the tremendous overwhelming improvement in heat dissipation, but it's just the obvious thing to do, both for temps/boost and the longevity of the card.

    Also, video cards should be mounted parallel to an ITX motherboard on a 90degree PCIe slot, it's not 1999 anymore. It's somewhat rare to see systems with more than one PCIe card in the wild.
