testbug00 - Tuesday, February 9, 2016 - link
And any rumors of GDDR5X cards before 4Q are now almost 100% false.
extide - Tuesday, February 9, 2016 - link
Well, Micron isn't the only manufacturer. You have Hynix, Samsung, etc., as well.
Samus - Tuesday, February 9, 2016 - link
The other concern is that this memory technology is more complex to implement than Micron originally led the public to believe. I am a big fan of updating the GDDR5 standard as opposed to relying on HBM, because HBM has too many drawbacks, specifically cost and implementation complexity.
But now it is obvious GDDR5X is nearly as complex to implement as HBM (sans the interposer and die packaging), because the pin count, package size, and PCB requirements are all different.
Given what Samsung revealed about HBM's memory controller requirements, HBM is basically drop-in compatible with any GPU memory controller that supports GDDR5; it just comes down to video BIOS support (further confirmed by AMD's Polaris shipping in both GDDR5 and HBM cards).
That isn't to say old GPU architectures will be optimized for HBM, but it's just another knock against GDDR5X's supposed flexibility.
What I'm getting at is... GDDR5X isn't what we have been led to believe. If it's as radically different from GDDR5 as we're now being told, from being QDR to having an entirely different pinout and package size... this isn't just GDDR5 with a frequency bump and a voltage drop. It will not just "drop in" on existing video cards. Entirely new reference designs and full OEM cooperation are going to be needed to implement it, which sucks because that means it isn't going to be cheap, and it probably won't show up on any current-gen GPU architectures because AMD and nVidia are just going to wait for their next GPUs (Polaris and Pascal) instead of creating new reference designs for Fiji and Maxwell 2.
And I doubt any OEM partners are going to make the R&D investment into fitting GDDR5X to older GCN or Maxwell products without AMD or nVidia telling them how to do it. Where do those extra 20 pins for each memory die even go? If the voltage is lower, then that would theoretically mean a reduction in pins.
close - Wednesday, February 10, 2016 - link
Well, the name says GDDR5, but it's actually GQDR. So it may be a larger departure from GDDR5 than the name suggests. Maybe at some point this QDR technology will be integrated into HBMx, given that the channel interface for HBM still has DDR buses.
BurntMyBacon - Wednesday, February 10, 2016 - link
@close: "Maybe at some point this QDR technology will be integrated into HBMx given that the channel interface for HBM still has DDR buses."Entirely reasonable. Given the stricter control over line lengths, the power distribution advantage afforded by using a single piece of silicon (interposer) vs multiple discrete chips, and the higher number of "pins" that can be supported by the interposer, I'd say there is less of a barrier of entry here. It probably won't happen immediately due to lack of need, but once GPU design figure out how to make good use of HBMs throughput, they may start considering it.
BurntMyBacon - Wednesday, February 10, 2016 - link
@Samus: "After what was revealed by Samsung about HBM's memory controller requirements, it is basically drop-in compatible with any GPU's memory controller that supports GDDR5 and just comes down to video bios support (further confirmed by AMD's Polaris shipping in GDDR5 and HBM cards)"To be fair, I think this is also being simplified, but your point still stands. There does not appear to be as much of an advantage to GDDR5X implementation over HBM as we were lead to beleive.
@Samus: "That isn't to say old GPU architectures will be optimized for HBM, but it's just another knock against GDDR5X's supposed flexibility."
I think the key here will be the optimization. There is no reason to incur the cost of HBM if your chip is designed with a 256-bit wide bus. If your bus is wider than 512-bit, then GDDR isn't practical. Higher bit-width buses are usually reserved for higher-end parts anyway due to complexity and cost.
I imagine we'll see GDDR5X used either to reduce bus width while maintaining the same performance, or to boost performance where bus width is constrained. One example might be bringing previous high-end / mainstream designs down to the mainstream / entry level. Another could be squeezing more performance or battery life out of laptop chips. If the power requirements are the same at the higher bandwidth, then reducing the bus width while maintaining the same bandwidth would reduce power requirements, extending battery life (or potentially freeing up more power for GPU performance).
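As a rough illustration of the bus-width trade-off described above (assuming GDDR5X roughly doubles the per-pin data rate over GDDR5; the 7 and 14 Gbps figures are assumptions, not quoted specs):

```python
# Hypothetical comparison: halving the bus width while doubling the per-pin rate
# keeps the same theoretical peak bandwidth.
gddr5_256bit  = 256 * 7  / 8   # 224 GB/s on a 256-bit GDDR5 bus
gddr5x_128bit = 128 * 14 / 8   # 224 GB/s on a 128-bit GDDR5X bus
print(gddr5_256bit, gddr5x_128bit)
```

Fewer traces and fewer memory packages for the same bandwidth is exactly the kind of win that would matter for laptop and mainstream parts.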
Samus - Wednesday, February 10, 2016 - link
Good point on the bus bandwidth. Considering most NVidia cards are 128-bit and historically top out around 192- to 256-bit (there are some 320-bit and 384-bit anomalies), they will probably stick with GDDR5X for Pascal unless it ends up having a wide 512-bit bus like many AMD GCN architectures. Obviously Fiji has a 4096-bit bus, but it was built from the ground up around HBM, and that GPU would be crippled by a 256-bit GDDR5X implementation.
xdesire - Tuesday, February 9, 2016 - link
Seems like HBM2 for high-end gaming and probably the pro segment, GDDR5X for mid-range (maybe some mid-range cards could see HBM1/2?), GDDR5 for the lower segment, and GDDR3 for the bottom line then? I'm waiting to see an HBM2 vs GDDR5X comparison soon (price/performance/capacity/power consumption, etc.). Nonetheless, 2016 will be the year of great battles in the tech realm :)
extide - Tuesday, February 9, 2016 - link
I am willing to bet that GDDR5 will be phased out relatively quickly in favor of GDDR5X (or HBM on the top-tier cards). The bottom end will still be DDR3 (ugh!). There is little reason for GDDR5 to stick around, as I bet GDDR5X will become very cost-competitive relatively quickly.
I have said this before, but I bet we will see one GPU from AMD with HBM, and one or two from nVidia. GP100 for sure, and GP104 with maybe GDDR5X or HBM.
DanNeely - Tuesday, February 9, 2016 - link
I'd expect the bottom end to switch to DDR4 fairly quickly. At Newegg, a 4GB DDR4 DIMM (the smallest size offered) is only $20 vs. $17 for DDR3, so the price gap is almost gone. On the technical side, the higher throughput will be a pure win, while the increased random access latency that limits real-world gains on the CPU is largely irrelevant, since gaming GPUs stream large sequential chunks of memory instead of hammering the system with random access operations.
I'd be shocked if GP104 doesn't offer HBM; even if GDDR5X is "good enough", they need to offer it on top-end cards for marketing reasons. GP100 doesn't really count here, since after skipping the high-performance market with Maxwell, initial production is going to be sucked up by customers willing to pay more per card than even the most tricked-out gaming PC costs. Titan might be GP100, but the 1080/1070 will almost certainly be GP104.
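For a sense of the DDR3-to-DDR4 throughput gap on a typical entry-level GPU, a quick sketch (the 64-bit bus and the DDR3-1600 / DDR4-2400 speeds are assumed as representative retail parts, not figures from the comment):

```python
# Peak bandwidth on a 64-bit entry-level GPU memory bus, in GB/s:
# bus width (bits) x transfer rate (MT/s) / 8 / 1000.
def bw_gb_s(bus_width_bits: int, transfer_rate_mt_s: int) -> float:
    return bus_width_bits * transfer_rate_mt_s / 8 / 1000

print(bw_gb_s(64, 1600))  # DDR3-1600 -> 12.8 GB/s
print(bw_gb_s(64, 2400))  # DDR4-2400 -> 19.2 GB/s, ~50% more sequential throughput
```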
eddman - Tuesday, February 9, 2016 - link
1080/1070?! No way. 1800, 1700, 1600, etc., or they might even skip it and go to 2X00.
IMO:
GP100/110: HBM2
GP104: HBM2 and/or GDDR5X
GP106: GDDR5
GP107/8: some GDDR5, mostly DDR4; maybe DDR3 on the lowest of the low cards.
Samus - Wednesday, February 10, 2016 - link
I don't think they will phase out GDDR5 at all. There are dozens of architectures still built for it, and considering all the reworking that must be done to set up a PCB for GDDR5X, I suspect only new products will use it. That means all current architectures like Maxwell and GCN 1.1/1.2 will continue on GDDR5, and they will be around for at least the next few years in mid-range products.
Don't forget NVidia is still producing Kepler and Fermi cards, many years after they were architecturally replaced by Maxwell 1/2.
nightbringer57 - Wednesday, February 10, 2016 - link
GDDR3 disappeared quite a while ago. Most entry-level cards (and older mid-level ones) have been using DDR3 instead, which has become the de facto standard in computers and mobile devices (the DDR3/DDR3L/whatever derivatives are similar, but GDDR3 actually is a bit different).
So you can expect a shift from DDR3 to DDR4 in the entry- to mid-level models as the relative prices of DDR3 and DDR4 shift around, although due to time-to-market and other issues, you won't see a complete shift to DDR4 until a little while after it has become cheaper.
eeessttaa - Tuesday, February 9, 2016 - link
I was wondering if someone from AnandTech could write an article comparing the benefits and disadvantages of GDDR5X and HBM2. Love you guys' articles, keep it up.
A5 - Tuesday, February 9, 2016 - link
From an end-user perspective, it is fairly simple.
HBM2 is faster, enables smaller card designs, and costs more, while GDDR5X is an improvement over current GDDR5 in every way while being cheaper and slower than HBM2.
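For anyone wanting rough per-device numbers behind that summary, a small sketch using the publicly quoted peak specs (1024-bit at up to 2 Gbps per pin for an HBM2 stack, 32-bit at 10-14 Gbps for a GDDR5X chip; these are spec-sheet figures, not measurements):

```python
# Peak bandwidth per memory device, in GB/s.
hbm2_stack  = 1024 * 2  / 8   # one HBM2 stack  -> 256 GB/s
gddr5x_chip = 32   * 12 / 8   # one GDDR5X chip -> 48 GB/s at 12 Gbps
print(hbm2_stack, gddr5x_chip)
# Four HBM2 stacks -> ~1 TB/s; eight GDDR5X chips on a 256-bit bus -> ~384 GB/s.
```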
Valantar - Wednesday, February 10, 2016 - link
Also, HBM greatly reduces power consumption, while GDDR5X only reduces it very slightly.
Maxbad - Sunday, February 14, 2016 - link
Seems like GDDR5X will not be suitable for value-oriented cards, and they will be stuck with GDDR5 for quite some time. For high-end cards, HBM2 offers too many advantages over GDDR5/X in terms of size and speed, and prices of the technology are only bound to go lower with time. On the other hand, GDDR5 still offers good enough bandwidth for modern games, as shown by the GTX 980 Ti, and with the die shrink, value-oriented cards can offer more than acceptable performance with the old memory technology. I don't think AMD and NVidia will spare the resources to implement GDDR5X if they can get away without it, and they will push for a long-term replacement of GDDR5 with HBM2.