That Darn Memory Bus

Among the entire GTX 600 family, the GTX 660 Ti’s one unique feature is its memory controller layout. NVIDIA built GK104 with 4 memory controllers, each 64 bits wide, giving the entire GPU a combined memory bus width of 256 bits. These memory controllers are tied into the ROPs and L2 cache, with each controller forming part of a ROP partition containing 8 ROPs (or rather 1 ROP unit capable of processing 8 operations), 128KB of L2 cache, and the memory controller. To disable any of those things means taking out a whole ROP partition, which is exactly what NVIDIA has done.

The impact on the ROPs and the L2 cache is rather straightforward – render operation throughput is reduced by 25% and there’s 25% less L2 cache to store data in – but the loss of the memory controller is a much tougher concept to deal with. This goes for both NVIDIA on the design end and for consumers on the usage end.
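
To put numbers to those percentages, here's a minimal sketch of the partition arithmetic, assuming only the per-partition figures described above (the names and layout of the sketch are ours, purely for illustration):

```python
# Per-partition resources on GK104, as described above: each ROP
# partition pairs 8 ROPs and 128KB of L2 with a 64-bit memory controller.
ROPS_PER_PARTITION = 8
L2_KB_PER_PARTITION = 128
BUS_BITS_PER_PARTITION = 64

def gk104_config(partitions):
    return {
        "rops": partitions * ROPS_PER_PARTITION,
        "l2_kb": partitions * L2_KB_PER_PARTITION,
        "bus_bits": partitions * BUS_BITS_PER_PARTITION,
    }

full = gk104_config(4)  # GTX 680/670: all four partitions enabled
cut = gk104_config(3)   # GTX 660 Ti: one partition disabled
print(full)  # {'rops': 32, 'l2_kb': 512, 'bus_bits': 256}
print(cut)   # {'rops': 24, 'l2_kb': 384, 'bus_bits': 192} - each down 25%
```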

256 is a nice power-of-two number. For video cards with power-of-two memory bus widths, it’s very easy to equip them with a similarly power-of-two memory capacity such as 1GB, 2GB, or 4GB of memory. For various minor technical reasons (mostly the sanity of the engineers), GPU manufacturers like sticking to power-of-two memory busses. And while this is by no means a true design constraint in video card manufacturing, there are ramifications for deviating from it.

The biggest consequence of deviating from a power-of-two memory bus is that under normal circumstances this leads to a card’s memory capacity not lining up with the bulk of the cards on the market. To use the GTX 500 series as an example, NVIDIA had 1.5GB of memory on the GTX 580 at a time when the common Radeon HD 5870 had 1GB, giving NVIDIA a 512MB advantage. Later on, however, the common Radeon HD 6970 had 2GB of memory, leaving NVIDIA behind by 512MB. This also had one additional consequence for NVIDIA: they needed 12 memory chips where AMD needed 8, which generally inflates the bill of materials more than the price of higher speed memory in a narrower design does. This ended up not being a problem for the GTX 580 since 1.5GB was still plenty of memory for 2010/2011 and the high price tag could easily absorb the BoM hit, but this is not always the case.
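
The chip-count side of that BoM argument is straightforward arithmetic; a quick sketch, assuming the standard 32-bit interface per GDDR5 package:

```python
# Each GDDR5 package presents a 32-bit interface, so the bus width
# fixes the minimum chip count for a card.
def chips_needed(bus_bits, chip_io_bits=32):
    return bus_bits // chip_io_bits

print(chips_needed(384))  # GTX 580's 384-bit bus: 12 chips -> 1.5GB with 1Gb chips
print(chips_needed(256))  # Radeon's 256-bit bus: 8 chips -> 1GB (1Gb) or 2GB (2Gb)
```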

Because NVIDIA has disabled a ROP partition on GK104 in order to make the GTX 660 Ti, they’re dropping from a power-of-two 256bit bus to an off-size 192bit bus. Under normal circumstances this means that they’d need to either reduce the amount of memory on the card from 2GB to 1.5GB, or double it to 3GB. The former is undesirable for competitive reasons (AMD has 2GB cards below the 660 Ti and 3GB cards above) not to mention the fact that 1.5GB is too small for a $300 card in 2012. The latter on the other hand incurs the BoM hit as NVIDIA moves from 8 memory chips to 12 memory chips, a scenario that the lower margin GTX 660 Ti can’t as easily absorb, not to mention how silly it would be for a GTX 680 to have less memory than a GTX 660 Ti.

Rather than take either of the usual routes, NVIDIA is taking a third route of their own: putting 2GB of memory on the GTX 660 Ti anyhow. By putting more memory on one controller than the other two – in effect breaking the symmetry of the memory banks – NVIDIA can have 2GB of memory attached to a 192bit memory bus. This is a technique that NVIDIA has had available to them for quite some time, but it’s also something they rarely pull out, using it only when necessary.

We were first introduced to this technique with the GTX 550 Ti in 2011, which had the same unusual 192bit memory bus. By using a mix of 2Gb and 1Gb modules, NVIDIA could outfit the card with 1GB of memory rather than the 1.5GB/768MB that a 192bit memory bus would typically dictate.

For the GTX 660 Ti in 2012, NVIDIA is once again going to use their asymmetrical memory technique to outfit the card with 2GB of memory on a 192bit bus, but they’re going to be implementing it slightly differently. Whereas the GTX 550 Ti mixed memory chip density in order to get 1GB out of 6 chips, the GTX 660 Ti will mix up the number of chips attached to each controller in order to get 2GB out of 8 chips. Specifically, there will be 4 chips instead of 2 attached to one of the memory controllers, while the other controllers will continue to have 2 chips. Doing it in this manner allows NVIDIA to use the same Hynix 2Gb chips they already use in the rest of the GTX 600 series, with the only high-level difference being the width of the bus connecting them.
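
Expressed per 64-bit controller, the two asymmetrical layouts work out as follows. This is a sketch for illustration: the GTX 660 Ti split is as described above, while the exact per-controller placement of the GTX 550 Ti’s mixed-density chips is our assumption.

```python
# Chips per 64-bit controller, densities in gigabits (Gb).
# The GTX 550 Ti mixed densities; the GTX 660 Ti mixes chip counts instead.
def capacity_mb(controllers):
    # 1Gb chip = 128MB, 2Gb chip = 256MB
    return sum(gb * 1024 // 8 for ctrl in controllers for gb in ctrl)

gtx550ti = [[1, 1], [1, 1], [2, 2]]        # 6 chips, mixed density
gtx660ti = [[2, 2, 2, 2], [2, 2], [2, 2]]  # 8 chips, uniform 2Gb density
print(capacity_mb(gtx550ti))  # 1024 -> 1GB on a 192bit bus
print(capacity_mb(gtx660ti))  # 2048 -> 2GB on a 192bit bus
```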

Of course at a low level it’s more complex than that. In a symmetrical design with an equal amount of RAM on each controller it’s rather easy to interleave memory operations across all of the controllers, which maximizes performance of the memory subsystem as a whole. However complete interleaving requires that kind of symmetry, which means it’s not suitable for NVIDIA’s asymmetrical memory design. Instead NVIDIA must start playing tricks. And when tricks are involved, there’s always a downside.

The best case scenario is always going to be that the entire 192bit bus is in use, interleaving a memory operation across all 3 controllers for 144GB/sec of memory bandwidth (192bit * 6GHz / 8). But that can only be done for the first 1.5GB of memory; the final 512MB is attached to a single memory controller. This invokes the worst case scenario, where only one 64-bit memory controller is in use, reducing memory bandwidth to a much more modest 48GB/sec.
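
Both figures fall straight out of the bus arithmetic; a minimal sketch, taking the 6GHz figure as GDDR5’s effective data rate:

```python
# Peak bandwidth = bus width (bits) * effective data rate (GT/s) / 8 bits per byte.
def bandwidth_gb_s(bus_bits, data_rate_ghz=6.0):
    return bus_bits * data_rate_ghz / 8

print(bandwidth_gb_s(192))  # 144.0 GB/s - all three controllers interleaved
print(bandwidth_gb_s(64))   #  48.0 GB/s - the final 512MB, one controller
```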

How NVIDIA spreads out memory accesses will have a great deal of impact on when we hit these scenarios. In the past we’ve tried to divine how NVIDIA is accomplishing this, but even with CUDA’s compute capabilities, memory appears to be too heavily abstracted for us to test any specific theories. And because NVIDIA continues to label the internal details of their memory bus a competitive advantage, they’re unwilling to share the details of its operation with us. Thus we’re largely dealing with a black box here, one where poking and prodding doesn’t produce much in the way of meaningful results.

As with the GTX 550 Ti, all we can really say at this time is that the performance we get in our benchmarks is the performance we get. Our best guess remains that NVIDIA is interleaving the lower 1.5GB of address space while pushing the last 512MB of address space into the larger memory bank, but we don’t have any hard data to back it up. For most users this shouldn’t be a problem (especially since GK104 is so wishy-washy at compute), but it remains that there’s always a downside to an asymmetrical memory design. With any luck, one day we’ll find that downside and be able to better understand the GTX 660 Ti’s performance in the process.
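
For what it’s worth, that guess can at least be written down. The sketch below is entirely hypothetical – NVIDIA treats the real mapping as proprietary – and the 256-byte interleave stride and the hard split at the 1.5GB boundary are our assumptions, not disclosed behavior:

```python
# Hypothetical address-to-controller mapping for the GTX 660 Ti's
# 1GB + 512MB + 512MB arrangement. Stride and split point are assumed.
STRIDE = 256                 # assumed interleave granularity in bytes
SPLIT = 1536 * 2**20         # assumed 1.5GB boundary between the two regions
CONTROLLERS = ["MC0 (1GB)", "MC1 (512MB)", "MC2 (512MB)"]

def controller_for(addr):
    if addr < SPLIT:
        # Lower 1.5GB: round-robin across all three controllers (144GB/sec).
        return CONTROLLERS[(addr // STRIDE) % 3]
    # Final 512MB: lives entirely on the controller with the extra chips (48GB/sec).
    return CONTROLLERS[0]

for addr in (0, 256, 512, SPLIT + 4096):
    print(hex(addr), "->", controller_for(addr))
```

If the mapping really does work this way, any workload that spills past the 1.5GB boundary would see its bandwidth drop by two-thirds, which is exactly the kind of cliff we’d want to catch in benchmarks.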

Comments

  • TheJian - Monday, August 20, 2012 - link

    LOL.. I can't read your language, and am unsure of any of the cards' speeds etc in that link.

    You're comparing something you can do on your own, NOT out of the box - whereas I already proved you can easily hit ridiculous speeds with the 660TI.

    So how do I know I got a special binned chip before buying like your forum (again we can't read your language - most of us anyway)? But again I'd say it's not out of the box at those speeds. There is a ref card in GREEN for the 660TI or did you miss that?
    I know, exactly what this article runs at...LOL. Only using 79 watts more to get it done and he NEEDED 1.25v to do it. There is a reason AMD has this as default on the BOOST (not all chips can easily do it...they're not purposefully running hot and overvolted ya know...They have to in order to get more to do it!). No amount of cooling will save you money on your electric bill. Your magical 1150mhz is examined in great detail in that article with caveats regarding how long your card's life may be...LOL. OC at your own risk. Firmware makes this impossible on the 600 series cards. Roll your own dice thanks. Feb2012 article, there aren't some magical binned versions of these chips YOU can magically guarantee I'll get. Not all chips are created equal and no manufacturer is guaranteeing your speeds or anywhere near them. Point me to your magically binned advertised chips? I can't see them on newegg. So you must have some magical website I didn't know I could buy from. Enlighten me please.

    Crysis 1 and Warhead? Already debunked their relevance. But here it is if you missed it:
    Games based on Cryengine v2? 7 total. CryEngine3? Check out the list, including crysis 2, and the coming crysis 3:
    Crysis and Warhead (from March 2008 for Warhead, earlier for Crysis) are NOT relevant. There are only 5 other games made on it. Point me to some benchmarks showing something I can read (#1) and where I can actually see the test setup (#2). Until then all your benchmarks are meaningless. Also don't bother showing me anything over 1920x1200 and claiming victory, as I already debunked that as being less than 2% market share according to the hardware survey AND more importantly no 24in or below is sold with a native res above 1920x1200 on newegg! 68 monitors, without even ONE being above 1920x1200 recommended.

    I already showed the Crysis 2 7950 Boost review vs. the ref 660ti being a wash even at 2560x1600 (though a useless res). If you have to force the 660TI into something we'd never run at to show a victory, your results are pointless. I'm sorry, does MSI sell your TwinFrozr at an 1150 core out of the box? I must have missed that version on newegg. Value at $320? Not out of the box, and I can do the same thing for $299 on a 660TI if I'm going to be doing overclocking myself, and they are guaranteed out of box 100mhz over on both core and boost as shown before. Also I can't damage it (built into the 600 series, the firmware won't let me do a dangerous overclock to shorten its life). The only two cards for $319 on newegg in 7950 are AFTER rebate #1 and only clocked at 800mhz #2. They're not going to spend on quality components to HELP your overclocking/life of the chip at that price. Quite a few of the overclocked 660's are SILENT in use.
    "For the card in a fully stressed status (in-game) 39 dBA, now that simply is silent. So if you do not tweak the card or something, during gaming you can expect a silent card."
    And that one kind of sucks, russian.
    All of the 660TI's out of the box are on the heels of the 7970 in anno2070. But I know, if you can get a magically binned chip, you might be able to hit a speed that makes the 7970 look like crap for bang/buck, and at a speed not warrantied out of the box. But I can get almost 7970 performance doing nothing, no worries, for less $$. So what's your point? :) Note 3 of the 660's beat the 670gtx out of the box...LOL. You got some version where the 7950 beats a 7970 out of the box for $299? I know I "CAN" get lucky, but no guarantee @1.175 like you say. Or do you think AMD is just stupid and clocks all the boost versions (that aren't out yet) at 1.25v for nothing? A hell of a lot of them WON'T hit boost guaranteed without 1.25v, or AMD wouldn't be doing it and purposely making their cards look like shite in reviews. How dumb do you think AMD is? It's a reference for a reason.

    Ryan's review of the 7970GHz Edition notes that NV cards' shipped clocks may not be the highest you'll get even at default out of the box; the shipped boost clock is only the guaranteed minimum (based on TDP, they will perform better):
    "Every 7970GE can hit 1050MHz and every 7970GE tops out at 1050MHz. This is as opposed to NVIDIA’s GPU Boost, where every card can hit at least the boost clock but there will be some variation in the top clock".
    So out of the box all Radeon cards perform the same (only the watts used vary), but NV cards can boost even higher than their rated out-of-the-box speeds...LOL.
    Same article:
    "With that said there’s no such thing as a free lunch, and in practice the 7970GE’s power consumption has still increased relative to the 7970, as we’ll see in our benchmarks."
    These chips aren't special or wattage wouldn't climb at all. Your magical 7950 isn't special either.
    Skyrim 1920x1200, same article - GTX 580/670/680 CPU limited and BEATING the 7970GHz Edition 98fps to 86fps, with 4xMSAA/16xAF! Note no improvement from the 7970 to the 7970GHz Edition. (Neither shows a difference at 2560x1600 either - a useless res - so overclocking to GHz Edition speeds didn't improve the scores over the regular 7970 at either res...LOL)
    I can hear you say, that's not the 660TI. Got me..:
    What's that? At 98% of the user res of 1920x1200 (and below) at 4xMSAA/16xAF all 660ti's beat the 7970? But the 660ti CRUMBLES at MSAA, you said...LOL. Whatever dude. I can keep going on...
    The 7950/7950B/7970/7970ghz all score the same at 1920x1200. You'll have to check both articles to get the 7970ghz edition as Ryan conveniently left it out of the benchmarks in the 660ti review...LOL. Gee why? Because it got nothing here too? Including warhead vs. crysis2 with HD and enhancement pack instead? A 2008 game vs. 2012 that has a crapload more games based on CryEngine 3.0? Only 7 on CryEngine 2.0 (and 2 of them are crysis 1 and warhead...LOL).
    Check all the 1920x1200 scores (sorry I already proved 2% or less run above this and most of those OVER 2560x1600, usually with more than one card), anand's games (as everywhere else) are maxed out at every res. You can't turn anything else on to help your cards. :)
    Shogun, 660's beating 7970
    "Overall this has become a fairly NVIDIA-friendly benchmark, with the GTX 660 Ti challenging even the 7970 at 1920."
    Challenging Ryan? Every 660ti but the reference beats the 7970 (which arguably NOBODY on newegg SELLS, most are clocked much faster, MSI N660 1019/1097boost far higher than 915 ref $299 since launch) ...But again he draws his conclusion based on 2560x1600, which by his own words just below the first benchmark (worthless Warhead from 2008 instead of crysis 2 maxed out) these cards are designed for 1920x1200/1080!
    Dirt3, tied with the 7950 at 1920x1200, but again Biased Ryan (?): "while the GTX 660 Ti falls behind at 2560 as it runs out of memory bandwidth." WHO FREAKING CARES what happens where 2% or less of us run, and at a res by your own words "For a $300 performance card the most important resolution is typically going to be 1920x1080/1200". TYPICALLY?...ROFL. Should say 98% of users would use this or BELOW (actually only 29% use these two). He goes further in Dirt too...LOL: "Looking at the minimum framerates that 660/7950 standoff finally breaks in NVIDIA’s favor. Though a lack of memory bandwidth continues to pose a problem at 2560"
    Yeah, I know, because you've beat it like a dead horse as much as you can, it runs out again where NONE OF US RUN. Damn, as I read the review there is nothing to say but Ryan is getting some cash from AMD :) Jeez, twice on the same freaking page about the 2560 crap.

    Sorry ryan, AMD lost this round at 1920x1200 where 98% of us run (or below) and nothing you can say about 2560 changes the world for 98% of us where it's either a WASH or a dominating victory for 660 (heck all 600 series cards, you can argue none are for above these resolutions in single cards, 98% no matter what they have, gtx690/680/670/660 etc all run 1920x1200 or below, no 24in monitor on newegg is ABOVE THIS). Who you advertising for?
    Portal 2 - LANDSLIDE (even at useless 2560 beats 7970 >25% nevermind the far slower 7950 here...LOL), 45% faster than 7950 at 1920x1200...ROFL. Guess you better have a 35-45% magical card just to catch the out of the box 660 here...ROFL.
    "If NVIDIA could do this on more games then they’d be in an excellent position.". Umm...They did ryan, just quit looking at the res only 2% of us use. He points out the 660 can handle SSAA here (more taxing than MSAA Russian!!) so they concentrated on it.
    Google ssaa vs. msaa and you'll find stuff like this: "SSAA theoretically AA's the whole screen and would give a much more consistent AA. MSAA simply is limited to edges." and "Of course, there's a reason why people don't use SSAA: it costs a fortune". Tougher....Yet smoking on the 660...Things like this are about the GPU/shaders etc...NOT memory bandwidth, as Ryan beats like a dead horse at 2560x1600.
    Battlefield 3 same anandtech ryan article? 1920x1200 4xmsaa
    LANDSLIDE, ALL 660'S WINNING vs 7970!
    Pay attention to what Ryan is doing here people. 17% faster than the 7950B! FXAA is worse, same page: "At 1920 with FXAA that means the GTX 660 Ti has a huge 30% performance lead over the 7950, and even the 7970 falls behind the GTX 660 Ti." That's the GREEN REF bar that NOBODY sells as already noted; AMP speeds can almost be had for $299. So really it's more like >34% faster than the 7950B, not vs. the 7950 reg. OH, and it IS playable then at MSAA, which Ryan says is disappointing for the REF version...LOL. I know, and nobody will buy it (on accident maybe?), so for the rest of us, MSAA is OK with Battlefield dropping to minimums of ~30fps. Just keep getting those digs in where you can... :)
    Sorry russian, I'd like to destroy more of your data, but the rest aren't benchmarked here, and I can't be bothered to do more than I already have just now to prove you wrong on the "crushed" comments... :) I think I proved my point. Sniper Elite2? What's that? Sell much?
    Nope, metacritic score 65...I wouldn't even pirate a yellow game (under a 74 score is pretty unanimously not so good), let alone pay for it or care about its benchmark. Too many shooters at 80+ scores.
    Dirt Showdown is based on the same engine as Dirt 3 here. If it performs worse, I'm thinking it's a driver issue...But never mind: score 72, user score 4.8 out of 10...ouch. GameSpy quote:
    GameSpy wrote: “DiRT: Showdown delivers bargain-basement entertainment value for the high, high price of $50. With its neutered physics, limited driving venues, clunky multiplayer, and diminished off-road racing options, discerning arcade racing fans should just write this one off as an unanticipated pothole in Codemaster's trailblazing DiRT series.”
    Keep quoting useless games if you'd like though. :) Sorry, their review is front page at Metacritic :)

    I'd like to see some bulletstorm, alan wake, serious sam3 benches so please LINKS? Hardware sites only please. I'd prefer review sites, rather than a forum.. People like ryan will slowly become useless with too many of these reviews. Forum users have no such worries and can more easily post anything they want. It's a good addition to something I'm looking into if I have questions after regular reviews, but I wouldn't want to base my buying purchase on forum posters benchmarks. Note I'm not posting my OWN benchmarks here after I do who knows what to my 660 TI (that I don't own yet...LOL), I'm pointing to results of review sites using stuff we BUY.

    Personally I'm buying this card (and my 22nm quad soon) to run it below default to soundly beat my Radeon 5850, but do it without driving me out of my AZ computer room. The quad should give me a great boost also at 3ghz (3770K downclocked) vs my current heater in the E3110 3ghz dual core. Sounds crazy, I know...But this week it hit 114 outside! I have a great cpu that can easily clock to 3.6ghz also - prime95 stable below the stock 1.25v-1.35v of regular E8400's! The default for my chip is 1.08 and it boots well below this at 3ghz stable!...So it gives you an idea of the heat in AZ and its effect on even what I'd call one of the best E8400 3ghz Wolfdales in the world. I can't beat the heat with my bada$$ cool E3110 (a purposely bought Xeon s775 for better thermals). Electric is already $250 here, so reducing temps is kind of up to my PC itself :) You can bet I went through a bunch of places to pick my specific week/lot/country of origin on that baby :) I'm no stranger to OCing, but I don't think people will all rush home with their shiny new $330 7950 (not Boost) and OC the heck out of it to beat a TI that runs cooler by default and OC's out of the box at .987 volts. ONE more dig at Ryan...LOL :

    " As you’ll see in our benchmarks the existing 7950 maintains an uncomfortably slight lead over the GTX 660 Ti, which has spurred on AMD to bump up the 7950’s clockspeeds at the cost of power consumption in order to avoid having it end up as a sub-$300 product. The new 7950B is still scheduled to show up at the end of this week, with AMD’s already-battered product launch credibility hanging in the balance."

    Umm...Must be looking at those 2560x1600 2% user benchmarks again eh? At the res we all play at, and all 24in monitors on Newegg (68!) are at (1920x1200/1080), WHAT LEAD? In that 2008 Warhead game? Debunked already. The rest: at best the race is a wash (and not often - pretty much Metro2033, ~4% faster on the 7950 Boost) and the rest are landslides, at times landslides vs. the 7970GHz Edition. You got the AMD part right though...Only two makers actually announced it...None seem to even care as they already sell tons of 900mhz cards, which are faster than the 850 Boost - you should have added that here, as you wouldn't buy a Boost when you can get a 900 for likely less...LOL. Just like NV cards basically come OC'd no matter what you buy at newegg...Review what we buy, who cares what AMD/NV want?

    You still confused about the conclusion you SHOULD have stated Ryan? 660TI rocks, and is a no brainer for all people using 24in and below (98%). For the other 2%...LOL. Whatever. Go ahead and remain confused about that...WE DON'T CARE. You can't be this dumb (I hope not), so I'll give you the benefit of the doubt and run with it can only be bias.

    Jeez, I just had to check real quick at 27in...ROFL.
    Check the recommended resolutions on the left side people. 41 at 1920x1080! Only 11 others, and they are not even 2560x1600! They are 2560x1440! My god man, you aren't even right at 27in! OK, now I think you're just a freaking moron. Still confused Ryan? Anand, you there? Still care about your site? Let me know if you need a new reviewer :) This is borderline ridiculous to not have a conclusion even at 27 inches! Only 20% are above 1920x1200, and even those are LESS than the res Ryan draws all conclusions from; the other 80% of the 27inchers on newegg recommend LOWER than the tested 1920x1200...ROFLMAO. NOT a single 27in has a recommended resolution of 2560x1600 (only 1440...less stressful). I digress...For now...ROFL.
  • RussianSensation - Monday, August 20, 2012 - link

    1) MSI TwinFrozr has been binned to include 80% ASIC 7950 chips. It will hit 1100-1200mhz on 1.175V. Every card.

    2) 3D Center compiled 12 professional reviews and GTX660Ti lost to an 800mhz 7950 overall:

    3) BitTech and Tom's Hardware already showed that a 1300mhz GTX660Ti cannot even match a stock GTX670 in graphics intensive situations:


    and here:

    4) Considering HD7950 1167mhz keeps up easily with a 1300mhz GTX670, that leaves GTX660Ti overclocked to 1300mhz in 1 spot only - BEHIND:

    The MSI TF3 7950 is a PROVEN overclocker and it competes with a $400+ GTX670 for almost $100 less.

    Further, you failed to mention that more and more games are starting to use DirectCompute for global lighting models and shadows:

    - Dirt Showdown
    - Sniper Elite 2
    - Sleeping Dogs

    GTX660Ti/670/680 are a no show in those games, choking.

    OTOH, HD7950 can be easily overclocked to reach an overclocked 660Ti in BF3:

    7950 OC in BF3 = 69.7 fps

    GTX660Ti OC in BF3 = 69 fps

    Overall, it's SIMPLE mathematics. An 1167 mhz 7950 keeps up with a 1300mhz GTX670. A 1300mhz GTX660Ti cannot overcome 24 ROP / 192-bit bus memory bottlenecks against a stock GTX670.

    Thus, by definition HD7950 OC > GTX660Ti OC.
  • Galidou - Tuesday, August 21, 2012 - link

    Good calculation there, logical, but still we shouldn't forget about nvidia's advantage. Not everyone overclocks like enthusiasts do. For anyone not fiddling with clocks and voltages, Nvidia is the clear choice. An overclocked 7950 might be better, but even with aftermarket coolers it will need a well ventilated case. Crossfire will mean lowering your overclock unless you watercool them...

    And most people don't own a superclocked CPU to get rid of the bottleneck it might cause. So Nvidia, scaling better with lower frequency CPUs, performs better at 1080p where lots of games are simply cpu limited unless you've got a beast at 5ghz.
  • CeriseCogburn - Saturday, August 25, 2012 - link

    It's not a good calculation by the amd fanboy - I went to his forum link, then to the review he linked, and saw the 660Ti SMASHING his 7950 black edition $350+ card to bits.
    He lists one game then a bunch of power and heat charts and goes on a PR selling spree... boy it's amazing... talk about obsessed fanboyism...
  • Galidou - Tuesday, August 28, 2012 - link

    His example shows the 7950 once overclocked, getting to the level of a gtx 660 ti, and this game is battlefield 3, which has always been better on Nvidia.
  • CeriseCogburn - Wednesday, August 29, 2012 - link

    Okay, so an expensive 7950, or an aftermarket HSF, or water cooling, an expensive positive-airflow case with lots of fans, then a healthy PSU for the extra voltage, then endless twiddling and driver hacks for stability.
    So +$50 on the cooler or card, $50 or $100 extra on the case and fans, then add in $100 for the CPU to be able to take advantage...
    After all that dickering around and dollars, just amd fan boy out and buy a rear exhaust 7970 and be cheaper and somewhat stable at stock.
    Right ? I mean WTH.
    Then we have the less smooth problem on the 78xx 79xx series vs nVidia - the gaming is not as smooth - plus you don't have adaptive v-sync - another SMOOTH OPERATOR addition.
    These are just a few reasons why the amd prices have plummeted.
    I suppose now if you go amd you shouldn't worry much about losing a lot of value quickly, but for 8 months we took a giant hit in the wallet for buying AMD, now our cards are worth CRAP compared to what we paid for them a short time ago - with the 6 months+ of crap driver support.
    It's great - yeah just great - amd did such a great job.
  • anubis44 - Thursday, August 23, 2012 - link

    Jesus Christ, TheJian, you wrote a goddam Russian novel when you could have just come out and merely said that you want to have Jen Hsun's baby, you silly nvidiot.

    Nvidia has simply pushed the default clocks on their cards much harder than AMD. So what? So AMD leaves more o/c'ing performance on the table. Big deal. That's hardly a decisive, knock-out blow for nvidia. As a matter of fact, I'm selling my Gigabyte Windforce GTX670 2GB tomorrow (gorgeous cooler setup by the way, and utterly silent) because, for $400, it only beats a 7870 by about 3-5FPS on my 3 monitor setup (5040x1050 resolution) in most games, and the thought that I could buy a 7870 for $240, or a Gigabyte 7950 card with 3GB of memory for $300, made me ill. Long and short of it: if you're playing at 1920x1080, the GTX660 Ti looks pretty good (except for those AMD-optimized games), but if you're running 2560x1080 or higher, AMD's 3GB-equipped 7950 is going to have the extra memory and muscle to keep your minimum frame rates playable, while the 2GB GTX660 Ti is going to choke.

    Besides, I'm sick of nvidia's shitty 3 monitor driver support. Every time I update the video driver, I have to perform brain surgery to get 2 of the monitors to come back up again. On the other hand, the Asus DirectCU II (another outstanding cooler) 7850 I had temporarily about 2 months ago for a few days drove my 3 monitor setup instantly, and setup took about 2 minutes. The AMD driver even 'guessed' the bezel compensation accurately the first time, and played Diablo III at a solid 60FPS at 5040x1050 on one card with all the quality settings at maximum. That card now costs $189.99 after factory rebate here in Canada:

    I think nVidia is going to have a HUGE fight on its hands.
  • TheJian - Friday, August 24, 2012 - link

    Most people, if you'd read all that, don't run over 1920x1200. The amount who do is <2%, and you have to spend a lot to do it reliably over 30fps, as hardocp showed etc. You made my point. It's great where 98% of us use it. Which is pretty much what the walls of text have been saying :) I won't apologize for being complete ;) But feel free to call me wordy, I'll accept it. Out of the 5 games tested at hardocp they found 2 (Batman/Witcher 2) that hit 10-15fps (for a while) and 16fps on the 7950. You'd have to double it to have a good time in those games, which was my point. These cards are for lower res, specifically 1920x1200 or less. At which both do a great job. No disputing that.

    People usually resort to calling you nvidiot, and having the ceo's baby (really?...I'm a dude) when they lack an effective opposing opinion. Thanks for both. Look in the mirror. ;)
  • CeriseCogburn - Thursday, August 23, 2012 - link

    No need to pretend.

    It's way better than morphological aa, the CRAP amd spewed out while you cheered.
  • dishayu - Thursday, August 16, 2012 - link

    Why does the URL say "the-geforce-gtx-670-review"? Anyone care to explain?
