I have an E8400 OC'd to 3.6GHz and the same cooler you have :-)
Thanks for posting these results since it is much more realistic to have people doing an upgrade from a Core 2 to Ivy/Haswell/Piledriver than it is to have somebody who just ran out and got a high-end Ivy Bridge 2 months ago to upgrade to Haswell.
Despite all the talk that modern CPUs aren't getting any faster, your benchmarks make it clear that even in single-threaded performance there is a big step up from Core 2 to more modern systems, and Haswell will continue the trend.
Yeah I agree. This review is very useful. A few more benchmarks might be interesting - e.g. office, sysmark benchmarks with an SSD (just to see which improves things more).
The hardware vendors might like it too - helping to convince more people that it's time to replace their trusty old Core 2 Duo system. ;)
I upgraded from a C2D E6300 to an i5-3570K for the sole purpose of gaming, and the perceptible difference is substantial. Not to mention my SSD became much faster with the addition of SATA 6Gbps, which isn't available on the previous platform. If you're only playing games older than 2010, an upgrade isn't necessary. But for anything beyond that, a CPU upgrade is very, very useful.
The benchmarks run are the ones from my motherboard testing suite. Go look at the motherboard reviews to see the parity between the two. As part of those reviews Metro 2033 and Dirt 3 are my primary gaming benchmarks.
As a trained scientist and academic with peer-reviewed and published papers, I should take offence to your supposition that a conclusion was made before anything was written. No definite conclusion was made before testing began at all - only that the newer platforms would perform better in the testing suite. If you know more about my presupposition before testing than I, then praise be your powers of clairvoyance - there are institutions that will pay good money if you can prove it under laboratory conditions. If anything I thought the gap between C2D and the modern Ivy Bridge would be larger than the results I got, and the 3DPM ST result stunned me to begin with.
In case you missed it, the brief overview was to see how much of an upgrade my brother got in terms of my normal testing routine.
Well said, Ian! Have you ever noticed how some people like to make ridiculous statements in this and other forums, just to try and get an argument started?
I thoroughly enjoyed reading the results of your comparison. I am using an HP Pavilion Entertainment laptop with a Core 2 Duo processor. It is still great for word processing, spreadsheet, database and email tasks as well as surfing the web. I do not play games. This laptop serves my purposes perfectly and I do not plan to upgrade, except maybe the operating system; this laptop shipped with Vista.
I don't have the $ to get a new rig, so recently I decided to upgrade my 6-7 year old desktop. It's spent many years in my basement because I was using a laptop instead. I didn't want to spend a lot, as the whole thing is meant to serve as a "hot fix" until I can throw some cash at a brand new desktop machine.
All I bought was a decent SSD, 2GB more RAM and one of the cheapest GPUs I could find to run two 1080p monitors and be able to play Blu-ray quality videos enjoyably (I'm not a gamer or anything, so it's fine).
So what I now have is:
Windows 8.1
Intel "Dragontail" DP35DP motherboard
Intel Core 2 Duo E6750 @ 2.66GHz
4x1GB Kingston DDR2-667 RAM
Samsung 840 EVO 256GB SSD (the excellent AnandTech review concludes that it's their "mainstream SSD of 2013")
Gainward GT 610 (1GB DDR3)
All my old, really slow hard drives in a storage pool of about 1TB.
Yes, the SSD is SATA 3 and my mobo only supports SATA 2, so the speed of my SSD is limited, and yes, 4x1GB RAM performs worse than 2x2GB would. But still, although I couldn't agree more with the conclusion of the article (more on that later), the performance gains of the upgrade were huge.
I only use my rig for studying, communicating, browsing, working (a lot) in Excel, mindmapping, watching movies, and playing online poker on 16-24 tables across multiple poker sites while using tracking software. And, more often than not, several of those things simultaneously. Before the upgrade, multi-tabling was almost impossible, and when I was studying or researching stuff (meaning Chrome with 15+ tabs in multiple windows; Outlook, OneNote, Word and Excel running; Viber, Skype, qBittorrent and the like in the background) it was a real headache. A restart took almost 3 minutes. I didn't know what to expect from the upgrade. I knew an SSD would speed things up and that adding more RAM wouldn't hurt either, but in real life that translated into a more than noticeable performance improvement.
After clicking 'restart', I have full control over my desktop within 80 seconds (don't forget that the BIOS part of the boot sequence on these ancient mobos takes a lot longer than on recent ones), and that boot time now includes loading Outlook, OneNote and Chrome with all the windows and tabs from my last session, because I've added them to my startup programs. Before the upgrade, this would've been a very, very stupid thing to do. Now, right after entering my password, these apps just pop up like it's nothing.
Even when many applications are running, the system loads new ones INSTANTLY, and while I was forced to close tabs and disable some extensions in Chrome to reduce memory load before the upgrade, I haven't had to since. I literally haven't experienced ANY LAG. So, for just a bit more-than-average load it can be a good idea to upgrade your old machine. With the proper parts it gets the job done really well. Way better than I thought, to be honest.
BUT!!!!!
First of all, with an old system like that, I'm missing features and connectivity that are now standard in a modern-day PC. And many of these make a noticeable difference, even if you don't want to render 6K videos or do hardcore gaming or Photoshopping. I can't make use of the full potential of my SSD because I only have SATA 2 (half the bandwidth of SATA 3), and I can't transfer large amounts of data to external devices quickly due to the lack of USB 3 ports (I copied 800GB to an external HDD yesterday at 30 MB/s, and it was by no means a seamless experience…). DDR2 is also heavily outdated compared to DDR3. My RAM's maximum transfer rate is somewhere around 2GB/s as I recall, and recent chips are easily 3-4 times faster than that; not to mention response times, etc. I'm no expert, but I think those differences are so huge they could actually be noticeable even under average use.
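If you want a rough number for your own machine without any benchmarking tools, a few lines of Python can ballpark memory copy bandwidth. This is only a crude sketch, nowhere near a proper benchmark like STREAM or AIDA64 - cache effects and interpreter overhead make it a loose lower bound at best:

```python
import time

def copy_bandwidth_gb_s(size_mb=64, repeats=5):
    """Time a plain buffer copy and report rough GB/s moved."""
    src = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst = bytes(src)  # one full read of src + one full write of dst
        best = min(best, time.perf_counter() - t0)
        del dst
    # Each copy touches the data twice (read + write)
    return 2 * size_mb * 1024 * 1024 / best / 1e9

print(f"~{copy_bandwidth_gb_s():.1f} GB/s copy bandwidth")
```

Even this crude test should show a several-fold gap between a DDR2 and a DDR3/DDR4 machine.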
Also, prices of old but powerful components are unreasonably high. After upgrading and realizing all the benefits, I was toying with the thought of maxing out my mobo with one of the most powerful CPUs for the same socket (Intel's quad-core Q6600, that is) and 8GB of 800MHz DDR2 RAM. I thought old stuff would be cheap, so I'd spend almost nothing and use a decent desktop while saving money for a new one. Not really. The Q6600 sells for $280, and 4x2GB of quality 800MHz DDR2 RAM is also over $200 (these are Amazon prices for good measure; in my country they are even more expensive). So I'd have to shell out about $500 for outdated technology that is way below even mid-class by today's standards. Not to mention reliability, power consumption, compatibility, support, etc…
Conclusion:
Have an old desktop? Thinking of upgrading and using it for a few more years? DON'T. Get a new one. Don't have the money and want/need good performance ASAP? Get a decent SSD (it'll work with your future rig, so why not?) and some cheap RAM. Well… 30 bucks for 2GB is certainly not cheap, but for a few months of better performance, I'm sold. There you go. You have a neat and completely functional 'temporary PC' for $30 (plus the SSD, but you're going to use it anyway).
I too have 2 systems in heavy use with E8400s, one OC'd to 3.6GHz, the other at stock 3.0GHz, both with SSDs and HD 6850s. They handle occasional gaming just fine, let alone common tasks.
I don't think anyone has said CPUs aren't getting faster, just that we've reached a point where even entry-level CPUs are "good enough". You have to spend a great deal of money just for small bumps in performance that most people won't even notice with their work loads.
I don't anticipate upgrading my i5-2500k @ 4.4GHz for quite a while, unless something unexpectedly craps out. I also generally look at AMD solutions for notebooks since the CPU performance is more than acceptable while the integrated GPU is far superior to Intel's, especially as GPU-acceleration is becoming more common.
There's a lot of moaning about the "5-15%" increase in performance in existing applications that Haswell is supposed to bring about. As this article shows quite clearly, it isn't about existing applications; it's about how future applications run. Look how terribly the C2D performs in comparison to even AMD machines in modern applications. 6 years down the road, the C2D chokes. Even dual core i3 fares much, much better.
I'm curious as to FX's performance in FP tests. Sure, there's one FPU per two cores, but each FPU is theoretically twice as capable at 128-bit calculations as any other FPU. They must've really castrated the units, which might also explain poor AVX performance as compared to SB and IVB. Bet they're regretting that now.
It's not that simple. Remember that most of the code is Intel-optimised. If you want a comparable result on FP you would have to use a 128-bit optimised code path. Their strategy is totally different: if you want heavy FPU calculations, then use the on-die GPU cores.
Future applications? For your parent's/brother/grandparents pc? They will still be Office 2002, Email client of some description (but usually gmail/hotmail/yahoo via browser) a browser aaaaand maybe some sort of a movie player. (VLC?) Last one is a big maybe too.
6 years down the line? They are likely to be using the same things still.
And no offence but surely the upgrade time should then be when the c2d chokes as opposed to when it still works ok? :)
For geeks like us yes a C2D is long since past its prime. But we've also long since upgraded to w/e suits us better no?
The REAL problem is that in using modern PCs what one should care about in terms of UI is not throughput but "snappiness". The issue is no longer "how long will it take to transcode my mp3s", it's "does the machine respond instantly to everything I do? How often do I have to wait?" Obviously SSDs have done a lot to move us into this world.
The problem for a site like AnandTech, then, is that classical benchmarks do a truly lousy job of quantifying "snappiness". We need a new set of relevant benchmarks. You see the same sort of thing (at a more virulent level) in Android vs iOS fights, where both sides are claiming that their OS runs "smoother", more "like butter", but once again in the absence of actual numbers, both sides are talking past each other. (And the situation is not helped by the fact that the last iPhone Android guy used was an iPhone3GS, while the last Android phone iOS guy used was a Galaxy Nexus. Vague recollections of a phone three years old, and ZERO actual numbers, do not make for enlightening discussion.)
A further complication is that, for internet interaction, the properties of your TCP stack (perhaps tweaked), your router, and your ISP (does it offer "turbo boost" for the first 1MB of downloads?) all also affect perceived snappiness, so it's no longer about the pure CPU+RAM.
I don't have an answer, but I do see these sorts of benchmarks as becoming less and less relevant every year, and the web site that comes up with an alternative will RULE this space.
Snappiness has been solved since 2009, when SSDs and Windows 7 (RC) came around. Since then, I haven't owned a laggy computer. Even the 2nd gen MacBook Air (Core 2 Duo, 2 gigs of memory) of my wife is butter all the time.
On the smartphone side, it was a talking point when powerhouse Android phones running Gingerbread were choking compared to old iPhones and Windows Phone 7 devices. Ever since Ice Cream Sandwich: snappiness achieved.
Microstutter can still be an issue when gaming. Depending on the focus of the site, and whether all the reviewers are gamers, the best ways to evaluate this are either to collect the time between each frame to measure how often a frame is slow, like Tech Report does, or, like [H]OCP, to play parts of the games at various settings to determine which combination gives the best graphics while remaining fast enough to be playable.
The latter is IMO the gold standard, but is only really doable if all the reviewers are gamers. Tech Report's data gathering is more like the common FPS numbers, in that anyone equipped with a loaded Steam account can collect the data.
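For anyone wanting to try this at home, the Tech Report-style analysis is easy to sketch once you have per-frame render times (FRAPS and similar tools can log them): look at the tail, not the average. The numbers below are made up purely for illustration:

```python
# Rough sketch of a Tech Report-style frame-time analysis.
# Input: per-frame render times in milliseconds.

def frame_time_stats(frame_ms, budget_ms=16.7):
    """Average FPS hides stutter; the 99th percentile and
    'time spent beyond budget' expose it."""
    n = len(frame_ms)
    avg_fps = 1000.0 * n / sum(frame_ms)
    ordered = sorted(frame_ms)
    p99 = ordered[min(n - 1, int(0.99 * n))]
    # Milliseconds spent past the budget on slow frames --
    # this is what you actually feel as microstutter.
    beyond = sum(t - budget_ms for t in frame_ms if t > budget_ms)
    return avg_fps, p99, beyond

# A made-up run that averages ~54 FPS but hitches on 5% of frames:
times = [16.7] * 95 + [50.0] * 5
avg, p99, beyond = frame_time_stats(times)
print(f"avg {avg:.1f} FPS, 99th pct {p99:.1f} ms, {beyond:.0f} ms beyond budget")
```

The same run can look fine by average FPS yet show a terrible 99th-percentile frame time, which is exactly the distinction the frame-time method captures.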
One could measure maximum and average latencies between "sending task out" and "finished". It's a tough job to select meaningful benchmark scenarios suitable to such measurements and to interpret that data carefully. Anyway, pioneering work could be done here.
Yup, I'm going to keep my 3.6GHz E8400 a little longer now. I thought it would be a significant bottleneck if I put a decent GPU alongside it, but now I know I can get 95% of the potential gaming performance with this highly clocked CPU.
Well I daresay this (fairly poor by Anand's standards) "review" shows that actually the ANCIENT E6400 is still good enough for most people.
And that there is barely any reason to upgrade. But yeah all them "modern advances" which are once again only very arguably useful... (thunderbolt? really? UEFI? for your family's pc? USB 3.0 is also pretty pointless atm)
In the end, a pointless review. Ancient CPUs are a bit slower than new ones. Well, yeah? We all knew that.
This sort of a review is better suited for... idk? Engadget?
It's one thing to know, another to actually publish results in a testing environment to show the difference. Oh right, but you already knew. Then why bother reading, surely it's a waste of your time?
For the record, my father and grandfather already take advantage of USB 3.0, with my father moving his band music around and the grandfather with his veteran meets. Stick in some WiDi for less cables, or a simple UEFI so if something goes wrong I can tell them what to do over the phone with a lot less hassle.
This is despite the fact that I'm upgrading my brother's computer here and testing his old one. He plays a lot of games, he watches a lot of paintball streams and often does both across two monitors at the same time. As stated, he feels a benefit, and the numbers in the regular testing can at least quantify that to some degree based on the upgrade methodology listed.
Your bench shows it clearly: most Core 2 owners, especially owners of E8xxx or Q9xxx chips, can still hang onto their rig, upgrade the GPU and get an SSD for an overall speed-up. The DDR2 may limit performance, but it's not like kits of low-latency DDR2-1066 aren't available, and they may provide better performance than generic DDR3-1333. And there are lots of second-hand socket 775 CPUs and motherboards (some DDR3), which may provide an incentive to fix instead of replace.
Hm, halving the time it takes to unpack a .rar, or doubling/tripling/quadrupling the speed of my encodes, is not a trivial thing. If all you do is game and use Word, maybe hanging on to the old stuff is fine. But I think this shows the benefits of upgrading quite clearly. I'd love to see something similar done with the CPU I have (a 3.8GHz i7 860). I'm holding off upgrading until Haswell if I have the money then; I don't want to upgrade to a dead platform again. :D
Actually I think you've got it backwards. Speeding up encodes by 2-4x isn't a big deal at all to me, and I'd guess most people. They spend MUCH more time gaming than bootlegging ripped DVDs or doing the occasional re-encode of a video file for YouTube. A few more minutes to finish an encode isn't a big deal to most people, yet if they can't get playable framerates in Diablo 3 or SimCity then they need to upgrade. It's about whether or not their computer can run their apps rather than how fast it is. If a game is a slideshow then it is unplayable; if you encode at 3 fps then you still get it done, it just takes longer. It all depends on the applications you run. If you run only office tasks and the internet, then even my parents' single-core Sempron 140 feels snappy working in Word, taking red eye out of a photo, or even with web video, because the 780G graphics accelerates it.
Agreed... unless you need the processor speed for work, who cares if it takes 3-4x the time to transcode or unpack a file?
Heck, I run transcoding in a VM on a separate SSD, restricted to ~25-50% CPU utilization, so I can keep my system responsive during long encodes.
This may be an extreme case - I have a slow machine (Core 2 Duo / Phenom II X4, see my post several comments down) and don't need that file encoded (or unpacked) this very second.
But if you need the video now... why not just watch it directly?
Also, is unpacking RARs really CPU-limited? It's always been storage-limited for me. 7-Zip seems to write the RAR as it unpacks into a temporary file, then copies it and deletes the temp file - twice the optimal disk writes. Any more efficient and equally functional ZIP/RAR software?
I used to think this as well, but you may be surprised if you benchmark it. The suffix sort in the BWT used by bzip2 and the Markov-chain predictors used in 7-Zip's LZMA are actually quite slow, and even a high-end CPU may be slower than the disk. This is especially true (for compression) if you get a good compression ratio, since you end up writing that much less data to disk.
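That's easy to verify yourself: time the compressor entirely in memory, with no disk in the loop, and compare the MB/s it sustains against your drive's write speed. A quick illustrative sketch (real archives compress better than the random input used here, so treat the numbers as a lower bound on compressibility, not a proper benchmark):

```python
import bz2
import lzma
import os
import time

def compress_rate_mb_s(compress, data, repeats=2):
    """MB/s the compressor sustains with no disk involved."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        compress(data)  # result discarded: we only care about CPU time
        best = min(best, time.perf_counter() - t0)
    return len(data) / best / 1e6

data = os.urandom(1024 * 1024)  # 1 MB of incompressible input
for name, fn in [("bz2", bz2.compress), ("lzma", lzma.compress)]:
    print(f"{name}: {compress_rate_mb_s(fn, data):.1f} MB/s")
```

If the reported rate is well below what your drive can write sequentially, the CPU - not storage - is the bottleneck for that compressor.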
And this is true. But in the scope of the review above, "for most people" (I know that's too general), how big is the RAR they open going to be?
1GB tops, if it's some stolen game. In reality usually even less. So say it's some RAR with photos or some random songs? 200MB? Takes what, 10 seconds to unpack? With a C2D it would take 20. Oh, The Horror.
Thanks for the information, that would make sense!
However, I have used other compression software that doesn't have this issue with drag and drop (most recently, PeaZip - don't use it now as the interface doesn't work well for me) so I'm not sure I trust their explanation of it :-)
You seem to misunderstand my intent: the OP talked like his use case encompasses nearly all users. I was specifically talking about my own use cases, where there were huge improvements. And I also mentioned that gaming is an outlier case here. So all you have done seems to be to reiterate my own post. I spend about as much time ripping my BDs and DVDs as I do playing these days, and having the ability to get the encode done way faster, as well as having a more responsive system in the meantime, is a huge boon. Btw. not everyone here is a torrent kiddie; I buy my stuff, tyvm.
Given the context, you were very easy to misinterpret. And who's a torrent kiddie? Just because we don't need to encode rips as fast as you... sheesh :-)
Well your Haswell answer is one I'm looking forward to reading, Ian!
To be honest I kind of fear for AMD. Trinity didn't really enhance the A-series computationally, so if Intel releases a GT3/GT4 HD graphics i3 part they could seriously encroach on AMD's A-series niche market. Not only that, but as FM2 isn't truly available in Mini-ITX form factors, a graphics-heavy i3 would be even more versatile and enticing from an SFF/HTPC perspective, not to mention laptops.
I know Trinity is only a few months old, but it feels like AMD needs to release something very soon after Haswell in order to stay relevant.
You can get FM2 in mITX. Not many choices, but they do exist.
When talking low tech in 2013, my brother still uses the desktop I built him back in 2007 as his sole household PC. It's a single-core S754 2.0GHz A64 running the classic nVidia 6100 IGP. I asked him if he wanted me to upgrade it, and he said not yet. :p
He must not click on the HD tab for YouTube videos. I have a similar setup at my office for when I forget my laptop. It's an S754 at 2.4GHz with a 9600 Pro, and it feels OK on the web but falls over with video. Putting a real video card in it would fix mine, but it's AGP. The nVidia 6100 board is PCIe, so you could actually help him a lot by getting a used 3450 video card for $10 online. I bet that system would last several more years if it had video acceleration.
Eh? Trinity *did* enhance the A-series computationally as compared to Llano. It wasn't enough to catch Intel (not that anyone seriously expected it would be), but progress is certainly being made, and the Piledriver cores are a good step up from Bulldozer cores.
I don't think AMD needs to equal Intel's CPU prowess in order to stay relevant. They just need to leverage their GPU advantage, which is what they're doing. When using a dGPU, there's no reason to get an AMD CPU. When using an iGPU, there's no reason to get an Intel CPU, IMO, unless all you need is enough GPU to animate UI elements. Although Intel's GPUs in the mobile space are still struggling even with that, Haswell notwithstanding.
If Haswell is all it's hyped up to be (which I don't think it will be), then AMD may be in trouble if they aren't able to enhance their APU offerings to surpass it in graphics prowess fairly quickly.
Thanks for the article, but I object to the screen resolution of 2560x1440 in the two gaming benchmarks.
Is your brother sporting a $1000+ screen?
Do you think that people with a Core 2 generally are?
Your PCIe was only 1.1, and there are an awful lot of PCIe 2.0 Core 2 boards out there - and CPUs higher than the E6400.
So the gaming tests were important but useless due to the resolution.
OC'ing the CPU was useful, but brought it only to a bare minimum, as a lower-clocked Core 2 would make an easy $40 upgrade to, say, an E5700 or something like that, or a $49 E8400 with 6MB cache and easy 3.6GHz potential.
I find the whole idea great, but dropping in a decent cheap CPU and gaming at 1650x or 1920x would have done the article proud for people here, IMO.
No one with a Core 2 gives a darn about the computational benchmarks.
You can't find C2D chips unless you hop onto eBay nowadays, which makes this sort of 'quick' upgrade more difficult than it was just a few years ago. I agree that this, and upping RAM to as close to 4GB as possible (a lot of people with earlier C2D systems have 1-2GB), is a much more realistic, painless procedure.
The cheap Korean panels are less than $400. At this resolution and these settings, CPU-driven calculations can be a bottleneck. It also helps that all the other results I have for other CPUs were done at that resolution and those settings; otherwise this quick test for a mini-article would have run into a couple of weeks of testing rather than a day.
As for PCIe, it was just the board taken out of an essentially random system, but happened to be the one my brother was using. He has a dual screen setup at home, often running a video/audio stream on one while playing RuneScape almost full-screen on the other. We can always argue 'why didn't you test XYZ', but the truth of the matter is this is what I had to test.
If I didn't upgrade his machine, or didn't have the capacity to, then a motherboard or CPU swap would indeed be on the cards if he couldn't afford a full system. As you note, there are some cheap and cheerful prices to be had for s775, although a jump to Sandy Bridge could be as little as $125 new.
The point was to compare the system using the SAME benchmarks used on the modern systems, that way he could have numbers to compare to. It's not like he actually benched all of those machines for this article, he just benched the core2 system, and used pre-existing benchmarks to compare against.
What I did with a stepfather's system a couple of years back was get one of those low-cost 45nm C2D Pentiums and add a stick of RAM I wasn't using anymore, replacing an E4300 or equivalent, as well as doubling the RAM. As long as you're not doing video processing or 3D rendering, those late Wolfdales seem to be more than powerful enough for the job. I've done the same with my system by using a mid-tier Q9x00 CPU. Most of my gaming is console-based these days anyway, and aside from that my only gripe is that I'm stuck with 4GB of RAM till I upgrade to something better.
I'm looking forward to finally upgrading with Haswell, but much of the drive for that's been due to the power-saving features.
By any chance did you also log the minimum and maximum fps for the games?
I found that when I upgraded from my Q6600 (OC 3.2GHz) to a stock 3570K, while keeping the same GPU and SSD, I all but eliminated low fps spikes and max fps nearly doubled. Some games are immensely quicker while others simply no longer stutter.
E6400 @ 2.13 GHz, HD7970, min FPS: 17.3
E6400 @ 2.80 GHz, HD7970, min FPS: 32.3
E6400 @ 2.13 GHz, GTX580, min FPS: 18.6
E6400 @ 2.80 GHz, GTX580, min FPS: 22.5
Note that I did these on 12.3 / 296.10 drivers to remain consistent with my chronologically older results, so the newer drivers should probably push it up a little, especially those 12.11b11.
Interesting. How does that compare to the newer systems? Do you have minimums for those too?
A year and a half ago, I ran a similar upgrade-as-much-as-you-can Socket 939 Opteron machine with a then-new 6870 in it, and when the card was carried over to a new i7 2600 build, it was the minimum frame rates that really improved the most. When the old machine was CPU limited, it was really CPU limited.
Just upgraded my dad from a Pentium D, 4GB DDR2, 9500 GT system to a Core i7 920, 12GB DDR3, GTX 460 768MB hand-me-down system as his Christmas 2012 present. Let's see if he notices the difference?!
I think the biggest relative upgrade I performed was for my in-laws, though. Took them from a Pentium 4 3.0GHz to a Core 2 Quad Q9400. Now that's an upgrade! They're still using that system and it still runs like a champ, by the way.
I treated myself to an i7 3770k, 16GB DDR3, SLI 680 system this past fall.
I am so utterly jealous of how much money you have xD. I will be rocking a core i3-3220+GTX 650 and a decent ivy bridge laptop probably until broadwell. I might pick up a surface pro like machine with a broadwell chip, and a quad core desktop system by then.
Although broadwell might be really only more for mobile, so I might end up waiting for the revision after that to get a CPU, and upgrade my GPU first.
Well, I actually had the GTX 460 768MB as a dedicated PhysX card alongside my original GTX 680. Swapped out the motherboard, CPU, and RAM that I ended up repurposing for my dad's Christmas present, and also repurposed the GTX 460 for his rig. That left an empty PCIe 16x slot in my current rig, which obviously is a dangerous thing for my pocketbook, as I ended up going SLI. Found a great deal on the FS/FT boards that I couldn't pass up. ;-)
Darn those Microcenter deals that inevitably cause a cascading cash flow outlay!
I gave my parents a similar bump in two stages a few years ago. From an Athlon 900 and 128MB of PC-100, initially to a 2.0GHz A64 and 1GB of DDR-400, and a year or two later to a 2.4GHz dual core and 3GB of RAM when I retired my S939 box.
Arguably it's due for another upgrade soon; but they almost never use it anymore. Mom mostly uses my old netbook from her recliner while dad buys and batters a cheap 15" laptop to death on the road every 18 months or so. The desktop only ever really gets used if the wifi is down, or by my dad to use the networked printer. He only does this because he's terrified that connecting his laptop to wifi will screw up his AT&T dongle's software (this apparently happened once, a half dozen years and at least two dongles ago). I've gone as far as offering to image his drive first to guarantee I can undo anything if it breaks, but he won't let me touch it... *sigh*
Interesting article. I'd be interested to see power consumption differences in the C2D and the latest and greatest as I imagine that may be a more decisive win for the modern hardware than the actual performance difference. Thanks so much for sharing this.
More striking is how little you gain when replacing a 6.5-year-old CPU; 6.5 years used to be an eternity, yet this one is still functional. If you went back 6.5 more years you would land in the Pentium 3 era; think how slow that was compared to Core 2 Duo. And this gave me an idea: it would be cool to test every gen from Intel and AMD, as far back as you can manage to go, and plot it on a timeline to see how performance evolved (and slowed down). Of course you would have to exclude the extreme (pricing) series to get relevant results.
I have thought about this. One issue is that as you go further back, the connections change. Moving back to IDE, AGP and beyond shifts where the bottleneck is, and it is harder to keep consistency between the setups. When you get far enough back an OS change is needed too, which puts a different spin on things. What may be a 10 second benchmark today could be a 48 hour test if you go back too far :) Although I do wish I had more AMD chips for comparison in these graphs, such as Athlon, Athlon X2, Phenom and the like.
That's true, but since we can't test just the CPU, we test the system; using the hardware that was available at the time for each system provides the relevant results you would be looking for. On the software side it might be hard to find the best benchmarks, since ideally you would have to use the same version of the software. In the end you should be able to figure out a reasonable solution, and I do hope you find the energy to give it a try. Including ARM would be fun too, but would be too limiting on the software side.
The only thing I can think of is something similar to SuperPI. It only tests the CPU, but it's probably the only thing that could be tested on all machines no matter what age they come from and what OS they use. I have a working IBM-compatible 286 computer from 1986 at my parents' house; it would be fun to compare that to something more modern ;D
Should you decide to give it a try some time, Linux would take much of the OS incompatibility away, and a game like Spring RTS would be ideal for testing single threaded CPU performance by watching the same replay on each machine and noting the min/max/avg FPS (and on the really old stuff, time to complete the run).
A PCI/SATA card would also allow the use of an SSD, which would deliver IO performance far beyond anything those machines were originally capable of, thus eliminating that bottleneck.
Would be one hell of a project though and I'm sure people here would be willing to donate hardware to the project. I for one could contribute a couple of Athlon64/X2 CPUs.
I'm sure ATI released an AGP card not long ago as well, which would keep that bottleneck away (other than the interface itself, but that's all part of the evolution).
It would be neat to see a Penryn CPU thrown in the mix with a P35/P45 chipset based motherboard. If you compare an e8600 to a e4700 (closest CPU I could find to the e6400 @ 2.8GHz) there are healthy gains. http://www.anandtech.com/bench/Product/54?vs=63
I think for the vast majority of home computer use, a Core2 is comfortably fast enough for most people. I'm actually running dual Core2 based Xeons and a Radeon 5870 with a 10k Raptor from 2006 and 4GB of RAM, and I never have any issues doing what most people do at home - web surfing, Netflixing, a bit of light gaming (in fact, the Radeon 5870 might be overkill for that last part).
With enough RAM and fast enough storage, these machines could last a very long time, especially if OSes and apps stay constant or even speed up slightly.
I always enjoy seeing some older hardware compared to the latest stuff. Gives a clear perspective on just how large a difference is really there.
Those chips can overclock significantly further though. When Core came out I was among the first to buy in, with the E6300 and a budget overclocker board from Gigabyte. It would hit 3.5GHz easily at reasonable temps on a top-end cooler for sustained load operation (F@H). Going from 965P to a midrange P35 allowed me to attain that golden 100% overclock at the same voltage (1.86GHz to 3.73GHz), which did wonders for performance; as these results illustrate, even the article's smaller 670MHz boost makes a clear difference.
Games love having that integrated memory controller. But for the CPU-centric tests I'd still love to see how a 3.4GHz or higher Q6600 would fare, especially against AMD's offerings.
About a year ago I upgraded an old Core 2 Duo computer and was extremely pleased with the results.
It was a Dell Optiplex 755 desktop with a 2.33GHz CPU. Originally, it only had 2GB DDR2-667 RAM (1GB times 2 sticks), a sucky 80GB hard drive and even more sucky on-board graphics. I went to the Dell website, entered the machine's Service Tag number, and discovered that it could be upgraded to 8GB RAM using four 2GB sticks. At the time, DDR2-800 RAM was still cheap (although prices have gone up recently) so just for the hell of it, I pulled the DDR2-667 sticks and replaced them with 8GB of DDR2-800 I bought online. Then I replaced the 80GB hard drive with a 120GB SATA II SSD. Finally, I bought an ATI 6750 single slot graphics card with 1GB GDDR5 and a 128-bit bus. Although I would have preferred a more powerful graphics card with a 256-bit bus, I was limited to a single slot solution because the CPU fan shroud was too close to the PCIe x16 slot to accommodate a dual slot card. - Oh, yeah, and I upgraded the OS from 32-bit WinXP SP2 to 64-bit Win7 SP1.
The new WEI numbers were impressive. Although the CPU stayed at 5.8, the RAM went from 5.8 to 6.1, the graphics went from 3.4 to 7.3, and the disk I/O score went from 5.6 to 7.8.
Including the cost of a new 650 watt power supply (necessary because the old 350 watt Dell power supply didn't have the 6-pin connector needed for the graphics card), the total upgrade cost came to about $350. Keep in mind that this machine (with a DVD burner and CD-ROM) originally sold for about $750. So for less than 50% of the original cost I wound up with a computer that boots in 25 seconds, plays 1080p H.264 video, and runs most games at 1920x1080 with medium settings.
I agree with the poster who said 2560x1440 gaming was a poor choice for your review. 1080p scores would have been far more useful to Core 2 Duo owners. I also agree that Core 2 Duo owners don't care about the multi-threaded benchmarks you included. Let's face it, the average computer user doesn't do advanced encoding and such, and anyone who does would have junked their Core 2 Duo machine long ago.
Although I have several more modern computers at my home and office, I find that this upgraded Dell is worth keeping around, probably for another 2 or 3 years. Although it looks like 2560x1440 monitors will become more popular as time goes on (and prices drop) the average user will probably still be using 1080p monitors for a long time to come, so an upgraded Core 2 Duo is still a worthwhile project.
Made a quick calculation comparing the single-threaded 3DPM bench of the i7-3770K and stock C2D. Taking the difference in clock speeds into account the i7 turns out to be merely 4% faster (assuming full turbo boost). Has the IPC really not improved or is it simply a matter of the benchmark not using AVX or any of the other new extensions?
The benchmark is written in plain C++, without extensions, similar to what any non-CompSci-oriented scientist who has been told to 'write code' to solve a problem would produce.
Using the SSE4 C++ AMP n-body example in the SDK, at 10240 bodies in the calculation, the E6400 gets 3.8 GFlops at 2.13 GHz compared to 42.3 GFlops for an i7-3770K, if that helps :)
In that case: Good on Intel! That's a more than 3-fold improvement on a core-frequency basis if we're talking multithreaded here. Too bad this improvement does not come automatically though. There's probably a whole lot of programs that don't make use of these extensions.
BTW, nice to see a chemist on Anandtech. Keeps my fantasy of seeing a Gaussian09 bench on a Xeon Phi alive :)
Things always go boom when I'm in the lab. At least it takes me a couple of years to burn out a $300 GPU rather than a couple of minutes to have $10k of chemicals explode in my face / get washed down the drain :)
I just did an upgrade from a Core2Quad Q9450 to a Core i5-3570K and some new Corsair DDR3-1600 memory. I went with the Asus P8Z77-V LK motherboard. I already had a Corsair 60GB SSD I had been using as a boot drive. For the new system, I moved the SSD to being used for Smart Response on a 1TB EARZX Western Digital drive. Those two went on the SATA 6Gbps ports, with some other data drives and optical on the SATA 3Gbps ports. I kept my EVGA GeForce GTX 560 Ti graphics card. I decided to stick with Windows 7 Home Premium 64-bit for the upgrade.
I haven't benchmarked games, but in general I am really happy with the new system. Everything in the OS happens significantly faster, though boot time is a little slower. The old system was having stability issues, so this was as much a repair as it was an upgrade. Also, some things like opening and closing my Chrome session are much faster. Games do seem to be more responsive as well.
Different times. Back then I was at university, where money was a scarce resource, and after paying for my own new build I wanted to recoup some of the cost. Now in the world of jobs and such, it's less of an issue, and since he drives and I do not, his runabouts at my request have grown over time and I wanted to repay him.
I'm running a Core 2 Duo E8400 (stock 3.0GHz but oc'd to 3.7GHz) in a system that I bought used off Craigslist for $380 in mid 2010 (it was quite a steal). It originally had an HD 4870 512MB, but that died and I bought a friend's NVIDIA 460 GTX 1GB. It still runs any game I throw at it, usually at High or Highest settings (with or without anti-aliasing, depending on the game) @ 1080p.
A Core 2 Duo E8400 with 6MB of L2 cache, overclocked, is quite a potent combo. Definitely a powerful performer from that generation, at a decent price. And in single/two-threaded workloads, it's not THAT much slower than today's offerings. It's definitely fast enough to be responsive in day to day tasks (like JS-heavy webpages and Facebook and HD video streaming).
The only area that I REALLY feel the lack of power is in video encoding (which I don't do that much of) and in multi-tasking situations where I'd love to have a full screen Twitch.tv video stream open on my second monitor while playing an intensive 3D game on my main. Not enough cores :P Also, the other time I feel the lack of speed is probably in boot-up and installation of certain things, because I have a Vertex 3 SSD (only in 3Gbps mode, though) which is fast enough to remove the HD bottleneck for most things.
I have a similar setup but with an E5200 OCed to 2.7GHz with 2MB L2, and get a similar experience to you. CPU performance is irrelevant for the majority of tasks if you do not have other high-end components.
@Ian: I am a big fan of AnandTech, and I have read your articles on the QX9650 and X48 motherboards several times.
Your article gave me an idea: I am running a QX9650 @ 3952MHz, DDR2 @ 1115MHz and a Gigabyte Nvidia GTX 580 SOC (Super Overclock) on an ASUS P5Q Deluxe PCIe 2.0 mobo. Why don't you guys do the same for my QX9650? I only play WoW and it can hold its own quite well. I play at 1920x1080, all settings at ULTRA.
Pretty sure that my QX9650 would score substantially higher on an X48 board with DDR3 memory and a higher 4.2GHz OC. Why don't you guys do a similar article on the QX9650 vs modern CPUs and see whether it is worth it to upgrade? I know the QX9650 is one of your favourite chips (you even managed to fry one!).
Just focus on a 1920x1080 resolution or even a bit lower, as really very few people have 2560x1600 monitors.
You might be getting me confused with Rajinder Gill, the previous motherboard reviewer. He tackled X48 - I've never touched a QX 9650 :) Though I would like to. I have some ideas for future articles :)
A Core 2 Duo from years ago can beat the A10 in single threads? That's gotta hurt. I knew AMD was lacking single threaded performance but I thought they had at least crawled past the Core 2 Duo.
My laptop is a Core 2 Duo t6500 (2.1GHz Penryn), and while I would definitely want something more for gaming, I must agree that it is still plenty capable for what the vast majority of people do on the computer. A few die shrinks down the road, when Core 2 Duo like power becomes the standard for smartphone/tablet power, I think desktops and laptops will start to shrink at an even faster rate than they are now. Some will still need them of course, like some need trucks, but for the majority a tablet will do just as well.
The older Stars cores performed much better in single thread, as shown by the X6-1100T, and the OC'ed E6400 only beat the A10-5800K at stock in a single non-memory related benchmark. Just to put it in perspective ;)
I too had a launch-model Core 2 Duo, the E6300 (1.86GHz at stock). I was running it a bit faster (3.29GHz), but had definitely started to notice its age (GTX 260 for graphics, 8GB of DDR2).
I opted for a whole new PC in May (it was nearly GPU upgrade time, I wanted an SSD, and I was sick of my old case) and the speed difference is actually quite astounding. A lot of the general responsiveness I put down to the SSD, but Photoshop, gaming and compiling all got significantly faster with the upgrade to a 4GHz Ivy Bridge quad core.
I gave my old computer away and the mate I gave it to was pretty stoked (he had a Pentium 4 and a Radeon 1950 or something like that), but I couldn't be happier that I've upgraded.
I only upgrade my CPU/Mobo every four-five years (with other upgrades as needed) and the difference when changing to the newer platform is always very significant.
Your brother's old hardware's not much different to mine (E5400 @ 3.3 (used to be 3.5 but it's getting old), 4GB @ 800, dual GTS 250s, 120GB HyperX 3K SSD).
While it certainly can be frustratingly slow when it comes to computation, I can still run most of the newest games well enough @ >= medium settings.
While I recently had enough cash to upgrade to an i3, SLI mobo and 8GB, I still couldn't justify it, due to the lack of a major step in performance, or rather, due to the continued stubbornness of the old 775.
Hopefully I'll actually get a full time permanent job soon so it'll be easier to stomach a decent upgrade (K series and xxx(x)GPU).
The 'newer' E7200 (2.53GHz, 45nm) continues to serve me well. The E7200 still makes an awesome home server, with exceptionally low power consumption (even decent by today's standards!) and more horsepower than any 3 Atoms ever built. Until this week, I ran my home server (with 2 VMs on top of Windows) off of it.
This week, it's my desktop again...
My Phenom II motherboard went kaput last week, so I swapped back to the old Core 2 Duo + motherboard as my desktop until a replacement arrives. With RAID SSDs and a good graphics card, I have no complaints except the 4GB of RAM, which is why I upgraded in the first place; at the time, 8GB of DDR2 cost as much as the Microcenter Phenom II CPU+mobo deal plus 8 gigs of DDR3.
For those who aren't power users or serious gamers, any 45nm Core 2 Duo should last at least a couple more years with an SSD and enough RAM. Any upgrade less than a Sandy Bridge or Ivy Bridge i5 isn't worthy.
I have a very similar system (E7300 with 4GB of DDR2-800 + a HD4670 GPU) which I mainly use as an HTPC with occasional gaming at 1366x768. Overall I'm very satisfied with it.
My main concern is the power consumption. I know it's based on a newer 45nm architecture (the reason I chose this particular CPU was its power efficiency back in 2008).
I just want to know how much I would benefit from using a modern, say Ivy Bridge, Core i3 instead of my current rig, from a power consumption standpoint. Since I can't build a new PC right now I thought it'd be better to just upgrade what I currently have, maybe adding an SSD and a new GPU.
Since you mentioned you used yours as a server which might have been on 24/7 I thought you'd know the estimated power consumption?
I don't have any way to measure power consumption, but TomsHardware (http://www.tomshardware.com/reviews/intel-e7200-g3...) shows an E7200+G31 idling at 31 watts with integrated graphics and an efficient, low-output power supply.
The G31 is dated even by Socket 775 standards, so with underclocking/undervolting and a better motherboard, you can probably drop that. Quick and dirty: either use SpeedStep to lower the multiplier or drop the FSB from 1066 to 800MHz, then reduce voltage until it starts crashing :-)
I know the Radeon 4670 was very efficient for its day, but not sure what might be a good upgrade. First idea that comes to mind is G45+integrated graphics?
Thanks for the info, and sorry for the super-late reply!
About the GPU upgrade, the best option seems to be AMD Radeon 7750. It's really power-efficient and is the fastest graphics card right now that doesn't require an auxiliary power input.
Ivy Bridge... I don't actually have one, I just wanted to know if I should grab the currently available Series 7 Chronos/Ultra or wait for Haswell-equipped ones. How much will the difference be?
Since the Series 7 Chronos and Ultra both have a discrete GPU, the difference between the Ivy Bridge and Haswell integrated graphics won't matter at all.
I'm using a 4 year old X58 i7-920 system. It has since been upgraded with 24GB of the cheapest RAM and a new graphics card. Aside from USB 3, I don't see any reason to upgrade in the next 4 years. Long gone are the days where you couldn't play MP3s on a 486, or DivX on a Pentium II 266, or 1080p x264 on a C2D in a laptop.
...shame that the benches here are totally pointless in regards to the usual situation of hand-me-down PCs.
Would have been more useful to see testing on how fast a Word document opens and closes compared to a top end i7. How fast Facebook opens, maybe how playable the Sims was on both machines, Farmville performance.
That's more real-world stuff that normal people do. I'm pretty sure the differences would have been negligible.
Yes the tests show how far things have advanced but they don't address how pointless all that extra power is for 95% of users in general and 99.999% of hand me down owners.
Most I do now in such cases is just make sure it had a dual core, at least 2GB of ram and maybe slap a cheap SSD in it. Good to go for quite some time.
Second point: I'm intrigued to know what your brother's home looks like. That's one dusty PC for 2 years of use. I can always tell what a person's home looks like when they bring me a PC. If it's 5 years old and spotless inside, the home, 99 times out of 100, is spotless too. If it arrives full of dust and spiders then I know it's a hell hole. Proven when I arrive to take it back and I don't stop long.
I just recently updated from a Core 2 Quad Q9550 to a Core i5-3570K; I wanted to have 1920x1200@60 fps in Borderlands 2 consistently. I tried overclocking the C2Q first, but 3.4GHz still wasn't enough so I gave up on that idea.
Now BL2 pushes enough frames. Probably Alan Wake too, if I ever get back to it, although FPS drops weren't so visible in it. Other than that I haven't noticed much of a difference. Then again, I'm not doing anything heavy other than games at the moment.
I also like the single page format of this article since I read it initially on a tablet.
You should have had some faster RAM and grown a pair! I had a Q9550 @ 3.8 and it was blazing. Actually, I gave that to my brother... (Have a 2500K now).
I'm running an i5 with ssd array and plenty of toys but must say a good word about my semi retired C2D.
A bit newer than this article:
C2D E8400 (@4GHz on air easily) - 6GB DDR2-1066 OCZ Reapers - Asus P45 mobo - a pair of OCZ Vertex 2 60GB drives in RAID 0 for boot and 2 x 1TB WD Black, with an Asus 4850 with AC kit. As the last hurrah of the high-performance duals it still kicks damn good, especially with the SSD array... makes a HUGE difference.
As an owner of an old E4300 (overclocked to 2.4 GHz, though), if I'm reading this correctly, buying a modern graphics card (I have a Radeon 4670 - don't laugh) would be enough to push me into the realm of "decent gaming at 1680x1050"? I don't have money for a full computer upgrade, and I do occasionally feel the urge to play a game that isn't half a decade old.
So if I buy something like a 7850, that would work reasonably well, right...? Help me out - I've been out of the hardware loop for many years now.
I've been using ATI cards since the VGA days, when only genuine "built by ATI" cards had analog output comparable to Matrox. I really don't know much about the new cards, but I figured something like a 7850 is "standard".
If it's CPU-limited, then I guess I could max out the graphics settings anyway...?
I saw a passively cooled 7750. It's interesting (silence is golden!), but costs only slightly less than the 7850, which could be a future-proof upgrade in case I somehow find money to upgrade the MBO, CPU and RAM in the next year or so.
In fact, the 7750 is easily 25% slower than a GTX 460: http://www.anandtech.com/bench/Product/542?vs=535 If you don't have money for a full upgrade now, I still suggest you get the GTX 460 (or a comparable used ATI card) and wait!
That said, silence is golden... my current desktop is built for near-silence, with large low-RPM fans and no hard drives.
As more and more software became GPU-accelerated over the last 2 years, the GPU fan now spins up all the time and I have to disable GPU acceleration (especially in Chrome and Flash).
Got an old-fashioned C2D E8400 @ 4.1GHz on air, 4GB DDR2 Corsair Dominator, a DFI X48 LT T2R rocking mobo, and a Sapphire HD7970 GHz Vapor-X (up from an HD5870)... Getting modded Skyrim running at 1080p on a 27" Full HD screen at no less than 40 fps. Only gaming and Photoshop editing with this rig! I see no reason to upgrade at this time :D I think that for gaming at a decent fps, even in 1080p, today's CPU horsepower helps but is not vital. Much is done by the graphics card; I have seen an 80% to 100% increase in playing comfort from upgrading the graphics card!
Although I think that a system where every part is lacking is better replaced than upgraded, I thoroughly enjoyed this piece. For a person with an aging system, it's nearly impossible to find published tests of what targeted upgrades yield. I have no doubt this article will be very useful to folks still running old hardware and evaluating upgrades.
Something must be lost in translation here. You are apparently buying a very nice computer system for your late (i.e. dead) mother. That seems very generous, but even if your mother is still living (which I hope she is) it's unlikely that she would need that much computational power given that she's been working just fine on a Pentium 4 system.
Even if she keeps the computer for 8 years (like her previous P4 system), the reality is that the difference between an i5 and an i7 will be trivial compared to the difference between it and any modern computer system in the future. But then again, only the best will do for mom, who is hopefully just late for Bingo and not actually departed!
I recently replaced a C2D 6400 system I built in January 2007 with an Ivy Bridge Core i5 system. It started out with 2 GB and eventually got up to the maximum 6 GB PC6400 the Intel board would allow. Began with the Intel video, then got an Nvidia 210 card. A USB 3.0 card was added. Original OS was Vista beta, then Vista final, then Win7 beta, Win7 final, then Consumer and Release Previews of Win8.
The old C2D still has plenty of utility but I haven't the space to let it keep a position on the KVM. Still, it's sitting in reserve in case some situation comes up to put it in service again. For day to day use it had finally gotten old enough that a new machine could be justified, more for the assorted niceties beyond the CPU than for processing power.
It helps that Microsoft has been making an effort to reduce Windows resource requirements. In an earlier era a machine this old would be showing its age much more when running the latest Windows release.
Really should've tested multi-GPU configs; the CPU has been a serious bottleneck for gaming rigs since Nehalem. The results are even more apparent in multi-GPU configs, where there is little or no improvement in performance scaling from additional GPUs with a slower CPU that isn't overclocked.
I like seeing the comparison to current CPUs...probably useful for people looking to upgrade too.
I actually have a Core 2 x2 @ 2.4GHz that I use regularly. I notice a big difference between it and Sandy Bridge for web browsing and the like (obviously not as big as between the Core 2 and a C-50, let alone my iPad, but you still notice it).
It all depends on what you want. I have a very nice 21" monitor that I bought 5 years ago that only runs at 1680 resolution not 1920x1080. Nvidia 8800 GT I have had for years is more than adequate for most games I play.
On the other hand, an Atom netbook I have had for about 3 years is incredibly and frustratingly slow.
Another area where the E8400 held/holds its own was encoding, with that big 6MB cache. Re-encoded many vids with it at 4.3GHz using a Thermalright 120 HSF with no problems, even when doing other stuff. It was the mutt's nuts till the Q6600 launched and smoked everything.
To keep up the rant... before you sell off or give away an older system...
PLEASE try running it with an SSD boot drive... any size will do! ;)
Should have used Crysis instead; a Core 2 Duo, even overclocked to 3.5GHz, bottlenecks a 4890. I just upgraded recently and the performance increase was pretty staggering, even with the ancient GPU.
BF3 played better too; all the stutters and momentary hitches disappeared and it was just a lot smoother overall.
Even Source games play a lot better. Instead of dipping to the 40s, for instance in CS:GO, my fps remained pegged over 100.
GTA 4 is finally playable, after all these years.
I ended up getting a 7950, and I can't imagine how poorly it would have done on the old system, which was faster than the processor in the article. So yeah.
It's worth pointing out how close the 5800K is to the 3770K in gaming, especially when you're pointing out that the overclocked E6400 is "within 1 fps" (which it isn't really, it's 1.5 fps).
Unfortunately, AMD's only hope for equal performance in most games is to push the load onto the GPU. Throw in a second GPU and a huge gap appears, max details or not.
It's nice to see reviews like this, coming from reviewers. It brings things down to earth for the every-day, average end-user.
I would like to note that power consumption, while not exactly glossed over by the author, is not expressly backed with data either, and it is really where the bottom line is going to be. I mean, even in the last 2 generations of hardware, power consumption has dropped dramatically, while still maintaining a small, if not noteworthy, increase in performance. To me, this is a more important selling point of newer hardware than anything.
Either way, I certainly appreciate the time spent to reaffirm common knowledge in the component world. :)
I wouldn't count Core 2 out just yet. I have a Core 2, and it's doing a hell of a job keeping my Crysis 2 (with high-res textures and MaLDo in DX11 mode with everything on the very highest) min fps at 20. I would only add that a highly overclocked Core 2 (whether 45 or 65nm) is not only for browsing, so people shouldn't throw them away or count them out so fast; they are fast. I did some benchmarks of my own and found out that my Core 2 at its current clocks (overclocked to the maximum) is about equal in performance to an i3 (I suspect a Penryn at over 3600MHz would be faster than an i3). Also, in the games where performance does matter, it's more often a graphics card problem and not (as it used to be in the pre-Core 2 era) the processor. If you are asking me Core 2 at stock or i3, then i3 for sure, but I think most people who visit this site are not about stock clocks :)
I also have many people who come to me to upgrade their systems. For many of these people who had Core 2 Duos, I merely upgraded them to Windows 7, added more memory, gave them an SSD boot drive, and used their old drive as a storage drive. Some got upgraded video cards depending on their needs.
The vast majority of people feel like they got a brand new faster computer for less money than a full upgrade would cost them. Anecdotal evidence is great, but seeing the numbers quantified in this article was very interesting. It makes me wonder how an E8400 @ 3.0GHz would fare, or perhaps a Core2Quad Q9650 @ 3.0GHz. Pair one of these processors with a GTX 680 and see how they handle the gamut of modern games. I would like to see if they would render decent enough framerates to put off an upgrade and justify spending $500 on a video card.
I have got a Core 2 Extreme QX 9650 clocked @3.95 GHz, a GTX 580 Super Overclock, 8 GB of GSkill PC2-8800 DDR2 memory overclockable to 1200 MHz on an ASUS P5Q Deluxe PCIe 2.0 MOBO, Windows 7 Pro 64 Bit.
I play the World of Warcraft MoP and SWTOR (Star Wars: The Old Republic) MMORPGs, and I get from 38 to 114 FPS at 1920x1080, all settings ultra, anti-aliasing on and set at highest.
My CPU was bought in March 2008 and has logged 19,854 hours of operation (mostly at stock or underclocked). Absolutely no reason to upgrade until Haswell.
I just went and read all these comments. Come on AnandTech! Many of your readers are interested in this type of investigative journalism. We all have systems that we have pieced together for ourselves or friends or both. There is a lot of interest in what targeted upgrades can do for a system.
If I were a manufacturer, I know I would want you to test my part and have you recommend it to your audience as your "Gold Award" upgrade route.
Best upgrade path for an old system?:
1. SSD 2. New video card 3. Windows 7 64-bit and 8 more gigs of RAM 4. Mobo/CPU
I would love to see a couple of older systems put through variations of these upgrade paths to see which one or combination of two yields the best result v. how much it actually costs.
For ma and pa surfing the net and sending email a C2D is still boss. You can pick up E8400s for 35 bucks on eBay. Clock them to 4.0GHz and you've just bought 2-3 more years.
Core 2 Quad Q6600 (OC 3.0GHz) + GTX 670, GTA IV medium quality: 25 fps
i5-3570K (OC 4.2GHz) + GTX 670, GTA IV maxed: 60 fps
For gaming, I'd have to say yes, especially those that work the CPU hard too. These are probably the more extreme examples and the majority of the games do still play fine on the old Q6600.
For regular use, like papa and mama reading emails, browsing the web and streaming videos, the Core 2 Duos and Core 2 Quads with 4GB+ RAM still provide plenty of power. The SSD upgrade would certainly be cheaper and would make things snappier.
These results are definitely missing in this comparison. Because the gaming tests were limited, this comparison might lead to the false belief that an OC'd E6400 can cope nearly as well with today's games as current CPUs. As stated above, the difference in performance will be exaggerated in more CPU-intensive games. I can confirm this (only qualitatively) by the stutter I experience while playing BF3 or BioShock Infinite due to my E6400 @ 3GHz hitting 100%. On the other hand, my 6-year-old machine copes well with less demanding current games such as the Assassin's Creed series.
I have 2 older systems built in 2008 and 2009 before the first generation Core i-series. One, which was my gaming system for a while, had a E8600 and the other is a Quad Q6700. They both still serve their purpose today, despite me not gaming on PC anymore. The E8600 is being used for a HTPC and the Q6700 is my main machine I use for productivity and surfing the internet as well as some light gaming. I have no need for a high-end Ivy Bridge processor but if I came across a Sandy Bridge setup for a decent price, I'd bite.
CajunArson - Tuesday, January 15, 2013 - link
I have an e8400 OC'd to 3.6GHz and the same cooler you have :-)
Thanks for posting these results, since it is much more realistic to have people doing an upgrade from a Core 2 to Ivy/Haswell/Piledriver than to have somebody who just ran out and got a high-end Ivy Bridge 2 months ago upgrade to Haswell.
Despite all the talk that modern CPUs aren't getting any faster, your benchmarks make it clear that even in single-threaded performance there is a big step up from Core 2 to more modern systems, and Haswell will continue the trend.
lyeoh - Tuesday, January 15, 2013 - link
Yeah, I agree. This review is very useful. A few more benchmarks might be interesting - e.g. office and SYSmark benchmarks with an SSD (just to see which improves things more).
The hardware vendors might like it too - helping to convince more people that it's time to replace their trusty old Core 2 Duo system. ;)
Alucard291 - Thursday, January 17, 2013 - link
The reason there aren't any more benchmarks is that they would counter the conclusion the author wanted to make. The conclusion that was made before the review was written.
I especially love how the author picked Metro 2033 (the worst-optimised game in the history of gaming) for this review.
Lord 666 - Thursday, January 17, 2013 - link
Well said. The E6400 rig with an SSD is still well suited for browsing, email, office tasks, and general home/small business use.
Alucard291 - Friday, January 18, 2013 - link
And even light gaming. I mean, the majority of indie games will run on that without a problem. And these days indie games are pretty damn good.
ThisIsChrisKim - Monday, November 25, 2013 - link
I upgraded from a C2D E6300 to an i5-3570K for the sole purpose of gaming, and the perceptible difference is substantial. Not to mention my SSD became much faster with the addition of SATA 6Gbps, which isn't available on the previous platform. If you only play games older than 2010, an upgrade isn't necessary. But for anything beyond that, a CPU upgrade is very, very useful.
IanCutress - Saturday, January 19, 2013 - link
The benchmarks run are the ones from my motherboard testing suite. Go look at the motherboard reviews to see the parity between the two.
As a trained scientist and academic with peer-reviewed and published papers, I should take offence at your supposition that a conclusion was made before anything was written. No definite conclusion was made before testing began - only that the newer platforms would perform better in the testing suite. If you know more about my presuppositions before testing than I do, then praise be your powers of clairvoyance - there are institutions that will pay good money if you can prove it under laboratory conditions. If anything, I thought the gap between C2D and modern Ivy Bridge would be larger than the results I got, and the 3DPM ST result stunned me to begin with.
In case you missed it, the brief overview was to see how much of an upgrade my brother got in terms of my normal testing routine.
Ian
jsmi3413 - Sunday, March 3, 2013 - link
Well said, Ian! Have you ever noticed how some people like to make ridiculous statements in this and other forums, just to try and get an argument started?
I thoroughly enjoyed reading the results of your comparison. I am using an HP Pavilion Entertainment laptop PC with a Core 2 Duo processor. It is still great for word processing, spreadsheet, database and email tasks, as well as surfing the web. I do not play games. This laptop serves my purposes perfectly and I do not plan to upgrade, except maybe the operating system; this laptop shipped with Vista.
pr0t0typ3 - Monday, December 23, 2013 - link
I don't have the $ to get a new rig, so recently I decided to upgrade my 6-7 year old desktop. It's spent many years in my basement because I was using a laptop instead. I didn't want to spend a lot, as the whole thing is meant to serve as a "hot fix" until I can throw some cash at a brand new desktop machine.
All I bought was a decent SSD, 2 GB more RAM and one of the cheapest GPUs I could find to run two 1080p monitors and be able to play Blu-ray quality videos enjoyably (I'm not a gamer or anything, so it's fine).
So what I now have is:
Windows 8.1
Intel Dragontail DP35DP
Intel Core 2 Duo E6750 (@ 2.66GHz)
4x1GB Kingston DDR2-667 RAM
Samsung 840 EVO 256GB SSD (the excellent AnandTech review concludes that it's their "mainstream SSD of 2013")
Gainward GT 610 (1GB DDR3)
All my old, really slow hard drives in a storage pool of about 1TB for storage.
Yes, the SSD is SATA 3 and my mobo only supports SATA 2, so the speed of my SSD is limited; and yes, 4x1GB of RAM performs worse than 2x2GB would. But still, although I couldn't agree more with the conclusion of the article (more on that later), the performance gains of the upgrade were huge.
I only use my rig for studying, communicating, browsing, working (a lot) in Excel, mindmapping, watching movies, and playing online poker on 16-24 tables across multiple poker sites while using tracking software. And, more often than not, several of those things simultaneously. Before the upgrade, multi-tabling was almost impossible, and when I was studying or researching (meaning Chrome with 15+ tabs in multiple windows; Outlook, OneNote, Word and Excel running; Viber, Skype, qBittorrent and the like in the background) it was a real headache. A restart took almost 3 minutes. I didn't know what to expect from the upgrade. I knew an SSD would speed things up and adding more RAM wouldn't hurt either, but in real life that translated into a more than noticeable performance improvement.
After clicking 'restart', I have full control over my desktop within 80 seconds (don't forget that the BIOS part of the boot sequence on these ancient mobos takes a lot longer than on recent ones), and that boot time now includes loading Outlook, OneNote and Chrome with all the windows and tabs from my last session, because I've added them to my startup programs. Before the upgrade, this would've been a very, very stupid thing to do. Now, right after entering my password, these apps just pop up like it's nothing.
Even when many applications are running, the system loads new ones INSTANTLY, and while I was forced to close tabs and disable some extensions in Chrome to reduce memory load before the upgrade, that hasn't happened since. I literally haven't experienced ANY LAG. So, for a bit more than average load, it could seem like a good idea to upgrade your old machine. With the proper parts it gets the job done really well. Way better than I thought, to be honest.
BUT!!!!!
First of all, with an old system like that, I'm missing features and connectivity that are now standard in a modern-day PC. And many of these make a noticeable difference, even if you don't want to render 6K videos or do hardcore gaming or Photoshopping. I can't make use of the full potential of my SSD because I only have SATA 2 (half the bandwidth of SATA 3), and I can't transfer large amounts of data quickly to external devices due to the lack of USB 3 ports (I copied 800 GB to an external HDD yesterday, and at 30 MB/s it was by no means a seamless experience…). DDR2 is also heavily outdated compared to DDR3. My RAM's maximum transfer rate is somewhere around 2 Gb/s as I recall, and recent chips are easily 3-4 times faster than that; not to mention response times, etc. I'm no expert, but I think those differences are so huge they could actually be noticeable even under average use.
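As a rough back-of-envelope on that copy time (a sketch, assuming decimal units; the 150 MB/s figure is an assumed typical sequential rate for a USB 3.0 external HDD, not something from the comment):

```python
# Estimated copy times for the 800 GB transfer described above.
# Assumptions: decimal units (1 GB = 1000 MB); 150 MB/s is a guessed
# typical rate for a USB 3.0 external hard drive of that era.
size_mb = 800 * 1000

usb2_hours = size_mb / 30 / 3600    # at the observed 30 MB/s
usb3_hours = size_mb / 150 / 3600   # at an assumed 150 MB/s

print(f"old ports: {usb2_hours:.1f} h, USB 3.0: {usb3_hours:.1f} h")
```

At 30 MB/s the copy works out to about 7.4 hours, versus roughly an hour and a half over USB 3.0 under these assumptions.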
Also, prices of old but powerful components are unreasonably high. After upgrading and realizing all the benefits, I was toying with the thought of maxing out my mobo with one of the most powerful CPUs for the same socket (Intel's quad-core Q6600, that is) and 8 GB of 800MHz DDR2 RAM. I thought old stuff would be cheap, so I'd spend almost nothing and use a decent desktop while saving money for a new one. Not really. The Q6600 sells for $280, and 4x2GB of quality 800MHz DDR2 RAM is also over $200 (these are Amazon prices for good measure; in my country they are even more expensive). So I'd have to shell out about $500 for outdated technology that is way below even mid-range by today's standards. Not to mention reliability, power consumption, compatibility, support, etc…
Conclusion:
Have an old desktop? Thinking of upgrading and using it for a few more years? DON'T. Get a new one. Don't have the money and want/need good performance ASAP? Get a decent SSD (it'll work with your future rig, so why not?) and some cheap RAM. Well… 30 bucks for 2 GB is certainly not cheap, but for a few months of better performance, I'm sold. There you go: a neat and completely functional 'temporary PC' for $30 (plus the SSD, but you're going to reuse it anyway).
EnzoFX - Tuesday, January 15, 2013 - link
Yes, useful! I too have 2 systems in heavy use with E8400's, one OC'd to 3.6GHz, the other at stock 3.0GHz. Both with SSDs and HD 6850s. They handle occasional gaming just fine, let alone common tasks.
kyuu - Tuesday, January 15, 2013 - link
I don't think anyone has said CPUs aren't getting faster, just that we've reached a point where even entry-level CPUs are "good enough". You have to spend a great deal of money just for small bumps in performance that most people won't even notice with their workloads.
I don't anticipate upgrading my i5-2500K @ 4.4GHz for quite a while, unless something unexpectedly craps out. I also generally look at AMD solutions for notebooks, since the CPU performance is more than acceptable while the integrated GPU is far superior to Intel's, especially as GPU acceleration is becoming more common.
Homeles - Tuesday, January 15, 2013 - link
There's a lot of moaning about the "5-15%" increase in performance in existing applications that Haswell is supposed to bring. As this article shows quite clearly, it isn't about existing applications; it's about how future applications run. Look how terribly the C2D performs in comparison to even AMD machines in modern applications. 6 years down the road, the C2D chokes. Even a dual-core i3 fares much, much better.
silverblue - Tuesday, January 15, 2013 - link
I'm curious as to the FX's performance in FP tests. Sure, there's one FPU per two cores, but each FPU is theoretically twice as capable at 128-bit calculations as any other FPU. They must've really castrated the units, which might also explain the poor AVX performance compared to SB and IVB. Bet they're regretting that now.
wifiwolf - Wednesday, January 16, 2013 - link
It's not that simple. Remember that most code is Intel-optimised. If you want a comparable result on FP, you would have to use a 128-bit optimised code path. Their strategy is totally different: if you want heavy FPU calculations, use the on-die GPU cores.
Alucard291 - Friday, January 18, 2013 - link
Future applications? For your parents'/brother's/grandparents' PC? They will still be Office 2002, an email client of some description (but usually Gmail/Hotmail/Yahoo via browser), a browser, aaaaand maybe some sort of a movie player (VLC?). The last one is a big maybe too.
6 years down the line? They are likely to be using the same things still.
And no offence, but surely the upgrade time should then be when the C2D chokes, as opposed to when it still works OK? :)
For geeks like us, yes, a C2D is long since past its prime. But we've also long since upgraded to w/e suits us better, no?
name99 - Tuesday, January 15, 2013 - link
The REAL problem is that in using modern PCs, what one should care about in terms of UI is not throughput but "snappiness". The issue is no longer "how long will it take to transcode my mp3s"; it's "does the machine respond instantly to everything I do? How often do I have to wait?" Obviously SSDs have done a lot to move us into this world.
The problem for a site like AnandTech, then, is that classical benchmarks do a truly lousy job of quantifying "snappiness". We need a new set of relevant benchmarks.
You see the same sort of thing (at a more virulent level) in Android vs iOS fights, where both sides claim that their OS runs "smoother", more "like butter", but once again, in the absence of actual numbers, both sides are talking past each other. (And the situation is not helped by the fact that the last iPhone the Android guy used was an iPhone 3GS, while the last Android phone the iOS guy used was a Galaxy Nexus. Vague recollections of a phone three years old, and ZERO actual numbers, do not make for enlightening discussion.)
A further complication is that, for internet interaction, the properties of your TCP stack (perhaps tweaked), your router, and your ISP (does it offer "turbo boost" for the first 1MB of downloads?) all also affect perceived snappiness, so it's no longer about the pure CPU+RAM.
I don't have an answer, but I do see these sorts of benchmarks becoming less and less relevant every year, and the web site that comes up with an alternative will RULE this space.
tim851 - Tuesday, January 15, 2013 - link
Snappiness has been solved since 2009, when SSDs and Windows 7 (RC) came around. Since then, I haven't owned a laggy computer. Even my wife's 2nd-gen MacBook Air (Core 2 Duo, 2 gigs of memory) is butter all the time.
On smartphones, it was a talking point when powerhouse Android phones running Gingerbread were choking compared to old iPhones and Windows Phone 7 devices. Ever since Ice Cream Sandwich: snappiness achieved.
I don't see what we would be benchmarking here.
DanNeely - Tuesday, January 15, 2013 - link
Microstutter can still be an issue when gaming. Depending on the focus of the site, and whether all the reviewers are gamers, the best ways to evaluate this are either to collect the time between frames to measure how often a frame is slow, like Tech Report does; or, like [H]ardOCP, to play parts of the games at various settings to determine which combination gives the best graphics while remaining fast enough to be playable.
The latter is IMO the gold standard, but it's only really doable if all the reviewers are gamers. Tech Report's data gathering is more like the common FPS numbers, in that anyone equipped with a loaded Steam account can collect the data.
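The frame-time approach described above can be sketched in a few lines; the numbers below are made-up sample data to illustrate the idea, not measurements from any review:

```python
# Hypothetical per-frame render times in milliseconds (made-up sample
# data), as a frame-logging tool would record them.
frame_times_ms = [16.7, 16.9, 17.1, 45.0, 16.8, 16.6, 33.0, 16.7, 16.9, 17.0]

# Average FPS looks healthy...
avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

# ...but counting frames beyond a "slow" threshold (here 30 ms, i.e.
# momentary dips below ~33 fps) exposes the stutter the average hides.
slow_frames = sum(1 for t in frame_times_ms if t > 30.0)

# The worst-case frame time is what the player actually feels.
worst_ms = max(frame_times_ms)

print(f"{avg_fps:.0f} fps average, {slow_frames} slow frames, worst {worst_ms} ms")
```

Here the average works out to about 47 fps, which looks fine, yet two of the ten frames blew past the threshold - exactly the kind of stutter a plain FPS number hides.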
MrSpadge - Tuesday, January 15, 2013 - link
One could measure maximum and average latencies between "sending a task out" and "finished". It's a tough job to select meaningful benchmark scenarios suitable for such measurements and to interpret the data carefully. Anyway, pioneering work could be done here.
ImSpartacus - Wednesday, January 16, 2013 - link
Yup, I'm going to keep my 3.6GHz E8400 a little longer now. I thought it would significantly bottleneck things if I put a decent GPU alongside it, but now I know that I can get 95% of the potential gaming performance with this highly clocked CPU.
Alucard291 - Thursday, January 17, 2013 - link
Well, I daresay this (fairly poor by Anand's standards) "review" shows that actually the ANCIENT E6400 is still good enough for most people.
And that there is barely any reason to upgrade. But yeah, all them "modern advances" which are once again only very arguably useful... (Thunderbolt? Really? UEFI? For your family's PC? USB 3.0 is also pretty pointless atm.)
In the end, a pointless review. Ancient CPUs are a bit slower than new ones. Well, yeah? We all knew that.
This sort of review is better suited for... idk? Engadget?
IanCutress - Saturday, January 19, 2013 - link
It's one thing to know, another to actually publish results in a testing environment to show the difference. Oh right, but you already knew. Then why bother reading - surely it's a waste of your time?
For the record, my father and grandfather already take advantage of USB 3.0, my father moving his band music around and my grandfather with his veteran meets. Stick in some WiDi for fewer cables, or a simple UEFI so if something goes wrong I can tell them what to do over the phone with a lot less hassle.
This is despite the fact that I'm upgrading my brother's computer here and testing his old one. He plays a lot of games, he watches a lot of paintball streams and often does both across two monitors at the same time. As stated, he feels a benefit, and the numbers in the regular testing can at least quantify that to some degree based on the upgrade methodology listed.
Ian
Alucard291 - Saturday, January 19, 2013 - link
So why didn't you show the full range of tests?
Or was it simply a matter of being too lazy? Or is it, as I said above, that they didn't fit your predetermined conclusion?
ESPECIALLY if you say that these are your standard tests (which they are, yes).
So why not run the whole suite?
So yep, Engadget-quality content.
don_k - Thursday, February 7, 2013 - link
Also upgrading to something soon from an e8200 at 4.2GHz :) With a 580, gaming is no issue at good frame rates. An upgrade is certainly due, however.
wreckeysroll - Sunday, February 17, 2013 - link
Just moved from my Q6600 to a 3570K. Was a great upgrade.
werfu - Tuesday, January 15, 2013 - link
Your bench shows it clearly: most Core 2 owners, especially owners of E8xxx or Q9xxx chips, can still hang onto their rig, upgrade their GPU and get an SSD for an overall speed-up. The DDR2 may limit performance, but it's not like kits of low-latency DDR2-1066 aren't available, and they may provide better performance than generic DDR3-1333. And there are lots of second-hand socket 775 CPUs and motherboards (some DDR3), which may provide an incentive to fix instead of replace.
Death666Angel - Tuesday, January 15, 2013 - link
Hm, halving the time it takes to unpack a .rar, or doubling/tripling/quadrupling the speed of my encodes, is not a trivial thing. If all you do is game and use Word, maybe hanging on to the old stuff is good. But I think this shows the benefits of upgrading quite clearly.
I'd love to see something similar done with the CPU I have (3.8GHz i7-860). I'm holding off upgrading that for Haswell if I have the money then; don't want to upgrade to a dead platform again. :D
Hubb1e - Tuesday, January 15, 2013 - link
Actually, I think you've got it backwards. Doubling/tripling/quadrupling the time of encodes isn't a big deal at all to me, and I'd guess to most people. They spend MUCH more time gaming than bootlegging ripped DVDs or doing the occasional re-encode of a video file for YouTube. A few more minutes to finish an encode isn't a big deal to most people, yet if they can't get playable framerates in Diablo 3 or SimCity then they need to upgrade. It's about whether or not their computer can run their apps, rather than how fast it is. If a game is a slideshow then it is unplayable; if you encode at 3 fps you still get it done, it just takes longer. It all depends on the applications that you run. If you run only office tasks and the internet, then even my parents' single-core Sempron 140 feels snappy working in Word, taking red eye out of a photo, or even with web video, because the 780G graphics accelerates it.
themossie - Wednesday, January 16, 2013 - link
Agreed... unless you need the processor speed for work, who cares if it takes 3-4x the time to transcode or unpack a file?
Heck, I run transcoding in a VM on a separate SSD, restricted to ~25-50% CPU utilization, so I can keep my system responsive during long encodes.
This may be an extreme case - I have a slow machine (Core 2 Duo / Phenom II X4, see my post several comments down) and don't need that file encoded (or unpacked) this very second.
But if you need the video now... why not just watch it directly?
themossie - Wednesday, January 16, 2013 - link
Also, is unpacking RARs really CPU-limited? It's always been storage-limited for me. 7-Zip seems to unpack the RAR into a temporary file, then copy it and delete the temp file - 2x the optimal disk writes. Any more efficient and equally functional ZIP/RAR software?
Loki726 - Wednesday, January 16, 2013 - link
I used to think this as well, but you may be surprised if you benchmark it. The suffix sort in the BWT used by bzip2, and the Markov-chain predictors used in 7-Zip's LZMA, are actually quite slow, and even a high-end CPU may be slower than the disk. This is especially true (for compression) if you get a good compression ratio, since you end up writing that much less data to disk.
themossie - Thursday, January 17, 2013 - link
Will look that up, thank you!
Alucard291 - Friday, January 18, 2013 - link
And this is true. But in the scope of the review above, "for most people" (I know that's too general), how big is the rar they open going to be? 1GB tops, if it's some stolen game. In reality usually even less. So say it's some rar with some photos or some random songs - 200MB? Takes what, 10 seconds to unpack? With a C2D it would take 20. Oh, The Horror.
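Loki726's point a few replies up is easy to check with Python's stdlib codecs - `bz2` for the BWT case and `lzma` for 7-Zip's algorithm - though this in-memory sketch only approximates what the real archivers do, and the highly repetitive test data flatters both:

```python
import bz2
import lzma
import time

# Compressible test data, a little over 2 MB.
data = b"the quick brown fox jumps over the lazy dog " * 50000
mb = len(data) / 1e6

for name, mod in (("bz2", bz2), ("lzma", lzma)):
    t0 = time.perf_counter()
    packed = mod.compress(data)
    t_comp = time.perf_counter() - t0

    t0 = time.perf_counter()
    out = mod.decompress(packed)
    t_dec = time.perf_counter() - t0
    assert out == data  # sanity-check the round trip

    # If either rate comes in below a disk's sequential speed
    # (~100 MB/s for an HDD of that era), the CPU is the bottleneck.
    print(f"{name}: compress {mb / t_comp:.0f} MB/s, "
          f"decompress {mb / t_dec:.0f} MB/s")
```

On most machines the compression side, LZMA especially, lands well below hard-drive sequential speeds, which supports the claim that packing (and on older CPUs, unpacking) can indeed be CPU-bound rather than disk-bound.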
lyeoh - Wednesday, January 16, 2013 - link
7-Zip only uses temp files if you drag and drop; this appears to be because of the way Windows Explorer does things:
http://www.7-zip.org/faq.html
http://superuser.com/questions/302116/what-is-the-...
themossie - Thursday, January 17, 2013 - link
Thanks for the information, that would make sense!
However, I have used other compression software that doesn't have this issue with drag and drop (most recently PeaZip - I don't use it now, as the interface doesn't work well for me), so I'm not sure I trust their explanation :-)
Death666Angel - Wednesday, January 16, 2013 - link
You seem to misunderstand my intent: the OP talked like his usage case encompasses nearly all users. I was specifically talking about my own usage cases, where there were huge improvements. And I also mentioned that gaming is an outlier case here. So all you have done seems to be to reiterate my own post.
I spend about as much time ripping my BDs and DVDs as I do playing these days, and having the ability to get an encode done way faster, as well as having a more responsive system in the meantime, is a huge boon.
Btw. not everyone here is a torrent kiddie, I buy my stuff, tyvm.
themossie - Thursday, January 17, 2013 - link
Given the context, you were very easy to misinterpret. And who's a torrent kiddie? Just because we don't need to encode rips as fast as you... sheesh :-)
Minion4Hire - Tuesday, January 15, 2013 - link
Well, your Haswell answer is one I'm looking forward to reading, Ian!
To be honest, I kind of fear for AMD. Trinity didn't really enhance the A-series computationally, so if Intel releases a GT3/GT4 HD Graphics i3 part, they could seriously encroach on AMD's A-series niche market. Not only that, but as FM2 isn't truly available in Mini-ITX form factors, a graphics-heavy i3 would be even more versatile and enticing from a SFF/HTPC perspective, not to mention laptops.
I know Trinity is only a few months old, but it feels like AMD needs to release something very soon after Haswell in order to stay relevant.
MonkeyPaw - Tuesday, January 15, 2013 - link
You can get FM2 in mITX. Not many choices, but they do exist.
When talking low-tech in 2013: my brother still uses the desktop I built him back in 2007 as his sole household PC. It's a single-core S754 2.0GHz A64 running the classic nVidia 6100 IGP. I asked him if he wanted me to upgrade it, and he said not yet. :p
Hubb1e - Tuesday, January 15, 2013 - link
He must not click on the HD tab for YouTube video. I have a similar setup at my office for when I forget my laptop. It's an S754 at 2.4GHz with an AMD 9600 Pro, and it feels OK on the web but falls over with video. Putting a real video card in it would fix mine, but it's AGP. The nVidia 6100 is PCIe, so you could actually help him a lot by getting a used 3450 video card for $10 online. I bet that system would last several more years if it had video acceleration.
kyuu - Tuesday, January 15, 2013 - link
Eh? Trinity *did* enhance the A-series computationally compared to Llano. It wasn't enough to catch Intel (not that anyone seriously expected it would be), but progress is certainly being made, and the Piledriver cores are a good step up from the Bulldozer cores.
I don't think AMD needs to equal Intel's CPU prowess in order to stay relevant. They just need to leverage their GPU advantage, which is what they're doing. When using a dGPU, there's no reason to get an AMD CPU. When using an iGPU, there's no reason to get an Intel CPU, IMO, unless all you need is enough GPU to animate UI elements. Although Intel's GPUs in the mobile space are still struggling even with that, Haswell notwithstanding.
If Haswell is all it's hyped up to be (which I don't think it will be), then AMD may be in trouble if they aren't able to enhance their APU offerings to surpass it in graphics prowess fairly quickly.
CeriseCogburn - Tuesday, January 15, 2013 - link
Thanks for the article, but I object to the screen resolution of 2560x1440 on the two gaming benchmarks.
Is your brother sporting a $1000+ screen?
Do you think that people with a Core 2 generally are?
Your PCIe was only 1.1; there's an awful lot of PCIe 2.0 Core 2 hardware out there - and higher than the E6400.
So the gaming tests were important but useless due to the resolution.
OC'ing the CPU was useful, but brought it to a bare minimum, as a lower-clocked Core 2 would make an easy $40 upgrade to, say, an E5700 or something like that, or a $49 E8400 w/6MB cache and easy 3.6GHz potential.
I find the whole idea great, but dropping in a decent cheap CPU and gaming at 1680x1050 or 1920x1080 would have done the article proud for people here, IMO.
No one with a Core 2 gives a darn about the computational benchmarks.
frumply - Tuesday, January 15, 2013 - link
You can't find C2D chips except on eBay nowadays, which makes this sort of 'quick' upgrade more difficult than it was just a few years ago. I agree that this, plus upping the RAM to as close to 4GB as possible (a lot of people with earlier C2D systems have 1-2GB), is a much more realistic, painless procedure.
IanCutress - Tuesday, January 15, 2013 - link
The cheap Korean panels are less than $400. At this resolution and these settings, any CPU-driven calculations can be a bottleneck. It also helps that all the other results I have for other CPUs were done at that resolution and those settings; otherwise this quick test for a mini article would have run into a couple of weeks of testing, rather than a day.
As for PCIe, it was just the board taken out of an essentially random system, but it happened to be the one my brother was using. He has a dual-screen setup at home, often running a video/audio stream on one while playing RuneScape almost full-screen on the other. We can always argue 'why didn't you test XYZ', but the truth of the matter is this is what I had to test.
If I didn't upgrade his machine, or didn't have the capacity to, then a motherboard or CPU swap would indeed be on the cards if he couldn't afford a full system. As you note, there are some cheap and cheerful prices to be had for s775, although a jump to Sandy Bridge could be as little as $125 new.
Ian
Folterknecht - Tuesday, January 15, 2013 - link
The PCIe 1.0 board probably bottlenecked both GPUs, at least in some benchmarks. I would really like to see a follow-up on this with a P45 board (PCIe 2.0) and a Q9550 or something similar.
extide - Tuesday, January 15, 2013 - link
The point was to compare the system using the SAME benchmarks used on the modern systems, that way he could have numbers to compare to. It's not like he actually benched all of those machines for this article, he just benched the Core 2 system, and used pre-existing benchmarks to compare against.
silverblue - Tuesday, January 15, 2013 - link
It's not a bad point to make though; however, as mentioned, the hardware has to be available. Perhaps for a follow-up article?
LauRoman - Tuesday, January 15, 2013 - link
I still use a Core 2 Quad and may be using it until it melts.
frumply - Tuesday, January 15, 2013 - link
What I did w/ a stepfather's system a couple years back was to get one of them low-cost 45nm C2D Pentiums and add a stick of RAM I wasn't using anymore, replacing an E4300 or equivalent, as well as doubling the RAM. Long as you're not doing video processing or 3D rendering, those late Wolfdales seemed to be more than powerful enough for the job. I've done the same with my system by using a mid-tier Q9x00 CPU. Most of my gaming is console-based these days anyway, and aside from that my only gripe is that I'm stuck with 4GB of RAM till I upgrade to something better. I'm looking forward to finally upgrading with Haswell, but much of the drive for that's been due to the power-saving features.
nathanddrews - Tuesday, January 15, 2013 - link
By any chance did you also log the minimum and maximum fps for the games? I found that when I upgraded from my Q6600 (OC 3.2GHz) to a stock 3570K, while keeping the same GPU and SSD, I all but eliminated low fps spikes and max fps nearly doubled. Some games are immensely quicker while others simply no longer stutter.
IanCutress - Tuesday, January 15, 2013 - link
Dirt3, 2560x1440, Ultra, 8xMSAA:
E6400 @ 2.13 GHz, HD7970, min FPS: 17.3
E6400 @ 2.80 GHz, HD7970, min FPS: 32.3
E6400 @ 2.13 GHz, GTX580, min FPS: 18.6
E6400 @ 2.80 GHz, GTX580, min FPS: 22.5
Note that I did these on 12.3 / 296.10 drivers to remain consistent with my chronologically older results, so the newer drivers should probably push it up a little, especially those 12.11b11.
Mr Perfect - Wednesday, January 16, 2013 - link
Interesting. How does that compare to the newer systems? Do you have minimums for those too? A year and a half ago, I ran a similar upgrade-as-much-as-you-can Socket 939 Opteron machine with a then-new 6870 in it, and when the card was carried over to a new i7 2600 build, it was the minimum frame rates that really improved the most. When the old machine was CPU limited, it was really CPU limited.
Golgatha - Tuesday, January 15, 2013 - link
Just upgraded my dad from a Pentium D, 4GB DDR2, 9500 GT system to a Core i7 920, 12GB DDR3, GTX 460 768MB hand-me-down system for his Christmas 2012 present. Let's see if he notices the difference?! I think the biggest relative upgrade I performed was for my in-laws though. Took them from a Pentium 4 3.0GHz to a Core 2 Quad Q9400. Now that's an upgrade! They're still using that system and it still runs like a champ, by the way.
I treated myself to an i7 3770k, 16GB DDR3, SLI 680 system this past fall.
karasaj - Tuesday, January 15, 2013 - link
I am so utterly jealous of how much money you have xD. I will be rocking a Core i3-3220 + GTX 650 and a decent Ivy Bridge laptop probably until Broadwell. I might pick up a Surface Pro-like machine with a Broadwell chip, and a quad core desktop system by then. Although Broadwell might really be more for mobile, so I might end up waiting for the revision after that to get a CPU, and upgrade my GPU first.
Golgatha - Wednesday, January 16, 2013 - link
Well, I actually had the GTX 460 768MB as a dedicated PhysX card alongside my original GTX 680. Swapped out the motherboard, CPU, and RAM that I ended up repurposing for my dad's Christmas present, and also repurposed the GTX 460 for his rig. That left an empty PCIe 16x slot in my current rig, which obviously is a dangerous thing for my pocketbook, as I ended up going SLI. Found a great deal on the FS/FT boards that I couldn't pass up. ;-) Darn those Microcenter deals that inevitably cause a cascading cash flow outlay!
DanNeely - Tuesday, January 15, 2013 - link
I gave my parents a similar bump in two stages a few years ago. From an Athlon 900 and 128MB of PC-100, initially to an A64 2.0 and 1GB DDR2-400, and a year or two later to a 2.4GHz dual core and 3GB of RAM when I retired my S939 box. Arguably it's due for another upgrade soon; but they almost never use it anymore. Mom mostly uses my old netbook from her recliner while dad buys and batters a cheap 15" laptop to death on the road every 18 months or so. The desktop only ever really gets used if the wifi is down, or by my dad to use the networked printer. He only does this because he's terrified that connecting his laptop to wifi will screw up his AT&T dongle's software (this apparently happened once, a half dozen years and at least two dongles ago). I've gone as far as offering to image his drive first to guarantee I can undo anything if it breaks, but he won't let me touch it... *sigh*
sideshow23bob - Tuesday, January 15, 2013 - link
Interesting article. I'd be interested to see power consumption differences in the C2D and the latest and greatest, as I imagine that may be a more decisive win for the modern hardware than the actual performance difference. Thanks so much for sharing this.
jjj - Tuesday, January 15, 2013 - link
More striking is how little you gain when replacing a 6.5-year-old CPU. 6.5 years used to be an eternity, yet this one is still functional. If you went back 6.5 more years you would land in the Pentium III era; think how slow that was compared to Core 2 Duo.
And this gave me an idea: it would be cool to test every generation from Intel and AMD, as far back as you can manage to go, and plot it on a timeline to see how perf evolved (and slowed down) - ofc you would have to exclude the extreme (pricing) series to get relevant results.
IanCutress - Tuesday, January 15, 2013 - link
I have thought about this. One issue is that as you go older, the connections change. Moving back to IDE, AGP and further back adjusts where the bottleneck is, and it is harder to keep consistency between the setups. When you get far enough back an OS change is needed too, which puts a different spin on things. What may be a 10 second benchmark today could be a 48 hour test if you go back too far :) Although I do wish I had more AMD chips for comparison in these graphs, such as Athlon, Athlon X2, Phenom and the like.
Ian
jjj - Tuesday, January 15, 2013 - link
That's true, but since we can't test just the CPU, we test the system; using the hardware that was available at the time for each system provides the relevant results you would be looking for. On the software side it might be hard to find the best benchmarks, since ideally you would have to use the same version of the software.
In the end you should be able to figure out a reasonable solution, and I do hope you find the energy to give it a try.
Including ARM would be fun too but would be too limiting on the software side.
Kepe - Tuesday, January 15, 2013 - link
The only thing I can think of is something similar to SuperPI. It only tests the CPU, but it's probably the only thing that could be tested on all machines no matter what age they come from and what OS they use. I have a working IBM compatible 286 computer from 1986 at my parents' house; would be fun to compare that to something more modern ;D
fic2 - Tuesday, January 15, 2013 - link
Where do we send the old hardware?
Peanutsrevenge - Wednesday, January 16, 2013 - link
Should you decide to give it a try some time, Linux would take much of the OS incompatibility away, and a game like Spring RTS would be ideal for testing single-threaded CPU performance by watching the same replay on each machine and noting the min/max/avg FPS (and on the really old stuff, time to complete the run). A PCI/SATA card would also allow the use of an SSD, which would offer more IO performance than the machines were even capable of using, thus eliminating that bottleneck.
Would be one hell of a project though and I'm sure people here would be willing to donate hardware to the project. I for one could contribute a couple of Athlon64/X2 CPUs.
I'm sure ATI released an AGP card not long back as well, which would keep that bottleneck away (other than the interface itself, but that's all part of the evolution).
eBauer - Tuesday, January 15, 2013 - link
It would be neat to see a Penryn CPU thrown in the mix with a P35/P45 chipset based motherboard. If you compare an e8600 to a e4700 (closest CPU I could find to the e6400 @ 2.8GHz) there are healthy gains. http://www.anandtech.com/bench/Product/54?vs=63
aliasfox - Tuesday, January 15, 2013 - link
I think for the vast majority of home computer use, a Core2 is comfortably fast enough for most people. I'm actually running dual Core2 based Xeons and a Radeon 5870 with a 10k Raptor from 2006 and 4GB of RAM, and I never have any issues doing what most people do at home - web surfing, Netflixing, a bit of light gaming (in fact, the Radeon 5870 might be overkill for that last part). With enough RAM and fast enough storage, these machines could last a very long time, especially if OSes and apps stay constant or even speed up slightly.
Kougar - Tuesday, January 15, 2013 - link
I always enjoy seeing some older hardware compared to the latest stuff. Gives a clear perspective on just how large a difference is really there. Those chips can overclock significantly further though. When Core came out I was among the first to buy in, with the E6300 and a budget OCer board from GB. It would hit 3.5GHz easily at reasonable temps on a top-end cooler for sustained load operation (F@H). Going from 965P to a midrange P35 allowed me to attain that golden 100% overclock at the same voltage (1.86GHz to 3.73GHz), which did wonders for performance, as these results clearly illustrate even from a smaller 670MHz boost.
Games love having that integrated memory controller. But for the CPU-centric tests I'd still love to see how a 3.4GHz or higher Q6600 would fare, especially against AMD's offerings.
Brandenburgh_Man - Tuesday, January 15, 2013 - link
About a year ago I upgraded an old Core 2 Duo computer and was extremely pleased with the results. It was a Dell Optiplex 755 desktop with a 2.33GHz CPU. Originally, it only had 2GB DDR2-667 RAM (1GB times 2 sticks), a sucky 80GB hard drive and even more sucky on-board graphics. I went to the Dell website, entered the machine's Service Tag number, and discovered that it could be upgraded to 8GB RAM using four 2GB sticks. At the time, DDR2-800 RAM was still cheap (although prices have gone up recently) so just for the hell of it, I pulled the DDR2-667 sticks and replaced them with 8GB of DDR2-800 I bought online. Then I replaced the 80GB hard drive with a 120GB SATA II SSD. Finally, I bought an ATI 6750 single slot graphics card with 1GB GDDR5 and a 128-bit bus. Although I would have preferred a more powerful graphics card with a 256-bit bus, I was limited to a single slot solution because the CPU fan shroud was too close to the PCIe x16 slot to accommodate a dual slot card. - Oh, yeah, and I upgraded the OS from 32-bit WinXP SP2 to 64-bit Win7 SP1.
The new WEI numbers were impressive. Although the CPU stayed at 5.8, the RAM went from 5.8 to 6.1, the graphics went from 3.4 to 7.3, and the disk I/O score went from 5.6 to 7.8.
Including the cost of a new 650 watt power supply (necessary, because the old 350 watt Dell power supply didn't have the 6-pin connector needed for the graphics card), the total upgrade cost came to about $350. Keep in mind that this machine (with a DVD burner and CD-ROM) originally sold for about $750. So for less than 50% of the original cost I wound up with a computer that boots in 25 seconds, plays 1080p H.264 video, and runs most games at 1920x1080 with medium settings.
I agree with the poster who said 2560x1440 gaming was a poor choice for your review. 1080p scores would have been far more useful to Core 2 Duo owners. I also agree that Core 2 Duo owners don't care about the multi-threaded benchmarks you included. Let's face it, the average computer user doesn't do advanced encoding and such, and anyone who does would have junked their Core 2 Duo machine long ago.
Although I have several more modern computers at my home and office, I find that this upgraded Dell is worth keeping around, probably for another 2 or 3 years. Although it looks like 2560x1440 monitors will become more popular as time goes on (and prices drop) the average user will probably still be using 1080p monitors for a long time to come, so an upgraded Core 2 Duo is still a worthwhile project.
tocket - Tuesday, January 15, 2013 - link
Made a quick calculation comparing the single-threaded 3DPM bench of the i7-3770K and stock C2D. Taking the difference in clock speeds into account, the i7 turns out to be merely 4% faster (assuming full turbo boost). Has the IPC really not improved, or is it simply a matter of the benchmark not using AVX or any of the other new extensions?
IanCutress - Tuesday, January 15, 2013 - link
The benchmark is written in plain C++, without extensions, similar to any non-CompSci oriented scientist who has been told to 'write code' to solve a problem. Using the SSE4 C++ AMP n-body example in the SDK, at 10240 bodies in the calculation, the E6400 gets 3.8 GFlops at 2.13 GHz compared to 42.3 GFlops for an i7-3770K, if that helps :)
tocket - Tuesday, January 15, 2013 - link
In that case: good on Intel! That's a more than 3-fold improvement on a core-frequency basis, if we're talking multithreaded here. Too bad this improvement does not come automatically though. There's probably a whole lot of programs that don't make use of these extensions. BTW, nice to see a chemist on AnandTech. Keeps my fantasy of seeing a Gaussian09 bench on a Xeon Phi alive :)
IanCutress - Tuesday, January 15, 2013 - link
Oh, I should say that n-body was multithreaded. Don't have ST numbers, sorry :( I don't have access to Gaussian09, but if there's a Linux+OpenMP version, link me up and I'll see what I can do if I get a Xeon Phi in :)
Golgatha - Wednesday, January 16, 2013 - link
Bunch of in-silico nutters. Get into the lab and get your hands dirty!
IanCutress - Saturday, January 19, 2013 - link
Things always go boom when I'm in the lab. At least it takes me a couple of years to burn out a $300 GPU rather than a couple of minutes to have $10k of chemicals explode in my face / get washed down the drain :)
danjw - Tuesday, January 15, 2013 - link
I just did an upgrade from a Core 2 Quad Q9450 to a Core i5-3570K and some new Corsair DDR3-1600 memory. I went with the Asus P8Z77-V LK motherboard. I already had a Corsair 60GB SSD I had been using as a boot drive. For the new system, I moved the SSD to being used for Smart Response on a 1GB EARZX Western Digital drive. Those two went on the SATA 6Gbps ports, and some other data drives and optical on the SATA 3Gbps ports. I kept my EVGA GeForce GTX 560 Ti graphics card. I decided to stick with Windows 7 Home Premium 64-bit for the upgrade. I haven't benchmarked games, but in general I am really happy with the new system. Everything in the OS happens significantly faster, though boot time is a little slower. The old system was having stability issues, so this was as much a repair as it was an upgrade. Also, some things like opening up my Chrome session and closing it are much faster. Games do seem to be more responsive as well.
chrnochime - Tuesday, January 15, 2013 - link
For something like a computer, I would've just given the PC to him instead of making money off my own family...
chrnochime - Tuesday, January 15, 2013 - link
Then again you gave him the new system for free. Hmm..
IanCutress - Tuesday, January 15, 2013 - link
Different times. Back then I was at university, where money was a scarce resource, and after paying for my own new build I wanted to recoup some of the cost. Now, in the world of jobs and such, it's less of an issue, and since he drives and I do not, his runabouts at my request have grown over time and I wanted to repay him.
Ian
themossie - Wednesday, January 16, 2013 - link
Depends on the money situation, and surely it depends on the family? I don't give equipment to my family - if it's free, they treat it like crap and it breaks in weeks. And they have far more money than me...
For family, my time is free - and time is love. Gear is another story :-)
Paulman - Tuesday, January 15, 2013 - link
I'm running a Core 2 Duo E8400 (stock 3.0GHz but OC'd to 3.7GHz) in a system that I bought used off Craigslist for $380 in mid 2010 (it was quite a steal). It originally had an HD 4870 512MB, but that died and I bought a friend's NVIDIA GTX 460 1GB. It still runs any game I throw at it, usually at High or Highest settings (with or without anti-aliasing, depending on the game) @ 1080p. A Core 2 Duo E8400 with 6MB of L2 cache, overclocked, is quite a potent combo. Definitely a powerful performer from that generation, at a decent price. And in single/two-threaded workloads, it's not THAT much slower than today's offerings. It's definitely fast enough to be responsive in day to day tasks (like JS-heavy webpages and Facebook and HD video streaming).
The only area that I REALLY feel the lack of power is in video encoding (which I don't do that much of) and in multi-tasking situations where I'd love to have a full screen Twitch.tv video stream open on my second monitor while playing an intensive 3D game on my main. Not enough cores :P Also, the other time I feel the lack of speed is probably in boot-up and installation of certain things, because I have a Vertex 3 SSD (only in 3Gbps mode, though) which is fast enough to remove the HD bottleneck for most things.
GNUminex - Tuesday, January 15, 2013 - link
I have a similar setup, but with an E5200 OCed to 2.7GHz with 2MB L2, and get a similar experience to yours. CPU performance is irrelevant for the majority of tasks if you do not have other high-end components.
Achaios - Tuesday, January 15, 2013 - link
@Ian: I am a big fan of ANANDTECH, and I have read your articles on the QX9650 and X48 motherboards several times. Your article gave me an idea: I am running a QX9650 @ 3952MHz, DDR2 @ 1115MHz and a Gigabyte Nvidia GTX 580 SOC (Super Overclock) on an ASUS P5Q Deluxe PCIe 2.0 mobo. Why don't you guys do the same for my QX9650? I only play WoW and it can hold its own quite well. I play at 1920x1080, all settings at ULTRA.
Here's some gaming benchmarks for my QX 9650:
3D Mark Vantage: P20596 http://img715.imageshack.us/img715/2937/3dmarkvant...
(Note that my GTX 580 is not overclocked. I would score higher if I overclocked it. I don't need to.)
3D Mark Vantage score comparison with other similar CPUS:
http://img545.imageshack.us/img545/8894/3dmarkvant...
3D Mark 11 Pro Score: P6498
http://img21.imageshack.us/img21/5425/pscore39522m...
Memory Performance (Everest Benchmark) Kingston HyperX DDR2:
http://img16.imageshack.us/img16/3858/3953ghzp5qd1...
Pretty sure that my QX9650 would score substantially higher on an X48 board with DDR3 memory and a higher 4.2GHz OC. Why don't you guys do a similar article on the QX9650 vs modern CPUs and see how much it is worth it to upgrade? I know the QX9650 is one of your favourite chips (you even managed to fry one!).
Just focus on a 1920x1080 resolution or even a bit lower, as really very few people have 2560x1600 monitors.
My system here: http://www.overclock.net/lists/display/view/id/464...
Best regards
IanCutress - Tuesday, January 15, 2013 - link
You might be getting me confused with Rajinder Gill, the previous motherboard reviewer. He tackled X48 - I've never touched a QX9650 :) Though I would like to. I have some ideas for future articles :)
Ian
andykins - Tuesday, January 15, 2013 - link
Just want to add my voice to the chorus that I'd love more of these comparisons in future. :)
tipoo - Tuesday, January 15, 2013 - link
A Core 2 Duo from years ago can beat the A10 in single threads? That's gotta hurt. I knew AMD was lacking single-threaded performance, but I thought they had at least crawled past the Core 2 Duo. My laptop is a Core 2 Duo T6500 (2.1GHz Penryn), and while I would definitely want something more for gaming, I must agree that it is still plenty capable for what the vast majority of people do on the computer. A few die shrinks down the road, when Core 2 Duo-like power becomes the standard for smartphone/tablet power, I think desktops and laptops will start to shrink at an even faster rate than they are now. Some will still need them of course, like some need trucks, but for the majority a tablet will do just as well.
IanCutress - Tuesday, January 15, 2013 - link
The older Stars cores performed much better in single thread, as shown by the X6-1100T, and the OC'ed E6400 only beat the A10-5800K at stock in a single non-memory related benchmark. Just to put it in perspective ;)
Ian
IanCutress - Tuesday, January 15, 2013 - link
*Thuban on the X6-1100T, Stars on the A6-3650
cosminmcm - Wednesday, January 16, 2013 - link
Even if Stars/Thuban perform better than the A10, they still won't touch Core 2 per clock: http://www.anandtech.com/bench/Product/88?vs=48
althaz - Tuesday, January 15, 2013 - link
I too had a launch-model Core 2 Duo, the E6300 (1.86GHz at stock). I was running it a bit faster (3.29GHz), but had definitely started to notice its age (GTX 260 for graphics, 8GB of DDR2). I opted for a whole new PC in May (it was nearly GPU upgrade time, I wanted an SSD and I was sick of my old case) and the speed difference is actually quite astounding. A lot of the general responsiveness I put down to the SSD, but Photoshop, gaming and compiling all got significantly faster with the upgrade to a 4GHz Ivy Bridge quad core.
I gave my old computer away and the mate I gave it to was pretty stoked (he had a Pentium 4 and a Radeon 1950 or something like that), but I couldn't be happier that I've upgraded.
I only upgrade my CPU/Mobo every four-five years (with other upgrades as needed) and the difference when changing to the newer platform is always very significant.
Peanutsrevenge - Tuesday, January 15, 2013 - link
Your brother's old hardware's not much different from mine (E5400 @ 3.3 (used to be 3.5 but it's getting old), 4GB @ 800, dual GTS 250s, 120GB HyperX 3K SSD). While it certainly can be frustratingly slow when it comes to computation, I can still run most of the newest games well enough @ >= medium settings.
While I recently had enough cash to upgrade to an i3, SLI mobo and 8GB, I really couldn't find myself able to justify it still due to the lack of a major step in performance, or rather, due to the continued stubbornness of the old 775.
Hopefully I'll actually get a full time permanent job soon so it'll be easier to stomach a decent upgrade (K series and xxx(x)GPU).
themossie - Tuesday, January 15, 2013 - link
The 'newer' E7200 (2.53GHz, 45nm) continues to serve me well. The E7200 still makes an awesome home server, with exceptionally low power consumption (even decent by today's standards!) and more horsepower than any 3 Atoms ever built. Until this week, I ran my home server (with 2 VMs on top of Windows) off of it. This week, it's my desktop again...
My Phenom II motherboard went kaput last week, so I swapped back to the old Core 2 Duo + motherboard as my desktop until a replacement arrives. With RAID SSDs and a good graphics card, I have no complaints except the 4GB of RAM, which is why I upgraded in the first place - at the time, 8GB of DDR2 cost as much as the Microcenter Phenom II CPU+mobo deal and 8 gigs of DDR3.
For those who aren't power users or serious gamers, any 45nm Core 2 Duo should last at least a couple more years with an SSD and enough RAM. Any upgrade less than a Sandy Bridge or Ivy Bridge i5 isn't worthwhile.
benamoo - Saturday, January 19, 2013 - link
I have a very similar system (E7300 with 4GB of DDR2-800 + an HD 4670 GPU) which I mainly use as an HTPC, with occasional gaming at 1366x768. Overall I'm very satisfied with it. My main concern is the power consumption. I know it's based on a newer 45nm architecture (the reason I chose this particular CPU was its power efficiency back in 2008).
I just wanna know how much I would benefit, from a power consumption standpoint, from using a modern, say Ivy Bridge, Core i3 instead of my current rig. Since I can't build a new PC right now, I thought it'd be better to just upgrade what I currently have - maybe add an SSD and a new GPU.
Since you mentioned you used yours as a server which might have been on 24/7 I thought you'd know the estimated power consumption?
Any ideas on that?
Thanks in advance.
themossie - Sunday, January 20, 2013 - link
I don't have any way to measure power consumption, but TomsHardware (http://www.tomshardware.com/reviews/intel-e7200-g3...) shows an E7200+G31 idling at 31 watts with integrated graphics and an efficient, low-output power supply. The G31 is dated even by Socket 775 standards, so with underclocking/undervolting and a better motherboard you can probably drop that - quick and dirty, either use SpeedStep to lower the multiplier or drop the FSB from 1066 to 800MHz, then reduce the voltage until it starts crashing :-)
I know the Radeon 4670 was very efficient for its day, but I'm not sure what might be a good upgrade. The first idea that comes to mind is G45 + integrated graphics?
benamoo - Sunday, March 10, 2013 - link
Thanks for the info, and sorry for the super-late reply! About the GPU upgrade, the best option seems to be the AMD Radeon 7750. It's really power-efficient and is the fastest graphics card right now that doesn't require an auxiliary power input.
tech.noob.fella - Wednesday, January 16, 2013 - link
How much of a difference will Haswell make to graphics performance if my computer already has a discrete graphics unit?
themossie - Wednesday, January 16, 2013 - link
From Core 2 Duo to Haswell, or something else? What kind of programs do you run? You could be CPU, GPU, IO or RAM limited depending on the workload.
tech.noob.fella - Wednesday, January 16, 2013 - link
Ivy Bridge... I don't actually have one, just wanted to know if I should grab the currently shown Series 7 Chronos/Ultra or wait for Haswell-equipped ones... how much will the difference be?
themossie - Wednesday, January 16, 2013 - link
Since the Series 7 Chronos and Ultra both have a discrete GPU, the difference between the Ivy Bridge and Haswell integrated graphics won't matter at all.
lukarak - Wednesday, January 16, 2013 - link
I'm using a 4 year old X58 i7-920 system. It has since been upgraded with 24GB of the cheapest RAM and a new graphics card. Aside from USB 3, I don't see any reason to upgrade in the next 4 years. Long gone are the days where you couldn't run MP3s on a 486, or DivX on a Pentium II 266, or 1080p x264 on a C2D in a laptop.
jabber - Wednesday, January 16, 2013 - link
...shame that the benches here are totally pointless in regards to the usual situation of 'hand-me-down PCs'.
Would have been more useful to see testing on how fast a Word document opens and closes compared to a top-end i7. How fast Facebook opens, maybe how playable The Sims was on both machines, Farmville performance.
That's more real-world stuff that normal people do. I'm pretty sure the differences would have been negligible.
Yes, the tests show how far things have advanced, but they don't address how pointless all that extra power is for 95% of users in general and 99.999% of hand-me-down owners.
The most I do now in such cases is just make sure it has a dual core, at least 2GB of RAM, and maybe slap a cheap SSD in it. Good to go for quite some time.
Second point: I'm intrigued to know what your brother's home looks like. That's one dusty PC for 2 years of use. I can always tell what a person's home looks like when they bring me a PC. If it's 5 years old and spotless inside, the home 99 times out of 100 is spotless too. If it arrives full of dust and spiders then I know it's a hell hole. Proven when I arrive to take it back and I don't stop long.
amrs - Wednesday, January 16, 2013 - link
I just recently updated from a Core 2 Quad Q9550 to a Core i5-3570K; I wanted to have 1920x1200 @ 60 fps in Borderlands 2 consistently. I tried overclocking the C2Q first, but 3.4GHz still wasn't enough so I gave up on that idea. Now BL2 pushes enough frames. Probably Alan Wake too, if I ever get back to it, although FPS drops weren't so visible in it. Other than that I haven't noticed much of a difference. Then again, I'm not doing anything heavy other than games at the moment.
I also like the single page format of this article, since I read this initially on a tablet.
IanCutress - Wednesday, January 16, 2013 - link
For any of our articles or reviews, if you click 'Print This Article' it will show the whole thing in a single page format :)
piroroadkill - Thursday, January 17, 2013 - link
You should have had some faster RAM and grown a pair! I had a Q9550 @ 3.8 and it was blazing. Actually, I gave that to my brother... (Have a 2500K now).
Movieman420 - Wednesday, January 16, 2013 - link
I'm running an i5 with an SSD array and plenty of toys, but must say a good word about my semi-retired C2D. A bit newer than this article:
C2D E8400 (@4GHz on air, easily) - 6GB DDR2-1066 OCZ Reapers - Asus P45 mobo - a pair of OCZ Vertex 2 60s in RAID 0 for boot, 2 x 1TB WD Black, and an Asus 4850 with AC kit. For the last "Rah!" of the high-performance duals it still kicks damn good, esp with the SSD array... makes a HUGE difference.
Pinkynator - Wednesday, January 16, 2013 - link
As an owner of an old E4300 (overclocked to 2.4GHz, though), if I'm reading this correctly, buying a modern graphics card (I have a Radeon 4670 - don't laugh) would be enough to push me into the realm of "decent gaming at 1680x1050"? I don't have money for a full computer upgrade, and I do occasionally feel the urge to play a game that isn't half a decade old. So if I buy something like a 7850, that would work reasonably well, right...? Help me out - I've been out of the hardware loop for many years now.
themossie - Wednesday, January 16, 2013 - link
themossie - Wednesday, January 16, 2013 - link
The 7850's overkill. My 2.53GHz Core 2 Duo is usually CPU-limited with a GTX 460, which should run <$60 on Craigslist. GPU comparison at http://www.anandtech.com/bench/Product/549?vs=542
Pinkynator - Wednesday, January 16, 2013 - link
I've been using ATI cards since the VGA days, when only genuine "built by ATI" cards had analog output comparable to Matrox. I really don't know much about the new cards, but I figured something like a 7850 is "standard". If it's CPU-limited, then I guess I could max out the graphics settings anyway...?
I saw a passively cooled 7750. It's interesting (silence is golden!), but costs only slightly less than the 7850, which could be a future-proof upgrade in case I somehow find money to upgrade the MBO, CPU and RAM in the next year or so.
themossie - Wednesday, January 16, 2013 - link
The 7750 and 7850 really aren't comparable in performance - http://www.anandtech.com/bench/Product/535?vs=549 - in fact, the 7750 is easily 25% slower than a GTX 460 - http://www.anandtech.com/bench/Product/542?vs=535
If you don't have money for a full upgrade now, I still suggest you get the GTX 460 (or a comparable used ATI card) and wait!
That said, silence is golden... my current desktop is built for near-silence, with large low-RPM fans and no hard drives.
As more and more software became GPU-accelerated over the last 2 years, the GPU fan now spins up all the time and I have to disable GPU acceleration (especially in Chrome and Flash).
Anyone else have an experience like this?
astharo - Wednesday, January 16, 2013 - link
Got an old-fashioned C2D E8400 @ 4.1GHz on air - 4GB DDR2 Corsair Dominator - DFI X48 LT T2R rocking mobo! And a Sapphire HD 7970 GHz Vapor-X (up from an HD 5870)... Getting Skyrim running at 1080p on a modded 27" Full HD monitor at no less than 40 fps... Only gaming and Photoshop editing with this rig! See no reason to upgrade at this time :D
I think that for gaming at a decent fps, even at 1080p, today's CPU horsepower helps but is not vital... Much is done by the graphics card; I have seen an 80% to 100% increase in playing comfort from upgrading the GPU!
Movieman420 - Wednesday, January 16, 2013 - link
Just upped from an E8400 @ 4GHz - P45 - 6GB of OCZ DDR2-1066 and a 4850. Had 2 OCZ Vertex 2s in RAID1 as the boot drive, and the thing is still tight as well.
When it comes to speeding up an older rig, the 2 best/cheapest things that make the MOST difference:
AN SSD BOOT DRIVE (OR 2 IN RAID0 FOR REAL FUN)
MORE/FASTER MEMORY
astharo - Wednesday, January 16, 2013 - link
Much agreed!
TerdFerguson - Wednesday, January 16, 2013 - link
Although I think that a system where every part is lacking is better replaced than upgraded, I thoroughly enjoyed this piece. For a person with an aging system, it's nearly impossible to find published tests of what targeted upgrades yield. I have no doubt this article will be very useful to folks still running old hardware and evaluating upgrades.
TrackSmart - Wednesday, January 16, 2013 - link
Something must be lost in translation here. You are apparently buying a very nice computer system for your late (i.e. dead) mother. That seems very generous, but even if your mother is still living (which I hope she is), it's unlikely that she would need that much computational power given that she's been working just fine on a Pentium 4 system.
Even if she keeps the computer for 8 years (like her previous P4 system), the reality is that the difference between an i5 and an i7 will be trivial compared to the difference between it and any modern computer system in the future. But then again, only the best will do for mom, who is hopefully just late for Bingo and not actually departed!
themossie - Thursday, January 17, 2013 - link
Beautiful, just beautiful.
Hrel - Thursday, January 17, 2013 - link
TIL AMD is on par with 2008 performance. Sad, AMD, just sad.
epobirs - Thursday, January 17, 2013 - link
I recently replaced a C2D 6400 system I built in January 2007 with an Ivy Bridge Core i5 system. It started out with 2 GB and eventually got up to the maximum 6 GB PC6400 the Intel board would allow. Began with the Intel video, then got an Nvidia 210 card. A USB 3.0 card was added. Original OS was Vista beta, then Vista final, then Win7 beta, Win7 final, then Consumer and Release Previews of Win8.
The old C2D still has plenty of utility but I haven't the space to let it keep a position on the KVM. Still, it's sitting in reserve in case some situation comes up to put it in service again. For day to day use it had finally gotten old enough that a new machine could be justified, more for the assorted niceties beyond the CPU than for processing power.
It helps that Microsoft has been making an effort to reduce Windows resource requirements. In an earlier era a machine this old would be showing its age much more when running the latest Windows release.
chizow - Thursday, January 17, 2013 - link
Really should've tested multi-GPU configs; the CPU has been a serious bottleneck for gaming rigs since Nehalem. The results are even more apparent in multi-GPU configs, where there is little or no improvement in performance scaling from additional GPUs with a slower CPU that isn't overclocked.
Marburg U - Friday, January 18, 2013 - link
Honestly, I think that dumping a C2D without having upgraded to a quad-core Penryn is a waste.
Wolfpup - Friday, January 18, 2013 - link
I like seeing the comparison to current CPUs... probably useful for people looking to upgrade too.
I actually have a Core 2 x2 @ 2.4GHz that I use regularly. I notice a big difference between it and Sandy Bridge for web browsing or the like (obviously not as big as between the Core 2 and a C-50, let alone my iPad, but you still notice it).
But still, Conroe was a monster!
cjs150 - Friday, January 18, 2013 - link
Buy your brother a vacuum cleaner!
It all depends on what you want. I have a very nice 21" monitor that I bought 5 years ago that only runs at 1680x1050, not 1920x1080. The Nvidia 8800 GT I have had for years is more than adequate for most games I play.
On the other hand, an Atom netbook I have had for about 3 years is incredibly and frustratingly slow.
Movieman420 - Saturday, January 19, 2013 - link
Another area where the E8400 held/holds its weight was encoding, with that big 6MB cache. Re-encoded many vids with it at 4.3GHz using a Thermalright 120 HSF with no problems, even when doing other stuff. It was the mutt's nuts till the Q6600 launched and smoked everything.
To keep up the rant... before you sell off or give away an older system...
PLEASE try running it with an ssd boot drive....any size will do! ;)
snarfbot - Saturday, January 19, 2013 - link
Should have used Crysis instead; a Core 2 Duo, even overclocked to 3.5GHz, bottlenecks a 4890. I just upgraded recently and the performance increase was pretty staggering, even with the ancient GPU.
BF3 played better too: all the stutters and momentary hitches disappeared and it was just a lot smoother overall.
Even Source games play a lot better. Instead of dipping into the 40s in CS:GO, for instance, my fps remained pegged over 100.
GTA 4 is finally playable, after all these years.
I ended up getting a 7950, and I can't imagine how poorly it would have done on the old system, which was faster than the processor in the article. So yeah.
Eyefinity - Saturday, January 19, 2013 - link
It's worth pointing out how close the 5800K is to the 3770K in gaming, especially when you're pointing out that the overclocked E6400 is "within 1 fps" (which it isn't really, it's 1.5 fps).
silverblue - Monday, January 21, 2013 - link
Unfortunately, AMD's only hope for equal performance in most games is to push the load onto the GPU. Throw in a second GPU and a huge gap appears, max details or not.
Navvie0 - Saturday, January 19, 2013 - link
Really interesting, thought-provoking article.
I realise the C2D on test was essentially gifted, but if you had access to a C2Q processor, I'd find that article extremely beneficial.
LancerVI - Wednesday, January 23, 2013 - link
Would've been nice to see a 920 included.
Just saying.
pandemonium - Wednesday, January 30, 2013 - link
It's nice to see reviews like this coming from reviewers. It brings things down to earth for the everyday, average end-user.
I would note that power consumption, while not exactly glossed over by the author, is not expressly backed with data either, and it is really where the bottom line is going to be. Even in the last 2 generations of hardware, power consumption has dropped dramatically while still maintaining a small, if not noteworthy, increase in performance. To me, this is a more important selling point of newer hardware than anything else.
Either way, I certainly appreciate the time spent to reaffirm common knowledge in the component world. :)
Manoa - Wednesday, February 13, 2013 - link
I wouldn't count Core 2 out just yet. I have a Core 2, and it's doing a hell of a job keeping my Crysis 2 (with high-res textures and MaLDo in DX11 mode, everything at the very highest) min fps at 20. I would only add that a highly overclocked Core 2 (whether 45 or 65 nm) is not only for browsing, so people shouldn't throw them away or count them out so fast; they are fast. I did some benchmarks of my own and found that my Core 2 at its current clocks (overclocked to the maximum) is about equal in performance to an i3 (I suspect a Penryn at over 3600 MHz would be faster than an i3). Also, in the games where performance does matter, it's more often a graphics card problem and not (as it used to be in the pre-Core 2 era) the processor. If you are asking me Core 2 at stock or i3, then i3 for sure, but I think most people who visit this site are not about stock clocks :)
faster - Sunday, February 17, 2013 - link
I also have many people who come to me to upgrade their systems. For many of these people who had Core 2 Duos, I merely upgraded them to Windows 7, added more memory, gave them an SSD boot drive, and used their old drive as a storage drive. Some got upgraded video cards depending on their needs.
The vast majority of people feel like they got a brand-new, faster computer for less money than a full upgrade would cost them. Anecdotal evidence is great, but seeing the numbers quantified in this article was very interesting. It makes me wonder how an E8400 @ 3.0 GHz would fare, or perhaps a Core 2 Quad Q9650 @ 3.0 GHz. Pair one of these processors with a GTX 680 and see how they handle the gamut of modern games. I would like to see if they would render decent enough framerates to put off an upgrade and justify spending $500 on a video card.
Achaios - Saturday, March 30, 2013 - link
I have got a Core 2 Extreme QX9650 clocked @ 3.95 GHz, a GTX 580 Super Overclock, 8 GB of G.Skill PC2-8800 DDR2 memory overclockable to 1200 MHz on an ASUS P5Q Deluxe PCIe 2.0 mobo, and Windows 7 Pro 64-bit.
I play the World of Warcraft: MoP and SWTOR (Star Wars: The Old Republic) MMORPGs, and I get from 38 to 114 FPS at 1920x1080, all settings ultra, anti-aliasing on and set at highest.
My CPU was bought in March 2008 and has logged 19,854 hours of operation (mostly at stock or underclocked). Absolutely no reason to upgrade until Haswell.
faster - Sunday, February 17, 2013 - link
I just went and read all these comments. Come on, AnandTech! Many of your readers are interested in this type of investigative journalism. We all have systems that we have pieced together for ourselves or friends or both. There is a lot of interest in what targeted upgrades can do for a system.
If I were a manufacturer, I know I would want you to test my part and have you recommend it to your audience as your "Gold Award" upgrade route.
Best upgrade path for an old system?:
1. SSD Drive
2. New Video Card
3. Windows 7 64-bit and 8 more gigs of RAM
4. Mobo/CPU
I would love to see a couple of older systems put through variations of these upgrade paths to see which one or combination of two yields the best result v. how much it actually costs.
superjim - Thursday, February 21, 2013 - link
For ma and pa surfing the net and sending email, a C2D is still boss. You can pick up E8400s for $35 on eBay. Clock them to 4.0GHz and you just bought 2-3 more years.
jimmyzaas - Saturday, April 13, 2013 - link
Core 2 Quad Q6600 (OC 3.0) + GTX 670, Guild Wars 2 Medium Quality, 21 fps
i5 3570k (OC 4.2) + GTX 670, Guild Wars 2 MAX, 90+ fps
Core 2 Quad Q6600 (OC 3.0) + GTX 670, GTA IV Medium Quality, 25 fps
i5 3570k (OC 4.2) + GTX 670, GTA IV MAX, 60 fps
For gaming, I'd have to say yes, especially those that work the CPU hard too. These are probably the more extreme examples and the majority of the games do still play fine on the old Q6600.
For regular use, like papa and mama reading emails, browsing the web and streaming videos, the Core 2 Duos and Core 2 Quads with 4GB+ RAM still provide plenty of power. The SSD upgrade would certainly be cheaper and would make things snappier.
soulshot - Saturday, May 4, 2013 - link
These results are definitely missing from this comparison. Because the gaming tests were limited, this comparison might lead to the false belief that an OC'd E6400 can cope nearly as well with present-day games as current CPUs. As stated above, the difference in performance will be exaggerated in more CPU-intensive games. I can confirm this (only qualitatively) by the stutter I experience while playing BF3 or BioShock Infinite due to my E6400 @ 3GHz hitting 100%. On the other hand, my 6-year-old machine copes well with games such as the Assassin's Creed series.
Setnev - Monday, May 6, 2013 - link
I have 2 older systems built in 2008 and 2009, before the first-generation Core i-series. One, which was my gaming system for a while, has an E8600, and the other is a quad Q6700. They both still serve their purpose today, despite me not gaming on PC anymore. The E8600 is being used for an HTPC and the Q6700 is my main machine for productivity and surfing the internet, as well as some light gaming. I have no need for a high-end Ivy Bridge processor, but if I came across a Sandy Bridge setup for a decent price, I'd bite.