I think there are many of us who had the same idea. Unless you needed to upgrade due to a malfunction or a new laptop purchase, holding onto C2D until past the i-series was the best move to make, just as buying into C2D ASAP was the best move at the time.
Still going to wait for prices to fall and for more USB3 adoption. Expected new purchase: mid-2011 to mid-2012.
Ya know, I usually do as you do, but I was an early adopter of the i7 920. Looking now, it seems I made the right choice. I have had 2 years of kickassery and my processor still holds up rather well in this article.
Me too! I've got an E8400 running at 3.9 with almost zero OC know-how and it's done me well. I might snap up an i7 if they and their mobos get cheap when Sandy Bridge has been out a few months... but I may well skip that generation altogether.
Holy crapola, AMD really needs Bulldozer now. Even in heavily threaded video encoding the 2600K at $300 is blowing the 1100T X6 out of the water. This is the Core 2 Duo vs. A64 X2 all over again. Will Bulldozer be another Phenom, a day late and a dollar short? TLB bug, anyone? As a PC enthusiast I really want to see competition to keep prices in check. If I had to upgrade today, I can't see how I could turn down the 2600K...
Yeah, new Intel motherboard models are never cheap. I don't understand why the price remains so high when more and more functionality is moving to the CPU. The other killer is that you need a new board for every Intel CPU update.
Lastly, it's hard to throw the "buy now" tag on it with AMD's new architecture on the horizon. Sure, AMD has a tough act to follow, but it's still an unknown that I think is worth waiting for (if it's a dog, you can still buy Intel). Keep in mind that Bulldozer will have a pretty strong IGP, one that may make decent IGP gaming a reality. It will become a matter of how powerful the x86 portion of Bulldozer is, and they are trying a considerably different approach. Considering the amount of money you'll be paying, you might as well see how AMD shakes out. I guess it just depends on whether what you have today can get you by just a little longer.
You're conflating Bulldozer and Llano there. Bulldozer is the new architecture, coming to the desktop as an 8-core throughput monster. Llano is the first desktop APU, cramming 4 32nm K10.5 cores and a Redwood class GPU onto the die. The next generation of desktop APUs will be using Bulldozer cores.
That's not true at all. Most $40-50 AM3 mobos support X6. If you don't game or overclock, you don't need extra PCIe lanes and extra cooling, especially for a workstation.
I think that an AMD 880G mainboard with a CPU around 90 dollars, plus some 55xx-series GPU, can do better in terms of encoding, decoding, video playback, games, etc., and all that without a lot of money spent on Intel's new sockets, which you have to throw away when they make the next CPU. So please correct me if I am wrong.
Hi everyone, I decided to build a PC but made one error in getting the i7 2600. If anyone is interested in buying one please let me know; it's brand new, sealed in its original packaging.
I'm wondering how supply will be on release day. Often we see new components with low supply, and online stores start price gouging from day one; Newegg is particularly known for that. Let's hope supply is very good off the bat. That 2600K looks really appealing to me.
One of the local computer stores had Sandy Bridge parts up for sale last week, but they're all gone now save for a few Asus P8P67 standard, pro, and deluxe boards.
I wasn't able to see what kind of money they were asking.
This review has convinced me that once the 2600K shows up again it's all I'll need. I was going to wait for socket 2011 but damn, the 2600 is already more than twice as fast in everything as my poor ol' Q6600.
"As an added bonus, both K-series SKUs get Intel’s HD Graphics 3000, while the non-K series SKUs are left with the lower HD Graphics 2000 GPU."
Doesn't it seem like Intel has this backwards? For me, I'd think to put the 3000 on the lesser performing CPUs. Users will probably have their own graphics to use with the unlocked procs, whereas the limit-locked ones will more likely be used in HTPC-like machines.
This seems odd to me unless they're having yield problems with the GPU portion of their desktop chips. That doesn't seem too likely though because you'd expect the mobile version to have the same problem but they're all 12 EU parts. Perhaps they're binning more aggressively on TDP, and only had enough chips that met target with all 12 EUs to offer them at the top of the chart.
I agree with both of you. This should be the ultimate upgrade for my E8400, but I can't help thinking they could've made it even better if they'd used the die space for more CPU and less graphics and video decode. The Quick Sync feature would be awesome if it could work while you're using a discrete card, but for most people who have discrete graphics, this and the HD Graphics 3000 are a complete waste of transistors. I suppose they're power gated off so the thermal headroom could maybe be used for overclocking.
Great review, but does anyone know how often only one active core is used? I know this is somewhat subjective, but if you're running an anti-virus and have a bunch of standard services running in the background, are you likely to use only one core when idling?
What should I advise people, as consumers, to really pay attention to? I know when playing games such as Counter-Strike or Battlefield: Bad Company 2, my C2D maxes out at 100%, I assume both cores are being used to achieve the 100% utilization. I'd imagine that in this age, hardly ever will there be a time to use just one core; probably 2 cores at idle.
I would think that the 3-core figures are where the real noticeable impact is, especially in turbo, when gaming/browsing. Does anyone have any more perceived input on this?
According to Bench, it looks like he used 1680×1050 for L4D, Fallout 3, Far Cry 2, Crysis Warhead, Dragon Age Origins, and Dawn of War 2, and 1024×768 for StarCraft 2. I couldn't find the tested resolution for World of Warcraft or Civilization V. I don't know why he didn't list the resolutions anywhere in the article or the graphs themselves, however.
what the hell is the point of posting gaming scores at resolutions that no one will be playing at?
If I am not mistaken, the graphics cards in the test are: eVGA GeForce GTX 280 (Vista 64), ATI Radeon HD 5870 (Windows 7), and MSI GeForce GTX 580 (Windows 7).
So then, with a Sandy Bridge processor, these resolutions are irrelevant. 1080p or above should be the standard resolution for modern setup reviews.
Why, Anand, have you posted irrelevant resolutions for the hardware tested?
Games are usually limited in fps by the level of graphics, so processor speed doesn't make much of a difference unless you turn the graphics detail right down and use an overkill graphics card. As the point of this page was to review the CPU power, it's more representative to use low resolutions so that the CPU is the limiting factor.
If you did this set of charts for gaming at 2560x1600 with full AA & max quality, all the processors would be stuck at about the same rate because the graphics card is the limiting factor.
I expect Civ 5 would be an exception to this because it has really counter-intuitive performance.
For almost any game, the resolution will not affect the stress on the CPU. It is no harder for a CPU to play the game at 2560x1600 than it is to play at 1024x768, so to ensure that the benchmark is CPU-limited, low resolutions are chosen.
For instance, the i5 2500K gets ~65fps in the StarCraft test, which is run at 1024x768. The i5 2500K would also be capable of ~65fps at 2560x1600, but your graphics card might not be able to keep up at that resolution.
Since this is a review for a CPU, not for graphics cards, the lower resolution is used, so we know what the limitation is for just the CPU. If you want to know what resolution you can play at, look at graphics card reviews.
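To illustrate the point, here's a toy model (purely illustrative numbers, not measurements from the review): the delivered frame rate is roughly whichever of the CPU and GPU limits is lower, and only the GPU side moves with resolution.

```python
# Toy model: delivered fps is roughly min(CPU limit, GPU limit).
# All numbers below are made up for illustration only.

def delivered_fps(cpu_fps, gpu_fps_at_res):
    """Frame rate is capped by whichever side is slower."""
    return min(cpu_fps, gpu_fps_at_res)

cpu_limit = 65            # what the CPU can feed, independent of resolution
gpu_limit = {             # what the GPU can draw at each resolution
    "1024x768": 220,
    "1920x1080": 90,
    "2560x1600": 45,
}

for res, gpu_fps in gpu_limit.items():
    print(res, delivered_fps(cpu_limit, gpu_fps))
# At low resolution the CPU is the bottleneck (you measure the CPU);
# at 2560x1600 every CPU would report ~45 fps (you measure the GPU).
```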
Which is why the tests have limited real world value. Skewing the tests to maximize the cpu differences makes new cpus look impressive, but it doesn't show the reality that the new cpu isn't needed in the real world for most games.
Maybe I missed this in the review, Anand, but can you please confirm that SB and SB-E will require quad-channel memory? Additionally, will it be possible to run dual-channel memory on these new motherboards? I guess I want to save money because I already have 8GB of dual-channel RAM :).
This has been discussed in great detail. The i7, i3, and i5 2xxx series is dual channel. The rumor mill abounds with talk of SB-E having quad channel, but I don't recall seeing anything official from Intel on this point.
So the K processors have the much better IGP and an unlocked multiplier, but to use the improved IGP you need an H67 chipset, which doesn't support changing the multiplier?
I wonder why though? Is this just officially? I can't really see a good technical reason why CPU OC would work with P67 but not H67 - it is just turbo going up some more steps after all. Maybe board manufacturers can find a way around that? Or is this not really linked to the chipset but rather if the IGP is enabled (which after all also is linked to turbo)?
I just checked the manual to MSI's 7676 Mainboard (high-end H67) and it lists cpu core multiplier in the bios (page 3-7 of the manual, only limitation mentioned is that of CPU support), with nothing grayed out and overclockability a feature. As this is the 1.1 Version, I think someone misunderstood something....
Unless MSI has messed up its manual after all and just reused the P67 manual... Still, the focus on overclocking would then be rather ridiculous.
Yep. This is IMHO extremely stupid. I wanted to build a PC for someone who mainly needs CPU power (video editing). An overclocked 2600K would be ideal with QS, but it's either wait another 3 months or compromise somewhere... in that case probably H67, but then I'm still paying for the K part and not being able to use it. Intel does know how to get the most money out of you...
Haha, yeah, that is stupid. You'd think that on the CPUs you can overclock (the "K" parts) they'd use the lower-end GPU or no GPU at all. Makes for an awkward HTPC choice.
I'm surprised nobody cares that there's no native USB 3.0 support coming from Intel until 2012. It's obvious they are abusing their position as the number 1 chip maker, trying to push Light Peak as a replacement for USB. The truth is, Light Peak needs USB for power; it can never live without it (unless you like to carry around a bunch of AC adapters). Intel wants Light Peak to succeed so badly that they are leaving USB 3.0 (its competitor) by the wayside. Since Intel sits on the USB board, they have a lot of pull in the industry, and as long as Intel won't support the standard, no manufacturer will ever get behind it 100%. Sounds very anti-competitive to me. Considering AMD is coming out with USB 3.0 support in Llano later this year, I've already decided to jump ship and boycott Intel. Not because I'm upset with their lack of support for USB 3.0, but because their anti-competitive practices are inexcusable; holding back the market and innovation so their own proprietary format can get a head start. I'm done with Intel.
Sure, if you're building a desktop you can find plenty of boards with USB 3.0 support (via NEC). But if you're looking for a laptop, most still won't have it, because manufacturers don't want to pay extra for features when they usually get them via the chipsets already included. Asus is coming out with a handful of notebooks in 2011 with USB 3.0 (that I know of), but widespread adoption will not be here this year.
Most decent laptops will have USB3. ASUS, Dell, HP, Clevo, and Compal have all used the NEC chip (and probably others as well). Low-end laptops won't get USB3, but then low-end laptops don't get a lot of things.
Even netbooks usually have USB 3.0 these days, and those almost all use Intel Atom CPUs. The cost to add the controller is negligible for large manufacturers. USB is not going to be the deciding factor for purchases.
Your claims are pretty silly seeing as how USB came about in the same way that Light Peak did: Intel invented USB, pushed it to replace legacy ports like PS/2, and slowly phased out support for the older ones entirely over the years. It makes no sense for them to support USB 3.0, especially without a real market of devices, but motherboard manufacturers will support USB 3.0 via add-in chips. I don't see how this is anti-competitive at all; why should Intel have to support a format it doesn't think makes sense? So far USB 3.0 hasn't really shown speeds close to its theoretical maximum, and the only devices that really need the higher bandwidth are external drives that are better off being run off eSATA anyway. There's no real "killer app" for USB 3.0 yet. BTW, Light Peak will easily support adding power to devices, so it definitely does not need USB in order to provide power. There'll just be two wires running alongside the fiber optics.
The eSata + USB (power) connector has never gone anywhere, which means that eSata devices need at least 2 cables to work. Flash drives and 2.5" HDs don't need enough power to require an external brick, and 80-90% of eSata speed is still much better than the USB2 bottleneck. With double the amount of power over USB2, USB3 could theoretically be used to run 3.5" drives with a double socket plug freeing them from the wall as well.
I've had my P67A-UD4 for almost 3 weeks now. Let's get the chips out already!
I'm confused, however. The first paragraph talks of a 4.1GHz turbo mode and the chart on page 2 lists 3.8GHz as the max for the 2600K. Is the chart talking about 4-core turbo or what?
If Quick Sync is only available to those using the integrated GPU, does that mean you can't use QS with a P67 board, since they don't support integrated graphics? If so, I'll end up having to buy a dedicated QS box (a micro-ATX board with an S- or T-series CPU seems up to that challenge). Also, what if the box is headless (e.g. Windows Home Server)?
Does the performance of QS have to do with the number of EUs? The QS testing was on a 12-EU CPU, does performance get cut in half on a 6-EU CPU (again, S or T series CPUs would be affected).
No mention of Intel AVX functions. I suppose that's more of an architecture thing (which was covered separately), but there are no benchmarks (synthetic or otherwise) to demo the new feature.
Yeah, I think this is the case, or according to the blurb below you can connect a monitor to the IGP in order to use QS. Is this a design flaw? Seems like a messy workaround :(
" you either have to use the integrated GPU alone or run a multimonitor setup with one monitor connected to Intel’s GPU in order to use Quick Sync."
I'm not that interested in playback on that device; it's going to be streamed to my PS3, DLNA-enabled TVs, iPad/iPhone, etc. Considering this won't be supported as a hackintosh for a while, I might as well build a combo transcoding station and WHS box.
I had really set my mind on the 2500K as it offers unparalleled bang for the buck, and real-world testing has shown that Hyper-Threading makes little difference in games.
With the compile tests it's clear there's a distinct benefit to going with the 2600K for me though, which means this'll end up more expensive than I had planned! :)
It wouldn't surprise me if that was intentional. I would hope that Anandtech reviewers were not letting companies dictate how their products were to be reviewed lest AT be denied future prerelease hardware. I can't tell from where I sit, and there appears to be no denial stating that there is no such interference.
In addition, real-world benchmarks aside from games look to be absent. Seriously, I don't use my computer for offline 3D rendering and I suspect that very few other readers do to any significant degree.
Also, isn't SYSmark 2007 a broken, misleading benchmark? It was compiled on Intel's compiler, you know, the broken one that unnecessarily degrades performance on AMD and VIA processors. There is also this bit that Intel has to include with comparisons that use BAPCo (i.e., Intel) benchmarks and pit Intel's processors against AMD or VIA processors:
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchase, including the performance of that product when combined with other products.
It isn't perfect, but that is what the FTC and Intel agreed to, and until new benchmarks are released by BAPco that do not inflict poor performance on non-Intel processors, the results are not reliable. I don't see any problem if the graph did not contain AMD processors, but that isn't what we have here. If you are curious, for better or for worse, BAPco is a non-profit organization controlled by Intel.
Hardware vendors have no input into how we test, nor do they stipulate that we must test a certain way in order to receive future pre-release hardware. I should also add that should a vendor "cut us off" (it has happened in the past), we have many ways of getting hardware without being supplied by them directly. In many cases we'd actually be able to bring you content sooner, as we wouldn't be held by NDAs, but it just makes things messier overall.
Either way, see my response above for why the 1100T is absent from some tests. It's the same reason that the Core i7 950 is absent from some tests, maintaining Bench and adding a bunch of new benchmarks meant that not every test is fully populated with every configuration.
As far as your request for more real world benchmarks, we include a lot of video encoding, file compression/decompression, 3D rendering and even now a compiler test. But I'm always looking for more, if there's a test out there you'd like included let me know! Users kept on asking for compiler benchmarks which is how the VS2008 test got in there, the same applies to other types of tests.
Thanks for replying to my comment. I do understand why the review was missing some benchmarks for processors like the 1100T. I was also a bit hasty in my accusations with respect to interference from manufacturers, which I apologize for.
I still have trouble with including benchmarks compiled on the Intel compiler without a warning or explanation of what they mean. It really isn't a benchmark with meaningful results if the 1100T used x87 code and the Core i7-2600K used SSE2/SSE3 code. I would have no problem with reporting results for benchmarks compiled with Intel's defective compiler, like SYSmark 2007 and Cinebench R10, provided they did not include results for AMD or VIA processors, along with an explanation of why they are not applicable to AMD and VIA processors. However, I find it problematic to present such results without that context.
SYSmark 2007 is like the various 3DMark benches: mostly useless, but with a large enough fanbase that running it is less hassle than dealing with all the whining fanbois.
There are a few holes in the data we produce for Bench, I hope to fill them after I get back from CES next week :) You'll notice there are some cases where there's some Intel hardware missing from benchmarks as well (e.g. Civ V).
Seems Intel did everything right for these to fit snugly into next-gen Macs. Everything is nicely integrated into one chip, and the encode/transcode speed boost is icing on the cake (if supported, of course), given that Apple is content focused. A nice addition if you're a Mac user.
Except for the whole thing about not knowing if the GPU is going to support OpenCL. I've heard Intel is writing OpenCL drivers for possibly a GPU/CPU hybrid, or utilizing the new AVX instructions for CPU-only OpenCL.
Other than that, the AT mobile SNB review included a last-gen Apple MBP 13" and the HD3000 graphics could keep up with the Nvidia 320M - it was equal to or ahead in low-detail settings and equal or slightly behind in medium detail settings. Considering Nvidia isn't going to rev the 320M again, Apple may as well switch over to the HD3000 now and then when Ivy Bridge hits next year, hopefully Intel can deliver a 50% perf gain in hardware alone from going to 18 EUs (and maybe their driver team can kick in some performance there too).
"Unlike P55, you can set your SATA controller to compatible/legacy IDE mode. This is something you could do on X58 but not on P55. It’s useful for running HDDERASE to secure erase your SSD for example" Or running old OSes.
It would have been nice to include 1st-generation Core i7 processors such as the 860/870/920-975 in the StarCraft 2 bench, as it seems to be very CPU intensive.
Also, a section on overclocking showing how far the 2500K/2600K can go on air cooling with safe voltage limits (say 1.35V) would have been much appreciated.
Sounds like this is SO high end it should be in the server market. I mean, why make yet ANOTHER socket for servers that use basically the same CPUs? Everything's converging, and I'd just really like to see server mobos converge into "high-end desktop" mobos. I mean seriously, my E8400 OC'd with a GTX 460 is more power than I need. A quad would help with the video editing I do in HD, but it works fine now, and with GPU-accelerated rendering the rendering times are totally reasonable. I just can't imagine anyone NEEDING a home computer more powerful than the LGA-1155 socket can provide. Hell, 80-90% of people are probably fine with the power Sandy Bridge gives in laptops now.
Perhaps it is like you say; however, it's always good for buyers to decide if they want server-like features in a PC. I don't like manufacturers dictating only one way to do it (like Intel does now with the odd combination of HD 3000 graphics and the Intel H67 chipset). Let us not forget that for a long time, all we had were 4 slots for RAM and 4-6 SATA connections (like you probably have). Intel X58 changed all that: suddenly we had the option of 6 slots for RAM, 6-8 SATA connections, and enough PCI Express lanes. I only hope that LGA 2011 brings back those features, because like you said: it's not only the performance we need, but also the features. And remember that software doesn't stand still; it usually requires multiple processor cores (video transcoding, antivirus scanning, HDD defragmenting, modern OSes, and so on...). All this aside, the main issue remains: Intel must be persuaded to stop looting users' money and implement only one socket at a time. I usually support Intel, but in this regard, AMD deserves congratulations!
LGA 2011 is a high-end desktop/server convergence socket. Intel started doing this in 2008, with all but the highest-end server parts sharing LGA 1366 with top-end desktop systems. The exceptions were quad/octo-socket CPUs and those using enormous amounts of RAM, which used LGA 1567.
The main reason why LGA 1155 isn't suitable for really high-end machines is that it doesn't have the memory bandwidth to feed hex- and octo-core CPUs. It's also limited to 16 PCIe 2.0 lanes on the CPU vs. 36 PCIe 3.0 lanes on LGA 2011. For most consumer systems that won't matter, but 3/4 GPU card systems will start losing a bit of performance when running in a x4 slot (only a few percent, but people who spend $1000-2000 on GPUs want every last frame they can get), and high-end servers with multiple 10Gb Ethernet cards and PCIe SSD devices also begin running into bottlenecks.
Not spending an extra dollar or five per system for the QPI connections only used in multi-socket systems in 1155 also adds up to major savings across the hundreds of millions of systems Intel is planning to sell.
I'm confused by the upset over playing video at 23.976Hz. "It makes movies look like, well, movies instead of TV shows"? What? Wouldn't recording at a lower frame rate just mean there's missed detail, especially in fast action scenes? Isn't that why HD runs at 60fps instead of 30fps? Isn't more FPS good as long as it's played back at the appropriate speed, i.e., whatever it's filmed at? I don't understand the complaint.
On a related note, Hollywood and the world need to just agree that everything gets recorded and played back at 60fps at 1920x1080. No variation AT ALL! That way everything would just work. Or better yet 120fps, with the ability to turn 3D on and off as you see fit. Whatever FPS is best. I've always been told higher is better.
You are right about having more detail when filming at higher FPS, but this isn't about it being good or bad; it's more a matter of tradition and visual style. The look movies have these days, the one we got accustomed to, is mainly achieved by filming in 24p, or 23.976 to be precise. The look you get when filming at higher FPS just doesn't look like cinema anymore but TV. At least to me. A good article on this: http://www.videopia.org/index.php/read/shorts-main... The problem with movies looking like TV can be tested at home if you've got a TV with some kind of motion interpolation, e.g. what Sony calls MotionFlow or Panasonic calls Intelligent Frame Creation. When it's turned on, you can see the soap-opera effect from the added frames. There are people who don't see it and some who do and like it, but I have to turn it off since it doesn't look "natural" to me.
Why is it that Quick Sync has better scaling? It's very evident in the Dark Knight police car image, as all the other versions have definite scaling artifacts on the car.
Scaling is something that should be very easy. Why is there so big a difference? Are these programs just made to market new stuff, with no one really using them because they suck? Such big scaling differences between codepaths make no sense.
It looks to me like some of the encodes have a sharpening effect applied, which is either good (makes text legible) or bad (aliasing effects) depending on your perspective. I'm quite happy overall with the slightly blurrier QS encodes, especially considering the speed.
Who needs the IGP for a tuned-up desktop PC anyway? Some for sure, but I see the main advantages of the SB GPU for business laptop users. As the charts show, for desktop PC enthusiasts, the GPU is still woefully slow, being blown away even by the (low-end) Radeon 5570. For this reason, I can't help feeling that the vast majority of overclockers will still want to have discrete graphics.
I would have preferred the dual-core (4-thread) models to have (say) 32 shaders, instead of the 6 or 12 being currently offered. At 32nm, there's probably enough silicon real estate to do it. I guess Intel simply didn't want the quad-core processors to have lower graphics performance than the dual-core ones (sigh).
Pity that the socket 2011 processors (without a GPU) are apparently not going to arrive for nearly a year (Q4 2011). I had previously thought the schedule was Q3 2011. Hopefully, AMD's Bulldozer-based CPUs will be around (or at least imminent) by then, forcing Intel to lower the prices for its high-end parts. On the other hand, time to go; looks like I'm starting to dream again...
Using myself as an example, the drawback of limiting overclocking on H67 is the lack of a good selection of overclocking-friendly micro-ATX boards, since most, if not all, of those are H67.
Granted, that's not Intel's fault.
It's just that I have no need for more than one PCIe x16 slot and 3 SATA (DVD, HDD, SSD). I don't need PCI, FDD, PS2, SER, PAR or floppy connectors at all.
Which ideally means I'd prefer a rather basic P67 design in micro-ATX format but those are, currently, in short supply.
The perfect motherboard, for me, would probably be a P67 micro-ATX design with the mandatory x8/x8 Crossfire support, one x1 and one x4 slot, front panel connector for USB 3, dual gigabit LAN and the base audio and SATA port options.
Keep in mind though, that the over-clocking issue may not be as bad as pointed out. There are H67 boards being marketed for over-clocking ability and manuals showing how to adjust the multiplier for CPUs... I'm not yet convinced over-clocking will be disabled on H67.
Major bummer as I was going to order a Gigabyte H67 board and an i5-2500K but am put off now. They seem to over-clock so well and with low power consumption that it seemed the perfect platform for me… I don’t mind paying the small premium for the K editions but being forced to use a P67 and lose the graphics and have difficulty finding a mATX P67 board seems crazy!
I wonder if this limit is set in the chipset or it can be changed with a BIOS update?
Quick Sync only works if the IGP is in use (may be fixable via drivers later); for anyone who cares about video encoding performance that makes the IGP a major feature.
Looking at the Intel software encoding and the AMD encoding, it looks like the AMD is more washed out overall, which makes me think there's actually something related to colorspaces or color space conversion involved....
Are you guys sure there's no PC/TV mixup there with the luminance or ATI using the color matrix for SD content on HD content or something like that?
1. Transcoding @ 100fps is not uber fast. x264's ultrafast setting is even faster than that. So I hope there are further improvements or potential in Quick Sync that we haven't yet discovered.
2. OpenCL - No mention of OpenCL? At all?
3. I would think Intel's graphics division has done very well this time around. And there is possibly 20-30% more performance to squeeze out, given how poor Intel graphics drivers tend to be.
Thanks for the excellent rundown of Sandy Bridge. As I have an X58 system I'm going to skip it and see what happens in Q4. X58 has been a good platform and has lasted longer than most others in recent years.
I've thought it over... and I can't believe that H67 only supports GPU overclocking. Like others said, buy a "K" CPU on H67 and you get the HD 3000 graphics but can't overclock... and on the other side, those with P67 who buy an unlocked "K" CPU get the HD 3000 but can't use it... so what's the point of making HD 3000 graphics?
As they pointed out, with the Z series motherboard you can have both. That said, it does seem stupid that Intel would launch with those official guidelines, and in these comments others are saying some H67 motherboards are allowing the CPU multiplier to be changed.
As tempting as this chip looks, my 3.8GHz Core 2 Quad is still more CPU than I can really use most of the time. I wonder if we're reaching the point where improved compute performance is not really necessary for mainstream and even most enthusiast users.
In any case, the upcoming 6-core/12-thread variant sounds interesting. Maybe I'll upgrade to that if Intel doesn't assign it to the $999 price point.
Same here. For gaming or multimedia use, a Core 2 Quad (mine at 4GHz) is still enough, and probably will be for another 1-2 years. The best value for money is still in GPU upgrades.
Look at the PCIe x16 from the CPU. Intel indicates a bandwidth of 16GB/s for the x16 link. That means 1GB/s per lane. But PCIe v2 has a bandwidth of 500MB/s per lane only. That means the values Intel indicates for the PCIe lanes are the sum of the upload AND download bandwidth of the link.
That means the PCIe lanes of the chipset run at 250MB/s! That is PCIe v1 bandwidth, and Intel pulled the same bullshit with the P55/H57: they indicate that they are PCIe v2, but they limit their speed to PCIe v1 values:
P55 chipset (look at the 2.5GT/s!!!):
"PCI Express* 2.0 interface: Offers up to 2.5GT/s for fast access to peripheral devices and networking with up to 8 PCI Express* 2.0 x1 ports, configurable as x2 and x4 depending on motherboard designs. http://www.intel.com/products/desktop/chipsets/p55... "
Even for the ancient ICH7, Intel indicates 500MB/s per lane, but at that time PCIe v2 didn't even exist... That's because it's the sum of the upload and download speed of PCIe v1. http://img.tomshardware.com/us/2007/01/03/the_sout...
Because 2.0 speed for the southbridge lanes has been reported repeatedly (along with a 2x-speed DMI bus to connect them), my guess is an error when making the slides, with bidirectional BW listed on the CPU and unidirectional BW on the southbridge.
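For reference, the raw per-lane numbers being argued over can be sanity-checked from the published signaling rates (a quick sketch; PCIe 1.x and 2.0 both use 8b/10b encoding):

```python
# Quick sanity check of the PCIe numbers discussed above.
# PCIe 1.x/2.0 use 8b/10b encoding: 10 bits on the wire per data byte.

def per_lane_MBps(gigatransfers_per_s):
    """Usable bandwidth per lane, per direction, in MB/s."""
    return gigatransfers_per_s * 1e9 * (8 / 10) / 8 / 1e6

gen1 = per_lane_MBps(2.5)   # ~250 MB/s per lane, per direction
gen2 = per_lane_MBps(5.0)   # ~500 MB/s per lane, per direction

print(gen1, gen2)                 # 250.0 500.0
print(16 * gen2 / 1000)           # x16 Gen2, one direction: ~8 GB/s
print(2 * 16 * gen2 / 1000)       # both directions summed: ~16 GB/s
# A "16 GB/s" figure for an x16 Gen2 link only works if upstream and
# downstream are added together, while "2.5 GT/s" is the Gen1 signaling rate.
```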
I think the OP is referring to Intel Insider, the not-so-secret DRM built into the Sandy Bridge chips. I can't believe people are overlooking the fact that Intel is attempting to introduce DRM at the CPU level, and all everyone has to say is "wow, I can't WAIT to get one of dem shiny new uber fast Sandy Bridges!"
I for one applaud and welcome our benevolent DRM overlords.....
I have a Q9400; if I compare it to the 2500K in Bench and average (straight average) all scores, the 2500K is 50% faster. The 2500K has a 24% faster base clock, so all the architecture improvements plus faster RAM, more cache, and turbo mode gained only ~20% or so on average, which is decent but not awesome considering the C2Q is a 3+ year old design (or is it 4 years?). I know the idle power is significantly lower due to power gating, so thanks to hurry-up-and-wait it consumes less power (I can't remember the 45nm C2Q's load power, but it was not much higher than these 2011 chips).
So 50%+ faster sounds good (both chips occupy the same price bracket), but after equating clock speeds (yes, it would increase load and idle power on the C2Q) the improvement is not massive, though still noticeable.
I will be holding out for Bulldozer (possibly slightly slower, especially in lightly threaded workloads?) or Ivy Bridge, as mine is still fast enough to do what I want; I'd rather spend the money on adding an SSD or a better graphics card.
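For reference, the clock-for-clock estimate above works out like this (a rough back-of-the-envelope sketch using the same figures quoted in the comment; base clocks only, turbo ignored):

```python
# Rough clock-for-clock comparison using the figures quoted above.
q9400_clock = 2.66    # GHz, Core 2 Quad Q9400 base clock
i2500k_clock = 3.3    # GHz, Core i5-2500K base clock (turbo ignored here)

overall_speedup = 1.50                 # ~50% faster on a straight Bench average
clock_ratio = i2500k_clock / q9400_clock
per_clock_gain = overall_speedup / clock_ratio

print(round(clock_ratio, 2))     # ~1.24 -> the "24% faster base clock"
print(round(per_clock_gain, 2))  # ~1.21 -> roughly 20% more work per clock
# This ignores turbo, memory speed and cache differences, so it's only a
# ballpark figure, which is the commenter's point.
```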
I think the issue with the latest launch is the complete and utter lack of competition for what you are asking. Anand showed that the OC'ing headroom for these chips is fantastic... and thanks to the thermals it's even possible (though not recommended by me personally) on the stock low-profile heatsink.
That tells you that they could have significantly increased the performance of this entire line of chips, but why should they when there is no competition in sight for the near future (let's ALL hope AMD really produces a winner in the next release)? Otherwise we're going to be dealing with a plodding approach from Intel for a while. In a couple of months, when the gap shrinks (again, hopefully by a lot), they simply release a "new" batch with slightly higher turbo frequencies (no need to up the base clocks, as that would only hurt power consumption with little/no upside), and bam, they get an essentially "free" release.
It stinks as a consumer, but honestly probably hurts us enthusiasts the least since most of us are going to OC these anyways if purchasing the unlocked chips.
I'm still on a C2D @ 3.85GHz, but I'm mainly a gamer. In a year or so I'll probably jump on the respin of SNB with even better thermals/OC potential.
CPUs need to be stable in Joe Sixpack's unairconditioned trailer in Alabama during August, after the heatsink is crusted in cigarette tar and dust, in one of those horrible computer desks that stuff the tower into a cupboard with just enough open space in the back for wires to get out; not just in a 70F room where all the dust is blown out regularly and the computer has good airflow. Unless something other than temperature is the limiting factor on OC headroom, that means large amounts of OCing can be done easily by those of us who take care of our systems.
Since Joe also wants to get 8 or 10 years out of his computer before replacing it the voltages need to be kept low enough that electromigration doesn't kill the chip after 3 or 4. Again that's something that most of us don't need to worry about much.
The article states that in order for Quick Sync to function, a display must be connected to the integrated graphics. Since P67 does not support the IGP, does that mean Quick Sync will be disabled???
Oops, I just saw Doormat already asked the question on page three, and I can't find a way to edit or delete my post. However, no one has yet given a clear answer.
There isn't any problem with BIOS and 3TB drives. Using GPT you can boot such a drive on either a BIOS- or UEFI-based system. You should only blame Windows and its obsolete MS-DOS partitioning scheme and MS-DOS bootloader.
It's not exactly true that the HD 3000 has less compute performance than the HD 5450; at least it's not that clear cut. It has 12 EUs, and since they are 128 bits wide, this would amount to "48 SPs" if you count the way AMD does. Factor in the clock difference and that's actually more cores (when running at 1300MHz, at least). Though if you only look at MAD throughput, then it is indeed less (as the Intel IGP still can't quite do MAD, though it can do MAC). It's a bit disappointing, though, to see mostly HD 2000 on the desktop, with the exception of a few select parts; it's not really that much faster than the Ironlake IGP (which isn't surprising: after all, Ironlake had twice the EUs, albeit at a lower clock, so the architectural improvements are still quite obvious).
That's not true. Each AMD SP is a pipeline, the 4th one on a 69xx (or 5th on a 58xx) series card is 64 bits wide, not 32. They can't all be combined into a single 128 (160, 196) bit wide FPU.
I have 2X GTX 470 video cards and a 3Ware PCIe X4 RAID controller. None of the P67 motherboards I've seen would handle all that hardware, even with cutting the video cards' I/O in half.
This chip fails in that one very important spot. If they had put a decent PCIe controller in it, with 36 PCIe lanes instead of 16, then I'd be much happier.
That's exactly why this is the mainstream platform, while X58 is the enthusiast one, though. Your requirements aren't exactly mainstream; indeed they are beyond even what most enthusiasts need.
Considering the K versions of the CPUs don't have it.
If I'm a developer and use VMs a lot, how important will VT-d be within the 3-4 years that I would own such a chip?
I know that it basically allows direct access to hardware and I don't want to get stuck without it, if it becomes hugely important (Like how you need VT-x to run 64 bit guests).
My question is whether or not that chart is even right. I'm having a hard time believing that Intel would disable a feature in an "enthusiast" chip. Disabling features in lower-end cheaper chips, sure, but in "enthusiast" chips?! Unless they are afraid of those K series (but not the non-K, apparently?) cannibalizing their Xeon sales?
Relatively unimportant IMHO if you're doing development. If you're running a VM/IO-intensive production workload (which isn't likely with one of these), then more important.
Remember, you need several things for Vt-d to work: 1. CPU support (aka "IOMMU"). 2. Chip-set/PCH support (e.g., Q57 has it, P57 does not). 3. BIOS support (a number of vendor implementations are broken). 4. Hypervisor support.
Any of 1-3 might result in "No" for the K parts. Even though it *should* apply only to the CPU's capabilities, Intel may simply be saying it is not supported. (Hard to tell as the detailed info isn't up on Intel's ark site yet, and it would otherwise require examining the CPU capability registers to determine.)
However, it's likely to be an intentional omission on Intel's part as, e.g., the i7-875K doesn't support Vt-d either. As to why that might be there are several possible reasons, many justifiable IMHO. Specifically, the K parts are targeted at people who are likely to OC, and OC'ing--even a wee bit, especially when using VT-d--may result in instability such as to make the system unusable.
If Vt-d is potentially important to you, then I suggest you back up through steps 4-1 above; all other things equal, 4-2 are likely to be far more important. If you're running VM/IO-intensive workloads where performance and VT-d capability is a priority, then IMHO whether you can OC the part will be 0 or -1 on the list of priorities.
And while VT-d can make direct access to hardware a more effective option (again, assuming hypervisor support), its primary purpose is to make all IO more efficient in a virtualized environment (e.g., IOMMU and interrupt remapping). It's less a matter of "Do I have to have it to get to first base?" than "How much inefficiency am I willing to tolerate?" And again, unless you're running IO-intensive VM workloads in a production environment, the answer is probably "The difference is unlikely to be noticeable for the work [development] I do."
P.S. code65536: I doubt Intel is concerned with OC'd SB parts cannibalizing Xeon sales. (I'd guess the count of potentially lost Xeon sales could be counted on two hands with fingers to spare. :) Stability is far more important than pure speed for anyone I know running VM-intensive loads, and, e.g., the lack of ECC support on these parts is for me a deal killer. YMMV.
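For anyone who wants to check items 1-3 from the list above on an existing machine, here's a minimal sketch (assumes a Linux host; on Linux, VT-x shows up as the "vmx" CPU flag and a BIOS-enabled VT-d shows up as an ACPI DMAR table):

```python
# Minimal sketch: check for VT-x and (BIOS-exposed) VT-d on a Linux host.
# Assumes Linux; run as root if the ACPI table directory isn't world-readable.
import os

def has_vtx():
    """VT-x shows up as the 'vmx' flag in /proc/cpuinfo."""
    with open("/proc/cpuinfo") as f:
        return any("vmx" in line.split() for line in f if line.startswith("flags"))

def has_vtd():
    """VT-d is described by the ACPI DMAR table; it only appears when the
    CPU, chipset and BIOS all expose the IOMMU."""
    return os.path.exists("/sys/firmware/acpi/tables/DMAR")

print("VT-x:", has_vtx())
print("VT-d (DMAR table present):", has_vtd())
```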
For as long as MS dev tools take to install, I'd really like to be able to do all my dev work in a VM backed up to the corporate LAN, to ease the pain of a new laptop and to make a loaner actually useful. Unfortunately, the combination of lousy performance with MS VPC and the inability of VPC to run two virtual monitors of different sizes means I don't have a choice about running Visual Studio in my main OS install.
So just because I want to use VT-d I'll also be limited to 6 EUs and have no possibility to overclock?
Then there's the chipset-issue. Even if I got the enthusiast targeted K-series I would still need to get the: a) ...H67-chipset to be able to use the HD-unit and QS-capability - yet not be able to overclock. b) ...P67-chipset to be able to overclock - yet to lose QS-capability and the point of having 6 extra EUs as the HD-unit can't be used at all.
What the hell Intel, what the hell! This makes me furious.
Exactly my thoughts. No Quick Sync for enthusiasts right now - that's a disappointment. I think it should be stated more clearly in review. Another disappointment - missing 23.976 fps video playback.
Yeah, OK, lack of support for VT-d ostensibly sucks on the K parts, but as previously posted, I think there may be good reasons for it. But lets look at it objectively...
1. Do you have an IO-intensive VM workload that requires VT-d? 2. Is the inefficiency/time incurred by the lack of VT-d support egregious? 3. Does your hypervisor, BIOS and chipset support VT-d?
IF you answered "NO" or "I don't know" to any of those questions, THEN what does it matter? ELSE IF you answered "YES" to all of those questions, THEN IMHO SB isn't the solution you're looking for. END IF. Simple as that.
So because you, who want that feature and the ability to OC, are likely 0.001% of the customers, and are too cheap to spend the $300-400 for a real solution, the vendor should spend 10-100X to support that capability, which will thus *significantly* increase the cost to the other 99.999% of the customers. And that makes sense how, and to whom (other than you and the other 0.0001%)?
IMHO you demand a solution at no extra cost to a potential problem you do not have (or have not articulated); or you demand a solution at no extra cost to a problem you have and for which the market is not yet prepared to offer at a cost you find acceptable (regardless of vendor).
General best practice is not to feed the trolls - but in this case your arguments are so weak I will go ahead anyway.
First off, I like how you, without having any insight into my usage profile, question my need for VT-d and choose to call it "let's look at it objectively".
VT-d is excellent when... a) developing hardware drivers and trying to validate functionality on different platforms. b) fooling around with GPU passthrough, something I did indeed hope to deploy with SB.
So yes, I am in need of VT-d - "Simple as that".
Secondly, _all_ the figures you've presented are pulled out of your ass. I'll be honest, I had a hard time following your argument as much of what you said makes no sense.
So I should spend more money to get an equivalent retail SKU? Well then, sir, please go ahead and show me where I can get a retail SB SKU clocked at >4.4GHz. Not only that, you're in essence implying that people only overclock because they're cheap. In case you've missed it, it's the enthusiasts buying high-end components who enable much of the next-gen research and development.
The rest - especially the part with 10-100X cost implication for vendors - is the biggest pile of manure I've come across on Anandtech. What we're seeing here is a vendor stripping off already existing functionality from a cheaper unit while at the same time asking for a premium price.
If I were to make a car analogy, it'd be the same as if Ferrari sold the 458 in two versions: one with a standard engine, and one with a more powerful engine that lacks headlights. By your reasoning, as my usage profile requires headlights, I'd just have to settle for the tame version. Not only would Ferrari lose the added money they'd get from selling a premium version, they would lose a sale, as I'd rather wait until they offer a version that fits my needs. I sure hope you're not running a business.
There is no other way to put it, Intel fucked up. I'd be jumping on the SB-bandwagon right now if it wasn't for this. Instead, I'll be waiting.
Apologies, didn't mean to come across as a troll or in-your-face idjit (although I admittedly did--lesson learned ). Everyone has different requirements/demands, and I presumed and assumed too much when I should not have, and should have been more measured in my response.
You're entirely correct to call me on the fact that I know little or nothing about your requirements. Mea culpa. That said, I think SB is not for the likes of you (or me). While it is a "mainstream" part, it has a few too many warts.
Does that mean Intel "fucked up"? IMHO no; they made a conscious decision to serve a specific market and not serve others. And no, that "10-100X" is not hot air but is based on costing from several large-scale deployments. Frickin' amazing what a few outliers can do to your cost/budget.
I didn't have time to read all reviews, and furthermore I am not sure I will be able to express what I mean with the right nuances, since English is not my first language.
For the moment I am a bit disappointed. To account for my relative coldness, it is important to explain where I start from:
1) For gaming, I already have more than I need with a quad-core 775 setup and a recent ATI 6xxx-series graphics card.
2) For office work, I already have more than I need with an i3 clarkdale.
Therefore since I am already equipped, I am of course much colder than those who need to buy a new rig just now.
Also, the joy of trying out a new processor must be tempered with several considerations:
1) With Sandy Bridge, you have to add a new mobo to the price of the processor. That makes it much more expensive. And you are not even sure that 1155 will be kept for Ivy Bridge. That is annoying.
2) There are always other valuable things that you can buy for a rig, apart from sheer processor horsepower: more storage, a better monitor...
3) The performance improvement that comes with Sandy Bridge is what I call a normal improvement for a new generation of processors. It is certainly not a quantum leap in the nature of processors.
Now, there are two things I really dislike:
1) If you want to use P67 with a graphics card, you still have that piece of hardware, the IGP, that you actually bought and that you cannot use. That seems extremely inelegant to me compared to the 775 generation of processors. It is not an elegant architecture.
2) If you want to use H67 and the Intel IGP for office work and movies, the improvement compared to Clarkdale is not sufficient to justify buying a new processor and a new mobo. With H67 you will be able to do office work fluently and watch video almost perfectly; with Clarkdale you already could.
The one thing that I like is the improvement in power consumption. Otherwise it all seems a bit awkward to me.
Well, the IGP not being removable is like having on-board sound but also having a dedicated sound card. Not much of a big deal, since you can't buy a motherboard without integrated sound nowadays...
You say you want Intel to provide a $70 GPU. Well, here's a math problem for you: if the GPU on a 2600K is about 22% of the die, and the chip costs $317 retail, then how much are you paying for the GPU? If you guessed $70, you win! Congrats, you just paid $70 for a crap GPU. The question is... why? There is no tock here, only ridiculously high margins for Intel.
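For what it's worth, that back-of-the-envelope math works out as follows (a rough sketch; it uses retail price as a stand-in for cost, which it isn't):

```python
# The math from the comment above: retail price used as a proxy for cost.
die_fraction_gpu = 0.22      # rough share of the 2600K die used by the GPU
retail_price = 317           # USD, i7-2600K launch pricing

implied_gpu_price = die_fraction_gpu * retail_price
print(round(implied_gpu_price))   # ~70 -> the "$70 for the GPU" figure
# Of course die area isn't the same thing as manufacturing cost, and cost
# isn't the same thing as price, so treat this as the commenter's rough figure.
```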
Anand, I'm not the biggest Intel fan (due to their past grey-area dealings), but I don't think the naming is that confusing. As I understand it they will move to the 3x00 series with Ivy Bridge; basically, the higher the second number, the faster the chip.
It would be nice if there was something in the name to easily tell consumers the number of cores and threads, but the majority of consumers just want the fastest chip for their money and don't care how many cores or threads it has.
The ix part tells enthusiasts the number of cores/threads/turbo with the i3 having 2/4/no, the i5 having 4/4/yes and i7 4/8/yes. I find this much simpler than the 2010 chips which had some dual and some quad core i5 chips for example.
I think AMD's GPUs have a sensible naming convention (except for the 68/6900 renaming) without needing an additional i3/i5/i7-style modifier, by using the second number as the tier indicator while maintaining the rule of thumb that "a higher number within a generation means faster". If Intel adopted something similar it would be better.
That said, I wish they'd stick with a naming convention for at least 3 or 4 generations...
",,but until then you either have to use the integrated GPU alone or run a multimonitor setup with one monitor connected to Intel’s GPU in order to use Quick Sync"
So have you tested the transcoding with QS using an H67 chipset based motherboard? The test rig never mentions any H67 motherboard. I am somehow not able to follow how you got the scores for the transcode test. How do you select the codepath if switching graphics on a desktop motherboard is not possible? Please throw some light on this, as I am a bit confused here. You say that QS gives better quality output than the GTX 460, so does that mean I need not invest in a discrete GPU if I am not gaming? Moreover, why should I be forced to use a discrete GPU on a P67 board when, according to your tests, Intel's QS gives the better output?
I need to update the test table. All of the Quick Sync tests were run on Intel's H67 motherboard. Presently if you want to use Quick Sync you'll need to have an H67 motherboard. Hopefully Z68 + switchable graphics will fix this in Q2.
I think this needs to be a front page comment because it is a serious deficiency that all of your reviews fail to properly describe. I read them all and it wasn't until the comments came out that this was brought to light. Seriously SNB is a fantastic chip but this CPU/mobo issue is not insignificant for a lot of people.
I haven't read through all the comments, and sorry if it's been said, but I find it weird that the most "enthusiast" chip, the K, comes with the better IGP when most people buying this chip will end up buying a discrete GPU.
Strangely enough, I also have the same query. What is the point of better integrated graphics when you cannot use them on a P67 mobo? Also, I came across this screenshot
where in the right-hand corner you have a drop-down menu with Intel Quick Sync selected. Will you see a discrete GPU if you expand it? Does that not mean switching between graphics solutions? In the review it's mentioned that switchable graphics has yet to find its way to desktop mobos.
It looks like that drop down is dithered, which means it's only displaying the QS system at the moment, but has a possibility to select multiple options in the future or maybe if you had 2 graphics cards etc.
I also take issue with the statement that the 890GX (really HD 4290) is the current onboard video cream of the crop. Test after test (on other sites) show it to be a bit slower than the HD4250, even though it has higher specs.
I also think Intel is going to have a problem with folks comparing their onboard HD 3000 to AMD's HD 4290; it just sounds older and slower.
No word on Linux video drivers for the new HD2000 and HD3000? Considering what a mess KMS has made of the old i810 drivers, we may be entering an era where accelerated onboard Intel video is no longer supported on Linux.
Actually, 890GX is just a re-badged 780G from 2008 with sideport memory.
And no, the HD 4250 is NOT faster. While some specific implementation of the 890GX without sideport _might_ be slower, it would also be cheaper and not really a "proper" representative. (An 890GX without sideport is like saying an i3 with dual-channel RAM is "faster" in games than an i5 with single-channel RAM...)
Putting the 3000 on the 2600K and 2500K parts ALMOST made sense as an up-sell, but you can't even use the IGP on a P-series board while you're overclocking! If the Z series won't be out for a while, why the hell would I buy an overclocking chip now? So I can spend more money to replace my H-series motherboard with a Z-series one? Nice try.
It's frustrating that you have to pick your sacrifice... you either get the 3000 with the K SKU, or you get VT-d and TXT with the standard SKU. Intel doesn't have an offering with both, which is kind of ridiculous for high-end chips.
Another great review from Anandtech - thanks guys.
It seems odd that the 3000-series graphics engine is only included on parts designed for overclocking, while the boards that support overclocking can't handle integrated graphics. I would have thought the other way around would have made more sense.
In any case the 2600K and 2500K look like great value parts and are just what I was waiting for!
Does anyone know if QuickSync will appear on LGA-2011 chips? I know they aren't going to have the general purpose GPU components, but this is enough of a performance booster that I'd think Intel would want to keep it on their high end consumer platform in some fashion.
I see TXT in the last chart above with no explanation as to what it is or why it is differentiated. They -took out- functionality from the unlocked parts? That seems backwards...
This functionality will likely appear in Sandy Bridge Xeons for socket 1155. Intel *generally* segments the Xeons by core count and clock speed, not by feature set like they do for consumer chips. The other feature Intel is holding back is ECC, which should be standard in socket 1155 Xeons.
It's a hardware security feature. It's best known for the Trusted Platform Module, an on-board cryptographic device used in some corporate computers but not in consumer systems. Probably they just want to keep people from building high-end secure servers with cheap, overclocked K parts instead of the much more profitable Xeons at 2-3x the price.
Only to the extent that, like all Intel Core 2 and later systems, it supports a TPM module to allow locking down servers in the enterprise market, and that the system *could* be used to implement consumer DRM at some hypothetical point in the future; but since consumer systems aren't sold with TPM modules it would have no impact on systems bought without one.
Thanks for adding a Visual Studio compilation benchmark (although you omitted the 920). It seems that neither an SSD nor a better processor can do much for that annoying time waster. It doesn't matter how much money you throw at it.
I'd also like to see SLI/3-way SLI/CrossFire performance, since the faster cards are frequently CPU bottlenecked. How much better does it do relative to an i7 920? And with a good cooler at 5GHz?
Note: you mention 3 video cards in the test setup, but which one is used in the benchmarks?
You're welcome on the VS compile benchmark. I'm going to keep playing with the test to see if I can use it in our SSD reviews going forward :)
I want to do more GPU investigations but they'll have to wait until after CES.
I've also updated the gaming performance page indicating what GPU was used in each game, as well as the settings for each game. Sorry, I just ran out of time last night and had to catch a flight early this morning for CES.
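For readers who want to run something like the compile test on their own machines, here's a minimal sketch of timing a full rebuild (the solution path is a hypothetical placeholder; assumes MSBuild 3.5 or newer is on the PATH, since the /m parallel-build switch needs it):

```python
# Rough sketch of timing a full parallel rebuild with MSBuild.
# "my_solution.sln" is a placeholder; point it at your own solution.
import subprocess
import time

solution = r"C:\src\my_solution.sln"   # hypothetical path

start = time.time()
subprocess.run(["msbuild", solution, "/t:Rebuild", "/m"], check=True)
elapsed = time.time() - start

print(f"Full rebuild took {elapsed:.1f} seconds")
```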
I wonder how this CPU scores with SwiftShader. The CPU part actually has more computing power than the GPU part. All that's lacking to really make it efficient at graphics is support for gather/scatter instructions. We could then have CPUs with more generic cores instead.
I have read that CPU overclocking is only available on P67 motherboards and that H67 motherboards cannot overclock the CPU, so you can either use the onboard graphics OR get overclocking? Is this true?
"K-series SKUs get Intel’s HD Graphics 3000, while the non-K series SKUs are left with the lower HD Graphics 2000 GPU."
What's the point of improving the graphics on the K series if pretty much everyone who gets one will have a P67 motherboard, which cannot even access the GPU?
Let me know if I am totally not reading this right...
Great review as always, but on the HTPC page I would have wished for a comparison of the deinterlacing quality of SD (480i/576i) and HD (1080i) material. ATI's onboard chips don't offer vector adaptive deinterlacing for 1080i material - can Intel do better?
My HD5770 does a pretty fine job, but I want to lose the dedicated video card in my next HTPC.
Thanks a ton Anand for adding a compiler benchmark. I spend the vast majority of my time on builds, and this will help me spec out a few new machines. It's interesting to see results indicating that I should not go anywhere near a low-end Sandy Bridge system, and that a lot of cheap AMD cores might not be a bad idea.
Can't believe the 23.976Hz output bug is still in SB after all this time. Several years ago the G35 had this issue and Intel proclaimed they'd have a fix for it. Subsequently G45 still had the problem, and even the iCores, but SB? C'mon... it's a big issue for HTPC buffs, because there's too much judder from 1) LCD displays and 2) 3:2 cadencing from film-to-video conversion, so 1:1 (or rather 5:5 for most 120Hz sets) was a must for large-screen HTPC setups. Yes, the bitstreaming is good and all, but most folks are content with just 7.1 DD/DTS output. I guess we'll have to wait (again) for IB and cling to my ol' nVidia 9300 for now. :(
Was just looking at the downloadable pictures and comparing them, and noticed a couple of differences. Maybe they are just a driver tweak, but I thought I remember ATI and/or nVidia getting slammed in the past for pulling similar tactics.
The first thing I noticed was when comparing the AA shots in COD. It appears that maybe the Sandy Bridge graphics isn't applying AA to the twigs on the ground. Or is this just an appearance thing where Intel might have a different algorithm that's causing this?
The second is a little more obvious to me. In the Dirt 2 pictures I noticed that Sandy Bridge is blurring and not clearly rendering the distant objects. The sign on the right side is what caught my eye.
One last thing is the DAO pictures. I've seen someone (in the past) post up pictures of the same exact place in the game. The quality looks a lot better than what Anand has shown, and I was wondering if that is correct. I don't have the game so I have no way to confirm.
As always Anand, I appreciate the time you and your staff take to do all of your articles and the quality that results. It's just one of the reasons why I've always found myself coming back here ever since the early years of your website.
Anand, Great review as always, I love the in depth feature analysis that Anandtech provides.
Bios updates have been released for Gigabyte, Asus, and Intel P67 boards that correct an internal PLL overvolt issue that was artificially limiting overclocks. Users in the thread over at HWbot are reporting that processors that were stuck at 4.8 before are now hitting 5.4ghz. http://hwbot.org/forum/showthread.php?t=15952
Would you be able to do a quick update on the overclocking results for your chips with the new BIOS updates?
Please make clear how you tested Quick Sync in your review.
I saw a few comments from people who are confused about your review. I guess you tested Quick Sync on an H67 mainboard, but I did not notice that you mentioned that in the text.
To me it looks like Intel is screwing the users who buy these first-generation Sandy Bridge chipsets.
In the Quick Sync test I missed a comparison with x264, which is currently the fastest and highest-quality encoder for H.264, on a fast CPU. For example, using the presets superfast and veryslow (one for speed with reasonable quality, the other for quality with reasonable speed). Also, with too high a bitrate, even the crappiest encoder will look good...
I also wanted to see how low you can undervolt an i5-2400 when it has hit the overclocking cap, and what the power consumption is then. The same for the other locked CPUs would be cool too. Also, what is the power consumption of the Sandy Bridge CPUs when running the Quick Sync hardware encoder?
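For anyone who wants to try the comparison suggested above themselves, here is a minimal sketch, assuming the x264 command-line encoder is installed and a test clip named input.y4m exists (both are placeholders, not files from the review):

import subprocess
import time

def encode(preset, out_name):
    # Encode the same source with a given x264 preset and time the run.
    start = time.perf_counter()
    subprocess.run(["x264", "--preset", preset, "--crf", "20",
                    "-o", out_name, "input.y4m"], check=True)
    return time.perf_counter() - start

for preset in ("superfast", "veryslow"):
    elapsed = encode(preset, "out_" + preset + ".264")
    print(preset, "took", round(elapsed, 1), "s")

Comparing the two output files against a Quick Sync encode at the same bitrate would show whether the hardware encoder's speed costs anything in quality.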
Wow, what a SLAP in AMD's face! The idea they nursed for a gazillion years and were set to finally release somewhere this week is brought to you, dear customer, first to the market, with a sudden change in NDA deadline to please you sooner with a hyper-performer from Intel. Who cares that NDAs play an important role in all planning activities, PR, logistics and whatever follows - what matters is that they are first to put the GPU on-die, and this is what the average Joe will now know, with a bit of PR, perhaps. Snatch another design win. Hey, AMD, remember that pocket money the court ordered us to pay you? SLAP! And the licence? SLAP! Nicely planned and executed whilst everyone was so distracted with the DAAMIT versus nVidia battles and, ironically, a lack of leaks from the red camp. I just hope Bulldozer will kick some asses, even though I doubt it's really going to happen...
If AMD hadn't put a steel-toed boot into their own nuts by blowing the original 09Q3 release date for Fusion, I'd have more sympathy for them. Intel won because they made their launch date while the competition blew theirs by at least half a year.
With the unlocked multipliers, the only substantive difference between the 2500K and the 2600K is hyperthreading. Looking at the benchmarks here, it appears that at equivalent clockspeeds the 2600K might actually perform worse on average than the 2500K, especially if gaming is a high priority.
A short article running both the 2500K and the 2600K at equal speeds (say "stock" @3.4GHz and overclocked @4.4GHz) might be very interesting, especially as a possible point of comparison for AMD's SMT approach with Bulldozer.
Right now it looks like if you're not careful you could end up paying ~$100 more for a 2600K instead of a 2500K and end up with worse performance.
The 2500K is faster in Crysis, Dragon Age, World of Warcraft and Starcraft II, despite being clocked slower than a 2600K. If it weren't for that clockspeed deficiency, it looks like it might also be faster in Left 4 Dead, Far Cry 2, and Dawn of War II. Just about the only games that look like a "win" for HT are Civ5 and Fallout 3.
The 2500K also wins the x264 HD 3.03 1st Pass benchmark, and comes pretty close to the 2600K in a few others, again despite a clockspeed deficiency.
Intel's new "no overclocking unless you get a K" policy looks like it might be a double-edged sword. Ignoring the IGP stuff, the only difference between a 2500K and a 2600K is HT; if you're spending extra for a K you're going to be overclocking, making the 2500K's base clockspeed deficiency irrelevant. That means HT's deficiencies won't be able to hide behind lower clockspeeds and locked multipliers (as with the i5-7xx and i7-8xx.)
In the past HT was a no-brainer; it might have hurt performance in some cases but it also came with higher clocks that compensated for HT's shortcomings. Now that Intel has cut enthusiasts down to two choices, HT isn't as clear cut, especially if those enthusiasts are gamers - and most of them are.
I don't ever watch soap operas (why somebody would enjoy such crap is beyond me), but I game a lot. All my free time is spent gaming.
High frame rates remind me of good video cards (or games that are not cutting edge), and so-called 24p film reminds me of the Michael Bay movies where stuff happens fast but you can't see anything, like in Transformers.
Please don't assume that your readers know or enjoy soap operas. Standard TV is for old people, and movies look amazing at 120Hz when almost all you do is gaming.
Just want to say thanks for such a great opening article on desktop SNB. The VS2008 benchmark was also a welcome addition!
SNB launch and CES together must mean a very busy time for you, but it would be great to get some clarification/more in depth articles on a couple of areas.
1. To clarify, if the LGA-2011 CPUs won't have an on-chip GPU, does this mean they will forgo arguably the best new feature, Quick Sync?
2. Would be great to have some more info on the Overclocking of both the CPU and GPU, such as the process, how far you got on stock voltage, the effect on Quick Sync and some OC'd CPU benchmarks.
3. A look at the PQ of the on-chip GPU when decoding video compared to discrete low-end rivals from nVidia and AMD, as it is likely that the main market for this will be those wanting to decode video as opposed to play games. If you're feeling generous, maybe a run through the HQV benchmark? :P
Thanks for reading, and congrats again for having the best launch-day content on the web.
In the Quantum of Solace comparison, x86 and Radeon screens are the same.
I dug up a ~15Mbit 1080p clip with some action and transcoded it to 4Mbit 720p using x264. So entirely software-based. My i7 920 does 140fps, which isn't too far away from Quick Sync. I'd love to see some quality comparisons between x264 on fastest settings and QS.
Also, in the Dark Knight comparison, it looks like the Radeon used the wrong levels (so not the encoder's fault). You should recheck the settings used both in the encoder and when you took the screenshot.
One thing I miss is clock-for-clock benchmarks to highlight the effect of the architectural changes. Though perhaps not within the scope of this review, it would nonetheless be interesting to see how SNB fares against Bloomfield and Lynnfield at similar clock speeds.
Good performance for a bargain - that was AMD's terrain.
Now Sandy Bridge at ~$200 targets AMD's clientele. A Core i5-2500K for $216 - that's a bargain (a $40-value GPU is even included). And the overclocking ability!
If I understood it correctly, the Intel Core i7 2600K @ 4.4GHz drawing 111W under load is quite efficient. At 3.4GHz it draws 86W, and ~30% more clock (4.4GHz) = ~30% more performance... that would mean power consumption scales roughly 1:1 with performance.
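Checking that claim with the numbers quoted above (86W at the stock 3.4GHz, 111W at the 4.4GHz overclock), the two ratios do come out nearly identical; a quick sketch:

stock_clock, oc_clock = 3.4, 4.4
stock_power, oc_power = 86.0, 111.0
clock_ratio = oc_clock / stock_clock   # ~1.29, i.e. ~29% more frequency
power_ratio = oc_power / stock_power   # ~1.29, i.e. ~29% more power
print("Clock increase: {:.0%}".format(clock_ratio - 1))
print("Power increase: {:.0%}".format(power_ratio - 1))
# Assuming performance tracks clock speed, power and performance scale ~1:1 here.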
Many people need more performance per core, but not more cores. At 111W under load this would be the product they wanted, e.g. people who make music on PCs - not playing MP3s, but mixing and producing music.
But for more cores the X6 Thuban is the better choice on a budget. For building a server on a budget, for example, Intel has no product to rival it. Or developers - they may also want as many cores as they can get to test the multithreading performance of their apps. And AMD also scores with its more conservative approach when it comes to upgrading motherboards: people don't like to buy a new motherboard every time they upgrade the CPU.
If I want to spend every year a big lot of money on something I'll sell on eBay at half price a few months later and if I'd like crappy quality images on my monitor, then I would buy Sandy Bridge... but sorry, I'm no no brainer for Intel.
It really impressed me, as I do a lot of video transcoding and it's extremely slow on my triple-core Phenom II X3 720, even though I overclocked it to 4GHz. But there is one question: the acceleration needs the EUs in the GPU, and the GPU is disabled on the P67 chipset. Does that mean that if I pair my SNB with a P67 motherboard, I won't be able to use the transcoding accelerator?
Not talking about SNB-E this time, I know it will be the performance king again. But I wonder if Bulldozer can at least gain some performance advantage over SNB, because it makes no sense that 8 cores running at a stunning 4.0GHz wouldn't overrun 4 cores below 3.5GHz, no matter what architectural differences there are between the two chips. SNB is only the new generation of mid-range parts; it will be out-performed by high-end Bulldozers. AMD will hold the low end, just as it does now; as long as Bulldozer regains some of the share that Phenom lost in the mainstream and performance markets, things will be much better for it. The enthusiast market is not AMD's cup of tea, just as in GPUs: let nVidia take the performance crown and strike from lower performance niches.
I don't think we'll know until AMD releases Bulldozer and Intel counters (if they do). Seems the SNB chips can run significantly faster than they do right now, so if necessary Intel could release new models (or a firmware update) that allows turbo modes up past 4GHz.
They are already selling in Malaysia, but if you don't live in Malaysia then you are SOL :) ... I see rumors around that the NDA was supposed to expire on the 5th with retail availability on the 9th... I was thinking about making the leap, but think I will hold off for more info on BD and Socket 2011 SB.
Intel has essentially shot itself in the foot this time. Between the letter restrictions, the new chipset and the crazy chipset differentiation between P and H, it's crazy. Not to mention they lack USB 3.0, the ability to have an overclocking mobo with integrated graphics, and the stupid turbo boost restrictions.
I'll go even further and say that the i3 is pure crap, and while it's better than the old Core i3, they are essentially leaving the biggest market, the one up to $200, wide open to AMD.
Those who purchase CPUs at $200 and higher are in luck with the 2500 and 2600 variants, but for the majority of us who purchase CPUs below $200 it's crap.
Essentially, if you want gaming performance you buy the i3 2100, but if you want overall better performance go for a Phenom II.
Hopefully AMD comes up with some great CPUs below the $200 range with 4 cores, unlimited turbo boost, and unlocked multipliers.
It seems that these benchmarks test the CPUs (cores) and GPU parts of SandyBridge separately. I'd like to know more about the effects of the CPU and GPU (usually data intensive) sharing the L3 cache.
One advantage of a system with a discrete GPU is that the GPU and CPUs can happily work simultaneously without greatly affecting one another. This is no longer the case with SandyBridge.
A test I would like to see is a graphics-intensive application running while another application performs some multi-threaded, ATLAS-tuned LAPACK computations. Does either the GPU or the CPUs swamp the L3 cache? Are there any instances of starvation? What happens to the performance of each application? What happens to frame rates? What happens to execution times?
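A minimal sketch of the CPU half of such a test, assuming NumPy's LAPACK-backed solver stands in for an ATLAS-tuned build - run it once with the GPU idle and once while a graphics benchmark loops, then compare the per-solve times:

import time
import numpy as np

def lapack_load(n=2048, iterations=20):
    # Repeatedly solve a dense linear system (LU factorization via LAPACK).
    rng = np.random.default_rng(0)
    a = rng.random((n, n))
    b = rng.random(n)
    times = []
    for _ in range(iterations):
        start = time.perf_counter()
        np.linalg.solve(a, b)
        times.append(time.perf_counter() - start)
    return times

samples = lapack_load()
print("mean %.3f s, worst %.3f s" % (np.mean(samples), np.max(samples)))

A widening gap between the mean and worst-case times under graphics load would be one hint of L3 contention.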
To me it seems that marketing, rather than engineering, is now defining the processors at Intel. That is always the case to a degree, but I think now it is more evident than ever.
Essentially, if you want the features that the new architecture brings, you have to shell out for the higher-end models. My ideal processor would be an i5-2520M for the desktop: reasonable clocks, good turbo speeds (which could be higher on the desktop since the TDP is not as limited), HT, good graphics, etc. The combination of 2 cores and HT provides a good balance between power consumption and performance for most users.
Its desktop equivalent price-wise is the 2500, which has no HT and a much higher TDP because of the four cores. Alternatively, maybe the 2500S, 2400S or 2390T could be considered, if they aren't too overpriced.
Intel has introduced too much differentiation in this generation, and in an Apple-like fashion, i.e. they force you to pay more for stuff you don't need just to get an extra feature (e.g. VT support, good graphics, etc.) that practically costs nothing since the silicon is already there. Bottom line: if you want the full functionality of the silicon that you get, you have to pay for the higher-end models. Moreover, having features for specific functions (AES, transcoding, etc.) and good graphics makes more sense in lower-end models where CPU power is limited.
This is becoming like the software market, where you have to pay extra for licenses for specific functionalities. I wouldn't be surprised if Intel starts selling "upgrade licenses" sometime in the future that will simply unlock features.
I strongly prefer AMD's approach, where all the features are available on all models.
I am also a bit annoyed that there is very little discussion of this problem in the review. I agree that technologically Sandy Bridge is impressive, but the artificial limiting of functionality is anti-technological.
Agreed, but apart from the K-series/higher-IGP/motherboard mess-up (which I think should be cleared up shortly), all the rest of it is just smart product marketing...
It irritates readers of AnandTech, but for most people who buy off-the-shelf it's all good, with integrators patching up any shortcomings in the core/chipset.
The focus does seem to be mobile, low power and video transcode - almost a recipe for a MacBook!!
He's talking about the general online price across a variety of sites and OEMs (Sapphire, Asus, Palit, etc) not a one-off MIR-inclusive price that can be found only by the obsessive.
Man, this is awesome, my wallet is trying to hide, but it won't do it any good...
I took the jump to AMD when Phenom II arrived, a friend of mine bought my C2D E7400 system, and even then I regretted it by the time I was done building. There's no two ways about it: Intel systems - if they aren't the absolute low-end - run so much smoother. Which seems to be the case again, even at a reasonable price.
There's one thing about the review I don't really understand: "...Another Anandtech editor put it this way: You get the same performance at a lower price..."
Has he read the review?
As far as I can see, you get pretty much more performance at a lower price.
Hey guys, two things I'm missing from the SB reviews around the web:
1) How well does the new IMC scale with memory clocks? I guess it's a matter of time until someone performs an in-depth analysis on that matter, but I'm particularly interested in that...
2) Adobe's Flash decoding can take advantage of Intel IGPs acceleration through Clear Video technology. Will it work in the new HD2000/3000 series as well?
Same reason it takes a while for AT to provide comparisons of the latest games - it takes an eternity to run a benchmark on all CPUs going back a couple generations.
I think this might be an error in your chart -- the last one on page 3 shows a Y for the i3-2100 in the AES-NI column. I would love to have this feature on an i3 CPU, but the following paragraph states "Intel also uses AES-NI as a reason to force users away from the i3 and towards the i5" which leads me to believe that i3 doesn't have said feature.
Please let me know if I'm wrong so I can get my pre-order in!!!
The six-core 980X still owns them in all tests where all cores are used.
I don't know, 22k in Cinebench is really not a reason to buy the new i7; I reach 24k on air with an i7 860 and my i5 runs at 20k on air.
Short-term performance is really good, but I don't care whether I wait 7 seconds or 8 for a package to unpack, and for long-term work like rendering there isn't a reason to upgrade either.
I recommend you get the older 1156 off eBay and save a ton of money.
I have the i5 on hackintosh, I am wondering if 1155 will be hackintoshable
I have to disagree with Anand; I feel the QuickSync image is the best of the four in all cases. Yes, there is some edge-softening going on, so you lose some of the finer detail that ATI and SNB give you, but when viewing on a small screen such as one on an iPhone/iPod, I'd rather have the smoothed-out shapes than pixel-perfect detail.
I started my computing days with Intel but I'm so put off by the way Intel is marketing their new toys. Get this but you can't have that...buy that, but your purchase must include other things. And even after I throw my wallet to Intel, I still would not have a OC'd Sandy Bridge with useful IGP and Quicksync. But wait, throw more money on a Z68 a little later. Oh...and there's a shiny new LGA2011 in the works. Anyone worried that they started naming sockets after the year it comes out? Yay for spending!
I'm a little confused why Quick Sync needs to have a monitor connected to the MB to work. I'm trying to understand why having a monitor connected is so important for video transcoding, vs. playback etc.
Is this a software limitation? Either in the UEFI (BIOS) or drivers? Or something more systemic in the hardware.
What happens on a P67 motherboard? Does the P67 board disable the on-die GPU, effectively disabling Quick Sync support? This seems a very unfortunate oversight for such a promising feature. Will a future driver/firmware update resolve this limitation?
Intel HD 3000 - ~115 Million transistors
AMD Radeon HD 3450 - 181 Million transistors - 8 SIMDs
AMD Radeon HD 4550 - 242 Million transistors - 16 SIMDs
AMD Radeon HD 5450 - 292 Million transistors - 16 SIMDs
AMD Xenos (Xbox 360 GPU) - 232 Million transistors + 105 Million (eDRAM daughter die) = 337 Million transistors - 48 SIMDs
Xenos, I think, is in the end still a good two to two and a half times more powerful than the Radeon 5450. Xenos does not have to be OpenCL, DirectCompute, DX11 or even fully DX10 compliant (a 50 million transistor jump from the 4550 going from DX10.1 to 11), nor does it contain hardware video decode or integrated HDMI output with a 5.1 audio controller (even the old Radeon 3200 clocks in at 150 million+ transistors). What I would like some clarification on is whether the transistor count for Xenos includes northbridge functions.
Clearly PC GPUs have insane transistor counts in order to be highly compatible. It is commendable how well the Intel HD 3000 does with only 115 million, but it's important to note that older products like the X1900 had 384 million transistors back when DX9.0c was the aim, and in pure throughput it should match or closely trail Xenos at 500MHz. Going from the 3450 to the 4550, we go up another 60 million transistors for 8 more SIMDs of a similar DX10.1-compatible nature, as well as the probable increases for hardware video decode, etc. So basically, to come into a similar order as Xenos in terms of SIMD count (of which Xenos has 48 of its own type, I must emphasize), we would need 60 million transistors per 8 SIMDs, which would put us at about 360 million transistors for a 48-SIMD (240 SP) AMD part that is DX10.1 compatible and not equipped with anything unrelated to graphics processing.
Yes, it's a most basic comparison (and probably fundamentally wrong in some regards), but I think it sheds some light on the idea that the Radeon HD 5450 really still pales in comparison to the Xenos. We have much better GPUs like Redwood that are twice as powerful with their higher clock speeds + 400 SPs (627 Million transistors total) and consume less energy than Xenos ever did. Of course, this isn't taking memory bandwidth or framebuffer size into account, nor the added benefits of console optimization.
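Reproducing the back-of-the-envelope math above in a few lines (taking the comment's transistor counts as given, in millions):

hd3450_transistors, hd3450_simds = 181, 8
hd4550_transistors, hd4550_simds = 242, 16
per_8_simds = hd4550_transistors - hd3450_transistors   # ~61M for 8 extra SIMDs
xenos_simds = 48
estimate = (xenos_simds // 8) * per_8_simds              # ~366M, close to the ~360M figure above
print(per_8_simds, "M transistors per 8 SIMDs")
print("Estimated 48-SIMD DX10.1-class part:", estimate, "M transistors")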
I'm still rocking my Q6600 + Gigabyte X38 DS5 board, upgraded to a GTX580 and been waiting for Sandy, definitely looking forward to this once the dust settles..
I'm still on E6600 + P965 board. Honestly, I would upgrade my video card (HD3850) before doing a complete system upgrade, even with Sandy Bridge being so much faster than my old Conroe. I have yet to run a game that wasn't playable at full detail. Maybe my standards are just lower than others.
Though SB will be great for some applications, there are still rough edges in terms of the overall platform. I think it will be best to wait for SNB-E or at least the Z68. SNB-E seems to be the best future-proofing bet.
I also wonder how a part rated for 95W TDP was drawing 111W in the 4.4GHz OC (the Power Consumption Page). SB's power budget controller must be really smart to allow the higher performance without throttling down, assuming your cooling system can manage the thermals.
Anand, Thanks for the great schooling and deep test results -- something surely representing an enormous amount of time to write, produce, and massage within Intel's bumped-forward official announcement date.
Here's a crazy work-around question:
Can I have my Quick Synch cake and eat my Single-monitor-with-Discrete-Graphics-card too if I, say:
1). set my discrete card output to mirror Sandy Bridge's IGP display output;
2). and, (should something exist), add some kind of signal loopback adapter to the IGP port to spoof the presence of a monitor? A null modem, of sorts?
-- I have absolutely no mobo/video signaling background, so my idea may be laugh in my face funny to anybody who does but I figure it's worth a post, if only for your entertainment. :)
It makes me SO angry when Intel does stupid shit like disabling HT on most of their CPUs even though the damn CPU already has it on the die and they already paid for it. It literally wouldn't cost them ANYTHING to turn HT on for those CPUs, yet the greedy bastards don't do it.
The HD Graphics 3000 performance is pretty impressive, but won't be utilized by most. Most who utilize Intel desktop graphics will be using the HD Graphics 2000, which is okay, but I ran back to the AMD Brazos performance review to get some comparisons.
In Modern Warfare 2, at 1024 x 768, the new Intel HD Graphics 2000 in the Core i3 2100 barely bests the E-350. Hmm--that's when it's coupled with a full-powered, hyper-threaded desktop compute core that would run circles around the compute side of the Brazos E-350, an 18w, ultra-thin chip.
This either makes Intel's graphics less impressive, or AMD's more impressive. For me, I'm more impressed with the graphics power in the 18w Brazos chip, and I'm very excited by what mainstream Llano desktop chips (65w - 95w) will bring, graphics-wise. Should be the perfect HTPC solution, all on the CPU (ahem, APU, I mean).
I'm very impressed with Intel's video transcoding, however. Makes CUDA seem...less impressive, like a bunch of whoop-la. Scary what Intel can do when it decides that it cares about doing it.
Very disappointed in the lack of vt-d and txt on k-variants. They are after all the high end products. I also find the fact that only the k-variants having the faster GPU very peculiar, as those are the CPUs most likely to be paired with a discrete GPU.
Agreed. I find the exclusion of VT-d particularly irritating: many of the overclockers and enthusiasts to whom the K chips are marketed also use virtualization. Though I don't expect many enthusiasts, if any, to miss TXT (it's more for locked down corporate systems, media appliances, game consoles, etc.).
With the Z68 chipset coming in the indeterminate near future, the faster GPU on K chips would have made sense if the K chips came with every other feature enabled (i.e. if they were the "do everything" chips).
Also, I'd like to have the Sandy Bridge video encode/decode features separate from the GPU functionality - i.e. I'd like to choose between Intel and Nvidia/AMD video decode/encode when using a discrete GPU.
"perhaps we should return to just labeling these things with their clock speeds and core counts? After all, it’s what Apple does—and that’s a company that still refuses to put more than one button on its mice. Maybe it’s worth a try."
I hate to sound like the resident Mac fanboy (I'm platform agnostic) but I want to point out:
1. Apple sells by trim and display, they don't really make a big deal of the CPU (probably because they stick to low-end and midrange CPUs)
2. They have been shipping multi-button mice for nearly six years now. Come on!
- GTX 460 image quality is definitely the worst
- 6870 image quality is next
- QuickSync/SNB image quality is the best (marginally better than the 6870); I did notice some color loss in the flowers behind the umbrella when I zoomed in on the QuickSync picture, so I'd have to give SNB the title in terms of quality. QuickSync gets the title in terms of performance.
My last Intel cpu was a prescott 2.4ghz P4 OC'd to over 3ghz... back in 2004? My last 3 main system builds all AMD.... I was thinking about going to an X6 in the near future, now I guess maybe not. My price point is pretty much $200 for the cpu + motherboard so maybe I'll have to wait a couple months.
Suddenly my 2 year old Phenom II seems very, very slow...
Same here. Very disappointed, as I would have purchased a better heatsink if I'd known. I guess I'll just do the install with the standard crap HS and hold off on overclocking until I get a better one.
Many of us are using older equipment. And for those of us with limited funds, it would have been nice if you had added the Intel Q9650 and run all the game benchmarks at 3.4GHz [the speed of the 2600K] - except for the X4 975BE, which could be left at its default 3.6GHz.
I have a QX9650 that I purchased from eBay and it does 4GHz+ with ease, in a Gigabyte P35-DS3R motherboard, even with my ancient cooler [Thermalright XP-90] that I pulled from a socket 478 motherboard [$5 adapter].
Note: I lapped the XP-90 with a slight convex shape to better use with un-lapped CPUs.
In any event, a "quick and dirty" or simple overclock would have yielded at least some usable information. To save time, no need to try to get the maximum speed from all components.
As long as the CPUs were already overclocked, you could run all benchmarks at those speeds, not just games. Many of us overclock to get more for our money.
You included the ancient Q6600 at its slow default speed - in some of the benchmarks. Why didn't you include it in all benchmarks?
Your normal benchmark page does not include a full, or nearly full, list of games and CPUs, so, comparisons are difficult to find, example here anandtech.com/bench/CPU/62
Where does this leave those of us with older equipment that is still chugging along?
I just bought an upgrade "kit" going from a Core 2 2.8 quad to an i7 950 :( but I got 6 SATA ports. I noticed the new boards have 4+2 - will the more advanced boards have more?
I was wondering, does the integrated GPU provide any benefit if you're using it with a dedicated graphics card anyway (GTX470) or would it just be idle?
Just thought I would comment with my experience. I am unable to get Blu-ray playback, or even CableCARD TV playback, with the Intel integrated graphics on my new i5-2500K w/ Asus motherboard. Why, you ask? The same problem Intel has always had: it doesn't handle the EDIDs correctly when there is a receiver in the path between it and the display.
To be fair, I have an older Westinghouse Monitor, and an Onkyo TX-SR606. But the fact that all I had to do was reinstall my HD5450 (which I wanted to get rid of when I did the update to SandyBridge) and all my problems were gone kind of points to the fact that Intel still hasn't gotten it right when it comes to EDID's, HDCP handshakes, etc.
So sad too, because otherwise I love the upgraded platform for my HTPC. Just wish I didn't have to add-in the discrete graphics.
As I understand from the article, you used just this one piece of software for all of these tests. And I understand why. But is it enough to conclude that CUDA causes bad or low picture quality?
I am very interested in, and do research on, H.264 and x264 encoding and decoding performance, especially on the GPU. I have tested Xilisoft Video Converter 6, which supports CUDA, and I didn't have problems with low picture quality when using CUDA. I did these tests on an nVidia 8600 GT for the TV station that I work for. I was researching a solution to compress video for sending over the internet with little or no quality loss.
So, could it be that ArcSoft Media Converter cooperates badly with CUDA?
And I must note here how well the AMD Phenom II X6 performs, comparable to the nVidia GTX 460. This means that one could buy a motherboard with integrated graphics and an AMD Phenom II X6 and have very good encoding performance in terms of speed and quality. Intel is the winner here, no doubt, but jumping from socket to socket and changing the whole platform troubles me.
I'm curious why Bad Company 2 gets left out of Anand's CPU benchmarks. It seems to be a CPU-dependent game. When I play it, all four cores are nearly maxed out while my GPU barely reaches 60% usage, where most other games seem to be the opposite.
Nice article. It cleared up much about the new chips I had questions on.
A suggestion. I have worked in the chip making business. Perhaps you could run an article on how bin-splits and features are affected by yields and defects. Many here seem to believe that all features work on all chips (but the company chooses to disable them) when that is not true. Some features, such as virtualization, are excluded from SKU's for a business reason. These are indeed disabled by the manufacturer inside certain chips (they usually use chips where that feature is defective anyway, but can disable other chips if the market is large enough to sell more). Other features, such as less cache or lower speeds are missing from some SKU's because those chips have a defect which causes that feature to not work or not to run as fast in those chips. Rather than throwing those chips away, companies can sell them at a cheaper price. i.e. Celeron -> 1/2 the cache in the chip doesn't work right so it's disabled.
It works both ways though. Some of the low end chips must come from better chips that have been down-binned, otherwise there wouldn't be enough low-end chips to go around.
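To illustrate the binning point above, here is a toy yield calculation using the classic Poisson defect model Y = exp(-A*D0); the die area and defect density below are made-up assumptions for illustration, not Intel figures:

import math
die_area_cm2 = 2.16      # roughly a 216 mm^2 quad-core die (assumed)
defect_density = 0.3     # defects per cm^2 (assumed)
fully_good = math.exp(-die_area_cm2 * defect_density)
print("Dies with zero defects: {:.0%}".format(fully_good))
# The rest aren't all scrap: a die with a dead cache block or a core that
# won't hit top clocks can still ship as a lower SKU with that part fused off.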
It is not expected to compete with Core i7 processors or take their place. Sandy Bridge uses fixed-function processing to produce better graphics at the same power consumption as the Core i series. Visit http://www.techreign.com/2010/12/intels-sandy-brid...
The problem is we need to choose between using the integrated GPU, which means an H67 board, or doing some overclocking with a P67. I wonder why we have to make this choice... it just means that if we don't do gaming and the 3000 is fine, we have to go for the H67 and therefore can't OC the processor.
And what about those who want to OC and don't need a dedicated graphics board??? I understand Intel wanting to get money out of early adopters, but don't count on me.
With an even-improved i7 990X Extreme now out, with a base speed of 3.46GHz, which would be the better choice, considering I am going to be using a dedicated graphics card (Nvidia Quadro 4000)?
Also, what do you see on the horizon for triple-channel motherboards with more than 2 SATA III 6Gb/s connectors?
The benchmarks against the AMD processors are useless. All they compare is core-to-core performance (4 cores to 4 cores). What you should be comparing is comparably priced processors/systems. For example, the 6-core AMD 1090T costs a hundred dollars less than the i7 2600 at newegg.com, yet your benchmarks fail to provide any such comparison. It's quite possible that for some applications the 6-core AMD may perform better than the more expensive 4-core i7 processors in your benchmarks.
Anand says, "frequency domain (how often pixels of a certain color appear)," but this definition of the frequency domain is incorrect. The frequency domain in the case of video is a 2-dimensional discrete cosine transform of the frame. It is not a count of pixels like a histogram (binning) or anything of the sort.
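A small illustration of the distinction, assuming SciPy is available - the frequency-domain representation of a block is a 2-D DCT, not a color histogram:

import numpy as np
from scipy.fft import dctn

block = np.arange(64, dtype=float).reshape(8, 8)  # stand-in 8x8 luma block
coeffs = dctn(block, norm="ortho")                # 2-D type-II DCT
ac = coeffs.copy()
ac[0, 0] = 0.0
print("DC term (average brightness):", round(coeffs[0, 0], 1))
print("Largest AC magnitude:", round(np.abs(ac).max(), 1))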
Does Quick Sync handle upscaling or only transcoding? Have you looked at the new WinFast HPVC1111 SpursEngine x4 and compared it to Quick Sync, CUDA and Stream for encoding and upscaling?
I have Intel HD 3000 - Sandy Bridge in my system and I was planning to get the game called "Oil Rush", but then I found weird responses about the game here: http://www.futurehardware.com/pc-gaming/288.htm , so I just wanted to know if anyone has tested the Intel HD 3000 - Sandy Bridge with Oil Rush. Any help with this would be highly appreciated.
I've got Intel HD Graphics 3000, and according to this forum/review it has a problem running Dawn of War 2 on low graphics... I have it set to max graphics and it runs a dream... same with a lot of games I play on it...
Guys, is it safe to overclock the Intel HD 3000 GPU? I own a 2500K CPU. I can overclock the GPU to 1450MHz and it looks stable. But I don't know how to read the temperature from the GPU unit, so I'm afraid I could burn my GPU/CPU.
Hi. First of all, sorry for my English. I have a question. I have seen the Dell laptops; they are identical, but one has the Intel Core i3-2350M 2.3GHz, the other the Intel Core i5-2450M 2.5GHz, and the third the Intel Core i7-2670M 2.4GHz.
The prices are $600, $670 and $800. I am doing some live multi-channel audio production and .NET programming. So which one should I go for? Thanks.
I recently got this processor. It is ultimate for gaming. However, in my Windows CPU meter gadget I can see only 2 cores functioning. Stock comes with an unlocked multiplier afaik, but here in my system it shows only 2 cores. Is there any way to activate all the cores for better performance?
Besides VS2008 compiler performance, I would like to see a growing database of Java compiler performance as well, either under NetBeans or Eclipse. Thank you.
I was wondering how Intel Quick Sync might impact PC-based security systems/CCTV like those from Avermedia or Geovision. For the longest time Aver advocated a dedicated graphics card, but now says the HD 2000/3000 CPU is OK.
I read about limited software support in the article and guess that Aver does not yet take advantage of Quick Sync. However, I had to RMA an NV6480 just for compatibility with a Sandy Bridge CPU (even using a dedicated GPU - an ATI 5000 series for multiple monitors) and wondered why.
Anyone know why Sandy Bridge might cause compatibility issues with DVR/NVR Cards and what advantages Quick Sync could bring to the IP Security Camera market if top companies like Geovision or Avermedia developed software for it?
Heh. I can run DiRT 4 at 30FPS+, ABSOLUTELY playable, even on somewhat higher settings, with Intel HD Graphics (Bay Trail architecture). Even GTA 5 plays somewhat reasonably when you disable shadows and run at 640x480 :D
RMSe17 - Monday, January 3, 2011 - link
Time for an upgrade :)
marc1000 - Monday, January 3, 2011 - link
I decided to skip the first Core i lineup and stick with an old Core 2 Duo for some more time... now it seems the wait was worth it! I just hope the prices outside the US/Europe will be reasonable.
thanks Anand,
vol7ron - Monday, January 3, 2011 - link
by "i-Series" it should have said "1st gen. i-Series"
medi01 - Monday, January 3, 2011 - link
Did you add the mobo price into the equation? I don't get all the excitement, really. If anything, Intel's anti-overclocking moves
JumpingJack - Saturday, February 7, 2015 - link
Didn't turn out well did it?
BSMonitor - Tuesday, January 4, 2011 - link
You'd spend $80 on a 6-core MB?? LOL. If you buy a 6-core Phenom, you'll likely be in the $140-180 range for a decent MB.
Funny how the cheapskates rationalize their cheapness.
Oxford Guy - Monday, January 3, 2011 - link
Yeah, I'm stoked about the new low-level DRM. This is sure to run it fast.
pshen7 - Tuesday, February 22, 2011 - link
The charts and the numbers say it all. This is definitely worth an upgrade for me!
Peter Shen, founder Koowie.com
Shifu_V - Saturday, April 16, 2011 - link
Hi everyone, i dicided to build a PC but made an 1 error getting the i7 2600 if anyone is interested in buying one please let me, it's brand new sealed in it original contents.and i dont mind trading it in for a i7 2600k.
and i will match the price maybe even better
My email:vinay_chauhan20042000@yahoo.co.uk
evilspoons - Monday, January 3, 2011 - link
One of the local computer stores had Sandy Bridge parts up for sale last week, but they're all gone now save for a few Asus P8P67 standard, pro, and deluxe boards. I wasn't able to see what kind of money they were asking.
This review has convinced me that once the 2600K shows up again it's all I'll need. I was going to wait for socket 2011 but damn, the 2600 is already more than twice as fast in everything as my poor ol' Q6600.
vol7ron - Monday, January 3, 2011 - link
I'm also curious if there will be a hybrid P/H type mobo that will allow for OC'ing all components.
sviola - Monday, January 3, 2011 - link
Yes. There will be a Z series to be released in 2Q11.
dacipher - Monday, January 3, 2011 - link
The Core i5-2500K was just what I was looking for. Performance/price is where it needs to be, and overclocking should be a breeze.
vol7ron - Monday, January 3, 2011 - link
I agree.
"As an added bonus, both K-series SKUs get Intel’s HD Graphics 3000, while the non-K series SKUs are left with the lower HD Graphics 2000 GPU."
Doesn't it seem like Intel has this backwards? For me, I'd think to put the 3000 on the lesser performing CPUs. Users will probably have their own graphics to use with the unlocked procs, whereas the limit-locked ones will more likely be used in HTPC-like machines.
DanNeely - Monday, January 3, 2011 - link
This seems odd to me unless they're having yield problems with the GPU portion of their desktop chips. That doesn't seem too likely though because you'd expect the mobile version to have the same problem but they're all 12 EU parts. Perhaps they're binning more aggressively on TDP, and only had enough chips that met target with all 12 EUs to offer them at the top of the chart.
dananski - Monday, January 3, 2011 - link
I agree with both of you. This should be the ultimate upgrade for my E8400, but I can't help thinking they could've made it even better if they'd used the die space for more CPU and less graphics and video decode. The Quick Sync feature would be awesome if it could work while you're using a discrete card, but for most people who have discrete graphics, this and the HD Graphics 3000 are a complete waste of transistors. I suppose they're power gated off so the thermal headroom could maybe be used for overclocking.
JE_Delta - Monday, January 3, 2011 - link
WOW........Great review guys!
vol7ron - Monday, January 3, 2011 - link
Great review, but does anyone know how often just 1 active core is used? I know this is subjective, but if you're running an anti-virus and have a bunch of standard services running in the background, are you likely to use only one core when idling?
What should I advise people, as consumers, to really pay attention to? I know when playing games such as Counter-Strike or Battlefield: Bad Company 2, my C2D maxes out at 100%; I assume both cores are being used to achieve the 100% utilization. I'd imagine that in this age, hardly ever will there be a time to use just one core; probably 2 cores at idle.
I would think that the 3-core figures are where the real noticeable impact is, especially in turbo, when gaming/browsing. Does anyone have any more perceived input on this?
dualsmp - Monday, January 3, 2011 - link
What resolution is tested under Gaming Performance on pg. 20?
johnlewis - Monday, January 3, 2011 - link
According to Bench, it looks like he used 1680×1050 for L4D, Fallout 3, Far Cry 2, Crysis Warhead, Dragon Age Origins, and Dawn of War 2, and 1024×768 for StarCraft 2. I couldn't find the tested resolution for World of Warcraft or Civilization V. I don't know why he didn't list the resolutions anywhere in the article or the graphs themselves, however.
karlostomy - Thursday, January 6, 2011 - link
What the hell is the point of posting gaming scores at resolutions that no one will be playing at? If I am not mistaken, the graphics cards in the test are:
eVGA GeForce GTX 280 (Vista 64)
ATI Radeon HD 5870 (Windows 7)
MSI GeForce GTX 580 (Windows 7)
So then, with a Sandy Bridge processor, these resolutions are irrelevant.
1080p or above should be standard resolution for modern setup reviews.
Why, Anand, have you posted irrelevant resolutions for the hardware tested?
dananski - Thursday, January 6, 2011 - link
Games are usually limited in fps by the level of graphics, so processor speed doesn't make much of a difference unless you turn the graphics detail right down and use an overkill graphics card. As the point of this page was to review the CPU power, it's more representative to use low resolutions so that the CPU is the limiting factor.
If you did this set of charts for gaming at 2560x1600 with full AA & max quality, all the processors would be stuck at about the same rate because the graphics card is the limiting factor.
I expect Civ 5 would be an exception to this because it has really counter-intuitive performance.
omelet - Tuesday, January 11, 2011 - link
For almost any game, the resolution will not affect the stress on the CPU. It is no harder for a CPU to play the game at 2560x1600 than it is to play at 1024x768, so to ensure that the benchmark is CPU-limited, low resolutions are chosen.
For instance, the i5 2500K gets ~65fps in the Starcraft test, which is run at 1024x768. The i5 2500K would also be capable of ~65fps at 2560x1600, but your graphics card might not be capable of that at that resolution.
Tom - Sunday, January 30, 2011 - link
Which is why the tests have limited real world value. Skewing the tests to maximize the cpu differences makes new cpus look impressive, but it doesn't show the reality that the new cpu isn't needed in the real world for most games.
Oyster - Monday, January 3, 2011 - link
Maybe I missed this in the review, Anand, but can you please confirm that SB and SB-E will require quad-channel memory? Additionally, will it be possible to run dual-channel memory on these new motherboards? I guess I want to save money because I already have 8GB of dual-channel RAM :).
Thanks for the great review!
CharonPDX - Monday, January 3, 2011 - link
You can confirm it from the photos - it's only using two DIMMs in the photo.
JumpingJack - Monday, January 3, 2011 - link
This has been discussed in great detail. The i7, i3, and i5 2XXX series is dual channel. The rumor mill abounds with SB-E having quad channel, but I don't recall seeing anything official from Intel on this point.
8steve8 - Monday, January 3, 2011 - link
The K processors have the much better IGP and a variable multiplier, but to use the improved IGP you need an H67 chipset, which doesn't support changing the multiplier?
ViRGE - Monday, January 3, 2011 - link
CPU multiplier: Yes, H67 cannot change the CPU multiplier.
GPU multiplier: No, even H67 can change the GPU multiplier.
mczak - Monday, January 3, 2011 - link
I wonder why, though? Is this just official policy? I can't really see a good technical reason why CPU OC would work with P67 but not H67 - it is just turbo going up some more steps, after all. Maybe board manufacturers can find a way around that?
Or is this not really linked to the chipset but rather to whether the IGP is enabled (which, after all, is also linked to turbo)?
Rick83 - Monday, January 3, 2011 - link
I just checked the manual for MSI's 7676 mainboard (high-end H67) and it lists the CPU core multiplier in the BIOS (page 3-7 of the manual; the only limitation mentioned is that of CPU support), with nothing grayed out and overclockability listed as a feature. As this is the 1.1 version, I think someone misunderstood something... unless MSI has messed up its manual after all and just reused the P67 manual... Still, the focus on overclocking would be most ridiculous.
Rick83 - Monday, January 3, 2011 - link
Also, there is this: http://www.eteknix.com/previews/foxconn-h67a-s-h67... where the unlocked multiplier is specifically mentioned as a feature of the H67 board.
So I think AnandTech got it wrong here...
RagingDragon - Monday, January 3, 2011 - link
Or perhaps CPU overclocking on H67 is not *officially* supported by Intel, but the motherboard makers are supporting it anyway?
IanWorthington - Monday, January 3, 2011 - link
Seems to sum it up. If you want both you have to wait until Q2.<face palm>
8steve8 - Monday, January 3, 2011 - link
So if I'm someone who wants the best IGP but doesn't want to pay for overclockability, I still have to buy the K CPU... weird.
beginner99 - Monday, January 3, 2011 - link
Yep. This is IMHO extremely stupid. I wanted to build a PC for someone who mainly needs CPU power (video editing). An overclocked 2600K with QS would be ideal, but it's either wait another 3 months or compromise... in that case probably H67, while still paying for the K part and not being able to use it.
Intel does know how to get the most money from you...
Hrel - Monday, January 3, 2011 - link
Haha, yeah that is stupid. You'd think on the CPUs you can overclock ("K") they'd use the lower-end GPU or not even include one at all. Makes for an awkward HTPC choice.
AkumaX - Monday, January 3, 2011 - link
omg omg omg wat do i do w/ my i7-875k... (p.s. how is this comment spam?)
AssBall - Monday, January 3, 2011 - link
Maybe because you sound like a 12 year old girl with ADHD.
usernamehere - Monday, January 3, 2011 - link
I'm surprised nobody cares there's no native USB 3.0 support coming from Intel until 2012. It's obvious they are abusing their position as the number 1 chip maker, trying to push Light Peak as a replacement for USB. The truth is, Light Peak needs USB for power; it can never live without it (unless you like to carry around a bunch of AC adapters).
Intel wants Light Peak to succeed so badly, they are leaving USB 3.0 (its competitor) by the wayside. Since Intel sits on the USB board, they have a lot of pull in the industry, and as long as Intel won't support the standard, no manufacturer will ever get behind it 100%. Sounds very anti-competitive to me.
Considering AMD is coming out with USB 3.0 support in Llano later this year, I've already decided to jump ship and boycott Intel. Not because I'm upset with their lack of support for USB 3.0, but because their anti-competitive practices are inexcusable; holding back the market and innovation so their own proprietary format can get a headstart. I'm done with Intel.
IanWorthington - Monday, January 3, 2011 - link
Not really: the board manufacturers seem to be adding usb3 chipsets w/o real problems. Good enough.
usernamehere - Monday, January 3, 2011 - link
Sure, if you're building a desktop you can find plenty with USB 3.0 support (via NEC). But if you're looking for a laptop, most will still not have it, due to the fact that manufacturers don't want to pay extra for features when they usually get features via the chipsets already included. Asus is coming out with a handful of notebooks in 2011 with USB 3.0 (that I know of), but widespread adoption will not be here this year.
JarredWalton - Monday, January 3, 2011 - link
Most decent laptops will have USB3. ASUS, Dell, HP, Clevo, and Compal have all used the NEC chip (and probably others as well). Low-end laptops won't get USB3, but then low-end laptops don't get a lot of things.
TekDemon - Monday, January 3, 2011 - link
Even the netbooks usually have USB 3.0 these days, and those almost all use Intel Atom CPUs. The cost to add the controller is negligible for large manufacturers. USB is not going to be the deciding factor for purchases.
DanNeely - Monday, January 3, 2011 - link
Are you sure about that? Newegg lists 99 netbooks on their site. Searching for USB 3 within netbooks returns 0 products.
TekDemon - Monday, January 3, 2011 - link
Your claims are pretty silly, seeing as how USB came about in the same way that Light Peak did - Intel invented USB and pushed it over legacy ports like PS/2, slowly phasing out support for the older ones entirely over the years. It makes no sense for them to support USB 3.0, especially without a real market of devices.
But motherboard manufacturers will support USB 3.0 via add-in chips. I don't see how this is anti-competitive at all; why should Intel have to support a format it doesn't think makes sense? So far USB 3.0 hasn't really shown speeds close to its theoretical maximum, and the only devices that really need the higher bandwidth are external drives that are better off being run off eSATA anyway. There's no real "killer app" for USB 3.0 yet.
BTW Light Peak will easily support adding power to devices, so it definitely does not need USB in order to provide power. There'll just be two wires running alongside the fiber optics.
DanNeely - Tuesday, January 4, 2011 - link
The eSata + USB (power) connector has never gone anywhere, which means that eSata devices need at least 2 cables to work. Flash drives and 2.5" HDs don't need enough power to require an external brick, and 80-90% of eSata speed is still much better than the USB2 bottleneck. With double the amount of power over USB2, USB3 could theoretically be used to run 3.5" drives with a double socket plug freeing them from the wall as well.
ilkhan - Monday, January 3, 2011 - link
I've had my P67A-UD4 for almost 3 weeks now. Let's get the chips out already!
I'm confused, however. The first paragraph talks of a 4.1GHz turbo mode and the chart on page 2 lists 3.8GHz as the max for the 2600K. Is the chart talking about 4-core turbo or what?
Spike - Monday, January 3, 2011 - link
Isn't it an i7-2600k? The article title says "i5 2600k"... just curious...
Ryan Smith - Monday, January 3, 2011 - link
Oh dear...Fixed. Thanks for that.
omelet - Monday, January 3, 2011 - link
> The Sandy Bridge Review: Intel Core i5 2600K, i5 2500K and Core i3 2100 Tested
Doesn't look fixed over here.
Zoomer - Monday, January 3, 2011 - link
Score one for Intel marketing! Oh wait...
Beenthere - Monday, January 3, 2011 - link
I'll stick with my AMD 965 BE as it delivers a lot of performance for the price and I don't get fleeced on mobo and CPU prices like with Intel stuff.
geek4life!! - Monday, January 3, 2011 - link
Exactly what I have been waiting on, time to build my RIG again. Been without a PC for 1 year now and itching to build a new one.
Game on baby!!!!!!!!!!!!!!
Doormat - Monday, January 3, 2011 - link
If QuickSync is only available to those using the integrated GPU, does that mean you can't use QS with a P67 board, since they don't support integrated graphics? If so, I'll end up having to buy a dedicated QS box (a micro-ATX board and an S or T series CPU seem to be up to that challenge). Also, what if the box is headless (e.g. Windows Home Server)?
Does the performance of QS depend on the number of EUs? The QS testing was on a 12-EU CPU; does performance get cut in half on a 6-EU CPU (again, S or T series CPUs would be affected)?
No mention of Intel AVX functions. I suppose that's more of an architecture thing (which was covered separately), but there are no benchmarks (synthetic or otherwise) to demo the new feature.
MeSh1 - Monday, January 3, 2011 - link
Yeah, I think this is the case, or according to the blurb below you can connect a monitor to the IGP in order to use QS. Is this a design flaw? Seems like a messy workaround :(
"you either have to use the integrated GPU alone or run a multimonitor setup with one monitor connected to Intel’s GPU in order to use Quick Sync."
SandmanWN - Monday, January 3, 2011 - link
The sad part is for all the great encoding you get, the playback sucks. Jacked up.
Doormat - Monday, January 3, 2011 - link
I'm not that interested in playback on that device - it's going to be streamed to my PS3, DLNA-enabled TVs, iPad/iPhone, etc. Considering this won't be supported as a hackintosh for a while, I might as well build a combo transcoding station and WHS box.
JarredWalton - Monday, January 3, 2011 - link
How do you figure "playback sucks"? If you're using MPC-HC, it's currently broken, but that's an application issue not a problem with SNB in general.
Absolution75 - Monday, January 3, 2011 - link
Thank you so much for the VS benchmarks!! Programmers rejoice!
Exodite - Monday, January 3, 2011 - link
I'm of two minds about that really. I had really set my mind on the 2500K as it offers unparalleled bang-for-buck, and real-world testing has shown that Hyper-Threading makes little difference in games.
With the compile tests it's clear there's a distinct benefit to going with the 2600K for me though, which means this'll end up more expensive than I had planned! :)
Lazlo Panaflex - Monday, January 3, 2011 - link
FYI, the 1100T is missing from several of the gaming benchmarks.....
Melted Rabbit - Monday, January 3, 2011 - link
It wouldn't surprise me if that was intentional. I would hope that AnandTech reviewers are not letting companies dictate how their products are to be reviewed lest AT be denied future prerelease hardware. I can't tell from where I sit, and I see no statement denying that such interference happens.
In addition, real-world benchmarks aside from games look to be absent. Seriously, I don't use my computer for offline 3D rendering and I suspect that very few other readers do to any significant degree.
Also, isn't SYSmark 2007 a broken, misleading benchmark? It was compiled with Intel's compiler, you know, the broken one that unnecessarily degrades performance on AMD and VIA processors. There is also this bit that Intel has to include with comparisons that use BAPco (Intel) benchmarks to compare Intel processors against AMD or VIA processors:
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchase, including the performance of that product when combined with other products.
It isn't perfect, but that is what the FTC and Intel agreed to, and until BAPco releases new benchmarks that do not inflict poor performance on non-Intel processors, the results are not reliable. I would see no problem if the graph did not contain AMD processors, but that isn't what we have here. If you are curious, for better or for worse, BAPco is a non-profit organization controlled by Intel.
Anand Lal Shimpi - Monday, January 3, 2011 - link
Hardware vendors have no input into how we test, nor do they stipulate that we must test a certain way in order to receive future pre-release hardware. I should also add that should a vendor "cut us off" (it has happened in the past), we have many ways of getting hardware without being supplied by them directly. In many cases we'd actually be able to bring you content sooner, as we wouldn't be held by NDAs, but it just makes things messier overall.
Either way, see my response above for why the 1100T is absent from some tests. It's the same reason that the Core i7 950 is absent from some tests: maintaining Bench and adding a bunch of new benchmarks meant that not every test is fully populated with every configuration.
As far as your request for more real world benchmarks, we include a lot of video encoding, file compression/decompression, 3D rendering and even now a compiler test. But I'm always looking for more, if there's a test out there you'd like included let me know! Users kept on asking for compiler benchmarks which is how the VS2008 test got in there, the same applies to other types of tests.
Take care,
Anand
Melted Rabbit - Tuesday, January 4, 2011 - link
Thanks for replying to my comment. I now understand why the review was missing some benchmarks for processors like the 1100T. I was also a bit hasty in my accusations with respect to interference from manufacturers, which I apologize for.
I still have trouble with including benchmarks compiled with the Intel compiler without a warning or explanation of what they mean. It really isn't a benchmark with meaningful results if the 1100T runs x87 code while the Core i7-2600K runs SSE2/SSE3 code. I would have no problem with reporting results for benchmarks compiled with Intel's defective compiler, like SYSmark 2007 and Cinebench R10, if they did not include results for AMD or VIA processors, along with an explanation of why they are not applicable to AMD and VIA processors. However, I find presenting such results without that context problematic.
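For readers who haven't followed the compiler dispute, the behavior being described comes down to a runtime check of the CPUID vendor string rather than of the actual feature flags. A minimal sketch of that dispatch logic (illustrative only; the names below are made up, and the real dispatcher is baked into compiled binaries):

def pick_codepath(vendor, has_sse2):
    # Vendor-string dispatch: only "GenuineIntel" gets the vectorized path,
    # even if a non-Intel CPU reports the same SSE2/SSE3 capability bits.
    if vendor == "GenuineIntel" and has_sse2:
        return "SSE2/SSE3 path"
    return "baseline x87 path"

print(pick_codepath("GenuineIntel", True))   # SSE2/SSE3 path
print(pick_codepath("AuthenticAMD", True))   # baseline x87 path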
DanNeely - Monday, January 3, 2011 - link
SYSmark 2007 is like the various 3DMark benches. Mostly useless, but with a large enough fanbase that running it is less hassle than dealing with all the whining fanbois.
Anand Lal Shimpi - Monday, January 3, 2011 - link
There are a few holes in the data we produce for Bench, I hope to fill them after I get back from CES next week :) You'll notice there are some cases where there's some Intel hardware missing from benchmarks as well (e.g. Civ V).
Take care,
Anand
Lazlo Panaflex - Monday, January 3, 2011 - link
Thanks Anand :-)
MeSh1 - Monday, January 3, 2011 - link
Seems Intel did everything right for these to fit snugly into next-gen Macs. Everything is nicely integrated into one chip, and the encode/transcode speed boost is icing on the cake (if supported, of course) given that Apple is content focused. Nice addition if you're a Mac user.
Doormat - Monday, January 3, 2011 - link
Except for the whole thing about not knowing if the GPU is going to support OpenCL. I've heard Intel is writing OpenCL drivers for possibly a GPU/CPU hybrid, or utilizing the new AVX instructions for CPU-only OpenCL.Other than that, the AT mobile SNB review included a last-gen Apple MBP 13" and the HD3000 graphics could keep up with the Nvidia 320M - it was equal to or ahead in low-detail settings and equal or slightly behind in medium detail settings. Considering Nvidia isn't going to rev the 320M again, Apple may as well switch over to the HD3000 now and then when Ivy Bridge hits next year, hopefully Intel can deliver a 50% perf gain in hardware alone from going to 18 EUs (and maybe their driver team can kick in some performance there too).
DanNeely - Monday, January 3, 2011 - link
The increased power efficiency might allow Apple to squeeze a GPU onto their smaller laptop boards without losing runtime due to the smaller battery.
yuhong - Monday, January 3, 2011 - link
"Unlike P55, you can set your SATA controller to compatible/legacy IDE mode. This is something you could do on X58 but not on P55. It’s useful for running HDDERASE to secure erase your SSD for example"Or running old OSes.
DominionSeraph - Monday, January 3, 2011 - link
"taking the original Casino Royale Blu-ray, stripping it of its DRM"Whoa, that's illegal.
RussianSensation - Monday, January 3, 2011 - link
It would have been nice to include 1st-generation Core i7 processors such as the 860/870/920-975 in the Starcraft 2 bench, as it seems to be very CPU intensive.
Also, a section on overclocking showing how far the 2500K/2600K can go on air cooling with safe voltage limits (say 1.35V) would have been much appreciated.
Hrel - Monday, January 3, 2011 - link
Sounds like this is SO high end it should be the server market. I mean, why make yet ANOTHER socket for servers that use basically the same CPUs? Everything's converging and I'd just really like to see server mobos converge into "High End Desktop" mobos. I mean seriously, my E8400 OC'd with a GTX 460 is more power than I need. A quad would help with the video editing I do in HD, but it works fine now, and with GPU-accelerated rendering the rendering times are totally reasonable. I just can't imagine anyone NEEDING a home computer more powerful than the LGA 1155 socket can provide. Hell, 80-90% of people are probably fine with the power Sandy Bridge gives in laptops now.
mtoma - Monday, January 3, 2011 - link
Perhaps it is like you say, but it's always good for buyers to decide if they want server-like features in a PC. I don't like manufacturers dictating only one way to do it (like Intel does now with the odd combination of HD 3000 graphics and the Intel H67 chipset). Let us not forget that for a long time, all we had were 4 slots for RAM and 4-6 SATA connections (like you probably have). Intel X58 changed all that: suddenly we had the option of 6 slots for RAM, 6-8 SATA connections and enough PCI-Express lanes.
I only hope that LGA 2011 brings back those features, because like you said: it's not only the performance we need, but also the features.
And remember that software doesn't stay still; it usually requires multiple processor cores (video transcoding, antivirus scanning, HDD defragmenting, modern OSes, and so on...).
All this aside, the main issue remains: Intel must be persuaded to stop looting users' money and implement only one socket at a time. I usually support Intel, but in this regard, AMD deserves congratulations!
DanNeely - Monday, January 3, 2011 - link
LGA 2011 is a high end desktop/server convergence socket. Intel started doing this in 2008, with all but the highest end server parts sharing LGA 1366 with top end desktop systems. The exceptions were quad/octo socket CPUs, and those using enormous amounts of RAM, which used LGA 1567.
The main reason why LGA 1155 isn't suitable for really high end machines is that it doesn't have the memory bandwidth to feed hex and octo core CPUs. It's also limited to 16 PCIe 2.0 lanes on the CPU vs 36 PCIe 3.0 lanes on LGA 2011. For most consumer systems that won't matter, but 3/4 GPU card systems will start losing a bit of performance when running in a x4 slot (only a few percent, but people who spend $1000-2000 on GPUs want every last frame they can get), and high end servers with multiple 10Gb Ethernet cards and PCIe SSD devices also begin running into bottlenecks.
Not spending an extra dollar or five per system for the QPI connections only used in multi-socket systems in 1155 also adds up to major savings across the hundreds of millions of systems Intel is planning to sell.
Hrel - Monday, January 3, 2011 - link
I'm confused by the upset over playing video at 23.976Hz. "It makes movies look like, well, movies instead of TV shows"? What? Wouldn't recording at a lower frame rate just mean there's missed detail, especially in fast action scenes? Isn't that why HD runs at 60fps instead of 30fps? Isn't more FPS good as long as it's played back at the appropriate speed, i.e. whatever it's filmed at? I don't understand the complaint.
On a related note, Hollywood and the world need to just agree that everything gets recorded and played back at 60fps at 1920x1080. No variation AT ALL! That way everything would just work. Or better yet 120fps and with the ability to turn 3D on and off as you see fit. Whatever FPS is best. I've always been told higher is better.
chokran - Monday, January 3, 2011 - link
You are right about having more detail when filming at higher FPS, but this isn't about it being good or bad; it's more a matter of tradition and visual style.
The look movies have these days, the one we got accustomed to, is mainly achieved by filming in 24p, or 23.976 to be precise. The look you get when filming at higher FPS just doesn't look like cinema anymore but TV. At least to me. A good article on this:
http://www.videopia.org/index.php/read/shorts-main...
The problem with movies looking like TV can be tested at home if you have a TV with some kind of motion interpolation, e.g. MotionFlow as Sony calls it or Intelligent Frame Creation from Panasonic. When it's turned on, you can see the soap opera effect from the added frames. There are people that don't see it and some that do and like it, but I have to turn it off since it doesn't look "natural" to me.
CyberAngel - Thursday, January 6, 2011 - link
http://en.wikipedia.org/wiki/Showscan
hmcindie - Monday, January 3, 2011 - link
Why is it that Quick Sync has better scaling? It's very evident in the Dark Knight police car image, as all the other versions have definite scaling artifacts on the car.
Scaling is something that should be very easy. Why is there so big a difference? Are these programs just made to market new stuff and no one really uses them because they suck? Such big scaling differences between codepaths make no sense.
JarredWalton - Monday, January 3, 2011 - link
It looks to me like some of the encodes have a sharpening effect applied, which is either good (makes text legible) or bad (aliasing effects) depending on your perspective. I'm quite happy overall with the slightly blurrier QS encodes, especially considering the speed.
xxxxxl - Monday, January 3, 2011 - link
I've been so looking forward to SB... only to hear that H67 can't overclock the CPU?!?!?!?!
Disappointed.
digarda - Monday, January 3, 2011 - link
Who needs the IGP for a tuned-up desktop PC anyway? Some for sure, but I see the main advantages of the SB GPU for business laptop users. As the charts show, for desktop PC enthusiasts the GPU is still woefully slow, being blown away even by the (low-end) Radeon 5570. For this reason, I can't help feeling that the vast majority of overclockers will still want discrete graphics.
I would have preferred the dual core (4-thread) models to have (say) 32 shaders, instead of the 6 or 12 EUs currently offered. At 32nm, there's probably enough silicon real estate to do it. I guess Intel simply didn't want the quad core processors to have lower graphics performance than the dual core ones (sigh).
Pity that the socket 2011 processors (without a GPU) are apparently not going to arrive for nearly a year (Q4 2011). I had previously thought the schedule was Q3 2011. Hopefully, AMD's Bulldozer-based CPUs will be around (or at least imminent) by then, forcing Intel to lower the prices of its high-end parts. On the other hand, time to go - looks like I'm starting to dream again...
Exodite - Monday, January 3, 2011 - link
Using myself as an example, the drawback of limiting overclocking on H67 is the lack of a good selection of overclocking-friendly micro-ATX boards, since most, if not all, of those are H67.
Granted, that's not Intel's fault.
It's just that I have no need for more than one PCIe x16 slot and 3 SATA (DVD, HDD, SSD). I don't need PCI, FDD, PS2, SER, PAR or floppy connectors at all.
Which ideally means I'd prefer a rather basic P67 design in micro-ATX format but those are, currently, in short supply.
The perfect motherboard, for me, would probably be a P67 micro-ATX design with the mandatory x8/x8 Crossfire support, one x1 and one x4 slot, front panel connector for USB 3, dual gigabit LAN and the base audio and SATA port options.
Gigabyte?
Anyone? :)
geofelt - Monday, January 3, 2011 - link
The only P67-based micro-ATX motherboard I have found to date is the Asus P8P67-M Pro (or Evo?).
Any others?
Rick83 - Monday, January 3, 2011 - link
There's also a non-Pro P8P67-M.
Keep in mind though, that the overclocking issue may not be as bad as pointed out. There are H67 boards being marketed for overclocking ability and manuals showing how to adjust the multiplier for CPUs... I'm not yet convinced overclocking will be disabled on H67.
smilingcrow - Monday, January 3, 2011 - link
Major bummer as I was going to order a Gigabyte H67 board and an i5-2500K but am put off now. They seem to overclock so well and with low power consumption that it seemed the perfect platform for me…
I don’t mind paying the small premium for the K editions but being forced to use a P67 and lose the graphics and have difficulty finding a mATX P67 board seems crazy!
I wonder if this limit is set in the chipset or it can be changed with a BIOS update?
DanNeely - Monday, January 3, 2011 - link
Quick Sync only works if the IGP is in use (may be fixable via drivers later); for anyone who cares about video encoding performance that makes the IGP a major feature.
mariush - Monday, January 3, 2011 - link
On the Dark Knight test...Looking at the Intel software encoding and the AMD encoding, it looks like the AMD is more washed out overall, which makes me think there's actually something related to colorspaces or color space conversion involved....
Are you guys sure there's no PC/TV mixup there with the luminance or ATI using the color matrix for SD content on HD content or something like that?
iwodo - Monday, January 3, 2011 - link
1. Transcoding @ 100fps is not uber fast. x264's ultrafast setting is even faster than that. So I hope there are further improvements or untapped potential in Quick Sync that we haven't yet discovered.
2. OpenCL - No mention of OpenCL? At all?
3. I would think Intel GD has done very well this time around. And there are possibly 20 - 30% more performance to squeeze out given how Intel Gfx Drivers tend to be VERY POOR.
cactusdog - Monday, January 3, 2011 - link
Thanks for the excellent rundown of Sandy Bridge. As I have an X58 system I'm going to skip it and see what happens in Q4. X58 has been a good platform and has lasted longer than most others in recent years.
xxxxxl - Monday, January 3, 2011 - link
I've thought it over... and I can't believe that H67 only supports GPU overclocking.
Like others have said: buy a "K" CPU on H67 and you get HD 3000 graphics but cannot overclock... and on the other side, those with P67 who buy an unlocked "K" CPU get HD 3000 but cannot use it... then what's the point of making HD 3000 graphics?
strikeback03 - Tuesday, January 4, 2011 - link
As they pointed out, with the Z series motherboard you can have both. That said, it does seem stupid that Intel would launch with those official guidelines, and in these comments others are saying some H67 motherboards are allowing the CPU multiplier to be changed.
rs2 - Monday, January 3, 2011 - link
As tempting as this chip looks, my 3.8 GHz Core 2 Quad is still more CPU than I can really use most of the time. I wonder if we're reaching the point where improved compute performance is not really necessary for mainstream and even most enthusiast users.
In any case, the upcoming 6-core/12-thread variant sounds interesting. Maybe I'll upgrade to that if Intel doesn't assign it to the $999 price point.
romanovskis - Monday, January 3, 2011 - link
Same here. For gaming or multimedia use, a Core 2 Quad (mine at 4GHz) is still enough, and probably will be for another 1-2 years. The best value for money is still in GPU upgrades.
iwodo - Monday, January 3, 2011 - link
Best value/money is an SSD...
cgeorgescu - Monday, January 3, 2011 - link
Best Value/Money is Beer, everybody knows that. Not 6-core but 6-pack.
karlostomy - Thursday, January 6, 2011 - link
WIN ^^^
agr8man - Monday, January 3, 2011 - link
A great review from you guys, and IMO the i5 2500K is really a steal.
-=Hulk=- - Monday, January 3, 2011 - link
That's crazy, are the chipset's PCIe lanes still limited to v1 (250MB/s) speed or what????
http://images.anandtech.com/reviews/cpu/intel/sand...
mino - Monday, January 3, 2011 - link
No, you read it wrong. There are altogether 8 PCIe 2.0 lanes and all can be used independently, i.e. as "PCIe x1".
The CPU-chipset bandwidth however is basically a PCIe x4 link, so do not expect wonders if more devices are in heavy use...
-=Hulk=- - Monday, January 3, 2011 - link
No! Look at the PCIe x16 from the CPU. Intel indicates a bandwidth of 16GB/s for the x16 link. That means 1GB/s per lane.
But PCIe v2 has a bandwidth of only 500MB/s per lane. That means the values Intel indicates for the PCIe lanes are the sum of the upload AND download bandwidth of the link.
That means the PCIe lanes of the chipset run at 250MB/s! That is the bandwidth of PCIe v1, and Intel pulled the same bullshit with the P55/H57: it says they are PCIe v2 but limits their speed to PCIe v1 values:
P55 chipset (look at the 2.5GT/s !!!) :
"PCI Express* 2.0 interface:
Offers up to 2.5GT/s for fast access to peripheral devices and networking with up to 8 PCI Express* 2.0 x1 ports, configurable as x2 and x4 depending on motherboard designs.
http://www.intel.com/products/desktop/chipsets/p55... "
P55: also 500MB/s per lane, same as for the P67
http://benchmarkreviews.com/images/reviews/motherb...
Even for the ancient ICH7 Intel indicates 500MB/s per lane, but at that time PCIe v2 didn't even exist... That's because it's the sum of the upload and download speed of PCIe v1.
http://img.tomshardware.com/us/2007/01/03/the_sout...
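For reference, the per-lane arithmetic behind those figures works out as follows (a quick sketch using the standard PCIe signalling rates; whether a slide quotes one direction or the sum of both is exactly the ambiguity in question):

# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding -> 500 MB/s per lane, per direction.
gt_per_s = 5e9
per_lane_one_way = gt_per_s * 8 / 10 / 8     # 500e6 bytes/s

x16_one_way = 16 * per_lane_one_way          # 8 GB/s
x16_both_ways = 2 * x16_one_way              # 16 GB/s, the number quoted for the CPU's x16 link
x1_one_way = per_lane_one_way                # 500 MB/s, the number quoted per chipset lane

print(x16_one_way / 1e9, x16_both_ways / 1e9, x1_one_way / 1e6)   # 8.0 16.0 500.0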
DanNeely - Monday, January 3, 2011 - link
Because 2.0 speed for the southbridge lanes has been reported repeatedly (along with a 2x speed DMI bus to connect them), my guess is an error when making the slides with bidirectional BW listed on the CPU and unidirectional BW on the southbridge.
jmunjr - Monday, January 3, 2011 - link
Intel's sell out to big media and putting DRM in Sandy Bridge means I won't be getting one of these puppies. I don't care how fast it is...
Exodite - Monday, January 3, 2011 - link
Uh, what exactly are you referencing?
If it's TXT, it's worth noting that the interesting chips, the 2500K and 2600K, don't even support it.
chirpy chirpy - Tuesday, January 11, 2011 - link
I think the OP is referring to Intel Insider, the not-so-secret DRM built into the Sandy Bridge chips. I can't believe people are overlooking the fact that Intel is attempting to introduce DRM at the CPU level and all everyone has to say is "wow, I can't WAIT to get one of dem shiny new uber fast Sandy Bridges!"
I for one applaud and welcome our benevolent DRM overlords.....
http://www.pcmag.com/article2/0,2817,2375215,00.as...
nuudles - Monday, January 3, 2011 - link
I have a Q9400; if I compare it to the 2500K in Bench and average (straight average) all scores, the 2500K is 50% faster. The 2500K has a 24% faster base clock, so all the architecture improvements plus faster RAM, more cache and turbo mode gained only ~20% or so on average, which is decent but not awesome taking into account that the C2Q is a 3+ year old design (or is it 4 years?). I know that the idle power is significantly lower due to power gating, so with hurry-up-and-wait it consumes less power overall (can't remember the 45nm C2Q load power, but it was not much higher than these new 2011 chips).
So 50%+ faster sounds good (both chips occupy the same price bracket), but after equating clock speeds (yes, it would increase load and idle power on the C2Q) the improvement is not massive, though still noticeable.
I will be holding out for Bulldozer (possibly slightly slower, especially in lightly threaded workloads?) or Ivy Bridge, as mine is still fast enough to do what I want; I'd rather spend the money on adding an SSD or a better graphics card.
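Spelling out the clock-for-clock arithmetic in that estimate (a rough sketch that reuses the 50% figure above; base clocks of 3.3GHz for the 2500K and 2.66GHz for the Q9400 are assumed, and turbo is ignored):

overall_speedup = 1.50           # 2500K vs Q9400, straight average of Bench scores (figure quoted above)
base_clock_ratio = 3.3 / 2.66    # ~1.24, the ~24% base clock advantage

# Dividing out the clock advantage leaves the combined per-clock gain
# from architecture, cache, faster RAM and turbo.
per_clock_gain = overall_speedup / base_clock_ratio - 1
print(round(per_clock_gain * 100))   # ~21 (%)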
7Enigma - Monday, January 3, 2011 - link
I think the issue with the latest launch is the complete and utter lack of competition for what you are asking. Anand's testing showed that the OC'ing headroom for these chips is fantastic..... and due to the thermals it's even possible (though not recommended by me personally) on the stock low-profile heatsink.
That tells you that they could have significantly increased the performance of this entire line of chips, but why should they when there is no competition in sight for the near future (let's ALL hope AMD really produces a winner in the next release)? Otherwise we're going to be dealing with a plodding approach from Intel for a while. In a couple months when the gap shrinks (again, hopefully by a lot) they simply release a "new" batch with slightly higher turbo frequencies (no need to up the base clocks as this would only hurt power consumption with little/no upside), and bam, they get essentially a "free" release.
It stinks as a consumer, but honestly probably hurts us enthusiasts the least since most of us are going to OC these anyways if purchasing the unlocked chips.
I'm still on a C2D @ 3.85GHz but I'm mainly a gamer. In a year or so I'll probably jump on the respin of SNB with even better thermals/OC potential.
DanNeely - Monday, January 3, 2011 - link
CPUs need to be stable in Joe Sixpack's unairconditioned trailer in Alabama during August, after the heatsink is crusted in cigarette tar and dust, in one of those horrible computer desks that stuff the tower into a cupboard with just enough open space in the back for wires to get out; not just in a 70F room where all the dust is blown out regularly and the computer has good airflow. Unless something other than temperature is the limiting factor on OC headroom, that means that large amounts of OCing can be done easily by those of us who take care of our systems.
Since Joe also wants to get 8 or 10 years out of his computer before replacing it, the voltages need to be kept low enough that electromigration doesn't kill the chip after 3 or 4. Again, that's something that most of us don't need to worry about much.
7Enigma - Monday, January 3, 2011 - link
Do you happen to remember the space heater.....ahem, I mean P4?
DanNeely - Monday, January 3, 2011 - link
I do. Intel used bigger heatsinks than they do for mainstream parts today.
panx3dx - Monday, January 3, 2011 - link
The article states that in order for Quick Sync to function, a display must be connected to the integrated graphics. Since P67 does not support the IGP, does that mean Quick Sync will be disabled???
panx3dx - Monday, January 3, 2011 - link
Oops, just saw Doormat already asked the question on page three, and I can't find a way to edit or delete my post. However, no one has given a clear answer yet.
Next9 - Monday, January 3, 2011 - link
There is no problem with BIOS and 3TB drives. Using GPT you can boot such a drive on either a BIOS- or UEFI-based system. You should only blame Windows and its obsolete MS-DOS partitioning scheme and MS-DOS bootloader.
mino - Monday, January 3, 2011 - link
Microsoft not supporting GPT on BIOS systems (hence 3TB drives on BIOS systems) was a pure BUSINESS decision.
It had nothing to do with technology, which is readily available.
mino - Monday, January 3, 2011 - link
In the table there is "N" for the i3 CPUs.
But in the text there is: "While _all_ SNB parts support VT-x, only three support VT-d"
Could you check it out and clarify? (there is no data on ark.intel.com yet)
mczak - Monday, January 3, 2011 - link
It's not exactly true that the HD 3000 has less compute performance than the HD 5450; at least it's not that clear cut.
It has 12 EUs, and since they are 128 bits wide, this would amount to "48 SPs" if you count like AMD. Factor in the clock difference and that's actually more cores (when running at 1300MHz at least). Though if you only look at MAD throughput, then it is indeed less (as Intel's IGP still can't quite do MAD, though it can do MAC).
It's a bit disappointing though to see mostly HD2000 on the desktop, with the exception of a few select parts, which is not really that much faster compared to Ironlake IGP (which isn't surprising - after all Ironlake had twice the EUs albeit at a lower clock, so the architectural improvements are still quite obvious).
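Putting rough numbers on that comparison (a back-of-the-envelope sketch using the figures quoted above plus commonly listed HD 5450 specs; it deliberately ignores the MAD-vs-MAC caveat):

# HD Graphics 3000: 12 EUs, each treated as 4 x 32-bit lanes ("48 SPs"), up to ~1.3 GHz.
hd3000_lanes, hd3000_clock = 12 * 4, 1.30e9
# Radeon HD 5450: 80 SPs at 650 MHz (commonly listed specs).
hd5450_lanes, hd5450_clock = 80, 0.65e9

# Lane count x clock as a crude throughput proxy, ignoring MAD/MAC differences.
print(hd3000_lanes * hd3000_clock)   # 6.24e10
print(hd5450_lanes * hd5450_clock)   # 5.2e10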
DanNeely - Monday, January 3, 2011 - link
That's not true. Each AMD SP is a pipeline, the 4th one on a 69xx (or 5th on a 58xx) series card is 64 bits wide, not 32. They can't all be combined into a single 128 (160, 196) bit wide FPU.kallogan - Monday, January 3, 2011 - link
I'll wait for 22 nm. No point in upgrading for now.
dgingeri - Monday, January 3, 2011 - link
I have a really good reason for X58: I/O. I have 2x GTX 470 video cards and a 3ware PCIe x4 RAID controller. None of the P67 motherboards I've seen would handle all that hardware, even with cutting the video cards' I/O in half.
This chip fails in that one very important spot. If they had put a decent PCIe controller in it, with 36 PCIe lanes instead of 16, then I'd be much happier.
Exodite - Monday, January 3, 2011 - link
That's exactly why this is the mainstream platform, while X58 is the enthusiast one. Your requirements aren't exactly mainstream; indeed they're beyond what even most enthusiasts need.
sviola - Monday, January 3, 2011 - link
You may want to look at the Gigabyte GA-P67A-UD5 and GA-P67A-UD7 as they can run your configuration.
Nihility - Monday, January 3, 2011 - link
Considering the K versions of the CPUs don't have it: if I'm a developer and use VMs a lot, how important will VT-d be within the 3-4 years that I would own such a chip?
I know that it basically allows direct access to hardware and I don't want to get stuck without it, if it becomes hugely important (Like how you need VT-x to run 64 bit guests).
Any thoughts?
code65536 - Monday, January 3, 2011 - link
My question is whether or not that chart is even right. I'm having a hard time believing that Intel would disable a feature in an "enthusiast" chip. Disabling features in lower-end cheaper chips, sure, but in "enthusiast" chips?! Unless they are afraid of those K series (but not the non-K, apparently?) cannibalizing their Xeon sales?
has407 - Monday, January 3, 2011 - link
Relatively unimportant IMHO if you're doing development. If you're running a VM/IO-intensive production workload (which isn't likely with one of these), then it's more important.
Remember, you need several things for VT-d to work:
1. CPU support (aka "IOMMU").
2. Chipset/PCH support (e.g., Q57 has it, P55 does not).
3. BIOS support (a number of vendor implementations are broken).
4. Hypervisor support.
Any of 1-3 might result in "No" for the K parts. Even though it *should* apply only to the CPU's capabilities, Intel may simply be saying it is not supported. (Hard to tell as the detailed info isn't up on Intel's ark site yet, and it would otherwise require examining the CPU capability registers to determine.)
However, it's likely to be an intentional omission on Intel's part as, e.g., the i7-875K doesn't support Vt-d either. As to why that might be there are several possible reasons, many justifiable IMHO. Specifically, the K parts are targeted at people who are likely to OC, and OC'ing--even a wee bit, especially when using VT-d--may result in instability such as to make the system unusable.
If Vt-d is potentially important to you, then I suggest you back up through steps 4-1 above; all other things equal, 4-2 are likely to be far more important. If you're running VM/IO-intensive workloads where performance and VT-d capability is a priority, then IMHO whether you can OC the part will be 0 or -1 on the list of priorities.
And while VT-d can make direct access to hardware a more effective option (again, assuming hypervisor support), its primary purpose is to make all IO more efficient in a virtualized environment (e.g., IOMMU and interrupt remapping). It's less a matter of "Do I have to have it to get to first base?" than "How much inefficiency am I willing to tolerate?" And again, unless you're running IO-intensive VM workloads in a production environment, the answer is probably "The difference is unlikely to be noticeable for the work [development] I do."
p.s. code65536 -- I doubt Intel is concerned with OC'd SB parts cannibalizing Xeon sales. (I'd guess the count of potentially lost Xeon sales could be counted on two hands with fingers to spare. :) Stability is far more important than pure speed for anyone I know running VM-intensive loads, and, e.g., no ECC support on these parts is a deal killer for me. YMMV.
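On items 1-3 in that list: if you'd rather check a running Linux box than trust the spec sheets, the CPU flags and kernel log are usually enough (a rough sketch; a BIOS can still disable either feature even when the CPU supports it, and reading the kernel log may require root):

import subprocess

# VT-x shows up as the "vmx" flag in /proc/cpuinfo.
cpuinfo = open("/proc/cpuinfo").read()
print("VT-x flag present:", "vmx" in cpuinfo.split())

# An active VT-d/IOMMU typically leaves "DMAR" lines in the kernel log.
dmesg = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
print("DMAR/IOMMU mentioned in kernel log:", "DMAR" in dmesg)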
DanNeely - Tuesday, January 4, 2011 - link
Given how long MS dev tools take to install, I'd really like to be able to do all my dev work in a VM backed up to the corporate LAN, to ease the pain of a new laptop and to make a loaner actually useful. Unfortunately the combination of lousy performance with MS VPC and the inability of VPC to run two virtual monitors of different sizes means I don't have a choice about running Visual Studio in my main OS install.
mino - Wednesday, January 5, 2011 - link
VMware Workstation is what you need. VPC is for sadists.
Even if your budget is 0 (zero), and VPC is free, KVM/QEMU might be a better idea.
Also, Hyper-V run locally and accessed via RDP is pretty reasonable.
cactusdog - Monday, January 3, 2011 - link
If we can't overclock the chipset, how do we get high memory speeds of 2000MHz+? Is there still a QPI/DRAM voltage setting?
Tanel - Monday, January 3, 2011 - link
No VT-d on K-series? FFFFUUUU!
So just because I want to use VT-d I'll also be limited to 6 EUs and have no possibility to overclock?
Then there's the chipset-issue. Even if I got the enthusiast targeted K-series I would still need to get the:
a) ...H67-chipset to be able to use the HD-unit and QS-capability - yet not be able to overclock.
b) ...P67-chipset to be able to overclock - yet to lose QS-capability and the point of having 6 extra EUs as the HD-unit can't be used at all.
What the hell Intel, what the hell! This makes me furious.
Kevin G - Monday, January 3, 2011 - link
There is the Z68 chipset, which will allow both overclocking and integrated video. However, that chipset won't arrive until Q2.
Tanel - Monday, January 3, 2011 - link
Well, yes, but one wonders who came up with this scheme in the first place. Q2 could be half a year from now.
teohhanhui - Monday, January 3, 2011 - link
I've been thinking the same thing while reading this article... It makes no sense at all. Bad move, Intel.
micksh - Monday, January 3, 2011 - link
Exactly my thoughts. No Quick Sync for enthusiasts right now - that's a disappointment. I think it should be stated more clearly in the review.
Another disappointment - missing 23.976 fps video playback.
has407 - Monday, January 3, 2011 - link
Yeah, OK, lack of support for VT-d ostensibly sucks on the K parts, but as previously posted, I think there may be good reasons for it. But let's look at it objectively...
1. Do you have an IO-intensive VM workload that requires VT-d?
2. Is the inefficiency/time incurred by the lack of VT-d support egregious?
3. Does your hypervisor, BIOS and chipset support VT-d?
IF you answered "NO" or "I don't know" to any of those questions, THEN what does it matter? ELSE IF you answered "YES" to all of those questions, THEN IMHO SB isn't the solution you're looking for. END IF. Simple as that.
So because you--who want that feature and the ability to OC--which is likely 0.001% of the customers who are too cheap to spend the $300-400 for a real solution--the vendor should spend 10-100X to support that capability--which will thus *significantly* increase the cost to the other 99.999% of the customers. And that makes sense how and to whom (other than you and the other 0.0001%)?
IMHO you demand a solution at no extra cost to a potential problem you do not have (or have not articulated); or you demand a solution at no extra cost to a problem you have and for which the market is not yet prepared to offer at a cost you find acceptable (regardless of vendor).
Tanel - Tuesday, January 4, 2011 - link
General best practice is not to feed the trolls - but in this case your arguments are so weak I will go ahead anyway.
First off, I like how you - without having any insight into my usage profile - question my need for VT-d and call that "let's look at it objectively".
VT-d is excellent when...
a) developing hardware drivers and trying to validate functionality on different platforms.
b) fooling around with GPU passthrough, something I did indeed hope to deploy with SB.
So yes, I am in need of VT-d - "Simple as that".
Secondly, _all_ the figures you've presented are pulled out of your ass. I'll be honest, I had a hard time following your argument as much of what you said makes no sense.
So I should spend more money to get an equivalent retail SKU? Well then Sir, please go ahead and show me where I can get a retail SB SKU clocked at >4.4GHz. Not only that, you're in essence implying that people only overclock because they're cheap. In case you've missed it, it's the enthusiasts buying high-end components that enable much of the next-gen research and development.
The rest - especially the part with 10-100X cost implication for vendors - is the biggest pile of manure I've come across on Anandtech. What we're seeing here is a vendor stripping off already existing functionality from a cheaper unit while at the same time asking for a premium price.
If I were to make a car analogy, it'd be the same as if Ferrari sold the 458 in two versions: one with a standard engine, and one with a more powerful engine that lacks headlights. By your reasoning - as my usage profile is in need of headlights - I'd just have to settle for the tame version. Not only would Ferrari lose the added money they'd get from selling a premium version, they would lose a sale, as I'd rather wait until they present a version that fits my needs. I sure hope you're not running a business.
There is no other way to put it, Intel fucked up. I'd be jumping on the SB-bandwagon right now if it wasn't for this. Instead, I'll be waiting.
has407 - Tuesday, January 4, 2011 - link
Apologies, didn't mean to come across as a troll or in-your-face idjit (although I admittedly did--lesson learned). Everyone has different requirements/demands, and I presumed and assumed too much when I should not have, and should have been more measured in my response.
You're entirely correct to call me on the fact that I know little or nothing about your requirements. Mea culpa. That said, I think SB is not for the likes of you (or me). While it is a "mainstream" part, it has a few too many warts.
Does that mean Intel "fucked up"? IMHO no--they made a conscious decision to serve a specific market and not serve others. And no, that "10-100X" is not hot air but based on costing from several large scale deployments. Frickin amazing what a few outliers can do to your cost/budget.
Akv - Monday, January 3, 2011 - link
I didn't have time to read all the reviews, and furthermore I am not sure I will be able to express what I mean with the right nuances, since English is not my first language.
For the moment I am a bit disappointed. To account for my relative coldness, it is important to explain where I start from:
1) For gaming, I already have more than I need with a quad core 775 and a recent ATI 6-series graphics card.
2) For office work, I already have more than I need with an i3 clarkdale.
Therefore since I am already equipped, I am of course much colder than those who need to buy a new rig just now.
Also, the joy of trying out a new processor must be tempered with several considerations:
1) With Sandy Bridge, you have to add the price of a new mobo to the price of the processor. That makes it much more expensive. And you are not even sure that 1155 will be kept for Ivy Bridge. That is annoying.
2) There are always other valuable things that you can buy for a rig, apart from the sheer processor horsepower : more storage, better monitor...
3) The performance improvement that comes with Sandy Bridge is what I call a normal improvement for a new generation of processors. It is certainly not a quantum leap in the nature of processors.
Now, there are two things I really dislike :
1) If you want to use P67 with a graphics card, you still have that piece of hardware, the IGP, that you actually bought and that you cannot use. That seems to me extremely inelegant compared to the 775 generation of processors. It is not an elegant architecture.
2) If you want to use H67 and the Intel IGP for office work and movies, the improvement compared to Clarkdale is not sufficient to justify buying a new processor and a new mobo. With H67 you will be able to do office work fluently and watch video nearly perfectly; with Clarkdale you already could.
The one thing that I like is the improvement in power consumption. Otherwise it all seems to me a bit awkward.
sviola - Monday, January 3, 2011 - link
Well, the IGP not being removable is like having on-board sound but also having a dedicated sound card. Not much of a deal, since you can't buy a motherboard without integrated sound nowadays...
Shadowmaster625 - Monday, January 3, 2011 - link
You say you want Intel to provide a $70 GPU. Well, here's a math problem for you: if the GPU on a 2600K is about 22% of the die, and the die costs $317 retail, then how much are you paying for the GPU? If you guessed $70, you win! Congrats, you now paid $70 for a crap GPU. The question is.... why? There is no tock here... only ridiculously high margins for Intel.
nuudles - Monday, January 3, 2011 - link
Anand, I'm not the biggest Intel fan (due to their past grey-area dealings) but I don't think the naming is that confusing. As I understand it they will move to the 3x00 series with Ivy Bridge; basically, the higher the second number the faster the chip.
It would be nice if there was something in the name to easily tell consumers the number of cores and threads, but the majority of consumers just want the fastest chip for their money and don't care how many cores or threads it has.
The ix part tells enthusiasts the number of cores/threads/turbo with the i3 having 2/4/no, the i5 having 4/4/yes and i7 4/8/yes. I find this much simpler than the 2010 chips which had some dual and some quad core i5 chips for example.
I think AMD's GPUs have a sensible naming convention (except for the 6800/6900 renaming) without the additional i3/i5/i7 modifier, by using the second number as the tier indicator while maintaining the rule of thumb that "a higher number within a generation means faster". If Intel adopted something similar it would have been better.
That said, I wish they'd stick with a naming convention for at least 3 or 4 generations...
nimsaw - Monday, January 3, 2011 - link
",,but until then you either have to use the integrated GPU alone or run a multimonitor setup with one monitor connected to Intel’s GPU in order to use Quick Sync"So have you tested the Transcoding with QS by using an H67 chipset based motherboard? The Test Rig never mentions any H67 motherboard. I am somehow not able to follow how you got the scores for the Transcode test. How do you select the codepath if switching graphics on a desktop motherboard is not possible? Please throw some light on it as i am a bit confused here. You say that QS gives a better quality output than GTX 460, so does that mean, i need not invest in a discrete GPU if i am not gaming. Moreover, why should i be forced to use the discrete GPU in a P67 board when according to your tests, the Intel QS is giving a better output.
Anand Lal Shimpi - Monday, January 3, 2011 - link
I need to update the test table. All of the Quick Sync tests were run on Intel's H67 motherboard. Presently if you want to use Quick Sync you'll need to have an H67 motherboard. Hopefully Z68 + switchable graphics will fix this in Q2.
Take care,
Anand
7Enigma - Monday, January 3, 2011 - link
I think this needs to be a front page comment because it is a serious deficiency that all of your reviews fail to properly describe. I read them all and it wasn't until the comments came out that this was brought to light. Seriously SNB is a fantastic chip but this CPU/mobo issue is not insignificant for a lot of people.
Wurmer - Monday, January 3, 2011 - link
I haven't read through all the comments, and sorry if it's been said, but I find it weird that the most "enthusiast" chip, the K, comes with the better IGP when most people buying this chip will for the most part end up buying a discrete GPU.
Akv - Monday, January 3, 2011 - link
It's being said in reviews from China to France to Brazil, etc.
nimsaw - Monday, January 3, 2011 - link
Strangely enough I also have the same query. What is the point of better integrated graphics when you cannot use them on a P67 mobo?
Also, I came across this screenshot
http://news.softpedia.com/newsImage/Intel-Sandy-Br...
where in the right-hand corner you have a drop-down menu which has Intel Quick Sync selected. Will you see a discrete GPU if you expand it? Does that not mean switching between graphics solutions? In the review it's mentioned that switchable graphics has yet to find its way to desktop mobos.
sticks435 - Tuesday, January 4, 2011 - link
It looks like that drop-down is dithered (greyed out), which means it's only displaying the QS system at the moment, but it could allow selecting between multiple options in the future, or maybe if you had 2 graphics cards, etc.
HangFire - Monday, January 3, 2011 - link
You are comparing video and not chipsets, right?
I also take issue with the statement that the 890GX (really the HD 4290) is the current onboard video cream of the crop. Test after test (on other sites) shows it to be a bit slower than the HD 4250, even though it has higher specs.
I also think Intel is going to have a problem with folks comparing their onboard HD3000 to AMD's HD 4290, it just sounds older and slower.
No word on Linux video drivers for the new HD2000 and HD3000? Considering what a mess KMS has made of the old i810 drivers, we may be entering an era where accelerated onboard Intel video is no longer supported on Linux.
mino - Wednesday, January 5, 2011 - link
Actually, the 890GX is just a re-badged 780G from 2008 with sideport memory.
And no, the HD 4250 is NOT faster. While some specific implementation of the 890GX without sideport _might_ be slower, it would also be cheaper and not really a "proper" representative.
(890GX without sideport is like saying an i3 with dual channel RAM is "faster" in games than an i5 with single channel RAM...)
krazyderek - Monday, January 3, 2011 - link
Putting the HD 3000 on the 2600K and 2500K parts ALMOST made sense as an up-sell, but you can't even use their IGP on a P series board when you're overclocking! If the Z series won't be out for a while, why the hell would I buy an overclocking chip now? So I can spend more money to replace my H series motherboard with a Z series? Nice try.
It's frustrating that you have to pick your sacrifice.... you either get the 3000 with the K SKU, or you get VT-d and TXT with the standard SKU. Intel doesn't have an offering with both, which is kind of ridiculous for high end chips.
mino - Wednesday, January 5, 2011 - link
Yeah, what is most disappointing is the lack of virtualization support even on the i3s (!)
For Christ's sake, virtualization is the most BASIC requirement for any box today, and even the s775 Pentiums, not to mention the WHOLE AMD lineup, have it!
For me this means nothing sub-i5 is usable in ANY capacity, business or private, while the i5s are (financially) overkill for most uses.
Well done Intel. You have just lost ~100 certain $100 sales this year. Whatever, I will have to wait for Llano for the mainstream stuff.
DrSlothy - Wednesday, January 12, 2011 - link
I think that's an error in the review table, though one I've seen in every Core review so far - did Intel marketing give out wrong specs?
Intel's website shows the entire Sandy Bridge line-up as having hardware virtualization (VT-x) support, though some parts are missing VT-d.
tech6 - Monday, January 3, 2011 - link
Another great review from AnandTech - thanks guys.
It seems odd that the 3000 series graphics engine is only included on parts designed for overclocking, while the boards that support overclocking can't handle integrated graphics. I would have thought the other way around would have made more sense.
In any case the 2600K and 2500K look like great value parts and are just what I was waiting for!
DanNeely - Monday, January 3, 2011 - link
Does anyone know if QuickSync will appear on LGA-2011 chips? I know they aren't going to have the general purpose GPU components, but this is enough of a performance booster that I'd think Intel would want to keep it on their high end consumer platform in some fashion.
ThaHeretic - Monday, January 3, 2011 - link
I see TXT in the last chart above with no explanation as to what it is or why it is differentiated. They -took out- functionality from the unlocked parts? That seems backwards...
Kevin G - Monday, January 3, 2011 - link
This functionality will likely appear in Sandy Bridge Xeons for socket 1155. Intel *generally* segments the Xeons by core count and clock speed, not by feature set like they do for consumer chips. The other feature Intel is holding back is ECC, which should be standard in socket 1155 Xeons.
DanNeely - Monday, January 3, 2011 - link
It's a hardware security feature. It's best known for the Trusted Platform Module, an onboard cryptographic device used in some corporate computers but not in consumer systems. Probably they just want to keep people from building high end secure servers with cheap, overclocked K parts instead of the much more profitable Xeons at 2-3x as much.
http://en.wikipedia.org/wiki/Trusted_Execution_Tec...
kache - Monday, January 3, 2011 - link
I think I'll wait for the SB Xeons and the new EVGA SR-2, hoping that EVGA will release it.
adrien - Monday, January 3, 2011 - link
Numbers will probably speak for themselves. ;-)
17:37 ~ % md5sum *.png
bee3c83b3ef49504e0608a601a03bfc2 6870.png
bee3c83b3ef49504e0608a601a03bfc2 snb.png
So the 6870 and cpu-rendering have the same image.
CreativeStandard - Monday, January 3, 2011 - link
PCMag reports these new i7s only support up to DDR3-1333, but you are running faster. Is PCMag wrong? What are the maximum supported memory speeds?
Akv - Monday, January 3, 2011 - link
Is it true that it has embedded DRM?
DanNeely - Monday, January 3, 2011 - link
Only to the extent that, like all Intel Core 2 and later systems, it supports a TPM module to allow locking down servers in the enterprise market, and that the system *could* be used to implement consumer DRM at some hypothetical point in the future; but since consumer systems aren't sold with TPM modules it would have no impact on systems bought without one.
shabby - Monday, January 3, 2011 - link
DRM is only on the H67 chipset, and it's basically just for watching movies on demand and nothing more.
Akv - Monday, January 3, 2011 - link
Mmmhh... ok...
Nevertheless, the Intel HD + H67 combination was already modest; if it has DRM in addition then it becomes not particularly appealing.
marraco - Monday, January 3, 2011 - link
Thanks for adding the Visual Studio compilation benchmark. (Although you omitted the 920.)
It seems that neither SSDs nor better processors can do much for that annoying time waster. It does not matter how much money you throw at it.
I also wish to see SLI/3-way SLI/Crossfire performance, since the better cards are frequently CPU bottlenecked. How much better does it do relative to the i7 920? And with a good cooler at 5GHz?
Note: you mention 3 video cards in the test setup, but which one is used in the benchmarks?
Anand Lal Shimpi - Monday, January 3, 2011 - link
You're welcome on the VS compile benchmark. I'm going to keep playing with the test to see if I can use it in our SSD reviews going forward :)
I want to do more GPU investigations but they'll have to wait until after CES.
I've also updated the gaming performance page indicating what GPU was used in each game, as well as the settings for each game. Sorry, I just ran out of time last night and had to catch a flight early this morning for CES.
Take care,
Anand
c0d1f1ed - Monday, January 3, 2011 - link
I wonder how this CPU scores with SwiftShader. The CPU part actually has more computing power than the GPU part. All that's lacking to really make it efficient at graphics is support for gather/scatter instructions. We could then have CPUs with more generic cores instead.aapocketz - Monday, January 3, 2011 - link
I have read that CPU overclocking is only available on P67 motherboards, and H67 motherboards cannot overclock the CPU, so you can either use the onboard graphics OR get overclocking? Is this true?
"K-series SKUs get Intel’s HD Graphics 3000, while the non-K series SKUs are left with the lower HD Graphics 2000 GPU."
What's the point of improving the graphics on the K series, if pretty much everyone who gets one will have a P67 motherboard which cannot even access the GPU?
Let me know if I am totally not reading this right...
MrCromulent - Monday, January 3, 2011 - link
Great review as always, but on the HTPC page I would have liked a comparison of the deinterlacing quality of SD (480i/576i) and HD (1080i) material. ATI's onboard chips don't offer vector adaptive deinterlacing for 1080i material - can Intel do better?
My HD 5770 does a pretty fine job, but I want to lose the dedicated video card in my next HTPC.
Loki726 - Monday, January 3, 2011 - link
Thanks a ton Anand for adding a compiler benchmark. I spend the vast majority of my time on builds and this will help me spec out a few new machines. It's interesting to see results indicating that I should not go anywhere near a low-end Sandy Bridge system, and that a lot of cheap AMD cores might not be a bad idea.
estee - Monday, January 3, 2011 - link
Can't believe the 23.976Hz output bug is still in SB after all this time. Several years ago the G35 had this issue and Intel proclaimed they'd have a fix for it. Subsequently the G45 still had the problem, and even the iCores, but SB? C'mon.... it's a big issue for HTPC buffs, because there's too much judder from 1) LCD displays and 2) 3:2 cadencing from film to video conversion, so 1:1 (or rather 5:5 for most 120Hz sets) was a must for large screen HTPC setups. Yes, the bitstreaming is good and all, but most folks are content with just 7.1 DD/DTS output. I guess we'll have to wait (again) for IB and cling on to my ol' nVidia 9300 for now. :(
mastrdrver - Monday, January 3, 2011 - link
I was just looking at the downloadable pictures, comparing them, and noticed a couple of differences. Maybe they are just a driver tweak, but I thought I remembered ATI and/or nVidia getting slammed in the past for pulling similar tactics.
The first thing I noticed was when comparing the AA shots in COD. It appears that maybe the Sandy Bridge graphics isn't applying AA to the twigs on the ground. Or is this just an appearance thing where Intel might have a different algorithm that's causing this?
The second is a little more obvious to me. In the Dirt 2 pictures I notice that Sandy Bridge is blurring and not clearly rendering the distant objects. The sign on the right side is what caught my eye.
One last thing is the DAO pictures. I've seen someone (in the past) post up pictures of the exact same place in the game. The quality looks a lot better than what Anand has shown and I was wondering if that is correct. I don't have the game so I have no way to confirm.
As always Anand, I appreciate the time you and your staff take to do all of your articles and the quality that results. It's just one of the reasons why I've always found myself coming back here ever since the early years of your website.
RagingDragon - Monday, January 3, 2011 - link
Why don't K series parts get the full suite of virtualization features?
xxtypersxx - Monday, January 3, 2011 - link
Anand,
Great review as always, I love the in-depth feature analysis that AnandTech provides.
BIOS updates have been released for Gigabyte, Asus, and Intel P67 boards that correct an internal PLL overvoltage issue that was artificially limiting overclocks. Users in the thread over at HWbot are reporting that processors that were stuck at 4.8GHz before are now hitting 5.4GHz.
http://hwbot.org/forum/showthread.php?t=15952
Would you be able to do a quick update on the overclocking results for your chips with the new BIOS updates?
Gothmoth - Monday, January 3, 2011 - link
".....Sandy Bridge will be worth the upgrade for Quick Sync alone."you say that and a few pages before you say it will not work on PC´s with a discreet grafic card.
i don´t know you but videoencoding is done here on performance systems.
system that have discreet GFX cards like a 460 GTX or better.
and i think most enthusiast will buy a P67 mainboard and that would mean NO QUICK SYNC for them.
so please do an update on your review and clarify what exactly happens when you use a P67 mainboard with a discreet GFX card.
will quick sync really don´t work...??
Gothmoth - Monday, January 3, 2011 - link
Please make clear how you tested Quick Sync in your review. I saw a few comments from people who are confused about it.
I guess you tested Quick Sync on an H67 mainboard, but I did not notice you mentioning that in the text.
To me it looks like Intel is screwing the users who buy these first-generation Sandy Bridge chipsets.
I will wait for Z68, that's for sure......
Manabu - Monday, January 3, 2011 - link
In the Quick Sync test I missed a comparison with x264, which is currently the fastest and highest-quality H.264 encoder on a fast CPU - for example, using the superfast and veryslow presets (one for speed with reasonable quality, the other for quality with reasonable speed). Also, with too high a bitrate, even the crappiest encoder will look good...
I also wanted to see how far you can undervolt an i5-2400 once it has hit the overclocking cap, and what the power consumption is then. The same for the other locked CPUs would be cool too. Also, what is the power consumption of the Sandy Bridge CPUs when running the Quick Sync hardware encoder?
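[Editor's note: a rough sketch of the x264 A/B comparison being asked for here, assuming ffmpeg built with libx264 is installed; the input file name and the 4Mbps target are illustrative placeholders, not settings from the review.]

```python
import subprocess
import time

SOURCE = "input.mkv"                  # placeholder source clip
PRESETS = ["superfast", "veryslow"]   # speed-oriented vs quality-oriented x264 presets

for preset in PRESETS:
    out = f"x264_{preset}.mp4"
    start = time.time()
    # Fixed bitrate so the quality difference between runs comes from the preset alone
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE,
         "-c:v", "libx264", "-preset", preset, "-b:v", "4000k",
         "-an", out],
        check=True,
    )
    print(f"{preset}: {time.time() - start:.1f}s -> {out}")
```

Comparing those two outputs against a Quick Sync encode of the same clip at the same bitrate would give the speed-versus-quality picture the comment asks for.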
NJoy - Monday, January 3, 2011 - link
Wow, what a SLAP in AMD's face! The idea they nursed for a gazillion years and were set to finally release sometime this week is brought to you, dear customer, first to market, with a sudden change in NDA deadline to please you sooner with a hyperperformer from Intel. Who cares that NDAs play an important part in all planning activities, PR, logistics and whatever follows - what matters is that they are first to put the GPU on-die, and with a bit of PR that is what the average Joe will now know. Snatch another design win. Hey, AMD, remember that pocket money the court ordered us to pay you? SLAP! And the licence? SLAP! Nicely planned and executed while everyone was distracted by the DAAMIT versus nVidia battles and, ironically, a lack of leaks from the red camp.
I just hope Bulldozer will kick some ass, even though I doubt it's really going to happen...
DanNeely - Monday, January 3, 2011 - link
If AMD hadn't put a steel-toed boot into their own nuts by blowing the original Q3'09 release date for Fusion, I'd have more sympathy for them. Intel won because they made their launch date while the competition blew theirs by at least half a year.
GeorgeH - Monday, January 3, 2011 - link
With the unlocked multipliers, the only substantive difference between the 2500K and the 2600K is hyperthreading. Looking at the benchmarks here, it appears that at equivalent clockspeeds the 2600K might actually perform worse on average than the 2500K, especially if gaming is a high priority.
A short article running both the 2500K and the 2600K at equal speeds (say "stock" @3.4GHz and overclocked @4.4GHz) might be very interesting, especially as a possible point of comparison for AMD's SMT approach with Bulldozer.
Right now it looks like if you're not careful you could end up paying ~$100 more for a 2600K instead of a 2500K and end up with worse performance.
Gothmoth - Monday, January 3, 2011 - link
And what benchmarks are you speaking about? As Anand wrote, HT has no negative influence on performance.
GeorgeH - Monday, January 3, 2011 - link
The 2500K is faster in Crysis, Dragon Age, World of Warcraft and Starcraft II, despite being clocked slower than a 2600K. If it weren't for that clockspeed deficiency, it looks like it also might be faster in Left 4 Dead, Far Cry 2, and Dawn of War II. Just about the only games that look like a "win" for HT are Civ5 and Fallout 3.
The 2500K also wins the x264 HD 3.03 1st Pass benchmark, and comes pretty close to the 2600K in a few others, again despite a clockspeed deficiency.
Intel's new "no overclocking unless you get a K" policy looks like it might be a double-edged sword. Ignoring the IGP stuff, the only difference between a 2500K and a 2600K is HT; if you're spending extra for a K you're going to be overclocking, making the 2500K's base clockspeed deficiency irrelevant. That means HT's deficiencies won't be able to hide behind lower clockspeeds and locked multipliers (as with the i5-7xx and i7-8xx.)
In the past HT was a no-brainer; it might have hurt performance in some cases but it also came with higher clocks that compensated for HT's shortcomings. Now that Intel has cut enthusiasts down to two choices, HT isn't as clear cut, especially if those enthusiasts are gamers - and most of them are.
Shorel - Monday, January 3, 2011 - link
I don't ever watch soap operas (why anybody would enjoy such crap is beyond me), but I game a lot. All my free time is spent gaming.
High frame rates remind me of good video cards (or games that are not cutting edge), and the so-called 24p film look reminds me of Michael Bay movies where stuff happens fast but you can't see anything, like in Transformers.
Please don't assume that your readers know or enjoy soap operas. Standard TV is for old people, and movies look amazing at 120Hz when almost all you do is game.
mmcc575 - Monday, January 3, 2011 - link
Just want to say thanks for such a great opening article on desktop SNB. The VS2008 benchmark was also a welcome addition!
The SNB launch and CES together must mean a very busy time for you, but it would be great to get some clarification/more in-depth articles on a couple of areas.
1. To clarify, if the LGA-2011 CPUs won't have an on-chip GPU, does this mean they will forego arguably the best feature in Quick Sync?
2. Would be great to have some more info on the Overclocking of both the CPU and GPU, such as the process, how far you got on stock voltage, the effect on Quick Sync and some OC'd CPU benchmarks.
3. A look at the PQ of the on-chip GPU when decoding video compared to discrete low-end rivals from nVidia and AMD, as it is likely that the main market for this will be those wanting to decode video as opposed to play games. If you're feeling generous, maybe a run through the HQV benchmark? :P
Thanks for reading, and congrats again for having the best launch-day content on the web.
ajp_anton - Monday, January 3, 2011 - link
In the Quantum of Solace comparison, the x86 and Radeon screens are the same.
I dug up a ~15Mbit 1080p clip with some action and transcoded it to 4Mbit 720p using x264, so entirely software-based. My i7 920 does 140fps, which isn't too far away from Quick Sync. I'd love to see some quality comparisons between x264 on its fastest settings and QS.
ajp_anton - Monday, January 3, 2011 - link
Also, in the Dark Knight comparison, it looks like the Radeon used the wrong levels (so not the encoder's fault). You should recheck the settings used both in the encoder and when you took the screenshot.
testmeplz - Monday, January 3, 2011 - link
Thanks for the great review! I believe the colors in the legend of the graphs on the graphics overclocking page are mixed up.
Thanks,
Chris
politbureau - Monday, January 3, 2011 - link
Very concise. Cheers.
One thing I miss is clock-for-clock benchmarks to highlight the effect of the architectural changes. Though perhaps not within the scope of this review, it would nonetheless be interesting to see how SNB fares against Bloomfield and Lynnfield at similar clock speeds.
Cheerio
René André Poeltl - Monday, January 3, 2011 - link
Good performance for a bargain - that was AMD's terrain. Now Sandy Bridge at ~$200 targets AMD's clientele. A Core i5-2500K for $216 - that's a bargain (a ~$40-value GPU is even included). And the overclocking ability!
If I understood it correctly: the Core i7 2600K at 4.4GHz drawing 111W under load is quite efficient. At 3.4GHz it draws 86W, and ~30% more clock at 4.4GHz means ~30% more performance ... that would mean power consumption scales roughly 1:1 with performance (a quick check of that arithmetic follows this comment).
Many people need more performance per core, but not more cores. At 111W under load this would be the product they wanted - e.g. people who make music on PCs, not just playing MP3s but mixing and producing music.
But for more cores, the X6 Thuban is the better choice on a budget. For building a server on a budget, for example, Intel has no product to rival it. Or developers - they may also want as many cores as they can get to test their apps' multithreading performance.
And AMD also scores with its more conservative approach to upgrades, e.g. motherboards. People don't like buying a new motherboard every time they upgrade the CPU.
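[Editor's note: a quick check of the scaling arithmetic above, using the clock speeds and load power figures quoted in the comment; the assumption that performance tracks clock speed linearly is the commenter's, not a measured result.]

```python
# Figures quoted in the comment above
clock_stock, clock_oc = 3.4, 4.4      # GHz
power_stock, power_oc = 86.0, 111.0   # W under load

clock_gain = clock_oc / clock_stock - 1   # ~29% higher clock
power_gain = power_oc / power_stock - 1   # ~29% more power draw

print(f"clock: +{clock_gain:.1%}, power: +{power_gain:.1%}")
# If performance scales roughly linearly with clock, power per unit of
# performance stays close to 1:1, which is the point being made above.
```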
mosu - Monday, January 3, 2011 - link
If I wanted to spend a big pile of money every year on something I'll sell on eBay at half price a few months later, and if I liked crappy-quality images on my monitor, then I would buy Sandy Bridge... but sorry, I'm no no-brainer for Intel.
nitrousoxide - Monday, January 3, 2011 - link
It really impressed me, as I do a lot of video transcoding and it's extremely slow on my triple-core Phenom II X3 720, even though I overclocked it to 4GHz. But there is one question: the acceleration needs the EUs in the GPU, and the GPU is disabled on the P67 chipset. Does that mean that if I pair my SNB with a P67 motherboard, I won't be able to use the transcoding accelerator?
nitrousoxide - Monday, January 3, 2011 - link
Not talking about SNB-E this time - I know it will be the performance king again. But I wonder if Bulldozer can at least gain some performance advantage over SNB, because it makes no sense that 8 cores running at a stunning 4.0GHz wouldn't overrun 4 cores below 3.5GHz, no matter what the architectural differences between these two chips. SNB is only the new generation's mid-range part; it will be outperformed by high-end Bulldozers. AMD will hold the low end, just as it does now; as long as Bulldozer regains some of the share the Phenoms lost in the mainstream and performance markets, things will be much better for it. The enthusiast market is not AMD's cup of tea, just as in GPUs: let nVidia take the performance crown and strike from the lower performance niches.
strikeback03 - Tuesday, January 4, 2011 - link
I don't think we'll know until AMD releases Bulldozer and Intel counters (if they do). Seems the SNB chips can run significantly faster than they do right now, so if necessary Intel could release new models (or a firmware update) that allows turbo modes up past 4GHz.smashr - Monday, January 3, 2011 - link
This review and others around the web refer to the CPUs as 'launching today', but I do not see them on NewEgg or other e-tailer sites.
When can we expect these babies at retail?
JumpingJack - Monday, January 3, 2011 - link
They are already selling in Malaysia, but if you don't live in Malaysia then you are SOL :) ... I see rumors around that the NDA was supposed to expire on the 5th with retail availability on the 9th... I was thinking about making the leap, but I think I will hold off for more info on BD and socket 2011 SB.
Intel has essentially shot itself in the foot this time. Between the letter restrictions, the new chipset and the crazy differentiation between P and H chipsets, it's a mess. Not to mention the lack of USB 3.0, no way to have an overclocking mobo with integrated graphics, and the stupid Turbo Boost restrictions.
I'll go further and say the i3 is pure crap, and while it's better than the old Core i3, they are essentially leaving the biggest market - everything up to $200 - wide open to AMD.
Those who purchase CPUs at $200 and higher are in luck with the 2500 and 2600 variants, but for the majority of us who buy CPUs below $200 it's crap.
Essentially, if you want gaming performance you buy the i3 2100, but if you want better overall performance you go for a Phenom II.
Hopefully AMD comes up with some great CPUs below $200 that have 4 cores, unrestricted turbo and unlocked multipliers.
Arakageeta - Tuesday, January 4, 2011 - link
It seems that these benchmarks test the CPU (cores) and GPU parts of Sandy Bridge separately. I'd like to know more about the effects of the CPU and GPU (usually data-intensive) sharing the L3 cache.
One advantage of a system with a discrete GPU is that the GPU and CPUs can happily work simultaneously without largely affecting one another. This is no longer the case with Sandy Bridge.
A test I would like to see is a graphics-intensive application running while another application performs some multi-threaded ATLAS-tuned LAPACK computations. Does either the GPU or the CPUs swamp the L3 cache? Are there any instances of starvation? What happens to the performance of each application? What happens to frame rates? What happens to execution times?
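[Editor's note: a minimal sketch of the CPU half of the test described above, assuming numpy is linked against a multithreaded BLAS (not necessarily ATLAS); run it once on an idle system and once alongside a graphics-heavy workload, compare the timings, and watch frame rates separately.]

```python
import time
import numpy as np

N = 2048      # large enough that the working set pressures the shared L3
REPS = 20

a = np.random.rand(N, N)
b = np.random.rand(N, N)

times = []
for _ in range(REPS):
    t0 = time.perf_counter()
    c = a @ b                      # multithreaded GEMM via the linked BLAS
    times.append(time.perf_counter() - t0)

times.sort()
print(f"median GEMM time: {times[len(times) // 2] * 1000:.1f} ms")
```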
morpheusmc - Tuesday, January 4, 2011 - link
To me it seems that marketing, rather than engineering, is now defining the processors at Intel. That is always the case to a degree, but I think now it is more evident than ever. Essentially, if you want the features that the new architecture brings, you have to shell out for the higher-end models.
My ideal processor would be an i5-2520M for the desktop: reasonable clocks, good turbo speeds (they could be higher for the desktop since the TDP is not as limited), HT, good graphics etc. The combination of 2 cores and HT provides a good balance between power consumption and performance for most users.
Its desktop equivalent price-wise is the 2500, which has no HT and a much higher TDP because of the four cores. Alternatively, maybe the 2500S, 2400S or 2390T could be considered if they aren't too overpriced.
Intel has introduced too much differentiation in this generation, and in an Apple-like fashion, i.e. they force you to pay more for stuff you don't need, just for an extra feature (eg. VT support, good graphics etc) that practically costs nothing since the silicon is already there. Bottomline, if you want to have the full functionality of the silicon that you get, you have to pay for the higher end models.
Moreover, having features for specific functions (AES, transcoding etc) and good graphics makes more sense in lower-end models where CPU power is limited.
This is becoming like the software market, where you have to pay extra for licenses for specific functionalities.
I wouldn't be surprised if Intel starts selling "upgrade licenses" sometime in the future that will simply unlock features.
I strongly prefer AMD's approach, where all the features are available on all models.
I am also a bit annoyed that there is very little discussion of this problem in the review. I agree that technologically Sandy Bridge is impressive, but the artificial limiting of functionality is anti-technological.
ac2 - Tuesday, January 4, 2011 - link
Agreed, but apart from the K-series / higher IGP / motherboard mess-up (which I think should be cleared up shortly), all the rest of it is just smart product marketing...
It irritates readers of AnandTech, but for most people who buy off-the-shelf it's all good, with integrators patching up any shortcomings in the core/chipset.
The focus does seem to be mobile, low power and video transcode, almost a recipe for a MacBook!!
ac2 - Tuesday, January 4, 2011 - link
Oh yes, and another-bloody-socket-thank-you-so-much...
Let's not forget that the only reason Intel can get away with all this is that AMD have been off their game for a while now..
Wonder if ARM will be the next one to give Intel the occasional kick it needs to be a bit more customer friendly...
Hrel - Tuesday, January 4, 2011 - link
The HD5670 can be had for 65 bucks, so why include a 70 dollar 5570? Illogical.
Taft12 - Tuesday, January 4, 2011 - link
He's talking about the general online price across a variety of sites and OEMs (Sapphire, Asus, Palit, etc) not a one-off MIR-inclusive price that can be found only by the obsessive.kevith - Tuesday, January 4, 2011 - link
Man, this is awesome, my wallet is trying to hide, but it won't do it any good...
I took the jump to AMD when Phenom II arrived - a friend of mine bought my C2D E7400 system - and I regretted it as soon as I was done building. There's no two ways about it: Intel systems, if they aren't absolute low-end, run so much smoother.
Which seems to be the case again, even at a reasonable price.
There's one thing about the review I don't really understand: "...Another Anandtech editor put it this way: You get the same performance at a lower price..."
Has he read the review?
As far as I can see, you get pretty much more performance at a lower price.
xsilver - Tuesday, January 4, 2011 - link
Is there going to be a memory scaling test for Sandy Bridge? E.g. how much of a performance gap is there between DDR3-1333 and DDR3-2000 RAM?
Also, does Sandy Bridge's GPU allow for multi-monitor setups? What about when paired with a discrete GPU?
RicowSQL - Tuesday, January 4, 2011 - link
Hey guys, two things I'm missing from the SB reviews around the web:
1) How well does the new IMC scale with memory clocks? I guess it's a matter of time until someone performs an in-depth analysis on that matter, but I'm particularly interested in that...
2) Adobe's Flash decoding can take advantage of Intel IGP acceleration through Clear Video technology. Will it work on the new HD 2000/3000 series as well?
ibudic1 - Tuesday, January 4, 2011 - link
But why not VS 2010?
Taft12 - Tuesday, January 4, 2011 - link
Same reason it takes a while for AT to provide comparisons of the latest games - it takes an eternity to run a benchmark on all CPUs going back a couple generations.
Taft12 - Tuesday, January 4, 2011 - link
I think this might be an error in your chart -- the last one on page 3 shows a Y for the i3-2100 in the AES-NI column. I would love to have this feature on an i3 CPU, but the following paragraph states "Intel also uses AES-NI as a reason to force users away from the i3 and towards the i5" which leads me to believe that i3 doesn't have said feature.
Please let me know if I'm wrong so I can get my pre-order in!!!
nedjinski - Tuesday, January 4, 2011 - link
Please comment on the Sandy Bridge / DRM 'controversy'.
Thanks.
Taft12 - Tuesday, January 4, 2011 - link
You first.
ReaM - Tuesday, January 4, 2011 - link
The six-core 980X still owns them in all tests where all cores are used.
I don't know, 22k in Cinebench is really not a reason to buy the new i7 - I reach 24k on air with an i7 860, and my i5 runs at 20k on air.
Short-term performance is really good, but I don't care whether I wait 7 seconds or 8 for a package to unpack, and for long-term work like rendering there isn't a reason to upgrade either.
I recommend you get an older 1156 setup off eBay and save a ton of money.
I have the i5 in a hackintosh; I am wondering if 1155 will be hackintoshable.
Spivonious - Tuesday, January 4, 2011 - link
I have to disagree with Anand; I feel the QuickSync image is the best of the four in all cases. Yes, there is some edge-softening going on, so you lose some of the finer detail that ATI and SNB give you, but when viewing on a small screen such as an iPhone/iPod, I'd rather have the smoothed-out shapes than pixel-perfect detail.
I started my computing days with Intel, but I'm so put off by the way Intel is marketing their new toys. Get this, but you can't have that... buy that, but your purchase must include other things. And even after I throw my wallet at Intel, I still would not have an OC'd Sandy Bridge with a useful IGP and Quick Sync. But wait, throw more money at a Z68 a little later. Oh... and there's a shiny new LGA2011 in the works. Anyone worried that they started naming sockets after the year they come out? Yay for spending!
AMD.. please save us!
MrCrispy - Tuesday, January 4, 2011 - link
Why the bloody hell don't the K parts support VT-d?! I can only imagine it will be introduced at a price premium in a later part.
slick121 - Tuesday, January 4, 2011 - link
Wow I just realized this. I really hate this type of market segmentation.
Navier - Tuesday, January 4, 2011 - link
I'm a little confused why Quick Sync needs to have a monitor connected to the MB to work. I'm trying to understand why having a monitor connected is so important for video transcoding, vs. playback etc.
Is this a software limitation? Either in the UEFI (BIOS) or drivers? Or something more systemic in the hardware.
What happens on a P67 motherboard? Does the P67 board disable the on-die GPU, effectively disabling Quick Sync support? This seems a very unfortunate oversight for such a promising feature. Will a future driver/firmware update resolve this limitation?
Thanks
NUSNA_moebius - Tuesday, January 4, 2011 - link
Intel HD 3000 - ~115 Million transistors
AMD Radeon HD 3450 - 181 Million transistors - 8 SIMDs
AMD Radeon HD 4550 - 242 Million transistors - 16 SIMDs
AMD Radeon HD 5450 - 292 Million transistors - 16 SIMDs
AMD Xenos (Xbox 360 GPU) - 232 Million transistors + 105 Million (eDRAM daughter die) = 337 Million transistors - 48 SIMDs
Xenos, I think, is in the end still a good two to two and a half times more powerful than the Radeon 5450. Xenos does not have to be OpenCL, DirectCompute, DX11 or even fully DX10 compliant (a 50 million transistor jump from the 4550 going from DX10.1 to 11), nor does it contain hardware video decode or integrated HDMI output with a 5.1 audio controller (even the old Radeon 3200 clocks in at 150 million+ transistors). What I would like some clarification on is whether the transistor count for Xenos includes northbridge functions..............
Clearly PC GPUs have insane transistor counts in order to be highly compatible. It is commendable how well the Intel HD 3000 does with only 115 million, but it's important to note that older products like the X1900 had 384 million transistors back when DX9.0c was the aim, and in pure throughput it should match or closely trail Xenos at 500MHz. Going from the 3450 to the 4550, we go up another 60 million for 8 more SIMDs of a similar DX10.1-compatible nature, as well as the probable increases for hardware video decode, etc. So basically, to come into a similar order as Xenos in terms of SIMD count (of which Xenos has 48 of its own type, I must emphasize), we would need 60 million transistors per 8 SIMDs, which would put us at about 360 million transistors for a 48-SIMD (240 SP) AMD part that is DX10.1 compatible and not equipped with anything unrelated to graphics processing (the arithmetic is spelled out after this comment).
Yes, it's a most basic comparison (and probably fundamentally wrong in some regards), but I think it sheds some light on the idea that the Radeon HD 5450 really still pales in comparison to the Xenos. We have much better GPUs like Redwood that are twice as powerful with their higher clock speeds + 400 SPs (627 Million transistors total) and consume less energy than Xenos ever did. Of course, this isn't taking memory bandwidth or framebuffer size into account, nor the added benefits of console optimization.
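[Editor's note: a transparent restatement of the back-of-the-envelope scaling in the comment above, using only the transistor counts it quotes; it ignores fixed overhead (display, video decode, uncore), so the output is illustrative rather than a real die estimate.]

```python
# Transistor counts quoted in the comment (millions) and SIMD counts
hd3450 = {"transistors": 181, "simds": 8}
hd4550 = {"transistors": 242, "simds": 16}

# Marginal cost of the extra 8 SIMDs between the two parts (~60M)
per_8_simds = hd4550["transistors"] - hd3450["transistors"]

# Scale that marginal cost to a Xenos-like 48-SIMD part, ignoring fixed overhead
target_simds = 48
estimate = per_8_simds * target_simds / 8

print(f"~{per_8_simds}M per 8 SIMDs -> ~{estimate:.0f}M for a {target_simds}-SIMD part")
```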
frankanderson - Tuesday, January 4, 2011 - link
I'm still rocking my Q6600 + Gigabyte X38 DS5 board, upgraded to a GTX580 and been waiting for Sandy, definitely looking forward to this once the dust settles..
Thanks Anand...
Spivonious - Wednesday, January 5, 2011 - link
I'm still on E6600 + P965 board. Honestly, I would upgrade my video card (HD3850) before doing a complete system upgrade, even with Sandy Bridge being so much faster than my old Conroe. I have yet to run a game that wasn't playable at full detail. Maybe my standards are just lower than others.aviat72 - Tuesday, January 4, 2011 - link
Though SB will be great for some applications, there are still rough edges in terms of the overall platform. I think it will be best to wait for SNB-E or at least the Z68. SNB-E seems to be the best future-proofing bet.
I also wonder how a part rated for 95W TDP was drawing 111W in the 4.4GHz OC (the Power Consumption Page). SB's power budget controller must be really smart to allow the higher performance without throttling down, assuming your cooling system can manage the thermals.
marraco - Tuesday, January 4, 2011 - link
I wish to know more about this Sandy Bridge "feature":http://www.theinquirer.net/inquirer/news/1934536/i...
PeterO - Tuesday, January 4, 2011 - link
Anand, Thanks for the great schooling and deep test results -- something surely representing an enormous amount of time to write, produce, and massage within Intel's bumped-forward official announcement date.
Here's a crazy work-around question:
Can I have my Quick Sync cake and eat my Single-monitor-with-Discrete-Graphics-card too if I, say:
1). set my discrete card output to mirror Sandy Bridge's IGP display output;
2). and, (should something exist), add some kind of signal loopback adapter to the IGP port to spoof the presence of a monitor? A null modem, of sorts?
-- I have absolutely no mobo/video signaling background, so my idea may be laugh in my face funny to anybody who does but I figure it's worth a post, if only for your entertainment. :)
Hrel - Wednesday, January 5, 2011 - link
It makes me SO angry when Intel does stupid shit like disabling HT on most of their CPUs even though the damn CPU already has it built in and already paid for. It literally wouldn't cost them ANYTHING to turn HT on for those CPUs, yet the greedy bastards don't do it.
Moizy - Wednesday, January 5, 2011 - link
The HD Graphics 3000 performance is pretty impressive, but won't be utilized by most. Most who utilize Intel desktop graphics will be using the HD Graphics 2000, which is okay, but I ran back to the AMD Brazos performance review to get some comparisons.
In Modern Warfare 2, at 1024 x 768, the new Intel HD Graphics 2000 in the Core i3 2100 barely bests the E-350. Hmm--that's when it's coupled with a full-powered, hyper-threaded desktop compute core that would run circles around the compute side of the Brazos E-350, an 18w, ultra-thin chip.
This either makes Intel's graphics less impressive, or AMD's more impressive. For me, I'm more impressed with the graphics power in the 18w Brazos chip, and I'm very excited by what mainstream Llano desktop chips (65w - 95w) will bring, graphics-wise. Should be the perfect HTPC solution, all on the CPU (ahem, APU, I mean).
I'm very impressed with Intel's video transcoding, however. Makes CUDA seem...less impressive, like a bunch of whoop-la. Scary what Intel can do when it decides that it cares about doing it.
andywuwei - Wednesday, January 5, 2011 - link
Not sure if anybody else noticed: the CPU temp of the i5 @ 3.2GHz is ~140 degrees. Any idea why it is so high?
SantaAna12 - Wednesday, January 5, 2011 - link
Did I miss the part where you tell us about the DRM built into this chip?
Cb422 - Wednesday, January 5, 2011 - link
When will Sandy Bridge be available on Newegg or Amazon for me to purchase?
DesktopMan - Thursday, January 6, 2011 - link
Very disappointed in the lack of VT-d and TXT on the K variants. They are, after all, the high-end products. I also find it very peculiar that only the K variants get the faster GPU, as those are the CPUs most likely to be paired with a discrete GPU.
RagingDragon - Thursday, January 6, 2011 - link
Agreed. I find the exclusion of VT-d particularly irritating: many of the overclockers and enthusiasts to whom the K chips are marketed also use virtualization. Though I don't expect many enthusiasts, if any, to miss TXT (it's more for locked down corporate systems, media appliances, game consoles, etc.).
With the Z68 chipset coming in the indeterminate near future, the faster GPU on K chips would have made sense if the K chips came with every other feature enabled (i.e. if they were the "do everything" chips).
Also, I'd like to have the Sandy Bridge video encode/decode features separate from the GPU functionality - i.e. I'd like to choose between Intel and Nvidia/AMD video decode/encode when using a discrete GPU.
saikyan - Thursday, January 6, 2011 - link
"perhaps we should return to just labeling these things with their clock speeds and core counts? After all, it’s what Apple does—and that’s a company that still refuses to put more than one button on its mice. Maybe it’s worth a try."I hate to sound like the resident Mac fanboy (I'm platform agnostic) but I want to point out:
1. Apple sells by trim and display, they don't really make a big deal of the CPU (probably because they stick to low-end and midrange CPUs)
2. They have been shipping multi-button mice for nearly six years now. Come on!
vol7ron - Friday, January 7, 2011 - link
- gtx460 image quality definitely the worst
- 6870 image quality next
- quicksync/snb image quality are the best (marginally better than 6870); I did notice some color loss in the flowers behind the umbrella when I zoomed in on the quicksync picture, so I'd have to give SNB the title in terms of quality. QuickSync gets the title in terms of performance.
nitrousoxide - Monday, January 10, 2011 - link
lmao
Burticus - Monday, January 10, 2011 - link
My last Intel CPU was a Prescott 2.4GHz P4 OC'd to over 3GHz... back in 2004? My last 3 main system builds were all AMD... I was thinking about going to an X6 in the near future; now I guess maybe not. My price point is pretty much $200 for the CPU + motherboard, so maybe I'll have to wait a couple of months.
Suddenly my 2-year-old Phenom II seems very, very slow...
magnusr - Tuesday, January 11, 2011 - link
I just received my 2600K. It only had the normal fan - no special heatsink/fan for the 2600K, the same heatsink as the rest....
This is a fraud, since I based my decision to take the 2600K instead of the 2500K on the better heatsink and the cache.
mmcnally - Tuesday, January 11, 2011 - link
Same here.. Very disappointed, as I would have purchased a better heatsink if I had known. I guess I'll just do the install with the standard crap HS and hold off on overclocking until I get a better one.
swing848 - Tuesday, January 11, 2011 - link
Many of us are using older equipment, and for those of us with limited funds it would have been nice if you had added the Intel Q9650 and run all game benchmarks at 3.4GHz [the speed of the 2600K], except for the X4 975BE, which could be left at its default 3.6GHz.
I have a QX9650 that I purchased from eBay and it does 4GHz+ with ease in a Gigabyte P35-DS3R motherboard, even with my ancient cooler [Thermalright XP-90] that I pulled from a socket 478 motherboard [$5 adapter].
Note: I lapped the XP-90 with a slight convex shape to better use with un-lapped CPUs.
In any event, a "quick and dirty" or simple overclock would have yielded at least some usable information. To save time, no need to try to get the maximum speed from all components.
As long as the CPUs were already overclocked, you could run all benchmarks at those speeds, not just games. Many of us overclock to get more for our money.
You included the ancient Q6600 at its slow default speed - in some of the benchmarks. Why didn't you include it in all benchmarks?
Your normal benchmark page does not include a full, or nearly full, list of games and CPUs, so, comparisons are difficult to find, example here anandtech.com/bench/CPU/62
Where does this leave those of us with older equipment that is still chugging along?
Kell_sw - Thursday, January 13, 2011 - link
DRM inside the CPU? Are people blind? The sad thing is, everybody is going to buy this.
Sweeo - Friday, January 14, 2011 - link
I just bought a upgrade "kit" from an core 2 2.8 quad to i7 950 :(but I got 6 sata ports I noticed the new boards have 4+2
will the more advanced boards have more ?
Ahumado - Sunday, January 16, 2011 - link
I didn't see it discussed. Did I miss it?
auhgnist - Monday, January 17, 2011 - link
For example, between i3-2100 and i7-2600?
timminata - Wednesday, January 19, 2011 - link
I was wondering, does the integrated GPU provide any benefit if you're using it with a dedicated graphics card anyway (GTX470) or would it just be idle?
James5mith - Friday, January 21, 2011 - link
Just thought I would comment with my experience. I am unable to get Blu-ray playback, or even CableCard TV playback, with the Intel integrated graphics on my new i5-2500K w/ Asus motherboard. Why, you ask? The same problem Intel has always had: it doesn't handle EDIDs correctly when there is a receiver in the path between it and the display.
To be fair, I have an older Westinghouse monitor and an Onkyo TX-SR606. But the fact that all I had to do was reinstall my HD5450 (which I wanted to get rid of when I did the upgrade to Sandy Bridge) and all my problems were gone kind of points to the fact that Intel still hasn't gotten it right when it comes to EDIDs, HDCP handshakes, etc.
So sad too, because otherwise I love the upgraded platform for my HTPC. Just wish I didn't have to add-in the discrete graphics.
palenholik - Wednesday, January 26, 2011 - link
As far as I can understand from the article, you used just this one piece of software for all these tests, and I understand why. But is that enough to conclude that CUDA causes bad or low picture quality?
I am very interested in, and do research on, H.264 and x264 encoding and decoding performance, especially on the GPU. I have tested Xilisoft Video Converter 6, which supports CUDA, and I didn't have problems with low picture quality when using CUDA. I ran these tests on an nVidia 8600 GT for the TV station I work for, while researching a solution to compress video for sending over the internet with little or no quality loss.
So, could it be that ArcSoft Media Converter simply cooperates badly with CUDA?
And I must note how well the AMD Phenom II X6 performs here, comparable to the nVidia GTX 460. This means one could buy a motherboard with integrated graphics and a Phenom II X6 and get very good encoding performance in terms of speed and quality. Intel is the winner here, no doubt, but jumping from socket to socket and changing the whole platform troubles me.
Nice and very useful article.
ellarpc - Wednesday, January 26, 2011 - link
I'm curious why Bad Company 2 gets left out of Anand's CPU benchmarks. It seems to be a CPU-dependent game: when I play it, all four cores are nearly maxed out while my GPU barely reaches 60% usage, whereas most other games seem to be the opposite.
Kidster3001 - Friday, January 28, 2011 - link
Nice article. It cleared up much about the new chips I had questions on.
A suggestion. I have worked in the chip making business. Perhaps you could run an article on how bin-splits and features are affected by yields and defects. Many here seem to believe that all features work on all chips (but the company chooses to disable them) when that is not true. Some features, such as virtualization, are excluded from SKU's for a business reason. These are indeed disabled by the manufacturer inside certain chips (they usually use chips where that feature is defective anyway, but can disable other chips if the market is large enough to sell more). Other features, such as less cache or lower speeds are missing from some SKU's because those chips have a defect which causes that feature to not work or not to run as fast in those chips. Rather than throwing those chips away, companies can sell them at a cheaper price. i.e. Celeron -> 1/2 the cache in the chip doesn't work right so it's disabled.
It works both ways though. Some of the low end chips must come from better chips that have been down-binned, otherwise there wouldn't be enough low-end chips to go around.
katleo123 - Tuesday, February 1, 2011 - link
It is not expected to compete with Core i7 processors or take their place. Sandy Bridge uses fixed-function processing to produce better graphics at the same power consumption as the Core i series.
visit http://www.techreign.com/2010/12/intels-sandy-brid...
jmascarenhas - Friday, February 4, 2011 - link
Problem is, we need to choose between using the integrated GPU, which means an H67 board, or doing some overclocking with a P67. I wonder why we have to make this choice... it just means that if we don't game and the HD 3000 is fine, we have to go for the H67 and therefore can't OC the processor.....
jmascarenhas - Monday, February 7, 2011 - link
And what about those who want to OC and don't need a dedicated graphics board??? I understand Intel wanting to get money out of early adopters, but don't count on me.
fackamato - Sunday, February 13, 2011 - link
Get the K version anyway? The internal GPU gets disabled when you use an external GPU AFAIK.
dansus - Saturday, February 19, 2011 - link
Looking at the Quick Sync transcoding results, they are very interesting.
But which H.264 encoder is ArcSoft using? I'm guessing it's MainConcept. I'd like to compare QS with x264 to be sure of the results.
In future, it would be nice to see the original frame to compare with too. Without the original, comparing just the encoded frames means little.
7eventh - Sunday, February 20, 2011 - link
Looking at cbscores.com (using the current Cinebench 11.5), the 2600K is not THAT glorious at rendering speed ... Why did you use Cinebench 10?
pshen7 - Tuesday, February 22, 2011 - link
Who in the world named it Sandy Bridge? And Cougar Point is no better. They need a better marketing department. Seriously.
Peter Shen, Koowie.com
zzzxtreme - Thursday, March 3, 2011 - link
Does that mean I can't install Windows XP/DOS on UEFI motherboards?
dwade123 - Tuesday, March 8, 2011 - link
The Intel i3 2100 is so underrated. It beats AMD's fastest 6-core and older i7 quad-cores in many games and is only a little slower in other areas.
Wouggie - Tuesday, March 15, 2011 - link
With the even further improved i7 990X Extreme now out, with a base speed of 3.46GHz, which would be the better choice, considering I am going to be using a dedicated graphics card (Nvidia Quadro 4000)?
Also, what do you see on the horizon for triple-channel motherboards with more than 2 SATA III 6Gb/s connectors?
georgevt - Sunday, March 27, 2011 - link
The benchmarks against the AMD processors are useless. All they compare is core-to-core performance (4 cores to 4 cores). You should be comparing comparably priced processors/systems. For example, the 6-core AMD 1090T costs a hundred dollars less than the i7 2600 at newegg.com, yet your benchmarks fail to provide any such comparison. It's quite possible that for some applications the 6-core AMD may perform better than the more expensive 4-core i7 processors in your benchmarks.
scurrier - Friday, April 1, 2011 - link
Anand says, "frequency domain (how often pixels of a certain color appear)," but this definition of the frequency domain is incorrect. Frequency domain in the case of video is a 2 dimensional discrete cosine transform of the frame. It is not a count of pixels like a histogram (binning) or anything.aka_Warlock - Saturday, April 30, 2011 - link
Would be nice to see some test of how much of a performance difference lacking VT-d makes on the CPU.
AbdurRauf - Monday, May 2, 2011 - link
Does Quick Sync handle upscaling or only transcoding? Have you looked at the new WinFast HPVC1111 (SpursEngine x4) and compared it to Quick Sync, CUDA and Stream for encoding and upscaling?
samrty22331 - Wednesday, June 1, 2011 - link
Visual Studio 2010 Professional is not supported on my Intel i5 2500K.
So can you say how to install VS 2010 on it?
Okurka - Saturday, August 13, 2011 - link
The base clock of the HD 3000 GPU is 1100 MHz, not 850 MHz.
That makes the 1550 MHz an overclock of 40.9%, not 82.4% as stated in the article.
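[Editor's note: the two percentages, spelled out with the clock figures from the comment and from the article.]

```python
oc_clock = 1550                 # MHz, the overclocked GPU speed being discussed
for base in (1100, 850):        # Okurka's claimed base clock vs the article's figure
    print(f"{oc_clock} MHz over {base} MHz = +{oc_clock / base - 1:.1%}")
```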
khalnayak - Sunday, January 22, 2012 - link
I have Intel HD 3000 (Sandy Bridge) in my system and I wanted to get the game called "Oil Rush", but then I found a weird response about the game here: http://www.futurehardware.com/pc-gaming/288.htm . So I just wanted to know if anyone has tested the Intel HD 3000 (Sandy Bridge) with Oil Rush; any help with this will be highly appreciated.
thr0nez101 - Sunday, January 29, 2012 - link
I've got Intel HD Graphics 3000, and according to this forum/review it has a problem running Dawn of War 2 on low graphics... I have it set to max graphics and it runs a dream... same with a lot of games I play on it...
oliverr - Saturday, February 11, 2012 - link
Guys, is it safe to overclock the Intel HD 3000 GPU? I own a 2500K CPU. I can overclock the GPU to 1450MHz and it looks stable, but I don't know how to read the temperature of the GPU, so I'm afraid I could burn my GPU/CPU.
cerberaspeed12 - Thursday, February 16, 2012 - link
Hi. First of all, sorry for my English. I have a question. I have seen Dell laptops that are identical except that one has an Intel Core i3-2350M 2.3GHz, the other an Intel Core i5-2450M 2.5GHz, and the third an Intel Core i7-2670M 2.4GHz.
The prices are $600, $670 and $800. I do some live multi-channel audio production and .NET programming, so which one should I go for? Thanks.
indyaah - Tuesday, February 21, 2012 - link
Any suggestions how I can??
weirdo2989 - Sunday, March 4, 2012 - link
Hi Techies,
I recently got this processor. It is the ultimate for gaming.
However, in my Windows CPU meter gadget I can see only 2 cores functioning. Stock comes with an unlocked multiplier AFAIK, but in my system it shows only 2 cores. Is there any way to activate all the cores for better performance?
Any suggestions/tips would be highly appreciated.
Thanks.
Regards,
Parth
0121birmingham - Saturday, May 12, 2012 - link
Just to say I wrote a small post on this issue at http://intel23976fpsproblem.blogspot.co.uk/
It does not look like the problem has been fixed in the new Z77 lineup. Damn.
milutzuk - Saturday, July 14, 2012 - link
Besides VS2008 compiler performance, I would like to see a growing database of Java compiler performance, either under NetBeans or Eclipse. Thank you.
britchie - Wednesday, December 26, 2012 - link
I was wondering how Intel Quick Sync might impact PC-based security systems/CCTV like those from Avermedia or Geovision. For the longest time Aver advocated a dedicated graphics card, but now says the HD 2000/3000 CPU is OK.
I read about limited software support in the article and guess that Aver does not yet take advantage of Quick Sync. However, I had to RMA an NV6480 just for compatibility with a Sandy Bridge CPU (even using a dedicated GPU - an ATI 5000 series for multiple monitors) and wondered why.
Anyone know why Sandy Bridge might cause compatibility issues with DVR/NVR Cards and what advantages Quick Sync could bring to the IP Security Camera market if top companies like Geovision or Avermedia developed software for it?
realflow100 - Sunday, September 6, 2015 - link
Heh. I can run DiRT 4 at 30FPS+, ABSOLUTELY playable even on somewhat higher settings, with Intel HD Graphics (Bay Trail architecture).
Even GTA 5 plays somewhat reasonably when you disable shadows and run at 640x480 :D
IdBuRnS - Wednesday, October 11, 2017 - link
Who would have thought that 7 years later the 2600k is still relevant and competitive?