Thanks :) This will hopefully become the new CPU testing standard for us. It's all scripted, making benchmarking relatively easy. Sourcing and writing are now the mentally consuming parts.
That is nice to know. Will you write an article about the testing itself? Like detailing the process or something along those lines? It would be interesting to know about those little details, for sure!
I'm sure you can glue together an article in no time! *wink wink*
What is lake to the party is Intel - it is just so firetrucking sad how they refuse to give customers more for their money. HT should be enabled on all their chips, it is there on the physical chip.
I will never buy another Intel CPU - what! You got a problem with that, Intel?
"With the corresponding price, Ryzen 1500X 4c/8t is 90% of the i7 7700 for half the price."
This is just incorrect. Ryzen IPC is around 90% of Kaby/Skylake, but the 7700K overclocks around 25% higher and also has around a 20% higher out-of-the-box frequency.
Can anyone explain to me how the 7600K, and in some cases the 7600, is beating the 7700K almost consistently? I don't doubt the Ryzen results, but the Intel side of the results confuses the heck out of me.
Sustained turbo, temperatures, quality of chips from binning (a good 7600 chip will turbo much longer than a 7600K will), time of day (air temperature is sometimes a pain - air conditioning doesn't really exist in the UK, especially in an old flat in London), speed shift response, uncore response, data locality (how often does the system stall, how long does it take to get the data), how clever the prefetchers are, how a motherboard BIOS ramps up and down the turbos or how accurate its thermal sensors are (I try and keep the boards constant for a full generation because of this). If it's only a small margin between the data, there's not much to discuss.
Are you absolutely sure your 7700K isn't broken? It sure looks like it is. I understand your point about margins, but numbers are numbers and yours look wrong. No other benchmarks I've seen to date align with your findings. And please, for the love of god, amend this article if it is.
Your constant refrain belonged in the bulldozer era (when the single threaded performance difference was on the order of 80-100 percent). Apparently you can't move past the Ryzen launch. If a different company such as Samsung had launched these CPUs the reception would have been very different. I've never bought AMD before but my Ryzen 1700 is incredible for its price, and I had to be disillusioned by my terrible Skylake upgrade first before I was willing to purchase from AMD.
It still stands that the best value in this group is the Ryzen 1600X, mostly because its platform cost is a third that of Intel's HEDT. So unless you need those platform advantages (PCIe, which even X299 doesn't completely have on these KBL-X CPUs), it really won't justify spending $300 more on a system, even if single-threaded performance is 15-20% better.
Just the fact an AMD system of less than half the cost can ice a high end Intel system in WinRAR speaks a lot to AMD's credibility here.
I look at it this way: in 2016 I bought a 6600K for $350 CAD. In 2017 I bought a Ryzen 1700 for $350 CAD. Overall speed increase 240%. So AMD delivered 240 percent more performance at the same price in one year. Intel continues to deliver less than 10 percent per dollar. I couldn't care less if the single-threaded performance is the same.
Call me next time Intel releases a chip a year later that is 240 percent faster for the same price.
So you bought yourself inferior IPC and a sad attempt at ameliorating it by piling up cores, and now have to cope with this through wishful thinking of never materializing performance percents. Classic AMD victim behavior.
First of all, stop using IPC, an expression you don't understand. Use single core performance. In almost every single benchmark I see dramatic speed improvements. I'm comparing the i5 with a Ryzen 1700 as they were the same cost. People harping over the i7-7700K apparently didn't notice the 1700 selling for as low as $279 USD.
I also get higher fps in almost every single game (Mass Effect Andromeda, Civilization and Overwatch in particular).
I have tremendous respect for Ian, whose knowledge and integrity are of the highest order. I just think some of his words in this review lose the plot. As he said, "it would appear Intel has an uphill struggle to convince users that Kaby Lake-X is worth the investment". He should have emphasized that a little more.
In Canada, Ryzen 1700 plus motherboard = $450. i5 (not i7) plus motherboard is $600. Yes, $150 more!
Intel has 20 percent faster single core performance and yet Ryzen is 2.4 times (+140 percent) faster overall... Numbers should speak for themselves if you don't lose the plot. I agree single threaded performance is very important when the divergence is large, such as Apple's A10 vs Snapdragon 835, or the old Bulldozer. But the single threaded gap has mostly closed and a yawning gulf has opened up in total price/performance. Story of the year!
I think you should prove why you think Intel is the superior buy, instead of just trolling and not actually providing any rationale behind your "arguments".
On Amazon.co.uk right now, there are four Ryzen and one FX CPU in the top 10. Here's the list (some of the recommended retail price values are missing or a bit - in the case of the 8350 - misleading):
1) i7-7700K £308.00; RRP £415.99
2) R5 1600 £189.19; RRP £219.95
3) R7 1700 £272.89; RRP £315.95
4) i5-7600K £219.99; RRP £?
5) i5-7500 £173.00; RRP £?
6) FX-8350 £105.50; RRP £128.09
7) i5-6500 £175.09; RRP £?
8) R5 1500X £165.99; RRP £189.98
9) Pentium G4400 £48.90; RRP £?
10) R5 1600X £215.79; RRP £249.99
There must be a ton of stupid people buying CPUs now then, or perhaps they just prefer solder as their thermal interface material of choice.
Advantages for Intel right now: clock speed; overclocking headroom past 4 GHz; iGPU (not -X CPUs)
Disadvantages for Intel right now: price; limited availability of G4560; feature segmentation (well, that's always been a factor); overall platform cost
An AMD CPU would probably consume similar amounts of power if it could be pushed past 4.1GHz, so I won't list that as a disadvantage for Intel, nor will I list Intel's generally inferior box coolers, as not every AMD part comes with one to begin with.
The performance gap in single threaded workloads at the same clock speed has shrunk from 60%+ to about 10%, power consumption has tumbled, and it also looks like AMD scales better as more cores are added. Unless you're just playing old or unoptimised games, or work in a corporate environment where money is no object, I don't see how AMD wouldn't be a viable alternative. That's just me, though - I'm really looking forward to your reasons.
That is not how IPC works, since it explicitly refers to single core - single thread performance. As the number of cores rises the performance of a *single* task never scales linearly because there is always some single thread code involved (Amdahl's law). For example if your task has 90% parallel and 10% serial code its performance will max out at x10 that of a single core at ~512 cores. From then on even if you had a CPU with infinite cores you couldn't extract half an ounce of additional performance. If your code was 95% parallel the performance of your task would plateau at x20. For that though you would need ~2048 cores. And so on.
Of course Amdahl's law does not provide a complete picture. It assumes, for example, that your task and its code will remain fixed no matter how many cores you add to them. And it disregards the possibility of computing distinct tasks in parallel on separate cores. That's where Gustafson's Law comes in. This "law" is not concerned with speeding up the performance of tasks but with computing larger and more complex tasks in the same amount of time.
An example given in Wikipedia involves boot times: Amdahl's law states that you can speed up the boot process, assuming it can be made largely parallel, up to a certain number of cores. Beyond that -when you become limited by the serial code of your bootloader- adding more cores does not help. Gustafson's law, on the contrary, states that instead of speeding up the boot process by adding more cores and computing resources, you could add colorful GUIs, increase the resolution etc, while keeping the boot time largely the same. This idea could be applied to many -but not all- computing tasks, for example ray tracing (for more photorealistic renderings) and video encoding (for smaller files or videos with better quality), and many other heavily multi-threaded tasks.
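If anyone wants to check those figures, here is a minimal sketch (mine, not from the comment above) of the Amdahl's law arithmetic, using the same 90%/95% parallel fractions and core counts quoted in the example:

```python
# Minimal sketch of the Amdahl's law arithmetic quoted above:
# speedup = 1 / (serial_fraction + parallel_fraction / cores)
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

if __name__ == "__main__":
    # 90% parallel code tops out near 10x; ~512 cores already gets you most of it.
    print(round(amdahl_speedup(0.90, 512), 2))    # ~9.83, asymptote is 10x
    # 95% parallel code tops out near 20x; ~2048 cores gets close.
    print(round(amdahl_speedup(0.95, 2048), 2))   # ~19.8, asymptote is 20x
```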
No reason to laugh. I compared the 6600k vs the Ryzen 1700. 1 year speed increase of 144 percent (2.44 times the speed). Same as this: 1135 vs 466 points.
Interesting article, but it seems intended to play down the extremely bad press X299 has received, which is all over the internet and YouTube.
Once you get past Mr. Cutress' glowing review, it's clear that the i5-7640X is not worth the money because of lackluster performance, the i7-7740X is marginally faster than the older 7700K, and the i7-7800X is regularly beaten by the 7740X in many benchmarks that actually count and is a monstrously inefficient energy pig. Therefore the only Intel CPUs of this batch worth buying are the 7700K/7740X, and there is no real advantage to X299. In summary, it doesn't actually change anything.
It's very telling that Mr. Cutress doesn't comment on the absolutely egregious energy consumption of the 7800X. The Test Bed setup section doesn't list the 7800X at all. The 7640X and 7740X are using a Thermalright True Copper (great choice!) but there's no info on the 7800X cooler. Essentially, the 7800X cameo appearance is only to challenge the extremely strong Ryzen multi-threaded results, but its negative aspects are not discussed, perhaps because they might frighten people away from X299. Tsk, tsk. As my 11-year-old daughter would say, "No fair." By the way, the 7800X is selling for ~$1060 right now on Newegg, not $389.
Proudly typed on my Ryzen 1800x/Gigabyte AB350 Gaming 3. # ;-)
You may not have realised but this is the Kaby Lake-X review, so it focuses on the KBL-X parts. We already have a Skylake-X review for you to mull over. There are links on the first page.
Nevertheless, the wider picture is relevant here. The X299 platform is a mess. Intel is aiming KL-X at a market which doesn't exist, they've locked out features that actually make it useful, it's more power hungry, and a consumer needs a lot of patience and plenty of coffee to work out what the heck works and what doesn't on a mbd with a KL-X fitted.
This is *exactly* the sort of criticism of Intel which should have been much stronger in the tech journalism space when Intel started pulling these sorts of stunts back with the core-crippled 3930K, heat-crazy IB and PCIe-crippled 5820K. Instead, with a few exceptions, the tech world has been way too forgiving of Intel's treading-water attitude ever since SB, and now they've panicked in response to Ryzen and released a total hodgepodge of a chipset and CPU lineup which makes no sense at all. And if you get any disagreement about what I've said from anyone at Intel, just wave a 4820K in their face and say, well, explain this then (quad-core chip with 40 PCIe lanes, da daa!).
I've been a big fan of Z68 and X79, but nothing about Intel's current lineup appeals in the slightest.
There's also the funny bit about motherboards potentially killing KBL-X CPUs if a Skylake-X was used previously.
What's with Intel's insane product segmentation strategy with all the crippling and inconsistent motherboard choices? It's like they want to make it hard to choose, so buyers either get the cheapest or most expensive chip.
On the first page, I assume the green highlight in the processor charts signifies an advantage for that side. Why are the cores/threads rows in the Ryzen side not highlighted? Or is 8/16 not better than 4/8?
This has been in the works for a while because our CPU failed. I work on the CPU stuff - other editors work on other things ;) If you've got an idea, reach out to us. I can never guarantee anything (I've got 10+ ideas that I don't have time to do) but if it's interesting we'll see what we can do. Plus it helps us direct what other content we should be doing.
This is an amazing amount of benchmarking with many options. Thank you. It must have been a lot of work :-) The obvious idea is this:
Gaming (modern CPU limited and most played games) & Productive work (rendering, encoding, 4K video work, R/statistics/Matlab)
Test those under 4c/8t and 8c/16t CPUs, both from AMD and Intel - all at the most common non-esoteric overclock levels (+/-10%).
This is what many of your readers want:
How much does a c. 5 GHz 4c/8t do vs a 4.x GHz 8c/16t when taken to its everyday stable extreme, in modern games / productivity?
The web is already full of benchmarks at stock speed. Or overclocked Ryzen R7 against stock Intel, or OC Intel against overclocked Ryzen - and the game/app selections are not very varied.
The result is a simple graph that plots the (assumed) linear trend in performance/price and shows any deviations below/above the linear trend.
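As a rough sketch of that performance/price trend idea, something like the following would do; the prices and scores below are entirely made up for illustration, only the linear fit and the deviations from it matter:

```python
# Rough sketch of the performance-vs-price trend graph suggested above,
# with hypothetical numbers purely for illustration (not measured results).
import numpy as np

prices = np.array([189.0, 249.0, 329.0, 349.0, 599.0])        # hypothetical CPU prices ($)
scores = np.array([1050.0, 1250.0, 1600.0, 1500.0, 2400.0])   # hypothetical benchmark scores

slope, intercept = np.polyfit(prices, scores, 1)   # assumed linear perf/price trend
predicted = slope * prices + intercept
deviation = scores - predicted                     # above/below the trend line

for p, s, d in zip(prices, scores, deviation):
    print(f"${p:>6.0f}: score {s:>6.0f}, {d:+7.1f} vs linear trend")
```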
Of course, if you already have the Coffee Lake 6c/12t sample, just skip the 4c/8t and go with a 6c/12t vs 8c/16t comparison.
Thanks for all the hard work throughout all these years!
"For 2017, Intel is steering the ship in a slightly different direction, and launching the latest microarchitecture on the HEDT platform."
Skylake-S, Kaby Lake-S and Kaby Lake-X share the same microarchitecture, right? Then Skylake-X is a newer microarchitecture than Kaby Lake-X (changes to the L2 and L3 caches, AVX-512).
Correct me if I'm wrong: SKL-SP cores are derived from SKL-S, and are 14nm. KBL-S/X are 14+, and share most of their design with SKL-S; the main changes are power related. Underneath there's no real performance change (except Speed Shift v2), but Intel classifies Kaby Lake as its latest non-AVX512 IPC microarchitecture.
Kaby Lake-S has some errata fixes compared to Skylake-S. AFAIK, this is the only change to the CPU core (besides the Speed Shift v2, if it even involved hardware changes). David Kanter says Skylake-X/EP is 14+ nm http://www.realworldtech.com/forum/?threadid=16889...
4.6? Outrageous! I would be offended if I were a 2700K at a mere 4.6! Get that thing up to 5.0 asap. 8) Mbd-dependent I suppose, but I've built seven 2700K systems so far, 5.0 every time, low noise and good temps. Marvelous chip. And oh yeah, 2GB/sec with a 950 Pro. 8)
Either you're water cooling those systems, or you should consider investing in lottery tickets. My 2600K wouldn't push past 4.4 without very worrying amounts of voltage (1.4V+), and even 4.4 ran so hot on my 212+ that I settled for 4.2 to keep the core under 1.3V.
Wait, am I reading these graphs correctly? Unless I'm going mad, they seem to say that for gaming there's no need to upgrade if you already have a 2600K. Huh?
If true, and I have no reason to doubt the data, that would make the 2600K one of the greatest processors ever?
Yup, it's been said many times - if you have an i7 processor you really don't need to upgrade it for gaming; spend the money on a new GPU every few years. I have a 3770K & GTX 970; other than the video card the system is 6 years old - I used to build a new one every other year. I've been considering the 7800/7820 though, as I do a lot of encoding.
Some mistakes for the Ryzen entries in the comparisons on page 1. PCI-E (Ryzen die has 20 lanes non-chipset, not 16), clockspeeds (too high), TDP (1700 is 65W).
Also, I see your point of comparing non-sale prices, but the 1700X seems to be widely and consistently available at near the i7-7740x MSRP. It's all but an official price cut.
In the way everyone has historically been reporting PCIe lanes, Ryzen only has 16 PCIe lanes intended for graphics, with the other four for the chipset and another four for storage as an SoC. We've repeated this over and over and over again. Same with Threadripper: 60, plus four for chipset. If we're going to start counting PCIe lanes for chipsets (and DMI equivalents) and SoC related PCIe lanes for storage and others, we'll have to go and rewrite the PCIe lane counts for the last several generations of Intel and AMD CPUs.
If the category is PCIe lanes for graphics, that is quite right. But by that token, doesn't (non-cut-down) Broadwell-E/Skylake-E only have 32 lanes intended for graphics, as the switching logic allows for 2x16 and 4x8 configurations?
Although this is getting quite in-the-weeds. Overall I really appreciate the time and effort put into PC component reviews by the Anandtech staff.
I agree with Ian, as 4 PCIe lanes are always taken since you are running Ryzen with a chipset, with no real way around that. I would also agree with, say, Skylake-X reporting 4 fewer PCIe lanes for the DMI link.
I'm running 4x 1070s on that, and PCIe-based storage, and I doubled my throughput by moving the SSD to a riser card (until the 4th GPU went in), which means it's back on the m/b.
Having the 4x PCIe 3.0 lanes for an NVMe drive is an advantage: it's connected directly to the CPU and bypasses the chipset link, which leaves more bandwidth for USB/PCIe 2.0 lanes and SATA. I don't agree with you on not counting them.
$339 is the 1k tray price - the one that Intel quotes in the price lists and applicable if you buy 1000 OEM CPUs. $350 is MSRP that retailers will apply from their stock from distributors. Add more if you want a cooler. The issue here is that sometimes Intel never quotes an MSRP for some OEM-only processors, and AMD never seem to quote tray/OEM prices for retail parts. I'll edit this and make it clearer.
Intel has 10nm and 7nm by 2020 / 2021. Core count is basically a solved problem, limited only by price.
What we need is a substantial breakthrough in single-thread performance. Maybe there are new materials that could bring us 10+ GHz, but those aren't even on the 5-year roadmap.
Under Handbrake testing, just above the first graph you state: "Low Quality/Resolution H264: He we transcode a 640x266 H264 rip of a 2 hour film, and change the encoding from Main profile to High profile, using the very-fast preset."
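For reference, a scripted pass like the one described in that quote might be timed along these lines; the file names are placeholders and the exact flags the review's suite uses aren't published, so the HandBrakeCLI options here are only a plausible approximation:

```python
# Hedged sketch of timing a HandBrake pass like the one quoted above.
# File names are placeholders; flags approximate "Main -> High profile,
# very-fast preset" and may differ from the actual test suite.
import subprocess, time

cmd = [
    "HandBrakeCLI",
    "-i", "film_640x266_main.mp4",      # hypothetical low-res H264 source
    "-o", "film_640x266_high.mp4",
    "-e", "x264",
    "--encoder-preset", "veryfast",     # the "very-fast" preset mentioned above
    "--encoder-profile", "high",        # Main -> High profile re-encode
]

start = time.perf_counter()
subprocess.run(cmd, check=True)
print(f"Transcode took {time.perf_counter() - start:.1f} s")
```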
I wish your team would finally add in an edit button to comments! :)
On the last graph, ENCODING: Handbrake HEVC (4K), you don't list the 1800X, but it is present in the previous two graphs @ LQ and HQ. Was there an issue with the 1800X preventing 4K testing? Quite interested in its results if you have them.
When I first did the HEVC testing for the Ryzen 7 review, there was a slight issue in it running and halfway through I had to change the script because the automation sometimes dropped a result (like the 1800X which I didn't notice until I was 2-3 CPUs down the line). I need to put the 1800X back on anyway for AGESA 1006, which will be in an upcoming article.
One thing that caught my eye for a while is how compile tests using GCC or Clang show much better results on Ryzen compared to using Microsoft's VS compiler. Phoronix tests clearly show that. Thus, I cannot really buy Ian's recurring explanation of Ryzen suffering from its victim L3 cache yet. After all, the 1800X beats the 7700K by a sizable margin when compiling the Linux kernel.
Isn't Ryzen's relatively poor performance compiling Chromium due to idiosyncrasies of the VS compiler?
The VS compiler seems to love L3 cache, then. The 1800X does have 2x threads and 2x cores over the 7700K, accounting for the difference. We saw a -17% drop going from SKL-S with its fully inclusive L3 to SKL-SP with a victim L3, clock for clock.
Chromium was the best candidate for a scripted, consistent compile workflow I could roll into our new suite (and runs on Windows). Always open for suggestions that come with an ELI5.
So we are married to Chromium, because it only compiles with MSVC on Windows?
Or maybe because it is a shitty implementation that for some reason stacks up well with Intel's offerings?
Pardon my ignorance, I've only been a multi-platform software developer for 8 years, but people who compile stuff a lot usually don't compile Chromium all day.
I'd say go GCC or Clang, because those are quality, community-driven open source compilers that target a variety of platforms, unlike MSVC. I mean, if you really want to illustrate the usefulness of CPUs for software developers, which at this point is rather doubtful...
Again, find me something I can rope into my benchmark suite with an ELI5 guide and I try and find time to look into it. The Chromium test took the best part of 2-3 days to get in a position where it was scripted and repeatable and fit with our workflow - any other options I examined weren't even close. I'm not a computer programmer by day either, hence the ELI5 - just years old knowledge of using Commodore BASIC, batch files, and some C/C++/CUDA in VS.
Dr. Ian, I would like to apologize for my poor choice of words. Reading it again, it sounds like I accused you of something which is not the case.
I'm merely puzzled by how Ryzen performs poorly using MSVC compared to other compilers. To be honest, your findings are very relevant to anyone using Visual Studio. But again, I find Microsoft's VS compiler to be a bit of an oddball.
A few weeks ago I was running my own tests to determine whether my Core i5 4690K was up to my compiling tasks. Since most of my professional job sits on top of programming languages with either short compile times or no compilation needed at all, I never bothered much about it. But recently I've been using C++ more and more during my game development hobby and compile times started to bother me. What I found puzzling is that after running a few tests I couldn't manage to get any gains through parallelism, even after verifying that MSVC was indeed spanning all 4 threads to compile files. Then I tried disabling two cores and clocking the thing higher and... it was faster! Not by a lot, but faster still. How could it be faster with a 50% decrease in the number of active cores and consequently threads doing compile jobs? I'm fully aware that linking is single threaded, but at least a few seconds should be gained with two extra cores, at least in theory. Today I had the chance to compile the same project on a Core i7 7700HQ and it was substantially slower than my Core i5 4690K even with clocks capped to 3.2 GHz. In fact, it was 33% slower than my Core i5 at stock speeds.
Anyhow… Dr. Ian’s findings are a very good to point out to those compiling C++ using msvc that Skylake-X is probably worth it over Ryzen. For my particular case, it would appear that Kaby Lake-X with the Core i7 7740X could even be the best choice, since my project somehow only scales nicely with clocks.
I just would like to see the wording pointing out that Skylake-X isn't a better compiling core. It's a better compiling core using MSVC on this particular workload. On the GCC side of things, Ryzen is very competitive with it and a much better value in my humble opinion.
As for the suggestion, I'd say that since Windows is a requirement, trying to script something to benchmark compile times using GCC would be daunting and unrealistic. Not a lot of people are using GCC to work on the Windows side of things. If Linux could be thrown into the equation, I'd suggest a project based on CMake. That would make it somewhat easy to write a simple script to set up, create a makefile and compile the project. Unfortunately, I cannot readily think of any big-name projects such as Chromium that fulfill that requirement without having to meddle with eventual dependency problems as time goes by.
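A bare-bones version of that CMake suggestion could look something like this; the project path is a placeholder, and a real suite would pin an exact commit and clean the build directory between runs so every pass compiles identical sources:

```python
# Very rough sketch of the CMake-based compile benchmark suggested above.
# The source path is a placeholder (hypothetical project checkout).
import subprocess, time, os

SRC = "path/to/some_cmake_project"   # hypothetical, pinned source checkout
BUILD = os.path.join(SRC, "build")

os.makedirs(BUILD, exist_ok=True)
subprocess.run(["cmake", "-S", SRC, "-B", BUILD], check=True)  # configure (not timed)

start = time.perf_counter()
subprocess.run(["cmake", "--build", BUILD, "--parallel"], check=True)  # timed full build
print(f"Build time: {time.perf_counter() - start:.1f} s")
```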
These chips edge out their LGA 1151 counterparts at stock, with overclocks also carrying a razor-thin edge over LGA 1151 overclocks. There are gains, but ultimately these really don't seem worth it, especially in light of the fragmentation this causes the X299 platform. It's hard to place real figures on this, but I'd wager that the platform confusion is going to cost Intel more than what they will gain with these chips. Intel should have kept these in the lab until they could offer something a bit more substantial.
LGA 2066 doesn't have video-out pins because it was originally designed only for the bigger dies that don't include them; and even if Intel had some 'spare' pins it could use, adding video out would only make already expensive mobos, with a wide set of features that vary based on the CPU model, even more expensive and more confusing. Unless they add a GPU either to future CPUs in the family or (IMO a bit more likely) a very basic one to a chipset variant (to remove the crappy one some server boards add for KVM support), keeping the IGP fully off in mainstream dies on the platform is the right call IMO.
"The benefits in the benchmarks are clear against the nearest competition: these are the fastest CPUs to open a complex PDF, at the top for office work, and at the top for most web interactions by a noticeable amount."
In most cases you're talking about a second or less between the Intel and AMD systems. That will not be noticeable to the average office worker. You're much more likely to run into scenarios where the extra cores or threads will make an impact. I know in my own user base, shaving a couple of seconds off opening a large PDF will pale in comparison to running complex reports with 2 extra cores (4 threads) for less money. I have nothing against Intel, but I struggle to see anything in here that makes their product worth the premium for an office environment. The conclusion seems a stretch to me.
Indeed, and for those dealing with office work it makes more sense to emphasise investment where it makes the biggest difference to productivity, which for PCs is having an SSD (ie. don't buy a cheap grunge box for office work), but more generally dear god just make sure employees have a damn good chair to sit on and a decent IPS display that'll be kind to their eyes. Plus/minus 1s opening a PDF is a nothingburger compared to good ergonomics for office productivity.
Yeah, an SSD is by far the best bang for the buck. From a CPU standpoint there are more use cases for the Ryzen 1600 than for the i5/i7 options we have from HP/Dell. Even the Ryzen 1500 series would probably be sufficient and allow even more per-unit savings to put into other areas that would benefit folks more.
The 7740X runs at just over a 2% higher clock speed than the 7700K. It can overclock maybe 4% higher than the 7700K. You'd really have to be a special kind of stupid to pay hundreds more for an X299 mobo just for gains that are nearly within the margin of error.
It doesn't make sense as a "stepping stone" onto HEDT either, because you're much better off simply buying a real HEDT right away. You'll pay a lot more in total if you first get the 7740X and then the 7820X for example.
Intel seems to think there's a market for people who buy a HEDT platform but can't afford a relevant CPU, but would upgrade later. Highly unlikely such a market exists. By the time such a theoretical user would be in a position to upgrade, more than likely they'd want a better platform anyway, given how fast the tech is changing.
S'why I love my 5GHz 2700K (daily system). And the other one (gaming PC). And the third (benchmarking rig), the two I've sold to companies, another built for a friend, another set aside to sell, another on a shelf awaiting setup... :D 5GHz every time. M4E, TRUE, one fan, 5 mins, done.
Those decreased overclocking performance numbers aren't just red flags, they're blinding red flashing lights with the power of a thousand suns.
Seriously, that should have been the entire article - this platform is a disaster if it loses performance under sustained load. That's not hyperbole, it's cold hard truth. Sustained load is part of what HEDT is about, and with X299 you're spending more money for significantly less performance?
I sincerely hope you're going to get to the bottom of this and not just shrug and let it slide away as a mystery. Hopefully it's just platform immaturity that gets ironed out, but at the present time I have absolutely no clue how you could recommend X299 in any way. Significantly less sustained performance is a do not pass go, do not collect $200, turn the car around, oh hell no, all caps showstopper.
But they're big AVX workloads. We know heat and power get a bit crazy with AVX, and at some point we should just step back and realize that overclocking may not be appropriate for these workloads.
Until we know exactly what is going on and what will be required to fix it, I can't comprehend how anyone can regard X299, at least with the quad core CPUs, as anything but "Nope". Maybe a BIOS update will help, or tuning the overclock, but maybe it'll require new motherboard revisions or delidding the CPU. I'm sure it'll get fixed/understood at some point, but for now recommending this platform is really hard to accept as a good idea.
I do a lot of Handbrake encoding to HEVC which will peg all cores on my O/C'd 3770, it uses AVX but obviously a much older version with less functionality, and I can have it going indefinitely without issue.
I've looked at the 7800/7820 as an upgrade, but if they cannot sustain performance with a reasonable cooling setup then there is no point. The KBL-X parts don't offer enough of a performance improvement to be worth the cost of the X299 mobo, which also seems to be having teething problems.
Future-proofing is laughable: let's say you bought a 7740X today with the thought of upgrading in two years to a higher-core-count proc - how likely is it that your motherboard and the new proc will have the same pinout? History says it ain't happening at Camp Intel.
At this point I'm giving a hard pass to this generation of Intel products and hope that v2 will fix these issues. By then AMD may have come close enough in ST performance where I would consider them again, I really want the best ST & MT performance I can get in the $350 CPU zone which has traditionally been the top i7. AMD's MT performance almost tempts me to just build an encoding box.
I loved my Athlon back in the day, anyone remember Golden Fingers? :D
I recently went from a 4.6GHz 3770K to a 1700X @ 4GHz at home. I play some older games that don't thread well (WoW being one of them). The Ryzen is at least as fast or faster in those workloads. Run Handbrake or Sony Movie Studio and the Ryzen is MUCH faster. We built 6-core 5820K stations at work for some users and have recently added Ryzen 1600 stations due to the tremendous cost savings. We have yet to run into any tangible difference between the two platforms.
Intel does have a lead in ST, but tests like these emphasize it to the point it seems like a bigger advantage than it is in reality. The only time I could see the premium worth it is if you have a task that needs ST the majority of the time (or a program is simply very poorly optimized for Ryzen). Otherwise AMD is offering an extraordinary value and as you point out AM4 will at least be supported for 2 more spins.
Kind of agreed. Ian, you should log the clock speeds during benchmark runs and check for anomalies. The chip or mainboard could throttle, or your 4.0 GHz AVX clock could just be way too low. What's the default AVX clock? Maybe 4.4 GHz? That would pretty much match the 10% performance degradation.
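A simple way to do that kind of logging, sketched here with psutil (an assumption on my part; how accurate the frequency readings are varies by OS), is to poll once a second alongside the benchmark and then look for dips:

```python
# Sketch of the clock-speed logging suggested above, using psutil (assumed
# available). Run it alongside the benchmark, then eyeball the CSV for
# throttling or an AVX clock that sits lower than expected.
import csv, time
import psutil

with open("clock_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "current_mhz"])
    start = time.time()
    while time.time() - start < 600:        # log for 10 minutes
        freq = psutil.cpu_freq()            # reported current frequency (platform-dependent accuracy)
        writer.writerow([round(time.time() - start, 1), freq.current])
        f.flush()
        time.sleep(1.0)
```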
Something seems wrong with the 7700k results vs the 7600k results. How is the 7600k beating the 7700k so handily in all the games? Are you sure the graphs are not swapped? ROTR shows the 7600k beating the 7700k by 20 FPS, which seems impossible considering most reviews of this game have the 7700k on top of the 7600k.
Ian, why didn't you check if the OC was being thermally throttled? Easy enough to check this. And easy enough to see if it's the temperature of the cores or not. Surprising you wouldn't include temperature or power consumption data with the OC (though I understand this hasn't typically been a focus of AT). Another site demonstrated throttling at ~95+ C.
Is that the same site which showed that the TIM Intel is using is just not allowing the heat to get from the die to the cap? Die temp shoots up, cap temp doesn't, even with a chiller cooler.
Yeah, don't bother starting the article unless you're willing to create yet another useless online identity. Shame, since it seemed moderately interesting, but...
re: overclocking
That works well for the occasional heavy workload, but if you are going to be constantly running at peak load (like I did for engineering analysis), overclocking of any kind, from my experience, isn't worth the dead core or entire CPU.
I've already fried a core on the 3930K once before, taking it up from 3.2 GHz stock (3.5 GHz max Turbo Boost) to 4.5 GHz.
Alas, this stuff does vary according to the individual CPU, mbd, RAM, etc. What cooling did you use? It could also be that the vcore was too high - a lot of SB-E users employed a high vcore, not realising that using a lower PLL would often make such a high vcore unnecessary. It's even more complicated if one fills all 8 RAM slots on a typical X79 mbd.
RAM was 8x 8 GB Crucial Ballistix Sport, I think DDR3-1600? Something like that. Nothing special, but nothing super crappy either. I actually had the entire set of RAM RMA'd once (all eight DIMMs), so I know that I got a whole new set back when that happened about, oh... maybe a year and a half ago now? Something like that.
Motherboard was Asus X79 Sabertooth.
Yeah, I had all 8 DIMM slots populated because it was a cheaper option compared to 4x 16 GB. Besides, using all 8 DIMMs also made use of the quad-channel memory, whereas going with 4x 16 GB you can't/won't (since the memory needed to be installed in paired DIMM slots).
That CPU is now "castrated" down to 4 cores (out of 6) because 1 of the cores died (e.g. will consistently throw BSODs, but if I disable it, no problems). Makes for a decent job scheduler (or at least that's the proposed task/life for it).
*As specifically written down on that page and mentioned in the explanation for that benchmark*, GeoThermal Valley at 1080p on the GTX 1080 seems incredibly optimized: all the Core i5 chips do so much better than all the other chips.
"After several years of iterative updates, slowly increasing core counts and increasing IPC, we have gotten used to being at least one generation of microarchitecture behind the mainstream consumer processor families. There are many reasons for this, including enterprise requirements for long support platforms as well as enterprise update cycles."
You forgot 'milking their consumer, enthusiast and enterprise markets'...
Ian! You're a Brit - please help defend our common language. You meant to say "raises the question". "Begs the question" is totally different and does not even approximate what you intended.
Journos: You don't have to understand "begs the question" because you'll very rarely need it. If you mean "raises the question" then just use that - plain English.
How is that possible? The i5 has slower clocks and less cache. So how can it be faster? "Optimization" isn't valid here IMO, unless I am missing something.
I think you have a throttling issue or something else that needs to be examined. Monitoring long term clocks and temps is something that you need to look at incorporating if only to help validate results.
"The second is for professionals that know that their code cannot take advantage of hyperthreading and are happy with the performance. Perhaps in light of a hyperthreading bug (which is severely limited to minor niche edge cases), Intel felt a non-HT version was required."
This does not make any sense. All motherboards I've used since Hyper-Threading has existed (yes, all the way back to the P4) let you disable HT. There is really no reason for the X299 i5 to exist.
The first interesting point to extract from this review is that the i7 2600K is still good enough for most gaming tasks. Another point we can extract is that games are not optimized for more than 4 cores, so all AMD offerings are yet to show what they are capable of, since all of them have more than 4 cores / 8 threads.
I think the absolute single-threaded performance argument is plain air, because the differences in single-thread performance between all the top CPUs you can currently buy are slim, very slim. Kaby Lake CPUs are best at this just because they are sold with high clocks out of the box, but this doesn't mean that if AMD tweaks its CPUs and pushes them to 5GHz it won't get back the crown. Also, in a very short time there will be another uArch and another CPU that will again have better single-threaded performance, so it is a race without end and without reason.
What is more relevant is the multi-core race, which sooner or later will be exploited more and more by games and software in general. And when games move to over-4-core usage, then all these 4 core / 8 thread overpriced "monsters" will become useless. That is why I am saying that AMD has some real gems on their hands with the Ryzen family. I bet you that the R7 1700 will be a much more competent CPU in 3 years' time compared to the 7700K or whatever you are reviewing here. Dirt cheap, push it to 4GHz and forget about it.
They have been saying for years that we will use more cores. Here we are, almost 20 years down the road, and there are few non-professional apps and almost no games that use more than 4 cores; the vast majority use just two. Yes, more cores help with running multiple apps & instances, but if we are just looking at the performance of the focused app, fewer cores and more MHz is still the winner. From all I have read, the two issues are that not everything is parallelizable and that coding for more cores/threads is more difficult, and neither of those is going away.
Thing is, until now there hasn't been a mainstream-affordable solution. It's true that parallel coding requires greater skill, but that being the case, the edu system should be teaching those skills. Instead the time is wasted on gender studies nonsense. Intel could have kick-started this whole thing years ago by releasing the 3930K for what it actually was, an 8-core CPU (it has 2 cores disabled), but they didn't have to because back then AMD couldn't even compete with the mid-range SB 2500K (hence why they never bothered with a 6-core for mainstream chipsets). One could argue the lack of market software evolution to exploit more cores is Intel's fault; they could have helped promote it a long time ago.
What can these chips do with a nice watercooling setup, and a goal of 24x7 stability? Maybe 4.7? 4.8?
These seem like pretty moderate OCs overall, but I guess we were a bit spoiled by Sandy Bridge, etc., where a 1GHz overclock wasn't out of the question.
Nice article Ian. What I will say is I am a little confused around this comment:
"Intel wins for the IO and chipset, offering 24 PCIe 3.0 lanes for USB 3.1/SATA/Ethernet/storage, while AMD is limited on that front, having 8 PCIe 2.0 from the chipset."
You forgot to mention the AMD total PCIe IO. It has 24 PCIe 3.0 lanes, with 4x PCIe 3.0 going to the chipset, which can be set to 8x PCIe 2.0 if 5Gbps per lane is enough, i.e. in the case of USB 3.0.
I have read that Kabylake-X only has 16 PCI-E 3.0 lanes native. Not sure about PCH support though...
With Kabylake-X, the only I/O that doesn't go through the chipset is the 16 PCI-E 3.0 lanes you mention. With Ryzen, in addition to what is provided by the chipset, the CPU provides
1) Four USB 3.1 connections
2) Two SATA connections
3) 18 PCI-E 3.0 lanes, or 20 lanes if you don't use the SATA connections
So if you just look at the CPU, Ryzen has more connectivity than Kabylake-X, but the X299 chipset used with Kabylake-X is much more capable (and expensive) than anything in the AMD lineup. Also, X299 doesn't provide any USB 3.1 ports (or more precisely, 10 Gb per second ports), so those are typically provided by a separate chip, adding to the cost of X299 motherboards.
Interesting review with great benchmarks. (I don't understand why so many reviews only report average frames per second.) The Ryzen R5 1600 seems to offer great value for money, but I'm a bit puzzled why the slowest-clocked R5 beats the higher-clocked R7 in a lot of the 99th-percentile benchmarks. I'm guessing it's because the latency delta when moving data from one core to another penalizes the higher-core-count R7 more?
The gaming benchmarks are, uhm..... pretty useless.
Third tier graphics cards as a starting point, why bother?
Seems like an awful lot of wasted time. As a note you may want to consider: when testing a new graphics card you get the fastest CPU you can, so we can see what the card is capable of; when testing a new CPU you get the fastest GPU you can, so we can see what the CPU is capable of. The way the benches are constructed, they're pretty useless for those of us that want to know gaming performance.
I don't know that guy's particulars, but, to me, using X299 to game at 1080p seems like a waste. If I was going to throw down that kind of money, I would want to game at 1440p or 4K
Not really. If the GPU becomes the bottleneck at or around 1440p, and as such the CPU is the limiting factor below that, why go so far down when practically nobody games below 1080p anymore?
"Over the last few generations, Intel has increased IPC by 3-10% each generation, making a 30-45% increase since 2010 and Sandy Bridge..."
I have an old Sandy i5 2500K on an Asus Z68 that can do 5GHz all day on water and 4.8 on air. I know it's ancient IP...but I wonder if it could hold its own vs a stock clocked Skylake i5? hmmmm...
Much ado about nothing. So the best case for the 7740X is Office applications or opening PDF files? The author seems to have lost sight of the forest because of the trees.
Some benchmarks are odd, and some are useless in this context. I watched the YouTube version of this: https://www.techspot.com/review/1442-intel-kaby-la... and it looked like a more realistic approach for a 7740K review.
Well, I guess Intel is putting more advertising money into AnandTech.
Otherwise I can't explain how an overpriced product with heat problems and artificially crippled PCIe lanes on an enthusiast platform(!) can get so much praise without much criticism.
I miss the days when you saw a new bunch of CPUs come out and the reviews showed that there was a really good case for upgrading if you could afford to. You know a CPU upgrade once or twice a year. Now I upgrade (maybe) once every 6-7 years. Sure it's better but not so much fun.
Intel wins for the IO and chipset, offering 24 PCIe 3.0 lanes for USB 3.1/SATA/Ethernet/storage, while AMD is limited on that front, having 8 PCIe 2.0 from the chipset.
Funny, that, seeing as AM4 has 16 PCIe lanes available to it, unless those lanes get segregated differently as you go down the totem pole. Even going from the above table, Intel is offering 16 for X299, not 24 as you put it directly into words, so who wins in IO? No one; they both offer 16 lanes. Now if you are comparing via price, X299 is obviously a premium product, at least compared to the current AM4 premium end, which is the X370 chipset; it's pretty even footing on the motherboards when compared on similar "specs". Comparing against the best AMD will offer, in the form of X399, makes the best "specs" of X299 laughable.
AMD seems to NOT be shortchanging on PCIe lanes, DRAM ability (or speed), functionality, proper thermal interface used, etc., etc.
Your $, by all means, but seriously, folks need to take the blinders off: how much raw power is this "95W TDP" processor using when ramped to 5+ GHz? Sure, in theory it will be the fastest in per-core performance, but how long will the CPU last running at that level, how much extra power will be consumed, what price of acceptable cooler is needed to keep it within thermal spec, and so forth?
Interesting read, but much seems out of context to me. You may not like it, but AMD has given a far better selection across its product range this year for CPUs/motherboard chipsets: more cores, more threads, lots of IO connectivity options, and fair pricing overall (the greed of Canadian merchants on $ value does not count as a fault against AMD).
I'm interested in the details of the agility benchmark: how many photos are in your dataset and at what resolution? I'm doing similar work and I notice the working time doesn't seem to be linear with the number of photos.
Reading this article again, I must say I'm really ashamed. AnandTech was once a great place, but now it's just like car magazines: whoever pays best gets the best reviews. Where is the criticism? Everyone and his grandmother thinks Intel has big issues (TIM, heat, PCIe lanes, nonsense product). Are you bent over so Intel can inject more money more easily?
Impressive benchmarks. I could not ask for more. This revealed that Intel clearly doesn't have the premium or value position anymore. It is simply not there. They would have to be on the 10nm process now to be superior in value and/or performance.
Hi, what RAM frequency is the AMD platform running at? If it's the official maximum of 2666MHz, you can get 10-15% more performance using 3200MHz or faster memory.
Please redo the Ryzen benchmarks using DDR4-3200 now that it is officially supported, and also use the latest updates of the games - e.g. ROTR v770.1+, where Ryzen gets a 25% increase.
You can't compare one platform with the latest updates and the other without - that's pointless.
Hi, thanks for the great review. Are you guys still using OCCT to check your overclock stability?
If so what version do you use and which test do you guys use? Is it the CPU OCCT or the CPU Linpack with AVX and for how long before you consider it stable?
Thanks, I'm trying to work on my own 7700k overclock at the minute!
I hate to say it, but there is clearly something very wrong with your 7700K test system. Using the same settings for Tomb Raider, a GTX 1080 11Gbps, and a 7700K at stock settings, I am seeing about 40-50% better fps than you are getting in all three benchmarks - 213 avg for Mountain Peak, 163 for Syria, and 166 for Geothermal Valley. This likely is not limited to just RotTR, as your other games have impossible results - technically the i5s cannot beat their respective i7s, as they are slower and have less cache. How this was not caught is quite disturbing.
The test was run with a 1080, not a 1080 Ti. Depending on resolution, Tis can outperform the 1080 by 30%+. That could well be why you see such a big difference.
No. I'm pretty sure the 7700k used was broken. It worries me as well this was posted without further investigation. Basically invalidates all benchmarks.
YukaKun - Monday, July 24, 2017 - link
Hat off to you, Mr Ian. A lot of good and interesting information there.Cheers!
Ian Cutress - Monday, July 24, 2017 - link
Thanks :) This will hopefully become the new CPU testing standard for us. It's all scripted, making benchmarking relatively easy. Sourcing and writing are now the mentally consuming parts.YukaKun - Monday, July 24, 2017 - link
That is nice to know. Will you write an article about the testing itself? Like detailing the process or something along those lines? It would be interesting to know about those little details, for sure!I'm sure you can glue together an article in no time! *wink wink*
Ian Cutress - Monday, July 24, 2017 - link
I've had one half-written about the new 2017 suite and an upcoming project for a couple of weeks. Need to get on it! Coffee time...Dr. Swag - Monday, July 24, 2017 - link
Let's hope you won't be Lake to the party...Cellar Door - Monday, July 24, 2017 - link
What is lake to the party is Intel - it is just so firetrucking sad how they refuse to give customers more for their money. HT should be enabled on all their chips, it is there on the physical chip.I will never buy another Intel cpu - what! You got a problem with that Intel?
leexgx - Monday, July 24, 2017 - link
Ryzen on some r3 cpus don't have SMTLolimaster - Tuesday, July 25, 2017 - link
With the corresponding price, Ryzen 1500X 4c/8t is 90% of the i7 7700 for half the price.Dr. Swag - Tuesday, July 25, 2017 - link
"With the corresponding price, Ryzen 1500X 4c/8t is 90% of the i7 7700 for half the price."This is just incorrect. Ryzen ipc is around 90% of kaby/skylake, but the 7700k oces around 25% higher and also has around a 20% higher out of the box frequency.
Diji1 - Wednesday, July 26, 2017 - link
Uh oh, now they have to swear to never buy an AMD chip ever ever!Gulagula - Wednesday, July 26, 2017 - link
Can anyone explain to me how the 7600k and in some cases the 7600 beating the 7700k almost consistenly. I don't doubt the Ryzen results but the Intel side of results confuses the heck out of me.Ian Cutress - Wednesday, July 26, 2017 - link
Sustained turbo, temperatures, quality of chips from binning (a good 7600 chip will turbo much longer than a 7600K will), time of day (air temperature is sometimes a pain - air conditioning doesn't really exist in the UK, especially in an old flat in London), speed shift response, uncore response, data locality (how often does the system stall, how long does it take to get the data), how clever the prefetchers are, how a motherboard BIOS ramps up and down the turbos or how accurate its thermal sensors are (I try and keep the boards constant for a full generation because of this). If it's only small margin between the data, there's not much to discuss.Funyim - Thursday, August 10, 2017 - link
Are you absolutely sure your 7700k isn't broken? It sure looks like it is. I understand your point about margins but numbers are numbers and yours look wrong. No other benchmarks I've seen to date aligns with your findings. And please for the love of god ammend this article if it is.Hurr Durr - Monday, July 24, 2017 - link
One wonders why would you relegate yourself to subpar performance of AMD processors.Alistair - Tuesday, July 25, 2017 - link
Your constant refrain belonged in the bulldozer era (when the single threaded performance difference was on the order of 80-100 percent). Apparently you can't move past the Ryzen launch. If a different company such as Samsung had launched these CPUs the reception would have been very different. I've never bought AMD before but my Ryzen 1700 is incredible for its price, and I had to be disillusioned by my terrible Skylake upgrade first before I was willing to purchase from AMD.Gothmoth - Tuesday, July 25, 2017 - link
don´t argue with trolls....StevoLincolnite - Tuesday, July 25, 2017 - link
Why would Intel enable HT when they could sell it as DLC?https://www.engadget.com/2010/09/18/intel-wants-to...
coolhardware - Tuesday, July 25, 2017 - link
Glad to hear that the benchmarking is (becoming) less of a chore :-) Kudos and thank you for the great article!fallaha56 - Tuesday, July 25, 2017 - link
Surely that AVX drop -10 when overclocking was too much?What about delidding?
Samus - Monday, July 24, 2017 - link
It still stands that the best value in this group is the Ryzen 1600X, mostly because it's platform cost is 1/3rd that of Intel's HEDT. So unless you need those platform advantages (PCIe, which even x299 doesn't completely have on these KBL-X CPU's) it really won't justify spending $300 more on a system, even if single threaded performance is 15-20% better.Just the fact an AMD system of less than half the cost can ice a high end Intel system in WinRAR speaks a lot to AMD's credibility here.
Alistair - Monday, July 24, 2017 - link
I look at it this way: in 2016 I bought a 6600k for $350 CAD. In 2017 I bought a Ryzen 1700 for $350 CAD. Overall speed increase 240%. So AMD delivered 240 percent more performance at the same price in one year. Intel continues to deliver less than 10 percent per dollar. I could care less if the single performance is the same.Call me next time Intel releases a chip a year later that is 240 percent faster for the same price.
Hurr Durr - Monday, July 24, 2017 - link
So you bought yourself inferior IPC and a sad attempt at ameliorating it by piling up cores, and now have to cope with this through wishful thinking of never materializing performance percents. Classic AMD victim behavior.Alistair - Monday, July 24, 2017 - link
First of all, stop using IPC, an expression you don't understand. Use single core performance. In almost every single benchmark I see dramatic speed improvements. I'm comparing the i5 with a Ryzen 1700 as they were the same cost. People harping over the i7-7700k apparantly didn't notice the 1700 selling for as low as $279 USD.Also get higher fps in almost every single game (Mass Effect Andromeda, Civilization and Overwatch in particular).
Alistair - Tuesday, July 25, 2017 - link
I have tremendous respect for Ian, whose knowledge and integrity is of the highest order. I just think some of his words in this review lose the plot. As he said, "it would appear Intel has an uphill struggle to convince users that Kaby Lake-X is worth the investment". He should have emphasized that a little more.In Canada, Ryzen 1700 plus motherboard = $450. i5 (not i7) plus motherboard is $600. Yes, $150 dollars more!
Intel has 20 percent faster single core performance and yet Ryzen is 2.4 times (+140 percent) faster overall... Numbers should speak for themselves if you don't lose the plot. I agree single threaded performance is very important when the divergence is large, such as Apple's A10 vs Snapdragon 835, or the old Bulldozer. But the single threaded gap has mostly closed and a yawning gulf has opened up in total price/performance. Story of the year!
Hurr Durr - Tuesday, July 25, 2017 - link
Extolling price slashing right after launch, boy you`re on a roll today.silverblue - Tuesday, July 25, 2017 - link
I think you should prove why you think Intel is the superior buy, instead of just trolling and not actually providing any rationale behind your "arguments".On Amazon.co.uk right now, there are four Ryzen and one FX CPU in the top 10. Here's the list (some of the recommended retail price values are missing or a bit - in the case of the 8350 - misleading):
1) i7-7700K £308.00; RRP £415.99
2) R5 1600 £189.19; RRP £219.95
3) R7 1700 £272.89; RRP £315.95
4) i5-7600K £219.99; RRP £?
5) i5-7500 £173.00; RRP £?
6) FX-8350 £105.50; RRP £128.09
7) i5-6500 £175.09; RRP £?
8) R5 1500X £165.99; RRP £189.98
9) Pentium G4400 £48.90; RRP £?
10) R5 1600X £215.79; RRP £249.99
There must be a ton of stupid people buying CPUs now then, or perhaps they just prefer solder as their thermal interface material of choice.
Advantages for Intel right now: clock speed; overclocking headroom past 4 GHz; iGPU (not -X CPUs)
Disadvantages for Intel right now: price; limited availability of G4560; feature segmentation (well, that's always been a factor); overall platform cost
An AMD CPU would probably consume similar amounts of power if they could be pushed past 4.1GHz so I won't list that as a disadvantage for Intel, nor will I list Intel's generally inferior box coolers as not every AMD part comes with one to begin with.
The performance gap in single threaded workloads at the same clock speed has shrunk from 60%+ to about 10%, power consumption has tumbled, and it also looks like AMD scales better as more cores are added. Unless you're just playing old or unoptimised games, or work in a corporate environment where money is no object, I don't see how AMD wouldn't be a viable alternative. That's just me, though - I'm really looking forward to your reasons.
Gothmoth - Tuesday, July 25, 2017 - link
no first of = stop arguing with stupid trolls...prisonerX - Monday, July 24, 2017 - link
I can double my IPC by having another core. Are you really that dumb?Hurr Durr - Tuesday, July 25, 2017 - link
AMD victim calling anyone dumb is peak ironing. You guys are out in force today, does it really hurt so bad?wira123 - Tuesday, July 25, 2017 - link
yeah intel victim is in full force as well today, which is indeed ironicSantoval - Tuesday, July 25, 2017 - link
That is not how IPC works, since it explicitly refers to single-core, single-thread performance. As the number of cores rises, the performance of a *single* task never scales linearly, because there is always some single-threaded code involved (Amdahl's law). For example, if your task has 90% parallel and 10% serial code, its performance will max out at 10x that of a single core at ~512 cores. From then on, even if you had a CPU with infinite cores, you couldn't extract half an ounce of additional performance. If your code was 95% parallel, the performance of your task would plateau at 20x. For that, though, you would need ~2048 cores. And so on.
Of course, Amdahl's law does not provide a complete picture. It assumes, for example, that your task and its code will remain fixed no matter how many cores you throw at them. And it disregards the possibility of computing distinct tasks in parallel on separate cores. That's where Gustafson's law comes in. This "law" is not concerned with speeding up the performance of tasks but with computing larger and more complex tasks in the same amount of time.
An example given in Wikipedia involves boot times: Amdahl's law states that you can speed up the boot process, assuming it can be made largely parallel, up to a certain number of cores. Beyond that, when you become limited by the serial code of your bootloader, adding more cores does not help. Gustafson's law, on the contrary, states that instead of speeding up the boot process by adding more cores and computing resources, you could add colorful GUIs, increase the resolution, etc., while keeping the boot time largely the same. This idea could be applied to many, but not all, computing tasks, for example ray tracing (for more photorealistic renderings) and video encoding (for smaller files or videos with better quality), and many other heavily multi-threaded tasks.
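To put numbers on that: Amdahl's law is just speedup(n) = 1 / ((1 - p) + p / n), where p is the parallel fraction and n the core count. Here is a minimal Python sketch using the 90% and 95% figures quoted above, purely for illustration:

```python
# Minimal sketch of Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
# where p is the parallel fraction of the task and n the number of cores.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.90, 0.95):
    for n in (8, 512, 2048, 1_000_000):
        print(f"p={p:.2f}  cores={n:>7}  speedup={amdahl_speedup(p, n):6.2f}")
    # The limit as n grows is 1 / (1 - p): 10x for p = 0.90, 20x for p = 0.95.
```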
Rickyxds - Monday, July 24, 2017 - link
I just agree XD.
Diji1 - Wednesday, July 26, 2017 - link
"Overall speed increase 240%."
LMAO. Ridiculous.
Alistair - Wednesday, July 26, 2017 - link
No reason to laugh. I compared the 6600k vs the Ryzen 1700. 1 year speed increase of 144 percent (2.44 times the speed). Same as this: 1135 vs 466 points.
http://cpu.userbenchmark.com/Compare/Intel-Core-i5...
Dr. Swag - Tuesday, July 25, 2017 - link
I disagree; the best value is the 1600, as it OCs as well as the 1600X, comes with a decent stock cooler, and is cheaper.
vext - Monday, July 24, 2017 - link
Interesting article, but it seems intended to play down the extremely bad press X299 has received, which is all over the internet and YouTube.
Once you get past Mr. Cutress' glowing review, it's clear that the i5-7640X is not worth the money because of lackluster performance, the i7-7740X is marginally faster than the older 7700K, and the i7-7800X is regularly beaten by the 7740X in many benchmarks that actually count and is a monstrously inefficient energy pig. Therefore the only Intel CPUs of this batch worth buying are the 7700K/7740X, and there is no real advantage to X299. In summary, it doesn't actually change anything.
It's very telling that Mr. Cutress doesn't comment on the absolutely egregious energy consumption of the 7800X. The Test Bed setup section doesn't list the 7800X at all. The 7640X and 7740X are using a Thermalright True Copper (great choice!) but there's no info on the 7800X cooler. Essentially, the 7800X cameo appearance is only there to challenge the extremely strong Ryzen multi-threaded results, but its negative aspects are not discussed, perhaps because they might frighten people away from X299. Tsk, tsk. As my 11 year old daughter would say, "No fair." By the way, the 7800X is selling for ~$1060 right now on Newegg, not $389.
Proudly typed on my Ryzen 1800x/Gigabyte AB350 Gaming 3. # ;-)
Ian Cutress - Monday, July 24, 2017 - link
You may not have realised but this is the Kaby Lake-X review, so it focuses on the KBL-X parts. We already have a Skylake-X review for you to mull over. There are links on the first page.
mapesdhs - Monday, July 24, 2017 - link
Nevertheless, the wider picture is relevant here. The X299 platform is a mess. Intel is aiming KL-X at a market which doesn't exist, they've locked out features that actually make it useful, it's more power hungry, and a consumer needs a lot of patience and plenty of coffee to work out what the heck works and what doesn't on a mbd with a KL-X fitted.
This is *exactly* the sort of criticism of Intel which should have been much stronger in the tech journalism space when Intel started pulling these sorts of stunts back with the core-crippled 3930K, heat-crazy IB and PCIe-crippled 5820K. Instead, with a few exceptions, the tech world has been way too forgiving of Intel's treading-water attitude ever since SB, and now they've panicked in response to Ryzen and released a total hodgepodge of a chipset and CPU lineup which makes no sense at all. And if you get any disagreement about what I've said from anyone at Intel, just wave a 4820K in their face and say, "well, explain this then" (quad-core chip with 40 PCIe lanes, da daa!).
I've been a big fan of Z68 and X79, but nothing about Intel's current lineup appeals in the slightest.
serendip - Tuesday, July 25, 2017 - link
There's also the funny bit about motherboards potentially killing KBL-X CPUs if a Skylake-X was used previously.
What's with Intel's insane product segmentation strategy with all the crippling and inconsistent motherboard choices? It's like they want to make it hard to choose, so buyers either get the cheapest or most expensive chip.
Haawser - Tuesday, July 25, 2017 - link
'EmergencyLake-X' is just generally embarrassing. Intel should just find a nearby landfill site and quietly bury it.
Spoelie - Monday, July 24, 2017 - link
On the first page, I assume the green highlight in the processor charts signifies an advantage for that side. Why are the cores/threads rows in the Ryzen side not highlighted? Or is 8/16 not better than 4/8?
Ian Cutress - Monday, July 24, 2017 - link
Derp. Fixed.
Gothmoth - Monday, July 24, 2017 - link
intel must really push money into anandtech. :) so many interesting things to report about and they spend time on a niche product.....
Ian Cutress - Monday, July 24, 2017 - link
This has been in the works for a while because our CPU failed. I work on the CPU stuff - other editors work on other things ;) If you've got an idea, reach out to us. I can never guarantee anything (I've got 10+ ideas that I don't have time to do) but if it's interesting we'll see what we can do. Plus it helps us direct what other content we should be doing.
halcyon - Monday, July 24, 2017 - link
This is an amazing amount of benchmarking with many options. Thank you. Must have been a lot of work :-)
The obvious idea is this:
Gaming (modern CPU limited and most played games) & Productive work (rendering, encoding, 4K video work, R/statistics/Matlab)
Test those under 4c/8t and 8c/16t CPUs both from AMD and Intel - all at the most common non-esoteric overclock levels (+/-10%).
This is what many of your readers want:
How much does a c. 5GHz 4c/8t do vs a 4.x GHz 8c/16t when taken to its everyday stable extreme, in modern games / productivity?
The web is already full of benchmarks at stock speed. Or overclocked Ryzen R 7 against stock Intel, or OC intel against overclocked Ryzen - and the game/app selections are not very varied.
The result is a simple graph that plots the (assumed) linear trend in performance/price and shows any deviations below/above the linear trend.
Of course, if you already have the Coffee Lake 6c/12t sample, just skip the 4c/8t and go with the 6c/12t vs 8c/16t comparison.
Thanks for all the hard work throughout all these years!
Ryan Smith - Monday, July 24, 2017 - link
"so many interesting things to report about and they spend time on a niche product....."What can we say? CPUs have been our favorite subject for the last 20 years.=)
user_5447 - Monday, July 24, 2017 - link
"For 2017, Intel is steering the ship in a slightly different direction, and launching the latest microarchitecture on the HEDT platform."Skylake-S, Kaby Lake-S and Kaby Lake-X share the same microarchitecture, right?
Then Skylake-X is newer microarchitecture than Kaby Lake-X (changes to L2 and L3 caches, AVX-512).
Ian Cutress - Monday, July 24, 2017 - link
Correct me if I'm wrong: SKL-SP cores are derived from SKL-S, and are 14nm. KBL-S/X are 14+, and share most of their design with SKL-S, with the main changes being power related. Underneath there's no real performance difference (except Speed Shift v2), but Intel classifies Kaby Lake as its latest non-AVX512 IPC microarchitecture.
user_5447 - Monday, July 24, 2017 - link
Kaby Lake-S has some errata fixes compared to Skylake-S. AFAIK, this is the only change to the CPU core (besides the Speed Shift v2, if it even involved hardware changes).
David Kanter says Skylake-X/EP is 14+ nm: http://www.realworldtech.com/forum/?threadid=16889...
extide - Wednesday, July 26, 2017 - link
I have a buddy who works in the fabs -- SKL-X is still on plain 14nm.
Chaser - Monday, July 24, 2017 - link
Go 2600K. LMAO!
YukaKun - Monday, July 24, 2017 - link
Hey, I'm still using my 4.6GHz 2700K, so these numbers bring joy to me!
Cheers! :P
mapesdhs - Monday, July 24, 2017 - link
4.6? Outrageous! I would be offended if I were a 2700K at a mere 4.6! Get that thing up to 5.0 asap. 8) Mbd-dependent I suppose, but I've built seven 2700K systems so far, 5.0 every time, low noise and good temps. Marvelous chip. And oh yeah, 2GB/sec with a 950 Pro. 8)
lowlymarine - Tuesday, July 25, 2017 - link
Either you're water cooling those systems, or you should consider investing in lottery tickets. My 2600k wouldn't push past 4.4 without very worrying amounts of voltage (1.4V+), and even 4.4 ran so hot on my 212+ that I settled for 4.2 to keep the core under 1.3V.
soliloquist - Monday, July 24, 2017 - link
Yeah, Sandy Bridge is holding up nicely. It's pretty ridiculous actually.
colonelclaw - Monday, July 24, 2017 - link
Wait, am I reading these graphs correctly? Unless I'm going mad, they seem to say that for gaming there's no need to upgrade if you already have a 2600K. Huh?
If true, and I have no reason to doubt the data, that would make the 2600K one of the greatest processors ever?
Icehawk - Monday, July 24, 2017 - link
Yup, it's been said many times - if you have an i7 processor you really don't need to upgrade it for gaming, spend the money on a new GPU every few years. I have a 3770K & GTX 970; other than the video card the system is 6 years old - I used to build a new one every other year. I've been considering the 7800X/7820X though, as I do a lot of encoding.
Gothmoth - Monday, July 24, 2017 - link
"...Intel’s official line is about giving customers options. ..."yeah like.. if you want more PCI lanes to use all oyu mainboard features just buy the 999$ CPU..... LOL.
mapesdhs - Monday, July 24, 2017 - link
Indeed, just like how the "option" of a CPU like the 4820K (4-core but with 40 lanes) suddenly vanished after X79. :D Intel's current lineup is an insult.
Kalelovil - Monday, July 24, 2017 - link
Some mistakes for the Ryzen entries in the comparisons on page 1:
PCI-E (Ryzen die has 20 lanes non-chipset, not 16), clockspeeds (too high), TDP (1700 is 65W).
Also, I see your point of comparing non-sale prices, but the 1700X seems to be widely and consistently available at near the i7-7740x MSRP. It's all but an official price cut.
Ian Cutress - Monday, July 24, 2017 - link
In the way everyone has historically been reporting PCIe lanes, Ryzen only has 16 PCIe lanes intended for graphics, with the other four for the chipset and another four for storage as an SoC. We've repeated this over and over and over again. Same with Threadripper: 60, plus four for chipset. If we're going to start counting PCIe lanes for chipsets (and DMI equivalents) and SoC related PCIe lanes for storage and others, we'll have to go and rewrite the PCIe lane counts for the last several generations of Intel and AMD CPUs.
Kalelovil - Monday, July 24, 2017 - link
If the category is PCIe lanes for graphics, that is quite right.
But by that token, doesn't (non cut-down) Broadwell-E/Skylake-E only have 32 lanes intended for graphics, as the switching logic allows for 2x16 and 4x8 configurations?
Although this is getting quite in-the-weeds. Overall I really appreciate the time and effort put into PC component reviews by the Anandtech staff.
FreckledTrout - Monday, July 24, 2017 - link
I agree with Ian, as 4 PCIe lanes are always taken since you are running Ryzen with a chipset, with no real way around that. I would also agree with, say, Skylake-X reporting 4 fewer PCIe lanes for the DMI link.
Trenteth - Wednesday, July 26, 2017 - link
Except Ryzen has 16x GPU lanes, 4x to the chipset and 4x direct to an NVMe or U.2 drive. It's 20 usable PCIe 3.0 lanes off the CPU.
I got 40 lanes on my E5-2690.
I'm running 4x 1070s on that, and PCIe based storage, and I doubled my throughput by moving the SSD to a riser card (until the 4th GPU went in), which means it's back on the m/b.
Though, you can't notice in everyday use. Oddly.
Trenteth - Wednesday, July 26, 2017 - link
Having the 4x PCIe 3.0 lanes for an NVMe drive is an advantage; it's connected directly to the CPU and bypasses the chipset link, which allows more bandwidth for USB/PCIe 2.0 lanes and SATA. I don't agree with you on not counting them.
Kalelovil - Monday, July 24, 2017 - link
Your charts seem to label the i7 7740X with a $329 MSRP.
In contrast, your first page (and Intel ARK) lists a $339-$350 MSRP.
I assume the former is a mistake?
Ian Cutress - Monday, July 24, 2017 - link
$339 is the 1k tray price - the one that Intel quotes in the price lists and applicable if you buy 1000 OEM CPUs. $350 is the MSRP that retailers will apply to their stock from distributors. Add more if you want a cooler. The issue here is that sometimes Intel never quotes an MSRP for some OEM-only processors, and AMD never seems to quote tray/OEM prices for retail parts. I'll edit this and make it clearer.
Kalelovil - Monday, July 24, 2017 - link
Oh, by "former" I was referring to the $329 in your charts, not the $339 on ARK.
Ian Cutress - Monday, July 24, 2017 - link
Oops, I misread the price and misread your comment. Graphs should be updated with a cache refresh.
iwod - Monday, July 24, 2017 - link
Intel has 10nm and 7nm by 2020 / 2021. Core count is basically a solved problem, limited only by price.
What we need is a substantial breakthrough in single thread performance. Maybe there are new materials that could bring us 10+ GHz, but those aren't even on the 5 year roadmap.
mapesdhs - Monday, July 24, 2017 - link
That's more down to better sw tech, which alas lags way behind. It needs skills that are largely not taught in current educational establishments.
wolfemane - Monday, July 24, 2017 - link
Under Handbrake testing, just above the first graph you state:
"Low Quality/Resolution H264: He we transcode a 640x266 H264 rip of a 2 hour film, and change the encoding from Main profile to High profile, using the very-fast preset."
I think you mean to say "HERE we transcode..."
Great article overall. Thank you!
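(For anyone curious what a low-quality pass like the one quoted above might look like when scripted at home, here is a rough, hypothetical sketch using HandBrakeCLI from Python. The file names are placeholders and the flag spellings should be checked against your HandBrake version; this is not the review's actual script.)

```python
# Hypothetical sketch of a scripted x264 transcode pass similar to the one described:
# re-encode a small H264 source with the x264 "veryfast" preset and High profile.
# File names are placeholders; flags follow HandBrakeCLI's documented options.
import subprocess
import time

cmd = [
    "HandBrakeCLI",
    "-i", "input_640x266_main.mp4",    # placeholder source rip
    "-o", "output_high_veryfast.mp4",  # placeholder output file
    "-e", "x264",
    "--encoder-preset", "veryfast",
    "--encoder-profile", "high",
]

start = time.perf_counter()
subprocess.run(cmd, check=True)
print(f"Transcode took {time.perf_counter() - start:.1f} s")
```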
Ian Cutress - Monday, July 24, 2017 - link
Thanks, corrected :)
wolfemane - Monday, July 24, 2017 - link
I wish your team would finally add an edit button to comments! :)
On the last graph, ENCODING: Handbrake HEVC (4K), you don't list the 1800X, but it is present in the previous two graphs @ LQ and HQ. Was there an issue with the 1800X preventing 4K testing? Quite interested in its results if you have them.
Ian Cutress - Monday, July 24, 2017 - link
When I first did the HEVC testing for the Ryzen 7 review, there was a slight issue in it running and halfway through I had to change the script because the automation sometimes dropped a result (like the 1800X which I didn't notice until I was 2-3 CPUs down the line). I need to put the 1800X back on anyway for AGESA 1006, which will be in an upcoming article.
IanHagen - Monday, July 24, 2017 - link
One thing that has caught my eye for a while is how compile tests using GCC or Clang show much better results on Ryzen compared to using Microsoft's VS compiler. Phoronix's tests clearly show that. Thus, I cannot yet really believe Ian's recurring explanation of Ryzen suffering from its victim L3 cache. After all, the 1800X beats the 7700K by a sizable margin when compiling the Linux kernel.
Isn't Ryzen's relatively poor performance compiling Chromium due to idiosyncrasies of the VS compiler?
Ian Cutress - Monday, July 24, 2017 - link
The VS compiler seems to love L3 cache, then. The 1800X does have 2x threads and 2x cores over the 7700K, accounting for the difference. We saw a 17% drop going from SKL-S with its fully inclusive L3 to SKL-SP with a victim L3, clock for clock.
Chromium was the best candidate for a scripted, consistent compile workflow I could roll into our new suite (and runs on Windows). Always open for suggestions that come with an ELI5.
ddriver - Monday, July 24, 2017 - link
ddriver - Monday, July 24, 2017 - link
So we are married to chromium, because it only compiles with msvc on windows?
Or maybe because it is a shitty implementation that for some reason stacks well with intel's offerings?
Pardon my ignorance, I've only been a multi-platform software developer for 8 years, but people who compile stuff a lot usually don't compile chromium all day.
I'd say go GCC or Clang, because those are quality community-driven open source compilers that target a variety of platforms, unlike msvc. I mean, if you really want to illustrate the usefulness of CPUs for software developers, which at this point is rather doubtful...
Ian Cutress - Monday, July 24, 2017 - link
Again, find me something I can rope into my benchmark suite with an ELI5 guide and I'll try and find time to look into it. The Chromium test took the best part of 2-3 days to get in a position where it was scripted and repeatable and fit with our workflow - any other options I examined weren't even close. I'm not a computer programmer by day either, hence the ELI5 - just years-old knowledge of using Commodore BASIC, batch files, and some C/C++/CUDA in VS.
mapesdhs - Monday, July 24, 2017 - link
Ok, you get a billion points for knowing Commodore BASIC. 8)
IanHagen - Monday, July 24, 2017 - link
Dr. Ian, I would like to apologize for my poor choice of words. Reading it again, it sounds like I accused you of something, which is not the case.
I'm merely puzzled by how Ryzen performs poorly using msvc compared to other compilers. To be honest, your findings are very relevant to anyone using Visual Studio. But again, I find Microsoft's VS compiler to be a bit of an oddball.
A few weeks ago I was running my own tests to determine whether my Core i5 4690K was up to my compiling tasks. Since most of my professional work sits on top of programming languages with either short compile times or no compilation needed at all, I never bothered much about it. But recently I've been using C++ more and more for my game development hobby, and compile times started to bother me. What I found puzzling is that after running a few tests I couldn't manage to get any gains through parallelism, even after verifying that msvc was indeed spawning all 4 threads to compile files. Then I tried disabling two cores and clocking the thing higher and... it was faster! Not by a lot, but faster still. How could it be faster with a 50% decrease in the number of active cores and consequently threads doing compile jobs? I'm fully aware that linking is single threaded, but at least a few seconds should be gained with two extra cores, at least in theory. Today I had the chance to compile the same project on a Core i7 7700HQ and it was substantially slower than my Core i5 4690K, even with clocks capped to 3.2 GHz. In fact, it was 33% slower than my Core i5 at stock speeds.
Anyhow… Dr. Ian's findings are a very good pointer for those compiling C++ using msvc that Skylake-X is probably worth it over Ryzen. For my particular case, it would appear that Kaby Lake-X with the Core i7 7740X could even be the best choice, since my project somehow only scales nicely with clocks.
I just would like to see the wording point out that Skylake-X isn't a better compiling core overall. It's a better compiling core using msvc on this particular workload. On the GCC side of things, Ryzen is very competitive with it and a much better value, in my humble opinion.
As for the suggestion, I'd say that since Windows is a requirement, trying to script something to benchmark compile times using GCC would be daunting and unrealistic. Not a lot of people are using GCC to work on the Windows side of things. If Linux could be thrown into the equation, I'd suggest a project based on CMake. That would make it somewhat easy to write a simple script to set up, create a makefile and compile the project. Unfortunately, I cannot readily think of any big name projects such as Chromium that fulfill that requirement without having to meddle with eventual dependency problems as time goes by.
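For what it's worth, the scripting half of a CMake-based compile benchmark is the easy part. Here is a minimal sketch, assuming a hypothetical CMake project already checked out locally and cmake plus a compiler on the PATH; it is an illustration, not anyone's actual test suite:

```python
# Minimal sketch of a repeatable compile-time benchmark around a CMake project.
# The source directory is a placeholder for whatever project you choose to build.
import shutil
import subprocess
import time
from pathlib import Path

SRC = Path("some_cmake_project")   # placeholder: path to a checked-out CMake project
BUILD = Path("bench_build")

def timed(cmd):
    """Run a command and return how long it took in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

if BUILD.exists():
    shutil.rmtree(BUILD)           # start every run from a clean build tree

configure_s = timed(["cmake", "-S", str(SRC), "-B", str(BUILD)])
build_s = timed(["cmake", "--build", str(BUILD), "--parallel"])
print(f"configure: {configure_s:.1f} s, build: {build_s:.1f} s")
```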
Kevin G - Monday, July 24, 2017 - link
These chips edge out their LGA 1151 counterparts at stock, with overclocking also carrying a razor-thin edge over LGA 1151 overclocks. There are gains, but ultimately these really don't seem worth it, especially in light of the fragmentation this causes on the X299 platform. It's hard to place real figures on this, but I'd wager that the platform confusion is going to cost Intel more than what they will gain with these chips. Intel should have kept these in the lab until they could offer something a bit more substantial.
mapesdhs - Monday, July 24, 2017 - link
I wonder if it would have been at least a tad better received if they hadn't crippled the on-die gfx, etc.
DanNeely - Tuesday, July 25, 2017 - link
LGA2066 doesn't have video out pins because it was originally designed only for the bigger dies that don't include them; and even if Intel had some 'spare' pins it could use, adding video out would only make already expensive mobos, with a wide set of features that vary based on the CPU model, even more expensive and more confusing. Unless they add a GPU either to future CPUs in the family or (IMO a bit more likely) a very basic one to a chipset variant (to replace the crappy one some server boards add for KVM support), keeping the IGP fully off in mainstream dies on the platform is the right call IMO.
DrKlahn - Monday, July 24, 2017 - link
Great article, but the conclusion feels off:
"The benefits in the benchmarks are clear against the nearest competition: these are the fastest CPUs to open a complex PDF, at the top for office work, and at the top for most web interactions by a noticeable amount."
In most cases you're talking about a second or less between the Intel and AMD systems. That will not be noticeable to the average office worker. You're much more likely to run into scenarios where the extra cores or threads will make an impact. I know in my own user base shaving a couple of seconds off opening a large PDF will pale in comparison to running complex reports with 2 (4 threads) extra cores for less money. I have nothing against Intel, but I struggle to see anything in here that makes their product worth the premium for an Office environment. The conclusion seems a stretch to me.
mapesdhs - Monday, July 24, 2017 - link
Indeed, and for those dealing with office work it makes more sense to emphasise investment where it makes the biggest difference to productivity, which for PCs is having an SSD (ie. don't buy a cheap grunge box for office work), but more generally dear god just make sure employees have a damn good chair to sit on and a decent IPS display that'll be kind to their eyes. Plus/minus 1s opening a PDF is a nothingburger compared to good ergonomics for office productivity.
DrKlahn - Tuesday, July 25, 2017 - link
Yeah, an SSD is by far the best bang for the buck. From a CPU standpoint there are more use cases for the Ryzen 1600 than for the i5/i7 options we have from HP/Dell. Even the Ryzen 1500 series would probably be sufficient and allow even more per unit savings to put into other areas that would benefit folks more.
JimmiG - Monday, July 24, 2017 - link
The 7740X runs at just over a 2% higher clock speed than the 7700K. It can overclock maybe 4% higher than the 7700K. You'd really have to be a special kind of stupid to pay hundreds more for an X299 mobo just for gains that are nearly within the margin of error.
It doesn't make sense as a "stepping stone" onto HEDT either, because you're much better off simply buying a real HEDT chip right away. You'll pay a lot more in total if you first get the 7740X and then the 7820X, for example.
mapesdhs - Monday, July 24, 2017 - link
Intel seems to think there's a market for people who buy a HEDT platform but can't afford a relevant CPU, but would upgrade later. Highly unlikely such a market exists. By the time such a theoretical user would be in a position to upgrade, more than likely they'd want a better platform anyway, given how fast the tech is changing.
MTEK - Monday, July 24, 2017 - link
Random amusement: Sandy Bridge got 1st place in the Shadow of Mordor bench w/ a GTX 1060.shabby - Monday, July 24, 2017 - link
That's funny and sad at the same time, unfortunately.
mapesdhs - Monday, July 24, 2017 - link
S'why I love my 5GHz 2700K (daily system). And the other one (gaming PC). And the third (benchmarking rig), the two I've sold to companies, another built for a friend, another set aside to sell, another on a shelf awaiting setup... :D 5GHz every time. M4E, TRUE, one fan, 5 mins, done.
GeorgeH - Monday, July 24, 2017 - link
Those decreased overclocking performance numbers aren't just red flags, they're blinding red flashing lights with the power of a thousand suns.
Seriously, that should have been the entire article - this platform is a disaster if it loses performance under sustained load. That's not hyperbole, it's cold hard truth. Sustained load is part of what HEDT is about, and with X299 you're spending more money for significantly less performance?
I sincerely hope you're going to get to the bottom of this and not just shrug and let it slide away as a mystery. Hopefully it's just platform immaturity that gets ironed out, but at the present time I have absolutely no clue how you could recommend X299 in any way. Significantly less sustained performance is a do not pass go, do not collect $200, turn the car around, oh hell no, all caps showstopper.
deathBOB - Monday, July 24, 2017 - link
But they're big AVX workloads. We know heat and power get a bit crazy with AVX, and at some point we should just step back and realize that overclocking may not be appropriate for these workloads.
GeorgeH - Monday, July 24, 2017 - link
But other AVX workloads didn't have the issue.
Until we know exactly what is going on and what will be required to fix it, I can't comprehend how anyone can regard X299, at least with the quad core CPUs, as anything but "Nope". Maybe a BIOS update will help, or tuning the overclock, but maybe it'll require new motherboard revisions or delidding the CPU. I'm sure it'll get fixed/understood at some point, but for now recommending this platform is really hard to accept as a good idea.
MrSpadge - Monday, July 24, 2017 - link
> But other AVX workloads didn't have the issue.
Using a few of those instructions is different from hammering the CPU with them. Not sure what this software does, but this could easily explain it.
Icehawk - Monday, July 24, 2017 - link
I do a lot of Handbrake encoding to HEVC, which will peg all cores on my O/C'd 3770; it uses AVX, but obviously a much older version with less functionality, and I can have it going indefinitely without issue.
I've looked at the 7800X/7820X as an upgrade, but if they cannot sustain performance with a reasonable cooling setup then there is no point. The KBL-X parts don't offer enough of a performance improvement to be worth the cost of the X299 mobo, which also seems to be having teething problems.
Future proofing is laughable. Let's say you bought a 7740X today with the thought of upgrading in two years to a higher core count proc - how likely is it that your motherboard and the new proc will have the same pinout? History says it ain't happening at Camp Intel.
At this point I'm giving a hard pass to this generation of Intel products and hope that v2 will fix these issues. By then AMD may have come close enough in ST performance where I would consider them again, I really want the best ST & MT performance I can get in the $350 CPU zone which has traditionally been the top i7. AMD's MT performance almost tempts me to just build an encoding box.
I loved my Athlon back in the day, anyone remember Golden Fingers? :D
mapesdhs - Monday, July 24, 2017 - link
Golden Fingers... I had to look that up, blimey! :D
DrKlahn - Tuesday, July 25, 2017 - link
I recently went from a 4.6GHz 3770K to a 1700X @ 4GHz at home. I play some older games that don't thread well (WoW being one of them). The Ryzen is at least as fast or faster in those workloads. Run Handbrake or Sony Movie Studio and the Ryzen is MUCH faster. We built 6-core 5820K stations at work for some users and have recently added Ryzen 1600 stations due to the tremendous cost savings. We have yet to run into any tangible difference between the two platforms.
Intel does have a lead in ST, but tests like these emphasize it to the point that it seems like a bigger advantage than it is in reality. The only time I could see the premium being worth it is if you have a task that needs ST the majority of the time (or a program that is simply very poorly optimized for Ryzen). Otherwise AMD is offering an extraordinary value, and as you point out, AM4 will at least be supported for 2 more spins.
MrSpadge - Monday, July 24, 2017 - link
> realize that overclocking may not be appropriate for these workloads
That's going too far. Just don't overclock as far for heavy AVX usage.
MrSpadge - Monday, July 24, 2017 - link
Kind of agreed. Ian, you should log the clock speeds during benchmark runs and check for anomalies. The chip or mainboard could throttle, or your 4.0 GHz AVX clock could just be way too low. What's the default AVX clock? Maybe 4.4 GHz? That would pretty much match the 10% performance degradation.
Ian Cutress - Monday, July 24, 2017 - link
I need to do a performance scaling piece, I know. It's on the to-do list.
Kvaern1 - Monday, July 24, 2017 - link
As already mentioned, it's heavy AVX workloads which make it throttle when OC'ed. The same thing happens on OC'ed Skylakes.
arh2o - Monday, July 24, 2017 - link
Something seems wrong with the 7700k results vs the 7600k results. How is the 7600k beating the 7700k so handily in all the games? Are you sure the graphs are not swapped? ROTR shows the 7600k beating the 7700k by 20 FPS, which seems impossible considering most reviews on this game have the 7700k on top of the 7600k.
ydeer - Monday, July 24, 2017 - link
I would have liked to see some idle power consumption numbers, because my PC is always on.
Ro_Ja - Monday, July 24, 2017 - link
This was an interesting read. Thank you!
Marnox - Monday, July 24, 2017 - link
According to Intel (https://ark.intel.com/products/97129/Intel-Core-i7... the Turbo speed for the 7700K is the same as the 7740X.
mapesdhs - Monday, July 24, 2017 - link
Is the Max Turbo for one core or two? Always bugged me that Intel doesn't list the individual core/bin levels.
versesuvius - Monday, July 24, 2017 - link
It will be interesting to see how many of these CPUs Intel will actually produce (collect ?) and bring to the market.
djayjp - Monday, July 24, 2017 - link
Ian, why didn't you check if the OC was being thermally throttled? Easy enough to check this. And easy enough to see if it's the temperature of the cores or not. Surprising you wouldn't include temperature or power consumption data with the OC (though I understand this hasn't typically been a focus of AT). Another site demonstrated throttling at ~95+ C.
mapesdhs - Monday, July 24, 2017 - link
Is that the same site which showed that the TIM Intel is using is just not allowing the heat to get from the die to the cap? Die temp shoots up, cap temp doesn't, even with a chiller cooler.
melgross - Monday, July 24, 2017 - link
This article gives a good reason why huge numbers of cores are a waste of money for most users:
http://www.computerworld.com/article/3209724/compu...
Old_Fogie_Late_Bloomer - Monday, July 24, 2017 - link
Yeah, don't bother starting the article unless you're willing to create yet another useless online identity. Shame, since it seemed moderately interesting, but...
alpha754293 - Monday, July 24, 2017 - link
re: overclocking
That works well for the occasional heavy workload, but if you are going to be constantly running at peak load (like I did for engineering analysis), overclocking of any kind, from my experience, isn't worth the dead core or entire CPU.
I've already fried a core on the 3930K once before taking it up from 3.2 GHz stock, 3.5 GHz max TurboBoost to 4.5 GHz.
mapesdhs - Monday, July 24, 2017 - link
Alas, this stuff does vary according to the individual CPU, mbd, RAM, etc. What cooling did you use? Could also be the vcore was too high - a lot of SB-E users employed a high vcore, not realising that using a lower PLL would often make such a high vcore unnecessary. It's even more complicated if one fills all 8 RAM slots on a typical X79 mbd.
alpha754293 - Tuesday, July 25, 2017 - link
The cooling that I was using was a Corsair H80i v2.
The temps were fine and were consistently fine.
RAM was 8x 8 GB Crucial Ballistix Sport, I think DDR3-1600? Something like that. Nothing special, but nothing super crappy either. I actually had the entire set of RAM (all eight DIMMs) RMA'd once, so I know that I got a whole new set back when that happened about oh...maybe a year-and-a-half ago now? Something like that.
Motherboard was Asus X79 Sabertooth.
Yeah, I had all 8 DIMM slots populated because it was a cheaper option compared to 4x 16 GB. Besides, using all 8 DIMMs also was able to make use of the quad-channel memory whereas going with 4x 16 GB - you can't/won't (since the memory needed to be installed in paired DIMM slots).
That CPU is now "castrated" down to 4 cores (out of 6) because 1 of the cores died (e.g. will consistently throw BSODs, but if I disable it, no problems). Makes for a decent job scheduler (or at least that's the proposed task/life for it).
Dr. Swag - Monday, July 24, 2017 - link
Hey Ian, on the first page you listed the turbo of the 7700k as 4.4, whereas it's actually 4.5.
Yuriman - Monday, July 24, 2017 - link
Shouldn't the 7700K read "4.2-4.5GHz" rather than 4.2-4.4?
Dug - Monday, July 24, 2017 - link
On RoTR-1-Valley 1080p it shows the i5 7600k at 141fps and the i7 7700k at 103fps. Have a feeling these might be transposed.
Ian Cutress - Monday, July 24, 2017 - link
*As specifically written down on that page and mentioned in the explanation for that benchmark*, GeoThermal Valley at 1080p on the GTX 1080 seems incredibly optimized: all the Core i5 chips do so much better than all the other chips.
lixindiyi - Monday, July 24, 2017 - link
The frequency of the Ryzen 7 1700 should be 3.0/3.7 GHz.
Integr8d - Monday, July 24, 2017 - link
"After several years of iterative updates, slowly increasing core counts and increasing IPC, we have gotten used to being at least one generation of microarchitecture behind the mainstream consumer processor families. There are many reasons for this, including enterprise requirements for long support platforms as well as enterprise update cycles."You forgot 'milking their consumer, enthusiast and enterprise markets'...
Arbie - Monday, July 24, 2017 - link
Ian! You're a Brit - please help defend our common language. You meant to say "raises the question". "Begs the question" is totally different and does not even approximate what you intended.
Journos: You don't have to understand "begs the question" because you'll very rarely need it. If you mean "raises the question" then just use that - plain English.
Mayank Singh - Monday, July 24, 2017 - link
Can someone explain how the i5's could have got better performance than the i7 at 1080p?
Ian Cutress - Monday, July 24, 2017 - link
Geothermal Valley on RoTR seems to be optimized for 1080p on a GTX 1080, and overly so, giving a lot more performance on that specific setup and test.
Icehawk - Monday, July 24, 2017 - link
How is that possible? The i5 has slower clocks and less cache. So how can it be faster? "Optimization" isn't valid here IMO unless I am missing something.
I think you have a throttling issue or something else that needs to be examined. Monitoring long term clocks and temps is something that you need to look at incorporating, if only to help validate results.
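Logging clocks alongside a benchmark run is straightforward to script. Here is a minimal sketch using the third-party psutil package that samples the reported CPU frequency once a second in the background; the 10-second sleep is a placeholder for launching the real workload, and temperatures would need a separate, platform-specific tool:

```python
# Minimal sketch: sample the reported CPU frequency in the background while a
# benchmark runs, so throttling shows up in the log afterwards. Requires psutil.
import csv
import threading
import time

import psutil

def log_clocks(stop, path="clock_log.csv", period=1.0):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t_seconds", "freq_mhz"])
        start = time.time()
        while not stop.is_set():
            freq = psutil.cpu_freq()   # reported frequency in MHz (may be None)
            writer.writerow([round(time.time() - start, 1),
                             freq.current if freq else ""])
            time.sleep(period)

stop = threading.Event()
logger = threading.Thread(target=log_clocks, args=(stop,), daemon=True)
logger.start()
time.sleep(10)   # placeholder: launch the actual benchmark here, e.g. subprocess.run([...])
stop.set()
logger.join()
```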
lucam - Monday, July 24, 2017 - link
When are you guys doing the iPad Pro review?
dgz - Monday, July 24, 2017 - link
I remember a time when AT used to be trustworthy. Who are you fooling, Ian? No one, that's who. Shame on you.
Ian Cutress - Monday, July 24, 2017 - link
I've been called an AMD shill and an Intel shill in the space of two weeks. Fun, isn't it.
mapesdhs - Monday, July 24, 2017 - link
Let the memes collide, focus the memetic radiation, aim it at IBM and get them to jump into the x86 battle. :D
dgz - Monday, July 24, 2017 - link
Man, I could really use an edit button. my brain has shit itself
mapesdhs - Monday, July 24, 2017 - link
Have you ever posted a correction because of a typo, then realised there was a typo in the correction? At that point my head explodes. :D
Glock24 - Monday, July 24, 2017 - link
"The second is for professionals that know that their code cannot take advantage of hyperthreading and are happy with the performance. Perhaps in light of a hyperthreading bug (which is severely limited to minor niche edge cases), Intel felt a non-HT version was required."This does not make any sense. All motherboards I've used since Hyper Threading exists (yes, all the way back to the P4) lets you disable HT. There is really no reason for the X299 i5 to exist.
Ian Cutress - Monday, July 24, 2017 - link
Even if the i5 was $90-$100 cheaper? Why offer i5s at all?
yeeeeman - Monday, July 24, 2017 - link
The first interesting point to extract from this review is that the i7 2600K is still good enough for most gaming tasks. Another point we can extract is that games are not optimized for more than 4 cores, so all AMD offerings are yet to show what they are capable of, since all of them have more than 4 cores / 8 threads.
I think the single threaded absolute performance argument is plain air, because the differences in single thread performance between all the top CPUs that you can currently buy are slim, very slim. Kaby Lake CPUs are best at this just because they are sold with high clocks out of the box, but this doesn't mean that if AMD tweaks its CPUs and pushes them to 5GHz it won't get back the crown. Also, in a very short time there will be another uArch and another CPU that will again have better single threaded performance, so it is a race without end and without reason.
What is more relevant is the multi-core race, which sooner or later will end up being used more and more by games and software in general. And when games move to over 4 core usage, then all these 4 core / 8 thread overpriced "monsters" will become useless. That is why I am saying that AMD has some real gems on its hands with the Ryzen family. I bet you that the R7 1700 will be a much better, more competent CPU in 3 years' time compared to the 7700K or whatever you are reviewing here. Dirt cheap, push it to 4GHz and forget about it.
Icehawk - Monday, July 24, 2017 - link
They have been saying for years that we will use more cores. Here we are almost 20 years down the road and there are few non-professional apps and almost no games that use more than 4 cores, and the vast majority use just two. Yes, more cores help with running multiple apps & instances, but if we are just looking at the performance of the focused app, fewer cores and more MHz is still the winner. From all I have read, the two issues are that not everything is parallelizable and that coding for more cores/threads is more difficult, and neither of those is going away.
mapesdhs - Monday, July 24, 2017 - link
Thing is, until now there hasn't been a mainstream-affordable solution. It's true that parallel coding requires greater skill, but that being the case, the edu system should be teaching those skills. Instead the time is wasted on gender studies nonsense. Intel could have kick-started this whole thing years ago by releasing the 3930K for what it actually was, an 8-core CPU (it has 2 cores disabled), but they didn't have to because back then AMD couldn't even compete with the mid-range SB 2500K (hence why they never bothered with a 6-core for mainstream chipsets). One could argue the lack of market sw evolution to exploit more cores is Intel's fault; they could have helped promote it a long time ago.
cocochanel - Tuesday, July 25, 2017 - link
+1!!!
twtech - Monday, July 24, 2017 - link
What can these chips do with a nice watercooling setup, and a goal of 24x7 stability? Maybe 4.7? 4.8?
These seem like pretty moderate OCs overall, but I guess we were a bit spoiled by Sandy Bridge, etc., where a 1GHz overclock wasn't out of the question.
mapesdhs - Monday, July 24, 2017 - link
2700K, +1.5GHz every time.
shabby - Monday, July 24, 2017 - link
So much for upgrading from a KBL-X to a SKL-X when the motherboard could fry the CPU, nice going Intel.
Nashiii - Monday, July 24, 2017 - link
Nice article Ian. What I will say is I am a little confused around this comment:
"Intel wins for the IO and chipset, offering 24 PCIe 3.0 lanes for USB 3.1/SATA/Ethernet/storage, while AMD is limited on that front, having 8 PCIe 2.0 from the chipset."
You forgot to mention the AMD total PCI-E IO. It has 24 PCI-E 3.0 lanes with 4xPCI-e 3.0 going to the chipset which can be set to 8x PCI-E 2.0 if 5Gbps is enough per lane, i.e in the case of USB3.0.
I have read that Kabylake-X only has 16 PCI-E 3.0 lanes native. Not sure about PCH support though...
KAlmquist - Monday, July 24, 2017 - link
With Kabylake-X, the only I/O that doesn't go through the chipset is the 16 PCI-E 3.0 lanes you mention. With Ryzen, in addition to what is provided by the chipset, the CPU provides:
1) Four USB 3.1 connections
2) Two SATA connections
3) 18 PCI-E 3.0 lanes, or 20 lanes if you don't use the SATA connections
So if you just look at the CPU, Ryzen has more connectivity than Kabylake-X, but the X299 chip set used with Kabylake-X is much more capable (and expensive) than anything in the AMD lineup. Also, the X299 doesn't provide any USB 3.1 ports (or more precisely, 10 gb per second speed ports), so those are typically provided by a separate chip, adding to the cost of X299 motherboards.
Allan_Hundeboll - Monday, July 24, 2017 - link
Interesting review with great benchmarks. (I don't understand why so many reviews only report average frames per second.)
The Ryzen R5 1600 seems to offer great value for money, but I'm a bit puzzled why the slowest clocked R5 beats the higher clocked R7 in a lot of the 99th percentile benchmarks. I'm guessing it's because the latency delta when moving data from one core to another penalizes the higher core count R7 more?
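For reference, the 99th percentile figures are usually derived from a per-frame time log along these lines; the frame times below are made-up illustrative numbers, not data from this review:

```python
# Minimal sketch: turn per-frame times (milliseconds) into average FPS and the
# 99th percentile ("1% low") FPS figure that reviews often quote.
import statistics

frame_times_ms = [16.7] * 95 + [33.3] * 5        # made-up data: mostly 60 fps, a few 30 fps spikes

avg_fps = 1000.0 / statistics.mean(frame_times_ms)

sorted_times = sorted(frame_times_ms)
idx = max(0, int(len(sorted_times) * 0.99) - 1)  # index of the 99th percentile frame time
p99_fps = 1000.0 / sorted_times[idx]

print(f"average: {avg_fps:.1f} fps, 99th percentile: {p99_fps:.1f} fps")
```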
BenSkywalker - Monday, July 24, 2017 - link
The gaming benchmarks are, uhm..... pretty useless.
Third tier graphics cards as a starting point, why bother?
Seems like an awful lot of wasted time. As a note you may want to consider: when testing a new graphics card you get the fastest CPU you can so we can see what the card is capable of; when testing a new CPU you get the fastest GPU you can so we can see what the CPU is capable of. The way the benches are constructed, they're pretty useless for those of us who want to know gaming performance.
Tetsuo1221 - Monday, July 24, 2017 - link
Benchmarking at 1080p... enough said. Completely and utterly redundant.
Qasar - Tuesday, July 25, 2017 - link
why is benchmarking @ 1080p Completely and utterly redundant?????
meacupla - Tuesday, July 25, 2017 - link
I don't know that guy's particulars, but, to me, using X299 to game at 1080p seems like a waste.
If I was going to throw down that kind of money, I would want to game at 1440p or 4K.
silverblue - Tuesday, July 25, 2017 - link
Yes, but 1080p shifts the bottleneck towards the CPU.
Gothmoth - Tuesday, July 25, 2017 - link
so why not test at 640x480... shifts the bottleneck even more to the cpu... you are kidding yourself.
silverblue - Tuesday, July 25, 2017 - link
Not really. If the GPU becomes the bottleneck at or around 1440p, and as such the CPU is the limiting factor below that, why go so far down when practically nobody games below 1080p anymore?
Zaxx420 - Monday, July 24, 2017 - link
"Over the last few generations, Intel has increased IPC by 3-10% each generation, making a 30-45% increase since 2010 and Sandy Bridge..."I have an old Sandy i5 2500K on an Asus Z68 that can do 5GHz all day on water and 4.8 on air. I know it's ancient IP...but I wonder if it could hold it's own vs a stock clocked Skylake i5? hmmmm...
hbsource - Tuesday, July 25, 2017 - link
Great review. Thanks.
I think I've picked the best nit yet: on the Civ 6 page, you inferred that Leonard Nimoy did the voiceover on Civ 5 when he actually did it on Civ 4.
gammaray - Tuesday, July 25, 2017 - link
it's kind of ridiculous to see the Sandy bridge chip beating new cpus at 4k gaming...
Zaxx420 - Tuesday, July 25, 2017 - link
Kinda makes me grin...I have an old Sandy i5 2500K on an Asus Z68 that can do 5GHz all day on water and 4.8 on air. I know it's ancient IP...but I wonder if it could hold its own vs a stock clocked Skylake i5? hmmmm...
Mugur - Tuesday, July 25, 2017 - link
Much ado about nothing. So the best case for the 7740X is Office applications or opening PDF files? The author seems to have lost sight of the forest because of the trees.
Some benchmarks are odd, some are useless in this context. I watched the YouTube version of this: https://www.techspot.com/review/1442-intel-kaby-la... and it looked like a more realistic approach for a 7740X review.
Gothmoth - Tuesday, July 25, 2017 - link
well i guess intel is putting more advertising money on anandtech.
otherwise i can't explain how an overpriced product with heat problems and artificially crippled pci lanes on an enthusiast platform(!) can get so much praise without much criticism.
jabber - Tuesday, July 25, 2017 - link
I miss the days when you saw a new bunch of CPUs come out and the reviews showed that there was a really good case for upgrading if you could afford to. You know, a CPU upgrade once or twice a year. Now I upgrade (maybe) once every 6-7 years. Sure it's better, but not so much fun.
Dragonstongue - Tuesday, July 25, 2017 - link
"Intel wins for the IO and chipset, offering 24 PCIe 3.0 lanes for USB 3.1/SATA/Ethernet/storage, while AMD is limited on that front, having 8 PCIe 2.0 from the chipset."
Funny, that, seeing as AM4 has 16 PCIe lanes available to it unless, as you go down the totem pole, those lanes get segregated differently. Even going from the above table, Intel is offering 16 for X299, not 24 as you put directly into words, so who wins in IO? No one; they both offer 16 lanes. Now if you are comparing via price, X299 is obviously a premium product, at least compared to the current AM4 premium end, which is the X370 chipset; pretty even footing on the motherboards when compared on similar "specs". Comparing the best AMD will offer in the form of X399, it makes the best "specs" of X299 laughable.
AMD seems to NOT be shortchanging buyers: PCIe lanes, functional DRAM ability (or speed), a proper thermal interface used, etc. etc.
It's your $ by all means, but seriously, folks need to take the blinders off: how much raw power are these "95W TDP" processors using when ramped to 5+ GHz? Sure, in theory it will be the fastest for per-core performance, but how long will the CPU last running at that level, how much extra power will be consumed, what price of an acceptable cooler is needed to maintain it within thermal spec, and so forth.
Interesting read, but much seems out of context to me. You may not like it, but AMD has given a far better selection of products this year for CPUs/motherboard chipsets: more cores, more threads, lots of IO connectivity options, and fair pricing overall (the $ value in Canada and the greed of merchants do not count as a fault against AMD).
Am done.
Firebat5 - Tuesday, July 25, 2017 - link
Ian,i'm interested in the details of the agility benchmark? how many photos are in your dataset and at what resolution? am doing similar work and i notice the working time doesn't seem to be linear with the number of photos.
Firebat5 - Tuesday, July 25, 2017 - link
agisoft* autocorrect strikes again.
damianrobertjones - Thursday, July 27, 2017 - link
Capitals can be a good thing.
Gothmoth - Tuesday, July 25, 2017 - link
reading this article again i must say im really ashamed. anandtech was once a great place but now it's just like car magazines: who pays best gets the best reviews. where is the criticism? everyone and his grandmother thinks intel has big issues (tim, heat, pci lanes, nonsense product). are you bent over so intel can inject more money more easily?
damianrobertjones - Thursday, July 27, 2017 - link
Is your shift key broke? Where are your capitals?
zodiacfml - Wednesday, July 26, 2017 - link
Impressive benchmarks. I could not ask for more. This revealed that Intel clearly doesn't have the premium or value position anymore. It is simply not there. They have to be on the 10nm process now to be superior in value and/or performance.
Walkeer - Wednesday, July 26, 2017 - link
Hi, what RAM frequency is the AMD platform running at? If it's the official maximum of 2666MHz, you can get 10-15% more performance using 3200MHz or faster memory.
edsib1 - Wednesday, July 26, 2017 - link
Please redo the Ryzen benchmarks using DDR4-3200 now that it is officially supported, and also use the latest updates of the games - e.g. RotTR v770.1+, where Ryzen gets a 25% increase.
You can't compare one platform with the latest updates and the other without - that's pointless.
Gulagula - Wednesday, July 26, 2017 - link
Can anyone explain to me how the 7600K and in some cases the 7600 are beating the 7700K almost consistently? I don't doubt the Ryzen results, but the Intel side of the results confuses the heck out of me.
PeterSun - Wednesday, July 26, 2017 - link
The 7800x is missing in the LuxMark CPU OpenCL benchmark?
kgh00007 - Thursday, July 27, 2017 - link
Hi, thanks for the great review. Are you guys still using OCCT to check your overclock stability?
If so, what version do you use and which test do you guys use? Is it the CPU OCCT or the CPU Linpack with AVX, and for how long before you consider it stable?
Thanks, I'm trying to work on my own 7700k overclock at the minute!
fattslice - Thursday, July 27, 2017 - link
I hate to say it, but there is clearly something very wrong with your 7700K test system. Using the same settings for Tomb Raider, a GTX 1080 11Gbps, and a 7700K at stock settings, I am seeing about 40-50% better fps than you are getting on all three benchmarks - 213 avg for Mountain Peak, 163 for Syria, and 166 for Geothermal Valley. This likely is not limited to just RotTR, as your other games have impossible results - technically the i5s cannot beat their respective i7s, as they are slower and have less cache. How this was not caught is quite disturbing.
welbot - Tuesday, August 1, 2017 - link
The test was run with a 1080, not a 1080 Ti. Depending on resolution, Tis can outperform the 1080 by 30%+. Could well be why you see such a big difference.
Funyim - Thursday, August 10, 2017 - link
No. I'm pretty sure the 7700k used was broken. It worries me as well this was posted without further investigation. Basically invalidates all benchmarks.