Linux Performance 

C-Ray: link

C-Ray is a simple ray-tracing program that focuses almost exclusively on processor performance rather than DRAM access. The test in Linux-Bench renders a highly complex scene, offering a large, scalable workload.

Linux-Bench c-ray 1.1 (Hard)
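As a sketch of the kind of arithmetic a ray tracer like C-Ray spends its cycles on, here is a hypothetical ray-sphere intersection test in Python (C-Ray itself is written in C; the function and names below are purely illustrative, not taken from its source):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest intersection
    with a sphere, or None if the ray misses it entirely."""
    # Substitute the ray o + t*d into the sphere equation |p - c|^2 = r^2
    # and solve the resulting quadratic in t.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # no real roots: the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearer of the two hits
    return t if t > 0 else None
```

Per-pixel work like this is almost entirely register and ALU bound, which is why C-Ray scales with core count and frequency rather than memory bandwidth.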

NAMD, Scalable Molecular Dynamics: link

Developed by the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign, NAMD is a set of parallel molecular dynamics codes designed for extreme parallelization, up to and beyond 200,000 cores. The reference paper detailing NAMD has over 4000 citations, and our testing runs a small simulation where the number of calculation steps per unit time is the output metric.

Linux-Bench NAMD Molecular Dynamics
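Molecular dynamics codes advance atomic positions with a time-stepping integrator; the velocity-Verlet scheme is the standard choice. Below is a minimal, hypothetical single-particle sketch of that scheme (NAMD's real integrator handles millions of interacting atoms in parallel, so this only illustrates the core update):

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Integrate one particle with the velocity-Verlet scheme:
    update position, recompute the force, then update velocity
    with the average of the old and new forces."""
    f = force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * (f / mass) * dt * dt
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return x, v
```

Because every simulation step depends on the previous one, the "steps per unit time" metric directly reflects how fast the hardware can churn through this force-and-update loop.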

NPB, Fluid Dynamics: link

Aside from LINPACK, there are many other ways to benchmark supercomputers on various types of mathematical processing. The NAS Parallel Benchmarks (NPB) are a set of small programs originally designed at NASA to test its supercomputers on fluid dynamics simulations, useful for modeling airflow and informing aircraft design.

Linux-Bench NPB Fluid Dynamics
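The flavor of these kernels can be illustrated with a toy Jacobi sweep, the kind of memory-bound stencil update at the heart of several NPB-style solvers (a simplified sketch for illustration only, not actual NPB code):

```python
def jacobi_step(grid):
    """One Jacobi sweep on a 1-D grid with fixed endpoints: each interior
    point becomes the average of its neighbors. Performance of sweeps like
    this is dominated by streaming memory traffic, not raw ALU speed."""
    new = grid[:]
    for i in range(1, len(grid) - 1):
        new[i] = 0.5 * (grid[i - 1] + grid[i + 1])
    return new

def solve_laplace(grid, iters):
    """Iterate Jacobi sweeps toward the steady-state solution."""
    for _ in range(iters):
        grid = jacobi_step(grid)
    return grid
```

Each sweep reads the whole grid and writes a new one, so once the working set exceeds cache, extra CPU frequency buys little — consistent with the cache/memory sensitivity noted in the conclusions below.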

Redis: link

Many online applications rely on key-value caches and data structure servers to operate. Redis is an open-source, scalable web technology with a strong developer base, but it leans heavily on memory bandwidth as well as CPU performance.

Linux-Bench Redis Memory-Key Store, 1x

Linux-Bench Redis Memory-Key Store, 10x

Linux-Bench Redis Memory-Key Store, 100x
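For illustration, the key-value data model that a Redis benchmark hammers can be mimicked with a toy in-memory store (Redis itself is a networked server written in C; this class is a purely hypothetical sketch of the GET/SET/INCR semantics, not its implementation):

```python
class TinyKV:
    """A toy in-memory key-value store mirroring the basic commands
    a Redis benchmark exercises at high request rates."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        """Store a value under a key; Redis replies "OK" on success."""
        self._data[key] = value
        return "OK"

    def get(self, key):
        """Fetch a value, or None if the key does not exist."""
        return self._data.get(key)

    def incr(self, key):
        """Atomically increment an integer counter, creating it at 0."""
        self._data[key] = int(self._data.get(key, 0)) + 1
        return self._data[key]
```

Each operation is a tiny hash-table lookup, so throughput at the 10x and 100x load multipliers depends on how quickly the memory subsystem can serve millions of small, scattered accesses.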

Conclusions on Linux-Bench

Our Linux testing actually comprises ten tests, but we chose the most important to publish here (the other results can be found in Bench). Here we see some slight differences when it comes to overclocks: the NPB tests rely on multi-dimensional matrix solvers, which are often more cache/memory dependent, so a higher-frequency processor doesn't always help. With Redis, we are wholly cache/memory limited. The other results are in line with CPU performance deltas over the overclock range.

103 Comments

  • Oxford Guy - Sunday, August 30, 2015 - link

    Who is the one trolling?
  • SanX - Saturday, August 29, 2015 - link

If Intel, by moving to 14nm (with its potentially half-size die area versus 22nm), made mainstream octacores overclockable like the 4770K/4790K, I'd be interested. Otherwise it is hard to see any progress at all. Shame, when even cellphones are octacores.
  • Oxford Guy - Sunday, August 30, 2015 - link

    You can get AMD FX to run at over 4.5 GHz with generally only decent cooling. As for seeing progress, overclocking may be less and less viable as process nodes shrink.
  • Oxford Guy - Sunday, August 30, 2015 - link

    We seem to be seeing this borne out in the review, too. Voltages remain too high. Then, when that fails to do the trick, reviewers try to claim that unstable settings are good enough.
  • stephenbrooks - Sunday, August 30, 2015 - link

    I wonder how much of this is choice of the "optimum" frequency for the process? I think Intel only has two processes per node, a standard and a low-voltage/low-power one. If the standard one has been stretched to pretty low powers (10W?), the top end suffers from high voltages because it's not optimal there. They'd end up having to commission a 3rd "high frequency" process to get these top end chips right. There's no incentive until they have a real competitor there, but at this glacial rate of progress perhaps AMD really will catch up in a few years.
  • SanX - Saturday, August 29, 2015 - link

Isn't the true reason behind this zero progress that, having no competition at all, Intel wants us to pay $3-4K for an octacore chip which probably costs just $100-200 to produce?
  • MrSpadge - Thursday, September 3, 2015 - link

Of course they'd want us to pay $3-4k for an octacore, but they don't expect us to do so. That's why they sell it for $1k.
  • IUU - Sunday, August 30, 2015 - link

    So, obviously it is not worth the benefit and the risk you take to overclock it. Which is somewhat understandable. Overclocking it as a hobby is always nice, naturally.
    Of course, real progress would be to be able to overclock to 5.5GHz, especially as 4.8 - 5.0GHz has been achievable for several years now. But you can't force nature.
    On the other hand, I am just curious about the theoretical FP performance and the impact of the new instruction sets on it. The fact that the market chooses to ignore them does not negate the potential of these processors.
    I'm also curious as to how well they run narrow AI apps, or multitask several "heavy" apps. Sorry, but the fixation on office apps, "professional apps", and first-person shooter (or moving camera) games doesn't cut it anymore.
    Would you have been so interested to see how quickly your PC played Pac-Man in 2000? This is something like it.
    I myself have witnessed the effect of the new instruction sets, up to Haswell, on niche apps. And I have to say, impressive is an understatement. We are talking a 40% improvement clock for clock. I just hope programmers will be in a position to take advantage of this untapped potential some day.
    Also, algorithmic optimizations do perform miracles. I have seen them increase performance on the same app and chip by up to 50%, instruction sets excluded.
    And finally, so often seeing sites examine the performance of these chips in browsers is, in my opinion, the peak of decadence (no matter how many useless addons we added or how unnecessarily clunky we made the code).
  • callous - Sunday, August 30, 2015 - link

    I don't think most people properly test their overclocks. You always need to run Prime95 with some 3D component looping for at least 24 hours. Prime95-stable does not mean 3D-stable. Only by testing both at the same time — Prime95 plus four instances of the Heaven benchmark — can you exercise most of the components inside the CPU while the system is under stress.

    I would go further and say that if you can do this for 24 hours, then you should run some VMware and see if bad things happen, like BSODs or weird crashes of programs running in the background.
  • sonny73n - Sunday, August 30, 2015 - link

    I got excited when I first saw the title of this article, but I'm a little disappointed now after finishing it.

    1.52V for only 4.8GHz? Does 14nm need that much voltage? My 32nm 2500K only needs 1.42V for 48x. What about temps under load? Regardless of what settings Handbrake uses or whether it encodes 4K60fps, if it didn't cause a BSOD with the CPU at stock speed but BSODs with the CPU OCed, then the OC isn't stable. Since you OCed this chip with the two best OCing mobo brands, it's the chip's fault. Why did you use HB for a stability test? P95 for 6 hours would do. Besides, IBT runs about 8°C hotter than P95, but HB runs about 10°C cooler than P95. HB is hardly a stress test.

    I might've missed it but did you have power saving features disabled or enabled? If you let the MB auto OC, what are the settings?
