SKUs and Pricing

The AMD Opteron 6300 series carries over the specifications of the 6200 series; the only changes are slightly higher clock speeds and minor architectural improvements. So how much does AMD charge for that?

AMD Opteron 6300 versus 6200 SKUs

| Opteron 6300 | Modules/Cores | TDP | Clock (GHz) | Price | Opteron 6200 | Modules/Cores | TDP | Clock (GHz) | Price |
|--------------|---------------|-----|-------------|-------|--------------|---------------|-----|-------------|-------|
| High Performance | | | | | High Performance | | | | |
| 6386 SE | 8/16 | 140W | 2.8/3.2/3.5 | $1392 | | | | | |
| | | | | | 6284 SE | 8/16 | 140W | 2.7/3.1/3.4 | $1265 |
| | | | | | 6282 SE | 8/16 | 140W | 2.6/3.0/3.3 | $1019 |
| Midrange | | | | | Midrange | | | | |
| 6380 | 8/16 | 115W | 2.5/2.8/3.4 | $1088 | | | | | |
| 6378 | 8/16 | 115W | 2.4/2.7/3.3 | $867 | 6278 | 8/16 | 115W | 2.4/2.7/3.3 | $989 |
| 6376 | 8/16 | 115W | 2.3/2.6/3.2 | $703 | 6276 | 8/16 | 115W | 2.3/2.6/3.2 | $788 |
| | | | | | 6274 | 8/16 | 115W | 2.2/2.5/3.1 | $639 |
| | | | | | 6272 | 8/16 | 115W | 2.0/2.4/3.0 | $523 |
| 6348 | 6/12 | 115W | 2.8/3.1/3.4 | $575 | 6238 | 6/12 | 115W | 2.6/2.9/3.2 | $455 |
| 6344 | 6/12 | 115W | 2.6/2.9/3.2 | $415 | 6234 | 6/12 | 115W | 2.4/2.7/3.0 | $377 |
| High clock / budget | | | | | High clock / budget | | | | |
| 6328 | 4/8 | 115W | 3.2/3.5/3.8 | $575 | | | | | |
| | | | | | 6220 | 4/8 | 115W | 3.0/3.3/3.6 | $455 |
| 6320 | 4/8 | 115W | 2.8/3.1/3.3 | $293 | 6212 | 4/8 | 115W | 2.6/2.9/3.2 | $266 |
| 6308 | 2/4 | 115W | 3.5 | $501 | | | | | |
| Power Optimized | | | | | Power Optimized | | | | |
| 6366 HE | 8/16 | 85W | 1.8/2.3/3.1 | $575 | 6262 HE | 8/16 | 85W | 1.6/2.1/2.9 | $523 |

The top models with slightly increased clock speeds (+100MHz) are also slightly more expensive than the previous models, so you're essentially paying more for more performance. More interesting are the midrange chips: the Opteron 6378 and 6376 are slightly faster than the 6278 and 6276 (same clock speeds, but with the architectural improvements), and they come with an 11-12% lower price.

Let's compare the AMD chips with Intel's offerings.

AMD vs. Intel 2-socket SKU Comparison

| Xeon E5 | Cores/Threads | TDP | Clock (GHz) | Price | Opteron | Modules/Cores | TDP | Clock (GHz) | Price |
|---------|---------------|-----|-------------|-------|---------|---------------|-----|-------------|-------|
| High Performance | | | | | High Performance | | | | |
| 2680 | 8/16 | 130W | 2.7/3.0/3.5 | $1723 | | | | | |
| 2665 | 8/16 | 115W | 2.4/2.8/3.1 | $1440 | 6386 SE | 8/16 | 140W | 2.8/3.2/3.5 | $1392 |
| 2650 | 8/16 | 95W | 2.0/2.4/2.8 | $1107 | | | | | |
| Midrange | | | | | Midrange | | | | |
| | | | | | 6380 | 8/16 | 115W | 2.5/2.8/3.4 | $1088 |
| 2640 | 6/12 | 95W | 2.5/2.5/3.0 | $885 | 6378 | 8/16 | 115W | 2.4/2.7/3.3 | $867 |
| | | | | | 6376 | 8/16 | 115W | 2.3/2.6/3.2 | $703 |
| 2630 | 6/12 | 95W | 2.3/2.3/2.8 | $639 | | | | | |
| | | | | | 6348 | 6/12 | 115W | 2.8/3.1/3.4 | $575 |
| 2620 | 6/12 | 95W | 2.0/2.0/2.5 | $406 | 6344 | 6/12 | 115W | 2.6/2.9/3.2 | $415 |
| High clock / budget | | | | | High clock / budget | | | | |
| 2643 | 4/8 | 130W | 3.3/3.3/3.5 | $885 | | | | | |
| 2609 | 4/4 | 80W | 2.4 | $294 | 6320 | 4/8 | 115W | 2.8/3.1/3.3 | $293 |
| 2637 | 2/4 | 80W | 3.0/3.5 | $885 | 6308 | 2/4 | 115W | 3.5 | $501 |
| Power Optimized | | | | | Power Optimized | | | | |
| 2630L | 6/12 | 60W | 2.0/2.0/2.5 | $662 | 6366 HE | 8/16 | 85W | 1.8/2.3/3.1 | $575 |

Our Xeon E5-2600 review showed that the 8-core Xeon E5 was between 12% and 40% faster than the 8-module Opteron at more or less the same clocks (Xeon E5-2660 at 2.2GHz versus Opteron 6276 at 2.3GHz). AMD's own benchmarks seem to indicate that the new Opteron is 5 to 15% faster at the same clocks, so a 6386 SE at 2.8GHz might be able to stay close to the 2.4GHz Xeon E5-2665, but the higher TDP does not make it very attractive. The 2.8GHz 6386 SE might make sense for some HPC customers, though: if you can recompile your code (and use FMA), AMD claims that a 2.5GHz Opteron 6380 is just as fast as a 2.9GHz Xeon E5-2690.

AMD may offer pretty good value in the midrange of the server market. We measured a 7% to 18% advantage (in the most important applications) for the Xeon with 12 threads compared to the Interlagos CPU with 16 integer cores. The 5% to 15% higher single-threaded performance of the Opteron 6378 (compared to the 6278) might be good enough to beat the E5-2640 in some benchmarks. Of course, we still have to see how well the new Opteron fares in our power consumption measurements.

AMD also has a few very nice budget offerings: a 2.8GHz to 3.3GHz 6320 with 8 integer cores sounds good next to the 4 cores of the E5-2609 at 2.4GHz in a market where performance per dollar is more important than performance per watt.

AMD fails to convince in the low power market. An 8-module chip at 1.8GHz will not be able to beat a 2GHz Xeon E5-2630L that consumes less power: the performance per watt of the Intel chip will be significantly better, and the performance alone will be about 15 to 45% higher.

So far...

Aside from the low power offering, the Opteron 6300 series looks quite good. The specifications and pricing of the 6376 and 6378 in particular are attractive, and those chips cater to the bulk of the market. But the benchmarks AMD presents are hardly convincing: the SPECjbb2005 test is easy to inflate, while the recompiled HPC benchmarks are interesting to a small niche of the market but useless to the rest of us. The jury is thus still out on what Abu Dhabi will mean for AMD servers, but we hope to have a verdict in the coming weeks.

Comments

  • Notperk - Monday, November 5, 2012 - link

    Wouldn't it be better to compare these CPUs to Intel's E7 series enterprise server CPUs? I ask this, because of how technically an Opteron 6386 SE is two CPUs in one. Therefore, two of these would actually be four CPUs and would be a direct competitor (at least in terms of class) to four E7-4870s. If you went even further, four of those Opterons would be a competitor to eight E7-8870s. I understand that, performance wise, these are more similar to the E5s, but it just makes more sense to me to place them higher as enterprise server CPUs.
  • MrSpadge - Monday, November 5, 2012 - link

    It's actually the other way around: there may be 2 dies inside each CPU, but even combined they get less work done than the Intel chips in most situations. However, comparing a 4-socket Opti system with a 2-socket Intel system, which cost approximately the same to purchase, can get very interesting: massive memory capacity and bandwidth, lots of threads for integer throughput, and quite a few FPUs. With the drawback of much higher running costs through electricity, of course.
  • leexgx - Tuesday, November 6, 2012 - link

    Happy that the reviewer correctly got the modules/cores right (the integer cores are more like Hyper-Threading, but not quite).

    In any case, AMD's module count should be compared to Intel's core count. (AMD should be marketing them the same way: 4-module CPUs with core assist, which are slower or the same as a Phenom X4 in real world use. It's like saying an i7 is an 8-core CPU when it's about the same speed as an i5 that lacks HT.)
  • thebluephoenix - Tuesday, November 6, 2012 - link

    E7 is Nehalem, old technology. The E5-2687W and E5-2690 are the actual competition (~double a 2600K vs. ~double an FX-8350)
  • JohanAnandtech - Tuesday, November 6, 2012 - link

    Minor nitpick: E7 is Westmere, an improved Nehalem.

    But E5 is indeed the real competition. E7 is less about performance/watt and more about RAS and high scalability (core counts of 40, up to 80 threads)
  • alpha754293 - Monday, November 5, 2012 - link

    I don't know if I would say that. Course, I'm biased because I'm somewhat in HPC. But I think that the HPC will also give an idea of how well (or how poorly) a highly multi-threaded/multi-processor capable/aware application is going to perform.

    In some HPC cases, having more integer cores is probably going to be WORSE since it's still fighting for FPU resources. And that running it on more processors isn't always necessarily better either (higher intercore communication traffic).
  • MrSpadge - Monday, November 5, 2012 - link

    If you compare a 4-socket Opti to a 2-socket Intel (comparable purchase cost) you can get massive memory bandwidth, which might be enough to tip the scale in the Opti's favor in some HPC applications. They need to profit from many cores and require this bandwidth, though.

    Personally, for general HPC jobs I prefer fewer cores with higher IPC and clock speed (i.e. Intel), as they're more generally useful.
  • alpha754293 - Friday, November 9, 2012 - link

    I can tell you from experience that it really depends on the type of HPC workload.

    For FEA, if you can store the large matrices in the massive amount of memory that the Opterons can handle (up to 512 GB for a quad-socket system) - it can potentially help*.

    *You have to disable swap so that you don't get bottlenecked by the swap I/O performance.

    Then you'd really be able to rock 'n roll being able to solve the matrices entirely in-core.

    For molecular dynamics though - it's not really that memory intensive (compared to structural mechanics FEA) but it's CPU intensive.

    For CFD, that's also very CPU intensive.

    And even then it also depends too on how the solvers are written and what you're doing.

    CFD - you need to pass the pressure, velocity, and basically the information/state information about the fluid parcel from one cell to another; so if you partition the model at the junction and you need to transfer information from one cell on one core on one socket to another core sitting on another CPU sitting in another physical socket - then it's memory I/O limited. And most commercial CFD codes that I know of that enables MPI/MPP processing - they actually do somewhat of a local remesh at the partition boundaries so they actually create extra elements just to facilitate the data information transfer/exchange (and to make sure that the data/information is stable).

    So there's a LOT that goes into it.

    Same with crash safety simulations and explicit dynamics structural mechanics (like LS-DYNA) because that's an energy solver, so what happens to one element will influence what happens at your current element and that in turn will influence what happens at the next element. And for LS-DYNA, *MPP_DECOMPOSITION_<OPTION> can further tell how you want the problem to be broken down specifically (and you can do some pretty neat stuff with it) in order to make the MPI/MPP solver even more efficient.

    If you have a problem where what happens with one doesn't really have that much of an impact on another element (such as fatigue analysis - done at the finite element level) - you can process all of the elements individually, so having lots of cores means you can run it at lot faster.

    But for problems where there's a lot of "bi-directional" data/communication (hypersonic flow/shock wave for example) - then I THINK (if I remember correctly), the communication penalty is something like O(n^2) or O(n^3). So the CS side to an HPC problem is trying to optimize between these two. Run as many cores as possible, with as little communication as possible (so it doesn't slow you down), as fast as possible, as independently possible, and pass ONLY the information you NEED to pass along, WHEN you need to pass it along and try and do as much of it in-core as possible.

    And then to throw a wrench into that whole thing - the physics of the simulations basically is a freakin' hurricane on that whole parade (the physics makes a lot of that either very difficult or near impossible or outright impossible).
  • JohanAnandtech - Monday, November 5, 2012 - link

    I would not even dream of writing that! HPC software can be so much more fun than other enterprise software: no worries about backups or complex high availability setups. Just focusing on performance and enjoying the power of your newest chip.

    I was talking about the HPC benchmarks that AMD reports. Not all HPC benchmarks can be recompiled and not all of them will show such good results with FMA. Those are very interesting, but they only give a very limited view.

    For the rest: I agree with you.
