Earlier this year AMD announced its return to the high-end server market with a series of new EPYC processors. Inside is AMD’s new Zen core, up to 32 of them, with the focus on the major cloud providers. We were the first media outlet to publish a review of EPYC, which showed AMD to be highly competitive in an Intel-dominated x86 landscape. One concern over the launch period was the wider availability of EPYC: it was clear that AMD was announcing the product very early in its distribution cycle.

At Supercomputing 17 (SC17) this week, the annual high-performance computing conference, AMD is announcing that it has ramped production of the processors, with several OEMs and distributors ready and system integrators expanding their portfolios.

OEMs with EPYC-enabled systems at Supercomputing this week include ASUS, BOXX, GIGABYTE, HPE (Hewlett Packard Enterprise), Penguin Computing, Supermicro and Tyan. Each company is targeting certain niches: ASUS for HPC and virtualization with its RS720A-E9 and RS700A-E9 1U/2U servers; BOXX combining EPYC with Radeon Instinct for multi-GPU compute solutions and deep learning; GIGABYTE with rackmount servers; HPE for complex workloads; and Supermicro moving from tower form factors to 1U, 2U and 4U for HPC and storage.

“The industry’s leading system providers are here at SC17 with a full breadth of AMD-based solutions that deliver outstanding compute capability across HPC workloads,” said Forrest Norrod, SVP and GM of Enterprise, Embedded and Semi-Custom, AMD.

We had a meeting with AMD for this launch. Normally OEM systems coming to market might not light the news on fire, but there was an interesting line worth mentioning. AMD stated that after the initial phase of ensuring the cloud providers’ systems were ready to go, there has been a steep production ramp for EPYC processors, and supply is now ready to meet OEM requirements. We were told that all the SKUs announced at launch are in production as well, all the way down to the 8-core and the 1P parts, so OEMs interested in the full stack can now flex their product-range muscles.

AMD also wheeled out the Inventec P47 system that it announced at launch, with a single EPYC processor and four Radeon Instinct MI25 GPUs. In partnership with AMAX, 47 of these systems were put together into a single rack, capable of one PetaFLOP of single precision in a turnkey solution. AMAX is taking pre-orders for the rack from today, for delivery in Q1.
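As a back-of-envelope check on the PetaFLOP figure, here is a quick sketch using assumed numbers: the Radeon Instinct MI25’s rated ~12.3 TFLOPS of peak FP32 throughput and four GPUs per P47 node, ignoring any contribution from the EPYC CPU itself.

```python
# Back-of-envelope check of the "one PetaFLOP" claim for a rack of P47 nodes.
# Assumptions (not from the announcement itself): MI25 peak FP32 ~12.3 TFLOPS,
# four MI25 GPUs per node, CPU FLOPS ignored.

MI25_FP32_TFLOPS = 12.3
GPUS_PER_NODE = 4

tflops_per_node = MI25_FP32_TFLOPS * GPUS_PER_NODE   # GPU throughput per node
nodes_for_petaflop = 1000 / tflops_per_node          # nodes needed for 1000 TFLOPS

print(f"Per node: {tflops_per_node:.1f} TFLOPS")
print(f"Nodes for ~1 PFLOP: {nodes_for_petaflop:.1f}")
```

At roughly 49 TFLOPS per node, it takes about 20 nodes to reach 1 PFLOP of FP32 under these assumptions.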

ROCm 1.7 Gets Multi-GPU Support, Support for TensorFlow and Caffe

AMD also announced that its open compute platform for graphics, ROCm, is being updated to version 1.7. With this revision, ROCm adds support for multiple GPUs on the latest hardware, as well as support for the TensorFlow and Caffe machine learning frameworks through the MIOpen libraries. This should be bigger news, given the prevalence of TensorFlow in machine learning. AMD also stated that ROCm 1.7 delivers additional math libraries and software development support, to provide a ‘foundation for a heterogeneous computing strategy’.

Comments

  • Topweasel - Monday, November 13, 2017 - link

    No higher clock speeds. Look at Intel and its LCC/HCC/XCC dies. Within 100 MHz or so, power usage is generally pretty close across the range of offerings. Meaning the 6c SL-X is penalized power-usage-wise because it's a 10c die. The 12c suffers because it's an 18c die.

    For that reason even an 8c EPYC, while using less power than a 32c EPYC, is still going to use significantly more power than an 8c Ryzen. It will produce more heat and require more cooling. EPYC is limited by its use of 4 dies, not by limitations of the dies themselves.

    That is why AMD doesn't aim the 8c or 16c EPYC at those types of use cases. Those CPUs are really for high-bandwidth or extreme-attachment configurations. It's about the memory channels and PCIe lanes, not the core count or clock speed.
  • ddriver - Monday, November 13, 2017 - link

    Obviously an EPYC chip will have 4 times the IO transistors - memory controllers, PCIe and whatnot.

    However, those don't use anywhere near as much power as even a single CPU core at load.

    The power and heat of an 8-core EPYC will be a tad higher than those of an 8-core Ryzen, but then again, an 8-core Ryzen is below 100 watts.

    Clearly there is far more headroom; as I mentioned above, even the 24-core EPYC is clocked higher, with the same IO as the 8-core and three times the active cores.

    It is not about power. It is not about cooling. Probably some half-mad, half-insane internal segmentation strategy by AMD.

    IMO it is a big missed opportunity. An 8-core 3.5 GHz EPYC would be quite the nice deal and would fill a significant and currently gaping hole in their portfolio. Currently AMD has nothing in that space that offers decent performance for workloads that don't require or scale to a high thread count. It is a popular market and AMD is simply not addressing it, even though it easily could.
  • msroadkill612 - Monday, November 13, 2017 - link

    It's a cheap entry level to their range. I am sure they would rather sell you more cores & better clocks thrown in with the deal.

    Considering the total likely spend on an EPYC system, it seems a false economy not to spend at least US$1000 on the CPU.

    As I recall, the 24-core 1P looked good at ~$1050.
  • msroadkill612 - Monday, November 13, 2017 - link

    PS - I bet the 8-core OCs well, but who OCs a server, right? :)

    It could still appeal to some smart cheapskate sysadmins.

    Some apps are mainly concerned with lanes rather than performance.

    RAM is actually cheaper because you can use more (up to 16x) of the cheaper, smaller-capacity sticks on 8-channel EPYC.
  • ddriver - Monday, November 13, 2017 - link

    There are a couple of good reasons:

    1 - some software is licensed per core, so having more but slower cores is not an attractive prospect

    2 - some software doesn't scale that well, defeating the purpose of having many cores, as you don't get any extra performance

    As I mentioned, that is actually a significant market, which AMD is currently leaving to Intel, even though it could easily address it. Makes no sense for such a cash-starved company. Makes no sense for any company when you think about it.
  • beginner99 - Monday, November 13, 2017 - link

    AMD has a very steep curve to climb in its AI / deep learning endeavour. The same can be said about Intel, but Intel theoretically has the needed resources. The thing is software. In my field of work, everything is very CUDA-oriented. Some stuff supposedly also works with OpenCL, but if you look deeper it really doesn't, or requires an esoteric setup. Even if AMD has the much better offer with their cards, it's a no brainer that we buy NV. And whether the card costs 2k or 10k doesn't matter if the cheaper one doesn't work with your software.
  • tuxRoller - Monday, November 13, 2017 - link

    "Even if AMD has the much better offer with their cards, it's a no brainer that we buy NV."

    That's why everyone who isn't invested in NVDA should be hopeful when AMD, or whoever, attempts to port frameworks to something like OpenCL/SYCL.
  • mdriftmeyer - Monday, November 13, 2017 - link

    ROCm already has Multi-GPU Support. They've just extended that to TensorFlow and Caffe.
  • dynamis31 - Tuesday, November 14, 2017 - link

    Coincidence? HPE has the Gen10 DL385 upcoming. Goodbye Opteron, hello EPYC.
  • Kon21 - Wednesday, November 15, 2017 - link

    Hello Ian Cutress. One correction in your article, "In partnership with AMAX, 47 of these systems were put together into a single rack, capable of one PetaFLOP of single precision in a turnkey solution." That should be 20x P47 servers, not 47.
