Remember our review of Facebook's first OpenCompute Server? Facebook designed a server for their own purposes, but quickly released all the specs to the community. The result was a sort of "Open-source" or rather "Open-specifications" and "Open-CAD" hardware. The idea was that releasing the specifications to the public would advance and improve the platform quickly. The "public" in this case is mostly ODMs, OEMs and other semiconductor firms.

The cool thing about this initiative is that the organization managed to convince Intel and AMD to standardize certain aspects of the hardware. Yes, they collaborated. The AMD and Intel motherboards will have the same form factor, mounting holes, management interface, and so on. The ODM/OEM only has to design one server: the AMD board can be swapped out for the Intel one and vice versa. The Mini-Mezzanine Slot, the way the power supply is connected, and more are all standardized.

AMD is the first to deliver this new "platform", and contrary to Intel's current customized version of OpenCompute 2.0, it is targeted at the mass market. The motherboard is designed and produced by several partners (Tyan, Quanta) and based upon the specifications of large customers such as Facebook. But again, this platform is not about the Facebooks of this world; the objective is to lower the power, space, and cost of traditional servers. So although Avnet and Penguin Computing will be the first integrators to offer complete server systems based upon this spec, there is nothing stopping Dell, HP, and others from doing the same. The motherboard design can be found below.

The T shape allows the PSU to be placed on the left, the right, or on both sides (redundant PSUs). When placed in a case, the PSU sits behind the rest of the hardware and thus does not heat up the rest of the chassis, as you can see below.

The voltage regulation is capable of running both EE and SE CPUs, covering a range of 85W to 140W TDP. The voltage regulation disables several phases when they are not needed in order to save power.
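To illustrate the idea, here is a purely conceptual sketch of phase shedding: enable only as many VRM phases as the present load requires and avoid the fixed switching losses of the rest. The per-phase current rating, core voltage, and loss figures are assumptions for the sake of the example, not figures from the AMD Open 3.0 spec.

```python
# Conceptual illustration of VRM phase shedding. The per-phase current rating,
# core voltage and switching-loss numbers are assumptions, not AMD's specs.
import math

PHASE_MAX_CURRENT_A = 25.0   # assumed continuous current rating per phase
SWITCHING_LOSS_W = 1.5       # assumed fixed switching loss per active phase
VCORE = 1.2                  # assumed core voltage

def active_phases(cpu_power_w: float, total_phases: int = 6) -> int:
    """Enable only as many phases as the current load requires."""
    load_current = cpu_power_w / VCORE  # I = P / V
    return min(total_phases, max(1, math.ceil(load_current / PHASE_MAX_CURRENT_A)))

def switching_loss_saved_w(cpu_power_w: float, total_phases: int = 6) -> float:
    """Fixed switching losses avoided by parking the unused phases."""
    return (total_phases - active_phases(cpu_power_w, total_phases)) * SWITCHING_LOSS_W

if __name__ == "__main__":
    for load_w in (20, 85, 140):  # near-idle, 85W TDP and 140W TDP operating points
        print(f"{load_w:>3} W load: {active_phases(load_w)} phases active, "
              f"~{switching_loss_saved_w(load_w):.1f} W of switching loss avoided")
```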

Servers can be 1U, 1.5U, 2U or 3U high. This server platform is highly modular, and the solutions built upon it can be widely different. AMD sees three different target markets.

Motherboards will not offer more than six SATA ports, but with the help of PCIe cards you can get up to 35 SATA/SAS drives in there to build a storage server. The HPC market demands exactly the opposite: in most cases CPU power and memory bandwidth matter most. There will be an update around late February that will support faster 1866 MHz DIMMs (1 DIMM per channel).
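As a rough illustration of why the DIMM speed bump matters for HPC, the sketch below computes theoretical peak memory bandwidth (transfer rate times the 8-byte channel width times the channel count). The four-channels-per-socket figure is our assumption based on a dual-socket G34 Opteron 6300 configuration, not a confirmed detail of the boards.

```python
# Back-of-the-envelope peak memory bandwidth. DDR3 moves 8 bytes per channel
# per transfer; four channels per G34 socket is our assumption for a
# dual-socket Opteron 6300 board.

def peak_bandwidth_gbs(transfer_rate_mts: int, channels: int, bytes_per_transfer: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s (1 GB = 10^9 bytes)."""
    return transfer_rate_mts * 1_000_000 * bytes_per_transfer * channels / 1e9

if __name__ == "__main__":
    for rate in (1600, 1866):
        print(f"DDR3-{rate}: {peak_bandwidth_gbs(rate, channels=4):.1f} GB/s per socket, "
              f"{peak_bandwidth_gbs(rate, channels=8):.1f} GB/s across two sockets")
```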

Our first impression is that this is a great initiative, building further upon the excellent ideas on which OpenCompute was founded. It should bring some of the cost and power savings that Facebook and Google enjoy to the rest of us. The fact that the specifications are open and standardized should definitely result in some serious cost savings, as vendors cannot lock you in like they do in the traditional SAN and blade server market.

We are curious how the final management hardware and software will turn out. We don't expect it to be at the "HP iLO Advanced" level, but we hope it is as good as an "Intel barebones server" management solution: being able to boot directly into the BIOS, a solid remote management solution, and so on. The previous out-of-band management solution was very minimal, as the first OpenCompute platform was mainly built for "hyperscale" datacenters.
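For reference, the baseline we have in mind maps onto standard IPMI functionality, which the sketch below drives with ipmitool from Python. The BMC hostname and credentials are placeholders, and whether AMD Open 3.0's management controller exposes exactly these IPMI-over-LAN features is our assumption, not something confirmed by the spec.

```python
# Sketch of baseline out-of-band management via standard IPMI (ipmitool).
# BMC address and credentials are placeholders; this shows the kind of
# "Intel barebones server"-class remote control we hope for, not a confirmed
# description of the AMD Open 3.0 management stack.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "bmc.example.com", "-U", "admin", "-P", "secret"]

def ipmi(*args: str) -> str:
    """Run a single ipmitool command against the BMC and return its output."""
    return subprocess.run(BMC + list(args), capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))   # query remote power state
    ipmi("chassis", "bootdev", "bios")          # force the next boot into BIOS setup
    ipmi("chassis", "power", "reset")           # remotely reset the node
```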

The specifications of AMD Open 3.0 are available here.

Comments

  • PCTC2 - Thursday, January 17, 2013 - link

    It is very reminiscent of the shape of Intel/Quanta's boards.

    On a side note, the last link has a %20 in front of the http.
  • JarredWalton - Thursday, January 17, 2013 - link

    Fixed link - thanks!
  • speculatrix - Thursday, January 17, 2013 - link

    An open blade standard would be great: being able to buy blades and chassis from multiple vendors, you could mix some cheap-to-run, high-efficiency ARM A15 blades with some high-end 56xx processors. The network arrangement would allow an easy upgrade from low-cost 10/100/1000 to CX4, SFP+, etc.

    Of course, the fans would also have to be modular so as to right-size the cooling and be upgradeable.
  • JohanAnandtech - Thursday, January 17, 2013 - link

    Not to mention that it would be great to get some real modular network switches. In most cases you can choose between overpriced and ridiculously overpriced switches.
  • Assimilator87 - Thursday, January 17, 2013 - link

    Is this standard limited to 2P? There are plenty of 2P boards that're EATX or SSI CEB, but when I was looking into a 4P Folding rig, the boards were substantially more expensive and they were all custom form factors. A standardized 4P layout would be awesome for DIYers.
  • Kevin G - Thursday, January 17, 2013 - link

    It would have been nice to have seen the location of the SATA ports closer to the front of the motherboard. Running cables over the motherboard and along the side of the chassis just doesn't seem to be optimal.

    It would have been even nicer if they created a standardized back plane for hot swap usage that'd scale in terms of drive count while providing the basic front panel IO (VGA, RS-232, power/reset buttons etc.)

    I'm also curious if there would be a four socket G34 solution in this format. Dropping down to one DIMM per channel I think would create enough board space for the clever engineer. Four socket LGA 1567 I think would be possible with the usage of risers for the DIMM slots as several other LGA 1567 solutions utilize.
  • Beenthere - Thursday, January 17, 2013 - link

    As usual AMD is customer-centric and not the bully that InHell is. Good job by AMD as they have delivered again, unlike the unscrupulous talking heads who have been convicted umpteen times for violation of anti-trust laws and several times for U.S. tax evasion.
  • phoenix_rizzen - Thursday, January 17, 2013 - link

    The 3U mobo, with 24 DIMMs, dual CPUs, and 4 low-profile SAS controllers (like the LSI 9207-8e) stuffed into a 2U chassis with a couple of SSDs installed locally, would make a great head unit to a storage server. Connect a bunch of JBOD chassis stuffed with drives to the SAS controllers, and away you go.

    Don't know why they would limit the number of PCIe slots in the 2U chassis. There's plenty of half-height/half-length cards out there.
  • Ktracho - Thursday, January 17, 2013 - link

    Nowadays I don't think the HPC market would want only high-powered CPUs in their power-efficient compute servers. You can dramatically increase performance per watt by adding GPUs. Do a couple GPUs fit horizontally, especially if you have just one power supply?
  • JohanAnandtech - Friday, January 18, 2013 - link

    I am not a market researcher, but I do think that the non-GPU HPC market is still considerably larger than the GPU-enabled HPC market. There are thousands of smaller HPC apps whose developers are probably not going to make the investment to redesign them for GPUs, and even large companies like Ansys have only a few apps that are GPU accelerated (Fluent is, LS-DYNA is not AFAIK).
