Over the last few years the SoC GPU space has taken an interesting path, and one I admittedly wasn't expecting. At the start of this decade the playing field for SoC-class GPUs was rather diverse, with everyone from NVIDIA to Broadcom (and everything in between) participating in it. Consolidation in the GPU space was inevitable, something we've already seen with SoC vendors dropping out, but I am surprised by just how quickly it has happened. In just six years, the number of GPU vendors with a major presence in high-end Android phones has been whittled down to only two: the vertically integrated Qualcomm, and the IP-licensing ARM.

That ARM has managed to secure most of the licensed GPU market for themselves is a testament to both their engineering and their IP licensing efforts. ARM's path into this market has been non-traditional, having acquired an essentially unknown GPU vendor a decade ago and grown it into the 800 lb gorilla it has now become. ARM's expertise in IP licensing, coupled with a somewhat unusual GPU architecture, has proven to be a powerful combination for the company as they have secured a number of significant wins from the high end to the low end.

Much of this growth was built on the back of the company's GPU architecture of the last few years, Midgard. Initially launched in 2012, Midgard has been the cornerstone of ARM's Mali-T600, T700, and T800 series designs. As ARM's first unified shader design for GPUs, Midgard has been extended over the years to support newer features such as geometry tessellation and 10bpc color, along with newer APIs such as OpenGL ES 3.1/3.2 and Vulkan.

However, as Midgard approaches its fourth birthday and the SoC GPU landscape evolves, Midgard's time at the top will soon be coming to an end. Against the backdrop of Computex 2016, and alongside their new Cortex-A73 CPU, ARM is announcing their next-generation GPU architecture, Bifrost. A significant update to ARM's GPU architecture, Bifrost will first be deployed in ARM's Mali-G71 GPU.

Recap: Mali & VLIW

One of the interesting aspects of SoC GPU development over the years is that it has been a very distinct echo of larger discrete GPU development. Many innovations and changes that first show up with dGPUs will show up in SoC GPUs a few years later, as newer manufacturing processes allow for those developments to fit within the extreme space and power requirements of an SoC-class GPU. At the same time mobile games/graphics development follows a similar path, with mobile application developers picking up rendering techniques first used elsewhere.

ARM’s architectural development, in turn, has been a good example of this process. The non-unified Utgard architecture gave way to the unified Midgard architecture in 2012, about 6 years after dGPUs first made the transition. And as we learned when we examined the Midgard architecture in depth, Midgard was an architecture well suited for the rendering paradigms of the time.

Midgard's shader core, in short, was an Instruction Level Parallelism-centric design, employing a Very Long Instruction Word (VLIW) instruction format. To achieve maximum utilization of a Midgard shader core, the compiler needed to extract a significant amount of ILP, 4 concurrent instructions, to fill all of the core's slots. This sort of design maps well to basic graphics workloads, as the 4 color components of RGBA are a natural fit for the 4 lanes of ARM's VLIW-4 design. Furthermore, VLIW designs are traditionally very space efficient, as there's relatively little overhead logic, which is always a boon given the tight constraints of the SoC space.
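To make the lane-filling idea concrete, here is a minimal sketch in plain C (this is illustrative only, not Mali ISA or real compiler output; the vec4 type and modulate function are our own inventions). The point is that a single vec4 color operation naturally supplies four independent per-channel multiplies, which is exactly the kind of work a VLIW-4 issue slot wants:

/* Illustrative sketch: why RGBA work suits a VLIW-4 lane layout.
 * The four per-channel multiplies below are independent of one another,
 * so a VLIW compiler can pack all of them into a single 4-wide
 * instruction word, filling every lane. (Hypothetical example, not Mali code.) */
#include <stdio.h>

typedef struct { float r, g, b, a; } vec4;

/* Conceptually one VLIW-4 issue: four independent multiplies, one per lane. */
static vec4 modulate(vec4 color, vec4 tint)
{
    vec4 out;
    out.r = color.r * tint.r;  /* lane 0 */
    out.g = color.g * tint.g;  /* lane 1 */
    out.b = color.b * tint.b;  /* lane 2 */
    out.a = color.a * tint.a;  /* lane 3 */
    return out;
}

int main(void)
{
    vec4 texel = {0.8f, 0.6f, 0.4f, 1.0f};
    vec4 tint  = {0.5f, 0.5f, 0.5f, 1.0f};
    vec4 out   = modulate(texel, tint);
    printf("%.2f %.2f %.2f %.2f\n", out.r, out.g, out.b, out.a);
    return 0;
}

With classic texture-and-blend style shading dominated by vec4 math like this, keeping all four lanes busy is nearly free, which is why the design made sense for the workloads of Midgard's era.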

However, getting back to what we said earlier about SoC GPUs being an echo of discrete GPUs, the dGPU space has already shown that VLIW has a limited shelf life. Newer rendering paradigms often work with just 1 or 2 components at a time, which leaves open lanes that need to be filled to achieve full GPU utilization. A good shader compiler can help here, but it becomes an escalating technology war over time: good performance becomes increasingly compiler-dependent, and writing a compiler that can extract the necessary ILP is a challenge in and of itself. What history has shown us, and what is going to happen again in the mobile market, is that rendering workloads will continue to shift away from a style that is suitable for VLIW.
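The contrast with the earlier RGBA sketch is worth spelling out. Below is another illustrative C fragment (again hypothetical, not Mali code; the attenuation function is our own example of modern scalar-heavy lighting math). Each step depends on the previous result, so there is almost no ILP for a compiler to find, and a VLIW-4 slot would issue with three of its four lanes empty unless unrelated work can be packed alongside:

/* Illustrative sketch: the workload shift that hurts VLIW-4.
 * This scalar attenuation term has a serial dependency chain; each line
 * needs the result of the line before it, so only one of four VLIW lanes
 * can be filled per step without the compiler finding other, unrelated
 * instructions to pack in. (Hypothetical example, not Mali code.) */
#include <math.h>
#include <stdio.h>

static float attenuation(float dist, float radius)
{
    float x       = dist / radius;       /* lane 0; lanes 1-3 idle */
    float x2      = x * x;               /* depends on x */
    float falloff = 1.0f - x2;           /* depends on x2 */
    return fmaxf(falloff, 0.0f);         /* depends on falloff */
}

int main(void)
{
    printf("%.3f\n", attenuation(2.0f, 10.0f));
    return 0;
}

As shaders fill up with math like this, sustained utilization on a VLIW-4 machine rests almost entirely on the compiler's ability to interleave independent work, which is precisely the escalating war described above.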
