Bill Kircos, Intel’s Director of Product & Technology PR, just posted a blog on Intel’s site entitled “An Update on our Graphics-Related Programs”. In the blog Bill addresses future plans for what he calls Intel’s three visual computing efforts:

The first is the aforementioned processor graphics. Second, for our smaller Intel Atom processor and System on Chip efforts, and third, a many-core, programmable Intel architecture and first product both of which we referred to as Larrabee for graphics and other workloads.

There’s a ton of information in the vague but deliberately worded blog post, including a clear stance on Larrabee as a discrete GPU: “We will not bring a discrete graphics product to market, at least in the short-term.” Kircos goes on to say that Intel will increase funding for integrated graphics, as well as pursue Larrabee-based HPC opportunities, effectively validating both AMD’s and NVIDIA’s strategies. As different as Larrabee appeared when it first arrived, Intel appears to be going with the flow after today’s announcement.

My analysis of the post as well as some digging I’ve done follows.

Intel Embraces Fusion, Long Live the IGP

Two and a half years ago Intel put up this slide that indicated the company was willing to take 3D graphics more seriously:

By 2010, on a 32nm process, Intel’s integrated graphics would deliver roughly 10x the performance it offered in 2006. Sandy Bridge was supposed to be out in Q4 2010, but we’ll see it shipping in Q1 2011. It’ll offer a significant boost in integrated graphics performance; I’ve heard it may finally be as fast as the GPU in the Xbox 360.

Intel made huge improvements to its integrated graphics with Arrandale/Clarkdale. This wasn’t an accident; the company is taking graphics much more seriously. The first point in Bill’s post clarifies this:

Our top priority continues to be around delivering an outstanding processor that addresses every day, general purpose computer needs and provides leadership visual computing experiences via processor graphics. We are further boosting funding and employee expertise here, and continue to champion the rapid shift to mobile wireless computing and HD video – we are laser-focused on these areas.

There’s a troubling absence of any mention of the gaming market in this statement. A laser focus on mobile wireless computing and HD video sounds a lot like an extension of what Intel integrated graphics does today, not what we hope it will do tomorrow. Intel does have a fairly aggressive roadmap for integrated graphics performance, so perhaps omitting the word gaming was a deliberate attempt to downplay, for now, the importance of the market its competitors play in.


The current future of Intel graphics

The second point is this:

We are also executing on a business opportunity derived from the Larrabee program and Intel research in many-core chips. This server product line expansion is optimized for a broader range of highly parallel workloads in segments such as high performance computing. Intel VP Kirk Skaugen will provide an update on this next week at ISC 2010 in Germany.

In a single line Intel completely validates NVIDIA’s Tesla strategy. Larrabee will go after the HPC space much like NVIDIA has been doing with Fermi and previous Tesla products. Leveraging x86 can be a huge advantage in HPC. If both Intel and NVIDIA see so much potential in HPC for parallel architectures, there must be some high dollar amounts at stake.

NVIDIA Tesla   Seismic   Supercomputing   Universities   Defence   Finance
GPU TAM        $300M     $200M            $150M          $250M     $230M

NVIDIA's calculated TAM for HPC applications for GPUs

The third point is the one that drives the final nail into the coffin of the Larrabee GPU:

We will not bring a discrete graphics product to market, at least in the short-term. As we said in December, we missed some key product milestones. Upon further assessment, and as mentioned above, we are focused on processor graphics, and we believe media/HD video and mobile computing are the most important areas to focus on moving forward.

Intel wasn’t able to make Larrabee’s performance competitive in DirectX and OpenGL applications, so we won’t be getting a discrete GPU based on Larrabee anytime soon. Instead, Intel will be dedicating its resources to improving its integrated graphics. We should see a nearly 2x improvement in Intel integrated graphics performance with Sandy Bridge, and more than another 2x on top of that with Ivy Bridge in 2012.

All isn’t lost, though. The Larrabee ISA, specifically the VPU extensions, will surface in future CPUs and integrated graphics parts, and Intel will continue to toy with the idea of using Larrabee in various forms, including a discrete GPU. However, the primary focus has shifted from producing a discrete GPU to compete with AMD and NVIDIA to integrated graphics and a Larrabee for HPC workloads. Intel is effectively stating that it sees a potential future where discrete graphics isn’t a sustainable business and integrated graphics is good enough for most of what we want to do, even from a gaming perspective. In that future, discrete graphics would only serve the needs of a small niche.

Just as cache controllers and FPUs were eventually integrated into the CPU, Intel expects the GPU to follow the same path; the days of discrete coprocessors have always been numbered. One benefit of a tightly coupled CPU-GPU pairing is the bandwidth available between the two, an advantage game consoles have exploited for years.
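To put rough numbers on that bandwidth argument, here is a back-of-the-envelope sketch. The figures are illustrative assumptions of my own for circa-2010 hardware (PCIe 2.0 x16, dual-channel DDR3-1333, the Xbox 360's unified GDDR3), not anything taken from Intel's post:

```python
# Back-of-the-envelope bandwidth comparison (illustrative circa-2010 figures, assumed).

# PCIe 2.0 x16: 16 lanes x ~500 MB/s usable per lane, per direction
pcie2_x16_gbs = 16 * 0.5        # ~8 GB/s each way between the CPU and a discrete GPU

# Dual-channel DDR3-1333: 2 channels x 8 bytes x 1.333 GT/s
ddr3_dual_gbs = 2 * 8 * 1.333   # ~21.3 GB/s, shared by the CPU and an on-die GPU

# Xbox 360 unified memory: 128-bit (16-byte) GDDR3 at 1.4 GT/s effective
xbox360_gbs = 16 * 1.4          # ~22.4 GB/s, shared by the console's CPU and GPU

print(f"PCIe 2.0 x16 (CPU <-> discrete GPU): {pcie2_x16_gbs:.1f} GB/s per direction")
print(f"Dual-channel DDR3-1333 (CPU + IGP):  {ddr3_dual_gbs:.1f} GB/s")
print(f"Xbox 360 unified GDDR3 (CPU + GPU):  {xbox360_gbs:.1f} GB/s")
```

The exact numbers matter less than the shape of the comparison: a GPU that sits on the same die as the CPU shares the full memory interface instead of passing data across an expansion bus, which is exactly the trick consoles have relied on.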

This does conflict (somewhat) with AMD’s vision of a functional Holodeck in 6 years, but that’s why Intel put the “at least in the short-term” qualifier on its statement. I believe Intel plans on making integrated graphics good enough, over the next 5 years, for nearly all 3D gaming. I’m not sure AMD’s Fusion strategy is much different.

For years Intel made a business case for delivering cheap, barely accelerated 3D graphics on aging process technologies. Intel has apparently recognized the importance of the GPU and is changing direction. Intel will commit more resources (both in development effort and actual transistor budget) to the graphics portion of its CPUs going forward. Sandy Bridge will be the beginning; the ramp from there will probably mimic what we saw ATI and NVIDIA do with their GPUs over the years, with a constant doubling of transistor count. Intel has purposefully limited the GPU transistor budget in the past. From what I’ve heard, that limit is now gone. It will start with Sandy Bridge, but I don’t think we’ll be impressed until 2013.

What About Atom & Moorestown?

Anything can happen, but by specifically calling out the Atom segment I get the impression that Intel is trying to build its own low-power GPU core for use in SoCs. Currently that IP is licensed from Imagination Technologies, a company Intel holds a 16% stake in, but eventually Intel may build its own integrated graphics core here.

Previous Intel graphics cores haven’t been efficient enough to scale down to the smartphone SoC level. I get the impression that Intel has plans (if it isn’t doing so already) to stand up an Atom-style GPU team to work on extremely low-power graphics cores. This would ultimately eliminate the need to license 3rd party graphics IP for Intel’s SoCs. Planning and succeeding are two different things, so only time will tell whether Imagination has a long-term future at Intel. The next 3 years are pretty much guaranteed to be full of Imagination graphics, at least in Intel’s smartphone/SoC products.

Final Words

Intel cancelled plans for a discrete Larrabee graphics card because it could not produce one that was competitive with existing GPUs from AMD and NVIDIA in current games. Why Intel lacked the foresight to avoid getting to this point in the first place is tough to say. The company may have been too optimistic, or it may have genuinely lacked the experience of building discrete GPUs, something it hadn’t done in more than a decade. Maybe it truly was Pat Gelsinger's baby.

This also validates AMD’s and NVIDIA’s strategies and their public responses to Larrabee. Both companies often said that the most efficient approach to 3D graphics was not through x86 cores but through their own specialized, yet programmable, hardware. The x86 tax would effectively always put Larrabee at a disadvantage. When running Larrabee-native code this would be less of an issue, but DirectX and OpenGL performance is another situation entirely. Intel executed too poorly; NVIDIA, and most definitely AMD, executed too well. Intel couldn’t put out a competitive Larrabee quickly enough, and it fell too far behind.

A few years ago Intel attempted to enter the ARM SoC race with an ARM-based chip of its own: XScale. Intel eventually admitted defeat and sold off XScale, stating that it was too late to the market. Intel has since focused on the future of the SoC market with Moorestown. Rather than compete in a maturing market, Intel is now attempting to get a foot in the door on the next evolution of that market: high-performance SoCs.

I believe this may be what Intel is trying to do with its graphics strategy. Seeing little hope for a profitable run at discrete graphics, Intel is now turning its eye to unexplored territory: the hybrid CPU-GPU. Focusing its efforts there, if successful, would be far easier and far more profitable than struggling to compete in the discrete GPU space.

The same goes for using Larrabee in the HPC space. NVIDIA is the most successful GPU company in HPC and even its traction has been limited. It’s still early enough that Intel could show up with Larrabee and take a big slice of the pie.

Clearly AMD sees value in the integrated graphics strategy; it spent over $5 billion acquiring ATI in order to bring Fusion to market, and next year we’ll see the beginnings of that merger come to fruition. Not only does Intel’s announcement today validate NVIDIA’s HPC strategy, it also validates AMD’s acquisition of ATI. While Larrabee as a discrete GPU cast a shadow of confusion over the future of the graphics market, Intel focusing on integrated graphics and HPC is much more harmonious with AMD’s and NVIDIA’s roadmaps. We used to not know who had the right approach; now we have one less approach to worry about.

Whether Intel is committed enough to integrated graphics remains to be seen. NVIDIA has no current integrated graphics strategy (unless it can work out a DMI/QPI license with Intel). AMD’s strategy is very similar to what Intel is proposing today, and has been for some time, but AMD at least has far more mature driver and ISV compatibility teams behind its graphics cores. Intel has a lot of catching up to do in this department.

I’m also unsure what AMD and Intel see as the future of discrete graphics. Based on today’s trajectory you wouldn’t have high hopes for it, but as today’s announcement shows, anything can change. Plus, I doubt the Holodeck will run optimally on an IGP.

Comments

  • stalker27 - Wednesday, May 26, 2010 - link

    The analogy isn't very good... the reason most of us don't need a discrete sound solution is that sound processing has already been pushed about as far as it needs to go. Remember, in the past there were just beeps, but now we can play sound files containing sounds we can't even hear. What's the point in advancing beyond that?

    On the graphics side, well, there are more parameters. Sure, cards can go 100 fps+, but TVs and monitors keep getting bigger, and they will struggle to hit that many fps at the next HD revision. BTW, what is it? 2540p and 4320p? So there's a need for faster and faster chips with more and more memory.

    What Intel is doing is trying not to depend so much on graphics-oriented companies like NVIDIA for everyday graphics usage. AMD was clever to merge with ATI, and Intel does well to devote more resources to this segment.
  • _PC_ - Saturday, May 29, 2010 - link

    I just ordered an Asus Xonar Essence STX. Why? Because quality matters. You get what you pay for, and watching HD movies or listening to music with crappy onboard sound just sucks.
    Invest in a good sound card + THX-certified boxes and you will stop being a bean counter.
    Good sound cards and good graphics cards will never die.
  • MrSpadge - Tuesday, May 25, 2010 - link

    "Intel removing the brakes from IGP transistor budget"

    All nice and well for performance at first glance. However, one shouldn't forget that fast graphics also requires a lot of bandwidth, naturally. No amount of on-CPU cache is going to change this. By moving the GPU into the CPU we constrain the bandwidth available to both. To get performance even comparable to a mainstream graphics card we'd need a couple more memory channels and higher memory clocks for the CPU, which additionally means a more expensive socket and motherboard. And that's aside from the cost of the memory and the huge graphics portion within the CPU.

    If this graphics engine can also be used as a co-processor for general-purpose tasks we may have a deal... but if not, and we'd have to pay for this in each and every office box, I'd rather say "forget IGP gaming".

    MrS
  • Panzergeist - Tuesday, May 25, 2010 - link

    Now, in say 5 years, we'll be at what? 22, 20, or maybe 16nm. And haven't the naysayers been calling the discrete GPU dead for god knows how long? 3D gaming? There's not an IGP on earth that will pull off 3D at any acceptable level. While IGPs continue to evolve, so does gaming, and within the timespan mentioned we'll have new consoles, and thus another huge leap in development and requirements. This industry fuels itself. I just don't see discrete cards dying off.
  • Zingam - Wednesday, May 26, 2010 - link

    I think the CPU and GPU will merge eventually, so there will be no discrete GPUs in the end. There will be multicore, floating-point-capable CPUs. And then, instead of SLI, you'll be putting more CPUs with more cores into the PC. That way it will be easier to program and faster.
  • chemist1 - Tuesday, May 25, 2010 - link

    My main interest in graphics chips is their potential to offer vastly faster general computing through their evolution into GP-GPUs (example: Folding@Home, a distributed protein folding program, runs 40X faster on a PS3 than on a modern CPU).

    So can anyone explain the ramifications of Intel's announcement on this front? Is their (perhaps temporary) abandonment of high-performance integrated graphics bad news for using GPUs to take on CPU tasks, or not?

    Thanks.
  • AstroGuardian - Tuesday, May 25, 2010 - link

    Clearly not. Intel's abandonment of Larrabee just means one competitor fewer in the tight GPU market (not to mention GP-GPU).
  • chemist1 - Tuesday, May 25, 2010 - link

    Sorry, correction: I meant Intel's "abandonment of high-performance DISCRETE graphics..."
    I should add that the reason I raised the issue is that it was my understanding that Larrabee's architecture was supposed to make it particularly suitable for GP use (what I don't know is whether this would put them ahead of ATI and NVIDIA in that regard).
  • Jaybus - Thursday, May 27, 2010 - link

    Larrabee is still a good idea, just not necessarily for a discrete graphics controller. Take a look at the forthcoming SCC chip, which similarly uses "tiles" of simplified x86 cores but wires them together via an on-chip mesh network with 256 GB/s of bandwidth. Each of the 24 tiles has its own address space and 2 x86 cores in an SMP configuration sharing that memory. The 48 cores are Atom-like, but have message passing hardware. Every tile runs its own OS in its own address space, so it is an HPC system on a chip, but with much faster networking. Now consider the on-chip optical signaling stuff Intel is working on, which will increase the on-chip network bandwidth to the point that message passing is just as fast as if all cores accessed the same RAM. This gets around GPGPU's problem of continually having to shuffle data between texture memory and off-chip system memory over PCIe. The GPGPU is very fast at fully parallel tasks, but the SCC can move data between cores and other cores or system RAM at much higher rates. Real world tasks are often a serial set of parallel sub-tasks.

    And who says it has to be one or the other? Why not include a GPGPU tile with its own on-chip texture memory? Then it would transfer data over the on-chip network as opposed to the much slower PCIe. It's not as dumb an idea as some seem to think.
  • chemist1 - Friday, May 28, 2010 - link

    Jaybus, thanks for the thoughtful reply.
    I also wonder if it might be easier to get integrated graphics units (as opposed to discrete graphics cards) to perform CPU-like tasks, either because of their architecture or their integration with the CPU.
