After several requests and a week’s break following our initial DirectX 12 article, we’re back with an investigation into Star Swarm DirectX 12 performance scaling on AMD APUs. As our initial article was run on various Intel CPU configurations, this time we’re going to look at how performance scales on AMD’s Kaveri APUs, including whether DX12 is much help for the iGPU, and whether it can help equalize the single-threaded performance gap between Kaveri and Intel’s Core i3 family.

To keep things simple, this time we’re running everything on either the iGPU or a GeForce GTX 770. Last week we saw how quickly the GPU becomes the bottleneck under Star Swarm when using the DirectX 12 rendering path, and how difficult it is to shift that back to the CPU. And as a reminder, this is an early driver on an early OS running an early DirectX 12 application, so everything here is subject to change.

CPU: AMD A10-7800 / AMD A8-7600 / Intel Core i3-4330
Motherboard: GIGABYTE F2A88X-UP4 (AMD) / ASUS Maximus VII Impact (Intel)
Power Supply: Rosewill Silent Night 500W Platinum
Hard Disk: OCZ Vertex 3 256GB OS SSD
Memory: G.Skill 2x4GB DDR3-2133 9-11-10 (AMD) / G.Skill 2x4GB DDR3-1866 9-10-9 at 1600 (Intel)
Video Cards: MSI GTX 770 Lightning / AMD APU iGPU
Video Drivers: NVIDIA Release 349.56 Beta / AMD Catalyst 15.200 Beta
OS: Windows 10 Technical Preview 2 (Build 9926)


Star Swarm CPU Scaling - Extreme Quality - GeForce GTX 770


Star Swarm CPU Scaling - Mid Quality - GeForce GTX 770

Star Swarm CPU Scaling - Low Quality - GeForce GTX 770

To get right down to business then: are AMD’s APUs able to shift the performance bottleneck onto the GPU under DirectX 12? The short answer is yes. Highlighting just how bad the single-threaded performance disparity between Intel and AMD can be under DirectX 11, what is a clear 50%+ lead for the Core i3 at Extreme and Mid qualities becomes a dead heat as all three CPUs are able to keep the GPU fully fed. DirectX 12 provides just the kick the AMD APU setups need to overcome DirectX 11’s CPU submission bottleneck and push it onto the GPU. Consequently, at Extreme quality we see a 64% performance increase for the Core i3, but a 170%+ performance increase for the AMD APUs.

The one exception to this is Low quality mode, where the Core i3 retains its lead. Though initially unexpected, examining the batch count differences between the quality levels gives us a solid explanation as to what’s going on: Low pushes relatively few batches. Where Extreme quality pushes average batch counts of 90K and Mid pushes 55K, average batch counts under Low are only 20K. At this relatively low batch count the benefits of DirectX 12 are still present but diminished; the CPU no longer chokes on batch submission, and the bottleneck shifts elsewhere (likely to the simulation itself).
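The shifting bottleneck is easiest to see with a toy model (all per-stage costs below are illustrative assumptions, not measured values): each frame is limited by the slowest of three stages, batch submission on the CPU, rendering on the GPU, and the simulation itself.

```python
# Toy bottleneck model: a frame takes as long as its slowest stage.
# Per-batch and per-stage costs here are illustrative assumptions only.

def frame_time_ms(batches, per_batch_us, gpu_ms, sim_ms):
    """Return (frame time in ms, name of the limiting stage)."""
    submit_ms = batches * per_batch_us / 1000.0
    stages = {"submission": submit_ms, "gpu": gpu_ms, "simulation": sim_ms}
    limiter = max(stages, key=stages.get)
    return stages[limiter], limiter

# DX11-style serial submission (assumed ~0.4 us/batch): Extreme quality
# is submission-bound at 90K batches.
print(frame_time_ms(90_000, 0.4, gpu_ms=25.0, sim_ms=10.0))

# DX12-style threaded submission (assumed ~0.07 us/batch): the same 90K
# batches now cost only ~6 ms, so the GPU becomes the limiter.
print(frame_time_ms(90_000, 0.07, gpu_ms=25.0, sim_ms=10.0))

# At Low quality's ~20K batches, submission is trivial either way and the
# bottleneck moves to the simulation, matching what we see in the results.
print(frame_time_ms(20_000, 0.07, gpu_ms=4.0, sim_ms=10.0))
```

The per-batch costs are made up, but the structure of the model explains the data: DX12 shrinks the submission stage so far that some other stage, the GPU at Extreme and Mid, the simulation at Low, takes over as the limiter.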

Star Swarm CPU Batch Submission Time - Extreme - GeForce GTX 770

Meanwhile batch submission times are consistent across all three CPUs, with everyone dropping from 30ms+ down to around 6ms. The fact that AMD no longer lags Intel in batch submission times is very important for AMD, as it means they’re not struggling with individual thread performance nearly as much under DirectX 12 as they were under DirectX 11.
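As a rough back-of-the-envelope check using the round numbers above (roughly 90K batches at Extreme quality, 30ms+ submission under DX11 versus about 6ms under DX12), the per-batch CPU cost works out as follows:

```python
# Approximate per-batch CPU cost from the article's round numbers.
# These are rough figures for illustration, not precise measurements.
batches = 90_000          # ~average batch count at Extreme quality

dx11_ns = 30e-3 / batches * 1e9   # ~333 ns of CPU time per batch
dx12_ns = 6e-3 / batches * 1e9    # ~67 ns of CPU time per batch

print(f"DX11: ~{dx11_ns:.0f} ns/batch")
print(f"DX12: ~{dx12_ns:.0f} ns/batch")
print(f"Effective speedup: ~{dx11_ns / dx12_ns:.1f}x")
```

That roughly 5x drop in effective per-batch cost is what lets a modest CPU keep the GPU fed, whether it comes from cheaper submission, better threading, or both.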

Star Swarm GPU Scaling - Mid Quality

Star Swarm GPU Scaling - Low Quality

Finally, taking a look at how performance scales with our GPUs, the results are unsurprising but nonetheless positive for AMD. Aside from the GTX 770 – which has the most GPU headroom to spare in the first place – both AMD APUs still see significant performance gains from DirectX 12 despite quickly running into a GPU bottleneck. This simple API switch is still enough to get another 44% out of the A10-7800 and 25% out of the A8-7600. So although DirectX 12 is not going to bring the same kind of massive performance improvements to iGPUs that we’ve seen with dGPUs, in extreme cases such as this it can still be highly beneficial. And this comes before considering some of the potential fringe benefits of the API, such as shifting the TDP balance from CPU to GPU in TDP-constrained mobile devices.

Looking at the overall picture, just as with our initial article it’s important not to read too much into these results right now. Star Swarm is first and foremost a best case scenario and demonstration for the batch submission benefits of DirectX 12. And though games will still benefit from DirectX 12, they are unlikely to benefit quite as greatly as they do here, thanks in part to the much greater share of non-rendering tasks a CPU would be burdened with in a real game (simulation, AI, audio, etc.).

But with that in mind, our results from bottlenecking AMD’s APUs point to a clear conclusion: thanks to DirectX 12’s greatly improved threading capabilities, the new API can go a long way toward closing the gap between Intel and AMD CPUs, at least so long as the bottleneck lies in batch submission.

Comments

  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    It's a "prosumer" so it's a special thang only the internal mind legends know about....
  • close - Monday, February 16, 2015 - link

    Companies buy boatloads of devices that require powerful CPUs but only OK graphics. That's the business laptop/desktop. And the market there is way bigger than the normal consumer one. It was done since forever, they just moved the entire NB into the CPU. If you don't need it just... ignore it or go bigger (E series) :).
  • Pissedoffyouth - Saturday, February 14, 2015 - link

    Oh not this again
  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    I know hundreds of people who game on HD3000 and related intel iGPU's.
    Whilst you have a farm, most people have a city home, and Intel KNOWS that.
    They will completely ignore your opinion, as do I, interpreted as mere bragging.
  • close - Monday, February 16, 2015 - link

    You haven't felt the need to upgrade your farm since way back when 4770 launched? When was the last time you "felt the need" to upgrade the last generation of hardware to the current one? Or to upgrade 1 year after you bought the latest and the greatest?

    I can understand wanting to upgrade the very lowest end every year since some benefits must come out of it. Upgrading the high end can't be qualified as "a need" for years now, because the benefits are barely visible there. Anyway, not enough to justify the price.
  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    Vaporware bro. Wasted resources, AMD PR, totaling nothing, ever, in any future imagined, like so many other "AMD technologies".
  • mikato - Friday, February 13, 2015 - link

    Yeah I agree, but your previous comment was talking about graphics. About making a new ISA, that could be tough for an underdog. Intel wasn't really able to pull it off with Itanium IA-64.
  • ddriver - Friday, February 13, 2015 - link

    No, my previous comment was about AMD trying to downplay how much their CPUs suck by exploiting the reduction of CPU overhead in graphics performance. I somehow feel like this is the wrong approach to address lacking CPU performance. "Hey, if you play games, and are willing to wait for next-gen games, our CPUs won't suck that bad" - not a hugely inspiring message. Especially keeping in mind that CPU cost is not all, and power consumption is a factor that can easily overwhelm it in time. We have a saying, "not rich enough to afford cheap", you know, because most of the time "cheap" comes at a price too.

    The failure of IA-64 did not lie in it being "non x86", but in compile complexity and in Intel not pouring enough resources into its dev toolchain; plus it was a long time ago, the previous century really. Intel could easily push a new ISA today, but it has to be mandated. And even today, who would want Itanium when you have Power? There is simply no demand for Itanium, nor much room to shine. Itanium was not mandated, too "over-engineered"; an efficient, affordable ISA was mandated - enter ARM, a tiny holding really, something AMD totally has the resources to pull off, targeting something between x86 and ARM.
  • Alexvrb - Saturday, February 14, 2015 - link

    It's not a "wrong approach" since they can do both. They can and are attacking the problem from both sides. It's just that one approach (Mantle, and pushing MS to produce a low-level API as a result) could be done quickly and cheaply. The other side, the development of a new architecture? That takes time. They needed to buy time. Plus the benefits of low-level APIs can still be reaped to various degrees regardless. It's not a wasted effort.
  • Michael Bay - Monday, February 16, 2015 - link

    Wake up, Mantle is not just dead, it never even lived.
