As part of today’s public session for AMD’s 2014 GPU product showcase, AMD has announced a new audio technology, dubbed TrueAudio, for some of their upcoming GPUs. Although technical details are light at this time – more is certainly to come under NDA – what AMD is describing would be consistent with them having integrated some form of audio DSP into their relevant GPUs.

The inclusion of an audio DSP comes at an interesting time for the industry. The launch of the next generation consoles has afforded everyone the chance to make significant technology changes, as the outgoing consoles and the realities of multi-platform game publishing meant that many developers stuck with a lowest common denominator for input, graphics, and audio. For PC game audio this meant that most audio was implemented entirely in software, just as it was on the consoles.

The current state of affairs also traces back to significant changes in the Windows audio stack that came with Windows Vista. After years of bad experiences with audio hardware and dodgy drivers for low-end audio chips that implemented most of their functionality in software anyway, Microsoft used Vista to overhaul the stack, outright moving the bulk of it into user space, i.e. into software. This vastly improved the audio stack’s stability and baseline features, but in doing so it cut off hardware acceleration of the principal 3D audio API of the time, DirectSound 3D.

But with the new consoles and Windows 8, the opportunity has arisen for changes to how audio is handled, and this is what AMD is seeking to capitalize on.

Audio DSPs are nothing new, with pioneers such as Creative Labs and Aureal jump-starting the market for them back in the late 90s. But due to the aforementioned issues they haven’t been a serious market since the launch of Creative Labs’ X-Fi back in 2005. Consequently what AMD is going to be doing here – offloading audio processing to a DSP to take advantage of the greater capabilities of task-dedicated hardware – isn’t itself new, but it is the first serious effort on the subject since the X-Fi.

The advantages of utilizing the DSP are fairly straightforward. Simple audio calculations are cheap, and even simple 3D effects such as panning and precomputed reverb can be done similarly cheaply, but real-time reflections, reverb, and 3D transformations are expensive. Running the calculations to provide 3D audio over headphones and 2.1 speaker setups, or phantom speakers and above/below positioning in 5.1 setups, is all very expensive, and for these reasons these effects aren’t used in current generation games. These are the kinds of effects AMD wants to bring (back) to PC gaming.
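
To give a rough sense of that cost gap, here is a minimal sketch – my own illustration, not AMD’s code or any middleware vendor’s API – comparing simple constant-power panning against the kind of per-source HRTF convolution that headphone 3D positioning relies on. The function names and the fixed-length impulse response are assumptions made for the example.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Cheap: constant-power stereo pan, roughly two multiplies per sample.
void panStereo(const std::vector<float>& mono, float pan /* 0 = left, 1 = right */,
               std::vector<float>& left, std::vector<float>& right)
{
    const float halfPi = 1.5707963f;
    const float gainL = std::cos(pan * halfPi);
    const float gainR = std::sin(pan * halfPi);
    left.resize(mono.size());
    right.resize(mono.size());
    for (std::size_t i = 0; i < mono.size(); ++i) {
        left[i]  = mono[i] * gainL;
        right[i] = mono[i] * gainR;
    }
}

// Expensive: direct convolution with a head-related impulse response (HRIR).
// An N-tap filter costs roughly N multiply-accumulates per sample, per ear, per source.
void convolveHRIR(const std::vector<float>& mono, const std::vector<float>& hrir,
                  std::vector<float>& out)
{
    if (mono.empty() || hrir.empty()) { out.clear(); return; }
    out.assign(mono.size() + hrir.size() - 1, 0.0f);
    for (std::size_t n = 0; n < mono.size(); ++n)
        for (std::size_t k = 0; k < hrir.size(); ++k)
            out[n + k] += mono[n] * hrir[k];
}
```

With a typical HRIR running to a few hundred taps, the convolution path is hundreds of operations per sample, per ear, per sound source – versus two multiplies for the pan – and that’s before reflections and reverb are layered on top. That multiplicative blow-up is the work a dedicated DSP is meant to absorb.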

The challenge for AMD is that they’re going to need to get developers on board to utilize the technology, something that was a continual problem for Aureal and Creative. We don’t know just how the consoles will compare – we know the XB1 has its own audio DSPs, and we know less about the PS4 – but in the PC space this would be an AMD-exclusive feature, which means the majority of enthusiast gamers (who historically have been NVIDIA-equipped) will not be able to access it.

To jump ahead of that, AMD is already forging relationships with the most important firms in the PC gaming audio space: the audio middleware providers. AMD is working very closely with audio firm GenAudio of AstoundSound fame, who in turn has developed audio engines utilizing the TrueAudio DSP. To jumpstart the process, GenAudio will be releasing plugins for the common PC audio middleware packages, Firelight Technologies’ FMOD and Audiokinetic’s Wwise. AMD is also working with Audiokinetic directly towards the same goal.

AMD is also approaching game developers directly on this matter. Eidos has pledged support in their upcoming Thief game, and newcomer Xaviant is pledging support for their in-development magical loot game, Lichdom. All of this will of course be available to anyone using the Wwise or FMOD audio engines.

It bears mentioning that AMD’s audio DSP is not part of a stand-alone audio card; rather it’s a dedicated processor that developers can take advantage of to process their audio, with the results then passed back to the sound card for presentation. This means the audio DSP can be utilized regardless of the audio output method used – speakers, headphones, TVs via HDMI, etc. – but it also means that developers need to actively include support for TrueAudio to use it. It won’t allow 5.1-to-headphone downmixing for existing software, for example; developers will at a minimum need to patch in support or design it into future games.
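
To illustrate what “actively including support” looks like in practice, here is a hypothetical sketch of the opt-in pattern a game would follow; none of the names below are AMD’s or a middleware vendor’s actual API, and a real title would go through FMOD or Wwise rather than doing this directly.

```cpp
#include <cstdio>

// Hypothetical description of the audio backend a game detects at startup.
struct AudioBackend {
    bool hardwareDsp;  // true if an offload DSP (e.g. TrueAudio) is available
};

// Placeholder probe; in reality the audio middleware/driver would report this.
AudioBackend detectAudioBackend()
{
    AudioBackend backend;
    backend.hardwareDsp = false;  // assume software-only unless the driver says otherwise
    return backend;
}

void setupGameAudio()
{
    const AudioBackend backend = detectAudioBackend();
    if (backend.hardwareDsp) {
        // Route the heavyweight effects (convolution reverb, HRTF positioning)
        // to the DSP, then hand the processed mix back to the sound card.
        std::puts("Using hardware audio DSP for 3D effects");
    } else {
        // Fall back to the cheap effects every system can run on the CPU.
        std::puts("Falling back to simple panning and precomputed reverb");
    }
}

int main()
{
    setupGameAudio();
    return 0;
}
```

The important point is the fallback path: because the DSP sits upstream of the sound card rather than replacing it, games have to ask for it explicitly, and everything must still work when the answer is no.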

Wrapping things up, I had a chance to briefly try Xaviant’s Lichdom audio demo, which is already TrueAudio enabled. As someone who’s already a headphones-only gamer, this ended up being more impressive than any game/demo I’ve tried in the past. Xaviant has positional audio down pat – at least as good as Creative’s CMSS-3D tech – and elevation effects were clearly better than anything I’ve heard previously. They’re also making heavy use of reverb, to the point where it’s deliberately being overdone for effect, but what’s there works very well.

To be clear, nothing here is really groundbreaking; it’s merely a better implementation of existing ideas on positioning and reverb. But after a several-year span of PC audio failing to advance (if not outright regressing), it’s a welcome change to once again see positional audio and advanced audio processing taken seriously.

We’ll have more information on TrueAudio later on as AMD releases more details on the technology and what software will be using it.

Comments

  • JNo - Thursday, September 26, 2013 - link

    A few months ago, Razer brought out Surround Sound for headphone use; it's free and meant to be pretty good (I haven't tried it yet). There's CMSS for Creative's X-Fi chips, mentioned in the article (which I use), and there's also Dolby Headphone, which some people like, though I don't think it supports full 3D, i.e. I don't think it calculates exact sound effect positioning or elevation etc.

    Now that this is coming out from AMD, it would be good to have a round up of the technology behind, and the effectiveness of, the various 3D positional audio solutions for the PC using headphones. And do use headphones with a wide soundstage such as Audio Technica AD700 or AD900 or Sennheiser 558 or 598 (or even the older 555 or 595)!
  • mr_tawan - Thursday, September 26, 2013 - link

    When I first heard about the 'Programmable Audio Engine', I thought of something similar to shaders in graphics. But the more they revealed, the less sure I am that it is. It sounds like they come with predefined effects rather than letting developers create their own.

    But if that's the case, then it shouldn't be called 'programmable', right?

    Probably they just don't stress that word much, and instead announce the partners/technology based on it. I don't know, maybe I'll have to check the info on the developer website in the future.
  • MrSpadge - Thursday, September 26, 2013 - link

    Upon reading "programmable" I first thought it was going to be a software solution running on the shaders, which should work pretty well, especially if the GPU could finally run several tasks at once without major headaches – a direction they should have moved in anyway, especially considering this is supposed to be GCN 2.0 rather than 1.0/1.1.

    Anyway, this should have made it into DX 11.2 or 11.3, open to be supported by anyone. Let the game enable it or not. And if it's enabled, but no supporting hardware is present, just use the current simple positioning etc. instead.
  • erple2 - Friday, September 27, 2013 - link

    No, I think that this is actually programmable sound, not just enabling reverb and echo (which ultimately was the only thing that EAX did). The trick isn't to figure out whether you have a weird reverberation in a scene, it's to calculate what the sound stage should sound like based on the direction you are looking, the physical makeup of the scene (where the walls are, what material they're made of, and ultimately how that impacts how sound reverberates off each surface), where the sources of the sounds are, and the cumulative effect each surface has on the sound as it travels to the listener. That's a hard problem to solve. Think of raytracing, but for sound, and add appropriate algorithms to figure out how the listener (with two ears) would perceive the sound, then ship that to the speakers. I imagine the shader component comes into play when you're determining the effect (by frequency) of a surface on incoming and outgoing sound. That's way more complicated than I originally thought, and doing it in real time is very expensive.
  • risa2000 - Friday, September 27, 2013 - link

    Actually, this is what Aureal did 15 years ago with their Aureal Vortex 2 chips. Imagine you are in a room, closer to one wall. You reload your weapon. You hear the "click" echo from the closer wall sooner and louder, while from the other side it comes later and fuzzier. Imagine you turn slightly while reloading and you can immediately figure out which wall is closer and which is farther away, even if it's pitch dark.

    Or imagine you ride a train through a tunnel and the echo of the wheels bumping over the rails is practically pressing on you, then suddenly the tunnel expands into a large room and the echo becomes much more delayed and attenuated.

    Those are some examples of what you could experience in Half-Life (the first one) if you played it with Aureal Vortex 2 hardware and headphones (as I did).
  • BrightCandle - Saturday, September 28, 2013 - link

    It was an amazing effect, and I used my Aureal Vortex 2 for as long as I could before finally there was no reason to anymore. It's a real shame they were sued into destruction, because it was a great technology and gave a real advantage in some games.

    But I wish AMD were making a sound card with this instead, so we could get this capability without an AMD graphics card.
  • Tig3RStylus - Thursday, September 26, 2013 - link

    I don't understand why people are complaining. If the work results in improvements, small or large, it's a benefit. If it triggers their competitor to do the same, everybody wins. Granted, it would be better if it were brand-agnostic, but at least somebody is doing something to push the envelope.
  • Flunk - Friday, September 27, 2013 - link

    I think maybe it should be mentioned that the amount of GPU die space this sort of DSP would take up is minuscule. So you're getting a lot of benefit for almost nothing. I can completely understand why AMD would do this; it fits into their Fusion strategy of integrating everything, and gives them a feature that NVIDIA doesn't have for very little of their overall transistor budget.
  • Wolfpup - Wednesday, October 2, 2013 - link

    Ooooh hey, yeah if they can stick this on their A series CPUs, that would be a nice little bullet point for 'em. They're already IMO the best choice at the low end.
  • Sleeper0013 - Tuesday, October 1, 2013 - link

    Coming from an audio engineer's perspective, this is going to make the process of audio engineering for PC games much simpler, while also creating more realistic, real-time, free-perspective 3D sound fields, which I promise you an i7 can't handle.

    This is also going to make recording into a digital environment more precise, considering digital audio engineers are always feuding with latency. I assure you I welcome a dedicated APU, which is going to vastly improve my recording workflow.

    This will also translate into using a PC-generated digital effects signal chain with live instruments, which I assure you isn't a possibility today because of latency problems stemming from the lack of a dedicated APU with its own buffer layer.
