Microsoft: DirectStorage 1.1 with GPU Decompression Finally on Its Way
by Ryan Smith on October 14, 2022 10:45 AM EST
- Posted in
- DirectX 12
- Windows 11
As part of this week’s Microsoft Ignite developers conference, Microsoft’s DirectX team has published a few blog posts offering updates on the state of various game development-related projects. The biggest and most interesting of these is an update on DirectStorage, Microsoft’s API for enabling faster game asset loading. In short, the long-awaited 1.1 update, which adds support for GPU asset decompression, is finally on its way, with Microsoft intending to release the API to developers by the end of this year.
As a quick refresher, DirectStorage is Microsoft’s next-generation game asset loading API, designed to take advantage of the modern capabilities of both GPUs and storage hardware so that game assets can be transferred to the GPU more efficiently. On the I/O side of matters, DirectStorage offers new batched I/O operations that are designed to cut down on the number of individual I/O operations, reducing the overall I/O overhead. But even more notable than that, DirectStorage also enables (or rather, will enable) GPU asset decompression, allowing modern compressed assets to bypass the CPU and be decompressed on the GPU instead.
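The real DirectStorage API is a C++/Windows interface, but the batching idea itself can be sketched in a conceptual Python analogy: gathering many small reads into a single OS request with POSIX `preadv()`, rather than issuing one system call per asset. The "asset pack" file and asset contents here are made up for illustration.

```python
import os
import tempfile

# Conceptual analogy only -- not the DirectStorage API. This shows the general
# idea of batched I/O: many small reads serviced by one OS request.

# Create a scratch "asset pack" file with three fake assets back to back.
assets = [b"MESH" * 256, b"TEX0" * 512, b"ANIM" * 128]
with tempfile.NamedTemporaryFile(delete=False) as f:
    for a in assets:
        f.write(a)
    path = f.name

# One batched request: preadv() fills all three buffers with a single syscall,
# instead of issuing one read() per asset.
buffers = [bytearray(len(a)) for a in assets]
fd = os.open(path, os.O_RDONLY)
try:
    nread = os.preadv(fd, buffers, 0)  # scatter-read starting at offset 0
finally:
    os.close(fd)
    os.unlink(path)

assert nread == sum(len(a) for a in assets)
assert all(bytes(buf) == a for buf, a in zip(buffers, assets))
```

The point is the shape of the operation, not the syscall itself: fewer, larger requests mean less per-request overhead for the CPU to track.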
The significance of DirectStorage is that Microsoft wants PCs (and consoles) to be able to better leverage the low random access times and high transfer rates of modern SSDs, enabling games to quickly stream in new assets rather than having to pre-load everything or suffer noticeably slow asset loading, as can be the case today. Under current game development paradigms, the CPU can be a bottleneck in scaling up I/O rates to meet what SSDs can provide, as there are significant CPU costs both in tracking so many I/O operations and in decompressing game assets before passing them on to the GPU. DirectStorage, in turn, is designed to minimize both of these loads and, ultimately, remove the CPU from game asset streaming as much as possible.
DirectStorage technology was already implemented on Microsoft's Xbox Series X/S consoles for their launch in 2020, so more recent efforts have been around porting DirectStorage to Windows and accounting for the PC's heterogeneous hardware ecosystem. Earlier this year Microsoft rolled out DirectStorage 1.0, which implemented the I/O batching improvements, but not the GPU decompression capabilities. This is where DirectStorage 1.1 will come in, as it will finally enable the second (and most important) aspect of DirectStorage for PCs.
By allowing GPUs to do game asset decompression, that entire process is offloaded from the CPU. This not only frees the CPU up for other tasks, but it removes a potentially critical bottleneck in game asset streaming. Because modern SSDs are so fast – on the order of hundreds of thousands of IOPS and data transfer rates hitting 7GB/second – the CPU is the weakest link between speedy SSDs and massively parallel GPUs. So under DirectStorage, the CPU is getting cut out almost entirely.
As far as the performance benefits of DirectStorage 1.1 go, the full gains will depend on both the hardware used and how much data a game or other application is attempting to push. Games moving large amounts of data on very fast systems are expected to see the largest gains from the full DirectStorage 1.1 stack, though even lighter games can benefit from the fast access times of NVMe SSDs.
As part of Microsoft’s blog post, the company posted a screenshot from their Bulk Loading sample program for game developers, which offers a simple demonstration and benchmark of DirectStorage 1.1 in action. In Microsoft’s case, they were able to load 5.65GB of assets in 0.8 seconds using GPU decompression on an undisclosed PC, versus 2.36 seconds on the same system with CPU decompression – while maxing out the load on the CPU in the process. Like most SDK sample programs, this is a simple test case focused on just one feature, so the real-world gains aren’t likely to be quite so extreme, but it underscores the performance benefits of moving asset decompression from the CPU to the GPU when you have a large amount of asset data.
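The numbers from Microsoft's sample are worth a quick back-of-the-envelope check, as they show the GPU path running right up against what a fast PCIe 4.0 NVMe SSD can sustain:

```python
# Effective throughput implied by Microsoft's Bulk Loading sample figures.
data_gb = 5.65           # assets loaded, in GB
gpu_seconds = 0.80       # DirectStorage 1.1 with GPU decompression
cpu_seconds = 2.36       # same system, CPU decompression

gpu_rate = data_gb / gpu_seconds     # ~7.1 GB/s -- roughly a PCIe 4.0 NVMe SSD's ceiling
cpu_rate = data_gb / cpu_seconds     # ~2.4 GB/s -- the CPU-bound path
speedup = cpu_seconds / gpu_seconds  # ~2.95x faster load overall

print(f"GPU path: {gpu_rate:.1f} GB/s, CPU path: {cpu_rate:.1f} GB/s, "
      f"speedup: {speedup:.2f}x")
```

In other words, with GPU decompression the sample is effectively drive-limited, while the CPU-decompression path tops out at about a third of the drive's bandwidth.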
Moving under the hood, DirectStorage GPU decompression is being enabled via the introduction of GDeflate, a general-purpose compression algorithm originally developed by NVIDIA. GDeflate is a GPU-optimized variation on Deflate, designed to better mesh with the massively parallel (and decidedly non-serial) nature of GPUs.
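GDeflate's specific contribution is restructuring the bitstream so that many GPU threads can decode it in parallel; the underlying entropy coding remains Deflate. As a rough illustration of what the base algorithm does, here is standard Deflate (via Python's `zlib`) applied to some redundant, asset-like data (the payload is invented for the example):

```python
import zlib

# Standard Deflate, as exposed by zlib. GDeflate keeps this entropy coding but
# tiles the bitstream so a GPU can decompress many chunks concurrently.
payload = b"vertex_position_buffer_" * 1000  # highly redundant, like many game assets

compressed = zlib.compress(payload, level=9)
restored = zlib.decompress(compressed)

assert restored == payload  # lossless: the round trip is exact
print(f"{len(payload)} bytes -> {len(compressed)} bytes "
      f"({len(compressed) / len(payload):.1%} of original)")
```

The win for asset streaming is that the smaller compressed form is what crosses the storage and PCIe links, with expansion happening at the destination.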
DirectStorage, in turn, will implement GDeflate support in two different manners. The first (and preferred) manner is to pass things off to the GPU drivers and have the GPU vendor take care of it as they see fit. This will allow hardware vendors to optimize for the specific hardware/architecture used, and to leverage any special hardware processing blocks if they're available. All three major GPU vendors (AMD, Intel, and NVIDIA) are eager to get the show on the road, and it's likely some (if not all) of them will have DirectStorage 1.1-capable drivers ready before the API even ships to game developers.
Failing that, Microsoft is also providing a generic (but optimized) DirectCompute GDeflate decompressor, which can run on any DirectX 12 Shader Model 6.0-compliant GPU. This means that, in some form or another, GDeflate will be available on virtually any PC GPU made in the last 10 years – though more recent GPUs are expected to offer much better performance.
Otherwise, the only things that will eventually be needed to take advantage of GPU decompression – and DirectStorage 1.1 in general – will be Windows 10 1909 (or later) or Windows 11, as well as a fast storage device. Technically, DirectStorage works with any storage device, including SATA SSDs, but it is explicitly being optimized for (and delivers the best results on) systems using NVMe SSDs.
Do note, however, that it will be up to individual games to implement DirectStorage to see the benefits of the API. That means not only using the necessary API hooks, but also shipping games with assets packed using the new GDeflate algorithm. The vast backwards compatibility of GDeflate means that game devs can essentially hit the ground running here on DX12 games – anything worth running a new game on is going to support DirectStorage and GDeflate – but the fact that it involves game assets means that full DirectStorage 1.1 support cannot be trivially added to existing games. Developers would need to redistribute (or otherwise recompress) game assets for GDeflate, which is certainly doable, but would require gamers to re-download a large part of a game. So gamers should plan on seeing DirectStorage 1.1 arrive as a feature in future games, rather than being backported to existing games.
Finally, as for Microsoft’s audience at hand (developers), this week’s announcement from Microsoft is meant to prod them into getting ready for the updated API ahead of its release later this year. Microsoft isn’t releasing the API documentation or tools at this time, but they are encouraging developers to get started with DirectStorage 1.0, so that they can take the next step and add GPU decompression once 1.1 is available later this year.
Source: Microsoft DirectX Dev Blog
Ryan Smith - Monday, October 17, 2022
Asset compression is a layer of lossless compression that can be applied to most game assets; not just textures, but also meshes and other forms of geometry. GDeflate is just a variation on Deflate, so it can be used on any data type with easily-identified redundancies.
This does overlap with texture compression in a bit of a confusing way. Lossy texture compression does significantly reduce the size of a texture, but it's not guaranteed to make the texture as small as possible. Because of the need for random access (so that the texture units can operate on the data), texture compression operates on very small groups of pixels (typically 16 at a time), so there's additional redundancy that can be removed with lossless compression. This is in stark contrast to things like JPEG (which most people are far more familiar with), where the algorithm is far more advanced and doesn't generate the same kind of redundancies.
So why asset compression? Because at the end of the day, games are getting too big for their own good. Even with texture compression, games can be massive. Compressing them not only saves valuable SSD space, but with DirectStorage, it will speed up load times as well because the assets can be sent to the GPU in compressed form, allowing those transfers to complete more quickly.
willis936 - Monday, October 24, 2022
Textures with lossy compression are already going to be stored at entropy. Lossless compression will only help models and instructions. Models do compress well and need to be loaded into VRAM, so there is savings to be had.
Ryan Smith - Tuesday, October 25, 2022
"Textures with lossy compression are already going to be stored at entropy."
That would generally be true for a more advanced compression format such as JPEG. But that's not the case for fixed ratio texture compression. These formats are very simple so that they can be readily used by texture units - particularly, that they allow random access - and the results cached with respect to page size boundaries. There's really no effort to remove redundancy or otherwise do entropy coding; it's closer to clever tricks to degrade an image and interpolate back a reasonable approximation.
For reference, after grabbing a 32KB colormap that's been DXTC1 compressed and placed in a DDS container, that ZIPs (DEFLATE) down to 12KB.
If you really want to go down the rabbit hole, look up BCPack, which is the lossless compression algorithm that the Xbox Series X uses to store its textures. As well as the Oodle suite of tools, which have been in use for several years now.
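The redundancy Ryan describes can be sketched with synthetic data. This is not real BC1/DXT1 encoder output, just invented 8-byte blocks (two 16-bit color endpoints plus 32 bits of 2-bit indices, the BC1 block layout) whose endpoints vary only slightly from block to block, which is the kind of structure a lossless pass like Deflate exploits:

```python
import struct
import zlib

# Synthetic stand-in for BC1/DXT1 texture data. Fixed-ratio texture compression
# never does entropy coding, so neighboring blocks with near-identical endpoints
# leave redundancy that a lossless pass (Deflate here, GDeflate in practice)
# can squeeze out.
blocks = bytearray()
for i in range(4096):                      # 4096 blocks * 8 bytes = 32 KiB, like the colormap example
    color0 = 0xF800 | (i % 8)              # endpoints cycle through a handful of values
    color1 = 0x07E0 | (i % 8)
    indices = 0b00011011_00011011_00011011_00011011
    blocks += struct.pack("<HHI", color0, color1, indices)

packed = zlib.compress(bytes(blocks), level=9)
print(f"BC1-like data: {len(blocks)} bytes, after Deflate: {len(packed)} bytes")
assert len(packed) < len(blocks) // 2      # the lossless pass removes the leftover redundancy
```

Real textures are messier than this toy pattern, so the gains are smaller – on the order of the 32KB-to-12KB result quoted above rather than the near-total collapse a synthetic cycle produces.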
Exotica - Monday, October 17, 2022
Will CXL benefit game load times similar to direct storage? What other areas/types of programs are bottlenecked by the PCIe interface... with the CPU being the middleman? Are there other aspects of the system that can be accelerated by cutting out the CPU and letting devices interact directly over the PCIe bus?