Image Quality Explained

by Ga'ash Soffer on November 7, 1998 2:59 PM EST
The latest 3D accelerators support "Anisotropic filtering," "Trilinear filtering," "Single pass Multitexturing," etc. What do all these names really mean? How do they affect image quality? Why do people say trilinear filtering is so much better than bilinear filtering when the screen shots look virtually identical? All of this and more will be answered in this article.
NOTE: For the image quality examples, a 24 or 32bit desktop color depth is helpful. (It's hard to see the difference between 24bit and 16bit color if your desktop is running at 16bit!)

Color Depth

Perhaps one of the most noticeable improvements over previous hardware architectures is the jump from 256 color to 16bit color. With 16bit color, a full 65,536 (64k) colors are displayable on the screen. This improvement brought newer games to life: it eliminated sharply contrasting colors and allowed for cool features such as colored lighting, truly smooth color transitions, and more. I won't even bother showing the difference between 256 color and 16bit color; I'm sure you all know the difference. But how about 24bit color? The latest hardware accelerators support 24bit color output. Is there really a big difference between that and 16bit color? Check out these snapshots:

16bit Color

24bit Color

Obviously, there isn't much of a difference between these two snapshots. (Where did I get these ugly snapshots? They're from my homebrewed software rendering engine.)

Why 24bit color?

Isn't 16bit color enough? I mean, there are 64k colors available, why do we need any more? The problem isn't really the number of colors; it is WHICH colors are available. The standard format for a 16bit color is 5 bits Red, 6 bits Green, and 5 bits Blue. This means that there are only 2^5 = 32 shades of Red available, 2^6 = 64 shades of Green available, and 2^5 = 32 shades of Blue available. The problem with this is that if you want to create a gradient of red covering the whole 800x600 game screen, it will look pretty ugly, because 600/32 = ~19 rows of pixels will be the same shade before the gradient steps to the next one. This will be painfully noticeable. (Grandmaster B elaborates on this a bit)
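To make the 5-6-5 split concrete, here is a minimal sketch in C of how an 8bit-per-channel color gets squeezed into a 16bit value. (The pack565 name is my own invention, not from any particular API.)

    #include <stdio.h>

    typedef unsigned short uint16;
    typedef unsigned char  uint8;

    /* Pack an 8bit-per-channel color into RGB565: Red and Blue each
     * lose their 3 low bits, Green loses its 2 low bits. */
    uint16 pack565(uint8 r, uint8 g, uint8 b)
    {
        return (uint16)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }

    int main(void)
    {
        /* Only 32 distinct reds survive: 0..255 collapses to 0..31. */
        printf("red=100 -> shade %d\n", pack565(100, 0, 0) >> 11); /* 12 */
        printf("red=103 -> shade %d\n", pack565(103, 0, 0) >> 11); /* 12, same! */
        printf("red=104 -> shade %d\n", pack565(104, 0, 0) >> 11); /* 13 */
        return 0;
    }

Notice how reds 100 and 103 collapse into the same on-screen shade; that collapse is exactly what produces the banding described above.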

With 24bit color, the colors are split up into 8bits Red, 8bits Green, and 8bits Blue. This gives us 2^8, or 256 shades of each color. This is much more visually pleasing than only 32 or 64 shades of each color. A gradient of red on our 800x600 screen will only have ~2 rows of the same shade; barely noticeable.
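The banding arithmetic is easy to check. This throwaway computation, with the numbers taken straight from the paragraphs above, prints the approximate band height for each format:

    #include <stdio.h>

    int main(void)
    {
        const double rows = 600.0;  /* height of the vertical gradient */

        /* 16bit: only 32 red shades to spread across 600 rows. */
        printf("16bit: ~%.0f rows per band\n", rows / 32);   /* ~19 */

        /* 24bit: 256 red shades to spread across the same 600 rows. */
        printf("24bit: ~%.0f rows per band\n", rows / 256);  /* ~2 */
        return 0;
    }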

If 24bit color allows for such smoother color transitions, how come it isn't visible in the above snapshots? The answer is that it is, just on a much smaller scale.

Close-up comparison: 16bit vs. 24bit color

As you can see from this close-up (the coarsest banding is actually in the dark brown area of the 16bit image, not the part circled; blame the JPEG compression for that), the 16bit image has harsh transitions between colors, while the 24bit image is much, much smoother. Does it really make a difference if you can only tell it is 16bit color by zooming in? Not really; however, wherever a game has some sort of gradient or gradual fade (lighting is an example), 24bit color is going to look significantly better than 16bit color. If you are running your desktop in 16bit color, you will notice that the pinkish clouds in the background probably do not look as smooth as intended, when, in fact, on a 24bit color screen they look virtually flawless.

The Deal with 32bit Color

What is 32bit color? Does 32bit color give us 10, 12, and 10 bits of shades? Actually, 32bit color gives the exact same image quality as 24bit color. The extra byte is generally labeled A (for Alpha) and tells how transparent the pixel is. Note that the video card does not use this A value automatically; however, programmers can take advantage of this extra storage per pixel to do some neat effects.
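Here is a rough sketch of the usual 32bit layout, with the spare top byte carrying alpha. (The PACK_ARGB macro is hypothetical, just to show where the bits go.)

    #include <stdio.h>

    typedef unsigned int uint32;

    /* Alpha in the top byte, then the same 8-8-8 Red/Green/Blue as 24bit. */
    #define PACK_ARGB(a, r, g, b) \
        (((uint32)(a) << 24) | ((uint32)(r) << 16) | ((uint32)(g) << 8) | (uint32)(b))

    int main(void)
    {
        uint32 opaque = PACK_ARGB(255, 200, 100, 50); /* fully opaque   */
        uint32 ghost  = PACK_ARGB( 64, 200, 100, 50); /* same color,    */
                                                      /* ~25% opaque    */
        printf("opaque pixel: 0x%08X\n", opaque);     /* 0xFFC86432 */
        printf("ghost pixel:  0x%08X\n", ghost);      /* 0x40C86432 */
        return 0;
    }

The color channels are identical in both pixels; only the programmer-controlled alpha byte differs, which is why 32bit and 24bit color look the same on screen.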

Why do we need 32bit color if it is the same thing?

One of the major reasons 32bit color was "invented" is that certain video architectures (especially 128bit architectures) can deal much more easily with a power-of-two number of bits per pixel than with an odd size like 24. I am guessing that the RIVA 128 (and other high bandwidth 128bit architectures) moves 128 bits at a time, and dealing with "fractional pixels" is not very pretty: 128 bits' worth of 24bit pixels gives the RIVA 5 whole pixels plus only the Red component of a 6th pixel to process.
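The alignment argument is easy to see with a quick computation, assuming (as guessed above) that 128 bits move per transfer:

    #include <stdio.h>

    int main(void)
    {
        const int bus_bits = 128;

        /* 32bit pixels divide the transfer evenly: 4 pixels, nothing left. */
        printf("32bit: %d pixels, %d leftover bits\n",
               bus_bits / 32, bus_bits % 32);

        /* 24bit pixels do not: 5 pixels, with 8 bits (one color channel)
         * of a 6th pixel dangling into the next transfer. */
        printf("24bit: %d pixels, %d leftover bits\n",
               bus_bits / 24, bus_bits % 24);
        return 0;
    }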

Read on to find out more about texture mapping.
