Wiki:
Page name: CGT - Color Depth
2006-11-19 04:44:49
Last author: wulfman
Owner: wulfman


The color depth of a file format (see CGT - File Format) plays a role in the size of the file (see CGT - Image Size).


1 bit or 2^1 = 2 colors
2 bits or 2^2 = 4 colors
3 bits or 2^3 = 8 colors
4 bits or 2^4 = 16 colors
8 bits or 2^8 = 256 colors
16 bits or 2^16 = 65,536 colors
24 bits or 2^24 = 16,777,216 colors
32 bits or 2^32 = 4,294,967,296 colors
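The table above is simply powers of two: each added bit doubles the number of representable colors. A small sketch in Python:

```python
# Number of distinct colors representable at a given bit depth:
# each additional bit doubles the count.
def colors_at_depth(bits):
    return 2 ** bits

for bits in (1, 2, 3, 4, 8, 16, 24, 32):
    print(f"{bits:2d} bits -> {colors_at_depth(bits):,} colors")
```

Running this reproduces the table, e.g. 8 bits gives 256 colors and 24 bits gives 16,777,216.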


Common color modes for image files correspond to their data depth. Color depth has evolved with the improvement of technology, and some standards are seldom used now, but understanding why things are done a certain way sometimes requires a little history.

In the bad ol' days there were green-on-black monochrome monitors. If that made your eyes cross, you could spend another hundred dollars and get an amber-on-black CRT, but it was two colors and that was all you got. The picture was grainy at a .50 point pixel size, but that was okay because the storage technology would barely support a full-screen monochrome image. A monitor was little more than a TV set without a tuner.

The industry realized, though, that there was no reason to render text as an image when each text character would be exactly the same each time it was displayed. Rather than render each character, it was much more efficient to use a block of bits to describe which pixels were turned on or off in the block. This became hardware-encoded text: the font set for the ASCII character set was stored on a chip rather than in CPU memory, and each letter was an array of dots. Artists used their computers rarely, because the medium was terribly limited and screen drawing required that the artist set each pixel to either on or off in a special file that took advantage of the only other real difference between a computer monitor and a TV set, hardware-encoded text.

Computer software publishers attempted to break the monotony of the bland C:\ prompt user interface by including special characters that were actually custom text characters. Assembled into a pattern, these custom characters allowed simple artwork to be displayed using the same commands that already existed for the display of text. That was the beginning of the end for the monochrome monitor. Due to the open nature of the coding, young programmers with lots of time began pushing this to its limit... the need was perceived for color monitors.
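The character-cell scheme described above can be sketched in a few lines: each glyph is a small block of bits, and the display hardware expands each bit into a lit or unlit pixel. The 8x8 glyph below is a made-up illustration (real character ROMs differed by machine), but the mechanism is the same:

```python
# A hypothetical 8x8 bitmap glyph for the letter "H":
# one byte per row, one bit per pixel.
GLYPH_H = [
    0b01000010,
    0b01000010,
    0b01000010,
    0b01111110,
    0b01000010,
    0b01000010,
    0b01000010,
    0b00000000,
]

def render_glyph(rows):
    """Expand each bit into an on ('#') or off ('.') pixel."""
    return "\n".join(
        "".join("#" if row & (1 << (7 - col)) else "." for col in range(8))
        for row in rows
    )

print(render_glyph(GLYPH_H))
```

Storing 8 bytes per character instead of re-rendering the character as an image every time is exactly the efficiency win the text describes.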

At first color monitors weren't much better than monochrome, having 8 colors and 2 intensity levels (the CGA standard). They used the same basic hardware encoding for screen display, extended to include the new color depth. The same methods applied to display because the architecture was still too limited. Then the VGA standard raised the color depth to 8 bits. Personal computers began to become common and the internet sprang into being... demanding graphics that were colorful, attractive, and lightweight enough to transmit quickly over dial-up service. This was the birthplace of the .GIF and .BMP formats. .GIF was introduced by CompuServe (using the LZW compression that originated with Sperry/Unisys) in its bid to dominate the user-end ISP market. It was a good choice not only because it exploited the limits of the display modes available, but because it introduced a new concept: transparency. It assigned a single palette color (magenta, by common convention) to a new role. That color would display as having no color of its own, but would adopt whatever color happened to be behind it in the background. Web images would no longer be limited to boxes with pictures in them. In .GIF format the box became invisible, and the image seemed to stand alone on the unbroken background. Another file attribute the .GIF file incorporated was compression (see CGT - Compression).

The SVGA standard came along with the 386 processor and the introduction of "high end" 16-bit graphics cards, which allowed enough colors that graphics could be realistically shaded and toned. The limiting factor was still bandwidth: images of this type took twice as long to download because they were twice as large. Most image file standards in use today owe their format specifications to this period in computer history, and most image files used for computer graphics are 8 or 16 bit color depth. .TGA was the first attempt to standardize a 32-bit image file format, but it was actually based on a double-precision 16-bit mode and designed expressly for Truevision's Targa ("TrueColor") video boards for IBM PCs. While the color was very close to lifelike, the file size and CPU burden were extreme, and it was a format severely ahead of its time. The Targa TrueColor file never became popular with most CG users due to the constraints of the hardware it was written for. It could produce as many distinct shades and hues as the human eye is able to see, but its use was limited to film graphics and Apple systems, where it still enjoys support. Truevision has since become part of Pinnacle. With HD flat panel displays becoming more popular and computers growing well past the capacity required to handle TGA files effortlessly, this format may become more popular with PC users.




Black and white (monochrome) images have a color depth of 1 bit, expressing only a single binary difference between colors: a pixel is either white (true) or black (false). Monochrome images can be amazingly useful but seldom get the attention they deserve. While these files are very small and aren't necessarily a good mode for storing artwork or images, they have enormous value as texture files.
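The reason monochrome files are so small is that eight pixels fit in a single byte. A minimal sketch of packing one row of 1-bit pixels:

```python
# Pack a row of 1-bit pixels (0 = black, 1 = white) into bytes,
# eight pixels per byte -- this is why monochrome files stay so small.
def pack_row(pixels):
    packed = bytearray()
    for i in range(0, len(pixels), 8):
        byte = 0
        for bit, px in enumerate(pixels[i:i + 8]):
            if px:
                byte |= 1 << (7 - bit)   # most significant bit = leftmost pixel
        packed.append(byte)
    return bytes(packed)

row = [1, 0, 1, 0, 1, 0, 1, 0]           # alternating white/black pixels
print(pack_row(row).hex())               # one byte holds all eight pixels
```

An uncompressed 640x480 monochrome image is therefore only 38,400 bytes, versus over 900 KB at 24-bit depth.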

Grayscale can be any depth from 2 to 16 bits, though 8 bit is the most common. 256 shades of gray will store a black and white photograph with very reasonable accuracy. 8 bit grayscale can also be used to store textures, height maps, and shadow or transparency masks. Greater than 16 bit grayscale is possible (and used) but exceeds the limit of the human eye to detect discrete differences in shade.
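Converting a color image to 8-bit grayscale is a weighted average of the three channels. The sketch below uses the common Rec. 601 luma weights; this is one standard choice among several, not the only one:

```python
# Convert an RGB triple to an 8-bit gray value using the common
# Rec. 601 luma weights (green dominates because the eye is most
# sensitive to it).
def to_gray(r, g, b):
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(to_gray(255, 255, 255))  # pure white stays at the top of the range
print(to_gray(255, 0, 0))      # pure red maps to a fairly dark gray
```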

Palette Optimized Color uses a list of color values. There is a default list that displays basic "cartoony" graphics pretty well and makes files very fast to transmit as web graphics. There is an additional default color palette for GUI displays on Windows systems, which differs from the web-optimized palette. In addition, the palette itself can be custom-defined in the graphics file header. The image can still only use 256 colors, but the author can pick them from any possible RGB hex values. This makes skin tones much more realistic in low color depth images, but at some sacrifice to the overall appearance of the image. Often several color values which are close to each other are averaged together to give an approximation of those colors. This improves the appearance a bit but can result in parts of the image being blotchy or having odd shading. Overall it is a strategy best used for icons, buttons, and other cartoony graphics for application UIs and web graphics where realism isn't the primary goal.
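The core operation in palette-optimized color is mapping each true-color pixel to its closest palette entry. A toy sketch, using a made-up 4-entry palette for illustration (real palettes hold up to 256 entries):

```python
# A hypothetical tiny palette; a custom palette in a real file header
# could hold 256 entries chosen from any RGB values.
PALETTE = [
    (0, 0, 0),        # black
    (255, 255, 255),  # white
    (255, 0, 0),      # red
    (255, 224, 189),  # an example skin tone an author might add
]

def nearest_index(pixel):
    """Pick the palette index with the smallest squared RGB distance."""
    return min(
        range(len(PALETTE)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(pixel, PALETTE[i])),
    )

print(nearest_index((250, 10, 5)))    # a near-red pixel snaps to the red entry
```

This is why a palette tailored to an image (e.g. one heavy on skin tones) looks better than a generic one: the nearest available entry is simply closer.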

RGB "Red, Green, Blue" uses three hex-encoded color channels to define each pixel. Very realistic color and shading is possible, though the dynamic range of the image may be a little flat. RGB images are appropriate for most art and photographs when image size can allow relatively larger files. An RGB image is designed to map directly to the display voltages of a color monitor: black is implied by zero voltage on each of the color channels, small values in each channel indicate dark shades, and the shades of color become lighter as the channel values increase. RGB images contain more information than lower color depth images and are usually larger, often many times larger, even when compression is used. Each channel uses 8 bits, and the limit for display is 16.7 million colors.
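The hex encoding mentioned above is just each 8-bit channel written as two hexadecimal digits, which is where the familiar six-digit web color codes come from:

```python
# Pack three 8-bit channels into the familiar six-digit hex form,
# and unpack it again.
def rgb_to_hex(r, g, b):
    return f"#{r:02X}{g:02X}{b:02X}"

def hex_to_rgb(code):
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

print(rgb_to_hex(255, 128, 0))   # a bright orange
print(hex_to_rgb("#FF8000"))     # back to the three channel values
```

"#000000" (zero in every channel) is black, matching the "black is implied by zero voltage" design described above.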

CMYK "Cyan, Magenta, Yellow, Black" starts out ready to print by its very design. In CMYK the image is separated into color channels which correspond not to display voltages, but to color masks for print separation. In CMYK black isn't implied by low values of each of the colors, but is explicit in the format as a separate channel.
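A naive sketch of that separation shows how the explicit black channel works: the darkness shared by all three RGB channels is pulled out as K, and the remaining color goes into C, M, and Y. (Real print workflows use calibrated color profiles; this is only the textbook formula.)

```python
# Naive RGB -> CMYK separation: K carries the shared darkness,
# C/M/Y carry what remains after K is removed.
def rgb_to_cmyk(r, g, b):
    rf, gf, bf = r / 255, g / 255, b / 255
    k = 1 - max(rf, gf, bf)
    if k == 1:                       # pure black: avoid division by zero
        return (0.0, 0.0, 0.0, 1.0)
    c = (1 - rf - k) / (1 - k)
    m = (1 - gf - k) / (1 - k)
    y = (1 - bf - k) / (1 - k)
    return (round(c, 3), round(m, 3), round(y, 3), round(k, 3))

print(rgb_to_cmyk(0, 0, 0))      # black comes out as the K channel alone
print(rgb_to_cmyk(255, 0, 0))    # red separates into magenta + yellow
```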

HDRI "High Dynamic Range Image" is a format developed for realistic CG film effects and uses floating-point values to precisely define the color of each pixel. HDRI formats include an exceptional range of color, transparency, and shading. HDRI images are demanding on the computer, and most CRT displays won't take full advantage of the format, since it was never intended for the limits of CRT monitors or TV. High-definition flat panel sets are designed to use HDRI formats.
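Because HDR pixel values are floating point, they can exceed the displayable range, and showing them on an ordinary monitor requires squeezing them back down. A minimal sketch using the simple Reinhard curve, one common tone-mapping choice among many:

```python
# Simple Reinhard tone mapping: x / (1 + x) compresses unbounded
# floating-point HDR brightness into the displayable 0..1 range.
def tone_map(value):
    return value / (1.0 + value)

for hdr in (0.5, 1.0, 4.0, 100.0):
    print(f"HDR {hdr:6.1f} -> display {tone_map(hdr):.3f}")
```

Very bright values (a value of 100 might be direct sunlight) approach but never reach 1.0, so detail in highlights is compressed rather than clipped.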


Elftown - Wiki, forums, community and friendship.