Color depth


Color depth or colour depth, also known as bit depth, is either the number of bits used to indicate the color of a single pixel, in a bitmapped image or video framebuffer, or the number of bits used for each color component of a single pixel. For consumer video standards, such as High Efficiency Video Coding, the bit depth specifies the number of bits used for each color component. When referring to a pixel, the concept can be defined as bits per pixel, which specifies the number of bits used. When referring to a color component, it can be defined as bits per component, bits per channel, bits per color, or equivalently bits per pixel component, bits per color channel or bits per sample. Color depth is only one aspect of color representation: it expresses the precision with which colors can be specified; the other aspect is how broad a range of colors can be expressed (the gamut). Both are defined by a color encoding specification, which assigns a digital code value to a location in a color space.

Comparison

Indexed color

With relatively low color depths, the stored value is typically a number representing the index into a color map or palette. The colors available in the palette itself may be fixed by the hardware or modifiable by software. Modifiable palettes are sometimes referred to as pseudocolor palettes.
Old graphics chips, particularly those used in home computers and video game consoles, often have the ability to use a different palette per sprite or tile in order to increase the maximum number of simultaneously displayed colors, while minimizing use of then-expensive memory. For example, in the ZX Spectrum the picture is stored in a two-color format, but these two colors can be separately defined for each rectangular block of 8×8 pixels.
The palette itself has a color depth. While the best VGA systems only offered an 18-bit palette from which colors could be chosen, all color Macintosh video hardware offered a 24-bit palette. 24-bit palettes are effectively universal on recent hardware and file formats that use them.
If the color can instead be computed directly from the pixel value, it is "direct color". Palettes were rarely used for depths greater than 12 bits per pixel, as the memory consumed by the palette would exceed the memory needed for direct color on every pixel.
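The tradeoff described above can be sketched with a back-of-the-envelope calculation. The function names, the 640×480 resolution and the 3-byte palette entries below are illustrative assumptions, not taken from the text:

```python
def indexed_bytes(width, height, bits_per_pixel, palette_entry_bytes=3):
    """Memory for an indexed image: the pixel indices plus the palette itself."""
    pixel_data = (width * height * bits_per_pixel + 7) // 8
    palette = (2 ** bits_per_pixel) * palette_entry_bytes
    return pixel_data + palette

def direct_bytes(width, height, bytes_per_pixel=3):
    """Memory for direct color: every pixel stores its own full RGB value."""
    return width * height * bytes_per_pixel

# At 8 bpp the 256-entry palette is negligible next to the pixel data,
# but the palette doubles with every extra bit of depth: by 20 bpp the
# palette alone (2**20 entries) outweighs direct color for a 640x480 image.
palette_only_20bpp = (2 ** 20) * 3
assert palette_only_20bpp > direct_bytes(640, 480)
```

The exponential growth of the palette is what makes indexed color unattractive at high depths, regardless of the exact resolution chosen.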

List of common depths

1-bit color

2 colors, often black and white (direct color). Sometimes 1 meant black and 0 meant white, the inverse of modern standards. Most of the first graphics displays were of this type; the X Window System was developed for such displays, and this was assumed for a 3M computer. Used by the first Macintoshes and the Atari ST high-resolution mode. In the late 1980s there were professional displays with resolutions up to 300 dpi, but color proved more popular.

2-bit color

4 colors, usually from a selection of fixed palettes. The CGA, gray-scale early NeXTstation, color Macintoshes, Atari ST medium resolution.

3-bit color

8 colors, almost always all combinations of full-intensity red, green, and blue. Many early home computers with TV displays, including the ZX Spectrum and BBC Micro.

4-bit color

16 colors, usually from a selection of fixed palettes. Used by the EGA and by the least common denominator VGA standard at higher resolution, color Macintoshes, Atari ST low resolution, Commodore 64, Amstrad CPC.

5-bit color

32 colors from a programmable palette, used by the Original Amiga chipset.

8-bit color

256 colors, usually from a fully programmable palette. Most early color Unix workstations, VGA at low resolution, Super VGA, color Macintoshes, Atari TT, Amiga AGA chipset, Falcon030, Acorn Archimedes. Both X and Windows provided elaborate systems to try to allow each program to select its own palette, often resulting in incorrect colors in any window other than the one with focus.
Some systems placed a color cube in the palette for a direct-color system. Usually fewer levels of blue were provided than of red or green, as the normal human eye is less sensitive to the blue component than to the red or green.
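For illustration, one common split of 8 bits over such a cube (a convention assumed here, not specified in the text) gives 8 levels each to red and green and only 4 to blue, i.e. 3+3+2 bits:

```python
def pack_rgb332(r, g, b):
    """Pack 8-bit R, G, B into one byte: 3 bits red, 3 green, 2 blue.
    Blue gets the fewest levels, matching the eye's lower blue sensitivity."""
    return (r >> 5) << 5 | (g >> 5) << 2 | (b >> 6)

def unpack_rgb332(v):
    """Expand a packed byte back to approximate 8-bit components."""
    r, g, b = (v >> 5) & 0b111, (v >> 2) & 0b111, v & 0b11
    # Scale each small field back onto the 0-255 range.
    return (r * 255 // 7, g * 255 // 7, b * 255 // 3)
```

Black and white round-trip exactly, but most intermediate values land on the nearest of the 8×8×4 = 256 cube points.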

12-bit color

4096 colors, usually from a fully programmable palette. Some Silicon Graphics systems, Color NeXTstation systems, and Amiga systems in HAM mode.

High color (15/16-bit)

In high-color systems, two bytes are stored for each pixel. Most often, each component is assigned five bits, plus one unused bit; this allows 32,768 colors to be represented. An alternate assignment that gives the spare bit to the green channel allows 65,536 colors to be represented, but without transparency. These color depths are sometimes used in small devices with a color display, such as mobile phones, and are sometimes considered sufficient to display photographic images. Occasionally 4 bits per color are used, plus 4 bits for alpha, giving 4096 colors.
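The two assignments described above are the familiar 5-5-5 and 5-6-5 packings; a minimal sketch, assuming 8-bit input components:

```python
def pack_rgb555(r, g, b):
    """15-bit high color: 5 bits per component, top bit unused (32,768 colors)."""
    return (r >> 3) << 10 | (g >> 3) << 5 | (b >> 3)

def pack_rgb565(r, g, b):
    """16-bit high color: the spare bit goes to green (65,536 colors)."""
    return (r >> 3) << 11 | (g >> 2) << 5 | (b >> 3)

# White fills every available bit in each layout.
white555 = pack_rgb555(255, 255, 255)   # 0x7FFF
white565 = pack_rgb565(255, 255, 255)   # 0xFFFF
```

Giving the extra bit to green fits the eye's greater sensitivity to green, the same reasoning as the blue-starved color cubes above.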
The term "high color" has recently been used to mean color depths greater than 24 bits.

18-bit

Almost all of the least expensive LCDs provide 18-bit color (6 bits per channel) to achieve faster color transition times, and either use dithering or frame rate control to approximate 24-bit-per-pixel true color, or discard 6 bits of color information entirely. More expensive LCDs can display 24-bit color depth or greater.

True color (24-bit)

24-bit color almost always uses 8 bits each for R, G, and B. As of 2018, 24-bit color depth is used by virtually every computer and phone display and the vast majority of image storage formats. Almost all cases of 32 bits per pixel assign 24 bits to the color, with the remaining 8 used as an alpha channel or left unused.
2^24 gives 16,777,216 color variations. The human eye can discriminate up to about ten million colors, and since the gamut of a display is smaller than the range of human vision, this should cover that range with more detail than can be perceived. However, displays do not distribute colors evenly in human perception space, so humans can see the changes between some adjacent colors as color banding. Monochromatic images set all three channels to the same value, resulting in only 256 distinct colors and thus, potentially, more visible banding, as the average human eye can only distinguish about 30 shades of gray. Some software attempts to dither the gray level into the color channels to increase this; in modern software this technique is more often used for subpixel rendering, which increases the spatial resolution on LCD screens whose color components have slightly different positions.
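As a small illustration of the figures above (the 32-bit ARGB packing is a common convention, assumed here rather than taken from the text):

```python
def pack_argb(r, g, b, a=255):
    """A common 32-bit layout: 8 bits of alpha on top of 24-bit color."""
    return a << 24 | r << 16 | g << 8 | b

# 24-bit color offers 2**24 = 16,777,216 values, but a monochromatic image
# (R == G == B everywhere) can only ever use 256 of them.
grays = {pack_argb(v, v, v) for v in range(256)}
```

The 256-entry gray ramp is exactly why a grayscale gradient bands more visibly than a colorful one at the same depth.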
The DVD-Video and Blu-ray Disc standards support a bit depth of 8 bits per color in YCbCr with 4:2:0 chroma subsampling. YCbCr can be losslessly converted to RGB.
Macintosh systems refer to 24-bit color as "millions of colors". The term true color is sometimes used to mean what this article calls direct color. It is also often used to refer to all color depths greater than or equal to 24 bits.

Deep color (30-bit)

Deep color consists of a billion or more colors; 2^30 is approximately 1.073 billion. Usually this is 10 bits each of red, green, and blue. If an alpha channel of the same size is added, then each pixel takes 40 bits.
Some earlier systems placed three 10-bit channels in a 32-bit word, with 2 bits unused; the Cineon file format, for example, used this. Some SGI systems had 10-bit digital-to-analog converters for the video signal and could be set up to interpret data stored this way for display. The BMP file format defines this as one of its formats, and Microsoft calls it "HiColor".
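The three-channels-in-a-32-bit-word layout can be sketched as follows; placing red in the high bits is an assumption for illustration, since real formats differ in field order:

```python
def pack_2_10_10_10(r, g, b):
    """Three 10-bit channels in one 32-bit word, top 2 bits unused."""
    assert all(0 <= c < 1024 for c in (r, g, b))
    return r << 20 | g << 10 | b

def unpack_2_10_10_10(word):
    """Recover the three 10-bit fields from the packed word."""
    return (word >> 20) & 0x3FF, (word >> 10) & 0x3FF, word & 0x3FF
```

Each channel now has 1024 levels instead of 256, at the cost of wasting 2 of the 32 bits.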
Video cards with 10 bits per component started coming to market in the late 1990s. An early example was the Radius ThunderPower card for the Macintosh, which included extensions for QuickDraw and Adobe Photoshop plugins to support editing 30-bit images. Some vendors call their 24-bit color depth with FRC panels 30-bit panels; however, true deep color displays have 10-bit or more color depth without FRC.
The HDMI 1.3 specification defines a bit depth of 30 bits.
In that regard, Nvidia Quadro graphics cards manufactured after 2006 support 30-bit deep color, as do Pascal or later GeForce and Titan cards when paired with the Studio Driver, and some models of the Radeon HD 5900 series such as the HD 5970. The ATI FireGL V7350 graphics card supports 40- and 64-bit pixels.
The DisplayPort specification also supports color depths greater than 24 bpp in version 1.3 through "VESA Display Stream Compression, which uses a visually lossless low-latency algorithm based on predictive DPCM and YCoCg-R color space and allows increased resolutions and color depths and reduced power consumption."
At WinHEC 2008, Microsoft announced that color depths of 30 bits and 48 bits would be supported in Windows 7, along with the wide color gamut scRGB.
High Efficiency Video Coding defines the Main 10 profile, which allows for 8 or 10 bits per sample with 4:2:0 chroma subsampling. The Main 10 profile was added at the October 2012 HEVC meeting based on proposal JCTVC-K0109 which proposed that a 10-bit profile be added to HEVC for consumer applications. The proposal stated that this was to allow for improved video quality and to support the Rec. 2020 color space that will be used by UHDTV. The second version of HEVC has five profiles that allow for a bit depth of 8 bits to 16 bits per sample.
A few smartphones use 30-bit color depth, namely the OnePlus 8 Pro, the Oppo Find X2 & Find X2 Pro, the Sony Xperia 1 II and the Sharp Aquos Zero 2.

36-bit

Using 12 bits per color channel produces 36 bits, approximately 68.7 billion colors. If an alpha channel of the same size is added, then there are 48 bits per pixel.

48-bit

Using 16 bits per color channel produces 48 bits, approximately 281.5 trillion colors. If an alpha channel of the same size is added then there are 64 bits per pixel.
Image editing software such as Photoshop started using 16 bits per channel fairly early in order to reduce the quantization on intermediate results. In addition, digital cameras were able to produce 10 or 12 bits per channel in their raw data; as 16 bits is the smallest addressable unit larger than that, using it would allow the raw data to be manipulated.
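The benefit of 16-bit intermediates can be shown with a toy round trip; the function names and the particular gain values below are illustrative:

```python
def adjust_8bit(v, gain):
    """Apply a gain and re-quantize to 8 bits immediately."""
    return min(255, round(v * gain))

def adjust_16bit(v, gain):
    """The same operation, but quantizing on a 16-bit scale."""
    return min(65535, round(v * gain))

# Halve then double an 8-bit value: the half-step lost in the first
# quantization never comes back.
lost = adjust_8bit(adjust_8bit(101, 0.5), 2.0)        # ends at 100, not 101

# Promote to 16 bits first (255 -> 65535 via the factor 257), do the same
# edits, then round back down: the original value survives.
v16 = 101 * 257
kept = round(adjust_16bit(adjust_16bit(v16, 0.5), 2.0) / 257)   # back to 101
```

Chained over dozens of editing operations, these one-step losses accumulate into visible posterization, which is what the wider intermediate format prevents.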

High dynamic range and wide gamut

Some systems started using the extra bits for numbers outside the 0–1 range rather than for increasing the resolution. Numbers greater than 1 were for colors brighter than the display could show, as in high-dynamic-range imaging, for example with 2 bits for the integer portion and 10 for the fraction. The Cineon imaging system used 10-bit professional video displays with the video hardware adjusted so that a value of 95 was black and 685 was white. The amplified signal tended to reduce the lifetime of the CRT.

Linear color space and floating point

More bits also encouraged the storage of light as linear values, where the number directly corresponds to the amount of light emitted. Linear levels make calculations of light much easier. However, linear color results in disproportionately more samples near white and fewer near black, so the quality of 16-bit linear is about equal to that of 12-bit sRGB.
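The sRGB transfer function makes this concrete; the constants below are those of the standard sRGB (IEC 61966-2-1) curve:

```python
def srgb_to_linear(s):
    """Decode an sRGB-encoded value in [0, 1] to linear light."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    """Encode linear light back onto the sRGB curve."""
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# sRGB mid-gray 0.5 decodes to only about 0.214 in linear light: the
# encoding spends far more of its code values on dark tones, so a linear
# encoding needs extra bits to match sRGB's perceptual quality near black.
mid_linear = srgb_to_linear(0.5)
```

In other words, sRGB already allocates its codes roughly where the eye needs them; a linear encoding spreads them uniformly and so wastes precision near white.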
Floating point numbers can represent linear light levels by spacing the samples semi-logarithmically. Floating point representations also allow for drastically larger dynamic ranges, as well as negative values. Most systems first supported 32-bit-per-channel single precision, which far exceeded the accuracy required for most applications. In 1999, Industrial Light & Magic released the open standard image file format OpenEXR, which supports 16-bit-per-channel half-precision floating-point numbers. At values near 1.0, half-precision floating point values have only the precision of an 11-bit integer value, leading some graphics professionals to reject half precision in situations where the extended dynamic range is not needed.
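The claim about half precision near 1.0 can be checked directly with Python's struct module, which supports the IEEE 754 half-precision ("e") format:

```python
import struct

def to_half(x):
    """Round a Python float to IEEE 754 half precision and back."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Between 1.0 and 2.0 half floats are spaced 2**-10 apart, roughly the
# resolution of an 11-bit integer scale (10 stored fraction bits plus the
# implicit leading bit): a full step survives the round trip, but a
# quarter of a step rounds back down to exactly 1.0.
step = 2.0 ** -10
full_step_survives = to_half(1.0 + step) != 1.0
quarter_step_lost = to_half(1.0 + step / 4) == 1.0
```

Near 0, by contrast, the steps shrink dramatically, which is why the format still works well for dark linear-light values.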

More than three primaries

Virtually all television displays and computer displays form images by varying the strength of just three primary colors: red, green, and blue. For example, bright yellow is formed by roughly equal red and green contributions, with no blue contribution.
Additional color primaries can widen the color gamut of a display, since it is no longer limited to the shape of a triangle in the CIE 1931 color space. Recent technologies such as Texas Instruments's BrilliantColor augment the typical red, green, and blue channels with up to three other primaries: cyan, magenta and yellow. Mitsubishi and Samsung, among others, use this technology in some TV sets to extend the range of displayable colors. The Sharp Aquos line of televisions has introduced Quattron technology, which augments the usual RGB pixel components with a yellow subpixel. However, formats and media supporting these extended color primaries are extremely uncommon.
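The gamut-widening effect of a fourth primary can be sketched as a polygon-area computation in chromaticity space. The sRGB primary coordinates below are the standard Rec. 709 values; the yellow primary is purely hypothetical, chosen only to sit outside the red–green edge:

```python
def gamut_area(primaries):
    """Area of the gamut polygon in CIE 1931 (x, y) chromaticity space,
    via the shoelace formula; vertices must be listed in hull order."""
    total = 0.0
    for i, (x1, y1) in enumerate(primaries):
        x2, y2 = primaries[(i + 1) % len(primaries)]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2

# Rec. 709 / sRGB primary chromaticities: red, green, blue.
rgb = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]

# Adding a fourth primary outside the red-green edge grows the polygon.
# These yellow coordinates are illustrative, not from any real display.
rgby = [(0.64, 0.33), (0.46, 0.52), (0.30, 0.60), (0.15, 0.06)]
```

A triangle is the largest gamut three physical primaries can span, so any real widening beyond it requires either more primaries or, for storage, the imaginary primaries discussed below.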
For storing and working on images, it is possible to use "imaginary" primary colors that are not physically realizable, so that the triangle encloses a much larger gamut. Whether more than three physical primaries makes a visible difference to the human eye remains unproven, since human vision is primarily trichromatic, though tetrachromats exist.