How to use color management, Part 1


When we capture a color photograph on a sensor and then display it on a computer monitor, we see the same colors that appeared in the natural scene. Or do we? When, as a child, I had my first camera with color film, I assumed the film somehow magically captured the colors of the scene. I knew nothing about the chemistry of film emulsions or its limitations. Modern digital cameras are capable of capturing a wide color range, but what happens when we send the image to a monitor screen or printer? How the computer software handles color as the image is passed from one device to another is the subject of color management. Fortunately for the photographer, most of this is handled automatically by the software, as long as you take a few proper precautions. However, there are some things that we as photographers need to understand to maintain color accuracy as we process images and transfer them from one device to another.

What colors can we see in nature?

The first issue is what colors we can see in nature. Color is the subjective appreciation of a narrow band of wavelengths of light known as the visible spectrum. The illustration shows the range of visible wavelengths (approximately 400 nm – 700 nm) and the colors associated with them.


The colors of the visible spectrum can each be produced by a single wavelength of light. However, these are not all the colors we can see. For example, magenta does not appear on the visible spectrum, and white, although not technically a color, also does not appear. Colors such as these are produced by multiple wavelengths of light simultaneously striking our retinas; magenta, for instance, is produced by red plus blue light. In fact, pure isolated wavelengths are rarely encountered in nature. Most colors we see are produced by multiple wavelengths reflecting from the same surface.

In the 1920s, experiments were done to determine the full range of colors appreciable by the human eye. A defined range of colors is referred to as a color space. The CIE 1931 color space (named for the Commission Internationale de l'Éclairage, which defined it in 1931) remains the standard color space representing all visible colors. It is often graphically represented as shown in the illustration below.


The graph is the result of a somewhat complex mathematical transformation that collapses a three-dimensional color space into two dimensions. Explaining the details would take up too much space in this summary; see the Wikipedia article on the CIE 1931 color space if you are curious and mathematically inclined. The simple explanation is that the plot is horseshoe-shaped. The outer edge of the horseshoe represents the pure colors of the visible spectrum, with the corresponding wavelengths noted on the illustration. Everything inside the horseshoe represents colors that are mixtures of individual wavelengths, including magenta along the lower edge of the figure and white, which appears as a small white spot near the center where all the colors intersect. In theory, the colors become more intense as you approach the outer edge of the horseshoe, but you will not be able to fully appreciate this on your monitor, because no computer monitor currently in existence can show the full range of visible colors, especially the most intense ones.
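If you are curious about the flavor of that transformation without the full math, the two axes of the diagram are chromaticity coordinates: each one is simply a tristimulus value's share of the total, with the overall brightness discarded. A minimal Python sketch (the white-point numbers below are the standard D65 values) looks like this:

```python
# Project CIE XYZ tristimulus values onto the 2-D chromaticity diagram.
def xyz_to_xy(X, Y, Z):
    total = X + Y + Z
    # Brightness is discarded; only the proportions of X and Y remain.
    return X / total, Y / total

# The D65 white point, XYZ ≈ (0.9505, 1.0, 1.089), lands near the middle
# of the horseshoe, at the small white spot mentioned above.
print(xyz_to_xy(0.9505, 1.0, 1.089))  # ≈ (0.3127, 0.3290)
```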

What colors can I see on my computer monitor?

Digital images define color (among other things) by assigning a set of numbers to each color. In the early days of digital imaging, cameras and computers used 8-bit processing, meaning there were 2^8, or 256, definable levels of intensity for any particular color. Absence of color (black) was defined as 0, while the brightest color was assigned 255. This is not a lot of levels, so it did not make sense to assign numbers to colors which the monitor or printer could not display; doing so would leave an even smaller number of levels for the usable color intensities. Color spaces smaller than CIE 1931 were therefore designed to assign the largest number (255 with 8-bit processing) to the brightest colors which could actually be displayed on the monitors of the day. One of the earliest, designed in 1996 by Hewlett-Packard and Microsoft, was called sRGB (for standard RGB). It is shown on the illustration below.
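If you want to verify the arithmetic, the number of available levels is just a power of two; the same calculation applies to the higher bit depths discussed further below. A quick sketch:

```python
# Number of intensity levels a channel can hold at a given bit depth.
for bits in (8, 14, 16):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels (0 to {levels - 1})")

# 8-bit: 256 levels (0 to 255)
# 14-bit: 16384 levels (0 to 16383)
# 16-bit: 65536 levels (0 to 65535)
```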

sRGB assigns numbers only to colors included within the black triangle. It excludes the brightest greens and blues, as well as some of the bright reds. Most consumer-grade computer monitors (including mine) are still only capable of displaying the sRGB color space, although this may change in the future. sRGB remains the default color space for internet browsers, since not all browsers are able to work in larger color spaces. The good news for nature photographers is that most colors we encounter in nature are contained within the sRGB color space. Some exceptions might be very bright flowers or certain very colorful bird feathers. Manmade pigments and light sources are a bigger challenge to reproduce in sRGB.
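For the mathematically inclined, here is one rough way to test whether a given chromaticity falls inside the sRGB triangle: convert it to linear sRGB with the standard D65 conversion matrix and look for negative channel values, which no monitor can display. This is only an illustrative sketch, not part of any color-managed workflow, and the example chromaticities are approximate:

```python
# Standard D65 XYZ-to-linear-sRGB matrix, applied to a chromaticity (x, y).
def xy_to_linear_srgb(x, y):
    # Lift the chromaticity to XYZ at unit luminance (Y = 1).
    X, Y, Z = x / y, 1.0, (1.0 - x - y) / y
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return r, g, b

def in_srgb_gamut(x, y):
    # Dimming a color can fix values above 1, but never a negative channel,
    # so any negative value means the chromaticity lies outside sRGB.
    return all(channel >= 0.0 for channel in xy_to_linear_srgb(x, y))

print(in_srgb_gamut(0.30, 0.45))    # True  - an ordinary, unsaturated green
print(in_srgb_gamut(0.074, 0.834))  # False - a spectral green near 520 nm
```

The spectral greens and blues that fail this test are exactly the region the black triangle leaves out.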

Because later computer monitors and printer ink sets could display a larger range of colors than sRGB, and because of the color management features in Adobe Photoshop, Adobe developed a larger color space in 1998 called, not surprisingly, Adobe RGB (1998). Adobe RGB assigns numbers to more intense greens and blues than sRGB does. High-end (i.e. more expensive) computer monitors are able to show colors in the Adobe RGB color space, and newer Apple iPhone displays use the wide-gamut Display P3 color space, which is comparable in size to Adobe RGB. More recently, Kodak (remember them?) developed a large-gamut color space for photographers called ProPhoto RGB, which expands the defined colors to a much higher proportion of the CIE 1931 color space. See the illustration below.

Due to the mathematics of its definition, ProPhoto RGB strangely assigns numbers to colors which do not even exist; its green and blue primaries lie outside the range of visible colors. However, most cameras and image processing programs now use a greater bit depth than 8 bits. For example, my Canon EOS 7D captures 14-bit RAW files, meaning there are 2^14, or 16,384, available levels to assign. Assigning some of those numbers to colors you rarely if ever encounter therefore no longer poses a problem, since there are plenty of levels to go around.

What happens if you change color spaces?

The main problem with color spaces occurs when you need to change from one to another. This commonly happens when you process an image in one color space but then send it to another device, such as a printer, which uses a different one. Fortunately, most image processing software can convert between color spaces, but it may require you to make a decision. The main issue in converting is how to handle colors which are out of gamut (meaning not defined) in the new color space. For example, you might process an image in ProPhoto RGB but plan to post it on the internet, where it will likely be viewed in sRGB. If the image contains bright colors, some of them may have values in ProPhoto RGB that are not defined in sRGB, so you would convert the image to sRGB before posting. Most of the images on this website were processed in ProPhoto RGB and then converted to sRGB before posting.

When you make this conversion, the software will ask you to choose a rendering intent, which is basically how you want it to handle colors that fall outside the gamut of the target color space. Several choices are usually offered, but only two are relevant to photographers. The most commonly used is Relative Colorimetric: any color too bright for the target color space is assigned the highest number available in that space, while colors already within the target space keep their equivalent values. This may cost some textural detail in the brightest parts of the image, but it ensures that the rest of the image retains its native color intensity. The other choice is Perceptual: the brightest out-of-gamut color is assigned the highest number in the new color space, and all the in-gamut colors are reduced in intensity along with it so that textural detail is not lost. This preserves detail in the brightest areas at the cost of some overall color saturation. Relative Colorimetric is the most commonly used rendering intent, but some images look better with Perceptual.

If you skip the conversion and display a ProPhoto RGB image on a device that assumes a smaller color space such as sRGB, the image will appear significantly undersaturated (i.e. duller). The reason is that the brightest color definable in ProPhoto RGB is far more intense than the brightest in sRGB, so any given real-world color is assigned a relatively smaller number in ProPhoto RGB than it would be in sRGB. An sRGB device interprets those smaller numbers as less saturated sRGB colors, making the image look dull.
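To give a concrete idea of what such a conversion looks like outside of Photoshop or Lightroom, here is a rough sketch using Python and Pillow's ImageCms module (a wrapper around LittleCMS). The file names and the ProPhoto ICC profile path are placeholders rather than real files, and the sketch assumes a recent version of Pillow, an 8-bit RGB source image, and a ProPhoto RGB profile of your own:

```python
from PIL import Image, ImageCms

# Placeholder names: an 8-bit RGB image edited in ProPhoto RGB and its ICC profile file.
img = Image.open("flower_prophoto.tif")
prophoto = ImageCms.getOpenProfile("ProPhotoRGB.icm")
srgb = ImageCms.createProfile("sRGB")  # Pillow's built-in sRGB profile

# Relative Colorimetric clips out-of-gamut colors to the edge of sRGB and
# leaves in-gamut colors alone; swap in Intent.PERCEPTUAL to compress the
# whole image into the smaller gamut instead.
converted = ImageCms.profileToProfile(
    img,
    prophoto,
    srgb,
    renderingIntent=ImageCms.Intent.RELATIVE_COLORIMETRIC,
    outputMode="RGB",
)
converted.save("flower_srgb.jpg", quality=95)
```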

Back to Photography Techniques