
Representation of 'Colour' in Digital Photography

 

Introduction

 

Users of digital cameras and of image editing software, such as Adobe Photoshop, soon encounter the terms 'colour space', '8-bit colour', or '16-bit colour'. The following notes discuss how these terms can be relevant to obtaining the best results when processing images from digital cameras. If you want to skip the technical stuff and go straight to my conclusions, click here.

 

Digital cameras and computer output devices, such as monitors and printers, process information as streams of numbers.  In order for the human perception of 'colour' to be represented in a consistent way on a range of different devices, it has been necessary to devise a method for defining this perception mathematically.  A practical system has to do this within the constraints imposed by digital systems, which can only represent a limited range of numerical values.  For example, an '8-bit' representation can only have 256 different values (0 - 255).
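
As a concrete illustration of this constraint, here is a minimal sketch in Python (with an arbitrary, made-up intensity value) showing how a continuously varying quantity has to be rounded to one of 256 levels in an 8-bit representation, or one of 65,536 levels in a 16-bit one:

    # Illustrative sketch: rounding a normalised intensity (0.0 - 1.0) to the
    # limited set of values available in 8-bit and 16-bit representations.
    def quantise(value, bits):
        levels = 2 ** bits - 1          # 255 for 8-bit, 65535 for 16-bit
        return round(value * levels)

    intensity = 0.300456                # an arbitrary, hypothetical intensity
    print(quantise(intensity, 8))       # 77    - one of only 256 possible codes
    print(quantise(intensity, 16))      # 19690 - one of 65,536 possible codes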

 

In the discussion that follows, I assume a basic knowledge of the visible spectrum of light and the attribution of 'colour' to different 'wavelengths' within this spectrum.  The Wikipedia article on 'colour' can provide additional background if needed - see http://en.wikipedia.org/wiki/Color

 

When light enters the eye, either by reflection from an object or from a light source, such as an electric lamp or the sun, it is usually made up of a mixture of different wavelengths.  The different wavelengths stimulate receptors in the eye that, together, lead to a sensation of 'colour' in the brain. Thus, 'colour' is a quality constructed by the brain and not a property of objects as such.

 

During the 19th century, several theories of colour vision were developed, based on the idea that the human eye has three types of receptor (called cone cells) for short (S), middle (M), and long (L) wavelengths of light. Thus, in principle, it was recognised that three parameters could be used to describe a colour sensation. The human eye can distinguish about 10 million different colours, and each one of these can be represented in terms of three numbers, which are called the 'tristimulus values' of the colour.
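
To connect this with the '8-bit' figure mentioned earlier: if each of the three numbers describing a colour is stored as an 8-bit value, there are 256 × 256 × 256 possible combinations - comfortably more than the roughly 10 million colours the eye can distinguish. A trivial check in Python:

    # Number of distinct codes available when each of the three numbers
    # describing a colour is stored as an 8-bit value (0 - 255).
    print(256 ** 3)        # 16777216, i.e. about 16.8 million combinations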

 

Two factors make it possible to put the human perception of colour onto a mathematical basis, which has subsequently allowed the development of digital systems for encoding colours and representing them in a consistent way on devices such as computer monitors and printers.  These are, firstly, that different human observers agree fairly well on which mixtures of colours match one another and, secondly, that when different colours are mixed they combine in a consistent way that can be described as mathematically 'linear'.  This latter factor is known as 'Grassmann's law' and is described more fully below.

 

It is, perhaps, worth pointing out that other living species have different types and numbers of colour receptors in their eyes; some have no cone cells, while birds, for example, have four types of cones, including very short (VS) wavelength receptors. These differences indicate that the perception of colour by such creatures must be very different from ours.

 

The Determination of 'Tristimulus Values'

 

The idea of a consistent representation of 'colour' by means of three mathematical quantities - the 'tristimulus values' - was developed throughout the early years of the 20th century.  The International Commission on Illumination - also known as the CIE from its French title, the Commission Internationale de l'Eclairage - published a mathematical representation in 1931, which has formed the basis of most subsequent developments in colour theory, including digital photography.

 

The CIE representation of colour was derived from a series of experiments carried out independently in London by W. David Wright, working at Imperial College, and John Guild, at the National Physical Laboratory (NPL), in the late 1920s.  Their two sets of data were combined into the specification of the CIE 1931 colorimetric system. 

 

What follows is a very brief overview, which omits many experimental details.  A thorough critique of the experiments can be found at the following link: http://www.cis.rit.edu/mcsl/research/broadbent/CIE1931_RGB.pdf

 

In general terms, the method used was to show an illuminated screen to a number of human observers.  The screen was divided down the middle: on one side a test colour was projected while, on the other side, an adjustable colour was projected.  The observer could vary three controls until the adjustable colour appeared to match the test colour.  These three controls varied the brightness of three coloured beams of light, which were mixed to produce the colour on the screen.  For the CIE experiments, the three beams were set at wavelengths of 700 nm (red), 546.1 nm (green) and 435.8 nm (blue).  These three wavelengths were known as the 'primaries' in the experiments and were chosen from the monochromatic (single wavelength) light sources that were readily available at the time.  The units in which the amounts of the three primaries were expressed were fixed with reference to an equal-energy white source - one with constant radiant power in each wavelength interval - so that equal amounts of the three primaries matched that white.
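
One way to picture what the observer was doing is to describe each light by the responses it produces in the three cone types. A colour match then amounts to solving three simultaneous linear equations for the amounts of the three primaries. The sketch below (Python with NumPy) shows the idea; the cone-response numbers are made up purely for illustration and are not real data:

    # Sketch of a colour match as a 3 x 3 linear problem (illustrative numbers
    # only - not real cone-response data).
    import numpy as np

    # Hypothetical L, M and S cone responses produced by one unit of each
    # primary; columns correspond to the 700 nm, 546.1 nm and 435.8 nm beams.
    primaries = np.array([
        [0.90, 0.60, 0.02],   # L-cone response
        [0.10, 0.80, 0.05],   # M-cone response
        [0.00, 0.05, 0.95],   # S-cone response
    ])

    # Hypothetical cone responses produced by the test colour.
    test = np.array([0.45, 0.40, 0.30])

    # The observer's settings of the three brightness controls correspond to
    # the solution of:  primaries @ amounts == test
    amounts = np.linalg.solve(primaries, test)
    print(amounts)            # the amounts of red, green and blue in the match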

 

When carrying out the experiments, it was sometimes found that the observers could not match a test colour by using the three controls. When this happened, some light from one of the primaries was added to the test colour and the observer then made a match by adjusting the other two primary controls. When this was done, the amount of the primary added to the test colour was represented as a negative value in the mix.
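
Continuing the illustrative sketch above, some test lights give a solution in which one primary comes out negative. Experimentally, that is exactly the case described here: the primary is added to the test side of the screen, and its amount is recorded with a minus sign. (The numbers are again made up purely for illustration.)

    # A hypothetical test light whose match requires a negative amount of the
    # red primary, i.e. red has to be added to the test side of the screen.
    import numpy as np

    primaries = np.array([          # same illustrative matrix as above
        [0.90, 0.60, 0.02],
        [0.10, 0.80, 0.05],
        [0.00, 0.05, 0.95],
    ])

    test = np.array([0.30, 0.55, 0.60])     # a saturated blue-green test light

    amounts = np.linalg.solve(primaries, test)
    print(amounts)                  # the first (red) amount comes out negative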

 

After carrying out a large number of tests with several different observers, the amounts of each primary needed to match each test colour were tabulated.  Fortunately, from studies of colour perception carried out in the 19th century by H. Grassmann, it was already known that colour matching is additive: if one mixture of the adjustable beams is matched to a first colour (colour 1) and another mixture is matched to a second (colour 2), then the sum of the two mixtures matches the mixture of colours 1 and 2.  This is important, since it means that any colour can be uniquely represented as a set of tristimulus values or, mathematically, that human colour matching is 'linear'.
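
The additivity can be checked directly in the same illustrative sketch: the amounts that match a mixture of two lights are simply the sums of the amounts that match each light on its own.

    # Grassmann additivity in the illustrative example: the match for a
    # mixture of two lights equals the sum of the two separate matches.
    import numpy as np

    primaries = np.array([          # same illustrative matrix as above
        [0.90, 0.60, 0.02],
        [0.10, 0.80, 0.05],
        [0.00, 0.05, 0.95],
    ])

    light_1 = np.array([0.45, 0.40, 0.30])  # hypothetical cone responses
    light_2 = np.array([0.20, 0.35, 0.10])

    match_1 = np.linalg.solve(primaries, light_1)
    match_2 = np.linalg.solve(primaries, light_2)
    match_mix = np.linalg.solve(primaries, light_1 + light_2)

    print(np.allclose(match_1 + match_2, match_mix))   # True - matching is linear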

 

The results are called the 'colour matching functions' for the experiment and are plotted in Figure 1, below:

 

Figure 1 - The CIE 1931 RGB Colour Matching Functions.
The colour matching functions are the amounts of the three primaries needed to match a monochromatic test colour at the wavelength shown on the horizontal scale.

 

The property of linearity ensures that consistent results are obtained even if the 'primary' beams have wavelengths or intensities different from those used in the CIE experiments, and it also allows the use of test colours that are not monochromatic. It is this property that makes it possible to design colour reproduction systems such as television and digital photography.
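
For a test colour that is not monochromatic, linearity means its spectrum can be treated as a sum of monochromatic components, each weighted by the colour matching functions. A sketch of that calculation is given below; it assumes the light's spectral power distribution and the tabulated colour matching functions have already been loaded as arrays sampled at the same, evenly spaced wavelengths (the numbers in the example are made up, not the real CIE tables):

    # Sketch: tristimulus values of a non-monochromatic light, obtained by
    # weighting its spectral power distribution (SPD) by each colour matching
    # function and summing over wavelength.
    import numpy as np

    def tristimulus(wavelengths, spd, r_bar, g_bar, b_bar):
        d_lambda = wavelengths[1] - wavelengths[0]   # assumes even spacing (nm)
        R = np.sum(spd * r_bar) * d_lambda
        G = np.sum(spd * g_bar) * d_lambda
        B = np.sum(spd * b_bar) * d_lambda
        return R, G, B

    # Illustrative use with made-up numbers:
    wl  = np.array([400.0, 500.0, 600.0, 700.0])
    spd = np.array([0.20, 0.90, 0.70, 0.30])
    rb  = np.array([0.00, -0.05, 0.30, 0.01])
    gb  = np.array([0.00, 0.20, 0.06, 0.00])
    bb  = np.array([0.15, 0.02, 0.00, 0.00])
    print(tristimulus(wl, spd, rb, gb, bb))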

 

Continue....