
Image sensor

Depending on the image sensor type, the basic working principle may differ. However, all image sensors have the same function - to convert the light flux passing through the lens into an electric signal carrying information on the brightness of the recorded image. Contrary to common belief, the image sensor type greatly affects the output image quality.


Fig. 1. CMOS image sensor used in APTI-24C2-36W camera


The most commonly used image sensor types are CCD and CMOS (Fig. 2a and 2b). In CCTV cameras, the latter is more common due to its design and performance. CCD image sensors are often used in analogue PAL cameras, which are gradually being replaced by other camera types. The basic working principle and specifications of both image sensor types are presented below.


Fig. 2a. CCD image sensor


Fig. 2b. CMOS image sensor


CCD image sensor (Charge-Coupled Device).


Generally speaking, the basic working principle of a CCD image sensor involves collecting electric charge in specific areas of the sensor, referred to as pixels. Photons (light) incident on an individual pixel free electrons (Fig. 3). A single pixel can be viewed as a container for the newly created electrons. The number of electrons is proportional to the luminous intensity and the exposure (Fig. 4).


Fig. 3. CCD matrix with individual pixels (b) and incident photons (a)


a - photon

b - pixel

Fig. 4. Individual pixel with electrons (b) freed by incident photons (a)


a - photon

b - electron
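The proportionality described above - charge growing with luminous intensity and exposure, up to the capacity of the pixel "container" - can be sketched as a toy model. The function name and the quantum-efficiency and full-well values below are illustrative assumptions, not properties of any real sensor:

```python
# Toy model of a single CCD pixel collecting photo-electrons.
# quantum_efficiency and full_well_capacity are illustrative values.

def collected_electrons(photon_flux, exposure_time, quantum_efficiency=0.5,
                        full_well_capacity=20_000):
    """Electrons accumulated by one pixel.

    photon_flux        -- incident photons per second (luminous intensity)
    exposure_time      -- seconds of exposure
    quantum_efficiency -- fraction of photons that free an electron
    full_well_capacity -- the pixel 'container' holds only so many electrons
    """
    electrons = photon_flux * exposure_time * quantum_efficiency
    return min(int(electrons), full_well_capacity)

# Doubling either the intensity or the exposure doubles the charge...
assert collected_electrons(2000, 0.02) == 2 * collected_electrons(1000, 0.02)
# ...until the pixel saturates (the container is full).
print(collected_electrons(10_000_000, 1.0))  # 20000, capped at full well
```

The `min()` against the full-well capacity is what makes an over-exposed pixel read as pure white: once the container is full, extra photons add no information.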

After each pixel has accumulated its own number of electrons, a charge map is created reflecting the image observed by the camera. The sensor matrix itself only detects luminous intensity, without colour; colour will be discussed below.


The electrons accumulated in the pixels are read out in sequence. The charges from one matrix line are shifted into the readout register and transferred, pixel by pixel, to the electronic circuits. After the first line is read, the charges in all remaining lines shift down by one, and the next line enters the readout register. The procedure is repeated until all the pixels are read (Fig. 5).


Fig. 5. Reading electrons (a) from pixels via a CCD channel (b) in sequence. All electrons are transferred to the readout registers (c), and further transferred to the electronic circuits.


a - electron

b - CCD channel

c - readout register

The charges from all the pixels are transferred to the electronic circuits, which convert each charge into a voltage proportional to the amount of light captured by that pixel. Each value keeps the coordinates of its pixel in the sensor matrix. The result is a simple representation of the image captured by the image sensor.
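The line-by-line transfer described above can be sketched as a short simulation. This is only an illustration of the readout order, with a hypothetical function name; a real CCD shifts analogue charge packets, not list elements:

```python
# Sketch of sequential CCD readout: each line shifts into the readout
# register, which is emptied pixel by pixel (cf. Fig. 5).

def ccd_readout(matrix):
    """Read a 2-D list of pixel charges in CCD fashion.

    Returns (row, col, charge) tuples, so every value keeps the
    coordinates of the pixel it came from.
    """
    rows = [row[:] for row in matrix]    # work on a copy
    values = []
    row_index = 0
    while rows:
        readout_register = rows.pop(0)   # one line shifts into the register
        for col, charge in enumerate(readout_register):
            values.append((row_index, col, charge))  # off to the electronics
        row_index += 1                   # remaining lines shift by one
    return values

frame = [[5, 7],
         [2, 9]]
print(ccd_readout(frame))
# [(0, 0, 5), (0, 1, 7), (1, 0, 2), (1, 1, 9)]
```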

Where does the colour come from? To explain colour, we have to go back to the image sensor design (Fig. 6). The image sensor is coated with RGB filters (red, green and blue), one for each pixel, arranged in a fixed pattern. Each filter only passes light of a specific colour. As a result, each pixel stores the amount of light of the colour of the filter covering it. Since each pixel has its coordinates, the luminous intensity and colour for each pixel are known. The rest is taken care of by the electronics: the graphics processor has a pre-programmed filter map in the same layout as the matrix and can reconstruct the image recorded by the sensor into its digital version.


Fig. 6. CCD with RGB filters, each of which passes light of a specific colour only.


The number of pixels with green filters is twice the number of pixels for each of the other colours. This design mimics the human eye, which is most sensitive to green.
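The filter layout with twice as many green pixels can be sketched in a few lines. The RGGB arrangement below is the common Bayer pattern (an assumption - the source does not name the exact layout), and the function name is hypothetical:

```python
# Sketch of a colour filter map with twice as many green filters as red
# or blue (the common RGGB Bayer pattern - an assumed layout).

def filter_colour(row, col):
    """Colour of the filter covering the pixel at (row, col)."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

mosaic = [[filter_colour(r, c) for c in range(4)] for r in range(4)]
for line in mosaic:
    print(' '.join(line))
# R G R G
# G B G B
# R G R G
# G B G B

greens = sum(line.count('G') for line in mosaic)
print(greens)  # 8 of 16 pixels - half the matrix is green
```

A pre-programmed map like this is what lets the processor tell which colour each stored intensity value belongs to.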


The filters have one more critical function - to protect the image sensor against infrared radiation, which is emitted by any object at a temperature above absolute zero. The image sensor is sensitive to the entire visible band and, unlike the human eye, also to infrared radiation, which can distort colour rendering and brightness.


Based on the colours of 9 pixels (a 3x3 grid), the processor determines the resultant colour and stores it in the central pixel (Fig. 7), then analyses the next 9 pixels, moving the window by one, and determines the colour of the next central pixel. The process is referred to as interpolation and makes it possible to create images closer to what the human eye actually sees.


Fig. 7. Pixels used in the interpolation process (a) and the pixel with a resultant colour (b)


a - interpolated pixels

b - resultant pixel
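The sliding 3x3 window can be sketched as follows. This toy version works on a single-channel brightness matrix to show the window mechanics; a real demosaicing algorithm averages per colour channel, and the function name is hypothetical:

```python
# Sketch of the 3x3 interpolation step: average the 9 neighbouring
# values, store the result in the central pixel, move the window by one.

def interpolate(matrix):
    h, w = len(matrix), len(matrix[0])
    out = []
    for r in range(1, h - 1):            # edge pixels lack a full 3x3 window
        row = []
        for c in range(1, w - 1):
            window = [matrix[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            row.append(sum(window) / 9)  # resultant value for the centre
        out.append(row)
    return out

frame = [[9, 9, 9],
         [9, 0, 9],
         [9, 9, 9]]
print(interpolate(frame))  # [[8.0]] - the centre blends with its neighbours
```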

The interpolation method, i.e. determining the resultant (average) colour from adjacent pixels, does not work for pixels at the edge of the sensor matrix. Given the size of the matrices installed in industrial cameras, this does not visibly affect the image. However, manufacturers of video and photo cameras, in particular high-end models, specify the number of effective pixels in addition to the total number of pixels. This is the number of pixels actually used to produce the image, excluding pixels on the edge of the matrix and other auxiliary pixels.
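The edge effect is easy to quantify: a 3x3 window cannot be centred on the outermost ring of pixels, so a W x H matrix yields at most (W-2) x (H-2) interpolated pixels. The function below is a hypothetical illustration of that arithmetic only; real effective-pixel counts also exclude other auxiliary pixels and vary by manufacturer:

```python
# Illustrative edge-pixel arithmetic: with a 1-pixel margin lost to the
# 3x3 window, a W x H matrix gives (W-2) x (H-2) interpolated pixels.

def usable_pixels(width, height, margin=1):
    return (width - 2 * margin) * (height - 2 * margin)

print(usable_pixels(1920, 1080))  # 2067604 of 2073600 photosites
```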


CMOS image sensor (Complementary Metal-Oxide-Semiconductor).


CMOS image sensors are built in the same semiconductor technology as memory and logic chips, both in design and in data readout. They offer faster processing and lower power consumption than CCD image sensors. The basic working principle is similar to the CCD sensor; however, the pixels are read individually rather than in sequence. Each pixel in the CMOS matrix has its own charge-to-voltage converter and its own address. As a result, any pixel can be read directly (Fig. 8).


Fig. 8. CMOS matrix design. Due to the address buses (a), the distance between the pixels is larger; each pixel has its own charge-to-voltage converter (b)


a - address bus

b - charge converter
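The addressed, per-pixel readout described above can be sketched as a small model. The class and attribute names are hypothetical, and the per-pixel gain stands in for the slightly different converter accuracy discussed below:

```python
# Sketch of CMOS addressing: each pixel has its own charge-to-voltage
# converter and a row/column address, so any pixel can be read directly
# instead of shifting the whole matrix. Gain values are illustrative.

class CmosPixel:
    def __init__(self, charge, gain=1.0):
        self.charge = charge
        self.gain = gain        # per-pixel converter; gains differ slightly

    def read(self):
        return self.charge * self.gain  # conversion happens in the pixel

class CmosMatrix:
    def __init__(self, charges):
        self.pixels = [[CmosPixel(q) for q in row] for row in charges]

    def read(self, row, col):
        """Random access via the address buses - no sequential shifting."""
        return self.pixels[row][col].read()

sensor = CmosMatrix([[5, 7], [2, 9]])
print(sensor.read(1, 0))  # 2.0 - read directly, without touching other pixels
```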

However, this design has its disadvantages. Because of the additional components inside the image sensor, the distance between the pixels is larger than in CCD sensors: the pixels are not as closely packed, and the sensor is larger. As a result, the matrix is less sensitive, since part of the light reaches the spaces between the photosensitive elements instead of the elements themselves. Another disadvantage is that it is impossible to manufacture several million identical photosensitive elements in which every charge converter has exactly the same accuracy. As a result, an image that should be uniform in colour may include smudges, referred to as noise. Depending on the device class and quality, the image processor can handle this problem more or less successfully.


The size of the image sensor installed in a camera is given in inches. The larger the image sensor, the more pixels it can hold and the better the image quality. The most popular sensor sizes for CCTV cameras are 1/3” and 1/4”. However, the value has little to do with the actual size of the sensor. It is a relic from the days when the image sensor in a video camera was a glass camera tube: the size referred not to the light-sensitive area but to the diameter of its protective glass tube.


For example, a 1" image sensor corresponds to the image area of a camera tube with a 1-inch outer diameter. The actual diagonal of the image sensor is approximately 2/3 of the specified size. The exact values are given in the size table.
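The "diagonal is about 2/3 of the nominal size" rule can be turned into a quick calculation (the function name is hypothetical, and the 2/3 factor is the approximation stated above, not an exact standard):

```python
# Approximate sensor diagonal from the nominal tube-era "inch" size:
# roughly 2/3 of the quoted value, with 25.4 mm per inch.

def sensor_diagonal_mm(nominal_inches):
    return nominal_inches * 25.4 * 2 / 3

for size, label in [(1.0, '1"'), (1 / 3, '1/3"'), (1 / 4, '1/4"')]:
    print(f'{label}: ~{sensor_diagonal_mm(size):.1f} mm diagonal')
# 1": ~16.9 mm diagonal
# 1/3": ~5.6 mm diagonal
# 1/4": ~4.2 mm diagonal
```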