The Image Sensor Part I

While early cameras used photographic film to capture an image, today’s digital cameras rely on optoelectronic sensor chips. The functional principles of image sensors are actually more complex than you might assume, because image sensors have to perform several processes, and their technical layout largely determines the quality and character of the images taken. The image below shows a conventional CMOS image sensor from a Canon DSLR camera.

Canon CMOS sensor

Taking a look at the data specifications of new digital cameras, one piece of information is very often listed at the top – the image resolution. Modern digital cameras typically provide image resolutions between 10 and 20 million pixels. However, there has always been a race for higher image resolutions among camera manufacturers, and the pixel count has almost become a figurehead for new camera models. For this reason, even some smartphones have been equipped with cameras able to take images with resolutions of up to 41 million pixels. One could assume that the image resolution is the only quality feature of a digital camera. There is no denying that a decent image resolution is essential to achieve professional results, but it would be misleading to trust the resolution as the sole indicator of a camera’s quality. After reading this article, you will understand how a sensor with too many pixels can actually result in lower performance. Furthermore, this article will explain what other quality features of an image sensor exist.

Sensor formats

One differentiating factor of camera sensors is the size of the light-sensitive area. The chart below compares common image sensor formats. Please note that this is not a complete chart of all existing sensor formats but rather a selection of the most common ones. Sensors with a format smaller than 2/3″ are normally used in smartphones, while larger formats are typically used in digital cameras. Some special cameras, like the ones used for IMAX films, even use a 70 mm format (70 mm x 48.5 mm), which is almost twice the size of a medium format sensor. The full frame format plays a special role in photography and is therefore highlighted in green. The dimensions of a full frame sensor correspond to a single frame of the previously used 35 mm photographic film.

Camera Sensor Formats

When producers of camera sensors increase the size of an image sensor, they have to choose one of the following options: They can use the enlarged sensor area to place more pixels on the sensor and achieve a higher resolution. Alternatively, they can keep the number of pixels and increase the individual pixel size to improve the quality of each pixel. Option no. 1 (higher packing density) may be necessary to increase image sharpness; however, the smaller the pixels get, the more noise they produce. Option no. 2 (lower packing density) generally improves the low-light performance of a sensor (less noise) and also provides a higher dynamic range. Also, as sensor size increases, the depth of field decreases for a given aperture and framing, which softens the background even more – an effect that is often desired.
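
To get a feeling for this trade-off, the following rough sketch calculates the approximate pixel pitch for a few sensor format and resolution combinations. It assumes square pixels that fill the whole sensor area and ignores the space taken up by readout circuitry; the sensor dimensions, especially the smartphone one, are approximations.

```python
import math

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Approximate pixel pitch in micrometres for a sensor filled with square pixels."""
    pitch_mm = math.sqrt((width_mm * height_mm) / (megapixels * 1e6))
    return pitch_mm * 1000

print(round(pixel_pitch_um(36.0, 24.0, 12), 1))   # full frame, 12 MP            -> ~8.5 um
print(round(pixel_pitch_um(36.0, 24.0, 48), 1))   # full frame, 48 MP            -> ~4.2 um
print(round(pixel_pitch_um(6.2, 4.6, 12), 1))     # small smartphone sensor, 12 MP -> ~1.5 um
```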

Sensor architecture

Modern image sensors are highly developed semiconductors with a complex circuit structure, also referred to as the sensor architecture. This section will illustrate the basic architecture and functionality of a 2D image sensor. In this context, two different types of sensor chips will be introduced and compared with each other. Subsequently, it will be explained how a single pixel can actually see light and which challenges are involved in this process.

A typical 2D image sensor consists of a light-sensitive (photosensitive) area as well as an area for the power supply and readout electronics. The readout electronics primarily serve to process the signal each pixel has accumulated during the exposure. Complex circuits allow each pixel to be addressed individually and its signal to be transported to a central processing unit, where all signals are then combined into a digital picture. The chart below shows the drivers and readout units of an image sensor. Each green square represents an individual pixel, and all pixels are arranged in a rectangular array.

Image Sensor Architecture

There are mainly two sensor technologies to be distinguished. CCD sensors (charge-coupled devices) were already in use in the 1970s but have since been largely replaced by CMOS sensors (complementary metal-oxide semiconductor). Modern digital cameras and smartphones almost exclusively use CMOS technology for their sensors, but for some special applications CCD technology is still in use. The main difference between these two technologies is the way signals are processed. You can read some more details about these two types of sensors here:

Charge-Coupled Device: A CCD sensor consists of a grid-like array of photosensitive cells (photodiodes) that react to incoming light. However, these cells are not lined up directly adjacent to one another, because every (vertical) column of pixels is attached to a (vertical) shift register running alongside it. The vertical shift register acts like a transport channel that shifts pixel signals downwards to another (horizontal) shift register, which transports the signals to the readout unit. When the sensor is exposed to incoming light, the photosensitive cells accumulate electric charge depending on the intensity of the incoming light. When the sensor is read out after the photo has been taken, each photo cell is emptied by loading its charge into the adjacent shift register. Then, the shift register moves all charges downwards by one position. As a result, the lowest charge of each column is dropped onto the horizontal shift register, which can now shift it to the readout unit. The readout unit loads each charge into a capacitor and amplifier element that transforms the charge into a voltage. An AD (analog/digital) converter then transforms the intensity of this (analog) voltage into a digital signal – a binary number that can be read by a processor. This readout process is repeated step by step until the last charge of the row has been converted. Once the horizontal shift register is empty, all vertical shift registers shift down by another position and the horizontal transport channel is read out anew. This process continues until the last charge has been shifted all the way to the readout unit.
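
The readout sequence described above can be summarised in a few lines of code. The following Python sketch is purely illustrative – a real CCD moves analog charge packets in hardware – and the array size, pixel values and the simple rounding used as a stand-in for A/D conversion are made up.

```python
def read_out_ccd(pixels):
    """Simulate interline-transfer CCD readout of a 2D array of accumulated charges."""
    rows = len(pixels)
    cols = len(pixels[0])
    # Step 1: every photo cell empties its charge into the adjacent
    # vertical shift register (one register column per pixel column).
    vertical_registers = [list(column) for column in zip(*pixels)]
    digital_image = [[0] * cols for _ in range(rows)]

    for row in range(rows - 1, -1, -1):              # the bottom row leaves first
        # Step 2: all vertical registers shift down by one position and drop
        # their lowest charge onto the horizontal shift register.
        horizontal_register = [vertical_registers[c].pop() for c in range(cols)]
        # Step 3: the horizontal register shifts its charges one by one
        # to the readout unit (amplifier + A/D converter).
        for col in range(cols):
            charge = horizontal_register[col]
            digital_image[row][col] = int(charge)    # stand-in for A/D conversion
    return digital_image

# Example: a tiny 3 x 4 "exposure"
frame = [[10.2, 3.5, 7.1, 0.4],
         [ 5.0, 9.9, 1.2, 8.8],
         [ 2.3, 6.6, 4.4, 7.7]]
print(read_out_ccd(frame))
```
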
Although the basic principle of reading out a CCD sensor is always the same, there are some variations in practice. For example, a full frame (FF) CCD sensor does not use separate shift registers to transport charges but instead includes the vertical shift function in the pixels themselves. The photo cells can directly move their charges to their neighbouring cells row by row, and a serial, horizontal shift register continues to shift the charges to the readout unit as described above. The term full frame should not be confused with the identically named sensor format (see above), as a full frame CCD does not necessarily have the same dimensions as a sensor with full frame format. The name full frame CCD refers to the full use of the available space for the light-sensitive area, because all photo cells can be placed in close proximity. When a CCD sensor is read out with a technique called frame transfer (FT), in a first step the whole arrangement of charges is shifted into a frame storage region of the same dimensions as the main pixel array but not sensitive to light. In a second step the storage array is read out in the typical way. This allows a very quick emptying of the light-sensitive sensor area, so that a new picture can already be taken while the readout is still in progress. This is because shift registers can be operated much faster than the serial readout process, where the readout unit can slow down all shift registers. For this reason, frame transfer CCD sensors are often used in digital video cameras. The interline transfer (IT) CCD is the one described at the beginning of this section. Lastly, the frame interline transfer (FIT) CCD can be described as a synthesis of the frame transfer (FT) and interline transfer (IT) CCD designs. The following chart summarizes the readout procedures described. (L = light sensitive pixel, T = shift register, A = amplifier)

CCD Readout Types

Complementary Metal-Oxide Semiconductor: The pixels of a CMOS sensor are arranged in a rectangular grid similar to those of a CCD array; however, this type of sensor does not include shift registers. The special characteristic of a CMOS sensor is that every pixel unit not only contains a photodiode but also includes its own readout circuit. This allows amplification and charge-to-voltage conversion of the accumulated signal directly within the pixel. Another great feature of CMOS pixels is that they can be read out on demand, because every row of pixels is connected to an individual selection line. Once a selection line is activated by the surrounding electronics (row decoder), each CMOS pixel of that row applies its previously converted voltage to the readout line that leads directly to the analog processor. As a single selection line always activates an entire row with a large number of pixels, the analog processor uses a multiplexer unit to convert this parallel signal into a serial stream of data. As multiplexers are generally much faster than shift registers, this principle provides an advantage over CCD sensors when it comes to readout speed. After multiplexing, the signals can be processed by an AD (analog/digital) converter as usual. The block diagram below illustrates this process.

CMOS Readout Circuit
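
For comparison with the CCD procedure above, here is an equally simplified sketch of row-wise CMOS readout: the charge-to-voltage conversion has already happened inside each pixel, a row decoder activates one selection line at a time, and a multiplexer serialises the column values for the A/D converter. Array size, voltages and the scaling used as a stand-in for A/D conversion are made up.

```python
def read_out_cmos(pixel_voltages):
    """Simulate row-wise readout of an array of per-pixel voltages."""
    rows = len(pixel_voltages)
    cols = len(pixel_voltages[0])
    digital_image = []
    for row in range(rows):                       # row decoder activates one selection line
        column_bus = pixel_voltages[row]          # all pixels of this row drive their
                                                  # readout lines in parallel
        serial_stream = (column_bus[col] for col in range(cols))      # multiplexer
        digital_image.append([int(v * 255) for v in serial_stream])   # A/D conversion
    return digital_image

voltages = [[0.10, 0.84, 0.33],
            [0.57, 0.02, 0.91]]
print(read_out_cmos(voltages))   # -> [[25, 214, 84], [145, 5, 232]]
```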

As already described, the main differences between CCD sensors and CMOS sensors lie in their individual pixel structure and their readout principles. Each type of photo sensor has its strengths and weaknesses. CCD sensors have their pixels filled with larger photodiodes, which results in a higher light sensitivity – but they are more prone to errors, because all charges have to travel long distances through shift registers and may be altered by surrounding charges. Especially bright spots can result in smear effects on CCD sensors. It should also be mentioned that the readout of a CCD sensor is serial only, and readout speed is therefore limited by the speed of the shift registers. By contrast, CMOS sensors benefit from a faster readout process because of parallel signal processing and the use of multiplexers. In return, they are more likely to be affected by readout noise, because their photodiodes are smaller and their signals are amplified more strongly. Also, smaller photodiodes result in a lower dynamic range, which makes CMOS sensors prone to either blown highlights or crushed shadows. However, the advantages that finally brought about the breakthrough of CMOS sensors in today’s cameras are their lower power consumption and lower production costs.

Photodiode structure

To understand the conversion of light into an electrical signal, you have to go further into detail and learn something about an individual pixel. A pixel consists of supporting structures such as the chip substrate and supply electronics (mostly ultra-thin metal wires) as well as a photosensitive area, which in most cases is smaller than the total pixel area. The light-sensitive area usually consists of a photodiode. A photodiode is a semiconductor element with a very special characteristic that enables the detection of light. The material that diodes are typically made of is silicon, as this material has some very useful properties.

Silicon

A single silicon atom consists of a nucleus of fourteen positively charged protons and fourteen (electrically neutral) neutrons, surrounded by fourteen negatively charged electrons. Each atom has several electron shells – depending on the number of electrons – where the inner shells always contain fewer electrons and the outer shells have a larger capacity for electrons. The formula to calculate the maximum electron capacity of a particular shell is 2n², where n is the number of the electron shell, beginning with 1 at the innermost shell. This results in a 2, 8, 4 layout for silicon; the outermost shell – called the valence shell – could theoretically contain up to 18 electrons, but the silicon atom simply does not have more than four electrons left. The silicon atom itself has no electric charge, as it contains an equal number of positive and negative charges. Of the fourteen electrons, only the four outer electrons – called valence electrons – are available for chemical bonding. The remaining ten electrons do not form bonds to other atoms because they are bound more tightly to the nucleus. For this reason, each silicon atom can bond to four other silicon atoms. Such a chemical bond consists of two electrons, as one electron from each of the silicon atoms involved contributes to the bond. When electrons are shared equally by the atoms involved, this type of bond is called a covalent bond.
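
A tiny sketch of this simple 2n² filling rule, applied to silicon’s fourteen electrons (it ignores the finer subshell structure, which is not needed here):

```python
def shell_configuration(electrons):
    """Fill electron shells in order, each shell n holding at most 2*n**2 electrons."""
    shells = []
    n = 1
    while electrons > 0:
        capacity = 2 * n ** 2            # maximum electrons in shell n
        shells.append(min(capacity, electrons))
        electrons -= shells[-1]
        n += 1
    return shells

print(shell_configuration(14))   # -> [2, 8, 4] for silicon
```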

This structure of a silicon atom with four valence electrons gives silicon a very useful property: It can form a crystal involving all of the bonding electrons with none left over. This formation is very hard to break and creates an extremely stable material. The resulting silicon crystal has no electric charge, as it consists of atoms that are themselves electrically neutral (having the same number of electrons as protons). The chart below depicts the structure of a single silicon atom and the formation of a silicon crystal lattice.

Silicon

Doping

Using a procedure called doping, the silicon’s pure crystal structure is intentionally altered with impurity atoms to modify its electrical characteristics. Negatively doped (n-type) silicon features an additional electron in between the atomic bonds. The name n-type results from the negative charge of electrons. Positively doped (p-type) silicon features a missing electron between two silicon atoms, called a hole. The following descriptions explain in detail what the doping procedure does to the silicon structure.

  • Substituting with Phosphorus Just like the silicon atom, a single phosphorus atom has no electric charge by itself, as its number of protons and electrons is balanced. Phosphorus, however, has fifteen electrons and fifteen protons. The doping procedure replaces some of the silicon atoms in the crystal with phosphorus atoms, called dopants. These dopant phosphorus atoms also create four covalent bonds with their neighbours in the same way as a silicon atom does. There is, however, a fifth valence electron in phosphorus that cannot be used for covalent bonding. This excess electron now plays an important role. It is so weakly attached to the phosphorus atom that at normal temperatures the thermal energy within the crystal is sufficient to free it from the phosphorus atom. This results in an untethered electron that is free to travel around the crystal. When an atom with more than four bonding electrons is used to dope silicon, the resulting crystal material is called n-type silicon, as the free electrons available from the dopant atoms each have a negative electric charge.
  • Charge As described, the fifth electron typically breaks away from the underlying phosphorus atom due to the thermal energy of the crystal. This electron is then free to move. However, the phosphorus atom that is substituted into the silicon crystal is fixed in place, because it is covalently bonded. With only fourteen electrons (the free electron is now drifting around the crystal) but fifteen protons, the stationary phosphorus atom now exhibits a positive electric charge. Nevertheless, the sum of all electric charges in the doped material is zero, so the crystal as a whole has no net electric charge: the number of free electrons in the material exactly matches the number of positive charges of the phosphorus atoms in the crystal.
  • Substituting with Boron A boron atom has five protons and five electrons. With this property it can also be used for doping a silicon crystal where a silicon atom is replaced by a boron atom. As a boron atom has only three electrons available in its valence shell, only three covalent bonds can be created between a boron atom and the silicon atoms in a crystal. At normal temperature, there is sufficient thermal energy to push a nearby electron into this vacancy. If this is the case, the atom that supplied the electron to the boron atom now has an electron vacancy that can be filled by an electron from another atom in the crystal. In this way, the vacancy (also called hole) can move from atom to atom. This can be viewed as positive charges moving through the material as moving holes. When an atom with fewer bonding electrons than silicon is used to dope silicon, the resulting material is called p-type silicon as these types of dopant atoms generate mobile holes in the crystal with each hole having a positive electric charge.
  • Charge At room temperature, a boron atom in the crystal has been forced to accept one more electron than the number of protons in its nucleus. For this reason, the boron atom has acquired a negative electric charge. In total, however, the piece of p-type silicon does not exhibit any charge to the outside, as the sum of all electric charges in the doped material is zero. The figure below shows both types of doped silicon including the different impurity atoms as described.

Doped Silicon

In an n-type phosphorus doped silicon crystal the free electrons will diffuse throughout the crystal in a purely random fashion until there is an equal distribution of free electrons throughout the volume of the n-type silicon crystal. In a p-type boron doped silicon crystal the corresponding holes will become equally distributed throughout the p-type crystal’s volume.

The p-n junction

Doping one side of a piece of silicon with boron (a p-type dopant) and the other side with phosphorus (an n-type dopant) forms a p-n junction. The n-type material has large numbers of free electrons that can move through the material. The number of positively charged phosphorus atoms (called positive ions), which are not free to move, exactly balance the number and charge of these negative free electrons. Similarly, for the p-type material, there are large numbers of free holes (positively charged) that can move through the material. Their number and positive charge is exactly counter-balanced by the number of negatively charged boron atoms (called negative ions). Now imagine that the n-type and the p-type materials are linked to each other.

PN Junction

Due to the doping of the silicon crystal, there are large numbers of mobile electrons on the n-type side, but very few mobile electrons on the p-type side. Because of the random thermal motion of these charge carriers, electrons from the n-type side start to diffuse into the p-type side. Similarly, due to the doping of the silicon, there are large numbers of mobile holes on the p-type side, but very few mobile holes on the n-type side. Holes in the p-type side, therefore, start to diffuse across into the n-type side.

Now, if the electrons and holes had no electric charge, this diffusion process would eventually result in the electrons and holes being uniformly distributed throughout the entire volume. They do, however, have an electric charge and this causes something interesting to happen! As the electrons in the n-type material diffuse across towards the p-type side, they leave behind positively charged phosphorus ions, near the interface between the n and p regions. Similarly, the positive holes in the p-type region diffuse towards the n-type side and leave behind negatively charged boron ions.

PN Junction

These fixed ions set up an electric field right at the junction between the n-type and p-type material. This electric field points from the positively charged ions in the n-type material to the negatively charged ions in the p-type material. The free electrons and holes are influenced by this “built-in” electric field with the electrons being attracted towards the positive phosphorus ions and the holes being attracted towards the negative boron ions. Thus, the “built-in” electric field causes some of the electrons and holes to flow in the opposite direction to the flow caused by diffusion.

PN Junction

These opposing flows eventually reach a stable equilibrium, with the number of electrons flowing due to diffusion exactly balancing the number of electrons flowing back due to the electric field. The net flow of electrons across the junction is zero, and the net flow of holes across the junction is also zero. This raises the question: “If there is no net current flowing, of what use is it?” Although there is no net flow of current across the junction, an electric field has been established at the junction, and it is this electric field that is the basis of the operation of diodes, transistors and solar cells.
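
To put a very rough number on this equilibrium, the built-in potential of a silicon p-n junction can be estimated with the standard textbook relation V_bi = (kT/q) · ln(N_A·N_D / n_i²). This is only a side note; the doping levels and the intrinsic carrier concentration used below are assumed, typical values, not taken from this article.

```python
import math

THERMAL_VOLTAGE = 0.0259   # kT/q at room temperature, in volts
N_I = 1.0e10               # intrinsic carrier concentration of silicon, cm^-3 (approximate)

def built_in_potential(acceptors_cm3, donors_cm3):
    """Built-in potential of a silicon p-n junction (ideal textbook estimate)."""
    return THERMAL_VOLTAGE * math.log(acceptors_cm3 * donors_cm3 / N_I ** 2)

print(built_in_potential(1e16, 1e16))   # roughly 0.7 V for these assumed doping levels
```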

Depletion Region 

Within the depletion region, there are very few mobile electrons and holes. It is “depleted” of mobile charges, leaving only the fixed charges associated with the dopant atoms. As a result, the depletion region is highly resistive and now behaves as if it were pure crystalline silicon: as a nearly perfect insulator. The resistance of the depletion region can be modified by “adding” an external electric field to the “built-in” electric field. If the “added” electric field is in the same direction as the “built-in” electric field, the depletion region’s resistance will become greater. If the “added” electric field is opposite in direction to the “built-in” electric field, the depletion region’s resistance will become smaller. The depletion region can therefore be considered to operate as a voltage-controlled resistor.

Forward Bias
If a positive voltage is applied to the p-type side and a negative voltage to the n-type side, current can flow (depending upon the magnitude of the applied voltage). This configuration is called “Forward Biased” (see the figure below). At the p-n junction, the “built-in” electric field and the applied electric field are in opposite directions. When these two fields add, the resultant field at the junction is smaller in magnitude than the original “built-in” electric field. This results in a thinner, less resistive depletion region. If the applied voltage is large enough, the depletion region’s resistance becomes negligible. In silicon, this occurs at about 0.6 volts forward bias. From 0 to 0.6 volts, there is still considerable resistance due to the depletion region. Above 0.6 volts, the depletion region’s resistance is very small and current flows virtually unimpeded.

Forward Bias

Reverse Bias
If a negative voltage is applied to the p-type side and a positive voltage to the n-type side, no (or exceptionally small) current flows. This configuration is called “Reverse Biased”.

Reverse Bias

At the p-n junction, the “built-in” electric field and the applied electric field are in the same direction. When these two fields add, the resultant larger electric field is in the same direction as the “built-in” electric field, and this creates a thicker, more resistive depletion region. If the applied voltage becomes larger, the depletion region becomes thicker and more resistive. In reality, some current will still flow through this resistance, but the resistance is so high that the current may be considered to be zero. As the applied reverse bias voltage becomes larger, the current flow will saturate at a constant but very small value. The bias modes described are the key elements for the function of electronic diodes, and the concept of doped silicon is also applied in most other electronic devices, such as transistors and solar cells.
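
The behaviour of both bias modes can be roughly illustrated with the ideal (Shockley) diode equation. The saturation current used below is an assumed but typical order of magnitude; real diodes additionally show series resistance and breakdown effects that this sketch ignores.

```python
import math

I_S = 1e-12    # saturation current in amperes (assumed typical value)
V_T = 0.0259   # thermal voltage at room temperature, ~26 mV

def diode_current(voltage):
    """Ideal diode current for a given applied voltage (positive = forward bias)."""
    return I_S * (math.exp(voltage / V_T) - 1.0)

for v in (-5.0, -0.5, 0.0, 0.3, 0.6, 0.7):
    print(f"{v:+.1f} V -> {diode_current(v):.3e} A")
# Reverse bias: the current saturates near -I_S, i.e. practically zero.
# Forward bias: the current stays tiny below ~0.6 V and then rises steeply.
```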

For light detection, a photodiode with p-type and n-type silicon has to be driven in reverse bias mode. The goal is to prevent a current from flowing on its own. When an incident light particle (photon) impacts in the depletion region, an electron is knocked out of its position, creating an electron-hole pair. The electron is attracted towards the n-type layer by the electric field, the hole towards the p-type layer. Both particles are then absorbed by the surrounding material. It is the flowing electron that creates a small current that can be registered by the readout circuit. Depending on the sensor design, some current-to-voltage conversion is usually applied during the readout process. More details on the readout electronics can be found in part II of this article.

There is a proportional relationship between the intensity of the incoming light (number of photons) and the current flowing in the diode. While low light only produces a small current, bright light generates higher currents. Nevertheless, the diode design as depicted has a slight disadvantage when it comes to light detection. The knocked-out electron can only create a current when it is accelerated in the depletion region. A photon knocking out an electron in either the p-type or n-type side of the diode will also create an electron-hole pair, but these will quickly recombine as there is no electric field affecting them. As the depletion region is typically much smaller than the entire semiconductor, the quantum efficiency of this regular design is quite low. To improve the light reception performance of a diode, the depletion region is typically enlarged by inserting a lightly doped (or entirely undoped) region between the n-type and p-type regions. This layer (called the intrinsic layer) is permeated by the electric field and still prevents electrons from flowing on their own. This type of diode is called a PIN diode. Another improvement in quantum efficiency can be made by simply rotating the PIN structure by 90 degrees so that photons do not hit from the side but from the top. Incoming photons, however, will now have to travel through a top layer first to reach the intrinsic layer. Therefore, the layer on top needs to be as thin as possible (~1μm). For the intrinsic layer, it is usually sufficient to have it ~4-6μm thick. The figure below shows a cross-section through a typical photodiode as it can be found in modern digital cameras.

Structure of a Photodiode
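
A back-of-the-envelope sketch of this proportionality: the photocurrent is roughly the photon arrival rate multiplied by the quantum efficiency and the elementary charge. The optical power, wavelength and quantum efficiency below are assumed illustration values only.

```python
PLANCK = 6.626e-34           # Planck constant, J*s
LIGHT_SPEED = 3.0e8          # speed of light, m/s
ELECTRON_CHARGE = 1.602e-19  # elementary charge, C

def photocurrent(optical_power_w, wavelength_m, quantum_efficiency):
    """Photocurrent in amperes for a given optical power hitting the diode."""
    photon_energy = PLANCK * LIGHT_SPEED / wavelength_m
    photon_rate = optical_power_w / photon_energy              # photons per second
    return quantum_efficiency * photon_rate * ELECTRON_CHARGE  # electrons per second * charge

for power in (1e-12, 1e-10, 1e-9):   # doubling the light doubles the current
    print(power, "W ->", photocurrent(power, 550e-9, 0.5), "A")
```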

Performance Improvements

In order to achieve good sensor performance in low-light conditions, it is an essential goal to collect as many photons as possible. Using today’s highly advanced production techniques, photodiodes are becoming increasingly sensitive and the supporting circuits are made smaller. However, it is still not possible to place photodiodes directly adjacent to one another because of the readout circuits integrated into the pixels. Especially for CMOS sensors, the readout circuits are a limiting factor and reduce the active surface of a sensor. To counteract this disadvantage, there are several approaches to increase a sensor’s performance.

  • Micro Lens As described, the surface of a sensor pixel is not completely sensitive to light but rather features a smaller zone where the photodiode is placed. The remaining part of the pixel is used for supply and readout electronics. With this design, a large proportion of the incoming light would actually illuminate the non-sensitive areas, and the pixel would be unable to convert those photons into a signal. To improve the quantum efficiency, small micro lenses are typically placed on top of the pixels to direct as many photons as possible into the active photodiode (a toy calculation after the figure below illustrates the gain). Some sensor designs even use two layers of micro lenses to minimize the loss of photons.
  • Frontside Illumination / Backside Illumination The traditional type of photodiode is designed to collect photons coming from the front side, which is why this traditional design is also called frontside illumination. In photodiodes with a frontside illumination architecture, light must travel through multiple metal and dielectric layers before reaching the actual diode. These layers can block or deflect light from reaching the photodiode, reducing performance and causing additional problems such as crosstalk. Crosstalk describes the phenomenon of incident photons being deflected by metal structures so that they accidentally land in a neighbouring photodiode. To prevent this from happening, a reflective coating (light tunnel) is built around the wiring structures so that photons are guided into the intended photodiode. Another way to avoid the problems discussed and to increase a photodiode’s light sensitivity is to flip the photodiode upside down. With this design, light is collected from the backside of the photodiode, with the metal and dielectric layers residing underneath. This photodiode design is called backside illuminated and allows light to reach the sensitive area much more easily, which results in a better quantum efficiency. Please note that backside illumination is just an optional design and not all image sensors are based on this architecture. The figure below shows the principle of a backlit photodiode.

Structure of a Photodiode
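
The gain provided by the micro lenses mentioned above can be illustrated with a toy calculation; the fill factor and micro-lens efficiency are made-up numbers chosen only to show the idea.

```python
# How much of the light falling on a pixel reaches the photodiode,
# with and without a micro lens (illustrative numbers only).
fill_factor = 0.45           # fraction of the pixel area that is photosensitive
microlens_efficiency = 0.85  # fraction of the light a lens redirects onto the diode

photons_hitting_pixel = 10_000
collected_without_lens = photons_hitting_pixel * fill_factor
collected_with_lens = photons_hitting_pixel * microlens_efficiency

print("collected without micro lens:", int(collected_without_lens))
print("collected with micro lens:   ", int(collected_with_lens))
```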

In summary, digital camera sensors are highly complex microchips that are constantly being optimized by new technologies. However, there is always a conflict when designing a camera sensor: An increasing sensor resolution will always affect the individual pixel size. The pixel size in turn has an effect on various quality features of a pixel, such as its sensitivity, dynamic range, color precision, noise behaviour and possible blooming in bright situations. If the pixel size is reduced, sensitivity, dynamic range and color precision decrease while noise and the probability of blooming increase. This relation between individual pixel size and pixel quality makes it easy to understand why extremely high resolutions should be viewed critically.
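
As a simplified illustration of this relation, the following sketch considers only photon shot noise: the collected signal scales with the pixel area, while the noise scales with the square root of the signal. The photon density is an arbitrary assumed value.

```python
import math

def snr(pixel_pitch_um, photons_per_um2=100):
    """Shot-noise-limited signal-to-noise ratio of a square pixel."""
    signal = photons_per_um2 * pixel_pitch_um ** 2   # photons collected by the pixel
    noise = math.sqrt(signal)                        # photon shot noise
    return signal / noise

for pitch in (8.5, 4.2, 1.5):
    print(f"{pitch} um pixel -> SNR ~ {snr(pitch):.0f}")
# Halving the pixel pitch quarters the collected signal and halves the SNR.
```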

Continue to Part II