Autofocus Systems Part II

  • TTL Phase Detection

In January 1985, the Minolta Maxxum 7000 was introduced with the first autofocus detection module integrated behind the photo-taking lens. Because of this layout, this type of measurement is called Through The Lens (TTL). Since the method relies on a secondary image being registered by a dedicated sensor unit, it is also referred to as Through The Lens Secondary Image Registration Phase Detection Autofocus. TTL Phase Detection still plays a significant role in digital SLR photography today, which is why this article places a clear emphasis on this passive autofocus system.

Claim: As of February 2016, this subchapter on phase detection autofocus is one of the most detailed technical descriptions currently available on the internet. The creation of this article involved several weeks of research, including the review of scientific publications and patent registrations, contacting professionals, the translation of Japanese websites on phase detection autofocus, and the disassembly of DSLR cameras.

General Principle

Phase detection is a technique where the primary image (the one that is recorded by the image sensor) is projected onto a specialized autofocus sensor unit located behind the primary image. Unfortunately, as the image sensor is not transparent, it is not possible to place this autofocus sensor unit directly in line behind the image sensor. The solution to this problem uses a simple yet clever trick to bypass the image sensor, which will be explained later. Nevertheless, in the interest of clarity and to allow a linear representation of the concept, it shall be assumed for now that the camera’s image sensor is fully transparent.

The general principle is to install two optical systems that produce two individual images of the same object but from different perspectives. Once the object changes its distance from the optical system, the two images also change their position. An increasing distance between the object and the optical system results in the two images shifting towards each other. If the object is approaching, the two images move away from each other. For the analysis of both image positions, two screens are placed so that they can detect every possible situation. The figure below shows this principle.

Phase Detection Principle

The phase detection principle originates from the rangefinder described earlier. The substantial difference between the two systems is that TTL phase detection is performed inside the camera body using light from the photo-taking lens, whereas the rangefinder had a separate optical system with additional openings in the camera body and therefore was looking at the object from slightly different angles. TTL phase detection is considered to be more accurate as it includes exactly the optical system that produces the final image on the sensor, and analyzes exactly the primary image responsible for the image formation.

Two Halves of a Lens

Although there is only one photo-taking lens in a camera, TTL phase detection still requires recording two images from two different perspectives. Interestingly, this can be achieved by dividing the light rays that pass through the photo-taking lens into two halves. This concept can be visualized easily by showing a lens that projects the image of a point-like source onto a screen. Two filters are placed in front of the lens so that the upper half of the light rays is colored differently than the lower half. The separate colors help to distinguish the rays that pass through each half of the lens and to trace each part of the final image back to the half of the lens it came from. The figure below shows this idea.

Color Rays

If the point source is in focus, the image is also formed as a point. If the point source is out of focus, the image is formed as a disc where the arrangement of colors indicates the direction in which proper focus can be achieved.

The Principle

Phase detection adopts this principle and uses the light coming through opposite halves of the photo-taking lens. Each optical system receives light only from its half of the camera lens and forms a secondary image on an individual, one-dimensional CCD sensor. When the subject is in focus, the two secondary images are formed on the »in-focus-spots« of the linear detectors. These in-focus-spots are programmed into the system’s CPU and serve as a reference for the focus condition. The system detects a focusing error when the secondary images leave the in-focus-spots of their respective CCD sensors, shifting towards or away from each other. The figure below shows a simplified version of this basic principle, assuming that the camera is looking at a single bright spot on a black background, producing one sharp peak on each of the CCD sensors.

Phase Detection Principle

The distance between the secondary images indicates the direction in which the camera lens has to be moved in order to achieve proper focus. It should be noted that the two images not only change their relative position, but also become blurrier the further they move away from their in-focus-spots. Fortunately, as the blur occurs identically on both CCD sensors, the CPU is usually still able to calculate the required focus adjustment. Still, blur can be problematic for certain low-contrast scenes and other situations.
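To make this comparison more tangible, here is a minimal Python sketch of the evaluation for the single-bright-spot case. The arrays, reference indices and the function name are purely illustrative and not taken from any camera firmware; whether a positive result corresponds to front- or back-focus depends on the geometry of the real unit.

```python
import numpy as np

def focus_error_from_peaks(ccd1, ccd2, ref1, ref2):
    """Estimate the focus error from two single-peak CCD signals.

    ccd1, ccd2: 1-D intensity arrays from the two line sensors.
    ref1, ref2: pixel indices of the programmed in-focus-spots.
    Returns 0 when in focus; the sign of a non-zero result tells the
    system in which direction the focusing lens must be driven.
    """
    peak1 = int(np.argmax(ccd1))  # position of the bright spot on CCD I
    peak2 = int(np.argmax(ccd2))  # position of the bright spot on CCD II
    # Compare the measured peak separation with the reference separation
    # of the in-focus-spots.
    return (peak2 - peak1) - (ref2 - ref1)

# Toy example: the two peaks have drifted apart, so the scene is defocused.
ccd_a = np.zeros(64); ccd_a[20] = 1.0
ccd_b = np.zeros(64); ccd_b[44] = 1.0
print(focus_error_from_peaks(ccd_a, ccd_b, ref1=24, ref2=40))  # prints 8
```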

The secondary image formation lenses, also referred to as separator lenses, are located behind little masks that are designed to prevent stray light from reaching the CCD sensors. These masks also limit the portions of light so that not an entire half of the photo-taking lens is used for the secondary image formation but only two bundles of light forming two »windows« in the photo-taking lens. Each separator lens can therefore »see« through its own dedicated window.

Off-Axis Situations

In reality, photographic scenes do not consist only of point-like sources but rather of widespread objects comprising a large number of points. For that reason, a real phase detection analysis must be capable of analyzing widespread objects that include both on-axis points and off-axis points. In this case, the CCD sensors do not register single peaks but rather a specific waveform. The waveform CCD I produces is identical to the one produced by CCD II; however, the two may be phase-shifted relative to each other, hence the name phase detection or phase comparison. The signals registered by the CCDs can also be compared to signatures that must be brought to coincidence in the reference position.
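As an illustration of this coincidence search, the following Python sketch finds the shift between two such waveforms by brute force. The signal shapes, search window and scoring are invented for demonstration; they merely mimic the behaviour of the comparator and do not reproduce any manufacturer’s algorithm.

```python
import numpy as np

def phase_shift(signal_1, signal_2, max_shift=20):
    """Return the displacement (in pixels) that best brings signal_2
    into coincidence with signal_1, searched by brute force."""
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        candidate = np.roll(signal_2, s)             # trial displacement
        score = float(np.dot(signal_1, candidate))   # simple overlap measure
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Toy waveforms: the same signature on both CCDs, out of phase by 6 pixels.
x = np.arange(128)
signature = np.exp(-((x - 50) ** 2) / 30.0) + 0.5 * np.exp(-((x - 70) ** 2) / 10.0)
ccd_i = signature
ccd_ii = np.roll(signature, 6)
print(phase_shift(ccd_i, ccd_ii))  # prints -6: shift CCD II back by 6 px for coincidence
```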

Unfortunately, a phase detection unit as depicted above would only be able to calculate the phase difference for a pinpoint object located on the optical axis or very close to it. With objects off the optical axis, a phase detection unit of this simplified type would lose its function. The cone of light from an off-axis object runs in a different direction, does not cover both separator lenses symmetrically, and therefore causes the intensity on one CCD sensor to decrease or even drop to zero. The figure below shows both cases.

Off Axis Situations

It can be seen that there is an asymmetry between the two windows that increases with the distance of the object from the optical axis. In case I, the top window is already narrowed by the top edge of the lens, causing the intensity of the signal registered by the lower CCD to be lower than that of the upper CCD. In case II, the asymmetry is so strong that the lower separator lens is entirely cut off from light.

The Condenser Lens

The solution to this problem is to install a condenser lens (also called a field lens) that conjugates each separator lens with its specific window in the photo-taking lens. To put it simply, the condenser lens bends the cones of light from off-axis objects so that they always illuminate both separator lenses. In addition, the implementation of a condenser lens increases the flexibility in the choice of the focal lengths of the separator lenses. Another mask is placed in front of the condenser lens to block those portions of light that are not used for phase comparison. The illustration below demonstrates this key role of the condenser lens.

Condenser Lens Introduction

For primary image points off the optical axis, both secondary images registered on the CCD sensors are shifted together, in the direction opposite to the displacement of the primary image point. Once the phase detection system notices two identical signatures on the CCD sensors that are shifted in the same direction (as opposed to the opposing out-of-focus shifts described earlier), it can associate these signatures with an individual off-axis subject. The general principle of phase detection also applies to these off-axis subjects. This means that the in-focus-spots on the CCDs still serve as a reference to indicate the ideal focus position. As long as both off-axis signals are shifted in the same direction and are exactly the same distance from the reference point, the system will recognize this as the ideal focus position for the respective off-axis point.

On closer inspection, the illustration reveals that there is still a small asymmetry between both windows, even with a condenser lens installed. However, this slight asymmetry isn’t problematic as both separator lenses are fully illuminated. In general, the size of the windows in the photo-taking lens is only limited by the separator masks and the lens aperture while their position depends on the position of the primary image point.

The following illustration is a combined view of the image formation process while looking at a widespread scene, including a central object point (white) and two off-axis points (green and cyan). The figure only includes those rays relevant for phase detection and not those blocked by the separator masks. It should therefore be noted that the actual photo is taken with the full beam of light covering the entire lens surface, limited only by the size of the aperture.

Widespread Object

It can be observed in the enlarged view that the condenser lens bends the outside rays into the required direction. This is what allows the autofocus unit to analyze widespread scenes. It also ensures a constant signal intensity along the active area of the CCD sensors. Finally, it slightly reduces the angle at which light rays hit the CCD sensors, and therefore reduces blurring in out-of-focus situations.

Depending on the scene to be photographed, TTL phase detection autofocus suffers from some general weaknesses. Firstly, phase detection can only be applied when the scene’s brightness is above a certain minimum level. Furthermore, a pair of linear CCD sensors arranged vertically is only sensitive to horizontal contrast edges and vice versa. If a subject contains a contrast edge that is oriented in the same direction as the pair of CCD sensors, the recorded signals do not include unique features and the comparator is unable to bring the secondary images to coincidence. Consequently, it is also impossible to determine focus if no contrast edge is available at all, as in a clear blue sky. It is almost equally impossible for a phase detection system to focus on highly repetitive surfaces such as fine checkerboard patterns. The illustration below shows two very typical problems of linear phase detection sensors.

Difficult Situations

In order to avoid focusing failures as indicated in situations I and II, most focus points combine two pairs of linear CCD strips arranged at a right angle (cross-type AF points). The installation of cross-type AF points in turn requires four separator lenses with individual masks and therefore is not an option for all AF points due to space limitations. It is typically the AF points in the central area of the scene that have a cross-type design, whereas the outer AF points have a linear design. With these cross-type autofocus points, a vertical contrast edge can easily be registered by the horizontal pair of CCD strips. Consequently, a cross-type autofocus point compares four secondary images formed by four bundles of light within the photo-taking lens. The following diagram compares both AF point types with their associated separator lenses from a top view.

AF Point Types

Accuracy Improvements

One important aspect of phase detection is that the accuracy of focus detection is related to the distance between the analyzed light rays. It was shown in the illustrations above that the phase detection unit analyzes light coming through opposite halves of the photo-taking lens. In fact, these zones are much smaller than half of the lens due to the separator masks. If the phase detection analysis were made on light zones lying too close together, the phase shifts on the CCDs would be too small to measure reliably, reducing accuracy. Instead, the distance between the opposite zones should be as large as possible to achieve noticeable phase shifts on the CCDs. The system can be compared to a rangefinder where each pair of CCD detectors forms a baseline for triangulation and the accuracy increases as the distance between both CCDs increases. On the other hand, the lens is required to provide an aperture that accommodates these zones. Unfortunately, not all photographic lenses have a maximum aperture larger than f5.6, which is why phase detection units typically are designed so that most of the AF points analyze light rays from two or four bundles of light within the f5.6 zone.
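The dependence on this baseline can be summarized with a simple small-angle estimate. The notation below (B, δ, L, m, s) is introduced here purely for illustration and does not appear in the figures; real AF modules contain further optics that modify the exact factors.

```latex
% Simplified small-angle estimate (own notation, for illustration only):
%   B       separation of the two sampled zones in the photo-taking lens
%   \delta  longitudinal defocus of the primary image
%   L       distance from the lens to the primary image plane
%   m       magnification of the condenser/separator optics onto the CCDs
%   s       resulting displacement between the two secondary images
\[
  s \;\approx\; m \cdot \frac{B}{L} \cdot \delta
\]
% The measurable shift s grows linearly with the zone separation B:
% a wider "baseline" turns the same defocus \delta into a larger,
% easier-to-detect phase shift on the CCD strips.
```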

Nevertheless, some professional DSLR cameras provide high-precision AF points that can analyze bundles of light within the f2.8 zone. Logically, these types of AF points only unfold their high precision when the photo-taking lens provides a maximum aperture of at least f2.8. Without such a fast lens, the AF point relies only on the standard detectors. The linear CCD detectors for these high-precision autofocus points must be further apart than the standard detectors and are typically arranged diagonally in relation to the other detectors. Also, an additional set of separator lenses and masks is required. For a certain shift in the primary image, the phase shift registered by these high-precision detectors is larger than the one registered by the standard-precision ones. Therefore, the comparator unit can recognize the tiniest focus shifts even when the standard detectors already indicate proper focus. The combination of four standard-precision detectors and four high-precision detectors creates a double-cross-type AF point. The figure below summarizes the concept of high-precision autofocus points.

High Precision AF Points

It should be noted that setting the lens to a high f-number does not influence the autofocus ability. For example, with an f2.8 lens attached, the photographer can preselect an aperture of f16 or f22 and the phase detection unit is still able to use its high-precision autofocus points. This is because the aperture is always fully open while the mirror is in its idle position. The aperture blades only close down to the preselected value at the moment the picture is taken.

Although they offer higher focusing precision, the high-precision AF points do not improve autofocus performance in dark environments, even if an f2.8 lens is used. An f2.8 lens is certainly better suited to recording images in darker environments, but the phase detection unit always picks the same bundles of light for phase comparison. Rays of light from the f2.8 zone are projected onto a separate pair of CCD sensors and therefore do not contribute to the intensity on the standard set of CCD sensors. The high-precision CCDs will either receive some light or no light at all, depending on whether the lens offers a maximum aperture of at least f2.8. Thus, the standard CCD sensors always receive the same amount of light regardless of the available lens aperture.

Another improvement to the accuracy of a phase detection autofocus sensor is achieved by a dual-line zig-zag arrangement of line detectors. The width of a single pixel on a standard-precision CCD strip is roughly between 10 and 14 µm. For certain AF points – usually the most commonly used ones in the central portion of the screen – the AF sensor has the vertical CCD strips replaced by four parallel detectors, two of which are directly adjacent to each other. In addition, these two parallel detectors are shifted relative to each other by half a pixel, which effectively doubles the resolution of such an array. With the zig-zag design, the secondary image coincidence can be determined more accurately because unique signal features that would potentially fall between two pixels in a regular detector array will fall on a pixel of the additional array. The following diagram shows the sensor layout of the Canon EOS 7D with the central vertical zig-zag pattern clearly visible.

AF Zig Zag Pattern
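The effect of the half-pixel offset can be sketched in a few lines of Python. The arrays and the simple interleaving below are an illustrative model of the idea, not the actual readout scheme of any particular AF sensor.

```python
import numpy as np

def interleave_half_pixel(line_a, line_b):
    """Combine two parallel line detectors that are offset by half a pixel.

    line_a samples positions 0, 1, 2, ... (in pixel units),
    line_b samples positions 0.5, 1.5, 2.5, ...
    Interleaving both yields one signal sampled every half pixel.
    """
    combined = np.empty(line_a.size + line_b.size, dtype=line_a.dtype)
    combined[0::2] = line_a  # integer-pixel samples
    combined[1::2] = line_b  # half-pixel samples
    return combined

# A sharp feature that falls "between" the pixels of one detector still
# lands on a pixel of the half-shifted neighbour.
a = np.array([0.0, 0.1, 0.9, 0.1, 0.0])  # sampled at 0, 1, 2, 3, 4
b = np.array([0.0, 0.4, 0.6, 0.0, 0.0])  # sampled at 0.5, 1.5, 2.5, 3.5, 4.5
print(interleave_half_pixel(a, b))
```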

AF Point Selection and Zone Autofocus

To increase flexibility for focusing on various objects, DSLR cameras usually allow the selection of different AF points. One pair of CCD strips can actually represent more than one AF point by switching the in-focus-spots that serve as the reference for phase comparison. Assuming that two CCD strips have their in-focus-spots exactly in the middle, the camera’s CPU can shift these reference positions along both strips in the same direction and define the new location as an additional in-focus-spot. In contrast, if an AF point is to be moved perpendicularly to the CCD strips, additional line sensors have to be installed next to the existing ones. This means that three vertically arranged cross-type AF points only require one pair of slightly longer vertical CCD strips [|] and three pairs of horizontal CCD strips placed above each other [≡]. If two more pairs of vertical strips were added, this arrangement could even represent nine cross-type AF points.
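As a toy illustration of this reference switching, the table and indices below are invented for demonstration; a real camera stores calibrated reference positions for each AF point, but the comparison itself works along these lines.

```python
# Invented reference table: one long pair of CCD strips serves several AF
# points simply by comparing against different in-focus-spot indices.
AF_POINT_REFERENCES = {
    "top":    (40, 40),  # both references shifted upwards along the strips
    "center": (64, 64),  # references exactly in the middle of both strips
    "bottom": (88, 88),  # both references shifted downwards
}

def focus_error(selected_point, measured_pos_1, measured_pos_2):
    """Phase comparison against the references of the selected AF point."""
    ref1, ref2 = AF_POINT_REFERENCES[selected_point]
    return (measured_pos_2 - measured_pos_1) - (ref2 - ref1)

print(focus_error("center", 60, 68))  # prints 8: defocus at the central AF point
```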

This principle requires the combinable CCD strips to be placed behind one set of separator lenses. A phase detection unit that is pointed towards the center of the image can indeed capture a fairly decent portion of the scene, including the off-axis light rays discussed earlier. Unfortunately, object points from the side regions of the image can only be analyzed by additional installations. Thanks to the use of condenser lenses, additional phase detection units can be placed in the outer zones of the image area. Depending on the required number of separate zones, a certain number of condenser lenses is placed in the primary image formation plane. Subsequently, new groups of separator lenses and CCD strips need to be arranged behind those condenser lenses. Today’s cameras typically have three AF zones, for which three condenser lenses are installed. The following diagram shows the concept of creating multiple AF zones.

AF Zones

The figure below compares the technical design of an AF sensor with the resulting array of autofocus points. It is helpful to note that the effective area used for the phase detection analysis is actually larger than the individual AF points as they are displayed in the viewfinder. The effective area for phase comparison is determined by the size of the linear detectors and the camera electronics; the AF point itself is only a visualization displayed in the viewfinder.

Autofocus Sensor Design

It can be seen in the illustration of the sensor layout above that most of the AF points are standard-precision cross-type points (thick white squares). In the Canon EOS 1D X (introduced in 2012), only the five inner AF points are high-precision double-cross-type points (thick green squares). The remaining points (thin white squares) are linear vertical detectors. It can clearly be seen that not every individual AF point has its own pair (or two pairs) of line detectors. Each pair of long vertical detectors in the central area covers seven vertically arranged AF points. Each pair of horizontal detectors in the central area covers three horizontally arranged AF points. For this reason, these 28 pairs of (vertical and horizontal) detectors cover a total of 61 AF points. The 20 diagonal detectors do not form AF points of their own but enhance the precision of the five inner AF points.

Realistic Representations

For the sake of simplicity, it was assumed in the illustrations that the camera sensor is fully transparent, so light could be captured by an autofocus detector unit behind the sensor plane. In fact, however, the image sensor is not transparent, and therefore this linear arrangement of optical elements is not applicable in practice. In reality, light rays coming from the photo-taking lens are deflected towards the lower area of the camera body by the secondary mirror – the one behind the semi-transparent primary mirror. With this setup, the primary image is formed at the geometrical equivalent of the sensor plane. The deflection of the incident light does not change the principles of operation described above. The figure below shows a cross-section of a phase detection autofocus unit as applied in the Canon EOS 5D Mark II and gives an idea of the proportions in a phase detection unit.

Phase Detection Sensor Array Side

Last but not least, what looks rather simple in two-dimensional illustrations is truly a high-precision optical device capable of detecting phase shifts in the micrometer range. The final diagram is a three-dimensional representation of all the individual components that have to interact so precisely.

Phase Detection Unit Parts

Conclusion

Phase detection is the most common type of autofocus system in today’s DSLR cameras due to its various advantages. The autofocus unit itself does not require any moving parts and is therefore not vulnerable to vibrations. Also, the position of the AF unit behind the photo-taking lens offers very high accuracy as the primary image is analyzed directly. Possibly the greatest advantage of phase detection autofocus is its high speed: once the comparator has determined the phase difference, the system already knows to which position the camera lens has to be moved. Its disadvantages are its limited usability in dark environments and with scenes of low contrast or highly repetitive fine patterns. Also, phase detection can only be applied as long as the camera’s mirror is in its idle position. In live-view mode, where the mirror is flipped up, another technique must be applied to determine proper focus. Finally, higher production cost is a slight drawback of phase detection systems.

Continue to Part III