We have already written about sensors in our article on choosing a video camera for the family. There we touched on the topic only briefly; today we will describe both technologies in more detail.

What is the matrix in a video camera? It is a microchip that converts light into an electrical signal. Today there are two technologies, that is, two types of matrices: CCD and CMOS. They differ from each other, and each has its own pros and cons. It is impossible to say definitively which one is better and which is worse; the two develop in parallel. We will not go into deep technical detail, since that would be hard to follow, but in general terms we will outline their main pros and cons.

CMOS technology

First of all, CMOS matrices can boast low power consumption, which is a plus: a video camera with such a matrix will run a little longer (depending on battery capacity). But these are minor things.

The main difference and advantage is random access to individual cells (in a CCD, readout is performed for the whole array at once), which eliminates smearing of the picture. Have you ever seen vertical pillars of light stretching up from small bright objects? CMOS matrices exclude the possibility of their appearance. And cameras based on them are cheaper.

There are also disadvantages. The first is the small size of the photosensitive element relative to the pixel size: most of the pixel area is occupied by electronics, so the area of the photosensitive element is reduced and, consequently, the sensitivity of the matrix decreases.

Since electronic processing is carried out within the pixel, the amount of noise in the picture increases. Another disadvantage is the slow scanning: because of it, a "rolling shutter" effect occurs, and when the camera or subject moves, objects in the frame may be distorted.
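The geometry of that distortion is easy to see with a toy model (an illustrative sketch, not vendor code): rows are sampled one after another, so a vertical edge moving horizontally lands at a different column in every row, turning the edge into a slanted line.

```python
# Toy illustration of rolling-shutter skew: each row is read out slightly
# later than the previous one, so a moving vertical edge shifts per row.

def rolling_shutter_edge(n_rows, skew_per_row, start_col):
    """Column where a moving vertical edge appears in each row.

    skew_per_row = object speed (px/s) * time to read one row (s).
    """
    return [start_col + skew_per_row * r for r in range(n_rows)]

# an edge that drifts one pixel per row period comes out slanted:
print(rolling_shutter_edge(n_rows=4, skew_per_row=1, start_col=10))
# [10, 11, 12, 13] - a straight edge recorded as a diagonal
```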

CCD technology

Video cameras with CCD matrices produce high-quality images. It is easy to see that video captured with a CCD-based camcorder contains less noise than video from a comparable CMOS camera. This is the first and most important advantage. One more thing: the efficiency of CCD matrices is simply amazing. The fill factor approaches 100%, and the quantum efficiency (the share of registered photons) is about 95%. For comparison, the ordinary human eye registers roughly 1% of incident photons.


High price and high power consumption are the disadvantages of these matrices. The thing is, the readout process here is quite complex: image capture relies on many additional mechanisms that CMOS matrices do not have, which is why CCD technology is significantly more expensive.

CCD matrices are used in devices that must produce high-quality color images and that may be used to shoot dynamic scenes. These are mostly professional video cameras, though there are consumer models too, as well as surveillance systems, digital still cameras, and so on.

CMOS matrices are used where the requirements for picture quality are not especially high: motion sensors, inexpensive smartphones, and the like. At least, that was the case before: modern CMOS matrices come in various improved modifications that make them high quality and fully capable of competing with CCD matrices.

Now it is difficult to judge which technology is better, because both show excellent results. Therefore, making the type of matrix the sole selection criterion is unwise, to say the least. It is important to take many characteristics into account.



I continue the conversation about the camera's internals started in the previous publication.

One of the main elements of a digital camera, distinguishing it from film cameras, is the photosensitive element: the sensor, or matrix, of the digital camera. We have already talked about camera matrices, but now let's look at the structure and operating principle of the matrix in a little more detail, though still superficially, so as not to tire the reader too much.

Nowadays, most digital cameras are equipped with CCD matrices.

CCD matrix. Device. Principle of operation.

Let's look at the design of a CCD matrix in general terms.

Semiconductors, as is known, are divided into n-type and p-type semiconductors. An n-type semiconductor has an excess of free electrons, while a p-type semiconductor has an excess of positive charges, “holes” (and therefore a lack of electrons). All microelectronics is based on the interaction of these two types of semiconductors.

So, an element of a digital camera's CCD matrix is arranged as follows (see Fig. 1):

Fig.1

Without going into details, a CCD element, or charge-coupled device (CCD), is a MIS (metal-insulator-semiconductor) capacitor. It consists of a p-type substrate (a layer of silicon), a silicon dioxide insulator, and electrode plates. When a positive potential is applied to one of the electrodes, a zone depleted of the majority carriers (holes) forms beneath it, since they are pushed by the electric field away from the electrode, deeper into the substrate. A potential well is thus formed under this electrode: an energy zone favorable for minority carriers (electrons) to move into. A negative charge accumulates in this well, and it can be stored there for quite a long time, because the well contains no holes and therefore gives the electrons nothing to recombine with.

In photosensitive matrices, the electrodes are films of polycrystalline silicon, transparent in the visible region of the spectrum.

Photons of light incident on the matrix enter the silicon substrate, forming a hole-electron pair in it. Holes, as mentioned above, are displaced deeper into the substrate, and electrons accumulate in the potential well.

The accumulated charge is proportional to the number of photons incident on the element, i.e., the intensity of the light flux. Thus, a charge relief is created on the matrix, corresponding to the optical image.

Movement of charges in the CCD matrix.

Each CCD element has several electrodes to which different potentials are applied.

When a potential higher than that on the current electrode is applied to the adjacent one (see Fig. 3), a deeper potential well forms under the adjacent electrode, and the charge moves into it from the first well. In this way, charge can move from one CCD cell to another. The CCD element shown in Fig. 3 is called three-phase; there are also 4-phase elements.

Fig.4. Scheme of operation of a three-phase charge-coupled device - a shift register.

To convert the charges into current pulses (photocurrent), serial shift registers are used (see Fig. 4). Such a shift register is a row of CCD elements. The amplitude of the current pulses is proportional to the amount of charge transferred, and thus to the incident light flux. The sequence of current pulses generated by reading out the sequence of charges is then fed to the input of an amplifier.

Arrays of closely spaced CCD elements are combined into a CCD matrix. The operation of such a matrix is based on the creation and transfer of localized charge packets in potential wells created by the electric field.

Fig.5.

The charges of all CCD elements of the register synchronously move to adjacent CCD elements. The charge that was in the last cell is output from the register and then fed to the input of the amplifier.

The input of the serial shift register receives the charges of the perpendicularly arranged shift registers, which are collectively called the parallel shift register. Together, the parallel and serial shift registers make up the CCD matrix (see Fig. 4).

Shift registers perpendicular to the serial register are called columns.

The movement of charges in the parallel register is strictly synchronized: all charges in a row are shifted simultaneously to the adjacent row, and the charges of the last row pass into the serial register. Thus, in one operating cycle, one row of charges from the parallel register reaches the input of the serial register, freeing space for newly formed charges.
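The readout order described above can be sketched in a few lines (a minimal illustration, not device firmware): each cycle the parallel register shifts every row toward the serial register, and the serial register then shifts its charges out pixel by pixel to the amplifier.

```python
# Minimal sketch of CCD readout order: rows drop one by one into the serial
# register, which empties pixel by pixel toward the output amplifier.

def read_out(frame):
    """frame: list of rows, the last row adjacent to the serial register."""
    output = []
    rows = [row[:] for row in frame]  # copy the accumulated charge relief
    while rows:
        serial_register = rows.pop()  # bottom row enters the serial register
        while serial_register:
            output.append(serial_register.pop())  # one pixel to the amplifier
    return output

frame = [[1, 2],
         [3, 4]]
print(read_out(frame))  # [4, 3, 2, 1]
```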

The operation of the serial and parallel registers is synchronized by a clock generator. The digital camera's sensor module also includes a microcircuit that supplies the potentials to the registers' transfer electrodes and controls their operation.

A sensor of this type is called a full-frame CCD matrix. For it to operate, a light-proof cover is required, which first opens the sensor to light for the exposure and then, once enough photons have arrived to accumulate sufficient charge in the matrix elements, closes it off from light. This cover is a mechanical shutter, as in film cameras. Without such a shutter, the cells continue to be illuminated while the charges move through the shift register, adding to each pixel's charge extra electrons that do not correspond to the light flux at that point. This leads to "smearing" of the charge and, accordingly, to distortion of the resulting image.

A single element is sensitive over the entire visible spectral range, so in color CCD matrices a light filter is placed above the photodiodes that passes only one of three colors: red, green, blue (RGB) or yellow, magenta, cyan (YMC). In a black-and-white CCD matrix there are no such filters.


DEVICE AND PRINCIPLE OF OPERATION OF A PIXEL

A pixel consists of a p-substrate coated with a transparent dielectric, onto which a light-transmitting electrode is applied, forming a potential well.

Above the pixel there may be a light filter (used in color matrices) and a collecting lens (used in matrices where the sensitive elements do not completely occupy the surface).

A positive potential is applied to a light-transmitting electrode located on the surface of the crystal. Light falling on a pixel penetrates deep into the semiconductor structure, forming an electron-hole pair. The resulting electron and hole are pulled apart by the electric field: the electron moves to the carrier storage zone (potential well), and the holes flow into the substrate.

The pixel has the following characteristics:

  • The capacity of the potential well is the number of electrons the well can hold.
  • The spectral sensitivity of a pixel is the dependence of sensitivity (the ratio of the photocurrent to the light flux) on the wavelength of the radiation.
  • Quantum efficiency (measured as a percentage) is the ratio of the number of photons whose absorption produced charge carriers to the total number of absorbed photons. In modern CCD matrices this figure reaches 95%; for comparison, the quantum efficiency of the human eye is about 1%.
  • Dynamic range is the ratio of the saturation voltage (or current) to the RMS voltage (or current) of the dark noise. It is measured in dB.
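With the definition above, dynamic range in dB is a one-line calculation. A hedged example; the full-well and noise figures below are illustrative assumptions, not values from any datasheet:

```python
import math

# Dynamic range as defined above, in dB: 20*log10(saturation / dark noise RMS),
# here expressed in electrons. Input values are illustrative only.

def dynamic_range_db(full_well_electrons, noise_electrons_rms):
    return 20 * math.log10(full_well_electrons / noise_electrons_rms)

# e.g. a 30000 e- well with 15 e- RMS noise:
print(round(dynamic_range_db(30000, 15), 1))  # 66.0 dB
```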
CCD MATRIX AND CHARGE TRANSFER DEVICE


The CCD is divided into rows, and each row in turn into pixels. The rows are separated from each other by stop layers (p+), which prevent charges from flowing between them. To move the charge packets, parallel (also called vertical, VCCD) and serial (also called horizontal, HCCD) shift registers are used.

The simplest operating cycle of a three-phase shift register begins with a positive potential being applied to the first gate, under which a well forms and fills with the generated electrons. A potential higher than the first is then applied to the second gate, so a deeper potential well forms under the second gate, and the electrons flow into it from under the first. To continue moving the charge, the potential on the second gate is reduced while a higher potential is applied to the third, and the electrons flow under the third gate. This cycle repeats all the way from the accumulation site to the horizontal readout register. The electrodes of the horizontal and vertical shift registers form the phases (phase 1, phase 2, and phase 3).
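The cycle above can be modeled in a few lines. This is a toy model under a stated assumption (gate i is wired to phase i mod 3, and the packet sits under whichever gate's phase is driven high), not a device-accurate simulation:

```python
# Toy model of three-phase clocking: driving the neighboring gate's phase
# high creates a deeper well next door, pulling the packet one gate along.

def step_positions(high_phases, start_gate, n_gates):
    """Gate index holding the packet after each phase in high_phases goes high."""
    gate = start_gate
    positions = [gate]
    for phase in high_phases:
        next_gate = gate + 1
        # the packet advances only when the adjacent gate's phase is high
        if next_gate < n_gates and next_gate % 3 == phase % 3:
            gate = next_gate
        positions.append(gate)
    return positions

# driving phases 1, 2, 0, 1 high in turn walks the packet four gates along:
print(step_positions([1, 2, 0, 1], start_gate=0, n_gates=6))  # [0, 1, 2, 3, 4]
```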

Classification of CCD matrices by color:

  • Black-and-white
  • Color

Classification of CCD matrices by architecture:

(In the architecture diagrams, green indicates photosensitive cells and gray indicates opaque areas.)

The CCD matrix has the following characteristics:

  • Charge transfer efficiency is the ratio of the number of electrons in a charge packet at the end of its path through the shift register to the number at the beginning.
  • Fill factor is the ratio of the area occupied by photosensitive elements to the total area of the photosensitive surface of the CCD matrix.
  • Dark current is the current that flows through a photosensitive element in the absence of incident photons.
  • Read noise is the noise that arises in the output signal conversion and amplification circuits.

Matrices with frame transfer (English: frame transfer).

Advantages:

  • The photosensitive elements can occupy 100% of the surface;
  • Readout is faster than in full-frame transfer sensors;
  • Less smearing than in a full-frame transfer CCD;
  • A duty-cycle advantage over the full-frame architecture: a frame-transfer CCD collects photons continuously.

Flaws:

  • While data is being read out, the light must be blocked with a shutter to avoid smearing;
  • The charge travel path is longer, which degrades charge transfer efficiency;
  • These sensors are more expensive to manufacture than full-frame transfer devices.

Matrices with interline transfer, or matrices with column buffering (English: interline transfer).

Advantages:

  • No shutter is needed;
  • No smearing.

Flaws:

  • Photosensitive elements can fill no more than 50% of the surface;
  • Readout speed is limited by the speed of the shift register;
  • Resolution is lower than in frame-transfer and full-frame transfer CCDs.

Matrices with frame-interline transfer (English: frame-interline transfer).

Advantages:

  • The processes of charge accumulation and transfer are spatially separated;
  • Charge from the storage elements is transferred into transfer registers shielded from light;
  • The charge of the entire image is transferred in one clock cycle;
  • No smearing;
  • The interval between exposures is minimal, which suits video recording.

Flaws:

  • Photosensitive elements can fill no more than 50% of the surface;
  • Resolution is lower than in frame-transfer and full-frame transfer CCDs;
  • The charge travel path is longer, which degrades charge transfer efficiency.

APPLICATIONS OF CCD MATRICES

SCIENTIFIC APPLICATION

  • for spectroscopy;
  • for microscopy;
  • for crystallography;
  • for fluoroscopy;
  • for natural sciences;
  • for biological sciences.

SPACE APPLICATION

  • in telescopes;
  • in star trackers;
  • in tracking satellites;
  • when probing planets;
  • in onboard and handheld crew equipment.

INDUSTRIAL APPLICATION

  • to check the quality of welds;
  • to control the uniformity of painted surfaces;
  • to study the wear resistance of mechanical products;
  • for reading barcodes;
  • to control the quality of product packaging.

APPLICATION FOR PROTECTION OF OBJECTS

  • in residential apartments;
  • at airports;
  • on construction sites;
  • in the workplace;
  • in “smart” cameras that recognize a person’s face.

APPLICATION IN PHOTOGRAPHY

  • in professional cameras;
  • in amateur cameras;
  • in mobile phones.

MEDICAL USE

  • in fluoroscopy;
  • in cardiology;
  • in mammography;
  • in dentistry;
  • in microsurgery;
  • in oncology.

AUTO-ROAD APPLICATION

  • for automatic license plate recognition;
  • for speed control;
  • to control traffic flow;
  • for parking access control;
  • in police surveillance systems.

(Illustration: how distortions arise when shooting moving objects with a rolling-shutter sensor.)


General information about CCD matrices.

Currently, most image capture systems use CCD (charge-coupled device) matrices as the photosensitive device.

The operating principle of a CCD matrix is as follows: a matrix of photosensitive elements (the accumulation section) is fabricated on a silicon base. Each photosensitive element accumulates charge in proportion to the number of photons striking it. Thus, over some time (the exposure time), the accumulation section holds a two-dimensional array of charges proportional to the brightness of the original image. The accumulated charges are first transferred to the storage section, and then row by row and pixel by pixel to the output of the matrix.

The size of the storage section relative to the accumulation section varies:

  • a full frame (frame-transfer matrices for progressive scan);
  • half a frame (frame-transfer matrices for interlaced scan);

There are also matrices with no storage section at all; in them, row transfer goes directly through the accumulation section. Obviously, such matrices require an optical shutter to work.

The quality of modern CCD matrices is such that the charge remains virtually unchanged during the transfer process.

Despite the apparent variety of television cameras, the CCD matrices used in them are practically the same, since only a few companies mass-produce CCD matrices: SONY, Panasonic, Samsung, Philips, Hitachi, Kodak.

The main parameters of CCD matrices are:

  • dimension in pixels;
  • physical size in inches (2/3, 1/2, 1/3, etc.); note that these numbers do not specify the exact size of the sensitive area, but rather indicate the class of the device;
  • sensitivity.

Resolution of CCD cameras.

The resolution of CCD cameras is determined mainly by the size of the CCD matrix in pixels and by the quality of the lens. To some extent the camera's electronics can also affect it: if poorly designed, they can degrade resolution, though frankly bad electronics are rare these days.

One note is important here. In some cases, high-frequency spatial filters are installed in cameras to improve apparent sharpness. As a result, an image of an object from a camera with a smaller matrix may appear even sharper than an image of the same object from an objectively better camera. This is acceptable when the camera is used in visual surveillance systems, but completely unsuitable for building measurement systems.

Resolution and format of CCD matrices.

Currently, various companies produce CCD matrices covering a wide range of dimensions, from several hundred to several thousand pixels on a side. A matrix of 10000x10000 has been reported, and the report noted not so much the cost of such a matrix as the problems of storing, processing, and transmitting the resulting images. Matrices with dimensions up to 2000x2000 are now in more or less wide use.
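The storage problem is easy to quantify. A quick sketch of the arithmetic (bit depths below are illustrative assumptions; the text does not state the sensor's output depth):

```python
# Raw size of one frame from a 10000x10000 matrix at a given bit depth.
# Illustrative arithmetic only; assumes uncompressed, tightly packed samples.

def frame_size_mb(width, height, bits_per_pixel):
    return width * height * bits_per_pixel / 8 / 2**20  # bytes -> MiB

print(round(frame_size_mb(10000, 10000, 8), 1))   # ~95.4 MiB at 8 bits/pixel
print(round(frame_size_mb(10000, 10000, 16), 1))  # ~190.7 MiB at 16 bits/pixel
```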

The most widely (or rather, massively) used CCD matrices are, of course, those whose resolution is oriented to the television standard. These come mainly in two formats:

  • 512*576;
  • 768*576.
512*576 matrices are usually used in simple and cheap video surveillance systems.

Matrices 768*576 (sometimes a little more, sometimes a little less) allow you to get the maximum resolution for a standard television signal. At the same time, unlike matrices of the 512*576 format, they have a grid arrangement of photosensitive elements close to a square, and, therefore, equal horizontal and vertical resolution.

Camera manufacturers often specify resolution in television lines. This means the camera can resolve N/2 dark vertical lines on a light background, arranged in a square inscribed in the image field, where N is the stated number of television lines. With a standard television test chart this works as follows: by choosing the distance and focusing the chart image, one makes the top and bottom edges of the chart image on the monitor coincide with the chart's outer contour, marked by the vertices of black and white prisms; then, after final fine focusing, the number is read at the point of the vertical wedge where the vertical strokes first cease to be resolved. The last step matters because, in the image of chart test fields at 600 or more lines, alternating stripes are often visible that are in fact moiré, formed by beating between the spatial frequencies of the chart's lines and the grid of the CCD matrix's sensitive elements. This effect is especially pronounced in cameras with high-frequency spatial filters (see above)!

I would note that, all other things being equal (the lens being the main variable), the resolution of black-and-white cameras is uniquely determined by the size of the CCD matrix: a 768*576 camera will have a resolution of 576 television lines, although some brochures quote 550 and others 600.

Lens.

The physical size of the CCD cells is the main parameter that determines the requirement for the resolution of the lens. Another such parameter may be the requirement to ensure the operation of the matrix under light overload conditions, which will be discussed below.

For a 1/2 inch SONY ICX039 matrix, the pixel size is 8.6µm*8.3µm. Therefore, the lens must have a resolution better than:

1 / (8.3×10⁻³ mm) ≈ 120 lines per millimeter (60 line pairs per millimeter).

For lenses made for 1/3-inch matrices this value should be even higher, although, oddly enough, this does not affect the cost or parameters such as aperture, since these lenses are designed to form an image on the matrix's smaller photosensitive field. It also follows that lenses for smaller matrices are not suitable for larger ones, because their characteristics deteriorate significantly at the edges of a large matrix. Lenses for large matrices, on the other hand, can limit the resolution of images obtained from smaller matrices.
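The calculation above can be written as a short function of the pixel pitch (8.3 µm for the ICX039 cited in the text):

```python
# Lens resolution required so as not to limit a matrix with a given pixel
# pitch: one resolvable line per pixel, i.e. 1 / pitch lines per mm.

def lens_resolution(pixel_pitch_um):
    lines_per_mm = 1000 / pixel_pitch_um  # um -> mm conversion in the 1000
    return lines_per_mm, lines_per_mm / 2  # also in line pairs per mm

lines, pairs = lens_resolution(8.3)
print(round(lines), round(pairs))  # 120 60
```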

Unfortunately, with all the modern abundance of lenses for television cameras, it is very difficult to obtain information on their resolution.

In general, we do not often select lenses, since almost all of our customers install video systems on existing optics (microscopes, telescopes, etc.), so our information about the lens market has the character of occasional notes. We can only say that the resolution of simple and cheap lenses is in the range of 50-60 line pairs per mm, which is generally not enough.

On the other hand, we have information that special lenses produced by Zeiss with a resolution of 100-120 line pairs per mm cost more than $1000.

So, when purchasing a lens, preliminary testing is necessary. It should be said that most Moscow dealers do provide lenses for testing. Here it is again appropriate to recall the moiré effect, whose presence, as noted above, can mislead regarding the resolution of the matrix. Thus, the presence of moiré in the image of chart sections with strokes above 600 television lines indicates, with respect to the lens, a certain reserve of resolution, which certainly does no harm.

One more note, perhaps important for those interested in geometric measurements. All lenses exhibit distortion to some degree (pincushion distortion of the image geometry), and as a rule, the shorter the focal length, the greater the distortion. In our opinion, lenses with focal lengths greater than 8-12 mm have acceptable distortion for 1/3" and 1/2" cameras, although the level of "acceptability" depends, of course, on the tasks the television camera must solve.

Resolution of image input controllers

The resolution of image input controllers should be understood as the conversion frequency of the analog-to-digital converter (ADC) of the controller, the data of which is then recorded in the controller’s memory. Obviously, there is a reasonable limit to increasing the digitization frequency. For devices with a continuous structure of the photosensitive layer, for example, vidicons, the optimal digitization frequency is equal to twice the upper frequency of the useful signal of the vidicon.

Unlike such light detectors, CCD matrices have a discrete topology, so the optimal digitization frequency for them is determined as the shift frequency of the output register of the matrix. In this case, it is important that the controller’s ADC operates synchronously with the output register of the CCD matrix. Only in this case can the best conversion quality be achieved both from the point of view of ensuring a “rigid” geometry of the resulting images and from the point of view of minimizing noise from clock pulses and transient processes.

Sensitivity of CCD cameras

Since 1994 we have been using SONY board-level cameras based on the ICX039 CCD matrix in our devices. SONY's description of this device specifies a sensitivity of 0.25 lux on the object with an f/1.4 lens. More than once we have come across cameras with similar parameters (1/2-inch format, 752*576 resolution) whose declared sensitivity was 10 or even 100 times higher than that of "our" SONY.

We checked these figures several times. In most cases we found the same ICX039 CCD matrix in cameras from different companies; moreover, all the supporting ICs were also SONY-made, and comparative testing showed these cameras to be practically identical. So what is going on?

The whole question is at what signal-to-noise ratio (s/n) the sensitivity is measured. In our case, SONY conscientiously specified the sensitivity at s/n = 46 dB, while other companies either did not indicate it at all or indicated it in such a way that it is unclear under what conditions the measurements were made.

This is, in general, a common scourge of most camera manufacturers - not specifying the conditions for measuring camera parameters.

The fact is that as the required S/N ratio decreases, the sensitivity of the camera increases in inverse proportion to the square of the required S/N ratio:

I = K / (s/n)²

Where:
I - sensitivity;
K - conversion factor;
s/n - the S/N ratio in linear units.

Therefore, many companies are tempted to indicate camera sensitivity at a low S/N ratio.
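The size of that temptation is easy to put in numbers. Since sensitivity goes as 1/(s/n)², the quoted minimum-illumination figure in lux scales with the square of the required linear s/n. A sketch of the arithmetic (the 0.25 lux at 46 dB figure is from the text; the rescaled value is pure arithmetic, not a datasheet number):

```python
# Rescale a sensitivity figure quoted at one required s/n to another,
# using the inverse-square relation from the text. Illustrative only.

def quoted_sensitivity_lux(lux_at_ref, ref_snr_db, new_snr_db):
    ratio = 10 ** ((new_snr_db - ref_snr_db) / 20)  # dB -> linear voltage ratio
    return lux_at_ref * ratio ** 2

# 0.25 lux at s/n = 46 dB looks 100x "better" if quoted at 26 dB:
print(round(quoted_sensitivity_lux(0.25, 46, 26), 4))  # 0.0025
```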

We can say that a matrix's ability to "see" better or worse is determined by the number of charges converted from the photons incident on its surface and by the quality of delivery of these charges to the output. The number of accumulated charges depends on the area of the photosensitive element and the quantum efficiency of the CCD matrix, while the quality of transport is determined by many factors that often come down to one: readout noise. For modern matrices, readout noise is on the order of 10-30 electrons or even less!

The element areas of CCD matrices vary, but a typical value for 1/2-inch television-camera matrices is 8.5 µm * 8.5 µm. Increasing the element size means increasing the size of the matrix itself, which raises its cost, not so much because of the higher production price as because such devices are produced in runs several orders of magnitude smaller. In addition, the photosensitive area is affected by the matrix topology: what percentage of the crystal's total surface the sensitive area occupies (the fill factor). In some special matrices the fill factor is stated to be 100%.

Quantum efficiency (how much, on average, the charge of a sensitive cell changes, in electrons, when one photon falls on its surface) is 0.4-0.6 for modern matrices, reaching 0.85 in some matrices without anti-blooming.

Thus, the sensitivity of CCD cameras, referred to a given S/N value, has come close to the physical limit. By our estimate, typical sensitivity values of general-purpose cameras at s/n = 46 dB lie in the range of 0.15-0.25 lux of illumination on the object with an f/1.4 lens.

For this reason, we do not recommend blindly trusting the sensitivity figures given in television camera descriptions, especially when the measurement conditions are not stated. And if the data sheet of a camera costing up to $500 claims a sensitivity of 0.01-0.001 lux in television mode, you are looking at an example of, to put it mildly, inaccurate information.

About ways to increase the sensitivity of CCD cameras

What do you do if you need to image a very faint object, such as a distant galaxy?

One way to solve this is to accumulate the image over time. This method can significantly increase the sensitivity of the CCD. Of course, it is applicable only to stationary objects, or to cases where the motion can be compensated, as is done in astronomy.

Fig1 Planetary nebula M57.

Telescope: 60 cm; exposure: 20 s; temperature during exposure: 20 °C.
At the center of the nebula there is a stellar object of magnitude 15.
The image was obtained by V. Amirkhanyan at the Special Astrophysical Observatory of the Russian Academy of Sciences.

It can be stated with reasonable accuracy that the sensitivity of CCD cameras is directly proportional to the exposure time.

For example, at a shutter speed of 1 second the sensitivity relative to the original 1/50 s increases 50-fold, i.e. it improves to 0.005 lux.
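That scaling as code (an illustrative one-liner; it assumes the simple proportionality stated above and ignores dark current):

```python
# Lux figure improves in proportion to the increase in exposure time
# (idealized: dark current and saturation are ignored).

def sensitivity_at_exposure(base_lux, base_exposure_s, exposure_s):
    return base_lux * base_exposure_s / exposure_s

# 0.25 lux at 1/50 s becomes 0.005 lux at 1 s:
print(round(sensitivity_at_exposure(0.25, 1/50, 1.0), 3))  # 0.005
```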

Of course, there are problems along this path, first of all the dark current of the matrices, which contributes charges that accumulate along with the useful signal. The dark current is determined, first, by the crystal's manufacturing technology, second, by the quality of the production process, and, to a very large extent, by the operating temperature of the matrix itself.

Usually, to achieve long accumulation times, on the order of minutes or tens of minutes, matrices are cooled to minus 20-40 °C. The problem of cooling matrices to such temperatures has been solved, but it cannot be called simple, since there are always design and operational problems associated with fogging of the protective glass and with removing heat from the hot junction of the thermoelectric cooler.

At the same time, technological progress in CCD production has also improved the dark current parameter. The achievements here are significant, and the dark current of some good modern matrices is very small. In our experience, cameras without cooling allow exposures of tens of seconds at room temperature, and, with dark-background compensation, up to several minutes. As an example, here is a photograph of the planetary nebula M57, obtained with the VS-a-tandem-56/2 video system without cooling at an exposure of 20 s.

The second way to increase sensitivity is to use an image intensifier (an electron-optical converter): a device that amplifies the luminous flux. Modern image intensifiers can have very large gains; however, without going into details, we note that an image intensifier can only improve the threshold sensitivity of the camera, so its gain should not be made too large.

Spectral sensitivity of CCD cameras


Fig.2 Spectral characteristics of various matrices

For some applications, the spectral sensitivity of the CCD is an important factor. Since all CCDs are made on the basis of silicon, in their “bare” form the spectral sensitivity of the CCD corresponds to this parameter of silicon (see Fig. 2).

As you can see, for all the variety of characteristics, CCD matrices have maximum sensitivity in the red and near-infrared (IR) range and see essentially nothing in the blue-violet part of the spectrum. The near-IR sensitivity of CCDs is used in covert surveillance systems with IR illumination, as well as for measuring the thermal fields of high-temperature objects.


Fig. 3 Typical spectral characteristics of SONY black-and-white matrices.

SONY produces all its black-and-white matrices with the spectral characteristics shown in Fig. 3. As the figure shows, the sensitivity of these CCDs in the near IR is significantly reduced, but the matrices now perceive the blue region of the spectrum.

For various special purposes, matrices sensitive in the ultraviolet and even X-ray range are being developed. Usually these devices are unique and their price is quite high.

About progressive and interlaced scanning

The standard television signal was developed for broadcast television and, from the point of view of modern image capture and processing systems, it has one big drawback. Although the TV signal contains 625 lines (of which about 576 carry video information), it is displayed as 2 sequential half-frames (fields), one consisting of the even lines and one of the odd lines. As a result, when a moving scene is captured, the usable vertical resolution is no more than the number of lines in one half-frame (288). In addition, in modern systems, when the image is displayed on a computer monitor (which uses progressive scan), an interlaced camera produces an unpleasant visual doubling of moving objects.
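The half-frame (field) structure described above can be illustrated with a short sketch; `split_fields` and `bob_deinterlace` are hypothetical helper names, and line doubling is only the simplest of the deinterlacing methods mentioned, shown here precisely because it makes the drop to 288 usable lines explicit:

```python
def split_fields(frame):
    # An interlaced frame is two fields: even lines (0, 2, 4, ...)
    # and odd lines (1, 3, 5, ...), captured at different moments.
    return frame[0::2], frame[1::2]

def bob_deinterlace(field):
    # Line-double a single field back to full height; vertical detail
    # is halved, which is the resolution loss discussed above.
    doubled = []
    for line in field:
        doubled.extend([line, line])
    return doubled

frame = [[y] * 8 for y in range(576)]   # 576 toy scan lines
even, odd = split_fields(frame)
print(len(even), len(odd))              # 288 288
print(len(bob_deinterlace(even)))       # 576
```

Any such reconstruction restores the frame height but not the lost vertical detail, which is why the text recommends progressive-scan CCDs when full resolution matters.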

All methods to combat this shortcoming lead to a deterioration in vertical resolution. The only way to overcome this disadvantage and achieve resolution that matches the resolution of the CCD is to switch to progressive scanning in the CCD. CCD manufacturers produce such matrices, but due to the low production volume, the price of such matrices and cameras is much higher than that of conventional ones. For example, the price of a SONY matrix with progressive scan ICX074 is 3 times higher than ICX039 (interlace scan).

Other camera options

These “other” parameters include “blooming”, i.e. spreading of charge over the surface of the matrix when individual elements are overexposed. In practice this occurs, for example, when observing objects with glare. It is a rather unpleasant CCD effect, since a few bright points can distort the entire image. Fortunately, many modern matrices contain anti-blooming circuits. In the descriptions of some of the latest SONY matrices we found the figure 2000, characterizing the permissible light overload of individual cells that does not yet lead to charge spreading. This is a fairly high value, especially since, as our experience has shown, it can be achieved only with special adjustment of the drivers that directly control the matrix and of the video signal preamplification channel. In addition, the lens also contributes to the “spreading” of bright points: at such large light overloads, even slight scattering beyond the main spot noticeably illuminates neighboring elements.

It is also necessary to note here that according to some data, which we have not verified ourselves, matrices with anti-blooming have a 2-fold lower quantum efficiency than matrices without anti-blooming. In this regard, in systems that require very high sensitivity, it may make sense to use matrices without anti-blooming (usually these are special tasks such as astronomical ones).

About color cameras

The materials in this section go somewhat beyond the scope of the measuring systems we set out to consider; however, the widespread use of color cameras (even wider than black-and-white ones) forces us to clarify this issue, especially since customers often try to use color television cameras with our black-and-white frame grabbers, and are then very surprised to find stains in the resulting images and insufficient resolution. Let us explain what is going on here.

There are 2 ways to generate a color signal:

  • use of a single-matrix camera;
  • use of a system of 3 CCD matrices with a color-separation head, which forms the R, G and B components of the color signal on separate matrices.

The second way provides the best quality and is the only way to obtain measurement systems; however, cameras operating on this principle are quite expensive (more than $3000).

In most cases, single-chip CCD cameras are used. Let's look at their operating principle.

As is clear from the fairly wide spectral characteristic of a CCD matrix, it cannot determine the “color” of a photon hitting its surface. Therefore, to capture a color image, a light filter is installed in front of each element of the CCD matrix, while the total number of matrix elements remains the same. SONY, for example, produces exactly the same CCD matrices in black-and-white and color versions, differing only in the grid of light filters applied directly to the sensitive areas of the color matrix. There are several matrix coloring schemes; here is one of them.

Here 4 different filters are used (see Fig. 4 and Fig. 5).


Figure 4. Distribution of filters on CCD matrix elements



Figure 5. Spectral sensitivity of CCD elements with various filters.

In each line, the luminance signal is obtained as:

Y = (Cy + G) + (Ye + Mg)

In line A1 the "red" color difference signal is obtained as:

R - Y = (Mg + Ye) - (G + Cy)

and in line A2 a “blue” color difference signal is obtained:

-(B - Y) = (G + Ye) - (Mg + Cy)
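These relations can be written out directly (the function name is ours):

```python
def luma_and_chroma(cy, ye, mg, g):
    # Complementary-mosaic relations from the text:
    #   Y        = (Cy + G) + (Ye + Mg)
    #   R - Y    = (Mg + Ye) - (G + Cy)     (line A1)
    #   -(B - Y) = (G + Ye) - (Mg + Cy)     (line A2)
    y = (cy + g) + (ye + mg)
    r_minus_y = (mg + ye) - (g + cy)
    neg_b_minus_y = (g + ye) - (mg + cy)
    return y, r_minus_y, neg_b_minus_y

# A neutral (gray) patch gives equal filter outputs, so both
# color-difference signals vanish:
print(luma_and_chroma(10, 10, 10, 10))   # (40, 0, 0)
```

Note that each output value mixes the brightness of four neighboring cells, which is exactly why the color of an output pixel is a computed quantity rather than a directly measured one.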

It is clear from this that the spatial resolution of a color CCD matrix, compared to an identical black-and-white one, is usually 1.3-1.5 times worse both horizontally and vertically. Because of the filters, the sensitivity of a color CCD is also worse than that of a black-and-white one. Thus, with a 1000*800 single-matrix receiver you can actually get about 700*550 for the luminance signal and 500*400 (sometimes 700*400) for the color signal.

Leaving technical issues aside, we note that for advertising purposes many manufacturers of electronic cameras publish completely incomprehensible figures for their equipment. For example, Kodak states the resolution of its DC120 electronic camera as 1200*1000 with a matrix of 850x984 pixels. But, gentlemen, information does not appear out of nowhere, even if the result looks good visually!

The spatial resolution of a color signal (a signal that carries information about the color of the image) can be said to be at least 2 times worse than the resolution of a black-and-white signal. In addition, the “calculated” color of the output pixel is not the color of the corresponding element of the source image, but only the result of processing the brightness of various elements of the source image. Roughly speaking, due to the sharp difference in brightness of neighboring elements of an object, a color can be calculated that is not there at all, while a slight camera shift will lead to a sharp change in the output color. For example: the border of a dark and light gray field will look like it consists of multi-colored squares.

All these considerations relate only to the physical principle of obtaining information on color CCD matrices, while it must be taken into account that usually the video signal at the output of color cameras is presented in one of the standard formats PAL, NTSC, or less often S-video.

The PAL and NTSC formats are convenient because they can be reproduced immediately on standard monitors with a video input, but it must be remembered that these standards allocate a significantly narrower band to the color signal, so it is more correct to speak of a colorized image rather than a color one. Another unpleasant feature of cameras whose video signal carries a color component is the appearance of the above-mentioned stains in images captured by black-and-white frame grabbers. The point is that the chrominance signal lies almost in the middle of the video band and creates interference when a frame is captured. We do not see this interference on a television monitor because the phase of this “interference” reverses every four frames and is averaged by the eye. Hence the bewilderment of a customer who receives an image with interference that he cannot see on the monitor.

It follows from this that if you need to carry out some measurements or decipher objects by color, then this issue must be approached taking into account both the above and other features of your task.

About CMOS matrices

In the world of electronics, everything is changing very quickly, and although the field of photodetectors is one of the most conservative, new technologies have been approaching here recently. First of all, this relates to the emergence of CMOS television matrices.

Indeed, silicon is light-sensitive, and in principle any semiconductor device can serve as a light sensor. The use of CMOS technology provides several obvious advantages over the traditional technology.

Firstly, CMOS technology is well mastered and allows the production of elements with a high yield of useful products.

Secondly, CMOS technology makes it possible to place on the chip, in addition to the photosensitive area, various framing circuits (up to and including the ADC) that previously had to be installed “outside”. This makes it possible to produce cameras with digital output “on a single chip.”

Thanks to these advantages, it becomes possible to produce significantly cheaper television cameras. In addition, the range of companies producing matrices is expanding significantly.

At the moment, production of television matrices and cameras using CMOS technology is only getting started, and information about the parameters of such devices is very scarce. We can only note that the parameters of these matrices do not yet exceed what has already been achieved; as for price, their advantage is undeniable.

As an example, take the Photobit PB-159 single-chip color camera. It is made on a single chip and has the following technical parameters:

  • resolution - 512*384;
  • pixel size - 7.9µm*7.9µm;
  • sensitivity - 1 lux;
  • output - digital 8-bit SRGB;
  • package - 44-pin PLCC.

Thus, the camera loses about four times in sensitivity; in addition, information on another camera suggests that this technology suffers from a relatively large dark current.

About digital cameras

Recently, a new market segment using CCD and CMOS matrices has emerged and is growing rapidly: digital cameras. Moreover, at present the quality of these products is rising sharply while prices fall just as sharply. Indeed, just 2 years ago a matrix with a resolution of 1024*1024 alone cost about $3000-7000, but now cameras with such matrices and a bunch of bells and whistles (LCD screen, memory, vari-lens, convenient body, etc.) can be bought for less than $1000. This can only be explained by the transition to large-scale production of matrices.

Since these cameras are based on CCD and CMOS matrices, all discussions in this article about sensitivity and the principles of color signal formation are valid for them.

Instead of a conclusion

The practical experience we have accumulated allows us to draw the following conclusions:

  • the production technology of CCD matrices is, in terms of sensitivity and noise, very close to the physical limits;
  • on the television camera market you can find cameras of acceptable quality, although adjustments may be required to achieve higher parameters;
  • do not be fooled by the high sensitivity figures given in camera brochures;
  • and yet, prices for cameras of absolutely identical quality, and even for simply identical cameras from different sellers, can differ by more than a factor of two!

Neizvestny Sergei Ivanovich
Nikulin Oleg Yurievich

CHARGE COUPLED DEVICES -
THE BASIS OF MODERN TELEVISION EQUIPMENT.
MAIN CHARACTERISTICS OF CCD.

In the previous article, a brief analysis of existing semiconductor light receivers was made and the structure and operating principle of charge-coupled devices were described in detail.

This article will discuss the physical characteristics of CCD matrices and their impact on the general properties of television cameras.

Number of elements of the CCD matrix.

Perhaps the most “basic” characteristic of CCD matrices is the number of elements. As a rule, the overwhelming majority of models have a standard number of elements oriented to the television standard: 512x576 pixels (these matrices are usually used in simple and cheap video surveillance systems) or 768x576 pixels (such matrices give the maximum resolution for a standard television signal).

The largest CCD manufactured and described in the literature is a single-crystal device from Ford Aerospace Corporation measuring 4096x4096 pixels with a pixel side of 7.5 microns.

In production, the yield of high-quality large devices is very low, so a different approach is used when creating CCD video cameras for large-format images. Many companies manufacture CCDs with leads located on three, two or one side (buttable CCDs), and mosaic CCDs are assembled from such devices. For example, Loral Fairchild produces a very interesting and promising 2048x4096 device with 15 µm pixels, whose leads are placed on one narrow side. The achievements of Russian industry are somewhat more modest: NPP Silar (St. Petersburg) produces a 1024x1024 CCD with 16 µm pixels, a buried charge-transfer channel, a virtual phase and leads on one side of the device. This architecture allows the devices to be butted together on three sides.

It is interesting to note that several specialized large-format light detectors based on CCD mosaics have already been created. For example, eight 2048x4096 CCDs from Loral Fairchild are used to assemble an 8192x8192 mosaic with total dimensions of 129x129 mm; the gaps between individual CCD chips are less than 1 mm. In some applications even relatively large gaps (up to 1 cm) are not considered a serious problem, because a full image can be obtained by summing, in computer memory, several exposures slightly offset relative to each other, thus filling the gaps. The image obtained by the 8192x8192 mosaic contains 128 MB of information, which is equivalent to approximately a 100-volume encyclopedia with 500 pages in each volume. While these numbers are impressive, they are still small compared to the size and resolution of photographic emulsions, which can be produced in huge sheets. Even the coarsest-grain 35 mm film contains up to 25 million resolvable grains (pixels) per frame.

Camera resolution

One of the main parameters of a television camera, its resolution (resolving power), directly depends on the number of elements of the CCD matrix. The resolution of the camera as a whole is also influenced by the parameters of the electronic signal-processing circuit and of the optics.

Resolution is defined as the maximum number of black and white stripes (i.e., the number of transitions from black to white or vice versa) that can be transmitted by the camera and distinguished by the recording system at the maximum detectable contrast.

If a camera's data sheet states that its resolution is N television lines, this means the camera allows you to distinguish N/2 dark vertical strokes on a light background, placed in a square inscribed in the image field. In relation to a standard television test chart, this implies the following: by choosing the distance and focusing the chart image, you must make the upper and lower edges of the chart image on the monitor coincide with the outer contours of the chart, marked by the vertices of the black and white prisms. Then, after final refocusing, the number is read at the point on the vertical wedge where the vertical strokes first cease to be distinguishable. This last remark is very important, because in the image of chart fields with 600 or more lines, alternating stripes are often visible which are in fact moiré, formed by beating between the spatial frequencies of the chart lines and of the grid of sensitive elements of the CCD matrix. The effect is especially pronounced in cameras with high-frequency spatial filtering.

The unit of resolution in television systems is the TVL (TV line). The vertical resolution of all cameras is almost the same, because it is limited by the television standard: there are 625 scan lines, and no more than 625 objects can be transmitted along this coordinate. It is the horizontal resolution that is usually quoted in technical descriptions.

In practice, in most cases, a resolution of 380-400 TV lines is quite sufficient for general television surveillance tasks. However, for specialized television systems and tasks, such as telemonitoring of a large space with one camera, viewing a large perimeter with a television camera with variable angular magnification (zoom), tracking at airports, railway stations, piers, supermarkets, identification and recognition systems for license plates, identification systems by face, etc., a higher resolution is required (for this, cameras with a resolution of 570 or more TV lines are used).

The resolution of color cameras is slightly worse than that of black-and-white ones. This is a consequence of the fact that the pixel structure of CCD matrices used in color television differs from that of black-and-white matrices. Figuratively speaking, a pixel in a color matrix is a combination of three pixels, each registering light in the red (R), green (G) or blue (B) part of the optical spectrum; thus three signals (an RGB signal) are taken from each element of the color CCD matrix. One would expect the effective resolution to be several times worse than that of black-and-white matrices. However, the resolution of color matrices deteriorates less, since their pixel size is about one and a half times smaller than that of a similar black-and-white matrix, so the resulting resolution loss is only 30-40%. The downside is a decrease in the sensitivity of color matrices, since the effective registration area of an image element becomes significantly smaller. The typical resolution of color television cameras is 300-350 TV lines.

In addition, the camera resolution is affected by the frequency band of the video signal output by the camera. To transmit a 300 TVL signal, a frequency band of 2.75 MHz is required (150 periods per 55 µs of active television line). The relationship between the video signal bandwidth f and the resolution TVL is given by:

f = (TVL / 2) × f_line,

where f is measured in MHz, the resolution TVL in TV lines, and the horizontal scanning frequency f_line = 18.2 kHz (the reciprocal of the 55 µs active line time).
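A minimal sketch of this bandwidth calculation, using the 18.2 kHz active-line rate quoted above (function name ours):

```python
ACTIVE_LINE_US = 55.0                      # active part of a TV line, in µs
LINE_FREQ_KHZ = 1000.0 / ACTIVE_LINE_US    # ≈ 18.2 kHz, as in the text

def bandwidth_mhz(tvl):
    # Each pair of black/white strokes is one signal period,
    # so f = (TVL / 2) * f_line.
    return (tvl / 2.0) * LINE_FREQ_KHZ / 1000.0

print(round(bandwidth_mhz(300), 2))   # 2.73 -> about the 2.75 MHz quoted
print(round(bandwidth_mhz(420), 2))   # 3.82 -> near the composite-video limit
```

The 420 TVL figure recovers the 3.8 MHz composite-signal ceiling discussed further below, which is a useful sanity check on the formula.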

Currently many semiconductor amplifiers with good frequency characteristics are available, so the bandwidth of camera amplifiers is usually made significantly (1.5-2 times) wider than necessary, so as not to affect the final resolution of the system in any way; the resolution is thus limited precisely by the discrete topology of the light-receiving region of the CCD matrix. Sometimes the mere use of a good electronic amplifier is advertised with beautiful phrases like “resolution enhancement” or “edge enhancement”. One must understand that this approach does not improve the resolution itself; it only improves the sharpness of black-white transitions, and even then not always.

However, there is one case where no tricks of modern electronics can raise the video bandwidth above 3.8 MHz: the composite color video signal. Since the chrominance signal is transmitted on a subcarrier (about 4.4 MHz in the PAL standard), the luminance signal has to be limited to a 3.8 MHz band (strictly speaking, the standard assumes comb filters to separate the chrominance and luminance signals, but real equipment has ordinary low-pass filters). This corresponds to a resolution of about 420 TVL. Some manufacturers now declare the resolution of their color cameras to be 480 TVL or more, but as a rule they do not stress that this resolution is realized only if the signal is taken from the Y-C (S-VHS) or component (RGB) output. In that case the luminance and chrominance signals are carried by two (Y-C) or three (RGB) separate cables from the camera to the monitor, and the monitor, as well as all intermediate equipment (switchers, multiplexers, video recorders), must also have Y-C (or RGB) inputs and outputs. Otherwise a single intermediate element processing a composite video signal will limit the bandwidth to the aforementioned 3.8 MHz and render the expense on costly cameras useless.

Quantum efficiency and quantum yield of a CCD camera.

By quantum efficiency we mean the ratio of the number of registered charges to the number of photons incident on the photosensitive region of the CCD crystal.

However, the concepts of quantum efficiency and quantum yield should not be confused. The quantum yield is the ratio of the number of photoelectrons produced in a semiconductor or near its boundary as a result of the photoelectric effect to the number of photons incident on this semiconductor.

Quantum efficiency is the quantum yield of the light-recording part of the receiver multiplied by the coefficient of conversion of the photoelectron charge into the registered useful signal. Since this coefficient is always less than unity, the quantum efficiency is also less than the quantum yield. The difference is especially large for devices with an inefficient signal-recording system.

In terms of quantum efficiency, CCDs are unmatched. For comparison: of every 100 photons entering the pupil of the eye, only one is perceived by the retina (quantum efficiency 1%); the best photographic emulsions have a quantum efficiency of 2-3%; electrovacuum devices (for example, photomultipliers) reach 20%; for CCDs this parameter can reach 95%, with typical values from 4% (low-quality CCDs used, as a rule, in cheap no-name camcorders) to 50% (a typical unselected Western camcorder). In addition, the range of wavelengths to which the eye responds is much narrower than that of CCDs. The spectral range of the photocathodes of traditional vacuum tubes and of photographic emulsions is also limited, whereas CCDs respond to light with wavelengths from angstroms (gamma and X-rays) to 1100 nm (infrared). This enormous range is much wider than the spectral range of any other detector known to date.


Fig. 1. An example of the quantum efficiency of a CCD matrix.
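The quantum-efficiency comparison above can be restated as a toy calculation; the percentages are the ones quoted in the text, and the dictionary is only an illustrative arrangement of them:

```python
# Fraction of incident photons actually registered, per the comparison above.
quantum_efficiency = {
    "human eye": 0.01,
    "best photo emulsion": 0.03,
    "photomultiplier": 0.20,
    "typical Western CCD": 0.50,
    "best CCD": 0.95,
}

for detector, qe in quantum_efficiency.items():
    # Out of 100 incident photons, how many produce a registered charge?
    print(f"{detector}: {qe * 100:.0f} of 100 photons registered")
```

Seen this way, the gap between the eye (1 of 100) and a good CCD (95 of 100) is nearly two orders of magnitude.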

Sensitivity and spectral range

Another important parameter of a television camera, sensitivity, is closely related to the concepts of quantum efficiency and quantum yield. While quantum efficiency and quantum yield are used mainly by developers and designers of new television systems, sensitivity is the working parameter of setup engineers, operating services and project designers at enterprises. In essence, the sensitivity and the quantum yield of the receiver are related by a linear function. If the quantum yield relates the number of photons incident on the light detector to the number of photoelectrons generated by the photoelectric effect, then sensitivity defines the response of the detector in electrical units (for example, in mA) to a certain incident light flux (for example, in W or lux). A distinction is made between bolometric sensitivity (the total sensitivity of the receiver over the entire spectral range) and monochromatic sensitivity, measured, as a rule, with a radiation flux of 1 nm (10 angstroms) spectral width. When it is said that the sensitivity of a receiver is given at a wavelength of, for example, 450 nm, this means the sensitivity is referred to the flux in the range from 449.5 nm to 450.5 nm. This definition of sensitivity, measured in mA/W, is unambiguous and causes no confusion.

However, for consumers of television equipment used in security systems, a different definition of sensitivity is more often used. Most often, sensitivity is understood as the minimum illumination on an object (scene illumination), at which the transition from black to white can be distinguished, or the minimum illumination on the matrix (image illumination).

From a theoretical point of view it would be more correct to specify the minimum illumination on the matrix, since then there is no need to state the characteristics of the lens, the distance to the object or its reflection coefficient (sometimes called “albedo”). Albedo is usually defined at a specific wavelength, although there is also a bolometric albedo. A sensitivity defined through the illumination of the object is hard to operate with objectively, especially when designing tele-recognition systems for long distances: many sensors cannot register an image of a person's face 500 meters away even if it is illuminated by very bright light.*

Note

* Problems of this kind appear in the practice of closed circuit television, especially in places with an increased threat of terrorism, etc. Television systems of this kind were developed in 1998 in Japan and are being prepared for mass production.

But when selecting a camera, it is more convenient for the user to work with the illumination of the object, which he knows in advance. Therefore, the minimum illumination on the object is usually specified, measured under standardized conditions: object reflectance 0.75 and lens aperture 1.4. The formula relating the illumination on the object and on the matrix is given below:

I_image = I_scene × R / (π × F²),

where I_image, I_scene are the illuminances of the CCD matrix and of the object (Table 1);
R is the object reflection coefficient (Table 2);
π is the number 3.14;
F is the lens f-number.

The values ​​of Iimage and Iscene usually differ by more than 10 times.
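The formula above is straightforward to apply; here is a minimal sketch under the standardized conditions given in the text (the 200 lux scene value is our example, taken from the well-lit-room range in Table 1):

```python
import math

def image_illuminance(scene_lux, reflectance, f_number):
    # I_image = I_scene * R / (pi * F^2), as given above
    return scene_lux * reflectance / (math.pi * f_number ** 2)

# Standardized conditions from the text: R = 0.75, F/1.4 lens
lux_on_ccd = image_illuminance(scene_lux=200.0, reflectance=0.75, f_number=1.4)
print(round(lux_on_ccd, 1))   # ≈ 24.4 lux on the matrix from a 200 lux scene
```

For darker objects or slower lenses the ratio between scene and matrix illuminance grows quickly, which is the point being made here.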

Illuminance is measured in lux. One lux is the illuminance produced by a point source of one international candle at a distance of one meter on a surface perpendicular to the rays of light.

Table 1. Approximate illumination of objects.

On the street (latitude of Moscow)
Cloudless sunny day 100,000 lux
Sunny day with light clouds 70,000 lux
Overcast day 20,000 lux
Early morning 500 lux
Twilight 0.1 - 4 lux
"White Nights"* 0.01 – 0.1 lux
Clear night, full moon 0.02 lux
Night, moon in the clouds 0.007 lux
Dark cloudy night 0.00005 lux
Indoors
Room without windows 100 – 200 lux
Well lit room 200 – 1000 lux

* “White nights” - lighting conditions that satisfy civil twilight, i.e. when the sun plunges below the horizon without taking into account atmospheric refraction by no more than 6°. This is true for St. Petersburg. For Moscow, the conditions of the so-called “navigational white nights” are met, i.e. when the disk of the sun plunges below the horizon by no more than 12°.

Often, camera sensitivity is specified for an “acceptable signal,” meaning a signal-to-noise ratio of 24 dB. This is an empirically determined limiting noise level at which the image can still be recorded on videotape with some hope of seeing anything during playback.

Another way to define an “acceptable” signal is the IRE (Institute of Radio Engineers) scale. The full video signal (0.7 V) is taken as 100 IRE units, and a signal of about 30 IRE is considered “acceptable”. Some manufacturers, in particular BURLE, specify sensitivity at 25 IRE, some at 50 IRE (signal level -6 dB). The choice of “acceptable” level is determined by the signal-to-noise ratio: amplifying an electronic signal is not difficult, but the noise is amplified along with it. Sony's Hyper-HAD matrices, which have a microlens on each photosensitive cell, currently have the highest sensitivity among mass-produced CCD matrices and are used in most high-quality cameras. The spread in the parameters of cameras built on them mainly reflects differences in how manufacturers define the notion of an “acceptable signal”.
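Since 100 IRE is defined as the full 0.7 V video signal, the levels mentioned above convert to voltages by simple proportion; a minimal sketch:

```python
FULL_VIDEO_VOLTS = 0.7   # 100 IRE units by definition

def ire_to_volts(ire):
    # Linear scale: 100 IRE corresponds to the full 0.7 V video signal.
    return FULL_VIDEO_VOLTS * ire / 100.0

for level in (100, 50, 30, 25):          # levels mentioned in the text
    print(f"{level} IRE = {ire_to_volts(level):.3f} V")
```

So a "50 IRE" sensitivity rating demands twice the signal voltage of a "25 IRE" rating, which is why figures from different manufacturers are not directly comparable.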

An additional problem with the definition of sensitivity is related to the fact that the unit of illumination measurement “lux” is defined for monochromatic radiation with a wavelength of 550 nm. In this connection, it makes sense to pay special attention to such a characteristic as the spectral dependence of the sensitivity of the video camera. In most cases, the sensitivity of black-and-white cameras is significantly extended, compared to the human eye, into the infrared range up to 1100 nm. Some modifications have sensitivity in the near-infrared region even higher than in the visible. These cameras are designed to work with infrared spotlights and in some respects are close to night vision devices.

The spectral sensitivity of color cameras is approximately the same as the human eye.


Fig. 2. An example of the spectral sensitivity of a color CCD matrix with standard RGB stripes.

Table 2. Approximate values ​​of reflection coefficients of various objects.

An object Reflection coefficient (%)
Snow 90
White paint 75-90
Glass 70
Brick 35
Grass, trees 20
Human face 15 – 25
Hard coal, graphite* 7

* It is interesting to note that the reflectivity of the lunar surface is also about 7%, i.e. the Moon is actually black.

Ultra-high-sensitivity cameras deserve special mention; in fact they are a combination of a conventional camera and a night-vision device (for example, a microchannel electron-optical converter, i.e. image intensifier). Such cameras have unique properties (sensitivity 100-10,000 times higher than usual; and in the mid-infrared range, where the radiation of the human body peaks, the body itself glows), but they are also uniquely capricious: the time between failures is about one year, and the cameras should not be switched on during the day; it is even recommended to cover the lens to protect the cathode of the image intensifier from burning out. At a minimum, lenses with an automatic iris range up to F/1000 or more should be installed. During operation the camera must be regularly turned slightly to avoid “burning in” an image on the cathode of the intensifier tube.

It is interesting to note that, unlike CCD matrices, image intensifier tube cathodes are very sensitive to maximum illumination. If the light-sensitive area of ​​a CCD camera after bright illumination returns relatively easily to its original state (it is practically not afraid of flare), then the cathode of the image intensifier after bright illumination takes a very long time (sometimes 3-6 hours) to “recover.” During this restoration, even with the input window closed, a residual, “enhanced” image is read from the cathode of the image intensifier. As a rule, after large exposures, due to the effects of reabsorption (the release of gases under the influence of bombardment of the channel walls with streams of accelerated electrons), the noise of the image intensifier tube and, in particular, multielectron and ion noise sharply increases over a large area of ​​microchannel plates. The latter appear in the form of frequent bright flashes of large diameter on the monitor screen, which makes it very difficult to isolate the useful signal. With even higher input light fluxes, irreversible processes can occur both with the cathode and with the output luminescent screen of the image intensifier tube: under the influence of a large flux, individual sections fail (“burn out”). With further operation, these areas have reduced sensitivity, which subsequently drops to zero.

Most ultra-high-sensitivity cameras use brightness amplifiers with yellow or yellow-green output fluorescent screens. In principle, the glow of these screens can be considered as a monochromatic radiation source, which automatically leads to the definition: systems of this type can only be monochrome (i.e., black and white). Taking this circumstance into account, system creators also select appropriate CCD matrices: with maximum sensitivity in the yellow-green part of the spectrum and with no sensitivity in the IR range.

A negative consequence of the high sensitivity of matrices in the IR range is the increased dependence of device noise on temperature. Therefore IR matrices used for evening and night work without image intensifiers, unlike television systems with intensifier tubes, are recommended to be cooled. The main reason the sensitivity of CCD cameras is shifted toward the IR region, compared with other semiconductor radiation detectors, is that redder photons penetrate deeper into silicon: silicon is more transparent in the long-wavelength region, while the probability of a photon being captured (converted into a photoelectron) still tends to unity.


Fig. 3. Dependence of the photon absorption depth in silicon on the wavelength.

For light with wavelengths greater than 1100 nm silicon is transparent (the photon energy is insufficient to create an electron-hole pair in silicon), while photons with wavelengths shorter than 300-400 nm are absorbed in a thin surface layer (already in the polysilicon structure of the electrodes) and do not reach the potential well.

As mentioned above, when a photon is absorbed, an electron-hole pair is generated, and the electrons are collected under the electrodes if the photon is absorbed in the depletion region of the epitaxial layer. With such a CCD structure a quantum efficiency of about 40% can be achieved (theoretically the limit for this design is about 50%). However, polysilicon electrodes are opaque to light with wavelengths shorter than 400 nm.

To achieve higher sensitivity in the short wavelength range, CCDs are often coated with thin films of substances that absorb blue or ultraviolet (UV) photons and re-emit them in the visible or red wavelength range.

Noise is any source of signal uncertainty. The following types of CCD noise can be distinguished.

Photon noise. This is a consequence of the discrete nature of light. Any discrete process obeys Poisson statistics. The photon flux (S, the number of photons incident on the photosensitive part of the receiver per unit time) follows these statistics as well, so the photon noise equals √S. The signal-to-noise ratio (denoted S/N) for the input signal is therefore:

S/N = S / √S = √S.
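These Poisson relations are easy to check numerically; a minimal sketch (the function name is ours, for illustration only):

```python
import math

def shot_noise_snr(photons: float) -> float:
    """S/N for a Poisson-limited signal: S / sqrt(S) = sqrt(S)."""
    return photons / math.sqrt(photons)

# The ratio improves only as the square root of the signal:
print(shot_noise_snr(100))     # 10.0
print(shot_noise_snr(10_000))  # 100.0
```

Collecting 100 times more photons improves the S/N only tenfold, which is why long exposures or large pixels pay off slowly.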

Dark-signal noise. If no light signal is applied to the input of the matrix (for example, the camcorder lens is tightly closed with a light-proof cap), the output of the system still contains so-called "dark" frames. The main component of the dark signal is thermionic emission: the lower the temperature, the smaller the dark signal. Thermionic emission also obeys Poisson statistics, so its noise equals √N_t, where N_t is the number of thermally generated electrons in the total signal. As a rule, CCD video cameras used in CCTV systems operate without active cooling, which makes dark noise one of the main noise sources.

Transfer noise. As a charge packet moves through the CCD elements, some electrons are lost: they are trapped on defects and impurities in the crystal. This transfer inefficiency varies randomly as a function of the number of electrons in the packet (N), the number of transfers (n), and the inefficiency of a single transfer (ε). Assuming that each packet is transferred independently, the transfer noise can be written as:

σ = √(2εnN).

Example: for a transfer inefficiency of 10⁻⁵, 300 transfers, and 10⁵ electrons per packet, the transfer noise is about 25 electrons.
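The worked example can be verified directly; a minimal sketch (function name ours):

```python
import math

def transfer_noise(eps: float, n: int, big_n: float) -> float:
    """Transfer noise sigma = sqrt(2 * eps * n * N) electrons, for a
    single-transfer inefficiency eps, n transfers, N electrons per packet."""
    return math.sqrt(2 * eps * n * big_n)

# The worked example from the text: eps = 1e-5, 300 transfers, 1e5 electrons.
print(transfer_noise(1e-5, 300, 1e5))  # ~24.49, which the text rounds to 25
```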

Readout noise. When the signal stored in a CCD element is taken off the matrix, converted to a voltage, and amplified, additional noise, called readout noise, is added to each element. Readout noise can be regarded as a basic noise floor that is present even in a frame with zero exposure, when the sensor is in complete darkness and the dark-signal noise is zero. Typical readout noise for good CCD samples is 15-20 electrons. The best CCDs, manufactured by Ford Aerospace using the Skipper technology, achieve a readout noise below 1 electron and a transfer inefficiency of 10⁻⁶.
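The text lists these noise sources one by one; assuming they are statistically independent (a standard assumption, not stated explicitly here), they combine in quadrature. A sketch under that assumption, with illustrative values:

```python
import math

def total_noise(*sigmas: float) -> float:
    """Combine statistically independent noise sources in quadrature."""
    return math.sqrt(sum(s * s for s in sigmas))

# e.g. 30 e- dark noise, 25 e- transfer noise, 20 e- readout noise:
print(total_noise(30, 25, 20))  # ~43.87 electrons
```

Note that the largest source dominates: improving the smaller terms barely changes the total, which is why uncooled CCTV cameras are limited mainly by dark noise.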

Reset noise, or kTC noise. Before a signal charge is introduced into the detecting node, the previous charge must be removed; a reset transistor is used for this. The electrical reset level depends only on the temperature and the capacitance of the detecting node, which introduces a noise of:

σ_r = √(kTC) / q,

where k is Boltzmann's constant, T is the absolute temperature, C is the capacitance of the detecting node, and q is the electron charge.

For a typical node capacitance C of 0.1 pF at room temperature, the reset noise is about 130 electrons. kTC noise can be suppressed almost completely by a special signal-processing method: correlated double sampling (CDS). The CDS method also effectively removes the low-frequency interference usually introduced by power-supply circuits.
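The kTC estimate is easy to reproduce; a minimal sketch (names ours, CODATA values for the physical constants):

```python
import math

K_BOLTZMANN = 1.380649e-23        # J/K
ELECTRON_CHARGE = 1.602176634e-19  # C

def ktc_noise_electrons(capacitance_farads: float,
                        temp_kelvin: float = 293.0) -> float:
    """Reset (kTC) noise expressed in electrons: sqrt(k*T*C) / q."""
    return math.sqrt(K_BOLTZMANN * temp_kelvin * capacitance_farads) / ELECTRON_CHARGE

# 0.1 pF at room temperature -> roughly 126 electrons,
# the same order as the "about 130" quoted in the text.
print(ktc_noise_electrons(0.1e-12))
```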

Since CCTV systems do most of their work in the dark (or in poorly lit rooms), it is especially important to pay attention to low-noise video cameras, which perform better in low-light conditions.

The parameter describing the relative amount of noise, as mentioned above, is called the signal-to-noise ratio (S/N) and is measured in decibels:

S/N = 20 × log(signal/noise)

For example, a signal-to-noise ratio of 60 dB means that the signal is 1000 times greater than the noise.
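The decibel conversion can be sketched directly (function name ours):

```python
import math

def snr_db(signal: float, noise: float) -> float:
    """Signal-to-noise ratio in decibels: 20 * log10(signal / noise)."""
    return 20 * math.log10(signal / noise)

print(snr_db(1000, 1))  # 60.0 dB: the signal is 1000x the noise
print(snr_db(10, 1))    # 20.0 dB: a barely usable picture
```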

With a signal-to-noise ratio of 50 dB or more, the monitor shows a clean picture without visible noise; at 40 dB flickering dots are sometimes noticeable; at 30 dB there is "snow" all over the screen; at 20 dB the image is almost unusable, although large contrasting objects can still be made out through the continuous "snow".

The figures given in camera descriptions are signal-to-noise values for optimal conditions, for example with 10 lux of illumination on the matrix and with automatic gain control (AGC) and gamma correction turned off. As illumination decreases, the signal becomes smaller, while the noise, owing to the action of AGC and gamma correction, becomes larger.

Dynamic range

Dynamic range is the ratio of the maximum possible signal generated by a light receiver to its own noise. For a CCD, this parameter is defined as the ratio of the largest charge packet that can be accumulated in a pixel to the readout noise. The larger the pixels of a CCD, the more electrons they can hold. For different types of CCD this value ranges from 75,000 to 500,000 and above. With a readout noise of 10 e⁻ (CCD noise is measured in electrons, e⁻), the dynamic range of a CCD reaches 50,000. A large dynamic range is especially important when shooting outdoors in bright sunlight, or at night when there are large differences in illumination: the bright light of a lantern next to the unlit shadow side of an object. By comparison, the best photographic emulsions have a dynamic range of only about 100.
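A minimal sketch of this definition (names ours):

```python
def dynamic_range(full_well_electrons: float,
                  read_noise_electrons: float) -> float:
    """Dynamic range = largest storable charge packet / readout noise."""
    return full_well_electrons / read_noise_electrons

# The text's example: a 500,000-electron well and 10 e- readout noise.
print(dynamic_range(500_000, 10))  # 50000.0
```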

For a clearer understanding of some characteristics of CCD receivers, above all the dynamic range, let us briefly compare them with the properties of the human eye.

The eye is the most universal light receiver.

Until now, the most effective and perfect light detector in terms of dynamic range (and, especially, in terms of the efficiency of image processing and restoration) is the human eye. The fact is that the human eye combines two types of light detectors: rods and cones.

The cones are small and have relatively low sensitivity. They are concentrated mainly in the area of the central macula and are practically absent on the periphery of the retina. The cones are good at distinguishing light of different wavelengths; more precisely, they have a mechanism for generating different neural signals depending on the color of the incident flux. Therefore, under normal lighting conditions, the eye has its maximum angular resolution and its maximum color discrimination near the optical axis of the lens. Some people, however, have a pathological reduction, or even absence, of the ability to form different neural signals depending on the wavelength of light. This pathology is called color blindness. People with acute vision are almost never color blind.

The rods are distributed almost evenly over the rest of the retina, are larger, and are therefore more sensitive.

In daylight conditions the signal from the cones significantly exceeds the signal from the rods; the eye is tuned to work in bright light (so-called "daytime" vision). Rods, compared to cones, have a higher level of "dark" signal (in darkness we see false light "sparkles").

If a rested person with normal vision is placed in a dark room and allowed to adapt ("get used") to the darkness, the "dark" signal of the rods decreases greatly and the rods begin to perceive light efficiently ("twilight" vision). In the famous experiments of S.I. Vavilov it was shown that the human eye (in its "rod" mode) is capable of registering individual groups of 2-3 light quanta.

Thus, the dynamic range of the human eye, from bright sun to individual photons, is about 10¹⁰ (i.e. 200 decibels!). The best artificial light detector by this parameter is the photomultiplier tube (PMT). In photon-counting mode it has a dynamic range of up to 10⁵ (100 dB), and with automatic switching to analog registration the dynamic range of a PMT can reach 10⁷ (140 dB), which is still a thousand times worse than the human eye.

The spectral sensitivity range of the cones is very wide (from 4200 to 6500 angstroms) with a maximum at approximately 5550 angstroms. The rods have a narrower spectral range (from 4200 to 5200 angstroms) with a maximum at a wavelength of about 4700 angstroms. Therefore, during the transition from daylight to twilight vision, a person loses the ability to distinguish colors (not for nothing do they say "at night all cats are gray"), and the effective wavelength shifts toward the blue part of the spectrum, toward higher-energy photons. This shift of spectral sensitivity is called the Purkinje effect. Indirectly, many color CCD matrices whose RGB channels are not balanced to white exhibit a similar effect. This should be taken into account when obtaining and using color information in television systems with cameras that lack automatic white balance.

Linearity and gamma correction.

CCDs have a high degree of linearity. In other words, the number of electrons collected per pixel is strictly proportional to the number of photons hitting the CCD.

The "linearity" parameter is closely related to the "dynamic range" parameter. The dynamic range can, as a rule, significantly exceed the linearity range if the system provides hardware or software correction for operation in the nonlinear region. Typically, a signal deviating from linearity by no more than 10% can be corrected easily.

A completely different situation is observed in the case of photographic emulsions. Emulsions have a complex response to light and, at best, can achieve a photometric accuracy of 5%, and then only in part of their already narrow dynamic range. CCDs are linear with an accuracy of 0.1% over almost the entire dynamic range. This makes it relatively easy to eliminate the influence of sensitivity inhomogeneity across the field. In addition, CCDs are positionally stable. The position of an individual pixel is strictly fixed during the manufacture of the device.

The picture tube (kinescope) in the monitor has a power-law dependence of brightness on signal (exponent 2.2), which reduces contrast in dark areas and increases it in bright ones; at the same time, as already noted, modern CCDs produce a linear signal. To compensate for the overall nonlinearity, a gamma corrector is usually built into the camera, predistorting the signal with an exponent of 1/2.2, i.e. 0.45. Some cameras allow a choice of predistortion exponent; for example, a setting of 0.60 gives a subjective increase in contrast, which produces the impression of a "sharper" picture. A side effect is that gamma correction additionally amplifies weak signals (including noise): the same camera with G = 0.45 enabled will appear roughly four times more "sensitive" than with G = 1. However, let us remind you once more: no amplifier can improve the signal-to-noise ratio.
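The round trip of camera predistortion followed by the display's power-law response can be sketched as follows (function name ours; signals normalized to 0..1):

```python
def gamma_correct(signal: float, gamma: float = 1 / 2.2) -> float:
    """Predistort a linear signal (normalized to 0..1) with exponent ~0.45."""
    return signal ** gamma

# The display's power-law response (exponent 2.2) undoes the predistortion,
# so the chain is linear end to end:
predistorted = gamma_correct(0.25)
print(round(predistorted ** 2.2, 6))  # 0.25

# Weak signals are boosted far more than strong ones:
print(gamma_correct(0.01))  # ~0.123, a large lift of a weak (noisy) signal
```

This also illustrates the side effect noted above: the deep shadows, where noise lives, receive the largest gain.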

Charge spreading.

The maximum number of electrons that can accumulate in a pixel is limited. For matrices of average workmanship and typical sizes this value is usually about 200,000 electrons. If the number of photons arriving during the exposure (frame) approaches this limit (200,000 or more at a quantum yield of 90% or higher), the charge packet begins to overflow into neighboring pixels and details of the image start to merge. The effect is enhanced when the "extra" light flux not absorbed by the thin body of the crystal is reflected from the base substrate. At light fluxes within the dynamic range, photons do not reach the substrate; almost all of them (at high quantum yield) are converted into photoelectrons. Near the upper limit of the dynamic range, however, saturation sets in, and unconverted photons begin to "wander" around the crystal, largely keeping the direction in which they entered it. Most of these photons reach the substrate, are reflected, and thereby raise the probability of later conversion into photoelectrons, oversaturating the charge packets already at the spreading boundary. If an absorbing layer, the so-called anti-blooming coating, is applied to the substrate, the spreading effect is greatly reduced. Many modern matrices produced with newer technologies have anti-blooming, which is one component of the backlight compensation system.

Stability and photometric accuracy.

Even the most sensitive CCD video cameras are useless for low-light applications if their sensitivity is not stable. Stability is an inherent property of the CCD as a solid-state device. Here we mean, first of all, the stability of sensitivity over time. Temporal stability is checked by flux measurements against special stabilized radiation sources. It is determined by the stability of the quantum efficiency of the matrix itself and by the stability of the electronic readout, amplification, and recording system. This resulting stability of the video camera is the main parameter determining photometric accuracy, i.e. the accuracy with which the recorded light signal is measured.

For good matrix samples and a high-quality electronic system, the photometric accuracy can reach 0.4 - 0.5%, and in some cases, under optimal matrix operating conditions and the use of special signal processing methods, 0.02%. The resulting photometric accuracy is determined by several main components:

  • temporal instability of the system as a whole;
  • spatial heterogeneity of sensitivity and, above all, heterogeneity of high-frequency (i.e. from pixel to pixel);
  • the quantum efficiency of the video camera;
  • accuracy of video signal digitization for digital video cameras;
  • the magnitude of noise of different types.

Even if a CCD matrix has large inhomogeneities in sensitivity, their effect on the resulting photometric accuracy can be reduced by special signal-processing methods, provided, of course, that these inhomogeneities are stable over time. On the other hand, if a matrix has a high quantum efficiency but poor stability, the resulting accuracy of recording the useful signal will be low. In this sense, for unstable devices the accuracy of registering the useful signal (photometric accuracy) is a more important characteristic than the signal-to-noise ratio.

