Sensors by themselves detect only grayscale levels (gradations of light intensity, from completely black to completely white). To let a camera distinguish colors, an array of color filters is applied to the silicon using a photolithography process. In sensors that use microlenses, the filters are placed between the lenses and the photodetectors. In scanners that use trilinear CCDs (three CCD lines placed side by side, responding to red, green, and blue respectively), and in high-end digital cameras that likewise use three sensors, each sensor filters a different color of light. (Note that some multi-sensor cameras use combinations of filter colors other than the standard three.) In single-sensor devices, however, which include most consumer digital cameras, a color filter array (CFA) is used to separate the colors.

For each pixel to record its own primary color, a filter of the corresponding color is placed above it. Before reaching the pixel, photons first pass through a filter that transmits only light of its own color; light of other wavelengths is simply absorbed. Any color in the spectrum can be obtained by mixing just a few primary colors, and in the RGB model there are three of them.

Color filter arrays are developed separately for each application, but in digital camera sensors the most popular arrangement by far is the Bayer pattern. This technology was invented at Kodak in the 1970s during research into spatial separation. In this system the filters are interleaved in a checkerboard pattern, with twice as many green filters as red or blue ones; the red and blue filters sit between the green ones.

This ratio is explained by the structure of the human eye, which is more sensitive to green light. The checkerboard arrangement, in turn, ensures that images have the same color balance no matter how you hold the camera (vertically or horizontally). When information is read out from such a sensor, the colors are written sequentially, line by line: the first line is BGBGBG, the next GRGRGR, and so on. This scheme is called sequential RGB.
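As an illustration only, the readout layout just described (rows alternating B-G-B-G and G-R-G-R) can be sketched in a few lines of Python; the `bayer_pattern` function below is a hypothetical helper, not part of any camera software:

```python
# Sketch: generate the Bayer color-filter layout described above,
# with even rows B-G-B-G and odd rows G-R-G-R.
def bayer_pattern(rows, cols):
    """Return a rows x cols grid of filter letters 'R', 'G', 'B'."""
    grid = []
    for y in range(rows):
        row = []
        for x in range(cols):
            if y % 2 == 0:            # even rows: B G B G ...
                row.append('B' if x % 2 == 0 else 'G')
            else:                     # odd rows:  G R G R ...
                row.append('G' if x % 2 == 0 else 'R')
        grid.append(row)
    return grid

for row in bayer_pattern(4, 4):
    print(' '.join(row))
# Every 2x2 block contains two green filters, one red and one blue,
# so green filters outnumber red or blue two to one.
```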

In CCD cameras, the three signals are combined not on the sensor but in the image-forming circuitry, after the signal has been converted from analog to digital. In CMOS sensors this combination can happen directly on the chip. In either case, the primary colors for each pixel are mathematically interpolated from the colors of neighboring filters. Note that in any given image most points are mixtures of the primary colors, and only a few are pure red, green, or blue.

For example, during linear interpolation a 3x3 block of pixels may be processed to determine how the neighbors influence the color of the central one. Take the simplest case: three pixels with blue, red, and blue filters in one line (B-R-B), where we want the resulting color of the central red pixel. If all the readings are equal in intensity, the color of the central pixel is computed as two parts blue to one part red. In practice, even simple linear interpolation algorithms are considerably more complex and take into account the values of all surrounding pixels. If the interpolation is poor, jagged edges or color artifacts appear at color boundaries.
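A minimal, one-dimensional sketch of the averaging idea behind this: the red-filtered pixel measured no blue at all, so a blue value is estimated from its blue neighbors. Real demosaicing works on a full two-dimensional neighborhood; the `interpolate_missing` helper here is a hypothetical name for illustration only.

```python
# Toy illustration: in a B-R-B run of pixels, the missing blue value
# at the central (red-filtered) pixel is estimated from its neighbors.
def interpolate_missing(left, right):
    """Estimate a missing color sample as the mean of its two neighbors."""
    return (left + right) / 2

# Raw sensor readings along one line: blue, red, blue filters.
blue_left, red_center, blue_right = 100, 180, 120

# The central pixel measured only red; its blue component is interpolated:
blue_at_center = interpolate_missing(blue_left, blue_right)
print(blue_at_center)  # 110.0
```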

Note that the word “resolution” is used loosely in digital imaging. Purists (or pedants, whichever you prefer) familiar with photography and optics know that resolution is a measure of the ability of the human eye or an instrument to distinguish individual lines on a resolution chart, such as an ISO test chart. But in the computer industry it is customary to call the pixel count “resolution”, and since even sensor developers do so, we will follow that convention here.


Shall we count?

The image file size depends on the number of pixels (the resolution): the more pixels, the larger the file. For example, an image from a VGA-class sensor (640x480, or 307,200 active pixels) takes up about 900 kilobytes uncompressed (307,200 pixels x 3 bytes of R-G-B data = 921,600 bytes, which is approximately 900 kilobytes). An image from a 16 MP sensor will take up about 48 megabytes.
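The arithmetic above is easy to reproduce. This small sketch assumes 3 bytes per pixel (one per R, G, B channel) and uses a hypothetical 4608x3456 layout for the 16 MP frame; the helper name is ours, not from any library:

```python
# Uncompressed RGB size is simply width x height x 3 bytes
# (one byte per color channel), as in the text's VGA example.
def uncompressed_size_bytes(width, height, bytes_per_pixel=3):
    return width * height * bytes_per_pixel

vga = uncompressed_size_bytes(640, 480)
print(vga, "bytes =", vga / 1024, "KB")      # 921600 bytes = 900.0 KB

# A common 16 MP frame layout (assumed here): 4608 x 3456 pixels.
mp16 = uncompressed_size_bytes(4608, 3456)
print(mp16, "bytes")                          # about 47.8 million bytes,
# i.e. roughly the "about 48 megabytes" quoted in the text
# (counting decimal megabytes).
```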

It would seem that nothing could be simpler than counting the sensor's pixels to determine the size of the resulting image. Yet camera manufacturers present a bunch of different numbers, each time claiming that this one is the camera's true resolution.

The total pixel count includes every pixel that physically exists on the sensor, but only those that participate in capturing the image are counted as active. About five percent of all pixels do not contribute to the image: they are either defective or used by the camera for other purposes, for example masked to measure the dark-current level or to define the frame format.

Frame format (aspect ratio) is the relationship between the width and height of the sensor. In some sensors, such as those with 640x480 resolution, this ratio is 1.33:1, which matches the aspect ratio of most computer monitors. This means that images created by such sensors fit the monitor screen exactly, without prior cropping. In many devices the frame format corresponds to that of traditional 35 mm film, where the ratio is 1.5:1, which allows you to take pictures of a standard size and shape.


Resolution interpolation

In addition to optical resolution (the actual ability of pixels to respond to photons), there is also resolution raised in hardware and software by interpolation algorithms. Like color interpolation, resolution interpolation mathematically analyzes data from neighboring pixels and creates intermediate values between them. This "insertion" of new data can go quite smoothly, with the interpolated values falling somewhere between the real optical data. But the operation can also introduce noise, artifacts, and distortion that only degrade the image. Many pessimists therefore argue that resolution interpolation is not a way to improve image quality at all, merely a method for inflating files. When choosing a device, pay attention to which resolution is specified, and don't get too excited about a high interpolated figure (it is usually marked as "interpolated" or "enhanced").

Another software-level image processing step is subsampling, which is essentially the inverse of interpolation. It is performed during image processing, after the data has been converted from analog to digital form, and it removes data from some of the pixels. In CMOS sensors this operation can be performed on the chip itself, by temporarily disabling the readout of certain lines of pixels or by reading data only from selected pixels.
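A toy sketch of this idea: keep only every second row and column of a pixel grid and discard the rest. The `subsample` helper is hypothetical and works on plain nested lists rather than real sensor data:

```python
# Sketch of subsampling as described above: keep every `step`-th pixel
# of every `step`-th row, discarding the data in between.
def subsample(pixels, step=2):
    """pixels is a list of rows; return the decimated grid."""
    return [row[::step] for row in pixels[::step]]

# A 4x4 dummy "image" whose pixels record their own (y, x) position.
image = [[(y, x) for x in range(4)] for y in range(4)]
small = subsample(image)
print(len(small), "x", len(small[0]))  # 2 x 2 -> a quarter of the data
```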

Downsampling serves two purposes. The first is data compaction: the fewer the pixels, the smaller the file, so more pictures fit on a memory card or in the device's internal memory, and the less often you have to offload photos to a computer or change memory cards.

The second purpose is to create images of a specific size for specific uses. A camera with a 2 MP sensor is quite capable of producing a standard 8x10-inch print, but attaching such a photo to an e-mail noticeably inflates the message. Downsampling lets you process the image so that it looks fine on your friends' monitors (if you are not after fine detail) while still transferring quickly even over a slow connection.

Now that we are familiar with the principles of sensor operation and know how an image is produced, let's look a little deeper and touch on more complex situations that arise in digital photography.

The mobile phone market is filled with models boasting huge camera resolutions; there are even relatively inexpensive smartphones with 16-20 megapixel sensors. An uninformed buyer chases a “cool” camera and prefers the phone with the higher camera resolution, never realizing that he is taking the bait laid out by marketers and sellers.

What is resolution?

Camera resolution is a parameter that indicates the final size of the image. It determines only how large the resulting picture will be, that is, its width and height in pixels. Importantly, picture quality does not change with it: a photo can be low-quality yet large simply because of its resolution.

Resolution does not affect quality. This had to be said before discussing smartphone camera interpolation. Now to the point.

What is camera interpolation in a phone?

Camera interpolation is an artificial increase in image resolution. It is the image that is enlarged, not the sensor. In other words, special software interpolates an image with a resolution of, say, 8 megapixels up to 13 megapixels or more (or less).

To draw an analogy, camera interpolation is like binoculars: the image is enlarged, but it does not become better or more detailed. So if interpolation is listed in the phone's specifications, the camera's real resolution may be lower than stated. That is neither good nor bad; it simply is.

What is this for?

Interpolation was invented to increase the size of an image, nothing more. Today it is a ploy of marketers and manufacturers trying to sell a product: they print the camera resolution in large figures on the advertising poster and present it as an advantage. Not only does resolution itself not determine photo quality, it may also be interpolated.

Literally 3-4 years ago, many manufacturers were chasing megapixel counts and tried in various ways to cram as many megapixels as possible into their smartphone sensors. This is how smartphones with 5, 8, 12, 15, and 21 megapixel cameras appeared. They could take photographs like the cheapest point-and-shoot cameras, but when buyers saw the “18 MP camera” sticker, they immediately wanted to buy such a phone. With the advent of interpolation, selling such smartphones became even easier thanks to the ability to artificially add megapixels to the camera. Of course, photo quality did improve over time, but certainly not because of resolution or interpolation; it was natural progress in sensor and software development.

Technical side

What is camera interpolation in a phone technically, since all the text above described only the basic idea?

New pixels are “drawn” onto the image by special software. For example, to double the height of an image, a new line is added after each line of pixels, and every pixel in the new line is filled with a color computed by an algorithm. The very simplest method is to fill the new line with the colors of the nearest existing pixels. The result of such processing looks terrible, but the method requires a minimum of computation.
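This “nearest pixel” method can be sketched in a few lines. For illustration only the height is doubled, by following each row with a copy of itself; the helper name is ours:

```python
# Sketch of the "nearest pixel" method from the text: to double the
# height, each row of pixels is followed by a copy of itself.
def upscale_rows_nearest(pixels):
    out = []
    for row in pixels:
        out.append(list(row))   # the original row
        out.append(list(row))   # inserted row: copies of nearest pixels
    return out

image = [[10, 20], [30, 40]]    # tiny 2x2 grayscale "image"
print(upscale_rows_nearest(image))
# [[10, 20], [10, 20], [30, 40], [30, 40]]
```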

More often, another method is used: new rows of pixels are added to the original image, and each inserted pixel is filled with a color calculated as the average of its neighbors. This gives better results but requires more computation.
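The averaging variant can be sketched the same way. Again only the height is doubled, and for simplicity this hypothetical helper just duplicates the final row, since it has no neighbor below:

```python
# Sketch of the averaging method: each inserted row gets the mean of
# the rows above and below it (the last row is simply duplicated).
def upscale_rows_average(pixels):
    out = []
    for i, row in enumerate(pixels):
        out.append(list(row))
        if i + 1 < len(pixels):
            nxt = pixels[i + 1]
            out.append([(a + b) // 2 for a, b in zip(row, nxt)])
        else:
            out.append(list(row))  # no row below: duplicate instead
    return out

image = [[10, 20], [30, 40]]       # tiny 2x2 grayscale "image"
print(upscale_rows_average(image))
# [[10, 20], [20, 30], [30, 40], [30, 40]]
```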

Fortunately, modern mobile processors are fast, and in practice the user does not notice how the program edits the image, trying to artificially increase its size.

There are many advanced interpolation methods and algorithms, and they are constantly being improved: transitions between colors are smoothed, lines become more accurate and crisp. But it does not matter how these algorithms are built; the basic idea of camera interpolation remains trivial. Interpolation cannot make an image more detailed, add new details, or otherwise improve it. Only in the movies does a small blurry picture become sharp after a couple of filters; in practice this cannot happen.

Do you need interpolation?

Many users, out of ignorance, ask on forums how to “do” camera interpolation, believing it will improve their pictures. In reality, interpolation not only fails to improve a picture, it can make it worse: new pixels are added, and because the fill colors are not always calculated accurately, the photo may gain blurred areas and graininess. As a result, quality drops.

So interpolation in a phone is a marketing ploy you do not need. It can increase not only the photo's resolution but also the smartphone's price. Don't fall for the tricks of sellers and manufacturers.


A P2P camera is an IP camera whose software lets you identify it and connect to it remotely by a unique ID number, without using a static IP address or features such as DDNS and UPnP. P2P cameras were designed to make it easier for ordinary, non-specialist users to set up remote camera access.

How does a P2P camera work?

When a P2P camera is connected to the Internet (via a router or a 3G connection), it automatically sends a request to a remote server, which identifies the camera by its unique ID number. To access the camera and watch video, the user installs a special application from the camera's developer on a computer or mobile device. In this application the user enters the camera's ID number (or scans the camera's QR code to avoid typing it manually), after which he can watch live video from the camera, browse the video archive on the SD card, control a PTZ mount, and use other functions. The server acts as an intermediary, connecting the IP camera and the user's device directly.

Why is P2P technology needed?

This technology is designed to make IP camera installation as easy as possible for the end user. Without it, remote access to a camera requires a static IP address or special skills. With a P2P camera, an average user spends no more than ten minutes installing the camera and setting up remote viewing.

Application areas of P2P cameras

P2P cameras provide a full-fledged video surveillance system with remote access from anywhere in the world, easy installation, and a low price. The main areas of application of P2P cameras:

  • surveillance of a country house and/or plot
  • apartment security monitoring
  • pet monitoring
  • small business security and point of sale surveillance
  • patient monitoring
  • use in state and municipal institutions, etc.

Companies involved in the development and production of P2P cameras

The world leader in the production of P2P cameras is Cisco.

What does "5.0MP Interpolation" and "8.0MP Interpolation" mean?

In the description of the DOOGEE X5 smartphone I discovered an interesting and, at the same time, unclear point:
Dual Cameras: 2.0MP (5.0MP Interpolation) Front Camera; 5.0MP (8.0MP Interpolation) Rear camera with flash and autofocus.

What does "5.0MP Interpolation" and "8.0MP Interpolation" mean?
So how many megapixels do the cameras really have: 2 and 5, or 5 and 8?

Living creature

It means “FUCKING”... they pass off shitty cameras as high-quality... A 2MP camera programmatically produces a 5MP image... they are trying to sell you a fake... the original DVRs do not use interpolation...

Vladssto

This means that the camera physically has a real resolution of, say, 5 MP, while the smartphone runs software that spreads adjacent pixels apart and inserts between them a new pixel whose color is somewhere between its neighbors', so the output is a photo with a resolution of 8 MP.
This doesn't really affect the quality; a higher-resolution photo can simply be zoomed in further to view the details.

The smartphone has an 8 MPix camera. What does interpolation up to 13 MPix mean?

Sergey 5

Up to 13 MPix could mean 8 real MPix, as in your phone, or 5 real MPix. The camera software interpolates the camera's output up to 13 MPix, not enhancing the image but electronically enlarging it. Simply put, like a magnifying glass or binoculars: the quality doesn't change.

This means that the camera itself takes photos at up to 8 MPix but can enlarge them in software to 13 MPix. The enlargement is purely programmatic: the image does not get any better and remains, in effect, an 8 MPix picture. It is purely a manufacturer's trick, and such smartphones cost more.

Consumer

To put it simply, when creating a photo the processor adds its own pixels to the active pixels of the matrix, effectively computing the picture and “drawing” it up to 13 megapixels. The output: an 8 MP matrix and a 13 MP photo. Quality does not improve much from this.

Violet a

Camera interpolation is a manufacturer's trick that artificially inflates the price of a smartphone.

If you have an 8 MPix camera, it can take a picture of exactly that size; interpolation does not improve photo quality, it simply enlarges the photo to 13 megapixels.

S s s r

Megapixel interpolation is a software blurring of the image: the real pixels are spread apart, and additional ones are inserted between them, colored with the average of the pixels that were moved apart. Nonsense and self-deception that nobody needs; the quality does not improve.

Mastermiha

Chinese smartphones use this all the time now: a 13 MP camera sensor costs much more than an 8 MP one, so they fit an 8 MP sensor and let the camera application stretch the resulting image. As a result, these “13 MP” shots look noticeably worse if you view them at their full resolution.

In my opinion, this function is of no use at all: 8 MP is quite enough for a smartphone (3 MP is enough for me, in principle); the main thing is that the camera itself is of high quality.

Azamatik

Good day.

This means that your smartphone stretches a photo taken with its 8 MPix camera to 13 MPix, by spreading the real pixels apart and inserting additional ones between them.

But if you compare a photo actually taken at 13 MP with an 8 MP photo interpolated to 13, the quality of the second will be noticeably worse.

Doubloon

This means that your camera had 8 MPix and still has 8 MPix, no more and no less; everything else is a marketing ploy, a pseudo-scientific fooling of the public to sell the product at a higher price, nothing more. The function is worthless, and photo quality is lost during interpolation.

Moreljuba

This term means that your device's camera will still shoot at 8 MPix, but software can now upscale the result to 13 MPix. The quality does not get any better; the space between the pixels simply gets filled in, that's all.

Gladius74

Interpolation is a method of finding intermediate values.

Translated into plainer language, as applied to your question, it means the following:

  • The software can process (enlarge, stretch) files up to 13 MPix.

Marlena

The fact is that the real camera in such phones is 8 megapixels, but internal software stretches the images to 13 megapixels. The result never reaches true 13-megapixel quality.

Camera in a mobile phone

For several years now, manufacturers have been combining mobile phones with digital cameras. Such a camera is called digital because the image it produces consists of dots whose quality and quantity can be described in numbers and therefore stored on modern digital media. Accordingly, the quality of a digital camera is usually judged by the maximum number of dots in which it can save the resulting image. For professional standalone cameras many other parameters matter too, such as the quality of the optics, the size of the light-sensitive matrix that directly receives the analog image from the lens, the operating principle of the matrix itself (CMOS, CCD), and much more. But for cameras built into a phone body, without high-quality optics, with a minimal matrix size and similar economies, the main parameter remains the maximum number of dots at which the camera can actually perceive the image from the lens. Yet many cameras can save an image to the phone's memory at a higher resolution; this is called interpolation. During interpolation, the physically captured image is enlarged programmatically to the dimensions declared by the marketers. This operation can be performed on any computer, so the value of interpolation as a feature is very doubtful in any camera, not just a phone one. So when choosing the phone with the best camera, take the time to read each device's description online so as not to be taken in by an interpolated image.

Camera quality, or rather image size, is usually measured in megapixels, that is, millions of dots. The more dots into which the camera's matrix can digitize an image, the better, in principle. All other things being equal, a 4-megapixel camera shoots somewhat better than a 2-megapixel one (other factors matter too, of course). It should be noted, though, that with good optics a high-quality matrix sometimes digitizes better than a low-quality counterpart with more pixels.

Typical resolutions are 0.3 megapixels (640x480), 1.3 megapixels (1280x960), 2 megapixels (1600x1200), and 4 megapixels (2304x1728). Without a proper flash and high-quality optics, even a four-megapixel photo is not yet good enough to print on photo paper: flaws will be visible to the naked eye. However, in good natural (sun) light, a 1.3-megapixel camera can already produce an image that, printed on standard 10x15 photo paper and viewed at arm's length, will not differ from one taken with a good camera.

Article provided by the site Mobile Life from Dolche-Mobile.Ru
