Interpolation of a digital image. What is camera interpolation in a phone, what is it for, and how is it done on Android?

Mobile phone camera

For several years now, manufacturers have been building digital cameras into mobile phones. Such a camera is called digital because the image it produces consists of dots whose number and values can be described by numbers and therefore stored on modern digital media. Accordingly, the quality of a digital camera is usually judged by the maximum number of dots in which it can store the resulting image. Of course, for professional standalone cameras many other parameters matter as well, such as the quality of the optics, the size of the light-sensitive sensor that receives the analog image from the lens, the operating principle of the sensor itself (CMOS, CCD) and much more. For cameras built into a phone case, which lack high-quality optics and have tiny sensors and similar miniaturization compromises, the main parameter is the maximum number of dots at which the camera can actually capture an image from the lens.

But many cameras can store an image in the phone's memory at a higher resolution than that; this is called interpolation. During interpolation, the image that was physically captured is enlarged in software to the dimensions declared by the marketers. Such an operation can be performed on any computer, so the value of interpolation as a feature is very doubtful, not only in a phone but in any camera. So when choosing a phone with the best camera, take the time to read the description of each device on the Internet so as not to end up with an interpolated image.

Camera quality, or rather image size, is usually measured in megapixels, that is, millions of dots. The more dots into which the camera's sensor can digitize the image, the better, in principle. Other things being equal, we can assume that a 4-megapixel camera shoots somewhat better than a two-megapixel one; not twice as well, of course, since other factors come into play. It should also be noted that there are cases when a high-quality sensor behind good optics digitizes better than a low-quality counterpart with more pixels.

Typical cameras are 0.3 megapixels (640x480), 1.3 megapixels (1280x960), 2 megapixels (1600x1200) and 4 megapixels (2304x1728). The lack of a proper flash and high-quality optics means that even a four-megapixel photo is still not good enough to print on photographic paper: flaws will be visible to the naked eye. However, in good natural (sun) light, a 1.3-megapixel camera can already produce an image that, when printed on standard 10x15 photo paper and viewed at arm's length, will not differ from one taken by a good camera.
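
For illustration, a minimal Kotlin sketch (the helper name is ours) showing how these resolutions map to the advertised megapixel figures:

    // How the common resolutions above map to megapixel figures.
    fun megapixels(width: Int, height: Int): Double =
        width * height / 1_000_000.0

    fun main() {
        println(megapixels(640, 480))     // 0.3072  -> "0.3 MP"
        println(megapixels(1280, 960))    // 1.2288  -> sold as "1.3 MP"
        println(megapixels(1600, 1200))   // 1.92    -> "2 MP"
        println(megapixels(2304, 1728))   // 3.9813  -> "4 MP"
    }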

The article is provided by the site Mobile life from Dolche-Mobile.Ru

The mobile phone market is filled with models boasting huge camera resolutions. There are even relatively inexpensive smartphones with 16-20 megapixel sensors. An uninformed customer chases the "cool" camera and prefers the phone with the higher camera resolution, not even realizing that he is taking the bait of marketers and salespeople.

What is resolution?

Camera resolution is a parameter that indicates the final size of the image. It determines only how big the resulting image will be, i.e. its width and height in pixels. Importantly, it says nothing about image quality: a photo can be large in resolution yet poor in quality.

Resolution by itself does not determine quality. This had to be said before discussing smartphone camera interpolation. Now we can get straight to the point.

What is camera interpolation in a phone?

Camera interpolation is an artificial increase in image resolution. It is the image that is enlarged, not the sensor. That is, it is special software that interpolates a picture with a resolution of, say, 8 megapixels up to 13 megapixels or more (or less).

To draw an analogy, camera interpolation is like a magnifying glass or binoculars: these devices enlarge the image, but do not make it better or more detailed. So if interpolation is listed in a phone's specifications, the actual resolution of the camera may be lower than the declared one. That is neither bad nor good; it just is.

What is it for?

Interpolation was invented to increase the size of the image, nothing more. Today it is a ploy by marketers and manufacturers trying to sell a product: they print large numbers for the phone camera's resolution on the advertising poster and position them as an advantage. Not only does resolution itself not determine photo quality, the advertised figure may also be interpolated.

Just 3-4 years ago, many manufacturers were chasing megapixel counts and tried in various ways to cram sensors with as many of them as possible into their smartphones. This is how smartphones with 5, 8, 12, 15 and 21 megapixel cameras appeared. At the same time, they could take pictures as badly as the cheapest point-and-shoot cameras, yet buyers who saw an "18 MP camera" sticker immediately wanted to buy such a phone. With the advent of interpolation, such smartphones became even easier to sell thanks to the possibility of artificially adding megapixels to the camera. Of course, photo quality did improve over time, but definitely not because of resolution or interpolation; it improved through natural progress in sensors and software.

Technical side

So what is camera interpolation in a phone technically? The text above described only the general idea.

With the help of special software, new pixels are "drawn" into the image. For example, to enlarge an image by a factor of two, a new row is added after each row of image pixels (and, similarly, a new column after each column), and each pixel in the new row is filled with a color computed by an algorithm. The simplest approach is to fill the new row with the colors of the nearest existing pixels. The result of such processing looks terrible, but the method requires a minimum of computation.
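
A minimal sketch of this simplest method in Kotlin, assuming the image is a flat array of pixel values (the function name and representation are illustrative):

    // Nearest-neighbor 2x upscale of an image stored as a flat array of
    // ARGB pixel values. Every new pixel simply copies the color of the
    // nearest original pixel, so the result looks blocky.
    fun nearestNeighbor2x(src: IntArray, width: Int, height: Int): IntArray {
        val dst = IntArray(width * 2 * height * 2)
        for (y in 0 until height * 2) {
            for (x in 0 until width * 2) {
                dst[y * (width * 2) + x] = src[(y / 2) * width + (x / 2)]
            }
        }
        return dst
    }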

A more commonly used method is different: new rows of pixels are likewise added to the original image, but each new pixel is filled with a color computed as the average of its neighboring pixels. This gives noticeably better results, but requires more computation.

Fortunately, modern mobile processors are fast, and in practice the user does not notice the program editing the image in an attempt to artificially increase its size.
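
On Android, both approaches happen to be available through the standard Bitmap.createScaledBitmap call, whose boolean filter flag switches between nearest-neighbor and bilinear scaling. A sketch, with 3264x2448 and 4160x3120 assumed as example 8 MP and 13 MP sizes:

    import android.graphics.Bitmap

    // Upscale an 8 MP frame (e.g. 3264x2448) to a "13 MP" one (4160x3120).
    // filter = false gives nearest-neighbor scaling (fast, blocky);
    // filter = true averages neighbors (bilinear filtering, smoother).
    fun interpolateTo13Mp(src: Bitmap): Bitmap =
        Bitmap.createScaledBitmap(src, 4160, 3120, /* filter = */ true)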

There are many advanced interpolation methods and algorithms, and they are constantly being improved: transitions between colors become smoother, lines become more accurate and crisp. But it does not matter how these algorithms are built; the underlying idea of camera interpolation is banal and is unlikely to change in the near future. Interpolation cannot make an image more detailed, add new details, or otherwise genuinely improve it. Only in films does a small blurry picture become sharp after applying a couple of filters; in practice this is impossible.

Do you need interpolation?

Many users, out of ignorance, ask on various forums how to do camera interpolation, believing it will improve image quality. In fact, interpolation not only fails to improve the picture but can even make it worse: new pixels are added to the photo, and because the fill colors are not always computed accurately, undetailed areas and graininess can appear. As a result, quality drops.

So interpolation in a phone is a marketing ploy that is completely unnecessary. It can increase not only the resolution of the photo but also the price of the smartphone itself. Do not fall for the tricks of sellers and manufacturers.

Sensors are devices that register only grayscale (gradations of light intensity, from completely white to completely black). To let the camera distinguish colors, an array of color filters is superimposed on the silicon using a photolithography process. In sensors that use microlenses, the filters are placed between the lenses and the photodetector. Scanners that use trilinear CCDs (three CCDs side by side, responding to red, green and blue respectively) and high-end digital cameras that also use three sensors have a specific color of light filtered onto each sensor. (Note that some multi-sensor cameras use combinations of colors in their filters other than the standard three.) But single-sensor devices, like most consumer digital cameras, use color filter arrays (CFAs) to separate the colors.

For each pixel to record its own primary color, a filter of the corresponding color is placed above it. Before hitting the pixel, photons first pass through the filter, which transmits only light of its own color; light of other wavelengths is simply absorbed. Scientists established long ago that any color in the spectrum can be obtained by mixing just a few primary colors; in the RGB model there are three of them.

Each manufacturer develops its own color filter arrays, but in most digital camera sensors the Bayer pattern is the most popular. The technology was invented at Kodak in the 1970s during research into color separation. In this system, the filters are interleaved in a checkerboard pattern, and there are twice as many green filters as red or blue ones. The arrangement is such that the red and blue filters sit between the green ones.

This quantitative ratio is explained by the structure of the human eye: it is more sensitive to green light. The checkerboard pattern ensures that images come out the same color regardless of whether you hold the camera vertically or horizontally. When information is read from such a sensor, the colors are written out line by line: the first line is BGBGBG, the next is GRGRGR, and so on. This scheme is called sequential RGB.
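
A minimal Kotlin sketch of this layout (names are ours), returning the filter color at a given sensor position:

    enum class FilterColor { R, G, B }

    // Filter color at a given sensor position for the pattern described
    // above: even rows read B G B G ..., odd rows read G R G R ...
    // Every 2x2 block contains two greens, one red and one blue.
    fun bayerColor(row: Int, col: Int): FilterColor =
        if (row % 2 == 0) {
            if (col % 2 == 0) FilterColor.B else FilterColor.G
        } else {
            if (col % 2 == 0) FilterColor.G else FilterColor.R
        }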

In CCD cameras, the three signals are combined not on the sensor but in the imaging processor, after the signal has been converted from analog to digital. In CMOS sensors, this combination can take place directly on the chip. In either case, the primary color of each filter is mathematically interpolated, taking into account the colors of the neighboring filters. Note that in any image most of the dots are mixtures of the primary colors, and only a few actually represent pure red, green, or blue.

For example, to determine the influence of neighboring pixels on the color of the central one, linear interpolation processes a 3x3 matrix of pixels. Take the simplest case: three pixels in one row with blue, red and blue filters (B R B), and suppose you are trying to reconstruct the full color of the red pixel. If all the measured values are equal, the color of the central pixel works out mathematically to two parts blue to one part red. In reality, even simple linear interpolation algorithms are considerably more complex and take into account the values of all surrounding pixels. If the interpolation is poor, jagged edges or color artifacts appear at color boundaries.
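
Read literally, the B R B example boils down to something like this (an illustrative sketch, not a real demosaicing algorithm):

    // The B R B case: the red pixel has no blue measurement of its own,
    // so its missing blue value is estimated as the average of its two
    // blue neighbors. Real algorithms use the whole 3x3 neighborhood.
    fun blueAtRedPixel(blueLeft: Int, blueRight: Int): Int =
        (blueLeft + blueRight) / 2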

Note that the word "resolution" is used loosely in digital graphics. Purists (or pedants, if you prefer) familiar with photography and optics know that resolution is a measure of the ability of the human eye or an instrument to distinguish individual lines on a resolution chart, such as an ISO test chart. But in the computer industry it is customary to call the number of pixels the resolution, and since that is the convention, we will follow it as well. After all, even developers call the number of pixels in the sensor its resolution.


Let's do the math

The image file size depends on the number of pixels (resolution): the more pixels, the larger the file. For example, an image from a VGA sensor (640x480, or 307,200 active pixels) takes about 900 kilobytes uncompressed (307,200 pixels x 3 bytes (R, G, B) = 921,600 bytes, which is about 900 kilobytes). An image from a 16 MP sensor would take about 48 megabytes.
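
The same arithmetic as a Kotlin sketch (the helper name is ours):

    // Uncompressed RGB size: 3 bytes per pixel, one per color channel.
    fun uncompressedBytes(pixels: Long): Long = pixels * 3

    fun main() {
        println(uncompressedBytes(640L * 480))    // 921600 bytes, ~900 KB
        println(uncompressedBytes(16_000_000L))   // 48000000 bytes, ~48 MB
    }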

It would seem simple: count the pixels in the sensor and you know the size of the resulting image. However, camera manufacturers come up with a bunch of different numbers, each time claiming that this one is the camera's true resolution.

The total pixel count includes all the pixels that physically exist on the sensor, but only those involved in forming the image are counted as active. About five percent of all pixels do not contribute to the image: they are either defective or used by the camera for other purposes, for example masked off to measure the dark-current level or to set the aspect ratio.

Frame format is the ratio between the width and height of the sensor. On some sensors, such as 640x480 ones, this ratio is 1.33:1, which matches the aspect ratio of most computer monitors. This means images from such sensors fit the monitor screen exactly, without cropping. In many devices, the frame format matches that of traditional 35 mm film, where the ratio is 1:1.5, which allows prints of the standard size and shape.


Resolution interpolation

In addition to optical resolution (the actual ability of the pixels to respond to photons), there is also resolution increased by hardware and software using interpolation algorithms. As with color interpolation, resolution interpolation mathematically analyzes the data of neighboring pixels and creates intermediate values. Such "embedding" of new data can be done quite smoothly, so that the interpolated values fall somewhere between the real optical data. But sometimes the operation introduces noise, artifacts and distortions, and the image quality only gets worse. Many pessimists therefore believe that resolution interpolation is not a way to improve image quality at all, but merely a method of enlarging files. When choosing a device, pay attention to which resolution is specified, and do not get too excited about a high interpolated resolution (it is marked as interpolated or enhanced).

Another software-level image processing step is sub-sampling, which is essentially the reverse of interpolation. It is carried out during image processing, after the data has been converted from analog to digital form, and it discards the data of various pixels. In CMOS sensors, the operation can be performed on the chip itself by temporarily disabling the readout of certain rows of pixels, or by reading data only from selected pixels.

Subsampling serves two purposes. The first is data compaction: storing more pictures in a memory of a given size. The fewer the pixels, the smaller the file, the more pictures fit on a memory card or in the device's internal memory, and the less often you have to offload photos to a computer or swap memory cards.

The second purpose is to create images of a specific size for a specific task. A camera with a 2 MP sensor is quite capable of producing a standard 8x10-inch print, but if you try to send such a photo by e-mail it will noticeably inflate the message. Downsampling lets you process the image so that it looks fine on your friends' monitors (as long as they do not peer at the details) while sending quickly enough even over a slow connection.
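
A minimal Kotlin sketch of such subsampling, assuming a flat array of pixel values (names are illustrative):

    // 2x subsampling: keep every second row and every second column,
    // mimicking the row-skipping readout described above. The result
    // has a quarter of the original pixels.
    fun subsample2x(src: IntArray, width: Int, height: Int): IntArray {
        val dst = IntArray((width / 2) * (height / 2))
        for (y in 0 until height / 2) {
            for (x in 0 until width / 2) {
                dst[y * (width / 2) + x] = src[(y * 2) * width + (x * 2)]
            }
        }
        return dst
    }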

Now that we are familiar with how sensors work and how an image is produced, let's dive a little deeper and touch on the more complex situations that arise in digital photography.


The smartphone has an 8 MPix camera. What does interpolation up to 13 MPix mean?

    Good day.

    This means that your smartphone will stretch a photo taken with its 8 MPix camera to 13 MPix. It does this by spreading the real pixels apart and inserting additional ones between them.

    But if you compare the quality of a photo taken at a native 13 MP with one taken at 8 MP and interpolated up to 13, the quality of the second will be noticeably worse.

    To put it simply: when creating the photo, the processor adds its own pixels to the active pixels of the sensor, effectively computing the picture and scaling it up to 13 MP. At the output we have an 8 MP sensor and a photo with a resolution of 13 MP. The quality does not really improve.

    This means that the camera takes pictures at up to 8 MPix, but can programmatically enlarge them to 13 MPix. It enlarges them in software, but the image does not get any better; in effect it is still an 8 MPix image. This is purely a manufacturer's trick, and such smartphones cost more.

    This means that your device's camera will still take photos at 8 MPix, but can now programmatically enlarge them to 13 MPix. The quality does not get any better; the space between the pixels simply gets filled in, that's all.

    This means that your phone has 8 MPix, as it always did, no more and no less; everything else is a marketing ploy, pseudo-scientific fooling of the public to sell the product at a higher price. The function is useless: photo quality is lost during interpolation.

    On Chinese smartphones this is used all the time now: a 13 MP camera sensor is much more expensive than an 8 MP one, so they install the 8 MP sensor and let the camera application stretch the resulting image. As a result, those "13 MP" will look noticeably worse if you view them at the original resolution.

    In my opinion, this function is generally useless; 8 MP is quite enough for a smartphone. In principle, 3 MP is enough for me, as long as the camera itself is of good quality.

    Camera interpolation is a manufacturer's trick used to artificially inflate the price of a smartphone.

    If you have an 8 MPix camera, it takes a corresponding picture; interpolation does not improve the quality of the photo, it simply enlarges it to 13 megapixels.

    The fact is that the real camera in such phones is 8 megapixels, but internal software stretches the image to 13 megapixels. It never actually reaches a real 13 megapixels.

    Megapixel interpolation is a kind of software smearing of the image: the real pixels are moved apart, and additional ones, colored with the average of the pixels around them, are inserted between them. Nonsense, useless self-deception. The quality does not improve.

  • Interpolation is a way of finding intermediate values.

    If we translate all this into plainer language, as it applies to your question, it comes out as follows:

    • the software can process (enlarge, stretch) files up to 13 MPix;
    • "up to 13 MPix" can mean 8 MPix real, like yours, or 5 MPix real. The camera software interpolates the camera's output up to 13 MPix without enhancing the image, only enlarging it electronically. Simply put, like a magnifying glass or binoculars. The quality does not change.