Friday, 26 July 2013

Digital Camera Sensors

So, how does your digital camera work? Well, it seems pretty straightforward: in place of film, put a grid of light sensors, and collect all the numbers to make pixels. Light up each pixel on the screen with the same brightness the corresponding sensor recorded from the focused image, and presto, you've reproduced the original image.

But there's a complication: nobody has yet found a cheap and effective way to get all the colour information from one sensor. In fact, a sensor element can only tell how bright the light falling on it is, not what colour it is. Our eyes have three different types of 'cone' cells in them, which means that to trick our eyes into thinking we're looking at the same light, we need to track the brightnesses of three different bands of light. It's not just any three, either, but specifically red, green and blue – roughly the bands the three cone types are most sensitive to.

So if each pixel needs these three colour 'components', and the sensor can only give us overall brightness, how do we put a picture together?

Almost all cameras work this way: as the light approaches the sensor, it passes through a grid of tiny colour filters called a "Bayer mosaic". Each filter strips out the light for two of the three components, but which two it strips out differs from pixel to pixel. Half the pixels – every other one, in a checkerboard pattern – are stripped down to their green component. This is because the human eye is most sensitive to green, so having the best possible precision in green improves the perceived quality of the photo the most. The remaining pixels are divided evenly between red and blue components.
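The checkerboard layout can be sketched in a few lines of Python. The "RGGB" orientation of the 2x2 tile below is one common choice, assumed here for illustration – real sensors vary (RGGB, BGGR, GRBG, GBRG), and the function name is my own:

```python
def bayer_component(row, col):
    """Return which component ('R', 'G' or 'B') the Bayer filter
    lets through at this sensor position, assuming an RGGB tile."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print a small corner of the mosaic: green at every other position,
# with the rest split evenly between red and blue.
for row in range(4):
    print(" ".join(bayer_component(row, col) for col in range(4)))
# → R G R G
#   G B G B
#   R G R G
#   G B G B
```

Counting the letters confirms the split: in any region, half the positions are green and a quarter each are red and blue.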

These then hit the sensor, typically a Charge Coupled Device, or CCD, which determines the overall brightness at each point. Since each point has already been reduced to only its Red, Green or Blue component, this determines the brightness of that component specifically. Additional light in other frequencies has been absorbed by the filter and does not reach the sensor.

This leaves us with a problem, though. We only have part of the colour information for each pixel. How do we get the rest? We guess – literally! Reconstructing the missing components is called demosaicing, and there are many different algorithms for it, all of which get it wrong some of the time. Where they get it wrong, the image looks discoloured. Your camera has one such algorithm built into it.

Higher-end cameras also support saving images in a "raw" file format, like Nikon's NEF format. In a raw file, the camera doesn't try to guess; it stores only the information the sensor actually measured. Software on your computer then reads the file and lets you experiment with different algorithms to find the absolute best quality. A raw file also keeps each sample at the sensor's full precision – typically 12 or 14 bits per sample, rather than the 8 bits per component of a processed JPEG – which is why it is often possible to correct errors in exposure from a raw file in ways that are simply not possible with a processed image.
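The simplest way to guess is bilinear interpolation: average the nearest sensor positions that actually measured the component you're missing. This is only a minimal sketch of the idea – it assumes the RGGB tile orientation from before, and real cameras use far more sophisticated algorithms – with function names of my own invention:

```python
def bayer_component(row, col):
    """Component measured at this position, assuming an RGGB Bayer tile."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def interpolate(raw, row, col, want):
    """Guess the component `want` at (row, col) by averaging the
    neighbouring positions that measured it (bilinear demosaicing,
    the simplest of the many guessing strategies)."""
    if bayer_component(row, col) == want:
        return raw[row][col]  # no guess needed: it was measured here
    samples = [raw[r][c]
               for r in range(row - 1, row + 2)
               for c in range(col - 1, col + 2)
               if 0 <= r < len(raw) and 0 <= c < len(raw[0])
               and bayer_component(r, c) == want]
    return sum(samples) / len(samples)

# A tiny 4x4 sensor readout. Position (1, 1) measured blue only,
# so its red and green values have to be guessed from neighbours.
raw = [[10, 20, 10, 20],
       [20, 30, 20, 30],
       [10, 20, 10, 20],
       [20, 30, 20, 30]]
print(interpolate(raw, 1, 1, "R"))  # → 10.0 (average of the 4 red corners)
print(interpolate(raw, 1, 1, "G"))  # → 20.0 (average of the 4 green sides)
```

On smooth gradients this works well; it's at sharp edges and fine repeating patterns that averaging picks up the wrong neighbours and the discolouration mentioned above appears.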

Are there other ways to solve the problem? Certainly!

Cameras with a very high resolution can use a technique called subsampling (often called pixel binning). The same input is received, having passed through the Bayer mosaic, but instead of guessing the missing information to make up the full resolution, the image is processed by combining each 2x2 block of pixels down to a single pixel. Since each combined pixel now has red, green and blue components (it even has two samples of the green component, so it can be extra accurate!), we're good to go. The sacrifice here is resolution. A camera like the 40-megapixel beast in Nokia's recent Lumia phones works this way: when it combines the pixel data down, you get half as many pixels in each direction, so the output image is only 10 megapixels, but none of what you see is guessed. Photos can be crisper and have truer colours than those taken with lower-resolution cameras.
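The combining step can be sketched as follows. Assuming the RGGB tile orientation again (a hypothetical choice; real sensors vary), each 2x2 tile contains exactly one red sample, one blue sample, and two green samples to average:

```python
def bin_2x2(raw):
    """Combine each 2x2 tile of Bayer-filtered samples into one RGB
    pixel, assuming an RGGB tile: red top-left, blue bottom-right,
    and the two greens on the other diagonal averaged. Nothing is
    guessed, but the output has half as many pixels in each direction."""
    out = []
    for row in range(0, len(raw) - 1, 2):
        out_row = []
        for col in range(0, len(raw[0]) - 1, 2):
            red = raw[row][col]
            green = (raw[row][col + 1] + raw[row + 1][col]) / 2
            blue = raw[row + 1][col + 1]
            out_row.append((red, green, blue))
        out.append(out_row)
    return out

# A 4x4 sensor readout becomes a 2x2 full-colour image.
raw = [[10, 20, 10, 20],
       [20, 30, 20, 30],
       [10, 20, 10, 20],
       [20, 30, 20, 30]]
print(bin_2x2(raw))
# → [[(10, 20.0, 30), (10, 20.0, 30)], [(10, 20.0, 30), (10, 20.0, 30)]]
```

Averaging the two greens is also why binned images can look less noisy: each output component is backed by one or two real measurements rather than a guess.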

A final option, seen in very high-end cameras, is an arrangement known as 3CCD. This is exactly what it sounds like: the camera contains three sensors instead of one. A specially designed prism called a colour separation beam splitter separates the entire image into its colour components. Instead of being absorbed by a filter, the colour information that isn't needed for a given sensor is simply redirected to another sensor. All of the light information the camera receives is used, and the result is a full-resolution image with all three colour components measured at every pixel.
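Assembling the final image from three sensors involves no guessing at all – each output pixel is just the three measured values zipped together. A minimal sketch (the function name and plane representation are my own):

```python
def merge_3ccd(red, green, blue):
    """Assemble a full-colour image from three full-resolution sensor
    readouts, one per component. Every position was measured on every
    sensor, so there is no demosaicing step – just combine the planes."""
    return [[(red[r][c], green[r][c], blue[r][c])
             for c in range(len(red[0]))]
            for r in range(len(red))]

# Three 2x2 planes, one per component, become a 2x2 full-colour image.
r_plane = [[10, 11], [12, 13]]
g_plane = [[20, 21], [22, 23]]
b_plane = [[30, 31], [32, 33]]
print(merge_3ccd(r_plane, g_plane, b_plane))
# → [[(10, 20, 30), (11, 21, 31)], [(12, 22, 32), (13, 23, 33)]]
```

Compare this with the single-sensor case: the cost of the extra sensors and the prism buys you an image where every value is real data.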

Cameras with three CCDs are prevalent in video applications.
