MIT chip makes photos look better

Smartphone snapshots could be instantly converted into perfectly lit, professional-looking photographs, thanks to a processor chip developed at MIT.

Existing computational photography systems in cameras and smartphones consume a lot of power, take a long time to run, and require a fair amount of knowledge on the part of the user.

“We wanted to build a single chip that could perform multiple operations, consume significantly less power compared to doing the same job in software, and do it all in real time,” says graduate student Rahul Rithe.

One feature of the chip is High Dynamic Range (HDR) imaging, designed to compensate for limitations on the range of brightness that can be recorded by existing digital cameras.

To do this, the chip’s processor automatically takes three separate ‘low dynamic range’ images with the camera: a normally exposed image, an overexposed image capturing details in the dark areas of the scene, and an underexposed image capturing details in the bright areas. It then merges them to create one image capturing the entire range of brightness.
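The article does not describe the chip's exact merging algorithm, but the idea can be sketched with a common software approach: weighting each exposure by how well exposed each pixel is, then blending. The weighting scheme, function name, and parameters below are illustrative assumptions, not the chip's actual pipeline.

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Merge several LDR exposures (float arrays in [0, 1]) into one image.

    A minimal exposure-fusion sketch: pixels near mid-gray are treated
    as well exposed and weighted most heavily in the blend.
    """
    images = [np.asarray(im, dtype=np.float64) for im in images]
    weights = []
    for im in images:
        # Gaussian "well-exposedness" weight, peaking at mid-gray (0.5).
        weights.append(np.exp(-((im - 0.5) ** 2) / (2 * sigma ** 2)))
    weights = np.stack(weights)              # shape: (n_images, H, W)
    weights /= weights.sum(axis=0) + 1e-12   # normalise weights per pixel
    return (weights * np.stack(images)).sum(axis=0)

# usage: fused = fuse_exposures([normal, overexposed, underexposed])
```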

And while software-based systems typically take several seconds to perform this operation, the chip can do it in a few hundred milliseconds on a 10-megapixel image – making it fast enough to apply to video. The chip consumes dramatically less power than existing CPUs and GPUs while performing the operation, says the team.

The chip can also enhance the lighting in a darkened scene more realistically than conventional flash photography. “Typically when taking pictures in a low-light situation, if we don’t use flash on the camera we get images that are pretty dark and noisy, and if we do use the flash we get bright images but with harsh lighting, and the ambience created by the natural lighting in the room is lost,” Rithe says.

To handle this, the processor takes two images, one with a flash and one without. It then splits both into a base layer, containing just the large-scale features within the shot, and a detail layer. Finally, it merges the two images, preserving the natural ambience from the base layer of the non-flash shot while extracting the details from the picture taken with the flash.
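The base/detail split described here can be sketched in a few lines. Pipelines of this kind normally use an edge-preserving filter such as a bilateral filter to extract the base layer; the Gaussian blur below is a self-contained stand-in, and the function name, sigma, and eps values are assumptions for illustration, not the chip's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_flash_noflash(no_flash, flash, sigma=8.0, eps=1e-3):
    """Merge a no-flash and a flash shot (grayscale float arrays in [0, 1]).

    Keeps the large-scale lighting (base layer) of the no-flash image
    and transfers the fine detail captured by the flash image.
    """
    no_flash = np.asarray(no_flash, dtype=np.float64)
    flash = np.asarray(flash, dtype=np.float64)
    # Base layers: large-scale features only (stand-in for bilateral filter).
    base_nf = gaussian_filter(no_flash, sigma)
    base_f = gaussian_filter(flash, sigma)
    # Detail layer of the flash shot, as a multiplicative ratio.
    detail_f = (flash + eps) / (base_f + eps)
    # Natural ambience from the no-flash base, detail from the flash shot.
    return base_nf * detail_f
```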

The researchers have already built a working prototype of the chip using 40-nanometer CMOS technology, and integrated it into a camera and display. They will be presenting their chip at the International Solid-State Circuits Conference in San Francisco this month.