MIT develops Surround Vision to let you peek at what’s off-screen

Surround sound’s just so old hat, says MIT, which has developed a system that does the same thing with images on ordinary handheld devices.

“If you’re watching TV and you hear a helicopter in your surround sound, wouldn’t it be cool to just turn around and be able to see that helicopter as it goes into the screen?” says project leader Santiago Alfaro.

Surround Vision is intended to work with standard, internet-connected handheld devices such as cellphones. If a viewer wants to see what’s happening off the left edge of the screen, he or she could simply point the phone in that direction, and an image would pop up on the screen.

The technology could even allow a guest at a Super Bowl party, for instance, to consult several different camera angles on a particular play, without affecting what the other guests see on the TV screen.

Alfaro reckons that, if the system were commercialized, the video playing on the handheld device would stream over the internet: TV service providers wouldn’t have to modify their broadcasts or their set-top boxes.

“In the Media Lab, and even my group, there’s a combination of far-off-in-the-future stuff and very, very near-term stuff, and this is an example of the latter,” says Michael Bove, the Media Lab research scientist whose group Alfaro works in. “This could be in your home next year if a network decided to do it.”

Once Alfaro had rigged up a handheld with the requisite motion sensors, he shot video footage of the street in front of the Media Lab from three angles simultaneously.

A television set replays the footage from the center camera. If a viewer points a motion-sensitive handheld device directly at the TV, the same footage appears on the device’s screen.

But if the viewer swings the device either right or left, it switches to one of the other perspectives. The viewer can, for instance, watch a bus approach on the small screen before it appears on the large screen.
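The switching behavior described above could be sketched roughly as follows. This is an illustrative assumption, not the prototype’s actual code: the function name, feed labels, and 15-degree threshold are all invented here, since the article doesn’t describe the implementation in detail.

```python
def select_feed(yaw_degrees: float) -> str:
    """Pick which camera feed to show for a given device orientation.

    yaw_degrees: 0 means the handheld points straight at the TV;
    negative values mean it has swung left, positive values right.
    (The 15-degree threshold is an arbitrary illustrative choice.)
    """
    if yaw_degrees < -15:       # swung left past the threshold
        return "left-camera"
    elif yaw_degrees > 15:      # swung right past the threshold
        return "right-camera"
    else:                       # roughly facing the TV
        return "center-camera"  # same footage the TV is showing


# Example: a viewer facing the TV, then swinging the device left
print(select_feed(0))    # center-camera
print(select_feed(-30))  # left-camera
```

In a real system, the yaw would come from the handheld’s motion sensors, and each label would correspond to a separate video stream delivered over the internet, as the article suggests.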

The researchers now plan a series of user studies in the spring and summer, which will employ content developed in conjunction with a number of partners. Since sports broadcasts and other live television shows already feature footage taken from multiple camera angles, the researchers say they’re a natural fit for the system.

“We’re looking at sports; we’re looking at children’s programming, both live action and cartoons; we’re looking at, let’s say, ordinary entertainment programs, as well as programs shot in a studio like talk shows,” says Media Lab research scientist Michael Bove.

“And we hope to have examples of several of these fairly soon. There are also one or two other things that defy categorization right now, that you sort of have to see in order to understand what they are.”

One partner likely to participate in the user studies is Boston’s public-television station WGBH, which has a long history with Bove’s lab.

“We always learn from working with Mike and his group,” says Annie Valva, WGBH’s director of technology and interactive multimedia.