A new method for producing multiple-perspective 3-D images could prove more practical in the short term than holography.
As striking as it is, the illusion of depth now routinely offered by 3-D movies is a paltry facsimile of a true three-dimensional visual experience. In the real world, as you move around an object, your perspective on it changes. But in a movie theater showing a 3-D movie, everyone in the audience has the same, fixed perspective — and has to wear cumbersome glasses, to boot.
Despite impressive recent advances, holographic television, which would present images that vary with varying perspectives, probably remains some distance in the future. But in a new paper featured as a research highlight at this summer’s Siggraph computer-graphics conference, the MIT Media Lab’s Camera Culture group offers a new approach to multiple-perspective, glasses-free 3-D that could prove much more practical in the short term.
Instead of the complex hardware required to produce holograms, the Media Lab system uses several layers of liquid-crystal displays (LCDs), the technology currently found in most flat-panel TVs. To produce a convincing 3-D illusion, the displays would need to refresh at a rate of about 360 times a second, or 360 hertz. Such displays may not be far off: LCD TVs that boast 240-hertz refresh rates have already appeared on the market, just a few years after 120-hertz TVs made their debut.
“Holography works, it’s beautiful, nothing can touch its quality,” says Douglas Lanman, a postdoc at the Media Lab and one of the new paper’s co-authors. “The problem, of course, is that holograms don’t move. To make them move, you need to create a hologram in real time, and to do that, you need … little tiny pixels, smaller than anything we can build at large volume at low cost. So the question is, what do we have now? We have LCDs. They’re incredibly mature, and they’re cheap.”
Layers of research
The Nintendo 3DS — a portable, glasses-free 3-D gaming device introduced last year — uses two layered LCD screens to produce the illusion of depth, with the bottom screen simply displaying alternating dark and light bands. Two slightly offset images, which represent the different perspectives of the viewer’s two eyes, are sliced up and interleaved on the top screen. The dark bands on the bottom screen block the light coming from the display’s backlight in such a way that each eye sees only the image intended for it.
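To make the idea concrete, here is a minimal sketch in Python (using NumPy) of that interleaving step. The function names and the simple column-by-column geometry are illustrative assumptions, not the 3DS's actual implementation: two views are sliced into alternating columns on the top screen, and the bottom screen shows the matching pattern of opaque and transparent bands.

    import numpy as np

    def interleave_views(left_view, right_view):
        """Slice two equal-sized grayscale views into alternating columns
        for a simple two-view, parallax-barrier-style display (toy model)."""
        assert left_view.shape == right_view.shape
        top = np.empty_like(left_view)
        top[:, 0::2] = left_view[:, 0::2]    # even columns carry the left-eye view
        top[:, 1::2] = right_view[:, 1::2]   # odd columns carry the right-eye view

        # Bottom screen: a fixed pattern of dark and light bands. The light
        # bands act as slits; the dark bands block the backlight so that each
        # eye sees only the columns meant for it.
        barrier = np.zeros_like(left_view)
        barrier[:, 0::2] = 1.0
        return top, barrier

    left = np.random.rand(120, 160)
    right = np.random.rand(120, 160)
    top_screen, bottom_screen = interleave_views(left, right)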
This technique is in fact more than a century old and produces a stereoscopic image, the type of single-perspective illusion familiar from 3-D movies. The bottom screen displays the same pattern of light and dark bands no matter the image on the top screen. But Lanman, graduate student Matthew Hirsch and professor Ramesh Raskar, who leads the Camera Culture group, reasoned that by tailoring the patterns displayed on the top and bottom screens to each other, they could filter the light emitted by the display in more sophisticated ways, creating an image that would change with varying perspectives. In a project they dubbed HR3D, they developed algorithms for generating the top and bottom patterns as well as a prototype display, which they presented at Siggraph Asia in 2010.
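The physical intuition behind HR3D is that light passing through two stacked LCD layers is dimmed by both, so the brightness along any ray is roughly the product of the two pixel transmittances it crosses, and which pair of pixels a ray crosses depends on the viewing angle. The sketch below, a toy one-dimensional "flatland" model with made-up names rather than the paper's exact optical model, shows why a single pair of patterns can present different images in different directions:

    import numpy as np

    def view_from_angle(front, back, shift):
        """What a distant viewer sees through two stacked attenuating layers.
        `front` and `back` are 1-D transmittance patterns in [0, 1]; `shift`
        is how many pixels the back layer appears displaced at this viewing
        angle. Toy geometry, not the prototype's exact optics."""
        return front * np.roll(back, shift)   # attenuation multiplies layer by layer

    front = np.random.rand(16)
    back = np.random.rand(16)
    head_on = view_from_angle(front, back, shift=0)
    from_the_left = view_from_angle(front, back, shift=2)
    # Different shifts give different products: the image is view-dependent,
    # and the patterns can be chosen so that those products look right.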
The problem is that, whereas a stereoscopic system such as a 3-D movie projector or the 3DS needs to display only two perspectives on a visual scene — one for each eye — the system the Media Lab researchers envisioned had to display hundreds of perspectives in order to accommodate a moving viewer. That was too much information to display at once, so for every frame of 3-D video, the HR3D screen in fact flickered 10 times, displaying slightly different patterns each time. With this approach, however, producing a convincing 3-D illusion would require displays with a 1,000-hertz refresh rate.
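Because no eye can follow a 1,000-hertz flicker, what a viewer actually perceives from any direction is the average of the 10 subframes. Continuing the toy flatland model above (a sketch of the idea, not the group's code), the averaging step looks like this:

    import numpy as np

    def perceived_view(front_frames, back_frames, shift):
        """Average the view-dependent images over a burst of subframes shown
        faster than the eye can follow (same toy two-layer geometry as above)."""
        views = [f * np.roll(b, shift) for f, b in zip(front_frames, back_frames)]
        return sum(views) / len(views)

    # Ten subframes per frame of 3-D video, as in the HR3D prototype.
    fronts = [np.random.rand(16) for _ in range(10)]
    backs = [np.random.rand(16) for _ in range(10)]
    what_the_eye_sees = perceived_view(fronts, backs, shift=1)

Choosing the 10 pairs of patterns so that these averages match hundreds of target views at once is the job of the HR3D algorithms.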
To get the refresh rate down to 360 hertz, the researchers added another LCD screen, which displays yet another pattern. That makes the problem of calculating the patterns exponentially more complex, however. In solving that problem, Raskar, Lanman and Hirsch were joined by Gordon Wetzstein, a new postdoc in the Camera Culture group.
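With a third screen in the stack, each ray is attenuated three times, so the toy forward model gains one more factor, and the patterns on all three screens have to be chosen jointly, which is what makes the computation so much harder. A minimal extension of the earlier sketch, again with illustrative geometry:

    import numpy as np

    def view_three_layers(front, middle, back, shift_mid, shift_back):
        """Toy three-layer version: each viewing angle sees the product of
        three transmittance patterns, each apparently displaced by a
        different amount (illustrative, not the prototype's geometry)."""
        return front * np.roll(middle, shift_mid) * np.roll(back, shift_back)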
CT in reverse
As it turns out, the math is similar to that behind computed tomography, or CT, an X-ray technique used to produce three-dimensional images of internal organs. In a CT scan, a sensor makes a slow circle around the subject, making a series of measurements of X-rays passing through the subject’s body. Each measurement captures information about the composition of tissues at different distances from the sensor; finally, all the information is stitched together into a composite 3-D image.
“The way I like to think about it is, we’re building a patient whose CT scan is the view,” Lanman says.
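Run in reverse, the analogy suggests how the patterns can be computed: just as iterative CT reconstruction refines a density estimate until its simulated X-rays match the measured ones, one can refine the layer patterns until the views they would produce match the desired ones. The sketch below applies that idea to the toy two-layer flatland model from earlier, using a simple multiplicative correction step; the names and the update rule are illustrative assumptions, and the group's actual algorithms are more involved.

    import numpy as np

    def solve_two_layers(targets, shifts, size, iters=200, eps=1e-9):
        """Tomography-flavored iterative solver (toy): find nonnegative front
        and back patterns whose per-angle products best match the target views.
        `targets[i]` is the 1-D image that should be seen at the angle whose
        apparent back-layer displacement is `shifts[i]`."""
        front = np.random.rand(size)
        back = np.random.rand(size)
        for _ in range(iters):
            # Nudge each front-layer pixel up or down by comparing the views
            # the current patterns produce with the views we want.
            num = sum(t * np.roll(back, s) for t, s in zip(targets, shifts))
            den = sum(front * np.roll(back, s) ** 2 for s in shifts)
            front *= num / (den + eps)
            # The same correction for the back layer, with the displacement undone.
            num = sum(np.roll(t * front, -s) for t, s in zip(targets, shifts))
            den = sum(np.roll(front ** 2 * np.roll(back, s), -s) for s in shifts)
            back *= num / (den + eps)
        return np.clip(front, 0, 1), np.clip(back, 0, 1)

    # Ask the two layers to show three different random "views" at three angles.
    views = [np.random.rand(16) for _ in range(3)]
    front, back = solve_two_layers(views, shifts=[-1, 0, 1], size=16)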
At Siggraph, the Media Lab researchers will demonstrate a prototype display that uses three LCD panels. They’ve also developed another prototype that uses only two panels, but between the panels they introduce a sheet of lenses that refract light left and right. The lenses were actually developed for stereoscopic display systems; an LCD panel beneath the lenses alternately displays one image intended for the left eye, which is refracted to the left, and another for the right eye, which is refracted to the right. The MIT display also takes advantage of the ability to project different patterns in different directions, but the chief purpose of the lenses is to widen the viewing angle of the display. With the three-panel version, the 3-D illusion is consistent within a viewing angle of 20 degrees, but with the refractive-lens version, the viewing angle expands to 50 degrees.
Written by Larry Hardesty, MIT News Office