OREANDA-NEWS. August 11, 2015. To help advance the science behind virtual reality, we’re collaborating with Stanford University to demonstrate, at this week’s SIGGRAPH graphics conference, a new technology that makes VR more natural and comfortable.

VR has made big strides over the past several years. But the basic principle remains the same as when Sir Charles Wheatstone invented the first stereoscopic headset in 1838.

Sir Charles put two images of the same scene — drawn from slightly offset angles — inside a box attached to a viewer’s head. The viewer’s brain combines what each eye sees into something it interprets as three-dimensional.

“The only thing that’s really changed is that today we have computers,” explains NVIDIA researcher Fu-Chung Huang.

Working with the Stanford Computational Imaging Group, Huang is using GPUs to generate not two but 50 different images of the same scene many times each second. The result: a sharper, more natural virtual reality experience.

That’s crucial to replicating the way our eyes work.

Our brains direct our eyes to rotate in unison — researchers call it “vergence” — and to adjust focus at the same time. So when an object is far away, our eyes change focus so that we see it clearly, and at the same time they slightly diverge, rotating so that the pupils sit a little farther apart.

VR that relies on just two images, one for each eye, breaks that relationship. It lacks what researchers call “focus cues.” So when your eyes rotate to look at a part of a virtual reality scene that appears closer, they try to change focus as well. But the actual image remains at the same distance. That disconnect can result in blurred vision, fatigue, or even queasiness.
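To make the conflict concrete, here is a rough back-of-the-envelope sketch in Python. The 6.4 cm eye spacing, the 0.5 m virtual object, and the 2 m screen distance are illustrative assumptions, not numbers from the research:

    import math

    IPD = 0.064  # assumed interpupillary distance in meters (illustrative value)

    def vergence_angle(distance_m):
        # Angle between the two lines of sight when both eyes fixate a point
        # straight ahead at the given distance (radians).
        return 2.0 * math.atan((IPD / 2.0) / distance_m)

    def focal_demand(distance_m):
        # Accommodation demand in diopters (1 / distance in meters).
        return 1.0 / distance_m

    virtual_object = 0.5   # object rendered to appear 0.5 m away (assumed)
    screen = 2.0           # fixed optical distance of the display (assumed)

    print(round(math.degrees(vergence_angle(virtual_object)), 1))  # ~7.3 degrees of convergence
    print(focal_demand(virtual_object), focal_demand(screen))      # 2.0 D asked for, 0.5 D delivered
    # The eyes converge for an object at 0.5 m, but the image they must focus
    # on stays at 2.0 m -- the disconnect described above.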

The solution: sandwich two transparent screens together to create a kind of hologram — or light field — that shows each eye 25 slightly offset versions of the same scene.

Sandwiching two displays together creates a more natural, immersive image.

Here’s how it works. To create a scene, our GPUs generate a different pattern for each display. Sandwich the two transparent displays together, each showing its own pattern, and anything you see is a combination of the two. As your eye moves from one part of the display to another, the patterns line up differently to present a slightly different image, one that accounts for the change in the eye’s focus as it moves.
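For a rough sense of how two layer patterns can be computed from a target light field, here is a minimal sketch. It assumes a simple multiplicative two-layer model solved with a textbook rank-1 nonnegative factorization; the actual CUDA solver used in the demo may work quite differently, and the data below is a toy example:

    import numpy as np

    def factor_light_field(L, iters=200):
        # Approximate a target light field L -- here a matrix whose entry (a, b)
        # is the radiance of the ray passing through front-layer pixel a and
        # rear-layer pixel b -- as the outer product f * g^T of a front-layer
        # image f and a rear-layer image g (multiplicative two-layer model).
        # Rank-1 nonnegative factorization with Lee-Seung style updates.
        m, n = L.shape
        f = np.random.rand(m) + 1e-3
        g = np.random.rand(n) + 1e-3
        for _ in range(iters):
            f *= (L @ g) / (f * (g @ g) + 1e-9)
            g *= (L.T @ f) / (g * (f @ f) + 1e-9)
        # Rebalance so both layer images fit in the display's [0, 1] range.
        s = np.sqrt(f.max() / g.max())
        return f / s, g * s

    # Toy target: an 8x8 "light field" with one bright patch.
    L = np.zeros((8, 8))
    L[2:5, 3:6] = 0.8
    front, rear = factor_light_field(L)
    print(np.abs(np.outer(front, rear) - L).mean())  # small reconstruction error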

The idea is simple enough for Fu-Chung to demonstrate with a cardboard box and a pair of transparencies. But turning that into a moving image requires generating 25 different images of a scene, for each eye, many times each second. For that, Fu-Chung relied on our CUDA platform and an NVIDIA GeForce GTX 970 GPU based on our latest Maxwell architecture.
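As an illustration of what rendering 25 views per eye can involve, the short sketch below lays out a grid of per-eye camera offsets. The 5 x 5 arrangement, the pupil size, and the function name are assumptions made for illustration, not details from the demo:

    import numpy as np

    def view_offsets(views_per_eye=25, pupil_diameter_m=0.005):
        # Lay the viewpoints out on a square grid spanning the pupil aperture.
        # Each (dx, dy) pair shifts the virtual camera slightly before rendering
        # one of the per-eye views (5 x 5 grid and 5 mm pupil are assumptions).
        n = int(round(np.sqrt(views_per_eye)))      # 5
        r = pupil_diameter_m / 2.0
        xs = np.linspace(-r, r, n)
        return [(dx, dy) for dy in xs for dx in xs]

    offsets = view_offsets()
    print(len(offsets))  # 25 camera shifts for one eye, 50 for both eyes per frame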

The result is dramatic. Strap on a headset and it’s much easier to shift focus to different parts of a 3D scene. See it for yourself at booth ET18 at SIGGRAPH this week.