"One of [Magic Leap's] first signature images was of an elephant being cupped by a hand. That's what I experienced in the Avegant demo, to be able to hold a planet in my hands," Rubin told Seeker. "It's the difference between seeing animated objects out of your reach and being able to actually walk up to things, put your hands around them and navigate within the mixed reality space."
Avegant's light field headset works by mimicking the way real-world objects emit light. In the real world, light enters the eye as a collection of rays that the eye's lens focuses for the brain. Objects at different distances from the eye emit different "light fields," forcing the eye to adjust continuously, much like a camera's autofocus. The combination of binocular vision and this ability to shift between focal planes is what gives real-world objects their sense of depth and distance.
To display virtual objects with the same realistic sense of depth, Avegant needed to give each digital object its own light field. Instead of presenting the viewer with a flat projection of virtual objects at a fixed distance, the headset places each object on its own focal plane. As the user's attention shifts from object to object, the eyes refocus to accommodate each object's specific light field, even for objects very close at hand.