You can then toss items from one screen to another, making the data seem like something that lives outside any one monitor. As Underkoffler put it: "Every device is a workspace that work might flow onto."
We used this to browse an interactive panorama of downtown L.A., select actors from a movie clip and drag them to another screen, and inspect a visualization of flight traffic across the nation.
The next demo combined lower-resolution gesture tracking via a Kinect-style camera with motion data sent from the accelerometers in smartphones to let us play a Breakout-style video game.
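If you're curious how two such different inputs might drive one game, here's a rough sketch of the idea in Python. It is my own illustration, not Oblong's code, and every name and gain value in it is made up: the depth camera supplies a coarse hand position, and the phone's tilt nudges the paddle for finer control.

```python
from dataclasses import dataclass

@dataclass
class InputSample:
    hand_x: float      # normalized 0..1 hand position from the depth camera
    phone_tilt: float  # normalized -1..1 left/right tilt from the phone's accelerometer

def paddle_position(sample: InputSample, width: float = 800.0) -> float:
    """Blend the two signals: the camera sets the coarse paddle position,
    the phone's tilt nudges it for fine control."""
    coarse = sample.hand_x * width
    fine = sample.phone_tilt * 40.0   # small correction, in pixels (arbitrary gain)
    return max(0.0, min(width, coarse + fine))

# Hand roughly centered, phone tilted a little to the right:
print(paddle_position(InputSample(hand_x=0.5, phone_tilt=0.3)))  # 412.0
```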
After that came a simpler system still, in which a camera determined my hand's position so I could browse an onscreen map showing earthquake intensity. This had a different grammar (if this technology takes off, we'll all need to agree on a new language of gestures): make a fist to click, then move your hand to pan or zoom in or out.
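To make that grammar concrete, here is a toy interpreter in Python. Again, this is only an illustrative sketch: the pose labels, thresholds, and the assumption that the tracker reports hand position plus distance from the camera are all mine, not Oblong's.

```python
def interpret(prev, curr):
    """Map two successive hand readings (pose, x, y, depth) to a map command,
    following the grammar above: close the hand to click, then move it to pan
    or push it toward/away from the camera to zoom."""
    pose, x, y, depth = curr
    prev_pose, px, py, pdepth = prev

    if pose == "fist" and prev_pose != "fist":
        return ("click", None)                   # the moment the fist closes
    if pose == "fist":
        if abs(depth - pdepth) > 0.05:
            return ("zoom", depth - pdepth)      # hand moved toward/away: zoom
        return ("pan", (x - px, y - py))         # lateral motion: pan
    return ("idle", None)

print(interpret(("open", 0.5, 0.5, 0.6), ("fist", 0.5, 0.5, 0.6)))  # ('click', None)
print(interpret(("fist", 0.5, 0.5, 0.6), ("fist", 0.6, 0.5, 0.6)))  # pans to the right
```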
My last stop was a look at Oblong's Mezzanine video-conferencing system. Its least fascinating aspect was using gyroscope-enabled sensors to manipulate 3D models onscreen. Its most fascinating: dragging any one object, from a Web page to a slide deck, to the foreground on one monitor and having it show up on all of the dozen or so other screens in the room, including an iPhone and an iPad, almost instantly.
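For the curious, here is a toy sketch of that last idea, written by me rather than drawn from Mezzanine itself: promoting an item to the foreground simply publishes it to every attached screen, whatever kind of device it happens to be.

```python
class SharedWorkspace:
    """Every attached screen registers a render callback; bringing an item
    to the foreground pushes it to all of them at once."""

    def __init__(self):
        self.screens = []

    def attach(self, screen):
        self.screens.append(screen)

    def bring_to_foreground(self, item):
        for show in self.screens:   # wall display, laptop, phone: all treated alike
            show(item)

room = SharedWorkspace()
room.attach(lambda item: print(f"wall display shows {item}"))
room.attach(lambda item: print(f"iPhone shows {item}"))
room.bring_to_foreground("slide 7 of the deck")
```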