ZeroTouch – A Multi-Touch Screen to Write in Thin Air
Touch-sensitive frames have been making surfaces interactive for years, but they tend to be limited in size and responsiveness. A new prototype called “ZeroTouch” may look like an empty frame, but it is packed with sensing hardware.
The 28-inch ZeroTouch frame, with its scalloped edges, can detect anything that moves inside it. Fingertips, hands, arms, and even inanimate objects pass through an invisible two-dimensional optical web that tracks them. Place ZeroTouch over a computer screen and the display becomes an interactive surface that can be manipulated with a stylus.
“What we can do is very precise sensing inside a specific plane of interaction,” said Jon Moeller, a research assistant at Texas A&M’s Interface Ecology Lab who collaborated with fellow research assistant Sashikanth Damaraju and lab director Andruid Kerne on the technology.
The technology itself is straightforward. The ZeroTouch frame contains 256 infrared sensors and 32 LEDs; each light blinks at a specific frequency, and the sensors read the blinks in sequence. The prototype is so responsive because each LED is blinked, in sequence, about 2,400 times a second, Moeller said. The frame connects to a computer via USB, which both powers it and collects the data.
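The scanning scheme can be sketched as a toy simulation. This is not ZeroTouch's actual firmware; only the LED and sensor counts come from the article, and the `beam_is_blocked` stub simply invents an obstruction so the loop has something to detect:

```python
# Toy simulation (not ZeroTouch firmware) of time-multiplexed scanning:
# LEDs fire one at a time, and on each pulse every sensor reports whether
# that LED's light reached it. One full pass yields a beam-occlusion matrix.
NUM_LEDS, NUM_SENSORS = 32, 256  # counts from the prototype

def beam_is_blocked(led, sensor):
    # Stub standing in for real optics: pretend an object blocks any
    # beam whose LED and sensor indices fall in a small band.
    return 10 <= led <= 12 and 100 <= sensor <= 120

def scan_once():
    """One scan pass: pulse each LED in sequence and sample all sensors."""
    frame = []
    for led in range(NUM_LEDS):           # pulse LED `led` ...
        row = [beam_is_blocked(led, s)    # ... and read every sensor
               for s in range(NUM_SENSORS)]
        frame.append(row)
    return frame

frame = scan_once()
blocked_count = sum(cell for row in frame for cell in row)
print(f"{blocked_count} of {NUM_LEDS * NUM_SENSORS} beams blocked")
```

In the real device this pass would repeat thousands of times per second, which is what makes the frame feel instantaneous.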
“When you combine all the perspectives together, you get this sort of mesh that gives you the visual hull [the shape reconstructed from an object’s silhouettes seen from multiple viewpoints] of any objects that are inside that touch area,” Moeller said.
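The reconstruction idea can be illustrated with a small simulation. All the geometry below is invented for illustration (a square frame, scaled-down LED and sensor counts, a made-up fingertip position); only the principle, that intersecting blocked beams localizes the object, comes from the text:

```python
# Toy reconstruction (not the actual ZeroTouch pipeline): LEDs and sensors
# ring a square frame, an object blocks every beam passing near it, and
# intersecting the blocked beams recovers the object's position.
import itertools
import math
import statistics

SIZE = 100.0  # frame modeled as a SIZE x SIZE square (arbitrary units)

def perimeter_points(n):
    """n points spaced evenly around the square frame's perimeter."""
    pts = []
    for i in range(n):
        t = 4.0 * i / n                # position along the perimeter, in sides
        side, f = int(t), t - int(t)
        if side == 0:
            pts.append((f * SIZE, 0.0))
        elif side == 1:
            pts.append((SIZE, f * SIZE))
        elif side == 2:
            pts.append((SIZE - f * SIZE, SIZE))
        else:
            pts.append((0.0, SIZE - f * SIZE))
    return pts

def seg_point_dist(a, b, p):
    """Distance from point p to segment a-b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def line_intersection(a, b, c, d):
    """Intersection of lines a-b and c-d, or None if near-parallel."""
    (ax, ay), (bx, by), (cx, cy), (dx, dy) = a, b, c, d
    rx, ry = bx - ax, by - ay
    sx, sy = dx - cx, dy - cy
    denom = rx * sy - ry * sx
    if abs(denom) < 1e-9:
        return None
    t = ((cx - ax) * sy - (cy - ay) * sx) / denom
    return (ax + t * rx, ay + t * ry)

leds = perimeter_points(8)        # scaled-down stand-ins for 32 LEDs ...
sensors = perimeter_points(16)    # ... and 256 sensors
center, radius = (60.0, 40.0), 5.0  # simulated fingertip in the plane

# A beam is "blocked" when it passes within `radius` of the fingertip.
blocked = [(l, s) for l in leds for s in sensors
           if l != s and seg_point_dist(l, s, center) < radius]

# Intersect pairs of blocked beams; the crossings cluster on the object.
crossings = []
for (a, b), (c, d) in itertools.combinations(blocked, 2):
    p = line_intersection(a, b, c, d)
    if p and 0.0 <= p[0] <= SIZE and 0.0 <= p[1] <= SIZE:
        crossings.append(p)

est = (statistics.median(x for x, _ in crossings),
       statistics.median(y for _, y in crossings))
print(f"estimated touch near ({est[0]:.0f}, {est[1]:.0f})")
```

The median of the crossing points lands on the simulated fingertip; a full visual hull would keep the whole intersected region, which is how the frame can report an object's shape as well as its position.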
The researchers presented ZeroTouch this week in Vancouver at the ACM CHI Conference on Human Factors in Computing Systems. They suspended one frame in midair where movements made inside the frame created colorful brushstrokes projected on a wall.
One big advantage to ZeroTouch, the researchers say, is its affordability. The research prototype was made using commercially available sensors usually found in TV remote controls. Moeller said that the frame, which wasn’t designed for mass-production, cost about $450 to construct.
Kerne said that ZeroTouch has many potential applications, such as a training aid for surgeons that tracks their fine hand movements, or interactive instructions for assembling complicated machinery.
Moeller pointed out that the technology creates more possibilities for interaction than capacitive interfaces like the glass touch screens on smartphones and laptops. The user simply has to break the light beams; no force is required to activate the sensors.
“You can use it with gloves on,” Moeller said. “So it can be used in hazardous environments where capacitive would be unsuitable.”
Next, the team plans to make the technology larger and three-dimensional, Kerne said. They will experiment with stacking layers of frames to find out what kinds of interaction that makes possible.
Daniel Wigdor, an assistant professor of computer science at the University of Toronto who specializes in user interfaces, interacted with ZeroTouch technology at the conference in Vancouver.
“It tracks very quickly,” he said. “You can detect a very large number of touches, whereas previous implementations have limited this to one or two fingers.”
Ben Bederson, a human-computer interaction expert and computer science professor at the University of Maryland, also tried out the technology. “It doesn’t feel like anything, which is just about right,” he said.