Chair: Shwetak Patel, University of Washington, USA
ZeroTouch: An Optical Multi-Touch and Free-Air Interaction Architecture
Contribution & Benefit: ZeroTouch is a unique optical sensing technique and architecture that allows precision sensing of hands, fingers, and objects within a 2-dimensional plane. We describe the architecture and technology in detail.
Abstract » ZeroTouch (ZT) is a unique optical sensing technique and architecture that allows precision sensing of hands, fingers, and other objects within a constrained 2-dimensional plane. ZeroTouch provides tracking at 80 Hz and up to 30 concurrent touch points. Integration with LCDs is trivial. While designed for multi-touch sensing, ZT enables other new modalities, such as pen+touch and free-air interaction. In this paper, we contextualize ZT innovations with a review of other flat-panel sensing technologies. We present the modular sensing architecture behind ZT, and examine early diverse uses of ZT sensing.
Enabling Concurrent Dual Views on Common LCD Screens
Contribution & Benefit: A pure software solution that enables two independent views to be seen concurrently from different viewing angles on a common LCD screen without any hardware modification or augmentation.
Abstract » Researchers have explored a variety of technologies that enable a single display to simultaneously present different content when viewed from different angles or by different people. These displays provide new functionalities such as personalized views for multiple users, privacy protection, and stereoscopic 3D displays. However, current multi-view displays rely on special hardware, thus significantly limiting their availability to consumers and adoption in everyday scenarios. In this paper, we present a pure software solution (i.e., with no hardware modification) that allows us to present two independent views concurrently on the most widely used and affordable type of LCD screen, namely Twisted Nematic (TN). We achieve this by exploiting a technical limitation of the technology which causes these LCDs to show varying brightness and color depending on the viewing angle. We describe our technical solution as well as demonstrate example applications in everyday scenarios.
Ultra-Tangibles: Creating Movable Tangible Objects on Interactive Tables
Contribution & Benefit: Presents a system that uses ultrasound-based air pressure waves to move multiple tangible objects, independently, around an interactive surface. Allows the creation of new actuated tangible interfaces for interactive surfaces.
Abstract » Tangible objects placed on interactive surfaces allow users to employ a physical object to manipulate digital content. However, creating the reverse effect (having digital content manipulate a tangible object placed on the surface) is a more challenging task. We present a new approach to this problem, using ultrasound-based air pressure waves to move multiple tangible objects, independently, around an interactive surface. We describe the technical background, design, implementation, and test cases for such a system. We conclude by discussing practical uses of our system, Ultra-Tangibles, in the creation of new tangible user interfaces.
CapStones and ZebraWidgets: Sensing Stacks of Building Blocks, Dials and Sliders on Capacitive Touch Screens
Contribution & Benefit: Demonstrates how to create stackable tangibles that can be tracked on capacitive touch screens.
Abstract » Recent research proposes augmenting capacitive touch pads with tangible objects, enabling a new generation of mobile applications enhanced with tangible objects, such as game pieces and tangible controllers. In this paper, we extend the concept to capacitive tangibles consisting of multiple parts, such as stackable gaming pieces and tangible widgets with moving parts. We achieve this using a system of wires and connectors inside each block that causes the capacitance of the bottom-most block to reflect the entire assembly. We demonstrate three types of tangibles, called CapStones, Zebra Dials, and Zebra Sliders, that work with current consumer hardware, and investigate what designs may become possible as touchscreen hardware evolves.
Brainput: Enhancing Interactive Systems with Streaming fNIRS Brain Input
Contribution & Benefit: Describes a working system that uses brain activity as a passive, implicit input channel to an interactive system. Shows improved performance and experience with little additional effort from the user.
Abstract » This paper describes the Brainput system, which learns to identify brain activity patterns occurring during multitasking. It provides a continuous, supplemental input stream to an interactive human-robot system, which uses this information to modify its behavior to better support multitasking. This paper demonstrates that we can use non-invasive methods to detect signals coming from the brain that users naturally and effortlessly generate while using a computer system. If used with care, this additional information can lead to systems that respond appropriately to changes in the user's state. Our experimental study shows that Brainput significantly improves several performance metrics, as well as the subjective NASA-Task Load Index scores in a dual-task human-robot activity.