Sensory Interaction Modalities

Paper

May 9, 2012 @ 11:30, Room: Ballroom E

Chair: Daniel M. Russell, Google, USA
Humantenna: Using the Body as an Antenna for Real-Time Whole-Body Interaction - Paper
Community: user experience
Contribution & Benefit: Extends the approach of using the human body as an antenna for sensing whole-body gestures. Demonstrates robust real-time gesture recognition and promising results for location classification within a building.
Abstract » Computer vision and inertial measurement have made it possible for people to interact with computers using whole-body gestures. Although there has been rapid growth in the uses and applications of these systems, their ubiquity has been limited by the high cost of heavily instrumenting either the environment or the user. In this paper, we use the human body as an antenna for sensing whole-body gestures. Such an approach requires no instrumentation to the environment, and only minimal instrumentation to the user, and thus enables truly mobile applications. We show robust gesture recognition with an average accuracy of 93% across 12 whole-body gestures, and promising results for robust location classification within a building. In addition, we demonstrate a real-time interactive system which allows a user to interact with a computer using whole-body gestures.
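
For illustration only, the following sketch shows one way a gesture classifier over features of the body-antenna signal could be structured; the window length, spectral features, and SVM classifier are assumptions, not the authors' pipeline.

```python
# Hedged sketch: classify whole-body gestures from a body-antenna signal.
# Windowing, features, and classifier choice are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

WINDOW = 256  # assumed analysis window length (samples)

def window_features(signal):
    """Summarize one window of the antenna signal as a small feature vector."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    return np.concatenate([
        [signal.mean(), signal.std(), np.ptp(signal)],  # time-domain statistics
        spectrum[:32],                                   # low-frequency magnitudes
    ])

def featurize(windows):
    return np.array([window_features(w) for w in windows])

# Placeholder training data; real data would be labeled gesture recordings.
rng = np.random.default_rng(0)
train_windows = rng.normal(size=(120, WINDOW))
train_labels = rng.integers(0, 12, size=120)  # 12 gesture classes, as in the paper

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(featurize(train_windows), train_labels)
print("predicted gesture:", clf.predict(featurize([rng.normal(size=WINDOW)]))[0])
```
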
SoundWave: Using the Doppler Effect to Sense Gestures - Note
Community: user experience
Contribution & Benefit: Describes SoundWave, which leverages the speaker and microphone already embedded in commodity devices to sense in-air gestures around the device. This allows interaction with devices in novel and rich ways.
Abstract » Gesture is becoming an increasingly popular means of interacting with computers. However, it is still relatively costly to deploy robust gesture recognition sensors in existing mobile platforms. We present SoundWave, a technique that leverages the speaker and microphone already embedded in most commodity devices to sense in-air gestures around the device. To do this, we generate an inaudible tone, which gets frequency-shifted when it reflects off moving objects like the hand. We measure this shift with the microphone to infer various gestures. In this note, we describe the phenomena and detection algorithm, demonstrate a variety of gestures, and present an informal evaluation on the robustness of this approach across different devices and people.
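
As a rough, non-authoritative sketch of the Doppler measurement described above: emit an inaudible pilot tone and inspect how far microphone energy spreads away from it after reflecting off a moving hand. The 18 kHz tone, FFT size, and amplitude threshold below are assumptions.

```python
# Hedged sketch of the Doppler-shift idea: energy reflected off a moving hand is
# frequency-shifted away from an inaudible pilot tone. Constants and threshold
# are assumptions, not SoundWave's exact detector.
import numpy as np

FS = 44100      # microphone sample rate (Hz)
TONE = 18000.0  # assumed inaudible pilot tone (Hz)
N_FFT = 2048

def upward_spread_hz(mic_frame):
    """How far significant energy extends above the pilot tone; motion toward
    the device pushes reflections to higher frequencies (a rough proxy)."""
    spectrum = np.abs(np.fft.rfft(mic_frame * np.hanning(len(mic_frame)), n=N_FFT))
    freqs = np.fft.rfftfreq(N_FFT, d=1.0 / FS)
    band = (freqs > TONE - 600) & (freqs < TONE + 600)   # inspect bins near the tone
    strong = band & (spectrum > 0.1 * spectrum[band].max())
    return freqs[strong].max() - TONE if strong.any() else 0.0

# Simulated reflection shifted +40 Hz above the pilot tone.
t = np.arange(N_FFT) / FS
frame = np.sin(2 * np.pi * (TONE + 40) * t)
print("upward spread: %.1f Hz" % upward_spread_hz(frame))
```
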
Your Phone or Mine? Fusing Body, Touch and Device Sensing for Multi-User Device-Display Interaction - Note
Contribution & Benefit: Describes a technique for associating multi-touch interactions to individual users and their accelerometer-equipped mobile devices. Allows for more seamless device-display multi-user interactions including personalization, access control, and score-keeping.
Abstract » Determining who is interacting with a multi-user interactive touch display is challenging. We describe a technique for associating multi-touch interactions to individual users and their accelerometer-equipped mobile devices. Real-time device accelerometer data and depth camera-based body tracking are compared to associate each phone with a particular user, while body tracking and touch contacts positions are compared to associate a touch contact with a specific user. It is then possible to associate touch contacts with devices, allowing for more seamless device-display multi-user interactions. We detail the technique and present a user study to validate and demonstrate a content exchange application using this approach.
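
A minimal sketch of the association step described above, assuming the phone's motion stream and each tracked user's hand positions have been resampled to a common rate; the motion-energy features and Pearson correlation are illustrative assumptions, not the paper's exact matching method.

```python
# Hedged sketch: associate a phone with the tracked user whose hand motion best
# correlates with the phone's motion.
import numpy as np

def motion_energy(stream):
    """Per-frame movement magnitude from a (T, 3) stream of x/y/z samples."""
    return np.linalg.norm(np.diff(stream, axis=0), axis=1)

def best_matching_user(phone_stream, hand_streams):
    """Index of the tracked user whose hand motion best matches the phone's motion."""
    phone = motion_energy(phone_stream)
    scores = []
    for hand in hand_streams:
        other = motion_energy(hand)
        n = min(len(phone), len(other))
        scores.append(np.corrcoef(phone[:n], other[:n])[0, 1])
    return int(np.argmax(scores)), scores

# Placeholder streams: user 0's hand drives the phone; user 1 moves independently.
rng = np.random.default_rng(1)
hand0 = rng.normal(size=(200, 3)).cumsum(axis=0)
hand1 = rng.normal(size=(200, 3)).cumsum(axis=0)
phone = hand0 + 0.05 * rng.normal(size=(200, 3))
user, scores = best_matching_user(phone, [hand0, hand1])
print("phone belongs to user", user, "scores:", np.round(scores, 2))

# Touch contacts could then be assigned similarly, e.g. to the nearest tracked hand.
```
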
IllumiShare: Sharing Any Surface - Paper
Contribution & Benefit: Presents IllumiShare, a camera-projector device that shares arbitrary objects and surfaces without visual echo. A study of children’s remote play shows that IllumiShare supports natural and seamless interaction over distance.
Abstract » Task and reference spaces are important communication channels for remote collaboration. However, all existing systems for sharing these spaces have an inherent weakness: they cannot share arbitrary physical and digital objects on arbitrary surfaces. We present IllumiShare, a new cost-effective, light-weight device that solves this issue. It both shares physical and digital objects on arbitrary surfaces and provides rich referential awareness. To evaluate IllumiShare, we studied pairs of children playing remotely. They used IllumiShare to share the task-reference space and Skype Video to share the person space. The study results show that IllumiShare shared the play space in a natural and seamless way. We also found that children preferred having both spaces compared to having only one. Moreover, we found that removing the task-reference space caused stronger negative disruptions to the play task and engagement level than removing the person space. Similarly, we found that adding the task-reference space resulted in stronger positive disruptions.
Rock-Paper-Fibers: Bringing Physical Affordance to Mobile Touch Devices - Note
Community: engineering
Contribution & Benefit: Brings physical affordance to mobile touch devices by making the touch device deformable.
Abstract » We explore how to bring physical affordance to mobile touch devices. We present Rock-Paper-Fibers, a device that is functionally equivalent to a touchpad, yet that users can reshape so as to best match the interaction at hand. For efficiency, users interact bimanually: one hand reshapes the device and the other hand operates the resulting widget.

We present a prototype that achieves deformability using a bundle of optical fibers, and demonstrate an audio player and a simple video game, each featuring multiple widgets. We demonstrate how to support applications that require responsiveness by adding mechanical wedges and clamps.
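
Purely as an illustrative sketch (the abstract does not describe the sensing internals): if each fiber's base end is read out at a known index, touches on the reshaped fiber tips can be mapped back to logical widget regions. The sensing model and widget names below are assumptions.

```python
# Hedged sketch: map touched fiber tips back to widget regions, assuming each
# fiber's base end is sensed at a known index and a touch noticeably changes
# that fiber's measured intensity. This model is an assumption, not a detail
# taken from the paper.
import numpy as np

N_FIBERS = 64

# Assumed calibration after the user has reshaped the bundle: which widget each
# fiber currently belongs to (here: first half a scrub wheel, the rest a button).
fiber_to_widget = ["scrub wheel" if i < 32 else "button" for i in range(N_FIBERS)]

def touched_widgets(intensities, baseline, threshold=0.3):
    """Widgets whose fibers deviate noticeably from their resting intensity."""
    changed = np.abs(intensities - baseline) > threshold
    return sorted({fiber_to_widget[i] for i in np.flatnonzero(changed)})

baseline = np.ones(N_FIBERS)            # calibrated resting intensity per fiber
frame = baseline.copy()
frame[40:44] -= 0.6                      # a touch darkens a few "button" fibers
print(touched_widgets(frame, baseline))  # -> ['button']
```
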
Shake'n'Sense: Reducing Interference for Overlapping Structured Light Depth Cameras - Note
Community: user experience
Contribution & Benefit: Describes a new method for reducing interference between overlapping structured light depth cameras using only mechanical augmentation.
Abstract » We present a novel yet simple technique that mitigates the interference caused when multiple structured light depth cameras point at the same part of a scene. The technique is particularly useful for Kinect, where the structured light source is not modulated. Our technique requires only mechanical augmentation of the Kinect, without any need to modify the internal electronics, firmware or associated host software. It is therefore simple to replicate. We show qualitative and quantitative results highlighting the improvements made to interfering Kinect depth signals. The camera frame rate is not compromised, which is a problem in approaches that modulate the structured light source. Our technique is non-destructive and does not impact depth values or geometry. We discuss uses for our technique, in particular within instrumented rooms that require simultaneous use of multiple overlapping fixed Kinect cameras to support whole room interactions.