Chair: Yang Li, Google Research, USA
Walking improves your cognitive map in environments that are large-scale and large in extent
Contribution & Benefit: No previous studies have used an omni-directional treadmill to investigate navigation. Contrary to previous studies using small-scale spaces, we show that physical locomotion is critical for rapid cognitive map development.
Abstract » This study investigated the effect of body-based information (proprioception, etc.) when participants navigated large-scale virtual marketplaces that were either small (Experiment 1) or large in extent (Experiment 2). Extent refers to the size of an environment, whereas scale refers to whether people have to travel through an environment to see the detail necessary for navigation. Each participant was provided with full body-based information (walking through the virtual marketplaces in a large tracking hall or on an omni-directional treadmill), just the translational component of body-based information (walking on a linear treadmill, but turning with a joystick), just the rotational component (physically turning but using a joystick to translate) or no body-based information (joysticks to translate and rotate). In both large and small environments, translational body-based information significantly improved the accuracy of participants’ cognitive maps, measured using estimates of direction and relative straight-line distance, but rotational body-based information on its own had no effect. In environments of small extent, full body-based information also improved participants’ navigational performance. The experiments show that locomotion devices such as linear treadmills would bring substantial benefits to virtual environment applications where large spaces are navigated, and that theories of human navigation need to reconsider the contribution made by body-based information and distinguish between environmental scale and extent.
Putting Your Best Foot Forward: Investigating Real-World Mappings for Foot-based Gestures
Contribution & Benefit: This paper investigates real-world mappings of foot-based gestures to virtual workspaces. It conducts a series of studies exploring user-defined mappings, gesture detection, and continuous interaction parameters.
Abstract » Foot-based gestures have recently received attention as an alternative interaction mechanism in situations where the hands are pre-occupied or unavailable. This paper investigates suitable real-world mappings of foot gestures to invoke commands and interact with virtual workspaces. Our first study identified user preferences for mapping common mobile-device commands to gestures. We distinguish these gestures in terms of discrete and continuous command input. While discrete foot-based input has relatively few parameters to control, continuous input requires careful design considerations on how the user's input can be mapped to a control parameter (e.g. the volume knob of a media player). We investigate this issue further through three user studies. Our results show that rate-based techniques are significantly faster, more accurate and result in far fewer target crossings compared to displacement-based interaction. We discuss these findings and identify design recommendations.
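The distinction between rate-based and displacement-based continuous input can be sketched in a few lines. This is an illustrative example, not the authors' implementation; the function names, ranges, and gain values are assumptions chosen to show the two mapping styles for a foot-angle input driving a parameter such as volume.

```python
# Illustrative sketch (hypothetical names/values): two ways to map a continuous
# foot-gesture reading, e.g. a heel-rotation angle in degrees, onto a
# normalized control parameter in [0, 1] such as media-player volume.

def displacement_update(angle, angle_min=-30.0, angle_max=30.0):
    """Displacement-based: the foot angle maps directly to a parameter value."""
    t = (angle - angle_min) / (angle_max - angle_min)
    return max(0.0, min(1.0, t))

def rate_update(value, angle, dt, gain=0.02, dead_zone=5.0):
    """Rate-based: the foot angle sets the *speed* of change instead of the
    position, with a dead zone so a near-neutral pose holds the value steady."""
    if abs(angle) < dead_zone:
        return value
    value += gain * angle * dt
    return max(0.0, min(1.0, value))
```

Rate-based control decouples the limited comfortable range of foot motion from the full parameter range, which is one plausible reason it can reduce overshoot and target crossings relative to a direct displacement mapping.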
ShoeSense: A New Perspective on Gestural Interaction and Wearable Applications
Contribution & Benefit: Describes a novel wearable device, a shoe-mounted sensor that offers a unique perspective for eyes-free gestural interaction. Presents and evaluates three novel gesture sets.
Abstract » When the user is engaged with a real-world task it can be inappropriate or difficult to use a smartphone. To address this concern, we developed ShoeSense, a wearable system consisting in part of a shoe-mounted depth sensor pointing upward at the wearer. ShoeSense recognizes relaxed and discreet as well as large and demonstrative hand gestures. In particular, we designed three gesture sets (Triangle, Radial, and Finger-Count) for this setup, which can be performed without visual attention. The advantages of ShoeSense are illustrated in five scenarios: (1) quickly performing frequent operations without reaching for the phone, (2) discreetly performing operations without disturbing others, (3) enhancing operations on mobile devices, (4) supporting accessibility, and (5) artistic performances. We present a proof-of-concept, wearable implementation based on a depth camera and report on a lab study comparing social acceptability, physical and mental demand, and user preference. A second study demonstrates a 94-99% recognition rate of our recognizers.
Bootstrapper: Recognizing Tabletop Users by their Shoes
Contribution & Benefit: Reformulates the user recognition problem as a shoe recognition problem and presents a prototype that recognizes tabletop users by their shoes.
Abstract » In order to enable personalized functionality, such as to log tabletop activity by user, tabletop systems need to recognize users. DiamondTouch does so reliably, but requires users to stay in assigned seats and cannot recognize users across sessions. We propose a different approach based on distinguishing users' shoes. While users are interacting with the table, our system Bootstrapper observes their shoes using one or more depth cameras mounted to the edge of the table. It then identifies users by matching camera images with a database of known shoe images. When multiple users interact, Bootstrapper associates touches with shoes based on hand orientation. The approach can be implemented using consumer depth cameras because (1) shoes offer large distinct features such as color, and (2) shoes naturally align themselves with the ground, giving the system a well-defined perspective and thus reduced ambiguity. We report two simple studies in which Bootstrapper recognized participants from a database of 18 users with 95.8% accuracy.
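The database-matching idea behind Bootstrapper can be sketched as nearest-neighbor lookup over shoe features. This is a hypothetical illustration, not the paper's actual pipeline; the choice of normalized color histograms as the feature and all names below are assumptions.

```python
# Hypothetical sketch of matching an observed shoe against enrolled users.
# Feature choice (normalized color histogram) and names are assumptions,
# not the actual Bootstrapper implementation.
import math

def histogram_distance(h1, h2):
    """Euclidean distance between two equal-length, normalized histograms."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def identify_user(observed, database):
    """Return the enrolled user whose shoe feature is closest to the observation.

    database maps user name -> enrolled histogram."""
    return min(database, key=lambda user: histogram_distance(observed, database[user]))
```

A real system would extract such features from the depth-camera images and could reject matches whose distance exceeds a threshold, so unknown shoes are not forced onto an enrolled user.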