Do You See What Eye See

Paper

May 10, 2012 @ 14:30, Room: 17AB

Chair: Andrew Duchowski, Clemson University, USA
Look & Touch: Gaze-supported Target Acquisition - Paper
Community: design
Contribution & Benefit: Describes and compares interaction techniques for combining gaze and touch input from a handheld for target selection. Can help improve the performance and usability of interaction with distant displays.
Abstract » While eye tracking has a high potential for fast selection tasks, it is often regarded as error-prone and unnatural, especially for gaze-only interaction. To address this, we propose gaze-supported interaction as a more natural and effective way of combining a user's gaze with touch input from a handheld device. In particular, we contribute a set of novel and practical gaze-supported selection techniques for distant displays. Designed according to the principle "gaze suggests, touch confirms", they include an enhanced gaze-directed cursor, local zoom lenses, and more elaborate techniques utilizing manual fine positioning of the cursor via touch. In a comprehensive user study with 24 participants, we investigated the potential of these techniques for different target sizes and distances. All novel techniques outperformed a simple gaze-directed cursor and showed individual advantages. In particular, the techniques using touch for fine cursor adjustments (MAGIC touch) and for cycling through a list of possible close-to-gaze targets (MAGIC tab) demonstrated high overall performance and usability.
Gaze-Augmented Think-Aloud as an Aid to Learning - Paper
Community: design
Contribution & Benefit: The efficacy of Gaze-Augmented Think-Aloud for teaching visual search strategy to learners is demonstrated empirically. An expert's gaze visualization indicates what to look for and what to avoid.
Abstract » The use of recorded eye movements, or scanpaths, has been demonstrated as an effective visualization for feed-forward visual search training, instruction, and stimulated retrospective think-aloud usability testing. In this paper we show that creation of a scripted or recorded video of an expert's think-aloud session, augmented by an animation of their scanpaths, can result in an effective aid for learners of visual search. Because the creation of such a video is relatively easy, the benefit-to-cost ratio may potentially be substantial, especially in settings where learned visual scanning strategies are indicators of expertise. We suggest that two such examples are examinations of chest X-rays and histological slides. Results are presented where straightforward construction of an instruction video provides measurable benefit to novice as well as experienced learners in the latter context.
An Exploratory Study of Eye Typing Fundamentals: Dwell Time, Text Entry Rate, Errors, and Workload - Paper
Contribution & Benefit: Presents a study of experienced users of eye typing and a detailed comparison of various metrics for analyzing their performance. Suggests a new metric for estimating expert performance.
Abstract » Although eye typing (typing on an on-screen keyboard via one's eyes as they are tracked by an eye tracker) has been studied for more than three decades now, we still know relatively little about it from the users' point of view. Standard metrics such as words per minute and keystrokes per character yield information only about the effectiveness of the technology and the interaction techniques developed for eye typing. We conducted an extensive study with almost five hours of eye typing per participant and report on extended qualitative and quantitative analysis of the relationship of dwell time, text entry rate, errors made, and workload experienced by the participants. The analysis method is comprehensive and stresses the need to consider different metrics in unison. The results highlight the importance of catering for individual differences and lead to suggestions for improvements in the interface.
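The abstract names the two standard text-entry metrics, words per minute and keystrokes per character. As a reminder of how they are conventionally computed (the function names and example values below are illustrative, not taken from the paper), a minimal sketch:

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    """WPM: by convention one 'word' is 5 characters, spaces included."""
    return (len(transcribed) / 5) / (seconds / 60)


def keystrokes_per_character(keystrokes: int, transcribed: str) -> float:
    """KSPC: input actions (here, dwell-based key selections) divided by
    characters in the final transcribed text; 1.0 means no corrections."""
    return keystrokes / len(transcribed)


# 19-character phrase entered in one minute with 21 selections
# (two extra selections, e.g. a mistake plus a backspace):
wpm = words_per_minute("the quick brown fox", 60.0)        # 3.8 WPM
kspc = keystrokes_per_character(21, "the quick brown fox")  # ~1.105
```

As the abstract argues, neither number alone characterizes the user's experience; they only measure the effectiveness of the technique.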
Increasing the Security of Gaze-Based Cued-Recall Graphical Passwords Using Saliency Masks - Paper
Contribution & Benefit: Describes a gaze-based authentication scheme that uses saliency maps to mask image areas that most likely attract visual attention. Can significantly increase the security of gaze-based graphical passwords.
Abstract » With computers being used ever more ubiquitously in situations where privacy is important, secure user authentication is a central requirement. Gaze-based graphical passwords are a particularly promising means for shoulder-surfing-resistant authentication, but selecting secure passwords remains challenging. In this paper, we present a novel gaze-based authentication scheme that makes use of cued-recall graphical passwords on a single image. In order to increase password security, our approach uses a computational model of visual attention to mask those areas of the image that are most likely to attract visual attention. We create a realistic threat model for attacks that may occur in public settings, such as filming the user's interaction while drawing money from an ATM. Based on a 12-participant user study, we show that our approach is significantly more secure than standard image-based authentication and gaze-based 4-digit PIN entry.
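The core mechanism described, masking the image regions a saliency model predicts will attract attention, can be sketched as follows. This is an assumption-laden illustration, not the authors' implementation: the saliency map is taken as given (e.g. from an Itti-Koch-style model, not computed here), and the masking strategy (graying out the top fraction of salient pixels) is a plausible stand-in:

```python
import numpy as np


def mask_salient_regions(image: np.ndarray, saliency: np.ndarray,
                         fraction: float = 0.2) -> np.ndarray:
    """Gray out the `fraction` most salient pixels of a grayscale image.

    `saliency` is assumed to be a per-pixel visual-attention map with the
    same shape as `image`; higher values = more likely to draw gaze.
    """
    # Threshold so that roughly `fraction` of pixels fall at or above it.
    threshold = np.quantile(saliency, 1.0 - fraction)
    masked = image.copy()
    # Replace the most salient pixels with a neutral gray (the image mean),
    # nudging users to pick password points in less predictable regions.
    masked[saliency >= threshold] = image.mean()
    return masked


rng = np.random.default_rng(0)
img = rng.random((64, 64))   # placeholder grayscale image
sal = rng.random((64, 64))   # placeholder saliency map
out = mask_salient_regions(img, sal, fraction=0.2)
```

The security intuition is that attackers (and saliency models) guess hotspot-based passwords well, so removing hotspots from the selectable image enlarges the effective password space.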