See Hear Speak: Redesigning I/O for Effectiveness

Paper

May 9, 2012 @ 16:30, Room: 12AB

Chair: Eytan Adar, University of Michigan, USA
The SoundsRight CAPTCHA: An Improved Approach to Audio Human Interaction Proofs for Blind Users - Paper
Contribution & Benefit: Blind users cannot use visual CAPTCHAs, and existing audio CAPTCHAs have task success rates below 50%. With our new real-time audio CAPTCHA, blind users achieved a task success rate above 90%.
Abstract » In this paper we describe the development of a new audio CAPTCHA called the SoundsRight CAPTCHA, and the evaluation of the CAPTCHA with 20 blind users. Blind users cannot use visual CAPTCHAs, and it has been documented in the research literature that the existing audio CAPTCHAs have task success rates below 50% for blind users. The SoundsRight audio CAPTCHA presents a real-time audio-based challenge in which the user is asked to identify a specific sound (for example, the sound of a bell or a piano) each time it occurs in a series of 10 sounds that are played through the computer's audio system. Evaluation results from three rounds of usability testing document that the task success rate was higher than 90% for blind users. Discussion, limitations, and suggestions for future research are also presented.
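The paper itself does not publish the challenge logic, but the mechanism the abstract describes, flagging a target sound each time it occurs in a series of 10 played sounds, maps onto a simple challenge-response protocol. The Python sketch below illustrates that idea only; the sound labels, the guarantee that the target appears, and the all-or-nothing scoring are assumptions for illustration, not the authors' implementation.

    import random

    SOUND_LIBRARY = ["bell", "piano", "dog", "horn", "water"]  # hypothetical labels
    SEQUENCE_LENGTH = 10  # the abstract specifies a series of 10 sounds

    def build_challenge():
        """Pick a target sound and a 10-sound sequence that contains it."""
        target = random.choice(SOUND_LIBRARY)
        sequence = [random.choice(SOUND_LIBRARY) for _ in range(SEQUENCE_LENGTH)]
        if target not in sequence:  # keep the challenge answerable
            sequence[random.randrange(SEQUENCE_LENGTH)] = target
        return target, sequence

    def score_responses(target, sequence, responses):
        """responses[i] is True if the user flagged sound i as the target.
        Pass only if every occurrence, and nothing else, was flagged."""
        return all(r == (s == target) for r, s in zip(responses, sequence))

In a real deployment the sounds would be rendered as audio in real time and responses time-stamped against playback; a sequence of labels stands in for that here.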
Voice Typing: A New Speech Interaction Model for Dictation on Touchscreen Devices - Paper
Community: user experience
Contribution & Benefit: Describes Voice Typing, a new speech interaction technique in which utterances are transcribed as they are produced, enabling real-time error identification. Reduces user corrections and cognitive demand for text input via speech.
Abstract » Dictation using speech recognition could potentially serve as an efficient input method for touchscreen devices. However, dictation systems today follow a mentally disruptive speech interaction model: users must first formulate utterances and then produce them, as they would with a voice recorder. Because utterances are not transcribed until users have finished speaking, the entire output appears at once, and users must break their train of thought to verify and correct it. In this paper, we introduce Voice Typing, a new speech interaction model where users’ utterances are transcribed as they are produced, enabling real-time error identification. For fast correction, users leverage a marking menu using touch gestures. Voice Typing aspires to create an experience akin to having a secretary type for you while you monitor and correct the text. In a user study where participants composed emails using both Voice Typing and traditional dictation, they not only reported lower cognitive demand for Voice Typing but also exhibited a 29% relative reduction in user corrections. Overall, they also preferred Voice Typing.
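The mechanical difference from traditional dictation is that partial recognition hypotheses are displayed while the user is still speaking, so errors surface mid-utterance rather than all at once. A minimal Python sketch of that interaction model follows; the stubbed recognizer and the print-based review hook are assumptions standing in for a streaming speech recognizer and the touch marking menu the abstract describes.

    def fake_recognizer(words):
        """Stub for a streaming recognizer: yields growing partial hypotheses.
        (Real recognizers may also revise earlier words; ignored here.)"""
        hypothesis = []
        for word in words:
            hypothesis.append(word)
            yield " ".join(hypothesis)

    def voice_typing(partial_hypotheses, review):
        """Render each partial hypothesis as it arrives, so the user can
        monitor and correct the text while speaking, not after."""
        displayed = ""
        for hyp in partial_hypotheses:
            new_span = hyp[len(displayed):]  # only the newly appended text
            displayed = hyp
            review(new_span)  # hook where a misrecognition could be marked
        return displayed

    final = voice_typing(fake_recognizer("send the report by friday".split()),
                         review=print)

Batch dictation, by contrast, would hand the user the whole string only after the final hypothesis.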
Legible, are you sure? An Experimentation-based Typographical Design in Safety-Critical Context - Paper
Community: design
Contribution & Benefit: Presents a study on the design of a typeface suited to the cockpit. Beyond safety-critical contexts, an experimentation-based design process helps designers validate the usability of text displays.
Abstract » Designing safety-critical interfaces entails proving the safety and operational usability of each component. Largely taken for granted in everyday interface design, the typographical component, through its legibility and aesthetics, weighs heavily on the ubiquitous reading task at the heart of most visualizations and interactions. In this paper, we present a research project whose goal is the creation of a new typeface to display textual information on future aircraft interfaces. After an initial task analysis leading to the definition of specific needs, requirements and design principles, the design evolved through an iterative cycle of design and experimentation. We present three experiments (laboratory and cockpit) used mainly to validate initial choices and fine-tune font properties. Results confirm the importance of rigorously testing the typographical component as a part of text output evaluation in interactive systems.
SSMRecolor: Improving Recoloring Tools with Situation-Specific Models of Color Differentiation - Paper
Contribution & Benefit: Describes a recoloring tool that improves color differentiability by modeling users' color perception abilities. Compared to existing recoloring tools, it improves accuracy by 20% and reduces selection time by two seconds.
Abstract » Color is commonly used to convey information in digital environments, but colors can be difficult to distinguish for many users, either because of a congenital color vision deficiency (CVD), or because of situation-induced CVDs such as wearing colored glasses or working in sunlight. Tools intended to improve color differentiability (recoloring tools) exist, but these all use abstract models of only a few types of congenital CVD; if the user's color problems have a different cause, existing recolorers can perform poorly. We have developed a recoloring tool (SSMRecolor) based on the idea of situation-specific modeling, in which we build a performance-based model of a particular user in their specific environment, and use that model to drive the recoloring process. SSMRecolor covers a much wider range of CVDs, including acquired and situational deficiencies. We evaluated SSMRecolor and two existing tools in a controlled study of people's color-matching performance in several environmental conditions. The study included participants with and without congenital CVD. Our results show both accuracy and response time in color-matching tasks were significantly better with SSMRecolor. This work demonstrates the value of a situation-specific approach to recoloring, and shows that this technique can substantially improve the usability of color displays for users of all types.
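The abstract sketches situation-specific modeling at a high level: measure the user's color-differentiation performance in the current environment, then recolor so every color pair clears what that user can actually distinguish. The Python sketch below shows one way such a measured threshold could drive recoloring; the Euclidean RGB distance, the threshold fit, and the greedy nudging are simplifying assumptions, not SSMRecolor's actual algorithm.

    import itertools, math

    def fit_threshold(calibration_trials):
        """calibration_trials: (color_a, color_b, distinguished?) triples from
        the user in their current environment. Returns the color distance this
        user needs before two colors are reliably distinguishable."""
        confused = [math.dist(a, b) for a, b, ok in calibration_trials if not ok]
        if confused:
            return max(confused) * 1.05  # just above the worst confusion
        distinguished = [math.dist(a, b) for a, b, ok in calibration_trials if ok]
        return min(distinguished, default=0.0)

    def recolor(palette, threshold, step=8, max_iters=200):
        """Greedily nudge RGB colors apart until every pair clears the
        user-specific threshold (or the iteration budget runs out)."""
        palette = [list(c) for c in palette]
        for _ in range(max_iters):
            bad = [(i, j) for i, j in itertools.combinations(range(len(palette)), 2)
                   if math.dist(palette[i], palette[j]) < threshold]
            if not bad:
                break
            for i, j in bad:
                for k in range(3):  # push color j away from color i per channel
                    d = palette[j][k] - palette[i][k]
                    palette[j][k] = min(255, max(0, palette[j][k] + (step if d >= 0 else -step)))
        return [tuple(c) for c in palette]

Because the threshold comes from measured performance rather than a fixed CVD model, the same procedure adapts to colored glasses, sunlight, or acquired deficiencies, which is the abstract's core claim.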