Chair: Carman Neustaedter, Simon Fraser University, Canada
Natural Use Profiles for the Pen: An Empirical Exploration of Pressure, Tilt, and Azimuth
Contribution & Benefit: This is the first study to investigate the natural profiles of pen pressure, tilt, and azimuth (PTA) and their inter-relationships, providing fundamental data for efficient natural UI design.
Abstract » Inherent pen input modalities such as tip pressure, tilt, and azimuth (PTA) have been extensively used as additional input channels in pen-based interactions. We conducted a study to investigate the natural use profiles of PTA, which describe the behavior of PTA during normal pen use such as writing and drawing. First, the study reveals the ranges of PTA in normal pen use, which can distinguish pen events that occur accidentally during normal drawing and writing from those used for mode switching. The natural use profiles also show that azimuth is the least likely modality to cause false pen mode switching, while tip pressure is the most likely. Second, the study reveals correlations among the modalities, indicating that pressure plus azimuth is superior to other pairs for dual-modality control.
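The filtering idea the abstract describes can be sketched in code. This is a minimal illustration, not the authors' implementation: the channel ranges below are hypothetical placeholders, since the paper's measured natural-use ranges are not given here.

```python
# Sketch: using natural-use ranges of pressure, tilt, and azimuth (PTA)
# to reject accidental mode switches during ordinary writing/drawing.
# All ranges are hypothetical placeholders, not values from the study.

NATURAL_RANGES = {
    "pressure": (0.05, 0.80),   # normalized tip pressure band during writing
    "tilt": (30.0, 75.0),       # degrees from the writing surface
    "azimuth": (200.0, 300.0),  # degrees, e.g. a right-handed grip
}

def is_intentional_switch(channel: str, value: float) -> bool:
    """Treat a sample as a deliberate mode switch only when it falls
    outside the natural-use range for that channel; values inside the
    range are assumed to occur accidentally in normal pen use."""
    lo, hi = NATURAL_RANGES[channel]
    return value < lo or value > hi

# Per the abstract, azimuth rarely leaves its natural band in normal use
# (few false switches), whereas pressure leaves its band most often.
print(is_intentional_switch("pressure", 0.95))  # True: outside the band
print(is_intentional_switch("azimuth", 250.0))  # False: normal grip
```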
Evaluating and Understanding the Usability of a Pen-based Command System for Interactive Paper
Contribution & Benefit: User studies on a pen-gesture-based interactive paper system for Active Reading. They can help researchers understand how such a system is learned and used in typical scenarios, and how to evaluate it.
Abstract » To combine the affordance of paper and computers, prior research has proposed numerous interactive paper systems that link specific paper document content to digital operations such as multimedia playback and proofreading. Yet, it remains unclear to what degree these systems bridge the inherent gap between paper and computers when compared to existing paper-only and computer-only interfaces. In particular, given the special properties of paper, such as limited dynamic feedback, how well does an average novice user learn to master an interactive paper system? What factors affect the user performance? And how does the paper interface work in a typical use scenario?
To answer these questions, we conducted two empirical experiments on a generic pen-gesture-based command system, called PapierCraft [Liao, et al., 2008], for paper-based interfaces. With PapierCraft, people can select sections of printed documents and issue commands such as copy and paste, linking, and in-text search. The first experiment focused on user performance when drawing pen gestures on paper. It shows that users can learn the command system in about 30 minutes and achieve performance comparable to a Tablet PC-based interface
supporting the same gestures. The second experiment examined the application of the command system in active reading tasks. The results show promise for the seamless integration of paper and computers in active reading, drawing on their combined affordances. In addition, our study reveals some key design issues, such as the pen form factor and gesture feedback. This paper contributes to a better understanding of the pros and cons of paper and computers, and sheds light on the design of future interfaces for document interaction.
A-Coord Input: Coordinating Auxiliary Input Streams for Augmenting Contextual Pen-Based Interactions
Contribution & Benefit: We explore a-coord input, a technique that involves coordinating two auxiliary pen channels in conjunction. Experiments demonstrate a-coord input's effectiveness for both discrete-item selection, and multi-parameter selection and manipulation tasks.
Abstract » The human hand can naturally coordinate multiple finger joints, and simultaneously tilt, press, and roll a pen to write or draw. For this reason, digital pens are now embedded with auxiliary input sensors to capture these actions. Prior research on auxiliary input channels has mainly investigated them in isolation from one another. In this work, we explore the coordinated use of two auxiliary channels, a class of interaction techniques we refer to as a-coord input. Through two separate experiments, we explore the design space of a-coord input. In the first study we identify whether users can successfully coordinate two auxiliary channels. We found a strong degree of coordination between channels. In the second experiment, we evaluate the effectiveness of a-coord input in a task with multiple steps, such as multi-parameter selection and manipulation. We find that a-coord input facilitates coordination even in a complex, sequential task. Overall, our results indicate that users can control at least two auxiliary input channels in conjunction, which can facilitate a number of common pen-based tasks.
Personalized Input: Improving Ten-Finger Touchscreen Typing through Automatic Adaptation
Contribution & Benefit: We introduce and evaluate two novel personalized keyboard interfaces. Results show that personalizing the underlying key-press classification model improves typing speed, but not when accompanied by visual adaptation.
Abstract » Although typing on touchscreens is slower than typing on physical keyboards, touchscreens offer a critical potential advantage: they are software-based, and, as such, the keyboard layout and classification models used to interpret key presses can dynamically adapt to suit each user's typing pattern. To explore this potential, we introduce and evaluate two novel personalized keyboard interfaces, both of which adapt their underlying key-press classification models. The first keyboard also visually adapts the location of keys while the second one always maintains a visually stable rectangular layout. A three-session user evaluation showed that the keyboard with the stable rectangular layout significantly improved typing speed compared to a control condition with no personalization. Although no similar benefit was found for the keyboard that also offered visual adaptation, overall subjective response to both new touchscreen keyboards was positive. As personalized keyboards are still an emerging area of research, we also outline a design space that includes dimensions of adaptation and key-press classification features.
Bimanual Marking Menu for Near Surface Interactions
Contribution & Benefit: We describe a mouseless, near-surface version of the Bimanual Marking Menu system. The system offers a large number of accessible commands and does not interfere with multi-touch interactions.
Abstract » We describe a mouseless, near-surface version of the Bimanual Marking Menu system. To activate the menu system, users create a pinch gesture with either their index or middle finger to initiate a left click or right click. Then they mark in the 3D space near the interactive area. We demonstrate how the system can be implemented using a commodity range camera such as the Microsoft Kinect, and report on several designs of the 3D marking system.
Like the multi-touch marking menu, our system offers a large number of accessible commands. Since it does not rely on contact points to operate, our system leaves the non-dominant hand available for other multi-touch interactions.
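The pinch-to-click mapping described in the abstract above can be sketched as follows. This is a hedged illustration under assumed names (the `Click` enum and `click_from_pinch` function are hypothetical, not part of the authors' system); the abstract specifies only that an index-finger pinch initiates a left click and a middle-finger pinch a right click.

```python
# Sketch (hypothetical API): mapping which finger forms the pinch to a
# click type, as in the near-surface Bimanual Marking Menu activation.

from enum import Enum

class Click(Enum):
    LEFT = "left"    # initiated by an index-finger pinch
    RIGHT = "right"  # initiated by a middle-finger pinch

def click_from_pinch(pinching_finger: str) -> Click:
    """Return the click type initiated by a pinch gesture with the
    given finger; other fingers have no click binding here."""
    mapping = {"index": Click.LEFT, "middle": Click.RIGHT}
    if pinching_finger not in mapping:
        raise ValueError(f"no click bound to {pinching_finger!r} pinch")
    return mapping[pinching_finger]

print(click_from_pinch("index").value)   # left
print(click_from_pinch("middle").value)  # right
```

After the pinch selects the click type, marking proceeds in the 3D space above the surface, leaving the contact points (and hence the non-dominant hand) free for multi-touch input.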