Programming, Performance, and Sense Making

Case Study & Paper

May 9, 2012 @ 16:30, Room: Ballroom G

Chair: John Thomas, IBM Research, USA
Modeling Task Performance for a Crowd of Users from Interaction Histories - Note
Contribution & Benefit: Describes a system for human performance modeling that utilizes interaction histories from a crowd of end users. Can assist UI designers in quantitatively evaluating interfaces.
Abstract » We present Tome, a novel framework that helps developers quantitatively evaluate user interfaces and design iterations using histories from crowds of end users. Tome collects user-interaction histories via an interface instrumentation library as end users complete tasks; these histories are compiled into task completion-time predictions with the Keystroke-Level Model (KLM) in CogTool. With many histories, Tome can model prevailing strategies for tasks without needing an HCI specialist to describe users' interaction steps. An unimplemented design change can be evaluated by perturbing a Tome task model in CogTool to reflect the change, yielding a new performance prediction. We found that predictions for quick (5-60s) query tasks in an instrumented brain-map interface averaged within 10% of measured expert times. Finally, we modified a Tome model to closely predict the speed-up from a proposed interaction before implementing it.
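The KLM compilation step lends itself to a small illustration. The Python sketch below sums commonly cited KLM operator estimates over a logged operator sequence to produce a completion-time prediction. The operator values, the KLM_TIMES table, and the sample history are illustrative assumptions only; they are not Tome's instrumentation format or CogTool's more detailed ACT-R-based computation.

# Commonly cited KLM operator estimates in seconds (illustrative values only).
KLM_TIMES = {
    "K": 0.28,  # press a key (average skilled typist)
    "P": 1.10,  # point to a target with the mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def predict_completion_time(operator_sequence):
    """Sum operator estimates over one logged interaction history."""
    return sum(KLM_TIMES[op] for op in operator_sequence)

# Hypothetical history for a short query task: think, point, click, home, type 4 keys.
history = ["M", "P", "B", "B", "H"] + ["K"] * 4
print("Predicted task time: %.2f s" % predict_completion_time(history))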
Applying Design Strategies in Publication Networks – A Case Study - Short Case Study
Community: design, user experience
Contribution & Benefit: A comparative case study that investigates the influence of design strategies on user behavior. Can provide guidance in choosing a design strategy for sensemaking tools.
Abstract » This case study shows how following two different design strategies (Overview first, zoom and filter, then details on demand [8] and Start from what you know, then grow [5]) influences the sensemaking behavior [6] of users in the context of Science 2.0 [9]. To this end, we designed, developed, and evaluated two multi-touch applications that provide interactive visualizations of authorship networks. Overview first steers people towards structural insight and overview sensemaking, while Start from what you know invites users to use topical information to explore the data.
Designing a Debugging Interaction Language for Cognitive Modelers: An Initial Case Study in Natural Programming Plus - Paper
Contribution & Benefit: Investigates how a debugging environment should support cognitive modelers. Suggests design implications as well as validation opportunities for interactive programming tools and languages.
Abstract » In this paper, we investigate how a debugging environment should support a population doing work at the core of HCI research: cognitive modelers. In conducting this investigation, we extended the Natural Programming methodology (a user-centered design method for HCI researchers of programming environments) to add an explicit method for mapping the outcomes of NP's empirical investigations to a language design. This provided us with a concrete way to make the design leap from empirical assessment of users' needs to a language. The contributions of our work are therefore: (1) empirical evidence about the content and sequence of cognitive modelers' information needs when debugging, (2) a new, empirically derived design specification for a debugging interaction language for cognitive modelers, and (3) an initial case study of our "Natural Programming Plus" methodology.
CogTool-Explorer: A Model of Goal-Directed User Exploration that Considers Information Layout - Paper
Community: engineering
Contribution & Benefit: Describes a tool that predicts novice exploration behavior, including errors, with predictions accounting for 63-82% of the variance in three usability metrics. Includes examples of using the predictions to direct design effort.
Abstract » CogTool-Explorer 1.2 (CTE1.2) predicts novice exploration behavior and how it varies with different user-interface (UI) layouts. CTE1.2 improves upon previous models of information foraging by adding a model of hierarchical visual search to guide foraging behavior. CTE1.2 is built within CogTool, so it is easy to represent UI layouts, run the model, and present results; its vision is to assess many design ideas at the storyboard stage, before implementation and without the cost of running human participants. This paper evaluates CTE1.2's predictions against observed human behavior on 108 tasks (36 tasks on each of 3 distinct website layouts). CTE1.2's predictions accounted for 63-82% of the variance in the percentage of participants succeeding on each task, the number of clicks to success, and the percentage of participants succeeding without error. We demonstrate how these predictions can be used to identify areas of the UI in need of redesign.
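For readers unfamiliar with "variance accounted for," the Python sketch below computes the coefficient of determination (R²) between observed and model-predicted per-task values. The numbers are made up for illustration, and the paper's exact statistical procedure may differ (for example, it may report a squared correlation instead).

def r_squared(observed, predicted):
    """Fraction of variance in the observed values explained by the predictions."""
    mean_obs = sum(observed) / len(observed)
    ss_total = sum((y - mean_obs) ** 2 for y in observed)
    ss_resid = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1.0 - ss_resid / ss_total

# Hypothetical per-task success rates: observed vs. model-predicted.
observed  = [0.90, 0.55, 0.75, 0.40, 0.80]
predicted = [0.85, 0.60, 0.70, 0.45, 0.78]
print("Variance accounted for: %.0f%%" % (100 * r_squared(observed, predicted)))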
Easing the Generation of Predictive Human Performance Models from Legacy Systems - Paper
Community: engineering
Contribution & Benefit: Describes a tool that leverages GUI testing technology from Software Engineering in the creation of human performance models for evaluating existing systems. Many steps are automated, easing the modeler's job.
Abstract » With the rise of tools for predictive human performance modeling in HCI comes a need to model legacy applications. Models of legacy systems are used to compare products to competitors, or new proposed design ideas to the existing version of an application. We present CogTool-Helper, an exemplar of a tool that results from joining this HCI need to research in automatic GUI testing from the Software Engineering testing community. CogTool-Helper uses automatic UI-model extraction and test case generation to automatically create CogTool storyboards and models and to infer methods for accomplishing tasks beyond what the UI designer has specified. A design walkthrough with experienced CogTool users reveals that CogTool-Helper resonates with a "pain point" of real-world modeling and provides suggestions for future work.
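To make the "infer methods" idea concrete, here is a hedged Python sketch that enumerates loop-free action sequences through a small, hypothetical UI state graph. The graph, the action labels, and the enumeration strategy are assumptions for illustration; they are not CogTool-Helper's actual UI-model extraction or test case generation machinery.

# Hypothetical UI state graph: state -> list of (action label, next state).
UI_GRAPH = {
    "start":     [("open File menu", "file_menu"), ("press Ctrl+S", "saved")],
    "file_menu": [("click Save", "saved"), ("press Esc", "start")],
    "saved":     [],
}

def enumerate_methods(graph, start, goal, max_steps=6):
    """Depth-first enumeration of loop-free action sequences from start to goal."""
    methods = []

    def dfs(state, path, visited):
        if state == goal:
            methods.append(list(path))
            return
        if len(path) >= max_steps:
            return
        for action, nxt in graph.get(state, []):
            if nxt in visited:
                continue
            dfs(nxt, path + [action], visited | {nxt})

    dfs(start, [], {start})
    return methods

for method in enumerate_methods(UI_GRAPH, "start", "saved"):
    print(" -> ".join(method))
# Prints both the menu-based and keyboard-shortcut methods for a hypothetical "save" task.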