Tools and Stats in Evaluation Studies

Case Study & Paper

May 8, 2012 @ 11:30, Room: 12AB

Chair: Jeff Heer, Stanford University, USA
Experiences with Collaborative, Distributed Predictive Human Performance Modeling - Long Case Study
Community: engineering
Contribution & Benefit: Case study using predictive human performance modeling in a real-world design project. Provides recommendations for avoiding pitfalls with existing modeling tools and design ideas for future collaborative modeling tools.
Abstract » Although predictive human performance modeling has been researched for 30 years in HCI, to our knowledge modeling has been conducted as a solitary task of one modeler or, occasionally, two modelers working in tight face-to-face collaboration. In contrast, we used predictive human performance modeling in a collaborative, distributed mode and reflect on that experience. We discovered that our tool for modeling, CogTool, while sufficiently functional and expressive to perform the modeling task, did not support collaborative, distributed modeling as well as we would like. We suggest process improvements in model construction, the management of assumptions, consistency, and communication, and suggest design solutions for the future of CogTool or other modeling tools. We further speculate on the generalization of our experiences to other types of usability evaluation when conducted in a distributed, collaborative environment.
Comparing Averages in Time Series Data - Paper
Community: design
Contribution & Benefit: This paper explores visualizations for efficient summarization through perceptually motivated design and empirical assessment.
Abstract » Visualizations often seek to aid viewers in assessing the big picture in the data, that is, to make judgments about aggregate properties of the data. In this paper, we present an empirical study of a representative aggregate judgment task: finding regions of maximum average in a series. We show how a theory of perceptual averaging suggests a visual design other than the typically used line graph. We describe an experiment that assesses participants' ability to estimate averages and make judgments based on these averages. The experiment confirms that this color encoding significantly outperforms the standard practice. The experiment also provides evidence for a perceptual averaging theory.
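The "visual design other than the typically used line graph" is, per the abstract, a color encoding of the series. As a rough illustration of that contrast only (these are not the paper's actual stimuli; the series and colormap are assumptions), the sketch below renders the same series as a line graph and as a one-row color strip:

```python
# Hypothetical illustration: the same time series as a line graph
# (standard practice) and as a color strip (a perceptual-averaging-
# friendly encoding). Synthetic data, not the study's stimuli.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0, 1, 200)) + rng.normal(0, 2, 200)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 3), sharex=True)
ax1.plot(series)                       # standard line graph
ax1.set_ylabel("value")
ax2.imshow(series[np.newaxis, :],      # one-row heatmap ("color strip")
           aspect="auto", cmap="viridis")
ax2.set_yticks([])
ax2.set_xlabel("time")
plt.show()
```

In the color strip, judging the average of a region reduces to judging its overall hue, which is the kind of perceptual averaging the abstract's theory appeals to.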
Rethinking Statistical Analysis Methods for CHI - Paper
Contribution & Benefit: Identifies fundamental problems in the statistical methods commonly used in quantitative evaluations. Proposes solutions and recommendations for best practice.
Abstract » CHI researchers typically use a significance testing approach to statistical analysis when testing hypotheses during usability evaluations. However, the appropriateness of this approach is under increasing criticism, with statisticians, economists, and psychologists arguing against the routine interpretation of results using "canned" p values. Three problems with current practice - the fallacy of the transposed conditional, a neglect of power, and the reluctance to interpret the size of effects - can lead us to build weak theories based on vaguely specified hypotheses, resulting in empirical studies that produce results of limited practical or scientific use. Using publicly available data presented at CHI 2010 [19] as an example, we address each of the three concerns and promote consideration of the magnitude and actual importance of effects, as opposed to statistical significance, as the new criteria for evaluating CHI research.
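The abstract's remedy, reporting the magnitude of effects rather than bare p values, can be made concrete with a small example. The sketch below uses synthetic data (not the CHI 2010 dataset the paper analyzes) to report a p value alongside Cohen's d and a confidence interval for the difference of means:

```python
# Minimal sketch of "magnitude over significance": report an effect
# size and a confidence interval next to the p value. Synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(5.0, 1.5, 30)   # e.g., task times under condition A
b = rng.normal(5.8, 1.5, 30)   # condition B

t, p = stats.ttest_ind(a, b)   # the usual significance test

# Cohen's d: standardized difference of means using the pooled SD.
pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                     (len(b) - 1) * b.var(ddof=1)) /
                    (len(a) + len(b) - 2))
d = (b.mean() - a.mean()) / pooled_sd

# 95% CI for the raw difference of means: conveys size and uncertainty.
se = pooled_sd * np.sqrt(1 / len(a) + 1 / len(b))
df = len(a) + len(b) - 2
half = stats.t.ppf(0.975, df) * se
diff = b.mean() - a.mean()

print(f"p = {p:.4f}, Cohen's d = {d:.2f}, "
      f"95% CI for difference = [{diff - half:.2f}, {diff + half:.2f}]")
```

A reader of such a report can judge whether the effect is large enough to matter in practice, which a lone p value cannot convey.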
A Spatiotemporal Visualization Approach for the Analysis of Gameplay Data - Paper
Contribution & Benefit: Describes a visualization system for gameplay data that can be adapted to different kinds of games and queries. It helps analysts examine and better understand player behavior within a game.
Abstract » Contemporary video games are highly complex systems with many interacting variables. To make sure that a game provides a satisfying experience, a meaningful analysis of gameplay data is crucial, particularly because the quality of a game directly relates to the experience a user gains from playing it. Automatic instrumentation techniques are increasingly used to record data during playtests. However, the evaluation of the data requires strong analytical skills and experience. The visualization of such gameplay data is essentially an information visualization problem, where a large number of variables have to be displayed in a comprehensible way in order to be able to make global judgments. This paper presents a visualization tool to assist the analytical process. It visualizes the game space as a set of nodes which players visit over the course of a game and is also suitable for observing time-dependent information, such as player distribution. Our tool is not tailored to a specific genre. To show the flexibility of our approach, we use two different kinds of games as case studies.
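The abstract describes the game space as a set of nodes that players visit over time. As a toy sketch only (the event format and time bucketing below are assumptions, not the paper's actual data model), instrumented visit events might be aggregated into per-time-bucket player distributions like this:

```python
# Hypothetical sketch of the node-visit aggregation the abstract
# describes: game space as nodes, player distribution per time bucket.
from collections import defaultdict

def aggregate_visits(events, bucket_seconds=60):
    """events: iterable of (player_id, node_id, timestamp_seconds)."""
    buckets = defaultdict(lambda: defaultdict(set))  # bucket -> node -> players
    for player, node, ts in events:
        buckets[int(ts // bucket_seconds)][node].add(player)
    # Distinct-player count per node, per time bucket.
    return {b: {n: len(ps) for n, ps in nodes.items()}
            for b, nodes in buckets.items()}

events = [("p1", "spawn", 3), ("p2", "spawn", 10),
          ("p1", "bridge", 75), ("p2", "bridge", 80),
          ("p1", "keep", 130)]
print(aggregate_visits(events))
# {0: {'spawn': 2}, 1: {'bridge': 2}, 2: {'keep': 1}}
```

Such per-bucket counts are the kind of time-dependent information (here, player distribution over nodes) that a spatiotemporal view can then map onto the game space.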