Usability Methods

Paper

May 9, 2012 @ 14:30, Room: 16AB

Chair: Effie Law, University of Leicester, UK
What Do Users Really Care About? A Comparison of Usability Problems Found by Users and Experts on Highly Interactive Websites - Paper
Contribution & Benefit: An evidence-based set of 21 heuristics to assist in the development and evaluation of highly interactive websites, derived from an analysis of 935 usability problems identified by users and experts across six websites.
Abstract » Expert evaluation methods, such as heuristic evaluation, are still popular in spite of numerous criticisms of their effectiveness. This paper investigates the usability problems found in the evaluation of six highly interactive websites by 30 users in a task-based evaluation and 14 experts using three different expert evaluation methods. A grounded theory approach was taken to categorize 935 usability problems from the evaluation. Four major categories emerged: Physical presentation, Content, Information Architecture and Interactivity. Each major category had between 5 and 16 sub-categories. The categories and sub-categories were then analysed for whether they were found by users only, experts only or both users and experts. This allowed us to develop an evidence-based set of 21 heuristics to assist in the development and evaluation of interactive websites.
The Effect of Task Assignments and Instruction Types on Remote Asynchronous Usability Testing - Paper
Community: engineering
Contribution & Benefit: This paper presents a study of the effect of task assignments and instruction types on the number and variability of usability problems identified in remote asynchronous usability testing.
Abstract » Remote asynchronous usability testing involves users directly in reporting usability problems. Most studies of this approach employ predefined tasks to ensure that users experience specific aspects of the system, whereas other studies use no task assignments. Yet the effect of using predefined tasks has not been established, and there is limited research on how to instruct users in identifying usability problems. This paper reports on a comparative study of the effect of task assignments and instruction types on the problems identified in remote asynchronous usability testing of a website for information retrieval, involving 53 prospective users. The results show that users solving predefined tasks identified significantly more usability problems, with a significantly higher level of agreement, than those working on their own authentic tasks. Moreover, users who were instructed by means of examples of usability problems identified significantly more usability problems than those who received a conceptual definition of usability problems.
Analysis in Practical Usability Evaluation: A Survey Study - Paper
Contribution & Benefit: A survey of 155 usability practitioners is presented, providing insight into current usability evaluation analysis practices and recommendations on how to align future research with practitioners' needs for analysis support.
Abstract » Analysis is a key part of conducting usability evaluations, yet it is rarely studied systematically. As a result, we lack direction on how to research support for practitioners' analysis and miss an opportunity for practitioners to learn from each other. We surveyed 155 usability practitioners about the analysis in their latest usability evaluation. Analysis is typically flexible and lightweight. At the same time, practitioners see a need to strengthen the reliability of evaluation. Redesign is closely integrated with analysis; more than half of the respondents provide visual redesign suggestions in their evaluation deliverables. Analysis support from academic research, including tools, forms and structured formats, does not appear to have a direct impact on analysis practice. We provide six recommendations for future research to better support analysis.
Evaluating the Collaborative Critique Method - Paper
Contribution & Benefit: We introduce a new usability walkthrough method called Collaborative Critique, inspired by the human-computer collaboration paradigm of system-user interaction, and present the results of its evaluation with usability professionals.
Abstract » We introduce a new usability walkthrough method called Collaborative Critique (CC), which is inspired by the human-computer collaboration paradigm of system-user interaction. This method applies a "collaboration lens" to assessing the system's behavior and its impact on the user's efforts in the context of the task being performed. We present findings from a laboratory evaluation of the CC method with usability practitioners, in which the results of the CC walkthrough were compared to a benchmark set of problems collected via user testing with two experimental Enterprise Resource Planning (ERP) system tasks. The development of this new usability evaluation method was driven by the need for an approach that assesses the adequacy of the system's support for reducing the user's cognitive and physical effort in the context of the interaction.