Communication Technologies for the Zombie Apocalypse: New Educational Initiatives
Contribution & Benefit: The zombie apocalypse will present a unique challenge as communication technologies fail. This video describes STEM initiatives that will prepare children to communicate when the undead hordes are upon us.
Abstract » The threat of the zombie apocalypse has finally begun to reach a level of popular concern, both in the media and in government organizations like the U.S. Centers for Disease Control and Prevention. The zombie apocalypse and subsequent destruction of modern communication technologies will present a unique challenge to future generations. This video describes new STEM initiatives that will enable today's children to maintain vital information links once the undead hordes are upon us.
Pet Video Chat: Monitoring and Interacting with Dogs over Distance
Contribution & Benefit: We designed a pet video chat system that augments a Skype audio-video connection with remote interaction features and evaluated it with pet owners to understand its usage.
Abstract » Companies are now making video-communication systems that allow pet owners to see, and in some cases even interact with, their pets when they are separated by distance. Such ‘doggie cams’ show promise, yet it is not clear how pet video chat systems should be designed (if at all) to meet the real needs of pet owners. To investigate the potential of interactive dog cams, we designed our own pet video chat system that augments a Skype audio-video connection with remote interaction features and evaluated it with pet owners to understand its usage. Our results show promise for pet video chat systems that allow owners to see and interact with their pets while away.
Designing Visualizations to Facilitate Multisyllabic Speech with Children with Autism and Speech Delays
Contribution & Benefit: VocSyl is a real-time voice visualization system to help teach multisyllabic speech to children with autism and speech delays.
Abstract » The ability of children to combine syllables represents an important developmental milestone. This ability is often delayed or impaired in a variety of clinical groups, including children with autism spectrum disorders (ASD) and speech delays (SPD). This video illustrates some of the features of VocSyl, a real-time voice visualization system to shape multisyllabic speech. VocSyl was designed using the Task Centered User Interface Design methodology from the beginning to the end of the design process. Children with autism and speech delays, the targeted users of the software, were directly involved in the development process, allowing us to focus on what these children demonstrated they required.
TimeBlocks: “Mom, can I have another block of time?”
Contribution & Benefit: Time is a difficult concept for parents to communicate with young children. We developed TimeBlocks, a novel tangible, playful object to facilitate communication about concepts of time with young children.
Abstract » Time is a difficult concept for parents to communicate with young children. We developed TimeBlocks, a novel tangible, playful object to facilitate communication about concepts of time with young children. TimeBlocks consists of a set of cubic blocks that function as a physical progress bar. Parents and children can physically manipulate the blocks to represent the concept of time. We evaluated TimeBlocks through a field study in which six families tried TimeBlocks for four days in their homes. The results indicate that TimeBlocks played a useful role in facilitating the often challenging task of time-related communication between parents and children. We also report on a range of insightful novel uses of TimeBlocks observed in our study.
An Augmented Multi-touch System Using Hand and Finger Identification
Contribution & Benefit: We introduce a multitouch system capable of identifying the finger and hand corresponding to each touch, and show how we use it in a multitouch 3D authoring tool.
Abstract » With the advent of devices such as smart phones and tablet computers, multi-touch applications are rapidly becoming commonplace. However, existing multi-touch sensors are not able to report which finger, or which hand, is responsible for each of the touches. To overcome this deficiency we introduce a multi-touch system that is capable of identifying the finger and hand corresponding to each touch. The system consists of a commercially available capacitive multi-touch display augmented with an infrared depth camera mounted above the surface of the display. We performed a user study to measure the accuracy of the system and found that our algorithm was correct on 92.7% of the trials.
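The abstract does not detail the identification algorithm itself. Purely as an illustration of how touch points might be fused with depth-camera hand tracking, one simple scheme is nearest-neighbor association between tracked fingertip positions and reported touches; the function, labels, and coordinates below are all hypothetical, not the authors' method:

```python
import math

def label_touches(touches, fingertips):
    """Assign each 2D touch point the label of the nearest tracked fingertip.

    touches    -- list of (x, y) points reported by the capacitive sensor
    fingertips -- dict mapping labels like 'L-index' to (x, y) positions
                  projected from the depth camera into screen coordinates
    Returns a list of (touch, label) pairs.
    """
    labeled = []
    for tx, ty in touches:
        # Pick the fingertip whose projected position is closest to the touch.
        best = min(fingertips.items(),
                   key=lambda kv: math.hypot(kv[1][0] - tx, kv[1][1] - ty))
        labeled.append(((tx, ty), best[0]))
    return labeled

# Example: two touches, fingertips tracked for left index and right thumb.
tips = {"L-index": (100, 200), "R-thumb": (400, 220)}
print(label_touches([(105, 198), (395, 225)], tips))
```

A real system would also have to handle occlusion and calibration between camera and display coordinates, which this sketch ignores.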
TEROOS: A Wearable Avatar to Enhance Joint Activities (Video Preview)
Contribution & Benefit: The video shows what communication style a wearable robot avatar offers to daily life situations. Two users can communicate by sharing their vision via the robot avatar.
Abstract » This video shows a wearable avatar named TEROOS, which is mounted on the shoulder of a person. TEROOS allows the user who wears it and the user who controls it to remotely share a view. Moreover, the avatar has an anthropomorphic face that enables the controlling user to communicate with people who are physically around the wearing user. We conducted a field test with TEROOS and observed that the wearable avatar assisted the users in communicating during joint activities such as navigating a route and buying goods at a shop. In addition, both users could easily identify objects that they discussed. Moreover, the shop's staff members communicated with the user controlling TEROOS and exhibited typical social behavior.
The Design Evolution of LuminAR: A Compact and Kinetic Projected Augmented Reality Interface
Contribution & Benefit: LuminAR is a kinetic projected augmented reality interface embodied in everyday objects, namely a light bulb and a task light. This video presents the design evolution iterations of the various LuminAR prototypes.
Abstract » LuminAR is a new form factor for a compact and kinetic projected augmented reality interface. This video presents the design evolution iterations of the LuminAR prototypes. In this video we document LuminAR’s design process, hardware and software implementation and demonstrate new kinetic interaction techniques. The work presented is motivated through a set of applications that explore scenarios for interactive and kinetic projected augmented reality interfaces. It also opens the door for further explorations of kinetic interaction and promotes the adoption of projected augmented reality as a commonplace user interface modality.
EyeRing: An Eye on a Finger
Contribution & Benefit: EyeRing is a finger-worn personal assistant with visual analysis capabilities that aids visually impaired people as well as the sighted.
Abstract » Finger-worn devices are a greatly underutilized form of interaction with the surrounding world. By putting a camera on a finger we show that many visual analysis applications, for visually impaired people as well as the sighted, prove seamless and easy. We present EyeRing, a ring mounted camera, to enable applications such as identifying currency and navigating, as well as helping sighted people to tour an unknown city or intuitively translate signage. The ring apparatus is autonomous, however our system also includes a mobile phone or computation device to which it connects wirelessly, and an earpiece for information retrieval. Finally, we will discuss how different finger worn sensors may be extended and applied to other domains.
Which Book Should I Pick?
Contribution & Benefit: This research suggests three possible textual visualizations of a book that may help users find a desirable book by distilling intuitive information from large book data.
Abstract » This video proposes readability visualization, genre visualization, and a combined visualization to provide unconventional information for book selection. Data visualization was initiated for the practical purpose of delivering information, as it efficiently links visual perception and data so that readers can instantly recognize patterns in overcrowded data. In this interdisciplinary research we used the strength of data visualization; this paper suggests three possible textual visualizations of a book that may help users find a desirable book by distilling intuitive information from a large volume of book data.
Video Mediated Recruitment for Online Studies
Contribution & Benefit: We illustrate that videos can support online research by driving the recruitment process. They can also help build an online community which in turn can provide many long term benefits.
Abstract » More than ever, researchers are turning to the internet as a means to conduct HCI studies. Despite the promise of a worldwide audience, recruiting participants can still be a difficult task. In this video we discuss and illustrate that videos - through their sharable and entertaining nature - can greatly assist the recruitment process. Videos can also be a crucial part in developing an online presence, which may yield a community of followers and interested individuals. This community in turn can provide many long term benefits to the research, beyond just the recruitment phase.
PINOKY: A Ring-like Device that Gives Movement to Any Plush Toy
Contribution & Benefit: PINOKY is a wireless ring-like device that can be externally attached to any plush toy as an accessory that animates the toy by moving its limbs.
Abstract » Everyone has owned or been in contact with plush toys in their life, and plush toys play an integral part in many areas, for example in a child's growing-up process, in the medical field, and as a form of communication media. To enhance the interaction experience with plush toys, we created PINOKY, a wireless, ring-like device that can be externally attached to any plush toy as an accessory that animates the toy by moving its limbs. It is a non-intrusive device, and users can instantly convert their personal plush toys into soft robots. Currently, several interactions are supported, such as letting the user control the toy remotely, or inputting a desired movement by moving the toy and having the data recorded and played back.
Experience "panavi," Challenge to Master Professional Culinary Arts!
Contribution & Benefit: This video introduces the user experience of "panavi," which supports domestic users in mastering professional culinary arts in their own kitchens by properly managing temperature and pan movement.
Abstract » This video introduces the user experience of "panavi," which supports domestic users in mastering professional culinary arts in their own kitchens by properly managing temperature and pan movement. Utilizing a sensor-embedded frying pan wirelessly connected to a computer system, panavi analyzes the sensors' data, recognizes the user's conditions, and provides the user with situated navigation messages. In the video, a young lady tries to cook spaghetti carbonara using panavi and masters this "difficult" dish while enjoying the cooking process. The full paper on this work is also published in the CHI '12 conference proceedings.
Ferro Tale: Electromagnetic Animation Interface
Contribution & Benefit: Inspired by the expressiveness of sand drawing, we explore ways to use an electromagnetic array, camera feedback, computer vision, and ferromagnetic particles to produce animations.
Abstract » In this video we demonstrate the idea and the prototype of an electromagnetic animation interface, Ferro Tale. Ferromagnetic particles, such as iron filings, have fascinating characteristics, and are therefore widely used in art, in education, and as toys. Besides their potential to enable visual and tactile feedback and to serve as a medium for high-resolution tangible input, people's natural desire to engage with and explore the behavior of this material makes them interesting for HCI. Inspired by the expressiveness of sand drawing, we want to explore ways to use an electromagnetic array, camera feedback, computer vision, and ferromagnetic particles to produce animations. The current magnetic actuation device consists of a 3-by-3 coil array. Even with such a small number of actuators, we are able to demonstrate several animation examples.
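The abstract mentions a 3-by-3 coil array with camera feedback but gives no control scheme. As a purely hypothetical sketch of how an animation path might be mapped to coil activations, one could simply energize the coil whose center is nearest to each target point; the grid layout and function names below are assumptions, not the authors' design:

```python
def nearest_coil(target, grid_size=3):
    """Return (row, col) of the coil nearest to target=(x, y) in [0, 1]^2.

    Coils are assumed to sit at the centers of a grid_size x grid_size
    grid over the unit square, e.g. at 1/6, 1/2, 5/6 for a 3x3 array.
    """
    x, y = target
    col = min(range(grid_size), key=lambda c: abs((c + 0.5) / grid_size - x))
    row = min(range(grid_size), key=lambda r: abs((r + 0.5) / grid_size - y))
    return (row, col)

def animate(path):
    """Map a sequence of target points to a coil activation sequence."""
    return [nearest_coil(p) for p in path]

# Example: sweep particles from one corner to the other.
print(animate([(0.1, 0.1), (0.5, 0.5), (0.9, 0.9)]))  # → [(0, 0), (1, 1), (2, 2)]
```

The camera-feedback loop in the real prototype would close this control loop by observing where the particles actually moved, which this sketch omits.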
Supporting children with autism to participate throughout a design process
Contribution & Benefit: This short film portrays a representative participatory design session involving children with autism collaborating to generate ideas for user interface characters or personas, as active participants within a design team.
Abstract » A deficit in social communication is one of a number of core features of autism that can result in the exclusion of individuals with autism from the design process. Individuals with autism can be highly motivated by new technology, and the design of technologies for individuals with autism could potentially benefit from their direct input. We structured participatory design sessions using Cooperative Inquiry specifically to support the needs of individuals with autism. This video highlights how, when appropriately supported, the challenges of the social communication deficits associated with autism can be overcome and individuals with autism can take a full and active role within the design process.
Towards a Wearable Music System for Nomadic Musicians
Contribution & Benefit: This concept video shows the design of a wearable system for musicians to record their ideas while being away from their instruments, using an interactive shirt and belt.
Abstract » This concept video shows the design of a wearable system for musicians to record their ideas while being away from their instruments, using an interactive shirt and belt.
Tongueduino: Hackable, High-bandwidth Sensory Augmentation
Contribution & Benefit: The tongue has an extremely dense sensing resolution and extraordinary degree of neuroplasticity. Tongueduino is an electro-tactile tongue display that uses those characteristics to interface the user's body to electronic sensors.
Abstract » The tongue is known to have an extremely dense sensing resolution, as well as an extraordinary degree of neuroplasticity, the ability to adapt to and internalize new input. Research has shown that electro-tactile tongue displays paired with cameras can be used as vision prosthetics for the blind or visually impaired; users quickly learn to read and navigate through natural environments, and many describe the signals as an innate sense. However, existing displays are expensive and difficult to adapt. Tongueduino is an inexpensive, vinyl-cut tongue display designed to interface with many types of sensors besides cameras. Connected to a magnetometer, for example, the system provides a user with an internal sense of direction, like a migratory bird. Piezo whiskers allow a user to sense orientation, wind, and the lightest touch. Through tongueduino, we hope to bring electro-tactile sensory substitution beyond the discourse of vision replacement, towards open-ended sensory augmentation that anyone can access.
Pen-in-Hand Command: NUI for Real-Time Strategy eSports
Contribution & Benefit: We investigate the design of embodied interaction in the context of real-time strategy eSports. Specifically, we look at pen + multi-touch interaction using a Wacom Cintiq augmented with a ZeroTouch sensor.
Abstract » Electronic Sports (eSports) is the professional play and spectating of digital games. Real-time strategy games are a form of eSport that requires particularly high-performance and precise interaction. Prior eSports HCI has been keyboard and mouse based. We investigate the real-time strategy eSports context to design novel interactions with embodied modalities, because of its rigorous needs and requirements, and the centrality of the human-computer interface as the medium of game mechanics. To sense pen + multi-touch interaction, we augment a Wacom Cintiq with a ZeroTouch multi-finger sensor. We used this modality to design new pen + touch interaction for play in real-time strategy eSports.
Plushbot: an Introduction to Computer Science
Contribution & Benefit: Plushbot is a system that allows children to create their own interactive plush toys with computational elements and ideas embedded.
Abstract » We present the Plushbot project, which focuses on providing a more motivating introduction to computer science for middle school students, employing tangible programming of plush toys as its central activity. About sixty students, ages 12-14, participated in a 7.5-week study in which they created and programmed their own plush toys. To achieve this, they learned and used several tools, including LilyPad Arduino, Modkit, and a web-based application called Plushbot, which permits the user to integrate circuitry design with a pattern of plush toy pieces. Once a design is complete, the user can print the pattern and use it as a template for creating a plush toy. Plushbot is a system that allows children to create their own interactive plush toys with computational elements and ideas embedded.
Fast and Frugal Shopping Challenge
Contribution & Benefit: A fast and frugal shopping challenge looks at the pros and cons of using various devices to help make purchase decisions in a grocery store.
Abstract » There are a number of mobile shopping aids and recommender systems available, but none can be easily used for a weekly shop at a local supermarket. We present a minimal, mobile and fully functional lambent display that clips onto any shopping trolley handle, intended to nudge people when choosing what to buy. It provides salient information about the food miles for various scanned food items represented by varying lengths of lit LEDs on the handle and a changing emoticon comparing the average miles of all the products in the trolley against a social norm. A fast and frugal shopping challenge is presented, in the style of a humorous reality TV show, where the pros and cons of using various devices to help make purchase decisions are demonstrated by shoppers in a grocery store.
Anyone Can Sketch Vignettes!
Contribution & Benefit: Presents a sketch-based application for interactive pen-and-ink illustration. The novel interaction and workflow enable users to create a wide range of paintings easily and quickly while preserving their personal artistic style.
Abstract » Vignette is an interactive system that facilitates texture creation in pen-and-ink illustrations. Unlike existing systems, Vignette preserves illustrators' workflow and style: users draw a fraction of a texture and use gestures to automatically fill regions with the texture. Our exploration of natural workflow and gesture-based interaction was inspired by the traditional way of creating illustrations. We currently support both 1D and 2D synthesis with stitching. Our system also has interactive refinement and editing capabilities to provide higher-level texture control, which helps artists achieve their desired vision. We found that Vignette makes the process of illustration more enjoyable and that first-time users can create rich textures from scratch within minutes.
SIGCHI SPrAyCE: A Space Spray Input for Fast Shape Drawing.
Contribution & Benefit: SPrAyce is a spray-based device allowing people to design in space. It's a new way of designing objects and shapes.
Abstract » Current technological solutions for sharing shape-based ideas are often time-demanding and painful to use. The goal of this project is to create a new device that enables an intuitive way of drawing. A spray-based input allows natural gestures to draw 3D objects and manipulate the drawing.
Looking Glass: A Field Study on Noticing Interactivity of a Shop Window
Contribution & Benefit: This video shows how passers-by interact with the Looking Glass, an interactive shop window.
Abstract » In this paper we present our findings from a lab and a field study investigating how passers-by notice the interactivity of public displays. We designed an interactive installation that uses visual feedback to the incidental movements of passers-by to communicate its interactivity. In the field study, three displays were installed for three weeks in shop windows, and data about 502 interaction sessions were collected. Our observations show: (1) Significantly more passers-by interact when immediately shown the mirrored user image (+90%) or silhouette (+47%) compared to a traditional attract sequence with call-to-action. (2) Passers-by often notice interactivity late and have to walk back to interact (the landing effect). (3) If somebody is already interacting, others begin interacting behind the ones already interacting, forming multiple rows (the honeypot effect).
WatchIt: Simple gestures for interacting with a watchstrap
Contribution & Benefit: WatchIt is a new way to interact with an interactive wristwatch: the wristband itself becomes interactive, avoiding the fat-finger problem and screen occlusion.
Abstract » We present WatchIt, a new interaction technique for wristwatch computers, a category of devices that badly suffers from a scarcity of input surface area. WatchIt considerably increases this surface by extending it from the touch screen to the wristband. The video shows a mockup of how simple gestures on the external and/or internal bands may allow the user to scroll a list (one-finger slide), to select an item (tap), and to set a continuous parameter like the volume of music playing (two-finger slide), avoiding the drawback of screen occlusion by the finger. Also shown is the prototype we are currently using to investigate the usability of our new interaction technique.
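The abstract names three wristband gestures and their effects. Just to make the mapping concrete, a minimal event dispatcher might look like the sketch below; the event representation and function name are illustrative assumptions, not part of the WatchIt prototype:

```python
def dispatch(event):
    """Map a wristband gesture event to an action.

    event is a hypothetical (kind, finger_count) tuple, e.g. ("slide", 1).
    The mappings follow the three gestures described in the abstract.
    """
    kind, fingers = event
    if kind == "slide" and fingers == 1:
        return "scroll list"       # one-finger slide
    if kind == "tap":
        return "select item"       # tap
    if kind == "slide" and fingers == 2:
        return "adjust volume"     # two-finger slide
    return "ignored"               # unrecognized gesture

print([dispatch(e) for e in [("slide", 1), ("tap", 1), ("slide", 2)]])
# → ['scroll list', 'select item', 'adjust volume']
```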
The Interactive Punching Bag
Contribution & Benefit: The ‘interactive punching bag’ is a programmable device that adds sensors, sound, lights, and a display to a conventional punching bag.
Abstract » The ‘interactive punching bag’ transforms a conventional punching bag into a programmable ‘smart device’ enhanced to provide various forms of stimulus and feedback (sound, lights, and displayed images). The physical characteristics of each punch are captured using impact sensors and accelerometers, and LEDs, speakers and an associated display can be used to provide different prompts and responses. Interactions are logged over time for analysis. The bag was devised as a means of investigating how to design interactions in the context of a fun, physical, familiar object. Preliminary studies suggest that users are surprised and engaged, and that first-time users spend more time in their first encounter if the bag is running an ‘unexpected’ program (e.g., giggling on impact rather than grunting). However, some users are sensitive about the nature of images and sounds associated with the bag, particularly where there is a conflict with social expectations or values. So far, the interactions that hold users’ attention are those, like the musical ‘punching bag keyboard’, that combine moderate physical activity with a creative element or an intellectual challenge.
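The abstract mentions impact sensors and accelerometers but gives no signal-processing details. As a hypothetical sketch only, a minimal punch detector could threshold the acceleration magnitude and log each rising-edge crossing once; the threshold value and data format below are assumptions:

```python
import math

THRESHOLD_G = 3.0  # assumed impact threshold, in g (illustrative value)

def detect_punches(samples):
    """samples: list of (t, ax, ay, az) accelerometer readings in g.

    Returns a list of (t, magnitude) pairs for readings that cross the
    threshold on a rising edge, so a sustained impact is logged only once.
    """
    punches, above = [], False
    for t, ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag >= THRESHOLD_G and not above:
            punches.append((t, round(mag, 2)))
        above = mag >= THRESHOLD_G
    return punches

# Example: resting gravity reading, one impact spanning two samples, rest.
data = [(0.00, 0, 0, 1.0), (0.01, 2.0, 2.0, 2.0), (0.02, 2.0, 2.0, 2.0),
        (0.50, 0, 0, 1.0)]
print(detect_punches(data))  # → [(0.01, 3.46)]
```

Logging these tuples over time would give the kind of interaction record the abstract says the bag keeps for analysis.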
Haptic Lotus - A Theatre Experience for Blind and Sighted Audiences
Contribution & Benefit: Can technologies facilitate comparable cultural experiences for both blind and sighted audiences? The Haptic Lotus is a device that changes its form as people walk through a dark immersive installation.
Abstract » How can new technologies be designed to facilitate comparable cultural experiences that are accessible by both blind and sighted audiences? An immersive theatre experience was designed to raise awareness and question perceptions of ‘blindness’ by enabling both sighted and blind audience members to experience a similar reality. We designed the Haptic Lotus, a novel device that changes its form in response to the audience's journey through the dark. The device was deliberately designed to be suggestive rather than directive, to encourage enactive exploration for both sighted and blind people. During a week of public performances at Battersea Arts Centre in London, 150 sighted and blind people took part. People were seen actively probing the dark space around them, and for many the Haptic Lotus provided a strong sense of reassurance in the dark.
MAWL: Mobile Assisted Word-Learning
Contribution & Benefit: Word-learning is one of the basic steps in language learning. This video demonstrates Mobile Assisted Word-Learning (MAWL): An augmented reality based collaborative interface for learning new words using a smartphone.
Abstract » Word-learning is one of the basic steps in language learning. The traditional approach to learning new words is to keep a dictionary and use it whenever one encounters a new word. This video demonstrates Mobile Assisted Word-Learning (MAWL): an augmented reality based collaborative social-networking interface for learning new words using a smartphone. MAWL keeps track of and saves all textual contexts during the reading process, while providing augmented reality-based assistance such as images, translation into the native language, synonyms, antonyms, and sentence usage.