The Living Room: Exploring the Haunted and Paranormal to Transform Design and Interaction

Woo! Next month, I will be going to Brisbane, Australia to present work that was done last summer in the DGP Lab by myself, Matthew Lakier, and Mingzhe (Franklin) Li, about Haunted User Interfaces. We were interested in developing new ways that information could be conveyed to users in a household setting, and we drew on ideas from haunted and paranormal phenomena to do so.

Our animatronic moose built from LEGO and servo motors!

Along with a number of prototypes, we also ran a Mechanical Turk study to gather information about the objects people have in their living rooms and how they interact with (or, as it turned out, ignore) these objects. We also synthesized the survey results, prototypes, and construction lessons into a Haunted Design Framework that can be used to develop or re-imagine interfaces for the home.

A quick video illustrating some of the ideas and prototypes:

Abstract:
Within this work, a novel metaphor, haunted design, is explored to challenge the definitions of ‘display’ used today. Haunted design draws inspiration and vision from some of the most multi-modal and sensory-diverse experiences that have been reported, the paranormal and hauntings. By synthesizing and deconstructing such phenomena, four novel opportunities to direct display design were uncovered, i.e., intensity, familiarity, tangibility, and shareability. A large-scale design probe, The Living Room, guided the ideation and prototyping of design concepts that exemplify facets of haunted design. By combining the opportunities, design concepts, and survey responses, a framework highlighting the importance of objects, their behavior, and the resulting phenomena to haunted design was developed. Given its emphasis on the odd and unusual, the haunted design metaphor should greatly spur conversation and alternative directions for future display-based user experiences.

UIST 2015 Publication! MoveableMaker: Facilitating the Design, Generation, and Assembly of Moveable Papercraft

This year at UIST (November 2015), I will be fortunate enough to give two presentations. The first is on the unintended touch ToCHI work that I did at Microsoft Research, and the second is on a new project that I undertook while at Autodesk Research and the DGP Lab at the University of Toronto. As I am a papercrafter and love using my Silhouette machine, back in January / February I began working on a small idea to create a unique menu for my wedding. The end result was MoveableMaker (and a number of menus!), a novel software application that automates the creation of interactive, moveable papercraft. More wedding details will soon follow (I can now talk about it!), and the UIST publication will be posted as it becomes available.

Wedding Menu pre-MoveableMaker

Wedding Menu post-MoveableMaker (everything is much cleaner and required much less effort)

Abstract:
In this work, we explore moveables, i.e., interactive papercraft that harness user interaction to generate visual effects. First, we present a survey of children’s books that captured the state of the art of moveables. The results of this survey were synthesized into a moveable taxonomy and informed MoveableMaker, a new tool to assist users in designing, generating, and assembling moveable papercraft. MoveableMaker supports the creation and customization of a number of moveable effects and employs moveable-specific features including animated tooltips, automatic instruction generation, constraint-based rendering, techniques to reduce material waste, and so on. To understand how MoveableMaker encourages creativity and enhances the workflow when creating moveables, a series of exploratory workshops were conducted. The results of these explorations, including the content participants created and their impressions, are discussed, along with avenues for future research involving moveables.

UIST 2011 Publication! – Medusa

So now that UIST 2011 is officially over, I can post about the work that I did at Autodesk Research in January of 2010! I’m super excited to finally get to talk about all my hard work and the awesome project (Medusa) that I got to work on. For now, I just want to share the video and a brief description of what I did (it was 4 months of work after all!). The full paper outlining Medusa (‘Medusa: a proximity-aware multi-touch tabletop’) can be found here.

Quick summary:

So in short, Medusa is a Microsoft Surface that has been instrumented with 138 proximity sensors (sort of like a 1-pixel Microsoft Kinect). These proximity sensors enable the Surface to sense users as they move around the tabletop and to detect a user’s hands and arms above the display area. Not only are these sensors inexpensive and simple to configure, but they also enable an integrated hardware solution that requires no markers, cameras, or other sensing devices external to the display platform itself.
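To give a concrete flavour of the sensing side, here is a minimal sketch of how presence detection from a ring of perimeter proximity sensors might look. This is a hypothetical illustration, not Medusa’s actual tracking algorithm; the sensor count, threshold, and function names are all assumptions.

```python
# Hypothetical sketch: detect users around a tabletop from a ring of
# outward-facing proximity sensors. Each reading is a distance in cm;
# anything closer than PRESENCE_THRESHOLD_CM counts as a detection.
PRESENCE_THRESHOLD_CM = 60

def detect_users(readings):
    """Group adjacent triggered sensors into user detections.

    readings: distances (cm), one per sensor, ordered around the
    table's perimeter. Returns the estimated angular position
    (degrees) of each detected user. (Runs that wrap past sensor 0
    are ignored in this sketch.)
    """
    n = len(readings)
    triggered = [d < PRESENCE_THRESHOLD_CM for d in readings]
    users, i = [], 0
    while i < n:
        if triggered[i]:
            start = i
            while i < n and triggered[i]:
                i += 1
            centre_index = (start + i - 1) / 2  # middle of the run
            users.append(360.0 * centre_index / n)
        else:
            i += 1
    return users

# Example: 16 perimeter sensors, two people at different sides.
readings = [200] * 16
readings[2] = readings[3] = 40   # person spanning sensors 2-3
readings[10] = 35                # person near sensor 10
print(detect_users(readings))    # -> [56.25, 225.0]
```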

As Medusa has an awareness of users’ locations, it can combine that awareness with the touch information provided by the Surface to map touch points to specific users and to disambiguate between touches made with the left or right hand, even in multi-user scenarios.
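And a similarly hedged sketch of the touch-mapping idea: one simple approach is to assign each touch to whichever tracked arm hovers closest to it. Again, the data layout and names here are hypothetical, not the paper’s implementation.

```python
import math

# Hypothetical sketch: assign a touch point to the nearest tracked arm.
# Each arm is (user_id, hand, x, y), where (x, y) is the arm's last
# sensed position above the table and hand is "left" or "right".
def map_touch_to_user(touch, arms):
    tx, ty = touch
    user_id, hand, _, _ = min(
        arms, key=lambda a: math.hypot(a[2] - tx, a[3] - ty)
    )
    return user_id, hand

arms = [
    ("alice", "right", 10.0, 5.0),
    ("alice", "left",  2.0, 6.0),
    ("bob",   "right", 30.0, 20.0),
]
print(map_touch_to_user((28.0, 19.0), arms))  # -> ('bob', 'right')
```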

All of this information opens up countless ways to enhance and augment multi-touch interaction with a horizontal display. In the video below, I (along with Tovi Grossman) demonstrate a few of the techniques that we explored.

Abstract:

We present Medusa, a proximity-aware multi-touch tabletop. Medusa uses 138 inexpensive proximity sensors to: detect a user’s presence and location, determine body and arm locations, distinguish between the right and left arms, and map touch points to specific users and specific hands. Our tracking algorithms and hardware designs are described. Exploring this unique design, we develop and report on a collection of interactions enabled by Medusa in support of multi-user collaborative design, specifically within the context of Proxi-Sketch, a multi-user UI prototyping tool. We discuss design issues, system implementation, limitations, and generalizable concepts throughout the paper.