Surface and Pen-Based Gesture Research Study

*** Update ***
Fraser and I have recruited all the participants for our studies. We will be running another set of studies in 3 to 4 months, so please check back a little later for more information!
*******

Once again, Fraser Anderson and I are looking for participants for our research studies. We are currently running two user studies. Fraser’s experiment is concerned with how people learn pen-based gestures, and mine is concerned with how people learn gestures on a large touchscreen. We hope that the results will help us understand how generalizable gestures are and how people perform gestures in different contexts, and will help guide the design of gesture-based interfaces in the future.

Fraser and I are currently seeking right-handed University of Alberta students who are 18+ to participate. My study has the added requirement that potential participants must *not* be color blind. You can volunteer for one or both of our studies.

The studies take place in the Advanced Man-Machine Interface Lab in the Computing Science Center on the University of Alberta campus. Your participation will be split over two days. On day 1, you will spend 1 hour learning a number of gestures. On day 2 (approximately 24 hours later), you will return for some follow-up tasks. Participation on day 2 takes less than 30 minutes. At the completion of the experiment, you will receive $15 cash. To be eligible to participate, you must be available for 1 full hour on day 1 and 30 minutes on day 2.

If you are interested in participating, please email hci@ualberta.ca to set up an appointment. Unfortunately, everyone needs to have an appointment set up to participate – we cannot accommodate people who just show up at our lab. Thanks!
** These studies are being conducted by Michelle Annett and Fraser Anderson under the guidance of Dr. Walter Bischof and have been approved by the Research Ethics Board at the University of Alberta. **

Michelle’s Experiment
Fraser’s Experiment

UIST 2011 Publication! – Medusa

So now that UIST 2011 is officially over, I can post about the work that I did at Autodesk Research beginning in January 2010! I’m super excited to finally get to talk about all my hard work and the awesome project (Medusa) that I got to work on. For now, I just want to share the video and a brief description of what I did (it was 4 months of work after all!). The full paper outlining Medusa (‘Medusa: a proximity-aware multi-touch tabletop’) can be found here.

Quick summary:

So in short, Medusa is a Microsoft Surface that has been instrumented with 138 proximity sensors (sort of like a 1-pixel Microsoft Kinect). These proximity sensors enable the Surface to sense users as they move around the tabletop and to detect a user’s hands and arms above the display area of the Surface. Not only are these sensors inexpensive and simple to configure, but they also enable an integrated hardware solution, without requiring any markers, cameras, or other sensing devices external to the display platform itself.
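To make the perimeter sensing idea concrete, here is a minimal sketch of how readings from a ring of outward-facing proximity sensors might be grouped into per-user presence estimates. This is not Medusa’s actual implementation; the sensor count, threshold, and function names are assumptions for illustration.

```python
# Hypothetical layout: readings from a ring of proximity sensors mounted
# around the tabletop's perimeter, indexed in order around the edge.
# Each reading is a distance in centimetres (smaller = something is closer).
PERIMETER_SENSOR_COUNT = 46          # assumed count for the outer ring
PRESENCE_THRESHOLD_CM = 60.0         # assumed "a person is standing here" cutoff

def detect_users(perimeter_readings):
    """Group runs of adjacent triggered sensors into per-user presence blobs.

    Returns a list of (start_index, end_index) sensor spans, one per user.
    """
    triggered = [r < PRESENCE_THRESHOLD_CM for r in perimeter_readings]
    users, run_start = [], None
    for i, hit in enumerate(triggered):
        if hit and run_start is None:
            run_start = i                      # a new contiguous blob begins
        elif not hit and run_start is not None:
            users.append((run_start, i - 1))   # blob ended on the previous sensor
            run_start = None
    if run_start is not None:                  # blob runs to the end of the ring
        users.append((run_start, len(triggered) - 1))
    return users

# Example: two people standing on different sides of the table.
readings = [200.0] * PERIMETER_SENSOR_COUNT
for i in range(5, 9):
    readings[i] = 40.0
for i in range(20, 23):
    readings[i] = 35.0
print(detect_users(readings))   # -> [(5, 8), (20, 22)]
```

A real system would also handle blobs that wrap around the ring boundary and smooth readings over time so that momentary sensor noise does not look like a person arriving or leaving.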

Because Medusa is aware of users’ locations, it can combine that awareness with the touch information provided by the Surface to map each touch point to a specific user and to identify which hand (left or right) made it, even in multi-user scenarios.
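As a rough illustration of that touch-mapping step, the sketch below assigns a touch point to whichever tracked arm is closest. The data layout and names are hypothetical stand-ins for whatever the proximity-sensor tracking produces; this is not Medusa’s actual code.

```python
import math

def assign_touch(touch_xy, arm_positions):
    """Assign a touch point to the closest tracked arm.

    touch_xy:      (x, y) of the touch in display coordinates.
    arm_positions: dict mapping (user_id, 'left'|'right') -> (x, y), the
                   estimated position of that arm over the display
                   (hypothetical output of the proximity-sensor tracking).
    Returns the (user_id, hand) pair for the nearest arm.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(arm_positions, key=lambda key: dist(touch_xy, arm_positions[key]))

# Example: two users, each with one arm over the table.
arms = {
    (1, "right"): (0.30, 0.70),
    (2, "left"):  (0.85, 0.20),
}
print(assign_touch((0.80, 0.25), arms))   # -> (2, 'left')
```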

All of this information opens up countless ways to enhance and augment multi-touch interaction with a horizontal display. In the video below, I (along with Tovi Grossman) demonstrate a few of the techniques that we explored.

Abstract:

We present Medusa, a proximity-aware multi-touch tabletop. Medusa uses 138 inexpensive proximity sensors to: detect a user’s presence and location, determine body and arm locations, distinguish between the right and left arms, and map touch points to specific users and specific hands. Our tracking algorithms and hardware designs are described. Exploring this unique design, we develop and report on a collection of interactions enabled by Medusa in support of multi-user collaborative design, specifically within the context of Proxi-Sketch, a multi-user UI prototyping tool. We discuss design issues, system implementation, limitations, and generalizable concepts throughout the paper.

We need participants for our user study!! [Updated]

Update: Due to the overwhelming number of responses we have received, we are no longer looking for participants! Thank you very much to all of those who volunteered for our study!

Dr. Walter Bischof, Fraser Anderson, and I have just started running a user study and are looking for people in the Edmonton area to participate! In our study, you will get to wear a super nifty motion capture jacket, have your muscular activity recorded by EMG electrodes, and use one of the AIRTouch multi-touch tabletops that Fraser and I have built. We are using all of these technologies to compare different rehabilitation activities: those performed on a multi-touch tabletop and those performed on a traditional table. The information collected during our study will be used to evaluate the potential of multi-touch tabletops in rehabilitation programs.

We are looking for people who are 18 years of age or older who weigh less than 220 lbs (due to the sizes of motion capture jackets that we have). The study will take place in the AMMI Lab (Department of Computing Science, University of Alberta) and will take approximately one hour. Participants will receive $15 CAD for participating in the study and an additional $5 CAD if all tasks in the study are completed. The study is running from July 18th, 2011 to August 5th, 2011.

For more information and/or to schedule a convenient time to participate, please contact our research assistant, Gauri Chaggar, at chaggar1@ualberta.ca.

New CyberPsychology and CyberTherapy Papers

I just found out that two papers that Fraser Anderson and I wrote were accepted at CyberTherapy and CyberPsychology 15, to be held in Seoul, Korea in June (during the World Cup of Soccer)! I am very happy to be going back to CT & CP this year, because I had a great time last year when it was held in Italy. As soon as I am able, I will post pictures, videos, and our accepted papers!

AIRWall

Inspired by the success of our AIRTouch project, Fraser and I wanted to adapt the multi-touch activities that I created for the AIRTouch system to a much larger surface. We used our large rear-projected “Disney” screen, a Wiimote, an off-the-shelf NEC projector, an inexpensive Bluetooth dongle (Deal Extreme is awesome), and a custom IR ‘light pen’ to make our AIRWall system. We initially used Johnny Chung Lee’s Wiimote software (he is the nicest guy) to handle the tracking of the IR ‘light pen’, and later built our own.
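For anyone curious how this style of tracking works under the hood, here is a minimal sketch in the spirit of the Wiimote-whiteboard approach (not our actual AIRWall code): four calibration points define a perspective (homography) transform from the Wiimote’s IR camera coordinates to screen coordinates, which is then applied to every pen position the camera reports. All coordinates, numbers, and names here are illustrative assumptions.

```python
import numpy as np

def solve_homography(camera_pts, screen_pts):
    """Solve the 3x3 perspective transform mapping camera points to screen points.

    camera_pts, screen_pts: four corresponding (x, y) pairs gathered during
    calibration (the pen is clicked on four known on-screen targets while the
    Wiimote's IR camera reports where it saw each flash).
    """
    A, b = [], []
    for (x, y), (u, v) in zip(camera_pts, screen_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def to_screen(H, camera_xy):
    """Map a raw IR dot position from the Wiimote camera into screen pixels."""
    p = H @ np.array([camera_xy[0], camera_xy[1], 1.0])
    return p[0] / p[2], p[1] / p[2]

# Example calibration: pen flashes seen at these (made-up) camera coordinates
# while the user touched the four corners of a 1280x800 projected image.
H = solve_homography(
    camera_pts=[(120, 90), (900, 110), (880, 680), (140, 700)],
    screen_pts=[(0, 0), (1280, 0), (1280, 800), (0, 800)],
)
print(to_screen(H, (500, 400)))   # roughly the middle of the screen
```

Once the transform is computed, every IR dot the Wiimote reports can be turned into an on-screen cursor position, which is why a single cheap camera plus a battery-powered IR pen is enough to make a wall-sized projection behave like a touch surface.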

Fraser’s IR Light Pen

A few of the activities that we tested on the AIRWall can be seen here: