CHI 2013 Paper Accepted

Back in December I found out that some of the work I did last year at the UofA was accepted for publication and presentation at CHI 2013 in Paris, France. I’m thrilled to talk about this work, as I was really pleased with the results, and of course, a stop in Paris means a stop at Disneyland Paris!

Here is the abstract of the work, as well as the CHI Video Preview, which was a new component of submissions this year.

This work examines intermanual gesture transfer, i.e., learning a gesture with one hand and performing it with the other. Using a traditional retention and transfer paradigm from the motor learning literature, participants learned four gestures on a touchscreen. The study found that touchscreen gestures transfer, and do so symmetrically. Regardless of the hand used during training, gestures were performed with a comparable level of error and speed by the untrained hand, even after 24 hours. In addition, the form of a gesture, i.e., its length or curvature, was found to have no influence on transferability. These results have important implications for the design of stroke-based gestural interfaces: acquisition could occur with either hand and it is possible to interchange the hand used to perform gestures. The work concludes with a discussion of these implications and highlights how they can be applied to gesture learning and current gestural systems.

Also, if you are interested in other applications of motor learning to HCI, check out Fraser’s paper that will also be presented at CHI this year!

Surface and Pen-Based Gesture Research Study

*** Update ***
Fraser and I have recruited all the participants for our studies. We will be running another set of studies in 3 to 4 months, so please check back a little later for more information!
*******

Once again, Fraser Anderson and I are looking for participants for our research studies. We are currently running two user studies. Fraser’s experiment is concerned with how people learn pen-based gestures, and mine is concerned with how people learn gestures on a large touchscreen. We hope that the results will help us understand how generalizable gestures are and how people perform gestures in different contexts, and that they will help guide the design of gesture-based interfaces in the future.

Fraser and I are currently seeking right-handed University of Alberta students who are 18+ to participate. My study has the added caveat that potential participants must *not* be color blind. You can volunteer for one or both of our studies.

The studies take place in the Advanced Man-Machine Interface Lab in the Computing Science Center on the University of Alberta Campus. Your participation will be split over two days. On day 1, you will spend 1 hour learning a number of gestures. On day 2 (approximately 24 hours later), you will return for some follow-up tasks. Participation on day 2 takes less than 30 minutes. At the completion of the experiment, you will receive $15 cash. To be eligible to participate, you must be available for 1 full hour on day one and 30 minutes on day two.

If you are interested in participating, please email hci@ualberta.ca to set up an appointment. Unfortunately, everyone needs to have an appointment set up to participate – we cannot accommodate people who just show up at our lab. Thanks!
** These studies are being conducted by Michelle Annett and Fraser Anderson under the guidance of Dr. Walter Bischof and have been approved by the Research Ethics Board at the University of Alberta. **

Michelle’s Experiment | Fraser’s Experiment

Gesture Learning Research Studies

*** Update ***
Thanks to all those who have participated thus far! Both Fraser and I have recruited all the participants for our studies. We will be running another set of studies in two to three months, so please check back a little later for more information!
*************

Fraser Anderson and I are looking for participants for our two research studies. The experiments are designed to evaluate how people learn pen-based gestures and how people apply touchscreen gestures in different contexts. The results of the experiments will help us understand how generalizable gestures are and how people perform gestures in different contexts, and they will help guide the design of gesture-based interfaces in the future. You can volunteer for one or both of our studies.

We are currently seeking right-handed University of Alberta students who are 18+ to participate. Both studies will take place in the Advanced Man-Machine Interface Lab in the Computing Science Center on the University of Alberta campus. Your participation will be split over two days. On day 1, you will spend 1 hour learning a number of gestures. On day 2 (approximately 24 hours later), you will return for some follow-up tasks. Participation on day 2 takes less than 15 minutes. At the completion of the experiment, you will receive $15 cash.

To be eligible to participate, you must be available for 1 full hour on day one and 15 minutes on day two.

If you are interested in participating, please email hci@ualberta.ca to set up an appointment. Thanks!

** These studies are being conducted by Michelle Annett and Fraser Anderson under the guidance of Dr. Walter Bischof and have been approved by the Research Ethics Board at the University of Alberta. **

Michelle’s Experiment | Fraser’s Experiment

UIST 2011 Publication! – Medusa

So now that UIST 2011 is officially over, I can post about the work that I did at Autodesk Research in January of 2010! I’m super excited to finally get to talk about all my hard work and the awesome project (Medusa) that I got to work on. For now, I just want to share the video and a brief description of what I did (it was 4 months of work after all!). The full paper outlining Medusa (‘Medusa: a proximity-aware multi-touch tabletop’) can be found here.

Quick summary:

So in short, Medusa is a Microsoft Surface that has been instrumented with 138 proximity sensors (sort of like a 1-pixel Microsoft Kinect). These proximity sensors enable the Surface to sense users as they move around the tabletop and to detect a user’s hands and arms above the display area of the Surface. Not only are these sensors inexpensive and simple to configure, but they also enable an integrated hardware solution, without requiring any markers, cameras, or other sensing devices external to the display platform itself.
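To give a rough sense of how a rim of proximity sensors can be turned into user presence and location information, here is a minimal sketch in Python. This is purely illustrative and not Medusa’s actual code; the sensor layout, normalized readings, threshold value, and function names are all my own assumptions.

```python
# Illustrative sketch only: not Medusa's actual code. The sensor layout,
# normalized readings, and threshold below are assumptions for demonstration.

PRESENCE_THRESHOLD = 0.4  # assumed cutoff; a real system would calibrate this

def detect_users(rim_readings):
    """Group adjacent above-threshold rim sensors into per-user 'blobs'.

    rim_readings: one normalized reading per sensor (0 = nothing near,
                  1 = very close), ordered around the table edge.
    Returns a list of (start_index, end_index) sensor ranges, one per person.
    """
    active = [r > PRESENCE_THRESHOLD for r in rim_readings]
    users, i, n = [], 0, len(active)
    while i < n:
        if active[i]:
            start = i
            while i < n and active[i]:
                i += 1
            users.append((start, i - 1))
        else:
            i += 1
    return users

# Example: a 12-sensor rim with two people standing at different sides.
readings = [0.1, 0.7, 0.8, 0.2, 0.1, 0.0, 0.1, 0.6, 0.9, 0.5, 0.1, 0.0]
print(detect_users(readings))  # -> [(1, 2), (7, 9)]
```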

As Medusa is aware of users’ locations, it can, for example, combine the touch information provided by the Surface with the proximity data to map touch points to specific users and to disambiguate between touches made with the left or right hand, even in multi-user scenarios.
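Along the same lines, here is a hedged sketch of how touch points might be mapped to users once arm positions are known. Again, this is not the algorithm from the paper; the shared coordinate space, the data structures, and the assign_touch helper are hypothetical.

```python
import math

# Illustrative sketch only: the paper describes Medusa's real mapping algorithm.
# Here we simply assume the proximity sensors give us, for each tracked arm,
# an (x, y) position in the same coordinate space as the touch points.

def assign_touch(touch, arms):
    """Assign a touch point to the closest tracked arm.

    touch: (x, y) touch coordinate on the display.
    arms:  dict mapping (user_id, hand) -> (x, y) arm position over the display.
    Returns the (user_id, hand) pair whose arm is nearest to the touch.
    """
    return min(arms, key=lambda key: math.dist(touch, arms[key]))

# Example: two users, each with one arm over the table.
arms = {("user_a", "right"): (120.0, 80.0),
        ("user_b", "left"):  (400.0, 310.0)}
print(assign_touch((130.0, 90.0), arms))   # -> ('user_a', 'right')
print(assign_touch((390.0, 300.0), arms))  # -> ('user_b', 'left')
```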

All of this information opens up many ways to enhance and augment multi-touch interaction with a horizontal display. In the video below, I (along with Tovi Grossman) demonstrate a few of the techniques that we explored.

Abstract:

We present Medusa, a proximity-aware multi-touch tabletop. Medusa uses 138 inexpensive proximity sensors to: detect a user’s presence and location, determine body and arm locations, distinguish between the right and left arms, and map touch points to specific users and specific hands. Our tracking algorithms and hardware designs are described. Exploring this unique design, we develop and report on a collection of interactions enabled by Medusa in support of multi-user collaborative design, specifically within the context of Proxi-Sketch, a multi-user UI prototyping tool. We discuss design issues, system implementation, limitations, and generalizable concepts throughout the paper.

Switching gears …

It’s been a long, awesome summer but I am happy to get back in the research groove. Ever since I completed my internship at Autodesk Research, I have been trying to narrow down the focus of my PhD research (because improving client enjoyment and motivation is still a huge area!!). After lots of reading, contemplating, and staring at the ceiling, I began to realize I was really interested in the basics of human movement: how people were moving their arms and hands on our multi-touch tabletop, why they were moving, why they weren’t moving, and how some movements were very similar from activity to activity, while others were completely different.

These questions led me towards thinking about the core principles behind horizontal-based gestures, multi-finger interaction, and multi-touch interfaces, as well as motor learning, skill acquisition, and skill transfer. With all these great questions swirling around in my head, I have decided to switch gears and focus my forthcoming research on a number of the unanswered questions about multi-touch gestures, looking at them through a motor learning-inspired magnifying glass. Until next time!