A blast from the past: Resources for Story Structure, Dramatic Elements, and Student-Centered Learning

Back in 2007, I was fortunate enough to be part of the Canadian Distributed Mentor Project (CDMP), which is now known as the Collaborative Research Experiences for Undergraduates – Canada (CREUC). The CDMP is a program that encourages undergraduate women in Computer Science and Computer Engineering to go to graduate school. It matches female students who have finished their 2nd or 3rd year of undergraduate studies with female professors for a summer of research and mentoring. Participating in the program was a really great experience for me, as I got to work with Dr. Eleni Stroulia on wiEGO, a Java-based applet that interacts with Moodle, an open source content management system, and a wiki to assist junior high students with their group projects. wiEGO supports the interplay of linguistic and spatial-visual intelligences held by the collaborating learners through the use of a visualization toolkit. A report that I wrote up about the project is available here.

A few days ago, I was contacted by Audrey Plasse, a teacher in Vermont with the Green Mountain Central School District, whose students had stumbled upon a list of resources that I used during the project. They found a new link about Freytag's Pyramid that they (and I) find to be very informative and thought that it should be added to my resources page. I wish I had found it when I was working on the project! Although I no longer have access to the original webpage, I wanted to repost the resources page here so that others could find it and so that I can easily add to it.
Story Structure & Dramatic Elements:

Collaboration, Student-Centered Learning & Wiki Links:

UIST 2011 Publication! – Medusa

So now that UIST 2011 is officially over, I can post about the work that I did at Autodesk Research in January of 2010! I’m super excited to finally get to talk about all my hard work and the awesome project (Medusa) that I got to work on. For now, I just want to share the video and a brief description of what I did (it was 4 months of work after all!). The full paper outlining Medusa (‘Medusa: a proximity-aware multi-touch tabletop’) can be found here.

Quick summary:

So in short, Medusa is a Microsoft Surface that has been instrumented with 138 proximity sensors (sort of like a 1-pixel Microsoft Kinect). These proximity sensors enable the Surface to sense users as they move around the tabletop, and to detect a user's hands and arms above the display area of the Surface. Not only are these sensors inexpensive and simple to configure, but they also enable an integrated hardware solution, one that requires no markers, cameras, or other sensing devices external to the display platform itself.

Because Medusa is aware of users' locations, it can combine that awareness with the touch information provided by the Surface to map each touch point to a specific user, and to disambiguate between touches made with the left and right hands, even in multi-user scenarios.
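To give a rough flavor of the touch-to-user mapping idea, here is a toy sketch (this is not the actual Medusa code, and the function name, data shapes, and coordinates are all made up for illustration). It assumes the proximity sensors give an approximate (x, y) position for each tracked hand hovering above the table, and that the Surface reports raw touch coordinates; each touch is then simply assigned to the nearest tracked hand:

```python
import math

def map_touches_to_users(touches, hands):
    """Assign each touch point to the nearest tracked hand.

    touches: list of (x, y) touch coordinates from the tabletop.
    hands: dict mapping (user, side) -> (x, y) hand position estimated
           from the proximity sensors, where side is 'left' or 'right'.
    Returns a list of (touch, user, side) assignments.
    """
    assignments = []
    for tx, ty in touches:
        # Pick the hand with the smallest Euclidean distance to the touch.
        (user, side), _ = min(
            hands.items(),
            key=lambda kv: math.hypot(kv[1][0] - tx, kv[1][1] - ty),
        )
        assignments.append(((tx, ty), user, side))
    return assignments

# Two users with hands hovering at known positions:
hands = {
    ("alice", "right"): (10.0, 5.0),
    ("bob", "left"): (80.0, 40.0),
}
touches = [(12.0, 6.0), (78.0, 42.0)]
print(map_touches_to_users(touches, hands))
# → [((12.0, 6.0), 'alice', 'right'), ((78.0, 42.0), 'bob', 'left')]
```

The real system obviously has to do much more than nearest-neighbor matching (arms cross, sensors are noisy), but this captures the basic intuition of fusing above-surface tracking with on-surface touch events.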

Using all of this information, there are countless ways that multi-touch interaction with a horizontal display can be enhanced and augmented. In the video below, I, along with Tovi Grossman, demonstrate a few of the techniques that we explored.

Abstract:

We present Medusa, a proximity-aware multi-touch tabletop. Medusa uses 138 inexpensive proximity sensors to: detect a user’s presence and location, determine body and arm locations, distinguish between the right and left arms, and map touch points to specific users and specific hands. Our tracking algorithms and hardware designs are described. Exploring this unique design, we develop and report on a collection of interactions enabled by Medusa in support of multi-user collaborative design, specifically within the context of Proxi-Sketch, a multi-user UI prototyping tool. We discuss design issues, system implementation, limitations, and generalizable concepts throughout the paper.

Please pardon the construction!

I’m gonna start off by saying that I love the new Blogger Dynamic View templates! With next to zero work on my part, my blog can now be viewed/rendered in seven really cool ways (you are currently looking at the ‘Magazine View’). If you don’t really dig the magazine view and want to see something else, you can use the drop-down box in the upper left corner to pick a different view (it all feels so much like Flipboard – which I really think should have a Mac app!!).


Needless to say, with the new template(s) there are things that will disappear, reappear, and need some work (e.g., all my posts need to have pictures now so that they are super exciting and there are cool things to look at in the different views). The next week or so will probably be spent updating old posts (to look cooler), figuring out how to make my static pages not look so dull, and writing up a new paragraph or two on my hobby, Vinylmations!

Switching gears …

It’s been a long, awesome summer but I am happy to get back in the research groove. Ever since I completed my internship at Autodesk Research, I have been trying to narrow down the focus of my PhD research (because improving client enjoyment and motivation is still a huge area!!). After lots of reading, contemplating, and staring at the ceiling, I began to realize I was really interested in the basics of human movement: how people were moving their arms and hands on our multi-touch tabletop, why they were moving, why they weren’t moving, and how some movements were very similar from activity to activity, while others were completely different.

These questions led me toward thinking about the core principles behind horizontal-based gestures, multi-finger interaction, and multi-touch interfaces, as well as motor learning, skill acquisition, and skill transfer. With all these great questions swirling around in my head, I have decided to switch gears and focus my forthcoming research on a number of the unanswered questions that exist with multi-touch gestures and look at them through a motor learning-inspired magnifying glass. Until next time!

More multi-touch in the news!

I am super excited that the tabletop I built for the Glenrose was featured in an article in the Wall Street Journal today (as well as a few other places)! It’s great to not only see the impact that the table has had on the Glenrose, but also see all the great research into technology-based rehabilitation interventions that is being conducted at other institutions. I hope that all this publicity will encourage more HCI researchers to think about entering into this awesome field and providing clients in therapy programs with some exciting, engaging technology to work with!

  • “Playing on a tablet as therapy”. Wall Street Journal, July 2011.
  • “Smartphones, Tablets Provide Therapy for Autism, Other Disabilities”. Mobiledia.com, July 2011. Also appeared on Forbes.com, July 2011.