Automatics Will Be in a Forthcoming Issue of ToCHI!

I’ve been a little radio silent over the last year – very busy with my online craft business jamakvinyl.com. In between filling orders, finding new products, being a one-woman customer service and social media maven, and doing a number of copy-editing jobs, I also have a new publication coming out this year!

Work that I did with Matthew Lakier and Daniel Wigdor at the University of Toronto while I was a post-doc will appear in an issue of ToCHI later this year. Generally speaking, the project focuses on the next generation of instruction systems for physical assembly and fabrication tasks. I can’t say much more at this point (here’s a figure from the forthcoming paper, which, without the caption, doesn’t give too much away!), but I am super happy that the project will be published as a journal article.

[Figure from the forthcoming paper]

GI 2017 Paper: Ivy

Also at GI this year will be another project that I was part of while at Autodesk Research. Barrett Ens, who was interning with Fraser (Anderson) and Tovi Grossman, had a keen interest in VR and 3D user interfaces, and a really interesting idea: to explore the next generation of programming environments, ones based in VR and powered by Internet of Things devices and activities. Thus, Ivy was born! Ivy explored how intelligent, aware environments could be programmed in situ, within VR representations of the target environment (or, possibly in the future, AR representations of it). The project not only presented possible programming constructs and corresponding visualizations that would be useful for programmers of such spaces, but also explored how to integrate and represent real-world data and breakpoints in a manner appropriate for spatially situated environments. Aside from loving the name, I personally really appreciated seeing data flow from sensors to other machines and equipment. The simple act of perceiving data as moving entities really brings the notion of such programming paradigms to life. The paper, Ivy: Exploring Spatially Situated Visual Programming for Authoring and Understanding Intelligent Environments, will be presented by Barrett and was also co-authored by Pourang Irani (University of Manitoba) and George Fitzmaurice (Autodesk Research).

Abstract:

The availability of embedded, digital systems has led to a multitude of interconnected sensors and actuators being distributed among smart objects and built environments. Programming and understanding the behaviors of such systems can be challenging given their inherent spatial nature. To explore how spatial and contextual information can facilitate the authoring of intelligent environments, we introduce Ivy, a spatially situated visual programming tool using immersive virtual reality. Ivy allows users to link smart objects, insert logic constructs, and visualize real-time data flows between real-world sensors and actuators. Initial feedback sessions show that participants of varying skill levels can successfully author and debug programs in example scenarios.
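
To give a concrete flavor of the data-flow paradigm described above, here is a minimal, purely illustrative sketch (in plain Python, not Ivy’s actual API) of the kind of sensor-to-logic-to-actuator rule that a tool like Ivy lets users author and then watch execute; the class names and devices are assumptions invented for this example.

# A hypothetical, text-based analogue of an Ivy-style rule: a motion sensor
# linked through a logic construct to a light. All names here are
# illustrative assumptions, not Ivy's actual API.

class Sensor:
    """Emits readings to whatever nodes are linked downstream."""
    def __init__(self, name):
        self.name = name
        self.links = []

    def link(self, node):
        self.links.append(node)

    def emit(self, value):
        print(f"{self.name} reads {value}")
        for node in self.links:
            node.receive(value)

class Threshold:
    """A simple logic construct: forwards True/False based on a cutoff."""
    def __init__(self, cutoff):
        self.cutoff = cutoff
        self.links = []

    def link(self, node):
        self.links.append(node)

    def receive(self, value):
        result = value > self.cutoff
        for node in self.links:
            node.receive(result)

class Actuator:
    """Acts on the boolean it receives from upstream logic."""
    def __init__(self, name):
        self.name = name

    def receive(self, on):
        print(f"{self.name}: {'ON' if on else 'OFF'}")

# Author the behavior by linking nodes, much as Ivy links smart objects in VR.
motion = Sensor("hallway_motion")
gate = Threshold(cutoff=0.5)
lamp = Actuator("hallway_lamp")
motion.link(gate)
gate.link(lamp)

# Simulate real-time readings flowing through the program.
for reading in [0.1, 0.7, 0.3]:
    motion.emit(reading)

Ivy’s contribution, of course, is letting users express and debug exactly these kinds of links spatially, overlaid on a VR representation of the environment that houses the real sensors and actuators, with the data flows themselves rendered as moving entities.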

GI 2017 Paper: No-Handed Interaction

This past year I had the pleasure of working with Seongkook Heo, while he was an intern at Autodesk Research, on quite a cool input and interaction techniques project. The project focused on analyzing and understanding the situational factors that can constrain our opportunities for input with smartwatches, and then used this knowledge (and the resulting taxonomy) to ideate on ways that we can utilize other body parts or actions to re-enable such input. From 3D printing fake hands to reading through Mechanical Turk participants’ comments about Seongkook kneading dough, the project was a very interesting exploration of input opportunities and ended up being a lot of fun. The paper, No Need to Stop What You’re Doing: Exploring No-Handed Smartwatch Interaction, was also co-authored by Ben Lafreniere, Tovi Grossman, and George Fitzmaurice from Autodesk Research, and will be presented at GI in May.

Abstract:

Smartwatches have the potential to enable quick micro-interactions throughout daily life. However, because they require both hands to operate, their full potential is constrained, particularly in situations where the user is actively performing a task with their hands. We investigate the space of no-handed interaction with smartwatches in scenarios where one or both hands are not free. Specifically, we present a taxonomy of scenarios in which standard touchscreen interaction with smartwatches is not possible, and discuss the key constraints that limit such interaction. We then implement a set of interaction techniques and evaluate them via two user studies: one where participants viewed video clips of the techniques and another where participants used the techniques in simulated hand-constrained scenarios. Our results reveal a preference for foot-based interaction, as well as novel design considerations to be mindful of when designing for no-handed smartwatch interaction scenarios.

alt. CHI 2017 Paper: Machines as Co-Designers

This year I was fortunate enough to collaborate with Jeeeun Kim and Tom Yeh (from the University of Colorado) and Haruki Takahashi and Homei Miyashita (from Meiji University) on a rather interesting alt. CHI paper. The work, entitled “Machines as Co-Designers: A Fiction on the Future of Human-Fabrication Machine Interaction”, draws attention to the ways in which current fabrication practices do not facilitate the serendipitous, in-situ creative discoveries that occur during traditional craft practices. For me, this project and the accompanying alt. CHI review process were very illuminating (I highly recommend the experience to anyone who has not yet submitted an alt. CHI paper and felt the nervousness that comes from reading the community’s reviews of their work every day – it’s a great learning experience). The full paper will be presented at CHI 2017, and I will link to it after it has been published. Until then, here is the abstract!

While current fabrication technologies have led to a wealth of techniques to create physical artifacts of virtual designs, they require unidirectional and constraining interaction workflows. Instead of acting as intelligent agents that support humans’ natural tendencies to iteratively refine ideas and experiment, today’s fabrication machines function as output devices. In this work, we argue that fabrication machines and tools should be thought of as live collaborators that aid in-situ creativity, adapting to the physical dynamics that come from unique materiality and/or machine-specific parameters. Through a series of design narratives, we explore Human-FabMachine Interaction (HFI), a novel viewpoint from which to reflect on the importance of (i) interleaved design thinking and refinement during fabrication, (ii) enriched methods of interaction with fabrication machines regardless of skill level, and (iii) concurrent human and machine interaction.

(IEEE) CGA Article – (Digitally) Inking in the 21st Century

Last year, I was approached by Jim Foley to transform my dissertation on the challenges facing pen computing into an article for the IEEE Computer Graphics and Applications magazine. This was a very interesting experience, especially when it came time to distill an entire thesis down into a few pages! Disseminating my work in a venue that doesn’t commonly focus on HCI or pen computing was a very good exercise, as it made me reflect on why my work mattered to the body of research knowledge as a whole, and on the importance of articulating ideas clearly and concisely when writing.

The article is available here.

Abstract:

The ubiquity and mobility of contemporary computing devices have enabled users to consume content anytime, anywhere. Yet, when we need to create content, touch input is far from perfect. When coupled with touch input, the stylus should enable users to simultaneously ink, manipulate the page, and switch between tools with ease, so why has the stylus yet to achieve universal adoption? The author’s thesis sought to understand the usability barriers and tensions that have prevented stylus input from gaining traction and reaching widespread adoption. This article in particular explores the limits of human latency perception and evaluates solutions to unintended touch.