Back from Microsoft Research

So it has been a very busy year! I recently finished my time at Microsoft Research in Redmond, went on trips to Disneyland, Paris and Scotland / Ireland, helped with and attended my sister’s wedding, got engaged (!), passed my candidacy exam, and did a ton of research and paper writing! I’m now back in Edmonton and eager to finish up my thesis!

Sadly, due to the new logo, this beauty is no more 🙁

While in the US I discovered electronic paper cutters such as the Silhouette Cameo. Needless to say, I now have a new hobby.

Cliffs of Moher, Ireland

Chihuly Glass Art, Washington

We’re Engaged (happened in Disneyland Paris)!!

Inside the Natural History Museum in Dublin, Ireland.

Three Thumb Slider Example

A few days ago I was looking for a WPF slider with three thumbs for a project I was working on. Each of the three thumbs needed to be customizable individually, and the central thumb could never exceed the values or locations of the left or right thumbs (think of a video timeline with modifiable start and end points that the center thumb can never pass). After searching for quite a while, all I could find was a slider with two thumbs, so I modified it and decided to post the modifications here to help out others who may need the same thing.


To break the code down, there are three main sections: the slider component itself, the event handling of the three thumbs, and the invocation of the slider.

Slider Component

In the User Control xaml (ThreeThumbSlider.xaml), I added an additional slider thumb using the code below. This allows three thumbs to be rendered on the slider.

<Slider x:Name="MiddleSlider" Minimum="{Binding ElementName=root, Path=Minimum}" Maximum="{Binding ElementName=root, Path=Maximum}" Value="{Binding ElementName=root, Path=MiddleValue, Mode=TwoWay}" Template="{StaticResource simpleSlider}" Margin="0,0,0,0"/>

In addition, data binding and a dynamic resource allow the color of each thumb to be modified. In the UserControl.Resources Track ControlTemplate I added the following:

<Rectangle Fill="{DynamicResource ResourceKey=Clr}" Stroke="Black" StrokeThickness="1" Width="10" Height="18" SnapsToDevicePixels="True"/>

The following resource key was added within each of the sliders:

<Slider x:Name="MiddleSlider" Minimum="{Binding ElementName=root, Path=Minimum}" Maximum="{Binding ElementName=root, Path=Maximum}" Value="{Binding ElementName=root, Path=MiddleValue, Mode=TwoWay}" Template="{StaticResource simpleSlider}" Margin="0,0,0,0">
    <Slider.Resources>
        <Brush x:Key="Clr">Green</Brush>
    </Slider.Resources>
</Slider>

Event Handling of the Three Thumbs

The event handling of the three thumbs is a bit tricky because you need to make sure that the middle slider does not exceed the values or locations of the upper and lower sliders. If it does, the interaction can get a bit confusing. What I ended up doing was making three additional event handlers and checking for instances where the middle value exceeded the upper or lower thumb values using Math.Max() and Math.Min(). When this occurred, I reset the middle thumb value or the upper/lower values.

public event EventHandler LowerValueChanged;
public event EventHandler MiddleValueChanged;
public event EventHandler UpperValueChanged;
....
void Slider_Loaded(object sender, RoutedEventArgs e)
{
    LowerSlider.ValueChanged += LowerSlider_ValueChanged;
    UpperSlider.ValueChanged += UpperSlider_ValueChanged;
    MiddleSlider.ValueChanged += MiddleSlider_ValueChanged;
}

public void LowerSlider_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
{
    // Push the middle thumb up so it never falls below the lower thumb.
    MiddleSlider.Value = Math.Max(MiddleSlider.Value, LowerSlider.Value);
    if (LowerValueChanged != null)
        LowerValueChanged(this, e);
}

public void UpperSlider_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
{
    // Pull the middle thumb down so it never exceeds the upper thumb.
    MiddleSlider.Value = Math.Min(MiddleSlider.Value, UpperSlider.Value);
    if (UpperValueChanged != null)
        UpperValueChanged(this, e);
}

public void MiddleSlider_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
{
    // Drag the outer thumbs along if the middle thumb passes them.
    LowerSlider.Value = Math.Min(MiddleSlider.Value, LowerSlider.Value);
    UpperSlider.Value = Math.Max(UpperSlider.Value, MiddleSlider.Value);
    if (MiddleValueChanged != null)
        MiddleValueChanged(this, e);
}
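To see the clamping behavior in isolation, here is a minimal, framework-free sketch of the same logic (written in Python rather than C#, with a hypothetical class name, purely to illustrate how the three handlers keep the middle value between the outer two):

```python
class ThreeThumbModel:
    """Hypothetical stand-in for the slider: three values with the same
    clamping rules as the C# event handlers above."""

    def __init__(self, lower, middle, upper):
        self.lower, self.middle, self.upper = lower, middle, upper

    def set_lower(self, value):
        self.lower = value
        # Push the middle value up so it never falls below the lower value.
        self.middle = max(self.middle, self.lower)

    def set_upper(self, value):
        self.upper = value
        # Pull the middle value down so it never exceeds the upper value.
        self.middle = min(self.middle, self.upper)

    def set_middle(self, value):
        self.middle = value
        # Drag the outer values along if the middle value passes them.
        self.lower = min(self.middle, self.lower)
        self.upper = max(self.upper, self.middle)
```

For example, starting from (30, 40, 70), moving the lower thumb to 50 pushes the middle thumb to 50 as well, and moving the middle thumb to 60 drags the upper thumb along with it.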
To specify the thumb value for the middle slider, an additional property and DependencyProperty were also added:

public double MiddleValue
{
    get { return (double)GetValue(MiddleValueProperty); }
    set { SetValue(MiddleValueProperty, value); }
}

public static readonly DependencyProperty MiddleValueProperty = DependencyProperty.Register("MiddleValue", typeof(double), typeof(ThreeThumbSlider), new UIPropertyMetadata(0d));

Invocation of the Slider

The following line can be added to any .xaml file (in my case the MainWindow.xaml) and allows for the specification of the lower, upper, and middle values of the thumbs as well as the handling of each thumb’s events:

<local:ThreeThumbSlider x:Name="slider" LowerValue="30" UpperValue="70" MiddleValue="40" Minimum="0" Maximum="100" LowerValueChanged="valueChange" UpperValueChanged="valueChange" MiddleValueChanged="valueChange" Canvas.Left="81.955" Canvas.Top="34.811" Width="333.534"/>

Within the code-behind (MainWindow.xaml.cs), one event handler receives the changes to the thumbs and updates the lower, middle, and upper labels using the code below. All three events are currently tied to a single handler, but they could easily be changed to individual handlers or other functionality.

public void valueChange(object sender, EventArgs e)
{
    lowerLabel.Text = slider.LowerValue.ToString();
    upperLabel.Text = slider.UpperValue.ToString();
    middleLabel.Text = slider.MiddleValue.ToString();
}

CHI 2013 Paper Accepted

Back in December I found out that some of the work I did last year at the UofA was accepted for publication and presentation at CHI 2013 in Paris, France. I’m super excited to talk about this work, as the results were really interesting, and of course, a stop in Paris means a stop at Disneyland Paris!

Here is the abstract of the work, as well as the CHI Video Preview which was a new component for submissions this year.

This work examines intermanual gesture transfer, i.e., learning a gesture with one hand and performing it with the other. Using a traditional retention and transfer paradigm from the motor learning literature, participants learned four gestures on a touchscreen. The study found that touchscreen gestures transfer, and do so symmetrically. Regardless of the hand used during training, gestures were performed with a comparable level of error and speed by the untrained hand, even after 24 hours. In addition, the form of a gesture, i.e., its length or curvature, was found to have no influence on transferability. These results have important implications for the design of stroke-based gestural interfaces: acquisition could occur with either hand and it is possible to interchange the hand used to perform gestures. The work concludes with a discussion of these implications and highlights how they can be applied to gesture learning and current gestural systems.

Also, if you are interested in other applications of motor learning to HCI, check out Fraser’s paper that will also be presented at CHI this year!

Surface and Pen-Based Gesture Research Study

*** Update ***
Fraser and I have recruited all the participants for our studies. We will be running another set of studies in 3 to 4 months, so please check back a little bit later for more information!
*******

Once again, Fraser Anderson and I are looking for participants for our research studies. We are currently running two user studies. Fraser’s experiment is concerned with how people learn pen-based gestures, and mine is concerned with how people learn gestures on a large touchscreen. We hope that the results will help us understand how generalizable gestures are and how people perform gestures in different contexts, and will help guide the design of gesture-based interfaces in the future.

Fraser and I are currently seeking right-handed University of Alberta students who are 18+ to participate. My study has the added caveat that potential participants must *not* be color blind. You can volunteer for one or both of our studies.

The studies take place in the Advanced Man-Machine Interface Lab in the Computing Science Center on the University of Alberta Campus. Your participation will be split over two days. On day 1, you will spend 1 hour learning a number of gestures. On day 2 (approximately 24 hours later), you will return for some follow-up tasks. Participation on day 2 takes less than 30 minutes. At the completion of the experiment, you will receive $15 cash. To be eligible to participate, you must be available for 1 full hour on day one and 30 minutes on day two.

If you are interested in participating, please email hci@ualberta.ca to set up an appointment. Unfortunately, everyone needs to have an appointment set up to participate – we cannot accommodate people who just show up at our lab. Thanks!
** These studies are being conducted by Michelle Annett and Fraser Anderson under the guidance of Dr. Walter Bischof and have been approved by the Research Ethics Board at the University of Alberta. **

Michelle’s Experiment Fraser’s Experiment

Gesture Learning Research Studies

*** Update ***
Thanks to all those who have participated thus far! Both Fraser and I have recruited all the participants for our studies. We will be running another set of studies in two to three months, so please check back a little bit later for more information!
*************

Fraser Anderson and I are looking for participants for our two research studies. Each experiment is designed to evaluate how people learn pen-based gestures and apply touchscreen gestures in different contexts. The results of the experiments will help us understand how generalizable gestures are and how people perform gestures in different contexts, and will help guide the design of gesture-based interfaces in the future. You can volunteer for one or both of our studies.

We are currently seeking right-handed University of Alberta students who are 18+ to participate. Both studies will take place in the Advanced Man-Machine Interface Lab in the Computing Science Center on the University of Alberta Campus. Your participation will be split over two days. On day 1, you will spend 1 hour learning a number of gestures. On day 2 (approximately 24 hours later), you will return for some follow-up tasks. Participation on day 2 takes less than 15 minutes. At the completion of the experiment, you will receive $15 cash.

To be eligible to participate, you must be available for 1 full hour on day one and 15 minutes on day two.

If you are interested in participating, please email hci@ualberta.ca to set up an appointment. Thanks!

** These studies are being conducted by Michelle Annett and Fraser Anderson under the guidance of Dr. Walter Bischof and have been approved by the Research Ethics Board at the University of Alberta. **

Michelle’s Experiment Fraser’s Experiment