The SNaP Framework – A Virtual Reality Tool for Spatial Navigation

Continuing the work I did for my independent study course with Dr. Walter F. Bischof, my thesis project required me to create a software framework (the SNaP Framework – Spatial Navigation Framework) that would enable psychologists to easily create, control, and deploy virtual reality-based spatial navigation experiments. My main goal was to eliminate the hardware and usability issues inherent in current VR systems and to create a flexible system that is easy for novices to use. I also had a number of research goals that I wanted to investigate and achieve:

  • Eliminate issues in using VR for spatial navigation research
  • Simplify input and output peripheral usage
  • Enable novice specification and deployment
  • Decrease design and implementation time
  • Reduce the volume of incomparable results
  • Create environments with similar appearances
  • Include universal interaction metaphors and behavioral recording techniques

The software framework contains a number of components (Chapters 1 and 4 of my thesis):

  • Parameter File
    User-created XML file that specifies the spatial navigation experiment to be performed
  • VR Configuration Creator
    Python module that transforms each trial specified in a parameter file into a configuration file; determines if a block of trials needs to be run again due to poor performance
  • Configuration Files
    XML-based file that specifies the environmental and protocol configurations for a given experimental trial
  • VR Launcher
    Python module that determines which deployment contexts and input devices are desired
  • VRPN Server
    Open source virtual reality server/client that transforms input devices into generic device types; the data streaming from the server is used to control participant movement
  • Virtools VR Player
    Virtools-provided component that renders virtual environments
  • Virtools Composition Files
    Virtools file that contains a paradigm’s virtual world; contains custom scripts, 3D models, and universal modules to generate and render a paradigm’s virtual implementation
  • Result Files
    Capture a participant’s performance (i.e., path traveled, camera frustum images, or overall experiment results)
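To make the pipeline above concrete, here is a minimal sketch of the VR Configuration Creator's core step: expanding the trials specified in a parameter file into per-trial configuration documents. The element and attribute names are illustrative assumptions, not the actual SNaP schemas.

```python
# Hypothetical sketch of the VR Configuration Creator's core step:
# expanding a parameter file's trial blocks into per-trial configuration
# documents. Element/attribute names are assumptions, not the SNaP schema.
import xml.etree.ElementTree as ET

PARAMETER_XML = """
<experiment name="water_maze">
  <block repetitions="2">
    <trial environment="pool" goal="hidden_platform"/>
    <trial environment="pool" goal="visible_platform"/>
  </block>
</experiment>
"""

def expand_trials(parameter_xml):
    """Turn each <trial> in each <block> into one configuration document."""
    root = ET.fromstring(parameter_xml)
    configs = []
    for block in root.findall("block"):
        reps = int(block.get("repetitions", "1"))
        for _ in range(reps):
            for trial in block.findall("trial"):
                config = ET.Element("configuration")
                config.set("experiment", root.get("name"))
                config.set("environment", trial.get("environment"))
                config.set("goal", trial.get("goal"))
                configs.append(ET.tostring(config, encoding="unicode"))
    return configs

configs = expand_trials(PARAMETER_XML)
# 2 repetitions x 2 trials = 4 configuration documents
```

The real Creator also decides whether a block must be re-run due to poor performance; that logic would sit between expansion and deployment.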

There are a number of ways that different groups of users can use the SNaP Framework in their research. Novices can use the 3D Layout window to add, relocate, or retexture 3D elements, quickly specify desired input and output peripherals, and easily deploy experiments using the provided batch scripts. Expert users can extend the XML parameter and configuration file schemas, design and integrate new 3D models, modify or add new scripting or C++ SDK code, introduce new metrics, modify the VR Configuration Creator to handle new parameter and configuration files, establish new goal-monitoring algorithms, navigation metaphors, or aids, and implement new paradigms using the template environments. (Chapter 5 of thesis)

To test the efficacy of the SNaP Framework, I implemented a number of popular spatial navigation paradigms using the framework. (Chapter 3 of thesis) I also performed a pilot study using a small sample of individuals; because that study was completed after my thesis, its results appear in my publications but not in the thesis itself.

Complex Maze


Bucket World
  

Scatter Hoarding Task

Cheng Task

Virtual Morris Water Maze

Invenio – Tracking Music Trends Using Web Services

The Invenio project was part of the course requirements for my CMPUT 660 – Web Services course. When I took the course (Winter 2008), I was reading lots of celebrity gossip websites (such as Perez Hilton) every day and was listening to a ton of top 40 hits in Winamp. It was around this time that Perez Hilton started posting information about new, unsigned (or little-known) artists in the USA and Europe. About once a week I would find out about some new artist that I had never heard of, or who had not yet gotten ‘big’ in Canada or Alberta. This got me thinking about how music starts to spread across North America and why some artists are popular everywhere while others remain entirely regional.

For my final course project, one of our requirements was to incorporate geographic information into a web service. Coupling this requirement with my thoughts on music trends, I decided to build a web service that would geovisualize radio station music. (At the time, XM and Sirius weren’t as big as they are today, and this project predated last.fm’s big analytics software and algorithms by a few months.) Over a period of 6 weeks (the last week of January until the second week of March), I collected and organized the music chart information for 190 radio stations that are registered with Nielsen SoundScan/Billboard Magazine.

I then took this information and created a data-intensive, REST-based RIA entitled Invenio. Invenio combines a variety of different technologies (Yahoo! Maps, Amazon Associates Web Service, REST, and the Adobe Flex framework) to geographically visualize aggregated music chart information. You can watch a video about Invenio’s features below (or on YouTube here). This project was very successful for me – I got a publication accepted into CASCON 2008 (co-authored by the course instructor, Dr. Eleni Stroulia). You can read the paper in its entirety here on the ACM website, or email me and I can send you a copy.

If you don’t feel like watching the whole video (I know it’s long), here are a couple of pictures that illustrate Invenio’s main features:

In the main Invenio view, you can select the artist, song, and time period and then view the song’s position on each of the 190 radio stations in the US and Canada over this time period. Each circle represents one radio station (or song, or artist, depending upon the view). In some views (Track By Artist), the size of the circle indicates the song or artist’s chart position; in other views, the color of the circle indicates the genre of music (Track By Success). Pictured above is the Track By Success view, where one can view the top or bottom genre that was on each radio station during the selected time period. If one chooses to ‘View All Weeks’, they will see a geographic, time-lapsed animation of the options they have chosen (e.g., how Alicia Keys’ song ‘No One’ fared on the charts over the 6-week period).
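The circle encodings described above can be sketched as a simple mapping. This is an illustrative Python sketch (Invenio itself was written in Adobe Flex); the color table, chart length, and function names are assumptions for illustration.

```python
# Illustrative mapping from chart data to circle glyphs, as in the
# Track By Artist (size = position) and Track By Success (color = genre)
# views. All names and constants here are assumptions.
GENRE_COLORS = {"pop": "red", "country": "green", "hip-hop": "blue"}

def circle_for(position, genre, max_radius=30, chart_length=40):
    """Higher-charting songs (position 1) get larger circles;
    color encodes the genre of music."""
    radius = max_radius * (chart_length - position + 1) / chart_length
    return {"radius": round(radius, 1), "color": GENRE_COLORS.get(genre, "gray")}

circle_for(1, "pop")  # top of the chart -> the largest circle
```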

In this second picture, instead of circles indicating song position or genre, we are shown the album cover (pulled from Amazon) of the artist’s album that contains the song currently on the chart (in this case, in the #1 position). The main window’s maps are fully interactive – you can zoom in and out, pan the map, and change its type (e.g., satellite, hybrid, map). You can also elect to have tool tips appear (providing additional information about the radio station and a link to the radio station’s website). Finally, you can display other additional information pulled from the Amazon web service (i.e., album price, a link to the album’s page, a review of the album, the number of lists that the album is on, the genre of music, etc.).

Another type of visualization available in Invenio is the Cover Flow or Display Shelf (when I was making Invenio, Apple hadn’t popularized them yet). Each radio station has six display shelves associated with it, and each display shelf visualizes the music that was on each week’s chart. One can select a radio station using the combo box (or by clicking on a circle or artist album cover in the main application window) and all of the display shelves will appear. Once they appear, you can choose to flip through each of them individually, or ‘lock’ them according to chart position or song. This alternative view makes it easy to see how a song has fared using a method that is very different from the main application’s map. As in the main application window, you can choose to see tool tips and additional artist/album information from Amazon.

 

The last visualizations contained within Invenio are the charting views. Because most people are used to viewing information via charts rather than display shelves or maps, I chose to include three different charting options in Invenio. There are Bubble Charts (the picture directly below; the size of each bubble indicates the position of that song on the corresponding radio station’s chart), a line chart indicating a chart score (the picture in the middle; the average position of a song across all radio stations), and a vertical chart (the picture at the bottom; it indicates whether each song/artist has been fairly consistent over time or has had a large variance in chart position over the given time period).
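The two statistics behind the line and vertical charts can be sketched in a few lines. This is an illustrative Python sketch; the data layout, station names, and function names are assumptions, not Invenio's actual implementation.

```python
# Sketch of the statistics behind Invenio's charting views:
# chart score = average position of a song across all stations in a week;
# consistency = variance of a station's chart positions over time.
# Data shape and names are assumptions for illustration.
from statistics import mean, pvariance

# chart position of one song on three stations over four weeks
positions = {
    "KIIS-FM": [5, 3, 2, 1],
    "CHUM-FM": [9, 7, 4, 2],
    "Z100":    [6, 4, 3, 1],
}

def chart_score(week):
    """Average chart position of the song across all stations in one week."""
    return mean(p[week] for p in positions.values())

def consistency(station):
    """Variance of a station's positions over time; low means consistent."""
    return pvariance(positions[station])

chart_score(3)  # average position in the final week
```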

 

AViz – Visualizing Execution Traces for the Dynamic Analysis of Software

For my CMPUT 666 – Reverse Software Engineering course (Fall 2007), I worked on an individual project relating to the visualization of execution traces for dynamic analysis of a software program.

As many software engineers will attest, one of the most important and time-consuming activities within the software development cycle is the continual maintenance of a software system or program. Contrary to popular belief, roughly 50% of the costs encountered during a typical software development cycle are incurred during the modification and maintenance phases of a system, not during the design or implementation activities. Estimates have shown that 50% of the software maintenance phase is spent trying to comprehend the software system. Because program comprehension contributes so much time, effort, and money to the total cost of system development, a logical question to ask is: what are the problems with the methods, tools, and techniques currently being used, and how can they be improved upon?

Although each has its own problems, static and dynamic analyses are very beneficial for discovering the behavior and architecture of a system. One of the most surprising observations from the static and dynamic analysis literature is that these two techniques have rarely been combined. A hybrid of these approaches would involve aggregating the static and dynamic artifacts together and creating a new visualization that represents the system’s architecture and behavior. In the limited research that has tried to create such a ‘hybrid’ analysis, the most common visualizations created are UML collaboration diagrams (a hybrid of UML class and sequence diagrams):

Within the context of my final project, I wanted to explore the possibilities of using this hybrid approach to assist maintenance personnel in program comprehension when they are performing evolutionary maintenance tasks. Motivated by the success of a previously created software program at the University of Alberta (the JDEvAn Viewer), I wanted to extend this work to include a dynamic analysis element. To this end, I experimented with the creation of a hybrid analysis system, AViz (Aspect Visualization). I used the typical reverse engineering paradigm: I extracted execution traces from my program (using the AspectJ language and writing to files), analyzed the traces (in Java), and then output the results as a UML collaboration diagram (using the Java SWT and Draw2D libraries).
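The trace-analysis step can be sketched as follows. AViz itself was written in Java against AspectJ-generated traces; this is an illustrative Python sketch, and the caller->callee trace format shown is an assumption, not AViz's actual file layout.

```python
# Sketch of AViz's analysis step: read an execution trace of
# caller -> callee events and aggregate them into the weighted
# class-to-class edges a UML collaboration diagram would draw.
# The trace format is an assumption for illustration.
from collections import Counter

TRACE = """\
Main.run -> Parser.parse
Parser.parse -> Lexer.next
Parser.parse -> Lexer.next
Main.run -> Printer.print
"""

def collaboration_edges(trace):
    """Count calls between classes, collapsing individual methods."""
    edges = Counter()
    for line in trace.splitlines():
        caller, callee = (s.strip() for s in line.split("->"))
        edges[(caller.split(".")[0], callee.split(".")[0])] += 1
    return edges

edges = collaboration_edges(TRACE)
# the Parser -> Lexer edge carries weight 2 (two recorded calls)
```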

 
Thanks to Zhenchang Xing (for the JDEvAn Viewer code) and Mike Smit (for his AspectJ sample code to get me started) at the UofA.

Enulog – Determining and Visualizing the Polarity of Movie Reviews Using Sentiment Analysis

As part of my CMPUT 500 – Introduction to Natural Language Processing course (Fall 2007), I was required to do an individual course project. Because I was tied to the Software Engineering lab at the University of Alberta, I decided to integrate some NLP into an existing visualization project (eNulog) that was created by the Software Engineering lab. (Some of the following is from my Canadian AI paper which can be found here in its entirety).

The eNulog project aimed to aggregate and visualize RSS feeds and movie blog postings in a simple, easy-to-use interface. Within the eNulog interface, each movie, actor, director, or movie genre is represented by a node. A user can click on a node and view all of the blogs, comments, or RSS feeds that relate to the given movie (or actor, director, or movie genre). The size of each node reflects the number of posts, comments, or feeds about that movie. Once a node has been clicked/selected, all of the nodes that are similar to it will aggregate around it; those nodes that are dissimilar (e.g., the comedy genre is vastly different from the action genre) will move farther away.


For my project, I took the eNulog program and mined all of the blog postings to determine their relative polarity. In the Canadian AI conference paper that I wrote along with Dr. Greg Kondrak, we compared two different sentiment analysis techniques (a lexical/dictionary-based approach and a machine learning/support vector machine approach) using the eNulog blog data set. The stronger approach (using support vector machines via SVMlight) was incorporated into the eNulog visualization program. The two pictures illustrate the interface: green nodes indicate that the average movie reviews for the movie in question were positive, red indicates that they were negative, and yellow indicates that they were neutral or split roughly 50-50. The first picture shows that the movie ‘Batman Begins’ has 14 total blog postings, which generally recommend the movie. In the second picture, the movie ‘Charlie and the Chocolate Factory’ has been selected/clicked. All of the movies similar to it have moved closer to it, and those that are dissimilar are farther away.
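The weaker of the two techniques, the lexicon/dictionary-based baseline, can be sketched in a few lines: sum word polarities from a dictionary and threshold the total into the green/red/yellow node coloring. Both the tiny lexicon and the thresholds below are assumptions for illustration, not the paper's actual resources.

```python
# Minimal sketch of a lexicon/dictionary-based polarity baseline mapped
# onto eNulog's node colors. The lexicon and thresholds are assumptions.
LEXICON = {"great": 1, "fun": 1, "recommend": 1,
           "boring": -1, "awful": -1, "waste": -1}

def review_color(text, margin=0):
    """Sum word polarities and map the total to a node color."""
    score = sum(LEXICON.get(w.strip(".,!").lower(), 0) for w in text.split())
    if score > margin:
        return "green"   # on-average positive reviews
    if score < -margin:
        return "red"     # on-average negative
    return "yellow"      # neutral or evenly split

review_color("A great movie, fun throughout!")  # -> "green"
```

The SVM approach that won the comparison instead learns its weights from labeled reviews rather than relying on a fixed dictionary.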

Exploring Maya and Virtools, Underwater World

In the summer of 2008, Fraser Anderson and I supervised a WISEST student, Melissa Hall, in the AMMI lab. We decided to create a project for Melissa that would make use of her amazing artistic skills, whilst not requiring a huge amount of programming. As I had just finished the Caribbean Resort Project, we asked her to make us a large, detailed virtual environment that could be used for some of the spatial navigation research I was doing for my thesis.

At the end of her 6-week internship, Melissa had created a wonderful underwater environment, à la Finding Nemo / The Little Mermaid. She used Maya 2008 for 3D object creation and Virtools 4.1 for character behaviors and environment deployment. The environment has a number of character behaviors and contains a vast array of ocean-esque elements such as tankers, a sunken ship, kelp, shells, fish, and sharks. The ‘Q’ and ‘E’ keys are used to swim upwards and downwards, the ‘W’, ‘A’, ‘S’, and ‘D’ keys are used to swim forwards, backwards, left, and right, and the arrow keys are used to look around. The spacebar can also be used to turn a flashlight on and off.

We commonly show visitors to the AMMI lab her environment in our CAVE or on a monitor or HMD. If you have been to the AMMI lab in the last few years for a demo, then you have most certainly seen her virtual environment! For those of you who haven’t seen the underwater world, you can see it below.