The SNaP Framework – A Virtual Reality Tool for Spatial Navigation

Continuing the work I did for my independent study course with Dr. Walter F. Bischof, my thesis project required me to create a software framework (the SNaP Framework – Spatial Navigation Framework) that enables psychologists to easily create, control, and deploy virtual reality-based spatial navigation experiments. My main goal was to eliminate the hardware and usability issues inherent in current VR systems and to create a flexible system that is easy for novices to use. I also had a number of research goals that I wanted to achieve:

  • Eliminate issues in using VR for spatial navigation research
  • Simplify input and output peripheral usage
  • Enable novice specification and deployment
  • Decrease design and implementation time
  • Reduce the volume of incomparable results
  • Create environments with similar appearances
  • Include universal interaction metaphors and behavioral recording techniques

The software framework contains a number of components (Chapters 1 and 4 of thesis):

  • Parameter File
    User-created XML file that specifies the spatial navigation experiment to be performed
  • VR Configuration Creator
    Python module that transforms each trial specified in a parameter file into a configuration file; determines if a block of trials needs to be run again due to poor performance (a minimal sketch of this step follows the list)
  • Configuration Files
    XML-based file that specifies the environmental and protocol configurations for a given experimental trial
  • VR Launcher
    Python module that determines which deployment contexts and input devices are desired
  • VRPN Server
    Open source virtual reality server/client that transforms input devices into generic device types; the data streaming from the server is used to control participant movement
  • Virtools VR Player
    Virtools-provided component that renders virtual environments
  • Virtools Composition Files
    Virtools file that contains a paradigm’s virtual world; contains custom scripts, 3D models, and universal modules to generate and render a paradigm’s virtual implementation
  • Result Files
    Capture a participant’s performance (e.g., path traveled, camera frustum images, or overall experiment results)
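
To make the parameter-to-configuration step concrete, here is a minimal Python sketch of the kind of expansion the VR Configuration Creator performs. The element names (block, trial, environment, protocol) and file names are hypothetical illustrations only, not the actual SNaP schema:

import xml.etree.ElementTree as ET
from pathlib import Path

def expand_trials(parameter_file, output_dir):
    """Expand each trial in the parameter file into its own configuration file."""
    output_dir = Path(output_dir)
    output_dir.mkdir(exist_ok=True)
    params = ET.parse(parameter_file).getroot()
    config_paths = []
    for index, trial in enumerate(params.findall("block/trial"), start=1):
        config = ET.Element("configuration")
        # Copy this trial's environmental and protocol settings verbatim.
        for section in ("environment", "protocol"):
            node = trial.find(section)
            if node is not None:
                config.append(node)
        path = output_dir / f"trial_{index:03d}.xml"
        ET.ElementTree(config).write(path, encoding="utf-8", xml_declaration=True)
        config_paths.append(path)
    return config_paths

if __name__ == "__main__":
    # Hypothetical file and directory names, for illustration only.
    for p in expand_trials("complex_maze_parameters.xml", "configurations"):
        print("wrote", p)

Each generated file would then describe a single trial for the VR Launcher and Virtools VR Player to deploy and render.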

There are a number of ways that different groups of users can use the SNaP Framework in their research. Novices can use the 3D Layout window to add, relocate, or retexture 3D elements, quickly specify the desired input and output peripherals, and easily deploy experiments using the provided batch scripts. Expert users can extend the XML parameter and configuration file schemas, design and integrate new 3D models, modify or add new scripting or C++ SDK code, introduce new metrics, modify the VR Configuration Creator to handle new parameter and configuration files, establish new goal monitoring algorithms, navigation metaphors, or aids, and implement new paradigms using the template environment. (Chapter 5 of thesis)

To test the efficacy of the SNaP Framework, I implemented a number of popular spatial navigation paradigms using the framework (Chapter 3 of thesis). I also performed a pilot study with a small sample of participants; because that work was finished after my thesis was completed, the results appear in my publications but not in the thesis.

Complex Maze

Bucket World

Scatter Hoarding Task

Cheng Task

Virtual Morris Water Maze

AViz – Visualizing Execution Traces for the Dynamic Analysis of Software

For my CMPUT 666 – Reverse Software Engineering course (Fall 2007), I worked on an individual project relating to the visualization of execution traces for dynamic analysis of a software program.

As many software engineers will attest, one of the most important and time-consuming activities within the software development cycle is the continual maintenance of a software system or program. Contrary to popular belief, roughly 50% of the costs encountered during a typical software development cycle are incurred during the modification and maintenance phases of a system, not during the design or implementation activities. Estimates have also shown that about 50% of the software maintenance phase is spent simply trying to comprehend the software system; taken together, these figures suggest that roughly a quarter of total development cost goes toward program comprehension alone. Because program comprehension contributes so much time, effort, and money to the total cost of system development, a logical question to ask is: what are the problems with the methods, tools, and techniques currently being used, and how can they be improved upon?

Although each has its own problems, static and dynamic analyses are both very useful for discovering the behavior and architecture of a system. One of the more surprising observations from the static and dynamic analysis literature is that the two techniques have rarely been combined. A hybrid of these approaches would involve aggregating the static and dynamic artifacts and creating a new visualization that represents both the system’s architecture and its behavior. Of the limited research that has attempted such a ‘hybrid’ analysis, the most common visualization produced is the UML collaboration diagram (a hybrid of UML class and sequence diagrams).

Within the context of my final project, I wanted to explore the possibility of using this hybrid approach to assist maintenance personnel with program comprehension when they are performing evolutionary maintenance tasks. Motivated by the success of a previously created software program at the University of Alberta (the JDEvAn Viewer), I wanted to extend this work to include a dynamic analysis element. To this end, I experimented with the creation of a hybrid analysis system, AViz (Aspect Visualization). I followed the typical reverse engineering paradigm: I extracted execution traces from my program (using the AspectJ language and writing the traces to files), analyzed the traces (in Java), and then output the results as a UML collaboration diagram (using the Java SWT and Draw2D libraries).
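
The real pipeline was written in AspectJ and Java as described above; purely to illustrate the trace-analysis step, here is a small Python sketch that reads a hypothetical "Caller.method -> Callee.method" trace file and tallies the class-to-class interactions that a collaboration diagram would draw as labelled message links. The trace format, file name, and function names are my own assumptions, not those of AViz:

from collections import Counter

def summarize_trace(trace_path):
    """Tally class-to-class call counts from a 'Caller.method -> Callee.method' trace file."""
    interactions = Counter()
    with open(trace_path, encoding="utf-8") as trace:
        for line in trace:
            line = line.strip()
            if not line or "->" not in line:
                continue  # skip blank or malformed lines
            caller, callee = (part.strip() for part in line.split("->", 1))
            caller_class = caller.rsplit(".", 1)[0]  # drop the method name, keep the class
            callee_class = callee.rsplit(".", 1)[0]
            interactions[(caller_class, callee_class)] += 1
    return interactions

if __name__ == "__main__":
    # Hypothetical trace file name, for illustration only.
    for (caller, callee), count in summarize_trace("execution_trace.txt").most_common():
        print(f"{caller} -> {callee}: {count} call(s)")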

Thanks to Zhenchang Xing (for the JDEvAn Viewer code) and Mike Smit (for his AspectJ sample code to get me started) at the UofA.

Exploring Maya and Virtools, Underwater World

In the summer of 2008, Fraser Anderson and I supervised a WISEST student, Melissa Hall, in the AMMI lab. We decided to create a project for Melissa that would make use of her amazing artistic skills whilst not requiring a huge amount of programming. As I had just finished the Caribbean Resort Project, we asked her to make us a large, detailed virtual environment that could be used for some of the spatial navigation research I was working on for my thesis.

At the end of her 6-week internship, Melissa had created a wonderful underwater environment, a la Finding Nemo / The Little Mermaid. She used Maya 2008 for 3D object creation and Virtools 4.1 for character behaviors and environment deployment. The environment has a number of character behaviors and contains a vast array of ocean-esque elements such as tankers, a sunken ship, kelp, shells, fish, sharks, etc. The ‘Q’ and ‘E’ keys can be used to swim upwards and downwards, the ‘W’, ‘A’, ‘S’, and ‘D’ keys are used to swim forwards, backwards, left, and right, and the arrow keys are used to look around. The Spacebar can also be used to turn a flashlight on and off.

We commonly show her environment to visitors to the AMMI lab in our CAVE or on a monitor or HMD. If you have been to the AMMI lab in the last few years for a demo, then you have most certainly seen her virtual environment! For those of you who haven’t seen the underwater world, you can see it below.