Queen's University

Virtual Reality

The Purpose of the Virtual Reality Group

Our primary purpose in venturing into the field of virtual reality is to bring our knowledge of multisensory perceptual mechanisms to the technical advancement of Virtual and Artificial Perceptual Environment Research (VAPER). We do this by exploring the perceptual consequences of various types of stimuli and body-motion constraints. We are also involved in research into basic perceptual mechanisms using human and animal models (behaviour and physiology). Much of our work focuses on the distinction between self-motion and object motion, and on the segregation of objects using motion cues.

Overview of our present system

We have built several VR platforms. A virtual world is generated by custom-made OpenGL programs on Silicon Graphics workstations and is presented on a Virtual i/O lightweight head-mounted LCD display. Head tracking is accomplished with an Ascension Flock of Birds 6DOF head tracker mounted on the helmet strap. Our Cyberbike is a standard mountain bike modified so that the real-world motion of the bike wheel (sensed with optical sensors) and the steering (sensed with standard potentiometers) determine the speed and direction of movement in the virtual world.

Bike

The SGI takes the position information from the bike and uses it to compute the 3D arrangement of the objects in the virtual world. This is possible only because the SGI can accept this data at high speeds and use it to render the 3D environment in near real time.

 

In a large VR space, subjects ride a stationary bicycle or walk on a treadmill to explore the full spatial extent of the space. The viewpoint is dynamic and is determined by the steering angle and translational velocity of either the modified stationary mountain bike or the treadmill, which is serially linked to the SGI workstation. Both means of transportation require very natural motor output on the part of the user and can readily be modified to simulate different large spaces, ranging from cityscapes (for architectural and design purposes) to large plants (to simulate safety and emergency procedures).
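The mapping from the bike's sensor readings to the moving viewpoint can be sketched as a simple dead-reckoning update. This is a minimal illustration using a kinematic bicycle model; the function, parameter names, and numeric values are our own assumptions, not the lab's actual code:

```python
import math

def update_viewpoint(x, y, heading, speed, steer_angle, wheelbase=1.0, dt=0.02):
    """Advance the virtual-world viewpoint one frame from bike sensor readings.

    speed: translational velocity from the wheel's optical sensor (m/s)
    steer_angle: handlebar angle from the potentiometer (radians)
    dt: frame interval (s); values here are illustrative only
    """
    # Kinematic bicycle model: steering angle sets the turn rate,
    # wheel speed sets the distance covered this frame.
    heading += (speed / wheelbase) * math.tan(steer_angle) * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Riding straight ahead at 2 m/s for one second (50 frames at 50 Hz):
x, y, h = 0.0, 0.0, 0.0
for _ in range(50):
    x, y, h = update_viewpoint(x, y, h, speed=2.0, steer_angle=0.0)
# x advances about 2 m; heading is unchanged with the bars straight
```

Each frame the workstation would apply the resulting pose as the viewing transformation before redrawing the scene.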

 

Large Virtual Reality

Medium- and small-space VRs provide compact virtual worlds for critical sensory-physiology experiments in humans and animals, in which natural interaction by way of head movements changes the dynamic displays while activity in animal brains is being monitored.

 

Our research in these environments addresses:

  • The various perceptual cues to depth and motion which enhance the perception of reality in VR.
  • The role of interactive control in VR.
  • Formation of spatial memory maps.
  • 3D auditory VR space perception.
Small Virtual Reality

We have found that VRs do indeed allow human subjects to build very precise cognitive maps of the virtual spaces they explore, and that the natural interaction our systems provide leads to superior spatial knowledge of the simulated spaces. This confirms what many had expected of the technology.

 

  • Tong, F.H., Marlin, S.G., and Frost, B.J. Cognitive Map Formation in a 3D Visual Virtual World. IRIS/PRECARN Workshop, 1995. [HTML] [PDF]
  • Marlin, S.G., Tong, F., David, S., and Frost, B.J. Testing Cognitive Maps of Immersive 3D Virtual Reality Environments. Proceedings of the 4th Conference of the Australasian Cognitive Science Society, 1997.
  • Marlin, S.G. and Frost, B.J. Incidental learning and memory of object location resulting from naturalistic exploration in a Virtual Reality room: one-wall (2D object location) versus two- or four-wall (true 3D) object placement. IRIS/PRECARN Conference, 1997. [PDF]

The best human-interest story is that when we designed a simple VR environment to test what information humans use to brake successfully (i.e., while riding the bike in a virtual world and stopping in front of a barrier), it turned out to be precisely the same information, tau, that is used throughout the animal kingdom. For example, flies use the same tau margin to make effortless, safe landings, and gannets use it to fold their wings during 100-metre dives into the sea.
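Tau is simply the remaining distance divided by the closing speed, and holding its rate of change at -0.5 corresponds to the constant deceleration that stops a rider exactly at the barrier. The following is a minimal sketch of that relationship (our own illustration, not the lab's experimental code):

```python
def tau(distance, closing_speed):
    """First-order time-to-contact: time until the barrier is reached
    if the current closing speed were maintained."""
    return distance / closing_speed

def braking_deceleration(distance, speed, tau_dot=-0.5):
    """Deceleration needed to hold the rate of change of tau constant.

    From tau = d/v one gets tau_dot = -1 + a*d/v**2, so
    a = v**2 * (1 + tau_dot) / d.  Holding tau_dot at -0.5 gives
    a = v**2 / (2d): the constant deceleration that brings the rider
    to rest exactly at the barrier.
    """
    return speed**2 * (1.0 + tau_dot) / distance

# Example: 20 m from the barrier, closing at 5 m/s.
# tau(20.0, 5.0) is 4.0 s; braking_deceleration(20.0, 5.0) is 0.625 m/s^2.
```

The appeal of tau is that an observer can obtain it directly from the optical expansion of the approaching surface, without knowing distance or speed separately.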

 

  • Sun, H.J. and Frost, B.J. Visual control of target-directed locomotion in a virtual environment. IRIS/PRECARN Conference, 1998. [HTML]

Kingston, Ontario, Canada. K7L 3N6. 613.533.2000