Studying collaborative navigation in a Collaborative Virtual Environment (CVE)

H. Yang

School of Information, University of Michigan, Ann Arbor, MI, U.S.A.

 

A spatial 3-D collaborative virtual environment (CVE) allows multiple users distributed across a computer network to enter a shared space constructed from 3-D computer graphics [1]. Many tasks in such systems involve collaborative navigation: several users explore the virtual environment and then try to reach a common location in order to work together. A central problem in this type of task is establishing a mutual understanding of viewpoints among CVE participants [2]. To support this kind of work, we need to study how people behave when they navigate collaboratively in a CVE.

In our ongoing investigation of the effects of different perspective displays on collaborative navigation [3], we use a variety of means to collect data. Our test paradigm is a collaborative search-and-probe task, and a custom-built CVE system has been developed for it. A pair of subjects, one the 'guider' and the other the 'driver', takes part in each experimental session. The driver controls a virtual submarine within a virtual water tank and searches for a target, without knowing which object is the target. On another computer in a different room, the guider can identify the target because it flashes once on the guider's screen. The guider and the driver communicate over an audio link.

To measure the performance of the collaborative search, a Target-Found-Time is recorded. To measure how long the guider takes to guide the driver to the target, a Travel-Time is calculated. All reaction-time data, as well as the view trajectory data, are sent in real time to a monitoring computer and logged. On this computer, the experimenter can see what either subject sees by switching between their views during each experimental session. The time-stamped view trajectory data are used to reconstruct the subjects' actions for replay and analysis. Each experimental session is also captured on video and digitized for replay synchronized with the 3-D view transitions. This facilitates the coding of communication patterns and helps identify subjects' search strategies, as well as problems specific to a particular perspective display.
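The two timing measures above can be derived from the time-stamped event log. The following Python sketch illustrates one plausible way to compute them; the event names, field layout, and the assumption that Travel-Time spans from target identification to arrival are ours for illustration, not the actual logging software.

```python
from dataclasses import dataclass

@dataclass
class TrialEvents:
    """Timestamps (in seconds) for the key events of one trial."""
    trial_start: float     # driver begins searching
    target_found: float    # guider sees the target flash and identifies it
    target_reached: float  # driver arrives at the target

def target_found_time(ev: TrialEvents) -> float:
    """Performance of the collaborative search: start until the target is found."""
    return ev.target_found - ev.trial_start

def travel_time(ev: TrialEvents) -> float:
    """Time for the guider to guide the driver from identification to the target."""
    return ev.target_reached - ev.target_found

# Example trial with hypothetical timestamps
ev = TrialEvents(trial_start=0.0, target_found=42.5, target_reached=97.0)
print(target_found_time(ev))  # 42.5
print(travel_time(ev))        # 54.5
```

Keeping the raw time-stamped events, rather than only the derived measures, is what makes the synchronized replay and later re-analysis possible.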

To measure the subjects' geometric understanding of the virtual environment, the software periodically clears the screen and asks the subject multiple-choice questions such as "What is your current position and orientation?". These questions are generated in real time and graded automatically by the software. Similarly, at the end of a session, questions about the global distribution of targets are asked and graded by the software.
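Automatic grading of such a question reduces to comparing the chosen option against the subject's true pose recorded by the system. The sketch below shows one way this could work; the pose representation, distance weighting, and function names are assumptions for illustration, not the experiment software itself.

```python
import math

def pose_distance(a, b):
    """Combine positional and angular error for poses (x, y, heading_deg)."""
    dx, dy = a[0] - b[0], a[1] - b[1]
    dtheta = abs((a[2] - b[2] + 180.0) % 360.0 - 180.0)  # wrap to [-180, 180]
    return math.hypot(dx, dy) + dtheta / 90.0  # assumed weight: 90 deg ~ 1 unit

def grade(true_pose, options, chosen_index):
    """Correct iff the subject chose the option nearest the true pose."""
    correct_index = min(range(len(options)),
                        key=lambda i: pose_distance(options[i], true_pose))
    return chosen_index == correct_index

true_pose = (2.0, 3.0, 90.0)                     # logged by the system
options = [(2.0, 3.0, 90.0),                     # correct option
           (8.0, 1.0, 270.0),
           (0.0, 0.0, 0.0)]
print(grade(true_pose, options, 0))  # True
print(grade(true_pose, options, 1))  # False
```

Generating the distractor options in real time from the logged pose keeps each question tied to where the subject actually is, so the grade reflects spatial understanding rather than memory of a fixed questionnaire.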

References

  1. Churchill, E.F. et al. (2001). Collaborative virtual environments: digital places and spaces for interaction. London: Springer.
  2. Hindmarsh, J. et al. (1998). Fragmented Interaction: Establishing Mutual Orientation in Virtual Environments. In: Proceedings of ACM CSCW'98, 217-226.
  3. Yang, H. (2002). Multiple Perspectives for Collaborative Navigation in CVE. In: Extended Abstract of ACM CHI'02, in press.


Paper presented at Measuring Behavior 2002, 4th International Conference on Methods and Techniques in Behavioral Research, 27-30 August 2002, Amsterdam, The Netherlands

© 2002 Noldus Information Technology bv