Augmented Reality and Computer Vision

Augmented reality brings virtual (synthetic) artifacts to life for an enhanced view of, and experience in, the real world. Computer vision places these virtual artifacts in the right locations so that they blend seamlessly into the viewer’s perception of the real scene. Computer vision also enables a multitude of advanced interfaces and information-gathering applications.
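
To make that registration step concrete, the sketch below shows the basic vision computation most AR systems share: estimate the camera pose from known reference geometry, then project virtual content into the camera image. It is only an illustrative example with synthetic numbers and OpenCV, not the pipeline used in our projects.

# Illustrative sketch (synthetic data): estimate the camera pose from known
# reference points, then project a virtual artifact into the camera image.
# A real system would detect the reference points (fiducial markers or
# natural features) in live video instead of simulating them.
import numpy as np
import cv2

# Known 3D reference geometry in world coordinates: a 10 cm square marker.
marker_3d = np.array([[-0.05, -0.05, 0.0],
                      [ 0.05, -0.05, 0.0],
                      [ 0.05,  0.05, 0.0],
                      [-0.05,  0.05, 0.0]])

# Assumed pinhole camera intrinsics (focal length and principal point in pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume no lens distortion for this sketch

# Simulate where the marker corners would appear for some "true" camera pose.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.0, 0.0, 0.6])
marker_2d, _ = cv2.projectPoints(marker_3d, rvec_true, tvec_true, K, dist)

# Registration: recover the camera pose from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(marker_3d, marker_2d, K, dist)

# Place a virtual artifact (a small cube above the marker plane) with that pose.
cube_3d = np.array([[x, y, z] for x in (-0.03, 0.03)
                              for y in (-0.03, 0.03)
                              for z in (0.0, -0.06)])
cube_2d, _ = cv2.projectPoints(cube_3d, rvec, tvec, K, dist)

print("pose recovered:", ok)
print("cube corners in image coordinates:\n", cube_2d.reshape(-1, 2))

Drawing the projected cube edges over the live frame is what produces the overlaid artifact the viewer sees.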

Computer vision experience at MOVES includes full-body and hand posture recognition, vision-based human-computer interfaces, detection of suspicious and deceptive behavior, image analysis in virtual environments, video stabilization and mosaicing, embedded processing on unmanned vehicles, sonar image analysis, and LiDAR data processing. We have applied this experience with state-of-the-art graphics and computer vision to building and improving AR-enabling technology, as well as to creating AR applications and deploying them in new domains and with new user groups.
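
As one illustration of these areas, video stabilization and mosaicing both rest on estimating the geometric transform that aligns consecutive frames. The sketch below shows that alignment step with OpenCV feature matching on synthetic frames; it is not code from any of the projects mentioned above.

# Illustrative sketch: align two frames by matching ORB features and
# estimating a homography with RANSAC -- the core step in stabilization
# and mosaicing. The frames here are synthetic (a noise image and a
# translated copy of it).
import numpy as np
import cv2

def align_frames(frame_a, frame_b):
    """Estimate a homography that maps frame_b onto frame_a."""
    orb = cv2.ORB_create(2000)
    kp_a, desc_a = orb.detectAndCompute(frame_a, None)
    kp_b, desc_b = orb.detectAndCompute(frame_b, None)

    # Brute-force Hamming matching suits ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_b, desc_a)

    pts_b = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_a = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches (e.g. independently moving objects).
    H, _ = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, 5.0)
    return H

# Synthetic test: a textured frame and a copy shifted by (15, 7) pixels.
rng = np.random.default_rng(0)
frame_a = (rng.random((360, 480)) * 255).astype(np.uint8)
shift = np.float32([[1, 0, 15], [0, 1, 7]])
frame_b = cv2.warpAffine(frame_a, shift, (480, 360))

H = align_frames(frame_a, frame_b)
print("estimated alignment (roughly undoes the shift):\n", np.round(H, 2))

Warping each new frame with its estimated homography and compositing the results yields a mosaic; applying the inverse transforms over time yields a stabilized sequence.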

AR-VASTIIn the BASE-IT project, for example, we use a network of cameras and other sensors to automatically track US Marines during their pre-deployment training. This allows replication of the relevant real-world aspects in a virtual environment and thereby augmentation of the observer’s view of the training, event replay from arbitrary viewpoints, and simulation of new, unobserved behavior. It also allows for automatic behavior analysis for flagging of dangerous situations and for troop performance evaluation. For mission planning and after-action review (AAR), a Virtual Sand Table mixes reality (physical props) with dynamic computer-generated images projected on them, affording painting-style interaction. This is collaborative work with UNC Chapel Hill.

Involved faculty members

Chris Darken
Mathias Kölsch
Neil Rowe
Amela Sadagic

More information

NPS Vision Lab
BASE-IT project
3D Display and Capture of Humans for Live-Virtual Training
VR Blog (available to internal network users only)
AR-VAST

Title     Start Date   Sponsor
BASE-IT   2008         ONR