Augmented reality brings virtual (synthetic) artifacts to life, enhancing how we view and experience the real world. Using computer vision, virtual artifacts are placed precisely for perceptually seamless integration with the real scene. Computer vision also enables a multitude of advanced interfaces and information-gathering applications.
Computer vision experience at MOVES includes full-body and hand posture recognition, vision-based human-computer interfaces, detection of suspicious and deceptive behavior, image analysis in virtual environments, video stabilization and mosaicing, embedded processing on unmanned vehicles, sonar image analysis, and LiDAR data processing. We have applied our experience with state-of-the-art graphics and computer vision to building and improving AR-enabling technology, as well as to creating AR applications and deploying them in new domains and with new user groups.
In the BASE-IT project, for example, we use a network of cameras and other sensors to automatically track US Marines during their pre-deployment training. This allows us to replicate the relevant real-world aspects in a virtual environment and thereby augment the observer’s view of the training, replay events from arbitrary viewpoints, and simulate new, unobserved behavior. It also enables automatic behavior analysis for flagging dangerous situations and for evaluating troop performance. For mission planning and after-action review (AAR), a Virtual Sand Table mixes reality (physical props) with dynamic computer-generated imagery projected onto the props, affording painting-style interaction. This is collaborative work with UNC Chapel Hill.
Involved faculty members
[wpv-view name="Related Projects" focusarea="augmented-reality"]