A real-time 3D motion tweening and augmented reality system.
# Nomad

A motion tweening and 3D augmented reality system that uses 3D information recovered from a single 2D image to automatically insert painted structures and mesh models into that image and into subsequent frames of video.

The goals of this project were twofold:

  1. To replicate the parallel tracking behavior demonstrated in the 2007 Klein-Murray PTAM paper, using analysis of a single two-dimensional image to detect multiple surfaces.
  2. To replicate the nonplanar motion tracking behavior demonstrated in Disney’s Paperman short, whose Meander pipeline uses dense optical flow methods for motion tweening.

Our system consists of a primary user interface that iterates over a stream of static or real-time images and dispatches actions to several independent modules. The main modules comprise a suite of detection, tracking, graphics, vector math, and contour merging classes and methods, making it easy to combine different algorithms for the variety of video and image cases that may arise. A smoothing and filtering module is also included but incomplete; it is intended to reduce the error contributed by bad per-frame tracking measurements.
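The frame-dispatch structure described above can be sketched as follows. This is an illustrative skeleton, not the actual implementation in src/; the class names (`Pipeline`, `DetectionModule`, `TrackingModule`) are placeholders of ours:

```python
class Module:
    """Base class for an independent processing module (sketch)."""
    def process(self, frame):
        raise NotImplementedError

class DetectionModule(Module):
    def process(self, frame):
        # A real module would run surface detection here.
        return f"detect({frame})"

class TrackingModule(Module):
    def process(self, frame):
        # A real module would update per-frame tracking state here.
        return f"track({frame})"

class Pipeline:
    """Iterates over a stream of frames (static images or live video)
    and dispatches each frame to every registered module in order."""
    def __init__(self, modules):
        self.modules = modules

    def run(self, frames):
        return [[m.process(frame) for m in self.modules]
                for frame in frames]

pipeline = Pipeline([DetectionModule(), TrackingModule()])
out = pipeline.run(["frame0", "frame1"])
```

Keeping modules behind a single `process` interface is what lets different detection and tracking algorithms be swapped per video case without changing the driver loop.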

This diagram shows the structure visually:

*(Diagram: Nomad system pipeline)*

More information about our technical system design can be found in OVERVIEW.pdf; our user interface design is documented in src/README.md.

## Sparse Flow Surface Tracking
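The README does not reproduce the tracker itself, but the core step of sparse flow tracking, a single Lucas-Kanade window update (the building block behind pyramidal trackers such as OpenCV's `calcOpticalFlowPyrLK`), can be sketched in NumPy. The function name and synthetic frames below are ours, for illustration only:

```python
import numpy as np

def lucas_kanade_window(I, J, x, y, half=7):
    """Estimate the (dx, dy) translation of the window centered at
    (x, y) between frames I and J by solving the LK normal equations
    [sum Ix^2, sum IxIy; sum IxIy, sum Iy^2] d = -[sum IxIt, sum IyIt]."""
    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    Ix = np.gradient(I, axis=1)[win]   # spatial gradients
    Iy = np.gradient(I, axis=0)[win]
    It = (J - I)[win]                  # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)       # (dx, dy)

# Synthetic check: a smooth blob shifted one pixel to the right.
yy, xx = np.mgrid[0:64, 0:64]
I = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)
J = np.exp(-((xx - 33) ** 2 + (yy - 32) ** 2) / 50.0)
dx, dy = lucas_kanade_window(I, J, 32, 32)
```

A production tracker runs this update over many feature points and across an image pyramid so that large motions remain within the linearization's validity.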

## Feature-based Surface Tracking
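Feature-based surface tracking typically matches keypoints (e.g. ORB or SIFT descriptors) between frames and then fits a homography mapping the tracked surface from one frame to the next. The homography-fitting step via the direct linear transform (DLT) can be sketched as below; the function name is ours and the correspondences are synthetic:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous
    coordinates) from >= 4 point correspondences, via the DLT: stack
    two linear constraints per correspondence and take the SVD null
    vector of the resulting system."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1

# Synthetic check: points mapped by a known homography.
H_true = np.array([[0.8, -0.2,  5.0],
                   [0.2,  0.8, -3.0],
                   [0.0,  0.0,  1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 2.0]])
src_h = np.hstack([src, np.ones((5, 1))])
dst_h = (H_true @ src_h.T).T
dst = dst_h[:, :2] / dst_h[:, 2:]
H_est = homography_dlt(src, dst)
```

In practice the correspondences come from a descriptor matcher and the fit is wrapped in RANSAC (as in OpenCV's `findHomography`) to reject bad matches.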

## Dense Optical Flow for Motion Tracking
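Dense optical flow estimates a motion vector for every region of the image rather than only at sparse features, which is what makes it suitable for tweening nonplanar motion as in Paperman. As a minimal illustration of the idea only (the methods used in Meander, or Farneback's algorithm in OpenCV, are far more sophisticated), here is an exhaustive block-matching flow field in NumPy:

```python
import numpy as np

def dense_flow_block(I, J, block=8, search=2):
    """For each block of frame I, find the integer displacement within
    +/-search pixels whose patch in frame J minimizes the sum of
    squared differences (SSD). Returns an array of (dx, dy) per block."""
    h, w = I.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            patch = I[y0:y0 + block, x0:x0 + block]
            best, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                        continue  # candidate window falls off the frame
                    cand = J[y1:y1 + block, x1:x1 + block]
                    ssd = np.sum((patch - cand) ** 2)
                    if ssd < best:
                        best, best_d = ssd, (dx, dy)
            flow[by, bx] = best_d
    return flow

# Synthetic check: random texture shifted 2 pixels to the right.
rng = np.random.default_rng(0)
I = rng.random((32, 32))
J = np.roll(I, 2, axis=1)
flow = dense_flow_block(I, J)
```

Interior blocks recover the (2, 0) shift exactly; real dense-flow methods add subpixel estimates, regularization across neighboring vectors, and coarse-to-fine search.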


*Kevin Yeh, Matt Broussard, Kaelin Hooper, Conner Collins*

*3D Reconstruction with Computer Vision, 2014*