
About OpenPTrack v2

OpenPTrack is an open source project launched in 2013 to create a scalable, multi-camera solution for person tracking that supports applications in education, art, and culture.

This wiki refers to the v2 "Gnocchi release".

Our objective is to enable “creative coders” to create body-based interfaces for large groups of people—for classrooms, art projects and beyond.

Based on the widely used, open source Robot Operating System (ROS), OpenPTrack provides:

  • user-friendly camera network calibration;
  • person detection from RGB/infrared/depth images;
  • efficient multi-person tracking;
  • object tracking from RGB and depth images;
  • reliable multiple-object tracking;
  • multi-camera and multi-person pose annotation; and
  • UDP and NDN streaming of tracking data in JSON format (see the receiving sketch below).
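
Because many interactive applications consume the UDP/JSON stream rather than ROS topics directly, here is a minimal sketch of a receiver. The port number and the JSON field names below are assumptions for illustration only; check your streaming configuration and the "How to use the tracking data" page for the actual values.

```python
# Minimal UDP/JSON receiver sketch (port and field names are assumptions;
# match them to your OpenPTrack streaming configuration).
import json
import socket

UDP_PORT = 21234  # hypothetical port; set to the one your system broadcasts on

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", UDP_PORT))

while True:
    data, _addr = sock.recvfrom(65535)          # one JSON message per datagram
    msg = json.loads(data.decode("utf-8"))
    # Tracking messages typically carry a list of live tracks, each with an
    # id and a ground-plane position; exact field names vary by release.
    for track in msg.get("tracks", []):
        print(track.get("id"), track.get("x"), track.get("y"))
```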

With the advent of commercially available consumer depth sensors, and with continued computer vision research on multi-modal image and point cloud processing, robust person tracking with the stability and responsiveness needed to drive interactive applications is now possible at low cost. However, those research results remain difficult for application developers to put to use.

We believe that a disruptive project is needed for artists, creators and educators to work with robust real-time person tracking in real-world projects. OpenPTrack aims to support those in the arts, cultural, and education sectors who wish to experiment with real-time person and object tracking, along with pose annotation, as an input for their applications. The project contains numerous state-of-the-art algorithms for RGB and/or depth tracking, and is built on a modular, node-based architecture that supports adding and removing sensor streams online.
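
Within ROS, this node-based design means an application can tap the tracker's output simply by subscribing to its topic. The topic name and message type below are assumptions for illustration (run `rostopic list` and `rostopic info` on a live system to confirm them); this is a sketch, not the canonical API.

```python
# Sketch of a ROS node consuming OpenPTrack's tracking output.
# Topic name and message type are assumptions; verify them with
# `rostopic list` on a running OpenPTrack system.
import rospy
from opt_msgs.msg import TrackArray  # assumed message package/type

def on_tracks(msg):
    # Log the id and ground-plane position of every currently tracked person.
    for track in msg.tracks:
        rospy.loginfo("track %d at (%.2f, %.2f)", track.id, track.x, track.y)

rospy.init_node("opt_track_listener")
rospy.Subscriber("/tracker/tracks", TrackArray, on_tracks)
rospy.spin()
```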

OpenPTrack is led by UCLA REMAP and Open Perception. Key collaborators include the University of Padova, Electroland, and Indiana University Bloomington. Code is available under a BSD license. Portions of the work are supported by the National Science Foundation (IIS-1323767).

Follow us on Twitter: @openptrack.

Guides

For further documentation, please see the navigation bar to the right; the same pages are listed under Wiki Contents below.

Contributing to OpenPTrack

OpenPTrack is an open source project. If you discover optimizations, or if there is a feature you'd like to contribute, please make a branch for your feature and submit a pull request.

A special note: OpenPTrack is five years old in 2018! As we plan for the next five years, we would really appreciate hearing from users and developers of OPT via the OpenPTrack User & Developer Survey. It will help us to shape plans for the platform and improve outreach to our community.


Wiki Contents

  • About OpenPTrack v2
  • What's New In Gnocchi
  • Overview
  • Setting Up an OpenPTrack v2 System:
      • OpenPTrack System and Hardware Requirements
      • Build and Install
      • Configuration
  • Running OpenPTrack v2:
      • Calibration
      • Person Tracking
      • Pose Annotation
      • Object Tracking
      • Face Detection and Recognition
      • Tracking GUI
  • How to use the tracking data
  • How to receive tracking data in:
  • Deployment Guide
      1. Tested Hardware
      2. Network Configuration
      3. Imager Mounting and Placement
      4. Calibration in Practice
      5. Quick Start Example
      6. Imager Settings
      7. Manual Ground Plane
      8. Calibration Refinement (Person-Based)
      9. Calibration Refinement (Manual)
  • Troubleshooting
  • OPT on the NVidia Jetson
  • Hacking and Contributing to OPT
  • OpenPTrack Website
