
Yolo Trainer


We are currently developing a Python-based program that allows users to create training sets for YOLO using Kinect streams.

This is especially useful when object detection is to be performed on a specific set of objects.

Please contact us if you are interested in being a beta tester.

Until the YOLO trainer is available, images from Kinect streams can be collected by following these steps:

  1. Launch each imager for OpenPTrack as usual (see here), setting all modules to false; we just want the sensors to be collecting images.
  2. For each imager from which you'd like to collect images, run the following command in a new terminal inside your OPT Docker container:
```
rosrun image_view extract_images image:=/<image_topic> \
    "_filename_format:=./<folder_name>/<sensor_name>_%04d.jpg" \
    "_sec_per_frame:=<time_in_seconds>"
```

where you should replace <image_topic> with the topic name for the sensor's images, <folder_name> with the location where you would like to save the images, and <sensor_name> with a unique prefix that distinguishes each imager's output. Adjust <time_in_seconds> as needed.

For example, the command for a Kinect v2 with the name kinect01 would be:

```
rosrun image_view extract_images image:=/kinect01/hd/image_color_rect "_filename_format:=./object_images/kinect_1_image_%04d.jpg" "_sec_per_frame:=2.0"
```

This will save one image from kinect01 every two seconds. After imaging of your objects is complete, you can combine each sensor's folder of images into a single folder, ready for annotation in VOTT.
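
If you saved each sensor's images to its own folder, a minimal Python sketch such as the one below could handle that merging step. The folder names are placeholders standing in for the <folder_name> values you used with extract_images; adjust them to match your setup.

```python
# Minimal sketch (not part of OpenPTrack): merge per-sensor image folders
# into one directory before annotation. Folder names below are placeholders.
import shutil
from pathlib import Path

source_dirs = [Path("object_images_kinect01"), Path("object_images_kinect02")]
output_dir = Path("yolo_training_images")
output_dir.mkdir(exist_ok=True)

for src in source_dirs:
    for img in sorted(src.glob("*.jpg")):
        # The <sensor_name> prefix in each filename keeps images from
        # different imagers from colliding in the merged folder.
        shutil.copy2(img, output_dir / img.name)
```

Because each filename already carries the <sensor_name> prefix, copies from different imagers will not overwrite one another in the merged folder.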
