
TensorFlow Boilerplate

This project is based on the TensorFlow Object Detection boilerplate, but this version is "freed" from the Jupyter Notebook dependency.

Setup

Arch Linux / PACMAN

sudo pacman -Syu bazel tensorflow-cuda python-tensorflow-cuda cuda cudnn protobuf

Python / PiP

Create a virtual environment:

python -m venv tfod

Activate it (and re-enter it later) with source tfod/bin/activate.

Install the Python dependencies inside your virtual environment:

pip install -r dependencies.txt

Collect Training Images

01_collecting_training_data.py

The script requires an RTSP stream (e.g. from an IP camera), which you can configure here:

RTSP_URL = 'rtsp://admin:instar@192.168.2.19/livestream/12'

Define the objects you want to detect, e.g. hand gestures, here:

labels = ['thumbsup', 'thumbsdown', 'metal', 'ok']

Let the script run; it will tell you which object it expects to see so you can provide sample images. Note that the RTSP stream can lag behind, so you might have to adjust the sleep time to shift the timed captures.
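
The capture loop in 01_collecting_training_data.py roughly follows the pattern below - a minimal sketch assuming OpenCV grabs the frames; the output path and image count are placeholders, not the script's actual values:

import os
import time
import uuid

import cv2

RTSP_URL = 'rtsp://admin:instar@192.168.2.19/livestream/12'
labels = ['thumbsup', 'thumbsdown', 'metal', 'ok']
IMAGES_PATH = os.path.join('workspace', 'images', 'collected')  # placeholder path
IMAGES_PER_LABEL = 5                                            # placeholder count

for label in labels:
    os.makedirs(os.path.join(IMAGES_PATH, label), exist_ok=True)
    cap = cv2.VideoCapture(RTSP_URL)
    print(f'Collecting images for {label} - strike the pose')
    time.sleep(5)  # time to get into position
    for _ in range(IMAGES_PER_LABEL):
        ret, frame = cap.read()
        if not ret:
            continue
        img_path = os.path.join(IMAGES_PATH, label, f'{label}.{uuid.uuid1()}.jpg')
        cv2.imwrite(img_path, frame)
        time.sleep(2)  # increase this if the RTSP stream lags behind the captures
    cap.release()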

The images then have to be labeled and split into the training and test folders. See labelImg for details.
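
Labeling itself happens in labelImg; for the train/test split a simple helper along these lines is enough - a sketch in which all paths and the split ratio are assumptions, not part of this repository:

import glob
import os
import random
import shutil

SOURCE = os.path.join('workspace', 'images', 'collected')  # placeholder
TRAIN = os.path.join('workspace', 'images', 'train')       # placeholder
TEST = os.path.join('workspace', 'images', 'test')         # placeholder
TEST_RATIO = 0.2

os.makedirs(TRAIN, exist_ok=True)
os.makedirs(TEST, exist_ok=True)

images = glob.glob(os.path.join(SOURCE, '**', '*.jpg'), recursive=True)
random.shuffle(images)
split = int(len(images) * (1 - TEST_RATIO))

for dest, batch in ((TRAIN, images[:split]), (TEST, images[split:])):
    for img in batch:
        xml = os.path.splitext(img)[0] + '.xml'  # labelImg annotation file
        shutil.copy(img, dest)
        if os.path.exists(xml):
            shutil.copy(xml, dest)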

Train your Model

02_training_the_model.py

Run the script to train on your images, fine-tuning the pre-trained ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8 model.
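
For orientation, the essential steps of such a training script with the TensorFlow Object Detection API usually look roughly like this - a sketch, not the repository's actual code; the config and checkpoint paths are assumptions:

import tensorflow as tf
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

CONFIG_PATH = 'models/my_ssd_mobnet/pipeline.config'  # placeholder
CHECKPOINT = 'pre-trained-models/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/checkpoint/ckpt-0'  # placeholder
labels = ['thumbsup', 'thumbsdown', 'metal', 'ok']

# point the pipeline config at our classes and the pre-trained checkpoint
config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(CONFIG_PATH, 'r') as f:
    text_format.Merge(f.read(), config)

config.model.ssd.num_classes = len(labels)
config.train_config.fine_tune_checkpoint = CHECKPOINT
config.train_config.fine_tune_checkpoint_type = 'detection'
# the label map and TFRecord paths in train_input_reader/eval_input_reader
# need to be set here as well

with tf.io.gfile.GFile(CONFIG_PATH, 'w') as f:
    f.write(text_format.MessageToString(config))

# the training itself is normally delegated to the API's model_main_tf2.py:
# python model_main_tf2.py --model_dir=models/my_ssd_mobnet \
#   --pipeline_config_path=models/my_ssd_mobnet/pipeline.config --num_train_steps=2000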

Test your Model

03_run_object_detection_from_file.py

Pick one of your images from the test folder to verify that it is recognized correctly:

TEST_IMAGE = 'metal.tyrxdf6-zzggg-RGdgfc-zdfg-1cDGF17f.jpg'
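
Under the hood, a file-based test with the Object Detection API typically restores the latest training checkpoint and draws the detections onto the image - the following is only an approximation; the config, checkpoint and label map paths are placeholders:

import cv2
import numpy as np
import tensorflow as tf
from object_detection.builders import model_builder
from object_detection.utils import config_util, label_map_util, visualization_utils as viz_utils

CONFIG_PATH = 'models/my_ssd_mobnet/pipeline.config'  # placeholder
CHECKPOINT_PATH = 'models/my_ssd_mobnet/ckpt-3'       # placeholder
LABEL_MAP = 'annotations/label_map.pbtxt'             # placeholder
TEST_IMAGE = 'metal.tyrxdf6-zzggg-RGdgfc-zdfg-1cDGF17f.jpg'

# rebuild the model from the pipeline config and restore the trained weights
configs = config_util.get_configs_from_pipeline_file(CONFIG_PATH)
detection_model = model_builder.build(model_config=configs['model'], is_training=False)
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(CHECKPOINT_PATH).expect_partial()

@tf.function
def detect_fn(image):
    image, shapes = detection_model.preprocess(image)
    prediction_dict = detection_model.predict(image, shapes)
    return detection_model.postprocess(prediction_dict, shapes)

category_index = label_map_util.create_category_index_from_labelmap(LABEL_MAP)
img = cv2.imread(TEST_IMAGE)
input_tensor = tf.convert_to_tensor(np.expand_dims(img, 0), dtype=tf.float32)
detections = detect_fn(input_tensor)

num = int(detections.pop('num_detections'))
detections = {key: value[0, :num].numpy() for key, value in detections.items()}
viz_utils.visualize_boxes_and_labels_on_image_array(
    img,
    detections['detection_boxes'],
    detections['detection_classes'].astype(np.int64) + 1,  # label map IDs are 1-based
    detections['detection_scores'],
    category_index,
    use_normalized_coordinates=True,
    min_score_thresh=0.6)
cv2.imshow('detection', img)
cv2.waitKey(0)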

04_run_object_detection_from_stream.py

Alternatively, test the model on your live RTSP stream:

RTSP_URL = 'rtsp://admin:instar@192.168.2.19/livestream/12'
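
The live stream version only differs in how the frames arrive; assuming detect_fn and category_index are built exactly as in the previous sketch, the loop looks roughly like this:

cap = cv2.VideoCapture(RTSP_URL)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    input_tensor = tf.convert_to_tensor(np.expand_dims(frame, 0), dtype=tf.float32)
    detections = detect_fn(input_tensor)
    num = int(detections.pop('num_detections'))
    detections = {key: value[0, :num].numpy() for key, value in detections.items()}
    viz_utils.visualize_boxes_and_labels_on_image_array(
        frame,
        detections['detection_boxes'],
        detections['detection_classes'].astype(np.int64) + 1,
        detections['detection_scores'],
        category_index,
        use_normalized_coordinates=True,
        min_score_thresh=0.6)
    cv2.imshow('object detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to stop
        break
cap.release()
cv2.destroyAllWindows()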

Freeze and Convert

05_freeze_and_convert_models.py

Export your model and convert it for use with TensorFlow.js and TensorFlow Lite.
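
A rough outline of what the export and conversion step needs to do, assuming the Object Detection API export scripts are available and tensorflowjs is installed - all paths are placeholders:

import os
import subprocess

import tensorflow as tf

MODEL_DIR = 'models/my_ssd_mobnet'                    # placeholder
EXPORT_DIR = os.path.join(MODEL_DIR, 'export')        # placeholder
TFJS_DIR = os.path.join(MODEL_DIR, 'tfjsexport')      # placeholder
TFLITE_DIR = os.path.join(MODEL_DIR, 'tfliteexport')  # placeholder

# 1. freeze the trained checkpoint into a SavedModel
subprocess.run([
    'python', 'exporter_main_v2.py',
    '--input_type=image_tensor',
    f'--pipeline_config_path={MODEL_DIR}/pipeline.config',
    f'--trained_checkpoint_dir={MODEL_DIR}',
    f'--output_directory={EXPORT_DIR}'], check=True)

# 2. convert the SavedModel for TensorFlow.js
subprocess.run([
    'tensorflowjs_converter',
    '--input_format=tf_saved_model',
    os.path.join(EXPORT_DIR, 'saved_model'),
    TFJS_DIR], check=True)

# 3. export a TFLite-friendly graph, then run the TFLite converter
subprocess.run([
    'python', 'export_tflite_graph_tf2.py',
    f'--pipeline_config_path={MODEL_DIR}/pipeline.config',
    f'--trained_checkpoint_dir={MODEL_DIR}',
    f'--output_directory={TFLITE_DIR}'], check=True)

converter = tf.lite.TFLiteConverter.from_saved_model(os.path.join(TFLITE_DIR, 'saved_model'))
with open(os.path.join(TFLITE_DIR, 'detect.tflite'), 'wb') as f:
    f.write(converter.convert())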