Capture Videostream

This tool has two basic modes. The first lets you generate training data for manual labeling in CVAT; here is the relevant fork.

The other mode captures the video stream for a planned duration.

Requirements

I recommend using a virtual env or a conda/miniconda environment for executing these scripts. Please install all necessary packages with pip3 install -r requirements.txt
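
For example, a minimal setup with a virtual environment could look like this (the environment name .venv is just an example):

    python3 -m venv .venv
    source .venv/bin/activate
    pip3 install -r requirements.txt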

If you want to install the dependencies manually, you need the following packages (see the example command after the list):

  • opencv
  • python-decouple
  • numpy
  • python-dateutil
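
For example (assuming the standard opencv-python package from PyPI):

    pip3 install opencv-python python-decouple numpy python-dateutil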

For camera access and the automatic upload to Azure File Storage you need to provide credentials, usernames, and tokens as in this example file. Run cp env.example .env and place your secrets into .env. If you don't want to use the Azure upload function, the sas_token and filepath are not necessary; capturing and storing videos locally works without them.
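
As a sketch, reading these values with python-decouple could look like this; the variable names other than sas_token and filepath are assumptions, so check env.example for the real keys:

    from decouple import config

    # hypothetical names -- see env.example for the actual keys
    camera_url = config("CAMERA_URL")             # e.g. stream URL including username/password
    sas_token = config("SAS_TOKEN", default="")   # only needed for the Azure upload
    filepath = config("FILEPATH", default="")     # target path in the Azure file share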

Quick Start: Normal Video Capturing

Just run

python3 main.py

and a video will be captured with a live view. To finish capturing, just press q on your keyboard.
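
Under the hood this is essentially a standard OpenCV capture loop; the sketch below is only illustrative (camera index, codec, and output file name are assumptions, not the script's actual values):

    import cv2

    cap = cv2.VideoCapture(0)                 # default webcam; the script may use a configured stream URL
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("could not read from camera")

    height, width = frame.shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")  # codec is an assumption
    writer = cv2.VideoWriter("capture.mp4", fourcc, 20.0, (width, height))

    while ok:
        writer.write(frame)                   # store the frame in the video file
        cv2.imshow("live view", frame)        # live view window
        if cv2.waitKey(1) & 0xFF == ord("q"): # press q to stop capturing
            break
        ok, frame = cap.read()

    cap.release()
    writer.release()
    cv2.destroyAllWindows()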

Generating Video Training data

Run

python main.py -r -rd 2 -vd 1

This runs the script for 2 minutes and creates a new video file every minute.
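
The flags map to a total runtime and a per-file video duration. A hypothetical argparse definition covering all options mentioned in this README (long option names, defaults, and help texts are assumptions):

    import argparse

    parser = argparse.ArgumentParser(description="Capture a video stream")
    parser.add_argument("-r", "--record", action="store_true",
                        help="capture for a planned duration instead of until q is pressed")
    parser.add_argument("-rd", "--runtime-duration", type=int, default=2,
                        help="total runtime in minutes")
    parser.add_argument("-vd", "--video-duration", type=int, default=1,
                        help="length of each video file in minutes")
    parser.add_argument("-i", "--images", action="store_true",
                        help="save one image from the stream every minute")
    parser.add_argument("-u", "--upload", action="store_true",
                        help="upload the finished video to Azure File Storage")
    args = parser.parse_args()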

Generate Random Images during runtime

If you just want some random images during the script's runtime, run

python3 main.py -i

This will save one image from the stream every minute. This is perfect for generating a large amount of training data for labeling.
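
Conceptually this amounts to writing one frame to disk per minute; a sketch (interval handling and file naming are assumptions):

    import time
    from datetime import datetime

    import cv2

    cap = cv2.VideoCapture(0)                 # or the configured stream URL
    last_saved = 0.0

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        now = time.time()
        if now - last_saved >= 60:            # one image per minute
            name = datetime.now().strftime("frame_%Y%m%d_%H%M%S.jpg")
            cv2.imwrite(name, frame)          # save the current frame as a training image
            last_saved = now

    cap.release()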

Upload the files directly to Azure

Run python3 main.py -u to upload the video file directly to Azure File Storage after capturing has finished.
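
A minimal upload sketch using the azure-storage-file-share SDK; this is illustrative and may not match the script's actual upload code (the account URL, share name, and .env key names are assumptions):

    from decouple import config
    from azure.storage.fileshare import ShareFileClient

    # hypothetical .env keys -- see env.example for the real ones
    file_client = ShareFileClient(
        account_url=config("ACCOUNT_URL"),    # e.g. https://<account>.file.core.windows.net
        share_name=config("SHARE_NAME"),
        file_path=config("FILEPATH"),         # target path inside the file share
        credential=config("SAS_TOKEN"),
    )

    with open("capture.mp4", "rb") as source:
        file_client.upload_file(source)       # upload the finished video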