# BeaverDam

Video annotation tool for deep learning training labels.

This tool is for drawing object bounding boxes in videos. It also includes support for Amazon Mechanical Turk. See the paper.
With a small amount of changes, you can also:

- Draw bounding boxes in images
- Add additional attributes to bounding boxes
- Use a custom keyframe scheduler instead of user-scheduled keyframes
This tool currently does not support semantic segmentation.
## Setup

- Clone this repository.
- Make sure Python 3 is installed: `brew install python3` (Mac) or `sudo apt-get install python3` (Ubuntu).
- Make sure virtualenv is installed: `pip3 install virtualenv` or maybe `sudo pip3 install virtualenv`.
- Make the Python virtualenv for this project:
- Download sample data:
When running any `./manage.py` commands, use `source venv/bin/activate` to enter venv first.
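The venv steps above can be sketched as follows. This uses the stdlib `venv` module; `virtualenv -p python3 venv` works the same way, and the `venv/` path matches the activate command above:

```shell
# Create the project virtualenv at venv/ (the path the activate
# command above expects); `virtualenv -p python3 venv` is equivalent.
python3 -m venv venv

# Enter the venv before running any ./manage.py commands:
source venv/bin/activate

# Leave the venv when done:
deactivate
```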
See `/deployment` for tips on using BeaverDam for production.
### If using mturk

Replace the credentials below with your own:

```bash
export AWS_ID="AKIAAAAYOURIDHERE"
export AWS_KEY="YOURmturkKEYhere5DyUrkm/81SRSMG+5174"
```
When ready for real turkers, edit
It is recommended to use IAM keys with only mturk permissions instead of your root key.
## Running the server

Then navigate to `localhost:5000` in your browser.
Need to run on a custom port?

```bash
env PORT=1234 scripts/serve
```
For actual production deployment, we recommend using standard Django deployment procedures. Sample scripts using uWSGI & nginx are provided in `/deployment`. Remember to set
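The samples in `/deployment` are authoritative; purely as an illustration, a uWSGI app config for a Django project like this typically contains entries along these lines (the project path, module name, and socket below are assumptions, not taken from this repo):

```ini
; illustrative uwsgi.ini -- paths and module name are assumptions
[uwsgi]
chdir     = /srv/beaverdam              ; project root (assumption)
home      = /srv/beaverdam/venv         ; the virtualenv created during setup
module    = beaverdam.wsgi:application  ; Django WSGI entry point (assumption)
master    = true
processes = 4
socket    = /tmp/beaverdam.sock         ; nginx proxies requests to this socket
vacuum    = true                        ; clean up the socket on exit
```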
Login is required to authenticate any changes. Turkers do not require accounts and are authenticated by BeaverDam via Mechanical Turk.
To make a superuser account, run `./manage.py createsuperuser` inside venv.
If you are using sample data, log in with username `test` and the password provided with the sample data.
Additional non-turker worker accounts can be created via `/admin`.
## Videos

To add videos, one must upload the video to a CDN (or use `/annotator/static/videos` to serve on the same server), then create a Django Video object that contains the url (`filename`) to the video file.

To add Video objects via the web UI, navigate to `/admin` and create Video objects.
To add Video objects via the shell, run `./manage.py shell`, create `annotator.Video` objects, and call `save()` on them. Helper methods exist to create a large number of Video objects at once; see
Video objects can either use H.264-encoded video (see `scripts/convert-to-h264`) or a list of frames provided in the attribute `image_list` in JSON format (e.g. `video.image_list = '["20170728_085435.jpg"]'`). By using single-frame videos, BeaverDam can be used for image annotation.
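Since `image_list` is plain JSON text, building it with Python's `json` module avoids quoting mistakes. A small sketch (the frame filenames are made-up examples):

```python
import json

# Hypothetical frame filenames (could also be full CDN urls).
frames = ["20170728_085435.jpg", "20170728_085436.jpg"]

# image_list must be a JSON-encoded array of frames:
image_list = json.dumps(frames)
print(image_list)  # ["20170728_085435.jpg", "20170728_085436.jpg"]

# In ./manage.py shell this string would be assigned as shown above:
#   video.image_list = image_list
#   video.save()

# A single-frame list turns the "video" into an image-annotation task:
single_frame = json.dumps(frames[:1])
```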
## Annotations

Video annotations can be accessed via admin, at `/annotation/video_id`, or through the Video object's `annotation` attribute in the shell.
## Tasks

Tasks are created in the same way as Videos. The `video` attribute needs to be filled out at creation time. They can be published to mturk by calling
### Simulating mturk view in debug

To see what video pages look like in mturk preview mode, set url param
For mturk's HIT accepted mode, set url param
On macOS, you may need to pin `uWSGI==2.0.17`.
Inside venv, run
## Contributing

Pull requests and contributions are welcome. See `annotator/static/README.md` for more info on the frontend architecture.

For help setting up BeaverDam for your application/company, please contact me or open an issue.