Dockerized object detection service using YOLO, based on AlexeyAB's darknet fork and exposed as a REST API using connexion. For details, see this post.
Pull the image from Docker Hub and spin up a container:

```shell
docker run -d --rm --name yolo_service -p 8080:8080 johannestang/yolo_service:1.0-yolov3_coco
```
This will expose two endpoints: `detect`, which returns the detected classes, and `annotate`, which returns a copy of the image annotated with the detections. Use a GET request if you want to provide a URL to the image, or a POST request if you want to upload an image file.
The service provides a user interface at `localhost:8080/ui` where the endpoints can be tested and the details of the input parameters are listed.
Python examples showing how to use the API are provided in the `examples` folder.
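As a rough illustration of how such a client might look, here is a standard-library-only sketch for the GET variant. The `url` and `threshold` parameter names and the JSON response format are assumptions, not confirmed by this README; consult the UI at `localhost:8080/ui` for the actual input parameters.

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:8080"  # default port mapping from the docker run command above


def build_detect_url(image_url, threshold=0.25, base=BASE_URL):
    # Assemble the GET URL for the detect endpoint. The 'url' and 'threshold'
    # query parameters are assumptions; check localhost:8080/ui for the schema.
    query = urllib.parse.urlencode({"url": image_url, "threshold": threshold})
    return f"{base}/detect?{query}"


def detect_from_url(image_url, threshold=0.25):
    # GET request: the service fetches the image from the given URL itself
    # and (assumed) responds with JSON describing the detected classes.
    with urllib.request.urlopen(build_detect_url(image_url, threshold)) as resp:
        return json.load(resp)
```

With the container from above running, `detect_from_url("http://example.com/dog.jpg")` would return the detections for that image; the POST/file-upload variant requires a multipart request, for which the `requests` library is more convenient.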
You can build the images yourself using the `build-local.sh` script or pull them from Docker Hub. They come in multiple variants based on different models/datasets and different configurations of the `darknet` library.
The different models are:

- YOLOv3 trained on the COCO dataset, covering 80 classes. Tag: `yolov3_coco`.
- YOLOv3 trained on the Open Images dataset, covering 601 classes. Tag: `yolov3_openimages`.
- YOLO9000, covering more than 9000 classes. Tag: `yolo9000`.
- YOLOv4 trained on the COCO dataset, covering 80 classes. Tag: `yolov4_coco`.
The different `darknet` configurations:

- The base configuration, set up to run on a CPU. Tag: `1.0`.
- Compiled with CUDA 10.0 and cuDNN in order to utilize a GPU. Tag: `1.0_cuda10.0`.
- Compiled with CUDA 10.0 and cuDNN with Tensor Cores enabled, in order to utilize a GPU with Tensor Cores. Tag: `1.0_cuda10.0_tc`.
When using the CUDA images, make sure to use Docker version 19.03 (or newer) and have the NVIDIA Container Toolkit installed. The container can then be started by running, e.g.:

```shell
docker run -d --rm --name yolo_service -p 8080:8080 --gpus all johannestang/yolo_service:1.0_cuda10.0-yolov3_coco