# YOLO Object Detection Service
Pull the image from Docker Hub and spin up a container:
    docker run -d --rm --name yolo_service -p 8080:8080 johannestang/yolo_service:1.0-yolov3_coco
This exposes a single endpoint, `detect`, which accepts both GET and POST requests: GET takes the URL of an image to fetch and analyze, while POST lets you upload an image file for detection.
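As a sketch, the two request styles can be exercised from Python. Note that the parameter names `image_url` and `image_file` below are assumptions, not confirmed by this document; the UI at localhost:8080/ui lists the actual input parameters.

```python
# Hypothetical client sketch for the detect endpoint. The parameter
# names "image_url" (GET) and "image_file" (POST) are assumptions --
# consult localhost:8080/ui for the service's actual inputs.
from urllib.parse import urlencode

BASE = "http://localhost:8080"

def detect_get_url(image_url, **extra):
    """Build the GET URL that asks the service to fetch and detect an image."""
    params = {"image_url": image_url, **extra}
    return f"{BASE}/detect?{urlencode(params)}"

# GET: point the service at an image URL, e.g. with the requests library:
#   requests.get(detect_get_url("http://example.com/dog.jpg")).json()
# POST: upload a local file instead:
#   requests.post(f"{BASE}/detect", files={"image_file": open("dog.jpg", "rb")})
```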
The service provides a user interface at localhost:8080/ui where the endpoint can be tested and the details of the input parameters are listed.
You can build the images yourself using the build-local.sh script or pull them from Docker Hub. They come in nine variants, based on three different models/data sets combined with three different build configurations.
The different models are:
- YOLOv3 trained on the COCO dataset covering 80 classes listed here. Tag:
- YOLOv3 trained on the Open Images dataset covering 601 classes listed here. Tag:
- YOLO9000 covering more than 9000 classes listed here. Tag:
The different configurations are:
- The base configuration, set up to run on a CPU. Tag:
- Compiled using CUDA 10.0 and cuDNN in order to utilize a GPU. Tag:
- Compiled using CUDA 10.0 and cuDNN, with Tensor Cores enabled, in order to utilize a GPU with Tensor Cores. Tag:
When using the CUDA images, make sure you are running Docker version 19.03 (or newer) and have the NVIDIA Container Toolkit installed. The container can then be started by running, e.g.:

    docker run -d --rm --name yolo_service -p 8080:8080 --gpus all johannestang/yolo_service:1.0_cuda10.0-yolov3_coco
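The CPU and GPU run commands differ only in the `--gpus all` flag and the image tag. A small helper sketch can make that choice explicit; it covers only the two COCO tags that appear in this document (the other variants on Docker Hub follow the same naming pattern).

```python
# Sketch: compose the docker run command from this README for the COCO
# model, switching between the CPU and CUDA variants. Only the two tags
# shown in this document are covered.
def coco_image(use_gpu: bool = False) -> str:
    """Return the Docker image reference for the COCO model."""
    tag = "1.0_cuda10.0-yolov3_coco" if use_gpu else "1.0-yolov3_coco"
    return f"johannestang/yolo_service:{tag}"

def run_command(use_gpu: bool = False) -> str:
    """Compose the docker run command, adding --gpus all for CUDA images."""
    gpu_flag = "--gpus all " if use_gpu else ""
    return (f"docker run -d --rm --name yolo_service -p 8080:8080 "
            f"{gpu_flag}{coco_image(use_gpu)}")
```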