YOLOv3-v4 Darknet GPU Inference API

This is a repository for an object detection inference API using the YOLOv3 Darknet framework.

The repository also supports state-of-the-art YOLOv4 models.

This repo is based on AlexeyAB's darknet repository.

The inference REST API runs on a GPU and is supported only on Linux operating systems.

Models trained with our YOLOv3 and YOLOv4 training automation repositories can be deployed through this API. Several object detection models can be loaded and used at the same time.


Prerequisites

  • Ubuntu 18.04
  • NVIDIA Drivers (410.x or higher)
  • Docker CE latest stable release
  • NVIDIA Docker 2

Check for prerequisites

To check if you have docker-ce installed:

docker --version

To check if you have nvidia-docker installed:

nvidia-docker --version

To check your NVIDIA drivers version, open your terminal and run:

nvidia-smi


Install prerequisites

Use the following command to install Docker on Ubuntu:

chmod +x install_prerequisites.sh && source install_prerequisites.sh

Install the NVIDIA drivers (410.x or higher) and NVIDIA Docker for GPU support by following the official docs.

Build The Docker Image

To build the project, run the following command from the project's root directory:

sudo docker build -t yolov4_inference_api_gpu -f ./docker/dockerfile .

Behind a proxy

sudo docker build --build-arg http_proxy='' --build-arg https_proxy='' -t yolov4_inference_api_gpu -f ./docker/dockerfile .

Set your proxy address between the quotes of each build argument.

Run The Docker Container

To run the API, go to the API's directory and run the following:

Using Linux-based Docker:

sudo NV_GPU=0 nvidia-docker run -itv $(pwd)/models:/models -p <docker_host_port>:1234 yolov4_inference_api_gpu

The <docker_host_port> can be any free port of your choice.

The API will start automatically, and the service will listen for HTTP requests on the chosen port.

NV_GPU defines which GPU the API runs on. If you want the API to run on multiple GPUs, enter the GPU numbers separated by commas (NV_GPU=0,1 for example).
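
Once the container is up, a quick way to verify that the service is reachable is to call the /models endpoint from Python. This is a minimal sketch, assuming the container was started with <docker_host_port> set to 4343 on the local machine; adjust the port to match your setup:

    import requests

    # 4343 is an example value for <docker_host_port>; use the port you mapped
    BASE_URL = "http://localhost:4343"

    # GET /models lists all available models
    response = requests.get(BASE_URL + "/models")
    response.raise_for_status()
    print(response.json())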

API Endpoints

To see all available endpoints, open your favorite browser and navigate to:

http://<machine_IP>:<docker_host_port>/docs

The 'predict_batch' endpoint is not shown on Swagger, since a list of files as input is not yet supported there.

P.S: If you are using the custom endpoints /load, /detect, and /get_labels, always call the /load endpoint first, and then call /detect or /get_labels.
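
As an illustration of that order, the sketch below calls /load and then /detect from Python. /load is a plain GET, but the exact request body for /detect is defined by the Swagger page at /docs, so the model field and the multipart image field used here are assumptions to verify against your running instance:

    import requests

    BASE_URL = "http://localhost:4343"  # example host and port; adjust to your setup

    # Step 1: call /load first; it loads all available models
    loaded = requests.get(BASE_URL + "/load")
    loaded.raise_for_status()
    print(loaded.json())  # every model with its hashed value

    # Step 2: /detect runs inference; the payload below is an assumed shape
    # (model name + multipart image upload) -- verify the real schema at /docs
    with open("test.jpg", "rb") as image_file:
        detection = requests.post(
            BASE_URL + "/detect",
            data={"model": "my_model"},   # hypothetical field name
            files={"image": image_file},  # hypothetical field name
        )
    print(detection.json())  # bounding boxes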

Endpoints summary

/load (GET)

Loads all available models and returns every model with its hashed value. Loaded models are stored and aren't loaded again.


/detect (POST)

Performs inference on the specified model and image, and returns the bounding boxes.


/get_labels (POST)

Returns all of the specified model's labels with their hashed values.


/models/{model_name}/predict_image (POST)

Performs inference on the specified model and image, draws the bounding boxes on the image, and returns the annotated image as the response.


/models (GET)

Lists all available models

/models/{model_name}/load (GET)

Loads the specified model. Loaded models are stored and aren't loaded again

/models/{model_name}/predict (POST)

Performs inference on the specified model and image, and returns the bounding boxes.

/models/{model_name}/labels (GET)

Returns all of the specified model's labels.

/models/{model_name}/config (GET)

Returns the specified model's configuration

/models/{model_name}/predict_batch (POST)

Performs inference on the specified model and a list of images, and returns the bounding boxes.

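As a usage illustration for the per-model endpoints, this sketch posts an image to /models/{model_name}/predict_image and saves the annotated result. The model name, the image path, and the multipart field name are assumptions made for the example; confirm the exact upload schema on the /docs page:

    import requests

    BASE_URL = "http://localhost:4343"  # example host and port
    MODEL_NAME = "my_model"             # hypothetical model folder name

    # POST an image; the annotated image comes back as the raw response body.
    # The multipart field name "image" is an assumption -- check /docs.
    with open("test.jpg", "rb") as image_file:
        response = requests.post(
            BASE_URL + "/models/" + MODEL_NAME + "/predict_image",
            files={"image": image_file},
        )
    response.raise_for_status()

    # Save the returned image with the bounding boxes drawn on it
    with open("test_annotated.jpg", "wb") as out_file:
        out_file.write(response.content)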

Model structure

The folder "models" contains subfolders of all the models to be loaded. Inside each subfolder there should be a:

  • Cfg file (yolo-obj.cfg): contains the configuration of the model

  • Data file (obj.data): contains the number of classes and the path to the names file

    classes=<number_of_classes>
    names=/models/<model_name>/obj.names
    
  • Weights file (yolo-obj.weights)

  • Names file (obj.names): contains the names of the classes

  • Config.json (a JSON file containing information about the model)

      {
        "inference_engine_name": "yolov3_darknet_detection",
        "detection_threshold": 0.6,
        "nms_threshold": 0.45,
        "hier_threshold": 0.5,
        "framework": "yolo",
        "type": "detection",
        "network": "network_name"
      }

    P.S

    • You can change the detection_threshold, nms_threshold, and hier_threshold values while the API is running
    • The API only returns bounding boxes whose detection score is higher than the detection_threshold value; raising detection_threshold keeps only the more confident predictions
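
To make the expected layout concrete, here is a small hypothetical Python check that verifies a model subfolder contains the files listed above (the folder name my_model is just an example):

    from pathlib import Path

    # "my_model" is an example name for a subfolder of "models"
    model_dir = Path("models/my_model")

    # The files this README expects inside every model subfolder
    required_files = [
        "yolo-obj.cfg",      # model configuration
        "yolo-obj.weights",  # trained weights
        "obj.data",          # number of classes + names file path
        "obj.names",         # class names
        "config.json",       # inference settings (thresholds, engine, ...)
    ]

    for name in required_files:
        status = "found" if (model_dir / name).exists() else "MISSING"
        print(model_dir / name, status)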

Benchmarking

                     Windows                  Ubuntu
Network\Hardware     Intel Xeon CPU 2.3 GHz   Intel Xeon CPU 2.3 GHz   Intel Core i9-7900 3.3 GHz   GeForce GTX 1080
pascalvoc_dataset    0.885 seconds/image      0.793 seconds/image      0.295 seconds/image          0.0592 seconds/image

Acknowledgment

inmind.ai

robotron.de

Antoine Charbel, inmind.ai, Beirut, Lebanon

Charbel El Achkar, Beirut, Lebanon

Hadi Koubeissy, Beirut, Lebanon
