ONNX Runtime Inference

ONNX Runtime inference for yolort; both CPU and GPU are supported.

Dependencies

  • ONNX Runtime 1.7+
  • OpenCV
  • CUDA [Optional]

We do not impose overly strict restrictions on the versions of these dependencies.

Features

The ONNX model exported by yolort differs from other pipelines in the following three ways.

  • We embed the pre-processing (mainly composed of letterbox) into the graph, and the exported model expects a Tensor[C, H, W] in RGB channel order, rescaled to float32 in the range [0, 1].
  • We embed the post-processing into the model graph with torchvision.ops.batched_nms, so the outputs of the exported model are straightforward boxes, labels and scores fields for the image.
  • We adopt the dynamic shape mechanism to export the ONNX models.
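The letterbox step mentioned above can be sketched as follows. This is a minimal NumPy approximation for illustration only (the actual pre-processing is exported from yolort's own transform); the nearest-neighbor resize and the fill value of 114 are simplifying assumptions:

```python
import numpy as np

def letterbox(image, new_size=640, fill_value=114):
    """Resize keeping aspect ratio, then pad to a square canvas.

    `image` is an HWC uint8 array; the short side is padded with
    `fill_value` so both output dimensions equal `new_size`.
    """
    h, w = image.shape[:2]
    scale = new_size / max(h, w)
    nh, nw = round(h * scale), round(w * scale)
    # Nearest-neighbor resize via index sampling (avoids an OpenCV dependency)
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = image[rows][:, cols]
    # Pad to a square canvas, centering the resized image
    canvas = np.full((new_size, new_size, image.shape[2]), fill_value, dtype=image.dtype)
    top, left = (new_size - nh) // 2, (new_size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas

# The exported graph expects CHW, RGB, float32 in [0, 1]:
img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
tensor = letterbox(img).astype(np.float32).transpose(2, 0, 1) / 255.0
print(tensor.shape)  # (3, 640, 640)
```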

Usage

  1. Export your custom model to ONNX.

    python tools/export_model.py --checkpoint_path {path/to/your/best.pt} --size_divisible 32/64

    You will then find that an ONNX model ("best.onnx") has been generated in the directory containing "best.pt". Set size_divisible according to your model: 32 for P5 models ("yolov5s.pt" for instance) and 64 for P6 models ("yolov5s6.pt" for instance).

  2. [Optional] Quick test with the ONNX Runtime Python interface.

    from yolort.runtime import PredictorORT
    
    # Load the serialized ONNX model
    engine_path = "yolov5n6.onnx"
    device = "cpu"
    y_runtime = PredictorORT(engine_path, device=device)
    
    # Perform inference on an image file
    predictions = y_runtime.predict("bus.jpg")
  3. Compile the source code.

    cd deployment/onnxruntime
    mkdir build && cd build
    cmake .. -DONNXRUNTIME_DIR={path/to/your/ONNXRUNTIME/install/directory}
    cmake --build .
  4. Now, you can infer your own images.

    ./yolort_onnx --image ../../../test/assets/zidane.jpg
                  --model_path ../../../notebooks/best.onnx
                  --class_names ../../../notebooks/assets/coco.names
                  [--gpu]  # GPU switch, optional and disabled by default
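The --class_names file maps the integer labels output by the model to human-readable names, one class per line. A small sketch of that lookup in Python; the sample names and label indices below are made up for illustration:

```python
from io import StringIO

def load_class_names(fp):
    """Read one class name per line, skipping blank lines (coco.names format)."""
    return [line.strip() for line in fp if line.strip()]

# A tiny stand-in for coco.names; a real run would use open("coco.names").
names = load_class_names(StringIO("person\nbicycle\ncar\n"))

# Hypothetical `labels` output from the model; each index refers into `names`.
labels = [0, 2, 2]
print([names[i] for i in labels])  # ['person', 'car', 'car']
```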