Option 1: Install everything needed for PyTorch inference, ONNX export, and ONNX inference:
make install
Option 2: Install only for ONNX inference:
make install_for_onnx
Run inference with the pre-trained PyTorch model:
./infer_pytorch.py
./infer_pytorch.py --class-names person,shoes
./infer_pytorch.py --image-file data/images/dogs.jpg \
--class-names dog,eye,nose,ear,tail \
--iou-threshold 0.5 \
--score-threshold 0.09
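The --score-threshold and --iou-threshold flags control the standard two-stage filtering of raw detections: low-confidence boxes are dropped first, then overlapping boxes for the same object are suppressed by greedy non-maximum suppression (NMS). A minimal NumPy sketch of that filtering, for illustration only (the scripts' actual implementation may differ):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def filter_detections(boxes, scores, score_threshold=0.09, iou_threshold=0.5):
    # Step 1: drop boxes below the confidence threshold.
    keep_mask = scores >= score_threshold
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    # Step 2: greedy NMS, highest score first.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_threshold]
    return boxes[keep], scores[keep]
```

With the defaults above, a box scoring 0.05 is removed by the score threshold, and a near-duplicate box with IoU above 0.5 against a higher-scoring one is removed by NMS.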
Export the YOLO-World and NMS models to ONNX format:
./export_onnx.py
./export_nms_onnx.py
Run inference using the exported ONNX model:
./infer_onnx.py
./infer_onnx.py --class-names person,shoes
./infer_onnx.py --image-file data/images/dogs.jpg \
--class-names dog,eye,nose,ear,tail \
--iou-threshold 0.5 \
--score-threshold 0.09
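Before an image reaches the ONNX model, it has to be turned into the fixed-size float tensor the network expects. A sketch of typical YOLO-style preprocessing (letterbox resize, [0, 1] scaling, HWC to NCHW), assuming a 640x640 input; this is an illustration, not the code in infer_onnx.py:

```python
import numpy as np

def preprocess(image, size=640):
    """Convert an HxWx3 uint8 image into an NCHW float32 tensor."""
    h, w = image.shape[:2]
    scale = size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize in pure NumPy (real code would use OpenCV/PIL).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = image[rows][:, cols]
    # Letterbox-pad to a square canvas (114 is a common YOLO padding value).
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)
    canvas[:new_h, :new_w] = resized
    x = canvas.astype(np.float32) / 255.0  # scale pixels to [0, 1]
    x = x.transpose(2, 0, 1)[None]         # HWC -> NCHW, add batch dim
    return x
```

The resulting tensor can be fed to an ONNX Runtime session; box coordinates in the output must then be mapped back through the same scale and padding.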
YOLO-World is an open-vocabulary object detection model published at CVPR 2024. For more information, see the Paper and Code.
License: GPLv3