
YoloV5 inference support of ONNX models #272

Merged (4 commits) on Nov 4, 2020

Conversation

@thhart (Contributor) commented Nov 4, 2020

Description

Adds support for YoloV5 object detection models via ONNX.

Changes

Added `YoloV5Translator` with the following features: an input squeezer, an output parser, and non-maximum suppression (NMS) of overlapping boxes.
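The non-maximum suppression step can be sketched as a minimal standalone version (illustrative only; the `NmsSketch`, `Box`, and `nms` names here are hypothetical and the real logic lives inside `YoloV5Translator`):

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical standalone sketch of the non-max suppression step. */
public class NmsSketch {

    /** Axis-aligned box: x, y are the top-left corner. */
    public static class Box {
        public final double x, y, w, h, score;
        public Box(double x, double y, double w, double h, double score) {
            this.x = x; this.y = y; this.w = w; this.h = h; this.score = score;
        }
    }

    /** Intersection-over-union of two boxes. */
    static double iou(Box a, Box b) {
        double ix = Math.max(0, Math.min(a.x + a.w, b.x + b.w) - Math.max(a.x, b.x));
        double iy = Math.max(0, Math.min(a.y + a.h, b.y + b.h) - Math.max(a.y, b.y));
        double inter = ix * iy;
        return inter / (a.w * a.h + b.w * b.h - inter);
    }

    /** Keep only the highest-scoring box of each overlapping cluster. */
    public static List<Box> nms(List<Box> boxes, double iouThreshold) {
        List<Box> sorted = new ArrayList<>(boxes);
        sorted.sort((p, q) -> Double.compare(q.score, p.score)); // best first
        List<Box> kept = new ArrayList<>();
        for (Box candidate : sorted) {
            boolean suppressed = false;
            for (Box k : kept) {
                if (iou(candidate, k) > iouThreshold) {
                    suppressed = true; // overlaps a better box too much
                    break;
                }
            }
            if (!suppressed) {
                kept.add(candidate);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Box> boxes = new ArrayList<>();
        boxes.add(new Box(0, 0, 10, 10, 0.9));   // best box
        boxes.add(new Box(1, 1, 10, 10, 0.8));   // heavy overlap, suppressed
        boxes.add(new Box(50, 50, 10, 10, 0.7)); // far away, kept
        System.out.println(nms(boxes, 0.5).size()); // prints 2
    }
}
```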

How-to

YoloV5 supports at least two output formats; to use models here, it is important to set `model.model[-1].export = False` in export.py. An ONNX export can then be done as described in ultralytics/yolov5#251.

Example

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

import ai.djl.Device;
import ai.djl.inference.Predictor;
import ai.djl.modality.cv.BufferedImageFactory;
import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.output.DetectedObjects;
import ai.djl.modality.cv.translator.YoloV5Translator;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ModelZoo;
import ai.djl.repository.zoo.ZooModel;

final YoloV5Translator translator =
        new YoloV5Translator(YoloV5Translator.builder().optSynsetArtifactName("synset.txt"));
Criteria<Image, DetectedObjects> criteria = Criteria.builder()
        .setTypes(Image.class, DetectedObjects.class) // input and output data types
        .optDevice(Device.cpu())
        .optTranslator(translator)
        .optModelUrls("file:///path-to-model/") // search for models in the specified path
        .optModelName("yolov5-exported-model.onnx")
        .optEngine("OnnxRuntime")
        .build();
final ZooModel<Image, DetectedObjects> model = ModelZoo.loadModel(criteria);
final Predictor<Image, DetectedObjects> predictor = model.newPredictor(translator);
final File input = new File("a-file-to-analyze.jpg");
final BufferedImageFactory factory = new BufferedImageFactory();
final BufferedImage image = ImageIO.read(input); // should be scaled or padded to the model dimensions
DetectedObjects detection = predictor.predict(factory.fromImage(image));
```
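Since the input image should be scaled or padded to the model dimensions, the preprocessing can be sketched as a letterbox with plain `java.awt` (a hypothetical helper, not part of this PR; 640 is YoloV5's default input size):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

/** Hypothetical letterbox helper: scale preserving aspect ratio, then pad. */
public class Letterbox {
    public static BufferedImage letterbox(BufferedImage src, int size) {
        // Scale factor that fits the image inside a size x size square.
        double scale = Math.min((double) size / src.getWidth(), (double) size / src.getHeight());
        int w = (int) Math.round(src.getWidth() * scale);
        int h = (int) Math.round(src.getHeight() * scale);
        BufferedImage out = new BufferedImage(size, size, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.setColor(Color.GRAY);              // pad the borders with a neutral color
        g.fillRect(0, 0, size, size);
        g.drawImage(src, (size - w) / 2, (size - h) / 2, w, h, null); // centered
        g.dispose();
        return out;
    }
}
```

The resulting `BufferedImage` can then be passed through `BufferedImageFactory` to the predictor as in the example above.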

@stu1130 (Contributor) left a comment

Thanks for your contribution!
Could you run `./gradlew fJ build`?

@thhart thhart requested a review from frankfliu November 4, 2020 20:17
@frankfliu (Contributor) left a comment

This looks good to me.

Would you mind importing this model into the ONNX Runtime model zoo in a separate PR?

@thhart (Contributor, Author) commented Nov 4, 2020

> Would you mind importing this model into the ONNX Runtime model zoo in a separate PR?

Please note I use DJL with my own models; however, I understand it would be good to have a sample model available, also for testing. If I find some time, I will submit a PR with a standard model.

@frankfliu frankfliu merged commit 58c4b5c into deepjavalibrary:master Nov 4, 2020
@thhart thhart deleted the yolov5 branch November 4, 2020 23:51
thhart added a commit to thhart/djl that referenced this pull request Nov 5, 2020: YoloV5 inference support of ONNX models (deepjavalibrary#272)
thhart added a commit to thhart/djl that referenced this pull request Nov 5, 2020: Revert "YoloV5 inference support of ONNX models (deepjavalibrary#272)"
@hongyaohongyao commented: causes a memory leak

Lokiiiiii pushed a commit to Lokiiiiii/djl that referenced this pull request Oct 10, 2023

4 participants