
CoreML export. How to parse MLMultiArray #6123

Closed
obohrer opened this issue Dec 28, 2021 · 4 comments · Fixed by #6195
Labels
question Further information is requested

Comments

@obohrer

obohrer commented Dec 28, 2021

Search before asking

Question

Hi, I'm relatively new to this domain and currently experimenting with running CoreML object detection on iOS.
I've managed to train a custom model using YOLOv5 and tested the weights using a webcam as source. It does work 🚀.
However, exporting the model to CoreML using export.py has been a struggle.
The exported model has an input dimension of 480x640, so far so good.
However, the outputs are named var_875 and var_860.
During detection CoreML returns 2 arrays: [1 x 3 x 80 x 60 x 6] and [1 x 3 x 40 x 30 x 6].
It seems that 80 x 60 and 40 x 30 have the same aspect ratio as the input (the input dimensions divided by 8 and 16 respectively).
Is there documentation on how to parse such outputs?

So far I've found many functions to parse those MLMultiArray values, but none of them produce correct scores or bounding boxes.
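In case it helps future readers, here is a minimal indexing sketch for reading one element out of the flat buffer behind such an array. It assumes the MLMultiArray is C-contiguous (row-major), which is worth verifying against the array's strides property in practice:

```python
def flat_index(shape, idx):
    """Row-major (C-order) offset of multi-index idx into a flat buffer.

    For a [1, 3, 80, 60, 6] output, element [batch, anchor, gy, gx, channel]
    of the MLMultiArray's dataPointer sits at this offset (assuming the
    default C-contiguous layout; check the strides property to be sure).
    """
    offset = 0
    for dim, i in zip(shape, idx):
        assert 0 <= i < dim, "index out of range"
        offset = offset * dim + i
    return offset
```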

Any pointers on what those dimensions mean and how to parse them would be greatly appreciated. I'd be happy to contribute back with code snippets or improvements to the exporter (e.g. naming the outputs correctly).
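For reference, a sketch in Python of the per-cell decode as I understand it from YOLOv5's models/yolo.py. The stride and anchor values are placeholders, not the real anchors from any particular model config, and the 6 channels are assumed to be [x, y, w, h, objectness, class]:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_cell(raw, gx, gy, stride, anchor_w, anchor_h):
    """Decode one raw [x, y, w, h, obj, cls...] vector for a single anchor
    at grid cell (gx, gy) into input-image pixel coordinates."""
    tx, ty, tw, th, tobj = raw[:5]
    cx = (sigmoid(tx) * 2.0 - 0.5 + gx) * stride             # box center x
    cy = (sigmoid(ty) * 2.0 - 0.5 + gy) * stride             # box center y
    w = (sigmoid(tw) * 2.0) ** 2 * anchor_w                  # box width
    h = (sigmoid(th) * 2.0) ** 2 * anchor_h                  # box height
    conf = sigmoid(tobj) * max(sigmoid(c) for c in raw[5:])  # obj * best class
    return cx, cy, w, h, conf
```

Candidates above a confidence threshold from both output grids would then go through NMS.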

Additional

No response

@obohrer obohrer added the question Further information is requested label Dec 28, 2021
@github-actions
Contributor

github-actions bot commented Dec 28, 2021

👋 Hello @obohrer, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python>=3.6.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@obohrer
Author

obohrer commented Dec 30, 2021

This repository has been useful: https://github.com/dbsystel/yolov5-coreml-tools
I had to tweak the coremltools dependency and skip one of the outputNames in yolov5-coreml-tools.
So far the bounding boxes look correct, matching the original predictions.
The only issue I'm seeing on iOS now is that the confidence always shows 100% (same as with models created with Create ML when there is only one class).
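If it's useful context, my understanding is that YOLOv5's reported score is objectness × class probability; with a single class the class term tends to saturate near 1.0, so a display that shows only the class probability reads ~100% regardless of box quality. A hedged sketch of that combination:

```python
import math

def detection_score(obj_logit, cls_logit):
    """YOLOv5-style combined score: sigmoid(objectness) * sigmoid(class logit).

    With one class the class factor is usually ~1.0, so showing only the
    class probability makes every detection look like 100% confidence;
    multiplying in objectness gives a more meaningful score.
    """
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    return sigmoid(obj_logit) * sigmoid(cls_logit)
```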

@glenn-jocher glenn-jocher linked a pull request Jan 5, 2022 that will close this issue
@glenn-jocher
Member

@obohrer good news 😃! Your original issue may now be fixed ✅ in PR #6195. This PR adds support for YOLOv5 CoreML inference.

!python export.py --weights yolov5s.pt --include coreml  # CoreML export
!python detect.py --weights yolov5s.mlmodel  # CoreML inference (macOS-only)
!python val.py --weights yolov5s.mlmodel  # CoreML validation (macOS-only)

model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.mlmodel')  # CoreML PyTorch Hub model

[Screenshot: CoreML inference results, Jan 4 2022]

To receive this update:

  • Git – git pull from within your yolov5/ directory or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – Force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – View the updated notebooks in Colab or Kaggle
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@tcollins590
Copy link

This repository has been useful: https://github.com/dbsystel/yolov5-coreml-tools Had to tweak the coremltools dependency and skip one of the outputNames in yolov5-coreml-tools. So far it seems that the bounding boxes are correct, same as predictions. Only issue I'm seeing on iOS now is the confidence always show 100% (same as with models created with Create ML when there is one class)

Would you be able to share your updated dependency and which outputNames you skipped? Thank you
