Trained YOLOv8 model converted to CoreML doesn't give any predictions #13794
👋 Hello @tardoe, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered. If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it. If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results. Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package:

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
Hi @tardoe, Thank you for reaching out and providing detailed information about your issue. Let's work through this together to identify the problem. Firstly, it's great to hear that the vanilla YOLOv8m model works fine after conversion to CoreML. This indicates that the conversion process itself is functioning correctly. The issue seems to be specific to your custom-trained model. Here are a few steps to help diagnose and potentially resolve the issue:
If the issue persists, please provide a minimum reproducible example of your code and dataset configuration. This will help us reproduce the bug and investigate further. You can find guidelines for creating a minimum reproducible example here. Feel free to reach out with any additional questions or updates on your progress. We're here to help!
Thanks for the advice, I'll address each one accordingly:
>>> import coremltools as ct
>>> from PIL import Image
>>> import numpy as np
>>> coreml_model = ct.models.MLModel('./runs/detect/train3/weights/best.mlpackage')
>>> image = Image.open('./dataset/test/images/test_21.jpg')
>>> image_np = np.array(image).astype(np.float32)
>>> prediction = coreml_model.predict({'image': image_np})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "./venv/lib/python3.11/site-packages/coremltools/models/model.py", line 627, in predict
return MLModel._get_predictions(self.__proxy__, verify_and_convert_input_dict, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/coremltools/models/model.py", line 669, in _get_predictions
preprocess_method(data)
File "./venv/lib/python3.11/site-packages/coremltools/models/model.py", line 615, in verify_and_convert_input_dict
self._verify_input_dict(d)
File "./venv/lib/python3.11/site-packages/coremltools/models/model.py", line 741, in _verify_input_dict
self._verify_pil_image_modes(input_dict)
File "./venv/lib/python3.11/site-packages/coremltools/models/model.py", line 752, in _verify_pil_image_modes
raise TypeError(msg.format(input_desc.name))
TypeError: Image input, 'image' must be of type PIL.Image.Image in the input dict

Trying again, by just passing in the PIL Image object, I get a valid prediction:

>>> prediction = coreml_model.predict({'image': image})
>>> print(prediction)
{'var_1140': array([[[ 9.2656, 40.438, 31.188, ..., 501, 523, 543.5],
[ 33.344, 4.4141, 4.3789, ..., 621, 618, 610],
[ 23.406, 92.625, 166.38, ..., 398, 373.5, 249],
...,
[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 0, 0, ..., 0, 0, 0]]], dtype=float32)}

Though I don't understand the output. This is on the model converted without

>>> from PIL import Image
>>> import numpy as np
>>> coreml_model = ct.models.MLModel('cpe_id/cpe_id_2/runs/detect/train3/weights/best.mlpackage')
>>> image = Image.open('/Users/tim/Development/cpe-id/cpe_id_model/cpe_id/cpe_id_2/dataset/test/images/tplink_vx220g2v_21.jpg')
>>> prediction = coreml_model.predict({'image': image})
>>> prediction
{'coordinates': array([], shape=(0, 4), dtype=float32), 'confidence': array([], shape=(0, 44), dtype=float32)}

However, trying to use the model without
❯ pip freeze | grep -E "torch=|ultralytics=|coremltools="
coremltools==7.2
torch==2.2.0
ultralytics==8.2.36
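For what it's worth, the var_1140 tensor looks like the raw detection head of a model exported without NMS: for a 43-class model at 640x640 its shape is typically (1, 47, 8400), i.e. 4 box coordinates (cx, cy, w, h) plus 43 per-class scores for each of 8400 anchor points. Below is a minimal NumPy sketch of decoding that tensor into boxes; `decode_raw_output` is a hypothetical helper and the layout is an assumption based on typical YOLOv8 exports, not a confirmed spec:

```python
import numpy as np

NUM_CLASSES = 43   # assumption: matches the custom dataset in this thread
CONF_THRES = 0.25

def decode_raw_output(raw, conf_thres=CONF_THRES):
    """Decode a (1, 4 + nc, n_anchors) raw YOLOv8 head output.

    Returns (boxes_xyxy, scores, class_ids) for anchors whose best
    class score exceeds conf_thres.
    """
    preds = raw[0].T                      # -> (n_anchors, 4 + nc)
    boxes_cxcywh = preds[:, :4]           # center-x, center-y, width, height
    class_scores = preds[:, 4:]
    class_ids = class_scores.argmax(axis=1)
    scores = class_scores.max(axis=1)
    keep = scores > conf_thres
    cx, cy, w, h = boxes_cxcywh[keep].T
    boxes_xyxy = np.stack(
        [cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1
    )
    return boxes_xyxy, scores[keep], class_ids[keep]

# Synthetic check: plant one strong detection among 8400 empty anchors.
raw = np.zeros((1, 4 + NUM_CLASSES, 8400), dtype=np.float32)
raw[0, :4, 0] = [320, 320, 100, 50]       # cx, cy, w, h
raw[0, 4 + 7, 0] = 0.9                    # class 7 score
boxes, scores, ids = decode_raw_output(raw)
print(boxes, scores, ids)  # one box [270, 295, 370, 345], score 0.9, class 7
```

This is effectively the work that `nms=True` folds into the model's built-in pipeline (plus non-maximum suppression), which is why the nms export returns named `coordinates`/`confidence` outputs instead of the raw tensor.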
Hi @tardoe, Thank you for the detailed follow-up! Let's address each point to help you resolve the issue:
If the issue persists, please provide a minimum reproducible example of your code and dataset configuration. This will help us reproduce the bug and investigate further. You can find guidelines for creating a minimum reproducible example here. Feel free to reach out with any additional questions or updates on your progress. We're here to help! 😊
Thanks for the suggestions.
userDefined {
key: "Confidence threshold"
value: "0.25"
}
userDefined {
key: "IoU threshold"
value: "0.45"
}
I believe I've already provided the minimal examples above for reproducibility.
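Those two userDefined entries are the NMS parameters baked in at export time: candidate boxes below 0.25 confidence are dropped, and when two boxes of the same class overlap above 0.45 IoU only the higher-scoring one survives. A minimal sketch of the IoU computation those thresholds act on (plain Python, not the CoreML pipeline itself; `iou` is a hypothetical helper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two boxes shifted by half their width overlap with IoU = 1/3,
# so under the 0.45 threshold both would be kept as separate detections.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```

Note that these thresholds only suppress or merge candidate boxes; they can't explain an output that is empty even at very low confidence.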
Hi @tardoe, Thank you for the detailed follow-up and for confirming the steps you've taken so far. Let's dive deeper into resolving this issue. Key Points Recap:
Next Steps:

Given that the CoreML model with
If the issue persists, please provide a more detailed minimum reproducible example, including the specific steps and code used for training, exporting, and testing the model. This will help us reproduce the issue on our end and investigate further. You can refer to our minimum reproducible example guide for more details. Thank you for your patience and cooperation. We're here to help you get this resolved! 😊
Thanks,
Hi @tardoe, Thank you for your detailed feedback and for confirming the steps you've taken. Let's address your points to help resolve the issue:
Next Steps:

To further diagnose the issue, could you please provide a minimum reproducible example of your code and dataset configuration? This will help us reproduce the bug and investigate a solution. You can find guidelines for creating a minimum reproducible example here. Additionally, please ensure you are using the latest versions of

Thank you for your cooperation and patience. We're here to help you get this resolved! 😊
>>> from PIL import Image
>>> import numpy as np
>>> coreml_model = ct.models.MLModel('cpe_id/cpe_id_2/runs/detect/train3/weights/best.mlpackage')
>>> image = Image.open('/Users/tim/Development/cpe-id/cpe_id_model/cpe_id/cpe_id_2/dataset/test/images/tplink_vx220g2v_21.jpg')
>>> prediction = coreml_model.predict({'image': image})
>>> prediction
{'coordinates': array([], shape=(0, 4), dtype=float32), 'confidence': array([], shape=(0, 44), dtype=float32)}
Hi @tardoe, Thank you for providing the code example and detailed information. Let's work through this together to resolve the issue. Steps to Diagnose and Resolve:
If the issue persists, please provide a more detailed minimum reproducible example, including the specific steps and code used for training, exporting, and testing the model. This will help us reproduce the issue on our end and investigate further. You can refer to our minimum reproducible example guide for more details. Thank you for your cooperation and patience. We're here to help you get this resolved! 😊
Search before asking
Question
I'm having some issues exporting a YOLOv8 model I trained to CoreML. Testing the model in YOLO format predicts just fine; after converting to CoreML, testing with coremltools predictions and using the Xcode preview both return no results.
I trained the custom model based on 640x640 jpg images across 43 classes, used the yolov8m model as the base, using CUDA on an RTX4090. The training was done like this:
results = model.train(data="yolov8_dataset/dataset.yaml", epochs=350, device=0)
The training ended normally after 330 epochs, the results having not improved in the last 100 epochs. This is how I'm exporting the model:
model.export(format="coreml", nms=True, half=False, imgsz=640)
using coremltools 7.2. I'm using
YOLOv8.2.35 / Python-3.11.6 torch-2.2.0 CPU (Apple M1 Max)
to do the conversion. There were no errors in the conversion process. I'm testing against the designated 640x640 test images from my original dataset.
I also tried this same process on the YOLOv8m "vanilla" model and it worked just fine; the Xcode preview showed predictions. What have I likely done wrong with the custom model?
What am I likely missing here?
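One thing worth ruling out before suspecting the weights themselves: a CoreML image input expects a PIL image that already matches the export resolution and color mode. A minimal sketch of that preprocessing (`prepare_for_coreml` is a hypothetical helper, not an Ultralytics or coremltools API; 640 assumes the imgsz used at export):

```python
from PIL import Image

EXPORT_SIZE = 640  # assumption: must match the imgsz used at export time

def prepare_for_coreml(img: Image.Image, size: int = EXPORT_SIZE) -> Image.Image:
    """Resize and convert an image to the RGB size x size input an
    exported CoreML detection model typically expects; predict()
    wants a PIL image, not a NumPy array."""
    if img.mode != "RGB":
        img = img.convert("RGB")
    if img.size != (size, size):
        img = img.resize((size, size))
    return img

# Synthetic check with a grayscale image at the wrong size.
img = prepare_for_coreml(Image.new("L", (1280, 720)))
print(img.mode, img.size)  # RGB (640, 640)
```

If the test images already are 640x640 RGB JPEGs, as described above, this step is a no-op, but it rules out a silent size or mode mismatch.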
Additional
No response