Docker & EdgeTPU Segmentation fault #1829
👋 Hello @rolouis, thank you for your interest in YOLOv8 🚀! We recommend a visit to the YOLOv8 Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered. If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it. If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Install

Pip install the ultralytics package including all requirements with pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
I can reproduce this when converting the official yolov8n.pt model:
Then fails at initialization:
@Niach thank you for the report! The documented example at github.com/ultralytics/yolov8/tree/master/models suggests that the YOLOv8 model is converted from PyTorch to TensorFlow Lite Edge TPU format with the export command. Please make sure to follow the installation guide on https://docs.ultralytics.com and the example code on https://github.com/ultralytics/yolov8/tree/master. If the issue persists, we encourage you to create a minimum reproducible example for this problem.
The link is dead :(
@rolouis thank you for bringing this to our attention. The link provided in my previous message seems to be dead. I apologize for the confusion. To load a TensorFlow Lite Edge TPU model in Ultralytics YOLOv8, please convert the PyTorch model to TensorFlow Lite Edge TPU format with the export command, and afterwards load and run the resulting TensorFlow Lite Edge TPU model.
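A minimal sketch of that export-then-run workflow with the yolo CLI, assuming an up-to-date ultralytics install; the exported filename and image path are illustrative and may differ on your machine:

```shell
# Sketch: export the PyTorch checkpoint to Edge TPU TFLite, then run it.
# Check the export log for the actual output filename.
yolo export model=yolov8n.pt format=edgetpu
yolo predict model=yolov8n_full_integer_quant_edgetpu.tflite source=image.jpg
```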
Hello, I tried to run it with your suggested change. Unfortunately the YOLO constructor has no "format" argument.
@rolouis I apologize for the confusion. It looks like I provided you with incorrect information about that argument. After reading more of the Ultralytics YOLOv8 documentation, it looks like you should use the documented export workflow instead.
Hi @glenn-jocher, I tried your approach but get errors, e.g. that only UINT8 is supported.
@rolouis sure! Here is a general overview of the steps required to convert a PyTorch model to a TensorFlow Lite Edge TPU model that can be used with the Ultralytics YOLOv8 library to infer on a Google Coral edgeTPU device:
When converting the PyTorch model to TensorFlow Lite format, you need to ensure that the model's weights are quantized to UINT8 format. This is required for use with the Google Coral edgeTPU. Once you have converted and compiled your model, you can load it into the Ultralytics YOLOv8 library by passing the path to the compiled TensorFlow Lite model file to the YOLO constructor.
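For context on the UINT8 requirement: TFLite full-integer models represent each tensor with an affine mapping, real = (q - zero_point) * scale. A minimal pure-Python sketch of that arithmetic (the scale and zero-point values used below are illustrative, not taken from any particular model):

```python
def quantize(x, scale, zero_point):
    """Map a real value to a uint8 code via TFLite's affine scheme."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp to the uint8 range

def dequantize(q, scale, zero_point):
    """Map a uint8 code back to an approximate real value."""
    return (q - zero_point) * scale
```

The actual scale and zero_point for each tensor are stored in the .tflite file and exposed by the interpreter's get_input_details() / get_output_details().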
No, please provide a complete workflow with code for using the standard model running on the google coral edge TPU |
@rolouis sure, here is a high-level overview of the process for running a standard YOLOv8 model on a Google Coral edge TPU:
Regarding the Ultralytics YOLOv8 library, we need to pass the path to the TensorFlow Lite model file that we created earlier to the YOLO object's constructor. This will allow the YOLO object to load and run the TensorFlow Lite model on the Google Coral edge TPU device. We can then run inference on our images as usual. Please let me know if you have any further questions or if you would like to see some code examples.
Can you show some code examples?
@rolouis certainly! To convert a PyTorch model to TensorFlow Lite format and then compile it for use with the Google Coral edge TPU, we need to use a combination of PyTorch, TensorFlow, and the Google Coral edge TPU tools. First, we convert the PyTorch model to TensorFlow Lite format. Next, we compile the TensorFlow Lite model for use with the Google Coral edge TPU using the edgetpu_compiler tool. Once we have the compiled model, we can load it into our Python script using the tflite_runtime.Interpreter. To use the Ultralytics YOLOv8 library with the Google Coral edge TPU, we need to pass the path to the compiled TensorFlow Lite model to the YOLO object's constructor. After the YOLO object is initialized, we can run inference on our images. I hope this helps! Let me know if you have any further questions or if there is anything else you would like me to explain.
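To illustrate the tflite_runtime loading step, here is a small sketch. The per-OS shared-library names follow the Coral documentation; make_interpreter assumes tflite_runtime is installed and an Edge TPU is attached, and the model path you pass it is illustrative:

```python
import platform

# Shared-library name of the Edge TPU runtime on each OS (per the Coral docs)
EDGETPU_SHARED_LIB = {
    'Linux': 'libedgetpu.so.1',
    'Darwin': 'libedgetpu.1.dylib',
    'Windows': 'edgetpu.dll',
}

def edgetpu_lib_name(system=None):
    """Pick the Edge TPU runtime library for the current (or given) OS."""
    return EDGETPU_SHARED_LIB[system or platform.system()]

def make_interpreter(model_path):
    """Build a TFLite interpreter bound to the Edge TPU delegate."""
    # Deferred import so the helpers above work without tflite_runtime installed
    from tflite_runtime.interpreter import Interpreter, load_delegate
    return Interpreter(
        model_path=model_path,
        experimental_delegates=[load_delegate(edgetpu_lib_name())],
    )
```

Calling make_interpreter('model_edgetpu.tflite') then gives you an interpreter that dispatches the compiled ops to the TPU.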
The YOLO object has no infer method
@rolouis my apologies for my previous message. I stand corrected: the YOLO object has no infer method. To perform inference with a TensorFlow Lite model on a Google Coral edge TPU, the general steps are to preprocess the input image, run the interpreter, and post-process the raw output into detections. Once we have the final detection results, we can display the results or write them to a file. If you would like specific code examples or further explanation on any of these steps, please let me know.
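As an illustration of the post-processing step, detection models typically finish with non-maximum suppression. A minimal pure-Python sketch (this is not Ultralytics' actual implementation; boxes are (x1, y1, x2, y2) tuples):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy NMS: keep the highest-scoring box, drop heavy overlaps, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```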
Are these answers generated by a chat robot? Because they sure sound like it haha. But I was also facing this issue prior to installing pycoral (https://coral.ai/docs/accelerator/get-started/#2-install-the-pycoral-library), not entirely sure if this was the reason behind the error. Still, no luck with getting any detections using the v8 exported edgetpu model.
Yes, @glenn-jocher's answers sound like some GPT answer :/ AFAIK the models converted by the export functions are not compatible with the edgetpu
@35grain yes, that is correct. The exported PyTorch model must be converted to TensorFlow Lite format and then compiled for use with the Google Coral edge TPU using the edgetpu_compiler command. It is also important to ensure that the model's weights are quantized to UINT8 format for use with the edgeTPU. After the model is compiled, it can be loaded into memory using the tflite_runtime.Interpreter object and run on the Google Coral edge TPU using the edgetpu library. The tflite_runtime and edgetpu libraries provide the tools necessary to perform inference with the TensorFlow Lite model on the edge TPU device.
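For the compilation step mentioned above, the Coral toolchain's edgetpu_compiler is invoked on a full-integer-quantized .tflite file; a hedged sketch where the filename is illustrative:

```shell
# -s prints a per-op summary showing which ops were mapped to the Edge TPU
# and which will fall back to the CPU
edgetpu_compiler -s yolov8n_full_integer_quant.tflite
```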
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help. For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐ |
If you're experiencing issues similar to those described by @Skillnoob and have already posted a detailed issue on the TensorFlow GitHub repository, you've taken the right step in seeking community assistance for a TensorFlow-specific problem. From the YOLOv8 and Ultralytics perspective, it's essential to confirm that your export settings and runtime environment match the documented requirements.
In the meantime, you may also want to review the relevant documentation and look for similar issues or solutions already discussed within the Coral Edge TPU community. If you find a solution, it would be beneficial for the community if you could share your learnings or resolution steps, either directly on your issue or in the relevant forums. Remember that troubleshooting complex integration problems between different tools (like PyTorch, TensorFlow, TFLite, and Edge TPU) can often involve iterative debugging and community cooperation. Your patience and persistence are key. |
The model is fully converted. |
@Skillnoob thank you for the update. If you have ensured that the model is fully converted and pycoral is not required, your setup should ideally be ready for running inference using an Edge TPU with the TensorFlow library. To troubleshoot the issue, please verify that all dependencies and runtime environments are correctly installed and configured.
Given that the error appears to be related to TensorFlow and the Edge TPU rather than the high-level ultralytics package, the problem may lie in the interaction between the TensorFlow Lite interpreter and the Edge TPU. If you don't find a solution, consider reaching out to the TensorFlow or Google Coral support forums, as they might have more specialized advice on dealing with TensorFlow Lite and Edge TPU integration issues. |
What is it with these messages which just look like they were generated by a chatbot? A short message written by an actual human would be way better than this wall of text that just repeats what has been said before.
@Skillnoob I'm here to help you troubleshoot the problem. If you've already fully converted the model and ensured that all dependencies and runtime environments are correctly installed and configured, the next steps could involve more in-depth debugging. Given that this appears to be a TensorFlow Lite and Edge TPU-specific issue, the TensorFlow community, Google Coral forums, or the issue tracker on their respective GitHub repositories might provide additional assistance. They could offer insights or solutions that might not be apparent from within the context of the YOLOv8 Ultralytics framework. If you have any detailed error messages, logs, or specific symptoms of the issue that weren't addressed previously, please share them so we can offer more targeted advice.
I have code and an edgetpu file which work perfectly fine for 192x192, but it is not working for any other export image size, and the only error that arises is a segmentation fault. I tried the official ultralytics code to export the edge tpu file, but it also gives a segmentation fault. Can anyone help me sort this out?
Hey there! It sounds frustrating dealing with a segmentation fault, especially when it only occurs with certain image sizes. 🤔 It'd be helpful if you ensure that your image preprocessing aligns with the model's input requirements for different sizes. If possible, could you share a snippet of how you're loading and processing your images? Sometimes, slight discrepancies in how images are sized or batched can cause such issues when running on Edge TPU. Just to double-check, here's a quick way to resize your images correctly:

from PIL import Image

image = Image.open('path_to_image.jpg')
image = image.resize((192, 192))  # Make sure images are squared properly
image.save('resized_image.jpg')

If you're doing something similar and still facing issues, it might be good to revisit the model conversion steps or try another simple model and see if the problem persists across different models. This can help isolate whether the issue is with the conversion process or something specific to the model!
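If a plain resize distorts the aspect ratio, a common alternative is letterboxing: scale the longer side to the target and pad the rest to a square. A small sketch of just the size arithmetic (pure Python; the 192 target mirrors the size that reportedly works, and the function name is illustrative):

```python
def letterbox_shape(src_w, src_h, dst=192):
    """Return the resized (w, h) and the (left, top) padding needed
    to fit a src_w x src_h image into a dst x dst square without distortion."""
    scale = dst / max(src_w, src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_w, pad_h = dst - new_w, dst - new_h
    return (new_w, new_h), (pad_w // 2, pad_h // 2)
```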
Search before asking
YOLOv8 Component
Detection, Integrations, Other
Bug
When trying to use inference in a Docker container together with the Edge TPU, the Python process crashes. Running the same model using the same environment and tf.lite.Interpreter, it works fine.

Environment
Python 3.8.16
ultralytics==8.0.61
Running a Docker container with EdgeTPU on a Raspberry Pi
docker run --rm -it --privileged --entrypoint bash -v /dev/bus/usb:/dev/bus/usb yolo-inference:0.0.9
Minimal Reproducible Example
Additional
PIP Freeze output:
The model was exported using the yolo cli and the "edgetpu" target
Are you willing to submit a PR?