Visualizing YOLOv5 Segmentation Data #13069
Hello, Thank you for reaching out with your questions about YOLOv5 segmentation and training on Google Colab.

**YOLOv5 Segmentation Issue:** From your description, it seems like your model setup and inference code are generally correct. However, the issue might be related to how the masks are being processed or displayed. Here are a few things you might want to check:

**Google Colab Disconnection:** Regarding the issue with Google Colab disconnecting during long training sessions, here are a couple of suggestions:

If the problem persists, reviewing the exact output or errors, if any, could provide more insights into what might be going wrong. Feel free to share any updates or additional information. We're here to help!
@glenn-jocher, Thanks for the quick response! I have been unsuccessful in getting the YOLOv5-seg model to work as expected. I notice that I get this error when loading the model:

I think this has something to do with it. Is there a way that you know of to load the model for inference in Python? If so, it would help if you could provide some example code to help me get started. Also, about the Colab issue, I got a browser extension that seemingly fixed the issue!
@DylDevs hello, Thank you for the update and I'm glad to hear that the browser extension resolved your Colab issue! Regarding the YOLOv5-seg model warning, it indicates that the model isn't compatible with the AutoShape feature, which simplifies the inference process. You can still run inference manually by handling the input and output tensors directly. Here's a basic example to help you get started with manual inference:

```python
import torch
from PIL import Image
from torchvision.transforms import functional as F

# Load your model (ensure it's in the correct directory)
model = torch.load('path_to_yolov5s-seg.pt')
model.eval()  # Set the model to evaluation mode

# Load an image
image = Image.open('path_to_your_image.jpg')
image = F.to_tensor(image).unsqueeze(0)  # Transform image to tensor and add batch dimension

# Perform inference
with torch.no_grad():
    results = model(image)

# Process results here (e.g., extracting masks)
# Note: You'll need to adapt this part based on how your model outputs data
```

This code snippet manually handles the image transformation and model inference. Make sure to adapt the result processing part according to the specific output format of your segmentation model. If you encounter any more issues or have further questions, feel free to ask. Happy coding!
@glenn-jocher
Any ideas as to how I can fix it?
Hello @DylDevs, Thank you for your patience and for providing the error details. To resolve the error you're encountering, you should load the model using the recommended loading method:

This approach should help you avoid the error. If you have any further questions or run into other issues, feel free to ask. We're here to help! 😊
@glenn-jocher This code does the same thing as I started with. I have figured out that `results` is a list of tensors. How should I decode this to get the actual output from the model? I would like to get this sorted out as quickly as possible so I can get this implemented in my project. Thanks!
Hello @DylDevs, Thank you for your patience and for the additional details. Let's dive into decoding the results from the YOLOv5-seg model to get the actual output. When you perform inference with the YOLOv5-seg model, the results are typically a list of tensors. Each tensor contains information about the detected objects, including bounding boxes, confidence scores, class labels, and segmentation masks. Here's a step-by-step guide to decode and visualize the results:

This should help you decode the results and visualize the segmentation masks along with bounding boxes and class labels. If you have any further questions or need additional assistance, feel free to ask. We're here to help! 😊
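As a rough sketch of what the mask-decoding step involves: YOLOv5-seg produces per-detection mask coefficients that are linearly combined with a set of prototype masks, then passed through a sigmoid and thresholded. The numpy sketch below reproduces that step with made-up shapes (2 prototypes on a 4x4 grid rather than the model's real dimensions), so it illustrates the math rather than the actual model output:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_masks(mask_coeffs, protos, threshold=0.5):
    """Combine per-detection mask coefficients with prototype masks.

    mask_coeffs: (n_det, n_protos) coefficients taken from the detection rows
    protos:      (n_protos, h, w) prototype masks produced by the model
    Returns a boolean array of shape (n_det, h, w).
    """
    n_protos, h, w = protos.shape
    # Linear combination of prototypes, then sigmoid activation and threshold
    masks = sigmoid(mask_coeffs @ protos.reshape(n_protos, -1))
    return (masks > threshold).reshape(-1, h, w)

# Tiny illustrative example: 2 prototypes on a 4x4 grid, 1 detection
rng = np.random.default_rng(0)
protos = rng.normal(size=(2, 4, 4))
coeffs = np.array([[1.0, -0.5]])
masks = decode_masks(coeffs, protos)
print(masks.shape)  # (1, 4, 4)
```

In the real model the prototype grid is larger and the boolean masks would then be upsampled and cropped to each detection's bounding box before display.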
With a little bit of modification, that code worked, thanks!
Hello @DylDevs, I'm glad to hear that the code modifications helped! 🎉 If you encounter any further issues or have additional questions, feel free to reach out. To ensure we can assist you most effectively, please provide a minimum reproducible code example if you run into any new bugs or issues. This helps us reproduce the problem on our end and investigate a solution more efficiently. You can find more details on how to create one here: Minimum Reproducible Example. Additionally, always ensure you're using the latest versions of your packages. If you have any more questions or need further assistance, don't hesitate to ask. The YOLO community and the Ultralytics team are here to help!
Search before asking
Question
Greetings, I would like to use YOLOv5 Segmentation for one of my projects. However, before I collect and annotate data, I want to get a proof of concept that v5-seg can do everything that I need to do.
I have the model loaded to my needs with this code:
When it comes to retrieving data for the model, I think I did it correctly.
However, no matter what image I give it, the output is always this:
I am not sure what I'm doing wrong. Is it model loading? Is it inference? Is it plotting the data? From my knowledge this should work just fine.
Any help on the matter is appreciated!
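For the plotting step specifically, a common way to visualize a segmentation mask is to alpha-blend a color over the masked pixels of the image. A minimal numpy sketch (the image, mask, and color values here are illustrative placeholders, not actual model output):

```python
import numpy as np

def overlay_mask(image, mask, color=(255, 0, 0), alpha=0.4):
    """Alpha-blend a binary mask onto an RGB image (uint8, HxWx3)."""
    image = image.astype(float)
    color = np.asarray(color, dtype=float)
    m = mask.astype(bool)
    # Blend only the masked pixels toward the overlay color
    image[m] = (1 - alpha) * image[m] + alpha * color
    return image.astype(np.uint8)

# Illustrative 2x2 black image with the top row masked
img = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[1, 1], [0, 0]])
out = overlay_mask(img, mask)
print(out[0, 0])  # masked pixel blended toward red: 0.6*0 + 0.4*255 = 102
```

The same blend works on real images once the model's mask has been resized to match the image resolution.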
Additional
Also, another small question. My GPU is not very good, so I tend to train my models in Google Colab. My issue is that my model will get to 150-200 epochs and then the runtime will disconnect because of inactivity. I feel like I'm checking in on it and clicking around often enough for it not to crash (every ~10 minutes or so). I understand this could be easily fixed by getting a Colab subscription, but I would like to avoid paying for Colab if at all possible. Input on this issue is also appreciated!