
How can I save the detections YOLOv5 makes when it's working with a camera source? #13238

Closed
ComederoAVES2024 opened this issue Aug 1, 2024 · 4 comments
Labels
question Further information is requested

Comments

@ComederoAVES2024

Hi! So what I want to do is have my YOLOv5 model save the detections it makes to a folder while it works with a camera in real time on my Raspberry Pi 4B+.

I found this code written by glenn-jocher in reply to user sanchaykasturey at the following link: #11102, and I tried to adapt it to my needs, but I realised that although it captures and saves the images, it doesn't do so because it detected any object or class: it just saves every frame without classifying anything... I tried changing the model to 'yolov5s', but then the code doesn't even run.

I'm very confused, as I'm new to this, and I'm really not sure whether this happens because I'm working on a Raspberry Pi or whether it's a problem with the code. Could someone help me?

Here is the code I have modified slightly to test...

import torch
from PIL import Image
import cv2
import datetime

CKPT_PATH = '/home/pi/yolov5/yolov5s.pt'
yolov5 = torch.hub.load('/home/pi/yolov5', 'custom', path=CKPT_PATH, source='local', force_reload=True)

vidcap = cv2.VideoCapture(0)
success, image = vidcap.read()

while success:
    # Convert image to PIL format
    img_pil = Image.fromarray(image)

    # Perform YOLOv5 inference
    results = yolov5(img_pil)

    # Check if any detections are made
    if len(results.pred) > 0:
        # Save the frame as an image
        timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
        image_name = f"image_{timestamp}.jpg"
        cv2.imwrite(image_name, image)

    # Read the next frame
    success, image = vidcap.read()

# Release the video capture
vidcap.release()

PS: Sorry if I didn't tag it correctly, I'm new here and thought this didn't fit any tag.

@UltralyticsAssistant added the python and question (Further information is requested) labels on Aug 1, 2024
Contributor

github-actions bot commented Aug 1, 2024

👋 Hello @ComederoAVES2024, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of our up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics
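
As a minimal, illustrative sketch (not an official snippet), a pretrained YOLOv8 model can then be run with the ultralytics Python API roughly like this (the image path is a placeholder):

from ultralytics import YOLO

# Load a pretrained YOLOv8 nano model (downloaded automatically on first use)
model = YOLO('yolov8n.pt')

# Run inference on an image and print the detected bounding boxes
results = model('path/to/image.jpg')  # placeholder path
for r in results:
    print(r.boxes)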

@glenn-jocher
Member

@ComederoAVES2024 hi there!

Thank you for reaching out and providing detailed information about your issue. It's great to see your enthusiasm for working with YOLOv5 on your Raspberry Pi 4B+! Let's address your concerns and help you get your model saving detections correctly.

From your description, it seems like the model isn't performing inference as expected. Here are a few steps and modifications to help you troubleshoot and resolve the issue:

  1. Ensure Model and Dependencies are Correctly Installed:
    Make sure you have the latest version of YOLOv5 and all dependencies installed. You can do this by cloning the repository and installing the requirements:

    git clone https://github.com/ultralytics/yolov5
    cd yolov5
    pip install -r requirements.txt
  2. Modify the Code for Correct Inference:
    The code you provided looks mostly correct, but let's make sure the inference and saving logic are properly implemented. Here’s an updated version of your script:

    import torch
    from PIL import Image
    import cv2
    import datetime
    import os
    
    # Load YOLOv5 model
    CKPT_PATH = '/home/pi/yolov5/yolov5s.pt'
    yolov5 = torch.hub.load('/home/pi/yolov5', 'custom', path=CKPT_PATH, source='local', force_reload=True)
    
    # Create a directory to save images
    save_dir = 'detections'
    os.makedirs(save_dir, exist_ok=True)
    
    # Open video capture
    vidcap = cv2.VideoCapture(0)
    success, image = vidcap.read()
    
    while success:
        # Convert image to PIL format
        img_pil = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    
        # Perform YOLOv5 inference
        results = yolov5(img_pil)
    
        # Check if any detections are made
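        # results.pred is a list with one (n_detections x 6) tensor per image,
        # so check the row count of the first tensor rather than len(results.pred)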
        if results.pred[0].shape[0] > 0:
            # Save the frame as an image
            timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
            image_name = os.path.join(save_dir, f"image_{timestamp}.jpg")
            cv2.imwrite(image_name, image)
            print(f"Saved {image_name}")
    
        # Read the next frame
        success, image = vidcap.read()
    
    # Release the video capture
    vidcap.release()
  3. Verify Model Path and Permissions:
    Ensure that the path to your model (CKPT_PATH) is correct and that your script has the necessary permissions to read the model file and write images to the specified directory.

  4. Check for Errors and Debug:
    Run the script and check for any error messages. If the script fails to run, it might provide clues about what’s going wrong. You can also add print statements to debug and ensure each part of the code is executing as expected.

  5. Performance Considerations:
    Running YOLOv5 on a Raspberry Pi can be resource-intensive. Ensure your Raspberry Pi has sufficient resources, and consider using a smaller model like yolov5n.pt (nano) if performance is an issue; a short sketch covering points 3–5 follows this list.
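
A minimal sketch tying points 3–5 together, assuming the same local clone at /home/pi/yolov5 and a yolov5n.pt checkpoint downloaded into it (both paths are illustrative, not a confirmed setup):

    import os
    import torch

    REPO_PATH = '/home/pi/yolov5'                      # local YOLOv5 clone (illustrative path)
    CKPT_PATH = os.path.join(REPO_PATH, 'yolov5n.pt')  # nano checkpoint, lighter for a Raspberry Pi

    # Point 3: verify the checkpoint exists and is readable before loading it
    if not os.path.isfile(CKPT_PATH):
        raise FileNotFoundError(f"Checkpoint not found: {CKPT_PATH}")
    if not os.access(CKPT_PATH, os.R_OK):
        raise PermissionError(f"Checkpoint not readable: {CKPT_PATH}")

    # Point 5: load the smaller nano model from the local repo
    yolov5 = torch.hub.load(REPO_PATH, 'custom', path=CKPT_PATH, source='local')

    # Point 4: a quick check that the model loaded and which classes it knows
    print(f"Loaded model with {len(yolov5.names)} classes: {yolov5.names}")

If this prints the expected class names, loading and permissions are fine and any remaining issue is in the capture-and-save loop itself.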

If you continue to face issues, please ensure that the problem is reproducible with the latest versions of the packages and provide any error messages you encounter. The community and the Ultralytics team are here to help!

Best of luck with your project, and feel free to reach out if you have any more questions! 😊

@ComederoAVES2024
Author

Hi glenn! Wow, thanks for your reply! I've already tested the code again and it works correctly, as it should. I'm going to look in detail at how it all works. Thank you very much for your help and suggestions; I will take them into account from now on. Hope you have a great day!

@glenn-jocher
Member

Hi @ComederoAVES2024,

I'm glad to hear that the code is working correctly for you now! 🎉 It's great to see your enthusiasm and dedication to understanding how everything works in detail. If you have any more questions or run into any other issues as you explore further, feel free to reach out. The YOLOv5 community and the Ultralytics team are always here to help.

Enjoy your journey with YOLOv5, and have a fantastic day ahead! 😊
