
batch detect #13082

Closed
1 task done
xyh1108 opened this issue Jun 12, 2024 · 6 comments
Labels
question Further information is requested Stale

Comments

xyh1108 commented Jun 12, 2024

Search before asking

Question

Hello, we want to run batch detection on images with YOLOv5. We combined four images into one batch, ran inference, and measured the inference time in two different ways. The two measurements do not match — is 0.1178 seconds for the batch normal? Our model was trained with yolov5x.pt as the pretrained weights, and detecting a single image with it takes about 33 ms.

Additional

No response
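The two measurement approaches described in the question can be contrasted with a minimal sketch (this uses a dummy inference callable as a stand-in; real YOLOv5 timings will of course differ):

```python
import time

def dummy_infer(batch):
    # Stand-in for a model forward pass; a real YOLOv5 call would go here.
    time.sleep(0.001 * len(batch))

images = list(range(4))

# Method 1: one timed call over the whole four-image batch
start = time.perf_counter()
dummy_infer(images)
batched_time = time.perf_counter() - start

# Method 2: sum of per-image timings
per_image_total = 0.0
for img in images:
    start = time.perf_counter()
    dummy_infer([img])
    per_image_total += time.perf_counter() - start

print(f"batched: {batched_time:.4f}s, per-image sum: {per_image_total:.4f}s")
```

With a real model these two numbers are not expected to agree exactly: batched inference amortizes per-call overhead, while summed per-image timings count that overhead once per image.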

@xyh1108 xyh1108 added the question Further information is requested label Jun 12, 2024
Contributor

👋 Hello @xyh1108, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics

@glenn-jocher
Member

@xyh1108 hello,

Thank you for reaching out and for providing detailed information about your batch detection process. To assist you effectively, could you please provide a minimum reproducible code example? This will help us understand your implementation better and identify any potential issues. You can refer to our guide on creating a minimum reproducible example here: Minimum Reproducible Example.

Additionally, please ensure that you are using the latest versions of torch and the YOLOv5 repository. You can update your packages with the following commands:

pip install --upgrade torch
git pull

Regarding your question about inference time, the time taken for batch processing can vary due to several factors, including hardware, batch size, and image resolution. The time of 0.1178 seconds for batch inference seems reasonable, but it can be influenced by the factors mentioned above. For a more accurate comparison, you might want to test with different batch sizes and measure the inference time for each.
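To compare batch sizes as suggested above, a small generic timing helper can be used (a sketch: `time_batches` and its `infer` callable are hypothetical names, not part of YOLOv5):

```python
import time

def time_batches(infer, images, batch_sizes):
    """Time `infer` over `images` for each candidate batch size.

    `infer` is any callable that accepts a list of images and runs
    inference on it. Returns {batch_size: seconds per image}.
    """
    timings = {}
    for bs in batch_sizes:
        start = time.perf_counter()
        for i in range(0, len(images), bs):
            infer(images[i:i + bs])
        elapsed = time.perf_counter() - start
        timings[bs] = elapsed / len(images)
    return timings
```

For example, with a hub-loaded model this could be called as `time_batches(lambda b: model(b, size=640), imgs, [1, 2, 4, 8])` to see how per-image cost changes with batch size.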

Here is an example of how you can perform batch inference with YOLOv5:

import torch
from PIL import Image

# Load model (the torch.hub model is wrapped in AutoShape, which handles
# preprocessing, batching and NMS)
model = torch.hub.load('ultralytics/yolov5', 'yolov5x')

# Load images
img_paths = ['path/to/image1.jpg', 'path/to/image2.jpg', 'path/to/image3.jpg', 'path/to/image4.jpg']
imgs = [Image.open(img_path) for img_path in img_paths]

# Perform batch inference. Passing a list of PIL images lets the wrapper
# letterbox each image to a common size and stack them into one batch;
# passing a raw stacked tensor would bypass this preprocessing (and fail
# outright if the images have different sizes).
results = model(imgs, size=640)

# Print results
results.print()

This code demonstrates how to load multiple images and perform batch inference with the YOLOv5 model; the hub wrapper handles resizing, tensor conversion, and batching internally.

If you continue to experience discrepancies or have further questions, please provide the reproducible code example, and we will be happy to investigate further.

Author

xyh1108 commented Jun 14, 2024

Thank you for your reply. I used the example you provided, put eight images in a folder for inference, and the measured inference time was normal. I printed the type and value of the results; my guess is that this only runs inference and takes no further action.

Author

xyh1108 commented Jun 14, 2024

I also found a similar multi-image inference example. In that example I put the images to be detected into a folder and manually changed the batch-size value. I printed the value and type of the results as well: each image is inferred, its result is obtained, and the time is measured. The result shows an average detection time of about 30 ms per image, which is also normal. How can I get the complete inference time with the example you provided?

@glenn-jocher
Member

Hello @xyh1108,

Thank you for your detailed follow-up and for sharing your observations. It's great to hear that the inference time is within the expected range. To address your question about obtaining the complete inference time, you can measure the time taken for the entire batch inference process using Python's time module. Here's an example of how you can do this:

import time
from pathlib import Path

import torch
from PIL import Image

# Load model
model = torch.hub.load('ultralytics/yolov5', 'yolov5x')

# Load all JPEG images from a folder
img_folder = Path('path/to/your/folder')
img_paths = list(img_folder.glob('*.jpg'))
imgs = [Image.open(img_path) for img_path in img_paths]

# Measure inference time for the whole batch; the hub wrapper handles
# resizing, tensor conversion and batching of the image list internally
start_time = time.time()
results = model(imgs, size=640)
end_time = time.time()

# Calculate and print total inference time
total_inference_time = end_time - start_time
print(f'Total inference time for a batch of {len(imgs)} images: {total_inference_time:.4f} seconds')

# Print results
results.print()

This script loads all images from the specified folder and performs batch inference while measuring the total wall-clock time of the model call. The total_inference_time variable gives the complete inference time for the entire batch.
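One caveat worth adding: on a GPU, PyTorch launches CUDA kernels asynchronously, so a plain wall-clock measurement around the model call can stop the timer before the GPU has actually finished. Synchronizing before and after the call gives a more accurate number. A sketch (the `timed_inference` helper is a hypothetical name; it degrades gracefully when torch or CUDA is unavailable):

```python
import time

try:
    import torch
    _cuda_sync = torch.cuda.synchronize if torch.cuda.is_available() else None
except ImportError:  # timing still works without torch, just without the sync
    _cuda_sync = None

def timed_inference(model, batch):
    """Run model(batch) and return (results, elapsed_seconds)."""
    # Without a synchronize, the timer can stop while CUDA kernels
    # launched by the forward pass are still running on the GPU.
    if _cuda_sync:
        _cuda_sync()
    start = time.perf_counter()
    results = model(batch)
    if _cuda_sync:
        _cuda_sync()
    return results, time.perf_counter() - start
```

On CPU-only setups the synchronization is a no-op and the measurement matches a plain time.time() pair.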

If you encounter any further issues or have additional questions, please feel free to share more details or a minimum reproducible code example. This will help us assist you more effectively. You can find more information on creating a minimum reproducible example here: Minimum Reproducible Example.

Thank you for your engagement and for being a part of the YOLO community! 🚀

Contributor

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions github-actions bot added the Stale label Jul 15, 2024
@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Jul 26, 2024