Start multiple models at the same time #12755

Open · ohj666 opened this issue May 17, 2024 · 4 comments
Labels: question (Further information is requested), Stale

Comments

ohj666 commented May 17, 2024

Question

I have two cameras running at the same time and want to run inference on both simultaneously, but a single model isn't fast enough. I tried opening two models, but it seems to make no difference. Why is this?

from ultralytics import YOLO

model = YOLO('yolov8n.pt')
model2 = YOLO('yolov8n.pt')
model.predict('img.jpg')
model2.predict('img2.jpg')

glenn-jocher (Member) commented

Hello! It sounds like you're trying to run inference simultaneously on two different cameras using two instances of the YOLO model. If you're experiencing performance issues, it might be due to the resources available on your machine, especially if you're using a single GPU or CPU.

To potentially improve performance, you can try running each model on a separate thread or process to better utilize your hardware. Here’s a simple example using Python's threading module:

import threading
from ultralytics import YOLO

def run_inference(model_path, image_path):
    model = YOLO(model_path)
    model.predict(image_path)

# Thread for the first camera
thread1 = threading.Thread(target=run_inference, args=('yolov8n.pt', 'img.jpg'))

# Thread for the second camera
thread2 = threading.Thread(target=run_inference, args=('yolov8n.pt', 'img2.jpg'))

thread1.start()
thread2.start()

thread1.join()
thread2.join()

This approach initializes each model in its own thread, potentially improving the utilization of your computational resources. Make sure your system has enough memory and processing power to handle multiple models simultaneously. If you continue to experience issues, consider using more powerful hardware or optimizing your model for better performance.
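On the "optimizing your model" point: one common route is exporting the weights to an accelerated runtime with Ultralytics' built-in export. Here is a minimal sketch, assuming a CUDA-capable GPU with TensorRT installed (the 'engine' format) and the default output file name:

from ultralytics import YOLO

# Export the PyTorch weights to a TensorRT engine for faster inference
model = YOLO('yolov8n.pt')
model.export(format='engine')  # writes 'yolov8n.engine' by default

# Load the exported engine and run inference with it
trt_model = YOLO('yolov8n.engine')
trt_model.predict('img.jpg')

On many GPUs the exported engine reduces per-image latency enough that a single model may keep up with both cameras.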

ohj666 commented May 22, 2024

import threading
from ultralytics import YOLO

model1 = YOLO('model.pt')
model2 = YOLO('model.pt')

def infer(model, img_path):
    return model.predict(img_path)

# img_path is set elsewhere in my code
thread1 = threading.Thread(target=infer, args=(model1, img_path))
thread2 = threading.Thread(target=infer, args=(model2, img_path))
thread1.start()
thread2.start()
thread1.join()
thread2.join()

Since my code runs inference continuously, I don't want to reload the model on every call, so can I write it this way instead? However, in testing I found that multithreading gives no speedup: single-threaded, each image takes about 100 ms, and two images inferred back-to-back take 200 ms, but with two threads each thread's log also shows about 200 ms per image.

glenn-jocher (Member) commented

Hello! It looks like you're trying to run inference in parallel using threading, but you're not seeing any performance improvement. This issue might be due to Python's Global Interpreter Lock (GIL), which prevents multiple native threads from executing Python bytecodes at once. This can be particularly restrictive for CPU-bound tasks.

For better performance with parallel processing in Python, consider using the multiprocessing module instead of threading. This module bypasses the GIL by using separate memory spaces and processes:

from multiprocessing import Process
from ultralytics import YOLO

def infer(model_path, img_path):
    model = YOLO(model_path)
    return model.predict(img_path)

if __name__ == '__main__':
    process1 = Process(target=infer, args=('model.pt', 'img1.jpg'))
    process2 = Process(target=infer, args=('model.pt', 'img2.jpg'))
    process1.start()
    process2.start()
    process1.join()
    process2.join()

This approach should help you better utilize your hardware capabilities and see improved performance when running inference on multiple inputs simultaneously.
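For the continuous two-camera case specifically, each process can load its model once and then stream frames, avoiding the per-call reload you mentioned. A minimal sketch, assuming two local webcams at indices 0 and 1 (stream=True makes predict return a generator of per-frame results):

from multiprocessing import Process
from ultralytics import YOLO

def run_camera(model_path, camera_index):
    # Load the model once in this process, then stream frames continuously
    model = YOLO(model_path)
    for result in model.predict(source=camera_index, stream=True):
        print(f'camera {camera_index}: {len(result.boxes)} detections')

if __name__ == '__main__':
    # Assumed setup: two webcams at indices 0 and 1
    processes = [Process(target=run_camera, args=('model.pt', i)) for i in (0, 1)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()

Each process owns its own model and camera, so the two streams run independently of the GIL.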

github-actions bot commented Jun 24, 2024

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

github-actions bot added the Stale label Jun 24, 2024