Start multiple models at the same time #12755
Hello! It sounds like you're trying to run inference simultaneously on two different cameras using two instances of the YOLO model. If you're experiencing performance issues, it might be due to the resources available on your machine, especially if you're using a single GPU or CPU. To potentially improve performance, you can try running each model on a separate thread or process to better utilize your hardware. Here's a simple example using Python's `threading` module:

```python
import threading

from ultralytics import YOLO

def run_inference(model_path, image_path):
    model = YOLO(model_path)
    model.predict(image_path)

# Thread for the first camera
thread1 = threading.Thread(target=run_inference, args=('yolov8n.pt', 'img.jpg'))
# Thread for the second camera
thread2 = threading.Thread(target=run_inference, args=('yolov8n.pt', 'img2.jpg'))

thread1.start()
thread2.start()
thread1.join()
thread2.join()
```

This approach initializes each model in its own thread, potentially improving the utilization of your computational resources. Make sure your system has enough memory and processing power to handle multiple models simultaneously. If you continue to experience issues, consider using more powerful hardware or optimizing your model for better performance.
```python
import threading

from ultralytics import YOLO

model1 = YOLO('model.pt')
model2 = YOLO('model.pt')

def infer(model, img_path):
    return model.predict(img_path)

thread1 = threading.Thread(target=infer, args=(model1, img_path))
thread2 = threading.Thread(target=infer, args=(model2, img_path))

thread1.start()
thread2.start()
thread1.join()
thread2.join()
```

Because my code runs inference continuously, I don't want to reload the model every time, so can I write the method like this? However, I found in testing that multithreading doesn't help: for example, each image takes 100 ms single-threaded, so two consecutive single-threaded inferences take 200 ms, but when I use multithreading, each thread's log also shows 200 ms per image.
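The timing described above (two threads each reporting 200 ms instead of 100 ms) is the classic signature of GIL serialization. A minimal stand-in sketch can reproduce the effect without any YOLO dependency; `fake_infer` here is a hypothetical pure-Python stand-in for one inference call, not the Ultralytics API:

```python
import threading
import time

def fake_infer(n, results, idx):
    # Pure-Python loop: CPU-bound work that holds the GIL the whole time.
    total = 0
    for i in range(n):
        total += i * i
    results[idx] = total

N = 500_000
results = [None, None]

# Two calls back to back (the "200 ms sequential" case).
t0 = time.perf_counter()
fake_infer(N, results, 0)
fake_infer(N, results, 1)
serial = time.perf_counter() - t0

# Two threads started together (the "each thread reports 200 ms" case).
t0 = time.perf_counter()
t1 = threading.Thread(target=fake_infer, args=(N, results, 0))
t2 = threading.Thread(target=fake_infer, args=(N, results, 1))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - t0

# Typically `threaded` lands close to `serial`: the threads interleave on
# one core rather than running in parallel, so each thread's wall-clock
# time roughly doubles.
print(f"serial={serial:.3f}s threaded={threaded:.3f}s")
```

If the per-frame cost were dominated by GPU work that releases the GIL, threads could still overlap; for CPU-bound Python loops like this one, they cannot.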
Hello! It looks like you're trying to run inference in parallel using threading, but you're not seeing any performance improvement. This issue might be due to Python's Global Interpreter Lock (GIL), which prevents multiple native threads from executing Python bytecode at once. This can be particularly restrictive for CPU-bound tasks. For better performance with parallel processing in Python, consider using the `multiprocessing` module:

```python
from multiprocessing import Process

from ultralytics import YOLO

def infer(model_path, img_path):
    model = YOLO(model_path)
    return model.predict(img_path)

if __name__ == '__main__':
    process1 = Process(target=infer, args=('model.pt', 'img1.jpg'))
    process2 = Process(target=infer, args=('model.pt', 'img2.jpg'))

    process1.start()
    process2.start()
    process1.join()
    process2.join()
```

This approach should help you better utilize your hardware capabilities and see improved performance when running inference on multiple inputs simultaneously.
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Question
I have two cameras running at the same time and I want them to perform inference simultaneously, but a single model is not fast enough. I tried opening two models, but it seems to have no effect. Why is this?