Solving problems with multithreading when working with GPU #26
Thanks for the offer, I'll deal with it as soon as I can. Would you be willing to submit a PR with the changes in the meantime?
What are you trying to achieve with this?
I'm trying to get the most out of the GPU. The current configuration runs the inference from multithreaded code, which causes an ONNX Runtime error. The GPU inference needs to run on a single thread, while image preparation and post-processing can stay multithreaded.
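A minimal sketch of that split, not the library's actual code: preprocessing and post-processing run on many threads, while a single consumer thread performs all GPU inference calls. The delegate names and the bounded capacities are assumptions for illustration.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical pipeline: parallel preprocessing, single-threaded GPU inference,
// parallel post-processing.
public static class InferencePipeline
{
    public static void Run<TFrame, TInput, TOutput>(
        IEnumerable<TFrame> frames,
        Func<TFrame, TInput> preprocess,   // thread-safe image preparation
        Func<TInput, TOutput> infer,       // GPU call, must not run concurrently
        Action<TOutput> postprocess)       // thread-safe post-processing
    {
        var inputs = new BlockingCollection<TInput>(boundedCapacity: 8);
        var outputs = new BlockingCollection<TOutput>(boundedCapacity: 8);

        // Preprocess frames on multiple threads.
        var preTask = Task.Run(() =>
        {
            Parallel.ForEach(frames, f => inputs.Add(preprocess(f)));
            inputs.CompleteAdding();
        });

        // Single consumer: only this thread ever touches the GPU session.
        var inferTask = Task.Run(() =>
        {
            foreach (var input in inputs.GetConsumingEnumerable())
                outputs.Add(infer(input));
            outputs.CompleteAdding();
        });

        // Post-process results on multiple threads.
        Parallel.ForEach(outputs.GetConsumingEnumerable(), postprocess);

        Task.WaitAll(preTask, inferTask);
    }
}
```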
I pushed an update recently; check whether this commit does what you need.
Yes, thank you. I got about a 30% increase.
At the moment, no.
Good day. To address the model failing under multi-threaded code without losing performance, I suggest wrapping the code segment where
using var outputs = _inference.Run(inputs);
is called in a SemaphoreSlim. In the constructor, accept an optional parameter that initializes the SemaphoreSlim when working with the GPU. No errors have been observed when working on the CPU, and the model does not crash. This approach would help increase the library's performance: at the moment, on the GPU in single-threaded mode, it processes 30 images per second, and I believe throughput can be pushed higher. A sketch of the idea follows.
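A minimal sketch of that suggestion, assuming a hypothetical wrapper around the ONNX Runtime `InferenceSession`; the class name, the `useGpuLock` constructor parameter, and the field names are illustrative, not part of the library's API.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using Microsoft.ML.OnnxRuntime;

// Hypothetical wrapper: when running on GPU, an optional SemaphoreSlim serializes
// calls to InferenceSession.Run while the rest of the pipeline (preprocessing,
// post-processing) stays multithreaded. On CPU the lock is skipped entirely.
public sealed class GuardedInference : IDisposable
{
    private readonly InferenceSession _inference;
    private readonly SemaphoreSlim? _gpuLock;   // null when running on CPU

    public GuardedInference(string modelPath, bool useGpuLock = false)
    {
        _inference = new InferenceSession(modelPath);
        _gpuLock = useGpuLock ? new SemaphoreSlim(1, 1) : null;
    }

    public IDisposableReadOnlyCollection<DisposableNamedOnnxValue> Run(
        IReadOnlyCollection<NamedOnnxValue> inputs)
    {
        _gpuLock?.Wait();
        try
        {
            // Only one thread at a time reaches the GPU session when the lock is enabled.
            return _inference.Run(inputs);
        }
        finally
        {
            _gpuLock?.Release();
        }
    }

    public void Dispose()
    {
        _inference.Dispose();
        _gpuLock?.Dispose();
    }
}
```

Callers keep their existing multithreaded preprocessing and post-processing; only the `Run` call itself is serialized, which is what keeps the GPU path stable without throttling the rest of the pipeline.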