Parallel Processing Deepstack Time Issue #13
First, try the latest version - there is a zip attached to the other MQTT issue/feature request thread here. My first guess is that if a URL or image fails for some reason, it gets retried, and the new Deepstack time gets appended to the last try.
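To illustrate that guess (a minimal C# sketch with hypothetical names, not the actual AI Tool code): if the stopwatch is started once outside a retry loop, each retry's logged time includes all earlier attempts:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class RetryTimingSketch
{
    // Hypothetical detection call; stands in for the real Deepstack request.
    static async Task<bool> TryDetectAsync(string url)
    {
        await Task.Delay(500); // simulate ~500ms of server work
        return false;          // pretend the attempts fail, forcing retries
    }

    static async Task Main()
    {
        var sw = Stopwatch.StartNew(); // started ONCE, outside the retry loop
        for (int attempt = 1; attempt <= 3; attempt++)
        {
            bool ok = await TryDetectAsync("http://127.0.0.1:5000");
            // BUG: ElapsedMilliseconds accumulates across ALL attempts, so the
            // "Deepstack time" logged on the last try is ~1500ms, not the
            // ~500ms the server actually took for that one call.
            Console.WriteLine($"Attempt {attempt}: Deepstack time = {sw.ElapsedMilliseconds}ms");
            if (ok) break;
            // Fix: sw.Restart() here so each attempt is timed on its own.
        }
    }
}
```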
I just compiled the latest version this morning and did all my tests with that. I don't think they are failing, as everything works when using either IP on its own. Times only get weird when I have more than one IP set. I'll try to get a log output later today.
I reworked a few things; try the latest. The FIRST time I submit something to a Deepstack URL is almost always five times as long as normal, so make sure that's not what is happening. Also, the Deepstack time in the log is now PER-URL - before, it was a combined set of stats for all URLs. I have two IPs set up and the times for both seem normal.
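Something like the following sketch is one way per-URL stats could be kept separate (hypothetical names, not the project's actual code), so one server's timings never blend into another's:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

// Hypothetical per-URL timing tracker; purely illustrative.
class UrlStats
{
    private readonly ConcurrentDictionary<string, List<long>> _timesMs =
        new ConcurrentDictionary<string, List<long>>();

    public void Record(string url, long elapsedMs)
    {
        var list = _timesMs.GetOrAdd(url, _ => new List<long>());
        lock (list) list.Add(elapsedMs);
    }

    // Average/min/max for one URL only, never mixed with other servers.
    public (double avg, long min, long max) StatsFor(string url)
    {
        var list = _timesMs[url];
        lock (list) return (list.Average(), list.Min(), list.Max());
    }
}
```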
Wasn't sure whether to open this as a new subject, but as it's related: I notice that with the Deepstack URLs it takes them in order. Would it be possible for them to be prioritized? So it uses URL1 first, e.g. URL1, URL1, URL2, URL1, URL2, URL3, etc.
@Tinbum1 - what is the use case? I kind of like the current load balancing, where it spreads the load evenly between instances and doesn't abuse one machine's CPU. BTW, how is the version from last night working for you?
The thought was that if you have one URL that can process quicker than the others, it might be best to use that one most. Everything seems to be going well for me.
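For illustration only (a sketch of the two ideas being discussed, not the AI Tool's actual selection code; the average-time lookup is a hypothetical input), the current even spread versus a "fastest first" pick might look like:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading;

class UrlPicker
{
    private readonly string[] _urls;
    private int _next = -1;

    public UrlPicker(string[] urls) => _urls = urls;

    // Current behavior (as described): even round-robin across all URLs.
    // (Ignores int overflow for brevity.)
    public string RoundRobin()
    {
        int i = Interlocked.Increment(ref _next);
        return _urls[i % _urls.Length];
    }

    // Proposed alternative: prefer the URL with the lowest recent average
    // time. Untried URLs default to 0 so they get sampled at least once.
    public string Fastest(IReadOnlyDictionary<string, double> avgMs) =>
        _urls.OrderBy(u => avgMs.TryGetValue(u, out var t) ? t : 0).First();
}
```

The trade-off is roughly what the two comments above describe: round-robin keeps any single machine's CPU from being hammered, while fastest-first maximizes throughput when one instance is clearly quicker.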
@SHerms - how is the latest working for you?
The last version I compiled was on 9/11. It is hard to tell if it is better; the DeepStack times still jump around a lot.
When compiling I get this warning. Not sure if this comes into play when using multiple Deepstack URLs:

Source\Repos\bi-aidetection\src\UI\AITOOL.cs(331,35,331,49): warning CS1998: This async method lacks 'await' operators and will run synchronously. Consider using the 'await' operator to await non-blocking API calls, or 'await Task.Run(...)' to do CPU-bound work on a background thread.
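For context, CS1998 only means a method is marked async but never awaits anything, so it runs synchronously; it is a style warning rather than a timing bug. A minimal reproduction and two common fixes (illustrative code, not the AITOOL.cs method itself):

```csharp
using System.Threading.Tasks;

class Cs1998Demo
{
    // Triggers CS1998: marked 'async' but contains no 'await'.
    static async Task<int> BadAsync()
    {
        return 42; // executes entirely synchronously despite 'async'
    }

    // Fix 1: drop 'async' and return a completed task directly.
    static Task<int> FixedSync() => Task.FromResult(42);

    // Fix 2: if the work is CPU-bound, push it to the thread pool.
    static Task<int> FixedCpuBound() => Task.Run(() => 42);
}
```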
I wouldn't worry about that warning. But I did discover that my HttpClient requests to the Deepstack server were probably building up connections that were never released (even with "using"). Working on understanding "IHttpClientFactory" to fix it. It may have been causing your task-canceled errors.
OK, see if the latest is more stable. Each Deepstack URL now has its own dedicated HttpClient, and it gets reused to prevent too many connections from building up behind the scenes.
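A common shape for that fix (a sketch of the pattern with hypothetical names and an assumed Deepstack endpoint path, not the actual AI Tool code) is one long-lived HttpClient per URL, created once and reused for every request:

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;

static class DeepstackClients
{
    // One long-lived HttpClient per Deepstack URL, created once and reused.
    // Creating a new HttpClient per request (even inside 'using') can leave
    // sockets in TIME_WAIT and exhaust connections under load.
    private static readonly ConcurrentDictionary<string, HttpClient> Clients =
        new ConcurrentDictionary<string, HttpClient>();

    public static HttpClient For(string baseUrl) =>
        Clients.GetOrAdd(baseUrl, url => new HttpClient
        {
            BaseAddress = new Uri(url),
            Timeout = TimeSpan.FromSeconds(30) // illustrative value
        });

    // Hypothetical usage: post a JPEG to a Deepstack detection endpoint.
    public static async Task<string> DetectAsync(string baseUrl, byte[] jpeg)
    {
        using (var form = new MultipartFormDataContent())
        {
            form.Add(new ByteArrayContent(jpeg), "image", "image.jpg");
            using (var resp = await For(baseUrl).PostAsync("v1/vision/detection", form))
            {
                return await resp.Content.ReadAsStringAsync();
            }
        }
    }
}
```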
Everything has been working great. Thanks for all your work!
First off, I just want to say thank you for adding the multiple Deepstack IPs. This feature is extremely helpful for my server without a GPU and allows me to use Deepstack for multiple cameras.
So I've finally had some time to play around with multiple Deepstack Docker containers, and I noticed some weird timing in the AI Tool logs.
If I use one IP in the Deepstack URL(s) field, the log shows the DeepStack Time as under 1000ms for almost all posts. If I add another IP to the Deepstack URL(s) field, the log shows a Deepstack Time double or triple what you would normally see.
The two Docker containers are exactly the same. I can swap out and use either IP, and both show the same ~1000ms time. It is only when I use more than one that the times seem wrong. I would have expected the times to stay the same but the image queue not to pile up as much.
I'm not sure if this is just a log issue and the times are really not that high. Everything seems to be working fine in Blue Iris.