Time taken to return to application from hiredis library seems to be too long (>60sec) #142
Comments
This relates to #19, and the conditions haven't changed since.
Yes, connect-timeout is currently set to 2 seconds.
So, a blocking connect() call will effectively see a max wait time of the connect_timeout value, right?
I'm not following you regarding this part: "I guess the amount of outstanding callbacks adds on time, depending on max-retry-count."
What I meant was this piece of code, which runs after a failed connect() call.
Hope I made this clear, @bjosv.
Update:
Thanks for the update, @bjosv. Making the reconnection attempts async would really help in reducing this wait period. I'll wait for the changes and then retry the scenario.
A change covering this issue has now been delivered; hope it fixes the problems in your setup as well.
Nice! Thanks, @bjosv, I'll try pulling this and see how it goes.
Hi,
We encountered a case in which processing seems to be stuck in the hiredis-cluster library, not returning to the application for more than 60 seconds. We have a mechanism in place to check whether a thread is taking too long to report a heartbeat; that heartbeat is missed in this case, resulting in an exception being thrown.
This is the backtrace from the flow:
From what I could gather from the backtrace and frames, this is the flow of events:
192.168.65.161, 192.168.121.79, 192.168.235.38, 192.168.221.225, 192.168.2.202, 192.168.165.198, 192.168.121.12, 192.168.165.182
I assume the analysis above holds here as well. If so, is there a way, or any suggestion, to improve this? Do you see any other way of doing this so that processing doesn't get stuck in hiredis-cluster?
@bjosv