Watch API. Handle connection loss. #12022

Closed
bm0 opened this issue Jun 17, 2020 · 2 comments
Comments

bm0 commented Jun 17, 2020

Hey.
I found that when the connection is lost, due to a restart of the etcd server or for other reasons, reading from the watch channel blocks forever.

I am not a gRPC expert; maybe this problem can be solved if the connection is configured correctly (I'm referring to grpc.DialOption).

Is there a way to handle connection loss?
I would like to get an error from the watch channel so that I can reconnect later, once the network or other problems are resolved. Help me please, if possible with an example. Thanks in advance.
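
For reference, a minimal sketch of the pattern described above, assuming the go.etcd.io/etcd/clientv3 package, a local endpoint, and a hypothetical key; without extra handling, the receive on the watch channel can simply block once the connection is lost:

```go
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

func main() {
	// Endpoint and key are assumptions for illustration.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Plain watch loop: if the connection to the server is lost, this
	// receive can block indefinitely, because the channel is neither
	// closed nor delivered an error response.
	for resp := range cli.Watch(context.Background(), "my-key") {
		if err := resp.Err(); err != nil {
			log.Printf("watch error: %v", err)
			return
		}
		for _, ev := range resp.Events {
			log.Printf("%s %q -> %q", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}
```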

agargi (Contributor) commented Jul 5, 2020

@bm0 It looks like you are looking for something like the change linked below. It wraps the watch context with 'WithRequireLeader', which ensures that an error is returned when the cluster is unable to determine a leader (after three re-election timeouts, with each election timeout defaulting to 1000 ms).

kubernetes/kubernetes#89488
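
A hedged sketch of that approach (not the code from the linked PR), again assuming go.etcd.io/etcd/clientv3 and a hypothetical key: the watch context is wrapped with clientv3.WithRequireLeader so the watch ends with an error instead of blocking, and the caller can back off and re-create it.

```go
package example

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

// watchWithLeader keeps a watch on key alive across failures. Wrapping the
// context with clientv3.WithRequireLeader makes the server cancel the watch
// with an error when the cluster cannot determine a leader, so the loop can
// exit the range instead of blocking forever, back off, and retry.
func watchWithLeader(cli *clientv3.Client, key string) {
	for {
		ctx, cancel := context.WithCancel(context.Background())
		for resp := range cli.Watch(clientv3.WithRequireLeader(ctx), key) {
			if resp.Canceled || resp.Err() != nil {
				log.Printf("watch ended: %v", resp.Err())
				break
			}
			for _, ev := range resp.Events {
				log.Printf("%s %q -> %q", ev.Type, ev.Kv.Key, ev.Kv.Value)
			}
		}
		cancel()
		// Hypothetical back-off before re-creating the watch once the
		// network or the cluster has recovered.
		time.Sleep(time.Second)
	}
}
```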

stale bot commented Oct 3, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 21 days if no further activity occurs. Thank you for your contributions.
