
Requests hang if watch=true and there are no objects matching query #263

Closed
corruptmem opened this issue Mar 21, 2019 · 10 comments
Labels: lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)

Comments

@corruptmem

To reproduce:

var config = KubernetesClientConfiguration.BuildDefaultConfig();
var client = new Kubernetes(config);

Console.WriteLine("FOO");
var podStream = await client.ListNamespacedPodWithHttpMessagesAsync(
    "default",
    labelSelector: "app=1234", // <----- TRY COMMENTING ME OUT
    watch: true);

Console.WriteLine("BAR");

podStream.Watch<V1Pod>(
    (evt, pod) => Console.WriteLine($"{evt}: {pod?.Metadata?.Name}"),
    Console.WriteLine,
    () => Environment.Exit(0));

Console.WriteLine("WATCHING");
Console.ReadKey();

Expected: outputs FOO, BAR, WATCHING, and then prints an event each time a matching pod is created.

Actual: outputs FOO, then the program hangs until at least one matching pod is created.

If you comment out the labelSelector, the problem goes away - presumably only because the default namespace already contains some pods; on a cluster with no pods in that namespace it would still hang.


Although it is possible to work around this by treating ListNamespacedPodWithHttpMessagesAsync (and similar) calls as taking an indefinite time to return under normal circumstances, that is not the behaviour I expected from the watcher.

The root cause appears to be the LineSeparatedHttpContent class used in the WatcherDelegatingHandler: inside SerializeToStreamAsync it "peeks" the first line by calling ReadLineAsync, which blocks until there is something to read.
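
To illustrate the blocking behaviour, here is a minimal, self-contained sketch (illustrative only, not the client's actual code): an anonymous pipe whose writer has sent nothing yet stands in for a watch response with no matching objects.

using System;
using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;

class PeekDemo
{
    static async Task Main()
    {
        // The server end writes, the client end reads; nothing has been written yet,
        // which mimics a watch stream for a query that matches no objects.
        using var server = new AnonymousPipeServerStream(PipeDirection.Out);
        using var client = new AnonymousPipeClientStream(PipeDirection.In, server.ClientSafePipeHandle);
        using var reader = new StreamReader(client);
        using var writer = new StreamWriter(server) { AutoFlush = true };

        var peek = reader.ReadLineAsync(); // the "peek" of the first line

        // ReadLineAsync does not complete until the first line (i.e. the first watch
        // event) arrives, so anything awaiting it before returning the response hangs.
        var winner = await Task.WhenAny(peek, Task.Delay(TimeSpan.FromSeconds(2)));
        Console.WriteLine(winner == peek
            ? "Peek completed immediately (not expected here)"
            : "Still no first line after 2s - this is where the List call hangs");

        // Simulate a pod finally being created: one line arrives and the peek completes.
        await writer.WriteLineAsync("{\"type\":\"ADDED\", ...}");
        Console.WriteLine($"Peek completed after the first event: {await peek}");
    }
}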

@tg123
Member

tg123 commented Mar 22, 2019

I think it is by design.
A call with watch=true returns a stream, a bit like downloading a big file.
Without the WatcherDelegatingHandler the call would hang until everything had been read, which never happens because a watched stream does not end until it times out.

Maybe the better solution is to wrap List and Watch into one sugared Watch call.

What do you think?

@corruptmem
Author

@tg123 This is essentially what I am now doing, but it makes everything more difficult than it needs to be: I essentially have to translate the async-based List call into working the same way as the callback-based Watch call, something like the sketch below.
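
For reference, roughly what that looks like (a sketch only, reusing the names from the repro above; ContinueWith is just one way of not awaiting the call directly):

// Treat the List call as long-running instead of awaiting it: start it, then hook the
// watcher up from a continuation that runs once the first line of the stream arrives.
var listTask = client.ListNamespacedPodWithHttpMessagesAsync(
    "default",
    labelSelector: "app=1234",
    watch: true);

_ = listTask.ContinueWith(t =>
{
    // This only runs once the server has sent something, i.e. once at least one
    // matching pod exists or an event has occurred.
    t.Result.Watch<V1Pod>(
        (evt, pod) => Console.WriteLine($"{evt}: {pod?.Metadata?.Name}"),
        Console.WriteLine,
        () => Environment.Exit(0));
}, TaskContinuationOptions.OnlyOnRanToCompletion);

Console.WriteLine("WATCHING"); // reached immediately, unlike in the repro above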

However I think it is possible to implement the List call so it returns immediately if watch is true. I implemented it as a test by having the LineSeparatedHttpContent.SerializeToStreamAsync method not bother peeking the stream, which makes the HttpContent stream (as returned by HttpContent.ReadAsStreamAsync) empty, rather than the first line of the watch result.

It works for my purposes, but I'm not sure what doing that might break in the rest of the client.

@brendandburns
Contributor

List and Watch should be separate calls...

@tg123
Member

tg123 commented Mar 27, 2019

call WatchNamespacedPodAsync?

@corruptmem
Author

@tg123 WatchNamespacedPodAsync has the description: "deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter.". I believe the Watch* APIs use the /watch API that was deprecated by kubernetes/kubernetes#65147
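
For context, the pattern the deprecation notice points at looks roughly like this (parameter names follow the generated client used in the repro above; "my-pod" is a placeholder):

// Watch a single object through the list operation by combining watch: true with a
// fieldSelector, instead of the deprecated /watch/... endpoints.
var response = await client.ListNamespacedPodWithHttpMessagesAsync(
    "default",
    fieldSelector: "metadata.name=my-pod", // hypothetical pod name
    watch: true);

// ...then attach callbacks with response.Watch<V1Pod>(...) exactly as in the repro above.
// Note that, until this issue is addressed, the call still hangs while no matching object exists.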

@qmfrederik
Contributor

@corruptmem Yes, that seems to be correct. We should be able to just update the Watch* APIs to use the ?watch=true parameter, though.

It'd involve updating the KubernetesWatchGenerator project and then re-running the generator.

Would you want to pop a PR for that?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 1, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 31, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
