Exec timeout does not kick users out on kubectl client 1.21 #102569
Related to #97083.

One perspective on this. We use the containerd stream idle timeout to implement a requirement that …

/assign
/sig node
/priority important-longterm
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

/remove-lifecycle stale
Kubernetes v1.21 has been out of support for quite some time. Do we need to keep this issue open?

@Joseph-Goergen does this problem still exist?
Yes, we expected to get kicked out after 15 minutes.
I don't think this is a bug; I think it's a misapplication of stream_idle_timeout to mean "no user activity", since the keepalive ping (needed to keep slow-moving log and port-forward connections stable) makes the connection "not idle".
The option …
To add to what @saschagrunert mentioned above: there are no users left of this field in the underlying type, and it is not used anywhere any more. I found out about this while looking into idle stream timeouts used together with CRI-O. That said, I also believe that streaming was moved out of the kubelet some time ago, and the CRI-compatible runtimes are now expected to handle this functionality (CRI-O already does). @Joseph-Goergen, I believe you can set this for containerd too, per:
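For reference, the containerd CRI plugin exposes this as `stream_idle_timeout` in its config file. A minimal sketch (the 15m value and file path are illustrative, not a recommendation):

```toml
# /etc/containerd/config.toml (sketch; containerd 1.5.x CRI plugin)
[plugins."io.containerd.grpc.v1.cri"]
  # Close streaming sessions (exec, attach, port-forward) after this
  # much inactivity. Defaults to 4h if unset.
  stream_idle_timeout = "15m"
```

Note that, as discussed below, the client-side SPDY keepalive ping can keep the session "active" so this timeout is never reached.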
@liggitt @kwilczynski we do set …

@rtheis we already have …
@rtheis, we are troubleshooting a similar issue in OpenShift at the moment, albeit affecting our … That said, there is a change in dependency for the CLI, both … If you grab …, any newer release will, at least in my testing, have an inconsistent behaviour, one of the: …
I am not sure what is causing this. However, I am positive that it's not the CRI, so it's neither CRI-O nor containerd.
Update: this appears to be related to SPDY PING, contrary to what I initially thought: when I reverted the change from #97083 (part of the 1.21 release), the idle timeout worked as expected again. Perhaps the solution suggested in #115493 would be a way forward. That said, another person reports that disabling SPDY PING didn't help them much, per: #115493 (comment). There is also a matter of expectations: many of our users expect idle connections to be closed, preferably without having to configure anything on the client side (which would then control the behaviour of this feature), and this might not be desirable.
It is the SPDY ping: containerd/containerd#5563 (comment). This is a case of a bug becoming a feature: the SPDY library conflated ping control frames with data frames, so the pings count as session activity, renewing the session so that it never reaches the timeout. Ping control frames should not be considered for the stream timeout; however, it is not possible to fix that in a backwards-compatible way, and my suggestion to add a new timeout based on session activity was discarded. Maybe now that we are moving from SPDY to WebSockets we can fix it, but as of today there is no option other than disabling SPDY pings.
What happened:
Exec-ing into a pod does not kick you out.
What you expected to happen:
To get kicked out of the pod after the timeout configured in the container runtime is reached.
How to reproduce it (as minimally and precisely as possible):
kubectl exec -it <any pod> -- sh
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g.: cat /etc/os-release):
- Kernel (e.g. uname -a):
- Container runtime: we're using containerd 1.5.2 and konnectivity 0.19
/sig cli