
Terminal size calculation is wrong when upgrading HTTP to the SPDY stream protocol #68648

Closed
ghost opened this issue Sep 14, 2018 · 12 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@ghost

ghost commented Sep 14, 2018

/kind bug

What happened:
When the pod/exec API is invoked with stdin=true&stdout=true&tty=true, the terminal resize handler runs to process resize events, but judging from the output log at loglevel=8, the terminal size calculation is wrong (at least the width):

I0913 22:18:39.213492    2664 round_trippers.go:383] POST https://apiserver/api/v1/namespaces/default/pods/docker-registry-3-bcxbs/exec?command=%2Fbin%2Fsh&container=registry&container=registry&stdin=true&stdout=true&tty=true
                                       I0913 22:18:39.213544    2664 round_trippers.go:390] Request Headers:
                                                                                                            I0913 22:18:39.213554    2664 round_trippers.go:393]     X-Stream-Protocol-Version: v4.channel.k8s.io
                                                                                                                                                                                                                 I0913 22:18:39.213562    2664 round_trippers.go:393]     X-Stream-Protocol-Version: v3.channel.k8s.io
                                                                                                   I0913 22:18:39.213568    2664 round_trippers.go:393]     X-Stream-Protocol-Version: v2.channel.k8s.io
                                                                                                                                                                                                        I0913 22:18:39.213580    2664 round_trippers.go:393]     X-Stream-Protocol-Version: channel.k8s.io
                                                                                       I0913 22:18:39.213587    2664 round_trippers.go:393]     User-Agent: kubectl/v1.11.0+d4cacc0 (linux/amd64) kubernetes/d4cacc0
 I0913 22:18:39.265686    2664 round_trippers.go:408] Response Status: 101 Switching Protocols in 52 milliseconds
                                                                                                                 I0913 22:18:39.265732    2664 round_trippers.go:411] Response Headers:
                                                                                                                                                                                       I0913 22:18:39.265741    2664 round_trippers.go:414]     Connection: Upgrade
                                                I0913 22:18:39.265748    2664 round_trippers.go:414]     Upgrade: SPDY/3.1
                                                                                                                          I0913 22:18:39.265754    2664 round_trippers.go:414]     X-Stream-Protocol-Version: v4.channel.k8s.io
            I0913 22:18:39.265761    2664 round_trippers.go:414]     Date: Fri, 14 Sep 2018 02:18:40 GMT
                                                                                                        sh-4.2$
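The staircase above is the classic symptom of a terminal left in raw mode: with output post-processing (OPOST/ONLCR) disabled, each "\n" moves the cursor down without returning it to column 0. A minimal simulation of that rendering (renderRaw is a hypothetical helper for illustration, not kubectl code):

```go
package main

import (
	"fmt"
	"strings"
)

// renderRaw simulates a terminal whose ONLCR translation is disabled:
// "\n" moves to the next row but keeps the current column, so each line
// starts where the previous one ended.
func renderRaw(lines []string) string {
	var out []string
	col := 0
	for _, line := range lines {
		out = append(out, strings.Repeat(" ", col)+line)
		col += len(line) // cursor stays at the end of the line
	}
	return strings.Join(out, "\n")
}

func main() {
	fmt.Println(renderRaw([]string{
		"Request Headers:",
		"Connection: Upgrade",
		"Upgrade: SPDY/3.1",
	}))
}
```

In cooked mode the terminal translates "\n" to "\r\n", which is why the expected output in the next section renders flush-left.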

What you expected to happen:
The terminal size should be calculated correctly, and the log output should render flush-left like this:

I0913 22:18:39.213492    2664 round_trippers.go:383] POST https://apiserver/api/v1/namespaces/default/pods/docker-registry-3-bcxbs/exec?command=%2Fbin%2Fsh&container=registry&container=registry&stdin=true&stdout=true&tty=true
I0913 22:18:39.213544    2664 round_trippers.go:390] Request Headers:
I0913 22:18:39.213554    2664 round_trippers.go:393]     X-Stream-Protocol-Version: v4.channel.k8s.io
I0913 22:18:39.213562    2664 round_trippers.go:393]     X-Stream-Protocol-Version: v3.channel.k8s.io
I0913 22:18:39.213568    2664 round_trippers.go:393]     X-Stream-Protocol-Version: v2.channel.k8s.io
I0913 22:18:39.213580    2664 round_trippers.go:393]     X-Stream-Protocol-Version: channel.k8s.io
I0913 22:18:39.213587    2664 round_trippers.go:393]     User-Agent: kubectl/v1.11.0+d4cacc0 (linux/amd64) kubernetes/d4cacc0
I0913 22:18:39.265686    2664 round_trippers.go:408] Response Status: 101 Switching Protocols in 52 milliseconds
I0913 22:18:39.265732    2664 round_trippers.go:411] Response Headers:
I0913 22:18:39.265741    2664 round_trippers.go:414]     Connection: Upgrade
I0913 22:18:39.265748    2664 round_trippers.go:414]     Upgrade: SPDY/3.1
I0913 22:18:39.265754    2664 round_trippers.go:414]     X-Stream-Protocol-Version: v4.channel.k8s.io
I0913 22:18:39.265761    2664 round_trippers.go:414]     Date: Fri, 14 Sep 2018 02:18:40 GMT
sh-4.2$

How to reproduce it (as minimally and precisely as possible):
kubectl exec -it --loglevel=8 podname /bin/sh

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2018-09-07T12:30:06Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2018-09-07T12:30:06Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): kubectl os: fedora 28, gnome-terminal 3.28.2
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. kind/bug Categorizes issue or PR as related to a bug. labels Sep 14, 2018
@ghost
Author

ghost commented Sep 14, 2018

/sig cli

@k8s-ci-robot k8s-ci-robot added sig/cli Categorizes an issue or PR as relevant to SIG CLI. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Sep 14, 2018
@yue9944882
Member

/cc @adohe

@ghost
Author

ghost commented Sep 18, 2018

After digging deeper, it looks like the terminal raw mode handling goes wrong in this scenario.
Still digging and trying to prepare a patch for this.

@ghost
Author

ghost commented Sep 25, 2018

After more digging, I found the root cause:
for exec we switch the terminal mode cooked -> raw -> cooked,
and switching back from raw to cooked is what triggers this issue.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 24, 2018
@jianzhangbjz
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 25, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 25, 2019
@jianzhangbjz
Contributor

/remove-lifecycle stale

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 25, 2019
@jianzhangbjz
Contributor

/remove-lifecycle rotten

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


4 participants