
client wait forever if kube-apiserver restart in slb environment #107266

Closed
smileusd opened this issue Dec 31, 2021 · 17 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@smileusd
Contributor

smileusd commented Dec 31, 2021

What happened?

We found that if an SLB (server load balancer) sits between the kube-apiserver and the client, then when the kube-apiserver restarts, the client's watch connection is not closed and waits for an event forever. In this setup there are two TCP connections: kube-apiserver <-> SLB and SLB <-> client. Closing the first connection does not guarantee that the second one is also closed, if the SLB handles long-lived connections poorly.

What did you expect to happen?

The client should rebuild the watcher and watch the new events.

How can we reproduce it (as minimally and precisely as possible)?

Restart the kube-apiserver, patch a new resource, and watch the client's log: nothing shows up.

Anything else we need to know?

No response

Kubernetes version

v1.18.8

Cloud provider

no

OS version

# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here

# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@smileusd smileusd added the kind/bug Categorizes issue or PR as related to a bug. label Dec 31, 2021
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Dec 31, 2021
@smileusd
Contributor Author

/sig api-machinery
/wg api-machinery

@k8s-ci-robot k8s-ci-robot added the sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. label Dec 31, 2021
@k8s-ci-robot
Contributor

@smileusd: The label(s) wg/api-machinery cannot be applied, because the repository doesn't have them.

In response to this:

/sig api-machinery
/wg api-machinery

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Dec 31, 2021
@aojea
Member

aojea commented Dec 31, 2021

This has been fixed in recent versions, which use HTTP/2: by default there is a 30s idle timeout on the connection, which detects that the connection is stale and restarts it, solving this problem.

@aojea
Member

aojea commented Dec 31, 2021

I think it went in in 1.19 #87615

@smileusd
Contributor Author

smileusd commented Jan 4, 2022

@aojea Thank you for the reply. But I cannot find the "use of closed network connection" log line on the client; in my case the client did not receive any events, including EOF or IsProbableEOF. The connection from the SLB to the client is still alive. But as you said, the HTTP/2 health check added in Go 1.16.5 can detect an idle connection and may also solve this problem. I will try it. Thanks.

@smileusd
Contributor Author

smileusd commented Jan 4, 2022

@aojea I tried updating Go from 1.14 to 1.16.12 and built a new version, but the issue still exists. The Go update does not fix this problem.

@aojea
Member

aojea commented Jan 4, 2022

@aojea I tried updating Go from 1.14 to 1.16.12 and built a new version, but the issue still exists. The Go update does not fix this problem.

It is not a Go version update that you need; you need Kubernetes 1.19 or greater.
Bear in mind that the oldest release still supported is 1.20 (https://kubernetes.io/releases/); this bug has to be present in one of the supported versions in order to be accepted.

@leilajal
Contributor

leilajal commented Jan 4, 2022

/triage accepted
/cc @wojtek-t @yliaog @deads2k

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jan 4, 2022
@smileusd
Contributor Author

smileusd commented Jan 5, 2022

@aojea This is my own controller, using client-go. I updated Go to 1.16.12, client-go to the newest version, and am using the newest "golang.org/x/net/http2" package, but it still does not work. It looks like HTTP/2 and the health check have been enabled.

@aojea
Member

aojea commented Jan 5, 2022

@aojea This is my own controller, using client-go. I updated Go to 1.16.12, client-go to the newest version, and am using the newest "golang.org/x/net/http2" package, but it still does not work. It looks like HTTP/2 and the health check have been enabled.

how can I reproduce it? please, be specific :)

@smileusd
Contributor Author

smileusd commented Jan 6, 2022

@aojea It may be a special setup. In our environment we use an SLB (gateway) between the kube-apiserver and the client. I am not sure about the SLB's internal technical details, but it produces two TCP connections: kube-apiserver <-> SLB and SLB <-> client. From what I can see, the SLB keeps the long-lived client-side connection open after the kube-apiserver pod is deleted. The client-side connection is still active and healthy, but the controller cannot watch any events because the server has moved to another IP. We are trying to configure the SLB to automatically close the client connection.
I am not sure whether this is an issue with the SLB, the client, or both.
Hope this is helpful.

@smileusd
Contributor Author

smileusd commented Jan 6, 2022

Adding a client idle timeout on the SLB resolved this. The client receives an EOF and rebuilds the watcher:

I0106 10:16:51.074244       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
I0106 10:16:51.074279       1 reflector.go:357] tess.io/ebay/vm-volume/pkg/controller/attach/attach_controller.go:282: Watch close - *v1.VmVolumeAttachment total 0 items received

@aojea
Member

aojea commented Jan 7, 2022

Adding a client idle timeout on the SLB resolved this. The client receives an EOF and rebuilds the watcher:

yeah, that should be added by default

// The following enables the HTTP/2 connection health check added in
// https://github.com/golang/net/pull/55. The health check detects and
// closes broken transport layer connections. Without the health check,
// a broken connection can linger too long, e.g., a broken TCP
// connection will be closed by the Linux kernel after 13 to 30 minutes
// by default, which caused
// https://github.com/kubernetes/client-go/issues/374 and
// https://github.com/kubernetes/kubernetes/issues/87615.
t2.ReadIdleTimeout = time.Duration(readIdleTimeoutSeconds()) * time.Second
t2.PingTimeout = time.Duration(pingTimeoutSeconds()) * time.Second

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 7, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 7, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:


/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
