
Respect timeout in ExecInContainer #87281

Closed · wants to merge 1 commit

Conversation

@tedyu (Contributor) commented Jan 16, 2020

What type of PR is this?
/kind bug

What this PR does / why we need it:
As #26895 states, timeoutSeconds is ignored for exec probes.

This PR adds code to respect the timeout parameter passed to ExecInContainer.

Which issue(s) this PR fixes:
Fixes #26895

Does this PR introduce a user-facing change?: NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
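To make the intent concrete, below is a minimal sketch of the general "run the exec in a goroutine and stop waiting after the timeout" pattern. It is an illustration only, not the PR's actual diff; runExec and execWithTimeout are hypothetical stand-ins for the dockershim exec plumbing.

```go
package main

import (
	"fmt"
	"time"
)

// runExec is a hypothetical stand-in for starting a Docker exec session and
// waiting for it to finish (the real handler drives the Docker exec API).
func runExec(cmd []string) error {
	time.Sleep(10 * time.Second) // pretend the probed command is slow
	return nil
}

// execWithTimeout runs the command in its own goroutine and stops waiting
// once the probe's timeout elapses. The result channel is buffered so the
// goroutine can finish its send and exit even after the caller has given up.
func execWithTimeout(cmd []string, timeout time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- runExec(cmd) }()

	if timeout <= 0 {
		return <-done // no timeout configured: keep the old behavior and wait
	}
	select {
	case err := <-done:
		return err
	case <-time.After(timeout):
		// Docker provides no way to kill the exec session, so the command
		// keeps running in the container; we can only report the timeout.
		return fmt.Errorf("command %v timed out after %v", cmd, timeout)
	}
}

func main() {
	fmt.Println(execWithTimeout([]string{"sh", "-c", "sleep 10"}, 2*time.Second))
}
```

The essential point is that the caller stops blocking once the timeout elapses, while the exec session itself keeps running because Docker has no API to kill it.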


@k8s-ci-robot added the release-note-none, kind/bug, size/L, cncf-cla: yes, needs-sig, and needs-priority labels on Jan 16, 2020
@tedyu (Contributor, Author) commented Jan 16, 2020

This PR is based on @rhcarvalho's earlier PR.

@k8s-ci-robot added the area/kubelet and sig/node labels and removed the needs-sig label on Jan 16, 2020
@k8s-ci-robot (Contributor)
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: tedyu
To complete the pull request process, please assign vishh
You can assign the PR to them by writing /assign @vishh in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@tedyu (Contributor, Author) commented Jan 16, 2020

/test pull-kubernetes-e2e-gce

@mattjmcnaughton (Contributor) left a comment

At a high level, I'm wondering if it's truly possible for us to "respect" the timeout in ExecInContainer when Docker doesn't actually support killing the process. In other words, we can "respect" the timeout by failing the readiness probe and trying it again, BUT that raises the risk of us attempting to run the process while previous attempts are still running.

Might we instead have to wait on this fix until Docker supports killing processes exec'ed in a container?

@@ -58,6 +59,10 @@ func (d *dockerExitError) ExitStatus() int {
// NativeExecHandler executes commands in Docker containers using Docker's exec API.
type NativeExecHandler struct{}

// ExecInContainer executes a command in a Docker container. It may leave a
// goroutine running the process in the container after the function returns
// because of a timeout. However, the goroutine does not leak, it terminates
Contributor (inline review comment)

If this ExecInContainer process never exits, we would leak goroutines, correct?
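For illustration, whether the goroutine is stranded depends on how the waiting side is wired. Assuming the handler uses the common buffered-channel pattern (which the doc comment above implies), a timed-out probe does not leak the goroutine in the Go sense; an unbuffered channel would. The self-contained demo below, with invented names, shows the difference.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// slowCall stands in for an exec whose command outlives the probe timeout.
func slowCall() error {
	time.Sleep(200 * time.Millisecond)
	return nil
}

// waitWithTimeout gives up after 50ms. Whether the worker goroutine can exit
// afterwards depends entirely on the channel's buffering.
func waitWithTimeout(buffered bool) {
	var done chan error
	if buffered {
		done = make(chan error, 1) // send always succeeds; goroutine exits when slowCall returns
	} else {
		done = make(chan error) // after a timeout, the send blocks forever: a real leak
	}
	go func() { done <- slowCall() }()

	select {
	case <-done:
	case <-time.After(50 * time.Millisecond):
	}
}

func main() {
	for i := 0; i < 100; i++ {
		waitWithTimeout(false)
	}
	time.Sleep(time.Second)
	fmt.Println("goroutines after unbuffered waits:", runtime.NumGoroutine()) // ~101: all stranded

	for i := 0; i < 100; i++ {
		waitWithTimeout(true)
	}
	time.Sleep(time.Second)
	fmt.Println("goroutines after buffered waits:", runtime.NumGoroutine()) // ~101: no further growth
}
```

Either way, if the command inside the container never exits, the goroutine blocked in the exec call stays alive for as long as that command does, which is the scenario the question is getting at.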

// Docker API doesn't support it. See
// https://github.com/docker/docker/issues/9098.
// For liveness probes this is probably okay, since the
// container will be restarted. For readiness probes it means
Contributor (inline review comment)

I'm a little worried about this behavior... specifically in the following situation:

Imagine we have a readiness probe configured to run every minute. The timeout is set to 5s, but the actual command we are executing never terminates.

Because we can't kill the process in the container, I believe we're going to create more and more processes in the container (and more and more goroutines), which seems dangerous to me?

@tedyu (Contributor, Author) commented Jan 16, 2020

I think this is where the previous PR got stalled.

I left a comment on the linked moby issue; surki updated his branch in Dec 2019. I also reviewed surki's commit. Hopefully that issue can get some traction.

For the readiness probe you mentioned, maybe we can adopt a threshold for the timeout so that we don't create too many processes.
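For what it's worth, one possible shape of the "threshold" idea is to cap how many exec probes may be in flight per container and fail fast once the cap is reached. The sketch below is hypothetical and not part of this PR; inflightLimiter and probeOnce are invented names.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var errTooManyInflight = errors.New("previous exec probes are still running")

// inflightLimiter caps outstanding exec probes for one container so that
// repeated timeouts cannot pile up an unbounded number of processes.
type inflightLimiter struct {
	mu      sync.Mutex
	current int
	max     int
}

func (l *inflightLimiter) acquire() bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.current >= l.max {
		return false
	}
	l.current++
	return true
}

func (l *inflightLimiter) release() {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.current--
}

// probeOnce wraps a single exec attempt with both the timeout and the cap.
func probeOnce(l *inflightLimiter, run func() error, timeout time.Duration) error {
	if !l.acquire() {
		return errTooManyInflight // fail the probe instead of starting yet another process
	}
	done := make(chan error, 1)
	go func() {
		defer l.release() // the slot frees only when the command actually exits
		done <- run()
	}()
	select {
	case err := <-done:
		return err
	case <-time.After(timeout):
		return fmt.Errorf("probe timed out after %v", timeout)
	}
}

func main() {
	l := &inflightLimiter{max: 3}
	neverReturns := func() error { time.Sleep(time.Hour); return nil }
	for i := 0; i < 5; i++ {
		fmt.Printf("attempt %d: %v\n", i, probeOnce(l, neverReturns, 10*time.Millisecond))
	}
}
```

Whether failing probes fast like this is acceptable behavior for readiness probes is exactly the open question in this thread.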

Contributor (inline review comment)

@dims thoughts?

I'm curious... is the lack of a timeout in ExecInContainer causing issues for users in production? I'm just wondering if doing nothing until this is fixed in moby is a feasible option...

@k8s-ci-robot (Contributor)
@tedyu: The following test failed, say /retest to rerun all failed tests:

Test name: pull-kubernetes-e2e-gce
Commit: 9061474 (link)
Rerun command: /test pull-kubernetes-e2e-gce

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 16, 2020
@tedyu (Contributor, Author) commented Apr 16, 2020

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Apr 16, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jul 15, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Aug 14, 2020
@SergeyKanzhelev (Member)

This PR solves the same problem as #94115, which is currently under active discussion. The agreement from today's SIG Node meeting was that the code change is only a small part of the fix and that we need to look carefully at the side effects this change may bring to production workloads. Please join the discussion at #94115.

/close

@k8s-ci-robot (Contributor)

@tedyu: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-rebase label on Sep 1, 2020
@k8s-ci-robot (Contributor)

@SergeyKanzhelev: Closed this PR.

In response to this:

This PR solves the same problem as #94115, which is currently under active discussion. The agreement from today's SIG Node meeting was that the code change is only a small part of the fix and that we need to look carefully at the side effects this change may bring to production workloads. Please join the discussion at #94115.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
area/kubelet
cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.)
kind/bug (Categorizes issue or PR as related to a bug.)
lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)
needs-priority (Indicates a PR lacks a `priority/foo` label and requires one.)
needs-rebase (Indicates a PR cannot be merged because it has merge conflicts with HEAD.)
release-note-none (Denotes a PR that doesn't merit a release note.)
sig/node (Categorizes an issue or PR as relevant to SIG Node.)
size/L (Denotes a PR that changes 100-499 lines, ignoring generated files.)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Probe timeout ignored by ExecAction
6 participants