
Stop timeout isn't respected at shutdown/reboot #77873

Open
mrunalp opened this issue May 14, 2019 · 2 comments

@mrunalp (Contributor) commented May 14, 2019

What happened:
On node reboot or shutdown, systemd terminates containers without respecting the terminationGracePeriodSeconds set in the pod YAML.

What you expected to happen:
terminationGracePeriodSeconds is respected by systemd when using systemd as the cgroup manager.

How to reproduce it (as minimally and precisely as possible):

  1. Use systemd as the cgroup manager in your container runtime.
  2. Create a pod YAML with terminationGracePeriodSeconds set to 120 (see the example manifest after this list).
  3. Reboot the node.
  4. Observe that the containers receive SIGTERM, then systemd waits only its default stop timeout (typically 90 seconds) before SIGKILLing them, ignoring the 120-second grace period.
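
For reference, a minimal manifest matching step 2; the pod name, container name, and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-test
spec:
  terminationGracePeriodSeconds: 120
  containers:
  - name: sleeper
    image: busybox
    # Ignore SIGTERM so the container only exits on SIGKILL; the time from
    # reboot to exit then shows which stop timeout was actually applied.
    command: ["sh", "-c", "trap '' TERM; sleep 3600"]
```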

Anything else we need to know?:
This can be fixed by passing the stop timeout to the container runtime as part of the CreateContainer CRI API. That would let runtimes set the systemd TimeoutStopUSec property on the scope created for the container, overriding the default stop timeout with the value from terminationGracePeriodSeconds (a sketch of such an override follows below).
This needs changes across the stack, as runc currently provides no way to set TimeoutStopUSec on the systemd scope it creates for a container.
The behavior with the cgroupfs cgroup manager needs further investigation.
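
The issue doesn't prescribe an exact mechanism, but as a rough sketch using the go-systemd D-Bus bindings, a runtime could apply the override at the moment it creates the container's transient scope. The unit name, PID, and hard-coded grace period below are illustrative, not the actual runc/CRI change:

```go
// Sketch only: propagate the pod's terminationGracePeriodSeconds into the
// systemd scope created for a container, instead of inheriting systemd's
// default stop timeout (~90s).
package main

import (
	"time"

	systemd "github.com/coreos/go-systemd/dbus"
	godbus "github.com/godbus/dbus"
)

func main() {
	conn, err := systemd.New() // connect to the systemd manager on D-Bus
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	grace := 120 * time.Second // terminationGracePeriodSeconds from the pod spec

	props := []systemd.Property{
		// PID of the container's init process (illustrative; must be a
		// running process for a real scope).
		systemd.PropPids(12345),
		// Override the scope's stop timeout; the D-Bus property is
		// expressed in microseconds.
		{
			Name:  "TimeoutStopUSec",
			Value: godbus.MakeVariant(uint64(grace / time.Microsecond)),
		},
	}

	// Create the transient scope with the override applied from the start.
	ch := make(chan string, 1)
	if _, err := conn.StartTransientUnit("crio-example.scope", "replace", props, ch); err != nil {
		panic(err)
	}
	<-ch // wait for systemd to report the job result
}
```

Setting the property when the scope is created means the override is in place before systemd ever has a reason to stop the unit, rather than being patched in afterwards with systemctl set-property.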

Environment:

  • Kubernetes version (use kubectl version): All versions.

@mrunalp added the kind/bug label May 14, 2019

@k8s-ci-robot (Contributor) commented May 14, 2019

@mrunalp: There are no sig labels on this issue. Please add a sig label by either:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/sig-contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <group-name>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. See the group list.
The <group-suffix> in method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mrunalp (Contributor, Author) commented May 14, 2019 (comment minimized)
