
Cronjob leaves behind pods after jobs are deleted #71772

Closed
andrewsav-bt opened this issue Dec 6, 2018 · 5 comments · Fixed by #71801
Assignees: soltysh
Labels: kind/bug, priority/critical-urgent, sig/apps
Milestone: v1.13

Comments

@andrewsav-bt

What happened:

There seems to be a regression after the 1.13 release. A CronJob with default settings does not clean up completed pods. The jobs themselves are deleted as expected, but their pods are left behind. This seems to be related to #70872 @juanvallejo

What you expected to happen:

Expected the pods of the now non-existent child jobs of a CronJob to be cleaned up. This is the way it used to work prior to 1.13.

How to reproduce it (as minimally and precisely as possible):

Run kubectl apply on this manifest:

kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: cjtest
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          name: cjtest
          labels:
            app: cjtest
        spec:
          restartPolicy: "Never"
          containers:
          - name: cjtest
            image: hello-world

Wait more than 3 minutes and observe that, while only 3 successful jobs are retained (the default successfulJobsHistoryLimit), the pods from the now-deleted jobs are left behind and never get deleted.
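One way to watch this happen (illustrative commands, not from the original report; the app=cjtest label comes from the pod template in the manifest above):

kubectl get jobs                  # stays capped at 3 completed jobs
kubectl get pods -l app=cjtest    # keeps growing: one Completed pod per schedule tick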

Anything else we need to know?:

@liggitt pointed out the following relevant places in the code base:

https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/batch/job/strategy.go#L53
https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/cronjob/injection.go#L121

Deleting a job orphans its pods by default, and the cronjob controller doesn't do anything different from the default when deleting.
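For illustration (a sketch, not taken from the thread): since batch/v1 Jobs default to orphaning their dependents, an API delete that carries no propagation policy, which is what the controller issues, has the same effect as an explicit orphaning delete from kubectl. The job name below is a placeholder; the cleanup command reuses the label from the manifest above.

# Roughly reproduces the controller's behavior by hand: the job is removed
# but its pods are orphaned and stay behind (kubectl 1.13-era flag).
kubectl delete job <job-name> --cascade=false

# Interim workaround until the fix lands: delete the leftover pods by label.
kubectl delete pods -l app=cjtest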

Environment:

  • Kubernetes version (use kubectl version): 1.13.0
  • Cloud provider or hardware configuration: Ubuntu VM in vSphere
  • OS (e.g. from /etc/os-release): 16.04.5 LTS (Xenial Xerus)
  • Kernel (e.g. uname -a): 4.4.0-135-generic
  • Install tools: kubeadm

/kind bug

@k8s-ci-robot added the kind/bug and needs-sig labels on Dec 6, 2018
andrewsav-bt (Author) commented Dec 6, 2018

/sig apps @liggitt @soltysh @juanvallejo

@k8s-ci-robot added the sig/apps label and removed the needs-sig label on Dec 6, 2018

mauilion commented Dec 6, 2018

This should be called out in the 1.13.0 known issues as well.
/priority critical-urgent

@k8s-ci-robot added the priority/critical-urgent label on Dec 6, 2018
@soltysh self-assigned this on Dec 6, 2018

soltysh commented Dec 6, 2018

Lemme see if I can easily reproduce this.


liggitt commented Dec 6, 2018

/milestone v1.13


soltysh commented Dec 6, 2018

Fixes are in #71801 (master) and #71802 (1.13).
