
Cronjob leaves behind pods after jobs are deleted #71772

Closed
andrewsav-datacom opened this Issue Dec 6, 2018 · 5 comments


andrewsav-datacom commented Dec 6, 2018

What happened:

There seems to be a regression after the 1.13 release. A CronJob with default settings does not clean up completed pods. The jobs themselves are deleted as expected, but their pods are left behind. This seems to be related to #70872 @juanvallejo

What you expected to happen:

Expected the pods of a CronJob's now non-existent child jobs to be cleaned up. This is the way it worked prior to 1.13.

How to reproduce it (as minimally and precisely as possible):

Run kubectl apply on this manifest:

kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: cjtest
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          name: cjtest
          labels:
            app: cjtest
        spec:
          restartPolicy: "Never"
          containers:
          - name: cjtest
            image: hello-world

Wait more than 3 minutes and observe that, while only 3 successful jobs are present, the pods from the now-deleted jobs are left behind and never get deleted.
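The leftover pods can be observed with kubectl; a sketch of the steps, assuming the manifest above is saved as cjtest.yaml (the app=cjtest label comes from the manifest):

```shell
# Apply the CronJob and let a few runs complete.
kubectl apply -f cjtest.yaml

# Only the 3 most recent successful jobs are kept
# (successfulJobsHistoryLimit defaults to 3)...
kubectl get jobs

# ...but on 1.13.0 the pods from the already-deleted jobs
# remain in Completed state instead of being garbage collected.
kubectl get pods -l app=cjtest
```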

Anything else we need to know?:

@liggitt pointed out the following relevant places in the code base:

https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/batch/job/strategy.go#L53
https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/cronjob/injection.go#L121

deleting a job orphans its pods by default, and the cronjob controller doesn't do anything different from the default when deleting
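The behavior described above can be sketched in Go. This is a hypothetical illustration, not the actual patch, using the 1.13-era client-go Delete signature: requesting background propagation explicitly makes the garbage collector delete the job's pods, whereas leaving DeleteOptions empty falls back to the Job strategy's default, which orphans them.

```go
package cronjobsketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteJobCascading deletes a Job and asks the garbage collector to
// remove its dependent pods in the background. Without an explicit
// PropagationPolicy, the pods would be orphaned and left behind.
func deleteJobCascading(client kubernetes.Interface, namespace, name string) error {
	propagation := metav1.DeletePropagationBackground
	return client.BatchV1().Jobs(namespace).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &propagation,
	})
}
```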

Environment:

  • Kubernetes version (use kubectl version): 1.13.0
  • Cloud provider or hardware configuration: ubuntu VM in vSphere
  • OS (e.g. from /etc/os-release): 16.04.5 LTS (Xenial Xerus)
  • Kernel (e.g. uname -a): 4.4.0-135-generic
  • Install tools: kubeadm

/kind bug


@k8s-ci-robot k8s-ci-robot added sig/apps and removed needs-sig labels Dec 6, 2018


Contributor

mauilion commented Dec 6, 2018

This should be included in the known issues for 1.13.0 as well.
/priority critical-urgent


Contributor

soltysh commented Dec 6, 2018

Lemme see if I can easily reproduce this.


Member

liggitt commented Dec 6, 2018

/milestone v1.13


Contributor

soltysh commented Dec 6, 2018

Fixes are in #71801 (master) and #71802 (1.13).
