
Drop reapers #63979

Merged
merged 3 commits into kubernetes:master from soltysh:drop_reapers on May 26, 2018

Conversation

soltysh (Contributor) commented May 17, 2018

/assign @deads2k @juanvallejo

Release note:

kubectl delete does not use reapers for removing objects anymore, but relies on server-side GC entirely
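The change the release note describes boils down to issuing a single delete request with a deletion propagation policy and letting the server-side garbage collector cascade to dependents, instead of having kubectl's reaper delete them from the client side. A minimal sketch of that pattern (not code from this PR), assuming a client-go release contemporary with it (pre-context `Delete` signature); the `deleteDeployment` helper name is made up for illustration:

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDeployment issues one delete call and lets the server-side garbage
// collector remove the Deployment's ReplicaSets and Pods; the old reaper
// removed those dependents from the client side instead.
func deleteDeployment(client kubernetes.Interface, namespace, name string) error {
	// The propagation policy here is illustrative; kubectl's actual default
	// after this PR is not spelled out in this thread.
	policy := metav1.DeletePropagationBackground
	return client.AppsV1().Deployments(namespace).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
```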
soltysh (Contributor, Author) commented May 17, 2018

Only the last commit matters, because it waits for #59851 to merge first.

Code under review:

```go
	})
	}
	}
	// TODO: fix this test
```

deads2k (Contributor) commented May 17, 2018

um... Yeah. you can't merge it like this.

soltysh (Contributor, Author) replied May 18, 2018

Duh 😝

deads2k (Contributor) commented May 17, 2018

/hold

don't want to accidentally merge with tests commented out.

timothysc removed their request for review May 17, 2018

soltysh force-pushed the soltysh:drop_reapers branch from 25c7502 to 149cf2a May 18, 2018

soltysh force-pushed the soltysh:drop_reapers branch from 149cf2a to e84fbfa May 18, 2018

nilebox referenced this pull request May 19, 2018

Merged

add kubectl wait #64034

soltysh force-pushed the soltysh:drop_reapers branch from e84fbfa to 0c305fb May 24, 2018

soltysh (Contributor, Author) commented May 24, 2018

/hold cancel
Rebased and test fixed. @deads2k @juanvallejo ptal

k8s-ci-robot removed the lgtm label May 25, 2018

soltysh added the lgtm label May 25, 2018

soltysh (Contributor, Author) commented May 25, 2018

Nit fixed, re-tagging.

fejta-bot commented May 25, 2018

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel comment for consistent failures.

soltysh added some commits May 17, 2018

Remove initializer test
Initializers are alpha, broken, and subject to removal. They don't
work well with finalizers, and the previous hack present in the deployment
and replicaset reapers was just hiding this problem.

soltysh force-pushed the soltysh:drop_reapers branch from 3888f08 to 2a76535 May 25, 2018

k8s-ci-robot removed the lgtm label May 25, 2018

deads2k (Contributor) commented May 25, 2018

Provide links to the hacks you showed me. These hacks were hiding more fundamental flaws in the implementation. They didn't work for other clients or other types. It was wrong to add the hacks.

/lgtm

k8s-ci-robot added the lgtm label May 25, 2018

k8s-ci-robot (Contributor) commented May 25, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: deads2k, soltysh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

soltysh (Contributor, Author) commented May 25, 2018

Hack in deployment reaper:

```go
// If the deployment still has pending initializers, skip the reaper's
// client-side cascade and fall back to a plain server-side delete with
// background propagation.
if deployment.Initializers != nil {
	policy := metav1.DeletePropagationBackground
	deleteOptions := &metav1.DeleteOptions{PropagationPolicy: &policy}
	return deployments.Delete(name, deleteOptions)
}
```

fejta-bot commented May 25, 2018

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel comment for consistent failures.

2 similar /retest comments from fejta-bot followed on May 26, 2018.

k8s-github-robot (Contributor) commented May 26, 2018

Automatic merge from submit-queue (batch tested with PRs 63859, 63979). If you want to cherry-pick this change to another branch, please follow the instructions here.

k8s-github-robot merged commit 5f578f3 into kubernetes:master May 26, 2018

18 checks passed:

- Submit Queue: Queued to run github e2e tests a second time.
- cla/linuxfoundation: soltysh authorized
- pull-kubernetes-bazel-build: Job succeeded.
- pull-kubernetes-bazel-test: Job succeeded.
- pull-kubernetes-cross: Skipped
- pull-kubernetes-e2e-gce: Job succeeded.
- pull-kubernetes-e2e-gce-100-performance: Job succeeded.
- pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
- pull-kubernetes-e2e-gke: Skipped
- pull-kubernetes-e2e-kops-aws: Job succeeded.
- pull-kubernetes-integration: Job succeeded.
- pull-kubernetes-kubemark-e2e-gce: Job succeeded.
- pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
- pull-kubernetes-local-e2e: Skipped
- pull-kubernetes-local-e2e-containerized: Skipped
- pull-kubernetes-node-e2e: Job succeeded.
- pull-kubernetes-typecheck: Job succeeded.
- pull-kubernetes-verify: Job succeeded.

soltysh deleted the soltysh:drop_reapers branch May 28, 2018

k8s-github-robot pushed a commit that referenced this pull request May 29, 2018

Kubernetes Submit Queue
Merge pull request #64375 from nilebox/delete-wait-cleanup
Automatic merge from submit-queue (batch tested with PRs 64300, 64375). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Declare kubectl wait flag in a way consistent with other deletion flags

**What this PR does / why we need it**:
A follow-up PR for #64034 and #63979 that makes declaring the wait flag consistent with the other flags.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #64401

**Special notes for your reviewer**:

**Release note**:

```release-note

```

k8s-github-robot pushed a commit that referenced this pull request May 30, 2018

Kubernetes Submit Queue
Merge pull request #64439 from soltysh/issue64362
Automatic merge from submit-queue.

Wait for the job to be removed

**What this PR does / why we need it**:
In master we've dropped reapers (#63979), which means the old client does not wait long enough for the resource to be gone (since it's being removed on the server along with its dependents). To fix our e2e ([failure here](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-new-master-gci-kubectl-skew/7980)) we need to backport parts of the aforementioned PR, which updates the tests to wait for the resource to be removed.
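For illustration only (this is not the backported code itself), a test-side wait for a Job to disappear can be written with the `wait.Poll` and `errors.IsNotFound` helpers from apimachinery; the `waitForJobGone` name and the 2-second poll interval are made up for this sketch:

```go
import (
	"time"

	apierrs "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForJobGone polls until the Job is no longer returned by the API server,
// i.e. until server-side garbage collection has finished removing it, so the
// test does not proceed while the object and its dependents are still going away.
func waitForJobGone(c kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.Poll(2*time.Second, timeout, func() (bool, error) {
		_, err := c.BatchV1().Jobs(namespace).Get(name, metav1.GetOptions{})
		if apierrs.IsNotFound(err) {
			return true, nil // gone: done waiting
		}
		return false, err // still present (err == nil), or surface a real error
	})
}
```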

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #64362

/assign @MaciekPytel 
/assign @deads2k 

**Release note**:
```release-note
NONE
```

k8s-github-robot pushed a commit that referenced this pull request Jun 6, 2018

Kubernetes Submit Queue
Merge pull request #64797 from janetkuo/ds-deletion
Automatic merge from submit-queue (batch tested with PRs 64749, 64797). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Handle deleted DaemonSet properly

**What this PR does / why we need it**:
After kubectl reapers are removed (#63979) and foreground deletion is used, the DaemonSet controller may race with the garbage collector when it tries to update the status of a DaemonSet that is being deleted.

Here's what happened:
1. Someone/something performs a foreground deletion on a DaemonSet
1. The DaemonSet finalizer and DeletionTimestamp are both set
1. DaemonSet history objects (ControllerRevisions) and pods are being deleted by the garbage collector; meanwhile, the DaemonSet controller tries to update the DaemonSet status.
    * Updating DaemonSet status requires constructing DaemonSet history objects, to figure out the current revision and which pods do/don't belong to it
1. When updating DaemonSet status, the DaemonSet controller tries to create a DaemonSet history object that matches the current DaemonSet spec
1. The garbage collector then tries to delete that DaemonSet history object, and the cycle repeats.

Because we can't make DaemonSet pods be deleted before DaemonSet history objects (DaemonSet history objects don't own DaemonSet pods!), we cannot reliably calculate DaemonSet status without history objects anyway. Therefore, we don't update the status of a DaemonSet that is being deleted.

Note that the reason the kubectl delete hack worked is that it forced DaemonSet pods to be removed before the history objects.
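A minimal sketch of the resulting guard, assuming the apps/v1 types (the helper name is hypothetical, not the exact code in #64797): once a DaemonSet carries a DeletionTimestamp, the controller skips the status update that would otherwise recreate history objects.

```go
import (
	apps "k8s.io/api/apps/v1"
)

// shouldSkipStatusUpdate reports whether the DaemonSet controller should skip
// updating status: a non-nil DeletionTimestamp means the DaemonSet is being
// deleted, and recreating ControllerRevision history for it would only keep
// racing with the garbage collector.
func shouldSkipStatusUpdate(ds *apps.DaemonSet) bool {
	return ds.DeletionTimestamp != nil
}
```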

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #64313

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```