Unwind Failed Deploys Improvements #427
Conversation
aballman commented Feb 7, 2019 (edited)
- Unwinds rollbacks with scaling operations
- Handles removing deploys/replica sets when they are a 'new' service that is part of an existing deploy
- Grabs ReplicaSet definitions before Deployment and stores in-memory
- Simplifies testing framework for Deployments test
…ent, not just a reference to them (by id)
Codecov Report
@@            Coverage Diff             @@
##           master     #427      +/-   ##
==========================================
+ Coverage   61.31%   62.05%   +0.74%
==========================================
  Files          51       51
  Lines        5260     5239      -21
==========================================
+ Hits         3225     3251      +26
+ Misses       1717     1666      -51
- Partials      318      322       +4
plugins/kubernetes/deployment.go
Outdated
    }
}
// This is used in case the above UpdateScale operation does not complete.
// In that case we'll need to delete all the pods that are in this namespace.
Out of curiosity, is deleting pods this way the same as deleting them through kubectl? If that's the case, wouldn't new pods spring up to replace the deleted pods?
I haven't looked through the source for kubectl, but in practice if you delete the replica set via the API it does NOT delete the associated pods. I tell the RS to scale down here to initiate the termination process and then delete the RS. If that all fails, then I just delete the pods.
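The unwind order described above (scale the ReplicaSet down, delete it, and only fall back to deleting pods directly if that fails) can be sketched as below. This is a minimal illustration with hypothetical stand-in types; the real code uses client-go against the Kubernetes API, not these structs.

```go
package main

import (
	"errors"
	"fmt"
)

// rsClient is a hypothetical stand-in for the Kubernetes API client,
// recording whether the pod-deletion fallback was taken.
type rsClient struct {
	scaleFails, deleteFails bool
	podsDeleted             bool
}

func (c *rsClient) ScaleReplicaSet(name string, replicas int32) error {
	if c.scaleFails {
		return errors.New("scale failed")
	}
	return nil
}

func (c *rsClient) DeleteReplicaSet(name string) error {
	if c.deleteFails {
		return errors.New("delete failed")
	}
	return nil
}

func (c *rsClient) DeletePodsInNamespace(ns string) { c.podsDeleted = true }

// unwindReplicaSet mirrors the order of operations from the thread:
// scale to 0 to start graceful termination, delete the ReplicaSet,
// and only if that fails delete the pods in the namespace directly.
func unwindReplicaSet(c *rsClient, name, ns string) {
	if err := c.ScaleReplicaSet(name, 0); err == nil {
		if err := c.DeleteReplicaSet(name); err == nil {
			return
		}
	}
	c.DeletePodsInNamespace(ns)
}

func main() {
	ok := &rsClient{}
	unwindReplicaSet(ok, "my-rs", "default")
	fmt.Println(ok.podsDeleted) // false: happy path never touches pods

	broken := &rsClient{deleteFails: true}
	unwindReplicaSet(broken, "my-rs", "default")
	fmt.Println(broken.podsDeleted) // true: fallback deletes pods directly
}
```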
I see! I did find the PropagationPolicy option in DeleteOptions that seems relevant to this problem: https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#DeleteOptions. If we specify Foreground or Background, then it will delete dependents of that particular object.
I'm not sure if we should handle dependent deletion ourselves if they already provide a way to do it. More information can be found here: https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/#additional-note-on-deployments.
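For reference, the propagation-policy mechanics look like this. The types below are inlined from k8s.io/apimachinery/pkg/apis/meta/v1 so the snippet stands alone; in real code you would import metav1 and pass the options to the client's Delete call.

```go
package main

import "fmt"

// DeletionPropagation mirrors metav1.DeletionPropagation.
type DeletionPropagation string

const (
	// Orphan: delete the owner but leave its dependents in place.
	DeletePropagationOrphan DeletionPropagation = "Orphan"
	// Background: delete the owner immediately; the garbage
	// collector removes the dependents afterwards.
	DeletePropagationBackground DeletionPropagation = "Background"
	// Foreground: the owner stays in a "deletion in progress"
	// state until its dependents are deleted first.
	DeletePropagationForeground DeletionPropagation = "Foreground"
)

// DeleteOptions mirrors the relevant field of metav1.DeleteOptions.
type DeleteOptions struct {
	PropagationPolicy *DeletionPropagation
}

func main() {
	// Ask the API server to cascade the delete to the ReplicaSet's
	// pods, rather than scaling down and deleting pods manually.
	policy := DeletePropagationForeground
	opts := DeleteOptions{PropagationPolicy: &policy}
	fmt.Println(*opts.PropagationPolicy) // Foreground
}
```

With a policy set, the API server's garbage collector handles dependent pods, which is the alternative to the manual unwind discussed above.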
…vs revision annotation
		Replicas: 0,
	},
}
if firstDeploy {
Kind of a personal preference, I suppose, but I think you can collapse this statement into the if/else on line 1399.
* waitForDeploymentSuccess now reports what failed and what succeeded
* WIP Unwind Failed Deploys
* Apply deployment annotations to deployment, not replicaset annotations to deployment
* Corrected issues with retrieving replicasets based off of generation vs revision annotation
* Added comments to deployment updates