
Refine the Deployment proposal and switch hashing algorithm #477

Merged: 3 commits into kubernetes:master from alternative-deployment-proposal, Apr 27, 2017
Conversation

@0xmichalis (Contributor) commented Mar 23, 2017

Alternative proposal to #384

@kubernetes/sig-apps-api-reviews @kubernetes/api-reviewers

@janetkuo (Member)

@Kargakis we'd need to support migration if switching hashing algorithm, correct?

@0xmichalis (Contributor, Author)

@janetkuo Correct; I linked the implementation in the comment above yours.

@0xmichalis (Contributor, Author)

@janetkuo @kow3ns @bgrant0607 @smarterclayton @erictune updated as per our discussion yesterday

cc: @kubernetes/sig-apps-api-reviews

@janetkuo (Member) left a comment

Thanks, LGTM

```diff
@@ -65,7 +67,7 @@ const (
 // Kill all existing pods before creating new ones.
 RecreateDeploymentStrategyType DeploymentStrategyType = "Recreate"

-// Replace the old RCs by new one using rolling update i.e gradually scale down the old RCs and scale up the new one.
+// Replace the old RSs by new one using rolling update i.e gradually scale down the old RSs and scale up the new one.
```
@janetkuo (Member):
RSs --> ReplicaSets

@0xmichalis (Contributor, Author):
fixed

> notice that the second RC exists already which it can ramp up while ramping down
> Users can cancel a rollout by doing a non-cascading deletion of the Deployment
> before it is complete. Recreating the same Deployment will resume it.
> For example, consider the following case:
@janetkuo (Member):
This example shows how to pause and resume a rollout using non-cascading deletion, not how to cancel one. However, the first sentence says "users can cancel a rollout..."

@0xmichalis (Contributor, Author):
fixed
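
As background for the excerpt discussed above: a non-cascading deletion removes the Deployment object but orphans its ReplicaSets, so recreating the Deployment lets the controller adopt them and resume. A minimal client-go sketch of such a deletion; the namespace, Deployment name, and kubeconfig path are illustrative assumptions:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an illustrative assumption.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Orphan the dependents: the Deployment object goes away but its
	// ReplicaSets keep running, so recreating the same Deployment lets the
	// controller adopt them and resume the rollout where it left off.
	orphan := metav1.DeletePropagationOrphan
	err = clientset.AppsV1().Deployments("default").Delete(
		context.TODO(), "nginx", metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		log.Fatal(err)
	}
}
```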

@janetkuo (Member) commented Apr 24, 2017

As a side note: this proposal = fnv + my comment in #384 (comment)

@erictune (Member)

I know that more than one person contributed significant time and ideas to this proposal. I think it would be a good idea to list each of them as authors at the top of the proposal.

@0xmichalis (Contributor, Author)

> I know that more than one person contributed significant time and ideas to this proposal. I think it would be a good idea to list each of them as authors at the top of the proposal.

Agreed. Added commenters from the initial proposal plus anybody else who participated in the recent discussions around hashing vs templateGeneration. Let me know if I am missing somebody.

@erictune (Member) commented Apr 27, 2017 via email

> Includes commenters from the initial proposal plus anybody else that
> participated in the recent discussions around hashing vs
> templateGeneration.
>
> Signed-off-by: Michail Kargakis <mkargaki@redhat.com>
@0xmichalis (Contributor, Author)

All comments addressed. I am going to merge this one and follow up with the implementation, which is in flight in kubernetes/kubernetes#44774.

@0xmichalis 0xmichalis merged commit 7bcff32 into kubernetes:master Apr 27, 2017
@0xmichalis 0xmichalis deleted the alternative-deployment-proposal branch April 27, 2017 16:45
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this pull request May 25, 2017
Automatic merge from submit-queue

Switch Deployments to new hashing algo w/ collision avoidance mechanism

Implements kubernetes/community#477

@kubernetes/sig-apps-api-reviews @kubernetes/sig-apps-pr-reviews 

Fixes #29735
Fixes #43948

```release-note
Deployments are updated to use (1) a more stable hashing algorithm (fnv) than the previous one (adler) and (2) a hashing collision avoidance mechanism that will ensure new rollouts will not block on hashing collisions anymore.
```
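
To make the release note concrete, here is a hedged sketch of the two pieces it names: fnv hashing of the serialized pod template, with an optional collision counter folded into the hash input so that a Deployment hitting a collision can bump the counter and get a fresh hash. The JSON serialization and function names are illustrative assumptions, not the controller's exact code:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// computeHash folds an optional collision counter into the FNV-1a hash of
// the template bytes. If a new rollout collides with an existing ReplicaSet
// whose template differs, the controller can increment the counter and
// retry, which yields a different hash. A sketch of the mechanism only.
func computeHash(templateBytes []byte, collisionCount *uint32) uint32 {
	h := fnv.New32a()
	h.Write(templateBytes)
	if collisionCount != nil {
		buf := make([]byte, 4)
		binary.LittleEndian.PutUint32(buf, *collisionCount)
		h.Write(buf)
	}
	return h.Sum32()
}

func main() {
	template := []byte(`{"containers":[{"name":"nginx","image":"nginx:1.11"}]}`)
	fmt.Println("no counter:", computeHash(template, nil))
	count := uint32(1)
	fmt.Println("count=1:   ", computeHash(template, &count)) // different hash
}
```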
@smarterclayton (Contributor)

So I don't see this in the doc, but I think a rather significant use case for collisions needs to be clarified: initializers on replica sets. Effectively anything that mutates or defaults the replica set (so that those defaults are applied exactly once, and each version is consistent). We hit this on a prototype initializer that resolves image tags to fully qualified hashes, which causes the deployment controller to not recognize the template as its own. I would not want collision avoidance to apply in this scenario...

@bgrant0607 (Member)

@smarterclayton I commented on this scenario here: kubernetes/kubernetes#1697 (comment)

And in the PR that (erroneously) changed image from optional to required.

I'd expect the Deployment's pod template to be updated, in which case there would be no collision.

The user needs to specify the unresolved image tag in a different field.

For instance, we could add an imageStream field (or struct) that specifies the repo+image+tag. That would be automatically translated to digest form and written into the existing image field in the DeploymentSpec, which would trigger the creation of a new ReplicaSet whose hash wouldn't collide with others'.
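
Purely to illustrate the shape of that suggestion, a sketch in Go; the imageStream field is hypothetical and none of these names exist in the real API:

```go
package main

import "fmt"

// ImageStreamRef is a hypothetical struct for the imageStream idea above;
// these fields do not exist in the real DeploymentSpec.
type ImageStreamRef struct {
	Repo  string // e.g. "docker.io/library"
	Image string // e.g. "nginx"
	Tag   string // the user-facing, unresolved tag, e.g. "1.11"
}

// resolveToDigest stands in for the automatic translation step: the tag is
// resolved to a digest and the result written into the existing image field,
// changing the pod template and thus producing a new, non-colliding hash.
func resolveToDigest(ref ImageStreamRef, digest string) string {
	return fmt.Sprintf("%s/%s@%s", ref.Repo, ref.Image, digest)
}

func main() {
	ref := ImageStreamRef{Repo: "docker.io/library", Image: "nginx", Tag: "1.11"}
	// The digest value is made up for illustration.
	image := resolveToDigest(ref, "sha256:0123abcd")
	fmt.Println("resolved image field:", image)
}
```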

@smarterclayton (Contributor)

I was thinking of other examples as well: autolabellers or auto-annotators, snapshotting config maps (at deployment time, automatically copy config maps and set owner refs on them to a particular replica set), possibly sidecar injectors (I think injecting some sidecars at pod scope is wrong). Also node selectors and tolerations, conceivably.

Are we ready to eliminate the possibility that replica set templates can be filled in (and thus preserve an atomic snapshot of rollout that can be reverted to under pressure)? I can think of lots of "on demand" policies that we can benefit from doing at the replica set level, where the natural action of the deployment (stamp a new desired RS out) can capture that point in time.

@bgrant0607 (Member)

The hash shouldn't be used for comparison. It should just be a uniquifier. Strategic merge diff should be used to determine whether any of the ReplicaSets matches the current Deployment template.

The model I had in mind was that any automatic injection that we wanted to trigger a rollout would also modify DeploymentSpec fields that were not specified by the user.
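
A hedged sketch of that model, with simplified stand-in types and reflect.DeepEqual standing in for the strategic merge diff mentioned above: the hash in the ReplicaSet name is never compared, only the templates are.

```go
package main

import (
	"fmt"
	"reflect"
)

// Simplified stand-ins for the real API types, for illustration only.
type PodTemplate struct {
	Image string
}

type ReplicaSet struct {
	Name     string // contains the hash purely as a uniquifier
	Template PodTemplate
}

// findMatchingReplicaSet compares templates directly; reflect.DeepEqual
// stands in here for the strategic merge diff the comment above describes.
func findMatchingReplicaSet(desired PodTemplate, sets []ReplicaSet) *ReplicaSet {
	for i := range sets {
		if reflect.DeepEqual(sets[i].Template, desired) {
			return &sets[i]
		}
	}
	return nil // no match: the controller would create a new ReplicaSet
}

func main() {
	sets := []ReplicaSet{
		{Name: "web-3894871", Template: PodTemplate{Image: "nginx:1.10"}},
		{Name: "web-4217358", Template: PodTemplate{Image: "nginx:1.11"}},
	}
	if rs := findMatchingReplicaSet(PodTemplate{Image: "nginx:1.11"}, sets); rs != nil {
		fmt.Println("current ReplicaSet:", rs.Name)
	}
}
```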

perotinus pushed a commit to kubernetes-retired/cluster-registry that referenced this pull request Sep 2, 2017
MadhavJivrajani pushed a commit to MadhavJivrajani/community that referenced this pull request Nov 30, 2021
…t-proposal

Refine the Deployment proposal and switch hashing algorithm
danehans pushed a commit to danehans/community that referenced this pull request Jul 18, 2023