Use apps/v1 Deployment/ReplicaSet in controller and kubectl #61419
Conversation
Automatic merge from submit-queue (batch tested with PRs 60980, 61273, 60811, 61021, 61367). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Use apps/v1 ReplicaSet in controller and tests. This updates the RS/RC controller and the RS integration/e2e tests to use apps/v1 ReplicaSet, as part of #55714. It does *not* update the Deployment controller, nor its integration/e2e tests, to use apps/v1 ReplicaSet. That will be done in a separate PR (#61419) because Deployment has many more tendrils embedded throughout the system.

```release-note
Conformance: ReplicaSet must be supported in the `apps/v1` version.
```

/assign @janetkuo
@enisoc is there any impact when customers upgrade from extensions/v1beta1 Deployments to apps/v1? More specifically, we have a controller that continuously applies extensions/v1beta1 Deployments. When I upgrade to a k8s version that supports apps/v1, can I switch my controller to start applying apps/v1 Deployments for the same name with no impact?
All of the deployments currently persisted can be fetched/updated as either extensions/v1beta1 or as apps/v1.
If you mean once the objects are persisted as API objects: there is no impact regardless of which API version the controller uses to interact with them.
@liggitt our controller uses client-go. We have a controller that takes YAML manifests from a git repo and keeps applying them to k8s using client-go (Update calls on extensions/v1beta1). Once we move to newer k8s versions, I would like to switch our client-go code to start applying apps/v1 Deployments for those existing Deployments. I am guessing existing Deployments are persisted at the latest supported version, so once I switch, nothing should change and no restarts of existing pods should happen. It would be good to document this somewhere.
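To illustrate the migration being discussed: for an already-persisted Deployment, the only change a controller applying manifests should need is the `apiVersion` field. A minimal sketch (the name, labels, and image below are hypothetical, not from this PR):

```yaml
# Previously applied as:
#   apiVersion: extensions/v1beta1
# The same object can now be applied as:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment     # hypothetical name
spec:
  replicas: 3
  selector:                    # required in apps/v1; defaulted in extensions/v1beta1
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: nginx:1.14      # hypothetical image
```

One caveat worth noting when switching versions: `spec.selector` is required (and immutable) in `apps/v1`, whereas `extensions/v1beta1` defaulted it from the pod template labels, so manifests that omitted it will need it added.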
We don't have docs for apps/v1, just the docs comparing apps/v1beta2 and extensions/v1beta1: https://kubernetes.io/docs/reference/workloads-18-19/
LGTM
/lgtm |
/lgtm
```diff
@@ -25,8 +25,8 @@ import (
 	"sync/atomic"
 	"time"
 
+	apps "k8s.io/api/apps/v1"
```
Use `appsv1` to clearly show we're working with the v1 version. It's a pattern used throughout the entire code base.
In this PR, I only wanted to do mechanical translations from `extensions` to `apps`. That makes it easy to see that the transform is the one we intended. If we want to fix the style, I think it should be a separate PR.
```diff
@@ -102,12 +102,12 @@ type DeploymentHistoryViewer struct {
 // ViewHistory returns a revision-to-replicaset map as the revision history of a deployment
 // TODO: this should be a describer
 func (h *DeploymentHistoryViewer) ViewHistory(namespace, name string, revision int64) (string, error) {
-	versionedExtensionsClient := h.c.ExtensionsV1beta1()
-	deployment, err := versionedExtensionsClient.Deployments(namespace).Get(name, metav1.GetOptions{})
+	versionedAppsClient := h.c.AppsV1()
```
Have you checked whether these commands work fine with a previous version of the server? They should, but I'm asking to have that double-checked.
I checked a build of kubectl from this PR against a v1.9.3 cluster, and the following commands worked:
- `kubectl edit deployment <x>` (to create history)
- `kubectl rollout history deployment/<x>`
- `kubectl rollout undo deployment/<x>`
- `kubectl rollout status deployment/<x>`
We also have automated kubectl skew tests that should run post-submit.
As expected, I observed that the rollout commands do not work against a v1.8.9 cluster, because apps/v1 did not exist until v1.9.0. This is fine because this change will roll out with kubectl v1.11 at the earliest, and v1.8 clusters are outside the client/server compatibility window for that release.
/assign @lavalamp Can you approve for
This is necessary since kubectl shares code with the controllers, and the controllers have been updated to use apps/v1.
This must be done at the same time as the controller update, since they share code.
/approve for the kube-controller-manager change.
/retest
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: enisoc, janetkuo, lavalamp, soltysh The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Automatic merge from submit-queue (batch tested with PRs 62756, 63862, 61419, 64015, 64063). If you want to cherry-pick this change to another branch, please follow the instructions here.
🎉 extensions die 🎉
This updates the Deployment controller and integration/e2e tests to use apps/v1, as part of #55714.

This also requires updating any other components that use the `deployment/util` package, most notably `kubectl`. That means client versions 1.11 and above will only work with server versions 1.9 and above. This is well within our client-server version skew policy of +/-1 minor version.

However, this PR only updates the parts of `kubectl` that used `deployment/util`. So although kubectl now requires apps/v1, it still also depends on extensions/v1beta1. Migrating other parts of kubectl to apps/v1 is beyond the scope of this PR, which was just to change the Deployment controller and fix all the fallout.