
Use apps/v1 Deployment/ReplicaSet in controller and kubectl #61419

Merged
merged 6 commits into kubernetes:master from enisoc:apps-v1-deploy on May 24, 2018

Conversation

@enisoc (Member) commented Mar 20, 2018

This updates the Deployment controller and integration/e2e tests to use apps/v1, as part of #55714.

This also requires updating any other components that use the deployment/util package, most notably kubectl. That means client versions 1.11 and above will only work with server versions 1.9 and above. This is well within our client-server version skew policy of +/-1 minor version.

However, this PR only updates the parts of kubectl that used deployment/util. So although kubectl now requires apps/v1, it still also depends on extensions/v1beta1. Migrating other parts of kubectl to apps/v1 is beyond the scope of this PR, which was just to change the Deployment controller and fix all the fallout.

```release-note
kubectl: This client version requires the `apps/v1` APIs, so it will not work against a cluster version older than v1.9.0. Note that kubectl only guarantees compatibility with clusters that are +/-1 minor version away.
```

@enisoc enisoc force-pushed the enisoc:apps-v1-deploy branch from 1734225 to bb9abd5 Mar 20, 2018

k8s-github-robot pushed a commit that referenced this pull request Mar 22, 2018

Kubernetes Submit Queue
Merge pull request #61367 from enisoc/apps-v1-rs
Automatic merge from submit-queue (batch tested with PRs 60980, 61273, 60811, 61021, 61367). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Use apps/v1 ReplicaSet in controller and tests.

This updates the RS/RC controller and RS integration/e2e tests to use apps/v1 ReplicaSet, as part of #55714.

It does *not* update the Deployment controller, nor its integration/e2e tests, to use apps/v1 ReplicaSet. That will be done in a separate PR (#61419) because Deployment has many more tendrils embedded throughout the system.

```release-note
Conformance: ReplicaSet must be supported in the `apps/v1` version.
```

/assign @janetkuo

@kow3ns kow3ns added this to In Progress in Workloads Mar 29, 2018

@krmayankk (Contributor) commented Apr 10, 2018

@enisoc is there any impact when customers upgrade from extensions/v1beta1 Deployments to apps/v1? More specifically, we have a controller that continuously applies extensions/v1beta1 Deployments. When I upgrade to a k8s version that supports apps/v1, can I switch my controller to start applying apps/v1 Deployments with the same names, with no impact?

@liggitt (Member) commented Apr 10, 2018

> is there any impact when customers upgrade from extensions/v1beta1 Deployments to apps/v1?

All of the deployments currently persisted can be fetched/updated as either extensions/v1beta1 or as apps/v1.

> More specifically, we have a controller that continuously applies extensions/v1beta1 Deployments. When I upgrade to a k8s version that supports apps/v1, can I switch my controller to start applying apps/v1 Deployments with the same names, with no impact?

If you mean kubectl apply, it has always had issues with cross-group or cross-version changes - #16543 (comment)

If you mean once the objects are persisted as API objects, there is no impact regardless of which API version the controller uses to interact with them.
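To illustrate the dual-version access described above, kubectl can request a fully-qualified resource name (`resource.version.group`) to pick which group/version the same persisted object is served as. A sketch only; it assumes a v1.9+ cluster (where apps/v1 exists), and the deployment name `nginx` is a placeholder:

```
# One persisted Deployment object, readable under either group/version.
kubectl get deployments.v1.apps nginx -o yaml               # served as apps/v1
kubectl get deployments.v1beta1.extensions nginx -o yaml    # served as extensions/v1beta1
```

Both commands return the same underlying object; only the `apiVersion` on the wire (and any version-specific defaulting) differs.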

@krmayankk (Contributor) commented Apr 10, 2018

@liggitt our controller uses client-go. We have a controller that takes git repo YAMLs and keeps applying them to k8s using client-go (Update calls on extensions/v1beta1). Once we move to newer k8s versions, I would like to switch our client-go code to start applying apps/v1 Deployments for existing Deployments. I am guessing existing Deployments are persisted as the latest version supported, so once I switch, nothing should change and no restarts of existing pods should happen. It would be good to document this somewhere.
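A minimal sketch (not from this PR) of what that client-go switch might look like, assuming a clientset generated against Kubernetes 1.9+ so `AppsV1()` exists, and using the pre-context `Update` signature of that era; the names `clientset`, `ns`, and `deploy` are hypothetical:

```go
// Before: the controller built *extensionsv1beta1.Deployment objects
// and applied them via the deprecated group/version.
//   _, err := clientset.ExtensionsV1beta1().Deployments(ns).Update(deploy)

// After: same object name/namespace, with deploy now typed as
// *appsv1.Deployment (appsv1 "k8s.io/api/apps/v1"). The persisted object
// is the same; only the wire version changes, so the switch itself does
// not restart pods -- but see the apps/v1 schema differences noted below.
_, err := clientset.AppsV1().Deployments(ns).Update(deploy)
```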

@janetkuo (Member) commented Apr 11, 2018

> once we move to newer k8s versions, I would like to switch our client-go code to start applying apps/v1 Deployments for existing Deployments

apps/v1 APIs are different, and you need to be aware of those differences when switching. For example, some default values are different, selectors become immutable, and some fields are deprecated.

We don't have a doc for apps/v1 yet, only the one comparing apps/v1beta2 and extensions/v1beta1: https://kubernetes.io/docs/reference/workloads-18-19/

cc @kow3ns we need to update this doc (kubernetes/website#8049)
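As one concrete (hypothetical) example of the differences mentioned above: the most visible manifest change is that apps/v1 requires an explicit `spec.selector`, which extensions/v1beta1 would default from the pod template labels, and that selector is immutable after creation. The `nginx` names below are illustrative only:

```yaml
# Sketch of the main manifest change when moving a Deployment to apps/v1.
apiVersion: apps/v1        # was: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:                # required in apps/v1, immutable after creation;
    matchLabels:           # must match the pod template labels below
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14
```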

@janetkuo (Member) commented May 15, 2018

/lgtm

@soltysh (Contributor) left a comment

/lgtm

@@ -25,8 +25,8 @@ import (
"sync/atomic"
"time"

apps "k8s.io/api/apps/v1"

@soltysh (Contributor) commented May 22, 2018

Use appsv1 to clearly show we're working with the v1 version. It's a pattern used throughout the entire code base.

@enisoc (Author, Member) commented May 22, 2018

In this PR, I only wanted to do mechanical translations from extensions to apps. That makes it easy to see that the transform is the one we intended. If we want to fix the style, I think it should be a separate PR.

@@ -102,12 +102,12 @@ type DeploymentHistoryViewer struct {
// ViewHistory returns a revision-to-replicaset map as the revision history of a deployment
// TODO: this should be a describer
func (h *DeploymentHistoryViewer) ViewHistory(namespace, name string, revision int64) (string, error) {
-	versionedExtensionsClient := h.c.ExtensionsV1beta1()
-	deployment, err := versionedExtensionsClient.Deployments(namespace).Get(name, metav1.GetOptions{})
+	versionedAppsClient := h.c.AppsV1()

@soltysh (Contributor) commented May 22, 2018

Have you checked whether these commands work fine with a previous version of the server? They should, but I'm asking to have that double-checked.

@enisoc (Author, Member) commented May 22, 2018

I checked a build of kubectl from this PR against a v1.9.3 cluster, and the following commands worked:

```
kubectl edit deployment <x>    # (to create history)
kubectl rollout history deployment/<x>
kubectl rollout undo deployment/<x>
kubectl rollout status deployment/<x>
```

We also have automated kubectl skew tests that should run post-submit.

As expected, I observed that the rollout commands do not work against a v1.8.9 cluster, because apps/v1 did not exist until v1.9.0. This is fine because this change will roll out with kubectl v1.11 at the earliest, and v1.8 clusters are outside the client/server compatibility window for that release.

@enisoc (Member, Author) commented May 22, 2018

/assign @lavalamp

Can you approve for cmd/kube-controller-manager?

enisoc added some commits Mar 19, 2018:

- kubectl: Use apps/v1 Deployment/ReplicaSet.
  This is necessary since kubectl shares code with the controllers, and the controllers have been updated to use apps/v1.
- test/e2e: Use apps/v1 Deployment/ReplicaSet.
  This must be done at the same time as the controller update, since they share code.
- test/integration: Use apps/v1 Deployment/ReplicaSet.
  This must be done at the same time as the controller update, since they share code.

@enisoc enisoc force-pushed the enisoc:apps-v1-deploy branch from 9687e1c to 046ae81 May 22, 2018

@k8s-ci-robot k8s-ci-robot removed the lgtm label May 22, 2018

@lavalamp (Member) commented May 23, 2018

/approve

for the kube-controller-manager change.

@enisoc (Member, Author) commented May 23, 2018

/retest

@janetkuo (Member) commented May 23, 2018

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm label May 23, 2018

@k8s-ci-robot (Contributor) commented May 23, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: enisoc, janetkuo, lavalamp, soltysh

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-github-robot (Contributor) commented May 24, 2018

Automatic merge from submit-queue (batch tested with PRs 62756, 63862, 61419, 64015, 64063). If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-github-robot k8s-github-robot merged commit 5fe35cd into kubernetes:master May 24, 2018

18 checks passed

- Submit Queue: Queued to run github e2e tests a second time.
- cla/linuxfoundation: enisoc authorized
- pull-kubernetes-bazel-build: Job succeeded.
- pull-kubernetes-bazel-test: Job succeeded.
- pull-kubernetes-cross: Skipped
- pull-kubernetes-e2e-gce: Job succeeded.
- pull-kubernetes-e2e-gce-100-performance: Job succeeded.
- pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
- pull-kubernetes-e2e-gke: Skipped
- pull-kubernetes-e2e-kops-aws: Job succeeded.
- pull-kubernetes-integration: Job succeeded.
- pull-kubernetes-kubemark-e2e-gce: Job succeeded.
- pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
- pull-kubernetes-local-e2e: Skipped
- pull-kubernetes-local-e2e-containerized: Skipped
- pull-kubernetes-node-e2e: Job succeeded.
- pull-kubernetes-typecheck: Job succeeded.
- pull-kubernetes-verify: Job succeeded.

Workloads automation moved this from In Progress to Done May 24, 2018

@soltysh (Contributor) commented May 24, 2018

🎉 extensions die 🎉
