
Update manifest to deploy on Kubernetes 1.16. #376

Merged
merged 8 commits into from
Sep 24, 2019
Conversation

jbrette
Contributor

@jbrette jbrette commented Sep 22, 2019

Which issue is resolved by this Pull Request:
Resolves #375: Numerous kubeflow manifests can't deploy on Kubernetes 1.16

Description of your changes:

  • Check the apiVersion where needed.
  • Update the kustomization variable definitions where needed.
  • Add a selector to Deployments where needed.
  • Reran make generate and make test, but deliberately DID NOT check in the .go files yet, to avoid cluttering the PR.
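
The Deployment change in particular is driven by the apps/v1 API, where spec.selector is required and must match the pod template labels. A minimal sketch of the kind of manifest change involved (names and image are illustrative, not taken from this PR):

```yaml
# Hypothetical example: apps/v1 requires spec.selector, and its
# matchLabels must match spec.template.metadata.labels.
apiVersion: apps/v1          # was extensions/v1beta1, removed in 1.16
kind: Deployment
metadata:
  name: example-deployment   # illustrative name
spec:
  replicas: 1
  selector:                  # required in apps/v1
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: example:latest
```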

Checklist:

  • Unit tests have been rebuilt:
    1. cd manifests/tests
    2. make generate
    3. make test


@k8s-ci-robot
Contributor

Hi @jbrette. Thanks for your PR.

I'm waiting for a kubeflow member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jlewi
Contributor

jlewi commented Sep 22, 2019

/ok-to-test
Thanks @jbrette Are the manifests still compatible with earlier versions of K8s?
The presubmits should be deploying Kubeflow on 1.12, so if the presubmits pass, that should confirm it still works on 1.12. But it would be good to know if we foresee any backwards compatibility issues.

If you send a PR to join the github org in kubeflow/internal-acls then you will be able to trigger the tests yourself.

- Update Deployment apiVersion to apps/v1
- Update Ingress apiVersion to networking.k8s.io
@jbrette
Contributor Author

jbrette commented Sep 23, 2019

@jlewi @kkasravi Seems to be working ok now.

  • I had to comment out the "apiVersion: networking.k8s.io/v1beta1" in 4 or 5 places and revert it back to extensions/v1beta1, because we are still testing with 1.12.7; that API group was added in Kubernetes 1.14. Still, the Roles are ready (they refer to both "extensions" and "networking.k8s.io"), so the amount of work next time will be much smaller.
  • It was kind of difficult to figure out which exact version is being used: kubeflow/kubeflow has tons of swagger.json files, and kubeflow/manifests contains Kubernetes 1.17. We should probably open a bug so that somebody cleans this up.
  • I was surprised that we are still using kubectl 1.10 (in testing/images/DockerFile) and not something consistent such as 1.12.7.
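
To illustrate the point about the Roles already being ready: an RBAC rule can list both API groups at once, so the same Role keeps working before and after the Ingress migration. A hedged sketch (the resource name is illustrative, not from this PR):

```yaml
# Hypothetical ClusterRole fragment: granting access to ingresses
# under both the old and the new API group, so the same Role works
# on 1.12 (extensions/v1beta1) and on 1.14+ (networking.k8s.io/v1beta1).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-ingress-reader   # illustrative name
rules:
- apiGroups: ["extensions", "networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
```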

@jbrette
Contributor Author

jbrette commented Sep 23, 2019

/assign jlewi

@jlewi
Contributor

jlewi commented Sep 23, 2019

> I was surprised that we are still using kubectl 1.10 (in testing/images/DockerFile) and not something consistent such as 1.12.7

Yeah, lots of room for cleanup and improvement in our eng infrastructure.

> I had to comment out the "apiVersion: networking.k8s.io/v1beta1" in 4 or 5 places and revert it back to extensions/v1beta1, because we are still testing with 1.12.7; that API group was added in Kubernetes 1.14. Still, the Roles are ready (they refer to both "extensions" and "networking.k8s.io"), so the amount of work next time will be much smaller.

Could we potentially use an overlay to allow multi-version support? So we'd have an overlay that changes the apiVersion to networking.k8s.io, and folks deploying on 1.16 could include that overlay in their list of applied overlays?

@@ -1,4 +1,4 @@
-apiVersion: extensions/v1beta1
+apiVersion: extensions/v1beta1 # networking.k8s.io/v1beta1
Contributor


Do you want to add a TODO() about updating this?

Contributor Author


@jlewi I will add an issue so that we remember to come back and fix these once we switch to Kubernetes 1.14+. I'll add additional instructions on what to pay attention to in the Role and ClusterRole definitions.

selector:
matchLabels:
chart: gateways
heritage: Tiller
Contributor


nit: can we get rid of the heritage label in the selector?

Contributor Author


@jlewi I did not want to break anything. The labels above and below the selectors have to match, and those currently contain Tiller everywhere. You will have to find somebody to run a script and do "grep -v Tiller" on the entire directory structure. We should do that in another PR.
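
For context on why the labels have to match: in apps/v1, every key in the selector must also appear with the same value in the pod template labels (and the selector is immutable after creation), so dropping heritage: Tiller from the selector alone would produce an invalid Deployment. A sketch of the constraint (values mirror the snippet above):

```yaml
# Hypothetical fragment: every key in selector.matchLabels must also
# appear, with the same value, in template.metadata.labels; otherwise
# the apiserver rejects the apps/v1 Deployment.
spec:
  selector:
    matchLabels:
      chart: gateways
      heritage: Tiller   # can only go once the template label goes too
  template:
    metadata:
      labels:
        chart: gateways
        heritage: Tiller
```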

Contributor Author


@jlewi Removal of Tiller has been done incrementally in this PR #378

selector:
matchLabels:
chart: gateways
heritage: Tiller
Contributor


nit: can we get rid of the heritage label in the selector?

Contributor Author


not in this PR

Contributor Author


@jlewi Removal of Tiller has been done incrementally in this PR #378

matchLabels:
app: jaeger
chart: tracing
heritage: Tiller
Contributor


nit: Here and everywhere else get rid of heritage label in selector

Contributor Author


@jlewi Removal of Tiller has been done incrementally in this PR #378

Contributor Author

@jbrette jbrette left a comment


> > I was surprised that we are still using kubectl 1.10 (in testing/images/DockerFile) and not something consistent such as 1.12.7
>
> Yeah, lots of room for cleanup and improvement in our eng infrastructure.
>
> > I had to comment out the "apiVersion: networking.k8s.io/v1beta1" in 4 or 5 places and revert it back to extensions/v1beta1, because we are still testing with 1.12.7; that API group was added in Kubernetes 1.14. Still, the Roles are ready (they refer to both "extensions" and "networking.k8s.io"), so the amount of work next time will be much smaller.
>
> Could we potentially use an overlay to allow multi-version support? So we'd have an overlay that changes the apiVersion to networking.k8s.io, and folks deploying on 1.16 could include that overlay in their list of applied overlays?

We cannot:

  • Kustomize uses the apiVersion field and the kind field to match resources between the base and the overlay, so an overlay that changes apiVersion no longer matches the base resource.
  • It could be a real problem in some of the kustomize params.xxx files. But we are lucky: most of those configuration files match on Ingress alone, not (Ingress + apiVersion).
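
To make the second point concrete: kustomize's transformer configurations (the params/varReference-style files) target fields by kind, and optionally pin a group and version as well. A hedged sketch of the two styles (the field path is illustrative, not from this repo):

```yaml
# Hypothetical varReference-style entries. The first matches any
# Ingress regardless of apiVersion; the second is pinned to one
# group/version and would need updating when the apiVersion changes.
varReference:
- kind: Ingress            # version-agnostic: survives the migration
  path: spec/rules/host
- kind: Ingress            # version-pinned: breaks when apiVersion moves
  group: extensions
  version: v1beta1
  path: spec/rules/host
```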

@jlewi
Contributor

jlewi commented Sep 23, 2019

Thanks
/lgtm
/approve

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jlewi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot merged commit 0842eb2 into kubeflow:master Sep 24, 2019
@jbrette jbrette deleted the kube16 branch September 24, 2019 00:56
yanniszark added a commit to arrikto/kubeflow-manifests that referenced this pull request Sep 24, 2019
…eway

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>
@yanniszark yanniszark mentioned this pull request Sep 24, 2019
k8s-ci-robot pushed a commit that referenced this pull request Sep 24, 2019
Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>
jbrette added a commit to prorates/manifests that referenced this pull request Oct 7, 2019
jbrette added a commit to prorates/manifests that referenced this pull request Oct 11, 2019