
Admission controller not supporting --server-dry-run #2850

Closed
trevex opened this issue May 27, 2019 · 11 comments · Fixed by #2963
Labels
priority/P1 Planned for Release

Comments


trevex commented May 27, 2019

Feature Request

What problem are you trying to solve?

Kubernetes >=1.12 added the new --server-dry-run flag (alpha in 1.12, beta in 1.13), which allows you to test manifests against the apiserver and all of its admission controllers. This is a great way to fully test the integrity of a deployment as part of CI and to avoid pitfalls caused by the interplay of multiple admission controllers.

Furthermore, --server-dry-run can be used along with --output/-o to retrieve the computed result.

How should the problem be solved?

The admission controller should support --server-dry-run. An introductory post can be found here: https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/
If I understand the issue correctly, only a minimal change to linkerd's admissionregistration.k8s.io/v1beta1.Webhook is necessary: it needs to set the sideEffects field to indicate that it has no side effects on dry runs.
If there are side effects I am unaware of (e.g. the admission controller creates TLS identities), the admission controller needs to disable those side effects for requests marked as dry-run.
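
For illustration, a minimal sketch of what that change could look like on the proxy-injector's mutating webhook configuration; the resource, webhook, and service names are assumptions for the sketch, not necessarily Linkerd's actual ones:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: linkerd-proxy-injector-webhook-config  # assumed name
webhooks:
  - name: linkerd-proxy-injector.linkerd.io    # assumed name
    clientConfig:
      service:
        name: linkerd-proxy-injector           # assumed service
        namespace: linkerd
        path: "/"
    # Declaring no side effects lets the apiserver call the webhook
    # for requests marked as dry-run.
    sideEffects: None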

How would users interact with this feature?

The user can then use --server-dry-run to test deployments utilizing the linkerd admission controller, e.g.:

# test deployment
kubectl apply --server-dry-run -f my-deployment.yaml
# retrieve computed output
kubectl apply --server-dry-run -f my-deployment.yaml -o yaml

trevex commented May 27, 2019

@olix0r @grampelberg This was the issue I was talking about at KubeCon EU, which is on my personal wishlist ☝️

@olix0r olix0r added the priority/P1 Planned for Release label May 28, 2019
@olix0r olix0r added this to To do in 2.5 - Release via automation May 28, 2019
Pothulapati commented

I started looking at this issue. From the docs here, a sideEffects field has to be added, with value None or NoneOnDryRun depending on the webhook's behaviour.

I also looked at the sp-validator and proxy-injector webhooks, and I don't see any side effects, i.e. creation of any other resources, by either webhook. If that is the case, I should be able to submit a PR adding the field sideEffects: None to both webhooks, as sketched below.
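
For the sp-validator the change would be the analogous one-liner; again, a sketch with assumed names:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: linkerd-sp-validator-webhook-config  # assumed name
webhooks:
  - name: linkerd-sp-validator.linkerd.io    # assumed name
    clientConfig:
      service:
        name: linkerd-sp-validator           # assumed service
        namespace: linkerd
        path: "/"
    # Validation only accepts or rejects the object and creates
    # nothing, so the webhook can declare itself side-effect free.
    sideEffects: None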


ihcsim commented May 31, 2019

@Pothulapati Setting sideEffects: None sounds like the right approach to me, since our webhooks don't modify any other resources. We will need to test it out to make sure there are no undesirable side effects (no pun intended) on both auto-injection and profile validation.

Per slack convo, the other thing I will look out for is that the sideEffects option was only introduced in 1.12. We might need to confirm that this change doesn't cause linkerd install to fail on k8s 1.11. (Last I checked, Linkerd has to work on k8s 1.11+.)


Pothulapati commented Jun 2, 2019

Hmm, I was working on the issue, trying the emojivoto example, and wondering why the dry run wasn't failing even though the webhook's sideEffects is currently Unknown. After a lot of tries, I realised that our proxy-injector only listens for pod activities, while the emojivoto example deals with deployments. So the dry run doesn't fail because our proxy-injector isn't involved there. A pod spec with injection enabled does fail on a dry run. :p
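
For reference, a minimal pod spec that does exercise the injector on a dry run; linkerd.io/inject is Linkerd's opt-in annotation, while the pod name and image are just illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: inject-dry-run-test     # illustrative name
  annotations:
    linkerd.io/inject: enabled  # triggers the proxy-injector webhook
spec:
  containers:
    - name: app
      image: nginx              # any image works for a dry run

Applying this with kubectl apply --server-dry-run -f pod.yaml reproduces the failure, whereas dry-running a Deployment does not, since its pods would only be created later by the ReplicaSet controller.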

Now, working on the PR.


Pothulapati commented Jun 3, 2019

For Kubernetes versions <1.12.0, the installation fails because the apiserver rejects, rather than ignores, the unknown sideEffects field. We need a way to check for this and omit the field in that case.

If supporting 1.11.0 is a hard requirement, we can add the check during rendering: since we already talk to the cluster during install, we can get the version from the cluster and check it. If it's not a hard requirement, we don't need any checks.

@grampelberg @olix0r

grampelberg commented

@Pothulapati I believe the currently supported k8s versions are 1.12, 1.13 and 1.14 (https://kubernetes.io/docs/setup/version-skew-policy/).

We should add a check that fails for unsupported cluster versions if we go that route, though.

grampelberg commented

@Pothulapati didn't this merge?


Pothulapati commented Jun 17, 2019

@grampelberg I had the PR ready, getting the version and adding the field only if the version is >= 1.12, but @ihcsim and I were discussing: since the version check happens in the middle of install, should install proceed when we can't connect to a Kubernetes API? Right now, when the --ignore-cluster flag is provided, the manifest is returned even when there is no cluster to connect to. Should we honor the same flag and not fail when it is set? And in that case, should we add the field or not?

Given all these paths, @ihcsim suggested discussing with the team whether this is really needed, as 1.11 will stop being supported somewhere down the line. WDYT?


ihcsim commented Jun 17, 2019

@grampelberg There was a question of how to handle this with the linkerd install --ignore-cluster option, which doesn't make a call to the k8s API. Shall we just assume the fail-safe default of not including the sideEffects option? Or can we assume that the user has already run linkerd check --pre (which could be extended to validate the k8s API version), in which case we always add the sideEffects option during install? (I think version checks shouldn't be part of this PR.)


grampelberg commented Jun 18, 2019

Officially supported k8s is now 1.13+, so my preferred solution would be to always include the field and add a flag to remove it as part of install, foregoing the cluster version check entirely.

Maybe, as a separate PR, bump the supported version in check --pre and update the website with a doc on how to disable this feature if you really want to?


ihcsim commented Jun 18, 2019

@grampelberg 👍

@Pothulapati Does that sound good to you? So we set sideEffects: None in the sp-validator and proxy-injector webhook configs by default, and we add a flag to linkerd install, say --omit-webhook-sideeffect or something, to omit the field entirely. This new flag should be part of the install recordableFlagSet, so that it's persisted in the linkerd-config config map. We don't want the user's webhook configs to go 💥 during an upgrade because the cluster is still on an older k8s version.
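
As a sketch only, the conditional rendering could look something like the snippet below in the webhook template; omitWebhookSideEffects is a hypothetical template value wired to the proposed flag, not an existing option:

webhooks:
  - name: linkerd-proxy-injector.linkerd.io
    {{- if not .Values.omitWebhookSideEffects }}
    # Omitted on clusters older than 1.12, where the apiserver
    # rejects the then-unknown sideEffects field.
    sideEffects: None
    {{- end }}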
