
Feature/global k8s version #28

Merged · 7 commits into main · Mar 20, 2023
Conversation

@lukma99 (Contributor) commented Mar 13, 2023

Added

  • Specify the Kubernetes version for helm template through the k8sVersion parameter (usage sketch below)

Fixed

  • Don't fail when no values-shared.yaml is provided
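
A minimal usage sketch, assuming the library forwards k8sVersion to helm template's --kube-version flag; the release name and chart path below are illustrative, not part of this PR:

```groovy
// Hypothetical sketch: pass the cluster's Kubernetes version once,
// so the library can render manifests against the right API versions.
def gitopsConfig = [
    k8sVersion: '1.24.8'
    // ... remaining gitops-build-lib configuration ...
]

// Internally this would translate to something like:
//   helm template my-app ./chart --kube-version 1.24.8
```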

@lukma99 self-assigned this Mar 13, 2023
@schnatterer (Member) commented:

@lukma99 We didn't talk about the env var in detail. Seeing the implementation now, I see some disadvantages to a built-in GITOPS_BUILD_LIB_K8S_VERSION env var.
First: it adds a convention (which we wouldn't need IMO, see below) and creates some indirect coupling between the Jenkins env and our library.
Second: what if there are, for example, multiple teams or apps within one Jenkins instance that use separate clusters (which might have different versions)?

My idea was that the user can explicitly pass an env var as a param in their gitopsConfig in order to maintain it more easily.

```groovy
def gitopsConfig = [
    k8sVersion: env.K8S_VERSION_TEAM_A
]
```

We could suggest this idea of using env vars for easier maintenance in our README, but it would be up to the user whether to use it. I'd rather not hard-code an env var in our library code.

What do you think? Are you open to deleting some code?

@schnatterer (Member) commented:

I've also given the question "should we use the k8sVersion for the kubectl image?" another thought.

For me, from a user perspective, it would be much more intuitive and easier to maintain if the k8sVersion were also applied to the kubectl version we use. I know, not each and every kubectl version is present as an image tag of lachlanevenson/k8s-kubectl. But instead of having the user maintain the kubectl version redundantly in every pipeline, we could just document this as a warning in the README. In >80% of cases this should not cause problems.
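
A minimal sketch of how this could look inside the library, assuming the image tag is derived from k8sVersion unless the user overrides it explicitly; the helper name and config keys here are illustrative, not the library's actual API:

```groovy
// Sketch only: reuse k8sVersion as the kubectl image tag by default.
// Caveat from above: not every kubectl version exists as an image tag.
String kubectlImage(Map gitopsConfig) {
    // An explicitly configured image always wins over the derived one.
    return gitopsConfig.kubectlImage ?:
        "lachlanevenson/k8s-kubectl:v${gitopsConfig.k8sVersion}"
}
```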

As an alternative I've looked at other kubectl images. There is one from

  • bitnami - as usual, their images are heavyweight and have a lot of CVEs. But: they have a moving tag for minor versions, e.g. 1.25 always points to the latest 1.25 version, which would in fact be good enough for our use case. But an 80 MB image when others are about 10 MB is just too bloated IMO.
  • rancher - built from scratch, which would be super lightweight. They seem to automatically check for new versions once a week.

What do you think?

@schnatterer (Member) commented Mar 14, 2023

And one last thing for my first review: I think the README should reflect that, right now, the k8sVersion is only used with ArgoCD and Helm.

Also, please use the Unreleased section in the changelog while on a branch. I'd say we cut the release after the merge.

@schnatterer (Member) commented:

Having settled those meta things, the actual implementation looks good to me 👍️ 🙂
Good catch with the values-shared bug!

New commits:

  • …bectl image with rancher/kubectl. Use k8sVersion parameter from config as kubectl version when it is not explicitly specified.
  • … rancher is so lightweight, it does not come with cat preinstalled. This is however essential for Jenkins to run docker.inside, because it always uses cat to keep the container alive.
@lukma99 (Contributor, Author) commented Mar 15, 2023

@schnatterer I agree with all of what you said and made the changes. However, the rancher kubectl image is "too lightweight": it does not come with cat installed. While testing I found out that Jenkins always runs cat in containers where docker.inside is used, to keep them alive. So the build fails with the rancher image, as this binary is missing. I have now switched to the bitnami image despite the size (rancher/kubectl:v1.24.8 = 45.7 MB, bitnami/kubectl:1.24.8 = 218 MB).
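
For context, a minimal sketch of the pattern that triggers this (the image tag is illustrative): Jenkins Docker Pipeline keeps the container alive by running cat inside it, so the image must ship a cat binary.

```groovy
// Sketch: docker.inside starts the container and runs `cat` in it as a
// keep-alive process, so an image without cat (e.g. rancher/kubectl)
// fails here before any of our own steps even run.
node {
    docker.image('bitnami/kubectl:1.24.8').inside {
        sh 'kubectl version --client'
    }
}
```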

@schnatterer (Member) commented:

LGTM.
I feel a bit uncomfortable with the bitnami images due to size and number of CVEs. OTOH we only use it for createConfigMap(). So from both a user and a maintenance perspective it would probably be a good idea to replace the usage of the image with a native implementation (which shouldn't be too much effort). This is beyond the scope of this issue, so I'll create an issue for it.
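
A minimal sketch of what such a native implementation could look like, building the ConfigMap YAML in Groovy instead of shelling out to kubectl in a container; the function name and parameters are illustrative, not the library's actual API:

```groovy
// Sketch: generate ConfigMap YAML directly, replacing a call to
// `kubectl create configmap ... --dry-run -o yaml` inside an image.
String createConfigMapYaml(String name, String namespace, String key, String fileContent) {
    // Indent the file content so it nests correctly under the data key.
    String indented = fileContent.readLines().collect { '    ' + it }.join('\n')
    return """apiVersion: v1
kind: ConfigMap
metadata:
  name: ${name}
  namespace: ${namespace}
data:
  ${key}: |
${indented}
"""
}
```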

@schnatterer schnatterer merged commit 91c28fa into main Mar 20, 2023
@schnatterer schnatterer deleted the feature/global_k8s_version branch March 20, 2023 16:04