
reevaluate CLI design #23

Closed
3 tasks done
hall opened this issue Jun 12, 2023 · 10 comments

Comments

@hall
Owner

hall commented Jun 12, 2023

I'm starting to come around to the idea of templating charts out of Helm and then piping them to kubectl (as opposed to installing with helm). In part, this is because the current CLI doesn't handle releases defined within submodules, so I have an immediate need to make improvements there. More broadly, the current implementation is already clunky and I'm not convinced the existing complexity is worthwhile.

Overall, this is what I'd like from a CLI:

  • access to rendered manifests before they're applied; this is necessary to, e.g., inject secrets
    • keeping secrets out of manifests makes persisting to the nix store reasonable, thereby bringing config more in line with the overall ecosystem (rollbacks being a potentially major advantage should anyone wish to develop post-deploy testing).
    • currently, I'm using the pattern of writing all cluster secrets to a file with agenix and then piping the rendered manifests through vals; I'm pretty satisfied with this setup, and supporting it takes barely one line of shell script on its own (see the sketch after this list).
  • generate a diff prior to apply
    • it's important to know what will change; I'd be happy if we got this functionality from more generic tooling though (e.g., something that can diff arbitrary module config)
  • prune deleted resources
    • maybe the most "complicated" bit but everything should already be in place for this
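
For reference, a minimal sketch of that secrets pattern (file names, paths, and the ref+file placeholder convention are illustrative assumptions, not something this repo prescribes):

    # decrypt cluster secrets to a local file with agenix
    agenix -d secrets/cluster.age > /tmp/cluster-secrets.yaml
    # vals resolves ref+file:// placeholders in the rendered manifests,
    # so secrets never land in the nix store
    vals eval -f manifests.json | kubectl apply -f -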

All of these are fairly simple with kubectl alone. Once I threw helm into the mix, things started getting a little hairy (with questionable benefit), especially since the current modules all appear to be designed under the assumption that manifests will be piped to kubectl.

All this to say, I'm going to (on a branch for now) drop helm from the CLI to focus solely on the above tasks. I'd be happy to hear how others feel and what they'd like to see accomplished here.

@hall
Owner Author

hall commented Jun 12, 2023

One drawback is that kubectl diff isn't as smart as helm diff, in that it will throw an error if any resources reference missing namespaces or CRDs (even if said resources are within the same set). Not a deal breaker, I feel.

@hall
Owner Author

hall commented Jun 12, 2023

Will also need a way to filter out noise if we're going to have useful diffs; otherwise, there will be countless hunks like

-    kubenix/hash: bf784eba18bc4ac54f30ad01873205d7486b0163
+    kubenix/hash: 67c3651392deea6752d5d05c0e83a369e6caabaa

on every execution.

@hall
Owner Author

hall commented Jun 13, 2023

Hashes cluttering up the diff were solved by using KUBECTL_EXTERNAL_DIFF to add diff's -I flag, which ignores changes whose lines all match a pattern, like so:

diff -N -u -I ' kubenix/hash: ' "$@"

See the linked MR for the full implementation.
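
For context, a rough sketch of how that gets wired in (the wrapper file name is an assumed example; kubectl diff invokes it with the live and merged manifest directories as its two arguments):

    #!/usr/bin/env sh
    # ignore-hash-diff.sh (name assumed): ignore changes where every
    # changed line matches the kubenix/hash annotation
    diff -N -u -I ' kubenix/hash: ' "$@"

    KUBECTL_EXTERNAL_DIFF=./ignore-hash-diff.sh kubectl diff -f manifests.json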

@hall
Owner Author

hall commented Jun 14, 2023

I've added basic deletion with --prune, which seems to work fairly well for my simple use case. Kubernetes 1.27 adds support for ApplySets, which appears to be an even more robust approach that would avoid adding said logic here.
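
As a rough illustration (the label selector here is an assumed example, not what the CLI actually uses):

    # --prune deletes previously applied resources that match the selector
    # but are absent from the current manifest set
    kubectl apply --prune -l app.kubernetes.io/managed-by=kubenix -f manifests.json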

More importantly, that raises another issue of verifying/matching the version skew policy. Should be possible since we already have access to the user's API version.
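
A sketch of what that check might look like (the jq paths follow kubectl version -o json output; the ±1 minor tolerance is kubectl's documented skew policy):

    # strip the trailing "+" some distributions append to the minor version
    client=$(kubectl version -o json | jq -r '.clientVersion.minor' | tr -d '+')
    server=$(kubectl version -o json | jq -r '.serverVersion.minor' | tr -d '+')
    skew=$(( client - server ))
    [ "${skew#-}" -le 1 ] || echo "warning: kubectl is more than one minor version from the server" >&2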

@adrian-gierakowski
Contributor

access to rendered manifests before they're applied; this is necessary to, e.g., inject secrets

For what it's worth, the approach I use is to keep encrypted secrets within the repo alongside the k8s manifests and have them decrypted in an init container (a slightly evolved version of this) before the pod starts. I use sops for encrypting/decrypting as it integrates with various cloud providers.
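
A rough sketch of that flow (file names and paths are assumed; --encrypt/--decrypt are sops' standard flags):

    # encrypt once and commit the encrypted file next to the manifests;
    # sops picks keys (age, PGP, or a cloud KMS) from its .sops.yaml config
    sops --encrypt secrets.yaml > secrets.enc.yaml
    # in the init container, decrypt to a shared volume before the main container starts
    sops --decrypt secrets.enc.yaml > /secrets/secrets.yaml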

generate a diff prior to apply

I use an intermediate repo with the final generated manifests and then argocd, which watches for changes on the repo and syncs them to the cluster. This also takes care of pruning deleted resources.

So a CI job, instead of running kubectl apply, simply generates the manifests and pushes them to the manifest repo, either directly to the main branch or by creating a PR first, which can be reviewed and acts as a gating mechanism for deploys. This also allows you to make your cluster fully private and inaccessible from the outside world (as argocd runs within the cluster).
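
A minimal sketch of that CI step (the build command, repo, and paths are assumptions about the setup described, not prescribed by anything here):

    # render the manifests however the project builds them...
    nix build .#manifests -o result
    # ...then push them to the manifest repo that argocd watches
    git clone git@github.com:example/manifest-repo.git
    cp result/* manifest-repo/manifests/
    cd manifest-repo && git add -A && git commit -m "update manifests" && git push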

@hall
Owner Author

hall commented Jun 16, 2023

For what it's worth, the approach I use is to keep encrypted secrets within the repo alongside the k8s manifests and have them decrypted in an init container (a slightly evolved version of this) before the pod starts.

I like agenix, as it matches how I manage secrets outside of k8s, but this is also a nice approach (that's probably worth adding to the docs). I might have to play around with something along these lines and see how it goes.

I use an intermediate repo with the final generated manifests and then argocd, which watches for changes on the repo and syncs them to the cluster. This also takes care of pruning deleted resources.

I think gitops is a bit overkill for my own use-case. ArgoCD is pretty nice but adds more complexity than I personally want/need. That said, it's also a perfectly valid approach and does solve a lot of the same problems in a slightly different way.

I appreciate having these details though. I'd like to beef up the docs with more stuff like this. I recently added a "tips-n-tricks" section which I mean to use as a bucket to collect pages for these sorts of "this is how you might do X" approaches. Might not be the best title, but having them is nicer than having to discover things on your own.

@adrian-gierakowski
Contributor

ArgoCD is pretty nice but adds more complexity than I personally want/need.

Fair point.

I appreciate having these details though. I'd like to beef up the docs with more stuff like this. I recently added a "tips-n-tricks" section

Cool, happy to contribute

@adrian-gierakowski
Contributor

Another tool which can be used to track changes and prune resources when deploying kubectl apply style: https://carvel.dev/kapp/
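
For reference, basic usage looks like this (the app name and path are assumed examples):

    # kapp records everything applied under the app name, so resources
    # dropped from the manifests are pruned on the next deploy
    kapp deploy -a kubenix -f manifests/
    # and the whole set can be removed in one go
    kapp delete -a kubenix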

@hall
Owner Author

hall commented Jun 20, 2023

Ooh, kapp looks to fit the bill pretty nicely, with ordered applies, ready checks, and even targeting with labels. Thanks for pointing that out; I hadn't seen it before.

So maybe a bigger question is: should we provide a "golden" apply path? The alternative (so far as I understand it) is just documenting a few suggestions and leaving implementation to the user.

@hall
Owner Author

hall commented Jul 7, 2023

Thanks for being a sounding board here, @adrian-gierakowski 💯 I'm going to assume most users are in the same boat as you and are currently relying on some outside deployment mechanism (which is perfectly fine, and in many ways better, of course; maybe I'll go that way myself one of these days 🙃).

I've merged the associated MR, which does about all I'm personally looking for here (I'll continue to iterate, of course).
I'm going to open a new issue for myself to document some of these alternative deployment methods based on your feedback.
