Major refactoring and bug fixing #54

Merged: 29 commits from feature/refactoring into main, Aug 2, 2021
Conversation

schnatterer
Member

Hopefully the playground is now much more intuitive to start from the docs.
And it should also work locally and remotely again.

"grep ${CLUSTER_NAME}" also matches gitops-playground, gitops-playground2, etc.
Remove the confirmation for init; it's not necessary. If the cluster exists, its deletion has to be confirmed. Otherwise it is just created.
DEBUG=1 scripts/init-cluster.sh will print each command (set -x)
It's nice to save resources. But it also makes the setup more complicated. Is it worth this complexity or is it premature optimization? Remove for now!
Good practices: exit on first error, fail on unset vars and fail on first error in pipe.
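In bash:

```bash
set -o errexit   # exit on first error
set -o nounset   # fail on unset vars
set -o pipefail  # fail on first error in a pipe
```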

Turns out: SKIP_KUBECTL is never set. And never used: Deleted!
And hopefully more constant ones.
"/var/run/docker.sock permission denied" because queryDockerGroupOfJenkinsNode never had a change to get to actual docker group and so jenkins agents pod run with default group, which is not allowed to access docker socket.
Change the up-/downgrade query into an INFO. The users should decide for themselves and not be bugged by our potentially older k3d version.
The default depends on k3d so it might change implicitly when upgrading k3d. We want deterministic behavior!
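Pinning the k3s image makes the Kubernetes version explicit instead of relying on k3d's default; a sketch (the version tag is only illustrative):

```bash
# Pin the Kubernetes version explicitly; without --image, k3d picks its
# own default, which changes when k3d is upgraded.
k3d cluster create "${CLUSTER_NAME}" --image rancher/k3s:v1.21.2-k3s1
```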
The name is misleading and confusing (at least to me), and the script can rather easily determine for itself whether it runs inside Kubernetes. This should make the whole process much simpler for the end user and requires less documentation.
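Kubernetes injects an API server env var and (usually) mounts a service account token into every pod, so a check like this suffices:

```bash
# Both are set up by Kubernetes inside pods; their presence is a strong
# signal that the script runs in-cluster.
if [[ -n "${KUBERNETES_SERVICE_HOST:-}" ]] ||
    [[ -f /var/run/secrets/kubernetes.io/serviceaccount/token ]]; then
  echo "Running inside a Kubernetes cluster"
fi
```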
It's much more convenient for the end user than checking out the repo, installing all the tools and applying the script. Also, we hopefully get rid of the bash hell in the long run.
We dropped the "k8s-" prefix for simplicity a while ago. Still, some resource inside the repo still used this prefix. Not anymore.
For example, if --argocd is passed, only the Argo CD example app jobs are installed.

Fixes #17
This should make the whole process much simpler for the end user and requires less documentation.
The script can query the node's IP via kubectl, which works for local k3d clusters at least.
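For example, for a single-node k3d cluster:

```bash
# Print the InternalIP of the first node
kubectl get node -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
```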
This makes testing in remote clusters and reviews much easier. Only pushes the git revision as tag, never latest!
Also use a newer cluster version.
…lure.

There is some discussion going on about the kubectl run --serviceaccount flag. It is deprecated but still works: kubernetes/kubernetes#99732
So for now, use the "overrides" workaround.

--restart=OnFailure - leaves the pod in status Completed if no errors occur. With the default (--restart=Always), it would restart and apply again, even on success.
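A sketch of the resulting invocation (image name, service account name and the trailing flags are placeholders, not necessarily the project's exact values):

```bash
# --overrides injects the serviceAccountName that the deprecated
# --serviceaccount flag used to set; args after -- go to the container.
kubectl run gitops-playground -i --tty --restart=OnFailure \
  --image=ghcr.io/cloudogu/gitops-playground \
  --overrides='{"apiVersion":"v1","spec":{"serviceAccountName":"gitops-playground-job-executer"}}' \
  -- --argocd --remote
```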

Why don't we use a job instead of a pod?

It's likely that we would have to use YAML, because "kubectl create job" does not support --overrides. This makes handling it much more complicated for the user, for example when passing flags such as --remote, --argocd, etc.
Also, interactive mode (-i --tty) is not possible.
This makes it more complicated to wait for success and to view output. It would likely require a combination of "kubectl wait --for=condition=failed job/myjob" and "kubectl logs -f".
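Roughly, the Job variant would need something like this to combine waiting and log output (myjob is a placeholder; --timeout=0s means "check once, don't wait"):

```bash
# Hypothetical sketch of the more complicated Job alternative:
kubectl logs -f job/myjob &    # stream output in the background
kubectl wait --for=condition=complete job/myjob --timeout=600s \
  || kubectl wait --for=condition=failed job/myjob --timeout=0s
```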

So for now, "kubectl run" seems to be the easiest way here.
Less complexity and attack surface for the price of a little more redundancy (the confirm function is also defined in utils.sh).
Should simplify maintenance a bit.
Same meanings in apply.sh, destroy.sh and init-cluster.sh:

DEBUG: print debug log (no spinner)
TRACE: print each line (set -x)
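A sketch of how the scripts can evaluate these variables (implementation assumed, not verbatim from the repo):

```bash
if [[ -n "${TRACE:-}" ]]; then
  set -x               # TRACE: print each line
fi
if [[ -n "${DEBUG:-}" || -n "${TRACE:-}" ]]; then
  DEBUG_LOG=true       # DEBUG: print debug log, don't show the spinner
fi
```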
Technically we could read the k8s version from a single source.
But this is rather complicated to implement (in Terraform, Dockerfile and shell script) and would probably be much more effort than changing the redundant version in three places.
@schnatterer schnatterer requested a review from phaenr July 28, 2021 09:12
@schnatterer schnatterer force-pushed the feature/refactoring branch 2 times, most recently from f03d84f to aeade4e on July 28, 2021 12:59
Signed-off-by: Marek Markiewka <marek.markiewka@cloudogu.com>
@marekzan marekzan merged commit b89f15c into main Aug 2, 2021
@marekzan marekzan deleted the feature/refactoring branch August 2, 2021 11:35