Update common to latest common/main for templated value files#106
Merged

mbaldessari merged 39 commits into validatedpatterns:main on Oct 16, 2023
Conversation
This is a simple, quick check that all Argo applications in all
namespaces are synced; it errors out if they are not.
Synced example:

```
$ make argo-healthcheck
make -f common/Makefile argo-healthcheck
make[1]: Entering directory '/home/michele/Engineering/cloud-patterns/multicloud-gitops'
Checking argo applications
mcg-private-hub acm -> Sync: Synced - Health: Healthy
mcg-private-hub config-demo -> Sync: Synced - Health: Healthy
mcg-private-hub golang-external-secrets -> Sync: Synced - Health: Healthy
mcg-private-hub hello-world -> Sync: Synced - Health: Healthy
mcg-private-hub vault -> Sync: Synced - Health: Healthy
openshift-gitops mcg-private-hub -> Sync: Synced - Health: Healthy
make[1]: Leaving directory '/home/michele/Engineering/cloud-patterns/multicloud-gitops'
```
Not synced example:

```
$ make argo-healthcheck
make -f common/Makefile argo-healthcheck
make[1]: Entering directory '/home/michele/Engineering/cloud-patterns/multicloud-gitops'
Checking argo applications
mcg-private-hub acm -> Sync: Synced - Health: Healthy
mcg-private-hub config-demo -> Sync: Synced - Health: Degraded
mcg-private-hub golang-external-secrets -> Sync: Synced - Health: Healthy
mcg-private-hub hello-world -> Sync: Synced - Health: Healthy
mcg-private-hub vault -> Sync: Synced - Health: Progressing
openshift-gitops mcg-private-hub -> Sync: Synced - Health: Healthy
Some applications are not synced or are unhealthy
make[1]: *** [common/Makefile:115: argo-healthcheck] Error 1
make[1]: Leaving directory '/home/michele/Engineering/cloud-patterns/multicloud-gitops'
make: *** [Makefile:12: argo-healthcheck] Error 2
```
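The check above can be sketched in plain shell. The following is a hypothetical illustration of the logic, not the actual `common/Makefile` recipe; it assumes the per-application status lines have already been collected (in a real cluster they would come from `oc`, as hinted in the comment):

```shell
# Hypothetical sketch of the argo-healthcheck logic, NOT the actual
# common/Makefile recipe. Reads "namespace app sync health" lines on
# stdin and fails if any app is not both Synced and Healthy.
check_apps() {
    failed=0
    while read -r ns app sync health; do
        echo "$ns $app -> Sync: $sync - Health: $health"
        if [ "$sync" != "Synced" ] || [ "$health" != "Healthy" ]; then
            failed=1
        fi
    done
    if [ "$failed" -ne 0 ]; then
        echo "Some applications are not synced or are unhealthy" >&2
        return 1
    fi
    return 0
}

# In a real cluster the input would come from something like:
#   oc get applications -A -o jsonpath='{range .items[*]}{.metadata.namespace} {.metadata.name} {.status.sync.status} {.status.health.status}{"\n"}{end}'
printf '%s\n' \
    "mcg-private-hub acm Synced Healthy" \
    "mcg-private-hub vault Synced Progressing" \
    | check_apps || echo "healthcheck failed"
```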
Adding label validatedpatterns.io/pattern to all applications.
Drop the old logic: install the CRD via `oc` and use `helm template` for the rest. The rationale is that `helm install` is very picky whenever it encounters things that already exist. We have three potential scenarios at work here:

A) User installs operator+pattern via CLI and updates the pattern via CLI (this worked before this change as well)
B) User installs operator+pattern via UI but runs updates (changing branch, for example) via CLI (this worked before this change as well)
C) User installs only the operator via UI, then installs and updates the pattern via CLI. This was broken before this change. The error you'd get was:

```
./pattern.sh make install
...
https://github.com/mbaldessari/multicloud-gitops.git - branch main: Running inside a container: Skipping git ssh checks
+ oc get crds patterns.gitops.hybrid-cloud-patterns.io
+ echo 'Reapplying helm chart:'
Reapplying helm chart:
+ helm template --name-template multicloud-gitops common/operator-install/ -f values-global.yaml --set main.git.repoURL=https://github.com/mbaldessari/multicloud-gitops.git --set main.git.revision=main
+ oc apply set-last-applied --create-annotation -f-
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/michele/sno1-kubeconfig
Error from server (NotFound): patterns.gitops.hybrid-cloud-patterns.io "multicloud-gitops" not found
```

With this change we simplify the process: we forcefully apply/install the CRD for patterns via the `oc` command, then simply template out the operator-install chart and `oc apply` it. We retry a few times, because the CRD might not yet be fully registered in the cluster.

Tested successfully on scenarios A), B), and C).
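The retry described above can be sketched as a small shell helper. This is an illustrative sketch, not the exact code in `common/`; the real commands it would wrap are only hinted at in the trailing comment:

```shell
# Illustrative retry helper, not the exact common/ implementation:
# run a command up to $1 times, sleeping between attempts so a
# freshly-applied CRD has time to register in the cluster.
retry() {
    tries=$1
    shift
    n=0
    until "$@"; do
        n=$((n + 1))
        if [ "$n" -ge "$tries" ]; then
            echo "command failed after $tries attempts" >&2
            return 1
        fi
        sleep 1
    done
}

# The real flow is roughly (per the PR description):
#   oc apply -f <pattern CRD manifest>
#   retry 5 sh -c "helm template multicloud-gitops common/operator-install/ \
#       -f values-global.yaml | oc apply -f -"
```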
Install the CRD inside the loop to simplify the code a bit
Rework installation target
The validate-cluster target will be in charge of doing some sanity
checks on the cluster. Initially we just check the connection to the
cluster and that at least one storageclass is available.
Tested as follows:

1) Cluster with a storageclass (LVM in my case):

```
$ make validate-cluster
Checking cluster:
cluster-info: OK
storageclass: OK
```

2) Cluster without a storageclass:

```
$ make validate-cluster
Checking cluster:
cluster-info: OK
storageclass: None Found
make: *** [Makefile:99: validate-cluster] Error 1
```
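The storageclass part of such a check could look roughly like the following. The helper name is hypothetical and this is not the actual `common/Makefile` target; in a real run the count would come from `oc get storageclass`:

```shell
# Hypothetical sketch of the storageclass check in validate-cluster,
# not the actual common/Makefile target. Takes the number of
# storageclasses found; in a real cluster that would be e.g.:
#   count=$(oc get storageclass -o name 2>/dev/null | wc -l)
check_storageclasses() {
    count=$1
    if [ "$count" -gt 0 ]; then
        echo "storageclass: OK"
    else
        echo "storageclass: None Found"
        return 1
    fi
}

check_storageclasses 1   # prints "storageclass: OK"
```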
Introduce a validate-cluster target in the install target
IIB improvements
Our resourceCustomizations setting is currently producing the following warning:

```
Warning  DeprecationNotice  27m  ResourceCustomizations is
deprecated, please use the new formats `ResourceHealthChecks`,
`ResourceIgnoreDifferences`, and `ResourceActions` instead.
```
This actually becomes a problem with gitops-1.10, because it dropped
support for the v1alpha version of the ArgoCD CR (it upgrades it
automatically to v1beta). So the cluster-wide Argo app which is in
charge of creating the namespaced ArgoCD instance will always be
OutOfSync, as it will never be able to set the
`resourceCustomizations` field.
Move to `resourceHealthChecks`, which is the new supported way. This
is also backwards compatible with gitops-1.8.
Tested as follows:
1. Deployed 4.13 with gitops-1.10 and observed the multicloud-gitops-hub
being OutOfSync
2. Applied this patch and observed it going to green and sync correctly
3. Tested this on gitops-1.8.5 on 4.13 and deployed MCG correctly with
all apps becoming green everywhere.
Fixes: validatedpatterns/common#367
Move from resourceCustomization to resourceHealthcheck
Upgrade to ESO 0.9.5
Introduce an argo-healthcheck make target
Release 0.0.3 golang-external-secrets
Release 0.0.3 clustergroup
Template values
Fix up tests after last PR
Release clustergroup v0.0.4
From https://docs.podman.io/en/latest/markdown/podman-run.1.html#pull-policy:

Pull image policy. The default is `missing`.

- `always`: Always pull the image and throw an error if the pull fails.
- `missing`: Pull the image only if it could not be found in the local containers storage. Throw an error if no image could be found and the pull fails.
- `never`: Never pull the image but use the one from the local containers storage. Throw an error if no image could be found.
- `newer`: Pull if the image on the registry is newer than the one in the local containers storage. An image is considered to be newer when the digests are different. Comparing the time stamps is prone to errors. Pull errors are suppressed if a local image was found.

Switching to `--pull=newer` will allow us to keep this image up to date for users without erroring out if podman cannot check or pull a new image (i.e. we'd keep running the local one).
Add --pull=newer when running the container
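The resulting invocation looks roughly like the following; the image name and the other flags here are illustrative assumptions, not the exact command line used by `pattern.sh`:

```shell
# Illustrative only: the image name and other flags are assumptions,
# not the exact pattern.sh command line.
IMAGE="quay.io/hybridcloudpatterns/utility-container:latest"

# --pull=newer: fetch the image when the registry has a newer digest,
# but keep running the local copy if the check or pull fails.
cmd="podman run --rm -it --pull=newer $IMAGE make install"
echo "$cmd"
```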
Tested by commenting out the whole `imperative` section in values-hub and deploying MCG.
Allow imperative to be nil
Separate PR to update common to make use of the new templated value files feature.