fix: drop OLM artifacts blocking gitops-operators sync on k3s#6
Merged
operators/generator/ksops-generator.yaml had its only files: entry pointing
at operators/arc/dindsystem.yaml, which no longer exists. Per-subdir ksops
generators in arc/, cert-manager/, cloudflare/, etc. cover what's needed.
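For context, the surviving per-subdir generators follow the standard KSOPS exec-plugin shape — roughly the sketch below (the metadata name and file list are illustrative, not copied from the repo):

```yaml
apiVersion: viaduct.ai/v1
kind: ksops
metadata:
  name: ksops-cert-manager-secrets
  annotations:
    config.kubernetes.io/function: |
      exec:
        path: ksops
files:
  # sops-encrypted Secrets decrypted at kustomize-build time
  - ./cloudflare-api-token-secret.yaml
```

Because each subdir carries its own generator next to the files it decrypts, deleting the dangling top-level one loses nothing.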
operators/cert-manager/:
- operator.yaml (OLM Subscription) — cert-manager is bootstrap-installed
by tfroot-libvirt cloud-init now
- apiserver-config.yaml (config.openshift.io APIServer) — no equivalent
needed on k3s; api.makeitwork.cloud isn't fronted by a managed apiserver
- certmanager-config.yaml (operator.openshift.io CertManager) — its only
operational value was the --dns01-recursive-nameservers args; those are
applied to the upstream cert-manager Deployment by cloud-init in
tfroot-libvirt
- keep cluster-issuer.yaml + the Cloudflare DNS-01 token Secret generator
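The kept ClusterIssuer presumably pairs an ACME account with a Cloudflare DNS-01 solver reading that token Secret. A minimal sketch — issuer name, email, and Secret key names here are assumptions, not taken from the repo:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt          # hypothetical name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@makeitwork.cloud            # hypothetical
    privateKeySecretRef:
      name: letsencrypt-account-key          # hypothetical
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token     # hypothetical key/name
              key: api-token
```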
operators/kustomization.yaml: comment out ansible and grafana — both still
ship OLM Subscriptions that require OperatorHub CRDs not present on k3s.
Re-enable once rewritten as upstream operator manifests.
The gh-cli image declares its USER by name, which kubelet cannot validate against runAsNonRoot=true without a numeric runAsUser; pin runAsUser to 1000 to match the image's gh user. Also refresh stale kube-linter ignore reasons.
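The fix reduces to an explicit numeric UID in the Job's securityContext — a minimal sketch:

```yaml
# Job pod template securityContext (sketch)
securityContext:
  runAsNonRoot: true
  runAsUser: 1000   # numeric UID of the image's `gh` user; a name-only USER
                    # in the image can't be verified as non-root by kubelet
```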
xnoto added a commit that referenced this pull request on Apr 30, 2026:
## Summary

Register Headlamp as a Dex static client in the ArgoCD CR's `dex.config`. Reuses the existing GitHub OAuth flow that ArgoCD already runs, so we don't need a second Dex install or a second GitHub OAuth app — Dex bridges GitHub OAuth → OIDC, Headlamp consumes the OIDC. The same wiring will land for Grafana in a small follow-up PR (replacing its built-in `GF_AUTH_GITHUB_*` with `GF_AUTH_GENERIC_OAUTH_*` pointing at the same Dex), per the consolidation we agreed on.

## Changes

- `bootstrap/argocd-config.yaml` — add a `staticClients` block under `dex.config` registering `id: headlamp`, `name: Headlamp`, `redirectURIs: [https://headlamp.makeitwork.cloud/oidc-callback]`. Secret pulled via `$dex.headlamp.clientSecret` from `argocd-secret`.
- `bootstrap/secrets/github-oauth-secret.yaml` — add `dex.headlamp.clientSecret` (sops-encrypted via the existing AGE key).
- `.sops.yaml` — generalize the encrypted_regex from `dex\.github\.client(ID|Secret)` to `dex\.[a-z]+\.client(ID|Secret)` so any future Dex static clients pick up encryption automatically.

## Test plan

- [x] `kustomize build bootstrap/secrets` decrypts the new field cleanly via sops/KSOPS
- [x] After merge: `kubectl -n argocd get cm argocd-cm -o jsonpath='{.data.dex\.config}'` shows the staticClients block
- [x] After Headlamp install lands (next PR): GitHub OAuth → ArgoCD Dex → Headlamp callback succeeds and lands on the dashboard with a cluster-scoped session

## Pairs with

- tfroot-cloudflare #6 (merged) — CNAMEs + Access app for headlamp/k3s
- Next PR — operators/headlamp + workloads/headlamp install with this Dex issuer
- Tiny follow-up — Grafana migrates from built-in GitHub OAuth to generic OAuth via this same Dex

🤖 Generated with [Claude Code](https://claude.com/claude-code)
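Inside `dex.config`, the registered static client would look roughly like the sketch below (the GitHub connector stanza is left as a comment; the `$dex.headlamp.clientSecret` indirection is ArgoCD's syntax for resolving a value out of `argocd-secret`):

```yaml
dex.config: |
  # connectors: (existing GitHub connector unchanged)
  staticClients:
    - id: headlamp
      name: Headlamp
      redirectURIs:
        - https://headlamp.makeitwork.cloud/oidc-callback
      secret: $dex.headlamp.clientSecret   # resolved from argocd-secret at runtime
```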
xnoto added a commit that referenced this pull request on Apr 30, 2026:
## Summary

Install Headlamp as a Helm-based ArgoCD Application, fronted by the cluster-apps Cloudflare Tunnel at `https://headlamp.makeitwork.cloud`, with login flowing through ArgoCD's embedded Dex (which bridges to GitHub OAuth — same path Grafana will move to next).

### `operators/headlamp/`

- **`namespace.yaml`** — headlamp ns.
- **`oidc-secret.yaml`** — Secret named `oidc` in headlamp ns, sops-encrypted. Consumed by the upstream chart with `config.oidc.secret.create=false, name=oidc`. `clientSecret` matches the value in `argocd-secret.dex.headlamp.clientSecret` from the merged Dex static-client PR.
- **`ksops-headlamp-secrets.yaml`** — pulls the secret in via KSOPS.
- **`application.yaml`** — ArgoCD Helm Application installing chart v0.41.0 from `https://kubernetes-sigs.github.io/headlamp/`. Cluster-admin RBAC (single-user home cluster), modest resource limits.
- **`kustomization.yaml`** + add `headlamp` to `operators/kustomization.yaml`.

### `workloads/headlamp/`

- **`tunnel-binding.yaml`** — TunnelBinding fronts the headlamp Service on `headlamp.makeitwork.cloud` via the existing cluster-apps tunnel.

### `workloads/apps/`

- **`headlamp-app.yaml`** + add to `workloads/apps/kustomization.yaml` — ArgoCD Application that syncs the workload manifests.

## Pairs with

- tfroot-cloudflare #6 (merged) — CNAME for headlamp.makeitwork.cloud.
- kustomize-cluster #12 (merged) — Headlamp registered as a Dex static client.

## Test plan

- [x] After merge: `kube-prometheus-stack` operators app + `headlamp` operators app reach Synced + Healthy
- [x] After merge: headlamp-app workloads app reaches Synced + Healthy
- [x] After merge: `https://headlamp.makeitwork.cloud` redirects to ArgoCD's Dex GitHub login, then back to a working dashboard with cluster-admin scope

🤖 Generated with [Claude Code](https://claude.com/claude-code)
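Pulling the pieces above together, `application.yaml` plausibly looks like the following sketch (project, destination, and sync policy are assumptions beyond what the commit message states; chart version and OIDC values come from it):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: headlamp
  namespace: argocd
spec:
  project: default                      # assumption
  destination:
    server: https://kubernetes.default.svc
    namespace: headlamp
  source:
    repoURL: https://kubernetes-sigs.github.io/headlamp/
    chart: headlamp
    targetRevision: 0.41.0
    helm:
      values: |
        config:
          oidc:
            secret:
              create: false
              name: oidc               # pre-created, sops-encrypted Secret
  syncPolicy:                           # assumption
    automated:
      prune: true
      selfHeal: true
```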
xnoto added a commit that referenced this pull request on Apr 30, 2026:
## Summary

Expose the kube-apiserver as a TCP tunnel through the existing cluster-apps Cloudflare Tunnel so admins can reach `kubectl` from anywhere without VPN, without opening the apiserver to the public internet, and gated on GitHub-org-admin authentication via the Cloudflare Access app already provisioned in tfroot-cloudflare.

### `workloads/kubectl-tunnel/tunnel-binding.yaml`

TunnelBinding in `default` ns referencing the `kubernetes` Service:

```yaml
subjects:
  - name: kubernetes
    spec:
      fqdn: k3s.makeitwork.cloud
      protocol: tcp
      target: tcp://kubernetes.default.svc:443
```

### `workloads/apps/kubectl-tunnel-app.yaml` + `kustomization.yaml`

Standard ArgoCD Application wiring; sync wave 1 (after gitops-operators brings up cloudflare-operator).

## Pairs with

- tfroot-cloudflare #6 (merged) — CNAME for k3s.makeitwork.cloud + Cloudflare Access self_hosted app gating it.

## Client usage (one-time setup per laptop)

```
brew install cloudflared
cloudflared login                  # browser OIDC, stores cert
cloudflared access tcp \
  --hostname k3s.makeitwork.cloud \
  --url localhost:6443 &           # backgrounded shim

# kubeconfig points at the local shim
kubectl config set-cluster k3s --server=https://localhost:6443 --insecure-skip-tls-verify
```

(Use `--insecure-skip-tls-verify` only if you don't have the apiserver CA pinned; ideally embed the cluster CA cert in the kubeconfig instead.)

## Test plan

- [x] After merge: `kubectl-tunnel` Application Synced + Healthy
- [x] After merge: `cloudflared access tcp --hostname k3s.makeitwork.cloud --url localhost:6443` opens after GitHub OAuth
- [x] After merge: `kubectl --server=https://localhost:6443 get nodes` reaches the apiserver

🤖 Generated with [Claude Code](https://claude.com/claude-code)
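For the CA-pinned variant of the client setup, the kubeconfig cluster entry would carry the cluster CA instead of skipping verification — a sketch with a placeholder where the real base64-encoded CA from the server's kubeconfig would go:

```yaml
clusters:
  - name: k3s
    cluster:
      server: https://localhost:6443        # the local cloudflared shim
      certificate-authority-data: <base64 cluster CA>  # copy from the server-side kubeconfig
```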
xnoto added a commit that referenced this pull request on Apr 30, 2026:
…#25)

## Summary

Single GitHub Actions runner-set running the rebuilt tfroot-runner image (based on `ghcr.io/actions/actions-runner`). No docker-in-docker, no nested `container:` blocks in caller workflows. Consumers move to `runs-on: arc-tf`.

### Removed (legacy summerwind ARC + dind plumbing)

- `operators/arc/dind-application.yaml` — summerwind controller install
- `operators/arc/github-token-secret.yaml` — its `arc-dind-systems` token
- `operators/arc/namespace.yaml` — `arc-dind-systems` ns
- `operators/arc/ksops-arc-secrets.yaml` — only listed the deleted token
- `workloads/arc/runner-application.yaml` — old runner-set with `docker:dind` sidecar
- `workloads/arc/docker-daemon-config.yaml` — dind registry-mirror config
- `workloads/arc/registry.yaml` — internal docker-registry ns + SA + RB
- `workloads/arc/rbac.yaml` — `system:openshift:scc:privileged` binding (the SCC ClusterRole doesn't exist on k3s)

### Added

- `workloads/arc/arc-tf-application.yaml` — `gha-runner-scale-set` Helm Application, `releaseName / runnerScaleSetName: arc-tf`, `image: ghcr.io/makeitworkcloud/tfroot-runner:latest`. `ignoreDifferences` for the controller-mutated listener resources (same fix that was applied to the old generic runner-set in #11).

### Tidied

- `workloads/apps/arc-app.yaml` — drop the OpenShift ImageStream `ignoreDifferences` block.
- Both `operators/arc/` and `workloads/arc/` `kustomization.yaml` files trimmed to the surviving resources.

### Kept

- `operators/arc/arcsystem.yaml` — the `gha-runner-scale-set-controller` Application (the new arc-tf runner-set depends on it).
- `workloads/arc/namespace.yaml` — `arc-runners` ns reused for the new runner-set.
- `workloads/arc/github-token-secret.yaml` + `ksops-arc-secrets.yaml` — `arc-runner-github-token` Secret reused as `githubConfigSecret`.

## Pairs with

- images PR #6 (merged) — tfroot-runner image rebased onto `ghcr.io/actions/actions-runner`.
- shared-workflows (incoming) — drop the nested `container:` block; default `runs-on: arc-tf`.
- tfroot-libvirt (incoming) — caller switches from `runs-on: arc-dind` + `container:` to plain `runs-on: arc-tf`.

## Test plan

- [x] After merge: `kubectl -n arc-runners get autoscalingrunnerset arc-tf` exists; listener pod registers with GitHub
- [x] After merge: GitHub org → Actions → Runners shows an `arc-tf` runner set
- [x] After merge: a job with `runs-on: arc-tf` spawns an ephemeral pod in arc-runners, runs to completion, pod terminates
- [x] After merge: legacy `arc-dind` Application is pruned by gitops-operators; `arc-dind-systems` ns gone
- [x] After merge: `docker-registry` ns gone (pruned)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
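The Helm values behind the `arc-tf` runner-set Application would reduce to something like this sketch (the `githubConfigUrl` org is an assumption; the secret and scale-set names come from the commit message):

```yaml
githubConfigUrl: https://github.com/makeitworkcloud   # assumption: org-level registration
githubConfigSecret: arc-runner-github-token           # reused Secret from workloads/arc/
runnerScaleSetName: arc-tf                            # what callers target via runs-on
template:
  spec:
    containers:
      - name: runner
        image: ghcr.io/makeitworkcloud/tfroot-runner:latest
        command: ["/home/runner/run.sh"]
```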
Summary
Two unrelated bugs surface together as a blocked `gitops-operators` Application after the OpenShift→k3s migration in `6b3abd0`:

- `operators/cert-manager/operator.yaml`, `cert-manager/apiserver-config.yaml`, `cert-manager/certmanager-config.yaml`, plus the `ansible/` and `grafana/` operator manifests are all OLM Subscriptions / OperatorHub CRs. Their CRDs don't exist on k3s, so kustomize build → server-side apply fails with `no matches for kind "Subscription"` etc.
- `operators/generator/ksops-generator.yaml` had its only `files:` entry pointing at `arc/dindsystem.yaml`, which was removed in `945130b` (selective-field-encryption refactor). Kustomize build aborts with `no such file or directory`.

This PR:

- Deletes `operators/generator/ksops-generator.yaml`. Per-subdir ksops generators in `arc/`, `cert-manager/`, `cloudflare/`, `bootstrap/secrets/`, `workloads/*/` cover all secret decryption — there's no centralized pipeline being lost.
- Trims `operators/cert-manager/` down to `cluster-issuer.yaml` + `cloudflare-api-token-secret.yaml` (kept) + `ksops-cert-manager-secrets.yaml` (kept). Cert-manager itself is bootstrap-installed by `tfroot-libvirt` cloud-init now (see paired PR there); the `--dns01-recursive-nameservers` controller args from the deleted `CertManager` CR are applied directly to the upstream Deployment by cloud-init.
- Comments out `ansible` and `grafana` in `operators/kustomization.yaml`. Re-enable once they're rewritten as upstream operator manifests (Phase B).
- `bootstrap/ci-token-sync-job.yaml` gets `runAsUser: 1000` so the Job's `runAsNonRoot=true` actually validates against `gh-cli`'s `gh` user (paired with the images-repo PR pinning `USER 1000` numerically).

Pairs with

- `tfroot-libvirt` PR #2 (feat: add GitHub OAuth, KSOPS fixes, and CD workflow) — bootstraps cert-manager from cloud-init.
- `images` PR — `gh-cli` switches to numeric `USER 1000`.

Test plan

- `kustomize build operators/` succeeds (no missing-file or unknown-kind errors)
- `bootstrap-secrets` Application is Synced + Healthy
- `gitops-operators` Application reaches Synced + Healthy (pending push so ArgoCD picks it up)
- `ci-token-sync` Job runs to completion, syncs the deploy token to GitHub

🤖 Generated with [Claude Code](https://claude.com/claude-code)