feat(iac): provision local IAM stack via single terraform apply #228
Conversation
Replace imperative bash script (setup-local.sh) with declarative Terraform in kernel/shell-iac/local/. Extract portable Logto K8s resources into reusable iac-logto-k8s submodule shared by both production (GKE) and local (minikube) environments. Closes #108 Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Note: Reviews paused. This branch is under active development; to avoid overwhelming you with review comments from an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews:
📝 Walkthrough: Adds a Terraform-driven local infra path: a new reusable Logto k8s module, Crossplane bootstrap and a local PostgreSQL composition, a local root Terraform that starts Minikube and wires Crossplane → Postgres → Logto, refactors the IAM Terraform to use the Logto module, and updates Makefile targets to run Terraform.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Dev as "Developer / Makefile"
    participant TF as "Terraform (local root)"
    participant Minikube as "Minikube"
    participant Crossplane as "Crossplane (Helm + provider)"
    participant DBComp as "Crossplane Postgres Composition"
    participant LogtoMod as "Logto k8s Module"
    participant K8s as "Kubernetes API"
    Dev->>TF: cd teams/.../local && terraform init && terraform apply
    TF->>Minikube: start minikube profile (null_resource)
    TF->>Crossplane: helm_release -> install Crossplane + provider manifests
    Crossplane->>K8s: install CRDs & ProviderConfig
    TF->>DBComp: apply postgresql-local Composition / claim (null_resource/kubectl)
    DBComp->>K8s: create StatefulSet, Service, Secrets
    TF->>LogtoMod: instantiate module (namespace, image, db_secret, tls/ingress)
    LogtoMod->>K8s: create ConfigMap, Deployment, Service, (Ingress conditional)
    K8s-->>TF: resources ready
    TF-->>Dev: outputs (logto_endpoint, admin, access_instructions)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 5 passed
View your CI Pipeline Execution ↗ for commit 547cca2
☁️ Nx Cloud last updated this comment.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@teams/kernel/security-iam-svc/iac-logto-k8s/variables.tf`:
- Around line 18-22: The variable "logto_image" currently defaults to the
floating tag "svhd/logto:latest"; update it to require an explicit, pinned image
tag by removing the default attribute from variable "logto_image" and update the
description to state that callers must supply a specific semver-pinned image
(e.g., svhd/logto:1.28.0) instead of using :latest so deployments remain
reproducible.
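As a minimal sketch of that change in `variables.tf`, the default is dropped so callers must pin a version; the validation block is an optional extra, and the example tag in the description is illustrative, not a verified Logto release:

```hcl
variable "logto_image" {
  description = "Logto container image with an explicit semver-pinned tag (e.g., svhd/logto:1.28.0). No default: callers must supply a pinned image."
  type        = string

  validation {
    # Reject floating tags so deployments stay reproducible.
    condition     = !endswith(var.logto_image, ":latest")
    error_message = "logto_image must be pinned to a specific tag, not :latest."
  }
}
```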
In `@teams/kernel/shell-iac/local/main.tf`:
- Around line 81-164: The PostgreSQL StatefulSet resource
"kubernetes_stateful_set_v1.postgresql" currently has no persistent volume
claim. Add a volume_claim_template to the spec with an appropriate storage
request and access_modes (e.g., ReadWriteOnce), then add a matching
volume_mount in the container block (name must match the PVC template's
metadata.name, mount_path /var/lib/postgresql/data) so DB data persists across
pod restarts; alternatively, document the ephemeral behavior if you intentionally
want no persistence.
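A sketch of the suggested shape, using the hashicorp/kubernetes provider's stateful set schema; the PVC name `pgdata` and the 1Gi size are illustrative, and the elided fields stand in for the existing resource's configuration:

```hcl
resource "kubernetes_stateful_set_v1" "postgresql" {
  # ... existing metadata and other spec fields ...
  spec {
    # ...
    template {
      spec {
        container {
          # ... existing image/env configuration ...
          volume_mount {
            name       = "pgdata" # must match the PVC template's metadata.name
            mount_path = "/var/lib/postgresql/data"
          }
        }
      }
    }
    volume_claim_template {
      metadata {
        name = "pgdata"
      }
      spec {
        access_modes = ["ReadWriteOnce"]
        resources {
          requests = {
            storage = "1Gi" # illustrative size for local dev
          }
        }
      }
    }
  }
}
```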
🧹 Nitpick comments (3)
teams/kernel/shell-iac/local/versions.tf (1)

20-23: Hardcoded kubeconfig path may break in CI/CD environments. The `config_path = "~/.kube/config"` assumes a standard local setup. In CI/CD pipelines or containers, the kubeconfig may be at a different location, specified by the `KUBECONFIG` environment variable. Consider making this configurable or using the provider's default behavior, which respects `KUBECONFIG`.

♻️ Proposed refactor to support flexible kubeconfig paths

```diff
+variable "kubeconfig_path" {
+  description = "Path to kubeconfig file. If null, uses default (~/.kube/config or KUBECONFIG env)."
+  type        = string
+  default     = null
+}
 provider "kubernetes" {
-  config_path    = "~/.kube/config"
+  config_path    = var.kubeconfig_path
   config_context = var.minikube_profile
 }
```

teams/kernel/security-iam-svc/iac-logto-k8s/main.tf (1)

167-173: Ingress references hardcoded service name instead of resource attribute. The ingress backend uses the hardcoded string `"logto"` for the service name. While this works because it matches `kubernetes_service_v1.logto`, using a reference would ensure consistency if the service name changes.

♻️ Proposed refactor to use resource reference

```diff
 backend {
   service {
-    name = "logto"
+    name = kubernetes_service_v1.logto.metadata[0].name
     port {
       number = 3001
     }
```

Also applies to: 184-190

teams/kernel/shell-iac/local/main.tf (1)

188-199: DB_URL secret may expose credentials in Terraform plan output. The `DB_URL` value is constructed via string interpolation including `random_password.pg_password.result`. While `random_password` resources are sensitive by default in provider 3.x, the interpolated string in `kubernetes_secret_v1.data` may still appear in plan output. Consider using `sensitive()` or verifying that Terraform correctly redacts this value in plans.

🔒 Proposed fix to ensure sensitive handling

```diff
 data = {
-  DB_URL = "postgres://pguser_${random_string.pg_username.result}:${random_password.pg_password.result}@postgresql.logto.svc.cluster.local:5432/logto"
+  DB_URL = sensitive("postgres://pguser_${random_string.pg_username.result}:${random_password.pg_password.result}@postgresql.logto.svc.cluster.local:5432/logto")
 }
```
…ocal env Introduce a local Crossplane Composition (postgresql-local) that uses provider-kubernetes Objects to manage PostgreSQL StatefulSet, Service, and credential Secrets — the same resources previously managed directly by Terraform. The local environment now uses the same PostgreSQLInstance Claim interface as production, with compositionRef selecting the backend. - Add crossplane-local-bootstrap module (Helm + provider-kubernetes) - Add postgresql-local.yaml Composition with 4 managed Objects - Parameterize crossplane-postgresql module with composition_name - Reduce shell-iac/local to bootstrapping only (minikube + Crossplane) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Terraform providers validate connections at plan time, so the peerlab-iam context must exist before `terraform init` runs. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@teams/kernel/iac-modules/crossplane-local-bootstrap/variables.tf`:
- Around line 1-11: Update the default values for the Crossplane-related
variables: set variable "crossplane_version" default from "1.15.0" to "2.1.4"
and set variable "provider_kubernetes_version" default from "v0.14.1" to
"v1.2.0" in variables.tf (keep the type string and descriptions unchanged) so
the module uses the current stable Crossplane and provider-kubernetes releases.
In
`@teams/kernel/iac-modules/crossplane-postgresql/compositions/postgresql-local.yaml`:
- Around line 137-151: The host in connectionDetails is hardcoded to
"postgresql.logto.svc.cluster.local"; change the host entry under
connectionDetails (the host name mapping) to be generated dynamically using the
same pattern as DB_URL: replace the FromValue host with a CombineFromComposite
(or CombineFromField) that concatenates "postgresql.", the composite's
metadata.namespace, and ".svc.cluster.local" so the host resolves to
"postgresql.<namespace>.svc.cluster.local" for claims in any namespace;
reference the existing DB_URL CombineFromComposite logic as the implementation
example.
In `@teams/kernel/shell-iac/local/main.tf`:
- Around line 82-88: Replace the hardcoded db_secret_name in module "logto_k8s"
with the Crossplane PostgreSQL module's exported secret name; change
db_secret_name = "logto-db-credentials" to use the output from the database
module (e.g., set db_secret_name =
module.logto_database.db_connection_secret_name) so Logto gets the actual secret
created by module.logto_database (the Crossplane composition that produces the
`${database_name}-db-connection` secret).
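A sketch of the suggested wiring, with the module source path and other inputs elided; the output name `db_connection_secret_name` is taken from the review comment, not verified against the module:

```hcl
module "logto_k8s" {
  source = "../../security-iam-svc/iac-logto-k8s" # assumed relative path
  # ... other inputs unchanged ...

  # Reference the secret Crossplane actually creates instead of hardcoding it.
  db_secret_name = module.logto_database.db_connection_secret_name
}
```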
🧹 Nitpick comments (4)
docs/insights-long-term.md (1)

100-100: Consider refining the insight wording for clarity. The insight is technically accurate and aligns well with the PR's Crossplane-based local infrastructure work. However, the phrasing "switches the same Claim between cloud and local backends" could be interpreted as the Claim object moving between backends, when more precisely it's about using `compositionRef` to select which Composition (and therefore which backend implementation) provisions the resources. Consider a slightly clearer phrasing:

📝 Alternative phrasings for clarity

Option 1 (emphasizes selection):

```diff
-100. Crossplane compositionRef switches the same Claim between cloud and local backends.
+100. Crossplane Claims use compositionRef to select between cloud and local resource implementations.
```

Option 2 (emphasizes backend switching):

```diff
-100. Crossplane compositionRef switches the same Claim between cloud and local backends.
+100. Crossplane compositionRef enables the same Claim to target cloud or local backends.
```

Option 3 (emphasizes abstraction):

```diff
-100. Crossplane compositionRef switches the same Claim between cloud and local backends.
+100. Crossplane Claims abstract environment differences via compositionRef selection.
```

teams/kernel/iac-modules/crossplane-postgresql/compositions/postgresql-local.yaml (1)

165-180: Base DB_URL namespace inconsistent with patch, but correctly overwritten. The base `DB_URL` at line 168 references `postgresql.default.svc.cluster.local`, while the `CombineFromComposite` patch at lines 173-179 correctly overwrites it with the claim namespace. The base value is effectively a placeholder. For clarity, consider using a placeholder like `PLACEHOLDER` instead of `default` to make it obvious this value is always replaced:

```
DB_URL: "postgres://logto:logto-local-dev@postgresql.PLACEHOLDER.svc.cluster.local:5432/logto"
```

teams/kernel/iac-modules/crossplane-local-bootstrap/main.tf (1)

39-56: Consider adding input validation for `kubeconfig_context` to prevent command injection. While this is a local development module, the `kubeconfig_context` variable is interpolated directly into a shell command. Malicious or malformed input could lead to command injection.

🛡️ Proposed validation in variables.tf

```diff
 variable "kubeconfig_context" {
   description = "The kubectl context for local-exec commands (must match the target cluster)"
   type        = string
   default     = ""
+
+  validation {
+    condition     = can(regex("^[a-zA-Z0-9._@/-]*$", var.kubeconfig_context))
+    error_message = "kubeconfig_context must contain only alphanumeric characters, dots, underscores, @, slashes, and hyphens."
+  }
 }
```

teams/kernel/shell-iac/local/main.tf (1)

21-30: Make the minikube provisioner fail fast to avoid masked startup errors. The current script can continue to `addons enable` even if `minikube start` fails, and the final exit code may mask the real failure. Safer to fail fast and only enable ingress after a successful start.

🛠️ Proposed safer shell flow

```diff
 provisioner "local-exec" {
   command = <<-EOT
-    minikube status --profile ${var.minikube_profile} >/dev/null 2>&1 || \
-    minikube start \
-      --profile ${var.minikube_profile} \
-      --driver=docker \
-      --memory=${var.minikube_memory} \
-      --cpus=${var.minikube_cpus}
-    minikube addons enable ingress --profile ${var.minikube_profile}
+    set -eu
+    if ! minikube status --profile ${var.minikube_profile} >/dev/null 2>&1; then
+      minikube start \
+        --profile ${var.minikube_profile} \
+        --driver=docker \
+        --memory=${var.minikube_memory} \
+        --cpus=${var.minikube_cpus}
+    fi
+    minikube addons enable ingress --profile ${var.minikube_profile}
   EOT
 }
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@Makefile`:
- Around line 75-78: The current shell chain "minikube status --profile
peerlab-iam >/dev/null 2>&1 || minikube start --profile peerlab-iam
--driver=docker --memory=4096 --cpus=2 && minikube addons enable ingress
--profile peerlab-iam" only enables the ingress addon when minikube is started
in that command; to fix it, change the logic so you always run "minikube addons
enable ingress --profile peerlab-iam" after ensuring minikube is running: first
check status and conditionally run "minikube start ..." if needed, then
unconditionally run "minikube addons enable ingress --profile peerlab-iam" (or
group the start and enable with proper shell grouping/parentheses) before the
Terraform init/apply call.
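A minimal sketch of the reordered recipe, assuming the `iam-local-setup` target name used elsewhere in this PR; the Terraform steps are elided since they stay unchanged:

```make
iam-local-setup:
	# Start the profile only if it is not already running...
	minikube status --profile peerlab-iam >/dev/null 2>&1 || \
		minikube start --profile peerlab-iam --driver=docker --memory=4096 --cpus=2
	# ...but always enable the ingress addon, even on an already-running cluster.
	minikube addons enable ingress --profile peerlab-iam
	# ...existing terraform init/apply steps follow here
```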
Replace kubernetes_manifest with null_resource + kubectl apply for resources that depend on CRDs not yet installed at plan time (Provider, ProviderConfig, PostgreSQLInstance Claim). Grant cluster-admin RBAC to provider-kubernetes SA so it can manage StatefulSets and Services across namespaces. Split Makefile into two-phase apply: Crossplane first, then full apply. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@teams/kernel/iac-modules/crossplane-local-bootstrap/main.tf`:
- Around line 111-114: The destroy provisioner block using provisioner
"local-exec" should include the Kubernetes context flag; update the command in
the provisioner to pass the same context variable used elsewhere (e.g., add
--context ${var.kube_context} or the existing local/context variable) so the
line command = "kubectl delete providerconfig default --ignore-not-found ||
true" becomes command = "kubectl --context ${var.kube_context} delete
providerconfig default --ignore-not-found || true", ensuring you reference the
same context variable name used in other resources.
- Around line 69-92: In resource null_resource.provider_kubernetes_rbac fix two
issues: ensure the destroy provisioner’s kubectl delete command includes the
optional context flag built from var.kubeconfig_context (same pattern used
elsewhere) and make the SA discovery/creation step robust by checking the
computed SA_NAME (derived from the SA_NAME assignment) before running kubectl
create clusterrolebinding; if SA_NAME is empty, emit an error and exit non‑zero
(or skip creation) to avoid silently binding to an invalid account, and keep the
kubectl create clusterrolebinding invocation using the same
${var.kubeconfig_context != "" ? "--context=..." : ""} pattern.
- Around line 38-41: The destroy provisioner in the provisioner "local-exec"
block (when = destroy) currently runs kubectl without specifying the kubeconfig
context; update the command string to include the appropriate --context
<var_or_input> flag so it targets the intended cluster (e.g., change the command
in the provisioner to include --context), and also add that same context
variable as a trigger on the provisioner so Terraform reruns the provisioner
when the context changes; locate the "provisioner \"local-exec\"" block and the
command line to modify and add a triggers map referencing the context variable
used by the module.
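The three fixes above share one pattern, sketched below with an illustrative resource name. Note that destroy-time provisioners cannot reference `var.*` directly, so the context is stored in `triggers` and read back via `self` — which also makes Terraform aware the provisioner depends on the context value:

```hcl
resource "null_resource" "provider_config" {
  triggers = {
    # Captured at apply time so it is available (and tracked) at destroy time.
    kube_context = var.kubeconfig_context
  }

  provisioner "local-exec" {
    when    = destroy
    command = "kubectl ${self.triggers.kube_context != "" ? "--context=${self.triggers.kube_context}" : ""} delete providerconfig default --ignore-not-found || true"
  }
}
```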
🧹 Nitpick comments (1)
teams/kernel/iac-modules/crossplane-postgresql/variables.tf (1)

63-67: Consider adding validation for `kubeconfig_context` to prevent shell metacharacters. This variable is interpolated directly into shell commands. While this is a local development module with trusted input, adding validation would provide defense-in-depth against accidental misuse.

🛡️ Optional validation

```diff
 variable "kubeconfig_context" {
   description = "The kubectl context for local-exec commands (must match the target cluster)"
   type        = string
   default     = ""
+
+  validation {
+    condition     = can(regex("^[a-zA-Z0-9_-]*$", var.kubeconfig_context))
+    error_message = "kubeconfig_context must contain only alphanumeric characters, hyphens, and underscores."
+  }
 }
```
What Happened

Logto pod is running and authentication works (login flow completes successfully with 200s). Root cause of the remaining failure: the ConfigMap sets an issuer endpoint that differs from the URL the browser uses. This matched the expected behavior pattern — OIDC issuers must exactly match the URL the browser uses.

What Did We Learn

Logto's OIDC issuer is derived from the configured endpoint value. The init container also failed 5 times.

Follow-up Experiment

Next: Configure nginx-ingress with TLS for the `.localhost` hosts.
What Happened

After enabling nginx-ingress with self-signed TLS, the OIDC issuer mismatch was fixed — the browser now accesses Logto at `https://logto.localhost`. The Logto server-side code also calls the `.localhost` URLs from inside the pod.

What Did We Learn

Two things needed to be fixed beyond just ingress+TLS. The fix added `hostAliases` so pod-internal `.localhost` DNS resolves through the ingress.

Follow-up Experiment

Next: Verify the full Logto admin console works end-to-end — create a test application, configure sign-in experience, and confirm OIDC flows complete without errors.
Enable the minikube nginx-ingress controller with self-signed TLS so Logto's OIDC issuer URLs (https://logto.localhost, https://logto-admin.localhost) match what the browser accesses, fixing token validation failures that caused "Oops! Something went wrong" on the admin dashboard. Key changes: - Generate self-signed TLS cert via Terraform tls provider - Add ingress_class_name, tls_secret_name, host_aliases, and extra_env variables to the iac-logto-k8s submodule (backward-compatible defaults) - Route pod-internal .localhost DNS through ingress via hostAliases - Add iam-local-tunnel Makefile target for minikube tunnel Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@teams/kernel/security-iam-svc/iac-logto-k8s/main.tf`:
- Around line 5-19: The current locals logto_endpoint and logto_admin_endpoint
hard-code "https" causing issuer mismatches when TLS/ingress is disabled; add a
new variable "endpoint_scheme" (default "https") and update the locals to build
endpoints using "${var.endpoint_scheme}://${var.domain_name}" (or support full
endpoint override variables if preferred), then ensure the
kubernetes_config_map_v1.logto data values ENDPOINT and ADMIN_ENDPOINT use the
updated locals; also add the variable "endpoint_scheme" to variables.tf with
description/type/default as suggested.
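A sketch of the suggested variable and locals; the `admin_domain_name` input is an assumed name standing in for however the module currently derives the admin host:

```hcl
variable "endpoint_scheme" {
  description = "URL scheme for Logto endpoints (https when TLS/ingress is enabled, http otherwise)."
  type        = string
  default     = "https"
}

locals {
  # Build endpoints from the configurable scheme instead of hardcoding https.
  logto_endpoint       = "${var.endpoint_scheme}://${var.domain_name}"
  logto_admin_endpoint = "${var.endpoint_scheme}://${var.admin_domain_name}" # assumed variable name
}
```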
🧹 Nitpick comments (1)
Makefile (1)

94-95: Consider documenting that the minikube cluster persists after teardown. The destroy removes Terraform-managed resources but leaves the minikube cluster running. This is reasonable for iterative development, but users expecting full cleanup may be surprised. Consider adding a comment or a separate `iam-local-delete` target for complete teardown, including `minikube delete --profile peerlab-iam`.
Experiment: Replace
Decision: revert to known working state.
Replace hardcoded PostgreSQL credentials in Crossplane compositions with templatefile() injection, add --context flags to all destroy provisioners, pin Logto image to semver tag, and add input validation for kubeconfig context. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Deployment failed with the following error:
What and why was modified?
- Replaced the imperative bash script (`setup-local.sh`) for local IAM setup with a declarative Terraform module at `kernel/shell-iac/local/`
- Extracted an `iac-logto-k8s` submodule shared by both production (GKE) and local (minikube) environments
- Replaced `kubernetes_manifest` with `null_resource` + `kubectl apply` for resources that depend on CRDs not yet installed, and granted cluster-admin RBAC to provider-kubernetes
- Single `make iam-local-setup` to provision the full stack

Closes #108
How was it modified?
- `security-iam-svc/iac-logto-k8s/`: Extracted Layer 4 (K8s resources) from production `iac/main.tf` into a standalone module with configurable inputs (domain, namespace, image, ingress annotations)
- `security-iam-svc/iac/main.tf`: Now calls the submodule with `moved` blocks for zero-downtime state migration
- `shell-iac/local/`: 4-layer Terraform config — minikube lifecycle (`null_resource`), Crossplane bootstrap, PostgreSQL (via Crossplane), Logto (via submodule)
- `crossplane-local-bootstrap/`: Replaced `kubernetes_manifest` with `null_resource` + `kubectl apply` for Provider and ProviderConfig (avoids plan-time CRD validation). Added cluster-admin RBAC grant for the provider-kubernetes service account
- `crossplane-postgresql/`: Converted the Claim from `kubernetes_manifest` to `null_resource` + `kubectl apply` (the CRD is created dynamically by the XRD at apply time, not available at plan time). Added `kubeconfig_context` variable
- `Makefile`: Two-phase apply — `terraform apply -target=module.crossplane` first (installs CRDs), then full `terraform apply`

Reference links and evaluation steps
- `make iam-local-setup` to provision the full local IAM stack
- `kubectl get pods -A --context=peerlab-iam` to verify healthy pods
- `minikube service logto -n logto --profile peerlab-iam` to access Logto
- `make iam-local-teardown` to clean up

Experiment Record
- A single `make iam-local-setup` provisions minikube + Crossplane + PostgreSQL + Logto. Verified by `kubectl get pods -n logto` showing all pods Running and Ready.
- The first `terraform apply` failed because `kubernetes_manifest` validates CRD GroupVersionKind at plan time, before Crossplane is installed. After fixing the bootstrap module, the Crossplane-managed PostgreSQL Objects failed with RBAC errors — provider-kubernetes lacked permissions for StatefulSets and Services in the `logto` namespace. After granting cluster-admin and restarting the provider, all resources reconciled and Logto came up successfully.
- Lessons: 1) `kubernetes_manifest` is unsuitable for bootstrapping CRD-dependent resources — use `null_resource` + `kubectl apply` instead. 2) Two-phase terraform apply is needed: install Crossplane first so its CRDs exist for planning. 3) Crossplane provider-kubernetes needs explicit cluster-admin RBAC to manage arbitrary resources across namespaces. 4) The PostgreSQLInstance Claim CRD is created dynamically by the XRD, so it also needs the kubectl approach.

Next step
- `make iam-local-teardown && make iam-local-setup` to validate the two-phase bootstrap end to end
- Extend `shell-iac/local/` to provision additional services (PostgreSQL for peers, MongoDB, Kafka) to match the full Docker Compose topology

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Infrastructure & Configuration
Documentation