fix(manifests): add CP token server Service and CP_TOKEN_URL for runner gRPC auth #1448
Conversation
Runners authenticate to the API server by fetching an OIDC token from the control-plane's token server (port 8080). The base manifests were missing both the K8s Service to make the token server reachable and the `CP_TOKEN_URL` env var that tells the CP what URL to inject into runner pods. Without these, runners start with an empty auth token and get `UNAUTHENTICATED` errors on all gRPC streams.

Add:
- `ambient-control-plane-token-svc.yaml`: Service exposing port 8080
- `CP_TOKEN_URL` env var on the CP Deployment

The MPP overlay already has both (with MPP-specific namespaces) and is self-contained, so no deduplication is needed.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
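As a minimal sketch of the two additions described above (Service name and URL taken from the PR discussion; exact field layout in the real manifests is not verified here):

```yaml
# Sketch only: Service exposing the CP token server on port 8080
apiVersion: v1
kind: Service
metadata:
  name: ambient-control-plane   # assumed name, matching the in-cluster DNS used below
spec:
  selector:
    app: ambient-control-plane
  ports:
    - name: token
      port: 8080
      targetPort: 8080
---
# Sketch only: env fragment added to the CP Deployment's container so the
# CP knows which token URL to inject into runner pods
env:
  - name: CP_TOKEN_URL
    value: "http://ambient-control-plane.ambient-code.svc:8080/token"  # namespace hardcoded, as in this PR
```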
📝 Walkthrough: Adds a new Kubernetes Service to expose the ambient control plane's token endpoint on port 8080 and introduces a corresponding `CP_TOKEN_URL` environment variable on the control-plane Deployment.
❗ Pre-merge checks failed. Please resolve all errors before merging; addressing warnings is optional.

❌ Failed checks (1 error, 1 warning)
✅ Passed checks (6 passed)
Actionable comments posted: 1
🧹 Nitpick comments (1)
components/manifests/base/ambient-control-plane-service.yml (1)
58-59: Avoid hardcoding namespace in `CP_TOKEN_URL`

`CP_TOKEN_URL` is pinned to `ambient-code`, which breaks namespace portability for this base manifest. Use same-namespace DNS (`ambient-control-plane`) or derive the namespace dynamically to avoid auth regressions outside that namespace.

Proposed change:

```diff
-        - name: CP_TOKEN_URL
-          value: "http://ambient-control-plane.ambient-code.svc:8080/token"
+        - name: CP_TOKEN_URL
+          value: "http://ambient-control-plane:8080/token"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/manifests/base/ambient-control-plane-service.yml` around lines 58-59, CP_TOKEN_URL is hardcoded to the ambient-code namespace; change the manifest to derive the namespace dynamically by adding an env var (e.g. CP_TOKEN_NAMESPACE) that uses valueFrom.fieldRef.metadata.namespace and then set CP_TOKEN_URL to use that var (e.g. "http://ambient-control-plane.$(CP_TOKEN_NAMESPACE):8080/token") so the URL uses same-namespace DNS (service name ambient-control-plane) and remains portable across namespaces.
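The dynamic-namespace fix in the prompt above can be sketched as a Deployment env fragment. Kubernetes expands `$(VAR)` references against env vars declared earlier in the same list, so ordering matters; the names follow the reviewer's suggestion and are otherwise assumptions:

```yaml
# Sketch only: derive the namespace at pod start instead of hardcoding it
env:
  - name: CP_TOKEN_NAMESPACE          # filled in by the kubelet via the downward API
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: CP_TOKEN_URL                # must come after CP_TOKEN_NAMESPACE for $(...) expansion
    value: "http://ambient-control-plane.$(CP_TOKEN_NAMESPACE):8080/token"
```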
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: f8ba71cc-d848-4b82-82ad-7f53259f98a9
📒 Files selected for processing (3)

- `components/manifests/base/ambient-control-plane-service.yml`
- `components/manifests/base/ambient-control-plane-token-svc.yaml`
- `components/manifests/base/kustomization.yaml`
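For context, registering the new Service file in the base kustomization presumably looks roughly like this (sketch only; the real file likely lists additional resources):

```yaml
# components/manifests/base/kustomization.yaml (sketch, likely incomplete)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ambient-control-plane-service.yml
  - ambient-control-plane-token-svc.yaml   # new in this PR
```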
```yaml
spec:
  selector:
    app: ambient-control-plane
  ports:
    - name: token
      port: 8080
      targetPort: 8080
      protocol: TCP
```
Token service is exposed without a base ingress guard
This new Service exposes the token-minting endpoint, but base manifests do not add a matching NetworkPolicy. In clusters without default-deny, any pod in-namespace can hit :8080/token. Add a base policy limiting ingress to expected caller pods/namespaces (similar to components/manifests/overlays/mpp-openshift/ambient-cp-token-netpol.yaml).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/manifests/base/ambient-control-plane-token-svc.yaml` around lines 8-15: The Service defined in ambient-control-plane-token-svc.yaml exposes port
"token" (8080) without a matching NetworkPolicy, allowing any in-namespace pod
to call :8080/token; add a base NetworkPolicy (similar to
components/manifests/overlays/mpp-openshift/ambient-cp-token-netpol.yaml) that
targets the same backend pods (podSelector: matchLabels: app:
ambient-control-plane) and restricts ingress (policyTypes: ["Ingress"]) to only
the expected callers by using specific podSelector and/or namespaceSelector
rules (e.g., allow from pods with the caller label or a namespace with a known
label) so only authorized pods/namespaces can reach port 8080 (name: token).
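A base NetworkPolicy along the lines the reviewer describes might look like the sketch below; the `app: ambient-runner` caller label is an assumption, and the real selectors should be copied from the MPP overlay's ambient-cp-token-netpol.yaml:

```yaml
# Sketch only: allow only labeled caller pods to reach the token port
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ambient-cp-token-ingress
spec:
  podSelector:
    matchLabels:
      app: ambient-control-plane   # backend pods serving :8080/token
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: ambient-runner  # assumed label on runner pods
      ports:
        - protocol: TCP
          port: 8080
```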
Merge Queue Status

This pull request spent 10 seconds in the queue, including 1 second running CI.
Summary

- Add a Kubernetes Service (`ambient-control-plane`, port 8080) to expose the control-plane's token server
- Add the `CP_TOKEN_URL` env var to the CP Deployment so it injects the token endpoint URL into runner pods
- Fixes runners starting with empty auth tokens and hitting `UNAUTHENTICATED` errors on all gRPC streams

Root Cause

The control-plane runs a token server on `:8080` that runner pods call to exchange an RSA-encrypted session ID for an OIDC access token. The base manifests had neither a Service to make this endpoint reachable nor the `CP_TOKEN_URL` env var that tells the CP what URL to pass to runners. The MPP overlay already had both (with MPP-specific namespaces), but the base/production overlay did not.

Test plan

- Confirmed runner gRPC streams fail with `UNAUTHENTICATED` without these changes
- Confirmed runner pods receive `AMBIENT_CP_TOKEN_URL` and gRPC streams authenticate successfully

🤖 Generated with Claude Code