
Conversation

@vertex451
Contributor

@vertex451 vertex451 commented Sep 29, 2025

I improved the script that creates a ClusterAccess resource, to ensure we have the CRD registered first.

Summary by CodeRabbit

  • New Features

    • Streamlined ClusterAccess setup: script now ensures CRDs are installed, cleans prior resources, creates admin ServiceAccount/token, generates kubeconfig secret, and presents clearer ready-to-use output.
    • Added usage guide for the ClusterAccess creation script.
  • Documentation

    • Reorganized "Operation Modes" and revamped ClusterAccess docs with schema/flow focus and a comprehensive setup guide.
    • Updated gateway, listener, quickstart, authorization (Bearer), subscriptions, local testing, and virtual workspaces docs.
    • Removed obsolete multicluster kubeconfig flow doc; many clarity and formatting tweaks.

@vertex451 vertex451 requested a review from a team as a code owner September 29, 2025 12:17
@coderabbitai

coderabbitai bot commented Sep 29, 2025

Walkthrough

Adds ClusterAccess documentation and a helper script; reorganizes ClusterAccess flow and related docs; removes an obsolete multicluster kubeconfig flow doc; updates README operation modes and multiple docs (authorization, gateway, listener, quickstart, local test, subscriptions, virtual workspaces); and enhances hack/create-clusteraccess.sh with CRD install, cleanup, token/SA creation, and kubeconfig secret handling.

Changes

| Cohort / File(s) | Change summary |
| --- | --- |
| ClusterAccess docs overhaul<br>`docs/clusteraccess.md`, `docs/clusteraccess-setup.md`, `docs/multicluster-kubeconfig-flow.md` | Reorganized ClusterAccess documentation into overview, flow, schema/file structure, and troubleshooting; added a setup guide; removed the old multicluster kubeconfig flow document. |
| Hack: ClusterAccess helper<br>`hack/create-clusteraccess.md`, `hack/create-clusteraccess.sh` | Added usage doc. Script: adds `ensure_crd_installed()` to apply/wait for the CRD, adds `cleanup_existing_resources()`, creates a ServiceAccount + token with cluster-admin in the target cluster, extracts the token, creates an admin kubeconfig secret in the management cluster, and updates logging/output. |
| Operation modes consolidation<br>`README.md`, `docs/quickstart.md` | Renamed "MultiCluster Support" to "Operation Modes", reduced scenarios to KCP vs ClusterAccess, updated links and examples, standardized the Authorization header to "Bearer". |
| Gateway and Listener docs alignment<br>`docs/gateway.md`, `docs/listener.md` | Clarified that files map to KCP workspaces or ClusterAccess resources, updated URLs/examples, and added package/component responsibility overviews aligned with the new modes. |
| Authorization doc tweaks<br>`docs/authorization.md` | Authorization header example changed to a Bearer token, env var renamed to `GATEWAY_INTROSPECTION_AUTHENTICATION`, minor heading and prose adjustments. |
| Local test guidance<br>`docs/local_test.md` | Repo reference updated (helm-charts), added a remediation step, clarified wording and examples, small formatting tweaks. |
| Subscriptions editorial pass<br>`docs/subscriptions.md` | Capitalization, punctuation, casing, and formatting refinements; no functional changes. |
| Virtual Workspaces minor edit<br>`docs/virtual-workspaces.md` | Minor capitalization/phrasing change to the kubeconfig field description. |
| Other docs adjusted<br>`docs/gateway.md`, `docs/listener.md`, `docs/clusteraccess-setup.md`, `docs/clusteraccess.md` | Multiple prose, structure, and example adjustments to reflect the new ClusterAccess-centric flow and package breakdowns. |
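The core addition is the CRD-ensure step in hack/create-clusteraccess.sh. As a rough sketch of the pattern (the CRD name is taken from the review comments below; the manifest path and error handling are simplified assumptions, not the script's actual code):

```bash
#!/usr/bin/env bash
set -euo pipefail

# CRD name as cited in the review; CRD_PATH is a hypothetical manifest location.
CRD_NAME="clusteraccesses.gateway.platform-mesh.io"
CRD_PATH="config/crd"

# Apply the CRD only if it is missing (kubectl apply is idempotent anyway).
if ! kubectl get crd "$CRD_NAME" &>/dev/null; then
    kubectl apply -f "$CRD_PATH"
fi

# Wait until the API server reports the CRD as Established before any
# ClusterAccess resources are created against it.
kubectl wait --for=condition=Established "crd/${CRD_NAME}" --timeout=60s
```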

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title Check | ✅ Passed | The title accurately and concisely captures the primary change of adding a check to ensure the ClusterAccess CRD is installed before creating resources, aligning directly with the pull request’s main objective without extraneous details. |
| Docstring Coverage | ✅ Passed | No functions found in the changes. Docstring coverage check skipped. |

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 093457e and d374320.

📒 Files selected for processing (1)
  • docs/clusteraccess.md (1 hunks)
🧰 Additional context used
🪛 LanguageTool
docs/clusteraccess.md

[grammar] ~57-~57: There might be a mistake here.
Context: ...port ENABLE_KCP=false ``` The listener: - Uses the ClusterAccessReconciler - Wa...

(QB_NEW_EN)


[grammar] ~58-~58: There might be a mistake here.
Context: ...KCP=false ``` The listener: - Uses the ClusterAccessReconciler - Watches for ClusterAccess resources - Fo...

(QB_NEW_EN)


[grammar] ~60-~60: There might be a mistake here.
Context: ...cess resources - For each ClusterAccess: - Extracts cluster connection info (host, ...

(QB_NEW_EN)


[grammar] ~99-~99: There might be a mistake here.
Context: ...OULD_IMPERSONATE=false ``` The gateway: - Watches the definitions directory for sc...

(QB_NEW_EN)


[grammar] ~101-~101: There might be a mistake here.
Context: ...for schema files - For each schema file: - Reads the x-cluster-metadata section ...

(QB_NEW_EN)


[grammar] ~105-~105: There might be a mistake here.
Context: ...arget cluster - Serves GraphQL API at /{cluster-name}/graphql - Does NOT require KUBECONFIG - all conn...

(QB_NEW_EN)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: pipe / testSource / test
  • GitHub Check: Analyze (go)


@vertex451 vertex451 changed the title from "ensured that we have ClusterAccess CRD installed" to "feat: Ensured that we have ClusterAccess CRD installed" Sep 29, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c866c3c and 1a0dd24.

📒 Files selected for processing (1)
  • scripts/create-clusteraccess.sh (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
scripts/create-clusteraccess.sh (1)
scripts/test-clusteraccess-integration.sh (2)
  • log_info (12-14)
  • log_error (20-22)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: pipe / dockerBuild / docker
  • GitHub Check: pipe / testSource / test
  • GitHub Check: Analyze (go)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
scripts/create-clusteraccess.sh (2)

102-139: Also wait when the CRD already exists; parametrize CRD name

Nice addition. One edge case: if another process just applied the CRD, kubectl get crd can succeed before the CRD becomes Established. Add the same kubectl wait path in the “already installed” branch. While here, DRY the CRD name to avoid typos.

 ensure_crd_installed() {
-    log_info "Checking if ClusterAccess CRD is installed in management cluster..."
+    log_info "Checking if ClusterAccess CRD is installed in management cluster..."
+    local CRD_NAME="clusteraccesses.gateway.platform-mesh.io"
     
     # Check if CRD exists
-    if ! KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl get crd clusteraccesses.gateway.platform-mesh.io &>/dev/null; then
+    if ! KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl get crd "$CRD_NAME" &>/dev/null; then
         log_info "ClusterAccess CRD not found. Installing CRD..."
@@
-        if KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl apply -f "$CRD_PATH"; then
+        if KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl apply -f "$CRD_PATH"; then
             log_info "ClusterAccess CRD installed successfully"
             
             # Wait for CRD to reach Established condition
             log_info "Waiting for ClusterAccess CRD to become established..."
-            if KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl wait --for=condition=Established crd/clusteraccesses.gateway.platform-mesh.io --timeout=60s; then
+            if KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl wait --for=condition=Established "crd/${CRD_NAME}" --timeout=60s; then
                 log_info "ClusterAccess CRD is now established and ready"
             else
                 log_error "ClusterAccess CRD failed to reach Established condition within 60 seconds"
                 exit 1
             fi
         else
             log_error "Failed to install ClusterAccess CRD"
             exit 1
         fi
     else
-        log_info "ClusterAccess CRD is already installed"
+        log_info "ClusterAccess CRD is already installed"
+        log_info "Ensuring CRD is Established..."
+        if ! KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl wait --for=condition=Established "crd/${CRD_NAME}" --timeout=60s; then
+            log_error "ClusterAccess CRD exists but did not become Established within 60 seconds"
+            exit 1
+        fi
     fi
 }

216-218: Good placement; consider moving before cleanup to reduce noise

Calling ensure_crd_installed is the right addition. Optionally move it before cleanup_existing_resources so the initial kubectl get clusteraccess doesn’t log a warning on clusters where the CRD isn’t present yet.

-# Clean up existing resources if they exist
-cleanup_existing_resources
-
-# Ensure CRD is installed in management cluster
-ensure_crd_installed
+# Ensure CRD is installed in management cluster
+ensure_crd_installed
+
+# Clean up existing resources if they exist
+cleanup_existing_resources
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1a0dd24 and cec31ce.

📒 Files selected for processing (1)
  • scripts/create-clusteraccess.sh (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
scripts/create-clusteraccess.sh (1)
scripts/test-clusteraccess-integration.sh (2)
  • log_info (12-14)
  • log_error (20-22)

Contributor

@nexus49 nexus49 left a comment


Often I see such scripts in a directory called hack instead of scripts.

Can you leave documentation, possibly as a header comment in the script or as an additional README, on the purpose of this script and how it can be used?

@vertex451
Contributor Author

Often I see such scripts in a directory called hack instead of scripts.

Can you leave documentation, possibly as a header comment in the script or as an additional README, on the purpose of this script and how it can be used?

Moved to the hack dir and added a small README about usage.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
hack/create-clusteraccess.sh (1)

169-170: Move CRD installation before cleanup to ensure proper ordering.

The cleanup_existing_resources function (line 170) attempts to delete ClusterAccess resources, but ensure_crd_installed isn't called until line 217. If the CRD doesn't exist when cleanup runs, the kubectl get clusteraccess command at line 145 may fail or produce unexpected behavior.

Move the CRD installation earlier in the flow:

 log_info "Management cluster is accessible"
 
+# Ensure CRD is installed in management cluster
+ensure_crd_installed
+
 # Clean up existing resources if they exist
 cleanup_existing_resources
 
 # Extract server URL from target kubeconfig

Then remove the duplicate call:

 fi
 log_info "Management cluster is accessible"
 
-# Ensure CRD is installed in management cluster
-ensure_crd_installed
-
 # Create kubeconfig secret in management cluster

Also applies to: 216-217

🧹 Nitpick comments (1)
hack/create-clusteraccess.md (1)

3-3: Consider standardizing Kubernetes terminology.

The phrase "Standard K8S cluster" uses informal capitalization. For consistency with technical documentation standards, consider using "standard Kubernetes cluster" or "standard K8s cluster".

-This script is used to create a ClusterAccess resource, which is needed for kubernetes-graphql-gateway to work with Standard K8S cluster.
+This script is used to create a ClusterAccess resource, which is needed for kubernetes-graphql-gateway to work with a standard Kubernetes cluster.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cec31ce and 8c23d38.

📒 Files selected for processing (2)
  • hack/create-clusteraccess.md (1 hunks)
  • hack/create-clusteraccess.sh (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
hack/create-clusteraccess.sh (1)
hack/test-clusteraccess-integration.sh (2)
  • log_info (12-14)
  • log_error (20-22)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: pipe / testSource / test
  • GitHub Check: pipe / dockerBuild / docker
  • GitHub Check: Analyze (go)
🔇 Additional comments (1)
hack/create-clusteraccess.md (1)

5-5: Link verified: ../docs/clusteraccess.md exists. No further action needed.

@vertex451 vertex451 requested a review from nexus49 September 30, 2025 19:21

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
hack/create-clusteraccess.sh (2)

131-132: Consider extracting hardcoded values as script-level constants.

The ServiceAccount name and namespace are hardcoded here and repeated at lines 216-217. Extract these as constants at the top of the script for easier maintenance and consistency.

Apply this diff to extract the constants:

+# ServiceAccount configuration
+SA_NAME="kubernetes-graphql-gateway-admin"
+SA_NAMESPACE="default"
+
 # Default values
 TARGET_KUBECONFIG=""
 MANAGEMENT_KUBECONFIG="${KUBECONFIG:-$HOME/.kube/config}"

Then update the function to use these constants:

 cleanup_existing_resources() {
     log_info "Checking for existing ClusterAccess resource '$CLUSTER_NAME'..."
     
-    SA_NAME="kubernetes-graphql-gateway-admin"
-    SA_NAMESPACE="default"
-    
     # Check if ClusterAccess exists in management cluster

249-256: Replace arbitrary sleep with polling for token availability.

The 2-second sleep is arbitrary and may not be sufficient in all cases. Token population timing can vary based on cluster load and configuration.

Apply this diff to implement proper polling:

-log_info "Waiting for token to be populated..."
-sleep 2
-
-ADMIN_TOKEN=$(KUBECONFIG="$TARGET_KUBECONFIG" kubectl get secret $SA_NAME-token -n $SA_NAMESPACE -o jsonpath='{.data.token}' | base64 -d)
+log_info "Waiting for token to be populated..."
+RETRY_COUNT=0
+MAX_RETRIES=30
+ADMIN_TOKEN=""
+
+while [[ $RETRY_COUNT -lt $MAX_RETRIES ]]; do
+    ADMIN_TOKEN=$(KUBECONFIG="$TARGET_KUBECONFIG" kubectl get secret $SA_NAME-token -n $SA_NAMESPACE -o jsonpath='{.data.token}' 2>/dev/null | base64 -d 2>/dev/null)
+    if [[ -n "$ADMIN_TOKEN" ]]; then
+        break
+    fi
+    sleep 1
+    RETRY_COUNT=$((RETRY_COUNT + 1))
+done
+
 if [[ -z "$ADMIN_TOKEN" ]]; then
-    log_error "Failed to retrieve admin token"
+    log_error "Failed to retrieve admin token after ${MAX_RETRIES} seconds"
     exit 1
 fi
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 77d328b and 50ca309.

📒 Files selected for processing (15)
  • README.md (2 hunks)
  • docs/authorization.md (2 hunks)
  • docs/clusteraccess-setup.md (1 hunks)
  • docs/clusteraccess.md (1 hunks)
  • docs/gateway.md (2 hunks)
  • docs/listener.md (1 hunks)
  • docs/local_test.md (4 hunks)
  • docs/multicluster-kubeconfig-flow.md (0 hunks)
  • docs/quickstart.md (3 hunks)
  • docs/subscriptions.md (4 hunks)
  • docs/virtual-workspaces.md (1 hunks)
  • gateway/manager/manager.go (1 hunks)
  • gateway/manager/roundtripper/roundtripper.go (4 hunks)
  • gateway/manager/roundtripper/roundtripper_test.go (11 hunks)
  • hack/create-clusteraccess.sh (4 hunks)
💤 Files with no reviewable changes (1)
  • docs/multicluster-kubeconfig-flow.md
✅ Files skipped from review due to trivial changes (2)
  • docs/clusteraccess-setup.md
  • docs/virtual-workspaces.md
🧰 Additional context used
🧬 Code graph analysis (4)
gateway/manager/roundtripper/roundtripper_test.go (2)
gateway/manager/mocks/mock_RoundTripper.go (1)
  • MockRoundTripper (12-14)
gateway/manager/roundtripper/roundtripper.go (1)
  • New (27-48)
hack/create-clusteraccess.sh (1)
hack/test-clusteraccess-integration.sh (2)
  • log_info (12-14)
  • log_error (20-22)
gateway/manager/roundtripper/roundtripper.go (1)
common/config/config.go (1)
  • Config (3-36)
gateway/manager/manager.go (1)
gateway/manager/roundtripper/roundtripper.go (2)
  • New (27-48)
  • NewUnauthorizedRoundTripper (51-53)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: pipe / testSource / test
  • GitHub Check: pipe / dockerBuild / docker
  • GitHub Check: Analyze (go)
🔇 Additional comments (8)
gateway/manager/manager.go (1)

28-30: LGTM!

The roundtripper factory correctly passes the tlsConfig parameter to align with the updated constructor signature.

gateway/manager/roundtripper/roundtripper_test.go (3)

14-14: LGTM!

All test cases consistently updated to pass the TLSClientConfig parameter to the roundtripper constructor, properly aligning with the updated signature.

Also applies to: 81-81, 273-273, 387-387, 462-462, 511-511, 540-540, 580-580


51-54: LGTM!

The test correctly handles the baseRT transport behavior by skipping mock expectations when the real base transport is used (expectedStatusCode 0). This accurately reflects the implementation change where bearer token authentication now uses baseRT instead of adminRT.

Also applies to: 91-95


522-522: LGTM!

Skipping this test is a reasonable decision since it would require mocking an internal implementation detail (baseRT). The comment clearly explains the rationale.

gateway/manager/roundtripper/roundtripper.go (4)

27-48: LGTM!

The TLS-based base transport creation is well-implemented with appropriate error handling and fallback to http.DefaultTransport. The constructor signature update properly extends the API to support TLS configuration.


84-85: LGTM!

Cloning the request before header manipulation prevents unintended side effects on the original request, and unconditionally removing the Authorization header prevents credential leakage. Both are good security practices.


115-121: LGTM!

In impersonation mode, the Authorization header is correctly set with the bearer token after cloning the request, and the impersonating round tripper properly wraps the adminRT. This maintains the expected behavior for impersonation.


87-90: Document bearer token transport change in docs
Non-impersonation now uses baseRT (TLS-configured transport) instead of adminRT for bearer-token auth. Add this change to the PR description and relevant docs (e.g. CHANGELOG.md or docs/), confirm it aligns with the security model, and include tests covering user-token validation against the cluster.
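To make the approved behavior concrete, here is a rough Go sketch of the header handling described in these comments; the type and field names are illustrative assumptions, not the PR's actual API:

```go
package roundtripper

import (
	"net/http"
	"strings"
)

// sketchRT illustrates the reviewed flow: clone the request, strip the inbound
// Authorization header, then either forward the caller's bearer token over the
// TLS-configured base transport (non-impersonation) or authenticate as admin
// and impersonate the caller. All names here are assumptions for illustration.
type sketchRT struct {
	baseRT      http.RoundTripper // TLS-configured transport (falls back to http.DefaultTransport)
	adminRT     http.RoundTripper // admin-credentialed transport used in impersonation mode
	impersonate bool
	adminToken  string
	userName    string // assumed to be resolved from the caller's token earlier in the flow
}

func (rt *sketchRT) RoundTrip(req *http.Request) (*http.Response, error) {
	userToken := strings.TrimPrefix(req.Header.Get("Authorization"), "Bearer ")

	// Clone before mutating headers so the caller's request is untouched.
	clone := req.Clone(req.Context())
	// Unconditionally drop the inbound header to avoid credential leakage.
	clone.Header.Del("Authorization")

	if rt.impersonate {
		// Impersonation mode: admin token plus Impersonate-User, sent via adminRT.
		clone.Header.Set("Authorization", "Bearer "+rt.adminToken)
		clone.Header.Set("Impersonate-User", rt.userName)
		return rt.adminRT.RoundTrip(clone)
	}

	// Non-impersonation mode: the caller's own token rides baseRT, so the
	// target cluster validates the user token itself.
	clone.Header.Set("Authorization", "Bearer "+userToken)
	return rt.baseRT.RoundTrip(clone)
}
```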

vertex451 and others added 9 commits October 10, 2025 15:35
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
* added nil checks in relation resolving

On-behalf-of: @SAP a.shcherbatiuk@sap.com
Signed-off-by: Artem Shcherbatiuk <vertex451@gmail.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
hack/create-clusteraccess.sh (1)

262-269: Don’t store personal kubeconfig; generate kubeconfig using SA token + CA.

Using the local kubeconfig in a Secret can leak personal creds/exec plugins and break in CI. Prefer a minimal kubeconfig built from SERVER_URL, CA, and the ServiceAccount token you just created.

-log_info "Creating admin kubeconfig secret in management cluster..."
-KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl create secret generic "${CLUSTER_NAME}-admin-kubeconfig" \
-    --namespace="$NAMESPACE" \
-    --from-file=kubeconfig="$TARGET_KUBECONFIG" \
-    --dry-run=client -o yaml | \
-KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl apply -f -
+log_info "Creating admin kubeconfig secret in management cluster (ServiceAccount token)..."
+cat <<KCF | KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl create secret generic "${CLUSTER_NAME}-admin-kubeconfig" \
+  --namespace="$NAMESPACE" \
+  --from-file=kubeconfig=/dev/stdin \
+  --dry-run=client -o yaml | KUBECONFIG="$MANAGEMENT_KUBECONFIG" kubectl apply -f -
+apiVersion: v1
+kind: Config
+clusters:
+- name: ${CLUSTER_NAME}
+  cluster:
+    server: ${SERVER_URL}
+    certificate-authority-data: $(echo -n "$CA_CERT" | base64 -w0)
+contexts:
+- name: ${CLUSTER_NAME}
+  context:
+    cluster: ${CLUSTER_NAME}
+    user: ${SA_NAME}
+current-context: ${CLUSTER_NAME}
+users:
+- name: ${SA_NAME}
+  user:
+    token: ${ADMIN_TOKEN}
+KCF

Also applies to: 279-297

🧹 Nitpick comments (3)
hack/create-clusteraccess.sh (1)

131-133: Avoid duplicate SA vars and hardcoded namespace.

SA_NAME/SA_NAMESPACE are defined twice and SA_NAMESPACE is hardcoded to "default", ignoring the --namespace flag. Define once and derive SA_NAMESPACE from NAMESPACE.

-    SA_NAME="kubernetes-graphql-gateway-admin"
-    SA_NAMESPACE="default"
+    SA_NAME="kubernetes-graphql-gateway-admin"
+    SA_NAMESPACE="${NAMESPACE}"
...
-SA_NAME="kubernetes-graphql-gateway-admin"
-SA_NAMESPACE="default"
+# Reuse previously set values
+# SA_NAME and SA_NAMESPACE already set; keep them consistent with --namespace

Also applies to: 214-218

README.md (1)

27-30: Minor: unify casing of “ClusterAccess”.

Link text shows “Clusteraccess”; elsewhere docs use “ClusterAccess”. Standardize for consistency.

docs/clusteraccess.md (1)

120-129: Add CRD apply command specificity.

Instead of generic config/crd/, reference the exact ClusterAccess CRD path used in the script for consistency.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 50ca309 and 093457e.

📒 Files selected for processing (13)
  • README.md (2 hunks)
  • docs/authorization.md (2 hunks)
  • docs/clusteraccess-setup.md (1 hunks)
  • docs/clusteraccess.md (1 hunks)
  • docs/gateway.md (2 hunks)
  • docs/listener.md (1 hunks)
  • docs/local_test.md (4 hunks)
  • docs/multicluster-kubeconfig-flow.md (0 hunks)
  • docs/quickstart.md (3 hunks)
  • docs/subscriptions.md (4 hunks)
  • docs/virtual-workspaces.md (1 hunks)
  • hack/create-clusteraccess.md (1 hunks)
  • hack/create-clusteraccess.sh (4 hunks)
💤 Files with no reviewable changes (1)
  • docs/multicluster-kubeconfig-flow.md
✅ Files skipped from review due to trivial changes (3)
  • hack/create-clusteraccess.md
  • docs/gateway.md
  • docs/listener.md
🚧 Files skipped from review as they are similar to previous changes (3)
  • docs/virtual-workspaces.md
  • docs/clusteraccess-setup.md
  • docs/subscriptions.md
🧰 Additional context used
🧬 Code graph analysis (1)
hack/create-clusteraccess.sh (1)
hack/test-clusteraccess-integration.sh (2)
  • log_info (12-14)
  • log_error (20-22)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: pipe / dockerBuild / docker
  • GitHub Check: pipe / testSource / test
  • GitHub Check: Analyze (go)
🔇 Additional comments (4)
hack/create-clusteraccess.sh (1)

102-126: Good: idempotent CRD apply + Established wait.

The new ensure_crd_installed flow uses kubectl apply (idempotent) and waits for Established; clear errors and logs. LGTM.

docs/quickstart.md (1)

17-20: LGTM: clearer setup flow and proper Bearer header.

Operation Modes guidance and Authorization header examples are correct and consistent.

Also applies to: 55-63

docs/local_test.md (1)

8-13: LGTM: updated repo path and clearer instructions.

Also applies to: 16-18, 35-40, 52-53, 72-73

docs/authorization.md (1)

21-21: Verify config loader reads new env var
No occurrences of GATEWAY_INTROSPECTION_AUTHENTICATION or the old INTROSPECTION_AUTHENTICATION found in code; ensure the implementation is updated to use GATEWAY_INTROSPECTION_AUTHENTICATION.
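For reference, the Bearer-style request shape the updated docs standardize on looks roughly like this; the endpoint, cluster name, and token are placeholders (the gateway serves GraphQL at /{cluster-name}/graphql per the docs):

```bash
# Hypothetical gateway host and cluster name; $TOKEN is a bearer token you supply.
curl -X POST https://gateway.example.com/my-cluster/graphql \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "{ __typename }"}'
```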

-When GraphiQL page is loaded, it makes a request to fetch the GraphQL schema and there is no way to add the `Authorization` header to that request.
+When the GraphiQL page is loaded, it makes a request to fetch the GraphQL schema, and there is no way to add the `Authorization` header to that request.
 
 We have this [issue](https://github.com/openmfp/kubernetes-graphql-gateway/issues/217) open to fix this.

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Check GitHub issue link target.

Link points to openmfp/kubernetes-graphql-gateway#217. Confirm the org/repo is correct (this repo appears under platform-mesh).


🏁 Script executed:

#!/bin/bash
gh issue view 217 -R openmfp/kubernetes-graphql-gateway || echo "Issue not found under openmfp"

Length of output: 523


Update issue link and wording

  • In docs/authorization.md (line 28), change the URL to point to platform-mesh/kubernetes-graphql-gateway#217 and adjust the text since the issue is closed (e.g., replace “open to fix this” with “see issue” or past tense).
🤖 Prompt for AI Agents
In docs/authorization.md around line 28, the issue link and wording reference
the wrong repo and use present-tense phrasing for a closed issue; update the URL
to https://github.com/platform-mesh/kubernetes-graphql-gateway/issues/217 (or
the repo’s correct issue path) and change the text from “open to fix this” to
past-tense or neutral phrasing such as “see issue” or “see issue #217” so the
sentence correctly points to platform-mesh/kubernetes-graphql-gateway#217 and
reflects the issue state.

Comment on lines +12 to 14
# Create ClusterAccess resource with admin token authentication
./hack/create-clusteraccess.sh --target-kubeconfig ~/.kube/platform-mesh-config --management-kubeconfig ~/.kube/platform-mesh-config
```

⚠️ Potential issue | 🟡 Minor

Wording mismatch: “admin token authentication” vs kubeconfig-based spec.

Quick Setup comment says admin token, but example/spec use kubeconfigSecretRef and CA. Align the text to kubeconfig-based auth or update the spec to token-based if that’s intended.

Also applies to: 36-47

🤖 Prompt for AI Agents
In docs/clusteraccess.md around lines 12-14 (and similarly for lines 36-47), the
heading and Quick Setup text say "admin token authentication" but the
example/spec use kubeconfigSecretRef and CA (kubeconfig-based auth); update the
wording to refer to "kubeconfig-based authentication" (or "kubeconfig
authentication") and adjust the Quick Setup description to mention using a
kubeconfig secret and the --target-kubeconfig/--management-kubeconfig flags,
ensuring the language matches the provided spec and examples; apply the same
wording change and consistency fixes to lines 36-47 as well.
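For orientation, a kubeconfig-based ClusterAccess along the lines this comment describes might look like the sketch below; `kubeconfigSecretRef` and the CA reference are named in the comment, while the API version and the remaining field shapes are assumptions:

```yaml
apiVersion: gateway.platform-mesh.io/v1alpha1  # group from the CRD name; version assumed
kind: ClusterAccess
metadata:
  name: my-cluster
spec:
  host: https://my-cluster.example.com:6443    # assumed field name
  caSecretRef:                                 # assumed shape for the CA mentioned in the comment
    name: my-cluster-ca
    namespace: default
  kubeconfigSecretRef:
    name: my-cluster-admin-kubeconfig          # secret created by the hack script
    namespace: default
```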

Comment on lines 113 to 114
./scripts/test-clusteraccess-integration.sh
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Broken script path.

Docs reference ./scripts/test-clusteraccess-integration.sh, but the repo appears to have hack/test-clusteraccess-integration.sh.


🏁 Script executed:

#!/bin/bash
fd -a 'test-clusteraccess-integration.sh'

Length of output: 57


Update script path in docs/clusteraccess.md
Replace ./scripts/test-clusteraccess-integration.sh with hack/test-clusteraccess-integration.sh.

🤖 Prompt for AI Agents
In docs/clusteraccess.md around lines 113 to 114, the script path is outdated:
replace the reference to ./scripts/test-clusteraccess-integration.sh with the
new path hack/test-clusteraccess-integration.sh. Update the markdown snippet or
inline text accordingly so all occurrences use
hack/test-clusteraccess-integration.sh and ensure formatting/backticks remain
correct.

@vertex451 vertex451 self-assigned this Oct 20, 2025
@vertex451 vertex451 merged commit baea1e4 into main Oct 21, 2025
12 checks passed
@vertex451 vertex451 deleted the feat/script branch October 21, 2025 09:11