
NO-JIRA: Add a dev container setup and a basic MCP server#3476

Merged
openshift-merge-bot[bot] merged 8 commits into openshift:main from smg247:dev-containers
May 1, 2026

Conversation

@smg247
Member

@smg247 smg247 commented Apr 28, 2026

This isn't meant to be perfect, but it gives us a basic dev container setup and MCP server to iterate on. The main, glaring issue right now is the lack of ability to run lint and e2e from inside the dev container. I played around with nested podman, but that had some snags that I didn't want to spend additional time working through right now. I think rewriting the e2e script to be usable without podman is probably the proper approach. Lint actually works fine within the container by simply passing CI=true.

Summary by CodeRabbit

  • New Features

    • Added a local MCP server to run and manage repo-local workflows and tools.
    • Added a reproducible devcontainer with init/bootstrap scripts and service startup.
    • Added new skills to start services, run the frontend, perform DB migrations, prime regression cache, and run lint/test/e2e flows.
  • Documentation

    • Added comprehensive guides for devcontainer usage, MCP tools, development workflows, and an env/template example.
  • Chores

    • Updated ignore rules to exclude local dev artifacts.

@openshift-merge-bot
Contributor

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: automatic mode

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference label (indicates that this PR references a valid Jira ticket of any type) on Apr 28, 2026
@openshift-ci-robot

@smg247: This pull request explicitly references no jira issue.

Details

In response to this:

This isn't meant to be perfect, but it gives us a basic dev container setup and MCP server to iterate on. The main, glaring issue right now is the lack of ability to run lint and e2e from inside the dev container. I played around with nested podman, but that had some snags that I didn't want to spend additional time working through right now. I think rewriting the e2e script to be usable without podman is probably the proper approach. Lint actually works fine within the container by simply passing CI=true.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 28, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a new FastMCP server "sippy-dev" with multiple tools for managing local Sippy services, registers Cursor skills to call those tools, and introduces a Podman-based VS Code devcontainer (Dockerfile, init/post-create scripts, env template), related docs, and logging/configuration helpers.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **MCP Server Core**<br>`mcp/server.py`, `mcp/requirements.txt`, `mcp/README.md` | New FastMCP server exposing tools (`migrate_db`, `regression_cache`, `sippy_serve`, `sippy_ng_start`, `run_lint`, `run_test`, `run_e2e`); centralized repo-scoped path resolution, log management under `sippy-dev-logs/`, subprocess launch/deduplication, timeouts, BigQuery credential resolution, and tool documentation. |
| **Cursor Integration & Skills**<br>`.cursor/mcp.json`, `.cursor/skills/sippy-dev-app/SKILL.md`, `.cursor/skills/sippy-dev-frontend/SKILL.md`, `.cursor/skills/sippy-dev-migrate/SKILL.md`, `.cursor/skills/sippy-dev-regression-cache/SKILL.md`, `.cursor/skills/sippy-dev-serve/SKILL.md`, `.cursor/skills/sippy-dev-tests/SKILL.md` | Registers the sippy-dev MCP server and adds multiple skill documentation files describing invocation patterns, args, credential fallbacks, sequencing, and expected log/listen-hint behaviors. |
| **Devcontainer Image & Config**<br>`.devcontainer/Dockerfile`, `.devcontainer/devcontainer.json`, `.devcontainer/.env.example` | Adds a UBI9-based devcontainer Dockerfile, VS Code devcontainer config (ports, mounts, extensions, environment), and an env template for Vertex/Google credential variables. |
| **Container Init & Bootstrap Scripts**<br>`.devcontainer/init-services.sh`, `.devcontainer/post-create.sh`, `.devcontainer/README.md` | Adds a Podman-backed init script to provision Postgres and Redis with readiness checks, a post-create bootstrap to install Go tooling/npm deps and the MCP venv, and a README documenting setup, limitations, and workflows. |
| **Project Docs & Misc**<br>`DEVELOPMENT.md`, `AGENTS.md`, `.gitignore` | Updates docs to reference the devcontainer and MCP server, adds an MCP entry to AGENTS.md, and extends .gitignore for `.devcontainer/.env`, `mcp/.venv/`, Python caches, and local logs. |
| **Skill Docs (individual)**<br>`.cursor/skills/.../SKILL.md` | Adds several documentation-only skill frontmatter files with invocation guidance and defaults for each MCP tool. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Dev as Developer (Cursor/CLI)
  participant MCP as sippy-dev MCP Server
  participant Proc as Local Process
  participant DB as Postgres
  participant Cache as Redis

  Dev->>MCP: call_mcp_tool(tool, args)
  MCP->>Proc: inspect running processes / logs
  alt already running
    Proc-->>MCP: pid / listen hint / log path
  else start required
    MCP->>Proc: spawn command (stdout/stderr -> sippy-dev-logs/)
    Proc-->>MCP: pid / start status
    MCP->>DB: perform readiness check if needed
    MCP->>Cache: perform readiness check if needed
  end
  MCP-->>Dev: return status, log tail, listen hints
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 14 | ❌ 3

❌ Failed checks (3 warnings)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 60.00%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
| Test Coverage For New Features | ⚠️ Warning | PR adds 580 lines of new Python code in `mcp/server.py` with 7 tool functions containing subprocess execution, file I/O, and environment variable resolution logic, but no corresponding unit tests are included. | Create a test file (`mcp/test_server.py`) with unit tests covering pure utility functions, subprocess execution logic with timeout handling, and error cases; refactor to separate pure logic from side effects. |
| Single Responsibility And Clear Naming | ⚠️ Warning | Functions `sippy_serve()` and `regression_cache()` violate the Single Responsibility Principle with 10 and 8 parameters respectively, bundling five distinct concerns: credentials, database/cache connections, config paths, logging, and service behavior. | Refactor using configuration data classes (`ServiceConnections`, `CredentialsConfig`, `ServiceOptions`) and consolidate six process-introspection functions into a `ProcessDiscovery` class to reduce complexity and eliminate repetitive naming. |
✅ Passed checks (14 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately describes the main changes: adding a dev container setup and MCP server. It is concise, clear, and directly reflects the primary contributions in the changeset. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Go Error Handling | ✅ Passed | This pull request does not introduce any new Go code or modify any existing Go files; it exclusively adds configuration files, documentation, shell scripts, a Python MCP server, and .gitignore updates, so the check is not applicable. |
| Sql Injection Prevention | ✅ Passed | The PR does not introduce SQL injection vulnerabilities. `mcp/server.py` delegates database operations to Go subprocesses via safe command-line arguments and environment variables, not direct SQL query construction. |
| Excessive Css In React Should Use Styles | ✅ Passed | This PR contains only infrastructure, configuration, and Python files with no React components present. |
| Stable And Deterministic Test Names | ✅ Passed | The PR adds only development infrastructure files with no Ginkgo test definitions, making this check non-applicable. |
| Test Structure And Quality | ✅ Passed | PR contains no Ginkgo test files or Go test code modifications; check is not applicable. |
| Microshift Test Compatibility | ✅ Passed | This PR does not add any new Ginkgo e2e tests; all changes are infrastructure and documentation files. |
| Single Node Openshift (Sno) Test Compatibility | ✅ Passed | This pull request does not add any new Ginkgo e2e tests; it introduces dev container infrastructure, a Python-based MCP server, and documentation files. |
| Topology-Aware Scheduling Compatibility | ✅ Passed | This pull request does not introduce any Kubernetes deployment manifests, operator code, or controllers that could have topology-aware scheduling concerns. |
| Ote Binary Stdout Contract | ✅ Passed | PR adds development infrastructure (devcontainer and MCP server) not invoked by test targets and containing no module-level stdout writes. |
| Ipv6 And Disconnected Network Test Compatibility | ✅ Passed | The PR does not add any new Ginkgo e2e tests; the file list shows only configuration files, documentation, and infrastructure code. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci openshift-ci Bot requested review from deads2k and xueqzhan April 28, 2026 18:29
@openshift-ci openshift-ci Bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Apr 28, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 12

🧹 Nitpick comments (5)
.devcontainer/Dockerfile (1)

1-3: Pin the devcontainer toolchain instead of following moving targets.

registry.access.redhat.com/ubi9/ubi:latest and gotestsum@latest make rebuilds non-reproducible; the same commit can produce a different image later. Please pin both to explicit versions/digests.

Also applies to: 45-45

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.devcontainer/Dockerfile around lines 1 - 3, Replace moving targets with
fixed versions: change the base image reference used in the FROM line
(registry.access.redhat.com/ubi9/ubi:latest) to a specific, immutable tag or
digest and pin any tool installs referenced elsewhere (e.g., gotestsum@latest)
to explicit released versions or digests; update or add ARGs like
GOLANGCI_LINT_VERSION only if they are set to a known stable version (keep ARG
GOLANGCI_LINT_VERSION but ensure its value is a fixed release), and verify the
gotestsum install command at the location around the gotestsum@latest occurrence
is updated to a specific version so rebuilds are reproducible.
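A pinned variant of the targeted lines might look like the following config sketch; the digest placeholder and version string are illustrative, not values verified against the registries:

```dockerfile
# Replace the placeholder with the digest of a vetted ubi9 build,
# e.g. taken from `podman image inspect` after pulling a known-good tag.
FROM registry.access.redhat.com/ubi9/ubi@sha256:<pinned-digest>

# Pin tool installs the same way (version shown is illustrative).
ARG GOTESTSUM_VERSION=v1.12.0
RUN go install gotest.tools/gotestsum@"${GOTESTSUM_VERSION}"
```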
.devcontainer/init-services.sh (1)

6-19: Validate reused service containers before starting them.

podman start will reuse any existing sippy-postgres / sippy-redis container by name, even if it was created with a different network, image, or env. In that state the devcontainer can come up pointed at stale or unreachable services.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.devcontainer/init-services.sh around lines 6 - 19, Ensure existing
containers are validated before reuse: for each named container (sippy-postgres
and sippy-redis) call podman inspect and verify required properties (network
sippy-net, expected image quay.io/enterprisedb/postgresql or redis:7-alpine, and
needed env vars like POSTGRES_PASSWORD/POSTGRES_HOST_AUTH_METHOD) and only run
podman start if they match; otherwise remove the stale container (podman rm -f
<name>) and recreate it with the podman run command shown. Use podman inspect
--format or jq to check .Config.Image, .NetworkSettings.Networks, and
.Config.Env for mismatches, then conditionally start vs. remove+run for
sippy-postgres and sippy-redis.
mcp/README.md (1)

64-67: Add a caution note to avoid repeated run_e2e executions.

Since this section defines the default test order, it should explicitly note that run_e2e is expensive and should be run sparingly (ideally once per request/session).

Based on learnings: Never run make e2e more than once per request to avoid expensive BigQuery queries and long test times.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp/README.md` around lines 64 - 67, Add a short caution under the
sippy-dev-tests test order (the line listing run_lint → run_test → run_e2e)
stating that run_e2e is expensive and should be executed sparingly (ideally once
per request/session), and call out that running make e2e repeatedly will trigger
costly BigQuery queries and long test times; update the README section that
lists sippy-dev-tests to include this note so developers see it alongside the
default test order.
.devcontainer/post-create.sh (2)

22-25: Differentiate “not installed” vs “not authenticated” for gcloud.

Right now both cases end with the same message. A quick command -v gcloud check would make troubleshooting clearer.

💡 Proposed refactor

```diff
 echo "==> Checking GCP auth..."
-if ! gcloud auth application-default print-access-token >/dev/null 2>&1; then
-    echo "    GCP credentials not found. Run 'gcloud auth application-default login' to authenticate."
+if ! command -v gcloud >/dev/null 2>&1; then
+    echo "    gcloud CLI not found in PATH."
+elif ! gcloud auth application-default print-access-token >/dev/null 2>&1; then
+    echo "    GCP credentials not found. Run 'gcloud auth application-default login' to authenticate."
 fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.devcontainer/post-create.sh around lines 22 - 25, The script currently
treats missing gcloud binary and missing application-default credentials the
same; add an explicit existence check using command -v gcloud before running
gcloud auth application-default print-access-token and emit different messages:
if command -v gcloud fails, print a “gcloud not installed” message suggesting
installation, otherwise run gcloud auth application-default print-access-token
and on failure print the existing “GCP credentials not found. Run 'gcloud auth
application-default login'” guidance; update the block around the gcloud auth
application-default print-access-token invocation to branch on the command -v
gcloud result.

5-7: Pin Go tool versions to ensure reproducible devcontainer builds.

Lines 5-7 use @latest for gopls, dlv, and staticcheck, which causes non-deterministic installs and can break setups on future rebuilds.

♻️ Proposed refactor

```diff
+GOPLS_VERSION="${GOPLS_VERSION:-v0.16.2}"
+DLV_VERSION="${DLV_VERSION:-v1.24.0}"
+STATICCHECK_VERSION="${STATICCHECK_VERSION:-2024.1.1}"
+
 echo "==> Installing Go IDE tools..."
-go install golang.org/x/tools/gopls@latest
-go install github.com/go-delve/delve/cmd/dlv@latest
-go install honnef.co/go/tools/cmd/staticcheck@latest
+go install golang.org/x/tools/gopls@"${GOPLS_VERSION}"
+go install github.com/go-delve/delve/cmd/dlv@"${DLV_VERSION}"
+go install honnef.co/go/tools/cmd/staticcheck@"${STATICCHECK_VERSION}"
```

This approach allows overriding versions via environment variables while maintaining consistent defaults across rebuilds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.devcontainer/post-create.sh around lines 5 - 7, The install commands using
`@latest` for gopls, dlv and staticcheck are non-deterministic; change them to use
pinned versions (e.g., GOPLS_VERSION, DLV_VERSION, STATICCHECK_VERSION variables
with sensible defaults) and update the three install lines
(golang.org/x/tools/gopls, github.com/go-delve/delve/cmd/dlv,
honnef.co/go/tools/cmd/staticcheck) to reference those variables so builds are
reproducible while still allowing overrides via environment variables.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.cursor/skills/sippy-dev-frontend/SKILL.md:
- Line 14: The wording in SKILL.md is ambiguous about the open_browser default;
update the sentence that mentions "Defaults and `open_browser`: `mcp/server.py`"
to clearly state that default configuration values are defined in mcp/server.py
and that the `open_browser` option is a boolean which defaults to false (e.g.,
"Default values live in mcp/server.py; `open_browser` is a boolean flag that
defaults to false"). Ensure you reference the `open_browser` flag and
mcp/server.py so readers know where to find and change the default.

In @.cursor/skills/sippy-dev-regression-cache/SKILL.md:
- Line 16: The documentation incorrectly lists "mcp/server.py" as an argument to
the regression_cache call contract; remove that reference and any similar "Other
args" mention so the SKILL.md only documents actual public inputs (e.g.,
bigquery_credentials_file and the SIPPY_BIGQUERY_CREDENTIALS_FILE /
GOOGLE_APPLICATION_CREDENTIALS env vars) for the regression_cache skill, and do
not present implementation details of the MCP server (mcp/server.py) as a
call_mcp_tool argument.

In @.cursor/skills/sippy-dev-serve/SKILL.md:
- Line 13: The documentation line in SKILL.md incorrectly lists "mcp/server.py"
as an "Other args" parameter; remove that reference so the documented tool
parameters for bigquery_credentials_file only mention args accepted by
sippy_serve. Update the sentence containing "Other args: **`mcp/server.py`**" to
either delete the clause or replace it with a note that call_mcp_tool should
pass only sippy_serve args, and ensure mentions of call_mcp_tool and sippy_serve
remain to guide agents toward valid calls (do not document mcp/server.py as a
tool argument).

In @.cursor/skills/sippy-dev-tests/SKILL.md:
- Around line 11-17: Update the SKILL.md workflow description to call out
in-container limitations: explicitly note that run_lint requires CI=true to run
correctly inside the devcontainer and that run_e2e currently does not work from
within the devcontainer, so users should not expect the full run_lint → run_test
→ run_e2e flow to succeed in-container; reference the existing symbols
(call_mcp_tool, run_lint, run_test, run_e2e, bigquery_credentials_file,
GCS_SA_JSON_PATH, and sippy-dev-logs/run_*.log) and add a short caveat paragraph
explaining these constraints and any recommended workaround (e.g., run lint/e2e
from CI or an external environment).

In @.devcontainer/devcontainer.json:
- Line 25: The post-create script call (post-create.sh) runs pip to install
mcp/requirements.txt without the --ignore-scripts flag; update the pip install
invocation inside the post-create.sh script so the venv python install command
(the line that runs something like "<venv>/bin/python -m pip install -r
mcp/requirements.txt") includes the --ignore-scripts option (i.e., add
--ignore-scripts before -r) to match the npm install behavior and suppress
package install scripts.

In @.devcontainer/Dockerfile:
- Line 52: The Dockerfile RUN invoking the Claude install script is unpinned and
causes non-deterministic builds; update the RUN line that currently uses "curl
-fsSL https://claude.ai/install.sh | sh" to call the installer with an explicit
version argument via the installer's "-s <VERSION>" form (e.g., a numeric
version or "stable") so the build always installs a known release; ensure the
new RUN uses the same curl invocation but pipes to "sh -s <VERSION>" and commit
the chosen version string.

In @.devcontainer/init-services.sh:
- Around line 21-26: The init script (.devcontainer/init-services.sh) currently
loops 30 times running podman exec sippy-postgres pg_isready but always prints
"Services ready." and exits 0 even when the checks fail; update the for-loop
logic to detect timeout (e.g., track success via the pg_isready exit status or a
flag) and if the loop completes without a successful pg_isready, print a clear
error message and exit with a non-zero status instead of printing "Services
ready."; modify the block around the for loop and the echo "Services ready." to
only run on success and call exit 1 (or another non-zero code) on failure so the
devcontainer init fails fast.

In @.devcontainer/README.md:
- Around line 69-71: Add a language identifier to the fenced code block that
begins with the snippet `Cmd+Shift+P > "Dev Containers: Attach to Running
Container" > sippy-dev`; update that triple-backtick fence to include a language
tag such as `text` (e.g., ```text) so markdownlint rule MD040 is satisfied and
the block is explicitly marked as plain text.

In `@mcp/requirements.txt`:
- Line 1: Update the dependency specification for fastmcp in requirements to pin
a safe minimum version: replace the bare "fastmcp" entry with a constrained
specifier such as "fastmcp>=3.2.4" so pip will install a release that pulls
patched transitive dependencies (this change targets the fastmcp entry in the
requirements list).

In `@mcp/server.py`:
- Around line 73-80: Wrap the subprocess.run call that assigns r in migrate_db
with a try/except catching subprocess.TimeoutExpired; when a timeout occurs,
build a fallback CompletedProcess-like result (or set r to an object with
stdout/stderr containing a clear timeout message and a non-zero returncode) so
the code returns a usable status and log output path instead of raising, and
ensure you include the original timeout exception message in the stderr text;
reference the subprocess.run invocation that assigns r and the
subprocess.TimeoutExpired exception for locating the change.
- Around line 339-346: The spawned background process in sippy_serve() currently
inherits the parent stdin; update the subprocess.Popen call that creates proc so
it detaches stdin (set stdin to subprocess.DEVNULL) to prevent the child from
keeping the MCP session transport open or reading from the parent stdin; keep
other args (cwd=REPO_ROOT, env=os.environ.copy(), stdout=logf,
stderr=subprocess.STDOUT, start_new_session=True) unchanged when making this
change.
- Around line 21-25: _the _repo_path function currently allows absolute paths
and .. escapes which can let writable args overwrite files outside the repo; add
a new validator function named _repo_output_path(p: str) -> Path that expands
and resolves p but enforces the path is inside REPO_ROOT (reject/raise if
path.is_absolute() and not under REPO_ROOT after resolve or if resolved
path.relative_to(REPO_ROOT) would fail), create parent dirs as needed, and
return the resolved Path; then replace uses of writable output args (notably the
log_file call sites in regression_cache, sippy_serve, and sippy_ng_start) to
call _repo_output_path instead of _repo_path so writable outputs cannot escape
the workspace.

---

Nitpick comments:
In @.devcontainer/Dockerfile:
- Around line 1-3: Replace moving targets with fixed versions: change the base
image reference used in the FROM line
(registry.access.redhat.com/ubi9/ubi:latest) to a specific, immutable tag or
digest and pin any tool installs referenced elsewhere (e.g., gotestsum@latest)
to explicit released versions or digests; update or add ARGs like
GOLANGCI_LINT_VERSION only if they are set to a known stable version (keep ARG
GOLANGCI_LINT_VERSION but ensure its value is a fixed release), and verify the
gotestsum install command at the location around the gotestsum@latest occurrence
is updated to a specific version so rebuilds are reproducible.

In @.devcontainer/init-services.sh:
- Around line 6-19: Ensure existing containers are validated before reuse: for
each named container (sippy-postgres and sippy-redis) call podman inspect and
verify required properties (network sippy-net, expected image
quay.io/enterprisedb/postgresql or redis:7-alpine, and needed env vars like
POSTGRES_PASSWORD/POSTGRES_HOST_AUTH_METHOD) and only run podman start if they
match; otherwise remove the stale container (podman rm -f <name>) and recreate
it with the podman run command shown. Use podman inspect --format or jq to check
.Config.Image, .NetworkSettings.Networks, and .Config.Env for mismatches, then
conditionally start vs. remove+run for sippy-postgres and sippy-redis.

In @.devcontainer/post-create.sh:
- Around line 22-25: The script currently treats missing gcloud binary and
missing application-default credentials the same; add an explicit existence
check using command -v gcloud before running gcloud auth application-default
print-access-token and emit different messages: if command -v gcloud fails,
print a “gcloud not installed” message suggesting installation, otherwise run
gcloud auth application-default print-access-token and on failure print the
existing “GCP credentials not found. Run 'gcloud auth application-default
login'” guidance; update the block around the gcloud auth application-default
print-access-token invocation to branch on the command -v gcloud result.
- Around line 5-7: The install commands using `@latest` for gopls, dlv and
staticcheck are non-deterministic; change them to use pinned versions (e.g.,
GOPLS_VERSION, DLV_VERSION, STATICCHECK_VERSION variables with sensible
defaults) and update the three install lines (golang.org/x/tools/gopls,
github.com/go-delve/delve/cmd/dlv, honnef.co/go/tools/cmd/staticcheck) to
reference those variables so builds are reproducible while still allowing
overrides via environment variables.

In `@mcp/README.md`:
- Around line 64-67: Add a short caution under the sippy-dev-tests test order
(the line listing run_lint → run_test → run_e2e) stating that run_e2e is
expensive and should be executed sparingly (ideally once per request/session),
and call out that running make e2e repeatedly will trigger costly BigQuery
queries and long test times; update the README section that lists
sippy-dev-tests to include this note so developers see it alongside the default
test order.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 02714681-5d29-4292-abb1-cad56781673d

📥 Commits

Reviewing files that changed from the base of the PR and between b30049e and 5944910.

📒 Files selected for processing (19)
  • .cursor/mcp.json
  • .cursor/skills/sippy-dev-app/SKILL.md
  • .cursor/skills/sippy-dev-frontend/SKILL.md
  • .cursor/skills/sippy-dev-migrate/SKILL.md
  • .cursor/skills/sippy-dev-regression-cache/SKILL.md
  • .cursor/skills/sippy-dev-serve/SKILL.md
  • .cursor/skills/sippy-dev-tests/SKILL.md
  • .devcontainer/.env.example
  • .devcontainer/Dockerfile
  • .devcontainer/README.md
  • .devcontainer/devcontainer.json
  • .devcontainer/init-services.sh
  • .devcontainer/post-create.sh
  • .gitignore
  • AGENTS.md
  • DEVELOPMENT.md
  • mcp/README.md
  • mcp/requirements.txt
  • mcp/server.py

@openshift-ci
Contributor

openshift-ci Bot commented Apr 28, 2026

@coderabbitai[bot]: changing LGTM is restricted to collaborators

Details

In response to this:

Actionable comments posted: 12

application-default print-access-token invocation to branch on the command -v
gcloud result.

5-7: Pin Go tool versions to ensure reproducible devcontainer builds.

Lines 5-7 use @latest for gopls, dlv, and staticcheck, which causes non-deterministic installs and can break setups on future rebuilds.

♻️ Proposed refactor
+GOPLS_VERSION="${GOPLS_VERSION:-v0.16.2}"
+DLV_VERSION="${DLV_VERSION:-v1.24.0}"
+STATICCHECK_VERSION="${STATICCHECK_VERSION:-2024.1.1}"
+
echo "==> Installing Go IDE tools..."
-go install golang.org/x/tools/gopls@latest
-go install github.com/go-delve/delve/cmd/dlv@latest
-go install honnef.co/go/tools/cmd/staticcheck@latest
+go install golang.org/x/tools/gopls@"${GOPLS_VERSION}"
+go install github.com/go-delve/delve/cmd/dlv@"${DLV_VERSION}"
+go install honnef.co/go/tools/cmd/staticcheck@"${STATICCHECK_VERSION}"

This approach allows overriding versions via environment variables while maintaining consistent defaults across rebuilds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.devcontainer/post-create.sh around lines 5 - 7, The install commands using
`@latest` for gopls, dlv and staticcheck are non-deterministic; change them to use
pinned versions (e.g., GOPLS_VERSION, DLV_VERSION, STATICCHECK_VERSION variables
with sensible defaults) and update the three install lines
(golang.org/x/tools/gopls, github.com/go-delve/delve/cmd/dlv,
honnef.co/go/tools/cmd/staticcheck) to reference those variables so builds are
reproducible while still allowing overrides via environment variables.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.cursor/skills/sippy-dev-frontend/SKILL.md:
- Line 14: The wording in SKILL.md is ambiguous about the open_browser default;
update the sentence that mentions "Defaults and `open_browser`: `mcp/server.py`"
to clearly state that default configuration values are defined in mcp/server.py
and that the `open_browser` option is a boolean which defaults to false (e.g.,
"Default values live in mcp/server.py; `open_browser` is a boolean flag that
defaults to false"). Ensure you reference the `open_browser` flag and
mcp/server.py so readers know where to find and change the default.

In @.cursor/skills/sippy-dev-regression-cache/SKILL.md:
- Line 16: The documentation incorrectly lists "mcp/server.py" as an argument to
the regression_cache call contract; remove that reference and any similar "Other
args" mention so the SKILL.md only documents actual public inputs (e.g.,
bigquery_credentials_file and the SIPPY_BIGQUERY_CREDENTIALS_FILE /
GOOGLE_APPLICATION_CREDENTIALS env vars) for the regression_cache skill, and do
not present implementation details of the MCP server (mcp/server.py) as a
call_mcp_tool argument.

In @.cursor/skills/sippy-dev-serve/SKILL.md:
- Line 13: The documentation line in SKILL.md incorrectly lists "mcp/server.py"
as an "Other args" parameter; remove that reference so the documented tool
parameters for bigquery_credentials_file only mention args accepted by
sippy_serve. Update the sentence containing "Other args: **`mcp/server.py`**" to
either delete the clause or replace it with a note that call_mcp_tool should
pass only sippy_serve args, and ensure mentions of call_mcp_tool and sippy_serve
remain to guide agents toward valid calls (do not document mcp/server.py as a
tool argument).

In @.cursor/skills/sippy-dev-tests/SKILL.md:
- Around line 11-17: Update the SKILL.md workflow description to call out
in-container limitations: explicitly note that run_lint requires CI=true to run
correctly inside the devcontainer and that run_e2e currently does not work from
within the devcontainer, so users should not expect the full run_lint → run_test
→ run_e2e flow to succeed in-container; reference the existing symbols
(call_mcp_tool, run_lint, run_test, run_e2e, bigquery_credentials_file,
GCS_SA_JSON_PATH, and sippy-dev-logs/run_*.log) and add a short caveat paragraph
explaining these constraints and any recommended workaround (e.g., run lint/e2e
from CI or an external environment).

In @.devcontainer/devcontainer.json:
- Line 25: The post-create script call (post-create.sh) runs pip to install
mcp/requirements.txt without the --ignore-scripts flag; update the pip install
invocation inside the post-create.sh script so the venv python install command
(the line that runs something like "<venv>/bin/python -m pip install -r
mcp/requirements.txt") includes the --ignore-scripts option (i.e., add
--ignore-scripts before -r) to match the npm install behavior and suppress
package install scripts.

In @.devcontainer/Dockerfile:
- Line 52: The Dockerfile RUN invoking the Claude install script is unpinned and
causes non-deterministic builds; update the RUN line that currently uses "curl
-fsSL https://claude.ai/install.sh | sh" to call the installer with an explicit
version argument via the installer's "-s <VERSION>" form (e.g., a numeric
version or "stable") so the build always installs a known release; ensure the
new RUN uses the same curl invocation but pipes to "sh -s <VERSION>" and commit
the chosen version string.

In @.devcontainer/init-services.sh:
- Around line 21-26: The init script (.devcontainer/init-services.sh) currently
loops 30 times running podman exec sippy-postgres pg_isready but always prints
"Services ready." and exits 0 even when the checks fail; update the for-loop
logic to detect timeout (e.g., track success via the pg_isready exit status or a
flag) and if the loop completes without a successful pg_isready, print a clear
error message and exit with a non-zero status instead of printing "Services
ready."; modify the block around the for loop and the echo "Services ready." to
only run on success and call exit 1 (or another non-zero code) on failure so the
devcontainer init fails fast.
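One way to get that fail-fast behavior is a generic readiness loop that only prints success when a probe actually passed. This is a sketch: `wait_for`, `WAIT_ATTEMPTS`, and `WAIT_INTERVAL` are illustrative names, and the commented calls assume the `podman exec` probes that init-services.sh uses:

```shell
#!/usr/bin/env bash
# Sketch: poll a command until it succeeds, exiting non-zero on timeout
# instead of unconditionally reporting "Services ready."
set -euo pipefail

wait_for() {
    local desc="$1"
    shift
    local attempts="${WAIT_ATTEMPTS:-30}" interval="${WAIT_INTERVAL:-1}" i
    for i in $(seq 1 "$attempts"); do
        if "$@" >/dev/null 2>&1; then
            echo "${desc} ready."
            return 0
        fi
        sleep "$interval"
    done
    echo "ERROR: ${desc} not ready after ${attempts} attempts." >&2
    return 1
}

# wait_for "PostgreSQL" podman exec sippy-postgres pg_isready -U postgres
# wait_for "Redis" podman exec sippy-redis redis-cli PING
```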

In @.devcontainer/README.md:
- Around line 69-71: Add a language identifier to the fenced code block that
begins with the snippet `Cmd+Shift+P > "Dev Containers: Attach to Running
Container" > sippy-dev`; update that triple-backtick fence to include a language
tag such as `text` (e.g., ```text) so markdownlint rule MD040 is satisfied and
the block is explicitly marked as plain text.

In `@mcp/requirements.txt`:
- Line 1: Update the dependency specification for fastmcp in requirements to pin
a safe minimum version: replace the bare "fastmcp" entry with a constrained
specifier such as "fastmcp>=3.2.4" so pip will install a release that pulls
patched transitive dependencies (this change targets the fastmcp entry in the
requirements list).

In `@mcp/server.py`:
- Around line 73-80: Wrap the subprocess.run call that assigns r in migrate_db
with a try/except catching subprocess.TimeoutExpired; when a timeout occurs,
build a fallback CompletedProcess-like result (or set r to an object with
stdout/stderr containing a clear timeout message and a non-zero returncode) so
the code returns a usable status and log output path instead of raising, and
ensure you include the original timeout exception message in the stderr text;
reference the subprocess.run invocation that assigns r and the
subprocess.TimeoutExpired exception for locating the change.
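A minimal sketch of that shape, assuming the call uses `capture_output=True` with `text=True`; `run_with_timeout` and the 124 return code are illustrative choices, and the output is normalized because `TimeoutExpired.stdout` can surface as bytes even in text mode:

```python
import subprocess
from types import SimpleNamespace


def _as_text(data):
    # TimeoutExpired may carry bytes even when the run used text=True.
    if isinstance(data, bytes):
        return data.decode(errors="replace")
    return data or ""


def run_with_timeout(cmd, timeout):
    """Run cmd; on timeout, return a CompletedProcess-like fallback instead of raising."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired as e:
        return SimpleNamespace(
            returncode=124,  # conventional "timed out" exit code
            stdout=_as_text(e.stdout),
            stderr=f"timed out after {timeout}s: {e}\n" + _as_text(e.stderr),
        )
```

Callers can then unconditionally write `r.stdout`/`r.stderr` to the log file and report the timeout in the returned message.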
- Around line 339-346: The spawned background process in sippy_serve() currently
inherits the parent stdin; update the subprocess.Popen call that creates proc so
it detaches stdin (set stdin to subprocess.DEVNULL) to prevent the child from
keeping the MCP session transport open or reading from the parent stdin; keep
other args (cwd=REPO_ROOT, env=os.environ.copy(), stdout=logf,
stderr=subprocess.STDOUT, start_new_session=True) unchanged when making this
change.
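The stdin detachment can be sketched as follows; `spawn_background` is a hypothetical helper mirroring the Popen arguments the prompt lists for sippy_serve:

```python
import os
import subprocess


def spawn_background(cmd, log_file, cwd=None):
    """Start a long-lived child that cannot read from (or pin open) the parent's stdin."""
    with open(log_file, "ab") as logf:
        proc = subprocess.Popen(
            cmd,
            cwd=cwd,
            env=os.environ.copy(),
            stdin=subprocess.DEVNULL,   # detach from the parent/MCP transport stdin
            stdout=logf,
            stderr=subprocess.STDOUT,
            start_new_session=True,     # own session: not tied to the parent's group
        )
    # The child keeps its own duplicate of the log descriptor after the close.
    return proc
```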
- Around line 21-25: The _repo_path function currently allows absolute paths
and .. escapes which can let writable args overwrite files outside the repo; add
a new validator function named _repo_output_path(p: str) -> Path that expands
and resolves p but enforces the path is inside REPO_ROOT (reject/raise if
path.is_absolute() and not under REPO_ROOT after resolve or if resolved
path.relative_to(REPO_ROOT) would fail), create parent dirs as needed, and
return the resolved Path; then replace uses of writable output args (notably the
log_file call sites in regression_cache, sippy_serve, and sippy_ng_start) to
call _repo_output_path instead of _repo_path so writable outputs cannot escape
the workspace.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 02714681-5d29-4292-abb1-cad56781673d

📥 Commits

Reviewing files that changed from the base of the PR and between b30049e and 5944910.

📒 Files selected for processing (19)
  • .cursor/mcp.json
  • .cursor/skills/sippy-dev-app/SKILL.md
  • .cursor/skills/sippy-dev-frontend/SKILL.md
  • .cursor/skills/sippy-dev-migrate/SKILL.md
  • .cursor/skills/sippy-dev-regression-cache/SKILL.md
  • .cursor/skills/sippy-dev-serve/SKILL.md
  • .cursor/skills/sippy-dev-tests/SKILL.md
  • .devcontainer/.env.example
  • .devcontainer/Dockerfile
  • .devcontainer/README.md
  • .devcontainer/devcontainer.json
  • .devcontainer/init-services.sh
  • .devcontainer/post-create.sh
  • .gitignore
  • AGENTS.md
  • DEVELOPMENT.md
  • mcp/README.md
  • mcp/requirements.txt
  • mcp/server.py

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.devcontainer/init-services.sh:
- Around line 15-19: The Redis startup sequence currently just starts or runs a
container (sippy-redis) but never verifies Redis is accepting connections, so
the script can falsely report success once PostgreSQL is ready; update the
sippy-redis startup block to fail fast: after the existing podman start/run for
sippy-redis, run a readiness probe (e.g. attempt TCP connect to localhost:6379
or run redis-cli PING inside the sippy-redis container) with a short timeout and
retry loop, and if the probe fails after N attempts log an error and exit
non-zero so the script does not proceed to the "services ready" success path
tied to PostgreSQL readiness. Ensure the probe references the same container
name sippy-redis and port 6379 and that any podman run/start failure also
triggers an immediate exit.

In `@mcp/server.py`:
- Around line 390-391: The docstring claims absolute log paths outside the repo
are allowed, but the actual enforcement in _safe_log_path confines log files to
the repo root; update the docstring for the surrounding function/class to state
that log_file paths are normalized and validated to remain within the repository
root (absolute paths outside the repo are rejected), and mention CRA default
port 3000 and that starting is skipped if a matching npm/react-scripts process
is running to keep behavior description accurate.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: fd8a6226-c33b-4a42-990f-205e7a94e501

📥 Commits

Reviewing files that changed from the base of the PR and between 5944910 and 46394ef.

📒 Files selected for processing (10)
  • .cursor/skills/sippy-dev-frontend/SKILL.md
  • .cursor/skills/sippy-dev-regression-cache/SKILL.md
  • .cursor/skills/sippy-dev-serve/SKILL.md
  • .cursor/skills/sippy-dev-tests/SKILL.md
  • .devcontainer/README.md
  • .devcontainer/init-services.sh
  • .devcontainer/post-create.sh
  • mcp/README.md
  • mcp/requirements.txt
  • mcp/server.py
✅ Files skipped from review due to trivial changes (3)
  • .cursor/skills/sippy-dev-serve/SKILL.md
  • .cursor/skills/sippy-dev-regression-cache/SKILL.md
  • .cursor/skills/sippy-dev-frontend/SKILL.md
🚧 Files skipped from review as they are similar to previous changes (2)
  • .devcontainer/post-create.sh
  • mcp/README.md

@openshift-ci
Contributor

openshift-ci Bot commented Apr 28, 2026

@coderabbitai[bot]: changing LGTM is restricted to collaborators


@smg247
Member Author

smg247 commented Apr 28, 2026

@coderabbitai, please close out issues that have been fixed once you do your next review

@coderabbitai
Contributor

coderabbitai Bot commented Apr 28, 2026

@smg247 Sure! I'll perform a fresh review of the PR and resolve any comments that have already been addressed.

✅ Actions performed

Full review triggered.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
.devcontainer/init-services.sh (1)

15-19: Consider adding fail-fast error handling for Redis startup.

The Redis container startup (lines 15-19) doesn't explicitly check if the podman start or podman run command succeeds. While the readiness loop at lines 35-47 will eventually catch failures, adding explicit error handling would provide faster, clearer feedback:

♻️ Suggested improvement
-podman start sippy-redis 2>/dev/null || \
-    podman run -d --name sippy-redis \
+if ! (podman start sippy-redis 2>/dev/null || \
+    podman run -d --name sippy-redis \
         --network sippy-net \
         -p 6379:6379 \
-        redis:7-alpine
+        redis:7-alpine); then
+    echo "ERROR: Failed to start Redis container." >&2
+    exit 1
+fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.devcontainer/init-services.sh around lines 15 - 19, The Redis startup block
calls "podman start sippy-redis" and falls back to "podman run -d --name
sippy-redis ..." without checking return codes; add immediate fail-fast checks
after each command (check the exit status of podman start and podman run) and if
either fails, print a clear error including the failing command and any stderr
output and exit non-zero so the script stops early instead of relying only on
the later readiness loop; update the podman start / podman run sequence to
capture and log errors from those commands before continuing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@mcp/server.py`:
- Around line 85-101: The migrate_db function's subprocess.run timeout path
returns before writing the log file, so the reported log_path never exists; catch
subprocess.TimeoutExpired in the except block and write the captured output to
log_path before returning (use the exception's stdout/stderr attributes if
available), or switch to streaming output to the file like regression_cache
does; update the except handler around the subprocess.run call (the block that
currently returns "migrate timed out after 600s. log: {log_path}") to write the
combined stdout/stderr to log_path (ensuring trailing newline) and then return
the timeout message.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 3dd6b3ef-bb31-49ad-b36a-dec29b386caf

📥 Commits

Reviewing files that changed from the base of the PR and between 46394ef and 6d03b75.

📒 Files selected for processing (19)
  • .cursor/mcp.json
  • .cursor/skills/sippy-dev-app/SKILL.md
  • .cursor/skills/sippy-dev-frontend/SKILL.md
  • .cursor/skills/sippy-dev-migrate/SKILL.md
  • .cursor/skills/sippy-dev-regression-cache/SKILL.md
  • .cursor/skills/sippy-dev-serve/SKILL.md
  • .cursor/skills/sippy-dev-tests/SKILL.md
  • .devcontainer/.env.example
  • .devcontainer/Dockerfile
  • .devcontainer/README.md
  • .devcontainer/devcontainer.json
  • .devcontainer/init-services.sh
  • .devcontainer/post-create.sh
  • .gitignore
  • AGENTS.md
  • DEVELOPMENT.md
  • mcp/README.md
  • mcp/requirements.txt
  • mcp/server.py
✅ Files skipped from review due to trivial changes (11)
  • .cursor/mcp.json
  • AGENTS.md
  • .cursor/skills/sippy-dev-migrate/SKILL.md
  • DEVELOPMENT.md
  • .gitignore
  • .cursor/skills/sippy-dev-frontend/SKILL.md
  • mcp/README.md
  • .devcontainer/README.md
  • .cursor/skills/sippy-dev-app/SKILL.md
  • .cursor/skills/sippy-dev-serve/SKILL.md
  • .cursor/skills/sippy-dev-regression-cache/SKILL.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • .devcontainer/post-create.sh

@openshift-ci
Contributor

openshift-ci Bot commented Apr 28, 2026

@coderabbitai[bot]: changing LGTM is restricted to collaborators


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
.devcontainer/Dockerfile (1)

52-52: ⚠️ Potential issue | 🟠 Major

Pin Claude installer input to avoid non-deterministic builds.

Current install pulls whatever the endpoint serves at build time. Please pass an explicit channel/version (for example stable or a fixed version string).

Proposed diff
+ARG CLAUDE_CODE_VERSION=stable
@@
-RUN curl -fsSL https://claude.ai/install.sh | sh
+RUN curl -fsSL https://claude.ai/install.sh | sh -s -- "${CLAUDE_CODE_VERSION}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.devcontainer/Dockerfile at line 52, The Dockerfile RUN line that executes
the remote installer (the command starting with "RUN curl -fsSL
https://claude.ai/install.sh | sh") must be made deterministic by pinning the
installer channel or version; change the curl URL to include an explicit channel
or version (for example append a query or path such as ?channel=stable or
/v1.2.3/install.sh) or fetch a specific release artifact and verify it (e.g.,
download https://claude.ai/install.sh?channel=stable or a fixed-version URL and
optionally validate a checksum) so builds no longer depend on whatever the
endpoint returns at build time.
mcp/server.py (1)

94-97: ⚠️ Potential issue | 🟡 Minor

Persist timeout output before returning from migrate_db.

On timeout, the function returns a log path but does not write timeout/partial output there, which can leave callers with no usable log context.

🛠️ Proposed fix
-    except subprocess.TimeoutExpired:
-        return f"migrate timed out after 600s. log: {log_path}"
+    except subprocess.TimeoutExpired as e:
+        partial = ((e.stdout or "") + (e.stderr or "")).strip()
+        log_path.write_text(
+            (partial + "\n") if partial else "(timed out with no output)\n",
+            encoding="utf-8",
+        )
+        return f"migrate timed out after 600s. log: {log_path}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp/server.py` around lines 94 - 97, When migrate_db catches
subprocess.TimeoutExpired it currently returns without persisting partial
output; update the except block to capture the exception (e.g., except
subprocess.TimeoutExpired as e), extract any partial output from e.stdout and
e.stderr, assemble a string that includes a clear timeout message plus the
partial output, write that string to log_path using log_path.write_text(...,
encoding="utf-8") (matching the existing newline handling), and then return the
timeout message; reference the migrate_db function, the
subprocess.TimeoutExpired exception, the log_path variable and e.stdout/e.stderr
(or r.stdout/r.stderr semantics) so the timeout log is persisted before
returning.
🧹 Nitpick comments (3)
.cursor/skills/sippy-dev-tests/SKILL.md (1)

15-15: Polish wording for the e2e caveat.

Consider replacing “requires nested Podman/Docker” with “requires Podman/Docker-in-Docker support” for clearer phrasing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.cursor/skills/sippy-dev-tests/SKILL.md at line 15, Update the caveat
wording for the run_e2e step: find the sentence in SKILL.md that reads “e2e
requires nested Podman/Docker and does not work inside the devcontainer — run it
on the host instead” (reference: the run_e2e bullet) and replace “requires
nested Podman/Docker” with “requires Podman/Docker-in-Docker support” so the
full caveat reads clearly that Podman/Docker-in-Docker support is needed and it
still does not work inside the devcontainer.
.devcontainer/README.md (1)

10-10: Make the devcontainer CLI install step platform-neutral.

brew install devcontainer is macOS-centric; adding an OS-agnostic pointer to official install methods would reduce onboarding friction.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.devcontainer/README.md at line 10, Replace the macOS-specific install
instruction "brew install devcontainer" with a platform-neutral pointer to the
official Dev Containers CLI installation docs (e.g., change the line under
"devcontainer CLI" to link to the official install page or add text like "See
the official Dev Containers CLI installation guide for platform-specific
instructions"). Update the entry that currently reads "devcontainer CLI: `brew
install devcontainer`" to reference the generic install doc URL and optionally
list OS-agnostic alternatives (package manager doc, installer, or GitHub
releases) so onboarding is not macOS-centric.
.devcontainer/devcontainer.json (1)

21-24: Avoid hardcoded DSN/Redis values in tracked config when --env-file is already used.

runArgs already loads .devcontainer/.env (Line 9), so keeping credentials/connection strings in containerEnv is redundant and can override local values unexpectedly.

💡 Proposed simplification
-  "containerEnv": {
-    "SIPPY_DATABASE_DSN": "postgresql://postgres:password@sippy-postgres:5432/postgres",
-    "REDIS_URL": "redis://sippy-redis:6379"
-  },
+  // SIPPY_DATABASE_DSN and REDIS_URL are provided via --env-file (.devcontainer/.env)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.devcontainer/devcontainer.json around lines 21 - 24, The containerEnv block
currently hardcodes SIPPY_DATABASE_DSN and REDIS_URL which will override values
loaded via runArgs (--env-file .devcontainer/.env); remove the
SIPPY_DATABASE_DSN and REDIS_URL entries from the containerEnv object so the
container uses the .devcontainer/.env provided by runArgs instead, leaving
containerEnv for only non-sensitive defaults if needed and keeping runArgs and
.devcontainer/.env as the single source of truth.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.devcontainer/init-services.sh:
- Around line 6-19: The host port mappings expose Postgres and Redis on all
interfaces; update the podman run commands for sippy-postgres and sippy-redis to
bind to loopback only by changing the -p flags to include 127.0.0.1 (e.g.,
replace -p 5432:5432 and -p 6379:6379 with -p 127.0.0.1:5432:5432 and -p
127.0.0.1:6379:6379 respectively) so the containers remain reachable from
localhost but not from external hosts.

In `@mcp/server.py`:
- Around line 275-276: The fallback in sippy_ng_start currently returns any PID
that matches "react-scripts/scripts/start.js" via _pgrep_pids, which can include
processes from other repos; update the logic in _pgrep_pids (or immediately
after its call in sippy_ng_start) to filter returned PIDs by verifying each
PID's working directory or command path matches this repository (e.g., compare
/proc/<pid>/cwd or ps/pwdx output to the repo root or os.getcwd()). Only include
PIDs whose cwd/command path equals the project path so the function returns only
local CRA processes.
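
The cwd-based filtering that prompt describes can be sketched as follows (a minimal illustration, not the actual `mcp/server.py` code; `filter_pids_by_cwd` is a hypothetical helper name, and the approach assumes a Linux `/proc` filesystem):

```python
import os
from pathlib import Path

def filter_pids_by_cwd(pids, repo_root):
    """Keep only PIDs whose working directory lives inside repo_root.

    Reads /proc/<pid>/cwd (Linux-only); PIDs that cannot be inspected
    (already exited, or owned by another user) are silently dropped.
    """
    root = Path(repo_root).resolve()
    local = []
    for pid in pids:
        try:
            cwd = Path(os.readlink(f"/proc/{pid}/cwd")).resolve()
        except OSError:
            continue  # process is gone or not inspectable
        if cwd == root or root in cwd.parents:
            local.append(pid)
    return local
```

A caller like `sippy_ng_start` would then apply this to the `_pgrep_pids` result so only CRA processes rooted in this checkout are reported.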

---

Duplicate comments:
In @.devcontainer/Dockerfile:
- Line 52: The Dockerfile RUN line that executes the remote installer (the
command starting with "RUN curl -fsSL https://claude.ai/install.sh | sh") must
be made deterministic by pinning the installer channel or version; change the
curl URL to include an explicit channel or version (for example append a query
or path such as ?channel=stable or /v1.2.3/install.sh) or fetch a specific
release artifact and verify it (e.g., download
https://claude.ai/install.sh?channel=stable or a fixed-version URL and
optionally validate a checksum) so builds no longer depend on whatever the
endpoint returns at build time.

In `@mcp/server.py`:
- Around line 94-97: When migrate_db catches subprocess.TimeoutExpired it
currently returns without persisting partial output; update the except block to
capture the exception (e.g., except subprocess.TimeoutExpired as e), extract any
partial output from e.stdout and e.stderr, assemble a string that includes a
clear timeout message plus the partial output, write that string to log_path
using log_path.write_text(..., encoding="utf-8") (matching the existing newline
handling), and then return the timeout message; reference the migrate_db
function, the subprocess.TimeoutExpired exception, the log_path variable and
e.stdout/e.stderr (or r.stdout/r.stderr semantics) so the timeout log is
persisted before returning.

---

Nitpick comments:
In @.cursor/skills/sippy-dev-tests/SKILL.md:
- Line 15: Update the caveat wording for the run_e2e step: find the sentence in
SKILL.md that reads “e2e requires nested Podman/Docker and does not work inside
the devcontainer — run it on the host instead” (reference: the run_e2e bullet)
and replace “requires nested Podman/Docker” with “requires
Podman/Docker-in-Docker support” so the full caveat reads clearly that
Podman/Docker-in-Docker support is needed and it still does not work inside the
devcontainer.

In @.devcontainer/devcontainer.json:
- Around line 21-24: The containerEnv block currently hardcodes
SIPPY_DATABASE_DSN and REDIS_URL which will override values loaded via runArgs
(--env-file .devcontainer/.env); remove the SIPPY_DATABASE_DSN and REDIS_URL
entries from the containerEnv object so the container uses the
.devcontainer/.env provided by runArgs instead, leaving containerEnv for only
non-sensitive defaults if needed and keeping runArgs and .devcontainer/.env as
the single source of truth.

In @.devcontainer/README.md:
- Line 10: Replace the macOS-specific install instruction "brew install
devcontainer" with a platform-neutral pointer to the official Dev Containers CLI
installation docs (e.g., change the line under "devcontainer CLI" to link to the
official install page or add text like "See the official Dev Containers CLI
installation guide for platform-specific instructions"). Update the entry that
currently reads "devcontainer CLI: `brew install devcontainer`" to reference the
generic install doc URL and optionally list OS-agnostic alternatives (package
manager doc, installer, or GitHub releases) so onboarding is not macOS-centric.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: e7c604a5-9ad2-4b3d-a68c-0044908d8aa1

📥 Commits

Reviewing files that changed from the base of the PR and between b30049e and 6d03b75.

📒 Files selected for processing (19)
  • .cursor/mcp.json
  • .cursor/skills/sippy-dev-app/SKILL.md
  • .cursor/skills/sippy-dev-frontend/SKILL.md
  • .cursor/skills/sippy-dev-migrate/SKILL.md
  • .cursor/skills/sippy-dev-regression-cache/SKILL.md
  • .cursor/skills/sippy-dev-serve/SKILL.md
  • .cursor/skills/sippy-dev-tests/SKILL.md
  • .devcontainer/.env.example
  • .devcontainer/Dockerfile
  • .devcontainer/README.md
  • .devcontainer/devcontainer.json
  • .devcontainer/init-services.sh
  • .devcontainer/post-create.sh
  • .gitignore
  • AGENTS.md
  • DEVELOPMENT.md
  • mcp/README.md
  • mcp/requirements.txt
  • mcp/server.py

Comment thread .devcontainer/init-services.sh
Comment thread mcp/server.py Outdated
@openshift-ci
Contributor

openshift-ci Bot commented Apr 28, 2026

@coderabbitai[bot]: changing LGTM is restricted to collaborators


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
.devcontainer/Dockerfile (1)

52-54: ⚠️ Potential issue | 🟠 Major

CLAUDE_INSTALLER_VERSION still doesn't pin the installed release.

The build arg never reaches the installer, so rebuilds still consume whatever release the script serves that day. Please pass the chosen version/channel through so the image is reproducible.

Suggested fix
 ARG CLAUDE_INSTALLER_VERSION=latest
-RUN curl -fsSL https://claude.ai/install.sh | sh
+RUN curl -fsSL https://claude.ai/install.sh | sh -s -- "${CLAUDE_INSTALLER_VERSION}"
Does the official Claude Code install script support selecting a release via `sh -s -- <version>` or a channel like `stable`, and what is the documented syntax?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.devcontainer/Dockerfile around lines 52 - 54, The Dockerfile declares ARG
CLAUDE_INSTALLER_VERSION but never passes it to the installer, so make the build
reproducible by forwarding that argument into the install script invocation:
keep ARG CLAUDE_INSTALLER_VERSION (default "latest") and change the RUN that
pipes curl into sh to call the script with sh -s -- "$CLAUDE_INSTALLER_VERSION"
(or the documented installer syntax if different) so the chosen version/channel
is actually supplied to the installer.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@mcp/server.py`:
- Around line 50-74: _resolve_bigquery_creds currently forces both explicit and
env-derived credential paths through _repo_path, which blocks legitimate
external credential locations; change it so only paths meant to be repo-local
use _repo_path. Specifically, for the explicit parameter and the env keys
("SIPPY_BIGQUERY_CREDENTIALS_FILE", "GOOGLE_APPLICATION_CREDENTIALS") skip
calling _repo_path and treat them as normal filesystem paths (expand user/home
if needed) when checking .is_file(); leave _repo_path usage only for values that
must be confined to the repo. Ensure error messages still return the original
path and clear context when file not found or when explicit is malformed.

---

Duplicate comments:
In @.devcontainer/Dockerfile:
- Around line 52-54: The Dockerfile declares ARG CLAUDE_INSTALLER_VERSION but
never passes it to the installer, so make the build reproducible by forwarding
that argument into the install script invocation: keep ARG
CLAUDE_INSTALLER_VERSION (default "latest") and change the RUN that pipes curl
into sh to call the script with sh -s -- "$CLAUDE_INSTALLER_VERSION" (or the
documented installer syntax if different) so the chosen version/channel is
actually supplied to the installer.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 93cf168c-86ca-4c9a-83a0-3017e9002e98

📥 Commits

Reviewing files that changed from the base of the PR and between 6d03b75 and f63ce6d.

📒 Files selected for processing (5)
  • .cursor/skills/sippy-dev-tests/SKILL.md
  • .devcontainer/Dockerfile
  • .devcontainer/README.md
  • .devcontainer/init-services.sh
  • mcp/server.py
✅ Files skipped from review due to trivial changes (1)
  • .cursor/skills/sippy-dev-tests/SKILL.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • .devcontainer/README.md

Comment thread mcp/server.py
@openshift-ci
Contributor

openshift-ci Bot commented Apr 28, 2026

@coderabbitai[bot]: changing LGTM is restricted to collaborators


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
mcp/server.py (1)

52-69: ⚠️ Potential issue | 🟠 Major

Stop repo-confining BigQuery credential inputs.

_resolve_bigquery_creds() still routes explicit and env-derived credential paths through _repo_path(), so common locations like ~/.config/... or /var/run/secrets/... are rejected. That unnecessarily breaks regression_cache(), sippy_serve(), and run_e2e() unless the secret is copied into the repo.

Possible fix
 def _resolve_bigquery_creds(explicit: str | None) -> tuple[Path | None, str | None]:
     if explicit:
-        try:
-            p = _repo_path(explicit)
-        except ValueError as e:
-            return None, str(e)
+        p = Path(explicit).expanduser().resolve()
         if not p.is_file():
             return None, f"BigQuery credentials file not found: {p}"
         return p, None
     for key in ("SIPPY_BIGQUERY_CREDENTIALS_FILE", "GOOGLE_APPLICATION_CREDENTIALS"):
         v = os.environ.get(key)
         if v:
-            try:
-                p = _repo_path(v)
-            except ValueError as e:
-                return None, f"{key}: {e}"
+            p = Path(v).expanduser().resolve()
             if p.is_file():
                 return p, None
             return None, f"{key} is set but file not found: {p}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp/server.py` around lines 52 - 69, The function _resolve_bigquery_creds
wrongly forces both explicit and environment-sourced paths through _repo_path(),
rejecting valid external locations and breaking callers like regression_cache,
sippy_serve, and run_e2e; change it to accept raw filesystem paths for explicit
and env values instead of calling _repo_path() — for example, for explicit
(explicit param) convert to a Path, expanduser()/resolve(), check .is_file() and
return (Path, None) or (None, error) and do the same for environment keys
("SIPPY_BIGQUERY_CREDENTIALS_FILE","GOOGLE_APPLICATION_CREDENTIALS") so only
existence is validated (and keep the existing error messages format) while
leaving _repo_path() usage only where repo-confinement is actually required.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@mcp/server.py`:
- Around line 507-522: The make subprocess must be started in its own process
group and detached from MCP stdin so children are killed on timeout; replace the
subprocess.run(...) in this block with subprocess.Popen(...) using
stdin=subprocess.DEVNULL and platform-appropriate group creation
(preexec_fn=os.setsid on POSIX or creationflags=CREATE_NEW_PROCESS_GROUP on
Windows), wait with Popen.wait(timeout=tout) and on TimeoutExpired kill the
whole group (os.killpg(proc.pid, signal.SIGTERM)/signal.CTRL_BREAK_EVENT then
os.killpg(..., SIGKILL) as needed), close the process and then call
_tail_file(log_path, 80) and return the same timeout message (use the existing
symbols make_target, REPO_ROOT, run_env, tool_label, timeout_seconds, tout,
log_path and function _tail_file).

---

Duplicate comments:
In `@mcp/server.py`:
- Around line 52-69: The function _resolve_bigquery_creds wrongly forces both
explicit and environment-sourced paths through _repo_path(), rejecting valid
external locations and breaking callers like regression_cache, sippy_serve, and
run_e2e; change it to accept raw filesystem paths for explicit and env values
instead of calling _repo_path() — for example, for explicit (explicit param)
convert to a Path, expanduser()/resolve(), check .is_file() and return (Path,
None) or (None, error) and do the same for environment keys
("SIPPY_BIGQUERY_CREDENTIALS_FILE","GOOGLE_APPLICATION_CREDENTIALS") so only
existence is validated (and keep the existing error messages format) while
leaving _repo_path() usage only where repo-confinement is actually required.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: e35a55d9-2f19-4a84-8ca0-83c67a57f1c2

📥 Commits

Reviewing files that changed from the base of the PR and between f63ce6d and e286714.

📒 Files selected for processing (1)
  • mcp/server.py

Comment thread mcp/server.py Outdated
@openshift-ci
Contributor

openshift-ci Bot commented Apr 28, 2026

@coderabbitai[bot]: changing LGTM is restricted to collaborators

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e

@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e

@stbenjam
Member

rewriting the e2e script to be used without podman is probably the proper approach

This makes it difficult for my goal of making `make e2e` fast locally. You need Postgres and Redis, and they really should run in disposable containers for quick iteration.

Comment thread .cursor/skills/sippy-dev-app/SKILL.md Outdated
Member


Hm, this is only available to Cursor. If we merge #3472 and you use .apm/, we can make the rules available to everyone, including Claude Code.

Comment thread mcp/README.md
Member

@stbenjam stbenjam Apr 29, 2026


I am skeptical about the value of an MCP server over just telling the coding assistants how to do this, but I'm willing to try it.

Member Author


Some of the MCP commands are overly simplistic. I thought they might get more complex as we went on and supported other workflows. The real goal, though, was to have a single place to define the skills that many AI platforms would use. I think your APM work satisfies that, and we can drop the simple ones.

Comment thread .devcontainer/devcontainer.json
@openshift-merge-bot openshift-merge-bot Bot added the ready-for-human-review Indicates a PR has been reviewed by automated tools and is ready for human review label Apr 29, 2026
@smg247
Member Author

smg247 commented Apr 29, 2026

/hold cancel

@openshift-ci
Contributor

openshift-ci Bot commented Apr 29, 2026

@coderabbitai[bot]: changing LGTM is restricted to collaborators

Details

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-ci openshift-ci Bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 29, 2026
@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e

Member

@sosiouxme sosiouxme left a comment


Just passing on the highlights from /code-review:pr:

- Zero test coverage for all new source code                                                          
- No test framework infrastructure for Python in the repository

Perhaps out of the scope of this PR :)

3. DRY Compliance

High Priority Duplication

1. Server credentials resolution (repeated 3 times: lines 98-103, 291-296, 511-513)

   # This pattern appears in regression_cache, sippy_serve, run_e2e:
   creds_path, err = _resolve_bigquery_creds(bigquery_credentials_file)
   if err:
       return err
   dsn = database_dsn or _default_database_dsn()
   redis = redis_url or os.environ.get("REDIS_URL", "redis://localhost:6379")

   Impact: changes to credential resolution require 3 edits
   Recommendation: extract to a _resolve_server_config() helper

2. Process startup validation (repeated 2 times: lines 367-375, 430-437)
   - Identical post-spawn validation in sippy_serve and sippy_ng_start
   - Magic numbers: 0.75 (startup delay), 4000 (tail length)

   Recommendation: extract to a _validate_process_started() helper

(maybe we don't need to chase individual items like this, but can improve prompting to try to avoid redundancy / be aware of DRY?)

4. SOLID Principles

1. Single Responsibility Principle (SRP) - Medium
   - regression_cache() does 6 things: validation, path resolution, arg building, subprocess execution, timeout handling, log formatting
   - Similar pattern in sippy_serve() and sippy_ng_start()

2. Open/Closed Principle (OCP) - High
   - Adding a new tool requires copying 80+ lines and modifying validation logic
   - No abstraction for the common "dev command execution" pattern
   - All 4 tool functions follow nearly identical patterns

   Recommendation: create a base execution pattern or command class

3. Interface Segregation Principle (ISP) - Medium
   - regression_cache() has 8 parameters
   - Most callers will use defaults; hard to remember parameter order

4. Dependency Inversion Principle (DIP) - Medium
   - Hard-coded paths: "config/views.yaml", "./cmd/sippy"
   - Hard-coded binaries: "stdbuf", "pgrep", "lsof"
   - Cannot test with mock commands

Again, not sure these really need attention individually, but they probably ought to be part of our dev harness. The redundancy in the tool functions at least sounds worthy of attention here.
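The OCP recommendation above, a shared "dev command execution" abstraction, might look something like this. `DevCommand` is a hypothetical sketch and not part of mcp/server.py; new tools would be added as data instead of ~80-line copies of near-identical logic.

```python
import os
import subprocess
from dataclasses import dataclass, field

@dataclass
class DevCommand:
    """Declarative description of one dev tool invocation."""
    name: str
    argv: list
    timeout_s: float = 600.0
    extra_env: dict = field(default_factory=dict)

    def run(self) -> str:
        # Merge any tool-specific overrides onto the ambient environment.
        env = {**os.environ, **self.extra_env}
        try:
            proc = subprocess.run(self.argv, env=env, timeout=self.timeout_s,
                                  capture_output=True, text=True)
        except subprocess.TimeoutExpired:
            return f"{self.name}: timed out after {self.timeout_s}s"
        if proc.returncode != 0:
            return f"{self.name}: failed (rc={proc.returncode})\n{proc.stderr[-4000:]}"
        return f"{self.name}: ok\n{proc.stdout[-4000:]}"
```

Each MCP tool function would then reduce to building a `DevCommand` (e.g. `DevCommand("lint", ["make", "lint"], extra_env={"CI": "true"}).run()`), with validation and logging handled once in the shared class.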

Comment thread mcp/server.py Outdated
Comment thread .devcontainer/post-create.sh Outdated
Comment thread .devcontainer/init-services.sh
Comment thread .devcontainer/post-create.sh
Comment thread .devcontainer/README.md Outdated
Comment thread .devcontainer/README.md Outdated
@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Comment thread .devcontainer/README.md
@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e

Comment thread .devcontainer/post-create.sh Outdated
Comment thread .devcontainer/README.md Outdated
@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e

Comment thread .devcontainer/README.md
@stbenjam
Member

This overall worked really well for me, nice addition

@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e

@stbenjam
Member

stbenjam commented May 1, 2026

/lgtm

It'd be great to see a demo of this during a team meeting

@openshift-ci openshift-ci Bot added the lgtm Indicates that a PR is ready to be merged. label May 1, 2026
@openshift-ci
Contributor

openshift-ci Bot commented May 1, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: coderabbitai[bot], smg247, stbenjam

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e

@openshift-ci
Contributor

openshift-ci Bot commented May 1, 2026

@smg247: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-merge-bot openshift-merge-bot Bot merged commit eb97a46 into openshift:main May 1, 2026
9 checks passed

Labels

approved: Indicates a PR has been approved by an approver from all required OWNERS files.
jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
lgtm: Indicates that a PR is ready to be merged.
ready-for-human-review: Indicates a PR has been reviewed by automated tools and is ready for human review.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants