Start preparing the skeleton project #1
/test ?
📝 Walkthrough

Adds a Click-driven CI entrypoint, a Python Topsail launcher and container images, a skeleton project and README, and comprehensive Jump-CI tooling (Python modules, Ansible roles, templates, configs) for cluster locking, SSH tunnelling, preparation, multi-run test orchestration, artifact retrieval, and related launcher/config files.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant RunCI as run_ci (CLI)
    participant FS as Filesystem
    participant Script as Project CI Script
    participant Project as Project Dir/Env
    User->>RunCI: run_ci <project> <operation> [args]
    RunCI->>FS: find project dir under TOPSAIL_HOME/projects
    FS-->>RunCI: project_dir
    RunCI->>FS: locate ci script (ci.py or operation.py)
    FS-->>RunCI: ci_script_path
    RunCI->>Script: run "--help" to enumerate commands
    Script-->>RunCI: help output
    RunCI->>RunCI: parse_cli_help(help output)
    RunCI->>Script: execute chosen operation (or show dry-run)
    Script->>Project: perform prepare/test/cleanup steps
    Project-->>Script: artifacts / status
    Script-->>RunCI: exit code / result
    RunCI-->>User: report result
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
/test jump-ci
Actionable comments posted: 6
🧹 Nitpick comments (11)
.gitignore (2)
103-103: Consider whether `.env.*` is too broad.

The pattern `.env.*` will ignore all files starting with `.env.`, including common template files like `.env.example` or `.env.template` that teams often commit to document required environment variables. If you intend to commit such templates, consider being more specific (e.g., explicitly listing `.env.local`, `.env.development`, etc., or adding an exception pattern like `!.env.example`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.gitignore at line 103, The .gitignore entry ".env.*" is too broad and will also ignore committed template files like .env.example; update the rule to either list specific environment variants you want ignored (e.g., .env.local, .env.development) or add an explicit negation for templates you want tracked (e.g., add an exception like "!.env.example" or "!.env.template") so that ".env.*" no longer hides committed examples; adjust the .env.* line and/or add the negation lines accordingly.
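A minimal sketch of the exception-pattern option described above (`.env.example` as the tracked template is an assumption; substitute whatever templates the repo actually commits):

```gitignore
# ignore real environment files...
.env.*
# ...but keep committed templates visible to git
!.env.example
```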
141-141: Duplicate ignore pattern.

The pattern `*.swp` appears on both line 141 and line 146. Consider removing one occurrence to avoid redundancy.

♻️ Proposed fix to remove duplicate

```diff
 # Artifact storage (project specific)
 artifacts/
 *.tmp
 *.swp

 # IDE
 .vscode/
 .idea/
-*.swp
 *.swo
```

Also applies to: 146-146
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.gitignore at line 141, Remove the duplicate ignore pattern '*.swp' from the .gitignore so it only appears once (remove either the entry at line 141 or the one at line 146); ensure the file still ends with a single '*.swp' entry and no extra blank duplicate lines, and scan for any other repeated patterns to avoid redundancy.

projects/skeleton/orchestration/ci.py (1)
40-40: Use explicit `Optional` type annotation per PEP 484.

The `description` parameter has an implicit `Optional` type (`= None` without type hint). Use `str | None` or `Optional[str]` for clarity.

♻️ Proposed fix

```diff
-    def execute_command(self, command: str, description: str = None) -> bool:
+    def execute_command(self, command: str, description: str | None = None) -> bool:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/skeleton/orchestration/ci.py` at line 40, The parameter description in execute_command has an implicit Optional because of "= None"; update its type annotation to be explicit (use description: Optional[str] or description: str | None) and add the corresponding import (from typing import Optional) if you choose Optional[str]; ensure the function signature in def execute_command(self, command: str, description: Optional[str] = None) (or with str | None) is updated consistently across any references or stubs for execute_command.

pyproject.toml (2)
15-22: Python classifiers don't include 3.13/3.14, but Containerfile uses Python 3.14.

The classifiers list Python 3.11 and 3.12, but the Containerfile installs Python 3.14. Consider adding the appropriate classifiers for consistency and PyPI metadata accuracy.

♻️ Add missing classifiers

```diff
     "Programming Language :: Python :: 3",
     "Programming Language :: Python :: 3.11",
     "Programming Language :: Python :: 3.12",
+    "Programming Language :: Python :: 3.13",
+    "Programming Language :: Python :: 3.14",
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyproject.toml` around lines 15 - 22, The pyproject.toml classifiers list is missing entries for Python 3.13 and 3.14 which makes package metadata inconsistent with the Containerfile; update the classifiers array (the "classifiers" key) to include "Programming Language :: Python :: 3.13" and "Programming Language :: Python :: 3.14" so the project metadata matches the runtime used in the Containerfile and PyPI consumers see correct supported versions.
36-46: Mypy is configured but not listed in dev dependencies.

Lines 103-107 configure `[tool.mypy]`, but mypy is not included in the `dev` optional dependencies. Add it to ensure developers can run type checking.

♻️ Add mypy to dev dependencies

```diff
 [project.optional-dependencies]
 dev = [
     "pytest>=7.4.0",
     "pytest-asyncio>=0.21.1",
     "black>=23.7.0",
     "ruff>=0.0.287",
+    "mypy>=1.5.0",
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyproject.toml` around lines 36 - 46, The project config enables type-checking under [tool.mypy] but mypy is missing from the dev optional dependencies; add mypy (e.g. "mypy>=1.x" or a specific compatible version) to the [project.optional-dependencies] dev list so contributors can install and run the configured type checks — update the dev array where "pytest", "black", "ruff" are declared to include "mypy" and ensure the version spec matches your [tool.mypy] expectations.

projects/core/ci_entrypoint/run_ci.py (3)
187-204: Silent exception swallowing hides debugging information.

The bare `except Exception: pass` pattern silently ignores all errors when getting CI operations. Consider logging the error in verbose mode for debuggability.

♻️ Proposed improvement

```diff
-def get_ci_operations(ci_script_path: Path, project_dir: Path) -> List[str]:
+def get_ci_operations(ci_script_path: Path, project_dir: Path, verbose: bool = False) -> List[str]:
     """Get available operations from CI script."""
     try:
         result = subprocess.run(
             [sys.executable, str(ci_script_path), "--help"],
             cwd=project_dir,
             capture_output=True,
             text=True,
             timeout=10
         )
         if result.returncode == 0:
             return parse_cli_help(result.stdout)
-    except Exception:
-        pass
+    except Exception as e:
+        if verbose:
+            click.echo(f"⚠️ Failed to get CI operations: {e}", err=True)
     return []
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 187 - 204, The get_ci_operations function currently swallows all exceptions with a bare "except Exception: pass", losing useful debugging info; change that to "except Exception as e:" and log the error (e.g., using logging.exception or logger.debug/logger.error) including the exception details and context (ci_script_path and project_dir) before returning [] so behavior is unchanged but failures are visible; locate the try/except inside get_ci_operations to implement this change.
330-337: Parameter `help` shadows Python builtin.

Using `help` as a parameter name shadows the built-in `help()` function. Consider renaming to `show_help`.

♻️ Proposed fix

```diff
 @click.option('--verbose', '-v', is_flag=True, help='Enable verbose output')
 @click.option('--dry-run', is_flag=True, help='Show what would be executed without running it')
-@click.option('--help', '-h', is_flag=True, help='Show help message')
-def main(project, operation, args, verbose, dry_run, help):
+@click.option('--show-help', '-h', 'show_help', is_flag=True, help='Show help message')
+def main(project, operation, args, verbose, dry_run, show_help):
     ...
-    if help:
+    if show_help:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 330 - 337, The CLI command function main currently uses the parameter name help which shadows the Python builtin; rename the click option parameter from help to show_help by changing the `@click.option`('--help', '-h', is_flag=True, help='Show help message') decorated parameter and the function signature def main(project, operation, args, verbose, dry_run, help): to def main(project, operation, args, verbose, dry_run, show_help): then update all uses inside the function (references to help) to show_help and update any tests or call sites that reference the parameter name; keep the CLI flag strings unchanged so the CLI behavior remains the same.
109-111: Remove extraneous `f` prefix from strings without placeholders.

Several f-strings contain no placeholders. Remove the `f` prefix for cleaner code.

♻️ Proposed fix (sample)

```diff
-    click.echo(f"  run <project> <operation>  # Execute project operation")
-    click.echo(f"  run projects               # List projects explicitly")
+    click.echo("  run <project> <operation>  # Execute project operation")
+    click.echo("  run projects               # List projects explicitly")
```

Apply similar changes to lines 241, 256, 260, 281, 287, 288.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 109 - 111, Several click.echo calls use f-strings with no placeholders; remove the unnecessary f prefix and make them plain string literals. Locate the click.echo(...) calls shown (e.g., click.echo(f" run <project> <operation> # Execute project operation") and click.echo(f" run projects # List projects explicitly")) and change them to click.echo("..."); also search run_ci.py for other f-strings without braces (the other occurrences you flagged) and convert them to normal strings to keep formatting consistent.

projects/core/image/Containerfile.lightweight (1)
3-3: Replace deprecated `MAINTAINER` with `LABEL`.

The `MAINTAINER` instruction is deprecated. Use a `LABEL` instead for maintainer information.

♻️ Proposed fix

```diff
-MAINTAINER Perf&Scale PSAP Team <team-psap@redhat.com>
+LABEL maintainer="Perf&Scale PSAP Team <team-psap@redhat.com>"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/image/Containerfile.lightweight` at line 3, Replace the deprecated MAINTAINER instruction with a LABEL key/value pair: remove the MAINTAINER line and add a LABEL entry such as LABEL maintainer="Perf&Scale PSAP Team <team-psap@redhat.com>" so the maintainer metadata is stored using LABEL instead of MAINTAINER; update the Containerfile.lightweight where the MAINTAINER entry appears (replace the MAINTAINER token with the LABEL token and the maintainer value).

projects/core/image/Containerfile (1)
38-43: Consider pinning the OpenShift CLI version for reproducible builds.

The URL uses `stable`, which can change over time, potentially breaking reproducibility. Consider pinning to a specific version or documenting the expected version.

♻️ Example with pinned version

```diff
 # Install OpenShift CLI (oc)
-RUN curl -LO "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz" && \
+# Pin to specific version for reproducibility (update as needed)
+ARG OC_VERSION=4.14.0
+RUN curl -LO "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${OC_VERSION}/openshift-client-linux.tar.gz" && \
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/image/Containerfile` around lines 38 - 43, The Dockerfile RUN that downloads the OpenShift CLI from "https://.../stable/openshift-client-linux.tar.gz" is not pinned and breaks reproducible builds; update the RUN step that fetches and installs oc (openshift-client-linux.tar.gz and oc binary) to use a fixed versioned URL (replace "stable" with a specific release tag or version variable), document the chosen version near the command (or expose it as a build ARG like OPENSHIFT_CLI_VERSION), and ensure the checksum or signature of the tarball is verified after download before extracting to guarantee integrity.

projects/skeleton/README.md (1)
17-24: Add language specifier to fenced code block.

Static analysis flags this code block as missing a language specifier. Use `text` or `plaintext` for directory tree structures.

♻️ Proposed fix

````diff
-```
+```text
 skeleton/
 ├── ci.py           # Main CI script with Click-based CLI
 ├── config.yaml     # Project configuration
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/skeleton/README.md` around lines 17 - 24, Update the fenced code block in README.md that shows the project directory tree so it includes a language specifier (use "text" or "plaintext"); locate the triple-backtick block containing the skeleton/ tree and change the opening fence to ```text to satisfy the linter and preserve the existing tree content.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@projects/core/ci_entrypoint/run_ci.py`:
- Around line 49-70: find_ci_script currently only checks project root (ci.py
and {operation}.py) so CI scripts in subdirectories (e.g., orchestration/ci.py)
are missed; update find_ci_script to also search common subdirectories (e.g.,
"orchestration", "scripts", "ci") and/or perform a shallow recursive search for
files named "ci.py" or f"{operation}.py" and return the first match. Modify the
function find_ci_script to after the root checks iterate those subdir names and
check project_dir / subdir / "<name>", and if still not found use Path.rglob or
an os.walk limited to a small depth to find matching filenames, returning the
Path when found or None otherwise.
In `@projects/core/image/Containerfile`:
- Around line 69-73: The RUN that changes ownership/permissions (the commands
chgrp -R 0 "$HOME" && chmod -R g=u "$HOME") is placed after the USER 1001
directive and will fail because a non-root user can't chgrp/chmod root-owned
files; move that RUN block to execute before the USER 1001 line so the
permission changes run as root, then switch to USER 1001 afterward.
- Around line 54-59: The Dockerfile copies only pyproject.toml but later RUN
tries to cp pyproject-lightweight.toml to pyproject.toml, causing a missing-file
build error; fix by ensuring pyproject-lightweight.toml is added to the image
(add a COPY pyproject-lightweight.toml ./ before the RUN) or change the RUN to
use the already-copied pyproject.toml (remove the cp step), updating the
COPY/RUN sequence around the existing COPY pyproject.toml ./ and RUN cp
pyproject-lightweight.toml pyproject.toml && uv lock && uv pip install --system
-r pyproject.toml so the referenced file actually exists in the container.
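Taken together, the two Containerfile fixes above could look like this sketch (the file names and the `uv` invocation come from the comments; the surrounding instructions are placeholders, not the image's real content):

```dockerfile
# Make sure both pyproject files actually exist in the build context
COPY pyproject.toml pyproject-lightweight.toml ./
RUN cp pyproject-lightweight.toml pyproject.toml && \
    uv lock && \
    uv pip install --system -r pyproject.toml

# Permission fixes must run while the build user is still root...
RUN chgrp -R 0 "$HOME" && \
    chmod -R g=u "$HOME"

# ...and only then drop to the non-root user
USER 1001
```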
In `@projects/core/image/Containerfile.lightweight`:
- Around line 29-35: The Dockerfile clones the repo to $HOME (/app) but the
generated run script at /usr/local/bin/run points to /opt/topsail/src so the
runner path is inconsistent; fix by making the clone target and run script agree
— either change the git clone command (the RUN git clone ... $HOME) to clone
into /opt/topsail/src (or /opt/topsail) or update the run script (the echo that
writes /usr/local/bin/run) to reference /app (or $HOME) instead of
/opt/topsail/src so the testing runner path is correct.
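One way to make the two paths agree, following the first option in the comment. The repository URL and the runner entrypoint below are placeholders, not the image's real values; only the `/opt/topsail/src` and `/usr/local/bin/run` paths come from the comment:

```dockerfile
# Clone to the path the run script expects, instead of $HOME
RUN git clone https://github.com/openshift-psap/topsail-ng /opt/topsail/src

# The generated runner now points at the same location it was cloned to
RUN printf '#!/bin/sh\nexec python3 /opt/topsail/src/run_ci.py "$@"\n' > /usr/local/bin/run && \
    chmod +x /usr/local/bin/run
```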
In `@projects/skeleton/orchestration/ci.py`:
- Around line 16-21: The code uses types.SimpleNamespace (see usage of
types.SimpleNamespace around line 196) but the types module isn't imported; add
"import types" to the module imports at the top of
projects/skeleton/orchestration/ci.py so types.SimpleNamespace resolves and
avoids the NameError at runtime.
- Around line 194-206: The CLI context stores the test runner under the wrong
attribute (ctx.objrunner) causing AttributeError when commands expect
ctx.obj.runner; change the assignment in the cli function to set ctx.obj.runner
= SkeletonTestRunner(verbose) (or consistently use ctx.objrunner everywhere), so
that functions like prepare (which reads ctx.obj.runner) and other commands can
access the runner; update the cli function where
ctx.ensure_object(types.SimpleNamespace) and the runner is assigned, and ensure
other command handlers (prepare and any that reference ctx.obj.runner) remain
unchanged.
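Both `ci.py` fixes above can be seen in one stripped-down sketch. The Click decorators are omitted and `SkeletonTestRunner` is reduced to a stub, so only the `import types` and the `ctx.obj.runner` attribute wiring are meaningful here:

```python
import types

class SkeletonTestRunner:
    """Stand-in for the real runner class; only the constructor matters here."""
    def __init__(self, verbose: bool = False):
        self.verbose = verbose

def build_context_obj(verbose: bool = False) -> types.SimpleNamespace:
    # Equivalent of ctx.ensure_object(types.SimpleNamespace) inside the cli group
    obj = types.SimpleNamespace()
    # The attribute must be set as obj.runner (dot access), not an objrunner typo,
    # so command handlers that read ctx.obj.runner find it
    obj.runner = SkeletonTestRunner(verbose)
    return obj
```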
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f67e2a9b-8c54-4bc6-aee9-5866151f33ac
📒 Files selected for processing (9)
- .gitignore
- bin/run_ci
- projects/core/ci_entrypoint/run_ci.py
- projects/core/image/Containerfile
- projects/core/image/Containerfile.lightweight
- projects/core/image/pyproject-lightweight.toml
- projects/skeleton/README.md
- projects/skeleton/orchestration/ci.py
- pyproject.toml
Force-pushed from 1aad820 to 8933c12
Actionable comments posted: 8
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
♻️ Duplicate comments (1)
projects/core/ci_entrypoint/run_ci.py (1)
49-70: ⚠️ Potential issue | 🟠 Major

Project script discovery is still limited to top-level `.py` files.

Both lookup paths only inspect the project root. Projects that keep their CI entrypoint under `orchestration/`, `ci/`, or `scripts/` will still be shown as missing and won't execute.

Also applies to: 134-139
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 49 - 70, The CI script discovery in find_ci_script currently only checks top-level ci.py and {operation}.py; update find_ci_script (and the similar lookup at lines ~134-139) to also search common subdirectories by iterating over ["orchestration", "ci", "scripts"] and checking each for "ci.py" and f"{operation}.py" (preserving existing root checks first), returning the first match; ensure you reference the function find_ci_script and the alternative lookup block so both places perform the same extended search logic.
🟠 Major comments (19)
projects/skeleton/README.md-17-24 (1)
17-24: ⚠️ Potential issue | 🟠 Major

README still describes a different skeleton entrypoint.

The tree points readers to `skeleton/ci.py`, and the quick-start/customization steps use `validate`, `full`, and `--dry-run`, but `projects/skeleton/orchestration/ci.py` in this PR only exposes `prepare`, `test`, `cleanup`, and `--verbose`. Following the guide as written will hit command-not-found paths immediately.

Also applies to: 28-63, 76-77
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/skeleton/README.md` around lines 17 - 24, The README's tree and quick-start sections are incorrect: they reference skeleton/ci.py and commands validate/full/--dry-run which don't exist; update the README.md to point to projects/skeleton/orchestration/ci.py (or the correct entrypoint) and replace examples/usage to use the actual CLI commands exposed by that module (prepare, test, cleanup) and flag (--verbose), ensuring all examples, command references, and any CLI-related steps (lines around 28-63 and 76-77) match the real function names and flags in projects/skeleton/orchestration/ci.py.

projects/jump_ci/testing/config.yaml-75-76 (1)
75-76: ⚠️ Potential issue | 🟠 Major

Host key verification is disabled for every SSH hop.

`StrictHostKeyChecking=no` together with `UserKnownHostsFile=/dev/null` makes the bastion/jump flow trust whichever host answers first. That's a significant downgrade for a path that carries cluster-lock and secret access; prefer shipping a pinned `known_hosts` file with the secret bundle instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/config.yaml` around lines 75 - 76, The SSH config currently disables host-key verification via the flags StrictHostKeyChecking=no and UserKnownHostsFile=/dev/null; replace these by shipping a pinned known_hosts file with the secret bundle and point SSH at it (e.g. remove or change -o StrictHostKeyChecking=no and -o UserKnownHostsFile=/dev/null, set -o StrictHostKeyChecking=yes and -o UserKnownHostsFile=/path/to/pinned_known_hosts). Ensure the pinned known_hosts file is added to the secret bundle and mounted into the runtime so the config (config.yaml) or the SSH invocation can reference that path; keep references to the original symbols StrictHostKeyChecking and UserKnownHostsFile when updating the config so reviewers can verify the change.

projects/jump_ci/testing/prepare_jump_ci.py-112-127 (1)
112-127: ⚠️ Potential issue | 🟠 Major

Local `prepare()` can't take the documented default path.

Outside OpenShift CI this branch exits unless `pr_number` is set, even though the docstring says `pr_number` is optional and main should be used otherwise. That makes plain main-branch/bootstrap runs impossible without fabricating PR metadata.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/prepare_jump_ci.py` around lines 112 - 127, The prepare() flow currently treats pr_number as required when either pr_number or git_ref is present and exits for non-CI runs; change the logic so pr_number is optional and main-branch runs are allowed: modify the branch around the any([pr_number, git_ref]) check in prepare() to not require pr_number when git_ref is provided, and when neither pr_number nor git_ref is set (outside CI) set git_ref="main" or simply skip adding PR-specific keys to prepare_topsail_args instead of calling SystemExit; update the validation so prepare_topsail_args is only populated with repo_owner/repo_name/pr_number when pr_number exists and always accept a git_ref (defaulting to "main") for main-branch bootstrap runs.

projects/skeleton/orchestration/ci.py-32-34 (1)
32-34: ⚠️ Potential issue | 🟠 Major

Artifacts are created under `projects/`, not the repo root.

From `projects/skeleton/orchestration/ci.py`, `parent.parent.parent / "artifacts"` resolves to `<repo>/projects/artifacts`. That diverges from the top-level `artifacts/` path documented in the README.

Proposed fix

```diff
-        self.artifacts_dir = Path(__file__).parent.parent.parent / "artifacts"
+        self.artifacts_dir = Path(__file__).resolve().parents[3] / "artifacts"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/skeleton/orchestration/ci.py` around lines 32 - 34, The artifacts directory is being created under the projects folder because self.artifacts_dir is set with Path(__file__).parent.parent.parent; change the calculation to point to the repository root instead (e.g. use Path(__file__).resolve().parents[3] or parents[4] as appropriate) and assign self.artifacts_dir = repo_root / "artifacts" followed by mkdir(parents=True, exist_ok=True); update the assignment in projects/skeleton/orchestration/ci.py where self.artifacts_dir is defined so it uses the repo-root path (refer to the self.artifacts_dir symbol).

projects/jump_ci/toolbox/jump_ci_prepare_topsail/tasks/main.yml-19-27 (1)
19-27: ⚠️ Potential issue | 🟠 Major

Artifact index is always computed as `000`.

The wildcard is quoted in `ls "{{ artifact_dir }}/0*" -d`, so the shell never expands it. That keeps `next_artifact_dir_count.stdout` at `0`, and every run reuses the same `000__jump_ci__prepare_topsail` directory.

Proposed fix

```diff
 - name: Compute the index of the next logs directory
   shell:
-    set -o pipefail;
-    printf "%03d" $(ls "{{ artifact_dir }}/0*" -d 2>/dev/null | wc -l)
+    set -o pipefail;
+    count=$(find "{{ artifact_dir }}" -maxdepth 1 -mindepth 1 -type d -name '0*' | wc -l)
+    printf "%03d" "$count"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_prepare_topsail/tasks/main.yml` around lines 19 - 27, The shell task computing the next index quotes the wildcard so the shell doesn't expand "{{ artifact_dir }}/0*", causing next_artifact_dir_count to always be 0; update the "Compute the index of the next logs directory" shell command to let the shell expand the glob (e.g., drop the quotes around the path/glob) so ls -d {{ artifact_dir }}/0* correctly lists matching dirs, and keep the redirection/printf logic unchanged to produce artifact_extra_logs_dir (the set_fact using next_artifact_dir_count.stdout) as before.

projects/skeleton/orchestration/ci.py-87-88 (1)
87-88: ⚠️ Potential issue | 🟠 Major

Config lookup is relative to `orchestration/`, not the project root.

According to the README's documented project structure, `config.yaml` should live at `projects/skeleton/config.yaml` (project root), but the code currently looks for it at `projects/skeleton/orchestration/config.yaml` (same directory as `ci.py`). This will always miss the file and emit the "using defaults" warning. Either move the file beside `ci.py` or correct the path resolution.

Proposed fix

```diff
-        config_file = Path(__file__).parent / "config.yaml"
+        config_file = Path(__file__).resolve().parent.parent / "config.yaml"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/skeleton/orchestration/ci.py` around lines 87 - 88, In ci.py the config lookup sets config_file using Path(__file__).parent which points at the orchestration directory; update the config_file resolution so it targets the project root (the parent of the orchestration directory) instead of the orchestration folder so projects/skeleton/config.yaml is found, and keep resolving paths (e.g., via Path.resolve()/parents) to avoid symlink issues; change the reference where config_file is defined and any dependent load logic to use that corrected path.

projects/jump_ci/testing/config.yaml-48-55 (1)
48-55: ⚠️ Potential issue | 🟠 Major

**Preset mappings still use the old repo slug.**

`apply_preset_by_jobname()` does an exact `JOB_NAME` lookup. Since this PR is for `openshift-psap/topsail-ng`, these `pull-ci-openshift-psap-topsail-*` keys look stale; if CI emits `...-topsail-ng-...`, no preset will apply and the run falls back to null `cluster.name`/`project.name`. Please verify these keys against the generated `$JOB_NAME` values for this repo.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/config.yaml` around lines 48-55: the preset map job_name_to_preset uses the old repository slug (topsail), so apply_preset_by_jobname (which does an exact JOB_NAME lookup) will miss jobs emitted for topsail-ng. Update the keys in job_name_to_preset to match the actual JOB_NAME strings produced by CI for this repo (e.g., replace the pull-ci-openshift-psap-topsail-* keys with the corresponding pull-ci-openshift-psap-topsail-ng-* entries, or add both variants), then verify by comparing against the generated $JOB_NAME values from a CI run or its logs. Alternatively, if broader matching is desired, change apply_preset_by_jobname to perform a normalized/substring or regex match against the job_name_to_preset keys to avoid exact-slug brittleness.

projects/jump_ci/testing/test.py-153-166 (1)
153-166: ⚠️ Potential issue | 🟠 Major

**Build per-run overrides in `multi_run_args_dict` instead of mutating the shared base dict.**

Line 186 already expects the per-run additions to live in `multi_run_args_dict`, but Lines 165-166 write into `variables_overrides_dict` instead. If one iteration has more args than the next, stale `PR_POSITIONAL_ARG_*` keys bleed into later runs.

🐛 Proposed fix

```diff
-    for idx, multi_run_arg in enumerate(multi_run_args_lst):
-        variables_overrides_dict[f"PR_POSITIONAL_ARG_{next_pr_positional_arg_count+idx}"] = multi_run_arg
+    for idx, multi_run_arg in enumerate(multi_run_args_lst):
+        multi_run_args_dict[f"PR_POSITIONAL_ARG_{next_pr_positional_arg_count+idx}"] = multi_run_arg
```

Also applies to: 186-186
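The stale-key bleed is easy to reproduce in isolation. The sketch below uses hypothetical names (`base`, `run_once_buggy`, `run_once_fixed`), not the project's code; it only illustrates why per-run keys must live in their own dict:

```python
# Minimal sketch: per-run keys written into the shared base dict
# survive into later, shorter runs.
base = {"PR_POSITIONAL_ARG_0": "jump-ci"}

def run_once_buggy(base, args, start=1):
    # Writes per-run keys into the shared dict, so they persist.
    for idx, arg in enumerate(args):
        base[f"PR_POSITIONAL_ARG_{start + idx}"] = arg
    return dict(base)

first = run_once_buggy(base, ["a", "b", "c"])
second = run_once_buggy(base, ["x"])  # shorter arg list
# A stale key from the first run bleeds into the second:
assert second["PR_POSITIONAL_ARG_2"] == "b"

def run_once_fixed(base, args, start=1):
    # Per-run keys are built in their own dict and overlaid per run.
    per_run = {f"PR_POSITIONAL_ARG_{start + idx}": arg
               for idx, arg in enumerate(args)}
    return {**base, **per_run}

clean = run_once_fixed({"PR_POSITIONAL_ARG_0": "jump-ci"}, ["x"])
assert "PR_POSITIONAL_ARG_2" not in clean
```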
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/test.py` around lines 153-166: the loop that assigns per-run positional args mutates the shared variables_overrides_dict (using variables_overrides_dict[f"PR_POSITIONAL_ARG_{next_pr_positional_arg_count+idx}"] = ...), causing stale keys to bleed across runs. Instead, add those PR_POSITIONAL_ARG_* entries into multi_run_args_dict (built per run) using next_pr_positional_arg_count + idx for the keys, and then merge or overlay multi_run_args_dict onto variables_overrides_dict only for the current run (where the code expects the per-run additions), so that the shared base dict remains unchanged between iterations.

projects/jump_ci/testing/test.py-191-203 (1)
191-203: ⚠️ Potential issue | 🟠 Major

**Guard cleanup so it doesn't hide the real failure.**

The cleanup call in `finally` is unprotected. If it raises, the original test result is replaced and `retrieve_artifacts()` never runs, which is usually the point where the artifacts are most needed.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/test.py` around lines 191-203: the finally block calls tunnelling.run_with_ansible_ssh_conf(...) for cleanup unprotected, so if the call raises it masks the original failure and prevents retrieve_artifacts() from running. Change the finally block to run the cleanup inside its own try/except that logs any cleanup error but does not raise (or re-raises only after retrieve_artifacts() has been called), preserve the original exception/failed state (use the existing failed flag or capture the exception in a local variable), and ensure retrieve_artifacts() is always executed before re-raising any original exception. Reference tunnelling.run_with_ansible_ssh_conf, retrieve_artifacts(), the failed variable, and the cleanup command ("bash {cluster_lock_dir}/test/{command}/entrypoint.sh cleanup") when implementing this guard.

projects/jump_ci/toolbox/jump_ci.py-78-82 (1)
78-82: ⚠️ Potential issue | 🟠 Major

**Update the default repo/image names to `topsail-ng`.**

This PR is introducing `topsail-ng`, but the default checkout and image names here still point at `topsail`. Any caller that relies on defaults will prepare and test the wrong source tree.

🐛 Proposed fix

```diff
-    repo_name="topsail",
+    repo_name="topsail-ng",
@@
-    image_name="localhost/topsail",
+    image_name="localhost/topsail-ng",
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci.py` around lines 78-82: update the defaults to point at the new repo/image name. Change the repo_name value from "topsail" to "topsail-ng" and change image_name from "localhost/topsail" to "localhost/topsail-ng" (in the surrounding kwargs where repo_owner, repo_name, git_ref, image_name, and image_tag are set; look for the repo_name and image_name symbols in jump_ci.py).

projects/jump_ci/testing/test.py-34-49 (1)
34-49: ⚠️ Potential issue | 🟠 Major

**Don't log override values verbatim.**

This forwards arbitrary non-config overrides and then writes their values into CI logs. In practice that is where tokens, secret paths, and other sensitive job data tend to leak.

🛡️ Proposed fix

```diff
-        logging.info(f"NOT passing '{k}: {v}' to the new variables overrides")
+        logging.info("NOT passing '%s' to the new variables overrides", k)
@@
-        logging.info(f"Passing '{k}: {v}' to the new variables overrides")
+        logging.info("Passing '%s' to the new variables overrides", k)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/test.py` around lines 34-49: the loop that iterates variable_overrides_dict (the for k, v in variable_overrides_dict.items() block) logs override values verbatim, which can leak secrets. Change the logging calls around the new_variable_overrides assignment and the skip branch so they log only the key k and either a redacted/placeholder value (e.g., "<redacted>" or a masked string) or no value at all, and keep the existing behavior of copying into new_variable_overrides (new_variable_overrides[k] = v) unchanged so secrets are not emitted to the CI logs.

projects/jump_ci/testing/test.py-117-130 (1)
117-130: ⚠️ Potential issue | 🟠 Major

**Avoid reading `idx` when `test_args` is empty.**

If `project.args` resolves to `[]` or the caller passes an empty string, the loop on Lines 126-127 never runs and Line 130 raises `UnboundLocalError`.

🐛 Proposed fix

```diff
     variables_overrides_dict = dict(
         PR_POSITIONAL_ARGS=test_args,
         PR_POSITIONAL_ARG_0="jump-ci",
     )

-    for idx, arg in enumerate(test_args.split()):
-        variables_overrides_dict[f"PR_POSITIONAL_ARG_{idx+1}"] = arg
+    split_args = test_args.split()
+    for idx, arg in enumerate(split_args, start=1):
+        variables_overrides_dict[f"PR_POSITIONAL_ARG_{idx}"] = arg

     config.project.set_config("overrides", variables_overrides_dict)

-    next_pr_positional_arg_count = idx + 2
+    next_pr_positional_arg_count = len(split_args) + 1
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/test.py` around lines 117-130: the code assumes idx is always set by the for-loop over test_args.split(), which fails when test_args is empty. Fix by deriving an explicit args_list = test_args.split() if test_args else [] (or use (config_test_args or []).copy()), then iterate enumerate(args_list) to populate variables_overrides_dict and set next_pr_positional_arg_count = len(args_list) + 1. Update the references in this block (test_args, variables_overrides_dict, config.project.set_config, next_pr_positional_arg_count) so that idx is not used outside the loop.

projects/jump_ci/testing/test.py-23-32 (1)
23-32: ⚠️ Potential issue | 🟠 Major

**Keep `PR_POSITIONAL_ARGS` and the next slot calculation in sync.**

Line 24 overwrites the rewritten `PR_POSITIONAL_ARGS` with `PR_POSITIONAL_ARG_0`, and Line 32 derives the next slot from `nb_args_to_eat` instead of the rewritten arg count. That leaves the override metadata inconsistent and makes later appends start from the wrong index.

🐛 Proposed fix

```diff
-    new_args_str = new_variable_overrides[f"PR_POSITIONAL_ARGS"] = " ".join(new_args)
-    new_variable_overrides["PR_POSITIONAL_ARGS"] = variable_overrides_dict["PR_POSITIONAL_ARG_0"]
+    new_args_str = " ".join(new_args)
+    new_variable_overrides["PR_POSITIONAL_ARGS"] = new_args_str
     new_variable_overrides["PR_POSITIONAL_ARG_0"] = variable_overrides_dict["PR_POSITIONAL_ARG_0"]

-    idx = 0
-    for idx, value in enumerate(new_args):
-        new_variable_overrides[f"PR_POSITIONAL_ARG_{idx+1}"] = value
-    next_pr_positional_arg_count = idx + nb_args_to_eat
+    for idx, value in enumerate(new_args, start=1):
+        new_variable_overrides[f"PR_POSITIONAL_ARG_{idx}"] = value
+    next_pr_positional_arg_count = len(new_args) + 1
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/test.py` around lines 23-32: remove the accidental overwrite of PR_POSITIONAL_ARGS and compute the next slot from the rewritten arg count. Keep new_variable_overrides["PR_POSITIONAL_ARGS"] = new_args_str (do not assign it from variable_overrides_dict["PR_POSITIONAL_ARG_0"]), keep setting PR_POSITIONAL_ARG_0 from variable_overrides_dict only if that initial arg must be preserved, populate the subsequent slots via the existing for idx, value in enumerate(new_args) loop (assigning PR_POSITIONAL_ARG_{idx+1}), and derive next_pr_positional_arg_count from len(new_args) so that the next free index stays in sync with the actual rewritten PR_POSITIONAL_ARGS.

projects/core/ci_entrypoint/run_ci.py-212-233 (1)
212-233: ⚠️ Potential issue | 🟠 Major

**Don't strip indentation before parsing the `Commands:` block.**

Line 214 removes the leading spaces that Lines 222 and 226 rely on, so the first command entry immediately terminates the section and `get_ci_operations()` always returns an empty list.

🐛 Proposed fix

```diff
-    lines = help_output.split('\n')
-    for line in lines:
-        line = line.strip()
+    for raw_line in help_output.splitlines():
+        line = raw_line.rstrip()

         # Look for "Commands:" section
-        if line.lower().startswith('commands:'):
+        if line.strip().lower().startswith('commands:'):
             in_commands_section = True
             continue

         # Stop when we hit another section
-        if in_commands_section and line and not line.startswith(' '):
+        if in_commands_section and line and not raw_line.startswith(' '):
             break

         # Extract command names
-        if in_commands_section and line.startswith(' '):
+        if in_commands_section and raw_line.startswith(' '):
             # Format is typically: "  command_name  Description"
-            parts = line.split()
+            parts = raw_line.split()
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 212-233: the code calls line = line.strip(), which removes the indentation needed to detect command entries and causes get_ci_operations() to return an empty list. Fix by preserving each original line's indentation: remove or move the line.strip() mutation and instead use a separate stripped variable (e.g., stripped = line.strip()) to test for the "Commands:" header (stripped.lower().startswith('commands:')) and to check the empty/section-break conditions, while keeping the original line for the indentation checks that rely on line.startswith(' ') and for extracting the command names into operations.

projects/jump_ci/toolbox/jump_ci_prepare_step/templates/entrypoint.sh.j2-53-54 (1)
53-54: ⚠️ Potential issue | 🟠 Major

**Conditionally emit the secret mount and env pair.**

This template always renders the secret `--volume` and `--env` flags, but the role treats `jump_ci_prepare_step_secrets_path_env_key` as optional. If that key is empty, Podman gets an invalid `--env =...` argument and the step dies before the container starts. Wrapping these two lines in a Jinja `{% if jump_ci_prepare_step_secrets_path_env_key %}` block would keep no-secret steps working.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_prepare_step/templates/entrypoint.sh.j2` around lines 53-54: the template always renders the secret --volume and --env flags, even when jump_ci_prepare_step_secrets_path_env_key is empty. Wrap the two lines that emit the secret mount and environment pair in a Jinja conditional (e.g., {% if jump_ci_prepare_step_secrets_path_env_key %} ... {% endif %}) so that both the --volume and --env lines are only rendered when jump_ci_prepare_step_secrets_path_env_key is set, keeping no-secret steps from producing an invalid --env argument.

projects/jump_ci/toolbox/jump_ci_prepare_step/templates/entrypoint.sh.j2-9-9 (1)
9-9: ⚠️ Potential issue | 🟠 Major

**Don't splice templated values into `bash -c`.**

`jump_ci_prepare_step_project` and `jump_ci_prepare_step_step` are baked straight into a shell snippet and then executed with `bash -c`. Any whitespace or shell metacharacter in either value changes the command that runs inside the container. Use a fixed shell body and pass the rendered values as positional arguments to `bash -c` instead.

Also applies to: 62-62
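The difference between splicing a value into the `bash -c` body and passing it as a positional parameter can be shown standalone. This is a minimal Python sketch, not the template itself; the `project` value is made up to contain a space and a metacharacter:

```python
import subprocess

# A value containing whitespace and a shell metacharacter.
project = "demo project; echo PWNED"

# Unsafe: splicing the value into the command string lets the shell
# reinterpret it; the "; echo PWNED" part runs as a second command.
unsafe = subprocess.run(["bash", "-c", f"echo running {project}"],
                        capture_output=True, text=True)

# Safe: fixed shell body, value passed as a positional parameter.
# After `bash -c 'body'`, the next argument becomes $0 and the rest $1, $2...
safe = subprocess.run(["bash", "-c", 'echo "running $1"', "--", project],
                      capture_output=True, text=True)

print(unsafe.stdout)  # two lines: the echo output, then PWNED
print(safe.stdout)    # one line, the value kept verbatim
```

The same shape applies to the template: keep the `cmd` body fixed and append the rendered Jinja values as separate arguments after `--`.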
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_prepare_step/templates/entrypoint.sh.j2` at line 9: the current line builds a shell snippet with templated values interpolated directly into cmd (cmd="set -a; source /run/secrets/test_env; set +a; exec run {{ jump_ci_prepare_step_project }} test {{ jump_ci_prepare_step_step }}"), which allows whitespace or shell metacharacters in jump_ci_prepare_step_project or jump_ci_prepare_step_step to alter the executed command. Change cmd to use a fixed shell body that consumes positional parameters (e.g., cmd='set -a; source /run/secrets/test_env; set +a; exec run "$1" test "$2"') and then invoke bash -c with a -- separator and the rendered values appended as separate arguments (-- "{{ jump_ci_prepare_step_project }}" "{{ jump_ci_prepare_step_step }}") so that the templated values are treated as data, not code.

projects/core/launcher/topsail_launcher.py-149-166 (1)
149-166: ⚠️ Potential issue | 🟠 Major

**Preserve argv boundaries instead of rebuilding shell text.**

`_run_toolbox_command()` plus `enter`/`run`/`run_cmd` flatten cwd, env values, and CLI args into `bash -c` strings. That breaks common cases like repo paths with spaces and arguments such as `--label "a b"`, and it also lets shell metacharacters be reinterpreted instead of being passed through verbatim.

Also applies to: 361-371, 390-405
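The argv-boundary loss can be demonstrated without the launcher at all. A small standalone sketch (the `printf` invocation is illustrative, not the toolbox command):

```python
import shlex
import subprocess

# One argument that contains a space, like --label "a b".
argv = ["printf", ":%s", "--label", "a b"]

# Passing the list keeps "a b" as one argument.
as_list = subprocess.run(argv, capture_output=True, text=True)

# Rebuilding a shell string by naive joining splits it into two words.
as_shell = subprocess.run(" ".join(argv), shell=True,
                          capture_output=True, text=True)

# shlex.join() quotes each element, so the boundaries survive even
# when a shell string is unavoidable.
as_quoted = subprocess.run(shlex.join(argv), shell=True,
                           capture_output=True, text=True)

print(as_list.stdout)    # :--label:a b
print(as_shell.stdout)   # :--label:a:b
print(as_quoted.stdout)  # :--label:a b
```

Passing `cmd` as a list (no `shell=True`) is the direct fix; `shlex.join()`/`shlex.quote()` is the fallback when a shell string truly cannot be avoided.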
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/launcher/topsail_launcher.py` around lines 149-166: _run_toolbox_command currently concatenates env, cwd, and CLI into a single "bash -c" string, which loses argv boundaries. Change it to stop building a shell string and instead construct cmd as a list (no shell=True) so that arguments are passed verbatim: do not build env_setup = '; '.join(...), instead pass the environment via the subprocess env parameter or as separate KEY=VALUE argv entries, pass the working_dir and the tool command as distinct argv elements (preserving the original command's args instead of joining them), and remove the single "bash -c" wrapper. Apply the same fix pattern to the other occurrences noted (the blocks around lines 361-371 and 390-405) so that all toolbox invocations preserve argument boundaries.

projects/core/launcher/topsail_launcher.py-379-383 (1)
379-383: ⚠️ Potential issue | 🟠 Major

**`enter <command>` currently runs from the wrong directory.**

This second `working_dir` assignment overwrites the earlier default and sets `None` whenever `command` is provided. As written, `enter pytest` runs from the container's default directory instead of `topsail_home`.

Suggested fix

```diff
-    if not here and not command:
-        working_dir = launcher.config['topsail_home']
-    else:
-        working_dir = os.getcwd() if here else None
+    working_dir = os.getcwd() if here else launcher.config['topsail_home']
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/core/launcher/topsail_launcher.py` around lines 379-383: the current logic assigns working_dir twice, so the second branch ("working_dir = os.getcwd() if here else None") overrides the default and yields None whenever a command is provided, causing commands like "enter pytest" to run in the container's default directory instead of topsail_home. Fix by changing the second assignment so that it preserves the default topsail_home when a command is passed (e.g., only set working_dir to os.getcwd() when here is True, otherwise leave it as launcher.config['topsail_home']), updating the conditional around working_dir and ensuring the references to here, command, working_dir, and launcher.config['topsail_home'] are used consistently.

projects/jump_ci/toolbox/jump_ci_prepare_step/tasks/main.yml-56-60 (1)
56-60: ⚠️ Potential issue | 🟠 Major

**Write `variable_overrides.yaml` without a shell heredoc.**

The unquoted heredoc body is expanded by the remote shell before the file is written. A value like `$(...)`, backticks, or `$VAR` inside `jump_ci_prepare_step_variables_overrides_dict` will be executed/substituted instead of being persisted literally, and `>>` also makes partial reruns append duplicate documents.

Suggested fix

```diff
-- name: Populate the variables_overrides files
-  shell: |
-    cat <<EOF >> {{ step_artifact_dir }}/artifacts/variable_overrides.yaml
-    {{ (jump_ci_prepare_step_variables_overrides_dict or {}) | to_nice_yaml }}
-    EOF
+- name: Populate the variables_overrides file
+  ansible.builtin.copy:
+    dest: "{{ step_artifact_dir }}/artifacts/variable_overrides.yaml"
+    content: "{{ (jump_ci_prepare_step_variables_overrides_dict or {}) | to_nice_yaml }}"
+    mode: "0644"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_prepare_step/tasks/main.yml` around lines 56-60: the current "Populate the variables_overrides files" task uses a shell heredoc that expands shell interpolation and appends with >>. Replace it with an Ansible-managed write using the copy (or template) module so that the YAML is written verbatim and overwritten rather than appended. Specifically, remove the shell task and create a task that uses copy with dest: "{{ step_artifact_dir }}/artifacts/variable_overrides.yaml" and content: "{{ (jump_ci_prepare_step_variables_overrides_dict or {}) | to_nice_yaml }}", ensuring the value is written as file content (not via the shell) so that patterns like $(...), backticks, or $VAR are preserved literally and duplicates are avoided.
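The heredoc expansion hazard is easy to reproduce outside Ansible. A standalone Python sketch (the key/value content is hypothetical) comparing an unquoted-delimiter heredoc with a direct write, which is effectively what `ansible.builtin.copy` does:

```python
import pathlib
import subprocess
import tempfile

value = "$(date) and $HOME"  # content that must be persisted literally

with tempfile.TemporaryDirectory() as tmp:
    target = pathlib.Path(tmp) / "variable_overrides.yaml"

    # Shell heredoc with an unquoted delimiter: the shell expands the body
    # before the file is written.
    subprocess.run(f"cat <<EOF > {target}\nkey: {value}\nEOF",
                   shell=True, check=True)
    expanded = target.read_text()

    # Writing the content directly: no shell, so nothing is substituted.
    target.write_text(f"key: {value}\n")
    literal = target.read_text()

print("via heredoc:", expanded.strip())  # $(date) and $HOME were substituted
print("direct write:", literal.strip())  # key: $(date) and $HOME
```

Quoting the delimiter (`<<'EOF'`) would also suppress expansion, but a module-managed write additionally replaces the file instead of appending on reruns.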
🟡 Minor comments (5)
projects/jump_ci/toolbox/jump_ci_prepare_topsail/defaults/main/config.yml-18-21 (1)
18-21: ⚠️ Potential issue | 🟡 Minor

**Update repository defaults to match the current `topsail-ng` repository.**

Lines 18-21 default to `openshift-psap/topsail`, but this repository is `openshift-psap/topsail-ng` (per pyproject.toml, the Containerfiles, and the launcher configurations). Update the defaults to:

```yaml
jump_ci_prepare_topsail_repo_owner: openshift-psap
jump_ci_prepare_topsail_repo_name: topsail-ng
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_prepare_topsail/defaults/main/config.yml` around lines 18-21: the default repo name variable jump_ci_prepare_topsail_repo_name is outdated. Update its value from "topsail" to "topsail-ng" while keeping jump_ci_prepare_topsail_repo_owner as "openshift-psap" so that the defaults match the actual repository (modify the assignment for jump_ci_prepare_topsail_repo_name in the existing config block).

projects/jump_ci/readme.md-3-10 (1)
3-10: ⚠️ Potential issue | 🟡 Minor

**Incorrect command order: submodule update runs before entering the repository.**

The `git submodule update --init` command on line 6 executes before `cd topsail` on line 8, so it runs in the wrong directory and will fail to initialize the submodules.

Additionally, the code block should specify a language for proper syntax highlighting.

📝 Proposed fix

````diff
-```
+```bash
 ssh $JUMP_HOST
 git clone https://github.com/openshift-psap/topsail.git
-git submodule update --init
 cd topsail
+git submodule update --init
 podman build -f ./build/Dockerfile --label preserve=true -t localhost/topsail:main .
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/readme.md` around lines 3-10: move the git submodule initialization to after changing into the cloned repo, i.e., run "cd topsail" before "git submodule update --init" (adjust the command order around the cd and git submodule update --init lines) so that the submodules are initialized inside the topsail repository, and mark the fenced code block with the `bash` language for proper syntax highlighting.

projects/jump_ci/testing/utils_gethostname.py-104-116 (1)

104-116: ⚠️ Potential issue | 🟡 Minor

**Logging calls in the `__main__` block will produce no output.**

The `logging.info()` calls won't display anything because logging is not configured. Either configure logging with `basicConfig` or use `print()` for this standalone script usage.

📝 Proposed fix

```diff
 if __name__ == '__main__':
     # This block will only run if you execute the script directly
     # NOTE: This uses Python's built-in urllib library (no external dependencies)
-    logging.info("Attempting to determine Kubernetes Node Name...")
+    logging.basicConfig(level=logging.INFO)
+    logging.info("Attempting to determine Kubernetes Node Name...")

     NODE_NAME = get_kubernetes_node_name()

     if NODE_NAME:
         logging.info(f"\n✅ Pod is running on Node: **{NODE_NAME}**")
     else:
         logging.info("\n❌ Could not determine Node Name (Check if you are running inside a properly configured K8s Pod).")
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/utils_gethostname.py` around lines 104-116: the __main__ block uses logging.info but never configures the logging subsystem, so the messages won't appear. Either call logging.basicConfig(level=logging.INFO, format=...) at the top of the script (before the logging.info calls) or replace the logging.info calls in the __main__ block with print() for standalone use; update the references around get_kubernetes_node_name(), NODE_NAME, and the __main__ block accordingly so output is visible when the script is run directly.
projects/jump_ci/toolbox/jump_ci_ensure_lock/tasks/main.yml-10-10 (1)

10-10: ⚠️ Potential issue | 🟡 Minor

**Grammatical error in the fail messages.**

Multiple fail messages use "does not exists" instead of "does not exist" (lines 10, 23, 36).

📝 Proposed fixes

```diff
-  fail: msg="ansible_env does not exists. Make sure that the gather_facts command run. In TOPSAIL, the role needs to run remotely. Or set --check_kubeconfig=false"
+  fail: msg="ansible_env does not exist. Make sure that the gather_facts command runs. In TOPSAIL, the role needs to run remotely. Or set --check_kubeconfig=false"
```

Line 23:

```diff
-  fail: msg="The cluster KUBECONFIG ({{ cluster_kubeconfig }}) does not exists"
+  fail: msg="The cluster KUBECONFIG ({{ cluster_kubeconfig }}) does not exist"
```

Line 36:

```diff
-  fail: msg="The lock directory '{{ cluster_lock_dir }}' does not exists"
+  fail: msg="The lock directory '{{ cluster_lock_dir }}' does not exist"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_ensure_lock/tasks/main.yml` at line 10: fix the grammatical error in the fail message strings that say "does not exists" by changing them to "does not exist". Locate the fail tasks whose msg values contain that phrasing (the ansible_env message and the two other fail messages with the same wording) and update each msg to use "does not exist" while keeping the rest of the text unchanged.
projects/jump_ci/testing/utils.py-90-92 (1)

90-92: ⚠️ Potential issue | 🟡 Minor

**Local lock ownership shouldn't depend on `USER` being set.**

`os.environ["USER"]` is absent in some containers and non-interactive shells. When that happens, local runs crash instead of picking a fallback owner. `os.environ.get("USER") or getpass.getuser()` is safer here.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/utils.py` around lines 90-92: the code currently returns os.environ["USER"] (in the else branch that logs "Not running in a known CI. Using the username as lock owner."), which raises KeyError in some containers. Change the return to use a safe fallback such as os.environ.get("USER") or getpass.getuser(), and ensure getpass is imported, so that local runs won't crash when USER is not set.

🧹 Nitpick comments (5)

projects/jump_ci/toolbox/jump_ci_prepare_topsail/defaults/main/config.yml (1)

36-36: **Default cache source at Line 36 may hurt reproducibility.**

Because `jump_ci_prepare_topsail_update_from_imagetag` is always set to `main`, runs default to an incremental path tied to mutable image state. Prefer `null` by default and enable this explicitly in jobs that want cache acceleration.

Suggested change

```diff
-jump_ci_prepare_topsail_update_from_imagetag: main
+jump_ci_prepare_topsail_update_from_imagetag: null
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_prepare_topsail/defaults/main/config.yml` at line 36: the default value jump_ci_prepare_topsail_update_from_imagetag is currently hardcoded to "main". Change its default to null so that jobs opt in to using a mutable image tag, and update any CI job definitions that rely on cache acceleration to explicitly set jump_ci_prepare_topsail_update_from_imagetag="main" where desired. Locate the variable declaration in the config defaults, replace "main" with null, and add explicit assignments in the job templates that should use the cached incremental path.

projects/jump_ci/toolbox/jump_ci_retrieve_artifacts/tasks/main.yml (1)

14-16: **Directory size command result is not captured.**

The `du -sh` command output is executed but not registered, so the size information is lost. If this is intended for logging/debugging purposes, consider registering and displaying it.

📝 Suggested fix to capture and display the size

```diff
 - name: Get the size of the directory to retrieve
   command: du -sh "{{ remote_dir_path }}"
+  register: dir_size_cmd
+
+- name: Display the directory size
+  debug:
+    msg: "Directory size: {{ dir_size_cmd.stdout }}"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_retrieve_artifacts/tasks/main.yml` around lines 14-16: the "Get the size of the directory to retrieve" task runs du -sh but doesn't capture its output. Modify that task to register the result (e.g., register: dir_size) and then add a follow-up step to display or log the size (e.g., a debug task using dir_size.stdout, or set a fact from dir_size.stdout) so that the directory size is preserved for later use or troubleshooting.
projects/core/image/Containerfile.lightweight (1)

3-3: **`MAINTAINER` instruction is deprecated.**

The `MAINTAINER` instruction is deprecated in favor of using `LABEL`. Consider updating to the modern approach.

📝 Proposed fix

```diff
-MAINTAINER Perf&Scale PSAP Team <team-psap@redhat.com>
-
-LABEL io.k8s.display-name="OpenShift PSAP topsail" \
+LABEL maintainer="Perf&Scale PSAP Team <team-psap@redhat.com>" \
+      io.k8s.display-name="OpenShift PSAP topsail" \
       io.k8s.description="A lightweight image for running TOPSAIL in a remote host" \
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/core/image/Containerfile.lightweight` at line 3: replace the deprecated MAINTAINER instruction with a LABEL. Remove the MAINTAINER line and add a LABEL entry using the key "maintainer" with the same value ("Perf&Scale PSAP Team <team-psap@redhat.com>") so that the Containerfile uses LABEL instead of the deprecated MAINTAINER instruction.

projects/jump_ci/toolbox/jump_ci_take_lock/tasks/main.yml (1)

26-28: **Misleading task name: the task writes data, not logs.**

The task name "Log info about the lock owner" suggests logging, but it actually writes the owner to a file. Consider renaming for clarity.

📝 Suggested fix

```diff
-- name: Log info about the lock owner
+- name: Write the lock owner to the lock file
   shell: echo "{{ jump_ci_take_lock_owner }}" > "{{ cluster_lock_dir }}/lock_owner"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_take_lock/tasks/main.yml` around lines 26-28: rename the misleading Ansible task named "Log info about the lock owner" to something that reflects its action (e.g., "Write lock owner to file" or "Save lock owner to cluster lock file") and keep the existing shell action that writes "{{ jump_ci_take_lock_owner }}" into "{{ cluster_lock_dir }}/lock_owner". Refer to the task name string "Log info about the lock owner" and the variables jump_ci_take_lock_owner and cluster_lock_dir/lock_owner when making the change.

pyproject.toml (1)

32-32: **Consider pinning `ansible` to a minimum version.**

Unlike the other dependencies, `ansible` lacks a version constraint. Major version changes could introduce breaking API changes affecting the project.

📝 Suggested fix

```diff
     "click>=8.1.0",
-    "ansible",
+    "ansible>=8.0.0",
 ]
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@pyproject.toml` at line 32: the dependency entry "ansible" has no version constraint. Change it to pin a sensible minimum version (e.g., "ansible>=X.Y.Z") to prevent accidental breaking upgrades, update the corresponding lockfile (poetry.lock or pip-tools) and run your install/CI to verify no regressions. Search for the literal "ansible" in pyproject.toml to locate the entry and pick an appropriate minimum version consistent with your supported APIs before committing.
ℹ️ Review info

⚙️ Run configuration: defaults. Review profile: CHILL. Plan: Pro. Run ID: `65a3c03a-ac66-4678-a72a-d789707ef027`

📥 Commits: reviewing files that changed from the base of the PR and between 9c98c59f15f7f52780dcea3cb1f9919464be6c70 and 8933c1225cdceb0c893b36f248fb6a94f76c6756.

📒 Files selected for processing (55): `.gitignore`, `bin/run_ci`, `bin/topsail_launcher`, `projects/core/ci_entrypoint/run_ci.py`, `projects/core/image/Containerfile`, `projects/core/image/Containerfile.lightweight`, `projects/core/image/pyproject-lightweight.toml`, `projects/core/launcher/README.md`, `projects/core/launcher/launcher_config.yaml`, `projects/core/launcher/launcher_config.yaml.example`, `projects/core/launcher/topsail_launcher.py`, `projects/jump_ci/readme.md`, `projects/jump_ci/testing/command_args.yml.j2`, `projects/jump_ci/testing/config.yaml`, `projects/jump_ci/testing/prepare_jump_ci.py`, `projects/jump_ci/testing/run`, `projects/jump_ci/testing/test.py`, `projects/jump_ci/testing/tunnelling.py`, `projects/jump_ci/testing/utils.py`, `projects/jump_ci/testing/utils_gethostname.py`, `projects/jump_ci/toolbox/jump_ci.py`, `projects/jump_ci/toolbox/jump_ci_ensure_lock/defaults/main/config.yml`, `projects/jump_ci/toolbox/jump_ci_ensure_lock/files/.keep`, `projects/jump_ci/toolbox/jump_ci_ensure_lock/tasks/main.yml`, `projects/jump_ci/toolbox/jump_ci_ensure_lock/templates/.keep`, `projects/jump_ci/toolbox/jump_ci_ensure_lock/vars/main/resources.yml`, `projects/jump_ci/toolbox/jump_ci_prepare_step/defaults/main/config.yml`, `projects/jump_ci/toolbox/jump_ci_prepare_step/files/.keep`, `projects/jump_ci/toolbox/jump_ci_prepare_step/tasks/main.yml`, `projects/jump_ci/toolbox/jump_ci_prepare_step/templates/.keep`, `projects/jump_ci/toolbox/jump_ci_prepare_step/templates/entrypoint.sh.j2`, `projects/jump_ci/toolbox/jump_ci_prepare_step/vars/main/resources.yml`, `projects/jump_ci/toolbox/jump_ci_prepare_topsail/defaults/main/config.yml`, `projects/jump_ci/toolbox/jump_ci_prepare_topsail/files/.keep`, `projects/jump_ci/toolbox/jump_ci_prepare_topsail/tasks/main.yml`, `projects/jump_ci/toolbox/jump_ci_prepare_topsail/templates/.keep`, `projects/jump_ci/toolbox/jump_ci_prepare_topsail/vars/main/resources.yml`, `projects/jump_ci/toolbox/jump_ci_release_lock/defaults/main/config.yml`, `projects/jump_ci/toolbox/jump_ci_release_lock/files/.keep`, `projects/jump_ci/toolbox/jump_ci_release_lock/tasks/main.yml`, `projects/jump_ci/toolbox/jump_ci_release_lock/templates/.keep`, `projects/jump_ci/toolbox/jump_ci_release_lock/vars/main/resources.yml`, `projects/jump_ci/toolbox/jump_ci_retrieve_artifacts/defaults/main/config.yml`, `projects/jump_ci/toolbox/jump_ci_retrieve_artifacts/files/.keep`, `projects/jump_ci/toolbox/jump_ci_retrieve_artifacts/tasks/main.yml`, `projects/jump_ci/toolbox/jump_ci_retrieve_artifacts/templates/.keep`, `projects/jump_ci/toolbox/jump_ci_retrieve_artifacts/vars/main/resources.yml`, `projects/jump_ci/toolbox/jump_ci_take_lock/defaults/main/config.yml`, `projects/jump_ci/toolbox/jump_ci_take_lock/files/.keep`, `projects/jump_ci/toolbox/jump_ci_take_lock/tasks/main.yml`, `projects/jump_ci/toolbox/jump_ci_take_lock/templates/.keep`, `projects/jump_ci/toolbox/jump_ci_take_lock/vars/main/resources.yml`, `projects/skeleton/README.md`, `projects/skeleton/orchestration/ci.py`, `pyproject.toml`

✅ Files skipped from review due to trivial changes (6): `projects/jump_ci/testing/command_args.yml.j2`, `projects/jump_ci/toolbox/jump_ci_take_lock/defaults/main/config.yml`, `bin/topsail_launcher`, `projects/jump_ci/toolbox/jump_ci_retrieve_artifacts/defaults/main/config.yml`, `projects/jump_ci/toolbox/jump_ci_prepare_step/vars/main/resources.yml`, `projects/core/launcher/launcher_config.yaml`

🚧 Files skipped from review as they are similar to previous changes (3): `bin/run_ci`, `.gitignore`, `projects/core/image/Containerfile`
```yaml
- name: Copy the secrets to the jump host
  ansible.posix.synchronize:
    src: "{{ lookup('env', jump_ci_prepare_step_secrets_path_env_key) }}/"
    dest: "{{ cluster_lock_dir }}/secrets/{{ jump_ci_prepare_step_secrets_path_env_key }}/"
    mode: "push"
  no_log: true # not necessary, and leaks the hostnames and secret filenames (they aren't secret)
  when: jump_ci_prepare_step_secrets_path_env_key
```
Fail fast when the controller-side secrets env var is missing.
If the environment variable named by `jump_ci_prepare_step_secrets_path_env_key` is unset, `lookup('env', ...)` returns an empty string. On Line 77 that turns `src` into `/`, so this task will try to rsync the controller's root filesystem to the jump host.
Suggested guard

```diff
+- name: Resolve the controller-side secrets path
+  ansible.builtin.set_fact:
+    jump_ci_prepare_step_secrets_src: "{{ lookup('env', jump_ci_prepare_step_secrets_path_env_key) }}"
+  when: jump_ci_prepare_step_secrets_path_env_key
+
+- name: Fail if the controller-side secrets path is missing
+  ansible.builtin.assert:
+    that:
+      - jump_ci_prepare_step_secrets_src | trim | length > 0
+    fail_msg: "Environment variable {{ jump_ci_prepare_step_secrets_path_env_key }} is not set"
+  when: jump_ci_prepare_step_secrets_path_env_key
+
 - name: Copy the secrets to the jump host
   ansible.posix.synchronize:
-    src: "{{ lookup('env', jump_ci_prepare_step_secrets_path_env_key) }}/"
+    src: "{{ jump_ci_prepare_step_secrets_src | trim }}/"
     dest: "{{ cluster_lock_dir }}/secrets/{{ jump_ci_prepare_step_secrets_path_env_key }}/"
     mode: "push"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
- name: Resolve the controller-side secrets path
  ansible.builtin.set_fact:
    jump_ci_prepare_step_secrets_src: "{{ lookup('env', jump_ci_prepare_step_secrets_path_env_key) }}"
  when: jump_ci_prepare_step_secrets_path_env_key

- name: Fail if the controller-side secrets path is missing
  ansible.builtin.assert:
    that:
      - jump_ci_prepare_step_secrets_src | trim | length > 0
    fail_msg: "Environment variable {{ jump_ci_prepare_step_secrets_path_env_key }} is not set"
  when: jump_ci_prepare_step_secrets_path_env_key

- name: Copy the secrets to the jump host
  ansible.posix.synchronize:
    src: "{{ jump_ci_prepare_step_secrets_src | trim }}/"
    dest: "{{ cluster_lock_dir }}/secrets/{{ jump_ci_prepare_step_secrets_path_env_key }}/"
    mode: "push"
  no_log: true # not necessary, and leaks the hostnames and secret filenames (they aren't secret)
  when: jump_ci_prepare_step_secrets_path_env_key
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@projects/jump_ci/toolbox/jump_ci_prepare_step/tasks/main.yml` around lines 75
- 81, The task copies secrets using src: "{{ lookup('env',
jump_ci_prepare_step_secrets_path_env_key) }}/" which becomes "/" if the env var
is unset; add a guard so we fail fast when that env var is missing or empty:
either add a preceding ansible.builtin.fail task that checks lookup('env',
jump_ci_prepare_step_secrets_path_env_key) == '' and aborts with a clear
message, or tighten the existing when on the "Copy the secrets to the jump host"
task to require the lookup to be non-empty (for example: when:
jump_ci_prepare_step_secrets_path_env_key and (lookup('env',
jump_ci_prepare_step_secrets_path_env_key) | default('') != '')) so src will
never resolve to "/" and accidental rsync of the root filesystem is avoided.
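The hazard the review describes can be reproduced outside Ansible; a minimal sketch of the same pattern in plain Python (the variable name `JUMP_CI_SECRETS_PATH` is hypothetical, for illustration only):

```python
import os

# An unset environment variable behaves like Ansible's lookup('env', ...) here:
# it yields an empty string, and appending "/" turns the rsync src into "/".
env_key = "JUMP_CI_SECRETS_PATH"  # assumed name, not from the role

secrets_path = os.environ.get(env_key, "")
src = f"{secrets_path}/"

print(src)  # with the variable unset, this is just "/"
```

This is why the guard must check for an empty lookup result, not only for the presence of the key name.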
/test jump-ci

2 similar comments

/test jump-ci

/test jump-ci

/test jump-ci
Actionable comments posted: 1
♻️ Duplicate comments (1)
projects/core/ci_entrypoint/run_ci.py (1)
56-56: ⚠️ Potential issue | 🔴 Critical

`TOPSAIL_HOME` points to the wrong directory; project discovery will fail.

File: `projects/core/ci_entrypoint/run_ci.py`, Line: 63

The file is at `projects/core/ci_entrypoint/run_ci.py`. With `.parent.parent`, `TOPSAIL_HOME` resolves to `projects/core/` instead of the repository root. Line 76 then looks for `projects/core/projects/`, which does not exist (verified: `projects/` only exists at the repo root).

🐛 Proposed fix to correct the path depth

```diff
-TOPSAIL_HOME = Path(__file__).parent.parent
+TOPSAIL_HOME = Path(__file__).resolve().parents[3]  # ci_entrypoint → core → projects → repo_root
```

Or equivalently:

```python
TOPSAIL_HOME = Path(__file__).resolve().parent.parent.parent.parent
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` at line 56, TOPSAIL_HOME is set too shallow (Path(__file__).parent.parent) in run_ci.py so it points to projects/core instead of repo root; update the TOPSAIL_HOME assignment to climb up to the repository root (e.g., use Path(__file__).resolve().parent.parent.parent or parent.parent.parent.parent depending on file nesting) so subsequent project discovery (which expects TOPSAIL_HOME/projects/) finds the top-level projects directory; modify the TOPSAIL_HOME variable in run_ci.py accordingly and verify that TOPSAIL_HOME / "projects" exists.
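The off-by-N is easy to check with a toy path; a minimal sketch of how `Path.parents` indexes the ancestors (the `/repo_root` prefix is hypothetical):

```python
from pathlib import Path

# Hypothetical layout mirroring projects/core/ci_entrypoint/run_ci.py
p = Path("/repo_root/projects/core/ci_entrypoint/run_ci.py")

assert p.parents[0] == Path("/repo_root/projects/core/ci_entrypoint")
assert p.parents[1] == Path("/repo_root/projects/core")  # what .parent.parent gives
assert p.parents[2] == Path("/repo_root/projects")
assert p.parents[3] == Path("/repo_root")                # the repository root

print(p.parents[3] / "projects")  # /repo_root/projects
```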
🧹 Nitpick comments (6)
projects/core/ci_entrypoint/run_ci.py (6)
139-140: Remove extraneous `f` prefix from strings without placeholders.

These strings have no interpolation; use regular strings.

♻️ Proposed fix

```diff
- click.echo(f"  run <project> <operation>     # Execute project operation")
- click.echo(f"  run projects                  # List projects explicitly")
+ click.echo("  run <project> <operation>     # Execute project operation")
+ click.echo("  run projects                  # List projects explicitly")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 139 - 140, The two click.echo calls containing literal strings (" run <project> <operation> # Execute project operation" and " run projects # List projects explicitly") incorrectly use f-string prefixes; remove the leading f on these calls so they are plain strings, i.e., update the click.echo calls that currently reference those exact string literals in run_ci.py to use regular quoted strings without interpolation.
365-385: Redundant `--help` handling and builtin shadowing.

The parameter `help` shadows the Python builtin. Additionally, Click automatically handles `--help` for commands, so the manual implementation (lines 382-385) is redundant and can be removed along with the `@click.option('--help', ...)` decorator.

♻️ Proposed fix

```diff
 @click.command()
 @click.argument('project', required=False)
 @click.argument('operation', required=False)
 @click.argument('args', nargs=-1)
 @click.option('--verbose', '-v', is_flag=True, help='Enable verbose output')
 @click.option('--dry-run', is_flag=True, help='Show what would be executed without running it')
-@click.option('--help', '-h', is_flag=True, help='Show help message')
-def main(project, operation, args, verbose, dry_run, help):
+def main(project, operation, args, verbose, dry_run):
     """
     TOPSAIL-NG CI Orchestration Entrypoint.
     ...
     """
-    if help:
-        ctx = click.get_current_context()
-        click.echo(ctx.get_help())
-        return
-
     # No arguments - list projects
     if not project:
```

Click's built-in `--help` will still work after this change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 365 - 385, The main function currently defines and handles a manual --help option that shadows the built-in help and the Python builtin `help`; remove the `@click.option`('--help', '-h', ...) decorator, remove the `help` parameter from def main(project, operation, args, verbose, dry_run, help), and delete the manual help handling block that retrieves the context and prints get_help(); rely on Click's automatic help handling so the function signature becomes def main(project, operation, args, verbose, dry_run) and the explicit if help: ... block is removed.
191-192: Unused loop control variables.

`file_path` on line 191 and `i` on line 206 are not used within their loop bodies. Rename to `_` or `_file_path`/`_i` to signal intent.

♻️ Proposed fix

```diff
-    for operation_name, file_path in other_files:
+    for operation_name, _ in other_files:
         click.echo(f"  📝 {operation_name}.py")
```

```diff
-    for i, op in enumerate(ci_operations[:3]):
+    for op in ci_operations[:3]:
         click.echo(f"  run {project} {op}")
```

Also applies to: 206-207
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 191 - 192, The loops in run_ci.py use unused loop variables which should be renamed to indicate intentional unused values: in the loop "for operation_name, file_path in other_files" rename file_path to _file_path (or just _) and in the other loop where "i" is unused rename it to _i (or _) so linters/readers know the value is intentionally ignored; update any references accordingly (search for the two loops around the existing echo/print calls to locate them).
270-270: Remove extraneous `f` prefix from strings without placeholders.

Multiple f-strings on these lines contain no interpolation.

♻️ Proposed fix (representative sample)

```diff
-    click.echo(f"🚀 TOPSAIL-NG CI Orchestration")
+    click.echo("🚀 TOPSAIL-NG CI Orchestration")
```

```diff
-    click.echo(f"📂 No projects found in projects/ directory")
+    click.echo("📂 No projects found in projects/ directory")
```

Apply similar changes to lines 285, 310, 316, and 317.

Also applies to: 285-285, 289-289, 310-310, 316-317
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` at line 270, Several click.echo calls in run_ci.py use f-strings with no interpolation (e.g., click.echo(f"🚀 TOPSAIL-NG CI Orchestration") and the similar calls at the other flagged lines); remove the unnecessary leading f from these string literals so they are plain strings (e.g., change click.echo(f"...") to click.echo("...")) for the occurrences around the click.echo calls at the shown locations (including the lines referenced near 285, 289, 310, 316, and 317) to avoid redundant f-strings.
230-231: Silent exception swallowing hinders debugging.

Catching all exceptions and passing silently makes it difficult to diagnose why operation discovery fails. Consider logging the exception or catching more specific exceptions.

♻️ Proposed fix

```diff
-    except Exception:
-        pass
+    except subprocess.TimeoutExpired:
+        pass  # CI script took too long, skip operation discovery
+    except subprocess.SubprocessError as e:
+        # Could log here if verbose mode is available
+        pass
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 230 - 231, The except Exception: pass is silently swallowing errors in the CI entrypoint; update the exception handler in run_ci.py (the try/except block around operation discovery) to either catch specific exceptions or, at minimum, log the exception with traceback (e.g., use logging.exception or the existing process logger's exception/error method with the caught exception) and then re-raise or handle accordingly so failures are visible; locate the try/except that currently uses except Exception: pass and replace the pass with a logging call and appropriate re-raise or narrowed exception types.
25-53: Temporary runtime dependency installation should be replaced.

Per the commit message, this is a temporary workaround. Runtime package installation introduces fragility (network dependency, permission issues) and security concerns (S603, S607). Consider adding `click` to the project's formal dependencies or documenting the prerequisite.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 25 - 53, The current runtime installation block in run_ci.py (the try/except using subprocess.run to call "[sys.executable, '-m', 'pip', 'show', 'click']" and the subsequent fallback installs via "uv pip install click" and "[sys.executable, '-m', 'pip', 'install', '--user', 'click']") must be removed and replaced with a declarative dependency approach: delete this dynamic install logic and instead add "click" to the project's formal dependencies (e.g., requirements.txt/pyproject.toml) or update README/prerequisites to require click, and update any CI or container images to ensure click is preinstalled so run_ci.py no longer attempts subprocess-based installation at runtime.
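The declarative alternative can be sketched as a `pyproject.toml` fragment; the version bound shown is illustrative (taken from the `click>=8.1.0` pin already discussed in this review), not a value mandated by this PR:

```toml
# Sketch: declare the CLI dependency instead of installing it at runtime.
[project]
dependencies = [
    "click>=8.1.0",
]
```

With the dependency declared, the pip/uv fallback block in `run_ci.py` can be deleted entirely, and container images built from the project metadata get `click` preinstalled.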
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@projects/core/ci_entrypoint/run_ci.py`:
- Around line 236-265: In parse_cli_help, current code strips each line before
checking indentation which prevents detection of indented command lines; change
the loop to keep the original line (e.g., use raw_line = line, then
stripped_line = raw_line.strip()) and use stripped_line only for header checks
like raw_line.lower().strip().startswith('commands:') or content extraction,
while using raw_line.startswith(' ') to detect indented command lines and the
break condition; ensure command_name extraction uses stripped_line.split() so
leading spaces don't affect the token (refer to parse_cli_help, lines handling,
in_commands_section, and command_name extraction).
---
Duplicate comments:
In `@projects/core/ci_entrypoint/run_ci.py`:
- Line 56: TOPSAIL_HOME is set too shallow (Path(__file__).parent.parent) in
run_ci.py so it points to projects/core instead of repo root; update the
TOPSAIL_HOME assignment to climb up to the repository root (e.g., use
Path(__file__).resolve().parent.parent.parent or parent.parent.parent.parent
depending on file nesting) so subsequent project discovery (which expects
TOPSAIL_HOME/projects/) finds the top-level projects directory; modify the
TOPSAIL_HOME variable in run_ci.py accordingly and verify that TOPSAIL_HOME /
"projects" exists.
---
Nitpick comments:
In `@projects/core/ci_entrypoint/run_ci.py`:
- Around line 139-140: The two click.echo calls containing literal strings ("
run <project> <operation> # Execute project operation" and " run projects
# List projects explicitly") incorrectly use f-string prefixes; remove the
leading f on these calls so they are plain strings, i.e., update the click.echo
calls that currently reference those exact string literals in run_ci.py to use
regular quoted strings without interpolation.
- Around line 365-385: The main function currently defines and handles a manual
--help option that shadows the built-in help and the Python builtin `help`;
remove the `@click.option`('--help', '-h', ...) decorator, remove the `help`
parameter from def main(project, operation, args, verbose, dry_run, help), and
delete the manual help handling block that retrieves the context and prints
get_help(); rely on Click's automatic help handling so the function signature
becomes def main(project, operation, args, verbose, dry_run) and the explicit if
help: ... block is removed.
- Around line 191-192: The loops in run_ci.py use unused loop variables which
should be renamed to indicate intentional unused values: in the loop "for
operation_name, file_path in other_files" rename file_path to _file_path (or
just _) and in the other loop where "i" is unused rename it to _i (or _) so
linters/readers know the value is intentionally ignored; update any references
accordingly (search for the two loops around the existing echo/print calls to
locate them).
- Line 270: Several click.echo calls in run_ci.py use f-strings with no
interpolation (e.g., click.echo(f"🚀 TOPSAIL-NG CI Orchestration") and the
similar calls at the other flagged lines); remove the unnecessary leading f from
these string literals so they are plain strings (e.g., change click.echo(f"...")
to click.echo("...")) for the occurrences around the click.echo calls at the
shown locations (including the lines referenced near 285, 289, 310, 316, and
317) to avoid redundant f-strings.
- Around line 230-231: The except Exception: pass is silently swallowing errors
in the CI entrypoint; update the exception handler in run_ci.py (the try/except
block around operation discovery) to either catch specific exceptions or, at
minimum, log the exception with traceback (e.g., use logging.exception or the
existing process logger's exception/error method with the caught exception) and
then re-raise or handle accordingly so failures are visible; locate the
try/except that currently uses except Exception: pass and replace the pass with
a logging call and appropriate re-raise or narrowed exception types.
- Around line 25-53: The current runtime installation block in run_ci.py (the
try/except using subprocess.run to call "[sys.executable, '-m', 'pip', 'show',
'click']" and the subsequent fallback installs via "uv pip install click" and
"[sys.executable, '-m', 'pip', 'install', '--user', 'click']") must be removed
and replaced with a declarative dependency approach: delete this dynamic install
logic and instead add "click" to the project's formal dependencies (e.g.,
requirements.txt/pyproject.toml) or update README/prerequisites to require
click, and update any CI or container images to ensure click is preinstalled so
run_ci.py no longer attempts subprocess-based installation at runtime.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c61cfd31-d8d4-4450-81ca-cad5c7f63ea3
📒 Files selected for processing (1)
projects/core/ci_entrypoint/run_ci.py
```python
def parse_cli_help(help_output: str) -> List[str]:
    """Parse CLI help output to extract available commands."""
    operations = []
    in_commands_section = False

    lines = help_output.split('\n')
    for line in lines:
        line = line.strip()

        # Look for "Commands:" section
        if line.lower().startswith('commands:'):
            in_commands_section = True
            continue

        # Stop when we hit another section
        if in_commands_section and line and not line.startswith(' '):
            break

        # Extract command names
        if in_commands_section and line.startswith(' '):
            # Format is typically: "  command_name    Description"
            parts = line.split()
            if parts:
                command_name = parts[0].strip()
                # Skip common help/utility commands
                if command_name not in ['--help', '--version', '-h', '-v']:
                    operations.append(command_name)

    return operations
```
Parsing logic is broken — commands will never be extracted.
Line 243 strips whitespace from `line`, but lines 251 and 255 then check `line.startswith(' ')`. After stripping, a line can never start with a space, so:
- Line 251 immediately breaks out of the loop on the first command line (non-empty, doesn't start with space)
- Line 255's condition is never satisfied

Click help output has indented commands like `"  prepare    Description"`. After stripping, this becomes `"prepare    Description"`, which triggers the break condition before any commands are collected.
🐛 Proposed fix to preserve indentation for checks

```diff
 def parse_cli_help(help_output: str) -> List[str]:
     """Parse CLI help output to extract available commands."""
     operations = []
     in_commands_section = False

     lines = help_output.split('\n')
     for line in lines:
-        line = line.strip()
+        stripped = line.strip()

         # Look for "Commands:" section
-        if line.lower().startswith('commands:'):
+        if stripped.lower().startswith('commands:'):
             in_commands_section = True
             continue

-        # Stop when we hit another section
-        if in_commands_section and line and not line.startswith(' '):
+        # Stop when we hit another section (non-indented, non-empty line)
+        if in_commands_section and stripped and not line.startswith(' '):
             break

         # Extract command names
         if in_commands_section and line.startswith(' '):
             # Format is typically: "  command_name    Description"
-            parts = line.split()
+            parts = stripped.split()
             if parts:
                 command_name = parts[0].strip()
                 # Skip common help/utility commands
                 if command_name not in ['--help', '--version', '-h', '-v']:
                     operations.append(command_name)

     return operations
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@projects/core/ci_entrypoint/run_ci.py` around lines 236 - 265, In
parse_cli_help, current code strips each line before checking indentation which
prevents detection of indented command lines; change the loop to keep the
original line (e.g., use raw_line = line, then stripped_line = raw_line.strip())
and use stripped_line only for header checks like
raw_line.lower().strip().startswith('commands:') or content extraction, while
using raw_line.startswith(' ') to detect indented command lines and the break
condition; ensure command_name extraction uses stripped_line.split() so leading
spaces don't affect the token (refer to parse_cli_help, lines handling,
in_commands_section, and command_name extraction).
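The indentation-preserving parse can be exercised end to end; a minimal runnable sketch (the sample help text is made up to resemble Click's grouped output, not captured from the project):

```python
from typing import List

def parse_cli_help(help_output: str) -> List[str]:
    """Extract command names from Click-style help output,
    keeping the raw line so indentation checks still work."""
    operations = []
    in_commands_section = False
    for raw_line in help_output.split('\n'):
        stripped = raw_line.strip()
        if stripped.lower().startswith('commands:'):
            in_commands_section = True
            continue
        # A non-empty, non-indented line ends the Commands: section
        if in_commands_section and stripped and not raw_line.startswith(' '):
            break
        # Indented lines inside the section are command entries
        if in_commands_section and raw_line.startswith(' '):
            parts = stripped.split()
            if parts and parts[0] not in ['--help', '--version', '-h', '-v']:
                operations.append(parts[0])
    return operations

# Made-up sample resembling Click's grouped help output
sample = """Usage: ci.py [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  prepare  Prepare the cluster
  test     Run the test
"""
print(parse_cli_help(sample))  # ['prepare', 'test']
```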
/test jump-ci

1 similar comment

/test jump-ci

/test jump-ci

/test jump-ci

/test jump-ci

/test jump-ci
Actionable comments posted: 11
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
♻️ Duplicate comments (9)
projects/core/launcher/topsail_launcher.py (1)
519-524: ⚠️ Potential issue | 🔴 Critical

Use the existing `topsail_home` key here.

`_load_config()` only populates `topsail_home`, so `config` without `--set` raises `KeyError` on Line 520 instead of showing the configured path.

🐛 Proposed fix

```diff
-    topsail_home = Path(launcher.config['TOPSAIL_HOME'])
+    topsail_home = Path(launcher.config['topsail_home'])
     if topsail_home.exists():
-        click.echo(f"📁 TOPSAIL_HOME: ✅ Found at {topsail_home}")
+        click.echo(f"📁 topsail_home: ✅ Found at {topsail_home}")
     else:
-        click.echo(f"📁 TOPSAIL_HOME: ❌ Not found at {topsail_home}")
+        click.echo(f"📁 topsail_home: ❌ Not found at {topsail_home}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/launcher/topsail_launcher.py` around lines 519 - 524, The code is reading launcher.config['TOPSAIL_HOME'] but _load_config() stores the path under the 'topsail_home' key; update the check to read launcher.config['topsail_home'] (the local variable topsail_home is fine to keep) so that the KeyError is avoided and the configured path is shown; modify the block that sets topsail_home = Path(launcher.config['TOPSAIL_HOME']) to use 'topsail_home' and leave the subsequent exists()/click.echo logic unchanged.

projects/jump_ci/toolbox/jump_ci_prepare_topsail/tasks/main.yml (1)
173-176:⚠️ Potential issue | 🔴 Critical
`update_from_imagetag` guard is always true.

`| length >= 0` matches empty and undefined values, so the update build runs on every cache miss. When no update tag is configured, this still expands to `--from "<image>:"` and can fail before the scratch build path is reached.

🐛 Proposed fix

```diff
   when:
   - has_image.rc != 0
-  - jump_ci_prepare_topsail_update_from_imagetag | default('', true) | length >= 0
+  - jump_ci_prepare_topsail_update_from_imagetag | default('', true) | length > 0
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_prepare_topsail/tasks/main.yml` around lines 173 - 176, The when guard always evaluates true because `| length >= 0` matches empty/undefined; change the condition so the update-from-imagetag branch only runs when `jump_ci_prepare_topsail_update_from_imagetag` is set and non-empty (e.g. replace `jump_ci_prepare_topsail_update_from_imagetag | default('', true) | length >= 0` with a check such as `jump_ci_prepare_topsail_update_from_imagetag is defined and jump_ci_prepare_topsail_update_from_imagetag | default('') | length > 0` or `jump_ci_prepare_topsail_update_from_imagetag | default('') != ''`) so the update build is skipped when no tag is configured; keep the existing `has_image.rc != 0` clause.

projects/jump_ci/testing/tunnelling.py (2)
188-205: ⚠️ Potential issue | 🟠 Major

Same shell injection risk in `probe_ssh_endpoint` and `run_with_ansible_ssh_conf`.

These functions also construct SSH commands via string interpolation. Apply the same fix to use argument lists instead of shell strings.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/tunnelling.py` around lines 188 - 205, Both probe_ssh_endpoint and run_with_ansible_ssh_conf build shell command strings which allow shell injection; change them to call run.run with a list of arguments instead of an interpolated string: construct an args list beginning with "ssh", extend with shlex.split(ssh_flags) (or split the flag string safely), append "-i" and private_key_path, add optional "{user}@{host}" or "{user}@{hostname}", "-p" and port, and then the remote command ("true" for probe_ssh_endpoint; for run_with_ansible_ssh_conf include "-t", "--" and the cmd as separate elements); preserve existing keyword args capture_stderr and log_command when calling run.run.
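The argv-list approach described in the prompt above can be sketched as follows; the helper name `build_ssh_probe_cmd` and its signature are illustrative assumptions, not the project's actual API:

```python
import shlex

def build_ssh_probe_cmd(ssh_flags, private_key_path, user, host, port):
    """Hypothetical helper: build an ssh argv list instead of a shell string,
    so hostnames or flags with spaces/metacharacters cannot be reinterpreted
    by a shell."""
    cmd = ["ssh"]
    cmd += shlex.split(ssh_flags)          # split the flag string safely
    cmd += ["-i", str(private_key_path)]
    cmd += ["-p", str(port), f"{user}@{host}"]
    cmd.append("true")                     # remote no-op used as a probe
    return cmd

cmd = build_ssh_probe_cmd(
    "-o StrictHostKeyChecking=no", "/tmp/id_ed25519", "ci", "jump.example.com", 22,
)
print(cmd)

# A hostile value stays a single argv element instead of becoming shell syntax:
evil = build_ssh_probe_cmd("", "/tmp/key", "ci", "host; rm -rf /", 22)
assert "ci@host; rm -rf /" in evil  # one list element, never parsed by a shell
```

Passing such a list to `subprocess.Popen`/`subprocess.run` without `shell=True` keeps each element a literal argument.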
137-150: ⚠️ Potential issue | 🟠 Major

Avoid composing SSH commands through the shell.

`jump_host`, `bastion_host`, `private_key_path`, `ssh_flags`, and the remote `cmd` are interpolated into shell strings. A malformed secret/config value with spaces or metacharacters breaks the command, and a hostile value becomes command injection on the runner. Build the `ssh` invocation as an argument list and avoid `shell=True`.

🔒 Suggested fix - use argument list

```diff
-    cmd = f"ssh {' '.join(ssh_flags)} \
-    -i {private_key_path} {jump_host} \
-    -L {local_host_port}:{bastion_host}:{bastion_host_port} \
-    -N"
+    cmd = [
+        'ssh',
+        *ssh_flags,
+        '-i', str(private_key_path),
+        jump_host,
+        '-L', f'{local_host_port}:{bastion_host}:{bastion_host_port}',
+        '-N'
+    ]
 ...
     def create_tunnel():
-        _proc = subprocess.Popen(cmd, shell=True)
+        _proc = subprocess.Popen(cmd)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/tunnelling.py` around lines 137 - 150, The code currently composes an SSH command string (cmd) and passes it to subprocess.Popen(shell=True), which is unsafe; change create_tunnel to build an argument list and call subprocess.Popen with shell=False: construct argv starting with 'ssh', extend with the existing ssh_flags (treat ssh_flags as a list or split safely), then append '-i', private_key_path, the jump_host target, '-L', f"{local_host_port}:{bastion_host}:{bastion_host_port}", and '-N', and use that list in subprocess.Popen(argv, shell=False) (adjust any logging to join the argv for display). Ensure you reference the existing symbols cmd (remove or stop using it for execution), create_tunnel, subprocess.Popen, ssh_flags, private_key_path, jump_host, bastion_host, local_host_port, and bastion_host_port.

projects/core/ci_entrypoint/run_ci.py (4)
274-295: ⚠️ Potential issue | 🟠 Major

Preserve indentation while parsing Click help.

Line 276 strips the line before the indentation checks, so Lines 284 and 288 can never match a command entry and `get_ci_operations()` always returns `[]` for grouped CLIs.

Proposed fix

```diff
     lines = help_output.split('\n')
-    for line in lines:
-        line = line.strip()
+    for raw_line in lines:
+        line = raw_line.strip()
@@
-        if in_commands_section and line and not line.startswith(' '):
+        if in_commands_section and line and not raw_line.startswith(' '):
             break

         # Extract command names
-        if in_commands_section and line.startswith(' '):
+        if in_commands_section and raw_line.startswith(' '):
             # Format is typically: "  command_name    Description"
             parts = line.split()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 274 - 295, The parsing trims each line early (line = line.strip()), which removes leading spaces so the later indentation checks (line.startswith(' '')) never match and get_ci_operations() returns an empty operations list; change the logic to preserve the original line for indentation checks by not reassigning line to the stripped value — instead use a separate variable (e.g., trimmed = line.strip()) for the "Commands:" detection and section-empty checks while keeping the original line for the startswith(' ') checks that build the operations list; update references to line, trimmed, in_commands_section, operations and get_ci_operations() accordingly.
196-205: ⚠️ Potential issue | 🟠 Major

`run <project>` misses operations stored under `orchestration/`.

This only scans `project_dir/*.py`, so `run skeleton` reports no Python files even though `projects/skeleton/orchestration/ci.py` exists.

Proposed fix

```diff
     # List Python files in the project directory
     python_files = []
-    for file_path in project_dir.glob("*.py"):
-        if file_path.is_file():
-            operation_name = file_path.stem  # filename without .py extension
-            python_files.append((operation_name, file_path))
+    for pattern in ("*.py", "orchestration/*.py"):
+        for file_path in project_dir.glob(pattern):
+            if file_path.is_file():
+                operation_name = file_path.stem  # filename without .py extension
+                python_files.append((operation_name, file_path))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 196 - 205, The current collection logic only globs project_dir.glob("*.py") and thus misses modules under the orchestration subdirectory; update the python file discovery in run_ci.py so python_files includes .py files from project_dir/"orchestration" (or do a recursive glob like project_dir.rglob("*.py")), preserving how operation_name is derived (file_path.stem or a normalized name) and keeping the existing python_files list and subsequent logic intact; modify the loop that builds python_files (references: python_files, project_dir, operation_name) to also scan the orchestration folder (or use rglob) and append those (skipping non-files) so operations like orchestration/ci.py are detected.
109-132: ⚠️ Potential issue | 🔴 Critical

Fallback to grouped `ci.py` when no dedicated script exists.

`run skeleton prepare` currently searches only `prepare.py` and `orchestration/prepare.py`, but the skeleton project exposes `prepare` via `projects/skeleton/orchestration/ci.py`. That makes grouped project CLIs unreachable.

Proposed fix
```diff
 possible_locations = [
     # Operation-specific script in project root
     project_dir / f"{operation}.py",
     # Operation-specific script in orchestration subdirectory
     project_dir / "orchestration" / f"{operation}.py",
+    # Grouped CI entrypoints
+    project_dir / "ci.py",
+    project_dir / "orchestration" / "ci.py",
 ]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 109 - 132, The function find_ci_script currently only looks for operation-specific scripts (project_dir/{operation}.py and project_dir/orchestration/{operation}.py) and therefore misses grouped CLIs like orchestration/ci.py that dispatch multiple operations (e.g., prepare). Update find_ci_script to, after checking the two existing paths, also check project_dir / "orchestration" / "ci.py" (and optionally project_dir / "ci.py" if you want a top-level grouped entry) as a fallback when the operation-specific script isn't found; modify the possible_locations logic or add a fallback check so the function returns orchestration/ci.py when present.
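The lookup order from the proposed fix can be sketched as a standalone function (a hypothetical `find_ci_script`, exercised against a throwaway project layout):

```python
import pathlib
import tempfile

def find_ci_script(project_dir: pathlib.Path, operation: str):
    """Return the first existing candidate: the dedicated script
    first, then the grouped ci.py entrypoints as a fallback."""
    candidates = [
        project_dir / f"{operation}.py",
        project_dir / "orchestration" / f"{operation}.py",
        project_dir / "ci.py",                    # grouped fallback
        project_dir / "orchestration" / "ci.py",  # grouped fallback
    ]
    for candidate in candidates:
        if candidate.is_file():
            return candidate
    return None

with tempfile.TemporaryDirectory() as tmp:
    project = pathlib.Path(tmp)
    (project / "orchestration").mkdir()
    (project / "orchestration" / "ci.py").touch()  # only a grouped entrypoint
    print(find_ci_script(project, "prepare").name)  # → ci.py
```

Because the dedicated paths are checked first, adding the grouped entries cannot shadow an existing `prepare.py`.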
339-343: ⚠️ Potential issue | 🟠 Major

Only prepend the operation for centralized entrypoints.

If `find_ci_script()` returns `prepare.py`, the current argv becomes `python prepare.py prepare ...`, which shifts every positional argument for direct scripts.

Proposed fix
```diff
-cmd = [sys.executable, str(ci_script), click_operation] + list(args)
+if ci_script.stem == "ci":
+    cmd = [sys.executable, str(ci_script), click_operation, *args]
+else:
+    cmd = [sys.executable, str(ci_script), *args]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/ci_entrypoint/run_ci.py` around lines 339 - 343, The command currently always prepends the Click-style operation (click_operation) which breaks direct scripts like prepare.py; change the logic around building cmd so you only insert click_operation when the target script is a centralized Click entrypoint — e.g., if ci_script.stem == operation then do not prepend it. Update the code that builds cmd (variables: ci_script, operation, click_operation, cmd) to check ci_script.stem (or basename) against operation and only include click_operation when they differ.
projects/jump_ci/testing/prepare_jump_ci.py (1)
92-111: ⚠️ Potential issue | 🔴 Critical

Standard OpenShift CI jobs still prepare the wrong repository.

Only foreign-testing and periodic jobs override `repo_owner`/`repo_name`. Regular PR jobs keep the defaults, so PRs in this repo still prepare `openshift-psap/topsail` instead of the actual CI repository.

Proposed fix
```diff
 if os.environ.get("OPENSHIFT_CI") == "true":
+    repo_owner = os.environ.get("REPO_OWNER", repo_owner)
+    repo_name = os.environ.get("REPO_NAME", repo_name)
     if os.environ.get("OPENSHIFT_CI_TOPSAIL_FOREIGN_TESTING"):
         logging.info("Not running from TOPSAIL repository. Using TOPSAIL foreign configuration.")
         repo_owner = config.project.get_config("foreign_testing.topsail.repo.owner")
         repo_name = config.project.get_config("foreign_testing.topsail.repo.name")
         git_ref = config.project.get_config("foreign_testing.topsail.repo.branch")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/prepare_jump_ci.py` around lines 92 - 111, When running under OpenShift CI the code only sets repo_owner/repo_name/git_ref for foreign-testing and periodic jobs, causing regular PR jobs to keep the topsail defaults; update the OPENSHIFT_CI block so that for the non-foreign, non-periodic branch you populate repo_owner, repo_name and git_ref from the standard CI env vars (e.g. os.environ.get("REPO_OWNER"), os.environ.get("REPO_NAME"), os.environ.get("PULL_PULL_SHA") or PULL_NUMBER as appropriate) before calling prepare_topsail_args |= dict(...); locate and modify the logic around prepare_topsail_args and the repo_owner/repo_name/git_ref variables in this block to ensure all OpenShift CI job types set those values correctly.
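The env-var override suggested above can be sketched in isolation (a hypothetical `resolve_repo` helper; `REPO_OWNER`/`REPO_NAME` are the standard Prow/OpenShift CI variables named in the fix):

```python
def resolve_repo(environ, default_owner="openshift-psap", default_name="topsail"):
    """Pick up the repository under test from the standard OpenShift CI
    environment variables, keeping the defaults when they are unset."""
    owner = environ.get("REPO_OWNER", default_owner)
    name = environ.get("REPO_NAME", default_name)
    return owner, name

# Regular PR job: Prow exports REPO_OWNER/REPO_NAME for the tested repo
print(resolve_repo({"REPO_OWNER": "crc-org", "REPO_NAME": "llama.cpp"}))
# → ('crc-org', 'llama.cpp')

# Outside CI (or with the variables unset) the defaults are kept
print(resolve_repo({}))  # → ('openshift-psap', 'topsail')
```

Passing the environment in as a mapping rather than reading `os.environ` directly keeps the resolution logic trivially testable.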
🟡 Minor comments (10)
projects/jump_ci/testing/utils_gethostname.py-104-116 (1)
104-116: ⚠️ Potential issue | 🟡 Minor

`logging.info()` calls won't produce output without configuration.

Python's logging module defaults to the `WARNING` level, so `logging.info()` calls are suppressed unless the logger is configured. The `__main__` block won't display any output when run directly.

🐛 Fix: Configure logging or use print()
Option 1: Configure logging
```diff
 if __name__ == '__main__':
     # This block will only run if you execute the script directly
     # NOTE: This uses Python's built-in urllib library (no external dependencies)
+    logging.basicConfig(level=logging.INFO)
     logging.info("Attempting to determine Kubernetes Node Name...")
```

Option 2: Use print() for CLI output
```diff
 if __name__ == '__main__':
     # This block will only run if you execute the script directly
     # NOTE: This uses Python's built-in urllib library (no external dependencies)
-    logging.info("Attempting to determine Kubernetes Node Name...")
+    print("Attempting to determine Kubernetes Node Name...")

     NODE_NAME = get_kubernetes_node_name()

     if NODE_NAME:
-        logging.info(f"\n✅ Pod is running on Node: **{NODE_NAME}**")
+        print(f"\n✅ Pod is running on Node: {NODE_NAME}")
     else:
-        logging.info("\n❌ Could not determine Node Name (Check if you are running inside a properly configured K8s Pod).")
+        print("\n❌ Could not determine Node Name (Check if you are running inside a properly configured K8s Pod).")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/utils_gethostname.py` around lines 104 - 116, The __main__ block uses logging.info() which is suppressed by default; update the script so the output is visible when run directly by either configuring the logger at the start of the __main__ block (call logging.basicConfig(level=logging.INFO) or equivalent configuration before calling get_kubernetes_node_name()) or replace the user-facing logging.info() calls with print() calls; target the calls around get_kubernetes_node_name() and the success/failure messages so the script prints when executed.
projects/jump_ci/testing/test.py-35-35 (1)
35-35: ⚠️ Potential issue | 🟡 Minor

Ruff errors here will fail linting.

Line 35 and Line 89 use one-line `if ...: continue` (E701), and Line 107 has an unnecessary f-string prefix (F541).

Also applies to: 89-89, 107-107
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/test.py` at line 35, The one-line conditional "if k.startswith(\"PR_POSITIONAL_ARG\"): continue" should be expanded to a proper multi-line block (e.g., if k.startswith("PR_POSITIONAL_ARG"): newline indent continue) to satisfy Ruff E701; replace any other one-line if ...: continue occurrences (such as the one at line 89) the same way. Also remove the unnecessary f-string prefix (the leading f on a plain string literal causing F541) at the usage in test.py (line with the stray f-prefix) so the string is a normal quoted literal. Ensure you update the occurrences of k.startswith("PR_POSITIONAL_ARG") and the plain f-string literal accordingly.
projects/jump_ci/testing/test.py-162-164 (1)
162-164: ⚠️ Potential issue | 🟡 Minor

`multi_run_args.list` is not written to the opened file.

Line 163 calls `print(...)` without `file=f`, so output goes to stdout instead of `multi_run_args.list`.

Suggested fix
```diff
 with open(env.ARTIFACT_DIR / "multi_run_args.list", "a+") as f:
-    print(f"{multi_run_dirname}: {multi_run_args}")
+    print(f"{multi_run_dirname}: {multi_run_args}", file=f)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/test.py` around lines 162 - 164, The with-open block that opens env.ARTIFACT_DIR / "multi_run_args.list" uses print(f"{multi_run_dirname}: {multi_run_args}") without directing output to the file handle, so nothing is written; update the write to use the opened file handle (e.g., pass file=f to print or call f.write(...) and include a newline) in the same block where f is defined (the with open(...) as f: scope) so the entry is appended to multi_run_args.list.
projects/skeleton/README.md-13-13 (1)
13-13: ⚠️ Potential issue | 🟡 Minor

Minor grammar fix: hyphenate "AI-focused".
When "AI focused" is used as a compound adjective before a noun, it should be hyphenated.
Proposed fix
```diff
-- **AI Platform Specificity**: OpenShift AI focused testing patterns
+- **AI Platform Specificity**: OpenShift AI-focused testing patterns
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/skeleton/README.md` at line 13, Replace the phrase "**AI Platform Specificity**: OpenShift AI focused testing patterns" so the compound adjective is hyphenated: change "AI focused" to "AI-focused" (i.e., "**AI Platform Specificity**: OpenShift AI-focused testing patterns") to correct the grammar.
projects/jump_ci/toolbox/jump_ci_take_lock/tasks/main.yml-26-28 (1)
26-28: ⚠️ Potential issue | 🟡 Minor

Misleading task name: "Log info" but actually writes to file.
The task name suggests logging, but it's writing the lock owner to a file.
Proposed fix
```diff
-- name: Log info about the lock owner
+- name: Save the lock owner name
   shell: echo "{{ jump_ci_take_lock_owner }}" > "{{ cluster_lock_dir }}/lock_owner"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_take_lock/tasks/main.yml` around lines 26 - 28, Rename the misleading Ansible task "Log info about the lock owner" to something explicit like "Write lock owner to file" and update the task to use an idempotent module (e.g., ansible.builtin.copy or ansible.builtin.lineinfile) instead of a raw shell echo; ensure it writes the variable {{ jump_ci_take_lock_owner }} into "{{ cluster_lock_dir }}/lock_owner" so the behavior and name match (reference task name "Log info about the lock owner", variable jump_ci_take_lock_owner, and path "{{ cluster_lock_dir }}/lock_owner").
projects/jump_ci/toolbox/jump_ci_prepare_step/tasks/main.yml-69-73 (1)
69-73: ⚠️ Potential issue | 🟡 Minor

Guard the secrets directory creation with the same condition as the copy task.

This task creates the secrets directory unconditionally, even when `jump_ci_prepare_step_secrets_path_env_key` is `null`. This would create a directory path containing the literal string "null". Add the same `when` condition as Line 81.

Proposed fix
```diff
 - name: Ensure that the secrets directory exists
   ansible.builtin.file:
     state: directory
     path: "{{ cluster_lock_dir }}/secrets/{{ jump_ci_prepare_step_secrets_path_env_key }}"
     mode: '0755'
+  when: jump_ci_prepare_step_secrets_path_env_key
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_prepare_step/tasks/main.yml` around lines 69 - 73, The "Ensure that the secrets directory exists" ansible.builtin.file task should be guarded so it only runs when jump_ci_prepare_step_secrets_path_env_key is set; add the same when condition used on the secrets copy task (the task that copies secrets) — e.g., add a when: jump_ci_prepare_step_secrets_path_env_key is not none (and/or not empty) to the file task so it does not create a path containing the literal "null".
projects/jump_ci/toolbox/jump_ci_ensure_lock/tasks/main.yml-10-10 (1)
10-10: ⚠️ Potential issue | 🟡 Minor

Minor grammar fix: "does not exists" → "does not exist".
This grammatical error appears in multiple error messages (lines 10, 23, 36).
Proposed fix
```diff
-  fail: msg="ansible_env does not exists. Make sure that the gather_facts command run. In TOPSAIL, the role needs to run remotely. Or set --check_kubeconfig=false"
+  fail: msg="ansible_env does not exist. Make sure that the gather_facts command ran. In TOPSAIL, the role needs to run remotely. Or set --check_kubeconfig=false"
```

Also apply to lines 23 and 36:
```diff
-  fail: msg="The cluster KUBECONFIG ({{ cluster_kubeconfig }}) does not exists"
+  fail: msg="The cluster KUBECONFIG ({{ cluster_kubeconfig }}) does not exist"
```

```diff
-  fail: msg="The lock directory '{{ cluster_lock_dir }}' does not exists"
+  fail: msg="The lock directory '{{ cluster_lock_dir }}' does not exist"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_ensure_lock/tasks/main.yml` at line 10, Update the failing task messages to correct the grammar from "does not exists" to "does not exist": locate the fail tasks that use the string pattern fail: msg="ansible_env does not exists..." (appearing in the tasks with that exact msg text used at the three occurrences) and change each message to "ansible_env does not exist. Make sure that the gather_facts command ran. In TOPSAIL, the role needs to run remotely. Or set --check_kubeconfig=false" (preserving the rest of the message text).
projects/jump_ci/testing/tunnelling.py-105-105 (1)
105-105: ⚠️ Potential issue | 🟡 Minor

Replace mutable default argument with `None`.

Using a mutable default `ssh_flags=[]` can lead to unexpected behavior if the list is modified. Use `None` and initialize within the function.

🐛 Proposed fix
```diff
-    ssh_flags=[],
+    ssh_flags=None,
 ):
     ...
+    if ssh_flags is None:
+        ssh_flags = []
+
     secret_dir = pathlib.Path(os.environ[secret_env_key])
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/tunnelling.py` at line 105, The function that currently declares the parameter ssh_flags=[] should be changed to use ssh_flags=None and initialize it inside the function body (e.g., if ssh_flags is None: ssh_flags = []) to avoid sharing a mutable list across calls; update any type hints or callers if necessary and ensure all code paths use the newly-initialized list (search for the ssh_flags parameter name to locate the function to change).
projects/jump_ci/orchestration/ci.py-217-223 (1)
217-223: ⚠️ Potential issue | 🟡 Minor

Typo and incorrect method call in `unlock_cluster`.

Line 220 has a typo: "ckuster" should be "cluster". Additionally, `unlock_cluster` calls `runner.prepare()` but should likely call `runner.cleanup()` to perform unlock/teardown logic.

🐛 Proposed fix
```diff
 @cli.command()
 @click.pass_context
 def unlock_cluster(ctx):
-    """Teardown phase - Unlock the ckuster."""
+    """Teardown phase - Unlock the cluster."""
     runner = ctx.obj.runner
-    exit_code = runner.prepare()
+    exit_code = runner.cleanup()
     sys.exit(exit_code)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/orchestration/ci.py` around lines 217 - 223, Fix the typo in unlock_cluster's docstring ("ckuster" → "cluster") and change the method call from runner.prepare() to runner.cleanup() so the command executes teardown/unlock logic; update the unlock_cluster function to call runner.cleanup(), capture its return value in exit_code as before, and exit with sys.exit(exit_code).
projects/core/launcher/topsail_launcher.py-366-384 (1)
366-384: ⚠️ Potential issue | 🟡 Minor

Dead code: `working_dir` is assigned twice unconditionally.

Lines 374-377 set `working_dir`, but lines 379-382 immediately reassign it unconditionally, making the first assignment dead code. This appears to be a logic error.

🐛 Proposed fix
```diff
 def enter(ctx, command, args, here):
     """Enter the TOPSAIL development environment."""
     launcher = ctx.obj['launcher']

     if command:
         full_command = f"{command} {' '.join(args)}"
+        working_dir = None
     elif here:
         full_command = launcher.config['topsail_toolbox_command']
         working_dir = os.getcwd()
     else:
         full_command = launcher.config['topsail_toolbox_command']
         working_dir = launcher.config['topsail_home']

-    if not here and not command:
-        working_dir = launcher.config['topsail_home']
-    else:
-        working_dir = os.getcwd() if here else None
-
     sys.exit(launcher._run_toolbox_command(full_command, working_dir))
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/launcher/topsail_launcher.py` around lines 366 - 384, The function enter currently assigns working_dir twice, making the first assignment dead; fix by consolidating the branching so working_dir is set only once alongside full_command: when command is provided set full_command = f"{command} {' '.join(args)}" and working_dir = os.getcwd() if here else None; when command is not provided set full_command = launcher.config['topsail_toolbox_command'] and set working_dir = os.getcwd() if here else launcher.config['topsail_home']; ensure the final call uses launcher._run_toolbox_command(full_command, working_dir) unchanged.
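The de-duplicated branching can be expressed as a small pure function, shown here as a hypothetical standalone sketch of the reviewed `enter` logic (the toolbox command and home path are placeholder values):

```python
import os

def resolve_launch(command, args, here,
                   toolbox_command="./run_toolbox.py",
                   topsail_home="/opt/topsail"):
    """Compute (full_command, working_dir) with one assignment per
    branch, so no value is dead and no branch falls through."""
    if command:
        return f"{command} {' '.join(args)}", None
    if here:
        return toolbox_command, os.getcwd()
    return toolbox_command, topsail_home

print(resolve_launch("ls", ["-l"], here=False))  # → ('ls -l', None)
print(resolve_launch(None, [], here=False))      # → ('./run_toolbox.py', '/opt/topsail')
```

Returning the pair directly makes it impossible to reassign `working_dir` after the branch has chosen it, which is exactly the bug the comment flags.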
🧹 Nitpick comments (22)
projects/jump_ci/testing/utils_gethostname.py (1)
61-76: Static analysis S310 is a false positive here – URL scheme is hardcoded.

The static analysis tool flags `urllib.request.Request()` and `urlopen()` for potential use of `file:` or custom schemes. However, in this code the URL is constructed with a hardcoded `https://` prefix (line 61), so the scheme is controlled by the code, not external input.

For defense-in-depth, you could optionally add a scheme validation, but given that `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` are set by the kubelet in a trusted Kubernetes environment, the current implementation is acceptable.

🛡️ Optional: Add explicit scheme validation
```diff
     APISERVER = f"https://{K8S_HOST}:{K8S_PORT}"

     # --- 2. Make the API Call ---
     URL = f"{APISERVER}/api/v1/namespaces/{NAMESPACE}/pods/{POD_NAME}"
+
+    # Validate URL scheme (defense-in-depth)
+    if not URL.startswith("https://"):
+        logging.error("Constructed URL does not use HTTPS scheme.")
+        return None

     try:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/utils_gethostname.py` around lines 61 - 76, Static analysis flags the use of urllib.request.Request/urllib.request.urlopen as a potential untrusted-scheme issue, but APISERVER/URL are built with a hardcoded "https://" prefix using K8S_HOST and K8S_PORT; update the code to explicitly validate the scheme before making the call (parse URL with urllib.parse.urlparse and assert parsed.scheme == "https") or add a clear inline suppression/comment documenting why the hardcoded scheme is safe, and keep the existing request creation (urllib.request.Request) and urlopen call unchanged if validation passes; reference APISERVER, URL, K8S_HOST, K8S_PORT, urllib.request.Request and urlopen when making the edit.
projects/jump_ci/testing/test.py (1)
220-220: Prefer avoiding shell globbing for artifact moves.
`run.run('mv .../*__jump_ci_* ...')` is brittle when no files match and depends on shell behavior. Use `pathlib`/`shutil` moves directly.

Suggested refactor
```diff
+import shutil
 ...
-    run.run(f'mv {env.ARTIFACT_DIR}/*__jump_ci_* {jump_ci_artifacts}/')
+    for artifact in env.ARTIFACT_DIR.glob("*__jump_ci_*"):
+        shutil.move(str(artifact), str(jump_ci_artifacts / artifact.name))
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/test.py` at line 220, Replace the shell mv with Python file operations: use pathlib.Path(env.ARTIFACT_DIR).glob('*__jump_ci_*') to iterate matching files and shutil.move (or Path.rename) to move each into the jump_ci_artifacts directory, handling the case of zero matches and ensuring jump_ci_artifacts exists; update the call site where run.run(f'mv {env.ARTIFACT_DIR}/*__jump_ci_* {jump_ci_artifacts}/') is used to perform these safe moves instead.
projects/jump_ci/toolbox/jump_ci_release_lock/tasks/main.yml (1)
10-12: Consider using `ansible.builtin.file` with `state: absent` for idiomatic cleanup.

While `rm -rf` works, the `file` module is more idiomatic and provides better error handling.

Proposed fix
```diff
 - name: Release the lock directory
-  command:
-    rm -rf "{{ cluster_lock_dir }}"
+  ansible.builtin.file:
+    path: "{{ cluster_lock_dir }}"
+    state: absent
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_release_lock/tasks/main.yml` around lines 10 - 12, Replace the raw shell task named "Release the lock directory" that runs rm -rf "{{ cluster_lock_dir }}" with an Ansible file module task: use ansible.builtin.file with path set to "{{ cluster_lock_dir }}" and state: absent to remove the directory idiomatically and get better error handling; keep the original task name and any necessary become/become_user settings when editing the task that references cluster_lock_dir.
projects/jump_ci/testing/config.yaml (2)
6-6: Consider removing unused configuration keys.

The comment "key/value never actually used" appears on lines 6, 24, and 30 for `secrets.dir.name`. If these are truly unused, consider removing them to reduce configuration noise, or document why they're kept (e.g., for future use or documentation purposes).
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/config.yaml` at line 6, Remove or document the unused configuration key "secrets.dir.name" found in the config.yaml (currently present on lines referenced in the review); either delete all occurrences of the "secrets.dir.name" key to eliminate dead/unused config, or add a comment next to each occurrence explaining why it is intentionally kept (e.g., reserved for future use or required for external tooling) so reviewers know it is not accidental. Ensure consistency by applying the chosen approach for every instance of "secrets.dir.name" in this file.
54-56: Missing blank line before `ssh_tunnel` section.

The `ssh_tunnel` block appears immediately after the last `job_name_to_preset` entry without a separating blank line, reducing readability.

Proposed fix
```diff
   pull-ci-crc-org-llama.cpp-main-topsail: llama_cpp_presubmit
+
 ssh_tunnel:
```
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/config.yaml` around lines 54 - 56, Add a blank line separator above the ssh_tunnel block to improve readability and follow YAML section spacing; update the area after the last job_name_to_preset entries (e.g., after "pull-ci-crc-org-llama.cpp-main-topsail: llama_cpp_presubmit") so that the "ssh_tunnel:" key begins on its own paragraph separated by a single empty line.
projects/jump_ci/toolbox/jump_ci_prepare_step/tasks/main.yml (2)
24-29: The `mode` parameter is ignored when `state: absent`.

When removing a directory with `state: absent`, the `mode` parameter has no effect. Consider removing it to avoid confusion.

Proposed fix
```diff
 - name: Ensure that the step artifacts dirname directory is empty
   ansible.builtin.file:
     state: absent
     path: "{{ step_artifact_dir }}"
-    mode: '0755'
   no_log: true # can be too chatty when cleaning an existing artifacts directly
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_prepare_step/tasks/main.yml` around lines 24 - 29, The ansible task named "Ensure that the step artifacts dirname directory is empty" uses ansible.builtin.file with state: absent but also specifies mode: '0755' which is ignored; remove the mode parameter from that task (and any duplicate mode entries tied to ansible.builtin.file where state: absent) so the task only declares state: absent and path: "{{ step_artifact_dir }}" (keep no_log: true).
56-60: Potential shell injection via YAML content in `to_nice_yaml`.

Using a here-doc with variable interpolation could be problematic if `jump_ci_prepare_step_variables_overrides_dict` contains special characters. Consider using the `copy` module with `content` instead.

Proposed fix
```diff
 - name: Populate the variables_overrides files
-  shell: |
-    cat <<EOF >> {{ step_artifact_dir }}/artifacts/variable_overrides.yaml
-    {{ (jump_ci_prepare_step_variables_overrides_dict or {}) | to_nice_yaml }}
-    EOF
+  ansible.builtin.copy:
+    content: "{{ (jump_ci_prepare_step_variables_overrides_dict or {}) | to_nice_yaml }}"
+    dest: "{{ step_artifact_dir }}/artifacts/variable_overrides.yaml"
+    mode: '0644'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_prepare_step/tasks/main.yml` around lines 56 - 60, Replace the shell here-doc task that writes variable_overrides.yaml with an Ansible copy task using the content parameter to safely render the YAML; locate the task named "Populate the variables_overrides files" and replace the shell block that uses {{ jump_ci_prepare_step_variables_overrides_dict | to_nice_yaml }} and target {{ step_artifact_dir }}/artifacts/variable_overrides.yaml with a copy task that sets dest to {{ step_artifact_dir }}/artifacts/variable_overrides.yaml and content to the rendered YAML using the same variable (e.g., {{ jump_ci_prepare_step_variables_overrides_dict | default({}) | to_nice_yaml }}), optionally specifying mode/owner as needed to preserve permissions.
projects/jump_ci/toolbox/jump_ci_ensure_lock/tasks/main.yml (1)
39-42: Consider using the `slurp` module instead of `command: cat`.

Using the `slurp` module is more idiomatic in Ansible for reading file contents and avoids shell invocation overhead.

Proposed fix
```diff
 - name: Get the owner of the lock
-  command:
-    cat "{{ cluster_lock_dir }}/lock_owner"
-  register: lock_owner_content_cmd
+  ansible.builtin.slurp:
+    src: "{{ cluster_lock_dir }}/lock_owner"
+  register: lock_owner_content

 - name: Fail if the owner is different
-  when: lock_owner_content_cmd.stdout | trim != jump_ci_ensure_lock_owner
-  fail: msg="Unexpected lock owner. Found '{{ lock_owner_content_cmd.stdout }}', expected '{{ jump_ci_ensure_lock_owner }}'"
+  when: (lock_owner_content.content | b64decode | trim) != jump_ci_ensure_lock_owner
+  fail: msg="Unexpected lock owner. Found '{{ lock_owner_content.content | b64decode | trim }}', expected '{{ jump_ci_ensure_lock_owner }}'"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_ensure_lock/tasks/main.yml` around lines 39 - 42, Replace the "Get the owner of the lock" task that uses command: cat with the Ansible slurp module to read "{{ cluster_lock_dir }}/lock_owner" (so you avoid shell invocation); register the result into the same variable (lock_owner_content_cmd) and remember that slurp returns base64-encoded content (use "{{ lock_owner_content_cmd.content | b64decode }}" wherever the raw file contents are consumed later). Ensure the task uses the path constructed from cluster_lock_dir and keeps the same register name to minimize downstream changes.
projects/jump_ci/toolbox/jump_ci_prepare_step/templates/entrypoint.sh.j2 (1)
60-62: Quote `$interactive` to prevent word-splitting issues.

When `$interactive` is empty, leaving it unquoted can cause subtle parsing issues in some shell versions. Additionally, consider that `|& tee` may mask the container's exit code since `pipefail` returns the rightmost non-zero exit code in the pipeline, and `tee` typically succeeds.

Proposed fix
```diff
-    $interactive \
+    ${interactive:-} \
     "$image" \
-    bash -ce "$cmd" |& tee -a "{{ step_artifact_dir }}/run.log"
+    bash -ce "$cmd" 2>&1 | tee -a "{{ step_artifact_dir }}/run.log"
+
+exit_code=${PIPESTATUS[0]}
+if [[ $exit_code -ne 0 ]]; then
+    echo "Container exited with code $exit_code"
+    exit $exit_code
+fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_prepare_step/templates/entrypoint.sh.j2` around lines 60 - 62, Quote the $interactive variable to avoid word-splitting (replace $interactive with "$interactive" in the command invocation that also uses "$image" and bash -ce "$cmd") and enable pipefail so the container's exit code isn't masked by tee (change the bash invocation to include -o pipefail, e.g. use bash -o pipefail -ce "$cmd") so the pipeline with |& tee -a "{{ step_artifact_dir }}/run.log" will return a failing exit status if the container command fails.
projects/jump_ci/testing/run (2)
6-11: Logger configuration order issue.

Line 7 sets the log level before `configure_logging()` is called on line 11, which may overwrite this setting. Move the level setting after `configure_logging()` or rely solely on the library's configuration.

♻️ Suggested fix
```diff
 import logging

-logging.getLogger().setLevel(logging.INFO)
-
 from projects.core.library import env, config, run, configure_logging

 configure_logging()
+logging.getLogger().setLevel(logging.INFO)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/run` around lines 6 - 11, The call to logging.getLogger().setLevel(logging.INFO) runs before configure_logging(), potentially being overridden; move the logging.getLogger().setLevel(logging.INFO) call to after configure_logging() in this file (or remove the manual set and rely solely on projects.core.library.configure_logging()) so that the desired log level is applied after configure_logging() initializes handlers/formatters.
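The ordering pitfall is easy to reproduce with the standard library alone (a standalone sketch using `logging.basicConfig`, not the project's `configure_logging`):

```python
import logging

root = logging.getLogger()

# Setting the level first...
root.setLevel(logging.INFO)
assert root.level == logging.INFO

# ...is silently overwritten when configuration runs afterwards with
# its own level (force=True makes basicConfig reconfigure even if
# handlers already exist, mirroring a fresh setup).
logging.basicConfig(level=logging.WARNING, force=True)
print(logging.getLevelName(root.level))  # → WARNING

# Setting the level *after* configuration sticks.
root.setLevel(logging.INFO)
print(logging.getLevelName(root.level))  # → INFO
```

The same mechanism applies whenever a configuration helper sets the root level: whichever call runs last wins, so the explicit `setLevel` belongs after it.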
35-40: `main()` returns `None`, causing `sys.exit(None)`, which exits with 0.

`fire.Fire()` returns the result of the invoked command, which may not be an exit code. If commands don't return exit codes, consider handling this explicitly.

♻️ Suggested fix
```diff
 if __name__ == "__main__":
     try:
-        sys.exit(main())
+        main()
+        sys.exit(0)
     except subprocess.CalledProcessError as e:
         logging.error(f"Command '{e.cmd}' failed --> {e.returncode}")
         sys.exit(1)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/run` around lines 35 - 40, The current wrapper calls sys.exit(main()) which can pass None (exiting 0); change to call main() into a variable, normalize its value, and pass an integer to sys.exit: rc = main(); if rc is None: sys.exit(0); elif isinstance(rc, int): sys.exit(rc); else: sys.exit(0) (or cast/handle other return types as needed). Also ensure this covers the case where main() delegates to fire.Fire() so return values from fire.Fire() are normalized before calling sys.exit; keep existing subprocess.CalledProcessError handling.
projects/jump_ci/orchestration/ci.py (1)
41-41: Use explicit `Optional` type hint.

PEP 484 prohibits implicit `Optional`. Use `str | None` or `Optional[str]` explicitly.

♻️ Suggested fix
```diff
+from typing import Optional
+
 class SkeletonTestRunner:
     ...
-    def execute_command(self, command: str, description: str = None) -> bool:
+    def execute_command(self, command: str, description: Optional[str] = None) -> bool:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/orchestration/ci.py` at line 41, The type hint for the optional description parameter in execute_command must be explicit; update the method signature of execute_command to use an explicit optional type (e.g., description: Optional[str] or description: str | None) and ensure you import typing.Optional if using Optional so the annotation follows PEP 484.
projects/core/image/Containerfile.lightweight (1)
3-3: The `MAINTAINER` instruction is deprecated.

The `MAINTAINER` instruction has been deprecated since Docker 1.13. Use a `LABEL` instead for maintainer information.

♻️ Suggested fix
-MAINTAINER Perf&Scale PSAP Team <team-psap@redhat.com>
+LABEL maintainer="Perf&Scale PSAP Team <team-psap@redhat.com>"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/image/Containerfile.lightweight` at line 3: replace the deprecated `MAINTAINER` instruction with a `LABEL`; locate the `MAINTAINER` line (the instruction containing "Perf&Scale PSAP Team <team-psap@redhat.com>") and change it to an equivalent `LABEL` entry that records the maintainer metadata (e.g., use a `maintainer` key on `LABEL`) so the information remains but conforms to modern Docker/Containerfile syntax.

projects/jump_ci/testing/tunnelling.py (1)
80-82: Potential resource leak: file descriptor not closed on function exit.

`extra_vars_file` is opened via `utils.get_tmp_fd()` and flushed, but there's no guarantee it gets closed if an exception occurs later. Consider using a context manager or ensuring cleanup.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/testing/tunnelling.py` around lines 80-82: the temporary file descriptor returned by `utils.get_tmp_fd()` (assigned to `extra_vars_fd_path` and `extra_vars_file`) may leak because `extra_vars_file` is not guaranteed to be closed on early return/exception; update the code around the `extra_vars_fd_path`/`extra_vars_file` usage to ensure `extra_vars_file` is always closed (either by using a context manager around `utils.get_tmp_fd()` or wrapping subsequent logic in try/finally and calling `extra_vars_file.close()` in the finally block) and ensure any needed file flush/unlink is preserved.

projects/core/launcher/topsail_launcher.py (4)
209-209: Use an explicit `Optional` type hint instead of an implicit `None` default.

PEP 484 prohibits implicit `Optional`. Use `Optional[List[str]]` or `List[str] | None` explicitly.

♻️ Suggested fix
-    def build_image(self, extra_packages: List[str] = None) -> int:
+    def build_image(self, extra_packages: Optional[List[str]] = None) -> int:

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/launcher/topsail_launcher.py` at line 209, The function signature for build_image uses an implicit None default for extra_packages; update it to an explicit optional type (e.g., change def build_image(self, extra_packages: Optional[List[str]] = None) -> int or use List[str] | None) and add the necessary import (from typing import Optional) if not already present, then run type checks/linters to ensure no other call sites need annotation adjustments; reference: build_image.
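The explicit-`Optional` pattern the review asks for can be sketched as follows. The function body is a hypothetical stand-in (the real `build_image` lives on the launcher class and builds a container image); only the signature style is the point here:

```python
from typing import List, Optional

# Implicit Optional, flagged by type checkers under PEP 484:
#     def build_image(extra_packages: List[str] = None) -> int: ...
# Explicit Optional, as the review suggests:
def build_image(extra_packages: Optional[List[str]] = None) -> int:
    # Normalize None to an empty list before use.
    packages = extra_packages or []
    print(f"Building image with extra packages: {packages}")
    return 0  # exit-code style return value

rc1 = build_image()
rc2 = build_image(["git", "jq"])
```

On Python 3.10+, `extra_packages: list[str] | None = None` expresses the same thing without the `typing` import.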
60-61: Consider catching specific exceptions instead of bare `Exception`.

Catching `Exception` broadly can mask unexpected errors. Consider catching `yaml.YAMLError` and `IOError` specifically for more targeted error handling.

♻️ Suggested fix
-        except Exception as e:
+        except (yaml.YAMLError, IOError) as e:
             click.echo(f"⚠️ Warning: Failed to load config {config_file}: {e}", err=True)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/launcher/topsail_launcher.py` around lines 60 - 61, The current except Exception block around loading the config (where config_file is used) is too broad; replace it with targeted exception handlers: catch yaml.YAMLError to handle YAML parsing errors and IOError/OSError (or FileNotFoundError) to handle file read issues, and keep a final generic except Exception as a last-resort that re-raises or logs differently if you must; ensure each handler uses the same click.echo call to report the config_file and the specific exception message so callers see precise failure causes when invoking the code that loads the config (refer to the existing except block that prints "⚠️ Warning: Failed to load config {config_file}: {e}").
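The narrowing pattern can be demonstrated with a small stdlib-only sketch. The review targets `yaml.YAMLError`/`IOError`; `json.JSONDecodeError` and `OSError` stand in here so the example needs no third-party dependency, and `load_config` is a hypothetical helper, not the launcher's actual method:

```python
import json

def load_config(config_file: str) -> dict:
    # Catch only the failure modes we expect -- parse errors and I/O
    # errors -- instead of a bare `except Exception`, so genuine bugs
    # (e.g., a NameError) still surface.
    try:
        with open(config_file) as f:
            return json.load(f)
    except (json.JSONDecodeError, OSError) as e:
        print(f"Warning: Failed to load config {config_file}: {e}")
        return {}

result = load_config("/no/such/config.json")
print(result)  # {} -- FileNotFoundError is an OSError subclass
```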
177-187: Use iterable unpacking for cleaner list construction.

Per the static analysis hint (RUF005), prefer iterable unpacking over list concatenation.
♻️ Suggested fix
-        cmd = [
-            'podman', 'run', '--rm', '-it',
-            '--security-opt', 'label=disable',
-            '--cgroupns', 'host',
-            '--network=host',
-            '-v', f'{topsail_home}:{topsail_home}:Z',
-            '-w', working_dir or topsail_home
-        ] + env_args + [
-            self.config['container_image'],
-            '/bin/bash', '-c', full_command
-        ]
+        cmd = [
+            'podman', 'run', '--rm', '-it',
+            '--security-opt', 'label=disable',
+            '--cgroupns', 'host',
+            '--network=host',
+            '-v', f'{topsail_home}:{topsail_home}:Z',
+            '-w', working_dir or topsail_home,
+            *env_args,
+            self.config['container_image'],
+            '/bin/bash', '-c', full_command
+        ]

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/launcher/topsail_launcher.py` around lines 177 - 187, The cmd list is built via concatenation which triggers RUF005; instead use iterable unpacking to merge env_args into the list. Replace the current construction that does "[] + env_args + [...]" with a single list literal that includes "*env_args" in the proper place, keeping the same items (e.g., 'podman', flags, '-v' with topsail_home, '-w' using working_dir or topsail_home, then *env_args, then self.config['container_image'], '/bin/bash', '-c', full_command) so the resulting cmd variable contains the identical sequence but using unpacking of env_args instead of list concatenation.
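A minimal sketch of the RUF005 rewrite, using hypothetical placeholder values in place of the launcher's real configuration, shows that unpacking produces exactly the same list as concatenation:

```python
topsail_home = "/home/user/topsail"        # hypothetical values
working_dir = None
env_args = ["-e", "FOO=bar"]
container_image = "quay.io/example/topsail"
full_command = "echo hello"

# List concatenation (what RUF005 flags):
cmd_concat = (
    ["podman", "run", "--rm", "-w", working_dir or topsail_home]
    + env_args
    + [container_image, "/bin/bash", "-c", full_command]
)

# Iterable unpacking builds the same list in a single literal:
cmd_unpacked = [
    "podman", "run", "--rm", "-w", working_dir or topsail_home,
    *env_args,
    container_image, "/bin/bash", "-c", full_command,
]

assert cmd_concat == cmd_unpacked
print(cmd_unpacked)
```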
190-196: Remove extraneous f-string prefixes.

Lines 190 and 193 have f-strings without any placeholders. Remove the `f` prefix.

♻️ Suggested fix
-    click.echo(f"🚀 Executing container command:")
+    click.echo("🚀 Executing container command:")
     click.echo(f"   Command: {' '.join(cmd)}")
     if env_vars:
-        click.echo(f"   Environment variables:")
+        click.echo("   Environment variables:")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/launcher/topsail_launcher.py` around lines 190-196: in the block that prints the container command (the `click.echo` calls that reference `cmd` and `env_vars`), remove the unnecessary f-string prefixes on the two echo calls that have no interpolation, ensuring any lines that actually interpolate (like the one joining `cmd` or printing k/v) remain f-strings; locate the echo statements referencing `cmd` and `env_vars` to make the edits.

projects/jump_ci/toolbox/jump_ci_prepare_topsail/tasks/main.yml (1)
20-23: Missing `pipefail` support for the shell module.

The shell command uses `set -o pipefail`, but Ansible's shell module defaults to `/bin/sh`. Add an explicit bash invocation or use the `executable: /bin/bash` argument.

♻️ Suggested fix
 - name: Compute the index of the next logs directory
   shell: set -o pipefail; printf "%03d" $(ls "{{ artifact_dir }}/0*" -d 2>/dev/null | wc -l)
   register: next_artifact_dir_count
+  args:
+    executable: /bin/bash

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/jump_ci/toolbox/jump_ci_prepare_topsail/tasks/main.yml` around lines 20-23: the shell task using `set -o pipefail` (the `shell:` block that registers `next_artifact_dir_count` and references `artifact_dir`) must run under bash; update the task to explicitly run bash by adding `executable: /bin/bash` or change the shell invocation to `bash -lc` so `pipefail` is honored. Locate the shell task that prints the padded count of `{{ artifact_dir }}/0*` and add the `executable: /bin/bash` option (or switch to `bash -lc`) to ensure `set -o pipefail` is effective.

projects/core/launcher/README.md (1)
162-173: Add a language specifier to the fenced code block.

The sample output block is missing a language specifier. Use `text` or `console` for plain text output.

♻️ Suggested fix
 **Sample output:**
-```
+```text
 📊 TOPSAIL Development Environment Status:

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/launcher/README.md` around lines 162-173: the fenced code block in the README sample output lacks a language specifier; update the triple-backtick that opens the block to include a plain-text specifier (e.g., change ``` to ```text or ```console) so the block is correctly recognized as plain output in renderers. Locate the sample output block in the TOPSAIL Development Environment Status section and add the specifier to the opening fence.

projects/core/image/Containerfile (1)
33-37: Consider pinning the OpenShift CLI version for reproducibility.

Using `stable` for the OpenShift CLI version may lead to inconsistent builds over time. Consider pinning to a specific version for reproducible builds.

♻️ Suggested fix
 # Install OpenShift CLI (oc)
-RUN curl -LO "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz" && \
+# Pin to a specific version for reproducible builds
+ARG OC_VERSION=4.14.0
+RUN curl -LO "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${OC_VERSION}/openshift-client-linux.tar.gz" && \

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/core/image/Containerfile` around lines 33-37: the step that downloads the OpenShift CLI uses the floating `stable` channel in the URL (the `RUN` that curls "https://mirror.openshift.com/.../stable/openshift-client-linux.tar.gz"), which makes builds non-reproducible; change this to pin a specific release by replacing `stable` with a concrete version string (or introduce an `ARG` like `OPENSHIFT_CLIENT_VERSION` and use it in the URL), update the filename reference (openshift-client-linux.tar.gz) accordingly, and optionally add a checksum verification step to the same `RUN` that extracts and installs `oc` to ensure integrity.

projects/skeleton/orchestration/ci.py (1)
171-189: Don't report cleanup success when teardown commands fail.

Every `execute_command()` result is ignored here, so the phase returns `0` even if resource cleanup fails.

Proposed fix
     def cleanup(self):
 @@
-        self.execute_command(
+        success = True
+
+        success &= self.execute_command(
             "echo 'Cleaning up test data and temporary files'",
             "Cleaning up test data"
         )
 @@
-        self.execute_command(
+        success &= self.execute_command(
             "echo 'Resetting environment to initial state'",
             "Resetting environment"
         )
 @@
-        self.execute_command(
+        success &= self.execute_command(
             "echo 'Generating final test report'",
             "Generating final report"
         )
-        self.log("Cleanup phase completed!", "success")
-        return 0
+        if not success:
+            self.log("Cleanup phase failed!", "error")
+            return 1
+
+        self.log("Cleanup phase completed!", "success")
+        return 0

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@projects/skeleton/orchestration/ci.py` around lines 171 - 189, The cleanup block in ci.py ignores return values from execute_command, causing the method to always return 0 and log success even if teardown fails; change the cleanup phase to collect each execute_command() result (e.g., into a list or a cumulative exit_code), check for any non-zero exit codes after running the commands, log failure via self.log(...) when any command fails, and return a non-zero exit code (or the first failing code) instead of always returning 0; update references to execute_command, the cleanup sequence, and the final self.log("Cleanup phase completed!", ...) accordingly so success is only logged when all execute_command calls succeed.
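The accumulate-then-report pattern can be sketched standalone. Both functions below are hypothetical stand-ins for the skeleton runner's methods (the real `execute_command` runs a subprocess); the point is that each step's result feeds the final exit code instead of being discarded:

```python
def execute_command(command: str, description: str) -> bool:
    # Stand-in for the runner's execute_command(); True means success.
    print(f"{description}: {command}")
    return "fail" not in command

def cleanup() -> int:
    # Accumulate every step's result so one failed teardown step
    # turns into a non-zero exit code instead of being ignored.
    success = True
    success &= execute_command("rm -rf /tmp/testdata", "Cleaning up test data")
    success &= execute_command("reset-env --fail", "Resetting environment")
    success &= execute_command("make report", "Generating final report")
    return 0 if success else 1

print(cleanup())  # 1 -- the failing reset step is no longer masked
```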
new_args_str = new_variable_overrides[f"PR_POSITIONAL_ARGS"] = " ".join(new_args)
new_variable_overrides["PR_POSITIONAL_ARGS"] = variable_overrides_dict["PR_POSITIONAL_ARG_0"]
new_variable_overrides["PR_POSITIONAL_ARG_0"] = variable_overrides_dict["PR_POSITIONAL_ARG_0"]
`PR_POSITIONAL_ARGS` gets overwritten with the wrong value.

Line 23 computes the rewritten positional args, but Line 24 immediately replaces them with `PR_POSITIONAL_ARG_0`. This drops the rewritten arg string.
Suggested fix
- new_args_str = new_variable_overrides[f"PR_POSITIONAL_ARGS"] = " ".join(new_args)
- new_variable_overrides["PR_POSITIONAL_ARGS"] = variable_overrides_dict["PR_POSITIONAL_ARG_0"]
+ new_args_str = " ".join(new_args)
+ new_variable_overrides["PR_POSITIONAL_ARGS"] = new_args_str
new_variable_overrides["PR_POSITIONAL_ARG_0"] = variable_overrides_dict["PR_POSITIONAL_ARG_0"]
🧰 Tools
🪛 Ruff (0.15.5)
[error] 23-23: f-string without any placeholders
Remove extraneous f prefix
(F541)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@projects/jump_ci/testing/test.py` around lines 23 - 25, The code sets
new_args_str into new_variable_overrides["PR_POSITIONAL_ARGS"] but then
immediately overwrites it with variable_overrides_dict["PR_POSITIONAL_ARG_0"];
update the assignments so that "PR_POSITIONAL_ARGS" keeps the joined new_args
(new_args_str) and do not replace it with PR_POSITIONAL_ARG_0—only set
new_variable_overrides["PR_POSITIONAL_ARG_0"] =
variable_overrides_dict["PR_POSITIONAL_ARG_0"] (or otherwise copy
PR_POSITIONAL_ARG_0 to its own key) while leaving
new_variable_overrides["PR_POSITIONAL_ARGS"] as new_args_str; locate this logic
around new_args_str, new_variable_overrides, and variable_overrides_dict to make
the change.
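The corrected assignment order can be sketched with sample data (the values below are hypothetical, mirroring the variables named in the review):

```python
# Hypothetical inputs mirroring the review's variables.
variable_overrides_dict = {"PR_POSITIONAL_ARG_0": "jump-ci"}
new_args = ["skeleton", "abcd", "1234"]

new_variable_overrides = {}
new_args_str = " ".join(new_args)
# Keep the rewritten args; only copy PR_POSITIONAL_ARG_0 separately,
# instead of overwriting PR_POSITIONAL_ARGS with it.
new_variable_overrides["PR_POSITIONAL_ARGS"] = new_args_str
new_variable_overrides["PR_POSITIONAL_ARG_0"] = variable_overrides_dict["PR_POSITIONAL_ARG_0"]

print(new_variable_overrides)
```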
variable_overrides_file = pathlib.Path(os.environ.get("ARTIFACT_DIR")) / "variable_overrides.yaml"

config_test_args = config.project.get_config("project.args", None) \
    if test_args is None else None

if test_args is None and not variable_overrides_file.exists() and not config_test_args:
    logging.fatal(f"File '{variable_overrides_file}' does not exist, neither --test_args nor project.args have been passed. Please specify one of them :/")
    raise SystemExit(1)
An unset `ARTIFACT_DIR` can crash the command before validation.

At Line 97, `pathlib.Path(os.environ.get("ARTIFACT_DIR"))` raises if `ARTIFACT_DIR` is unset. This can fail even when `--test_args` is provided.
Suggested fix
- variable_overrides_file = pathlib.Path(os.environ.get("ARTIFACT_DIR")) / "variable_overrides.yaml"
+ artifact_dir = os.environ.get("ARTIFACT_DIR")
+ variable_overrides_file = pathlib.Path(artifact_dir) / "variable_overrides.yaml" if artifact_dir else None
...
- if test_args is None and not variable_overrides_file.exists() and not config_test_args:
+ if test_args is None and (not variable_overrides_file or not variable_overrides_file.exists()) and not config_test_args:🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@projects/jump_ci/testing/test.py` around lines 97 - 104, The code constructs
variable_overrides_file via pathlib.Path(os.environ.get("ARTIFACT_DIR")) which
raises when ARTIFACT_DIR is unset; change this to first read the env var into a
local (e.g., artifact_dir = os.environ.get("ARTIFACT_DIR")) and only build
pathlib.Path(artifact_dir) if artifact_dir is truthy (or use
os.environ.get("ARTIFACT_DIR", "") to provide a safe default) so the subsequent
exists() check won't crash; keep the existing conditional using test_args and
config_test_args but use the safe variable_overrides_file (or guard the exists()
call) when verifying presence.
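The guard described above can be sketched as a small helper; `resolve_overrides_file` is a hypothetical name introduced for illustration, not a function from the project:

```python
import pathlib

def resolve_overrides_file(env: dict):
    # Guard the unset case instead of letting pathlib.Path(None)
    # raise a TypeError before any argument validation runs.
    artifact_dir = env.get("ARTIFACT_DIR")
    if not artifact_dir:
        return None
    return pathlib.Path(artifact_dir) / "variable_overrides.yaml"

print(resolve_overrides_file({}))                           # None
print(resolve_overrides_file({"ARTIFACT_DIR": "/tmp/ci"}))  # /tmp/ci/variable_overrides.yaml
```

Callers then check `if overrides_file is None or not overrides_file.exists():` alongside the `--test_args`/`project.args` fallbacks.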
|
/test jump-ci |
|
/test jump-ci |
|
/test jump-ci skeleton abcd 1234 |
|
/test jump-ci skeleton abcd 1234 |
1 similar comment
|
/test jump-ci skeleton abcd 1234 |
force-pushed from a9d96e1 to 8c9d858
|
/test jump-ci skeleton abcd 1234 |
|
/test jump-ci skeleton abcd 1234 |
|
/test jump-ci skeleton abcd 1234 |
|
/test jump-ci skeleton abcd 1234 |
|
merging this version as a first version of the test harness CI wrapper |