chore: sync template from jebel-quant/rhiza@main#268
Conversation
Walkthrough

This PR migrates repository metadata from a template repository source to a new identity (jebel-quant/rhiza), introduces comprehensive automation scripts for version bumping and releases, implements dynamic Python version matrix generation in CI workflows, expands the Makefile with new targets for release management and template syncing, and adds extensive test coverage for the new infrastructure.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45–75 minutes

Areas requiring extra attention:
Possibly related PRs
Poem
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 11
🧹 Nitpick comments (13)
tests/test_rhiza/conftest.py (1)
118-119: Potential issue with `git checkout -b master` after clone.

After cloning from a bare remote with `HEAD` pointing to `refs/heads/master`, the local clone might already be on a branch tracking the remote's default. Using `checkout -b master` could fail if the branch already exists. Consider using `checkout -B master` (force create/reset) or checking whether the branch exists first:

```diff
- subprocess.run(["git", "checkout", "-b", "master"], check=True)
+ subprocess.run(["git", "checkout", "-B", "master"], check=True)
```

tests/test_rhiza/test_docstrings.py (2)
89-97: Minor: misleading module count in failure summary.

The summary message says `{total_tests} tests across {len(failed_modules)} module(s)`, but `total_tests` includes tests from all modules (including passing ones), not just the failed modules. Consider clarifying:

```diff
 msg = (
-    f"Doctest summary: {total_tests} tests across {len(failed_modules)} module(s)\n"
+    f"Doctest summary: {total_tests} total tests, {len(failed_modules)} module(s) with failures\n"
     f"Failures: {total_failures}\n"
     f"Failed modules:\n{formatted}"
 )
```
28-33: Redundant warning emission.

Both `warnings.warn()` and `logger.warning()` are called for the same import error. Consider using only the logger, since tests already have logging configured and warnings may clutter test output differently.

tests/test_rhiza/test_readme.py (1)
64-74: Inconsistent regex pattern style.

The regex at line 68 uses escaped backticks while the module-level constants at lines 14-16 use raw triple backticks. Consider using the module-level `CODE_BLOCK` constant for consistency:

```diff
 def test_readme_code_is_syntactically_valid(self, root):
     """Python code blocks in README should be syntactically valid."""
     readme = root / "README.md"
     content = readme.read_text(encoding="utf-8")
-    code_blocks = re.findall(r"\`\`\`python\n(.*?)\`\`\`", content, re.DOTALL)
+    code_blocks = CODE_BLOCK.findall(content)
     for i, code in enumerate(code_blocks):
```

tests/test_rhiza/test_sync_script.py (2)
157-158: Remove debug print statements or replace with logger.

These raw `print()` calls are inconsistent with the rest of the test file, which uses `logger.debug()`. They will add noise to test output.

```diff
- print("STDOUT:", result.stdout)
- print("STDERR:", result.stderr)
```
106-119: String replacement for script patching is fragile.

This multi-line string replacement will silently fail to match if the source `sync.sh` changes formatting or wording. Consider verifying the replacement occurred, or use a more robust injection mechanism (e.g., an environment variable to override the clone step):

```diff
 test_sync_content = sync_content.replace(
     "# Clone the template repository\n"
     ...
 )
+
+# Verify the replacement actually happened
+assert test_sync_content != sync_content, "Failed to patch sync script - source may have changed"
```

tests/test_rhiza/test_makefile.py (1)
195-200: Assertion is too permissive.

The check `"uv" in content.lower()` could match unrelated strings (e.g., comments, other variable names). Consider being more specific:

```diff
- assert "UV_BIN" in content or "uv" in content.lower()
+ assert "UV_BIN" in content
```

.github/workflows/scripts/version_max.py (2)
14-15: Hardcoded path navigation and candidate versions may require maintenance.

The script uses `parents[3]` to locate `pyproject.toml` and hardcodes the Python version candidates. Consider adding comments indicating these will need updates:

- `parents[3]` relies on the script staying at `.github/workflows/scripts/`
- `CANDIDATES` will need expansion as new Python versions are released (e.g., 3.15, 3.16)
- The fallback "3.13" on line 53 will eventually become outdated

```diff
-PYPROJECT = Path(__file__).resolve().parents[3] / "pyproject.toml"
-CANDIDATES = ["3.11", "3.12", "3.13", "3.14"]  # extend as needed
+PYPROJECT = Path(__file__).resolve().parents[3] / "pyproject.toml"  # Assumes script is at .github/workflows/scripts/
+CANDIDATES = ["3.11", "3.12", "3.13", "3.14"]  # TODO: Update as new Python versions are released
```
49-53: Fallback behavior may mask configuration issues.

The script silently falls back to "3.13" if `pyproject.toml` is missing. In a CI context, this could hide problems where the repository is not properly checked out or structured. Consider logging to stderr, or failing outright, when `pyproject.toml` is missing in CI environments:

```diff
 if __name__ == "__main__":
     if PYPROJECT.exists():
         print(json.dumps(max_supported_version()))
     else:
+        import sys
+        print("Warning: pyproject.toml not found, using fallback version", file=sys.stderr)
         print(json.dumps("3.13"))
```

tests/test_rhiza/test_structure.py (1)
29-33: Rename or reword this test to match what it actually verifies.

The docstring mentions `tests/test_config_templates/`, but the assertions only check that `tests/test_rhiza/conftest.py` exists. Consider renaming to something like `test_root_fixture_origin_conftest_exists` or updating the docstring.

.github/scripts/bump.sh (1)
101-107: Default-branch detection is brittle; prefer `refs/remotes/origin/HEAD`.

Parsing `git remote show origin` output (line 102) can break with localization or format changes. A more robust approach:

```diff
- DEFAULT_BRANCH=$(git remote show origin | grep 'HEAD branch' | cut -d' ' -f5)
+ DEFAULT_BRANCH=$(git symbolic-ref --quiet --short refs/remotes/origin/HEAD 2>/dev/null | sed 's|^origin/||')
  if [ -z "$DEFAULT_BRANCH" ]; then
      printf "%b[ERROR] Could not determine default branch from remote%b\n" "$RED" "$RESET"
      exit 1
  fi
```

Makefile (2)
54-63: Tighten script path quoting to avoid edge-case shell parsing issues.

These work today, but the mixed quote style is easy to regress; prefer quoting the full path consistently:

```diff
- "${CUSTOM_SCRIPTS_FOLDER}"/build-extras.sh; \
+ "${CUSTOM_SCRIPTS_FOLDER}/build-extras.sh"; \
@@
- "${CUSTOM_SCRIPTS_FOLDER}"/post-release.sh; \
+ "${CUSTOM_SCRIPTS_FOLDER}/post-release.sh"; \
```

Also applies to: 183-192
134-160: (Optional) Consider moving the `docs` recipe into `.github/scripts/docs.sh`.

The `docs` target is doing a lot of logic inline; extracting it to a script would simplify maintenance and reduce Makefile complexity (matching the direction you're already taking with `bump.sh`/`release.sh`).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (33)
- `.editorconfig` (1 hunks)
- `.github/actions/setup-project/action.yml` (5 hunks)
- `.github/scripts/bump.sh` (1 hunks)
- `.github/scripts/customisations/build-extras.sh` (1 hunks)
- `.github/scripts/customisations/post-release.sh` (1 hunks)
- `.github/scripts/marimushka.sh` (2 hunks)
- `.github/scripts/release.sh` (1 hunks)
- `.github/scripts/sync.sh` (1 hunks)
- `.github/scripts/update-readme-help.sh` (1 hunks)
- `.github/workflows/book.yml` (3 hunks)
- `.github/workflows/ci.yml` (1 hunks)
- `.github/workflows/deptry.yml` (1 hunks)
- `.github/workflows/devcontainer.yml` (1 hunks)
- `.github/workflows/marimo.yml` (2 hunks)
- `.github/workflows/pre-commit.yml` (2 hunks)
- `.github/workflows/release.yml` (7 hunks)
- `.github/workflows/scripts/version_matrix.py` (1 hunks)
- `.github/workflows/scripts/version_max.py` (1 hunks)
- `.github/workflows/sync.yml` (2 hunks)
- `.pre-commit-config.yaml` (2 hunks)
- `CONTRIBUTING.md` (2 hunks)
- `Makefile` (7 hunks)
- `pytest.ini` (1 hunks)
- `ruff.toml` (1 hunks)
- `tests/test_rhiza/conftest.py` (1 hunks)
- `tests/test_rhiza/test_bump_script.py` (1 hunks)
- `tests/test_rhiza/test_docstrings.py` (1 hunks)
- `tests/test_rhiza/test_git_repo_fixture.py` (1 hunks)
- `tests/test_rhiza/test_makefile.py` (1 hunks)
- `tests/test_rhiza/test_readme.py` (1 hunks)
- `tests/test_rhiza/test_release_script.py` (1 hunks)
- `tests/test_rhiza/test_structure.py` (1 hunks)
- `tests/test_rhiza/test_sync_script.py` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (9)
.github/scripts/release.sh (1)
.github/scripts/bump.sh (1)
prompt_continue(39-52)
.github/scripts/sync.sh (1)
.github/scripts/release.sh (1)
show_usage(21-29)
tests/test_rhiza/test_release_script.py (1)
tests/test_rhiza/conftest.py (1)
git_repo(101-158)
.github/scripts/bump.sh (1)
.github/scripts/release.sh (2)
prompt_continue (63-76), prompt_yes_no (79-91)
tests/test_rhiza/test_structure.py (1)
tests/test_rhiza/conftest.py (1)
root(82-87)
tests/test_rhiza/test_docstrings.py (1)
tests/test_rhiza/conftest.py (2)
logger (91-97), root (82-87)
tests/test_rhiza/test_makefile.py (1)
tests/test_rhiza/conftest.py (2)
logger (91-97), root (82-87)
tests/test_rhiza/test_git_repo_fixture.py (1)
tests/test_rhiza/conftest.py (1)
git_repo(101-158)
tests/test_rhiza/test_bump_script.py (1)
tests/test_rhiza/conftest.py (1)
git_repo(101-158)
🪛 actionlint (1.7.9)
.github/workflows/devcontainer.yml
65-65: shellcheck reported issue in this script: SC2086:info:5:30: Double quote to prevent globbing and word splitting
(shellcheck)
82-82: shellcheck reported issue in this script: SC2086:info:2:26: Double quote to prevent globbing and word splitting
(shellcheck)
82-82: shellcheck reported issue in this script: SC2086:info:5:25: Double quote to prevent globbing and word splitting
(shellcheck)
92-92: shellcheck reported issue in this script: SC2086:info:1:90: Double quote to prevent globbing and word splitting
(shellcheck)
97-97: shellcheck reported issue in this script: SC2086:info:24:34: Double quote to prevent globbing and word splitting
(shellcheck)
126-126: shellcheck reported issue in this script: SC2086:info:2:40: Double quote to prevent globbing and word splitting
(shellcheck)
.github/workflows/release.yml
83-83: shellcheck reported issue in this script: SC2086:info:2:20: Double quote to prevent globbing and word splitting
(shellcheck)
232-232: shellcheck reported issue in this script: SC2086:info:2:33: Double quote to prevent globbing and word splitting
(shellcheck)
232-232: shellcheck reported issue in this script: SC2086:info:5:34: Double quote to prevent globbing and word splitting
(shellcheck)
244-244: shellcheck reported issue in this script: SC2086:info:5:30: Double quote to prevent globbing and word splitting
(shellcheck)
262-262: shellcheck reported issue in this script: SC2086:info:1:90: Double quote to prevent globbing and word splitting
(shellcheck)
267-267: shellcheck reported issue in this script: SC2086:info:24:34: Double quote to prevent globbing and word splitting
(shellcheck)
327-327: shellcheck reported issue in this script: SC2034:warning:1:1: OWNER_LC appears unused. Verify use (or export if used externally)
(shellcheck)
327-327: shellcheck reported issue in this script: SC2086:info:9:6: Double quote to prevent globbing and word splitting
(shellcheck)
🪛 checkmake (0.2.2)
Makefile
[warning] 117-117: Target body for "marimushka" exceeds allowed length of 5 (7).
(maxbodylength)
[warning] 134-134: Target body for "docs" exceeds allowed length of 5 (28).
(maxbodylength)
[warning] 204-204: Target body for "customisations" exceeds allowed length of 5 (6).
(maxbodylength)
🪛 GitHub Actions: CI
tests/test_rhiza/test_structure.py
[error] 35-35: Root resolution: repository root detection failed due to presence of tests/conftest.py at repository root; test expected root to be above tests/conftest.py.
🪛 Ruff (0.14.8)
tests/test_rhiza/test_release_script.py
18-18: subprocess call: check for execution of untrusted input
(S603)
24-24: Starting a process with a partial executable path
(S607)
37-37: Starting a process with a partial executable path
(S607)
40-40: subprocess call: check for execution of untrusted input
(S603)
52-52: Starting a process with a partial executable path
(S607)
53-53: Starting a process with a partial executable path
(S607)
55-55: subprocess call: check for execution of untrusted input
(S603)
69-69: subprocess call: check for execution of untrusted input
(S603)
82-82: Starting a process with a partial executable path
(S607)
83-83: Starting a process with a partial executable path
(S607)
89-89: subprocess call: check for execution of untrusted input
(S603)
105-105: subprocess call: check for execution of untrusted input
(S603)
105-105: Starting a process with a partial executable path
(S607)
108-108: Starting a process with a partial executable path
(S607)
109-109: Starting a process with a partial executable path
(S607)
114-114: Starting a process with a partial executable path
(S607)
115-115: Starting a process with a partial executable path
(S607)
116-116: Starting a process with a partial executable path
(S607)
119-119: subprocess call: check for execution of untrusted input
(S603)
tests/test_rhiza/test_sync_script.py
70-70: subprocess call: check for execution of untrusted input
(S603)
70-70: Starting a process with a partial executable path
(S607)
144-144: subprocess call: check for execution of untrusted input
(S603)
tests/test_rhiza/test_makefile.py
1-1: Docstring contains ambiguous ‑ (NON-BREAKING HYPHEN). Did you mean - (HYPHEN-MINUS)?
(RUF002)
tests/test_rhiza/test_git_repo_fixture.py
61-61: Starting a process with a partial executable path
(S607)
72-72: Starting a process with a partial executable path
(S607)
83-83: Starting a process with a partial executable path
(S607)
94-94: Starting a process with a partial executable path
(S607)
105-105: Starting a process with a partial executable path
(S607)
110-110: Starting a process with a partial executable path
(S607)
120-120: Starting a process with a partial executable path
(S607)
tests/test_rhiza/test_bump_script.py
23-23: subprocess call: check for execution of untrusted input
(S603)
34-34: Starting a process with a partial executable path
(S607)
45-45: subprocess call: check for execution of untrusted input
(S603)
52-52: Starting a process with a partial executable path
(S607)
63-63: Starting a process with a partial executable path
(S607)
64-64: Starting a process with a partial executable path
(S607)
71-71: subprocess call: check for execution of untrusted input
(S603)
85-85: subprocess call: check for execution of untrusted input
(S603)
99-99: Starting a process with a partial executable path
(S607)
103-103: subprocess call: check for execution of untrusted input
(S603)
114-114: Starting a process with a partial executable path
(S607)
119-119: subprocess call: check for execution of untrusted input
(S603)
133-133: subprocess call: check for execution of untrusted input
(S603)
tests/test_rhiza/conftest.py
108-108: subprocess call: check for execution of untrusted input
(S603)
108-108: Starting a process with a partial executable path
(S607)
110-110: Starting a process with a partial executable path
(S607)
113-113: subprocess call: check for execution of untrusted input
(S603)
113-113: Starting a process with a partial executable path
(S607)
119-119: Starting a process with a partial executable path
(S607)
152-152: Starting a process with a partial executable path
(S607)
153-153: Starting a process with a partial executable path
(S607)
154-154: Starting a process with a partial executable path
(S607)
155-155: Starting a process with a partial executable path
(S607)
156-156: Starting a process with a partial executable path
(S607)
🪛 Shellcheck (0.11.0)
.github/scripts/release.sh
[warning] 64-64: In POSIX sh, 'local' is undefined.
(SC3043)
[warning] 80-80: In POSIX sh, 'local' is undefined.
(SC3043)
[warning] 139-139: This { is literal. Check expression (missing ;/\n?) or quote it.
(SC1083)
[warning] 139-139: This } is literal. Check expression (missing ;/\n?) or quote it.
(SC1083)
.github/scripts/bump.sh
[warning] 40-40: In POSIX sh, 'local' is undefined.
(SC3043)
[warning] 56-56: In POSIX sh, 'local' is undefined.
(SC3043)
🔇 Additional comments (51)
ruff.toml (1)
1-2: Repository header update; no functional changes. The header comment correctly reflects the new repository ownership. Configuration remains unchanged.
.github/workflows/deptry.yml (1)
1-2: Repository header update; workflow logic unchanged. The header reflects the new repository ownership. Workflow triggers and job steps remain identical.
.editorconfig (1)
1-2: Repository header update; editor configuration unchanged. Header metadata updated. All EditorConfig rules remain identical.
pytest.ini (1)
1-9: Pytest live logging configuration is reasonable. The DEBUG-level live logging and extra summary output (`-ra`) will improve test execution visibility and ease troubleshooting. Monitor CI log volume if this configuration generates excessive output, but the trade-off favors debuggability.
.github/workflows/sync.yml (2)
1-2: Repository header update; sync workflow metadata corrected. The header reflects the new repository ownership.

30-30: Sync workflow guard correctly excludes the new template repository. The condition now prevents sync from running on jebel-quant/rhiza (the template source), ensuring the workflow only runs on dependent repositories. This aligns with the repository ownership change.
Verify that the sync workflow is intended to run only on dependent repositories and not on jebel-quant/rhiza itself by confirming:
- jebel-quant/rhiza is the source-of-truth template repository (not a dependent)
- No other workflows or scripts still reference the old 'tschm/.config-templates' guard condition
.github/scripts/customisations/build-extras.sh (1)
1-48: Well-structured optional customization hook with clear guidance.

The script is properly documented, uses strict bash options, and provides safe defaults with helpful examples. It correctly guides developers on how to customize without breaking template sync updates.
.pre-commit-config.yaml (2)
1-2: Repository header update; pre-commit configuration metadata corrected. The header reflects the new repository ownership.
45-53: The local hook is configured but non-functional: the README marker it targets does not exist.

The script `.github/scripts/update-readme-help.sh` is safe and well-implemented, with proper error handling and cleanup. However, it searches for the marker "Run `make help` to see all available targets:" in README.md, which is not present. The script will exit gracefully (status 0) without making updates, making the hook an effective no-op. To activate this hook, add a section to README.md with the required marker and code-block structure that the script expects.

CONTRIBUTING.md (1)
14-14: The change from `make check` to `make fmt` is correct. The `make check` target does not exist in the Makefile; `make fmt` is the actual pre-check target that runs `pre-commit run --all-files` to validate both formatting and linting. The documentation update aligns the guidance with the available Makefile targets.

.github/workflows/scripts/version_matrix.py (1)
49-53: Fallback behavior when `pyproject.toml` is missing.

When `pyproject.toml` doesn't exist, the script falls back to printing all `CANDIDATES`. This is reasonable for template repositories, but consider whether returning all candidates (including potentially unreleased Python 3.14) might cause CI failures in downstream repos that haven't configured their `pyproject.toml` yet.

.github/scripts/update-readme-help.sh (2)
86-90: Minor: `HELP_TEMP` cleanup missing in error path.

On the error path (awk failure), `HELP_TEMP` is cleaned up. However, looking more closely, lines 84 and 88 both clean up `HELP_TEMP`, so this is actually handled correctly. Good defensive cleanup.
28-77: AWK script for README update looks correct.

The AWK logic properly:
- Detects the help section marker
- Preserves the marker line and optional blank line
- Replaces the old help block content with new output
- Handles missing marker gracefully with exit code 2
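The marker-replacement idea can be sketched with a toy marker. The marker text and file names below are assumed for this sketch only; they are not the actual strings `update-readme-help.sh` uses:

```shell
# Toy illustration of replacing a stale block below a marker line with
# freshly generated output. Marker and file names are illustrative only.
cat > readme.md <<'EOF'
intro
<!-- MARKER -->
old help
tail
EOF

awk '
  /<!-- MARKER -->/ { print; print "new help"; skip = 1; next }  # emit fresh block
  skip && /^tail$/  { skip = 0 }                                 # old block ends here
  !skip             { print }                                    # copy everything else
' readme.md > updated.md

cat updated.md
```

The same structure generalizes: detect the marker, emit the regenerated content, suppress lines until the end of the old block, and pass everything else through unchanged.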
tests/test_rhiza/conftest.py (1)
100-157: Well-structured test fixture for git repository setup.

The `git_repo` fixture provides excellent test isolation by:

- Using `monkeypatch.chdir` for safe directory changes
- Creating a complete mock environment with `uv` simulation
- Properly configuring git user for commits
- Copying real scripts from the repository for integration testing
The static analysis warnings (S603/S607) about subprocess calls with partial paths are false positives in this test context - the fixture uses hardcoded, trusted `git` commands.

tests/test_rhiza/test_docstrings.py (1)
36-103: Good test design with proper cleanup and edge case handling.

The test properly:

- Uses `monkeypatch.syspath_prepend` for automatic cleanup
- Gracefully skips when `src/` is missing
- Provides detailed failure summaries
- Skips when no doctests are found (avoiding false failures)
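The skip-when-empty behavior can be illustrated with the stdlib `doctest` API alone. This is a standalone sketch, not the project's test code:

```python
# Standalone sketch: count doctest attempts/failures for a docstring so a
# runner can skip modules with zero doctests instead of failing them.
import doctest

docstring = """
>>> 2 + 3
5
"""

parser = doctest.DocTestParser()
test = parser.get_doctest(docstring, {}, name="demo", filename="<demo>", lineno=0)
runner = doctest.DocTestRunner(verbose=False)
runner.run(test)
failed, attempted = runner.summarize(verbose=False)
print(attempted, failed)  # a runner would skip when attempted == 0
```

Checking `attempted` before asserting on failures is what prevents a doctest-free module from being reported as a false failure.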
.github/workflows/devcontainer.yml (2)
97-121: Robust image name validation and construction.

The image name logic properly:

- Defaults to a sanitized repo name when `DEVCONTAINER_IMAGE_NAME` is not set
- Validates the format matches Docker's naming requirements
- Assembles the full image path with registry and owner
The SC2086 shellcheck warnings are acceptable in this GitHub Actions context since the values originate from trusted GitHub context variables.
130-137: Build step configured correctly with `push: never`. The workflow correctly builds the image without pushing (validation-only), which aligns with the documented purpose. Publishing is handled separately in release.yml.
.github/scripts/sync.sh (3)
186-203: `is_file_excluded` only matches exact paths.

The function only checks for exact path matches. If a user wants to exclude an entire directory (e.g., `exclude: .github/workflows`), it won't automatically exclude files within it, such as `.github/workflows/ci.yml`, when those files are listed individually in `include`. However, the nested exclusion logic at lines 252-271 handles the case where a directory is synced and exclusions within that directory are applied post-copy. This is a reasonable approach given the YAML structure expects explicit paths.
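If directory-prefix exclusion were ever wanted in the matcher itself, a POSIX-compatible sketch (illustrative only, not the actual sync.sh code) could look like:

```shell
# Sketch only: exclude a path when it equals an excluded entry OR lives
# under an excluded directory. Not the actual sync.sh implementation.
is_excluded() {
  path="$1"; shift
  for ex in "$@"; do
    case "$path" in
      "$ex" | "$ex"/*) return 0 ;;   # exact match, or inside the directory
    esac
  done
  return 1
}

is_excluded ".github/workflows/ci.yml" ".github/workflows" && echo "excluded"
is_excluded "README.md" ".github/workflows" || echo "kept"
```

The `case` pattern pair (`"$ex" | "$ex"/*`) is what extends an exact match to a prefix match without needing any external tools.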
273-276: Self-update safety mechanism is well-designed.

The script backs up itself before syncing, immediately restores itself after the `.github` directory sync, and handles deferred self-updates at the end. This prevents the script from being overwritten mid-execution, which could cause undefined behavior.
167-169: Good cleanup handling with trap.

The trap ensures `TEMP_DIR` is cleaned up on exit, interrupt, or termination signals. This prevents leftover temporary files.

tests/test_rhiza/test_readme.py (2)
19-45: Test assumes code and result blocks are aligned.

The test merges all code blocks together and all result blocks together, then compares. This works if:
- Code blocks are meant to run sequentially as one script
- Result blocks represent the cumulative expected output
If code blocks are independent examples with their own expected outputs, consider pairing them explicitly or documenting this assumption.
48-74: Good edge case coverage for README validation.

The test class properly validates:
- File existence
- UTF-8 encoding readability
- Syntax validity of all Python code blocks
This provides good safety nets before attempting execution.
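The syntax-only validation described here boils down to calling `compile()` without `exec()`. A minimal sketch with an inline README string follows; the regex mirrors the idea and is not necessarily the project's `CODE_BLOCK` constant:

```python
# Sketch: extract fenced python blocks from README text and compile each
# one to check syntax without executing it. Names here are illustrative.
import re

CODE_BLOCK = re.compile(r"```python\n(.*?)```", re.DOTALL)

readme = "# Demo\n\n```python\nx = 1 + 1\nprint(x)\n```\n"

blocks = CODE_BLOCK.findall(readme)
for i, code in enumerate(blocks):
    compile(code, f"<readme block {i}>", "exec")  # raises SyntaxError on bad code
print(len(blocks))
```

Because `compile()` never runs the code, this check is safe to apply even to examples with side effects, which is what makes it a useful safety net before any execution-based test.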
tests/test_rhiza/test_sync_script.py (1)
182-213: LGTM! Good fixture validation tests that ensure the sync script exists and has expected properties.
tests/test_rhiza/test_makefile.py (3)
27-46: LGTM! Good use of `try/finally` to restore the working directory. The fixture properly isolates each test by copying only the Makefile to a temporary directory.

49-76: LGTM! Well-structured helper with appropriate logging and error handling.

79-168: LGTM! Comprehensive dry-run tests for Makefile targets with good coverage of both target commands and environment variable exports.
tests/test_rhiza/test_git_repo_fixture.py (2)
1-11: LGTM! Good module docstring explaining the purpose of these fixture validation tests.

13-138: LGTM! Comprehensive test coverage for the `git_repo` fixture. The tests properly validate directory structure, file presence, executability, git initialization state, and environment modifications. The static analysis warnings about partial executable paths for `git` commands are expected and acceptable in this test context.

.github/workflows/marimo.yml (2)
78-89: LGTM! Consistent implementation of dynamic Python version resolution, matching the pattern used in other workflows.

94-103: `uvx uv run` is an unusual pattern that likely defeats the intended isolation.

According to the uv documentation, `uvx` (an alias for `uv tool run`) invokes tools from PyPI in a temporary isolated environment. Using `uvx uv run` would attempt to invoke `uv` as a tool, which is unnecessary since `uv` is already available from the setup-project action. This adds overhead without benefit. For the stated goal of creating a "fresh ephemeral environment" where the notebook must explicitly handle dependencies, use `uv run --isolated` instead. This ensures the notebook runs in an isolated environment without access to the project's installed packages, which aligns with the comments about forcing the notebook to bootstrap itself.

.github/workflows/pre-commit.yml (1)
33-41: LGTM! Good implementation of dynamic Python version resolution. Using a script for version determination keeps the logic centralized and maintainable. The `version_max.py` script exists at the expected path and is properly executable.

.github/workflows/book.yml (2)

43-47: LGTM! Dynamic Python version detection added. The script-based version detection aligns with the dynamic matrix approach introduced in the CI workflow.
52-54: The secrets configuration is handled gracefully by design.

The `uv-extra-index-url` parameter is optional in the setup-project action and is used consistently across five workflows (book.yml, ci.yml, marimo.yml, pre-commit.yml, release.yml). The action safely handles missing values with a conditional check (`if [[ -n "${{ inputs.uv-extra-index-url }}" ]]`), only setting the environment variable when the secret is provided. Repositories that require a private index will need to configure the `UV_EXTRA_INDEX_URL` secret, while those that don't will work without it; no action required.

.github/scripts/release.sh (4)
139-139: False positive from shellcheck: the `@{u}` syntax is valid.

The shellcheck warning about literal braces on line 139 is a false positive. The `@{u}` syntax is valid git shorthand for the upstream branch.
165-177: Good defensive check for existing remote tags. The script properly checks both local and remote tag existence before attempting to create/push, preventing duplicate releases.

128-134: LGTM! Uncommitted changes check prevents incomplete releases. The working tree cleanliness check ensures no uncommitted changes before release, which is the correct behavior for release automation.

149-163: Robust branch synchronization logic. The script correctly handles all three cases: behind, ahead, and diverged branches with appropriate prompts and actions.
.github/workflows/scripts/version_max.py (1)
18-46: LGTM! Version detection logic is sound. The function properly reads `pyproject.toml`, parses the `requires-python` specifier, and returns the maximum matching version with appropriate error handling.
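A sketch of that resolution logic, assuming the third-party `packaging` library is available; this is illustrative and not the actual `version_max.py` source:

```python
# Sketch of resolving the highest supported Python from a requires-python
# specifier. Assumes the `packaging` library; names are illustrative.
from packaging.specifiers import SpecifierSet

CANDIDATES = ["3.11", "3.12", "3.13", "3.14"]

def max_supported(requires_python: str, fallback: str = "3.13") -> str:
    """Return the highest candidate matching the specifier, else the fallback."""
    spec = SpecifierSet(requires_python)
    matching = [v for v in CANDIDATES if spec.contains(v)]
    return matching[-1] if matching else fallback

print(max_supported(">=3.11,<3.14"))
```

Because `CANDIDATES` is already sorted ascending, taking the last matching entry yields the maximum without any extra version comparison.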
.github/scripts/customisations/post-release.sh (1)
1-46: LGTM! Well-documented customization hook. The post-release script provides a clear extension point with comprehensive documentation and commented examples. The bash shebang is appropriate here (unlike the `release.sh` POSIX requirement).
tests/test_rhiza/test_release_script.py (5)
11-30: LGTM! Comprehensive test coverage for tag creation. The test properly simulates user input for prompts and verifies both the return code and tag creation.

32-59: Good coverage of tag existence scenarios. Both local and remote tag existence are tested with appropriate expected behaviors (abort on user decline vs. hard failure for remote).

61-73: LGTM! Uncommitted changes correctly block release. The test verifies that uncommitted changes prevent release, which is critical for ensuring clean release states.

75-96: Good test for the ahead-of-remote scenario. The test properly verifies the prompt flow and output when the local branch is ahead, including the display of unpushed commits.

98-122: Thorough test for the behind-remote scenario. The test correctly simulates a behind state by using a second clone to push changes to the remote, then verifies the script fails appropriately.
.github/scripts/marimushka.sh (1)
36-50: Well-structured refactor to consolidated export.

The change from per-file processing to a single `marimushka export` invocation with proper path resolution is a good improvement. The absolute path normalization for `UVX_BIN` and the working directory change to `MARIMO_FOLDER` ensure correct operation.

.github/actions/setup-project/action.yml (3)
28-33: LGTM! Input changes improve action clarity.

Making `python-version` required is good practice for explicit dependency management. The new `uv-extra-index-url` input properly supports private package registries.

86-88: Proper conditional environment variable handling.

The conditional export of `UV_EXTRA_INDEX_URL` correctly handles the optional input without polluting the environment when not provided.
43-46: Good migration to the official setup-uv action.

Using the official `astral-sh/setup-uv` action is more maintainable than custom setup logic. Version 0.9.17 is the latest release (December 9, 2025) with no breaking changes, making this a solid choice for current deployments.

.github/workflows/ci.yml (2)
21-36: LGTM! Dynamic matrix generation improves maintainability. The two-stage approach (generate-matrix → test) allows Python versions to be centrally managed in the `version_matrix.py` script rather than hardcoded in workflow files. The debug step aids troubleshooting.
38-44: Good use of `fail-fast: false` for comprehensive testing.

Setting `fail-fast: false` ensures all Python versions are tested even if one fails, providing complete compatibility feedback.

.github/workflows/release.yml (1)
314-323: No action needed: `fromJSON()` usage is correct.

The `version_max.py` script explicitly uses `json.dumps()` on line 51 to serialize its output, producing valid JSON. Since `max_supported_version()` returns a string like `"3.13"`, `json.dumps()` outputs the properly formatted JSON string `"3.13"` (with quotes). Using `fromJSON()` to deserialize this is the correct pattern and ensures the value is properly parsed as a string for the `python-version` parameter. Removing `fromJSON()` would pass the raw JSON string literal (including quotes) to `setup-python`, which would be incorrect.
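The quoting behavior is easy to confirm in isolation:

```python
# json.dumps on a plain string keeps the quotes in the output, which is
# exactly why the workflow must run the value through fromJSON().
import json

raw = json.dumps("3.13")
print(raw)                        # the text 3.13 wrapped in double quotes
assert raw == '"3.13"'
assert json.loads(raw) == "3.13"  # fromJSON() equivalent recovers the bare string
```

Without the round-trip through `fromJSON()`, the quoted literal would be forwarded verbatim as the Python version.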
```sh
# Helper function to prompt user to continue
prompt_continue() {
    local message="$1"
    printf "\n%b[PROMPT] %s Continue? [y/N] %b" "$YELLOW" "$message" "$RESET"
    read -r answer
    case "$answer" in
        [Yy]*)
            return 0
            ;;
        *)
            printf "%b[INFO] Aborted by user%b\n" "$YELLOW" "$RESET"
            exit 0
            ;;
    esac
}

# Helper function to prompt user for yes/no
prompt_yes_no() {
    local message="$1"
    printf "\n%b[PROMPT] %s [y/N] %b" "$YELLOW" "$message" "$RESET"
    read -r answer
    case "$answer" in
        [Yy]*)
            return 0
            ;;
        *)
            return 1
            ;;
    esac
}
```
POSIX compliance bug: remove `local` (or switch to bash, but the script claims POSIX).

Shellcheck is correct: `local` isn't POSIX (lines 40 and 56). If this runs under dash (a common `/bin/sh`), it can break.
Suggested minimal fix:

```diff
 prompt_continue() {
-    local message="$1"
+    message="$1"
     printf "\n%b[PROMPT] %s Continue? [y/N] %b" "$YELLOW" "$message" "$RESET"
     read -r answer
     case "$answer" in
@@
 }

 prompt_yes_no() {
-    local message="$1"
+    message="$1"
     printf "\n%b[PROMPT] %s [y/N] %b" "$YELLOW" "$message" "$RESET"
     read -r answer
     case "$answer" in
```
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 40-40: In POSIX sh, 'local' is undefined.
(SC3043)
[warning] 56-56: In POSIX sh, 'local' is undefined.
(SC3043)
🤖 Prompt for AI Agents
.github/scripts/bump.sh around lines 38 to 67: the functions use the non-POSIX
keyword `local` which breaks in shells like dash; remove `local` from both
prompt_continue and prompt_yes_no and assign positional args using plain POSIX
variable assignment (e.g. message="$1") so the script stays POSIX-compliant, or
alternatively change the script shebang to `#!/usr/bin/env bash` if you prefer
to keep `local`.
```sh
# Determine the new version using uv version with --dry-run first
if [ -n "$TYPE" ]; then
    printf "%b[INFO] Bumping version using: %s%b\n" "$BLUE" "$TYPE" "$RESET"
    NEW_VERSION=$("$UV_BIN" version --bump "$TYPE" --dry-run --short 2>/dev/null)
    if [ $? -ne 0 ] || [ -z "$NEW_VERSION" ]; then
        printf "%b[ERROR] Failed to calculate new version with type: %s%b\n" "$RED" "$TYPE" "$RESET"
        exit 1
    fi
else
    # Validate the version format by having uv try it with --dry-run
    if ! "$UV_BIN" version "$VERSION" --dry-run >/dev/null 2>&1; then
        printf "%b[ERROR] Invalid version format: %s%b\n" "$RED" "$VERSION" "$RESET"
        printf "uv rejected this version. Please use a valid semantic version.\n"
        exit 1
    fi
    NEW_VERSION="$VERSION"
fi
```
set -e + command substitution makes the $? check ineffective; rewrite as an if assignment.
Right now (Line 122-126), if uv fails, many shells will exit immediately due to set -e, so the error handling won’t run as intended.
Suggested rewrite:
```diff
-NEW_VERSION=$("$UV_BIN" version --bump "$TYPE" --dry-run --short 2>/dev/null)
-if [ $? -ne 0 ] || [ -z "$NEW_VERSION" ]; then
+if ! NEW_VERSION=$("$UV_BIN" version --bump "$TYPE" --dry-run --short 2>/dev/null) || [ -z "$NEW_VERSION" ]; then
     printf "%b[ERROR] Failed to calculate new version with type: %s%b\n" "$RED" "$TYPE" "$RESET"
     exit 1
 fi
```
🤖 Prompt for AI Agents
In .github/scripts/bump.sh around lines 119 to 135: the current
command-substitution NEW_VERSION=$("$UV_BIN" version --bump "$TYPE" --dry-run
--short 2>/dev/null) can cause the script to exit under set -e before you check
$?, so replace it with an if-assignment pattern that runs the command and
branches on its success, e.g. if NEW_VERSION=$("$UV_BIN" version --bump "$TYPE"
--dry-run --short 2>/dev/null); then proceed using $NEW_VERSION; else print the
existing error message and exit 1; remove the subsequent explicit $? check and
the -z test by handling the empty/failure case in the else branch.
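The behavioral difference is easy to reproduce outside of bump.sh. This standalone Python sketch drives `/bin/sh` with a failing command (`false` stands in for a failing `uv` invocation):

```python
import subprocess

# Plain assignment: under `set -e`, the shell exits as soon as the failing
# command substitution completes, so the `echo` never runs.
broken = "set -e\nNEW_VERSION=$(false)\necho unreachable\n"
r1 = subprocess.run(["sh", "-c", broken], capture_output=True, text=True)

# `if !` assignment: the failure is part of the conditional, so `set -e`
# does not abort and the error branch runs instead.
fixed = (
    "set -e\n"
    'if ! NEW_VERSION=$(false) || [ -z "$NEW_VERSION" ]; then\n'
    "    echo handled\n"
    "fi\n"
)
r2 = subprocess.run(["sh", "-c", fixed], capture_output=True, text=True)

print(r1.returncode, repr(r1.stdout))   # non-zero exit, no output
print(r2.returncode, r2.stdout.strip())  # 0 handled
```

The same holds under dash and bash: exit-status checks after a plain assignment are dead code once `set -e` is in effect.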
```sh
case "$UVX_BIN" in
    /*) ;;
    */*) UVX_BIN="$CURRENT_DIR/$UVX_BIN" ;;
    *) ;;
esac
```
🧩 Analysis chain
🏁 Script executed:
rg -n "UVX_BIN" --type=sh -A 2 -B 2
Repository: tschm/jquantstats
Length of output: 209
🏁 Script executed:
# Find shell files in the repository
fd -e sh
Repository: tschm/jquantstats
Length of output: 43
🏁 Script executed:
# Check the repository structure
git ls-files | head -20
Repository: tschm/jquantstats
Length of output: 661
🏁 Script executed:
cat -n .github/scripts/marimushka.sh
Repository: tschm/jquantstats
Length of output: 2156
🏁 Script executed:
# Search for references to UVX_BIN or how it's used in other scripts
grep -r "UVX_BIN\|uvx\|marimushka" .github/scripts/ --include="*.sh" -B 2 -A 2
Repository: tschm/jquantstats
Length of output: 3037
Handle bare command edge case in UVX_BIN path normalization.
The case statement converts relative paths with slashes to absolute paths, but bare commands like "uvx" are left unchanged. Since the script changes to MARIMO_FOLDER at line 47 before executing UVX_BIN at line 50, bare commands will be resolved from that directory rather than from the original working directory, which may cause failures if "uvx" isn't in PATH or accessible from the notebook directory. Consider either: (1) converting bare commands to absolute paths by searching PATH at initialization, or (2) documenting that UVX_BIN must be an absolute or relative path, not a bare command name.
🤖 Prompt for AI Agents
In .github/scripts/marimushka.sh around lines 40 to 44, the case block leaves
bare command names (e.g. "uvx") unchanged which causes execution after cd to
MARIMO_FOLDER to resolve the command relative to that directory; update the
script to detect bare commands and resolve them to an absolute executable path
from PATH (e.g., use command -v or which at initialization and set UVX_BIN to
that full path if found), falling back to leaving it unchanged only if not
found, or alternatively add a clear comment/documentation near the variable that
UVX_BIN must be an absolute or relative path (not a bare command name) if you
prefer to enforce documentation instead of auto-resolving.
```sh
prompt_continue() {
    local message="$1"
    printf "\n%b[PROMPT] %s Continue? [y/N] %b" "$YELLOW" "$message" "$RESET"
    read -r answer
    case "$answer" in
        [Yy]*)
            return 0
            ;;
        *)
            printf "%b[INFO] Aborted by user%b\n" "$YELLOW" "$RESET"
            exit 0
            ;;
    esac
}
```
POSIX compliance issue: local keyword is not POSIX-compliant.
The script header claims POSIX-sh compatibility (line 7), but the local keyword used in functions is a bash/ksh extension not available in pure POSIX sh.
Apply this diff to make the script truly POSIX-compliant or update the header to indicate bash requirement:
Option 1: Remove `local` (POSIX-compliant)

```diff
 prompt_continue() {
-    local message="$1"
+    message="$1"
     printf "\n%b[PROMPT] %s Continue? [y/N] %b" "$YELLOW" "$message" "$RESET"
```

Option 2: Change the shebang to bash

```diff
-#!/bin/sh
+#!/bin/bash
 # Release script
```

Based on learnings: the codebase appears to favor POSIX compliance where possible (see bump.sh, which has a similar structure).
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 64-64: In POSIX sh, 'local' is undefined.
(SC3043)
🤖 Prompt for AI Agents
.github/scripts/release.sh around lines 63 to 76: the function uses the
non‑POSIX local keyword (local message="$1"), which breaks POSIX sh
compatibility; to fix, either (preferred for POSIX) remove local and use a plain
variable assignment (message=$1) and ensure no name collision with global vars
(or unset message at end), or (if you require bash features) change the script
shebang/header to explicitly use bash so local is allowed.
```diff
 # The deployment is conditional based on whether the repository is a fork and the PUBLISH_COMPANION_BOOK variable is set
 # If the repository is a fork, deployment is skipped to avoid unauthorized publishing
 # If PUBLISH_COMPANION_BOOK is not set, it defaults to allowing deployment
 - name: Deploy to GitHub Pages
-  if: ${{ !github.event.repository.fork && github.event.repository.visibility == 'public'}}
+  if: ${{ !github.event.repository.fork && (vars.PUBLISH_COMPANION_BOOK == 'true' || vars.PUBLISH_COMPANION_BOOK == '') }}
   uses: actions/deploy-pages@v4 # Official GitHub Pages deployment action
   continue-on-error: true
```
Deployment condition may have unexpected default behavior.
The condition vars.PUBLISH_COMPANION_BOOK == 'true' || vars.PUBLISH_COMPANION_BOOK == '' will deploy when the variable is unset (empty string). This default-to-deploy behavior could be unexpected for forked repositories where the variable is not configured.
Consider whether the default should be to skip deployment for safety:
```diff
-if: ${{ !github.event.repository.fork && (vars.PUBLISH_COMPANION_BOOK == 'true' || vars.PUBLISH_COMPANION_BOOK == '') }}
+if: ${{ !github.event.repository.fork && vars.PUBLISH_COMPANION_BOOK == 'true' }}
```

If the current "default to deploy" behavior is intentional, consider adding a comment explaining this choice.
🤖 Prompt for AI Agents
.github/workflows/book.yml lines 69-75: the deployment condition currently
treats an empty or unset PUBLISH_COMPANION_BOOK as true, causing unexpected
deployments for forks; update the if to only deploy when
vars.PUBLISH_COMPANION_BOOK == 'true' (or explicitly check for an allowed
value), and/or add an explicit check for unset/null to skip deployment (e.g.,
require exact 'true'), and if keeping the current default-to-deploy behavior is
intentional, add a clear comment explaining that decision for safety.
```yaml
- name: Checkout Code
  uses: actions/checkout@v6
  with:
    fetch-depth: 0
    ref: ${{ github.ref }}
```
🧩 Analysis chain
🌐 Web query:
actions/checkout@v6 GitHub Marketplace latest version
💡 Result:
Latest v6 tag is v6.0.1 — released Dec 2, 2025. [1][2]
Sources:
- actions/checkout Releases (v6.0.1). [1]
- actions/checkout Marketplace README (v6). [2]
🌐 Web query:
actions/setup-python@v6 GitHub Marketplace
💡 Result:
actions/setup-python@v6 is the official GitHub Action for installing a specific Python/PyPy/GraalPy version in workflows. Key points:
- Official action (actions/setup-python) listed on GitHub Marketplace. Supports installing Python/PyPy/GraalPy, adding to PATH, and optional dependency caching (pip, pipenv, poetry). [1]
- v6 breaking change: runtime upgraded from Node 20 → Node 24; runner compatibility requires GitHub Actions runner v2.327.1 or later. See v6 release notes for details. [2][3]
- v6 adds features such as the pip-install input and GraalPy/windows builds (see v6.1.0 changelog). Example usage: uses: actions/setup-python@v6 with: python-version: '3.13'. [1][3]
Sources:
[1] GitHub Marketplace — Setup Python action.
[2] actions/setup-python repository (README).
[3] actions/setup-python releases (v6 / v6.1.0 changelog).
🌐 Web query:
actions/upload-artifact@v6 GitHub Marketplace
💡 Result:
There is a GitHub Marketplace entry for the official actions/upload-artifact action (Marketplace + repo). The supported major version is v4 (the repo and Marketplace docs show v4 as the current major release and v1–v3 are deprecated). I couldn’t find an official actions/upload-artifact@v6 release — use actions/upload-artifact@v4 (or the repository docs) instead. [1][2]
Sources:
- Marketplace: Upload a Build Artifact / @actions/upload-artifact. [1]
- Official repo: actions/upload-artifact README (shows v4, deprecation notes for v1–v3). [2]
🌐 Web query:
actions/download-artifact@v7 GitHub Marketplace
💡 Result:
actions/download-artifact@v7 is the official GitHub Action for downloading workflow artifacts. Key points:
- v7 released with Node.js 24 runtime and requires Actions runner >= 2.327.1 (self-hosted runners must be updated). [1]
- It’s the official download-artifact action (see docs for usage; v4+ is current recommended baseline for most examples). [2]
- Note GitHub’s prior deprecation notices for older artifact action versions — migrate to newer versions when needed. [3]
Sources:
[1] actions/download-artifact repository (v7 notes). [2] GitHub Actions docs — storing/downloading artifacts. [3] GitHub Changelog — artifact actions deprecation notice.
Critical: actions/upload-artifact@v6 does not exist; the current major version is v4.
actions/checkout@v6 (v6.0.1), actions/setup-python@v6 (v6.1.0+), and actions/download-artifact@v7 are valid and exist in the GitHub Marketplace. However, actions/upload-artifact@v6 is not a real release—v1–v3 are deprecated, and the current supported major is v4. This will cause the workflow to fail immediately when it reaches the upload-artifact step.
Update actions/upload-artifact@v6 to actions/upload-artifact@v4 at the applicable locations. Also note that v6/v7 actions require GitHub Actions runner >= 2.327.1.
Also applies to: 107-112, 140-146, 174-186, 319-323
🤖 Prompt for AI Agents
In .github/workflows/release.yml around lines 75-79 (and also update occurrences
at 107-112, 140-146, 174-186, 319-323), the workflow references
actions/upload-artifact@v6 which does not exist; change every
actions/upload-artifact@v6 to actions/upload-artifact@v4, ensure any
upload-artifact steps use the v4 pin, and verify runner compatibility notes
(v6/v7 require runner >= 2.327.1) are removed or adjusted if not applicable.
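A corrected upload step would look roughly like this (a sketch; the artifact `name` and `path` are placeholders, not values from this repository):

```yaml
- name: Upload build artifacts
  uses: actions/upload-artifact@v4  # v4 is the current supported major; v6 does not exist
  with:
    name: dist    # placeholder artifact name
    path: dist/   # placeholder path
```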
```yaml
- name: Setup the project
  uses: ./.github/actions/setup-project
  with:
    python-version: ${{ vars.PYTHON_DEFAULT_VERSION || '3.14' }}
    uv-extra-index-url: ${{ secrets.UV_EXTRA_INDEX_URL }}
```
🧩 Analysis chain
🌐 Web query:
Python versions available on ubuntu-latest GitHub Actions runners 2025
💡 Result:
Short answer: ubuntu-latest → ubuntu-24.04; the runner images include Python 3.10, 3.11, 3.12, 3.13 and 3.14 (cached). [1][2][3]
Sources
- ubuntu-latest maps to Ubuntu 24.04 (actions/runner-images repo). [1]
- Recent runner-images releases showing cached Python versions (3.13 / 3.14) on images. [2]
- Announcement / issue about python 3.8 removal and supported Python-version policy (confirms only the most-popular ~5 Python majors are kept). [3]
References
[1] actions/runner-images — Available Images (GitHub).
[2] actions/runner-images — Releases (cached Tools / Python entries).
[3] actions/runner-images — Issue: Python 3.8.x removal / supported versions.
🌐 Web query:
actions/setup-python supported Python versions documentation
💡 Result:
Short answer — where to find the supported Python versions and how they’re specified:
-
Official action README (actions/setup-python) — describes supported implementations (CPython, PyPy, GraalPy, free‑threaded builds), supported version syntax (semver, ranges, prereleases), architecture options, and that the action pulls versions from the python-versions manifest when not cached. [1]
-
Versions manifest (actions/python-versions repo) — the authoritative list of available/releases used by setup-python; file versions-manifest.json contains the specific Python/PyPy/GraalPy versions that can be installed on-the-fly. The repo also documents the support policy and how new/ended versions are handled. [2]
If you want, I can fetch the current versions-manifest.json and list the exact Python versions available right now.
🏁 Script executed:
# Check if there are any other references to PYTHON_DEFAULT_VERSION in the repo
rg "PYTHON_DEFAULT_VERSION" --type yaml
Repository: tschm/jquantstats
Length of output: 43
🏁 Script executed:
# Look for any other Python version references in workflows
rg "python-version" .github/workflows/ -A 2 -B 2
Repository: tschm/jquantstats
Length of output: 3632
🏁 Script executed:
cat .github/workflows/scripts/version_max.py
Repository: tschm/jquantstats
Length of output: 1573
🏁 Script executed:
# Check the setup-project custom action to see how it handles python-version
cat -n .github/actions/setup-project/action.yml
Repository: tschm/jquantstats
Length of output: 3592
🌐 Web query:
actions/setup-python version specifier syntax '3.x' wildcard pattern supported
💡 Result:
Yes — actions/setup-python accepts semver-style ranges like "3.x" (a wildcard for the current latest 3.* release). It uses SemVer range syntax and will select the latest available matching minor/patch on the runner (or download the matching version from actions/python-versions if not cached). [1][2]
Sources:
[1] GitHub Docs — Building and testing Python (setup-python usage).
[2] actions/setup-python README (usage examples showing "3.x").
Consider using a dynamic Python version like other workflows instead of hardcoding '3.14'.
The release workflow hardcodes python-version: '3.14' as a fallback, while other workflows (pre-commit, marimo, book) dynamically determine the max supported version via version_max.py. For consistency, either use the same dynamic approach or align the fallback with the one in version_max.py, which uses '3.13' when pyproject.toml is unavailable.
🤖 Prompt for AI Agents
In .github/workflows/release.yml around lines 114 to 118, the workflow currently
hardcodes the python-version fallback to '3.14'; change this to follow the
project convention by either invoking the same dynamic mechanism (use
version_max.py to compute the max supported version and supply that value) or,
if you prefer the simpler fix, align the hardcoded fallback with version_max.py
by replacing '3.14' with '3.13' so the release workflow matches other workflows'
behavior.
```python
"""Tests for the bump.sh script using a sandboxed git environment."""

import subprocess

import pytest


@pytest.mark.parametrize(
    "choice, expected_version",
    [
        ("1", "0.1.1"),  # patch
        ("2", "0.2.0"),  # minor
        ("3", "1.0.0"),  # major
    ],
)
def test_bump_updates_version_no_commit(git_repo, choice, expected_version):
    """Running `bump` interactively updates pyproject.toml correctly."""
    script = git_repo / ".github" / "scripts" / "bump.sh"

    # Input: choice -> n (no commit)
    input_str = f"{choice}\nn\n"

    result = subprocess.run([str(script)], cwd=git_repo, input=input_str, capture_output=True, text=True)

    assert result.returncode == 0
    assert f"Version bumped to {expected_version}" in result.stdout

    # Verify pyproject.toml updated
    with open(git_repo / "pyproject.toml") as f:
        content = f.read()
    assert f'version = "{expected_version}"' in content

    # Verify no tag created yet
    tags = subprocess.check_output(["git", "tag"], cwd=git_repo, text=True)
    assert f"v{expected_version}" not in tags


def test_bump_commit_push(git_repo):
    """Bump with commit and push."""
    script = git_repo / ".github" / "scripts" / "bump.sh"

    # Input: 1 (patch) -> y (commit) -> y (push)
    input_str = "1\ny\ny\n"

    result = subprocess.run([str(script)], cwd=git_repo, input=input_str, capture_output=True, text=True)

    assert result.returncode == 0
    assert "Version committed" in result.stdout
    assert "Pushed to origin/master" in result.stdout

    # Verify commit on remote
    remote_log = subprocess.check_output(["git", "log", "origin/master", "-1", "--pretty=%B"], cwd=git_repo, text=True)
    assert "chore: bump version to 0.1.1" in remote_log


def test_uncommitted_changes_failure(git_repo):
    """Script fails if there are uncommitted changes."""
    script = git_repo / ".github" / "scripts" / "bump.sh"

    # Create a tracked file and commit it
    tracked_file = git_repo / "tracked_file.txt"
    tracked_file.touch()
    subprocess.run(["git", "add", "tracked_file.txt"], cwd=git_repo, check=True)
    subprocess.run(["git", "commit", "-m", "Add tracked file"], cwd=git_repo, check=True)

    # Modify tracked file to create uncommitted change
    with open(tracked_file, "a") as f:
        f.write("\n# change")

    # Input: 1 (patch)
    result = subprocess.run([str(script)], cwd=git_repo, input="1\n", capture_output=True, text=True)

    assert result.returncode == 1
    assert "You have uncommitted changes" in result.stdout


def test_bump_explicit_version(git_repo):
    """Bump with explicit version."""
    script = git_repo / ".github" / "scripts" / "bump.sh"
    version = "1.2.3"

    # Input: 4 (explicit) -> 1.2.3 -> n (no commit)
    input_str = f"4\n{version}\nn\n"

    result = subprocess.run([str(script)], cwd=git_repo, input=input_str, capture_output=True, text=True)

    assert result.returncode == 0
    assert f"Version bumped to {version}" in result.stdout
    with open(git_repo / "pyproject.toml") as f:
        content = f.read()
    assert f'version = "{version}"' in content


def test_bump_fails_existing_tag(git_repo):
    """Bump fails if tag already exists."""
    script = git_repo / ".github" / "scripts" / "bump.sh"

    # Create tag v0.1.1
    subprocess.run(["git", "tag", "v0.1.1"], cwd=git_repo, check=True)

    # Try to bump to 0.1.1 (patch bump from 0.1.0)
    # Input: 1 (patch)
    result = subprocess.run([str(script)], cwd=git_repo, input="1\n", capture_output=True, text=True)

    assert result.returncode == 1
    assert "Tag 'v0.1.1' already exists locally" in result.stdout


def test_warn_on_non_default_branch(git_repo):
    """Script warns if not on default branch."""
    script = git_repo / ".github" / "scripts" / "bump.sh"

    # Create and switch to new branch
    subprocess.run(["git", "checkout", "-b", "feature"], cwd=git_repo, check=True)

    # Run bump (input 1 (patch), then 'y' to proceed with non-default branch, then n (no commit))
    input_str = "1\ny\nn\n"

    result = subprocess.run([str(script)], cwd=git_repo, input=input_str, capture_output=True, text=True)
    assert result.returncode == 0
    assert "You are on branch 'feature' but the default branch is 'master'" in result.stdout


def test_bump_fails_if_pyproject_toml_dirty(git_repo):
    """Bump fails if pyproject.toml has uncommitted changes."""
    script = git_repo / ".github" / "scripts" / "bump.sh"

    # Modify pyproject.toml
    with open(git_repo / "pyproject.toml", "a") as f:
        f.write("\n# dirty")

    # Input: 1 (patch)
    result = subprocess.run([str(script)], cwd=git_repo, input="1\n", capture_output=True, text=True)

    assert result.returncode == 1
    assert "You have uncommitted changes" in result.stdout
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for ruff configuration files
find . -type f \( -name "ruff.toml" -o -name "pyproject.toml" -o -name ".ruff.toml" \) | head -20
Repository: tschm/jquantstats
Length of output: 89
🏁 Script executed:
#!/bin/bash
# Check for CI workflow files that might enforce linting
find .github/workflows -type f -name "*.yml" -o -name "*.yaml" 2>/dev/null | head -20
Repository: tschm/jquantstats
Length of output: 392
🏁 Script executed:
#!/bin/bash
# Look for existing ignore patterns in ruff/lint config
if [ -f "ruff.toml" ]; then
echo "=== ruff.toml ==="
cat ruff.toml
fi
if [ -f "pyproject.toml" ]; then
echo "=== pyproject.toml (ruff section) ==="
grep -A 50 "\[tool.ruff" pyproject.toml || echo "No [tool.ruff] section found"
fi
Repository: tschm/jquantstats
Length of output: 1052
🏁 Script executed:
#!/bin/bash
# Check CI workflow for linting steps
echo "=== ci.yml ==="
cat .github/workflows/ci.yml | grep -A 20 -B 5 "ruff\|lint" || echo "No ruff/lint found in ci.yml"
Repository: tschm/jquantstats
Length of output: 104
🏁 Script executed:
#!/bin/bash
# Check pre-commit workflow
echo "=== pre-commit.yml ==="
cat .github/workflows/pre-commit.yml
Repository: tschm/jquantstats
Length of output: 1280
🏁 Script executed:
#!/bin/bash
# Check pre-commit config
if [ -f ".pre-commit-config.yaml" ]; then
echo "=== .pre-commit-config.yaml ==="
cat .pre-commit-config.yaml
else
echo ".pre-commit-config.yaml not found"
fi
Repository: tschm/jquantstats
Length of output: 1415
🏁 Script executed:
#!/bin/bash
# Run ruff directly on the test file to see actual errors
rg "subprocess.run" tests/test_rhiza/test_bump_script.py -B 2 -A 2 || echo "File or pattern not found"
Repository: tschm/jquantstats
Length of output: 2019
🏁 Script executed:
#!/bin/bash
# Verify the exact Ruff errors that would be flagged
# Check if ruff can be run on the test file
if command -v ruff &> /dev/null; then
cd tests/test_rhiza && ruff check test_bump_script.py --select=S603,S607 2>&1 || true
else
echo "ruff not available in sandbox"
fi
Repository: tschm/jquantstats
Length of output: 6729
Ruff S603/S607 will block pre-commit and CI—add suppression to this test file.
The test file triggers 13 Ruff errors (6× S603, 7× S607) on subprocess calls, and since ruff is configured in .pre-commit-config.yaml with --exit-non-zero-on-fix, these will fail the pre-commit hook and prevent merging. Add one of the following:
- Per-file ignore in `ruff.toml`: add `"tests/test_rhiza/test_bump_script.py" = ["S603", "S607"]` under `[lint.per-file-ignores]`, or
- Inline `# noqa` comments on each affected line (23, 34, 45, 52, 63, 64, 71, 85, 99, 103, 114, 119, 133).
These subprocess calls are intentional test harnesses and safe; suppression is appropriate here.
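For the per-file-ignore route, the entry would look roughly like this (a sketch; use the `[tool.ruff.lint.per-file-ignores]` table instead if the configuration lives in `pyproject.toml` rather than `ruff.toml`):

```toml
# ruff.toml
[lint.per-file-ignores]
# Subprocess use in these tests is a deliberate harness for exercising bump.sh.
"tests/test_rhiza/test_bump_script.py" = ["S603", "S607"]
```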
🧰 Tools
🪛 Ruff (0.14.8)
23-23: subprocess call: check for execution of untrusted input
(S603)
34-34: Starting a process with a partial executable path
(S607)
45-45: subprocess call: check for execution of untrusted input
(S603)
52-52: Starting a process with a partial executable path
(S607)
63-63: Starting a process with a partial executable path
(S607)
64-64: Starting a process with a partial executable path
(S607)
71-71: subprocess call: check for execution of untrusted input
(S603)
85-85: subprocess call: check for execution of untrusted input
(S603)
99-99: Starting a process with a partial executable path
(S607)
103-103: subprocess call: check for execution of untrusted input
(S603)
114-114: Starting a process with a partial executable path
(S607)
119-119: subprocess call: check for execution of untrusted input
(S603)
133-133: subprocess call: check for execution of untrusted input
(S603)
```python
"""Tests for the Makefile targets and help output using safe dry‑runs.

These tests validate that the Makefile exposes expected targets and emits
the correct commands without actually executing them, by invoking `make -n`
(dry-run). We also pass `-s` to reduce noise in CI logs. This approach keeps
tests fast, portable, and free of side effects like network or environment
changes.
"""
```
Use standard hyphen instead of non-breaking hyphen.
The docstring contains a non-breaking hyphen (U+2011) in "dry‑runs" which can cause issues with tooling. Replace with a standard hyphen-minus (U+002D).
```diff
-"""Tests for the Makefile targets and help output using safe dry‑runs.
+"""Tests for the Makefile targets and help output using safe dry-runs.
```
🧰 Tools
🪛 Ruff (0.14.8)

- Line 1 (RUF002): Docstring contains ambiguous `‑` (NON-BREAKING HYPHEN). Did you mean `-` (HYPHEN-MINUS)?
🤖 Prompt for AI Agents
In tests/test_rhiza/test_makefile.py around lines 1 to 8, the module docstring
uses a non-breaking hyphen (U+2011) in "dry‑runs"; replace that character with a
standard hyphen-minus (U+002D) so the text reads "dry-runs" to avoid tooling
issues.
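To keep such look-alike characters from reappearing, the repo could add a small guard alongside its tests; a sketch (the character set and function name are assumptions, not part of this PR):

```python
# Hypothetical helper: flag Unicode hyphen look-alikes such as U+2011,
# the NON-BREAKING HYPHEN that triggered RUF002 above.
AMBIGUOUS_HYPHENS = {
    "\u2010",  # HYPHEN
    "\u2011",  # NON-BREAKING HYPHEN
    "\u2013",  # EN DASH
    "\u2212",  # MINUS SIGN
}


def find_ambiguous_hyphens(text: str) -> list[tuple[int, int, str]]:
    """Return (line, column, character) for each hyphen look-alike found."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in AMBIGUOUS_HYPHENS:
                hits.append((lineno, col, ch))
    return hits
```

Ruff's RUF002 already catches this in docstrings; a check like the above extends the same idea to non-Python files synced from the template.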
```python
def test_root_resolves_correctly_from_nested_location(self, root):
    """Root should correctly resolve to repository root from tests/test_config_templates/."""
    conftest_path = root / "tests" / "test_rhiza" / "conftest.py"
    assert conftest_path.exists()

    conftest_path = root / "tests" / "conftest.py"
    assert not conftest_path.exists()
```
Fix CI-breaking assumption: don’t assert tests/conftest.py must not exist.
Your CI error matches this hard assertion (Line 35-36). Repos commonly have a tests/conftest.py at the tests root; its presence shouldn’t invalidate the root fixture.
Suggested change (keep the intent, remove the brittle constraint):
```diff
 def test_root_resolves_correctly_from_nested_location(self, root):
     """Root should correctly resolve to repository root from tests/test_config_templates/."""
     conftest_path = root / "tests" / "test_rhiza" / "conftest.py"
     assert conftest_path.exists()
-
-    conftest_path = root / "tests" / "conftest.py"
-    assert not conftest_path.exists()
+    # It's OK if the repo also has a tests/conftest.py at the tests root.
+    # This test only cares that the `root` fixture resolves correctly.
```
🪛 GitHub Actions: CI
[error] 35-35: Root resolution: repository root detection failed due to presence of tests/conftest.py at repository root; test expected root to be above tests/conftest.py.
🤖 Prompt for AI Agents
In tests/test_rhiza/test_structure.py around lines 29 to 36, the test currently
asserts that root / "tests" / "conftest.py" must not exist which is a brittle
CI-breaking assumption; remove the lines that reassign conftest_path to root /
"tests" / "conftest.py" and the subsequent assert not conftest_path.exists(),
keeping only the initial check that tests/test_rhiza/conftest.py exists (or
replace the second assertion with a non-failing comment), so the test no longer
fails when a repository legitimately contains tests/conftest.py at the tests
root.
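A more robust alternative to asserting which conftest files do or do not exist is to resolve the repository root by walking upward until a marker file is found; a minimal sketch, under the assumption that the repo keeps `pyproject.toml` (or `.git`) at its root:

```python
from pathlib import Path


def find_repo_root(start: Path, markers: tuple[str, ...] = ("pyproject.toml", ".git")) -> Path:
    """Walk upward from `start` until a directory containing a marker is found."""
    start = start.resolve()
    for candidate in (start, *start.parents):
        if any((candidate / marker).exists() for marker in markers):
            return candidate
    raise RuntimeError(f"no repository root found above {start}")
```

A `root` fixture built on a search like this stays correct whether or not a `tests/conftest.py` exists, so the test would not need the brittle negative assertion at all.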
This PR syncs the template from: jebel-quant/rhiza @ main

Summary by CodeRabbit

- New Features
- Improvements
- Tests