chore(release): prepare 0.10.0 ci and packaging flow#3
marlon-costa-dc merged 13 commits into main
Conversation
Summary of Changes (Gemini Code Assist): This pull request streamlines the development and release processes by standardizing CI workflows, updating core development dependencies, and refining internal tooling. It introduces a new mechanism for synchronizing GitHub workflows across projects, ensuring a consistent and maintainable CI/CD pipeline. Additionally, the project's documentation has been updated to provide clearer guidance in Portuguese, reflecting its current development status and usage recommendations.
Code Review
The pull request successfully aligns the repository's CI and packaging flow for the 0.10.0 release. It introduces a useful workflow synchronization script and improves internal dependency management. However, there are some inconsistencies in the pyproject.toml file regarding build requirements and project classifiers that should be addressed to ensure metadata correctness.
requires = [
    "poetry-core>=2",
]

requires = [ "poetry-core>=2.0", "poetry-core>=2" ]
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed",
"Development Status :: 5 - Production/Stable",
2 issues found across 8 files
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
requires-python = ">=3.13,<3.14"
classifiers = [
    "Development Status :: 5 - Production/Stable",
    "Development Status :: 4 - Beta",
P1: Contradictory Development Status classifiers: the list now includes both 4 - Beta and 5 - Production/Stable. These are mutually exclusive trove classifiers. For a 0.10.0-dev version, keep only 4 - Beta and remove 5 - Production/Stable.
<file context>
@@ -18,17 +16,18 @@ keywords = [
requires-python = ">=3.13,<3.14"
classifiers = [
- "Development Status :: 5 - Production/Stable",
+ "Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
</file context>
path_value = item.get("path")
if not isinstance(name, str) or not isinstance(path_value, str):
    continue
projects.append((name, Path(path_value).resolve()))
P2: Relative project paths from the discovery subprocess would be resolved against CWD, not workspace_root. If path_value is relative, Path(path_value).resolve() uses the process's current working directory. Safer to resolve against workspace_root to handle both relative and absolute paths correctly.
<file context>
@@ -0,0 +1,220 @@
+ path_value = item.get("path")
+ if not isinstance(name, str) or not isinstance(path_value, str):
+ continue
+ projects.append((name, Path(path_value).resolve()))
+ return projects
+
</file context>
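A minimal sketch of the suggested fix. `resolve_project_path` is a hypothetical helper name, not code from the PR; it shows the idea of anchoring relative discovery paths at `workspace_root` instead of the process CWD:

```python
from pathlib import Path

def resolve_project_path(workspace_root: Path, path_value: str) -> Path:
    """Resolve a discovered project path against the workspace root.

    Relative paths are anchored at workspace_root rather than the process
    current working directory; absolute paths pass through unchanged.
    """
    candidate = Path(path_value)
    if candidate.is_absolute():
        return candidate.resolve()
    return (workspace_root / candidate).resolve()
```

With this helper, the append line above would become `projects.append((name, resolve_project_path(workspace_root, path_value)))`, and the result no longer depends on where the script happened to be invoked.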
1 issue found across 1 file (changes from recent commits).
@@ -0,0 +1,69 @@
# Generated by scripts/github/sync_workflows.py - DO NOT EDIT
P2: All CI quality gate steps (check, test, validate) are set to continue-on-error: true, which means this CI pipeline can never fail — it will always report green regardless of broken tests, type errors, lint violations, or security findings. This effectively disables CI as a quality gate.
If this is intentional for bootstrapping, consider adding a tracking issue or TODO comment with a target date/milestone to re-enable enforcement. At minimum, consider keeping make test as a hard failure so regressions are caught.
<file context>
@@ -56,11 +56,14 @@ jobs:
- - name: Check
+ - name: Check (advisory)
+ continue-on-error: true
run: make check
</file context>
1 issue found across 1 file (changes from recent commits).
.github/workflows/ci.yml
Outdated
- name: Install CI gate toolchain
  run: |
    python -m pip install --upgrade pip
    python -m pip install mypy pyright pyrefly ruff bandit pip-audit
P2: Making the Setup step continue-on-error: true undermines the value of all downstream advisory steps. If make setup fails, the environment is broken and Check/Test/Validate will produce unreliable results (or fail for the wrong reasons). Consider keeping Setup as a hard failure so that advisory feedback from subsequent steps is at least trustworthy when it does run.
<file context>
@@ -47,7 +47,8 @@ jobs:
- - name: Setup
+ - name: Setup (advisory)
+ continue-on-error: true
run: make setup
</file context>
6 issues found across 9 files (changes from recent commits).
if args.check:
    _ = print(f"checked_version={args.version}")
    _ = print(f"files_changed={changed}")
    return 0
P1: --check mode always exits with 0, defeating its purpose as a CI gate. When changed > 0 and --apply is not set, the script should return non-zero to signal that files are out of date.
<file context>
@@ -0,0 +1,84 @@
+ if args.check:
+ _ = print(f"checked_version={args.version}")
+ _ = print(f"files_changed={changed}")
+ return 0
+
+
</file context>
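A minimal sketch of the suggested fix. `finish_check` is a hypothetical helper, not code from the PR; it shows the check path returning a non-zero exit code when files are stale so `--check` can gate CI:

```python
def finish_check(version: str, changed: int) -> int:
    """Report check results and return a CI-meaningful exit code."""
    print(f"checked_version={version}")
    print(f"files_changed={changed}")
    # A non-zero return lets callers fail the pipeline when files are
    # out of date, instead of always reporting success.
    return 1 if changed > 0 else 0
```

The main function would then `return finish_check(args.version, changed)` in the `args.check` branch instead of unconditionally returning 0.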
"## Scope",
"",
f"- Workspace release version: {version}",
f"- Projects packaged: {len(projects) + 2}",
P2: Potential off-by-one: len(projects) + 2 doesn't match the 3 hardcoded names in the "Projects impacted" list below. If "root" is intentionally excluded from the packaged count (because it's the workspace root, not a distributable package), please add a brief comment explaining this so future maintainers don't 'fix' the mismatch. If "root" should be counted, change the offset to + 3.
<file context>
@@ -0,0 +1,106 @@
+ "## Scope",
+ "",
+ f"- Workspace release version: {version}",
+ f"- Projects packaged: {len(projects) + 2}",
+ "",
+ "## Projects impacted",
</file context>
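One way to eliminate the off-by-one risk entirely, sketched under the assumption that the "Projects impacted" names can be collected into a single list first (`scope_section` and `impacted` are hypothetical names, not code from the PR):

```python
def scope_section(version: str, impacted: list[str]) -> list[str]:
    """Build the Scope lines for the release notes.

    The packaged count is derived from the impacted-projects list itself,
    so the header and the list below it can never drift apart.
    """
    return [
        "## Scope",
        "",
        f"- Workspace release version: {version}",
        f"- Projects packaged: {len(impacted)}",
        "",
        "## Projects impacted",
        *[f"- {name}" for name in impacted],
    ]
```

Deriving the count removes the need for a magic `+ 2` (or `+ 3`) offset and the explanatory comment it would otherwise require.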
old = 'version = "0.10.0-dev"'
new = f'version = "{version}"'
if old in content:
    return content.replace(old, new), True
P2: content.replace(old, new) replaces all occurrences in the file, which could unintentionally modify dependency version pins or other sections. Use content.replace(old, new, 1) to limit to a single replacement, consistent with the fallback branch that only replaces the first match.
<file context>
@@ -0,0 +1,84 @@
+ old = 'version = "0.10.0-dev"'
+ new = f'version = "{version}"'
+ if old in content:
+ return content.replace(old, new), True
+
+ marker = 'version = "'
</file context>
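A minimal sketch of the suggested fix. `bump_project_version` is a hypothetical name, not code from the PR; the key change is passing `count=1` to `str.replace`:

```python
def bump_project_version(content: str, version: str) -> tuple[str, bool]:
    """Replace only the first version pin in the file.

    Limiting replace to one occurrence leaves dependency pins that happen
    to contain the same string untouched, consistent with the fallback
    branch that also edits a single match.
    """
    old = 'version = "0.10.0-dev"'
    new = f'version = "{version}"'
    if old in content:
        return content.replace(old, new, 1), True
    return content, False
```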
tagged_notes_path = root / "docs" / "releases" / f"{args.tag}.md"
notes_path = args.notes if args.notes.is_absolute() else root / args.notes

notes_text = notes_path.read_text(encoding="utf-8")
P2: notes_path is read unconditionally (line 50), but notes_text is only consumed inside if args.apply. In dry-run mode this is wasted I/O and will crash with FileNotFoundError if the notes file doesn't yet exist. Move the read inside the if args.apply: block.
<file context>
@@ -0,0 +1,72 @@
+ tagged_notes_path = root / "docs" / "releases" / f"{args.tag}.md"
+ notes_path = args.notes if args.notes.is_absolute() else root / args.notes
+
+ notes_text = notes_path.read_text(encoding="utf-8")
+ existing = (
+ changelog_path.read_text(encoding="utf-8")
</file context>
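A sketch of the restructured flow, assuming a simplified signature (`update_changelog`, the `apply` flag, and the print statements are illustrative, not code from the PR). The read moves behind the apply check so a dry run never touches a notes file that may not exist yet:

```python
from pathlib import Path

def update_changelog(root: Path, notes: Path, apply: bool) -> int:
    """Defer reading the notes file until apply mode actually needs it."""
    notes_path = notes if notes.is_absolute() else root / notes
    if not apply:
        # Dry run: report the plan without touching the notes file, which
        # may not have been generated yet at this point in the release flow.
        print(f"would read {notes_path}")
        return 0
    notes_text = notes_path.read_text(encoding="utf-8")
    print(f"read {len(notes_text)} characters from {notes_path}")
    return 0
```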
"- Status: Alpha, non-production\n\n"
f"Full notes: `docs/releases/{tag}.md`\n\n"
)
if section in existing:
P2: Idempotency check compares the full section including the dynamically generated date, so re-running on a different day with the same version/tag will produce a duplicate changelog entry. Check against the version header instead.
<file context>
@@ -0,0 +1,72 @@
+ "- Status: Alpha, non-production\n\n"
+ f"Full notes: `docs/releases/{tag}.md`\n\n"
+ )
+ if section in existing:
+ return existing
+ marker = "# Changelog\n\n"
</file context>
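A minimal sketch of the suggested fix, assuming the generated section begins with a `## {version}` header (the helper name and header format are assumptions, not confirmed by the diff context):

```python
def insert_release_section(existing: str, version: str, section: str) -> str:
    """Insert the new changelog section unless this version already has one.

    Matching on the version header alone keeps re-runs idempotent even
    though the generated date inside `section` changes from day to day.
    """
    if f"## {version}" in existing:
        return existing
    marker = "# Changelog\n\n"
    return existing.replace(marker, marker + section, 1)
```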
def _phase_version(root: Path, version: str, dry_run: bool) -> None:
    command = [
        "python",
P2: Use sys.executable instead of hardcoded "python" for subprocess calls. In virtual environments, "python" may resolve to a different interpreter than the one running this script. The sibling module shared.py already uses sys.executable for the same purpose in discover_projects(). This same issue applies to lines 106, 126, and 142.
<file context>
@@ -0,0 +1,202 @@
+
+def _phase_version(root: Path, version: str, dry_run: bool) -> None:
+ command = [
+ "python",
+ "scripts/release/version.py",
+ "--root",
</file context>
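A minimal sketch of the suggested fix. `build_phase_command` is a hypothetical helper extracted for illustration; the substantive change is `sys.executable` in place of the bare `"python"`:

```python
import sys
from pathlib import Path

def build_phase_command(root: Path, script: str, *args: str) -> list[str]:
    """Build a release-phase subprocess command.

    sys.executable is the interpreter running this script (including a
    virtualenv's Python), whereas a bare "python" resolves through PATH
    and may pick a different installation.
    """
    return [sys.executable, str(root / script), *args]
```

Each `_phase_*` function (lines 87, 106, 126, and 142 per the review) would build its command this way, matching what `shared.py` already does in `discover_projects()`.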
Summary
- ci.yml
- 0.10.0-dev
- ready for package generation and release checks

Summary by cubic
Aligns this repo’s CI with the canonical workspace flow and finalizes 0.10.0 packaging for Python 3.13. Adds workflow sync/lint tooling and hardens dependency sync; setup/check/test/validate are advisory for easier rollout.
Written for commit 1708183. Summary will update on new commits.