chore(workspace): finalize 0.11.0-dev checkpoint and propagation#8

Merged
marlon-costa-dc merged 35 commits into main from 0.11.0-dev
Feb 20, 2026
Conversation

@marlon-costa-dc
Contributor

@marlon-costa-dc marlon-costa-dc commented Feb 20, 2026

Automated checkpoint and propagation updates after release automation rollout.


Summary by cubic

Finalizes the 0.11.0-dev checkpoint with a shared versioning library, a workspace PR manager, and a streamlined release flow. Propagates shared libs across projects, updates submodule pointers/poetry.lock to current tags, and tightens sync and tests.

  • New Features

    • Centralized versioning lib (semver parse/bump, tag detection, pyproject version replace) used by release and PR tooling.
    • Workspace PR manager (scripts/github/pr_workspace.py) that checks out a branch and optionally checkpoints/pushes changes before PR actions; Makefile adds PR_BRANCH and PR_CHECKPOINT.
  • Refactors

    • Release: make release replaces release-ci; VERSION phase can append -dev via RELEASE_DEV_SUFFIX; always creates an annotated tag (push remains optional); workflow and notes updated.
    • PR merge: retries on “not mergeable” by updating branch, no-ops if no open PR for head; shared release tag detection via versioning lib.
    • Sync/tests: sync now propagates libs/ and ignores pycache/dot paths; added unit tests for versioning, PR workspace/manager, release flags, and sync.
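
The shared versioning lib described above can be sketched roughly as follows. This is a minimal illustration only; the Version, parse_version, bump_version, and render names are hypothetical stand-ins, not the actual libs/versioning.py API:

```python
# Hedged sketch of semver parse/bump helpers; names are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class Version:
    major: int
    minor: int
    patch: int
    suffix: str = ""  # e.g. "-dev"


def parse_version(text: str) -> Version:
    # Split "0.11.0-dev" into the numeric base and an optional suffix.
    base, _, suffix = text.partition("-")
    major, minor, patch = (int(part) for part in base.split("."))
    return Version(major, minor, patch, f"-{suffix}" if suffix else "")


def bump_version(version: Version, level: str) -> Version:
    # Bumping resets the lower components and drops any -dev suffix.
    if level == "major":
        return Version(version.major + 1, 0, 0)
    if level == "minor":
        return Version(version.major, version.minor + 1, 0)
    return Version(version.major, version.minor, version.patch + 1)


def render(version: Version) -> str:
    return f"{version.major}.{version.minor}.{version.patch}{version.suffix}"
```

Under this sketch, `make release` with RELEASE_DEV_SUFFIX would append "-dev" to the rendered string before tagging.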

Written for commit fc9eac1. Summary will update on new commits.

Summary by CodeRabbit

  • New Features

    • Workspace PR management (create/view/check/merge/close) with configurable options
    • Workspace-wide Python 3.13 enforcement and guard checks
    • Targeted project selection for release and packaging workflows
  • Chores

    • Updated many submodule pointers and improved sync to include shared libs
    • Reworked project discovery and release tooling to support filtered/project-scoped operations
    • Added semantic version utilities for consistent version handling
  • Tests

    • Added end-to-end and unit tests covering PR workflows, release flows, and versioning

Marlon Costa added 30 commits February 20, 2026 12:52
@gemini-code-assist

Warning

You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again!

@coderabbitai

coderabbitai bot commented Feb 20, 2026

Warning

Rate limit exceeded

@marlon-costa-dc has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 6 minutes and 11 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📝 Walkthrough

Walkthrough

Centralizes project discovery into a new libs/ package, adds workspace-level GitHub PR automation (scripts/github/pr_manager.py, pr_workspace.py) and CLI Makefile targets, refactors release/maintenance scripts to use discovery utilities, updates many submodule pointers, and adds tests and Python 3.13 enforcement tooling.

Changes

Cohort / File(s): Summary

  • Build & CLI surface (Makefile, base.mk, .gitignore): Added PR-related Makefile variables and a new pr target; updated help text and release targets to accept selected projects; enforced Python 3.13 guard steps; explicitly whitelisted libs/ in .gitignore.
  • Core libs package (libs/__init__.py, libs/discovery.py, libs/git.py, libs/paths.py, libs/reporting.py, libs/selection.py, libs/subprocess.py, libs/versioning.py): Introduced libs/ utilities: project discovery (ProjectInfo), git helpers, path helpers, reporting dir helpers, project selection/resolution, robust subprocess helpers, and versioning/semver utilities (parse/bump/replace/current).
  • PR automation scripts (scripts/github/pr_manager.py, scripts/github/pr_workspace.py): New per-repo PR manager and workspace coordinator: create/view/checks/merge/close actions, multi-repo PR orchestration, branch checkout/checkpointing, per-repo logs, and fail-fast handling.
  • Release & publishing refactor (scripts/release/shared.py, scripts/release/run.py, scripts/release/version.py, scripts/release/build.py, scripts/release/notes.py, scripts/release/changelog.py): Switched from ad-hoc discovery to resolve_projects(); added --projects filtering and propagated project-scoped parameters across the version/build/publish phases; moved to toml-based version handling and adjusted changelog heading logic.
  • Maintenance & enforcement (scripts/maintenance/enforce_python_version.py, scripts/maintenance/_discover.py, scripts/core/skill_validate.py, scripts/core/stub_supply_chain.py): Delegated discovery to libs, added a workspace Python 3.13 enforcement script (conftest injection and .python-version), and updated callers to use centralized discovery.
  • Workflow sync & sync helper (scripts/github/sync_workflows.py, scripts/sync.py): Replaced external subprocess discovery with a resolve_projects() import; expanded sync to mirror libs and exclude pycache/hidden paths.
  • Dependency scripts (scripts/dependencies/dependency_detection.py): Replaced local discovery with resolve_projects() and standardized subprocess env handling to use os.environ.
  • Tests (tests/... (many new/updated tests), pyproject.toml): Added extensive unit and integration tests for the PR workspace, pr_manager, discovery, versioning, and release flows; extended pytest discovery to include *_tests.py; adjusted test loaders/paths.
  • Submodule pointers (flexcore, flext-api, flext-auth, flext-cli, flext-core, flext-db-oracle, flext-dbt-ldap, flext-dbt-ldif, flext-dbt-oracle, flext-dbt-oracle-wms, flext-grpc, flext-ldap, flext-ldif, flext-meltano, flext-observability, flext-oracle-oic, flext-oracle-wms, flext-plugin, flext-quality, flext-tap-ldap, flext-tap-ldif, flext-tap-oracle, flext-tap-oracle-oic, flext-tap-oracle-wms, flext-target-ldap, flext-target-ldif, flext-target-oracle, flext-target-oracle-oic, flext-target-oracle-wms, flext-web): Advanced commit references for many submodules; pointer-only changes with no functional edits in the parent repo.

Sequence Diagram(s)

sequenceDiagram
  participant User
  participant PR_Workspace as pr_workspace.py
  participant Repo as Repository
  participant PR_Manager as pr_manager.py
  participant GH as GitHub(gh CLI)
  participant Reports as .reports

  User->>PR_Workspace: invoke with --projects / PR_* args
  PR_Workspace->>Repo: resolve_projects() -> list of repos
  loop per repo
    PR_Workspace->>Repo: checkout/prepare branch (may checkpoint)
    alt repo is workspace root
      PR_Workspace->>PR_Manager: run pr_manager with PR args
      PR_Manager->>GH: gh pr create / view / merge / checks
      GH-->>PR_Manager: response
      PR_Manager-->>PR_Workspace: exit status
    else non-root repo
      PR_Workspace->>Repo: run `make -C <repo> pr` with env PR_*
      Repo-->>PR_Workspace: make/pr exit status
    end
    PR_Workspace->>Reports: write per-repo log
  end
  PR_Workspace->>User: aggregated summary & exit code
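The per-repo fan-out in the diagram reduces to a small driver loop. A hedged sketch, with handler callables standing in for the real pr_manager invocation and `make -C <repo> pr` call (the actual script also handles checkpointing and per-repo logs):

```python
# Illustrative driver for the workspace PR fan-out; helper names are assumed.
from pathlib import Path
from typing import Callable


def run_workspace_pr(
    repos: list[Path],
    workspace_root: Path,
    run_root_pr: Callable[[Path], bool],   # stands in for pr_manager.py
    run_repo_pr: Callable[[Path], bool],   # stands in for `make -C <repo> pr`
) -> int:
    """Return 0 if every repo's PR action succeeds, else 1."""
    failures: list[str] = []
    for repo in repos:
        # The workspace root goes through the PR manager directly;
        # every other repo delegates to its own Makefile target.
        handler = run_root_pr if repo == workspace_root else run_repo_pr
        if not handler(repo):
            failures.append(repo.name)
    return 1 if failures else 0
```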

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Poem

🐰 I dug a tidy tunnel, libs/ now neatly sown,
Repos and PRs assemble, each branch into a home,
Scripts hop through the workspace, checks and merges bright,
Submodules twitch their whiskers, tests celebrate by night,
One rabbit cheers the changes — tidy burrow, tidy code.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 6.45%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title check: ✅ Passed. The title accurately summarizes the main change: a finalized 0.11.0-dev checkpoint with propagation of automation updates (PR tooling, libs, version guards, and submodule pointers) across the workspace.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@cubic-dev-ai cubic-dev-ai bot left a comment


8 issues found across 66 files

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="Makefile">

<violation number="1" location="Makefile:464">
P2: The `pr` target is missing `$(ENFORCE_WORKSPACE_VENV)`, which every other target that invokes Python scripts includes (`release`, `release-ci`, `check`, `build`, `security`, `format`, `docs`, `test`, `validate`, `typings`, `clean`). Without it, the workspace venv existence check and project-local `.venv` cleanup are skipped, violating the workspace venv enforcement policy.</violation>
</file>

<file name="libs/git.py">

<violation number="1" location="libs/git.py:13">
P2: Missing `--` end-of-options separator before the `tag` argument. If `tag` starts with `-`, git will interpret it as a flag instead of a tag name, causing incorrect behavior or errors. Use `--` to unambiguously mark `tag` as a positional argument.</violation>
</file>

<file name="scripts/maintenance/enforce_python_version.py">

<violation number="1" location="scripts/maintenance/enforce_python_version.py:102">
P2: `_has_guard` only checks for the marker comment, not the version. If `REQUIRED_MINOR` changes, existing conftest guards with the stale version will be reported as OK and never updated. Consider also verifying the version tuple is correct.</violation>
</file>

<file name="libs/subprocess.py">

<violation number="1" location="libs/subprocess.py:25">
P2: When both `stderr` and `stdout` are empty (e.g., a process that silently exits with a non-zero code), `detail` is `""` and the error message ends with a dangling `: ` with no useful information. Guard against this to produce a cleaner message.</violation>
</file>

<file name="scripts/core/skill_validate.py">

<violation number="1" location="scripts/core/skill_validate.py:206">
P2: Delegation to `ssot_discover_projects` loses `SkillInfraError` wrapping. If the library raises `OSError` (e.g., unreadable workspace root), it won't be caught by the caller's `SkillInfraError` handler, resulting in an unhandled traceback instead of a clean error exit. Wrap the call to preserve the error contract.</violation>
</file>

<file name="base.mk">

<violation number="1" location="base.mk:105">
P2: `pr` is included in `STANDARD_VERBS`, so every `make pr` invocation runs `_preflight` which triggers `AUTO_ADJUST_PROJECT` (auto-reformats markdown/Go files). A read-only action like `make pr PR_ACTION=status` should not silently modify working tree files. Consider removing `pr` from `STANDARD_VERBS` and adding only the specific preflight steps it actually needs (e.g., venv enforcement), or making `AUTO_ADJUST_PROJECT` conditional on the verb.</violation>
</file>

<file name="scripts/github/pr_workspace.py">

<violation number="1" location="scripts/github/pr_workspace.py:119">
P2: Use `sys.executable` instead of bare `"python"` to ensure the subprocess uses the same interpreter (and venv) as the parent process. This avoids accidentally invoking system Python when the venv isn't activated in the subprocess environment.</violation>

<violation number="2" location="scripts/github/pr_workspace.py:174">
P1: Missing `cwd` parameter in `subprocess.run` — the command uses the relative path `scripts/github/pr_manager.py` which will fail unless CWD happens to be the workspace root. Pass `cwd=workspace_root` (or `cwd=repo_root` since they are equal in this branch) to ensure the relative path resolves correctly regardless of where the script is invoked.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.


started = time.monotonic()
with log_path.open("w", encoding="utf-8") as handle:
result = subprocess.run(

@cubic-dev-ai cubic-dev-ai bot Feb 20, 2026


P1: Missing cwd parameter in subprocess.run — the command uses the relative path scripts/github/pr_manager.py which will fail unless CWD happens to be the workspace root. Pass cwd=workspace_root (or cwd=repo_root since they are equal in this branch) to ensure the relative path resolves correctly regardless of where the script is invoked.


<file context>
@@ -0,0 +1,213 @@
+
+    started = time.monotonic()
+    with log_path.open("w", encoding="utf-8") as handle:
+        result = subprocess.run(
+            command, stdout=handle, stderr=subprocess.STDOUT, check=False
+        )
</file context>

Comment on lines 464 to 467
pr: ## Manage pull requests for selected projects
$(Q)$(ENSURE_NO_PROJECT_CONFLICT)
$(Q)$(ENSURE_SELECTED_PROJECTS)
$(Q)$(ENSURE_PROJECTS_EXIST)

@cubic-dev-ai cubic-dev-ai bot Feb 20, 2026


P2: The pr target is missing $(ENFORCE_WORKSPACE_VENV), which every other target that invokes Python scripts includes (release, release-ci, check, build, security, format, docs, test, validate, typings, clean). Without it, the workspace venv existence check and project-local .venv cleanup are skipped, violating the workspace venv enforcement policy.


<file context>
@@ -400,31 +428,63 @@ build: ## Build/package all selected projects
 		$(if $(TAG),--tag "$(TAG)",) \
 		$(if $(BUMP),--bump "$(BUMP)",)
 
+pr: ## Manage pull requests for selected projects
+	$(Q)$(ENSURE_NO_PROJECT_CONFLICT)
+	$(Q)$(ENSURE_SELECTED_PROJECTS)
</file context>
Suggested change
Before:
pr: ## Manage pull requests for selected projects
	$(Q)$(ENSURE_NO_PROJECT_CONFLICT)
	$(Q)$(ENSURE_SELECTED_PROJECTS)
	$(Q)$(ENSURE_PROJECTS_EXIST)
After:
pr: ## Manage pull requests for selected projects
	$(Q)$(ENSURE_NO_PROJECT_CONFLICT)
	$(Q)$(ENFORCE_WORKSPACE_VENV)
	$(Q)$(ENSURE_SELECTED_PROJECTS)
	$(Q)$(ENSURE_PROJECTS_EXIST)



def tag_exists(repo_root: Path, tag: str) -> bool:
value = run_capture(["git", "tag", "-l", tag], cwd=repo_root)

@cubic-dev-ai cubic-dev-ai bot Feb 20, 2026


P2: Missing -- end-of-options separator before the tag argument. If tag starts with -, git will interpret it as a flag instead of a tag name, causing incorrect behavior or errors. Use -- to unambiguously mark tag as a positional argument.


<file context>
@@ -0,0 +1,14 @@
+
+
+def tag_exists(repo_root: Path, tag: str) -> bool:
+    value = run_capture(["git", "tag", "-l", tag], cwd=repo_root)
+    return value.strip() == tag
</file context>


def _has_guard(content: str) -> bool:
"""Check if conftest.py already has the version guard."""
return GUARD_MARKER in content

@cubic-dev-ai cubic-dev-ai bot Feb 20, 2026


P2: _has_guard only checks for the marker comment, not the version. If REQUIRED_MINOR changes, existing conftest guards with the stale version will be reported as OK and never updated. Consider also verifying the version tuple is correct.


<file context>
@@ -0,0 +1,249 @@
+
+def _has_guard(content: str) -> bool:
+    """Check if conftest.py already has the version guard."""
+    return GUARD_MARKER in content
+
+
</file context>

)
if result.returncode != 0:
cmd = shlex.join(command)
detail = (result.stderr or result.stdout).strip()

@cubic-dev-ai cubic-dev-ai bot Feb 20, 2026


P2: When both stderr and stdout are empty (e.g., a process that silently exits with a non-zero code), detail is "" and the error message ends with a dangling : with no useful information. Guard against this to produce a cleaner message.


<file context>
@@ -0,0 +1,27 @@
+    )
+    if result.returncode != 0:
+        cmd = shlex.join(command)
+        detail = (result.stderr or result.stdout).strip()
+        raise RuntimeError(f"command failed ({result.returncode}): {cmd}: {detail}")
+    return result.stdout.strip()
</file context>

raise SkillInfraError(msg) from exc
if "flext-core" in pyproject_text or "flext_core" in pyproject_text:
external_projects.append(name)
discovered = ssot_discover_projects(root)

@cubic-dev-ai cubic-dev-ai bot Feb 20, 2026


P2: Delegation to ssot_discover_projects loses SkillInfraError wrapping. If the library raises OSError (e.g., unreadable workspace root), it won't be caught by the caller's SkillInfraError handler, resulting in an unhandled traceback instead of a clean error exit. Wrap the call to preserve the error contract.


<file context>
@@ -198,43 +203,13 @@ def discover_projects(root: Path) -> dict[str, object]:
-            raise SkillInfraError(msg) from exc
-        if "flext-core" in pyproject_text or "flext_core" in pyproject_text:
-            external_projects.append(name)
+    discovered = ssot_discover_projects(root)
+    flext_projects = [
+        project.name for project in discovered if project.kind == "submodule"
</file context>
Suggested change
Before:
    discovered = ssot_discover_projects(root)
After:
    try:
        discovered = ssot_discover_projects(root)
    except OSError as exc:
        msg = f"Cannot discover projects in {root}"
        raise SkillInfraError(msg) from exc

.PHONY: help setup build check security format docs docs-base docs-sync-scripts test validate clean _preflight
STANDARD_VERBS := setup build check security format docs test validate clean
.PHONY: help setup build check security format docs docs-base docs-sync-scripts test validate clean pr _preflight
STANDARD_VERBS := setup build check security format docs test validate clean pr

@cubic-dev-ai cubic-dev-ai bot Feb 20, 2026


P2: pr is included in STANDARD_VERBS, so every make pr invocation runs _preflight which triggers AUTO_ADJUST_PROJECT (auto-reformats markdown/Go files). A read-only action like make pr PR_ACTION=status should not silently modify working tree files. Consider removing pr from STANDARD_VERBS and adding only the specific preflight steps it actually needs (e.g., venv enforcement), or making AUTO_ADJUST_PROJECT conditional on the verb.


<file context>
@@ -89,8 +101,8 @@ $(LINT_CACHE_DIR):
-.PHONY: help setup build check security format docs docs-base docs-sync-scripts test validate clean _preflight
-STANDARD_VERBS := setup build check security format docs test validate clean
+.PHONY: help setup build check security format docs docs-base docs-sync-scripts test validate clean pr _preflight
+STANDARD_VERBS := setup build check security format docs test validate clean pr
 $(STANDARD_VERBS): _preflight
 
</file context>

log_path = report_dir / f"{display}.log"
if repo_root == workspace_root:
command = [
"python",

@cubic-dev-ai cubic-dev-ai bot Feb 20, 2026


P2: Use sys.executable instead of bare "python" to ensure the subprocess uses the same interpreter (and venv) as the parent process. This avoids accidentally invoking system Python when the venv isn't activated in the subprocess environment.


<file context>
@@ -0,0 +1,213 @@
+    log_path = report_dir / f"{display}.log"
+    if repo_root == workspace_root:
+        command = [
+            "python",
+            "scripts/github/pr_manager.py",
+            "--repo-root",
</file context>
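The interpreter-pinning fix can be sketched as a small command builder. The --repo-root flag comes from the file context above; the builder function name and any other flags are illustrative:

```python
# Hedged sketch: build the pr_manager command around sys.executable.
import sys


def build_pr_manager_command(repo_root: str, extra: list[str]) -> list[str]:
    # sys.executable pins the child process to the interpreter (and venv)
    # running this script, instead of whatever "python" resolves to on PATH.
    return [
        sys.executable,
        "scripts/github/pr_manager.py",
        "--repo-root",
        repo_root,
        *extra,
    ]
```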


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 14

🧹 Nitpick comments (18)
scripts/core/stub_supply_chain.py (1)

19-22: Optional: cache the repo root path to avoid computing it twice.

Path(__file__).resolve().parents[2] is evaluated twice in consecutive lines.

♻️ Suggested refactor
-if str(Path(__file__).resolve().parents[2]) not in sys.path:
-    sys.path.insert(0, str(Path(__file__).resolve().parents[2]))
+_REPO_ROOT = str(Path(__file__).resolve().parents[2])
+if _REPO_ROOT not in sys.path:
+    sys.path.insert(0, _REPO_ROOT)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/core/stub_supply_chain.py` around lines 19 - 22, The code computes
Path(__file__).resolve().parents[2] twice; cache it in a local variable (e.g.,
repo_root or repo_root_path) and reuse that variable in both the membership
check and sys.path.insert call so you only resolve the path once; update the
top-level module code around the existing sys.path logic and the import of
resolve_projects to use the new variable.
libs/__init__.py (1)

9-9: "subprocess" in __all__ shadows the stdlib module name

libs/subprocess.py sharing its name with the stdlib subprocess means from libs import subprocess (or from libs import *) yields the internal module, not the stdlib one — a silent footgun for any future author. Consider renaming the module to libs/proc.py or libs/shell.py.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/__init__.py` at line 9, The project exports "subprocess" from libs via
the __all__ list which shadows the stdlib module name; rename the internal
module (e.g., move/rename libs/subprocess.py to libs/proc.py or libs/shell.py)
and update the __all__ entry to the new module name and all internal
imports/usages that reference subprocess (search for the module symbol
"subprocess", the file "libs/subprocess.py", and the package-level "__all__" in
libs/__init__.py) so that "from libs import subprocess" no longer hides the
standard library module and existing call sites use the renamed module symbol.
libs/subprocess.py (1)

8-27: Add an optional timeout parameter to prevent indefinite hangs

Neither run_checked nor run_capture accept a timeout, so any hung subprocess blocks the caller permanently. Callers like run_deptry work around this by calling subprocess.run directly with a hard-coded timeout, defeating the purpose of these helpers.

♻️ Proposed refactor
-def run_checked(command: list[str], cwd: Path | None = None) -> None:
-    result = subprocess.run(command, cwd=cwd, check=False)
+def run_checked(command: list[str], cwd: Path | None = None, timeout: int | None = None) -> None:
+    result = subprocess.run(command, cwd=cwd, check=False, timeout=timeout)
     if result.returncode != 0:
         cmd = shlex.join(command)
         raise RuntimeError(f"command failed ({result.returncode}): {cmd}")


-def run_capture(command: list[str], cwd: Path | None = None) -> str:
+def run_capture(command: list[str], cwd: Path | None = None, timeout: int | None = None) -> str:
     result = subprocess.run(
         command,
         cwd=cwd,
         capture_output=True,
         text=True,
         check=False,
+        timeout=timeout,
     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/subprocess.py` around lines 8 - 27, Both run_checked and run_capture can
hang indefinitely; add an optional timeout: float | None parameter (default
None) to both functions and pass it through to subprocess.run; catch
subprocess.TimeoutExpired in each function, build the command string using
shlex.join(command) and raise a RuntimeError that includes the timeout and the
command (for run_capture include any available partial output from err/out if
present), ensuring type signatures of run_checked and run_capture are updated
accordingly.
scripts/dependencies/dependency_detection.py (1)

72-81: discover_projects refactoring looks correct; note pre-existing INTERNAL_PREFIXES duplication nearby

The new resolver-based implementation is clean. One unrelated pre-existing issue at line 32:

INTERNAL_PREFIXES = ("flext_", "flext_", "flext_")

All three entries are identical. While str.startswith(tuple) deduplicates at runtime, this is almost certainly a copy-paste error — the two additional entries were likely meant to be different prefixes (e.g., "flext-", "flexcore_"). Worth fixing in a follow-up.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/dependencies/dependency_detection.py` around lines 72 - 81, The tuple
INTERNAL_PREFIXES currently contains three identical entries ("flext_",
"flext_", "flext_") — update the definition of INTERNAL_PREFIXES to remove
duplicate entries and correct the intended prefixes (e.g., replace duplicates
with the intended variants such as "flext-", "flexcore_", or other
project-specific prefixes) so the startswith checks use the correct, unique
prefix set; locate the INTERNAL_PREFIXES constant and replace the duplicated
tuple with a deduplicated, accurate tuple of prefix strings.
libs/selection.py (1)

21-23: Optional: move error message into a custom exception or keep it compact (Ruff TRY003)

Ruff flags the long inline RuntimeError message at line 23. A lightweight fix is to create a dedicated exception; alternatively, just acknowledge the suppression with a # noqa: TRY003 comment if the inline message is preferred.

♻️ Minimal inline alternative
     if missing:
-        missing_text = ", ".join(sorted(missing))
-        raise RuntimeError(f"unknown projects: {missing_text}")
+        raise RuntimeError("unknown projects: " + ", ".join(sorted(missing)))  # noqa: TRY003
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/selection.py` around lines 21 - 23, Replace the inline RuntimeError with
a small dedicated exception type to satisfy Ruff TRY003: add a new exception
class (e.g., class UnknownProjectsError(Exception): pass) in the same module and
change the raise in the if missing block to raise
UnknownProjectsError(missing_text) (use the existing local variable
missing_text). This keeps the error message compact and moves the long inline
message into the exception instantiation via the variable; alternatively, if you
prefer not to add a class, append a "# noqa: TRY003" comment to the current
raise to silence Ruff.
scripts/release/run.py (1)

38-48: Consider using tomllib.load() directly with a binary file handle.

Reading bytes, decoding to UTF-8, then passing to tomllib.loads() works but is less idiomatic than using the binary-mode API that tomllib was designed for:

Proposed simplification
 def _current_version(root: Path) -> str:
     pyproject = root / "pyproject.toml"
-    content = pyproject.read_bytes()
-    data = tomllib.loads(content.decode("utf-8"))
+    with open(pyproject, "rb") as fh:
+        data = tomllib.load(fh)
     project = data.get("project")
     if not isinstance(project, dict):
         raise RuntimeError("unable to detect [project] section from pyproject.toml")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/release/run.py` around lines 38 - 48, The _current_version function
currently reads pyproject.toml bytes, decodes them, and calls tomllib.loads;
replace that with tomllib.load by opening the file in binary mode to be more
idiomatic: open pyproject in "rb" and call tomllib.load(file) to obtain data,
keep the existing validation of project/version and the final
removesuffix("-dev") behavior; update the code around the pyproject variable and
tomllib usage in _current_version accordingly.
scripts/maintenance/enforce_python_version.py (2)

24-29: Duplicate parents[2] resolution.

Path(__file__).resolve().parents[2] is computed twice — once on line 24 for sys.path setup and again on line 29 for ROOT. Consider reusing a single variable:

Proposed fix
-if str(Path(__file__).resolve().parents[2]) not in sys.path:
-    sys.path.insert(0, str(Path(__file__).resolve().parents[2]))
+ROOT = Path(__file__).resolve().parents[2]
+if str(ROOT) not in sys.path:
+    sys.path.insert(0, str(ROOT))

 from libs.selection import resolve_projects

-ROOT = Path(__file__).resolve().parents[2]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/maintenance/enforce_python_version.py` around lines 24 - 29, Compute
Path(__file__).resolve().parents[2] once into a variable (e.g., root_dir) and
reuse it for both the sys.path insertion and the ROOT assignment to avoid
duplicated work; replace the two occurrences of
Path(__file__).resolve().parents[2] with that single variable and use
str(root_dir) where needed before importing resolve_projects from
libs.selection.

186-200: Existing guards are never refreshed when REQUIRED_MINOR changes.

If REQUIRED_MINOR is bumped (e.g., 13 → 14), _has_guard still returns True for files containing the old marker, so _ensure_conftest_guard short-circuits at line 190 without re-injecting the updated block. This is likely acceptable for now but worth documenting — a future version bump would need a --force flag or manual marker removal to propagate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/maintenance/enforce_python_version.py` around lines 186 - 200, The
current logic in _ensure_conftest_guard short-circuits if _has_guard(content)
returns True, so files with an old REQUIRED_MINOR marker never get updated;
change the flow so you detect the guard's version (either by enhancing
_has_guard to return the matched minor version or adding a new helper like
_guard_minor_version) and if the found minor differs from REQUIRED_MINOR, treat
it as not current: when check_only is false call _inject_guard to rewrite the
file (using conftest.write_text as already done) and log that the guard was
refreshed (respecting verbose), otherwise when check_only is true report
MISSING/OUTDATED; update references in _ensure_conftest_guard, _has_guard (or
add _guard_minor_version), and _inject_guard to support this version-aware
behavior.
scripts/release/shared.py (1)

32-38: Fragile string-based error message rewriting.

The str(exc).replace("unknown projects", "unknown release projects") silently becomes a no-op if the upstream library changes its error wording. Consider matching on a more stable signal or documenting the coupling.

That said, this is purely cosmetic (the error is still raised correctly), so it's not blocking.
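One way to make the coupling explicit is to gate the rewrite on the expected upstream wording and otherwise pass the error through unchanged. A minimal sketch (the function name is illustrative, and the `"unknown projects"` marker is an assumption about the upstream message):

```python
def rewrite_unknown_projects(exc: RuntimeError) -> RuntimeError:
    """Rewrite the message only when the expected upstream wording is
    present; otherwise return the original error untouched, so an
    upstream wording change cannot silently produce a mismatched message."""
    marker = "unknown projects"  # assumed wording from libs.selection
    message = str(exc)
    if marker in message:
        return RuntimeError(message.replace(marker, "unknown release projects"))
    return exc
```

The caller would then `raise rewrite_unknown_projects(exc) from exc`, preserving the chain either way.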

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/release/shared.py` around lines 32 - 38, The current RuntimeError
message rewrite in resolve_projects is fragile because it blindly replaces text
in str(exc); update resolve_projects to detect the specific condition before
changing the message: call _resolve_projects(root, names) in the try block,
catch RuntimeError as exc, inspect str(exc) to see if it contains the stable
substring "unknown projects" (or better, check for a more stable indicator if
available), and only then raise a new RuntimeError with "unknown release
projects" chained from exc; otherwise re-raise the original exception (or raise
RuntimeError(str(exc)) from exc) to avoid silent no-ops—refer to the
resolve_projects function and the _resolve_projects call when making this
change.
tests/unit/scripts/release/release_shared_and_run_tests.py (1)

12-20: Inconsistent _load_module parameter order across test files.

This file uses _load_module(module_name, relative_path), but tests/unit/scripts/release/release_scripts_tests.py defines it with swapped parameters: _load_module(relative_path, module_name). While each file is internally consistent with its own signature, this inconsistency is a risk for copy-paste errors or future refactoring mistakes.

Three out of four test files (release_shared_and_run_tests.py, pr_manager_tests.py, test_pr_workspace.py) use the (module_name, relative_path) order. Consider standardizing all files to this convention.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/scripts/release/release_shared_and_run_tests.py` around lines 12 -
20, Standardize the _load_module signature to (module_name, relative_path)
across tests by changing the variant in
tests/unit/scripts/release/release_scripts_tests.py to match the other files:
keep the function name _load_module(module_name: str, relative_path: str) and
update all call sites in that file to pass module_name first and relative_path
second; ensure the function parameters, any internal uses, and imports in
release_scripts_tests.py align with the (module_name, relative_path) ordering to
avoid copy-paste mistakes with _load_module used in
release_shared_and_run_tests.py, pr_manager_tests.py, and test_pr_workspace.py.
scripts/github/pr_workspace.py (4)

99-109: Push-then-rebase-then-push retry can silently lose the race.

If the first push fails (Lines 100-104) and pull-rebase succeeds, the second push on Line 109 uses run_checked which raises on failure. This means a second concurrent push from another agent would cause an unhandled exception and abort the entire workspace loop. Consider catching the error or using a non-raising subprocess call with a clear diagnostic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/github/pr_workspace.py` around lines 99 - 109, The current
push-then-rebase-then-push flow can raise an unhandled exception on the second
push (run_checked(push_cmd, cwd=repo_root)) if another agent beat us to the
push; change this to handle failure gracefully by replacing the final
run_checked call with a non-raising subprocess.run (similar to the initial push
call) or wrap run_checked(push_cmd, cwd=repo_root) in a try/except for
CalledProcessError and log a clear diagnostic including push_cmd, branch and
push.returncode, then continue the workspace loop instead of aborting; reference
push_cmd, push, run_checked, repo_root and branch when implementing the change.

46-47: _has_changes will propagate a RuntimeError if the git command fails.

run_capture (from libs.subprocess) presumably raises on non-zero exit—similar to _run_capture in pr_manager.py. If git status --porcelain fails (e.g., not a git repo), this will throw an unhandled exception rather than returning a useful error. Since this is called inside the main loop, it would abort the entire workspace run.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/github/pr_workspace.py` around lines 46 - 47, The _has_changes
function currently calls run_capture(["git", "status", "--porcelain"],
cwd=repo_root) which will raise a RuntimeError on non-zero exit and can abort
the main loop; wrap that call in a try/except that catches RuntimeError (and
optionally subprocess-related exceptions from libs.subprocess), and return False
when git fails (or if you prefer, re-raise a more descriptive error); update the
_has_changes implementation to handle the exception instead of letting
run_capture propagate, referencing the _has_changes function and run_capture
helper so the check returns a boolean safely even when the git command fails.

54-85: _checkout_branch fallback chain is complex—document the intent.

The three-tier fallback (plain checkout → force -B on local changes → fetch+checkout from origin) handles several real-world scenarios, but the logic is non-obvious. Specifically:

  • Line 71: git checkout -B branch when local changes are detected resets the branch pointer to HEAD while preserving the working tree. This works but only because the target is HEAD—if the intent were to track origin/branch, the local changes would still be on the old base.
  • Line 82 vs 85: After fetch, -B branch origin/branch resets to remote, while the else-branch on fetch failure creates a purely local branch. These are meaningfully different outcomes that happen silently.

A brief comment explaining the priority (prefer remote, fall back to local) would help future maintainers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/github/pr_workspace.py` around lines 54 - 85, The _checkout_branch
function implements a three-step fallback for switching branches: try a normal
git checkout, if that fails due to local changes force-create/reset the local
branch with git checkout -B <branch> (keeps working tree but points branch at
HEAD), and if checkout fails otherwise try fetching origin and then force-create
the local branch to track origin with git checkout -B <branch> origin/<branch>,
falling back to creating a purely local branch if fetch fails; add a concise
in-code comment above (or at the top of) _checkout_branch describing this
priority (prefer remote tracking if fetch succeeds, otherwise fall back to local
branch; note that -B with local changes resets the branch to HEAD while
preserving the working tree) and mention the exact git commands used (git
checkout, git checkout -B, git fetch origin) so future maintainers understand
the silent behavioral differences.

112-182: _run_pr uses bare python for workspace-root invocation.

Line 119 calls "python" which may resolve to the system Python rather than the workspace venv. Since pr_manager.py doesn't import workspace libs.*, it will work today, but it's inconsistent with the workspace convention. For subprojects (line 149), the make pr target handles venv activation, so only the workspace-root path is affected.

Proposed fix
     if repo_root == workspace_root:
+        python = str(workspace_root / ".venv" / "bin" / "python")
         command = [
-            "python",
+            python,
             "scripts/github/pr_manager.py",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/github/pr_workspace.py` around lines 112 - 182, The _run_pr function
builds a command using the bare "python" when running
scripts/github/pr_manager.py which can pick up the system interpreter instead of
the workspace virtualenv; update the workspace-root branch command construction
in _run_pr to use the current Python interpreter (e.g., sys.executable) so the
subprocess runs in the same venv as the rest of the workspace and remains
consistent with how subprojects are executed via make/pr; modify the command
list creation for running pr_manager.py to reference that interpreter symbol
instead of the literal "python".
scripts/github/pr_manager.py (4)

228-267: main() doesn't catch RuntimeError from _run_capture calls.

If _current_branch (Line 231) or _open_pr_for_head (called from _print_status or _create_pr) fails, the unhandled RuntimeError produces a full traceback. For a CLI tool, wrapping the body in a try/except to print a clean error and return non-zero would improve the operator experience.
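The top-level guard could be as small as the following sketch. `_dispatch` is a placeholder standing in for the real argument parsing and command dispatch; the point is that the `try/except` lives only in `main()` so the helpers stay raise-happy and testable.

```python
import sys


def _dispatch() -> int:
    # Placeholder for the real dispatch (status/create/merge/close/...).
    raise RuntimeError("gh pr view failed: no pull requests found")


def main() -> int:
    """Convert helper RuntimeErrors into a clean one-line message and a
    non-zero exit code instead of a full traceback."""
    try:
        return _dispatch()
    except RuntimeError as exc:
        print(f"error: {exc}", file=sys.stderr)
        return 1
```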

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/github/pr_manager.py` around lines 228 - 267, main() currently lets
RuntimeError from helpers like _current_branch, _open_pr_for_head (used by
_print_status/_create_pr) bubble up and print full tracebacks; wrap the core
dispatch logic in a try/except RuntimeError block that prints a concise error
message to stderr (use the exception string) and returns a non-zero exit code
(e.g., 1). Ensure you still return existing ints on success paths and that
exceptions are only caught at the top level in main() so functions like
_print_status, _create_pr, _run_capture, and _current_branch can raise normally
for testability.

171-203: Rebase fallback after "not mergeable" retries merge unconditionally—potential for confusing double-error output.

On Line 198, the merge command is retried after update-branch --rebase succeeds. If the retry also fails, both attempts' outputs are printed (via _run_stream_with_output), which may confuse operators. More importantly, if update-branch itself fails (Line 196, update_code != 0), the original failed exit_code is returned silently without any diagnostic.

Consider adding a log line when update-branch fails so operators know the fallback was attempted and also failed.

Proposed improvement
         update_code, _ = _run_stream_with_output(
             ["gh", "pr", "update-branch", selector, "--rebase"],
             repo_root,
         )
         if update_code == 0:
             exit_code, _ = _run_stream_with_output(command, repo_root)
+        else:
+            print(f"status=update-branch-failed exit={update_code}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/github/pr_manager.py` around lines 171 - 203, In _merge_pr, when the
"not mergeable" path tries ["gh", "pr", "update-branch", selector, "--rebase"]
via _run_stream_with_output, add explicit logging so operators see that the
fallback was attempted and whether it failed; specifically, log a clear message
(including the update_code and its output) if update-branch fails, and when you
retry the original merge capture and log the retry result separately (so outputs
aren't ambiguous). Update the logic around _run_stream_with_output calls in
_merge_pr to surface the update-branch failure and to emit a single clear merge
outcome message before returning, referencing _merge_pr, _run_stream_with_output
and _trigger_release_if_needed.

32-46: Side-effect in a "capture" helper may surprise callers.

_run_stream_with_output unconditionally prints stdout/stderr (Lines 44-45) and returns the output. This means callers like _merge_pr can't decide whether to suppress output. Currently it works because both call-sites in _merge_pr want the output visible, but the implicit print couples presentation into the utility.

Not blocking—just noting for future maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/github/pr_manager.py` around lines 32 - 46, The helper
_run_stream_with_output currently prints output unconditionally which causes an
unwanted side-effect; change its signature to accept an optional echo: bool =
False, remove the unconditional print, and only print when echo is True; update
all call sites (e.g., _merge_pr and any other callers) to pass echo=True where
they need visible output so callers can choose whether to suppress printing.

96-97: _selector silently returns an empty string when both inputs are empty.

If pr_number is "" and head is "", this returns "", which would cause gh pr merge "" or similar downstream calls to fail with a confusing error. In main(), head is resolved via _current_branch, so this is currently safe—but the function itself offers no guard.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/github/pr_manager.py` around lines 96 - 97, The _selector function
can return an empty string when both pr_number and head are empty; change
_selector(pr_number: str, head: str) so it validates inputs and raises a clear
exception (e.g., ValueError) when both are falsy, including both values in the
error message; keep the existing behavior of returning pr_number when present
and head otherwise, and ensure callers like main/_current_branch still get a
useful error rather than an empty string.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@flext-auth`:
- Line 1: The repository submodule flext-auth currently points to a non-existent
commit f932d482af9771ca182f1308996e579b26368506; fix this by fetching the
upstream flext-auth repo to identify the correct commit or tag, update the
flext-auth gitlink to that valid commit (or update .gitmodules/branch to point
at a reachable ref), stage the updated submodule pointer (git add flext-auth),
commit the change, and push so future clones/updates no longer fail; verify by
performing a fresh clone or submodule update to confirm the pointer resolves.

In `@flext-db-oracle`:
- Line 1: The submodule entry for flext-db-oracle references a nonexistent
commit 4cd0b0ce7a362d1e7ab041d6135da2a9f5817ba3 causing git submodule update to
fail; fix it by locating the correct commit/hash in the flext-db-oracle upstream
repo (or switch to the intended branch/tag), update the submodule reference in
the repository (update the gitlink in the submodule directory or update
.gitmodules and run git submodule sync), run git submodule update --init
--recursive to verify the new hash resolves, and commit the updated submodule
reference so the PR points to a valid upstream commit.

In `@flext-tap-oracle`:
- Line 1: The submodule pointer recorded in the repo does not match the AI
summary: the checked-in gitlink points to commit
d141be75092c7672a9cf4336fa8a2681ed209d9e while the summary claims
960c01c115ff21339d03dee173afbc4ec435f579; fix by ensuring the submodule pointer
is set to the intended target commit and the commit is staged and committed (or
conversely, update the summary to reference the actual commit), e.g., update the
submodule to the correct commit for flext-tap-oracle and re-commit the gitlink
so the recorded hash and the summary agree.

In `@flext-target-ldap`:
- Line 1: The submodule flext-target-ldap is pointing at a non-existent commit
a6ac9dfb2e30fa0ef16321520fa9600d3334c62a; fix it by verifying the correct commit
in the upstream flext-target-ldap repo (fetch remote refs or ask upstream),
update the submodule pointer to a valid commit or tag, then update the
repository’s submodule metadata and commit the change (ensure .gitmodules and
the submodule gitlink in the parent commit reflect the new hash and run
submodule init/update on CI to validate the pointer).

In `@libs/discovery.py`:
- Around line 19-27: _submodule_names currently returns raw paths from
.gitmodules which causes mismatches later where callers (e.g., code comparing
entry.name) expect just the directory basename; update the function
_submodule_names to map each matched path string through Path(p).name (or
equivalent) and return a set of those basenames (skipping empty matches) so
nested entries like "platform/flext-core" yield "flext-core".

In `@libs/git.py`:
- Around line 8-9: current_branch currently returns the literal "HEAD" in a
detached HEAD state (via
run_capture(["git","rev-parse","--abbrev-ref","HEAD"])), which can break callers
that expect a real branch name; update the current_branch function to check the
result and raise a clear exception (e.g., ValueError or RuntimeError) when the
returned value is "HEAD" with a message indicating a detached HEAD and
suggesting using a commit SHA or creating/checkout a branch, or alternatively
document this behavior if you prefer not to raise — reference the current_branch
function and the run_capture call to locate where to add the guard.

In `@libs/paths.py`:
- Around line 10-11: The repo_root_from_script function currently uses
Path(...).resolve().parents[1], which fails for scripts two levels deep; update
repo_root_from_script to return Path(script_file).resolve().parents[2] instead
(or change the signature to accept an optional depth parameter with a default of
2 and use that index) so it correctly resolves the repository root for paths
like scripts/<category>/file.py; ensure you reference the function name
repo_root_from_script and the parents index when making the change.

In `@Makefile`:
- Around line 26-40: PR_BRANCH is hardcoded to "0.11.0-dev", which forces manual
bumps each release; change the Makefile so PR_BRANCH defaults to an empty string
(PR_BRANCH ?= "") and/or derive it at runtime (e.g., use git rev-parse
--abbrev-ref HEAD or a CI variable) when empty, and update any targets that rely
on PR_BRANCH to fall back to the current branch; ensure the variable name
PR_BRANCH is referenced in the Makefile logic so callers get the derived value
and add a short comment documenting the behavior.
- Line 139: The .PHONY line currently includes the non-authorized workspace verb
"pr"; remove "pr" from the .PHONY declaration (the line containing ".PHONY: help
setup upgrade build check security format docs test validate typings clean
release release-ci pr") so the Makefile only exposes the allowed verbs, or
alternatively move/declare the "pr" target in a repo-specific Makefile and not
the workspace Makefile—update the .PHONY to match the allowed set (setup, check,
security, format, docs, test, validate, typings, clean) and ensure there are no
remaining "pr" target definitions in this file.
- Around line 464-486: The pr Makefile target must ensure the workspace venv and
invoke Python from that venv: add $(ENFORCE_WORKSPACE_VENV) to the pr target
prerequisites and replace the bare python call to run
scripts/github/pr_workspace.py with the managed venv python (e.g.,
$(WORKSPACE_VENV)/bin/python or $(POETRY_ENV) python) so imports from
libs.selection and libs.subprocess resolve correctly; update references in the
pr rule only (target name "pr", script path "scripts/github/pr_workspace.py",
variables "ENFORCE_WORKSPACE_VENV", "WORKSPACE_VENV" or "POETRY_ENV").

In `@scripts/github/pr_manager.py`:
- Line 145: The print call currently uses an unnecessary f-string: replace the
extraneous f prefix on the string literal in the print statement
(print(f"status=already-open")) with a normal string
(print("status=already-open")) so there are no f-strings without placeholders;
locate the print call in scripts/github/pr_manager.py and update it accordingly.
- Around line 135-168: The _create_pr function currently calls _run_capture
which raises on non-zero exit and causes tracebacks; replace that call with
_run_stream_with_output (or _run_stream) so the GH command's exit code and
stdout/stderr are returned instead of raising, check the exit code, print
"status=created" and pr_url parsed from the command's output only on success,
and on failure print an error status/exit code and return a non-zero integer;
keep the early-return behavior when _open_pr_for_head finds an existing PR.
Ensure you update references to _run_capture in _create_pr and use the same
parsing/printing logic as other handlers (e.g., how view/checks/close use
_run_stream).

In `@scripts/github/pr_workspace.py`:
- Around line 88-109: The commit message in _checkpoint currently hardcodes
"chore: checkpoint pending 0.11.0-dev changes"; change it to derive the version
from the branch parameter instead so it stays correct across release
cycles—build the commit message string using branch (e.g., f"chore: checkpoint
pending {branch} changes" or extract a version token from branch if needed) and
replace the hardcoded message in the run_checked(["git", "commit", ...]) call;
keep all other git logic (staging, pushing, pull/rebase) unchanged.

In `@scripts/maintenance/enforce_python_version.py`:
- Around line 240-243: The second print uses an unnecessary f-string (no
placeholders) causing Ruff F541; in the branch where args.check is true, change
print(f"  Run: python scripts/maintenance/enforce_python_version.py") to a
regular string print("  Run: python
scripts/maintenance/enforce_python_version.py") so only the first print that
interpolates REQUIRED_MINOR remains an f-string; update the line near the
args.check block that references REQUIRED_MINOR to keep behavior unchanged.

---

Duplicate comments:
In `@tests/unit/scripts/release/release_scripts_tests.py`:
- Around line 10-18: The _load_module helper in release_scripts_tests.py has its
parameters reversed compared to the other test helpers; change the function
signature and all internal uses to _load_module(module_name: str, relative_path:
str) so it matches the convention used in release_shared_and_run_tests.py and
pr_manager_tests.py; update the parameter names inside the function (module_path
construction, spec creation, and any callers within this test file) to use the
new order and names (module_name then relative_path) to keep consistency across
tests.

---

Nitpick comments:
In `@libs/__init__.py`:
- Line 9: The project exports "subprocess" from libs via the __all__ list which
shadows the stdlib module name; rename the internal module (e.g., move/rename
libs/subprocess.py to libs/proc.py or libs/shell.py) and update the __all__
entry to the new module name and all internal imports/usages that reference
subprocess (search for the module symbol "subprocess", the file
"libs/subprocess.py", and the package-level "__all__" in libs/__init__.py) so
that "from libs import subprocess" no longer hides the standard library module
and existing call sites use the renamed module symbol.

In `@libs/selection.py`:
- Around line 21-23: Replace the inline RuntimeError with a small dedicated
exception type to satisfy Ruff TRY003: add a new exception class (e.g., class
UnknownProjectsError(Exception): pass) in the same module and change the raise
in the if missing block to raise UnknownProjectsError(missing_text) (use the
existing local variable missing_text). This keeps the error message compact and
moves the long inline message into the exception instantiation via the variable;
alternatively, if you prefer not to add a class, append a "# noqa: TRY003"
comment to the current raise to silence Ruff.

In `@libs/subprocess.py`:
- Around line 8-27: Both run_checked and run_capture can hang indefinitely; add
an optional timeout: float | None parameter (default None) to both functions and
pass it through to subprocess.run; catch subprocess.TimeoutExpired in each
function, build the command string using shlex.join(command) and raise a
RuntimeError that includes the timeout and the command (for run_capture include
any available partial output from err/out if present), ensuring type signatures
of run_checked and run_capture are updated accordingly.

In `@scripts/core/stub_supply_chain.py`:
- Around line 19-22: The code computes Path(__file__).resolve().parents[2]
twice; cache it in a local variable (e.g., repo_root or repo_root_path) and
reuse that variable in both the membership check and sys.path.insert call so you
only resolve the path once; update the top-level module code around the existing
sys.path logic and the import of resolve_projects to use the new variable.

In `@scripts/dependencies/dependency_detection.py`:
- Around line 72-81: The tuple INTERNAL_PREFIXES currently contains three
identical entries ("flext_", "flext_", "flext_") — update the definition of
INTERNAL_PREFIXES to remove duplicate entries and correct the intended prefixes
(e.g., replace duplicates with the intended variants such as "flext-",
"flexcore_", or other project-specific prefixes) so the startswith checks use
the correct, unique prefix set; locate the INTERNAL_PREFIXES constant and
replace the duplicated tuple with a deduplicated, accurate tuple of prefix
strings.

In `@scripts/github/pr_manager.py`:
- Around line 228-267: main() currently lets RuntimeError from helpers like
_current_branch, _open_pr_for_head (used by _print_status/_create_pr) bubble up
and print full tracebacks; wrap the core dispatch logic in a try/except
RuntimeError block that prints a concise error message to stderr (use the
exception string) and returns a non-zero exit code (e.g., 1). Ensure you still
return existing ints on success paths and that exceptions are only caught at the
top level in main() so functions like _print_status, _create_pr, _run_capture,
and _current_branch can raise normally for testability.
- Around line 171-203: In _merge_pr, when the "not mergeable" path tries ["gh",
"pr", "update-branch", selector, "--rebase"] via _run_stream_with_output, add
explicit logging so operators see that the fallback was attempted and whether it
failed; specifically, log a clear message (including the update_code and its
output) if update-branch fails, and when you retry the original merge capture
and log the retry result separately (so outputs aren't ambiguous). Update the
logic around _run_stream_with_output calls in _merge_pr to surface the
update-branch failure and to emit a single clear merge outcome message before
returning, referencing _merge_pr, _run_stream_with_output and
_trigger_release_if_needed.
- Around line 32-46: The helper _run_stream_with_output currently prints output
unconditionally which causes an unwanted side-effect; change its signature to
accept an optional echo: bool = False, remove the unconditional print, and only
print when echo is True; update all call sites (e.g., _merge_pr and any other
callers) to pass echo=True where they need visible output so callers can choose
whether to suppress printing.
- Around line 96-97: The _selector function can return an empty string when both
pr_number and head are empty; change _selector(pr_number: str, head: str) so it
validates inputs and raises a clear exception (e.g., ValueError) when both are
falsy, including both values in the error message; keep the existing behavior of
returning pr_number when present and head otherwise, and ensure callers like
main/_current_branch still get a useful error rather than an empty string.

In `@scripts/github/pr_workspace.py`:
- Around line 99-109: The current push-then-rebase-then-push flow can raise an
unhandled exception on the second push (run_checked(push_cmd, cwd=repo_root)) if
another agent beat us to the push; change this to handle failure gracefully by
replacing the final run_checked call with a non-raising subprocess.run (similar
to the initial push call) or wrap run_checked(push_cmd, cwd=repo_root) in a
try/except for CalledProcessError and log a clear diagnostic including push_cmd,
branch and push.returncode, then continue the workspace loop instead of
aborting; reference push_cmd, push, run_checked, repo_root and branch when
implementing the change.
- Around line 46-47: The _has_changes function currently calls
run_capture(["git", "status", "--porcelain"], cwd=repo_root) which will raise a
RuntimeError on non-zero exit and can abort the main loop; wrap that call in a
try/except that catches RuntimeError (and optionally subprocess-related
exceptions from libs.subprocess), and return False when git fails (or if you
prefer, re-raise a more descriptive error); update the _has_changes
implementation to handle the exception instead of letting run_capture propagate,
referencing the _has_changes function and run_capture helper so the check
returns a boolean safely even when the git command fails.
- Around line 54-85: The _checkout_branch function implements a three-step
fallback for switching branches: try a normal git checkout, if that fails due to
local changes force-create/reset the local branch with git checkout -B <branch>
(keeps working tree but points branch at HEAD), and if checkout fails otherwise
try fetching origin and then force-create the local branch to track origin with
git checkout -B <branch> origin/<branch>, falling back to creating a purely
local branch if fetch fails; add a concise in-code comment above (or at the top
of) _checkout_branch describing this priority (prefer remote tracking if fetch
succeeds, otherwise fall back to local branch; note that -B with local changes
resets the branch to HEAD while preserving the working tree) and mention the
exact git commands used (git checkout, git checkout -B, git fetch origin) so
future maintainers understand the silent behavioral differences.
- Around line 112-182: The _run_pr function builds a command using the bare
"python" when running scripts/github/pr_manager.py which can pick up the system
interpreter instead of the workspace virtualenv; update the workspace-root
branch command construction in _run_pr to use the current Python interpreter
(e.g., sys.executable) so the subprocess runs in the same venv as the rest of
the workspace and remains consistent with how subprojects are executed via
make/pr; modify the command list creation for running pr_manager.py to reference
that interpreter symbol instead of the literal "python".

In `@scripts/maintenance/enforce_python_version.py`:
- Around line 24-29: Compute Path(__file__).resolve().parents[2] once into a
variable (e.g., root_dir) and reuse it for both the sys.path insertion and the
ROOT assignment to avoid duplicated work; replace the two occurrences of
Path(__file__).resolve().parents[2] with that single variable and use
str(root_dir) where needed before importing resolve_projects from
libs.selection.
- Around line 186-200: The current logic in _ensure_conftest_guard
short-circuits if _has_guard(content) returns True, so files with an old
REQUIRED_MINOR marker never get updated; change the flow so you detect the
guard's version (either by enhancing _has_guard to return the matched minor
version or adding a new helper like _guard_minor_version) and if the found minor
differs from REQUIRED_MINOR, treat it as not current: when check_only is false
call _inject_guard to rewrite the file (using conftest.write_text as already
done) and log that the guard was refreshed (respecting verbose), otherwise when
check_only is true report MISSING/OUTDATED; update references in
_ensure_conftest_guard, _has_guard (or add _guard_minor_version), and
_inject_guard to support this version-aware behavior.

In `@scripts/release/run.py`:
- Around line 38-48: The _current_version function currently reads
pyproject.toml bytes, decodes them, and calls tomllib.loads; replace that with
tomllib.load by opening the file in binary mode to be more idiomatic: open
pyproject in "rb" and call tomllib.load(file) to obtain data, keep the existing
validation of project/version and the final removesuffix("-dev") behavior;
update the code around the pyproject variable and tomllib usage in
_current_version accordingly.

In `@scripts/release/shared.py`:
- Around line 32-38: The current RuntimeError message rewrite in
resolve_projects is fragile because it blindly replaces text in str(exc); update
resolve_projects to detect the specific condition before changing the message:
call _resolve_projects(root, names) in the try block, catch RuntimeError as exc,
inspect str(exc) to see if it contains the stable substring "unknown projects"
(or better, check for a more stable indicator if available), and only then raise
a new RuntimeError with "unknown release projects" chained from exc; otherwise
re-raise the original exception (or raise RuntimeError(str(exc)) from exc) to
avoid silent no-ops—refer to the resolve_projects function and the
_resolve_projects call when making this change.
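A sketch of the condition-gated rewrite; `_resolve_projects` here is a stand-in with an assumed error format, not the real workspace resolver:

```python
def _resolve_projects(root: str, names: list[str]) -> list[str]:
    # Stand-in resolver: the real one lives in libs.selection.
    known = {"flext-core", "flext-api"}
    unknown = sorted(set(names) - known)
    if unknown:
        raise RuntimeError(f"unknown projects: {', '.join(unknown)}")
    return list(names) or sorted(known)


def resolve_projects(root: str, names: list[str]) -> list[str]:
    try:
        return _resolve_projects(root, names)
    except RuntimeError as exc:
        # Only rewrite the message for the specific "unknown projects" case;
        # re-raise everything else untouched to avoid silent no-op rewrites.
        text = str(exc)
        if "unknown projects" in text:
            raise RuntimeError(
                text.replace("unknown projects", "unknown release projects")
            ) from exc
        raise
```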

In `@tests/unit/scripts/release/release_shared_and_run_tests.py`:
- Around line 12-20: Standardize the _load_module signature to (module_name,
relative_path) across tests by changing the variant in
tests/unit/scripts/release/release_scripts_tests.py to match the other files:
keep the function name _load_module(module_name: str, relative_path: str) and
update all call sites in that file to pass module_name first and relative_path
second; ensure the function parameters, any internal uses, and imports in
release_scripts_tests.py align with the (module_name, relative_path) ordering to
avoid copy-paste mistakes with _load_module used in
release_shared_and_run_tests.py, pr_manager_tests.py, and test_pr_workspace.py.

Comment on lines 19 to 27
def _submodule_names(workspace_root: Path) -> set[str]:
gitmodules = workspace_root / ".gitmodules"
if not gitmodules.exists():
return set()
try:
content = gitmodules.read_text(encoding="utf-8")
except OSError:
return set()
return set(re.findall(r"^\s*path\s*=\s*(.+?)\s*$", content, re.MULTILINE))

⚠️ Potential issue | 🟡 Minor

_submodule_names misclassifies submodules with nested .gitmodules paths

The regex extracts the raw path = value from .gitmodules. For a nested entry like path = platform/flext-core, the set contains "platform/flext-core". At line 42, entry.name in submodules compares just the directory base-name "flext-core" — no match — so the submodule is silently mis-classified as "external".

While the current workspace appears to use flat layouts, extracting Path(p).name makes this robust:

🛠️ Proposed fix
-return set(re.findall(r"^\s*path\s*=\s*(.+?)\s*$", content, re.MULTILINE))
+return {Path(p).name for p in re.findall(r"^\s*path\s*=\s*(.+?)\s*$", content, re.MULTILINE)}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/discovery.py` around lines 19 - 27, _submodule_names currently returns
raw paths from .gitmodules which causes mismatches later where callers (e.g.,
code comparing entry.name) expect just the directory basename; update the
function _submodule_names to map each matched path string through Path(p).name
(or equivalent) and return a set of those basenames (skipping empty matches) so
nested entries like "platform/flext-core" yield "flext-core".

Comment on lines 464 to 486
pr: ## Manage pull requests for selected projects
$(Q)$(ENSURE_NO_PROJECT_CONFLICT)
$(Q)$(ENSURE_SELECTED_PROJECTS)
$(Q)$(ENSURE_PROJECTS_EXIST)
$(Q)python scripts/github/pr_workspace.py \
--workspace-root "$(CURDIR)" \
$(foreach proj,$(SELECTED_PROJECTS),--project "$(proj)") \
--include-root "$(PR_INCLUDE_ROOT)" \
--branch "$(PR_BRANCH)" \
--checkpoint "$(PR_CHECKPOINT)" \
--fail-fast "$(if $(filter 1,$(FAIL_FAST)),1,0)" \
--pr-action "$(PR_ACTION)" \
--pr-base "$(PR_BASE)" \
$(if $(PR_HEAD),--pr-head "$(PR_HEAD)",) \
$(if $(PR_NUMBER),--pr-number "$(PR_NUMBER)",) \
$(if $(PR_TITLE),--pr-title "$(PR_TITLE)",) \
$(if $(PR_BODY),--pr-body "$(PR_BODY)",) \
--pr-draft "$(PR_DRAFT)" \
--pr-merge-method "$(PR_MERGE_METHOD)" \
--pr-auto "$(PR_AUTO)" \
--pr-delete-branch "$(PR_DELETE_BRANCH)" \
--pr-checks-strict "$(PR_CHECKS_STRICT)" \
--pr-release-on-merge "$(PR_RELEASE_ON_MERGE)"

⚠️ Potential issue | 🟠 Major

pr target is missing $(ENFORCE_WORKSPACE_VENV) and uses bare python.

Every other target that runs a Python script either uses $(POETRY_ENV) python, $(WORKSPACE_VENV)/bin/python, or python3.13. The pr target invokes python scripts/github/pr_workspace.py without ensuring the workspace venv is active. Since pr_workspace.py imports from libs.selection and libs.subprocess (workspace-local packages), this will fail if the workspace venv isn't active.

Proposed fix
 pr: ## Manage pull requests for selected projects
 	$(Q)$(ENSURE_NO_PROJECT_CONFLICT)
+	$(Q)$(ENFORCE_WORKSPACE_VENV)
 	$(Q)$(ENSURE_SELECTED_PROJECTS)
 	$(Q)$(ENSURE_PROJECTS_EXIST)
-	$(Q)python scripts/github/pr_workspace.py \
+	$(Q)$(WORKSPACE_VENV)/bin/python scripts/github/pr_workspace.py \

Based on learnings: "All pip install, python, and tool invocations must use an active managed venv (.venv/bin/ in workspace mode, or project .venv/bin/ in fallback mode). Never install packages with system pip"

🧰 Tools
🪛 checkmake (0.2.2)

[warning] 464-464: Target body for "pr" exceeds allowed length of 5 (22).

(maxbodylength)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` around lines 464 - 486, The pr Makefile target must ensure the
workspace venv and invoke Python from that venv: add $(ENFORCE_WORKSPACE_VENV)
to the pr target prerequisites and replace the bare python call to run
scripts/github/pr_workspace.py with the managed venv python (e.g.,
$(WORKSPACE_VENV)/bin/python or $(POETRY_ENV) python) so imports from
libs.selection and libs.subprocess resolve correctly; update references in the
pr rule only (target name "pr", script path "scripts/github/pr_workspace.py",
variables "ENFORCE_WORKSPACE_VENV", "WORKSPACE_VENV" or "POETRY_ENV").

Comment on lines 135 to 168
def _create_pr(
repo_root: Path,
base: str,
head: str,
title: str,
body: str,
draft: int,
) -> int:
existing = _open_pr_for_head(repo_root, head)
if existing is not None:
print(f"status=already-open")
print(f"pr_url={existing.get('url')}")
return 0

command = [
"gh",
"pr",
"create",
"--base",
base,
"--head",
head,
"--title",
title,
"--body",
body,
]
if draft == 1:
command.append("--draft")

created = _run_capture(command, repo_root)
print("status=created")
print(f"pr_url={created}")
return 0

⚠️ Potential issue | 🟠 Major

_create_pr will raise RuntimeError on gh pr create failure instead of returning a non-zero exit code.

Line 165 uses _run_capture, which raises on non-zero exit. If gh pr create fails (e.g. network issue, permission error), the exception propagates to main() and produces a traceback rather than a clean non-zero return. Other action handlers (view, checks, close) use _run_stream which returns the exit code gracefully.

Consider using _run_stream_with_output or _run_stream instead of _run_capture for the create command, and parse the PR URL from the output.

Proposed fix sketch
-    created = _run_capture(command, repo_root)
-    print("status=created")
-    print(f"pr_url={created}")
-    return 0
+    exit_code, output = _run_stream_with_output(command, repo_root)
+    if exit_code == 0:
+        print("status=created")
+        print(f"pr_url={output.strip()}")
+    return exit_code
🧰 Tools
🪛 Ruff (0.15.1)

[error] 145-145: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/github/pr_manager.py` around lines 135 - 168, The _create_pr function
currently calls _run_capture which raises on non-zero exit and causes
tracebacks; replace that call with _run_stream_with_output (or _run_stream) so
the GH command's exit code and stdout/stderr are returned instead of raising,
check the exit code, print "status=created" and pr_url parsed from the command's
output only on success, and on failure print an error status/exit code and
return a non-zero integer; keep the early-return behavior when _open_pr_for_head
finds an existing PR. Ensure you update references to _run_capture in _create_pr
and use the same parsing/printing logic as other handlers (e.g., how
view/checks/close use _run_stream).

) -> int:
existing = _open_pr_for_head(repo_root, head)
if existing is not None:
print(f"status=already-open")

⚠️ Potential issue | 🟡 Minor

Remove extraneous f prefix on string without placeholders.

Static analysis (Ruff F541) correctly flags this.

Proposed fix
-        print(f"status=already-open")
+        print("status=already-open")
🧰 Tools
🪛 Ruff (0.15.1)

[error] 145-145: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/github/pr_manager.py` at line 145, The print call currently uses an
unnecessary f-string: replace the extraneous f prefix on the string literal in
the print statement (print(f"status=already-open")) with a normal string
(print("status=already-open")) so there are no f-strings without placeholders;
locate the print call in scripts/github/pr_manager.py and update it accordingly.

Comment on lines +88 to +109
def _checkpoint(repo_root: Path, branch: str) -> None:
if not _has_changes(repo_root):
return
run_checked(["git", "add", "-A"], cwd=repo_root)
staged = run_capture(["git", "diff", "--cached", "--name-only"], cwd=repo_root)
if not staged.strip():
return
run_checked(
["git", "commit", "-m", "chore: checkpoint pending 0.11.0-dev changes"],
cwd=repo_root,
)
push_cmd = ["git", "push", "-u", "origin", branch] if branch else ["git", "push"]
push = subprocess.run(
push_cmd, cwd=repo_root, check=False, capture_output=True, text=True
)
if push.returncode == 0:
return
if branch:
run_checked(["git", "pull", "--rebase", "origin", branch], cwd=repo_root)
else:
run_checked(["git", "pull", "--rebase"], cwd=repo_root)
run_checked(push_cmd, cwd=repo_root)

⚠️ Potential issue | 🟡 Minor

Hardcoded 0.11.0-dev in checkpoint commit message.

Line 96 embeds the release cycle version directly in the commit message string. This creates a maintenance burden—every cycle this file needs updating. Consider deriving the message from the branch parameter:

Proposed fix
     run_checked(
-        ["git", "commit", "-m", "chore: checkpoint pending 0.11.0-dev changes"],
+        ["git", "commit", "-m", f"chore: checkpoint pending {branch or 'dev'} changes"],
         cwd=repo_root,
     )
🧰 Tools
🪛 Ruff (0.15.1)

[error] 100-100: subprocess call: check for execution of untrusted input

(S603)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/github/pr_workspace.py` around lines 88 - 109, The commit message in
_checkpoint currently hardcodes "chore: checkpoint pending 0.11.0-dev changes";
change it to derive the version from the branch parameter instead so it stays
correct across release cycles—build the commit message string using branch
(e.g., f"chore: checkpoint pending {branch} changes" or extract a version token
from branch if needed) and replace the hardcoded message in the
run_checked(["git", "commit", ...]) call; keep all other git logic (staging,
pushing, pull/rebase) unchanged.

Comment on lines 240 to 243
if args.check:
print(f"✗ Some projects missing Python 3.{REQUIRED_MINOR} enforcement")
print(f" Run: python scripts/maintenance/enforce_python_version.py")
return 1

⚠️ Potential issue | 🟡 Minor

Remove extraneous f prefix on line 242.

This f-string contains no placeholders, as flagged by Ruff (F541).

Proposed fix
     if args.check:
         print(f"✗ Some projects missing Python 3.{REQUIRED_MINOR} enforcement")
-        print(f"  Run: python scripts/maintenance/enforce_python_version.py")
+        print("  Run: python scripts/maintenance/enforce_python_version.py")
         return 1
🧰 Tools
🪛 Ruff (0.15.1)

[error] 242-242: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/maintenance/enforce_python_version.py` around lines 240 - 243, The
second print uses an unnecessary f-string (no placeholders) causing Ruff F541;
in the branch where args.check is true, change print(f"  Run: python
scripts/maintenance/enforce_python_version.py") to a regular string print(" 
Run: python scripts/maintenance/enforce_python_version.py") so only the first
print that interpolates REQUIRED_MINOR remains an f-string; update the line near
the args.check block that references REQUIRED_MINOR to keep behavior unchanged.

@cubic-dev-ai cubic-dev-ai bot left a comment

3 issues found across 41 files (changes from recent commits).

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="libs/versioning.py">

<violation number="1" location="libs/versioning.py:42">
P2: The `release/...` fallback regex uses `\d+` which accepts leading zeros (e.g. `release/01.2.3` → `v01.2.3`), unlike the stricter `SEMVER_RE` used in the first code path. Validate the extracted version with `SEMVER_RE` to enforce consistent semver rules across both paths.</violation>
</file>

<file name="scripts/release/run.py">

<violation number="1" location="scripts/release/run.py:20">
P2: Fragile import: `from libs.versioning import current_workspace_version` relies on the side-effect of importing `release.shared` (which adds REPO_ROOT to `sys.path`). This breaks if imports are reordered. For consistency with other `libs.*` usages in this file, re-export `current_workspace_version` from `release.shared` and import it from there.</violation>
</file>

<file name="scripts/sync.py">

<violation number="1" location="scripts/sync.py:47">
P1: Bug: hidden-directory filter checks absolute path parts instead of relative path parts. If any ancestor directory of `source_dir` starts with `.` (e.g., `~/.flext/…`), **all** files are silently excluded and nothing gets synced.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

for p in source_dir.rglob("*")
if p.is_file()
and "__pycache__" not in p.parts
and not any(part.startswith(".") for part in p.parts)
@cubic-dev-ai cubic-dev-ai bot Feb 20, 2026

P1: Bug: hidden-directory filter checks absolute path parts instead of relative path parts. If any ancestor directory of source_dir starts with . (e.g., ~/.flext/…), all files are silently excluded and nothing gets synced.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At scripts/sync.py, line 47:

<comment>Bug: hidden-directory filter checks absolute path parts instead of relative path parts. If any ancestor directory of `source_dir` starts with `.` (e.g., `~/.flext/…`), **all** files are silently excluded and nothing gets synced.</comment>

<file context>
@@ -40,7 +40,11 @@ def _copy_if_changed(source: Path, target: Path) -> bool:
+        for p in source_dir.rglob("*")
+        if p.is_file()
+        and "__pycache__" not in p.parts
+        and not any(part.startswith(".") for part in p.parts)
     }
     for rel in sorted(source_files):
</file context>

version = branch.removesuffix("-dev")
if SEMVER_RE.fullmatch(version):
return f"v{version}"
match = re.fullmatch(r"release/(?P<version>\d+\.\d+\.\d+)", branch)
@cubic-dev-ai cubic-dev-ai bot Feb 20, 2026

P2: The release/... fallback regex uses \d+ which accepts leading zeros (e.g. release/01.2.3v01.2.3), unlike the stricter SEMVER_RE used in the first code path. Validate the extracted version with SEMVER_RE to enforce consistent semver rules across both paths.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At libs/versioning.py, line 42:

<comment>The `release/...` fallback regex uses `\d+` which accepts leading zeros (e.g. `release/01.2.3` → `v01.2.3`), unlike the stricter `SEMVER_RE` used in the first code path. Validate the extracted version with `SEMVER_RE` to enforce consistent semver rules across both paths.</comment>

<file context>
@@ -0,0 +1,73 @@
+    version = branch.removesuffix("-dev")
+    if SEMVER_RE.fullmatch(version):
+        return f"v{version}"
+    match = re.fullmatch(r"release/(?P<version>\d+\.\d+\.\d+)", branch)
+    if not match:
+        return None
</file context>

run_checked,
workspace_root,
)
from libs.versioning import current_workspace_version
@cubic-dev-ai cubic-dev-ai bot Feb 20, 2026

P2: Fragile import: from libs.versioning import current_workspace_version relies on the side-effect of importing release.shared (which adds REPO_ROOT to sys.path). This breaks if imports are reordered. For consistency with other libs.* usages in this file, re-export current_workspace_version from release.shared and import it from there.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At scripts/release/run.py, line 20:

<comment>Fragile import: `from libs.versioning import current_workspace_version` relies on the side-effect of importing `release.shared` (which adds REPO_ROOT to `sys.path`). This breaks if imports are reordered. For consistency with other `libs.*` usages in this file, re-export `current_workspace_version` from `release.shared` and import it from there.</comment>

<file context>
@@ -18,6 +17,7 @@
     run_checked,
     workspace_root,
 )
+from libs.versioning import current_workspace_version
 
 
</file context>

@marlon-costa-dc marlon-costa-dc merged commit 6d90dfb into main Feb 20, 2026
1 check passed
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

🧹 Nitpick comments (5)
tests/unit/scripts/sync_tests.py (2)

14-16: Loaded modules accumulate in sys.modules across tests

sys.modules[spec.name] = module is never cleaned up, leaving stale entries for the remainder of the test session. While the distinct names ("scripts_sync_ignore", "scripts_sync_main") prevent cross-test conflicts, the leakage can confuse coverage collectors or other tools that inspect sys.modules.

♻️ Suggested cleanup pattern
 def _load_module(module_name: str, relative_path: str) -> Any:
     module_path = Path(__file__).resolve().parents[3] / relative_path
     spec = importlib.util.spec_from_file_location(module_name, module_path)
     assert spec is not None
     assert spec.loader is not None
     module = importlib.util.module_from_spec(spec)
     sys.modules[spec.name] = module
     spec.loader.exec_module(module)
     return module

Alternatively, expose cleanup via a pytest fixture:

import pytest

@pytest.fixture(autouse=True)
def _clean_dynamic_modules():
    before = set(sys.modules)
    yield
    for key in list(sys.modules):
        if key not in before:
            del sys.modules[key]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/scripts/sync_tests.py` around lines 14 - 16, The test is leaving
dynamically imported modules in sys.modules (sys.modules[spec.name] = module)
causing leakage; fix by removing that entry after use or by adding a pytest
autouse fixture to clean any new modules. Option A: wrap the import block that
uses module/spec in try/finally and in finally do del sys.modules[spec.name] if
present. Option B: add an autouse pytest fixture named _clean_dynamic_modules
that captures before = set(sys.modules), yields, then deletes any keys added
(for key in list(sys.modules): if key not in before: del sys.modules[key]). Use
the symbols spec, module, and sys.modules to locate the import code or add the
fixture.

9-10: parents[3] depth is fragile if the test file moves

The index is only correct for the current 4-level path tests/unit/scripts/sync_tests.py. Moving the file breaks the loader silently (wrong repo root) or raises IndexError.

♻️ Proposed fix – walk up to the repo root marker
 def _load_module(module_name: str, relative_path: str) -> Any:
-    module_path = Path(__file__).resolve().parents[3] / relative_path
+    here = Path(__file__).resolve()
+    repo_root = next(p for p in here.parents if (p / "pyproject.toml").exists())
+    module_path = repo_root / relative_path
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/scripts/sync_tests.py` around lines 9 - 10, The helper
_load_module currently uses Path(__file__).resolve().parents[3] which is
brittle; change _load_module to walk up parent directories from
Path(__file__).resolve() until it finds a repository root marker (e.g., a file
like "pyproject.toml", "setup.cfg", or ".git") and then use that parent as the
repo root to build module_path instead of parents[3]; update the function name
references (_load_module and module_path) accordingly and raise a clear error if
no repo root marker is found.
libs/versioning.py (1)

16-17: Use fullmatch in parse_semver for consistency with release_tag_from_branch.

SEMVER_RE already has ^ and $ anchors so both calls are functionally identical, but release_tag_from_branch (line 40) uses fullmatch while parse_semver uses match. Aligning them signals explicit intent.

Proposed fix
-    match = SEMVER_RE.match(version)
+    match = SEMVER_RE.fullmatch(version)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/versioning.py` around lines 16 - 17, The parse_semver function uses
SEMVER_RE.match; change it to use SEMVER_RE.fullmatch for consistency with
release_tag_from_branch and to make intent explicit: update parse_semver to call
SEMVER_RE.fullmatch(version) (leaving SEMVER_RE unchanged) so both functions use
fullmatch semantics when validating the version string.
scripts/release/version.py (1)

12-13: libs.versioning is importable only because release.shared adds the repo root to sys.path as a side effect — fragile import ordering.

SCRIPTS_ROOT (line 8) adds scripts/ to sys.path, which does not contain libs/. Importing libs.versioning on line 13 only works because importing release.shared on line 12 first executes shared.py's module-init block, which inserts the repo root. If these two imports are ever reordered, line 13 will raise ModuleNotFoundError.

Proposed fix — make the dependency explicit
 SCRIPTS_ROOT = Path(__file__).resolve().parents[1]
 if str(SCRIPTS_ROOT) not in sys.path:
     sys.path.insert(0, str(SCRIPTS_ROOT))
+
+REPO_ROOT = Path(__file__).resolve().parents[2]
+if str(REPO_ROOT) not in sys.path:
+    sys.path.insert(0, str(REPO_ROOT))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/release/version.py` around lines 12 - 13, The import of
libs.versioning depends on release.shared's side-effect of mutating sys.path
(SCRIPTS_ROOT logic), which is fragile; make the dependency explicit by ensuring
the repo root is added to sys.path before importing libs.versioning (or
otherwise resolving imports using workspace_root), e.g., move or replicate the
sys.path insertion that uses workspace_root/SCRIPTS_ROOT to the top of
version.py before the replace_project_version import or import
replace_project_version after explicitly fixing sys.path; reference the symbols
SCRIPTS_ROOT, workspace_root, release.shared, parse_semver, and
replace_project_version to locate where to apply the change.
scripts/release/run.py (1)

35-35: Consider nargs="+" instead of nargs="*" for --projects.

With nargs="*", a bare --projects (no names following) silently returns [], which resolves to all projects — the same as omitting the flag entirely. nargs="+" would make the mistake explicit with an argparse error.

♻️ Suggested change
-    _ = parser.add_argument("--projects", nargs="*", default=[])
+    _ = parser.add_argument("--projects", nargs="+", default=[])
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/release/run.py` at line 35, Change the argparse definition for the
"--projects" flag so that it requires at least one value instead of allowing
none: update the parser.add_argument call for "--projects" to use nargs="+" (and
keep default behavior adjusted or remove default if appropriate) so passing a
bare --projects triggers an argparse error rather than silently returning an
empty list; modify the parser.add_argument("--projects", ...) occurrence
accordingly and verify any downstream code that treats the resulting variable
handles the new required-values behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@flext-tap-oracle-oic`:
- Line 1: The repository's submodule flext-tap-oracle-oic is pinned to a
non-existent commit (a1e6d20ae6e9d9f0e4330b3090b385ac0a798877); verify the
correct target commit or branch exists in the submodule remote, then update the
submodule pointer to a valid ref: fetch the submodule remote, identify the
intended commit/branch, update the submodule reference in the superproject
(replace the bad commit hash with the valid commit for flext-tap-oracle-oic) and
commit that change so git submodule init/update works for others. Ensure the
final commit points to a ref present in the remote.

In `@flext-target-oracle`:
- Line 1: The submodule pointer for flext-target-oracle references a nonexistent
commit (6e07187b55e24c13dc792851724594d4db219bfa); update the submodule
reference to a valid commit or branch by checking the flext-target-oracle remote
for the correct commit/hash or desired branch, update the submodule pointer in
the repo (either by `git submodule set-branch`/checkout in the submodule or by
updating the entry in .gitmodules and the gitlink to the correct commit), run
`git add .gitmodules` and `git add` the submodule, commit the change, and push
so the PR points at an existing commit (also run `git submodule sync` and `git
submodule update --init` locally to verify).

In `@scripts/sync.py`:
- Around line 43-47: The hidden-component and __pycache__ checks are using
p.parts (absolute path) so files under a hidden ancestor are incorrectly
filtered out; change the predicate to compute rel = p.relative_to(source_dir)
and then use rel.parts (e.g., check "__pycache__" not in rel.parts and not
any(part.startswith(".") for part in rel.parts)) when iterating the generator
from source_dir.rglob("*"), ensuring you still call p.is_file() on the original
Path object but apply the hidden/__pycache__ filters to the relative path.
- Around line 86-88: The call to _sync_tree(canonical_root / "libs",
project_root / "libs", args.prune) can raise FileNotFoundError because
_sync_tree iterates source_dir.rglob("*") when the source (canonical_root /
"libs") doesn't exist; update the code to guard this case by checking
source_dir.is_dir() before iterating (either in the caller before invoking
_sync_tree or at the start of the _sync_tree function) and return a no-op (e.g.,
0 changes) when the source directory is missing so the sync won't crash in
older/submodule/mid-migration environments.

In `@tests/unit/scripts/sync_tests.py`:
- Around line 39-70: The test test_main_syncs_scripts_and_libs is missing an
assertion that main() copies canonical/base.mk to project/base.mk; update the
test (test_main_syncs_scripts_and_libs) to assert that (project /
"base.mk").exists() after calling mod.main() so the file-copy behavior in main()
is verified alongside the script and lib syncs.

---

Duplicate comments:
In `@Makefile`:
- Around line 468-490: The pr target must ensure the workspace virtualenv is
active and avoid using the bare python; add a dependency invocation of
$(ENFORCE_WORKSPACE_VENV) into the recipe just like other targets, and replace
the bare "python scripts/github/pr_workspace.py" call with the managed
interpreter variable (e.g. "$(VENV_PYTHON) scripts/github/pr_workspace.py") so
the workspace-local packages (libs.selection, libs.subprocess) are importable.
- Line 40: Replace the hardcoded PR_BRANCH assignment in the Makefile with a
runtime-derived value: instead of PR_BRANCH ?= 0.11.0-dev, invoke git to read
the current branch (e.g., using git rev-parse --abbrev-ref HEAD or a
git-describe variant) to populate PR_BRANCH, and add a sensible fallback/default
when git fails (to preserve offline builds); update the Makefile’s PR_BRANCH
variable assignment accordingly so it always reflects the current git branch at
runtime.
- Line 140: The .PHONY declaration still contains the disallowed target "pr";
remove "pr" from the .PHONY list (ensure the .PHONY symbol only includes the
authorised workspace verbs: setup, check, security, format, docs, test,
validate, typings, clean) or, if "pr" is intentionally required, first reconcile
it with the root Makefile and add it to the authorised verbs set before
reintroducing; update the .PHONY line (referencing the .PHONY symbol and the
"pr" token) accordingly.

In `@scripts/github/pr_manager.py`:
- Around line 136-169: The _create_pr function currently calls _run_capture
which raises on non-zero exit, causing an unhandled RuntimeError; wrap the
_run_capture call in a try/except that catches the exception raised by
_run_capture, print a failure status (e.g., "status=error") and any error
message, and return a non-zero exit code (int) instead of allowing the traceback
to bubble up; update references inside _create_pr (around the created =
_run_capture(...) call) to handle the exception and mirror the exit behavior of
the other handlers (view, checks, close, merge).
- Line 146: The print call uses an unnecessary f-string prefix; remove the dead
`f` and change the statement `print(f"status=already-open")` to a plain string
`print("status=already-open")` (locate the print statement in
scripts/github/pr_manager.py and update it accordingly).

---

Nitpick comments:
In `@libs/versioning.py`:
- Around line 16-17: The parse_semver function uses SEMVER_RE.match; change it
to use SEMVER_RE.fullmatch for consistency with release_tag_from_branch and to
make intent explicit: update parse_semver to call SEMVER_RE.fullmatch(version)
(leaving SEMVER_RE unchanged) so both functions use fullmatch semantics when
validating the version string.

In `@scripts/release/run.py`:
- Line 35: Change the argparse definition for the "--projects" flag so that it
requires at least one value instead of allowing none: update the
parser.add_argument call for "--projects" to use nargs="+" (and keep default
behavior adjusted or remove default if appropriate) so passing a bare --projects
triggers an argparse error rather than silently returning an empty list; modify
the parser.add_argument("--projects", ...) occurrence accordingly and verify any
downstream code that treats the resulting variable handles the new
required-values behavior.

In `@scripts/release/version.py`:
- Around line 12-13: The import of libs.versioning depends on release.shared's
side-effect of mutating sys.path (SCRIPTS_ROOT logic), which is fragile; make
the dependency explicit by ensuring the repo root is added to sys.path before
importing libs.versioning (or otherwise resolving imports using workspace_root),
e.g., move or replicate the sys.path insertion that uses
workspace_root/SCRIPTS_ROOT to the top of version.py before the
replace_project_version import or import replace_project_version after
explicitly fixing sys.path; reference the symbols SCRIPTS_ROOT, workspace_root,
release.shared, parse_semver, and replace_project_version to locate where to
apply the change.

In `@tests/unit/scripts/sync_tests.py`:
- Around line 14-16: The test is leaving dynamically imported modules in
sys.modules (sys.modules[spec.name] = module) causing leakage; fix by removing
that entry after use or by adding a pytest autouse fixture to clean any new
modules. Option A: wrap the import block that uses module/spec in try/finally
and in finally do del sys.modules[spec.name] if present. Option B: add an
autouse pytest fixture named _clean_dynamic_modules that captures before =
set(sys.modules), yields, then deletes any keys added (for key in
list(sys.modules): if key not in before: del sys.modules[key]). Use the symbols
spec, module, and sys.modules to locate the import code or add the fixture.
- Around line 9-10: The helper _load_module currently uses
Path(__file__).resolve().parents[3] which is brittle; change _load_module to
walk up parent directories from Path(__file__).resolve() until it finds a
repository root marker (e.g., a file like "pyproject.toml", "setup.cfg", or
".git") and then use that parent as the repo root to build module_path instead
of parents[3]; update the function name references (_load_module and
module_path) accordingly and raise a clear error if no repo root marker is
found.

Comment on lines +43 to +47
        p.relative_to(source_dir)
        for p in source_dir.rglob("*")
        if p.is_file()
        and "__pycache__" not in p.parts
        and not any(part.startswith(".") for part in p.parts)

⚠️ Potential issue | 🟠 Major

Hidden-component filter operates on the full absolute path, not the relative path

p.parts includes every ancestor component of the resolved absolute path, not just the parts below source_dir. If source_dir lives anywhere under a hidden directory (e.g. /home/user/.local/flext/… or a CI workspace path like /runner/.work/…), every file will silently fail the not any(part.startswith(".") …) guard, making the sync a no-op with changed = 0 and no error. The same applies to the "__pycache__" not in p.parts check.

🐛 Proposed fix – filter on relative parts only
-    source_files = {
-        p.relative_to(source_dir)
-        for p in source_dir.rglob("*")
-        if p.is_file()
-        and "__pycache__" not in p.parts
-        and not any(part.startswith(".") for part in p.parts)
-    }
+    source_files = {
+        p.relative_to(source_dir)
+        for p in source_dir.rglob("*")
+        if p.is_file()
+        and "__pycache__" not in (rel := p.relative_to(source_dir)).parts
+        and not any(part.startswith(".") for part in rel.parts)
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/sync.py` around lines 43 - 47, The hidden-component and __pycache__
checks are using p.parts (absolute path) so files under a hidden ancestor are
incorrectly filtered out; change the predicate to compute rel =
p.relative_to(source_dir) and then use rel.parts (e.g., check "__pycache__" not
in rel.parts and not any(part.startswith(".") for part in rel.parts)) when
iterating the generator from source_dir.rglob("*"), ensuring you still call
p.is_file() on the original Path object but apply the hidden/__pycache__ filters
to the relative path.

Comment on lines +86 to +88
        changed += _sync_tree(
            canonical_root / "libs", project_root / "libs", args.prune
        )

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Python 3.13 Path.rglob non-existent directory FileNotFoundError

💡 Result:

Path.rglob() ultimately walks the filesystem using os.scandir(). If the starting directory doesn’t exist (or disappears between the time you create the Path and the time iteration happens), scandir() can raise FileNotFoundError, and you’ll see it when you iterate the rglob() iterator (e.g., list(p.rglob("*"))). This is the same underlying cause as rglob() failures on “fast-changing” trees (/proc, temp dirs, network shares, etc.). [1]

Although Python 3.13 tightened behavior so that OSError raised while scanning is generally suppressed for glob()/rglob(), you still shouldn’t rely on rglob() as an “exists check” for the root directory—handle “missing root” explicitly. [2]

Reliable patterns

1) If you want “missing dir ⇒ no matches”:

from pathlib import Path

def safe_rglob(root: Path, pattern: str):
    if not root.is_dir():
        return ()
    try:
        return root.rglob(pattern)
    except FileNotFoundError:
        return ()

2) If the directory may disappear mid-walk (race):

paths = []
try:
    for p in root.rglob("*"):
        paths.append(p)
except FileNotFoundError:
    pass  # root or a subtree vanished during iteration

If you share your exact snippet + traceback (and whether this is Windows/UNC, a symlink, deleted CWD, etc.), I can point to the precise trigger in your case.

Sources:
[1] Stack Overflow: “Recursive globbing of fast-changing directories” (shows rglob raising FileNotFoundError during scandir) https://stackoverflow.com/questions/69825966/recursive-globbing-of-fast-changing-directories
[2] Stack Overflow: “Why does pathlib.Path.glob function in Python3.13…” (quotes 3.13 change: scanning OSError suppressed) https://stackoverflow.com/questions/79540627/why-does-pathlib-path-glob-function-in-python3-13-return-map-object-instead-of-a


🏁 Script executed:

# Examine the sync.py file to understand context around lines 86-88
cat -n scripts/sync.py | head -100

Repository: flext-sh/flext

Length of output: 3833


🏁 Script executed:

# Look for the _sync_tree function definition
grep -n "def _sync_tree" scripts/sync.py -A 20

Repository: flext-sh/flext

Length of output: 977


_sync_tree will raise FileNotFoundError when canonical_root/libs is absent

When _sync_tree() is called with a non-existent source directory, iterating source_dir.rglob("*") at line 44 raises FileNotFoundError. Since libs/ is new to this PR, older submodule checkouts or mid-migration environments won't have it, causing the sync to crash. The scripts/ call is safe because that directory has always existed.

Fix – guard with `is_dir()`
-        changed += _sync_tree(
-            canonical_root / "libs", project_root / "libs", args.prune
-        )
+        if (canonical_root / "libs").is_dir():
+            changed += _sync_tree(
+                canonical_root / "libs", project_root / "libs", args.prune
+            )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-        changed += _sync_tree(
-            canonical_root / "libs", project_root / "libs", args.prune
-        )
+        if (canonical_root / "libs").is_dir():
+            changed += _sync_tree(
+                canonical_root / "libs", project_root / "libs", args.prune
+            )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/sync.py` around lines 86 - 88, The call to _sync_tree(canonical_root
/ "libs", project_root / "libs", args.prune) can raise FileNotFoundError because
_sync_tree iterates source_dir.rglob("*") when the source (canonical_root /
"libs") doesn't exist; update the code to guard this case by checking
source_dir.is_dir() before iterating (either in the caller before invoking
_sync_tree or at the start of the _sync_tree function) and return a no-op (e.g.,
0 changes) when the source directory is missing so the sync won't crash in
older/submodule/mid-migration environments.

Comment on lines +39 to +70
def test_main_syncs_scripts_and_libs(tmp_path: Path, monkeypatch: Any) -> None:
    mod = _load_module("scripts_sync_main", "scripts/sync.py")

    canonical = tmp_path / "canonical"
    project = tmp_path / "project"
    _ = (canonical / "scripts").mkdir(parents=True)
    _ = (canonical / "libs").mkdir(parents=True)
    _ = (project / "scripts").mkdir(parents=True)
    _ = (canonical / "base.mk").write_text("BASE\n", encoding="utf-8")
    _ = (canonical / "scripts" / "tool.py").write_text(
        "print('sync')\n", encoding="utf-8"
    )
    _ = (canonical / "libs" / "versioning.py").write_text(
        "VALUE = 'v'\n", encoding="utf-8"
    )

    monkeypatch.setattr(
        sys,
        "argv",
        [
            "sync.py",
            "--project-root",
            str(project),
            "--canonical-root",
            str(canonical),
        ],
    )

    exit_code = mod.main()
    assert exit_code == 0
    assert (project / "scripts" / "tool.py").exists()
    assert (project / "libs" / "versioning.py").exists()

⚠️ Potential issue | 🟡 Minor

Missing assertion for base.mk copy in test_main_syncs_scripts_and_libs

main() copies canonical/base.mk → project/base.mk before the tree syncs, but the test never asserts (project / "base.mk").exists(). A regression in that code path would go undetected.

🧪 Proposed addition
     assert (project / "scripts" / "tool.py").exists()
     assert (project / "libs" / "versioning.py").exists()
+    assert (project / "base.mk").exists()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/scripts/sync_tests.py` around lines 39 - 70, The test
test_main_syncs_scripts_and_libs is missing an assertion that main() copies
canonical/base.mk to project/base.mk; update the test
(test_main_syncs_scripts_and_libs) to assert that (project / "base.mk").exists()
after calling mod.main() so the file-copy behavior in main() is verified
alongside the script and lib syncs.
