Conversation
* feat: add interactive backlog issue creation flow
* feat(backlog): add interactive issue creation and mapping setup
* fix: align backlog protocol test fakes and module manifest versions
* Fix type error
* fix(backlog): persist ado sprint and normalize github create id
* fix(backlog-core): address review findings for add/config/github

---------

Signed-off-by: Dom <39115308+djm81@users.noreply.github.com>
Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
(#292)
* chore(openspec): archive completed changes and align architecture docs
* docs(architecture): refresh discrepancies report after arch-08 remediation

---------

Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
…log prompt (#294)
* fix(backlog): restore installed runtime discovery and add backlog prompt
* Archive bugfix change

---------

Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
* fix: harden module lifecycle bootstrap and signing workflows
* fix: stabilize module signature hashing across environments
* fix: stabilize bundle module signature verification in CI

---------

Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 448328b165
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 40
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/specfact_cli/analyzers/code_analyzer.py (1)
1736-1762: ⚠️ Potential issue | 🟠 Major
Major performance regression risk in await host detection.
At line 1760, each `ast.Await` triggers `_function_name_holding_ast_subtree(tree, node)`, which walks the whole tree and nested function subtrees again. This is quadratic-ish on larger files and will slow Phase 3 noticeably.
⚡ Proposed fix: build parent index once, then walk upward per await
```diff
+    @staticmethod
+    def _build_parent_index(tree: ast.AST) -> dict[int, ast.AST]:
+        parent_index: dict[int, ast.AST] = {}
+        for parent in ast.walk(tree):
+            for child in ast.iter_child_nodes(parent):
+                parent_index[id(child)] = parent
+        return parent_index
+
+    @staticmethod
-    def _function_name_holding_ast_subtree(tree: ast.AST, target: ast.AST) -> str | None:
-        for parent in ast.walk(tree):
-            if not isinstance(parent, (ast.FunctionDef, ast.AsyncFunctionDef)):
-                continue
-            for child in ast.walk(parent):
-                if child is target:
-                    return parent.name
+    def _function_name_holding_ast_subtree(parent_index: dict[int, ast.AST], target: ast.AST) -> str | None:
+        current: ast.AST | None = target
+        while current is not None:
+            current = parent_index.get(id(current))
+            if isinstance(current, (ast.FunctionDef, ast.AsyncFunctionDef)):
+                return current.name
         return None

     def _detect_async_patterns_parallel(self, tree: ast.AST, file_path: Path) -> list[str]:
         """
         Detect async/await patterns in code (thread-safe version).
         @@
         """
         async_methods: list[str] = []
+        parent_index = self._build_parent_index(tree)
         for node in ast.walk(tree):
             if isinstance(node, ast.AsyncFunctionDef):
                 async_methods.append(node.name)
             if not isinstance(node, ast.Await):
                 continue
-            host = self._function_name_holding_ast_subtree(tree, node)
+            host = self._function_name_holding_ast_subtree(parent_index, node)
             if host and host not in async_methods:
                 async_methods.append(host)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/specfact_cli/analyzers/code_analyzer.py` around lines 1736 - 1762, The current await-host detection is quadratic because _detect_async_patterns_parallel calls _function_name_holding_ast_subtree for each ast.Await which walks the whole tree; instead, build a single parent index once (e.g., walk the tree with ast.iter_child_nodes and fill a dict child->parent) inside _detect_async_patterns_parallel, then for each Await walk upward via the parent map until you hit an ast.FunctionDef or ast.AsyncFunctionDef and collect its .name; update or remove _function_name_holding_ast_subtree to use the parent map (or inline the upward-walk logic) and ensure async_methods still collects AsyncFunctionDef names plus hosts found via the parent map.

docs/_support/readme-first-contact/sample-output/capture-metadata.txt (1)
3-13: ⚠️ Potential issue | 🟡 Minor
Align captured CLI/versioned commands to 0.46.1.
Line 3 and lines 12-13 still pin to 0.46.0, which is stale for this release PR.
Proposed fix

```diff
-- CLI version: `0.46.0`
+- CLI version: `0.46.1`
@@
-uvx --from "specfact-cli==0.46.0" specfact init --profile solo-developer
-uvx --from "specfact-cli==0.46.0" --with ruff --with radon --with semgrep --with basedpyright --with pylint --with crosshair-tool specfact code review run --path . --scope full
+uvx --from "specfact-cli==0.46.1" specfact init --profile solo-developer
+uvx --from "specfact-cli==0.46.1" --with ruff --with radon --with semgrep --with basedpyright --with pylint --with crosshair-tool specfact code review run --path . --scope full
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/_support/readme-first-contact/sample-output/capture-metadata.txt` around lines 3 - 13, Update the pinned CLI version from 0.46.0 to 0.46.1: replace the "CLI version: `0.46.0`" line and the two occurrences of `uvx --from "specfact-cli==0.46.0"` in the captured command block so they read `0.46.1` instead, ensuring all version strings in the file are consistent.

src/specfact_cli/adapters/ado.py (1)
2290-2312: ⚠️ Potential issue | 🟠 Major
Dead code detected: duplicate work item ID resolution.
Lines 2290-2309 perform work item ID lookup, validation, and int coercion, but are immediately overwritten by lines 2311-2312, which call `_get_source_tracking_work_item_id`, a method that already handles the same validation and coercion internally.
This appears to be a refactoring artifact where the new static method `_work_item_id_from_source_tracking` was added but the old inline code wasn't removed.
🐛 Proposed fix: remove dead code

```diff
 def _update_work_item_status(
     self,
     proposal_data: dict[str, Any],  # ChangeProposal with source_tracking
     org: str,
     project: str,
 ) -> dict[str, Any]:
     """
     Update ADO work item status based on change proposal status.
     ...
     """
     target_repo = f"{org}/{project}"
-    work_item_id = self._work_item_id_from_source_tracking(
-        proposal_data.get("source_tracking", {}),
-        target_repo,
-    )
-
-    if not work_item_id:
-        msg = (
-            f"Work item ID not found in source_tracking for repository {target_repo}. "
-            "Work item must be created first."
-        )
-        raise ValueError(msg)
-
-    # Ensure work_item_id is an integer for API call
-    if isinstance(work_item_id, str):
-        try:
-            work_item_id = int(work_item_id)
-        except ValueError:
-            msg = f"Invalid work item ID format: {work_item_id}"
-            raise ValueError(msg) from None
-
-    target_repo = f"{org}/{project}"
     work_item_id = self._get_source_tracking_work_item_id(proposal_data.get("source_tracking", {}), target_repo)
     ado_state = self._resolve_proposal_ado_state(proposal_data)
```

The `_get_source_tracking_work_item_id` method at lines 1623-1632 already calls `_find_work_item_id_in_source_tracking` and `_coerce_work_item_id`, providing equivalent validation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/specfact_cli/adapters/ado.py` around lines 2290 - 2312, Remove the duplicated inline work-item resolution block that sets target_repo, calls _work_item_id_from_source_tracking, validates/coerces work_item_id and raises ValueError, because it is redundant with the subsequent call to _get_source_tracking_work_item_id; keep the single canonical call to _get_source_tracking_work_item_id(proposal_data.get("source_tracking", {}), target_repo) and ensure references to target_repo, work_item_id and proposal_data remain intact so that _get_source_tracking_work_item_id (which delegates to _find_work_item_id_in_source_tracking and _coerce_work_item_id) is the sole source of truth.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.pre-commit-config.yaml:
- Around line 22-28: The hook "cli-block1-format" currently uses always_run:
true without a files filter so it executes on every commit; update the hook
configuration for cli-block1-format to either add an appropriate files regex
(for example restrict to \.(py|pyi)$ or whatever languages/formats this script
targets) to scope runs to changed files, or remove always_run: true if the
intent is to run only when matching files are staged; update the entry for
cli-block1-format accordingly so the hook only runs under the desired
conditions.
In `@docs/_support/readme-first-contact/capture-readme-output.sh`:
- Line 8: Update the default CLI_VERSION to match the release target by changing
the CLI_VERSION default assignment (the CLI_VERSION="${CLI_VERSION:-0.46.0}"
line) to use 0.46.1 so generated README sample outputs align with this merge's
v0.46.1 release.
In `@docs/_support/readme-first-contact/sample-output/review-output.txt`:
- Line 12: Update the sample output header string "SpecFact CLI - v0.46.0" to
match the release version bump by changing it to "SpecFact CLI - v0.46.1";
locate the header line in the sample file and replace the old version token so
the example output aligns with the package/version change in this PR.
In `@openspec/CHANGE_ORDER.md`:
- Line 155: The table row for "marketplace | 06 |
marketplace-06-ci-module-signing |
[`#500`](https://github.com/nold-ai/specfact-cli/issues/500) | independent
(CI/trust-chain hardening); paired:
specfact-cli-modules/marketplace-06-ci-module-signing
[`#185`](https://github.com/nold-ai/specfact-cli-modules/issues/185)" is too long;
split that single markdown table row into multiple shorter lines to keep line
length under 120 chars (for example, break the last cell into two lines or move
the "paired: specfact-cli-modules/..." portion to a following wrapped line while
preserving the table pipes and cell content), updating the row that references
marketplace-06-ci-module-signing, issue `#500` and paired issue `#185` so the
markdown linter no longer reports a line-length violation.
In
`@openspec/changes/marketplace-06-ci-module-signing/specs/ci-integration/spec.md`:
- Around line 74-76: Update the ambiguous wording in the spec for the `verify`
job: replace "SHALL pass for checksum-only signed manifests" with a clear phrase
like "SHALL pass for checksum-valid manifests without signatures" to distinguish
signature-absent-but-checksum-valid from signature-present-but-invalid; ensure
the `verify` job statement that mentions `--require-signature` now reads "the
`verify` job SHALL run without `--require-signature` AND SHALL pass for
checksum-valid manifests without signatures" so the terminology matches other
"checksum-only verification" usages.
In `@openspec/changes/profile-04-safe-project-artifact-writes/TDD_EVIDENCE.md`:
- Around line 5-20: Update the TDD_EVIDENCE.md entry to capture the failing test
evidence, not just the pytest command: run the same pytest invocation for
tests/unit/utils/test_project_artifact_write.py (and the other listed tests) and
paste the failing exit status and the pytest failure excerpt (test name(s) like
malformed_json_raises, preserves_unrelated_keys, stack trace, and the failed
assertion lines) into the document so the "failing-first" proof is preserved
before implementing safe-merge; ensure the excerpt includes the failing test
identifiers and the assertion message/output.
- Around line 37-39: The TDD_EVIDENCE.md entry currently reports a full-suite
coverage "TOTAL ... 62%" (from the `hatch test --cover -v` run) but this is
below the repo requirement; do not sign it off as passing. Update the document
to either (A) rerun tests and add evidence showing full-suite coverage ≥80% (and
100% for critical paths) with the pytest-cov footer replacing the "TOTAL ...
62%" line, or (B) explicitly state that the enforced gate is changed-scope
coverage (describe the changed-scope target and show the corresponding
per-file/changed-files coverage numbers) and remove any implication that the 62%
full-suite result is a passing signal. Ensure the updated text references the
original test command (`hatch test --cover -v`) and the coverage footer string
so reviewers can verify the new evidence.
In `@README.md`:
- Around line 83-98: The README currently advises copying only
`.pre-commit-config.yaml` but the modular hooks depend on repo-local entrypoints
(`./scripts/...`) so downstream repos will get unresolved hooks; update the
Pre-commit section to either (a) instruct users to also vendor or copy the
referenced `scripts/*` entrypoints (e.g.,
`scripts/pre-commit-quality-checks.sh`) alongside `.pre-commit-config.yaml`, or
(b) recommend using the remote stable hook `specfact-smart-checks` as the
primary downstream integration path and show the example YAML snippet that
points to `https://github.com/nold-ai/specfact-cli` with `id:
specfact-smart-checks`; mention both options briefly so readers can choose
vendoring vs remote hook.
In `@scripts/pre-commit-smart-checks.sh`:
- Around line 13-17: The current assignment of _repo_root uses git -C
"${_hook_dir}" rev-parse --show-toplevel which fails when _hook_dir is inside
.git/hooks; change the resolution to prefer the current working tree first and
only fall back to the script directory: try running git rev-parse
--show-toplevel in the current working directory (or git -C . rev-parse
--show-toplevel) and if that fails, then run git -C "${_hook_dir}" rev-parse
--show-toplevel; update the _repo_root logic accordingly so both direct worktree
invocation and copied-hook deployment under .git/hooks resolve correctly before
exec-ing scripts/pre-commit-quality-checks.sh.
In `@scripts/verify_safe_project_writes.py`:
- Around line 87-88: The ast.parse call on IDE_SETUP.read_text can raise OSError
or SyntaxError and currently will escape as a traceback; wrap the read+parse
sequence (the call to ast.parse and IDE_SETUP.read_text) in a try/except that
catches OSError and SyntaxError, map those exceptions to the deterministic gate
failure path (e.g., call sys.exit with a non-zero gate code or return the same
controlled error result used by the script) and log a clear message; update any
callers of _collect_json_io_offenders or the surrounding function to use this
deterministic exit/return so the toolchain always produces a consistent gate
code when parse/read fails.
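As a standalone sketch of this suggestion (the gate code and the function name below are hypothetical, not the script's actual API):

```python
import ast
import sys
from pathlib import Path

GATE_PARSE_FAILURE = 2  # hypothetical deterministic gate code for this sketch

def parse_source_or_exit(path: Path) -> ast.AST:
    """Map read/parse failures to one deterministic gate exit code."""
    try:
        return ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
    except (OSError, SyntaxError) as exc:
        # Controlled failure path instead of an escaping traceback.
        print(f"gate failure: cannot analyze {path}: {exc}", file=sys.stderr)
        sys.exit(GATE_PARSE_FAILURE)
```

Callers such as `_collect_json_io_offenders` would then always see either a parsed tree or the same controlled exit.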
In `@src/specfact_cli/adapters/ado.py`:
- Around line 710-728: The static method _work_item_id_from_source_tracking
duplicates traversal logic present in the instance method
_find_work_item_id_in_source_tracking and misses the dev.azure.com URL fallback;
consolidate by extracting the shared dict/list traversal into a single helper
(or have _work_item_id_from_source_tracking call
_find_work_item_id_in_source_tracking) so both code paths use the same
implementation, ensure the merged routine preserves the instance method’s
fallback matching for dev.azure.com URLs, and update callers like
_update_work_item_status to use the unified helper (leave a deprecation note or
remove the redundant variant).
- Around line 55-63: The dataclass _AdoCreatedWorkItemRef currently types
work_item_id as Any; tighten it to Union[int, str] (or the Python 3.10 union
syntax int | str) to reflect ADO IDs being integers but sometimes coming in as
strings, and keep the dataclass frozen behavior; update any callers or add a
small validation/normalization step where instances are created (e.g., convert
numeric strings to int) so downstream code expecting ints can rely on the
type/normalized value.
In `@src/specfact_cli/adapters/github.py`:
- Around line 1473-1475: The call to construct _IssueBodyRenderInput uses
positional args which is brittle; update the call site that invokes
_IssueBodyRenderInput (and the other occurrences noted) to use explicit keyword
arguments like title=, description=, rationale=, impact=, change_id=, raw_body=
so each field is clear and immune to dataclass field-order changes; locate
constructors of _IssueBodyRenderInput (and the similar instantiations at the
other two call sites) and replace positional parameters with the corresponding
named parameters in the same order.
In `@src/specfact_cli/adapters/speckit.py`:
- Around line 548-557: The update branch treats empty lists as “keep old” —
change assignments on existing_feature (outcomes, acceptance, constraints,
stories) to be authoritative: set existing_feature.outcomes =
list(payload.outcomes) if payload.outcomes is not None else
list(existing_feature.outcomes or []) (and similarly for acceptance and
constraints), and set stories to list(payload.stories) or [] only based on
payload being not None so an explicit empty list clears the field; also remove
the early return after these assignments so the subsequent source-tracking
refresh (which updates the Speckit path and traceability for existing_feature)
still runs and overwrites stale plan-derived metadata.
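The intended semantics (None means "not provided", an explicit empty list clears the field) can be sketched in isolation:

```python
def merge_list_field(payload_value, existing_value):
    """None means 'keep the old value'; an explicit [] clears the field."""
    if payload_value is not None:
        return list(payload_value)
    return list(existing_value or [])

print(merge_list_field(None, ["a"]))   # ['a']  keep old value
print(merge_list_field([], ["a"]))     # []     explicit clear
print(merge_list_field(["b"], ["a"]))  # ['b']  replace
```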
In `@src/specfact_cli/analyzers/graph_analyzer.py`:
- Around line 189-203: The code repeatedly calls _resolve_module_from_function
for every callee, causing repeated full-repo scans; to fix, build a single
lookup map (e.g., function_name -> module_path or resolved module) by scanning
python_files once and use that map inside _add_call_graph_edges_for_module
instead of calling _resolve_module_from_function per callee (or modify
_resolve_module_from_function to accept and use a precomputed index); update
both _add_call_graph_edges_for_module and the other occurrence around the later
call (referenced in the comment) to use the cached resolver/index and fall back
to the original resolver only if a name is missing.
In `@src/specfact_cli/common/logger_setup.py`:
- Around line 286-294: The _FileOutputPipelineConfig dataclass should be updated
to use slots for memory efficiency and consistency: change the decorator on
class _FileOutputPipelineConfig to `@dataclass`(slots=True) (matching
InstallModuleOptions and _BundleDepsInstallContext) so instances use __slots__
instead of a per-instance __dict__; no other code changes should be necessary
unless runtime compatibility requires it.
- Around line 297-308: Add runtime type checking to the public dataclass by
decorating LoggerCreateOptions with `@beartype`: import beartype.beartype and
apply `@beartype` above the LoggerCreateOptions declaration so its fields
(log_file, agent_name, log_level, session_id, use_rotating_file, append_mode,
preserve_test_format, emit_to_console) are validated at construction time;
ensure this aligns with callers like setup_logger and logging_utils.py that
instantiate LoggerCreateOptions and keep the dataclass public API contract-first
consistent.
In `@src/specfact_cli/generators/persona_exporter.py`:
- Around line 204-210: The current truthy checks drop valid falsy values (e.g.,
0 or False) when using getattr or a provided spec value; update the condition in
the getattr branch to check for presence using "is not None" (i.e., replace "if
val" with "if val is not None") and likewise replace "if spec.value" with "if
spec.value is not None" so that legitimate falsy values for spec.field_name are
preserved when populating feature_dict (refer to spec.use_getattr, feature,
spec.field_name, and spec.value in persona_exporter.py).
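The presence-check fix can be illustrated standalone:

```python
def set_field(feature_dict: dict, name: str, value) -> None:
    """Presence check: 0 and False are legitimate values and must survive."""
    if value is not None:  # rather than the truthy `if value:`
        feature_dict[name] = value

out: dict = {}
set_field(out, "priority", 0)
set_field(out, "enabled", False)
set_field(out, "owner", None)
print(out)  # {'priority': 0, 'enabled': False}
```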
In `@src/specfact_cli/generators/test_to_openapi.py`:
- Around line 382-397: Add a Google-style docstring to the helper
_examples_from_parsed_test_file describing the purpose, parameters (tree:
ast.AST, func_name: str | None), the return type (dict[str, dict[str, Any]]),
and the merge/aggregation semantics (it walks the AST, filters FunctionDef nodes
named test_*, optionally restricts to func_name, extracts examples via
_extract_examples_from_test_function, uses operation_id with "unknown" as
fallback, and merges example dicts per operation_id with later examples
overriding earlier keys). Keep it succinct, include Args, Returns, and a short
Examples/Notes line about the fallback "unknown" operation_id and
update/override behavior.
- Around line 334-343: The dict-coercion logic is duplicated between
_coerce_ast_dict_to_plain and the ast.Dict handling in the other branch;
consolidate by removing the duplicate ast.Dict logic and calling
_coerce_ast_dict_to_plain(value) from wherever the ast.Dict branch currently
reimplements it (e.g., in the function handling generic AST value extraction),
ensuring the same extraction helpers (_extract_ast_value) are used and the
returned dict[str, Any] is used in place of the inline implementation to keep a
single source of truth.
In `@src/specfact_cli/modules/init/src/first_run_selection.py`:
- Around line 180-191: The call to deps.install_module currently passes
InstallModuleOptions positionally; change it to pass the options as a keyword
(i.e., call deps.install_module(marketplace_id,
options=InstallModuleOptions(...))) so the API contract is explicit and
future-proof; update the invocation that uses marketplace_id and
InstallModuleOptions(install_root=deps.root,
non_interactive=deps.non_interactive,
trust_non_official=deps.trust_non_official) to supply those options via the
options= parameter.
In `@src/specfact_cli/registry/module_installer.py`:
- Around line 43-53: The InstallModuleOptions dataclass is missing runtime type
validation; add the `@beartype` decorator immediately above the `@dataclass` on the
InstallModuleOptions class to enable beartype checks at construction, and add
the import "from beartype import beartype" at the top of the module if not
already present; ensure the decorator is applied to the class name
InstallModuleOptions so external callers (e.g., first_run_selection.py,
commands.py) get runtime type validation when creating instances.
- Around line 56-64: The dataclass _BundleDepsInstallContext currently types the
logger field as Any; change its type to logging.Logger to improve static typing
and IDE support by importing logging (if not already) and updating the field
declaration from logger: Any to logger: logging.Logger; ensure any call sites
constructing _BundleDepsInstallContext pass a logging.Logger instance (or cast)
and run type checks to catch mismatches.
In `@src/specfact_cli/sync/bridge_sync.py`:
- Around line 1414-1421: The helper _export_change_proposals_load_list currently
swallows exceptions from _read_openspec_change_proposals and converts them to
warnings; change it to let the exception propagate (or re-raise) so the sync
fails on real I/O/parser errors instead of returning None and marking success;
update the same pattern in the other helper with identical behavior (the block
around where _read_openspec_change_proposals is called at the other occurrence)
to remove the try/except or re-raise after appending any warning so that calling
code receives the exception and the overall sync fails.
- Around line 1342-1345: The code rebuilds tracking from the pre-clear snapshot
(source_tracking_raw/source_tracking_list) after calling
_bridge_sync_clear_corrupted_tracking_entry, which can re-introduce stale
entries; instead, after calling
_bridge_sync_clear_corrupted_tracking_entry(proposal, source_tracking_raw,
source_tracking_list, target_entry) you must re-read and normalize the updated
proposal["source_tracking"] (or otherwise derive fresh
source_tracking_raw/source_tracking_list from proposal) and then construct
tracking = _ExportIterationTracking(updated_source_tracking_list,
updated_source_tracking_raw) so tracking reflects the cleared state before any
further remote-search or write-back.
- Around line 1518-1524: After deriving derived_target_repo from
target_repo/repo_owner/repo_name (using adapter_type, ado_org, ado_project) add
a fail-fast guard: if derived_target_repo is still falsy, raise a clear
exception (e.g., ValueError) or log an error and abort the export so we never
call _find_source_tracking_entry() with a wildcard; mention derived_target_repo,
target_repo, repo_owner, repo_name, adapter_type, ado_org, ado_project, and
_find_source_tracking_entry() in the error message to aid debugging.
- Around line 1447-1451: The public method export_change_proposals_to_devops was
changed to accept a single ExportChangeProposalsOptions parameter and now breaks
callers that pass legacy keyword args (repo_owner, change_ids, etc.); add a
backward-compatible shim inside export_change_proposals_to_devops that detects
legacy usage (e.g., non-None values in **kwargs or when callers pass individual
fields instead of an ExportChangeProposalsOptions instance), construct an
ExportChangeProposalsOptions object from those legacy parameters, and proceed as
before; update the function signature to accept options:
ExportChangeProposalsOptions | None = None and **kwargs (or accept object|dict)
and map known legacy keys to the new model so existing call sites keep working
while new callers can pass the options object.
In `@src/specfact_cli/sync/bridge_watch.py`:
- Around line 30-40: The helper _match_feature_id_from_pattern_parts currently
only verifies prefix segments up to the "{feature_id}" and therefore can return
IDs when suffix segments differ; update it to require that all non-placeholder
pattern_parts equal corresponding file_parts by: locate feature_id_index via
pattern_parts.index("{feature_id}"), verify file_parts is long enough, ensure
every pattern_parts[i] that is not "{feature_id}" matches file_parts[i] (both
for indices before and after feature_id_index), and only then return
file_parts[feature_id_index]; apply the same fix to the duplicate implementation
mentioned in the review (the other block with identical logic).
In `@src/specfact_cli/utils/source_scanner.py`:
- Around line 117-129: Remove the unused fields impl_files and test_files from
the _FeatureLinkingContext dataclass declaration so the context only contains
consumed caches/indices (file_functions_cache, file_test_functions_cache,
file_hashes_cache, impl_files_by_stem, test_files_by_stem,
impl_stems_by_substring, test_stems_by_substring and repo_path); then update the
place that constructs the linking_ctx (the code that builds the linking_ctx
instance) to stop passing impl_files and test_files and only pass the remaining
required fields, and run tests/static type checks to ensure no other references
to impl_files/test_files remain in functions that consume
_FeatureLinkingContext.
In `@src/specfact_cli/validators/cli_first_validator.py`:
- Around line 191-204: The traversal in detect_direct_manipulation uses
.exists() and then calls .iterdir()/.glob(), which can raise NotADirectoryError
if projects_dir or features_dir is a file; change the checks to use .is_dir()
for projects_dir and features_dir before iterating, and wrap the
bundle_dir.iterdir()/features_dir.glob loops in a try/except OSError (or
NotADirectoryError) to defensively skip non-directory entries; update references
to projects_dir, bundle_dir, manifest_file, and features_dir accordingly so the
validator continues running even if a path is a file.
In `@src/specfact_cli/validators/sidecar/crosshair_runner.py`:
- Around line 40-43: The precondition lambda triggers a type-checker warning
because the inline ternary makes timeout's type unclear; fix by extracting
timeout resolution into a typed helper and calling it from the require
predicate. Add a small function like get_effective_timeout(options:
Optional[CrosshairRunOptions]) -> int that returns options.timeout if options is
not None else CrosshairRunOptions().timeout, then change the decorator lambda to
use get_effective_timeout(options) > 0; reference the symbols
CrosshairRunOptions, get_effective_timeout, and the require decorator to locate
where to change.
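The extracted helper might look like this (the default timeout value below is an assumption for the sketch):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CrosshairRunOptions:  # hypothetical default; the real class lives in crosshair_runner.py
    timeout: int = 30

def get_effective_timeout(options: Optional[CrosshairRunOptions]) -> int:
    """Resolve the timeout with an unambiguous int return type."""
    return options.timeout if options is not None else CrosshairRunOptions().timeout

print(get_effective_timeout(CrosshairRunOptions(timeout=5)))  # 5
```

The `require` predicate then becomes `lambda options: get_effective_timeout(options) > 0`, which type checkers can follow.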
In `@src/specfact_cli/validators/sidecar/crosshair_summary.py`:
- Around line 62-75: The current _append_rejected_line_violation uses substring
matching on violation_details which causes false deduping (e.g., "foo" matches
"foo_bar"); instead, first parse the function name from the line using
function_name_pattern (as already done into func_name), normalize it
(strip/whatever normalization used elsewhere), and then check exact equality
against the "function" keys in violation_details to dedupe; replace the
any(v["function"] in line ...) check with an equality-based check (e.g.,
any(v["function"] == func_name for v in violation_details)) before appending the
new dict with keys "function", "counterexample", and "raw".
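The equality-based dedupe can be shown standalone (the regex below is a hypothetical line format, not the module's actual pattern):

```python
import re

function_name_pattern = re.compile(r"in function (\w+)")  # hypothetical format

def append_rejected_line(line: str, violation_details: list[dict]) -> None:
    """Dedupe by exact function-name equality, not substring containment."""
    match = function_name_pattern.search(line)
    if not match:
        return
    func_name = match.group(1).strip()
    if any(v["function"] == func_name for v in violation_details):
        return  # already recorded for exactly this function
    violation_details.append({"function": func_name, "counterexample": None, "raw": line})

details = [{"function": "foo", "counterexample": None, "raw": ""}]
append_rejected_line("rejected: in function foo_bar when x=0", details)
print([v["function"] for v in details])  # ['foo', 'foo_bar']
```

The old substring check `v["function"] in line` would have seen "foo" inside "foo_bar" and silently dropped the new violation.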
In `@src/specfact_cli/validators/sidecar/frameworks/fastapi.py`:
- Around line 162-175: The helper _method_and_path_from_route_decorator
currently treats any decorator name as an HTTP method and defaults missing paths
to "/", which lets non-route decorators overwrite route metadata; restrict
method detection to a whitelist of actual HTTP verbs (e.g., GET, POST, PUT,
DELETE, PATCH, OPTIONS, HEAD) by checking func.attr/func.id against that set and
return None if not matched, and do not default path to "/"—return None for
missing path (or keep path as None) so non-route decorators don't inject a fake
"/" path; apply the same whitelist-and-no-default-path change to the other
decorator-handling block that mirrors this logic (the similar code around lines
187-193).
In `@tests/helpers/doc_frontmatter.py`:
- Around line 27-30: The current code registers module into sys.modules before
calling spec.loader.exec_module(module), which leaves a partially-initialized
module in sys.modules if exec_module raises; change the logic so that you either
(a) call spec.loader.exec_module(module) first and only set
sys.modules[spec.name] = module after it succeeds, or (b) keep the current
pre-registration but wrap spec.loader.exec_module(module) in try/except/finally
and ensure that on any exception you remove the entry via
sys.modules.pop(spec.name, None); update the code around sys.modules,
spec.loader.exec_module, module, and spec.name accordingly.
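Option (b) from this suggestion, keeping the pre-registration but cleaning up on failure, can be sketched as:

```python
import importlib.util
import sys
import tempfile
from pathlib import Path

def load_module_safely(name: str, path: Path):
    """Pop the sys.modules entry if exec_module fails, so no half-initialized
    module is left behind (pre-registration is kept for self-referential imports)."""
    spec = importlib.util.spec_from_file_location(name, path)
    assert spec is not None and spec.loader is not None
    module = importlib.util.module_from_spec(spec)
    sys.modules[spec.name] = module
    try:
        spec.loader.exec_module(module)
    except BaseException:
        sys.modules.pop(spec.name, None)
        raise
    return module

with tempfile.TemporaryDirectory() as tmp:
    bad = Path(tmp) / "bad_mod.py"
    bad.write_text("raise RuntimeError('boom')")
    try:
        load_module_safely("bad_mod", bad)
    except RuntimeError:
        pass
    print("bad_mod" in sys.modules)  # False
```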
In `@tests/unit/modules/init/test_first_run_selection.py`:
- Around line 394-396: The stub _record_marketplace currently accepts an options
parameter but ignores it, so update the test stub to validate the options
contract: inside _record_marketplace (the test-local function), assert that the
received options matches the expected options object/structure defined in the
test (e.g., non-None, contains expected keys/values or equals expected_options
variable) and then continue to append module_id to installed_ids and return the
tmp_path-derived Path; use the installed_ids and expected_options variables from
the test scope to perform the assertion so wiring regressions for install
options are caught.
In `@tests/unit/scripts/test_pre_commit_verify_modules.py`:
- Around line 170-175: The test test_pre_commit_verify_modules_legacy_entrypoint
should stop asserting on internal shell text like "exec bash" and instead verify
delegation behavior: keep the LEGACY_VERIFY_WRAPPER existence check, drop the
fragile read_text assertion for "exec bash", and replace the string checks with
a behavioral invocation that runs LEGACY_VERIFY_WRAPPER (e.g.,
subprocess.run([str(LEGACY_VERIFY_WRAPPER), "--help"] or another harmless flag)
and assert it returns the expected exit code and/or stdout/stderr that matches
invoking the real target entrypoint (the canonical pre-commit-verify-modules
script); reference LEGACY_VERIFY_WRAPPER and
test_pre_commit_verify_modules_legacy_entrypoint to locate where to change the
assertions.
In `@tests/unit/scripts/test_verify_safe_project_writes.py`:
- Around line 51-60: The test test_verify_safe_project_writes_passes_on_repo
currently invokes scripts/verify_safe_project_writes.py against the whole
repository which couples unit tests to repo state; update this test to either
(a) run the verifier against a controlled fixture directory (create a small
temporary fixture tree under the test using tmp_path and invoke the script or
verifier function only on that path) or (b) move the existing repo-wide
invocation into an integration/contract test suite, keeping the unit test
deterministic; locate the test by name
test_verify_safe_project_writes_passes_on_repo and the script reference
scripts/verify_safe_project_writes.py and replace the subprocess run against the
repo with a subprocess run (or direct function call) targeting the fixture path
or relocate the assertion into an integration test.
In `@tests/unit/tools/test_smart_test_coverage.py`:
- Line 33: Tests currently only exercise _SmartCacheUpdate and its _update_cache
behavior; add focused unit tests that exercise the changed-only dispatch tree:
write separate tests that (1) invoke _run_changed_only_maybe_full_for_unmapped
to cover the unmapped->full fallback path, (2) exercise
_run_changed_only_when_idle to hit the idle-queue branch, and (3) call
_run_changed_only end-to-end (without mocking out the tests/e2e/** path) to
traverse the direct e2e dispatch branch; update or add a variant of
test_run_smart_tests_with_changes to not always mock the changed-only internals
so the above branches execute, and assert expected outcomes/state changes rather
than redundant internal values so coverage for those symbols (_SmartCacheUpdate,
_run_changed_only_maybe_full_for_unmapped, _run_changed_only_when_idle,
_run_changed_only, test_run_smart_tests_with_changes) is achieved.
In `@tests/unit/workflows/test_trustworthy_green_checks.py`:
- Around line 182-205: The helper _assert_pre_commit_cli_quality_block_hooks
currently iterates the same hook IDs twice; consolidate into a single loop over
the tuple of hook IDs (the ones currently referenced:
"cli-block1-format","cli-block1-yaml","cli-block1-markdown-fix","cli-block1-markdown-lint","cli-block1-workflows","cli-block1-lint","cli-block2")
and within that loop assert hid in by_id and then validate the entry with entry
= by_id[hid].get("entry","") and assert "pre-commit-quality-checks.sh" in
str(entry) (keeping the same failure message), then after the loop assert
by_id["cli-block2"].get("always_run") is True and assert "check-doc-frontmatter"
in by_id; update references to hid and entry accordingly.
In `@tools/smart_test_coverage.py`:
- Around line 1950-1953: _unit_direct is receiving paths like "tests/e2e/..."
because _split_tests_by_level doesn't recognize that pattern; ensure those are
routed to the e2e batch so _run_changed_only_e2e_batch() runs them. Update
_split_tests_by_level (or immediately after its call where modified_tests ->
unit_direct, integ_direct, e2e_direct are assigned) to detect paths starting
with "tests/e2e/" and move them from unit_direct into e2e_direct before calling
_dedupe_path_list; keep the existing variables (unit_direct, integ_direct,
e2e_direct) and then call _dedupe_path_list on the adjusted lists so
changed-only e2e tests are handled by _run_changed_only_e2e_batch().
---
Outside diff comments:
In `@docs/_support/readme-first-contact/sample-output/capture-metadata.txt`:
- Around line 3-13: Update the pinned CLI version from 0.46.0 to 0.46.1: replace
the "CLI version: `0.46.0`" line and the two occurrences of `uvx --from
"specfact-cli==0.46.0"` in the captured command block so they read `0.46.1`
instead, ensuring all version strings in the file are consistent.
In `@src/specfact_cli/adapters/ado.py`:
- Around line 2290-2312: Remove the duplicated inline work-item resolution block
that sets target_repo, calls _work_item_id_from_source_tracking,
validates/coerces work_item_id and raises ValueError, because it is redundant
with the subsequent call to _get_source_tracking_work_item_id; keep the single
canonical call to
_get_source_tracking_work_item_id(proposal_data.get("source_tracking", {}),
target_repo) and ensure references to target_repo, work_item_id and
proposal_data remain intact so that _get_source_tracking_work_item_id (which
delegates to _find_work_item_id_in_source_tracking and _coerce_work_item_id) is
the sole source of truth.
In `@src/specfact_cli/analyzers/code_analyzer.py`:
- Around line 1736-1762: The current await-host detection is quadratic because
_detect_async_patterns_parallel calls _function_name_holding_ast_subtree for
each ast.Await which walks the whole tree; instead, build a single parent index
once (e.g., walk the tree with ast.iter_child_nodes and fill a dict
child->parent) inside _detect_async_patterns_parallel, then for each Await walk
upward via the parent map until you hit an ast.FunctionDef or
ast.AsyncFunctionDef and collect its .name; update or remove
_function_name_holding_ast_subtree to use the parent map (or inline the
upward-walk logic) and ensure async_methods still collects AsyncFunctionDef
names plus hosts found via the parent map.
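The single-pass parent-index technique described above, in minimal form (the function name is illustrative; the real code would fold this into `_detect_async_patterns_parallel`):

```python
import ast


def await_host_functions(tree: ast.AST) -> set[str]:
    """Find names of functions that contain an `await`, using one
    parent-index pass instead of re-walking the tree per Await node."""
    # Single pre-pass: map every child node to its parent.
    parent: dict = {}
    for node in ast.walk(tree):
        for child in ast.iter_child_nodes(node):
            parent[child] = node

    hosts: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Await):
            # O(depth) upward walk to the enclosing function definition.
            cur = parent.get(node)
            while cur is not None:
                if isinstance(cur, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    hosts.add(cur.name)
                    break
                cur = parent.get(cur)
    return hosts
```

This replaces the quadratic per-Await subtree search with one O(n) index build plus cheap upward walks.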
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 856a4489-5302-4856-b676-730f4f9cbc5c
📒 Files selected for processing (98)
.github/workflows/pr-orchestrator.yml
.pre-commit-config.yaml
CHANGELOG.md
CONTRIBUTING.md
README.md
docs/_support/readme-first-contact/capture-readme-output.sh
docs/_support/readme-first-contact/sample-output/capture-metadata.txt
docs/_support/readme-first-contact/sample-output/review-output.txt
docs/agent-rules/50-quality-gates-and-review.md
docs/agent-rules/70-release-commit-and-docs.md
docs/getting-started/installation.md
docs/guides/module-signing-and-key-rotation.md
docs/guides/publishing-modules.md
docs/modules/code-review.md
docs/reference/module-security.md
modules/bundle-mapper/module-package.yaml
modules/bundle-mapper/src/bundle_mapper/mapper/engine.py
openspec/CHANGE_ORDER.md
openspec/changes/marketplace-06-ci-module-signing/.openspec.yaml
openspec/changes/marketplace-06-ci-module-signing/design.md
openspec/changes/marketplace-06-ci-module-signing/proposal.md
openspec/changes/marketplace-06-ci-module-signing/specs/ci-integration/spec.md
openspec/changes/marketplace-06-ci-module-signing/specs/ci-module-signing-on-approval/spec.md
openspec/changes/marketplace-06-ci-module-signing/tasks.md
openspec/changes/profile-04-safe-project-artifact-writes/TDD_EVIDENCE.md
openspec/changes/profile-04-safe-project-artifact-writes/tasks.md
pyproject.toml
scripts/check_doc_frontmatter.py
scripts/git-branch-module-signature-flag.sh
scripts/pre-commit-quality-checks.sh
scripts/pre-commit-smart-checks.sh
scripts/pre-commit-verify-modules-signature.sh
scripts/pre-commit-verify-modules.sh
scripts/setup-git-hooks.sh
scripts/verify-bundle-published.py
scripts/verify_safe_project_writes.py
setup.py
src/__init__.py
src/specfact_cli/__init__.py
src/specfact_cli/adapters/ado.py
src/specfact_cli/adapters/github.py
src/specfact_cli/adapters/openspec_parser.py
src/specfact_cli/adapters/speckit.py
src/specfact_cli/analyzers/ambiguity_scanner.py
src/specfact_cli/analyzers/code_analyzer.py
src/specfact_cli/analyzers/graph_analyzer.py
src/specfact_cli/common/logger_setup.py
src/specfact_cli/common/logging_utils.py
src/specfact_cli/generators/contract_generator.py
src/specfact_cli/generators/openapi_extractor.py
src/specfact_cli/generators/persona_exporter.py
src/specfact_cli/generators/test_to_openapi.py
src/specfact_cli/models/project.py
src/specfact_cli/modules/init/module-package.yaml
src/specfact_cli/modules/init/src/first_run_selection.py
src/specfact_cli/modules/module_registry/module-package.yaml
src/specfact_cli/modules/module_registry/src/commands.py
src/specfact_cli/registry/module_installer.py
src/specfact_cli/sync/__init__.py
src/specfact_cli/sync/bridge_sync.py
src/specfact_cli/sync/bridge_watch.py
src/specfact_cli/utils/contract_predicates.py
src/specfact_cli/utils/ide_setup.py
src/specfact_cli/utils/project_artifact_write.py
src/specfact_cli/utils/source_scanner.py
src/specfact_cli/validators/cli_first_validator.py
src/specfact_cli/validators/contract_validator.py
src/specfact_cli/validators/sidecar/crosshair_runner.py
src/specfact_cli/validators/sidecar/crosshair_summary.py
src/specfact_cli/validators/sidecar/frameworks/fastapi.py
src/specfact_cli/validators/sidecar/orchestrator.py
tests/e2e/test_bundle_extraction_e2e.py
tests/helpers/doc_frontmatter.py
tests/integration/sync/test_multi_adapter_backlog_sync.py
tests/integration/test_devops_github_sync.py
tests/unit/adapters/test_openspec_parser.py
tests/unit/adapters/test_speckit.py
tests/unit/modules/init/test_first_run_selection.py
tests/unit/modules/init/test_init_ide_prompt_selection.py
tests/unit/modules/module_registry/test_commands.py
tests/unit/registry/test_module_installer.py
tests/unit/scripts/test_pre_commit_smart_checks_docs.py
tests/unit/scripts/test_pre_commit_verify_modules.py
tests/unit/scripts/test_verify_bundle_published.py
tests/unit/scripts/test_verify_safe_project_writes.py
tests/unit/specfact_cli/modules/test_module_upgrade_improvements.py
tests/unit/specfact_cli/modules/test_multi_module_install_uninstall.py
tests/unit/specfact_cli/registry/test_profile_presets.py
tests/unit/specfact_cli/validators/sidecar/test_crosshair_runner.py
tests/unit/specfact_cli/validators/sidecar/test_crosshair_runner_env.py
tests/unit/tools/test_smart_test_coverage.py
tests/unit/utils/test_contract_predicates.py
tests/unit/utils/test_ide_setup.py
tests/unit/utils/test_project_artifact_write.py
tests/unit/validators/test_bundle_dependency_install.py
tests/unit/workflows/test_trustworthy_green_checks.py
tools/contract_first_smart_test.py
tools/smart_test_coverage.py
Keep dev versions for all conflicted paths (--ours). Retain auto-merged non-conflicting updates from main (notably adapter/analyzer tests). Made-with: Cursor
Use diff-filter ACMRD so deletion-only commits are not treated as empty safe changes. Restrict markdown and code-review paths to existing files. Made-with: Cursor
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/pr-orchestrator.yml (1)
113-123: ⚠️ Potential issue | 🟠 Major
Make module-signature verification branch-aware in CI.
Lines 120 and 122 always pass `--require-signature`. That enforces main-branch policy in all contexts and conflicts with the documented dev/non-main checksum-only contract.
Suggested patch
```diff
       - name: Verify bundled module checksums and signatures
         run: |
           BASE_REF=""
+          REQUIRE_SIGNATURE_FLAG=""
           if [ "${{ github.event_name }}" = "pull_request" ]; then
             BASE_REF="origin/${{ github.event.pull_request.base.ref }}"
+            if [ "${{ github.event.pull_request.base.ref }}" = "main" ]; then
+              REQUIRE_SIGNATURE_FLAG="--require-signature"
+            fi
+          elif [ "${{ github.event_name }}" = "push" ] && [ "${{ github.ref }}" = "refs/heads/main" ]; then
+            REQUIRE_SIGNATURE_FLAG="--require-signature"
           fi
           if [ -n "$BASE_REF" ]; then
-            python scripts/verify-modules-signature.py --require-signature --enforce-version-bump --payload-from-filesystem --version-check-base "$BASE_REF"
+            python scripts/verify-modules-signature.py $REQUIRE_SIGNATURE_FLAG --enforce-version-bump --payload-from-filesystem --version-check-base "$BASE_REF"
           else
-            python scripts/verify-modules-signature.py --require-signature --enforce-version-bump --payload-from-filesystem
+            python scripts/verify-modules-signature.py $REQUIRE_SIGNATURE_FLAG --enforce-version-bump --payload-from-filesystem
           fi
```
As per coding guidelines, "`pr-orchestrator.yml` must enforce `--require-signature` only for events targeting `main` (omit for `dev`)."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/pr-orchestrator.yml around lines 113 - 123, The CI step currently always passes --require-signature to python scripts/verify-modules-signature.py which enforces main-branch policy for all PRs; change the logic around BASE_REF so that --require-signature is only appended when the target base branch is main (e.g., when BASE_REF ends with /main or equals main), and omit --require-signature for dev/non-main branches while still passing --enforce-version-bump, --payload-from-filesystem and, when present, --version-check-base "$BASE_REF".
♻️ Duplicate comments (8)
src/specfact_cli/adapters/ado.py (1)
2290-2312: 🛠️ Refactor suggestion | 🟠 Major
Use a single work-item resolver here.
The new block at Lines 2290-2310 resolves and coerces `work_item_id`, but Lines 2311-2312 immediately recompute it through `_get_source_tracking_work_item_id()`. That leaves the refactor dead at runtime and splits the status-update path across two helpers. Please delete the stale reassignment or route the method entirely through one resolver.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/specfact_cli/adapters/ado.py` around lines 2290 - 2312, The code resolves and validates work_item_id using _work_item_id_from_source_tracking and then immediately overwrites it by calling _get_source_tracking_work_item_id, making the new coercion logic dead; remove the stale reassignment (the second target_repo/work_item_id assignment) and use the already-resolved work_item_id (from _work_item_id_from_source_tracking) for subsequent API calls, or conversely route the entire flow through _get_source_tracking_work_item_id and move the integer-coercion/validation into that helper so only one resolver (_work_item_id_from_source_tracking or _get_source_tracking_work_item_id) is used consistently for target_repo, work_item_id, and proposal_data.tests/unit/scripts/test_verify_safe_project_writes.py (1)
51-60: ⚠️ Potential issue | 🟠 Major
Don't couple a unit test to the live repository state.
This will fail on any unrelated `ide_setup.py` change, which makes the unit suite non-deterministic. Keep this as an integration/contract check, or point the verifier at a fixture tree instead. As per coding guidelines, `tests/**/*.py`: "Flag flakiness, environment coupling, and missing coverage for changed behavior."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/scripts/test_verify_safe_project_writes.py` around lines 51 - 60, The test test_verify_safe_project_writes_passes_on_repo invokes the real scripts/verify_safe_project_writes.py against the live repository which couples the unit test to repo state; change the test to run the verifier against a controlled fixture tree (or temporary directory) instead of the repo root, e.g. create a fixture directory with the minimal files expected by verify_safe_project_writes.py (or pass a path argument or environment variable to the script if verify_safe_project_writes.py/ide_setup supports it), and update test_verify_safe_project_writes_passes_on_repo to invoke the script with that fixture path so the assertion on completed.returncode is deterministic; alternatively mark this test as an integration/contract test rather than a unit test if you want to keep exercising the live repo.scripts/verify_safe_project_writes.py (1)
87-88: ⚠️ Potential issue | 🟠 Major
Map parse/read failures to the gate's controlled exit path.
`IDE_SETUP.read_text()` and `ast.parse()` can raise `OSError`, `UnicodeDecodeError`, or `SyntaxError`. Right now that escapes as a traceback instead of returning the script's deterministic failure code. As per coding guidelines, `scripts/**/*.py`: "Deterministic tooling: subprocess safety, Hatch integration, and parity with documented quality gates."
🛡️ Suggested fix
```diff
-    tree = ast.parse(IDE_SETUP.read_text(encoding="utf-8"), filename=str(IDE_SETUP))
+    try:
+        source = IDE_SETUP.read_text(encoding="utf-8")
+        tree = ast.parse(source, filename=str(IDE_SETUP))
+    except (OSError, UnicodeDecodeError, SyntaxError) as exc:
+        _write_stderr(f"Failed to inspect {IDE_SETUP}: {exc}")
+        return 2
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/verify_safe_project_writes.py` around lines 87 - 88, Wrap the IDE_SETUP.read_text(...) and ast.parse(...) calls in a try/except that catches OSError, UnicodeDecodeError, and SyntaxError, log or surface the caught exception, and map it to the script's deterministic failure exit path (e.g., call the existing failure/exit helper or sys.exit with the gate's non-zero code) instead of letting the traceback escape; only call _collect_json_io_offenders(tree) on successful parse. Ensure you update the code around IDE_SETUP, ast.parse, and _collect_json_io_offenders to perform this handling.openspec/CHANGE_ORDER.md (1)
155-155: ⚠️ Potential issue | 🟡 Minor
Split the marketplace-06 row to avoid markdown line-length lint noise.
Line 155 is over the markdown line-length limit and will keep lint output noisy.
Suggested patch
```diff
-| marketplace | 06 | marketplace-06-ci-module-signing | [`#500`](https://github.com/nold-ai/specfact-cli/issues/500) | independent (CI/trust-chain hardening); paired: specfact-cli-modules/marketplace-06-ci-module-signing [`#185`](https://github.com/nold-ai/specfact-cli-modules/issues/185) |
+| marketplace | 06 | marketplace-06-ci-module-signing | [`#500`](https://github.com/nold-ai/specfact-cli/issues/500) | independent (CI/trust-chain hardening);<br>paired: specfact-cli-modules/marketplace-06-ci-module-signing [`#185`](https://github.com/nold-ai/specfact-cli-modules/issues/185) |
```
As per coding guidelines, "Keep line length under 120 characters in markdown files."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@openspec/CHANGE_ORDER.md` at line 155, The marketplace-06 table row (the entry containing marketplace-06-ci-module-signing and its issue links) exceeds the markdown line-length rule; split the single long table row into two (or more) consecutive table rows or wrap the long cell content across multiple rows while preserving the pipe-delimited table syntax so each line is under 120 chars and the row still reads as the same logical entry (keep the identifiers marketplace, 06, marketplace-06-ci-module-signing, and the issue links like `#500` and `#185` intact and adjacent so meaning is preserved).README.md (1)
85-87: ⚠️ Potential issue | 🟠 Major
Don’t tell downstream repos to copy only `.pre-commit-config.yaml`.
These hooks are `repo: local` and call repo-owned `scripts/...` entrypoints. Vendoring just the YAML into another repository leaves the hook config broken. Either document vendoring the referenced scripts too, or make the remote `specfact-smart-checks` example the primary downstream path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 85 - 87, The README incorrectly instructs downstream repos to copy only `.pre-commit-config.yaml` while the configured hooks are `repo: local` and call repo-owned `scripts/...` entrypoints; update the README to either (A) tell users to vendor the referenced `scripts/` directory and any other repo-owned hook files alongside `.pre-commit-config.yaml` (and note any required path changes), or (B) provide and show the alternative `pre-commit` configuration that points to the remote `specfact-smart-checks` repository so hooks run without vendoring; reference the `.pre-commit-config.yaml` file, `scripts/...` entrypoints, and the `specfact-smart-checks` remote option in the instructions so downstream consumers know both supported approaches.scripts/pre-commit-smart-checks.sh (1)
13-17: ⚠️ Potential issue | 🟠 Major
Copied-hook mode is still broken.
Line 15 resolves the repo root only from `${_hook_dir}`. When this shim is copied into `.git/hooks/pre-commit`, `git -C "${_hook_dir}" rev-parse --show-toplevel` fails because `.git/hooks` is not a worktree, so the shim never reaches `pre-commit-quality-checks.sh`.
🛠️ Proposed fix
```diff
-_hook_dir=$(CDPATH= cd -- "$(dirname "${_script_path}")" && pwd)
-
-_repo_root=$(git -C "${_hook_dir}" rev-parse --show-toplevel)
+_hook_dir=$(CDPATH='' cd -- "$(dirname "${_script_path}")" && pwd)
+
+if _repo_root=$(git rev-parse --show-toplevel 2>/dev/null); then
+  :
+else
+  _repo_root=$(git -C "${_hook_dir}" rev-parse --show-toplevel)
+fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/pre-commit-smart-checks.sh` around lines 13 - 17, The repo-root resolution fails when the shim is copied into .git/hooks because _hook_dir points at .git/hooks (not a worktree); update the _repo_root assignment to try resolving from _hook_dir first and, if that fails, fall back to the parent directory (e.g. attempt git -C "${_hook_dir}" rev-parse --show-toplevel || git -C "${_hook_dir}/.." rev-parse --show-toplevel) so the exec call that runs pre-commit-quality-checks.sh can locate the real repository root; change the line that sets _repo_root accordingly and keep the rest (exec bash "${_repo_root}/scripts/pre-commit-quality-checks.sh" all "$@") unchanged.openspec/changes/marketplace-06-ci-module-signing/specs/ci-integration/spec.md (1)
74-76: 🧹 Nitpick | 🔵 Trivial
Clarify "checksum-only signed manifests" terminology.
This phrasing is still ambiguous as noted in prior review. Consider "SHALL pass for checksum-valid manifests without signatures" to clearly distinguish signature-absent-but-checksum-valid from signature-present-but-invalid states.
📝 Suggested terminology fix
```diff
 - **WHEN** a push to `dev` triggers `sign-modules.yml`
 - **AND** the push contains module changes
 - **THEN** the `verify` job SHALL run without `--require-signature`
-- **AND** SHALL pass for checksum-only signed manifests
+- **AND** SHALL pass for checksum-valid manifests without signatures
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@openspec/changes/marketplace-06-ci-module-signing/specs/ci-integration/spec.md` around lines 74 - 76, Update the ambiguous wording in the spec lines referencing the `verify` job and `--require-signature`: replace "checksum-only signed manifests" with a clearer phrase such as "checksum-valid manifests without signatures" so the `verify` job SHALL run without `--require-signature` and SHALL pass for checksum-valid manifests without signatures, explicitly distinguishing absence of signature from invalid signatures.tests/unit/scripts/test_pre_commit_verify_modules.py (1)
170-174: 🧹 Nitpick | 🔵 Trivial
Prefer behavior over shell-text assertions here.
This will fail on harmless wrapper refactors while the next parity test already exercises the actual delegation path. A behavioral invocation of the legacy wrapper would preserve the contract without coupling the test to `exec bash`.
tests/**/*.py: Contract-first testing: meaningful scenarios, not redundant assertions already covered by contracts. Flag flakiness, environment coupling, and missing coverage for changed behavior."🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/scripts/test_pre_commit_verify_modules.py` around lines 170 - 174, The test test_pre_commit_verify_modules_legacy_entrypoint currently asserts on literal shell text in LEGACY_VERIFY_WRAPPER which is brittle; instead, create a minimal behavioral test that ensures the wrapper actually invokes the legacy target: in the test replace the string checks with a small temporary executable script (a fake pre-commit-verify-modules.sh) placed in a temporary directory, make it record invocation (e.g., write to a temp file or stdout), prepend that dir to PATH or otherwise ensure the wrapper will resolve that fake script, run LEGACY_VERIFY_WRAPPER as an executable (subprocess) and assert the wrapper returns the expected exit code and that the fake script was invoked; keep the test named test_pre_commit_verify_modules_legacy_entrypoint and reference LEGACY_VERIFY_WRAPPER when locating the wrapper to run.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/guides/publishing-modules.md`:
- Around line 105-108: The protected-branch example weakens the release check by
only showing --require-signature; update the example to require both flags so
strict verification enforces version bumps before merging to main: modify the
text that references scripts/verify-modules-signature.py to show
`--require-signature --enforce-version-bump` (and mention that checksum-only is
allowed on feature branches) so the documented policy matches the PR’s
enforcement rules.
In `@docs/modules/code-review.md`:
- Around line 110-111: The docs sentence referencing the shim delegation uses
the non-canonical CLI path; update the text so the hook `specfact-smart-checks`
points to `scripts/pre-commit-quality-checks.sh all` (instead of
`pre-commit-quality-checks.sh all`) to match the actual repo entrypoint and
avoid downstream copy/paste errors—edit the sentence mentioning
`specfact-smart-checks` and replace the inline script invocation accordingly.
In `@openspec/changes/marketplace-06-ci-module-signing/tasks.md`:
- Around line 76-86: Add a final strict validation step to the "7. Testing and
quality gates" checklist by inserting a task that runs openspec validate
marketplace-06-ci-module-signing --strict (e.g., as a new 7.7 or appended to
7.6) and requires fixing any validation errors before marking the change
complete; update the checklist text under the "7. Testing and quality gates"
section to mention this exact command and the requirement to re-run until it
passes.
In `@pyproject.toml`:
- Line 240: The release alias currently bypasses the new check-pypi-ahead step;
update the hatch 'release' alias to include the 'check-pypi-ahead' command (the
new "check-pypi-ahead" script entry) as a prerequisite so that running `hatch
run release` executes the PyPI ahead-of check first and fails the release if the
local version in pyproject.toml is not strictly greater than the latest PyPI
version; locate the 'release' alias and append or prepend "check-pypi-ahead" to
its command list (ensuring ordering before publish steps) so the gate is
enforced.
In `@scripts/check_doc_frontmatter.py`:
- Around line 465-466: The title generation in
_format_agent_rules_suggested_frontmatter currently uses path.stem directly
(assigned to title_guess) which preserves numeric ordering prefixes (e.g.,
"50-quality-gates-and-review") into the suggested title; update the logic to
strip any leading ordering prefix (one or more digits optionally followed by
separators like "-" or "_") from path.stem before replacing "-" with spaces,
title-casing, and escaping quotes so generated title no longer includes
navigation numbers—modify the title creation step in
_format_agent_rules_suggested_frontmatter to remove that prefix from path.stem
before the existing transformations.
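The prefix-stripping step could be sketched like this (a standalone illustration, not the repo's actual `_format_agent_rules_suggested_frontmatter` helper; the regex for the ordering prefix is an assumption):

```python
import re
from pathlib import Path


def suggested_title(path: Path) -> str:
    # Drop a leading ordering prefix like "50-" or "07_" before
    # deriving a human-readable title from the file stem.
    stem = re.sub(r"^\d+[-_]*", "", path.stem)
    title = stem.replace("-", " ").title()
    # Escape quotes so the title is safe inside YAML frontmatter.
    return title.replace('"', '\\"')
```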
In `@scripts/check_local_version_ahead_of_pypi.py`:
- Around line 48-66: fetch_latest_pypi_version currently fails CI on transient
network/PyPI errors; wrap the network call in retry logic using tenacity
(`@retry`) to perform a bounded number of attempts with exponential backoff (e.g.,
stop_after_attempt and wait_exponential) and only retry on transient errors
(urllib.error.URLError and HTTPError except 404); preserve existing behavior of
returning None for 404 and raising RuntimeError for non-404 HTTP errors after
retries, and ensure tenacity is imported and configured to respect the timeout_s
parameter per attempt.
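A stdlib-only sketch of the bounded-retry shape the prompt asks for. The prompt names tenacity's `@retry` with `stop_after_attempt` and `wait_exponential`; this equivalent manual loop is used here only to avoid the third-party dependency in the illustration, and the function name mirrors the prompt:

```python
from __future__ import annotations

import json
import time
import urllib.error
import urllib.request


def fetch_latest_pypi_version(
    package: str, timeout_s: float = 10.0, attempts: int = 3
) -> str | None:
    """Fetch the latest version from PyPI with bounded retries and
    exponential backoff on transient errors only."""
    url = f"https://pypi.org/pypi/{package}/json"
    delay = 1.0
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                return json.load(resp)["info"]["version"]
        except urllib.error.HTTPError as exc:
            if exc.code == 404:
                return None  # unknown package: preserved non-error behavior
            raise RuntimeError(f"PyPI returned HTTP {exc.code}") from exc
        except urllib.error.URLError:
            if attempt == attempts:
                raise  # transient errors exhausted the retry budget
            time.sleep(delay)
            delay *= 2
    return None
```

Note that `HTTPError` is caught before `URLError` (it is a subclass), so 404s and other HTTP statuses keep their deterministic handling while only connection-level failures are retried.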
- Around line 79-120: The main issue is that compare_local_to_pypi_version
(which calls packaging.version.parse in compare_local_to_pypi_version) can raise
packaging.version.InvalidVersion (a ValueError subclass) and is currently
invoked in main without catching it; update main to wrap the call to
compare_local_to_pypi_version (or the earlier parsing steps) in a try/except
catching ValueError (or InvalidVersion) and on exception write the error to
stderr and return exit code 2 so malformed local or PyPI version strings produce
the documented deterministic exit path.
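The deterministic exit-path shape described above, sketched with a stand-in parser: the real script compares versions via `packaging.version.parse`, whose `InvalidVersion` subclasses `ValueError`, which is what this simplified comparator simulates. Exit code 2 is the documented error path from the prompt; the 0/1 codes here are illustrative.

```python
import sys


def compare_local_to_pypi_version(local: str, remote: str) -> bool:
    # Stand-in for a packaging.version.parse-based comparison; raises
    # ValueError (as packaging's InvalidVersion does) on malformed input.
    import re

    for v in (local, remote):
        if not re.fullmatch(r"\d+(\.\d+)*", v):
            raise ValueError(f"invalid version: {v!r}")
    return tuple(map(int, local.split("."))) > tuple(map(int, remote.split(".")))


def main(local: str, remote: str) -> int:
    try:
        ahead = compare_local_to_pypi_version(local, remote)
    except ValueError as exc:
        print(f"error: {exc}", file=sys.stderr)
        return 2  # documented deterministic exit path for malformed versions
    return 0 if ahead else 1
```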
In `@scripts/pre-commit-quality-checks.sh`:
- Around line 135-159: The check_safe_change() function currently treats all
*.md as safe which lets openspec/changes/.../*.md deltas bypass review; update
the case list in check_safe_change() so that the explicit
openspec/changes/**/*.md (or openspec/changes/*) pattern is handled before the
*.md catch-all and does not fall into the allowed markdown branch—i.e., add a
case entry matching openspec/changes/* (or openspec/**/*.md) that forces
other_changes to increment (or omits the no-op) so such spec-source markdown
files are counted and therefore will trigger run_code_review_gate() (ensure the
staged_files() consumer and other_changes logic remain unchanged).
In `@scripts/verify_safe_project_writes.py`:
- Around line 63-76: The current ast.walk loop flags json I/O names regardless
of local rebinding; restore scope-aware alias detection by reintroducing the
previous _Visitor logic (or equivalent) that tracks shadowing via shadow_func
and shadow_mod sets so names in func_aliases or module_locals are ignored when
they've been rebound locally; update the call detection in the loop to use the
visitor's shadowing state before appending to offenders for func_aliases or json
names (_JSON_IO_NAMES), and ensure the visitor handles function/class defs,
parameters, assignments, for targets, with targets, except handlers, and
comprehensions to match prior behavior.
In `@src/specfact_cli/adapters/ado.py`:
- Around line 767-769: The current falsy check "if not source_tracking:" treats
empty lists ([]) as missing and replaces list-shaped tracking with a dict;
change the guard to only treat absent/None as missing by using "if
source_tracking is None:" (or explicitly check for None/missing) so that an
existing empty list keeps its list semantics; update the block that assigns
proposal_data["source_tracking"] = tracking_update to run only when
source_tracking is None and preserve existing list-shaped source_tracking and
its append behavior.
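The None-vs-falsy distinction above can be sketched in a few lines (a standalone illustration, not the adapter's actual method):

```python
def ensure_source_tracking(proposal_data: dict, tracking_update: dict) -> None:
    """Only treat absent/None as missing; an existing empty list keeps
    its list semantics so later appends still work."""
    source_tracking = proposal_data.get("source_tracking")
    if source_tracking is None:
        # Truly missing: install the dict-shaped tracking update.
        proposal_data["source_tracking"] = tracking_update
    elif isinstance(source_tracking, list):
        # List-shaped tracking (even when empty) keeps append semantics.
        source_tracking.append(tracking_update)
```

With the original `if not source_tracking:` guard, an empty list would have been silently replaced by a dict, breaking every later `.append`.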
In `@src/specfact_cli/adapters/github.py`:
- Line 162: Update the _GITHUB_GIT_CONFIG_URL_RE constant by adding a short
comment above it documenting the supported URL patterns (e.g., "Matches:
https(s)://, http://, ssh://, git://, git@host:path"); add unit tests that
exercise _git_config_content_indicates_github with git config entries using the
three missing schemes (ssh://host/path, git://host/path, and http://host/path)
in addition to existing https and git@ cases to ensure detection remains
correct; run tests and adjust the regex/test inputs if any false
positives/negatives appear.
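A sketch of a regex covering the schemes listed above, with the documenting comment the prompt asks for. The exact pattern in `src/specfact_cli/adapters/github.py` may differ; this is an assumed equivalent for illustration:

```python
import re

# Matches GitHub remotes in git config across common schemes:
# https://, http://, ssh://, git://, and scp-like git@host:path.
_GITHUB_GIT_CONFIG_URL_RE = re.compile(
    r"(?:(?:https?|ssh|git)://(?:[^/@\s]+@)?|git@)github\.com[:/]",
    re.IGNORECASE,
)


def git_config_content_indicates_github(content: str) -> bool:
    return bool(_GITHUB_GIT_CONFIG_URL_RE.search(content))
```

Requiring a scheme or `git@` immediately before `github.com[:/]` keeps lookalike hosts such as `notgithub.com` from matching.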
In `@src/specfact_cli/adapters/speckit.py`:
- Around line 465-471: The acceptance list currently preserves blank or
whitespace-only strings which can make story.acceptance appear non-empty; update
the logic around acceptance_raw processing (the branch that sets acceptance and
any call to self._extract_text_list) to normalize by trimming each string and
filtering out entries that are empty after strip, i.e., only keep items where
str(item).strip() is non-empty; if the resulting acceptance list becomes empty,
fall back to the default acceptance of [f"{story_title} is implemented"] so
downstream ambiguity_scanner and code_analyzer see meaningful content.
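The normalization described above, as a small standalone function (illustrative; the real code lives inline in the speckit adapter around `acceptance_raw`):

```python
def normalize_acceptance(acceptance_raw: list, story_title: str) -> list:
    """Trim each entry, drop blanks, and fall back to a meaningful
    default when nothing survives."""
    acceptance = [str(item).strip() for item in acceptance_raw]
    acceptance = [item for item in acceptance if item]
    if not acceptance:
        # Default mirrors the adapter's documented fallback.
        return [f"{story_title} is implemented"]
    return acceptance
```

This guarantees downstream consumers never see a list that is non-empty only because of whitespace entries.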
In `@src/specfact_cli/analyzers/code_analyzer.py`:
- Around line 1736-1764: The current implementation wastes time by calling
_function_name_holding_ast_subtree which re-walks the AST for every ast.Await
node; instead, do a single pre-pass over the tree to build a parent map (e.g.,
dict child->parent) and then, when iterating awaits in
_detect_async_patterns_parallel, walk up the parent chain to find the nearest
ast.FunctionDef/ast.AsyncFunctionDef to get the host name; also mark the unused
file_path parameter as intentionally unused by renaming it to _file_path or
adding a trailing # noqa comment to suppress warnings.
In `@tests/unit/scripts/test_check_local_version_ahead_of_pypi.py`:
- Around line 80-92: The test test_script_exits_zero_when_skip_env in
tests/unit/scripts/test_check_local_version_ahead_of_pypi.py spawns a subprocess
and depends on filesystem layout; move this test into tests/integration/ (e.g.,
tests/integration/scripts/) and update any imports/paths accordingly so unit
tests remain deterministic. Specifically, relocate the whole test function
(test_script_exits_zero_when_skip_env) to a new integration test file that
mirrors the same assertions but lives under tests/integration, and remove or
replace the unit-suite version with a small isolated unit test that mocks
subprocess.run or the script entrypoint if unit coverage of the behavior is
still required. Ensure the test path still references
check_local_version_ahead_of_pypi.py correctly after moving.
- Around line 15-25: The test currently creates shared module state via the
module-scoped _mod variable; remove the module-level "_mod = _load_module()" and
instead provide a pytest fixture (e.g. def mod(): return _load_module()) with
function scope so each test gets a fresh import; update tests to accept that
fixture (replace references to _mod with the fixture name) and keep the helper
function _load_module() as the loader used by the fixture to ensure no shared
mutable module state across tests.
In `@tests/unit/scripts/test_pre_commit_smart_checks_docs.py`:
- Around line 62-67: The test test_pre_commit_quality_markdown_globs_include_mdc
currently only checks the generated script; update it to also validate the
actual pre-commit hook regex in .pre-commit-config.yaml so staged .mdc files
aren't skipped. In tests/unit/scripts/test_pre_commit_smart_checks_docs.py
(function test_pre_commit_quality_markdown_globs_include_mdc) open and parse
.pre-commit-config.yaml (or load as text) and assert that the
cli-block1-markdown-* hooks' files pattern contains \.(md|mdc)$ (or an
equivalent pattern that includes .mdc), ensuring both the script and the
pre-commit hook definitions include .mdc.
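The `files` pattern being asserted can be pinned with a small check; `MARKDOWN_FILES_RE` and `hook_covers` are illustrative names, not the test suite's actual identifiers:

```python
import re

# The pattern the pre-commit hooks should carry: matches .md and .mdc only.
MARKDOWN_FILES_RE = re.compile(r"\.(md|mdc)$")


def hook_covers(path: str) -> bool:
    return bool(MARKDOWN_FILES_RE.search(path))
```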
In `@tests/unit/workflows/test_trustworthy_green_checks.py`:
- Around line 168-179: The test _assert_pre_commit_verify_and_version_hooks
currently hard-codes the new wrapper name and should be aligned with the
existing shim entrypoint; update the assertion that checks
verify_hook/verify_script so it asserts the expected legacy wrapper name
(./scripts/pre-commit-verify-modules-signature.sh) or allows either the legacy
or new script (pre-commit-verify-modules-signature.sh OR
pre-commit-verify-modules.sh) when checking verify_hook.get("entry") and when
resolving REPO_ROOT / "scripts"; keep the existing checks for legacy_verify and
the payload flag but change the verify_script assertion to reference the shim
name used by the nold-ai/specfact-cli-modules parity tests.
---
Outside diff comments:
In @.github/workflows/pr-orchestrator.yml:
- Around line 113-123: The CI step currently always passes --require-signature
to python scripts/verify-modules-signature.py which enforces main-branch policy
for all PRs; change the logic around BASE_REF so that --require-signature is
only appended when the target base branch is main (e.g., when BASE_REF ends with
/main or equals main), and omit --require-signature for dev/non-main branches
while still passing --enforce-version-bump, --payload-from-filesystem and, when
present, --version-check-base "$BASE_REF".
---
Duplicate comments:
In `@openspec/CHANGE_ORDER.md`:
- Line 155: The marketplace-06 table row (the entry containing
marketplace-06-ci-module-signing and its issue links) exceeds the markdown
line-length rule; split the single long table row into two (or more) consecutive
table rows or wrap the long cell content across multiple rows while preserving
the pipe-delimited table syntax so each line is under 120 chars and the row
still reads as the same logical entry (keep the identifiers marketplace, 06,
marketplace-06-ci-module-signing, and the issue links like `#500` and `#185` intact
and adjacent so meaning is preserved).
In
`@openspec/changes/marketplace-06-ci-module-signing/specs/ci-integration/spec.md`:
- Around line 74-76: Update the ambiguous wording in the spec lines referencing
the `verify` job and `--require-signature`: replace "checksum-only signed
manifests" with a clearer phrase such as "checksum-valid manifests without
signatures" so the `verify` job SHALL run without `--require-signature` and
SHALL pass for checksum-valid manifests without signatures, explicitly
distinguishing absence of signature from invalid signatures.
In `@README.md`:
- Around line 85-87: The README incorrectly instructs downstream repos to copy
only `.pre-commit-config.yaml` while the configured hooks are `repo: local` and
call repo-owned `scripts/...` entrypoints; update the README to either (A) tell
users to vendor the referenced `scripts/` directory and any other repo-owned
hook files alongside `.pre-commit-config.yaml` (and note any required path
changes), or (B) provide and show the alternative `pre-commit` configuration
that points to the remote `specfact-smart-checks` repository so hooks run
without vendoring; reference the `.pre-commit-config.yaml` file, `scripts/...`
entrypoints, and the `specfact-smart-checks` remote option in the instructions
so downstream consumers know both supported approaches.
In `@scripts/pre-commit-smart-checks.sh`:
- Around line 13-17: The repo-root resolution fails when the shim is copied into
.git/hooks because _hook_dir points at .git/hooks (not a worktree); update the
_repo_root assignment to try resolving from _hook_dir first and, if that fails,
fall back to the parent directory (e.g. attempt git -C "${_hook_dir}" rev-parse
--show-toplevel || git -C "${_hook_dir}/.." rev-parse --show-toplevel) so the
exec call that runs pre-commit-quality-checks.sh can locate the real repository
root; change the line that sets _repo_root accordingly and keep the rest (exec
bash "${_repo_root}/scripts/pre-commit-quality-checks.sh" all "$@") unchanged.
In `@scripts/verify_safe_project_writes.py`:
- Around line 87-88: Wrap the IDE_SETUP.read_text(...) and ast.parse(...) calls
in a try/except that catches OSError, UnicodeDecodeError, and SyntaxError, log
or surface the caught exception, and map it to the script's deterministic
failure exit path (e.g., call the existing failure/exit helper or sys.exit with
the gate's non-zero code) instead of letting the traceback escape; only call
_collect_json_io_offenders(tree) on successful parse. Ensure you update the code
around IDE_SETUP, ast.parse, and _collect_json_io_offenders to perform this
handling.
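A minimal sketch of the requested error mapping, assuming exit code 1 is the gate's failure code and `parse_gate_file` is a hypothetical name for the wrapped read-and-parse step:

```python
import ast
import sys
from pathlib import Path


def parse_gate_file(path: Path) -> ast.Module:
    """Map I/O, decoding, and syntax failures to a deterministic exit."""
    try:
        source = path.read_text(encoding="utf-8")
        return ast.parse(source, filename=str(path))
    except (OSError, UnicodeDecodeError, SyntaxError) as exc:
        # Surface the cause, then take the gate's failure path instead of
        # letting a traceback escape.
        print(f"verify_safe_project_writes: cannot analyze {path}: {exc}", file=sys.stderr)
        sys.exit(1)
```

Only on a successful parse would the caller go on to collect offenders from the tree.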
In `@src/specfact_cli/adapters/ado.py`:
- Around line 2290-2312: The code resolves and validates work_item_id using
_work_item_id_from_source_tracking and then immediately overwrites it by calling
_get_source_tracking_work_item_id, making the new coercion logic dead; remove
the stale reassignment (the second target_repo/work_item_id assignment) and use
the already-resolved work_item_id (from _work_item_id_from_source_tracking) for
subsequent API calls, or conversely route the entire flow through
_get_source_tracking_work_item_id and move the integer-coercion/validation into
that helper so only one resolver (_work_item_id_from_source_tracking or
_get_source_tracking_work_item_id) is used consistently for target_repo,
work_item_id, and proposal_data.
In `@tests/unit/scripts/test_pre_commit_verify_modules.py`:
- Around line 170-174: The test test_pre_commit_verify_modules_legacy_entrypoint
currently asserts on literal shell text in LEGACY_VERIFY_WRAPPER which is
brittle; instead, create a minimal behavioral test that ensures the wrapper
actually invokes the legacy target: in the test replace the string checks with a
small temporary executable script (a fake pre-commit-verify-modules.sh) placed
in a temporary directory, make it record invocation (e.g., write to a temp file
or stdout), prepend that dir to PATH or otherwise ensure the wrapper will
resolve that fake script, run LEGACY_VERIFY_WRAPPER as an executable
(subprocess) and assert the wrapper returns the expected exit code and that the
fake script was invoked; keep the test named
test_pre_commit_verify_modules_legacy_entrypoint and reference
LEGACY_VERIFY_WRAPPER when locating the wrapper to run.
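A behavioral harness along those lines might look like this (POSIX-only sketch; `run_wrapper_with_fake_target` and the marker-file mechanism are assumptions, not the test file's actual helpers):

```python
import os
import stat
import subprocess
import tempfile
from pathlib import Path


def run_wrapper_with_fake_target(wrapper_path: Path) -> tuple[int, bool]:
    """Run a shell wrapper with a fake pre-commit-verify-modules.sh on PATH.

    The fake records its invocation by touching a marker file; returns
    (wrapper exit code, whether the fake target was actually invoked).
    """
    with tempfile.TemporaryDirectory() as tmp:
        marker = Path(tmp) / "called"
        fake = Path(tmp) / "pre-commit-verify-modules.sh"
        fake.write_text(f"#!/bin/sh\ntouch '{marker}'\nexit 0\n")
        fake.chmod(fake.stat().st_mode | stat.S_IXUSR)
        env = {**os.environ, "PATH": f"{tmp}:{os.environ.get('PATH', '')}"}
        completed = subprocess.run(["bash", str(wrapper_path)], env=env)
        return completed.returncode, marker.exists()
```

This asserts on observable behavior (exit code plus delegation) rather than on the wrapper's literal shell text.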
In `@tests/unit/scripts/test_verify_safe_project_writes.py`:
- Around line 51-60: The test test_verify_safe_project_writes_passes_on_repo
invokes the real scripts/verify_safe_project_writes.py against the live
repository which couples the unit test to repo state; change the test to run the
verifier against a controlled fixture tree (or temporary directory) instead of
the repo root, e.g. create a fixture directory with the minimal files expected
by verify_safe_project_writes.py (or pass a path argument or environment
variable to the script if verify_safe_project_writes.py/ide_setup supports it),
and update test_verify_safe_project_writes_passes_on_repo to invoke the script
with that fixture path so the assertion on completed.returncode is
deterministic; alternatively mark this test as an integration/contract test
rather than a unit test if you want to keep exercising the live repo.

ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 5e9970b1-099a-4b00-bd53-02cabd02aa18
📒 Files selected for processing (46)
- .github/workflows/pr-orchestrator.yml
- .pre-commit-config.yaml
- CHANGELOG.md
- CONTRIBUTING.md
- README.md
- docs/_support/readme-first-contact/capture-readme-output.sh
- docs/_support/readme-first-contact/sample-output/capture-metadata.txt
- docs/_support/readme-first-contact/sample-output/review-output.txt
- docs/agent-rules/50-quality-gates-and-review.md
- docs/agent-rules/70-release-commit-and-docs.md
- docs/guides/module-signing-and-key-rotation.md
- docs/guides/publishing-modules.md
- docs/modules/code-review.md
- docs/reference/module-security.md
- openspec/CHANGE_ORDER.md
- openspec/changes/marketplace-06-ci-module-signing/.openspec.yaml
- openspec/changes/marketplace-06-ci-module-signing/design.md
- openspec/changes/marketplace-06-ci-module-signing/proposal.md
- openspec/changes/marketplace-06-ci-module-signing/specs/ci-integration/spec.md
- openspec/changes/marketplace-06-ci-module-signing/specs/ci-module-signing-on-approval/spec.md
- openspec/changes/marketplace-06-ci-module-signing/tasks.md
- pyproject.toml
- scripts/check_doc_frontmatter.py
- scripts/check_local_version_ahead_of_pypi.py
- scripts/git-branch-module-signature-flag.sh
- scripts/pre-commit-quality-checks.sh
- scripts/pre-commit-smart-checks.sh
- scripts/pre-commit-smart-test.sh
- scripts/pre-commit-verify-modules-signature.sh
- scripts/pre-commit-verify-modules.sh
- scripts/setup-git-hooks.sh
- scripts/verify_safe_project_writes.py
- setup.py
- src/__init__.py
- src/specfact_cli/__init__.py
- src/specfact_cli/adapters/ado.py
- src/specfact_cli/adapters/github.py
- src/specfact_cli/adapters/speckit.py
- src/specfact_cli/analyzers/code_analyzer.py
- src/specfact_cli/analyzers/graph_analyzer.py
- tests/unit/adapters/test_openspec_parser.py
- tests/unit/scripts/test_check_local_version_ahead_of_pypi.py
- tests/unit/scripts/test_pre_commit_smart_checks_docs.py
- tests/unit/scripts/test_pre_commit_verify_modules.py
- tests/unit/scripts/test_verify_safe_project_writes.py
- tests/unit/workflows/test_trustworthy_green_checks.py
💤 Files with no reviewable changes (2)
- tests/unit/adapters/test_openspec_parser.py
- scripts/pre-commit-smart-test.sh
Actionable comments posted: 12
♻️ Duplicate comments (9)
tests/helpers/doc_frontmatter.py (1)
29-34: ⚠️ Potential issue | 🟡 Minor

Restore prior `sys.modules` entry on loader failure.

Line 33 always removes `spec.name`; if a prior module existed, this mutates global import state for later tests. Snapshot and restore the previous value instead of an unconditional pop.

Suggested patch:

```diff
 sys.modules[spec.name] = module
 try:
     spec.loader.exec_module(module)
 except BaseException:
-    sys.modules.pop(spec.name, None)
+    if previous_module is None:
+        sys.modules.pop(spec.name, None)
+    else:
+        sys.modules[spec.name] = previous_module
     raise
```

Verification:

```bash
#!/bin/bash
set -euo pipefail
# Verify current rollback behavior in the helper.
nl -ba tests/helpers/doc_frontmatter.py | sed -n '20,40p'
# Check where this helper is used (to assess downstream test-state impact).
rg -n --type=py -C2 '\bload_check_doc_frontmatter_module\s*\('
```

Expected: you should see registration on Line 29 and an unconditional `pop` in the exception block, plus multiple call sites in tests where preserved module state can matter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/helpers/doc_frontmatter.py` around lines 29 - 34, The exception handler currently unconditionally removes sys.modules[spec.name], which loses any prior module binding; change it to snapshot prior = sys.modules.get(spec.name) before assigning sys.modules[spec.name] = module, and in the except block restore the prior value (if prior is None remove the key, otherwise set sys.modules[spec.name] = prior) so that the loader failure in spec.loader.exec_module(module) does not mutate global import state; update the code around the spec.name/module assignment and the except BaseException block to perform this restore logic.src/specfact_cli/sync/bridge_watch.py (1)
39-43: ⚠️ Potential issue | 🟠 Major

Handle `{contract_name}` as a placeholder in the strict matcher.

Line 40 currently skips only `"{feature_id}"`. For patterns that also include `"{contract_name}"`, this path-matcher will now reject valid paths and fail feature-id extraction. This is the same unresolved matcher concern from earlier review context.

Suggested patch:

```diff
-    for i, part in enumerate(pattern_parts):
-        if part == "{feature_id}":
+    for i, part in enumerate(pattern_parts):
+        if part in ("{feature_id}", "{contract_name}"):
             continue
-        if i >= len(file_parts) or file_parts[i] != part:
+        if file_parts[i] != part:
             return None
```
Verify each finding against the current code and only fix it if needed. In `@src/specfact_cli/sync/bridge_watch.py` around lines 39 - 43, The strict path-matching loop currently only skips the "{feature_id}" placeholder, causing patterns containing "{contract_name}" to be rejected; update the loop in bridge_watch.py (the code iterating over pattern_parts and comparing to file_parts) to treat both "{feature_id}" and "{contract_name}" as placeholders by continuing when part is either of those strings so valid paths with contract_name are accepted and feature-id extraction logic can run correctly.src/specfact_cli/generators/persona_exporter.py (1)
206-210: ⚠️ Potential issue | 🟡 Minor

Narrow the non-`None` checks to avoid exporting empty owned sections.

At Line 206 and Line 209, accepting any non-`None` value now also propagates `""`/`[]`, which can create empty sections and diverges from `_merge_feature_scalar_fields` behavior. Keep `0`/`False`, but filter empty string/list.

Proposed fix:

```diff
 if spec.use_getattr:
     val = getattr(feature, spec.field_name, None)
-    if val is not None:
+    if val is not None and val != [] and val != "":
         feature_dict[spec.field_name] = val
     return
-if spec.value is not None:
+if spec.value is not None and spec.value != [] and spec.value != "":
     feature_dict[spec.field_name] = spec.value
```
Verify each finding against the current code and only fix it if needed. In `@src/specfact_cli/generators/persona_exporter.py` around lines 206 - 210, The current non-None checks in the persona exporter accept empty strings/lists and produce empty sections; update the two checks around feature_dict[spec.field_name] to only accept values that are not None and not empty strings or empty lists while still allowing falsy but valid values like 0 and False—i.e., replace "if val is not None" and "if spec.value is not None" with a guard that ensures val/spec.value is not None and not (isinstance(..., str) and len==0) and not (isinstance(..., list) and len==0); keep the assignment to feature_dict[spec.field_name] and the early return behavior otherwise.src/specfact_cli/validators/sidecar/frameworks/fastapi.py (1)
20-20: ⚠️ Potential issue | 🟠 Major

The parser still turns unsupported FastAPI decorators into a synthetic `GET /` route.

This whitelist only accepts verb-named decorators with a positional first arg. Valid FastAPI forms like `@router.api_route(..., methods=[...])` and `@router.get(path="/items")` now return `None` here, and `_extract_route_from_function()` still falls back to a default root endpoint when nothing matched. Please either support those decorator forms or return `None` unless a real route decorator was recognized.

```bash
#!/bin/bash
# Read-only scan: find FastAPI decorator forms that the current helper rejects.
python - <<'PY'
import ast
from pathlib import Path

VERB_DECORATORS = {"get", "post", "put", "patch", "delete", "head", "options", "trace"}
for py_file in Path(".").rglob("*.py"):
    try:
        tree = ast.parse(py_file.read_text(encoding="utf-8"), filename=str(py_file))
    except (SyntaxError, UnicodeDecodeError, PermissionError):
        continue
    for node in ast.walk(tree):
        if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue
        for decorator in node.decorator_list:
            if not isinstance(decorator, ast.Call):
                continue
            func = decorator.func
            name = func.attr if isinstance(func, ast.Attribute) else func.id if isinstance(func, ast.Name) else None
            if name == "api_route":
                print(f"api_route decorator: {py_file}:{node.lineno}")
            elif name in VERB_DECORATORS and not decorator.args and any(kw.arg == "path" for kw in decorator.keywords):
                print(f"path= keyword decorator: {py_file}:{node.lineno}:{name}")
PY
```

Any hit is a real route declaration that the current helper does not parse.
Also applies to: 173-180, 187-197
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/specfact_cli/validators/sidecar/frameworks/fastapi.py` at line 20, The parser currently only recognizes verb-named decorators with a positional first arg via _HTTP_ROUTE_METHODS, causing valid FastAPI forms like `@router.api_route`(..., methods=[...]) and `@router.get`(path="/items") to be ignored and _extract_route_from_function to fall back to a synthetic "GET /"; update the decorator handling to (1) detect Call nodes where func.attr or func.id is "api_route" and extract the "methods" keyword (list of strings) and "path"/"name" keywords to derive method(s) and path, (2) accept verb-named decorator Calls that specify path via keyword (kw.arg == "path" or "url" or "route") when decorator.args is empty, and (3) change _extract_route_from_function to return None unless a real route decorator was matched (no silent fallback to GET /); reference functions/variables to edit: _HTTP_ROUTE_METHODS, the decorator parsing block that checks decorator.func.attr/func.id, and _extract_route_from_function to enforce None-on-no-match.scripts/verify_safe_project_writes.py (1)
48-62: ⚠️ Potential issue | 🟠 Major

Restore lexical scoping in the JSON-I/O visitor.

`func_aliases` and `module_locals` are global to the entire walk, so an `import json as js` or `from json import dump as dumper` inside one function/class leaks into sibling scopes. `visit_FunctionDef`/`visit_ClassDef` also never bind `node.name` in the enclosing scope and only walk `body`, so same-name defs do not shadow the alias, while decorators, bases, and default expressions are skipped entirely. That makes this gate both flag safe code as false positives and miss real offenders. Please scope alias visibility with the same push/pop discipline as `shadow_stack`, bind definition names in the parent frame, and traverse definition-time expressions before entering the new local scope. As per coding guidelines, `scripts/**/*.py`: "Deterministic tooling: subprocess safety, Hatch integration, and parity with documented quality gates."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/verify_safe_project_writes.py` around lines 48 - 62, func_aliases and module_locals must be lexically scoped like shadow_stack so aliases in one function/class do not leak; when visiting definitions (visit_FunctionDef and visit_ClassDef) bind node.name in the parent frame, traverse definition-time expressions (decorators, bases, keywords/defaults) before entering the new local scope, then push a new shadow frame and walk the body, and finally pop the frame; ensure _note_shadow uses the current shadow frame (shadow_stack[-1]) and that any updates to func_aliases/module_locals are either stored/restored on enter/exit or moved into per-frame structures so visibility follows push/pop discipline (apply same fix to the other visitor sites mentioned: the code around _union_shadowed, lines referenced and the visitors for FunctionDef/ClassDef).openspec/changes/profile-04-safe-project-artifact-writes/TDD_EVIDENCE.md (1)
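One way to get the push/pop alias scoping described above, as a minimal sketch (the class name, the tracked call set, and the frame layout are assumptions; the real gate tracks more state, including from-imports and shadowing):

```python
import ast


class ScopedAliasVisitor(ast.NodeVisitor):
    """Track `import json as X` aliases per lexical scope.

    Each function body gets its own frame, pushed on entry and popped on
    exit, so an alias in one function cannot leak into a sibling scope.
    """

    def __init__(self) -> None:
        self.frames: list[set[str]] = [set()]  # module-level frame
        self.offenders: list[tuple[str, int]] = []

    def aliases(self) -> set[str]:
        # Visible aliases = union of all enclosing frames.
        return set().union(*self.frames)

    def visit_Import(self, node: ast.Import) -> None:
        for alias in node.names:
            if alias.name == "json":
                self.frames[-1].add(alias.asname or "json")

    def visit_FunctionDef(self, node: ast.FunctionDef) -> None:
        # Decorators are evaluated in the enclosing scope, before the
        # new frame exists.
        for deco in node.decorator_list:
            self.visit(deco)
        self.frames.append(set())
        for stmt in node.body:
            self.visit(stmt)
        self.frames.pop()

    visit_AsyncFunctionDef = visit_FunctionDef

    def visit_Call(self, node: ast.Call) -> None:
        func = node.func
        if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
            if func.value.id in self.aliases() and func.attr in {"load", "loads", "dump", "dumps"}:
                self.offenders.append((func.attr, node.lineno))
        self.generic_visit(node)
```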
20-21: ⚠️ Potential issue | 🟡 Minor

Failing-first evidence is still missing from the TDD trail.

This still documents command ordering without preserving the actual red failure output (exit + failing assertion excerpt), so the evidence chain remains incomplete.

Suggested patch shape:

```diff
-- **Note**: New scenarios (`malformed_json_raises`, `preserves_unrelated_keys`, verify script) were added
-  before the safe-merge implementation; prior behavior treated invalid JSON as `{}` and could destroy user
-  settings (issue `#487`). Failing-first excerpt was not preserved in this log; the command above is the
-  recorded gate for ordering (spec → tests → failing evidence → code → passing evidence).
+- **Note**: New scenarios (`malformed_json_raises`, `preserves_unrelated_keys`, verify script) were added
+  before the safe-merge implementation; prior behavior treated invalid JSON as `{}` and could destroy user
+  settings (issue `#487`).
+  Failing-first evidence (pre-fix run):
+  - exit code: `<paste actual non-zero exit>`
+  - failing tests: `<paste failing test ids>`
+  - assertion excerpt: `<paste pytest failure snippet>`
```
spec -> tests -> failing evidence -> code -> passing evidencein that order".🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@openspec/changes/profile-04-safe-project-artifact-writes/TDD_EVIDENCE.md` around lines 20 - 21, TDD_EVIDENCE.md currently documents the command ordering but omits the actual failing-first evidence; edit TDD_EVIDENCE.md to insert the missing failing-evidence block immediately after the recorded command/gate mention (the line referencing "recorded gate for ordering (spec → tests → failing evidence → code → passing evidence)"), capturing and pasting the failing test output: the process exit status/exit code and the excerpted failing assertion/stack trace (include exact stdout/stderr snippet verbatim) so the trail shows "spec -> tests -> failing evidence -> code -> passing evidence" in order.docs/guides/publishing-modules.md (1)
105-109: ⚠️ Potential issue | 🟠 Major

Include `--payload-from-filesystem` in the strict verification example.

The repo and `specfact-cli-modules` now treat strict/main-line verification as `--require-signature --payload-from-filesystem --enforce-version-bump`. Omitting `--payload-from-filesystem` here documents a different path than CI and the wrapper actually enforce.

🛠️ Proposed fix:

```diff
 - Before merging to a protected branch such as `main`, run strict verification, e.g.
-  `scripts/verify-modules-signature.py --require-signature --enforce-version-bump` so signatures and
+  `scripts/verify-modules-signature.py --require-signature --payload-from-filesystem --enforce-version-bump` so signatures and
   version bumps are both enforced. On feature or `dev` branches, checksum-only verification (omit
   `--require-signature`) is typical — see [Module signing and key rotation](module-signing-and-key-rotation.md).
```

As per coding guidelines, "docs/**/*.md: User-facing accuracy: CLI examples match current behavior".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/guides/publishing-modules.md` around lines 105 - 109, Update the strict verification example so it matches current CI/wrapper behavior by adding the --payload-from-filesystem flag to the scripts/verify-modules-signature.py invocation; specifically change the example that currently shows `--require-signature --enforce-version-bump` to include `--payload-from-filesystem` (i.e., `--require-signature --payload-from-filesystem --enforce-version-bump`) and adjust any surrounding text to state that strict/main-line verification includes that flag to match the repo and specfact-cli-modules behavior.README.md (1)
85-90: ⚠️ Potential issue | 🟠 Major

Add the actual verify-modules shim to the vendoring list.

The modular config still points `verify-module-signatures` at `./scripts/pre-commit-verify-modules-signature.sh`. Downstream users who vendor only the three scripts listed here will still get a broken `entry:` for that hook.

🛠️ Proposed fix:

```diff
-  referenced `scripts/*.sh` entrypoints (at minimum `scripts/pre-commit-quality-checks.sh`,
-  `scripts/pre-commit-verify-modules.sh`, and `scripts/git-branch-module-signature-flag.sh`) so hook
+  referenced `scripts/*.sh` entrypoints (at minimum `scripts/pre-commit-quality-checks.sh`,
+  `scripts/pre-commit-verify-modules-signature.sh`, `scripts/pre-commit-verify-modules.sh`, and
+  `scripts/git-branch-module-signature-flag.sh`) so hook
```
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 85 - 90, The README's vendoring list omits the verify-modules shim causing the modular pre-commit entry for verify-module-signatures (./scripts/pre-commit-verify-modules-signature.sh) to break; update the paragraph that lists required vendored scripts to include scripts/pre-commit-verify-modules-signature.sh (alongside scripts/pre-commit-quality-checks.sh, scripts/pre-commit-verify-modules.sh, and scripts/git-branch-module-signature-flag.sh) so the verify-module-signatures hook's entry: path resolves when consumers copy the config.tests/unit/workflows/test_trustworthy_green_checks.py (1)
168-181: ⚠️ Potential issue | 🟠 Major

Still unresolved: make the verify-hook parity check exact.

Linked repo `nold-ai/specfact-cli-modules` still treats `./scripts/pre-commit-verify-modules-signature.sh` as the public pre-commit entrypoint in its `.pre-commit-config.yaml`, docs, and wrapper tests. Accepting either wrapper here means this suite stays green while downstream parity is already broken.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/workflows/test_trustworthy_green_checks.py` around lines 168 - 181, The parity check currently accepts either wrapper script name which masks downstream breakage; update _assert_pre_commit_verify_and_version_hooks to require the canonical legacy entrypoint: change the OR assertion on verify_entry (currently allowing "pre-commit-verify-modules.sh" or "pre-commit-verify-modules-signature.sh") to assert that verify_entry contains "pre-commit-verify-modules-signature.sh" only, and adjust any related existence/assertions so legacy_verify (and verify_entry) are the expected canonical checks (refer to function _assert_pre_commit_verify_and_version_hooks and variables verify_entry, verify_hook, verify_script, legacy_verify).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/pr-orchestrator.yml:
- Around line 116-128: The shared verifier flags are in the wrong order; change
the initial VERIFY_ARGS assignment to preserve the contract used by
specfact-cli-modules by setting VERIFY_ARGS to (--payload-from-filesystem
--enforce-version-bump) and keep the existing logic that prepends
--require-signature when running against main (i.e., when adding signature
requirement, insert --require-signature before the existing VERIFY_ARGS), then
call python scripts/verify-modules-signature.py with "${VERIFY_ARGS[@]}" (and
--version-check-base "$BASE_REF" when BASE_REF is set) so the flag ordering
matches the pinned docs/tests.
In `@openspec/changes/marketplace-06-ci-module-signing/tasks.md`:
- Around line 99-106: The PR cleanup steps omit the required post-merge archive;
after merging add a new checklist item under "9. PR and cleanup" to run the
required archive command by instructing maintainers to run `openspec archive
marketplace-06-ci-module-signing` from the repo root (no manual mv) and to
record that the archive was completed in `TDD_EVIDENCE.md`; update items 9.3/9.4
to reference the archive step so the change cannot be marked complete without
running `/opsx:archive` via `openspec archive <change-id>`.
- Around line 78-88: Section 7's checklist omits required local gates; update
the checklist in "Section 7" (the 7.1–7.7 items) to add and order the missing
local validation steps: insert "hatch run format" before linting/format checks,
add "hatch run type-check" (basedpyright) before contract-test, and replace or
augment the lightweight test step with "hatch test --cover -v" as the full
test-suite gate, then renumber/adjust the existing items (7.1–7.7) so the
sequence follows: format, type-check, contract-test, full tests,
lint/yaml/specfact, record TDD_EVIDENCE, and openspec validate.
In `@scripts/pre_commit_code_review.py`:
- Around line 281-290: The current gate uses result.returncode to decide failure
which can misreport a blocking JSON review; after loading the report via
_load_review_report(report_path) and counting severities with
_count_findings_by_severity(report.findings), ensure that if counts["error"] > 0
you return a non-zero exit (e.g. return 1) directly instead of returning
result.returncode so any parsed blocking findings always cause the pre-commit
check to fail; update the branch that currently does `return result.returncode`
to return a fixed non-zero value when errors are present.
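The gate logic above reduces to a small decision helper; `review_exit_code` is a hypothetical name for illustration, standing in for the tail of the pre-commit check:

```python
def review_exit_code(subprocess_returncode: int, counts: dict[str, int]) -> int:
    """Blocking findings fail the hook even if the subprocess exited 0."""
    if counts.get("error", 0) > 0:
        # Parsed blocking findings always force a non-zero exit.
        return 1
    return subprocess_returncode
```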
In `@scripts/pre-commit-quality-checks.sh`:
- Around line 52-60: The helper has_staged_yaml() only checks staged file names
for .yaml or .yml but the related hook cli-block1-yaml is triggered by other
filenames (pyproject.toml and .markdownlint.json), causing the hook to run then
skip linting; update has_staged_yaml() (which reads from staged_files) to also
detect those filenames by extending the filename test (e.g., include exact
matches for "pyproject.toml" and ".markdownlint.json" or broaden the regex) so
the helper returns success when any of those files are staged.
In `@src/specfact_cli/adapters/speckit.py`:
- Around line 555-561: The update path sets existing_feature.outcomes and
.acceptance directly from payload (existing_feature.outcomes =
list(payload.outcomes), existing_feature.acceptance = list(payload.acceptance))
but the create path still synthesizes fallback placeholders, causing
non-idempotent imports; change the create/insert logic that currently generates
placeholder outcomes/acceptance to instead assign outcomes =
list(payload.outcomes) and acceptance = list(payload.acceptance) (and mirror any
other fields treated differently) so both the create and update branches follow
the same rule and preserve empty lists across repeated imports.
In `@src/specfact_cli/analyzers/code_analyzer.py`:
- Around line 741-742: The tokenization currently only normalizes dots and
hyphens causing underscore_delimited package names like pytest_asyncio or
django_filters to remain single tokens; update the tokenization that builds
segments (the code that computes the segments variable in code_analyzer.py) to
also treat underscores as delimiters (e.g., include "_" in the replace/split
logic or use a regex splitting on [._-]), then keep returning themes via the
existing theme_keywords membership check; also add a regression test that
asserts product.themes correctly matches expected themes for
underscore-delimited imports (e.g., pytest_asyncio, django_filters,
sqlalchemy_utils, fastapi_users) to prevent future regressions.
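The suggested tokenization fits in a few lines; the function name is illustrative and the theme-keyword wiring is omitted:

```python
import re


def import_tokens(module_name: str) -> list[str]:
    """Split an import name on dots, hyphens, and underscores."""
    return [segment for segment in re.split(r"[._-]", module_name) if segment]
```
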
In `@src/specfact_cli/analyzers/graph_analyzer.py`:
- Around line 458-460: The current substring checks in the loaded_contents loop
(checking f"def {function_name}" / f"class {function_name}") can match inside
comments/strings or partial names; replace that check with an exact-declaration
match by either (preferred) parsing content with Python's ast module and
checking for ast.FunctionDef, ast.AsyncFunctionDef, or ast.ClassDef nodes
whose .name equals function_name, or (alternatively) use declaration-anchored
regexes like rf'^\s*(?:async\s+def|def)\s+{re.escape(function_name)}\b' and
rf'^\s*class\s+{re.escape(function_name)}\b' applied with re.MULTILINE to content;
update the loop that uses module_name, content to use this exact-match logic so
only true declarations resolve the callee.
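The preferred ast-based exact-declaration check might look like this sketch (the helper name is illustrative):

```python
import ast


def declares(content: str, name: str) -> bool:
    """True only when `name` is a real def/async def/class declaration,
    never a substring match inside comments, strings, or longer names."""
    try:
        tree = ast.parse(content)
    except SyntaxError:
        return False
    return any(
        isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        and node.name == name
        for node in ast.walk(tree)
    )
```
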
In `@src/specfact_cli/modules/init/module-package.yaml`:
- Around line 19-20: The module manifest is missing integrity.signature (only
has integrity.checksum), so re-sign the updated manifest using the repository
signing flow to produce and add the required integrity.signature field alongside
the existing integrity.checksum; run the repo signing tool/process against this
manifest (the one containing the integrity: checksum
sha256:ce74cdb62f00690fc68e80164471d3f339cde37e0e4eb372c2cc48739fd0cd31 entry),
commit the generated signature entry into the manifest, and ensure CI pre-commit
signature verification passes before merging.
In `@src/specfact_cli/validators/cli_first_validator.py`:
- Around line 191-197: The validator currently returns early when expected
directories are wrong type or unreadable; update detect_direct_manipulation()
(and the same logic covering the bundle/features checks around lines 199-214) to
treat these cases as suspicious by adding the offending Path(s) to
suspicious_files instead of returning or skipping: when projects_dir.is_dir() is
False append projects_dir to suspicious_files; when iterating bundles, if
bundle_path.is_dir() or bundle_features_path.is_dir() is False append the
specific bundle path(s) (e.g., bundle_path or bundle_features_path) to
suspicious_files; on OSError during projects_dir.iterdir() or bundle iteration
catch the exception and append the parent/path that failed to suspicious_files
before continuing so malformed or unreadable expected directories are surfaced.
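A condensed sketch of the suggested behavior, using illustrative names rather than the real detect_direct_manipulation() internals:

```python
from pathlib import Path


def collect_suspicious(projects_dir: Path) -> list[Path]:
    """Record wrong-type or unreadable expected paths instead of returning early."""
    suspicious: list[Path] = []
    if not projects_dir.is_dir():
        suspicious.append(projects_dir)  # missing/wrong-type root is itself suspicious
        return suspicious
    try:
        entries = list(projects_dir.iterdir())
    except OSError:
        suspicious.append(projects_dir)  # unreadable root surfaces as suspicious
        return suspicious
    for entry in entries:
        if not entry.is_dir():
            suspicious.append(entry)  # expected bundle dirs that are not dirs
    return suspicious
```
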
In `@tests/integration/scripts/test_verify_safe_project_writes_integration.py`:
- Around line 12-23: Add two small pytest cases alongside
test_verify_safe_project_writes_passes_on_repo that invoke the CLI script
verify_safe_project_writes.py via subprocess.run (same pattern as existing test)
but use fixtures/temporary files to force the two failure branches in main():
one test should feed a file that triggers a syntax failure and assert
completed.returncode == 1, and the other should simulate a read/decode failure
and assert completed.returncode == 2; reference the script's main() exit-code
contract (1 for syntax, 2 for read/decode) and reuse the existing
subprocess.run/capture_output/text pattern so these failure paths are covered.
In `@tests/unit/workflows/test_trustworthy_green_checks.py`:
- Around line 198-199: The assertions for the "cli-block1-format" hook are
reversed: update the test so it asserts that
by_id["cli-block1-format"].get("always_run") is True (not False) and that the
hook has no files selector (i.e., assert that
by_id["cli-block1-format"].get("files") is falsy or None); locate the two
assertions referencing "cli-block1-format" and flip them accordingly.
---
Duplicate comments:
In `@docs/guides/publishing-modules.md`:
- Around line 105-109: Update the strict verification example so it matches
current CI/wrapper behavior by adding the --payload-from-filesystem flag to the
scripts/verify-modules-signature.py invocation; specifically change the example
that currently shows `--require-signature --enforce-version-bump` to include
`--payload-from-filesystem` (i.e., `--require-signature
--payload-from-filesystem --enforce-version-bump`) and adjust any surrounding
text to state that strict/main-line verification includes that flag to match the
repo and specfact-cli-modules behavior.
In `@openspec/changes/profile-04-safe-project-artifact-writes/TDD_EVIDENCE.md`:
- Around line 20-21: TDD_EVIDENCE.md currently documents the command ordering
but omits the actual failing-first evidence; edit TDD_EVIDENCE.md to insert the
missing failing-evidence block immediately after the recorded command/gate
mention (the line referencing "recorded gate for ordering (spec → tests →
failing evidence → code → passing evidence)"), capturing and pasting the failing
test output: the process exit status/exit code and the excerpted failing
assertion/stack trace (include exact stdout/stderr snippet verbatim) so the
trail shows "spec -> tests -> failing evidence -> code -> passing evidence" in
order.
In `@README.md`:
- Around line 85-90: The README's vendoring list omits the verify-modules shim
causing the modular pre-commit entry for verify-module-signatures
(./scripts/pre-commit-verify-modules-signature.sh) to break; update the
paragraph that lists required vendored scripts to include
scripts/pre-commit-verify-modules-signature.sh (alongside
scripts/pre-commit-quality-checks.sh, scripts/pre-commit-verify-modules.sh, and
scripts/git-branch-module-signature-flag.sh) so the verify-module-signatures
hook's entry: path resolves when consumers copy the config.
In `@scripts/verify_safe_project_writes.py`:
- Around line 48-62: func_aliases and module_locals must be lexically scoped
like shadow_stack so aliases in one function/class do not leak; when visiting
definitions (visit_FunctionDef and visit_ClassDef) bind node.name in the parent
frame, traverse definition-time expressions (decorators, bases,
keywords/defaults) before entering the new local scope, then push a new shadow
frame and walk the body, and finally pop the frame; ensure _note_shadow uses the
current shadow frame (shadow_stack[-1]) and that any updates to
func_aliases/module_locals are either stored/restored on enter/exit or moved
into per-frame structures so visibility follows push/pop discipline (apply same
fix to the other visitor sites mentioned: the code around _union_shadowed, lines
referenced and the visitors for FunctionDef/ClassDef).
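The push/pop discipline described above can be sketched with a minimal visitor; names are illustrative, not the real verify_safe_project_writes internals:

```python
import ast


class ScopedNameCollector(ast.NodeVisitor):
    """Per-frame scoping: names bound inside a function body stay in that
    frame and are discarded on exit, so they never leak to the module."""

    def __init__(self) -> None:
        self.frames: list[set[str]] = [set()]  # module-level frame

    def visit_FunctionDef(self, node: ast.FunctionDef) -> None:
        self.frames[-1].add(node.name)  # bind the def name in the parent frame
        for dec in node.decorator_list:  # definition-time expressions first
            self.visit(dec)
        self.frames.append(set())  # enter the new local frame
        for stmt in node.body:
            self.visit(stmt)
        self.frames.pop()  # leave: locals do not leak

    def visit_Name(self, node: ast.Name) -> None:
        if isinstance(node.ctx, ast.Store):
            self.frames[-1].add(node.id)


collector = ScopedNameCollector()
collector.visit(ast.parse("x = 1\ndef f():\n    y = 2\n"))
```
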
In `@src/specfact_cli/generators/persona_exporter.py`:
- Around line 206-210: The current non-None checks in the persona exporter
accept empty strings/lists and produce empty sections; update the two checks
around feature_dict[spec.field_name] to only accept values that are not None and
not empty strings or empty lists while still allowing falsy but valid values
like 0 and False—i.e., replace "if val is not None" and "if spec.value is not
None" with a guard that ensures val/spec.value is not None and not
(isinstance(..., str) and len==0) and not (isinstance(..., list) and len==0);
keep the assignment to feature_dict[spec.field_name] and the early return
behavior otherwise.
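The suggested guard can be expressed as a small predicate (the name is illustrative):

```python
def has_content(value: object) -> bool:
    """Reject None, "" and [], but still accept falsy-yet-valid values like 0 and False."""
    if value is None:
        return False
    if isinstance(value, (str, list)) and len(value) == 0:
        return False
    return True
```
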
In `@src/specfact_cli/sync/bridge_watch.py`:
- Around line 39-43: The strict path-matching loop currently only skips the
"{feature_id}" placeholder, causing patterns containing "{contract_name}" to be
rejected; update the loop in bridge_watch.py (the code iterating over
pattern_parts and comparing to file_parts) to treat both "{feature_id}" and
"{contract_name}" as placeholders by continuing when part is either of those
strings so valid paths with contract_name are accepted and feature-id extraction
logic can run correctly.
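A minimal sketch of the two-placeholder loop; the pattern syntax is assumed from the comment above:

```python
_PLACEHOLDERS = {"{feature_id}", "{contract_name}"}


def matches_pattern(pattern: str, path: str) -> bool:
    """Strict segment-by-segment match, skipping both placeholder names."""
    pattern_parts = pattern.split("/")
    file_parts = path.split("/")
    if len(pattern_parts) != len(file_parts):
        return False
    for part, actual in zip(pattern_parts, file_parts):
        if part in _PLACEHOLDERS:
            continue  # placeholder segments match any value
        if part != actual:
            return False
    return True
```
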
In `@src/specfact_cli/validators/sidecar/frameworks/fastapi.py`:
- Line 20: The parser currently only recognizes verb-named decorators with a
positional first arg via _HTTP_ROUTE_METHODS, causing valid FastAPI forms like
`@router.api_route`(..., methods=[...]) and `@router.get`(path="/items") to be
ignored and _extract_route_from_function to fall back to a synthetic "GET /";
update the decorator handling to (1) detect Call nodes where func.attr or
func.id is "api_route" and extract the "methods" keyword (list of strings) and
"path"/"name" keywords to derive method(s) and path, (2) accept verb-named
decorator Calls that specify path via keyword (kw.arg == "path" or "url" or
"route") when decorator.args is empty, and (3) change
_extract_route_from_function to return None unless a real route decorator was
matched (no silent fallback to GET /); reference functions/variables to edit:
_HTTP_ROUTE_METHODS, the decorator parsing block that checks
decorator.func.attr/func.id, and _extract_route_from_function to enforce
None-on-no-match.
In `@tests/helpers/doc_frontmatter.py`:
- Around line 29-34: The exception handler currently unconditionally removes
sys.modules[spec.name], which loses any prior module binding; change it to
snapshot prior = sys.modules.get(spec.name) before assigning
sys.modules[spec.name] = module, and in the except block restore the prior value
(if prior is None remove the key, otherwise set sys.modules[spec.name] = prior)
so that the loader failure in spec.loader.exec_module(module) does not mutate
global import state; update the code around the spec.name/module assignment and
the except BaseException block to perform this restore logic.
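The snapshot-and-restore logic can be sketched as follows, including a tiny failing loader to demonstrate that a prior binding survives a loader failure (names are illustrative):

```python
import importlib.machinery
import importlib.util
import sys
import types


def load_module_safely(spec: importlib.machinery.ModuleSpec) -> types.ModuleType:
    """Snapshot any prior binding so a loader failure never mutates sys.modules."""
    module = importlib.util.module_from_spec(spec)
    prior = sys.modules.get(spec.name)
    sys.modules[spec.name] = module
    try:
        assert spec.loader is not None
        spec.loader.exec_module(module)
    except BaseException:
        if prior is None:
            sys.modules.pop(spec.name, None)  # no prior binding: remove the key
        else:
            sys.modules[spec.name] = prior  # restore the previous module
        raise
    return module


class _FailingLoader:
    def create_module(self, spec):
        return None  # use default module creation

    def exec_module(self, module):
        raise RuntimeError("boom")


prior_module = types.ModuleType("demo_prior_mod")
sys.modules["demo_prior_mod"] = prior_module
failing_spec = importlib.util.spec_from_loader("demo_prior_mod", _FailingLoader())
restored_error = None
try:
    load_module_safely(failing_spec)
except RuntimeError as exc:
    restored_error = exc
```
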
In `@tests/unit/workflows/test_trustworthy_green_checks.py`:
- Around line 168-181: The parity check currently accepts either wrapper script
name which masks downstream breakage; update
_assert_pre_commit_verify_and_version_hooks to require the canonical legacy
entrypoint: change the OR assertion on verify_entry (currently allowing
"pre-commit-verify-modules.sh" or "pre-commit-verify-modules-signature.sh") to
assert that verify_entry contains "pre-commit-verify-modules-signature.sh" only,
and adjust any related existence/assertions so legacy_verify (and verify_entry)
are the expected canonical checks (refer to function
_assert_pre_commit_verify_and_version_hooks and variables verify_entry,
verify_hook, verify_script, legacy_verify).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 12cdcc71-beb7-4fc8-805f-2f503ad5b6a0
📒 Files selected for processing (47)
- `.github/workflows/pr-orchestrator.yml`
- `.pre-commit-config.yaml`
- `CHANGELOG.md`
- `README.md`
- `docs/guides/publishing-modules.md`
- `docs/modules/code-review.md`
- `openspec/CHANGE_ORDER.md`
- `openspec/changes/marketplace-06-ci-module-signing/specs/ci-integration/spec.md`
- `openspec/changes/marketplace-06-ci-module-signing/tasks.md`
- `openspec/changes/profile-04-safe-project-artifact-writes/TDD_EVIDENCE.md`
- `pyproject.toml`
- `scripts/check_doc_frontmatter.py`
- `scripts/check_local_version_ahead_of_pypi.py`
- `scripts/pre-commit-quality-checks.sh`
- `scripts/pre-commit-smart-checks.sh`
- `scripts/pre-commit-verify-modules.sh`
- `scripts/pre_commit_code_review.py`
- `scripts/verify_safe_project_writes.py`
- `src/specfact_cli/adapters/ado.py`
- `src/specfact_cli/adapters/github.py`
- `src/specfact_cli/adapters/speckit.py`
- `src/specfact_cli/analyzers/code_analyzer.py`
- `src/specfact_cli/analyzers/graph_analyzer.py`
- `src/specfact_cli/common/logger_setup.py`
- `src/specfact_cli/generators/persona_exporter.py`
- `src/specfact_cli/generators/test_to_openapi.py`
- `src/specfact_cli/modules/init/module-package.yaml`
- `src/specfact_cli/modules/init/src/first_run_selection.py`
- `src/specfact_cli/registry/module_installer.py`
- `src/specfact_cli/sync/bridge_watch.py`
- `src/specfact_cli/utils/source_scanner.py`
- `src/specfact_cli/validators/cli_first_validator.py`
- `src/specfact_cli/validators/sidecar/crosshair_runner.py`
- `src/specfact_cli/validators/sidecar/crosshair_summary.py`
- `src/specfact_cli/validators/sidecar/frameworks/fastapi.py`
- `tests/helpers/doc_frontmatter.py`
- `tests/integration/scripts/test_check_local_version_ahead_of_pypi_integration.py`
- `tests/integration/scripts/test_verify_safe_project_writes_integration.py`
- `tests/unit/adapters/test_github.py`
- `tests/unit/scripts/test_check_local_version_ahead_of_pypi.py`
- `tests/unit/scripts/test_pre_commit_code_review.py`
- `tests/unit/scripts/test_pre_commit_smart_checks_docs.py`
- `tests/unit/scripts/test_pre_commit_verify_modules.py`
- `tests/unit/scripts/test_verify_safe_project_writes.py`
- `tests/unit/tools/test_smart_test_coverage.py`
- `tests/unit/workflows/test_trustworthy_green_checks.py`
- `tools/smart_test_coverage.py`
💤 Files with no reviewable changes (1)
- src/specfact_cli/utils/source_scanner.py
…503) * feat(ci): sign modules on PR approval and manual dispatch - Add sign-modules-on-approval workflow (approved reviews, dev/main base) - Extend sign-modules.yml with workflow_dispatch inputs and sign-and-push job - Document flows in module-security.md; update CHANGELOG and tests Made-with: Cursor * Fix sign review and process --------- Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
Actionable comments posted: 7
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/pr-orchestrator.yml:
- Around line 222-224: The PR-required job "Verify local version is ahead of
PyPI" currently runs the script check_local_version_ahead_of_pypi.py which
treats transient PyPI/network failures as fatal; change the workflow so this
check does not block normal PR validation: either (A) move or gate the step to
only run for release/main flows (e.g., only on main branch or release jobs) so
PRs aren’t dependent on PyPI availability, or (B) wrap the script invocation so
you treat availability/network failures as advisory by capturing its exit code
and only fail the job when the script explicitly indicates the policy violation
(local version <= PyPI); reference the job name "Verify local version is ahead
of PyPI" and the script check_local_version_ahead_of_pypi.py when making the
change.
In @.github/workflows/sign-modules-on-approval.yml:
- Around line 33-38: The checkout step currently uses ref: ${{
github.event.pull_request.head.sha }} which pins a detached review-time commit
and can cause non-fast-forward push failures; change the checkout to use the
branch ref (${{ github.event.pull_request.head.ref }}) or, alternatively, after
the checkout run a git fetch + git reset --hard origin/${{
github.event.pull_request.head.ref }} (or equivalent) before the signing/push
step so you always operate on the branch tip; update the step that uses
actions/checkout@v4 (and any downstream push logic that uses HEAD:${HEAD_REF})
to reflect this fix.
In @.github/workflows/sign-modules.yml:
- Around line 136-212: The sign-and-push job duplicates the
checkout→install→sign→commit sequence present in sign-modules-on-approval.yml;
extract the shared steps (Checkout branch, Fetch comparison base, Set up Python,
Install signer dependencies, Sign module manifests, Commit and push signed
manifests) into a single reusable unit (either a
scripts/sign-and-push-modules.sh script or a composite action) and have both
workflows call that unit from their trigger-specific jobs (retain job name
sign-and-push or equivalent), passing through the existing inputs and secrets
(SPECFACT_MODULE_PRIVATE_SIGN_KEY, SPECFACT_MODULE_PRIVATE_SIGN_KEY_PASSPHRASE,
github.event.inputs.base_branch, version_bump, resign_all_manifests) and
preserving invocation of scripts/sign-modules.py and the same git commit/push
logic so behavior and flags remain identical.
In `@CHANGELOG.md`:
- Around line 46-57: The markdown block in CHANGELOG.md contains several lines
exceeding the 120-character lint limit (notably lines referencing
`.pre-commit-config.yaml`, `check_local_version_ahead_of_pypi.py`,
`scripts/pre-commit-quality-checks.sh all`, and
`marketplace-06-ci-module-signing`); reflow/wrap these long sentences so no line
exceeds 120 characters (preserve words and punctuation, break at sensible phrase
boundaries), keeping each bullet readable and keeping the same content and
punctuation.
In `@docs/reference/module-security.md`:
- Around line 47-76: The Verification command section is too dense and should be
reorganized into clearer subsections per workflow; split the large bullet under
"Verification command (`verify-modules-signature.py`)" into separate H3
subsections (e.g., "Strict verification", "Checksum-only verification", "GitHub
Actions", "Approval-time signing", "Manual signing", "Pre-commit") and move the
related details into each subsection, referencing the specific scripts/workflows
like verify-modules-signature.py, pr-orchestrator.yml, sign-modules.yml,
sign-modules-on-approval.yml, scripts/sign-modules.py,
scripts/pre-commit-verify-modules.sh,
scripts/git-branch-module-signature-flag.sh and artifacts like
module-package.yaml so readers can quickly find the contract and flags for each
workflow.
- Around line 73-76: Update specfact-cli-modules to use the new branch-aware
wrapper instead of the legacy shim: replace references to
scripts/pre-commit-verify-modules-signature.sh in .pre-commit-config.yaml and
any CI workflows with scripts/pre-commit-verify-modules.sh, preserve the
existing flags/behavior (--enforce-version-bump, --payload-from-filesystem and
ensure the wrapper still adds --require-signature on main), remove the legacy
shim references, and run the pre-commit and CI checks to verify nothing breaks.
In `@tests/unit/workflows/test_sign_modules_on_approval.py`:
- Around line 56-67: The test
test_sign_modules_on_approval_runs_signer_with_changed_only_mode hard-codes
exact shell text and should instead parse WORKFLOW and assert semantics; load
WORKFLOW (the WORKFLOW variable) as YAML, locate the step that invokes the
signer (the step whose run contains "scripts/sign-modules.py" or whose name
indicates signing), and assert that the step's run/args include the flags
--changed-only, --bump-version patch, and --payload-from-filesystem (string
containment, not exact formatting), that the step references the pull-request
base ref via a variable like BASE_REF (or uses origin/${{
github.event.pull_request.base.ref }} semantics) and uses HEAD_REF for the push
destination, that environment/secrets include SPECFACT_MODULE_PRIVATE_SIGN_KEY
and SPECFACT_MODULE_PRIVATE_SIGN_KEY_PASSPHRASE, and that the commit message
contains the phrase "ci sign changed modules" (substring match) rather than
asserting exact quoted shell lines.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: eb0142e3-2701-485d-823b-52fbe14916f8
📒 Files selected for processing (7)
- `.github/workflows/pr-orchestrator.yml`
- `.github/workflows/sign-modules-on-approval.yml`
- `.github/workflows/sign-modules.yml`
- `CHANGELOG.md`
- `docs/reference/module-security.md`
- `tests/unit/specfact_cli/registry/test_signing_artifacts.py`
- `tests/unit/workflows/test_sign_modules_on_approval.py`
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Tests (Python 3.12)
- GitHub Check: Compatibility (Python 3.11)
- GitHub Check: Compatibility (Python 3.11)
- GitHub Check: Tests (Python 3.12)
- GitHub Check: Analyze (python)
🧰 Additional context used
📓 Path-based instructions (12)
.github/workflows/*.{yml,yaml}
📄 CodeRabbit inference engine (.cursor/rules/testing-and-build-guide.mdc)
Validate GitHub workflow files using `hatch run lint-workflows` before committing
`.github/workflows/*.{yml,yaml}`: Use actionlint for semantic validation of GitHub Actions workflows
Format GitHub Actions workflows using `hatch run workflows-fmt` and lint them with `hatch run workflows-lint` after editing
Files:
`.github/workflows/pr-orchestrator.yml`, `.github/workflows/sign-modules.yml`, `.github/workflows/sign-modules-on-approval.yml`
.github/workflows/!(tests).{yml,yaml}
📄 CodeRabbit inference engine (.cursor/rules/testing-and-build-guide.mdc)
Do not re-run the full test suite in other CI workflows; tests are enforced only in the dedicated Tests workflow (.github/workflows/tests.yml)
Files:
`.github/workflows/pr-orchestrator.yml`, `.github/workflows/sign-modules.yml`, `.github/workflows/sign-modules-on-approval.yml`
.github/workflows/**
⚙️ CodeRabbit configuration file
.github/workflows/**: CI safety: secrets usage, workflow dependencies, alignment with hatch test / contract-test
gates, and action versions.
Files:
`.github/workflows/pr-orchestrator.yml`, `.github/workflows/sign-modules.yml`, `.github/workflows/sign-modules-on-approval.yml`
**/test_*.py
📄 CodeRabbit inference engine (.cursor/rules/python-github-rules.mdc)
**/test_*.py: Write tests first in test-driven development (TDD) using the Red-Green-Refactor cycle
Ensure each test is independent and repeatable with no shared state between tests
Organize Python imports in tests using unittest.mock for Mock and patch
Use setup_method() for test initialization and Arrange-Act-Assert pattern in test files
Use `@pytest.mark.asyncio` decorator for async test functions in Python
Organize test files in structure: tests/unit/, tests/integration/, tests/e2e/ by module
Files:
`tests/unit/specfact_cli/registry/test_signing_artifacts.py`, `tests/unit/workflows/test_sign_modules_on_approval.py`
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/python-github-rules.mdc)
**/*.py: Maintain minimum 80% test coverage, with 100% coverage for critical paths in Python code
Use clear naming and self-documenting code, preferring clear names over comments
Ensure each function/class has a single clear purpose (Single Responsibility Principle)
Extract common patterns to avoid code duplication (DRY principle)
Apply SOLID object-oriented design principles in Python code
Use type hints everywhere in Python code and enable basedpyright strict mode
Use Pydantic models for data validation and serialization in Python
Use async/await for I/O operations in Python code
Use context managers for resource management in Python
Use dataclasses for simple data containers in Python
Enforce maximum line length of 120 characters in Python code
Use 4 spaces for indentation in Python code (no tabs)
Use 2 blank lines between classes and 1 blank line between methods in Python
Organize imports in order: Standard library → Third party → Local in Python files
Use snake_case for variables and functions in Python
Use PascalCase for class names in Python
Use UPPER_SNAKE_CASE for constants in Python
Use leading underscore (_) for private methods in Python classes
Use snake_case for Python file names
Enable basedpyright strict mode with strict type checking configuration in Python
Use Google-style docstrings for functions and classes in Python
Include comprehensive exception handling with specific exception types in Python code
Use logging with structured context (extra parameters) instead of print statements
Use retry logic with tenacity decorators (@retry) for operations that might fail
Use Pydantic BaseSettings for environment-based configuration in Python
Validate user input using Pydantic validators in Python models
Use `@lru_cache` and Redis-based caching for expensive calculations in Python
Run code formatting with Black (120 character line length) and isort in Python
Run type checking with basedpyright on all Python files
Run linting with ruff and pylint on all Pyth...
Files:
`tests/unit/specfact_cli/registry/test_signing_artifacts.py`, `tests/unit/workflows/test_sign_modules_on_approval.py`
tests/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/spec-fact-cli-rules.mdc)
Tests must be meaningful and test actual functionality, cover both success and failure cases, be independent and repeatable, and have clear, descriptive names. NO EXCEPTIONS - no placeholder or empty tests.
tests/**/*.py: Trim low-value unit tests when a contract covers the same assertion (type/shape/raises on negative checks)
Delete tests that only assert input validation, datatype/shape enforcement, or raises on negative conditions now guarded by contracts and runtime typing
Convert repeated edge-case permutations into one Hypothesis property with contracts acting as oracles
Secret redaction via `LoggerSetup.redact_secrets` must be covered by unit tests
Files:
`tests/unit/specfact_cli/registry/test_signing_artifacts.py`, `tests/unit/workflows/test_sign_modules_on_approval.py`
⚙️ CodeRabbit configuration file
tests/**/*.py: Contract-first testing: meaningful scenarios, not redundant assertions already covered by
contracts. Flag flakiness, environment coupling, and missing coverage for changed behavior.
Files:
`tests/unit/specfact_cli/registry/test_signing_artifacts.py`, `tests/unit/workflows/test_sign_modules_on_approval.py`
@(src|tests)/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/spec-fact-cli-rules.mdc)
Linting must pass with no errors using: pylint src tests
Files:
`tests/unit/specfact_cli/registry/test_signing_artifacts.py`, `tests/unit/workflows/test_sign_modules_on_approval.py`
**/*.{py,pyi}
📄 CodeRabbit inference engine (.cursorrules)
`**/*.{py,pyi}`: After any code changes, follow these steps in order: (1) Apply linting and formatting to ensure code quality: `hatch run format`, (2) Type checking: `hatch run type-check` (basedpyright), (3) Contract-first approach: Run `hatch run contract-test` for contract validation, (4) Run full test suite: `hatch test --cover -v`, (5) Verify all tests pass and contracts are satisfied, (6) Fix any issues and repeat steps until all tests pass
All public APIs must have `@icontract` decorators and `@beartype` type checking
Use Pydantic models for all data structures with data validation
Only write high-value comments if at all. Avoid talking to the user through comments
Files:
`tests/unit/specfact_cli/registry/test_signing_artifacts.py`, `tests/unit/workflows/test_sign_modules_on_approval.py`
**/*.{md,mdc}
📄 CodeRabbit inference engine (.cursor/rules/markdown-rules.mdc)
**/*.{md,mdc}: Do not use more than one consecutive blank line anywhere in the document (MD012: No Multiple Consecutive Blank Lines)
Fenced code blocks should be surrounded by blank lines (MD031: Fenced Code Blocks)
Lists should be surrounded by blank lines (MD032: Lists)
Files must end with a single empty line (MD047: Files Must End With Single Newline)
Lines should not have trailing spaces (MD009: No Trailing Spaces)
Use asterisks (**) for strong emphasis, not underscores (__) (MD050: Strong Style)
Fenced code blocks must have a language specified (MD040: Fenced Code Language)
Headers should increment by one level at a time (MD001: Header Increment)
Headers should be surrounded by blank lines (MD022: Headers Should Be Surrounded By Blank Lines)
Only one top-level header (H1) is allowed per document (MD025: Single H1 Header)
Use consistent list markers, preferring dashes (-) for unordered lists (MD004: List Style)
Nested unordered list items should be indented consistently, typically by 2 spaces (MD007: Unordered List Indentation)
Use exactly one space after the list marker (e.g., -, *, +, 1.) (MD030: Spaces After List Markers)
Use incrementing numbers for ordered lists (MD029: Ordered List Item Prefix)
Enclose bare URLs in angle brackets or format them as links (MD034: Bare URLs)
Don't use spaces immediately inside code spans (MD038: Spaces Inside Code Spans)
Use consistent indentation (usually 2 or 4 spaces) throughout markdown files
Keep line length under 120 characters in markdown files
Use reference-style links for better readability in markdown files
Use a trailing slash for directory paths in markdown files
Ensure proper escaping of special characters in markdown files
Files:
`docs/reference/module-security.md`, `CHANGELOG.md`
docs/**/*.md
📄 CodeRabbit inference engine (.cursor/rules/spec-fact-cli-rules.mdc)
Update architecture documentation in docs/ for architecture changes, state machine documentation for FSM modifications, interface documentation for API changes, and configuration guides for configuration changes. DO NOT create internal docs in specfact-cli repo folder that should not be visible to end users; use the respective internal repository instead.
Files:
docs/reference/module-security.md
⚙️ CodeRabbit configuration file
docs/**/*.md: User-facing accuracy: CLI examples match current behavior; preserve Jekyll front matter;
call out when README/docs index need sync.
Files:
docs/reference/module-security.md
**/*.md
📄 CodeRabbit inference engine (.cursorrules)
Avoid markdown linting errors (refer to markdown-rules)
Files:
`docs/reference/module-security.md`, `CHANGELOG.md`
CHANGELOG.md
📄 CodeRabbit inference engine (.cursor/rules/python-github-rules.mdc)
Include new version entries at the top of CHANGELOG.md when updating versions
Update CHANGELOG.md with all code changes as part of version control requirements.
Update CHANGELOG.md to document all significant changes under Added, Fixed, Changed, or Removed sections when making a version change
Files:
CHANGELOG.md
🧠 Learnings (23)
📓 Common learnings
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: Applies to openspec/changes/**/*.md : For `/opsx:archive` (Archive change): Include module signing and cleanup in final tasks. Agents MUST run `openspec archive <change-id>` from repo root (no manual `mv` under `openspec/changes/archive/`)
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-10T22:42:21.849Z
Learning: Enforce module signatures and version bumps when signed module assets or manifests are affected
📚 Learning: 2026-04-10T22:42:21.849Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-10T22:42:21.849Z
Learning: Enforce module signatures and version bumps when signed module assets or manifests are affected
Applied to files:
`.github/workflows/pr-orchestrator.yml`, `tests/unit/specfact_cli/registry/test_signing_artifacts.py`, `.github/workflows/sign-modules.yml`, `tests/unit/workflows/test_sign_modules_on_approval.py`, `.github/workflows/sign-modules-on-approval.yml`, `docs/reference/module-security.md`, `CHANGELOG.md`
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: Applies to openspec/changes/**/*.md : For `/opsx:archive` (Archive change): Include module signing and cleanup in final tasks. Agents MUST run `openspec archive <change-id>` from repo root (no manual `mv` under `openspec/changes/archive/`)
Applied to files:
`.github/workflows/pr-orchestrator.yml`, `tests/unit/specfact_cli/registry/test_signing_artifacts.py`, `.github/workflows/sign-modules.yml`, `tests/unit/workflows/test_sign_modules_on_approval.py`, `.github/workflows/sign-modules-on-approval.yml`, `docs/reference/module-security.md`, `CHANGELOG.md`
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: Before executing ANY workflow command (`/opsx:ff`, `/opsx:apply`, `/opsx:continue`, etc.), perform the Pre-Execution Checklist: (1) Git Worktree - create if task creates branches/modifies code, (2) TDD Evidence - create `TDD_EVIDENCE.md` if behavior changes, (3) Documentation - include documentation research if changes affect user-facing behavior, (4) Module Signing - include signature verification if changes modify bundled modules, (5) Confirmation - state clearly that pre-execution checklist is complete and AGENTS.md compliance is confirmed
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-04-10T22:42:21.849Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-10T22:42:21.849Z
Learning: Fix SpecFact code review findings, including warnings, unless a rare explicit exception is documented
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:33:22.650Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/yaml-and-workflows.md:0-0
Timestamp: 2026-03-25T21:33:22.650Z
Learning: Run `hatch run yaml-check-all` in CI and before pull requests to validate all YAML and workflow files
Applied to files:
- .github/workflows/pr-orchestrator.yml
- .github/workflows/sign-modules.yml
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Applies to .github/workflows/*.{yml,yaml} : Validate GitHub workflow files using `hatch run lint-workflows` before committing
Applied to files:
- .github/workflows/pr-orchestrator.yml
- .github/workflows/sign-modules.yml
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: Applies to **/*.{py,pyi} : After any code changes, follow these steps in order: (1) Apply linting and formatting to ensure code quality: `hatch run format`, (2) Type checking: `hatch run type-check` (basedpyright), (3) Contract-first approach: Run `hatch run contract-test` for contract validation, (4) Run full test suite: `hatch test --cover -v`, (5) Verify all tests pass and contracts are satisfied, (6) Fix any issues and repeat steps until all tests pass
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Before making a pull request, locally run format, lint, contract tests, and full test suite: `hatch run format`, `hatch run lint`, `hatch run contract-test`, `hatch test --cover -v`
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to src/**/*.py : Test Coverage Validation: Run hatch run smart-test-unit for modified files, hatch run smart-test-folder for modified directories, and hatch run smart-test-full before committing. ALL TESTS MUST PASS.
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Run contract-first validation locally before committing: `hatch run contract-test-contracts`, `hatch run contract-test-exploration`, and `hatch run contract-test-scenarios`
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Use `hatch test` command for running tests instead of direct pytest usage to ensure proper Python matrix, dependency resolution, environment variables, and coverage reports
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:29.182Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/python-github-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:29.182Z
Learning: Applies to **/*.py : Run type checking with basedpyright on all Python files
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:29.182Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/python-github-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:29.182Z
Learning: Applies to **/*.py : Run linting with ruff and pylint on all Python files
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:29.182Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/python-github-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:29.182Z
Learning: Applies to **/*.py : Enable basedpyright strict mode with strict type checking configuration in Python
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to src/**/*.py : Type checking must pass with no errors using: mypy .
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to @(src|tests)/**/*.py : Linting must pass with no errors using: pylint src tests
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:29.182Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/python-github-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:29.182Z
Learning: Applies to **/*.py : Use type hints everywhere in Python code and enable basedpyright strict mode
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: For `/opsx:ff` (Fast-Forward Change Creation): OPSX provides change scaffolding and artifact templates. AGENTS.md requires explicitly adding: Worktree creation task as Step 1 in tasks.md (not just 'create branch'), TDD_EVIDENCE.md tracking task in section 2 (Tests), Documentation research task per `openspec/config.yaml`, Module signing quality gate if applicable, Worktree cleanup steps in final section
Applied to files:
- tests/unit/specfact_cli/registry/test_signing_artifacts.py
- tests/unit/workflows/test_sign_modules_on_approval.py
- CHANGELOG.md
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Applies to .github/workflows/!(tests).{yml,yaml} : Do not re-run the full test suite in other CI workflows; tests are enforced only in the dedicated Tests workflow (.github/workflows/tests.yml)
Applied to files:
- .github/workflows/sign-modules.yml
- tests/unit/workflows/test_sign_modules_on_approval.py
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to CHANGELOG.md : Update CHANGELOG.md with all code changes as part of version control requirements.
Applied to files:
CHANGELOG.md
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Applies to CHANGELOG.md : Update CHANGELOG.md to document all significant changes under Added, Fixed, Changed, or Removed sections when making a version change
Applied to files:
CHANGELOG.md
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: When creating implementation plans, task lists, or OpenSpec tasks.md, ALWAYS explicitly verify and include: (1) Worktree creation from `origin/dev` (not `git checkout -b`), (2) `hatch env create` in the worktree, (3) Pre-flight checks (`hatch run smart-test-status`, `hatch run contract-test-status`), (4) Worktree cleanup steps post-merge, (5) Self-check: 'Have I followed AGENTS.md Git Worktree Policy section?'
Applied to files:
CHANGELOG.md
🔀 Multi-repo context: nold-ai/specfact-cli-modules

- Pre-commit hook entries and expected script names:
  - .pre-commit-config.yaml expects ./scripts/pre-commit-verify-modules-signature.sh and ./scripts/pre-commit-quality-checks.sh (block1/block2 entries). [::nold-ai/specfact-cli-modules::.pre-commit-config.yaml]
- CI/workflow consumers and required flags:
- .github/workflows/pr-orchestrator.yml constructs VERIFY_CMD beginning with python scripts/verify-modules-signature.py --payload-from-filesystem --enforce-version-bump and conditionally appends --require-signature and --version-check-base. Any change to argument tokens or ordering may break this job. [::nold-ai/specfact-cli-modules::.github/workflows/pr-orchestrator.yml]
- .github/workflows/sign-modules.yml and .github/workflows/sign-modules-on-approval.yml invoke signing/verification flows and expect --payload-from-filesystem and related workflow_dispatch inputs / sign-and-push behavior. [::nold-ai/specfact-cli-modules::.github/workflows/sign-modules.yml] [::nold-ai/specfact-cli-modules::.github/workflows/sign-modules-on-approval.yml]
- Documentation/specs that encode exact verifier semantics:
- docs/reference/module-security.md, docs/authoring/module-signing.md, docs/guides/ci-cd-pipeline.md, and openspec/specs all document the branch-aware rule: always pass --enforce-version-bump and --payload-from-filesystem; include --require-signature only for main-equivalent checks. These files are authoritative for expected flag presence/semantics. [::nold-ai/specfact-cli-modules::docs/reference/module-security.md] [::nold-ai/specfact-cli-modules::docs/authoring/module-signing.md] [::nold-ai/specfact-cli-modules::docs/guides/ci-cd-pipeline.md] [::nold-ai/specfact-cli-modules::openspec/changes/marketplace-06-ci-module-signing/specs/ci-integration/spec.md]
- Tests that will break on name/flag/order mismatches:
- tests/unit/scripts/test_pre_commit_verify_modules.py and several workflow tests assert:
- wrapper scripts exist and are invoked,
- hatch/python invocation contains --enforce-version-bump and --payload-from-filesystem always,
- --require-signature present only on main,
- propagation of git diff failures and policy parsing behavior.
Changes to script filenames, missing scripts, different argument tokens, or different ordering could cause test failures. [::nold-ai/specfact-cli-modules::tests/unit/scripts/test_pre_commit_verify_modules.py] [::nold-ai/specfact-cli-modules::tests/unit/workflows/test_pr_orchestrator_signing.py] [::nold-ai/specfact-cli-modules::tests/unit/workflows/test_sign_modules_on_approval.py]
Top-level implication for the specfact-cli PR under review:
- Ensure the new/renamed scripts and CI invocation in specfact-cli match the exact script filenames and verifier invocation tokens used here:
- script filenames: pre-commit-quality-checks.sh, pre-commit-verify-modules.sh, pre-commit-verify-modules-signature.sh, git-branch-module-signature-flag.sh, verify-modules-signature.py (or hatch wrapper name)
- required tokens in invocations: --enforce-version-bump and --payload-from-filesystem must always be present; --require-signature must be present only for main-equivalent contexts (and omitted otherwise); --version-check-base is expected where CI compares against a base ref.
- If specfact-cli changes ordering or token names (or removes/renames scripts), update these consumers (tests/docs/workflows) in specfact-cli-modules accordingly.
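The always-on versus main-only token rule described above can be sketched as a small Python helper. This is purely illustrative: the real flag assembly lives in the workflow shell and `git-branch-module-signature-flag.sh`, and the function name here is an assumption.

```python
from __future__ import annotations


def build_verify_args(is_main_equivalent: bool, base_ref: str | None = None) -> list[str]:
    """Assemble verifier args per the documented contract:
    --payload-from-filesystem and --enforce-version-bump are always present,
    --require-signature is prepended only for main-equivalent checks, and
    --version-check-base is appended when CI compares against a base ref."""
    args = ["--payload-from-filesystem", "--enforce-version-bump"]
    if is_main_equivalent:
        args = ["--require-signature", *args]
    if base_ref is not None:
        args += ["--version-check-base", base_ref]
    return args
```

For example, `build_verify_args(True, "origin/main")` yields the signature-required ordering the modules-repo tests pin, while `build_verify_args(False)` yields only the two always-on tokens.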
🔇 Additional comments (2)
CHANGELOG.md (1)
13-96: Strong release note structure and scope coverage

This entry is correctly added at the top, versioned/date-stamped, and captures significant changes under Added, Fixed, and Changed with strong operational traceability for signing and pre-commit behavior.
Based on learnings: “Applies to CHANGELOG.md : Update CHANGELOG.md with all code changes as part of version control requirements.” and “Applies to CHANGELOG.md : Update CHANGELOG.md to document all significant changes under Added, Fixed, Changed, or Removed sections when making a version change”.
docs/reference/module-security.md (1)
47-76: Technical accuracy confirmed; contract semantics match implementation.

The documented verification modes (Strict vs Checksum-only), CLI flags (`--require-signature`, `--enforce-version-bump`, `--payload-from-filesystem`, `--version-check-base`), and workflow behaviors align precisely with the code:

- Two-mode verification: Checksum validation is always performed; signature validation is conditional on `--require-signature` or manifest presence (confirmed in `verify_manifest` and `main()` argparse).
- Branch-aware policy: `git-branch-module-signature-flag.sh` emits "require" on `main` and "omit" elsewhere; correctly documented.
- Approval-time automation: the `sign-modules-on-approval.yml` workflow contract (same-repo PRs, `dev`/`main` targets, secret requirements) matches the described behavior.
- No `--allow-unsigned` on verifier: Confirmed absent from argparse; correctly noted as existing only on `sign-modules.py`.

This maintains OpenSpec traceability and ensures the public verification contract is accurately documented for cross-repo consumers.
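The two-mode rule confirmed above can be sketched in Python. The function name echoes the review's mention of `verify_manifest`, but the parameters and behavior here are an illustrative reduction, not the real verifier:

```python
from __future__ import annotations

import hashlib


def verify_manifest_entry(payload: bytes, checksum: str,
                          signature: str | None, require_signature: bool) -> None:
    """Two-mode check: the checksum is always validated; a signature is
    demanded only when require_signature is set (strict mode). When a
    signature is present it would be verified in either mode."""
    digest = "sha256:" + hashlib.sha256(payload).hexdigest()
    if digest != checksum:
        raise ValueError(f"checksum mismatch: expected {checksum}, got {digest}")
    if signature is None:
        if require_signature:
            raise ValueError("signature required but missing from manifest")
        return  # checksum-only mode passes
    # Strict path: cryptographic signature verification would run here
    # (key loading and crypto are omitted from this sketch).
```

A checksum-only manifest therefore passes without `--require-signature` and fails with it, matching the branch-aware policy documented above.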
* feat(ci): sign modules on PR approval and manual dispatch
  - Add sign-modules-on-approval workflow (approved reviews, dev/main base)
  - Extend sign-modules.yml with workflow_dispatch inputs and sign-and-push job
  - Document flows in module-security.md; update CHANGELOG and tests
  Made-with: Cursor
* Fix sign review and process
* Fix signature flow
* Fix review gate findings
* Fix review gate findings
---------
Signed-off-by: Dom <39115308+djm81@users.noreply.github.com>
Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
There was a problem hiding this comment.
Actionable comments posted: 1
♻️ Duplicate comments (7)
src/specfact_cli/modules/init/module-package.yaml (1)
19-20: ⚠️ Potential issue | 🔴 Critical

Restore `integrity.signature` before merging this release PR to `main`.

The manifest is currently checksum-only. In this main-targeted release flow, strict module verification requires signatures for changed module manifests; this will block verification gates and cross-repo signing contract checks.

Suggested manifest shape (signature generated by signing flow):

```diff
 integrity:
   checksum: sha256:7cfe10f383162c23c5abc057771f6eeb4f9b362f81cf5c50bd99a187d9c6a461
+  signature: "<generated-by-signing-workflow>"
```

Based on learnings: "Enforce module signatures and version bumps when signed module assets or manifests are affected".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/specfact_cli/modules/init/module-package.yaml` around lines 19 - 20, The manifest was left checksum-only; restore the removed integrity.signature entry in src/specfact_cli/modules/init/module-package.yaml (add the integrity.signature field with the signature value produced by the signing flow) so the module manifest contains both integrity.checksum and integrity.signature; also ensure the module version is bumped if signed assets or manifests changed to satisfy the "signed module assets" release policy before merging to main.CHANGELOG.md (1)
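As a hypothetical illustration of the checksum-only versus fully signed manifest shapes discussed in this finding (the canonical payload bytes and signature format are defined by the signing scripts, not this sketch):

```python
from __future__ import annotations

import hashlib


def integrity_block(payload: bytes, signature: str | None = None) -> dict[str, str]:
    """Build a manifest `integrity` mapping: the checksum is always present,
    the signature only once the signing flow has produced one."""
    block = {"checksum": "sha256:" + hashlib.sha256(payload).hexdigest()}
    if signature is not None:
        block["signature"] = signature
    return block
```

The checksum-only shape is what the manifest currently has; the strict release gate expects the two-key shape produced when a signature argument is supplied.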
55-66: ⚠️ Potential issue | 🟡 Minor

Wrap overlength changelog lines to satisfy markdown lint
Several lines in this block exceed the 120-character markdown limit and may fail lint in CI. Please reflow these bullets without changing content.
As per coding guidelines: “Keep line length under 120 characters in markdown files”.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CHANGELOG.md` around lines 55 - 66, Reflow the overlength bullets in the CHANGELOG.md block so no line exceeds 120 characters: break long sentences in the three bullets (the one mentioning `.pre-commit-config.yaml` and `pr-orchestrator`, the “Release / version gate” bullet referencing `hatch run release` and `check_local_version_ahead_of_pypi.py`, and the “Docs / OpenSpec” bullet with `verify-modules-signature.py` and `scripts/pre-commit-quality-checks.sh all`) into multiple wrapped lines while preserving the exact wording and markdown list markers (hyphen + bold section headers like **Pre-commit / CI parity**, **Release / version gate**, **Docs / OpenSpec**) and all referenced identifiers (`.pre-commit-config.yaml`, `pr-orchestrator`, `pre-commit-smart-checks.sh`, `hatch run release`, `check_local_version_ahead_of_pypi.py`, `verify-modules-signature.py`, `scripts/pre-commit-quality-checks.sh all`, `marketplace-06-ci-module-signing`, `CHANGE_ORDER`) so CI markdown lint passes..github/workflows/sign-modules.yml (1)
77-91: ⚠️ Potential issue | 🟠 Major

Preserve the verifier arg order expected by the signing contract.

This workflow now builds the same verifier contract as `pr-orchestrator.yml`, but the always-on args are reversed. The linked modules repo pins `--payload-from-filesystem --enforce-version-bump`, so leaving them swapped here risks drift between CI, docs, and workflow tests.

🛠️ Proposed fix

```diff
-VERIFY_ARGS=(--enforce-version-bump --payload-from-filesystem)
+VERIFY_ARGS=(--payload-from-filesystem --enforce-version-bump)
 if [ "${{ github.event_name }}" = "pull_request" ]; then
   BASE_REF="origin/${{ github.event.pull_request.base.ref }}"
   if [ "${{ github.event.pull_request.head.repo.full_name }}" != "${{ github.repository }}" ] && [ "${{ github.event.pull_request.base.ref }}" = "main" ]; then
     VERIFY_ARGS=(--require-signature "${VERIFY_ARGS[@]}")
   fi
 elif [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
   BASE_REF="origin/${{ github.event.inputs.base_branch }}"
 elif [ "${{ github.ref_name }}" = "main" ] && [ "${{ github.event_name }}" != "workflow_dispatch" ]; then
   VERIFY_ARGS=(--require-signature "${VERIFY_ARGS[@]}")
 fi
```
Verify each finding against the current code and only fix it if needed. In @.github/workflows/sign-modules.yml around lines 77 - 91, The VERIFY_ARGS initial array in the workflow is reversed compared to the signing contract; change the initialization of VERIFY_ARGS so the always-on args are ordered as --payload-from-filesystem then --enforce-version-bump (i.e., set VERIFY_ARGS=(--payload-from-filesystem --enforce-version-bump)), and keep the subsequent appends (VERIFY_ARGS=(--require-signature "${VERIFY_ARGS[@]}")) unchanged so the final argument ordering matches the contract used by pr-orchestrator and the modules repo..github/workflows/pr-orchestrator.yml (2)
116-128: ⚠️ Potential issue | 🟠 Major

Keep the verifier flags in the shared contract order.

The linked `specfact-cli-modules` workflow/script tests pin the always-on verifier args as `--payload-from-filesystem --enforce-version-bump`, with `--require-signature` only prepended for main-equivalent checks. Swapping the shared pair here breaks that parity contract.

🛠️ Proposed fix

```diff
-VERIFY_ARGS=(--enforce-version-bump --payload-from-filesystem)
+VERIFY_ARGS=(--payload-from-filesystem --enforce-version-bump)
 if [ "${{ github.event_name }}" = "pull_request" ]; then
   BASE_REF="origin/${{ github.event.pull_request.base.ref }}"
   if [ "${{ github.event.pull_request.head.repo.full_name }}" != "${{ github.repository }}" ] && [ "${{ github.event.pull_request.base.ref }}" = "main" ]; then
     VERIFY_ARGS=(--require-signature "${VERIFY_ARGS[@]}")
   fi
 elif [ "${{ github.ref_name }}" = "main" ]; then
   VERIFY_ARGS=(--require-signature "${VERIFY_ARGS[@]}")
 fi
```
Verify each finding against the current code and only fix it if needed. In @.github/workflows/pr-orchestrator.yml around lines 116 - 128, The shared verifier flags were initialized in the wrong order; change the VERIFY_ARGS initialization from (--enforce-version-bump --payload-from-filesystem) to (--payload-from-filesystem --enforce-version-bump) so the always-on pair matches the specfact-cli-modules contract, leaving the existing prepending of --require-signature via VERIFY_ARGS=(--require-signature "${VERIFY_ARGS[@]}") and the python invocation (python scripts/verify-modules-signature.py "${VERIFY_ARGS[@]}" [--version-check-base "$BASE_REF"]) unchanged.
225-227: ⚠️ Potential issue | 🟠 Major

Don't make the required PR test job depend on live PyPI availability.

`check_local_version_ahead_of_pypi.py` treats transient PyPI/network failures as fatal (exit 2), so this turns normal PR validation into an external-availability check. Gate it to release/main flows, or only fail PRs on the real policy violation (local <= PyPI).

🛠️ One safe option

```diff
 - name: Verify local version is ahead of PyPI
-  if: needs.changes.outputs.skip_tests_dev_to_main != 'true'
+  if: needs.changes.outputs.skip_tests_dev_to_main != 'true' && github.event_name == 'push' && github.ref == 'refs/heads/main'
   run: python scripts/check_local_version_ahead_of_pypi.py
```
Verify each finding against the current code and only fix it if needed. In @.github/workflows/pr-orchestrator.yml around lines 225 - 227, The workflow's "Verify local version is ahead of PyPI" step calls check_local_version_ahead_of_pypi.py and treats transient PyPI/network failures as fatal; change the CI so PR validation does not depend on live PyPI by either (A) moving or gating the step to release/main flows only (run it in release or main branch jobs), or (B) updating check_local_version_ahead_of_pypi.py to distinguish network/PyPI errors from a real version-policy violation and exit non‑fatally (return success or a neutral code) on transient failures while still failing when local_version <= pypi_version; reference the workflow step name "Verify local version is ahead of PyPI" and the script check_local_version_ahead_of_pypi.py when making the change.tests/unit/workflows/test_sign_modules_on_approval.py (1)
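Option (B) above, letting transient index failures pass with a warning while still failing on the actual policy violation, could look roughly like this. The exit codes, helper names, and naive version parsing are illustrative assumptions, not the actual `check_local_version_ahead_of_pypi.py`:

```python
import json
import sys
import urllib.error
import urllib.request


def parse(version: str) -> tuple[int, ...]:
    # Illustrative only; real code should use packaging.version to
    # handle pre-releases and other PEP 440 forms.
    return tuple(int(part) for part in version.split("."))


def check_version_ahead(local: str,
                        index_url: str = "https://pypi.org/pypi/specfact-cli/json") -> int:
    """Return 0 when local is ahead (or the index is unreachable, logged as
    a warning), 1 only on the real policy violation: local <= published."""
    try:
        with urllib.request.urlopen(index_url, timeout=5) as resp:
            published = json.load(resp)["info"]["version"]
    except (urllib.error.URLError, TimeoutError, json.JSONDecodeError) as exc:
        print(f"warning: could not query package index ({exc}); skipping",
              file=sys.stderr)
        return 0  # neutral: availability problems are not policy violations
    if parse(local) <= parse(published):
        print(f"error: local {local} is not ahead of published {published}",
              file=sys.stderr)
        return 1
    return 0
```

With this split, PR validation only fails on a genuine version-policy breach, and the network dependency degrades to a logged warning.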
56-71: 🧹 Nitpick | 🔵 Trivial

Assert workflow semantics here, not exact shell text.
This hard-codes quoting, path names, and commit-message formatting, so harmless workflow refactors will fail the test even when the approval-signing contract is unchanged. Parse the YAML, locate the signer/push steps, and assert the required flags, refs, and secrets by containment instead.
As per coding guidelines, "Tests must be meaningful and test actual functionality, cover both success and failure cases, be independent and repeatable, and have clear, descriptive names. NO EXCEPTIONS - no placeholder or empty tests."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/workflows/test_sign_modules_on_approval.py` around lines 56 - 71, The test test_sign_modules_on_approval_runs_signer_with_changed_only_mode currently asserts exact shell/text fragments; instead parse WORKFLOW as YAML, locate the signer step (search for the script path or action run that includes sign-modules.py) and the subsequent push step, then assert semantics by containment: verify the signer step's run/args include "--changed-only", "--bump-version patch", and "--payload-from-filesystem" and that it references the base ref placeholder (e.g., contains 'BASE_REF' or github.event.pull_request.base.ref) and required secrets (SPECFACT_MODULE_PRIVATE_SIGN_KEY and SPECFACT_MODULE_PRIVATE_SIGN_KEY_PASSPHRASE); verify the push step contains a push to origin of HEAD into the HEAD_REF placeholder (contain 'origin' and 'HEAD_REF' rather than exact quoting) and that the paths/workdir assertions use containment (e.g., include "_trusted_scripts" and "_pr_workspace") rather than exact quoted strings..github/workflows/sign-modules-on-approval.yml (1)
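The containment-style assertions recommended above can be sketched as follows. The inline `workflow` dict stands in for the result of `yaml.safe_load(WORKFLOW)`, and the step names and flag strings are illustrative placeholders, not the real workflow text:

```python
from __future__ import annotations


def find_step(workflow: dict, needle: str) -> dict | None:
    """Return the first step whose `run` script contains `needle`."""
    for job in workflow.get("jobs", {}).values():
        for step in job.get("steps", []):
            if needle in step.get("run", ""):
                return step
    return None


# Stand-in for yaml.safe_load(WORKFLOW); a real test would parse the YAML text.
workflow = {
    "jobs": {
        "sign": {
            "steps": [
                {
                    "name": "Sign changed module manifests",
                    "run": 'python sign-modules.py --changed-only '
                           '--bump-version patch --payload-from-filesystem '
                           '--version-check-base "$BASE_REF"',
                },
                {
                    "name": "Push signed manifests",
                    "run": 'git push origin "HEAD:${HEAD_REF}"',
                },
            ]
        }
    }
}

# Assert semantics by containment, so quoting/formatting refactors survive.
signer = find_step(workflow, "sign-modules.py")
assert signer is not None
for flag in ("--changed-only", "--bump-version patch", "--payload-from-filesystem"):
    assert flag in signer["run"]

push = find_step(workflow, "git push")
assert push is not None and "HEAD_REF" in push["run"]
```

Because the assertions check flag presence and ref placeholders rather than exact shell fragments, harmless quoting or ordering refactors no longer break the test while the approval-signing contract stays enforced.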
42-47: ⚠️ Potential issue | 🟠 Major

Check out the PR branch tip before pushing signed manifests.

Using `github.event.pull_request.head.sha` leaves the job on a detached review-time commit. If the branch moves before Line 95, `git push origin "HEAD:${HEAD_REF}"` becomes a non-fast-forward rejection and the signer flakes on active PRs.

🛠️ Minimal fix

```diff
 - name: Checkout PR head (module tree to sign)
   uses: actions/checkout@v4
   with:
     fetch-depth: 0
-    ref: ${{ github.event.pull_request.head.sha }}
+    ref: ${{ github.event.pull_request.head.ref }}
     path: _pr_workspace
     persist-credentials: true
```
Verify each finding against the current code and only fix it if needed. In @.github/workflows/sign-modules-on-approval.yml around lines 42 - 47, The checkout step "Checkout PR head (module tree to sign)" is using ref: ${{ github.event.pull_request.head.sha }} which leaves a detached commit and causes git push origin "HEAD:${HEAD_REF}" to fail if the branch advances; change the ref to the branch name (e.g. ref: ${{ github.event.pull_request.head.ref }} or use github.head_ref) so the workflow checks out the branch tip instead of a SHA, keep fetch-depth: 0 (and the same path _pr_workspace) so pushes are allowed and non-fast-forward rejections are avoided.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/sign-modules-on-approval.yml:
- Around line 50-75: The workflow currently sets BASE_REF to a moving ref
("origin/${{ github.event.pull_request.base.ref }}") which can change after the
PR event; change the Sign changed module manifests step to use an immutable
comparison base by setting BASE_REF to the pull request base SHA
(github.event.pull_request.base.sha) or a merge-base derived from that SHA,
ensuring the python sign-modules.py --changed-only invocation compares against a
fixed commit; update the step to fetch the specific base SHA (or compute git
merge-base between the PR head and that SHA) and pass that immutable BASE_REF
into the sign-modules.py command so unrelated upstream commits won't affect
change detection.
---
Duplicate comments:
In @.github/workflows/pr-orchestrator.yml:
- Around line 116-128: The shared verifier flags were initialized in the wrong
order; change the VERIFY_ARGS initialization from (--enforce-version-bump
--payload-from-filesystem) to (--payload-from-filesystem --enforce-version-bump)
so the always-on pair matches the specfact-cli-modules contract, leaving the
existing prepending of --require-signature via VERIFY_ARGS=(--require-signature
"${VERIFY_ARGS[@]}") and the python invocation (python
scripts/verify-modules-signature.py "${VERIFY_ARGS[@]}" [--version-check-base
"$BASE_REF"]) unchanged.
- Around line 225-227: The workflow's "Verify local version is ahead of PyPI"
step calls check_local_version_ahead_of_pypi.py and treats transient
PyPI/network failures as fatal; change the CI so PR validation does not depend
on live PyPI by either (A) moving or gating the step to release/main flows only
(run it in release or main branch jobs), or (B) updating
check_local_version_ahead_of_pypi.py to distinguish network/PyPI errors from a
real version-policy violation and exit non‑fatally (return success or a neutral
code) on transient failures while still failing when local_version <=
pypi_version; reference the workflow step name "Verify local version is ahead of
PyPI" and the script check_local_version_ahead_of_pypi.py when making the
change.
In @.github/workflows/sign-modules-on-approval.yml:
- Around line 42-47: The checkout step "Checkout PR head (module tree to sign)"
is using ref: ${{ github.event.pull_request.head.sha }} which leaves a detached
commit and causes git push origin "HEAD:${HEAD_REF}" to fail if the branch
advances; change the ref to the branch name (e.g. ref: ${{
github.event.pull_request.head.ref }} or use github.head_ref) so the workflow
checks out the branch tip instead of a SHA, keep fetch-depth: 0 (and the same
path _pr_workspace) so pushes are allowed and non-fast-forward rejections are
avoided.
In @.github/workflows/sign-modules.yml:
- Around line 77-91: The VERIFY_ARGS initial array in the workflow is reversed
compared to the signing contract; change the initialization of VERIFY_ARGS so
the always-on args are ordered as --payload-from-filesystem then
--enforce-version-bump (i.e., set VERIFY_ARGS=(--payload-from-filesystem
--enforce-version-bump)), and keep the subsequent appends
(VERIFY_ARGS=(--require-signature "${VERIFY_ARGS[@]}")) unchanged so the final
argument ordering matches the contract used by pr-orchestrator and the modules
repo.
In `@CHANGELOG.md`:
- Around line 55-66: Reflow the overlength bullets in the CHANGELOG.md block so
no line exceeds 120 characters: break long sentences in the three bullets (the
one mentioning `.pre-commit-config.yaml` and `pr-orchestrator`, the “Release /
version gate” bullet referencing `hatch run release` and
`check_local_version_ahead_of_pypi.py`, and the “Docs / OpenSpec” bullet with
`verify-modules-signature.py` and `scripts/pre-commit-quality-checks.sh all`)
into multiple wrapped lines while preserving the exact wording and markdown list
markers (hyphen + bold section headers like **Pre-commit / CI parity**,
**Release / version gate**, **Docs / OpenSpec**) and all referenced identifiers
(`.pre-commit-config.yaml`, `pr-orchestrator`, `pre-commit-smart-checks.sh`,
`hatch run release`, `check_local_version_ahead_of_pypi.py`,
`verify-modules-signature.py`, `scripts/pre-commit-quality-checks.sh all`,
`marketplace-06-ci-module-signing`, `CHANGE_ORDER`) so CI markdown lint passes.
In `@src/specfact_cli/modules/init/module-package.yaml`:
- Around line 19-20: The manifest was left checksum-only; restore the removed
integrity.signature entry in src/specfact_cli/modules/init/module-package.yaml
(add the integrity.signature field with the signature value produced by the
signing flow) so the module manifest contains both integrity.checksum and
integrity.signature; also ensure the module version is bumped if signed assets
or manifests changed to satisfy the "signed module assets" release policy before
merging to main.
In `@tests/unit/workflows/test_sign_modules_on_approval.py`:
- Around line 56-71: The test
test_sign_modules_on_approval_runs_signer_with_changed_only_mode currently
asserts exact shell/text fragments; instead parse WORKFLOW as YAML, locate the
signer step (search for the script path or action run that includes
sign-modules.py) and the subsequent push step, then assert semantics by
containment: verify the signer step's run/args include "--changed-only",
"--bump-version patch", and "--payload-from-filesystem" and that it references
the base ref placeholder (e.g., contains 'BASE_REF' or
github.event.pull_request.base.ref) and required secrets
(SPECFACT_MODULE_PRIVATE_SIGN_KEY and
SPECFACT_MODULE_PRIVATE_SIGN_KEY_PASSPHRASE); verify the push step contains a
push to origin of HEAD into the HEAD_REF placeholder (contain 'origin' and
'HEAD_REF' rather than exact quoting) and that the paths/workdir assertions use
containment (e.g., include "_trusted_scripts" and "_pr_workspace") rather than
exact quoted strings.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: b6de99be-c6ee-4c3b-aa71-92933f3f5452
📒 Files selected for processing (8)
- .github/workflows/pr-orchestrator.yml
- .github/workflows/sign-modules-on-approval.yml
- .github/workflows/sign-modules.yml
- CHANGELOG.md
- docs/reference/module-security.md
- src/specfact_cli/modules/init/module-package.yaml
- tests/unit/specfact_cli/registry/test_signing_artifacts.py
- tests/unit/workflows/test_sign_modules_on_approval.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Compatibility (Python 3.11)
- GitHub Check: Tests (Python 3.12)
- GitHub Check: Linting (ruff, pylint, safe-write guard)
- GitHub Check: Tests (Python 3.12)
- GitHub Check: Compatibility (Python 3.11)
- GitHub Check: Workflow Lint
- GitHub Check: Analyze (python)
🧰 Additional context used
📓 Path-based instructions (14)
**/*.yaml
📄 CodeRabbit inference engine (.cursor/rules/spec-fact-cli-rules.mdc)
YAML files must pass linting using: hatch run yaml-lint with relaxed policy.
Files:
src/specfact_cli/modules/init/module-package.yaml
**/*.{yml,yaml}
📄 CodeRabbit inference engine (.cursor/rules/testing-and-build-guide.mdc)
Validate YAML configuration files locally using `hatch run yaml-lint` before committing

**/*.{yml,yaml}:

- Format all YAML and workflow files using `hatch run yaml-fix-all` before committing
- Use Prettier to fix whitespace, indentation, and final newline across YAML files
- Use yamllint with the repo .yamllint configuration (line-length 140, trailing spaces and final newline enforced) to lint non-workflow YAML files
Files:
src/specfact_cli/modules/init/module-package.yaml
.github/workflows/*.{yml,yaml}
📄 CodeRabbit inference engine (.cursor/rules/testing-and-build-guide.mdc)
Validate GitHub workflow files using `hatch run lint-workflows` before committing
.github/workflows/*.{yml,yaml}: Use actionlint for semantic validation of GitHub Actions workflows
Format GitHub Actions workflows using `hatch run workflows-fmt` and lint them with `hatch run workflows-lint` after editing
Files:
- .github/workflows/pr-orchestrator.yml
- .github/workflows/sign-modules.yml
- .github/workflows/sign-modules-on-approval.yml
.github/workflows/!(tests).{yml,yaml}
📄 CodeRabbit inference engine (.cursor/rules/testing-and-build-guide.mdc)
Do not re-run the full test suite in other CI workflows; tests are enforced only in the dedicated Tests workflow (.github/workflows/tests.yml)
Files:
- .github/workflows/pr-orchestrator.yml
- .github/workflows/sign-modules.yml
- .github/workflows/sign-modules-on-approval.yml
.github/workflows/**
⚙️ CodeRabbit configuration file
.github/workflows/**: CI safety: secrets usage, workflow dependencies, alignment with hatch test / contract-test
gates, and action versions.
Files:
- .github/workflows/pr-orchestrator.yml
- .github/workflows/sign-modules.yml
- .github/workflows/sign-modules-on-approval.yml
**/*.{md,mdc}
📄 CodeRabbit inference engine (.cursor/rules/markdown-rules.mdc)
**/*.{md,mdc}: Do not use more than one consecutive blank line anywhere in the document (MD012: No Multiple Consecutive Blank Lines)
Fenced code blocks should be surrounded by blank lines (MD031: Fenced Code Blocks)
Lists should be surrounded by blank lines (MD032: Lists)
Files must end with a single empty line (MD047: Files Must End With Single Newline)
Lines should not have trailing spaces (MD009: No Trailing Spaces)
Use asterisks (**) for strong emphasis, not underscores (__) (MD050: Strong Style)
Fenced code blocks must have a language specified (MD040: Fenced Code Language)
Headers should increment by one level at a time (MD001: Header Increment)
Headers should be surrounded by blank lines (MD022: Headers Should Be Surrounded By Blank Lines)
Only one top-level header (H1) is allowed per document (MD025: Single H1 Header)
Use consistent list markers, preferring dashes (-) for unordered lists (MD004: List Style)
Nested unordered list items should be indented consistently, typically by 2 spaces (MD007: Unordered List Indentation)
Use exactly one space after the list marker (e.g., -, *, +, 1.) (MD030: Spaces After List Markers)
Use incrementing numbers for ordered lists (MD029: Ordered List Item Prefix)
Enclose bare URLs in angle brackets or format them as links (MD034: Bare URLs)
Don't use spaces immediately inside code spans (MD038: Spaces Inside Code Spans)
Use consistent indentation (usually 2 or 4 spaces) throughout markdown files
Keep line length under 120 characters in markdown files
Use reference-style links for better readability in markdown files
Use a trailing slash for directory paths in markdown files
Ensure proper escaping of special characters in markdown files
Files:
- docs/reference/module-security.md
- CHANGELOG.md
docs/**/*.md
📄 CodeRabbit inference engine (.cursor/rules/spec-fact-cli-rules.mdc)
Update architecture documentation in docs/ for architecture changes, state machine documentation for FSM modifications, interface documentation for API changes, and configuration guides for configuration changes. DO NOT create internal docs in specfact-cli repo folder that should not be visible to end users; use the respective internal repository instead.
Files:
docs/reference/module-security.md
⚙️ CodeRabbit configuration file
docs/**/*.md: User-facing accuracy: CLI examples match current behavior; preserve Jekyll front matter;
call out when README/docs index need sync.
Files:
docs/reference/module-security.md
**/*.md
📄 CodeRabbit inference engine (.cursorrules)
Avoid markdown linting errors (refer to markdown-rules)
Files:
- docs/reference/module-security.md
- CHANGELOG.md
**/test_*.py
📄 CodeRabbit inference engine (.cursor/rules/python-github-rules.mdc)
**/test_*.py: Write tests first in test-driven development (TDD) using the Red-Green-Refactor cycle
Ensure each test is independent and repeatable with no shared state between tests
Organize Python imports in tests using unittest.mock for Mock and patch
Use setup_method() for test initialization and Arrange-Act-Assert pattern in test files
Use the `@pytest.mark.asyncio` decorator for async test functions in Python
Organize test files in structure: tests/unit/, tests/integration/, tests/e2e/ by module
Files:
- tests/unit/workflows/test_sign_modules_on_approval.py
- tests/unit/specfact_cli/registry/test_signing_artifacts.py
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/python-github-rules.mdc)
**/*.py: Maintain minimum 80% test coverage, with 100% coverage for critical paths in Python code
Use clear naming and self-documenting code, preferring clear names over comments
Ensure each function/class has a single clear purpose (Single Responsibility Principle)
Extract common patterns to avoid code duplication (DRY principle)
Apply SOLID object-oriented design principles in Python code
Use type hints everywhere in Python code and enable basedpyright strict mode
Use Pydantic models for data validation and serialization in Python
Use async/await for I/O operations in Python code
Use context managers for resource management in Python
Use dataclasses for simple data containers in Python
Enforce maximum line length of 120 characters in Python code
Use 4 spaces for indentation in Python code (no tabs)
Use 2 blank lines between classes and 1 blank line between methods in Python
Organize imports in order: Standard library → Third party → Local in Python files
Use snake_case for variables and functions in Python
Use PascalCase for class names in Python
Use UPPER_SNAKE_CASE for constants in Python
Use leading underscore (_) for private methods in Python classes
Use snake_case for Python file names
Enable basedpyright strict mode with strict type checking configuration in Python
Use Google-style docstrings for functions and classes in Python
Include comprehensive exception handling with specific exception types in Python code
Use logging with structured context (extra parameters) instead of print statements
Use retry logic with tenacity decorators (@retry) for operations that might fail
Use Pydantic BaseSettings for environment-based configuration in Python
Validate user input using Pydantic validators in Python models
Use `@lru_cache` and Redis-based caching for expensive calculations in Python
Run code formatting with Black (120 character line length) and isort in Python
Run type checking with basedpyright on all Python files
Run linting with ruff and pylint on all Python files ...
Files:
- tests/unit/workflows/test_sign_modules_on_approval.py
- tests/unit/specfact_cli/registry/test_signing_artifacts.py
tests/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/spec-fact-cli-rules.mdc)
Tests must be meaningful and test actual functionality, cover both success and failure cases, be independent and repeatable, and have clear, descriptive names. NO EXCEPTIONS - no placeholder or empty tests.
tests/**/*.py: Trim low-value unit tests when a contract covers the same assertion (type/shape/raises on negative checks)
Delete tests that only assert input validation, datatype/shape enforcement, or raises on negative conditions now guarded by contracts and runtime typing
Convert repeated edge-case permutations into one Hypothesis property with contracts acting as oracles
Secret redaction via `LoggerSetup.redact_secrets` must be covered by unit tests
Files:
- tests/unit/workflows/test_sign_modules_on_approval.py
- tests/unit/specfact_cli/registry/test_signing_artifacts.py
⚙️ CodeRabbit configuration file
tests/**/*.py: Contract-first testing: meaningful scenarios, not redundant assertions already covered by
contracts. Flag flakiness, environment coupling, and missing coverage for changed behavior.
Files:
- tests/unit/workflows/test_sign_modules_on_approval.py
- tests/unit/specfact_cli/registry/test_signing_artifacts.py
@(src|tests)/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/spec-fact-cli-rules.mdc)
Linting must pass with no errors using: pylint src tests
Files:
- tests/unit/workflows/test_sign_modules_on_approval.py
- tests/unit/specfact_cli/registry/test_signing_artifacts.py
**/*.{py,pyi}
📄 CodeRabbit inference engine (.cursorrules)
**/*.{py,pyi}: After any code changes, follow these steps in order: (1) Apply linting and formatting to ensure code quality: `hatch run format`, (2) Type checking: `hatch run type-check` (basedpyright), (3) Contract-first approach: Run `hatch run contract-test` for contract validation, (4) Run full test suite: `hatch test --cover -v`, (5) Verify all tests pass and contracts are satisfied, (6) Fix any issues and repeat steps until all tests pass
All public APIs must have `@icontract` decorators and `@beartype` type checking
Use Pydantic models for all data structures with data validation
Only write high-value comments if at all. Avoid talking to the user through comments
Files:
- tests/unit/workflows/test_sign_modules_on_approval.py
- tests/unit/specfact_cli/registry/test_signing_artifacts.py
CHANGELOG.md
📄 CodeRabbit inference engine (.cursor/rules/python-github-rules.mdc)
Include new version entries at the top of CHANGELOG.md when updating versions
Update CHANGELOG.md with all code changes as part of version control requirements.
Update CHANGELOG.md to document all significant changes under Added, Fixed, Changed, or Removed sections when making a version change
Files:
CHANGELOG.md
🧠 Learnings (36)
📓 Common learnings
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: Applies to openspec/changes/**/*.md : For `/opsx:archive` (Archive change): Include module signing and cleanup in final tasks. Agents MUST run `openspec archive <change-id>` from repo root (no manual `mv` under `openspec/changes/archive/`)
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-10T22:42:21.849Z
Learning: Enforce module signatures and version bumps when signed module assets or manifests are affected
📚 Learning: 2026-04-10T22:42:21.849Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-10T22:42:21.849Z
Learning: Enforce module signatures and version bumps when signed module assets or manifests are affected
Applied to files:
- src/specfact_cli/modules/init/module-package.yaml
- .github/workflows/pr-orchestrator.yml
- docs/reference/module-security.md
- .github/workflows/sign-modules.yml
- tests/unit/specfact_cli/registry/test_signing_artifacts.py
- .github/workflows/sign-modules-on-approval.yml
- CHANGELOG.md
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: Applies to openspec/changes/**/*.md : For `/opsx:archive` (Archive change): Include module signing and cleanup in final tasks. Agents MUST run `openspec archive <change-id>` from repo root (no manual `mv` under `openspec/changes/archive/`)
Applied to files:
- src/specfact_cli/modules/init/module-package.yaml
- .github/workflows/pr-orchestrator.yml
- docs/reference/module-security.md
- tests/unit/workflows/test_sign_modules_on_approval.py
- .github/workflows/sign-modules.yml
- tests/unit/specfact_cli/registry/test_signing_artifacts.py
- .github/workflows/sign-modules-on-approval.yml
- CHANGELOG.md
📚 Learning: 2026-04-10T22:41:54.419Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/automatic-openspec-workflow.mdc:0-0
Timestamp: 2026-04-10T22:41:54.419Z
Learning: If `openspec` command is not found or if specfact-cli workspace or openspec/ directory is not accessible, do not modify application code unless the user explicitly confirms to proceed without OpenSpec
Applied to files:
src/specfact_cli/modules/init/module-package.yaml
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: All development work MUST use git worktrees per AGENTS.md Git Worktree Policy. Never create branches with `git checkout -b` in the primary checkout. Create worktree from origin/dev: `git worktree add ../specfact-cli-worktrees/<type>/<slug> -b <branch-name> origin/dev` where allowed types are: `feature/`, `bugfix/`, `hotfix/`, `chore/`. Forbidden in worktrees: `dev`, `main`. After creating worktree: `cd ../specfact-cli-worktrees/<type>/<slug>`. Bootstrap Hatch in worktree: `hatch env create`. Run pre-flight checks: `hatch run smart-test-status` and `hatch run contract-test-status`. Do all implementation work from the worktree, never from primary checkout. After PR merge: cleanup with `git worktree remove`, `git branch -d`, `git worktree prune`
Applied to files:
- src/specfact_cli/modules/init/module-package.yaml
- docs/reference/module-security.md
- .github/workflows/sign-modules-on-approval.yml
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: Before executing ANY workflow command (`/opsx:ff`, `/opsx:apply`, `/opsx:continue`, etc.), perform the Pre-Execution Checklist: (1) Git Worktree - create if task creates branches/modifies code, (2) TDD Evidence - create `TDD_EVIDENCE.md` if behavior changes, (3) Documentation - include documentation research if changes affect user-facing behavior, (4) Module Signing - include signature verification if changes modify bundled modules, (5) Confirmation - state clearly that pre-execution checklist is complete and AGENTS.md compliance is confirmed
Applied to files:
- .github/workflows/pr-orchestrator.yml
- docs/reference/module-security.md
- tests/unit/workflows/test_sign_modules_on_approval.py
- .github/workflows/sign-modules.yml
- .github/workflows/sign-modules-on-approval.yml
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Applies to .github/workflows/!(tests).{yml,yaml} : Do not re-run the full test suite in other CI workflows; tests are enforced only in the dedicated Tests workflow (.github/workflows/tests.yml)
Applied to files:
- .github/workflows/pr-orchestrator.yml
- tests/unit/workflows/test_sign_modules_on_approval.py
- .github/workflows/sign-modules.yml
📚 Learning: 2026-04-10T22:42:21.849Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-10T22:42:21.849Z
Learning: Fix SpecFact code review findings, including warnings, unless a rare explicit exception is documented
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Applies to .github/workflows/{quality-gates,contract-enforcement,contracts,no-escape-gating,integrated-ci}.{yml,yaml} : Dependent workflows must download the coverage.xml artifact from the Tests workflow and fail early if it is missing
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:33:22.650Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/yaml-and-workflows.md:0-0
Timestamp: 2026-03-25T21:33:22.650Z
Learning: Run `hatch run yaml-check-all` in CI and before pull requests to validate all YAML and workflow files
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Applies to .github/workflows/*.{yml,yaml} : Validate GitHub workflow files using `hatch run lint-workflows` before committing
Applied to files:
- .github/workflows/pr-orchestrator.yml
- .github/workflows/sign-modules.yml
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: Applies to **/*.{py,pyi} : After any code changes, follow these steps in order: (1) Apply linting and formatting to ensure code quality: `hatch run format`, (2) Type checking: `hatch run type-check` (basedpyright), (3) Contract-first approach: Run `hatch run contract-test` for contract validation, (4) Run full test suite: `hatch test --cover -v`, (5) Verify all tests pass and contracts are satisfied, (6) Fix any issues and repeat steps until all tests pass
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Before making a pull request, locally run format, lint, contract tests, and full test suite: `hatch run format`, `hatch run lint`, `hatch run contract-test`, `hatch test --cover -v`
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to src/**/*.py : Test Coverage Validation: Run hatch run smart-test-unit for modified files, hatch run smart-test-folder for modified directories, and hatch run smart-test-full before committing. ALL TESTS MUST PASS.
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Run contract-first validation locally before committing: `hatch run contract-test-contracts`, `hatch run contract-test-exploration`, and `hatch run contract-test-scenarios`
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Use `hatch test` command for running tests instead of direct pytest usage to ensure proper Python matrix, dependency resolution, environment variables, and coverage reports
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:29.182Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/python-github-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:29.182Z
Learning: Applies to **/*.py : Run type checking with basedpyright on all Python files
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:29.182Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/python-github-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:29.182Z
Learning: Applies to **/*.py : Run linting with ruff and pylint on all Python files
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:29.182Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/python-github-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:29.182Z
Learning: Applies to **/*.py : Enable basedpyright strict mode with strict type checking configuration in Python
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to src/**/*.py : Type checking must pass with no errors using: mypy .
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to @(src|tests)/**/*.py : Linting must pass with no errors using: pylint src tests
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-03-25T21:32:29.182Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/python-github-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:29.182Z
Learning: Applies to **/*.py : Use type hints everywhere in Python code and enable basedpyright strict mode
Applied to files:
.github/workflows/pr-orchestrator.yml
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: When creating implementation plans, task lists, or OpenSpec tasks.md, ALWAYS explicitly verify and include: (1) Worktree creation from `origin/dev` (not `git checkout -b`), (2) `hatch env create` in the worktree, (3) Pre-flight checks (`hatch run smart-test-status`, `hatch run contract-test-status`), (4) Worktree cleanup steps post-merge, (5) Self-check: 'Have I followed AGENTS.md Git Worktree Policy section?'
Applied to files:
- docs/reference/module-security.md
- tests/unit/workflows/test_sign_modules_on_approval.py
- .github/workflows/sign-modules-on-approval.yml
- CHANGELOG.md
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: Always finish each output by listing: (1) Which rulesets have been applied (e.g., `.cursorrules`, `AGENTS.md`, specific `.cursor/rules/*.mdc`), (2) Confirmation of Git Worktree Policy compliance (if applicable), (3) Which AI (LLM) provider and model version you are using
Applied to files:
docs/reference/module-security.md
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: For `/opsx:ff` (Fast-Forward Change Creation): OPSX provides change scaffolding and artifact templates. AGENTS.md requires explicitly adding: Worktree creation task as Step 1 in tasks.md (not just 'create branch'), TDD_EVIDENCE.md tracking task in section 2 (Tests), Documentation research task per `openspec/config.yaml`, Module signing quality gate if applicable, Worktree cleanup steps in final section
Applied to files:
- docs/reference/module-security.md
- tests/unit/workflows/test_sign_modules_on_approval.py
- .github/workflows/sign-modules.yml
- .github/workflows/sign-modules-on-approval.yml
- CHANGELOG.md
📚 Learning: 2026-04-10T22:41:34.504Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-10T22:41:34.504Z
Learning: Treat the canonical rule docs in docs/agent-rules/INDEX.md as the source of truth for worktree policy, OpenSpec gating, GitHub completeness checks, TDD order, quality gates, versioning, and documentation rules
Applied to files:
docs/reference/module-security.md
📚 Learning: 2026-04-10T22:42:04.362Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/session_startup_instructions.mdc:0-0
Timestamp: 2026-04-10T22:42:04.362Z
Learning: For OpenSpec changes when a sibling `specfact-cli-internal/` checkout exists, read internal wiki guidance in `docs/agent-rules/40-openspec-and-tdd.md` and consult `wiki/hot.md`, `wiki/graph.md`, and relevant `wiki/concepts/*.md` files
Applied to files:
docs/reference/module-security.md
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to docs/**/*.md : Update architecture documentation in docs/ for architecture changes, state machine documentation for FSM modifications, interface documentation for API changes, and configuration guides for configuration changes. DO NOT create internal docs in specfact-cli repo folder that should not be visible to end users; use the respective internal repository instead.
Applied to files:
docs/reference/module-security.md
📚 Learning: 2026-04-10T22:42:04.362Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/session_startup_instructions.mdc:0-0
Timestamp: 2026-04-10T22:42:04.362Z
Learning: After merges shipping OpenSpec- or GitHub-related work with a sibling `specfact-cli-internal` checkout present, run wiki scripts from that sibling repo's working directory (e.g., `cd ../specfact-cli-internal && python3 scripts/wiki_openspec_gh_status.py`), not from `specfact-cli` or other directories
Applied to files:
docs/reference/module-security.md
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: For `/opsx:apply` (Implementation): OPSX provides task iteration and progress tracking. AGENTS.md requires verification before each task: Confirm you are IN a worktree (not primary checkout) before modifying code, Record failing test evidence in `TDD_EVIDENCE.md` BEFORE implementing, Record passing test evidence AFTER implementation, Run quality gates from worktree (format, type-check, contract-test), GPG-signed commits (`git commit -S`), PR to `dev` branch (never direct push)
Applied to files:
- tests/unit/workflows/test_sign_modules_on_approval.py
- .github/workflows/sign-modules-on-approval.yml
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to tests/**/*.py : Tests must be meaningful and test actual functionality, cover both success and failure cases, be independent and repeatable, and have clear, descriptive names. NO EXCEPTIONS - no placeholder or empty tests.
Applied to files:
tests/unit/workflows/test_sign_modules_on_approval.py
📚 Learning: 2026-04-10T22:41:54.419Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/automatic-openspec-workflow.mdc:0-0
Timestamp: 2026-04-10T22:41:54.419Z
Learning: For changes created from plan documents, use `/specfact-cli/wf-create-change-from-plan <plan-path>` workflow to create OpenSpec proposals; the workflow includes plan selection, alignment review, integrity re-check, proposal creation, improvement, optional GitHub issue creation, and completion steps
Applied to files:
.github/workflows/sign-modules-on-approval.yml
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to CHANGELOG.md : Update CHANGELOG.md with all code changes as part of version control requirements.
Applied to files:
CHANGELOG.md
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Applies to CHANGELOG.md : Update CHANGELOG.md to document all significant changes under Added, Fixed, Changed, or Removed sections when making a version change
Applied to files:
CHANGELOG.md
📚 Learning: 2026-03-25T21:32:08.987Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/markdown-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:08.987Z
Learning: Applies to **/*.{md,mdc} : Keep line length under 120 characters in markdown files
Applied to files:
CHANGELOG.md
📚 Learning: 2026-03-25T21:33:22.650Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/yaml-and-workflows.md:0-0
Timestamp: 2026-03-25T21:33:22.650Z
Learning: Applies to .yamllint : Ensure .yamllint configuration enforces line-length of 140 characters, ignores workflows, and enforces trailing spaces and final newline
Applied to files:
CHANGELOG.md
🔀 Multi-repo context nold-ai/specfact-cli-modules
Linked repositories findings
nold-ai/specfact-cli-modules
-
CI/workflows invoking verifier:
- .github/workflows/pr-orchestrator.yml: constructs VERIFY_CMD starting with python scripts/verify-modules-signature.py --payload-from-filesystem --enforce-version-bump and conditionally appends --require-signature and --version-check-base; exact lines shown in workflow (VERIFY_CMD array construction and usage). [::nold-ai/specfact-cli-modules::.github/workflows/pr-orchestrator.yml]
-
New/related signing workflow:
- .github/workflows/sign-modules-on-approval.yml: signs changed module manifests on approved PR reviews; requires SPECFACT_MODULE_PRIVATE_SIGN_KEY and pushes signed manifests back (uses --changed-only --bump-version patch --payload-from-filesystem). [::nold-ai/specfact-cli-modules::.github/workflows/sign-modules-on-approval.yml]
-
Pre-commit hook entries / scripts:
- .pre-commit-config.yaml expects ./scripts/pre-commit-verify-modules-signature.sh and ./scripts/pre-commit-quality-checks.sh block entries. [::nold-ai/specfact-cli-modules::.pre-commit-config.yaml]
- scripts/pre-commit-verify-modules-signature.sh: wrapper that runs verify-modules-signature with --payload-from-filesystem --enforce-version-bump and conditionally adds --require-signature in its true-branch. Tests reference this exact script and its required flags. [::nold-ai/specfact-cli-modules::scripts/pre-commit-verify-modules-signature.sh]
- scripts/verify-modules-signature.py: defines CLI flags --require-signature, --enforce-version-bump, --payload-from-filesystem, --version-check-base. [::nold-ai/specfact-cli-modules::scripts/verify-modules-signature.py]
-
Documentation/specs that encode exact verifier semantics:
- Multiple docs and openspec files specify the required presence of --payload-from-filesystem and --enforce-version-bump always, and that --require-signature is only applied for main-equivalent checks (pre-commit and CI). Examples: docs/reference/module-security.md, docs/authoring/module-signing.md, openspec/specs/modules-pre-commit-quality-parity/spec.md. These are authoritative consumers of the flag semantics. [::nold-ai/specfact-cli-modules::docs/reference/module-security.md] [::nold-ai/specfact-cli-modules::docs/authoring/module-signing.md] [::nold-ai/specfact-cli-modules::openspec/specs/modules-pre-commit-quality-parity/spec.md]
-
Tests that assert exact script names and verifier arguments (likely to fail if tokens/order/names change):
- tests/unit/scripts/test_pre_commit_verify_modules.py — exercises wrappers, expects --payload-from-filesystem and --enforce-version-bump always, and --require-signature only on main; checks --diff-filter=ACMRD usage. [::nold-ai/specfact-cli-modules::tests/unit/scripts/test_pre_commit_verify_modules.py]
- tests/unit/workflows/test_pr_orchestrator_signing.py — expects verify-modules-signature.py and the two always-present flags in pr-orchestrator.yml. [::nold-ai/specfact-cli-modules::tests/unit/workflows/test_pr_orchestrator_signing.py]
- tests/unit/workflows/test_sign_modules_on_approval.py — validates sign-modules-on-approval wiring, presence of --payload-from-filesystem and --changed-only, commit/push behavior, and required secrets. [::nold-ai/specfact-cli-modules::tests/unit/workflows/test_sign_modules_on_approval.py]
- tests/unit/test_pre_commit_verify_modules_signature_script.py — reads pre-commit wrapper contents and asserts flag presence/conditional parts. [::nold-ai/specfact-cli-modules::tests/unit/test_pre_commit_verify_modules_signature_script.py]
-
Other explicit consumers / references:
- pyproject.toml has a hatch entry verify-modules-signature = "python scripts/verify-modules-signature.py {args}" — ensure arg-array invocation matches expected contract. [::nold-ai/specfact-cli-modules::pyproject.toml]
- README and GitHub templates reference the exact verify command and the branch-aware policy. [::nold-ai/specfact-cli-modules::README.md] [::nold-ai/specfact-cli-modules::.github/pull_request_template.md]
Assessment / implications for the PR under review
- The specfact-cli changes in the PR (branch-aware construction of VERIFY_ARGS and switching to "${VERIFY_ARGS[@]}") must preserve the tokens required by these consumers:
- --payload-from-filesystem and --enforce-version-bump must remain always-present.
- --require-signature must be present only in main-equivalent contexts.
- The script filenames (pre-commit wrappers and flag script) and hatch/python invocation tokens must not be renamed or reordered in a way that tests/workflows/docs expect exact substrings.
- Tests in this repo directly validate script contents, workflow YAML substrings, and invocation argument presence — any mismatch in token names, missing scripts, or different argument ordering could break these tests.
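The branch-aware construction described above can be sketched as follows; the flag tokens match the consumer contract, but the function name and the way the target branch is passed in are assumptions for illustration, not the repository's actual implementation:

```shell
#!/usr/bin/env bash
# Hedged sketch of branch-aware VERIFY_ARGS assembly.
set -euo pipefail

build_verify_args() {
  local branch="$1"
  # Always-present flags, per the consumer contract.
  VERIFY_ARGS=(--payload-from-filesystem --enforce-version-bump)
  if [ "$branch" = "main" ]; then
    # Signature enforcement only in main-equivalent contexts.
    VERIFY_ARGS+=(--require-signature)
  fi
}

build_verify_args main
printf 'main: %s\n' "${VERIFY_ARGS[*]}"
build_verify_args dev
printf 'dev:  %s\n' "${VERIFY_ARGS[*]}"
```

Expanding with `"${VERIFY_ARGS[@]}"` keeps each token a separate argument, which is what an arg-array invocation like the hatch `{args}` entry relies on; joining the array into a single string would change the argv the verify script sees.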
🔇 Additional comments (2)

src/specfact_cli/modules/init/module-package.yaml (1)

2-2: Version bump is consistent with manifest mutation.
`version: 0.1.27` is appropriate alongside integrity metadata changes and maintains manifest version discipline.

CHANGELOG.md (1)

13-105: Well-structured 0.46.1 release entry and category coverage
The new release block is clearly organized and captures the major CI, module-signing, pre-commit, and docs/policy changes expected for this merge-to-main release.
- Add sign-on-dispatch job with base_branch/version_bump inputs and merge-base signing
- Rename approval job to sign-on-approval; fix concurrency for manual runs
- Document default-branch vs Run workflow on dev; update tests and CHANGELOG
- Refactor workflow tests to satisfy code-review complexity gate

Made-with: Cursor
Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
Align VERIFY_ARGS order; use immutable PR base SHA and branch checkout for approval signing; make PyPI ahead-of-registry check lenient on network failures in PR jobs. Refactor workflow policy tests for radon complexity; add beartype and icontract to the PyPI check script; restore init manifest signature field where applicable.

Made-with: Cursor
The prior commit added integrity.signature without incrementing the module version, so sign-modules verify failed (version stayed at 0.1.27 versus HEAD~1). Refresh the checksum; CI will re-add signatures via the Module Signature Hardening workflow_dispatch.

Made-with: Cursor
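The failure mode described above, integrity metadata changing while the manifest version stays put, can be sketched as a simple comparison. Variable names here are illustrative; the real check derives both versions from git (HEAD versus HEAD~1):

```shell
#!/usr/bin/env bash
# Sketch of version-bump enforcement: a changed integrity field with an
# unchanged manifest version must fail verification.
set -euo pipefail

prev_version="0.1.27"   # manifest version at HEAD~1
curr_version="0.1.27"   # manifest version at HEAD (signature added, no bump)
integrity_changed=1     # integrity.signature was modified in this commit

if [ "$integrity_changed" -eq 1 ] && [ "$prev_version" = "$curr_version" ]; then
  verify_status="fail: version bump required"
else
  verify_status="pass"
fi
echo "$verify_status"
```

Bumping `curr_version` to 0.1.28 (or leaving the integrity block untouched) would flip the outcome to pass, which is why the fix pairs the checksum refresh with a version increment.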
Summary
Merges current `dev` into `main` after PR #501 landed on `dev`.

Highlights
- Pre-commit: quality checks (`pre-commit-quality-checks.sh`), branch-aware module verify (`pre-commit-verify-modules.sh`), legacy shim `pre-commit-verify-modules-signature.sh`, `.mdc` staged Markdown handling, Bash 3.2-safe markdown lists; Block 2 no longer skips on `pyproject.toml`/`setup.py` alone.
- All changes on `dev` since the last `main` release.

Verification
The contents of `dev` match the intended PyPI release before merge.

Opened via `gh pr create --base main --head dev` after `git fetch origin`.

Made with Cursor