Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Signed-off-by: Dom <39115308+djm81@users.noreply.github.com>
* feat: launch module marketplace lifecycle and trust-first UX

  Deliver the central module marketplace workflow with source-aware discovery, lifecycle management, and trust/publisher visibility so users can safely manage official vs local modules. This also aligns docs and OpenSpec artifacts with the shipped behavior, including command introspection and clearer install/uninstall guidance.

  Co-authored-by: Cursor <cursoragent@cursor.com>

* fix: respect explicit discovery roots in module tests

  Disable implicit legacy/workspace roots when explicit roots are passed to module discovery so isolated test roots are honored and deterministic.

  Co-authored-by: Cursor <cursoragent@cursor.com>

* fix: enforce safe module extraction and upgrade reinstall

---------

Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
* feat: add bundle-mapper module with confidence-based spec-to-bundle mapping

  - BundleMapping model and BundleMapper engine (explicit label, historical, content similarity)
  - Mapping history persistence and MappingRule (save_user_confirmed_mapping, load_bundle_mapping_config)
  - Interactive UI (ask_bundle_mapping) with Rich confidence visualization
  - Unit tests and TDD_EVIDENCE for bundle-mapper-01 (OpenSpec #121)

  Co-authored-by: Cursor <cursoragent@cursor.com>

* fix(bundle-mapper): address PR review findings (P1/P2)

  - P1 interactive: no default accept for low-confidence; use default only when conf >= 0.5
  - P1 history: ignore empty key fields in item_keys_similar (only count non-empty matches)
  - P2 engine: add historical weight only when hist_bundle == primary_bundle_id
  - Add test_item_keys_similar_empty_fields_not_counted to lock empty-key behavior

  Co-authored-by: Cursor <cursoragent@cursor.com>

* fix: address bundle-mapper review defects with tdd evidence

* test: make specmatic integration tests plugin-agnostic

---------

Signed-off-by: Dom <39115308+djm81@users.noreply.github.com>
Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
* feat: add interactive backlog issue creation flow
* feat(backlog): add interactive issue creation and mapping setup
* fix: align backlog protocol test fakes and module manifest versions
* Fix type error
* fix(backlog): persist ado sprint and normalize github create id
* fix(backlog-core): address review findings for add/config/github

---------

Signed-off-by: Dom <39115308+djm81@users.noreply.github.com>
Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
#292)

* chore(openspec): archive completed changes and align architecture docs
* docs(architecture): refresh discrepancies report after arch-08 remediation

---------

Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
…log prompt (#294)

* fix(backlog): restore installed runtime discovery and add backlog prompt
* Archive bugfix change

---------

Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
* fix: harden module lifecycle bootstrap and signing workflows
* fix: stabilize module signature hashing across environments
* fix: stabilize bundle module signature verification in CI

---------

Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
…496)

* fix: safe merge for VS Code settings.json on init ide (profile-04)

  - Add project_artifact_write.merge_vscode_settings_prompt_recommendations with fail-safe on invalid JSON / bad chat shape; --force backs up to .specfact/recovery/ then replaces.
  - Route ide_setup create_vscode_settings through helper; thread force; catch errors for CLI exit.
  - Lint gate: scripts/verify_safe_project_writes.py blocks json.load/dump in ide_setup.py.
  - Tests, installation docs, 0.45.2 changelog and version pins.

  OpenSpec: profile-04-safe-project-artifact-writes
  Made-with: Cursor

* fix(profile-04): satisfy review gate, pin setuptools for semgrep

  - Refactor project_artifact_write merge path (KISS); icontract predicates
  - Deduplicate ide_setup prompt helpers; import from project_artifact_write
  - verify_safe_project_writes: ast.walk, contracts, beartype
  - Pin setuptools<82 for Semgrep pkg_resources chain
  - Update TDD_EVIDENCE and tasks checklist

  Made-with: Cursor

* ci: run safe-write verifier in PR orchestrator lint job

  Match hatch run lint by invoking scripts/verify_safe_project_writes.py after ruff/basedpyright/pylint. Use set -euo pipefail so the first lint failure is not masked by later commands.

  Made-with: Cursor

* fix(profile-04): address CodeRabbit review (docs, guard, contracts, tests)

  - Wrap installation.md VS Code merge paragraph to <=120 chars per line
  - tasks 4.7 + TDD_EVIDENCE: openspec validate --strict sign-off
  - verify_safe_project_writes: detect from-json import and aliases
  - settings_relative_nonblank: reject absolute paths and .. segments
  - ide_setup: _handle_structured_json_document_error for duplicate handlers
  - ProjectWriteMode docstring (reserved policy surface); backup stamp + collision loop
  - Tests: malformed settings preserved on init ide exit; force+chat coercion; AST guard tests

  Made-with: Cursor

* fix(profile-04): JSON5 settings, repo containment, review follow-ups

  - merge_vscode_settings: resolve containment before mkdir/write; JSON5 load/dump (JSONC comments; trailing_commas=False for strict JSON output)
  - ide_setup: empty prompts_by_source skips catalog fallback (_finalize allow_empty_fallback)
  - verify_safe_project_writes: detect import json as js attribute calls
  - contract_predicates: prompt_files_all_strings accepts list[Any] for mixed-type checks
  - Tests: symlink escape, JSONC merge, empty export strip, import-json-as-js guard
  - tasks.md / TDD_EVIDENCE: wrap lines to <=120 chars; CHANGELOG + json5 dep + setup.py sync

  Made-with: Cursor

* docs(profile-04): tasks pre-flight + full pytest; narrow icontract ensure

  - tasks 1.1: hatch env create then smart-test-status and contract-test-status
  - tasks 4.3: add hatch test --cover -v to quality gates
  - TDD_EVIDENCE: shorter module-signatures and report lines (<=120 cols)
  - project_artifact_write: isinstance(result, Path) in @Ensure postconditions

  Made-with: Cursor

* fix: clear specfact code review on safe-write modules

  - verify_safe_project_writes: flatten json binding helpers; stderr writes instead of print; inline Import/ImportFrom loops to drop duplicate-shape DRY
  - project_artifact_write: _VscodeChatMergeContext dataclass (KISS param count); typed chat_body cast before .get for pyright

  Made-with: Cursor

* docs(profile-04): record hatch test --cover -v in TDD_EVIDENCE

  Align passing evidence with tasks.md 4.3 full-suite coverage gate.

  Made-with: Cursor

* fix: reduce KISS blockers (bridge sync contexts, tools, partial adapters)

  Refactors high-parameter call sites into dataclasses/context objects and splits hot spots (export devops pipeline, smart_test_coverage incremental run, suggest_frontmatter, crosshair summary loop). API: BridgeSync.export_change_proposals_to_devops(adapter_type, ExportChangeProposalsOptions | None); LoggerSetup.create_logger(name, LoggerCreateOptions | None); run_crosshair(path, CrosshairRunOptions | None). Full specfact code review --scope full still reports error-severity kiss/radon findings (remaining nesting/LOC and param counts in ADO, analyzers, generators, source_scanner, module_installer, bundle-mapper, etc.). Gate PASS requires follow-up.

  Made-with: Cursor

* Fix review findings and sign modules

* fix(tests): register dynamic check_doc_frontmatter module; align _update_cache tests

  - Insert check_doc_frontmatter into sys.modules before exec_module so dataclasses can resolve string annotations (fixes Docs Review / agent rules governance fixture).
  - Call SmartCoverageManager._update_cache with _SmartCacheUpdate after signature refactor (fixes basedpyright reportCallIssue).

  Made-with: Cursor

* fix(tests): align install_module mocks with InstallModuleOptions; register verify_bundle script

  - Monkeypatch/patch fakes now accept (module_id, options=None) matching install_module(module_id, InstallModuleOptions(...)).
  - Read install_root, trust_non_official, non_interactive, reinstall from InstallModuleOptions in CLI command tests.
  - Dynamic load of verify-bundle-published registers sys.modules before exec_module (same dataclass annotation issue as check_doc_frontmatter).

  Made-with: Cursor

---------

Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
- Sync version across pyproject.toml, setup.py, and package __version__
- Add CHANGELOG [0.46.0] for PyPI release

Made-with: Cursor
Caution: Review failed. The pull request was closed or merged during review.

📝 Walkthrough

SpecFact CLI 0.46.0 Release Summary

Version & Packaging
User-visible behavior and CLI surface
Contract / Public API impact
Testing, Contracts, and Quality Gates
OpenSpec, Docs, and CHANGELOG
Internal refactoring (no behavioral change)
Known reviewer guidance & priority suggestions (from review notes)
Summary of impact
Walkthrough

Bumps the package to v0.46.0; adds a JSON5-based safe VS Code settings merge with recovery backups and an AST-based verifier; wires the verifier into the CI lint job; and introduces many internal dataclass option/context refactors across adapters, sync, generators, analyzers, validators, tooling, and tests.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant CLI as IDE Setup CLI
    participant Merge as Safe Merge Helper
    participant JSON5 as JSON5 Parser
    participant FS as File System
    participant Backup as Recovery Backup
    CLI->>Merge: create_vscode_settings(repo_path, force)
    Merge->>FS: Validate file path is inside repo
    FS-->>Merge: Path OK
    Merge->>FS: Read existing settings.json
    alt file exists
        FS-->>JSON5: raw content
        JSON5->>JSON5: parse (JSONC/JSON5)
        JSON5-->>Merge: settings object
        Merge->>Merge: ensure chat/chat.promptFilesRecommendations structure
        Merge->>Merge: merge & dedupe prompt file paths
        alt structure valid
            Merge->>FS: write canonical JSON5
            FS-->>CLI: return success (0)
        else structure invalid
            alt force = true
                Merge->>Backup: create timestamped backup
                Backup-->>FS: write backup under .specfact/recovery
                Merge->>FS: write replaced settings file
                FS-->>CLI: return success (0)
            else
                Merge-->>CLI: raise StructuredJsonDocumentError (exit 1)
            end
        end
    else file missing
        Merge->>FS: create directories and write new settings.json
        FS-->>CLI: return success (0)
    end
```
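The happy-path merge step in the diagram can be sketched as a small helper. This is a minimal illustration using plain dicts: the shipped helper (project_artifact_write.merge_vscode_settings_prompt_recommendations) additionally parses JSON5/JSONC, validates repo containment, and handles `--force` backups; the simplified name and the list-of-paths shape below are assumptions.

```python
def merge_prompt_recommendations(existing: dict, new_paths: list[str]) -> dict:
    """Merge prompt file paths into chat.promptFilesRecommendations, deduplicating.

    Raises ValueError on an invalid ``chat`` shape; callers decide whether to
    back up and replace (--force) or abort with a nonzero exit code.
    """
    chat = existing.setdefault("chat", {})
    if not isinstance(chat, dict):
        raise ValueError("chat block has an unexpected shape")
    recs = chat.setdefault("promptFilesRecommendations", [])
    if not isinstance(recs, list):
        raise ValueError("promptFilesRecommendations has an unexpected shape")
    for path in new_paths:
        if path not in recs:  # dedupe while preserving existing order
            recs.append(path)
    return existing
```

Existing entries are kept in place, so a user's hand-edited ordering survives the merge.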
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~105 minutes

Possibly related PRs
Actionable comments posted: 38
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/specfact_cli/sync/bridge_sync.py (1)
106-414: 🧹 Nitpick | 🔵 Trivial

Architectural note: consider extracting dataclasses to a separate types module.
With 23+ dataclasses added (lines 106-414), this file now exceeds 3700 lines. For long-term maintainability and to support cross-repo impact analysis for bundled modules, consider extracting the dataclass definitions to a dedicated `bridge_sync_types.py` module:

src/specfact_cli/sync/
├── bridge_sync.py        # Core logic
├── bridge_sync_types.py  # Dataclasses and type definitions
└── ...

This separation:
- Enables independent type evolution and versioning
- Simplifies imports for consumers who only need types
- Reduces cognitive load when reviewing business logic
- Aligns with the OpenSpec traceability principle (types as contracts)
Not blocking for this release, but worth tracking for post-0.46.0 cleanup.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/specfact_cli/sync/bridge_sync.py` around lines 106 - 414, Move the large set of dataclass definitions (e.g., SyncOperation, SyncResult, ExportChangeProposalsOptions, _AlignmentReportContentInput, _GithubIssueSearchInput, _AdoIssueSearchInput, _RemoteIssueResolutionInput, _IssueUpdatePayload, _ExportIterationTracking, _ChangeProposalExportLoopContext, _BundleAdapterExportInput, _BundleSingleExportInput, _WorkItemVerifyInput, _FetchIssueSyncStateInput, _PushIssueBodyInput, _IssueContentUpdateInput, _EmitCodeChangeProgressInput, _CodeChangeTrackingInput, and others) out of bridge_sync.py into a new module bridge_sync_types.py; keep the same class names and annotations, add selective exports (or __all__) in the new module, then update bridge_sync.py to import these types from bridge_sync_types (e.g., from .bridge_sync_types import SyncOperation, SyncResult, ExportChangeProposalsOptions, ...); ensure no runtime logic moved and run tests/linters to fix any import order/type-check issues.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CHANGELOG.md`:
- Line 13: The release header "## [0.46.0] - 2026-04-13" in CHANGELOG.md is
future-dated; replace the explicit date with "Unreleased" (i.e. change the header
to "## [0.46.0] - Unreleased") and leave the actual date to be stamped when the
release/tag is cut so release automation and readers are not confused.
- Around line 46-47: The CHANGELOG.md bullet about "Dev / Semgrep" is over the
project's markdown line-length limit; edit the bullet in CHANGELOG.md to wrap
the text to <=120 characters per line (for example break after "Hatch and
`[dev]` extras pin `setuptools<82`" and continue the remainder on the next line)
so the sentence remains the same but no line exceeds the limit; ensure markdown
formatting and the exact phrase "setuptools<82" and reference to
Semgrep/OpenTelemetry/pkg_resources are preserved.
In `@docs/_support/readme-first-contact/capture-readme-output.sh`:
- Line 8: The default CLI_VERSION variable is stale (currently
CLI_VERSION="${CLI_VERSION:-0.45.2}"); update the default value to the correct
release line by changing the fallback to 0.46.0 so generated sample artifacts
match this release—modify the CLI_VERSION default assignment in
capture-readme-output.sh accordingly.
In `@docs/_support/readme-first-contact/sample-output/capture-metadata.txt`:
- Line 3: The documented CLI version in the sample output is still pinned to
0.45.2; update every occurrence of that version and the associated install
command to 0.46.0 (replace "0.45.2" with "0.46.0" in the CLI version line and
update any install command strings that reference 0.45.2), including the other
occurrences referenced (lines 12-13) so the sample metadata and runnable docs
match the 0.46.0 release.
In `@docs/_support/readme-first-contact/sample-output/review-output.txt`:
- Line 12: Update the sample output banner string in the file (the line
currently containing "SpecFact CLI - v0.45.2") to match the release target
"v0.46.0"; locate the banner text in
docs/_support/readme-first-contact/sample-output/review-output.txt (the sample
output banner line) and change the version suffix from v0.45.2 to v0.46.0 so the
sample output aligns with the release metadata.
In `@docs/getting-started/installation.md`:
- Around line 78-81: Update the sentence that currently reads “not valid JSON”
to clarify the actual CLI behavior: say the file is “not parseable or mergeable
by the CLI” (or “cannot be parsed/merged by the CLI”) so it matches runtime
behavior for `.vscode/settings.json` and the `chat` block; keep the guidance
about `specfact init ide --force` and the timestamped backup under
`.specfact/recovery/` intact.
In `@modules/bundle-mapper/src/bundle_mapper/mapper/engine.py`:
- Around line 219-224: The three identical _SignalContribution constructions
should be consolidated into a small local factory to remove duplication: add a
local helper (e.g., make_signal_contribution or a lambda) that accepts
bundle_id, score, weight, source and returns _SignalContribution, then replace
the repeated constructions at the _SignalContribution(...) sites (the blocks
that use explicit_bundle/explicit_score/WEIGHT_EXPLICIT/"explicit_label" and the
similar blocks at lines referencing the other two occurrences) with calls to
that helper; ensure you update all three spots (the occurrences that currently
build _SignalContribution using explicit_bundle, explicit_score, WEIGHT_EXPLICIT
and the other repeated patterns) so future signal additions only need to call
the helper.
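The suggested consolidation might look like the following sketch. The `_SignalContribution` field names and the `WEIGHT_EXPLICIT` constant are assumptions inferred from the call sites named in the prompt.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class _SignalContribution:
    bundle_id: str
    score: float
    weight: float
    source: str


def make_signal_contribution(bundle_id: str, score: float, weight: float, source: str) -> _SignalContribution:
    """Local factory: one construction site for every mapping signal."""
    return _SignalContribution(bundle_id=bundle_id, score=score, weight=weight, source=source)


WEIGHT_EXPLICIT = 1.0  # illustrative weight constant

# Each of the three repeated construction blocks collapses to a one-line call:
explicit = make_signal_contribution("bundle-a", 0.9, WEIGHT_EXPLICIT, "explicit_label")
```

Adding a fourth signal then only requires one more call to the factory rather than another hand-rolled constructor block.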
In `@openspec/changes/profile-04-safe-project-artifact-writes/TDD_EVIDENCE.md`:
- Around line 37-39: The recorded test coverage in TDD_EVIDENCE.md shows "TOTAL
... 62%" which is below the required 80% policy; either increase test coverage
to meet policy or record an approved exception. To fix, add targeted tests for
the uncovered areas and re-run the suite using the existing hatch test --cover
-v / pytest-cov workflow until the pytest-cov footer shows TOTAL >= 80% (and
100% for any listed critical paths), then update the TDD_EVIDENCE.md entry to
reflect the new TOTAL and test counts; alternatively, if an exception is
approved, add a brief approved-exception statement in TDD_EVIDENCE.md
referencing the approver and justification instead of the coverage increase.
Ensure you update the same documented line that currently shows "TOTAL ... 62%"
and include the exact new coverage number or the explicit approval note.
In `@README.md`:
- Line 30: Update the two user-facing version references in README.md that
currently read "SpecFact CLI - v0.45.2" (and the other occurrence at line 87) to
the new release "0.46.0" so the documentation and release branch reflect the
version this PR publishes.
In `@scripts/check_doc_frontmatter.py`:
- Around line 432-439: The helper
_load_existing_agent_rule_frontmatter_overrides currently only catches OSError
around parse_frontmatter(path); update it to also catch yaml.YAMLError (or a
tuple (OSError, yaml.YAMLError)) so malformed YAML falls back to returning the
default _AgentRuleFrontmatterDraft instead of crashing; keep the existing
behavior (return draft) and optionally log or ignore the YAML error, referencing
parse_frontmatter and _AgentRuleFrontmatterDraft so the change is easy to find.
In `@scripts/verify_safe_project_writes.py`:
- Around line 34-38: The current _register_json_from_func_alias only handles
explicit imports and ignores wildcard imports (from json import *), allowing
calls like loads(...) to bypass detection; update _register_json_from_func_alias
to detect alias.name == "*" and in that case iterate over _JSON_IO_NAMES and add
entries to func_aliases for each name (using each name as the local key and
f"json.{name}" as the value), while preserving the existing behavior for
non-wildcard aliases; reference _register_json_from_func_alias, _JSON_IO_NAMES
and _collect_json_io_offenders when making the change.
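The wildcard-import case can be sketched with the stdlib `ast` module. The helper name and the `_JSON_IO_NAMES` tuple below are illustrative stand-ins for the verifier's internals.

```python
import ast

_JSON_IO_NAMES = ("load", "loads", "dump", "dumps")  # json I/O names the verifier flags


def collect_json_func_aliases(source: str) -> dict[str, str]:
    """Map local names to the json.* function they alias, including `from json import *`."""
    func_aliases: dict[str, str] = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom) and node.module == "json":
            for alias in node.names:
                if alias.name == "*":
                    # Wildcard import: every json I/O name becomes a local binding.
                    for name in _JSON_IO_NAMES:
                        func_aliases[name] = f"json.{name}"
                else:
                    func_aliases[alias.asname or alias.name] = f"json.{alias.name}"
    return func_aliases
```

With this change, a bare `loads(...)` call after `from json import *` resolves to `json.loads` and can no longer slip past the offender check.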
In `@src/specfact_cli/adapters/ado.py`:
- Around line 750-775: The helper _merge_created_work_item_source_tracking
currently returns early when proposal_data.get("source_tracking") is falsy,
dropping the new tracking info; instead, ensure the new tracking_update is
persisted into proposal_data: if source_tracking is missing or falsy, set
proposal_data["source_tracking"] to tracking_update (or [tracking_update] if you
prefer list semantics); if source_tracking is a dict, convert via
_as_str_dict(source_tracking), update it with tracking_update and assign it back
to proposal_data["source_tracking"]; if it's a list, append tracking_update (as
currently done) so the proposal_data is updated for downstream syncs.
In `@src/specfact_cli/adapters/github.py`:
- Around line 833-837: The try/except around calling
_git_config_content_indicates_github(git_config.read_text(encoding="utf-8")) is
currently swallowing all exceptions; change it to catch only expected
I/O/decoding errors (e.g., OSError, UnicodeDecodeError) and log the exception
(use the module logger or an existing logger) so failures are visible, and allow
unexpected exceptions to propagate instead of using a bare except Exception;
update the block that reads git_config and calls
_git_config_content_indicates_github to perform targeted exception handling and
logging.
In `@src/specfact_cli/adapters/openspec_parser.py`:
- Around line 395-406: The parser currently treats any line starting with "##"
as a new section and abandons the content body; update the logic so "##" only
starts a new section when we're not already inside the "content" section.
Concretely, inside the parsing loop adjust the if handling for
line.startswith("##") to skip setting current_section when current_section ==
"content" (or instead append the literal line to delta_content when
current_section == "content"); ensure delta_content (and variables change_type,
feature_id) remain unchanged and subsequent lines are preserved when
current_section == "content".
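The stricter section handling can be sketched as a pure function over lines; the function name and section naming below are illustrative, not the parser's actual API.

```python
def parse_delta_sections(lines: list[str]) -> str:
    """Collect the body of the `## content` section without truncating at nested `##` headings."""
    current_section: str | None = None
    delta_content: list[str] = []
    for line in lines:
        # Only treat "##" as a new section when not already inside "content";
        # otherwise keep the literal line as part of the body.
        if line.startswith("## ") and current_section != "content":
            current_section = line[3:].strip().lower()
            continue
        if current_section == "content":
            delta_content.append(line)
    return "\n".join(delta_content)
```

This preserves nested `##` headings inside the content body instead of abandoning the rest of the section.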
In `@src/specfact_cli/adapters/speckit.py`:
- Around line 523-535: The code assigns list/dict fields from payload directly
(e.g., existing_feature.outcomes, existing_feature.acceptance,
existing_feature.stories, constraints and when constructing Feature(...)), which
can lead to shared mutable references; update these assignments in
existing_feature and the Feature constructor to use defensive copies (e.g.,
list(payload.outcomes) or copy of dict/list from
payload.spec_data.get("edge_cases", [])) so project_bundle.features owns its own
data and is not affected by upstream mutations.
- Around line 566-575: Tests are not asserting that the payload fields passed
into _SpeckitFeatureUpsert (outcomes, acceptance, stories,
edge_cases/constraints) are correctly extracted and mapped from the parsed spec;
update unit test test_import_artifact_specification to assert feature.outcomes,
feature.acceptance, feature.stories, and feature.constraints equal the expected
values from the spec_data, and update the integration test covering the import
flow to validate the full extraction pipeline (e.g., requirements → outcomes,
success_criteria → acceptance, edge_cases → constraints) so any regressions in
the mapping logic are caught. Ensure assertions reference the created Feature
object produced by the import path and that the code path which constructs
_SpeckitFeatureUpsert still includes and assigns outcomes, acceptance, stories,
and edge_cases into the Feature model.
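The defensive-copy fix for the first finding can be sketched as follows; the `Feature` fields shown are a simplified subset, and `upsert_feature` is an illustrative stand-in for the real upsert path.

```python
from dataclasses import dataclass, field


@dataclass
class Feature:
    # Illustrative subset of the real Feature model
    outcomes: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)


def upsert_feature(payload_outcomes: list[str], payload_edge_cases: list[str]) -> Feature:
    # Defensive copies: the Feature owns its own lists, so later mutation
    # of the upstream payload cannot leak into the project bundle.
    return Feature(outcomes=list(payload_outcomes), constraints=list(payload_edge_cases))
```

A quick check shows why this matters: mutating the payload after the upsert leaves the bundle untouched.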
In `@src/specfact_cli/analyzers/ambiguity_scanner.py`:
- Around line 181-184: The code currently swallows missing scanner handlers by
returning an empty list when scanners.get(category) is None; change this to fail
fast by raising a clear exception (e.g., KeyError or RuntimeError) that includes
the category name and context instead of returning [], so replace the early
return in the scanner lookup (the scanners.get(category) / scanner variable
usage and the scanner(plan_bundle) call) with an explicit raise that enforces
the taxonomy-to-scanner contract.
In `@src/specfact_cli/analyzers/code_analyzer.py`:
- Around line 1205-1211: The loop correctly creates a local
_SemgrepFeatureBuckets and accumulates findings, but
_accumulate_semgrep_finding_bucket currently mutates self.themes directly;
change it so it only updates the passed-in buckets object (do not touch
self.themes) and move any application of buckets back into
_apply_semgrep_findings_to_feature so the only code that mutates self.themes is
inside that single method; update _accumulate_semgrep_finding_bucket to accept
and modify the buckets parameter, keep _SemgrepFeatureBuckets local to the loop,
and ensure _apply_semgrep_findings_to_feature performs the final self.themes
update (with any needed synchronization) using the buckets.
- Around line 1153-1175: The method _accumulate_semgrep_finding_bucket mutates
self.themes from worker threads causing a data race; instead add a themes field
to the _SemgrepFeatureBuckets dataclass (e.g., buckets.themes: set), stop
touching self.themes inside _accumulate_semgrep_finding_bucket and append theme
names to buckets.themes (or a local set) for the relevant categories, update
_extract_feature_for_results to include the bucket.themes into the per-file
result payload, and finally merge those returned themes into self.themes within
the sequential _merge_analysis_results routine so all theme aggregation happens
single-threaded.
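The bucket-based pattern the review asks for can be sketched like this: workers only touch their own bucket, and theme aggregation happens once, single-threaded, in the merge step. All names here are simplified stand-ins for the analyzer's internals.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field


@dataclass
class _SemgrepFeatureBuckets:
    # Per-file buckets carry their own themes instead of touching shared state.
    findings: list[str] = field(default_factory=list)
    themes: set[str] = field(default_factory=set)


def analyze_file(name: str) -> _SemgrepFeatureBuckets:
    """Worker: accumulate into a local bucket only (no shared mutation)."""
    buckets = _SemgrepFeatureBuckets()
    buckets.findings.append(f"finding-in-{name}")
    buckets.themes.add(f"theme-{name}")
    return buckets


def merge_analysis_results(results: list[_SemgrepFeatureBuckets]) -> set[str]:
    """Sequential merge: the only place shared theme state is updated."""
    themes: set[str] = set()
    for bucket in results:
        themes |= bucket.themes
    return themes


with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(analyze_file, ["a.py", "b.py"]))
all_themes = merge_analysis_results(results)
```

Because no worker writes to shared state, the data race on `self.themes` disappears without any locking.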
In `@src/specfact_cli/analyzers/graph_analyzer.py`:
- Around line 189-195: Add a Google-style docstring to the new helper method
_add_call_graph_edges_for_module describing its purpose (adding call edges for a
specific module to the StrDiGraph), parameters (graph: StrDiGraph, module_name:
str, call_graph: dict[str, list[str]], python_files: list[Path]) with types and
brief descriptions, the return value (None), and any assumptions/constraints
(e.g., call_graph keys are module names, targets are function identifiers,
python_files contains Paths used to resolve nodes); place the docstring
immediately under the def line in _add_call_graph_edges_for_module and follow
Google-style sections: Args and Returns (and Raises if applicable).
In `@src/specfact_cli/common/logger_setup.py`:
- Around line 592-602: The call constructing the pipeline config uses positional
args which is fragile; update the call to _FileOutputPipelineConfig inside
cls._attach_file_output_pipeline to pass explicit keyword arguments (e.g.,
logger_name=logger_name, logger=logger, level=level, log_format=log_format,
log_file=log_file, use_rotating_file=use_rotating_file, append_mode=append_mode)
so the _FileOutputPipelineConfig invocation is unambiguous and resilient to
field reorderings.
In `@src/specfact_cli/generators/persona_exporter.py`:
- Around line 23-30: Add a concise Google-style docstring to the dataclass
_OwnedFeatureFieldSpec describing the purpose of the class and each field's
semantics; mention that this frozen dataclass represents a feature field spec
owned by named features and explain the fields owns (list of owner feature
names), pattern (match pattern for field selection), field_name (target
attribute name), value (optional default/constant value), and use_getattr
(whether to use getattr access). Place the docstring immediately below the class
definition line for _OwnedFeatureFieldSpec.
- Around line 202-210: Summary: Falsy but valid values are being dropped because
the code uses truthiness checks. Locate the block in persona_exporter.py that
checks spec.use_getattr and spec.value (references: match_section_pattern,
spec.owns, spec.use_getattr, spec.field_name, feature_dict, feature, spec.value)
and change the truthy checks to explicit None checks: replace "if val:" with "if
val is not None:" and replace "if spec.value:" with "if spec.value is not None:"
so that valid falsy values (0, False, empty string) are preserved in
feature_dict.
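The truthiness-vs-None distinction can be demonstrated with a tiny sketch (the `set_field` helper is hypothetical; the real code applies the same check inline around `spec.value` and `getattr` results).

```python
def set_field(feature_dict: dict, field_name: str, val: object) -> None:
    # Explicit None check: falsy-but-valid values (0, False, "") are preserved,
    # whereas a bare `if val:` would silently drop them.
    if val is not None:
        feature_dict[field_name] = val


d: dict = {}
set_field(d, "priority", 0)
set_field(d, "enabled", False)
set_field(d, "notes", None)  # genuinely absent value is skipped
```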
In `@src/specfact_cli/generators/test_to_openapi.py`:
- Around line 334-343: Add Google-style docstrings to the two new helper methods
(the _coerce_ast_dict_to_plain method and the sibling _coerce_ast_list_to_plain
method) describing their purpose, parameters, and return values; for
_coerce_ast_dict_to_plain document that it accepts value: ast.Dict and returns
dict[str, Any] after extracting keys/values, include Args: value (ast.Dict):
description, Returns: dict[str, Any]: description of the resulting plain dict,
and any Raises if applicable; do the same for _coerce_ast_list_to_plain (value:
ast.List -> returns list[Any]) following Google-style sections (Args, Returns,
Raises) so the functions meet the project docstring standard.
In `@src/specfact_cli/sync/bridge_sync.py`:
- Around line 274-295: Define a Protocol that captures the minimal adapter
interface used by these dataclasses (e.g., methods like export_artifact,
import_artifact, etc.), then replace the ad-hoc Any type on the adapter field in
_ChangeProposalExportLoopContext (and other dataclasses that use adapter) with
that Protocol type to restore type-safety and IDE/static-analysis support;
ensure the Protocol name (e.g., DevOpsAdapterProtocol) lists the exact method
signatures your code calls so callers and implementations remain compatible and
update any imports/annotations referencing adapter accordingly.
- Around line 268-272: The dataclass _ExportIterationTracking is declared frozen
but holds a mutable list in source_tracking_list (and source_tracking_raw can be
a list), which allows content mutation despite frozen=True; update the class to
use immutable types (e.g., change source_tracking_list: tuple[dict[str, Any],
...] and source_tracking_raw: dict[str, Any] | tuple[dict[str, Any], ...]) and
update call sites that construct instances (locations creating
_ExportIterationTracking) to pass tuples instead of lists, or alternatively add
a brief docstring on _ExportIterationTracking stating callers must not mutate
the lists and convert incoming lists to tuples in the constructor/factory to
enforce immutability.
- Around line 163-171: There are two duplicate dataclasses
(_AdoWorkItemVerifyInput and _WorkItemVerifyInput) with identical fields;
consolidate them into a single dataclass (either keep _AdoWorkItemVerifyInput or
rename to a generic name like _WorkItemVerificationContext), remove the other
duplicate, and update all usages (notably the _verify_work_item_exists function
and any call sites) to reference the single consolidated class so there's a
single source of truth for the work-item verification context.
- Around line 126-150: ExportChangeProposalsOptions is a frozen dataclass used
as a public API but lacks cross-field validation; add a __post_init__ method on
ExportChangeProposalsOptions to validate combinations (e.g., require ado_org,
ado_project, ado_base_url, ado_work_item_type together if any ADO field set;
require repo_owner, repo_name, api_token together or use_gh_cli True; ensure
tmp_file present when export_to_tmp/import_from_tmp true) and raise ValueError
with clear messages on invalid combos; alternatively convert the class to a
Pydantic model and implement equivalent field validators using `@field_validator`
for the same rules if you prefer richer validation.
- Around line 1199-1203: The helper _bridge_sync_update_existing_issue_then_save
currently uses an assert to check payload.target_entry and payload.issue_number;
replace that assert with an explicit runtime check that raises a clear exception
(e.g., ValueError or RuntimeError) with a descriptive message if
payload.target_entry or payload.issue_number is None, and apply the same change
inside _update_existing_issue to validate the same preconditions (checking
payload.target_entry and payload.issue_number) so the contract is enforced even
when Python assertions are disabled; reference the methods
_bridge_sync_update_existing_issue_then_save and _update_existing_issue and the
attributes payload.target_entry and payload.issue_number when making the change.
- Around line 1486-1489: Replace the direct private access to
AdapterRegistry._adapters with the public existence check
AdapterRegistry.is_registered(adapter_type) before aborting; if is_registered
returns False append the same error and return the SyncResult, otherwise call
self._bridge_sync_make_devops_adapter_kwargs(...) to build adapter_kwargs and
then obtain the adapter via AdapterRegistry.get_adapter(adapter_type,
**adapter_kwargs). Ensure you reference adapter_type,
AdapterRegistry.is_registered(), AdapterRegistry.get_adapter(),
_bridge_sync_make_devops_adapter_kwargs, and SyncResult in the updated logic so
the registry API and contract (`@icontract`) are respected.
In `@src/specfact_cli/sync/bridge_watch.py`:
- Around line 30-40: The current _match_feature_id_from_pattern_parts only
validates segments before {feature_id}, allowing false matches; update it to
require file_parts length matches pattern_parts length and validate every
segment: for each index j, if pattern_parts[j] is a literal (does not start with
"{" and end with "}") ensure pattern_parts[j] == file_parts[j], and if it's a
placeholder (like "{feature_id}" or other {…}) accept any value (but still
record file_parts[feature_id_index] for the feature id). Apply the same stricter
validation logic to the other similar helper (the one at the later occurrence
mentioned in the review).
In `@src/specfact_cli/utils/contract_predicates.py`:
- Around line 63-66: The settings_relative_nonblank predicate currently uses
Path(stripped).is_absolute(), which is OS-dependent and lets Windows absolute
paths slip through on POSIX; update the check to validate against both POSIX and
Windows grammars by creating PurePosixPath(stripped) and
PureWindowsPath(stripped) and returning False if either .is_absolute() is True,
while still ensuring no path segment equals ".." (apply the same parts check for
the chosen PurePath that yields parts, or check both if necessary). Modify the
predicate (settings_relative_nonblank) accordingly and add unit tests that
assert False for inputs like r"C:\settings.json" and "D:\\settings.json" to
cover Windows absolute-path cases.
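A minimal sketch of the dual-grammar predicate (the exact invariants of the real `settings_relative_nonblank` may differ; this shows only the OS-independent absolute-path and traversal checks):

```python
from pathlib import PurePosixPath, PureWindowsPath


def settings_relative_nonblank(value: str) -> bool:
    """Accept only non-blank, relative, traversal-free settings paths.

    Validates against both POSIX and Windows path grammars, so a
    Windows-absolute path such as C:\\settings.json is rejected even
    when the predicate runs on a POSIX host.
    """
    stripped = value.strip()
    if not stripped:
        return False
    posix = PurePosixPath(stripped)
    windows = PureWindowsPath(stripped)
    if posix.is_absolute() or windows.is_absolute():
        return False
    # Reject parent-directory traversal under either grammar.
    return ".." not in posix.parts and ".." not in windows.parts
```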
In `@src/specfact_cli/validators/cli_first_validator.py`:
- Around line 191-194: The validator currently uses projects_dir.exists() (and
similar checks) before calling projects_dir.iterdir() and features_dir.glob(),
which will raise NotADirectoryError if those paths are regular files; change the
guards to use is_dir() and skip non-directories (e.g., if not
projects_dir.is_dir(): return suspicious_files and inside the loop skip
bundle_dir unless bundle_dir.is_dir()) so iteration and globbing only operate on
directories; update the corresponding checks around projects_dir, bundle_dir,
and features_dir to defend against tampered filesystem state.
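The `is_dir()` guards can be sketched like this. The function name, the `features` subdirectory layout, and the `*.tmp` glob pattern are illustrative assumptions taken loosely from the comment above, not the validator's actual logic:

```python
from pathlib import Path


def collect_suspicious_files(projects_dir: Path) -> list[Path]:
    """Scan bundle directories, skipping anything that is not a directory."""
    suspicious: list[Path] = []
    # A tampered regular file at this path would make iterdir() raise
    # NotADirectoryError, so guard with is_dir() instead of exists().
    if not projects_dir.is_dir():
        return suspicious
    for bundle_dir in projects_dir.iterdir():
        if not bundle_dir.is_dir():
            continue
        features_dir = bundle_dir / "features"
        if not features_dir.is_dir():
            continue
        suspicious.extend(features_dir.glob("*.tmp"))
    return suspicious
```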
In `@src/specfact_cli/validators/contract_validator.py`:
- Around line 19-32: The public dataclass ContractDensityMetrics lacks runtime
type and invariant checks; add runtime validation by decorating the class with
`@beartype` and applying icontract invariants or implement a __post_init__ that
uses beartype.validate and icontract.require/ensure checks to enforce field
types and invariants (e.g., non-negative totals, percentages within 0.0–100.0,
contracts_per_story/invariants_per_feature numeric). Ensure you import and apply
beartype and icontract consistently with the project's pattern so direct
instantiation of ContractDensityMetrics triggers the same runtime guards as the
decorated factory functions.
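A plain-stdlib sketch of the `__post_init__` approach. The field names here are hypothetical (the real dataclass has its own fields), and a faithful fix would use the project's `@beartype`/icontract decorators rather than manual checks; this only illustrates that direct instantiation trips the same guards:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ContractDensityMetrics:
    """Sketch with hypothetical fields; the real class differs."""

    total_contracts: int
    total_stories: int
    contract_coverage_percent: float
    contracts_per_story: float

    def __post_init__(self) -> None:
        # Stdlib stand-in for @beartype field checks plus icontract
        # invariants, so direct instantiation is guarded like the factories.
        if not isinstance(self.total_contracts, int) or not isinstance(self.total_stories, int):
            raise TypeError("totals must be int")
        if self.total_contracts < 0 or self.total_stories < 0:
            raise ValueError("totals must be non-negative")
        if not 0.0 <= self.contract_coverage_percent <= 100.0:
            raise ValueError("contract_coverage_percent must be within 0.0-100.0")
```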
In `@src/specfact_cli/validators/sidecar/crosshair_summary.py`:
- Around line 67-75: The current dedupe checks if any(v["function"] in line for
v in violation_details) which does substring matching and can drop distinct
functions; instead, first parse the line with function_name_pattern to get
func_name (as done with match = function_name_pattern.match(line) and func_name
= match.group(1).strip()), then perform deduplication by exact equality against
existing entries in violation_details (e.g., compare func_name to
v["function"]), and only if no exact match exists append {"function": func_name,
"counterexample": {}, "raw": line.strip()}; keep the existing "/" checks (if "/"
in func_name or func_name.startswith("/")) after parsing and before appending.
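The exact-equality dedupe can be sketched as below. The shape of `function_name_pattern` is an assumption (the real regex lives in `crosshair_summary.py`); what matters is comparing parsed names, not substrings:

```python
import re

# Assumed pattern shape: everything before the first colon is the function name.
function_name_pattern = re.compile(r"^\s*([^:]+):")


def add_violation(violation_details: list[dict], line: str) -> None:
    """Record a violation line, deduplicating by exact function name."""
    match = function_name_pattern.match(line)
    if not match:
        return
    func_name = match.group(1).strip()
    if "/" in func_name or func_name.startswith("/"):
        return
    # Exact equality: substring matching would drop "f" once "foo" exists.
    if any(v["function"] == func_name for v in violation_details):
        return
    violation_details.append({"function": func_name, "counterexample": {}, "raw": line.strip()})
```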
In `@src/specfact_cli/validators/sidecar/frameworks/fastapi.py`:
- Around line 162-175: The helper _method_and_path_from_route_decorator
currently treats any Call as a route and only reads a positional arg; restrict
it to known FastAPI HTTP verbs and read keyword path args: check decorator.func
(ast.Attribute.attr or ast.Name.id) against a whitelist like
{"get","post","put","patch","delete","options","head"} (case-insensitive) and
return None for non-matching decorators, then inspect decorator.keywords for a
keyword.arg == "path" and use _extract_string_literal on that before falling
back to the first positional arg or "/" ; apply the same whitelist+keyword
handling to the other route-parsing helper in this module so both behave
identically.
In `@tests/unit/modules/init/test_first_run_selection.py`:
- Around line 394-396: The test stub _record_marketplace weakens the API
contract by accepting None options; update _record_marketplace to assert that
the installer was called with a non-None options object and that the install
root matches the expected Path (the same Path returned: tmp_path /
module_id.split("/")[1]); if the options object exposes an install root field,
also assert that options.install_root equals that expected Path so callers must
pass the new invocation shape rather than the old one.
In `@tools/smart_test_coverage.py`:
- Around line 80-91: Flip the default of enforce_threshold in the
_SmartCacheUpdate dataclass from True to False to reduce caller boilerplate;
update the class declaration for _SmartCacheUpdate so enforce_threshold: bool =
False, and ensure the one caller that expects the "fail loudly" behavior
(force_full_run) continues to pass enforce_threshold=True explicitly (no other
changes to callers are required since they already set it to False).
---
Outside diff comments:
In `@src/specfact_cli/sync/bridge_sync.py`:
- Around line 106-414: Move the large set of dataclass definitions (e.g.,
SyncOperation, SyncResult, ExportChangeProposalsOptions,
_AlignmentReportContentInput, _GithubIssueSearchInput, _AdoIssueSearchInput,
_RemoteIssueResolutionInput, _IssueUpdatePayload, _ExportIterationTracking,
_ChangeProposalExportLoopContext, _BundleAdapterExportInput,
_BundleSingleExportInput, _WorkItemVerifyInput, _FetchIssueSyncStateInput,
_PushIssueBodyInput, _IssueContentUpdateInput, _EmitCodeChangeProgressInput,
_CodeChangeTrackingInput, and others) out of bridge_sync.py into a new module
bridge_sync_types.py; keep the same class names and annotations, add selective
exports (or __all__) in the new module, then update bridge_sync.py to import
these types from bridge_sync_types (e.g., from .bridge_sync_types import
SyncOperation, SyncResult, ExportChangeProposalsOptions, ...); ensure no runtime
logic moved and run tests/linters to fix any import order/type-check issues.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 5e3ea8c7-cdb3-463e-9d88-99d81ec0299b
📒 Files selected for processing (72)
.github/workflows/pr-orchestrator.yml, CHANGELOG.md, README.md, docs/_support/readme-first-contact/capture-readme-output.sh, docs/_support/readme-first-contact/sample-output/capture-metadata.txt, docs/_support/readme-first-contact/sample-output/review-output.txt, docs/getting-started/installation.md, modules/bundle-mapper/module-package.yaml, modules/bundle-mapper/src/bundle_mapper/mapper/engine.py, openspec/changes/profile-04-safe-project-artifact-writes/TDD_EVIDENCE.md, openspec/changes/profile-04-safe-project-artifact-writes/tasks.md, pyproject.toml, scripts/check_doc_frontmatter.py, scripts/verify-bundle-published.py, scripts/verify_safe_project_writes.py, setup.py, src/__init__.py, src/specfact_cli/__init__.py, src/specfact_cli/adapters/ado.py, src/specfact_cli/adapters/github.py, src/specfact_cli/adapters/openspec_parser.py, src/specfact_cli/adapters/speckit.py, src/specfact_cli/analyzers/ambiguity_scanner.py, src/specfact_cli/analyzers/code_analyzer.py, src/specfact_cli/analyzers/graph_analyzer.py, src/specfact_cli/common/logger_setup.py, src/specfact_cli/common/logging_utils.py, src/specfact_cli/generators/contract_generator.py, src/specfact_cli/generators/openapi_extractor.py, src/specfact_cli/generators/persona_exporter.py, src/specfact_cli/generators/test_to_openapi.py, src/specfact_cli/models/project.py, src/specfact_cli/modules/init/module-package.yaml, src/specfact_cli/modules/init/src/first_run_selection.py, src/specfact_cli/modules/module_registry/module-package.yaml, src/specfact_cli/modules/module_registry/src/commands.py, src/specfact_cli/registry/module_installer.py, src/specfact_cli/sync/__init__.py, src/specfact_cli/sync/bridge_sync.py, src/specfact_cli/sync/bridge_watch.py, src/specfact_cli/utils/contract_predicates.py, src/specfact_cli/utils/ide_setup.py, src/specfact_cli/utils/project_artifact_write.py, src/specfact_cli/utils/source_scanner.py, src/specfact_cli/validators/cli_first_validator.py, src/specfact_cli/validators/contract_validator.py, src/specfact_cli/validators/sidecar/crosshair_runner.py, src/specfact_cli/validators/sidecar/crosshair_summary.py, src/specfact_cli/validators/sidecar/frameworks/fastapi.py, src/specfact_cli/validators/sidecar/orchestrator.py, tests/e2e/test_bundle_extraction_e2e.py, tests/helpers/doc_frontmatter.py, tests/integration/sync/test_multi_adapter_backlog_sync.py, tests/integration/test_devops_github_sync.py, tests/unit/modules/init/test_first_run_selection.py, tests/unit/modules/init/test_init_ide_prompt_selection.py, tests/unit/modules/module_registry/test_commands.py, tests/unit/registry/test_module_installer.py, tests/unit/scripts/test_verify_bundle_published.py, tests/unit/scripts/test_verify_safe_project_writes.py, tests/unit/specfact_cli/modules/test_module_upgrade_improvements.py, tests/unit/specfact_cli/modules/test_multi_module_install_uninstall.py, tests/unit/specfact_cli/registry/test_profile_presets.py, tests/unit/specfact_cli/validators/sidecar/test_crosshair_runner.py, tests/unit/specfact_cli/validators/sidecar/test_crosshair_runner_env.py, tests/unit/tools/test_smart_test_coverage.py, tests/unit/utils/test_contract_predicates.py, tests/unit/utils/test_ide_setup.py, tests/unit/utils/test_project_artifact_write.py, tests/unit/validators/test_bundle_dependency_install.py, tools/contract_first_smart_test.py, tools/smart_test_coverage.py
Co-authored-by: Dominikus Nold <djm81@users.noreply.github.com>
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
modules/bundle-mapper/module-package.yaml (1)

1-27: ⚠️ Potential issue | 🟡 Minor
Verify checksum and signature against the built artifact before publishing.

The manifest's YAML passes linting and the integrity fields are well formatted. Before releasing bundle-mapper to the registry, the development workflow must confirm that the checksum (line 23) and signature (line 24) match the actual packaged artifact. Since module signatures are mandatory for cross-repo dependencies in the specfact ecosystem, this validation step is critical for supply chain integrity: the signature should be regenerated from the final built artifact (e.g., via `openspec bundle sign`) to ensure the manifest and asset are cryptographically bound at publication time.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@modules/bundle-mapper/module-package.yaml` around lines 1-27: update your release workflow to validate and regenerate the integrity fields in module-package.yaml. Compute the sha256 checksum of the final built artifact and ensure integrity.checksum matches it, then run the signing step (e.g., `openspec bundle sign` or your project's signing command) on that same artifact and replace integrity.signature with the newly generated signature, so the manifest's integrity.checksum and integrity.signature are produced from and tied to the exact packaged artifact before publishing.

src/specfact_cli/analyzers/code_analyzer.py (1)
1736-1763: ⚠️ Potential issue | 🟠 Major
Eliminate the redundant per-`await` AST rescan.

Line 1756 already records every `AsyncFunctionDef` in the tree. Since `await` cannot exist outside an async function (a Python syntax constraint), the `_function_name_holding_ast_subtree()` helper cannot discover names not already collected; it only adds O(n²) cost by re-walking the tree for each `await` node. Simplify to a single pass:

```python
def _detect_async_patterns_parallel(self, tree: ast.AST, file_path: Path) -> list[str]:
    """
    Detect async/await patterns in code (thread-safe version).

    Returns:
        List of async method/function names
    """
    return [node.name for node in ast.walk(tree) if isinstance(node, ast.AsyncFunctionDef)]
```

The helper method `_function_name_holding_ast_subtree()` can be removed entirely; it has no other call sites in the codebase.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/specfact_cli/analyzers/code_analyzer.py` around lines 1736-1763: the current implementation redundantly rescans the AST for every `Await` node using `_function_name_holding_ast_subtree`, which is unnecessary because all `AsyncFunctionDef` names are already collected. Replace `_detect_async_patterns_parallel` to simply return the list of `node.name` for all `ast.AsyncFunctionDef` nodes (e.g., via a single `ast.walk` list comprehension) and remove the unused helper `_function_name_holding_ast_subtree` since it has no other callers.
♻️ Duplicate comments (3)
src/specfact_cli/analyzers/graph_analyzer.py (1)
196-196: ⚠️ Potential issue | 🟡 Minor
Use a Google-style docstring for the new helper method.

The current one-liner is clear, but it still misses the required structured `Args`/`Returns` sections.

Proposed docstring update:

```diff
 def _add_call_graph_edges_for_module(
     self,
     graph: StrDiGraph,
     module_name: str,
     call_graph: dict[str, list[str]],
     python_files: list[Path],
 ) -> None:
-    """Add directed edges from ``module_name`` to callees that resolve to known graph nodes."""
+    """Add call-graph-derived dependency edges for one module.
+
+    Args:
+        graph: Dependency graph being populated.
+        module_name: Source module name for emitted edges.
+        call_graph: Mapping of caller symbols to callee symbols.
+        python_files: Repository Python files used for callee module resolution.
+
+    Returns:
+        None.
+    """
```

As per coding guidelines, "Use Google-style docstrings for functions and classes in Python code".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/specfact_cli/analyzers/graph_analyzer.py` at line 196: the helper method whose docstring currently reads "Add directed edges from ``module_name`` to callees that resolve to known graph nodes." needs a Google-style docstring. Replace the one-liner with a docstring that includes an Args section describing the parameters (the graph being populated, module_name, the call-graph mapping, and the Python files used for callee resolution), a Returns section (likely Returns: None), and optionally a Raises section if the function can raise. Keep the description concise but explicit about what the function does (adding directed edges from module_name to resolved callee nodes).

CHANGELOG.md (1)

13-13: ⚠️ Potential issue | 🟡 Minor
Use `Unreleased` until the 0.46.0 tag is cut.

Line 13 is future-dated (2026-04-13) while this PR is still open on 2026-04-12; this can confuse release readers/automation.

Suggested edit:

```diff
-## [0.46.0] - 2026-04-13
+## [0.46.0] - Unreleased
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@CHANGELOG.md` at line 13: the changelog entry header "## [0.46.0] - 2026-04-13" is future-dated. Change it to "## [0.46.0] - Unreleased" (or simply "## [Unreleased]") so the PR doesn't advertise a released version prematurely, and restore the real date when the 0.46.0 tag is actually cut.

src/specfact_cli/adapters/ado.py (1)
766-769: ⚠️ Potential issue | 🟠 Major
Preserve list-shaped `source_tracking` on first export.

`if not source_tracking` also matches `[]`, so the first ADO export rewrites a list carrier into a dict. That breaks multi-entry traceability for proposals that intentionally start with list semantics, because later exports will overwrite instead of append.

🩹 Proposed fix:

```diff
 source_tracking = proposal_data.get("source_tracking")
-if not source_tracking:
+if source_tracking is None:
     proposal_data["source_tracking"] = tracking_update
     return
 if isinstance(source_tracking, dict):
     st = _as_str_dict(source_tracking)
     st.update(tracking_update)
     proposal_data["source_tracking"] = st
     return
 if isinstance(source_tracking, list):
+    if not source_tracking:
+        proposal_data["source_tracking"] = [tracking_update]
+        return
     cast(list[dict[str, Any]], source_tracking).append(tracking_update)
```

Also applies to: 775-776
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/specfact_cli/adapters/ado.py` around lines 766 - 769, The current check uses "if not source_tracking" which treats an empty list as falsy and replaces list-shaped source_tracking with a dict; change the guard in the ADO adapter to only treat missing/None as absent (e.g., "if source_tracking is None") so a present empty list remains a list and the code will append rather than overwrite; apply the same fix to the second occurrence handling the same logic (the block that references proposal_data["source_tracking"] and tracking_update at the other location).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/check_doc_frontmatter.py`:
- Around line 465-466: The suggested frontmatter title is built from the raw
filename (path.stem) which reintroduces numeric prefixes; change
_format_agent_rules_suggested_frontmatter to derive title_guess from the
provided canonical_id (or by calling _agent_rules_canonical_id if needed)
instead of path.stem, i.e., replace path.stem usage with the canonical slug,
convert hyphens to spaces, apply title-casing and escape quotes (same escaping
logic currently used) so the generated title aligns with the canonical
id/permalink.
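The slug-to-title step can be sketched in isolation. The function name is hypothetical and the canonical-id derivation (`_agent_rules_canonical_id`) is assumed to happen elsewhere; this shows only the suggested title construction:

```python
def suggested_title_from_canonical_id(canonical_id: str) -> str:
    """Derive the suggested frontmatter title from the canonical slug.

    Using the slug instead of path.stem keeps numeric filename prefixes
    (already stripped during canonicalisation) out of the title.
    """
    title = canonical_id.replace("-", " ").title()
    # Reuse the same quote-escaping step the generator already applies.
    return title.replace('"', '\\"')
```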
In `@scripts/verify_safe_project_writes.py`:
- Around line 60-77: The current _collect_json_io_offenders uses _json_bindings,
func_aliases and module_locals globally, so locally shadowed names (e.g., a
local variable or function arg named "loads" or "json") get misflagged; update
this to be scope-aware by replacing the flat ast.walk approach with a visitor
that tracks a scope stack and records local bindings (assignments,
function/class args/names, imports) per scope, then when visiting Call nodes in
_collect_json_io_offenders only mark offenders whose callee name resolves to the
module-level binding returned by _json_bindings (or whose attribute base is a
module-local that isn’t shadowed in the current scope) using the scope lookup;
ensure symbols referenced: _collect_json_io_offenders, _json_bindings,
func_aliases, module_locals, and _JSON_IO_NAMES are used as the canonical
module-level imports to compare against.
- Around line 87-88: Wrap the file read/parse call that produces "tree =
ast.parse(IDE_SETUP.read_text(...), filename=str(IDE_SETUP))" in a try/except
that explicitly catches OSError, UnicodeDecodeError, and SyntaxError; on any of
those exceptions write a clear error message to stderr (including the exception
text and the IDE_SETUP path) and exit non‑zero (e.g., sys.exit(1)) so the gate
outcome is deterministic, then only call _collect_json_io_offenders(tree) when
parsing succeeds.
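The deterministic-failure wrapper can be sketched as below (the function name is illustrative; in the script the result would feed `_collect_json_io_offenders`):

```python
import ast
import sys
from pathlib import Path


def parse_gate_source(path: Path) -> ast.Module:
    """Parse the gated file, exiting non-zero on any read or parse failure."""
    try:
        return ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
    except (OSError, UnicodeDecodeError, SyntaxError) as exc:
        # Make the gate outcome deterministic instead of crashing with a traceback.
        print(f"verify_safe_project_writes: cannot analyze {path}: {exc}", file=sys.stderr)
        sys.exit(1)
```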
In `@src/specfact_cli/adapters/ado.py`:
- Around line 710-728: The helper _work_item_id_from_source_tracking duplicates
the repo-matching logic already implemented in
_find_work_item_id_in_source_tracking / _get_source_tracking_work_item_id,
causing drift; refactor by removing the duplicated matching code in
_work_item_id_from_source_tracking and delegate to the existing resolver (call
or import
_find_work_item_id_in_source_tracking/_get_source_tracking_work_item_id) so both
_update_work_item_status and other callers use the single shared implementation,
and apply the same fix to the duplicate block around the 2291-2312 region so
repo matching remains centralized.
In `@src/specfact_cli/adapters/github.py`:
- Around line 162-168: The regex _GITHUB_GIT_CONFIG_URL_RE used by
_git_config_content_indicates_github currently matches both "url" and "pushurl"
keys causing false-positive GitHub detection; update the regex so it anchors the
key to the start of the line (e.g., require line start and optional whitespace
before "url") so only actual "url =" entries are matched, then run/adjust tests
and add a unit test case that includes a repo config with only "pushurl =
git@github.com:owner/repo.git" to ensure it does not classify as GitHub.
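The anchored regex can be sketched as follows. The exact pattern in `github.py` is not shown in this review, so the host-matching tail here is an assumption; the point is the line-start anchor that excludes `pushurl`:

```python
import re

# Anchor the key at line start so "pushurl = ..." no longer matches as "url".
_GITHUB_GIT_CONFIG_URL_RE = re.compile(
    r"^\s*url\s*=\s*\S*github\.com", re.IGNORECASE | re.MULTILINE
)


def git_config_content_indicates_github(content: str) -> bool:
    """Return True only when an actual `url =` entry points at github.com."""
    return _GITHUB_GIT_CONFIG_URL_RE.search(content) is not None
```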
In `@src/specfact_cli/adapters/speckit.py`:
- Around line 464-471: The current logic in the block handling acceptance_raw
(variables acceptance_raw, acceptance, and method _extract_text_list in
speckit.py) can produce an empty acceptance list when a non-string list is
provided and _extract_text_list returns no items; change the flow so that after
attempting extraction (when acceptance_raw is a list and not all strings), you
check if the extracted acceptance is empty and, if so, set acceptance to the
default [f"{story_title} is implemented"]; ensure this fallback runs whether the
original branch used direct string copying or extraction.
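The always-non-empty flow can be sketched as a standalone function. The name and the injected `extract_text_list` callable are illustrative stand-ins for the parser's private helpers:

```python
from collections.abc import Callable


def normalize_acceptance(
    acceptance_raw: object,
    story_title: str,
    extract_text_list: Callable[[list], list[str]],
) -> list[str]:
    """Always yield a non-empty acceptance list for a story."""
    if isinstance(acceptance_raw, list) and all(isinstance(i, str) for i in acceptance_raw):
        acceptance = [i for i in acceptance_raw if i.strip()]
    elif isinstance(acceptance_raw, list):
        acceptance = extract_text_list(acceptance_raw)
    else:
        acceptance = []
    # The fallback runs after every branch, so a list of non-string items
    # that extracts to nothing still yields a usable default.
    return acceptance or [f"{story_title} is implemented"]
```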
In `@src/specfact_cli/analyzers/code_analyzer.py`:
- Around line 739-741: The current _themes_for_import_module uses raw substring
matching and misclassifies packages (e.g., "clickhouse_connect" matching
"click"); change it to tokenize and match package prefixes: normalize
module_name to lowercase, split into tokens on dots, underscores and hyphens
(e.g., re.split(r'[._-]', lowered)), then map themes when a keyword equals any
token or when the module_name or its top-level package starts with the keyword
(e.g., lowered == keyword or lowered.startswith(keyword + ".") or any(token ==
keyword for token in tokens)); update the loop over theme_keywords (in
_themes_for_import_module) to use these token/prefix checks instead of raw
"keyword in lowered".
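The token/prefix matching can be sketched as below (the keyword-to-theme mapping shape is an assumption; the real `_themes_for_import_module` is a method on the analyzer):

```python
import re


def themes_for_import_module(module_name: str, theme_keywords: dict[str, str]) -> set[str]:
    """Map an imported module to themes using token and package-prefix matches.

    Raw substring matching classified "clickhouse_connect" under "click";
    exact-token and dotted-prefix checks avoid that misclassification.
    """
    lowered = module_name.lower()
    tokens = re.split(r"[._-]", lowered)
    themes: set[str] = set()
    for keyword, theme in theme_keywords.items():
        if lowered == keyword or lowered.startswith(keyword + ".") or keyword in tokens:
            themes.add(theme)
    return themes
```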
In `@tests/unit/adapters/test_openspec_parser.py`:
- Around line 187-213: Add assertions to the
test_parse_change_spec_delta_content_keeps_markdown_headings test to also verify
parsed metadata keys come back in the expected case: after calling
parser.parse_change_spec_delta(delta_path) assert parsed is not None and then
assert parsed.get("type") == "MODIFIED" and parsed.get("feature_id") ==
"001-auth" (or equivalent keys your parser uses), so failures clearly show
metadata parsing regressions; locate the test and update the assertions around
the parsed variable returned by parse_change_spec_delta.
In `@tests/unit/scripts/test_verify_safe_project_writes.py`:
- Around line 51-60: The test test_verify_safe_project_writes_passes_on_repo is
environment-coupled because it spawns the script verify_safe_project_writes.py
against the real repo; change it to a deterministic unit test by patching the
module-level references (e.g., ROOT and IDE_SETUP used in
verify_safe_project_writes.py / ide_setup.py) and invoking the script's main()
directly instead of subprocess.run, or alternately move this test to integration
tests; specifically, replace the subprocess invocation in
test_verify_safe_project_writes_passes_on_repo with monkeypatch/patch calls that
set ROOT/IDE_SETUP to controlled tmp paths and then call main() from the
verify_safe_project_writes module so the outcome is repeatable and isolated.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: c1e4fda6-62d5-4f92-8bef-9a4a4afc7ca2
📒 Files selected for processing (20)
CHANGELOG.md, README.md, docs/_support/readme-first-contact/capture-readme-output.sh, docs/_support/readme-first-contact/sample-output/capture-metadata.txt, docs/_support/readme-first-contact/sample-output/review-output.txt, docs/getting-started/installation.md, modules/bundle-mapper/module-package.yaml, modules/bundle-mapper/src/bundle_mapper/mapper/engine.py, scripts/check_doc_frontmatter.py, scripts/verify_safe_project_writes.py, src/specfact_cli/adapters/ado.py, src/specfact_cli/adapters/github.py, src/specfact_cli/adapters/openspec_parser.py, src/specfact_cli/adapters/speckit.py, src/specfact_cli/analyzers/ambiguity_scanner.py, src/specfact_cli/analyzers/code_analyzer.py, src/specfact_cli/analyzers/graph_analyzer.py, tests/unit/adapters/test_openspec_parser.py, tests/unit/adapters/test_speckit.py, tests/unit/scripts/test_verify_safe_project_writes.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: CLI Command Validation
- GitHub Check: Contract-First CI
- GitHub Check: Linting (ruff, pylint, safe-write guard)
- GitHub Check: Tests (Python 3.12)
- GitHub Check: Compatibility (Python 3.11)
- GitHub Check: Type Checking (basedpyright)
- GitHub Check: Workflow Lint
- GitHub Check: Docs Review
- GitHub Check: Analyze (python)
🧰 Additional context used
📓 Path-based instructions (15)
**/*.yaml
📄 CodeRabbit inference engine (.cursor/rules/spec-fact-cli-rules.mdc)
YAML files must pass linting using: hatch run yaml-lint with relaxed policy.
Files:
modules/bundle-mapper/module-package.yaml
**/*.{yml,yaml}
📄 CodeRabbit inference engine (.cursor/rules/testing-and-build-guide.mdc)
Validate YAML configuration files locally using `hatch run yaml-lint` before committing
**/*.{yml,yaml}: Format all YAML and workflow files using `hatch run yaml-fix-all` before committing
Use Prettier to fix whitespace, indentation, and final newline across YAML files
Use yamllint with the repo .yamllint configuration (line-length 140, trailing spaces and final newline enforced) to lint non-workflow YAML files
Files:
modules/bundle-mapper/module-package.yaml
**/*.{md,mdc}
📄 CodeRabbit inference engine (.cursor/rules/markdown-rules.mdc)
**/*.{md,mdc}: Do not use more than one consecutive blank line anywhere in the document (MD012: No Multiple Consecutive Blank Lines)
Fenced code blocks should be surrounded by blank lines (MD031: Fenced Code Blocks)
Lists should be surrounded by blank lines (MD032: Lists)
Files must end with a single empty line (MD047: Files Must End With Single Newline)
Lines should not have trailing spaces (MD009: No Trailing Spaces)
Use asterisks (**) for strong emphasis, not underscores (__) (MD050: Strong Style)
Fenced code blocks must have a language specified (MD040: Fenced Code Language)
Headers should increment by one level at a time (MD001: Header Increment)
Headers should be surrounded by blank lines (MD022: Headers Should Be Surrounded By Blank Lines)
Only one top-level header (H1) is allowed per document (MD025: Single H1 Header)
Use consistent list markers, preferring dashes (-) for unordered lists (MD004: List Style)
Nested unordered list items should be indented consistently, typically by 2 spaces (MD007: Unordered List Indentation)
Use exactly one space after the list marker (e.g., -, *, +, 1.) (MD030: Spaces After List Markers)
Use incrementing numbers for ordered lists (MD029: Ordered List Item Prefix)
Enclose bare URLs in angle brackets or format them as links (MD034: Bare URLs)
Don't use spaces immediately inside code spans (MD038: Spaces Inside Code Spans)
Use consistent indentation (usually 2 or 4 spaces) throughout markdown files
Keep line length under 120 characters in markdown files
Use reference-style links for better readability in markdown files
Use a trailing slash for directory paths in markdown files
Ensure proper escaping of special characters in markdown files
Files:
README.md, docs/getting-started/installation.md, CHANGELOG.md
@(README.md|AGENTS.md)
📄 CodeRabbit inference engine (.cursor/rules/spec-fact-cli-rules.mdc)
Check README.md and AGENTS.md for current project status and development guidelines. Review .cursor/rules/ for detailed development standards and testing procedures.
Files:
README.md
**/*.md
📄 CodeRabbit inference engine (.cursorrules)
Avoid markdown linting errors (refer to markdown-rules)
Files:
README.md, docs/getting-started/installation.md, CHANGELOG.md
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/python-github-rules.mdc)
**/*.py: Maintain minimum 80% test coverage, with 100% coverage for critical paths in Python code
Use clear naming and self-documenting code, preferring clear names over comments
Ensure each function/class has a single clear purpose (Single Responsibility Principle)
Extract common patterns to avoid code duplication (DRY principle)
Apply SOLID object-oriented design principles in Python code
Use type hints everywhere in Python code and enable basedpyright strict mode
Use Pydantic models for data validation and serialization in Python
Use async/await for I/O operations in Python code
Use context managers for resource management in Python
Use dataclasses for simple data containers in Python
Enforce maximum line length of 120 characters in Python code
Use 4 spaces for indentation in Python code (no tabs)
Use 2 blank lines between classes and 1 blank line between methods in Python
Organize imports in order: Standard library → Third party → Local in Python files
Use snake_case for variables and functions in Python
Use PascalCase for class names in Python
Use UPPER_SNAKE_CASE for constants in Python
Use leading underscore (_) for private methods in Python classes
Use snake_case for Python file names
Enable basedpyright strict mode with strict type checking configuration in Python
Use Google-style docstrings for functions and classes in Python
Include comprehensive exception handling with specific exception types in Python code
Use logging with structured context (extra parameters) instead of print statements
Use retry logic with tenacity decorators (@retry) for operations that might fail
Use Pydantic BaseSettings for environment-based configuration in Python
Validate user input using Pydantic validators in Python models
Use `@lru_cache` and Redis-based caching for expensive calculations in Python
Run code formatting with Black (120 character line length) and isort in Python
Run type checking with basedpyright on all Python files
Run linting with ruff and pylint on all Pyth...
Files:
src/specfact_cli/analyzers/graph_analyzer.py, src/specfact_cli/analyzers/ambiguity_scanner.py, scripts/check_doc_frontmatter.py, modules/bundle-mapper/src/bundle_mapper/mapper/engine.py, src/specfact_cli/adapters/openspec_parser.py, tests/unit/adapters/test_speckit.py, tests/unit/scripts/test_verify_safe_project_writes.py, tests/unit/adapters/test_openspec_parser.py, src/specfact_cli/adapters/ado.py, scripts/verify_safe_project_writes.py, src/specfact_cli/adapters/github.py, src/specfact_cli/adapters/speckit.py, src/specfact_cli/analyzers/code_analyzer.py
src/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/spec-fact-cli-rules.mdc)
src/**/*.py: All code changes must be followed by running the full test suite using the smart test system.
All Python files in src/ and tools/ directories must have corresponding test files in tests/ directory. If you modify src/common/logger_setup.py, you MUST have tests/unit/common/test_logger_setup.py. NO EXCEPTIONS - even small changes require tests.
All new Python runtime code files must have corresponding test files created BEFORE committing the code. NO EXCEPTIONS - no code without tests.
Test Coverage Validation: Run hatch run smart-test-unit for modified files, hatch run smart-test-folder for modified directories, and hatch run smart-test-full before committing. ALL TESTS MUST PASS.
All components must support TEST_MODE=true environment variable with test-specific behavior defined as: if os.environ.get('TEST_MODE') == 'true': # test-specific behavior
Use src/common/logger_setup.py for all logging via: from common.logger_setup import get_logger; logger = get_logger(`__name__`)
Use src/common/redis_client.py with fallback for Redis operations via: from common.redis_client import get_redis_client; redis_client = get_redis_client()
Type checking must pass with no errors using: mypy .
Test coverage must meet or exceed 80% total coverage. New code must have corresponding tests. Modified code must maintain or improve coverage. Critical paths must have 100% coverage.
Use Pydantic v2 validation for all context and data schemas. Add/update contracts on new or modified public APIs, stateful classes and adapters using
`icontract` decorators and `beartype` runtime type checks
src/**/*.py: Meaningful Naming — identifiers reveal intent; avoid abbreviations. Identifiers in `src/` must use `snake_case` (modules/functions), `PascalCase` (classes), `UPPER_SNAKE_CASE` (constants). Avoid single-letter names outside short loop variables.
KISS — keep functions and modules small and single-purpose. Maximum function length: 120 lines (Phase A error threshold). Maximum cyclomati...
Files:
src/specfact_cli/analyzers/graph_analyzer.py, src/specfact_cli/analyzers/ambiguity_scanner.py, src/specfact_cli/adapters/openspec_parser.py, src/specfact_cli/adapters/ado.py, src/specfact_cli/adapters/github.py, src/specfact_cli/adapters/speckit.py, src/specfact_cli/analyzers/code_analyzer.py
@(src|tests)/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/spec-fact-cli-rules.mdc)
Linting must pass with no errors using: pylint src tests
Files:
src/specfact_cli/analyzers/graph_analyzer.py, src/specfact_cli/analyzers/ambiguity_scanner.py, src/specfact_cli/adapters/openspec_parser.py, tests/unit/adapters/test_speckit.py, tests/unit/scripts/test_verify_safe_project_writes.py, tests/unit/adapters/test_openspec_parser.py, src/specfact_cli/adapters/ado.py, src/specfact_cli/adapters/github.py, src/specfact_cli/adapters/speckit.py, src/specfact_cli/analyzers/code_analyzer.py
**/*.{py,pyi}
📄 CodeRabbit inference engine (.cursorrules)
**/*.{py,pyi}: After any code changes, follow these steps in order: (1) Apply linting and formatting to ensure code quality: `hatch run format`, (2) Type checking: `hatch run type-check` (basedpyright), (3) Contract-first approach: Run `hatch run contract-test` for contract validation, (4) Run full test suite: `hatch test --cover -v`, (5) Verify all tests pass and contracts are satisfied, (6) Fix any issues and repeat steps until all tests pass
All public APIs must have `@icontract` decorators and `@beartype` type checking
Use Pydantic models for all data structures with data validation
Only write high-value comments if at all. Avoid talking to the user through comments
Files:
src/specfact_cli/analyzers/graph_analyzer.py, src/specfact_cli/analyzers/ambiguity_scanner.py, scripts/check_doc_frontmatter.py, modules/bundle-mapper/src/bundle_mapper/mapper/engine.py, src/specfact_cli/adapters/openspec_parser.py, tests/unit/adapters/test_speckit.py, tests/unit/scripts/test_verify_safe_project_writes.py, tests/unit/adapters/test_openspec_parser.py, src/specfact_cli/adapters/ado.py, scripts/verify_safe_project_writes.py, src/specfact_cli/adapters/github.py, src/specfact_cli/adapters/speckit.py, src/specfact_cli/analyzers/code_analyzer.py
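The contract rules above (precondition decorators on public APIs) can be illustrated with a minimal hand-rolled stand-in. This is only a sketch: the real `icontract` and `beartype` libraries give much richer violation messages and runtime type enforcement, and `require`/`slugify` below are hypothetical names invented for the example.

```python
from functools import wraps


def require(predicate, message: str):
    """Minimal stand-in for icontract's precondition decorator."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # Check the precondition before running the decorated function
            if not predicate(*args, **kwargs):
                raise ValueError(message)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@require(lambda text: bool(text.strip()), "text must be non-empty")
def slugify(text: str) -> str:
    """Public-API sketch: normalize a title into a slug."""
    return "-".join(text.lower().split())
```

With the real libraries, the decorator would be `@icontract.require(...)` plus `@beartype` for argument type checks; the structure of the guarded public function stays the same.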
src/specfact_cli/**/*.py
⚙️ CodeRabbit configuration file
src/specfact_cli/**/*.py: Focus on modular CLI architecture: lazy module loading, registry/bootstrap patterns, and
dependency direction. Flag breaking changes to public APIs, Pydantic models, and resource
bundling. Verify `@icontract` + `@beartype` on public surfaces; prefer centralized logging
(`get_bridge_logger`) over `print()`.
Files:
src/specfact_cli/analyzers/graph_analyzer.py, src/specfact_cli/analyzers/ambiguity_scanner.py, src/specfact_cli/adapters/openspec_parser.py, src/specfact_cli/adapters/ado.py, src/specfact_cli/adapters/github.py, src/specfact_cli/adapters/speckit.py, src/specfact_cli/analyzers/code_analyzer.py
scripts/**/*.py
⚙️ CodeRabbit configuration file
scripts/**/*.py: Deterministic tooling: subprocess safety, Hatch integration, and parity with documented
quality gates (format, type-check, module signing).
Files:
scripts/check_doc_frontmatter.pyscripts/verify_safe_project_writes.py
docs/**/*.md
📄 CodeRabbit inference engine (.cursor/rules/spec-fact-cli-rules.mdc)
Update architecture documentation in docs/ for architecture changes, state machine documentation for FSM modifications, interface documentation for API changes, and configuration guides for configuration changes. DO NOT create internal docs in specfact-cli repo folder that should not be visible to end users; use the respective internal repository instead.
Files:
docs/getting-started/installation.md
⚙️ CodeRabbit configuration file
docs/**/*.md: User-facing accuracy: CLI examples match current behavior; preserve Jekyll front matter;
call out when README/docs index need sync.
Files:
docs/getting-started/installation.md
**/test_*.py
📄 CodeRabbit inference engine (.cursor/rules/python-github-rules.mdc)
**/test_*.py: Write tests first in test-driven development (TDD) using the Red-Green-Refactor cycle
Ensure each test is independent and repeatable with no shared state between tests
Organize Python imports in tests using unittest.mock for Mock and patch
Use setup_method() for test initialization and Arrange-Act-Assert pattern in test files
Use `@pytest.mark.asyncio` decorator for async test functions in Python
Organize test files in structure: tests/unit/, tests/integration/, tests/e2e/ by module
Files:
tests/unit/adapters/test_speckit.py, tests/unit/scripts/test_verify_safe_project_writes.py, tests/unit/adapters/test_openspec_parser.py
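The test-organization rules above (fresh state via `setup_method()`, Arrange-Act-Assert) take roughly this shape; `TestFeatureStore` is a hypothetical example for illustration, not a test from this repo.

```python
class TestFeatureStore:
    """Each test rebuilds its own state, so tests stay independent and repeatable."""

    def setup_method(self):
        # Arrange: a fresh store before every test method
        self.store: dict[str, str] = {}

    def test_put_then_get_returns_value(self):
        # Act
        self.store["feature"] = "bundle-mapper"
        # Assert
        assert self.store["feature"] == "bundle-mapper"
```

pytest calls `setup_method()` automatically before each test method, so no state leaks between tests.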
tests/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/spec-fact-cli-rules.mdc)
Tests must be meaningful and test actual functionality, cover both success and failure cases, be independent and repeatable, and have clear, descriptive names. NO EXCEPTIONS - no placeholder or empty tests.
tests/**/*.py: Trim low-value unit tests when a contract covers the same assertion (type/shape/raises on negative checks)
Delete tests that only assert input validation, datatype/shape enforcement, or raises on negative conditions now guarded by contracts and runtime typing
Convert repeated edge-case permutations into one Hypothesis property with contracts acting as oracles
Secret redaction via `LoggerSetup.redact_secrets` must be covered by unit tests
Files:
tests/unit/adapters/test_speckit.py, tests/unit/scripts/test_verify_safe_project_writes.py, tests/unit/adapters/test_openspec_parser.py
⚙️ CodeRabbit configuration file
tests/**/*.py: Contract-first testing: meaningful scenarios, not redundant assertions already covered by
contracts. Flag flakiness, environment coupling, and missing coverage for changed behavior.
Files:
tests/unit/adapters/test_speckit.py, tests/unit/scripts/test_verify_safe_project_writes.py, tests/unit/adapters/test_openspec_parser.py
CHANGELOG.md
📄 CodeRabbit inference engine (.cursor/rules/python-github-rules.mdc)
Include new version entries at the top of CHANGELOG.md when updating versions
Update CHANGELOG.md with all code changes as part of version control requirements.
Update CHANGELOG.md to document all significant changes under Added, Fixed, Changed, or Removed sections when making a version change
Files:
CHANGELOG.md
🧠 Learnings (26)
📚 Learning: 2026-04-10T22:42:21.849Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-10T22:42:21.849Z
Learning: Enforce module signatures and version bumps when signed module assets or manifests are affected
Applied to files:
modules/bundle-mapper/module-package.yaml
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to docs/**/*.md : Update architecture documentation in docs/ for architecture changes, state machine documentation for FSM modifications, interface documentation for API changes, and configuration guides for configuration changes. DO NOT create internal docs in specfact-cli repo folder that should not be visible to end users; use the respective internal repository instead.
Applied to files:
docs/_support/readme-first-contact/capture-readme-output.sh, README.md, docs/_support/readme-first-contact/sample-output/capture-metadata.txt, docs/getting-started/installation.md, docs/_support/readme-first-contact/sample-output/review-output.txt
📚 Learning: 2026-04-10T22:42:04.362Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/session_startup_instructions.mdc:0-0
Timestamp: 2026-04-10T22:42:04.362Z
Learning: After merges shipping OpenSpec- or GitHub-related work with a sibling `specfact-cli-internal` checkout present, run wiki scripts from that sibling repo's working directory (e.g., `cd ../specfact-cli-internal && python3 scripts/wiki_openspec_gh_status.py`), not from `specfact-cli` or other directories
Applied to files:
docs/_support/readme-first-contact/capture-readme-output.sh, README.md, docs/_support/readme-first-contact/sample-output/capture-metadata.txt
📚 Learning: 2026-04-10T22:42:04.362Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/session_startup_instructions.mdc:0-0
Timestamp: 2026-04-10T22:42:04.362Z
Learning: Detect repository root, branch, and worktree state before implementation
Applied to files:
docs/_support/readme-first-contact/capture-readme-output.sh
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: All development work MUST use git worktrees per AGENTS.md Git Worktree Policy. Never create branches with `git checkout -b` in the primary checkout. Create worktree from origin/dev: `git worktree add ../specfact-cli-worktrees/<type>/<slug> -b <branch-name> origin/dev` where allowed types are: `feature/`, `bugfix/`, `hotfix/`, `chore/`. Forbidden in worktrees: `dev`, `main`. After creating worktree: `cd ../specfact-cli-worktrees/<type>/<slug>`. Bootstrap Hatch in worktree: `hatch env create`. Run pre-flight checks: `hatch run smart-test-status` and `hatch run contract-test-status`. Do all implementation work from the worktree, never from primary checkout. After PR merge: cleanup with `git worktree remove`, `git branch -d`, `git worktree prune`
Applied to files:
docs/_support/readme-first-contact/capture-readme-output.sh
📚 Learning: 2026-04-10T22:42:21.849Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-10T22:42:21.849Z
Learning: Detect repository root, active branch, and worktree state during bootstrap
Applied to files:
docs/_support/readme-first-contact/capture-readme-output.sh
📚 Learning: 2026-04-10T22:42:04.362Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/session_startup_instructions.mdc:0-0
Timestamp: 2026-04-10T22:42:04.362Z
Learning: For OpenSpec changes when a sibling `specfact-cli-internal/` checkout exists, read internal wiki guidance in `docs/agent-rules/40-openspec-and-tdd.md` and consult `wiki/hot.md`, `wiki/graph.md`, and relevant `wiki/concepts/*.md` files
Applied to files:
README.md, docs/_support/readme-first-contact/sample-output/capture-metadata.txt, src/specfact_cli/adapters/openspec_parser.py, docs/getting-started/installation.md, docs/_support/readme-first-contact/sample-output/review-output.txt, CHANGELOG.md, src/specfact_cli/adapters/speckit.py
📚 Learning: 2026-04-10T22:41:54.419Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/automatic-openspec-workflow.mdc:0-0
Timestamp: 2026-04-10T22:41:54.419Z
Learning: Applies to openspec/**/{proposal.md,tasks.md,design.md,spec.md} : Apply `openspec/config.yaml` project context and per-artifact rules (for proposal, specs, design, tasks) when creating or updating any OpenSpec change artifact in the specfact-cli codebase
Applied to files:
README.md
📚 Learning: 2026-04-10T22:41:26.519Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-04-10T22:41:26.519Z
Learning: Enforce the clean-code review gate through `hatch run specfact code review run --json --out .specfact/code-review.json`
Applied to files:
README.md, docs/_support/readme-first-contact/sample-output/capture-metadata.txt, scripts/verify_safe_project_writes.py
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to @(pyproject.toml|setup.py|src/__init__.py|src/specfact_cli/__init__.py) : Maintain synchronized versions across pyproject.toml, setup.py, src/__init__.py, and src/specfact_cli/__init__.py.
Applied to files:
docs/_support/readme-first-contact/sample-output/capture-metadata.txt
📚 Learning: 2026-04-10T22:42:21.849Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-10T22:42:21.849Z
Learning: Fix SpecFact code review findings, including warnings, unless a rare explicit exception is documented
Applied to files:
docs/_support/readme-first-contact/sample-output/capture-metadata.txt, docs/_support/readme-first-contact/sample-output/review-output.txt, src/specfact_cli/adapters/github.py, src/specfact_cli/adapters/speckit.py, src/specfact_cli/analyzers/code_analyzer.py
📚 Learning: 2026-04-10T22:41:54.419Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/automatic-openspec-workflow.mdc:0-0
Timestamp: 2026-04-10T22:41:54.419Z
Learning: If `openspec` command is not found or if specfact-cli workspace or openspec/ directory is not accessible, do not modify application code unless the user explicitly confirms to proceed without OpenSpec
Applied to files:
docs/_support/readme-first-contact/sample-output/capture-metadata.txt, docs/getting-started/installation.md
📚 Learning: 2026-04-10T22:42:04.362Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/session_startup_instructions.mdc:0-0
Timestamp: 2026-04-10T22:42:04.362Z
Learning: For large frontmatter file changes in `docs/agent-rules/`, run `cd ../specfact-cli-internal && python3 scripts/wiki_rebuild_graph.py` from the sibling repo after merging
Applied to files:
scripts/check_doc_frontmatter.py
📚 Learning: 2026-04-10T22:42:21.849Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-10T22:42:21.849Z
Learning: Use `wiki_rebuild_graph.py` when frontmatter churn is large, as described in docs/agent-rules/40-openspec-and-tdd.md
Applied to files:
scripts/check_doc_frontmatter.py
📚 Learning: 2026-04-10T22:42:04.362Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/session_startup_instructions.mdc:0-0
Timestamp: 2026-04-10T22:42:04.362Z
Learning: Treat `docs/agent-rules/` as the canonical location for repository governance rather than relying on inline reminders or other documentation
Applied to files:
scripts/check_doc_frontmatter.py
📚 Learning: 2026-04-10T22:41:54.419Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/automatic-openspec-workflow.mdc:0-0
Timestamp: 2026-04-10T22:41:54.419Z
Learning: Applies to {src,tools}/**/*.{js,ts,jsx,tsx,py},**/*.test.{js,ts,jsx,tsx,py},**/*.spec.{js,ts,jsx,tsx,py} : Do not add, modify, or delete any application code in src/, tools/, tests, or significant docs until an OpenSpec change (new or delta) is created and validated, unless the user explicitly opts out with 'skip openspec', 'direct implementation', 'simple fix', or 'just fix it'
Applied to files:
src/specfact_cli/adapters/openspec_parser.py
📚 Learning: 2026-03-25T21:32:29.182Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/python-github-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:29.182Z
Learning: Applies to **/test_*.py : Organize Python imports in tests using unittest.mock for Mock and patch
Applied to files:
tests/unit/adapters/test_speckit.py
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to src/**/*.py : All code changes must be followed by running the full test suite using the smart test system.
Applied to files:
tests/unit/scripts/test_verify_safe_project_writes.py
📚 Learning: 2026-03-25T21:32:57.944Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/spec-fact-cli-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:57.944Z
Learning: Applies to CHANGELOG.md : Update CHANGELOG.md with all code changes as part of version control requirements.
Applied to files:
CHANGELOG.md
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Applies to CHANGELOG.md : Update CHANGELOG.md to document all significant changes under Added, Fixed, Changed, or Removed sections when making a version change
Applied to files:
CHANGELOG.md
📚 Learning: 2026-04-10T22:42:21.849Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-10T22:42:21.849Z
Learning: If GitHub hierarchy metadata is needed and `.specfact/backlog/github_hierarchy_cache.md` is missing or stale, refresh it with `python scripts/sync_github_hierarchy_cache.py`
Applied to files:
CHANGELOG.md
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: For `/opsx:ff` (Fast-Forward Change Creation): OPSX provides change scaffolding and artifact templates. AGENTS.md requires explicitly adding: Worktree creation task as Step 1 in tasks.md (not just 'create branch'), TDD_EVIDENCE.md tracking task in section 2 (Tests), Documentation research task per `openspec/config.yaml`, Module signing quality gate if applicable, Worktree cleanup steps in final section
Applied to files:
CHANGELOG.md
📚 Learning: 2026-03-31T22:38:25.299Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/clean-code-principles.mdc:0-0
Timestamp: 2026-03-31T22:38:25.299Z
Learning: Applies to src/**/*.py : No broad `except Exception` without re-raising or logging in `src/`
Applied to files:
src/specfact_cli/adapters/github.py
📚 Learning: 2026-03-25T21:32:29.182Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/python-github-rules.mdc:0-0
Timestamp: 2026-03-25T21:32:29.182Z
Learning: Applies to **/*.py : Include comprehensive exception handling with specific exception types in Python code
Applied to files:
src/specfact_cli/adapters/github.py
📚 Learning: 2026-03-25T21:33:15.296Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursor/rules/testing-and-build-guide.mdc:0-0
Timestamp: 2026-03-25T21:33:15.296Z
Learning: Applies to tests/**/*e2e*.py : Keep scenario/E2E tests for behavioral coverage, performance and resilience validation
Applied to files:
src/specfact_cli/adapters/speckit.py
📚 Learning: 2026-04-10T22:41:19.077Z
Learnt from: CR
Repo: nold-ai/specfact-cli PR: 0
File: .cursorrules:0-0
Timestamp: 2026-04-10T22:41:19.077Z
Learning: When creating implementation plans, task lists, or OpenSpec tasks.md, ALWAYS explicitly verify and include: (1) Worktree creation from `origin/dev` (not `git checkout -b`), (2) `hatch env create` in the worktree, (3) Pre-flight checks (`hatch run smart-test-status`, `hatch run contract-test-status`), (4) Worktree cleanup steps post-merge, (5) Self-check: 'Have I followed AGENTS.md Git Worktree Policy section?'
Applied to files:
src/specfact_cli/adapters/speckit.py
🔇 Additional comments (26)
modules/bundle-mapper/src/bundle_mapper/mapper/engine.py (3)
34-43: Nice extraction of signal payload into a typed value object. Using
`_SignalContribution` + `_signal_contribution(...)` makes the scoring pipeline cleaner and easier to evolve without widening method signatures.
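The pattern praised here, bundling a signal's payload into one immutable value object, can be sketched as follows. The field names are illustrative guesses, not the actual `_SignalContribution` definition in engine.py.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SignalContribution:
    """Hypothetical value object carrying one scoring signal's payload."""
    source: str    # e.g. "explicit_label", "historical", "content_similarity"
    weight: float  # how much this signal moves the confidence score
    matched: bool  # whether the signal fired for the candidate bundle


def apply_contribution(score: float, signal: SignalContribution) -> float:
    # One object instead of several positional args: call sites stay stable
    # when new signal fields are added later.
    return score + signal.weight if signal.matched else score
```

Passing the object keeps `apply_contribution`'s signature fixed even as the signal model grows.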
165-181: Good API tightening on `_apply_signal_contribution`. Passing a single signal object improves cohesion and keeps this method focused on one responsibility: applying contribution rules consistently.
219-244: DRY improvement applied correctly at call sites. The three signal applications now share a consistent construction path, which reduces duplication risk for future signal additions.
As per coding guidelines, "Extract common patterns to avoid code duplication (DRY principle)".
src/specfact_cli/analyzers/graph_analyzer.py (1)
224-224: Good refactor: delegation improves analyzer architecture. Calling
`_add_call_graph_edges_for_module` here keeps `_build_call_graph_edges` focused on task orchestration (futures/progress), which improves long-term maintainability.
src/specfact_cli/analyzers/ambiguity_scanner.py (2)
12-12: Type import is clean and modern. Using
`collections.abc.Callable` here is correct and keeps the callable typing consistent with the scanner dispatch signature.
169-184: Dispatch-table routing + fail-fast guard is a solid improvement. This keeps taxonomy wiring explicit and prevents silent misclassification when a category is not registered.
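The dispatch-table-plus-guard shape being praised looks roughly like the sketch below. The category names and detector logic are hypothetical; the real ambiguity scanner's taxonomy and signatures may differ.

```python
from collections.abc import Callable

# Hypothetical taxonomy wiring: category name -> detector function
DETECTORS: dict[str, Callable[[str], bool]] = {
    "vague_quantifier": lambda text: "some" in text or "several" in text,
    "passive_voice": lambda text: " was " in text or " were " in text,
}


def scan(category: str, text: str) -> bool:
    detector = DETECTORS.get(category)
    if detector is None:
        # Fail fast rather than silently misclassifying an unknown category
        raise KeyError(f"no detector registered for category {category!r}")
    return detector(text)
```

Registering a new category is a one-line table entry, and a typo in a category name surfaces immediately instead of producing a silent miss.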
src/specfact_cli/adapters/openspec_parser.py (1)
396-409: ✅ Content-heading preservation fix is correct and robust. At Line 396–Line 400, `##` lines are now preserved once inside `content`, preventing accidental section switching and content truncation.
modules/bundle-mapper/module-package.yaml (1)
2-2: Version bump + integrity rotation are aligned for signed-module contract updates. Line 2 and Lines 23-24 move together correctly, which is what downstream module consumers need to keep trust metadata coherent across releases.
Based on learnings: "Enforce module signatures and version bumps when signed module assets or manifests are affected".
Also applies to: 23-24
tests/unit/adapters/test_speckit.py (2)
9-16: Imports are focused and necessary for the new regression path.
`Any` and `SpecKitScanner` are both used by the monkeypatched parse fixture and keep the test explicit.
241-295: Strong regression test for parse-buffer detachment. This is a good guard for mutable-alias regressions: it asserts pre-mutation values, mutates scanner-origin buffers, then confirms imported feature data stays stable.
src/specfact_cli/adapters/speckit.py (3)
501-516: Copy/coercion helpers cleanly prevent mutable reference leakage.
`_coerce_str_list_field` and `_copy_scenarios_dict` establish a clear normalization boundary for parser-origin structures.
533-575: Upsert path now correctly owns mutable feature state. The list copies plus
`model_copy(deep=True)` in Line 549–Line 567 are the right safeguards against downstream mutation through shared references.
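The mutable-alias hazard this fix closes can be shown with a simplified stand-in. The real code uses Pydantic's `model_copy(deep=True)`; `copy.deepcopy` on a plain dataclass demonstrates the same ownership boundary, and `Feature`/`upsert` are hypothetical names for the example.

```python
import copy
from dataclasses import dataclass, field


@dataclass
class Feature:
    """Simplified stand-in for a parsed feature model."""
    name: str
    scenarios: dict[str, list[str]] = field(default_factory=dict)


def upsert(store: dict[str, Feature], parsed: Feature) -> None:
    # Own the mutable state: detach from parser-origin buffers so later
    # mutation of `parsed` cannot reach the stored copy via shared references.
    store[parsed.name] = copy.deepcopy(parsed)
```

Without the deep copy, appending to `parsed.scenarios[...]` after the upsert would silently mutate the stored feature too.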
598-607: Payload object improves call-site clarity and reduces parameter drift risk. Wrapping upsert inputs into
`_SpeckitFeatureUpsert` makes future maintenance safer than positional argument growth.
scripts/verify_safe_project_writes.py (1)
34-42: Nice fix: wildcard JSON imports are now covered. The
`from json import *` mapping closes the prior bypass path for direct `load`/`dump`/`loads`/`dumps` calls.
CHANGELOG.md (1)
15-52: Release entry structure and placement look good. The new sections are correctly inserted at the top and use standard changelog categories with clear, scannable bullets.
As per coding guidelines,
`CHANGELOG.md` updates should be added at the top and documented under sections like Added/Fixed/Changed/Removed.
README.md (1)
30-30: Release-facing README references are now consistent with `0.46.0`. The sample banner and pre-commit hook revision are aligned with the release bump, which keeps onboarding and automation snippets coherent.
Also applies to: 87-87
docs/_support/readme-first-contact/sample-output/review-output.txt (1)
12-12: Sample capture banner version is correctly bumped. This keeps the stored first-contact artifact aligned with the release metadata and avoids documentation drift.
docs/_support/readme-first-contact/capture-readme-output.sh (1)
8-8: Capture script default version is correctly synchronized to `0.46.0`. This preserves reproducibility for generated sample artifacts without requiring manual env overrides.
docs/_support/readme-first-contact/sample-output/capture-metadata.txt (1)
3-3: Captured metadata and runnable commands are consistently pinned to `0.46.0`. Good alignment across declarative version metadata and executable install/run examples.
Also applies to: 12-13
docs/getting-started/installation.md (1)
78-81: VS Code/Copilot merge-safety guidance now matches runtime behavior. The updated wording correctly communicates merge preservation, abort-on-unparseable/unmergeable input, and
`--force` backup-backed replacement semantics. As per coding guidelines, "docs/**/*.md: User-facing accuracy: CLI examples match current behavior; preserve Jekyll front matter; call out when README/docs index need sync."
src/specfact_cli/adapters/github.py (3)
53-85: Strong refactor: structured input objects improve adapter boundaries. This is a good architectural move for maintainability: the new private input dataclasses reduce long helper signatures and keep rendering/comment concerns cohesive.
836-840: Good fix: exception handling is now specific and observable. Line 838 now catches expected read/decode failures, and Line 839 logs with traceback context instead of silently swallowing failures.
Based on learnings: Applies to src/**/*.py : No broad
`except Exception` without re-raising or logging in `src/`.
1473-1475: Caller/callee migration is consistent across the refactor. The dataclass-based argument flow is wired end-to-end without drift: constructors, helper signatures, and call sites align cleanly in issue body/status/significant-change paths.
Also applies to: 1737-1739, 1757-1778, 1811-1821, 1852-1860
src/specfact_cli/analyzers/code_analyzer.py (1)
35-53: Nice containment of Semgrep worker state. Moving theme collection into
`_SemgrepFeatureBuckets` and only updating the file-local accumulator at the end keeps `self.themes` out of the worker path. That's the right synchronization boundary for the parallel analyzer.
Also applies to: 1173-1185, 1217-1225
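The synchronization boundary described here, workers accumulate into local buckets and only the calling thread merges into shared state, looks roughly like the sketch below. The names are simplified placeholders, not the actual `_SemgrepFeatureBuckets` implementation.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor


def analyze_file(path: str) -> dict[str, int]:
    """Worker: accumulate into a local bucket only; never touch shared state."""
    local: dict[str, int] = defaultdict(int)
    # Real per-file analysis would run here; this toy version just tags the
    # file extension as a "theme".
    local[f"theme:{path.rsplit('.', 1)[-1]}"] += 1
    return dict(local)


def analyze_all(paths: list[str]) -> dict[str, int]:
    merged: dict[str, int] = defaultdict(int)
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Merge worker-local buckets on the calling thread only, so the
        # shared accumulator needs no locking inside workers.
        for bucket in pool.map(analyze_file, paths):
            for theme, count in bucket.items():
                merged[theme] += count
    return dict(merged)
```

Because no worker ever writes to `merged`, the parallel section needs no locks and cannot race on the shared accumulator.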
scripts/check_doc_frontmatter.py (2)
401-446: Nice extraction of the agent-rule draft pipeline. The dataclass + loader split keeps parsing, fallback handling, and rendering decoupled, which makes the CLI-facing fix-hint path much easier to evolve.
449-459: Good isolation of optional frontmatter rendering. Keeping the optional metadata assembly out of
`suggest_frontmatter()` preserves a clean public API boundary and avoids spreading YAML formatting branches through the caller.
@cursor check the code review findings and if valid, fix them and create a PR to this chore/release-0.46.0 branch, so we can remediate: Verify each finding against the current code and only fix it if needed. Inline comments:
In
In
In
In
In
In
In
Outside diff comments:
In
Duplicate comments:
In
In
Here is what was verified and shipped. Valid fixes (implemented on
* fix: address CodeRabbit review findings for 0.46.0 line Scope-aware JSON I/O gate, ADO source_tracking list handling, GitHub remote url regex, Speckit acceptance fallback, theme import matching, async pattern scan simplification, doc frontmatter title from canonical id, changelog Unreleased header, tests and shallow-modules-repo guard. Co-authored-by: Dom <djm81@users.noreply.github.com> * fix: scan json I/O in function default expressions (verify_safe_project_writes) Visit positional and kw-only defaults before applying parameter shadow scopes so defaults are checked in the enclosing namespace. Add regression tests for loads() in defaults. Co-authored-by: Dom <djm81@users.noreply.github.com> * docs: restore 0.46.0 changelog date (release in progress) Co-authored-by: Dom <djm81@users.noreply.github.com> --------- Co-authored-by: Cursor Agent <cursoragent@cursor.com> Co-authored-by: Dom <djm81@users.noreply.github.com>
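The verify_safe_project_writes fixes above close gaps in an AST-based scan of JSON I/O calls. A sketch of the core idea, flagging bare `load`/`dump`/`loads`/`dumps` calls made reachable by `from json import *`, is below; it is illustrative only, and the real script checks far more (attribute calls, scopes, and default expressions).

```python
import ast

JSON_IO = {"load", "dump", "loads", "dumps"}


def flags_bare_json_io(source: str) -> bool:
    """Return True if a wildcard json import plus a bare JSON I/O call is present."""
    tree = ast.parse(source)
    # Did the module do `from json import *`?
    star_import = any(
        isinstance(node, ast.ImportFrom)
        and node.module == "json"
        and any(alias.name == "*" for alias in node.names)
        for node in ast.walk(tree)
    )
    if not star_import:
        return False
    # With a star import, bare loads()/dumps() etc. are json I/O too.
    return any(
        isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id in JSON_IO
        for node in ast.walk(tree)
    )
```

Scanning `ast.Name` calls in addition to `json.loads`-style attribute calls is what closes the wildcard-import bypass the review flagged.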


Description
Release 0.46.0: minor version bump with synchronized packaging metadata (`pyproject.toml`, `setup.py`, `src/__init__.py`, `src/specfact_cli/__init__.py`) and CHANGELOG entry. Brings the current `dev` line to `main` for PyPI publication (includes #492, #493, #496 and related CI/test hardening).
Fixes #(n/a — release chore)
New Features #(n/a — see CHANGELOG for shipped features in this line)
Contract References: None changed in this commit (version + changelog only).
Type of Change
Please check all that apply:
`@icontract` decorators)
Contract-First Testing Evidence
Required for all changes affecting CLI commands or public APIs:
Contract Validation
`@icontract` decorators on public APIs)
`@beartype` decorators applied)
`hatch run contract-test-exploration`
Test Execution
`hatch run contract-test-contracts` ✅
`hatch run contract-test-exploration` ✅
`hatch run contract-test-scenarios` ✅
`hatch run contract-test-full` ✅
Test Quality
How Has This Been Tested?
Contract-First Approach: This PR is a version and changelog-only delta on top of the already-integrated
`dev` branch; runtime behavior is unchanged from the merged feature/fix commits.
Manual Testing
Automated Testing
`main` after merge)
Test Environment
Checklist
`@icontract`, `@beartype`)
Quality Gates Status
`hatch run type-check`)
`hatch run lint`)
`hatch run contract-test-contracts`)
`hatch run contract-test-exploration`)
`hatch run contract-test-scenarios`)
Screenshots/Recordings (if applicable)
N/A (release metadata only).
Release notes
See `CHANGELOG.md` section [0.46.0] - 2026-04-13.