> ⚠️ Caution
> Review failed — the pull request is closed.

ℹ️ Recent review info

⚙️ Run configuration
- Configuration used: Organization UI
- Review profile: ASSERTIVE
- Plan: Pro
- Run ID:

📒 Files selected for processing (20)
📝 Walkthrough — Summary by CodeRabbit

Walkthrough
Replaces documented CLI examples using

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as CLI\n(sync_bridge_command_impl)
    participant Setup as Setup\n(sync_bridge_command_setup)
    participant Dispatcher as Phase\nDispatcher
    participant SpeckitSync as Speckit\nChangeProposalSync
    participant Converter as SpecKit\nConverter
    participant BridgeSync as BridgeSync
    participant OpenSpec as OpenSpec\nFilesystem
    User->>CLI: specfact sync bridge --mode change-proposal --feature <id>
    CLI->>Setup: maybe_auto_detect_adapter / probe_capabilities
    Setup-->>CLI: adapter instance + capabilities
    CLI->>Dispatcher: run_sync_bridge_tracked_pipeline(...)
    Dispatcher->>SpeckitSync: phase_change_proposal(...)
    SpeckitSync->>Converter: convert_to_change_proposal(feature_path, change_name, out)
    Converter->>BridgeSync: create change dir + generate files
    BridgeSync->>OpenSpec: write proposal.md / design.md / tasks.md / specs/...
    OpenSpec-->>BridgeSync: persisted
    BridgeSync-->>Converter: return change path
    Converter-->>SpeckitSync: return created change paths
    SpeckitSync-->>Dispatcher: phase complete
    Dispatcher-->>CLI: pipeline complete
    CLI-->>User: success / created proposals
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related issues
Possibly related PRs
Suggested labels
Actionable comments posted: 69
⚠️ Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
docs/getting-started/installation.md (1)
303-387: ⚠️ Potential issue | 🟡 Minor — Update the remaining legacy bridge command in Quick Tips.

Line 386 still says `project sync bridge --adapter <adapter-name>`, which conflicts with the updated `specfact sync bridge` examples above.

📘 Suggested doc fix

```diff
-- **Bridge adapter sync**: Use `project sync bridge --adapter <adapter-name>` for external tool integration (Spec-Kit, OpenSpec, GitHub, etc.)
+- **Bridge adapter sync**: Use `sync bridge --adapter <adapter-name>` for external tool integration (Spec-Kit, OpenSpec, GitHub, etc.)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/getting-started/installation.md` around lines 303 - 387, the Quick Tips section still contains the old bridge command "project sync bridge --adapter <adapter-name>"; update that bullet to use the consistent CLI form "specfact sync bridge --adapter <adapter-name>" to match the examples above (search for the Quick Tips line containing "Bridge adapter sync" and replace the command text accordingly).

docs/adapters/github.md (1)
337-358: ⚠️ Potential issue | 🟡 Minor — Fix stale command-reference anchor after bridge rename.

Line 357 still points to `/reference/commands/#project-sync-bridge` while the page now documents `specfact sync bridge`. This can send readers to the wrong anchor.

🔗 Suggested link fix

```diff
-For public repos, add `--sanitize` when exporting so content is sanitized before creating issues. See [DevOps Adapter Integration](/integrations/devops-adapter-overview/) and the [sync bridge command reference](/reference/commands/#project-sync-bridge).
+For public repos, add `--sanitize` when exporting so content is sanitized before creating issues. See [DevOps Adapter Integration](/integrations/devops-adapter-overview/) and the [sync bridge command reference](/reference/commands/#sync-bridge).
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/adapters/github.md` around lines 337 - 358, The link pointing to /reference/commands/#project-sync-bridge is stale after the bridge rename; update the anchor to the correct command anchor for "specfact sync bridge" (e.g., change the reference from /reference/commands/#project-sync-bridge to /reference/commands/#specfact-sync-bridge) so the "See DevOps Adapter Integration and the sync bridge command reference" link navigates to the proper section.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/reference/commands.md`:
- Line 59: Update the document sections that currently reference the
non-existent `specfact project sync` taxonomy: change those headings/entries to
`specfact sync` and list the actual subcommands `specfact sync bridge`,
`specfact sync repository`, and `specfact sync intelligent` (and keep the
example usage like `specfact sync bridge --adapter github --mode export-only
--repo .` intact); ensure any occurrences on the two mentioned lines are
replaced so the taxonomy and examples accurately reflect `specfact sync` as the
parent command.
In
`@packages/specfact-code-review/src/specfact_code_review/tools/contract_runner.py`:
- Around line 69-73: The current logic skips every file under
"specfact_project/sync_runtime/" unless its filename is explicitly allowlisted
via _SYNC_RUNTIME_ICONTRACT_ENTRYPOINTS, which causes new public modules to be
silently skipped; change the check so only known helper/convention filenames are
skipped and unknown modules are scanned: inside the function that uses
normalized and file_path.name (the block referencing normalized, file_path.name
and _SYNC_RUNTIME_ICONTRACT_ENTRYPOINTS), invert the condition so that when
"/specfact_project/sync_runtime/" is in normalized you return True (skip) only
if the filename matches a known helper pattern or is listed in
_SYNC_RUNTIME_ICONTRACT_ENTRYPOINTS (e.g., name in
_SYNC_RUNTIME_ICONTRACT_ENTRYPOINTS or matches underscore/private/helper
naming), otherwise return False so new modules are not silently excluded.
In
`@packages/specfact-project/src/specfact_project/importers/speckit_change_proposal_bridge.py`:
- Around line 30-34: The code creates capability_dir = change_dir / "specs" /
capability and calls capability_dir.mkdir(parents=True, exist_ok=True), which
already creates change_dir, so remove the redundant
change_dir.mkdir(parents=True, exist_ok=True) call; update the function or block
containing change_dir and capability_dir (identify by the symbols change_dir and
capability_dir) to drop the extra mkdir line so directories are created only
once via capability_dir.mkdir(...).
- Around line 480-500: The _render_speckit_plan method currently hardcodes the
Technology Stack block ("Python 3.11+", "typer", "pydantic"); update it to
derive language/version and primary dependencies from the design input (e.g.,
design.get("language"), design.get("language_version"),
design.get("dependencies") or a similar field) and format those values into the
Technology Stack section, falling back to generic placeholders like
"Language/Version: TBD" and "Primary Dependencies: TBD" when keys are missing;
ensure you update the template construction in _render_speckit_plan to use the
extracted variables instead of the literal strings so non-Python projects render
correctly.
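  A hypothetical sketch of the suggested derivation; the key names (`language`, `language_version`, `dependencies`) follow the wording above and are assumptions, not the real design schema:

  ```python
  def render_technology_stack(design: dict) -> str:
      # Derive the block from the design input instead of hardcoding
      # "Python 3.11+" / "typer" / "pydantic"; fall back to "TBD".
      language = design.get("language", "TBD")
      version = design.get("language_version", "")
      deps = design.get("dependencies") or []
      lang_line = f"{language} {version}".strip() or "TBD"
      deps_line = ", ".join(deps) if deps else "TBD"
      return (
          "## Technology Stack\n"
          f"- Language/Version: {lang_line}\n"
          f"- Primary Dependencies: {deps_line}\n"
      )
  ```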
In
`@packages/specfact-project/src/specfact_project/importers/speckit_converter.py`:
- Around line 479-480: The call to
SpecKitChangeProposalBridge.convert_feature_to_change wraps Path-typed,
contract-validated arguments in redundant Path() calls; remove the extra
wrapping and pass feature_path and output_dir (and change_dir at the other
occurrence) directly to convert_feature_to_change — e.g., change
bridge.convert_feature_to_change(Path(feature_path), change_name,
Path(output_dir)) to bridge.convert_feature_to_change(feature_path, change_name,
output_dir) and similarly remove Path() around change_dir in the other
occurrence, leaving the signature and contracts intact.
- Around line 8-9: The top-of-file broad pylint disable in speckit_converter.py
is too permissive; remove the long comma-separated disable list and instead fix
the root causes (remove unused imports, rename variables to avoid
redefined-argument-from-local, replace constant-test patterns) and apply narrow
inline disables only where unavoidable (e.g., use "# pylint:
disable=unused-import" immediately above a specific import or "# pylint:
disable=redefined-argument-from-local" on the single function or block). Locate
the module-level disable string (the long pylint: disable=... line) and replace
it with targeted, contextual fixes and minimal inline disables scoped to the
smallest possible code region.
- Around line 274-276: The current condition uses substring matching (story_ref
in story_key) which can produce false positives; change the check to an exact
match by replacing the substring test with equality (story_ref == story_key) or,
if hierarchical prefix matching was intended, use a clear prefix/suffix check
(e.g., story_key.startswith(story_ref) or story_key.split(...) comparison).
Update the condition around variables story_ref, story_key, task and the
out.append(...) call in speckit_converter.py so it only appends
task.get("description", "") when the match logic is exact or explicitly defined.
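  A minimal sketch of the exact-match fix; the story keys (`US-1`, `US-12`) are made-up examples illustrating the false positive:

  ```python
  def matches(story_ref: str, story_key: str) -> bool:
      # Before: `story_ref in story_key` — "US-1" wrongly matches "US-12".
      # After: exact equality only.
      return story_ref == story_key
  ```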
In
`@packages/specfact-project/src/specfact_project/importers/speckit_markdown_sections.py`:
- Around line 75-76: The keyword list used in the recovery branch contains a
duplicate "retry"; update the conditional that checks acc_lower (the elif with
any(keyword in acc_lower for keyword in [...])) to remove the duplicate "retry"
so the list is unique, leaving the branch that calls
buckets.recovery.append(scenario_text) unchanged; this is in the same function
around the recovery handling where acc_lower and buckets.recovery are
referenced.
- Around line 60-69: Rename the misspelled function invsest_lines to
invest_lines and update all call sites to use the new name; specifically change
the function definition def invsest_lines() -> list[str]: to def invest_lines()
-> list[str]: (keeping the same return values and type hint) and replace any
calls to invsest_lines (including the one referenced in the review) with
invest_lines so references resolve correctly.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_alignment_helpers.py`:
- Around line 137-140: If bridge_config.external_base_path is a relative path,
resolve it against the repo root before passing into _alignment_collect_ids to
avoid depending on the process CWD: compute base_path as repo_path joined with
bridge_config.external_base_path when not absolute (and normalize/abspath it),
otherwise use the absolute external_base_path or fall back to repo_path; update
the code around base_path and the call to _alignment_collect_ids(adapter,
base_path, bridge_config, bundle_dir) so discover_features sees a
repo-root-relative absolute path.
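  A sketch of CWD-independent resolution, assuming only that `external_base_path` may be relative, absolute, or unset (the helper name is illustrative):

  ```python
  from pathlib import Path

  def resolve_base_path(repo_path: Path, external_base_path: Path | None) -> Path:
      # Resolve a configured base path against the repo root so the result
      # never depends on the process working directory.
      if external_base_path is None:
          return repo_path.resolve()
      if external_base_path.is_absolute():
          return external_base_path
      return (repo_path / external_base_path).resolve()
  ```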
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_backlog_bundle_impl.py`:
- Around line 50-65: The function _ibi_fallback_last_proposal currently returns
the last proposal even when imported_proposal.source_tracking.tool !=
adapter_type; change it so that after retrieving imported_proposal you check
imported_proposal.source_tracking and if source_tool != adapter_type you log a
warning (use logger.warning) and return None instead of returning the proposal,
so proposals from a different backlog system are not silently associated; ensure
you reference project_bundle.change_tracking.proposals,
imported_proposal.source_tracking.tool and adapter_type in the check and
logging.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_backlog_helpers.py`:
- Around line 63-77: The function upsert_backlog_entry_list currently mutates
the incoming entries list; change it to be a pure function by working on a copy:
create a new list (e.g., new_entries = entries.copy() or [e for e in entries])
and when updating an item assign a merged dict (new_entries[idx] = {**entry,
**new_entry}) so you never modify the original list or its dicts; perform the
same matching logic against the original entries but apply changes to
new_entries, append new_entry to new_entries if no match, and return
new_entries.
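  A sketch of the pure-function variant; matching on an `"id"` key is a placeholder for the real matching logic:

  ```python
  from typing import Any

  def upsert_entry(entries: list[dict[str, Any]], new_entry: dict[str, Any],
                   key: str = "id") -> list[dict[str, Any]]:
      # Pure upsert: never mutates `entries` or the dicts it contains.
      result = list(entries)
      for idx, entry in enumerate(result):
          if entry.get(key) == new_entry.get(key):
              result[idx] = {**entry, **new_entry}  # merged copy, input untouched
              return result
      result.append(new_entry)
      return result
  ```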
- Around line 33-42: The function get_backlog_entries_list may raise
AttributeError by assuming proposal.source_tracking has source_metadata; update
it to defensively access source_metadata (e.g., use
getattr(proposal.source_tracking, "source_metadata", None) or check
hasattr(proposal.source_tracking, "source_metadata") before using it) so that if
source_metadata is missing you return [] or fall back to
_backlog_entries_from_fallback_metadata(proposal, source_metadata) only when
source_metadata is a dict; keep the existing filtering of list entries when
present.
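  A sketch of the defensive access pattern; the `backlog_entries` key and the object shape are illustrative, not the real model:

  ```python
  from typing import Any

  def get_backlog_entries(source_tracking: Any) -> list[dict[str, Any]]:
      # Tolerate source_tracking objects that lack source_metadata entirely,
      # or whose metadata is not a dict, instead of raising AttributeError.
      metadata = getattr(source_tracking, "source_metadata", None)
      if not isinstance(metadata, dict):
          return []
      entries = metadata.get("backlog_entries", [])
      return [e for e in entries if isinstance(e, dict)]
  ```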
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_export_change_proposals_impl.py`:
- Around line 21-44: The run_export_change_proposals_to_devops function has an
unwieldy 24-parameter signature; refactor by introducing small dataclasses
(e.g., AdapterConfig for adapter_type/repo_owner/repo_name/api_token/ado_*
values, ExportOptions for
sanitize/interactive/update_existing/track_code_changes/add_progress_comment/include_archived/change_ids,
TmpFileConfig for export_to_tmp/import_from_tmp/tmp_file, and RepoConfig for
target_repo/use_gh_cli/code_repo_path) and replace the long parameter list with
a few typed dataclass parameters (or a single combined Config dataclass) in the
run_export_change_proposals_to_devops signature; update all call sites to
construct and pass the new dataclass instances (or accept old keyword args
temporarily by providing an overload/compat shim that converts kwargs into the
new dataclasses) and adjust internal references inside
run_export_change_proposals_to_devops to read from the dataclass fields
(preserve SyncResult return and behavior and update unit tests and any callers
to reflect the new signature).
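  A trimmed sketch of the grouped-parameter refactor; which fields land in which dataclass is an assumption, and only a few of the 24 parameters are shown:

  ```python
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class AdapterConfig:
      adapter_type: str
      repo_owner: str | None = None
      repo_name: str | None = None
      api_token: str | None = None

  @dataclass(frozen=True)
  class ExportOptions:
      sanitize: bool = False
      update_existing: bool = False
      include_archived: bool = False
      change_ids: tuple[str, ...] = ()

  def run_export(adapter: AdapterConfig, options: ExportOptions) -> str:
      # Two typed parameters replace the original long signature.
      return f"{adapter.adapter_type}:{len(options.change_ids)}"
  ```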
- Around line 104-105: The except block that currently does "except Exception as
e: errors.append(f'Export to DevOps failed: {e}')" loses diagnostic context;
update the handler in bridge_sync_export_change_proposals_impl (the except block
that appends to errors) to capture and record the full exception traceback and
exception type (e.g., via traceback.format_exc() or logging.exception) and
append or log that detailed text (not just str(e)) so downstream callers get
full diagnostics for failures originating anywhere in this orchestration
function.
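  A sketch of the traceback-capturing handler; the raised `ValueError` stands in for whatever the export call might throw:

  ```python
  import traceback

  def safe_export(errors: list[str]) -> None:
      # Record exception type and full traceback, not just str(e), so
      # downstream callers can see where the failure originated.
      try:
          raise ValueError("boom")  # stand-in for the export orchestration
      except Exception as e:
          errors.append(
              f"Export to DevOps failed: {type(e).__name__}: {e}\n"
              f"{traceback.format_exc()}"
          )
  ```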
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_export_change_proposals_loop.py`:
- Around line 13-35: The function ecd_iterate_active_proposals has an oversized
parameter list; define a configuration dataclass (e.g., EcdIterationConfig) or a
TypedDict containing grouped fields such as repo info (target_repo, repo_owner,
repo_name, ado_org, ado_project), adapter info (adapter, adapter_type), file/tmp
flags (import_from_tmp, export_to_tmp, tmp_file, code_repo_path), behavior flags
(update_existing, should_sanitize, track_code_changes, add_progress_comment),
and helpers/services (bridge, sanitizer, operations, errors, warnings), replace
the long signature with a single config parameter plus any truly orthogonal
required args, update all callers to construct and pass the new config object,
and adjust references inside ecd_iterate_active_proposals to access
attributes/keys on the new config type; ensure type annotations and imports are
added for dataclass/TypedDict and run tests to fix any call-site mismatches.
- Around line 61-68: The logger is being instantiated inside the except block;
move logger = logging.getLogger(__name__) to module-level (top of
bridge_sync_export_change_proposals_loop.py) so it’s created once, then remove
the logger instantiation from the except Exception as e block and use the
module-level logger there (keeping the same logger.debug call and
exc_info=True). Ensure the module-level symbol is named logger to match the
existing usage in the exception handler.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_export_ecd_prepare.py`:
- Around line 25-30: Replace the direct access to the private
AdapterRegistry._adapters by calling the public lookup API: use
AdapterRegistry.get_adapter(adapter_type) (or AdapterRegistry.list_adapters() if
listing is needed) instead of accessing _adapters, check its return value and
append the same error to errors when it returns None or raises a descriptive
exception; update the block that currently references adapter_type and errors to
call get_adapter(adapter_type.lower()) (or normalize before) and handle errors
consistently so encapsulation is preserved and future changes to AdapterRegistry
won’t break this code.
- Around line 153-156: available_change_ids is built with {p.get("change_id")
for p in active_proposals if p.get("change_id")} then immediately filtered again
for cid is not None, which is redundant; remove the second comprehension/filter
and compute available_change_ids once (keep the existing first comprehension) so
invalid_change_ids = valid_change_ids - available_change_ids works without the
extra {cid for cid in available_change_ids if cid is not None} step; update the
code around the variables available_change_ids, valid_change_ids, and
invalid_change_ids accordingly.
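  A sketch showing why the second filter is redundant — the comprehension's truthiness test already drops `None` (and empty strings):

  ```python
  def collect_change_ids(active_proposals: list[dict]) -> set[str]:
      # One pass is enough: `if p.get("change_id")` excludes None/"" already,
      # so no follow-up `{cid for cid in ... if cid is not None}` is needed.
      return {p.get("change_id") for p in active_proposals if p.get("change_id")}
  ```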
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_export_one_proposal.py`:
- Around line 247-270: The method _import_from_tmp_path currently returns an
empty dict on errors which can be mistaken for a valid (but empty) proposal;
change _import_from_tmp_path to return None on failure (file-not-found, parse
error, exceptions) and update its caller in run() to explicitly check for None
and return early (or skip calling _export_artifact_and_persist) instead of
passing an empty dict; ensure _export_artifact_and_persist is only called when
_import_from_tmp_path returned a non-None dict and preserve existing
error/warning appends (inspect symbols: _import_from_tmp_path, run,
_export_artifact_and_persist).
- Around line 259-266: The temp-file cleanup for original_tmp and
sanitized_file_path should run regardless of exceptions; move the unlink logic
into a finally block so temporary files are always removed even if
_parse_sanitized_proposal (or other code) raises. Specifically, in the method
where original_tmp and sanitized_file_path are created (referencing variables
original_tmp, sanitized_file_path and the call to _parse_sanitized_proposal),
wrap the processing in try: ... finally: and put the existing unlink and
warnings.append(...) logic inside that finally so cleanup always executes.
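  A sketch of the try/finally shape, where `read_text` stands in for `_parse_sanitized_proposal` (which may raise):

  ```python
  import os
  import tempfile
  from pathlib import Path

  def process_with_cleanup(content: str) -> str:
      fd, name = tempfile.mkstemp(suffix=".md")
      os.close(fd)
      tmp = Path(name)
      try:
          tmp.write_text(content, encoding="utf-8")
          return tmp.read_text(encoding="utf-8")  # stand-in for parsing
      finally:
          tmp.unlink(missing_ok=True)  # always runs, even if parsing raises
  ```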
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_extract_requirement_impl.py`:
- Around line 75-102: The function erfp_extract_section_details is wrongly
collecting lines from fenced code blocks (in_code_block branch) and appending
them to details, which causes example/YAML/CLI lines to become requirements;
change the in_code_block handling so that when a line toggles the in_code_block
state (stripped.startswith("```")) you flip the flag and otherwise skip any
lines while in_code_block (do not run the cleaning/appending logic inside that
branch), keeping only the existing logic for non-code lines; ensure the function
still correctly toggles in_code_block and returns the same list when no code
blocks are present.
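  A minimal sketch of the fence-toggle fix — flip the flag on ``` lines and skip everything while inside a block:

  ```python
  def extract_details(lines: list[str]) -> list[str]:
      # Skip fenced code blocks so example YAML/CLI lines never become
      # requirements; only the toggle runs on fence lines themselves.
      details: list[str] = []
      in_code_block = False
      for line in lines:
          stripped = line.strip()
          if stripped.startswith("```"):
              in_code_block = not in_code_block
              continue
          if in_code_block:
              continue  # never collect code lines
          if stripped:
              details.append(stripped)
      return details
  ```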
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_find_source_tracking_entry.py`:
- Around line 19-26: The heuristic in the return expression that treats a
36-character string containing a hyphen as a GUID (checking entry_project and
target_project with len(...) == 36 and "-" in ...) is too permissive; replace
that heuristic with a strict UUID check (e.g., attempt parsing via uuid.UUID or
match a canonical UUID regex) when evaluating entry_project and target_project,
and update the logic around entry_has_guid, entry_project, and target_project in
the function containing this return so only valid UUIDs trigger the "project
unknown" fallback.
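  A sketch of the strict check — `uuid.UUID` rejects strings the old `len == 36 and "-" in s` heuristic would accept:

  ```python
  import uuid

  def is_guid(value: str) -> bool:
      # Parse via uuid.UUID and require canonical form; any 36-char string
      # with a hyphen no longer slips through.
      try:
          return str(uuid.UUID(value)) == value.lower()
      except (ValueError, AttributeError, TypeError):
          return False
  ```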
- Around line 29-51: The two pairs of functions duplicate URL-matching logic;
extract the shared logic into a single helper (e.g.,
_try_source_url_match(source_url: str, target_repo: str, entry_type: str) ->
bool or Optional[dict[str, Any]]) that performs the GitHub repo regex, the
dev.azure.com parse/hostname check and org comparison, and returns a match
result; then have _fst_dict_try_source_urls and _fst_list_try_secondary_urls
delegate to that helper (passing source_tracking.get("source_url", "") and
returning source_tracking on true), and likewise have _fst_dict_try_ado_tertiary
and _fst_list_try_ado_tertiary call the same helper for the ADO-specific branch
so behavior and return values remain identical while removing duplicated
regex/URL parsing.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_issue_subhelpers.py`:
- Around line 138-139: The except Exception: return False block is silently
swallowing errors (same as in uicn_fetch_title_state_flags); change it to catch
the exception as a variable (e.g., except Exception as e:) and log the full
error/traceback using the module logger (e.g., logger.exception or logger.error
with traceback) before returning False so failures are recorded consistently
across the module.
- Around line 244-303: hcct_persist_progress_comment currently mutates proposal,
source_tracking_list and operations and performs side-effect calls
(adapter.export_artifact, bridge._save_openspec_change_proposal), making
behavior hard to reason about; refactor by avoiding in-place mutations: create
and use local copies (e.g., proposal_with_progress, updated_entry,
new_source_tracking_list) and return the updated proposal, the new
source_tracking_list and the SyncOperation to append instead of directly
changing the inputs, or alternatively clearly document the mutation contract at
the top of hcct_persist_progress_comment; ensure calls to
run_update_source_tracking_entry, adapter.export_artifact and
bridge._save_openspec_change_proposal remain but operate on the returned/updated
objects and update callers to accept the returned values.
- Around line 105-107: The bare "except Exception: pass" in the
exception-handling block silently swallows all errors; replace it by capturing
the exception as a variable and logging it (e.g., logger.exception or
logger.debug with the exception string) in the same block so
authentication/network/other errors are recorded; keep returning (False, False)
as before unless you decide to re-raise for fatal errors. Locate the try/except
in bridge_sync_issue_subhelpers (the block that currently ends with "except
Exception: pass" and then "return False, False") and update that except clause
to log the caught exception with contextual message.
- Around line 26-51: Wrap the HTTP call and response.raise_for_status in a
try/except that catches requests.HTTPError inside uicn_github_title_state
(around the requests.get + response.raise_for_status lines) and handle 404
explicitly (e.g., return a consistent tuple like (None, None, False, False) when
the issue doesn't exist) while for other HTTP errors raise a new exception or
log a clearer message that includes the URL, status code and response.text to
aid debugging; ensure you still return the same tuple shape on handled errors so
callers of uicn_github_title_state (which expects current_issue_title,
current_issue_state, needs_title_update, needs_state_update) get predictable
results.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_issue_update_impl.py`:
- Around line 27-44: The function run_update_issue_content_if_needed currently
takes 16 parameters; refactor by introducing small context dataclasses (e.g.,
RepoContext with repo_owner, repo_name, ado_org, ado_project, target_repo and
UpdateContext with import_from_tmp, tmp_file, adapter, adapter_type) and replace
those related parameters with instances of these classes in the signature;
update the function body to access fields via repo_ctx and update_ctx, adjust
all call sites to construct and pass RepoContext/UpdateContext, add necessary
imports (dataclass, Path typing) and update type hints for source_tracking_list,
operations, and errors as before.
- Around line 120-127: The control flow after calling hcct_try_detect_changes is
redundant: replace the separate checks "if stop: return" and "if pdata is None:
return" with a single combined check and then assign progress_data;
specifically, after calling hcct_try_detect_changes(bridge, code_repo_path,
change_id, hcct_load_last_detection(target_entry), errors), use a single
conditional like "if stop or pdata is None: return" and then set progress_data =
pdata to eliminate the unnecessary separate None check while keeping behavior
identical for the stop and pdata cases.
- Around line 189-216: Duplicate extraction of source_metadata from target_entry
should be replaced with a single small helper function (e.g.,
_get_source_metadata(target_entry)) that returns a dict fallback when value is
missing or not a dict; replace both occurrences in
bridge_sync_issue_update_impl.py (the blocks around the use of target_entry,
last_synced_status, and current_status) to call that helper, and update any
references (e.g., where last_synced_status =
source_metadata.get("last_synced_status") and later metadata usage) to use the
helper result.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_openspec_proposal_parse.py`:
- Around line 42-67: The _step function incorrectly treats any stripped "## ..."
line as a section header even if it appears inside a fenced code block; modify
_step to update and consult a fence-state flag before performing header/mode
logic (i.e., detect opening/closing fences like ``` or ~~~ at the start of the
function and toggle an in-fence boolean on the parser/state object), and only
run the existing header checks (ls.startswith("# Change:"), the "## Why"/"##
What Changes"/"## Impact"/"## Source Tracking" checks and calls to _set_mode or
_in_why/_in_what/_in_impact) when not inside a fenced block; keep references to
the existing symbols _step, _set_mode, _in_why/_in_what/_in_impact and the st.*
flags when implementing this change.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_parse_source_tracking_entry_impl.py`:
- Around line 68-76: The _pst_apply_progress_comments function currently
swallows JSON decode errors when parsing progress_comments from entry_content;
update it to catch json.JSONDecodeError (and ValueError if needed) but log the
error (at debug level) including the exception message and the raw matched
string before returning, using the module's logger (or create one) so malformed
progress_comments are visible for troubleshooting while preserving the existing
silent-fail behavior; reference _pst_apply_progress_comments and _pst_meta to
locate where to add the logging.
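  A sketch of the debug-logging variant; the expected list payload is an assumption about the `progress_comments` shape:

  ```python
  import json
  import logging

  logger = logging.getLogger(__name__)

  def parse_progress_comments(raw: str) -> list:
      # Preserve the silent-fail behavior but record malformed payloads at
      # debug level so they are visible while troubleshooting.
      try:
          value = json.loads(raw)
          return value if isinstance(value, list) else []
      except json.JSONDecodeError as e:
          logger.debug("Malformed progress_comments %r: %s", raw, e)
          return []
  ```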
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_read_openspec_proposals.py`:
- Around line 80-85: The function _finalize_source_tracking currently returns
mixed types (empty dict, dict, or list) which is confusing; change it to always
return a list of dicts: return [] when source_tracking_list is empty, return
source_tracking_list (which may be length 1 or >1) otherwise, and update the
function signature to -> list[dict[str, Any]]; also search for callers of
_finalize_source_tracking and adjust code that expects a dict (e.g., unpacking
or key access) to handle a list instead or extract the single element as needed.
- Around line 39-44: The try/except block that parses source_url with urlparse
and then checks parsed.hostname == "dev.azure.com" is a no-op and silently
swallows errors; either remove this dead code or implement the Azure DevOps
parsing: use urlparse(source_url) (urlparse, parsed, source_url,
parsed.hostname) without a broad silent except, validate parsed.hostname,
extract the ADO path segments (organization/project/_git/repo or the relevant
parts) and return or use them, and on parse errors raise or log a meaningful
error instead of pass; if you choose removal, simply delete the entire
try/except and any unused parsed/hostname checks.
- Around line 113-115: Move logger creation out of the exception handlers by
instantiating logger = logging.getLogger(__name__) once at module level
(immediately after the imports) and remove the local logger =
logging.getLogger(__name__) lines inside the except Exception as e blocks;
update the two places that currently do logger.warning("Failed to parse proposal
from %s: %s", proposal_file, e) (the except blocks around the proposal parsing
at the spots shown in the diff, including the one near line 153) to reuse the
module-level logger variable.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_save_openspec_parts_impl.py`:
- Around line 42-53: The archive name parsing currently assumes a date-prefixed
format by using archive_name.split("-", 3) and checking parts[3] == change_id,
which can silently fail for other naming conventions; update the logic in the
loop that iterates archive_dir.iterdir() to explicitly validate the expected
pattern (e.g., use a regex that captures the date prefix and change_id or check
len(parts) == 4 and that parts[0:3] match a date pattern) before comparing to
change_id, and add a concise comment above the block documenting the expected
archive_name format and why the validation is required (reference variables
archive_name, parts, change_id, and the candidate = archive_subdir /
"proposal.md" check).
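  A sketch of the regex validation; the `YYYY-MM-DD-<change_id>` layout is inferred from the `split("-", 3)` mentioned above and is an assumption:

  ```python
  import re

  # Expected archive directory layout: YYYY-MM-DD-<change_id>,
  # e.g. "2024-01-15-add-auth". Validate the date prefix explicitly
  # instead of trusting split("-", 3) positions.
  _ARCHIVE_NAME = re.compile(r"^\d{4}-\d{2}-\d{2}-(?P<change_id>.+)$")

  def archive_matches(archive_name: str, change_id: str) -> bool:
      m = _ARCHIVE_NAME.match(archive_name)
      return bool(m) and m.group("change_id") == change_id
  ```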
- Around line 150-165: soscp_replace_what_body currently defines identical
regexes what_pattern and what_simple so the fallback never runs; replace the
duplicate by making what_simple a more relaxed fallback (e.g. match "## What
Changes" block until the next top-level "##" or end) and keep the original
what_pattern as the stricter match (variables: what_pattern, what_simple;
function: soscp_replace_what_body); update the second re.sub to use this new
what_simple fallback so the function tries the strict pattern first and then a
broader one if the strict search fails.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_save_openspec_proposal_impl.py`:
- Around line 34-47: The current broad except Exception around the proposal file
processing swallows programming errors; narrow error handling to recoverable I/O
failures and propagate others: replace the generic "except Exception as e" with
"except OSError as e" (or IOError for compatibility) around
proposal_file.read_text / proposal_file.write_text to log the failure via
logger.warning("Failed to save source tracking to %s: %s", proposal_file, e) and
then either re-raise or return a failure indicator (e.g., return False) so
callers can act; remove/avoid catching AttributeError/TypeError so that issues
in bridge._normalize_source_tracking, soscp_apply_* functions bubble up to fail
fast.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_source_tracking_list_impl.py`:
- Around line 48-49: The code mutates the caller's entry_data by setting
entry_data["source_repo"] = target_repo; to avoid unexpected side effects,
create a shallow copy before modifying and use that copy for downstream use
(e.g., new_entry = dict(entry_data)) or explicitly document that entry_data is
mutated; update the branch that checks "if 'source_repo' not in entry_data:" to
assign to the copied dict (reference variables: entry_data, target_repo and the
if branch that sets source_repo).
- Around line 8-13: The function _usl_ado_orgs_match currently returns tuple[str
| None, str | None] | None despite both tuple entries being identical when
present; change its signature to return str | None (i.e., the org name or None)
and implement it to extract entry_org and target_org, return entry_org if they
match otherwise None; then update all callers to accept a single str | None (use
the returned org directly instead of unpacking a tuple) and adjust any type
hints/annotations accordingly.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_what_changes_impl.py`:
- Around line 209-211: The hardcoded threshold len(result) < 20 in the
bridge_sync_what_changes_impl.py block is a magic number; extract it to a named
constant (e.g. MIN_RESULT_LENGTH = 20) near the top of the module or the
function and add a brief comment explaining why 20 was chosen (minimum
meaningful characters for a change description). Replace the direct comparison
with if not result or len(result) < MIN_RESULT_LENGTH: and ensure any tests or
callers use the constant for consistency; reference variables result,
what_changes_lines and description to locate the check.
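A tiny sketch of the named-constant refactor (the 20-character floor and fallback behavior follow the comment; the function shape is hypothetical):

```python
# Minimum characters for a meaningful change description; shorter results
# fall back to the raw description text.
MIN_RESULT_LENGTH = 20


def summarize_change(result: str, description: str) -> str:
    if not result or len(result) < MIN_RESULT_LENGTH:
        return description
    return result
```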
In
`@packages/specfact-project/src/specfact_project/sync_runtime/bridge_sync_write_openspec_change_impl.py`:
- Around line 50-51: The write of proposal_file using
proposal_file.write_text("\n".join(proposal_lines), encoding="utf-8") may omit a
trailing newline; update the write to ensure the file ends with a newline (e.g.,
build the content from proposal_lines and append a final "\n" before calling
proposal_file.write_text) so that the proposal_file, written by the code
referencing proposal_file and proposal_lines, always ends with a newline.
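The fix is a one-character change at the write site; a self-contained sketch:

```python
from pathlib import Path


def write_proposal(proposal_file: Path, proposal_lines: list[str]) -> None:
    # Join the lines and append a final "\n" so the file always ends with a
    # newline, regardless of the last element.
    proposal_file.write_text("\n".join(proposal_lines) + "\n", encoding="utf-8")
```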
In
`@packages/specfact-project/src/specfact_project/sync_runtime/speckit_backlog_sync.py`:
- Line 65: The assignment that builds tasks_path redundantly wraps an
already-validated Path in Path(...) — update the code in speckit_backlog_sync
where tasks_path is assigned (look for the line creating tasks_path in the
function/method that accepts feature_path) to remove the extra Path() call and
construct the path directly using feature_path / "tasks.md" so feature_path's
type is relied upon and the redundant conversion is eliminated.
- Around line 30-44: The _EXTENSION_TOOLS map includes "trello" but _PATTERNS
lacks a "trello" entry so detect_issue_mappings will never find Trello
references; either add a Trello regex to _PATTERNS (e.g. one that matches Trello
card URLs/shortlinks such as the /c/<shortlink> URL form and 8-char shortlink
tokens) and name the key "trello", or remove the "trello" entry from
_EXTENSION_TOOLS if Trello detection is intentionally unsupported; update the
_PATTERNS dictionary (and tests if any) and ensure detect_issue_mappings handles
the new "trello" key.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/speckit_bridge_backlog.py`:
- Around line 17-18: Remove the redundant `@ensure`(lambda result:
isinstance(result, list), "Must return list") decorators from functions already
annotated with a beartype-checked return type (e.g., return type list[dict[str,
Any]]) and only leave the `@beartype` decorator; locate occurrences of the `@ensure`
decorator in speckit_bridge_backlog.py (they appear alongside `@beartype`) and
delete those `@ensure` lines so the return-type validation is handled solely by
beartype, unless you intentionally want defense-in-depth in which case leave
them documented and justified.
- Around line 75-88: The helper _to_backlog_entry repeatedly calls
infer_backlog_repo_identifier(repo_path, mapping.tool) per mapping which spawns
a subprocess each time; change the calling code in
detect_speckit_backlog_mappings to compute and cache the repository
identifier(s) once and pass the cached value(s) into _to_backlog_entry (or add
an extra parameter like repo_identifier) so you avoid repeated subprocess
invocation; if all mappings share the same repo identifier compute it once,
otherwise build a small dict cache keyed by mapping.tool and reuse those values
when creating entries.
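A sketch of the per-tool cache; `infer_backlog_repo_identifier` is stubbed here because the real helper shells out to external tooling, and the entry/mapping shapes are illustrative:

```python
from pathlib import Path


def infer_backlog_repo_identifier(repo_path: Path, tool: str) -> str:
    # Stand-in for the real helper, which spawns a subprocess per call.
    return f"{tool}:{repo_path.name}"


def build_entries(repo_path: Path, mappings: list[dict]) -> list[dict]:
    cache: dict[str, str] = {}
    entries: list[dict] = []
    for mapping in mappings:
        tool = mapping["tool"]
        if tool not in cache:  # one lookup per tool, not one per mapping
            cache[tool] = infer_backlog_repo_identifier(repo_path, tool)
        entries.append({"ref": mapping["ref"], "repo": cache[tool]})
    return entries
```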
In
`@packages/specfact-project/src/specfact_project/sync_runtime/speckit_change_proposal_sync.py`:
- Around line 99-105: Wrap the file read in _extract_proposal_markers in a
try/except to handle file I/O and encoding errors: call
proposal_path.read_text(encoding="utf-8") inside a try block, catch OSError and
UnicodeError, log a concise warning with context (use
logging.getLogger(__name__) or the module logger) including the proposal_path
and exception, and return an empty set() on error so the function still
satisfies the `@ensure` return type; keep the existing marker extraction logic
unchanged when read_text succeeds.

In
`@packages/specfact-project/src/specfact_project/sync_runtime/sync_bridge_command_impl.py`:
- Line 24: Module-level call to get_configured_console() (assigned to the symbol
console) causes import-time side effects; change this to a lazy initializer:
remove the module-level console assignment and add a small getter (e.g.,
get_console()) that caches the result (use a module private _console or
functools.lru_cache) and calls get_configured_console() on first use; update all
places that currently reference the module symbol console to call get_console()
instead so configuration happens lazily at runtime.
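The lazy, cached getter could be as small as this sketch (`get_configured_console` is stubbed since the real factory lives elsewhere in the package):

```python
from functools import lru_cache


def get_configured_console():
    # Stand-in for the real factory whose construction has side effects.
    return object()


@lru_cache(maxsize=1)
def get_console():
    """Create the console on first use and reuse the same instance after."""
    return get_configured_console()
```

Call sites switch from the module symbol `console` to `get_console()`, so configuration happens on first use rather than at import time.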
In
`@packages/specfact-project/src/specfact_project/sync_runtime/sync_bridge_command_setup.py`:
- Around line 82-83: The condition that returns "export-only" incorrectly uses a
logical OR allowing a single repo identifier to trigger export-only; update the
check in sync_bridge_command_setup (the if that inspects supported_sync_modes,
repo_owner, repo_name) to require both repo_owner and repo_name by changing the
`or` to `and` so it only returns "export-only" when both repo_owner and
repo_name are present, matching other usages like
bridge_sync_export_ecd_prepare, bridge_sync_backlog_bundle_impl, and
bridge_sync_issue_subhelpers.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/sync_bridge_compliance_helpers.py`:
- Around line 50-60: The tech-stack detection in _compliance_warn_tech_stack is
brittle because it matches literal strings; update the heuristic to perform
case-insensitive checks and broaden keywords. Iterate
plan_bundle.idea.constraints safely (handle None), convert each constraint to a
string and lower() it, and test for a set of normalized keywords/patterns (e.g.,
"python", "flask", "django", "node", "express", "postgres", "postgresql",
"mysql", "sql", "nosql", "database", "framework") or a simple regex to catch
common variants; use this result to set has_tech_stack and keep the existing
warning behavior when false.
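A sketch of the broadened, case-insensitive check; the keyword set follows the review suggestion and is deliberately not exhaustive:

```python
_TECH_KEYWORDS = {
    "python", "flask", "django", "node", "express", "postgres",
    "postgresql", "mysql", "sql", "nosql", "database", "framework",
}


def has_tech_stack(constraints) -> bool:
    """True when any constraint mentions a known tech-stack keyword."""
    if not constraints:  # tolerate None and empty sequences
        return False
    return any(
        keyword in str(constraint).lower()
        for constraint in constraints
        for keyword in _TECH_KEYWORDS
    )
```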
In
`@packages/specfact-project/src/specfact_project/sync_runtime/sync_bridge_github_ado.py`:
- Around line 51-66: Rename the function build_import_adapter_kwargs to
build_adapter_kwargs and update every call site to use the new name (preserve
the same signature and behavior); leave helper functions _github_adapter_kwargs
and _ado_adapter_kwargs unchanged and ensure references in the module are
updated so import and export paths that previously called
build_import_adapter_kwargs now call build_adapter_kwargs.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/sync_bridge_openapi_validation.py`:
- Line 34: Replace the hard-coded slice limit "5" used in the loop "for
contract_path in contract_files[:5]:" with a descriptive module-level constant
(e.g., MAX_CONTRACTS_TO_VALIDATE or MAX_CONTRACT_CHECKS) defined near the top of
sync_bridge_openapi_validation.py; update the loop to use that constant
("contract_files[:MAX_CONTRACTS_TO_VALIDATE]") so the limit is named and easy to
adjust and document. Ensure the constant is uppercase and include a short
comment describing its purpose.
- Around line 32-49: The loop calls
asyncio.run(validate_spec_with_specmatic(...)) repeatedly which recreates an
event loop per contract; instead, batch the async calls and run the event loop
once: create an async helper (e.g., _validate_contracts_async) that accepts the
slice contract_files[:5], uses asyncio.gather(*(validate_spec_with_specmatic(p)
for p in contract_files_slice), return_exceptions=True) to run validations
concurrently, then iterate the gathered results to print the same console
messages and set validation_failed accordingly (handle Exception results from
gather). Update _validate_contract_subset to call
asyncio.run(_validate_contracts_async(...)) so validate_spec_with_specmatic,
console, bundle_dir, and contract_files are used in the same flow.
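A sketch of the single-event-loop batching; the validator is stubbed (the real `validate_spec_with_specmatic` shells out to Specmatic), console output is omitted, and the return value is the `validation_failed` flag:

```python
import asyncio
from pathlib import Path

MAX_CONTRACTS_TO_VALIDATE = 5  # named limit per the companion comment


async def validate_spec_with_specmatic(path: Path) -> bool:
    # Stand-in: pretend .yaml contracts validate and everything else fails.
    return path.suffix == ".yaml"


async def _validate_contracts_async(contract_files: list[Path]) -> bool:
    results = await asyncio.gather(
        *(validate_spec_with_specmatic(p) for p in contract_files),
        return_exceptions=True,
    )
    # Any raised exception or falsy result marks the batch as failed.
    return any(isinstance(r, Exception) or not r for r in results)


def validate_contract_subset(contract_files: list[Path]) -> bool:
    """Return True when validation failed for any contract in the subset."""
    subset = contract_files[:MAX_CONTRACTS_TO_VALIDATE]
    return asyncio.run(_validate_contracts_async(subset))
```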
In
`@packages/specfact-project/src/specfact_project/sync_runtime/sync_bridge_phases.py`:
- Around line 272-312: Add a short docstring or top-of-function comment to
run_sync_bridge_tracked_pipeline that explicitly documents the phase dispatch
order and its rationale: that the function checks phase_change_proposal first,
then phase_export_only, then phase_read_only, then the bidirectional branch,
then watch, and finally falls back to perform; mention any implied precedence
(e.g., mutually exclusive flags like export_only vs read_only vs
change_proposal) and the expected behavior when multiple flags are set so future
maintainers know the priority and why it is necessary (reference the phase
handlers phase_change_proposal, phase_export_only, phase_read_only, the
bidirectional branch, the watch branch, and perform).
- Line 26: The module currently initializes console at import-time via the call
to get_configured_console() assigned to the module-level name console; change
this to lazy initialization or dependency injection by replacing direct
module-level initialization with a getter function (e.g., def _get_console(): if
not hasattr(_get_console, "console"): _get_console.console =
get_configured_console(); return _get_console.console) and update uses to call
_get_console() (or alternatively accept a console parameter to functions/classes
that need it), so that get_configured_console() is only invoked at runtime when
actually needed.
- Around line 67-73: The early return prevents bundle inference because the
condition if adapter_value not in ("github", "ado") or not bundle returns when
bundle is missing; change it to only check adapter_value (e.g., if adapter_value
not in ("github", "ado"): return False) so resolved_bundle = bundle or
infer_bundle_name(repo) can run and infer_bundle_name(repo) is used; keep the
subsequent check that prints to console and raises typer.Exit(1) when
resolved_bundle is falsy (use console and typer.Exit as written).
In
`@packages/specfact-project/src/specfact_project/sync_runtime/sync_command_common.py`:
- Around line 18-22: The is_test_mode function currently treats any sys.argv
entry containing "test" as test-mode which causes false positives; update
is_test_mode to only detect explicit test entrypoints/modules: keep the
TEST_MODE env var check, then inspect sys.argv for exact script names or known
test runner invocations (e.g., "pytest", "py.test", "unittest", "nosetests", or
their executable basenames) rather than substring matches, and check sys.modules
for those exact module names (e.g., "pytest" or "unittest") instead of using
"test" in arg.lower(); modify the logic in is_test_mode accordingly to use
equality or basename matching against a small allowlist of known test runner
names.
In
`@packages/specfact-project/src/specfact_project/sync_runtime/sync_intelligent_impl.py`:
- Around line 11-14: In _intelligent_report_changes replace the list passed to
any() with a generator expression to avoid allocating an intermediate list:
check the attributes changeset.code_changes, changeset.spec_changes, and
changeset.test_changes using any(...) (e.g., any(attr for attr in
(changeset.code_changes, changeset.spec_changes, changeset.test_changes))) and
keep the existing console.print and return False behavior when no changes are
found.
- Around line 48-51: The prompt filename generation using prompt_file =
prompts_dir / f"{bundle}-code-generation-{len(changeset.spec_changes)}.md" can
lead to silent overwrites when multiple syncs have the same change count; change
the naming to include a unique suffix (e.g., a timestamp or uuid) to guarantee
uniqueness—update the code that computes prompt_file (referencing prompts_dir,
prompt_file, bundle, and changeset.spec_changes) to append an ISO8601 timestamp
or a uuid4 string (sanitized for filesystem use) before the .md extension so
each run writes a distinct file.
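A sketch of the unique prompt filename; the `prompts_dir`/`bundle` naming mirrors the comment, and `uuid4().hex` is already filesystem-safe:

```python
import uuid
from pathlib import Path


def prompt_file_path(prompts_dir: Path, bundle: str, change_count: int) -> Path:
    unique = uuid.uuid4().hex  # 32 hex chars, unique per run
    return prompts_dir / f"{bundle}-code-generation-{change_count}-{unique}.md"
```

An ISO 8601 timestamp works too, but would need `:` characters stripped for Windows paths, which the hex UUID avoids.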
In
`@packages/specfact-project/src/specfact_project/sync_runtime/sync_perform_operation_impl.py`:
- Around line 500-501: The except block that currently catches a broad Exception
after calling validate_spec_with_specmatic should be narrowed: identify and
import the specific exceptions thrown by validate_spec_with_specmatic (e.g.,
SpecmaticValidationError, ValidationError, ValueError or RuntimeError as
applicable) and replace the bare "except Exception as e" in
sync_perform_operation_impl.py with specific except clauses that handle
validation-related errors (logging the friendly validation message via
console.print) and let any unexpected exceptions propagate (or re-raise them) so
they are not silently masked; reference the validate_spec_with_specmatic call
and the current except block when locating where to make these changes.
- Around line 60-66: The condition in _pso_maybe_bootstrap_constitution is
inverted: it prints "Constitution found and validated" when constitution_path
does NOT exist. Change the logic so that after verifying adapter_type ==
AdapterType.SPECKIT you check constitution_path.exists() and only print the
success message (console.print) and return when the file actually exists; handle
the non-existent case by falling through to the bootstrapping/creation logic
instead of returning early.
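The corrected control flow, sketched with the adapter check and console stubbed out (messages are collected in a list so the branch logic can stand alone):

```python
from pathlib import Path


def maybe_bootstrap_constitution(constitution_path: Path, messages: list[str]) -> bool:
    """Return True when bootstrapping is still needed."""
    if constitution_path.exists():
        # Only report success when the file is actually present.
        messages.append("Constitution found and validated")
        return False
    # Fall through to the bootstrapping/creation logic.
    messages.append("Bootstrapping constitution")
    return True
```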
In
`@packages/specfact-project/src/specfact_project/sync_runtime/sync_repository_impl.py`:
- Around line 19-29: The current recursive glob collects spec_files from
resolved_repo and then validates spec_files[:3] in filesystem traversal order,
which is nondeterministic and can pick up vendored/generated specs; update the
collection code (the spec_files list built via resolved_repo.glob in
sync_repository_impl.py) to filter out common vendor/build directories (e.g.,
node_modules, .git, vendor, build, dist, .venv, site-packages) and exclude files
inside those paths, then normalize and sort the remaining spec_files
deterministically (e.g., by path or name) and deduplicate before slicing
(spec_files[:3]); apply the same filtering/sorting/deduplication change to the
other similar block referenced (lines 36-52) so both places use the same
deterministic selection logic.
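A sketch of the deterministic selection; the excluded-directory set follows the review suggestion, and the glob pattern/limit are illustrative:

```python
from pathlib import Path

_EXCLUDED_DIRS = {
    "node_modules", ".git", "vendor", "build", "dist", ".venv", "site-packages",
}


def select_spec_files(repo: Path, pattern: str = "**/*.yaml", limit: int = 3) -> list[Path]:
    # The set comprehension deduplicates; the parts check skips any file
    # nested under a vendored or generated directory.
    candidates = {
        p for p in repo.glob(pattern)
        if not _EXCLUDED_DIRS.intersection(p.parts)
    }
    # Sort before slicing so the picked subset is stable across runs.
    return sorted(candidates)[:limit]
```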
In
`@packages/specfact-project/src/specfact_project/sync_runtime/sync_tool_to_specfact_impl.py`:
- Around line 30-32: Duplicate logic computing is_modular_bundle (using
plan_path.exists(), plan_path.is_dir(), and plan_path.parent.name == "projects")
appears in _stsf_load_existing_plan_bundle and run_sync_tool_to_specfact;
extract that conditional into a small helper function (e.g., def
_is_modular_bundle(plan_path): ...) and replace the duplicated expressions in
both _stsf_load_existing_plan_bundle and run_sync_tool_to_specfact with calls to
that helper so the behavior is centralized and consistent.
- Around line 190-221: In _stsf_reload_bundle, avoid discarding a successfully
loaded ProjectBundle when save_project_bundle fails: separate the load and save
into two try/except blocks so that load_project_bundle(bundle_dir,
validate_hashes=False) assigns project_bundle and is not overwritten to None if
save_project_bundle raises; catch exceptions from save_project_bundle but do not
reset project_bundle (log or handle the save error), then proceed to the second
reload attempt only if the initial load failed—if reload still fails and
project_bundle is available, return that existing project_bundle instead of
creating a minimal fallback; reference function _stsf_reload_bundle and symbols
load_project_bundle, save_project_bundle, and project_bundle when making the
change.
In `@pyrightconfig.json`:
- Line 14: The global pyright setting "reportAttributeAccessIssue": false should
be removed and replaced with scoped suppressions: delete or change that key in
pyrightconfig.json and instead add "# pyright:
ignore[reportAttributeAccessIssue]" on specific lines or top of files where
false positives occur, run the Pyright CLI with --writebaseline to capture
existing issues, and where applicable implement __getattr__ on dynamic classes
or provide .pyi stubs with explicit attributes; alternatively set
"reportAttributeAccessIssue" to "warning" rather than false if you need a
project-wide softer signal.
In `@tests/unit/specfact_code_review/run/test_runner.py`:
- Around line 417-425: Add a Windows-specific test that mirrors
test_pytest_python_executable_prefers_local_venv: in
tests/unit/specfact_code_review/run/test_runner.py create a new test (e.g.,
test_pytest_python_executable_prefers_local_venv_windows) that changes to
tmp_path, makes the .venv/Scripts directory, writes a dummy python.exe file
(with appropriate permissions on the platform or just create the file), and
asserts that _pytest_python_executable() returns the resolved path to
.venv/Scripts/python.exe; this ensures the _pytest_python_executable() branch
that checks ".venv/Scripts/python.exe" is covered.
In `@tests/unit/sync_runtime/test_speckit_backlog_sync.py`:
- Around line 44-47: The test currently only checks membership in the computed
set refs and can miss unexpected extra mappings; replace the two membership
asserts with a single equality assertion that refs equals the exact expected set
(e.g. assert refs == {("ado", "AB#456"), ("github", "#89")}) to ensure no
unexpected entries are present — update the assertions around the refs variable
derived from mappings in test_speckit_backlog_sync.py accordingly.
In `@tests/unit/sync/test_change_proposal_mode.py`:
- Around line 111-170: In
test_sync_bridge_change_proposal_all_skips_tracked_features update the negative
assertion to match the change ID naming used by
derive_change_name_from_feature_dir: replace the check for
openspec/changes/001-auth-sync/proposal.md with a check that
openspec/changes/auth-sync/proposal.md does not exist, so the test verifies the
numeric prefix is stripped when determining the change directory.
---
Outside diff comments:
In `@docs/adapters/github.md`:
- Around line 337-358: The link pointing to
/reference/commands/#project-sync-bridge is stale after the bridge rename;
update the anchor to the correct command anchor for "specfact sync bridge"
(e.g., change the reference from /reference/commands/#project-sync-bridge to
/reference/commands/#specfact-sync-bridge) so the "See DevOps Adapter
Integration and the sync bridge command reference" link navigates to the proper
section.
In `@docs/getting-started/installation.md`:
- Around line 303-387: The Quick Tips section still contains the old bridge
command "project sync bridge --adapter <adapter-name>"; update that bullet to
use the consistent CLI form "specfact sync bridge --adapter <adapter-name>" to
match the examples above (search for the Quick Tips line containing "Bridge
adapter sync" and replace the command text accordingly).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 90661ac4-bf29-4a94-ab3c-1d7dd4750853
📒 Files selected for processing (86)
- docs: adapters/azuredevops.md, adapters/github.md, bundles/backlog/refinement.md, getting-started/first-steps.md, getting-started/installation.md, guides/ai-ide-workflow.md, guides/brownfield-examples.md, guides/brownfield-modernization.md, guides/command-chains.md, guides/common-tasks.md, guides/cross-module-chains.md, guides/dual-stack-enrichment.md, guides/ide-integration.md, guides/integrations-overview.md, guides/migration-cli-reorganization.md, guides/migration-guide.md, guides/openspec-journey.md, guides/speckit-comparison.md, guides/speckit-journey.md, guides/troubleshooting.md, guides/use-cases.md, integrations/devops-adapter-overview.md, reference/README.md, reference/command-syntax-policy.md, reference/commands.md, reference/directory-structure.md, reference/parameter-standard.md, team-and-enterprise/multi-repo.md
- openspec/changes/speckit-03-change-proposal-bridge: CHANGE_VALIDATION.md, TDD_EVIDENCE.md, tasks.md
- packages/specfact-code-review: module-package.yaml, src/specfact_code_review/run/commands.py, src/specfact_code_review/run/runner.py, src/specfact_code_review/tools/contract_runner.py
- packages/specfact-project: module-package.yaml, src/specfact_project/importers/speckit_change_proposal_bridge.py, src/specfact_project/importers/speckit_converter.py, src/specfact_project/importers/speckit_markdown_sections.py, src/specfact_project/sync/commands.py
- packages/specfact-project/src/specfact_project/sync_runtime: __init__.py, bridge_sync.py, bridge_sync_alignment_helpers.py, bridge_sync_backlog_bundle_impl.py, bridge_sync_backlog_helpers.py, bridge_sync_export_change_proposals_impl.py, bridge_sync_export_change_proposals_loop.py, bridge_sync_export_ecd_prepare.py, bridge_sync_export_one_proposal.py, bridge_sync_extract_requirement_impl.py, bridge_sync_find_source_tracking_entry.py, bridge_sync_generate_tasks_impl.py, bridge_sync_issue_subhelpers.py, bridge_sync_issue_update_impl.py, bridge_sync_openspec_proposal_parse.py, bridge_sync_parse_source_tracking_entry_impl.py, bridge_sync_read_openspec_proposals.py, bridge_sync_save_openspec_parts_impl.py, bridge_sync_save_openspec_proposal_impl.py, bridge_sync_source_tracking_list_impl.py, bridge_sync_what_changes_impl.py, bridge_sync_write_openspec_change_impl.py, bridge_sync_write_openspec_parts_impl.py, speckit_backlog_sync.py, speckit_bridge_backlog.py, speckit_change_proposal_sync.py, sync_bridge_command_impl.py, sync_bridge_command_setup.py, sync_bridge_compliance_helpers.py, sync_bridge_github_ado.py, sync_bridge_openapi_validation.py, sync_bridge_phases.py, sync_command_common.py, sync_intelligent_impl.py, sync_perform_operation_impl.py, sync_repository_impl.py, sync_tool_to_specfact_impl.py
- pyrightconfig.json, registry/index.json
- tests/unit: importers/test_speckit_converter.py, specfact_code_review/run/test_commands.py, specfact_code_review/run/test_runner.py, specfact_code_review/tools/test_contract_runner.py, sync/test_change_proposal_mode.py, sync_runtime/test_bridge_sync_speckit_backlog.py, sync_runtime/test_speckit_backlog_sync.py
Summary

Add the Spec-Kit change proposal bridge to `specfact-project`, including Spec-Kit <-> OpenSpec conversion flows, bridge sync extraction into focused helper modules, backlog duplicate prevention for Spec-Kit issue references, and docs/OpenSpec updates. This PR also updates `specfact-code-review` so review discovery ignores hidden directories, internal helper modules do not emit false `MISSING_ICONTRACT` findings, and targeted review tests use the repo `.venv` Python when available.

Refs: `speckit-03-change-proposal-bridge`

Scope

- `registry/index.json`, `packages/*/module-package.yaml`
- `.github/workflows/*`
- `docs/*`, `README.md`, `AGENTS.md`
- `scripts/sign-modules.py`, `scripts/verify-modules-signature.py`

Bundle Impact

List impacted bundles and version updates:

- `nold-ai/specfact-project`: 0.40.23 -> 0.41.0
- `nold-ai/specfact-backlog`: unchanged
- `nold-ai/specfact-codebase`: unchanged
- `nold-ai/specfact-spec`: unchanged
- `nold-ai/specfact-govern`: unchanged
- `nold-ai/specfact-code-review`: 0.44.0 -> 0.44.2

Validation Evidence

Paste command output snippets or link workflow runs.

Required local gates:

- `hatch run format`
- `hatch run type-check`
- `hatch run lint`
- `hatch run yaml-lint`
- `hatch run check-bundle-imports`
- `hatch run contract-test`
- `hatch run smart-test` (or `hatch run test`)

Commands run locally during this change:

- `python3 -m pytest tests/unit/importers/test_speckit_converter.py tests/unit/sync_runtime/test_speckit_backlog_sync.py tests/unit/sync_runtime/test_bridge_sync_speckit_backlog.py tests/unit/sync/test_change_proposal_mode.py -q`
- `python3 -m pytest tests/unit/specfact_code_review -q`
- `python3 scripts/check-docs-commands.py`
- `python3 -m pytest tests/unit/docs/test_docs_review.py -q`
- `specfact code review run ... --no-tests` on the extracted Speckit helper scope
- `hatch run verify-modules-signature --require-signature --payload-from-filesystem --enforce-version-bump`

Signature + version integrity (required):

- `hatch run verify-modules-signature --require-signature --payload-from-filesystem --enforce-version-bump`

CI and Branch Protection

- verify-module-signatures
- quality (3.11)
- quality (3.12)
- quality (3.13)

Docs / Pages

- `docs/`
- `docs-pages.yml` (if changed)
- `specfact-cli` docs updated (if applicable)

Checklist