
feat: implement 10 missing domain features for ImpactGuard #8

Merged
daedalus merged 2 commits into master from copilot/identify-missing-features
May 6, 2026

Conversation

Contributor

Copilot AI commented May 6, 2026

ImpactGuard lacked several features fundamental to a production-grade API impact analyzer: no way to suppress known-intentional changes, no __all__-aware visibility, no deprecation lifecycle, no type-compatibility reasoning, no Protocol/ABC cascade propagation, no re-export tracking, no markdown CI output, no release-history baselines, no feedback calibration loop, and no data contract validation.

New modules

  • schema.py — Structural validators for all inter-module JSON formats (validate_signatures, validate_calls, validate_runtime, validate_risk_report). Integrated into compare_signatures.load() as non-fatal warnings.
  • class_hierarchy.py — extract_class_hierarchy / find_implementations / get_cascade_changes: detects Protocol/ABC bases and surfaces cascade impact when their methods change.
  • feedback.py — record_outcome, get_stats, compute_calibrated_weights, apply_weights_to_config: records patch acceptance/rejection outcomes and calibrates [impactguard.patches] weights back to impactguard.toml.
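As a sketch of the structural-validation pattern the schema module is described as using — errors are collected into a list rather than raised, so callers can treat them as non-fatal warnings. The required field names here are assumptions for illustration, not the project's actual schema:

```python
def validate_signatures(data: object) -> list[str]:
    """Structurally validate a signatures payload, returning error strings.

    An empty list means the payload passed all checks.
    """
    errors: list[str] = []
    if not isinstance(data, list):
        return ["signatures: top-level value must be a list"]
    for i, item in enumerate(data):
        if not isinstance(item, dict):
            errors.append(f"signatures[{i}]: entry must be an object")
            continue
        for field in ("fqname", "args"):  # assumed required fields
            if field not in item:
                errors.append(f"signatures[{i}]: missing required field '{field}'")
    return errors
```

A loader can then print each returned error as a warning and continue, which matches the "non-fatal warnings" integration described above.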

Modified modules

extract_signatures.py

  • ignored: bool field — set when # impactguard: ignore appears on/before the def line
  • exported: bool | None field — True/False when __all__ is defined, None otherwise
  • extract_reexports(files) — parses __init__.py relative imports into {public_fqname: source_fqname}
  • extract(..., include_reexports=True) — appends alias signatures for re-exported names

compare_signatures.py

  • Suppression: functions with ignored=True or matching [impactguard.analysis] suppress list are skipped; result now includes suppressed key
  • __all__ visibility: _is_effectively_public() gates on exported field before falling back to underscore heuristic
  • Deprecation lifecycle: removed functions with a @deprecated* decorator emit DEPRECATED REMOVED: into nonbreaking instead of breaking
  • Type compatibility: _type_change_kind(old, new) classifies widening (int → int | None) as nonbreaking TYPE WIDENED; narrowing/changed stays breaking TYPE CHANGED
```python
# widening is non-breaking
result = compare(old_path, new_path)
# "TYPE WIDENED: mod.py:fn arg 'x' int -> int | None"  → nonbreaking
# "TYPE CHANGED: mod.py:fn arg 'x' int -> str"         → breaking
# "DEPRECATED REMOVED: mod.py:old_fn"                  → nonbreaking
```
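The widening/narrowing distinction reduces to set comparison over union members. The helpers below are illustrative stand-ins for the private helpers named above, with deliberately simplified parsing (no nested generics):

```python
def parse_union_members(type_str: str) -> frozenset[str]:
    """Split a type annotation into union members (PEP 604, Optional, Union)."""
    s = type_str.strip()
    if s.startswith("Optional[") and s.endswith("]"):
        return parse_union_members(s[len("Optional["):-1]) | {"None"}
    if s.startswith("Union[") and s.endswith("]"):
        inner = s[len("Union["):-1]
        return frozenset(p.strip() for p in inner.split(","))  # naive: assumes no nested brackets
    return frozenset(p.strip() for p in s.split("|"))


def type_change_kind(old: str, new: str) -> str:
    """Classify an annotation change: a strict superset of members is widening."""
    old_m, new_m = parse_union_members(old), parse_union_members(new)
    if old_m == new_m:
        return "unchanged"
    if old_m < new_m:          # e.g. int -> int | None
        return "widened"       # non-breaking
    return "changed"           # narrowing or replacement: breaking
```

Note that `Optional[int]` and `int | None` normalize to the same member set, so a pure notation change is classified as unchanged rather than breaking.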

generate_report.py

  • generate_markdown(report_data, semver_rec, max_rows) — compact risk table + badge row for GitHub PR comments
  • generate_markdown_from_file(path, output_path, semver_rec) — file-based variant

baseline.py

  • save_tagged_baseline(tag, files, ...) / load_tagged_baseline(tag) / list_baselines() / compare_with_tagged_baseline(tag, new_files) / delete_tagged_baseline(tag) — append-log of named snapshots in .impactguard_history.json
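The append-log behavior could be sketched as follows; beyond the `.impactguard_history.json` filename given above, the entry shape and tag-resolution rule (newest entry for a tag wins) are assumptions:

```python
import json
import time
from pathlib import Path

HISTORY_PATH = Path(".impactguard_history.json")  # default location per the description


def save_tagged_baseline(tag: str, signatures: list[dict], path: Path = HISTORY_PATH) -> None:
    """Append a named snapshot to the history log."""
    history = json.loads(path.read_text()) if path.is_file() else []
    history.append({"tag": tag, "saved_at": time.time(), "signatures": signatures})
    path.write_text(json.dumps(history, indent=2))


def load_tagged_baseline(tag: str, path: Path = HISTORY_PATH) -> list[dict]:
    """Return the signatures of the newest snapshot saved under `tag`."""
    history = json.loads(path.read_text()) if path.is_file() else []
    for entry in reversed(history):
        if entry["tag"] == tag:
            return entry["signatures"]
    raise KeyError(f"No baseline tagged {tag!r}")


def list_baselines(path: Path = HISTORY_PATH) -> list[str]:
    """List tags in save order (duplicates possible with an append log)."""
    history = json.loads(path.read_text()) if path.is_file() else []
    return [entry["tag"] for entry in history]
```

An append log trades a little file growth for a trivial, conflict-free write path; compaction could drop superseded entries if the file ever matters.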

risk_model.py / semver.py

  • DEPRECATED REMOVED: 0.15, TYPE WIDENED: 0.05, RETURN TYPE WIDENED: 0.05 severity entries
  • DEPRECATED REMOVED excluded from semver major-bump triggers

CLI additions

| Command | Purpose |
| --- | --- |
| `impactguard report-markdown <report.json>` | Emit markdown PR comment to stdout or file |
| `impactguard feedback record <id> --accepted\|--rejected` | Record patch outcome |
| `impactguard feedback stats` | Acceptance-rate summary |
| `impactguard feedback calibrate` | Rewrite config weights from outcomes |
| `impactguard history save <tag>` | Save tagged baseline snapshot |
| `impactguard history list` | List all tagged snapshots |
| `impactguard history compare <tag>` | Compare current code against a historical tag |
| `impactguard history delete <tag>` | Remove a tagged snapshot |

Summary by Sourcery

Add suppression, visibility, lifecycle, type-compatibility, hierarchy, reporting, baseline history, and feedback-loop capabilities to ImpactGuard’s core API analysis pipeline.

New Features:

  • Introduce JSON schema validation module for signatures, calls, runtime traces, and risk reports, integrating non-fatal validation into signature loading.
  • Add class-hierarchy analysis to track Protocol/ABC bases and compute cascade impacts for concrete implementations when abstract methods change.
  • Extend signature extraction to record inline ignore directives, __all__-based export visibility, and __init__.py re-exports, with optional alias generation.
  • Classify type-annotation changes as widening vs narrowing, treating widening (including return types) as non-breaking with dedicated change labels.
  • Treat removals of deprecated APIs as a non-breaking change category with dedicated risk scoring and semver handling.
  • Generate compact markdown risk summaries for PR comments, including optional semver recommendations and exposure-aware tables.
  • Provide multi-baseline release-history storage and comparison against tagged snapshots with semver recommendations and JSON export.
  • Add a feedback system to record patch outcomes, compute calibrated patch-confidence weights by change type, and apply them back to impactguard.toml.
  • Expose new CLI subcommands for markdown report generation, feedback management, and release-history baseline operations.
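The Protocol/ABC hierarchy analysis in the second bullet can be sketched with the ast module. The detection set and return shapes below are assumptions for illustration; note that only direct members of `node.body` are counted as methods, so nested classes don't pollute the list:

```python
import ast

ABSTRACT_BASES = {"Protocol", "ABC"}  # assumed marker bases


def extract_class_hierarchy(source: str) -> dict[str, dict]:
    """Map class name -> {'bases': [...], 'abstract': bool, 'methods': [...]}."""
    tree = ast.parse(source)
    info: dict[str, dict] = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            bases = [b.id if isinstance(b, ast.Name) else ast.unparse(b) for b in node.bases]
            # Only direct body members, so nested classes' methods are excluded.
            methods = [n.name for n in node.body
                       if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
            info[node.name] = {
                "bases": bases,
                "abstract": any(b.split(".")[-1] in ABSTRACT_BASES for b in bases),
                "methods": methods,
            }
    return info


def find_implementations(hierarchy: dict[str, dict]) -> dict[str, list[str]]:
    """Map each abstract class to the concrete classes that inherit from it."""
    impls: dict[str, list[str]] = {}
    for name, info in hierarchy.items():
        for base in info["bases"]:
            if hierarchy.get(base, {}).get("abstract"):
                impls.setdefault(base, []).append(name)
    return impls
```

Cascade reporting then follows: when a comparison flags a change to a method of an abstract class, every implementor from `find_implementations` is surfaced as impacted.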

Enhancements:

  • Extend comparison results with a suppressed-changes channel driven by inline ignores and configurable suppress lists.
  • Refine public/private detection to be __all__-aware before falling back to underscore heuristics.
  • Tune risk model severities and semver major-bump triggers to account for deprecated removals and widening-type changes.
  • Export new primitives (schema validators, hierarchy utilities, feedback helpers, markdown generators, and history APIs) from the top-level package for external use.

Tests:

  • Add an extensive integration-style test suite covering suppression, __all__ handling, deprecation lifecycle, type-compatibility classification, re-export propagation, markdown generation, multi-baseline history, feedback calibration, and JSON schema validation.

Copilot AI and others added 2 commits May 6, 2026 01:30
…tion, ABC cascade, type compat, re-exports, markdown, history, feedback, schema)

Agent-Logs-Url: https://github.com/daedalus/ImpactGuard/sessions/b8048bc5-4f7d-41ee-aaac-a1bd76edb6a1

Co-authored-by: daedalus <115175+daedalus@users.noreply.github.com>
…, mutually exclusive flags, unused var)

Agent-Logs-Url: https://github.com/daedalus/ImpactGuard/sessions/b8048bc5-4f7d-41ee-aaac-a1bd76edb6a1

Co-authored-by: daedalus <115175+daedalus@users.noreply.github.com>
daedalus marked this pull request as ready for review May 6, 2026 01:34
daedalus merged commit 9e73043 into master May 6, 2026
1 check was pending
daedalus deleted the copilot/identify-missing-features branch May 6, 2026 01:34

sourcery-ai Bot commented May 6, 2026

Reviewer's Guide

Adds ten production-grade domain features to ImpactGuard: schema validation for all JSON contracts, suppression and __all__-aware visibility in signature extraction/comparison, deprecation lifecycle handling, type-widening classification, Protocol/ABC cascade analysis, __init__.py re-export propagation, markdown PR report generation, tagged baseline history with CLI, and feedback-driven risk calibration, and wires these through the public API and CLI with updated risk/semver scoring and comprehensive tests.

Sequence diagram for the new history compare workflow

sequenceDiagram
    actor Developer
    participant CLI_main as CLI_main
    participant Baseline as BaselineModule
    participant Extract as ExtractSignatures
    participant Compare as CompareSignatures
    participant Semver as SemverModule

    Developer->>CLI_main: impactguard history compare <tag_from> [files]
    CLI_main->>Baseline: compare_with_tagged_baseline(tag_from, files, history_path)
    Baseline->>Baseline: load_tagged_baseline(tag, history_path)
    Baseline->>Baseline: _load_history(effective_path)
    Baseline-->>Baseline: old_signatures

    Baseline->>Extract: extract(new_files)
    Extract-->>Baseline: new_signatures

    Baseline->>Baseline: write old.json and new.json to tempdir
    Baseline->>Compare: compare(old_path, new_path, include_private)
    Compare->>Compare: load(old_path)
    Compare->>Compare: validate_signatures(data)
    Compare->>Compare: load(new_path)
    Compare->>Compare: validate_signatures(data)
    Compare-->>Baseline: comparison{breaking,nonbreaking,suppressed}

    Baseline->>Semver: format_semver_recommendation(comparison)
    Semver-->>Baseline: semver_rec

    Baseline-->>CLI_main: {comparison, semver_rec, baseline_tag, metadata}
    CLI_main->>CLI_main: print counts and semver recommendation
    alt output path provided
        CLI_main->>CLI_main: json.dump(result, output)
    end
    CLI_main-->>Developer: exit_code (1 if breaking else 0)

Sequence diagram for the new feedback calibration workflow

sequenceDiagram
    actor Developer
    participant CLI_main as CLI_main
    participant Feedback as FeedbackModule
    participant ConfigFile as ConfigFile

    Developer->>CLI_main: impactguard feedback calibrate [--feedback-path] [--config-path]
    CLI_main->>Feedback: load_outcomes(feedback_path)
    Feedback-->>CLI_main: outcomes

    CLI_main->>Feedback: compute_calibrated_weights(outcomes)
    Feedback-->>CLI_main: weights

    alt not enough data
        CLI_main->>Developer: print "Not enough data for calibration"
        CLI_main-->>Developer: exit_code 0
    else sufficient data
        CLI_main->>Feedback: apply_weights_to_config(weights, config_path)
        Feedback->>ConfigFile: read existing impactguard.toml
        Feedback->>Feedback: _upsert_toml_section(lines, impactguard.patches, weights)
        Feedback->>ConfigFile: write updated impactguard.toml
        Feedback-->>CLI_main: ok
        alt write ok
            CLI_main->>Developer: print calibrated weights summary
            CLI_main-->>Developer: exit_code 0
        else write failed
            CLI_main->>Developer: print error to stderr
            CLI_main-->>Developer: exit_code 1
        end
    end

Updated class diagram for core ImpactGuard analysis modules

classDiagram
    class ExtractSignatures {
        +extract(files, base_path, include_reexports)
        +extract_reexports(files)
        +_has_ignore_comment(source_lines, lineno)
        +_extract_all_names(tree)
        +_unparse_annotation(node)
        +arg_info(arg, default)
    }

    class CompareSignatures {
        +load(path)
        +compare(old_path, new_path, include_private)
        +_is_public(fqname)
        +_is_effectively_public(fqname, sig)
        +_is_ignored(fqname, sig, suppress_list)
        +_parse_union_members(type_str)
        +_type_change_kind(old_type, new_type)
        +_has_deprecated_decorator(sig)
    }

    class BaselineModule {
        +DEFAULT_BASELINE_PATH
        +DEFAULT_HISTORY_PATH
        +save_baseline(files, path, metadata)
        +load_baseline(path)
        +compare_with_baseline(files, path, include_private)
        +baseline_exists(path)
        +save_tagged_baseline(tag, files, history_path, metadata)
        +load_tagged_baseline(tag, history_path)
        +list_baselines(history_path)
        +compare_with_tagged_baseline(tag, new_files, history_path, include_private)
        +delete_tagged_baseline(tag, history_path)
        +_resolve_path(path)
        +_resolve_history_path(path)
        +_load_history(path)
    }

    class SchemaModule {
        +validate_signatures(data)
        +validate_calls(data)
        +validate_runtime(data)
        +validate_risk_report(data)
        +validate(kind, data)
        +_check_list(data, label, errors)
        +_check_fields(item, required, label, idx, errors)
        +_check_arg(arg, label, idx, errors)
    }

    class FeedbackModule {
        +DEFAULT_FEEDBACK_PATH
        +record_outcome(patch_id, accepted, change_type, patch_data, feedback_path)
        +load_outcomes(feedback_path)
        +get_stats(feedback_path)
        +compute_calibrated_weights(outcomes)
        +apply_weights_to_config(weights, config_path)
        +_resolve_path(feedback_path)
        +_load_raw(path)
        +_save_raw(path, outcomes)
        +_upsert_toml_section(lines, section_header, values)
    }

    class ClassHierarchyModule {
        +ClassInfo
        +Hierarchy
        +extract_class_hierarchy(files)
        +find_implementations(hierarchy)
        +get_cascade_changes(comparison, hierarchy, implementations)
        +_base_names(bases)
        +_is_abstract(base_names)
    }

    class GenerateReportModule {
        +generate_html(report_data)
        +generate_html_from_file(risk_json_path, output_path)
        +generate_markdown(report_data, semver_rec, max_rows)
        +generate_markdown_from_file(risk_json_path, output_path, semver_rec)
        +_summary_stats(report_data)
    }

    class RiskModelModule {
        +CHANGE_WEIGHTS
    }

    class SemverModule {
        +BREAKING_PREFIXES
        +format_semver_recommendation(comparison)
    }

    ExtractSignatures ..> ClassHierarchyModule : provides_methods_for
    BaselineModule ..> ExtractSignatures : uses_extract
    BaselineModule ..> CompareSignatures : uses_compare
    BaselineModule ..> SemverModule : uses_format_semver_recommendation
    CompareSignatures ..> SchemaModule : uses_validate_signatures
    GenerateReportModule ..> SemverModule : uses_semver_rec
    FeedbackModule ..> RiskModelModule : calibrates_patch_weights
    ClassHierarchyModule ..> CompareSignatures : analyzes_cascade_from

File-Level Changes

Change Details Files
Add tagged multi-baseline history support with compare/list/save/delete operations and wire it into the CLI.
  • Introduce a separate release-history store (.impactguard_history.json) alongside the existing single baseline file.
  • Add save_tagged_baseline/load_tagged_baseline/list_baselines/compare_with_tagged_baseline/delete_tagged_baseline helpers that reuse existing extraction/comparison logic.
  • Extend the CLI with an impactguard history command supporting list/save/compare/delete subcommands, including auto-discovery of Python files and JSON output for comparisons.
src/impactguard/baseline.py
src/impactguard/__main__.py
src/impactguard/__init__.py
tests/test_ten_features.py
Introduce a feedback loop for patch acceptance and map outcomes back into impactguard.toml patch weights, with CLI entry points.
  • Add feedback.py to persist patch outcomes to a JSON file, compute aggregate stats and per-change-type acceptance rates, and derive calibrated weight values.
  • Implement a small TOML text updater that creates or amends the [impactguard.patches] section with calibrated weights without disturbing other config content.
  • Expose impactguard feedback record/stats/calibrate CLI commands that call the feedback APIs, including basic UX/output and error handling.
src/impactguard/feedback.py
src/impactguard/__main__.py
src/impactguard/__init__.py
tests/test_ten_features.py
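The calibration step might be sketched as follows; the minimum-sample threshold, the clamping floor, and the outcome record shape are assumptions, not the project's documented behavior:

```python
def compute_calibrated_weights(outcomes: list[dict], min_samples: int = 5) -> dict[str, float]:
    """Derive per-change-type confidence weights from patch accept/reject outcomes.

    Change types with too few samples are skipped; the acceptance rate is
    clamped to a floor of 0.1 so a run of rejections never zeroes a weight.
    """
    by_type: dict[str, list[bool]] = {}
    for outcome in outcomes:
        by_type.setdefault(outcome["change_type"], []).append(bool(outcome["accepted"]))
    weights: dict[str, float] = {}
    for change_type, results in by_type.items():
        if len(results) < min_samples:
            continue  # not enough data to calibrate this change type
        rate = sum(results) / len(results)
        weights[change_type] = round(max(0.1, rate), 2)
    return weights
```

The resulting dict maps change types to weights that a TOML updater can then write into the `[impactguard.patches]` section.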
Tighten signature extraction, visibility, and re-export handling to support suppressions and all-aware public API reasoning.
  • Extend extract() to annotate signatures with ignored and exported flags based on inline # impactguard: ignore comments and simple __all__ assignment parsing.
  • Add extract_reexports() and optional include_reexports behavior to synthesize alias signatures for relative re-exports from __init__.py into the signature set.
  • Ensure stable fqname behavior and propagation of reexported_from metadata on alias signatures for downstream tools.
src/impactguard/extract_signatures.py
src/impactguard/__init__.py
tests/test_ten_features.py
Enhance signature comparison to respect suppression, __all__ visibility, deprecation lifecycle, and type-widening classification, and to record suppressed items.
  • Wrap load() with schema-based validation of the signatures payload and emit non-fatal warnings to stderr when shapes are invalid.
  • Add _is_effectively_public, _is_ignored, and _has_deprecated_decorator helpers, integrate config-driven analysis.suppress, and track suppressed fqnames in a new suppressed list in the compare() result.
  • Implement type-compatibility parsing and classification helpers to distinguish widening, narrowing, and changed unions/Optionals, emitting TYPE WIDENED/RETURN TYPE WIDENED as nonbreaking and keeping other type changes breaking, while treating removal of @deprecated functions as DEPRECATED REMOVED nonbreaking events.
  • Ensure compare() filters by effective public status (including __all__), skips suppressed entries in all phases, and returns the expanded result structure used in tests and downstream features.
src/impactguard/compare_signatures.py
src/impactguard/config.py
src/impactguard/risk_model.py
src/impactguard/semver.py
tests/test_ten_features.py
Add JSON schema-style validators for all inter-module data formats and hook them into loading paths.
  • Introduce schema.py with validate_signatures/validate_calls/validate_runtime/validate_risk_report and a generic validate(kind, data) dispatcher, using lightweight structural checks and descriptive error messages.
  • Integrate validate_signatures() into compare_signatures.load() so malformed signature JSON yields warnings but doesn’t abort the comparison pipeline.
  • Export validation helpers through the top-level impactguard package for reuse by callers and tests.
src/impactguard/schema.py
src/impactguard/compare_signatures.py
src/impactguard/__init__.py
tests/test_ten_features.py
Add protocol/ABC class-hierarchy analysis and cascade impact reporting utilities.
  • Implement class_hierarchy.py to parse class definitions, detect Protocol/ABC bases, capture declared methods, and build a hierarchy mapping.
  • Provide find_implementations() to map abstract classes to concrete implementors and get_cascade_changes() to derive CASCADE messages from comparison results plus hierarchy data.
  • Cover protocol and ABC detection, implementation mapping, and cascade behavior with focused tests.
src/impactguard/class_hierarchy.py
tests/test_ten_features.py
src/impactguard/__init__.py
Add markdown PR-comment report generation alongside existing HTML reporting and wire it into the CLI.
  • Implement generate_markdown() to produce a compact, emoji/badge-driven risk summary table with optional semver recommendation and row limiting.
  • Add generate_markdown_from_file() to read risk JSON, generate markdown, optionally write to disk, and return the content for stdout printing.
  • Expose a new impactguard report-markdown CLI subcommand that wraps generate_markdown_from_file(), supports an optional output file, and integrates exports via __init__.
src/impactguard/generate_report.py
src/impactguard/__main__.py
src/impactguard/__init__.py
tests/test_ten_features.py
Update risk and semver models to recognize new non-breaking categories and keep deprecated removals out of major-bump triggers.
  • Extend SEVERITY_SCORES with DEPRECATED REMOVED, TYPE WIDENED, and RETURN TYPE WIDENED entries with appropriately low weights.
  • Ensure semver’s MAJOR_TRIGGER_PREFIXES excludes DEPRECATED REMOVED and the widening labels so they never force a major bump.
  • Add tests that assert type widening never appears in breaking and that deprecated removal severity and semver behavior are bounded as intended.
src/impactguard/risk_model.py
src/impactguard/semver.py
tests/test_ten_features.py
Expose new APIs at the package root and extend CLI routing to recognize new subcommands and defaulting behavior.
  • Update impactguard.__init__ to re-export baseline history helpers, schema validators, class-hierarchy utilities, feedback APIs, re-export extraction, and markdown generators under stable names.
  • Register report-markdown, feedback, and history as top-level CLI subcommands with appropriate argparsers, including feedback/history sub-subcommands and help text.
  • Update the main() default-subcommand logic’s allowlist to include the new commands so pipeline mode still kicks in only when appropriate.
src/impactguard/__init__.py
src/impactguard/__main__.py
Add a comprehensive end-to-end test suite covering all ten new domain features and cross-feature integration.
  • Introduce tests/test_ten_features.py as a single, scenario-oriented module that drives extraction, comparison, hierarchy, markdown generation, baselines, feedback, and schema validators together.
  • Assert behavior for inline ignores, config suppress lists, __all__ handling, deprecation lifecycle, type widening vs narrowing, re-export propagation, cascade analysis, feedback stats and calibration, history commands’ helpers, and validation error reporting.
  • Use lightweight helpers to construct signature JSON, call compare(), and avoid over-coupling tests to internal representations beyond what the features require.
tests/test_ten_features.py


@codacy-production

Not up to standards ⛔

🔴 Issues 16 medium · 84 minor

Alerts:
⚠ 100 new issues exceed the quality gate (at most 0 issues of at least minor severity allowed)

Results:
100 new issues

Category Results
Documentation 84 minor
Complexity 16 medium

View in Codacy

🟢 Metrics 351 complexity

Metric Results
Complexity 351

View in Codacy



sourcery-ai Bot left a comment

Hey - I've found 4 issues and left some high-level feedback:

  • In extract_class_hierarchy, methods are collected via ast.walk(node) which will also include methods of nested classes defined inside this class; if you only intend direct methods on the class, consider iterating over node.body and filtering FunctionDef/AsyncFunctionDef instead.
  • In schema._check_arg / validate_signatures, the index reported in error messages is always 0 for arguments (because _check_fields is called with idx=0), which makes debugging harder; consider threading the actual argument index through so positional and keyword-only argument errors can be correctly attributed.
Prompt for AI Agents
Please address the comments from this code review:


## Individual Comments

### Comment 1
<location path="src/impactguard/baseline.py" line_range="216-218" />
<code_context>
+    """
+    effective_path = _resolve_history_path(history_path)
+    if not Path(effective_path).is_file():
+        raise FileNotFoundError(
+            f"History file not found: {effective_path}. "
+            "Run `impactguard baseline save --tag <tag>` first."
+        )
+    history = _load_history(effective_path)
</code_context>
<issue_to_address>
**issue:** Error message suggests an outdated CLI command (`baseline save --tag`) instead of the new `history save` interface.

This message should direct users to `impactguard history save <tag> ...`, which is the current way to create tagged baselines. Please update the text (and arguments, if needed) so the guidance matches the new history interface when the file is missing.
</issue_to_address>

### Comment 2
<location path="src/impactguard/compare_signatures.py" line_range="73-82" />
<code_context>
+def _parse_union_members(type_str: str) -> frozenset[str]:
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Union parsing is too naive for nested/complex type annotations and may misclassify type changes.

`_parse_union_members` currently splits on `","`, so `Union[tuple[int, int], str]` is parsed into four members instead of two. This can misclassify widening/narrowing and thus whether a change is breaking. To avoid this, parse with bracket-depth awareness or via the AST rather than raw string splitting.
</issue_to_address>
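A bracket-depth-aware split along the lines this comment suggests could look like the sketch below (illustrative, not the project's code). A `Union[...]` or `Optional[...]` wrapper would be stripped before calling it:

```python
def split_union_members(type_str: str) -> list[str]:
    """Split on top-level ',' or '|' only, ignoring separators inside brackets."""
    members: list[str] = []
    current: list[str] = []
    depth = 0
    for ch in type_str:
        if ch in "[(":
            depth += 1
        elif ch in "])":
            depth -= 1
        if ch in ",|" and depth == 0:
            # Top-level separator: close off the current member.
            members.append("".join(current).strip())
            current = []
        else:
            current.append(ch)
    members.append("".join(current).strip())
    return members
```

Unlike a raw `str.split(",")`, this keeps `tuple[int, int]` as a single member, so widening/narrowing classification sees two members in `Union[tuple[int, int], str]` rather than four.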

### Comment 3
<location path="src/impactguard/schema.py" line_range="35-40" />
<code_context>
+            errors.append(f"{label}[{idx}]: missing required field '{field}'")
+
+
+def _check_arg(arg: object, label: str, idx: int, errors: list[str]) -> None:
+    """Validate a single argument dict within a signature."""
+    if not isinstance(arg, dict):
+        errors.append(f"{label}[{idx}]: argument entry must be an object")
+        return
+    _check_fields(arg, ["name", "has_default"], f"{label}[{idx}].arg", 0, errors)
+
+
</code_context>
<issue_to_address>
**suggestion:** Argument index handling in validation error messages is confusing and likely incorrect.

In `_check_arg`, `idx` is used as the argument index but is always derived from the signature index in `validate_signatures`, and `_check_fields` is called with a hard‑coded index `0`. This produces diagnostics like `signatures[3].positional[3].arg[0]` that don’t match the real argument position. Consider passing the actual argument index into `_check_arg` and through to `_check_fields` so the reported paths reflect the true argument location.

Suggested implementation:

```python
    # Use the actual argument index so diagnostics reflect the real argument location.
    _check_fields(arg, ["name", "has_default"], label, idx, errors)

```

To fully align diagnostics with the true argument positions, you should also:

1. Update the place where `_check_arg` is called (likely in `validate_signatures`) so that:
   - The `idx` argument passed to `_check_arg` is the *argument index* within the relevant list (e.g. `positional`, `keyword_only`, etc.), not the signature index.
   - The `label` argument passed to `_check_arg` is the base path to the argument list (e.g. `f"signatures[{sig_idx}].positional"`), letting `_check_fields` compose `"...positional[{arg_idx}]"`.
2. Verify other callers of `_check_fields` are still passing a base label and the correct index for that collection, to keep diagnostic paths consistent.
</issue_to_address>

### Comment 4
<location path="tests/test_ten_features.py" line_range="884" />
<code_context>
+        validate("widgets", [])
+
+
+def test_validate_signatures_emits_warning_on_load(tmp_path: Path, capsys: pytest.CaptureFixture) -> None:
+    """compare_signatures.load() should warn to stderr on invalid data."""
+    from impactguard.compare_signatures import load
</code_context>
<issue_to_address>
**issue (testing):** The warning assertion is currently too weak and can pass even if no warning is emitted.

The current assertion `assert "Warning" in captured.err or len(captured.err) == 0` allows the test to pass even when no warning is printed, which contradicts the docstring and weakens coverage of `compare_signatures.load`’s integration with `validate_signatures`. Tighten this to require a warning (e.g. `assert "Warning: signatures file" in captured.err`), and if you need to assert that invalid data is non-fatal, add a separate test for the return value instead of relaxing the warning requirement here.
</issue_to_address>



daedalus added a commit that referenced this pull request May 7, 2026
- Fix malformed function names in risk report (Gap #1 introduced)
  risk_gate.run() now extracts fqname before first space
  Fixes TYPE_CHANGED and DECORATOR_REMOVED entries
  seen_functions now uses fqname (not malformed string)
  Runtime lookup now works correctly for these change types

- Fix DECORATOR_ADDED contradiction (Gap #3)
  compare_signatures.py now appends to nonbreaking (not breaking)
  Consistent with risk_model.py severity 0.1 (typically non-breaking)

- Fix README API mismatch (Gap #6)
  Updated run_pipeline() examples to use old_files/new_files
  Matches actual function signature in pipeline.py

- Delete requirements.txt (Gap #8)
  Redundant with pyproject.toml, was misleading
  Added note in requirements.txt pointing to pyproject.toml

- Fix AGENTS.md Python version (Gap #9)
  Updated from 3.9 to 3.11 to match pyproject.toml

- Fix transitive_depth default (Gap #12)
  Changed default from 0 to 1 in config.py
  Enables transitive impact tracking by default

All 1223 tests pass with 80.99% coverage
