
feat: extend risk model to S × E × C × λ with tunable --lambda parameter #14

Merged

daedalus merged 1 commit into master from copilot/add-lambda-parameter-impactguard on May 6, 2026

Conversation

Contributor

Copilot AI commented May 6, 2026

The risk formula S × E × C now accepts a sensitivity multiplier λ (default 1.0) via --lambda. Values above 1 raise sensitivity (more changes classified HIGH/MEDIUM); values below 1 lower it.

Changes

  • risk_model.py — classify() scales effective severity by λ before threshold comparison; compute_risk() multiplies the final score by λ (see the sketch below)
  • risk_gate.py — run() accepts and forwards lambda_
  • enforce_gate.py — enforce() accepts and forwards lambda_
  • __main__.py — --lambda LAMBDA argument added to both the risk and enforce subcommands (stored as lam to avoid a keyword conflict)
  • README.md — All S × E × C references updated to S × E × C × λ; sensitivity tuning documented
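
A minimal sketch of the scaling, under stated assumptions: the threshold values here are placeholders (the real values are read from configuration via cfg_get, per the class diagram below), and this classify() takes precomputed exposure and confidence for brevity, whereas the real one derives them from count, max_count, and samples:

```python
# Sketch only: threshold values are assumed placeholders.
HIGH_THRESHOLD = 0.8
MEDIUM_THRESHOLD = 0.4

def compute_risk(severity: float, exposure_val: float, confidence_val: float,
                 lambda_: float = 1.0) -> float:
    """Overall score: S × E × C × λ."""
    return severity * exposure_val * confidence_val * lambda_

def classify(severity: float, exposure_val: float, confidence_val: float,
             lambda_: float = 1.0) -> tuple[str, float]:
    """Label a change, scaling severity by lambda_ before the threshold checks."""
    effective_severity = severity * lambda_   # lambda_ > 1 raises sensitivity
    if effective_severity >= HIGH_THRESHOLD:
        label = "HIGH"
    elif effective_severity >= MEDIUM_THRESHOLD:
        label = "MEDIUM"
    else:
        label = "LOW"
    # Exposure and confidence are deliberately left untouched by lambda_.
    return label, compute_risk(severity, exposure_val, confidence_val, lambda_)

# With λ = 2, severity 0.5 becomes effective 1.0 and crosses the HIGH threshold:
print(classify(0.5, 0.8, 0.9, lambda_=2.0))   # ('HIGH', 0.72)
```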

Example

# Default — unchanged behaviour
impactguard risk diff.txt runtime.json report.json

# More sensitive: severity 0.5 → effective 1.0, crosses HIGH threshold
impactguard risk diff.txt runtime.json report.json --lambda=2

# Less sensitive: high-severity changes less likely to block
impactguard enforce diff.txt runtime.json --lambda=0.5

Summary by Sourcery

Add a configurable lambda sensitivity multiplier to the S × E × C risk model and CLI, and document the extended S × E × C × λ framework.

New Features:

  • Introduce a lambda (λ) sensitivity multiplier into risk classification and scoring, allowing tuning of how aggressively changes are classified as HIGH or MEDIUM via a CLI flag.
  • Add a --lambda option to the risk and enforce CLI subcommands to pass the sensitivity multiplier through the analysis and enforcement pipeline.

Enhancements:

  • Update the risk framework documentation and README to describe the extended S × E × C × λ model, its parameters, and its effect on sensitivity.


sourcery-ai Bot commented May 6, 2026

Reviewer's Guide

Extends the risk model from S × E × C to S × E × C × λ by introducing a tunable lambda sensitivity multiplier. The multiplier flows from the CLI flags through enforcement and risk evaluation, scaling severity in classification and the overall risk score; the new behavior is documented in the README.

Sequence diagram for CLI lambda propagation through risk evaluation and enforcement

```mermaid
sequenceDiagram
    actor Developer
    participant CLI_main as impactguard___main__
    participant RiskGate as risk_gate_run
    participant RiskModel as risk_model_functions
    participant EnforceGate as enforce_gate_enforce

    Developer->>CLI_main: impactguard risk diff runtime report --lambda=L
    CLI_main->>CLI_main: parse_args()
    CLI_main->>CLI_main: cmd_risk(args)
    CLI_main->>RiskGate: run(diff_path, runtime_path, output_path, lambda_=L)

    loop per_changed_function
        RiskGate->>RiskModel: classify(severity, count, max_count, samples=count, lambda_=L)
        RiskModel->>RiskModel: effective_severity = severity * lambda_
        RiskModel-->>RiskGate: risk_label, exposure_val, confidence_val
        RiskGate->>RiskModel: compute_risk(severity, exposure_val, confidence_val, lambda_=L)
        RiskModel-->>RiskGate: risk_score = S*E*C*L
    end

    RiskGate-->>CLI_main: report
    CLI_main-->>Developer: write risk report

    Developer->>CLI_main: impactguard enforce diff runtime --lambda=L
    CLI_main->>CLI_main: parse_args()
    CLI_main->>CLI_main: cmd_enforce(args)
    CLI_main->>EnforceGate: enforce(diff_path, runtime_path, output_path, block_unknown, lambda_=L)
    EnforceGate->>RiskGate: run(diff_path, runtime_path, output_path, lambda_=L)
    RiskGate-->>EnforceGate: report
    EnforceGate->>EnforceGate: scan for HIGH / UNKNOWN
    EnforceGate-->>CLI_main: exit_code (1 or 0)
    CLI_main-->>Developer: CI pass or fail based on lambda-sensitive risk
```
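
A hypothetical sketch of the enforcement step shown above; the report shape (a list of dicts with a "risk" key) is an assumption for illustration, while the signatures follow the class diagram below:

```python
from impactguard.risk_gate import run  # run() as in the class diagram below

def enforce(diff_path, runtime_path, output_path,
            block_unknown=False, lambda_=1.0):
    """Run the lambda-sensitive risk gate, then fail CI on HIGH (or UNKNOWN)."""
    report = run(diff_path, runtime_path, output_path, lambda_=lambda_)
    blocked = any(
        entry["risk"] == "HIGH" or (block_unknown and entry["risk"] == "UNKNOWN")
        for entry in report
    )
    return 1 if blocked else 0  # exit code 1 blocks the pipeline
```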

Updated class diagram for risk model, gate, enforcement, and CLI lambda flow

```mermaid
classDiagram
    class RiskModel {
        +float confidence(samples, threshold)
        +tuple classify(severity, count, max_count, samples, lambda_)
        +float compute_risk(severity, exposure_val, confidence_val, lambda_)
    }

    class RiskGate {
        +list run(diff_path, runtime_path, output_path, lambda_)
    }

    class EnforceGate {
        +int enforce(diff_path, runtime_path, output_path, block_unknown, lambda_)
    }

    class CLIMain {
        +int cmd_risk(args)
        +int cmd_enforce(args)
        +int main()
        -float lam
    }

    CLIMain --> RiskGate : passes lambda_ to run
    CLIMain --> EnforceGate : passes lambda_ to enforce
    EnforceGate --> RiskGate : forwards lambda_ to run
    RiskGate --> RiskModel : uses lambda_ in classify
    RiskGate --> RiskModel : uses lambda_ in compute_risk

    class ConfigModule {
        +Any cfg_get(section, key, default_value)
    }

    RiskModel --> ConfigModule : reads thresholds via cfg_get
```

File-Level Changes

1. Add the lambda sensitivity multiplier to risk classification and risk score computation.
  • Extend classify() to accept lambda_ with a default of 1.0 and compute an effective_severity = severity * lambda_ used for the HIGH/MEDIUM threshold checks.
  • Extend compute_risk() to accept lambda_ with a default of 1.0 and multiply it into the final score.
  • Keep the exposure and confidence logic unchanged so lambda only scales severity/risk, not data quality or call frequency.
  Files: src/impactguard/risk_model.py

2. Thread the lambda parameter from the CLI through the risk and enforce workflows into the risk model (see the argparse sketch below).
  • Add a --lambda option (stored as lam) to the risk and enforce subcommands, with help text and a default of 1.0.
  • Update cmd_risk() to pass lambda_ from the parsed args into risk_gate.run().
  • Update cmd_enforce() to pass lambda_ from the parsed args into enforce(), and from there into risk_gate.run().
  • Extend the risk_gate.run() and enforce_gate.enforce() signatures to accept lambda_ and forward it into classify().
  Files: src/impactguard/__main__.py, src/impactguard/risk_gate.py, src/impactguard/enforce_gate.py

3. Document the extended S × E × C × λ framework and CLI sensitivity tuning in the README.
  • Rename all S × E × C references to S × E × C × λ where appropriate, including feature lists and comparison tables.
  • Introduce Lambda (λ) as a first-class component of the risk framework documentation, explaining its semantics and default.
  • Add examples that demonstrate how different --lambda values increase or decrease risk sensitivity.
  Files: README.md
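
A minimal sketch of that CLI wiring, assuming argparse: the dest="lam" workaround and the 1.0 default come from the PR description, while the positional argument names are illustrative:

```python
import argparse

parser = argparse.ArgumentParser(prog="impactguard")
subparsers = parser.add_subparsers(dest="command", required=True)

for name in ("risk", "enforce"):
    sub = subparsers.add_parser(name)
    sub.add_argument("diff_path")
    sub.add_argument("runtime_path")
    if name == "risk":
        sub.add_argument("output_path")
    # "lambda" is a reserved word in Python, so args.lambda would be a syntax
    # error; dest="lam" stores the value under a safe attribute name instead.
    sub.add_argument("--lambda", dest="lam", type=float, default=1.0,
                     metavar="LAMBDA",
                     help="sensitivity multiplier for the risk model (default: 1.0)")

args = parser.parse_args(
    ["risk", "diff.txt", "runtime.json", "report.json", "--lambda", "2"])
print(args.lam)  # 2.0
```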


daedalus merged commit 8b244e9 into master on May 6, 2026 (1 check was pending).
@codacy-production

Not up to standards ⛔

🔴 Issues 1 minor

Alerts:
⚠ 1 new issue (the quality gate allows at most 0 issues of at least minor severity)

Results:
1 new issue

Category Results
CodeStyle 1 minor

View in Codacy

🟢 Metrics 0 complexity · 0 duplication

Metric Results
Complexity 0
Duplication 0

View in Codacy



sourcery-ai Bot left a comment


Hey - I've found 1 issue, and left some high-level feedback:

  • Consider validating the --lambda value (e.g., enforcing a positive range and rejecting NaN/inf) before using it in classify/compute_risk, since negative or nonsensical values would make the thresholds and risk scores behave unpredictably (a possible check is sketched below).
  • To keep configuration transparent for downstream consumers, consider including the effective lambda_ value in the risk report output so users can see which sensitivity setting was used when the report was generated.
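
One possible shape for that validation, as a sketch: the helper name positive_float is hypothetical, wired in through the standard argparse type hook:

```python
import argparse
import math

def positive_float(text: str) -> float:
    """argparse type for --lambda: accept only finite numbers greater than 0."""
    try:
        value = float(text)
    except ValueError:
        raise argparse.ArgumentTypeError(f"{text!r} is not a number")
    if not math.isfinite(value) or value <= 0:  # rejects NaN, inf, 0, negatives
        raise argparse.ArgumentTypeError(
            f"--lambda must be a finite positive number, got {text!r}")
    return value

# e.g. sub.add_argument("--lambda", dest="lam", type=positive_float, default=1.0)
```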

Comment thread: README.md, line 201

### The S × E × C × λ Risk Framework

- The core logic resides in `risk_model.py`. It quantifies risk by evaluating three distinct dimensions:
+ The core logic resides in `risk_model.py`. It quantifies risk by evaluating three distinct dimensions, scaled by a tuneable sensitivity multiplier λ:


nitpick (typo): Consider using the more standard spelling "tunable" instead of "tuneable".

"Tuneable" is acceptable, but "tunable" is more common in technical writing and will feel more natural to most readers.

Suggested change:
- The core logic resides in `risk_model.py`. It quantifies risk by evaluating three distinct dimensions, scaled by a tuneable sensitivity multiplier λ:
+ The core logic resides in `risk_model.py`. It quantifies risk by evaluating three distinct dimensions, scaled by a tunable sensitivity multiplier λ:

daedalus deleted the copilot/add-lambda-parameter-impactguard branch on May 6, 2026 at 13:31.
