Conversation
Agent-Logs-Url: https://github.com/daedalus/ImpactGuard/sessions/aa8e07c7-bdb0-401e-a600-87e048b0f45c
Co-authored-by: daedalus <115175+daedalus@users.noreply.github.com>
Reviewer's Guide

Extends the risk model from S × E × C to S × E × C × λ by introducing a tunable lambda sensitivity multiplier that flows from CLI flags through enforcement and risk evaluation, scaling severity in classification and overall risk scoring, and documents the new behavior in the README.

Sequence diagram for CLI lambda propagation through risk evaluation and enforcement

```mermaid
sequenceDiagram
    actor Developer
    participant CLI_main as impactguard___main__
    participant RiskGate as risk_gate_run
    participant RiskModel as risk_model_functions
    participant EnforceGate as enforce_gate_enforce
    Developer->>CLI_main: impactguard risk diff runtime report --lambda=L
    CLI_main->>CLI_main: parse_args()
    CLI_main->>CLI_main: cmd_risk(args)
    CLI_main->>RiskGate: run(diff_path, runtime_path, output_path, lambda_=L)
    loop per_changed_function
        RiskGate->>RiskModel: classify(severity, count, max_count, samples=count, lambda_=L)
        RiskModel->>RiskModel: effective_severity = severity * lambda_
        RiskModel-->>RiskGate: risk_label, exposure_val, confidence_val
        RiskGate->>RiskModel: compute_risk(severity, exposure_val, confidence_val, lambda_=L)
        RiskModel-->>RiskGate: risk_score = S*E*C*L
    end
    RiskGate-->>CLI_main: report
    CLI_main-->>Developer: write risk report
    Developer->>CLI_main: impactguard enforce diff runtime --lambda=L
    CLI_main->>CLI_main: parse_args()
    CLI_main->>CLI_main: cmd_enforce(args)
    CLI_main->>EnforceGate: enforce(diff_path, runtime_path, output_path, block_unknown, lambda_=L)
    EnforceGate->>RiskGate: run(diff_path, runtime_path, output_path, lambda_=L)
    RiskGate-->>EnforceGate: report
    EnforceGate->>EnforceGate: scan for HIGH / UNKNOWN
    EnforceGate-->>CLI_main: exit_code (1 or 0)
    CLI_main-->>Developer: CI pass or fail based on lambda-sensitive risk
```
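The CLI side of the flow above can be sketched as plain functions. This is a hypothetical wiring — only the call signatures appear in the diagram, so the gate functions are injected as parameters rather than imported, and the attribute names on `args` are illustrative:

```python
def cmd_risk(args, run):
    # Forward the CLI lambda value (stored as `lam` to avoid the
    # Python keyword conflict) into the risk gate.
    report = run(args.diff, args.runtime, args.report, lambda_=args.lam)
    return 0 if report is not None else 1

def cmd_enforce(args, enforce):
    # Enforcement returns a CI exit code: 1 blocks the merge, 0 passes.
    return enforce(args.diff, args.runtime, args.report,
                   args.block_unknown, lambda_=args.lam)
```

With this shape, the same `--lambda` value reaches both `classify` and `compute_risk` through a single keyword argument, so there is one source of truth for sensitivity per invocation.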
Updated class diagram for risk model, gate, enforcement, and CLI lambda flow

```mermaid
classDiagram
    class RiskModel {
        +float confidence(samples, threshold)
        +tuple classify(severity, count, max_count, samples, lambda_)
        +float compute_risk(severity, exposure_val, confidence_val, lambda_)
    }
    class RiskGate {
        +list run(diff_path, runtime_path, output_path, lambda_)
    }
    class EnforceGate {
        +int enforce(diff_path, runtime_path, output_path, block_unknown, lambda_)
    }
    class CLIMain {
        +int cmd_risk(args)
        +int cmd_enforce(args)
        +int main()
        -float lam
    }
    CLIMain --> RiskGate : passes lambda_ to run
    CLIMain --> EnforceGate : passes lambda_ to enforce
    EnforceGate --> RiskGate : forwards lambda_ to run
    RiskGate --> RiskModel : uses lambda_ in classify
    RiskGate --> RiskModel : uses lambda_ in compute_risk
    class ConfigModule {
        +Any cfg_get(section, key, default_value)
    }
    RiskModel --> ConfigModule : reads thresholds via cfg_get
```
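The `ConfigModule` dependency shows only the `cfg_get(section, key, default_value)` signature, so the following is a minimal sketch of that lookup under an assumed nested-dict config store (the real backing store and threshold values are not shown in this diff):

```python
# Hypothetical in-memory config; ImpactGuard's actual store may differ.
_CONFIG = {
    "risk": {"high_threshold": 0.7, "medium_threshold": 0.4},
}

def cfg_get(section, key, default_value):
    # Read a setting with a fallback, as RiskModel does for its
    # classification thresholds per the class diagram.
    return _CONFIG.get(section, {}).get(key, default_value)
```

Keeping thresholds behind `cfg_get` means λ can rescale severity without touching the stored thresholds themselves.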
File-Level Changes
Not up to standards ⛔

🔴 Issues

| Category | Results |
|---|---|
| Code Style | 1 minor |

🟢 Metrics

| Metric | Results |
|---|---|
| Complexity | 0 |
| Duplication | 0 |
Hey - I've found 1 issue, and left some high level feedback:
- Consider validating the `--lambda` value (e.g., enforcing a positive range and rejecting NaN/inf) before using it in `classify`/`compute_risk`, since negative or nonsensical values would make the thresholds and risk scores behave unpredictably.
- To keep configuration transparent in downstream consumers, you may want to include the effective `lambda_` value in the risk report output so users can see which sensitivity setting was used when the report was generated.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Consider validating the `--lambda` value (e.g., enforcing a positive range and rejecting NaN/inf) before using it in `classify`/`compute_risk`, since negative or nonsensical values would make the thresholds and risk scores behave unpredictably.
- To keep configuration transparent in downstream consumers, you may want to include the effective `lambda_` value in the risk report output so users can see which sensitivity setting was used when the report was generated.
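One way to implement the first suggestion — a sketch only, since ImpactGuard's actual argument parsing is not shown in this conversation — is an argparse type function that rejects non-positive, NaN, and infinite values before they ever reach `classify`/`compute_risk`:

```python
import argparse
import math

def positive_finite_float(text):
    # argparse type callable: accept only finite floats > 0 for --lambda.
    try:
        value = float(text)
    except ValueError:
        raise argparse.ArgumentTypeError(f"invalid float: {text!r}")
    if not math.isfinite(value) or value <= 0:
        raise argparse.ArgumentTypeError(
            f"--lambda must be a finite value > 0, got {text!r}"
        )
    return value

parser = argparse.ArgumentParser(prog="impactguard")
# `dest="lam"` mirrors the PR's workaround for the `lambda` keyword.
parser.add_argument("--lambda", dest="lam", type=positive_finite_float,
                    default=1.0)
```

Validating at the parser boundary keeps the risk model free of defensive checks, and the second suggestion (echoing `lambda_` into the report) then only needs the already-validated value.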
## Individual Comments
### Comment 1
<location path="README.md" line_range="201" />
<code_context>
+### The S × E × C × λ Risk Framework
-The core logic resides in `risk_model.py`. It quantifies risk by evaluating three distinct dimensions:
+The core logic resides in `risk_model.py`. It quantifies risk by evaluating three distinct dimensions, scaled by a tuneable sensitivity multiplier λ:
| Component | Code Entity | Description |
</code_context>
<issue_to_address>
**nitpick (typo):** Consider using the more standard spelling "tunable" instead of "tuneable".
"Tuneable" is acceptable, but "tunable" is more common in technical writing and will feel more natural to most readers.
```suggestion
The core logic resides in `risk_model.py`. It quantifies risk by evaluating three distinct dimensions, scaled by a tunable sensitivity multiplier λ:
```
</issue_to_address>
The risk formula `S × E × C` now accepts a sensitivity multiplier λ (default `1.0`) via `--lambda`. Values above 1 raise sensitivity (more changes classified HIGH/MEDIUM); values below 1 lower it.

Changes

- `risk_model.py` — `classify()` scales effective severity by λ before threshold comparison; `compute_risk()` multiplies by λ
- `risk_gate.py` — `run()` accepts and forwards `lambda_`
- `enforce_gate.py` — `enforce()` accepts and forwards `lambda_`
- `__main__.py` — `--lambda LAMBDA` argument added to both `risk` and `enforce` subcommands (stored as `lam` to avoid keyword conflict)
- `README.md` — All `S × E × C` references updated to `S × E × C × λ`; sensitivity tuning documented

Example
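The Example section is collapsed in this view; here is a minimal sketch of λ-scaled classification and scoring. The function names match the class diagram, but the thresholds and the confidence ramp are illustrative assumptions — the real values come from config via `cfg_get`:

```python
# Illustrative thresholds; ImpactGuard reads the real ones from config.
HIGH_THRESHOLD = 0.7
MEDIUM_THRESHOLD = 0.4

def classify(severity, count, max_count, samples, lambda_=1.0):
    # Label a change by lambda-scaled severity.
    # Returns (risk_label, exposure_val, confidence_val).
    exposure = count / max_count if max_count else 0.0
    confidence = min(1.0, samples / 10.0)  # assumed confidence ramp
    effective_severity = severity * lambda_
    if effective_severity >= HIGH_THRESHOLD:
        label = "HIGH"
    elif effective_severity >= MEDIUM_THRESHOLD:
        label = "MEDIUM"
    else:
        label = "LOW"
    return label, exposure, confidence

def compute_risk(severity, exposure_val, confidence_val, lambda_=1.0):
    # Overall risk score: S * E * C * lambda.
    return severity * exposure_val * confidence_val * lambda_
```

Under these assumed thresholds, a severity of 0.5 classifies as MEDIUM at the default λ but as HIGH at λ = 1.5 (0.5 × 1.5 = 0.75 ≥ 0.7), which is the "values above 1 raise sensitivity" behavior described above.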
Summary by Sourcery
Add a configurable lambda sensitivity multiplier to the S × E × C risk model and CLI, and document the extended S × E × C × λ framework.
New Features:

- Add a tunable `--lambda` sensitivity multiplier that scales severity in classification and in the overall risk score, flowing from the CLI through risk evaluation and enforcement.

Enhancements:

- Update the README to document the extended S × E × C × λ framework and sensitivity tuning.