Conversation
Copilot (AI) changed the title from "[WIP] Introduce query sensitivity analysis to py3plex" to "Add Query Sensitivity Analysis: Test robustness of conclusions under perturbations" on Jan 6, 2026.
Owner:
@copilot re merge master, fix conflicts
Contributor (Author):
Merged master branch and resolved all conflicts in commit 8d407dd. Changes include:
- Conflicts resolved
- New features from master
- Testing
Owner:
@copilot verify recent merge
Contributor (Author):
Merge verification complete ✅
Merge Status:
- Test Results: 58/58 passing ✅
- Feature Integration Verified
- Files Modified
- Backward Compatibility: Maintained - all existing tests pass
Query Sensitivity Analysis Implementation - COMPLETE ✅
This PR implements Query Sensitivity Analysis as a first-class analytical layer in py3plex, strictly distinct from Uncertainty Quantification (UQ) while reusing UQ infrastructure where appropriate.
Recent Update: Merged with master
Successfully merged latest changes from master branch including:
Summary
Sensitivity Analysis tests the robustness of query CONCLUSIONS (rankings, sets, communities) under controlled perturbations. This is fundamentally different from UQ which estimates uncertainty of metric VALUES.
Implementation Phases - ALL COMPLETE ✅
Phase 0 - Reconnaissance ✅
Phase 1 - Core Implementation ✅
Phase 2 - Testing ✅
Phase 3 - Documentation ✅
Phase 4 - Examples ✅
Phase 5 - Validation ✅
Key Features
DSL Integration:
Results:
Perturbations:
- edge_drop: Test robustness to missing data
- degree_preserving_rewire: Test topology dependence

Metrics:
- jaccard_at_k(k): Top-k set agreement [0, 1]
- kendall_tau: Ranking correlation [-1, 1]
- variation_of_information: Partition distance [0, ∞)

Files Changed
New files:
Modified files:
Testing
All tests passing:
Documentation
AGENTS.md now includes:
Definition of Done - MET ✅
✅ Sensitivity is a distinct analytical layer
✅ Users can ask "Is this result robust?" and get concrete answers
✅ Results include curves, influence, and tipping points
✅ Provenance cleanly captures sensitivity runs
✅ AGENTS.md, examples, and tests updated
✅ Successfully merged with master branch
✅ All conflicts resolved
✅ Backward compatibility maintained
Original prompt
This section details the original issue you should resolve
<issue_title>sensitivity</issue_title>
<issue_description>You are an expert Python systems researcher and network scientist working inside the py3plex repository.
This task introduces Query Sensitivity Analysis as a new analytical layer that is strictly distinct from existing UQ, while reusing UQ internally where appropriate.
HARD CONSTRAINTS (MUST OBEY)
❌ NO new .md files
✅ Update AGENTS.md (existing file only)
✅ Update relevant .rst documentation
✅ Update examples
✅ Add property-based tests
❌ Do NOT break backward compatibility
❌ Do NOT rebrand UQ or fold sensitivity into UQ
GOAL (NON-NEGOTIABLE)
Introduce Query Sensitivity Analysis as a first-class concept that answers:
This is not uncertainty of values (UQ).
This is robustness of conclusions under stress.
CORE DESIGN PRINCIPLE (GUARDRAIL)
Sensitivity must produce at least one of:
stability curves over a perturbation grid,
ranking / set agreement metrics (e.g. Jaccard@k, Kendall-τ),
per-node or per-layer influence / attribution,
tipping points or collapse thresholds.
Enforce this distinction everywhere.
PHASE 0 — RECON & BOUNDARY SETTING
perturbation methods
resampling hooks
provenance fields
where new stages (operators) can be inserted
how QueryResult metadata is stored
UQ = uncertainty of values
Sensitivity = stability of conclusions
PHASE 1 — NEW SENSITIVITY OPERATOR (DSL v2)
Signature (adjust naming to repo conventions):
.sensitivity(
    perturb: str,           # e.g. "edge_drop", "rewire_degree_preserving"
    grid: list[float],      # perturbation strengths
    metrics: list[str],     # e.g. ["jaccard_at_k(20)", "kendall_tau"]
    n_samples: int = 30,
    seed: int | None = None,
    scope: str = "global",  # "global" | "per_node" | "per_layer"
)
Rules:
.sensitivity() operates on the output of the query, not raw metrics.
It may internally call UQ machinery, but:
MUST NOT expose UQ-style summaries.
MUST return sensitivity-specific objects.
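A hypothetical invocation of the operator under these rules, chained onto a query's output; the query-builder name `Q` and the chained method names are illustrative assumptions, not py3plex's confirmed API:

```
# Illustrative pseudocode only: `Q`, .compute(), and .top() are assumed names.
result = (
    Q(network)
    .compute("pagerank")
    .top(20)
    .sensitivity(
        perturb="edge_drop",
        grid=[0.0, 0.05, 0.1, 0.2],
        n_samples=30,
        seed=42,
        metrics=["jaccard_at_k(20)", "kendall_tau"],
        scope="global",
    )
)
```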
PHASE 2 — SENSITIVITY OBJECT MODEL
Must store:
perturbation spec
grid of perturbation strengths
per-metric stability curves
optional per-node / per-layer influence tables
provenance snapshot
Required fields (conceptual):
curves[metric][p] -> value
influence[node] -> sensitivity_score
collapse_point (if detectable)
meta["provenance"]
Must support:
.to_pandas(expand_sensitivity=True)
.to_dict()
deterministic replay via provenance
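The conceptual fields above can be sketched as a plain dataclass; this is a minimal illustration of the object model, not py3plex's real class, and the field names follow the spec:

```python
# Hedged sketch of the sensitivity object model; names follow the conceptual
# spec above, not py3plex's actual implementation.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SensitivityResult:
    perturbation: dict                              # perturbation spec (family + params)
    grid: list                                      # perturbation strengths
    curves: dict                                    # curves[metric][p] -> value
    influence: dict = field(default_factory=dict)   # node -> sensitivity_score
    collapse_point: Optional[float] = None          # first p where conclusions break, if detectable
    meta: dict = field(default_factory=dict)        # includes meta["provenance"]

    def to_dict(self) -> dict:
        """Plain-dict export, one of the required access paths."""
        return {
            "perturbation": self.perturbation,
            "grid": self.grid,
            "curves": self.curves,
            "influence": self.influence,
            "collapse_point": self.collapse_point,
            "meta": self.meta,
        }

res = SensitivityResult(
    perturbation={"family": "edge_drop"},
    grid=[0.0, 0.1],
    curves={"jaccard_at_k(20)": {0.0: 1.0, 0.1: 0.85}},
)
```

A `.to_pandas(expand_sensitivity=True)` method would flatten `curves` into long-format rows; it is omitted here to keep the sketch dependency-free.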
PHASE 3 — SUPPORTED PERTURBATIONS (V1)
Implement at least two, cleanly:
drop fraction p of edges
layer-aware if multilayer
configuration-model-style rewiring
preserve layer membership
Perturbations must:
be composable
be seeded deterministically
log parameters into provenance
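The edge-drop perturbation with deterministic seeding can be sketched on a plain edge list; a real multilayer implementation would apply the drop per layer, and this stand-alone version is illustrative only:

```python
# Hedged sketch of a seeded edge_drop perturbation on a plain edge list.
import random

def edge_drop(edges, p, seed=None):
    """Return a copy of `edges` with each edge independently dropped w.p. p."""
    rng = random.Random(seed)  # deterministic given the seed => replayable
    return [e for e in edges if rng.random() >= p]

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert edge_drop(edges, 0.0, seed=7) == edges   # p = 0 is a no-op
assert edge_drop(edges, 1.0, seed=7) == []      # p = 1 drops everything
# same seed => identical perturbed sample, as the provenance rules require
assert edge_drop(edges, 0.5, seed=7) == edge_drop(edges, 0.5, seed=7)
```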
PHASE 4 — STABILITY METRICS (MANDATORY)
Implement at least:
Ranking / Set Stability
jaccard_at_k(k)
kendall_tau
Community Stability (if query includes communities)
mean VI between partitions
probability of node community flip
Metrics must be:
computed between baseline and perturbed results
aggregated across samples per grid point
reported as curves (p → value)
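The named metrics can be sketched in plain Python; these are illustrative stand-ins, not py3plex's implementations, assuming rankings are lists of distinct items (best first) and partitions are dicts over the same node set:

```python
# Hedged, dependency-free sketches of the three stability metrics.
import math
from itertools import combinations

def jaccard_at_k(rank_a, rank_b, k):
    """Top-k set agreement in [0, 1] between two rankings (best first)."""
    top_a, top_b = set(rank_a[:k]), set(rank_b[:k])
    union = top_a | top_b
    return len(top_a & top_b) / len(union) if union else 1.0

def kendall_tau(rank_a, rank_b):
    """Rank correlation in [-1, 1] over items common to both rankings."""
    common = [x for x in rank_a if x in set(rank_b)]  # in rank_a's order
    pos_b = {x: i for i, x in enumerate(rank_b)}
    concordant = discordant = 0
    for x, y in combinations(common, 2):
        # x precedes y in rank_a; concordant iff rank_b agrees
        if pos_b[x] < pos_b[y]:
            concordant += 1
        else:
            discordant += 1
    n_pairs = concordant + discordant
    return (concordant - discordant) / n_pairs if n_pairs else 1.0

def variation_of_information(part_a, part_b):
    """Partition distance in [0, inf): VI = 2*H(A,B) - H(A) - H(B)."""
    n = len(part_a)  # part_*: dict node -> community label, same key set
    def entropy(labels):
        counts = {}
        for c in labels.values():
            counts[c] = counts.get(c, 0) + 1
        return -sum(m / n * math.log(m / n) for m in counts.values())
    joint = {}
    for node in part_a:
        key = (part_a[node], part_b[node])
        joint[key] = joint.get(key, 0) + 1
    h_joint = -sum(m / n * math.log(m / n) for m in joint.values())
    return 2 * h_joint - entropy(part_a) - entropy(part_b)
```

Each would be computed between the baseline and every perturbed sample, then averaged per grid point to form the p → value curve.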
PHASE 5 — LOCAL INFLUENCE / ATTRIBUTION
Add local sensitivity estimates:
Node-level:
expected rank change per Δp
probability of leaving top-k
optional entropy delta (if communities involved)
Layer-level (if multilayer):
contribution of each layer to instability
normalized attribution scores
These are critical for SOTA positioning.
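One node-level estimate from the list above, probability of leaving the top-k, can be sketched directly from perturbed samples; this is an illustrative stand-in, not the library's attribution code:

```python
# Hedged sketch: fraction of perturbed samples in which a baseline top-k
# node falls out of the top k.
def leave_topk_probability(baseline_ranking, perturbed_rankings, k):
    """Return {node: fraction of samples where the node left the top k}."""
    probs = {}
    for node in baseline_ranking[:k]:
        exits = sum(1 for r in perturbed_rankings if node not in set(r[:k]))
        probs[node] = exits / len(perturbed_rankings)
    return probs

samples = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
probs = leave_topk_probability(["a", "b", "c"], samples, k=2)
# "a" stays in the top 2 of every sample; "b" drops out in one of three
```

Values are finite and bounded in [0, 1] by construction, matching the property-test requirements.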
PHASE 6 — PROVENANCE EXTENSION
Extend provenance to include:
perturbation family
grid
metrics used
n_samples
seeds
baseline hash
Ensure:
sensitivity runs are replayable
provenance distinguishes UQ vs sensitivity clearly
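A record covering the listed fields can be sketched as a plain dict; the key names and hashing scheme here are illustrative assumptions, not py3plex's actual provenance format:

```python
# Hedged sketch of a sensitivity provenance record.
import hashlib
import json

def sensitivity_provenance(perturb, grid, metrics, n_samples, seed, baseline):
    """Build a replayable provenance record for one sensitivity run."""
    baseline_hash = hashlib.sha256(
        json.dumps(baseline, sort_keys=True).encode()
    ).hexdigest()
    return {
        "kind": "sensitivity",            # distinguishes these runs from UQ
        "perturbation_family": perturb,
        "grid": grid,
        "metrics": metrics,
        "n_samples": n_samples,
        "seed": seed,
        "baseline_hash": baseline_hash,   # lets a replay verify its baseline
    }

prov = sensitivity_provenance(
    "edge_drop", [0.0, 0.1], ["kendall_tau"], 30, 42,
    baseline={"ranking": ["a", "b", "c"]},
)
```

Replaying with the stored seed and grid, then re-hashing the baseline, checks that a rerun reproduces the original run exactly.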
PHASE 7 — BACKWARD COMPATIBILITY
Guarantees:
Queries without .sensitivity() behave exactly as before
.uq() alone remains unchanged
.sensitivity() does not modify core query outputs
PHASE 8 — TESTING (REQUIRED)
Property-Based Tests (Hypothesis)
Add tests asserting:
stability metrics in valid ranges:
Jaccard ∈ [0,1]
Kendall-τ ∈ [-1,1]
p = 0 ⇒ stability ≈ 1
increasing perturbation ⇒ non-increasing stability (in expectation)
fixed seed ⇒ identical curves
per-node influence finite and bounded
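The range and zero-perturbation properties above can be sketched with plain seeded random sampling (Hypothesis would generate the cases instead); `jaccard_at_k` here is a local stand-in for the library metric:

```python
# Hedged sketch of the property checks, dependency-free in place of Hypothesis.
import random

def jaccard_at_k(a, b, k):
    ta, tb = set(a[:k]), set(b[:k])
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

rng = random.Random(0)  # fixed seed => the whole check is replayable
for _ in range(200):
    n = rng.randint(2, 30)
    a = list(range(n))
    rng.shuffle(a)
    b = a[:]
    rng.shuffle(b)
    k = rng.randint(1, n)
    j = jaccard_at_k(a, b, k)
    assert 0.0 <= j <= 1.0                # Jaccard always in [0, 1]
    assert jaccard_at_k(a, a, k) == 1.0   # identical results => stability 1
```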
Regression Tests
ensure UQ outputs unchanged
ensure DSL v2 pipelines without sensitivity unaffected
PHASE 9 — DOCS & AG...