Merged
11 changes: 5 additions & 6 deletions .claude/skills/final-review/SKILL.md
@@ -87,14 +87,12 @@ git fetch origin main
git merge origin/main --no-edit
```

- **Merge clean** — merge commit is ready locally. Run `cargo check` to verify the merge didn't introduce API incompatibilities (e.g., functions renamed/removed on main that the PR still calls). If `cargo check` fails, fix the compile errors before proceeding.
- **Merge conflicted** — most conflicts in this codebase are "both sides added new entries in ordered lists" (in `mod.rs`, `lib.rs`, `create.rs`, `dispatch.rs`, `reductions.typ`). These are mechanical: keep both sides, maintain alphabetical order. Delegate to a subagent for resolution, then run `cargo check` to verify. Continue with the review; if conflicts are too complex, decide whether to hold in Step 5.
- **Merge clean** — merge commit is ready locally. Run `make check && make paper` to verify the merge didn't break anything (API incompatibilities, formatting, test failures, paper compilation). If any step fails, fix the errors before proceeding. For `.bib` conflicts, also check for duplicate BibTeX keys (`grep '^@' docs/paper/references.bib | sed 's/@[a-z]*{//' | sed 's/,$//' | sort | uniq -d`) and remove duplicates.
- **Merge conflicted** — most conflicts in this codebase are "both sides added new entries in ordered lists" (in `mod.rs`, `lib.rs`, `create.rs`, `dispatch.rs`, `reductions.typ`). These are mechanical: keep both sides, maintain alphabetical order. Delegate to a subagent for resolution, then run `make check && make paper` to verify. Continue with the review; if conflicts are too complex, decide whether to hold in Step 5.
- **Merge failed** — note the error and continue.
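The duplicate-key pipeline quoted in the merge-clean bullet can be exercised on a throwaway file to confirm it flags repeated keys. This is a scratch sketch — the entries below are fabricated for illustration and the temp file stands in for `docs/paper/references.bib`:

```shell
# Scratch demo of the duplicate-BibTeX-key check from the bullet above.
# Creates a temporary .bib with one deliberately repeated key.
BIB=$(mktemp)
cat > "$BIB" <<'EOF'
@article{chopra1996,
  title = {Extended formulations for the A-cut problem},
}
@article{chen2008,
  title = {Another entry},
}
@article{chopra1996,
  title = {Accidental duplicate},
}
EOF
# Same pipeline as in the skill doc, pointed at the scratch file:
grep '^@' "$BIB" | sed 's/@[a-z]*{//' | sed 's/,$//' | sort | uniq -d
# → chopra1996
```

An empty result means no duplicate keys; any printed key needs one of its entries removed before proceeding.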

**0d. Sanity check**: verify the diff touches `src/models/` or `src/rules/` (for model/rule PRs). If the diff only contains unrelated files, STOP and flag the mismatch.

**0e. Documentation build check**: if the PR touches `docs/paper/reductions.typ`, `docs/paper/references.bib`, or example specs, run `make paper` to verify the Typst paper compiles with the merged code. If compilation fails, check whether the error is in files touched by this PR. If not, note "pre-existing paper error from [source model/rule]" and continue — do not block the review on errors from other merged PRs. Only fix errors caused by this PR.

### Step 1: Gather Context

Use `gh` commands to get the PR's actual data — always from GitHub, never from local git state:
@@ -380,12 +378,13 @@ Use `AskUserQuestion` only when needed:
### Step 8: Execute decision

**If Push and fix CI:**
1. Push all commits (merge-with-main + any fixes) from the worktree:
1. **Pre-push verification** — run `make check && make paper` locally before pushing. Both must pass. Fix any failures, commit, then push:
```bash
cd <worktree path>
make check && make paper
git push
```
2. Wait for CI — but **do not poll excessively**. Check once after ~60 seconds. If CI hasn't triggered or is still pending, run `make check` locally (fmt + clippy + test). If local checks pass, proceed to step 4 immediately and note "CI pending, local checks pass" when presenting the merge link. The reviewer can admin-merge or wait for CI at their discretion. Do not spend more than 1–2 CI poll attempts. If CI does run and fails, fix the issues, commit, and push again.
2. Wait for CI — but **do not poll excessively**. Check once after ~60 seconds. If CI hasn't triggered or is still pending, note "CI pending, local checks pass" when presenting the merge link (since pre-push already verified). The reviewer can admin-merge or wait for CI at their discretion. Do not spend more than 1–2 CI poll attempts. If CI does run and fails, fix the issues, commit, and push again.
3. If any follow-up items were noted during the review, post them as a comment:
```bash
COMMENT_FILE=$(mktemp)
35 changes: 35 additions & 0 deletions docs/paper/reductions.typ
@@ -5033,6 +5033,41 @@ The following reductions to Integer Linear Programming are straightforward formu
_Solution extraction._ $K = {v : x_v = 1}$.
]

#let gp_ilp = load-example("GraphPartitioning", "ILP")
#let gp_ilp_sol = gp_ilp.solutions.at(0)
#let gp_n = graph-num-vertices(gp_ilp.source.instance)
#let gp_edges = gp_ilp.source.instance.graph.edges
#let gp_m = gp_edges.len()
#let gp_part_a = range(gp_n).filter(i => gp_ilp_sol.source_config.at(i) == 0)
#let gp_part_b = range(gp_n).filter(i => gp_ilp_sol.source_config.at(i) == 1)
#let gp_crossing = range(gp_m).filter(i => gp_ilp_sol.target_config.at(gp_n + i) == 1)
#let gp_crossing_edges = gp_crossing.map(i => gp_edges.at(i))
#reduction-rule("GraphPartitioning", "ILP",
example: true,
example-caption: [Two triangles linked by three crossing edges encoded as a 15-variable ILP.],
extra: [
*Step 1 -- Balanced partition variables.* Introduce $x_v in {0,1}$ for each vertex. In the canonical witness, $A = {#gp_part_a.map(str).join(", ")}$ and $B = {#gp_part_b.map(str).join(", ")}$, so $bold(x) = (#gp_ilp_sol.source_config.map(str).join(", "))$.\

*Step 2 -- Crossing indicators.* Add one binary variable per edge, so the target has $#gp_ilp.target.instance.num_vars$ binary variables and $#gp_ilp.target.instance.constraints.len()$ constraints in total. The three active crossing indicators correspond to edges $\{#gp_crossing_edges.map(e => "(" + str(e.at(0)) + "," + str(e.at(1)) + ")").join(", ")\}$.\

*Step 3 -- Verify the objective.* The target witness $bold(z) = (#gp_ilp_sol.target_config.map(str).join(", "))$ sets exactly #gp_crossing.len() edge-indicator variables to 1, so the ILP objective equals the bisection width #gp_crossing.len() #sym.checkmark
],
)[
The node-and-edge integer-programming formulation of Chopra and Rao @chopra1993 models a balanced cut with one binary variable per vertex and one binary crossing indicator per edge. A single balance equality enforces the bisection, and two linear inequalities per edge linearize $|x_u - x_v|$ so that the objective can minimize the number of crossing edges directly.
][
_Construction._ Given graph $G = (V, E)$ with $n = |V|$ and $m = |E|$:

_Variables._ Binary $x_v in {0, 1}$ for each $v in V$, where $x_v = 1$ means vertex $v$ is placed in side $B$. For each edge $e = (u, v) in E$, binary $y_e in {0, 1}$ indicates whether $e$ crosses the partition. Total: $n + m$ variables.

_Constraints._ (1) Balance: $sum_(v in V) x_v = n / 2$. If $n$ is odd, the right-hand side is fractional, so the ILP is infeasible exactly when Graph Partitioning has no valid balanced partition. (2) For each edge $e = (u, v)$: $y_e >= x_u - x_v$ and $y_e >= x_v - x_u$. Since $y_e$ is binary and the objective minimizes $sum_e y_e$, these inequalities force $y_e = 1$ exactly for crossing edges. Total: $2m + 1$ constraints.

_Objective._ Minimize $sum_(e in E) y_e$.

_Correctness._ ($arrow.r.double$) Given a balanced partition $(A, B)$, set $x_v = 1$ iff $v in B$, and set $y_e = 1$ iff edge $e$ has one endpoint in each side. The balance constraint holds because $|B| = n / 2$, and the linking inequalities hold because $|x_u - x_v| = 1$ exactly on crossing edges. The objective is therefore the cut size. ($arrow.l.double$) Any feasible ILP solution satisfies the balance equation, so exactly half the vertices have $x_v = 1$ when $n$ is even. For each edge, the linking inequalities imply $y_e >= |x_u - x_v|$; minimization therefore chooses $y_e = |x_u - x_v|$, making the objective count precisely the crossing edges of the extracted partition.

_Solution extraction._ Return the first $n$ variables $(x_v)_(v in V)$ as the Graph Partitioning configuration; the edge-indicator variables are auxiliary.
]

#let ks_ilp = load-example("Knapsack", "ILP")
#let ks_ilp_sol = ks_ilp.solutions.at(0)
#let ks_ilp_selected = ks_ilp_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => i)
31 changes: 21 additions & 10 deletions docs/paper/references.bib
@@ -894,6 +894,27 @@ @phdthesis{booth1975
year = {1975}
}

@article{chopra1993,
author = {Sunil Chopra and M. R. Rao},
title = {The partition problem},
journal = {Mathematical Programming},
volume = {59},
number = {1--3},
pages = {87--115},
year = {1993},
doi = {10.1007/BF01581239}
}

@article{chopra1996,
author = {Sunil Chopra and Jonathan H. Owen},
title = {Extended formulations for the A-cut problem},
journal = {Mathematical Programming},
volume = {73},
pages = {7--30},
year = {1996},
doi = {10.1007/BF02592096}
}

@article{boothlueker1976,
author = {Kellogg S. Booth and George S. Lueker},
title = {Testing for the Consecutive Ones Property, Interval Graphs, and Graph Planarity Using {PQ}-Tree Algorithms},
@@ -925,16 +946,6 @@ @article{chen2008
doi = {10.1145/1411509.1411511}
}

@article{chopra1996,
author = {Sunil Chopra and Jonathan H. Owen},
title = {Extended formulations for the A-cut problem},
journal = {Mathematical Programming},
volume = {73},
pages = {7--30},
year = {1996},
doi = {10.1007/BF02592096}
}

@article{coffman1972,
author = {Edward G. Coffman and Ronald L. Graham},
title = {Optimal Scheduling for Two-Processor Systems},
1 change: 0 additions & 1 deletion src/models/formula/qbf.rs
@@ -153,7 +153,6 @@ impl QuantifiedBooleanFormulas {
}
}
}

}

impl Problem for QuantifiedBooleanFormulas {
114 changes: 114 additions & 0 deletions src/rules/graphpartitioning_ilp.rs
@@ -0,0 +1,114 @@
//! Reduction from GraphPartitioning to ILP (Integer Linear Programming).
//!
//! Uses the standard balanced-cut ILP formulation:
//! - Variables: `x_v` for vertex-side assignment and `y_e` for edge-crossing indicators
//! - Constraints: one balance equality plus two linking inequalities per edge
//! - Objective: minimize the number of crossing edges

use crate::models::algebraic::{LinearConstraint, ObjectiveSense, ILP};
use crate::models::graph::GraphPartitioning;
use crate::reduction;
use crate::rules::traits::{ReduceTo, ReductionResult};
use crate::topology::{Graph, SimpleGraph};

/// Result of reducing GraphPartitioning to ILP.
///
/// Variable layout (all binary):
/// - `x_v` for `v = 0..n-1`: vertex `v` belongs to side `B`
/// - `y_e` for `e = 0..m-1`: edge `e` crosses the partition
#[derive(Debug, Clone)]
pub struct ReductionGraphPartitioningToILP {
target: ILP<bool>,
num_vertices: usize,
}

impl ReductionResult for ReductionGraphPartitioningToILP {
type Source = GraphPartitioning<SimpleGraph>;
type Target = ILP<bool>;

fn target_problem(&self) -> &ILP<bool> {
&self.target
}

fn extract_solution(&self, target_solution: &[usize]) -> Vec<usize> {
target_solution[..self.num_vertices].to_vec()
}
}

#[reduction(
overhead = {
num_vars = "num_vertices + num_edges",
num_constraints = "2 * num_edges + 1",
}
)]
impl ReduceTo<ILP<bool>> for GraphPartitioning<SimpleGraph> {
type Result = ReductionGraphPartitioningToILP;

fn reduce_to(&self) -> Self::Result {
let n = self.num_vertices();
let edges = self.graph().edges();
let m = edges.len();
let num_vars = n + m;

let mut constraints = Vec::with_capacity(2 * m + 1);

let balance_terms: Vec<(usize, f64)> = (0..n).map(|v| (v, 1.0)).collect();
constraints.push(LinearConstraint::eq(balance_terms, n as f64 / 2.0));

for (edge_idx, (u, v)) in edges.iter().enumerate() {
let y_var = n + edge_idx;
constraints.push(LinearConstraint::ge(
vec![(y_var, 1.0), (*u, -1.0), (*v, 1.0)],
0.0,
));
constraints.push(LinearConstraint::ge(
vec![(y_var, 1.0), (*u, 1.0), (*v, -1.0)],
0.0,
));
}

let objective: Vec<(usize, f64)> = (0..m).map(|edge_idx| (n + edge_idx, 1.0)).collect();
let target = ILP::new(num_vars, constraints, objective, ObjectiveSense::Minimize);

ReductionGraphPartitioningToILP {
target,
num_vertices: n,
}
}
}

#[cfg(feature = "example-db")]
pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
use crate::export::SolutionPair;

vec![crate::example_db::specs::RuleExampleSpec {
id: "graphpartitioning_to_ilp",
build: || {
let source = GraphPartitioning::new(SimpleGraph::new(
6,
vec![
(0, 1),
(0, 2),
(1, 2),
(1, 3),
(2, 3),
(2, 4),
(3, 4),
(3, 5),
(4, 5),
],
));
crate::example_db::specs::rule_example_with_witness::<_, ILP<bool>>(
source,
SolutionPair {
source_config: vec![0, 0, 0, 1, 1, 1],
target_config: vec![0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0],
},
)
},
}]
}

#[cfg(test)]
#[path = "../unit_tests/rules/graphpartitioning_ilp.rs"]
mod tests;
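As a sanity check on the canonical example above, the following self-contained sketch re-derives the balance, linking, and objective conditions for the two-triangle instance and asserts that the `target_config` witness from the example spec satisfies them. It deliberately avoids the crate's `ILP` and `LinearConstraint` types, only mirroring the reduction's variable layout:

```rust
// Standalone check of the canonical witness; mirrors the reduction's
// variable layout (x_0..x_5, then y_0..y_8) without using the crate's types.
fn main() {
    let n = 6;
    let edges: [(usize, usize); 9] =
        [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (3, 5), (4, 5)];
    // target_config from the example spec: vertex sides, then edge indicators.
    let z = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0];

    // Balance equality: exactly n/2 vertices on side B.
    assert_eq!(z[..n].iter().sum::<i32>(), (n as i32) / 2);

    // Linking inequalities force y_e >= |x_u - x_v| for every edge.
    for (e, &(u, v)) in edges.iter().enumerate() {
        assert!(z[n + e] >= (z[u] - z[v]).abs());
    }

    // Objective: the indicators sum to the bisection width.
    let cut: i32 = z[n..].iter().sum();
    assert_eq!(cut, 3);
    println!("cut = {cut}");
}
```

The three indicators set to 1 correspond exactly to the crossing edges (1,3), (2,3), and (2,4), matching the paper's Step 2.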
3 changes: 3 additions & 0 deletions src/rules/mod.rs
@@ -52,6 +52,8 @@ pub(crate) mod coloring_ilp;
#[cfg(feature = "ilp-solver")]
pub(crate) mod factoring_ilp;
#[cfg(feature = "ilp-solver")]
pub(crate) mod graphpartitioning_ilp;
#[cfg(feature = "ilp-solver")]
mod ilp_bool_ilp_i32;
#[cfg(feature = "ilp-solver")]
pub(crate) mod ilp_qubo;
@@ -118,6 +120,7 @@ pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::Ru
specs.extend(circuit_ilp::canonical_rule_example_specs());
specs.extend(coloring_ilp::canonical_rule_example_specs());
specs.extend(factoring_ilp::canonical_rule_example_specs());
specs.extend(graphpartitioning_ilp::canonical_rule_example_specs());
specs.extend(ilp_qubo::canonical_rule_example_specs());
specs.extend(knapsack_ilp::canonical_rule_example_specs());
specs.extend(longestcommonsubsequence_ilp::canonical_rule_example_specs());
7 changes: 2 additions & 5 deletions src/unit_tests/models/formula/qbf.rs
@@ -255,10 +255,7 @@ fn test_qbf_quantifier_clone() {
#[test]
fn test_qbf_empty_clause() {
// An empty clause (disjunction of zero literals) is always false
let problem = QuantifiedBooleanFormulas::new(
1,
vec![Quantifier::Exists],
vec![CNFClause::new(vec![])],
);
let problem =
QuantifiedBooleanFormulas::new(1, vec![Quantifier::Exists], vec![CNFClause::new(vec![])]);
assert!(!problem.is_true());
}