
Add Implied bound cuts. Find more knapsack cuts. Lift knapsack cuts #970

Merged
rapids-bot[bot] merged 38 commits into NVIDIA:release/26.04 from chris-maes:knapsack_improve
Apr 1, 2026

Conversation

@chris-maes
Contributor

@chris-maes chris-maes commented Mar 17, 2026

This PR fixes several bugs:

  • Failure to stop branch and bound quickly when an optimal solution is found
  • An incorrect dual bound on physciansched3-3 caused by a bug in clique cuts
  • A numerical issue in MIR cuts: dividing by small coefficients in the original inequality produced unstable cuts and false infeasibility on graphdraw-domain

This PR adds support for implied bound cuts:

  • The probing cache from heuristics is used to implement these cuts.

We make the following changes to improve the detection and generation of knapsack cuts:

  • If a constraint contains a negative coefficient, we complement the binary variable to produce a knapsack constraint
  • We rationalize coefficients and multiply by the common denominator to make the coefficients integer, producing a knapsack constraint
  • We fix variables where x* = 0 or x* = 1 to decrease the size of the 0-1 knapsack separation problem
  • After separating a cover, we find the minimal cover
  • We now lift the knapsack constraint

On the publicly available instances of Wolter's main knapsack test set, we go from 0% gap closed to a geomean of 14.39%. Wolter reports a geomean gap closed of ~11% on the same subset of her test set.

Baseline
===================================================================================================================
                        model   gap closed knapsack cuts knapsack time          zlp           db         zmip
                           gen         0.00            -         0.00 +1.122338e+05 +1.122338e+05 +1.123134e+05
                          umts         0.00            -         0.00 +2.912957e+07 +2.912957e+07 +3.012408e+07
                      roll3000         0.00            -         0.00 +1.109728e+04 +1.109728e+04 +1.289900e+04
                       cap6000         0.00            -         0.00 -2.451537e+06 -2.451537e+06 -2.451377e+06
                         fiber         0.00            -         0.00 +1.981074e+05 +1.981074e+05 +4.059352e+05
                          lseu         0.00            -         0.00 +9.479572e+02 +9.479572e+02 +1.120000e+03
                         mitre         0.00            -         0.00 +1.151350e+05 +1.151350e+05 +1.151550e+05
                        mod010         0.00            -         0.00 +6.532083e+03 +6.532083e+03 +6.548000e+03
              rococoC11-010100         0.00            -         0.00 +8.773207e+03 +8.773207e+03 +2.558600e+04
                         p2756         0.00            6         0.00 +2.701144e+03 +2.701144e+03 +3.124000e+03
                           mkc         0.00            -         0.00 -6.118500e+02 -6.118500e+02 -5.632120e+02
                         p0282         0.00            -         0.00 +1.800003e+05 +1.800003e+05 +2.584110e+05
                         p0548         0.00            -         0.00 +4.983656e+03 +4.983656e+03 +8.691000e+03
                        sp97ar         0.00            -         0.00 +6.525604e+08 +6.525604e+08 +6.631647e+08
                         harp2         0.00            -         0.00 -7.432517e+07 -7.432517e+07 -7.389960e+07
              rococoC11-011100         0.00            -         0.00 +9.024205e+03 +9.024205e+03 +2.290600e+04
                         p0033         0.00            -         0.00 +2.838547e+03 +2.838547e+03 +3.089000e+03
                    nsrand-ipx         0.00            -         0.00 +4.966789e+04 +4.966789e+04 +5.152000e+04
                        mod008         0.00            -         0.00 +2.909311e+02 +2.909311e+02 +3.070000e+02
                    atlanta-ip         0.00            -         0.00 +8.128974e+01 +8.128974e+01 +9.500955e+01
Wolter Geomean gap closed (20 problems):  1.0

After changes

=======================================================================================================================
                         model   gap closed knapsack cuts knapsack time          zlp           db         zmip
                         p0033         0.00            1         0.00 +2.838547e+03 +2.838547e+03 +3.089000e+03
                      roll3000         0.00           11         0.00 +1.109728e+04 +1.109728e+04 +1.289900e+04
                        sp97ar         0.21           22         0.12 +6.525604e+08 +6.525824e+08 +6.631647e+08
                       cap6000         2.70           10         0.68 -2.451537e+06 -2.451533e+06 -2.451377e+06
                        mod008         3.48           10         0.00 +2.909311e+02 +2.914898e+02 +3.070000e+02
                    nsrand-ipx         4.39          108         0.10 +4.966789e+04 +4.974923e+04 +5.152000e+04
                    atlanta-ip         6.07           52         0.10 +8.128974e+01 +8.212313e+01 +9.500955e+01
                           mkc        13.86           51         0.02 -6.118500e+02 -6.051090e+02 -5.632120e+02
                        mod010        18.32            2         0.00 +6.532083e+03 +6.535000e+03 +6.548000e+03
                          lseu        23.80            1         0.00 +9.479572e+02 +9.889100e+02 +1.120000e+03
                         harp2        25.02          105         0.00 -7.432517e+07 -7.421871e+07 -7.389960e+07
              rococoC11-010100        31.57         1057         0.08 +8.773207e+03 +1.408153e+04 +2.558600e+04
              rococoC11-011100        36.94          537         0.06 +9.024205e+03 +1.415180e+04 +2.290600e+04
                         mitre        38.00           30         0.00 +1.151350e+05 +1.151426e+05 +1.151550e+05
                         p2756        85.46          346         0.01 +2.701144e+03 +3.062529e+03 +3.124000e+03
                         fiber        86.95           90         0.00 +1.981074e+05 +3.788121e+05 +4.059352e+05
                         p0548        88.50          123         0.00 +4.983656e+03 +8.264613e+03 +8.691000e+03
                         p0282        93.32           64         0.00 +1.800003e+05 +2.531751e+05 +2.584110e+05
                           gen        99.04           14         0.00 +1.122338e+05 +1.123126e+05 +1.123134e+05
Wolter Geomean gap closed (19 problems):  14.399854103217512
Geomean nonzero gap closed (19 problems):  5.292294960692844
Mean gap closed (19 problems):  34.6136937173968
Mean nonzero gap closed (19 problems):  34.61265998701388

On a 5 minute MIPLIB run on GH200:

Geomean MIP gap A / B (B = baseline): 0.99
Geomean Time to Optimal A/B: 1.00
A optimal 65
B optimal 64
A problems with feasible solutions 225
B problems with feasible solutions 222

A wins
                         air05     2.50     2.79
                  rmatr100-p10     3.20     3.62
                    binkar10_1    61.17    94.57
                       cbs-cta    25.64    31.61
                drayage-100-23     4.20     4.72
           neos-5188808-nattai   114.30   162.75
               mik-250-20-75-4   109.28   229.36
                    neos859080     1.38     1.72
                       mzzv42z    96.85   300.00
                      seymour1    12.72    18.47
                  neos-1582420     1.48     1.70
                     ns1116954    88.93   128.69
                     nu25-pr12     1.59     9.50
           nursesched-sprint02    15.00    56.29
                     ns1952667   108.45   300.00
                         qap10    79.90    99.32
                        swath1     0.88     1.31
                       trento1   165.90   286.07
A wins  18

A losses
           neos-4738912-atrato    39.43    18.21
                           irp    27.62    12.68
            istanbul-no-cutoff    29.41    23.73
                       eil33-2    21.12     8.24
                         neos5    42.42    34.95
                 markshare_4_0    12.10     9.89
                   neos-662469   249.75   156.25
                   neos-860300     4.12     3.50
                         neos8     1.13     1.01
                     gen-ip002    52.30    44.87
                     gen-ip054   172.18    93.65
   gfd-schedulen180f7d50m30k18   216.00    70.70
                          n5-3     9.66     7.88
                  neos-1171448     3.94     3.06
                  neos-1445765     4.61     2.53
             neos-3004026-krka     1.88     1.46
                   neos-960392    65.83    57.16
                     ns1830653   136.64    95.89
                          nw04    47.56    42.15
                   piperout-08    14.62     8.80
                   piperout-27    23.60     6.29
                           pk1     2.50     2.22
                  supportcase7    23.08    18.48
                 supportcase26    43.79    23.77
A losses 24

@chris-maes chris-maes requested a review from a team as a code owner March 17, 2026 23:35
@chris-maes chris-maes requested review from kaatish and nguidotti March 17, 2026 23:35
@copy-pr-bot

copy-pr-bot bot commented Mar 17, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@chris-maes chris-maes self-assigned this Mar 17, 2026
@chris-maes chris-maes added improvement Improves an existing functionality non-breaking Introduces a non-breaking change labels Mar 17, 2026
@chris-maes chris-maes added this to the 26.04 milestone Mar 17, 2026
@chris-maes
Contributor Author

/ok to test daad714

@coderabbitai

coderabbitai bot commented Mar 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Thread probing-implied-bound data through branch-and-bound and cut generation; added implied-bound cut generation and duplicate-cut detection; refactored rational/GCD helpers; adjusted LP concurrent-halt and MIP mode handling; updated B&B gap/termination control and propagated probing structures into solver entry points and heuristics.

Changes

Cohort / File(s) Summary
Branch & Bound core
cpp/src/branch_and_bound/branch_and_bound.hpp, cpp/src/branch_and_bound/branch_and_bound.cpp, cpp/src/dual_simplex/solve.cpp
Constructor now accepts/stores a probing_implied_bound_t reference; added node_concurrent_halt_; repair_solution became non-const; threaded node_concurrent_halt_ into LP settings and set inside_mip to 2; B&B plunge/dive/scheduler/single-thread loops reworked for gap checks, concurrent-halt signaling, and termination flow.
Cut infrastructure & knapsack
cpp/src/cuts/cuts.hpp, cpp/src/cuts/cuts.cpp
Added IMPLIED_BOUND cut type and probing_implied_bound_t struct; moved rationalization/GCD helpers to free functions; implemented cut_pool_t::check_for_duplicate_cuts() and integrated duplicate removal; refactored knapsack generation (renamed entry, time checks), added minimal-cover/partition/lifting helpers, exact DP knapsack solver, and generate_implied_bound_cuts(...).
Settings & constants
cpp/include/cuopt/linear_programming/constants.h, cpp/include/cuopt/linear_programming/mip/solver_settings.hpp, cpp/src/dual_simplex/simplex_solver_settings.hpp, cpp/src/math_optimization/solver_settings.cu
Added CUOPT_MIP_IMPLIED_BOUND_CUTS constant; introduced implied_bound_cuts setting in MIP/simplex settings; registered parameter in solver settings initialization.
LP concurrent / phase2
cpp/src/branch_and_bound/branch_and_bound.cpp, cpp/src/dual_simplex/phase2.cpp
Set per-node lp_settings.concurrent_halt = &node_concurrent_halt_ for repair/node solves and standardized inside_mip usage; adjusted phase2 concurrent-halt guard to check settings.inside_mip == 1.
Solver wiring & heuristics
cpp/src/mip_heuristics/solver.cu, cpp/src/mip_heuristics/diversity/lns/rins.cu, cpp/src/mip_heuristics/diversity/recombiners/sub_mip.cuh
Added extract_probing_implied_bounds() to build CPU-side probing_implied_bound_t; populate and pass probing object into branch_and_bound_t at solver and heuristic call sites; propagate implied_bound_cuts into B&B settings.
Presolve / presolve-adjacent
cpp/src/mip_heuristics/presolve/conflict_graph/clique_table.cu, cpp/src/mip_heuristics/presolve/third_party_presolve.cpp
Adjusted ranged-row counter progression when extracting knapsack constraints from certain equality rows; expanded Papilo presolve logging to include integer-variable count.
Tests
cpp/tests/mip/cuts_test.cu
Added TEST(cuts, test_duplicate_cuts_detection) to validate duplicate-cut detection behavior in cut_pool_t.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage — ⚠️ Warning. Docstring coverage is 3.45%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
Title check — ✅ Passed. The title 'Add Implied bound cuts. Find more knapsack cuts. Lift knapsack cuts' directly describes the main changes in the changeset: adding implied bound cuts, improving knapsack cut detection, and implementing cut lifting.
Description check — ✅ Passed. The PR description clearly explains multiple bug fixes and the addition of implied bound cuts with specific knapsack improvements, supported by benchmark results.



Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
cpp/src/branch_and_bound/branch_and_bound.cpp (1)

2223-2227: ⚠️ Potential issue | 🟠 Major

Guard cut-pool diagnostics behind debug logging.

Line 2223 forces this block on in all builds (#if 1), so every cut pass prints unconditionally. In this hot path, that can materially increase runtime/log volume, and printf on Line 2226 bypasses logger controls.

🔧 Suggested fix
-#if 1
+#ifdef CUOPT_LOG_DEBUG
       cut_pool.print_cutpool_types();
       print_cut_types("In LP      ", cut_types, settings_);
-      printf("Cut pool size: %d\n", cut_pool.pool_size());
+      settings_.log.debug("Cut pool size: %d\n", cut_pool.pool_size());
 #endif
cpp/src/cuts/cuts.cpp (1)

725-744: ⚠️ Potential issue | 🟠 Major

Remove unconditional knapsack logging from the hot path.

Line 725 (if (1)) and Line 741 (#if 1) force verbose logging for every qualifying row, which can significantly slow separation and flood logs on large MIPs.

♻️ Proposed fix
-          if (1) {
+          if (verbose) {
             settings.log.printf(
               "Knapsack constraint %d row len %d beta %e sum_neg %e sum_pos %e sum_pos / (row_len - 1) %e\n",
               i,
               row_len,
               beta,
               sum_neg,
               sum_pos,
               sum_pos / (row_len - 1));
           }
@@
-#if 1
-  i_t num_knapsack_constraints = knapsack_constraints_.size();
-  settings.log.printf("Number of knapsack constraints %d\n", num_knapsack_constraints);
-#endif
+  if (verbose) {
+    i_t num_knapsack_constraints = knapsack_constraints_.size();
+    settings.log.printf("Number of knapsack constraints %d\n", num_knapsack_constraints);
+  }

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: b3fc359d-9dfd-45be-9b16-50160a958299

📥 Commits

Reviewing files that changed from the base of the PR and between 591ac5f and daad714.

📒 Files selected for processing (3)
  • cpp/src/branch_and_bound/branch_and_bound.cpp
  • cpp/src/cuts/cuts.cpp
  • cpp/src/cuts/cuts.hpp

…nstraints. Fix variables to reduce the size of separation problem

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/cuts/cuts.cpp (1)

2058-2112: ⚠️ Potential issue | 🟠 Major

Guard the global scaling step in rational_coefficients().

Line 696 now calls this helper on every row. When a row has no integer terms, gcd() and lcm() both return 0, so scalar becomes NaN and the whole inequality gets scaled by NaN. On the other side, rows with many distinct denominators can overflow the accumulated scale before the later abs(scalar) > 1000 guard ever runs. Please short-circuit the no-integer case and cap the denominator accumulation before applying scale(). As per coding guidelines "Check numerical stability: prevent overflow/underflow, precision loss, division by zero/near-zero, and use epsilon comparisons for floating-point equality checks".

🧹 Nitpick comments (1)
cpp/src/cuts/cuts.cpp (1)

726-745: Please re-hide the knapsack discovery prints.

knapsack_generation_t is constructed unconditionally from cut_generation_t, so if (1) / #if 1 make every cut setup emit per-row diagnostics and the final count even when knapsack cuts are disabled. On large models this will add avoidable overhead in a hot path and flood logs. Prefer settings.log.debug(...) or a dedicated verbose flag.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 845-882: The loop can fix all items leaving values.empty(), but
the code still calls solve_knapsack_problem which will crash on empty input;
before calling solve_knapsack_problem check if values.empty(): if so and
seperation_rhs > 0 then set objective = 0 (the reduced knapsack optimum is zero
because any cover must come from fixed_values==1), skip calling
solve_knapsack_problem, then proceed with the existing
isNaN/restore_complemented handling; ensure you reference
solve_knapsack_problem, values, seperation_rhs, fixed_values and
restore_complemented when making the change.

In `@cpp/src/cuts/cuts.hpp`:
- Around line 169-178: The header declares two templates (rational_approximation
and rational_coefficients) but their definitions live only in cuts.cpp and are
not explicitly instantiated, causing unresolved templates for other TUs; either
move the full template definitions of rational_approximation(f_t) and
rational_coefficients<i_t,f_t> into this header (so users including cuts.hpp can
instantiate them), or remove these public declarations and keep the definitions
private in cuts.cpp and/or add explicit template instantiations in cuts.cpp for
every i_t/f_t combination you need; refer to the symbols rational_approximation,
rational_coefficients, variable_type_t and inequality_t in your change to locate
the declarations to update.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3015e873-2776-4e76-95da-cda0669f8ff9

📥 Commits

Reviewing files that changed from the base of the PR and between daad714 and c2a7331.

📒 Files selected for processing (2)
  • cpp/src/cuts/cuts.cpp
  • cpp/src/cuts/cuts.hpp


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
cpp/src/cuts/cuts.cpp (1)

2409-2435: ⚠️ Potential issue | 🟠 Major

Avoid scaling the whole inequality with a non-finite factor.

If an inequality has no integer coefficients to rationalize—or all approximated integer coefficients are 0—then gcd(...) / lcm(...) return 0 and Line 2430 computes scalar as NaN/Inf. The function then falls through and scales the entire inequality with that value.

Possible fix
   for (i_t k = 0; k < input_inequality.size(); k++) {
     const i_t j = rational_inequality.index(k);
     const f_t x = rational_inequality.coeff(k);
     if (var_types[j] == variable_type_t::INTEGER) {
       int64_t numerator, denominator;
       if (!rational_approximation(x, static_cast<int64_t>(1000), numerator, denominator)) {
         return false;
       }
       numerators.push_back(numerator);
       denominators.push_back(denominator);
       indices.push_back(k);
       rational_inequality.vector.x[k] = static_cast<f_t>(numerator) / static_cast<f_t>(denominator);
     }
   }
+
+  if (numerators.empty()) { return true; }
 
   int64_t gcd_numerators   = gcd(numerators);
   int64_t lcm_denominators = lcm(denominators);
+  if (gcd_numerators == 0 || lcm_denominators == 0) { return true; }
 
   f_t scalar = static_cast<f_t>(lcm_denominators) / static_cast<f_t>(gcd_numerators);
-  if (scalar < 0) { return false; }
+  if (!std::isfinite(scalar) || scalar <= 0) { return false; }
   if (std::abs(scalar) > 1000) { return false; }

As per coding guidelines "Check numerical stability: prevent overflow/underflow, precision loss, division by zero/near-zero, and use epsilon comparisons for floating-point equality checks".

cpp/src/branch_and_bound/branch_and_bound.cpp (1)

2217-2221: ⚠️ Potential issue | 🟠 Major

Avoid unconditional cut-pool printing in the cut loop.

Line 2217 forces verbose cut-pool output in every build (#if 1), which can materially slow large runs and pollute stdout. Keep this behind a debug guard and route output through the logger.

💡 Proposed fix
-#if 1
+#ifdef PRINT_CUT_POOL_TYPES
       cut_pool.print_cutpool_types();
       print_cut_types("In LP      ", cut_types, settings_);
-      printf("Cut pool size: %d\n", cut_pool.pool_size());
+      settings_.log.printf("Cut pool size: %d\n", cut_pool.pool_size());
 #endif

As per coding guidelines: "Assess algorithmic complexity for large-scale problems (millions of variables/constraints); ensure O(n log n) or better complexity, not O(n²) or worse".

♻️ Duplicate comments (1)
cpp/src/cuts/cuts.cpp (1)

872-882: ⚠️ Potential issue | 🔴 Critical

Handle the all-fixed separation case before calling solve_knapsack_problem.

After the fixing pass, values can be empty while seperation_rhs is still positive. Line 881 still calls solve_knapsack_problem, and that routine immediately hits std::max_element(values.begin(), values.end()) on an empty range. Treat the reduced knapsack optimum as 0 and continue with the cover induced by fixed_values.

Possible fix
   if (verbose) { settings.log.printf("Calling solve_knapsack_problem\n"); }
-  f_t objective = solve_knapsack_problem(values, weights, seperation_rhs, solution);
+  f_t objective = 0.0;
+  if (!values.empty()) {
+    objective = solve_knapsack_problem(values, weights, seperation_rhs, solution);
+  } else {
+    solution.clear();
+  }
   if (std::isnan(objective)) { restore_complemented(complemented_variables); return -1; }

As per coding guidelines "Validate algorithm correctness in optimization logic: simplex pivots, branch-and-bound decisions, routing heuristics, and constraint/objective handling must produce correct results".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` around lines 872 - 882, The code currently calls
solve_knapsack_problem even when values is empty (after fixing) which causes a
crash; before calling solve_knapsack_problem check if values.empty() and if so
set objective = 0.0 and ensure solution is either left as zeros
(solution.resize(values.size()) already does this) and do NOT call
solve_knapsack_problem; then proceed to build the cover using fixed_values and
complemented_variables as normal (still call
restore_complemented(complemented_variables) on the existing early-return paths
but not here). Modify the block around seperation_rhs/solve_knapsack_problem to
handle values.empty() specially using the function/variable names values,
seperation_rhs, solution, solve_knapsack_problem, fixed_values,
complemented_variables, restore_complemented.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 946-951: The RHS shift when undoing complementation is wrong:
after calling lifted_cut.negate() the current coefficient at
lifted_cut.vector.x[k] equals -alpha_j, so you must shift the RHS by the
original coefficient alpha_j (rhs -= alpha_j). Fix the loop that handles
complemented variables (where you now do lifted_cut.vector.x[k] *= -1.0;
lifted_cut.rhs += 1.0;) to compute the original coefficient (e.g., old_coeff =
-lifted_cut.vector.x[k]) and apply lifted_cut.rhs -= old_coeff (or equivalently
lifted_cut.rhs += lifted_cut.vector.x[k] if using the post-negation value),
rather than adding 1.0; update the code around lifted_cut.vector.x[k] and
lifted_cut.rhs accordingly.

---

Outside diff comments:

In `@cpp/src/cuts/cuts.cpp`:
- Around line 2409-2435: The current code computes scalar =
lcm(denominators)/gcd(numerators) without guarding zero/empty cases; if
numerators or denominators are empty or gcd_numerators==0 or lcm_denominators==0
the scalar becomes non-finite and you end up scaling rational_inequality with
NaN/Inf. Fix by checking after collecting numerators/denominators: if
numerators.empty() or denominators.empty() or gcd_numerators == 0 or
lcm_denominators == 0 then return false (or skip scaling), and also verify
std::isfinite(scalar) before calling rational_inequality.scale(scalar); keep the
existing magnitude guard (std::abs(scalar) > 1000) as well.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 42a18b50-7e3c-4db1-8653-113668d14d65

📥 Commits

Reviewing files that changed from the base of the PR and between c2a7331 and b07feed.

📒 Files selected for processing (3)
  • cpp/src/branch_and_bound/branch_and_bound.cpp
  • cpp/src/cuts/cuts.cpp
  • cpp/src/cuts/cuts.hpp

@chris-maes chris-maes changed the title Complement variables in order to find knapsack cuts Complement variables, rationalize coefficients and scale in order to find knapsack cuts. Lift knapsack cuts Mar 19, 2026
@rgsl888prabhu rgsl888prabhu changed the base branch from main to release/26.04 March 19, 2026 16:21

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/cuts/cuts.cpp (1)

2430-2436: ⚠️ Potential issue | 🟡 Minor

Potential division by zero if no integer variables exist.

If the inequality contains no integer variables, numerators will be empty, gcd(numerators) returns 0, and line 2433 will perform division by zero. While knapsack constraints should always have integer variables, adding a guard would make this function more robust.

Proposed fix
   int64_t gcd_numerators   = gcd(numerators);
   int64_t lcm_denominators = lcm(denominators);

+  if (gcd_numerators == 0 || lcm_denominators == 0) { return false; }
+
   f_t scalar = static_cast<f_t>(lcm_denominators) / static_cast<f_t>(gcd_numerators);
   if (scalar < 0) { return false; }
   if (std::abs(scalar) > 1000) { return false; }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` around lines 2430 - 2436, The code computes scalar
using gcd_numerators = gcd(numerators) and then divides by it, which can be zero
when numerators is empty; guard this by checking numerators.empty() or
gcd_numerators == 0 (from the variables numerators and gcd_numerators) before
computing scalar (the static_cast division) and return false (or handle
appropriately) to avoid division by zero in the function containing this logic
in cuts.cpp.
🧹 Nitpick comments (6)
cpp/src/cuts/cuts.cpp (5)

1018-1019: Typo in method name: minimial_cover_and_partition should be minimal_cover_and_partition.

Consistent with the header file, this method name has a spelling error.

Proposed fix
 template <typename i_t, typename f_t>
-void knapsack_generation_t<i_t, f_t>::minimial_cover_and_partition(
+void knapsack_generation_t<i_t, f_t>::minimal_cover_and_partition(
   const inequality_t<i_t, f_t>& knapsack_inequality,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` around lines 1018 - 1019, The implementation method
name knapsack_generation_t<i_t, f_t>::minimial_cover_and_partition is
misspelled; rename the definition to knapsack_generation_t<i_t,
f_t>::minimal_cover_and_partition to match the header and any declarations.
Update any internal references, declarations, or calls (including template
instantiations or overrides) that use the old name to the corrected
minimal_cover_and_partition so signatures and linkage remain consistent.

1480-1481: Debug logging left unconditionally enabled.

The if (1 || ...) pattern always evaluates to true, causing knapsack cut generation time to always be printed. This appears to be debug code that should be reverted.

Proposed fix
     f_t cut_generation_time = toc(cut_start_time);
-    if (1 || cut_generation_time > 1.0) {
-      settings.log.printf("Knapsack cut generation time %.2f seconds\n", cut_generation_time);
+    if (cut_generation_time > 1.0) {
+      settings.log.debug("Knapsack cut generation time %.2f seconds\n", cut_generation_time);
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` around lines 1480 - 1481, The logging for knapsack cut
generation is left always-on due to the hardcoded "1 ||" in the condition;
update the if-statement (around the block that calls settings.log.printf and
uses cut_generation_time) to remove the "1 ||" debug override so it only logs
when the intended condition (e.g., cut_generation_time > 1.0) or an appropriate
verbose/debug flag is true; ensure you modify the if condition that surrounds
settings.log.printf rather than changing the printf call itself.

760-763: Debug logging left unconditionally enabled.

The #if 1 directive causes knapsack constraint information to always be printed. This appears to be debug code that should be conditional or removed before merging.

Proposed fix
-#if 1
+#ifdef PRINT_KNAPSACK_INFO
   i_t num_knapsack_constraints = knapsack_constraints_.size();
   settings.log.printf("Number of knapsack constraints %d\n", num_knapsack_constraints);
 #endif
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` around lines 760 - 763, The debug log printing of
knapsack constraints is unconditionally enabled by the `#if 1` block; remove or
gate it behind a proper runtime or compile-time debug flag. Replace the `#if 1`
wrapper around the `i_t num_knapsack_constraints = knapsack_constraints_.size();
settings.log.printf("Number of knapsack constraints %d\n",
num_knapsack_constraints);` lines with either removal of the printf entirely or
a conditional that checks a debug/logging setting (e.g., use existing
verbosity/log level or a DEBUG_COMPILE flag) so `settings.log.printf` is only
called when debugging is enabled; ensure references to `knapsack_constraints_`,
`num_knapsack_constraints`, and `settings.log.printf` are updated consistently.

1040-1051: Consider clearing workspace_ after use to prevent potential stale data issues.

The workspace_ vector is populated at line 1042 but not cleared after the function completes. While the current logic correctly only reads values that were set in the same call, adding cleanup would be a defensive measure against future modifications that might inadvertently read stale values.

Proposed fix - add cleanup at the end of minimial_cover_and_partition
   for (i_t j : cover_indicies) {
     if (xstar[j] > 1.0 - x_tol) {
       c2_partition.push_back(j);
     } else {
       c1_partition.push_back(j);
     }
   }
+
+  // Clear workspace_ to prevent stale data in future calls
+  for (i_t k = 0; k < knapsack_inequality.size(); k++) {
+    const i_t j = knapsack_inequality.index(k);
+    workspace_[j] = 0.0;
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` around lines 1040 - 1051, The workspace_ vector is
populated from knapsack_inequality inside minimal_cover_and_partition but never
cleared, risking stale values later; after you finish using workspace_ (after
building score, cover_indicies, cover_coefficients), reset the entries you
wrote—iterate over knapsack_inequality (or over the same indices collected) and
set workspace_[j] back to zero (or otherwise clear them) so workspace_ does not
retain stale data; reference the workspace_ write site (workspace_[j] =
knapsack_inequality.coeff(k)) and perform the cleanup there at function end.

1259-1273: Debug logging left unconditionally enabled in lifting code.

Lines 1259 and 1273 unconditionally log information about lifted variables. This should be conditional on a verbose flag or debug mode to avoid excessive output in production.

Proposed fix
     if (alpha_k > 0.0) {
-      settings_.log.printf("Lifted variable %d with alpha %g\n", k, alpha_k);
+      if (verbose) { settings_.log.printf("Lifted variable %d with alpha %g\n", k, alpha_k); }
       F.push_back(k);
       alpha.push_back(alpha_k);
       values.push_back(alpha_k);
       weights.push_back(a_k);

       lifted_cut.vector.i.push_back(k);
       lifted_cut.vector.x.push_back(alpha_k);
     }

     // Remove the variable from the remaining variables and coefficients
     remaining_variables.pop_back();
     remaining_coefficients.pop_back();
   }
-  settings_.log.printf("Lifted %ld variables\n", F.size());
+  if (verbose) { settings_.log.printf("Lifted %ld variables\n", F.size()); }

Note: You'll need to add const bool verbose = false; at the start of lift_knapsack_cut similar to other methods.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` around lines 1259 - 1273, The unconditional debug
prints in lift_knapsack_cut (the settings_.log.printf calls that report "Lifted
variable ..." and "Lifted %ld variables") should be guarded by a verbose flag;
add a local const bool verbose = false at the start of lift_knapsack_cut
(matching other methods) and wrap the two settings_.log.printf calls in if
(verbose) { ... } so logging only occurs when verbose is enabled; leave the
logic that updates F, alpha, values, weights and lifted_cut.vector unchanged.
cpp/src/cuts/cuts.hpp (1)

313-318: Typo in method name: minimial_cover_and_partition should be minimal_cover_and_partition.

The method name contains a spelling error that should be corrected for consistency and readability.

Proposed fix
-  void minimial_cover_and_partition(const inequality_t<i_t, f_t>& knapsack_inequality,
+  void minimal_cover_and_partition(const inequality_t<i_t, f_t>& knapsack_inequality,
                                     const inequality_t<i_t, f_t>& negated_base_cut,
                                     const std::vector<f_t>& xstar,
                                     inequality_t<i_t, f_t>& minimal_cover_cut,
                                     std::vector<i_t>& c1_partition,
                                     std::vector<i_t>& c2_partition);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.hpp` around lines 313 - 318, The declaration of
minimial_cover_and_partition has a typo in its name; rename the function to
minimal_cover_and_partition consistently across the codebase (declaration in
cuts.hpp and any definitions/usages) to fix the misspelling, update all
references (calls, forward declarations, and the corresponding implementation
function name) so signature and linkage remain identical (inequality_t<i_t,
f_t>& knapsack_inequality, inequality_t<i_t, f_t>& negated_base_cut, const
std::vector<f_t>& xstar, inequality_t<i_t, f_t>& minimal_cover_cut,
std::vector<i_t>& c1_partition, std::vector<i_t>& c2_partition).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/cuts/cuts.hpp`:
- Around line 340-343: The reset coverage misses clearing workspace_ and
complemented_xstar_ between generate_knapsack_cut() calls on the reused
knapsack_generation_t instance; update the reset pattern so after each
generate_knapsack_cut() invocation you explicitly clear or reinitialize
workspace_ and complemented_xstar_ (analogous to how is_marked_ is reset in
lift_knapsack_cut() and how restore_complemented() handles is_complemented_),
ensuring no helper function reads stale values from workspace_ or
complemented_xstar_ between calls.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: c4f10f6a-923f-462e-9038-b017d90ece30

📥 Commits

Reviewing files that changed from the base of the PR and between b07feed and 3709bf9.

📒 Files selected for processing (2)
  • cpp/src/cuts/cuts.cpp
  • cpp/src/cuts/cuts.hpp


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
cpp/src/cuts/cuts.cpp (2)

1298-1305: ⚠️ Potential issue | 🟡 Minor

Potential division by zero in greedy knapsack ratio computation.

At line 1301, ratios[i] = values[i] / weights[i] performs division without checking if weights[i] is zero. While the calling context typically ensures non-zero weights, a defensive check would prevent crashes from unexpected inputs.

Proposed fix
   std::vector<f_t> ratios;
   ratios.resize(n);
   for (i_t i = 0; i < n; i++) {
+    if (weights[i] == 0.0) {
+      ratios[i] = (values[i] >= 0.0) ? std::numeric_limits<f_t>::infinity() : -std::numeric_limits<f_t>::infinity();
+    } else {
       ratios[i] = values[i] / weights[i];
+    }
   }

As per coding guidelines, "prevent overflow/underflow, precision loss, division by zero/near-zero".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` around lines 1298 - 1305, The computation of ratios
(ratios[i] = values[i] / weights[i]) can divide by zero; add a defensive check
before computing each ratio in the loop that inspects weights[i] (or near-zero)
and handles it deterministically — e.g. if weights[i] == 0 then set ratios[i] to
std::numeric_limits<f_t>::infinity() for positive values[i], -infinity() for
negative values[i], and 0 for values[i]==0 (or otherwise skip/flag the item);
keep the rest of the logic (perm and the std::sort lambda comparing ratios)
unchanged so sorting remains well-defined.

2456-2464: ⚠️ Potential issue | 🟠 Major

Integer overflow risk in lcm computation causes undefined behavior.

The lcm function uses std::lcm, which has undefined behavior when the result overflows int64_t. The denominators are constrained to ≤1000 by rational_approximation, but the LCM of multiple coprime denominators grows exponentially. For example, the LCM of the first 16 primes (~3.26×10¹⁹) exceeds int64_t max (~9.22×10¹⁸). With realistic constraints containing 15+ integer variables, each with different denominators, overflow is possible.

The check at line 2438 (scalar > 1000) guards the final result but occurs after the potentially overflowing LCM computation, leaving undefined behavior uncaught.

Proposed safer implementation
 int64_t lcm(const std::vector<int64_t>& integers)
 {
   if (integers.empty()) { return 0; }
-  int64_t result =
-    std::reduce(std::next(integers.begin()), integers.end(), integers[0], [](int64_t a, int64_t b) {
-      return std::lcm(a, b);
-    });
+  int64_t result = integers[0];
+  constexpr int64_t MAX_LCM = 1'000'000'000LL; // Reasonable bound
+  for (size_t i = 1; i < integers.size(); ++i) {
+    int64_t g = std::gcd(result, integers[i]);
+    int64_t factor = integers[i] / g;
+    if (result > MAX_LCM / factor) {
+      return 0; // Signal overflow
+    }
+    result = result * factor;
+  }
   return result;
 }

Per coding guidelines: "Check numerical stability: prevent overflow/underflow."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` around lines 2456 - 2464, The current lcm function
uses std::lcm which can overflow int64_t and cause UB; replace the
reduce/std::lcm implementation in lcm(...) with an overflow-safe accumulator:
for each denominator d compute g = std::gcd(acc, d), compute acc_div = acc / g
and before multiplying check if acc_div > std::numeric_limits<int64_t>::max() /
d (or if you want to enforce the existing limit, check if acc_div * d > 1000)
and if so return an early sentinel (e.g., a value >1000 or propagate an error)
to avoid UB; ensure this new behavior aligns with the later scalar >1000 check
so overflow is detected before any undefined multiplication and replace
references to std::lcm in the function with this gcd+safe-multiply logic.
🧹 Nitpick comments (4)
cpp/src/cuts/cuts.cpp (4)

1019-1019: Typos in function and variable names.

Two typos in this function:

  1. Function name minimial_cover_and_partition should be minimal_cover_and_partition (line 1019)
  2. Variable name permuation should be permutation (line 1063)

These don't affect functionality but reduce code readability and searchability.

Also applies to: 1063-1063

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` at line 1019, Rename the misspelled function and
variable to improve readability: change the function name
knapsack_generation_t::minimial_cover_and_partition to
minimal_cover_and_partition (update its declaration, definition, and any call
sites) and rename the variable permuation to permutation wherever it appears
(e.g., inside the function body around the former line 1063 and any references).
Ensure you update any related comments and tests to use the corrected
identifiers so all references remain consistent.

1263-1263: Unconditional debug logging in hot path.

Line 1263 uses settings_.log.printf unconditionally for every lifted variable. This will generate excessive log output during normal operation, potentially impacting performance.

Proposed fix
     if (alpha_k > 0.0) {
-      settings_.log.printf("Lifted variable %d with alpha %g\n", k, alpha_k);
+      if (verbose) { settings_.log.printf("Lifted variable %d with alpha %g\n", k, alpha_k); }
       F.push_back(k);

And define const bool verbose = false; at the start of the function (similar to other functions in this file).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` at line 1263, The unconditional log call
settings_.log.printf("Lifted variable %d with alpha %g\n", k, alpha_k) is in a
hot path and should be gated by a verbosity flag; declare a const bool verbose =
false; at the start of the function (matching other functions in this file) and
wrap the printf in an if (verbose) { ... } block so the message for lifted
variable k and alpha_k only prints when verbose is enabled.

1276-1276: Unconditional debug logging should be conditional.

Line 1276 unconditionally logs the number of lifted variables. This should be conditional on a verbose flag or log level to avoid noise in production.

-  settings_.log.printf("Lifted %ld variables\n", F.size());
+  if (verbose) { settings_.log.printf("Lifted %ld variables\n", F.size()); }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` at line 1276, The unconditional call to
settings_.log.printf("Lifted %ld variables\n", F.size()) should be guarded by
the appropriate verbosity/log level check: wrap this printf with the existing
verbose flag or log-level API (e.g., settings_.verbose(),
settings_.log.isVerbose(), or settings_.log.level >= DEBUG) so it only executes
when detailed logging is enabled; keep the message and F.size() as-is but emit
it conditionally near the same location inside the function containing
settings_.log.printf.

1483-1485: Debug logging is unconditionally enabled.

The condition if (1 || cut_generation_time > 1.0) is always true, forcing knapsack cut generation time to be printed on every invocation. This appears to be intentional debug code that should be either removed or controlled via a setting/flag before production release.

Similarly, `#if 1` at line 760 forces knapsack constraint count to always print.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` around lines 1483 - 1485, The unconditional debug
prints should be removed or gated behind a logging/verbosity flag: replace the
always-true condition around settings.log.printf that references
cut_generation_time with a check against the existing runtime logging/verbosity
setting (e.g., only print if settings.verbosity or
settings.log.isDebug/isVerbose is enabled) and similarly change the `#if 1`
block that forces printing of knapsack constraint count to either be removed or
wrapped by the same configurable condition or a dedicated compile-time debug
macro; locate the prints by the symbols settings.log.printf and
cut_generation_time and the `#if 1` knapsack constraint count block and gate or
remove them so production runs do not always emit these messages.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 1040-1051: The loop computing score uses workspace_[j] without
guarding against zero and workspace_ isn't cleared between calls; inside
minimial_cover_and_partition ensure workspace_ is initialized/cleared for all
relevant indices before populating it from knapsack_inequality (or track and
zero only previously touched entries), and when computing score for each j from
base_cut check workspace_[j] for zero or near-zero (e.g., skip the index,
log/continue, or handle as infinite score) instead of dividing by zero; update
the code paths that push into score, cover_indicies, and cover_coefficients to
only use non-zero workspace_ entries (or handle them deterministically) and
reset/zero workspace_ after use to avoid stale values on subsequent calls.

---

Outside diff comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 1298-1305: The computation of ratios (ratios[i] = values[i] /
weights[i]) can divide by zero; add a defensive check before computing each
ratio in the loop that inspects weights[i] (or near-zero) and handles it
deterministically — e.g. if weights[i] == 0 then set ratios[i] to
std::numeric_limits<f_t>::infinity() for positive values[i], -infinity() for
negative values[i], and 0 for values[i]==0 (or otherwise skip/flag the item);
keep the rest of the logic (perm and the std::sort lambda comparing ratios)
unchanged so sorting remains well-defined.
- Around line 2456-2464: The current lcm function uses std::lcm which can
overflow int64_t and cause UB; replace the reduce/std::lcm implementation in
lcm(...) with an overflow-safe accumulator: for each denominator d compute g =
std::gcd(acc, d), compute acc_div = acc / g and before multiplying check if
acc_div > std::numeric_limits<int64_t>::max() / d (or if you want to enforce the
existing limit, check if acc_div * d > 1000) and if so return an early sentinel
(e.g., a value >1000 or propagate an error) to avoid UB; ensure this new
behavior aligns with the later scalar >1000 check so overflow is detected before
any undefined multiplication and replace references to std::lcm in the function
with this gcd+safe-multiply logic.

---

Nitpick comments:
In `@cpp/src/cuts/cuts.cpp`:
- Line 1019: Rename the misspelled function and variable to improve readability:
change the function name knapsack_generation_t::minimial_cover_and_partition to
minimal_cover_and_partition (update its declaration, definition, and any call
sites) and rename the variable permuation to permutation wherever it appears
(e.g., inside the function body around the former line 1063 and any references).
Ensure you update any related comments and tests to use the corrected
identifiers so all references remain consistent.
- Line 1263: The unconditional log call settings_.log.printf("Lifted variable %d
with alpha %g\n", k, alpha_k) is in a hot path and should be gated by a
verbosity flag; declare a const bool verbose = false; at the start of the
function (matching other functions in this file) and wrap the printf in an if
(verbose) { ... } block so the message for lifted variable k and alpha_k only
prints when verbose is enabled.
- Line 1276: The unconditional call to settings_.log.printf("Lifted %ld
variables\n", F.size()) should be guarded by the appropriate verbosity/log level
check: wrap this printf with the existing verbose flag or log-level API (e.g.,
settings_.verbose(), settings_.log.isVerbose(), or settings_.log.level >= DEBUG)
so it only executes when detailed logging is enabled; keep the message and
F.size() as-is but emit it conditionally near the same location inside the
function containing settings_.log.printf.
- Around line 1483-1485: The unconditional debug prints should be removed or
gated behind a logging/verbosity flag. Replace the always-true condition
around the settings.log.printf that references cut_generation_time with a
check against the existing runtime verbosity setting (e.g. print only if
settings.verbosity or settings.log.isDebug/isVerbose is enabled). Similarly,
either remove the `#if 1` block that forces printing of the knapsack
constraint count or wrap it in the same configurable condition or a
dedicated compile-time debug macro. Locate the prints via the symbols
settings.log.printf and cut_generation_time and the `#if 1` knapsack
constraint count block, and gate or remove them so production runs do not
always emit these messages.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: c6ad4053-8345-4c10-a50b-667e85f8d3e7

📥 Commits

Reviewing files that changed from the base of the PR and between 3709bf9 and 9c54e01.

📒 Files selected for processing (1)
  • cpp/src/cuts/cuts.cpp

@chris-maes
Contributor Author

@coderabbitai resume

@coderabbitai

coderabbitai bot commented Mar 31, 2026

✅ Actions performed

Reviews resumed.

@chris-maes
Contributor Author

/ok to test

@copy-pr-bot

copy-pr-bot bot commented Mar 31, 2026

/ok to test

@chris-maes, there was an error processing your request: E1

See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/1/


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/branch_and_bound/branch_and_bound.cpp (1)

1603-1608: ⚠️ Potential issue | 🟡 Minor

Missing CONCURRENT_LIMIT handling in dive_with.

Unlike plunge_with (lines 1474-1476), dive_with doesn't handle dual::status_t::CONCURRENT_LIMIT. When the gap tolerance is met and node_concurrent_halt_ is signaled, diving workers should also respond by breaking out of the loop gracefully.

🔧 Proposed fix to handle CONCURRENT_LIMIT
     if (lp_status == dual::status_t::TIME_LIMIT) {
       solver_status_ = mip_status_t::TIME_LIMIT;
       break;
+    } else if (lp_status == dual::status_t::CONCURRENT_LIMIT) {
+      break;
     } else if (lp_status == dual::status_t::ITERATION_LIMIT) {
       break;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 1603 - 1608,
dive_with currently checks for dual::status_t::TIME_LIMIT and ITERATION_LIMIT
but omits handling dual::status_t::CONCURRENT_LIMIT; update the dive_with loop
to mirror plunge_with by treating CONCURRENT_LIMIT like TIME_LIMIT when
node_concurrent_halt_ is set: set solver_status_ = mip_status_t::TIME_LIMIT (or
the appropriate concurrent halt status if used elsewhere) and break when
lp_status == dual::status_t::CONCURRENT_LIMIT and node_concurrent_halt_ is true,
referencing dive_with, plunge_with, dual::status_t::CONCURRENT_LIMIT,
node_concurrent_halt_, and solver_status_ to locate and modify the logic.
🧹 Nitpick comments (4)
cpp/src/branch_and_bound/branch_and_bound.cpp (2)

2278-2281: Debug printing unconditionally enabled.

The #if 1 enables cut pool type printing for all builds. Based on the previous #ifdef PRINT_CUT_POOL_TYPES pattern, this appears to be debug code. Consider reverting to conditional compilation before merging to avoid unnecessary console output in production.

♻️ Revert to conditional compilation
-#if 1
+#ifdef PRINT_CUT_POOL_TYPES
       cut_pool.print_cutpool_types();
       print_cut_types("In LP      ", cut_types, settings_);
       printf("Cut pool size: %d\n", cut_pool.pool_size());
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 2278 - 2281, The
debug prints are unconditionally enabled by the `#if 1` around calls to
`cut_pool.print_cutpool_types()`, `print_cut_types("In LP      ", cut_types,
settings_)` and `printf("Cut pool size: %d\n", cut_pool.pool_size())`; revert
this to conditional compilation (e.g. `#ifdef PRINT_CUT_POOL_TYPES` / `#endif`)
or remove the block so these debug messages are only compiled when the debug
flag is set, keeping the same calls and spacing but guarded by the feature macro
instead of `#if 1`.

1450-1450: Stale upper bound used for early cutoff check.

At line 1450, the cutoff check uses the local upper_bound variable computed at line 1433-1434, which is only updated after each node is processed (line 1524). If another thread found a better solution during this node's processing, this check might not catch it. However, the actual LP cutoff in solve_node_lp (line 1339-1341) uses the atomic upper_bound_, so this is a minor inefficiency rather than a correctness issue - at worst, a node that could be pruned earlier will still be pruned by the LP solver.

♻️ Optional: Use atomic upper_bound_ for fresher cutoff
-    if (node_ptr->lower_bound > upper_bound) {
+    if (node_ptr->lower_bound > upper_bound_.load()) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/branch_and_bound.cpp` at line 1450, The early cutoff
uses the stale local variable upper_bound (checked in the if at
node_ptr->lower_bound > upper_bound) which may not reflect more recent solutions
stored in the atomic upper_bound_; update the cutoff to read the freshest value
by using the atomic upper_bound_ (or load it into a local fresh_upper =
upper_bound_.load()) before the lower_bound check in the same scope where
node_ptr->lower_bound is examined (the block around the if at line 1450), so the
comparison uses up-to-date global bests similar to how solve_node_lp consults
upper_bound_.
cpp/src/cuts/cuts.cpp (2)

1607-1607: Minor: inconsistent type comparison.

sum_value is i_t (integer), but it's compared to 0.0 (floating point). While this works, comparing to integer 0 would be more consistent.

-  if (sum_value <= 0.0 || sum_value >= max_size) {
+  if (sum_value <= 0 || sum_value >= max_size) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` at line 1607, The comparison uses sum_value (type i_t)
against a floating literal 0.0; change the literal to the integer 0 (i.e. use
`0`) or otherwise make both operands the same type so the condition `if
(sum_value <= 0 || sum_value >= max_size)` compares integers consistently;
update the expression referencing sum_value and max_size accordingly.

1412-1412: Use portable format specifier for size_t.

F.size() returns size_t, but %ld expects long. This mismatch can cause incorrect output or warnings on platforms where size_t and long differ in size (e.g., 64-bit Windows).

Suggested fix
-  settings_.log.printf("Lifted %ld variables\n", F.size());
+  settings_.log.printf("Lifted %zu variables\n", F.size());
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` at line 1412, The printf call uses "%ld" but passes
F.size() (a size_t), causing a type mismatch; update the call in
settings_.log.printf to use the proper size_t format specifier (e.g., "%zu") or
cast F.size() to an explicitly sized integer type and match the format (e.g.,
static_cast<unsigned long long>(F.size()) with "%llu"); modify the invocation of
settings_.log.printf("Lifted %ld variables\n", F.size()) accordingly so the
format specifier matches the type returned by F.size().
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 1803-1804: The printed timing message is guarded by a leftover
debug condition `if (1 || cut_generation_time > 1.0)` which always evaluates
true; remove the `1 ||` so the condition reads `if (cut_generation_time > 1.0)`
(or adjust threshold if you intended a different cutoff) so that the call to
settings.log.printf("Knapsack cut generation time %.2f seconds\n",
cut_generation_time) only runs when the generation time exceeds the intended
limit.
- Around line 626-638: In the duplicate-detection loop in cuts.cpp (the block
using variables a_rj, f_r, f_i, a_ij, sets and new_set) add a defensive check
before computing val = (a_rj / f_r) * (f_i / a_ij): ensure f_r and a_ij (and
optionally f_i) are not zero or near-zero (use a small epsilon, e.g. 1e-12, with
fabs) and skip this iteration if any are too small to avoid division-by-zero;
keep the existing sign guard (f_r > 0 && f_i > 0) but move the non-zero checks
to occur first so the division is always safe.

In `@cpp/tests/mip/cuts_test.cu`:
- Around line 972-1019: The test test_duplicate_cuts_detection currently only
calls cut_pool.check_for_duplicate_cuts() without verifying outcomes; add
assertions that validate the cut pool state after duplicate-removal (e.g.,
expected retained cut count and that the strongest representative is kept).
Modify the test to query cut_pool (use its public accessors or add temporary
getters if missing) after calling check_for_duplicate_cuts() to assert the
number of cuts equals the expected value and that specific inequalities
(constructed from inequality_t entries added via add_cut) remain or were removed
as intended; ensure assertions reference cut_pool, check_for_duplicate_cuts, and
the original cuts (cut1..cut8) so the test will fail on regressions.

---

Outside diff comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 1603-1608: dive_with currently checks for
dual::status_t::TIME_LIMIT and ITERATION_LIMIT but omits handling
dual::status_t::CONCURRENT_LIMIT; update the dive_with loop to mirror
plunge_with by treating CONCURRENT_LIMIT like TIME_LIMIT when
node_concurrent_halt_ is set: set solver_status_ = mip_status_t::TIME_LIMIT (or
the appropriate concurrent halt status if used elsewhere) and break when
lp_status == dual::status_t::CONCURRENT_LIMIT and node_concurrent_halt_ is true,
referencing dive_with, plunge_with, dual::status_t::CONCURRENT_LIMIT,
node_concurrent_halt_, and solver_status_ to locate and modify the logic.

---

Nitpick comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 2278-2281: The debug prints are unconditionally enabled by the
`#if 1` around calls to `cut_pool.print_cutpool_types()`, `print_cut_types("In
LP      ", cut_types, settings_)` and `printf("Cut pool size: %d\n",
cut_pool.pool_size())`; revert this to conditional compilation (e.g. `#ifdef
PRINT_CUT_POOL_TYPES` / `#endif`) or remove the block so these debug messages
are only compiled when the debug flag is set, keeping the same calls and spacing
but guarded by the feature macro instead of `#if 1`.
- Line 1450: The early cutoff uses the stale local variable upper_bound (checked
in the if at node_ptr->lower_bound > upper_bound) which may not reflect more
recent solutions stored in the atomic upper_bound_; update the cutoff to read
the freshest value by using the atomic upper_bound_ (or load it into a local
fresh_upper = upper_bound_.load()) before the lower_bound check in the same
scope where node_ptr->lower_bound is examined (the block around the if at line
1450), so the comparison uses up-to-date global bests similar to how
solve_node_lp consults upper_bound_.

In `@cpp/src/cuts/cuts.cpp`:
- Line 1607: The comparison uses sum_value (type i_t) against a floating literal
0.0; change the literal to the integer 0 (i.e. use `0`) or otherwise make both
operands the same type so the condition `if (sum_value <= 0 || sum_value >=
max_size)` compares integers consistently; update the expression referencing
sum_value and max_size accordingly.
- Line 1412: The printf call uses "%ld" but passes F.size() (a size_t), causing
a type mismatch; update the call in settings_.log.printf to use the proper
size_t format specifier (e.g., "%zu") or cast F.size() to an explicitly sized
integer type and match the format (e.g., static_cast<unsigned long
long>(F.size()) with "%llu"); modify the invocation of
settings_.log.printf("Lifted %ld variables\n", F.size()) accordingly so the
format specifier matches the type returned by F.size().

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: cd8ee36a-1ba6-4caa-94a8-a56099c60dac

📥 Commits

Reviewing files that changed from the base of the PR and between 3709bf9 and e407fa3.

📒 Files selected for processing (16)
  • cpp/include/cuopt/linear_programming/constants.h
  • cpp/include/cuopt/linear_programming/mip/solver_settings.hpp
  • cpp/src/branch_and_bound/branch_and_bound.cpp
  • cpp/src/branch_and_bound/branch_and_bound.hpp
  • cpp/src/cuts/cuts.cpp
  • cpp/src/cuts/cuts.hpp
  • cpp/src/dual_simplex/phase2.cpp
  • cpp/src/dual_simplex/simplex_solver_settings.hpp
  • cpp/src/dual_simplex/solve.cpp
  • cpp/src/math_optimization/solver_settings.cu
  • cpp/src/mip_heuristics/diversity/lns/rins.cu
  • cpp/src/mip_heuristics/diversity/recombiners/sub_mip.cuh
  • cpp/src/mip_heuristics/presolve/conflict_graph/clique_table.cu
  • cpp/src/mip_heuristics/presolve/third_party_presolve.cpp
  • cpp/src/mip_heuristics/solver.cu
  • cpp/tests/mip/cuts_test.cu
✅ Files skipped from review due to trivial changes (1)
  • cpp/include/cuopt/linear_programming/constants.h

Comment on lines +972 to +1019
TEST(cuts, test_duplicate_cuts_detection)
{
dual_simplex::simplex_solver_settings_t<int, double> settings;
dual_simplex::cut_pool_t<int, double> cut_pool(4, settings);
dual_simplex::inequality_t<int, double> cut1;
cut1.push_back(0, 1.0);
cut1.push_back(1, 2.0);
cut1.rhs = 1.0;
cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut1);
dual_simplex::inequality_t<int, double> cut2;
cut2.push_back(0, 2.0);
cut2.push_back(1, 4.0);
cut2.rhs = 2.0;
cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut2);
dual_simplex::inequality_t<int, double> cut3;
cut3.push_back(0, 0.1);
cut3.push_back(2, 0.2);
cut3.rhs = 1.0;
cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut3);
dual_simplex::inequality_t<int, double> cut4;
cut4.push_back(0, 0.2);
cut4.push_back(2, 0.4);
cut4.rhs = 1.0;
cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut4);
dual_simplex::inequality_t<int, double> cut5;
cut5.push_back(1, 10.0);
cut5.push_back(3, 20.0);
cut5.rhs = 0.1;
cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut5);
dual_simplex::inequality_t<int, double> cut6;
cut6.push_back(1, 20.0);
cut6.push_back(3, 40.0);
cut6.rhs = 0.2;
cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut6);
dual_simplex::inequality_t<int, double> cut7;
cut7.push_back(0, 1.0);
cut7.push_back(1, 1.0);
cut7.push_back(2, 1.0);
cut7.push_back(3, 1.0);
cut7.rhs = 1.0;
cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut7);
dual_simplex::inequality_t<int, double> cut8;
cut8.push_back(1, 3.0);
cut8.rhs = 7.0;
cut_pool.add_cut(dual_simplex::cut_type_t::MIXED_INTEGER_GOMORY, cut8);

cut_pool.check_for_duplicate_cuts();
}

⚠️ Potential issue | 🟠 Major

test_duplicate_cuts_detection lacks assertions and can’t catch regressions.

Lines 972-1019 only exercise the API; they don’t validate that duplicates/weaker proportional cuts were actually removed. Please add explicit postconditions on the resulting cut pool state (e.g., expected retained cut count and/or retained strongest representatives).

As per coding guidelines **/*test*.{cpp,cu,py}: “Write tests validating numerical correctness of optimization results (not just 'runs without error').”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/tests/mip/cuts_test.cu` around lines 972 - 1019, The test
test_duplicate_cuts_detection currently only calls
cut_pool.check_for_duplicate_cuts() without verifying outcomes; add assertions
that validate the cut pool state after duplicate-removal (e.g., expected
retained cut count and that the strongest representative is kept). Modify the
test to query cut_pool (use its public accessors or add temporary getters if
missing) after calling check_for_duplicate_cuts() to assert the number of cuts
equals the expected value and that specific inequalities (constructed from
inequality_t entries added via add_cut) remain or were removed as intended;
ensure assertions reference cut_pool, check_for_duplicate_cuts, and the original
cuts (cut1..cut8) so the test will fail on regressions.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/branch_and_bound/branch_and_bound.cpp (1)

1598-1612: ⚠️ Potential issue | 🟠 Major

Missing CONCURRENT_LIMIT handling in dive_with.

Unlike plunge_with (line 1474), dive_with does not handle dual::status_t::CONCURRENT_LIMIT. When this status is returned, the code proceeds to call update_tree where it falls through to the else branch (line 1267) and is incorrectly treated as a NUMERICAL issue.

Add handling for CONCURRENT_LIMIT to break out of the loop gracefully:

Proposed fix
     if (lp_status == dual::status_t::TIME_LIMIT) {
       solver_status_ = mip_status_t::TIME_LIMIT;
       break;
+    } else if (lp_status == dual::status_t::CONCURRENT_LIMIT) {
+      break;
     } else if (lp_status == dual::status_t::ITERATION_LIMIT) {
       break;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 1598 - 1612, The
dive_with loop is missing handling for dual::status_t::CONCURRENT_LIMIT; add a
branch like in plunge_with to detect if lp_status ==
dual::status_t::CONCURRENT_LIMIT, set solver_status_ =
mip_status_t::CONCURRENT_LIMIT (or the appropriate mip_status_t enum value used
elsewhere) and break out of the loop before calling update_tree so
CONCURRENT_LIMIT is not misclassified by update_tree; modify the block handling
lp_status (around the calls to solve_node_lp and update_tree in dive_with) to
include this new branch, referencing lp_status, solver_status_, dive_stats and
update_tree.
🧹 Nitpick comments (1)
cpp/src/cuts/cuts.cpp (1)

1197-1207: Floating-point equality in sort comparator is acceptable for tie-breaking.

The exact equality check score[a] == score[b] at line 1202 is used only as a tie-breaker to prefer larger coefficients when scores are equal. For sorting purposes, this is safe and doesn't violate strict weak ordering. An epsilon-based comparison would be more robust but isn't critical here since either ordering is valid for near-equal scores.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` around lines 1197 - 1207, The comparator in the
std::sort using exact equality on score (in the block creating permuation over
cover_indicies) can be made more robust: replace the equality branch `score[a]
== score[b]` with an epsilon-based check (e.g., fabs(score[a] - score[b]) < EPS)
and pick EPS (like 1e-12 or a domain-appropriate tolerance), then tie-break
using cover_coefficients[a] > cover_coefficients[b]; update the comparator
closure around score and cover_coefficients to use that epsilon or,
alternatively, add a short comment next to the comparator explaining why exact
equality is acceptable if you choose to keep it.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 1396-1407: The cast of the floating alpha_k to integer via
static_cast<i_t>(alpha_k) risks truncation due to FP error; change the
conversion in the code that pushes into values (and any other places where
alpha_k becomes an integer, e.g., values.push_back(...) and lifted_cut.vector.x
entries) to use rounding with a small epsilon/robust conversion (e.g.,
std::lround/std::llround or add 0.5 with bounds checks) so 0.9999999 becomes 1,
and also validate/clamp the resulting i_t to the expected range to avoid
overflow/underflow; update any related comments/logging (settings_.log.debug and
uses of F, alpha, values) to reflect the rounded conversion.

---

Outside diff comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 1598-1612: The dive_with loop is missing handling for
dual::status_t::CONCURRENT_LIMIT; add a branch like in plunge_with to detect if
lp_status == dual::status_t::CONCURRENT_LIMIT, set solver_status_ =
mip_status_t::CONCURRENT_LIMIT (or the appropriate mip_status_t enum value used
elsewhere) and break out of the loop before calling update_tree so
CONCURRENT_LIMIT is not misclassified by update_tree; modify the block handling
lp_status (around the calls to solve_node_lp and update_tree in dive_with) to
include this new branch, referencing lp_status, solver_status_, dive_stats and
update_tree.

---

Nitpick comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 1197-1207: The comparator in the std::sort using exact equality on
score (in the block creating permuation over cover_indicies) can be made more
robust: replace the equality branch `score[a] == score[b]` with an epsilon-based
check (e.g., fabs(score[a] - score[b]) < EPS) and pick EPS (like 1e-12 or a
domain-appropriate tolerance), then tie-break using cover_coefficients[a] >
cover_coefficients[b]; update the comparator closure around score and
cover_coefficients to use that epsilon or, alternatively, add a short comment
next to the comparator explaining why exact equality is acceptable if you choose
to keep it.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: bbb5a81a-6d20-4756-ad5b-e1f0f707677e

📥 Commits

Reviewing files that changed from the base of the PR and between e407fa3 and 7fe6f35.

📒 Files selected for processing (3)
  • cpp/src/branch_and_bound/branch_and_bound.cpp
  • cpp/src/cuts/cuts.cpp
  • cpp/src/mip_heuristics/presolve/third_party_presolve.cpp
🚧 Files skipped from review as they are similar to previous changes (1)
  • cpp/src/mip_heuristics/presolve/third_party_presolve.cpp

@chris-maes
Contributor Author

/ok to test bf10c68


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/cuts/cuts.cpp (1)

2773-2792: ⚠️ Potential issue | 🟡 Minor

Division by zero when input is empty produces NaN that propagates undetected.

Both gcd and lcm return 0 for empty input vectors. In rational_coefficients, if the inequality contains no integer variables, numerators and denominators remain empty, causing line 2764 to compute scalar = 0.0 / 0.0 = NaN. The subsequent checks (scalar < 0 and std::abs(scalar) > 1000) don't catch NaN because NaN comparisons return false, allowing rational_inequality.scale(NaN) to corrupt the inequality.

Proposed fix: return early for empty input or add explicit check
 int64_t gcd(const std::vector<int64_t>& integers)
 {
-  if (integers.empty()) { return 0; }
+  if (integers.empty()) { return 1; }  // Neutral element for GCD in this context
 
   int64_t result = integers[0];
   for (size_t i = 1; i < integers.size(); ++i) {
     result = std::gcd(result, integers[i]);
   }
   return result;
 }
 
 int64_t lcm(const std::vector<int64_t>& integers)
 {
-  if (integers.empty()) { return 0; }
+  if (integers.empty()) { return 1; }  // Neutral element for LCM in this context
   int64_t result =
     std::reduce(std::next(integers.begin()), integers.end(), integers[0], [](int64_t a, int64_t b) {
       return std::lcm(a, b);
     });
   return result;
 }

Alternatively, add an early return in rational_coefficients:

+  if (numerators.empty()) {
+    // No integer variables to rationalize, return as-is
+    return true;
+  }
+
   int64_t gcd_numerators   = gcd(numerators);
   int64_t lcm_denominators = lcm(denominators);

As per coding guidelines: "prevent overflow/underflow, precision loss, division by zero/near-zero".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/cuts/cuts.cpp` around lines 2773 - 2792, The code can produce NaN
when both numerators and denominators are empty (leading to 0/0); modify
rational_coefficients to detect this case and skip scaling: if
numerators.empty() or denominators.empty() then return early (or set scalar =
1.0) instead of computing scalar and calling rational_inequality.scale(NaN);
alternatively, ensure gcd/lcm do not return 0 for empty inputs (e.g., make
lcm/gcd return 1) but the preferred fix is the explicit empty-check in
rational_coefficients to avoid division by zero and prevent passing NaN into
rational_inequality.scale.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@cpp/src/cuts/cuts.cpp`:
- Around line 2773-2792: The code can produce NaN when both numerators and
denominators are empty (leading to 0/0); modify rational_coefficients to detect
this case and skip scaling: if numerators.empty() or denominators.empty() then
return early (or set scalar = 1.0) instead of computing scalar and calling
rational_inequality.scale(NaN); alternatively, ensure gcd/lcm do not return 0
for empty inputs (e.g., make lcm/gcd return 1) but the preferred fix is the
explicit empty-check in rational_coefficients to avoid division by zero and
prevent passing NaN into rational_inequality.scale.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 67c94f55-7b42-489e-a012-4366eb87a7ef

📥 Commits

Reviewing files that changed from the base of the PR and between 7fe6f35 and bf10c68.

📒 Files selected for processing (2)
  • cpp/src/branch_and_bound/branch_and_bound.cpp
  • cpp/src/cuts/cuts.cpp
🚧 Files skipped from review as they are similar to previous changes (1)
  • cpp/src/branch_and_bound/branch_and_bound.cpp

@chris-maes
Contributor Author

/ok to tes 03cfa1f

@chris-maes
Contributor Author

/ok to test 03cfa1f

@aliceb-nv
Contributor

closes #977

@chris-maes
Contributor Author

/ok to test 31ce885

@chris-maes
Contributor Author

/merge

@rapids-bot rapids-bot bot merged commit d369633 into NVIDIA:release/26.04 Apr 1, 2026
207 of 209 checks passed
@chris-maes chris-maes deleted the knapsack_improve branch April 1, 2026 19:37

Labels

improvement Improves an existing functionality non-breaking Introduces a non-breaking change P0

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants