
Use a single dual simplex pivot to initialize pseudocost#963

Open
chris-maes wants to merge 1 commit into NVIDIA:main from chris-maes:pseudo_cost_single_pivot_estimate

Conversation

@chris-maes
Contributor

This is an alternative to using strong branching to initialize pseudocosts.

This may also be used to rank candidates for reliability branching.

@chris-maes chris-maes requested a review from a team as a code owner March 17, 2026 00:00
@chris-maes chris-maes requested review from Bubullzz and Kh4ster March 17, 2026 00:00
@copy-pr-bot

copy-pr-bot bot commented Mar 17, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@coderabbitai

coderabbitai bot commented Mar 17, 2026

📝 Walkthrough

This pull request extends the branch-and-bound strong branching mechanism to pass dual variables (y, z) and basis tracking structures (basic_list, nonbasic_list) through the branching pipeline. It introduces new objective estimation functions in pseudo-costs for computing down/up branch estimates and refactors phase2 reduced cost computation into explicit template functions, supporting an expanded parameter range for batch PDLP strong branching.

Changes

Strong Branching Signature Updates
cpp/src/branch_and_bound/branch_and_bound.cpp, cpp/src/branch_and_bound/pseudo_costs.hpp
Updated function signatures to pass dual variables (root_y, root_z) and basis data (basic_list, nonbasic_list, basis_update) to strong_branching, replacing the prior monolithic root_soln parameter with separate root_x, root_y, root_z vectors.
Pseudo-Costs Core Logic
cpp/src/branch_and_bound/pseudo_costs.cpp
Introduced new objective_estimate_t struct and helper functions (compute_step_length, single_pivot_objective_estimate) for branch objective estimation. Added initialize_pseudo_costs_with_estimate path and conditional branching logic to support batch PDLP (settings value 1) and estimate-based (settings value 2) strong branching flows. Refactored calls to use root_x/y/z instead of root_soln.
Phase 2 Reduced Cost Computation
cpp/src/dual_simplex/phase2.hpp, cpp/src/dual_simplex/phase2.cpp
Added two new template functions: compute_reduced_cost_update (calculates delta_z for basic/nonbasic variables) and compute_delta_z (accumulates delta_z from dual updates via A_transpose). Includes explicit template instantiations and optional debug validation via CHECK_CHANGE_IN_REDUCED_COST.
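The delta_z accumulation can be sketched roughly as follows; the matrix layout, function name, and index-tracking scheme here are illustrative assumptions, not the templated cuOpt implementation:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of accumulating delta_z = -A^T * delta_y, walking the
// rows of A (the columns of A^T) for each row with a dual update, and
// recording which columns of delta_z become nonzero.
struct csr_matrix {
  std::vector<int> row_start;     // size m+1
  std::vector<int> col_index;     // column index of each nonzero
  std::vector<double> value;
};

void accumulate_delta_z(const csr_matrix& A,
                        const std::vector<int>& delta_y_indices,
                        const std::vector<double>& delta_y,
                        std::vector<double>& delta_z,
                        std::vector<int>& delta_z_indices,
                        std::vector<char>& delta_z_mark)
{
  for (int i : delta_y_indices) {  // only rows with a dual update
    const double dy = delta_y[i];
    for (int p = A.row_start[i]; p < A.row_start[i + 1]; ++p) {
      const int j = A.col_index[p];
      delta_z[j] -= A.value[p] * dy;  // z := z - A^T * delta_y
      if (!delta_z_mark[j]) {         // record first touch of column j
        delta_z_mark[j] = 1;
        delta_z_indices.push_back(j);
      }
    }
  }
}
```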
Solver Settings
cpp/src/math_optimization/solver_settings.cu
Extended MIP_BATCH_PDLP_STRONG_BRANCHING parameter maximum bound from 1 to 2, enabling a new strong branching strategy mode.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Title check ✅ Passed: The title accurately and clearly summarizes the main change: introducing a single dual simplex pivot method to initialize pseudocosts as an alternative to strong branching.
  • Description check ✅ Passed: The description is relevant to the changeset, explaining that this PR offers an alternative to strong branching for pseudocost initialization and potential use in reliability branching.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (5)
cpp/src/math_optimization/solver_settings.cu (1)

102-102: Document the new non-PDLP mode behind this setting.

Line 102 now accepts 2, but the setting name and raw bounds alone do not tell callers that this is the new single-pivot estimator rather than another PDLP variant. A nearby comment or named constants for 0/1/2 would make this much easier to configure correctly.
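A minimal sketch of the named-constant suggestion might look like this; the enumerator names and mode meanings are hypothetical, not taken from the cuOpt codebase:

```cpp
#include <cassert>

// Hypothetical sketch: name the three modes of the
// MIP_BATCH_PDLP_STRONG_BRANCHING setting instead of using raw 0/1/2.
enum class strong_branching_mode : int {
  off                   = 0,  // legacy pseudocost initialization
  batch_pdlp            = 1,  // batch PDLP strong branching
  single_pivot_estimate = 2   // one dual simplex pivot per candidate
};

// Bounds derived from the enum, so the valid range and the modes stay in sync.
constexpr int mode_lower_bound = static_cast<int>(strong_branching_mode::off);
constexpr int mode_upper_bound =
  static_cast<int>(strong_branching_mode::single_pivot_estimate);
```

Replacing the raw bounds with these symbols would make the setting self-documenting at every call site.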

cpp/src/dual_simplex/phase2.cpp (1)

122-126: Use a tolerance when deciding whether delta_z[j] is structurally nonzero.

Line 123 uses exact dot != 0.0 on a floating-point dot product. The sparse path already applies a numerical cutoff, so this dense branch should use a shared threshold as well; otherwise the 30% switch can change delta_z_indices bookkeeping purely because of roundoff.

As per coding guidelines, "Check numerical stability: prevent overflow/underflow, precision loss, division by zero/near-zero, and use epsilon comparisons for floating-point equality checks."
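The suggested fix amounts to replacing the exact comparison with a magnitude test against a shared cutoff; the constant name and value below are placeholders for whatever tolerance the sparse path already uses:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: one shared cutoff for "structurally nonzero" so the
// dense and sparse paths classify entries of delta_z identically.
constexpr double kDeltaZCutoff = 1e-12;  // illustrative value

bool is_structural_nonzero(double dot)
{
  return std::abs(dot) > kDeltaZCutoff;  // instead of dot != 0.0
}
```

With this, a dot product that is nonzero only through roundoff no longer gets pushed into delta_z_indices, so the dense and sparse code paths stay consistent.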

cpp/src/branch_and_bound/pseudo_costs.cpp (3)

169-184: Guard or remove debug logging in hot path.

These log.printf calls execute unconditionally for every fractional variable during pseudo-cost estimation. For problems with many fractional variables, this could generate excessive output and impact performance.

Consider guarding these with a debug flag or conditional compilation:

♻️ Suggested fix
-  settings.log.printf("Down branch %d. Step length: %e. Objective estimate: %e. Delta obj: %e. Root obj: %e.\n", variable_j, step_length, down_obj, delta_obj, root_obj);
+  settings.log.debug("Down branch %d. Step length: %e. Objective estimate: %e. Delta obj: %e. Root obj: %e.\n", variable_j, step_length, down_obj, delta_obj, root_obj);
   ...
-  settings.log.printf("Up   branch %d. Step length: %e. Objective estimate: %e. Delta obj: %e. Root obj: %e.\n", variable_j, step_length, up_obj, delta_obj, root_obj);
+  settings.log.debug("Up   branch %d. Step length: %e. Objective estimate: %e. Delta obj: %e. Root obj: %e.\n", variable_j, step_length, up_obj, delta_obj, root_obj);

172-177: Dead assignment: direction = 1 is unused.

The direction variable is assigned to 1 but never used afterward—the code directly negates delta_z for the up branch computation. This is minor but could be cleaned up.

♻️ Remove unused assignment
   // Up branch
-  direction = 1;
   // Negate delta_z
   for (i_t j : delta_z_indices)

248-256: Add defensive check for non-basic fractional variables.

If a fractional variable j is not in the basis (edge case due to numerical issues), basic_map[j] would be -1, and passing this to single_pivot_objective_estimate would result in undefined behavior when constructing the sparse vector e_k with index -1.

While fractional integer variables should always be basic in LP solutions, a defensive assertion would catch unexpected edge cases:

🛡️ Proposed defensive check
   for (i_t k = 0; k < fractional.size(); k++) {
     const i_t j                 = fractional[k];
+    cuopt_assert(basic_map[j] >= 0,
+                 "Fractional variable %d is not basic - cannot estimate pseudo-cost",
+                 j);
     objective_estimate_t<f_t> estimate = single_pivot_objective_estimate(lp,

Or alternatively, skip variables that aren't basic:

   for (i_t k = 0; k < fractional.size(); k++) {
     const i_t j = fractional[k];
+    if (basic_map[j] < 0) {
+      // Variable is not basic - skip estimation
+      pc.strong_branch_down[k] = 0;
+      pc.strong_branch_up[k] = 0;
+      continue;
+    }
     objective_estimate_t<f_t> estimate = single_pivot_objective_estimate(...);

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 2e599412-87c4-4e49-8d08-d9d91317ad55

📥 Commits

Reviewing files that changed from the base of the PR and between c5116e1 and aa4d55c.

📒 Files selected for processing (6)
  • cpp/src/branch_and_bound/branch_and_bound.cpp
  • cpp/src/branch_and_bound/pseudo_costs.cpp
  • cpp/src/branch_and_bound/pseudo_costs.hpp
  • cpp/src/dual_simplex/phase2.cpp
  • cpp/src/dual_simplex/phase2.hpp
  • cpp/src/math_optimization/solver_settings.cu

Comment on lines +183 to +192
phase2::compute_reduced_cost_update(lp,
basic_list,
nonbasic_list,
delta_y_dense,
leaving_index,
direction,
delta_z_mark_check,
delta_z_indices_check,
delta_z_check,
work_estimate);

⚠️ Potential issue | 🟠 Major


CHECK_CHANGE_IN_REDUCED_COST macro references variables out of scope in compute_delta_z().

Lines 183-192 reference lp, basic_list, and nonbasic_list which are not parameters of compute_delta_z() and therefore unavailable when this function is called. The macro is uncompilable as written. Move this verification logic to the caller or pass these variables through function parameters.


@anandhkb anandhkb added this to the 26.04 milestone Mar 17, 2026
@anandhkb anandhkb added non-breaking Introduces a non-breaking change improvement Improves an existing functionality labels Mar 17, 2026
nguidotti added a commit to nguidotti/cuopt that referenced this pull request Mar 17, 2026
@nguidotti nguidotti self-requested a review March 18, 2026 15:16
Contributor

@nguidotti nguidotti left a comment

I think the single_pivot_objective_estimate routine should return the objective change instead of the objective, to match the other code paths. After all, the pseudocost is computed using the objective change.
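The reviewer's point can be sketched as follows: pseudocosts are objective changes normalized by fractional distance, so the estimator would report delta_obj rather than the absolute objective. The function names and the normalization below are assumptions mirroring the usual pseudocost definition, not the cuOpt code:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: pseudocost = objective change per unit of
// fractionality, so the relevant quantity is (estimate - root_obj).
double down_pseudocost(double root_obj, double down_estimate, double x_frac)
{
  const double f_down = x_frac - std::floor(x_frac);  // distance to floor
  return (down_estimate - root_obj) / f_down;
}

double up_pseudocost(double root_obj, double up_estimate, double x_frac)
{
  const double f_up = std::ceil(x_frac) - x_frac;     // distance to ceil
  return (up_estimate - root_obj) / f_up;
}
```

If the estimator instead returned the absolute objective, every downstream consumer would have to subtract root_obj itself, which is the mismatch the reviewer is flagging.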

workspace,
delta_z,
work_estimate);
pc.strong_branch_down[k] = estimate.down_obj;
Contributor

@nguidotti nguidotti Mar 18, 2026

For the pseudocost initialization, should strong_branch_down/up be set to the objective change instead of the objective? This matches the other code paths.

i_t delta_y_nz0 = 0;
const i_t nz_delta_y = delta_y.i.size();
for (i_t k = 0; k < nz_delta_y; k++) {
if (std::abs(delta_y.x[k]) > 1e-12) { delta_y_nz0++; }
Contributor

nitpick: replace 1e-12 with zero_tol from simplex_settings.

i_t m = lp.num_rows;
i_t n = lp.num_cols;

csc_matrix_t<i_t, f_t> A_transpose(m, n, 0);
Contributor

@nguidotti nguidotti Mar 18, 2026

We are using $\mathbf{A}^T$ in many places (in dual simplex, bound strengthening and here). Can this move to lp_problem_t so it is visible across the entire code?
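One way the suggestion could look, sketched with hypothetical names: cache the transpose on the problem struct and build it lazily, so dual simplex, bound strengthening, and the new estimator all share one copy instead of each rebuilding A^T:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch; the struct and member names are illustrative, not the
// actual lp_problem_t / csc_matrix_t definitions in cuOpt.
struct csc_matrix {
  int rows = 0, cols = 0;
  std::vector<int> col_start, row_index;
  std::vector<double> value;
};

struct lp_problem {
  csc_matrix A;
  mutable bool transpose_built = false;
  mutable csc_matrix A_transpose_cache;

  const csc_matrix& transpose() const
  {
    if (!transpose_built) {
      // ... build A^T from A here (CSC transpose omitted); runs at most once ...
      A_transpose_cache.rows = A.cols;
      A_transpose_cache.cols = A.rows;
      transpose_built        = true;
    }
    return A_transpose_cache;
  }
};
```

The main design question is invalidation: any code path that mutates A (presolve, bound strengthening that drops rows) would need to reset transpose_built.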
