Summary
GDPopt LOA can report `TerminationCondition.optimal` with `problem.lower_bound == problem.upper_bound` on nonconvex GDPs even when those bounds are not rigorous.
This is not a request for LOA to prove global optimality on nonconvex problems. Pyomo already documents that "For nonconvex problems, LOA may not report rigorous lower/upper bounds." The issue is that the returned `SolverResults` object still uses the unqualified, global-looking fields:

```
results.solver.termination_condition = optimal
results.problem.lower_bound = <local incumbent>
results.problem.upper_bound = <same local incumbent>
```
For benchmark and downstream tooling, that is indistinguishable from a globally certified solve unless the caller already knows the model is nonconvex and knows this GDPopt LOA caveat.
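To make the ambiguity concrete, here is a hypothetical downstream classifier (plain-Python stand-ins for the `SolverResults` fields; not code from any real benchmark harness) that treats these fields the way a consumer naturally would:

```python
from types import SimpleNamespace

def looks_globally_certified(results, tol=1e-6):
    """Naive downstream check: 'optimal' plus matching bounds
    reads as a globally certified solve."""
    tc = str(results.solver.termination_condition)
    lb = results.problem.lower_bound
    ub = results.problem.upper_bound
    return tc == "optimal" and abs(ub - lb) <= tol

# Stand-in for the SolverResults a nonconvex LOA run can return:
loa_result = SimpleNamespace(
    solver=SimpleNamespace(termination_condition="optimal"),
    problem=SimpleNamespace(
        lower_bound=1011.6577899409375,
        upper_bound=1011.6577899409375,
    ),
)
print(looks_globally_certified(loa_result))  # True, despite non-rigorous bounds
```

Nothing in the result object lets this check distinguish the nonconvex LOA case from a genuinely certified solve.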
Minimal result-finalization reproducer
This reproducer does not require external solvers. It isolates the relevant result semantics: when a LOA run has a local primal incumbent and a non-rigorous OA/discrete bound that has crossed it, the generic GDPopt convergence logic force-updates the dual bound to the primal bound and returns optimal with equal bounds.
```python
import logging

import pyomo.environ as pyo
from pyomo.common.timing import default_timer
from pyomo.contrib.gdpopt.loa import GDP_LOA_Solver
from pyomo.opt import SolverResults

solver = GDP_LOA_Solver()
solver.pyomo_results = SolverResults()
solver.pyomo_results.problem.sense = pyo.maximize

# This mimics a nonconvex LOA state:
# - LB is a feasible incumbent from a local NLP subproblem.
# - UB is the OA/discrete bound. For a nonconvex problem this bound is not
#   necessarily globally valid, and it has crossed the incumbent.
solver.LB = 1011.6577899409375
solver.UB = 1000.0
solver.iteration = 1
solver.timing.main_timer_start_time = default_timer()
solver.timing.total = 0.0

config = solver.CONFIG()
config.bound_tolerance = 1e-6
config.logger = logging.getLogger("gdpopt-loa-mwe")
config.logger.handlers[:] = [logging.StreamHandler()]
config.logger.setLevel(logging.INFO)

print("before", solver.LB, solver.UB, solver.pyomo_results.solver.termination_condition)
print("converged", solver.bounds_converged(config))
print("after", solver.LB, solver.UB, solver.pyomo_results.solver.termination_condition)

results = solver._get_final_pyomo_results_object()
print(
    "results",
    results.problem.lower_bound,
    results.problem.upper_bound,
    results.solver.termination_condition,
)
```
Observed output with Pyomo 6.10.0:

```
1 1011.65779 1011.65779 0.00% 0.00
GDPopt exiting--bounds have converged or crossed.
before 1011.6577899409375 1000.0 unknown
converged True
after 1011.6577899409375 1011.6577899409375 optimal
results 1011.6577899409375 1011.6577899409375 optimal
```
The problematic behavior is not that the internal crossed-bound logic exists for algorithms with rigorous bounds. The problematic behavior is that LOA uses the same finalization path even though its dual bound can be non-rigorous on nonconvex models.
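In outline, that finalization path behaves like the following sketch (a simplified paraphrase of the observed behavior, not the actual `bounds_converged` source):

```python
def bounds_converged_sketch(LB, UB, sense, tol=1e-6):
    """Simplified paraphrase of the observed convergence check: once the
    gap UB - LB falls to (or below) the tolerance, the dual bound is
    snapped onto the primal incumbent and convergence is declared."""
    if UB - LB <= tol:  # bounds have converged or crossed
        if sense == "maximize":
            UB = LB  # dual (OA upper) bound forced to the incumbent LB
        else:
            LB = UB  # dual (OA lower) bound forced to the incumbent UB
        return True, LB, UB
    return False, LB, UB

# The crossed-bound state from the reproducer above:
print(bounds_converged_sketch(1011.6577899409375, 1000.0, "maximize"))
# (True, 1011.6577899409375, 1011.6577899409375)
```

For an algorithm with rigorous bounds this snap is harmless; for LOA on a nonconvex model it manufactures the `LB == UB` certificate.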
Downstream evidence from GDPLib
This surfaced while benchmarking GDPLib's HDA (`hda`) model (see SECQUOIA/gdplib#67):
For the HDA instance, the local LOA benchmark row reported:
```
strategy: gdpopt.loa
solver profile: gams-local
termination: optimal
lower bound: 1011.6577899409375
upper bound: 1011.6577899409375
iterations: 1
```
However, the HDA model is nonconvex, and other GDPopt/global-capable routes found a better feasible solution:
```
gdpopt.gloa + gams-local:  optimal, objective 1105.3441871686007
gdpopt.ric  + gams-local:  optimal, objective 1105.3441871686007
gdpopt.ric  + gams-gurobi: optimal, objective 1105.3441871686007
gdpopt.ric  + gams-baron:  optimal, objective 1105.339918385673
```
For a maximization problem, a reported LOA upper bound of 1011.6577899409375 is therefore not a valid global upper bound once a feasible point around 1105.34 is known. The LB = UB result gives the appearance of a globally certified incumbent even though it is a local/non-rigorous LOA result.
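The arithmetic of that claim is easy to check mechanically (values taken from the rows above):

```python
reported_loa_ub = 1011.6577899409375      # upper bound reported by gdpopt.loa
best_known_feasible = 1105.3441871686007  # feasible objective from GLOA/RIC runs

# For a maximization problem, any valid global upper bound must dominate
# every feasible objective value; this one does not.
print(reported_loa_ub >= best_known_feasible)  # False
```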
Expected behavior
For LOA on models where the OA/discrete bound is not known to be rigorous, GDPopt should avoid returning globally certified-looking result metadata.
Possible approaches:
- Return a non-global termination condition such as `feasible` or `locallyOptimal` when LOA converges only by non-rigorous/crossed bounds on a nonconvex problem.
- Preserve the incumbent as the primal bound but avoid force-setting the dual bound equal to the incumbent when the dual bound is not rigorous.
- Add explicit result metadata indicating that the reported bound is not rigorous.
- Add an option such as `assume_convex=True/False` or `certify_bounds=True/False`, with conservative default result semantics for LOA unless convexity/global validity is asserted.
The important downstream requirement is that JSON/benchmark consumers should not have to infer from `solver.name == "GDPopt ... - LOA"` that `optimal` and `LB == UB` may not mean globally optimal.
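A sketch of how the first two approaches could fit together (the function name, the `dual_bound_is_rigorous` flag, and the returned result shape are all hypothetical, not existing GDPopt API; written for a maximization problem):

```python
def finalize_loa_results_sketch(incumbent, oa_bound, dual_bound_is_rigorous,
                                tol=1e-6):
    """Hypothetical conservative finalization for a maximization LOA run.

    incumbent: primal (lower) bound from local NLP subproblems.
    oa_bound:  OA/discrete (upper) bound, rigorous only if asserted.
    Returns (termination, lower_bound, upper_bound).
    """
    if oa_bound - incumbent <= tol:  # bounds converged or crossed
        if dual_bound_is_rigorous:
            # Safe to certify: collapse the bounds and report optimal.
            return ("optimal", incumbent, incumbent)
        # Non-rigorous dual bound: keep the incumbent as the primal bound,
        # but neither certify it nor fabricate a matching dual bound.
        return ("feasible", incumbent, float("inf"))
    return (None, incumbent, oa_bound)  # not converged; keep iterating

print(finalize_loa_results_sketch(1011.6577899409375, 1000.0, False))
# ('feasible', 1011.6577899409375, inf)
```

Under this shape, the HDA-style crossed-bound case would come back as `feasible` with an honest (unbounded) dual bound instead of a synthetic `LB == UB` certificate.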
Environment
Observed locally with:
- Python 3.12.13
- Pyomo 6.10.0
- OS/platform: Linux WSL2, glibc 2.39