
Release v1.2.4: Performance Quick Wins #85

Merged
daggbt merged 7 commits into main from release/v1.2.4 on Feb 8, 2026

Conversation

daggbt (Collaborator) commented Feb 8, 2026

Summary

Theme: Performance Quick Wins (Patch Release)

This release implements low-risk, high-confidence performance improvements from the PERFORMANCE_BOTTLENECK_REPORT.md, along with a solver selection bug fix and documentation updates.

Changes

Bug Fixes

  • Solver selection: _auto_select_method no longer selects L-BFGS-B for constrained problems — falls back to SLSQP instead
  • Hash initialization: Added self._hash = None to MatrixSum, QuadraticForm, and FrobeniusNorm __init__ methods

Performance (Roadmap Tasks 1-4)

  • Task 1: Remove np.isfinite safety checks from hot solver loop
  • Task 2: Vectorize DotProduct and VectorSum evaluation with np.dot/np.fromiter
  • Task 3: Pre-compile regex in _natural_sort_key at module level
  • Task 4: Initialize _hash=None in all Expression subclasses (9 classes across 3 files)

Documentation

  • Updated all benchmark tables with latest numbers
  • Fixed landing page title inference (Quarto freeze cache issue)
  • Centered navbar logo with menu items
  • Added benchmark analysis sections for caching and SciPy baseline scaling

Benchmarks

  • 804 tests passing (core)
  • 95 passed, 4 skipped (benchmarks — Pyomo not installed)
  • NLP warm solves: 800x faster than SciPy at n=5000
  • LP/CQP: parity with raw SciPy at scale

Version Bump

pyproject.toml: 1.2.3 → 1.2.4

Removes the np.isfinite(val) and np.all(np.isfinite(grad)) checks from
the objective and gradient functions in scipy_solver.py. These checks
ran on every solver iteration (10,000+ times), adding measurable overhead.

Users encountering inf/nan values will still see them in solver output
or get solver failures, which is sufficient feedback.
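A minimal before/after sketch of the hot-loop change, with hypothetical wrapper names (the actual functions live in scipy_solver.py): the per-iteration finiteness guard is dropped and inf/nan values flow straight through to SciPy, which reports them via its own failure paths.

```python
import numpy as np

# Hypothetical illustration of the removed guard; function names are not
# the project's real API.

def objective_with_check(fun, x):
    val = fun(x)
    if not np.isfinite(val):      # ran on every iteration (removed in v1.2.4)
        raise ValueError("non-finite objective value")
    return val

def objective_fast(fun, x):
    return fun(x)                 # no guard: SciPy sees inf/nan directly

f = lambda x: float(np.sum(x ** 2))
x = np.ones(3)
print(objective_fast(f, x))       # 3.0
```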
…tion (#82)

Use np.dot() and np.sum() instead of Python's sum() with generator
expressions. This enables SIMD optimization and NumPy's optimized C loops
for vector operations.

Changes:
- DotProduct.evaluate: use np.array() + np.dot()
- VectorSum.evaluate: use np.array() + np.sum()
- VectorExpressionSum.evaluate: use np.array() + np.sum()
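The vectorized evaluation can be sketched like this, with the class machinery reduced to plain functions over already-evaluated element values (the real methods operate on expression objects):

```python
import numpy as np

# Simplified sketch: instead of sum(a * b for a, b in zip(...)) in pure
# Python, materialize both operand lists as arrays once and let NumPy's
# C loops (and SIMD) perform the reduction.

def dot_product_evaluate(left_vals, right_vals):
    a = np.array(left_vals, dtype=float)
    b = np.array(right_vals, dtype=float)
    return float(np.dot(a, b))

def vector_sum_evaluate(vals):
    return float(np.sum(np.array(vals, dtype=float)))

print(dot_product_evaluate([1, 2, 3], [4, 5, 6]))  # 32.0
print(vector_sum_evaluate([1, 2, 3]))              # 6.0
```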
Add _NUMBER_SPLIT_RE module-level compiled regex pattern instead of
calling re.split() with a pattern string on every sort comparison.
Reduces function call overhead in O(N log N) sorting operations.
…84)

Initialize _hash = None in Constant, BinaryOp, and UnaryOp __init__
methods. These are the most frequently created expression types and
benefit from having the slot pre-initialized.

The base __hash__ method keeps the hasattr() check for compatibility
with other Expression subclasses (in vectors.py, matrices.py) that
don't initialize _hash explicitly.
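The cached-hash pattern described above can be sketched as follows. The class layout is simplified and `_hash_key` is a hypothetical helper; the point is that pre-initializing `_hash` lets frequent subclasses skip straight to the cached value, while the `hasattr()` check keeps subclasses that never set the slot working.

```python
# Simplified sketch of the base-class hash caching with the compatibility
# fallback; not the project's exact implementation.
class Expression:
    def __hash__(self):
        if not hasattr(self, "_hash") or self._hash is None:
            self._hash = hash(self._hash_key())   # compute once, then reuse
        return self._hash

class Constant(Expression):
    def __init__(self, value):
        self.value = value
        self._hash = None          # slot pre-initialized (this release's change)

    def _hash_key(self):
        return ("Constant", self.value)

c = Constant(3.0)
assert hash(c) == hash(c)          # second call returns the cached value
```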
np.array() construction overhead exceeded computation gains at ALL
vector sizes tested (up to n=5000). Python's sum(generator) wins.

Profiling showed:
- VectorSum n=1000: 111.9us (python) vs 190.5us (numpy) = 1.7x slower
- DotProduct n=1000: 267.1us (python) vs 343.9us (numpy) = 1.3x slower

True numpy benefits require data staying in numpy arrays (v1.3.0 scope).
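The trade-off behind the revert can be reproduced with a rough micro-benchmark. This only shows the shape of the comparison, not the report's exact figures: when element values arrive one at a time from Python-level evaluation, the `np.array()` construction cost competes with the reduction savings, and timings vary by machine and size.

```python
import timeit
import numpy as np

# Stand-in for per-element evaluate() results coming out of Python objects.
values = [float(i) for i in range(1000)]

# Pure-Python reduction over a generator (what the revert restores).
t_python = timeit.timeit(lambda: sum(v for v in values), number=200)

# NumPy path: pay array construction, then reduce in C (what was reverted).
t_numpy = timeit.timeit(lambda: np.sum(np.array(values)), number=200)

print(f"python sum(generator): {t_python:.4f}s")
print(f"np.array + np.sum:     {t_numpy:.4f}s")
```

The genuine NumPy win requires the data to live in arrays end to end, which is why the commit defers it to v1.3.0 scope.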
@daggbt daggbt merged commit 1f746e4 into main Feb 8, 2026
3 checks passed
@daggbt daggbt deleted the release/v1.2.4 branch February 8, 2026 10:08