@FBumann FBumann commented Jan 30, 2026

Closes # (if applicable).

Changes proposed in this Pull Request

Checklist

  • Code changes are sufficiently documented; i.e. new functions contain docstrings and further explanations may be given in doc.
  • Unit tests for new features were added (if applicable).
  • A note for the release notes doc/release_notes.rst of the upcoming release is included.
  • I consent to the release of this PR's code under the MIT license.

Summary by CodeRabbit

Release Notes

  • New Features
    • Added support for multiple piecewise constraint formulations via a new method parameter: "sos2" (default), "incremental", or "auto".
    • Incremental formulation now available for strictly monotonic breakpoints, offering improved efficiency without requiring SOS2 or binary variables.
    • Auto-selection (method="auto") chooses the appropriate formulation based on breakpoint characteristics.
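As an illustration of the delta decomposition behind the incremental formulation, here is a plain-Python sketch of how a piecewise-linear function is evaluated from segment deltas (function and variable names are hypothetical, not linopy's API; assumes strictly increasing breakpoints):

```python
def pwl_eval_incremental(x, xs, ys):
    """Evaluate a piecewise-linear function at x via the incremental (delta)
    decomposition: x = xs[0] + sum(delta_i), where each segment's delta fills
    up to the segment width before the next one starts. Assumes xs is
    strictly increasing."""
    remaining = x - xs[0]
    deltas = []
    for i in range(len(xs) - 1):
        width = xs[i + 1] - xs[i]
        d = min(max(remaining, 0.0), width)  # clamp delta to [0, width]
        deltas.append(d)
        remaining -= d
    # each segment's slope scales its delta's contribution to y
    slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return ys[0] + sum(s * d for s, d in zip(slopes, deltas))

print(pwl_eval_incremental(75.0, [0.0, 50.0, 100.0], [0.0, 10.0, 40.0]))  # 25.0
```

In an LP, the deltas become continuous variables bounded by the segment widths; strict monotonicity is what lets the fill order be enforced without SOS2 sets or binaries.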


…_SUFFIX = "_fill".

  linopy/model.py —
  - Added method: str = "sos2" parameter to add_piecewise_constraints()
  - Updated docstring with the new parameter and incremental formulation notes
  - Refactored: extracted _add_pwl_sos2() (existing SOS2 logic) and added _add_pwl_incremental() (new delta formulation)
  - Added _check_strict_monotonicity() static method
  - method="auto" checks monotonicity and picks accordingly
  - Numeric coordinate validation only enforced for SOS2

  test/test_piecewise_constraints.py — Added TestIncrementalFormulation (10 tests) covering: single variable, two breakpoints, dict case, non-monotonic error, decreasing monotonic, auto-select incremental/sos2, invalid method, extra coordinates. Added TestIncrementalSolverIntegration (Gurobi-gated).
…f(dim).rename()

  2. Filling-order constraints: replaced per-segment individual add_constraints calls with a single vectorized constraint via xr.concat + LinearExpression
  3. Mask computation: replaced loop over segments with vectorized slice + rename
  4. Coordinate lists: unified extra_coords/lambda_coords — lambda_coords = extra_coords + [bp_dim_index], eliminating duplicate list comprehensions
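The vectorization pattern behind items 2 and 3 — replacing a per-segment Python loop with a single shifted-slice array operation — can be sketched in plain NumPy (illustrative only; the actual code operates on xarray objects):

```python
import numpy as np

bp = np.array([0.0, 20.0, 50.0, 100.0])  # breakpoints along the bp dimension

# loop version: build per-segment widths one at a time
widths_loop = np.array([bp[i + 1] - bp[i] for i in range(len(bp) - 1)])

# vectorized version: one shifted-slice subtraction covers all segments
widths_vec = bp[1:] - bp[:-1]

assert np.array_equal(widths_loop, widths_vec)
print(widths_vec)
```

The same shift-and-compare trick yields the filling-order constraints between consecutive deltas in one vectorized expression instead of one `add_constraints` call per segment.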
coderabbitai bot commented Jan 30, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

Walkthrough

The pull request introduces a new piecewise-linear constraint formulation called "incremental" alongside the existing "sos2" method. It adds suffix constants, extends the add_piecewise_constraints API with a method parameter, and provides helper methods for both formulations with auto-selection logic.

Changes

  • PWL Constants — linopy/constants.py: Added two new piecewise-linear constraint suffix constants, PWL_DELTA_SUFFIX and PWL_FILL_SUFFIX, used to name the delta and fill variables.
  • Piecewise Constraint Implementation — linopy/model.py: Extended add_piecewise_constraints with a method parameter supporting the "sos2", "incremental", and "auto" formulations. Introduced _add_pwl_sos2 and _add_pwl_incremental helper methods to implement each formulation separately, added a _check_strict_monotonicity static method for validating breakpoint monotonicity, and adjusted imports and coordinate/mask handling based on the selected method.
  • Piecewise Constraint Tests — test/test_piecewise_constraints.py: Expanded test imports and added a comprehensive TestIncrementalFormulation test class covering incremental formulation behavior, monotonicity validation, masking, auto-selection logic, multi-dimensional constraints, and Gurobi solver integration tests.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

Deltas dancing, filling with grace,
Two formulations now take their place,
Incremental paths, strictly monotone,
The SOS2 way, forever known,
Piecewise wisdom, auto-selected with care! 🐰✨

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
  • Description check — ⚠️ Warning: The description is largely empty; the required sections are present as a template checklist but contain no actual content about the proposed changes. Resolution: fill in the 'Changes proposed in this Pull Request' section with details about the new incremental formulation, the method parameter, and the related implementation changes.
  • Title check — ❓ Inconclusive: The title is vague; 'clean+linear' does not meaningfully describe the main change of adding piecewise linear constraint formulations. Resolution: revise the title to be more specific, e.g. 'Add incremental formulation for piecewise linear constraints' or 'Support alternative piecewise constraint methods (SOS2 and incremental)'.

✅ Passed checks (1 passed)
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which meets the required threshold of 80.00%.




FBumann commented Jan 30, 2026

@coderabbitai review


coderabbitai bot commented Jan 30, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@linopy/model.py`:
- Around line 934-941: The _check_strict_monotonicity function currently
evaluates monotonicity over the entire DataArray at once and fails when
different variables have opposite directions or when diffs contain NaNs; change
it to compute diffs = breakpoints.diff(dim) and then iterate (or apply a
vectorized reduction) over all orthogonal slices along dim (e.g., per-variable
slices) ignoring NaNs in each slice, and for each slice consider it strictly
monotonic if all non-NaN diffs are either >0 or <0; return True only if every
slice meets that condition. Ensure you reference _check_strict_monotonicity, the
breakpoints DataArray and dim when locating and updating the logic.
🧹 Nitpick comments (1)
linopy/constants.py (1)

39-44: Add type annotations for the new PWL suffix constants.

These new constants are untyped; annotating them as str keeps typing consistent and helps mypy.

♻️ Suggested change
-PWL_DELTA_SUFFIX = "_delta"
-PWL_FILL_SUFFIX = "_fill"
+PWL_DELTA_SUFFIX: str = "_delta"
+PWL_FILL_SUFFIX: str = "_fill"

As per coding guidelines: Use type hints and mypy for type checking in Python files.

linopy/model.py Outdated
Comment on lines 934 to 941
@staticmethod
def _check_strict_monotonicity(breakpoints: DataArray, dim: str) -> bool:
"""Check if breakpoints are strictly monotonic along dim."""
diffs = breakpoints.diff(dim)
# All diffs must be either all positive or all negative (strictly monotonic)
all_positive = bool((diffs > 0).all())
all_negative = bool((diffs < 0).all())
return all_positive or all_negative

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find and examine the file
fd -t f "model.py" linopy/

Repository: FBumann/linopy

Length of output: 73


🏁 Script executed:

# Get the specific lines mentioned in the review
cat -n linopy/model.py | sed -n '930,945p'

Repository: FBumann/linopy

Length of output: 946


🏁 Script executed:

# Search for context around this function and its usage
rg "_check_strict_monotonicity" -B 5 -A 15

Repository: FBumann/linopy

Length of output: 3546


🏁 Script executed:

# Check if there are any tests for this function
rg "_check_strict_monotonicity" --type py -l

Repository: FBumann/linopy

Length of output: 73


🏁 Script executed:

# Get more context around the function calls
cat -n linopy/model.py | sed -n '750,780p'

Repository: FBumann/linopy

Length of output: 1423


🏁 Script executed:

# Check the full _add_pwl_link function to understand the context better
cat -n linopy/model.py | sed -n '750,820p'

Repository: FBumann/linopy

Length of output: 3246


🏁 Script executed:

# Look for tests related to PWL and monotonicity
fd -t f "test.*\.py" | xargs rg "pwl|monotonic" -l

Repository: FBumann/linopy

Length of output: 92


🏁 Script executed:

# Check imports to understand what DataArray is
head -50 linopy/model.py

Repository: FBumann/linopy

Length of output: 1157


🏁 Script executed:

# Look at the PWL tests to understand the use cases
cat test/test_piecewise_constraints.py | head -100

Repository: FBumann/linopy

Length of output: 3313


🏁 Script executed:

# Search for tests specifically about monotonicity
rg "monotonic|opposite" test/test_piecewise_constraints.py -B 2 -A 5

Repository: FBumann/linopy

Length of output: 1619


🏁 Script executed:

# Check if there are any tests with NaNs in breakpoints
rg "nan|NaN" test/test_piecewise_constraints.py -B 2 -A 5

Repository: FBumann/linopy

Length of output: 1424


🏁 Script executed:

# Look at the full _add_pwl_link signature to understand the dict case
rg "def _add_pwl_link" -A 30 linopy/model.py

Repository: FBumann/linopy

Length of output: 40


🏁 Script executed:

# Get the full signature of _add_pwl_link
cat -n linopy/model.py | sed -n '665,680p'

Repository: FBumann/linopy

Length of output: 876


🏁 Script executed:

# Look for dict test with opposite monotonicity
rg "opposite|decreasing.*increasing|increasing.*decreasing" test/test_piecewise_constraints.py -B 5 -A 10

Repository: FBumann/linopy

Length of output: 40


🏁 Script executed:

# Check if there's an existing test for dict with different monotonicities
rg "dict_of_variables" -A 20 test/test_piecewise_constraints.py | head -60

Repository: FBumann/linopy

Length of output: 801


🏁 Script executed:

# Look at tests showing what happens with dict and non-monotonic
cat test/test_piecewise_constraints.py | sed -n '100,250p'

Repository: FBumann/linopy

Length of output: 5197


🏁 Script executed:

# Look for any tests that show opposite monotonic directions
cat test/test_piecewise_constraints.py | sed -n '60,130p'

Repository: FBumann/linopy

Length of output: 2503


🏁 Script executed:

# Verify the xarray API for diff, notnull, and the proposed operations
python3 << 'EOF'
import xarray as xr
import numpy as np

# Simulate the proposed fix logic
breakpoints = xr.DataArray(
    [[0, 50, 100], [100, 50, 0]],  # power increasing, efficiency decreasing
    dims=["var", "bp"],
    coords={"var": ["power", "efficiency"], "bp": [0, 1, 2]},
)

print("Breakpoints shape:", breakpoints.shape)
print("Breakpoints:\n", breakpoints.values)

diffs = breakpoints.diff("bp")
print("\nDiffs shape:", diffs.shape)
print("Diffs:\n", diffs.values)

valid = diffs.notnull()
print("\nValid shape:", valid.shape)
print("Valid:\n", valid.values)

all_positive = ((diffs > 0) | ~valid).all(dim="bp")
print("\nAll positive (shape", all_positive.shape, "):\n", all_positive.values)

all_negative = ((diffs < 0) | ~valid).all(dim="bp")
print("\nAll negative (shape", all_negative.shape, "):\n", all_negative.values)

result = bool((all_positive | all_negative).all())
print("\nFinal result (opposite directions):", result)

# Now test with NaNs
breakpoints_nan = xr.DataArray(
    [[0, 50, np.nan, 100], [100, 50, 30, np.nan]],
    dims=["var", "bp"],
    coords={"var": ["power", "efficiency"], "bp": [0, 1, 2, 3]},
)

print("\n\n=== Test with NaNs ===")
print("Breakpoints shape:", breakpoints_nan.shape)
print("Breakpoints:\n", breakpoints_nan.values)

diffs_nan = breakpoints_nan.diff("bp")
print("\nDiffs shape:", diffs_nan.shape)
print("Diffs:\n", diffs_nan.values)

valid_nan = diffs_nan.notnull()
all_positive_nan = ((diffs_nan > 0) | ~valid_nan).all(dim="bp")
all_negative_nan = ((diffs_nan < 0) | ~valid_nan).all(dim="bp")

print("\nAll positive:\n", all_positive_nan.values)
print("All negative:\n", all_negative_nan.values)

result_nan = bool((all_positive_nan | all_negative_nan).all())
print("\nFinal result (with NaNs):", result_nan)
EOF

Repository: FBumann/linopy

Length of output: 176


🏁 Script executed:

# Check the current behavior with opposite directions to confirm the issue
python3 << 'EOF'
import xarray as xr
import numpy as np

# Current implementation test
breakpoints = xr.DataArray(
    [[0, 50, 100], [100, 50, 0]],  # power increasing, efficiency decreasing
    dims=["var", "bp"],
    coords={"var": ["power", "efficiency"], "bp": [0, 1, 2]},
)

diffs = breakpoints.diff("bp")
print("Diffs:\n", diffs.values)

# Current implementation
all_positive = bool((diffs > 0).all())
all_negative = bool((diffs < 0).all())
current_result = all_positive or all_negative

print(f"\nCurrent implementation: all_positive={all_positive}, all_negative={all_negative}")
print(f"Current result (would be used for incremental): {current_result}")
print("This shows the issue: opposite directions are rejected!")
EOF

Repository: FBumann/linopy

Length of output: 176


🏁 Script executed:

# Install xarray and numpy first
pip install xarray numpy -q && python3 << 'EOF'
import xarray as xr
import numpy as np

# Current implementation test
breakpoints = xr.DataArray(
    [[0, 50, 100], [100, 50, 0]],  # power increasing, efficiency decreasing
    dims=["var", "bp"],
    coords={"var": ["power", "efficiency"], "bp": [0, 1, 2]},
)

diffs = breakpoints.diff("bp")
print("Diffs:\n", diffs.values)

# Current implementation
all_positive = bool((diffs > 0).all())
all_negative = bool((diffs < 0).all())
current_result = all_positive or all_negative

print(f"\nCurrent implementation: all_positive={all_positive}, all_negative={all_negative}")
print(f"Current result (rejects opposite directions): {current_result}")

# Proposed implementation
valid = diffs.notnull()
all_positive_proposed = ((diffs > 0) | ~valid).all(dim="bp")
all_negative_proposed = ((diffs < 0) | ~valid).all(dim="bp")
proposed_result = bool((all_positive_proposed | all_negative_proposed).all())

print(f"\nProposed implementation:")
print(f"  all_positive per row: {all_positive_proposed.values}")
print(f"  all_negative per row: {all_negative_proposed.values}")
print(f"  final result (accepts opposite directions): {proposed_result}")
EOF

Repository: FBumann/linopy

Length of output: 602


🏁 Script executed:

# Test with NaNs to verify the proposed fix handles them correctly
pip install xarray numpy -q && python3 << 'EOF'
import xarray as xr
import numpy as np

# Test with NaNs
breakpoints_nan = xr.DataArray(
    [[0, 50, np.nan, 100], [100, 50, 30, np.nan]],
    dims=["var", "bp"],
    coords={"var": ["power", "efficiency"], "bp": [0, 1, 2, 3]},
)

print("Breakpoints with NaNs:")
print(breakpoints_nan.values)

diffs_nan = breakpoints_nan.diff("bp")
print("\nDiffs with NaNs:")
print(diffs_nan.values)

# Current implementation - NaNs break it
all_positive_curr = bool((diffs_nan > 0).all())
all_negative_curr = bool((diffs_nan < 0).all())
print(f"\nCurrent: all_positive={all_positive_curr}, all_negative={all_negative_curr}, result={all_positive_curr or all_negative_curr}")

# Proposed implementation
valid = diffs_nan.notnull()
all_positive_prop = ((diffs_nan > 0) | ~valid).all(dim="bp")
all_negative_prop = ((diffs_nan < 0) | ~valid).all(dim="bp")
proposed_result = bool((all_positive_prop | all_negative_prop).all())

print(f"\nProposed:")
print(f"  valid mask per row: {valid.values}")
print(f"  all_positive per row: {all_positive_prop.values}")
print(f"  all_negative per row: {all_negative_prop.values}")
print(f"  final result (handles NaNs): {proposed_result}")
EOF

Repository: FBumann/linopy

Length of output: 678


🏁 Script executed:

# Look for any test that shows the dict case with opposite monotonicity issue
cat test/test_piecewise_constraints.py | grep -A 40 "test_dict_of_variables"

Repository: FBumann/linopy

Length of output: 1574


Fix monotonicity check to support per-variable directions and handle masked NaNs.

The current implementation incorrectly rejects valid cases where:

  1. Different variables in a dict have opposite monotonic directions (e.g., power increasing while efficiency decreases)
  2. NaN values used for masking are present in the diffs

The check evaluates monotonicity globally across the full DataArray, but should evaluate per slice along dim to allow independent monotonic directions. NaN diffs should be ignored.

🛠️ Suggested fix
-        diffs = breakpoints.diff(dim)
-        # All diffs must be either all positive or all negative (strictly monotonic)
-        all_positive = bool((diffs > 0).all())
-        all_negative = bool((diffs < 0).all())
-        return all_positive or all_negative
+        diffs = breakpoints.diff(dim)
+        valid = diffs.notnull()
+        # Each slice must be strictly increasing or strictly decreasing; ignore NaNs
+        all_positive = ((diffs > 0) | ~valid).all(dim=dim)
+        all_negative = ((diffs < 0) | ~valid).all(dim=dim)
+        return bool((all_positive | all_negative).all())
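The per-slice logic of the suggested fix can be exercised in plain NumPy, independent of xarray (an illustrative stand-in; check_strict_monotonicity here is a standalone sketch, not the linopy method):

```python
import numpy as np

def check_strict_monotonicity(breakpoints, axis=-1):
    """Per-slice strict monotonicity check mirroring the suggested fix:
    each slice along `axis` must be strictly increasing or strictly
    decreasing, with NaN diffs (from masking) ignored."""
    diffs = np.diff(breakpoints, axis=axis)
    invalid = np.isnan(diffs)
    all_pos = np.all((diffs > 0) | invalid, axis=axis)
    all_neg = np.all((diffs < 0) | invalid, axis=axis)
    return bool(np.all(all_pos | all_neg))

# opposite directions per variable: accepted per-slice,
# but rejected by the original global check
bp = np.array([[0.0, 50.0, 100.0],    # increasing (e.g. power)
               [100.0, 50.0, 0.0]])   # decreasing (e.g. efficiency)
print(check_strict_monotonicity(bp))  # True

# a genuinely non-monotonic slice is still rejected
print(check_strict_monotonicity(np.array([[0.0, 50.0, 30.0]])))  # False
```

NaN entries produce NaN diffs, which the `invalid` mask excludes, so masked breakpoints no longer poison the check.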

