Conversation
…commands

- mkdocs.yml — Stripped to just Home + API Reference (no phantom nav entries)
- docs.yaml — Simplified, removed notebook/changelog scripts that don't exist
- release.yaml — Simplified, removed CITATION.cff and script references
- CONTRIBUTING.md — Concise setup + workflow using `uv run` throughout
.github/workflows/ci.yaml — runs on every push to main and all PRs

| Job        | Runs in parallel       | What it does |
|------------|------------------------|--------------|
| Lint       | yes                    | `ruff check` + `ruff format --check` via uvx (no install needed) |
| Check lock | yes                    | `uv lock --check` — fails if the lock file is stale |
| Test       | yes (3.12, 3.13, 3.14) | `uv sync --frozen` → pytest with coverage → upload to Codecov (3.12 only) |

All three jobs run in parallel — no more lint blocking tests.

.github/workflows/release.yaml — triggered by v*.*.* tags

CI → Build → PyPI publish → Verify (~3 min) → GitHub release → Deploy docs

Key improvements:
- fetch-depth: 0 for hatch-vcs (was missing)
- Version-tag mismatch check — fails the build if the git tag doesn't match the built wheel version
- Removed skip-existing: true from PyPI publish (would silently swallow failures)
- Shorter verify loop (~3 min max instead of ~12)
- Consistent setup-uv@v7 everywhere

.github/workflows/docs.yaml — docs with dev auto-deploy
- PRs: build-only verification (mkdocs build --strict)
- Push to main: auto-deploys a dev version via mike
- Release: deploys versioned docs (stable gets aliased to latest)

Changes to pyproject.toml
- Added pytest-cov to dev dependencies (for coverage reporting)
- Strict mode enabled from day one (strict = true in pyproject.toml)
- Third-party libs without stubs (pyoframe, pyoptinterface, etc.) get ignore_missing_imports
- pandas-stubs added for proper pandas typing
- Added to CI as a parallel typecheck job
- All 8 source files pass clean — fixed missing return types on __post_init__, typed solve(), added dict[str, Any] annotations

Ruff (improved)
- New rules added: SIM (simplify), RUF (ruff-specific), PERF (performance), C4 (comprehensions), PIE, RET (return), FLY (f-strings), FURB (refurb), T20 (no print), PLC/PLE (pylint)
- TCH → TC (updated rule prefix)
- Removed redundant exclude list (ruff defaults already cover those dirs)
- Added PLC0415 to ignores (circular import avoidance is intentional)
- Fixed all new violations (list comprehensions, ternaries, list.extend, etc.)

Pre-commit (updated)

| Hook             | Old     | New           |
|------------------|---------|---------------|
| pre-commit-hooks | v5.0.0  | v6.0.0        |
| ruff-pre-commit  | v0.12.4 | v0.15.0       |
| nbstripout       | 0.8.2   | 0.9.0         |
| mypy             | —       | v1.19.1 (new) |

Also added check-merge-conflict and debug-statements hooks.

Rename
- src/energysys/ → src/fluxopt/ (import name now matches package name)
- tests/energysys/ → tests/fluxopt/
- All imports, configs, and docs updated
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings, or use review commands to manage reviews.
📝 Walkthrough

A major project restructuring: renaming "energysys" to "fluxopt," introducing new component abstractions (Port, Converter, Storage), implementing a type system with validation, refactoring model logic, and establishing modern CI/CD workflows via GitHub Actions. Comprehensive mathematical documentation and test coverage were added.
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 3 failed (2 warnings, 1 inconclusive)
Welcome to Codecov 🎉 Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests. ℹ️ You can also turn on project coverage checks and project coverage reporting on pull request comments.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In @.github/workflows/docs.yaml:
- Around line 75-83: The workflow step is interpolating ${{ inputs.version }}
directly inside the run script which risks shell injection; instead set an
environment variable for the input (e.g., add env: VERSION: ${{ inputs.version
}} on the step) and then use that shell variable (VERSION or ${VERSION#v})
inside the script and in the uv run mike deploy / set-default commands so the
run block never contains raw GitHub Actions interpolation.
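For readers unfamiliar with the injection risk, here is a minimal Python sketch of the same idea (hypothetical function names; the actual workflow fix just moves the interpolation into an `env:` block):

```python
import subprocess


def render_unsafe(version: str) -> str:
    # Anti-pattern: interpolating untrusted input straight into the script
    # body, analogous to ${{ inputs.version }} inside a workflow `run:` block.
    # A crafted value becomes part of the shell command itself.
    return f'echo "deploying {version}"'


def deploy_safe(version: str) -> str:
    # Suggested pattern: pass the value through the environment so the script
    # text never contains the raw input; ${VERSION#v} strips a leading "v".
    result = subprocess.run(
        ['sh', '-c', 'echo "deploying ${VERSION#v}"'],
        env={'VERSION': version},
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()
```

With the safe variant, a malicious input like `"; rm -rf ~; echo "` stays an inert string inside an environment variable instead of becoming shell syntax.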
In @.github/workflows/release.yaml:
- Around line 78-91: The log message inside the Verify step is misleading
because it prints "waited ${delay}s total" while ${delay} is just the current
interval; update the loop to track cumulative time (e.g., initialize a
TOTAL_WAIT=0 before the for loop, add TOTAL_WAIT=$((TOTAL_WAIT + delay)) inside
each iteration) and change the echo to use ${TOTAL_WAIT}s (or print both current
interval and cumulative) when reporting retries; ensure the rest of the logic
using VERSION and delay remains unchanged.
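A minimal Python sketch of the suggested accumulator (hypothetical helper; sleeps are omitted for brevity, and the real workflow step does this in bash):

```python
def verify_with_retries(check, delays=(5, 10, 20, 30)):
    # Track cumulative wait time so log messages report the total, not
    # just the current interval. A real verify loop would also call
    # time.sleep(delay) before re-checking.
    total_wait = 0
    log = []
    for delay in delays:
        if check():
            log.append(f'verified after waiting {total_wait}s total')
            return log
        total_wait += delay  # mirrors TOTAL_WAIT=$((TOTAL_WAIT + delay))
        log.append(f'not yet available; waited {total_wait}s total (last interval: {delay}s)')
    log.append(f'gave up after {total_wait}s total')
    return log
```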
- Around line 34-41: The PKG_VERSION extraction uses a greedy sed regex and
captures the wrong field; update the sed expression used to set PKG_VERSION so
it anchors to the start of the filename and captures the second hyphen-delimited
segment (the version) instead of using a greedy leading match—replace the
current sed pattern in the PKG_VERSION assignment with one that starts with
^[^-]*- and captures \1 as the version (so TAG_VERSION vs PKG_VERSION comparison
uses the actual wheel version).
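The difference between the greedy and anchored patterns can be checked directly in Python, assuming a standard wheel filename layout (`name-version-pytag-abitag-platform.whl`):

```python
import re

WHEEL = 'fluxopt-0.1.0-py3-none-any.whl'  # hypothetical wheel filename

# Greedy: `.*` swallows everything up to the LAST pair of hyphens,
# so the captured group is a wheel tag, not the version.
greedy = re.sub(r'.*-([^-]*)-.*', r'\1', WHEEL)

# Anchored: match the distribution name first, then capture the
# second hyphen-delimited segment, which is the version.
anchored = re.sub(r'^[^-]*-([^-]*)-.*', r'\1', WHEEL)
```

Here `greedy` yields `'none'` while `anchored` yields `'0.1.0'`, which is why the tag-vs-wheel comparison only works with the anchored form.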
In `@src/fluxopt/data.py`:
- Around line 412-415: Validate the length of dt when it's an iterable before
building dt_values: if dt is an int/float handle as now, but when dt is a
list/iterable check that len(dt) == len(ts_series) and raise a clear ValueError
mentioning ts_series and dt lengths; mirror the validation style used in
to_polars_series. Update the block that constructs dt_values (variables dt,
dt_values, ts_series) to perform this check and only then create dt_df.
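A sketch of the suggested validation, using a hypothetical helper name rather than the project's actual code:

```python
def build_dt_values(ts_series, dt):
    # Scalars broadcast over all timesteps; iterables must match the
    # number of timesteps exactly, otherwise raise a clear ValueError.
    if isinstance(dt, (int, float)):
        return [float(dt)] * len(ts_series)
    dt_values = [float(v) for v in dt]
    if len(dt_values) != len(ts_series):
        msg = (
            f'dt has {len(dt_values)} entries but ts_series has '
            f'{len(ts_series)} timesteps; lengths must match'
        )
        raise ValueError(msg)
    return dt_values
```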
- Around line 72-73: The computation of lb and ub uses "size or 1e9" which
treats size=0 as falsy and incorrectly substitutes 1e9; change that to use an
explicit None check (e.g., use size if size is not None else 1e9) so zero sizes
are honored; update the expressions computing lb and ub (the lines referencing
rel_min, rel_max, and size) to mirror the explicit None-check logic used in
FlowsTable.from_elements.
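The falsy-zero pitfall is easy to demonstrate in isolation (hypothetical helper; the constants mirror the ones quoted above):

```python
def flow_bounds(rel_min, rel_max, size):
    # Buggy pattern: `size or 1e9` treats size=0 as falsy and silently
    # substitutes the big-M constant.
    buggy_ub = rel_max * (size or 1e9)
    # Fix: explicit None check, so a zero size is honored.
    effective = size if size is not None else 1e9
    return rel_min * effective, rel_max * effective, buggy_ub
```

For `size=0.0` the fixed bounds are both `0.0`, while the buggy upper bound balloons to roughly `8e8`.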
🧹 Nitpick comments (2)
tests/fluxopt/test_data.py (1)
10-13: Dead `Source` instantiation on line 11.

The `Source('src', outputs=[flow])` on line 11 is created and immediately discarded. Its only side effect is mutating `flow._component` and `flow._is_input`, which are then overwritten by the second `Source('src', outputs=[flow])` on line 13. Line 11 can be removed.

♻️ Suggested cleanup

```diff
 def test_bounds_with_size(self, timesteps_3):
     flow = Flow('f1', bus='b', size=100, relative_minimum=0.2, relative_maximum=0.8)
-    Source('src', outputs=[flow])
     data = build_model_data(
         timesteps_3, [Bus('b')], [Effect('cost', is_objective=True)], [Source('src', outputs=[flow])]
     )
-    # Re-collect since Source.__post_init__ mutates flow
     bounds = data.flows.bounds.filter(pl.col('flow') == 'f1')
```

mkdocs.yml (1)
6-7: Navigation is very minimal — only a Home page.

The `mkdocstrings` plugin is configured (lines 72-89) but there are no API reference pages in the navigation. The plugin won't generate anything unless referenced via `::: fluxopt` directives in markdown files. This is fine if intentional for the alpha stage, but worth noting as a follow-up.
…p-level env: block to prevent shell injection.

.github/workflows/release.yaml (2 fixes):
- Verify step: Added TOTAL_WAIT accumulator so the log message shows actual cumulative wait time instead of the current interval.
- Version extraction: Changed sed from `.*-\([^-]*\)-.*` (greedy) to `^[^-]*-\([^-]*\)-.*` so it correctly captures the second segment (version) of the wheel filename.

src/fluxopt/data.py (2 fixes):
- dt validation: When dt is an iterable, validates `len(dt_values) == len(ts_series)` and raises a clear ValueError on mismatch.
- size=0 bug: Changed `size or 1e9` to `size if size is not None else 1e9` so that size=0 is honored instead of being treated as falsy.
…nd, no big-M hack)
- size is None and rel_max != 1.0 → raises ValueError since rel_max is meaningless without a size
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/fluxopt/data.py (1)
36-42: ⚠️ Potential issue | 🟠 Major

Inconsistent bounds strategy for size is None: big-M (1e9) in `from_elements` vs `float('inf')` in `from_dataframe`.

`from_elements` (lines 41-42) multiplies relative bounds by 1e9 when size is None, while `from_dataframe` (lines 79-80) sets `lb=0.0` / `ub=float('inf')`. These two code paths for the same logical scenario will produce different LP constraints, which can cause solver-dependent behavior (some solvers reject or behave poorly with `inf`; others are fine with it but would differ from 1e9). Pick one strategy and apply it consistently.

Additionally, `from_dataframe` validates `rel_max != 1.0` when size is None (lines 71-73), but `from_elements` does not perform this check — consider adding parity.

Proposed fix: align from_elements with from_dataframe

```diff
 if f.size is not None:
     lb = lb * f.size
     ub = ub * f.size
 else:
-    # No size: bounds are relative, use large M for upper
-    lb = lb * 1e9
-    ub = ub * 1e9
+    lb = 0.0
+    ub = float('inf')
```

And add validation before the loop in `from_elements`:

```python
for f in flows:
    if f.size is None and f.relative_maximum != 1.0:
        # Only works if relative_maximum is a scalar; for series, consider a different check
        msg = f"Flow '{f.label}': relative_maximum has no effect without a size"
        raise ValueError(msg)
```

Also applies to: 75-80
🤖 Fix all issues with AI agents
In `@pyproject.toml`:
- Around line 28-32: The dependency entry "highsbox>=1.13.1" is invalid because
that version does not exist on PyPI; update the pyproject.toml dependency list
by either changing the package constraint to "highsbox>=1.12.0" (the latest
highsbox release) or replacing the package with the correct HiGHS Python wrapper
"highspy>=1.13.1" if you intended that project; edit the dependencies array
where "highsbox>=1.13.1" appears to use one of those two corrected strings and
leave "polars>=1.38.1" and "pyoframe>=1.1.0" unchanged.
In `@src/fluxopt/data.py`:
- Around line 71-73: The validation currently raises when size is None and
rel_max != 1.0 but ignores rel_min; update the check in the function handling
each row (the block using variables size, rel_min, rel_max and row['flow']) to
also validate rel_min when size is None — e.g., if size is None and (rel_max !=
1.0 or rel_min != 0.0) raise a ValueError with a clear message referencing
row['flow'] (mirror the existing message style), so non-zero rel_min without a
size is surfaced instead of silently becoming 0.0.
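A sketch of the broadened check (hypothetical helper name; the message format mirrors the style quoted in the prompt):

```python
def validate_relative_bounds(flow, size, rel_min, rel_max):
    # Without a size, BOTH relative bounds are meaningless: reject a
    # non-default rel_min as well as rel_max, so neither is silently
    # discarded when the absolute bounds are computed.
    if size is None and (rel_max != 1.0 or rel_min != 0.0):
        msg = f"Flow '{flow}': rel_min={rel_min}/rel_max={rel_max} have no effect without a size"
        raise ValueError(msg)
```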
In `@src/fluxopt/types.py`:
- Around line 25-29: The conversion block that uses polars (pl.from_pandas) must
catch missing-optional-dependency errors in addition to TypeError: update the
except clause in the value conversion (the code that checks isinstance(value,
pl.Series) and calls pl.from_pandas(value)) to catch ImportError and
ModuleNotFoundError alongside TypeError (e.g., except (TypeError, ImportError,
ModuleNotFoundError):) and then raise the same TypeError(f'Unsupported
TimeSeries type: {type(value)}') from None so passing a pandas Series when
pandas/pyarrow isn't installed yields the handled, user-facing TypeError.
🧹 Nitpick comments (1)
src/fluxopt/data.py (1)
374-374: Inline import inside `_collect_flows` to break circular dependency.

This is a common pattern for circular imports, but note that `_collect_flows` is called from `build_model_data`, which also imports from `fluxopt.components` (line 401). If these always run together, the overhead is negligible, but you could consider consolidating the import to one location.
```toml
dependencies = [
    "highspy>=1.13.1",
    "pandas>=3.0.0",
    "highsbox>=1.13.1",
    "polars>=1.38.1",
    "pyoframe>=1.0.0",
]

[project.optional-dependencies]
network_viz = [
    "dash>=3.0,<4",
    "dash-cytoscape>=1.0,<2",
    "dash-daq>=0.6,<1",
    "networkx>=3.0,<4",
    "werkzeug>=3.0,<4",
    "flask>=3.0,<4",
]
full = [
    "fluxopt[network_viz]",
    "tsam>=2.3,<4",
    "pyvis==0.3.2",
    "scipy>=1.15,<2",
    "gurobipy>=10.0,<14; python_version<'3.14'",
    "pyoframe>=1.1.0",
]
```
🧩 Analysis chain
🌐 Web query:
highsbox 1.13.1 PyPI
💡 Result:
highsbox does not have a 1.13.1 release on PyPI. The newest highsbox version shown on PyPI is 1.12.0 (Oct 27, 2025). [1]
If you meant the HiGHS Python wrapper highspy, PyPI shows highspy 1.13.1 (Feb 11, 2026). [2]
Install:
pip install highsbox==1.12.0
# or, if you meant highspy:
pip install highspy==1.13.1

🌐 Web query:
polars 1.38.1 PyPI
💡 Result:
polars==1.38.1 is available on PyPI (released Feb 6, 2026). The immediately previous release 1.38.0 is yanked. [1]
Install:
pip install polars==1.38.1

PyPI files for 1.38.1 include: polars-1.38.1-py3-none-any.whl and polars-1.38.1.tar.gz [1]
🌐 Web query:
pyoframe 1.1.0 PyPI
💡 Result:
pyoframe v1.1.0 on PyPI is a “blazing fast linear program interface” / library for formulating large, sparse optimization models using Pandas or Polars DataFrames. It was released Jan 7, 2026, and requires Python >= 3.9. [1]
Install:
pip install pyoframe==1.1.0

[1]
Downloads for 1.1.0: pyoframe-1.1.0.tar.gz (source) and pyoframe-1.1.0-py3-none-any.whl (wheel). [1]
Newer versions exist (latest shown on PyPI: 1.2.1, released Feb 4, 2026). [2]
Sources: [1] [2]
highsbox version 1.13.1 does not exist on PyPI and must be corrected.
The dependency highsbox>=1.13.1 cannot be satisfied. PyPI has no release of highsbox 1.13.1; the latest version is 1.12.0 (Oct 27, 2025). Either use highsbox>=1.12.0 or, if you meant the HiGHS Python wrapper, use highspy>=1.13.1 (released Feb 11, 2026).
The other dependencies are correct: polars>=1.38.1 (released Feb 6, 2026) and pyoframe>=1.1.0 (released Jan 7, 2026) both exist on PyPI.
🤖 Prompt for AI Agents
In `@pyproject.toml` around lines 28 - 32, The dependency entry "highsbox>=1.13.1"
is invalid because that version does not exist on PyPI; update the
pyproject.toml dependency list by either changing the package constraint to
"highsbox>=1.12.0" (the latest highsbox release) or replacing the package with
the correct HiGHS Python wrapper "highspy>=1.13.1" if you intended that project;
edit the dependencies array where "highsbox>=1.13.1" appears to use one of those
two corrected strings and leave "polars>=1.38.1" and "pyoframe>=1.1.0"
unchanged.
```python
if size is None and rel_max != 1.0:
    msg = f"Flow '{row['flow']}': rel_max={rel_max} has no effect without a size"
    raise ValueError(msg)
```
Validation only checks rel_max but not rel_min when size is None.
If size is None and rel_min != 0.0, the lower bound silently becomes 0.0 (line 79), discarding the user's intent. Consider also warning or raising when rel_min is non-zero without a size, for consistency.
🤖 Prompt for AI Agents
In `@src/fluxopt/data.py` around lines 71 - 73, The validation currently raises
when size is None and rel_max != 1.0 but ignores rel_min; update the check in
the function handling each row (the block using variables size, rel_min, rel_max
and row['flow']) to also validate rel_min when size is None — e.g., if size is
None and (rel_max != 1.0 or rel_min != 0.0) raise a ValueError with a clear
message referencing row['flow'] (mirror the existing message style), so non-zero
rel_min without a size is surfaced instead of silently becoming 0.0.
```python
if not isinstance(value, pl.Series):
    try:
        value = pl.from_pandas(value)
    except TypeError:
        raise TypeError(f'Unsupported TimeSeries type: {type(value)}') from None
```
🧩 Analysis chain
🏁 Script executed:
cat -n src/fluxopt/types.py | head -40

Repository: FBumann/fluxopt
Length of output: 1433
🏁 Script executed:
# Check project dependencies
if [ -f setup.py ]; then cat setup.py; elif [ -f pyproject.toml ]; then cat pyproject.toml; elif [ -f requirements.txt ]; then cat requirements.txt; fi

Repository: FBumann/fluxopt
Length of output: 3431
🏁 Script executed:
# Search for other uses of pl.from_pandas in the codebase
rg "from_pandas" -B 3 -A 3

Repository: FBumann/fluxopt
Length of output: 494
🏁 Script executed:
# Check if pandas is imported and how
rg "import pandas|from pandas" -n

Repository: FBumann/fluxopt
Length of output: 155
🏁 Script executed:
# Check test files for this function
find tests -name "*.py" -exec grep -l "to_polars_series\|from_pandas" {} \;

Repository: FBumann/fluxopt
Length of output: 86
🏁 Script executed:
# Look at test_types.py to understand how this function is tested
cat tests/fluxopt/test_types.py

Repository: FBumann/fluxopt
Length of output: 1670
🌐 Web query:
polars from_pandas ImportError ModuleNotFoundError exceptions
💡 Result:
Common ImportError / ModuleNotFoundError cases around polars.from_pandas(...) (and how to fix them):
- ModuleNotFoundError: No module named 'pyarrow' (or similar)

`pl.from_pandas(...)` requires both `pandas` and `pyarrow`. Install them (preferably via the Polars "extras"; Polars documents the optional-dependency feature flags/extras `pandas`, `pyarrow`, `all`). [1][2]

```shell
python -m pip install -U "polars[pandas,pyarrow]"
# or (heavier, but simplest)
python -m pip install -U "polars[all]"
```
- ModuleNotFoundError: pa.array requires 'pyarrow' module to be installed, even though you "installed pyarrow"

This almost always means `pyarrow` is installed in a different environment/interpreter than the one running your code (common with Jupyter/VS Code/conda). This exact symptom has been reported by users calling `polars.from_pandas(df)`. [3] Quick checks:

```python
import polars as pl
pl.show_versions()
```

Then reinstall into that interpreter/kernel with `python -m pip install -U "polars[pandas,pyarrow]"` and restart the kernel. [1][3]
- Confusion about "exceptions"

Polars exceptions live under `polars.exceptions` (e.g., `pl.exceptions.ComputeError`, `pl.exceptions.ColumnNotFoundError`), and the docs list the available exception types. [4]
Sources: Polars from_pandas docs (dependency requirement). [2] Polars installation/feature flags (extras). [1] Polars GitHub issue showing the pa.array requires 'pyarrow' error during from_pandas. [3] Polars exceptions reference. [4]
Exception handler should be broadened to handle missing optional dependencies.
Since pandas is not a required dependency (only in dev), users can pass pd.Series per the type signature but will encounter unhandled ImportError/ModuleNotFoundError if pandas or pyarrow is not installed at runtime when pl.from_pandas() is called. The current exception handler only catches TypeError, leaving these errors uncaught.
Proposed fix
if not isinstance(value, pl.Series):
- try:
- value = pl.from_pandas(value)
- except TypeError:
- raise TypeError(f'Unsupported TimeSeries type: {type(value)}') from None
+ try:
+ value = pl.from_pandas(value)
+ except (TypeError, ImportError, ModuleNotFoundError):
+ raise TypeError(f'Unsupported TimeSeries type: {type(value)}') from None📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 if not isinstance(value, pl.Series):
     try:
         value = pl.from_pandas(value)
-    except TypeError:
+    except (TypeError, ImportError, ModuleNotFoundError):
         raise TypeError(f'Unsupported TimeSeries type: {type(value)}') from None
```
🤖 Prompt for AI Agents
In `@src/fluxopt/types.py` around lines 25 - 29, The conversion block that uses
polars (pl.from_pandas) must catch missing-optional-dependency errors in
addition to TypeError: update the except clause in the value conversion (the
code that checks isinstance(value, pl.Series) and calls pl.from_pandas(value))
to catch ImportError and ModuleNotFoundError alongside TypeError (e.g., except
(TypeError, ImportError, ModuleNotFoundError):) and then raise the same
TypeError(f'Unsupported TimeSeries type: {type(value)}') from None so passing a
pandas Series when pandas/pyarrow isn't installed yields the handled,
user-facing TypeError.
- Timesteps type alias (list[datetime] | list[str] | pl.Series | pd.DatetimeIndex)
- normalize_timesteps() — converts any timestep input to pl.Series('time', ...) preserving dtype
- compute_dt() — auto-derives dt from datetime diffs (hours), defaults to 1.0 for strings, validates explicit dt
- compute_end_time() — computes end-of-horizon time (datetime + timedelta or "_end")
src/fluxopt/data.py — Changed:
- All pl.String hardcoded schemas → time_dtype = timesteps.dtype (dynamic)
- ModelData gained weights (all 1.0) and charge_state_times (N+1 entries) fields
- StoragesTable gained cs_bounds DataFrame with pre-computed absolute bounds (capacity * relative)
- EffectsTable.time_bounds split into time_bounds_lb / time_bounds_ub (pre-filtered, no None checks needed in model)
- build_model_data() now accepts Timesteps and dt: ... | None, uses normalize_timesteps/compute_dt
src/fluxopt/model.py — Vectorized constraints:
- Storage capacity: single m.cs_cap = charge_state <= Param(cap_expanded) replaces per-storage loop
- Storage time-varying bounds: single m.cs_lb/m.cs_ub constraints replace nested per-storage/per-time loops
- Effect total bounds: single m.effect_min_total/m.effect_max_total replaces per-effect loop
- Effect per-hour bounds: single m.effect_min_ph/m.effect_max_ph replaces per-row loop
- Weights multiplied into objective: (effect_per_timestep * Param(weights)).sum('time')
- Storage balance uses charge_state_times with integer naming (storage_bal_{stor}_{i})
src/fluxopt/results.py — Dynamic time_dtype for empty fallback DataFrames
src/fluxopt/__init__.py — Updated solve() signature, exported Timesteps, normalize_timesteps, compute_dt
Tests — 61 tests (was 32):
- 17 new type function tests (normalize_timesteps, compute_dt, compute_end_time)
- Key solver tests parametrized for [str]/[dt]: bus_balance (6→6), effects (8→8), storage (5→6+1 dedicated datetime)
- src/fluxopt/types.py — Timesteps type alias now uses list[int] instead of list[str]. normalize_timesteps() rejects strings (TypeError), accepts list[int] → pl.Int64, empty list defaults to pl.Datetime. compute_dt() uses .is_integer() instead of == pl.String. compute_end_time() returns datetime | int (no more '_end' sentinel).
- src/fluxopt/data.py — Two fallback dtypes changed from pl.String() to pl.Datetime().

Test changes:
- conftest.py — timesteps_3/timesteps_4 now return hourly datetimes. Removed redundant _dt variants.
- test_types.py — ts fixture uses datetime. Added tests for list[int], pl.Int64, string rejection. Removed all string-based tests.
- test_storage.py — Removed parametrized ts3/ts4 fixtures and test_cyclic_storage_datetime (now redundant). Tests use timesteps_3/timesteps_4 directly.
- test_end_to_end.py — String timesteps → datetime. Removed parametrized fixtures. Added test_int_timesteps smoke test.
- test_bus_balance.py / test_effects.py — Removed parametrized ts3 fixtures, tests now use timesteps_3 directly.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @.pre-commit-config.yaml:
- Around line 20-29: The mypy strict run is failing because pyoframe is treated
as an untyped package; update the mypy configuration to silence import-untyped
for pyoframe by adding an override that sets module = "pyoframe" and
ignore_missing_imports = true (e.g., in pyproject.toml or a mypy.ini/mypy config
section), or alternatively add a local inline suppression at the import in
src/fluxopt/model.py using "# type: ignore[import-untyped]"; reference the
pyoframe import in src/fluxopt/model.py and the mypy pre-commit hook (mypy with
--strict) when making the change.
In `@src/fluxopt/data.py`:
- Line 324: The code uses "cap = s.capacity or 1e9" which treats 0/0.0 as falsy
and wrongly substitutes 1e9; change it to explicitly check for None (e.g., set
cap = s.capacity if s.capacity is not None else 1e9) so zero capacities are
preserved; update the assignment where cap is computed (reference: s.capacity
and cap) accordingly.
In `@src/fluxopt/types.py`:
- Around line 111-113: compute_end_time's integer branch currently returns last
+ 1 and ignores the caller-provided dt_series; change it so when
timesteps.dtype.is_integer() you compute last_dt = dt_series[-1] if dt_series is
not None (falling back to 1 if dt_series is None) and return last + last_dt
instead of hardcoding +1; update the branch in compute_end_time to use
timesteps, dt_series, and last so integer sequences respect the provided
timestep spacing.
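A sketch of the fixed branch (hypothetical function name; the real `compute_end_time` operates on polars Series, while this illustration uses plain lists):

```python
from datetime import datetime, timedelta


def compute_end_time_sketch(timesteps, dt_series=None):
    # Integer branch: step past the last timestep by the LAST dt
    # (defaulting to 1), instead of a hardcoded +1, so non-unit
    # spacing is respected. Datetime timesteps keep their
    # timedelta-based behavior (dt interpreted as hours).
    last = timesteps[-1]
    if isinstance(last, int):
        last_dt = dt_series[-1] if dt_series is not None else 1
        return last + last_dt
    last_dt_hours = dt_series[-1] if dt_series is not None else 1.0
    return last + timedelta(hours=last_dt_hours)
```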
- Around line 64-71: The docstring for compute_dt is stale/misleading: update it
to reflect current behavior — remove the unreachable "dt=None + string:
broadcast 1.0" case (since normalize_timesteps rejects string timesteps) and
correct the wording about edge handling to state that when dt is derived from
datetime timesteps the function sets the first timestep equal to the second
(first = second) and leaves the last element unchanged rather than "last =
second-to-last"; reference compute_dt and normalize_timesteps to ensure
docstring matches their actual logic.
🧹 Nitpick comments (5)
src/fluxopt/model.py (1)
155-184: Storage balance loop is still O(storages × timesteps) with per-row filtering.

Each iteration filters `time_params` and `dt` by time, which is O(N) per step. For large models this could become a bottleneck. Consider pre-indexing into dicts (e.g., `{t: row for ...}`) before the loop, or vectorizing the balance constraints entirely using DataFrame operations similar to the effect/capacity refactors above.

src/fluxopt/data.py (1)
264-268: Fallback `pl.Datetime()` when the 'time' column is absent — verify intent.

If `flow_effects_df` has no `time` column, `schema.get('time', pl.Datetime())` silently defaults. This could mask a caller error (e.g., passing a DataFrame with the wrong schema). The same pattern repeats at line 375 for `StoragesTable.from_dataframe`. Consider raising if the expected column is missing, or document this as an intentional fallback for empty-effect scenarios.

tests/fluxopt/test_storage.py (1)
64-66: Simplify initial charge state assertion.

The filter is redundant since `cs['value'][0]` already gives the first time point's value (data is time-ordered). The other storage tests (lines 93-95, 124-125) use direct index access consistently.

Proposed simplification

```diff
-# First row is the initial charge state (first timestep)
-first_time = cs['time'][0]
-assert cs.filter(cs['time'] == first_time)['value'][0] == pytest.approx(0.0, abs=1e-6)
+# First row is the initial charge state (first timestep)
+assert cs['value'][0] == pytest.approx(0.0, abs=1e-6)
```

src/fluxopt/types.py (2)
47-52: Same optional-dependency concern for the pandas path.

If a non-list, non-`pl.Series` object lacking `to_pydatetime` is passed, the `AttributeError` catch correctly converts it to a `TypeError`. This path looks fine, but note that the error message could be more helpful — it doesn't mention that `pandas.DatetimeIndex` is the expected non-list type.
96-102: Remove stale/misleading inline comments.

Lines 98-101 discuss "extend last" logic that doesn't exist in the code, creating confusion. The actual behavior (first element patched, last kept as-is) is already handled by line 97.

Suggested cleanup

```diff
 dt_values = diffs.to_list()
 # First element is None from diff(); use the second element
 dt_values[0] = dt_values[1]
-# Extend last: already correct from diff, but last element uses second-to-last
-# Actually diff() gives [None, d1, d2, ..., d_{n-1}] so we have n values
-# dt_values[0] = dt_values[1] already handles first
-# The last value is the diff between last and second-to-last, which is correct
 return pl.Series('dt', dt_values)
```
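For reference, the dt derivation this comment discusses (patch the first diff from the second, keep the last as-is) can be sketched with plain lists instead of a polars Series (hypothetical helper name):

```python
from datetime import datetime


def derive_dt_hours(timesteps):
    # Successive diffs in hours. A Series.diff() would leave the first
    # slot undefined, so the first timestep reuses the first real diff
    # (the dt_values[0] = dt_values[1] patch); the last value is the
    # diff between the last two timesteps and is kept as-is.
    diffs = [
        (b - a).total_seconds() / 3600.0
        for a, b in zip(timesteps, timesteps[1:])
    ]
    return [diffs[0], *diffs]
```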
```yaml
- repo: https://github.com/pre-commit/mirrors-mypy
  rev: v1.19.1
  hooks:
    - id: mypy
      additional_dependencies:
        - pandas-stubs
        - polars
      args: [--strict, src/]
      pass_filenames: false
      files: ^src/
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check which third-party packages are imported in src/ to verify mypy coverage
rg -n --type=py '^\s*(import|from)\s+' src/ | grep -v 'fluxopt' | grep -v '__future__' | grep -v 'typing' | sort -u | head -50

Repository: FBumann/fluxopt
Length of output: 41
🌐 Web query:
pyoframe Python package type support py.typed type stubs
💡 Result:
pyoframe uses inline type annotations, but (as of pyoframe 1.2.1, released Feb 4, 2026) it does not appear to ship the PEP 561 marker file `py.typed` (it's not present in the repo at `src/pyoframe/py.typed`, which returns 404). [1], [2]

Because type checkers (e.g., mypy/pyright) rely on `py.typed` to treat an installed third-party package as "typed", pyoframe will generally be treated as untyped unless that marker is added (or a separate `*-stubs` package exists). [3]

Workarounds:

- Add an empty `py.typed` file under the installed package directory (e.g., `.../site-packages/pyoframe/py.typed`), or
- Use a tool that injects missing `py.typed` markers (e.g., `typeforce`). [4], [3]
Sources:
[1] https://pypi.org/project/pyoframe/
[2] https://github.com/Bravos-Power/pyoframe/ (and src/pyoframe/py.typed missing)
[3] https://typing.python.org/en/latest/spec/distributing.html
[4] https://pypi.org/project/typeforce/
🏁 Script executed:
```bash
#!/bin/bash
# Check if pyoframe is imported anywhere in src/
rg 'pyoframe' src/ --type=py -n
```

Repository: FBumann/fluxopt
Length of output: 103
🏁 Script executed:
```bash
#!/bin/bash
# List ALL imports in src/fluxopt/model.py to see what's actually imported
if [ -f "src/fluxopt/model.py" ]; then
  rg '^\s*(import|from)' src/fluxopt/model.py -n
fi
```

Repository: FBumann/fluxopt
Length of output: 260
**Configure mypy to handle the untyped pyoframe import under `--strict`.**

pyoframe is imported in `src/fluxopt/model.py` but lacks the PEP 561 marker file `py.typed`. Although pyoframe includes inline type annotations, `mypy --strict` will treat it as untyped and emit `import-untyped` errors. Adding pyoframe to `additional_dependencies` won't resolve this. Instead, use one of these approaches:

- Add a mypy configuration file to allow untyped imports for pyoframe (e.g., `[[tool.mypy.overrides]]` with `module = "pyoframe"` and `ignore_missing_imports = true`)
- Use an inline `# type: ignore[import-untyped]` comment at the import statement
- Consider using a tool like `typeforce` to inject the missing `py.typed` marker
🤖 Prompt for AI Agents
In @.pre-commit-config.yaml around lines 20 - 29, The mypy strict run is failing
because pyoframe is treated as an untyped package; update the mypy configuration
to silence import-untyped for pyoframe by adding an override that sets module =
"pyoframe" and ignore_missing_imports = true (e.g., in pyproject.toml or a
mypy.ini/mypy config section), or alternatively add a local inline suppression
at the import in src/fluxopt/model.py using "# type: ignore[import-untyped]";
reference the pyoframe import in src/fluxopt/model.py and the mypy pre-commit
hook (mypy with --strict) when making the change.
```python
cs_lb_rel = to_polars_series(s.relative_minimum_charge_state, timesteps, 'cs_lb')
cs_ub_rel = to_polars_series(s.relative_maximum_charge_state, timesteps, 'cs_ub')

cap = s.capacity or 1e9
```
**`capacity or 1e9` is falsy for `capacity=0`, producing incorrect bounds.**

Same pattern as the previously fixed `size or 1e9` issue. If `capacity` is `0` (or `0.0`), the `or` operator treats it as falsy and substitutes `1e9`.

Proposed fix

```diff
-cap = s.capacity or 1e9
+cap = s.capacity if s.capacity is not None else 1e9
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
cap = s.capacity if s.capacity is not None else 1e9
```
🤖 Prompt for AI Agents
In `@src/fluxopt/data.py` at line 324, The code uses "cap = s.capacity or 1e9"
which treats 0/0.0 as falsy and wrongly substitutes 1e9; change it to explicitly
check for None (e.g., set cap = s.capacity if s.capacity is not None else 1e9)
so zero capacities are preserved; update the assignment where cap is computed
(reference: s.capacity and cap) accordingly.
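The pitfall is easy to reproduce outside fluxopt. A minimal stand-alone sketch (hypothetical helper names, not project code) contrasting the buggy `or` fallback with the explicit `None` check:

```python
BIG_M = 1e9  # hypothetical fallback bound for "unbounded" capacity

def cap_with_or(capacity):
    # Buggy: 0 and 0.0 are falsy, so a zero capacity is silently replaced.
    return capacity or BIG_M

def cap_with_none_check(capacity):
    # Correct: only None triggers the fallback.
    return capacity if capacity is not None else BIG_M

print(cap_with_or(0.0))           # -> 1000000000.0 (wrong: zero capacity lost)
print(cap_with_none_check(0.0))   # -> 0.0 (zero preserved)
print(cap_with_none_check(None))  # -> 1000000000.0 (fallback only for None)
```

The same reasoning applies anywhere `or` is used as a null-fallback on a numeric field that may legitimately be zero.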
```python
def compute_dt(timesteps: pl.Series, dt: float | list[float] | pl.Series | None) -> pl.Series:
    """Compute dt (hours) for each timestep.

    - dt=None + datetime: derive from consecutive differences in hours; last = second-to-last.
    - dt=None + string: broadcast 1.0.
    - dt provided: validate length, return as pl.Series.
    - Single timestep: default to 1.0.
    """
```
**Stale/misleading docstring.**

Two issues:

- Line 68 documents `dt=None + string: broadcast 1.0`, but `normalize_timesteps` now rejects string timesteps, so this path is unreachable.
- Line 67 says `last = second-to-last`; the code actually sets first = second (line 97), and the last element is left as-is (the genuine diff). The phrase reads as if the last element is being patched.

Suggested docstring fix

```diff
 """Compute dt (hours) for each timestep.
-- dt=None + datetime: derive from consecutive differences in hours; last = second-to-last.
-- dt=None + string: broadcast 1.0.
+- dt=None + datetime: derive from consecutive differences in hours; first = second.
+- dt=None + integer: broadcast 1.0.
 - dt provided: validate length, return as pl.Series.
 - Single timestep: default to 1.0.
 """
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def compute_dt(timesteps: pl.Series, dt: float | list[float] | pl.Series | None) -> pl.Series:
    """Compute dt (hours) for each timestep.

    - dt=None + datetime: derive from consecutive differences in hours; first = second.
    - dt=None + integer: broadcast 1.0.
    - dt provided: validate length, return as pl.Series.
    - Single timestep: default to 1.0.
    """
```
🤖 Prompt for AI Agents
In `@src/fluxopt/types.py` around lines 64 - 71, The docstring for compute_dt is
stale/misleading: update it to reflect current behavior — remove the unreachable
"dt=None + string: broadcast 1.0" case (since normalize_timesteps rejects string
timesteps) and correct the wording about edge handling to state that when dt is
derived from datetime timesteps the function sets the first timestep equal to
the second (first = second) and leaves the last element unchanged rather than
"last = second-to-last"; reference compute_dt and normalize_timesteps to ensure
docstring matches their actual logic.
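The corrected behavior can be sketched with plain-Python datetimes (a simplified stand-in for the polars-based `compute_dt`; the real function operates on a `pl.Series`):

```python
from datetime import datetime

def dt_hours(timesteps: list[datetime]) -> list[float]:
    """Consecutive differences in hours; the first entry is patched to equal the second."""
    if len(timesteps) == 1:
        return [1.0]  # single timestep: default to 1.0
    diffs = [
        (b - a).total_seconds() / 3600.0
        for a, b in zip(timesteps, timesteps[1:])
    ]
    # diff of the first element is undefined, so first = second;
    # the last element keeps its genuine diff.
    return [diffs[0], *diffs]

ts = [datetime(2024, 1, 1, h) for h in (0, 1, 3)]
print(dt_hours(ts))  # -> [1.0, 1.0, 2.0]
```

Note that the output has the same length as the input and that only the first entry is patched, matching the corrected docstring wording.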
```python
if timesteps.dtype.is_integer():
    last: int = timesteps[-1]
    return last + 1
```
**Integer path ignores `dt_series`, hardcodes `+1`.**

For datetime timesteps, `compute_end_time` correctly uses `last_dt` from `dt_series`. But for integer timesteps it unconditionally returns `last + 1`, ignoring the caller-provided `dt_series`. If a user passes integer timesteps `[0, 2, 4]` with `dt=2.0`, the end time would be `5` instead of `6`.

Consider making the integer path consistent with the datetime path:

Proposed fix

```diff
 if timesteps.dtype.is_integer():
     last: int = timesteps[-1]
-    return last + 1
+    last_dt_val: float = dt_series[-1]
+    return last + int(last_dt_val)
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
if timesteps.dtype.is_integer():
    last: int = timesteps[-1]
    last_dt_val: float = dt_series[-1]
    return last + int(last_dt_val)
```
🤖 Prompt for AI Agents
In `@src/fluxopt/types.py` around lines 111 - 113, compute_end_time's integer
branch currently returns last + 1 and ignores the caller-provided dt_series;
change it so when timesteps.dtype.is_integer() you compute last_dt =
dt_series[-1] if dt_series is not None (falling back to 1 if dt_series is None)
and return last + last_dt instead of hardcoding +1; update the branch in
compute_end_time to use timesteps, dt_series, and last so integer sequences
respect the provided timestep spacing.
…nal, added `id` as `init=False` field, `eq=False` for identity equality, dropped `_component`. `Bus`, `Effect`, `Storage`: `label` → `id`. `Storage.__post_init__` auto-generates `{id}(charge)` / `{id}(discharge)`.
2. `components.py`: `LinearConverter` → `Converter`. `Port`/`Converter`: `label` → `id`, `__post_init__` auto-generates flow ids as `{self.id}({flow.bus})`. `conversion_factors` uses `Flow` objects as dict keys instead of strings.
3. `tables.py`: all `.label` → `.id`. `ConvertersTable.from_elements` iterates `Flow` keys with `flow_obj.id`. Type annotations updated (`LinearConverter` → `Converter`).
4. `validation.py`: all `.label` → `.id`, renamed `_check_label_uniqueness` → `_check_id_uniqueness`, added `_check_flow_uniqueness` for duplicate flow id detection.
5. `results.py`: `flow_rate(label=)` → `flow_rate(id=)`, `charge_state(label=)` → `charge_state(id=)`.
6. `__init__.py`: `LinearConverter` → `Converter` in imports and `__all__`.

Test changes (6 files)

- Removed first string arg from all `Flow(...)` calls
- `LinearConverter` → `Converter`
- Updated `flow_rate()`/`charge_state()` id strings to match auto-generated patterns (e.g. `'bat(charge)'` → `'battery(charge)'`, `'source'` → `'grid(elec)'`)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/fluxopt/tables.py (1)
37-43: ⚠️ Potential issue | 🟡 Minor: Inconsistent big-M handling between `from_elements` and `from_dataframe` for unsized flows.

`from_elements` (lines 42–43) uses `lb * 1e9` / `ub * 1e9` as big-M bounds for flows without a size, while `from_dataframe` (lines 81–82) uses `lb=0.0` / `ub=float('inf')`. This divergence means the same logical model could produce different solver behavior depending on the construction path. Consider unifying the approach: `float('inf')` is generally preferred by modern solvers over an arbitrary big-M.
🤖 Fix all issues with AI agents
In `@docs/math/converters.md`:
- Line 5: The docs still reference the old API names: replace all occurrences of
the type/name LinearConverter with Converter and change all property accesses of
.label to .id so the examples and prose match the current API; update every code
example and descriptive sentence that mentions LinearConverter or .label (e.g.,
in the examples that construct/describe converters and when reading their
identifiers) to use Converter and .id respectively, ensuring example objects,
variable names, and explanatory text are consistent with the new symbols.
In `@src/fluxopt/model.py`:
- Around line 146-158: cs_lb/ cs_ub bounds are only applied for N timestep rows
from StoragesTable.cs_bounds while m.charge_state is indexed over
charge_state_times (N+1), so the final time point t_end lacks the relative
min/max bounds; update the selection logic that builds cs_lb_df and cs_ub_df (or
the Param creation for m.cs_lb / m.cs_ub) to include the final
charge_state_times entry as well—either by extending cs_bounds to include t_end
when constructing StoragesTable.cs_bounds or by joining cs_lb_df / cs_ub_active
with the full charge_state_times index before creating pf.Param so the N+1th
time receives the same relative bounds; if the omission is intentional, add a
clarifying comment near m.charge_state and the cs_bounds use explaining that
t_end is only bounded by capacity and non-negativity.
In `@src/fluxopt/tables.py`:
- Around line 193-194: The code in from_elements assumes an objective effect
exists and does objective_effects[0].id which will raise IndexError if none are
present; update from_elements to guard against empty objective_effects: check
len(objective_effects) == 0 and raise a clear ValueError (or custom exception)
explaining "no objective effect found; call validate_system or include an effect
with is_objective=True" and otherwise set objective_effect =
objective_effects[0].id; reference the variables
objective_effects/objective_effect and the from_elements function (also note
build_model_data and validate_system call sites) so callers get a helpful error
when invoked independently.
In `@src/fluxopt/validation.py`:
- Around line 10-21: The validate_system signature treats storages as optional
but not converters, causing asymmetry and potential errors; update the
validate_system parameter type for converters to accept None (converters:
list[Converter] | None) and ensure the function normalizes it before use (e.g.,
pass converters or [] into _check_id_uniqueness alongside storages or []), so
callers can omit converters and behavior matches storages handling; search for
validate_system and the call to _check_id_uniqueness to apply the change.
🧹 Nitpick comments (6)
mkdocs.yml (1)
101-103: Consider using MathJax's officially recommended CDN. MathJax's official documentation explicitly recommends `cdn.jsdelivr.net` as the preferred CDN, with `unpkg.com` listed as an alternative. While both work, aligning with the official recommendation provides better long-term compatibility.

♻️ Suggested change

```diff
-  - https://unpkg.com/mathjax@3/es5/tex-mml-chtml.js
+  - https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js
```

The `docs/javascripts/mathjax.js` configuration file exists and is correctly loaded first.

docs/math/storage.md (1)

69-79: Hard-coded line numbers in code mapping will become stale quickly. References like `model.py:127`, `model.py:130–135`, etc. will break as soon as the code is modified. Consider linking to function/method names (e.g., `_create_storage()`) or using anchor comments in the source instead.

src/fluxopt/results.py (1)

20-26: Parameter `id` shadows the Python built-in (trivial nit). Using `id` as a parameter name shadows the built-in `id()`. This is common in domain code and unlikely to cause issues here, but `flow_id`/`storage_id` would be more precise and avoid the shadow.

src/fluxopt/model.py (1)

163-193: Storage balance loop is correct but could benefit from vectorization. The per-storage, per-timestep Python loop (lines 164–193) builds individual constraints via `setattr`. This is correct and safe: `cs_time_list[i + 1]` never goes out of bounds since `cs_time_list` has N+1 entries and `i` ranges over `[0, N-1]`. However, for large models this row-by-row approach will be slow compared to a vectorized pyoframe formulation.

src/fluxopt/elements.py (1)

56-56: `initial_charge_state: float | str | None` is a stringly-typed sentinel. Using the string `'cyclic'` as a sentinel value works, but a dedicated enum or boolean `cyclic` field (as done in `params_rows` via a separate `cyclic` key) would be more type-safe and discoverable. Not blocking, just a thought for future API polish.

src/fluxopt/components.py (1)

17-23: Duplicate flow IDs within a `Port` or `Converter` are caught by validation. A `Port` with multiple imports/exports on the same bus will produce identical flow IDs (e.g., both get `id='p(elec)'`). However, this is not a silent failure: `_check_flow_uniqueness()` explicitly validates and raises `ValueError(f'Duplicate flow id: {flow.id!r}')` during `validate_system()`.

That said, earlier detection in `__post_init__` would improve error reporting. Consider adding a check for duplicate bus values within a single component's flow list to fail fast instead of deferring to downstream validation.
docs/math/converters.md
Outdated
```markdown
## Formulation

A `LinearConverter` enforces linear coupling between its input and output flows.
```
**Documentation references `LinearConverter` and `.label`; both were renamed in this PR.**

Throughout this file, `LinearConverter` should be `Converter` and `.label` should be `.id` to match the current API. This affects lines 5, 20, 47–48, 66–67, 86–91, and 101.
📝 Suggested fixes for code examples
Line 47–48:

```diff
-LinearConverter.boiler("boiler", thermal_efficiency=0.9, fuel_flow=gas, thermal_flow=th)
-# conversion_factors = [{gas.label: 0.9, th.label: -1}]
+Converter.boiler("boiler", thermal_efficiency=0.9, fuel_flow=gas, thermal_flow=th)
+# conversion_factors = [{gas.id: 0.9, th.id: -1}]
```

Line 66–67:

```diff
-LinearConverter.heat_pump("hp", cop=3.5, electrical_flow=el, thermal_flow=th)
-# conversion_factors = [{el.label: 3.5, th.label: -1}]
+Converter.heat_pump("hp", cop=3.5, electrical_flow=el, thermal_flow=th)
+# conversion_factors = [{el.id: 3.5, th.id: -1}]
```

Line 86–91:

```diff
-LinearConverter.chp("chp", eta_el=0.4, eta_th=0.5,
-    fuel_flow=fuel, electrical_flow=el, thermal_flow=th)
-# conversion_factors = [
-#     {fuel.label: 0.4, el.label: -1},
-#     {fuel.label: 0.5, th.label: -1},
-# ]
+Converter.chp("chp", eta_el=0.4, eta_th=0.5,
+    fuel_flow=fuel, electrical_flow=el, thermal_flow=th)
+# conversion_factors = [
+#     {fuel.id: 0.4, el.id: -1},
+#     {fuel.id: 0.5, th.id: -1},
+# ]
```

Line 101:

```diff
-LinearConverter.heat_pump("hp", cop=cop_profile, electrical_flow=el, thermal_flow=th)
+Converter.heat_pump("hp", cop=cop_profile, electrical_flow=el, thermal_flow=th)
```

Also update lines 5 and 20:

```diff
-A `LinearConverter` enforces linear coupling between its input and output flows.
+A `Converter` enforces linear coupling between its input and output flows.
```

```diff
-| \(a_f\) | Conversion coefficient | `LinearConverter.conversion_factors` |
+| \(a_f\) | Conversion coefficient | `Converter.conversion_factors` |
```

🤖 Prompt for AI Agents
In `@docs/math/converters.md` at line 5, The docs still reference the old API
names: replace all occurrences of the type/name LinearConverter with Converter
and change all property accesses of .label to .id so the examples and prose
match the current API; update every code example and descriptive sentence that
mentions LinearConverter or .label (e.g., in the examples that
construct/describe converters and when reading their identifiers) to use
Converter and .id respectively, ensuring example objects, variable names, and
explanatory text are consistent with the new symbols.
```python
# Time-varying charge state bounds — vectorized using pre-computed absolute bounds
cs_lb_df = d.storages.cs_bounds.filter(pl.col('cs_lb') > 0).select(
    'storage', 'time', pl.col('cs_lb').alias('value')
)
cs_ub_active = d.storages.cs_bounds.join(d.storages.params.select('storage', 'capacity'), on='storage').filter(
    pl.col('cs_ub') < pl.col('capacity')
)
cs_ub_df = cs_ub_active.select('storage', 'time', pl.col('cs_ub').alias('value'))

if len(cs_lb_df) > 0:
    m.cs_lb = m.charge_state >= pf.Param(cs_lb_df)
if len(cs_ub_df) > 0:
    m.cs_ub = m.charge_state <= pf.Param(cs_ub_df)
```
**Time-varying charge-state bounds (`cs_lb`/`cs_ub`) only cover N of N+1 time points.**

`cs_bounds` in `StoragesTable` is built from the N regular timesteps (see `tables.py` line 326 loop), but `m.charge_state` is indexed over `charge_state_times` (N+1 entries). The final charge-state time point `t_end` is covered by the capacity upper bound (line 143, cross-joined) and the `lb=0` on the variable, but time-varying lower/upper bounds from `relative_minimum_charge_state` / `relative_maximum_charge_state` are not applied to it.

If this is intentional (the final state is only bounded by capacity and non-negativity), it might be worth a brief comment. Otherwise, the N+1th point should also receive bounds.
🤖 Prompt for AI Agents
In `@src/fluxopt/model.py` around lines 146 - 158, cs_lb/ cs_ub bounds are only
applied for N timestep rows from StoragesTable.cs_bounds while m.charge_state is
indexed over charge_state_times (N+1), so the final time point t_end lacks the
relative min/max bounds; update the selection logic that builds cs_lb_df and
cs_ub_df (or the Param creation for m.cs_lb / m.cs_ub) to include the final
charge_state_times entry as well—either by extending cs_bounds to include t_end
when constructing StoragesTable.cs_bounds or by joining cs_lb_df / cs_ub_active
with the full charge_state_times index before creating pf.Param so the N+1th
time receives the same relative bounds; if the omission is intentional, add a
clarifying comment near m.charge_state and the cs_bounds use explaining that
t_end is only bounded by capacity and non-negativity.
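If the N+1th point should receive bounds, one way to close the gap, sketched here with plain lists (hypothetical helper; the real fix would extend the polars frame behind `cs_bounds`), is to repeat the last timestep's relative bounds at `t_end`:

```python
def extend_cs_bounds(times, bounds, t_end):
    """times: N timestep labels; bounds: N (cs_lb, cs_ub) pairs; t_end: final charge-state time.

    Returns N+1 (time, (cs_lb, cs_ub)) rows, reusing the last timestep's
    relative bounds for the extra charge-state time point.
    """
    assert len(times) == len(bounds)
    return list(zip(times, bounds)) + [(t_end, bounds[-1])]

rows = extend_cs_bounds([0, 1, 2], [(0.1, 0.9), (0.1, 0.9), (0.2, 0.8)], 3)
print(len(rows))  # -> 4
print(rows[-1])   # -> (3, (0.2, 0.8))
```

Whether repeating the last row is the right semantics (versus, say, interpolating) is a modeling decision; the sketch only shows the index extension.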
```diff
 objective_effects = [e for e in effects if e.is_objective]
-objective_effect = objective_effects[0].label
+objective_effect = objective_effects[0].id
```
**`objective_effects[0]` will raise `IndexError` if no effect has `is_objective=True`.**

`from_elements` assumes at least one objective effect exists. While `validate_system` is called before this in `build_model_data`, `from_elements` can also be called independently. A guard or clearer error message would help.

Proposed fix

```diff
 objective_effects = [e for e in effects if e.is_objective]
+if not objective_effects:
+    raise ValueError('Exactly one effect must have is_objective=True')
 objective_effect = objective_effects[0].id
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
objective_effects = [e for e in effects if e.is_objective]
if not objective_effects:
    raise ValueError('Exactly one effect must have is_objective=True')
objective_effect = objective_effects[0].id
```
🤖 Prompt for AI Agents
In `@src/fluxopt/tables.py` around lines 193 - 194, The code in from_elements
assumes an objective effect exists and does objective_effects[0].id which will
raise IndexError if none are present; update from_elements to guard against
empty objective_effects: check len(objective_effects) == 0 and raise a clear
ValueError (or custom exception) explaining "no objective effect found; call
validate_system or include an effect with is_objective=True" and otherwise set
objective_effect = objective_effects[0].id; reference the variables
objective_effects/objective_effect and the from_elements function (also note
build_model_data and validate_system call sites) so callers get a helpful error
when invoked independently.
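The guard is easy to exercise in isolation. A minimal sketch with a simplified stand-in `Effect` (not the fluxopt class) shows both the happy path and the error path:

```python
from dataclasses import dataclass

@dataclass
class Effect:  # simplified stand-in for fluxopt's Effect
    id: str
    is_objective: bool = False

def pick_objective_effect(effects: list[Effect]) -> str:
    """Return the id of the objective effect, failing loudly if none exists."""
    objective_effects = [e for e in effects if e.is_objective]
    if not objective_effects:
        raise ValueError('Exactly one effect must have is_objective=True')
    return objective_effects[0].id

print(pick_objective_effect([Effect('co2'), Effect('cost', is_objective=True)]))  # -> cost
try:
    pick_objective_effect([Effect('co2')])
except ValueError as e:
    print(e)  # -> Exactly one effect must have is_objective=True
```

The `ValueError` replaces a bare `IndexError`, so callers that skip `validate_system` still get an actionable message.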
```python
def validate_system(
    buses: list[Bus],
    effects: list[Effect],
    ports: list[Port],
    converters: list[Converter],
    storages: list[Storage] | None,
    flows: list[Flow],
) -> None:
    _check_id_uniqueness(buses, effects, ports, converters, storages or [])
    _check_bus_references(buses, flows)
    _check_objective(effects)
    _check_flow_uniqueness(flows)
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Check how validate_system is called and find the solve() signature
rg -n 'validate_system' --type=py -C5
```

Repository: FBumann/fluxopt
Length of output: 1918
🏁 Script executed:
```bash
#!/bin/bash
# Search more broadly for validate_system to ensure we found all callers
rg -n 'validate_system' --type=py
```

Repository: FBumann/fluxopt
Length of output: 283
**`converters` should accept `| None` to match the `storages` parameter and enable consistent fallback handling.**

The function signature inconsistently marks `storages` as optional but not `converters`, despite both parameters defaulting to `None` in `build_model_data()`. While the current sole caller (`build_model_data`) normalizes `converters` to `[]` before calling `validate_system`, the type hints should reflect the actual design pattern. Line 18 handles `storages` with `or []`, but `converters` has no fallback in the function body; this asymmetry is confusing and error-prone if `validate_system` is called from elsewhere.
Proposed fix

```diff
 def validate_system(
     buses: list[Bus],
     effects: list[Effect],
     ports: list[Port],
-    converters: list[Converter],
+    converters: list[Converter] | None,
     storages: list[Storage] | None,
     flows: list[Flow],
 ) -> None:
-    _check_id_uniqueness(buses, effects, ports, converters, storages or [])
+    _check_id_uniqueness(buses, effects, ports, converters or [], storages or [])
```

🤖 Prompt for AI Agents
In `@src/fluxopt/validation.py` around lines 10 - 21, The validate_system
signature treats storages as optional but not converters, causing asymmetry and
potential errors; update the validate_system parameter type for converters to
accept None (converters: list[Converter] | None) and ensure the function
normalizes it before use (e.g., pass converters or [] into _check_id_uniqueness
alongside storages or []), so callers can omit converters and behavior matches
storages handling; search for validate_system and the call to
_check_id_uniqueness to apply the change.
- Accepts any iterable of objects with an `id: str` attribute
- Supports `container['gas']` (by id) and `container[0]` (by position)
- Raises `ValueError` on duplicate ids at construction
- Supports `+` for concatenation (returns new `IdList`, validates combined uniqueness)
- Supports `in`, `len`, iteration
- Uses `__slots__` and stores items as a tuple internally (truly frozen)

Integration in `src/fluxopt/components.py`:

- `Port.imports`/`exports` and `Converter.inputs`/`outputs` accept `list[Flow]` from users, get wrapped into `IdList[Flow]` in `__post_init__`
- Shared `_qualify_flows()` helper handles id qualification and wrapping
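A minimal sketch of such a container, reconstructed from the bullet list above (assumed API, not the actual fluxopt implementation):

```python
from dataclasses import dataclass
from typing import Generic, Iterable, Iterator, TypeVar

T = TypeVar('T')  # any object exposing an `id: str` attribute

class IdList(Generic[T]):
    """Frozen sequence addressable by position (int) or by item id (str)."""
    __slots__ = ('_items',)

    def __init__(self, items: Iterable[T]) -> None:
        self._items = tuple(items)  # tuple: truly frozen storage
        seen = set()
        for item in self._items:
            if item.id in seen:
                raise ValueError(f'Duplicate id: {item.id!r}')
            seen.add(item.id)

    def __getitem__(self, key):
        if isinstance(key, str):  # lookup by id
            for item in self._items:
                if item.id == key:
                    return item
            raise KeyError(key)
        return self._items[key]  # lookup by position

    def __contains__(self, key: str) -> bool:
        return any(item.id == key for item in self._items)

    def __len__(self) -> int:
        return len(self._items)

    def __iter__(self) -> Iterator[T]:
        return iter(self._items)

    def __add__(self, other: 'IdList') -> 'IdList':
        # Concatenation returns a new IdList; __init__ re-validates
        # uniqueness across both operands.
        return IdList([*self._items, *other._items])

@dataclass
class Item:  # hypothetical element type for the demo
    id: str

flows = IdList([Item('gas'), Item('elec')])
print(flows['gas'].id, flows[0].id, len(flows))  # -> gas gas 2
```

Wrapping user-supplied `list[Flow]` values in `__post_init__` then keeps the public constructor ergonomic while freezing the stored collection.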
- `FlowsTable` gained an `effect_coefficients: pl.DataFrame` field. The coefficient-building loop (iterating `flow.effects_per_flow_hour`) moved from `EffectsTable.from_elements` into `FlowsTable.from_elements`. `from_dataframe` accepts an optional `effect_coefficients` param (defaults to an empty DataFrame).
- `EffectsTable` lost `flow_coefficients`. `from_elements` no longer takes a `flows` param. `from_dataframe` takes an optional `time_dtype` instead of `flow_effects_df`.
- `build_model_data` call updated accordingly.
src/fluxopt/model.py
- Added `_TemporalSource` dataclass (`name`, `index`, `expression`, `sum_dim`).
- `FlowSystemModel.__init__` initializes `_temporal_sources: list[_TemporalSource]`.
- `add_temporal_contribution()`: public registration method.
- `_create_flow_variables()` now registers flow contributions via `add_temporal_contribution` when `effect_coefficients` is non-empty.
- `_create_effects()` rewritten with the new variable hierarchy:
  - `contributions(flow)`: per-source variable with tracking constraint
  - `effect_per_timestep`: accumulated via `keep_extras()` pattern
  - `effect_temporal`: weighted sum over time
  - `effect_total`: grand total (= `effect_temporal`, extensible for future `effect(periodic)`)
- Bounds applied to `effect_total` and `effect_per_timestep` as before.
- `_set_objective()` unchanged (uses `effect_total`).
src/fluxopt/results.py
- `SolvedModel` gained `contributions: pl.DataFrame` with schema (`source`, `contributor`, `effect`, `time`, `value`).
- `from_model` extracts the solution from each registered `_TemporalSource` variable, renames the source dim to `contributor`, adds a `source` column, and concatenates.
Tests
- test_data.py: data.effects.flow_coefficients → data.flows.effect_coefficients.
- test_effects.py: Added assertions on result.contributions in test_single_cost_effect.
- 53 tests pass, ruff clean, mypy clean.
- Port 19 mathematical correctness tests from flixopt into `tests/math/test_math.py` covering bus balance, conversion, effects, flow constraints, and storage
- Fix `fixed_relative_profile`: exclude fixed flows from lb/ub constraints via anti-join in model; equality constraint is sufficient
- Fix charge state SOC bounds: extend `cs_bounds` to N+1 `charge_state_times`
- Reorganize tests: `math/` for solver tests, root for unit tests
- Add ruff per-file ignores for `scripts/` (T201, RUF003)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- `docs/math/converters.md`: `LinearConverter` → `Converter`, `.label` → `.id`
- `tables.py`: guard against missing objective effect in `EffectsTable.from_elements`
- `validation.py`: accept `converters=None` in `validate_system`

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add a parallel Guide section (getting-started, flows, converters, storage, effects) with API examples and cross-links to the math formulation. Rewrite index.md as a landing page with install and quick example. Rename math/flow-bounds.md → math/flows.md and expand to cover full flow model. Remove stale Code Mapping sections from all math pages. Fix LinearConverter → Converter in notation.md. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- `01-quickstart.ipynb`: rewritten from the flixopt to the fluxopt API. Uses `pl.datetime_range` for timesteps, a plotly bar chart for demand, and polars DataFrames for results and the cost summary.
- `02-heat-system.ipynb`: rewritten with inline data generation (48h sinusoidal demand + off-peak/peak prices). Uses `pl.datetime_range`, plotly line charts for demand/price profiles, and polars DataFrames for results, storage ops, and the combined overview.
- `mkdocs.yml`: added the mkdocs-jupyter plugin and a Notebooks nav section.
- `pyproject.toml`: added a `viz` dependency group with `plotly>=6`. Added per-file ruff ignores for notebooks (T201, RUF001, RUF003).

Ruff, notebook execution, and `mkdocs build --strict` all pass.
- New field `sizes` (`flow`, `size`): `size` is null for unsized flows
- Renamed `bounds` → `relative_bounds` with columns `rel_lb`/`rel_ub` (dimensionless fractions, no size multiplication)
- `fixed` now stores relative profile values (no `* size`)
- Both `from_elements()` and `from_dataframe()` updated accordingly

`src/fluxopt/validation.py`: `validate_flow_bounds()` updated column references from `lb`/`ub` to `rel_lb`/`rel_ub`

`src/fluxopt/model.py`: `_create_flow_variables()` now computes absolute bounds at constraint time:

- Sized flows: joins `relative_bounds` with `sizes`, computes `rel_lb * size` / `rel_ub * size`
- Unsized flows: no bound constraints needed (variable has `lb=0` by default)
- Fixed flows: multiplies relative profile by size for sized flows, uses as-is for unsized

Tests updated: `test_bounds_with_size` asserts relative values + sizes, `test_fixed_profile` asserts relative values, `test_modified_data` mutates with `0.7` instead of `70.0`
Description
Brief description of the changes in this PR.
Type of Change
Related Issues
Closes #(issue number)
Testing
Checklist
Summary by CodeRabbit
New Features

- `solve()` function for streamlined optimization.
- `Port` abstraction for input/output wiring.
- `Converter` class with factory methods (boiler, heat pump, CHP).
- `Storage` support with charge/discharge dynamics.

Documentation
Chores