Commits
43 commits
1885911
event assignments jax - sbml cases 348 - 404
BSnelling Dec 1, 2025
1e5bfeb
fix up sbml test cases - not implemented priority, update t_eps, fix …
BSnelling Dec 5, 2025
ff35c15
initialValue False not implemented
BSnelling Dec 5, 2025
973aaa1
try fix other test cases
BSnelling Dec 5, 2025
36f16e6
params only in explicit triggers - and matrix only in JAX again
BSnelling Dec 5, 2025
865c340
oops committed breakpoint
BSnelling Dec 5, 2025
2279cca
looking for initialValue test cases
BSnelling Dec 8, 2025
599b8b5
do not update h pre-solve
BSnelling Dec 9, 2025
52fea34
handle_t0_event
BSnelling Dec 9, 2025
dbf43c1
reinstate time skip (hack diffrax bug?)
BSnelling Dec 9, 2025
fab9f6f
Update python/sdist/amici/jax/_simulation.py
BSnelling Dec 10, 2025
ebd0c0c
Revert "Update python/sdist/amici/jax/_simulation.py"
BSnelling Dec 10, 2025
c19b183
rm clip controller
BSnelling Dec 10, 2025
0961646
handle t0 event near zero
BSnelling Dec 10, 2025
b8c1a8c
skip non-time dependent event assignment cases
BSnelling Dec 10, 2025
60e7cf5
first pass petabv2 - updating JAXProblem init
BSnelling Dec 15, 2025
b4aed26
petabv2 test cases up to 15-ish
BSnelling Dec 19, 2025
e63b748
petab v2 test cases up to 27-ish
BSnelling Jan 6, 2026
6efc38e
rework petabv2 jax test cases with ExperimentsToSbmlEvents and no v1 …
BSnelling Jan 9, 2026
b0d7ad1
fix some rebase issues
BSnelling Jan 9, 2026
c33f01d
add test skip for petab v1
BSnelling Jan 9, 2026
5c1b30e
remaining petab v2 test cases
BSnelling Jan 21, 2026
b4aecfd
update workflows - deactivate petab_sciml wf for now
BSnelling Jan 22, 2026
b382c1f
update tests to skip on petab v2 type error
BSnelling Jan 22, 2026
8223a61
skip some more tests and reinstate more specific implicit triggers check
BSnelling Jan 22, 2026
a1d88c3
fixup benchmark skipping check
BSnelling Jan 22, 2026
2746d5b
tidying - add docstrings - rm outputs in notebook
BSnelling Jan 26, 2026
693e29b
skip implicit benchmark case - and prior distribution cases
BSnelling Jan 26, 2026
607781b
review feedback - rm simultaneous event check - implement sequential …
BSnelling Jan 28, 2026
828016c
fix test_jax tests and sbml cases with no y0
BSnelling Jan 28, 2026
e3126f5
fix pysb test case for jax
BSnelling Jan 29, 2026
df88e31
use h symbol and update example notebook
BSnelling Feb 6, 2026
720980c
rm v1 instance checks and improve preeq conditionals
BSnelling Feb 6, 2026
8286033
implement implicit triggers using fixed parameters check
BSnelling Feb 9, 2026
590fa97
temp workaround for pysb build issue
BSnelling Feb 11, 2026
781957c
pysb workaround in petab workflow too
BSnelling Feb 11, 2026
5bbd612
fix indentation errors in gen JAX code - restore petabv1 conditional
BSnelling Feb 12, 2026
48b13d6
skip JAXProblems with v1 problems
BSnelling Feb 12, 2026
bfba95c
skip JAXProblems with v1 problems - again
BSnelling Feb 12, 2026
89b52a1
pin optax version
BSnelling Feb 13, 2026
a1aefbc
check implicit triggers in sep function
BSnelling Feb 13, 2026
a9a6913
use petabv2 constants - avoid df usage in loops
BSnelling Feb 16, 2026
56763ad
restore pysb install to master
BSnelling Feb 18, 2026
20 changes: 10 additions & 10 deletions .github/workflows/test_petab_sciml.yml
@@ -1,14 +1,14 @@
name: PEtab SciML
on:
push:
branches:
- main
- 'release*'
pull_request:
branches:
- main
merge_group:
workflow_dispatch:
# on:
# push:
# branches:
# - main
# - 'release*'
# pull_request:
# branches:
# - main
# merge_group:
# workflow_dispatch:

jobs:
build:
4 changes: 2 additions & 2 deletions .github/workflows/test_petab_test_suite.yml
@@ -172,7 +172,7 @@ jobs:
git clone https://github.com/PEtab-dev/petab_test_suite \
&& source ./venv/bin/activate \
&& cd petab_test_suite \
&& git checkout c12b9dc4e4c5585b1b83a1d6e89fd22447c46d03 \
&& git checkout 9542847fb99bcbdffc236e2ef45ba90580a210fa \
&& pip3 install -e .

# TODO: once there is a PEtab v2 benchmark collection
@@ -186,7 +186,7 @@
run: |
source ./venv/bin/activate \
&& python3 -m pip uninstall -y petab \
&& python3 -m pip install git+https://github.com/petab-dev/libpetab-python.git@8dc6c1c4b801fba5acc35fcd25308a659d01050e \
&& python3 -m pip install git+https://github.com/petab-dev/libpetab-python.git@d57d9fed8d8d5f8592e76d0b15676e05397c3b4b \
&& python3 -m pip install git+https://github.com/pysb/pysb@master \
&& python3 -m pip install sympy>=1.12.1

172 changes: 83 additions & 89 deletions doc/examples/example_jax_petab/ExampleJaxPEtab.ipynb
@@ -32,7 +32,8 @@
"outputs": [],
"source": [
"import petab.v1 as petab\n",
"from amici.importers.petab.v1 import import_petab_problem\n",
"from amici.importers.petab import *\n",
"from petab.v2 import Problem\n",
"\n",
"# Define the model name and YAML file location\n",
"model_name = \"Boehm_JProteomeRes2014\"\n",
@@ -41,14 +41,20 @@
" f\"master/Benchmark-Models/{model_name}/{model_name}.yaml\"\n",
")\n",
"\n",
"# Load the PEtab problem from the YAML file\n",
"petab_problem = petab.Problem.from_yaml(yaml_url)\n",
"# Load the PEtab problem from the YAML file as a PEtab v2 problem\n",
"# (the JAX backend only supports PEtab v2)\n",
"petab_problem = Problem.from_yaml(yaml_url)\n",
"\n",
"# Import the PEtab problem as a JAX-compatible AMICI problem\n",
"jax_problem = import_petab_problem(\n",
" petab_problem,\n",
" verbose=False, # no text output\n",
" jax=True, # return jax problem\n",
"pi = PetabImporter(\n",
" petab_problem=petab_problem,\n",
" module_name=model_name,\n",
" compile_=True,\n",
" jax=True,\n",
")\n",
"\n",
"jax_problem = pi.create_simulator(\n",
" force_import=True,\n",
")"
]
},
@@ -75,6 +75,16 @@
"llh, results = run_simulations(jax_problem)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6c5b2980-13f0-42e9-b13e-0fce05793910",
"metadata": {},
"outputs": [],
"source": [
"results"
]
},
{
"cell_type": "markdown",
"id": "415962751301c64a",
@@ -90,11 +90,11 @@
"metadata": {},
"outputs": [],
"source": [
"# Define the simulation condition\n",
"simulation_condition = (\"model1_data1\",)\n",
"# # Define the simulation condition\n",
"experiment_condition = \"_petab_experiment_condition___default__\"\n",
"\n",
"# Access the results for the specified condition\n",
"ic = results[\"simulation_conditions\"].index(simulation_condition)\n",
"# # Access the results for the specified condition\n",
"ic = results[\"dynamic_conditions\"].index(experiment_condition)\n",
"print(\"llh: \", results[\"llh\"][ic])\n",
"print(\"state variables: \", results[\"x\"][ic, :])"
]
@@ -146,8 +163,8 @@
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"# Define the simulation condition\n",
"simulation_condition = (\"model1_data1\",)\n",
"# Define the experiment condition\n",
"experiment_condition = \"_petab_experiment_condition___default__\"\n",
"\n",
"\n",
"def plot_simulation(results):\n",
@@ -158,7 +175,7 @@
" results (dict): Simulation results from run_simulations.\n",
" \"\"\"\n",
" # Extract the simulation results for the specific condition\n",
" ic = results[\"simulation_conditions\"].index(simulation_condition)\n",
" ic = results[\"dynamic_conditions\"].index(experiment_condition)\n",
"\n",
" # Create a new figure for the state trajectories\n",
" plt.figure(figsize=(8, 6))\n",
@@ -172,7 +189,7 @@
" # Add labels, legend, and grid\n",
" plt.xlabel(\"Time\")\n",
" plt.ylabel(\"State Values\")\n",
" plt.title(simulation_condition)\n",
" plt.title(experiment_condition)\n",
" plt.legend()\n",
" plt.grid(True)\n",
" plt.show()\n",
@@ -187,18 +204,7 @@
"id": "4fa97c33719c2277",
"metadata": {},
"source": [
"`run_simulations` enables users to specify the simulation conditions to be executed. For more complex models, this allows for restricting simulations to a subset of conditions. Since the Böhm model includes only a single condition, we demonstrate this functionality by simulating no condition at all."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7950774a3e989042",
"metadata": {},
"outputs": [],
"source": [
"llh, results = run_simulations(jax_problem, simulation_conditions=tuple())\n",
"results"
"`run_simulations` enables users to specify the simulation experiments to be executed. For more complex models, this allows for restricting simulations to a subset of experiments by passing a tuple of experiment ids under the keyword `simulation_experiments` to `run_simulations`."
]
},
{
@@ -384,8 +390,8 @@
"from amici.jax import ReturnValue\n",
"\n",
"# Define the simulation condition\n",
"simulation_condition = (\"model1_data1\",)\n",
"ic = jax_problem.simulation_conditions.index(simulation_condition)\n",
"experiment_condition = \"_petab_experiment_condition___default__\"\n",
"ic = 0\n",
"\n",
"# Load condition-specific data\n",
"ts_dyn = jax_problem._ts_dyn[ic, :]\n",
Expand All @@ -397,7 +403,7 @@
"nps = jax_problem._np_numeric[ic, :]\n",
"\n",
"# Load parameters for the specified condition\n",
"p = jax_problem.load_model_parameters(simulation_condition[0])\n",
"p = jax_problem.load_model_parameters(jax_problem._petab_problem.experiments[0], is_preeq=False)\n",
"\n",
"\n",
"# Define a function to compute the gradient with respect to dynamic timepoints\n",
@@ -431,13 +437,17 @@
"cell_type": "markdown",
"id": "19ca88c8900584ce",
"metadata": {},
"source": "## Model training"
"source": [
"## Model training"
]
},
{
"cell_type": "markdown",
"id": "7f99c046d7d4e225",
"metadata": {},
"source": "This setup makes it pretty straightforward to train models using [equinox](https://docs.kidger.site/equinox/) and [optax](https://optax.readthedocs.io/en/latest/) frameworks. Below we provide barebones implementation that runs training for 5 steps using Adam."
"source": [
"This setup makes it pretty straightforward to train models using [equinox](https://docs.kidger.site/equinox/) and [optax](https://optax.readthedocs.io/en/latest/) frameworks. Below we provide barebones implementation that runs training for 5 steps using Adam."
]
},
{
"cell_type": "code",
@@ -569,16 +579,20 @@
"from amici.sim.sundials.petab.v1 import simulate_petab\n",
"\n",
"# Import the PEtab problem as a standard AMICI model\n",
"amici_model = import_petab_problem(\n",
" petab_problem,\n",
" verbose=False,\n",
" jax=False, # load the amici model this time\n",
"pi = PetabImporter(\n",
" petab_problem=petab_problem,\n",
" module_name=model_name,\n",
" compile_=True,\n",
" jax=False,\n",
")\n",
"\n",
"amici_model = pi.create_simulator(\n",
" force_import=True,\n",
")\n",
"\n",
"# Configure the solver with appropriate tolerances\n",
"solver = amici_model.create_solver()\n",
"solver.set_absolute_tolerance(1e-8)\n",
"solver.set_relative_tolerance(1e-16)\n",
"amici_model.solver.set_absolute_tolerance(1e-8)\n",
"amici_model.solver.set_relative_tolerance(1e-16)\n",
"\n",
"# Prepare the parameters for the simulation\n",
"problem_parameters = dict(\n",
@@ -594,86 +608,65 @@
"outputs": [],
"source": [
"# Profile simulation only\n",
"solver.set_sensitivity_order(SensitivityOrder.none)"
"amici_model.solver.set_sensitivity_order(SensitivityOrder.none)"
]
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"id": "42cbc67bc09b67dc",
"metadata": {},
"outputs": [],
"source": [
"%%timeit\n",
"simulate_petab(\n",
" petab_problem,\n",
" amici_model=amici_model,\n",
" solver=solver,\n",
" problem_parameters=problem_parameters,\n",
" scaled_parameters=True,\n",
" scaled_gradients=True,\n",
")"
],
"id": "42cbc67bc09b67dc"
"amici_model.simulate(petab_problem.get_x_nominal_dict())"
]
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"id": "4f1c06c5893a9c07",
"metadata": {},
"outputs": [],
"source": [
"# Profile gradient computation using forward sensitivity analysis\n",
"solver.set_sensitivity_order(SensitivityOrder.first)\n",
"solver.set_sensitivity_method(SensitivityMethod.forward)"
],
"id": "4f1c06c5893a9c07"
"amici_model.solver.set_sensitivity_order(SensitivityOrder.first)\n",
"amici_model.solver.set_sensitivity_method(SensitivityMethod.forward)"
]
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"id": "7367a19bcea98597",
"metadata": {},
"outputs": [],
"source": [
"%%timeit\n",
"simulate_petab(\n",
" petab_problem,\n",
" amici_model=amici_model,\n",
" solver=solver,\n",
" problem_parameters=problem_parameters,\n",
" scaled_parameters=True,\n",
" scaled_gradients=True,\n",
")"
],
"id": "7367a19bcea98597"
"amici_model.simulate(petab_problem.get_x_nominal_dict())"
]
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"id": "a31e8eda806c2d7",
"metadata": {},
"outputs": [],
"source": [
"# Profile gradient computation using adjoint sensitivity analysis\n",
"solver.set_sensitivity_order(SensitivityOrder.first)\n",
"solver.set_sensitivity_method(SensitivityMethod.adjoint)"
],
"id": "a31e8eda806c2d7"
"amici_model.solver.set_sensitivity_order(SensitivityOrder.first)\n",
"amici_model.solver.set_sensitivity_method(SensitivityMethod.adjoint)"
]
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"id": "3f2ab1acb3ba818f",
"metadata": {},
"outputs": [],
"source": [
"%%timeit\n",
"simulate_petab(\n",
" petab_problem,\n",
" amici_model=amici_model,\n",
" solver=solver,\n",
" problem_parameters=problem_parameters,\n",
" scaled_parameters=True,\n",
" scaled_gradients=True,\n",
")"
],
"id": "3f2ab1acb3ba818f"
"amici_model.simulate(petab_problem.get_x_nominal_dict())"
Review comment (Collaborator, Author): When the notebook runs there is an error from this cell that the FIM was not computed: https://github.com/AMICI-dev/AMICI/actions/runs/21985309084/job/63517900881?pr=3115
@dweindl can you advise on that?

Reply (Member): My fault. I hope #3125 fixes that.
]
}
],
"metadata": {
@@ -691,7 +684,8 @@
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3"
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
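The indexing pattern the updated notebook uses to pull out one experiment's results can be shown with a plain-Python mock. The field names (`"dynamic_conditions"`, `"llh"`, `"x"`) and the condition id follow the diff above; the numeric values are illustrative assumptions, not real simulation output.

```python
# Mock of the results dictionary returned by run_simulations, as used in the
# notebook diff; the numbers below are made up for illustration only.
results = {
    "dynamic_conditions": ["_petab_experiment_condition___default__"],
    "llh": [-138.22],
    "x": [[0.1, 0.2, 0.3]],
}

experiment_condition = "_petab_experiment_condition___default__"

# Look up the slot of the experiment condition, then index the result arrays.
ic = results["dynamic_conditions"].index(experiment_condition)
print("llh:", results["llh"][ic])
print("state variables:", results["x"][ic])
```

With the real `results` object, `results["x"]` is a JAX array indexed as `results["x"][ic, :]`, as the notebook cells do.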
7 changes: 6 additions & 1 deletion python/sdist/amici/_symbolic/de_model.py
@@ -2679,11 +2679,16 @@ def has_priority_events(self) -> bool:
def has_implicit_event_assignments(self) -> bool:
"""
Checks whether the model has event assignments with implicit triggers
(i.e. triggers that cannot be expressed purely in terms of time and fixed parameters).

:return:
boolean indicating if event assignments with implicit triggers are present
"""
return any(event.updates_state and not event.has_explicit_trigger_times({}) for event in self._events)
fixed_symbols = {k._symbol for k in self._fixed_parameters}
allowed_symbols = fixed_symbols | {amici_time_symbol}
# TODO: update to use has_explicit_trigger_times once
# https://github.com/AMICI-dev/AMICI/issues/3126 is resolved
return any(event.updates_state and event._has_implicit_triggers(allowed_symbols) for event in self._events)

def toposort_expressions(
self, reorder: bool = True
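The reworked check classifies a trigger as implicit when it references anything beyond simulation time and fixed parameters (e.g. a state variable). The idea can be sketched in isolation; the `Event` class, the `trigger_symbols` field, and the free function below are hypothetical stand-ins for AMICI's symbolic internals, not its actual API.

```python
from dataclasses import dataclass


@dataclass
class Event:
    """Hypothetical stand-in for AMICI's symbolic Event class."""

    updates_state: bool
    trigger_symbols: frozenset  # free symbols appearing in the trigger expression

    def _has_implicit_triggers(self, allowed_symbols: set) -> bool:
        # A trigger is implicit if it references any symbol outside the
        # allowed set (time and fixed parameters), e.g. a state variable.
        return not self.trigger_symbols <= allowed_symbols


def has_implicit_event_assignments(events, fixed_parameter_symbols, time_symbol="t"):
    # Mirrors the new check: allowed symbols are fixed parameters plus time.
    allowed_symbols = set(fixed_parameter_symbols) | {time_symbol}
    return any(
        e.updates_state and e._has_implicit_triggers(allowed_symbols)
        for e in events
    )


time_event = Event(updates_state=True, trigger_symbols=frozenset({"t"}))
state_event = Event(updates_state=True, trigger_symbols=frozenset({"t", "x1"}))

print(has_implicit_event_assignments([time_event], {"k_fixed"}))              # False
print(has_implicit_event_assignments([time_event, state_event], {"k_fixed"}))  # True
```

In the real model the allowed set is built from `self._fixed_parameters` plus `amici_time_symbol`, and the symbol sets come from sympy expressions rather than strings; per the TODO in the diff, this is an interim formulation until issue #3126 restores `has_explicit_trigger_times`.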