
Neural network solvers for dynamical systems with exact conservation laws

Following the ideas in Greydanus et al. (2019), the Lagrangian is represented by a neural network, from which the acceleration $\ddot{q}$ for a given position $q$ and velocity $\dot{q}$ is computed via automatic differentiation. The key novelty of this work is that certain physical quantities are conserved exactly. This is achieved by using Noether's theorem, which relates continuous symmetries of the Lagrangian to conservation laws. The neural networks are trained on noisy synthetic data obtained by solving the true equations of motion.
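To make this Euler-Lagrange step concrete: differentiating $\partial L/\partial\dot{q}$ with respect to time gives $(\partial^2 L/\partial\dot{q}\,\partial\dot{q})\,\ddot{q} = \partial L/\partial q - (\partial^2 L/\partial\dot{q}\,\partial q)\,\dot{q}$, which can be solved for $\ddot{q}$. The following minimal Python sketch shows this computation with JAX automatic differentiation; it is for illustration only, and the function name acceleration is an assumption rather than part of this repository's API.

import jax
import jax.numpy as jnp

def acceleration(lagrangian, q, qdot):
    """Solve the Euler-Lagrange equations for qddot, given a Lagrangian L(q, qdot)."""
    grad_q = jax.grad(lagrangian, argnums=0)(q, qdot)  # dL/dq
    hess_qdot_qdot = jax.hessian(lagrangian, argnums=1)(q, qdot)  # d^2L/(dqdot dqdot)
    hess_qdot_q = jax.jacfwd(jax.grad(lagrangian, argnums=1), argnums=0)(q, qdot)  # d^2L/(dqdot dq)
    return jnp.linalg.solve(hess_qdot_qdot, grad_q - hess_qdot_q @ qdot)

# Example: harmonic oscillator L = 1/2 |qdot|^2 - 1/2 |q|^2 gives qddot = -q.
L = lambda q, qdot: 0.5 * jnp.dot(qdot, qdot) - 0.5 * jnp.dot(q, q)
print(acceleration(L, jnp.array([1.0, 0.0]), jnp.array([0.0, 1.0])))  # approx. [-1., 0.]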

Neural network architecture

A range of dynamical systems have been implemented:

  • A system of $N$ two-dimensional spins with nearest-neighbour interactions; the system is invariant under global rotations of all spins.
  • The motion of a relativistic particle in an electromagnetic field. Depending on the structure of the electric and magnetic fields, the system is invariant under rotations and/or translations.
  • A single non-relativistic particle moving in a three-dimensional potential. Here the true solution considered corresponds to motion in a Newtonian gravitational potential.
  • Two interacting particles moving in $d$ dimensions; if the pairwise potential is invariant under rotations and translations, the total linear and angular momentum are conserved (see the sketch after this list).
  • A single relativistic particle moving in $1+3$ dimensional space-time. Here the true solution is given by geodesics of the Schwarzschild metric, which is invariant under three-dimensional rotations.
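To make the two-particle case concrete, here is a purely illustrative Python sketch of such an invariant Lagrangian, with a harmonic pair potential chosen for simplicity; the function name and parameters are not part of this repository's code.

import numpy as np

def two_particle_lagrangian(x1, x2, v1, v2, m1=1.0, m2=1.0, kappa=1.0):
    """Kinetic energy minus a pairwise potential that depends only on the separation x1 - x2."""
    kinetic = 0.5 * m1 * np.dot(v1, v1) + 0.5 * m2 * np.dot(v2, v2)
    potential = 0.5 * kappa * np.dot(x1 - x2, x1 - x2)
    return kinetic - potential

# Since L depends on the positions only through the difference x1 - x2 and on the
# speeds |v_i|, it is unchanged under x_i -> R x_i + a for any rotation R and shift a;
# by Noether's theorem, the total linear and angular momentum are conserved.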

Installation

Running the Jupyter notebooks, training script and tests requires installation with

python -m pip install .

If you want to edit the code, you might prefer an editable install with

python -m pip install --editable .

C-Code generation

The time integrator classes use C code generation to produce synthetic training data efficiently; see the TimeIntegrator base class in time_integrator.py. The gcc compiler is used to compile the autogenerated C code into a shared library with the command

gcc -fPIC -shared -O3 -o LIBRARY

If this does not work on your system (for example, because it uses a different C compiler), there are two options:

  1. adapt the subprocess.run() command in the _generate_timestepper_library() method accordingly (see the sketch below)
  2. disable C code autogeneration globally by replacing the line
self.fast_code = hasattr(self.dynamical_system, "acceleration_code")

in the constructor of the TimeIntegrator class by

self.fast_code = False

The latter option falls back to running the interpreted Python code in all cases; the code will still work, but it will be slower.
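For option 1, the compiler invocation could be adapted along the following lines, shown here for clang. This is a sketch under the assumption that the compile step boils down to a single subprocess.run() call; the function name compile_shared_library and its arguments are illustrative, and the actual method in time_integrator.py may differ.

import subprocess

def compile_shared_library(c_source, library):
    """Compile the autogenerated C code into a shared library, using clang instead of gcc."""
    subprocess.run(
        ["clang", "-fPIC", "-shared", "-O3", "-o", library, c_source],
        check=True,  # raise CalledProcessError if the compiler fails
    )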

Repository structure

The Python library files are collected in the src/conservative_nn directory. The assets directory collects the weights of several trained models. The scripts in the src directory can be used to evaluate the trained models and visualise the solutions.

  • EvaluateModel.ipynb Evaluates the performance of the two-particle model. The trajectories of the trained Lagrangian neural network model are compared with the true trajectories, and the (approximate) conservation of conserved quantities is demonstrated.
  • EvaluateKeplerModel.ipynb Contains the corresponding code for the Kepler model, i.e. non-relativistic motion of a single particle in the Newtonian potential.
  • Kepler.ipynb Contains several derivations relating to the exact dynamics of particles moving in the non-relativistic Newton potential and in the Schwarzschild metric.
  • VisualiseTrajectories.ipynb This notebook was mainly written for debugging and to choose sensible parameters for the true solutions that are used to train the models. It can be used to visualise the trajectories for the considered systems.

Running the code

Training the neural networks

The neural network models can be trained with the train_model.py script, which reads its parameters from a .toml configuration file. You might want to copy and modify the provided template file. To run the code, use

python src/train_model.py --parameterfile=PARAMETERFILE

where PARAMETERFILE is the name of the .toml file with the parameters. If you leave out the --parameterfile flag, this defaults to training_parameters.toml.

Evaluating and visualising trained models

To evaluate the trained models and assess their performance, use EvaluateModel.ipynb and EvaluateKeplerModel.ipynb.

Testing

Tests are collected in the tests subdirectory. To run all tests, use

pytest
