# Port-Hamiltonian Approach to Neural Network Training
Repository contents:

- `img/`
- `pyPH/`
- `01_linear_boundary.ipynb`
- `02_nonlinear_vector_field.ipynb`
- `requirements.txt`


A new framework for learning in which the neural network parameters are solutions of ODEs. By viewing the optimization process as the evolution of a port-Hamiltonian system, we can ensure convergence to a minimum of the objective function.

This method is applicable to any neural network architecture. The neural network is coupled to a fictitious port-Hamiltonian system whose states are given by the neural network parameters. The energy of the port-Hamiltonian system is then linked to the objective function and automatically minimized thanks to the passivity property of port-Hamiltonian systems.
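As a minimal sketch of this idea (with a simple choice of interconnection and dissipation assumed for illustration, not the paper's exact construction), the parameters `q` and conjugate momenta `p` can follow damped Hamiltonian dynamics whose energy H(q, p) = L(q) + ½‖p‖² contains the objective L; passivity then forces dH/dt ≤ 0, so L is driven toward a minimum:

```python
import numpy as np

# Illustrative port-Hamiltonian dynamics on a toy objective.
# State x = (q, p): q plays the role of the "parameters", p the momenta.
# Hamiltonian: H(q, p) = L(q) + 0.5 * ||p||^2.
# Dynamics: dx/dt = (J - R) grad H with J skew-symmetric, R >= 0, so that
#   dH/dt = -grad(H)^T R grad(H) <= 0   (passivity dissipates the energy).

TARGET = np.array([1.0, -2.0])

def loss(q):
    # toy objective with its minimum at q = (1, -2)
    return 0.5 * np.sum((q - TARGET) ** 2)

def grad_loss(q):
    return q - TARGET

def ph_step(q, p, dt=0.05, r=1.0):
    # J = [[0, I], [-I, 0]] (skew), R = blockdiag(0, r*I) (dissipation on p):
    #   q' = p
    #   p' = -grad L(q) - r p
    q_new = q + dt * p
    p_new = p + dt * (-grad_loss(q) - r * p)
    return q_new, p_new

q = np.zeros(2)  # initial parameters
p = np.zeros(2)  # initial momenta
for _ in range(500):
    q, p = ph_step(q, p)
```

Integrating these ODEs (here with a simple explicit Euler step) drives `q` to the minimizer of the objective without ever invoking a discrete optimizer.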

Code for "Port-Hamiltonian Approach to Neural Network Training," to appear in the 58th IEEE Conference on Decision and Control (CDC 2019). An arXiv preprint is available.


pyPH/ contains a numpy implementation of a single linear predictor, along with functions that describe the port-Hamiltonian ODE of its parameters. For general use, import the PHNN class from pyPH/ instead.
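The single-linear-predictor case can be sketched in plain numpy (an illustrative toy under the same damped-Hamiltonian assumption as above, not the repo's actual code): the predictor weights are the "positions" of the port-Hamiltonian system, and integrating its ODE minimizes the mean squared error.

```python
import numpy as np

# Toy sketch: fit a linear predictor y_hat = X @ w by integrating
# port-Hamiltonian dynamics on its weights instead of running SGD.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true  # noiseless targets, so the loss minimum is exactly w_true

def grad_mse(w):
    # gradient of L(w) = 0.5 * mean((X @ w - y)^2)
    return X.T @ (X @ w - y) / len(X)

w = np.zeros(3)    # predictor weights (the PH "positions")
p = np.zeros(3)    # conjugate momenta
dt, r = 0.05, 1.0  # integration step and dissipation rate
for _ in range(2000):
    # q' = p ; p' = -grad L(q) - r p  (explicit Euler discretization)
    w, p = w + dt * p, p + dt * (-grad_mse(w) - r * p)
```

The dissipation term `-r * p` is what realizes the passivity property: energy leaves the system until the weights settle at the minimizer.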

pyPH/ also contains the new optimizer class proposed in the paper. The class takes PyTorch torch.nn.Module objects as input and provides a fit method that optimizes them as port-Hamiltonian neural networks.
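A hypothetical sketch of such a wrapper (the class name, signatures, and hyperparameters here are assumptions for illustration, not the repo's actual PHNN API): it holds a torch.nn.Module and a fit method that updates every parameter tensor with damped Hamiltonian dynamics.

```python
import torch
import torch.nn as nn

class PHNNSketch:
    """Hypothetical PHNN-style wrapper; names and defaults are assumed."""

    def __init__(self, module, dt=0.05, r=1.0):
        self.module = module
        self.dt, self.r = dt, r
        # one momentum buffer per parameter tensor of the wrapped module
        self.momenta = [torch.zeros_like(p) for p in module.parameters()]

    def fit(self, X, y, loss_fn, steps=500):
        for _ in range(steps):
            loss = loss_fn(self.module(X), y)
            grads = torch.autograd.grad(loss, list(self.module.parameters()))
            with torch.no_grad():
                for p, m, g in zip(self.module.parameters(), self.momenta, grads):
                    p += self.dt * m                  # q' = p
                    m += self.dt * (-g - self.r * m)  # p' = -grad L - r p
        return loss.item()

torch.manual_seed(0)
X = torch.randn(64, 2)
y = X @ torch.tensor([[1.0], [-2.0]])  # linear targets for a quick check
net = nn.Linear(2, 1)
final_loss = PHNNSketch(net).fit(X, y, nn.MSELoss())
```

Because the update rule only iterates over `module.parameters()`, the same wrapper applies unchanged to any architecture, which is the sense in which the approach is architecture-agnostic.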
