PDEKoopman

Using neural networks to learn linearizing transformations for PDEs

This repository contains the code for the paper "Deep Learning Models for Global Coordinate Transformations that Linearise PDEs" by Craig Gin, Bethany Lusch, Steven L. Brunton, and J. Nathan Kutz. All of the code is written for Python 2 and TensorFlow 1.
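To make the idea concrete, here is a schematic numpy sketch of a linearizing autoencoder of the kind the paper studies: an encoder maps the PDE state into latent coordinates, a single linear operator advances the latent state in time, and a decoder maps back to physical space. This is our illustration, not the repository's code; the layer sizes, one-layer "networks", and random stand-in data are purely illustrative.

```python
# Schematic sketch of the linearizing-autoencoder idea; the sizes, the
# one-layer "networks", and the random data are illustrative only.
import numpy as np

np.random.seed(0)
n, d = 128, 21                        # spatial grid size, latent dimension

W_enc = 0.1 * np.random.randn(d, n)   # toy encoder weights (paper: deep net)
W_dec = 0.1 * np.random.randn(n, d)   # toy decoder weights (paper: deep net)
K = 0.1 * np.random.randn(d, d)       # linear operator on latent coordinates

def encode(u):
    return np.tanh(np.dot(W_enc, u))

def decode(v):
    return np.dot(W_dec, v)

u_t = np.random.randn(n)              # stand-in for the PDE state at time t
u_t1 = np.random.randn(n)             # stand-in for the state at time t + dt

# Losses of the kind such models train jointly:
recon = np.mean((u_t - decode(encode(u_t))) ** 2)               # autoencoding
pred = np.mean((u_t1 - decode(np.dot(K, encode(u_t)))) ** 2)    # prediction
linear = np.mean((encode(u_t1) - np.dot(K, encode(u_t))) ** 2)  # linearity
total_loss = recon + pred + linear
```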

We recommend using this code only to verify the results of the above paper. If you are interested in adapting the code for a problem of your own, we highly recommend the updated version of the code:

https://github.com/CraigGin/PDEKoopman2

To run the code:

  1. Clone the repository.
  2. In the data directory, recreate the desired datasets. The datasets used in the paper are created with the files Heat_Eqn_exp29_data.m, Burgers_Eqn_exp28_data.m, Burgers_Eqn_exp30_data.m, Burgers_Eqn_exp32_data.m, KS_Eqn_exp4_data.py, KS_Eqn_exp5_data.py, KS_Eqn_exp6_data.py, and KS_Eqn_exp7_data.py. If you create data using one of the MATLAB .m files, you will then need to convert the resulting CSV files to .npy files, which can be done with the script csv_to_npy.py (a minimal sketch of this conversion appears after this list).
  3. In the main directory, run the desired experiment files. As an example, Burgers_Experiment_28rr.py trains 20 neural networks, each for 20 minutes, with randomly chosen learning rates and initializations. It creates a directory called Burgers_exp28rr and stores the networks and losses there. You can then run the file Burgers_Experiment28rr_restore.py to restore the network with the smallest validation loss and continue training it until convergence (see the run-selection sketch after this list).
  4. The Jupyter notebooks in the main directory (Heat_PostProcess.ipynb, Burgers_Postprocess.ipynb, KS_Postprocess.ipynb) can be used to examine the results and create figures like the ones in the paper.
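The CSV-to-npy conversion in step 2 amounts to the following. The repo's csv_to_npy.py handles the paper's actual file names; the glob pattern here is a hypothetical stand-in.

```python
# Minimal sketch of the CSV -> .npy conversion in step 2; the glob
# pattern is hypothetical (the repo's csv_to_npy.py knows the real names).
import glob
import numpy as np

for csv_file in glob.glob("data/*.csv"):
    data = np.loadtxt(csv_file, delimiter=",")       # read MATLAB-written CSV
    np.save(csv_file.replace(".csv", ".npy"), data)  # save as binary .npy
```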
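Selecting the best of the 20 runs in step 3 can be sketched as below. This assumes each run leaves a validation-loss history as a .npy file in the experiment directory; the file pattern is hypothetical, and the repo's Burgers_Experiment28rr_restore.py performs the real selection and restore.

```python
# Sketch of picking the run with the smallest validation loss (step 3);
# the directory layout and file pattern are assumptions, not the repo's.
import glob
import numpy as np

losses = {}
for loss_file in glob.glob("Burgers_exp28rr/*val*loss*.npy"):  # hypothetical
    losses[loss_file] = np.min(np.load(loss_file))             # best val loss

if losses:
    best_run = min(losses, key=losses.get)
    print("run with smallest validation loss:", best_run)
    # The matching TF1 checkpoint would then be restored with
    # tf.train.Saver().restore(sess, checkpoint_path) and training resumed.
```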

All of the results from the paper are already included in the repository, so you can exactly recreate the paper figures by running the Jupyter notebooks "FigureXX.ipynb".
