SFC-CAE-Ready-to-use

A self-adjusting Space-filling curve autoencoder



Architecture of a Space-filling curve Convolutional Autoencoder

Architecture of a Space-filling curve Variational Convolutional Autoencoder

__Table of Contents__
  1. Project Description
  2. Getting Started
  3. Template Notebooks
  4. t-SNE plots
  5. Decompressing Examples
  6. Training on HPC
  7. License
  8. Testing
  9. Contact
  10. Acknowledgements

Project Description

This project contains a self-adjusting Space-filling curve Convolutional Autoencoder (SFC-CAE), whose methodology is based on the previous year's work DOI:2011.14820. This new tool automatically generates an SFC-CAE network for unadapted mesh examples; a simple variational autoencoder is also included.
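The core idea can be sketched in a few lines of NumPy: a space-filling curve assigns each node of an unstructured mesh a position along a 1D curve, so that a snapshot can be reordered into a 1D sequence suitable for 1D convolutions. In the minimal illustration below, the ordering is an arbitrary made-up permutation, not a real space-filling curve (real orderings come from the Fortran decomposition described later):

```python
import numpy as np

# A snapshot on an unstructured mesh: one scalar value per node.
node_values = np.array([0.3, 0.9, 0.1, 0.7, 0.5])

# A space-filling curve assigns each node a position along a 1D curve.
# This ordering is an arbitrary permutation for illustration only.
sfc_ordering = np.array([2, 0, 4, 3, 1])

# Reordering the snapshot along the curve gives a 1D sequence in which
# spatially nearby nodes tend to stay nearby, which is what makes
# 1D convolutions effective on unstructured data.
sequence = node_values[sfc_ordering]
print(sequence)  # [0.1 0.3 0.5 0.7 0.9]

# The inverse ordering recovers the original node numbering,
# which is what the decoder needs to map back to the mesh.
inverse = np.argsort(sfc_ordering)
assert np.allclose(sequence[inverse], node_values)
```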

Getting Started

Dependencies

  • Python ~= 3.8.5
  • numpy >= 1.19.5
  • scipy >= 1.4.1
  • matplotlib ~= 3.2.2
  • vtk >= 9.0
  • livelossplot ~= 0.5.4
  • meshio[all]
  • cmocean ~= 2.0
  • torch >= 1.8.0
  • dash ~= 1.21.0
  • pytest >= 3.6.4
  • progressbar2 ~= 3.38.0
  • (Optional) GPU/multi GPUs with CUDA

Contribution of Codes

External Libraries:

  • space_filling_decomp_new.f90

A domain decomposition method for unstructured meshes, developed by Prof. Christopher Pain; for details please see Paper.

  • vtktools.py

The Python wrappers for vtu file I/O, from the FluidityProject.

The other code in this repository is my own implementation.

Installation

  1. Clone the repository:
git clone https://github.com/acse-jy220/SFC-CAE-Ready-to-use
  2. Change into the repository:
cd SFC-CAE-Ready-to-use
  3. Install the module:

(1) For pip install, just use

pip install -e .

It will compile the Fortran library automatically, whether you are on Windows or Linux.

(2) Create a conda environment via

conda env create -f environment.yml

activate the environment

conda activate sfc_cae

but with this approach you need to compile the Fortran code yourself. On Linux, type

python3 -m numpy.f2py -c space_filling_decomp_new.f90 -m space_filling_decomp_new

On Windows, install MinGW (I use version 7.2.0) and compile the Fortran code with

f2py -c space_filling_decomp_new.f90 -m space_filling_decomp_new --compiler=mingw32
  4. For convenience, you can simply import all functions from the module:
from sfc_cae import *

and call the functions you want!

  5. Initialize the autoencoder by passing the following arguments:
autoencoder = SFC_CAE(input_size,
                      dimension,
                      components,
                      structured,
                      self_concat,
                      nearest_neighbouring,
                      dims_latent,
                      space_filling_orderings, 
                      invert_space_filling_orderings,
                      activation,
                      variational = variational)

The meaning of each parameter is:

  • input_size: [int] the number of Nodes in each snapshot.
  • dimension: [int] the dimension of the problem, 2 for 2D and 3 for 3D.
  • components: [int] the number of components we are compressing.
  • structured: [bool] whether the mesh is structured or not.
  • self_concat: [int] a channel-copying factor; the input channels of the 1D Conv layers become components * self_concat.
  • nearest_neighbouring: [bool] whether the sparse layers are added to the ANN or not.
  • dims_latent: [int] the dimension of the latent variable
  • space_filling_orderings: [list of 1D-arrays or 2D-array] the space-filling curves, of shape [number of curves, number of Nodes]
  • invert_space_filling_orderings: [list of 1D-arrays or 2D-array] the inverse orderings of the space-filling curves, of the same shape
  • activation: [torch.nn.functional] the activation function, ReLU() and Tanh() are usually used.
  • variational: [bool] whether this is a variational autoencoder or not.
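As a rough guide to how these arguments fit together, the sketch below builds illustrative array shapes with NumPy. All the sizes, the two-curve setup, and the random permutations are made up for illustration; in a real run the node count comes from your mesh and the orderings from space_filling_decomp_new.f90:

```python
import numpy as np

# Illustrative sizes only -- not from any real mesh.
input_size = 2000        # nodes per snapshot
components = 2           # e.g. two velocity components
self_concat = 2          # each channel is copied twice

# Two space-filling curves, each a permutation of the node indices:
# expected shape is [number of curves, number of Nodes].
rng = np.random.default_rng(0)
space_filling_orderings = np.stack(
    [rng.permutation(input_size) for _ in range(2)]
)
print(space_filling_orderings.shape)  # (2, 2000)

# The inverse orderings have the same shape.
invert_space_filling_orderings = np.argsort(space_filling_orderings, axis=1)

# Channel copying: the 1D Conv layers see components * self_concat channels.
snapshot = rng.random((components, input_size))
conv_input = np.concatenate([snapshot] * self_concat, axis=0)
print(conv_input.shape)  # (4, 2000)
```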

For advanced training options, please have a look at the instruction notebooks.

Template Notebooks

Advecting Block

Open In Colab

Analytical Block Advection

Reconstructed by 2-SFC-CAE-NN, 16 latent

FPC-DG

Open In Colab

Original Velocity Magnitude Reconstructed by 2-SFC-CAE-NN, 8 latent

FPC-CG

Open In Colab

CO2

Open In Colab

Original CO2 PPM Reconstructed by 3-SFC-VCAE-NN, 4 latent variables

Slugflow

Open In Colab

Original Volume Fraction of the Slugflow

Reconstructed by 3-SFC-CAE-NN, 64 latent

t-SNE plots

Scripts for creating the t-SNE plots in the thesis are provided.

After you obtain the FPC-CG data as well as the SFCs by

bash get_FPC_data_CG.sh 

run

python3 tSNE.py

at the root of this directory.

t-SNE for SFC-CAE t-SNE for SFC-VCAE

Decompressing Examples

I have attached the compressed variables for the CO2 and Slugflow data in decompressing_examples/; scripts for downloading pretrained models and decompressing vtu files are described in that folder.

Training on HPC

I wrote a simple (not very clever) script for training from the command line; simply running

python3 command_train.py

will train based on the configuration file parameters.ini, which holds all the parameters for training on the College HPC. You can also write a custom configuration file, say my_config.ini, and train with it by passing it as an argument:

python3 command_train.py my_config.ini
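Such configuration files follow standard INI syntax, which Python reads with the stdlib configparser module. The section and key names in the sketch below are illustrative only; consult the repository's parameters.ini for the actual schema:

```python
import configparser

# A hypothetical configuration in the style of parameters.ini.
# These section/key names are illustrative, not the real ones --
# see parameters.ini in the repository for the authoritative schema.
example = """
[training]
epochs = 500
batch_size = 16
dims_latent = 16
variational = False
"""

config = configparser.ConfigParser()
config.read_string(example)
print(config.getint("training", "epochs"))           # 500
print(config.getboolean("training", "variational"))  # False
```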

License

Distributed under the Apache 2.0 License.

Testing

Some basic tests for the module are available in tests/tests.py; you can run them locally with

python3 -m pytest tests/tests.py --doctest-modules -v

at the root of the repository. Running them automatically downloads the FPC_CG data and two pretrained models (one SFC-CAE, one SFC-VCAE) for that problem, and the MSELoss() / KL divergence are evaluated. A GitHub workflow is also set up to run these tests on GitHub.

Contact

Acknowledgements

Great thanks to my supervisors:

  • Dr. Claire Heaney [mail]
  • Prof. Christopher Pain [mail]