
Efficient Algorithm to Compute POD on AMR Grids

This repository hosts code to compute the proper orthogonal decomposition (POD) on sets of grids that use adaptive mesh refinement (AMR), and to evaluate the efficiency of the algorithm compared with the standard algorithm. A detailed description of the code and how to use it is provided below.
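
The key idea behind the efficiency gain can be sketched as follows (a schematic illustration in Python, not the repository's implementation; the refinement ratio and values below are assumed examples). When an AMR snapshot is expressed on the finest uniform grid, a coarse cell's value is simply repeated over the fine cells it covers, so its contribution to an inner product can be computed once and weighted by the repetition count instead of being summed over every repeated fine cell.

  import numpy as np

  # Illustrative values (assumptions, not the repository's defaults):
  r, d = 2, 3                # refinement ratio and number of dimensions
  weight = r**d              # fine cells covered by one coarse cell

  a_coarse, b_coarse = 1.7, -0.4   # values of two snapshots in that coarse cell

  # Standard approach: repeat the value over the covered fine cells and sum.
  standard = np.sum(np.full(weight, a_coarse) * np.full(weight, b_coarse))

  # AMR-aware approach: one multiplication, weighted by the repetition count.
  weighted = weight * a_coarse * b_coarse

  assert np.isclose(standard, weighted)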

In order to use the code, we expect the following directory structure (a short setup sketch follows the list):

  • POD_AMR/
    • code/
      • contains all the code in this repository
    • data/
      • data generated by parameter sweeps and synthetic AMR data
    • images/
      • images generated by the plotting scripts
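
The layout can be created by hand or with a short Python snippet, for example (a minimal sketch; the POD_AMR root can be placed wherever you like):

  from pathlib import Path

  # Create the expected directory layout; adjust the root path as needed.
  root = Path("POD_AMR")
  for sub in ("code", "data", "images"):
      (root / sub).mkdir(parents=True, exist_ok=True)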

Organization

This repository is organized in the following manner:

  • archive/ contains code that is no longer used but is kept for reference
  • fortran/ contains Fortran versions of the code:
    • CPU/: compares CPU times between the standard and AMR algorithms
    • parallel/: computes POD using a hybrid MPI/OMP approach. Note: we no longer provide this code here; please reach out to the authors for it.
  • miscellaneous/ contains codes for miscellaneous tasks
  • plotting/ contains all codes to generate figures in the paper
  • source/ contains all Python source code that computes each operation of POD
  • span_params/ contains codes that sweep parameter spaces to diagnose the computational advantage using operation counts
  • tests/ contains codes to test the algorithms used in this repository. See below for a discussion on the variety of tests.

How to use

If you are interested in learning the algorithm, please look through the Python source code in source/, in the files ending with _CPU.py. This code contains thorough comments explaining what is going on.
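
For orientation, the sketch below shows a generic method-of-snapshots POD on a uniform grid using plain NumPy. It is a reference version of the standard algorithm, not the AMR-aware code in source/, and the array sizes are arbitrary.

  import numpy as np

  # Snapshot matrix X: nspat spatial points by nt snapshots (random stand-in data).
  rng = np.random.default_rng(0)
  nspat, nt = 1024, 16
  X = rng.standard_normal((nspat, nt))

  R = X.T @ X / nt                      # covariance matrix (nt x nt)
  lam, Psi = np.linalg.eigh(R)          # eigenvalues in ascending order
  lam, Psi = lam[::-1], Psi[:, ::-1]    # reorder by decreasing energy

  Phi = X @ Psi                         # spatial POD modes (unscaled)
  Phi /= np.linalg.norm(Phi, axis=0)    # normalize each mode
  A = Phi.T @ X                         # temporal coefficients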

If you are interested in testing a new algorithm, make changes to the code in source/, then verify that the new algorithm works using the tests discussed below.

If you are interested in redoing all the parameter sweeps presented in the paper (possibly with different weighting), run MasterRun.sh in span_params/ and then MasterPlot.py in plotting/. Please be aware that these runs can take a significant amount of time, on the order of a month, to completely reproduce all results. You can modify each span_*.py file in span_params/ to reduce the cost, or run them individually.

If you are interested in spanning new parameter spaces, copy the span_*.py file closest to your needs, edit it to your liking, and run it (e.g., python span_new.py) from the span_params/ folder. Weights of the operations can be changed in source/Compute_POD.py.
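
As a purely illustrative sketch of an operation-count model (the operation names and weight values here are hypothetical; see source/Compute_POD.py for the weights actually used), each elementary operation count is multiplied by a relative cost weight and the results are summed:

  # Hypothetical relative costs of elementary operations (assumed unit weights).
  weights = {"assign": 1.0, "add": 1.0, "multiply": 1.0}

  def weighted_cost(op_counts, weights):
      """Total estimated cost given per-operation counts and cost weights."""
      return sum(weights[op] * n for op, n in op_counts.items())

  # Hypothetical counts for the standard vs. AMR algorithm, for illustration only.
  standard = {"assign": 1.0e6, "add": 2.0e6, "multiply": 2.0e6}
  amr      = {"assign": 4.0e5, "add": 9.0e5, "multiply": 9.0e5}
  speedup = weighted_cost(standard, weights) / weighted_cost(amr, weights)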

If you would like to compute CPU times, use the code in fortran/CPU/. Compile lines are provided at the top of POD.f90 and data can be found HERE; then run ./POD.batch to perform the parameter sweep. Please note that you will need a Fortran compiler as well as access to the LAPACK dsyev routine, which solves the eigenvalue problem for the covariance matrix.

If you would like to compute POD as quickly and efficiently as possible, please reach out to us for access to that code. We built a parallelized version using a hybrid MPI/OMP approach with additional functionality, such as support for multi-variable POD.

Testing

One of the benefits of this approach is that the code is easy to test against standard matrix operations, because we are simply weighting and skipping operations. We provide a suite of tests to verify and benchmark our algorithms.

Specifically within tests/, we provide testing for:

  • how we generate our synthetic grids in grid_generation/. Simply edit the parameters under "User defined inputs" to your liking and run using python grid_generation.py. This will write your inputs and the computed AMR statistics to a folder within data/ (the path can be modified within the script).
  • our reshaping procedure in reshaping/. Within POD_rshp.py, we compute POD on a synthetically generated grid using plain matrix operations with no reshaping, then compare the results with our new algorithm (see the comparison sketch after this list). Simply specify the synthetic grid you wish to generate, then execute python POD_rshp.py. This is particularly useful for testing new developments if you are interested in improving the algorithm.
  • our Python code against Matlab in python_vs_matlab/. Matlab is the standard utility for matrix operations, so we demonstrate that both codes are equivalent. To compare:
    1. Run miscellaneous/write_synthetic_data.py to generate data or provide your own data.
    2. Modify file paths and parameters in POD_pvm.py and POD_pvm.m, both located in python_vs_matlab/, to match those of the data
    3. Run the Python script using python POD_pvm.py
    4. Run the Matlab script. The error between the two approaches will be printed to the command line.
  • our Fortran code against Matlab in fortran_vs_matlab/. Follow the same steps as above, except compile the Fortran code, modify POD.inputs to match the data, and run the Fortran program using ./POD.ex. The Matlab script will show the error between the two approaches.
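
The comparison pattern shared by these tests can be sketched as follows (the function and variable names are hypothetical placeholders, not the repository's API): compute the POD quantities once with plain matrix operations and once with the AMR-aware algorithm, then check that they agree to within floating-point tolerance.

  import numpy as np

  def check_equivalent(Phi_standard, Phi_amr, lam_standard, lam_amr):
      """Return True if modes and eigenvalues from both algorithms match."""
      # POD modes are defined only up to a sign, so compare absolute values.
      modes_ok = np.allclose(np.abs(Phi_standard), np.abs(Phi_amr))
      evals_ok = np.allclose(lam_standard, lam_amr)
      return modes_ok and evals_ok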

Citation

To cite our code, please use

@article{meehan2022efficient,
  title={Efficient algorithm for proper orthogonal decomposition of block-structured adaptively refined numerical simulations},
  author={Meehan, Michael A and Simons-Wellin, Sam and Hamlington, Peter E},
  journal={Journal of Computational Physics},
  pages={111527},
  year={2022},
  publisher={Elsevier}
}

License

LICENSE INFORMATION
