
Using BERNAISE


The full usage and operation of BERNAISE follows that described in Asger Bolet's, Gaute Linga's and Joachim Mathiesen's paper. Below is a summary of the key terminal commands and tools available.

Directory Structure

BERNAISE's directory structure keeps problems, meshes, solvers, etc. in separate folders. Common functionality, such as IO and parallelisation functions, is kept in the common folder, so updates and changes can be made to the system in one place rather than in each problem. This separation of common functions makes BERNAISE easy to maintain. Results folders are by default created in the root BERNAISE folder alongside problems, meshes, solvers, etc.

Image of BERNAISE structure from Asger Bolet's, Gaute Linga's and Joachim Mathiesen's paper.
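For readers viewing this page without the image, an illustrative sketch of the layout, based only on the folders named above, looks like:

```
BERNAISE/
├── common/                  # shared IO and parallelisation functions
├── meshes/
├── problems/                # one Python script per problem, e.g. flow_spiral2D.py
├── solvers/
├── results_flow_spiral2D/   # created automatically when a problem is run
└── sauce.py                 # common entry point for all problems
```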

Terminal Commands

Running a problem

python sauce.py problem=flow_spiral2D

The command runs sauce.py, the common Python script for running all problems. This loads all the problem-specific parameters and conditions from a Python script located in the BERNAISE > problems folder. The problem is selected with problem=flow_spiral2D, where flow_spiral2D is the name of the problem script; note that '.py' is not appended to the problem name. The command may need python3 instead of python if several versions of Python are available on the workstation.
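As an illustration only (this is a hypothetical sketch of the dispatch pattern, not BERNAISE's actual implementation), selecting a problem script by name could look like:

```python
# Hypothetical sketch of how sauce.py could dispatch to a problem script;
# BERNAISE's real implementation differs in detail.
import importlib
import sys

def load_problem(name):
    # "problem=flow_spiral2D" selects problems/flow_spiral2D.py, which is
    # why '.py' is never part of the argument. Assumes problems/ is an
    # importable package.
    return importlib.import_module("problems." + name)

if __name__ == "__main__":
    # Collect key=value arguments, e.g. "problem=flow_spiral2D".
    args = dict(arg.split("=", 1) for arg in sys.argv[1:])
    problem_module = load_problem(args["problem"])
```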

Changing parameters on the go

python sauce.py problem=flow_spiral2D T=1000

Any parameter can be overwritten by giving the parameter name, in this case T, with a corresponding value. This avoids editing the problem script each time, which is beneficial when operating on high-performance computing (HPC) clusters.
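A minimal sketch of how key=value overrides of this kind are typically parsed (again hypothetical, not BERNAISE's actual code):

```python
# Hypothetical sketch of key=value parameter overrides.
import ast
import sys

# Defaults would normally come from the problem script.
parameters = {"problem": "flow_spiral2D", "T": 500.0}

for arg in sys.argv[1:]:
    key, value = arg.split("=", 1)
    try:
        # Convert numbers, booleans, lists, etc. to Python values,
        # so "T=1000" becomes the integer 1000.
        parameters[key] = ast.literal_eval(value)
    except (ValueError, SyntaxError):
        # Plain strings (e.g. problem names) are kept as-is.
        parameters[key] = value
```

In this scheme, T=1000 on the command line replaces the default total time without the script ever being edited.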

Restarting a problem

python sauce.py problem=flow_spiral2D restart_folder=results_flow_spiral2D/1/Checkpoint/

A problem can be continued from its last checkpoint by setting restart_folder to the desired checkpoint inside the results folder. This is useful if a simulation ended early due to hardware failure, or if a study needs to be extended by increasing the total time. It saves time compared to re-running the entire simulation.
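Conceptually, a restart amounts to resuming time integration from the checkpointed time instead of zero. The sketch below is hypothetical (the file name time.txt and the checkpoint format are invented for illustration; BERNAISE's checkpointing differs):

```python
# Hypothetical sketch of a restart check; not BERNAISE's checkpoint format.
import os

def start_time(parameters):
    """Return the simulation time to resume from, or 0.0 for a fresh run."""
    folder = parameters.get("restart_folder")
    if not folder:
        return 0.0  # no checkpoint requested: start from scratch
    # Assume the checkpoint stores the time it was written in a small file.
    with open(os.path.join(folder, "time.txt")) as fh:
        return float(fh.read())

params = {"restart_folder": "results_flow_spiral2D/1/Checkpoint/", "T": 2000.0}
# t0 = start_time(params)  # the solver would then integrate from t0 up to T
```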

Running in parallel

mpirun -n 4 python sauce.py problem=flow_spiral2D

Some solvers run on single core and slow down due to calculations in large meshes. BERANISE, and FEniCS, overcomes this limitation by spreading the load across available cores in a controlled manner called 'MPI'. Adding mpirun -n 4 to the beginning of a terminal command spreads the load by creating '4' processes. Increasing the processes usually increases the speed of the solver but can slow down simple models due to the time taken to spread and gather calculations. 'MPI' usage can be studied in depth here.
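To see MPI in action independently of BERNAISE, a minimal example using mpi4py (typically installed alongside FEniCS) can be launched the same way:

```python
# mpi_hello.py: minimal MPI demonstration with mpi4py.
# Run with: mpirun -n 4 python mpi_hello.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's id, from 0 to size - 1
size = comm.Get_size()  # total number of processes (4 with -n 4)

print("Process %d of %d is handling its share of the work" % (rank, size))
```

Each of the 4 processes prints its own line, which mirrors how FEniCS partitions a mesh so that every process solves on its own portion.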