
Active flow control of the flow past a cylinder using OpenFOAM and PyTorch

Overview

Active flow control is a high-dimensional optimization problem. Therefore, for a generic example such as the flow around a cylinder, deep reinforcement learning (DRL) is used to achieve optimal flow control by leveraging its power of approximation in high-dimensional spaces. In this study, flow control is achieved by both open-loop and closed-loop control. For the flow around a 2D cylinder, the von Kármán vortices impose fluctuating drag and lift forces. The objective of the flow control is therefore to reduce the drag as well as the fluctuations of drag and lift, thereby improving the stability of the cylinder; to this end, the cylinder is rotated.

For open-loop control, the optimal strategy is determined by a parametric study in which the rotation of the cylinder follows a wave function designed to counter the natural vortex shedding. For closed-loop control, the flow is controlled using deep reinforcement learning: the proximal policy optimization (PPO) algorithm is used to implement the DRL setup, pressure sensors placed on the surface of the cylinder provide the state, and the cylinder is rotated according to the learned policy network. In each PPO iteration, the start of the controlled trajectory is chosen randomly between t = 0 s and t = 4 s.
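To make the closed-loop setup concrete, the following PyTorch sketch shows a minimal Gaussian policy network that maps surface pressure readings to a rotation rate. The class name PolicyNet, the layer sizes, and the sensor count n_sensors are assumptions for illustration only and do not reflect the actual network in ./DRL_py.

import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Illustrative Gaussian policy: surface pressures -> rotation rate."""

    def __init__(self, n_sensors=400, n_hidden=64):  # sizes are assumptions
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_sensors, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(n_hidden, 1)           # mean rotation rate
        self.log_std = nn.Parameter(torch.zeros(1))  # state-independent std

    def forward(self, pressures):
        h = self.body(pressures)
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())

policy = PolicyNet()
state = torch.randn(1, 400)      # placeholder pressure readings
dist = policy(state)
omega = dist.sample()            # rotation rate applied to the cylinder
log_prob = dist.log_prob(omega)  # stored for the PPO update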

(Media from the repository page: plots of the drag and lift coefficients cd and cl, the surface pressure animation surface_pressure_desh.mp4, and the integrated rotation rates inte_omegas.)

Dependencies

  • Python libraries, Singularity, Docker, ParaView (for visualization)

Simulation setup

For the simulation setup in OpenFOAM, the base case may be found in ./test_cases/cylinder2D_base. For more information, see here.

To build the Singularity image, follow the instructions given here. The Singularity image file (.sif) should be placed in the parent directory.

This base case can be executed with the Singularity image as follows:

singularity run of2006-py1.6-cpu.sif ./Allrun ./test_cases/cylinder2D_base/

For the mesh dependency study, execute the shell script:

$ ./mesh_study

The mesh refinement levels are set to 100, 200, and 400. To use more refinement levels, change the array mesh_size=( 100 200 400 ) in the shell script. The simulations for the different meshes are generated in ./test_cases/run/mesh_convergence_study/.

Open-loop control

The amplitude and frequency parameters for the rotation of the cylinder are sampled with the Latin hypercube sampling (LHS) method (see the sketch at the end of this section). For LHS sampling, use one of the following options.

Option 1 (with shell script)

$ ./bash_LHS_sampling

Option 2 (with Python script)

$ python3 py_LHS_sampling.py

The Python script requires the libraries numpy and matplotlib.

The simulations for the LHS samples are found in ./test_cases/run/oscillatory_parameter_study/cases.
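As an illustration of this sampling step, the sketch below draws amplitude/frequency pairs with a simple Latin hypercube scheme in numpy. The function latin_hypercube and the parameter ranges are assumptions for this example and are not taken from py_LHS_sampling.py.

import numpy as np

def latin_hypercube(n_samples, bounds, seed=None):
    """Simple LHS: one sample per stratum along each dimension, shuffled.

    bounds: list of (low, high) per parameter; the ranges below are assumptions.
    """
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # Stratify [0, 1) into n_samples intervals and jitter within each.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, d))) / n_samples
    for j in range(d):               # shuffle each dimension independently
        rng.shuffle(u[:, j])
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    return lows + u * (highs - lows)

# Hypothetical ranges for rotation amplitude and frequency:
samples = latin_hypercube(20, bounds=[(0.0, 5.0), (0.0, 1.0)], seed=42)
for amplitude, frequency in samples:
    # Each pair defines one open-loop rotation law, e.g. omega(t) = A*sin(2*pi*f*t)
    print(f"A = {amplitude:.3f}, f = {frequency:.3f}")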

Closed-loop control by Deep Reinforcement Learning using PPO

Training on local machine

Python libraries:

The Python libraries used for DRL can be installed separately in a virtual environment with:

pip install -r ./DRL_py/docker/requirements.txt

PPO iterations

For the PPO iterations, the OpenFOAM simulations (the environment) are handled by ./DRL_py/env_local.py.

To set up training on a local machine, change the machine variable in ./DRL_py/reply_buffer.py to machine = 'local'; see here.

To start training,

$ python3 main.py

Training on cluster (slurm workload manager)

Python libraries:

On the cluster, the Python libraries are installed by creating a virtual environment:

module load python/3.7 
python3 -m pip install --user virtualenv 
python3 -m virtualenv venv

To activate the virtual environment:

source venv/bin/activate

To deactivate:

deactivate

To install the Python libraries in the venv virtual environment:

pip install -r ./DRL_py/docker/requirements.txt

PPO iterations

For the PPO iterations on the cluster, the OpenFOAM simulations (the environment) are handled by ./DRL_py/env_cluster.py.

To set up training on the cluster, change the machine variable in ./DRL_py/reply_buffer.py to machine = 'cluster'; see here.

To submit the training job on the cluster:

$ cd DRL_py

$ sbatch python_job.sh

Report

The report for this study: https://doi.org/10.5281/zenodo.4897961

BibTeX citation:

@misc{darshan_thummar_2021_4897961,
  author       = {Darshan Thummar},
  title        = {{Active flow control in simulations of fluid flows 
                   based on deep reinforcement learning}},
  month        = may,
  year         = 2021,
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.4897961},
  url          = {https://doi.org/10.5281/zenodo.4897961}
}

References

The PPO implementation is based on chapter 12 of Miguel Morales' excellent book Grokking Deep Reinforcement Learning. For more information, refer to the Notebook.
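For orientation, the heart of PPO is the clipped surrogate objective. The fragment below is a generic PyTorch sketch of that loss (the clipping parameter eps=0.2 and the tensor names are assumptions), not the implementation used in this repository.

import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, eps=0.2):
    """Clipped surrogate objective from PPO; eps is an assumed default.

    All arguments are 1-D tensors gathered from the sampled trajectories.
    """
    ratio = torch.exp(new_log_probs - old_log_probs)   # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Maximize the surrogate -> minimize its negative mean.
    return -torch.min(unclipped, clipped).mean()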

For more information about the base simulation setup and the open-loop control, refer to Schaefer et al. and Tokumaru et al. The robust active flow control is inspired by Rabault et al. and Tokarev et al.
