Deep reinforcement learning uncovers processes for separating azeotropic mixtures without prior knowledge

grimmlab/drl4procsyn


DRL4ProcSyn


Official PyTorch implementation of the Gumbel AlphaZero-based algorithm for flowsheet synthesis in the paper "Deep reinforcement learning enables conceptual design of processes for separating azeotropic mixtures without prior knowledge". An agent plays a singleplayer game in which it sequentially constructs flowsheets and receives the net present value of the resulting process as episodic reward.

NOTE: As we are in an AlphaZero-setting, the terms episode and game are used interchangeably in the code.
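As a minimal sketch of this singleplayer game loop (all names here are hypothetical illustrations, not the repository's actual interfaces, and the dummy reward stands in for the simulated net present value):

```python
import random

def play_episode(num_steps=5, seed=0):
    """Sequentially 'construct' a flowsheet; the reward arrives only at the end.

    Hypothetical sketch: the real agent/environment interfaces live in the
    repository (./environment/) and use a process simulation for the reward.
    """
    rng = random.Random(seed)
    flowsheet = []
    for _ in range(num_steps):
        # The agent picks the next unit operation (here: uniformly at random).
        action = rng.choice(["column", "decanter", "mixer", "split"])
        flowsheet.append(action)
    # Episodic reward: a dummy value in place of the net present value
    # computed from the finished process.
    net_present_value = float(len(set(flowsheet)))
    return flowsheet, net_present_value

sheet, npv = play_episode()
print(len(sheet))
```

The point is only the structure: no intermediate rewards, one scalar return per completed flowsheet, which is the setting the Gumbel AlphaZero search operates in.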

Full training/testing with parallel actors is run with

$ python main.py

Config

Configuration of the runs is separated into two config files:

  1. Everything regarding the general Gumbel AlphaZero setup: number of workers, training devices, optimizer settings, results folders, etc. See ./gaz_singleplayer/config_syngame.py and the comments on the individual settings.

    The code is heavily parallelized with ray.io; the parallelization is configured via num_experience_workers and the subsequent attributes. See the comments on the individual attributes to adjust the settings to your hardware. (Pre-configured to run 50 parallel MCTS workers on CPU and to train on cuda device 0.)

  2. The environment configuration: chemical systems considered, simulation settings, discretization, etc. See ./environment/env_config.py and the comments on the individual settings.
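As an illustration of the kind of adjustment meant above, a scaled-down run on a machine without a GPU might change the worker count and training device. Note this is a hypothetical sketch: apart from num_experience_workers, the class and attribute names below are assumptions, not the repository's actual config API.

```python
from dataclasses import dataclass

@dataclass
class SyngameConfig:
    # Defaults mirror the pre-configured setup: 50 parallel MCTS workers
    # on CPU, network training on cuda device 0.
    num_experience_workers: int = 50
    training_device: str = "cuda:0"  # hypothetical attribute name

# Scaled-down setup for a small CPU-only machine.
config = SyngameConfig(num_experience_workers=8, training_device="cpu")
print(config.num_experience_workers)
```

In the actual code, these values are edited directly in ./gaz_singleplayer/config_syngame.py rather than constructed as shown here.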

Requirements

Built and tested with PyTorch 2. Required packages are specified in requirements.txt. An NVIDIA GPU is recommended for faster training. An example Dockerfile is given under docker/Dockerfile.

Acknowledgments

Thanks to the following repositories:

Paper

For more details, please see our preprint "Deep reinforcement learning uncovers processes for separating azeotropic mixtures without prior knowledge". If this code is useful for your work, please cite our paper:

@article{gottl2023deep,
  title={Deep reinforcement learning uncovers processes for separating azeotropic mixtures without prior knowledge},
  author={G{\"o}ttl, Quirin and Pirnay, Jonathan and Burger, Jakob and Grimm, Dominik G},
  journal={arXiv preprint arXiv:2310.06415},
  year={2023}
}
