This repository is based on Distributional-GFlowNets.
PyTorch implementation for our paper
Evolution Guided Generative Flow Networks.
Anonymous authors.
We train our GFlowNet agents in two ways. Using evolutionary algorithms, we evolve a population of agent parameters 🤖🤖🤖 toward the optimal parameters that maximize the reward signal. During the evolution step, the generated samples are stored in a prioritized replay buffer (PRB). Using the offline samples from the replay buffer and online samples from the current policy, we then train a GFlowNet agent 🤖 using gradient descent.
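The sketch below is a minimal, self-contained illustration of this scheme with toy stand-ins, not the repository's actual implementation: a small population of policies is evolved with elite selection and Gaussian parameter mutation, the samples generated during evolution are pushed into a reward-prioritized replay buffer, and a separate agent is then updated by gradient descent on a mix of online and offline samples. The network, reward, and loss here are placeholders (the real agent is a GFlowNet trained with a GFlowNet objective such as detailed balance or flow matching).

```python
import heapq
import random

import torch
import torch.nn as nn


def make_policy():
    # Toy stand-in policy: scores 8 discrete actions from a 4-d state.
    return nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 8))


def sample_and_score(policy):
    # Toy rollout: one state/action pair plus a placeholder reward.
    state = torch.randn(4)
    with torch.no_grad():
        logits = policy(state)
    action = torch.distributions.Categorical(logits=logits).sample()
    reward = float(torch.rand(()))  # placeholder for the task reward
    return state, action, reward


population = [make_policy() for _ in range(10)]
replay_buffer = []  # prioritized replay: (negative reward, tiebreak, sample)
agent = make_policy()
optimizer = torch.optim.Adam(agent.parameters(), lr=1e-3)

for step in range(100):
    # Evolution step: evaluate the population, keep elites, mutate the rest.
    scored = []
    for policy in population:
        samples = [sample_and_score(policy) for _ in range(8)]
        fitness = sum(r for _, _, r in samples) / len(samples)
        scored.append((fitness, policy))
        for sample in samples:
            # Samples generated during evolution go into the PRB,
            # prioritized here by reward.
            heapq.heappush(replay_buffer, (-sample[2], random.random(), sample))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    elites = [policy for _, policy in scored[:3]]
    population = list(elites)
    while len(population) < 10:
        child = make_policy()
        child.load_state_dict(random.choice(elites).state_dict())
        with torch.no_grad():
            for p in child.parameters():
                p.add_(0.02 * torch.randn_like(p))  # Gaussian mutation
        population.append(child)

    # Gradient step: mix online samples from the current agent with
    # high-priority offline samples from the replay buffer.
    online = [sample_and_score(agent) for _ in range(8)]
    offline = [heapq.heappop(replay_buffer)[2] for _ in range(min(8, len(replay_buffer)))]
    loss = torch.zeros(())
    for state, action, reward in online + offline:
        log_prob = torch.log_softmax(agent(state), dim=-1)[action]
        # Stand-in objective; the actual code uses a GFlowNet loss (e.g. DB/FM).
        loss = loss - reward * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```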
First, clone our repository.
git clone https://github.com/zarifikram/E-GFN
cd ./E-GFN
Install Anaconda first if it is not already available. Then run the following commands.
./setup.zsh
conda activate e-gfn
First, navigate to the grid directory.
cd ./grid
python run_hydra.py ndim=5 horizon=20 method=db_egfn n_train_steps=2500 replay_sample_size=16 seed=$seed R0=0.00001
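Here $seed is a shell variable. If you prefer to sweep over several seeds from Python rather than a shell loop, a minimal sketch (not part of the repository) is:

```python
# Minimal sketch: launch the grid experiment for several seeds in sequence.
import subprocess

for seed in range(5):
    subprocess.run(
        [
            "python", "run_hydra.py",
            "ndim=5", "horizon=20", "method=db_egfn",
            "n_train_steps=2500", "replay_sample_size=16",
            f"seed={seed}", "R0=0.00001",
        ],
        check=True,
    )
```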
All ablation scripts are in the scripts directory. For example, to run the sparsity ablation, run
./scripts/sparsity.sh
Other ablations include:
./scripts/long_time_horizon.sh
./scripts/generalizability.sh
./scripts/ablation_population_size.sh
./scripts/gamma.sh
./scripts/ablation_elite_population.sh
./scripts/buffer_size.sh
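To launch every ablation in sequence from Python, a minimal sketch (assuming the scripts are executable with bash and run from the same directory as above) is:

```python
# Minimal sketch: run all ablation scripts one after another.
import subprocess

scripts = [
    "./scripts/sparsity.sh",
    "./scripts/long_time_horizon.sh",
    "./scripts/generalizability.sh",
    "./scripts/ablation_population_size.sh",
    "./scripts/gamma.sh",
    "./scripts/ablation_elite_population.sh",
    "./scripts/buffer_size.sh",
]
for script in scripts:
    subprocess.run(["bash", script], check=True)
```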
To change the configuration for the experiment, simply edit the configs/main.yaml file.
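To inspect the current configuration programmatically before launching a run, a small sketch (assuming PyYAML is available in the e-gfn environment; the key names depend on the actual file) is:

```python
# Minimal sketch: print the grid experiment configuration.
import yaml

with open("configs/main.yaml") as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(f"{key}: {value}")
```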
First, navigate to the mols directory.
cd ./mols
Next, set the proxy_path and bpath variables in the gflownet.py file.
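For example (illustrative placeholders only; the actual paths depend on where your proxy model and building-block files live):

```python
# Inside gflownet.py -- purely illustrative placeholder values, replace with your own paths.
proxy_path = "/path/to/pretrained_proxy"
bpath = "/path/to/building_blocks_file"
```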
To run the experiment, run the following command.
python gflownet.py obj=fm_egfn sample_prob=0.2
To change the configuration for the experiment, simply edit the configs/main_gfn.yaml file.