Full documentation can be found here
In a terminal, enter the following commands:
```sh
git clone https://github.com/realm-ai-project/RL-Subsystem.git
cd RL-Subsystem
pip install -e .
```
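To quickly verify the installation, the CLI should respond to the standard help flag (an assumption here, though argparse-based tools expose it by default):

```sh
realm-tune --help
```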
- Build an executable of the game.
- In a terminal, run

  ```sh
  realm-tune --env-path <path_to_environment>
  ```

  where `<path_to_environment>` is the path to the game executable.
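For example, with a hypothetical Linux build (substitute the path to your own executable):

```sh
realm-tune --env-path ./builds/MyGame/MyGame.x86_64
```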
- Does not support multiplayer environments (i.e., environments with more than one behaviour)
- Does not support in-editor training; training must be done on a build
- Resuming: if training is interrupted during hyperparameter tuning, the currently running trial is discarded upon resuming. If training is interrupted during the full run, the full run automatically continues upon resuming.
- Error checking of hyperparameters specified in the YAML file is left to the `mlagents` Python package, which does a good job of validating them. Note that it is not required to specify all hyperparameters: we can list only the hyperparameters we want to tune over, and the rest are defaulted automatically. The only mandatory field is the trainer type (e.g., `ppo` or `sac`; note that `realm-tune` is algorithm-agnostic). A minimal configuration is sketched after this list.
- The resuming feature works with or without wandb. If using wandb, we might see multiple runs with the same name.
- Tested with wandb offline. When using wandb offline, after running to completion, `cd` into the root folder of the run (where the wandb folder lies) and run `wandb sync --sync-all`. If there are any errors, delete those runs and retry the sync command.
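As referenced above, here is a minimal sketch of an mlagents-style configuration. The behavior name and learning-rate value are hypothetical examples, and the realm-tune-specific syntax for declaring tuning ranges is not shown:

```yaml
# Sketch of a minimal mlagents-style trainer config.
# "MyBehavior" and the learning_rate value are hypothetical examples.
behaviors:
  MyBehavior:
    trainer_type: ppo        # mandatory: ppo or sac (realm-tune is algorithm-agnostic)
    hyperparameters:
      learning_rate: 3.0e-4  # list only the hyperparameters to tune over;
                             # everything omitted is defaulted automatically
```

Since validation is delegated to the `mlagents` package, invalid hyperparameter names or values will surface when this file is parsed.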