Request: save/resume training #25
Hi @123mitnik, right now the first and last are already supported. By default, if you run an experiment in "stub" mode (i.e. calling …). Which algorithm(s) do you need the resume-training feature for? I can give a shot at implementing a primitive version.
The main focus is on Trust Region Policy Optimization, and second on the list is Truncated Natural Policy Gradient. Being able to resume policy training would be a great asset, just a lifesaver in demanding envs. I would like to thank you for your fast reply, and to say how wonderful rllab is!
Hi @123mitnik, I've pushed an experimental implementation. For new experiment runs (must be under "stub" mode), the snapshot files (ending in …) can now be resumed with `python scripts/resume_training.py PATH_TO_PKL_FILE.pkl`.
Wow, that was fast! Thank you so much! Will start using it immediately.
How difficult would it be to implement: