How to set number of epochs trained instead of total_timestep #352
Hello, how do you define an epoch? Do you perhaps mean episodes?
I think it depends on the definition, but I mean an epoch in the traditional DL sense: a forward/backward pass through the entirety of the data.
Well, we are in an RL setting, so "the entirety of the data" does not really make sense here, no?
Epochs are defined in OpenAI Spinning Up, so I'm just curious whether there's an equivalent variable in baselines. If I want to set the number of episodes, is there an option to do so, or is it enough to set the number of steps?
I see. For PPO2, for instance, you can work out an equivalent. For other algorithms, like SAC, which update at every step, that does not really make sense...
There is no built-in option, but you can easily convert your number of epochs into a number of steps (cf. above); the conversion depends on the algorithm used.
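To make that conversion concrete, here is a minimal sketch. The helper name is hypothetical (not part of any library); `epochs` and `steps_per_epoch` follow Spinning Up's convention, where total environment interaction is simply their product:

```python
def epochs_to_timesteps(epochs, steps_per_epoch):
    """Hypothetical helper: convert a Spinning-Up-style (epochs,
    steps_per_epoch) budget into the single total_timesteps number
    that stable-baselines' learn() expects."""
    return epochs * steps_per_epoch


total_timesteps = epochs_to_timesteps(epochs=50, steps_per_epoch=4000)
print(total_timesteps)  # 200000

# Sketch of how it would be used (not executed here):
# model = PPO2("MlpPolicy", env)
# model.learn(total_timesteps=total_timesteps)
```

For PPO2 specifically, you would also want `steps_per_epoch` to line up with the rollout length so each "epoch" maps onto whole update batches.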
I see. Thanks for your answers.
How about DDPG? I saw a similar setting in OpenAI Spinning Up. Is there an equivalent in stable_baselines?
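For a DDPG-style schedule, the same idea applies, just with more factors. The parameter names below (`nb_epochs`, `nb_epoch_cycles`, `nb_rollout_steps`) are assumed from the OpenAI baselines DDPG runner and may differ in your version; the point is that the product of the loop counts is what you would pass as `total_timesteps`:

```python
# Hypothetical conversion for a baselines-style DDPG schedule: the training
# loop runs nb_epochs x nb_epoch_cycles x nb_rollout_steps environment
# steps in total (parameter names assumed; verify against your version).
nb_epochs = 500
nb_epoch_cycles = 20
nb_rollout_steps = 100

total_timesteps = nb_epochs * nb_epoch_cycles * nb_rollout_steps
print(total_timesteps)  # 1000000

# Sketch (not executed here):
# model = DDPG("MlpPolicy", env)
# model.learn(total_timesteps=total_timesteps)
```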
In the example code, the call to train the agent is as follows:
model.learn(total_timesteps=10000)
Is there a way to specify the number of epochs a model is trained for, instead of by timesteps?
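One practical workaround, if you want to cap training by episodes rather than timesteps, is the callable callback that stable-baselines' `learn()` accepts: returning `False` from it stops training early. The sketch below is hypothetical; in particular, the keys available in `_locals` vary by algorithm, so `"done"` is an assumption you would need to verify for your algorithm:

```python
class StopAfterEpisodes:
    """Hypothetical callback: stop training once max_episodes have finished.

    stable-baselines calls the callback as callback(_locals, _globals) and
    stops training when it returns False. The '_locals["done"]' key used to
    detect episode ends is an assumption; inspect _locals for your algorithm.
    """

    def __init__(self, max_episodes):
        self.max_episodes = max_episodes
        self.episodes = 0

    def __call__(self, _locals, _globals):
        if _locals.get("done"):  # assumed end-of-episode signal
            self.episodes += 1
        return self.episodes < self.max_episodes


# Usage sketch (not executed here): pass a generous step budget and let
# the callback cut training short:
# model.learn(total_timesteps=1_000_000, callback=StopAfterEpisodes(100))
```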