I would like to quickly confirm: is it also recommended for SAC to set different numbers of initial exploration steps, as in TD3 (e.g. 1e4 for HalfCheetah, 1e3 for Hopper)?
Hi, random initial exploration helps to reduce the solution's dependency on the initial weights of the policy. The more random steps you add, the less variation there will be between training trials. A good rule of thumb is to have a fixed number (say 10) of exploration episodes. Unfortunately our code does not support fixing the number of episodes (we can only set the number of steps), so we set 1e4 steps for HalfCheetah (which has fixed-length episodes of 1000 steps) and 1e3 for Hopper (whose episode length varies and is around 100 steps early in training). Hope this answers your question!
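To make the rule of thumb above concrete, here is a minimal sketch of how a step budget translates into an approximate number of purely random warm-up episodes (the helper name and episode lengths are assumptions based on the figures quoted in this thread, not part of any library's API):

```python
def warmup_episodes(start_steps: float, mean_episode_len: float) -> float:
    """Approximate number of fully random exploration episodes
    implied by a warm-up budget of `start_steps` environment steps."""
    return start_steps / mean_episode_len

# HalfCheetah: fixed 1000-step episodes, 1e4 random steps -> ~10 episodes
print(warmup_episodes(1e4, 1000))  # 10.0

# Hopper: ~100-step episodes early in training, 1e3 random steps -> ~10 episodes
print(warmup_episodes(1e3, 100))   # 10.0
```

Both settings land on roughly the same ~10 random episodes, which is why the per-environment step counts differ even though the underlying heuristic is the same.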