Hi,
I was wondering if you had any ideas on how a Prioritized Experience Replay buffer could be added to ES?
Something similar is done in "Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards", which uses DDPG for a robotics application.
I'm guessing ES would be more general, though?
Perhaps OpenAI's prioritized replay_buffer from the baselines repo could be used?
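For context, here is a rough, self-contained sketch of what a proportional prioritized replay buffer does (sampling probability proportional to `priority ** alpha`, as in Schaul et al.'s Prioritized Experience Replay). This is only an illustration of the idea, not the baselines implementation or anything from this repo; the class and parameter names (`PrioritizedBuffer`, `alpha`, `beta`) are made up for the example:

```python
import numpy as np

class PrioritizedBuffer:
    """Minimal proportional prioritized replay buffer (illustration only)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.data = []          # stored transitions
        self.priorities = []    # one priority per transition
        self.pos = 0            # next write position (ring buffer)

    def add(self, transition, priority=1.0):
        # New transitions get a caller-supplied priority so they are
        # likely to be sampled at least once.
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(priority)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        # Sampling probability proportional to priority ** alpha.
        prios = np.asarray(self.priorities, dtype=np.float64) ** self.alpha
        probs = prios / prios.sum()
        idxes = np.random.choice(len(self.data), size=batch_size, p=probs)
        # Importance-sampling weights correct for the non-uniform sampling.
        weights = (len(self.data) * probs[idxes]) ** (-beta)
        weights /= weights.max()
        batch = [self.data[i] for i in idxes]
        return batch, idxes, weights

    def update_priorities(self, idxes, new_priorities):
        # Called after recomputing scores (e.g. TD errors, or some
        # fitness-based measure for ES) for the sampled transitions.
        for i, p in zip(idxes, new_priorities):
            self.priorities[i] = float(p)
```

For ES, one guess would be to fill such a buffer with high-return trajectories from past perturbations and bias future evaluation or fine-tuning toward them, but I'm not sure how well that fits the gradient-free update.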