[rllib] Flexible multi-agent replay modes and replay_sequence_length #8893
Conversation
from ray.util.iter import ParallelIteratorWorker
from ray.rllib.utils.timer import TimerStat
from ray.rllib.utils.window_stat import WindowStat

# Constant that represents all policies in lockstep replay mode.
The most significant changes are to this file.
Can one of the admins verify this patch?
Test PASSed.
LGTM. Thanks for cleaning up the buffers.
@@ -41,73 +44,42 @@ def __len__(self):
        return len(self._storage)

    @DeveloperAPI
-   def add(self, obs_t, action, reward, obs_tp1, done, weight):
-       data = (obs_t, action, reward, obs_tp1, done)
+   def add(self, item: SampleBatchType, weight: float):
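A rough usage sketch of the new interface (the buffer instance, column names, and values below are illustrative assumptions, not taken from this PR): the buffer now stores a whole SampleBatch as one item instead of an unpacked (obs, action, reward, next_obs, done) tuple.

from ray.rllib.policy.sample_batch import SampleBatch

# Hypothetical single-step batch; columns follow the usual SampleBatch layout.
batch = SampleBatch({
    "obs": [[0.0, 1.0]],
    "actions": [0],
    "rewards": [1.0],
    "new_obs": [[1.0, 0.0]],
    "dones": [False],
})

# `buffer` is assumed to be a ReplayBuffer instance created elsewhere.
buffer.add(batch, weight=1.0)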
Nice!
},

# === Replay Settings ===
This is very cool. We should be able to implement R2D2 then just by setting this to >1.
Test PASSed.
Test PASSed.
Test FAILed.
Test PASSed.
I believe test_rollout.py is flaking in master as well due to timeouts (@sven1977)
Why are these changes needed?
This adds a "replay_mode" option for multi-agent training. When replay_mode=independent (the current behavior), RLlib provides no guarantee that all the agent experiences for a particular timestep are present in the same training batch together. When replay_mode=lockstep, RLlib will replay all the agent experiences from a particular timestep together in the batch. Some features, such as prioritized replay and SGD minibatches, are tricky to implement in lockstep mode, so they are currently unsupported.
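As a rough sketch (the exact placement of the key under the multiagent sub-config is an assumption, not spelled out in this description), lockstep replay would be enabled roughly like this:

# Sketch only: config placement is assumed from this PR's description.
config = {
    "multiagent": {
        # "independent" (default): agent steps may be replayed separately.
        # "lockstep": all agent steps from a timestep are replayed together.
        "replay_mode": "lockstep",
    },
}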
Implementation:
This PR also adds a new "replay_sequence_length" option that works in either mode. It is intended for future use in training RNN and attention models.
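A minimal sketch of the new option (the key name comes from this PR; its placement under the top-level "=== Replay Settings ===" section is inferred from the diff above, and the model settings are hypothetical):

# Store and replay contiguous sequences of 20 timesteps instead of single steps,
# e.g. as a building block for R2D2-style recurrent training.
config = {
    "replay_sequence_length": 20,
    # Hypothetical: an RNN model that would consume the replayed sequences.
    "model": {"use_lstm": True},
}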
Related issue number
Part of #7341
Checks
I've run scripts/format.sh to lint the changes in this PR.