[Help Wanted] Incompatible log_probs shape when combining trajectories #70
If you actually don't care about the …
I'm not sure what you mean by that. Did you mean "to store" instead of "to sample"? If so, note that a replay buffer works essentially with …
Alright, thank you! I can set …
I'm implementing a replay buffer from which we can sample the highest- (and lowest-) reward terminal states seen so far. It's the PRT technique from https://arxiv.org/abs/2305.07170; we need it because our rewards are very skewed. Currently, every time I add terminal states to the buffer, I remove any duplicated terminal states and sort the buffer by the corresponding rewards, so when I sample, I only draw from the first n and last n states. If I reach maximum capacity, I remove states from the middle of the buffer. My question was: instead of storing just terminal states and sampling the highest- (and lowest-) reward ones among them, can I store the trajectories themselves and find a way to sample the trajectories with the desired rewards? That way I wouldn't need to generate backward trajectories from the terminal states and convert them to forward ones. What do you think is the best way to implement such a buffer?
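As an illustration of the buffer just described, here is a minimal self-contained sketch. It is not torchgfn code: the class name, the tuple-based state hashing, and the plain-list storage are all assumptions made for the example; only the bookkeeping (sorted, deduplicated storage, sampling from the extremes, evicting from the middle) follows the description above.

```python
import bisect
import random

class RewardSortedTerminalBuffer:
    """Sketch of a reward-prioritized buffer over terminal states.

    Entries are kept sorted by reward; sampling draws only from the
    n lowest- and n highest-reward states, and eviction removes from
    the middle once capacity is reached.
    """

    def __init__(self, capacity: int, n_extreme: int):
        self.capacity = capacity
        self.n_extreme = n_extreme
        self.entries = []  # (reward, state_key) pairs, kept sorted by reward
        self.seen = set()  # state keys, for deduplication

    def add(self, state, reward: float) -> None:
        key = tuple(state)  # assumes the state flattens to something hashable
        if key in self.seen:
            return  # drop duplicated terminal states
        self.seen.add(key)
        bisect.insort(self.entries, (reward, key))
        if len(self.entries) > self.capacity:
            # Evict from the middle: the extremes are what we sample from.
            _, evicted = self.entries.pop(len(self.entries) // 2)
            self.seen.discard(evicted)

    def sample(self, k: int):
        # The two slices may overlap while the buffer is still small.
        pool = self.entries[: self.n_extreme] + self.entries[-self.n_extreme :]
        return random.sample(pool, min(k, len(pool)))
```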
I have never implemented a prioritized replay buffer, but your idea looks like it can be extended to storing trajectories rather than terminal states. For example, you could use the provided `ReplayBuffer` class and keep a sorted list of indices (the indices of the trajectories). Every time you add a trajectory, you re-sort the list of indices (using the corresponding trajectory rewards as the sort key). Then you could use the sorted list of indices to subsample your replay buffer, rather than using the …
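A minimal sketch of that index bookkeeping, kept separate from the buffer itself. The `RewardSortedIndices` name and the standalone-class design are illustrative, not part of torchgfn; the idea is only to maintain buffer indices sorted by reward so they can drive subsampling in place of uniform random sampling:

```python
class RewardSortedIndices:
    """Keeps replay-buffer indices sorted by the reward of the stored trajectory."""

    def __init__(self):
        self._pairs = []  # (reward, buffer_index), kept sorted by reward

    def register(self, buffer_index: int, reward: float) -> None:
        # Re-sort after every addition, as suggested above; O(n log n),
        # but simple and fine for moderate buffer sizes.
        self._pairs.append((reward, buffer_index))
        self._pairs.sort(key=lambda p: p[0])

    def extreme_indices(self, n: int) -> list:
        # Buffer indices of the n lowest- and n highest-reward trajectories,
        # to use when subsampling the stored trajectories.
        low = [i for _, i in self._pairs[:n]]
        high = [i for _, i in self._pairs[-n:]]
        return low + high
```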
Alright, let me try that out! Thank you :)
Hello again everyone,
We are trying to augment our training with backward trajectories starting from states sampled from a reward-prioritized replay buffer, as described here: https://arxiv.org/abs/2305.07170. I used `Trajectories.revert_backward_trajectories()` to transform the backward trajectories into forward ones, but attempting to combine them with the forward-sampled trajectories raises an error. Specifically, the code below reproduces the error:

The error is:
Inserting the code

before `trajectories.extend(offline_trajectories)` seems to work, but I don't know if there will be unexpected behavior downstream. It seems that `log_probs` needs to be padded after `Trajectories.revert_backward_trajectories()`. I would appreciate your insight.

Possibly a bit off topic: would it be better to sample forward trajectories stored in the `ReplayBuffer` instead? I would just need to sort the trajectories according to the rewards of the terminating states, right?

Thank you very much for your time!
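For what it's worth, here is one way the padding workaround could look. This is only a sketch under the assumption (suggested by the shape error) that `log_probs` is a tensor whose first dimension is the trajectory length and whose second dimension indexes trajectories; the helper `pad_log_probs` is hypothetical, not part of the library:

```python
import torch

def pad_log_probs(trajs, target_length: int) -> None:
    """Zero-pad trajs.log_probs along the time dimension up to target_length.

    Assumes log_probs has shape (max_length, n_trajectories). Whether zero is
    the right fill value depends on how downstream losses mask padded steps.
    """
    current_length = trajs.log_probs.shape[0]
    if current_length < target_length:
        padding = torch.zeros(
            target_length - current_length,
            trajs.log_probs.shape[1],
            dtype=trajs.log_probs.dtype,
            device=trajs.log_probs.device,
        )
        trajs.log_probs = torch.cat([trajs.log_probs, padding], dim=0)

# Usage (with `trajectories` on-policy and `offline_trajectories` reverted):
# max_len = max(trajectories.log_probs.shape[0],
#               offline_trajectories.log_probs.shape[0])
# pad_log_probs(trajectories, max_len)
# pad_log_probs(offline_trajectories, max_len)
# trajectories.extend(offline_trajectories)
```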