Doing human experience replay the naive way (making a separate numpy array, loading it in, and combining it with the built-in dataset in deep_q_rl) makes the code run possibly several orders of magnitude slower. The built-in replay memory has a capacity of 1 million, and my data is "only" on the order of 10k, so there's no reason my version should be that slow. My guess is that it's a memory issue: if I decrease my human experience replay data by a factor of 10, the runtime changes by roughly a factor of 10 as well.
So let's instead figure out how to build the human dataset into the normal experience replay in deep_q_rl.
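One way to sketch this: instead of keeping the human transitions in a second array that gets combined at sample time, preload them into the same preallocated circular buffer the agent already samples from. The class below is a minimal stand-in modeled loosely on deep_q_rl's replay dataset; the names (`ReplayMemory`, `add_sample`, `preload_human_data`, the `.npz` keys) are illustrative assumptions, not the project's actual API.

```python
import numpy as np

class ReplayMemory:
    """Minimal preallocated circular replay buffer, modeled loosely on
    deep_q_rl's DataSet. Names and shapes here are illustrative only."""

    def __init__(self, capacity, frame_shape=(84, 84)):
        self.capacity = capacity
        # Preallocate once; no per-sample allocation or array concatenation.
        self.frames = np.zeros((capacity,) + frame_shape, dtype=np.uint8)
        self.actions = np.zeros(capacity, dtype=np.int32)
        self.rewards = np.zeros(capacity, dtype=np.float32)
        self.terminals = np.zeros(capacity, dtype=np.bool_)
        self.top = 0   # next write index
        self.size = 0  # number of valid samples

    def add_sample(self, frame, action, reward, terminal):
        """Write one transition at the current position, wrapping around."""
        self.frames[self.top] = frame
        self.actions[self.top] = action
        self.rewards[self.top] = reward
        self.terminals[self.top] = terminal
        self.top = (self.top + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

def preload_human_data(memory, path):
    """Push recorded human transitions into the same buffer the agent
    samples from, so no separate dataset is ever consulted at runtime.
    The .npz keys below are an assumption about how the demos were saved."""
    data = np.load(path)
    for f, a, r, t in zip(data["frames"], data["actions"],
                          data["rewards"], data["terminals"]):
        memory.add_sample(f, a, r, t)
```

With ~10k human transitions preloaded into a 1M-capacity buffer at startup, the per-step sampling path is identical to the stock code, so the human data should add essentially no runtime cost after loading.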