
Replayloader doesn't work for Atari #11

Closed
slerman12 opened this issue Oct 19, 2021 · 9 comments

Comments

@slerman12

slerman12 commented Oct 19, 2021

Have you tried using this replay loader with Atari? I keep getting this error unless I set the number of replay workers to 1:

File "/u/slerman/miniconda3/envs/agi/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/u/slerman/miniconda3/envs/agi/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 28, in fetch
    data.append(next(self.dataset_iter))
  File "/home/cxu-serve/u1/slerman/drqv2/replay_buffer.py", line 176, in __iter__
    yield self._sample()
  File "/home/cxu-serve/u1/slerman/drqv2/replay_buffer.py", line 159, in _sample
    episode = self._sample_episode()
  File "/home/cxu-serve/u1/slerman/drqv2/replay_buffer.py", line 99, in _sample_episode
    eps_fn = random.choice(self._episode_fns)
  File "/u/slerman/miniconda3/envs/agi/lib/python3.8/random.py", line 290, in choice
    raise IndexError('Cannot choose from an empty sequence') from None
IndexError: Cannot choose from an empty sequence


Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

Edit: Sorry, I originally posted the wrong trace.

@denisyarats
Contributor

Hi, I think before you start sampling from the replay buffer you need to have at least N episodes stored in it, where N is the number of workers. Otherwise some of the workers end up with no data to sample from and throw this error.

So make sure to modify your code so that enough data has already been collected before sampling starts.
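
Something like this (just a sketch; replay_storage, num_episodes, and replay_buffer_num_workers are placeholders for whatever your code calls them):

def ready_to_sample(replay_storage, num_workers):
    # Each DataLoader worker samples episodes independently, so every worker
    # needs at least one stored episode before random.choice can succeed.
    return replay_storage.num_episodes >= num_workers

# In the training loop (sketch): keep acting with the seed/random policy and
# only start pulling from the replay iterator once the condition holds, e.g.
#   while not ready_to_sample(replay_storage, cfg.replay_buffer_num_workers):
#       collect_one_episode()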

@slerman12
Author

Curiously, I tried that and even set the number of seed frames to num_replay_workers * episode_length, but it still kept throwing that error.

@slerman12
Author

Oh, looks like I found the problem. It was something unrelated.

@slerman12
Author

Sorry if this is a lot of trouble to answer, but I've implemented a version of DrQ with DQN, including double Q-learning and dueling Q-networks as in the original DrQ paper. The hyperparameters are the same; the only difference is that there are no terminal states, since I'm using this code's replay buffer (although I guess it wouldn't be too hard to add them). The problem is that I'm not able to reproduce the Atari results reported in that paper. Are there any other additions I should consider besides double/dueling Q-learning and matching the hyperparameters? I store the "episodes" in 100-frame increments for the replay buffer, but the actual training proceeds normally until the episode is really completed. Exploration and the intensity augmentation are also the same as in the DrQ paper. I even borrowed from DrQ-v2 and made the action sampling "noisy" for training via an increasing softmax temperature and categorical sampling instead of just taking the max.
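
For concreteness, the dueling head and the temperature-based sampling look roughly like this (an illustrative sketch, not my exact implementation; the names here are made up):

import torch
import torch.nn as nn
import torch.nn.functional as F

class DuelingHead(nn.Module):
    # Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
    def __init__(self, feature_dim, num_actions, hidden_dim=512):
        super().__init__()
        self.value = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1))
        self.advantage = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_actions))

    def forward(self, features):
        v = self.value(features)                    # (B, 1)
        a = self.advantage(features)                # (B, num_actions)
        return v + a - a.mean(dim=1, keepdim=True)  # (B, num_actions)

def sample_action(q_values, temperature):
    # Softmax-temperature categorical sampling instead of a pure argmax.
    probs = F.softmax(q_values / temperature, dim=-1)
    return torch.distributions.Categorical(probs=probs).sample()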

@denisyarats
Contributor

Are you using the exact Atari wrapper described in Rainbow? For example, sticky actions, termination on life loss, etc.? Those are very important to get right. Also, for Atari we used additional data augmentation in the form of noise.
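
For reference, the standard setup looks roughly like this (a sketch assuming the gym AtariPreprocessing/FrameStack wrappers and the "-v0" game ids for sticky actions; not necessarily the exact wrapper we used):

import gym

def make_atari_env(game='PongNoFrameskip-v0'):
    # "-v0" NoFrameskip ids use sticky actions (repeat_action_probability=0.25),
    # while "-v4" ids do not; frame skipping is handled by the wrapper below.
    env = gym.make(game)
    env = gym.wrappers.AtariPreprocessing(
        env,
        frame_skip=4,
        screen_size=84,
        grayscale_obs=True,
        terminal_on_life_loss=True)  # treat life loss as episode end
    env = gym.wrappers.FrameStack(env, 4)
    return env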

It is hard to pinpoint exactly what the issue with your code is, but I'm happy to take a look and see if I can spot anything. Please email me at denisyarats@cs.nyu.edu if you want me to take a look at your code.

@slerman12
Author

slerman12 commented Oct 26, 2021

I'm using a slightly different Atari wrapper. If it's alright, I'll send you the code, because I'm having trouble reproducing the results. Any chance you have a script handy that I can use to compile the results output by this repo? Currently, I've just been manually looking at the eval CSVs. Just checking, since it would save me some time coding one up from scratch.

@denisyarats
Contributor

Here is a sample script that you can use to plot the CSVs: https://github.com/denisyarats/pytorch_sac/blob/master/data/sac.ipynb
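
If you just want something minimal, a sketch along these lines also works (the 'frame'/'episode_reward' column names and the directory layout are guesses at your log format):

import glob
import pandas as pd
import matplotlib.pyplot as plt

def plot_runs(pattern='exp/*/eval.csv', label='drq-dqn'):
    # Average eval curves across seeds and plot reward vs. frames.
    runs = [pd.read_csv(f) for f in glob.glob(pattern)]
    grouped = pd.concat(runs).groupby('frame')['episode_reward']
    mean, std = grouped.mean(), grouped.std()
    plt.plot(mean.index, mean.values, label=label)
    plt.fill_between(mean.index, mean - std, mean + std, alpha=0.2)
    plt.xlabel('frame')
    plt.ylabel('episode reward')
    plt.legend()
    plt.show()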

Ok, send me over your code and I can take a look.

@slerman12
Author

Just sent over the code. I'm pretty shell-shocked by how low the performance turned out:

5 seeds, 99996 frames:
alien: 377.0 ± 126.4
amidar: 42.2 ± 15.7
assault: 371.7 ± 33.7
asterix: 309.0 ± 183.0
bankheist: 29.6 ± 17.6
battlezone: 2980.0 ± 1144.4
boxing: -12.5 ± 5.8
breakout: 3.4 ± 1.5
choppercommand: 518.0 ± 314.3
crazyclimber: 120.0 ± 240.0
demonattack: 656.3 ± 174.3
freeway: 24.7 ± 1.1
frostbite: 158.6 ± 15.3
gopher: 103.2 ± 53.9
hero: 2261.0 ± 1096.3
jamesbond: 11.0 ± 22.0
kangaroo: 192.0 ± 170.5
krull: 186.8 ± 223.4
kungfumaster: 270.0 ± 256.7
mspacman: 397.4 ± 99.8
pong: -21.0 ± 0.0
privateeye: 52.0 ± 43.5
qbert: 325.0 ± 15.8
roadrunner: 1270.0 ± 361.6
seaquest: 128.8 ± 34.1
upndown: 834.8 ± 466.6
Mean Human Normalized: -0.04271625231291354
Median Human Normalized: 0.015681259862356113
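
(The last two numbers use the usual human-normalized convention, i.e. per game (agent - random) / (human - random), then the mean/median across games; a sketch, with the per-game reference scores omitted:)

import numpy as np

def human_normalized(agent_score, random_score, human_score):
    # Standard Atari human-normalized score.
    return (agent_score - random_score) / (human_score - random_score)

# Aggregate over games, e.g.:
#   hns = [human_normalized(agent[g], RANDOM[g], HUMAN[g]) for g in agent]
#   mean_hns, median_hns = np.mean(hns), np.median(hns)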

@slerman12
Author

Hi, did you ever get a chance to look at this code?
