The environment in this repo is implemented directly from `atari_py`. However, in some cases it might be useful to have the option to build it using OpenAI Gym's syntax. Is there a direct equivalent of the current environment in terms of OpenAI's `baselines.common.atari_wrappers`? I was thinking of something along the lines of `wrap_deepmind(gym.make('QbertNoFrameskip-v4'), episode_life=False, frame_stack=True, scale=True, clip_rewards=False)`, but this doesn't seem to achieve quite the same rewards as your implementation.
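For reference, the `frame_stack=True` and `scale=True` flags in baselines' `wrap_deepmind` correspond to two preprocessing steps: scaling pixel values from `[0, 255]` to `[0, 1]` and keeping a rolling window of the last k observations. A minimal standalone sketch of those two steps (a hypothetical helper for illustration, not the baselines implementation itself) looks like this:

```python
from collections import deque
import numpy as np

class FrameStacker:
    """Sketch of what frame_stack=True and scale=True do in wrap_deepmind:
    scale each frame to [0, 1] and stack the last k frames along a new
    trailing axis. (Illustrative only; baselines uses LazyFrames to avoid
    duplicating memory.)"""

    def __init__(self, k=4):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, frame):
        # On reset, the stack is filled with copies of the first frame.
        scaled = frame.astype(np.float32) / 255.0
        for _ in range(self.k):
            self.frames.append(scaled)
        return self.observation()

    def step(self, frame):
        # Each new frame pushes the oldest one out of the window.
        self.frames.append(frame.astype(np.float32) / 255.0)
        return self.observation()

    def observation(self):
        # Stack along the last axis -> shape (H, W, k).
        return np.stack(self.frames, axis=-1)

# Example with a fake 84x84 grayscale frame:
stacker = FrameStacker(k=4)
obs = stacker.reset(np.zeros((84, 84), dtype=np.uint8))
print(obs.shape)  # (84, 84, 4)
```

Note that `episode_life=False` and `clip_rewards=False` only disable the life-loss-as-episode-end and sign-of-reward wrappers; discrepancies in scores often come instead from differences in frame skipping, sticky actions, or max-pooling over consecutive frames between the two codebases.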
I remember looking at baselines a long time ago and, for whatever reason, thinking it didn't quite work out, so maybe, but I'm not sure. It should be possible to work out if you go through both codebases carefully. I would not introduce baselines as a dependency for this project, though, because it brings in a lot of unnecessary things.