Can't find documentation about DQN in this environment #8
The Python example is of a random agent.
I wonder whether this RLE environment was used to produce Table 3 in the paper "Playing SNES with RLE", especially for Mortal_Kombat?
The deep_q_rl repository should be able to reproduce the results in the first column (DQN). Note that the results for Mortal Kombat were achieved using random initialization at the beginning of each level.
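For anyone else hunting for the DQN code: it lives in the deep_q_rl fork, not in this repository. As a rough illustration of the kind of update that fork's training loop performs, here is a minimal tabular Q-learning sketch on a toy chain MDP (this is not the fork's actual convolutional DQN; function names and the toy environment are made up for illustration):

```python
import random

def train_q_learning(n_states=5, episodes=500, alpha=0.1, gamma=0.9,
                     epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain MDP.

    States 0..n_states-1; action 0 moves left, action 1 moves right.
    Reward 1.0 only on reaching the rightmost state (terminal).
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection, as in DQN.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Off-policy TD update; DQN uses the same target,
            # but with a neural network instead of a table.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_q_learning()
```

After training, the learned values prefer the "right" action in every non-terminal state, which is the optimal policy for this chain.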
Hi @nadavbh12, I am working on this repository but I got an issue while running this command. Please guide me where I am going wrong. Thanks.
Try removing the CMakeCache.txt file from the project's main directory and re-running the command. |
Thanks, I fixed that issue, but I hit another one when I run this command.
Hey @Noor59007, Regarding the new issue, I was unable to reproduce it. |
84x84 is the cropped image size. If that doesn't work, try running the original Atari version of deep_q_rl so we're sure the problem is with my fork rather than your setup.
I run the environment through the Python interface from doc/example like:
$ python python_example.py path_to_rom path_to_core
I modified the code to set the episode count to 2000, and training ran for a day, but the agent is not learning.
I searched the code but couldn't find the module for DQN.
Please kindly help.
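A note for anyone hitting the same confusion: python_example.py runs a purely random policy, so no learning should be expected no matter how many episodes it runs. Below is a minimal sketch of that kind of random-agent loop against a stand-in environment (the real script drives the RLE interface with a ROM and core path; `StubEnv` and its horizon here are hypothetical, made up so the sketch is self-contained):

```python
import random

class StubEnv:
    """Hypothetical stand-in for the emulator interface used by the example."""
    def __init__(self, horizon=10):
        self.horizon = horizon  # steps per episode in this toy env
        self.t = 0
    def getMinimalActionSet(self):
        return [0, 1, 2, 3]
    def act(self, action):
        self.t += 1
        return random.random()  # toy per-step reward
    def game_over(self):
        return self.t >= self.horizon
    def reset_game(self):
        self.t = 0

def run_random_episodes(env, episodes=2):
    """Random agent: picks uniformly among legal actions; it never improves."""
    totals = []
    for _ in range(episodes):
        env.reset_game()
        total = 0.0
        while not env.game_over():
            total += env.act(random.choice(env.getMinimalActionSet()))
        totals.append(total)
    return totals

episode_rewards = run_random_episodes(StubEnv(), episodes=3)
```

Raising the episode count in a loop like this only collects more random rollouts; the learning code (replay memory, network updates) lives in deep_q_rl, not in this example.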