
Disagreements with the async paper #53

Closed

YurongYou opened this issue Aug 18, 2016 · 2 comments

Comments

@YurongYou

I am a newcomer to DRL and have just read through the code of this project - it is really an amazing job! As the title says, I found some parts of the implementation that may differ from the settings in the Asynchronous Methods for Deep Reinforcement Learning paper. I list them as follows.

  1. The network architecture. According to part 8, Experimental Setup, of the paper, it uses a smaller neural network; I think the body should look something like this

    net:add(nn.SpatialConvolution(histLen*self.stateSpec[2][1], 16, 8, 8, 4, 4))
    net:add(nn.ReLU(true))
    net:add(nn.SpatialConvolution(16, 32, 4, 4, 2, 2))
    net:add(nn.ReLU(true))
    net:add(nn.View(32*9*9))  -- flatten the conv output (9x9 maps assuming 84x84 inputs)
    net:add(nn.Linear(32*9*9, 256))
    net:add(nn.ReLU(true))
    net:add(nn.Linear(256, #moves))  -- #moves = the number of valid actions
    

    but the project uses the standard DQN network, which is much bigger than the proposed one.

  2. No action repeat? The paper says they apply an action repeat of 4, but I have not found it in the code... Is it because the environment does this automatically?

  3. In Atari/async/A3CAgent.lua:90,

    self.vTarget[1] = -0.5 * (R - V)
    

    I have not figured out why there is a 0.5, but anyway it's no problem to keep it...

  4. In Atari/async/QAgent.lua:6-7,

    local EPSILON_ENDS = { 0.01, 0.1, 0.5}
    local EPSILON_PROBS = { 0.4, 0.7, 1 }
    

    the probability assigned to each final epsilon is different from the paper; should it be like this? (See the sketch after this list for how I read these two tables.)

    local EPSILON_ENDS = { 0.1, 0.01, 0.5}
    local EPSILON_PROBS = { 0.4, 0.7, 1 }
    
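For reference, here is how I read those two tables - EPSILON_PROBS as cumulative probabilities used to pick the final epsilon each learner thread anneals towards. This is just my own sketch, not the repo's actual code:

    local EPSILON_ENDS  = { 0.01, 0.1, 0.5 }
    local EPSILON_PROBS = { 0.4, 0.7, 1 }           -- read as cumulative probabilities
    local r = torch.uniform()                       -- one draw per learner thread
    local epsilonEnd = EPSILON_ENDS[#EPSILON_ENDS]
    for i = 1, #EPSILON_PROBS do
      if r <= EPSILON_PROBS[i] then
        epsilonEnd = EPSILON_ENDS[i]                -- this thread anneals epsilon towards this value
        break
      end
    end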

And below is what might be a small but essential bug in the async agents that use Q-learning.

Look at lines 59 and 67 of Atari/async/QAgent.lua:

56:function QAgent:eGreedy(state, net)
57:  self.epsilon = math.max(self.epsilonStart + (self.step - 1)*self.epsilonGrad, self.epsilonEnd)
58:
59:  if self.alwaysComputeGreedyQ then
60:    self.QCurr = net:forward(state):squeeze()
61:  end
62:
63:  if torch.uniform() < self.epsilon then
64:    return torch.random(1,self.m)
65:  end
66:
67:  if not self.alwaysComputeGreedyQ then
68:    self.QCurr = net:forward(state):squeeze()
69:  end
70:
71:  local _, maxIdx = self.QCurr:max(1)
72:  return maxIdx[1]
73:end

Should they be exchanged? Otherwise, if self.alwaysComputeGreedyQ is set to false, eGreedy will on the contrary always compute the greedy Q.

@Kaixhin
Owner

Kaixhin commented Aug 18, 2016

Thanks for finding these - I'll just update on this comment if it's simple.

  1. Oh that's subtle - they are indeed using the smaller architecture from the pre-Google 2013 NIPS paper. Just pushed a fix in 20181f3. This should make the async experiments faster as well!
  2. Yes, that's part of the ALE wrapper they wrote - but it can be controlled using the -actRep option with this code (roughly what that amounts to is sketched below this list).
  3. Leaving to @lake4790k.
  4. That looks like an error to me - fixed with dc25dad.
  5. From what I remember, that has something to do with recurrent networks, but I'll let @lake4790k have a look since he wrote this bit.
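
Re 2, an action repeat of 4 boils down to roughly the following (an illustrative sketch only - env:step and action here are stand-ins, not the actual wrapper API):

    local actRep = 4                                   -- what -actRep controls
    local screen, reward, terminal = nil, 0, false
    for r = 1, actRep do
      local stepReward
      screen, stepReward, terminal = env:step(action)  -- apply the same action again
      reward = reward + stepReward                     -- accumulate the reward over the repeats
      if terminal then break end                       -- stop early if the episode ends mid-repeat
    end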

@lake4790k
Collaborator

lake4790k commented Aug 26, 2016

@YurongYou re 3: the actor and the critic need to have different learning rates, so the 0.5 means the critic learns half as fast.
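
Concretely, assuming the usual squared-error value loss (this is my reading, sketched out, not a quote of the code):

    -- value loss:            Lv = 0.5 * (R - V)^2
    -- gradient w.r.t. V:     dLv/dV = -(R - V)
    -- with the extra 0.5:    self.vTarget[1] = -0.5 * (R - V)
    -- i.e. the critic's gradient, and hence its effective learning rate, is halved.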

re 5: I think lines 59 and 67 are indeed correct! If alwaysComputeGreedyQ is false then the forward pass is NOT done when the action is random (notice the return statement), only when the action selection is greedy.

I think this can be closed; everything has been addressed.
