
Question on training function #19

Closed
yiwan-rl opened this issue Feb 1, 2018 · 5 comments

yiwan-rl commented Feb 1, 2018

I noticed that in your player_util.py action_train function:

```python
if self.done:
    if self.gpu_id >= 0:
        with torch.cuda.device(self.gpu_id):
            self.cx = Variable(torch.zeros(1, 512).cuda())
            self.hx = Variable(torch.zeros(1, 512).cuda())
    else:
        self.cx = Variable(torch.zeros(1, 512))
        self.hx = Variable(torch.zeros(1, 512))
else:
    self.cx = Variable(self.cx.data)
    self.hx = Variable(self.hx.data)
```

But how can you backpropagate gradients through time, over the past 20 steps, if you set:

```python
self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)
```
dgriff777 (Owner) commented Feb 1, 2018

Hi! This is a stateful LSTM implementation, so the cell state is kept and carried forward through time: the cell state at step 20 is the input to the LSTMCell at step 21. What is done here:

```python
self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)
```

The hx and cx Variables output by the LSTMCell still carry the old graph and cannot be backpropagated through again, so we create new Variables from the underlying data in hx and cx; those are then ready for BPTT at the next update.
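
(In PyTorch 0.4+, Variable was merged into Tensor, so the Variable(x.data) idiom is written as x.detach().) A minimal sketch of the idea, not the repo's code, with an illustrative input size and the 512-unit hidden size from above:

```python
import torch
import torch.nn as nn

cell = nn.LSTMCell(input_size=32, hidden_size=512)
hx = torch.zeros(1, 512)
cx = torch.zeros(1, 512)

x = torch.randn(1, 32)
hx, cx = cell(x, (hx, cx))           # hx, cx now carry this step's graph

# equivalent of Variable(hx.data): keep the values, drop the history
hx, cx = hx.detach(), cx.detach()
```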

yiwan-rl (Author) commented Feb 1, 2018

Hi, thanks for your reply. Your action_train function is executed at every training step, and self.done stays False until the env resets. So you are actually setting

```python
self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)
```

at nearly every time step. Now self.cx and self.hx are new Variables, and gradients will not be passed through them.

If you check the project you reference, https://github.com/ikostrikov/pytorch-a3c, it doesn't have this problem, because it sets

```python
self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)
```

only once every args.num_steps steps, instead of at every step.
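
To make the difference concrete, here is a hypothetical toy loop (not code from either repo; detach() stands in for Variable(x.data), and the sizes are made up):

```python
import torch
import torch.nn as nn

cell = nn.LSTMCell(8, 16)
obs = [torch.randn(1, 8) for _ in range(20)]      # one 20-step rollout
hx, cx = torch.zeros(1, 16), torch.zeros(1, 16)

# Buggy pattern: detaching inside the step loop cuts the graph at every
# step, so the 20-step BPTT window collapses to a single step.
for x in obs:
    hx, cx = hx.detach(), cx.detach()
    hx, cx = cell(x, (hx, cx))

# ikostrikov-style pattern: detach once per rollout, so a loss computed at
# the end can backpropagate through all 20 steps.
hx, cx = hx.detach(), cx.detach()
for x in obs:
    hx, cx = cell(x, (hx, cx))
```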

dgriff777 (Owner) commented Feb 1, 2018

Hmm, you're right, it looks like I changed something here. I'll take a look in a little bit, but I'm very busy at the moment.

yiwan-rl (Author) commented Feb 1, 2018

Oh, I don't think it's a GPU/CPU problem.

```python
self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)
```

is ok for both GPU and CPU.

The problem is that you don't want these two lines in the else branch: that makes them execute at every time step, except when the episode terminates (self.done = True).

What you want is to execute these two lines only once every args.num_steps steps (in your setting, args.num_steps = 20).
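
A sketch of what that restructuring could look like, with a toy Agent standing in for the repo's player class (the names, sizes, and train_rollout helper are illustrative, not the actual fix):

```python
import torch
import torch.nn as nn

class Agent:
    def __init__(self):
        self.cell = nn.LSTMCell(8, 512)
        self.hx = torch.zeros(1, 512)
        self.cx = torch.zeros(1, 512)
        self.done = False

    def action_train(self, obs):
        if self.done:                       # reset only at episode boundaries
            self.hx = torch.zeros(1, 512)
            self.cx = torch.zeros(1, 512)
        # no else branch: hx/cx stay attached to the graph within a rollout
        self.hx, self.cx = self.cell(obs, (self.hx, self.cx))

def train_rollout(agent, rollout):
    # truncate BPTT once per rollout, i.e. every args.num_steps (= 20) steps
    agent.hx = agent.hx.detach()
    agent.cx = agent.cx.detach()
    for obs in rollout:
        agent.action_train(obs)

train_rollout(Agent(), [torch.randn(1, 8) for _ in range(20)])
```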

dgriff777 (Owner) commented Feb 2, 2018

It's fixed now, should be fine, thanks!

Wow, thanks for spotting this, I had not noticed this error in the repo. My version is not linked to GitHub, and I'd just been checking using trained models, and the test part was fine lol. Good spot! For clarity, none of the posted models' final performance results were trained with this bug in the code. Thanks again!
