I think I figured it out.
The shared (global) model's parameters have no grad (it is `None`), while the local model's gradients live on the GPU, so the assignment fails with a type mismatch. The fix is to move the local parameters to CPU first; then the gradient assignment works. To make life easier, you can wrap this function in the model, so that
ensure_shared_grads(model, shared_model)
becomes
model.to("cpu").ensure_shared_grads(shared_model)
But do mind that if you run your model on a GPU, you have to move it back to that device before the next backward() call; otherwise you can hit an "expected GPU but got CPU backend" error.
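A minimal sketch of this workaround, using `ensure_shared_grads` from the repo's train.py as a free function rather than a method (the toy `nn.Linear` models and the `device` round-trip are illustrative, not the repo's code):

```python
import torch
import torch.nn as nn

def ensure_shared_grads(model, shared_model):
    # As in pytorch-a3c's train.py: point the shared model's grads at the
    # local model's grad tensors; return early once they are assigned.
    for param, shared_param in zip(model.parameters(),
                                   shared_model.parameters()):
        if shared_param.grad is not None:
            return
        shared_param._grad = param.grad

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

shared_model = nn.Linear(4, 2)       # shared model stays in CPU memory
model = nn.Linear(4, 2).to(device)   # local worker copy, possibly on GPU

loss = model(torch.randn(8, 4, device=device)).sum()
loss.backward()

# Move the local model (and its grads) to CPU so the assignment matches
# the shared model's tensor type, then restore it for the next backward().
model.to("cpu")
ensure_shared_grads(model, shared_model)
model.to(device)
```

On a CPU-only machine the two `to(...)` calls are no-ops, so the same code runs either way; the transfer only matters when the worker trains on GPU.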
In PyTorch 1.0, the following line:
https://github.com/ikostrikov/pytorch-a3c/blob/master/train.py#L14
raises a gradient-assignment error:
RuntimeError: assigned grad has data of a different type.
Any fixes or suggestions?