In the `step()` method here, if `p.grad` is `None`, then this line will break, causing the optimizer to crash. However, in many deep learning applications it is common for some parameters within a layer, or even whole layers, to be frozen. QPyTorch's optimizer cannot be used in these cases.
PyTorch's default optimizer has a simple and elegant solution: it just skips these parameters, treating `None` as equivalent to a zero gradient. We have implemented this solution here. I would like to propose this change to QPyTorch as well.
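To illustrate, here is a minimal sketch of the proposed fix in an SGD-style optimizer. This is not QPyTorch's actual code; the class and names are illustrative, showing only the `if p.grad is None: continue` guard that PyTorch's built-in optimizers use.

```python
import torch


class SimpleSGD(torch.optim.Optimizer):
    """Illustrative SGD-style optimizer with the proposed None-grad guard."""

    def __init__(self, params, lr=0.1):
        super().__init__(params, defaults=dict(lr=lr))

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                # The proposed fix: skip frozen parameters whose grad is
                # None, treating a missing gradient as a zero gradient.
                if p.grad is None:
                    continue
                p.add_(p.grad, alpha=-group["lr"])


# A frozen parameter has p.grad == None after backward(), but with the
# guard above the optimizer no longer crashes on it.
frozen = torch.nn.Parameter(torch.ones(3), requires_grad=False)
trainable = torch.nn.Parameter(torch.ones(3))
opt = SimpleSGD([frozen, trainable], lr=0.1)
(trainable ** 2).sum().backward()
opt.step()
```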
hi @kamikazekartik thank you for your suggestion! if you can help fix it and submit a PR, I am happy to merge it. Otherwise it might take a while before I get to it.