I carefully read your code and found a dangerous in-place operation that may lead to a RuntimeError.
In lines 87-88 of models.smpl.py, G is modified in place by the matmul with G_.
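Reconstructing from the fix quoted below, the offending loop is presumably something like:

for i in range(1, 24):
    G[:, i, :, :] = torch.matmul(G[:, self.parent[i-1], :, :], G_[:, i, :, :])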
To check its correctness, I wrote the following test code:
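A minimal sketch of such a test, with illustrative shapes and a made-up parent table rather than the real SMPL values:

import torch

parent = [0, 0, 1]                                 # made-up parent indices
pose = torch.randn(2, 4, 4, 4, requires_grad=True)
G = pose.clone()                                   # non-leaf tensor we write into
G_ = torch.randn(2, 4, 4, 4, requires_grad=True)   # stands in for the local joint transforms

for i in range(1, 4):
    # matmul saves the G[:, parent[i-1]] view for its backward pass;
    # the in-place write below bumps G's version counter and invalidates it
    G[:, i, :, :] = torch.matmul(G[:, parent[i - 1], :, :], G_[:, i, :, :])

G.sum().backward()  # raises the RuntimeError quoted below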
And PyTorch reports:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
When I fix this error like this, writing into a clone instead of into G itself:

G_new = G.clone()
for i in range(1, 24):
    G_new[:, i, :, :] = torch.matmul(G[:, self.parent[i-1], :, :], G_[:, i, :, :])
Then the test code runs without any problem.
Although this is indeed a dangerous in-place operation, when I ran the training code PyTorch didn't report any error! My guess is that the gradient flow is cut off somewhere, so backward never needs to go through SMPL(). But that explanation seems to contradict the optimization process; a sketch of the cut-off scenario follows below.
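For instance, in a minimal sketch with the same illustrative shapes as above, the error disappears as soon as the loss no longer depends on the corrupted tensor, because autograd only verifies saved tensors when backward actually visits the ops that saved them:

import torch

parent = [0, 0, 1]
pose = torch.randn(2, 4, 4, 4, requires_grad=True)
G = pose.clone()
G_ = torch.randn(2, 4, 4, 4, requires_grad=True)

for i in range(1, 4):
    G[:, i, :, :] = torch.matmul(G[:, parent[i - 1], :, :], G_[:, i, :, :])

# The version-counter check happens lazily, at backward time. A loss
# that is independent of G backpropagates without complaint:
loss = (pose ** 2).sum()
loss.backward()  # no error, but no gradient ever flows through the matmuls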
What do you think? Is this in-place operation a real problem? If not, how do you manage to ignore it without causing any problems?