
There is a dangerous inplace operation in SMPL() #19

Closed
Maqingyang opened this issue Jul 30, 2019 · 3 comments

Comments


Maqingyang commented Jul 30, 2019

I carefully read your code and found a dangerous in-place operation that can lead to a RuntimeError. Lines 87-88 in models.smpl.py:

for i in range(1, 24):
    G[:,i,:,:] = torch.matmul(G[:,self.parent[i-1],:,:], G_[:, i, :, :])

G is modified in place after the matmul with G_, even though autograd has saved it for the backward pass.
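The failure mode can be reproduced in isolation (a minimal sketch with plain tensors, independent of SMPL: `matmul` saves its inputs for the backward pass, so an in-place write to one of those inputs invalidates the saved version):

```python
import torch

# Minimal reproduction, independent of SMPL: matmul saves its inputs
# for the backward pass, so an in-place write to one of those inputs
# invalidates the saved tensor.
x = torch.ones(3, 3, requires_grad=True)
y = x * 2                  # non-leaf tensor produced by an autograd op
z = torch.matmul(y, y)     # matmul records y for use in backward()
y[0, :] = 0.0              # in-place write bumps y's version counter

try:
    z.sum().backward()
    raised = False
except RuntimeError:
    raised = True          # "one of the variables needed for gradient
                           # computation has been modified ..."
print("backward raised RuntimeError:", raised)
```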

To check its correctness, I wrote the following test code:

import torch
from models.smpl import SMPL  # assuming SMPL lives in models/smpl.py

device = 'cuda'
pred_theta = torch.ones([1, 24, 3, 3], requires_grad=True).to(device)
pred_beta = torch.ones([1, 10], requires_grad=True).to(device)
smpl = SMPL().to(device)

pred_vertices = smpl(pred_theta, pred_beta)

torch.sum(pred_vertices).backward()

And PyTorch reports RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation.

When I fix the error like this:

G_new = G.clone()
for i in range(1, 24):
    G_new[:,i,:,:] = torch.matmul(G[:,self.parent[i-1],:,:], G_[:, i, :, :])

Then the test code runs without any problem.
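More generally, the write-into-a-clone pattern can be replaced by accumulating the per-joint transforms in a Python list and stacking at the end, which avoids in-place tensor writes altogether. A sketch under assumed shapes (`G_` as `[B, 24, 4, 4]` local transforms, `parent` mapping joint `i` to its parent's index; not necessarily the fix that was committed):

```python
import torch

# Sketch: compose a kinematic chain without any in-place tensor writes,
# assuming G_ holds per-joint local transforms of shape [B, 24, 4, 4]
# and `parent` maps joint i (for i in 1..23) to its parent's index.
def compose_chain(G_, parent):
    results = [G_[:, 0]]                  # root transform
    for i in range(1, G_.shape[1]):
        # read the parent's *composed* transform from the Python list,
        # never writing into a tensor autograd has saved
        results.append(torch.matmul(results[parent[i - 1]], G_[:, i]))
    return torch.stack(results, dim=1)    # [B, 24, 4, 4]

B = 2
G_ = torch.eye(4).expand(B, 24, 4, 4).clone().requires_grad_(True)
parent = [0] * 23                         # dummy chain: all joints off root
G = compose_chain(G_, parent)
torch.sum(G).backward()                   # backward succeeds
```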

Although this is indeed a dangerous in-place operation, when I ran the training code PyTorch didn't report any error! My guess is that the gradient flow is cut off somewhere, so backpropagation never goes through SMPL(). But that explanation seems to contradict the optimization process.
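One way the guess above could play out (a hypothetical sketch, not verified against the training code): if the tensor is detached before the in-place region, autograd tracks nothing there and never checks the modified version counters, so no error is raised.

```python
import torch

# Hypothetical sketch of the guess above: if gradient flow is cut
# (here via .detach()) before the in-place write, autograd tracks
# nothing in that region, so no version check ever fails.
x = torch.ones(3, 3, requires_grad=True)
y = (x * 2).detach()       # gradient flow stops here
z = torch.matmul(y, y)     # z carries no autograd graph at all
y[0, :] = 0.0              # in-place write, but nothing was saved
print(z.requires_grad)     # backward() would never reach this region
```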

What do you think? Is this in-place operation a real problem? If not, how does the training manage to avoid it without causing any problems?

nkolot (Owner) commented Jul 30, 2019

Yes, this is a bug that I probably introduced when refactoring the code for release. I will fix it today.

nkolot added a commit that referenced this issue Jul 30, 2019
nkolot (Owner) commented Jul 30, 2019

Should be ok now.

Maqingyang (Author) commented

Thanks!
