When I use liegroups in a loss function, it reports the following error:
Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.
Here is the code:
print('rotation_ab_pred size = {} type = {}'.format(rotation_ab_pred.size(), rotation_ab_pred.type()))
rvec_ab_pred = SO2.from_matrix(rotation_ab_pred)
And the result:
rotation_ab_pred size = torch.Size([32, 2, 2]) type = torch.cuda.FloatTensor
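The error itself is not specific to liegroups: PyTorch refuses to convert any tensor that is part of the autograd graph directly to numpy. A minimal sketch reproducing the failure mode (plain torch, no liegroups needed):

```python
import torch

# A tensor with requires_grad=True cannot be converted with .numpy();
# PyTorch raises a RuntimeError to protect the autograd graph.
x = torch.eye(2, requires_grad=True)
try:
    x.numpy()
except RuntimeError as e:
    print("raises:", e)

# detach() returns a view that is cut off from the graph, so the
# conversion then succeeds.
arr = x.detach().numpy()
print(arr.shape)  # (2, 2)
```

On a CUDA tensor (as in the report above, `torch.cuda.FloatTensor`), a `.cpu()` call is also needed before `.numpy()`.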
The text was updated successfully, but these errors were encountered:
There are a couple of places where we convert to numpy to use np.linalg functions (specifically the determinant and SVD) to validate and normalize rotation matrices. I added the missing detach() calls in c2b479e, so it should work as expected now.
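For context, a hypothetical sketch of what that validation/normalization step looks like (not the library's exact code): the tensor is detached and moved to CPU before the np.linalg calls, since the validity check needs no gradients, and SVD projects a noisy matrix onto the nearest rotation:

```python
import numpy as np
import torch

def nearest_rotation(mat: torch.Tensor) -> np.ndarray:
    """Project a (possibly noisy) 2x2 matrix onto SO(2) via SVD.

    Illustrative sketch only: detach() (and cpu()) must come before
    .numpy(), because these checks do not participate in autograd.
    """
    m = mat.detach().cpu().numpy()
    u, _, vt = np.linalg.svd(m)
    # Flip the sign of the last singular direction if needed so that
    # the result has det = +1 (a proper rotation, not a reflection).
    s = np.eye(2)
    s[1, 1] = np.sign(np.linalg.det(u @ vt))
    return u @ s @ vt

noisy = torch.tensor([[1.01, -0.02], [0.02, 0.99]], requires_grad=True)
R = nearest_rotation(noisy)
print(np.allclose(R @ R.T, np.eye(2)))  # orthogonal after projection
```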
As an alternative, you can also use the less safe constructor, which doesn't do any validity checking:
rvec_ab_pred = SO2.from_matrix(rotation_ab_pred)  # checks validity of matrix
rvec_ab_pred = SO2(rotation_ab_pred)              # does not check validity of matrix
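Note that if the loss must be differentiable through the rotation, any numpy round-trip would sever the graph. A pure-torch sketch of the SO(2) log map (angle extraction) that keeps gradients flowing, as an alternative worth considering:

```python
import math
import torch

def so2_log(rot: torch.Tensor) -> torch.Tensor:
    # rot has shape (..., 2, 2); the rotation angle is recovered from
    # the first column as atan2(sin(theta), cos(theta)). All ops are
    # torch ops, so autograd can backpropagate through the result.
    return torch.atan2(rot[..., 1, 0], rot[..., 0, 0])

rot = torch.tensor([[0.0, -1.0], [1.0, 0.0]], requires_grad=True)
theta = so2_log(rot)
theta.backward()
print(theta.item())       # pi/2
print(rot.grad is not None)  # True: gradients reach the input matrix
```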