
Add kwarg to disable manifold checks in constructor #540

Merged 4 commits into main on Jun 22, 2023

Conversation

luisenp (Contributor) commented Jun 21, 2023

Closes #536

@luisenp luisenp requested a review from fantaosha June 21, 2023 20:14
facebook-github-bot added the CLA Signed label, Jun 21, 2023
fantaosha (Contributor) left a comment:

Only minor changes are needed.

```diff
@@ -166,12 +176,14 @@ def _log_map_impl(
         return SO3_base.log(self.tensor, jacobians=jacobians)

     def _compose_impl(self, so3_2: LieGroup) -> "SO3":
-        return SO3(tensor=SO3_base.compose(self.tensor, so3_2.tensor))
+        return SO3(
+            tensor=SO3_base.compose(self.tensor, so3_2.tensor), disable_checks=True
+        )
```
fantaosha (Contributor) commented:

Wondering if we should disable checks for compose(), since composition is one of the major sources of accumulated numerical error.

luisenp (Contributor, Author) replied:

Changed to strict_checks=False so the tensors are normalized if the check fails.
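To make the semantics under discussion concrete, here is a minimal, hypothetical sketch (plain Python floats, not the real theseus API or class) of a constructor supporting both kwargs: disable_checks=True skips validation entirely, while strict_checks=False renormalizes the data instead of raising when the manifold check fails.

```python
import math


class SO2:
    """Simplified sketch of the check policy discussed in this thread.

    Stores a unit complex number (cos, sin). Not the real theseus SO2;
    names and tolerances here are illustrative assumptions.
    """

    EPS = 1e-6  # tolerance for the unit-norm manifold check

    def __init__(self, cos_sin, disable_checks=False, strict_checks=True):
        cos, sin = cos_sin
        if not disable_checks:
            norm = math.hypot(cos, sin)
            if abs(norm - 1.0) > self.EPS:
                if strict_checks:
                    raise ValueError("data is not a valid unit complex number")
                # non-strict: project back onto the manifold instead of raising
                cos, sin = cos / norm, sin / norm
        self.cos, self.sin = cos, sin

    def compose(self, other):
        # Composition can accumulate numerical error, so construct the
        # result with strict_checks=False: renormalize on failure.
        new_cos = self.cos * other.cos - self.sin * other.sin
        new_sin = self.sin * other.cos + self.cos * other.sin
        return SO2((new_cos, new_sin), strict_checks=False)
```

With this policy, results of internal operations never raise spuriously on tiny drift, while user-supplied data is still validated by default.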


```diff
     def _inverse_impl(self, get_jacobian: bool = False) -> "SO2":
         cosine, sine = self.to_cos_sin()
-        return SO2(tensor=torch.stack([cosine, -sine], dim=1), strict=False)
+        return SO2(tensor=torch.stack([cosine, -sine], dim=1), strict_checks=False)
```
fantaosha (Contributor) commented:

It is safe to use disable_checks=True here.

luisenp (Contributor, Author) replied:

Changed.

```diff
@@ -220,11 +227,11 @@ def _compose_impl(self, so2_2: LieGroup) -> "SO2":
         cos_2, sin_2 = so2_2.to_cos_sin()
         new_cos = cos_1 * cos_2 - sin_1 * sin_2
         new_sin = sin_1 * cos_2 + cos_1 * sin_2
-        return SO2(tensor=torch.stack([new_cos, new_sin], dim=1), strict=False)
+        return SO2(tensor=torch.stack([new_cos, new_sin], dim=1), strict_checks=False)
```
fantaosha (Contributor) commented:

Maybe disable_checks=True?

luisenp (Contributor, Author) replied:

Changed.

```diff
@@ -171,10 +181,12 @@ def _log_map_impl(
         return SE3_base.log(self.tensor, jacobians=jacobians)

     def _compose_impl(self, se3_2: LieGroup) -> "SE3":
-        return SE3(tensor=SE3_base.compose(self.tensor, se3_2.tensor))
+        return SE3(
+            tensor=SE3_base.compose(self.tensor, se3_2.tensor), disable_checks=True
+        )
```
fantaosha (Contributor) commented:

Maybe disable_checks=False is better here, since composition (matrix multiplication) can accumulate numerical error.

luisenp (Contributor, Author) replied:

Changed to strict_checks=False so the tensors are normalized if the check fails.
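For context on why renormalizing after compose is cheap: a rotation matrix that has drifted slightly off the manifold can be projected back with Gram-Schmidt. The sketch below uses plain Python lists; so3_normalize is a hypothetical name, and theseus's actual normalization may use a different projection.

```python
import math


def so3_normalize(R):
    """Project a slightly drifted 3x3 rotation matrix back onto SO(3).

    Hypothetical sketch using Gram-Schmidt on the rows; the third row is
    taken as the cross product of the first two, which forces det = +1.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def unit(v):
        n = math.sqrt(dot(v, v))
        return [x / n for x in v]

    r0 = unit(R[0])
    # Remove the r0 component from the second row, then normalize.
    proj = dot(R[1], r0)
    r1 = unit([x - proj * u for x, u in zip(R[1], r0)])
    # Third row is r0 x r1, guaranteeing a proper (det = +1) rotation.
    r2 = [r0[1] * r1[2] - r0[2] * r1[1],
          r0[2] * r1[0] - r0[0] * r1[2],
          r0[0] * r1[1] - r0[1] * r1[0]]
    return [r0, r1, r2]
```

Because the projection only needs a handful of dot products, renormalizing on a failed check costs far less than re-deriving the composition with higher precision.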

```diff
         return SE3(
             tensor=SE3_base.exp(tangent_vector, jacobians=jacobians),
             disable_checks=True,
         )

     @staticmethod
     def normalize(tensor: torch.Tensor) -> torch.Tensor:
```
fantaosha (Contributor) commented:

Should normalize() return a group instead of a tensor?

luisenp (Contributor, Author) replied:

This was the old API from your initial implementation, but a Tensor -> Tensor op makes sense to me.
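A minimal sketch of the Tensor -> Tensor style agreed on here, shown for SO2 with plain floats for simplicity: normalize projects raw data onto the manifold and returns data of the same kind, rather than constructing a group object. so2_normalize is a hypothetical name, not the theseus API.

```python
import math


def so2_normalize(cos_sin):
    """Tensor-in, tensor-out normalization for SO(2)-style data.

    Hypothetical sketch: takes a raw (cos, sin) pair and returns the
    nearest point on the unit circle, leaving object construction to
    the caller.
    """
    cos, sin = cos_sin
    norm = math.hypot(cos, sin)
    if norm == 0.0:
        raise ValueError("cannot normalize the zero element")
    return (cos / norm, sin / norm)
```

Keeping normalize at the data level means it can be reused both inside constructors (when a check fails) and by users preprocessing their own tensors.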

```diff
@@ -316,7 +323,7 @@ def _compose_impl(self, se2_2: LieGroup) -> "SE2":
         )
         return SE2(
             tensor=torch.cat([new_translation.tensor, new_rotation.tensor], dim=1),
-            strict=False,
+            strict_checks=False,
```
fantaosha (Contributor) commented:

disable_checks=True.

luisenp (Contributor, Author) replied:

Changed.

@luisenp luisenp requested a review from fantaosha June 22, 2023 14:05
fantaosha (Contributor) left a comment:

LGTM. Feel free to merge.

@luisenp luisenp merged commit ee7c46a into main Jun 22, 2023
15 checks passed
@luisenp luisenp deleted the lep.disable_lie_checks_kwarg branch June 22, 2023 16:20
Merging this pull request closed issue #536: Add constructor option to Manifold for turning off all checks.