Make LieTensor a subclass of torch.Tensor and override function/operators #452
Conversation
The example script looks good to me!
One other detail to consider for discussion. I did some quick tests with torch optimizers, and could only get the subclass-based version to work by doing the following, which requires an extra step. We also need to do some tests to make sure backprop is doing the right thing. Curious to hear thoughts from @albanD

```python
import torch

import theseus.labs.lie as lie
import theseus.labs.lie.functional as lieF

# The leaf tensor needs to be a regular tensor, so we explicitly pass the tensor data
g1_data = lieF.se3.rand(1, requires_grad=True)
g1 = lie.cast(g1_data, ltype=lie.SE3)
g2 = lie.rand(1, lie.SE3)

opt = torch.optim.Adam([g1_data])
opt.zero_grad()
d = g1.inv().compose(g2).log()
loss = torch.sum(d**2)
loss.backward()
opt.step()
```
cc @rmurai0610
Some comments. Not sure if we should override `__add__()` and use `LieAsEuclidean`.
examples/lie_labs_api.py (Outdated)

```python
# However, for safety, one cannot use the overridden + with torch tensors
try:
    y = g + x
```
Honestly, I'm a little nervous about overriding `__add__()`, since such overrides are very dangerous. Also, one cannot use `*`, `-`, or `/` either in that case.
Do we have an operator like `to_euclidean()` that makes a PyTorch tensor copy (not a reference) of the current data?
In summary, I would prefer explicit function names rather than operator overrides, to avoid confusion. I encountered lots of problems when generalizing first-order optimizers to Lie groups.
To be consistent, if `g + x = g.compose(x)`, then `g - x` should be `g.compose(x.inverse())`.
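To make the consistency argument concrete, here is a minimal sketch (hypothetical toy class, not the Theseus implementation) showing how paired operator overloads could be defined so that `g - x` is exactly `g.compose(x.inverse())`:

```python
class FakeGroup:
    """Toy 'group' over invertible scalars, standing in for a Lie group."""

    def __init__(self, value: float):
        self.value = value

    def compose(self, other: "FakeGroup") -> "FakeGroup":
        return FakeGroup(self.value * other.value)

    def inverse(self) -> "FakeGroup":
        return FakeGroup(1.0 / self.value)

    def __add__(self, other: "FakeGroup") -> "FakeGroup":
        # g + x is defined as composition...
        return self.compose(other)

    def __sub__(self, other: "FakeGroup") -> "FakeGroup":
        # ...so for consistency, g - x composes with the inverse.
        return self.compose(other.inverse())


g, x = FakeGroup(6.0), FakeGroup(2.0)
assert (g + x).value == g.compose(x).value            # composition
assert (g - x).value == g.compose(x.inverse()).value  # composition with inverse
```

Whether the operators should exist at all is the open question above; the sketch only illustrates that if `+` is kept, `-` has one natural consistent meaning.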
examples/lie_labs_api.py (Outdated)

```python
    print(e)

# But you can add lie.as_euclidean() and then everything is valid
with lie.as_euclidean():
```
I'm concerned with this. If the user wants to use a Lie group as a Euclidean tensor, maybe it is safer to do something like `g.to_euclidean().matmul(torch.rand(4, 7))`. Maybe `to_euclidean()` should be a property, so that users cannot modify the data.
Is the concern that users would overwrite the internal data (e.g., something like `g.add_(something)`)? This seems a reasonable concern, because I believe subclassing allows this. I'm OK with replacing this with `to_euclidean()`.
```python
            f"tensor of type {tensor.ltype}"
        )
        super().set_(tensor)  # type: ignore
```
A less esoteric, but perhaps less safe, alternative for `add_` could be to check the tensor shape: if it looks like `(..., 3, 4)` (for SE3), interpret it as a Euclidean gradient; if it has shape `(..., 6)` (for SE3), interpret it as a Riemannian gradient; and everything else throws an error.
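The shape-based dispatch above could be sketched as follows (hypothetical helper name, not actual Theseus code); note it leaves genuinely ambiguous shapes like `(3, 4, 6)` classified by whichever rule matches first:

```python
def classify_se3_update(shape: tuple) -> str:
    """Classify an update tensor for SE3 by its trailing dimensions."""
    if len(shape) >= 2 and shape[-2:] == (3, 4):
        return "euclidean"    # looks like a batch of 3x4 SE3 matrices
    if len(shape) >= 1 and shape[-1] == 6:
        return "riemannian"   # looks like a batch of 6-dim tangent vectors
    raise ValueError(
        f"Update of shape {shape} is neither a Euclidean (..., 3, 4) "
        f"nor a Riemannian (..., 6) gradient for SE3."
    )


assert classify_se3_update((2, 3, 4)) == "euclidean"
assert classify_se3_update((2, 6)) == "riemannian"
```

The ambiguity (and the fact that a malformed update silently picks a branch) is presumably why this is "perhaps less safe" than an explicit API.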
Looks great! Let's merge once the open comments are resolved.
LGTM!
This PR is mostly to illustrate some ideas on how to safely overload operators in the LieTensor class. It's best to start by looking at the example script, and if that looks OK, you can take a quick look at what's happening under the hood. This is not meant to be a final implementation.