
STNkD final bias not trainable? #9

Closed
laoreja opened this issue Mar 6, 2018 · 3 comments
laoreja commented Mar 6, 2018

Hi,
in

return input.view(-1,self.eye.size(1),self.eye.size(2)) + Variable(self.eye)
it looks like the bias for the projection is a fixed identity matrix, and not trainable?

In the original pointnet implementation, they make this a bias term (https://github.com/charlesq34/pointnet/blob/d64d2398e55b24f69e95ecb549ff7d4581ffc21e/models/transform_nets.py#L49), which is trainable.

The STN is actually supposed to output a rotation matrix, so in my reasoning, the bias term should be trainable.

Do you make it untrainable on purpose, and why?

Also, why are the projection weights initialized as all 0s? In deep learning courses, initializing all weights to the same value is discouraged.


laoreja commented Mar 6, 2018

Also, why do you apply the spatial transformer only to the xy dimensions, and not to all xyz dimensions?


loicland commented Mar 6, 2018

Hi,

the bias is trainable, as a parameter of the proj linear layer (see the PyTorch docs). We add the identity matrix afterwards, which amounts to the same thing as https://github.com/charlesq34/pointnet/blob/d64d2398e55b24f69e95ecb549ff7d4581ffc21e/models/transform_nets.py#L47

Projections are initialized at zero because we want to initialize the transform at identity.
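To illustrate the equivalence being claimed here, a minimal sketch (variable names like `proj`, the feature size, and the 2x2 transform size are assumptions for illustration, not the repo's exact code): a linear layer with zero-initialized weight and trainable bias, followed by adding a fixed identity, produces the same output as the PointNet-style layer whose bias is initialized to the flattened identity.

```python
import torch
import torch.nn as nn

K = 2                               # size of the predicted K x K transform (assumed)
feat = torch.randn(4, 64)           # a batch of 4 global feature vectors (assumed size)

# Variant A: fixed identity added after a zero-initialized linear layer.
proj = nn.Linear(64, K * K)
nn.init.zeros_(proj.weight)         # zero init so the initial transform
nn.init.zeros_(proj.bias)           # plus the identity is exactly I
eye = torch.eye(K)                  # fixed tensor, not a Parameter
out_a = proj(feat).view(-1, K, K) + eye

# Variant B: PointNet-style, identity folded into the trainable bias.
proj_b = nn.Linear(64, K * K)
nn.init.zeros_(proj_b.weight)
with torch.no_grad():
    proj_b.bias.copy_(eye.flatten())
out_b = proj_b(feat).view(-1, K, K)

# Both parameterizations start at the identity transform; in variant A the
# bias of `proj` is still a trainable nn.Parameter.
assert torch.allclose(out_a, out_b)
assert proj.bias.requires_grad
```

In both variants the gradient with respect to the weight and bias is the same, so training behaves identically; the only difference is where the identity offset lives.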

We only transform on xy because we don't want the network to change the vertical orientation of superpoints. A wall and a ceiling have the same shape, barring their vertical orientation. Of course the network could learn not to rotate about z if need be, but we save parameters with this insight.
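A short sketch of what restricting the transform to xy looks like in practice (names are illustrative, not the repo's code): the predicted 2x2 matrix rotates points in the horizontal plane while z passes through untouched.

```python
import torch

pts = torch.randn(4, 128, 3)              # batch of point clouds: (batch, points, xyz)
T = torch.eye(2).expand(4, 2, 2)          # per-sample 2x2 transform (identity here)

xy = torch.bmm(pts[:, :, :2], T)          # transform only the horizontal coordinates
pts_t = torch.cat([xy, pts[:, :, 2:]], dim=2)  # z coordinate is left unchanged

# With the identity transform, points come back unchanged.
assert torch.allclose(pts_t, pts)
```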

However, feel free to experiment with initialization and feature transforms and report potential improvements!


laoreja commented Mar 6, 2018

I see. Thank you very much for your quick reply!

@loicland loicland closed this as completed Mar 6, 2018