Projections are initialized at zero because we want to initialize the transform at identity.
We only transform on xy because we don't want the network to change the vertical orientation of superpoints: a wall and a ceiling have the same shape, barring their vertical orientation. Of course, the network could learn not to rotate around z if needed, but this insight lets us save parameters.
However, feel free to experiment with the initialization and the feature transform, and report potential improvements!
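The zero-initialization-at-identity idea above can be sketched as follows. This is a minimal, illustrative PyTorch module, not the repo's actual code: the class and parameter names are hypothetical, and the trainable identity bias follows the original PointNet transform net rather than this repository. The key point is that with zero weights on the final projection, the predicted transform is exactly the identity before any training, and only the xy coordinates are touched.

```python
import torch
import torch.nn as nn

class TinySTN2D(nn.Module):
    """Illustrative 2D spatial transformer: predicts a 2x2 matrix for xy only."""
    def __init__(self, in_feats=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, in_feats), nn.ReLU())
        # Final projection outputs the 4 entries of a 2x2 matrix.
        self.proj = nn.Linear(in_feats, 4)
        nn.init.zeros_(self.proj.weight)          # zero weights ...
        with torch.no_grad():                     # ... plus identity bias,
            self.proj.bias.copy_(torch.tensor([1., 0., 0., 1.]))

    def forward(self, pts):                       # pts: (N, 3) point cloud
        feat = self.mlp(pts).max(dim=0).values    # global max-pooled feature
        T = self.proj(feat).view(2, 2)            # predicted 2x2 transform
        xy = pts[:, :2] @ T.t()                   # transform xy only
        return torch.cat([xy, pts[:, 2:]], dim=1) # z passes through untouched

pts = torch.randn(10, 3)
out = TinySTN2D()(pts)
# At initialization T is the identity, so the output equals the input.
assert torch.allclose(out, pts)
```

Because the weights are zero (not the bias), the predicted matrix starts at identity regardless of the input, yet gradients still flow into both the weights and the bias, so the transform can move away from identity during training.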
Hi,

in superpoint_graph/learning/pointnet.py (Line 51 in 518fc08):
In the original pointnet implementation, they make this a bias term (https://github.com/charlesq34/pointnet/blob/d64d2398e55b24f69e95ecb549ff7d4581ffc21e/models/transform_nets.py#L49), which is trainable.
The STN is actually supposed to output a rotation matrix, so in my reasoning, the bias term should be trainable.
Do you make it untrainable on purpose, and why?
Also, why are the projections initialized as all 0s? In deep learning courses, initializing weights to the same value is discouraged.