Hi @jchibane,
Thanks for sharing your work (and nice code btw).
I have some doubts on the point cloud generation algorithm in ndf/models/generation.py.
More precisely:
`samples` are created in the range [-1.5, 1.5] here; if I understand correctly, this is because PyTorch's `grid_sample` requires the grid coordinates to be in the range [-1, 1], and you add an extra 0.5 because the network was trained on coordinates possibly outside the [-1, 1] range
`df_pred`, though, should be the distances predicted by the network for coordinates in the range [-0.5, 0.5], since the UDF ground truth is computed here on coordinates belonging to the range [-0.5, 0.5]
thus, I think that the computation `samples = samples - F.normalize(gradient, dim=2) * df_pred.reshape(-1, 1)` here is moving `samples` in the range [-1.5, 1.5] using UDF predictions meant for the range [-0.5, 0.5]
Also: since the x and z dimensions of `samples` are not swapped before feeding them to the network (as is done for training), it seems to me that `samples` and the UDF predictions are in different reference systems.
What am I missing?
Thanks in advance!
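For concreteness, here is a minimal numpy sketch of the projection step quoted above: each sample is moved along the negative normalized UDF gradient by the predicted distance. Names and shapes mirror the snippet, but torch is replaced with numpy purely for illustration, and the numbers are made up.

```python
import numpy as np

def project(samples, gradient, df_pred):
    # samples:  (B, N, 3) query points
    # gradient: (B, N, 3) UDF gradients at the query points
    # df_pred:  (B, N)    predicted unsigned distances
    unit = gradient / np.linalg.norm(gradient, axis=2, keepdims=True)
    return samples - unit * df_pred[..., None]

# Toy setup: surface at the origin, one query point 0.3 units away along +x.
samples = np.array([[[0.3, 0.0, 0.0]]])
gradient = np.array([[[2.0, 0.0, 0.0]]])  # un-normalized, points away from surface
df_pred = np.array([[0.3]])

moved = project(samples, gradient, df_pred)
print(moved)  # the point lands on the surface at the origin
```

The question above is about which coordinate frame `samples` and `df_pred` live in when this step runs, since the update is only consistent if both use the same units.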
Yes, exactly: for the `grid_sample` function we need to convert to [-1, 1]; the additional 0.5 serves as slack and is not strictly necessary theoretically.
Points in [-1, 1] in `grid_sample` coordinates correspond to [-0.5, 0.5] in mesh coordinates, so there is no mismatch.
The movements are predicted in mesh space; the `grid_sample` space is just an intermediate and only matters for the implementation. The network is supervised in mesh space and the results are interpreted in mesh space, so this is correct as is.
We sample random points in a symmetric cube, so we don't need to swap any coordinates.
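The symmetry argument above can be sanity-checked numerically: points sampled uniformly in a cube centered at the origin have the same distribution whether or not the x and z axes are swapped. This is an illustrative sketch with made-up bounds, not code from the repository.

```python
import numpy as np

rng = np.random.default_rng(0)
# Uniform samples in a symmetric cube [-1.5, 1.5]^3.
samples = rng.uniform(-1.5, 1.5, size=(100_000, 3))
swapped = samples[:, [2, 1, 0]]  # swap the x and z axes

# Swapping axes just permutes the per-axis statistics, and for a symmetric
# cube all axes are identically distributed, so the network sees
# statistically identical inputs either way.
print(samples.mean(axis=0))  # all entries near 0
print(swapped.min(), swapped.max())
```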
I think @lykius is right here. The code should be `samples = samples - F.normalize(gradient, dim=2) * df_pred.reshape(-1, 1) * 2`, because the inputs are grid-coordinate points in [-1, 1] while the outputs are UDF values in mesh space, [-0.5, 0.5]. Here the `samples` are grid-coordinate points, not mesh-space points, so the update moves grid-coordinate points by mesh-space distances.
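The factor of 2 proposed here is easy to check with toy numbers, under the comment's assumption that `samples` are grid-space points ([-1, 1] spans what mesh coordinates span in [-0.5, 0.5]) while `df_pred` is a mesh-space distance. All values below are hypothetical.

```python
import numpy as np

# Surface at the origin; one query point in grid coordinates.
sample_grid = np.array([0.4, 0.0, 0.0])
sample_mesh = sample_grid / 2.0  # grid [-1, 1] <-> mesh [-0.5, 0.5]

# Perfect network: predicted UDF is the true distance in mesh units.
df_pred = np.linalg.norm(sample_mesh)        # 0.2 mesh units
gradient = np.array([1.0, 0.0, 0.0])         # unit gradient along +x

without_fix = sample_grid - gradient * df_pred        # undershoots: lands at 0.2
with_fix = sample_grid - gradient * df_pred * 2.0     # lands exactly on the surface
print(without_fix[0], with_fix[0])  # 0.2 0.0
```

A mesh-space distance is half as long when expressed in grid units as the same distance in grid space, so stepping grid points by raw `df_pred` undershoots by a factor of 2; whether this applies depends on which frame `samples` actually lives in at that point in the code, which is what this thread is debating.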