Hello,
I'm bringing up the questions I had after you closed #16, in case you missed that part. I would be interested in knowing as much as possible about the (sub)set of examples you used to evaluate on ShapeNet-vol, so that I can meaningfully compare to LION in the absence of released weights/samples. The easiest would be if you could share that subset as files; failing that, I would need:
1. The IDs of the models used.
2. The preprocessing scheme (in particular, are you applying the scale and loc parameters found in the .npz files of the dataset?); see the sketch after this list for what I mean.
3. Ideally, the IDs of the points you picked from each model (since the dataset provides more than 2048 points per model).
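To be concrete about item 2, this is roughly what I mean; the snippet is only a sketch under my assumption that the stored points are normalized and the world-frame coordinates are recovered as points * scale + loc (as in the Occupancy Networks preprocessing), and the file name pointcloud.npz and the points key are my guesses at the dataset layout:

```python
import numpy as np

# Sketch only: assumes the Occupancy Networks convention, where the stored
# points are normalized and world coordinates are points * scale + loc.
data = np.load("pointcloud.npz")          # one model from ShapeNet-vol (assumed file name)
points = data["points"]                   # (N, 3) normalized coordinates (assumed key)
loc, scale = data["loc"], data["scale"]   # per-model offset and scale

points_world = points * scale + loc       # did you evaluate on these, or on the normalized points?
```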
Thanks for your swift responses on the previous issues.
Thanks for bringing this up. I uploaded the 1000 point clouds here (if you want to use this data directly, please verify that the point clouds' axes/poses align with your training data).
For the pre-processing scheme, I normalized both the sampled data and the validation data into [-1, 1] using this part of the code, i.e. I call the function as compute_score(samples='samples.pt', ref_name='ref_ns_val_all.pt', norm_box=True).
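Roughly, the normalization does something like the following (a minimal sketch only; the function name and the single uniform-scale choice here are for illustration, and the exact behaviour is whatever the norm_box=True path in the linked compute_score code does):

```python
import torch

def norm_box(pc: torch.Tensor) -> torch.Tensor:
    """Sketch of a per-shape normalization into [-1, 1].

    pc: (N, 3) point cloud. Center the cloud at its axis-aligned bounding-box
    center and divide by half of the largest bounding-box extent, so all
    coordinates fall inside [-1, 1].
    """
    pc_min = pc.min(dim=0).values
    pc_max = pc.max(dim=0).values
    center = (pc_min + pc_max) / 2.0
    half_extent = (pc_max - pc_min).max() / 2.0
    return (pc - center) / half_extent

# Applied to both the generated samples and the reference set before computing metrics:
# samples = torch.stack([norm_box(s) for s in torch.load('samples.pt')])
# refs    = torch.stack([norm_box(r) for r in torch.load('ref_ns_val_all.pt')])
```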