Details of ShapeNet-vol evaluation #20

Closed
jatentaki opened this issue Feb 9, 2023 · 1 comment
Comments

@jatentaki

Hello,
I'm bringing up the questions I had after you closed #16, in case you missed that part. I would like to know as much as possible about the (sub)set of examples you used to evaluate on ShapeNet-vol, so that I can meaningfully compare to LION in the absence of released weights/samples. The easiest would be if you could share that subset as files; failing that, I would need:

  1. The IDs of the models used.
  2. The preprocessing scheme (in particular, are you applying the scale and loc parameters found in the .npz files of the dataset? See the sketch after this list).
  3. Ideally, the IDs of the points you picked in each model (since the dataset provides more than 2048 points per model).
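
For reference, a minimal sketch of the loading step these questions concern, assuming the standard ShapeNet-vol .npz layout with points, loc and scale keys (the key names, the denormalization direction, and the random subsampling here are illustrative assumptions, not your pipeline):

```python
import numpy as np

def load_model(npz_path, apply_loc_scale, n_points=2048, seed=0):
    """Illustrative only: key names and subsampling are assumptions."""
    data = np.load(npz_path)
    points = data['points'].astype(np.float32)
    if apply_loc_scale:
        # Variant A: undo the per-model normalization stored in the file,
        # mapping the points back to their original coordinate frame.
        points = points * data['scale'] + data['loc']
    # The dataset stores more than 2048 points per model, so the
    # evaluation subset also depends on which point IDs are drawn.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=n_points, replace=False)
    return points[idx]
```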

Thanks for your swift responses on the previous issues.

@ZENGXH
Collaborator

ZENGXH commented Feb 20, 2023

Thanks for bringing this up. I uploaded the 1000 point clouds here (if you want to use this data directly, please verify that the point clouds' axis/pose align with your training data).
For the pre-processing scheme, I normalized both the sampled data and the validation data into [-1, 1] using this part of the code, i.e. I call the function with compute_score(samples='samples.pt', ref_name='ref_ns_val_all.pt', norm_box=True).
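
Roughly, norm_box=True is assumed to amount to the following per point cloud (a sketch, not the exact code from the repo; the isotropic-scaling choice in particular is an assumption):

```python
import torch

def normalize_to_unit_box(pc: torch.Tensor) -> torch.Tensor:
    # pc: (N, 3). Center the cloud on its axis-aligned bounding box and
    # scale isotropically by the longest half-side, so it fits in
    # [-1, 1]^3 while keeping its aspect ratio.
    pc_min = pc.min(dim=0).values
    pc_max = pc.max(dim=0).values
    center = (pc_min + pc_max) / 2
    half_extent = (pc_max - pc_min).max() / 2
    return (pc - center) / half_extent
```

Applied to both samples.pt and ref_ns_val_all.pt, this puts the generated and reference clouds on comparable coordinates before the metrics are computed.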
