
Regarding the evaluation of your pre-trained model #8

Open
sandeep-ipk opened this issue Jan 14, 2023 · 1 comment
Labels: good first issue

Comments

@sandeep-ipk

I have run into a problem evaluating the pre-trained ShAPO model you provide in the repo here.
I could not find an evaluation script in your ShAPO repository, but I found a similar issue in your CenterSnap repo here. The author of that issue reports problems finding the predicted class labels and sizes, and in one of your replies you provide this helper function and ask them to use the mask_rcnn results from the object-deformnet repository.
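Concretely, the pipeline I pieced together from that issue looks roughly like the sketch below. This is only my reading, not code from either repo: the pickle path and key names are my assumptions about object-deformnet's released mrcnn_results, and `run_shapo_inference` is a hypothetical placeholder for the ShAPO forward pass.

```python
import pickle

def run_shapo_inference(rgbd):
    """Hypothetical placeholder: run ShAPO and return (pred_RTs, pred_scales)."""
    raise NotImplementedError

# Load the off-the-shelf Mask R-CNN detections released with object-deformnet
# (path and key names are assumptions based on its released mrcnn_results).
with open('results/mrcnn_results/real_test/results_test_scene_1_0000.pkl', 'rb') as f:
    mrcnn_result = pickle.load(f)

result = {}
# Class labels and detection scores come from Mask R-CNN, not from ShAPO itself.
result['pred_class_ids'] = mrcnn_result['class_ids']
result['pred_scores'] = mrcnn_result['scores']

# Poses and scales come from the ShAPO predictions.
result['pred_RTs'], result['pred_scales'] = run_shapo_inference(rgbd_image)
```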

I have done everything you asked in that issue. However, I used your pre-trained ShAPO model (without post-optimization) for evaluation instead of training one from scratch, and I cannot reproduce the numbers reported in the ShAPO paper (assuming your pre-trained model performs on par with CenterSnap's numbers). I therefore have the following questions:

  1. Is the pre-trained ShAPO model you provide not the optimal one but an intermediate checkpoint, which would explain why I cannot reproduce the numbers (without post-optimization)? Also, does using your pre-trained ShAPO model without post-optimization give numbers similar to CenterSnap's?

  2. How does one determine f_size in the `result['pred_scales'] = f_size` statement you wrote in that issue? I currently compute f_size from the point cloud predicted from the shape latents, using this line of code from object-deformnet. As I understand it, this f_size is important for computing the 3D IoU numbers you report in the ShAPO paper.

  3. To clear up this confusion, would it be possible for you to share the evaluation script you used to generate the numbers with the compute_mAP function, as you mentioned in that GitHub issue? (My current understanding of that call is sketched below.)
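For reference, here is how I currently believe compute_mAP is invoked, based on my reading of object-deformnet's evaluate.py. The threshold lists and the exact signature are my assumptions from that repo, and `pred_results` is the list of per-image result dicts assembled as above.

```python
from lib.utils import compute_mAP, plot_mAP  # from the object-deformnet repo

# Threshold sweeps as I understand them from object-deformnet's evaluate.py
# (exact values are assumptions on my part).
degree_thres_list = list(range(0, 61, 1))
shift_thres_list = [i / 2 for i in range(21)]
iou_thres_list = [i / 100 for i in range(101)]

# pred_results: list of per-image dicts with keys such as 'gt_class_ids',
# 'gt_RTs', 'gt_scales', 'pred_class_ids', 'pred_scores', 'pred_RTs',
# and 'pred_scales' (assumed from object-deformnet's evaluation code).
pred_results = []  # assembled per image as described above
result_dir = 'results/eval'

iou_aps, pose_aps, iou_acc, pose_acc = compute_mAP(
    pred_results, result_dir, degree_thres_list, shift_thres_list,
    iou_thres_list, iou_pose_thres=0.1, use_matches_for_pose=True)
```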

Thank you,
Sandeep

@zubair-irshad (Owner) commented Jan 14, 2023

Thanks for your email and for trying out our codebase. Unfortunately, we cannot share the complete evaluation script at this point, but I will help as much as I can so you can reproduce the numbers in the paper:

  1. The pre-trained model we provide in the codebase is only for the demo and may be sub-optimal for synthetic scenes. I would highly recommend training your own model using the instructions in our repo to get the best checkpoints, which you can then evaluate quantitatively.
    And that's correct: ShAPO trained from scratch, without post-optimization, should give numbers close to CenterSnap's.

  2. Your understanding is correct. We use the following to get the predicted sizes, where pcd_dsdf_actual is the point cloud obtained from the SDF latent codes, as here:

```python
pred_size = 2 * np.amax(np.abs(pcd_dsdf_actual), axis=0)
```
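In context, that amounts to something like the following sketch. Note that `decode_sdf_latent` is just a placeholder for however you decode the latent code into a point cloud, not a function from our repo.

```python
import numpy as np

# Placeholder: decode the predicted shape latent into an (N, 3) point cloud.
pcd_dsdf_actual = decode_sdf_latent(pred_latent)  # hypothetical helper

# The predicted size is twice the maximum absolute coordinate per axis,
# i.e. the extent of an origin-centered axis-aligned bounding box.
pred_size = 2 * np.amax(np.abs(pcd_dsdf_actual), axis=0)

# This is the f_size that goes into the evaluation dict:
result['pred_scales'] = pred_size
```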

Hope it helps!
