ScanObjectNN (paper table 2) #5
Hi @sheshap

```
python run_mvtn.py --data_dir data/ScanObjectNN/ --run_mode train --mvnetwork viewgcn --nb_views 12 --views_config learned_spherical --pc_rendering --pretrained --shape_extractor PointNet --features_type logits --clip_grads
```
A few of the arguments in the above command are not recognized:

```
run_mvtn.py: error: unrecognized arguments: --pretrained --shape_extractor PointNet --features_type logits --clip_grads
```
Because of that, I ran:

```
python run_mvtn.py --data_dir data/ScanObjectNN/ --run_mode train --mvnetwork viewgcn --nb_views 12 --pc_rendering
```

This command resulted in a best training accuracy of 89.02% and a test accuracy of 87.5% on the obj_only variant, versus 92.6% reported in the paper.
Are you using one-stage or two-stage training? The results reported in the paper are for two-stage training, in which the first stage (the CNN backbone) is trained on ModelNet. Also, make sure you are setting … Please read the documentation in the …
Please confirm if the following two commands represent the two stages. Much appreciated.
The first stage is 50 epochs of training the backbone CNN on single-view images.
The second stage is 35 epochs of training the multi-view network on the M views of the 3D object.
P.S.: Using only the second command without resume flags has given me 91.4% accuracy on the obj_only variant of ScanObjectNN. Please provide the exact commands/configurations to recreate your results on the ScanObjectNN dataset variants. Thanks.
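The two-stage schedule described above (50 backbone epochs on single views, then 35 multi-view epochs on M views) can be sketched as follows. This is only an illustration of the schedule being discussed; the function and stage labels are hypothetical and not part of the actual MVTN code:

```python
def two_stage_schedule(backbone_epochs=50, multiview_epochs=35, nb_views=12):
    """Return the (stage, epoch, views) triples the two-stage recipe would run."""
    plan = []
    # Stage 1: train the backbone CNN on single-view renderings.
    for epoch in range(backbone_epochs):
        plan.append(("first", epoch, 1))
    # Stage 2: train the multi-view network on M views per 3D object.
    for epoch in range(multiview_epochs):
        plan.append(("second", epoch, nb_views))
    return plan

plan = two_stage_schedule()
print(len(plan))        # 85 epochs total
print(plan[0], plan[-1])
```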
@ajhamdi Can you please help with exact configurations/commands to recreate results on ScanObjectNN?
I used the command below and got 82.5% on PB_T50_RS:

```
python run_mvtn.py --data_dir data/ScanObjectNN --run_mode train --mvnetwork mvcnn --nb_views 12 --views_config learned_spherical --pc_rendering --dset_variant hardest
```

Can you please provide the exact configurations/commands to recreate the ScanObjectNN results with two-phase training for viewgcn? Thanks in advance.
I was able to reproduce 92.6% (ScanObjectNN, with_bg) using the commands below:

```
python run_mvtn.py --data_dir ../MVTN/data/ModelNet40/ --run_mode train --mvnetwork viewgcn --nb_views 1 --views_config learned_spherical --pc_rendering --viewgcn_phase first
python run_mvtn.py --data_dir data/ScanObjectNN/ --run_mode train --mvnetwork viewgcn --nb_views 12 --views_config learned_spherical --pc_rendering --viewgcn_phase second --dset_variant with_bg
```

Closing the issue. Thanks.
P.S.: Also reproduced 92.3% (obj_only) and 82.9% (hardest).
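For anyone repeating the second phase on all three ScanObjectNN variants, the two-phase recipe above can be wrapped in a small shell script. The `run` function here is a hypothetical dry-run helper that only prints each command (so the sweep can be inspected before launching real training), and the flags simply mirror the ones quoted in this thread:

```shell
#!/bin/sh
set -e

run() {
    # Dry run: print the command instead of executing it.
    echo "python run_mvtn.py $*"
}

# Phase 1: backbone CNN on ModelNet40 single views (run once).
run --data_dir ../MVTN/data/ModelNet40/ --run_mode train --mvnetwork viewgcn \
    --nb_views 1 --views_config learned_spherical --pc_rendering \
    --viewgcn_phase first

# Phase 2: multi-view training, once per ScanObjectNN variant.
for variant in obj_only with_bg hardest; do
    run --data_dir data/ScanObjectNN/ --run_mode train --mvnetwork viewgcn \
        --nb_views 12 --views_config learned_spherical --pc_rendering \
        --viewgcn_phase second --dset_variant "$variant"
done
```

Replace the `echo` in `run` with an actual invocation once the printed commands look right.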
Hi, thank you so much for the code release.
Can you please give the exact commands used for training and testing on the ScanObjectNN dataset, to recreate the results of Table 2 in the paper?
Thanks in advance. Much appreciated.