How to Easily Test Data on Already Trained Data #46
Yes, you need to compute the partition, the SPG, and the inference on your target data set. If you just want to see the results, use the …
Do I have to run training again as well? I want to avoid retraining. You just load the .pth.tar file, correct? And with the --test test_reduced flag, right?
1 - compute the partition of the train and test sets
Great, and just to clarify: inference on the test set is just the terminal command for training, but with the --resume RESUME argument, correct?

UPDATE: Right now I am running these commands for a 12-class dataset:

```
python learning/custom_dataset.py --CUSTOM_SET_PATH /home/amanda/Semantic3D_small

CUDA_VISIBLE_DEVICES=0 python learning/main.py --dataset custom_dataset --CUSTOM_SET_PATH /home/amanda/Semantic3D_small/ --epochs 500 --lr_steps '[350, 400, 450]' --test_nth_epoch 100 --model_config 'gru_10_0,f_12' --ptn_nfeat_stn 8 --pc_attribs xyzelpsv --nworkers 2 --odir "results/simple/trainval_best"

CUDA_VISIBLE_DEVICES=0 python learning/main.py --dataset custom_dataset --CUSTOM_SET_PATH /home/amanda/Semantic3D_small --db_test_name testred --db_train_name train --epochs -1 --lr_steps '[350, 400, 450]' --test_nth_epoch 100 --model_config 'gru_10_0,f_12' --ptn_nfeat_stn 8 --pc_attribs xyzelpsv --nworkers 2 --odir "results/simple/trainval_best" --resume "results/simple/trainval_best/model.pth.tar"
```

I am having difficulty getting the last command to run due to this error:

Not sure how to resolve. I changed partition.py, provider.py, main.py in learning/, etc. to accommodate 12 classes, but I'm not sure if I missed something somewhere.
correct!
Did you adapt the …? If so, can you run the code in debug mode with a breakpoint at `learning/metrics.py`, line 19, and provide here the sizes of the following variables:
loic
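A minimal way to capture those sizes, assuming you can edit learning/metrics.py locally (the function and variable names below are placeholders, not necessarily the ones actually used at line 19):

```python
import pdb

def inspect_confusion_inputs(ground_truth_vec, predicted):
    # Pause just before the confusion-matrix update and report the array sizes.
    pdb.set_trace()                 # at the (Pdb) prompt you can also type: p ground_truth_vec.shape
    print(ground_truth_vec.shape)   # shape of the ground-truth label matrix/vector
    print(predicted.shape)          # shape of the per-point class predictions
```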
Yes this is what I wrote:
My other attributes are thus:
Trying to debug why the ground truth vec is wrong.
Indeed the problem seems to be in the ground_truth_vec. Some leads:
There seems to be a problem when building the ground truth label matrix for your test data. Since you don't have labels for your test data, you could simply comment out the call to …
I will look into it more in-depth soon.
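For anyone stuck at the same spot: one way to get inference running on unlabeled test data is to guard the ground-truth bookkeeping behind a label check. This is only a sketch with hypothetical names (the exact call referred to above was elided, and `count_predicted_batch` here is a stand-in, not a confirmed API of this repo):

```python
import numpy as np

def maybe_update_confusion(confusion_matrix, predictions, labels=None):
    # Skip the ground-truth accumulation entirely when the test set carries no labels,
    # so the rest of the inference pipeline can still run end to end.
    if labels is None or not np.any(labels):
        return  # unlabeled test data: keep the predictions, skip metric accumulation
    confusion_matrix.count_predicted_batch(labels, predictions)  # stand-in for the repo's actual call
```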
Yep, now I can run the whole pipeline. It seems the issue is that when you want to do inference on custom test data, that labels line trips up on the indexing. Let me know if you figure it out.
I have met the same problem. You can check learning/spg.py, line 73 …
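Since the detail about line 73 was cut off, a quick sanity check along the same lines is to open the preprocessed HDF5 file for a test cloud and compare its label arrays with those of a training cloud. A minimal sketch, assuming h5py and a hypothetical file path and dataset name:

```python
import h5py

# Hypothetical path and dataset name: adapt to your own features/ or superpoint_graphs/ files.
with h5py.File("features/test_reduced/some_cloud.h5", "r") as f:
    print(list(f.keys()))            # list the datasets the file actually contains
    if "labels" in f:
        print(f["labels"].shape)     # unlabeled test clouds may have a degenerate label shape here
```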
I already trained my network on part of the data, with another part withheld, so I have a .pth.tar file.
I am still confused about which terminal command I should use to write the labels for the test data I want to analyze.
I tried this command: python partition/write_Semantic3d.py --SEMA3D_PATH /media/amanda/Seagate\ Expansion\ Drive//Semantic3D_13 --odir "results/custom12/validation_best" --db_test_name testred
with my own custom classes (in the same format as Semantic3D), and it complains that it doesn't have the file:
(unable to open file: name = './results/custom12/validation_best/predictions_testred.h5')
What am I missing to just run test data through my trained NN? The documentation is a little unclear. Do I have to repartition and create superpoints again for newly added test data?