
gtad_inference_fs_inductive.py #4

Closed
lucky-23 opened this issue Feb 3, 2022 · 10 comments

lucky-23 commented Feb 3, 2022

Hi, I followed the steps you gave, which are:
python gtad_inference_fs_inductive.py --meta_learn True --shot 5 --multi_instance False
python gtad_inference_fs_inductive.py --meta_learn False --shot 5 --multi_instance False
python gtad_c3d_postprocess_fs.py
However, when I run the inference code, the output/results2 folder that is needed for the postprocessing step is not created. I was wondering if I missed something.
Is this not the inference code? Sorry, I am confused.

sauradip (Owner) commented Feb 3, 2022

Hi,

Try changing "results1" to "results2" in this line:

if not os.path.exists(opt['output'] + "/results1"):

When you run the mentioned inference file, it will create the "results2" folder:

new_df.to_csv(opt["output"]+"/results2/" + video_name + ".csv", index=False)

Hence, "results2" is the folder that should be used during postprocessing, i.e. by L362 and L365 in gtad_c3d_postprocess_fs.py. A minimal sketch of the overall folder handling is below.
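
For reference, here is a minimal sketch of the folder handling described above; the opt dictionary, video_name, and the DataFrame are illustrative placeholders, not the repository's exact code:

```python
import os

import pandas as pd

# Illustrative placeholders; in the repository these come from the option
# parser and from the predicted proposals for one video.
opt = {"output": "./output"}
video_name = "video_test_0000004"
new_df = pd.DataFrame({"xmin": [0.12], "xmax": [0.48], "score": [0.91]})

# Create the folder the inference script writes to; the postprocess script
# should read from this same "results2" folder.
results_dir = os.path.join(opt["output"], "results2")
os.makedirs(results_dir, exist_ok=True)

# One CSV of proposals per video, as in the to_csv call quoted above.
new_df.to_csv(os.path.join(results_dir, video_name + ".csv"), index=False)
```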

sauradip (Owner) commented Feb 3, 2022

I have resolved it now; kindly use the updated inference code and see if the issue still persists!

lucky-23 (Author) commented Feb 4, 2022

Hello, thanks for the reply. I updated the inference code and the output/results2 folder is now created. However, the folder is empty. It seems that the function findTAL(pred_q, gt_q, video_name) in L441 is never called, and the code terminates after saving the checkpoint in L314. I might be missing something but I cannot figure out what is wrong. Can you please look into this as well? Thanks once again!

sauradip (Owner) commented Feb 4, 2022

Hi,

Make sure this parameter is set to False:

meta_learn = opt["meta_learn"]

Let me explain the process:
1) python gtad_inference_fs_inductive.py --meta_learn True --shot 5 --multi_instance False

This step meta-learns the transformer and saves the checkpoint; note that the meta_learn param is set to True. This is essentially training during inference.

2) python gtad_inference_fs_inductive.py --meta_learn False --shot 5 --multi_instance False

This is the step where we do the actual inference after meta-learning is finished: here we load the learned transformer checkpoint and run inference. Note that the meta_learn param is set to False. When it is False, this invokes Line 441 and saves the results in the "results2" folder.

3) python gtad_c3d_postprocess_fs.py

This step takes the proposals from the "results2" folder and applies SoftNMS before evaluation; a generic sketch of SoftNMS follows.
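
For reference, a generic Gaussian SoftNMS over temporal proposals looks roughly like this. It is a self-contained sketch, not the exact implementation in gtad_c3d_postprocess_fs.py:

```python
import numpy as np

def temporal_iou(anchor, segments):
    """IoU between one [start, end] segment and an (N, 2) array of segments."""
    inter_start = np.maximum(anchor[0], segments[:, 0])
    inter_end = np.minimum(anchor[1], segments[:, 1])
    inter = np.clip(inter_end - inter_start, 0.0, None)
    union = (anchor[1] - anchor[0]) + (segments[:, 1] - segments[:, 0]) - inter
    return inter / np.maximum(union, 1e-8)

def soft_nms(segments, scores, sigma=0.5, score_threshold=0.001):
    """Gaussian Soft-NMS: keep the best proposal, decay the scores of overlaps."""
    segments = np.asarray(segments, dtype=float)
    scores = np.asarray(scores, dtype=float).copy()
    keep_segments, keep_scores = [], []
    while scores.size > 0 and scores.max() > score_threshold:
        i = int(np.argmax(scores))
        best = segments[i]
        keep_segments.append(best)
        keep_scores.append(scores[i])
        segments = np.delete(segments, i, axis=0)
        scores = np.delete(scores, i)
        if scores.size == 0:
            break
        iou = temporal_iou(best, segments)
        scores = scores * np.exp(-(iou ** 2) / sigma)  # Gaussian score decay
    return np.array(keep_segments), np.array(keep_scores)

# Tiny usage example with made-up proposals.
segs = [[0.0, 10.0], [1.0, 11.0], [30.0, 40.0]]
confs = [0.9, 0.8, 0.7]
kept, kept_scores = soft_nms(segs, confs)
print(kept, kept_scores)
```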

Let me know if this helps; I guess you set meta_learn to True both times, so Line 441 was never invoked. A rough sketch of how that flag gates the two phases is below.
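
For clarity, here is a minimal sketch of how the meta_learn flag could gate the two phases described above. The argument parsing and the function bodies are illustrative placeholders, not the actual code of gtad_inference_fs_inductive.py. One thing worth checking on your side: if the real script parses --meta_learn with a plain type=bool, argparse treats the string "False" as truthy, and the second command would then still run the meta-learning branch.

```python
import argparse

def str2bool(v):
    # argparse's type=bool would treat the string "False" as truthy,
    # so an explicit converter is needed for --meta_learn False.
    return str(v).lower() in ("true", "1", "yes")

def meta_learn_and_save_checkpoint(opt):
    # Placeholder for phase 1: meta-learn the transformer on the support
    # set and save the checkpoint ("training during inference").
    print("phase 1: meta-learning, saving checkpoint")

def run_inference_and_save_results(opt):
    # Placeholder for phase 2: load the meta-learned checkpoint, run the
    # actual inference (the findTAL(...) call around L441) and write the
    # per-video CSVs into output/results2.
    print("phase 2: inference, writing results2/*.csv")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--meta_learn", type=str2bool, default=True)
    parser.add_argument("--shot", type=int, default=5)
    parser.add_argument("--multi_instance", type=str2bool, default=False)
    opt = vars(parser.parse_args())

    if opt["meta_learn"]:
        meta_learn_and_save_checkpoint(opt)   # first command in this thread
    else:
        run_inference_and_save_results(opt)   # second command in this thread
```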

lucky-23 (Author) commented Feb 7, 2022

Hi, thanks for the reply.
2) python gtad_inference_fs_inductive.py --meta_learn False --shot 5 --multi_instance False

I actually set meta_learn to False, but the actual inference is still not done and the "results2" folder is still empty. It seems that Line 441 is not invoked even after changing the parameter.

sauradip (Owner) commented Feb 7, 2022

Hi,

Can you send me the output of this line:

fg = logits[task][shot1][0].detach().cpu().numpy()

I can then see if it is actually a fault! This issue did not occur to me or others.

lucky-23 (Author) commented Feb 8, 2022

Hi, thanks for the reply; I solved the problem.
The output of L74 is as follows:

[screenshot of the fg output]
Also, when I check the CSV file, I found that the value of xmin is 0.0. Is this correct?
[screenshot of the CSV showing xmin values]
Also, the scores I got are:
Average mAP: 0.30
mAP@0.50: 55.54
mAP@0.60: 45.30
mAP@0.70: 34.03
which seems much lower than the results reported in the paper.

sauradip (Owner) commented Feb 8, 2022

Hi,

I don't think this should always be 0; there may be some issue with the training of the model. Have you trained on the base classes for 10 epochs before running the inference? I am asking because for the code we partly borrowed (GTAD), the author of that paper checked the code and got the score reported in the paper after training. Secondly, as I understand the metric, Avg mAP is the average over the 0.5 to 0.95 IoU thresholds; if you get 55% at 0.5 and 34% at 0.7, I don't think the average will be 0.3%; it should be in excess of 30%. A quick arithmetic check is below.
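
As a quick sanity check on that arithmetic, using only the three thresholds quoted in the previous comment (the evaluation script averages over its own, larger threshold grid):

```python
# Per-threshold mAPs reported earlier in this thread (in %).
map_per_iou = {0.5: 55.54, 0.6: 45.30, 0.7: 34.03}

# Even the mean over just these three thresholds is around 45%, so the overall
# average mAP cannot plausibly be 0.3%; the reported "0.30" is presumably a
# fraction (i.e. 30%), not a percentage.
avg = sum(map_per_iou.values()) / len(map_per_iou)
print(f"mean over reported thresholds: {avg:.2f}%")
```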

lucky-23 (Author) commented Feb 9, 2022

Hi, the average score reported is 30.67%. Sorry, my mistake :) Also, I trained on the base classes for 10 epochs before the inference. I will check the code and see if I made any mistake. Thanks for all the help :)

Mufanyin commented

> Hi, the average score reported is 30.67%. Sorry, my mistake :) Also, I trained on the base classes for 10 epochs before the inference. I will check the code and see if I made any mistake. Thanks for all the help :)

After running the code, I also found that the value of xmin is 0.0. May I ask how you solved this problem? Thank you!!!
