Evaluation error args = checkpoint['args'] #11

Closed
zazgf opened this issue May 22, 2021 · 7 comments

Comments


zazgf commented May 22, 2021

@brade31919,
I debugged the code and hit an error at main.py line 199, "args = checkpoint['args']". The error is KeyError: 'args'; the checkpoint only contains a key 'arch'.
Can you help me solve this problem?


zazgf commented May 22, 2021

Hi @brade31919,
I think we can just use 'arch' instead of 'args', but I'm not sure:

args.arch = checkpoint['arch']
print(args)
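For anyone hitting the same KeyError, a slightly more defensive version of that line-199 change is sketched below. This is only a minimal sketch, assuming checkpoint is the dict returned by torch.load and args is the argparse namespace built in main.py; it is not the repository's exact code.

# Minimal sketch, not the repo's exact code.
# Assumes: `checkpoint` comes from torch.load(...), `args` is main.py's argparse namespace.
if 'args' in checkpoint:
    # Checkpoints saved during training store the full argument namespace.
    args = checkpoint['args']
else:
    # The released pretrained checkpoints only store the architecture name,
    # so keep the command-line args and just restore the architecture.
    args.arch = checkpoint['arch']
print(args)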


zazgf commented May 22, 2021

@brade31919,
Yes, when I edited line 199 as above, the evaluation ran and produced the desired result.

python main.py --evaluate /home/username/radar_depth/pretrained/resnet18_multistage.pth.tar --data nuscenes --arch resnet18_multistage_uncertainty_fixs --modality rgbd --sparsifier radar --decoder upproj
  • I also set num_workers=0, as my computer's configuration is relatively low (see the sketch below).
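num_workers is just the standard PyTorch DataLoader argument; a minimal sketch of where that change would go, assuming the loaders are built with torch.utils.data.DataLoader as in a typical main.py (the dataset variable is a placeholder, not the repo's exact name):

from torch.utils.data import DataLoader

# Minimal sketch; `val_dataset` is a placeholder for whatever main.py actually builds.
# num_workers=0 loads batches in the main process, which is gentler on low-spec machines.
val_loader = DataLoader(val_dataset, batch_size=1, shuffle=False, num_workers=0, pin_memory=True)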

Thanks again for @brade31919's help.

@zazgf zazgf closed this as completed May 22, 2021
@Kirstihly

With the provided checkpoints, the keys are

dict_keys(['arch', 'model_state_dict', 'optimizer_state_dict'])

But if we train the model ourselves, the saved checkpoint will include

dict_keys(['args', 'epoch', 'arch', 'model_state_dict', 'best_result', 'optimizer_state_dict'])
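A quick way to check which layout a given checkpoint file uses is to load it and print the keys; a minimal sketch (the path below is a placeholder for your local file):

import torch

# Minimal sketch; replace the path with your local checkpoint file.
checkpoint = torch.load('pretrained/resnet18_multistage.pth.tar', map_location='cpu')
print(checkpoint.keys())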

@Abdulaaty

Have you managed to get results similar to the publication when only evaluating the pretrained models? I got very bad results here (RMSE on the order of 20).



zazgf commented Mar 30, 2022

@Abdulaaty Sorry for the late reply. This is the result I got:
[Screenshot from 2022-03-30 14-09-45]
And I did not comment out any other code.

@Abdulaaty

@zazgf Thanks for your reply. I mean: did you evaluate the pretrained models on the dataset provided in the repo (i.e., to replicate the results) and get the same numbers as stated in the paper?


zazgf commented Mar 31, 2022


I did not evaluate the models.
