
Evaluation on KITTI-360 Test set #30

Closed

hansoogithub opened this issue Sep 27, 2023 · 2 comments
@hansoogithub commented Sep 27, 2023

I have a problem viewing the performance evaluation numbers when I run:

python src/eval.py experiment=kitti360  ckpt_path='downloaded checkpoint from your website'

Below is the result I got:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃        Test metric        ┃       DataLoader 0        ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│         test/loss         │            0.0            │
│         test/macc         │            0.0            │
│         test/miou         │            0.0            │
│          test/oa          │            0.0            │
└───────────────────────────┴───────────────────────────┘

But when I train a new model from scratch on KITTI-360 with

python src/train.py experiment=kitti360

I can view the numbers during training:

val/miou_best: 63.757 val/oa_best: 92.886 val/macc_best: 79.989 

I get this warning during evaluation
"You are using a CUDA device ('NVIDIA GeForce RTX 4090') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision"

I tried both 'high' and 'medium' precision, but there is no change in the evaluation result.
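For completeness, here is where I set it, a generic PyTorch call placed once at the top of the script, before the trainer and model are built (not SPT-specific code):

```python
import torch

# Trade float32 matmul precision for Tensor Core speed, as the
# warning suggests. Valid values: 'highest' (default), 'high', 'medium'.
torch.set_float32_matmul_precision('high')  # or 'medium'

print(torch.get_float32_matmul_precision())  # -> 'high'
```

This setting only changes how float32 matrix multiplications are executed on Tensor Cores; it does not affect which metrics get computed.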

I used this project in a Docker container with GPU passthrough according to your setup (CUDA 11.8).
Please help, and thank you for the project.

@drprojects (Owner) commented

Hi @hansoogithub, thanks for your interest in the project.

I have a problem viewing the performance evaluation numbers when i run

This is normal behavior. KITTI-360's test set has held-out labels, meaning you do not have access to the labels for performance evaluation; those are stored on a benchmarking server (see the official website). So the local performance evaluation of SPT can only be run on the validation set, as communicated in our paper. This is why you see empty test performance when running python src/eval.py experiment=kitti360.
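To illustrate why the metrics come out as exactly zero rather than crashing (a minimal sketch of a standard confusion-matrix computation, not the project's actual metric code): when every test point carries an "ignore" label, there is nothing to accumulate, and OA/mIoU evaluate to 0.

```python
import numpy as np

IGNORE = 255  # hypothetical "unlabeled" id, as used by many benchmarks

def miou_and_oa(pred, target, num_classes, ignore_index=IGNORE):
    """Mean IoU and overall accuracy, skipping ignored points."""
    mask = target != ignore_index
    pred, target = pred[mask], target[mask]
    if target.size == 0:
        # No annotated points at all -> nothing to evaluate
        return 0.0, 0.0
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (target, pred), 1)  # rows: ground truth, cols: prediction
    inter = np.diag(cm).astype(float)
    union = cm.sum(0) + cm.sum(1) - np.diag(cm)
    iou = np.divide(inter, union, out=np.zeros_like(inter), where=union > 0)
    return iou[union > 0].mean(), inter.sum() / cm.sum()

preds = np.array([0, 1, 2, 1])

# Held-out test labels: everything is "ignore", so metrics are 0
held_out = np.full(4, IGNORE)
print(miou_and_oa(preds, held_out, num_classes=3))  # (0.0, 0.0)

# With real labels, the same predictions yield non-zero metrics
labels = np.array([0, 1, 2, 2])
print(miou_and_oa(preds, labels, num_classes=3))
```

Submitting predictions to the official KITTI-360 benchmark server is the only way to get test-set numbers.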

I get this warning during evaluation
"You are using a CUDA device ('NVIDIA GeForce RTX 4090') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision"

This is unrelated to the above comment. You can safely ignore this warning.

Best,

Damien

@drprojects drprojects changed the title Evaluation Errors Evaluation on KITTI-360 Test set Oct 24, 2023
@meehirmhatrepy commented
How can I get evaluation metrics on the validation data (KITTI-360)? Where should I specify that in order to get the evaluation metrics?
