nuScenes evaluation #8
Comments
Hi @kaancayli , may I ask if you have solved this problem? I am doing something quite similar, and it would be helpful if you are willing to share! To be specific, did you get reasonable results when evaluating on the nuScenes data transferred to KITTI format? Thanks in advance for your reply and help!
Hi @Galaxy-ZRX , I solved the problem by using another evaluation script (from the "SECOND" object detection repository). The problem is caused by a numerical error. It's hard for me to dig through my browser history today, but I will try to share the solution with you tomorrow.
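(The thread does not say exactly where the numerical error lives, but as an illustration of the kind of issue KITTI-style AP evaluation can hit, here is a minimal sketch of 11-point interpolated AP with an explicit guard against the classic 0/0 recall case. Function names and the `safe_recall` helper are illustrative, not taken from any of the repositories mentioned above.)

```python
import numpy as np

def ap_11_point(recalls, precisions):
    """KITTI-style 11-point interpolated average precision.

    recalls, precisions: 1-D arrays describing the precision/recall curve.
    """
    ap = 0.0
    for t in np.arange(0.0, 1.01, 0.1):  # 11 recall thresholds: 0.0, 0.1, ..., 1.0
        mask = recalls >= t
        # max precision at recall >= t; 0 if the curve never reaches recall t
        p = precisions[mask].max() if mask.any() else 0.0
        ap += p / 11.0
    return ap

def safe_recall(tp, num_gt):
    """A common numerical pitfall: if no ground-truth boxes survive the
    difficulty filter, recall becomes 0/0. Guard against it explicitly."""
    return tp / num_gt if num_gt > 0 else 0.0
```

If an evaluation script skips such a guard (or silently propagates a NaN through the curve), the reported AP can collapse to 0 even when the detections themselves are fine.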
Hi @kaancayli , thanks for sharing! It doesn't matter if you can't find it, since it has been so long :). I also know that model, and I will look into it for a solution. Many thanks again!
3D_adapt_auto_driving-evaluate.tar.gz
@kaancayli Thank you so much!
Hi,
I'm currently working on my bachelor thesis. In my work, I convert the nuScenes dataset into the KITTI format. Your work helped me a lot. I would like to thank you in advance for that.
I have a small problem. I'm using Frustum PointNet as the network architecture. After I convert the nuScenes dataset into the KITTI format, I train the network on this converted nuScenes dataset. Everything works fine, but when I want to evaluate the network's performance, I encounter some problems. I use the KITTI benchmark for evaluation. The problem is that I get 3D AP scores close to 0 if I evaluate the network (trained on converted nuScenes data) on converted nuScenes data. However, this is not the case if I evaluate the same network (trained on converted nuScenes data) on KITTI. In that case, I get good results (for example, 3D AP for Car is 66 on Easy).
My question is: do you have any idea what may cause such a big difference? Do you apply different operations depending on the dataset in your evaluation script?
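(One hypothesis worth ruling out, though the thread does not confirm it as the cause: the KITTI benchmark filters ground-truth boxes by difficulty using 2D bbox height, occlusion level, and truncation. If the converted nuScenes labels carry occlusion/truncation values that fail those thresholds, nearly all ground truth gets discarded and AP collapses toward 0 even for a well-trained model. A minimal sketch to count surviving boxes, assuming labels already parsed into dicts:)

```python
# KITTI object benchmark difficulty thresholds:
#   difficulty -> (min bbox height in px, max occlusion level, max truncation)
DIFFICULTY = {
    "easy":     (40, 0, 0.15),
    "moderate": (25, 1, 0.30),
    "hard":     (25, 2, 0.50),
}

def count_valid(labels, difficulty):
    """Count ground-truth boxes that survive the given difficulty filter.

    labels: list of dicts with keys 'bbox_height', 'occlusion', 'truncation'
    (hypothetical field names; adapt to however your labels are parsed).
    """
    min_h, max_occ, max_trunc = DIFFICULTY[difficulty]
    return sum(
        1 for box in labels
        if box["bbox_height"] >= min_h
        and box["occlusion"] <= max_occ
        and box["truncation"] <= max_trunc
    )
```

If `count_valid` returns (near) zero on the converted nuScenes ground truth but a sensible number on real KITTI labels, the discrepancy is in the label conversion and difficulty filtering, not in the detector.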
I would really appreciate your answer and support. Have a nice day.