How to test on real-world images #28

Closed
mikqs opened this issue Jun 13, 2022 · 1 comment
mikqs commented Jun 13, 2022

Hello,

Could you please explain how one can use real-world images after training the model? I have tested the model successfully on the validation dataset from VTD, but for real-world data, I believe I have to semantically segment it with the color palette that was used for training. Is there an existing model that you recommend for segmentation? (i.e., which model did you use to label the left-most real-world input pictures of Fig. 6 in the paper?)

Thank you

Originally posted by @mikqs in #16 (comment)


lreiher commented Jun 19, 2022

You are correct that our proposed methodology takes semantically segmented camera images as input. As stated in our paper, we have used an in-house semantic segmentation model for the real-world examples shown in the paper.

We don't specifically recommend any model for semantic segmentation, but naturally, a better semantic segmentation model would also provide better input for Cam2BEV. There are plenty of publicly available semantic segmentation methods; one starting point might be to take a look at paperswithcode.
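For anyone looking for a concrete starting point, below is a minimal, hypothetical sketch of that workflow (not the in-house model mentioned above, and not an official recommendation): it segments a real-world frame with an off-the-shelf torchvision DeepLabV3 model and recolors the predicted class map. The `PALETTE` dictionary, the class indices, and the file names are placeholders and must be replaced with the exact class-to-color palette used to prepare the Cam2BEV training data.

```python
# Hypothetical sketch: segment a real-world image with an off-the-shelf
# model (torchvision DeepLabV3, chosen only as an example), then recolor
# the predicted class map with the palette used for Cam2BEV training.
import numpy as np
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Placeholder mapping from DeepLabV3 (VOC-style) class indices to RGB colors.
# Replace with the actual class-to-color palette of your Cam2BEV training data.
PALETTE = {
    0: (0, 0, 0),     # background
    7: (0, 0, 142),   # car (example color, not the official palette)
}

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("real_world_frame.png").convert("RGB")  # placeholder path
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))["out"]  # [1, C, H, W]
class_map = logits.argmax(dim=1).squeeze(0).numpy()        # [H, W]

# Recolor the class map so it resembles the segmentation images Cam2BEV
# was trained on; classes not listed in PALETTE fall back to black.
seg_rgb = np.zeros((*class_map.shape, 3), dtype=np.uint8)
for class_id, color in PALETTE.items():
    seg_rgb[class_map == class_id] = color
Image.fromarray(seg_rgb).save("real_world_frame_segmented.png")
```

The recolored output could then be preprocessed the same way as the synthetic VTD segmentation images before being fed to Cam2BEV; the class set and palette simply need to match what the network saw during training.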

lreiher closed this as completed Jun 19, 2022