Could you please explain how one can use real-world images after training the model? I have tested the model successfully on the validation dataset from VTD, but for real-world data, I believe I have to semantically segment it with the color palette that was used for training. Is there an existing model that you recommend for segmentation? (i.e., which model did you use to label the left-most real-world input pictures in Fig. 6 of the paper?)
You are correct that our proposed methodology takes semantically segmented camera images as input. As stated in our paper, we used an in-house semantic segmentation model for the real-world examples shown there.
We don't recommend any specific model for semantic segmentation, but naturally, a better semantic segmentation model would also provide better input for Cam2BEV. Plenty of semantic segmentation methods are publicly available; one starting point might be to take a look at paperswithcode.
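As a practical note on the palette point raised above: whatever segmentation model you choose will typically output per-pixel class indices, which then need to be recolored with the same palette Cam2BEV was trained on. Below is a minimal sketch of that remapping step; the class indices and RGB values are placeholders, so substitute the palette from your own training configuration.

```python
# Sketch: convert a segmentation model's (H, W) class-index map into an
# (H, W, 3) RGB image colored with the training palette.
# NOTE: the indices and colors below are placeholders, not the actual
# Cam2BEV palette -- replace them with the palette used for training.
import numpy as np

PALETTE = {
    0: (128, 64, 128),  # e.g. road (placeholder color)
    1: (0, 0, 142),     # e.g. vehicle (placeholder color)
    2: (0, 0, 0),       # e.g. everything else (placeholder color)
}

def indices_to_palette(class_map: np.ndarray) -> np.ndarray:
    """Map an (H, W) array of class indices to an (H, W, 3) uint8 RGB image."""
    # Build a lookup table: row i holds the RGB color of class i.
    lut = np.zeros((max(PALETTE) + 1, 3), dtype=np.uint8)
    for idx, rgb in PALETTE.items():
        lut[idx] = rgb
    # Fancy indexing applies the lookup to every pixel at once.
    return lut[class_map]

# Usage: a tiny 2x2 class-index map.
seg = np.array([[0, 1], [2, 0]])
rgb = indices_to_palette(seg)
```

If your segmentation model uses a different class set than the training data, you would also need to merge or remap classes (e.g., collapsing several of its classes onto one training class) before this recoloring step.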
Originally posted by @mikqs in #16 (comment)