Testing On Other Image Sets #90
Hey Marvin,

Thank you so much for sharing your code. When I run KittiSeg on the KITTI dataset, the segmentation finds the road at the precision you posted. But when I run KittiSeg on images from other datasets such as Cityscapes, the network either fails to detect the road at all or finds only its left edge. I don't see any filtering applied to the input image before the TensorFlow session runs in demo.py. Any idea why there would be such a big difference in performance?

Best,
David

Comments
Yes, the Cityscapes images are statistically very different from the KITTI data. You will need to train the network on Cityscapes data if you want meaningful results. If that is not an option, you need to at least perform some kind of domain adaptation before using different data. That is, however, a different field of research.
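One crude way to see (and partly reduce) this statistical gap is to align the per-channel colour statistics of the new images with those of the training set. The sketch below is not part of KittiSeg and is far from real domain adaptation; the `kitti_mean`/`kitti_std` values are placeholder assumptions you would have to compute from your own training images.

```python
import numpy as np

def match_channel_stats(img, ref_mean, ref_std):
    """Shift and scale each RGB channel of `img` so its mean and std
    match reference statistics (e.g. computed over the KITTI training set)."""
    img = img.astype(np.float32)
    mean = img.mean(axis=(0, 1))        # per-channel mean of this image
    std = img.std(axis=(0, 1)) + 1e-6   # avoid division by zero
    out = (img - mean) / std * ref_std + ref_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical KITTI statistics -- compute these from your actual training data.
kitti_mean = np.array([95.0, 99.0, 96.0], dtype=np.float32)
kitti_std = np.array([60.0, 62.0, 64.0], dtype=np.float32)

# adapted = match_channel_stats(cityscapes_img, kitti_mean, kitti_std)
```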
Great, thanks for your response. Do you by chance have results from training and evaluating KittiSeg on other datasets, or with domain adaptation?
I got around 98% IoU by training on Cityscapes data.
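IoU here is the usual intersection-over-union of the predicted road mask against the ground-truth mask; a minimal sketch of that metric follows (whether the 98% figure was averaged per image or over the whole dataset is not stated).

```python
import numpy as np

def road_iou(pred_mask, gt_mask):
    """Intersection-over-union of two binary road masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union else 1.0
```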
Great, thanks for your response.
Is there any chance you could share the Cityscapes-trained model?
I did those experiments quite a while ago. The scripts won't work out of the box and are not compatible with the current code base, so I think it would be much easier if you implemented a Cityscapes data loader yourself.
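For what it's worth, the loading part is not much code. The sketch below is a starting point, not KittiSeg's own loader: it assumes the standard Cityscapes directory layout (`leftImg8bit`/`gtFine`) and collapses the labelIds ground truth to the binary road/background mask KittiSeg trains on (road is id 7 in the official labelIds encoding; verify against cityscapesScripts). Wiring the resulting pairs into KittiSeg's input pipeline is left to whatever the current code base expects.

```python
import glob
import os
import imageio
import numpy as np

CITYSCAPES_ROAD_ID = 7  # 'road' in the official Cityscapes labelIds

def cityscapes_road_pairs(cityscapes_root, split='train'):
    """Yield (image, binary_road_mask) pairs from a Cityscapes checkout.

    Assumes the standard layout:
      leftImg8bit/<split>/<city>/*_leftImg8bit.png
      gtFine/<split>/<city>/*_gtFine_labelIds.png
    """
    pattern = os.path.join(cityscapes_root, 'leftImg8bit', split,
                           '*', '*_leftImg8bit.png')
    for img_path in sorted(glob.glob(pattern)):
        gt_path = img_path.replace('leftImg8bit', 'gtFine', 1) \
                          .replace('_leftImg8bit.png', '_gtFine_labelIds.png')
        image = imageio.imread(img_path)
        label_ids = imageio.imread(gt_path)
        road_mask = (label_ids == CITYSCAPES_ROAD_ID).astype(np.uint8)
        yield image, road_mask
```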