accuracy of your SegNet model #38
Thanks for your interest. The code from the BeyondRGB paper uses multiple data sources (DSM + RGB); it is a multimodal network. I do not have time right now to update the PyTorch code with this model, but I hope to do so in the near future.
In addition, what are your training and validation datasets for Vaihingen and Potsdam? You did not explain this in your article. Are these the training and testing datasets from your article? segnet_vaihingen_128x128_fold1_iter_60000.caffemodel (112.4 MB) (backup link): pre-trained model on the ISPRS Vaihingen dataset (trained on tiles 1, 3, 5, 7, 11, 13, 15, 17, 21, 23, 26, 28, 30; validated on tiles 32, 34, 37). Best,
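For clarity, the fold-1 split quoted above can be written out explicitly. This is only an illustrative sketch; the tile IDs come from the comment, but the variable names are mine, not from the repository's code:

```python
# Fold-1 split of the ISPRS Vaihingen dataset, as quoted above.
# Variable names are illustrative, not from SegNet_PyTorch_v2.ipynb.
train_tiles = [1, 3, 5, 7, 11, 13, 15, 17, 21, 23, 26, 28, 30]
val_tiles = [32, 34, 37]

# Sanity check: no tile appears in both splits.
assert not set(train_tiles) & set(val_tiles)
print(f"{len(train_tiles)} training tiles, {len(val_tiles)} validation tiles")
```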
IIRC, the final results reported in the article are trained on the whole training set, and the metrics are computed on the official ISPRS test set. Different test splits can yield different metrics.
@nshaud
I just ran the default PyTorch code SegNet_PyTorch_v2.ipynb. However, the accuracy on Vaihingen is about 86%, not as good as in your paper. Then I read your paper again and found that you have made some changes. Do you have any plan to share the code from the BeyondRGB paper?