Our model code will be uploaded soon.
Please follow GeoSeg to preprocess the LoveDA, Potsdam, and Vaihingen datasets.
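GeoSeg's preprocessing splits the large aerial tiles and their label masks into fixed-size patches. The sketch below only illustrates that idea and is not GeoSeg's actual script; the patch size, stride, and file naming are assumptions, so use GeoSeg's own tools and settings for the real datasets.

```python
import os
from PIL import Image

def split_into_patches(image_path, out_dir, patch_size=1024, stride=1024):
    """Illustrative sliding-window split of one large tile into patches.
    Patch size, stride, and naming here are assumptions, not GeoSeg's defaults."""
    os.makedirs(out_dir, exist_ok=True)
    img = Image.open(image_path)
    w, h = img.size
    stem = os.path.splitext(os.path.basename(image_path))[0]
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patch = img.crop((left, top, left + patch_size, top + patch_size))
            patch.save(os.path.join(out_dir, f"{stem}_{top}_{left}.png"))
```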
"-c" means the path of the config, use different config to train different models.
python train_supervision_dp.py -c ./config/potsdam/convalsrnet.py
python train_supervision_dp.py -c ./config/vaihingen/convalsrnet.py
python train_supervision_dp.py -c ./config/loveda/convalsrnet.py
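In GeoSeg-style repositories, the config passed via "-c" is an ordinary Python file whose top-level variables (model, data loaders, learning rate, and so on) are read by the training script. A minimal sketch of loading such a config, assuming that convention; the attribute names in the example comment are illustrative, not the repository's actual names.

```python
import importlib.util

def load_py_config(path):
    """Load a .py config file as a module so the training script can read
    its top-level variables (model, data loaders, optimizer settings, ...)."""
    spec = importlib.util.spec_from_file_location("config", path)
    cfg = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(cfg)
    return cfg

# Example (attribute names are assumptions about the config layout):
# cfg = load_py_config("./config/potsdam/convalsrnet.py")
# model, train_loader = cfg.net, cfg.train_loader
```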
Vaihingen
python test_vaihingen.py -c ./config/vaihingen/convalsrnet.py -o ./fig_results/convalsrnet_vaihingen/ --rgb -t "d4"
Potsdam
python test_potsdam.py -c ./config/potsdam/convalsrnet.py -o ./fig_results/convalsrnet_potsdam/ --rgb -t "d4"
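The -t "d4" option refers to test-time augmentation over the D4 symmetry group: the four 90° rotations of each tile, with and without a horizontal flip, whose predictions are mapped back to the original orientation and averaged. A minimal PyTorch sketch of that idea (not the repository's test code), assuming the model returns per-pixel logits:

```python
import torch

@torch.no_grad()
def d4_tta_predict(model, image):
    """Average logits over the 8 symmetries of the square (D4 group).
    image: (B, C, H, W) tensor; model(x) returns (B, num_classes, H, W) logits."""
    logits_sum = 0
    for flip in (False, True):
        x = torch.flip(image, dims=[-1]) if flip else image
        for k in range(4):                              # 0, 90, 180, 270 degree rotations
            out = model(torch.rot90(x, k, dims=[-2, -1]))
            out = torch.rot90(out, -k, dims=[-2, -1])   # undo the rotation
            if flip:
                out = torch.flip(out, dims=[-1])        # undo the flip
            logits_sum = logits_sum + out
    return logits_sum / 8
```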
LoveDA (Online Testing)
Output RGB images (offline testing: evaluates on the validation set and directly reports the mIoU results; a minimal mIoU sketch follows the command below)
python test_loveda.py -c ./config/loveda/convalsrnet.py -o ./fig_results/convalsrnet_loveda_rgb --rgb --val -t "d4"
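mIoU here is the per-class intersection over union averaged across classes, accumulated over the whole validation set via a confusion matrix. A minimal NumPy sketch (not the repository's metric implementation):

```python
import numpy as np

def update_confusion(conf_matrix, pred, label, num_classes):
    """Accumulate one image's predictions into the confusion matrix,
    where conf_matrix[i, j] counts pixels of true class i predicted as j."""
    mask = (label >= 0) & (label < num_classes)          # ignore invalid pixels
    idx = num_classes * label[mask].astype(int) + pred[mask].astype(int)
    conf_matrix += np.bincount(idx, minlength=num_classes**2).reshape(num_classes, num_classes)
    return conf_matrix

def miou(conf_matrix):
    """Mean IoU: average over classes of TP / (TP + FP + FN)."""
    tp = np.diag(conf_matrix)
    fp = conf_matrix.sum(axis=0) - tp
    fn = conf_matrix.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)               # avoid division by zero
    return iou.mean()
```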
Output label images (these need to be compressed and uploaded to the online testing website; a compression sketch follows the command below)
python test_loveda.py -c ./config/loveda/convalsrnet.py -o ./fig_results/convalsrnet_loveda_onlinetest -t "d4"
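To prepare the submission, the predicted label maps can be zipped with the Python standard library; the directory below simply mirrors the -o path used above.

```python
import shutil

# Compress the predicted label maps for upload to the LoveDA evaluation server.
# Produces ./fig_results/convalsrnet_loveda_onlinetest.zip
shutil.make_archive(
    base_name="./fig_results/convalsrnet_loveda_onlinetest",
    format="zip",
    root_dir="./fig_results/convalsrnet_loveda_onlinetest",
)
```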
Our training scripts come from ConvLSR-Net, which is based on GeoSeg. Thanks to the authors for open-sourcing their code.