uda-self-training

This repo is the official implementation of Weakly supervised high spatial resolution land cover mapping based on self-training with weighted pseudo-labels. The code will be released publicly in June 2023.

Model zoo

The pretrained models, together with their scores, can be downloaded from Baidu Cloud (Extraction code: 1234).

Dataset

LoveDA Dataset

Getting Started

Requirements:

conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch

pip install mmcv-full==1.5.1 -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11.0/index.html

pip install mmsegmentation==0.24.1

pip install ever-beta==0.2.3

pip install timm

pip install --upgrade git+https://github.com/Z-Zheng/SimpleCV.git
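
A quick way to confirm the environment is assembled correctly is to import the main packages and check the CUDA build. This is an optional sanity check and not part of the original setup steps.

# Optional environment sanity check (not part of the original instructions).
import torch
import mmcv
import mmseg

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("mmcv:", mmcv.__version__)
print("mmsegmentation:", mmseg.__version__)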

Prepare LoveDA Dataset

ln -s </path/to/LoveDA> ./LoveDA
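
Optionally, the symlink can be verified with a short script. The folder names below assume the public LoveDA release layout (Train/Val splits with Urban/Rural domains and images_png directories) and are not taken from this repository.

# Optional check that ./LoveDA points at the expected data
# (assumes the public LoveDA layout: Train|Val / Urban|Rural / images_png).
from pathlib import Path

root = Path("./LoveDA")
for split in ("Train", "Val"):
    for domain in ("Urban", "Rural"):
        img_dir = root / split / domain / "images_png"
        count = len(list(img_dir.glob("*.png"))) if img_dir.exists() else 0
        print(f"{split}/{domain}: {count} images")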

Train

  • Stage one, training with the source domain: python source.py
  • Stage two, self-training: python train.py (a conceptual sketch of this stage follows the list)
  • Oracle setting, used to test the upper bound of our method's accuracy within a single domain: python oracle.py
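
The core idea of stage two is that the stage-one (source-trained) model produces pseudo-labels on the target domain, and each pixel's loss is weighted by the prediction confidence. The snippet below is only a conceptual sketch of that idea in plain PyTorch, not the repository's train.py; the function name, threshold, and weighting rule are illustrative assumptions.

# Conceptual sketch of self-training with confidence-weighted pseudo-labels.
# NOT the repository's train.py; names, threshold, and weighting rule are illustrative.
import torch
import torch.nn.functional as F

def weighted_pseudo_label_loss(student_logits, teacher_logits, threshold=0.9):
    """Pixel-wise cross-entropy on pseudo-labels, weighted by teacher confidence."""
    with torch.no_grad():
        probs = F.softmax(teacher_logits, dim=1)        # (N, C, H, W)
        conf, pseudo = probs.max(dim=1)                 # confidence and hard labels, (N, H, W)
        weight = conf * (conf >= threshold).float()     # drop / down-weight low-confidence pixels

    loss = F.cross_entropy(student_logits, pseudo, reduction="none")  # (N, H, W)
    return (weight * loss).sum() / weight.sum().clamp(min=1.0)

# Example with random tensors standing in for network outputs (7 LoveDA classes):
student = torch.randn(2, 7, 64, 64)
teacher = torch.randn(2, 7, 64, 64)
print(weighted_pseudo_label_loss(student, teacher))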

Acknowledgments

This code borrows heavily from LoveDA.

Citation

If you find this repo useful for your research, please consider citing the paper as follows:

@article{liu2022weakly,
  title={Weakly supervised high spatial resolution land cover mapping based on self-training with weighted pseudo-labels},
  author={Liu, Wei and Liu, Jiawei and Luo, Zhipeng and Zhang, Hongbin and Gao, Kyle and Li, Jonathan},
  journal={Int. J. Appl. Earth Obs. Geoinf.},
  volume={112},
  pages={102931},
  year={2022}
}
