This is an unofficial implementation of LEDNet.
The official version: LEDNet-official
- Python 3.6
- PyTorch 1.1
- Base Size 1024, Crop Size 768, fine annotations only (new version, with dropout)
Model | Paper | OHEM | Drop-rate | lr | Epoch | val (crop) | val |
---|---|---|---|---|---|---|---|
LEDNet | / | ✗ | 0.1 | 0.0005 | 800 | 60.32/94.51 | 66.29/94.40 |
LEDNet | / | ✗ | 0.1 | 0.005 | 600 | 61.29/94.75 | 66.56/94.72 |
LEDNet | / | ✗ | 0.3 | 0.01 | 800 | 63.84/94.83 | 69.09/94.75 |
Note:
- The paper only provides test-set results: 69.2/86.8 (class mIoU/category mIoU); the val columns above appear to be mIoU/pixAcc pairs.
- The training settings here differ slightly from the original paper (the paper trains with 1024x512 input).
Some things you can do to improve performance (see the example command after this list):
- use a larger learning rate (e.g. 0.01)
- train for more epochs (e.g. 1000)
- use a larger training input size (e.g. Base Size 1344, Crop Size 1024)
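For example, a run combining all three (the flags are the ones used by the training command below; these values are illustrative, not tuned, and the larger crop size may require lowering --batch-size to fit GPU memory):
$ python -m torch.distributed.launch --nproc_per_node=$NGPUS train.py --dataset citys --batch-size 8 --base-size 1344 --crop-size 1024 --epochs 1000 --lr 0.01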
Please download the pretrained model first.
$ python demo.py [--input-pic png/demo.png] [--pretrained your-root-of-pretrained] [--cuda true]
The default data root is ~/.torch/datasets.
(You can download the dataset elsewhere and create a soft link to it, as shown below.)
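For example (the citys subdirectory name is an assumption matching the --dataset citys flag used below; replace /path/to/cityscapes with wherever you extracted the dataset):
$ ln -s /path/to/cityscapes ~/.torch/datasets/citys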
$ python eval.py [--mode testval] [--pretrained root-of-pretrained-model] [--cuda true]
Distributed training is recommended.
$ export NGPUS=4
$ python -m torch.distributed.launch --nproc_per_node=$NGPUS train.py [--dataset citys] [--batch-size 8] [--base-size 1024] [--crop-size 768] [--epochs 800] [--warmup-factor 0.1] [--warmup-iters 200] [--log-step 10] [--save-epoch 40] [--lr 0.005]
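If you only have a single GPU, the same script should also run without the launcher (a sketch using the same flags; --batch-size 4 is an illustrative value, lower it if you run out of memory):
$ python train.py --dataset citys --batch-size 4 --base-size 1024 --crop-size 768 --epochs 800 --lr 0.005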
You can refer to gluon-cv-cityscapes to prepare the dataset.
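For reference, the layout under the data root follows the standard Cityscapes structure (the citys root directory is an assumption matching the soft link above):
citys/
├── leftImg8bit/
│   ├── train/
│   ├── val/
│   └── test/
└── gtFine/
    ├── train/
    ├── val/
    └── test/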