Orion-AI-Lab/EfficientBigEarthNet

Benchmarking and scaling of deep learning models for land cover image classification.

Code and models from the paper "Benchmarking and scaling of deep learning models for land cover image classification", published in the ISPRS Journal of Photogrammetry and Remote Sensing (2023).

Citation

If you use the models or code provided in this repo, please cite our paper:

@article{PAPOUTSIS2023250,
title = {Benchmarking and scaling of deep learning models for land cover image classification},
journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
volume = {195},
pages = {250--268},
year = {2023},
issn = {0924-2716},
doi = {10.1016/j.isprsjprs.2022.11.012},
url = {https://www.sciencedirect.com/science/article/pii/S0924271622003057},
author = {Ioannis Papoutsis and Nikolaos Ioannis Bountos and Angelos Zavras and Dimitrios Michail and Christos Tryfonopoulos},
keywords = {Benchmark, Land use land cover image classification, BigEarthNet, Wide Residual Networks, EfficientNet, Deep learning, Model zoo, Transfer learning},
abstract = {The availability of the sheer volume of Copernicus Sentinel-2 imagery has created new opportunities for exploiting deep learning methods for land use land cover (LULC) image classification at large scales. However, an extensive set of benchmark experiments is currently lacking, i.e. deep learning models tested on the same dataset, with a common and consistent set of metrics, and in the same hardware. In this work, we use the BigEarthNet Sentinel-2 multispectral dataset to benchmark for the first time different state-of-the-art deep learning models for the multi-label, multi-class LULC image classification problem, contributing with an exhaustive zoo of 62 trained models. Our benchmark includes standard Convolution Neural Network architectures, as well as non-convolutional methods, such as Multi-Layer Perceptrons and Vision Transformers. We put to the test EfficientNets and Wide Residual Networks (WRN) architectures, and leverage classification accuracy, training time and inference rate. Furthermore, we propose to use the EfficientNet framework for the compound scaling of a lightweight WRN, by varying network depth, width, and input data resolution. Enhanced with an Efficient Channel Attention mechanism, our scaled lightweight model emerged as the new state-of-the-art. It achieves 4.5% higher averaged F-Score classification accuracy for all 19 LULC classes compared to a standard ResNet50 baseline model, with an order of magnitude less trainable parameters. We provide access to all trained models, along with our code for distributed training on multiple GPU nodes. This model zoo of pre-trained encoders can be used for transfer learning and rapid prototyping in different remote sensing tasks that use Sentinel-2 data, instead of exploiting backbone models trained with data from a different domain, e.g., from ImageNet. We validate their suitability for transfer learning in different datasets of diverse volumes. Our top-performing WRN achieves state-of-the-art performance (71.1% F-Score) on the SEN12MS dataset while being exposed to only a small fraction of the training dataset.}
}

Available pretrained models (the full model zoo can be found here). These encoders can be restored for transfer learning in other Sentinel-2 tasks, as sketched below.

Requirements:

tensorflow==2.4.1, horovod==0.21.0

Usage:

To run an experiment, modify the config file and execute train.py. Example for MLPMixer with batch size 100 and learning rate 1e-4 (a sketch of how these fields might be consumed follows the config):


{
  "model_name": "MLPMixer",
  "hparams": {"phi": 1.0, "alpha": 1.0, "beta": 1.0, "gamma": 1.0, "dropout": 0.1},
  "batch_size": 100,
  "nb_epoch": 30,
  "learning_rate": 1e-4,
  "save_checkpoint_after_iteration": 0,
  "save_checkpoint_per_iteration": 1,
  "tr_tf_record_files": ["/work2/pa20/ipapout/gitSpace/TF1.10.1gpu_Py3/NikosTmp/v2/bigearthnet-noa-hua/bigearthnet-tf2/fulldataset/split-10nodes-fulldataset/train*.tfrecord"],
  "val_tf_record_files": ["/work2/pa20/ipapout/gitSpace/TF1.10.1gpu_Py3/NikosTmp/v2/bigearthnet-noa-hua/bigearthnet-tf2/fulldataset/split-10nodes-fulldataset/val*.tfrecord"],
  "test_tf_record_files": ["/work2/pa20/ipapout/gitSpace/TF1.10.1gpu_Py3/NikosTmp/v2/bigearthnet-noa-hua/bigearthnet-tf2/fulldataset/split-10nodes-fulldataset/test*.tfrecord"],
  "label_type": "BigEarthNet-19",
  "fine_tune": false,
  "shuffle_buffer_size": 5000,
  "training_size": 269695,
  "val_size": 125866,
  "test_size": 125866,
  "decay_rate": 0.1,
  "backward_passes": 4,
  "decay_step": 27,
  "label_smoothing": 0,
  "mode": "train",
  "eval_checkpoint": "/work2/pa20/ipapout/gitSpace/TF1.10.1gpu_Py3/NikosTmp/v2/charmbigearth/bigearthnet-tf2/bestTestResNet50/checkpoint_ResNet50",
  "augment": true
}

To execute on a single-GPU machine:

python3 train.py --parallel=False

or for multi-node training:

horovodrun --gloo -np $SLURM_NTASKS -H $WORKERS --network-interface ib0 --start-timeout 120 --gloo-timeout-seconds 120 python3 train.py --parallel=True
