[AAAI 2024] Official code for Efficient Deweather Mixture-of-Experts with Uncertainty-aware Feature-wise Linear Modulation

RoyZry98/MoFME-Pytorch

Efficient Deweather Mixture-of-Experts with Uncertainty-aware Feature-wise Linear Modulation

Python 3.8 · arXiv

Installation

  1. Clone the repository:

     git clone https://github.com/RoyZry98/MoFME-AAAI2024-Offiicial.git

  2. Install the required packages:

     pip install -r requirements.txt

Dataset Preparation

  1. Download the Allweather Dataset via Baidu Netdisk:
  2. Modify the dataset path:
  • In MoWE_DDP/configs/dataset_cfg.py, replace '/data/lyl/data/allweather' on line 39 with your dataset path.
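
One way to apply this edit from the command line is a `sed` substitution; this is a sketch, not part of the repo, and `DATASET_ROOT` is a placeholder for your local Allweather location:

```shell
# Sketch: rewrite the hard-coded dataset path in dataset_cfg.py.
# DATASET_ROOT is a placeholder -- substitute your own location.
CFG=MoWE_DDP/configs/dataset_cfg.py
DATASET_ROOT=/your/allweather/path
if [ -f "$CFG" ]; then
    sed -i "s|/data/lyl/data/allweather|${DATASET_ROOT}|g" "$CFG"
fi
```

If you prefer, open the file and edit line 39 by hand; the result is the same.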

Training

  1. Train the model:
  • Navigate to the project directory:

    cd MoWE_DDP
    
  • Configure the hyperparameters:

    ps=8               # patch size of the Transformer backbone
    bs=64              # batch size
    ep=200             # number of training epochs
    lr=0.0002          # learning rate
    scheduler='cosine+warmup'  # learning-rate scheduler, optional: []
    task='low_level'   # optional: derain, desnow, deraindrop (allweather)
    dataset='allweather'  # optional: allweather
    model='mowe'       # optional: []
    dim=384            # embedding dimension of the Transformer backbone
    interval=20        # run a test pass every ${interval} epochs
  • Adjust the number of GPUs to be used:

    export CUDA_VISIBLE_DEVICES='4,5,6,7'
    torchrun --master_port 29510 --nproc_per_node=4 train.py \
        --gpu-list 4 5 6 7
    
  • Run baseline and naive MoE:

    bash scripts_allweather_mofme_baseline.sh
    
  • Run MoFME:

    bash scripts_allweather_mofme_ours.sh
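
The GPU settings above can be kept in sync mechanically. The following sketch derives `--nproc_per_node` from `CUDA_VISIBLE_DEVICES` (this derivation and the `NGPU` helper are my assumptions, not repo code) and prints the resulting launch command:

```shell
# Sketch: derive the process count from CUDA_VISIBLE_DEVICES so the two
# settings cannot drift apart. NGPU is a helper name, not defined by the repo.
export CUDA_VISIBLE_DEVICES='4,5,6,7'
NGPU=$(echo "$CUDA_VISIBLE_DEVICES" | tr ',' '\n' | wc -l | tr -d ' ')
echo "torchrun --master_port 29510 --nproc_per_node=${NGPU} train.py --gpu-list ${CUDA_VISIBLE_DEVICES//,/ }"
```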
    

Testing

  1. Test the model:
  • After training, use the official Allweather test set.

    $output_dir: the directory where the testing metrics will be written

    $model_path: the path to the trained best_metric.pth checkpoint

    output_dir=allweather_moe-film-linear-basenet-star-gelu-n${n}-k${k}_ep200
    model_path=output/train/allweather_moe-film-linear-basenet-star-gelu-n${n}-k${k}_bs64_ep200_ps8_embed384_mlpx4_mlpupsample-outchx4_cnn-embed_wo-pe_normalize_vgg0.04_lr0.0002/best_metric.pth
    
  • Run the testing script:

    bash scripts_test.sh
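
For concreteness, the `${n}`/`${k}` placeholders in the paths above expand as shown below. The values `n=8` and `k=2` are illustrative only (my assumption); use the expert count and top-k from your own training run:

```shell
# Placeholder values; n and k must match the configuration used in training.
n=8
k=2
output_dir=allweather_moe-film-linear-basenet-star-gelu-n${n}-k${k}_ep200
model_path=output/train/allweather_moe-film-linear-basenet-star-gelu-n${n}-k${k}_bs64_ep200_ps8_embed384_mlpx4_mlpupsample-outchx4_cnn-embed_wo-pe_normalize_vgg0.04_lr0.0002/best_metric.pth
echo "$output_dir"   # -> allweather_moe-film-linear-basenet-star-gelu-n8-k2_ep200
```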
    

Inference

  1. Infer all images under a directory:
  • Configure the script:

    Change $model_path, $output_dir, $task (optional: derain, deraindrop, desnow), and $cuda

    bash scripts_infer.sh
    
  2. Infer a single image:
  • Configure the script:

    Change $model_path, $output_dir, $img_path, $task (optional: derain, deraindrop, desnow), and $cuda

    bash scripts_infer_one.sh
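
The variables for single-image inference can be collected in one place before running the script. Every value below is a placeholder of my own choosing, not a repo default:

```shell
# All values are placeholders -- point them at your own files.
model_path=path/to/best_metric.pth   # trained checkpoint
output_dir=output/infer_one          # where the restored image is written
img_path=path/to/input.png           # the single image to restore
task=derain                          # one of: derain, deraindrop, desnow
cuda=0                               # GPU index to use
```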
    

Citation

Please cite our work if you find it useful.

@inproceedings{zhang2024efficient,
  title={Efficient Deweather Mixture-of-Experts with Uncertainty-Aware Feature-Wise Linear Modulation},
  author={Zhang, Rongyu and Luo, Yulin and Liu, Jiaming and Yang, Huanrui and Dong, Zhen and Gudovskiy, Denis and Okuno, Tomoyuki and Nakata, Yohei and Keutzer, Kurt and Du, Yuan and others},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={15},
  pages={16812--16820},
  year={2024}
}
