
MSIRNet: Learning multi-granularity semantic interactive representation for joint low-light image enhancement and super-resolution

Jing Ye¹, Shenghao Liu¹, Changzhen Qiu¹, Zhiyong Zhang¹†
¹ Sun Yat-Sen University    † Corresponding author

Introduction 📖

This repo, named MSIRNet, contains the official PyTorch implementation of our paper Learning multi-granularity semantic interactive representation for joint low-light image enhancement and super-resolution. We are actively updating and improving this repository. If you find any bugs or have suggestions, you are welcome to raise issues or submit pull requests (PRs) 💖.

Dependencies and Installation

# create new anaconda env
conda create -n msir python=3.7.11
conda activate msir 

# install python dependencies
pip3 install -r requirements.txt
python setup.py develop
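
A quick way to confirm the installation worked is to import the package and check for a GPU, as in the minimal sketch below. It assumes the editable install exposes the package as basicsr, following the BasicSR layout this repo builds on.

# Optional check that the environment and editable install are usable.
import torch
import basicsr  # noqa: F401  # provided by `python setup.py develop` (assumed package name)

print('torch', torch.__version__, '| CUDA available:', torch.cuda.is_available())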

Dataset

  • Download the dataset.
  • Specify its path in the corresponding option file, or extract the dataset to the project root directory (a quick pairing check is sketched below).
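
The exact folder layout is defined by the dataroot entries in the option file; the snippet below is only a minimal sketch for checking that low-quality and ground-truth images pair up after extraction. The folder names are hypothetical, so replace them with your own paths.

# Hypothetical layout; point these at the dataroot_* paths used in your option file.
from pathlib import Path

lq_dir = Path('datasets/LOL/lq')  # low-light, low-resolution inputs (assumed location)
gt_dir = Path('datasets/LOL/gt')  # normal-light, high-resolution targets (assumed location)

lq_names = {p.name for p in lq_dir.glob('*.png')}
gt_names = {p.name for p in gt_dir.glob('*.png')}
print(f'{len(lq_names)} LQ / {len(gt_names)} GT images, {len(lq_names ^ gt_names)} unmatched')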

Quick Inference

  • Download our pretrained model.
  • Put it in experiments/.
python inference_MSIRNet.py
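
If you want to verify the download before running inference, the short sketch below inspects the checkpoint with plain PyTorch. The filename experiments/MSIRNet.pth is a placeholder, and the 'params' key is only assumed from the BasicSR convention this repo builds on.

# Optional sanity check on the downloaded weights (filename and key are assumptions).
import torch

ckpt = torch.load('experiments/MSIRNet.pth', map_location='cpu')  # adjust to the actual file name
state_dict = ckpt.get('params', ckpt) if isinstance(ckpt, dict) else ckpt  # BasicSR nests weights under 'params'
print(f'{len(state_dict)} parameter tensors in the checkpoint')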

Train the model

Model preparation

Before training, you need to:

  • Download the pretrained HRP models: generator and discriminator.
  • Put the pretrained models in experiments/pretrained_models.
  • Specify their paths in the corresponding option file (see the sketch below).
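
To double-check the paths before launching training, you can parse the option file directly. The sketch below only assumes the BasicSR-style layout in which pretrained weights are listed under a path section; the exact key names in this repo's option files may differ.

# Optional check of the pretrained-model paths in the training config (key names assumed).
import yaml

with open('options/train_MSIR_LQ_stage_LOLX4.yml') as f:
    opt = yaml.safe_load(f)

# BasicSR-style configs usually reference weights as path: pretrain_network_g / pretrain_network_d.
print(opt.get('path', {}))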

Train SR model

python basicsr/train.py -opt options/train_MSIR_LQ_stage_LOLX4.yml

Contact Information

If you have any questions, please feel free to contact me at liushh39@mail2.sysu.edu.cn.

Citation

@article{YE2024102467,
  title   = {Learning multi-granularity semantic interactive representation for joint low-light image enhancement and super-resolution},
  author  = {Jing Ye and Shenghao Liu and Changzhen Qiu and Zhiyong Zhang},
  journal = {Information Fusion},
  pages   = {102467},
  year    = {2024},
  issn    = {1566-2535},
  doi     = {10.1016/j.inffus.2024.102467},
  url     = {https://www.sciencedirect.com/science/article/pii/S1566253524002458},
}

Acknowledgement

The code is based on FeMaSR and BasicSR.
