Zeyu Zhu and Xiangyong Cao
For more information, please see our paper: CVPR-2023-Open-Access or arXiv.
Set up a virtual conda environment using the provided requirements.txt.
conda create --name PGCU --file requirements.txt
conda activate PGCU
You can simply define the PGCU module in your model with three hyper-parameters, i.e., Channel, VecLen, and NumberBlocks, which represent the number of image channels, the length of the feature vectors, and the number of stacked DSBlocks, respectively. Details can be found in our paper.
self.upsample = PGCU(Channel, VecLen, NumberBlocks)
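For example, here is a minimal sketch of how the module might be wired into a PyTorch model. The import path follows model/PGCU.py; the surrounding class and the hyper-parameter values are hypothetical, not part of the repository:

import torch.nn as nn
from model.PGCU import PGCU

class PansharpeningNet(nn.Module):
    def __init__(self, channel=4, vec_len=128, num_blocks=3):  # hypothetical values
        super().__init__()
        # channel: number of spectral bands in the MS image
        # vec_len: length of each feature vector
        # num_blocks: number of stacked DSBlocks
        self.upsample = PGCU(channel, vec_len, num_blocks)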
In the forward function of your model, you can simply upsample the image with the guiding image using the following code:
# lrms: low resolution multispectral image
# pan: panchromatic (PAN) image
self.upsample(lrms, pan)
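Continuing the hypothetical sketch above, this is how the call might sit inside the model's forward method (calling the module directly invokes its forward; the tensor shapes assume the default 4x setting):

    def forward(self, lrms, pan):
        # lrms: (B, C, H, W) low-resolution multispectral image
        # pan:  (B, 1, 4H, 4W) panchromatic guide
        up = self.upsample(lrms, pan)  # upsampled MS at the spatial size of pan
        # ... the rest of the network consumes `up` ...
        return up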
It is worth noting that our implementation of PGCU upsamples LRMS to the scale of PAN, where PAN is four times the size of LRMS, so PGCU upsamples LRMS by a factor of four. If you want to change this factor, you can add a MaxPooling or Conv2d layer with stride=2 so that the information matrices extracted from LRMS and PAN have the same size, as illustrated below. The implementation of PGCU is in model/PGCU.py.
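As an illustration only (not code from the repository), one conceptual way to add such a layer in the PAN feature-extraction branch is shown below. The names pan_feat and c are hypothetical, and the right insertion point depends on the code in model/PGCU.py:

import torch.nn as nn

# Downsample the PAN feature map by 2 so the information matrix extracted
# from PAN matches the spatial size of the one extracted from LRMS.
pan_down = nn.MaxPool2d(kernel_size=2)
# pan_down = nn.Conv2d(c, c, kernel_size=3, stride=2, padding=1)  # alternative
pan_feat = pan_down(pan_feat)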
Data preprocessing, the dataloader, and metrics (e.g., SAM, ERGAS, and PSNR) are implemented in utils.
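The exact function names in utils may differ; for reference only, a generic sketch of what two of these metrics compute:

import numpy as np

def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio for images scaled to [0, max_val].
    mse = np.mean((pred - target) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

def sam(pred, target, eps=1e-8):
    # Spectral angle mapper, averaged over pixels; inputs are (H, W, C).
    dot = np.sum(pred * target, axis=-1)
    norm = np.linalg.norm(pred, axis=-1) * np.linalg.norm(target, axis=-1)
    return np.mean(np.arccos(np.clip(dot / (norm + eps), -1.0, 1.0)))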
Pre-trained models on the WorldView2 and WorldView3 datasets are saved in result/PanNet/WV2exp0 and result/PanNet/WV3exp0, respectively.
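A hedged loading sketch, assuming the checkpoint stores a standard PyTorch state_dict (the actual filename inside the result directories may differ):

import torch

# 'model.pth' is a hypothetical filename; check the directory for the real one.
state = torch.load('result/PanNet/WV2exp0/model.pth', map_location='cpu')
model.load_state_dict(state)  # model: your instantiated network
model.eval()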
If you find our work useful, please cite our paper.
@article{zhu2023probability,
title={Probability-based Global Cross-modal Upsampling for Pansharpening},
author={Zhu, Zeyu and Cao, Xiangyong and Zhou, Man and Huang, Junhao and Meng, Deyu},
journal={arXiv preprint arXiv:2303.13659},
year={2023}
}