
multispectral-satellite-inpainting-with-text

arXiv | IEEE Xplore

Official code for the paper "Exploring the Capability of Text-to-Image Diffusion Models With Structural Edge Guidance for Multispectral Satellite Image Inpainting" published in IEEE Geoscience and Remote Sensing Letters:

@ARTICLE{10445344,
  author={Czerkawski, Mikolaj and Tachtatzis, Christos},
  journal={IEEE Geoscience and Remote Sensing Letters}, 
  title={Exploring the Capability of Text-to-Image Diffusion Models With Structural Edge Guidance for Multispectral Satellite Image Inpainting}, 
  year={2024},
  volume={21},
  pages={1-5},
  keywords={Satellite images;Image edge detection;Data models;Standards;Noise reduction;Task analysis;Process control;Generative models;image completion;image inpainting},
  doi={10.1109/LGRS.2024.3370212}
}

Method

There are two components to the considered system:

RGB-based Inpainting Model

The method explores the capabilities of existing off-the-shelf inpainting models, in this case StableDiffusion 1.5 inpainting with ControlNet conditioning.

(Figure: diffusion-edgeguided-infer diagram)

Based on the ControlNetInpaint repository. Check it out for other types of conditioned inpainting!
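A minimal sketch of this stage, written against the Hugging Face diffusers API rather than this repository's own wrapper (the checkpoint names, the Canny thresholds, and the input variables current_rgb, mask and hist_rgb are illustrative assumptions):

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

# Canny-conditioned ControlNet on top of a StableDiffusion 1.5 inpainting checkpoint
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Edge map extracted from a historical (cloud-free) view of the same scene
gray = cv2.cvtColor(np.array(hist_rgb), cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)
edges = np.stack([edges] * 3, axis=-1)  # 3-channel control image

out = pipe(
    prompt="a satellite image",
    image=current_rgb,                   # RGB composite containing the gap
    mask_image=mask,                     # 1 where pixels are missing
    control_image=Image.fromarray(edges),
    num_inference_steps=20,
    guidance_scale=7.5,                  # influence of the text prompt
    controlnet_conditioning_scale=0.5,   # influence of the historical edge map
).images[0]

Here the edge map provides the structural guidance, while the text prompt and the two guidance scales control how strongly the appearance follows the prompt versus the edges.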

Channel-wise Inpainting

The output of the RGB model can then be used to propagate the inpainted content to the remaining spectral channels. Here, Deep Image Prior is used to do this in an internal-learning regime (no pre-training necessary).

(Figure: dip-diagram)
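As an illustration of the internal-learning idea (a simplified Deep Image Prior loop in plain PyTorch, not the repository's implementation; the network architecture, loss weighting and variable names are assumptions), a randomly initialised network can be fitted so that it reproduces the observed multispectral pixels outside the gap and the diffusion-inpainted RGB bands inside it:

import torch
import torch.nn as nn

# current_msi: (1, C, H, W) multispectral image with a gap, values in [0, 1]
# mask:        (1, 1, H, W), 1 where pixels are missing
# rgb_filled:  (1, 3, H, W) RGB inpainting result from the diffusion stage
# rgb_idx:     indices of the RGB bands within the C channels, e.g. [3, 2, 1]

def make_dip_net(C, width=64):
    # Small fully convolutional network mapping fixed noise to the image
    return nn.Sequential(
        nn.Conv2d(32, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, C, 1), nn.Sigmoid(),
    )

def dip_inpaint(current_msi, mask, rgb_filled, rgb_idx, steps=4000, lr=1e-3):
    _, C, H, W = current_msi.shape
    net = make_dip_net(C)
    z = torch.randn(1, 32, H, W)  # fixed random input, never changed
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    known = 1.0 - mask
    for _ in range(steps):
        opt.zero_grad()
        pred = net(z)
        # Fit the observed multispectral pixels outside the gap...
        loss = ((pred - current_msi) * known).pow(2).mean()
        # ...and the diffusion-inpainted RGB bands inside the gap
        loss = loss + ((pred[:, rgb_idx] - rgb_filled) * mask).pow(2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        pred = net(z)
    # Keep observed pixels; use the DIP prediction only inside the gap
    return current_msi * known + pred * mask

Because the network is optimised per image, no external training data is needed; the convolutional structure of the network itself acts as the prior that spreads the RGB content into the other bands.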

Variants

| Variant | Details |
| --- | --- |
| Direct-DIP | Direct use of Deep Image Prior to inpaint an image (with an optional conditioning signal, such as a historical image) |
| SD-Inpainting | Conventional use of the StableDiffusion 1.5 inpainting model |
| Edge-Guided Inpainting | StableDiffusion 1.5 inpainting with edge-guided conditioning via ControlNet |

Example use:

model = InpainterMSI(type='EG')

out = model(current,
            mask,
            condition=hist,
            #prompt='Your custom text prompt',
            text_guidance_scale=7.5, # influence of text prompt
            edge_guidance_scale=0.5, # influence of historical edge
            num_inference_steps=20, # number of diffusion RGB inpainting steps
            num_DIP_steps=4000 # number of DIP optimisation steps for RGB-to-MSI
           )
