VSpSR: Explorable Super-Resolution via Variational Sparse Representation

This repository is an official PyTorch implementation of VSpSR from the NTIRE 2021 Learning the Super-Resolution Space challenge.

We provide scripts for reproducing all of the results. You can train the model from scratch, or use a pre-trained model to generate diverse super-resolved (SR) images.

Requirements

VSpSR is built with Python 3.6 and PyTorch 1.7.1. Use the following command to install the requirements:

pip install -r requirements.txt
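
If you prefer an isolated environment, something along these lines should work (using conda is our assumption; any virtual-environment tool will do):

conda create -n vspsr python=3.6     # environment with the Python version used by this repo
conda activate vspsr
pip install -r requirements.txt      # install the listed dependencies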

Quickstart (Demo)

You can test our super-resolution algorithm on your own images. Place them in any folder you like (e.g. test).

Run the script in the src folder. Before running the demo, uncomment the line in demo.sh that you want to execute.

cd VSpSR      # You are now in */VSpSR
sh demo.sh

You can find the resulting images in the output folder.
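
For reference, an uncommented test line in demo.sh usually resembles the following; the script name, flags, and paths here are hypothetical placeholders rather than the actual contents of demo.sh:

python main.py --test_only --pre_train ../models/VSpSR.pt --dir_demo ../test --dir_out ../output   # hypothetical example; check demo.sh for the real script and flags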

How to train VSpSR

We used the DIV2K dataset from NTIRE 2021 to train our model. Please download the training and validation sets: tr_1X, tr_4X, tr_8X, va_1X, va_4X, va_8X.

Download all of the files to any location you want (e.g. NTIRE2021). Then change the dataset_root argument in demo.sh to the directory where the DIV2K images are located.
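
For example, if the downloads live in /data/NTIRE2021, the training line in demo.sh would point dataset_root there; everything besides dataset_root below is a hypothetical placeholder:

python main.py --dataset_root /data/NTIRE2021 --scale 4   # hypothetical invocation; only dataset_root is documented in this README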

You can train VSpSR yourself; all scripts are provided in demo.sh.

One 12-GB Titan X GPU is used for training VSpSR. Training takes about 13 hours.

cd VSpSR       # You are now in */VSpSR
sh demo.sh
