This repository has been archived by the owner on Aug 20, 2023. It is now read-only.

Thesis Project: Research on Full-reference Image Quality Assessment by Using Transformer and DISTS with GAN-based Augmentation


Table of Contents

  1. Hardware
  2. Installation
  3. Dataset Preparation
  4. Prepare Data and Code
  5. Training
  6. Evaluation
  7. Prediction

Hardware

The following specs were used to create the original solution.

  • Ubuntu 20.04.3 LTS
  • Intel(R) Core(TM) i9-10900 CPU @ 2.80GHz
  • NVIDIA GeForce RTX 3090

Installation

  1. Anaconda is required. Modify environment.yml to match your machine's environment.

    name: iqa  # change the environment name if you want
    channels:
    ...
    ...
    prefix: /home/rhome/nelson/.conda/envs/iqa  # change the prefix to the directory where the conda environment should be created
  2. Create a conda environment from environment.yml. <envs_name> is the environment name you assigned; the default is iqa.

    conda env create -f environment.yml
    conda activate <envs_name>

Dataset Preparation

Download the dataset from the following website.

Prepare Data and Code

I split the PIPAL Public Training Set into training, validation, and test sets by splitting "Train_Label" into "Train_Label", "Val_Label", and "Test_Label", and placed them in the "PIPAL(processed)" directory, which is currently provided to DCS Lab members only. After downloading and extracting, the data directory is structured as:

+- data
    +- PIPAL(processed)
      +- Dist
      +- Ref
      +- Test_Label
      +- Train_Label
      +- Val_Label
    +- LIVE
    +- TID2013
+- code
    +- scores_record
    +- src
    ATDIQA.py
    Augmented ATDIQA.py
    environment.yml
    eval.py
    pred.py
    README.md
    train.py

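A short script can sanity-check that the extracted data matches the layout above. The directory names below are taken from the tree; `DATA_ROOT` is an assumption you should adjust for your machine:

```python
from pathlib import Path

# Root of the extracted data; adjust for your machine (assumption).
DATA_ROOT = Path("data/PIPAL(processed)")

# Sub-directories expected by the layout shown above.
EXPECTED = ["Dist", "Ref", "Test_Label", "Train_Label", "Val_Label"]

def check_layout(root, expected):
    """Return the expected sub-directories that are missing under root."""
    return [name for name in expected if not (root / name).is_dir()]

missing = check_layout(DATA_ROOT, EXPECTED)
if missing:
    print("Missing directories:", ", ".join(missing))
else:
    print("PIPAL(processed) layout looks complete.")
```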
Then, change into the 'code' directory.

cd code

Training

For help with train.py, use the following command.

python train.py --help

Train with configuration file:

python train.py --config <config_path>
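For illustration, the --config interface can be sketched with argparse. This is an assumed reconstruction, not the actual parser in train.py:

```python
import argparse

def parse_args(argv=None):
    """Minimal sketch of train.py's command-line interface (assumed, not the real parser)."""
    parser = argparse.ArgumentParser(description="Train an FR-IQA model.")
    parser.add_argument("--config", required=True,
                        help="path to a YAML configuration file, e.g. one from src/config/experiments")
    return parser.parse_args(argv)

# Example with an explicit argument list instead of sys.argv:
args = parse_args(["--config", "src/config/experiments/IQT-L_config.yaml"])
print("Would train with config:", args.config)
```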

DISTS-based and IQT-based Methods

There are several default configuration files in src/config/experiments.

| Model      | Configuration file     |
| ---------- | ---------------------- |
| DISTS-Tune | DISTS-Tune_config.yaml |
| IQT        | IQT_config.yaml        |
| IQT-C      | IQT-C_config.yaml      |
| IQT-L      | IQT-L_config.yaml      |
| IQT-M      | IQT-M_config.yaml      |
| IQT-H      | IQT-H_config.yaml      |
| IQT-Mixed  | IQT-Mixed_config.yaml  |

These models are mentioned in my thesis.

Example

Take IQT-L for example:

python train.py --config src/config/experiments/IQT-L_config.yaml

Augmented FR-IQA

Two Augmented FR-IQA methods are mentioned in my thesis: Augmented DISTS-Tune and Augmented IQT-Mixed. In addition, these methods require a three-phase training pipeline.

You should train phase 1 before training phase 2, and likewise train phase 2 before training phase 3.

Example

Take Augmented DISTS-Tune for example:

  1. Train phase 1

    python train.py --config src/config/experiments/Aug_DISTS-Tune_phase1_config.yaml
  2. Train phase 2

    python train.py --config src/config/experiments/Aug_DISTS-Tune_phase2_config.yaml
  3. Train phase 3

    python train.py --config src/config/experiments/Aug_DISTS-Tune_phase3_config.yaml
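The three phases can also be chained from a small Python script. This is a sketch that assumes train.py is invoked exactly as in the commands above:

```python
import subprocess
import sys

# Config files for the three training phases, as listed above.
PHASES = [
    "src/config/experiments/Aug_DISTS-Tune_phase1_config.yaml",
    "src/config/experiments/Aug_DISTS-Tune_phase2_config.yaml",
    "src/config/experiments/Aug_DISTS-Tune_phase3_config.yaml",
]

def run_phases(configs, python=sys.executable, dry_run=False):
    """Run train.py once per phase, in order; stop at the first failure."""
    commands = [[python, "train.py", "--config", cfg] for cfg in configs]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)  # raises CalledProcessError if a phase fails
    return commands

# Show what would be executed without actually training.
for cmd in run_phases(PHASES, dry_run=True):
    print(" ".join(cmd))
```

Using check=True ensures phase 2 never starts if phase 1 fails, matching the ordering requirement above.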

Evaluation

For help with eval.py, use the following command.

python eval.py --help

Evaluate with a configuration file and the path of the model:

python eval.py --config <config_path> --netD_path <netD_path> --dataset <dataset_name>
  • <config_path> is the path to the model's configuration file.
  • <netD_path> is the path to the FR-IQA model weights.
  • <dataset_name> can be chosen from 'PIPAL', 'LIVE', and 'TID2013'.

Example

For example, to evaluate IQT-L on LIVE, assume that the weights of IQT-L are saved at experiments/IQT-L/models/netD_epoch200.pth.

python eval.py --config src/config/experiments/IQT-L_config.yaml --netD_path experiments/IQT-L/models/netD_epoch200.pth --dataset LIVE

Prediction

For help with pred.py, use the following command.

python pred.py --help

Get a prediction (as a pickle file) with a specific configuration file and the path of the model:

python pred.py --config <config_path> --netD_path <netD_path> --output <pkl_file_name> --dataset <dataset_name>
  • <config_path> is the path to the model's configuration file.
  • <netD_path> is the path to the FR-IQA model weights.
  • <pkl_file_name> is the file name of the prediction (pickle file).
  • <dataset_name> can be chosen from 'PIPAL', 'LIVE', and 'TID2013'.

Example

For example, to output the predicted scores of IQT-L on LIVE, assume that the weights of IQT-L are saved at experiments/IQT-L/models/netD_epoch200.pth.

python pred.py --config src/config/experiments/IQT-L_config.yaml --netD_path experiments/IQT-L/models/netD_epoch200.pth --output IQT-L --dataset LIVE

You will get a file named IQT-L, which is a pickle file.
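The contents of that pickle file are defined by pred.py; assuming it stores a mapping from distorted-image name to predicted score (an assumption, check pred.py for the real format), it can be inspected like this:

```python
import pickle

def load_predictions(path):
    """Load the pickle file written by pred.py."""
    with open(path, "rb") as f:
        return pickle.load(f)

def summarize(scores):
    """Print a small summary of a {name: score} mapping (assumed structure)."""
    values = list(scores.values())
    print(f"{len(values)} predictions, "
          f"min={min(values):.4f}, max={max(values):.4f}, "
          f"mean={sum(values) / len(values):.4f}")
```

Usage, assuming the output file from the command above: summarize(load_predictions("IQT-L")).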
