Joint-ID: Transformer-Based Joint Image Enhancement and Depth Estimation for Underwater Environments

IEEE Sensors Journal 2023

This repository represents the official implementation of the paper titled "Transformer-Based Joint Image Enhancement and Depth Estimation for Underwater Environments".


Geonmo Yang, Gilhwan Kang, Juhui Lee, Younggun Cho

We propose a novel approach for enhancing underwater images that leverages the benefits of joint learning for simultaneous image enhancement and depth estimation. We introduce Joint-ID, a transformer-based neural network that can obtain high-perceptual image quality and depth information from raw underwater images. Our approach formulates a multi-modal objective function that addresses invalid depth, lack of sharpness, and image degradation based on color and local texture.


🛠️ Prerequisites

  1. Run the demo locally (requires a GPU and nvidia-docker2; see the Installation Guide).

  2. Optionally, we provide instructions for using Docker in multiple ways (docker compose is recommended; see the Installation Guide).

  3. The code requires python>=3.8, as well as pytorch>=1.7 and torchvision>=0.8. We do not provide instructions for installing PyTorch and TorchVision; please use nvidia-docker2 😁. If you install them yourself, installing both with CUDA support is strongly recommended (a quick environment check is sketched after this list).

  4. This code was tested on:

  • Ubuntu 22.04 LTS, Python 3.10.12, CUDA 11.7, GeForce RTX 3090 (pip)
  • Ubuntu 22.04 LTS, Python 3.8.6, CUDA 12.0, RTX A6000 (pip)
  • Ubuntu 20.04 LTS, Python 3.10.12, CUDA 12.1, GeForce RTX 3080ti (pip)
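
If you do install PyTorch and TorchVision yourself, the quick check below (a minimal sketch, assuming both packages are already installed) prints the versions and confirms CUDA visibility:

    # check_env.py -- verify interpreter and PyTorch/TorchVision versions
    # against the requirements above, and confirm CUDA is visible
    import sys

    import torch
    import torchvision

    print("python      :", sys.version.split()[0])
    print("torch       :", torch.__version__)
    print("torchvision :", torchvision.__version__)
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))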

🚀 Table of Contents

🛠️ Setup

  1. 📦 Prepare Repository & Checkpoints
  2. ⬇ Prepare Dataset
  3. 🐋 Prepare Docker Image and Run the Docker Container

🚀 Training or Testing for Joint-ID

  1. 🚀 Training for Joint-ID on Joint-ID Dataset
  2. 🚀 Testing for Joint-ID on Joint-ID Dataset
  3. 🚀 Testing for Joint-ID on Standard or Custom Dataset

✏️ ETC

  1. ⚙️ Inference settings

  2. 🎓 Citation

  3. ✉️ Contact


🛠️ Setup

📦 Prepare Repository & Checkpoints

  1. Clone the repository (requires git):

    git clone https://github.com/sparolab/Joint_ID.git
    cd Joint_ID
  2. Let ${Joint-ID_root} denote the path where the Joint-ID repository is located.

  3. Download the checkpoint joint_id_ckpt.pth of our model to the path ${Joint-ID_root}/Joint_ID.
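
Optionally, you can confirm that the checkpoint deserializes correctly (a minimal sketch; the exact layout of the saved dictionary is not documented here, so the code only prints the top-level keys):

    # inspect_ckpt.py -- run from ${Joint-ID_root}/Joint_ID, where
    # joint_id_ckpt.pth was downloaded; confirms the file loads
    import torch

    ckpt = torch.load("joint_id_ckpt.pth", map_location="cpu")
    if isinstance(ckpt, dict):
        for key in ckpt:        # e.g. model / optimizer state dicts
            print(key)
    else:
        print(type(ckpt))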


(back to table)

⬇ Prepare Dataset


  1. Download the Joint_ID_Dataset.zip

  2. Next, unzip Joint_ID_Dataset.zip, taking its download path as ${dataset_root_path}.

    sudo unzip ${dataset_root_path}/Joint_ID_Dataset.zip   # ${dataset_root_path} requires at least 2.3 GB of space.
    # ${dataset_root_path} is the absolute path, not relative path.
  3. After unzipping, you should see the following file structure in the Joint_ID_Dataset folder:

    📦 Joint_ID_Dataset
    ┣ 📂 train
    ┃ ┣ 📂 LR                  # GT for training dataset
    ┃ ┃ ┣ 📂 01_Warehouse
    ┃ ┃ ┃ ┣ 📂 color           # enhanced image
    ┃ ┃ ┃ ┃ ┣ 📜 in_00_160126_155728_c.png
    ┃ ┃ ┃ ┃       ...
    ┃ ┃ ┃ ┃
    ┃ ┃ ┃ ┗ 📂 depth_filled    # depth image
    ┃ ┃ ┃   ┣ 📜 in_00_160126_155728_depth_filled.png
    ┃ ┃ ┃         ...
    ┃ ┃ ...
    ┃ ┗ 📂 synthetic           # synthetic distorted dataset
    ┃   ┣ 📜 LR@01_Warehouse@color...7512.jpg
    ┃   ┣      ...
    ┃
    ┗ 📂 test                  # the 'test' folder has the same structure as 'train'
          ...
    
  4. For additional dataset details, see the project page. (A quick sanity check of the unzipped layout is sketched below.)
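
As the sanity check mentioned above (a sketch assuming the tree shown; only the train/LR split is scanned), you can count the color/depth pairs per scene:

    # count_pairs.py -- run from ${dataset_root_path}; counts color images
    # and depth maps per scene under train/LR to confirm the layout above
    import os

    root = os.path.join("Joint_ID_Dataset", "train", "LR")
    for scene in sorted(os.listdir(root)):
        color_dir = os.path.join(root, scene, "color")
        depth_dir = os.path.join(root, scene, "depth_filled")
        if not (os.path.isdir(color_dir) and os.path.isdir(depth_dir)):
            continue
        print(f"{scene}: {len(os.listdir(color_dir))} color / "
              f"{len(os.listdir(depth_dir))} depth")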


(back to table)

πŸ‹ Prepare Docker Image and Run the Docker Container

To run a Docker container, we first need a Docker image. There are two ways to create the image and run the container.

  1. Use docker pull:

    # download the docker image
    docker pull ygm7422/official_joint_id:latest    
    
    # run the docker container
    nvidia-docker run \
    --privileged \
    --rm \
    --gpus all -it \
    --name joint-id \
    --ipc=host \
    --shm-size=256M \
    --net host \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    -v /root/.Xauthority:/root/.Xauthority \
    --env="QT_X11_NO_MITSHM=1" \
    -v ${dataset_root_path}/Joint_ID_Dataset:/root/workspace/dataset_root \
    -v ${Joint-ID_root}/Joint_ID:/root/workspace \
    ygm7422/official_joint_id:latest 
  2. Use docker compose (this builds the Docker image and runs the container simultaneously):

    cd ${Joint-ID_root}/Joint_ID
    
    # build docker image and run container simultaneously
    bash run_docker.sh up gpu ${dataset_root_path}/Joint_ID_Dataset
    
    # Inside the container
    docker exec -it Joint_ID bash

Regardless of whether you use method 1 or 2, you should have a docker container named Joint_ID running.
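
Once inside the container, you can confirm that the GPU and the two bind mounts from the commands above are visible (a minimal sketch; the checkpoint path assumes joint_id_ckpt.pth was placed in ${Joint-ID_root}/Joint_ID as described in Setup):

    # verify_container.py -- run inside the Joint_ID container
    import os

    import torch

    print("CUDA available:", torch.cuda.is_available())
    for path in ("/root/workspace",
                 "/root/workspace/dataset_root",
                 "/root/workspace/joint_id_ckpt.pth"):
        print("OK     " if os.path.exists(path) else "MISSING", path)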


(back to table)

🚀 Training or Testing for Joint-ID

🚀 Training for Joint-ID on Joint-ID Dataset

  1. First, move to the /root/workspace folder inside the docker container. Then, run the following command to start the training.
    # move to workspace
    cd /root/workspace
    
    # start to train on Joint-ID Dataset
    python run.py local_configs/arg_joint_train.txt
  2. The model's checkpoints and log files are saved in the /root/workspace/save folder.
  3. If you want to change the default settings for training, see Inference settings below.

(back to table)

🚀 Testing for Joint-ID on Joint-ID Dataset

  1. First, move to the /root/workspace folder inside the docker container. Then, run the following command to start the testing.

    # move to workspace
    cd /root/workspace
    
    # start to test on Joint-ID Dataset
    python run.py local_configs/arg_joint_test.txt
  2. The test images and results are saved in the result_joint.diml.joint_id folder.

  3. If you want to change the default settings for testing, see Inference settings below.


(back to table)

🚀 Testing for Joint-ID on Standard or Custom Dataset

  1. Set the dataset-related variables in the local_configs/cfg/joint.diml.joint_id.py file. In the excerpt below, set the sample_test_data_path variable to your input image path.

    ...
    
    # If you want to adjust the image size, adjust the `image_size` below.
    image_size = dict(input_height=288,
                      input_width=512)
    ...
    
    # Dataset
    dataset = dict(
               train_data_path='dataset_root/train/synthetic',
               ...
               # sample_test_data_path='${your standard or custom dataset path}',
               sample_test_data_path='demo',
               video_txt_file=''
               )
    ...
  2. First, move to the /root/workspace folder inside the docker container. Then, run the following command to start the testing.

    # move to workspace
    cd /root/workspace
    
    # start to test on standard datasets
    python run.py local_configs/arg_joint_samples_test.txt
  3. The test images and results are saved in the sample_eval_result_joint.diml.joint_id folder.
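
If your own images do not match the configured resolution, one option is to resize them before placing them in the sample_test_data_path folder (alternatively, enable auto_crop; see Inference settings below). A minimal sketch using Pillow, with hypothetical folder names:

    # resize_custom.py -- resize a folder of images to the input size
    # configured in local_configs/cfg/joint.diml.joint_id.py (288x512 here)
    import os
    from PIL import Image

    src_dir, dst_dir = "my_images", "demo"    # hypothetical source / target
    os.makedirs(dst_dir, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        img = img.resize((512, 288), Image.BILINEAR)    # (width, height)
        img.save(os.path.join(dst_dir, name))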


(back to table)

βš™οΈ Inference settings

We set the hyperparameters in 'local_configs/cfg/joint.diml.joint_id.py'.

depth_range: Range of depth we want to estimate

image_size: the size of the input image data. If you set this variable, make sure auto_crop is set to False in train_dataloader_cfg, eval_dataloader_cfg, test_dataloader_cfg, or sample_test_cfg below. If you do not want to fix image_size, set auto_crop to True; with auto_crop enabled, inputs are fed to the model at their original size.
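
For example, the two consistent combinations look roughly like this (a sketch in the style of the config excerpt above; keys other than auto_crop are omitted, and the exact dataloader dictionaries are assumptions based on the variable names mentioned here):

    # Option A: fixed input size -- pair image_size with auto_crop=False
    image_size = dict(input_height=288, input_width=512)
    test_dataloader_cfg = dict(auto_crop=False)

    # Option B: native resolution -- auto_crop=True (pick one option only)
    test_dataloader_cfg = dict(auto_crop=True)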

train_parm: hyperparameters to set when training.

test_parm: hyperparameters to set when testing.


(back to table)

🎓 Citation

Please cite our paper:

@article{yang2023joint,
  title={Joint-ID: Transformer-based Joint Image Enhancement and Depth Estimation for Underwater Environments},
  author={Yang, Geonmo and Kang, Gilhwan and Lee, Juhui and Cho, Younggun},
  journal={IEEE Sensors Journal},
  year={2023},
  publisher={IEEE}
}

(back to table)

βœ‰οΈ Contact

Geonmo Yang: ygm7422@gmail.com

Project Link: https://sites.google.com/view/joint-id/home


(back to table)

🎫 License

This work is licensed under the GNU General Public License, Version 3.0 (as defined in the LICENSE).
