📖 High-resolution Depth Maps Imaging via Attention-based Hierarchical Multi-modal Fusion (IEEE TIP 2022)

[Paper] Zhiwei Zhong, Xianming Liu, Junjun Jiang, Debin Zhao, Zhiwen Chen, Xiangyang Ji
Harbin Institute of Technology, Tsinghua University

Abstract

A depth map records the distance between the viewpoint and the objects in the scene, and plays a critical role in many real-world applications. However, depth maps captured by consumer-grade RGB-D cameras suffer from low spatial resolution. Guided depth map super-resolution (DSR) is a popular approach to this problem: it attempts to restore a high-resolution (HR) depth map from the input low-resolution (LR) depth map and its coupled HR RGB image, which serves as the guidance. The most challenging issue for guided DSR is how to correctly select consistent structures and propagate them, and properly handle inconsistent ones. In this paper, we propose a novel attention-based hierarchical multi-modal fusion (AHMF) network for guided DSR. Specifically, to effectively extract and combine relevant information from the LR depth and HR guidance, we propose a multi-modal attention-based fusion (MMAF) strategy for hierarchical convolutional layers, comprising a feature enhancement block that selects valuable features and a feature recalibration block that unifies the similarity metrics of modalities with different appearance characteristics. Furthermore, we propose a bi-directional hierarchical feature collaboration (BHFC) module to fully leverage low-level spatial information and high-level structure information among multi-scale features. Experimental results show that our approach outperforms state-of-the-art methods in terms of reconstruction accuracy, running speed, and memory efficiency.


This repository is an official PyTorch implementation of the paper "High-resolution Depth Maps Imaging via Attention-based Hierarchical Multi-modal Fusion".
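
For orientation, the snippet below is a minimal, self-contained PyTorch sketch of what an attention-based multi-modal fusion step could look like at one hierarchy level. All names in it (`FeatureEnhancement`, `MultiModalFusion`, `channels`) are hypothetical illustrations and do not come from this repository; see the released code for the actual MMAF and BHFC modules.

    import torch
    import torch.nn as nn


    class FeatureEnhancement(nn.Module):
        """Gate one modality's features with attention predicted from both
        modalities, so only valuable features pass (hypothetical sketch)."""

        def __init__(self, channels):
            super().__init__()
            self.attn = nn.Sequential(
                nn.Conv2d(2 * channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.Sigmoid(),
            )

        def forward(self, feat, guide):
            # Attention is predicted from both modalities, applied to `feat`.
            gate = self.attn(torch.cat([feat, guide], dim=1))
            return feat * gate


    class MultiModalFusion(nn.Module):
        """Fuse depth and RGB guidance features at one hierarchy level
        (an illustration, not the paper's exact MMAF block)."""

        def __init__(self, channels):
            super().__init__()
            self.enhance_depth = FeatureEnhancement(channels)
            self.enhance_rgb = FeatureEnhancement(channels)
            # Stand-in for feature recalibration: a 1x1 conv that merges the
            # two enhanced modalities back into a single feature map.
            self.recalibrate = nn.Conv2d(2 * channels, channels, 1)

        def forward(self, depth_feat, rgb_feat):
            d = self.enhance_depth(depth_feat, rgb_feat)
            r = self.enhance_rgb(rgb_feat, depth_feat)
            return self.recalibrate(torch.cat([d, r], dim=1))


    if __name__ == "__main__":
        fuse = MultiModalFusion(channels=64)
        depth = torch.randn(1, 64, 32, 32)  # upsampled LR depth features
        rgb = torch.randn(1, 64, 32, 32)    # HR RGB guidance features
        print(fuse(depth, rgb).shape)       # torch.Size([1, 64, 32, 32])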

🔧 Dependencies and Installation

Installation

  1. Clone repo

    git clone https://github.com/zhwzhong/AHMF.git
    cd AHMF
  2. Install dependent packages

    pip install -r requirements.txt

Train

You can also train the model yourself:

 The training code can be found at https://github.com/zhwzhong/Guided-Depth-Map-Super-resolution-A-Survey.

Quick Test

We provide trained models for testing. They can be found at:

https://github.com/zhwzhong/AHMF/releases/download/Middlebury/ahmf.tar.gz

    python test.py
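
If convenient, the release tarball can be fetched and unpacked with a few lines of standard-library Python before running the test script. The local filename and extraction directory below are our own choices, not something the repository specifies:

    import tarfile
    import urllib.request

    # Release URL taken from this README.
    URL = "https://github.com/zhwzhong/AHMF/releases/download/Middlebury/ahmf.tar.gz"

    # Download the archive (the local filename is our choice).
    urllib.request.urlretrieve(URL, "ahmf.tar.gz")

    # Unpack the trained models into the current directory.
    with tarfile.open("ahmf.tar.gz", "r:gz") as tar:
        tar.extractall(".")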

📧 Contact

If you have any questions, please email zhwzhong@hit.edu.cn.

Citation

@ARTICLE{9642435,
  author={Zhong, Zhiwei and Liu, Xianming and Jiang, Junjun and Zhao, Debin and Chen, Zhiwen and Ji, Xiangyang},
  journal={IEEE Transactions on Image Processing},
  title={High-Resolution Depth Maps Imaging via Attention-Based Hierarchical Multi-Modal Fusion},
  year={2022},
  volume={31},
  pages={648-663},
  doi={10.1109/TIP.2021.3131041}
}
