3D Deformable Attention (DFA3D)

By Hongyang Li*, Hao Zhang*, Zhaoyang Zeng, Shilong Liu, Feng Li, Tianhe Ren, and Lei Zhang 📧.

[Paper] [BibTex]

This repository is the official implementation of the paper "DFA3D: 3D Deformable Attention For 2D-to-3D Feature Lifting".

🔥 News

[2023/7/15] Our paper is accepted by ICCV2023.

[2023/8/24] We open-source our 3D Deformable Attention (DFA3D) and the DFA3D-enabled BEVFormer.

🗓️ TODO List

  • Release 3D Deformable Attention.
  • Release BEVFormer-DFA3D-PredDepth (-base & -small) and BEVFormer-DFA3D-GTDepth.
  • Add more comments.
  • Format and release the code for preparing depth maps.
  • Release 3D attention visualization tool.

📜 Abstract

In this paper, we propose a new operator, called 3D DeFormable Attention (DFA3D), for 2D-to-3D feature lifting, which transforms multi-view 2D image features into a unified 3D space for 3D object detection. Existing feature lifting approaches, such as Lift-Splat-based and 2D attention-based, either use estimated depth to get pseudo LiDAR features and then splat them to a 3D space, which is a one-pass operation without feature refinement, or ignore depth and lift features by 2D attention mechanisms, which achieve finer semantics while suffering from a depth ambiguity problem. In contrast, our DFA3D-based method first leverages the estimated depth to expand each view's 2D feature map to 3D and then utilizes DFA3D to aggregate features from the expanded 3D feature maps. With the help of DFA3D, the depth ambiguity problem can be effectively alleviated from the root, and the lifted features can be progressively refined layer by layer, thanks to the Transformer-like architecture. In addition, we propose a mathematically equivalent implementation of DFA3D which can significantly improve its memory efficiency and computational speed. We integrate DFA3D into several methods that use 2D attention-based feature lifting with only a few modifications in code and evaluate on the nuScenes dataset. The experiment results show a consistent improvement of +1.41 mAP on average, and up to +15.1 mAP improvement when high-quality depth information is available, demonstrating the superiority, applicability, and huge potential of DFA3D.
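
As a minimal illustration of the idea above, the sketch below (plain PyTorch with our own naming and coordinate conventions, not the repository's optimized CUDA operator) samples the depth-expanded feature map of one view without ever building it: in the trilinear interpolation, the depth scores factor out per corner pixel, so each corner contributes its 2D feature scaled by a linearly interpolated depth weight.

import torch

def dfa3d_sample(feat_2d, depth_dist, xyd):
    """Sample the (virtually) depth-expanded feature map of one view.

    feat_2d:    (C, H, W)   2D image feature.
    depth_dist: (Db, H, W)  per-pixel depth distribution over Db bins.
    xyd:        (N, 3)      sampling points (x, y, d) in pixel / bin units.

    Equivalent to trilinear sampling of the expanded volume
    V[c, k, y, x] = feat_2d[c, y, x] * depth_dist[k, y, x],
    which is never materialized.
    """
    C, H, W = feat_2d.shape
    Db = depth_dist.shape[0]
    x, y, d = xyd.unbind(-1)
    x0f, y0f, d0f = x.floor(), y.floor(), d.floor()
    ax, ay, ad = x - x0f, y - y0f, d - d0f              # interpolation fractions
    x0 = x0f.long().clamp(0, W - 1)
    y0 = y0f.long().clamp(0, H - 1)
    d0 = d0f.long().clamp(0, Db - 1)
    x1, y1, d1 = (x0 + 1).clamp(max=W - 1), (y0 + 1).clamp(max=H - 1), (d0 + 1).clamp(max=Db - 1)
    out = feat_2d.new_zeros(C, xyd.shape[0])
    for yi, wy in ((y0, 1 - ay), (y1, ay)):
        for xi, wx in ((x0, 1 - ax), (x1, ax)):
            # depth score of this corner pixel, linearly interpolated at the query depth d
            dep = (1 - ad) * depth_dist[d0, yi, xi] + ad * depth_dist[d1, yi, xi]   # (N,)
            out = out + (wy * wx * dep) * feat_2d[:, yi, xi]                        # (C, N)
    return out                                                                      # (C, N)

The repository's CUDA kernel computes this in a fused, memory-efficient way across heads, levels, and views; the sketch only illustrates the sampling rule for a single view.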

🛠️ Method

Comparison of feature lifting methods.

Improvements.

Our DFA3D brings consistent improvements to several methods, including two concurrent works (DA-BEV and Sparse4D).

Improving the quality of depth will bring further gains (up to 15.1% mAP).

How to transform your 2D Attention-based feature lifting into our 3D Deformable Attention-based one.

Here, we take 2D Deformable Attention as an example; only a few modifications in code are required (a schematic sketch is given below). For more details, please refer to the examples provided in our Model Zoo.

For a complete example, please refer to our provided DFA3D-enabled BEVFormer.
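
As a rough, schematic illustration of the kind of change involved (pseudocode; deform_attn_2d, deform_attn_3d, and the argument names are placeholders rather than the repository's actual API; the real interface is in the DFA3D-enabled BEVFormer code): each sampling point gains a depth coordinate, and the per-view depth distributions are passed to the attention operator.

# DFA2D: sampling points lie on the multi-scale 2D feature maps.
#   value:              (bs, num_value, num_heads, dim)
#   sampling_locations: (bs, num_query, num_heads, num_levels, num_points, 2)   # (u, v)
output = deform_attn_2d(value, spatial_shapes, sampling_locations, attention_weights)

# DFA3D: each sampling point carries an extra depth coordinate, and the per-view
# depth distributions are passed so the operator can sample the (virtually)
# depth-expanded feature maps.
#   value_depth:        (bs, num_value, num_heads, num_depth_bins)
#   sampling_locations: (bs, num_query, num_heads, num_levels, num_points, 3)   # (u, v, d)
output = deform_attn_3d(value, value_depth, spatial_shapes, sampling_locations, attention_weights)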

🚀 Model Zoo

We denote 2D Deformable Attention and our 3D Deformable Attention as DFA2D and DFA3D, respectively.

Method                   Feature Lifting   mAP / NDS                     Config   Checkpoint
BEVFormer-base           DFA2D-based       41.6 / 51.7                   config   model
                         DFA3D-based       43.2 / 53.2 (+1.6 / +1.5)     config   model
BEVFormer-small          DFA2D-based       37.0 / 47.9                   config   model
                         DFA3D-based       40.3 / 50.9 (+3.3 / +3.0)     config   model
BEVFormer-base-GTDepth   DFA2D-based       - / -                         -        -
                         DFA3D-based       57.6 / 63.6 (+16.0 / +11.9)   config   model

⚙️ Usage

We develop our 3D Deformable Attention based on mmcv. We test our method under python=3.8.13, pytorch=1.9.1, and cuda=11.1; other versions may work as well.

Installation

  1. Clone this repo.
git clone https://github.com/IDEA-Research/3D-deformable-attention.git
cd 3D-deformable-attention/
  2. Install PyTorch and torchvision.

Follow the instructions at https://pytorch.org/get-started/locally/.

# an example:
conda install -c pytorch pytorch torchvision
  3. Compile and install 3D-Deformable-Attention.
cd DFA3D
bash setup.sh 0
# check that it is installed correctly.
cd ../
python unittest_DFA3D.py

Run

Prepare datasets

Construct the dataset as in BEVFormer. Then download our prepared depth maps (obtained by projecting single-sweep LiDAR points onto the multi-view images; a sketch of this projection is given below) and unzip them at

./data/nuscenes/depth_gt/
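
If you want to regenerate such depth maps yourself, a minimal sketch of the projection looks like the following (a generic pinhole projection with assumed variable names; the repository's own depth-map preparation code, listed in the TODO above, may differ):

import numpy as np

def lidar_to_depth_map(points_lidar, lidar2cam, cam_intrinsic, img_h, img_w):
    """points_lidar: (N, 3) xyz in the LiDAR frame; lidar2cam: (4, 4); cam_intrinsic: (3, 3).

    Returns an (img_h, img_w) sparse depth map; pixels without a LiDAR return are 0.
    """
    pts = np.concatenate([points_lidar, np.ones((len(points_lidar), 1))], axis=1)  # (N, 4) homogeneous
    pts_cam = (lidar2cam @ pts.T).T[:, :3]                    # points in the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]                    # keep points in front of the camera
    uvz = (cam_intrinsic @ pts_cam.T).T                       # pinhole projection
    u = np.round(uvz[:, 0] / uvz[:, 2]).astype(int)
    v = np.round(uvz[:, 1] / uvz[:, 2]).astype(int)
    z = pts_cam[:, 2]
    inside = (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    depth = np.full((img_h, img_w), np.inf)
    np.minimum.at(depth, (v[inside], u[inside]), z[inside])   # keep the closest point per pixel
    depth[np.isinf(depth)] = 0.0
    return depth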

Evaluate our pretrained models

Download our provided checkpoints in Model Zoo.

cd BEVFormer_DFA3D
bash tools/dist_test.sh path_to_config path_to_checkpoint 1
# an example: 
bash tools/dist_test.sh ./projects/configs/bevformer/bevformer_base_DFA3D_GTDpt.py ./ckpt/bevformer_base_DFA3D_gtdpt.pth 1

Train the models

bash ./tools/dist_train.sh path_to_config 8
# an example
bash ./tools/dist_train.sh ./projects/configs/bevformer/bevformer_base_DFA3D_GTDpt.py 8

✒️ Citation

@inproceedings{li2023dfa3d,
  title={DFA3D: 3D Deformable Attention For 2D-to-3D Feature Lifting},
  author={Hongyang Li and Hao Zhang and Zhaoyang Zeng and Shilong Liu and Feng Li and Tianhe Ren and Lei Zhang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}
