Paper: IEEE Transactions on Intelligent Vehicles | arXiv | ResearchGate
🔥 The keypoint mask data and pretrained models are now available. 🔥
- 2023.08.14 Initialized repository.
- 2024.01.10 Code release.
- 2024.01.21 Pretrained models and mask data release.
- [x] Code release.
- [x] Pretrained models release.
- [x] Mask data release.
We recommend using Anaconda to set up the environment.
```bash
conda create -n focusflow python=3.10
conda activate focusflow
pip install -r requirements.txt
```
The following datasets are required for training and testing:
- FlyingChairs
- FlyingThings3D
- Sintel
- KITTI
Since the official KITTI and Sintel benchmarks do not provide an optical flow evaluation on keypoints, we randomly split the training sets of Sintel and KITTI into training and validation sets. The split details are provided in the Sintel_split.txt and KITTI_split.txt files.
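For reference, a minimal sketch of loading such a split file, assuming it lists one sample identifier per line (an assumption; the files themselves define the actual format):
```python
# Minimal sketch of loading a split file, assuming one sample identifier
# per line; the exact format is defined by Sintel_split.txt and
# KITTI_split.txt themselves.
def read_split(path):
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

sintel_ids = read_split("Sintel_split.txt")
```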
Additionally, we provide the preprocessed keypoint masks for SIFT, ORB, GoodFeature, and SiLK in the data folder. The SIFT, ORB, and GoodFeature keypoints are extracted using the OpenCV library, while the SiLK keypoints are extracted using the SiLK library. The scripts used for generating the keypoint masks are provided in the scripts folder; a rough sketch of the idea follows below.
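For illustration only, here is a hedged sketch of what mask generation with OpenCV could look like; the function name, the mask convention (a binary HxW array with ones at keypoint pixels), and the detector parameters are assumptions, and the SiLK pipeline is omitted since it depends on the SiLK library. The authoritative versions are the scripts in the scripts folder.
```python
# Hedged sketch of keypoint-mask generation with OpenCV. The function
# name, detector parameters, and binary-mask convention are assumptions;
# the authoritative scripts live in the scripts folder.
import cv2
import numpy as np

def keypoint_mask(image_path, detector="orb", max_points=1000):
    """Return a binary HxW mask with ones at detected keypoint locations."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=np.uint8)

    if detector == "sift":
        pts = [kp.pt for kp in cv2.SIFT_create(nfeatures=max_points).detect(gray, None)]
    elif detector == "orb":
        pts = [kp.pt for kp in cv2.ORB_create(nfeatures=max_points).detect(gray, None)]
    elif detector == "goodfeature":
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_points,
                                          qualityLevel=0.01, minDistance=7)
        pts = [] if corners is None else corners.reshape(-1, 2)
    else:
        raise ValueError(f"unknown detector: {detector}")

    # Rasterize keypoint coordinates, clamped to the image bounds.
    for x, y in pts:
        mask[min(int(round(y)), h - 1), min(int(round(x)), w - 1)] = 1
    return mask
```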
By default, the datasets are expected to be stored in the following structure:
```
data
├── FlyingChairs_release
│   ├── data
│   ├── FlyingChairs_train_val.txt
├── FlyingThings3D
│   ├── frames_cleanpass
│   ├── frames_finalpass
│   ├── optical_flow
├── KITTI
│   ├── training
│   │   ├── image_2
│   │   ├── flow_occ
│   ├── val
├── Sintel
│   ├── training
│   │   ├── clean
│   │   ├── final
│   │   ├── flow
│   ├── val
├── mask
│   ├── FlyingChairs_release
│   │   ├── orb
│   │   ├── sift
│   │   ├── goodfeature
│   │   ├── silk
│   ├── FlyingThings3D
│   ├── KITTI
│   ├── Sintel
```
The mask data can be downloaded from OneDrive.
To use a specific model, run the training or evaluation script in the corresponding core/models/{model_name} folder.
For example, to train the FocusRAFT model for ORB points, run the following commands:
```bash
cd core/models/ff-raft
python train.py --yaml configs/experiment/ffraft_chairs_orb.yaml
```
The pretrained models are expected to be stored in the pretrain folder within each model's directory. The pretrained models can be downloaded from OneDrive.
Key-point-based scene understanding is fundamental for autonomous driving applications.
At the same time, optical flow plays an important role in many vision tasks.
However, due to the implicit bias of equal attention on all points, classic data-driven optical flow estimation methods yield less satisfactory performance on key points, limiting their deployment in key-point-critical, safety-relevant scenarios.
To address this issue, we introduce a points-based modeling method that requires the model to learn key-point-related priors explicitly. Building on this modeling method, we present FocusFlow, a framework consisting of 1) a mix loss function that combines a classic photometric loss with our proposed Conditional Point Control Loss (CPCL) for diverse point-wise supervision; and 2) a conditioned controlling model that replaces the conventional feature encoder with our proposed Condition Control Encoder (CCE).
CCE incorporates a Frame Feature Encoder (FFE) that extracts features from frames, a Condition Feature Encoder (CFE) that learns to control the feature-extraction behavior of FFE from input masks containing information about key points, and fusion modules that transfer the controlling information between FFE and CFE.
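As a rough illustration of this conditioned-encoder design, here is a minimal PyTorch sketch; the channel sizes, the number of stages, and the additive 1x1-conv fusion are illustrative assumptions rather than the paper's actual architecture:
```python
# Illustrative PyTorch sketch of the CCE idea: an FFE branch encodes the
# frame, a CFE branch encodes the keypoint mask, and fusion modules let
# the condition control the frame features. Channel counts, stage count,
# and the additive 1x1-conv fusion are assumptions for illustration only.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
    )

class ConditionControlEncoder(nn.Module):
    def __init__(self, feat_chs=(64, 128, 256)):
        super().__init__()
        in_f = (3,) + feat_chs[:-1]   # FFE consumes an RGB frame
        in_c = (1,) + feat_chs[:-1]   # CFE consumes a 1-channel keypoint mask
        self.ffe = nn.ModuleList(conv_block(i, o) for i, o in zip(in_f, feat_chs))
        self.cfe = nn.ModuleList(conv_block(i, o) for i, o in zip(in_c, feat_chs))
        self.fuse = nn.ModuleList(nn.Conv2d(c, c, kernel_size=1) for c in feat_chs)

    def forward(self, frame, mask):
        f, m = frame, mask
        for ffe, cfe, fuse in zip(self.ffe, self.cfe, self.fuse):
            f, m = ffe(f), cfe(m)
            f = f + fuse(m)  # condition information controls the frame features
        return f

# Example: feats = ConditionControlEncoder()(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
```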
Our FocusFlow framework shows outstanding performance with up to
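For intuition, here is a similarly hedged sketch of the mix-loss idea: a classic dense flow loss over all pixels plus a point-wise term evaluated only at keypoint locations. The L1 norm, the normalization, and the alpha weighting are assumptions, not the paper's exact CPCL formulation:
```python
# Hedged sketch of the mix-loss idea: a classic dense flow loss over all
# pixels plus a point-wise term restricted to keypoint locations. The L1
# norm, normalization, and alpha weight are illustrative assumptions.
import torch

def mix_loss(flow_pred, flow_gt, kp_mask, alpha=0.5):
    """flow_pred, flow_gt: (B, 2, H, W); kp_mask: (B, 1, H, W), binary."""
    err = (flow_pred - flow_gt).abs()
    dense = err.mean()                    # classic dense supervision term
    per_px = err.sum(dim=1, keepdim=True) # per-pixel L1 flow error
    point = (per_px * kp_mask).sum() / kp_mask.sum().clamp(min=1)
    return dense + alpha * point          # point-wise CPCL-style term
```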
Figure: Conditional Point Control Loss (CPCL)
Figure: Conditional Architecture
Figure: The FocusFlow Framework
The code is based on the following open-source projects:
- RAFT (BSD 3-Clause License)
- FlowFormer (Apache-2.0 License)
- pytorch-pwc (GPL-3.0 License)
Due to the use of the above open-source projects, our code is under the GPL-3.0 License.
If you find our work useful, please consider citing:
```bibtex
@article{yi2023focusflow,
  title={FocusFlow: Boosting Key-Points Optical Flow Estimation for Autonomous Driving},
  journal={IEEE Transactions on Intelligent Vehicles},
  year={2023},
  publisher={IEEE}
}
```
Feel free to contact me if you have additional questions or are interested in collaboration. Please drop me an email at yizhonghua@zju.edu.cn. =)