Official PyTorch implementation of the paper "Adaptive Short-Temporal Induced Aware Fusion Network for Predicting Attention Regions like a Driver".
ASIAF-Net is a novel driver attention prediction framework that performs self-adaptive short-temporal feature extraction. For object-level driver attention prediction, it adds an extra object saliency estimation branch that identifies the objects a driver should focus on.
This code was developed with Python 3.6 on Ubuntu 16.04. The main Python requirements are:
- pytorch==1.4
- opencv-python
- apex (NVIDIA apex)
- Download and extract the DADA-2000 dataset.
- Download the ground truth for object saliency estimation (password: enmq) and extract it.
- Put all samples of the DADA-2000 dataset into the same folder, organized as follows:
DIR
├─ json_file
│  ├─ train_file
│  ├─ val_file
│  └─ test_file
├─ DADA_dataset
│  ├─ 1
│  │  ├─ 001
│  │  │  ├─ fixation
│  │  │  ├─ images
│  │  │  └─ maps
│  │  ├─ ...
│  │  └─ 027
│  │     ├─ fixation
│  │     ├─ images
│  │     └─ maps
│  ├─ ...
│  └─ 54
│     ├─ 001
│     │  ├─ fixation
│     │  ├─ images
│     │  └─ maps
│     ├─ ...
│     └─ 020
│        ├─ fixation
│        ├─ images
│        └─ maps
└─ OSE
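To sanity-check the layout above, a small indexing sketch can walk `DADA_dataset` and pair each frame with its saliency map and fixation map. This is illustrative only: the function name `list_dada_samples` and the assumption that maps and fixations share the frame's basename with a `.png` extension are not part of the original repo.

```python
# Sketch of an index builder for the DADA-2000 layout shown above.
# Assumptions (not from the repo): maps/fixations are .png files named
# after the corresponding frame in images/.
import os

def list_dada_samples(root):
    """Collect (frame, saliency map, fixation map) path triplets per clip."""
    samples = []
    dada = os.path.join(root, "DADA_dataset")
    for category in sorted(os.listdir(dada)):          # e.g. 1 ... 54
        cat_dir = os.path.join(dada, category)
        for clip in sorted(os.listdir(cat_dir)):       # e.g. 001 ... 027
            clip_dir = os.path.join(cat_dir, clip)
            img_dir = os.path.join(clip_dir, "images")
            if not os.path.isdir(img_dir):
                continue  # skip anything that is not a clip folder
            for frame in sorted(os.listdir(img_dir)):
                name = os.path.splitext(frame)[0]
                samples.append((
                    os.path.join(img_dir, frame),
                    os.path.join(clip_dir, "maps", name + ".png"),
                    os.path.join(clip_dir, "fixation", name + ".png"),
                ))
    return samples
```

A quick check after extraction: `len(list_dada_samples("DIR"))` should equal the total number of frames across all clips.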
March 2022 Update - released the ground truth of object saliency estimation and part of the code.
The full code is coming soon after publication.