
[Paper] [Supplementary video]

This repository contains the official code for Event Transformer: a sparse-aware solution for efficient event data processing.

Event Transformer (EvT) takes advantage of event-data sparsity to increase its efficiency. EvT uses a novel sparse patch-based event-data representation and a compact transformer architecture that naturally processes it. EvT achieves high classification accuracy while requiring minimal computational resources, and runs with minimal latency on both GPU and CPU.
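As a rough illustration of the idea (a minimal sketch, not the repository's code; the frame size, the 2-channel polarity encoding, and the activation test are assumptions), a sparse patch-based representation can be built by binning events into a frame, splitting the frame into patches, and keeping only the patches that actually received events:

```python
import numpy as np

def events_to_sparse_patches(events, height=128, width=128, patch_size=8):
    """events: iterable of (x, y, t, polarity); returns tokens for the
    activated patches only, together with their (row, col) patch positions.
    Illustrative sketch only, not EvT's exact pipeline."""
    # Accumulate the event stream into a 2-channel frame (one per polarity).
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, _, p in events:
        frame[int(p), int(y), int(x)] += 1.0

    # Split the frame into non-overlapping patch_size x patch_size patches.
    ph, pw = height // patch_size, width // patch_size
    patches = frame.reshape(2, ph, patch_size, pw, patch_size)
    patches = patches.transpose(1, 3, 0, 2, 4).reshape(ph, pw, -1)

    # Keep only the patches that received events: the transformer then
    # attends over this (usually small) set of tokens, which is where
    # the sparsity savings come from.
    active = patches.sum(axis=-1) > 0
    tokens = patches[active]          # (num_active, 2 * patch_size**2)
    positions = np.argwhere(active)   # (num_active, 2)
    return tokens, positions
```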

Citation:

```bibtex
@InProceedings{Sabater_2022_CVPR,
    author    = {Sabater, Alberto and Montesano, Luis and Murillo, Ana C.},
    title     = {Event Transformer. A sparse-aware solution for efficient event data processing},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
}
```

REPOSITORY REQUIREMENTS

The present work has been developed and tested with Python 3.7.10, PyTorch 1.9.0, and Ubuntu 18.04. To reproduce our results we suggest creating a Python environment as follows:

```bash
conda create --name evt python=3.7.10
conda activate evt
pip install -r requirements.txt
```
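After installation, a quick sanity check that the environment matches the tested versions:

```bash
python -c "import torch; print(torch.__version__)"   # should report 1.9.0 (possibly with a CUDA suffix)
```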

PRETRAINED MODELS

The pretrained models must be located under a `./pretrained_models` directory and can be downloaded from Drive (DVS128 10 classes, DVS128 11 classes, SL-Animals 3-Sets, SL-Animals 4-Sets, ASL-DVS).
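For example:

```bash
mkdir -p pretrained_models   # then place the downloaded checkpoints inside
```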

DATA DOWNLOAD AND PRE-PROCESSING

The datasets involved in the present work (DVS128, SL-Animals, and ASL-DVS) must be downloaded from their original sources and stored under a `./datasets` path.

To speed up training, we pre-process the source data into intermediate sparse frame representations, which are later loaded by our data generator. This transformation can be performed with the scripts located under `./dataset_scripts`. In the case of DVS128, dvs128_split_dataset.py must be executed first and dvs128.py afterwards, as shown below.
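For DVS128, for example, the two steps would be run as follows (assuming the scripts are invoked from the repository root; adjust the paths otherwise):

```bash
python dataset_scripts/dvs128_split_dataset.py   # 1) split the raw recordings
python dataset_scripts/dvs128.py                 # 2) build the sparse frame representations
```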

EvT EVALUATION

The evaluation of our pretrained models can be performed by executing `python evaluation_stats.py`. At the beginning of the file you can select the pretrained model to evaluate and the device (CPU or GPU) to evaluate it on. Evaluation results include FLOPs, parameters, average number of activated patches, average processing time, and validation accuracy.
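For reference, the average processing time on a given device can be measured with a generic PyTorch loop like the following (a sketch, not the repository's evaluation code; `model` and `sample` stand for any loaded EvT checkpoint and input batch):

```python
import time
import torch

@torch.no_grad()
def measure_latency(model, sample, device="cpu", runs=100, warmup=10):
    """Average forward-pass time of `model` on `sample` for one device."""
    model = model.to(device).eval()
    sample = sample.to(device)
    for _ in range(warmup):          # warm-up: cuDNN autotuning, caches
        model(sample)
    if device == "cuda":
        torch.cuda.synchronize()     # flush queued GPU work before timing
    start = time.perf_counter()
    for _ in range(runs):
        model(sample)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs
```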

EvT TRAINING

The training of a new model can be performed by executing `python train.py`. At the beginning of the file you can select the pretrained model whose training hyper-parameters to copy. Note that, since the involved datasets do not contain many training samples and the training involves data augmentation, final results might not exactly match the ones reported in the article. If so, please perform several training executions, as sketched below.
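Several runs can be launched sequentially, for example:

```bash
for i in 1 2 3; do python train.py; done   # repeat training to average out run-to-run variance
```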