
# MT-DETR

Official resources for "Towards Few-Annotation Learning for Object Detection: Are Transformer-based Models More Efficient?", IEEE/CVF WACV, 2023.

## Citation

If you find this repository useful for your own work, please cite our paper:

Q. Bouniot, A. Loesch, R. Audigier, A. Habrard, "Towards Few-Annotation Learning for Object Detection: Are Transformer-based Models More Efficient?", in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Jan. 2023.

```bibtex
@InProceedings{bouniot2023wacv,
  TITLE = {{Towards Few-Annotation Learning for Object Detection: Are Transformer-based Models More Efficient?}},
  AUTHOR = {Bouniot, Quentin and Loesch, Angelique and Audigier, Romaric and Habrard, Amaury},
  BOOKTITLE = {{IEEE/CVF WACV}},
  YEAR = {2023},
  MONTH = Jan,
}
```
