On Pretraining Data Diversity for Self-Supervised Learning

Code and models will be released upon acceptance.
Hasan Abed Al Kader Hammoud1*   Tuhin Das2*   Fabio Pizzati2*   Philip Torr2   Adel Bibi2   Bernard Ghanem1
1 KAUST, 2 University of Oxford

Teaser figure


Abstract

We explore the impact of pretraining data diversity, measured by the number of unique samples, on the performance of self-supervised learning (SSL) under a fixed computational budget. Our findings consistently show that increasing pretraining data diversity enhances SSL performance, but only when the distribution distance to the downstream data is small. Notably, even when diversity is scaled up dramatically, for example through web crawling or diffusion-generated data, distribution shift remains a challenge. Our experiments are comprehensive, covering seven SSL methods on large-scale datasets such as ImageNet and YFCC100M, and total over 200 GPU days of compute.
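The fixed-budget protocol described above can be summarized in a short sketch. The snippet below is illustrative only and is not the released code; the helper name `fixed_budget_loader` and its defaults are assumptions. It controls diversity by subsampling a pool of unique examples, and fixes compute by capping the total number of samples seen during pretraining.

```python
import torch
from torch.utils.data import DataLoader, Dataset, Subset

def fixed_budget_loader(dataset: Dataset, num_unique: int, budget_samples: int,
                        batch_size: int = 256, seed: int = 0):
    """Yield batches from a random subset of `num_unique` samples until
    `budget_samples` total samples have been seen, so that compute stays
    fixed while data diversity varies. Illustrative sketch only."""
    assert num_unique >= batch_size, "subset must fill at least one batch"
    g = torch.Generator().manual_seed(seed)
    # Diversity knob: keep only `num_unique` distinct examples.
    indices = torch.randperm(len(dataset), generator=g)[:num_unique].tolist()
    loader = DataLoader(Subset(dataset, indices), batch_size=batch_size,
                        shuffle=True, drop_last=True)
    seen = 0
    # Budget knob: cycle through the subset until the sample budget is spent.
    while seen < budget_samples:
        for batch in loader:
            yield batch
            seen += batch_size
            if seen >= budget_samples:
                return
```

Under this scheme, a run that uses 10% of the unique data revisits each example roughly ten times more often than a full-diversity run, so the two runs consume the same compute and differ only in data diversity.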

📖 Citation

If you find this work useful in your research, please consider citing:

@misc{hammoud2024pretraining,
      title={On Pretraining Data Diversity for Self-Supervised Learning}, 
      author={Hasan Abed Al Kader Hammoud and Tuhin Das and Fabio Pizzati and Philip Torr and Adel Bibi and Bernard Ghanem},
      year={2024},
      eprint={2403.13808},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
