BYOL

PyTorch implementation of the BYOL paper: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning

This is currently a work in progress. The code is a modified version of an existing SimSiam implementation.
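Since the repository's code is not shown here, the following is a minimal sketch of the core BYOL idea that distinguishes it from SimSiam: an online network with a predictor head, trained against a slowly updated exponential-moving-average (EMA) target network. Class names, dimensions, and the momentum value are illustrative assumptions, not this repository's API:

```python
import copy
import torch
import torch.nn as nn


class BYOL(nn.Module):
    """Sketch of BYOL: online network + predictor, EMA target network."""

    def __init__(self, backbone: nn.Module, feat_dim: int = 32,
                 proj_dim: int = 16, momentum: float = 0.99):
        super().__init__()
        # Online branch: backbone followed by a projection head.
        self.online = nn.Sequential(backbone, nn.Linear(feat_dim, proj_dim))
        # Predictor exists only on the online branch (asymmetry is essential).
        self.predictor = nn.Linear(proj_dim, proj_dim)
        # Target branch: a frozen copy, updated only via EMA.
        self.target = copy.deepcopy(self.online)
        for p in self.target.parameters():
            p.requires_grad = False
        self.momentum = momentum

    @torch.no_grad()
    def update_target(self):
        # EMA update: target <- m * target + (1 - m) * online
        for po, pt in zip(self.online.parameters(), self.target.parameters()):
            pt.mul_(self.momentum).add_((1 - self.momentum) * po)

    def loss(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        # Negative cosine similarity between the online prediction and
        # the stop-gradient target projection (one direction shown;
        # the paper symmetrizes over both augmented views).
        p1 = nn.functional.normalize(self.predictor(self.online(x1)), dim=-1)
        with torch.no_grad():
            z2 = nn.functional.normalize(self.target(x2), dim=-1)
        return 2 - 2 * (p1 * z2).sum(dim=-1).mean()
```

In training, `update_target` would be called after each optimizer step, so the target network trails the online network and provides stable regression targets.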

  • Training takes around 1 minute per epoch on a V100 GPU
  • GPU memory usage is around 9 GB

Todo:

  • warm up the learning rate from 0
  • report results on CIFAR-10
  • create a PR to add the model to lightly
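The first item, warming up the learning rate from 0, is commonly implemented as a linear ramp followed by cosine decay. A sketch under that assumption (the function and its defaults are illustrative, not the repository's code):

```python
import math


def warmup_cosine_lr(step: int, warmup_steps: int,
                     total_steps: int, base_lr: float) -> float:
    """Linear warmup from 0 to base_lr, then cosine decay back to 0."""
    if step < warmup_steps:
        # Linear ramp: lr grows from 0 to base_lr over warmup_steps.
        return base_lr * step / warmup_steps
    # Cosine decay over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1 + math.cos(math.pi * progress))
```

With PyTorch this schedule can be plugged into `torch.optim.lr_scheduler.LambdaLR` by dividing out `base_lr`.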

Installation

pip install -r requirements.txt

Dependencies

  • PyTorch
  • PyTorch Lightning
  • Torchvision
  • lightly

Benchmarks

We benchmark the BYOL model on the CIFAR-10 dataset following the KNN evaluation protocol.
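The KNN evaluation protocol fits a non-parametric classifier on frozen features, avoiding any fine-tuning. A sketch of the commonly used weighted-KNN variant on L2-normalized features (the function name and defaults such as `k=200` and temperature `t=0.1` are assumptions, not this repository's code):

```python
import torch


def knn_accuracy(train_feats, train_labels, test_feats, test_labels,
                 k: int = 200, t: float = 0.1) -> float:
    """Weighted KNN classification accuracy on L2-normalized features."""
    train_feats = torch.nn.functional.normalize(train_feats, dim=1)
    test_feats = torch.nn.functional.normalize(test_feats, dim=1)
    sims = test_feats @ train_feats.t()          # cosine similarities
    weights, idx = sims.topk(k, dim=1)           # k nearest neighbors
    weights = (weights / t).exp()                # temperature-scaled weights
    neighbor_labels = train_labels[idx]          # (n_test, k)
    num_classes = int(train_labels.max()) + 1
    one_hot = torch.nn.functional.one_hot(neighbor_labels, num_classes).float()
    # Vote: sum neighbor weights per class, predict the heaviest class.
    scores = (one_hot * weights.unsqueeze(-1)).sum(dim=1)
    preds = scores.argmax(dim=1)
    return (preds == test_labels).float().mean().item()
```

Features would come from the frozen backbone applied to the CIFAR-10 train and test splits.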

| Epochs | Batch Size | Warmup | Test Accuracy | Peak GPU Usage |
|--------|------------|--------|---------------|----------------|
| 200    | 512        |        | 0.85          | 9.3 GB         |
| 200    | 512        |        | 0.86          | 9.3 GB         |
| 800    | 512        |        | 0.91          | 9.3 GB         |
(Figures: accuracy and loss curves during training.)

Paper

Bootstrap your own latent: A new approach to self-supervised Learning
