
Commit a5ba52b
Removed LARC mentions in doc as Barlow Twins uses its own version.
OlivierDehaene committed Apr 29, 2021
1 parent 1c113fe commit a5ba52b
Showing 1 changed file with 0 additions and 19 deletions.
docs/source/ssl_approaches/barlow_twins.rst
@@ -44,25 +44,6 @@ To use SyncBN during training, one needs to set the following parameters in conf
       # global sync is done.
       GROUP_SIZE: 8
 
-Using LARC for training
---------------------------------------------
-
-Barlow Twins training uses LARC from `NVIDIA's Apex LARC <https://github.com/NVIDIA/apex/blob/master/apex/parallel/LARC.py>`_. To use LARC, users need to set config option
-:code:`OPTIMIZER.use_larc=True`. VISSL exposes LARC parameters that users can tune. Full list of LARC parameters exposed by VISSL:
-
-.. code-block:: yaml
-
-    OPTIMIZER:
-      name: "sgd"
-      use_larc: False # supported for SGD only for now
-      larc_config:
-        clip: False
-        eps: 1e-08
-        trust_coefficient: 0.001
-
-.. note::
-
-    LARC is currently supported for SGD optimizer only.
 
 Vary the training loss settings
 ------------------------------------------------
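
As context for the removal: below is a minimal sketch, not VISSL's actual wiring, of how the :code:`larc_config` options in the removed yaml map onto the LARC wrapper from NVIDIA's Apex (the implementation the removed text linked to). The model and learning rate are placeholder assumptions for illustration; in VISSL the optimizer is built from the yaml config rather than by hand.

.. code-block:: python

    import torch
    from apex.parallel.LARC import LARC  # NVIDIA Apex's LARC implementation

    model = torch.nn.Linear(128, 128)  # placeholder model, for illustration only

    # Per the removed note, LARC is supported for the SGD optimizer only.
    base_optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    # clip, eps and trust_coefficient correspond one-to-one to larc_config above.
    optimizer = LARC(base_optimizer, trust_coefficient=0.001, clip=False, eps=1e-08)

The wrapped optimizer is then used in place of the base SGD optimizer (:code:`optimizer.step()`, :code:`optimizer.zero_grad()`), with LARC rescaling each layer's learning rate according to the trust coefficient.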

