dfa-paper

This repository contains the code to reproduce the experiments in the paper "Learning Dynamics of Direct Feedback Alignment" by Maria Refinetti, Stéphane d'Ascoli, Ruben Ohana and Sebastian Goldt.

It is split into two folders: "shallow" contains experiments on shallow networks in the online setup, while "deep" contains experiments on deep networks in the general setup. Each folder has its own README file. For readers unfamiliar with the algorithm studied in the paper, a minimal sketch of a DFA update is given below.
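The following is a minimal, self-contained sketch of Direct Feedback Alignment on a toy regression task, assuming a single-hidden-layer network with a tanh nonlinearity. All shapes, the learning rate, and the random feedback matrix `B` are illustrative choices and are not taken from this repository's experiments.

```python
# Minimal DFA sketch: the output error reaches the hidden layer through a
# fixed random matrix B instead of the transpose W2.T used by backprop.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 10, 50, 1

W1 = rng.standard_normal((n_hidden, n_in)) / np.sqrt(n_in)
W2 = rng.standard_normal((n_out, n_hidden)) / np.sqrt(n_hidden)
B = rng.standard_normal((n_hidden, n_out))  # fixed random feedback matrix

lr = 0.05
for step in range(1000):
    x = rng.standard_normal(n_in)
    target = np.sin(x).sum(keepdims=True)   # arbitrary teacher signal

    h = np.tanh(W1 @ x)                     # hidden activity
    y = W2 @ h                              # network output
    e = y - target                          # output error

    # DFA update: project the error back with B, not with W2.T.
    delta_h = (B @ e) * (1.0 - h**2)        # tanh derivative

    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
```

Because `B` is fixed and random, the forward weights must align with the feedback pathway for the updates to be useful; this alignment process is exactly the learning dynamics the paper analyses.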
