Principled Training of Neural Networks with Direct Feedback Alignment

This is the code for reproducing the results of our paper (preprint):

Principled Training of Neural Networks with Direct Feedback Alignment

Julien Launay, Iacopo Poli, Florent Krzakala


The backpropagation algorithm has long been the canonical training method for neural networks. Modern paradigms are implicitly optimized for it, and numerous guidelines exist to ensure its proper use. Recently, synthetic gradients methods -- where the error gradient is only roughly approximated -- have garnered interest. These methods not only better portray how biological brains are learning, but also open new computational possibilities, such as updating layers asynchronously. Even so, they have failed to scale past simple tasks like MNIST or CIFAR-10. This is in part due to a lack of standards, leading to ill-suited models and practices forbidding such methods from performing to the best of their abilities. In this work, we focus on direct feedback alignment and present a set of best practices justified by observations of the alignment angles. We characterize a bottleneck effect that prevents alignment in narrow layers, and hypothesize it may explain why feedback alignment methods have yet to scale to large convolutional networks.
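The alignment angles referred to above measure how close the DFA update direction is to the true backpropagation gradient; angles below 90° mean the random-feedback update still has a descending component along the loss. A minimal sketch of such a measurement (illustrative only; the function and variable names are ours, not the repository's):

```python
import numpy as np

def alignment_angle(g_bp, g_dfa):
    """Angle in degrees between the backprop gradient and the DFA update.

    Below 90 degrees, the DFA update has a positive component along
    the true gradient, i.e. it still descends the loss.
    """
    cos = np.dot(g_bp, g_dfa) / (np.linalg.norm(g_bp) * np.linalg.norm(g_dfa))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# In high dimension, a random feedback direction starts out nearly
# orthogonal to the gradient (angle close to 90 degrees); alignment
# below 90 degrees emerges during training.
rng = np.random.default_rng(0)
g = rng.standard_normal(10_000)
b = rng.standard_normal(10_000)
print(round(alignment_angle(g, g)))  # 0: identical directions
print(round(alignment_angle(g, b)))  # close to 90: random directions
```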

Reproducing the results

Running the provided code requires a CUDA-enabled GPU with around 1 GB of memory.

When actions are required to make the code work (such as adding paths to datasets), they are marked with a comment starting with # TODO:.

To reproduce the results:

  • one file contains the code for the tables for FC networks in Section 3 and for CNNs in Section 4;
  • another contains the code for the bottlenecking experiments of Section 4; two further files are contingency code, used after a file containing angle-measurement values was corrupted.

Furthermore, the mltools folder is a custom-made library that simplifies common ML tasks with or without DFA. dfatools contains our implementation of DFA; a more streamlined and complete version will be released eventually.
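For readers unfamiliar with the method: DFA replaces the backpropagated error signal at each hidden layer with a fixed random projection of the output error. The sketch below is a minimal NumPy illustration of this idea, not the dfatools implementation; all names and hyperparameters are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem
X = rng.standard_normal((64, 10))
Y = rng.standard_normal((64, 2))

# Two-layer network (10 -> 32 -> 2) with a tanh hidden layer
W1 = 0.1 * rng.standard_normal((10, 32))
W2 = 0.1 * rng.standard_normal((32, 2))
# DFA: fixed random feedback matrix, drawn once and never trained,
# projecting the output error straight back to the hidden layer
B1 = rng.standard_normal((2, 32))

lr, losses = 0.1, []
for _ in range(300):
    # Standard forward pass
    h1 = np.tanh(X @ W1)
    y_hat = h1 @ W2

    e = (y_hat - Y) / len(X)          # output error (squared-loss gradient)
    losses.append(float(np.mean((y_hat - Y) ** 2)))

    # DFA hidden update: the random projection B1 replaces the W2.T
    # of backpropagation; (1 - h1**2) is the tanh derivative
    delta1 = (e @ B1) * (1.0 - h1 ** 2)

    W2 -= lr * h1.T @ e               # output layer: exact gradient step
    W1 -= lr * X.T @ delta1           # hidden layer: DFA step
```

Backpropagation would use `e @ W2.T` in place of `e @ B1`; training still succeeds because `W2` gradually aligns with `B1.T` (the feedback-alignment effect).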


If you found this implementation useful in your research, please consider citing:

    @article{launay2019principled,
        title={Principled Training of Neural Networks with Direct Feedback Alignment},
        author={Launay, Julien and Poli, Iacopo and Krzakala, Florent},
        year={2019}
    }

Code author: @slippylolo (Julien Launay - julien[at]

