A continual learning survey: Defying forgetting in classification tasks

Source code for the Continual Learning survey paper:

@article{delange2019continual,
  title={A continual learning survey: Defying forgetting in classification tasks},
  author={De Lange, Matthias and Aljundi, Rahaf and Masana, Marc and Parisot, Sarah and Jia, Xu and Leonardis, Ale{\v{s}} and Slabaugh, Gregory and Tuytelaars, Tinne},
  journal={arXiv preprint arXiv:1909.08383},
  year={2019}
}

The code provides a unifying framework for 11 state-of-the-art (SOTA) methods and 4 baselines, implemented in PyTorch:

  • Methods: SI, EWC, MAS, mean/mode-IMM, LwF, EBLL, PackNet, HAT, GEM, iCaRL
  • Baselines (a short sketch contrasting the two replay-memory schemes follows this list)
    • Joint: learn from all task data at once with a single head (multi-task learning baseline).
    • Finetuning: standard SGD on the task sequence, without any countermeasure against forgetting.
    • Finetuning with Full Memory replay: allocate replay memory dynamically to incoming tasks.
    • Finetuning with Partial Memory replay: divide replay memory a priori over all tasks.
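
As a rough illustration, here is a minimal sketch (hypothetical helper names, not this codebase's API) of how the two replay baselines budget a fixed exemplar memory:

    # Hypothetical illustration only -- these helpers are NOT part of this
    # codebase; they merely contrast the two memory-allocation schemes.

    def partial_memory_quota(mem_size, n_tasks_total):
        # Partial Memory: the budget is divided a priori, equally over ALL
        # tasks, so each task's share is fixed before training starts.
        return [mem_size // n_tasks_total] * n_tasks_total

    def full_memory_quota(mem_size, n_tasks_seen):
        # Full Memory: the whole budget is rebalanced over the tasks seen
        # so far whenever a new task arrives, so early tasks temporarily
        # keep more exemplars.
        return [mem_size // n_tasks_seen] * n_tasks_seen

    print(partial_memory_quota(1000, 10))  # [100, 100, ..., 100] from the start
    print(full_memory_quota(1000, 2))      # [500, 500] after the second task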

This source code is released under an Attribution-NonCommercial 4.0 International license; see the LICENSE file for details.


Reproducibility: results from the paper can be obtained by running src/main_'dataset'.sh. A full pipeline example is included in src/.

Pipeline: Constructing a custom pipeline typically requires the following steps.

  1. Project Setup
    1. For all requirements, see requirements.txt. The main packages can be installed as follows:
      conda create --name <ENV-NAME> python=3.7
      conda activate <ENV-NAME>
      # Main packages
      conda install -c conda-forge matplotlib tqdm
      conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
      # For GEM QP
      conda install -c omnia quadprog
      # For PackNet: torchnet 
      pip install git+
    2. Set the paths in 'config.init' (or leave the defaults):
      1. '{tr,test}_results_root_path': where to save training/testing results.
      2. 'models_root_path': where to store initial models (to ensure all methods start from the same initial model).
      3. 'ds_root_path': root path of your datasets.
    3. Prepare your dataset: see the preparation scripts in src/data/"dataset" (e.g. src/data/…).
  2. Train any of the 11 SOTA methods or 4 baselines.
    1. Regularization-based/replay methods: the first-task model is trained once and dumped, using Synaptic Intelligence (SI), as SI acquires its importance weights during training; all other methods then start from this same initial model.
    2. Baselines/parameter-isolation methods: start the training sequence from scratch.
  3. Evaluate performance: the test sequence for each task is saved in dictionary format under the test_results_root_path defined in config.init (see the loading sketch after this list).
  4. Plot the evaluation results using one of the configuration files in utilities/plot_configs.
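
For step 3, a minimal loading sketch (the file name below is a made-up placeholder, and the exact dictionary keys depend on the method/dataset you trained; inspect your own test_results_root_path first):

    # Assumption: result files are torch-serialized dictionaries; the path
    # below is a hypothetical placeholder, not a path guaranteed by the repo.
    import torch

    results_path = "<test_results_root_path>/<method>/<dataset>/test_results.pth"
    results = torch.load(results_path)  # the saved evaluation dictionary

    for key, value in results.items():
        print(key, "->", value)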

Implement Your Method

  1. Find the class "YourMethod" in methods/ and implement the framework phases (documented in the code).
  2. Implement your task-based training script in methods/"YourMethodDir". The class "YourMethod" will call this code for training/evaluation/processing of a single task; a minimal skeleton follows below.
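
For orientation, here is a minimal skeleton of such a method class. The method and argument names are illustrative assumptions, not the framework's actual hooks; the real phases and signatures are documented in the code under src/methods/.

    # Illustrative skeleton only: names and signatures here are made up.
    class YourMethod:
        def __init__(self, hyperparams):
            self.hyperparams = hyperparams  # e.g. regularization strength

        def train_task(self, model, train_loader, prev_task_model=None):
            # Train `model` on a single task; the continual-learning logic
            # (e.g. penalties w.r.t. `prev_task_model`) goes here.
            raise NotImplementedError

        def eval_task(self, model, test_loader):
            # Evaluate `model` on one task's test split and return metrics.
            raise NotImplementedError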

Project structure

  • src/data: datasets and automated preparation scripts for Tiny ImageNet and iNaturalist.
  • src/framework: the novel task-incremental continual learning framework; it starts the training pipeline, and performs evaluation when the --test argument is specified.
  • src/methods: source code for all methods and their wrapper.
  • src/models: all model preprocessing.
  • src/utilities: utils used across all modules and plotting.
  • Config: config.init defines the root paths for datasets, initial models, and train/test results (see Project Setup above).



  • If you run into trouble, please open a GitHub issue.
  • Have you defined your method in the framework and want to share it with the community? Send a pull request!