Continual Learning Literature

This repository is maintained by Massimo Caccia and Timothée Lesort. Don't hesitate to send us an email to collaborate or to fix an entry ({massimo.p.caccia, t.lesort} at gmail.com). The automation script of this repo is adapted from Automatic_Awesome_Bibliography.

To contribute to the repository, please follow the process described here

You can use our bib.tex directly in Overleaf via this link

Outline

Classics

Empirical Study

Surveys

Influentials

New Settings or Metrics

General Continual Learning Methods (SL and RL)

Task-Agnostic Continual Learning

Regularization Methods

Distillation Methods

  • Dark Experience for General Continual Learning: a Strong, Simple Baseline, (NeurIPS 2020) by Buzzega, Pietro, Boschini, Matteo, Porrello, Angelo, Abati, Davide and Calderara, Simone [bib]
  • Online Continual Learning under Extreme Memory Constraints, (ECCV 2020) by Fini, Enrico, Lathuilière, Stéphane, Sangineto, Enver, Nabi, Moin and Ricci, Elisa [bib] Introduces Memory-Constrained Online Continual Learning, a setting where no information can be transferred between tasks, and proposes a distillation-based solution (Batch-level Distillation)
  • PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning, (ECCV 2020) by Douillard, Arthur, Cord, Matthieu, Ollion, Charles, Robert, Thomas and Valle, Eduardo [bib] A novel knowledge distillation that efficiently trades off rigidity and plasticity to learn a large number of small tasks
  • Overcoming Catastrophic Forgetting With Unlabeled Data in the Wild, (ICCV 2019) by Lee, Kibok, Lee, Kimin, Shin, Jinwoo and Lee, Honglak [bib] Introduces a global distillation loss and balanced finetuning, leveraging unlabeled data in the open-world setting (Single-head setting)
  • Large scale incremental learning, (CVPR 2019) by Wu, Yue, Chen, Yinpeng, Wang, Lijuan, Ye, Yuancheng, Liu, Zicheng, Guo, Yandong and Fu, Yun [bib] Introduces bias parameters in the last fully connected layer to resolve the data imbalance issue (Single-head setting); see the bias-correction sketch after this list
  • Continual Reinforcement Learning deployed in Real-life using Policy Distillation and Sim2Real Transfer, (ICML Workshop 2019) by Kalifou, René Traoré, Caselles-Dupré, Hugo, Lesort, Timothée, Sun, Te, Diaz-Rodriguez, Natalia and Filliat, David [bib]
  • Lifelong learning via progressive distillation and retrospection, (ECCV 2018) by Hou, Saihui, Pan, Xinyu, Change Loy, Chen, Wang, Zilei and Lin, Dahua [bib] Introduces an expert on the current task into the knowledge distillation method (Multi-head setting)
  • End-to-end incremental learning, (ECCV 2018) by Castro, Francisco M, Marin-Jimenez, Manuel J, Guil, Nicolas, Schmid, Cordelia and Alahari, Karteek [bib] Finetunes the last fully connected layer with a balanced dataset to resolve the data imbalance issue (Single-head setting)
  • Learning without forgetting, (2017) by Li, Zhizhong and Hoiem, Derek [bib] Functional regularization through distillation, keeping the output of the updated network on the new data close to the output of the old network on the same data; see the loss sketch after this list
  • iCaRL: Incremental classifier and representation learning, (CVPR 2017) by Rebuffi, Sylvestre-Alvise, Kolesnikov, Alexander, Sperl, Georg and Lampert, Christoph H [bib] Binary cross-entropy loss for representation learning & exemplar memory (or coreset) for replay (Single-head setting)
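The bias-correction idea from "Large scale incremental learning" (Wu et al., CVPR 2019) fits in a few lines. The sketch below is a hypothetical minimal version, not the paper's reference code: it assumes the two scalar parameters are trained on a small balanced validation set with the backbone frozen, and `new_class_mask` is an assumed boolean tensor marking which logits belong to the newly added classes.

```python
import torch
import torch.nn as nn

class BiasCorrection(nn.Module):
    """Two-parameter affine correction applied only to new-class logits,
    in the spirit of Wu et al. (CVPR 2019). alpha and beta are meant to be
    learned on a small balanced validation set while the backbone is frozen."""

    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, logits: torch.Tensor, new_class_mask: torch.Tensor) -> torch.Tensor:
        # Scale and shift only the logits of new classes; old-class logits pass through.
        return torch.where(new_class_mask, self.alpha * logits + self.beta, logits)
```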
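The "Learning without forgetting" entry describes the distillation loss that underlies most methods in this section. Here is a minimal PyTorch sketch of that idea, assuming `new_logits` and `old_logits` are the current and frozen old networks' outputs on the same batch of new-task data; the temperature value and the `T**2` scaling are common conventions from the knowledge-distillation literature, not prescriptions from the paper.

```python
import torch
import torch.nn.functional as F

def lwf_distillation_loss(new_logits: torch.Tensor,
                          old_logits: torch.Tensor,
                          T: float = 2.0) -> torch.Tensor:
    """Functional regularization in the style of Learning without Forgetting:
    keep the updated network's outputs on new data close to the frozen old
    network's outputs on the same data."""
    old_probs = F.softmax(old_logits / T, dim=1)          # soft targets from the old model
    new_log_probs = F.log_softmax(new_logits / T, dim=1)  # current model's log-probabilities
    # Cross-entropy against the soft targets; the T**2 factor keeps gradient
    # magnitudes comparable across temperatures (Hinton et al. convention).
    return -(old_probs * new_log_probs).sum(dim=1).mean() * (T ** 2)
```

In practice this term is added, with a weighting coefficient, to the ordinary cross-entropy loss on the new task's labels.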

Rehearsal Methods

Generative Replay Methods

Dynamic Architectures or Routing Methods

Hybrid Methods

Continual Few-Shot Learning

Meta-Continual Learning

Lifelong Reinforcement Learning

Task-Agnostic Lifelong Reinforcement Learning

Continual Generative Modeling

Miscellaneous

Applications

Thesis

Libraries

  • Sequoia - Towards a Systematic Organization of Continual Learning Research, (2021) by Fabrice Normandin, Florian Golemo, Oleksiy Ostapenko, Matthew Riemer, Pau Rodriguez, Julio Hurtado, Khimya Khetarpal, Timothée Lesort, Laurent Charlin, Irina Rish and Massimo Caccia [bib] A library that unifies Continual Supervised and Continual Reinforcement Learning research
  • Avalanche: an End-to-End Library for Continual Learning, (2021) by Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Gabriele Graffieti and Antonio Carta [bib] A library for Continual Supervised Learning
  • Continuous Coordination As a Realistic Scenario for Lifelong Learning, (2021) by Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron Courville and Sarath Chandar [bib] A multi-agent lifelong learning testbed that supports both zero-shot and few-shot settings
  • River: machine learning for streaming data in Python, (2020) by Jacob Montiel, Max Halford, Saulo Martiello Mastelini, Geoffrey Bolmier, Raphael Sourty, Robin Vaysse, Adil Zouitine, Heitor Murilo Gomes, Jesse Read, Talel Abdessalem and Albert Bifet [bib] A library for online learning
  • Continuum, Data Loaders for Continual Learning, (2020) by Douillard, Arthur and Lesort, Timothée [bib] A library proposing continual learning scenarios and metrics; see the usage sketch after this list
  • Framework for Analysis of Class-Incremental Learning, (arXiv 2020) by Masana, Marc, Liu, Xialei, Twardowski, Bartlomiej, Menta, Mikel, Bagdanov, Andrew D and van de Weijer, Joost [bib] A library for Continual Class-Incremental Learning
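As an illustration of what these libraries provide, here is a short sketch of building a class-incremental scenario with Continuum, based on its documented quickstart at the time of writing; exact argument names may differ between versions, so treat this as illustrative rather than authoritative.

```python
from torch.utils.data import DataLoader
from continuum import ClassIncremental
from continuum.datasets import MNIST

# Split MNIST into 5 tasks of 2 classes each.
dataset = MNIST("data/", download=True, train=True)
scenario = ClassIncremental(dataset, increment=2)

for task_id, taskset in enumerate(scenario):
    loader = DataLoader(taskset, batch_size=64, shuffle=True)
    for x, y, t in loader:  # Continuum tasksets yield (input, label, task id) triplets
        pass  # train your model on the current task here
```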

Workshops
