SHARP: Sparsity and Hidden Activation RePlay for Neuro-Inspired Continual Learning

Abstract: Deep neural networks (DNNs) struggle in changing environments because they depend on static datasets and stable conditions. Continual learning (CL) aims to overcome this limitation by enabling DNNs to learn and adapt continuously, as biological learning systems do. A central method in CL is replay, which trains DNNs on a mixture of new and previously encountered data. Although they share common objectives, biological and artificial replay differ significantly. This paper focuses on two main distinctions: first, biological replay processes neural patterns rather than raw sensory inputs; second, it prioritizes reinforcing recent information over revisiting all past experiences uniformly. To address these differences, we introduce the SHARP architecture, which aligns more closely with biological principles. SHARP combines sparse dynamic connectivity with a novel activation replay method that selectively focuses on recent classes, eliminating the need to revisit all past datasets. Additionally, SHARP continually updates all network layers, in contrast to other activation replay methods that, after a pretraining phase, freeze the layers not subjected to replay. Our experiments across five datasets demonstrate that SHARP outperforms leading replay methods in class-incremental learning.
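The abstract's core mechanism, replaying buffered hidden activations of recent classes while still training every layer on new data, can be illustrated with a minimal PyTorch sketch. Everything below (`SplitNet`, `ActivationBuffer`, `train_step`, the layer sizes) is a hypothetical illustration, not this repository's implementation; the sparse dynamic connectivity component is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of activation replay (not the authors' code).
# The network is split at a "replay layer": hidden activations of
# recent classes are buffered there and later replayed through the
# upper layers only, while the lower layers keep updating on new
# raw inputs (no frozen pretrained backbone).

class SplitNet(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=256, num_classes=10):
        super().__init__()
        self.lower = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.upper = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        return self.upper(self.lower(x))

class ActivationBuffer:
    """Stores hidden activations, keeping only the most recent ones."""
    def __init__(self, max_items=512):
        self.acts, self.labels = [], []
        self.max_items = max_items

    def add(self, acts, labels):
        self.acts.append(acts)
        self.labels.append(labels)
        # Drop the oldest entries first: recent classes are prioritized,
        # so old raw datasets never need to be revisited.
        while sum(a.size(0) for a in self.acts) > self.max_items:
            self.acts.pop(0)
            self.labels.pop(0)

    def sample(self):
        return torch.cat(self.acts), torch.cat(self.labels)

    def __bool__(self):
        return len(self.acts) > 0

def train_step(model, opt, new_x, new_y, act_buffer):
    """One step mixing new raw inputs with replayed hidden activations."""
    opt.zero_grad()
    # Full forward pass on new data: gradients reach ALL layers,
    # so the lower layers continue to adapt.
    h_new = model.lower(new_x)
    loss = F.cross_entropy(model.upper(h_new), new_y)
    if act_buffer:
        # Replay stored activations of recent classes through the
        # upper layers only; no raw sensory inputs are replayed.
        h_old, y_old = act_buffer.sample()
        loss = loss + F.cross_entropy(model.upper(h_old), y_old)
    loss.backward()
    opt.step()
    # Buffer a detached copy of the new activations for future replay.
    act_buffer.add(h_new.detach(), new_y)
    return loss.item()
```

One caveat this sketch glosses over: because the lower layers keep changing, buffered activations gradually drift out of distribution, which is part of why the paper restricts replay to recent classes rather than the full history.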

