SCALE

Scalable Adversarial Online Continual Learning

Abstract: Adversarial continual learning (ACL) is effective for continual learning problems because its feature alignment process generates task-invariant features with low susceptibility to the catastrophic forgetting problem. Nevertheless, the ACL method imposes considerable complexity because it relies on task-specific networks and discriminators. It also goes through an iterative training process, which does not fit online (one-epoch) continual learning problems. This paper proposes a scalable adversarial continual learning (SCALE) method, which puts forward a parameter generator transforming common features into task-specific features and a single discriminator in the adversarial game to induce common features. The training process is carried out in a meta-learning fashion using a new combination of three loss functions. SCALE outperforms prominent baselines by noticeable margins in both accuracy and execution time.

Authors: Tanmoy Dam, Mahardhika Pratama, MD Meftahul Ferdaus, Sreenatha Anavatti and Hussein Abbas
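The components described in the abstract (a shared feature extractor, a parameter generator that turns common features into task-specific ones, and a single task discriminator trained adversarially with a three-term loss) can be pictured with the minimal PyTorch sketch below. Module names, dimensions, the FiLM-style transformation, and the placeholder third loss term are illustrative assumptions, not the repository's actual implementation.

```python
# Minimal sketch of the ingredients named in the abstract; all names,
# shapes, and the third loss term are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor(nn.Module):
    """Backbone producing common (task-shared) features."""
    def __init__(self, in_dim=784, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, feat_dim), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class ParameterGenerator(nn.Module):
    """Generates per-task scale/shift parameters that transform common
    features into task-specific features (a FiLM-style form is assumed)."""
    def __init__(self, num_tasks, feat_dim=128):
        super().__init__()
        self.gamma = nn.Embedding(num_tasks, feat_dim)
        self.beta = nn.Embedding(num_tasks, feat_dim)
    def forward(self, common_feat, task_id):
        t = torch.full((common_feat.size(0),), task_id, dtype=torch.long,
                       device=common_feat.device)
        return self.gamma(t) * common_feat + self.beta(t)

class TaskDiscriminator(nn.Module):
    """Single discriminator predicting the task identity from common
    features; the extractor is trained to fool it, inducing invariance."""
    def __init__(self, feat_dim=128, num_tasks=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, num_tasks))
    def forward(self, feat):
        return self.net(feat)

def train_step(batch, task_id, extractor, generator, classifier,
               discriminator, opt_main, opt_disc, lam_adv=0.1, lam_reg=0.1):
    """One illustrative update combining a classification loss, an
    adversarial loss, and a placeholder regularizer (three-term loss)."""
    x, y = batch
    common = extractor(x)

    # 1) Discriminator step: predict the true task id from common features.
    d_logits = discriminator(common.detach())
    d_loss = F.cross_entropy(d_logits, torch.full_like(y, task_id))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Main step: classify with task-specific features while pushing the
    #    extractor to confuse the discriminator (adversarial game).
    task_feat = generator(common, task_id)
    ce_loss = F.cross_entropy(classifier(task_feat), y)
    adv_loss = -F.cross_entropy(discriminator(common),
                                torch.full_like(y, task_id))
    reg_loss = common.pow(2).mean()   # placeholder for the third loss term
    loss = ce_loss + lam_adv * adv_loss + lam_reg * reg_loss
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return loss.item()
```

In this sketch, `opt_main` is assumed to cover the extractor, generator, and classifier only, so maximizing the discriminator's confusion updates the shared backbone toward task-invariant common features.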

Requirements

  • PyTorch 1.10.0
  • CUDA 11.4

All the experiments were tested on a single NVIDIA RTX 3080 GPU with 16 GB of memory.
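A quick way to confirm a local environment matches these requirements is the short check below (the printed values are expectations, not checks enforced anywhere in the code):

```python
# Environment sanity check (illustrative; not part of the repository).
import torch

print(torch.__version__)             # expect 1.10.0
print(torch.version.cuda)            # expect a CUDA 11.x build
print(torch.cuda.is_available())     # True when a GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```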

Benchmarks

  1. Prepare Data
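As a rough illustration of what data preparation typically involves for online continual learning, the sketch below splits CIFAR-10 into five two-class tasks and builds one single-pass loader per task. The dataset, split, and function name are assumptions for illustration; the benchmarks actually used by this repository may differ.

```python
# Hedged sketch of a class-incremental, task-split data stream.
# Split CIFAR-10 into 5 two-class tasks is an illustrative choice only.
import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

def make_task_loaders(root="./data", num_tasks=5, batch_size=64):
    tfm = transforms.Compose([transforms.ToTensor(),
                              transforms.Normalize((0.5,) * 3, (0.5,) * 3)])
    train = datasets.CIFAR10(root, train=True, download=True, transform=tfm)
    targets = torch.tensor(train.targets)
    classes_per_task = len(train.classes) // num_tasks

    loaders = []
    for t in range(num_tasks):
        cls = torch.arange(t * classes_per_task, (t + 1) * classes_per_task)
        idx = torch.isin(targets, cls).nonzero(as_tuple=True)[0]
        # A single pass over each task's loader matches the online
        # (one-epoch) setting described in the abstract.
        loaders.append(DataLoader(Subset(train, idx.tolist()),
                                  batch_size=batch_size, shuffle=True))
    return loaders
```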
