Authors: Patryk Krukowski, Jan Miksa, Piotr Helm, Jacek Tabor, Paweł Wawrzyński, Przemysław Spurek
GMUM — Jagiellonian University
Note: This repository will be merged into the repository in the future.
This repository contains the implementation of InTAct (Interval-based Task Activation Consolidation) integrated into state-of-the-art prompt-based continual learning methods: L2P, DualPrompt, and CODA-Prompt.
While prompt-based methods help reduce catastrophic forgetting, they still experience representation drift within shared parameters, such as the classifier weights. InTAct addresses this by applying functional regularization to these shared components, allowing them to adapt to new domains without overwriting the functional behavior needed by previously learned tasks.
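The exact InTAct objective is defined in the paper; purely as an illustration of the interval-based idea described above, here is a minimal NumPy sketch of one plausible interval-style penalty. The function names and the penalty form are assumptions for exposition, not the repository's API:

```python
import numpy as np

def record_intervals(activations):
    """Record per-unit activation intervals (min, max) observed on data
    from a finished task. `activations` has shape (num_samples, num_units).
    NOTE: illustrative helper, not part of the PromptInTAct codebase."""
    return activations.min(axis=0), activations.max(axis=0)

def interval_penalty(activations, lo, hi):
    """Functional-regularization-style penalty: shared-layer activations
    that stay inside the recorded intervals incur zero cost, while
    activations drifting outside are penalized quadratically."""
    below = np.clip(lo - activations, 0.0, None)   # how far below the interval
    above = np.clip(activations - hi, 0.0, None)   # how far above the interval
    return float(np.mean(below**2 + above**2))
```

Under this sketch, the penalty would be added to the new-task loss so shared components (e.g., classifier weights) can move freely as long as their functional behavior on old-task inputs stays within the recorded intervals.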
We provide InTAct integration for the following baselines:
- L2P (Learning to Prompt) + InTAct
- DualPrompt + InTAct
- CODA-Prompt + InTAct
- Clone the repository:

  ```bash
  git clone https://github.com/pkrukowski1/PromptInTAct
  cd PromptInTAct
  ```
- Create and activate the conda environment (named `prompt_intact`):

  ```bash
  conda create -n prompt_intact python=3.8
  conda activate prompt_intact
  ```
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
All training scripts and configurations are located in the experiments/ folder.
- Configure hyperparameters. To ensure reproducibility and match the results reported in the paper, first examine and configure the hyperparameters used in the scripts. Before running, check the relevant bash script (e.g., `experiments/cifar100_l2p.sh`) and any configuration files.
- Execute training scripts. Once the hyperparameters are correctly configured, reproduce the results by running the corresponding bash scripts:

  ```bash
  # Example: run the L2P integration on Split CIFAR-100 in the CIL scenario
  bash experiments/cifar-100.sh

  # Example: run the DualPrompt integration on DomainNet in the DIL scenario
  bash experiments/dil_domainnet.sh
  ```
- Thanks to the authors of the CODA-Prompt paper for providing the implementation of CODA-Prompt and related methods.
- Thanks to the authors of the KAC paper for providing scripts for the Domain-Incremental Learning (DIL) scenario.
