This is a refactored version of the code used for "A Probabilistic Framework for Modular Continual Learning" [1].
conda create -n PICLE python=3.9
conda activate PICLE
pip install --upgrade pip
pip install -r requirements.txt
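As a quick sanity check of the environment, you can confirm that the deep learning backend imports correctly. The command below assumes PyTorch is among the packages in requirements.txt; adjust it if your setup differs.
python -c "import torch; print(torch.__version__)"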
The underlying image datasets will be downloaded as needed when running the experiments. The rest of the data is generated on the fly.
From the project's folder, running the following:
python Experiments/BELL/Experiment.py -sequence S_out -cl_alg PICLE -device cpu -dbg
evaluates PICLE on the S_out sequence of BELL on the CPU, using 1 training epoch per path and 1 random seed. To evaluate with all 3 random seeds and more epochs per path, remove the -dbg argument.
The same script can be used to run different baselines on different BELL sequences. To see all available options, run:
python Experiments/BELL/Experiment.py -h
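For example, a full (non-debug) evaluation of PICLE on the same sequence could be launched as below; the cuda device string is an assumption, so replace it with whichever device identifiers the script accepts (see the -h output).
python Experiments/BELL/Experiment.py -sequence S_out -cl_alg PICLE -device cuda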
- The code currently refers to paths as programs, as both specify how an input should be processed.
- If you get an OSError, it might be caused by the results-file lock, which lets concurrently running algorithms safely write results for the same sequence. As a workaround, you can go to Experiments/Interface/Experiment.py and comment out its use on line 44.
[1] Valkov, L., Srivastava, A., Chaudhuri, S. and Sutton, C., 2023. A Probabilistic Framework for Modular Continual Learning. arXiv preprint arXiv:2306.06545.