CTL (Cascaded Transfer Learning) is a lightweight research framework for many-task learning under a global training budget.
Instead of training tasks independently or relying on expensive joint multi-task learning, CTL organizes tasks into a directed tree and performs cascade transfer: models are trained at a root task and progressively fine-tuned along the tree.
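The cascade idea can be sketched in a few lines. Note that `Task`, `fine_tune`, and the recursive `cascade` helper below are illustrative names chosen for this sketch, not the actual CTL API:

```python
import copy

class Task:
    """A node in the task tree; children inherit the parent's parameters."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def fine_tune(model, task, steps):
    """Placeholder for local fine-tuning: run `steps` gradient steps
    on the task's data and return the adapted model."""
    return model

def cascade(model, task, steps_per_task):
    """Clone the incoming model, fine-tune it on this task, then
    recurse into the children with the adapted copy as their seed."""
    adapted = fine_tune(copy.deepcopy(model), task, steps_per_task)
    results = {task.name: adapted}
    for child in task.children:
        results.update(cascade(adapted, child, steps_per_task))
    return results

root = Task("root", [Task("a"), Task("b", [Task("b1")])])
models = cascade(object(), root, steps_per_task=100)
print(sorted(models))  # ['a', 'b', 'b1', 'root']
```

Each task thus receives a warm start from its parent instead of training from scratch, which is what makes the shared step budget go further.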
- **Tree-structured transfer learning:** tasks are connected by directed edges; each task inherits parameters from its parent and is fine-tuned locally.
- **Global budget constraint:** a fixed budget (B) of gradient steps is shared across all tasks, making transfer decisions meaningful.
- Reference models:
  - `LinearModel`: exact match with theory.
  - `SimpleMLP`: minimal non-linear extension.
  - `ResidualMLP`: near-linear model preserving alignment structure.
- Full-batch SGD training.
- Model cloning for cascade transfer.
- Cost tracking (gradient steps, wall-clock time).
- Optuna-based optimizer for cascade-level parameters:
  - alignment learning rate (eta)
  - seed budget fraction
- Clean train/validation/test protocol (no leakage).
The easiest way to run experiments is via the provided notebook: `expe-alignment-weave.ipynb`.
It walks through:
- data loading and normalization;
- tree construction;
- cascade training and baselines;
- hyperparameter optimization;
- final test evaluation.
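The same pipeline can be mirrored in a minimal script. Every helper below (`normalize`, `build_tree`, `train_cascade`) is a toy stand-in for the corresponding notebook step, not the CTL API:

```python
def normalize(xs):
    """Data loading / normalization step: center features at zero mean."""
    mu = sum(xs) / len(xs)
    return [x - mu for x in xs]

def build_tree(tasks):
    """Tree construction step: a trivial star rooted at the first task."""
    return {tasks[0]: tasks[1:]}

def train_cascade(tree, budget):
    """Cascade training step: split the global budget B evenly
    across all tasks and record each task's step allocation."""
    children = [c for v in tree.values() for c in v]
    n_tasks = len(tree) + len(children)
    return {t: budget // n_tasks for t in [*tree] + children}

features = normalize([4.0, 6.0, 8.0])
print(features)  # [-2.0, 0.0, 2.0]

tree = build_tree(["root", "a", "b"])
steps = train_cascade(tree, budget=900)
print(steps)  # {'root': 300, 'a': 300, 'b': 300}
```

A real run would replace the even split with the tuned seed budget fraction and finish with a single evaluation on the held-out test set.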
If you use CTL in your work, please cite the corresponding paper:
@article{anonymous_2026_CTL,
author = {Anonymous authors},
title = {Cascaded Transfer: Learning Many Tasks under Budget Constraints},
year = {2026}
}

This project is licensed under the GPL license - see the LICENSE file for details.