Official PyTorch implementation of DMTG. The code is primarily contributed by Shuguo Jiang, Moran Li, and Yuan Gao.
Please refer to our paper for more technical details:
Yuan Gao, Shuguo Jiang, Moran Li, Jin-Gang Yu, Gui-Song Xia. DMTG: One-Shot Differentiable Multi-Task Grouping, International Conference on Machine Learning (ICML), 2024. [arXiv]
If this code is helpful to your research, please consider citing our paper:
```
@inproceedings{dmtg2024,
  title={DMTG: One-Shot Differentiable Multi-Task Grouping},
  author={Yuan Gao and Shuguo Jiang and Moran Li and Jin-Gang Yu and Gui-Song Xia},
  year={2024},
  booktitle={International Conference on Machine Learning (ICML)}
}
```
Install the necessary dependencies:
```shell
pip install -r requirements.txt
```
Download CelebA from this website, then preprocess it:
```shell
python preprocess/preprocess_celeba.py
```
Download Taskonomy from this website (you may not need this if you only want to test our algorithm on CelebA, as the Taskonomy dataset is extremely large), then preprocess it:
```shell
python preprocess/preprocess_taskonomy.py --root {root_path} --nthreads {n_threads} --whitelist {whitelist_path}
```
Our trained checkpoints can be downloaded here.
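The downloaded checkpoints are ordinary PyTorch files. The following is a minimal sketch of how such a file can be inspected before testing; the tiny `nn.Linear` model is a stand-in, since the exact layout of the DMTG checkpoints (e.g. `2_groups.pth`) is not documented here:

```python
import torch
import torch.nn as nn

# Stand-in for a real checkpoint: save a tiny model's state_dict to disk.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), "demo_ckpt.pth")

# Load on CPU so no GPU is needed just to look inside the file.
ckpt = torch.load("demo_ckpt.pth", map_location="cpu")

# A state_dict maps parameter names to tensors; listing the keys is a
# quick sanity check that the download is intact.
keys = list(ckpt.keys())
print(keys)  # ['weight', 'bias']
```

The same `torch.load(..., map_location="cpu")` pattern works for any `.pth` file downloaded above.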
Test on CelebA:
```shell
python test.py --cfg configs/test/test_celeb_a_9_tasks.yaml --opts run.load_ckpt_dir {2_groups.pth}
```
Test on Taskonomy:
```shell
python test.py --cfg configs/test/test_taskonomy_5_tasks.yaml --opts run.load_ckpt_dir {3_groups.pth}
```
Train on CelebA:
```shell
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc-per-node 1 train.py --cfg configs/train_celeb_a/train_celeb_a_100_epoches_2_groups_9_tasks.yaml
```
Train on Taskonomy:
```shell
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc-per-node 2 train.py --cfg configs/train_taskonomy/train_taskonomy_100_epoches_3_groups_5_tasks.yaml
```
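For reference, the process-group setup that `torch.distributed.launch` expects each worker to perform can be sketched as below. This is an illustration of the standard `torch.distributed` initialization, not the repository's actual `train.py` code; the defaults let it run standalone as a single CPU process:

```python
import os
import torch.distributed as dist

# torch.distributed.launch (and its successor, torchrun) injects these
# environment variables into every worker process; the defaults below
# make the sketch runnable as a single standalone process.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
os.environ.setdefault("RANK", "0")
os.environ.setdefault("WORLD_SIZE", "1")
os.environ.setdefault("LOCAL_RANK", "0")

# "gloo" keeps this runnable without a GPU; the training commands above
# would typically use the "nccl" backend on CUDA devices.
dist.init_process_group(backend="gloo", init_method="env://")
rank, world_size = dist.get_rank(), dist.get_world_size()
print(rank, world_size)  # 0 1 when run standalone
dist.destroy_process_group()
```

Note that `torch.distributed.launch` is deprecated in recent PyTorch releases; `torchrun --nproc-per-node 2 train.py --cfg ...` is the equivalent modern invocation.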