Achievement-based Training Progress Balancing for Multi-Task Learning

This is the official implementation of the ICCV 2023 paper "Achievement-based Training Progress Balancing for Multi-Task Learning" by Hayoung Yun and Hanjoo Cho. [Paper][Video][Poster]

In this paper, we address two major challenges of multi-task learning: (1) the high cost of annotating labels for all tasks, and (2) balancing the training progress of diverse tasks with distinct characteristics.


We address the high annotation cost by integrating task-specific datasets into a large-scale multi-task dataset. The composed dataset is therefore partially annotated, because each image is labeled only for the task from which it originated. Hence, the number of labels can differ across tasks, and this difference exacerbates the imbalance in training progress among tasks. To handle the intensified imbalance, we propose a novel multi-task loss, named the achievement-based multi-task loss.
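One practical consequence of partial annotation is that each training sample contributes a loss only for the task it is labeled for. Below is a minimal sketch of this masking in plain Python; the `task` tag and per-task loss functions are hypothetical illustrations, not the repository's actual data pipeline:

```python
def masked_multitask_loss(batch, loss_fns):
    """Compute per-task losses over only the samples labeled for each task.

    batch:    list of samples, each a dict carrying a "task" tag and its data
    loss_fns: mapping from task name to a per-sample loss function
    """
    total = 0.0
    for task, loss_fn in loss_fns.items():
        labeled = [s for s in batch if s["task"] == task]
        if labeled:  # skip tasks with no labeled samples in this batch
            total += sum(loss_fn(s) for s in labeled) / len(labeled)
    return total
```

Each task's loss is averaged over its own labeled subset, so tasks with fewer labels are not silently down-weighted by batch composition.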


The previous accuracy-based multi-task loss, DTP, focuses on the current accuracy of each task. Instead, we pay attention to how much the accuracy can still be improved. Taking the accuracy of the single-task model as the accuracy potential of each task, we define an "achievement" as the ratio of current accuracy to that potential. Our achievement-based task weights encourage tasks with low achievement and slow down tasks that converged early.
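As an illustration, an achievement and a resulting task weight could be computed as below. The focal-style exponent `gamma` and the `(1 - achievement) ** gamma` weighting form are assumptions for this sketch, not necessarily the paper's exact formula:

```python
def achievement_weights(accuracies, potentials, gamma=2.0):
    """Illustrative achievement-based task weights.

    accuracies: current accuracy per task
    potentials: single-task (upper-bound) accuracy per task
    """
    weights = []
    for acc, pot in zip(accuracies, potentials):
        achievement = acc / pot  # ratio of current accuracy to its potential
        weights.append((1.0 - achievement) ** gamma)  # low achievement -> large weight
    return weights
```

A task far from its single-task potential receives a large weight, while a task that has nearly reached its potential is damped toward zero.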

We then formulate the multi-task loss as a weighted geometric mean, instead of the weighted sum generally used for multi-task losses. A weighted sum is easily dominated by the largest loss when the loss scales differ significantly. The weighted geometric mean, in contrast, captures the variation in all losses.
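The difference can be seen numerically in a minimal plain-Python sketch, assuming positive per-task losses:

```python
import math

def weighted_sum_loss(losses, weights):
    """Conventional weighted sum of task losses."""
    return sum(w * l for w, l in zip(weights, losses))

def weighted_geometric_mean_loss(losses, weights):
    """Weighted geometric mean: exp of the weighted mean of log-losses.

    A single large loss shifts the result far less than it shifts a
    weighted sum, so no task dominates purely through its loss scale.
    """
    total_w = sum(weights)
    return math.exp(sum(w * math.log(l) for w, l in zip(weights, losses)) / total_w)
```

With losses `[100.0, 0.01]` and equal weights, the weighted sum is about 50, dominated entirely by the first task, while the weighted geometric mean is 1.0.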


The proposed loss achieved the best multi-task accuracy without incurring training-time overhead. Compared to single-task models, the proposed multi-task model achieved accuracy improvements of 1.28%, 1.65%, and 1.18% in object detection, semantic segmentation, and depth estimation, respectively, while reducing computation to 33.73%.

🚀 This repo is scheduled for release on November 1, 2023.

Contents

  1. Installation
  2. Datasets
  3. Experiments
  4. Citation

Installation

Our setup

  • Python 3.8
  • CUDA 11.3
  • PyTorch 1.13

Script

Clone this repository.

git clone https://github.com/Samsung/Achievement-based-MTL.git

Install the requirements.

pip install -r requirements.txt

Datasets

We currently support the PASCAL VOC and NYU v2 datasets. Download and organize the dataset files as follows:

VOC Dataset

$datasets/VOC/

NYU v2 Dataset

$datasets/NYU/

Experiments

Supported Multi-Task Losses

| Method | Flag | Paper |
|--------|------|-------|
| RLW | `rlw` | Reasonable Effectiveness of Random Weighting: A Litmus Test for Multi-Task Learning |
| DWA | `dwa` | End-to-End Multi-Task Learning with Attention |
| GLS | `geometric` | MultiNet++: Multi-Stream Feature Aggregation and Geometric Loss Strategy for Multi-Task Learning |
| MGDA | `mgda` | Multi-Task Learning as Multi-Objective Optimization |
| PCGrad | `pcgrad` | Gradient Surgery for Multi-Task Learning |
| CAGrad | `cagrad` | Conflict-Averse Gradient Descent for Multi-task Learning |
| GradNorm | `grad-norm` | GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks |
| IMTL | `imtl` / `imtl-g` | Towards Impartial Multi-task Learning |
| DTP | `dtp` | Dynamic Task Prioritization for Multitask Learning |
| Proposed | `amtl` | Achievement-Based Training Progress Balancing for Multi-Task Learning |

Scripts

Training using the conventional fully-annotated multi-dataset (NYU v2)

# single-task
python3 train_test.py cfg/segmentation/NYU/DeepLab_resnet50.cfg
python3 train_test.py cfg/depth/NYU/DeepLab_resnet50.cfg
python3 train_test.py cfg/normal/NYU/DeepLab_resnet50.cfg

# multi-task
python3 train_test.py cfg/seg+depth+normal/NYU/DeepLab_resnet50.cfg

Training using the partially-annotated multi-dataset (PASCAL VOC + NYU depth)

# single-task
python3 train_test.py cfg/detection/VOC/VMM_efficientnet-v2-s.cfg
python3 train_test.py cfg/segmentation/VOC/VMM_efficientnet-v2-s.cfg
python3 train_test.py cfg/depth/NYU/VMM_efficientnet-v2-s.cfg

# multi-task
python3 train_test.py cfg/seg+det+depth/NYU/VMM_efficientnet-v2-s.cfg

Citation

@InProceedings{Yun_2023_ICCV,
    author    = {Yun, Hayoung and Cho, Hanjoo},
    title     = {Achievement-Based Training Progress Balancing for Multi-Task Learning},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {16935-16944}
}

If you have any questions, please feel free to contact Hayoung Yun (hayoung.yun@samsung.com) or Hanjoo Cho (hanjoo.cho@samsung.com).
