
Asymmetric Multi-Task Learning (AMTL)

  • Giwoong Lee (UNIST), Eunho Yang (KAIST), Sung Ju Hwang (UNIST)

(Figures: motivation and main idea of AMTL)


We propose a novel multi-task learning method that minimizes the effect of negative transfer by allowing asymmetric transfer between the tasks based on task relatedness as well as the amount of individual task losses, which we refer to as Asymmetric Multi-task Learning (AMTL). To tackle this problem, we couple multiple tasks via a sparse, directed regularization graph that enforces each task parameter to be reconstructed as a sparse combination of other tasks selected based on the task-wise loss. We present two different algorithms that jointly learn the task predictors as well as the regularization graph. The first algorithm solves the original learning objective using alternating optimization, and the second solves an approximation of it using a curriculum learning strategy that learns one task at a time. We perform experiments on multiple datasets for classification and regression, on which we obtain significant improvements in performance over single-task learning and existing multi-task learning models.
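As a rough, self-contained sketch of the alternating-optimization idea described above (written in Python/NumPy rather than the repository's MATLAB, and using a simplified stand-in objective: squared loss per task, plus a sparse, nonnegative, zero-diagonal transfer matrix B that reconstructs each task's parameters from the others; the paper's actual objective additionally ties each task's outgoing transfer to its loss):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: T related linear regression tasks over a d-dimensional space.
T, d, n = 5, 8, 40
X = [rng.standard_normal((n, d)) for _ in range(T)]
w_true = rng.standard_normal((d, 1))
Y = [X[t] @ (w_true + 0.1 * rng.standard_normal((d, 1))) for t in range(T)]

lam, mu, lr = 0.5, 0.05, 0.01   # illustrative hyperparameters, not from the paper

W = 0.1 * rng.standard_normal((d, T))  # column t = parameters of task t
B = np.zeros((T, T))                   # B[s, t] = transfer from task s to task t

def objective(W, B):
    loss = sum(np.mean((X[t] @ W[:, t:t+1] - Y[t]) ** 2) for t in range(T))
    recon = lam * np.sum((W - W @ B) ** 2)  # w_t ~ combination of other columns
    return loss + recon + mu * np.abs(B).sum()

obj0 = objective(W, B)

# Alternating optimization: gradient steps on W with B fixed, then on B with W fixed.
for it in range(300):
    # W-step: task losses plus the reconstruction penalty.
    G = np.zeros_like(W)
    for t in range(T):
        G[:, t:t+1] = 2 * X[t].T @ (X[t] @ W[:, t:t+1] - Y[t]) / n
    R = W - W @ B
    G += 2 * lam * (R - R @ B.T)
    W -= lr * G
    # B-step: projected gradient keeping B nonnegative with a zero diagonal
    # (no self-transfer); mu acts as the l1 subgradient on B >= 0.
    GB = -2 * lam * W.T @ (W - W @ B) + mu
    B = np.maximum(B - lr * GB, 0.0)
    np.fill_diagonal(B, 0.0)

print("objective before/after:", obj0, objective(W, B))
```

The asymmetry is the point of the directed graph: B[s, t] and B[t, s] are separate parameters, so a well-fit task can transfer to a poorly-fit one without the reverse.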


If you use this code or dataset (such as the imbalanced dataset) as part of any published research, please cite the following paper.

    @inproceedings{lee2016asymmetric,
      title={Asymmetric Multi-task Learning based on Task Relatedness and Confidence},
      author={Lee, Giwoong and Yang, Eunho and Hwang, Sung Ju},
      booktitle={Proceedings of The 33rd International Conference on Machine Learning},
      year={2016}
    }

Running code

The repository contains two entry points: regression (run_amtl_regression) and classification (run_amtl_class). Both are provided together with an example dataset.

Details of AMTL

Details of AMTL are described in the [AMTL paper][paperlink].
