
Multi-Task Attention Network (MTAN)

This repository contains the source code of the Multi-Task Attention Network (MTAN) and the baselines from the paper End-to-End Multi-Task Learning with Attention (CVPR 2019), introduced by Shikun Liu, Edward Johns, and Andrew Davison.


Image-to-Image Predictions (One-to-Many)

Under the folder im2im_pred, we provide our proposed network alongside all the baselines on the NYUv2 dataset presented in the paper. All models are written in PyTorch, so please first make sure you have PyTorch 1.0 or above installed on your machine.
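A quick way to confirm your environment meets this requirement:

```python
import torch

print(torch.__version__)           # should report 1.0 or above
print(torch.cuda.is_available())   # a GPU is strongly recommended for training
```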

Download our pre-processed NYUv2 dataset here, which we used in the paper. The original NYUv2 dataset can be found here, with pre-computed ground-truth normals from here.

Update: I have now released the pre-processed CityScapes dataset with 2-, 7-, and 19-class semantic labels (see the paper for more details) and (inverse) depth labels. Download the [256x512, 2.42GB] version here and the [128x256, 651MB] version here.
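A minimal sketch of how the pre-processed data can be consumed, assuming the archive stores each modality as one .npy file per sample under {train,val}/{image,label,depth,normal}; the folder names and tensor conventions here are assumptions, so adjust them to match the actual archive layout:

```python
import os
import numpy as np
import torch
from torch.utils.data import Dataset

class NYUv2(Dataset):
    """Loads the pre-processed NYUv2 samples for all three tasks."""
    def __init__(self, root, mode='train'):
        self.data_path = os.path.join(root, mode)
        self.data_len = len(os.listdir(os.path.join(self.data_path, 'image')))

    def __len__(self):
        return self.data_len

    def _load(self, modality, index):
        return np.load(os.path.join(self.data_path, modality, '{}.npy'.format(index)))

    def __getitem__(self, index):
        # images and dense targets are assumed stored HWC; PyTorch expects CHW
        image = torch.from_numpy(np.moveaxis(self._load('image', index), -1, 0)).float()
        semantic = torch.from_numpy(self._load('label', index)).long()
        depth = torch.from_numpy(np.moveaxis(self._load('depth', index), -1, 0)).float()
        normal = torch.from_numpy(np.moveaxis(self._load('normal', index), -1, 0)).float()
        return image, semantic, depth, normal
```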

All the models are built on SegNet and are described in the following table:

| File Name | Type | Flags | Comments |
|---|---|---|---|
| model_segnet_single.py | Single | task, dataroot | standard single-task learning |
| model_segnet_stan.py | Single | task, dataroot | our approach applied to a single task |
| model_segnet_split.py | Multi | weight, dataroot, temp, type | multi-task learning baseline in which the shared network splits at the last layer (also known as hard-parameter sharing) |
| model_segnet_dense.py | Multi | weight, dataroot, temp | multi-task learning baseline in which each task has its own parameter space (also known as soft-parameter sharing) |
| model_segnet_cross.py | Multi | weight, dataroot, temp | our implementation of the Cross-Stitch Network |
| model_segnet_mtan.py | Multi | weight, dataroot, temp | our approach (MTAN) |

Each flag is described in the following table:

| Flag Name | Usage | Comments |
|---|---|---|
| task | pick one task to train: semantic (semantic segmentation, depth-wise cross-entropy loss), depth (depth estimation, L1-norm loss), or normal (normal prediction, cosine-similarity loss) | only available in single-task learning |
| dataroot | directory root for the NYUv2 dataset | just put it under the folder im2im_pred to avoid any concerns :D |
| weight | weighting options for multi-task learning: equal (direct summation of all task losses), DWA (our proposal; see the sketch after this table), uncert (our implementation of the Weight Uncertainty Method) | only available in multi-task learning |
| temp | hyper-parameter temperature in the DWA weighting option, determining the softness of task weighting | only used with the DWA option |
| type | different versions of the multi-task baseline split: standard, deep, wide | only available in the baseline split |
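For reference, DWA weights each task by the relative descending rate of its recent losses, as defined in the paper: a task whose loss is shrinking slowly receives a larger weight. A minimal NumPy sketch of the rule (the loss-history array is this sketch's own bookkeeping, not a name from the repo):

```python
import numpy as np

def dwa_weights(avg_task_losses, t, T=2.0):
    """Dynamic Weight Average: avg_task_losses has shape
    [num_epochs, num_tasks], holding each task's average loss per epoch."""
    num_tasks = avg_task_losses.shape[1]
    if t < 2:
        # no loss history yet: fall back to equal weighting
        return np.ones(num_tasks)
    # relative descending rate r_k = L_k(t-1) / L_k(t-2)
    r = avg_task_losses[t - 1] / avg_task_losses[t - 2]
    # softmax over r / T, rescaled so the weights sum to num_tasks
    exp_r = np.exp(r / T)
    return num_tasks * exp_r / exp_r.sum()

# usage per epoch: weights = dwa_weights(loss_history, epoch)
```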

To run any model, cd im2im_pred/ and run python MODEL_NAME.py --FLAG_NAME 'FLAG_OPTION'.
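For example, to train the full MTAN model with DWA weighting (the flag values below are illustrative; check each script's argparse definitions for the exact options and defaults):

```
cd im2im_pred/
python model_segnet_mtan.py --weight 'dwa' --dataroot 'nyuv2' --temp 2.0
```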

Visual Decathlon Challenge (Many-to-Many)

We have also provided source code for the recently proposed Visual Decathlon Challenge, for which we build MTAN based on the Wide Residual Network, following the implementation here.

To run the code, first download the dataset and devkit from the official Visual Decathlon Challenge website here and put them in the folder visual_decathlon. Then, put decathlon_mean_std.pickle into the folder of the downloaded dataset, decathlon-1.0-data.
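For reference, reading the statistics file is a one-liner; a plausible (unverified) assumption is that it maps each decathlon dataset name to the per-channel mean and std used to normalise inputs:

```python
import pickle

with open('decathlon-1.0-data/decathlon_mean_std.pickle', 'rb') as f:
    mean_std = pickle.load(f)

# assumed structure: {dataset_name: (channel_means, channel_stds), ...}
print(sorted(mean_std))
```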

Finally, use python to run the provided scripts: one for training, one for evaluation with --dataset 'imagenet' and 'notimagenet', and one to convert the predictions into COCO format for online evaluation.
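A hedged sketch of that last conversion step: the online server expects one COCO-style record per test image, so per-image class predictions are serialised to JSON. Note that write_coco_results and both of its arguments are hypothetical names for illustration, not functions from this repo:

```python
import json

def write_coco_results(image_ids, predicted_labels, out_file='results.json'):
    # one COCO-style record per test image: the predicted class for each id
    results = [{'image_id': int(i), 'category_id': int(c)}
               for i, c in zip(image_ids, predicted_labels)]
    with open(out_file, 'w') as f:
        json.dump(results, f)

# usage: write_coco_results(test_ids, model_predictions)
```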

Other Notices

  1. The provided code is highly optimised for readability. If you find any unusual behaviour, please post an issue or contact me directly by email.
  2. Training the provided code will result in slightly better performance than the numbers reported in the paper for image-to-image prediction tasks (the rankings stay the same). If you want to compare any models from the paper on image-to-image prediction tasks, please re-run the models directly with your own training strategy (learning rate, optimiser, etc.) and keep all training strategies consistent to ensure fairness. To compare results on the Visual Decathlon Challenge, you may directly use the results in the paper. To compare with your own research, please build your multi-task network with the same backbone architecture: SegNet for image-to-image tasks, and Wide Residual Network for the Visual Decathlon Challenge.
  3. In my personal experience, designing a better architecture is usually more helpful (and easier) than finding a better task-weighting scheme in multi-task learning.


If you found this code/work to be useful in your own research, please consider citing the following:

@inproceedings{liu2019end,
  title={End-to-End Multi-task Learning with Attention},
  author={Liu, Shikun and Johns, Edward and Davison, Andrew J},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2019}
}


If you have any questions, please contact me by email.
