Deep Multi-task Learning to Rank

Introduction

DeepMTL2R is a deep learning framework for multi-task learning to rank.

Setup environment

aws s3 sync s3://personal-tests/chaosd/DeepMTL2R-dev/ DeepMTL2R/

Set up the environment for running dmtl2r

conda create -n dmtl2r python=3.9.7
source ~/anaconda3/etc/profile.d/conda.sh
conda activate dmtl2r

cd DeepMTL2R
python -m pip install -e . --extra-index-url https://download.pytorch.org/whl/cu113

chmod +x *.sh
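To sanity-check the installation, you can optionally print the installed PyTorch version and whether a CUDA device is visible (this assumes the dmtl2r environment is still active):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"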

Set up the environment for plotting and computing metrics

conda create -n pygmo python=3.9.7
source ~/anaconda3/etc/profile.d/conda.sh
conda activate pygmo

cd DeepMTL2R
pip install -r requirements-hvi.txt
conda install pygmo
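Optionally, verify that pygmo imports correctly in this environment (it should print the installed version):

python -c "import pygmo; print(pygmo.__version__)"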

source ~/anaconda3/etc/profile.d/conda.sh
conda activate pygmo

Add a Conda environment to Jupyter Notebook

conda install ipykernel
python -m ipykernel install --name pygmo --display-name pygmo
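You can confirm that the kernel was registered with:

jupyter kernelspec list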

Usage

To train a model, configure the experiment in a config.json file. The code in allrank provides the core components for model training, and the task-specific files in DeepMTL2R use these core modules to run experiments.

We provide one example using MSLR30K data as follows.

CUDA_VISIBLE_DEVICES=0 python main_ntasks.py \
            --config-file-path scripts/local_config_web.json \
            --output-dir "allrank/run" \
            --task-indices 0,135 \
            --task-weights 0,10 \
            --moo-method ls \
            --dataset-name "original" \
            --reduction-method "mean"

We also provide the run_2tasks_web30k.sh and run_5tasks_web30k.sh scripts to reproduce the experiments in our paper, which train Transformer models on the MSLR30K data for two tasks and five tasks, respectively.

MTL methods

We support the following MTL methods in weight_methods.py.

| Method (code name) | Paper (notes) |
| --- | --- |
| STL (stl) | - (Single Task Learning baseline) |
| Linear scalarization (ls) | - (Linear scalarization baseline, which minimizes $\sum_k w_k \ell_k$) |
| Uncertainty weighting (uw) | Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics |
| Scale-invariant baseline (scaleinvls) | - (Scale-invariant baseline, which minimizes $\sum_k w_k \log \ell_k$) |
| Random Loss Weighting (rlw) | A Closer Look at Loss Weighting in Multi-Task Learning |
| DWA (dwa) | End-to-End Multi-Task Learning with Attention |
| PCGrad (pcgrad) | Gradient Surgery for Multi-Task Learning |
| MGDA (mgda) | Multi-Task Learning as Multi-Objective Optimization |
| GradDrop (graddrop) | Optimizing Deep Multitask Models with Gradient Sign Dropout |
| LOG_MGDA (log_mgda) | - (Log-scaled MGDA variant) |
| CAGrad (cagrad) | Conflict-Averse Gradient Descent for Multi-task Learning |
| LOG_CAGrad (log_cagrad) | - (Log-scaled CAGrad variant) |
| IMTL-G (imtl) | Towards Impartial Multi-task Learning |
| LOG_IMTLG (log_imtl) | - (Log-scaled IMTL-G variant) |
| Nash-MTL (nashmtl) | Multi-Task Learning as a Bargaining Game |
| FAMO (famo) | Fast Adaptive Multitask Optimization |
| SDMGrad (sdmgrad) | Direction-oriented Multi-objective Learning: Simple and Provable Stochastic Algorithms |
| Weighted Chebyshev (wc) | Multi-Objective Optimization for Sparse Deep Multi-Task Learning |
| Soft Weighted Chebyshev (soft_wc) | - (Soft variant of Weighted Chebyshev) |
| EPO (epo) | Exact Pareto Optimal Search for Multi-Task Learning |
| WC_MGDA (wc_mgda) | A Multi-objective / Multi-task Learning Framework Induced by Pareto Preferences |
| EC (ec) | Multi-objective Relevance Ranking |
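As an illustration, the code names above are the values passed to the --moo-method flag of main_ntasks.py. A minimal sketch of sweeping a few methods, reusing the flags from the Usage example above (the per-method output directory naming is only illustrative), could look like:

# train the same two-task setup under several MTL methods
for method in ls uw pcgrad nashmtl famo; do
    CUDA_VISIBLE_DEVICES=0 python main_ntasks.py \
        --config-file-path scripts/local_config_web.json \
        --output-dir "allrank/run_${method}" \
        --task-indices 0,135 \
        --task-weights 0,10 \
        --moo-method ${method} \
        --dataset-name "original" \
        --reduction-method "mean"
done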

Citation

If you use this work, or otherwise find it valuable, please consider citing the paper:

@article{chaoshengdong-deepmtl2r2025,
  title={DeepMTL2R: A Library for Deep Multi-task Learning to Rank},
  author={Dong, Chaosheng and Xiao, Peiyao and Ji, Kaiyi and Martinez, Aleix},
  year={2025}
}

Contact

For any questions, contact chaosd@amazon.com.

License

This project is licensed under the Apache-2.0 License.

Acknowledgements

We thank the authors of the following repositories, upon which we built the present codebase: allRank, FAMO, SDMGrad, MGDA, EPO, MO-LightGBM.
