awesome-moo-ml-papers

Pareto set learning

  1. PHN: Learning the Pareto Front with Hypernetworks (a minimal training sketch in this spirit follows this list)
    Authors: Navon et al
    Conference: ICLR, 2021
    Link: arXiv:2010.04104

  2. PHN-HVI: Improving Pareto front learning via multi-sample hyper-networks
    Authors: LP Hoang et al
    Conference: AAAI, 2023
    Link: arXiv:2212.01130

  3. PSL-Frame: A framework for controllable Pareto front learning with completed scalarization functions and its applications
    Authors: TA Tuan et al
    Journal: Neural Networks, 2024
    Link: arXiv:2302.12487

  4. PSL-Exp: Pareto set learning for expensive multi-objective optimization
    Authors: Lin et al
    Conference: NeurIPS, 2022
    Link: arXiv:2210.08495

  5. COSMOS: Scalable Pareto Front Approximation for Deep Multi-Objective Learning
    Authors: Ruchte et al
    Conference: ICDM, 2021
    Link: arXiv:2103.13392

  6. PaMaL: Pareto manifold learning: Tackling multiple tasks via ensembles of single-task models
    Authors: Dimitriadis et al
    Conference: ICML, 2023
    Link: Proceedings of Machine Learning Research (PMLR)

  7. GMOOAR: Multi-objective deep learning with adaptive reference vectors
    Authors: Weiyu Chen et al
    Conference: NeurIPS, 2022
    Link: NeurIPS Conference Paper

  8. HVPSL: Hypervolume Maximization: A Geometric View of Pareto Set Learning
    Authors: Xiaoyuan Zhang et al
    Conference: NeurIPS, 2023
    Link: NeurIPS Conference Paper

  9. Smooth Tchebycheff Scalarization for Multi-Objective Optimization
    Authors: Xi Lin et al
    Conference: ICML, 2024
    Link: arXiv

  10. Low-Rank (LoRA) PSL.

  11. Learning a Neural Pareto Manifold Extractor with Constraints
    Authors: Gupta et al
    Conference: UAI, 2021
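
Most of the entries above share one training recipe: sample a preference vector on the simplex, condition a network on it, and minimize a scalarized loss so that a single model traces the whole front. Below is a minimal sketch of that recipe, pairing a PHN-style preference-conditioned network with the smooth Tchebycheff scalarization of entry 9. The toy objectives, layer sizes, and hyperparameters are illustrative assumptions, and the network here outputs the solution directly; PHN itself instead emits the weights of a separate target network.

```python
import torch
import torch.nn as nn

m, d = 2, 10  # number of objectives, decision dimension (assumed for the demo)

class PrefNet(nn.Module):
    """Maps a preference vector lam on the simplex to a solution x(lam)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(m, 64), nn.ReLU(), nn.Linear(64, d))

    def forward(self, lam):
        return self.net(lam)

def objectives(x):
    # Two toy quadratic objectives with a genuine trade-off (assumed for the demo).
    f1 = ((x - 1.0) ** 2).mean(dim=-1)
    f2 = ((x + 1.0) ** 2).mean(dim=-1)
    return torch.stack([f1, f2], dim=-1)          # shape (batch, m)

def smooth_tchebycheff(f, lam, mu=0.1):
    # STCH_mu(x | lam) = mu * log sum_i exp(lam_i * f_i / mu), with ideal point z* = 0 here.
    return mu * torch.logsumexp(lam * f / mu, dim=-1)

model = PrefNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    lam = torch.distributions.Dirichlet(torch.ones(m)).sample((128,))  # random preferences
    loss = smooth_tchebycheff(objectives(model(lam)), lam).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, sweeping lam across the simplex traces an approximate Pareto front.
```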

Pareto Multi-Task Learning (Discrete Solutions)

  1. PMTL: Pareto Multi-Task Learning. Lin et al. NeurIPS, 2019.

  2. MGDA: Multi-Task Learning as Multi-Objective Optimization. Sener & Koltun. NeurIPS, 2018 (a two-task sketch follows this list).

  3. EPO: Exact Pareto Optimal search via gradient descent with controlled ascent. Mahapatra & Rajan. ICML, 2020.

  4. MOO-SVGD: Profiling Pareto Front with Multi-Objective Stein Variational Gradient Descent. Xingchao Liu et al. NeurIPS, 2021.

  5. GMOOAR: Multi-Objective Deep Learning with Adaptive Reference Vectors. Weiyu Chen et al. NeurIPS, 2022 (also listed above).

  6. PNG: Pareto Navigation Gradient descent. Ye & Liu, 2022.
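
For context on how these methods steer between objectives, here is a hedged sketch of the building block behind entry 2: the min-norm combination of per-task gradients, which has a closed form in the two-task case. The gradient shapes and the epsilon guard are assumptions for the demo; the full MGDA handles k tasks with a Frank-Wolfe solver.

```python
import torch

def mgda_two_task_direction(g1, g2):
    """Closed-form two-task case of the MGDA min-norm problem:
    minimize ||a*g1 + (1-a)*g2||^2 over a in [0, 1], then return the
    common descent direction. g1, g2 are flattened per-task gradients
    of the shared parameters (e.g. from torch.autograd.grad)."""
    diff = g1 - g2
    # Minimizer of the quadratic in a, clipped to the feasible interval [0, 1].
    a = (torch.dot(g2 - g1, g2) / (torch.dot(diff, diff) + 1e-12)).clamp(0.0, 1.0)
    return a * g1 + (1.0 - a) * g2
```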

Using MOO Ideas to Solve MTL for a Single Solution

Note that this line of work uses Pareto/MOO ideas to find one balanced solution for MTL, rather than a set of trade-off solutions.

  1. Nash-MTL: Multi-Task Learning as a Bargaining Game. Navon et al. ICML, 2022 (see the sketch below).
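
As a concrete illustration of the bargaining view, below is a toy solver for the Nash-MTL first-order condition: the joint update direction is d = G^T alpha, where alpha > 0 solves (G G^T) alpha = 1 / alpha. Solving this by generic root finding is an assumption for the demo; the paper instead uses a sequence of convex approximations.

```python
import numpy as np
from scipy.optimize import least_squares

def nash_mtl_weights(G):
    """G: (k, d) array whose rows are per-task gradients. Returns alpha > 0
    approximately solving (G G^T) alpha = 1 / alpha; the joint update
    direction is then d = G.T @ alpha."""
    M = G @ G.T
    def residual(beta):
        alpha = np.exp(beta)                # parameterize to keep alpha positive
        return M @ alpha - 1.0 / alpha
    sol = least_squares(residual, np.zeros(G.shape[0]))
    return np.exp(sol.x)
```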

Theories

  1. HVPSL: Hypervolume Maximization: A Geometric View of Pareto Set Learning
    Authors: Xiaoyuan Zhang et al
    Conference: NeurIPS, 2023
    Link: NeurIPS Conference Paper
    TL;DR: Understanding the generalization bound of PSL.

  2. Revisiting Scalarization in Multi-Task Learning: A Theoretical Perspective
    Authors: Yuzheng Hu et al
    Conference: NeurIPS, 2023
    Link: NeurIPS Conference Paper
    TL;DR: When MOO-MTL actually has no trade-off.

Applications in Very Large Problems

A. Drug design

B. LLM

  1. Panacea: Pareto Alignment via Preference Adaptation for LLMs
    Authors: Yifan Zhong et al
    Conference: Unknown
    Link: arXiv

  2. CPO: Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment.

NN meets MOEA

  1. Pseudo-Weight Net: Learning to Predict Pareto-Optimal Solutions from Pseudo-Weights
    Authors: Deb et al
    Journal: IEEE TEVC
    Link: https://www.egr.msu.edu/~kdeb/papers/c2022010.pdf (a pseudo-weight sketch follows below)
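
For reference, the pseudo-weight vector this paper learns to invert is Deb's classic definition: each objective's normalized distance from its worst value over the set, rescaled to sum to one across objectives. A small NumPy sketch, assuming minimization and with a 1e-12 guard added for degenerate objectives:

```python
import numpy as np

def pseudo_weights(F):
    """F: (n, m) objective values of a nondominated set (minimization).
    Returns the (n, m) pseudo-weight matrix: each point's normalized
    closeness to the best value of each objective, rescaled to sum to 1."""
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    closeness = (fmax - F) / (fmax - fmin + 1e-12)   # 1 = best, 0 = worst
    return closeness / closeness.sum(axis=1, keepdims=True)
```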

Awesome MOO libs

  1. LibMTL: a PyTorch library for multi-task learning. Yu Zhang's group, SUSTech.

  2. LibMOON: a gradient-based multi-objective optimization library in PyTorch. Xiaoyuan Zhang, CityU HK.
