# awesome-moo-ml-papers

## Pareto set learning

Methods that learn the whole Pareto set/front with a single model. A minimal sketch of the common hypernetwork recipe follows this list.
- PHN: Learning the Pareto Front with Hypernetworks. Authors: Navon et al. Conference: ICLR, 2021. Link: arXiv:2010.04104
- PHN-HVI: Improving Pareto Front Learning via Multi-Sample Hypernetworks. Authors: Hoang et al. Conference: AAAI, 2023. Link: arXiv:2212.01130
- PSL-Frame: A Framework for Controllable Pareto Front Learning with Completed Scalarization Functions and Its Applications. Authors: Tuan et al. Journal: Neural Networks, 2024. Link: arXiv:2302.12487
- PSL-Exp: Pareto Set Learning for Expensive Multi-Objective Optimization. Authors: Lin et al. Conference: NeurIPS, 2022. Link: arXiv:2210.08495
- COSMOS: Scalable Pareto Front Approximation for Deep Multi-Objective Learning. Authors: Ruchte et al. Conference: ICDM, 2021. Link: arXiv:2103.13392
- PaMaL: Pareto Manifold Learning: Tackling Multiple Tasks via Ensembles of Single-Task Models. Authors: Dimitriadis et al. Conference: ICML, 2023. Link: PMLR
- GMOOAR: Multi-Objective Deep Learning with Adaptive Reference Vectors. Authors: Weiyu Chen et al. Conference: NeurIPS, 2022. Link: NeurIPS proceedings
- HVPSL: Hypervolume Maximization: A Geometric View of Pareto Set Learning. Authors: Xiaoyuan Zhang et al. Conference: NeurIPS, 2023. Link: NeurIPS proceedings
- Smooth Tchebycheff Scalarization for Multi-Objective Optimization. Authors: Xi Lin et al. Conference: ICML, 2024. Link: arXiv
- Low-rank (LoRA) PSL.
- Learning a Neural Pareto Manifold Extractor with Constraints. Authors: Gupta et al. Conference: UAI, 2021.
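Most entries above share one recipe: a hypernetwork maps a sampled preference vector to the parameters of a target model, trained with a scalarized loss. Below is a minimal sketch of that recipe, not any paper's official code; the module sizes, the toy objectives, and the use of plain linear scalarization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """Maps a preference vector on the simplex to a flat parameter vector."""
    def __init__(self, n_obj: int, target_dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_obj, 64), nn.ReLU(), nn.Linear(64, target_dim)
        )

    def forward(self, pref: torch.Tensor) -> torch.Tensor:
        return self.body(pref)  # parameters of the tiny target model

def toy_losses(theta: torch.Tensor, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Two objectives (L1 and L2 error) of a linear target model.
    pred = x @ theta[:2] + theta[2]
    return torch.stack([(pred - y).abs().mean(), (pred - y).pow(2).mean()])

hnet = HyperNet(n_obj=2, target_dim=3)
opt = torch.optim.Adam(hnet.parameters(), lr=1e-3)
x, y = torch.randn(256, 2), torch.randn(256)

for _ in range(500):
    pref = torch.distributions.Dirichlet(torch.ones(2)).sample()  # random trade-off
    losses = toy_losses(hnet(pref), x, y)
    scalarized = (pref * losses).sum()  # linear scalarization; PHN also has an EPO variant
    opt.zero_grad()
    scalarized.backward()
    opt.step()

# After training, sweeping `pref` over the simplex traces an approximate Pareto front.
```

The selling point over training one network per trade-off is that a single forward pass per preference yields tailored weights at inference time.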
## Pareto multi-task learning (discrete solutions)

- PMTL: Pareto Multi-Task Learning. Authors: Lin et al. Conference: NeurIPS, 2019.
- MGDA: Multi-Task Learning as Multi-Objective Optimization. Authors: Sener et al. Conference: NeurIPS, 2018. (A sketch of its two-task closed form follows this list.)
- EPO: Exact Pareto Optimal search (gradient descent with controlled ascent). Authors: Mahapatra et al. Conference: ICML, 2020.
- MOO-SVGD: Profiling Pareto Front with Multi-Objective Stein Variational Gradient Descent. Authors: Xingchao Liu et al. Conference: NeurIPS, 2021.
- GMOOAR: Multi-Objective Deep Learning with Adaptive Reference Vectors (also listed in the Pareto set learning section above).
- PNG: Pareto Navigation Gradient Descent.
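As referenced in the MGDA entry above, the two-task case of its min-norm subproblem has a well-known closed form. A hedged sketch of the standard derivation; the helper name is mine:

```python
import torch

def mgda_two_task_direction(g1: torch.Tensor, g2: torch.Tensor) -> torch.Tensor:
    """Min-norm convex combination of two task gradients (MGDA, two-task case).

    Solves gamma* = argmin_{gamma in [0, 1]} ||gamma*g1 + (1-gamma)*g2||^2,
    which has the closed form gamma* = clip((g2 - g1).g2 / ||g1 - g2||^2, 0, 1).
    The result is a common descent direction for both tasks.
    """
    diff = g1 - g2
    gamma = ((g2 - g1) @ g2 / diff.dot(diff).clamp_min(1e-12)).clamp(0.0, 1.0)
    return gamma * g1 + (1.0 - gamma) * g2
```

With more than two tasks, MGDA solves the same min-norm problem over the full simplex, typically with a Frank-Wolfe solver.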
## Using MOO ideas to solve MTL for a single solution

Note that this line of work applies Pareto ideas to MTL, but returns a single solution rather than a set.
- Nash-MTL: Multi-Task Learning as a Bargaining Game. Authors: Navon et al. Conference: ICML, 2022.
## Theories

- HVPSL: Hypervolume Maximization: A Geometric View of Pareto Set Learning. Authors: Xiaoyuan Zhang et al. Conference: NeurIPS, 2023. Link: NeurIPS proceedings. TL;DR: understanding the generalization bound of PSL.
- Revisiting Scalarization in Multi-Task Learning: A Theoretical Perspective. Authors: Yuzheng Hu et al. Conference: NeurIPS, 2023. Link: NeurIPS proceedings. TL;DR: when MOO-MTL actually has no tradeoff. (The scalarization in question is recalled below.)
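For background (standard material, not a claim taken from either paper above), the linear scalarization these theory papers analyze is

$$
\min_{\theta}\ \sum_{k=1}^{K}\lambda_k L_k(\theta),
\qquad \lambda \in \Delta^{K-1} = \Big\{\lambda \ge 0 :\ \sum_{k=1}^{K}\lambda_k = 1\Big\}.
$$

Classically, when the achievable loss set is convex, sweeping $\lambda$ over the simplex traces the entire Pareto front; Hu et al. revisit how much of this picture survives in the non-convex neural MTL setting.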
## Applications in very large problems

### A. Drug design

### B. LLMs
- Panacea: Pareto Alignment via Preference Adaptation for LLMs. Authors: Yifan Zhong et al. Conference: unknown. Link: arXiv
- Controllable Preference Optimization.
## NN meets MOEA

- Pseudo Weight Net: Learning to Predict Pareto-Optimal Solutions from Pseudo-Weights. Author: Deb et al. Journal: IEEE TEVC. Link: https://www.egr.msu.edu/~kdeb/papers/c2022010.pdf (The standard pseudo-weight definition is sketched below.)
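The pseudo-weight of a nondominated point measures, per objective, its normalized distance from the worst observed value. A minimal sketch assuming the standard EMO definition; the function name is mine:

```python
import numpy as np

def pseudo_weights(F: np.ndarray) -> np.ndarray:
    """Pseudo-weights for a nondominated set F of shape (n_points, n_obj),
    objectives to be minimized. Each row sums to 1 and acts as the trade-off
    vector 'explaining' the corresponding point."""
    f_min, f_max = F.min(axis=0), F.max(axis=0)
    dist_from_worst = (f_max - F) / np.maximum(f_max - f_min, 1e-12)
    return dist_from_worst / np.maximum(
        dist_from_worst.sum(axis=1, keepdims=True), 1e-12
    )
```

Deb's paper then learns the inverse map: given a desired pseudo-weight, predict the Pareto-optimal solution.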
## Awesome MOO libs

- LibMTL, from Yu Zhang's group at SUSTech.
- LibMOON, from Xiaoyuan Zhang at CityU HK.