# [NeurIPS 2023] DeepACO: Neural-enhanced Ant Systems for Combinatorial Optimization
🥳 Welcome! This codebase accompanies the paper DeepACO: Neural-enhanced Ant Systems for Combinatorial Optimization.
## 🚀 Introduction

DeepACO is a generic framework that leverages deep reinforcement learning to automate heuristic design. It strengthens the heuristic measures of existing ACO algorithms and dispenses with laborious manual design in future ACO applications.
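To make "heuristic measure" concrete, here is a minimal, self-contained Ant System sketch for TSP in plain NumPy (an illustrative toy, not the code in this repository). The `heu` matrix below is the classic hand-crafted inverse-distance heuristic; DeepACO's central idea is to have a learned neural model predict this matrix instead of designing it by hand. All function and parameter names here are our own for illustration.

```python
import numpy as np

def ant_system_tsp(dist, n_ants=20, n_iters=50, alpha=1.0, beta=2.0,
                   rho=0.1, seed=0):
    """Toy Ant System for TSP.

    `heu` is the hand-crafted heuristic measure (inverse distance);
    DeepACO replaces matrices like this with the output of a trained
    graph neural network.
    """
    rng = np.random.default_rng(seed)
    n = len(dist)
    heu = 1.0 / (dist + 1e-10)      # hand-crafted heuristic measure
    np.fill_diagonal(heu, 0.0)
    tau = np.ones((n, n))           # pheromone trails
    best_tour, best_len = None, np.inf

    for _ in range(n_iters):
        tours, lengths = [], []
        for _ in range(n_ants):
            tour = [int(rng.integers(n))]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                # transition weights combine pheromone and heuristic
                w = (tau[i, cand] ** alpha) * (heu[i, cand] ** beta)
                nxt = int(rng.choice(cand, p=w / w.sum()))
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append(tour)
            lengths.append(length)
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporate, then deposit pheromone along each ant's tour
        tau *= (1.0 - rho)
        for tour, length in zip(tours, lengths):
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i, j] += 1.0 / length
                tau[j, i] += 1.0 / length
    return best_tour, best_len
```

Under DeepACO, only the construction of `heu` changes; the pheromone update and the solution-construction loop remain standard ACO.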
## 🔑 Usage

### Dependencies
- Python 3.8
- CUDA 11.0 (Using a CPU works just as well 🥺)
- PyTorch 1.7.0
- PyTorch Scatter 2.0.7
- PyTorch Sparse 0.6.9
- PyTorch Geometric 2.0.4
- d2l
- networkx 2.8.4
- numpy 1.23.3
- numba 0.56.4
### Available Problems

- Traveling Salesman Problem (TSP). Please refer to `tsp/` for vanilla DeepACO and `tsp_nls/` for DeepACO with NLS on TSP.
- Capacitated Vehicle Routing Problem (CVRP). Please refer to `cvrp/` for vanilla DeepACO and `cvrp_nls/` for DeepACO with NLS on CVRP.
- Orienteering Problem (OP). Please refer to `op/`.
- Prize Collecting Travelling Salesman Problem (PCTSP). Please refer to `pctsp/`.
- Sequential Ordering Problem (SOP). Please refer to `sop/`.
- Single Machine Total Weighted Tardiness Problem (SMTWTP). Please refer to `smtwtp/`.
- Resource-Constrained Project Scheduling Problem (RCPSP). Please refer to `rcpsp/`.
- Multiple Knapsack Problem (MKP). Please refer to `mkp/` for the implementation of the pheromone model $PH_{suc}$ and `mkp_transformer/` for that of $PH_{items}$.
- Bin Packing Problem (BPP). Please refer to `bpp/`.
## 🎥 Resources

- An elegant DeepACO implementation can be found in the RL4CO library.
- You may be interested in ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution. ReEvo leverages large language models to automate heuristic designs under a reflective evolution framework. It outperforms DeepACO in terms of the scalability and generality of the heuristics.
- You may be interested in Ant Colony Sampling with GFlowNets for Combinatorial Optimization by Minsu Kim, Sanghyeok Choi, Jiwoo Son, Hyeonah Kim, Jinkyoo Park, and Yoshua Bengio, which suggests that DeepACO can be improved by training with GFlowNets.
- Video and slides
- Video-Chinese
- Blog-Chinese
## 🤩 Citation

If you encounter any difficulty using our code, please do not hesitate to submit an issue or contact us directly!

If you find our code helpful (or would simply like to offer us some encouragement), please consider giving us a star and citing our paper.
```bibtex
@inproceedings{ye2023deepaco,
  title={DeepACO: Neural-enhanced Ant Systems for Combinatorial Optimization},
  author={Ye, Haoran and Wang, Jiarui and Cao, Zhiguang and Liang, Helan and Li, Yong},
  booktitle={Advances in Neural Information Processing Systems},
  year={2023}
}
```