Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization
Sym-NCO is a deep reinforcement learning (DRL)-based neural combinatorial optimization scheme that exploits the symmetric nature of combinatorial optimization problems.
Before reading our code, we strongly recommend reading the code of AM and POMO, which our code is based on.
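For intuition, one form of symmetricity Sym-NCO exploits is the rotational invariance of Euclidean routing problems: rotating all node coordinates leaves tour lengths, and hence the optimal solution, unchanged. Below is a minimal PyTorch sketch of such a rotation augmentation; the names and shapes are illustrative and not taken from this repository.
```python
import math
import torch

def rotate_coords(coords: torch.Tensor, angle: torch.Tensor) -> torch.Tensor:
    """Rotate 2D node coordinates around the origin.

    coords: (batch, num_nodes, 2) x/y positions.
    angle:  (batch,) rotation angles in radians.
    """
    cos, sin = torch.cos(angle), torch.sin(angle)
    # Batched 2x2 rotation matrices, shape (batch, 2, 2).
    rot = torch.stack(
        [torch.stack([cos, -sin], dim=-1),
         torch.stack([sin, cos], dim=-1)], dim=-2)
    # Row vectors are rotated via v @ R^T.
    return coords @ rot.transpose(-1, -2)

# Pairwise distances, and therefore tour lengths, are unchanged by the
# rotation, so each augmented copy is "the same problem" to the solver.
coords = torch.rand(8, 100, 2)  # 8 random TSP-100 instances
aug = rotate_coords(coords, 2 * math.pi * torch.rand(8))
```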
Sym-NCO-POMO
Sym-NCO is an extension of POMO. POMO already gives strong performance on TSP; Sym-NCO improves POMO on CVRP and, to a smaller extent, on TSP.
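At a high level, Sym-NCO computes REINFORCE advantages against a baseline shared across symmetric rollouts of the same instance (e.g., augmented copies and POMO's multiple start nodes). The hedged sketch below shows only this shared-baseline term; the actual loss in train_symnco.py also includes an invariance regularizer on the encoder embeddings, so treat this as illustrative rather than the exact objective.
```python
import torch

def symmetric_reinforce_loss(log_probs: torch.Tensor,
                             costs: torch.Tensor) -> torch.Tensor:
    """REINFORCE with a baseline shared over symmetric rollouts.

    log_probs: (batch, K) summed log-probabilities of each sampled tour.
    costs:     (batch, K) tour lengths, where the K rollouts per instance
               come from symmetric transformations and/or multiple starts.
    """
    # Shared baseline: mean cost over the K symmetric rollouts of each instance.
    baseline = costs.mean(dim=1, keepdim=True)
    advantage = costs - baseline
    # Minimizing this loss decreases the likelihood of above-average-cost tours.
    return (advantage.detach() * log_probs).mean()

# Example: 4 instances, K = 8 symmetric rollouts each.
log_probs = torch.randn(4, 8, requires_grad=True)
costs = torch.rand(4, 8) * 10
loss = symmetric_reinforce_loss(log_probs, costs)
loss.backward()
```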
TSP
First, go to the folder:
cd Sym-NCO-POMO/TSP/
Test
Sym-NCO test
python test_symnco.py
You can change "test_batch_size", "aug_factor" (which controls the sample width), and "aug_batch_size".
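In the POMO-style scripts these options are set in a parameter dictionary inside test_symnco.py rather than passed on the command line. An illustrative (not authoritative) example of the kind of entries to edit:
```python
# Illustrative values only; edit the corresponding dict in test_symnco.py
# (key names follow the POMO-style tester configuration).
tester_params = {
    'test_batch_size': 1000,  # instances evaluated per batch
    'aug_factor': 8,          # number of symmetric augmentations (sample width)
    'aug_batch_size': 1000,   # batch size used when augmentation is enabled
}
```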
POMO baseline test
python test_baseline.py
Training
Sym-NCO training
python train_symnco.py
POMO training
python train_baseline.py
CVRP
First, go to the folder:
cd Sym-NCO-POMO/CVRP/
Test
Sym-NCO test
python test_symnco.py
As for TSP, you can change "test_batch_size", "aug_factor" (which controls the sample width), and "aug_batch_size".
POMO baseline test
python test_baseline.py
Training
Sym-NCO training
python train_symnco.py
POMO training
python train_baseline.py
Sym-NCO-AM
Sym-NCO can also be applied to the vanilla AM model.
AM extends more easily to various problems, including TSP, CVRP, PCTSP, and OP.
We provide pretrained Sym-NCO-based AM models for PCTSP and OP.
First, go to the folder:
cd Sym-NCO-AM/
Test
General
python eval.py --dataset_path [YOUR_DATASET] --model [YOUR_MODEL] --eval_batch_size [YOUR_BATCH_SIZE] --augment [SAMPLE_WIDTH]
Reproduce PCTSP results
python eval.py --dataset_path 'data/pctsp100_test_seed1234.pkl' --model pretrained_model/pctsp_100/epoch-99.pt
Reproduce OP results
python eval.py --dataset_path 'data/op_dist100_test_seed1234.pkl' --model pretrained_model/op_100/epoch-99.pt
Train
General
python run.py --problem [TARGET_PROBLEM: one of 'tsp', 'cvrp', 'pctsp_det', 'op'] --N_aug [L: number of symmetric augmentations]
Example
python run.py --problem 'tsp' --N_aug 10
Dependencies (same as AM)
- Python>=3.8
- NumPy
- SciPy
- PyTorch>=1.7
- tqdm
- pytz
- Matplotlib (optional, only for plotting)
Further Work
Symmetricity learning gives powerful benefits for combinatorial optimization. However, little room remains for further improvement in the DRL setting. Instead, imitation learning in a sparse-data setting can be an alternative benchmark for further work. If you want to build on Sym-NCO, please check out Sym-NCO-IL.
Paper
This is the official PyTorch code for our paper "Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization", which has been accepted at NeurIPS 2022.
@article{kim2022sym,
title={Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization},
author={Kim, Minsu and Park, Junyoung and Park, Jinkyoo},
journal={arXiv preprint arXiv:2205.13209},
year={2022}
}