Cross-Problem Learning for Solving Vehicle Routing Problems

This repo implements our paper:

Zhuoyi Lin, Yaoxin Wu, Bangjian Zhou, Zhiguang Cao, Wen Song, Yingqian Zhang, and Senthilnath Jayavelu, “Cross-Problem Learning for Solving Vehicle Routing Problems”, in the International Joint Conference on Artificial Intelligence (IJCAI), 2024.

Please cite our paper if the code is useful for your project.

@inproceedings{lin2024cross,
    title={Cross-Problem Learning for Solving Vehicle Routing Problems},
    author={Lin, Zhuoyi and Wu, Yaoxin and Zhou, Bangjian and Cao, Zhiguang and Song, Wen and Zhang, Yingqian and Jayavelu, Senthilnath},
    booktitle={International Joint Conference on Artificial Intelligence},
    year={2024}
}

Dependencies

Usage

Main differences from Attention, Learn to Solve Routing Problems!

Compared with the original codebase, training and evaluation take three additional options: "finetune_ways", "rank", and "activation_func". The "finetune_ways" option selects the training mode: "normal" corresponds to full fine-tuning and training from scratch in the paper; "inside_tuning" should be used together with "activation_func", which selects the activation function in the adapters; "lora" should be used together with "rank", which sets the rank of the LoRA modules; "side_tuning" corresponds to side-tuning in the paper.
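For intuition, here is a minimal sketch of a LoRA-augmented linear layer in PyTorch; the class name, initialization, and structure are illustrative assumptions, not the exact modules used in this repo:

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper (not the repo's exact module): freeze a
    pretrained linear layer and train a low-rank update B @ A, whose rank
    corresponds to the --rank option."""
    def __init__(self, base: nn.Linear, rank: int = 2):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start

    def forward(self, x):
        # frozen pretrained path plus trainable low-rank residual
        return self.base(x) + x @ self.A.t() @ self.B.t()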

Generating data

Training data is generated on the fly. To generate validation and test data (same as used in the paper) for all problems:

python generate_data.py --problem all --name validation --seed 4321
python generate_data.py --problem all --name test --seed 1234
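The generated files are plain pickle datasets. If you want to sanity-check one, a minimal sketch (using the OP validation file produced above; the exact per-instance layout depends on the problem):

import pickle

# Load a generated dataset; each entry is one problem instance
# (for OP: depot, node locations, prizes, etc. -- layout varies by problem).
with open('data/op/op_const20_validation_seed4321.pkl', 'rb') as f:
    dataset = pickle.load(f)

print(len(dataset))  # number of instances
print(dataset[0])    # inspect one instance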

Training

To train on OP instances with 20 nodes, using rollout as the REINFORCE baseline and the generated validation set, loading weights from the pretrained TSP20 model and performing full fine-tuning:

python run.py --graph_size 20 --baseline rollout --data_distribution const --run_name 'op20_rollout_full_finetuning' --val_dataset data/op/op_const20_validation_seed4321.pkl --finetune_ways normal --load_path pretrain_checkpoints/tsp20/tsp20_pretrain/epoch-99.pt 

To train on OP instances with 20 nodes, loading weights from the pretrained TSP20 model and performing LoRA fine-tuning:

python run.py --graph_size 20 --baseline rollout --data_distribution const --run_name 'op20_rollout_lora' --val_dataset data/op/op_const20_validation_seed4321.pkl --finetune_ways lora --rank 2 --load_path pretrain_checkpoints/tsp20/tsp20_pretrain/epoch-99.pt 

To train on OP instances with 20 nodes, loading weights from the pretrained TSP20 model and performing side-tuning:

python run.py --graph_size 20 --baseline rollout --data_distribution const --run_name 'op20_rollout_side_tuning' --val_dataset data/op/op_const20_validation_seed4321.pkl --finetune_ways side_tuning --load_path pretrain_checkpoints/tsp20/tsp20_pretrain/epoch-99.pt 
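Conceptually, side-tuning keeps the pretrained network frozen and trains a small side network whose output is fused with the backbone's. The sketch below is an illustration only (the module names and the gated fusion are assumptions, not taken from this repo):

import torch
import torch.nn as nn

class SideTuned(nn.Module):
    """Illustrative side-tuning (not the repo's exact module): a frozen
    pretrained backbone plus a small trainable side network, fused by a
    learned gate."""
    def __init__(self, backbone: nn.Module, side: nn.Module):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # only the side network is trained
        self.side = side
        self.alpha = nn.Parameter(torch.zeros(1))  # learnable fusion gate

    def forward(self, x):
        gate = torch.sigmoid(self.alpha)  # starts at 0.5, learned during tuning
        return gate * self.backbone(x) + (1 - gate) * self.side(x)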

To train on OP instances with 20 nodes, loading weights from the pretrained TSP20 model and performing inside-tuning with the LeakyReLU activation:

python run.py --graph_size 20 --baseline rollout --data_distribution const --run_name 'op20_rollout_inside_tuning_leakyrelu' --val_dataset data/op/op_const20_validation_seed4321.pkl --finetune_ways inside_tuning --activation_func leakyrelu --load_path pretrain_checkpoints/tsp20/tsp20_pretrain/epoch-99.pt 
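Inside-tuning inserts small adapter blocks into the frozen encoder, and --activation_func picks their nonlinearity. A rough sketch of such a bottleneck adapter (the dimensions and the set of supported activation names here are assumptions for illustration):

import torch.nn as nn

def make_adapter(dim: int, bottleneck: int = 16, activation: str = 'leakyrelu') -> nn.Module:
    """Illustrative bottleneck adapter for inside-tuning; the activation
    is the kind of choice exposed by --activation_func."""
    activations = {'relu': nn.ReLU(), 'leakyrelu': nn.LeakyReLU(), 'gelu': nn.GELU()}
    return nn.Sequential(
        nn.Linear(dim, bottleneck),  # down-project to a small bottleneck
        activations[activation],     # configurable nonlinearity
        nn.Linear(bottleneck, dim),  # up-project back to the model dimension
    )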

For other settings, change the parameters accordingly.

Evaluation

To evaluate a model, you can use eval.py, which will additionally measure timing and save the results. Note that, as during training, you need to pass the additional options (e.g., --finetune_ways) to specify the model type:

python eval.py data/op/op_const20_test_seed1234.pkl --model pretrain_checkpoints/op20/op_full_finetuning --finetune_ways normal --epochs 99  --decode_strategy greedy

Sampling

To report the best of 1280 sampled solutions, use:

python eval.py data/op/op_const20_test_seed1234.pkl --model pretrain_checkpoints/op20/op_full_finetuning --finetune_ways normal --epochs 99 --decode_strategy sample --width 1280 --eval_batch_size 1
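Under the hood, best-of-N sampling just draws N solutions from the policy and keeps the best one. A schematic sketch (model.sample_solution is a hypothetical API; for OP you would keep the highest-prize tour rather than the lowest-cost one):

import math

def best_of_n(model, instance, n: int = 1280):
    """Sample n solutions for one instance and return the best
    (lowest-cost) one -- the idea behind --decode_strategy sample."""
    best_tour, best_cost = None, math.inf
    for _ in range(n):
        tour, cost = model.sample_solution(instance)  # hypothetical API
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost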

To run baselines

Baselines for different problems are within the corresponding folders and can be run (on multiple datasets at once) as follows:

python -m problems.tsp.tsp_baseline farthest_insertion data/tsp/tsp20_test_seed1234.pkl data/tsp/tsp50_test_seed1234.pkl data/tsp/tsp100_test_seed1234.pkl

To run baselines, you need to install Compass by running the install_compass.sh script from within the problems/op directory and Concorde using the install_concorde.sh script from within problems/tsp. LKH3 should be automatically downloaded and installed when required. To use Gurobi, obtain a (free academic) license and follow the installation instructions.

Other options and help

You can run the commands below, or see the comments in options.py and eval.py:

python run.py -h
python eval.py -h