
GOAL: A Generalist Combinatorial Optimization Agent Learning

This repository provides the code to learn to solve 16 standard combinatorial optimization problems.

The multi-task model is pretrained on eight problems.

Fine-tuning (supervised or unsupervised) is possible on eight new problems.

Paper

See GOAL: A Generalist Combinatorial Optimization Agent Learning for the paper associated with this codebase. If you find this code useful, please cite our paper as:

@article{drakulic2024goal,
   title={GOAL: A Generalist Combinatorial Optimization Agent Learning},
   author={Darko Drakulic and Sofia Michel and Jean-Marc Andreoli},
   journal={arXiv:2406.15079 [cs.LG]},
   year={2024},
   url={https://arxiv.org/abs/2406.15079},
}

Run training on a set of problems

python3 train.py \
  --problems [list_of_problems] \
  --train_datasets [list_of_train_datasets] \
  --val_datasets [list_of_val_datasets] \
  --test_datasets [list_of_test_datasets]
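For example, a two-problem training run might look like the following. The dataset file names are placeholders, not files shipped with the repository, and the assumption that multiple values are passed space-separated should be checked against `python3 train.py -h`:

```shell
# Hypothetical two-problem training run; dataset paths are illustrative.
python3 train.py \
  --problems tsp cvrp \
  --train_datasets data/tsp_train.pkl data/cvrp_train.pkl \
  --val_datasets data/tsp_val.pkl data/cvrp_val.pkl \
  --test_datasets data/tsp_test.pkl data/cvrp_test.pkl
```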

For all training arguments:

python3 train.py -h

Testing the trained model

python3 test.py \
  --problems [tsp|cvrp|cvrptw|op|kp|upms|jssp] \
  --test_datasets [list_of_test_datasets] \
  --pretrained_model pretrained/multi.best
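As a concrete sketch, evaluating the provided multi-task checkpoint on TSP might look like this. The checkpoint path comes from the repository's pretrained/ directory; the test dataset file name is a placeholder for one of the files in data/:

```shell
# Evaluate the provided multi-task checkpoint on TSP.
# The test file name is illustrative; use a file from data/.
python3 test.py \
  --problems tsp \
  --test_datasets data/tsp_test.pkl \
  --pretrained_model pretrained/multi.best
```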

For all test options:

python3 test.py -h

Fine-tuning the pretrained model

Supervised

python3 finetune.py \
  --problems [trp|pctsp|ocvrp|sdcvrp|sop|mis|mclp|ossp] \
  --train_datasets [train_dataset] \
  --test_datasets [test_dataset] \
  --pretrained_model pretrained/multi.best

Unsupervised

python3 finetune.py \
  --problems [trp|pctsp|ocvrp|sdcvrp|sop|mis|mclp|ossp] \
  --test_datasets [test_dataset] \
  --pretrained_model pretrained/multi.best
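Note that the unsupervised variant differs from the supervised one only in omitting `--train_datasets`. A sketch of an unsupervised fine-tuning run on TRP (the test dataset file name is a placeholder):

```shell
# Hypothetical unsupervised fine-tuning on TRP: no --train_datasets,
# only a test set and the pretrained multi-task checkpoint.
python3 finetune.py \
  --problems trp \
  --test_datasets data/trp_test.pkl \
  --pretrained_model pretrained/multi.best
```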

Data

Test and fine-tuning data are provided in the data/ directory.

Pretrained models are provided in the pretrained/ directory.