pytorch-drl4vrp

Implementation of: Nazari, Mohammadreza, et al. "Deep Reinforcement Learning for Solving the Vehicle Routing Problem." arXiv preprint arXiv:1802.04240 (2018).

Currently, Traveling Salesman Problems and Vehicle Routing Problems are supported. See the tasks/ folder for details.

Requirements:

To Run

Run by calling python trainer.py

Tasks and complexity can be changed through the "task" and "nodes" flags:

python trainer.py --task=vrp --nodes=10

To restore a checkpoint, specify the path to a folder containing "actor.pt" and "critic.pt" checkpoints. Sample weights can be found here

python trainer.py --task=vrp --nodes=10 --checkpoint=vrp10

Differences from paper:

TSP Sample Tours:

Left: TSP with 20 cities

Right: TSP with 50 cities

<p align="center"> <img src="./docs/tsp20.png" width="300"/> <img src="./docs/tsp50.png" width="300"/> </p>

VRP Sample Tours:

Left: VRP with 10 cities + load 20

Right: VRP with 20 cities + load 30

<p align="center"> <img src="./docs/vrp10.png" width="300"/> <img src="./docs/vrp20.png" width="300"/> </p>

TSP

The following masking scheme is used for the TSP:

  1. If a salesman has visited a city, it is not allowed to re-visit it.
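This rule can be sketched as a feasibility mask that is updated after each decoder step. A minimal sketch in PyTorch; the function name and tensor shapes are illustrative, not the repo's exact API:

```python
import torch

def update_tsp_mask(mask, chosen_idx):
    """Zero out each newly visited city so it cannot be selected again.
    mask: (batch, num_cities) of 1s (available) / 0s (visited);
    chosen_idx: (batch,) index of the city picked this step."""
    return mask.scatter(1, chosen_idx.unsqueeze(1), 0)

mask = torch.ones(2, 5)                       # 2 tours, 5 cities, all available
mask = update_tsp_mask(mask, torch.tensor([3, 0]))
```

The decoder would then add `log(mask)` (or a large negative number at masked positions) to the city logits before the softmax.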

VRP

The VRP deals with dynamic elements (load 'L', demand 'D') that change every time the vehicle / salesman visits a city. Each city is generated with a random demand in the range [1, 9]. The salesman has an initial capacity that scales with the complexity of the problem (e.g. the number of nodes).
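Instance generation along these lines can be sketched as follows; `make_vrp_instance` is a hypothetical helper, not the repo's actual function:

```python
import torch

def make_vrp_instance(batch, num_nodes, max_load):
    """Random VRP instances: 2-D coordinates plus integer demands drawn
    uniformly from [1, 9]; node 0 is the depot and has zero demand."""
    coords = torch.rand(batch, num_nodes + 1, 2)            # depot + cities
    demand = torch.randint(1, 10, (batch, num_nodes + 1)).float()
    demand[:, 0] = 0.0                                      # depot has no demand
    load = torch.full((batch,), float(max_load))            # initial capacity
    return coords, demand, load

coords, demand, load = make_vrp_instance(4, 10, 20)
```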

The following masking scheme is used for the VRP:

  1. If no demand remains at any city, end the tour. Note that the vehicle must still return to the depot to complete the tour.
  2. The vehicle can visit any city, as long as it is able to fully satisfy that city's demand (this is easy to modify for partial deliveries if needed).
  3. The vehicle may not visit the depot more than once in a row (this speeds up training).
  4. A vehicle may only remain at the depot for consecutive steps once it has completed its route and is waiting for other vehicles to finish (e.g. when training in a minibatch setting).
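A minimal sketch of how rules 1-4 above could be expressed as a boolean feasibility mask; all names and shapes are illustrative, not the repo's exact code:

```python
import torch

def vrp_mask(demand, load, last_was_depot):
    """demand: (batch, nodes) with node 0 = depot; load: (batch,);
    last_was_depot: (batch,) bool flag for the previous step."""
    # Rule 2: a city is only visitable if its full demand fits in the load
    mask = (demand > 0) & (demand <= load.unsqueeze(1))
    # Rules 3-4: the depot is available unless we just came from it
    # and there is still outstanding demand elsewhere (rule 1)
    has_demand = demand[:, 1:].sum(dim=1) > 0
    mask[:, 0] = ~last_was_depot | ~has_demand
    return mask

demand = torch.tensor([[0.0, 3.0, 8.0]])     # depot plus two cities
m = vrp_mask(demand, torch.tensor([5.0]), torch.tensor([False]))
```

Here city 1 (demand 3) fits in the remaining load of 5, while city 2 (demand 8) is masked out.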

In this project the following dynamic updates are used:

  1. If a vehicle visits a city, its load changes according to Load = Load - Demand_i, and the demand at the city changes according to Demand_i = (Demand_i - Load)^+, i.e. max(0, Demand_i - Load)
  2. Returning to the depot refills the vehicle's load. The depot is given a "negative" demand that increases in proportion to the amount of load missing from the vehicle
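The two updates above can be sketched like this; `update_dynamic` and its signature are illustrative assumptions, not the repo's exact implementation:

```python
import torch

def update_dynamic(load, demand, chosen_idx, max_load=20.0):
    """Apply the load/demand updates for the node at chosen_idx
    (node 0 is the depot)."""
    idx = chosen_idx.unsqueeze(1)
    d = demand.gather(1, idx).squeeze(1)          # demand at the visited node
    # Demand_i = (Demand_i - Load)^+  (zero when demand is fully satisfied)
    new_demand = demand.scatter(1, idx, (d - load).clamp(min=0.0).unsqueeze(1))
    # Load = (Load - Demand_i)^+
    new_load = (load - d).clamp(min=0.0)
    # Visiting the depot (node 0) refills the vehicle's load
    new_load = torch.where(chosen_idx == 0,
                           torch.full_like(load, max_load), new_load)
    return new_load, new_demand

new_load, new_demand = update_dynamic(torch.tensor([5.0]),
                                      torch.tensor([[0.0, 3.0, 8.0]]),
                                      torch.tensor([1]))
```

Visiting city 1 (demand 3) with a load of 5 leaves a load of 2 and zeroes that city's demand.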

Results:

Tour Accuracy

This repo only implements the "Greedy" approach at test time, which selects the city with the highest probability at each decoding step. Tour lengths for this project and the corresponding paper are reported below. The remaining gap could likely be closed through a hyperparameter search, which has not been conducted here.
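Greedy decoding simply takes the argmax over the (masked) city probabilities at each step, in contrast to sampling from the distribution during training. A one-line sketch:

```python
import torch

def greedy_step(probs):
    """Test-time greedy decoding: pick the highest-probability city.
    probs: (batch, num_cities) after masking and softmax."""
    return probs.argmax(dim=1)

city = greedy_step(torch.tensor([[0.1, 0.7, 0.2]]))   # selects city 1
```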

| Task | Paper ("Greedy") | This repo |
| --- | --- | --- |
| TSP20 | 3.97 | 4.032 |
| TSP50 | 6.08 | 6.226 |
| TSP100 | 8.44 | - |
| VRP10 Cap 20 | 4.84 | 5.082 |
| VRP20 Cap 30 | 6.59 | 6.904 |
| VRP50 Cap 40 | 11.39 | - |
| VRP100 Cap 50 | 17.23 | - |

Training Time

On a Tesla P-100 GPU, the following training times were observed. Results were obtained by taking the total time for the first 100 training iterations (at the respective batch sizes) and converting it into the appropriate time unit. Note that, for the VRP in particular, the models are relatively untrained during this window, so these numbers may be slightly inaccurate and your mileage may vary.

| Task | Batch Size | Sec / 100 Updates | Min / Epoch | Hours / Epoch | 20 Epochs |
| --- | --- | --- | --- | --- | --- |
| TSP20 | 128 | 8.23 | 10.71 | 0.18 | 3.57 |
| TSP20 | 256 | 11.90 | 7.75 | 0.13 | 2.58 |
| TSP20 | 512 | 19.10 | 6.22 | 0.10 | 2.07 |
| TSP50 | 128 | 21.64 | 28.17 | 0.47 | 9.39 |
| TSP50 | 256 | 31.33 | 20.40 | 0.34 | 6.80 |
| TSP50 | 512 | 51.70 | 16.83 | 0.28 | 5.61 |
| TSP100 | 128 | 48.27 | 62.85 | 1.05 | 20.95 |
| TSP100 | 256 | 73.51 | 47.85 | 0.80 | 15.95 |
| Task | Batch Size | Sec / 100 Updates | Min / Epoch | Hours / Epoch | 20 Epochs |
| --- | --- | --- | --- | --- | --- |
| VRP10 | 128 | 12.15 | 15.82 | 0.26 | 5.27 |
| VRP10 | 256 | 15.75 | 10.25 | 0.17 | 3.42 |
| VRP10 | 512 | 23.30 | 7.58 | 0.13 | 2.53 |
| VRP20 | 128 | 21.45 | 27.93 | 0.47 | 9.31 |
| VRP20 | 256 | 28.29 | 18.42 | 0.31 | 6.14 |
| VRP20 | 512 | 43.20 | 14.06 | 0.23 | 4.69 |
| VRP50 | 128 | 53.59 | 69.77 | 1.16 | 23.26 |
| VRP50 | 256 | 77.25 | 50.29 | 0.84 | 16.76 |
| VRP50 | 512 | 127.73 | 41.58 | 0.69 | 13.86 |
| VRP100 | 128 | 130.06 | 169.35 | 2.82 | 56.45 |
| VRP100 | 64 | 95.03 | 247.48 | 4.12 | 82.49 |

Acknowledgements:

Thanks to https://github.com/pemami4911/neural-combinatorial-rl-pytorch for insight into a bug with the random number generator on the GPU