GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks

A PyTorch implementation of the paper: GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks<br />

Zhonghang Li, Lianghao Xia, Yong Xu, Chao Huang* (*Correspondence)<br />

Data Intelligence Lab@University of Hong Kong, South China University of Technology, PAZHOU LAB

This repository hosts the code, data, and model weights of GPT-ST, as well as the code for the baselines used in the paper.

Introduction

<p style="text-align: justify"> GPT-ST is a generative pre-training framework for improving the spatio-temporal prediction performance of downstream models. The framework is built upon two key designs: (i) We propose a spatio-temporal mask autoencoder as a pre-training model for learning spatio-temporal dependencies. The model incorporates customized parameter learners and hierarchical spatial pattern encoding networks, which are specifically designed to capture customized spatio-temporal representations as well as intra- and inter-cluster region semantic relationships. (ii) We introduce an adaptive mask strategy as part of the pre-training mechanism. This strategy guides the mask autoencoder toward learning robust spatio-temporal representations and facilitates the modeling of different relationships, from intra-cluster to inter-cluster, in an easy-to-hard training manner. </p>
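To make the two designs above concrete, here is a minimal, framework-agnostic NumPy sketch of the masked pre-training idea. It is a stand-in under loud assumptions: it uses plain random masking and a linear easy-to-hard ratio schedule, whereas GPT-ST's actual mask is adaptive and learned; all shapes and function names here are hypothetical and not part of the released code.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(x, mask_ratio, rng):
    """Zero out a random fraction of (time, region) positions.

    x: array of shape (batch, time, regions, features).
    Returns the masked input and a boolean mask (True = masked).
    NOTE: random masking is a simplification; GPT-ST learns its mask.
    """
    B, T, N, _ = x.shape
    mask = rng.random((B, T, N, 1)) < mask_ratio
    return np.where(mask, 0.0, x), mask

def mask_ratio_schedule(epoch, total_epochs, start=0.25, end=0.75):
    """Easy-to-hard curriculum: linearly raise the mask ratio over
    training (illustrative schedule, not the paper's exact rule)."""
    frac = epoch / max(total_epochs - 1, 1)
    return start + frac * (end - start)

# Example: a PEMS08-like tensor with 170 regions and 12 time steps.
x = rng.standard_normal((2, 12, 170, 1))
ratio = mask_ratio_schedule(epoch=0, total_epochs=10)
x_masked, mask = random_mask(x, ratio, rng)
# The autoencoder is then trained to reconstruct x at the masked
# positions, e.g. with an MSE loss: ((x_hat - x) ** 2)[mask].mean()
```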

The detailed framework of the proposed GPT-ST.

Code structure

Environment requirement

The code can be run in the following environments; other versions of the required packages may also work.

Alternatively, you can set up the required environment by running the following commands:

# create a new environment
conda create -n GPT-ST python=3.9.12

# activate the environment
conda activate GPT-ST

# Torch with CUDA 11.1
pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html

# Install required libraries
pip install -r requirements.txt

Run the code

cd model
# Evaluate the performance of STGCN enhanced by GPT-ST on the PEMS08 dataset
python Run.py -dataset PEMS08 -mode eval -model STGCN

# Evaluate the performance of ASTGCN enhanced by GPT-ST on the METR_LA dataset
python Run.py -dataset METR_LA -mode eval -model ASTGCN

# Evaluate the original performance of CCRNN on the NYC_TAXI dataset
python Run.py -dataset NYC_TAXI -mode ori -model CCRNN

# Pretrain from scratch on the NYC_BIKE dataset; the checkpoint will be saved to model/SAVE/NYC_BIKE/new_pretrain_model.pth
python Run.py -dataset NYC_BIKE -mode pretrain
# Set first_layer_embedding_size and out_layer_dim to 32 in STFGNN
python Run.py -model STFGNN -mode eval -dataset PEMS08 --first_layer_embedding_size 32 --out_layer_dim 32

Citation

@inproceedings{li2023gptst,
  author    = {Zhonghang Li and Lianghao Xia and Yong Xu and Chao Huang},
  title     = {GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks},
  booktitle = {Advances in Neural Information Processing Systems},
  pages     = {70229--70246},
  year      = {2023}
}

Acknowledgements

We developed our code framework drawing inspiration from AGCRN and STEP. The implementation of the baselines primarily relies on a combination of the code released by the original authors and the code from LibCity. We extend our heartfelt gratitude for their remarkable contributions.