Time Series Foundation Model - TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting

<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/TEMPO_logo.png width=80% /></div>

The official code for ["TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting (ICLR 2024)"].

TEMPO (v1.0) is one of the first open-source time series foundation models for forecasting.

<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/TEMPO.png width=80% /></div>

Build the environment

conda create -n tempo python=3.8
conda activate tempo
pip install timeagi
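
To verify the environment, you can check that the package imports cleanly (a minimal sanity check; it assumes the timeagi package exposes the tempo module used in the demo below):

# Quick sanity check of the installation
import torch
from tempo.models.TEMPO import TEMPO  # provided by the timeagi package

print(torch.__version__)
print("TEMPO import OK")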

Script Demo

A streamlined example showing how to perform forecasting with TEMPO:

# Third-party library imports
import numpy as np
import torch
# Local imports
from tempo.models.TEMPO import TEMPO

# Download and load the pre-trained TEMPO checkpoint from Hugging Face
model = TEMPO.load_pretrained_model(
    device=torch.device('cuda:0' if torch.cuda.is_available() else 'cpu'),
    repo_id="Melady/TEMPO",
    filename="TEMPO-80M_v1.pth",
    cache_dir="./checkpoints/TEMPO_checkpoints"
)

input_data = np.random.rand(336)  # Random univariate input series (length 336)
with torch.no_grad():
    predicted_values = model.predict(input_data, pred_length=96)  # Forecast the next 96 steps
print("Predicted values:")
print(predicted_values)
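
The same pattern works for real data. Here is a sketch that reuses the model loaded above and assumes a univariate series stored in a CSV column (the file name and column name are placeholders):

import pandas as pd
import torch

# Hypothetical input file: one value per row in a column named "value"
series = pd.read_csv("my_series.csv")["value"].to_numpy()

context = series[-336:]  # use the last 336 observations as context, as in the demo above
with torch.no_grad():
    forecast = model.predict(context, pred_length=96)  # forecast the next 96 steps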

Demos

1. Reproducing zero-shot experiments on ETTh2:

Please try reproducing the zero-shot experiments on ETTh2 [here on Colab].

2. Zero-shot experiments on a custom dataset:

The following Colab notebook demonstrates how to build a custom dataset and run inference directly with our pre-trained foundation model: [Colab]

3. Online demo:

Please try our foundation model demo [here].

<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/TEMPO_demo.jpg width=80% /></div>

Practice on your end

Our models are also available on Hugging Face: [Melady/TEMPO].

Get Data

Download the data from [Google Drive] or [Baidu Drive], and place it in the folder ./dataset. You can also download the STL results from [Google Drive], and place them in the folder ./stl.
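
If you want to produce STL results for your own data, here is a minimal sketch using statsmodels' STL decomposition (the seasonal period is an assumption and should match your data; the released STL files may have been generated differently):

import numpy as np
from statsmodels.tsa.seasonal import STL

series = np.random.rand(336)           # stand-in for a real univariate series
result = STL(series, period=24).fit()  # period=24 assumes hourly data with daily seasonality
trend, seasonal, residual = result.trend, result.seasonal, result.resid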

Run TEMPO

Pre-Training Stage

bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather].sh

Test / Inference Stage

After training, we can test the TEMPO model in the zero-shot setting:

bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather]_test.sh
<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/results.jpg width=90% /></div>
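
The test scripts report standard forecasting errors; for reference, MSE and MAE over a forecast can be computed as follows (a generic sketch, not the repo's exact evaluation code):

import numpy as np

def mse(pred, true):
    # Mean squared error between forecast and ground truth
    return np.mean((pred - true) ** 2)

def mae(pred, true):
    # Mean absolute error between forecast and ground truth
    return np.mean(np.abs(pred - true))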

Pre-trained Models

You can download the pre-trained model from [Google Drive] and then run the test script for fun.

TETS dataset

Here is the prompt used to generate the corresponding textual information for the time series via the [OpenAI ChatGPT-3.5 API]:

<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/TETS_prompt.png width=80% /></div>

The time series data come from the [S&P 500]. Here is the EBITDA case for one company from the dataset:

<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/Company1_ebitda_summary.png width=80% /></div>

Example of the generated contextual information for the company shown above:

<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/Company1_ebitda_summary_words.jpg width=80% /></div>

You can download the processed data with text embeddings from GPT-2 here: [TETS].
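
If you want to embed your own contextual text in a similar way, here is a sketch using Hugging Face transformers' GPT-2 (mean pooling is an assumed choice; the released TETS embeddings may have been produced differently):

import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
gpt2 = GPT2Model.from_pretrained("gpt2")

text = "Quarterly EBITDA increased on strong product demand."  # example summary text
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = gpt2(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
embedding = hidden.mean(dim=1)                 # mean-pool to a (1, 768) vector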

Contact

Feel free to contact DefuCao@USC.EDU / YanLiu.CS@USC.EDU if you're interested in applying TEMPO to your real-world application.

Cite our work

@inproceedings{cao2024tempo,
  title={{TEMPO}: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting},
  author={Defu Cao and Furong Jia and Sercan O Arik and Tomas Pfister and Yixiang Zheng and Wen Ye and Yan Liu},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=YH5w12OUuU}
}

@article{Jia_Wang_Zheng_Cao_Liu_2024,
  title={GPT4MTS: Prompt-based Large Language Model for Multimodal Time-series Forecasting},
  author={Jia, Furong and Wang, Kevin and Zheng, Yixiang and Cao, Defu and Liu, Yan},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={21},
  pages={23343--23351},
  year={2024},
  month={Mar.},
  url={https://ojs.aaai.org/index.php/AAAI/article/view/30383},
  doi={10.1609/aaai.v38i21.30383}
}