> [!NOTE]
> We release an open codebase, OpenLTM, to explore the design philosophy of large time-series models. It contains a simple pipeline to train large time-series models :)
Timer (Large Time-Series Model)
This repo provides official code, datasets and checkpoints for Timer: Generative Pre-trained Transformers Are Large Time Series Models. [Poster], [Slides].
Updates
:triangular_flag_on_post: News (2024.12) Timer is enhanced with our further work and pre-trained on 260B time points. The checkpoint is now available: [HuggingFace] [Benchmark]. An example of zero-shot forecasting is provided here.
:triangular_flag_on_post: News (2024.10) We release the numpy format of UTSD. An easier and more efficient dataloader can be found here.
:triangular_flag_on_post: News (2024.6) The pre-training dataset (UTSD) is available on HuggingFace. A dataloader is also included.
:triangular_flag_on_post: News (2024.5) Timer is accepted by ICML 2024. The camera-ready version (31 pages) is available.
:triangular_flag_on_post: News (2024.2) We release model checkpoints and code for fine-tuning.
Introduction
Time Series Transformer (Timer) is a Generative Pre-trained Transformer for general time series analysis.
<p align="center"> <img src="./figures/abilities.png" alt="" align=center /> </p>Zero-Shot Forecasting
We provide the checkpoint to make predictions without training samples. See our HuggingFace repo for detailed information and usage.
An inference example (minimal dependencies required):
```python
import torch
from transformers import AutoModelForCausalLM

# load the pre-trained model
model = AutoModelForCausalLM.from_pretrained('thuml/timer-base-84m', trust_remote_code=True)

# prepare input
batch_size, lookback_length = 1, 2880
seqs = torch.randn(batch_size, lookback_length)

# generate forecast
prediction_length = 96
output = model.generate(seqs, max_new_tokens=prediction_length)
print(output.shape)
```
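In practice, one may want to normalize the lookback window before generation and map the forecast back to the original scale. The continuation below is a minimal sketch of one common instance-normalization scheme; it is an assumption for illustration, not necessarily the preprocessing the checkpoint expects.

```python
# continuing the example above; this normalization scheme is an assumption,
# not necessarily the preprocessing expected by the checkpoint
mean = seqs.mean(dim=-1, keepdim=True)
std = seqs.std(dim=-1, keepdim=True)
normed_seqs = (seqs - mean) / (std + 1e-5)

normed_output = model.generate(normed_seqs, max_new_tokens=prediction_length)
forecast = normed_output * (std + 1e-5) + mean  # map the forecast back to the original scale
print(forecast.shape)
```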
There's indeed room for improvement in this small model. We are actively working on it and welcome constructive suggestions and noteworthy cases :)
Datasets
We collect Unified Time Series Datasets (UTSD), which encompass well-curated time series to facilitate research on large time-series models. The dataset is released on HuggingFace.
<p align="center"> <img src="./figures/utsd.png" alt="" align=center /> </p>Usage
You can access and load UTSD in the style of TSLib with the following steps:
```bash
# huggingface-cli login
# export HF_ENDPOINT=https://hf-mirror.com
python ./scripts/UTSD/download_dataset.py

# dataloader
python ./scripts/UTSD/utsdataset.py
```
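If you prefer to inspect the downloaded data directly, the snippet below is a minimal sketch of reading one series from a numpy-format copy of UTSD. The directory name and file layout are assumptions for illustration; the official dataloader is `./scripts/UTSD/utsdataset.py`.

```python
# a minimal sketch, assuming UTSD has been downloaded as .npy files under ./dataset/UTSD-npy
# (the directory name and layout are hypothetical; adapt them to your local copy)
import numpy as np
from pathlib import Path

data_dir = Path("./dataset/UTSD-npy")
series_files = sorted(data_dir.rglob("*.npy"))

first_series = np.load(series_files[0])  # each file is assumed to store one univariate series
print(series_files[0].name, first_series.shape)
```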
For Developers
For developers interested in large model adaptation, we provide fine-tuning code based on a non-HuggingFace checkpoint, which is a smaller version of Timer developed in the TSLib style.
> [!NOTE]
> We recommend using the checkpoints on HuggingFace for model evaluation (e.g., zero-shot forecasting). However, they are not compatible with the following fine-tuning code (but we are working on it :)
Supported Tasks
- Forecasting: We provide all scripts for few-shot forecasting in this repo.
- Imputation: We propose segment-level imputation, which is more challenging than point-level imputation (a sketch contrasting the two masking schemes is given below).
- Anomaly Detection: We provide new benchmarks of predictive anomaly detection on the UCR Anomaly Archive.
We provide README files illustrating each task under the folder `./scripts/`.
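To make the difference concrete, the snippet below contrasts point-level and segment-level masking on a toy series. The segment length and mask ratio are illustrative choices, not the values used in the provided scripts.

```python
# a minimal sketch contrasting point-level and segment-level masking for imputation;
# the segment length (24) and mask ratio (0.25) are illustrative assumptions
import torch

series = torch.randn(1, 672)  # (batch, length) toy univariate series
mask_ratio = 0.25

# point-level: each time point is dropped independently
point_mask = torch.rand_like(series) < mask_ratio

# segment-level: whole contiguous segments are dropped, removing local context
segment_len = 24
num_segments = series.shape[-1] // segment_len
segment_mask = (torch.rand(series.shape[0], num_segments) < mask_ratio).repeat_interleave(segment_len, dim=-1)

masked_series = series.masked_fill(segment_mask, float("nan"))
print(point_mask.float().mean().item(), segment_mask.float().mean().item())
```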
Code for Fine-tuning
- Use Python 3.10 and install the necessary dependencies.

  ```bash
  pip install -r requirements.txt
  ```

- Put the downstream datasets from Google Drive and Baidu Drive under the folder `./dataset/`.

- Put the checkpoint from Google Drive and Baidu Drive under the folder `./checkpoints/`.

- Train and evaluate the model. We provide scripts for the above tasks under the folder `./scripts/`.

  ```bash
  # forecasting
  bash ./scripts/forecast/ECL.sh

  # segment-level imputation
  bash ./scripts/imputation/ECL.sh

  # anomaly detection
  bash ./scripts/anomaly_detection/UCR.sh
  ```
Train on Custom Dataset
To fine-tune on your time series dataset, you can try out the following steps:
- The key is to reload the customized dataloader and load the pre-trained checkpoint (see the `./scripts/` folder), as sketched below.
- `CIDatasetBenchmark` / `CIAutoRegressionDatasetBenchmark` in the `data_provider` folder can train and evaluate models in direct / iterative multi-step mode.
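The sketch below shows one way a custom dataloader could be shaped; the class and argument names are hypothetical and should be adapted to the interface expected by `data_provider`.

```python
# a minimal sketch of a windowed dataset for fine-tuning; the class name, window
# sizes, and return format are hypothetical, not the data_provider interface itself
import numpy as np
import torch
from torch.utils.data import Dataset

class CustomSeriesDataset(Dataset):
    """Yields (lookback, horizon) windows from a single univariate numpy series."""
    def __init__(self, series: np.ndarray, lookback: int = 672, horizon: int = 96):
        self.series = torch.as_tensor(series, dtype=torch.float32)
        self.lookback, self.horizon = lookback, horizon

    def __len__(self):
        return max(0, len(self.series) - self.lookback - self.horizon + 1)

    def __getitem__(self, idx):
        x = self.series[idx: idx + self.lookback]
        y = self.series[idx + self.lookback: idx + self.lookback + self.horizon]
        return x, y
```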
Approach
Pre-training and Adaptation
To pre-train on heterogeneous time series, we propose the single-series sequence (S3) format, which preserves the variations of each series in a unified 1D context. Further, we convert forecasting, imputation, and anomaly detection into a unified generative task.
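The snippet below is a toy illustration of the S3 idea: each variate of a multivariate series is normalized and treated as an independent 1D sequence in the pre-training pool. It sketches the format only; it is not the official preprocessing code.

```python
# a toy sketch of single-series sequence (S3): every variate becomes its own
# normalized 1D sequence; sizes and normalization details are illustrative
import torch

multivariate = torch.randn(7, 10000)  # (num_variates, length)
mean = multivariate.mean(dim=-1, keepdim=True)
std = multivariate.std(dim=-1, keepdim=True)
normed = (multivariate - mean) / (std + 1e-5)

# each row becomes one pre-training sequence in the unified 1D pool
single_series_pool = [normed[i] for i in range(normed.shape[0])]
print(len(single_series_pool), single_series_pool[0].shape)
```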
<p align="center"> <img src="./figures/pretrain_adaptation.png" align=center /> </p>Model Architecture
We evaluate various candidate backbones and eventually adopt the decoder-only Transformer, which provides notable generalization performance and the length flexibility to accommodate various time series.
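To illustrate the generative setup, the sketch below splits a series into non-overlapping segments ("tokens"), embeds them, and trains a causally masked Transformer to predict each next segment. The segment length, model sizes, and the use of an encoder stack with a causal mask are illustrative assumptions, not the released architecture.

```python
# a minimal sketch of decoder-only style next-segment prediction on time series;
# segment length, model sizes, and layer choices are illustrative assumptions
import torch
import torch.nn as nn

segment_len, d_model, n_heads, n_layers = 96, 256, 8, 4
series = torch.randn(2, 672)                          # (batch, length)
tokens = series.unfold(-1, segment_len, segment_len)  # (batch, num_tokens, segment_len)

embed = nn.Linear(segment_len, d_model)
head = nn.Linear(d_model, segment_len)
layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
backbone = nn.TransformerEncoder(layer, n_layers)     # causal mask makes it decoder-only in effect

x = embed(tokens)
num_tokens = x.shape[1]
causal_mask = torch.triu(torch.full((num_tokens, num_tokens), float("-inf")), diagonal=1)
pred = head(backbone(x, mask=causal_mask))

# next-token (segment) regression: token t predicts segment t+1
loss = nn.functional.mse_loss(pred[:, :-1], tokens[:, 1:])
print(loss.item())
```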
<p align="center"> <img src="./figures/architecture.png" align=center /> </p>Performance
Timer achieves state-of-the-art performance in zero-shot forecasting and general time series analysis, and demonstrates the benefit of pre-training in few-shot scenarios.
<p align="center"> <img src="./figures/performance.png" align=center /> </p>Scalability
By scaling, Timer achieves notable performance improvement. Currently, we provide the base version with 84M parameters, pre-trained on 260B time points, which supports a maximum context length of 2880.
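Since the base checkpoint supports a context of at most 2880 points, one practical convention (an assumption, not an API requirement) is to keep only the most recent window of a longer history before calling `generate`:

```python
# a minimal sketch, assuming histories longer than the supported context are
# truncated to the most recent 2880 points before generation
import torch

max_context = 2880
seqs = torch.randn(1, 4096)    # toy history longer than the supported context
seqs = seqs[:, -max_context:]  # keep only the most recent 2880 points
print(seqs.shape)              # torch.Size([1, 2880])
```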
<p align="center"> <img src="./figures/scale.png" alt="300" align=center /> </p>Futher Improvement
We further enhance Timer in this paper (Timer-XL) with a longer context and TimeAttention.
<p align="center"> <img src="./figures/timer-xl.png" alt="300" align=center /> </p>Citation
If you find this repo helpful, please cite our paper.
```bibtex
@inproceedings{liutimer,
  title={Timer: Generative Pre-trained Transformers Are Large Time Series Models},
  author={Liu, Yong and Zhang, Haoran and Li, Chenyu and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
  booktitle={Forty-first International Conference on Machine Learning}
}

@article{liu2024timer,
  title={Timer-XL: Long-Context Transformers for Unified Time Series Forecasting},
  author={Liu, Yong and Qin, Guo and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
  journal={arXiv preprint arXiv:2410.04803},
  year={2024}
}
```
Contributors
If you have any questions or want to use the code, feel free to contact:
- Yong Liu (liuyong21@mails.tsinghua.edu.cn)
- Guo Qin (qinguo24@mails.tsinghua.edu.cn)
- Haoran Zhang (zhang-hr24@mails.tsinghua.edu.cn)
- Chenyu Li (lichenyu20@mails.tsinghua.edu.cn)