
<div align="center"> <img width="60%" alt="MOMENT" src="assets/MOMENT Logo.png"> <h1>MOMENT: A Family of Open Time-series Foundation Models</h1>


</div>


Official research code for the paper MOMENT: A Family of Open Time-series Foundation Models. For a lightweight package that lets you simply use the MOMENT model, see momentfm.

📖 Introduction

We introduce MOMENT, a family of open-source foundation models for general-purpose time-series analysis. Pre-training large models on time-series data is challenging due to (1) the absence of a large and cohesive public time-series repository, and (2) diverse time-series characteristics that make multi-dataset training onerous. Additionally, (3) experimental benchmarks to evaluate these models, especially in scenarios with limited resources, time, and supervision, are still in their nascent stages. To address these challenges, we compile a large and diverse collection of public time series, called the Time-series Pile, and systematically tackle time-series-specific challenges to unlock large-scale multi-dataset pre-training. We then build on recent work to design a benchmark that evaluates time-series foundation models on diverse tasks and datasets in limited-supervision settings. Experiments on this benchmark demonstrate the effectiveness of our pre-trained models with minimal data and task-specific fine-tuning. Finally, we present several interesting empirical observations about large pre-trained time-series models.

MOMENT: One Model, Multiple Tasks, Datasets & Domains

<div align="center"> <img width="60%" alt="MOMENT: One Model, Multiple Tasks, Datasets & Domains" src="https://github.com/moment-timeseries-foundation-model/moment/assets/26150479/90c7d055-36d2-42aa-92b1-c5cfade22b3e"> </div>

MOMENT on different datasets and tasks, without any parameter updates:

By linear probing (fine-tuning the final linear layer):
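
Linear probing keeps the pre-trained MOMENT encoder frozen and updates only the task-specific linear head. The snippet below is a minimal sketch of that pattern; the `head` attribute name and the optimizer settings are illustrative assumptions, not the repository's exact training setup.

```python
import torch

def prepare_for_linear_probing(model: torch.nn.Module) -> list:
    """Freeze all parameters, then re-enable gradients only for the final
    linear head. The attribute name `head` is assumed for illustration;
    check the model definition for the actual task-specific layer."""
    for param in model.parameters():
        param.requires_grad = False
    for param in model.head.parameters():  # assumed name of the final linear layer
        param.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]

# trainable_params = prepare_for_linear_probing(model)
# optimizer = torch.optim.AdamW(trainable_params, lr=1e-4)
```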

MOMENT Captures the Language of Time Series

Principal components of the embeddings of synthetically generated sinusoids suggest that MOMENT can capture subtle trend, scale, frequency, and phase information. In each experiment, $c$ controls the factor of interest, for example the power of the trend polynomial, with $c \in [\frac{1}{8}, 8)$ (Oreshkin et al., 2020). We generate multiple sine waves by varying $c$, derive their sequence-level representations using MOMENT, and visualize them in a 2-dimensional space using PCA.

<div align="center"> <img width="60%" alt="MOMENT Captures the Language of Time Series" src="https://github.com/moment-timeseries-foundation-model/moment/assets/26150479/fce67d3e-84ff-4219-bef2-9079162c4c9b"> </div>
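
A minimal sketch of this experiment, assuming access to an `embed_fn` that maps a batch of series of shape `[batch, channels, length]` to sequence-level MOMENT embeddings (the exact embedding interface is an assumption for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

def visualize_factor(embed_fn, seq_len=512, n_waves=64):
    """Generate sine waves while varying a factor of interest c (here the
    frequency), embed them with MOMENT, and project the embeddings to 2-D.
    `embed_fn` is a placeholder for a call that maps arrays of shape
    [batch, channels, length] to embeddings of shape [batch, d_model]."""
    t = np.linspace(0, 1, seq_len)
    cs = np.linspace(1 / 8, 8, n_waves, endpoint=False)
    waves = np.stack([np.sin(2 * np.pi * c * t) for c in cs])[:, None, :]
    embeddings = embed_fn(waves)                       # [n_waves, d_model]
    coords = PCA(n_components=2).fit_transform(embeddings)
    return cs, coords                                  # color the 2-D points by c
```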

MOMENT Learns Meaningful Representation of Data

PCA visualizations of representations learned by MOMENT on the ECG5000 dataset in UCR Classification Archive. Here different colors represent different classes. Even without dataset-specific fine-tuning, MOMENT learns distinct representations for different classes.

<div align="center"> <img width="60%" alt="MOMENT Learns Meaningful Representation of Data" src="https://github.com/moment-timeseries-foundation-model/moment/assets/26150479/cb7b5233-a215-4287-8576-9625f002c1ff"> </div>

Architecture in a Nutshell

A time series is broken into disjoint fixed-length sub-sequences called patches, and each patch is mapped into a D-dimensional patch embedding. During pre-training, we mask patches uniformly at random by replacing their patch embeddings using a special mask embedding [MASK]. The goal of pre-training is to learn patch embeddings which can be used to reconstruct the input time series using a light-weight reconstruction head.

<div align="center"> <img src="assets/moment_architecture.png" width="60%"> </div>
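
A rough sketch of the patching and masking described above; the patch length, mask ratio, and the way the [MASK] embedding is stored here are illustrative placeholders rather than the exact pre-training configuration:

```python
import torch

def patchify_and_mask(x, patch_len=8, mask_ratio=0.3, d_model=512):
    """Illustrative sketch: split each series into disjoint patches, embed
    every patch into a D-dimensional vector, and replace a random subset of
    patch embeddings with a learnable [MASK] embedding."""
    batch, seq_len = x.shape
    n_patches = seq_len // patch_len
    patches = x[:, : n_patches * patch_len].reshape(batch, n_patches, patch_len)

    patch_embed = torch.nn.Linear(patch_len, d_model)           # patch -> D-dim embedding
    mask_embedding = torch.nn.Parameter(torch.zeros(d_model))   # learnable [MASK]

    z = patch_embed(patches)                                    # [batch, n_patches, d_model]
    mask = torch.rand(batch, n_patches) < mask_ratio            # True where a patch is masked
    z = torch.where(mask.unsqueeze(-1), mask_embedding, z)      # swap in the [MASK] embedding
    return z, mask  # a light-weight reconstruction head predicts the masked patches
```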

Usage

Install the package using:

```bash
pip install git+https://github.com/moment-timeseries-foundation-model/moment-research.git
```

To use the model, run the following code:

```python
from models.moment import MOMENTPipeline

# Options: "pre-training", "short-horizon-forecasting", "long-horizon-forecasting",
# "classification", "imputation", "anomaly-detection", "embed"
task_name = "classification"

model = MOMENTPipeline.from_pretrained(
    "AutonLab/test-t5-small",
    model_kwargs={
        "task_name": task_name,
        "n_channels": 1,
        "num_class": 2,
    },
)
model.init()
```
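
As a hedged usage sketch, the initialized pipeline can then be called on a batch of time series. The `x_enc` keyword and the `[batch_size, n_channels, sequence_length]` input shape (with sequence length 512) follow the public momentfm interface and may differ slightly in this research codebase:

```python
import torch

x = torch.randn(16, 1, 512)   # 16 univariate series of length 512
output = model(x_enc=x)       # forward pass through the pipeline
# For the "classification" task, class scores are typically exposed as `output.logits`.
```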

Installation

Required Python version: 3.11.5

To reproduce our development environment, run the following commands:

```bash
> # Create a Conda environment
> conda create -n moment python=3.11.5
> # Activate the environment
> conda activate moment
> # Install all the dependencies
> pip install git+https://github.com/moment-timeseries-foundation-model/moment-research.git
```

Experiments Reproduction

First, create a .env file in the moment-research/ directory and add the following environment variables:

```
## MOMENT project Environment Variables
MOMENT_DATA_DIR=data/Timeseries-PILE
MOMENT_CHECKPOINTS_DIR=results/moment_checkpoints/
MOMENT_RESULTS_DIR=results/moment_results/

# Weights and Biases Environment Variables
WANDB_DIR=results/wandb/wandb
WANDB_CACHE_DIR=results/.cache/wandb
```
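
For reference, a minimal sketch of how such variables can be read in Python using python-dotenv; whether the codebase loads them in exactly this way is an assumption:

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads the .env file from the current working directory
data_dir = os.getenv("MOMENT_DATA_DIR", "data/Timeseries-PILE")
checkpoints_dir = os.getenv("MOMENT_CHECKPOINTS_DIR", "results/moment_checkpoints/")
```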

To download the Timeseries-PILE dataset, run the following command:

```bash
bash reproduce/download_pile.sh
```

To pre-train the model on the previously downloaded Timeseries-PILE dataset, run the following command:

```bash
bash reproduce/pretraining/pretrain.sh
```

To reproduce any other experiment, look into the reproduce/ directory and run the corresponding script. For example, to reproduce the cross-modal experiments, run the following command:

```bash
bash reproduce/cross-modal/FlanT5.sh
```

> [!TIP]
> Have more questions about using MOMENT? Check out the Frequently Asked Questions, and you might find your answer!

BibTeX

```bibtex
@inproceedings{goswami2024moment,
  title={MOMENT: A Family of Open Time-series Foundation Models},
  author={Mononito Goswami and Konrad Szafer and Arjun Choudhry and Yifu Cai and Shuo Li and Artur Dubrawski},
  booktitle={International Conference on Machine Learning},
  year={2024}
}
```

➕ Contributions

We encourage researchers to contribute their methods and datasets to MOMENT. We are actively working on contributing guidelines. Stay tuned for updates!

📰 Coverage

🤟 Contemporary Work

There's a lot of cool work on building time-series forecasting foundation models! Here's an incomplete list. Check out Table 9 in our paper for qualitative comparisons with these studies:

There's also some recent work on solving multiple time series modeling tasks in addition to forecasting:

🪪 License

MIT License

Copyright (c) 2024 Auton Lab, Carnegie Mellon University

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

See MIT LICENSE for details.

<img align="right" height ="120px" src="assets/cmu_logo.png"> <img align="right" height ="110px" src="assets/autonlab_logo.png">