SCINet


This is the original PyTorch implementation of the following paper: SCINet: Time Series Modeling and Forecasting with Sample Convolution and Interaction. Also see the OpenReview version.

If you find this repository useful for your research work, please consider citing it as follows:


@article{liu2022SCINet,
  title={SCINet: Time Series Modeling and Forecasting with Sample Convolution and Interaction},
  author={Liu, Minhao and Zeng, Ailing and Chen, Muxi and Xu, Zhijian and Lai, Qiuxia and Ma, Lingna and Xu, Qiang},
  journal={Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS), 2022},
  year={2022}
}

Updates

Features

To-do items

Stay tuned!

Used Datasets

We conduct experiments on 11 popular time-series datasets, namely Electricity Transformer Temperature (ETTh1, ETTh2 and ETTm1), PeMS (PEMS03, PEMS04, PEMS07 and PEMS08), Traffic, Solar-Energy, Electricity, and Exchange Rate, spanning the power, energy, finance, and traffic domains.

Overall information of the 11 datasets

| Datasets | Variants | Timesteps | Granularity | Start time | Task Type |
| --- | --- | --- | --- | --- | --- |
| ETTh1 | 7 | 17,420 | 1 hour | 7/1/2016 | Multi-step |
| ETTh2 | 7 | 17,420 | 1 hour | 7/1/2016 | Multi-step |
| ETTm1 | 7 | 69,680 | 15 min | 7/1/2016 | Multi-step |
| PEMS03 | 358 | 26,209 | 5 min | 5/1/2012 | Multi-step |
| PEMS04 | 307 | 16,992 | 5 min | 7/1/2017 | Multi-step |
| PEMS07 | 883 | 28,224 | 5 min | 5/1/2017 | Multi-step |
| PEMS08 | 170 | 17,856 | 5 min | 3/1/2012 | Multi-step |
| Traffic | 862 | 17,544 | 1 hour | 1/1/2015 | Single-step |
| Solar-Energy | 137 | 52,560 | 1 hour | 1/1/2006 | Single-step |
| Electricity | 321 | 26,304 | 1 hour | 1/1/2012 | Single-step |
| Exchange-Rate | 8 | 7,588 | 1 hour | 1/1/1990 | Single-step |

Get started

Requirements

Install the required packages first:

cd SCINet
conda create -n scinet python=3.8
conda activate scinet
pip install -r requirements.txt

Dataset preparation

All datasets can be downloaded here. To prepare all datasets at once, run:

source prepare_data.sh


The resulting data directory structure is as follows:

./
└── datasets/
    ├── ETT-data
    │   ├── ETTh1.csv
    │   ├── ETTh2.csv
    │   └── ETTm1.csv
    ├── financial
    │   ├── electricity.txt
    │   ├── exchange_rate.txt
    │   ├── solar_AL.txt
    │   └── traffic.txt
    └── PEMS
        ├── PEMS03.npz
        ├── PEMS04.npz
        ├── PEMS07.npz
        └── PEMS08.npz
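Before launching any training run, a short script can confirm that every expected file is in place. The paths below are taken from the directory tree above; `ROOT` is an assumption to adjust if your data lives elsewhere:

```python
from pathlib import Path

# Expected files, taken from the directory tree above.
ROOT = Path("./datasets")  # adjust if your data lives elsewhere
EXPECTED = [
    "ETT-data/ETTh1.csv", "ETT-data/ETTh2.csv", "ETT-data/ETTm1.csv",
    "financial/electricity.txt", "financial/exchange_rate.txt",
    "financial/solar_AL.txt", "financial/traffic.txt",
    "PEMS/PEMS03.npz", "PEMS/PEMS04.npz", "PEMS/PEMS07.npz", "PEMS/PEMS08.npz",
]

missing = [p for p in EXPECTED if not (ROOT / p).is_file()]
if missing:
    print("Missing dataset files:")
    for p in missing:
        print("  ", p)
else:
    print("All", len(EXPECTED), "dataset files are in place.")
```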

Run training code

We follow the same settings as StemGNN for the PEMS03, PEMS04, PEMS07, and PEMS08 datasets, MTGNN for the Solar, Electricity, Traffic, and Exchange Rate (financial) datasets, and Informer for the ETTh1, ETTh2, and ETTm1 datasets. The detailed training commands are given below.

For PEMS dataset (All datasets follow Input 12, Output 12):

pems03

python run_pems.py --dataset PEMS03 --hidden-size 0.0625 --dropout 0.25 --model_name pems03_h0.0625_dp0.25 --num_decoder_layer 2

pems04

python run_pems.py --dataset PEMS04 --hidden-size 0.0625 --dropout 0 --model_name pems04_h0.0625_dp0

pems07

python run_pems.py --dataset PEMS07 --hidden-size 0.03125 --dropout 0.25 --model_name pems07_h0.03125_dp0.25

pems08

python run_pems.py --dataset PEMS08 --hidden-size 1 --dropout 0.5 --model_name pems08_h1_dp0.5
PEMS Parameter highlights

| Parameter Name | Description | Parameter in paper | Default |
| --- | --- | --- | --- |
| dataset | Name of dataset | N/A | PEMS08 |
| horizon | Horizon | Horizon | 12 |
| window_size | Look-back window | Look-back window | 12 |
| hidden-size | Hidden expansion | h | 1 |
| levels | SCINet block levels | L | 2 |
| stacks | The number of SCINet blocks | K | 1 |

For Solar dataset:

predict 3

python run_financial.py --dataset_name solar_AL --window_size 160 --horizon 3 --hidden-size 1  --lastWeight 0.5 --stacks 2 --levels 4 --lradj 2 --lr 1e-4 --dropout 0.25 --batch_size 256 --model_name so_I160_o3_lr1e-4_bs256_dp0.25_h1_s2l4_w0.5

predict 6

python run_financial.py --dataset_name solar_AL --window_size 160 --horizon 6 --hidden-size 0.5 --lastWeight 0.5 --stacks 2 --levels 4 --lradj 2 --lr 1e-4 --dropout 0.25 --batch_size 256 --model_name so_I160_o6_lr1e-4_bs256_dp0.25_h0.5_s2l4_w0.5 

predict 12

python run_financial.py --dataset_name solar_AL --window_size 160 --horizon 12 --hidden-size 2 --lastWeight 0.5 --stacks 2 --levels 4 --lradj 2 --lr 1e-4 --dropout 0.25 --batch_size 1024 --model_name so_I160_o12_lr1e-4_bs1024_dp0.25_h2_s2l4_w0.5

predict 24

python run_financial.py --dataset_name solar_AL --window_size 160 --horizon 24 --hidden-size 1 --lastWeight 0.5 --stacks 1 --levels 4 --lradj 2 --lr 1e-4 --dropout 0.25 --batch_size 256 --model_name so_I160_o24_lr1e-4_bs256_dp0.25_h1_s1l4_w0.5
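The `--model_name` strings above pack each run's hyperparameters into one token: input length (`I`), horizon (`o`), learning rate, batch size, dropout, hidden expansion, and stacks/levels. This naming convention is inferred from the commands, not an official format, but a small parser makes the encoding explicit:

```python
import re

def parse_model_name(name: str) -> dict:
    """Decode the hyperparameters packed into model_name strings such as
    'so_I160_o3_lr1e-4_bs256_dp0.25_h1_s2l4_w0.5'.

    The pattern is inferred from the run_financial.py commands above and
    may not cover every naming variant in the repository."""
    pattern = (r"_I(?P<window>\d+)_o(?P<horizon>\d+)_lr(?P<lr>[\d.e-]+)"
               r"_bs(?P<bs>\d+)_dp(?P<dp>[\d.]+)_h(?P<h>[\d.]+)"
               r"_s(?P<stacks>\d+)l(?P<levels>\d+)")
    m = re.search(pattern, name)
    return m.groupdict() if m else {}

print(parse_model_name("so_I160_o3_lr1e-4_bs256_dp0.25_h1_s2l4_w0.5"))
```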

For Electricity dataset:

predict 3

python run_financial.py --dataset_name electricity --window_size 168 --horizon 3 --hidden-size 8 --single_step 1 --stacks 2 --levels 3 --lr 9e-3 --dropout 0 --batch_size 32 --model_name ele_I168_o3_lr9e-3_bs32_dp0_h8_s2l3_w0.5 --groups 321 --num_decoder_layer 2

predict 6

python run_financial.py --dataset_name electricity --window_size 168 --horizon 6 --hidden-size 8 --single_step 1 --stacks 2 --levels 3 --lr 9e-3 --dropout 0 --batch_size 32 --model_name ele_I168_o6_lr9e-3_bs32_dp0_h8_s2l3_w0.5 --groups 321 --num_decoder_layer 3

predict 12

python run_financial.py --dataset_name electricity --window_size 168 --horizon 12 --hidden-size 8 --single_step 1 --stacks 2 --levels 3 --lr 9e-3 --dropout 0 --batch_size 32 --model_name ele_I168_o12_lr9e-3_bs32_dp0_h8_s2l3_w0.5 --groups 321 --num_decoder_layer 3

predict 24

python run_financial.py --dataset_name electricity --window_size 168 --horizon 24 --hidden-size 8 --single_step 1 --stacks 2 --levels 3 --lr 9e-3 --dropout 0 --batch_size 32 --model_name ele_I168_o24_lr9e-3_bs32_dp0_h8_s2l3_w0.5 --groups 321 --num_decoder_layer 3

predict 96

python -u run_financial.py --dataset_name electricity --window_size 96 --horizon 96 --hidden-size 8 --stacks 2 --levels 3 --lr 9e-4 --dropout 0 --batch_size 32 --model_name ele_I96_o96_lr9e-4_bs32_dp0_h8_s2l3_w0.5_n4 --groups 321  --concat_len 0   --normalize 4 --long_term_forecast

predict 192

python -u run_financial.py --dataset_name electricity --window_size 96 --horizon 192 --hidden-size 8 --stacks 2 --levels 3 --lr 9e-4 --dropout 0 --batch_size 32 --model_name ele_I96_o192_lr9e-4_bs32_dp0_h8_s2l3_w0.5_n4 --groups 321  --concat_len 0   --normalize 4 --long_term_forecast

predict 336

python -u run_financial.py --dataset_name electricity --window_size 96 --horizon 336 --hidden-size 8 --stacks 2 --levels 3 --lr 9e-4 --dropout 0 --batch_size 32 --model_name ele_I168_o336_lr9e-4_bs32_dp0_h8_s2l3_w0.5_n4 --groups 321  --concat_len 0   --normalize 4 --long_term_forecast

predict 720

python -u run_financial.py --dataset_name electricity --window_size 96 --horizon 720 --hidden-size 8 --stacks 2 --levels 3 --lr 9e-4 --dropout 0 --batch_size 32 --model_name ele_I168_o24_lr9e-4_bs32_dp0_h8_s2l3_w0.5_n4 --groups 321  --concat_len 0   --normalize 4 --long_term_forecast

For Traffic dataset (warning: 20,000MiB+ memory usage!):

predict 3

python run_financial.py --dataset_name traffic --window_size 168 --horizon 3 --hidden-size 1 --single_step 1 --stacks 2 --levels 3 --lr 5e-4 --dropout 0.5 --batch_size 16 --model_name traf_I168_o3_lr5e-4_bs16_dp0.5_h1_s2l3_w1.0

predict 6

python run_financial.py --dataset_name traffic --window_size 168 --horizon 6 --hidden-size 2 --single_step 1 --stacks 1 --levels 3 --lr 5e-4 --dropout 0.25 --batch_size 16 --model_name traf_I168_o6_lr5e-4_bs16_dp0.25_h2_s1l3_w1.0

predict 12

python run_financial.py --dataset_name traffic --window_size 168 --horizon 12 --hidden-size 0.5 --single_step 1 --stacks 2 --levels 3 --lr 5e-4 --dropout 0.25 --batch_size 16 --model_name traf_I168_o12_lr5e-4_bs16_dp0.25_h0.5_s2l3_w1.0

predict 24

python run_financial.py --dataset_name traffic --window_size 168 --horizon 24 --hidden-size 2 --single_step 1 --stacks 2 --levels 2 --lr 5e-4 --dropout 0.5 --batch_size 16 --model_name traf_I168_o24_lr5e-4_bs16_dp0.5_h2_s2l2_w1.0

predict 96

python -u run_financial.py --dataset_name traffic --window_size 96 --horizon 96 --hidden-size 2 --stacks 1 --levels 3 --lr 5e-4 --dropout 0.25 --batch_size 16 --model_name traf_I96_o96_lr5e-4_bs16_dp0.25_h2_s1l3_w1.0 --normalize 4 --long_term_forecast

predict 192

python -u run_financial.py --dataset_name traffic --window_size 96 --horizon 192 --hidden-size 1 --stacks 1 --levels 3 --lr 5e-4 --dropout 0.25 --batch_size 16 --model_name traf_I96_o192_lr5e-4_bs16_dp0.25_h2_s1l3_w1.0 --normalize 4  --long_term_forecast

predict 336

python -u run_financial.py --dataset_name traffic --window_size 96 --horizon 336 --hidden-size 1 --stacks 1 --levels 3 --lr 5e-4 --dropout 0.25 --batch_size 16 --model_name traf_I96_o336_lr5e-4_bs16_dp0.25_h2_s1l3_w1.0 --normalize 4 --long_term_forecast

predict 720

python -u run_financial.py --dataset_name traffic --window_size 96 --horizon 720 --hidden-size 1 --stacks 1 --levels 3 --lr 5e-4 --dropout 0.25 --batch_size 16 --model_name traf_I96_o720_lr5e-4_bs16_dp0.25_h2_s1l3_w1.0 --normalize 4 --long_term_forecast

For Exchange rate dataset:

predict 3

python run_financial.py --dataset_name exchange_rate --window_size 168 --horizon 3 --hidden-size 0.125 --lastWeight 0.5 --stacks 1 --levels 3 --lr 5e-3 --dropout 0.5 --batch_size 4 --model_name ex_I168_o3_lr5e-3_bs4_dp0.5_h0.125_s1l3_w0.5 --num_decoder_layer 2 --epochs 150

predict 6

python run_financial.py --dataset_name exchange_rate --window_size 168 --horizon 6 --hidden-size 0.125 --lastWeight 0.5 --stacks 1 --levels 3 --lr 5e-3 --dropout 0.5 --batch_size 4 --model_name ex_I168_o6_lr5e-3_bs4_dp0.5_h0.125_s1l3_w0.5 --num_decoder_layer 2 --epochs 150

predict 12

python run_financial.py --dataset_name exchange_rate --window_size 168 --horizon 12 --hidden-size 0.125 --lastWeight 0.5 --stacks 1 --levels 3 --lr 5e-3 --dropout 0.5 --batch_size 4 --model_name ex_I168_o12_lr5e-3_bs4_dp0.5_h0.125_s1l3_w0.5 --num_decoder_layer 2 --epochs 150

predict 24

python run_financial.py --dataset_name exchange_rate --window_size 168 --horizon 24 --hidden-size 0.125 --lastWeight 0.5 --stacks 1 --levels 3 --lr 7e-3 --dropout 0.5 --batch_size 4 --model_name ex_I168_o24_lr7e-3_bs4_dp0.5_h0.125_s1l3_w0.5 --num_decoder_layer 2 --epochs 150

predict 96

python run_financial.py --dataset_name exchange_rate --epochs 20 --window_size 96 --horizon 96 --hidden-size 0.125 --normalize 3 --lastWeight 0.5 --stacks 1 --levels 3 --lr 5e-5  --dropout 0 --model_name final --num_decoder_layer 2 --long_term_forecast

predict 192

python run_financial.py --dataset_name exchange_rate --epochs 20 --window_size 96 --horizon 192 --hidden-size 0.125 --normalize 3 --lastWeight 0.5 --stacks 1 --levels 3 --lr 5e-5  --dropout 0 --model_name final --num_decoder_layer 2 --long_term_forecast

predict 336

python run_financial.py --dataset_name exchange_rate --epochs 20 --window_size 96 --horizon 336 --hidden-size 0.125 --normalize 3 --lastWeight 0.5 --stacks 1 --levels 3 --lr 5e-5  --dropout 0 --model_name final --num_decoder_layer 2 --long_term_forecast

predict 720

python run_financial.py --dataset_name exchange_rate --epochs 20 --window_size 96 --horizon 720 --hidden-size 0.125 --normalize 3 --lastWeight 0.5 --stacks 1 --levels 3 --lr 5e-5  --dropout 0 --model_name final --num_decoder_layer 2 --long_term_forecast
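The four short-horizon exchange-rate runs above differ only in `--horizon` and, for horizon 24, the learning rate, so they can be driven from one loop. The builder below is a sketch assembled from the commands as written, not a script shipped with the repository; it prints the commands rather than launching them:

```python
import subprocess  # only needed if you actually launch the runs
from typing import List

def exchange_cmd(horizon: int, lr: str) -> List[str]:
    """Assemble one short-horizon exchange-rate command, mirroring the
    'predict 3/6/12/24' invocations above."""
    name = f"ex_I168_o{horizon}_lr{lr}_bs4_dp0.5_h0.125_s1l3_w0.5"
    return ["python", "run_financial.py",
            "--dataset_name", "exchange_rate",
            "--window_size", "168", "--horizon", str(horizon),
            "--hidden-size", "0.125", "--lastWeight", "0.5",
            "--stacks", "1", "--levels", "3", "--lr", lr,
            "--dropout", "0.5", "--batch_size", "4",
            "--model_name", name,
            "--num_decoder_layer", "2", "--epochs", "150"]

for horizon, lr in [(3, "5e-3"), (6, "5e-3"), (12, "5e-3"), (24, "7e-3")]:
    print(" ".join(exchange_cmd(horizon, lr)))
    # subprocess.run(exchange_cmd(horizon, lr), check=True)  # uncomment to launch
```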
Financial Parameter highlights

| Parameter Name | Description | Parameter in paper | Default |
| --- | --- | --- | --- |
| dataset_name | Data name | N/A | exchange_rate |
| horizon | Horizon | Horizon | 3 |
| window_size | Look-back window | Look-back window | 168 |
| batch_size | Batch size | batch size | 8 |
| lr | Learning rate | learning rate | 5e-3 |
| hidden-size | Hidden expansion | h | 1 |
| levels | SCINet block levels | L | 3 |
| stacks | The number of SCINet blocks | K | 1 |
| lastWeight | Loss weight of the last frame | Loss weight ($\lambda$) | 1.0 |

For ETTH1 dataset:

multivariate, out 24

python run_ETTh.py --data ETTh1 --features M  --seq_len 48 --label_len 24 --pred_len 24 --hidden-size 4 --stacks 1 --levels 3 --lr 3e-3 --batch_size 8 --dropout 0.5 --model_name etth1_M_I48_O24_lr3e-3_bs8_dp0.5_h4_s1l3

multivariate, out 48

python run_ETTh.py --data ETTh1 --features M  --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 3 --lr 0.009 --batch_size 16 --dropout 0.25 --model_name etth1_M_I96_O48_lr0.009_bs16_dp0.25_h4_s1l3

multivariate, out 168

python run_ETTh.py --data ETTh1 --features M  --seq_len 336 --label_len 168 --pred_len 168 --hidden-size 4 --stacks 1 --levels 3 --lr 5e-4 --batch_size 32 --dropout 0.5 --model_name etth1_M_I336_O168_lr5e-4_bs32_dp0.5_h4_s1l3

multivariate, out 336

python run_ETTh.py --data ETTh1 --features M  --seq_len 336 --label_len 336 --pred_len 336 --hidden-size 1 --stacks 1 --levels 4 --lr 1e-4 --batch_size 512 --dropout 0.5 --model_name etth1_M_I336_O336_lr1e-4_bs512_dp0.5_h1_s1l4

multivariate, out 720

python run_ETTh.py --data ETTh1 --features M  --seq_len 736 --label_len 720 --pred_len 720 --hidden-size 1 --stacks 1 --levels 5 --lr 5e-5 --batch_size 256 --dropout 0.5 --model_name etth1_M_I736_O720_lr5e-5_bs256_dp0.5_h1_s1l5

Univariate, out 24

python run_ETTh.py --data ETTh1 --features S  --seq_len 64 --label_len 24 --pred_len 24 --hidden-size 8 --stacks 1 --levels 3 --lr 0.007 --batch_size 64 --dropout 0.25 --model_name etth1_S_I64_O24_lr0.007_bs64_dp0.25_h8_s1l3

Univariate, out 48

python run_ETTh.py --data ETTh1 --features S  --seq_len 720 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 4 --lr 0.0001 --batch_size 8 --dropout 0.5 --model_name etth1_S_I720_O48_lr0.0001_bs8_dp0.5_h4_s1l4

Univariate, out 168

python run_ETTh.py --data ETTh1 --features S  --seq_len 720 --label_len 168 --pred_len 168 --hidden-size 4 --stacks 1 --levels 4 --lr 5e-5 --batch_size 8 --dropout 0.5 --model_name etth1_S_I720_O168_lr5e-5_bs8_dp0.5_h4_s1l4

Univariate, out 336

python run_ETTh.py --data ETTh1 --features S  --seq_len 720 --label_len 336 --pred_len 336 --hidden-size 1 --stacks 1 --levels 4 --lr 1e-3 --batch_size 128 --dropout 0.5 --model_name etth1_S_I720_O336_lr1e-3_bs128_dp0.5_h1_s1l4

Univariate, out 720

python run_ETTh.py --data ETTh1 --features S  --seq_len 736 --label_len 720 --pred_len 720 --hidden-size 4 --stacks 1 --levels 5 --lr 1e-4 --batch_size 32 --dropout 0.5 --model_name etth1_S_I736_O720_lr1e-5_bs32_dp0.5_h4_s1l5

For ETTH2 dataset:

multivariate, out 24

python run_ETTh.py --data ETTh2 --features M  --seq_len 48 --label_len 24 --pred_len 24 --hidden-size 8 --stacks 1 --levels 3 --lr 0.007 --batch_size 16 --dropout 0.25 --model_name etth2_M_I48_O24_lr7e-3_bs16_dp0.25_h8_s1l3

multivariate, out 48

python run_ETTh.py --data ETTh2 --features M  --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 4 --lr 0.007 --batch_size 4 --dropout 0.5 --model_name etth2_M_I96_O48_lr7e-3_bs4_dp0.5_h4_s1l4

multivariate, out 168

python run_ETTh.py --data ETTh2 --features M  --seq_len 336 --label_len 168 --pred_len 168 --hidden-size 0.5 --stacks 1 --levels 4 --lr 5e-5 --batch_size 16 --dropout 0.5 --model_name etth2_M_I336_O168_lr5e-5_bs16_dp0.5_h0.5_s1l4

multivariate, out 336

python run_ETTh.py --data ETTh2 --features M  --seq_len 336 --label_len 336 --pred_len 336 --hidden-size 1 --stacks 1 --levels 4 --lr 5e-5 --batch_size 128 --dropout 0.5 --model_name etth2_M_I336_O336_lr5e-5_bs128_dp0.5_h1_s1l4

multivariate, out 720

python run_ETTh.py --data ETTh2 --features M  --seq_len 736 --label_len 720 --pred_len 720 --hidden-size 4 --stacks 1 --levels 5 --lr 1e-5 --batch_size 128 --dropout 0.5 --model_name etth2_M_I736_O720_lr1e-5_bs128_dp0.5_h4_s1l5

Univariate, out 24

python run_ETTh.py --data ETTh2 --features S  --seq_len 48 --label_len 24 --pred_len 24 --hidden-size 4 --stacks 1 --levels 3 --lr 0.001 --batch_size 16 --dropout 0 --model_name etth2_S_I48_O24_lr1e-3_bs16_dp0_h4_s1l3

Univariate, out 48

python run_ETTh.py --data ETTh2 --features S  --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 2 --levels 4 --lr 0.001 --batch_size 32 --dropout 0.5 --model_name etth2_S_I96_O48_lr1e-3_bs32_dp0.5_h4_s2l4

Univariate, out 168

python run_ETTh.py --data ETTh2 --features S  --seq_len 336 --label_len 168 --pred_len 168 --hidden-size 4 --stacks 1 --levels 3 --lr 1e-4 --batch_size 8 --dropout 0 --model_name etth2_S_I336_O168_lr1e-4_bs8_dp0_h4_s1l3

Univariate, out 336

python run_ETTh.py --data ETTh2 --features S  --seq_len 336 --label_len 336 --pred_len 336 --hidden-size 8 --stacks 1 --levels 3 --lr 5e-4 --batch_size 512 --dropout 0.5 --model_name etth2_S_I336_O336_lr5e-4_bs512_dp0.5_h8_s1l3

Univariate, out 720

python run_ETTh.py --data ETTh2 --features S  --seq_len 720 --label_len 720 --pred_len 720 --hidden-size 8 --stacks 1 --levels 3 --lr 1e-5 --batch_size 128 --dropout 0.6 --model_name etth2_S_I736_O720_lr1e-5_bs128_dp0.6_h8_s1l3

For ETTM1 dataset:

multivariate, out 24

python run_ETTh.py --data ETTm1 --features M  --seq_len 48 --label_len 24 --pred_len 24 --hidden-size 4 --stacks 1 --levels 3 --lr 0.005 --batch_size 32 --dropout 0.5 --model_name ettm1_M_I48_O24_lr7e-3_bs16_dp0.25_h8_s1l3

multivariate, out 48

python run_ETTh.py --data ETTm1 --features M  --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 2 --levels 4 --lr 0.001 --batch_size 16 --dropout 0.5 --model_name ettm1_M_I96_O48_lr1e-3_bs16_dp0.5_h4_s2l4

multivariate, out 96

python run_ETTh.py --data ETTm1 --features M  --seq_len 384 --label_len 96 --pred_len 96 --hidden-size 0.5 --stacks 2 --levels 4 --lr 5e-5 --batch_size 32 --dropout 0.5 --model_name ettm1_M_I384_O96_lr5e-5_bs32_dp0.5_h0.5_s2l4

multivariate, out 288

python run_ETTh.py --data ETTm1 --features M  --seq_len 672 --label_len 288 --pred_len 288 --hidden-size 4 --stacks 1 --levels 5 --lr 1e-5 --batch_size 32 --dropout 0.5 --model_name ettm1_M_I672_O288_lr1e-5_bs32_dp0.5_h0.5_s1l5

multivariate, out 672

python run_ETTh.py --data ETTm1 --features M  --seq_len 672 --label_len 672 --pred_len 672 --hidden-size 4 --stacks 2 --levels 5 --lr 1e-5 --batch_size 32 --dropout 0.5 --model_name ettm1_M_I672_O672_lr1e-5_bs32_dp0.5_h4_s2l5

Univariate, out 24

python run_ETTh.py --data ETTm1 --features S  --seq_len 96 --label_len 24 --pred_len 24 --hidden-size 4 --stacks 1 --levels 4 --lr 0.001 --batch_size 8 --dropout 0 --model_name ettm1_S_I96_O24_lr1e-3_bs8_dp0_h4_s1l4

Univariate, out 48

python run_ETTh.py --data ETTm1 --features S  --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 3 --lr 0.0005 --batch_size 16 --dropout 0 --model_name ettm1_S_I96_O48_lr5e-4_bs16_dp0_h4_s1l3

Univariate, out 96

python run_ETTh.py --data ETTm1 --features S  --seq_len 384 --label_len 96 --pred_len 96 --hidden-size 2 --stacks 1 --levels 4 --lr 1e-5 --batch_size 8 --dropout 0 --model_name ettm1_S_I384_O96_lr1e-5_bs8_dp0_h2_s1l4

Univariate, out 288

python run_ETTh.py --data ETTm1 --features S  --seq_len 384 --label_len 288 --pred_len 288 --hidden-size 4 --stacks 1 --levels 4 --lr 1e-5 --batch_size 64 --dropout 0 --model_name ettm1_S_I384_O288_lr1e-5_bs64_dp0_h4_s1l4

Univariate, out 672

python run_ETTh.py --data ETTm1 --features S  --seq_len 672 --label_len 672 --pred_len 672 --hidden-size 1 --stacks 1 --levels 5 --lr 1e-4 --batch_size 32 --model_name ettm1_S_I672_O672_lr1e-4_bs32_dp0.5_h1_s1l5
ETT Parameter highlights

| Parameter Name | Description | Parameter in paper | Default |
| --- | --- | --- | --- |
| root_path | The root path of subdatasets | N/A | './datasets/ETT-data/ETT/' |
| data | Subdataset | N/A | ETTh1 |
| pred_len | Horizon | Horizon | 48 |
| seq_len | Look-back window | Look-back window | 96 |
| batch_size | Batch size | batch size | 32 |
| lr | Learning rate | learning rate | 0.0001 |
| hidden-size | Hidden expansion | h | 1 |
| levels | SCINet block levels | L | 3 |
| stacks | The number of SCINet blocks | K | 1 |

Special Constraint

$W \bmod 2^{L} = 0$

That is, the look-back window length $W$ must be divisible by $2^{L}$, where $L$ is the number of SCINet levels, because the input sequence is split in half at every level.
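As a quick pre-flight check, the constraint can be verified in a line of Python before launching a run:

```python
def window_is_valid(window_size: int, levels: int) -> bool:
    """SCINet halves the sequence at each of its L levels, so the
    look-back window W must satisfy W mod 2**L == 0."""
    return window_size % (2 ** levels) == 0

# 168 works with 3 levels (168 = 21 * 8) but not with 4 (168 / 16 = 10.5)
print(window_is_valid(168, 3), window_is_valid(168, 4))  # → True False
```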

References

[1] [Reversible Instance Normalization for Accurate Time-Series Forecasting against Distribution Shift](https://openreview.net/forum?id=cGDAkQo1C0p)

Contact

If you have any questions, feel free to contact us or open a GitHub issue. Pull requests are highly welcome!


Minhao Liu: mhliu@cse.cuhk.edu.hk

Ailing Zeng: alzeng@cse.cuhk.edu.hk

Zhijian Xu: zjxu21@cse.cuhk.edu.hk

Acknowledgements

Thank you all for your attention to our work!

This codebase uses Informer, MTGNN, and StemGNN as baseline methods for comparison.