Deep Learning for Time Series Classification

As the simplest type of time series data, univariate time series provide a reasonably good starting point for studying temporal signals. Research on their representation learning and classification has found many potential applications in fields like finance, industry, and health care. Common similarity measures such as Dynamic Time Warping (DTW) and the Euclidean distance (ED) are decades old. Recent efforts in feature engineering and distance-measure design achieve much higher accuracy on the UCR time series classification benchmarks (like BOSS [1],[2], PROP [3] and COTE [4]), but at the cost of higher complexity and reduced interpretability.

Deep neural networks, especially convolutional neural networks (CNNs), are also under active exploration for end-to-end time series classification, e.g., the multi-channel CNN (MC-CNN) [5] and the multi-scale CNN (MCNN) [6]. However, these models still require heavy preprocessing and a large set of hyperparameters, which makes them complicated to deploy.

This repository contains three deep neural network models (MLP, FCN and ResNet) for pure end-to-end and interpretable time series analytics. These models provide a good baseline both for applications on real-world data and for future research in deep learning on time series.

Before You Start

What is the best approach to classify time series? That is very hard to say. From the experiments we did, COTE and BOSS are among the best, and the DL-based approaches (FCN, ResNet or MCNN) show no significant difference from them. If you prefer a white-box model, try BOSS first. If you like an end-to-end solution, use FCN, or even MLP with dropout, as your first baseline (FCN also supports a certain level of model interpretability, e.g., via CAM or Grad-CAM).

However, the UCR time series datasets are something of an 'extremely ideal' case. In more practical scenarios, highly skewed labels, very non-stationary dynamics, and frequent distribution/concept drift occur everywhere. We hope to address these more complex issues with a neat and effective DL-based framework that enables an end-to-end solution with good model interpretability, and yeah, we are working on exactly that.

Network Structure

Three deep neural network architectures are exploited to provide a fully comprehensive baseline.
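For concreteness, below is a minimal Keras sketch of the FCN branch of this baseline, following the layer settings reported in our paper (three convolutional blocks with 128/256/128 filters and kernel sizes 8/5/3, each with batch normalization and ReLU, then global average pooling and a softmax classifier). The `build_fcn` helper is illustrative, not the exact training script.

```python
# Minimal FCN sketch (filters 128/256/128, kernels 8/5/3, BN + ReLU,
# global average pooling, softmax), assuming a univariate input series.
from tensorflow import keras
from tensorflow.keras import layers

def build_fcn(input_length, n_classes):
    inputs = keras.Input(shape=(input_length, 1))  # (time steps, 1 channel)
    x = inputs
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:
        x = layers.Conv1D(filters, kernel, padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
    x = layers.GlobalAveragePooling1D()(x)          # GAP enables CAM later
    outputs = layers.Dense(n_classes, activation='softmax')(x)
    return keras.Model(inputs, outputs)
```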

Localize the Contributing Region with Class Activation Map

Another benefit of FCN and ResNet with the global average pooling layer is their natural extension, the class activation map (CAM), which interprets the class-specific regions in the data [7].

We can see that the discriminative regions of the time series for the correct classes are highlighted, and that the CAMs differ across labels: the contributing regions for different categories are different. The CAM provides a natural way to find the contributing region in the raw data for a specific label. This enables classification-trained convolutional networks to learn to localize without any extra effort. Class activation maps also allow us to visualize the predicted class scores on any given time series, highlighting the discriminative subsequences detected by the convolutional networks, and they offer a possible explanation of how the convolutional networks work in the classification setting.
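Concretely, the CAM is just the last convolutional feature maps weighted by the softmax weights of the class of interest. Below is a minimal sketch assuming the `build_fcn` layout from the earlier snippet; the `compute_cam` helper and its layer indices are illustrative, not the repository's exact code.

```python
# Hedged sketch: 1-D class activation map for an FCN built as above
# (last ReLU at layers[-3], then GAP, then the softmax Dense layer).
import numpy as np
from tensorflow import keras

def compute_cam(model, series, class_idx):
    # Sub-model exposing the last conv feature maps (before the GAP layer).
    feat_model = keras.Model(model.input, model.layers[-3].output)
    feats = feat_model.predict(series[None, :, None])[0]  # shape (T, 128)
    # The softmax weights of the target class weight the feature maps.
    w = model.layers[-1].get_weights()[0][:, class_idx]   # shape (128,)
    cam = feats @ w                                       # importance per time step
    return (cam - cam.min()) / (np.ptp(cam) + 1e-8)       # normalize to [0, 1]
```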

Visualize the Filter/Weights

We adopt the Gramian Angular Summation Field (GASF) [8] to visualize the filters/weights in the neural networks. The weights from the second and the last layer in the MLP are very similar, with clear structures and very little degradation. The weights in the first layer generally have higher values than those in the following layers.
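As a reminder of the encoding, GASF rescales a sequence to [-1, 1], maps each value to a polar angle phi = arccos(x), and builds the matrix G[i, j] = cos(phi_i + phi_j). A minimal NumPy sketch (the `gasf` helper is illustrative):

```python
# Gramian Angular Summation Field (GASF) [8]:
# G[i, j] = cos(phi_i + phi_j) = x_i * x_j - sqrt(1 - x_i^2) * sqrt(1 - x_j^2)
import numpy as np

def gasf(x):
    x = np.asarray(x, dtype=float)
    # Min-max rescale into [-1, 1] so arccos is well defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    s = np.sqrt(np.clip(1 - x ** 2, 0, 1))
    return np.outer(x, x) - np.outer(s, s)
```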

Classification Results

This table reports the test (not training) classification error rates on 85 UCR time series datasets. For more details on the experimental settings, please refer to our paper.

Please note that the 'best' row is not a reasonable performance measure. The MPCE score is TODO.

| Dataset | MLP | FCN | ResNet | PROP | COTE | 1NN-DTW | 1NN-BOSS | BOSS-VS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 50words | 0.288 | 0.321 | 0.273 | 0.180 | 0.191 | 0.310 | 0.301 | 0.367 |
| Adiac | 0.248 | 0.143 | 0.174 | 0.353 | 0.233 | 0.396 | 0.220 | 0.302 |
| ArrowHead | 0.177 | 0.120 | 0.183 | 0.103 | / | 0.337 | 0.143 | 0.171 |
| Beef | 0.167 | 0.25 | 0.233 | 0.367 | 0.133 | 0.367 | 0.200 | 0.267 |
| BeetleFly | 0.150 | 0.050 | 0.200 | 0.400 | / | 0.300 | 0.100 | 0.000 |
| BirdChicken | 0.200 | 0.050 | 0.100 | 0.350 | / | 0.250 | 0.000 | 0.100 |
| Car | 0.167 | 0.083 | 0.067 | / | / | / | / | / |
| CBF | 0.140 | 0 | 0.006 | 0.002 | 0.001 | 0.003 | 0 | 0.001 |
| ChlorineCon | 0.128 | 0.157 | 0.172 | 0.360 | 0.314 | 0.352 | 0.340 | 0.345 |
| CinCECGTorso | 0.158 | 0.187 | 0.229 | 0.062 | 0.064 | 0.349 | 0.125 | 0.130 |
| Coffee | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036 |
| Computers | 0.460 | 0.152 | 0.176 | 0.116 | / | 0.300 | 0.296 | 0.324 |
| CricketX | 0.431 | 0.185 | 0.179 | 0.203 | 0.154 | 0.246 | 0.259 | 0.346 |
| CricketY | 0.405 | 0.208 | 0.195 | 0.156 | 0.167 | 0.256 | 0.208 | 0.328 |
| CricketZ | 0.408 | 0.187 | 0.187 | 0.156 | 0.128 | 0.246 | 0.246 | 0.313 |
| DiatomSizeR | 0.036 | 0.07 | 0.069 | 0.059 | 0.082 | 0.033 | 0.046 | 0.036 |
| DistalPhalanxOutlineAgeGroup | 0.173 | 0.165 | 0.202 | 0.223 | / | 0.208 | 0.180 | 0.155 |
| DistalPhalanxOutlineCorrect | 0.190 | 0.188 | 0.180 | 0.232 | / | 0.232 | 0.208 | 0.282 |
| DistalPhalanxTW | 0.253 | 0.210 | 0.260 | 0.317 | / | 0.290 | 0.223 | 0.253 |
| Earthquakes | 0.208 | 0.199 | 0.214 | 0.281 | / | 0.258 | 0.186 | 0.193 |
| ECG200 | 0.080 | 0.100 | 0.130 | / | / | 0.230 | 0.130 | 0.180 |
| ECG5000 | 0.065 | 0.059 | 0.069 | 0.350 | / | 0.250 | 0.056 | 0.110 |
| ECGFiveDays | 0.03 | 0.015 | 0.045 | 0.178 | 0 | 0.232 | 0.000 | 0.000 |
| ElectricDevices | 0.420 | 0.277 | 0.272 | 0.277 | / | 0.399 | 0.282 | 0.351 |
| FaceAll | 0.115 | 0.071 | 0.166 | 0.152 | 0.105 | 0.192 | 0.210 | 0.241 |
| FaceFour | 0.17 | 0.068 | 0.068 | 0.091 | 0.091 | 0.170 | 0 | 0.034 |
| FacesUCR | 0.185 | 0.052 | 0.042 | 0.063 | 0.057 | 0.095 | 0.042 | 0.103 |
| fish | 0.126 | 0.029 | 0.011 | 0.034 | 0.029 | 0.177 | 0.011 | 0.017 |
| FordA | 0.231 | 0.094 | 0.072 | 0.182 | / | 0.438 | 0.083 | 0.096 |
| FordB | 0.371 | 0.117 | 0.100 | 0.265 | / | 0.406 | 0.109 | 0.111 |
| GunPoint | 0.067 | 0 | 0.007 | 0.007 | 0.007 | 0.093 | 0 | 0 |
| Ham | 0.286 | 0.238 | 0.219 | / | / | 0.533 | 0.343 | 0.286 |
| HandOutlines | 0.193 | 0.224 | 0.139 | / | / | 0.202 | 0.130 | 0.152 |
| Haptics | 0.539 | 0.449 | 0.494 | 0.584 | 0.481 | 0.623 | 0.536 | 0.584 |
| Herring | 0.313 | 0.297 | 0.406 | 0.079 | / | 0.469 | 0.375 | 0.406 |
| InlineSkate | 0.649 | 0.589 | 0.635 | 0.567 | 0.551 | 0.616 | 0.511 | 0.573 |
| InsectWingbeatSound | 0.369 | 0.598 | 0.469 | / | / | 0.645 | 0.479 | 0.430 |
| ItalyPower | 0.034 | 0.03 | 0.040 | 0.039 | 0.036 | 0.050 | 0.053 | 0.086 |
| LargeKitchenAppliances | 0.520 | 0.104 | 0.107 | 0.232 | / | 0.205 | 0.280 | 0.304 |
| Lightning2 | 0.279 | 0.197 | 0.246 | 0.115 | 0.164 | 0.131 | 0.148 | 0.262 |
| Lightning7 | 0.356 | 0.137 | 0.164 | 0.233 | 0.247 | 0.274 | 0.342 | 0.288 |
| MALLAT | 0.064 | 0.02 | 0.021 | 0.050 | 0.036 | 0.066 | 0.058 | 0.064 |
| Meat | 0.067 | 0.033 | 0.000 | / | / | 0.067 | 0.100 | 0.167 |
| MedicalImages | 0.271 | 0.208 | 0.228 | 0.245 | 0.258 | 0.263 | 0.288 | 0.474 |
| MiddlePhalanxOutlineAgeGroup | 0.265 | 0.232 | 0.240 | 0.474 | / | 0.250 | 0.218 | 0.253 |
| MiddlePhalanxOutlineCorrect | 0.240 | 0.205 | 0.207 | 0.210 | / | 0.352 | 0.255 | 0.350 |
| MiddlePhalanxTW | 0.391 | 0.388 | 0.393 | 0.630 | / | 0.416 | 0.373 | 0.414 |
| MoteStrain | 0.131 | 0.05 | 0.105 | 0.114 | 0.085 | 0.165 | 0.073 | 0.115 |
| NonInvThorax1 | 0.058 | 0.039 | 0.052 | 0.178 | 0.093 | 0.210 | 0.161 | 0.169 |
| NonInvThorax2 | 0.057 | 0.045 | 0.049 | 0.112 | 0.073 | 0.135 | 0.101 | 0.118 |
| OliveOil | 0.60 | 0.167 | 0.133 | 0.133 | 0.100 | 0.167 | 0.100 | 0.133 |
| OSULeaf | 0.43 | 0.012 | 0.021 | 0.194 | 0.145 | 0.409 | 0.012 | 0.074 |
| PhalangesOutlinesCorrect | 0.170 | 0.174 | 0.175 | / | / | 0.272 | 0.217 | 0.317 |
| Phoneme | 0.902 | 0.655 | 0.676 | / | / | 0.772 | 0.733 | 0.825 |
| Plane | 0.019 | 0 | 0 | / | / | / | / | / |
| ProximalPhalanxOutlineAgeGroup | 0.176 | 0.151 | 0.151 | 0.117 | / | 0.195 | 0.137 | 0.244 |
| ProximalPhalanxOutlineCorrect | 0.113 | 0.100 | 0.082 | 0.172 | / | 0.216 | 0.131 | 0.134 |
| ProximalPhalanxTW | 0.203 | 0.190 | 0.193 | 0.244 | / | 0.263 | 0.203 | 0.248 |
| RefrigerationDevices | 0.629 | 0.467 | 0.472 | 0.424 | / | 0.536 | 0.512 | 0.488 |
| ScreenType | 0.592 | 0.333 | 0.293 | 0.440 | / | 0.603 | 0.544 | 0.464 |
| ShapeletSim | 0.517 | 0.133 | 0.000 | / | / | 0.350 | 0.044 | 0.022 |
| ShapesAll | 0.225 | 0.102 | 0.088 | 0.187 | / | 0.232 | 0.082 | 0.188 |
| SmallKitchenAppliances | 0.611 | 0.197 | 0.203 | 0.187 | / | 0.357 | 0.200 | 0.221 |
| SonyAIBORobot | 0.273 | 0.032 | 0.015 | 0.293 | 0.146 | 0.275 | 0.321 | 0.265 |
| SonyAIBORobotII | 0.161 | 0.038 | 0.038 | 0.124 | 0.076 | 0.169 | 0.098 | 0.188 |
| StarLightCurves | 0.043 | 0.033 | 0.025 | 0.079 | 0.031 | 0.093 | 0.021 | 0.096 |
| Strawberry | 0.033 | 0.031 | 0.042 | / | / | 0.060 | 0.042 | 0.024 |
| SwedishLeaf | 0.107 | 0.034 | 0.042 | 0.085 | 0.046 | 0.208 | 0.072 | 0.141 |
| Symbols | 0.147 | 0.038 | 0.128 | 0.049 | 0.046 | 0.050 | 0.032 | 0.029 |
| SyntheticControl | 0.05 | 0.01 | 0.000 | 0.010 | 0.000 | 0.007 | 0.030 | 0.040 |
| ToeSegmentation1 | 0.399 | 0.031 | 0.035 | 0.079 | / | 0.228 | 0.048 | 0.031 |
| ToeSegmentation2 | 0.254 | 0.085 | 0.138 | 0.085 | / | 0.162 | 0.038 | 0.069 |
| Trace | 0.180 | 0 | 0 | 0.010 | 0.010 | 0 | 0 | 0 |
| TwoLeadECG | 0.147 | 0 | 0 | 0.067 | 0.015 | 0.096 | 0.016 | 0.001 |
| TwoPatterns | 0.114 | 0.103 | 0 | 0 | 0 | 0 | 0.004 | 0.015 |
| UWaveGestureLibraryAll | 0.046 | 0.174 | 0.132 | 0.199 | 0.196 | 0.272 | 0.241 | 0.270 |
| UWaveX | 0.232 | 0.246 | 0.213 | 0.283 | 0.267 | 0.366 | 0.313 | 0.364 |
| UWaveY | 0.297 | 0.275 | 0.332 | 0.290 | 0.265 | 0.342 | 0.312 | 0.336 |
| UWaveZ | 0.295 | 0.271 | 0.245 | 0.029 | / | 0.108 | 0.059 | 0.098 |
| wafer | 0.004 | 0.003 | 0.003 | 0.003 | 0.001 | 0.020 | 0.001 | 0.001 |
| Wine | 0.204 | 0.111 | 0.204 | / | / | 0.426 | 0.167 | 0.296 |
| WordSynonyms | 0.406 | 0.42 | 0.368 | 0.226 | / | 0.252 | 0.345 | 0.491 |
| Worms | 0.657 | 0.331 | 0.381 | / | / | 0.536 | 0.392 | 0.398 |
| WormsTwoClass | 0.403 | 0.271 | 0.265 | / | / | 0.337 | 0.243 | 0.315 |
| yoga | 0.145 | 0.155 | 0.142 | 0.121 | 0.113 | 0.164 | 0.081 | 0.169 |
| Best | 6 | 27 | 21 | 14 | 10 | 4 | 21 | 9 |

Dependencies

Keras (TensorFlow backend), NumPy.

Cite

If you find either the code or the results helpful to your work, please kindly cite our papers:

[Time Series Classification from Scratch with Deep Neural Networks: A Strong Baseline](https://arxiv.org/abs/1611.06455)

[Imaging Time-Series to Improve Classification and Imputation](https://arxiv.org/abs/1506.00327)

License

This project is licensed under the MIT License.

MIT License

Copyright (c) 2019 Zhiguang Wang

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.