
Wearables Development Toolkit (WDK)

The Wearables Development Toolkit (WDK) is a framework and set of tools to facilitate the iterative development of activity recognition applications with wearable and IoT devices. It supports the annotation of time series data, the analysis and visualization of data to identify patterns, and the development and performance assessment of activity recognition algorithms. At the core of the WDK is a repository of high-level components that encapsulate functionality used across activity recognition applications. These components can be used within a Matlab script or within a visual flow-based programming platform (i.e. Node-RED).

<p align="center"> <img width="700" src="doc/images/ARCDevelopment.png"> </p>

To get a first insight into the WDK, watch this demo video: https://www.youtube.com/embed/Ow0b0vkciDs and read my paper.

To install and use the WDK, refer to the Documentation.

1- Data Annotation

An annotated data set is needed to train a machine learning algorithm and to assess its performance. The Data Annotation App offers functionality to annotate time series data. Depending on the particular application, one might want to annotate events that occur at a specific moment in time, or activities that have a duration in time, called ranges. The following image shows the squared magnitude of the accelerometer signal collected by a motion sensor attached to a hind leg of a cow. The individual strides of the cow have been annotated as event annotations (red) and the walking and running activities as ranges (black rectangles).

Data Annotation App

Annotating with video (optional)

The Data Annotation App can load and display a video next to the data. Video and data are synchronized by specifying at least two data samples and two video frames that correspond to the same events in time. The frames in the video file are displayed by the Movie Player at the bottom right of the window:

Movie Player

In this application, we asked the subject to clap three times in front of the camera while wearing an armband with an Inertial Measurement Unit (IMU). We matched the samples at the peaks of the squared magnitude of acceleration to the video frames where the subject's hands make contact with each other.

Please note:

  1. The Data Annotation App synchronizes video and data at two points and interpolates linearly in between. I recommend placing the synchronization points near the beginning and end of a recording (see the sketch after this list).
  2. Annotation, marker, synchronization and video files should be consistent with the data files. If a data file is named 'S1.mat', its annotation file should be named 'S1-annotations.txt', its synchronization file 'S1-synchronization.txt' and the video 'S1-video.<extension>'.
  3. By default, the Data Annotation App loads annotation files from the './data/annotations/' directory, and video and synchronization files from the './data/videos' directory. Saved annotation files are written to the root './' directory.
  4. The labels to annotate should be defined in the 'labels.txt' file beforehand.
  5. You can use the keyboard shortcuts arrow-right, arrow-left and spacebar to iterate through data and video.
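
As a minimal sketch of the interpolation mentioned in point 1, the mapping between data samples and video frames can be expressed as follows (the synchronization point values below are hypothetical, not taken from a real recording):

```matlab
% Two synchronization points: (data sample, video frame) pairs that mark
% the same physical events. The values below are hypothetical.
sample1 = 100;   frame1 = 30;
sample2 = 9000;  frame2 = 2700;

% Linear interpolation: map any data sample index to a video frame
sampleToFrame = @(s) frame1 + (s - sample1) * (frame2 - frame1) / (sample2 - sample1);

frame = round(sampleToFrame(4500)); % frame displayed for sample 4500
```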

Automatic Annotation (optional)

The Data Annotation App offers two features to facilitate the annotation of sensor signals.

Unsupervised automatic annotation

The unsupervised feature analyzes the entire data set, clusters similar portions of data and suggests annotations for some of these portions. This feature requires application developers to provide a segmentation algorithm. A set of heuristic features is extracted from each segment. Finally, k-means is used to cluster the feature vectors, using as many clusters as there are labels defined in the current project. The N feature vectors closest to each centroid are suggested to the user together with a cluster id. With a single click, the user can change a suggested cluster to the appropriate class. The parameter N is configurable over the user interface.
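
A minimal sketch of this clustering step, assuming one row of heuristic features per extracted segment (the variable names and dummy data below are illustrative, not the WDK's internal code):

```matlab
features = rand(500, 8);   % dummy data: 500 segments x 8 heuristic features
nLabels = 4;               % as many clusters as labels defined in the project
N = 10;                    % suggestions per cluster, configurable over the UI

% cluster the feature vectors with k-means
[clusterIds, centroids] = kmeans(features, nLabels);

% suggest the N feature vectors closest to each centroid
for c = 1:nLabels
    memberIdxs = find(clusterIds == c);
    distances = vecnorm(features(memberIdxs, :) - centroids(c, :), 2, 2);
    [~, order] = sort(distances);
    suggestedIdxs = memberIdxs(order(1:min(N, numel(order))));
    fprintf('cluster %d: suggest segments %s\n', c, mat2str(suggestedIdxs.'));
end
```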

Movie Player

Supervised automatic annotation

The supervised feature searches for portions of data similar to previously added range annotations. When a range annotation is added while the supervised auto-annotation is enabled, the Data Annotation App scans the current file for segments of the same length as the range annotation and compares each of them to it using Dynamic Time Warping. The resulting segments are sorted by similarity and the N most similar ones are suggested to the user. The parameter N is configurable over the user interface.
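
A sketch of this search using MATLAB's dtw() from the Signal Processing Toolbox on dummy data (illustrative; the WDK's actual scanning code may differ):

```matlab
signal = randn(1000, 1);      % dummy data: the current file's signal
annotation = randn(120, 1);   % dummy data: a newly added range annotation
N = 5;                        % number of suggestions, configurable over the UI

% compare every candidate segment of the same length to the annotation
segmentLength = numel(annotation);
nCandidates = numel(signal) - segmentLength + 1;
distances = zeros(nCandidates, 1);
for i = 1:nCandidates
    candidate = signal(i : i + segmentLength - 1);
    distances(i) = dtw(candidate, annotation);
end

% suggest the N most similar segments (smallest DTW distance)
[~, order] = sort(distances);
suggestedStartIdxs = order(1:N);
```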

Movie Player

2- Data Analysis

The Data Analysis App displays segments of data grouped by class. This is useful to study the differences across classes to design a recognition algorithm able to discriminate between classes. Segments can be plotted either on top of each other or sequentially (i.e. after each other).

Data Annotation App

In order to visualize data:

  1. Select one or more input data files.
  2. Select where the segments should come from. Manual annotations creates segments from the range annotations and around the event annotations using the ManualSegmentationStrategy. Automatic segmentation creates segments using the preprocessing, event detection and segmentation algorithms selected over the user interface.
  3. (in Automatic segmentation mode) Select the signals to use, a preprocessing algorithm and (optionally) an event detection algorithm.
  4. Select a segmentation strategy and (optionally) a grouping strategy. A grouping strategy maps annotated labels to classes, usually by grouping different labels into a single class. Click the Execute button; at this point the segments are created.
  5. Select the signals and classes to visualize and a plot style, i.e. overlapping or sequential (see the sketch after this list).
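
As a rough illustration of the two plot styles in point 5 (dummy data; this is not the Data Analysis App's internal code):

```matlab
segments = {randn(100, 1), randn(100, 1), randn(100, 1)}; % dummy segments of one class

% overlapping: plot every segment on top of each other
subplot(2, 1, 1); hold on;
for i = 1:numel(segments)
    plot(segments{i});
end
title('overlapping');

% sequential: plot the segments after each other
subplot(2, 1, 2); hold on;
offset = 0;
for i = 1:numel(segments)
    plot(offset + (1:numel(segments{i})), segments{i});
    offset = offset + numel(segments{i});
end
title('sequential');
```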

3- Algorithm Implementation

Most wearable device applications execute a sequence of computations to recognize specific patterns based on sensor signals. This sequence of computations is called the Activity Recognition Chain and consists of the stages shown in the following figure:

Activity Recognition Chain

Programming

Activity recognition applications can be developed directly in Matlab using the WDK's framework of reusable components.

The following code snippet creates a chain of computations and saves it to the goalkeeperChain.mat file. This chain of computations detects events using a peak detector on the squared magnitude of the accelerometer signal, segments the data around the detected events (200 samples to the left of the event and 30 samples to the right) and extracts the features defined in the goalkeeperFeatureChain.mat file.

```matlab
%select first three axes of acceleration
axisSelector = AxisSelector(1:3);%AX AY AZ

%compute the magnitude of acceleration
magnitudeSquared = MagnitudeSquared();

%detect peaks on the magnitude of acceleration
simplePeakDetector = SimplePeakDetector();
simplePeakDetector.minPeakHeight = single(0.8);
simplePeakDetector.minPeakDistance  = int32(100);

%create segments around detected peaks
eventSegmentation = EventSegmentation();
eventSegmentation.segmentSizeLeft = 200;
eventSegmentation.segmentSizeRight = 30;

%label created segments
labeler = EventSegmentsLabeler();

%load feature extraction algorithm
featureExtractor = DataLoader.LoadComputer('goalkeeperFeatureChain.mat');

%create the recognition algorithm
arcChain = Computer.ComputerWithSequence({FileLoader(),PropertyGetter('data'),...
axisSelector,magnitudeSquared,simplePeakDetector,eventSegmentation,labeler,...
featureExtractor});

%export the recognition algorithm
DataLoader.SaveComputer(arcChain,'goalkeeperChain.mat');
```

This chain of computations produces a feature table that can be used within the Assessment App to study the performance of different machine learning algorithms.
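
As a hedged usage sketch, the exported chain could later be reloaded and applied to a recording. Note that the execution call below is an assumption made for illustration, not confirmed WDK API:

```matlab
% reload the chain exported above (DataLoader.LoadComputer is shown earlier)
arcChain = DataLoader.LoadComputer('goalkeeperChain.mat');

% hypothetical invocation: since the chain starts with a FileLoader, its
% input would be a data file name (the real WDK entry point may differ)
featureTable = arcChain.compute('S1.mat');
```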

Visual Programming (optional)

Activity recognition applications can also be developed visually in Node-RED using the nodes available in the WDK-RED platform. The following image shows an activity recognition chain for detecting and classifying soccer goalkeeper training exercises using a wearable motion sensor attached to a glove worn by a goalkeeper:

Activity Recognition Chain

Activity Recognition Chains designed visually can then be exported from Node-RED and imported into the WDK for execution.

4- Algorithm Assessment

The development and assessment of an activity recognition algorithm usually represents a large fraction of the effort to develop the entire application. The Assessment App enables developers to design algorithms by selecting reusable components at each stage of the activity recognition chain and to assess both their recognition performance and their computational performance (e.g. the flops and memory figures listed for each component in the tables below).

The following image shows the configuration and classification results of an algorithm to detect and classify exercises performed by patients after a hip replacement surgery.

Assessment App

The Performance Assessment Detail View displays the classification results on top of the data and next to the corresponding video. Green overlays indicate correctly classified segments and red overlays indicate misclassified segments.

Assessment Detail View

Note: Feature tables generated with a particular activity recognition algorithm can be exported to .txt format in order to study the classification on other platforms such as Python / TensorFlow and WEKA.

Reusable Components

Here you can find a list of the reusable components, their configurable properties and their performance metrics relative to an input of size n.

Preprocessing

| Name | Description | Flops | Memory |
|------|-------------|-------|--------|
| HighPassFilter | Butterworth high-pass filter with order k | 13 k n | 1 / n |
| LowPassFilter | Butterworth low-pass filter with order k | 31 k n | 1 / n |
| Magnitude | Magnitude of the input signal | 4 n | 1 / n |
| SquaredMagnitude | Energy (squared magnitude) of the input signal | 2 n | 1 / n |
| Norm | Norm of the input signal | 2 n | 1 / n |
| Derivative | First or second derivative of the input signal | 40 n | 1 / n |
| S1 | S1 peak function | 40 k n | n |
| S2 | S2 peak function | 203 k n | n |

Invocations to preprocessing algorithms produce n values. The memory consumption of most preprocessing algorithms is 1 if the inPlaceComputation property is set to true or n otherwise.
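
For instance, the SquaredMagnitude preprocessing used throughout this document amounts to the following one-liner (illustrative, not the WDK class itself):

```matlab
acc = randn(1000, 3);               % dummy accelerometer data (AX, AY, AZ)
magnitudeSquared = sum(acc .^ 2, 2); % n output values, one per input sample
```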

Event Detection

| Name | Description | Flops | Memory |
|------|-------------|-------|--------|
| SimplePeakDetector | Threshold-based peak detector. Properties: minPeakHeight and minPeakDistance. Note: this algorithm is more suitable for deployment on an embedded device than the MatlabPeakDetector | 11 n | 1 |
| MatlabPeakDetector | Matlab's peak detector. Properties: minPeakHeight and minPeakDistance | 1787 n | n |

Note: The flops metrics shown above were calculated with random values in the range [0, 1] using minPeakHeight = 0.8 and minPeakDistance = 100. The actual performance varies depending on how often peaks are detected, which in turn depends on the input data and on the minPeakHeight and minPeakDistance properties.

An invocation to an event detection algorithm produces either no output or a single value (the detected event).
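
A minimal sketch in the spirit of the SimplePeakDetector's threshold-based logic (illustrative; the WDK implementation may differ):

```matlab
function peakIdxs = detectPeaks(signal, minPeakHeight, minPeakDistance)
    % emit a peak when a local maximum exceeds minPeakHeight and lies at
    % least minPeakDistance samples after the previously detected peak
    peakIdxs = [];
    lastPeakIdx = -minPeakDistance;
    for i = 2:numel(signal) - 1
        isLocalMax = signal(i) >= signal(i - 1) && signal(i) > signal(i + 1);
        if isLocalMax && signal(i) > minPeakHeight && i - lastPeakIdx >= minPeakDistance
            peakIdxs(end + 1) = i; %#ok<AGROW>
            lastPeakIdx = i;
        end
    end
end
```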

Segmentation

| Name | Description | Flops | Memory |
|------|-------------|-------|--------|
| SlidingWindow | Creates a segment of size segmentSize after every sampleInterval samples, e.g. segmentSize = 200 and sampleInterval = 100 creates segments with a 50% overlap. Note: receives a signal as input | segmentSize | segmentSize |
| EventSegmentation | Creates a segment around an event by taking segmentSizeLeft samples to the left and segmentSizeRight samples to the right of the event. Note: receives an event as input | segmentSizeLeft + segmentSizeRight | segmentSizeLeft + segmentSizeRight |
| ManualSegmentation | Creates segments for each annotation (converts RangeAnnotations to segments and creates segments around each EventAnnotation). This segmentation strategy cannot be used in a real application, but is useful to study the quality of the annotations | - | - |

An invocation to a segmentation algorithm produces a segment (of segmentSize or segmentSizeLeft + segmentSizeRight values).
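
A sketch of the SlidingWindow strategy on dummy data (illustrative):

```matlab
signal = randn(1000, 1);
segmentSize = 200; sampleInterval = 100;   % 50% overlap

startIdxs = 1:sampleInterval:(numel(signal) - segmentSize + 1);
segments = arrayfun(@(s) signal(s : s + segmentSize - 1), startIdxs, ...
    'UniformOutput', false);               % one cell per segment
```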

Labeling

| Name | Description |
|------|-------------|
| EventsLabeler | Labels events as the closest event annotation within a specified tolerance |
| EventSegmentsLabeler | Labels segments generated from an event using the EventsLabeler |
| RangeSegmentsLabeler | Labels segments based on range annotations. If the shouldContainEntireSegment property is set to true, segments are labeled only if they are fully contained in an annotation; otherwise, segments are labeled if their middle point is contained within a range annotation |
| LabelMapper | Maps labels to groups |

Feature Extraction

Time-domain features

| Name | Description | Flops | Memory |
|------|-------------|-------|--------|
| Min | Minimum value in the input signal | n | 1 |
| Max | Maximum value in the input signal | n | 1 |
| Mean | Average of the values in the input signal | n | 1 |
| Median | Median of the input signal | 15 n | 1 |
| Variance | Variance of the input signal | 2 n | 1 |
| STD | Standard deviation of the input signal | 2 n | 1 |
| ZCR | Zero Crossing Rate: the number of times the signal crosses the zero line | 5 n | 1 |
| Skewness | A measure of the asymmetry in the distribution of values in the input signal | 6 n | 1 |
| Kurtosis | Describes the "tailedness" of the distribution of values in the input signal | 6 n | 1 |
| IQR | Difference between Q3 and Q1, where Q1 is the median of the n/2 smallest values and Q3 is the median of the n/2 largest values in an input signal of size n. Can be calculated in O(n) time | 57 n | n |
| AUC | Area under the curve, computed with the trapezoid rule | 8 n | 1 |
| AAV | Average Absolute Variation | 5 n | 1 |
| Correlation | Pearson correlation coefficient of two n-dimensional inputs | 3 n | n |
| Energy | Sum of squared values in the input signal | 2 n | 1 |
| Entropy | Estimates the amount of information in the input signal. Rare sample values carry more information (and yield a higher entropy) than frequent ones. Computed as -∑ pᵢ log(pᵢ), where the pᵢ are the probability distribution values of the input signal | n^2 | n |
| MAD | Mean Absolute Deviation: the average distance of each data point to the mean | 5 n | 1 |
| MaxCrossCorr | Maximum value of the cross-correlation coefficients of two input signals. Note: input should be an n×2 array | 161 n | n |
| Octants | Determines the octant of each sample in an input array of n samples with 3 columns each (e.g. if all three columns are positive, octant = 1; if all three columns are negative, octant = 7) | 7 n | 1 |
| P2P | Peak-to-peak distance (distance between maximum and minimum values) | 3 n | 1 |
| Quantile | Computes the numQuantileParts cutpoints that separate the distribution of samples in the input signal | 3 n log2(n) | numQuantileParts |
| RMS | Root Mean Square | 2 n | 1 |
| SMV | Signal Vector Magnitude | 4 n | 1 |
| SMA | Sum of absolute values of every sample (takes 2-D inputs) | n m | 1 |

Invocations to time-domain feature extraction algorithms produce a single value except for the Quantile component which produces numQuantileParts values.
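
A few of the listed features computed directly in MATLAB on dummy data (illustrative; the AAV definition as the mean absolute difference of consecutive samples is an assumption):

```matlab
x = randn(256, 1);                 % dummy input segment

featMin = min(x);                  % Min
featP2P = max(x) - min(x);         % P2P: peak-to-peak distance
featZCR = sum(abs(diff(x > 0)));   % ZCR: number of zero crossings
featRMS = sqrt(mean(x .^ 2));      % RMS
featMAD = mean(abs(x - mean(x)));  % MAD: mean absolute deviation
featAAV = mean(abs(diff(x)));      % AAV (assumed definition)
```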

Frequency-domain features

| Name | Description | Flops | Memory |
|------|-------------|-------|--------|
| FFT | FFT of the input signal | n log2(n) | n |
| FFTDC | DC component of the FFT of the input signal | 1 | 1 |
| MaxFrequency | Maximum value in the Fourier transform of the input signal | n | 1 |
| PowerSpectrum | Distribution of power into frequency components. Note: outputs n coefficients | 4 n | n |
| SpectralCentroid | Indicates where the "center of mass" of the spectrum is located | 10 n | 1 |
| SpectralEnergy | Energy of the frequency domain (sum of squared DFT coefficients) | 2 n | 1 |
| SpectralEntropy | Indicates how chaotic the frequency distribution is / how much information it contains, computed from the coefficients yᵢ of the power spectrum of the input signal | 21 n | 1 |
| SpectralFlatness | Quantifies how noise-like a sound is. White noise has peaks at all frequencies, making its spectrum look flat | 68 n | 1 |
| SpectralSpread | Indicates the variance in the distribution of frequencies | 11 n | 1 |

Invocations to frequency-domain feature extraction algorithms output a single value, except for the FFT and PowerSpectrum, which produce n/2 and n values, respectively.
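
A sketch of a few frequency-domain features via fft() on dummy data (illustrative; the WDK components may normalize differently):

```matlab
x = randn(256, 1);                       % dummy input segment
n = numel(x);

Y = fft(x);
fftDC = Y(1);                            % FFTDC: DC component
powerSpectrum = abs(Y(1:n/2)) .^ 2;      % one-sided power spectrum
spectralEnergy = sum(powerSpectrum);     % SpectralEnergy

bins = (0:n/2 - 1).';                    % frequency bin indices
spectralCentroid = sum(bins .* powerSpectrum) / sum(powerSpectrum);
```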

Classification

| Name | Description |
|------|-------------|
| LDClassifier | Linear Discriminant classifier |
| TreeClassifier | Decision tree classifier. Properties: maxNumSplits |
| KNNClassifier | K-Nearest Neighbors classifier. Properties: nNeighbors, distanceMetric |
| EnsembleClassifier | Ensemble classifier. Properties: nLearners |
| SVMClassifier | Support Vector Machine classifier. Properties: order, boxConstraint |

The performance metrics of each classifier depend on its configuration and are calculated at runtime.

Postprocessing

| Name | Description |
|------|-------------|
| LabelMapper | Maps classified labels to different labels. Can be used to group classified labels when a greater level of detail was used in the classification |
| LabelSlidingWindowMaxSelector | Replaces every label at index labelIndex in an array of predicted labels with the most frequent label in the range [labelIndex − windowSize/2, labelIndex + windowSize/2], or with the NULL-class if no label occurs at least minimumCount times in the range |

The postprocessing components produce a label as output.
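
A sketch of the LabelSlidingWindowMaxSelector's majority voting (illustrative; the NULL-class is represented as 0 here):

```matlab
labels = [1 1 2 1 1 3 3 2 3 3].';    % dummy predicted labels
windowSize = 4; minimumCount = 3;
half = floor(windowSize / 2);

smoothed = zeros(size(labels));      % 0 stands for the NULL-class
for i = 1:numel(labels)
    window = labels(max(1, i - half) : min(numel(labels), i + half));
    [candidate, count] = mode(window);
    if count >= minimumCount
        smoothed(i) = candidate;     % most frequent label in the window
    end
end
```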

Utilities

| Name | Description | Flops | Memory | Communication |
|------|-------------|-------|--------|---------------|
| FeatureExtractor | Generates a table from an array of segments using the feature extraction method set in the computers property | - | - | - |
| FeatureNormalizer | Normalizes a feature table by subtracting the means property and dividing by the stds property. If the shouldComputeNormalizationValues property is set to true, it computes the means and stds properties from the input table | - | - | - |
| FeatureSelector | Returns a table with the columns in the selectedFeatureIdxs property. The findBestFeaturesForTable(f) method can be used to identify the f most relevant features; the mRMR feature selection algorithm is used for this purpose | - | - | - |
| ConstantMultiplier | Multiplies an input by the constant property | n | n | n |
| Substraction | Subtracts the second column from the first column of the input signal | 2 n | n | n |
| AxisMerger | Merges m signals of size n into an n×m matrix. Outputs the merged signal as soon as m signals have been received. The nAxes property indicates how many signals are expected | 3 n | m n | m n |
| AxisSelector | Selects the axes columns of the provided input matrix. Returns a new matrix | - | m n | m n |
| Merger | Merges m objects into a cell array. Generates an output as soon as m objects have been received | 1 | 1 | 1 |
| NoOp | Outputs the input object without modification. Can be useful when connected to several nodes to start an execution graph with multiple entry points | 1 | 1 | * |
| PropertyGetter | Outputs the value of the property property of the input object | 1 | * | * |
| PropertySetter | Sets the property property of the node object to the input value. Outputs an empty object | 1 | 1 | - |
| RangeSelector | Outputs a signal with the values in the range [rangeStart, rangeEnd] of the input signal | 2 n | rangeEnd − rangeStart | rangeEnd − rangeStart |
| SegmentsGrouper | Receives an array of segments and outputs the segments grouped by their class in a cell array. Each cell at index i contains an array of the segments of class i in the input | - | - | - |
| TableRowSelector | Selects the rows with selectedLabels labels in the input table. Returns the filtered input table | - | - | - |

The amount of memory and the output size of the PropertyGetter and NoOp modules depend on their input and configuration values and are computed at runtime.

References

Applications developed with the WDK:

  1. https://www.mdpi.com/2414-4088/2/2/27
  2. https://dl.acm.org/citation.cfm?id=3267267

About

My name is Juan Haladjian. I developed the Wearables Development Toolkit as part of my post-doc at the Technical University of Munich. Feel free to contact me with questions or feature requests. The project is under an MIT license. You are welcome to use the WDK, extend it and redistribute it for any purpose, as long as you give credit for it by copying the LICENSE.txt file to any copy you make.


Website: www.jhaladjian.com

Academic Website: http://in.tum.de/~haladjia

LinkedIn: www.linkedin.com/in/juan-haladjian

Email: haladjia@in.tum.de

Cite this project

@misc{haladjian2019,
  author =       {Juan Haladjian},
  title =        {{The Wearables Development Toolkit (WDK)}},
  howpublished = {\url{https://github.com/avenix/WDK}},
  year =         {2019}
}