Home

Awesome

<p align="center"> :fire: Please remember to :star: this repo if you find it useful and cite our work if you end up using it in your work! :fire: </p> <p align="center"> :fire: If you have any questions or concerns, please create an <a href="https://github.com/ubicomplab/rPPG-Toolbox/issues">issue</a> :memo:! :fire: </p>

rPPG-Toolbox Logo

:wave: Introduction

rPPG-Toolbox is an open-source platform designed for camera-based physiological sensing, also known as remote photoplethysmography (rPPG).

Overview of the rPPG

rPPG-Toolbox not only benchmarks the existing state-of-the-art neural and unsupervised methods, but it also supports flexible and rapid development of your own algorithms.

Overview of the toolbox

:notebook: Algorithms

rPPG-Toolbox currently supports the following algorithms:

:file_folder: Datasets

The toolbox supports seven datasets, namely SCAMPS, UBFC-rPPG, PURE, BP4D+, UBFC-Phys, MMPD, and iBVP. Please cite the corresponding papers when using these datasets. For now, we recommend training with UBFC-rPPG, PURE, iBVP, or SCAMPS due to the level of synchronization and volume of these datasets. To use these datasets with a deep learning model, you should organize the raw files into the expected per-dataset folder structure.
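As one illustration (an assumption based on how UBFC-rPPG is commonly distributed, with one folder per subject containing vid.avi and ground_truth.txt; defer to the dataset's own documentation and the per-dataset notes in this repo), a quick sanity check of a raw data folder before preprocessing might look like:

```python
# Hypothetical sanity check for a raw UBFC-rPPG folder layout
# (one folder per subject, each with vid.avi and ground_truth.txt).
from pathlib import Path

def check_ubfc_rppg_layout(raw_root: str) -> None:
    root = Path(raw_root)
    subjects = sorted(p for p in root.iterdir() if p.is_dir())
    for subject in subjects:
        video = subject / "vid.avi"
        label = subject / "ground_truth.txt"
        status = "OK" if video.exists() and label.exists() else "MISSING FILES"
        print(f"{subject.name}: {status}")

check_ubfc_rppg_layout("/path/to/UBFC-rPPG/RawData")
```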

:bar_chart: Benchmarks

The table shows Mean Absolute Error (MAE) and Mean Absolute Percent Error (MAPE) performance across all the algorithms and datasets:

Overview of the results

:wrench: Setup

You can use either conda or uv with this toolbox. Most users are already familiar with conda, but uv may be less familiar; check out some highlights about uv here. If you use uv, it's highly recommended that you do so independently of conda, meaning you should make sure you're not installing anything into the base conda environment or any other conda environment. If you're having trouble staying out of your base conda environment, try running conda config --set auto_activate_base false.

STEP 1: bash setup.sh conda or bash setup.sh uv

STEP 2: conda activate rppg-toolbox or, when using uv, source .venv/bin/activate

NOTE: the above setup should work without any issues on machines running Linux or macOS. If you run into compiler-related issues when using uv to install packages related to mamba, check whether clang++ is on your path using which clang++. If nothing shows up, you can install it using sudo apt-get install clang on Linux or xcode-select --install on macOS.

If you use Windows or other operating systems, consider using Windows Subsystem for Linux and following the steps within setup.sh independently.
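Once the environment is active, a quick sanity check (a minimal sketch; the exact package versions are pinned by setup.sh) is to confirm that PyTorch imports and can see your GPU:

```python
# Quick sanity check after activating the environment:
# confirms PyTorch is installed and reports whether a CUDA GPU is visible.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```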

:computer: Example of Using Pre-trained Models

Please use the config files under ./configs/infer_configs

For example, if you want to run the model trained on PURE and tested on UBFC-rPPG, use python main.py --config_file ./configs/infer_configs/PURE_UBFC-rPPG_TSCAN_BASIC.yaml

If you want to test unsupervised signal processing methods, you can use python main.py --config_file ./configs/infer_configs/UBFC-rPPG_UNSUPERVISED.yaml
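If you want to confirm which dataset paths and settings an inference config points to before running it, a minimal sketch is shown below (it only assumes PyYAML is installed in the environment; the key names come from the yaml file itself):

```python
# Minimal sketch: print the top-level settings of an inference config
# so you can verify dataset paths and model settings before launching main.py.
import yaml

with open("./configs/infer_configs/PURE_UBFC-rPPG_TSCAN_BASIC.yaml") as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(key, ":", value)
```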

:computer: Examples of Neural Network Training

Please use the config files under ./configs/train_configs

Training on PURE and Testing on UBFC-rPPG With TSCAN

STEP 1: Download the PURE raw data by requesting it from the paper's authors.

STEP 2: Download the UBFC-rPPG raw data via link

STEP 3: Modify ./configs/train_configs/PURE_PURE_UBFC-rPPG_TSCAN_BASIC.yaml

STEP 4: Run python main.py --config_file ./configs/train_configs/PURE_PURE_UBFC-rPPG_TSCAN_BASIC.yaml

Note 1: Preprocessing only needs to be run once; turn it off in the yaml file when you train the network after the first time.

Note 2: The example yaml setting uses 80% of PURE for training and 20% of PURE for validation. After training, the best model (the one with the lowest validation loss) is used to test on UBFC-rPPG.
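Before launching STEP 4, a small pre-flight check like the one below (a hypothetical helper; it assumes dataset locations are stored under DATA_PATH keys, which you should verify against the yaml you edited in STEP 3) can confirm that the paths you entered actually exist:

```python
# Hypothetical pre-flight check: walk the edited training config and verify
# that every value stored under a DATA_PATH key exists on disk.
import os
import yaml

def check_data_paths(node, parent_key=""):
    if isinstance(node, dict):
        for key, value in node.items():
            full_key = f"{parent_key}.{key}" if parent_key else key
            if key == "DATA_PATH" and isinstance(value, str):
                exists = os.path.isdir(value) or os.path.isfile(value)
                print(f"{full_key}: {value} -> {'found' if exists else 'NOT FOUND'}")
            else:
                check_data_paths(value, full_key)

with open("./configs/train_configs/PURE_PURE_UBFC-rPPG_TSCAN_BASIC.yaml") as f:
    check_data_paths(yaml.safe_load(f))
```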

Training on SCAMPS and testing on UBFC-rPPG With DeepPhys

STEP 1: Download SCAMPS via this link and split it into train/val/test folders (a sketch of one way to do this split follows the steps below).

STEP 2: Download the UBFC-rPPG via link

STEP 3: Modify ./configs/train_configs/SCAMPS_SCAMPS_UBFC-rPPG_DEEPPHYS_BASIC.yaml

STEP 4: Run python main.py --config_file ./configs/train_configs/SCAMPS_SCAMPS_UBFC-rPPG_DEEPPHYS_BASIC.yaml

Note 1: Preprocessing only needs to be run once; turn it off in the yaml file when you train the network after the first time.

Note 2: The example yaml setting uses 80% of SCAMPS for training and 20% of SCAMPS for validation. After training, the best model (the one with the lowest validation loss) is used to test on UBFC-rPPG.
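A minimal sketch of the train/val/test split mentioned in STEP 1 is shown below (it assumes SCAMPS is distributed as one file per synthetic clip, matched here with *.mat; adjust the glob pattern and ratios as needed):

```python
# Hypothetical split of SCAMPS files into train/val/test subfolders.
# Assumes one .mat file per clip; adjust the glob pattern and ratios as needed.
import random
import shutil
from pathlib import Path

def split_scamps(src_dir: str, ratios=(0.8, 0.1, 0.1), seed=42):
    src = Path(src_dir)
    files = sorted(src.glob("*.mat"))
    random.Random(seed).shuffle(files)
    n_train = int(len(files) * ratios[0])
    n_val = int(len(files) * ratios[1])
    splits = {
        "train": files[:n_train],
        "val": files[n_train:n_train + n_val],
        "test": files[n_train + n_val:],
    }
    for name, split_files in splits.items():
        dst = src / name
        dst.mkdir(exist_ok=True)
        for f in split_files:
            shutil.move(str(f), dst / f.name)

split_scamps("/path/to/SCAMPS")
```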

:zap: Inference With Unsupervised Methods

STEP 1: Download the UBFC-rPPG via link

STEP 2: Modify ./configs/infer_configs/UBFC_UNSUPERVISED.yaml

STEP 3: Run python main.py --config_file ./configs/infer_configs/UBFC_UNSUPERVISED.yaml

:eyes: Visualization of Preprocessed Data

A Python notebook for visualizing preprocessed data can be found in tools/preprocessing_viz along with an associated README. The notebook, viz_preprocessed_data.ipynb, automatically detects the preprocessed data format and then plots example input images and waveforms.

Data Visualization Example

:chart_with_downwards_trend: Plots of Training Losses and LR

This toolbox automatically saves plots of training losses and, if applicable, validation losses. Plots are saved in LOG.PATH (runs/exp by default). An example of these plots when training and validating with the UBFC-rPPG dataset and testing on the PURE dataset is shown below.

<img src="./figures/example_losses_plot.png" alt="drawing" width="600"/> <img src="./figures/example_lr_schedule_plot.png" alt="drawing" width="400"/>

:straight_ruler: Bland-Altman Plots

By default, this toolbox produces Bland-Altman plots as part of its metrics evaluation process for both supervised and unsupervised methods. These plots are saved in LOG.PATH (runs/exp by default). An example of these plots after training and validating with the UBFC-rPPG dataset and testing on the PURE dataset is shown below.

<img src="./figures/example_scatter_plot.png" alt="drawing" width="450"/> <img src="./figures/example_difference_plot.png" alt="drawing" width="450"/>

:eyes: Visualization of Neural Method Predictions

A Python notebook for visualizing test-set neural method output predictions and labels can be found in tools/output_signal_viz along with an associated README. Given a .pickle output file (generated by setting TEST.OUTPUT_SAVE_DIR), the notebook data_out_viz.ipynb assists in plotting predicted PPG signals against ground-truth PPG signals.

Prediction Visualization Example
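If you prefer to explore the saved outputs outside the notebook, a minimal sketch is shown below; the file name and location are placeholders for whatever TEST.OUTPUT_SAVE_DIR you configured, and the dictionary structure should be checked against the README in tools/output_signal_viz:

```python
# Illustrative only: open the .pickle produced by setting TEST.OUTPUT_SAVE_DIR
# and inspect its top-level structure before plotting.
import pickle

# Placeholder path: point this at your own saved outputs file.
with open("runs/exp/saved_test_outputs/your_outputs.pickle", "rb") as f:
    outputs = pickle.load(f)

print(type(outputs))
if isinstance(outputs, dict):
    for key, value in outputs.items():
        print(key, "->", type(value))
```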

:scroll: YAML File Setting

The rPPG-Toolbox uses yaml files to control all parameters for training and evaluation. You can modify the existing yaml files to meet your own training and testing requirements.

Here are explanations of some of the parameters:

:open_file_folder: Adding a New Dataset

:robot: Adding a New Neural Algorithm

:chart_with_upwards_trend: Adding a New Unsupervised Algorithm

:green_book: Weakly Supervised Training

Supervised rPPG training requires high-fidelity, synchronous PPG waveform labels. However, not all datasets contain such high-quality labels. In these cases, we offer the option to train on synchronous PPG "pseudo" labels derived through a signal-processing methodology. These labels are produced from POS-generated PPG waveforms, which are bandpass filtered around the normal heart-rate frequencies and then amplitude-normalized using a Hilbert-signal envelope. The tight filtering and envelope normalization result in a strong periodic proxy signal, but at the cost of limited signal morphology.

pseudo_labels
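As a rough illustration of the label-generation idea (not the toolbox's exact implementation), bandpass filtering a POS-derived waveform to the normal heart-rate band and then normalizing its amplitude by the Hilbert envelope might look like:

```python
# Rough illustration of pseudo-label generation: bandpass a POS-derived PPG
# waveform to typical heart-rate frequencies, then normalize its amplitude
# by the Hilbert-transform envelope. Not the toolbox's exact implementation.
import numpy as np
from scipy import signal

def pseudo_label(pos_ppg, fs=30.0, low_hz=0.75, high_hz=2.5):
    # Band-pass around plausible heart rates (~45-150 bpm).
    b, a = signal.butter(2, [low_hz, high_hz], btype="bandpass", fs=fs)
    filtered = signal.filtfilt(b, a, pos_ppg)
    # Amplitude-normalize with the analytic-signal (Hilbert) envelope.
    envelope = np.abs(signal.hilbert(filtered))
    return filtered / (envelope + 1e-8)

# Example: a noisy synthetic waveform at 30 fps with a ~1.2 Hz (72 bpm) pulse.
t = np.arange(0, 10, 1 / 30.0)
demo = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)
print(pseudo_label(demo)[:5])
```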

:blue_book: Motion Augmented Training

The usage of synthetic data in the training of machine learning models for medical applications is becoming a key tool that warrants further research. In addition to providing support for the fully synthetic dataset SCAMPS, we provide support for synthetic, motion-augmented versions of the UBFC-rPPG, PURE, SCAMPS, and UBFC-Phys datasets for further exploration toward the use of synthetic data for training rPPG models. The synthetic, motion-augmented datasets are generated using the MA-rPPG Video Toolbox, an open-source motion augmentation pipeline targeted at increasing motion diversity in rPPG videos. You can generate and utilize the aforementioned motion-augmented datasets using the steps below.

If you use the aforementioned functionality, please remember to cite the following in addition to citing the rPPG-Toolbox:

Refer to this BibTeX for quick inclusion into a .bib file.

<p align="center"> <img src="./figures/ma_rppg_video_toolbox_teaser.gif" alt="Examples of motion augmentation applied to subjects in the UBFC-rPPG dataset." /> </p>

:orange_book: Extending the Toolbox to Multi-Tasking With BigSmall

We implement BigSmall as an example to show how this toolbox may be extended to support physiological multitasking. If you use this functionality, please cite the following publication:

The BigSmall model multi-tasks pulse (PPG regression), respiration (regression), and facial actions (multi-label AU classification). In this toolbox, the model is trained and evaluated on the AU label subset of the BP4D+ dataset (described in the BigSmall publication), using 3-fold cross-validation (with the same folds as in the BigSmall publication).

<p align="center"> <img src="./figures/bigsmall_ex1.gif" alt="Example Multi-Task Output From BigSmall." /> </p>

:page_with_curl: Using Custom Data Splits and Custom File Lists

Best practice for rPPG model evaluation involves training and validating a model on one dataset and then evaluating (testing) its performance on additional datasets (e.g., training on PURE and testing on UBFC-rPPG). Data splits used for training, validation, and testing are saved as .csv filelists, with the default directory path set to CACHED_PATH/DataFileLists (these are generally auto-generated). In cases where users would like to define their own data splits (e.g., for intra-dataset cross-validation), the following steps can be used to achieve this.
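As one possible sketch of defining a custom split (the filelist path and the input_files column name below are assumptions; inspect an auto-generated filelist under CACHED_PATH/DataFileLists first to confirm its format):

```python
# Sketch of building a custom data split: start from an auto-generated filelist,
# keep only the entries whose path mentions the subjects you want, and save the
# result as a new .csv filelist.
# The 'input_files' column name is an assumption; check your generated csv first.
import pandas as pd

filelist = pd.read_csv("CACHED_PATH/DataFileLists/your_generated_filelist.csv")  # placeholder path
print(filelist.columns.tolist())

keep_subjects = ["subject1", "subject3"]  # hypothetical subject tags
mask = filelist["input_files"].astype(str).apply(
    lambda p: any(s in p for s in keep_subjects)
)
filelist[mask].to_csv("CACHED_PATH/DataFileLists/my_custom_split.csv", index=False)
```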

:scroll: Citation

If you find our paper or this toolbox useful for your research, please cite our work.

@article{liu2022rppg,
  title={rPPG-Toolbox: Deep Remote PPG Toolbox},
  author={Liu, Xin and Narayanswamy, Girish and Paruchuri, Akshay and Zhang, Xiaoyu and Tang, Jiankai and Zhang, Yuzhe and Wang, Yuntao and Sengupta, Soumyadip and Patel, Shwetak and McDuff, Daniel},
  journal={arXiv preprint arXiv:2210.00716},
  year={2022}
}

License

<a href="https://www.licenses.ai/source-code-license"> <img src="https://images.squarespace-cdn.com/content/v1/5c2a6d5c45776e85d1482a7e/1546750722018-T7QVBTM15DQMBJF6A62M/RAIL+Final.png" alt="License: Responsible AI" width="30%"> </a>

Acknowledgement

This research project is supported by a Google PhD Fellowship for Xin Liu and a research grant from Cisco for the University of Washington, as well as a career start-up funding grant from the Department of Computer Science at UNC Chapel Hill. This research is also supported by the Tsinghua University Initiative Scientific Research Program, the Beijing Natural Science Foundation, and the Natural Science Foundation of China (NSFC). We would also like to acknowledge all the contributors from the open-source community.