PhysBench
Please start with Tutorial/Noob Heart.ipynb to learn about this framework.

Although I personally prefer TensorFlow, PhysBench is not tied to any specific deep learning framework. PyTorch and JAX users can refer to Tutorial/Noob Heart (Pytorch).ipynb and Tutorial/Noob Heart (JAX).ipynb respectively.

Environments

First, create a new environment for PhysBench.

conda create -n physbench python=3.9
conda activate physbench
pip install -r requirements.txt

Then, install the deep learning frameworks according to your needs. If you need to install multiple frameworks, it is recommended to create different environments for them.
Install TensorFlow environment:

conda install -c conda-forge tensorflow-gpu keras

Install Pytorch environment:

conda install pytorch torchvision torchaudio pytorch-cuda -c pytorch -c nvidia

Inference on a single video

To extract BVP signals from your own collected video, run the following command.

python inference.py --video input_face.avi --model seq

Currently supported models are seq, tscan, deepphys, efficientphys, physnet, chrom, pos, and ica.
Use --out path_to_bvp.csv to specify where the output BVP waveform is saved;
use --show-wave to visualize the output;
use --weights path_to_weights.h5 to specify a weights file (otherwise the weights trained on RLAP are used automatically).

Models

We implemented 7 neural models and 3 unsupervised models: DeepPhys, TS-CAN, EfficientPhys, PhysNet, PhysFormer, Seq-rPPG, NoobHeart, Chrom, ICA, and POS. Among them, Seq-rPPG is a new model we propose that uses only one-dimensional convolutions, combining minimal computational complexity with high performance. NoobHeart is a toy model used in the tutorial: it has only 361 parameters and a simple two-layer 3D convolution structure, yet its performance is decent, making it suitable as an entry-level model. Chrom, ICA, and POS are the three unsupervised models. Among the neural models, PhysFormer is implemented in PyTorch while the others use TensorFlow.

For unsupervised methods, please refer to unsupervised_methods.py; for methods implemented using TensorFlow, please refer to models.py; for methods implemented using PyTorch, please refer to models_torch.py. Our framework is not dependent on a specific deep learning framework. Please configure the environment as needed and install the required packages using requirements.txt.

| Model | Publication | Resolution | Params | Frame FLOPs | Input | Output | Type |
|---|---|---|---|---|---|---|---|
| DeepPhys | ECCV 18 | 36x36 | 532K | 52M | Diff+RGB | Diff | 2D CNN |
| TS-CAN | NIPS 20 | 36x36 | 532K | 52M | Diff+RGB | Diff | 2D CNN |
| EfficientPhys | WACV 23 | 72x72 | 2.16M | 230M | Std RGB | Diff | 2D CNN |
| PhysNet | BMVC 19 | 32x32 | 770K | 54M | RGB | Wave | 3D CNN |
| PhysFormer | CVPR 22 | 128x128 | 7.03M | 324M | RGB | Wave | Transformer |
| Seq-rPPG | This paper | 8x8 | 196K | 261K | RGB | Wave | 1D CNN |
| NoobHeart | This paper | 8x8 | 361 | 5790 | RGB | Wave | 3D CNN |
| Chrom | TBME 13 | - | - | - | - | - | Unsupervised |
| ICA | TBME 11 | - | - | - | - | - | Unsupervised |
| POS | TBME 16 | - | - | - | - | - | Unsupervised |

Add new models (supervised or unsupervised)

For any model, whether implemented in TensorFlow, PyTorch, or plain NumPy, the input is facial video clips and the output is the corresponding physiological signals. The only thing that needs to be done is to wrap the algorithm in a function that takes video frames as input and outputs BVP signals or heart rate.

def model(frames):
    # frames is a (Batch, Depth, H, W, C) array containing only the face.
    x   = preprocess(frames)   # preprocessing (if necessary)
    BVP = algorithm(x)
    return BVP                 # (Batch, Depth)
    
# Evaluate the model on the HDF5 standard dataset
eval_on_dataset('test_set.h5', model, depth, (H, W), save='results/my_result.h5')

# Obtain HR metrics
hr_metrics = get_metrics('results/my_result.h5')

# Obtain HRV metrics
hrv_metrics = get_metrics_HRV('results/my_result.h5')
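As a concrete illustration of this interface, here is a hypothetical toy baseline (not one of the models shipped with PhysBench) that simply averages the green channel over space:

```python
import numpy as np

def green_mean_model(frames):
    """Toy unsupervised baseline: spatially average the green channel.

    frames: (Batch, Depth, H, W, C) uint8 array containing only the face.
    Returns a zero-mean, unit-variance pseudo-BVP of shape (Batch, Depth).
    """
    g = frames[..., 1].astype(np.float32)          # green channel
    bvp = g.mean(axis=(2, 3))                      # spatial mean -> (Batch, Depth)
    bvp = bvp - bvp.mean(axis=1, keepdims=True)    # remove DC component
    bvp = bvp / (bvp.std(axis=1, keepdims=True) + 1e-8)  # normalize variance
    return bvp
```

A function with this signature can be passed to eval_on_dataset in place of a trained network.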

Open the visualization webpage, where you can find my_result.h5 and view the waveform of each video.

python visualization.py

Datasets

Adding a dataset is simple: just write a loader and include an index file (usually only about 20 lines of code). Currently supported loaders are RLAP (i.e., CCNU), UBFC-rPPG2, UBFC-PHYS, MMPD, PURE, COHFACE, and SCAMPS. You can use our recording program PhysRecorder (https://github.com/KegangWangCCNU/PhysRecorder) to record datasets; all you need is a webcam and a Contec CMS50E to collect strictly synchronized, lossless datasets that can be used directly with the RLAP loader.
It is recommended to train on datasets with good synchronicity, as most models are highly sensitive to the synchronicity of the training set. Moreover, not all videos in UBFC-rPPG are unsynchronized; in our experience, some models with a Temporal Shift Module (TSM), such as TS-CAN and EfficientPhys, can adapt to it, but their performance is still inferior to training on highly synchronized datasets.

| Dataset | Participants | Frames | Lossless | Synchronicity |
|---|---|---|---|---|
| RLAP | 58 | 3.53M | MJPG | Good |
| RLAP-rPPG | 58 | 781K | YES | Good |
| PURE | 10 | 106K | YES | Good |
| UBFC-rPPG | 42 | 75K | YES | Bad |
| UBFC-Phys | 56 | 1.06M | MJPG | - |
| MMPD | 33 | 1.15M | H.264 | - |
| COHFACE | 40 | 192K | MPEG-4 | Good |
| SCAMPS | 2800 | 1.68M | Synthetics | Good |

You need to organize an index file for each dataset, and PhysBench provides the official versions of these files. Usually, you don't need to change the folder structure of the datasets to use them. Please check the csv files in the datasets folder.

Note: Our framework implements UBFC-Phys, but due to the large motion amplitude, there is substantial noise in its ground truth, and the test results may be unreliable, so they are not listed. Further measures may be needed to filter out inaccurate ground-truth signals before the results can be released.

Add new datasets

To add a new dataset, two things need to be prepared: adding a Loader and organizing a file index.
Taking MMPD as an example:

class LoaderMMPD(Loader):

    def __call__(self, vid):                        # vid is the relative path of the video file.
        path = f"{self.base}{vid}"                  # Obtain the absolute path
        f = scipy.io.loadmat(path)                  
        bvp = f['GT_ppg'][0]                        # (Depth, )
        ts = np.arange(bvp.shape[0])/30 # 30fps     # (Depth, )
        vid = (f['video']*255).astype(np.uint8)     # (Depth, H, W, C)
        return vid, bvp, ts                         # Return video frame, BVP, timestamps
        
loader_mmpd = LoaderMMPD(mmpd_root) # Use the MMPD dataset root directory to initialize the loader.

# Use the Loader to package the raw MMPD dataset into an HDF5 standard dataset, which can be used for testing models.
dump_dataset("mmpd_dataset.h5", files_mmpd, loader_mmpd, labels=labels_list)
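To show the Loader pattern more generically, here is a sketch for a hypothetical dataset that stores each clip as NumPy files; the class name and file layout are invented for illustration, and only the (vid, bvp, ts) return convention matters:

```python
import numpy as np

class LoaderNPY:
    """Hypothetical loader sketch for a dataset that stores each clip as
    <name>_vid.npy with shape (Depth, H, W, C) and <name>_bvp.npy with
    shape (Depth,), recorded at 30 fps."""

    def __init__(self, base):
        self.base = base  # dataset root directory

    def __call__(self, name):
        vid = np.load(f"{self.base}/{name}_vid.npy")  # (Depth, H, W, C)
        bvp = np.load(f"{self.base}/{name}_bvp.npy")  # (Depth,)
        ts = np.arange(bvp.shape[0]) / 30             # timestamps at 30 fps
        return vid, bvp, ts
```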

Train and Test

To train on our RLAP dataset, see the benchmark_RLAP folder; to train on the SCAMPS dataset, see the benchmark_SCAMPS folder. In addition, for ablation experiments and training on PURE and UBFC, see benchmark_addition. All code is provided as Jupyter notebooks with our replication included; if you have read the tutorial, replicating the results should be easy.

Training evaluation on RLAP

RLAP is an appropriate training set; we divide it into training, validation, and testing sets. In addition, tests were conducted on the entire UBFC and PURE datasets. For code and results, refer to benchmark_RLAP.
Testing on the RLAP and RLAP-rPPG datasets differs from the other datasets. Because RLAP videos are longer, a 30 s moving window is used for heart rate prediction instead of the entire video. For the other datasets, the entire 1 min video is used for heart rate prediction.
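The moving-window evaluation described above can be sketched as follows, assuming 30 fps video and Welch's method for the spectral peak; the function names here are illustrative, not the framework's API:

```python
import numpy as np
from scipy.signal import welch

def hr_from_bvp(bvp, fs=30, lo=0.66, hi=3.0):
    """Estimate heart rate (bpm) from one BVP window as the Welch PSD peak
    in the 0.66-3.0 Hz band (roughly 40-180 bpm)."""
    f, p = welch(bvp, fs=fs, nperseg=min(len(bvp), 256))
    band = (f >= lo) & (f <= hi)
    return f[band][np.argmax(p[band])] * 60

def moving_hr(bvp, fs=30, win_s=30, step_s=1):
    """Slide a 30 s window over a long recording, one HR estimate per step."""
    win, step = win_s * fs, step_s * fs
    return [hr_from_bvp(bvp[i:i + win], fs)
            for i in range(0, len(bvp) - win + 1, step)]
```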

Intra-dataset testing on RLAP

| Model | MAE | RMSE | Pearson Coef. |
|---|---|---|---|
| DeepPhys | 1.52 | 4.40 | 0.906 |
| TS-CAN | 1.23 | 3.59 | 0.937 |
| EfficientPhys | 1.05 | 3.41 | 0.943 |
| PhysNet | 1.12 | 4.13 | 0.916 |
| PhysFormer | 1.56 | 6.28 | 0.803 |
| Seq-rPPG | 1.07 | 4.15 | 0.917 |
| NoobHeart | 1.79 | 5.85 | 0.832 |
| Chrom | 6.90 | 16.0 | 0.341 |
| ICA | 6.05 | 13.3 | 0.380 |
| POS | 4.25 | 12.1 | 0.501 |
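The MAE (bpm), RMSE (bpm), and Pearson coefficient reported in these tables are the standard HR error metrics; a minimal sketch of how they are computed (illustrative, not the framework's get_metrics):

```python
import numpy as np

def hr_metrics(hr_pred, hr_gt):
    """Standard rPPG HR metrics over paired per-video HR estimates (bpm)."""
    hr_pred, hr_gt = np.asarray(hr_pred, float), np.asarray(hr_gt, float)
    err = hr_pred - hr_gt
    mae = np.abs(err).mean()                 # mean absolute error
    rmse = np.sqrt((err ** 2).mean())        # root mean square error
    r = np.corrcoef(hr_pred, hr_gt)[0, 1]    # Pearson coefficient
    return mae, rmse, r
```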

Intra-dataset testing on RLAP-rPPG

| Model | HR MAE | HR RMSE | HR Pearson Coef. | HRV-SDNN MAE | HRV-SDNN RMSE | HRV-SDNN Pearson Coef. |
|---|---|---|---|---|---|---|
| DeepPhys | 1.76 | 4.87 | 0.877 | 57.6 | 64.2 | 0.338 |
| TS-CAN | 1.23 | 3.82 | 0.922 | 50.1 | 59.3 | 0.395 |
| EfficientPhys | 1.00 | 3.39 | 0.939 | 43.7 | 53.7 | 0.356 |
| PhysNet | 1.04 | 3.80 | 0.923 | 36.4 | 43.8 | 0.306 |
| PhysFormer | 0.78 | 2.83 | 0.957 | 28.8 | 34.4 | 0.450 |
| Seq-rPPG | 0.81 | 2.97 | 0.953 | 14.4 | 22.1 | 0.424 |
| NoobHeart | 1.57 | 4.71 | 0.883 | 52.3 | 57.3 | 0.488 |
| Chrom | 5.88 | 14.1 | 0.451 | 63.7 | 69.8 | 0.267 |
| ICA | 4.56 | 9.91 | 0.569 | 74.7 | 77.7 | 0.408 |
| POS | 3.60 | 10.1 | 0.634 | 70.6 | 75.8 | 0.267 |

Cross-dataset testing on UBFC-rPPG

The videos and physiological signals of UBFC-rPPG are not strictly synchronized, which results in a fixed error between the heart rate extracted by rPPG algorithms and the ground truth. Therefore, the achievable limit on UBFC-rPPG is approximately a Pearson coefficient of 0.997, and further improvements in model accuracy will not yield better metrics.

| Model | HR MAE | HR RMSE | HR Pearson Coef. | HRV-SDNN MAE | HRV-SDNN RMSE | HRV-SDNN Pearson Coef. |
|---|---|---|---|---|---|---|
| DeepPhys | 1.06 | 1.51 | 0.997 | 30.0 | 37.8 | 0.648 |
| TS-CAN | 0.99 | 1.44 | 0.997 | 25.6 | 31.8 | 0.588 |
| EfficientPhys | 1.03 | 1.45 | 0.997 | 10.1 | 15.4 | 0.827 |
| PhysNet | 0.92 | 1.46 | 0.997 | 12.2 | 14.9 | 0.887 |
| PhysFormer | 1.06 | 1.53 | 0.997 | 8.37 | 11.1 | 0.921 |
| Seq-rPPG | 0.87 | 1.40 | 0.997 | 4.73 | 8.25 | 0.911 |
| NoobHeart | 1.14 | 1.69 | 0.996 | 33.1 | 36.5 | 0.697 |
| Chrom | 3.82 | 12.3 | 0.830 | 23.7 | 28.6 | 0.672 |
| ICA | 1.58 | 2.55 | 0.990 | 33.3 | 42.0 | 0.604 |
| POS | 2.45 | 8.56 | 0.900 | 30.5 | 37.6 | 0.513 |

Cross-dataset testing on PURE

Unsupervised methods are usually sensitive to preprocessing and postprocessing, and many parameters affect their performance. PhysBench optimizes these additional steps as much as possible to fully demonstrate each model's performance. Surprisingly, POS outperforms most supervised methods on the PURE dataset; after careful verification, the results are genuine.
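For a sense of what these unsupervised methods compute, here is a minimal sketch of the core POS projection; the actual implementation in unsupervised_methods.py may differ in windowing, filtering, and post-processing details:

```python
import numpy as np

def pos(rgb, fs=30):
    """Minimal sketch of the POS algorithm.

    rgb: (N, 3) array of mean skin-pixel color per frame.
    Returns a pulse signal of shape (N,) via windowed projection
    onto a plane orthogonal to the skin tone, with overlap-add.
    """
    n = len(rgb)
    h = np.zeros(n)
    l = int(1.6 * fs)  # 1.6 s sliding window, as in the original paper
    proj = np.array([[0.0, 1.0, -1.0],
                     [-2.0, 1.0, 1.0]])
    for t in range(n - l + 1):
        c = rgb[t:t + l].T                          # (3, l) window
        cn = c / c.mean(axis=1, keepdims=True)      # temporal normalization
        s = proj @ cn                               # two projected signals
        p = s[0] + (s[0].std() / (s[1].std() + 1e-9)) * s[1]
        h[t:t + l] += p - p.mean()                  # overlap-add
    return h
```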

| Model | HR MAE | HR RMSE | HR Pearson Coef. | HRV-SDNN MAE | HRV-SDNN RMSE | HRV-SDNN Pearson Coef. |
|---|---|---|---|---|---|---|
| DeepPhys | 2.80 | 8.31 | 0.937 | 86.0 | 92.0 | 0.297 |
| TS-CAN | 2.12 | 6.67 | 0.960 | 61.4 | 74.1 | 0.293 |
| EfficientPhys | 1.33 | 5.97 | 0.968 | 28.0 | 44.0 | 0.468 |
| PhysNet | 0.51 | 0.91 | 0.999 | 22.5 | 35.7 | 0.560 |
| PhysFormer | 1.63 | 9.45 | 0.941 | 21.6 | 32.0 | 0.576 |
| Seq-rPPG | 0.37 | 0.63 | 1.000 | 9.51 | 15.8 | 0.872 |
| NoobHeart | 0.45 | 0.70 | 1.000 | 50.8 | 58.1 | 0.657 |
| Chrom | 2.08 | 12.3 | 0.856 | 40.4 | 56.2 | 0.418 |
| ICA | 1.12 | 3.97 | 0.986 | 67.5 | 76.5 | 0.376 |
| POS | 0.39 | 0.66 | 1.000 | 56.1 | 69.2 | 0.467 |

Cross-dataset testing on MMPD-Simplest

Referencing https://github.com/McJackTang/MMPD_rPPG_dataset, we tested all models in the simplest scenario. MMPD is a highly compressed dataset using H.264 encoding, which may affect compression-sensitive models. The simplest scenario contains only light-skin samples without head movement, defined as: motion='Stationary', skin_color='3', light=['LED-high', 'LED-low', 'Incandescent'].

| Model | MAE | RMSE | Pearson Coef. |
|---|---|---|---|
| DeepPhys | 1.03 | 1.46 | 0.987 |
| TS-CAN | 0.95 | 1.40 | 0.989 |
| EfficientPhys | 1.57 | 5.40 | 0.821 |
| PhysNet | 0.97 | 1.45 | 0.988 |
| PhysFormer | 1.70 | 4.13 | 0.890 |
| Seq-rPPG | 1.52 | 3.93 | 0.915 |
| NoobHeart | 2.78 | 6.31 | 0.763 |
| Chrom | 12.2 | 19.2 | 0.151 |
| ICA | 4.08 | 9.45 | 0.642 |
| POS | 4.30 | 10.8 | 0.426 |

Cross-dataset testing on COHFACE

COHFACE uses MPEG-4 compression with a very high compression ratio: each video is under 2 MB, which causes most rPPG algorithms to fail on it. However, some structures are robust to heavy compression, such as DeepPhys-like structures that take the difference between video frames as input and output the difference of the BVP. In addition, the poorly performing algorithms are not entirely without merit; because prediction fails completely on some videos, part of the error is effectively meaningless, and more appropriate metrics should be found to measure performance.
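The frame-difference input mentioned above can be sketched as follows (a simplified illustration; the framework's actual DeepPhys preprocessing may differ):

```python
import numpy as np

def normalized_frame_diff(frames):
    """DeepPhys-style motion input: normalized difference of consecutive
    frames, which suppresses static appearance (and static compression
    artifacts). frames: (Depth, H, W, C) float array with positive values;
    returns (Depth - 1, H, W, C)."""
    a, b = frames[1:], frames[:-1]
    d = (a - b) / (a + b + 1e-8)     # scale-invariant frame difference
    return d / (d.std() + 1e-8)      # unit variance, as in DeepPhys
```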

| Model | MAE | RMSE | Pearson Coef. |
|---|---|---|---|
| DeepPhys | 2.75 | 8.63 | 0.733 |
| TS-CAN | 2.28 | 7.81 | 0.774 |
| EfficientPhys | 3.94 | 12.0 | 0.528 |
| PhysNet | 19.6 | 26.9 | -0.45 |
| PhysFormer | 20.0 | 26.1 | -0.37 |
| Seq-rPPG | 16.1 | 25.7 | -0.12 |
| NoobHeart | 25.0 | 29.5 | -0.36 |
| Chrom | 27.4 | 32.4 | -0.32 |
| ICA | 7.91 | 16.1 | 0.282 |
| POS | 22.3 | 29.9 | -0.32 |

Training evaluation on SCAMPS

Training on synthetic datasets is difficult; we observed that overfitting occurs easily, and many measures are required to prevent it, such as controlling the learning rate and adding extra regularization. Smaller models may be less prone to overfitting: NoobHeart is an example where we froze the LayerNormalization layers at their initial parameters and trained for 5 epochs, achieving performance similar to training on real datasets. This could be the first step toward training on synthetic datasets.

Referencing https://github.com/remotebiosensing/rppg and rPPG-Toolbox, we use the OneCycle learning-rate schedule and the AdamW optimizer to mitigate overfitting when training DeepPhys. For details, refer to https://github.com/KegangWangCCNU/PhysBench/blob/main/benchmark_SCAMPS/DeepPhys.ipynb

Cross-dataset testing on UBFC

| Model | MAE | RMSE | Pearson Coef. |
|---|---|---|---|
| DeepPhys | 9.51 | 18.2 | 0.608 |
| NoobHeart | 1.05 | 1.49 | 0.997 |

Cross-dataset testing on PURE

| Model | MAE | RMSE | Pearson Coef. |
|---|---|---|---|
| DeepPhys | 5.41 | 13.3 | 0.852 |
| NoobHeart | 0.53 | 0.88 | 0.999 |

Visualization

Please run visualization.py to open the visualization webpage. Before visualizing, make sure all result files are saved in the results folder. When the framework generates result files, it links them to the dataset files so that the visualization webpage can display face images synchronously. If the link becomes invalid, for example when dataset files are moved, faces can no longer be displayed on the webpage.

Limitation

The test data used by PhysBench may not reflect accuracy in real-world scenarios, which involve more diverse lighting conditions, head movements, skin tones, and age groups. The heart rate produced by the algorithms via the Welch method may not fully comply with medical standards and requires further rigorous evaluation before clinical use. We aim to inform users of the weaknesses and limitations of the algorithms as much as possible through the visualization webpage.

Full Benchmark Table

All the results of the experiments we conducted can be found here.
FullBench.pdf

Request RLAP dataset

If you wish to obtain the RLAP dataset, please send an email to kegangwang@mails.ccnu.edu.cn and cc yantaowei@ccnu.edu.cn, with the Data Usage Agreement attached.
See https://github.com/KegangWangCCNU/RLAP-dataset

Citation

If you use PhysBench framework, PhysRecorder data collection tool, or the models included in this framework, please cite the following <a href="https://github.com/KegangWangCCNU/PICS/raw/main/PhysBench.pdf" target="_blank">paper</a>

@misc{wang2023physbench,
      title={PhysBench: A Benchmark Framework for Remote Physiological Sensing with New Dataset and Baseline}, 
      author={Kegang Wang and Yantao Wei and Mingwen Tong and Jie Gao and Yi Tian and YuJian Ma and ZhongJin Zhao},
      year={2023},
      eprint={2305.04161},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

I am looking for a CS Ph.D. position, my research field is computer vision and remote physiological sensing, and I will graduate with a master's degree in June 2024. If anyone is interested, please send an email to kegangwang@mails.ccnu.edu.cn.