🔥 Please remember to ⭐ this repo if you find it useful and cite our work if you end up using it in your work! 🔥
🔥 If you have any questions or concerns, please create an issue 😎! 🔥
rPPG-Toolbox is an open-source platform designed for camera-based physiological sensing, also known as remote photoplethysmography (rPPG).
rPPG-Toolbox not only benchmarks the existing state-of-the-art neural and unsupervised methods, but it also supports flexible and rapid development of your own algorithms.
rPPG-Toolbox currently supports the following algorithms:
**Traditional Unsupervised Algorithms**
- Remote plethysmographic imaging using ambient light (GREEN), by Verkruysse et al., 2008
- Advancements in noncontact multiparameter physiological measurements using a webcam (ICA), by Poh et al., 2011
- Robust pulse rate from chrominance-based rPPG (CHROM), by de Haan et al., 2013
- Local group invariance for heart rate estimation from face videos in the wild (LGI), by Pilz et al., 2018
- Improved motion robustness of remote-PPG by using the blood volume pulse signature (PBV), by de Haan et al., 2014
- Algorithmic principles of remote PPG (POS), by Wang et al., 2016
**Supervised Neural Algorithms**
- DeepPhys: Video-Based Physiological Measurement Using Convolutional Attention Networks (DeepPhys), by Chen et al., 2018
- Remote Photoplethysmograph Signal Measurement from Facial Videos Using Spatio-Temporal Networks (PhysNet), by Yu et al., 2019
- Multi-Task Temporal Shift Attention Networks for On-Device Contactless Vitals Measurement (TS-CAN), by Liu et al., 2020
- EfficientPhys: Enabling Simple, Fast and Accurate Camera-Based Cardiac Measurement (EfficientPhys), by Liu et al., 2023
- BigSmall: Efficient Multi-Task Learning for Disparate Spatial and Temporal Physiological Measurements (BigSmall), by Narayanswamy et al., 2023
The toolbox supports six datasets, namely SCAMPS, UBFC, PURE, BP4D+, UBFC-Phys, and MMPD. Please cite the corresponding papers when using these datasets. For now, we recommend training with UBFC, PURE, or SCAMPS due to the level of synchronization and volume of the datasets. To use these datasets in a deep learning model, you should organize the files as follows.
- MMPD
- Jiankai Tang, Kequan Chen, Yuntao Wang, Yuanchun Shi, Shwetak Patel, Daniel McDuff, Xin Liu, "MMPD: Multi-Domain Mobile Video Physiology Dataset", IEEE EMBC, 2023
```
data/MMPD/
|   |-- subject1/
|       |-- p1_0.mat
|       |-- p1_1.mat
|       |...
|       |-- p1_19.mat
|   |-- subject2/
|       |-- p2_0.mat
|       |-- p2_1.mat
|       |...
|...
|   |-- subjectn/
|       |-- pn_0.mat
|       |-- pn_1.mat
|       |...
```
- SCAMPS
- D. McDuff, M. Wander, X. Liu, B. Hill, J. Hernandez, J. Lester, T. Baltrusaitis, "SCAMPS: Synthetics for Camera Measurement of Physiological Signals", NeurIPS, 2022
```
data/SCAMPS/Train/
|-- P00001.mat
|-- P00002.mat
|...
data/SCAMPS/Val/
|-- P00001.mat
|-- P00002.mat
|...
data/SCAMPS/Test/
|-- P00001.mat
|-- P00002.mat
|...
```
- UBFC
- S. Bobbia, R. Macwan, Y. Benezeth, A. Mansouri, J. Dubois, "Unsupervised skin tissue segmentation for remote photoplethysmography", Pattern Recognition Letters, 2017.
```
data/UBFC/
|   |-- subject1/
|       |-- vid.avi
|       |-- ground_truth.txt
|   |-- subject2/
|       |-- vid.avi
|       |-- ground_truth.txt
|...
|   |-- subjectn/
|       |-- vid.avi
|       |-- ground_truth.txt
```
- PURE
- Stricker, R., Müller, S., Gross, H.-M., "Non-contact Video-based Pulse Rate Measurement on a Mobile Service Robot", in: Proc. 23rd IEEE Int. Symposium on Robot and Human Interactive Communication (Ro-Man 2014), Edinburgh, Scotland, UK, pp. 1056-1062, IEEE 2014
```
data/PURE/
|   |-- 01-01/
|       |-- 01-01/
|       |-- 01-01.json
|   |-- 01-02/
|       |-- 01-02/
|       |-- 01-02.json
|...
|   |-- ii-jj/
|       |-- ii-jj/
|       |-- ii-jj.json
```
- BP4D+
- Zhang, Z., Girard, J., Wu, Y., Zhang, X., Liu, P., Ciftci, U., Canavan, S., Reale, M., Horowitz, A., Yang, H., Cohn, J., Ji, Q., Yin, L. "Multimodal Spontaneous Emotion Corpus for Human Behavior Analysis", IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2016.
```
RawData/
|   |-- 2D+3D/
|       |-- F001.zip
|       |-- F002.zip
|       |...
|   |-- 2DFeatures/
|       |-- F001_T1.mat
|       |-- F001_T2.mat
|       |...
|   |-- 3DFeatures/
|       |-- F001_T1.mat
|       |-- F001_T2.mat
|       |...
|   |-- AUCoding/
|       |-- AU_INT/
|           |-- AU06/
|               |-- F001_T1_AU06.csv
|               |...
|           |...
|       |-- AU_OCC/
|           |-- F001_T1.csv
|           |...
|   |-- IRFeatures/
|       |-- F001_T1.txt
|       |...
|   |-- Physiology/
|       |-- F001/
|           |-- T1/
|               |-- BP_mmHg.txt
|               |-- microsiemens.txt
|               |-- LA Mean BP_mmHg.txt
|               |-- LA Systolic BP_mmHg.txt
|               |-- BP Dia_mmHg.txt
|               |-- Pulse Rate_BPM.txt
|               |-- Resp_Volts.txt
|               |-- Respiration Rate_BPM.txt
|       |...
|   |-- Thermal/
|       |-- F001/
|           |-- T1.mv
|       |...
|   |...
|   |-- BP4D+UserGuide_v0.2.pdf
```
- UBFC-Phys
- Sabour, R. M., Benezeth, Y., De Oliveira, P., Chappe, J., & Yang, F. (2021). UBFC-Phys: A multimodal database for psychophysiological studies of social stress. IEEE Transactions on Affective Computing.
```
RawData/
|   |-- s1/
|       |-- vid_s1_T1.avi
|       |-- vid_s1_T2.avi
|       |...
|       |-- bvp_s1_T1.csv
|       |-- bvp_s1_T2.csv
|   |-- s2/
|       |-- vid_s2_T1.avi
|       |-- vid_s2_T2.avi
|       |...
|       |-- bvp_s2_T1.csv
|       |-- bvp_s2_T2.csv
|...
|   |-- sn/
|       |-- vid_sn_T1.avi
|       |-- vid_sn_T2.avi
|       |...
|       |-- bvp_sn_T1.csv
|       |-- bvp_sn_T2.csv
```
The table shows Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) performance across all the algorithms and datasets:
STEP 1: `bash setup.sh`
STEP 2: `conda activate rppg-toolbox`
STEP 3: `pip install -r requirements.txt`
Please use config files under `./configs/infer_configs`.
For example, to run the model trained on PURE and tested on UBFC, use `python main.py --config_file ./configs/infer_configs/PURE_UBFC-rPPG_TSCAN_BASIC.yaml`.
To test unsupervised signal processing methods, use `python main.py --config_file ./configs/infer_configs/UBFC_UNSUPERVISED.yaml`.
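If you need to point an infer config at a specific checkpoint, the relevant fields look roughly like the fragment below. This is a sketch: the checkpoint path is a placeholder, so check the shipped infer_configs for the exact layout.

```yaml
TOOLBOX_MODE: "only_test"
INFERENCE:
  MODEL_PATH: "./final_model_release/PURE_TSCAN.pth"   # placeholder path to a pre-trained model
```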
Please use config files under `./configs/train_configs`.
STEP 1: Download the PURE raw data by asking the paper authors.
STEP 2: Download the UBFC raw data via link
STEP 3: Modify `./configs/train_configs/PURE_PURE_UBFC_TSCAN_BASIC.yaml`
STEP 4: Run `python main.py --config_file ./configs/train_configs/PURE_PURE_UBFC_TSCAN_BASIC.yaml`
Note 1: Preprocessing only needs to run once; turn it off in the yaml file when training the network after the first time.
Note 2: The example yaml setting uses the first 80% of PURE for training and the remaining 20% for validation. After training, the best model (the one with the lowest validation loss) is used to test on UBFC.
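For reference, the split described in Note 2 corresponds to `BEGIN`/`END` fields like the fragment below. This is a sketch; consult the actual yaml file for the exact nesting, dataset names, and remaining fields.

```yaml
TRAIN:
  DATA:
    DATASET: PURE
    BEGIN: 0.0   # first 80% of PURE for training
    END: 0.8
VALID:
  DATA:
    DATASET: PURE
    BEGIN: 0.8   # last 20% of PURE for validation
    END: 1.0
TEST:
  DATA:
    DATASET: UBFC
    BEGIN: 0.0   # all of UBFC for testing
    END: 1.0
```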
STEP 1: Download the SCAMPS via this link and split it into train/val/test folders.
STEP 2: Download the UBFC via link
STEP 3: Modify `./configs/train_configs/SCAMPS_SCAMPS_UBFC_DEEPPHYS_BASIC.yaml`
STEP 4: Run `python main.py --config_file ./configs/train_configs/SCAMPS_SCAMPS_UBFC_DEEPPHYS_BASIC.yaml`
Note 1: Preprocessing only needs to run once; turn it off in the yaml file when training the network after the first time.
Note 2: The example yaml setting uses the first 80% of SCAMPS for training and the remaining 20% for validation. After training, the best model (the one with the lowest validation loss) is used to test on UBFC.
STEP 1: Download the UBFC via link
STEP 2: Modify `./configs/infer_configs/UBFC_UNSUPERVISED.yaml`
STEP 3: Run `python main.py --config_file ./configs/infer_configs/UBFC_UNSUPERVISED.yaml`
rPPG-Toolbox uses yaml files to control all parameters for training and evaluation. You can modify the existing yaml files to meet your own training and testing requirements.
Here are explanations of some parameters:
- `TOOLBOX_MODE`:
  - `train_and_test`: train on the dataset and use the newly trained model to test.
  - `only_test`: you need to set `INFERENCE.MODEL_PATH`, and the pre-trained model loaded from that path is used to test.
- `TRAIN` / `VALID` / `TEST` / `UNSUPERVISED` `DATA`:
  - `USE_EXCLUSION_LIST`: If `True`, utilize a provided list to exclude preprocessed videos.
  - `SELECT_TASKS`: If `True`, explicitly select tasks to load.
  - `DATA_PATH`: The input path of the raw data.
  - `CACHED_PATH`: The output path of the preprocessed data. This path also houses a directory of .csv files containing the data paths of the files loaded by the dataloader (found by default at `CACHED_PATH/DataFileLists`). These file lists can be viewed to understand which files are used in each data split (train/val/test).
  - `EXP_DATA_NAME`: If `""`, the toolbox generates an `EXP_DATA_NAME` based on the other defined parameters; otherwise, it uses the user-defined `EXP_DATA_NAME`.
  - `BEGIN` & `END`: The portion of the dataset used for training/validation/testing. For example, if `DATASET` is PURE and `BEGIN` is 0.0 and `END` is 0.8 under TRAIN, the first 80% of PURE is used to train the network. If `DATASET` is PURE and `BEGIN` is 0.8 and `END` is 1.0 under VALID, the last 20% of PURE is used as the validation set. It is worth noting that the validation and training sets do not have overlapping subjects.
  - `DATA_TYPE`: How to preprocess the video data.
  - `DATA_AUG`: If present, the type of generative data augmentation applied to the video data.
  - `LABEL_TYPE`: How to preprocess the label data.
  - `USE_PSUEDO_PPG_LABEL`: If `True`, use POS-generated pseudo PPG labels instead of the dataset's ground-truth heart signal waveform.
  - `DO_CHUNK`: Whether to split the raw data into smaller chunks.
  - `CHUNK_LENGTH`: The length of each chunk (number of frames).
  - `DO_CROP_FACE`: Whether to perform face detection.
  - `DYNAMIC_DETECTION`: If `False`, face detection is performed only on the first frame, and the detected box is used to crop the video for all subsequent frames. If `True`, face detection is performed at the frequency defined by `DYNAMIC_DETECTION_FREQUENCY`.
  - `DYNAMIC_DETECTION_FREQUENCY`: The frequency of face detection (number of frames) if `DYNAMIC_DETECTION` is `True`.
  - `USE_MEDIAN_FACE_BOX`: If `True` and `DYNAMIC_DETECTION` is `True`, use the face boxes detected throughout each video to create a single median face box per video.
  - `LARGE_FACE_BOX`: Whether to enlarge the rectangle of the detected face region, in case the detected box is not large enough for some special cases (e.g., motion videos).
  - `LARGE_BOX_COEF`: The coefficient used to scale the face box if `LARGE_FACE_BOX` is `True`.
- `USE_SMALLER_WINDOW`: If `True`, use an evaluation window smaller than the video length for evaluation.
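To make the layout concrete, here is an abbreviated sketch of how several of these fields typically sit inside a training config. The exact nesting and values vary by toolbox version and config file, so treat the placement here as illustrative and defer to the files shipped under `./configs/`.

```yaml
TOOLBOX_MODE: "train_and_test"   # or "only_test" with INFERENCE.MODEL_PATH set
TRAIN:
  DATA:
    DATASET: PURE
    DATA_PATH: "data/PURE"            # input path of raw data
    CACHED_PATH: "PreprocessedData"   # output path of preprocessed data
    BEGIN: 0.0
    END: 0.8
    PREPROCESS:
      DATA_TYPE: ['DiffNormalized', 'Standardized']
      LABEL_TYPE: DiffNormalized
      DO_CHUNK: True
      CHUNK_LENGTH: 180
      DO_CROP_FACE: True
      DYNAMIC_DETECTION: False
      DYNAMIC_DETECTION_FREQUENCY: 30
      LARGE_FACE_BOX: True
      LARGE_BOX_COEF: 1.5
```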
- STEP 1: Create a new Python file in `dataset/data_loader`, e.g. `MyLoader.py`.
- STEP 2: Implement the required functions, including:

  ```python
  def preprocess_dataset(self, config_preprocess):

  @staticmethod
  def read_video(video_file):

  @staticmethod
  def read_wave(bvp_file):
  ```
- STEP 3: [Optional] Override optional functions. In principle, all functions in `BaseLoader` can be overridden, but we do not recommend overriding `__len__`, `__get_item__`, `save`, and `load`.
- STEP 4: Set or add configuration parameters. To set parameters, create new yaml files in `configs/`. Adding parameters requires modifying `config.py` to add the new parameters' definitions and initial values. A minimal loader skeleton is sketched below.
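As a rough starting point, here is a minimal, hypothetical `MyLoader` skeleton following the steps above. The dataset layout, file names, and the `self.raw_data_path` attribute are assumptions for illustration; check `dataset/data_loader/BaseLoader.py` and the bundled loaders for the actual hooks and shared preprocessing helpers.

```python
import glob
import os

import cv2
import numpy as np

from dataset.data_loader.BaseLoader import BaseLoader


class MyLoader(BaseLoader):
    """Hypothetical loader for data laid out as data/MyData/subjectN/{vid.avi, gt.txt}."""

    def preprocess_dataset(self, config_preprocess):
        # NOTE: `self.raw_data_path` is assumed to be populated by BaseLoader;
        # check BaseLoader.__init__ for the attribute actually used.
        for data_dir in sorted(glob.glob(os.path.join(self.raw_data_path, "subject*"))):
            frames = self.read_video(os.path.join(data_dir, "vid.avi"))
            bvps = self.read_wave(os.path.join(data_dir, "gt.txt"))
            # ... hand (frames, bvps) to the toolbox's shared cropping/chunking
            # helpers here, as the bundled loaders do ...

    @staticmethod
    def read_video(video_file):
        """Read a video file and return its frames as a (T, H, W, 3) RGB array."""
        frames = []
        cap = cv2.VideoCapture(video_file)
        success, frame = cap.read()
        while success:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            success, frame = cap.read()
        cap.release()
        return np.asarray(frames)

    @staticmethod
    def read_wave(bvp_file):
        """Read the ground-truth BVP waveform as a 1-D float array."""
        return np.loadtxt(bvp_file).flatten()
```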
Supervised rPPG training requires high-fidelity synchronous PPG waveform labels. However, not all datasets contain such high-quality labels. In these cases, we offer the option to train on synchronous PPG "pseudo" labels derived through a signal processing methodology. These labels are produced from POS-generated PPG waveforms, which are bandpass filtered around the normal heart-rate frequencies and then amplitude-normalized using a Hilbert-signal envelope. The tight filtering and envelope normalization result in a strong periodic proxy signal, but at the cost of limited signal morphology.
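The sketch below illustrates the filtering and normalization stages just described. The function name and cutoff frequencies are illustrative assumptions rather than the toolbox's exact implementation, and the POS waveform itself is assumed to be computed elsewhere.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def pseudo_ppg_label(pos_bvp, fs, low_hz=0.75, high_hz=2.5):
    """Bandpass a POS-derived waveform around plausible heart rates (here
    0.75-2.5 Hz, i.e. 45-150 BPM), then normalize by the Hilbert envelope."""
    nyquist = fs / 2.0
    b, a = butter(2, [low_hz / nyquist, high_hz / nyquist], btype="bandpass")
    filtered = filtfilt(b, a, pos_bvp)          # zero-phase bandpass
    envelope = np.abs(hilbert(filtered))        # instantaneous amplitude
    return filtered / (envelope + 1e-8)         # amplitude-normalized proxy label
```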
The usage of synthetic data in the training of machine learning models for medical applications is becoming a key tool that warrants further research. In addition to providing support for the fully synthetic dataset SCAMPS, we provide support for synthetic, motion-augmented versions of the UBFC, PURE, SCAMPS, and UBFC-Phys datasets for further exploration toward the use of synthetic data for training rPPG models. The synthetic, motion-augmented datasets are generated using the MA-rPPG Video Toolbox, an open-source motion augmentation pipeline targeted at increasing motion diversity in rPPG videos. You can generate and utilize the aforementioned motion-augmented datasets using the steps below.
- STEP 1: Follow the instructions in the README of the MA-rPPG Video Toolbox GitHub repo to generate any of the supported motion-augmented datasets. NOTE: You will need an original, unaugmented version of a dataset and a driving video to generate a motion-augmented dataset. More information can be found here.
- STEP 2: Using any config file of your choice in this toolbox, modify the `DATA_AUG` parameter (set to `'None'` by default) to `'Motion'`. Currently, only `train_configs` that utilize the UBFC-rPPG or PURE datasets have this parameter visible, but you can also modify other config files to add the `DATA_AUG` parameter below the `DATA_TYPE` parameter that is visible in all config files. This enables the proper function for loading motion-augmented data in the `.npy` format. A config fragment is sketched after this list.
- STEP 3: Run the corresponding config file. Your saved model's filename will have `MA` appended to the corresponding data splits that are motion-augmented.
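As referenced in STEP 2, the change amounts to a config fragment along these lines. This is a sketch; the surrounding fields and the exact list syntax should be checked against the shipped train_configs.

```yaml
TRAIN:
  DATA:
    PREPROCESS:
      DATA_TYPE: ['DiffNormalized', 'Standardized']
      DATA_AUG: ['Motion']   # default is 'None'; 'Motion' loads .npy motion-augmented data
```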
If you use the aforementioned functionality, please remember to cite the following in addition to citing the rPPG-Toolbox:
- Paruchuri, A., Liu, X., Pan, Y., Patel, S., McDuff, D., & Sengupta, S. (2023). Motion Matters: Neural Motion Transfer for Better Camera Physiological Sensing. arXiv preprint arXiv:2303.12059.
Refer to this BibTeX for quick inclusion into a `.bib` file.
We implement BigSmall as an example to show how this toolbox may be extended to support physiological multitasking. If you use this functionality, please cite the following publication:
- Narayanswamy, G., Liu, Y., Yang, Y., Ma, C., Liu, X., McDuff, D., Patel, S. "BigSmall: Efficient Multi-Task Learning For Physiological Measurements" https://arxiv.org/abs/2303.11573
The BigSmall model multi-tasks pulse (PPG regression), respiration (regression), and facial actions (multilabel AU classification). The model is trained and evaluated (in this toolbox) on the AU label subset (described in the BigSmall publication) of the BP4D+ dataset, using a 3-fold cross-validation protocol (with the same folds as in the BigSmall publication).
- STEP 1: Download BP4D+ by emailing the authors found here.
- STEP 2: Modify `./configs/train_configs/BP4D_BP4D_BIGSMALL_FOLD1.yaml` to train the first fold (config files also exist for the 2nd and 3rd folds).
- STEP 3: Run `python main.py --config_file ./configs/train_configs/BP4D_BP4D_BIGSMALL_FOLD1.yaml`
If you find our paper or this toolbox useful for your research, please cite our work.
```
@misc{liu2023rppgtoolbox,
  title={rPPG-Toolbox: Deep Remote PPG Toolbox},
  author={Xin Liu and Girish Narayanswamy and Akshay Paruchuri and Xiaoyu Zhang and Jiankai Tang and Yuzhe Zhang and Yuntao Wang and Soumyadip Sengupta and Shwetak Patel and Daniel McDuff},
  year={2023},
  eprint={2210.00716},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```