EquiPleth: Equitable Plethysmography

To overcome fundamental biases in camera-based remote plethysmography, we propose an adversarial learning-based fair fusion method, using a novel RGB-Radar hardware setup.

Blending Camera and 77 GHz Radar Sensing for Equitable, Robust Plethysmography
Alexander Vilesov, Pradyumna Chari, Adnan Armouti, Anirudh Bindiganavale Harish, Kimaya Kulkarni, Ananya Deoghare, Laleh Jalilian, Achuta Kadambi
ACM Transactions on Graphics (SIGGRAPH), July 2022
Project page / Paper / Dataset Request Form / Presentation / Low-Res Paper

For details on the citation format, kindly refer to the Citation section below.

Repository Contributors: Adnan Armouti, Anirudh Bindiganavale Harish, Alexander (Sasha) Vilesov, Pradyumna Chari.


Hardware Setup

This section is pertinent to those who wish to collect their own data. All of the following instructions refer to the hardware used by the authors.

Prior to running the data-acquisition code, please ensure the following:

(1) MX800

Kindly refer to the following link, which points to the software we used to connect to the MX800 through the Ethernet port. A configuration named mx800.bpc is provided in the data_acquisition/sensors/configs folder; however, you may need to regenerate one for your specific system.

Once connected, please clone this GitHub repository, which contains the C# code to collect data from the MX800. You will need to compile the C# code with a tool such as Visual Studio, and the generated binaries must then be linked with the mx800_sensor.py file through the __init__ function of the MX800_Sensor class.
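
How the binaries are hooked in depends on your build output; the hypothetical sketch below shows one common pattern, launching the compiled executable as a subprocess from the __init__ of MX800_Sensor. The executable name, attribute names, and start-up sequence in the actual mx800_sensor.py may differ.

# Hypothetical sketch only: one way to link the compiled MX800 C# binaries
# from Python. The real MX800_Sensor class in data_acquisition/sensors/
# mx800_sensor.py may use different names and a different start-up sequence.
import subprocess
from pathlib import Path

class MX800_Sensor:
    def __init__(self, exe_path="path_to_mx800_binaries/MX800Collector.exe",
                 output_dir="mx800_dump"):
        # Point exe_path at the binaries produced by compiling the C# project.
        self.exe_path = Path(exe_path)
        self.output_dir = Path(output_dir)
        self.output_dir.mkdir(parents=True, exist_ok=True)
        self.process = None

    def start(self):
        # Launch the C# collector; it streams the MX800 waveforms to disk,
        # which the Python acquisition code later reads and timestamps.
        self.process = subprocess.Popen([str(self.exe_path), str(self.output_dir)])

    def stop(self):
        if self.process is not None:
            self.process.terminate()
            self.process.wait()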

(2) RF

Documentation for the AWR1443BOOST radar can be found at the following link.

To run data collection, you will need to install the mmWave Studio GUI tools for first-generation parts (xWR1243, xWR1443, xWR1642, xWR1843, xWR6843, xWR6443).

Also, please follow these instructions provided by TI to install mmWave Studio and any drivers necessary to operate the radar device correctly.

To avoid re-configuring the radar with the same parameters at every start-up, we provide a Lua script that can be fed into mmWave Studio via Windows PowerShell to automatically boot the radar with the preset configuration. The following two commands boot the mmWave Studio runtime with the Lua script.

>> cd "C:\ti\mmwave_studio_02_01_01_00\mmWaveStudio\RunTime"
>> cmd /C "C:\ti\mmwave_studio_02_01_01_00\mmWaveStudio\RunTime\mmWaveStudio.exe /lua path_to_data_acquisition\sensors\configs\awr1443_config.lua"

(3) RGB Camera

If you would like to use the Zed Camera mentioned in the paper, kindly follow these instructions provided by StereoLabs.

The rgbd_sensor.py file provided in the data_acquisition/sensors folder is for the Zed Camera used in the paper. For any other camera, please use rgb_sensor.py.
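
For orientation, a minimal frame-grab loop with the StereoLabs Python API (pyzed) looks roughly like the sketch below; rgbd_sensor.py wraps this kind of loop together with the synchronization and file-writing logic used in the paper, so treat this only as an illustration.

# Illustrative ZED capture sketch using the StereoLabs Python API (pyzed).
# For actual data collection, use data_acquisition/sensors/rgbd_sensor.py.
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.HD720
init_params.camera_fps = 30

if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Could not open the ZED camera")

image = sl.Mat()
for _ in range(30):
    # grab() blocks until a new frame is available.
    if zed.grab() == sl.ERROR_CODE.SUCCESS:
        zed.retrieve_image(image, sl.VIEW.LEFT)  # left RGB view
        frame = image.get_data()                 # numpy array, BGRA
zed.close()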


Data Acquisition

All runtime parameters can be adjusted by editing the sensors_config.ini file in data_acquisition/sensors/configs.

The following command can be used to acquire data:

>> python sync_sensors.py

Please make sure to navigate into the data_acquisition folder prior to running the file.

In sync_sensors.py, please edit the rf_dump_path in cleanup_rf. This is the location where mmWave Studio continuously dumps the recorded data from the radar. This data is not needed, as sync_sensors.py records the required subset of the same data during its runtime and creates the rf output file. The cleanup_rf function in sync_sensors.py deletes these unnecessary files to avoid redundancy.
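
For reference, the cleanup amounts to deleting the capture files in the dump directory; a hypothetical equivalent is sketched below (the actual implementation in sync_sensors.py may differ in its file patterns and error handling).

# Hypothetical sketch of the cleanup step: remove the raw files that mmWave
# Studio dumps into rf_dump_path, since sync_sensors.py already saves the
# required subset of the radar data as its own rf output file.
import glob
import os

def cleanup_rf(rf_dump_path):
    # Adjust the pattern to whatever files mmWave Studio writes on your setup.
    for f in glob.glob(os.path.join(rf_dump_path, "*.bin")):
        try:
            os.remove(f)
        except OSError as err:
            print(f"Could not delete {f}: {err}")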


Dataset and Pre-prep

The EquiPleth dataset can be downloaded by filling this Google Form.

If you choose to collect your own data, please adhere to the following pre-processing instructions to obtain a similar dataset to the EquiPleth dataset.

  1. Use data_interpolation.py to interpolate the MX800 waveforms to the timestamps of the sensors.

  2. Use face-cropping software (MTCNN in our case) to crop the face in each frame and save it as an image within the trial/volunteer's folder (a sketch of both pre-processing steps follows this list).
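
A minimal sketch of these two steps is shown below, assuming numpy for the interpolation and the facenet-pytorch implementation of MTCNN for the face cropping; the file names here are placeholders, and data_interpolation.py plus the authors' cropping pipeline may differ in the details.

# Hypothetical pre-processing sketch for self-collected data.
# Assumes numpy, Pillow, and facenet-pytorch (pip install facenet-pytorch).
import numpy as np
from PIL import Image
from facenet_pytorch import MTCNN

# Step 1: interpolate an MX800 waveform onto the camera timestamps.
mx800_ts = np.load("mx800_timestamps.npy")    # placeholder file names
mx800_ppg = np.load("mx800_ppg.npy")
camera_ts = np.load("camera_timestamps.npy")
ppg_on_camera_ts = np.interp(camera_ts, mx800_ts, mx800_ppg)
np.save("rgbd_ppg.npy", ppg_on_camera_ts)

# Step 2: crop the face in every frame and save it into the trial folder.
mtcnn = MTCNN(select_largest=True, post_process=False)
for i in range(900):                          # 900 frames per trial (0-899)
    frame = Image.open(f"raw_frames/frame_{i}.png")
    boxes, _ = mtcnn.detect(frame)
    if boxes is None:
        continue                              # no face detected in this frame
    x1, y1, x2, y2 = [int(v) for v in boxes[0]]
    frame.crop((x1, y1, x2, y2)).save(f"v_1_1/rgbd_rgb_{i}.png")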

Hierarchy of the EquiPleth dataset

|
|--- rgb_files
|        |
|        |--- volunteer id 1 trial 1 (v_1_1)
|        |         |--- frame 0 (rgbd_rgb_0.png)
|        |         |--- frame 1 (rgbd_rgb_1.png)
|        |         |--- ...
|        |         |--- last frame (rgbd_rgb_899.png)
|        |         |--- ground truth PPG (rgbd_ppg.npy)
|        |
|        |--- volunteer id 1 trial 2 (v_1_2)
|        |--- ...
|        |--- volunteer id 2 trial 1 (v_2_1)
|        |--- ...
|
|--- rf files
|        |
|        |--- volunteer id 1 trial 1 (1_1)
|        |         |--- Radar data (rf.pkl)
|        |         |--- ground truth PPG (vital_dict.npy)
|        |
|        |--- volunteer id 1 trial 2 (1_2)
|        |--- ...
|        |--- volunteer id 2 trial 1 (2_1)
|        |--- ...
|
|--- fitzpatrick labels file (fitzpatrick_labels.pkl)
|--- folds pickle file (demo_fold.pkl)
|--- {user generated fusion data after rgb & rf training (more details in the section below)}
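
If you are unfamiliar with these formats, the sketch below shows one way to load a single trial with standard Python tooling. The dictionary keys inside rf.pkl and the label/fold files are not documented here, so inspect them before relying on any particular field, and adjust the folder names to match your copy of the dataset.

# Sketch: loading one EquiPleth trial with numpy, pickle, and Pillow.
import pickle
import numpy as np
from PIL import Image

root = "nndl/dataset"            # dataset folder inside nndl (see below)
rf_dir = f"{root}/rf_files"      # adjust if your release names this folder differently

# RGB trial: cropped face frames plus the ground-truth PPG waveform.
frame0 = np.array(Image.open(f"{root}/rgb_files/v_1_1/rgbd_rgb_0.png"))
ppg = np.load(f"{root}/rgb_files/v_1_1/rgbd_ppg.npy")

# RF trial: raw radar capture and its ground-truth vitals.
with open(f"{rf_dir}/1_1/rf.pkl", "rb") as f:
    rf_data = pickle.load(f)
vitals = np.load(f"{rf_dir}/1_1/vital_dict.npy", allow_pickle=True)

# Skin-tone labels and train/val/test folds.
with open(f"{root}/fitzpatrick_labels.pkl", "rb") as f:
    fitzpatrick = pickle.load(f)
with open(f"{root}/demo_fold.pkl", "rb") as f:
    folds = pickle.load(f)

print(frame0.shape, ppg.shape, type(rf_data), type(fitzpatrick), type(folds))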

Create a new folder (named dataset in our case) inside nndl and place the downloaded/processed dataset in it.


NNDL Execution

Please make sure to navigate into the nndl folder prior to running the following scripts.

(1) RGB / RF

Run the following command to train the rf and the rgb models.

>> python {rf or rgb}/train.py --train-shuffle --verbose

Run the following command to test the rf and the rgb models.

>> python {rf or rgb}/test.py --verbose

(2) Fusion Data Generation

Run the following command to generate the pickle file with the data for the fusion model.

>> python data/fusion_gen.py --verbose

(3) Fusion

Run the following command to train the fusion model.

>> python fusion/train.py --shuffle --verbose

Run the following command to test the fusion model.

>> python fusion/test.py --verbose

(4) Command Line Args

For more info about the command line arguments, please run the following:

>> python {folder}/file.py --help

References

  1. Zheng, Tianyue, et al. "MoRe-Fi: Motion-robust and Fine-grained Respiration Monitoring via Deep-Learning UWB Radar." Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems. 2021.

  2. Yu, Zitong, Xiaobai Li, and Guoying Zhao. "Remote photoplethysmograph signal measurement from facial videos using spatio-temporal networks." arXiv preprint arXiv:1905.02419 (2019).


Citation

@article{vilesov2022blending,
  title={Blending camera and 77 GHz radar sensing for equitable, robust plethysmography},
  author={Vilesov, Alexander and Chari, Pradyumna and Armouti, Adnan and Harish, Anirudh Bindiganavale and Kulkarni, Kimaya and Deoghare, Ananya and Jalilian, Laleh and Kadambi, Achuta},
  journal={ACM Transactions on Graphics (TOG)},
  volume={41},
  number={4},
  pages={1--14},
  year={2022},
  publisher={ACM New York, NY, USA}
}
