
BPM for a random video input #13

Closed
rhaizer opened this issue Mar 20, 2021 · 8 comments

@rhaizer

rhaizer commented Mar 20, 2021

Thank you so much for creating this framework! I have a question about usage.

How can I get the BPM values for a video for which no ground-truth (GT) values are available? Is there any documentation someone can link? The example notebooks only show how to visualize BPM in comparison with GT values. It would be really great if someone could give me an example that produces BPM output from an arbitrary video input without ground-truth values.

@LegrandNico

As far as I understand the package, you have to wrap your video input into the Video class and apply a remote PPG method afterward. This code snippet is working for me:

from pyVHR.signals.video import Video

# -- Video object
videoFilename = './VideoStreamOutput.avi'
video = Video(videoFilename)

# -- extract faces
video.getCroppedFaces(detector='mtcnn', extractor='opencv')
video.printVideoInfo()

# -- apply remote PPG method

from pyVHR.methods.pos import POS
from pyVHR.methods.ssr import SSR
from pyVHR.methods.pbv import PBV

params = {"video": video, "verb":0, "ROImask":"skin_adapt", "skinAdapt":0.2}

pos = POS(**params)
ssr = SSR(**params)
pbv = PBV(**params)

# -- get BPM values
bpmES_pos, timesES_pos = pos.runOffline(**params)
bpmES_ssr, timesES_ssr = ssr.runOffline(**params)
bpmES_pbv, timesES_pbv = pbv.runOffline(**params)

But I agree that a simple API taking a video as input and returning a BPM time series would be a nice feature, in addition to the great testing framework.

@rhaizer

rhaizer commented Mar 20, 2021


Thank you so much for your fast and detailed reply! It works like a charm. However, the results are not as good as I expected. On a video with a still face and stable lighting (a webcam recording), the estimates vary between 50 and 140 BPM, which is not correct, and none of the methods comes close. Is there a minimum video length (I used a 55-second input) for it to start giving better results, as some other repos require?
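Window length does matter for any FFT-based BPM estimator (the methods above are typically spectral): a window of T seconds gives a frequency grid spaced 60/T BPM apart, so short analysis windows quantize the estimate coarsely. A generic sketch of that resolution argument (illustration only, not pyVHR code):

```python
def bpm_resolution(win_seconds):
    # An FFT over a window of T seconds has bins spaced 1/T Hz apart,
    # and 1 Hz corresponds to 60 BPM, so adjacent BPM estimates can
    # only differ by multiples of 60/T.
    return 60.0 / win_seconds

for t in (5, 10, 30, 55):
    print(f"{t:>2} s window -> {bpm_resolution(t):.2f} BPM per FFT bin")
```

Swings as large as 50 to 140 BPM, though, point to the pipeline locking onto noise (motion, lighting, ROI drift) rather than to a resolution limit, since even a 5-second window resolves to about 12 BPM.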

@rhaizer

rhaizer commented Mar 20, 2021


If I run this again with a different video, it continues to process the old one, even though no variables are cached in the session. It looks like something is stored locally on the first run, and the second run reuses the faces extracted from the first video.

@farukcolak53


How can we run this code snippet with different ROIs? For example, I want to run it first with 'forehead', then with 'left cheek', etc. I tried, but it seems many things need to change, and I wasn't able to get it working.
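This question goes unanswered in the thread, but conceptually a facial ROI is just a sub-rectangle of the cropped face. A generic numpy sketch of that idea (the fractional coordinates and ROI names below are made up for illustration and are not pyVHR's definitions):

```python
import numpy as np

def crop_roi(face, roi):
    # face: H x W x 3 cropped-face image; roi is a fractional
    # (top, bottom, left, right) box relative to the face bounds.
    h, w = face.shape[:2]
    t, b, l, r = roi
    return face[int(t * h):int(b * h), int(l * w):int(r * w)]

# Hypothetical fractional boxes for two common rPPG regions:
ROIS = {
    "forehead":   (0.05, 0.25, 0.25, 0.75),
    "left_cheek": (0.45, 0.70, 0.10, 0.35),
}

face = np.zeros((200, 100, 3), dtype=np.uint8)
forehead = crop_roi(face, ROIS["forehead"])
print(forehead.shape)  # -> (40, 50, 3)
```

In the snippet above, the skin region is instead selected by the "ROImask" entry of the params dict, so switching regions would mean changing how that mask is built rather than the method calls.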

@AIshutin

AIshutin commented Apr 6, 2021

The cropped faces are saved and reloaded on the next run; disable that behaviour with:

Video.loadCropFaces = Video.saveCropFaces = False

@dhananjay1710

Video.loadCropFaces = Video.saveCropFaces = False

Add this where?

@AIshutin

Right after the import statements.
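Setting the flags on the Video class itself (before any instance exists) works because Python class attributes act as shared defaults that every instance reads unless it shadows them. A minimal, generic sketch of that mechanism (the Cache class is illustrative, not pyVHR code):

```python
class Cache:
    # Class-level flags: shared defaults visible to every instance.
    load_from_disk = True
    save_to_disk = True

    def run(self):
        # Attribute lookup falls back to the class when the instance
        # has no attribute of its own.
        return "reuse cached data" if self.load_from_disk else "process fresh input"

# Flipping the flags on the class changes behaviour for all
# instances created afterwards (and any existing ones, too):
Cache.load_from_disk = Cache.save_to_disk = False
print(Cache().run())  # -> process fresh input
```

That is why the line belongs right after the imports: it must run before the Video object is constructed and starts loading cached faces.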

@vcuculo

vcuculo commented Oct 21, 2021

Closing this issue since it relates to a previous version of pyVHR. Please refer to the README for basic usage instructions.
