
Video Input #3

Open
blank-ed opened this issue Aug 24, 2022 · 10 comments

@blank-ed

Passing a recorded video as input does not work. The error is "No frames received".

@SamProell
Owner

@blank-ed, thanks for raising the issue.
For me, accessing a recorded video works without problems. How are you passing the filename to yarppg?

I do get the same error if the file cannot be found (this should of course be made clearer in the error message)...

@blank-ed
Author

Ok, I apologize, that was silly of me. The issue right now is with the FPS: OpenCV just reads 100+ frames every second. Is there a way to set the FPS?

@SamProell
Owner

Apparently, there is no clear-cut way to limit the FPS through OpenCV directly. I found several related questions on Stack Overflow (like this one), but no official documentation.
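
That said, for recorded files OpenCV does at least report the file's native frame rate, which could serve as a sensible default delay. A rough sketch (the path is a placeholder):

import cv2

cap = cv2.VideoCapture("some_video.mp4")  # placeholder path
native_fps = cap.get(cv2.CAP_PROP_FPS)  # frame rate stored in the file's metadata
delay = 1.0 / native_fps if native_fps > 0 else 0.0  # guard: some sources report 0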

A quick hack would be to add a sleep within the camera's run loop like so:

def run(self):
    self._running = True
    while self._running:
        ret, frame = self._cap.read()

        if not ret:
            self._running = False
            raise RuntimeError("No frame received")
        else:
            self.frame_received.emit(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        time.sleep(0.05)  # fixed 50 ms pause, capping playback at roughly 20 FPS

Would this be enough for you? Please let me know if you can think of a better solution!

@blank-ed
Author

blank-ed commented Aug 30, 2022

Alright, time.sleep(1/30) (1/30 because of 30 fps) works well. If I may, the heart rate depends on the fps, right? For live video, a constant 30 fps wouldn't be a problem, but with a recorded video, without time.sleep(), OpenCV would read 100+ frames/s. Additionally, time.sleep() has caused problems before: for example, if the laptop decides to slow down, the fps is no longer constant. Wouldn't it be better if the frame rate were kept constant for both live and recorded video? I have faced this problem of 100+ frames/s before on recorded videos, and I fixed it with this:

import cv2
import time

cap = cv2.VideoCapture("your_video.mp4")  # path to the recorded video

initial_time = time.time()
to_time = time.time()

set_fps = 25  # set your desired frame rate

# Variables used to calculate the true FPS
prev_frame_time = time.time()  # start from "now" so the first reading is sensible
new_frame_time = 0

while True:
    while_running = time.time()  # keep updating the time with each iteration

    new_time = while_running - initial_time

    if new_time >= 1 / set_fps:  # only read a frame once 1/fps seconds have passed
        ret, frame = cap.read()
        if ret:
            # Calculate the true FPS from the time between displayed frames
            new_frame_time = time.time()
            fps = 1 / (new_frame_time - prev_frame_time)
            prev_frame_time = new_frame_time
            print(int(fps))

            cv2.imshow('joined', frame)
            initial_time = while_running  # update the initial time with the current time

        else:
            total_time_of_video = while_running - to_time  # total running time of the video
            print(total_time_of_video)
            break

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()

Do you think this is a possible solution? OOP isn't my forte, so I'm having a hard time placing the right parts of the code above into yarppg. The method above does give a constant fps, whether for a recorded or a live video. Of course, for live video this depends on the camera used: it wouldn't work if your desired fps is set to 60 but your camera can only handle around 30. That would give a fluctuating fps.

@SamProell
Owner

Many thanks for the detailed elaboration. I agree that an optional FPS limit would be a good thing. And yes, sleeping for the entire delay is not great; doing it incrementally like in your snippet above is better.

I could imagine the following non-breaking change to the Camera class:

class Camera(QThread):
    """..."""
    frame_received = pyqtSignal(np.ndarray)

    def __init__(self, video=0, parent=None, limit_fps=None):
        """..."""
        QThread.__init__(self, parent=parent)
        self._cap = cv2.VideoCapture(video)
        self._running = False
        self._delay = 1 / limit_fps if limit_fps else np.nan
        # np.nan will always evaluate to False in a comparison

    def run(self):
        self._running = True
        while self._running:
            ret, frame = self._cap.read()
            last_time = time.perf_counter()

            if not ret:
                self._running = False
                raise RuntimeError("No frame received")
            else:
                self.frame_received.emit(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

            while (time.perf_counter() - last_time) < self._delay:
                time.sleep(0.001)  # sleep in tiny increments until the delay has elapsed
    # ...
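
Hypothetical usage of the constructor sketched above, inside a running Qt application (handle_frame stands in for whatever slot consumes frames):

cam = Camera("recorded_video.mp4", limit_fps=30)  # cap playback at ~30 FPS
cam.frame_received.connect(handle_frame)  # handle_frame: any slot accepting an ndarray
cam.start()  # QThread.start() invokes run() in a separate thread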

From an OOP perspective, there surely are many things to improve in yarppg (this is also not my specialty). It becomes evident here, as such a seemingly simple change needs adjustments at several locations throughout the codebase. I would likely move the creation of the Camera object outside the RPPG class.

Would you agree with such a change?

@blank-ed
Author

blank-ed commented Sep 2, 2022

Yes, I think such a change would be useful for recorded videos with different fps, and even for cameras that can handle different fps values. The fix from the snippet you gave above works well for setting different fps on recorded videos. However, if I set it to 30 fps, it runs at 20 fps. I'm not sure if it's a laptop issue. Is it slowing recorded videos down to 20 fps for you as well? My current fix for this is:

self._delay = (1 / limit_fps) * (limit_fps / (limit_fps + 10)) if isinstance(video, str) else np.nan  # simplifies to 1 / (limit_fps + 10)
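
The +10 is just an empirical correction. Alternatively, the per-frame overhead could be measured once at startup and subtracted from the target delay. A rough sketch (estimate_read_overhead is a made-up helper, and note it consumes the first frames of the video):

import time

def estimate_read_overhead(cap, n_frames=30):
    """Average wall-clock cost of one cap.read() call (rough sketch)."""
    start = time.perf_counter()
    for _ in range(n_frames):
        if not cap.read()[0]:
            break
    return (time.perf_counter() - start) / n_frames

# then, instead of the +10 correction:
# self._delay = max(0.0, 1 / limit_fps - estimate_read_overhead(self._cap))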

Also, building on my fix above, I think the if/else should depend on whether the input is a recorded video or the camera. If the input is a recorded video, we want to limit the fps to a value of the user's choice (defaulting to 30 fps, since without limit_fps the recorded video is processed at 100+ fps). If the input is the camera, your np.nan fix works well, since the camera itself is limited to 30 fps (with minor fluctuations of about +/- 3 fps).

Additionally, yarppg is incredible, thank you! I understood rPPG much better and faster from reading your code than from reading different papers, since the code shows the step-by-step process. Right now I am trying to implement heart rate detection using EVM (Eulerian Video Magnification). Would this be an additional improvement for yarppg in the future, or even a way to let people visualize the RGB color changes in the face more clearly, rather than just a graph? If so, would it be possible for me to help add this?

@SamProell
Owner

I have started working on this in a new branch. Yes, I am seeing the same problem, with the FPS being lower than intended. For me, it looks like there is a constant delay of ~12 ms. I also experimented with a QTimer-based implementation, but it has the same problem. I am starting to believe this is a fundamental limitation of PyQt or the OS (Windows in my case), but I am not sure.

I added two options to the command line interface. You can now limit the FPS by running python -m yarppg --limitfps=30. However, since the specified FPS does not match the actual FPS, this might not be so good. Unless we find a solution for this, it might be better to remove --limitfps and instead use --delay-frames, which specifies the time to wait in ms. Then, with python -m yarppg --delay-frames=20, you would get roughly 30 FPS. This is kind of a hack though 😅
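
One more idea for a cleaner solution: pace frames against absolute deadlines instead of sleeping a fixed delay, so a constant per-frame overhead is absorbed rather than added on top of the target period. A rough sketch (read_frame and emit are placeholders for self._cap.read and self.frame_received.emit, not yarppg code):

import time

def run_paced(read_frame, emit, fps):
    period = 1.0 / fps
    deadline = time.perf_counter()
    while True:
        ret, frame = read_frame()
        if not ret:
            break
        emit(frame)
        deadline += period  # schedule against absolute time, not "now + delay"
        remaining = deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)
        else:
            deadline = time.perf_counter()  # fell behind; resynchronize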

@SamProell
Owner

@blank-ed

> Right now I am trying to implement heart rate detection using EVM (Eulerian Video Magnification). Would this be an additional improvement for yarppg in the future [...]? If so, would it be possible for me to help add this?

I am glad to hear that! Yes, any contribution to yarppg is more than welcome, especially since I myself find less and less time to work on it. You can always create a pull request and I would happily review your work.

@blank-ed
Author

> Unless we find a solution for this, it might be better to remove --limitfps and instead use --delay-frames, which specifies the time to wait in ms. [...] This is kind of a hack though 😅

I would call it a temporary fix rather than a hack, haha. Sure, let's keep trying to find a more permanent solution for this FPS issue. OOP does get confusing at times, and I feel like it also reduces the FPS somewhat, since each frame has to jump through all the hoops before being displayed, which can add further delays.

@blank-ed
Author

> I am glad to hear that! Yes, any contribution to yarppg is more than welcome, especially since I myself find less and less time to work on it. You can always create a pull request and I would happily review your work.

@SamProell Awesome! I am currently working on LiCVPR for background rectification and on EVM for visualization purposes. I have moved away from EVM for heart rate detection for now, since magnifying the RGB signals also amplifies the noise in the total RGB signal, based on the frequency spectrum I obtained. However, the dominant frequency detected in all videos with ground truth data is quite close to the average ground-truth heart rate frequency. Maybe with your help, we can find a way to make the signal less noisy?
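
One direction I am considering, to clean up the spectrum: a zero-phase Butterworth bandpass over the plausible heart-rate band, applied to the magnified mean color signal before looking at its spectrum. A rough sketch (the band limits and filter order are assumptions on my part):

from scipy import signal

def bandpass_pulse(raw, fs, low=0.7, high=4.0, order=4):
    """Keep roughly 0.7-4.0 Hz (about 42-240 bpm); tighten the band for less noise."""
    b, a = signal.butter(order, [low, high], btype="bandpass", fs=fs)
    return signal.filtfilt(b, a, raw)  # zero-phase filtering avoids a time shift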

Disclaimer alert: I just started researching rPPG about 4 months ago, and I am just a beginner researcher (~1 yr exp), so I am very green. I might be slow 😂
