
[Bug]: Freezes video thread when processing background thread? #393

Closed
3 tasks done
aronchick opened this issue Apr 14, 2024 · 5 comments
Labels
QUESTION ❓ User asked about the working/usage of VidGear APIs. SOLVED 🏁 This issue/PR is resolved now. Goal Achieved! WAITING FOR RESPONSE ⏳ Waiting for the user response. WAITING TO TEST ⏲️ Asked user to test the suggested example/binary/solution

Comments

@aronchick

Description

Love your product! I'm trying to do object detection in the video, and when I run the model thread, it freezes the foreground video thread.

Do you have any examples of how to do this properly? Here is my frames_producer function:

import asyncio
import random
import signal

import cv2


async def frames_producer():
    settings = get_settings()

    signal.signal(signal.SIGINT, signal_handler)
    ml_model_config = settings.get("ml_model_config")
    if ml_model_config["source_video_path"] is None:
        print("No video file found - reloading model config")
        settings.load_model_config()
        ml_model_config = settings.get("ml_model_config")
        
    NUMBER_OF_SECONDS_PER_CLIP = settings.get("NUMBER_OF_SECONDS_PER_CLIP")
    FPS = settings.get("FPS")

    video_file = ml_model_config["source_video_path"]
    stream = cv2.VideoCapture(video_file)
    total_frames = int(stream.get(cv2.CAP_PROP_FRAME_COUNT))
    frames_in_current_clip = random.randint(0, total_frames)
    stream.set(cv2.CAP_PROP_POS_FRAMES, frames_in_current_clip)

    frames_in_current_clip = 0
    current_clip_frames = []

    while continue_stream:
        (grabbed, frame) = stream.read()
        frames_in_current_clip += 1
        current_clip_frames.append(frame)

        if (
            not grabbed
            or frames_in_current_clip
            >= NUMBER_OF_SECONDS_PER_CLIP * FPS
        ):
            # If not grabbed, assume we're at the end, and start over
            if not grabbed:
                stream.set(cv2.CAP_PROP_POS_FRAMES, 0)
            (grabbed, frame) = stream.read()

            logger.debug("Starting background task to track video.")
            logger.debug(f"Frames in current clip: {frames_in_current_clip}")
            loop = asyncio.get_event_loop()
            loop.create_task(track_video(current_clip_frames))

            frames_in_current_clip = 0
            current_clip_frames = []
            continue

        # reducer frames size if you want more performance otherwise comment this line
        # frame = await reducer(frame, percentage=30)  # reduce frame by 30%
        # handle JPEG encoding
        encodedImage = cv2.imencode(".jpg", frame)[1].tobytes()
        # yield frame in byte format
        yield (
            b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n"
        )
        await asyncio.sleep(1.0 / 30.0)

Issue Checklist

  • I have searched open or closed issues for my problem and found nothing related or helpful.
  • I have read the Documentation and found nothing related to my problem.
  • I've read the Issue Guidelines and wholeheartedly agree.

Expected behaviour

The video stream should keep going with no stutter.

Actual behaviour

The video stream stutters even though the high compute item (run_model) is done in the background.

Steps to reproduce

Just run the above code where run_model is an intensive process.

Terminal log output

No response

Python Code(Optional)

No response

VidGear Version

0.3.2

Python version

3.10.14

OpenCV version

4.9.0

Operating System version

MacOS Sonoma 14.4.1

Any other Relevant Information?

No

@aronchick aronchick added the BUG 🐛 Vidgear api's error, flaw or fault label Apr 14, 2024
@abhiTronix
Owner

@aronchick The issue you're facing is likely because the frames_producer function is responsible both for reading frames from the video and for running the object detection model. This creates a bottleneck: the video reading process gets blocked by model inference.

Possible Solution:

To address this, you can try the following approach:

  • Separate Video Reading and Model Inference: Instead of running the model inference within the frames_producer function, create a separate coroutine or task to handle the model inference. This will allow the frames_producer function to focus solely on reading and yielding frames, without being blocked by the model inference.
  • Use a Queue or Event to Coordinate Between Threads: You can use a queue or an event to coordinate the communication between the frames_producer and the model inference task. The frames_producer can put the frames into the queue, and the model inference task can retrieve the frames, process them, and update the video output accordingly.
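The two suggestions above can be sketched with an asyncio.Queue. This is a minimal, VidGear-free toy under stated assumptions: integers stand in for frames, sum() stands in for the model, and asyncio.to_thread keeps the blocking call off the event loop so the producer never stalls.

```python
import asyncio


async def inference_worker(queue: asyncio.Queue, results: list):
    """Consume clips from the queue and run the (blocking) model off-loop."""
    while True:
        clip = await queue.get()
        if clip is None:  # sentinel: shut down
            queue.task_done()
            break
        # run the blocking model call in a thread so the event loop stays free
        result = await asyncio.to_thread(sum, clip)  # sum() stands in for run_model
        results.append(result)
        queue.task_done()


async def frames_producer(queue: asyncio.Queue):
    """Read 'frames' and hand completed clips to the worker without blocking."""
    clip = []
    for frame in range(10):          # stands in for stream.read()
        clip.append(frame)
        if len(clip) >= 5:           # clip boundary reached
            await queue.put(clip)    # hand off; inference never blocks us
            clip = []
        await asyncio.sleep(0)       # yield control, as the real loop does


async def main():
    queue, results = asyncio.Queue(maxsize=4), []
    worker = asyncio.create_task(inference_worker(queue, results))
    await frames_producer(queue)
    await queue.put(None)            # tell the worker to stop
    await worker
    return results


print(asyncio.run(main()))  # → [10, 35]
```

The producer only ever awaits the queue, so yielding frames stays smooth even while the worker is busy; the same shape applies when the items are lists of OpenCV frames.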

@aronchick
Author

Thank you very much for the feedback! I've tried to rework it so that it now hands the work off to a process pool, but it still freezes every time the model runs. Any suggestions?

from concurrent.futures import ProcessPoolExecutor
executor = ProcessPoolExecutor(max_workers=5)

# various performance tweaks
options = {
    "frame_size_reduction": 40,
    "jpeg_compression_quality": 80,
    "jpeg_compression_fastdct": True,
    "jpeg_compression_fastupsample": False,
    "hflip": True,
    "exposure_mode": "auto",
    "iso": 800,
    "exposure_compensation": 15,
    "awb_mode": "horizon",
    "sensor_mode": 0,
    "skip_generate_webdata": True,
}

# ... Other Code ...

async def frames_producer():
    settings = get_settings()

    ml_model_config = settings.get("ml_model_config")
    if ml_model_config["source_video_path"] is None:
        logger.info("No video file found - reloading model config")
        settings.load_model_config()
        ml_model_config = settings.get("ml_model_config")
        
    number_of_seconds_per_clip = ml_model_config["number_of_seconds_per_clip"]  
    FPS = settings.get("FPS")

    video_file = ml_model_config["source_video_path"]
    stream = cv2.VideoCapture(video_file)
    total_frames = int(stream.get(cv2.CAP_PROP_FRAME_COUNT))
    frames_in_current_clip = random.randint(0, total_frames)
    stream.set(cv2.CAP_PROP_POS_FRAMES, frames_in_current_clip)

    frames_in_current_clip = 0
    current_clip_frames = []

    while settings.get_continue_stream():
        (grabbed, frame) = stream.read()
        frames_in_current_clip += 1
        current_clip_frames.append(frame)

        if (
            not grabbed
            or frames_in_current_clip
            >= number_of_seconds_per_clip * FPS
        ):
            # If not grabbed, assume we're at the end, and start over
            if not grabbed:
                stream.set(cv2.CAP_PROP_POS_FRAMES, 0)
            (grabbed, frame) = stream.read()

            logger.info("Starting background task to track video.")
            logger.debug(f"Frames in current clip: {frames_in_current_clip}")
            executor.submit(track_video, current_clip_frames)

            frames_in_current_clip = 0
            current_clip_frames = []
            continue

        # # reducer frames size if you want more performance otherwise comment this line
        # frame = await reducer(frame, percentage=50)  # reduce frame by 50%
        
        # handle JPEG encoding
        encodedImage = cv2.imencode(".jpg", frame)[1].tobytes()
        # yield frame in byte format
        yield (
            b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n"
        )
        await asyncio.sleep(1.0 / 30.0)
    # close stream
    stream.release()
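For reference, the asyncio-native way to use a process pool is loop.run_in_executor, which wraps the pool in an awaitable future instead of a bare concurrent.futures.Future. A minimal sketch, with a stand-in track_video (whether the pool removes the stutter still depends on how much frame data must be pickled per submission):

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor


def track_video(frames):
    """Stand-in for the real model call; must be module-level to be picklable."""
    return len(frames)


async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor(max_workers=2) as executor:
        # schedule the work in a child process; the event loop stays free
        future = loop.run_in_executor(executor, track_video, list(range(60)))
        # ... keep yielding frames here ...
        result = await future  # collect the result only when needed
    return result


if __name__ == "__main__":  # guard required for spawn-based platforms (macOS)
    print(asyncio.run(main()))  # → 60
```

Note the `if __name__ == "__main__":` guard: on macOS (which uses the spawn start method), omitting it makes each worker process re-execute the module.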

@abhiTronix
Owner

abhiTronix commented Apr 23, 2024

options = {
"frame_size_reduction": 40,
"jpeg_compression_quality": 80,
"jpeg_compression_fastdct": True,
"jpeg_compression_fastupsample": False,
"hflip": True,
"exposure_mode": "auto",
"iso": 800,
"exposure_compensation": 15,
"awb_mode": "horizon",
"sensor_mode": 0,
"skip_generate_webdata": True,
}

@aronchick These parameters won't work since you're using a custom frame producer.

@abhiTronix
Owner

abhiTronix commented Apr 23, 2024

Solution for using CPU-intensive code with a WebGear API custom source:

@aronchick Here's how you do it properly. I used queues and separated video reading from model inference, as suggested earlier:

# import necessary libs
import uvicorn, asyncio, cv2
from vidgear.gears.asyncio import WebGear
from vidgear.gears.asyncio.helper import reducer

import random
import asyncio
import queue
import threading


class AsyncCPUIntensiveTask:
    def __init__(self, max_queue_size=100):
        self.queue = queue.Queue(maxsize=max_queue_size)
        self.threads = []
        self.running = False

    async def put_data(self, data):
        """
        Add data to the queue.
        """
        try:
            self.queue.put_nowait(data)
        except queue.Full:
            print("Queue is full. Waiting for space to become available...")
            # queue.Queue.put() is a blocking call and not awaitable; run it
            # in a thread executor so the event loop is not blocked
            await asyncio.get_event_loop().run_in_executor(
                None, self.queue.put, data
            )

    def worker(self):
        """
        Worker thread to process data from the queue.
        """
        while self.running:
            try:
                # block briefly instead of busy-spinning when the queue is empty
                data = self.queue.get(timeout=0.1)
            except queue.Empty:
                continue
            else:
                # Perform CPU-intensive task with data
                result = self._process_data(data)
                # Do something with the result
                print(f"Result: {result}")
                self.queue.task_done()

    def _process_data(self, data):
        """
        Placeholder for CPU-intensive task.
        """

        # Put object detection code and use data(list of frames) for processing

        # Simulate CPU-intensive task
        # !!! warning remove this code !!!
        result = sum(i**2 for i in range(1000000))
        return result

    def start(self, num_threads=4):
        """
        Start worker threads.
        """
        self.running = True
        for _ in range(num_threads):
            thread = threading.Thread(target=self.worker)
            thread.start()
            self.threads.append(thread)

    def stop(self):
        """
        Stop worker threads.
        """
        self.running = False
        for thread in self.threads:
            thread.join()
        self.threads.clear()


# initialize WebGear app without any source
web = WebGear(logging=True)


async def frames_producer():
    # settings = get_settings()
    # ml_model_config = settings.get("ml_model_config")
    # if ml_model_config["source_video_path"] is None:
    #     print("No video file found - reloading model config")
    #     settings.load_model_config()
    #     ml_model_config = settings.get("ml_model_config")
    # number_of_seconds_per_clip = ml_model_config["number_of_seconds_per_clip"]
    # FPS = settings.get("FPS")

    # I gave some dummy values
    number_of_seconds_per_clip = 2
    FPS = 30.0

    # video_file = ml_model_config["source_video_path"]
    video_file = "big_buck_bunny_scene.mp4"

    stream = cv2.VideoCapture(video_file)
    total_frames = int(stream.get(cv2.CAP_PROP_FRAME_COUNT))
    frames_in_current_clip = random.randint(0, total_frames)
    stream.set(cv2.CAP_PROP_POS_FRAMES, frames_in_current_clip)

    frames_in_current_clip = 0
    current_clip_frames = []

    task = AsyncCPUIntensiveTask()
    task.start()  # Start worker threads

    # while settings.get_continue_stream():
    while True:
        (grabbed, frame) = stream.read()
        frames_in_current_clip += 1
        current_clip_frames.append(frame)

        if not grabbed or frames_in_current_clip >= number_of_seconds_per_clip * FPS:
            # If not grabbed, assume we're at the end, and start over
            if not grabbed:
                stream.set(cv2.CAP_PROP_POS_FRAMES, 0)
            (grabbed, frame) = stream.read()

            print("Starting background task to track video.")
            print(f"Frames in current clip: {frames_in_current_clip}")

            # put frames list for object detection here
            await task.put_data(current_clip_frames)

            frames_in_current_clip = 0
            current_clip_frames = []
            continue

        # # reducer frames size if you want more performance otherwise comment this line
        # frame = await reducer(frame, percentage=50)  # reduce frame by 50%

        # handle JPEG encoding
        encodedImage = cv2.imencode(".jpg", frame)[1].tobytes()
        # yield frame in byte format
        yield (b"--frame\r\nContent-Type:image/jpeg\r\n\r\n" + encodedImage + b"\r\n")
        await asyncio.sleep(1.0 / 30.0)
    # close stream
    stream.release()


# add your custom frame producer to config
web.config["generator"] = frames_producer

# run this app on Uvicorn server at address http://localhost:8000/
uvicorn.run(web(), host="localhost", port=8000)

# close app safely
web.shutdown()

@abhiTronix
Owner

@aronchick Put your object detection and tracking code in this function in the above code:

    def _process_data(self, data):
        """
        Placeholder for CPU-intensive task.
        """

        # Put object detection code and use data (list of frames) for processing

        # Simulate CPU-intensive task
        # !!! warning remove this code !!!
        result = sum(i**2 for i in range(1000000))
        return result
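A hedged sketch of what that override could look like in isolation, with a trivial frame count standing in for the actual detector (the real model call is up to you), including the sentinel-based shutdown pattern:

```python
import queue
import threading


class ClipProcessor:
    """Minimal stand-in mirroring the worker pattern above."""

    def __init__(self):
        self.queue = queue.Queue()
        self.results = []

    def _process_data(self, data):
        # replace this with the real detection/tracking call;
        # counting frames in the clip is just a placeholder
        return len(data)

    def worker(self):
        while True:
            data = self.queue.get()
            if data is None:  # sentinel: stop the worker
                break
            self.results.append(self._process_data(data))


proc = ClipProcessor()
t = threading.Thread(target=proc.worker)
t.start()
proc.queue.put([0] * 60)  # a fake 60-frame clip
proc.queue.put(None)      # shut the worker down
t.join()
print(proc.results)  # → [60]
```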

Good luck with your project. And thanks a million for the donation. 🥇

@abhiTronix abhiTronix added QUESTION ❓ User asked about the working/usage of VidGear APIs. WAITING TO TEST ⏲️ Asked user to test the suggested example/binary/solution SOLVED 🏁 This issue/PR is resolved now. Goal Achieved! WAITING FOR RESPONSE ⏳ Waiting for the user response. and removed BUG 🐛 Vidgear api's error, flaw or fault labels Apr 23, 2024