Refactor of the code #6
base: master
Conversation
This will be useful to better split the code. Now the `Frame` class contains the frame to analyze and all the methods that work on the image, while `MediaAnalysis` is focused only on feature extraction.
…f function for later fine-tuning
- Update main.py for improved configuration loading and parsing
- Switch to using `dotenv_values` for loading configuration values
- Add command line argument parsing with `argparse`
- Set defaults for command line arguments based on `.env` file values
- Change logging configuration to write to a file specified in the `.env` file
- Revamped logger setup using kwargs in `main.py`
- Added flags for logging to screen and writing to a file
- Logging level set to DEBUG with `--verbose` flag
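A minimal sketch of this configuration-and-logging flow, assuming a hypothetical `.env` key like `LOG_FILE` (only `VIDEO_PATH`, `--verbose`, and the screen-logging flag are named in this thread):

```python
import argparse
import logging
from dotenv import dotenv_values

# Sketch only: key and flag names beyond those quoted in the commits are
# illustrative, not the actual main.py.
config = dotenv_values(".env")

parser = argparse.ArgumentParser()
parser.add_argument("--video_path", default=config.get("VIDEO_PATH"))
parser.add_argument("--verbose", "-v", action="store_true")
parser.add_argument("--log_on_screen", action="store_true")
args = parser.parse_args()

handlers = []
if args.log_on_screen:
    handlers.append(logging.StreamHandler())
if config.get("LOG_FILE"):  # hypothetical key for the log file path
    handlers.append(logging.FileHandler(config["LOG_FILE"]))
logging.basicConfig(
    level=logging.DEBUG if args.verbose else logging.INFO,
    handlers=handlers,
)
```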
- Separate frame receiving and processing into distinct threads
- Initialize `SleepyBaby` object with specified parameters
- Use `output` object to display body details in processed frame
- Use `SleepyBaby.start_thread` to start frame processing thread
- Create `show_video` thread to display processed frames
- Read and append frames from `cv2.VideoCapture` to `frame_q` for processing
- Set `terminate_event` to end all threads in case of program termination or error in video source
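For readers skimming the diff, a hedged sketch of the two-thread push/pop pattern this commit describes; the real `SleepyBaby.start_thread` and `show_video` thread differ in detail:

```python
import queue
import threading
import cv2

frame_q = queue.Queue()
terminate_event = threading.Event()

def receive_frames(source):
    """Producer: push frames from the capture device onto the queue."""
    vcap = cv2.VideoCapture(source)
    while not terminate_event.is_set():
        ok, frame = vcap.read()
        if not ok:  # end of clip or broken video source
            terminate_event.set()
            break
        frame_q.put(frame)
    vcap.release()

def process_frames():
    """Consumer: pop frames and run the analysis on them."""
    while not terminate_event.is_set():
        try:
            frame = frame_q.get(timeout=1)
        except queue.Empty:
            continue
        # ... run detection / annotate the frame here ...

receiver = threading.Thread(target=receive_frames, args=(0,), daemon=True)
worker = threading.Thread(target=process_frames, daemon=True)
receiver.start()
worker.start()
```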
I was able to get it to run locally, but with a clip where the system should easily detect sleeping, it doesn't. Have you seen it detect sleeping yet on your end?
I haven't combed through all the code yet; I want to see it working on recorded & live video streams before polishing the code up.
- Update function signature in sleepy_baby/frame.py
- Simplify progress bar drawing code
- Modify progress bar position and label text
- Improve progress bar text display
- Refactor `add_progress_bar` function in `frame.py` for improved readability and consistency
- Update variable naming to use `adj_percent` instead of `display_perc`
This has been solved with the last commits. I simplified the code and put all the parameters in the function signature so that other users can customize it more easily.
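As a rough illustration of "all parameters in the function signature", the shape is something like this (names and defaults are illustrative, not the exact ones in `frame.py`):

```python
import cv2

def add_progress_bar(frame, percent, origin=(20, 40), size=(200, 12),
                     color=(255, 255, 255), thickness=2):
    """Draw a labeled progress bar on the frame (sketch only)."""
    x, y = origin
    w, h = size
    adj_percent = max(0.0, min(percent, 100.0)) / 100.0
    cv2.rectangle(frame, (x, y), (x + w, y + h), color, thickness)  # outline
    cv2.rectangle(frame, (x, y), (x + int(w * adj_percent), y + h),
                  color, cv2.FILLED)  # fill proportional to percent
    cv2.putText(frame, f"{percent:.0f}%", (x + w + 8, y + h),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1, cv2.LINE_AA)
    return frame
```

Exposing everything as keyword arguments lets users restyle the bar without touching the drawing code.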
- Add optional parameters to `add_body_details` and `add_wrist_position` methods in `frame.py`
- Change colors and thickness of face details in `add_face_details` method in `frame.py`
- Add `refine_landmarks` param to `SleepyBaby` class in `__init__.py`
- Refactor `main.py` into a class and methods
- Add `fire` to the imports and requirements.txt
- Create `live` and `recorded` methods for streaming and recorded video inputs
- Set the default value to `config['VIDEO_PATH']` for a better user experience
I added code to implement python-fire as the CLI. For testing purposes, please use the following command line. You can use the help command as well.
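Since the exact commands were elided above, here is a hedged sketch of how a python-fire CLI of this shape is usually wired up (method bodies are placeholders, not the actual `main.py`):

```python
import fire

class App:
    # Sketch only: parameter names mirror the flags seen later in the thread.
    def __init__(self, verbose=False, log_on_screen=False):
        self.verbose = verbose
        self.log_on_screen = log_on_screen

    def live(self):
        """Process a live video stream."""
        ...

    def recorded(self, video_path=None):
        """Process a recorded clip, delaying frames to match its FPS."""
        ...

if __name__ == "__main__":
    fire.Fire(App)
```

With this layout, `python main.py -v --log_on_screen - live` (the command confirmed later in the thread) constructs the class with those flags, then the `-` separator hands off to the `live` method; `python main.py --help` prints the generated help.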
- Improve initialization of `working_area` in `app` class constructor
- Add support for passing `working_area` as a tuple in `main.py`
- Simplify `app` class constructor by using tuple unpacking for `working_area` parameter
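A tiny hedged sketch of the constructor change (field names are illustrative):

```python
class app:
    def __init__(self, working_area=None):
        # working_area may arrive as a tuple, e.g. (x, y, width, height);
        # the fallback here is only a placeholder.
        self.x, self.y, self.width, self.height = working_area or (0, 0, 0, 0)
```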
success = True
logging.info("Start receiving frames.")
fps = vcap.get(cv2.CAP_PROP_FPS)
self.sleepy_baby.start_thread(frame_q, terminate_event)
Ah, so for me `live` mode appears to be functional, however I haven't seen `recorded` mode work. I see you're continuing the pattern of using two threads (one push, one pop) for reading & processing images. I recall resorting to that for the `live` mode, but I don't recall that working for `recorded` clips, e.g. how I had it set up here:
https://github.com/calebolson123/BabySleepCoach/blob/master/main.py#L567

Have you seen `recorded` mode work w/ this two-thread push/pop pattern? I do agree with aligning to one approach for both modes like you did here.
If you look at the code, `live` and `recorded` call the same function; the only difference is in how the arguments are populated. In particular, `recorded` sets the argument `apply_delay_between_frames` to True, which is used to delay the next frame based on the actual FPS of the video in order to reproduce it at standard speed.

Do you have trouble using this command?
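As a rough sketch, that delay amounts to sleeping off the remainder of each frame interval (hypothetical helper, not the real code):

```python
import time

def delay_to_match_fps(start: float, fps: float) -> None:
    """Sleep out the rest of the frame interval so a recorded clip
    plays back at its native speed."""
    frame_interval = 1.0 / fps  # fps read via vcap.get(cv2.CAP_PROP_FPS)
    elapsed = time.monotonic() - start
    if elapsed < frame_interval:
        time.sleep(frame_interval - elapsed)

# In the processing loop, when apply_delay_between_frames is True:
#   start = time.monotonic(); process the frame; delay_to_match_fps(start, fps)
```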
Ah yeah, I saw that, and it seems to work fine. The problem I'm seeing is that for `recorded`, it never detects sleeping; stdout looks like this:

When I set up the frame queue push/pop logic, I noticed similar behavior to what I'm seeing now on this PR (it doesn't detect sleeping). I think something about that pattern was not working back then, so I only used the queue for the `live` mode. I believe whatever the underlying issue is, it's back now that we're using the push/pop queue for `recorded` mode. I'll try to find some time to step through it & understand what's happening.

Have you tested and seen the system detect sleeping on a recorded clip?
I've already seen this problem during live streaming as well. I don't know why it happens, but it is not related to the queue logic; it should be debugged in the mediapipe library.

Did you have some time to run some tests?
Got it, I'll find some time to look into it further
Have you had a chance to check it?
Sorry, I was on vacation for a few weeks. Hmm... back on the main branch, I'm actually seeing `recorded` mode work (w/ threads & shared queue, i.e. how you have it set up here) intermittently, seemingly dependent on the resolution of the image. However, I'm not seeing this behavior while running your branch. I'm still looking into this.
Ah, tiny modification w/ the `-`, works now: `python main.py -v --log_on_screen - live`. Nice, cool to see python-fire in action. "existing feature that I have commented at the beginning of my work" - what are you referencing here?
Add working area configuration to the application, obtained from the `.env` file, and set it as a parameter when initializing the app.
In order to improve the generation of augmented frames, we added a gamma correction function to the `getAugmentedFrame` method of the `Frame` class in `frame.py`. This method now accepts a `gamma` parameter and applies the `gamma_correction` function to the augmented frame. The `gamma_correction` function was also updated to return the lookup table itself rather than the result of `cv2.LUT`, and it was decorated with a caching mechanism from the `functools` library to improve performance.
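A hedged sketch of that caching pattern (names approximate the description above, not necessarily the exact code in `frame.py`):

```python
import cv2
import numpy as np
from functools import lru_cache

@lru_cache(maxsize=None)
def gamma_correction(gamma: float) -> np.ndarray:
    """Build (and cache) a 256-entry lookup table for the given gamma."""
    inv_gamma = 1.0 / gamma
    return np.array(
        [((i / 255.0) ** inv_gamma) * 255 for i in range(256)],
        dtype=np.uint8,
    )

def get_augmented_frame(frame: np.ndarray, gamma: float = 1.5) -> np.ndarray:
    # cv2.LUT remaps every pixel through the cached table; the frame must
    # be 8-bit for a 256-entry LUT.
    return cv2.LUT(frame, gamma_correction(gamma))
```

Returning the table (rather than the already-remapped image) is what makes the `functools` cache effective: the expensive part, building the LUT, runs once per distinct gamma.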
Instead of calculating the distances between landmarks repeatedly, a `get_height` method was created that takes the desired landmarks as a list of tuples, iterates through each tuple, and computes the Euclidean distance between its points using numpy. This makes the code more readable and less repetitive.
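A hedged sketch of that helper (signature and return shape are assumptions based on the description):

```python
import numpy as np

def get_height(landmarks, landmark_pairs):
    """Euclidean distance for each requested (a, b) landmark index pair.

    `landmarks` maps an index to an (x, y) point; sketch only.
    """
    distances = []
    for a, b in landmark_pairs:
        p1 = np.asarray(landmarks[a], dtype=float)
        p2 = np.asarray(landmarks[b], dtype=float)
        distances.append(np.linalg.norm(p1 - p2))
    return distances
```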
…intain image aspect ratio during resizing. These changes ensure that the aspect ratio of images is maintained when resizing in both the `app.photo()` and `app._process_streaming()` functions. New `max_width` and `max_height` parameters were added to `app.photo()` so that the image does not exceed those dimensions when resized. The `maintain_aspect_ratio_resize()` function in `sleepy_baby/helpers.py` was also refactored to accept width and height parameters and select the smallest resulting size.
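A hedged sketch of the refactored helper (the real `sleepy_baby/helpers.py` version may handle edge cases differently):

```python
import cv2

def maintain_aspect_ratio_resize(image, max_width=None, max_height=None):
    """Downscale so the image fits within max_width x max_height,
    keeping the aspect ratio by selecting the smallest scale."""
    h, w = image.shape[:2]
    scales = []
    if max_width is not None:
        scales.append(max_width / w)
    if max_height is not None:
        scales.append(max_height / h)
    scale = min(scales) if scales else 1.0
    if scale >= 1.0:
        return image  # never upscale
    return cv2.resize(image, (int(w * scale), int(h * scale)),
                      interpolation=cv2.INTER_AREA)
```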
This commit adds a new `debug` variable to the `SleepyBaby` class to allow for easier debugging. When true, status messages are added to frames to indicate whether the baby is awake or asleep; the text and color of the message depend on the baby's state.
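The overlay presumably looks something like this (text, position, and colors are illustrative):

```python
import cv2

def add_status_text(frame, is_awake: bool):
    """When debug is enabled, stamp the current state onto the frame."""
    text = "Awake" if is_awake else "Asleep"
    color = (0, 0, 255) if is_awake else (0, 255, 0)  # BGR: red / green
    cv2.putText(frame, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, color, 2, cv2.LINE_AA)
    return frame
```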
HATCH_IP=192.168.HATCH.IP
AREA_X=
Nice, when we eventually get around to updating the README I want to mention that users should play around with this. The smaller the area, the less demanding it is on the client CPU/GPU. It can be computationally expensive & pointless to pass the entire image through the models.
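For context, restricting inference to the working area is just a crop before the models run; the `AREA_*` keys beyond `AREA_X` are assumptions based on this diff:

```python
# Hypothetical: working_area parsed from .env (AREA_X, AREA_Y, AREA_W, AREA_H)
x, y, w, h = working_area
roi = frame[y:y + h, x:x + w]  # numpy slicing: rows are y, columns are x
# ...pass `roi`, not the full frame, through the detection models...
```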
As soon as we freeze the modifications, I'll comment the code and update all related files (including the README) to align with the new structure.
Refactor the code into different files.