
Adding cli script to process a video as well as corresponding config file #16

Merged
3 commits merged into hkchengrex:main on Nov 11, 2023

Conversation

@KaareLJensen (Contributor)

This PR adds a command-line interface script for processing a video, given one or more masks as input for the video segmentation.
The script is based on the eval_vos script and the Cutie Colab notebook. It does not change any existing repo files and only uses already existing code.
It uses a new video_config.yaml configuration file to keep it separate from the other configs.
process_video supports multiple input masks.
The script syntax is:
process_video.py [-h] [-v VIDEO] [-m MASK_DIR] [-o OUTPUT_DIR] [-d DEVICE] [--mem_every MEM_EVERY] [--max_internal_size MAX_INTERNAL_SIZE]
Only VIDEO, MASK_DIR, and OUTPUT_DIR are required.
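The syntax above maps naturally onto an argparse parser. A minimal sketch; the defaults for DEVICE, MEM_EVERY, and MAX_INTERNAL_SIZE shown here are illustrative assumptions, not the PR's actual values:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of the process_video.py CLI implied by the syntax line above."""
    parser = argparse.ArgumentParser(
        description="Process a video with Cutie, given one or more input masks.")
    # The three required inputs, per the PR description.
    parser.add_argument("-v", "--video", required=True,
                        help="path to the input video")
    parser.add_argument("-m", "--mask_dir", required=True,
                        help="directory containing the input mask(s)")
    parser.add_argument("-o", "--output_dir", required=True,
                        help="directory for the segmentation output")
    # Optional knobs; default values here are assumptions for illustration.
    parser.add_argument("-d", "--device", default="cuda",
                        help="torch device to run on (assumed default)")
    parser.add_argument("--mem_every", type=int, default=5,
                        help="memorize a frame every N frames (assumed default)")
    parser.add_argument("--max_internal_size", type=int, default=480,
                        help="longest edge used internally (assumed default)")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args.video, args.mask_dir, args.output_dir)
```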

@hkchengrex (Owner)

Thank you. I have added a comment. Can you please check?

@KaareLJensen (Contributor Author)

I'm unclear about your comment; I cannot see it on my side. Can you guide me to where to find it?
I looked at the "Files changed" tab, but no comments or conversation are shown there.

process_video.py Outdated
last_frame=(current_frame_index == total_frame_count - 1),
path_to_image=None)

check_to_clear_non_permanent_cuda_memory(processor=processor, device=device)
Owner

Why is this needed? Is OOM a problem without this?

Contributor Author

I don't see an actual OOM, which is very nice: processing does not crash.
But I do see issues when the GPU memory limit is reached.
For context, I'm working on Windows 11 with a GeForce GTX 1060 6GB, testing on a ~1 min (1628 frames) Full HD video.

Without the memory cleanup, the FPS over the full video is 3.6.
The first ~1200 frames process smoothly and evenly, but then I can observe that the GPU memory limit is reached; processing slows down and thereafter happens very slowly, in bursts. It looks like some kind of memory swapping, but that is just a guess; I don't know.

With the memory cleanup, the FPS over the full video is 8.8, and processing runs smoothly and evenly for all frames. On a GPU memory graph I can observe a drop when the cleanup kicks in, but processing time does not really seem to be affected.

The cleanup step was inspired by the interactive UI functionality that does the same manually.
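The periodic cleanup check could be sketched roughly as below. This is a minimal sketch, not the PR's actual implementation: the threshold, the `mem_info` callable (e.g. a wrapper around `torch.cuda.mem_get_info`), and the assumption that the Cutie processor exposes a `clear_non_permanent_memory()` method mirroring the interactive UI action are all illustrative.

```python
def should_clear_memory(free_bytes: int, total_bytes: int,
                        threshold: float = 0.1) -> bool:
    """True when less than `threshold` of total GPU memory is still free.
    The 10% threshold is an illustrative assumption."""
    return free_bytes / total_bytes < threshold

def check_to_clear_non_permanent_cuda_memory(processor, mem_info,
                                             threshold: float = 0.1) -> bool:
    """Clear non-permanent memory when the GPU is nearly full.

    processor: assumed to expose clear_non_permanent_memory(), mirroring
               the interactive UI's manual cleanup action.
    mem_info:  callable returning (free_bytes, total_bytes), e.g. a
               wrapper around torch.cuda.mem_get_info(device).
    Returns True if a cleanup was performed.
    """
    free_bytes, total_bytes = mem_info()
    if should_clear_memory(free_bytes, total_bytes, threshold):
        processor.clear_non_permanent_memory()
        return True
    return False
```

Injecting `mem_info` as a callable keeps the decision logic testable without a GPU.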

Owner

I see. I don't think PyTorch supports GPU memory swapping so it might just be PyTorch greedily grabbing too much memory. Cleaning up the memory sometimes does improve the speed but might lead to non-intuitive failure (e.g., in some more challenging tracking scenarios). Is it possible to turn this feature into a command line option that is defaulted off?

Contributor Author

Yes, I will make the memory cleaning optional with a command line arg, but I will only have time to do this at the end of next week.

Contributor Author

I have now added a command line option for the CUDA memory cleaning, disabled by default.
Let me know what you think.
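An opt-in, default-off flag like the one requested could take this shape; the flag name and help text below are assumptions, the PR's actual option may be named differently:

```python
import argparse

parser = argparse.ArgumentParser()
# Default-off boolean flag: present -> True, absent -> False.
parser.add_argument(
    "--clear_non_permanent_cuda_memory", action="store_true",
    help="periodically clear non-permanent GPU memory; off by default, "
         "since it may hurt tracking in challenging scenarios")

args = parser.parse_args([])
print(args.clear_non_permanent_cuda_memory)  # -> False
```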

@hkchengrex (Owner)

Oh right, I didn't submit the review. Sorry about that.

@hkchengrex (Owner)

Thank you for your contribution! I will add a reference in the readme.

@hkchengrex hkchengrex merged commit f4a3aa8 into hkchengrex:main Nov 11, 2023