Can I run it in video/real-time? #240

Can I run it in video/real-time? Or just in images for now?

Comments
@rkz98 You can run it on real-time video as well: first detect the object with an object detector, then pass each detected region to the segmenter frame by frame.
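A minimal sketch of that detect-then-segment loop, assuming the official `segment_anything` package and a CUDA device. `detect_objects` is a hypothetical stand-in for whatever detector you use (YOLO, Faster R-CNN, etc.); it should return xyxy boxes:

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load SAM once, outside the frame loop (checkpoint path is an assumption).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to("cuda")
predictor = SamPredictor(sam)

def detect_objects(frame):
    """Hypothetical detector stub: return an (N, 4) array of xyxy boxes."""
    raise NotImplementedError

cap = cv2.VideoCapture("input.mp4")  # or 0 for a webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    predictor.set_image(rgb)              # runs the heavy image encoder once per frame
    for box in detect_objects(rgb):
        masks, scores, _ = predictor.predict(
            box=np.asarray(box),          # use the detector's box as the prompt
            multimask_output=False,
        )
        # masks[0] is a boolean mask for this object in this frame
cap.release()
```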
I'd like to know this as well. I don't care how small the image is or whatever, I just want it running in real time. Right now, calling set_image on a new image takes me about 45 seconds no matter what size image I pass in.
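A quick timing sketch to confirm where the time goes, assuming the `segment_anything` SamPredictor API on a CUDA device. SAM resizes every input's long side to 1024 before encoding, which is why the cost barely depends on the input size; `set_image` (the image encoder) should dominate, while `predict` (prompt encoder plus mask decoder) is comparatively cheap:

```python
import time
import numpy as np
import torch
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth").to("cuda")
predictor = SamPredictor(sam)
frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)  # dummy frame

torch.cuda.synchronize()
t0 = time.perf_counter()
predictor.set_image(frame)            # image encoder: the expensive part
torch.cuda.synchronize()
t1 = time.perf_counter()
masks, _, _ = predictor.predict(      # prompt encoder + mask decoder: cheap
    point_coords=np.array([[640, 360]]),
    point_labels=np.array([1]),
    multimask_output=False,
)
torch.cuda.synchronize()
t2 = time.perf_counter()
print(f"set_image: {t1 - t0:.2f}s, predict: {t2 - t1:.3f}s")
```

If `set_image` alone takes tens of seconds, the model is likely running on CPU rather than GPU.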
Take a look at https://github.com/z-x-yang/Segment-and-Track-Anything
No, this cannot be run in real time (at least not at more than about 5-10 FPS). Every video frame must first be run through a heavyweight image feature encoder before the segmenter can run. Even on an A100, that encoder takes some 150 ms per image.
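A back-of-the-envelope check of that ceiling, using the 150 ms figure from the comment above:

```python
# FPS upper bound implied by a 150 ms per-frame encoder cost (figure from the comment above).
encoder_seconds = 0.150
fps_ceiling = 1 / encoder_seconds
print(f"{fps_ceiling:.1f} FPS upper bound")  # ~6.7 FPS, before any detector/decoder overhead
```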
Yes, check out https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once. They combine models to solve this problem.
I haven't tested the FPS; it also depends on the GPU you are using. On one V100 GPU, a 20 s video takes about 20 s to process in total, something like that. The Hugging Face demo takes longer because of the time spent uploading the video.