There is a great instance segmentation mask propagation tool, [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model, by deep learning researchers Ho Kei Cheng and Alexander Schwing. There is a link to their git repo.
We have a video and one instance segmentation mask for its first frame.
We give them to the neural network, and it predicts masks for all the other frames.
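The workflow above can be sketched as follows. `DummyPropagator` is a hypothetical stand-in for the real XMem network (which keeps an internal feature memory); it simply carries the seed mask forward so the shape of the data flow is visible:

```python
import numpy as np

class DummyPropagator:
    """Hypothetical stand-in for the XMem network: it just carries the
    first-frame mask forward unchanged, so the data flow is visible."""
    def __init__(self):
        self.memory = None  # XMem would keep a learned feature memory here

    def step(self, frame, mask=None):
        if mask is not None:      # first frame: memorize the given mask
            self.memory = mask
        return self.memory        # later frames: predict from memory

def propagate_masks(frames, first_mask):
    """Video + one first-frame mask in, one mask per frame out."""
    tracker = DummyPropagator()
    masks = [tracker.step(frames[0], first_mask)]
    for frame in frames[1:]:
        masks.append(tracker.step(frame))
    return masks

# 3-frame toy video, 4x4 pixels, one object with id 1
video = [np.zeros((4, 4)) for _ in range(3)]
seed = np.zeros((4, 4), dtype=np.uint8)
seed[1:3, 1:3] = 1
out = propagate_masks(video, seed)
```

The real network would of course predict a new mask per frame from its memory rather than echoing the seed; this only shows the one-mask-in, masks-for-all-frames-out contract.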
Features:
- Handles very long videos with limited GPU memory usage.
- Quite fast: expect ~20 FPS even on long videos (hardware dependent).
- No fixed list of classes: we can annotate anything by mask (unlike SiamMask, which only works with a specific set of classes).
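The first feature is the point of the Atkinson-Shiffrin memory model: a small working memory is periodically condensed into a compact long-term store, so GPU memory stays roughly flat however long the video is. A toy illustration of the idea (XMem's actual consolidation and pruning are learned; here we just average recent entries and drop the oldest):

```python
from collections import deque

class BoundedMemory:
    """Toy version of XMem's idea: a capped working memory plus a capped
    long-term store, so memory use stays flat on arbitrarily long videos."""
    def __init__(self, working_size=5, long_term_size=50, consolidate_every=5):
        self.working = deque(maxlen=working_size)      # recent frame features
        self.long_term = deque(maxlen=long_term_size)  # condensed summaries
        self.consolidate_every = consolidate_every
        self.seen = 0

    def add(self, feature):
        self.working.append(feature)
        self.seen += 1
        if self.seen % self.consolidate_every == 0:
            # XMem learns what to keep and prunes least-used entries;
            # this sketch just averages the working memory and drops oldest
            self.long_term.append(sum(self.working) / len(self.working))

mem = BoundedMemory()
for t in range(1000):   # a "very long video"
    mem.add(float(t))
# both stores stay at their caps (5 and 50) no matter how many frames arrive
```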
They have a Colab demo with their neural network link. It is easy to launch, and the code is simple enough to integrate.
The example from the link is great for creating a potential CVAT interactor of the mask polygon tracker type )
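If this were wired into CVAT as a serverless tracker function, the glue could look roughly like the sketch below. The `init_context`/`handler` entry points follow CVAT's serverless-function convention, but the payload field names (`image`, `shape`, `state`) and the echo "model" are assumptions for illustration, not a verified schema:

```python
import json
import types

def init_context(context):
    # A real CVAT serverless function would load the XMem weights here;
    # this stub "model" just echoes the incoming shape and state.
    context.user_data.model = lambda image, shape, state: (shape, state)

def handler(context, event):
    # Field names below ("image", "shape", "state") are assumptions about
    # the tracker payload, not a verified CVAT schema.
    data = event.body
    shape, state = context.user_data.model(
        data.get("image"), data.get("shape"), data.get("state"))
    return json.dumps({"shape": shape, "state": state})

# exercise the sketch locally with minimal stand-ins for nuclio objects
ctx = types.SimpleNamespace(user_data=types.SimpleNamespace())
init_context(ctx)
event = types.SimpleNamespace(
    body={"image": "frame-bytes", "shape": [10, 10, 40, 40], "state": None})
result = handler(ctx, event)
```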
There are also some video demos: it works great and looks fantastic.
This DL model also has a GUI: we can annotate masks for some objects and then propagate them through the whole video.
Interestingly, the creators use f-BRS to create the instance segmentation mask for subsequent propagation.
CVAT has f-BRS as a segmentation interactor too.
In general, though, it does not matter much which tool creates the preliminary mask.
I think CVAT's other AI tool interactors (for example, HRNet) will work great too.
XMem seems like a very interesting basis for a video instance segmentation tool in CVAT.
Context
Get a powerful instance segmentation annotation tool for video )
medphisiker changed the title to "Enhancement: XMem mask video propagation as CVAT AI Tools polygon tracker" on Dec 14, 2022
My tests have shown that XMem does not get lost among many objects that share the selected mask's texture.
That is also great: for example, another neural network, MiVOS, jumped from one fish to another while propagating masks.
XMem also handles small twitches correctly, for example when a couple of frames are skipped in the video.
Given that segmentation results are requested per frame, should we keep a long-term memory for each user?
Otherwise, a memory reset may be triggered by one user while another is still using XMem.
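One way to address this, sketched below, is to key the memory by user or job session, so a reset clears only that user's bank. A plain list stands in for XMem's real feature memory, and the API is hypothetical:

```python
class PerUserMemory:
    """Sketch of the idea above: one XMem memory bank per user/session,
    so a reset by one user cannot clobber another user's state."""
    def __init__(self):
        self._banks = {}

    def bank(self, user_id):
        # lazily create an empty bank; a list stands in for XMem's
        # real feature memory
        return self._banks.setdefault(user_id, [])

    def reset(self, user_id):
        self._banks.pop(user_id, None)  # clears only this user's memory

store = PerUserMemory()
store.bank("alice").append("frame-0 features")
store.bank("bob").append("frame-0 features")
store.reset("alice")   # bob's bank is untouched
```

In a real deployment the key would probably be the CVAT job or session id rather than a bare user name, and idle banks would need eviction to bound GPU memory.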