
How to get the mask for mots task? #6

Closed
JessicaChanzc opened this issue Aug 5, 2021 · 4 comments
JessicaChanzc commented Aug 5, 2021

Hi!

Thanks for your great work!

When I prepared the segmentation masks for the MOTS task, I followed the recommended instructions at https://github.com/Zhongdao/UniTrack/blob/main/docs/DATA.md and used gen_mots_costa.py. Then I got txt files like the following:

1 2001 2 1080 1920 UkU\1`0RQ1>PoN\OVP1X1F=I3oSOTNlg0U2lWOVNng0m1nWOWNlg0n1PXOWNlg0l1SXOUNjg0P2.......
But it seems that these txt files are not segmentation masks. Are these txt files correct? Or could you please describe the mask generation process in more detail?

Thank you!

Zhongdao (Owner) commented Aug 5, 2021

Hi @JessicaChanzc,
If you noticed, the string starting with "UkU" encodes the object masks. This is the RLE (run-length encoding) format, which is widely used to encode masks compactly as strings; see here.

You can use the cocoapi (linked above) to encode masks into RLE strings, and also to decode RLE strings back into numpy arrays.
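To illustrate the idea, here is a minimal pure-Python sketch of run-length encoding for a binary mask. Note this is not the cocoapi itself: the actual cocoapi additionally packs the run counts into a compressed ASCII string like the one quoted above, so in practice you should use pycocotools' encode/decode functions rather than this sketch.

```python
# Illustrative sketch of the run-length idea behind COCO-style mask
# strings. COCO convention: counts alternate starting with the number
# of leading zeros (which may be 0 if the mask starts with a 1).

def rle_encode(mask_flat):
    """Encode a flat binary mask (list of 0/1) as alternating run counts."""
    counts = []
    current, run = 0, 0
    for px in mask_flat:
        if px == current:
            run += 1
        else:
            counts.append(run)
            current, run = px, 1
    counts.append(run)
    return counts

def rle_decode(counts):
    """Expand alternating run counts back into a flat 0/1 mask."""
    mask, value = [], 0
    for c in counts:
        mask.extend([value] * c)
        value = 1 - value
    return mask

flat = [0, 0, 1, 1, 1, 0, 1, 0, 0]
counts = rle_encode(flat)        # → [2, 3, 1, 1, 2]
assert rle_decode(counts) == flat
```

The compact counts list is why RLE works so well for masks, which tend to consist of long runs of identical pixels.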

@JessicaChanzc (Author)

Thanks for your reply.

It seems that in your test_mots.py you don't load any weights, but just visualize the tracking results?

I really appreciate your work. Could you please tell me how to test on my own dataset?

Best regards

Zhongdao (Owner) commented Aug 6, 2021

@JessicaChanzc

  • The appearance model loads its weights at initialization, see here and here. When using ImageNet pre-trained models, we just load the default torchvision weights by setting pretrain=True.
  • If you want to test on your own dataset, I recommend preparing the dataset in the same format as MOTS (video frames, detections). The output, of course, will be in the same format as the MOTS result text. An alternative is to combine a segmentation model with UniTrack, which may require some coding but is not very hard. If you have good recommendations for fast segmentation models, maybe I could write a demo in the future.
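For reference, a line of the MOTS-style result text like the one quoted earlier in this thread can be parsed as sketched below. This is my own reading of the space-separated layout `<frame> <object_id> <class_id> <img_height> <img_width> <rle>`; the field names are hypothetical, not taken from the repository.

```python
# Sketch of parsing one MOTS-style result line; field names are my own
# labels for the space-separated columns seen in the txt files above.
from typing import NamedTuple

class MotsEntry(NamedTuple):
    frame: int
    object_id: int
    class_id: int
    height: int
    width: int
    rle: str

def parse_mots_line(line: str) -> MotsEntry:
    # Split at most 5 times so the RLE string (the last field) stays
    # intact as a single token.
    frame, obj_id, cls, h, w, rle = line.strip().split(" ", 5)
    return MotsEntry(int(frame), int(obj_id), int(cls), int(h), int(w), rle)

entry = parse_mots_line("1 2001 2 1080 1920 UkU...")
```

`entry.rle` together with `entry.height` and `entry.width` is what you would hand to the cocoapi decoder to recover the binary mask.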

@JessicaChanzc
Copy link
Author

Thanks for your reply, it helps me a lot!

This issue was closed.