
No appearance embedding is used? #47

Closed

JunweiLiang opened this issue Nov 2, 2021 · 5 comments

Comments

@JunweiLiang

Hi,
According to this code example: https://github.com/ifzhang/ByteTrack#combining-byte-with-other-detectors
there is no appearance embedding input for the tracker.
Could you confirm that your tracking algorithm is better than DeepSORT and TMOT (https://github.com/Zhongdao/Towards-Realtime-MOT), which use appearance embeddings, even though it does not use them to compute similarities between tracklets?
I have not checked the paper yet, just looking for a quick answer. Many thanks! :)
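For context, the usage being referenced looks roughly like the sketch below (a minimal sketch; `args`, `detector`, `info_imgs`, and `img_size` are placeholders based on that README section, not code copied verbatim). The point is that `update` receives only boxes and scores, with no per-detection appearance embedding:

```python
# Rough sketch of the "combining BYTE with other detectors" usage
# (placeholder names; see the linked README for the exact interface).
from yolox.tracker.byte_tracker import BYTETracker

tracker = BYTETracker(args)                # args: track_thresh, match_thresh, ...
for image in images:
    dets = detector(image)                 # N x 5 array: x1, y1, x2, y2, score
    # Only boxes and scores go in -- no appearance embedding per detection.
    online_targets = tracker.update(dets, info_imgs, img_size)
```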

@Mohamed209

Mohamed209 commented Nov 2, 2021

In the paper, the authors mention that embeddings are not important for their algorithm. From my reading of the paper and from using the code base in my projects, the Kalman filter uses only location and motion to predict track boxes and assign IDs; a rough sketch of that matching step is below.
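An illustrative sketch of that kind of IoU-only association between Kalman-predicted track boxes and new detections (this is a hedged sketch, not the project's actual code; `iou` and `associate` are made-up helper names):

```python
# Illustrative sketch: association cost built only from box geometry,
# with no appearance term. Not the actual ByteTrack implementation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(predicted_boxes, detected_boxes):
    """Match Kalman-predicted boxes to detections by IoU only (Hungarian)."""
    cost = np.array([[1.0 - iou(p, d) for d in detected_boxes]
                     for p in predicted_boxes])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```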

@JunweiLiang
Author

@ifzhang Any insights would be greatly appreciated. I'm really surprised that appearance features do not help.

@qwe1444

qwe1444 commented Nov 4, 2021

We replaced the detector with darknet-yolov4 and compared ByteTrack and DeepSORT on the MOT20 train set; DeepSORT did better than ByteTrack. Whether appearance embedding is useful is still an open question.

@ifzhang
Owner

ifzhang commented Nov 8, 2021

BYTE can be combined with embedding and can sometimes achieve better results than using Kalman alone. You can see some examples in the tutorials, such as FairMOT and CSTrack. In some videos with fast camera motion or low fps, embedding is more accurate than Kalman. We do not use embedding because we want faster inference speed. BTW, embedding tends to perform better on the training set (e.g. MOT20) because it can overfit the training set. However, it drops performance on the test set because of the domain gap.
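For readers wondering what that combination looks like, a hedged sketch of a JDE/FairMOT-style fusion is below (the weighting scheme, the `lam` value, and the function name are assumptions for illustration, not the actual FairMOT or CSTrack code):

```python
# Illustrative only: blend an appearance (cosine-distance) cost with an
# IoU/motion cost before Hungarian matching. Names and weighting are assumed.
import numpy as np

def fused_cost(iou_cost, track_embs, det_embs, lam=0.98):
    """lam weights the appearance term; (1 - lam) weights the IoU/motion term."""
    track_embs = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    det_embs = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    emb_cost = 1.0 - track_embs @ det_embs.T   # cosine distance, shape (T, D)
    return lam * emb_cost + (1.0 - lam) * iou_cost
```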

@JunweiLiang
Author

@ifzhang Thanks! That is very insightful.
