running tracking after detection #332
Comments
I believe these are general-purpose tracking algorithms: if you provide them a detection, they will be able to track it without being trained on your grapes dataset. I have used DiMP for objects that are likewise uncommon and not included in the training dataset. Secondly, I have not read every paper in this repository, but as far as I understand, ATOM, DiMP, and PrDiMP use both offline and online learning: offline learning for the target estimation component, and online learning for the classification component.
@Ahsanr312 Thank you for the reply and for sharing your experience! So if I do not need the classification component, since I already have the detections, would these be offline methods for object tracking? In any case, I will read the papers to clarify my doubts.
@andreaceruti You do need the classification component as well. Detection pauses once you hand your detected bounding box over to the tracker; from there onwards, the tracker is responsible for tracking the object, and classification is one of the key components for maintaining the track. If you lose the track, your detector should run over the frames again to relocate the object.
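The detect → track → re-detect hand-off described above can be sketched as follows. This is a minimal illustration of the control flow only: `StubDetector` and `StubTracker` are hypothetical stand-ins for a real detector and tracker (this is not the pytracking API).

```python
class StubDetector:
    """Stands in for a trained detector (e.g. Mask R-CNN); returns a box or None."""
    def __init__(self, boxes_by_frame):
        self.boxes_by_frame = boxes_by_frame

    def detect(self, frame_idx):
        return self.boxes_by_frame.get(frame_idx)


class StubTracker:
    """Stands in for a tracker like DiMP: initialized with one box, then updated."""
    def __init__(self):
        self.box = None

    def init(self, box):
        self.box = box

    def update(self, frame_idx):
        # A real tracker would predict a new box from image data; here we
        # simply keep the box until an artificial "lost" frame.
        if frame_idx == 3:
            self.box = None  # simulate losing the target
        return self.box


def run(detector, tracker, num_frames):
    """Detect once, hand off to the tracker, and re-detect when the track is lost."""
    results = {}
    tracking = False
    for t in range(num_frames):
        if not tracking:
            box = detector.detect(t)
            if box is not None:
                tracker.init(box)  # hand-off: detection pauses here
                tracking = True
        else:
            box = tracker.update(t)
            if box is None:
                tracking = False  # fall back to detection on later frames
        results[t] = box
    return results
```

The key design point is the single flag flipping between the two modes: the detector runs only while there is no active track.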
Hi. Yes, you can easily provide a bounding box from a detector, and the tracker should track it. |
@martin-danelljan So, to be sure: can I use all the algorithms without training a classification component? I would like to pass just the detections to the tracker at each frame, and then compute some sort of association between detections from one frame to the next.
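One simple way to compute the frame-to-frame association mentioned here is greedy IoU matching between the previous and current detections. This is a minimal sketch, not something provided by this repository; boxes are assumed to be `(x1, y1, x2, y2)` tuples.

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0


def associate(prev_boxes, curr_boxes, iou_thresh=0.3):
    """Greedily match each previous box to its best-overlapping current box.

    Returns (prev_idx, curr_idx) pairs; unmatched boxes start or end tracks.
    """
    pairs, used = [], set()
    for i, p in enumerate(prev_boxes):
        best_j, best_iou = -1, iou_thresh
        for j, c in enumerate(curr_boxes):
            if j in used:
                continue
            score = iou(p, c)
            if score > best_iou:
                best_j, best_iou = j, score
        if best_j >= 0:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs
```

For many boxes per frame, Hungarian matching (e.g. `scipy.optimize.linear_sum_assignment` on a negated IoU matrix) is the usual upgrade over this greedy loop.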
Yes, that is correct. All you need to provide is the video and the detection (bounding box) of the object that you want to track. The trackers were trained offline to be able to track any object specified via a bounding box. Trackers like ATOM or DiMP additionally contain an online training step, where the tracker adjusts its weights so that it can track the provided object. This happens automatically.
Thank you all for the clarifications @2006pmach @martin-danelljan @Ahsanr312! Before closing the issue, I would like to know whether this repo contains any OFFLINE methods.
We do not have any offline trackers yet. And none are planned right now. |
Hi, I would like to know if it is possible to use DiMP and ATOM with detections already provided by my trained detector.
I have simply trained a Mask R-CNN on my objects (grape bunches, so they are not present in the commonly used datasets).
Then I decompose a video of a vineyard row into multiple frames, generate the detections for each frame, and save them to a JSON file.
Is it possible to use my already generated detections in these 2 tracking methods without doing any other sort of training?
Sorry for the probably naive question, but I am new to this field and I am looking for a quick solution for my project.
Another question I have: can you tell me which offline tracking methods are in the repo, since I do not require a real-time/online method?
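For reference, a per-frame detection JSON like the one described above can be very simple. The file layout, box format `(x1, y1, x2, y2)`, and `score` field below are assumptions for illustration, not a schema this repo requires:

```python
import json

# Hypothetical per-frame detections keyed by frame name.
detections = {
    "frame_0001.jpg": [{"bbox": [34, 50, 120, 210], "score": 0.91}],
    "frame_0002.jpg": [{"bbox": [36, 52, 122, 213], "score": 0.88}],
}

# Serialize to a JSON string (json.dump with a file handle works the same way).
payload = json.dumps(detections, indent=2)

# Reading it back yields the same structure, ready to seed a tracker
# with the first frame's bounding box.
loaded = json.loads(payload)
```

The first frame's `bbox` is what would be handed to the tracker for initialization; the remaining frames' detections are only needed for association or re-initialization.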