
running tracking after detection #332

Closed

andreaceruti opened this issue May 13, 2022 · 8 comments
Comments

@andreaceruti

Hi, I would like to know whether it is possible to use DiMP and ATOM with detections already provided by my trained detector.
I have trained a Mask R-CNN on my object (grape bunches, so it is not in the commonly used datasets).
I then decompose a video of a vineyard row into individual frames, generate the detections for each frame, and save them to a JSON file.
Is it possible to feed these already generated detections to the two tracking methods without doing any other sort of training?
Sorry if this is a basic question, but I am new to this field and looking for a quick solution for my project.

A second question: which tracking methods in this repo are offline? I do not require a real-time/online method.

@Ahsanr312

I believe these are general-purpose tracking algorithms: if you provide them with a detection, they will be able to track it without being trained on your grapes dataset. I have used DiMP for objects that are also uncommon and not included in the training datasets.

Secondly, I have not read every paper in this repository, but as far as I understand, ATOM, DiMP and PrDiMP use both offline and online learning: offline learning for the target estimation component, and online learning for the classification component.

@andreaceruti
Author

@Ahsanr312 Thank you for the reply and for sharing your experience! So, if I do not need the classification component because I already have the detections, do they become offline methods for object tracking? In any case, I will read the papers to clarify my doubts.

@Ahsanr312

@andreaceruti You do need the classification component as well. Your detector pauses once you hand the detected bounding box over to the tracker. From there onwards, the tracker is responsible for following the object, and classification is one of the key components for keeping the track. If the track is lost, your detector should run again over the following frames to relocate the object.
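Roughly, the pipeline looks like the sketch below. Note that `detector` and `tracker` are just hypothetical wrappers around your Mask R-CNN and any tracker from this repo, not actual pytracking classes:

```python
# Illustration only: `detector` and `tracker` are hypothetical wrappers around
# your Mask R-CNN and a tracker from this repo, not real pytracking classes.
def track_with_redetection(frames, detector, tracker, score_threshold=0.3):
    box = detector.detect(frames[0])         # detector provides the first box
    tracker.initialize(frames[0], box)       # hand the box over to the tracker
    results = [box]
    for frame in frames[1:]:
        box, score = tracker.track(frame)    # tracker follows the object
        if score < score_threshold:          # track lost -> detector takes over
            box = detector.detect(frame)
            tracker.initialize(frame, box)   # re-initialize on the new detection
        results.append(box)
    return results
```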

@martin-danelljan
Contributor

Hi. Yes, you can easily provide a bounding box from a detector, and the tracker should track it.

@andreaceruti
Author

@martin-danelljan Just to be sure: can I use all the algorithms without training a classification component? I would like to pass only the detections to the tracker at each frame, and then compute some sort of association between detections from one frame to the next.

@2006pmach
Collaborator

2006pmach commented May 24, 2022

Yes, that is correct. All you need to provide is the video and the detection (bounding box) of the object you want to track. The trackers were trained offline to be able to track any object that you specify via a bounding box. Trackers like ATOM or DiMP additionally contain an online training step in which the tracker adjusts its weights automatically so that it can track the provided object; this happens without any action on your side.
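For example, something along these lines should be enough. This is only a rough sketch; check run_video.py in the repo, since the exact module path and argument names may differ in your version:

```python
# Rough sketch: feed a first-frame detection (in [x, y, w, h] format) from your
# Mask R-CNN to a tracker. Verify names against run_video.py in your checkout.
from pytracking.evaluation import Tracker

init_box = [412, 260, 95, 140]          # example first-frame grape-bunch box
tracker = Tracker('dimp', 'dimp50')     # tracker name, parameter file
tracker.run_video('vineyard_row.mp4', optional_box=init_box)
```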

@andreaceruti
Author

Thank you all for the clarifications @2006pmach @martin-danelljan @Ahsanr312! Before closing the issue, I would like to know whether there are any OFFLINE methods in this repo.
Up to now I have used SORT, which is online since it uses only past detections to determine the detection IDs in the current frame, and another algorithm that is offline: it builds a graph over all the detections in the video and then finds the shortest path over that graph.
So I am interested in an algorithm that uses all the detections in each frame (or a batch of past and future frames) to determine the detection IDs of the current frame.
Do you know if there is an OFFLINE method in this repo? At a quick glance most of them seem to be online, but I am not an expert and I have not gone through all the papers.
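To make it concrete, this is roughly the kind of per-frame association I mean, run over the detections I already saved to JSON. The `iou` and `associate` helpers are just illustrative, not code from this repo, and a real OFFLINE method would also optimise over future frames:

```python
# Simplified sketch: link detections of consecutive frames by IoU using
# Hungarian matching over pre-computed (saved) detections.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # boxes in [x1, y1, x2, y2] format
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def associate(prev_boxes, curr_boxes, iou_threshold=0.3):
    """Return (prev_idx, curr_idx) pairs linking detections of two frames."""
    if not prev_boxes or not curr_boxes:
        return []
    cost = np.array([[1.0 - iou(p, c) for c in curr_boxes] for p in prev_boxes])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - iou_threshold]
```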

@martin-danelljan
Contributor

We do not have any offline trackers yet, and none are planned right now.
