In mytest.py, it seems to me that calc_trkpt helps you find the next detection point using tracking. Are you storing tracklets somewhere to help rebuild the complete track of each pedestrian? How can we reproduce the tracking illustrated in the DroneCrowd README.md?
At this stage of my understanding, it seems that the tracking information is used to localize pedestrians at time t based on their positions at time t-1. Am I right?
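To make my question concrete, this is the kind of t-1 → t association I have in mind. It is only a hypothetical nearest-neighbour sketch (the function name, the distance threshold, and the matching rule are all my assumptions, not taken from calc_trkpt):

```python
import numpy as np

def link_to_previous(prev_pts, curr_pts, max_dist=20.0):
    """For each head point detected at frame t-1, pick the nearest
    detection at frame t within max_dist pixels.
    Returns index pairs (i_prev, j_curr). Unmatched points are dropped."""
    links = []
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(curr_pts - p, axis=1)  # distances to all frame-t points
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            links.append((i, j))
    return links

prev_pts = np.array([[10.0, 10.0], [50.0, 50.0]])
curr_pts = np.array([[12.0, 11.0], [48.0, 53.0], [200.0, 200.0]])
print(link_to_previous(prev_pts, curr_pts))  # [(0, 0), (1, 1)]
```

Is that roughly what the tracking step in mytest.py does, or does it use the predicted motion rather than raw distances?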
I couldn't find where in your project the long tracks are drawn. In your paper, you said that you used Social LSTM and Min-Cost Flow to compute these tracks. Do you use them in tandem, or did you compare the two methods?
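To clarify what I mean by "drawing long tracks": I would expect a step that chains the per-frame associations into one index sequence per pedestrian. Here is a minimal greedy sketch of that chaining (all names are hypothetical, and greedy chaining is only a stand-in for the global Min-Cost Flow association your paper describes):

```python
def build_tracks(frame_links, n_first):
    """Chain per-frame-pair links into full tracks.
    frame_links[t] is a list of (i_prev, j_curr) pairs linking detections
    at frame t to detections at frame t+1. n_first is the number of
    detections in frame 0. Returns one list of detection indices per track;
    tracks that find no link at some frame simply end there."""
    tracks = [[i] for i in range(n_first)]          # one track per first-frame detection
    active = {i: i for i in range(n_first)}         # detection index -> track index
    for links in frame_links:
        nxt = {}
        for i_prev, j_curr in links:
            if i_prev in active:
                k = active[i_prev]
                tracks[k].append(j_curr)
                nxt[j_curr] = k
        active = nxt                                # unmatched tracks are terminated
    return tracks

# Two pedestrians over three frames; the second track ends after frame 1.
print(build_tracks([[(0, 0), (1, 2)], [(0, 1)]], 2))  # [[0, 0, 1], [1, 2]]
```

Is something like this done inside the repo, or are the long tracks in the README produced by separate (unreleased) Social LSTM / Min-Cost Flow code?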