Reproduce results on ShanghaiTech #4

Hi,
Thank you for your contribution and for providing the code!
Unfortunately, I am not able to reproduce the results with the model pretrained on the ShanghaiTech dataset. I have performed all the pre-processing steps as you have clearly explained here and used the same pre-trained Cascade R-CNN and FlowNet2 weights.
Could you provide some information about how you extracted frames from the original videos in the training dataset? I have done it in the following way:
For each video, I created a folder named after the base name of the file and extracted the frames into it with ffmpeg. For example, for the video 01_001.avi I created a folder named 01_001 and ran the command ffmpeg -r 1 -i 01_001.avi -r 1 -start_number 0 "01_001/%03d.jpg". As a result, the training folder is organized in the same way as the testing folder.
Many thanks in advance.
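
A minimal sketch of the extraction procedure described above; the training directory path and variable names are placeholders, while the per-video ffmpeg invocation is exactly the one quoted in the post:

    # Extract frames for every training video into a folder named after the video.
    # "training" is an assumed placeholder path; adjust it to the actual dataset layout.
    cd training
    for video in *.avi; do
        name="${video%.avi}"                 # e.g. 01_001.avi -> 01_001
        mkdir -p "$name"
        ffmpeg -r 1 -i "$video" -r 1 -start_number 0 "$name/%03d.jpg"
    done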

Comments

I remember the frames are provided by the SH-Tech?

I have downloaded the dataset from the link as suggested. The testing directory contains frames, while the training one contains videos in .avi format.

Oh, yes, the training directory only contains .avi videos. I extracted the frames by […]. Please note that the frame numbers of some videos are more than a thousand, so I think […].

Thanks for your reply, I will try it!

Don't forget to check out the recent commit before preprocessing the dataset.
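
The exact extraction command is cut off in the comment above, so the following is an illustration only, not the repository's documented procedure: one common way to account for videos with more than a thousand frames is to use at least four digits of zero-padding, so the extracted files still sort in frame order.

    # Illustration only -- the truncated comment's real command is unknown.
    # %04d is an assumed choice: it zero-pads frame indices to four digits so that
    # videos with more than 999 frames keep a consistently sorted file naming.
    ffmpeg -i 01_001.avi -start_number 0 "01_001/%04d.jpg"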