How can I train with my own video sequence? #3
@Wisgon:
Hello @lim-anggun,
As far as I know, to train on my own video sequence I have to manually configure FgSegNetModule.py. But I'm a newbie to Keras, and to deep learning in general. I found that I should modify the code below to fit my input video, but I don't know what num_pixels to pass to it. How can I find the num_pixels corresponding to my video sequence, and in what situation should I use Cropping2D()? Is there anything else I need to modify?
Thank you very much for replying.

@lim-anggun:
Hi @Wisgon, in more detail: if your input dimension is, say, 240x320, then after the encoder downsamples twice your feature map is 60x80, and after the decoder upsamples twice your output is back to 240x320. In this case, you don't need to use Cropping2D().

@Wisgon:
OK, I will try it later. Thank you very much.
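The divisibility rule from the reply above can be sketched in plain Python. This is a hypothetical helper, not code from FgSegNetModule.py: it only checks whether each spatial dimension survives two 2x poolings followed by two 2x upsamplings unchanged, which (per the reply) is the case where Cropping2D() is unnecessary.

```python
def needs_cropping(height, width, num_downsamplings=2):
    """Return True if the decoder's output size will not match the input.

    After `num_downsamplings` 2x downsamplings and the matching 2x
    upsamplings, the output equals the input only when each dimension
    is divisible by 2**num_downsamplings; otherwise a Cropping2D-style
    adjustment is needed. (Assumption: the encoder/decoder use plain
    2x pooling/upsampling, as described in the maintainer's reply.)
    """
    factor = 2 ** num_downsamplings
    return height % factor != 0 or width % factor != 0

# 240x320 -> 60x80 after the encoder -> 240x320 after the decoder
print(needs_cropping(240, 320))  # False: no cropping needed
# 241x321 is not divisible by 4, so the sizes no longer round-trip
print(needs_cropping(241, 321))  # True
```

So a quick check before training on your own sequence is whether both frame dimensions are divisible by 4 (for two downsampling stages); if they are, the example above returns False and Cropping2D() can be left out.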