reproducing the video prediction model #553
For the most part, yes. There are a few differences:
That curve is about what I would expect. It looks strange because of scheduled sampling, a curriculum that stochastically passes in ground-truth frames at some timesteps during the beginning of training. The curriculum ends around 12k steps (see citation [2] in the paper for details). To turn off scheduled sampling, you can set --schedsamp_k=-1
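For intuition, here is a minimal sketch of the inverse-sigmoid decay schedule from citation [2] (Bengio et al., 2015) that this kind of curriculum typically follows. The function name and the example value k=900 are illustrative assumptions, not the exact constants used in prediction_train.py:

```python
import math

def ground_truth_prob(step, k):
    """Probability of feeding the ground-truth frame at a training step.

    Inverse-sigmoid decay (Bengio et al., 2015): starts near 1, decays
    toward 0 as training progresses. k = -1 disables scheduled sampling
    entirely (the model always consumes its own predictions).
    NOTE: a sketch; the exact schedule in the released code may differ.
    """
    if k == -1:
        return 0.0
    return k / (k + math.exp(step / k))

# Early on, ground truth is fed almost always; with k around 900 the
# probability has decayed to nearly zero by ~12k steps, which matches
# the bump in the loss curve ending around that point.
```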
I did this work when I was an intern at Google Brain, and I no longer have access to data/code/training curves that I used for the paper.
I'm not planning on doing this in the immediate future, but I would love to have something like this added to the released code. I'd be happy to help review code for this, and potentially add to it. For example, I think that tiling animated gifs is a great way to visualize the model's predictions, as seen here: https://sites.google.com/site/robotprediction/ (scroll down about halfway). I have the code for tiling predictions together and saving them into a gif, which I'd be happy to share. It would also be really useful to visualize the gifs during training, e.g., in tensorboard (tensorflow/tensorflow#3936)
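As a rough illustration of the tiling idea (not the author's actual code), a batch of predicted frame sequences can be rearranged into a single grid sequence with plain NumPy reshapes before gif export. The function name and shape convention (batch, time, H, W, C) here are assumptions:

```python
import numpy as np

def tile_frames(frames, rows, cols):
    """Tile a batch of frame sequences into one grid-of-videos sequence.

    frames: float array of shape (batch, time, H, W, C), batch == rows*cols.
    Returns an array of shape (time, rows*H, cols*W, C) suitable for
    writing out as a single animated gif.
    """
    b, t, h, w, c = frames.shape
    assert b == rows * cols, "batch size must fill the grid exactly"
    grid = frames.reshape(rows, cols, t, h, w, c)
    # Reorder to (time, rows, H, cols, W, C), then merge the grid axes.
    grid = grid.transpose(2, 0, 3, 1, 4, 5)
    return grid.reshape(t, rows * h, cols * w, c)
```

Sample b in row-major order, so frames[0] lands in the top-left cell and frames[rows*cols-1] in the bottom-right.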
Thanks for the response @cbfinn. Closing this out. If you have more concerns, please do file a new issue/check with @cbfinn
@cbfinn Thanks for the clarifications and pointers! I will follow up with more specific issues should they arise.
@cbfinn |
Here's an example script that loads images from the pushing dataset and exports them to gifs, using the moviepy package (though it does not tile them). It is straightforward to use moviepy to stack gifs side by side to form a tiling.
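A minimal sketch of the gif-export step with moviepy (this is not the linked script; the helper names are illustrative, and moviepy must be installed separately):

```python
import numpy as np

def to_uint8(frames):
    """Convert float frames in [0, 1] to uint8 for gif export."""
    return (np.clip(frames, 0.0, 1.0) * 255).astype(np.uint8)

def save_gif(frames, path, fps=10):
    """Write a (time, H, W, C) float array to an animated gif.

    Requires moviepy; the import path changed between major versions,
    so both are tried. A sketch, not production code.
    """
    try:  # moviepy >= 2.0
        from moviepy import ImageSequenceClip
    except ImportError:  # moviepy 1.x
        from moviepy.editor import ImageSequenceClip
    clip = ImageSequenceClip(list(to_uint8(frames)), fps=fps)
    clip.write_gif(path)
```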
@cbfinn @falcondai
@tegg89 I ended up using
@cbfinn @falcondai
@tegg89 Make sure you are only calling session.run() once for the entire sequence, rather than once for each frame. The script grab_train_images.py shows how to extract a sequence of images in order, with a single sess.run() per sequence.
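The reason this matters: the input pipeline is queue-based, so every session.run() on the image tensor dequeues a fresh batch. Calling run() once per frame therefore mixes frames from different sequences. A pure-Python stand-in for the TF1 queue (class and names are illustrative, not the repo's API) shows the two patterns:

```python
from itertools import count

class SequenceQueue:
    """Stand-in for a TF1 input queue: each run() dequeues the
    next full sequence, tagged here as (sequence_id, frame_index)."""
    def __init__(self, frames_per_seq=3):
        self._ids = count()
        self._t = frames_per_seq
    def run(self):
        seq_id = next(self._ids)
        return [(seq_id, t) for t in range(self._t)]

queue = SequenceQueue()

# Wrong: one run() per frame. Each call dequeues a *different*
# sequence, so the collected "video" mixes several sequences.
mixed = [queue.run()[t] for t in range(3)]

# Right: one run() per sequence. All frames come from the same
# video, in temporal order.
ordered = queue.run()
```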
@cbfinn @falcondai Referring to
Then the output did not come out in sequential form. It seems the model's input pipeline does not preserve the sequential order of the frames. How did you visualize the evaluation?
@cbfinn Thanks for your paper and code. And sorry to bother you with a small detail.
Is
@carsonDB The percentage is the same, but the actual videos used for training and validation are different (as they are randomized). |
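In other words, the split fraction is fixed but the shuffle is not seeded identically across runs, so different videos land in the validation set each time. A hedged sketch of that behavior (function name and the 5% fraction are illustrative assumptions, not the repo's actual values):

```python
import random

def split_files(filenames, val_fraction=0.05, seed=None):
    """Randomly partition record filenames into train/validation lists.

    The *fraction* held out is always the same, but *which* files are
    held out depends on the shuffle, so two runs train and validate on
    different videos. A sketch of the idea, not the released code.
    """
    files = list(filenames)
    random.Random(seed).shuffle(files)
    n_val = int(len(files) * val_fraction)
    return files[n_val:], files[:n_val]
```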
@cbfinn Thanks for your quick reply!
models/video_prediction @cbfinn
Thank you for generously sharing the code! I have three questions about the released code:
prediction_train.py
? In particular, the number of training steps.