
Evaluation on long time series: use split feed-forward mechanism #3482

AlexDBlack opened this issue Jun 5, 2017 · 2 comments


commented Jun 5, 2017

Example: a time series of length 10k.
You might train with a TBPTT length of 100, but at evaluation time we currently just feed-forward the whole time series naively. In the 10k/100 example, that means activation memory requirements are 100x higher at test time than at train time.

MultiLayerNetwork/ComputationGraph.doEvaluation() should automatically use a "partial/segmented time series feed-forward" mechanism, similar to TBPTT, for long time series.
This may be as simple as segmenting by the TBPTT lengths, and only when TBPTT is enabled (though if we can train without requiring this, then testing without it should also be possible).
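The idea can be sketched generically (this is a hedged illustration in numpy, not DL4J code; the RNN cell, weights, and segment length are all made up for the demo): because an RNN's forward pass only depends on the previous hidden state, evaluating in fixed-length segments while carrying the state across segment boundaries yields exactly the same outputs as one monolithic feed-forward, while only ever holding one segment's activations at a time.

```python
# Sketch: segmented feed-forward over a long time series, carrying
# RNN state across segments (the same state-handling TBPTT uses).
# All names and sizes here are illustrative, not from DL4J.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, T, seg_len = 4, 8, 1000, 100

# Fixed random weights for a simple tanh RNN cell.
W_in = rng.standard_normal((n_in, n_hidden)) * 0.1
W_rec = rng.standard_normal((n_hidden, n_hidden)) * 0.1

def feed_forward(x, h0):
    """Run the RNN over all steps of x; return per-step outputs and final state."""
    h, outs = h0, []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ W_in + h @ W_rec)
        outs.append(h)
    return np.stack(outs), h

x = rng.standard_normal((T, n_in))
h0 = np.zeros(n_hidden)

# Naive evaluation: one pass, activations for all T steps in memory at once.
full_out, _ = feed_forward(x, h0)

# Segmented evaluation: seg_len steps at a time, keeping only the carried state
# between segments, so peak activation memory is ~T/seg_len times smaller.
outs, h = [], h0
for start in range(0, T, seg_len):
    seg_out, h = feed_forward(x[start:start + seg_len], h)
    outs.append(seg_out)
seg_result = np.concatenate(outs)

# Outputs are identical to the full feed-forward.
assert np.allclose(full_out, seg_result)
```

Note this only works for forward-pass evaluation: no gradients flow between segments, which is exactly why it mirrors the TBPTT segmentation used at training time.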



AlexDBlack (author) commented Apr 25, 2018

Added here: #4405

@AlexDBlack AlexDBlack closed this Apr 25, 2018



lock bot commented Sep 22, 2018

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked and limited conversation to collaborators Sep 22, 2018
