Fix LSTM benchmark to evaluate on test set #263
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅
Additional details and impacted files

```
@@           Coverage Diff           @@
##              dev     #263   +/-   ##
=======================================
  Coverage   77.63%   77.63%
=======================================
  Files          43       43
  Lines         805      805
  Branches      119      119
=======================================
  Hits          625      625
  Misses        133      133
  Partials       47       47
=======================================
```
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Thanks @YounesBouhadjar, I think there was just one small issue: when the hidden states are populated in the last epoch of training at L139, there is still an optimizer step / weight update after that. So the hidden states initialized for testing would be computed with the weights from the second-to-last update, rather than the weights after the final update. I changed it to re-run the forward pass on the train data before the benchmark run on the test set, and also re-ran the benchmark; the difference in results is negligible.
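To make the fix concrete, here is a minimal sketch of the issue and the correction, assuming a PyTorch setup. All names (`lstm`, `head`, the toy sine data, hyperparameters) are illustrative placeholders, not taken from the repository: hidden states captured during the final training epoch predate the last `opt.step()`, so a fresh forward pass on the train data with the final weights is needed before evaluating on the test set.

```python
import torch
import torch.nn as nn

# Illustrative data: a sine sequence split into train and test segments.
torch.manual_seed(0)
seq_train = torch.sin(torch.linspace(0, 20, 200)).reshape(1, -1, 1)  # (batch, T, features)
seq_test = torch.sin(torch.linspace(20, 25, 50)).reshape(1, -1, 1)

lstm = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(5):
    opt.zero_grad()
    out, (h, c) = lstm(seq_train)  # (h, c) computed with the PRE-update weights
    pred = head(out)
    loss = loss_fn(pred[:, :-1], seq_train[:, 1:])  # one-step-ahead prediction
    loss.backward()
    opt.step()  # weights change AFTER (h, c) was captured -- this was the bug

# Fix: warm up the hidden state again with the FINAL weights,
# then evaluate on the test set continuing from that state.
with torch.no_grad():
    _, (h, c) = lstm(seq_train)            # re-run forward pass on train data
    test_out, _ = lstm(seq_test, (h, c))   # test evaluation from warmed-up state
    test_pred = head(test_out)
    test_mse = loss_fn(test_pred[:, :-1], seq_test[:, 1:]).item()
```

The extra forward pass is cheap relative to training, and keeps the state handed to the test run consistent with the weights actually being benchmarked.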
jasonlyik left a comment
This update corrects the issue with the test dataset for the LSTM Mackey-Glass benchmark.
Fixes #262
The benchmark is corrected to evaluate the LSTM on the test set, using the hidden states warmed up at the end of training.