Add a unit test for training and validation callbacks #32847
Conversation
Can you verify that the progress bar logger looks good and works as expected? I remember trying this and something was failing; I don't recall what now.
Well, I tried with the same code as the one I wrote in #32819, and it worked as expected. However, I don't think I'm in a position to guarantee that it doesn't break anything; I assumed the test suite would catch any problems.
Sounds good. Can you add a test case for the two-Dataset use case you tested in callbacks_test? There are existing unit tests that check Progbar; maybe we can add something similar for this case.
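To illustrate the kind of assertion such a unit test can make, here is a stdlib-only sketch. This is not the actual Keras test or Progbar implementation; `ToyProgbar` and its rendering format are hypothetical stand-ins for a progress bar that must cope with an unknown number of steps, which is the situation the PR's test covers.

```python
# Hypothetical stand-in for a progress bar; NOT the real Keras Progbar.
class ToyProgbar:
    """Renders one-line progress reports; target=None means the size is unknown."""

    def __init__(self, target=None):
        self.target = target  # total number of steps, or None if unknown
        self.seen = 0

    def update(self, n=1):
        self.seen += n
        if self.target is None:
            # With inputs of unknown size, only the step count can be shown.
            return "%d/Unknown" % self.seen
        return "%d/%d" % (self.seen, self.target)


def test_progbar_with_unknown_size():
    bar = ToyProgbar(target=None)
    line = ""
    for _ in range(5):
        line = bar.update()
    assert line == "5/Unknown"


def test_progbar_with_known_size():
    bar = ToyProgbar(target=3)
    assert bar.update(3) == "3/3"


test_progbar_with_unknown_size()
test_progbar_with_known_size()
```

The real test would instead run `model.fit` on datasets of unknown cardinality and assert on the logger's captured output, but the shape of the assertion is the same: the bar must degrade gracefully when the total is unknown.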
The current status is that all tests in the suite pass. (There was apparently one skipped, but I suspect it was due to a previous run.) I will then add a test, as suggested.
Thank you for adding the test. Can you confirm that the test you added fails without the change?
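This verification step can be illustrated generically with a stdlib-only sketch: run the same assertion against a buggy and a fixed implementation and check that it rejects the former and accepts the latter. Both functions and their behavior below are invented for illustration and are not code from the PR.

```python
def render_progress_fixed(seen, target):
    # Fixed behaviour: handles an unknown (None) target gracefully.
    if target is None:
        return "%d/Unknown" % seen
    return "%d/%d" % (seen, target)


def render_progress_buggy(seen, target):
    # Buggy behaviour: assumes target is always an integer.
    return "%d/%d" % (seen, target)  # raises TypeError when target is None


def new_test_passes(render):
    """The 'new test': returns True iff render copes with target=None."""
    try:
        return render(3, None) == "3/Unknown"
    except TypeError:
        return False


assert new_test_passes(render_progress_fixed)      # passes with the change
assert not new_test_passes(render_progress_buggy)  # fails without it
```

A test that passes both with and without the change would not actually guard the fix, which is why reviewers commonly ask for this confirmation.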
I did the following on master (earlier I reported results with respect to what comes with the release candidate).

Without any changes and without any new tests, one of the existing tests was failing. I assumed it was irrelevant. Then I focused on the relevant test. It was failing too; however, it was due to a different reason (not what the test was asserting): it complained about @keras_parameterized.run_all_keras_modes(always_skip_v1=True).

Still without any changes outside the tests, the new test succeeded. I confirmed that the test was properly executed by changing the expected output and seeing it fail. I then thought the problem had been fixed on master, checked out the release candidate, and ran:

bazel test //tensorflow/python/keras/...
...
//tensorflow/python/keras:callbacks_test FAILED in 4 out of 4 in 61.1s
...
Executed 153 out of 153 tests: 152 tests pass and 1 fails locally.
INFO: Build completed, 1 test FAILED, 10644 total actions

And the failure was, as expected, due to the issue in question.

In summary, the proposed change fixes a bug in 1.15.0-rc1 but doesn't seem to be needed for what is on the master branch; I think the following commit fixed it: 327c5be. So what do we do with all this? The change doesn't affect master, so it's probably better not to touch what isn't broken. The test might still be helpful, though.
Thank you for the detailed notes. I agree about adding the test case, since it will be useful, and we can call the issue done.
I’ve rebased and removed the first commit.
Thank you! Could you update the title and description to reflect the changes?
Done!
Thank you!
I’ve looked through the failed builds, and one of them was due to the new code: there was incorrect indentation on two lines. Fixed.
Imported from GitHub PR #32847

The test is to check that the progress bar shown by Keras during the training process is working properly when training and validating with inputs of unknown sizes.

Copybara import of the project:
- 3912067 Add a unit test for training and validation callbacks by Ivan Ukhov <ivan.ukhov@gmail.com>
- 2f4d26b Merge 3912067 into 18f70... by Ivan Ukhov <ivan.ukhov@gmail.com>

COPYBARA_INTEGRATE_REVIEW=#32847 from IvanUkhov:shared-callbacks 3912067
PiperOrigin-RevId: 272958417
It seems auto-merge is not happening, but the changes are now committed, so we can close this. Thank you for the PR.