Fix flaky keras test #2926
Conversation
Codecov Report
```
@@           Coverage Diff           @@
##           master    #2926   +/-   ##
=======================================
  Coverage   85.04%   85.04%
=======================================
  Files          20       20
  Lines        1050     1050
=======================================
  Hits          893      893
  Misses        157      157
```
Continue to review the full report at Codecov.
I wrote a notebook to verify that a small learning rate prevents gradients from exploding: https://colab.research.google.com/drive/1a0b60Gk9ItEfQDluyi8W49xE6Uckf5eS?usp=sharing
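For reference, here is a minimal sketch of the kind of check the notebook performs (this is not the notebook's code; the model, data, and learning-rate values are illustrative): train the same small Keras model with a small and a large learning rate and test whether the predictions stay finite.

```python
import numpy as np
import tensorflow as tf

def train_and_predict(learning_rate):
    # Same tiny regression problem for both runs; only the learning rate varies.
    np.random.seed(0)
    tf.random.set_seed(0)
    x = np.random.rand(100, 4).astype("float32")
    y = x.sum(axis=1)
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
        loss="mse",
    )
    model.fit(x, y, epochs=10, verbose=0)
    return model.predict(x, verbose=0)

# A small learning rate keeps predictions finite; a large one can make
# SGD updates diverge so predictions become inf/NaN.
print(np.isfinite(train_and_predict(1e-3)).all())  # expected: True
print(np.isfinite(train_and_predict(10.0)).all())  # may be False (diverged)
```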
Looks OK. I wonder if we could also set a fixed random seed so this is deterministic?
@aarondav We could do that too. I'll add a fixture that fixes a random seed.
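A hedged sketch of what such a fixture could look like (the name and seed value are illustrative, not the exact code merged in this PR), assuming the TF 2.x seeding API:

```python
import random

import numpy as np
import pytest
import tensorflow as tf

@pytest.fixture(autouse=True)
def fix_random_seed():
    # Seed the Python, NumPy, and TensorFlow RNGs before each test so the
    # generated data, weight initialization, and gradient trajectory are
    # reproducible across runs.
    random.seed(1337)
    np.random.seed(1337)
    tf.random.set_seed(1337)
```

Note that seeding alone mainly makes data and initialization reproducible; full run-to-run determinism can require more (for example, deterministic ops on GPU).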
What changes are proposed in this pull request?

`test_model_save_load` in `test_keras_model_export.py` fails when the gradients of the model explode during training and the prediction values become infinite. This PR fixes the flakiness by using a smaller learning rate.

Example failing run: https://github.com/mlflow/mlflow/pull/2914/checks?check_run_id=762832699#step:5:347
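The fix amounts to how the test model is compiled. A hedged sketch (the exact optimizer and value in the diff may differ): pass an explicitly small learning rate rather than a larger or default one.

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
# An explicitly small learning rate keeps training in the test stable, so
# predictions cannot blow up to inf/NaN on unlucky initializations.
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
    loss="mean_squared_error",
)
```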
How is this patch tested?
(Details)
Release Notes
Is this a user-facing change?
(Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/projects: MLproject format, project running backends
- area/scoring: Local serving, model deployment tools, Spark UDFs
- area/tracking: Tracking Service, tracking client APIs, autologging

Interface
- area/uiux: Front-end, user experience, JavaScript, plotting
- area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support

Language
- language/r: R APIs and clients
- language/java: Java APIs and clients

Integrations
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations

How should the PR be classified in the release notes? Choose one:
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes