Updating torch version to latest stable release - 1.6.0 #3452
Conversation
Signed-off-by: Shrinath Suresh <shrinath@ideas2it.com>
@shrinath-suresh thanks for the PR! We should definitely make sure we test against the latest pytorch :) (cc @apurva-koti who was also looking at bumping test dependency versions). It looks like one of the unit tests fails with torch 1.6.0 - would you have bandwidth to take a look at why?
LGTM if we can fix the test breakage!
@smurching Sure. I will debug and let you know.
@smurching Shall we update the cloudpickle version too? Even though we are going to remove it from the main script, cloudpickle is still used by the test script (where the user wants to override the default pickle argument). Moreover, I am facing an error when using
…ad mechanism where pickle_module.load is called
Signed-off-by: Shrinath Suresh <shrinath@ideas2it.com>
@smurching Figured out why test_load_model_allows_user_to_override_pickle_module_via_keyword_argument is failing. torch 1.6 saves the model as a zip file and uses the _load method, which in turn uses pickle_module.Unpickler to load the model, whereas torch 1.4 does not. The test case passes locally by setting
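The behavior difference described in that comment can be sketched without torch itself. The snippet below is a hypothetical stand-in: TracingPickleModule and the two load_torch_*_style helpers are invented names that mimic the two entry points a user-supplied pickle_module can be invoked through, illustrating why a module that only provides a custom load function would not be exercised by a loader that constructs pickle_module.Unpickler instead.

```python
import io
import pickle

calls = []  # records which pickle_module entry point was used

class TracingUnpickler(pickle.Unpickler):
    """Unpickler subclass that records when its load() is invoked."""
    def load(self):
        calls.append("Unpickler.load")
        return super().load()

class TracingPickleModule:
    """Hypothetical user-supplied pickle_module exposing both entry points."""
    Unpickler = TracingUnpickler
    Pickler = pickle.Pickler

    @staticmethod
    def load(f):
        calls.append("module.load")
        return pickle.load(f)

def load_old_style(f, pickle_module):
    # Older-style loader: calls pickle_module.load directly.
    return pickle_module.load(f)

def load_new_style(f, pickle_module):
    # Newer-style loader: constructs pickle_module.Unpickler and calls load().
    return pickle_module.Unpickler(f).load()

buf = io.BytesIO()
pickle.dump({"weights": [1, 2, 3]}, buf)

buf.seek(0)
old_result = load_old_style(buf, TracingPickleModule)
buf.seek(0)
new_result = load_new_style(buf, TracingPickleModule)

print(calls)  # ['module.load', 'Unpickler.load']
```

A test that only patches pickle_module.load would therefore keep passing under the old-style path but never fire under the new-style path, which matches the kind of breakage described above.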
Signed-off-by: Shrinath Suresh <shrinath@ideas2it.com>
@apurva-koti @smurching Getting a seemingly random linting failure (mlflow/mlflow/pytorch/pickle_module.py). Do I need to push a dummy change to restart the linting build, or is there another way to re-run it?
@shrinath-suresh, you can re-run the build by clicking the details link in the failed check and pressing the "re-run workflows" button on the checks page.
@apurva-koti I don't see the "re-run workflows" button on the checks page (maybe due to a lack of permissions?). I will use the command line to add one more commit.
Signed-off-by: Shrinath Suresh <shrinath@ideas2it.com>
Signed-off-by: Shrinath Suresh <shrinath@ideas2it.com>
Ah sorry! Probably a permissions issue. Let me take a look at this now that you've addressed the lint issue.
LGTM. Current failures are on master and unrelated to this PR.
Thanks @shrinath-suresh !
Signed-off-by: Sid Murching <sid.murching@databricks.com>
Signed-off-by: Sid Murching <sid.murching@databricks.com>
@smurching Thank you very much. Can we merge the PR if the changes look fine?
LGTM
@shrinath-suresh yes! Done :)
What changes are proposed in this pull request?
Updating the torch version to the latest stable release, 1.6.0.
How is this patch tested?
Verified mlflow/tests/pytorch/test_pytorch_model_export.py and mlflow/examples/pytorch/mnist_tensorboard_artifact.py with PyTorch 1.6.0.
Release Notes
Is this a user-facing change?
(Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)
What component(s), interfaces, languages, and integrations does this PR affect?
Components
area/artifacts: Artifact stores and artifact logging
area/build: Build and test infrastructure for MLflow
area/docs: MLflow documentation pages
area/examples: Example code
area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
area/models: MLmodel format, model serialization/deserialization, flavors
area/projects: MLproject format, project running backends
area/scoring: Local serving, model deployment tools, spark UDFs
area/server-infra: MLflow server, JavaScript dev server
area/tracking: Tracking Service, tracking client APIs, autologging
Interface
area/uiux: Front-end, user experience, JavaScript, plotting
area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
area/windows: Windows support
Language
language/r: R APIs and clients
language/java: Java APIs and clients
language/new: Proposals for new client languages
Integrations
integrations/azure: Azure and Azure ML integrations
integrations/sagemaker: SageMaker integrations
integrations/databricks: Databricks integrations
How should the PR be classified in the release notes? Choose one:
rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
rn/feature - A new user-facing feature worth mentioning in the release notes
rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
rn/documentation - A user-facing documentation change worth mentioning in the release notes