Fix batch scoring pipeline artifact writing on Databricks #6766
Conversation
mlflow/pipelines/steps/predict.py (outdated)

        ".mlflow",
        _get_execution_directory_basename(self.pipeline_root),
        _SCORED_OUTPUT_FOLDER_NAME,
    )
In Databricks, Spark writes Parquet output as a folder of part files. This breaks the get_artifact logic: when the artifact path has a .parquet file extension, the folder is read as if it were a single file. I've therefore updated the scored_data artifact writing and reading logic to use an artifact path without a .parquet file extension.
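The path change described above can be sketched as follows. The constant and helper below are illustrative (the real constant lives in mlflow/pipelines/steps/predict.py; the helper function is hypothetical):

```python
import os

# Illustrative constant; note the absence of a .parquet extension.
_SCORED_OUTPUT_FOLDER_NAME = "scored_data"


def scored_data_artifact_path(output_directory):
    """Build the path the scored_data artifact is written to and read from.

    Spark saves Parquet output as a *directory* of part files, so the
    artifact path must not end in .parquet; otherwise get_artifact would
    try to read the directory as if it were a single Parquet file.
    """
    return os.path.join(output_directory, _SCORED_OUTPUT_FOLDER_NAME)


print(scored_data_artifact_path("/tmp/predict/outputs"))
```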
mlflow/pipelines/steps/predict.py (outdated)

    if databricks_utils.is_in_databricks_runtime():
        dbfs_path = os.path.join(
            ".mlflow",
            _get_execution_directory_basename(self.pipeline_root),
            _SCORED_OUTPUT_FOLDER_NAME,
        )
        shutil.rmtree("/dbfs/" + dbfs_path, ignore_errors=True)
        scored_sdf.coalesce(1).write.format("parquet").save(dbfs_path)
        _logger.info("Moving artifact from DBFS to driver disk")
        shutil.copytree(
            "/dbfs/" + dbfs_path, os.path.join(output_directory, _SCORED_OUTPUT_FOLDER_NAME)
        )
        shutil.rmtree("/dbfs/" + dbfs_path)
    else:
        scored_sdf.coalesce(1).write.format("parquet").save(
            os.path.join(output_directory, _SCORED_OUTPUT_FILE_NAME)
        )
Can we abstract this logic into a util function, e.g. in https://github.com/mlflow/mlflow/blob/master/mlflow/utils/file_utils.py? I can easily see it being used in other places in the future.
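A hedged sketch of what such a file_utils helper might look like (the function name and signature are assumptions, not the actual MLflow API). The Spark write is passed in as a callable so the staging/copy/cleanup logic stays reusable; this also simplifies the real code's distinction between the dbfs:/ scheme used for writing and the /dbfs FUSE mount used for reading:

```python
import os
import shutil


def stage_parquet_and_copy_to_local(write_fn, staging_dir, output_directory, folder_name):
    """Hypothetical helper in the spirit of mlflow/utils/file_utils.py.

    On Databricks, Spark cannot write directly to the driver's local disk,
    so the DataFrame is first written to a DBFS staging location; the
    resulting Parquet folder is then copied to the driver-local output
    directory and the staging copy is removed.

    write_fn: callable performing the Spark write, e.g.
        lambda path: scored_sdf.coalesce(1).write.format("parquet").save(path)
    """
    staged_path = os.path.join(staging_dir, folder_name)
    shutil.rmtree(staged_path, ignore_errors=True)  # clear any stale staged output
    write_fn(staged_path)                           # Spark writes a folder of part files
    local_path = os.path.join(output_directory, folder_name)
    shutil.copytree(staged_path, local_path)        # copy the folder to driver disk
    shutil.rmtree(staged_path)                      # clean up the staging copy
    return local_path
```

On a non-Databricks runtime the caller could simply invoke write_fn on the local output path directly, matching the else branch in the diff above.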
Looks good to me :)
Thanks for the quick fix, @jerrylian-db !
* wip Signed-off-by: Jerry Liang <jerry.liang@databricks.com>
Related Issues/PRs
#xxx
What changes are proposed in this pull request?
This change fixes the artifact writing and reading logic for the batch scoring predict step.
How is this patch tested?
I cloned the example MLP regression template repo, updated its requirements.txt file to point to my fix branch, and ran the entire Databricks notebook in the example. Then I added notebook cells and ran the ingest_scoring and predict steps, both of which succeeded. Finally, I loaded the scored dataset artifact using the get_artifact API, and it succeeded. See screenshots:
Does this PR change the documentation?
If so, the rendered changes can be verified via the Details link on the Preview docs check.
Is this a user-facing change?
(Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)
What component(s), interfaces, languages, and integrations does this PR affect?

Components
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/pipelines: Pipelines, Pipeline APIs, Pipeline configs, Pipeline Templates
- area/projects: MLproject format, project running backends
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/server-infra: MLflow Tracking server backend
- area/tracking: Tracking Service, tracking client APIs, autologging

Interface
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support

Language
- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages

Integrations
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:
- rn/breaking-change: The PR will be mentioned in the "Breaking Changes" section
- rn/none: No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/feature: A new user-facing feature worth mentioning in the release notes
- rn/bug-fix: A user-facing bug fix worth mentioning in the release notes
- rn/documentation: A user-facing documentation change worth mentioning in the release notes