Make it possible to log a dataset without loading anything #11172
Conversation
Documentation preview for 121756f will be available when this CircleCI job completes.
Signed-off-by: chenmoneygithub <chen.qian@databricks.com>
Force-pushed from 2a8d839 to ca3eaa4
Force-pushed from ca3eaa4 to da6f5c2
Force-pushed from aaf2f51 to ad1686a
base_dict: A string dictionary of base information about the
    dataset, including: name, digest, source, and source type.

def to_dict(self) -> Dict[str, str]:
Reason for this signature change:
- Classes that are overridden in subclasses should be public.
- It feels odd to have the `to_dict` method take a `base_dict` as input. We should just put the default logic in the body, and have subclasses call the superclass's method.
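The suggested pattern can be sketched roughly as follows. This is a hypothetical illustration of the reviewer's proposal, not MLflow's actual class hierarchy; the class and field names are assumptions based on the surrounding diff:

```python
from typing import Dict


class Dataset:
    """Base class: the default to_dict logic lives in the method body,
    instead of being passed in as a base_dict argument."""

    def __init__(self, name: str, digest: str, source: str, source_type: str):
        self.name = name
        self.digest = digest
        self.source = source
        self.source_type = source_type

    def to_dict(self) -> Dict[str, str]:
        # Default logic: name, digest, source, and source type.
        return {
            "name": self.name,
            "digest": self.digest,
            "source": self.source,
            "source_type": self.source_type,
        }


class MetaDataset(Dataset):
    """Subclass extends the dict by calling the superclass method."""

    def __init__(self, schema: str = "", **kwargs):
        super().__init__(**kwargs)
        self.schema = schema

    def to_dict(self) -> Dict[str, str]:
        config = super().to_dict()  # start from the base-class defaults
        config["schema"] = self.schema
        return config


ds = MetaDataset(
    name="ds", digest="abc123", source="s3://bucket", source_type="s3", schema="{}"
)
print(ds.to_dict()["source_type"])  # s3
```

With this shape, subclasses never need to know how the base fields are assembled; they only add their own keys on top.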
(1) makes sense to me. I don't feel strongly about (2), but it seems like a reasonable change.
Force-pushed from eaa5b0a to c0c8ffb
Overall looks good. Left a few small comments about organization and testing. Thanks @chenmoneygithub
with suppress(ImportError):
    # Suppressing ImportError to pass mlflow-skinny testing.
    from mlflow.data import meta_dataset  # noqa: F401
What part of `meta_dataset` is breaking the test?
`import numpy as np`, which also exists in numpy_dataset.py.
I see. It looks like we handle these in the dataset registry (mlflow/data/dataset_registry.py) rather than in the module __init__.py. Can we use that approach here for consistency?
Actually those imports inside mlflow/data/dataset_registry.py should also be written in __init__.py, otherwise it's quite unclear why mlflow.data.numpy_dataset is a valid module without having `from mlflow.data import numpy_dataset` in the __init__.py. I will open a follow-up PR to clean them up.
json_str = dataset.to_json()
parsed_json = json.loads(json_str)

assert parsed_json["digest"] is not None
Can we add some tests on the digest content itself? It's important that different dataset sources map to different digests.
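A test along the lines the reviewer asks for might look like the sketch below. It reproduces the PR's hashing scheme (sha256 over the JSON-encoded config, truncated to 8 hex characters) as a standalone function; the helper name and config dicts are illustrative, not the actual test code:

```python
import hashlib
import json


def compute_meta_digest(config):
    """Mirror of the digest logic in this PR: sha256 of the
    JSON-encoded config, truncated to 8 hex characters."""
    return hashlib.sha256(json.dumps(config).encode("utf-8")).hexdigest()[:8]


# Two datasets that differ only in source should get different digests.
digest1 = compute_meta_digest(
    {"name": "ds", "source": "s3://bucket/a", "source_type": "s3", "schema": ""}
)
digest2 = compute_meta_digest(
    {"name": "ds", "source": "fake/path/to/delta", "source_type": "delta", "schema": ""}
)
assert digest1 != digest2

# The digest is deterministic and fixed-length.
assert digest1 == compute_meta_digest(
    {"name": "ds", "source": "s3://bucket/a", "source_type": "s3", "schema": ""}
)
print(len(digest1))  # 8
```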
makes sense to me, adding.
mlflow/data/meta_dataset.py (Outdated)
super().__init__(source=source, name=name, digest=digest)

def _compute_digest(self) -> str:
    """Computes a digest for the dataset."""
Can we update this docstring with some information about how this hash works and how it differs from other dataset hashes?
good call, done!
config = {
    "name": self.name,
    "source": self.source.to_json(),
    "source_type": self.source._get_source_type(),
    "schema": self.schema.to_dict() if self.schema else "",
}
return hashlib.sha256(json.dumps(config).encode("utf-8")).hexdigest()[:8]
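The refactor the reviewer requests below (pulling this inline hashing into a helper, as with the other `compute_*_digest` functions) could look roughly like this. The helper name and parameter list are hypothetical, chosen only to mirror the inline snippet above:

```python
import hashlib
import json


def compute_meta_dataset_digest(name, source_json, source_type, schema_dict=None):
    """Hypothetical helper extracted from MetaDataset._compute_digest:
    hashes the dataset's metadata config (no data is loaded) and
    truncates the sha256 hex digest to 8 characters."""
    config = {
        "name": name,
        "source": source_json,
        "source_type": source_type,
        "schema": schema_dict if schema_dict else "",
    }
    return hashlib.sha256(json.dumps(config).encode("utf-8")).hexdigest()[:8]


# The dataset class would then call the helper instead of inlining the hash.
print(compute_meta_dataset_digest("ds", "{}", "delta"))
```

Either way the behavior is identical; the disagreement below is purely about where this short function should live.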
For consistency, can we pull this out to a helper function in mlflow/data/digest_utils.py?
Hmm, I would prefer inlining the code, since it does not share any logic with other functions in mlflow/data/digest_utils.py and is pretty short.
We have other hashing functions in mlflow/data/digest_utils.py, such as compute_tensorflow_dataset_digest, that are only used for one dataset type. I think we should pull this out even if it's short.
We should do the reverse: util functions that are specific to one module should go into that module, not the util file. I will clean this up in a follow-up PR.
Force-pushed from c0c8ffb to fd31fbc
Force-pushed from fd31fbc to d32d7d8
LGTM once the small reorganizations are addressed. Thanks @chenmoneygithub !
tests/data/test_meta_dataset.py (Outdated)
assert dataset1.digest != dataset2.digest

source = DeltaDatasetSource("fake/path/to/delta")
Nit: can we call this `delta_source` rather than overwriting `source`? It's hard to tell that `dataset1` and `dataset3` are meant to be different.
good call, done!
Signed-off-by: chenmoneygithub <chen.qian@databricks.com>
🛠 DevTools 🛠
Install mlflow from this PR
Checkout with GitHub CLI
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
Make it possible to log a dataset without loading anything. In more detail, under the hood the dataset metadata is logged via log_input.

How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- area/projects: MLproject format, project running backends
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/server-infra: MLflow Tracking server backend
- area/tracking: Tracking Service, tracking client APIs, autologging

Interface
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support

Language
- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages

Integrations
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes