
Commit

Merge pull request #1272 from mzjp2/fix/profile-docstring
Remove profile CLI command parameter docstring
Aylr committed Apr 8, 2020
2 parents 9323f9f + 80f77ac commit fa2851a
Showing 2 changed files with 6 additions and 18 deletions.
16 changes: 6 additions & 10 deletions docs/changelog/changelog.rst
@@ -19,10 +19,8 @@ develop
* Remove the "project new" option from the command line (since it is not implemented; users can only run "init" to create a new project).
* Update type detection for bigquery based on driver changes in pybigquery driver 0.4.14. Added a warning for users who are running an older pybigquery driver
* added execution tests to the NotebookRenderer to mitigate codegen risks
* Add option "persist", true by default, for SparkDFDataset to persist the DataFrame it is passed. This addresses #1133
in a deeper way (thanks @tejsvirai for the robust debugging support and reproduction on spark).
- Disabling this option should *only* be done if the user has *already* externally persisted the DataFrame, or if the
dataset is too large to persist but *computations are guaranteed to be stable across jobs*.
* Add option "persist", true by default, for SparkDFDataset to persist the DataFrame it is passed. This addresses #1133 in a deeper way (thanks @tejsvirai for the robust debugging support and reproduction on spark).
* Disabling this option should *only* be done if the user has *already* externally persisted the DataFrame, or if the dataset is too large to persist but *computations are guaranteed to be stable across jobs*.
* Enable passing dataset kwargs through datasource via dataset_options batch_kwarg.
* Fix AttributeError when validating expectations from a JSON file
* Data Docs: fix bug that was causing erratic scrolling behavior when table of contents contains many columns
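The "persist" and "dataset_options" entries above translate to code roughly as follows. This is a minimal sketch, not taken from the commit: the input path, the datasource name, and the exact shape of the dataset_options dict are assumptions for illustration:

    from pyspark.sql import SparkSession
    from great_expectations.dataset import SparkDFDataset

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.parquet("/data/events.parquet")  # hypothetical input

    # persist defaults to True; pass False only if the DataFrame is already
    # persisted externally, or if it is too large to persist and computations
    # are guaranteed to be stable across jobs.
    dataset = SparkDFDataset(df.persist(), persist=False)
    dataset.expect_column_values_to_not_be_null("event_id")  # hypothetical column

    # The same kwarg can be routed through a datasource via the new
    # dataset_options batch kwarg (names here are illustrative).
    batch_kwargs = {
        "datasource": "my_spark_datasource",
        "path": "/data/events.parquet",
        "dataset_options": {"persist": False},
    }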
@@ -31,10 +29,8 @@ dataset is too large to persist but *computations are guaranteed to be stable ac
0.9.7
-----------------
* Update marshmallow dependency to >3. NOTE: as of this release, you MUST use marshmallow >3.0, which REQUIRES python 3. (`#1187 <https://github.com/great-expectations/great_expectations/issues/1187>`_) @jcampbell
-  - Schema checking is now stricter for expectation suites, and data_asset_name must not be present as a top-level
-    key in expectation suite json. It is safe to remove.
-  - Similarly, datasource configuration must now adhere strictly to the required schema, including having any
-    required credentials stored in the "credentials" dictionary.
+* Schema checking is now stricter for expectation suites, and data_asset_name must not be present as a top-level key in expectation suite json. It is safe to remove.
+* Similarly, datasource configuration must now adhere strictly to the required schema, including having any required credentials stored in the "credentials" dictionary.
* New beta CLI command: `tap new` that generates an executable python file to expedite deployments. (`#1193 <https://github.com/great-expectations/great_expectations/issues/1193>`_) @Aylr
* bugfix in TableBatchKwargsGenerator docs
* Added feature maturity in README (`#1203 <https://github.com/great-expectations/great_expectations/issues/1203>`_) @kyleaton
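Given the stricter schema checking above, a suite saved under the older layout just needs its top-level data_asset_name key dropped. A small sketch, with the suite path assumed for illustration:

    import json

    path = "great_expectations/expectations/my_suite.json"  # hypothetical location

    with open(path) as f:
        suite = json.load(f)

    # data_asset_name is no longer allowed as a top-level key; removing it is safe.
    suite.pop("data_asset_name", None)

    with open(path, "w") as f:
        json.dump(suite, f, indent=2)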
@@ -78,8 +74,8 @@ dataset is too large to persist but *computations are guaranteed to be stable ac
* Add support for transient table creation in snowflake (#1012)
* Improve path support in TupleStoreBackend for better cross-platform compatibility
* New features on `ExpectationSuite`
-  - `.add_citation()`
-  - `get_citations()`
+  - `.add_citation()`
+  - `get_citations()`
* `SampleExpectationsDatasetProfiler` now leaves a citation containing the original batch kwargs
* `great_expectations suite edit` now uses batch_kwargs from citations if they exist
* Bugfix :: suite edit notebooks no longer blow away the existing suite while loading a batch of data
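The citation helpers listed above can be exercised roughly as follows. The import path and the argument names (comment, batch_kwargs) are assumptions based on this changelog, not a verified signature:

    from great_expectations.core import ExpectationSuite

    suite = ExpectationSuite(expectation_suite_name="npi.warning")

    # Record where the expectations came from, so `suite edit` can later
    # reuse the same batch_kwargs.
    suite.add_citation(
        comment="profiled by SampleExpectationsDatasetProfiler",
        batch_kwargs={"datasource": "my_datasource", "path": "/data/npi.csv"},
    )

    for citation in suite.get_citations():
        print(citation)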
8 changes: 0 additions & 8 deletions great_expectations/cli/datasource.py
@@ -155,14 +155,6 @@ def datasource_profile(datasource, generator_name, data_assets, profile_all_data
if the number of data assets in the datasource exceeds the internally defined limit. If it does, it will
prompt the user to either specify the list of data assets to profile or to profile all.
If the limit is not exceeded, the profiler will profile all data assets in the datasource.
-    :param datasource: name of the datasource to profile
-    :param data_assets: if this comma-separated list of data asset names is provided, only the specified data assets will be profiled
-    :param profile_all_data_assets: if provided, all data assets will be profiled
-    :param directory:
-    :param view: Open the docs in a browser
-    :param additional_batch_kwargs: Additional keyword arguments to be provided to get_batch when loading the data asset.
-    :return:
"""
cli_message("<yellow>Warning - this is a BETA feature.</yellow>")
try:
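The parameters described in the deleted docstring correspond to the profiling call on the data context. The sketch below assumes DataContext.profile_datasource accepts equivalent keyword arguments, and the datasource and asset names are placeholders; for the CLI itself, `great_expectations datasource profile --help` lists the current options:

    import great_expectations as ge

    context = ge.data_context.DataContext()  # reads the local great_expectations/ project

    # Profile only the listed assets; leave data_assets unset and keep
    # profile_all_data_assets=True to profile everything in the datasource.
    results = context.profile_datasource(
        "my_datasource",
        data_assets=["orders", "customers"],
        profile_all_data_assets=False,
        additional_batch_kwargs={"limit": 1000},
    )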
