
[MAINTENANCE] Remove fluent partitioner methods from DataAssets #9517

Merged: 120 commits into develop from m/v1-175/remove_legacy_partition_methods on Feb 28, 2024
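For context, the shape of the API change this PR makes can be sketched with stand-in classes. All names below are illustrative only, not the real Great Expectations classes: after this PR, the partitioner travels with the batch request instead of being attached to the asset via fluent `add_partitioner_*` methods.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class PartitionerYearAndMonth:
    """Stand-in partitioner; param_names drive the request option keys."""
    column_name: str
    param_names: Tuple[str, ...] = ("year", "month")


@dataclass
class BatchRequest:
    datasource_name: str
    data_asset_name: str
    options: dict
    partitioner: Optional[PartitionerYearAndMonth] = None  # new: carried on the request


class DataAsset:
    name = "my_table"

    def get_batch_request_options_keys(
        self, partitioner: Optional[PartitionerYearAndMonth] = None
    ) -> Tuple[str, ...]:
        # Replaces the old batch_request_options property: the keys depend on
        # the partitioner passed in, not on mutable asset state.
        return partitioner.param_names if partitioner else ()

    def build_batch_request(
        self,
        options: Optional[dict] = None,
        partitioner: Optional[PartitionerYearAndMonth] = None,
    ) -> BatchRequest:
        return BatchRequest("my_ds", self.name, options or {}, partitioner)


asset = DataAsset()
partitioner = PartitionerYearAndMonth(column_name="created_at")
request = asset.build_batch_request(options={"year": "2023"}, partitioner=partitioner)
assert asset.get_batch_request_options_keys(partitioner) == ("year", "month")
assert request.partitioner is partitioner
```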
Commits
c3969dd
wip
joshua-stauffer Feb 14, 2024
c9d892a
Merge branch 'develop' into f/v1-175/generic_partitioners
joshua-stauffer Feb 14, 2024
3cc0639
add generic partitioners
joshua-stauffer Feb 14, 2024
36239d9
move Partitioner out of typechecking
joshua-stauffer Feb 16, 2024
4854727
update import
joshua-stauffer Feb 16, 2024
19164a2
type
joshua-stauffer Feb 16, 2024
8bf119b
schema update
joshua-stauffer Feb 16, 2024
e695d19
add partitioner to BatchRequest
joshua-stauffer Feb 16, 2024
a167db1
update tests
joshua-stauffer Feb 16, 2024
50994e7
update test snapshot
joshua-stauffer Feb 16, 2024
e20b62e
add partitioner to DataAsset.build_batch_request
joshua-stauffer Feb 16, 2024
307263b
update type stub
joshua-stauffer Feb 16, 2024
5032352
update test snapshot
joshua-stauffer Feb 16, 2024
8193139
missed one
joshua-stauffer Feb 16, 2024
fa5cda8
Merge branch 'develop' into f/v1-175/add_partitioners_to_batch_request
joshua-stauffer Feb 16, 2024
9218fc9
add test
joshua-stauffer Feb 16, 2024
f8780c2
add partitioner map to base class
joshua-stauffer Feb 20, 2024
c7e9103
add partitioner maps
joshua-stauffer Feb 20, 2024
29e28af
rename spark partitioners
joshua-stauffer Feb 20, 2024
c0d2c1a
refactor sql partitioners
joshua-stauffer Feb 20, 2024
af5e0d9
rename spark partitioners
joshua-stauffer Feb 20, 2024
8795e5f
Merge branch 'develop' into f/v1-175/add_partitioners_to_batch_request
joshua-stauffer Feb 20, 2024
9d2bdf5
schema update
joshua-stauffer Feb 20, 2024
f9b2210
Merge branch 'f/v1-175/add_partitioners_to_batch_request' into f/v1-1…
joshua-stauffer Feb 20, 2024
89f7193
update per rename
joshua-stauffer Feb 20, 2024
4b405a0
Merge branch 'develop' into f/v1-175/asset_uses_partitioner_from_batc…
joshua-stauffer Feb 20, 2024
7bb9777
implement method to replace batch_options property
joshua-stauffer Feb 20, 2024
1c0c5d7
move partitioner resolve method to subclass
joshua-stauffer Feb 21, 2024
baff01e
typeguard
joshua-stauffer Feb 21, 2024
364dfaa
add override
joshua-stauffer Feb 21, 2024
f0d2f52
rename param
joshua-stauffer Feb 21, 2024
d626182
update types
joshua-stauffer Feb 21, 2024
a4e24b4
remove from interface
joshua-stauffer Feb 21, 2024
269089e
add types to subclass
joshua-stauffer Feb 21, 2024
f00a1fb
tweak type
joshua-stauffer Feb 21, 2024
dbd95d0
revert
joshua-stauffer Feb 21, 2024
1004b30
types
joshua-stauffer Feb 21, 2024
8623707
move type out of typechecking
joshua-stauffer Feb 21, 2024
ac3ee51
move import
joshua-stauffer Feb 21, 2024
85f072a
try type as function
joshua-stauffer Feb 21, 2024
9ed5cee
revert
joshua-stauffer Feb 21, 2024
c2ba4c7
hacky type workaround
joshua-stauffer Feb 21, 2024
fbf7657
use Type
joshua-stauffer Feb 21, 2024
abc76d5
spark: use batch request partitioner
joshua-stauffer Feb 21, 2024
0cf9e3a
sql: use batch request partitioner
joshua-stauffer Feb 21, 2024
de6fc0c
add sqlite partitioners
joshua-stauffer Feb 21, 2024
454aef8
fix sqlite tests
joshua-stauffer Feb 22, 2024
7a9fafa
use get_batch_request_options instead of batch_request_options property
joshua-stauffer Feb 22, 2024
6edf63f
wip
joshua-stauffer Feb 22, 2024
7da1552
rename method
joshua-stauffer Feb 22, 2024
aa244ab
sqlite tests passing
joshua-stauffer Feb 22, 2024
d0b3961
update signature
joshua-stauffer Feb 22, 2024
ab1149f
update types
joshua-stauffer Feb 22, 2024
e2e4d16
move ConvertedDatetime partitioner into sql
joshua-stauffer Feb 22, 2024
d801d17
ensure sqlite assets have access to correct partitioner map
joshua-stauffer Feb 22, 2024
f97c441
schema update
joshua-stauffer Feb 22, 2024
d88a277
Merge branch 'develop' into f/v1-175/asset_uses_partitioner_from_batc…
joshua-stauffer Feb 22, 2024
d0ff1ca
schema update
joshua-stauffer Feb 22, 2024
b924f97
refactor partitioner to batch request in conftest
joshua-stauffer Feb 23, 2024
7b7247f
update integration tests to use partitioner in batch request
joshua-stauffer Feb 23, 2024
3362915
remove deprecated partitioner
joshua-stauffer Feb 23, 2024
7611ecb
Merge branch 'develop' into f/v1-175/asset_uses_partitioner_from_batc…
joshua-stauffer Feb 23, 2024
46a5c3e
remove sql partitioner_year
joshua-stauffer Feb 23, 2024
ff5abdf
schema update
joshua-stauffer Feb 23, 2024
b38fa01
remove sql add_partitioner_year_and_month
joshua-stauffer Feb 23, 2024
4953def
remove sql add_partitioner_year_and_month_and_day
joshua-stauffer Feb 23, 2024
5cf5c9e
remove sql add_partitioner_datetime_part
joshua-stauffer Feb 23, 2024
519867e
remove sql add_partitioner_column_value
joshua-stauffer Feb 23, 2024
6137d92
update integration tests
joshua-stauffer Feb 23, 2024
ec0e7f1
remove sql add_partitioner_divided_integer
joshua-stauffer Feb 23, 2024
dc32b3b
remove sql add_partitioner_mod_integer
joshua-stauffer Feb 23, 2024
5b188d9
remove sql add_partitioner_multi_column_values
joshua-stauffer Feb 23, 2024
795c831
remove remaining helper methods
joshua-stauffer Feb 23, 2024
3e708f8
remove sql test_partitioner
joshua-stauffer Feb 23, 2024
2509852
remove references to batch_request_options
joshua-stauffer Feb 23, 2024
1ae3497
update viral snippet
joshua-stauffer Feb 23, 2024
990cb11
remove sql batch_request_options
joshua-stauffer Feb 23, 2024
d91e36d
update tests related to fixture
joshua-stauffer Feb 23, 2024
6c964ad
update postgresql tests
joshua-stauffer Feb 23, 2024
e7d28e3
update snippets
joshua-stauffer Feb 23, 2024
b19afcf
Merge branch 'f/v1-175/asset_uses_partitioner_from_batch_request' int…
joshua-stauffer Feb 23, 2024
e6e777c
add partitioners to public api
joshua-stauffer Feb 23, 2024
175814b
public api
joshua-stauffer Feb 23, 2024
53d8c89
schema update
joshua-stauffer Feb 23, 2024
4d3b6ee
update api excludes
joshua-stauffer Feb 23, 2024
344bedc
Merge branch 'f/v1-175/asset_uses_partitioner_from_batch_request' int…
joshua-stauffer Feb 23, 2024
4eab300
fix doc snippet
joshua-stauffer Feb 23, 2024
4b0651e
remove assert
joshua-stauffer Feb 26, 2024
43888ed
remove comment
joshua-stauffer Feb 26, 2024
9d42d57
remove batch config option keys method from batch config
joshua-stauffer Feb 26, 2024
cb0987c
Merge branch 'f/v1-175/asset_uses_partitioner_from_batch_request' int…
joshua-stauffer Feb 26, 2024
0b63506
Merge branch 'develop' into m/v1-175/remove_legacy_partition_methods
joshua-stauffer Feb 26, 2024
3a4b906
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 26, 2024
52e4f63
update test
joshua-stauffer Feb 26, 2024
dbd18ac
remove batch_config_options
joshua-stauffer Feb 26, 2024
3dac9c9
update batch config persistence behavior
joshua-stauffer Feb 26, 2024
9c1134e
ensure partitioner is an optional param
joshua-stauffer Feb 26, 2024
8ad0efc
update postgresql tests
joshua-stauffer Feb 27, 2024
17a32b7
update docs
joshua-stauffer Feb 27, 2024
c6f3772
remove spark add_partitioner_year
joshua-stauffer Feb 27, 2024
f34d8d7
remove spark add_partitioner_year_and_month
joshua-stauffer Feb 27, 2024
854d523
remove spark add_partitioner_year_and_month_and_day
joshua-stauffer Feb 27, 2024
c7a899b
remove spark add_partitioner_datetime_part
joshua-stauffer Feb 27, 2024
d79245c
remove spark add_partitioner_column_value
joshua-stauffer Feb 27, 2024
b666761
remove spark add_partitioner_divided_integer
joshua-stauffer Feb 27, 2024
82ac959
remove spark add_partitioner_mod_integer
joshua-stauffer Feb 27, 2024
b020097
remove spark add_partitioner_multi_column_values
joshua-stauffer Feb 27, 2024
a3bc3b5
remove _add_partitioner
joshua-stauffer Feb 27, 2024
e8954fe
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 27, 2024
8b237f6
Merge branch 'develop' into m/v1-175/remove_legacy_partition_methods
joshua-stauffer Feb 27, 2024
31b6674
remove type ignores
joshua-stauffer Feb 27, 2024
b4be3df
add return type
joshua-stauffer Feb 28, 2024
ae5bb0d
remove sqlite add_partitioner method
joshua-stauffer Feb 28, 2024
9198fe6
remove DataAsset.partitioner and update tests
joshua-stauffer Feb 28, 2024
e9d1fde
Merge branch 'develop' into m/v1-175/remove_legacy_partition_methods
joshua-stauffer Feb 28, 2024
18dec3f
update yaml
joshua-stauffer Feb 28, 2024
212de7e
remove stray reference to DataAsset.partitioner
joshua-stauffer Feb 28, 2024
9d3bfd5
update unit test
joshua-stauffer Feb 28, 2024
92b7b0f
up schema
joshua-stauffer Feb 28, 2024
4720bd0
add back removed test
joshua-stauffer Feb 28, 2024
docs/docusaurus/docs/oss/guides/connecting_to_your_data/fluent/data_assets/organize_batches_in_pandas_filesystem_datasource.py
@@ -46,7 +46,7 @@
my_asset = my_asset.add_sorters(["+year", "-month"])
# </snippet>

- assert my_asset.batch_request_options == ("year", "month", "path")
+ assert my_asset.get_batch_request_options_keys() == ("year", "month", "path")

# Python
# <snippet name="docs/docusaurus/docs/oss/guides/connecting_to_your_data/fluent/data_assets/organize_batches_in_pandas_filesystem_datasource.py my_batch_list">
2 changes: 1 addition & 1 deletion docs/docusaurus/docs/snippets/batch_request.py
@@ -39,7 +39,7 @@
assert batch_request.options == {"year": "2019", "month": "02"}

# <snippet name="docs/docusaurus/docs/snippets/batch_request options">
- options = asset.batch_request_options
+ options = asset.get_batch_request_options_keys()
print(options)
# </snippet>

@@ -51,10 +51,10 @@

# Python
# <snippet name="docs/docusaurus/docs/snippets/get_existing_data_asset_from_existing_datasource_pandas_filesystem_example.py my_batch_request_options">
- print(my_asset.batch_request_options)
+ print(my_asset.get_batch_request_options_keys())
# </snippet>

- assert my_asset.batch_request_options == ("year", "month", "path")
+ assert my_asset.get_batch_request_options_keys() == ("year", "month", "path")

# Python
# <snippet name="docs/docusaurus/docs/snippets/get_existing_data_asset_from_existing_datasource_pandas_filesystem_example.py my_batch_request">
2 changes: 1 addition & 1 deletion great_expectations/datasource/fluent/batch_request.py
@@ -57,7 +57,7 @@ class BatchRequest(pydantic.BaseModel):
data_asset_name: The name of the Data Asset used to connect to the data.
options: A dict that can be used to filter the batch groups associated with the Data Asset.
The dict structure depends on the asset type. The available keys for dict can be obtained by
- calling DataAsset.batch_request_options.
+ calling DataAsset.get_batch_request_options_keys(...).
batch_slice: A python slice that can be used to filter the sorted batches by index.
e.g. `batch_slice = "[-5:]"` will request only the last 5 batches after the options filter is applied.

@@ -41,7 +41,7 @@ def _get_batch_definition_list(
Returns:
List of batch definitions, in the case of a _DirectoryDataAssetMixin the list contains a single item.
"""
- if self.partitioner:
+ if batch_request.partitioner:
# Currently non-sql asset partitioners do not introspect the datasource for available
# batches and only return a single batch based on specified batch_identifiers.
batch_identifiers = batch_request.options
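The change above from `self.partitioner` to `batch_request.partitioner` is what removes partitioner state from the asset: one asset instance can now serve requests carrying different partitioners. A minimal pure-Python sketch of that dispatch (hypothetical names, not the real classes):

```python
from types import SimpleNamespace


class Asset:
    """Stateless with respect to partitioning, mirroring the diff above."""

    def batch_identifiers_for(self, batch_request) -> dict:
        # Consult the request's partitioner, not an attribute on self.
        if batch_request.partitioner is not None:
            return dict(batch_request.options)
        return {}


asset = Asset()
req_a = SimpleNamespace(
    partitioner=SimpleNamespace(param_names=("year",)), options={"year": "2019"}
)
req_b = SimpleNamespace(partitioner=None, options={})
assert asset.batch_identifiers_for(req_a) == {"year": "2019"}
assert asset.batch_identifiers_for(req_b) == {}
```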
178 changes: 3 additions & 175 deletions great_expectations/datasource/fluent/file_path_data_asset.py
@@ -65,8 +65,6 @@
)

if TYPE_CHECKING:
- from typing_extensions import Self
-
from great_expectations.core.batch import BatchDefinition, BatchMarkers
from great_expectations.core.id_dict import BatchSpec
from great_expectations.core.partitioners import Partitioner
@@ -106,7 +104,6 @@ class _FilePathDataAsset(DataAsset):
default_factory=dict,
description="Optional filesystem specific advanced parameters for connecting to data assets",
)
- partitioner: Optional[SparkPartitioner] = None

_unnamed_regex_param_prefix: str = pydantic.PrivateAttr(
default="batch_request_param_"
@@ -176,36 +173,10 @@ def get_partitioner_implementation(
)
return PartitionerClass(**abstract_partitioner.dict())

- @property
- @override
- def batch_request_options(

> Review comment (Contributor): Do we have a ticket to add this property to BatchConfigs and have them call this method on their asset?
> Reply (Member Author, joshua-stauffer): v1-205 👍

self,
) -> tuple[str, ...]:
"""The potential keys for BatchRequestOptions.

Example:
```python
>>> print(asset.batch_request_options)
("day", "month", "year", "path")
>>> options = {"year": "2023"}
>>> batch_request = asset.build_batch_request(options=options)
```

Returns:
A tuple of keys that can be used in a BatchRequestOptions dictionary.
"""
partitioner_options: tuple[str, ...] = tuple()
if self.partitioner:
partitioner_options = tuple(self.partitioner.param_names)
return (
tuple(self._all_group_names)
+ (FILE_PATH_BATCH_SPEC_KEY,)
+ partitioner_options
)

@override
def get_batch_request_options_keys(
- self, partitioner: Optional[Partitioner]
+ self,
+ partitioner: Optional[Partitioner] = None,
) -> tuple[str, ...]:
option_keys: tuple[str, ...] = tuple(self._all_group_names) + (
FILE_PATH_BATCH_SPEC_KEY,
@@ -228,7 +199,7 @@ def build_batch_request(
Args:
options: A dict that can be used to filter the batch groups returned from the asset.
The dict structure depends on the asset type. The available keys for dict can be obtained by
- calling batch_request_options.
+ calling get_batch_request_options_keys(...).
batch_slice: A python slice that can be used to limit the sorted batches by index.
e.g. `batch_slice = "[-5:]"` will request only the last 5 batches after the options filter is applied.
partitioner: A Partitioner used to narrow the data returned from the asset.
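Per the docstring above, a file-path asset's available option keys are its regex group names plus the path key, extended by partitioner parameters when a partitioner accompanies the request. A sketch of that composition (`FILE_PATH_BATCH_SPEC_KEY` is the name used in this diff; the free function is a hypothetical stand-in for the method):

```python
from typing import Optional, Tuple

FILE_PATH_BATCH_SPEC_KEY = "path"  # key name taken from the diff


def option_keys(
    group_names: Tuple[str, ...],
    partitioner_params: Optional[Tuple[str, ...]] = None,
) -> Tuple[str, ...]:
    # Regex-derived keys and the path key are always present; partitioner
    # keys appear only when a partitioner is supplied with the request.
    keys = group_names + (FILE_PATH_BATCH_SPEC_KEY,)
    if partitioner_params:
        keys += partitioner_params
    return keys


assert option_keys(("year", "month")) == ("year", "month", "path")
assert option_keys(("year", "month"), ("day",)) == ("year", "month", "path", "day")
```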
@@ -478,146 +449,3 @@ def _get_reader_options_include(self) -> set[str]:
"""One needs to explicitly provide set(str)-valued reader options for "pydantic.BaseModel.dict()" method \
to use as its "include" directive for File-Path style DataAsset processing."""
)

def _add_partitioner(self: Self, partitioner: SparkPartitioner) -> Self:
self.partitioner = partitioner
return self

@public_api
def add_partitioner_year(
self: Self,
column_name: str,
) -> Self:
"""Associates a year partitioner with this data asset.
Args:
column_name: A column name of the date column where year will be parsed out.
Returns:
This asset so we can use this method fluently.
"""
return self._add_partitioner(
SparkPartitionerYear(
method_name="partition_on_year", column_name=column_name
)
)

@public_api
def add_partitioner_year_and_month(
self: Self,
column_name: str,
) -> Self:
"""Associates a year, month partitioner with this asset.
Args:
column_name: A column name of the date column where year and month will be parsed out.
Returns:
This asset so we can use this method fluently.
"""
return self._add_partitioner(
SparkPartitionerYearAndMonth(
method_name="partition_on_year_and_month", column_name=column_name
)
)

@public_api
def add_partitioner_year_and_month_and_day(
self: Self,
column_name: str,
) -> Self:
"""Associates a year, month, day partitioner with this asset.
Args:
column_name: A column name of the date column where year and month will be parsed out.
Returns:
This asset so we can use this method fluently.
"""
return self._add_partitioner(
SparkPartitionerYearAndMonthAndDay(
method_name="partition_on_year_and_month_and_day",
column_name=column_name,
)
)

@public_api
def add_partitioner_datetime_part(
self: Self, column_name: str, datetime_parts: List[str]
) -> Self:
"""Associates a datetime part partitioner with this asset.
Args:
column_name: Name of the date column where parts will be parsed out.
datetime_parts: A list of datetime parts to partition on, specified as DatePart objects or as their string equivalent e.g. "year", "month", "week", "day", "hour", "minute", or "second"
Returns:
This asset so we can use this method fluently.
"""
return self._add_partitioner(
SparkPartitionerDatetimePart(
method_name="partition_on_date_parts",
column_name=column_name,
datetime_parts=datetime_parts,
)
)

@public_api
def add_partitioner_column_value(self: Self, column_name: str) -> Self:
"""Associates a column value partitioner with this asset.
Args:
column_name: A column name of the column to partition on.
Returns:
This asset so we can use this method fluently.
"""
return self._add_partitioner(
SparkPartitionerColumnValue(
method_name="partition_on_column_value",
column_name=column_name,
)
)

@public_api
def add_partitioner_divided_integer(
self: Self, column_name: str, divisor: int
) -> Self:
"""Associates a divided integer partitioner with this asset.
Args:
column_name: A column name of the column to partition on.
divisor: The divisor to use when partitioning.
Returns:
This asset so we can use this method fluently.
"""
return self._add_partitioner(
SparkPartitionerDividedInteger(
method_name="partition_on_divided_integer",
column_name=column_name,
divisor=divisor,
)
)

@public_api
def add_partitioner_mod_integer(self: Self, column_name: str, mod: int) -> Self:
"""Associates a mod integer partitioner with this asset.
Args:
column_name: A column name of the column to partition on.
mod: The mod to use when partitioning.
Returns:
This asset so we can use this method fluently.
"""
return self._add_partitioner(
SparkPartitionerModInteger(
method_name="partition_on_mod_integer",
column_name=column_name,
mod=mod,
)
)

@public_api
def add_partitioner_multi_column_values(
self: Self, column_names: list[str]
) -> Self:
"""Associates a multi-column value partitioner with this asset.
Args:
column_names: A list of column names to partition on.
Returns:
This asset so we can use this method fluently.
"""
return self._add_partitioner(
SparkPartitionerMultiColumnValue(
column_names=column_names,
method_name="partition_on_multi_column_values",
)
)
26 changes: 4 additions & 22 deletions great_expectations/datasource/fluent/interfaces.py
@@ -208,28 +208,8 @@ def test_connection(self) -> None:
"""One needs to implement "test_connection" on a DataAsset subclass."""
)

# Abstract Methods
@property
def batch_request_options(self) -> tuple[str, ...]:
"""The potential keys for BatchRequestOptions.

Example:
```python
>>> print(asset.batch_request_options)
("day", "month", "year")
>>> options = {"year": "2023"}
>>> batch_request = asset.build_batch_request(options=options)
```

Returns:
A tuple of keys that can be used in a BatchRequestOptions dictionary.
"""
raise NotImplementedError(
"""One needs to implement "batch_request_options" on a DataAsset subclass."""
)

def get_batch_request_options_keys(
- self, partitioner: Optional[Partitioner]
+ self, partitioner: Optional[Partitioner] = None
) -> tuple[str, ...]:
raise NotImplementedError(
"""One needs to implement "get_batch_request_options_keys" on a DataAsset subclass."""
@@ -246,7 +226,7 @@ def build_batch_request(
Args:
options: A dict that can be used to filter the batch groups returned from the asset.
The dict structure depends on the asset type. The available keys for dict can be obtained by
- calling batch_request_options.
+ calling get_batch_request_options_keys(...).
batch_slice: A python slice that can be used to limit the sorted batches by index.
e.g. `batch_slice = "[-5:]"` will request only the last 5 batches after the options filter is applied.
partitioner: A Partitioner used to narrow the data returned from the asset.
@@ -304,11 +284,13 @@ def add_batch_config(
batch_config = BatchConfig(name=name, partitioner=partitioner)
batch_config.set_data_asset(self)
self.batch_configs.append(batch_config)
+ self.update_batch_config_field_set()

> Review comment (Member): We now need to always remember to call this after each update to self.batch_configs? Is there a better way to do this without putting the burden on future devs?
> Reply (Member Author, joshua-stauffer): cc @tyler-hoffman - this was a bugfix he snuck in, since the context was in this PR
> Reply (Member Author, joshua-stauffer): i've added a ticket, and will have a solution as a followup 🙇


if self.datasource.data_context:
try:
batch_config = self.datasource.add_batch_config(batch_config)
except Exception:
self.batch_configs.remove(batch_config)
+ self.update_batch_config_field_set()
raise
self.update_batch_config_field_set()
return batch_config
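The code above encodes an add-then-roll-back pattern: append locally, refresh derived state, and undo both if persisting to the backend fails. A generic, simplified sketch of the same pattern (names hypothetical; `persist` stands in for the datasource call):

```python
class Collection:
    """Keeps a local list and derived state consistent with a remote store."""

    def __init__(self, persist):
        self.items = []
        self._persist = persist
        self._sync()

    def add(self, item):
        self.items.append(item)
        self._sync()  # keep derived state current after every mutation
        try:
            self._persist(item)
        except Exception:
            # Undo the local append so a failed persist leaves no orphan.
            self.items.remove(item)
            self._sync()
            raise
        return item

    def _sync(self):
        # Stand-in for update_batch_config_field_set(): derived state.
        self.field_set = len(self.items) > 0


def always_ok(item):
    pass


def always_fails(item):
    raise RuntimeError("persist failed")


good = Collection(always_ok)
good.add("a")
assert good.items == ["a"] and good.field_set is True

bad = Collection(always_fails)
try:
    bad.add("x")
except RuntimeError:
    pass
assert bad.items == [] and bad.field_set is False
```

The remaining burden noted in the review thread is that every mutation site must remember to call the sync step; wrapping mutation and sync in one method (as `add` does here) is one way to localize that.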
10 changes: 4 additions & 6 deletions great_expectations/datasource/fluent/pandas_datasource.py
@@ -18,6 +18,7 @@
Optional,
Sequence,
Set,
+ Tuple,
Type,
TypeVar,
Union,
@@ -114,13 +115,10 @@ def _get_reader_method(self) -> str:
def test_connection(self) -> None:
...

- @property
- @override
- def batch_request_options(self) -> tuple[str, ...]:
- return tuple()

@override
- def get_batch_request_options_keys(self, partitioner):
+ def get_batch_request_options_keys(
+ self, partitioner: Optional[Partitioner] = None
+ ) -> Tuple[str, ...]:
return tuple()

@override
@@ -1,6 +1,6 @@
{
"title": "BatchRequest",
- "description": "--Public API--A BatchRequest is the way to specify which data Great Expectations will validate.\n\nA Batch Request is provided to a Data Asset in order to create one or more Batches.\n\nArgs:\n datasource_name: The name of the Datasource used to connect to the data.\n data_asset_name: The name of the Data Asset used to connect to the data.\n options: A dict that can be used to filter the batch groups associated with the Data Asset.\n The dict structure depends on the asset type. The available keys for dict can be obtained by\n calling DataAsset.batch_request_options.\n batch_slice: A python slice that can be used to filter the sorted batches by index.\n e.g. `batch_slice = \"[-5:]\"` will request only the last 5 batches after the options filter is applied.\n\nReturns:\n BatchRequest",
+ "description": "--Public API--A BatchRequest is the way to specify which data Great Expectations will validate.\n\nA Batch Request is provided to a Data Asset in order to create one or more Batches.\n\nArgs:\n datasource_name: The name of the Datasource used to connect to the data.\n data_asset_name: The name of the Data Asset used to connect to the data.\n options: A dict that can be used to filter the batch groups associated with the Data Asset.\n The dict structure depends on the asset type. The available keys for dict can be obtained by\n calling DataAsset.get_batch_request_options_keys(...).\n batch_slice: A python slice that can be used to filter the sorted batches by index.\n e.g. `batch_slice = \"[-5:]\"` will request only the last 5 batches after the options filter is applied.\n\nReturns:\n BatchRequest",
"type": "object",
"properties": {
"datasource_name": {