* init
* init
* init structure
* disable imports
* adding structure for pyspark
* setting dependency
* update class import
* fixing datatype cls
* adding dtypes for pyspark
* keep only bool type
* remove pydantic schema
* register pyspark data types
* add check method
* updating equivalents to native types
* update column schema
* refactor array to column
* rename array to column_schema
* remove pandas imports
* remove index and multiindex functionality
* adding pydantic schema class
* adding model components
* add model config
* define pyspark BaseConfig class
* removing index and multi-indexes
* remove modify schema
* Pyspark backend components, base, container, accessor, test file for accessor
* Pyspark backend components, base, container, accessor, test file for accessor
* Pyspark backend components, base, container, accessor, test file for accessor
* Pyspark backend components, base, container, accessor, test file for accessor
* add pyspark model components and types
* remove hypothesis
* remove synthesis and hypothesis
* Pyspark backend components, base, container, accessor, test file for accessor
* test for pyspark dataframeschema class
* test schema with alias types
* ensure dataframes are treated as table types
* update container for pyspark dataframe
* adding negative test flow
* removing series and index on pyspark dataframes
* remove series
* revert series from pyspark.pandas
* adding checks for pyspark
* registering pysparkCheckBackend
* cleaning base
* Fixing the broken type cast check; schema validation fix
* define spark level schema
* fixing check flow
* setting apply fn
* add sub sample functionality
* adjusting test case against common attributes
* need apply for column level check
* adding builtin checks for pyspark
* adding checks for pyspark df
* getting check registered
* fixing a bug in error handling for schema check
* check_name validation fixed
* implementing dtype checks for pyspark
* updating error msg
* fixing dtype reason_code
* updating builtin checks for pyspark
* registration
* Implementation of checks import and spark columns information check
* enhancing __call__, checks classes and builtin_checks
* delete junk files
* Changes to fix the implementation of checks: changed the apply function to send a list with the dataframe and column name; the builtin function registers functions with lists which include the dataframe
* extending pyspark checks
* Fixed builtin check bug and added test for supported builtin checks for pyspark
* add todos
* by default validate all checks
* fixing issue with sqlctx
* add dtypes pytests
* setting up schema
* add negative and positive tests
* add fixtures and refactor tests
* generalize spark_df func
* refactor to use conftest
* use conftest
* add support for decimal dtype and fixing other types
* Added new datatype support for pyspark, test cases for pyspark dtypes, created test file for errors
* refactor ArraySchema
* rename array to column.py
* 1) Changed test cases to look for a summarised error raise instead of fast fail, since the default behaviour changed to summarised; 2) added functionality to accept and check the precision and scale in Decimal datatypes
* add neg test
* add custom ErrorHandler
* Added functionality to the DayTimeIntervalType datatype to accept parameters
* Added functionality to the DayTimeIntervalType datatype to accept parameters
* return summarized error report
* replace dataframe with dict for return obj
* Changed the checks input datatype from the existing list to a custom named tuple; also started changing the pyspark checks to include more datatypes (a hypothetical sketch of the named-tuple pattern follows this log)
* refactor
* introduce error categories
* rename error categories
* fixing bug in schema.dtype.check
* fixing error category to be dynamic
* Added checks for each datatype in test cases; reduced code redundancy in the test file; refactored the name of the custom datatype object for checks
* error_handler pass through
* add ErrorHandler to column api
* removed SchemaErrors since we now aggregate in ErrorHandler
* fixing dict keys
* Added decorator to raise TypeError in case of unexpected input type for the check function
* replace validator with report_errors
* cleaning debugs
* Support DataModels and Field
* Added decorator to raise TypeError in case of unexpected input type for the check function; merged with develop
* Fix to run using the class schema type
* use alias types
* clean up
* add new typing for pyspark.sql
* Added decorator to raise TypeError in case of unexpected input type for the check function; merged with develop
* Added changes to raise an error when a datatype is not supported by the check; added support for map and array types
* support bare dtypes for DataFrameModel
* remove resolved TODOs and breakpoints
* change to bare types
* use spark types instead of bare types
* use SchemaErrorReason instead of hardcoded values in container
* fixing an issue with error reason codes
* minor fix
* fixing checks and errors in pyspark
* Changes include the following: 1) updated dtypes test functionality to make it more readable; 2) changes in accessor tests to support the new functionality; 3) changes in the engine class to conform to the check class everywhere else
* enhancing dataframeschema and model classes
* Changes to remove the pandas dependency
* Refactoring of the checks test functions
* Fixing the breaking test case
* Isort and Black formatting
* Fix container test function failure
* Isort and black linting
* Changes to remove the pandas dependency
* Refactoring of the checks test functions
* Isort and black linting
* Added changes to refactor the checks class; fixes for some test case failures
* Removing breakpoint
* fixing raise error
* adding metadata dict
* Removing references to pandas from docstrings
* Removing redundant code block in utils
* Changes to return dataframe with errors property
* add accessor for ErrorHandler
* support errors access on pyspark.sql
* updating pyspark error test cases
* fixing model test cases
* adjusting errors to use pandera.errors
* use accessor instead of dict
* revert to develop
* Removal of imports which are not needed, and improved test case
* setting independent pyspark import
* pyspark imports
* revert comments
* store and retrieve metadata at schema levels
* adding metadata support
* Added changes to support parameter-based runs: 1) added parameters.yaml to hold the configurations; 2) added code in utility to read the config; 3) updated the test cases to support the parameter-based run; 4) moved pyspark decorators to a new file decorators.py in backend; 5) type fix in get_matadata property in container.py
* Changing the default value in config
* change to consistent interface
* cleaning api/pyspark
* backend and tests
* adding setter on errors accessors for pyspark
* reformatting error dict
* doc
* run black linter
  Signed-off-by: Niels Bantilan <niels.bantilan@gmail.com>
* fix lint
  Signed-off-by: Niels Bantilan <niels.bantilan@gmail.com>
* update pylintrc
  Signed-off-by: Niels Bantilan <niels.bantilan@gmail.com>

---------

Signed-off-by: Niels Bantilan <niels.bantilan@gmail.com>
Co-authored-by: jaskaransinghsidana <jaskaran_singh_sidana@mckinsey.com>
Co-authored-by: jaskaransinghsidana <112083212+jaskaransinghsidana@users.noreply.github.com>
Co-authored-by: Niels Bantilan <niels.bantilan@gmail.com>
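One item in the log above describes changing the checks input from a bare list to a custom named tuple. The actual definition is not shown on this page, so the following is a hypothetical sketch of that pattern; the type and field names are invented for illustration:

from typing import NamedTuple

from pyspark.sql import DataFrame


class PysparkCheckObject(NamedTuple):
    """Hypothetical container pairing the DataFrame under validation
    with the column a check targets."""

    dataframe: DataFrame
    column_name: str


# A check backend can now pass one self-describing object instead of a
# positional list like [df, "col"], which is what the log describes replacing.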
1 parent f401617 · commit 74be58c
Showing 83 changed files with 7,455 additions and 105 deletions.
@@ -0,0 +1,3 @@
# Params are case-sensitive; use only upper case
VALIDATION: ENABLE  # Supported values: ENABLE, DISABLE
DEPTH: SCHEMA_AND_DATA  # Supported values: SCHEMA_ONLY, DATA_ONLY, SCHEMA_AND_DATA
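The diff above does not show the utility that consumes these settings (the log only notes "added code in utility to read the config"), so here is a minimal, hypothetical sketch of how such a reader could look. It assumes PyYAML is installed and a parameters.yaml sits in the working directory; the function and enum names are invented for illustration:

from enum import Enum

import yaml  # PyYAML, assumed available


class ValidationDepth(Enum):
    """Mirrors the DEPTH values documented in the config comments above."""

    SCHEMA_ONLY = "SCHEMA_ONLY"
    DATA_ONLY = "DATA_ONLY"
    SCHEMA_AND_DATA = "SCHEMA_AND_DATA"


def load_validation_config(path: str = "parameters.yaml") -> dict:
    """Read the upper-case, case-sensitive keys shown in the diff above."""
    with open(path, encoding="utf-8") as fh:
        params = yaml.safe_load(fh) or {}
    return {
        "validation_enabled": params.get("VALIDATION", "ENABLE") == "ENABLE",
        "depth": ValidationDepth(params.get("DEPTH", "SCHEMA_AND_DATA")),
    }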
@@ -0,0 +1,136 @@
"""Custom accessor functionality for PySpark.Sql. | ||
""" | ||
|
||
import warnings | ||
from functools import wraps | ||
from typing import Optional, Union | ||
|
||
|
||
from pandera.api.pyspark.container import DataFrameSchema | ||
from pandera.api.pyspark.error_handler import ErrorHandler | ||
|
||
"""Register pyspark accessor for pandera schema metadata.""" | ||
|
||
|
||
Schemas = Union[DataFrameSchema] | ||
Errors = Union[ErrorHandler] | ||
|
||
|
||
# Todo Refactor to create a seperate module for panderaAccessor | ||
class PanderaAccessor: | ||
"""Pandera accessor for pyspark object.""" | ||
|
||
def __init__(self, pyspark_obj): | ||
"""Initialize the pandera accessor.""" | ||
self._pyspark_obj = pyspark_obj | ||
self._schema: Optional[Schemas] = None | ||
self._errors: Optional[Errors] = None | ||
|
||
@staticmethod | ||
def check_schema_type(schema: Schemas): | ||
"""Abstract method for checking the schema type.""" | ||
raise NotImplementedError | ||
|
||
def add_schema(self, schema): | ||
"""Add a schema to the pyspark object.""" | ||
self.check_schema_type(schema) | ||
self._schema = schema | ||
return self._pyspark_obj | ||
|
||
@property | ||
def schema(self) -> Optional[Schemas]: | ||
"""Access schema metadata.""" | ||
return self._schema | ||
|
||
@property | ||
def errors(self) -> Optional[Errors]: | ||
"""Access errors data.""" | ||
return self._errors | ||
|
||
@errors.setter | ||
def errors(self, value: dict): | ||
"""Set errors data.""" | ||
self._errors = value | ||
|
||
|
||
class CachedAccessor: | ||
""" | ||
Custom property-like object. | ||
A descriptor for caching accessors: | ||
:param name: Namespace that accessor's methods, properties, etc will be | ||
accessed under, e.g. "foo" for a dataframe accessor yields the accessor | ||
``df.foo`` | ||
:param cls: Class with the extension methods. | ||
For accessor, the class's __init__ method assumes that you are registering | ||
an accessor for one of ``Series``, ``DataFrame``, or ``Index``. | ||
""" | ||
|
||
def __init__(self, name, accessor): | ||
self._name = name | ||
self._accessor = accessor | ||
|
||
def __get__(self, obj, cls): | ||
if obj is None: # pragma: no cover | ||
return self._accessor | ||
accessor_obj = self._accessor(obj) | ||
object.__setattr__(obj, self._name, accessor_obj) | ||
return accessor_obj | ||
|
||
|
||
def _register_accessor(name, cls): | ||
""" | ||
Register a custom accessor on {class} objects. | ||
:param name: Name under which the accessor should be registered. A warning | ||
is issued if this name conflicts with a preexisting attribute. | ||
:returns: A class decorator callable. | ||
""" | ||
|
||
def decorator(accessor): | ||
if hasattr(cls, name): | ||
msg = ( | ||
f"registration of accessor {accessor} under name '{name}' for " | ||
"type {cls.__name__} is overriding a preexisting attribute " | ||
"with the same name." | ||
) | ||
|
||
warnings.warn( | ||
msg, | ||
UserWarning, | ||
stacklevel=2, | ||
) | ||
setattr(cls, name, CachedAccessor(name, accessor)) | ||
return accessor | ||
|
||
return decorator | ||
|
||
|
||
def register_dataframe_accessor(name): | ||
""" | ||
Register a custom accessor with a DataFrame | ||
:param name: name used when calling the accessor after its registered | ||
:returns: a class decorator callable. | ||
""" | ||
# pylint: disable=import-outside-toplevel | ||
from pyspark.sql import DataFrame | ||
|
||
return _register_accessor(name, DataFrame) | ||
|
||
|
||
class PanderaDataFrameAccessor(PanderaAccessor): | ||
"""Pandera accessor for pyspark DataFrame.""" | ||
|
||
@staticmethod | ||
def check_schema_type(schema): | ||
if not isinstance(schema, DataFrameSchema): | ||
raise TypeError( | ||
f"schema arg must be a DataFrameSchema, found {type(schema)}" | ||
) | ||
|
||
|
||
register_dataframe_accessor("pandera")(PanderaDataFrameAccessor) | ||
# register_series_accessor("pandera")(PanderaSeriesAccessor) |
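Once the module above is imported, its final register_dataframe_accessor("pandera")(...) call attaches a cached ``pandera`` namespace to every pyspark DataFrame. A minimal usage sketch follows; the module path of the accessor file and the ``Column`` import are assumptions based on the rest of this commit, not part of the diff shown above:

import pyspark.sql.types as T
from pyspark.sql import SparkSession

# Importing the module runs the registration call at its bottom; the path is
# hypothetical here -- adjust it to wherever the accessor file actually lives.
import pandera.api.pyspark.accessor  # noqa: F401
from pandera.api.pyspark.components import Column  # assumed import path
from pandera.api.pyspark.container import DataFrameSchema

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("a", 1)], ["letter", "number"])

schema = DataFrameSchema(
    {
        "letter": Column(T.StringType()),
        "number": Column(T.IntegerType()),
    }
)

# add_schema() stores the schema on the accessor and returns the DataFrame
# itself, so the call chains; the schema is then readable via df.pandera.schema.
df = df.pandera.add_schema(schema)
assert df.pandera.schema is schema

Note the design choice visible in CachedAccessor: the accessor object is constructed once per DataFrame and cached on the instance, so repeated ``df.pandera`` lookups do not rebuild it.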