[MAINTENANCE] Performance improvement refactor for Spark unexpected values #3368
Conversation
…mn_map_condition_values where unexpected values no longer returns identical set of values every run.
…ExecutionEngine_performance
…ExecutionEngine_performance
…mn_map_condition_values where unexpected values no longer returns identical set of values every run (#3368).
…e_performance' of github.com:great-expectations/great_expectations into working-branch/DEVREL-154/improve_SparkDFExecutionEngine_performance
The code LGTM!
Approved after further discussion.
… sorting results (#3368).
…ExecutionEngine_performance
@@ -213,7 +213,7 @@
 },{
   "title": "unexpected_values_exact_match_out_without_unexpected_index_list",
   "exact_match_out" : true,
-  "suppress_test_for": ["pandas"],
+  "only_for": ["sqlalchemy"],
`"exact_match_out" : true` will no longer work for Spark, since we are no longer sorting results.
…e_performance' of github.com:great-expectations/great_expectations into working-branch/DEVREL-154/improve_SparkDFExecutionEngine_performance
@@ -2429,7 +2425,6 @@ def _spark_column_map_condition_value_counts(

    result_format = metric_value_kwargs["result_format"]

    filtered = data.filter(F.col("__unexpected") == True).drop(F.col("__unexpected"))
@NathanFarmer I believe that we can leave the `filtered =` statement where it was (because an exception could be raised earlier, so there is no need to have it before that point). Would you agree or not? Thanks.
@alexsherstinsky Sure, we can leave this line where it is, but then I would advocate to move lines 2408-2409 down below as well for consistency.
@NathanFarmer The way I am seeing it now seems good. Thank you.
@@ -2585,7 +2562,7 @@ def _spark_multicolumn_map_condition_values(
 ):
     """Return values from the specified domain that match the map-style metric in the metrics dictionary."""
     (
-        boolean_mapped_unexpected_values,
+        unexpected_condition,
@NathanFarmer Could we please keep this variable named `boolean_mapped_unexpected_values`, as it was before? (I was following the previous style in such methods across all execution engines and would like to keep it consistent, unless you have a strong reason for changing it now.) Thanks!
The reason for changing this was that metrics are not returning booleans in all cases. Cases that return a window function would fail for the 1-line solution `filtered = df.filter(boolean_mapped_unexpected_values)`. The 2-line solution using `withColumn` creates a new boolean-mapped column from this variable. You can see how I left it alone on line 2504, because it actually is a boolean mapping for all cases there.
@NathanFarmer I cannot find all the line numbers involved (the UI here is confusing, but I follow your logic). Thank you! P.S.: Should we standardize all cases to use `unexpected_condition`? Or do we want to preserve a special spot and variable name for the situations where it will always be strictly boolean-mapped? Thoughts welcome. Thank you.
@alexsherstinsky If we were to standardize all cases to use `unexpected_condition`, that would future-proof metrics that return a window function. The tradeoff is that using `boolean_mapped_unexpected_values` with `df.filter` (the 1-line solution) is measurably faster where it is possible. I would propose that we leave `boolean_mapped_unexpected_values` as-is in places where the change is not currently needed; if a new metric requires it to be changed, that can be done at that time.
@NathanFarmer Perhaps we need to clean up our code? Can you please help us out here -- I see in Pandas, SQL, and Spark a pattern that in some cases defines `unexpected_condition` and in other cases defines `boolean_mapped_unexpected_values` as the variable name, but ultimately these variables are used the same way: in a `WHERE`-type clause. So are we simply misnaming this variable in one case for Spark, because it is truly an (unexpected) condition and not merely boolean-mapped (unexpected) values? If this is correct, then please go ahead and fix it as you deem appropriate. Thanks so much!
@alexsherstinsky It is correct that the metric always returns an `unexpected_condition` for Spark. So, for semantic correctness, I have changed the one remaining use of the Spark `boolean_mapped_unexpected_values` to `unexpected_condition`. In most Spark cases `unexpected_condition` can be passed directly to a `WHERE` clause, except in the case of a window function. I went ahead and changed:

    data = df.filter(unexpected_condition)

to:

    data = df.withColumn("__unexpected", unexpected_condition)
    filtered = data.filter(F.col("__unexpected") == True).drop(F.col("__unexpected"))

for consistency in this single case.
@NathanFarmer Thanks -- at least it is consistent, and we can revise later. We now need to solve the remaining problem: the equivalency of results, when the sort order is not enforced. If we can do this, then it is a big gain!
@@ -1930,6 +1931,14 @@ def check_json_test_result(test, result, data_asset=None):
    elif key == "unexpected_list":
        # check if value can be sorted; if so, sort so arbitrary ordering of results does not cause failure
        if (isinstance(value, list)) & (len(value) >= 1):
+           # dictionary handling isn't implemented in great_expectations.core.data_context_key.__lt__
@NathanFarmer I ❤️ the idea here; I have two questions about it:

- Would it be better to follow up on your comment and update `great_expectations.core.data_context_key.__lt__`, or not?
- If we must do this "consistent sort" here, is there a strong justification for using `itemgetter` instead of just `lambda`? If at all possible, I would prefer the `lambda` (for simplicity).

Thank you! I love how this idea absolves us of the need for the computationally prohibitive `row_number()`!
I clarified the `__lt__` comment (it is actually looking at the built-in Python class that is passed). I also changed `itemgetter` to `lambda`.
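The check being discussed can be illustrated in plain Python. This is a minimal sketch, not the actual test-harness code: the helper name `sort_unexpected` and the dictionary sort key are made up for the example; only the `__lt__`-returns-`NotImplemented` probe mirrors the PR.

```python
def sort_unexpected(value):
    # Scalars (int, str, ...) implement __lt__ and can be sorted directly;
    # dicts return NotImplemented from __lt__, so they need a custom key.
    if type(value[0].__lt__(value[0])) != type(NotImplemented):
        return sorted(value)
    # e.g. dictionaries: sort on a deterministic view of their items instead
    return sorted(value, key=lambda d: sorted(d.items()))

print(sort_unexpected([3, 1, 2]))             # [1, 2, 3]
print(sort_unexpected([{"b": 2}, {"a": 1}]))  # [{'a': 1}, {'b': 2}]
```

The `lambda` key plays the role `itemgetter` would have: it gives `sorted` a comparable stand-in for each otherwise-unorderable dict.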
@@ -213,7 +213,7 @@
 },{
   "title": "unexpected_values_exact_match_out_without_unexpected_index_list",
   "exact_match_out" : true,
-  "suppress_test_for": ["pandas"],
+  "only_for": ["sqlalchemy"],
@NathanFarmer I feel that we must make it (or an equivalent, additional test) work for Spark. The existing test excludes Pandas, because `unexpected_index_list` only applies to Pandas; however, the functionality must work for the other execution engines. Since you introduced the "force-sorting" of both sides of the assertion in `self_check/util.py`, why would the existing test not work? To me, having this test is critical. Thank you!
In `self_check/util.py#L1882`, when `test["exact_match_out"] is True`, all of the test sort logic is foregone. There is no analog to this test that would work that I can think of.
Note: There is a test above this one for pandas. It is possible that, if we sort the data in the test harness, it will work for spark, but I don't think even that is guaranteed.
@NathanFarmer I feel that we should discuss this in order to find a solution for testing the functionality for the Spark engine. The reasoning is that the addition of `row_number()` and sorting on it was key to making Spark work. However, I believe you are pointing out that the sort order of the results is the only difference, the output otherwise being as expected. If that is the case, then we need to figure out how to show it in a test. If I am not understanding this correctly, then we might have to disable the expectation for Spark if it is qualitatively wrong without the use of `row_number()` (and incurring the corresponding performance penalty). Thank you!
@alexsherstinsky I was able to make changes to `self_check/util.py` to address this and reverted this test back to its original state. Please let me know your thoughts on the code, but the short version is that we are now sorting `unexpected_values` and `partial_unexpected_values` on Spark dataframes whether `test["exact_match_out"]` is `True` or `False`.
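In test-harness terms, the idea is to sort both the expected and the actual value lists before asserting equality, so that an engine-dependent row order cannot fail an otherwise-correct result. This is a minimal sketch with made-up values and a hypothetical helper name, not the actual `self_check/util.py` code:

```python
def assert_unexpected_values_match(expected, actual):
    # Sort both sides so Spark's nondeterministic row order
    # cannot cause a spurious mismatch on equal multisets.
    assert sorted(expected) == sorted(actual)

expected = ["a", "b", "c"]
# Spark may return the same unexpected values in a different order each run.
actual_run_1 = ["c", "a", "b"]
assert_unexpected_values_match(expected, actual_run_1)  # passes
```

Sorting both sides is what removes the need for an expensive `row_number()`-based ordering inside the engine itself.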
        .drop(F.col("__unexpected"))
        .drop(F.col("__row_number"))
    )
    filtered = df.filter(boolean_mapped_unexpected_values)
@NathanFarmer Can we please preserve the pattern of having `data = df.withColumn("__unexpected", boolean_mapped_unexpected_values)` first, followed by the filter on `F.col("__unexpected")`, for consistency and readability purposes (plus for potential extensibility needs)? Thank you!
See my other comments about that pattern being measurably slower. We can discuss the tradeoffs in our face-to-face testing discussion.
@@ -1930,7 +1930,19 @@ def check_json_test_result(test, result, data_asset=None):
    elif key == "unexpected_list":
        # check if value can be sorted; if so, sort so arbitrary ordering of results does not cause failure
        if (isinstance(value, list)) & (len(value) >= 1):
            if type(value[0].__lt__(value[0])) != type(NotImplemented):
                # __lt__ is not implemented for python dictionaries making sorting trickier
❤️
@NathanFarmer I added some more comments and would propose that you and I discuss. I think that there is only one open issue remaining, albeit an important one. Thank you!
@@ -1879,7 +1903,32 @@ def evaluate_json_test_cfe(validator, expectation_type, test):

def check_json_test_result(test, result, data_asset=None):
    # Check results
    if test["exact_match_out"] is True:
        # For Spark we cannot guarantee the order in which values are returned, so we sort for testing purposes
@NathanFarmer Naive thought: Why not sort for all backends? I do not see detriment in pre-sorting in a well-defined order these unexpected values for all backends indiscriminately. What do you think? Thanks!
@@ -213,7 +213,7 @@
 },{
   "title": "unexpected_values_exact_match_out_without_unexpected_index_list",
   "exact_match_out" : true,
-  "only_for": ["sqlalchemy"],
+  "suppress_test_for": ["pandas"],
@NathanFarmer Unfortunately, GitHub is still showing me the apparently incorrectly changed code, so reviewing the affected modules via screen sharing with the focus on what had to be changed would be great. Thanks!
…her exact_match_out True or False (#3368).
…e_performance' of github.com:great-expectations/great_expectations into working-branch/DEVREL-154/improve_SparkDFExecutionEngine_performance
…ExecutionEngine_performance
LGTM -- Thank you very much!
Changes proposed in this pull request:

- `expect_column_values_to_not_be_null` with 4.5 GB of local filesystem data reduced total runtime by 40%

Definition of Done

Please delete options that are not relevant.