⚡️ Speed up function columns_equal by 17%
#37
📄 **17% (0.17x) speedup** for `columns_equal` in `datacompy/spark/sql.py`

⏱️ **Runtime:** 230 milliseconds → 197 milliseconds (best of 5 runs)

📝 **Explanation and details**
The optimized code achieves a roughly 17% speedup (230 ms → 197 ms) through three key optimizations that reduce computational overhead in PySpark operations:
**1. Optimized `_get_column_dtypes()` function.** The original looked up each dtype with `next()` over generator expressions, scanning the entire `dataframe.dtypes` list twice (O(n) for each column lookup). The optimized version converts `dataframe.dtypes` to a dictionary once, enabling O(1) lookups for both columns, as in the sketch below.
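A minimal sketch of the before/after lookup strategy, assuming a helper shaped like the `_get_column_dtypes()` named above (the exact signature in `datacompy/spark/sql.py` may differ):

```python
from pyspark.sql import DataFrame

def _get_column_dtypes(dataframe: DataFrame, col_1: str, col_2: str) -> tuple[str, str]:
    # Before: one O(n) scan of dataframe.dtypes per column, e.g.
    #   next(dtype for name, dtype in dataframe.dtypes if name == col_1)
    # After: build a name -> dtype mapping once, then do two O(1) lookups.
    dtype_map = dict(dataframe.dtypes)  # .dtypes is a list of (name, dtype) pairs
    return dtype_map[col_1], dtype_map[col_2]
```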
**2. Expression caching in numeric comparisons.** The original rebuilt expressions such as `col(col_1)`, `lit(abs_tol)`, and `isnan(col(col_1))` multiple times within the same logical plan. The optimized version caches them in local variables (`col1_expr`, `abs_tol_lit`, `nan_col1`, etc.) to avoid redundant Spark expression construction; see the sketch below.
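A hedged sketch of the caching pattern; the real datacompy comparison also handles relative tolerance and nulls, so this only illustrates how the cached expressions are reused in an absolute-tolerance check:

```python
from pyspark.sql import Column
from pyspark.sql import functions as F

def _numeric_equal_abs(col_1: str, col_2: str, abs_tol: float) -> Column:
    # Build each Spark expression once and reuse it, instead of calling
    # col()/lit()/isnan() again at every point in the comparison.
    col1_expr = F.col(col_1)
    col2_expr = F.col(col_2)
    abs_tol_lit = F.lit(abs_tol)
    nan_col1 = F.isnan(col1_expr)
    nan_col2 = F.isnan(col2_expr)

    # Equal when both values are NaN, or when neither is NaN and the
    # absolute difference falls within the tolerance.
    return (nan_col1 & nan_col2) | (
        ~nan_col1 & ~nan_col2 & (F.abs(col1_expr - col2_expr) <= abs_tol_lit)
    )
```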
**3. Consistent expression reuse in string comparisons.** The optimized version builds `col1_expr` and `col2_expr` once, then derives normalized versions (`c1n`, `c2n`) based on the transformation flags, avoiding repeated `col()` calls (sketched below).
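A minimal sketch of that reuse, assuming flags along the lines of the `ignore_spaces`/`ignore_case` options exercised by the test suite (the exact signature is an assumption; the `c1n`/`c2n` names and the single `col()` call per column come from the description above):

```python
from pyspark.sql import Column
from pyspark.sql import functions as F

def _string_equal(col_1: str, col_2: str,
                  ignore_spaces: bool, ignore_case: bool) -> Column:
    # Base expressions are built exactly once...
    col1_expr = F.col(col_1)
    col2_expr = F.col(col_2)

    # ...and every normalized variant is derived from them, so no
    # transformation path triggers an extra col() call.
    c1n, c2n = col1_expr, col2_expr
    if ignore_spaces:
        c1n, c2n = F.trim(c1n), F.trim(c2n)
    if ignore_case:
        c1n, c2n = F.upper(c1n), F.upper(c2n)

    # Columns match when both are null or the normalized values agree.
    return (col1_expr.isNull() & col2_expr.isNull()) | (c1n == c2n)
```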
**Performance benefits are amplified in the hot path.** Based on the function references, `columns_equal()` is called in a loop within `_intersect_compare()` for every shared column between the DataFrames. Since DataFrame comparisons often involve many columns, these micro-optimizations compound significantly across invocations (see the sketch below).

The optimization is particularly effective for workloads with DataFrames containing many columns, or for repeated comparisons, as the expression caching and O(1) dtype lookups provide consistent performance gains without changing the function's behavior or API.
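To make the hot-path shape concrete, here is a rough sketch of the call pattern described above; the loop body of the real `_intersect_compare()` is more involved, and the argument list passed to `columns_equal()` is illustrative rather than datacompy's actual signature:

```python
from pyspark.sql import DataFrame

from datacompy.spark.sql import columns_equal

def _intersect_compare(base_df: DataFrame, compare_df: DataFrame,
                       intersect_columns: list[str]):
    # columns_equal() runs once per shared column, so its per-call savings
    # (O(1) dtype lookups, cached expressions) scale with column count.
    return {
        column: columns_equal(base_df, compare_df, column, column)  # illustrative args
        for column in intersect_columns
    }
```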
✅ Correctness verification report:
⚙️ Existing Unit Tests and Runtime
- `test_spark/test_sql_spark.py::test_bad_date_columns`
- `test_spark/test_sql_spark.py::test_columns_equal_arrays`
- `test_spark/test_sql_spark.py::test_date_columns_equal`
- `test_spark/test_sql_spark.py::test_date_columns_equal_with_ignore_spaces`
- `test_spark/test_sql_spark.py::test_date_columns_equal_with_ignore_spaces_and_case`
- `test_spark/test_sql_spark.py::test_date_columns_unequal`
- `test_spark/test_sql_spark.py::test_decimal_columns_equal`
- `test_spark/test_sql_spark.py::test_decimal_columns_equal_rel`
- `test_spark/test_sql_spark.py::test_decimal_float_columns_equal`
- `test_spark/test_sql_spark.py::test_decimal_float_columns_equal_rel`
- `test_spark/test_sql_spark.py::test_infinity_and_beyond`
- `test_spark/test_sql_spark.py::test_numeric_columns_equal_abs`
- `test_spark/test_sql_spark.py::test_numeric_columns_equal_rel`
- `test_spark/test_sql_spark.py::test_rounded_date_columns`
- `test_spark/test_sql_spark.py::test_string_columns_equal`
- `test_spark/test_sql_spark.py::test_string_columns_equal_with_ignore_spaces`
- `test_spark/test_sql_spark.py::test_string_columns_equal_with_ignore_spaces_and_case`

To edit these changes, run `git checkout codeflash/optimize-columns_equal-mi68u9sg` and push.