perf: optimize spark_hex dictionary path by avoiding dictionary expansion
#19832
base: main
Conversation
```rust
let encoded_values_array: ArrayRef = match encoded_values {
    ColumnarValue::Array(a) => a,
    ColumnarValue::Scalar(s) => Arc::new(s.to_array()?),
};
```
We should probably refactor `hex_encode_bytes` and `hex_encode_int64` to return arrays only: their signatures say they return `ColumnarValue`, but they never return the scalar variant, which forces handling like this at every call site.
```diff
 }
-DataType::Dictionary(_, value_type) => {
+DataType::Dictionary(_, _) => {
+    let dict = as_dictionary_array::<Int32Type>(&array);
```
nit: we should have some check that the dictionary has an `Int32` key type, otherwise this will panic
```diff
 let dict_values = dict.values();

-match **value_type {
+let encoded_values: ColumnarValue = match dict_values.data_type() {
```
We might want to consider arms for `LargeUtf8`, the view types, etc.
```sql
FROM VALUES ('foo'), ('bar'), ('foo'), (NULL), ('baz'), ('bar');

query T
SELECT hex(dict_col) FROM t_dict_utf8;
```
Can we check the output type here with `arrow_typeof` to ensure they are dictionaries?
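A hedged sketch of such a check in sqllogictest form; the expected result line is an assumption about what the output would be if dictionary encoding were preserved, not a verified result (the thread below finds the planner may unpack some dictionary inputs first):

```
query T
SELECT arrow_typeof(hex(dict_col)) FROM t_dict_utf8 LIMIT 1;
----
Dictionary(Int32, Utf8)
```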
After running the SLT tests for hex, it seems the planner may unpack dictionary-encoded inputs like `Dictionary(Int32, Utf8)` or `Dictionary(Int32, Int64)` into their underlying types (`Utf8View` or `Int64`) before calling the function. However, `Dictionary(Binary)` appears to stay as a dictionary:
```
logical_plan
01)Projection: arrow_typeof(hex(CAST(t_dict_utf8.dict_col AS Utf8View)))
physical_plan
01)ProjectionExec: expr=[arrow_typeof(hex(CAST(dict_col@0 AS Utf8View)))]
```

```
logical_plan
01)Projection: arrow_typeof(hex(t_dict_binary.dict_col))
physical_plan
01)ProjectionExec: expr=[arrow_typeof(hex(dict_col@0))]
```
I guess it's related to this issue
We can still push through with this PR even though it only works for binary (we can change the tests to binary here)
Thank you for the clarification. The tests have been updated to use `Dictionary(Binary)`.
Which issue does this PR close?
Follow-up to #19738.
Rationale for this change
The current hex implementation expands `DictionaryArray` inputs into a regular array, which loses the dictionary encoding and redundantly hex-encodes repeated values.

What changes are included in this PR?
Benchmark
Are these changes tested?
Yes. Existing unit tests and `sqllogictest` tests pass.

Are there any user-facing changes?
No.