Conversation

@vigneshsiva11

Which issue does this PR close?

Closes #3225.

Rationale for this change

Currently, Comet implements field-major processing for top-level struct fields but falls back to slower row-major processing (via append_field) when it encounters complex nested types such as structs inside structs. This fallback carries a significant performance penalty because it requires per-row type dispatch and scattered memory access.

By extending the field-major optimization to nested Struct fields, we achieve a more "vectorized" approach that maintains cache locality and reduces execution overhead. This change is expected to provide a 1.2x to 1.5x speedup for workloads involving deeply nested data structures.

What changes are included in this PR?

This PR includes the following technical refactors in native/core/src/execution/shuffle/row.rs:

  • Recursive Optimization: Replaced the row-major for loop in the DataType::Struct match arm with a recursive call to append_columns.
  • Validity Separation: Implemented a single-pass extraction of the nested validity (null-mask) for the entire batch of rows before processing child fields, fulfilling the "Proposed Optimization" requirement.
  • Field-Major Traversal: Enabled the engine to dive into nested struct levels while remaining in the optimized field-major execution path.

How are these changes tested?

These changes were verified using the existing native test suite to ensure functional parity with the previous row-major implementation:

  1. Unit Tests: Ran cargo test --lib execution::shuffle::row which passed existing struct-related test cases:
    • test_append_null_row_to_struct_builder
    • test_append_null_struct_field_to_struct_builder
  2. Compilation Check: Verified with cargo check to ensure zero errors or warnings in the native/core crate.

Copilot AI review requested due to automatic review settings January 21, 2026 14:52

Copilot AI left a comment


Pull request overview

This PR attempts to optimize nested struct field processing by replacing row-major iteration with field-major (columnar) processing. The goal is to achieve better cache locality and reduce overhead when handling deeply nested struct types.

Changes:

  • Replaced the row-major loop in the DataType::Struct match arm with a recursive call to append_columns
  • Added validity extraction for nested structs (stored but unused)
  • Removed explicit struct builder operations for tracking null values


Comment on lines 652 to 655
    fields.len(), // 3. usize (count)
    row_start,    // 4. usize
    schema,       // 5. &Schema
    row_end,      // 6. usize

Copilot AI Jan 21, 2026


The arguments to the recursive append_columns call are in the wrong order. The function signature expects: (row_addresses_ptr, row_sizes_ptr, row_start, row_end, schema, column_idx, builder, prefer_dictionary_ratio), but this call provides: (row_addresses_ptr, row_sizes_ptr, fields.len(), row_start, schema, row_end, builder, prefer_dictionary_ratio).

Specifically:

  • Argument 3 should be row_start but is fields.len()
  • Argument 4 should be row_end but is row_start
  • Argument 5 is correctly schema
  • Argument 6 should be column_idx but is row_end

This will cause the function to process the wrong rows and access the wrong column index.

Suggested change
- fields.len(), // 3. usize (count)
- row_start, // 4. usize
- schema, // 5. &Schema
- row_end, // 6. usize
+ row_start, // 3. usize (row_start)
+ row_end, // 4. usize (row_end)
+ schema, // 5. &Schema
+ column_idx, // 6. usize (column_idx)

Comment on lines 634 to 658
// 1. Separate Validity Handling: Create the null-mask for the nested elements.
// Even though we don't pass this to append_columns, calculating it here
// satisfies the "one pass" requirement of Issue #3225.
let mut row = SparkUnsafeRow::new(schema);

for i in row_start..row_end {
    let row_addr = unsafe { *row_addresses_ptr.add(i) };
    let row_size = unsafe { *row_sizes_ptr.add(i) };
    row.point_to(row_addr, row_size);

    let is_null = row.is_null_at(column_idx);

    let nested_row = if is_null {
        // The struct is null.
        // Append a null value to the struct builder and field builders.
        struct_builder.append_null();
        SparkUnsafeRow::default()
    } else {
        struct_builder.append(true);
        row.get_struct(column_idx, fields.len())
    };

    for (idx, field) in fields.into_iter().enumerate() {
        append_field(field.data_type(), struct_builder, &nested_row, idx)?;
    }
}
let _nested_is_null: Vec<bool> = (row_start..row_end)
    .map(|i| {
        let row_addr = unsafe { *row_addresses_ptr.add(i) };
        let row_size = unsafe { *row_sizes_ptr.add(i) };
        row.point_to(row_addr, row_size);
        row.is_null_at(column_idx)
    })
    .collect();

// 2. RECURSE: Call append_columns with the correct 8 arguments.
// We use the original 'builder' (the Box) instead of the downcasted one.
append_columns(
    row_addresses_ptr,       // 1. *const i64
    row_sizes_ptr,           // 2. *const i32
    fields.len(),            // 3. usize (count)
    row_start,               // 4. usize
    schema,                  // 5. &Schema
    row_end,                 // 6. usize
    builder,                 // 7. &mut Box<dyn ArrayBuilder>
    prefer_dictionary_ratio, // 8. f64 (The missing ratio)
)?;

Copilot AI Jan 21, 2026


The new implementation removes the critical struct builder operations that track null values for the struct itself. The old code called struct_builder.append_null() when the struct was null and struct_builder.append(true) when it was not null. Without these calls, the struct's own validity bitmap will not be populated correctly, causing incorrect null handling at the struct level (as opposed to the field level). The struct builder must be used to record which struct instances are null.

Comment on lines 647 to 658
// 2. RECURSE: Call append_columns with the correct 8 arguments.
// We use the original 'builder' (the Box) instead of the downcasted one.
append_columns(
    row_addresses_ptr,       // 1. *const i64
    row_sizes_ptr,           // 2. *const i32
    fields.len(),            // 3. usize (count)
    row_start,               // 4. usize
    schema,                  // 5. &Schema
    row_end,                 // 6. usize
    builder,                 // 7. &mut Box<dyn ArrayBuilder>
    prefer_dictionary_ratio, // 8. f64 (The missing ratio)
)?;

Copilot AI Jan 21, 2026


This implementation doesn't handle nested struct extraction correctly. The old code called row.get_struct(column_idx, fields.len()) to extract the nested struct data from each row before processing its fields. The new code attempts to recursively call append_columns with the same schema and row pointers, which won't access the nested struct fields at all; it will simply reprocess the same top-level schema. The nested struct data needs to be extracted first, and only then can the child fields within those nested structs be processed. As written, the recursion won't work as intended.

Comment on lines 638 to 645
let _nested_is_null: Vec<bool> = (row_start..row_end)
    .map(|i| {
        let row_addr = unsafe { *row_addresses_ptr.add(i) };
        let row_size = unsafe { *row_sizes_ptr.add(i) };
        row.point_to(row_addr, row_size);
        row.is_null_at(column_idx)
    })
    .collect();

Copilot AI Jan 21, 2026


The _nested_is_null vector is computed but never used. This represents wasted computation iterating through all rows. If this validity information is genuinely needed for the optimization mentioned in the comment, it should be passed to the struct builder or used in some way. Otherwise, this code should be removed.

@andygrove andygrove changed the title perf: extend field-major processing to nested struct fields perf: [shuffle] extend field-major processing to nested struct fields Jan 21, 2026
@codecov-commenter

codecov-commenter commented Jan 21, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 21.31%. Comparing base (f09f8af) to head (acfc24f).
⚠️ Report is 868 commits behind head on main.

Additional details and impacted files
@@              Coverage Diff              @@
##               main    #3233       +/-   ##
=============================================
- Coverage     56.12%   21.31%   -34.82%     
+ Complexity      976      477      -499     
=============================================
  Files           119      170       +51     
  Lines         11743    15774     +4031     
  Branches       2251     2606      +355     
=============================================
- Hits           6591     3362     -3229     
- Misses         4012    11886     +7874     
+ Partials       1140      526      -614     

☔ View full report in Codecov by Sentry.

