Branch 53#100

Closed
jayshrivastava wants to merge 492 commits into branch-52 from branch-53

Conversation

@jayshrivastava

Which issue does this PR close?

  • Closes #.

Rationale for this change

What changes are included in this PR?

Are these changes tested?

Are there any user-facing changes?

dependabot Bot and others added 30 commits February 3, 2026 20:06
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.11.0 to 1.11.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/bytes/releases">bytes's
releases</a>.</em></p>
<blockquote>
<h2>Bytes v1.11.1</h2>
<h1>1.11.1 (February 3rd, 2026)</h1>
<ul>
<li>Fix integer overflow in <code>BytesMut::reserve</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md">bytes's
changelog</a>.</em></p>
<blockquote>
<h1>1.11.1 (February 3rd, 2026)</h1>
<ul>
<li>Fix integer overflow in <code>BytesMut::reserve</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/tokio-rs/bytes/commit/417dccdeff249e0c011327de7d92e0d6fbe7cc43"><code>417dccd</code></a>
Release bytes v1.11.1 (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/820">#820</a>)</li>
<li><a
href="https://github.com/tokio-rs/bytes/commit/d0293b0e35838123c51ca5dfdf468ecafee4398f"><code>d0293b0</code></a>
Merge commit from fork</li>
<li>See full diff in <a
href="https://github.com/tokio-rs/bytes/compare/v1.11.0...v1.11.1">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=bytes&package-manager=cargo&previous-version=1.11.0&new-version=1.11.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/apache/datafusion/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
## Rationale for this change

We have the following flow in our DataFusion-based project: create a base
`SessionStateBuilder` and then, when a new user session is created, use it
to build a session state. As `build(...)` consumes `self`, it would be
good to have `Clone` on `SessionStateBuilder`, which this patch adds.
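The build-from-a-shared-base pattern described above can be sketched as follows. This is a minimal illustration with a hypothetical two-field builder, not DataFusion's actual `SessionStateBuilder`:

```rust
// Hypothetical builder illustrating the pattern; the real
// SessionStateBuilder has many more fields.
#[derive(Clone)]
struct SessionStateBuilder {
    batch_size: usize,
    target_partitions: usize,
}

struct SessionState {
    batch_size: usize,
    target_partitions: usize,
}

impl SessionStateBuilder {
    fn new() -> Self {
        Self { batch_size: 8192, target_partitions: 4 }
    }

    // build() consumes self, which is why Clone is needed to reuse the base.
    fn build(self) -> SessionState {
        SessionState {
            batch_size: self.batch_size,
            target_partitions: self.target_partitions,
        }
    }
}

fn main() {
    let base = SessionStateBuilder::new();
    // Each new user session builds from a clone; the base survives.
    let s1 = base.clone().build();
    let s2 = base.clone().build();
    assert_eq!(s1.batch_size, 8192);
    assert_eq!(s2.target_partitions, 4);
}
```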
## Which issue does this PR close?

- Closes apache#20109

## Rationale for this change

see issue apache#20109

## What changes are included in this PR?

1. Remap parent filter expressions: When a FilterExec has a projection,
remap unsupported parent filter expressions from output schema
coordinates to input schema coordinates using `reassign_expr_columns()`
before combining them with the current filter's predicates.

2. Preserve projection: When creating the merged FilterExec, preserve
the original projection instead of discarding it.
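The index remapping in step 1 can be illustrated in isolation. `reassign_expr_columns()` operates on whole expression trees; the sketch below (with illustrative names) shows only the core mapping from output-schema coordinates to input-schema coordinates through a projection:

```rust
// A FilterExec projection maps input columns to output columns, e.g.
// projection [2, 0] means output col 0 = input col 2, output col 1 = input col 0.
// A parent filter expression referencing output column i must be remapped to
// input column projection[i] before it is combined with this filter's
// predicates, which are expressed against the input schema.
fn remap_output_to_input(output_idx: usize, projection: &[usize]) -> usize {
    projection[output_idx]
}

fn main() {
    let projection = [2usize, 0];
    // A parent expression on output column 1 really refers to input column 0.
    assert_eq!(remap_output_to_input(1, &projection), 0);
    assert_eq!(remap_output_to_input(0, &projection), 2);
}
```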

## Are these changes tested?

Yes, test cases were added.

## Are there any user-facing changes?

---------

Co-authored-by: Adrian Garcia Badaracco <1755071+adriangb@users.noreply.github.com>
## Which issue does this PR close?

<!--
We generally require a GitHub issue to be filed for all bug fixes and
enhancements and this helps us generate change logs for our releases.
You can link an issue to this PR using the GitHub syntax. For example
`Closes apache#123` indicates that this PR will close issue apache#123.
-->

N/A

## Rationale for this change

<!--
Why are you proposing this change? If this is already explained clearly
in the issue then this section is not needed.
Explaining clearly why changes are proposed helps reviewers understand
your changes and offer better suggestions for fixes.
-->

Removing dead code and remove functions from public API.

## What changes are included in this PR?

<!--
There is no need to duplicate the description in the issue here but it
is sometimes worth providing a summary of the individual changes in this
PR.
-->

See comments.

## Are these changes tested?

<!--
We typically require tests for all PRs in order to:
1. Prevent the code from being accidentally broken by subsequent changes
2. Serve as another way to document the expected behavior of the code

If tests are not included in your PR, please explain why (for example,
are they covered by existing tests)?
-->

Existing tests.

## Are there any user-facing changes?

<!--
If there are user-facing changes then we may require documentation to be
updated before approving the PR.
-->

Yes, some functions removed from public API, but they likely weren't
intended to be in our public API.

<!--
If there are any breaking changes to public APIs, please add the `api
change` label.
-->
## Which issue does this PR close?


- Closes apache#18943

## Rationale for this change


Originally:

> As pointed out by @martin-g, even though we plan to remove `NUMERICS`
(see apache#18092) we should probably add f16 first so we don't conflate
adding new functionality with refactoring changes.

Updated:

> apache#19727 removes `NUMERICS` for us, which surfaced a bug where f16
wasn't being coerced to f64. Turns out we didn't have f16 support in the
logic calculating the potential coercions. Fixing this so f16 input to a
signature expected f64 is now allowed and coerced.

## What changes are included in this PR?


Support coercion of f16 to f64 as specified by signature.

Add tests for regr, percentile & covar functions.

## Are these changes tested?


Added tests.

## Are there any user-facing changes?


No.


---------

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
## Which issue does this PR close?

- Closes apache#20080

## Rationale for this change


## What changes are included in this PR?

- Decimal support
- SLT Tests for Floor preimage

## Are these changes tested?

- Unit Tests
- SLT Tests


## Are there any user-facing changes?

No



---------

Co-authored-by: Devanshu <devanshu@codapayments.com>
## Which issue does this PR close?


- Part of apache/datafusion-comet#2986

## Rationale for this change

`to_hex` (used by benchmark items `hex_int` / `hex_long`) previously
routed evaluation through `make_scalar_function(..., vec![])`, which
converts scalar inputs into size‑1 arrays before execution. This adds
avoidable overhead for constant folding / scalar evaluation.


## What changes are included in this PR?

- Add match-based scalar fast path for integer scalars:
  - `Int8/16/32/64` and `UInt8/16/32/64`
- Remove `make_scalar_function(..., vec![])` usage

| Type | Before | After | Speedup |
|------|--------|-------|---------|
| `to_hex/scalar_i32` | 270.73 ns | 86.676 ns | **3.12x** |
| `to_hex/scalar_i64` | 254.71 ns | 89.254 ns | **2.85x** |
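The scalar fast path can be sketched as below. This is an illustration of the idea, not the actual `to_hex` implementation; whether the function uses upper- or lowercase hex and how it handles negatives is not specified here, so the formatting convention is an assumption:

```rust
// Sketch of a scalar fast path: format the integer directly instead of
// wrapping it in a one-element array and running the array kernel.
fn to_hex_scalar_i64(v: i64) -> String {
    // Assumption: uppercase hex of the two's-complement value.
    format!("{v:X}")
}

// The array path, for comparison: one allocation per element plus the
// array wrapper that make_scalar_function(..., vec![]) would construct
// around a scalar input.
fn to_hex_array(values: &[i64]) -> Vec<String> {
    values.iter().map(|v| format!("{v:X}")).collect()
}

fn main() {
    // Fast path: no size-1 array construction.
    assert_eq!(to_hex_scalar_i64(255), "FF");
    // The array path produces the same value, with more overhead.
    assert_eq!(to_hex_array(&[255])[0], "FF");
}
```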


## Are these changes tested?

Yes


## Are there any user-facing changes?

No


Adding a test that forces a RepartitionExec in the final plan.
Our goal will be to get the `get_field(...)` expression pushed through
the RepartitionExec into the DataSourceExec
## Which issue does this PR close?

- N/A.

## Rationale for this change

Make the `serialize_to_file` substrait test work on different platforms.

## What changes are included in this PR?

- Updated `serialize_to_file`.

## Are these changes tested?

Yes.

## Are there any user-facing changes?

No.
…ss costly) (apache#19893)

## Which issue does this PR close?


- Closes apache#19852

## Rationale for this change

Improve performance of query planning and plan state reset by making
node clones cheap.

## What changes are included in this PR?

- Store projection as `Option<Arc<[usize]>>` instead of
`Option<Vec<usize>>` in `FilterExec`, `HashJoinExec`,
`NestedLoopJoinExec`.
- Store exprs as `Arc<[ProjectionExpr]>` instead of Vec in
`ProjectionExprs`.
- Store arced aggregation, filter, group by expressions within
`AggregateExec`.
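The motivation for `Arc<[usize]>` over `Vec<usize>` is that cloning the former only bumps a reference count, while cloning the latter copies the whole buffer. A minimal demonstration:

```rust
use std::sync::Arc;

fn main() {
    // Vec clone copies the buffer; Arc<[usize]> clone only bumps a refcount.
    let as_vec: Vec<usize> = (0..1000).collect();
    let as_arc: Arc<[usize]> = as_vec.into();

    let arc_clone = Arc::clone(&as_arc);
    // Both handles point at the same allocation: no data was copied.
    assert!(std::ptr::eq(as_arc.as_ptr(), arc_clone.as_ptr()));
    assert_eq!(arc_clone.len(), 1000);
    assert_eq!(Arc::strong_count(&as_arc), 2);
}
```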
## Which issue does this PR close?


- Closes apache#20011.

## Rationale for this change

- `dict_id` is intentionally not preserved in protobuf (it’s deprecated
in Arrow schema metadata), but Arrow IPC still requires dict IDs for
dictionary encoding/decoding.


## What changes are included in this PR?

- Fix protobuf serde for nested ScalarValue (list/struct/map) containing
dictionary arrays by using Arrow IPC’s dictionary handling correctly.
- Seed DictionaryTracker by encoding the schema before encoding the
nested scalar batch.
- On decode, reconstruct an IPC schema from the protobuf schema and use
arrow_ipc::reader::read_dictionary to build dict_by_id before reading
the record batch.


## Are these changes tested?

Yes, a test was added for this.


## Are there any user-facing changes?

No



---------

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
## Are these changes tested?

`cargo fmt`

## Are there any user-facing changes?

No
## Which issue does this PR close?


- Closes apache#20075.

## Rationale for this change

The previous implementation of `array_repeat` relied on Arrow defaults
when handling null and negative count values. As a result, null counts
were implicitly treated as zero and returned empty arrays, which is a
correctness issue.

This PR makes the handling of these edge cases explicit and aligns the
function with SQL null semantics.
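The intended semantics can be sketched with plain Rust types standing in for Arrow arrays. The null-count behavior below matches what this PR describes; treating a negative count as an empty array is an assumption, since the PR only says negative counts are now handled explicitly:

```rust
// Illustrative model of array_repeat semantics; names and types are
// stand-ins, not DataFusion's actual kernel.
fn array_repeat(element: i64, count: Option<i64>) -> Option<Vec<i64>> {
    match count {
        // SQL null semantics: a NULL count yields a NULL list,
        // not an empty one (the old Arrow-default behavior).
        None => None,
        // Assumption: negative or zero counts produce an empty array.
        Some(n) if n <= 0 => Some(vec![]),
        Some(n) => Some(vec![element; n as usize]),
    }
}

fn main() {
    assert_eq!(array_repeat(7, Some(3)), Some(vec![7, 7, 7]));
    assert_eq!(array_repeat(7, Some(-1)), Some(vec![]));
    // Previously this returned an empty array; now it is NULL.
    assert_eq!(array_repeat(7, None), None);
}
```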


## What changes are included in this PR?

- Explicit handling of null and negative count values
- Planner-time coercion of the count argument to `Int64`


## Are these changes tested?


Yes, SLTs added and pass.

## Are there any user-facing changes?

Yes. When the count value is null, `array_repeat` now returns a null
array instead of an empty array.



---------

Co-authored-by: Martin Grigorov <martin-g@users.noreply.github.com>
Co-authored-by: Jeffrey Vo <jeffrey.vo.australia@gmail.com>
## Which issue does this PR close?

- Closes apache#20103

## Rationale for this change

A refactoring PR supporting the performance-improvement PRs for `left`
(apache#19749) and `right` (apache#20068).

## What changes are included in this PR?

1. Removed a lot of code duplication by extracting a common StringArray
/ StringView implementation. The `left` and `right` UDF entry points are
now leaner. The only difference, slicing from the left or from the
right, is implemented via a generic trait parameter, following the
design of trim.

2. Switched `left` to use `make_view` to avoid buffer tinkering in
DataFusion code.

3. Combined the `left` and `right` benches.
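The generic-trait-parameter idea in item 1 can be sketched like this. Names are illustrative, not the actual DataFusion trait; the real implementation works over StringArray/StringView buffers rather than `&str`:

```rust
// One generic driver; the left/right difference is isolated in a trait,
// similar in spirit to how trim is structured.
trait SliceSide {
    fn slice(s: &str, n: usize) -> &str;
}

struct FromLeft;
struct FromRight;

impl SliceSide for FromLeft {
    // Take the first n characters (respecting char boundaries).
    fn slice(s: &str, n: usize) -> &str {
        match s.char_indices().nth(n) {
            Some((idx, _)) => &s[..idx],
            None => s,
        }
    }
}

impl SliceSide for FromRight {
    // Take the last n characters.
    fn slice(s: &str, n: usize) -> &str {
        let skip = s.chars().count().saturating_sub(n);
        match s.char_indices().nth(skip) {
            Some((idx, _)) => &s[idx..],
            None => s,
        }
    }
}

// A single implementation serves both UDF entry points.
fn apply<S: SliceSide>(values: &[&str], n: usize) -> Vec<String> {
    values.iter().map(|s| S::slice(s, n).to_string()).collect()
}

fn main() {
    assert_eq!(apply::<FromLeft>(&["hello"], 3), vec!["hel"]);
    assert_eq!(apply::<FromRight>(&["hello"], 3), vec!["llo"]);
}
```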

## Are these changes tested?

- Existing unit tests
- Existing SLTs passed
- Benches show the same performance improvement of 60-85%

Bench results against pre-optimisation commit
458b491:
<details>
left size=1024/string_array positive n/1024
                        time:   [34.150 µs 34.694 µs 35.251 µs]
change: [−71.694% −70.722% −69.818%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) high mild
left size=1024/string_array negative n/1024
                        time:   [30.860 µs 31.396 µs 31.998 µs]
change: [−85.846% −85.294% −84.759%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  2 (2.00%) low mild
  4 (4.00%) high mild
  2 (2.00%) high severe

left size=4096/string_array positive n/4096
                        time:   [112.19 µs 114.28 µs 116.98 µs]
change: [−71.673% −70.934% −70.107%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
  2 (2.00%) high mild
  1 (1.00%) high severe
left size=4096/string_array negative n/4096
                        time:   [126.71 µs 129.06 µs 131.26 µs]
change: [−84.204% −83.809% −83.455%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
  3 (3.00%) low mild
  2 (2.00%) high mild

left size=1024/string_view_array positive n/1024
                        time:   [30.249 µs 30.887 µs 31.461 µs]
change: [−75.288% −74.499% −73.743%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
  3 (3.00%) low mild
  1 (1.00%) high mild
left size=1024/string_view_array negative n/1024
                        time:   [48.404 µs 49.007 µs 49.608 µs]
change: [−66.827% −65.727% −64.652%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
  1 (1.00%) low mild
  1 (1.00%) high mild
  1 (1.00%) high severe

left size=4096/string_view_array positive n/4096
                        time:   [145.25 µs 148.47 µs 151.85 µs]
change: [−68.913% −67.836% −66.770%] (p = 0.00 < 0.05)
                        Performance has improved.
left size=4096/string_view_array negative n/4096
                        time:   [203.11 µs 206.31 µs 209.98 µs]
change: [−57.411% −56.773% −56.142%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 15 outliers among 100 measurements (15.00%)
  1 (1.00%) low mild
  13 (13.00%) high mild
  1 (1.00%) high severe

right size=1024/string_array positive n/1024
                        time:   [30.820 µs 31.674 µs 32.627 µs]
change: [−84.230% −83.842% −83.402%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
  5 (5.00%) high mild
right size=1024/string_array negative n/1024
                        time:   [32.434 µs 33.170 µs 33.846 µs]
change: [−88.796% −88.460% −88.164%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
  3 (3.00%) high mild

right size=4096/string_array positive n/4096
                        time:   [124.71 µs 126.54 µs 128.27 µs]
change: [−83.321% −82.902% −82.537%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high mild
right size=4096/string_array negative n/4096
                        time:   [125.05 µs 127.67 µs 130.35 µs]
change: [−89.376% −89.193% −89.004%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) high mild

right size=1024/string_view_array positive n/1024
                        time:   [29.110 µs 29.608 µs 30.141 µs]
change: [−79.807% −79.330% −78.683%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  6 (6.00%) high mild
  2 (2.00%) high severe
right size=1024/string_view_array negative n/1024
                        time:   [44.883 µs 45.656 µs 46.511 µs]
change: [−71.157% −70.546% −69.874%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
  5 (5.00%) high mild
  1 (1.00%) high severe

right size=4096/string_view_array positive n/4096
                        time:   [139.57 µs 142.18 µs 144.96 µs]
change: [−75.610% −75.088% −74.549%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high severe
right size=4096/string_view_array negative n/4096
                        time:   [221.47 µs 224.47 µs 227.72 µs]
change: [−64.625% −64.047% −63.504%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
  3 (3.00%) high mild

</details>


## Are there any user-facing changes?


…0116)

## Which issue does this PR close?


- Closes apache#20046 .

## Rationale for this change

Spark `sha2` currently evaluates scalars via
`make_scalar_function(sha2_impl, vec![])`, which expands scalar inputs
to size-1 arrays before execution. This adds avoidable overhead for
scalar evaluation / constant folding scenarios.

In addition, the existing digest-to-hex formatting uses `write!(&mut s,
"{b:02x}")` in a loop, which is significantly slower than a LUT-based
hex encoder.


## What changes are included in this PR?

This PR adds:

1. A match-based scalar fast path for `sha2` to avoid scalar→array
expansion.
2. A faster LUT-based hex encoder to replace `write!` formatting.
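The LUT-based encoder replaces a per-byte `write!(&mut s, "{b:02x}")` with two table lookups per byte. A self-contained sketch of the technique (not the exact DataFusion code):

```rust
// Lookup table: one entry per nibble value.
const HEX: &[u8; 16] = b"0123456789abcdef";

fn hex_encode(digest: &[u8]) -> String {
    let mut out = Vec::with_capacity(digest.len() * 2);
    for &b in digest {
        // High nibble, then low nibble: two lookups, no formatting machinery.
        out.push(HEX[(b >> 4) as usize]);
        out.push(HEX[(b & 0x0f) as usize]);
    }
    // Safe: the LUT only emits ASCII bytes.
    String::from_utf8(out).unwrap()
}

fn main() {
    assert_eq!(hex_encode(&[0x00, 0xff, 0x1a]), "00ff1a");
}
```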

| Benchmark | Before | After | Speedup |
|----------|--------|-------|---------|
| `sha2/scalar/size=1` | 1.0408 µs | 339.29 ns | **~3.07x** |
| `sha2/array_binary_256/size=1024` | 604.13 µs | 295.09 µs | **~2.05x** |
| `sha2/array_binary_256/size=4096` | 2.3508 ms | 1.2095 ms | **~1.94x** |
| `sha2/array_binary_256/size=8192` | 4.5192 ms | 2.2826 ms | **~1.98x** |


## Are these changes tested?

Yes


## Are there any user-facing changes?

No



---------

Co-authored-by: Jeffrey Vo <jeffrey.vo.australia@gmail.com>
Co-authored-by: Martin Grigorov <martin-g@users.noreply.github.com>
Co-authored-by: Oleks V <comphead@users.noreply.github.com>
…rs, add mismatch coverage (apache#20166)

## Which issue does this PR close?

* Closes apache#20161.

## Rationale for this change

This change is a focused refactor of the `PhysicalExprAdapter` schema
rewriter to improve readability and maintainability while preserving
behavior.

Key motivations:

* Reduce complexity from explicit lifetimes by storing schema references
as `SchemaRef`.
* Make column/index/type handling easier to follow by extracting helper
functions.
* Strengthen the test suite to ensure refactors do not alter adapter
output.

## What changes are included in this PR?

* Refactored `DefaultPhysicalExprAdapterRewriter` to own `SchemaRef`
values instead of borrowing `&Schema`.

  * Simplifies construction and avoids lifetime plumbing.
* Simplified column rewrite logic by:

* Early-exiting when both the physical index and data type already
match.
* Extracting `resolve_column` to handle physical index/name resolution.
* Extracting `create_cast_column_expr` to validate cast compatibility
(including nested structs) and build `CastColumnExpr`.
* Minor cleanups in struct compatibility validation and field selection
to ensure the cast checks are performed against the *actual* physical
field resolved by the final column index.
* Test updates and additions:

* Simplified construction of expected struct `Field`s in tests for
clarity.
* Added `test_rewrite_column_index_and_type_mismatch` to validate the
combined case where the logical column index differs from the physical
schema *and* the data type requires casting.

## Are these changes tested?

Yes.

* Existing unit tests continue to pass.
* Added a new unit test to cover the index-and-type mismatch scenario
for column rewriting, asserting:
  * The inner `Column` points to the correct physical index.
  * The resulting expression is a `CastColumnExpr` producing the
expected logical type.

## Are there any user-facing changes?

No.

* This is a refactor/cleanup intended to preserve existing behavior.
* No public API changes, no behavioral changes expected in query
results.

## LLM-generated code disclosure

This PR includes LLM-generated code and comments. All LLM-generated
content has been manually reviewed and tested.
## What changes are included in this PR?

Adds support for the Spark `negative` function in DataFusion.

## Are these changes tested?

Yes, via unit tests.

## Are there any user-facing changes?

Yes, a new function is added.

---------

Co-authored-by: Nisha Agrawal <nishaagrawal@Nishas-MacBook-Air.local>
Co-authored-by: Jeffrey Vo <jeffrey.vo.australia@gmail.com>
Co-authored-by: Subham Singhal <subhamsinghal@Nishas-MacBook-Air.local>
Co-authored-by: Oleks V <comphead@users.noreply.github.com>
Co-authored-by: Martin Grigorov <martin-g@users.noreply.github.com>
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
…pache#20139)

## Which issue does this PR close?

- Closes apache#20113.

## Rationale for this change

This PR fixes a potential panic in `ListingTableFactory::create` when
the provided `Session` instance is not a `SessionState`.

Previously, the code used `.unwrap()` on
`downcast_ref::<SessionState>()`. If a custom `Session` implementation
was used (which is allowed by the trait), this would cause a crash. This
change replaces `.unwrap()` with `ok_or_else`, returning a proper
`DataFusionError::Internal` instead.
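The `.unwrap()` to `ok_or_else` change can be shown with a self-contained stand-in. The `Session`/`SessionState` types below are hypothetical simplifications of DataFusion's, and the error is a plain `String` rather than `DataFusionError::Internal`:

```rust
use std::any::Any;

// Hypothetical stand-ins for the Session trait and SessionState type.
trait Session {
    fn as_any(&self) -> &dyn Any;
}

struct SessionState;
impl Session for SessionState {
    fn as_any(&self) -> &dyn Any { self }
}

// A custom Session implementation, as the trait allows.
struct MockSession;
impl Session for MockSession {
    fn as_any(&self) -> &dyn Any { self }
}

fn create(session: &dyn Session) -> Result<&SessionState, String> {
    // Instead of `.unwrap()`, surface a proper error when the downcast fails.
    session
        .as_any()
        .downcast_ref::<SessionState>()
        .ok_or_else(|| "expected a SessionState instance".to_string())
}

fn main() {
    assert!(create(&SessionState).is_ok());
    // A non-SessionState session now yields an error instead of a panic.
    assert!(create(&MockSession).is_err());
}
```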

## What changes are included in this PR?

- Replaced `.unwrap()` with `ok_or_else` in
`ListingTableFactory::create` to safely handle session downcasting.
- Added a regression test `test_create_with_invalid_session` in
`datafusion/core/src/datasource/listing_table_factory.rs` that uses a
`MockSession` to verify the error is returned instead of panicking.

## Are these changes tested?

Yes.
- Added a new unit test `test_create_with_invalid_session`.
- Ran `cargo test -p datafusion --lib
datasource::listing_table_factory::tests::test_create_with_invalid_session`
and it passed.

## Are there any user-facing changes?

No.

---------

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
Bumps [time](https://github.com/time-rs/time) from 0.3.44 to 0.3.47.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/time-rs/time/releases">time's
releases</a>.</em></p>
<blockquote>
<h2>v0.3.47</h2>
<p>See the <a
href="https://github.com/time-rs/time/blob/main/CHANGELOG.md">changelog</a>
for details.</p>
<h2>v0.3.46</h2>
<p>See the <a
href="https://github.com/time-rs/time/blob/main/CHANGELOG.md">changelog</a>
for details.</p>
<h2>v0.3.45</h2>
<p>See the <a
href="https://github.com/time-rs/time/blob/main/CHANGELOG.md">changelog</a>
for details.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/time-rs/time/blob/main/CHANGELOG.md">time's
changelog</a>.</em></p>
<blockquote>
<h2>0.3.47 [2026-02-05]</h2>
<h3>Security</h3>
<ul>
<li>
<p>The possibility of a stack exhaustion denial of service attack when
parsing RFC 2822 has been
eliminated. Previously, it was possible to craft input that would cause
unbounded recursion. Now,
the depth of the recursion is tracked, causing an error to be returned
if it exceeds a reasonable
limit.</p>
<p>This attack vector requires parsing user-provided input, with any
type, using the RFC 2822 format.</p>
</li>
</ul>
<h3>Compatibility</h3>
<ul>
<li>Attempting to format a value with a well-known format (i.e. RFC
3339, RFC 2822, or ISO 8601) will
error at compile time if the type being formatted does not provide
sufficient information. This
would previously fail at runtime. Similarly, attempting to format a
value with ISO 8601 that is
only configured for parsing (i.e. <code>Iso8601::PARSING</code>) will
error at compile time.</li>
</ul>
<h3>Added</h3>
<ul>
<li>Builder methods for format description modifiers, eliminating the
need for verbose initialization
when done manually.</li>
<li><code>date!(2026-W01-2)</code> is now supported. Previously, a space
was required between <code>W</code> and <code>01</code>.</li>
<li><code>[end]</code> now has a <code>trailing_input</code> modifier
which can either be <code>prohibit</code> (the default) or
<code>discard</code>. When it is <code>discard</code>, all remaining
input is ignored. Note that if there are components
after <code>[end]</code>, they will still attempt to be parsed, likely
resulting in an error.</li>
</ul>
<h3>Changed</h3>
<ul>
<li>More performance gains when parsing.</li>
</ul>
<h3>Fixed</h3>
<ul>
<li>If manually formatting a value, the number of bytes written was one
short for some components.
This has been fixed such that the number of bytes written is always
correct.</li>
<li>The possibility of integer overflow when parsing an owned format
description has been effectively
eliminated. This would previously wrap when overflow checks were
disabled. Instead of storing the
depth as <code>u8</code>, it is stored as <code>u32</code>. This would
require multiple gigabytes of nested input to
overflow, at which point we've got other problems and trivial
mitigations are available by
downstream users.</li>
</ul>
<h2>0.3.46 [2026-01-23]</h2>
<h3>Added</h3>
<ul>
<li>All possible panics are now documented for the relevant
methods.</li>
<li>The need to use <code>#[serde(default)]</code> when using custom
<code>serde</code> formats is documented. This applies
only when deserializing an <code>Option&lt;T&gt;</code>.</li>
<li><code>Duration::nanoseconds_i128</code> has been made public,
mirroring
<code>std::time::Duration::from_nanos_u128</code>.</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/time-rs/time/commit/d5144cd2874862d46466c900910cd8577d066019"><code>d5144cd</code></a>
v0.3.47 release</li>
<li><a
href="https://github.com/time-rs/time/commit/f6206b050fd54817d8872834b4d61f605570e89b"><code>f6206b0</code></a>
Guard against integer overflow in release mode</li>
<li><a
href="https://github.com/time-rs/time/commit/1c63dc7985b8fa26bd8c689423cc56b7a03841ee"><code>1c63dc7</code></a>
Avoid denial of service when parsing Rfc2822</li>
<li><a
href="https://github.com/time-rs/time/commit/5940df6e72efb63d246ca1ca59a0f836ad32ad8a"><code>5940df6</code></a>
Add builder methods to avoid verbose construction</li>
<li><a
href="https://github.com/time-rs/time/commit/00881a4da1bc5a6cb6313052e5017dbd7daa40f0"><code>00881a4</code></a>
Manually format macros everywhere</li>
<li><a
href="https://github.com/time-rs/time/commit/bb723b6d826e46c174d75cd08987061984b0ceb7"><code>bb723b6</code></a>
Add <code>trailing_input</code> modifier to <code>end</code></li>
<li><a
href="https://github.com/time-rs/time/commit/31c4f8e0b56e6ae24fe0d6ef0e492b6741dda783"><code>31c4f8e</code></a>
Permit <code>W12</code> in <code>date!</code> macro</li>
<li><a
href="https://github.com/time-rs/time/commit/490a17bf306576850f33a86d3ca95d96db7b1dcd"><code>490a17b</code></a>
Mark error paths in well-known formats as cold</li>
<li><a
href="https://github.com/time-rs/time/commit/6cb1896a600be1538ecfab8f233fe9cfe9fa8951"><code>6cb1896</code></a>
Optimize <code>Rfc2822</code> parsing</li>
<li><a
href="https://github.com/time-rs/time/commit/6d264d59c25e3da0453c3defebf4640b0086a006"><code>6d264d5</code></a>
Remove erroneous <code>#[inline(never)]</code> attributes</li>
<li>Additional commits viewable in <a
href="https://github.com/time-rs/time/compare/v0.3.44...v0.3.47">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=time&package-manager=cargo&previous-version=0.3.44&new-version=0.3.47)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/apache/datafusion/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
We were missing some closing backticks
…asmtest/datafusion-wasm-app (apache#20178)

Bumps [webpack](https://github.com/webpack/webpack) from 5.94.0 to
5.105.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/webpack/webpack/releases">webpack's
releases</a>.</em></p>
<blockquote>
<h2>v5.105.0</h2>
<h3>Minor Changes</h3>
<ul>
<li>
<p>Allow resolving worker module by export condition name when using
<code>new Worker()</code> (by <a
href="https://github.com/hai-x"><code>@​hai-x</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20353">#20353</a>)</p>
</li>
<li>
<p>Detect conditional imports to avoid compile-time linking errors for
non-existent exports. (by <a
href="https://github.com/hai-x"><code>@​hai-x</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20320">#20320</a>)</p>
</li>
<li>
<p>Added the <code>tsconfig</code> option for the <code>resolver</code>
options (replacement for <code>tsconfig-paths-webpack-plugin</code>).
Can be <code>false</code> (disabled), <code>true</code> (use the default
<code>tsconfig.json</code> file to search for it), a string path to
<code>tsconfig.json</code>, or an object with <code>configFile</code>
and <code>references</code> options. (by <a
href="https://github.com/alexander-akait"><code>@​alexander-akait</code></a>
in <a
href="https://redirect.github.com/webpack/webpack/pull/20400">#20400</a>)</p>
</li>
<li>
<p>Support <code>import.defer()</code> for context modules. (by <a
href="https://github.com/ahabhgk"><code>@​ahabhgk</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20399">#20399</a>)</p>
</li>
<li>
<p>Added support for array values ​​to the <code>devtool</code> option.
(by <a href="https://github.com/hai-x"><code>@​hai-x</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20191">#20191</a>)</p>
</li>
<li>
<p>Improve rendering node built-in modules for ECMA module output. (by
<a href="https://github.com/hai-x"><code>@​hai-x</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20255">#20255</a>)</p>
</li>
<li>
<p>Unknown import.meta properties are now determined at runtime instead
of being statically analyzed at compile time. (by <a
href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20312">#20312</a>)</p>
</li>
</ul>
<h3>Patch Changes</h3>
<ul>
<li>
<p>Fixed ESM default export handling for <code>.mjs</code> files in
Module Federation (by <a
href="https://github.com/y-okt"><code>@​y-okt</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20189">#20189</a>)</p>
</li>
<li>
<p>Optimized <code>import.meta.env</code> handling in destructuring
assignments by using cached stringified environment definitions. (by <a
href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20313">#20313</a>)</p>
</li>
<li>
<p>Respect the <code>stats.errorStack</code> option in stats output. (by
<a
href="https://github.com/samarthsinh2660"><code>@​samarthsinh2660</code></a>
in <a
href="https://redirect.github.com/webpack/webpack/pull/20258">#20258</a>)</p>
</li>
<li>
<p>Fixed a bug where declaring a <code>module</code> variable in module
scope would conflict with the default <code>moduleArgument</code>. (by
<a href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in
<a
href="https://redirect.github.com/webpack/webpack/pull/20265">#20265</a>)</p>
</li>
<li>
<p>Fix VirtualUrlPlugin to set resourceData.context for proper module
resolution. Previously, when context was not set, it would fallback to
the virtual scheme path (e.g., <code>virtual:routes</code>), which is
not a valid filesystem path, causing subsequent resolve operations to
fail. (by <a
href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20390">#20390</a>)</p>
</li>
<li>
<p>Fixed Worker self-import handling to support various URL patterns
(e.g., <code>import.meta.url</code>, <code>new
URL(import.meta.url)</code>, <code>new URL(import.meta.url,
import.meta.url)</code>, <code>new URL(&quot;./index.js&quot;,
import.meta.url)</code>). Workers that resolve to the same module are
now properly deduplicated, regardless of the URL syntax used. (by <a
href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20381">#20381</a>)</p>
</li>
<li>
<p>Reuse the same async entrypoint for the same Worker URL within a
module to avoid circular dependency warnings when multiple Workers
reference the same resource. (by <a
href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20345">#20345</a>)</p>
</li>
<li>
<p>Fixed a bug where a self-referencing dependency would have an unused
export name when imported inside a web worker. (by <a
href="https://github.com/samarthsinh2660"><code>@​samarthsinh2660</code></a>
in <a
href="https://redirect.github.com/webpack/webpack/pull/20251">#20251</a>)</p>
</li>
<li>
<p>Fix missing export generation when concatenated modules in different
chunks share the same runtime in module library bundles. (by <a
href="https://github.com/hai-x"><code>@​hai-x</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20346">#20346</a>)</p>
</li>
<li>
<p>Fixed <code>import.meta.env.xxx</code> behavior: when accessing a
non-existent property, it now returns empty object instead of full
object at runtime. (by <a
href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20289">#20289</a>)</p>
</li>
<li>
<p>Improved parsing error reporting by adding a link to the loader
documentation. (by <a
href="https://github.com/gaurav10gg"><code>@​gaurav10gg</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20244">#20244</a>)</p>
</li>
<li>
<p>Fix typescript types. (by <a
href="https://github.com/alexander-akait"><code>@​alexander-akait</code></a>
in <a
href="https://redirect.github.com/webpack/webpack/pull/20305">#20305</a>)</p>
</li>
<li>
<p>Add declaration for unused harmony import specifier. (by <a
href="https://github.com/hai-x"><code>@​hai-x</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20286">#20286</a>)</p>
</li>
<li>
<p>Fix compressibility of modules while retaining portability. (by <a
href="https://github.com/dmichon-msft"><code>@​dmichon-msft</code></a>
in <a
href="https://redirect.github.com/webpack/webpack/pull/20287">#20287</a>)</p>
</li>
<li>
<p>Optimize source map generation: only include <code>ignoreList</code>
property when it has content, avoiding empty arrays in source maps. (by
<a href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in
<a
href="https://redirect.github.com/webpack/webpack/pull/20319">#20319</a>)</p>
</li>
<li>
<p>Preserve star exports for dependencies in ECMA module output. (by <a
href="https://github.com/hai-x"><code>@​hai-x</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20293">#20293</a>)</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/webpack/webpack/blob/main/CHANGELOG.md">webpack's
changelog</a>.</em></p>
<blockquote>
<h2>5.105.0</h2>
<h3>Minor Changes</h3>
<ul>
<li>
<p>Allow resolving worker module by export condition name when using
<code>new Worker()</code> (by <a
href="https://github.com/hai-x"><code>@​hai-x</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20353">#20353</a>)</p>
</li>
<li>
<p>Detect conditional imports to avoid compile-time linking errors for
non-existent exports. (by <a
href="https://github.com/hai-x"><code>@​hai-x</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20320">#20320</a>)</p>
</li>
<li>
<p>Added the <code>tsconfig</code> option for the <code>resolver</code>
options (replacement for <code>tsconfig-paths-webpack-plugin</code>).
Can be <code>false</code> (disabled), <code>true</code> (use the default
<code>tsconfig.json</code> file to search for it), a string path to
<code>tsconfig.json</code>, or an object with <code>configFile</code>
and <code>references</code> options. (by <a
href="https://github.com/alexander-akait"><code>@​alexander-akait</code></a>
in <a
href="https://redirect.github.com/webpack/webpack/pull/20400">#20400</a>)</p>
</li>
<li>
<p>Support <code>import.defer()</code> for context modules. (by <a
href="https://github.com/ahabhgk"><code>@​ahabhgk</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20399">#20399</a>)</p>
</li>
<li>
<p>Added support for array values ​​to the <code>devtool</code> option.
(by <a href="https://github.com/hai-x"><code>@​hai-x</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20191">#20191</a>)</p>
</li>
<li>
<p>Improve rendering node built-in modules for ECMA module output. (by
<a href="https://github.com/hai-x"><code>@​hai-x</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20255">#20255</a>)</p>
</li>
<li>
<p>Unknown import.meta properties are now determined at runtime instead
of being statically analyzed at compile time. (by <a
href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20312">#20312</a>)</p>
</li>
</ul>
<h3>Patch Changes</h3>
<ul>
<li>
<p>Fixed ESM default export handling for <code>.mjs</code> files in
Module Federation (by <a
href="https://github.com/y-okt"><code>@​y-okt</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20189">#20189</a>)</p>
</li>
<li>
<p>Optimized <code>import.meta.env</code> handling in destructuring
assignments by using cached stringified environment definitions. (by <a
href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20313">#20313</a>)</p>
</li>
<li>
<p>Respect the <code>stats.errorStack</code> option in stats output. (by
<a
href="https://github.com/samarthsinh2660"><code>@​samarthsinh2660</code></a>
in <a
href="https://redirect.github.com/webpack/webpack/pull/20258">#20258</a>)</p>
</li>
<li>
<p>Fixed a bug where declaring a <code>module</code> variable in module
scope would conflict with the default <code>moduleArgument</code>. (by
<a href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in
<a
href="https://redirect.github.com/webpack/webpack/pull/20265">#20265</a>)</p>
</li>
<li>
<p>Fix VirtualUrlPlugin to set resourceData.context for proper module
resolution. Previously, when context was not set, it would fallback to
the virtual scheme path (e.g., <code>virtual:routes</code>), which is
not a valid filesystem path, causing subsequent resolve operations to
fail. (by <a
href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20390">#20390</a>)</p>
</li>
<li>
<p>Fixed Worker self-import handling to support various URL patterns
(e.g., <code>import.meta.url</code>, <code>new
URL(import.meta.url)</code>, <code>new URL(import.meta.url,
import.meta.url)</code>, <code>new URL(&quot;./index.js&quot;,
import.meta.url)</code>). Workers that resolve to the same module are
now properly deduplicated, regardless of the URL syntax used. (by <a
href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20381">#20381</a>)</p>
</li>
<li>
<p>Reuse the same async entrypoint for the same Worker URL within a
module to avoid circular dependency warnings when multiple Workers
reference the same resource. (by <a
href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20345">#20345</a>)</p>
</li>
<li>
<p>Fixed a bug where a self-referencing dependency would have an unused
export name when imported inside a web worker. (by <a
href="https://github.com/samarthsinh2660"><code>@​samarthsinh2660</code></a>
in <a
href="https://redirect.github.com/webpack/webpack/pull/20251">#20251</a>)</p>
</li>
<li>
<p>Fix missing export generation when concatenated modules in different
chunks share the same runtime in module library bundles. (by <a
href="https://github.com/hai-x"><code>@​hai-x</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20346">#20346</a>)</p>
</li>
<li>
<p>Fixed <code>import.meta.env.xxx</code> behavior: when accessing a
non-existent property, it now returns empty object instead of full
object at runtime. (by <a
href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20289">#20289</a>)</p>
</li>
<li>
<p>Improved parsing error reporting by adding a link to the loader
documentation. (by <a
href="https://github.com/gaurav10gg"><code>@​gaurav10gg</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20244">#20244</a>)</p>
</li>
<li>
<p>Fix typescript types. (by <a
href="https://github.com/alexander-akait"><code>@​alexander-akait</code></a>
in <a
href="https://redirect.github.com/webpack/webpack/pull/20305">#20305</a>)</p>
</li>
<li>
<p>Add declaration for unused harmony import specifier. (by <a
href="https://github.com/hai-x"><code>@​hai-x</code></a> in <a
href="https://redirect.github.com/webpack/webpack/pull/20286">#20286</a>)</p>
</li>
<li>
<p>Fix compressibility of modules while retaining portability. (by <a
href="https://github.com/dmichon-msft"><code>@​dmichon-msft</code></a>
in <a
href="https://redirect.github.com/webpack/webpack/pull/20287">#20287</a>)</p>
</li>
<li>
<p>Optimize source map generation: only include <code>ignoreList</code>
property when it has content, avoiding empty arrays in source maps. (by
<a href="https://github.com/xiaoxiaojx"><code>@​xiaoxiaojx</code></a> in
<a
href="https://redirect.github.com/webpack/webpack/pull/20319">#20319</a>)</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/webpack/webpack/commit/1486f9aacca11d79dbb7ddbceed29b7e6df7a7ab"><code>1486f9a</code></a>
chore(release): new release</li>
<li><a
href="https://github.com/webpack/webpack/commit/1a517f665aae7b4d3d29c8b408d09488a21fbf94"><code>1a517f6</code></a>
feat: added the <code>tsconfig</code> option for the
<code>resolver</code> options (<a
href="https://redirect.github.com/webpack/webpack/issues/20400">#20400</a>)</li>
<li><a
href="https://github.com/webpack/webpack/commit/7b3b0f795df377a9d0073822a2d60c1390d03109"><code>7b3b0f7</code></a>
feat: support <code>import.defer()</code> for context modules</li>
<li><a
href="https://github.com/webpack/webpack/commit/c4a6a922de4af37a92d05c0ddc975b5348cfa9a1"><code>c4a6a92</code></a>
refactor: more types and increase types coverage</li>
<li><a
href="https://github.com/webpack/webpack/commit/5ecc58d722da7715ede7de59b97108dd715d1bfa"><code>5ecc58d</code></a>
feat: consider asset module as side-effect-free (<a
href="https://redirect.github.com/webpack/webpack/issues/20352">#20352</a>)</li>
<li><a
href="https://github.com/webpack/webpack/commit/cce0f6989888771ec279777ab8f8dce8e39198a0"><code>cce0f69</code></a>
test: avoid comma operator in BinaryMiddleware test (<a
href="https://redirect.github.com/webpack/webpack/issues/20398">#20398</a>)</li>
<li><a
href="https://github.com/webpack/webpack/commit/cd4793d50e8e1e519ecd07b76d9e5dc06357341e"><code>cd4793d</code></a>
feat: support import specifier guard (<a
href="https://redirect.github.com/webpack/webpack/issues/20320">#20320</a>)</li>
<li><a
href="https://github.com/webpack/webpack/commit/fe486552d060f6d2815a39a6bd0fb351d348658c"><code>fe48655</code></a>
docs: update examples (<a
href="https://redirect.github.com/webpack/webpack/issues/20397">#20397</a>)</li>
<li><a
href="https://github.com/webpack/webpack/commit/de107f8767a2a11759f8261ed1ac49bcddec34b6"><code>de107f8</code></a>
fix(VirtualUrlPlugin): set resourceData.context to avoid invalid
fallback (<a
href="https://redirect.github.com/webpack/webpack/issues/2">#2</a>...</li>
<li><a
href="https://github.com/webpack/webpack/commit/a656ab1fd1064ef8dd3eef1a2f3071fc176b948f"><code>a656ab1</code></a>
test: add self-import test case for dynamic import (<a
href="https://redirect.github.com/webpack/webpack/issues/20389">#20389</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/webpack/webpack/compare/v5.94.0...v5.105.0">compare
view</a></li>
</ul>
</details>
<details>
<summary>Maintainer changes</summary>
<p>This version was pushed to npm by <a
href="https://www.npmjs.com/~GitHub">GitHub Actions</a>, a new releaser
for webpack since your current version.</p>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=webpack&package-manager=npm_and_yarn&previous-version=5.94.0&new-version=5.105.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
## Which issue does this PR close?

- Closes apache#19425

## Rationale for this change

This adjusts the way the spill channel works. Currently we have a
spill writer & reader pairing that uses a mutex to coordinate when a
file is ready to be read.

Because we were using a `spawn_buffered` call, the read task could race
ahead and try to read a file that had not yet been completely written
out.

Alongside this, we need to flush each write to the file, as there is
otherwise a chance that another thread may see stale data.

## What changes are included in this PR?

Adds a flush on write, and converts the read task to not buffer reads.
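
A minimal sketch of the flush-on-write idea, assuming a `BufWriter`-style buffered writer; `write_and_flush` and the byte-slice batch are illustrative stand-ins, not DataFusion's actual spill-channel API:

```rust
use std::io::{BufWriter, Write};

// Write one spill batch and flush before signaling the reader, so a
// concurrent reader never observes a partially buffered file. The
// function name and signature are illustrative only.
fn write_and_flush<W: Write>(writer: &mut BufWriter<W>, batch: &[u8]) -> std::io::Result<()> {
    writer.write_all(batch)?;
    // Without this flush, bytes can still sit in the in-memory buffer
    // when the reader is told the file is ready, so it would see
    // stale or truncated data.
    writer.flush()
}

fn main() -> std::io::Result<()> {
    let mut file = Vec::new(); // stand-in for the spill file
    {
        let mut writer = BufWriter::new(&mut file);
        write_and_flush(&mut writer, b"batch-1")?;
    } // writer dropped here; the data was already flushed above
    assert_eq!(file, b"batch-1".to_vec());
    Ok(())
}
```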

## Are these changes tested?

I haven't written a test, but I have been running the example in the
attached issue. While it now fails with allocation errors, the original
error goes away.

## Are there any user-facing changes?

Nope
## Which issue does this PR close?


No

## Rationale for this change


I think the original comment is misleading, as we actually want to
express whether a parent filter was pushed down to any child
successfully.

## What changes are included in this PR?


Updated the comment about the `filters` in the
`FilterPushdownPropagation` struct.

## Are these changes tested?


No need to test as it only modifies comments.

## Are there any user-facing changes?


Yes, but the change only touches a comment.

---------

Co-authored-by: Adrian Garcia Badaracco <1755071+adriangb@users.noreply.github.com>
## Which issue does this PR close?

When checking logical equivalence between `Dictionary<_, Utf8>` and
`Utf8View`, the response was `false`, which is not what we expect
(logical equivalence should be a transitive property).

## What changes are included in this PR?

This PR introduces a test and a fix; the test fails without the fix.
The fix simply calls `datatype_is_logically_equal` again on `v1` and
`othertype` when called with `Dictionary<K1, V1>` and `othertype`.
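
A toy sketch of the fix's shape: the enum below is a simplified stand-in for Arrow's `DataType`, and the function body is illustrative, not the actual patch. The point is that logical equality must recurse into the dictionary's value type so that `Dictionary<K, Utf8>` compares equal to `Utf8View`, keeping the relation transitive:

```rust
// Simplified stand-in for Arrow's DataType, not the real definition.
#[derive(Debug, PartialEq)]
enum DataType {
    Int64,
    Utf8,
    Utf8View,
    Dictionary(Box<DataType>, Box<DataType>), // (key type, value type)
}

fn datatype_is_logically_equal(a: &DataType, b: &DataType) -> bool {
    use DataType::*;
    match (a, b) {
        // All string layouts are logically the same type.
        (Utf8 | Utf8View, Utf8 | Utf8View) => true,
        // The fix: unwrap the dictionary and compare its value type
        // against whatever is on the other side.
        (Dictionary(_, v), other) | (other, Dictionary(_, v)) => {
            datatype_is_logically_equal(v, other)
        }
        _ => a == b,
    }
}

fn main() {
    let dict_utf8 = DataType::Dictionary(Box::new(DataType::Int64), Box::new(DataType::Utf8));
    // Equivalence holds in both directions, restoring transitivity.
    assert!(datatype_is_logically_equal(&dict_utf8, &DataType::Utf8View));
    assert!(datatype_is_logically_equal(&DataType::Utf8View, &dict_utf8));
    assert!(!datatype_is_logically_equal(&dict_utf8, &DataType::Int64));
}
```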

## Are these changes tested?

Yes.

## Are there any user-facing changes?

No.

---------

Co-authored-by: Jeffrey Vo <jeffrey.vo.australia@gmail.com>
Co-authored-by: Dmitrii Blaginin <dmitrii@blaginin.me>
Co-authored-by: blaginin <github@blaginin.me>
## Which issue does this PR close?


- Closes apache#20025.

## Rationale for this change


## What changes are included in this PR?


## Are these changes tested?


## Are there any user-facing changes?

Along the way, improve the docs slightly.

## Rationale for this change

Evaluating `SELECT SPLIT_PART('', '', -9223372036854775808);` yields (in
a debug build):

```
thread 'main' (41405991) panicked at datafusion/functions/src/string/split_part.rs:236:47:
attempt to negate with overflow
```
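
A sketch of the overflow and a checked alternative; `index_from_end` is a hypothetical helper for illustration, not the actual `split_part` code:

```rust
// A negative index counts from the end, so the code negates it. For
// i64::MIN there is no positive counterpart, so a bare `-n` panics in
// debug builds ("attempt to negate with overflow") and wraps in
// release builds. `unsigned_abs` is safe for every i64 value.
fn index_from_end(n: i64) -> u64 {
    n.unsigned_abs()
}

fn main() {
    assert_eq!(index_from_end(-3), 3);
    // i64::MIN maps to 2^63, which fits in u64.
    assert_eq!(index_from_end(i64::MIN), 9_223_372_036_854_775_808u64);
    // Alternatively, checked_neg makes the impossible case explicit:
    assert_eq!(i64::MIN.checked_neg(), None);
}
```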


## Are these changes tested?

Yes, added unit test.


## Are there any user-facing changes?

…event potential overflow (apache#20185)

Currently `print_options::MaxRows::Unlimited` basically always panics
with `attempt to add with overflow`, because we are adding a positive
number to `usize::MAX`.

`saturating_add` solves this issue.

# MRE

```rs
fn main() {
    let max = usize::MAX;
    println!("max: {}", max);
    let max = max + 1;
    println!("max: {}", max);
}
```
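The arithmetic behind the fix, as a runnable sketch (plain integer ops, not the actual `MaxRows` code):

```rust
fn main() {
    let max = usize::MAX;
    // `max + 1` panics in debug builds ("attempt to add with
    // overflow") and silently wraps to 0 in release builds.
    assert_eq!(max.checked_add(1), None);
    assert_eq!(max.wrapping_add(1), 0);
    // saturating_add clamps at the maximum instead, which is the
    // behavior we want for an "unlimited" row count.
    assert_eq!(max.saturating_add(1), usize::MAX);
}
```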
## Rationale for this change

We support the `DELETE ... LIMIT` query in our fork, and it would be
good to port that patch upstream.


## What changes are included in this PR?

This patch adds support for planning DELETE statements with a LIMIT.
The inner table scan is wrapped with a limit in this case, e.g.:

```
query TT
explain delete from t1 limit 10
----
logical_plan
01)Dml: op=[Delete] table=[t1]
02)--Limit: skip=0, fetch=10
03)----TableScan: t1
physical_plan
01)CooperativeExec
02)--DmlResultExec: rows_affected=0
```
## Are these changes tested?

Covered with SLT.

## Are there any user-facing changes?

Queries with a limited delete are now planned successfully instead of
returning a not-supported error.
comphead and others added 11 commits March 17, 2026 16:17
…1016)

## Which issue does this PR close?

<!--
We generally require a GitHub issue to be filed for all bug fixes and
enhancements and this helps us generate change logs for our releases.
You can link an issue to this PR using the GitHub syntax. For example
`Closes apache#123` indicates that this PR will close issue apache#123. -->

- Closes apache#21011.

## Rationale for this change

Correctly handle `array_remove_*` functions when NULL is the value to delete.

## What changes are included in this PR?

<!--
There is no need to duplicate the description in the issue here but it
is sometimes worth providing a summary of the individual changes in this
PR.
-->

## Are these changes tested?

## Are there any user-facing changes?


(cherry picked from commit 6ab16cc)

…che#20962) (apache#20996)

- Part of apache#19692
- Closes apache#20996 on branch-53

This PR:
- Backports apache#20962 from
@erratic-pattern to the branch-53 line
- Backports the related tests from
apache#20960

Co-authored-by: Adam Curtis <adam.curtis.dev@gmail.com>
- Port of apache#21084 to 53

- Related to apache#21079

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
…elds (apache#21057) (apache#21142)

## Which issue does this PR close?

- Related to apache#21063
- Related to apache#21079

## Rationale for this change

Currently, if we see a filter with a limit underneath, we don't push the
filter past the limit. However, sort nodes and table scan nodes can have
fetch fields that do essentially the same thing, and we don't stop
filters from being pushed past them. This is a correctness bug that can
lead to undefined behaviour.

I added checks for exactly this condition so we don't push the filter
down. I think the prior expectation was that there would always be a
limit node above any of these nodes, but that is not the case. In
`push_down_limit.rs`, there's code that does this optimisation when a
limit has a sort under it:


```rust
LogicalPlan::Sort(mut sort) => {
    let new_fetch = {
        let sort_fetch = skip + fetch;
        Some(sort.fetch.map(|f| f.min(sort_fetch)).unwrap_or(sort_fetch))
    };
    if new_fetch == sort.fetch {
        if skip > 0 {
            original_limit(skip, fetch, LogicalPlan::Sort(sort))
        } else {
            Ok(Transformed::yes(LogicalPlan::Sort(sort)))
        }
    } else {
        sort.fetch = new_fetch;
        limit.input = Arc::new(LogicalPlan::Sort(sort));
        Ok(Transformed::yes(LogicalPlan::Limit(limit)))
    }
}
```

The first time this runs, it sets the sort's internal fetch to
`new_fetch`; on the second optimisation pass it hits the branch that
removes the limit node altogether, leaving the sort node exposed to
filters that can now be pushed down into it.

There is also a related fix in `gather_filters_for_pushdown` in
`SortExec`, which does the same thing for physical plan nodes. If a
given execution plan has a non-empty fetch, it should not allow any
parent filters to be pushed down.
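
Why pushing a filter past a fetch is incorrect can be seen with a toy illustration (plain Rust iterators standing in for plan operators, not DataFusion APIs): applying the filter before or after the limit produces different result sets.

```rust
fn main() {
    let rows = vec![1, 2, 3, 4, 5, 6];
    let fetch = 3; // stand-in for a sort/scan node's fetch field

    // Correct order: take the first `fetch` rows, then apply the filter above.
    let limit_then_filter: Vec<i32> = rows
        .iter()
        .take(fetch)
        .filter(|&&x| x % 2 == 0)
        .copied()
        .collect();

    // Incorrect order (what pushing the filter past the fetch would do):
    // filter first, then take `fetch` rows.
    let filter_then_limit: Vec<i32> = rows
        .iter()
        .filter(|&&x| x % 2 == 0)
        .take(fetch)
        .copied()
        .collect();

    assert_eq!(limit_then_filter, vec![2]);       // evens among rows 1..=3
    assert_eq!(filter_then_limit, vec![2, 4, 6]); // a different result set
}
```
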

## What changes are included in this PR?

Added checks in the optimisation rule to avoid pushing filters past
children with built-in limits.

## Are these changes tested?

Yes:
- Unit tests in `push_down_filter.rs`
- Fixed an existing test in `window.slt`
- Unit tests for the physical plan change in `sort.rs`
- New slt test in `push_down_filter_sort_fetch.slt` for this exact
behaviour

## Are there any user-facing changes?

No

Co-authored-by: Shiv Bhatia <shivbhatia10@gmail.com>
Co-authored-by: Shiv Bhatia <sbhatia@palantir.com>
…oin keys (apache#21121) (apache#21162)

## Which issue does this PR close?

- Related to apache#21124 
- Related to apache#21079

## Rationale for this change

When a Substrait join expression contains both equal and
is_not_distinct_from predicates (e.g. Spark pushes a null-safe filter
into a join that already has a regular equality key), the
`split_eq_and_noneq_join_predicate_with_nulls_equality` function uses a
single `nulls_equal_nulls` boolean that gets overwritten per-predicate.
Whichever operator is processed last determines the `NullEquality` for
all keys, silently dropping NULL-matching rows.

Since NullEquality is a join-level setting (not per-key) across all
physical join implementations (HashJoinExec, SortMergeJoinExec,
SymmetricHashJoinExec), the correct fix is to match DataFusion's own SQL
planner behavior: demote IS NOT DISTINCT FROM keys to the join filter
when mixed with Eq keys. This is already correctly handled for SQL as
shown in

[join_is_not_distinct_from.slt:L188](https://sourcegraph.com/r/github.com/apache/datafusion@2b7d4f9a5b005905b23128274ad37c3306ffcd15/-/blob/datafusion/sqllogictest/test_files/join_is_not_distinct_from.slt?L188)
```
# Test mixed equal and IS NOT DISTINCT FROM conditions
# The `IS NOT DISTINCT FROM` expr should NOT in HashJoin's `on` predicate
query TT
EXPLAIN SELECT t1.id AS t1_id, t2.id AS t2_id, t1.val, t2.val
FROM t1
JOIN t2 ON t1.id = t2.id AND t1.val IS NOT DISTINCT FROM t2.val
----
logical_plan
01)Projection: t1.id AS t1_id, t2.id AS t2_id, t1.val, t2.val
02)--Inner Join: t1.id = t2.id Filter: t1.val IS NOT DISTINCT FROM t2.val
03)----TableScan: t1 projection=[id, val]
04)----TableScan: t2 projection=[id, val]
```
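
The semantic difference between the two operators can be modelled with `Option` standing in for nullable values (a sketch, not DataFusion code): SQL `=` is never true when either side is NULL, while `IS NOT DISTINCT FROM` treats two NULLs as equal.

```rust
// SQL `=` under three-valued logic: NULL on either side is never "true".
fn sql_eq(a: Option<i32>, b: Option<i32>) -> bool {
    match (a, b) {
        (Some(x), Some(y)) => x == y,
        _ => false, // NULL = anything does not match
    }
}

// SQL `IS NOT DISTINCT FROM`: NULL matches NULL.
fn is_not_distinct_from(a: Option<i32>, b: Option<i32>) -> bool {
    a == b // Option equality already treats None == None as true
}

fn main() {
    assert!(!sql_eq(None, None));
    assert!(is_not_distinct_from(None, None));
    assert!(sql_eq(Some(1), Some(1)));
    assert!(is_not_distinct_from(Some(1), Some(1)));
    // Forcing a single join-level NullEquality onto mixed keys makes one of
    // these operators behave like the other, silently dropping (or adding)
    // NULL-matching rows -- hence the demotion to the join filter.
}
```
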

## What changes are included in this PR?

`datafusion/substrait/src/logical_plan/consumer/rel/join_rel.rs`:
- Collect `eq_keys` and `indistinct_keys` separately instead of using a
single vec with an overwritable boolean
- When both are present (the mixed case), use `eq_keys` as the equijoin
keys with `NullEqualsNothing` and reconstruct the `IsNotDistinctFrom`
expressions into the join filter
- Return `NullEquality` directly instead of converting from a bool

## Are these changes tested?

Yes, three levels of coverage:

1. Unit tests (join_rel.rs) — directly assert the output of
split_eq_and_noneq_join_predicate_with_nulls_equality for eq-only,
indistinct-only, mixed, and non-column-operand cases
2. Integration test (consumer_integration.rs) — loads a JSON-encoded
Substrait plan with a JoinRel containing both operators through
from_substrait_plan, executes it, and asserts 6 rows (including
NULL=NULL matches)
3. Existing SLT (join_is_not_distinct_from.slt:179-205) — confirms the
SQL planner already exhibits the same demotion behavior that this PR
adds to the Substrait consumer

## Are there any user-facing changes?

No API changes. Substrait plans with mixed equal/is_not_distinct_from
join predicates now correctly preserve null-safe semantics instead of
silently dropping NULL-matching rows.
…ache#21183)

## Which issue does this PR close?

- Closes #.

## Rationale for this change

The rewriter actually has three responsibilities:
1. Index remapping — column indices in expressions may not match the
file schema
2. Type casting — when logical and physical field types differ
3. Missing column handling — replacing references to absent columns with
nulls

This change skips the rewrite loop entirely when no predicate is set or
the logical schema equals the physical schema.
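
The fast path can be sketched as an early return (hypothetical names; the real rewriter operates on Arrow schemas and physical expressions):

```rust
// Stand-ins for an Arrow schema and a physical predicate expression.
#[derive(PartialEq)]
struct Schema(Vec<String>);

fn rewrite_predicate(
    predicate: Option<String>,
    logical: &Schema,
    physical: &Schema,
) -> Option<String> {
    // Fast path 1: nothing to rewrite if no predicate is set.
    let predicate = predicate?;
    // Fast path 2: if the schemas already match, the predicate is reusable as-is.
    if logical == physical {
        return Some(predicate);
    }
    // Slow path: index remapping, type casting, missing-column handling.
    Some(format!("rewritten({predicate})"))
}

fn main() {
    let s = Schema(vec!["a".to_string()]);
    let t = Schema(vec!["b".to_string()]);
    assert_eq!(rewrite_predicate(None, &s, &s), None);
    assert_eq!(rewrite_predicate(Some("p".to_string()), &s, &s), Some("p".to_string()));
    assert_eq!(rewrite_predicate(Some("p".to_string()), &s, &t), Some("rewritten(p)".to_string()));
}
```
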

## What changes are included in this PR?

## Are these changes tested?

## Are there any user-facing changes?


… schema for … (apache#21451)

…spill files

(cherry picked from commit e133dd3)


Co-authored-by: Marko Grujic <markoog@gmail.com>
…park bitmap/… (apache#21452)

…math modules

(cherry picked from commit 39fb9cc)


Co-authored-by: David López <hola@devel0pez.com>
…ion pushdown (apache#21492)

(cherry picked from commit 330d57f)


Co-authored-by: Huaijin <haohuaijin@gmail.com>
- Part of apache#21079
- Closes apache#21155 on branch-53

This PR:
- Backports apache#21439 from
@timsaucer to the branch-53 line

Co-authored-by: Tim Saucer <timsaucer@gmail.com>
apache#20658) (apache#21523)

- Part of apache#21079
- Closes apache#20905 on branch-53

This PR:
- Backports apache#21358 from @alamb
to the branch-53 line

Co-authored-by: Viktor Yershov <viktor@spice.ai>
alamb and others added 2 commits April 13, 2026 12:12
…#21587)

- Part of apache#21079

# Rationale
@comphead notes that `cargo audit` is failing on
apache#21559 (comment)

I previously fixed something similar on `branch-52`: 
- apache#21415

# Changes 
- Backports the cargo audit dependency updates from apache#21415 to branch-53

---------

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>