crates/integration_tests/testdata/spark/Dockerfile (1 addition, 1 deletion)

@@ -29,7 +29,7 @@ WORKDIR ${SPARK_HOME}

ENV SPARK_VERSION=3.5.6
ENV ICEBERG_SPARK_RUNTIME_VERSION=3.5_2.12
-ENV ICEBERG_VERSION=1.6.0
+ENV ICEBERG_VERSION=1.10.0

RUN curl --retry 5 -s -C - https://dlcdn.apache.org/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop3.tgz -o spark-${SPARK_VERSION}-bin-hadoop3.tgz \
&& tar xzf spark-${SPARK_VERSION}-bin-hadoop3.tgz --directory /opt/spark --strip-components 1 \

@@ -53,7 +53,7 @@ async fn test_read_table_with_positional_deletes()

// Scan plan phase should include delete files in file plan
// when with_delete_file_processing_enabled == true
-assert_eq!(plan[0].deletes.len(), 2);
+assert_eq!(plan[0].deletes.len(), 1);
Contributor Author: Deletes are now automatically compacted to avoid read amplification 👍

// we should see two rows deleted, returning 10 rows instead of 12
let batch_stream = scan.to_arrow().await.unwrap();
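
For context on the author's note above, here is a minimal, self-contained sketch of what compacting positional deletes means. This is not the iceberg-rust implementation; the `DeletePositions` type and `compact_deletes` function are hypothetical names. The idea is that several positional delete files targeting the same data file are merged into one sorted, de-duplicated list, so the scan plan carries a single delete entry per data file, which is why the expected `deletes.len()` drops from 2 to 1.

```rust
use std::collections::BTreeSet;

/// Row positions deleted within one data file, as carried by a single
/// positional delete file (hypothetical, simplified representation).
type DeletePositions = Vec<u64>;

/// Merge the positions from several delete files for the same data file
/// into one sorted, de-duplicated list.
fn compact_deletes(delete_files: &[DeletePositions]) -> Vec<u64> {
    let mut merged = BTreeSet::new(); // BTreeSet keeps positions sorted and unique
    for positions in delete_files {
        merged.extend(positions.iter().copied());
    }
    merged.into_iter().collect()
}

fn main() {
    // Two delete files touch the same data file; position 7 appears in both.
    let first = vec![3, 7];
    let second = vec![7, 11];
    let compacted = compact_deletes(&[first, second]);
    assert_eq!(compacted, vec![3, 7, 11]);
    println!("compacted delete positions: {compacted:?}");
}
```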
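And a small, self-contained sketch of how the row count behind the "10 rows instead of 12" comment can be tallied from Arrow record batches. The schema and values here are made up; the real test collects batches from `batch_stream` rather than constructing them by hand.

```rust
use std::sync::Arc;

use arrow_array::{ArrayRef, Int32Array, RecordBatch};
use arrow_schema::{DataType, Field, Schema};

fn main() {
    // Stand-ins for the batches the scan would yield: 6 rows + 4 rows.
    let schema = Arc::new(Schema::new(vec![Field::new("id", DataType::Int32, false)]));
    let first = RecordBatch::try_new(
        schema.clone(),
        vec![Arc::new(Int32Array::from(vec![1, 2, 3, 4, 5, 6])) as ArrayRef],
    )
    .unwrap();
    let second = RecordBatch::try_new(
        schema,
        vec![Arc::new(Int32Array::from(vec![7, 8, 9, 10])) as ArrayRef],
    )
    .unwrap();

    // Summing num_rows across batches is how a test like this would verify
    // that 2 of the original 12 rows were deleted.
    let total: usize = [&first, &second].iter().map(|b| b.num_rows()).sum();
    assert_eq!(total, 10);
}
```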