From 8a5f4e0a1cdb5e5136b850406c8c546f79b40caa Mon Sep 17 00:00:00 2001
From: Amanda Liu
Date: Tue, 2 Jul 2024 16:16:12 +0800
Subject: [PATCH] [SPARK-48759][SQL] Add migration doc for CREATE TABLE AS SELECT behavior change since Spark 3.4

### What changes were proposed in this pull request?

Add a migration guide entry for the `CREATE TABLE AS SELECT ...` behavior change. SPARK-41859 changes the behavior of `CREATE TABLE AS SELECT ...` from OVERWRITE to APPEND when `spark.sql.legacy.allowNonEmptyLocationInCTAS` is set to `true`:

```
drop table if exists test_table;
create table test_table location '/tmp/test_table' stored as parquet as
select 1 as col union all select 2 as col;

drop table if exists test_table;
create table test_table location '/tmp/test_table' stored as parquet as
select 3 as col union all select 4 as col;

select * from test_table;
```

This produces {3, 4} in Spark < 3.4.0 and {1, 2, 3, 4} in Spark 3.4.0 and later. It is a silent change in the behavior governed by `spark.sql.legacy.allowNonEmptyLocationInCTAS` that can introduce wrong results in user applications.

### Why are the changes needed?

This documents a behavior change starting in Spark 3.4 for `CREATE TABLE AS SELECT`.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

`doc build`

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #47152 from asl3/allowNonEmptyLocationInCTAS.

Authored-by: Amanda Liu
Signed-off-by: Wenchen Fan
---
 docs/sql-migration-guide.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/sql-migration-guide.md b/docs/sql-migration-guide.md
index 8f6a415569863..4707e491fa674 100644
--- a/docs/sql-migration-guide.md
+++ b/docs/sql-migration-guide.md
@@ -97,6 +97,7 @@ license: |
 - Since Spark 3.4, `BinaryType` is not supported in CSV datasource. In Spark 3.3 or earlier, users can write binary columns in CSV datasource, but the output content in CSV files is `Object.toString()` which is meaningless; meanwhile, if users read CSV tables with binary columns, Spark will throw an `Unsupported type: binary` exception.
 - Since Spark 3.4, bloom filter joins are enabled by default. To restore the legacy behavior, set `spark.sql.optimizer.runtime.bloomFilter.enabled` to `false`.
 - Since Spark 3.4, when schema inference on external Parquet files, INT64 timestamps with annotation `isAdjustedToUTC=false` will be inferred as TimestampNTZ type instead of Timestamp type. To restore the legacy behavior, set `spark.sql.parquet.inferTimestampNTZ.enabled` to `false`.
+- Since Spark 3.4, the behavior for `CREATE TABLE AS SELECT ...` is changed from OVERWRITE to APPEND when `spark.sql.legacy.allowNonEmptyLocationInCTAS` is set to `true`. Users are recommended to avoid CTAS with a non-empty table location.
 
 ## Upgrading from Spark SQL 3.2 to 3.3
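
For users who relied on the pre-3.4 OVERWRITE semantics, a minimal sketch of a version-independent alternative, assuming the schema is known up front: create the table explicitly and load it with `INSERT OVERWRITE`, which replaces existing data on all Spark versions. The `test_table` and `/tmp/test_table` names simply reuse the example above.

```sql
-- Sketch: explicit overwrite instead of relying on CTAS semantics.
-- Table/location names reuse the example above; the single INT column
-- schema is an assumption for illustration.
DROP TABLE IF EXISTS test_table;
CREATE TABLE test_table (col INT) STORED AS PARQUET LOCATION '/tmp/test_table';

-- INSERT OVERWRITE replaces whatever data is already at the location, so
-- the result does not depend on spark.sql.legacy.allowNonEmptyLocationInCTAS.
INSERT OVERWRITE TABLE test_table
SELECT 3 AS col UNION ALL SELECT 4 AS col;

SELECT * FROM test_table;  -- {3, 4} on both Spark < 3.4 and >= 3.4
```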
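
Alternatively, leaving `spark.sql.legacy.allowNonEmptyLocationInCTAS` at its default of `false` makes the second CTAS in the example fail fast instead of silently appending; a sketch (the exact error type and message vary by version):

```sql
-- With the default (false), CTAS into a non-empty location is rejected,
-- so the silent-append case above cannot produce wrong results unnoticed.
SET spark.sql.legacy.allowNonEmptyLocationInCTAS = false;

DROP TABLE IF EXISTS test_table;
CREATE TABLE test_table LOCATION '/tmp/test_table' STORED AS PARQUET AS
SELECT 3 AS col UNION ALL SELECT 4 AS col;
-- Expected: the statement fails, because /tmp/test_table still contains
-- files from the earlier CTAS run even after the table is dropped.
```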