[SPARK-39226][DOCS][FOLLOWUP] Update the migration guide after fixing the precision of the return type of round-like functions #36821
Conversation
docs/sql-migration-guide.md
Outdated
@@ -65,6 +65,8 @@ license: |
- Since Spark 3.3, when reading values from a JSON attribute defined as `FloatType` or `DoubleType`, the strings `"+Infinity"`, `"+INF"`, and `"-INF"` are now parsed to the appropriate values, in addition to the already supported `"Infinity"` and `"-Infinity"` variations. This change was made to improve consistency with Jackson's parsing of the unquoted versions of these values. Also, the `allowNonNumericNumbers` option is now respected, so these strings will now be considered invalid if this option is disabled.

- Since Spark 3.3, Spark will try to use a built-in data source writer instead of Hive serde in `INSERT OVERWRITE DIRECTORY`. This behavior is effective only if `spark.sql.hive.convertMetastoreParquet` or `spark.sql.hive.convertMetastoreOrc` is enabled, for the Parquet and ORC formats respectively. To restore the behavior before Spark 3.3, you can set `spark.sql.hive.convertMetastoreInsertDir` to `false`.

- Since Spark 3.3, the precision of the return type of round-like functions has been fixed. This may cause Spark to throw an `AnalysisException` of the `CANNOT_UP_CAST_DATATYPE` error class when using views created by prior versions. In such cases, you need to recreate the views using `ALTER VIEW AS` or `CREATE OR REPLACE VIEW AS` with newer Spark versions.
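As a sketch, recreating an affected view could look like the following; the names `t1`, `v1`, and `CURNCY_AMT` are taken from the reproduction example in this PR and are illustrative only:

```sql
-- A view created by an older Spark version captured the old return type of
-- BROUND and now fails with CANNOT_UP_CAST_DATATYPE on Spark 3.3.
-- Redefine it in place so the schema is re-derived with the fixed precision:
ALTER VIEW v1 AS SELECT BROUND(CURNCY_AMT, 6) AS CURNCY_AMT FROM t1;

-- Equivalently, replace the view outright:
CREATE OR REPLACE VIEW v1 AS SELECT BROUND(CURNCY_AMT, 6) AS CURNCY_AMT FROM t1;
```

Either statement re-resolves the view definition under the newer Spark version, so the stored schema matches the corrected return type.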
nit: more precisely: ... Spark throw `AnalysisException` of the `CANNOT_UP_CAST_DATATYPE` error class
fixed.
Failed with:
Is this related?
I do not think it is related.
… the precision of the return type of round-like functions

### What changes were proposed in this pull request?
Update the migration guide after fixing the precision of the return type of round-like functions.

How to reproduce this issue:
```sql
-- Spark 3.2
CREATE TABLE t1(CURNCY_AMT DECIMAL(18,6)) using parquet;
CREATE VIEW v1 AS SELECT BROUND(CURNCY_AMT, 6) AS CURNCY_AMT FROM t1;
```
```sql
-- Spark 3.3
SELECT * FROM v1;
org.apache.spark.sql.AnalysisException: [CANNOT_UP_CAST_DATATYPE] Cannot up cast CURNCY_AMT from "DECIMAL(19,6)" to "DECIMAL(18,6)".
```

### Why are the changes needed?
Update the migration guide.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
N/A

Closes #36821 from wangyum/SPARK-39226.

Authored-by: Yuming Wang <yumwang@ebay.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
(cherry picked from commit 1053794)
Signed-off-by: Max Gekk <max.gekk@gmail.com>