[SPARK-40018][SQL][TESTS] Output SparkThrowable to SQL golden files in JSON format #37452

Closed
wants to merge 5 commits
70 changes: 17 additions & 53 deletions sql/core/src/test/resources/sql-tests/results/ansi/array.sql.out
@@ -128,7 +128,7 @@ select sort_array(array('b', 'd'), '1')
struct<>
-- !query output
org.apache.spark.sql.AnalysisException
cannot resolve 'sort_array(array('b', 'd'), '1')' due to data type mismatch: Sort order in second argument requires a boolean literal.; line 1 pos 7
{"errorClass":"legacy","messageParameters":["cannot resolve 'sort_array(array('b', 'd'), '1')' due to data type mismatch: Sort order in second argument requires a boolean literal.; line 1 pos 7"],"queryContext":[]}


-- !query
@@ -137,7 +137,7 @@ select sort_array(array('b', 'd'), cast(NULL as boolean))
struct<>
-- !query output
org.apache.spark.sql.AnalysisException
cannot resolve 'sort_array(array('b', 'd'), CAST(NULL AS BOOLEAN))' due to data type mismatch: Sort order in second argument requires a boolean literal.; line 1 pos 7
{"errorClass":"legacy","messageParameters":["cannot resolve 'sort_array(array('b', 'd'), CAST(NULL AS BOOLEAN))' due to data type mismatch: Sort order in second argument requires a boolean literal.; line 1 pos 7"],"queryContext":[]}


-- !query
@@ -165,10 +165,7 @@ select element_at(array(1, 2, 3), 5)
struct<>
-- !query output
org.apache.spark.SparkArrayIndexOutOfBoundsException
[INVALID_ARRAY_INDEX_IN_ELEMENT_AT] The index 5 is out of bounds. The array has 3 elements. Use `try_element_at` to tolerate accessing element at invalid index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select element_at(array(1, 2, 3), 5)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
{"errorClass":"INVALID_ARRAY_INDEX_IN_ELEMENT_AT","messageParameters":["5","3","\"spark.sql.ansi.enabled\""],"queryContext":[{"objectType":"","objectName":"","startIndex":7,"stopIndex":35,"fragment":"element_at(array(1, 2, 3), 5"}]}
Contributor


I think the design of the JSON format called for the messageParameters entry to be a map (parameterName -> parameterValue).

Member Author


If we put parameter names into the golden files, we prevent tech writers from renaming parameters in error-classes.json. That seems like an unnecessary restriction, doesn't it? In any case, the order of the parameters is fixed/constant in the code. cc @cloud-fan WDYT?

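To make the trade-off in this thread concrete, here is a minimal sketch of the two serialization shapes being discussed: the positional array emitted in the golden files above versus the name-to-value map proposed in the review. The helper names and the parameter names in the example are hypothetical, not Spark APIs; only the top-level field names (`errorClass`, `messageParameters`, `queryContext`) follow the golden-file output.

```python
import json

def to_json_array(error_class, params, query_context=()):
    # Array form, as in the golden files above: parameter values only,
    # in the fixed order defined in the code. Renaming a parameter in
    # error-classes.json does not change this output.
    return json.dumps({
        "errorClass": error_class,
        "messageParameters": list(params.values()),
        "queryContext": list(query_context),
    })

def to_json_map(error_class, params, query_context=()):
    # Map form, as suggested in the review: parameter names are
    # serialized too, so renaming one in error-classes.json would
    # invalidate every golden file that mentions it.
    return json.dumps({
        "errorClass": error_class,
        "messageParameters": params,
        "queryContext": list(query_context),
    })

# Hypothetical parameter names for INVALID_ARRAY_INDEX_IN_ELEMENT_AT.
params = {"index": "5", "size": "3", "config": "\"spark.sql.ansi.enabled\""}
print(to_json_array("INVALID_ARRAY_INDEX_IN_ELEMENT_AT", params))
print(to_json_map("INVALID_ARRAY_INDEX_IN_ELEMENT_AT", params))
```

The array form keeps golden files decoupled from parameter names at the cost of positional readability; the map form is self-describing but pins the names down.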


-- !query
@@ -177,10 +174,7 @@ select element_at(array(1, 2, 3), -5)
struct<>
-- !query output
org.apache.spark.SparkArrayIndexOutOfBoundsException
[INVALID_ARRAY_INDEX_IN_ELEMENT_AT] The index -5 is out of bounds. The array has 3 elements. Use `try_element_at` to tolerate accessing element at invalid index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select element_at(array(1, 2, 3), -5)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
{"errorClass":"INVALID_ARRAY_INDEX_IN_ELEMENT_AT","messageParameters":["-5","3","\"spark.sql.ansi.enabled\""],"queryContext":[{"objectType":"","objectName":"","startIndex":7,"stopIndex":36,"fragment":"element_at(array(1, 2, 3), -5"}]}


-- !query
@@ -189,7 +183,7 @@ select element_at(array(1, 2, 3), 0)
struct<>
-- !query output
org.apache.spark.SparkRuntimeException
[ELEMENT_AT_BY_INDEX_ZERO] The index 0 is invalid. An index shall be either < 0 or > 0 (the first element has index 1).
{"errorClass":"ELEMENT_AT_BY_INDEX_ZERO","messageParameters":[],"queryContext":[]}


-- !query
@@ -198,10 +192,7 @@ select elt(4, '123', '456')
struct<>
-- !query output
org.apache.spark.SparkArrayIndexOutOfBoundsException
[INVALID_ARRAY_INDEX] The index 4 is out of bounds. The array has 2 elements. Use `try_element_at` and increase the array index by 1(the starting array index is 1 for `try_element_at`) to tolerate accessing element at invalid index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select elt(4, '123', '456')
^^^^^^^^^^^^^^^^^^^^
{"errorClass":"INVALID_ARRAY_INDEX","messageParameters":["4","2","\"spark.sql.ansi.enabled\""],"queryContext":[{"objectType":"","objectName":"","startIndex":7,"stopIndex":26,"fragment":"elt(4, '123', '456'"}]}


-- !query
@@ -210,10 +201,7 @@ select elt(0, '123', '456')
struct<>
-- !query output
org.apache.spark.SparkArrayIndexOutOfBoundsException
[INVALID_ARRAY_INDEX] The index 0 is out of bounds. The array has 2 elements. Use `try_element_at` and increase the array index by 1(the starting array index is 1 for `try_element_at`) to tolerate accessing element at invalid index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select elt(0, '123', '456')
^^^^^^^^^^^^^^^^^^^^
{"errorClass":"INVALID_ARRAY_INDEX","messageParameters":["0","2","\"spark.sql.ansi.enabled\""],"queryContext":[{"objectType":"","objectName":"","startIndex":7,"stopIndex":26,"fragment":"elt(0, '123', '456'"}]}


-- !query
@@ -222,10 +210,7 @@ select elt(-1, '123', '456')
struct<>
-- !query output
org.apache.spark.SparkArrayIndexOutOfBoundsException
[INVALID_ARRAY_INDEX] The index -1 is out of bounds. The array has 2 elements. Use `try_element_at` and increase the array index by 1(the starting array index is 1 for `try_element_at`) to tolerate accessing element at invalid index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select elt(-1, '123', '456')
^^^^^^^^^^^^^^^^^^^^^
{"errorClass":"INVALID_ARRAY_INDEX","messageParameters":["-1","2","\"spark.sql.ansi.enabled\""],"queryContext":[{"objectType":"","objectName":"","startIndex":7,"stopIndex":27,"fragment":"elt(-1, '123', '456'"}]}


-- !query
@@ -266,10 +251,7 @@ select array(1, 2, 3)[5]
struct<>
-- !query output
org.apache.spark.SparkArrayIndexOutOfBoundsException
[INVALID_ARRAY_INDEX] The index 5 is out of bounds. The array has 3 elements. Use `try_element_at` and increase the array index by 1(the starting array index is 1 for `try_element_at`) to tolerate accessing element at invalid index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select array(1, 2, 3)[5]
^^^^^^^^^^^^^^^^^
{"errorClass":"INVALID_ARRAY_INDEX","messageParameters":["5","3","\"spark.sql.ansi.enabled\""],"queryContext":[{"objectType":"","objectName":"","startIndex":7,"stopIndex":23,"fragment":"array(1, 2, 3)[5"}]}


-- !query
@@ -278,10 +260,7 @@ select array(1, 2, 3)[-1]
struct<>
-- !query output
org.apache.spark.SparkArrayIndexOutOfBoundsException
[INVALID_ARRAY_INDEX] The index -1 is out of bounds. The array has 3 elements. Use `try_element_at` and increase the array index by 1(the starting array index is 1 for `try_element_at`) to tolerate accessing element at invalid index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select array(1, 2, 3)[-1]
^^^^^^^^^^^^^^^^^^
{"errorClass":"INVALID_ARRAY_INDEX","messageParameters":["-1","3","\"spark.sql.ansi.enabled\""],"queryContext":[{"objectType":"","objectName":"","startIndex":7,"stopIndex":24,"fragment":"array(1, 2, 3)[-1"}]}


-- !query
@@ -322,7 +301,7 @@ select array_size(map('a', 1, 'b', 2))
struct<>
-- !query output
org.apache.spark.sql.AnalysisException
cannot resolve 'array_size(map('a', 1, 'b', 2))' due to data type mismatch: argument 1 requires array type, however, 'map('a', 1, 'b', 2)' is of map<string,int> type.; line 1 pos 7
{"errorClass":"legacy","messageParameters":["cannot resolve 'array_size(map('a', 1, 'b', 2))' due to data type mismatch: argument 1 requires array type, however, 'map('a', 1, 'b', 2)' is of map<string,int> type.; line 1 pos 7"],"queryContext":[]}


-- !query
@@ -355,10 +334,7 @@ select element_at(array(1, 2, 3), 5)
struct<>
-- !query output
org.apache.spark.SparkArrayIndexOutOfBoundsException
[INVALID_ARRAY_INDEX_IN_ELEMENT_AT] The index 5 is out of bounds. The array has 3 elements. Use `try_element_at` to tolerate accessing element at invalid index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select element_at(array(1, 2, 3), 5)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
{"errorClass":"INVALID_ARRAY_INDEX_IN_ELEMENT_AT","messageParameters":["5","3","\"spark.sql.ansi.enabled\""],"queryContext":[{"objectType":"","objectName":"","startIndex":7,"stopIndex":35,"fragment":"element_at(array(1, 2, 3), 5"}]}


-- !query
@@ -367,10 +343,7 @@ select element_at(array(1, 2, 3), -5)
struct<>
-- !query output
org.apache.spark.SparkArrayIndexOutOfBoundsException
[INVALID_ARRAY_INDEX_IN_ELEMENT_AT] The index -5 is out of bounds. The array has 3 elements. Use `try_element_at` to tolerate accessing element at invalid index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select element_at(array(1, 2, 3), -5)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
{"errorClass":"INVALID_ARRAY_INDEX_IN_ELEMENT_AT","messageParameters":["-5","3","\"spark.sql.ansi.enabled\""],"queryContext":[{"objectType":"","objectName":"","startIndex":7,"stopIndex":36,"fragment":"element_at(array(1, 2, 3), -5"}]}


-- !query
@@ -379,7 +352,7 @@ select element_at(array(1, 2, 3), 0)
struct<>
-- !query output
org.apache.spark.SparkRuntimeException
[ELEMENT_AT_BY_INDEX_ZERO] The index 0 is invalid. An index shall be either < 0 or > 0 (the first element has index 1).
{"errorClass":"ELEMENT_AT_BY_INDEX_ZERO","messageParameters":[],"queryContext":[]}


-- !query
@@ -388,10 +361,7 @@ select elt(4, '123', '456')
struct<>
-- !query output
org.apache.spark.SparkArrayIndexOutOfBoundsException
[INVALID_ARRAY_INDEX] The index 4 is out of bounds. The array has 2 elements. Use `try_element_at` and increase the array index by 1(the starting array index is 1 for `try_element_at`) to tolerate accessing element at invalid index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select elt(4, '123', '456')
^^^^^^^^^^^^^^^^^^^^
{"errorClass":"INVALID_ARRAY_INDEX","messageParameters":["4","2","\"spark.sql.ansi.enabled\""],"queryContext":[{"objectType":"","objectName":"","startIndex":7,"stopIndex":26,"fragment":"elt(4, '123', '456'"}]}


-- !query
@@ -400,10 +370,7 @@ select elt(0, '123', '456')
struct<>
-- !query output
org.apache.spark.SparkArrayIndexOutOfBoundsException
[INVALID_ARRAY_INDEX] The index 0 is out of bounds. The array has 2 elements. Use `try_element_at` and increase the array index by 1(the starting array index is 1 for `try_element_at`) to tolerate accessing element at invalid index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select elt(0, '123', '456')
^^^^^^^^^^^^^^^^^^^^
{"errorClass":"INVALID_ARRAY_INDEX","messageParameters":["0","2","\"spark.sql.ansi.enabled\""],"queryContext":[{"objectType":"","objectName":"","startIndex":7,"stopIndex":26,"fragment":"elt(0, '123', '456'"}]}


-- !query
@@ -412,7 +379,4 @@ select elt(-1, '123', '456')
struct<>
-- !query output
org.apache.spark.SparkArrayIndexOutOfBoundsException
[INVALID_ARRAY_INDEX] The index -1 is out of bounds. The array has 2 elements. Use `try_element_at` and increase the array index by 1(the starting array index is 1 for `try_element_at`) to tolerate accessing element at invalid index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select elt(-1, '123', '456')
^^^^^^^^^^^^^^^^^^^^^
{"errorClass":"INVALID_ARRAY_INDEX","messageParameters":["-1","2","\"spark.sql.ansi.enabled\""],"queryContext":[{"objectType":"","objectName":"","startIndex":7,"stopIndex":27,"fragment":"elt(-1, '123', '456'"}]}