[SPARK-44559][PYTHON][3.5] Improve error messages for Python UDTF arrow cast #42290

Closed

allisonwang-db wants to merge 1 commit into apache:branch-3.5 from allisonwang-db:spark-44559-3.5

Conversation

@allisonwang-db
Contributor

What changes were proposed in this pull request?

This PR cherry-picks 5384f46. It improves error messages when the output of an arrow-optimized Python UDTF cannot be cast to the specified return schema of the UDTF.
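
For reference, the error applies to the arrow-optimized execution path, which can be selected per UDTF or for the whole session. The sketch below is only illustrative and not part of this PR; the `useArrow` argument and the `spark.sql.execution.pythonUDTF.arrow.enabled` conf name are assumptions based on the 3.5 API:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import lit, udtf

spark = SparkSession.builder.getOrCreate()

# Assumed conf name: enable arrow optimization for all Python UDTFs in the session.
spark.conf.set("spark.sql.execution.pythonUDTF.arrow.enabled", "true")

# Assumed parameter name: request arrow optimization for a single UDTF.
@udtf(returnType="x: int", useArrow=True)
class PlusOne:
    def eval(self, x: int):
        yield x + 1,

# Calling the UDTF returns a DataFrame with one int column 'x'.
PlusOne(lit(1)).show()
```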

Why are the changes needed?

To make Python UDTFs more user-friendly.

Does this PR introduce any user-facing change?

Yes, before this PR, when the output of a UDTF fails to cast to the desired schema, Spark will throw this confusing error message:

```python
@udtf(returnType="x: int")
class TestUDTF:
    def eval(self):
        yield [1, 2],

TestUDTF().collect()
```

```
  File "pyarrow/array.pxi", line 1044, in pyarrow.lib.Array.from_pandas
  File "pyarrow/array.pxi", line 316, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 83, in pyarrow.lib._ndarray_to_array
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Could not convert [1, 2] with type list: tried to convert to int32
```

Now, after this PR, the error message will look like this:
pyspark.errors.exceptions.base.PySparkRuntimeError: [UDTF_ARROW_TYPE_CAST_ERROR] Cannot convert the output value of the column 'x' with type 'object' to the specified return type of the column: 'int32'. Please check if the data types match and try again.
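
For contrast, here is a minimal sketch (not part of this PR) of the same UDTF with an output that does match the declared `x: int` schema, so the arrow cast succeeds; it assumes an active SparkSession and the `udtf` decorator from `pyspark.sql.functions`:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import udtf

spark = SparkSession.builder.getOrCreate()

@udtf(returnType="x: int")
class TestUDTF:
    def eval(self):
        # Yield a 1-tuple whose single field is an int, matching 'x: int',
        # instead of a list that cannot be cast to int32.
        yield 1,

# Calling the UDTF returns a DataFrame; collect() gives [Row(x=1)].
print(TestUDTF().collect())
```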

How was this patch tested?

New unit tests
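
The snippet below is only an illustrative sketch of how such a test could look, not the actual test code added by this PR. It assumes the `useArrow` parameter name from the 3.5 API, and since the `PySparkRuntimeError` is raised in the Python worker and may reach the driver wrapped in another PySpark exception, the assertion matches on the error class in the message:

```python
import unittest

from pyspark.sql import SparkSession
from pyspark.sql.functions import udtf


class UDTFArrowCastErrorTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.spark = SparkSession.builder.getOrCreate()

    @classmethod
    def tearDownClass(cls):
        cls.spark.stop()

    def test_arrow_cast_error_message(self):
        # Assumed parameter name: request the arrow-optimized execution path.
        @udtf(returnType="x: int", useArrow=True)
        class TestUDTF:
            def eval(self):
                yield [1, 2],  # a Python list cannot be cast to int32

        # The worker-side error carries the new UDTF_ARROW_TYPE_CAST_ERROR
        # error class; match it in the message regardless of how the
        # exception is wrapped on the driver.
        with self.assertRaisesRegex(Exception, "UDTF_ARROW_TYPE_CAST_ERROR"):
            TestUDTF().collect()


if __name__ == "__main__":
    unittest.main()
```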


Closes apache#42191 from allisonwang-db/spark-44559-arrow-cast.

Authored-by: allisonwang-db <allison.wang@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
(cherry picked from commit 5384f46)
Signed-off-by: allisonwang-db <allison.wang@databricks.com>
@ueshin
Member

ueshin commented Aug 2, 2023

Thanks! merging to 3.5.

ueshin pushed a commit that referenced this pull request Aug 2, 2023
[SPARK-44559][PYTHON][3.5] Improve error messages for Python UDTF arrow cast

Closes #42290 from allisonwang-db/spark-44559-3.5.

Authored-by: allisonwang-db <allison.wang@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
ueshin closed this Aug 2, 2023