[SPARK-53934][SQL][FOLLOWUP] Close Spark connect client connection and mark as closed #53236
Conversation
@dongjoon-hyun, please review.
cc @pan3793, @LuciferYang, too.
    try {
      conn.spark.interruptOperation(operationId)
    } catch {
      case _: Exception =>
Can this be a more specific exception, @vinodkc?
It would be great if we could match the exceptions explicitly, for example java.net.ConnectException.
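The broad catch could be narrowed along the lines the reviewer suggests. A minimal sketch follows; the helper name interruptQuietly and the assumption that a ConnectException is what surfaces when the connection is gone are illustrative, not taken from the actual patch:

```scala
import java.net.ConnectException
import scala.util.control.NonFatal

object InterruptHelper {
  // Hypothetical sketch: catch only the failures expected during cleanup.
  // `interruptOp` stands in for the real conn.spark.interruptOperation(operationId) call.
  def interruptQuietly(interruptOp: () => Unit): Unit = {
    try {
      interruptOp()
    } catch {
      case _: ConnectException =>
        // Connection already gone; nothing left to interrupt.
      case NonFatal(_) =>
        // Other recoverable failures during cleanup are ignored as well;
        // fatal errors (OOM, etc.) still propagate.
    }
  }
}
```

Matching explicit exception types (or at least NonFatal) documents which failures are expected, instead of silently swallowing everything including programming errors.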
Can we just use the utility method Utils.closeQuietly?
nvm, this is fine
> Can we just use the utility method Utils.closeQuietly?
+1
      resultSet = null
    }
    - closed = false
    + closed = true
oops, this is an obvious bug.
Does #53233 help here?
We should fully understand the issue. Do you know where we fail to cancel an operation when the operation is already done? What kind of exception do we throw?
What changes were proposed in this pull request?
This PR fixes flaky test failures in SparkConnectJdbcDataTypeSuite by adding defensive exception handling in SparkConnectStatement.close(). It also fixes a bug where the statement was not properly marked as closed.
Why are the changes needed?
The test "get binary type" (and potentially others) was failing ~27% of the time.

Root cause: during statement cleanup, close() calls interruptOperation(), which makes a gRPC network call. This call can fail during cleanup, for example when the operation has already completed.
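The defensive pattern described above can be sketched as follows. This is a simplified illustration, not the actual SparkConnectStatement code; the class and field names are placeholders:

```scala
// Simplified sketch of the defensive close() pattern: the interrupt is
// best-effort, and the statement is always marked closed afterwards so
// later calls see a consistent state.
class StatementLike(interruptOp: () => Unit) {
  private var resultSet: AnyRef = new Object
  var closed: Boolean = false

  def close(): Unit = {
    if (!closed) {
      try {
        interruptOp() // gRPC call; may fail if the operation already finished
      } catch {
        case _: Exception => // ignore: cleanup must not throw
      }
      resultSet = null
      closed = true // the original bug set this to false
    }
  }
}
```

With this shape, a failed interrupt no longer aborts cleanup, and the closed flag is set correctly regardless of whether the gRPC call succeeded.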
Does this PR introduce any user-facing change?
No. This only affects internal test reliability.
How was this patch tested?
Verified the fix follows the established pattern in SparkSession.close() (lines 679-690)
The existing test suite validates correct behavior
The exception handling only affects cleanup, not core functionality
Was this patch authored or co-authored using generative AI tooling?
No