
@LuciferYang commented on Oct 13, 2025:

What changes were proposed in this pull request?

This PR adds a pre-check for module dependencies to the test cases in the `connect` module that utilize Python client code. This ensures that when the required Python modules are missing, the relevant tests are skipped rather than failing.
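
The check is surfaced through a helper that the suites consult via ScalaTest's `assume`. The sketch below is a minimal illustration only: the interpreter name (`python3`), the probe script, and the caching strategy are assumptions, not the merged implementation.

```scala
import scala.sys.process._
import scala.util.Try

// Minimal sketch of a dependency probe, assuming a `python3` on PATH;
// the real PythonTestDepsChecker in this PR may differ in detail.
object PythonTestDepsChecker {
  private def canImport(modules: String*): Boolean = Try {
    val script = modules.map(m => s"import $m").mkString("; ")
    // Discard the interpreter's output; exit code 0 means all imports succeeded.
    Seq("python3", "-c", script).!(ProcessLogger(_ => ())) == 0
  }.getOrElse(false)

  // The Connect Python client requires grpcio (imported as `grpc`).
  lazy val isConnectDepsAvailable: Boolean = canImport("grpc")

  // End-to-end declarative pipeline (SDP) tests additionally need pyyaml.
  lazy val isYamlAvailable: Boolean = canImport("yaml")
}
```

With such a flag in place, `assume(PythonTestDepsChecker.isConnectDepsAvailable)` at the start of a test makes ScalaTest cancel it (the `!!! CANCELED !!!` entries shown under testing below) rather than letting a missing-module error surface as a failure.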

Why are the changes needed?

Some test cases in the `connect` module involve interactions between Python client code and the Connect server. When the required Python modules (e.g. `grpcio`) are absent, these tests currently fail outright.

`build/sbt 'connect/test'`

```
[info] - Pipeline with selective full_refresh *** FAILED *** (374 milliseconds)
[info]   java.lang.RuntimeException: Pipeline update process failed with exit code 1.
[info] Output: 
[info] Error: Traceback (most recent call last):
[info]   File "/Users/yangjie01/SourceCode/git/spark-sbt/python/pyspark/sql/connect/utils.py", line 47, in require_minimum_grpc_version
[info]     import grpc
[info] ModuleNotFoundError: No module named 'grpc'
[info] 
[info] The above exception was the direct cause of the following exception:
[info] 
[info] Traceback (most recent call last):
[info]   File "/Users/yangjie01/SourceCode/git/spark-sbt/python/pyspark/pipelines/cli.py", line 36, in <module>
[info]     from pyspark.pipelines.block_session_mutations import block_session_mutations
[info]   File "/Users/yangjie01/SourceCode/git/spark-sbt/python/pyspark/pipelines/block_session_mutations.py", line 21, in <module>
[info]     from pyspark.sql.connect.catalog import Catalog
[info]   File "/Users/yangjie01/SourceCode/git/spark-sbt/python/pyspark/sql/connect/catalog.py", line 20, in <module>
[info]     check_dependencies(__name__)
[info]   File "/Users/yangjie01/SourceCode/git/spark-sbt/python/pyspark/sql/connect/utils.py", line 37, in check_dependencies
[info]     require_minimum_grpc_version()
[info]   File "/Users/yangjie01/SourceCode/git/spark-sbt/python/pyspark/sql/connect/utils.py", line 49, in require_minimum_grpc_version
[info]     raise PySparkImportError(
[info] pyspark.errors.exceptions.base.PySparkImportError: [PACKAGE_NOT_INSTALLED] grpcio >= 1.48.1 must be installed; however, it was not found.
...
raceback (most recent call last):
  File "/Users/yangjie01/SourceCode/git/spark-sbt/python/pyspark/sql/connect/utils.py", line 47, in require_minimum_grpc_version
    import grpc
ModuleNotFoundError: No module named 'grpc'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/yangjie01/SourceCode/gravitino/.gradle/python/MacOSX/Miniforge3/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/yangjie01/SourceCode/gravitino/.gradle/python/MacOSX/Miniforge3/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/yangjie01/SourceCode/git/spark-sbt/python/pyspark/sql/connect/streaming/worker/foreach_batch_worker.py", line 32, in <module>
    from pyspark.sql.connect.session import SparkSession
  File "/Users/yangjie01/SourceCode/git/spark-sbt/python/pyspark/sql/connect/session.py", line 20, in <module>
    check_dependencies(__name__)
  File "/Users/yangjie01/SourceCode/git/spark-sbt/python/pyspark/sql/connect/utils.py", line 37, in check_dependencies
    require_minimum_grpc_version()
  File "/Users/yangjie01/SourceCode/git/spark-sbt/python/pyspark/sql/connect/utils.py", line 49, in require_minimum_grpc_version
    raise PySparkImportError(
pyspark.errors.exceptions.base.PySparkImportError: [PACKAGE_NOT_INSTALLED] grpcio >= 1.48.1 must be installed; however, it was not found.
[info] - python foreachBatch process: process terminates after query is stopped *** FAILED *** (14 seconds, 188 milliseconds)
[info]   org.apache.spark.SparkException: Python worker failed to connect back.

...
[info] *** 48 TESTS FAILED ***
[error] Failed tests:
[error]   org.apache.spark.sql.connect.pipelines.PythonPipelineSuite
[error]   org.apache.spark.sql.connect.pipelines.EndToEndAPISuite
[error]   org.apache.spark.sql.connect.service.SparkConnectSessionHolderSuite
[error] (connect / Test / test) sbt.TestsFailedException: Tests unsuccessful
```

Does this PR introduce any user-facing change?

No

How was this patch tested?

- Pass GitHub Actions
- Manually verified by running `build/sbt 'connect/test'`
```
[info] - temporary views works !!! CANCELED !!! (0 milliseconds)
[info]   org.apache.spark.sql.connect.PythonTestDepsChecker.isConnectDepsAvailable was false (PythonPipelineSuite.scala:52)
...
[info] - create named flow with multipart name will fail !!! CANCELED !!! (1 millisecond)
[info]   org.apache.spark.sql.connect.PythonTestDepsChecker.isConnectDepsAvailable was false (PythonPipelineSuite.scala:521)
...
[info] - create flow with multipart target and no explicit name succeeds !!! CANCELED !!! (1 millisecond)
[info]   org.apache.spark.sql.connect.PythonTestDepsChecker.isConnectDepsAvailable was false (PythonPipelineSuite.scala:52)
...
[info] Run completed in 3 minutes, 27 seconds.
[info] Total number of tests run: 1088
[info] Suites: completed 37, aborted 0
[info] Tests: succeeded 1088, failed 0, canceled 45, ignored 0, pending 0
[info] All tests passed.
```

Was this patch authored or co-authored using generative AI tooling?

No

@LuciferYang marked this pull request as draft on October 13, 2025 10:06
@LuciferYang commented:

Test first

@LuciferYang changed the title from "[CONNECT][TESTS] Add dependency checks for Python-related tests in the connect module" to "[SPARK-53897][CONNECT][TESTS] Add dependency checks for Python-related tests in the connect module" on Oct 14, 2025
@LuciferYang marked this pull request as ready for review on October 14, 2025 03:34

```scala
override def awaitPipelineTermination(pipeline: PipelineReference, duration: Duration): Unit = {
  assume(PythonTestDepsChecker.isConnectDepsAvailable)
  assume(PythonTestDepsChecker.isYamlAvailable)
```
@LuciferYang commented:

End-to-end testing for SDP requires pyyaml
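
As a rough illustration of the guard pattern (the suite name and body here are hypothetical, not the merged test):

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical suite (not the merged test) showing the guard pattern:
// when an assume() predicate is false, ScalaTest throws
// TestCanceledException and reports the test as CANCELED, not FAILED.
class EndToEndGuardExampleSuite extends AnyFunSuite {
  test("pipeline runs end to end") {
    assume(PythonTestDepsChecker.isConnectDepsAvailable) // Connect client needs grpcio
    assume(PythonTestDepsChecker.isYamlAvailable)        // SDP specs are YAML, so pyyaml too
    // ... real pipeline assertions would follow ...
  }
}
```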

@LuciferYang commented:

Merged into master. Thanks @HyukjinKwon

@LuciferYang deleted the `connect-py-tests` branch on October 15, 2025 05:37
huangxiaopingRD pushed a commit to huangxiaopingRD/spark that referenced this pull request Nov 25, 2025
[SPARK-53897][CONNECT][TESTS] Add dependency checks for Python-related tests in the `connect` module

Closes apache#52588 from LuciferYang/connect-py-tests.

Authored-by: yangjie01 <yangjie01@baidu.com>
Signed-off-by: yangjie01 <yangjie01@baidu.com>