Conversation

@nemanjapetr-db
Contributor

What changes were proposed in this pull request?

This PR adds a PySpark test that checks parameter binding wrapped within a Limit node, which caused an infinite loop before the #47271 bug fix. The test will fail if the bug is accidentally reintroduced.

Why are the changes needed?

The PySpark test complements the Scala test.

Does this PR introduce any user-facing change?

No

How was this patch tested?

Manually ran the test.

Was this patch authored or co-authored using generative AI tooling?

No.

…s with BindParameters. Executes a query where parameter binding was wrapped in a Limit node.
assert cls.spark is not None
assert cls.spark._jvm.SparkSession.getDefaultSession().isDefined()

def test_wrapping_plan_in_limit_node(self):
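The test body itself is not shown in this excerpt. As a hedged illustration of the scenario the PR describes, the check could look roughly like the sketch below. This is not the PR's actual test: the helper name `check_param_binding_under_limit` and the specific query are assumptions, chosen only to show a named parameter being bound in a plan that is wrapped in a Limit node. Before the #47271 fix, such a query could loop forever; after the fix it must simply return a result.

```python
# Hedged sketch (NOT the exact test from the PR). Assumes PySpark 3.4+,
# where SparkSession.sql accepts an `args` mapping for named parameters.
try:
    from pyspark.sql import SparkSession
except ImportError:  # pyspark not installed; keep the sketch importable
    SparkSession = None


def check_param_binding_under_limit():
    """Bind a named parameter in a query whose plan is wrapped in a
    Limit node; the call must terminate and return the bound value."""
    if SparkSession is None:
        return None
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    try:
        # The LIMIT clause wraps the parameterized plan in a Limit node;
        # :x is resolved by BindParameters from the `args` mapping.
        rows = spark.sql("SELECT :x AS v LIMIT 1", args={"x": 42}).collect()
        return rows[0]["v"]
    finally:
        spark.stop()
```

If the infinite-loop bug were reintroduced, the `collect()` call would hang rather than fail with an assertion, so in practice such a test relies on the test runner's timeout to surface the regression.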
Contributor

Any reason why we need a PySpark specific test for this?

Contributor

spark.sql behaves the same in Scala Spark and Python Spark; otherwise we would need to duplicate many tests in PySpark.

@nemanjapetr-db
Contributor Author

Cancelling this PR.

@nemanjapetr-db nemanjapetr-db deleted the infiniteloop2 branch August 13, 2024 12:35