Commit 5162378

[SPARK-47818][CONNECT][FOLLOW-UP] Introduce plan cache in SparkConnectPlanner to improve performance of Analyze requests

### What changes were proposed in this pull request?

In [this previous PR](#46012), we introduced a plan cache along with two new confs for it - a static conf `spark.connect.session.planCache.maxSize` and a dynamic conf `spark.connect.session.planCache.enabled`. The plan cache is enabled by default with size 5. In this PR, we are marking both confs as internal because we don't expect users to deal with them.
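For context, an operator who did want to tune or disable the cache could still set these confs explicitly, for example in `spark-defaults.conf` (the values below are illustrative, not recommendations):

```
# Static conf: cap the number of cached resolved logical plans per session.
# A value <= 0 disables the plan cache entirely.
spark.connect.session.planCache.maxSize   0

# Dynamic conf: keep the cache sized but toggle it off at runtime.
spark.connect.session.planCache.enabled   false
```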

### Why are the changes needed?

These two confs are not expected to be used under normal circumstances, and we don't need to document them on the Spark Configuration reference page.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Existing tests.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #46638 from xi-db/SPARK-47818-plan-cache-followup2.

Authored-by: Xi Lyu <xi.lyu@databricks.com>
Signed-off-by: Herman van Hovell <herman@databricks.com>
xi-db authored and hvanhovell committed May 17, 2024
1 parent 3edd6c7 commit 5162378
Showing 1 changed file with 2 additions and 0 deletions.
@@ -279,6 +279,7 @@ object Connect {
       .doc("Sets the maximum number of cached resolved logical plans in Spark Connect Session." +
         " If set to a value less or equal than zero will disable the plan cache.")
       .version("4.0.0")
+      .internal()
       .intConf
       .createWithDefault(5)

@@ -289,6 +290,7 @@ object Connect {
         s" When false, the cache is disabled even if '${CONNECT_SESSION_PLAN_CACHE_SIZE.key}' is" +
         " greater than zero. The caching is best-effort and not guaranteed.")
       .version("4.0.0")
+      .internal()
       .booleanConf
       .createWithDefault(true)
}
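The diff above uses Spark's internal `ConfigBuilder` DSL, where `.internal()` flags an entry so it is excluded from the generated configuration reference. As a rough, self-contained sketch of that builder pattern (this is a hypothetical model for illustration, not Spark's actual implementation; all type and field names here are made up):

```scala
// Hypothetical, minimal model of a ConfigBuilder-style DSL. Not Spark's
// real code; it only illustrates how a fluent builder can carry an
// "internal" flag through to the final config entry.
final case class ConfigEntry[T](
    key: String,
    docString: String,
    versionString: String,
    isInternal: Boolean, // internal entries are hidden from public docs
    default: T)

final case class ConfigBuilder(
    key: String,
    docString: String = "",
    versionString: String = "",
    isInternal: Boolean = false) {
  def doc(d: String): ConfigBuilder = copy(docString = d)
  def version(v: String): ConfigBuilder = copy(versionString = v)
  // Marking a conf internal keeps it out of the configuration reference page.
  def internal(): ConfigBuilder = copy(isInternal = true)
  def intConf: TypedBuilder[Int] = TypedBuilder[Int](this)
  def booleanConf: TypedBuilder[Boolean] = TypedBuilder[Boolean](this)
}

final case class TypedBuilder[T](b: ConfigBuilder) {
  def createWithDefault(default: T): ConfigEntry[T] =
    ConfigEntry(b.key, b.docString, b.versionString, b.isInternal, default)
}

// Mirrors the shape of the conf touched in this commit.
val planCacheSize: ConfigEntry[Int] =
  ConfigBuilder("spark.connect.session.planCache.maxSize")
    .doc("Maximum number of cached resolved logical plans per session.")
    .version("4.0.0")
    .internal()
    .intConf
    .createWithDefault(5)
```

Because each builder step returns a copy, `.internal()` can be inserted anywhere in the chain before the typed `create*` call, which is why the two-line diff above is sufficient.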
