[SPARK-32019][SQL] Add spark.sql.files.minPartitionNum config #28853

Closed
wants to merge 10 commits into from
Changes from 4 commits
@@ -1176,6 +1176,15 @@ object SQLConf {
    .longConf
    .createWithDefault(4 * 1024 * 1024)

+  val FILES_MIN_PARTITION_NUM = buildConf("spark.sql.files.minPartitionNum")
+    .doc("The suggested (not guaranteed) minimum number of file split partitions. If not set, " +
Member: file split -> split file?

"the default value is the default parallelism of the Spark cluster. This configuration is " +
Member: the default parallelism of the Spark cluster -> spark.default.parallelism?

"effective only when using file-based sources such as Parquet, JSON and ORC.")
.version("3.1.0")
.intConf
cloud-fan marked this conversation as resolved.
Show resolved Hide resolved
.checkValue(v => v > 0, "The min partition number must be a positive integer.")
.createOptional

  val IGNORE_CORRUPT_FILES = buildConf("spark.sql.files.ignoreCorruptFiles")
    .doc("Whether to ignore corrupt files. If true, the Spark jobs will continue to run when " +
      "encountering corrupted files and the contents that have been read will still be returned. " +
@@ -2782,6 +2791,8 @@ class SQLConf extends Serializable with Logging {

  def filesOpenCostInBytes: Long = getConf(FILES_OPEN_COST_IN_BYTES)

+  def filesMinPartitionNum: Option[Int] = getConf(FILES_MIN_PARTITION_NUM)

  def ignoreCorruptFiles: Boolean = getConf(IGNORE_CORRUPT_FILES)

  def ignoreMissingFiles: Boolean = getConf(IGNORE_MISSING_FILES)
@@ -88,9 +88,10 @@ object FilePartition extends Logging {
      selectedPartitions: Seq[PartitionDirectory]): Long = {
    val defaultMaxSplitBytes = sparkSession.sessionState.conf.filesMaxPartitionBytes
    val openCostInBytes = sparkSession.sessionState.conf.filesOpenCostInBytes
-   val defaultParallelism = sparkSession.sparkContext.defaultParallelism
+   val minPartitionNum = sparkSession.sessionState.conf.filesMinPartitionNum
+     .getOrElse(sparkSession.sparkContext.defaultParallelism)
    val totalBytes = selectedPartitions.flatMap(_.files.map(_.getLen + openCostInBytes)).sum
-   val bytesPerCore = totalBytes / defaultParallelism
+   val bytesPerCore = totalBytes / minPartitionNum

    Math.min(defaultMaxSplitBytes, Math.max(openCostInBytes, bytesPerCore))
  }
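
To see what the change does, a worked sketch of the formula with assumed values (the two byte sizes are the Spark defaults; the workload numbers are made up):

// Assumed workload: 100 files totalling 10 GiB, spark.sql.files.minPartitionNum = 8.
val defaultMaxSplitBytes = 128L * 1024 * 1024    // spark.sql.files.maxPartitionBytes default
val openCostInBytes = 4L * 1024 * 1024           // spark.sql.files.openCostInBytes default
val minPartitionNum = 8
val totalBytes = 10L * 1024 * 1024 * 1024 + 100 * openCostInBytes
val bytesPerCore = totalBytes / minPartitionNum  // ~1.3 GiB
Math.min(defaultMaxSplitBytes, Math.max(openCostInBytes, bytesPerCore))
// == 128 MiB here, so the scan yields ~80 partitions, comfortably above the minimum.

A larger minPartitionNum shrinks bytesPerCore, which lowers maxSplitBytes and produces more, smaller splits; the open cost keeps a split from shrinking below 4 MiB.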
@@ -528,6 +528,18 @@ class FileSourceStrategySuite extends QueryTest with SharedSparkSession with Pre
    }
  }

+  test("Add spark.sql.files.minPartitionNum config") {

Member: Shall we add a SPARK-32019: prefix to this test case name?

Contributor (author): Since it's not a bug, is it needed?

Contributor: It's better to add it, since it's a dedicated test case for this JIRA ticket.

Contributor (author): Added it.

+    withSQLConf(SQLConf.FILES_MIN_PARTITION_NUM.key -> "1") {
+      val table =
+        createTable(files = Seq(
+          "file1" -> 1,
+          "file2" -> 1,
+          "file3" -> 1
+        ))
+      assert(table.rdd.partitions.length == 1)
+    }
+  }
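
A hypothetical companion check (not in this commit; it reuses the suite's createTable helper and assumes the default 4 MiB open cost): raising the minimum should split the same tiny files apart.

withSQLConf(SQLConf.FILES_MIN_PARTITION_NUM.key -> "3") {
  val table = createTable(files = Seq("file1" -> 1, "file2" -> 1, "file3" -> 1))
  // Each 1-byte file carries the 4 MiB open cost, so bytesPerCore ~= 4 MiB,
  // maxSplitBytes ~= 4 MiB, and each file lands in its own partition.
  assert(table.rdd.partitions.length == 3)
}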

  // Helpers for checking the arguments passed to the FileFormat.

  protected val checkPartitionSchema =