@@ -1598,7 +1598,7 @@ object SQLConf {
"variant logical type.")
.version("4.1.0")
.booleanConf
-      .createWithDefault(false)
+      .createWithDefault(true)
Contributor:
I think we should at least enable this config in 4.1, to conform to the Parquet spec. If many people start to use the variant type with Spark 4.1, the entire ecosystem may be forced to support the Spark-specific variant type in Parquet.

@harshmotw-db can we open a separate PR for this config? And also cc @dongjoon-hyun


val PARQUET_IGNORE_VARIANT_ANNOTATION =
buildConf("spark.sql.parquet.ignoreVariantAnnotation")
@@ -5610,15 +5610,15 @@ object SQLConf {
"requested fields.")
.version("4.0.0")
.booleanConf
-      .createWithDefault(false)
+      .createWithDefault(true)

val VARIANT_WRITE_SHREDDING_ENABLED =
buildConf("spark.sql.variant.writeShredding.enabled")
.internal()
.doc("When true, the Parquet writer is allowed to write shredded variant. ")
.version("4.0.0")
.booleanConf
-      .createWithDefault(false)
+      .createWithDefault(true)
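Since the compiled-in default flips to `true`, anyone who needs the previous unshredded behavior can still override the flag at the session level. A minimal sketch, assuming a running Spark 4.x build where this config exists (the local-master session here is purely illustrative):

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: a throwaway local session to demonstrate the override.
val spark = SparkSession.builder()
  .master("local[1]")
  .getOrCreate()

// A runtime override takes precedence over the new compiled-in default.
spark.conf.set("spark.sql.variant.writeShredding.enabled", "false")
println(spark.conf.get("spark.sql.variant.writeShredding.enabled"))
```

Note that because the config is marked `.internal()`, it will not appear in the public configuration docs, but it can still be set like any other SQL conf.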

val VARIANT_FORCE_SHREDDING_SCHEMA_FOR_TEST =
buildConf("spark.sql.variant.forceShreddingSchemaForTest")
@@ -5651,7 +5651,7 @@ object SQLConf {
.doc("Infer shredding schema when writing Variant columns in Parquet tables.")
.version("4.1.0")
.booleanConf
-      .createWithDefault(false)
+      .createWithDefault(true)
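Taken together, the flipped defaults mean a plain Parquet write of a variant column would now be shredded using an inferred schema, with no extra configuration from the user. A hedged sketch of what that looks like from user code, assuming a Spark 4.x build with the variant type (`parse_json` is the standard entry point; the output path is illustrative):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.parse_json

val spark = SparkSession.builder().master("local[1]").getOrCreate()
import spark.implicits._

// Build a variant column from JSON strings; with the new defaults, the
// Parquet writer may shred "v" into typed subcolumns using an inferred schema.
val df = Seq("""{"a": 1, "b": "x"}""", """{"a": 2, "b": "y"}""")
  .toDF("json")
  .select(parse_json($"json").as("v"))

df.write.mode("overwrite").parquet("/tmp/variant_shredded")  // path is illustrative
```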

val LEGACY_CSV_ENABLE_DATE_TIME_PARSING_FALLBACK =
buildConf("spark.sql.legacy.csv.enableDateTimeParsingFallback")