[SPARK-42345][SQL] Rename TimestampNTZ inference conf as spark.sql.sources.timestampNTZTypeInference.enabled

### What changes were proposed in this pull request?

Rename the TimestampNTZ data source inference configuration from `spark.sql.inferTimestampNTZInDataSources.enabled` to `spark.sql.sources.timestampNTZTypeInference.enabled`.
For more context on this configuration:
#39777
#39812
#39868
### Why are the changes needed?

Since the configuration applies to data sources, it belongs under the prefix `spark.sql.sources`. The new name is also consistent with the existing configuration `spark.sql.sources.partitionColumnTypeInference.enabled`. A short usage sketch is shown below.
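
As a quick illustration of the renamed flag (a minimal sketch, assuming a local SparkSession; the file path, column contents, and app name are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("ntz-inference-example") // hypothetical app name
  .getOrCreate()

// With the flag enabled, timestamp-like strings that carry no time zone are
// inferred as TimestampNTZType during CSV/JSON schema inference.
spark.conf.set("spark.sql.sources.timestampNTZTypeInference.enabled", "true")

val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/tmp/events.csv") // hypothetical input file with a timestamp-like column

// The affected column should print as 'timestamp_ntz' with the flag on,
// and as the session-local 'timestamp' type with it off (the default).
df.printSchema()
```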

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

Closes #39885 from gengliangwang/renameConf.

Authored-by: Gengliang Wang <gengliang@apache.org>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
gengliangwang authored and MaxGekk committed Feb 5, 2023
1 parent 67285c3 commit c5c1927
Showing 2 changed files with 11 additions and 11 deletions.
@@ -1416,6 +1416,16 @@ object SQLConf {
.booleanConf
.createWithDefault(true)

val INFER_TIMESTAMP_NTZ_IN_DATA_SOURCES =
buildConf("spark.sql.sources.timestampNTZTypeInference.enabled")
.doc("For the schema inference of JSON/CSV/JDBC data sources and partition directories, " +
"this config determines whether to choose the TimestampNTZ type if a column can be " +
"either TimestampNTZ or TimestampLTZ type. If set to true, the inference result of " +
"the column will be TimestampNTZ type. Otherwise, the result will be TimestampLTZ type.")
.version("3.4.0")
.booleanConf
.createWithDefault(false)

val BUCKETING_ENABLED = buildConf("spark.sql.sources.bucketing.enabled")
.doc("When false, we will treat bucketed table as normal table")
.version("2.0.0")
@@ -3518,16 +3528,6 @@ object SQLConf {
.checkValues(TimestampTypes.values.map(_.toString))
.createWithDefault(TimestampTypes.TIMESTAMP_LTZ.toString)

val INFER_TIMESTAMP_NTZ_IN_DATA_SOURCES =
buildConf("spark.sql.inferTimestampNTZInDataSources.enabled")
.doc("For the schema inference of JSON/CSV/JDBC data sources and partition directories, " +
"this config determines whether to choose the TimestampNTZ type if a column can be " +
"either TimestampNTZ or TimestampLTZ type. If set to true, the inference result of " +
"the column will be TimestampNTZ type. Otherwise, the result will be TimestampLTZ type.")
.version("3.4.0")
.booleanConf
.createWithDefault(false)

val DATETIME_JAVA8API_ENABLED = buildConf("spark.sql.datetime.java8API.enabled")
.doc("If the configuration property is set to true, java.time.Instant and " +
"java.time.LocalDate classes of Java 8 API are used as external types for " +
@@ -490,7 +490,7 @@ object PartitioningUtils extends SQLConfHelper {
val unescapedRaw = unescapePathName(raw)
// try and parse the date, if no exception occurs this is a candidate to be resolved as
// TimestampType or TimestampNTZType. The inferred timestamp type is controlled by the conf
// "spark.sql.inferTimestampNTZInDataSources.enabled".
// "spark.sql.sources.timestampNTZTypeInference.enabled".
val timestampType = conf.timestampTypeInSchemaInference
timestampType match {
case TimestampType => timestampFormatter.parse(unescapedRaw)
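
For the partition-inference path touched above, a minimal sketch of the user-visible effect (the table path and partition column are hypothetical, assuming an existing `spark` session):

```scala
// Hypothetical partitioned layout on disk:
//   /data/events/ts=2023-02-05 12%3A00%3A00/part-00000.parquet
spark.conf.set("spark.sql.sources.timestampNTZTypeInference.enabled", "true")

// With the flag on, the partition column 'ts' is inferred as TimestampNTZType;
// with it off (the default), it falls back to the session TimestampType (LTZ).
val events = spark.read.parquet("/data/events")
events.printSchema()
```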
