
[SPARK-30841][SQL][DOC][FOLLOW-UP] Add version information to the configuration of SQL #27829

Closed
beliefer wants to merge 1 commit into apache:master from beliefer:add-version-to-sql-config-part-four

Conversation

beliefer (Contributor) commented Mar 6, 2020

What changes were proposed in this pull request?

This PR follows #27691, #27730 and #27770
I sorted out the information shown below.

Item name | Since version | JIRA ID | Commit ID | Note
-- | -- | -- | -- | --
spark.sql.redaction.options.regex | 2.2.2 | SPARK-23850 | 6a55d8b#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.redaction.string.regex | 2.3.0 | SPARK-22791 | 2831571#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.function.concatBinaryAsString | 2.3.0 | SPARK-22771 | f2b3525#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.function.eltOutputAsString | 2.3.0 | SPARK-22937 | bf85301#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.sources.validatePartitionColumns | 3.0.0 | SPARK-26263 | 5a140b7#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.streaming.continuous.epochBacklogQueueSize | 3.0.0 | SPARK-24063 | c4bbfd1#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.streaming.continuous.executorQueueSize | 2.3.0 | SPARK-22789 | 8941a4a#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.streaming.continuous.executorPollIntervalMs | 2.3.0 | SPARK-22789 | 8941a4a#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.sources.useV1SourceList | 3.0.0 | SPARK-28747 | cb06209#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.streaming.disabledV2Writers | 2.3.1 | SPARK-23196 | 588b969#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.streaming.disabledV2MicroBatchReaders | 2.4.0 | SPARK-23362 | 0a73aa3#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.sources.partitionOverwriteMode | 2.3.0 | SPARK-20236 | b962488#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.storeAssignmentPolicy | 3.0.0 | SPARK-28730 | 895c90b#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.ansi.enabled | 3.0.0 | SPARK-30125 | d9b3069#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.execution.sortBeforeRepartition | 2.1.4 | SPARK-23207 and SPARK-22905 and SPARK-24564 and SPARK-25114 | 4d2d3d4#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.optimizer.nestedSchemaPruning.enabled | 2.4.1 | SPARK-4502 | dfcff38#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.optimizer.serializer.nestedSchemaPruning.enabled | 3.0.0 | SPARK-26837 | 0f2c0b5#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.optimizer.expression.nestedPruning.enabled | 3.0.0 | SPARK-27707 | 127bc89#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.execution.topKSortFallbackThreshold | 2.4.0 | SPARK-24193 | 8a837bf#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.csv.parser.columnPruning.enabled | 2.4.0 | SPARK-24244 and SPARK-24368 | 64fad0b#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.repl.eagerEval.enabled | 2.4.0 | SPARK-24215 | 6a0b77a#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.repl.eagerEval.maxNumRows | 2.4.0 | SPARK-24215 | 6a0b77a#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.repl.eagerEval.truncate | 2.4.0 | SPARK-24215 | 6a0b77a#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.codegen.aggregate.fastHashMap.capacityBit | 2.4.0 | SPARK-24978 | 6193a20#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.avro.compression.codec | 2.4.0 | SPARK-24881 | 0a0f68b#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.avro.deflate.level | 2.4.0 | SPARK-24881 | 0a0f68b#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.sizeOfNull | 2.4.0 | SPARK-24605 | d08f53d#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.replaceDatabricksSparkAvro.enabled | 2.4.0 | SPARK-25129 | ac0174e#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.setopsPrecedence.enabled | 2.4.0 | SPARK-24966 | 73dd6cf#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.exponentLiteralAsDecimal.enabled | 3.0.0 | SPARK-29956 | 87ebfaf#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.allowNegativeScaleOfDecimal | 3.0.0 | SPARK-30812 | b76bc0b#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.createHiveTableByDefault.enabled | 3.0.0 | SPARK-30098 | 58be82a#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.integralDivide.returnBigint | 3.0.0 | SPARK-25457 | 47d6e80#diff-9a6b543db706f1a90f790783d6930a13 | Exists in branch-3.0 branch, but the pom.xml file corresponding to the commit log is 2.5.0-SNAPSHOT
spark.sql.legacy.bucketedTableScan.outputOrdering | 3.0.0 | SPARK-28595 | 469423f#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.parser.havingWithoutGroupByAsWhere | 2.4.1 | SPARK-25708 | 3dba5d4#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.json.allowEmptyString.enabled | 3.0.0 | SPARK-25040 | d3de756#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.createEmptyCollectionUsingStringType | 3.0.0 | SPARK-30790 | 8ab6ae3#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.allowUntypedScalaUDF | 3.0.0 | SPARK-26580 | bc30a07#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.truncateTable.ignorePermissionAcl.enabled | 2.4.6 | SPARK-30312 | 830a4ec#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.dataset.nameNonStructGroupingKeyAsValue | 3.0.0 | SPARK-26085 | ab2eafb#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.debug.maxToStringFields | 3.0.0 | SPARK-26066 | 81550b3#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.maxPlanStringLength | 3.0.0 | SPARK-26103 | 812ad55#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.setCommandRejectsSparkCoreConfs | 3.0.0 | SPARK-26060 | 1ab3d3e#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.datetime.java8API.enabled | 3.0.0 | SPARK-27008 | 52671d6#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.sources.binaryFile.maxLength | 3.0.0 | SPARK-27588 | 618d6bf#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.typeCoercion.datetimeToString.enabled | 3.0.0 | SPARK-27638 | 83d289e#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.defaultCatalog | 3.0.0 | SPARK-29753 | 942753a#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.catalog.$SESSION_CATALOG_NAME | 3.0.0 | SPARK-29412 | 9407fba#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.doLooseUpcast | 3.0.0 | SPARK-30812 | b76bc0b#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.ctePrecedencePolicy | 3.0.0 | SPARK-30829 | 00943be#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.timeParserPolicy | 3.1.0 | SPARK-30668 | 7db0af5#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.followThreeValuedLogicInArrayExists | 3.0.0 | SPARK-30812 | b76bc0b#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.maven.additionalRemoteRepositories | 3.0.0 | SPARK-29175 | 3d7359a#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.fromDayTimeString.enabled | 3.0.0 | SPARK-29864 and SPARK-29920 | e933539#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.notReserveProperties | 3.0.0 | SPARK-30812 | b76bc0b#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.addSingleFileInAddFile | 3.0.0 | SPARK-30234 | 8a8d1fb#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.mssqlserver.numericMapping.enabled | 2.4.5 | SPARK-28152 | 69de7f3#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.csv.filterPushdown.enabled | 3.0.0 | SPARK-30323 | 4e50f02#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.addPartitionInBatch.size | 3.0.0 | SPARK-29938 | 5ccbb38#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.timeParser.enabled | 3.0.0 | SPARK-30668 | 92f5723#diff-9a6b543db706f1a90f790783d6930a13 |
spark.sql.legacy.allowDuplicatedMapKeys | 3.0.0 | SPARK-25829 | 33329ca#diff-9a6b543db706f1a90f790783d6930a13 |

Why are the changes needed?

Supplements the version information for SQL configurations.

Does this PR introduce any user-facing change?

No

How was this patch tested?

Existing unit tests.
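
For context, the change itself is mechanical: each entry in the table above gets a `.version()` call on its `ConfigBuilder` in `SQLConf.scala`, as the review fragments below show. A minimal sketch of the pattern (the config name and type here are made up for illustration, not part of this PR):

```scala
// Inside object SQLConf, where buildConf(...) is in scope.
// "spark.sql.example.enabled" is a placeholder name, not a real config.
val EXAMPLE_ENABLED = buildConf("spark.sql.example.enabled")
  .doc("Illustrative entry only.")
  .version("3.0.0") // the "Since version" column collected in the table above
  .booleanConf
  .createWithDefault(false)
```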

"in the explain output. This redaction is applied on top of the global redaction " +
s"configuration defined by ${SECRET_REDACTION_PATTERN.key}.")
.version("")
.version("2.2.2")

SPARK-23850, commit ID: 6a55d8b#diff-9a6b543db706f1a90f790783d6930a13

"dummy value. This is currently used to redact the output of SQL explain commands. " +
"When this conf is not set, the value from `spark.redaction.string.regex` is used.")
.version("")
.version("2.3.0")

SPARK-22791, commit ID: 2831571#diff-9a6b543db706f1a90f790783d6930a13
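
For illustration, a small self-contained sketch of where this regex takes effect (the pattern and query are placeholders, not from the PR); matching parts of the explain output are replaced with a dummy value:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("redaction-demo")
  .config("spark.sql.redaction.string.regex", "(?i)password=\\w+")
  .getOrCreate()

// The literal matching the regex is redacted in the printed plans.
spark.sql("SELECT 'password=abc123' AS c").explain(true)
```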

.doc("When this option is set to false and all inputs are binary, `functions.concat` returns " +
"an output as binary. Otherwise, it returns as a string.")
.version("")
.version("2.3.0")

SPARK-22771, commit ID: f2b3525#diff-9a6b543db706f1a90f790783d6930a13
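
A small sketch (not part of the PR) of what the flag changes when all inputs to `concat` are binary:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.concat

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq((Array[Byte](1, 2), Array[Byte](3))).toDF("a", "b")

spark.conf.set("spark.sql.function.concatBinaryAsString", "false")
df.select(concat($"a", $"b")).printSchema() // concat(a, b): binary

spark.conf.set("spark.sql.function.concatBinaryAsString", "true")
df.select(concat($"a", $"b")).printSchema() // concat(a, b): string
```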

.doc("When this option is set to false and all inputs are binary, `elt` returns " +
"an output as binary. Otherwise, it returns as a string.")
.version("")
.version("2.3.0")

SPARK-22937, commit ID: bf85301#diff-9a6b543db706f1a90f790783d6930a13

"When this option is set to false, the partition column value will be converted to null " +
"if it can not be casted to corresponding user-specified schema.")
.version("")
.version("3.0.0")

SPARK-26263, commit ID: 5a140b7#diff-9a6b543db706f1a90f790783d6930a13

.doc("The max number of entries to be stored in queue to wait for late epochs. " +
"If this parameter is exceeded by the size of the queue, stream will stop with an error.")
.version("")
.version("3.0.0")

SPARK-24063, commit ID: c4bbfd1#diff-9a6b543db706f1a90f790783d6930a13

.doc("The size (measured in number of rows) of the queue used in continuous execution to" +
" buffer the results of a ContinuousDataReader.")
.version("")
.version("2.3.0")

SPARK-22789, commit ID: 8941a4a#diff-9a6b543db706f1a90f790783d6930a13

.doc("The interval at which continuous execution readers will poll to check whether" +
" the epoch has advanced on the driver.")
.version("")
.version("2.3.0")

SPARK-22789, commit ID: 8941a4a#diff-9a6b543db706f1a90f790783d6930a13
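
For illustration, a sketch of a continuous-processing query where these two buffers apply; the rate source and console sink are just convenient built-ins for a demo, and the values are arbitrary:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// Buffer sizing for continuous execution; both values are illustrative.
spark.conf.set("spark.sql.streaming.continuous.executorQueueSize", 1024)
spark.conf.set("spark.sql.streaming.continuous.executorPollIntervalMs", 10)

val query = spark.readStream
  .format("rate")
  .load()
  .writeStream
  .format("console")
  .trigger(Trigger.Continuous("1 second"))
  .start()

query.awaitTermination(10000) // run briefly for the demo, then exit
```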

"implementation class names for which Data Source V2 code path is disabled. These data " +
"sources will fallback to Data Source V1 code path.")
.version("")
.version("3.0.0")

SPARK-28747, commit ID: cb06209#diff-9a6b543db706f1a90f790783d6930a13

.doc("A comma-separated list of fully qualified data source register class names for which" +
" StreamWriteSupport is disabled. Writes to these sources will fall back to the V1 Sinks.")
.version("")
.version("2.3.1")

SPARK-23196, commit ID: 588b969#diff-9a6b543db706f1a90f790783d6930a13

"MicroBatchReadSupport is disabled. Reads from these sources will fall back to the " +
"V1 Sources.")
.version("")
.version("2.4.0")

SPARK-23362, commit ID: 0a73aa3#diff-9a6b543db706f1a90f790783d6930a13

"dataframe.write.option(\"partitionOverwriteMode\", \"dynamic\").save(path)."
)
.version("")
.version("2.3.0")

SPARK-20236, commit ID: b962488#diff-9a6b543db706f1a90f790783d6930a13
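
A sketch of the per-write form mentioned in the doc text (path and data are placeholders); only the partitions present in the DataFrame are overwritten, other partitions under the path are left untouched:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq((1, "2020-03-06"), (2, "2020-03-07")).toDF("id", "dt")

df.write
  .mode("overwrite")
  .option("partitionOverwriteMode", "dynamic") // per-write override of the SQL conf
  .partitionBy("dt")
  .parquet("/tmp/partition_overwrite_demo")    // placeholder path
```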

"not allowed."
)
.version("")
.version("3.0.0")

SPARK-28730, commit ID: 895c90b#diff-9a6b543db706f1a90f790783d6930a13

"field. 2. Spark will forbid using the reserved keywords of ANSI SQL as identifiers in " +
"the SQL parser.")
.version("")
.version("3.0.0")

SPARK-30125, commit ID: d9b3069#diff-9a6b543db706f1a90f790783d6930a13
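
A small sketch of one behavior difference the flag controls (integer overflow); the exact exception type may vary by version:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// With ANSI mode off (the 3.0 default), the addition silently wraps around.
spark.conf.set("spark.sql.ansi.enabled", "false")
spark.sql("SELECT 2147483647 + 1").show()

// With ANSI mode on, the same expression fails at runtime with an arithmetic error.
spark.conf.set("spark.sql.ansi.enabled", "true")
spark.sql("SELECT 2147483647 + 1").show()
```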

"to generate consistent repartition results. The performance of repartition() may go " +
"down since we insert extra local sort before it.")
.version("")
.version("2.1.4")

SPARK-23207 and SPARK-22905 and SPARK-24564 and SPARK-25114, commit ID: 4d2d3d4#diff-9a6b543db706f1a90f790783d6930a13

"reading unnecessary nested column data. Currently Parquet and ORC are the " +
"data sources that implement this optimization.")
.version("")
.version("2.4.1")

SPARK-4502, commit ID: dfcff38#diff-9a6b543db706f1a90f790783d6930a13
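
A sketch of the effect on a Parquet scan (placeholder path; the struct is built with SQL functions to keep the example self-contained):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

spark.range(1)
  .selectExpr("named_struct('city', 'Berlin', 'zip', '10115') AS addr", "'alice' AS name")
  .write.mode("overwrite").parquet("/tmp/nested_pruning_demo")

spark.conf.set("spark.sql.optimizer.nestedSchemaPruning.enabled", "true")

// With pruning on, the scan's ReadSchema should mention only addr.city,
// not the full addr struct or the name column.
spark.read.parquet("/tmp/nested_pruning_demo").select("addr.city").explain()
```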

"satisfying a query. This optimization allows object serializers to avoid " +
"executing unnecessary nested expressions.")
.version("")
.version("3.0.0")

SPARK-26837, commit ID: 0f2c0b5#diff-9a6b543db706f1a90f790783d6930a13

"physical data source scanning. For pruning nested fields from scanning, please use " +
"`spark.sql.optimizer.nestedSchemaPruning.enabled` config.")
.version("")
.version("3.0.0")

SPARK-27707, commit ID: 127bc89#diff-9a6b543db706f1a90f790783d6930a13

"'SELECT x FROM t ORDER BY y LIMIT m', if m is under this threshold, do a top-K sort" +
" in memory, otherwise do a global sort which spills to disk if necessary.")
.version("")
.version("2.4.0")

SPARK-24193, commit ID: 8a837bf#diff-9a6b543db706f1a90f790783d6930a13
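
A quick sketch of the two plan shapes (threshold value is arbitrary):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

spark.range(0, 100000).createOrReplaceTempView("t")
spark.conf.set("spark.sql.execution.topKSortFallbackThreshold", 1000)

// LIMIT 10 is under the threshold, so the plan should use an in-memory
// top-K operator (TakeOrderedAndProject) instead of a global sort.
spark.sql("SELECT id FROM t ORDER BY id DESC LIMIT 10").explain()

// LIMIT 5000 exceeds the threshold, so a global Sort followed by Limit is used.
spark.sql("SELECT id FROM t ORDER BY id DESC LIMIT 5000").explain()
```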

.doc("If it is set to true, column names of the requested schema are passed to CSV parser. " +
"Other column values can be ignored during parsing even if they are malformed.")
.version("")
.version("2.4.0")

SPARK-24244 and SPARK-24368, commit ID: 64fad0b#diff-9a6b543db706f1a90f790783d6930a13

"REPL, the returned outputs are formatted like dataframe.show(). In SparkR, the returned " +
"outputs are showed similar to R data.frame would.")
.version("")
.version("2.4.0")

SPARK-24215, commit ID: 6a0b77a#diff-9a6b543db706f1a90f790783d6930a13

"config is from 0 to (Int.MaxValue - 1), so the invalid config like negative and " +
"greater than (Int.MaxValue - 1) will be normalized to 0 and (Int.MaxValue - 1).")
.version("")
.version("2.4.0")

SPARK-24215, commit ID: 6a0b77a#diff-9a6b543db706f1a90f790783d6930a13

.doc("The max number of characters for each cell that is returned by eager evaluation. " +
s"This only takes effect when ${REPL_EAGER_EVAL_ENABLED.key} is set to true.")
.version("")
.version("2.4.0")

SPARK-24215, commit ID: 6a0b77a#diff-9a6b543db706f1a90f790783d6930a13
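
A sketch of passing the three eager-evaluation settings when building a session; the values are illustrative, and the actual rendering is done by notebook/REPL front ends that honor these flags:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .config("spark.sql.repl.eagerEval.enabled", "true")
  .config("spark.sql.repl.eagerEval.maxNumRows", "20") // clamped to [0, Int.MaxValue - 1]
  .config("spark.sql.repl.eagerEval.truncate", "20")   // max characters per cell
  .getOrCreate()

// In a front end with eager evaluation, merely referencing df renders its rows
// similarly to df.show() instead of printing only the lazy plan.
val df = spark.range(5).toDF("id")
```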

"but the actual numBuckets is determined by loadFactor " +
"(e.g: default bit value 16 , the actual numBuckets is ((1 << 16) / 0.5).")
.version("")
.version("2.4.0")

SPARK-24978, commit ID: 6193a20#diff-9a6b543db706f1a90f790783d6930a13
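
Following the formula quoted in the doc string, a tiny worked example of how the bit value maps to bucket count:

```scala
// With the default capacity bit of 16 and the fast hash map's 0.5 load factor:
val capacityBit = 16
val capacity = 1 << capacityBit          // 65536 entries
val numBuckets = (capacity / 0.5).toInt  // 131072 buckets
println(s"capacity=$capacity numBuckets=$numBuckets")
```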

.doc("Compression codec used in writing of AVRO files. Supported codecs: " +
"uncompressed, deflate, snappy, bzip2 and xz. Default codec is snappy.")
.version("")
.version("2.4.0")

SPARK-24881, commit ID: 0a0f68b#diff-9a6b543db706f1a90f790783d6930a13

"Valid value must be in the range of from 1 to 9 inclusive or -1. " +
"The default value is -1 which corresponds to 6 level in the current implementation.")
.version("")
.version("2.4.0")

SPARK-24881, commit ID: 0a0f68b#diff-9a6b543db706f1a90f790783d6930a13
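
A sketch of a write using both Avro settings (placeholder path; assumes the external spark-avro module is on the classpath):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

spark.conf.set("spark.sql.avro.compression.codec", "deflate")
spark.conf.set("spark.sql.avro.deflate.level", "5") // 1..9, or -1 for the codec default

spark.range(100).write.mode("overwrite").format("avro").save("/tmp/avro_demo")
```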

.doc("If it is set to true, size of null returns -1. This behavior was inherited from Hive. " +
"The size function returns null for null input if the flag is disabled.")
.version("")
.version("2.4.0")

SPARK-24605, commit ID: d08f53d#diff-9a6b543db706f1a90f790783d6930a13
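
A short sketch of the two behaviors described above:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

spark.conf.set("spark.sql.legacy.sizeOfNull", "true")
spark.sql("SELECT size(CAST(NULL AS ARRAY<INT>))").show() // -1 (Hive-compatible)

spark.conf.set("spark.sql.legacy.sizeOfNull", "false")
spark.sql("SELECT size(CAST(NULL AS ARRAY<INT>))").show() // NULL
```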

.doc("If it is set to true, the data source provider com.databricks.spark.avro is mapped " +
"to the built-in but external Avro data source module for backward compatibility.")
.version("")
.version("2.4.0")

SPARK-25129, commit ID: ac0174e#diff-9a6b543db706f1a90f790783d6930a13

"to false and order of evaluation is not specified by parentheses, INTERSECT operations " +
"are performed before any UNION, EXCEPT and MINUS operations.")
.version("")
.version("2.4.0")

SPARK-24966, commit ID: 73dd6cf#diff-9a6b543db706f1a90f790783d6930a13
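
A sketch of how the flag changes the grouping of an unparenthesised query:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

val q = "SELECT 1 UNION SELECT 2 INTERSECT SELECT 2"

// false (default): INTERSECT binds first ->
//   SELECT 1 UNION (SELECT 2 INTERSECT SELECT 2)  => rows 1 and 2
spark.conf.set("spark.sql.legacy.setopsPrecedence.enabled", "false")
spark.sql(q).show()

// true: evaluate left to right ->
//   (SELECT 1 UNION SELECT 2) INTERSECT SELECT 2  => row 2 only
spark.conf.set("spark.sql.legacy.setopsPrecedence.enabled", "true")
spark.sql(q).show()
```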

"Catalyst's TimestampType and DateType. If it is set to false, java.sql.Timestamp " +
"and java.sql.Date are used for the same purpose.")
.version("")
.version("3.0.0")

SPARK-27008, commit ID: 52671d6#diff-9a6b543db706f1a90f790783d6930a13

"Spark will fail fast and not attempt to read the file if its length exceeds this value. " +
"The theoretical max is Int.MaxValue, though VMs might implement a smaller max.")
.version("")
.version("3.0.0")

SPARK-27588, commit ID: 618d6bf#diff-9a6b543db706f1a90f790783d6930a13

.doc("If it is set to true, date/timestamp will cast to string in binary comparisons " +
"with String")
.version("")
.version("3.0.0")

SPARK-27638, commit ID: 83d289e#diff-9a6b543db706f1a90f790783d6930a13

.doc("Name of the default catalog. This will be the current catalog if users have not " +
"explicitly set the current catalog yet.")
.version("")
.version("3.0.0")

SPARK-29753, commit ID: 942753a#diff-9a6b543db706f1a90f790783d6930a13

s"metadata. To delegate operations to the $SESSION_CATALOG_NAME, implementations can " +
"extend 'CatalogExtension'.")
.version("")
.version("3.0.0")

SPARK-29412, commit ID: 9407fba#diff-9a6b543db706f1a90f790783d6930a13

.internal()
.doc("When true, the upcast will be loose and allows string to atomic types.")
.version("")
.version("3.0.0")

SPARK-30812, commit ID: b76bc0b#diff-9a6b543db706f1a90f790783d6930a13

"AnalysisException is thrown while name conflict is detected in nested CTE. This config " +
"will be removed in future versions and CORRECTED will be the only behavior.")
.version("")
.version("3.0.0")

SPARK-30829, commit ID: 00943be#diff-9a6b543db706f1a90f790783d6930a13

"When set to CORRECTED, classes from java.time.* packages are used for the same purpose. " +
"The default value is EXCEPTION, RuntimeException is thrown when we will get different " +
"results.")
.version("3.1.0")

SPARK-30668, commit ID: 7db0af5#diff-9a6b543db706f1a90f790783d6930a13

.internal()
.doc("When true, the ArrayExists will follow the three-valued boolean logic.")
.version("")
.version("3.0.0")

SPARK-30812, commit ID: b76bc0b#diff-9a6b543db706f1a90f790783d6930a13

"repositories. This is only used for downloading Hive jars in IsolatedClientLoader " +
"if the default Maven Central repo is unreachable.")
.version("")
.version("3.0.0")

SPARK-29175, commit ID: 3d7359a#diff-9a6b543db706f1a90f790783d6930a13

"`ParseException` is thrown if the input does not match to the pattern " +
"defined by `from` and `to`.")
.version("")
.version("3.0.0")

SPARK-29864 and SPARK-29920, commit ID: e933539#diff-9a6b543db706f1a90f790783d6930a13

"create/alter syntaxes. But please be aware that the reserved properties will be " +
"silently removed.")
.version("")
.version("3.0.0")

SPARK-30812, commit ID: b76bc0b#diff-9a6b543db706f1a90f790783d6930a13

.doc("When true, only a single file can be added using ADD FILE. If false, then users " +
"can add directory by passing directory path to ADD FILE.")
.version("")
.version("3.0.0")

SPARK-30234, commit ID: 8a8d1fb#diff-9a6b543db706f1a90f790783d6930a13

.internal()
.doc("When true, use legacy MySqlServer SMALLINT and REAL type mapping.")
.version("")
.version("2.4.5")

SPARK-28152, commit ID: 69de7f3#diff-9a6b543db706f1a90f790783d6930a13

val CSV_FILTER_PUSHDOWN_ENABLED = buildConf("spark.sql.csv.filterPushdown.enabled")
.doc("When true, enable filter pushdown to CSV datasource.")
.version("")
.version("3.0.0")

SPARK-30323, commit ID: 4e50f02#diff-9a6b543db706f1a90f790783d6930a13
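
A sketch of where the flag shows up (placeholder input path); the PushedFilters entry in the explained plan is the thing to look for:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

spark.conf.set("spark.sql.csv.filterPushdown.enabled", "true")

val df = spark.read
  .option("header", "true")
  .schema("id INT, name STRING")
  .csv("/tmp/people_csv")   // placeholder path
  .filter("id > 10")

df.explain() // with pushdown on, the scan node should list PushedFilters for id
```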

"`AlterTableAddPartitionCommand` to add partitions into table. The smaller " +
"batch size is, the less memory is required for the real handler, e.g. Hive Metastore.")
.version("")
.version("3.0.0")

SPARK-29938, commit ID: 5ccbb38#diff-9a6b543db706f1a90f790783d6930a13

"dates/timestamps in a locale-sensitive manner. When set to false, classes from " +
"java.time.* packages are used for the same purpose.")
.version("")
.version("3.0.0")

SPARK-30668, commit ID: 92f5723#diff-9a6b543db706f1a90f790783d6930a13

.doc("When set to true, hash expressions can be applied on elements of MapType. Otherwise, " +
"an analysis exception will be thrown.")
.version("")
.version("3.0.0")

SPARK-25829, commit ID: 33329ca#diff-9a6b543db706f1a90f790783d6930a13

SparkQA commented Mar 6, 2020

Test build #119452 has finished for PR 27829 at commit f8ea7e0.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

beliefer commented Mar 8, 2020

@HyukjinKwon @MaxGekk Thanks!

@HyukjinKwon

Merged to master.

@HyukjinKwon

Merged to branch-3.0 too.

HyukjinKwon pushed a commit to HyukjinKwon/spark that referenced this pull request Apr 7, 2020
…figuration of SQL

Closes apache#27829 from beliefer/add-version-to-sql-config-part-four.

Authored-by: beliefer <beliefer@163.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
sjincho pushed a commit to sjincho/spark that referenced this pull request Apr 15, 2020
…figuration of SQL

Closes apache#27829 from beliefer/add-version-to-sql-config-part-four.

Authored-by: beliefer <beliefer@163.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
beliefer deleted the add-version-to-sql-config-part-four branch on April 23, 2024.