
[SPARK-22279][SQL] Enable convertMetastoreOrc by default #21186

Closed
wants to merge 1 commit into from

Conversation

dongjoon-hyun
Member

@dongjoon-hyun commented Apr 27, 2018

What changes were proposed in this pull request?

We reverted `spark.sql.hive.convertMetastoreOrc` in #20536 because the conversion ignored table-specific compression configurations. That issue is now resolved via SPARK-23355, so this PR enables the flag by default again.
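
For context, here is a minimal spark-shell sketch of what this flag controls; the Hive-enabled session and the `t_orc` table are illustrative assumptions, not part of this PR:

```scala
// Sketch only: assumes spark = SparkSession.builder().enableHiveSupport().getOrCreate()
// With spark.sql.hive.convertMetastoreOrc=true (the default this PR proposes),
// ORC Hive metastore tables are read and written through Spark's native ORC
// data source instead of Hive SerDe.
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "true")

spark.sql("CREATE TABLE t_orc (id INT) STORED AS ORC")  // hypothetical table
spark.sql("INSERT INTO t_orc VALUES (1)")
spark.sql("SELECT * FROM t_orc").show()

// Setting the flag back to false falls back to the Spark 2.3 behavior (Hive SerDe path).
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "false")
```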

How was this patch tested?

Pass the existing Jenkins tests.

@SparkQA

SparkQA commented Apr 27, 2018

Test build #89941 has finished for PR 21186 at commit 5383299.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@dongjoon-hyun
Member Author

dongjoon-hyun commented Apr 29, 2018

@gatorsmile and @cloud-fan, could you review this PR? This is the first attempt after the revert (#20536).
If there is more work to do here, please let me know.

@dongjoon-hyun changed the title from "[SPARK-22279][SPARK-24112] Enable convertMetastoreOrc and add convertMetastore.TableProperty conf" to "[SPARK-22279][SPARK-24112] Enable convertMetastoreOrc and add convertMetastoreTableProperty conf" on May 1, 2018
@dongjoon-hyun
Member Author

Hi, @gatorsmile.
Do you think we should split this into two separate PRs? If you prefer, I will split it as follows:

  • SPARK-22279 Enable convertMetastoreOrc by default
  • SPARK-24112 Add spark.sql.hive.convertMetastoreTableProperty for backward compatibility

@dongjoon-hyun
Member Author

Retest this please.

@SparkQA

SparkQA commented May 1, 2018

Test build #89986 has finished for PR 21186 at commit 5383299.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@dongjoon-hyun
Member Author

The failures are unrelated to this PR.

@dongjoon-hyun
Member Author

Retest this please.

@SparkQA

SparkQA commented May 1, 2018

Test build #89998 has finished for PR 21186 at commit 5383299.

  • This patch fails SparkR unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan
Contributor

`spark.sql.hive.convertMetastoreTableProperty` looks unnecessary to me...

@dongjoon-hyun
Member Author

Yes, I thought the same at first, @cloud-fan.

Please consider an existing customer environment like the ones in the unit test cases. For Parquet tables with table properties such as `TBLPROPERTIES (parquet.compression 'NONE')`, the property was ignored by default before Apache Spark 2.4. After a cluster upgrade, Spark will write uncompressed files, which differs from Apache Spark 2.3 and earlier.

Since this is a behavior change, we need to document it and should provide an option for it. We can remove the option in Apache Spark 3.0.
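
As an illustration of the scenario above, a rough sketch follows; the table name and session setup are assumptions, and `spark.sql.hive.convertMetastoreTableProperty` is only the flag proposed in this discussion (later split to a separate PR), not an existing configuration:

```scala
// Sketch only: assumes a Hive-enabled SparkSession; the table name is illustrative.
// The table property requests uncompressed Parquet output.
spark.sql("""
  CREATE TABLE t_parquet (id INT) STORED AS PARQUET
  TBLPROPERTIES ('parquet.compression'='NONE')
""")

// Before Apache Spark 2.4, the property was ignored on the converted write path,
// so inserts produced Snappy-compressed files (the data source default).
// After SPARK-23355, the property is honored and the same insert produces
// uncompressed files -- the behavior change discussed here.
spark.sql("INSERT INTO t_parquet VALUES (1)")

// Compatibility switch proposed in this discussion (moved to a separate PR);
// shown as a sketch only, not an existing configuration:
// spark.conf.set("spark.sql.hive.convertMetastoreTableProperty", "false")
```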

@SparkQA

SparkQA commented May 3, 2018

Test build #90149 has finished for PR 21186 at commit b9ed640.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@@ -1812,6 +1812,9 @@ working with timestamps in `pandas_udf`s to get the best performance, see
- Since Spark 2.4, creating a managed table with a nonempty location is not allowed. An exception is thrown when attempting to create a managed table with a nonempty location. Setting `spark.sql.allowCreatingManagedTableUsingNonemptyLocation` to `true` restores the previous behavior. This option will be removed in Spark 3.0.
- Since Spark 2.4, the type coercion rules can automatically promote the argument types of the variadic SQL functions (e.g., IN/COALESCE) to the widest common type, regardless of the order of the input arguments. In prior Spark versions, the promotion could fail for some specific orders (e.g., TimestampType, IntegerType and StringType) and throw an exception.
- In version 2.3 and earlier, `to_utc_timestamp` and `from_utc_timestamp` respect the timezone in the input timestamp string, which breaks the assumption that the input timestamp is in a specific timezone. Therefore, these 2 functions can return unexpected results. In version 2.4 and later, this problem has been fixed. `to_utc_timestamp` and `from_utc_timestamp` will return null if the input timestamp string contains a timezone. As an example, `from_utc_timestamp('2000-10-10 00:00:00', 'GMT+1')` will return `2000-10-10 01:00:00` in both Spark 2.3 and 2.4. However, `from_utc_timestamp('2000-10-10 00:00:00+00:00', 'GMT+1')`, assuming a local timezone of GMT+8, will return `2000-10-10 09:00:00` in Spark 2.3 but `null` in 2.4. If you don't care about this problem and want to retain the previous behavior to keep your queries unchanged, you can set `spark.sql.function.rejectTimezoneInString` to false. This option will be removed in Spark 3.0 and should only be used as a temporary workaround.
- Since Spark 2.4, Spark uses its own ORC support by default instead of Hive SerDe for better performance during Hive metastore table access. Setting `spark.sql.hive.convertMetastoreOrc` to `false` restores the previous behavior.
- Since Spark 2.4, Spark supports table properties while converting Parquet/ORC Hive tables. Setting `spark.sql.hive.convertMetastoreTableProperty` to `false` restores the previous behavior.
Contributor

please polish the migration guide w.r.t. https://issues.apache.org/jira/browse/SPARK-24175

Member Author

Sure!

@@ -1812,6 +1812,9 @@ working with timestamps in `pandas_udf`s to get the best performance, see
- Since Spark 2.4, creating a managed table with a nonempty location is not allowed. An exception is thrown when attempting to create a managed table with a nonempty location. Setting `spark.sql.allowCreatingManagedTableUsingNonemptyLocation` to `true` restores the previous behavior. This option will be removed in Spark 3.0.
- Since Spark 2.4, the type coercion rules can automatically promote the argument types of the variadic SQL functions (e.g., IN/COALESCE) to the widest common type, regardless of the order of the input arguments. In prior Spark versions, the promotion could fail for some specific orders (e.g., TimestampType, IntegerType and StringType) and throw an exception.
- In version 2.3 and earlier, `to_utc_timestamp` and `from_utc_timestamp` respect the timezone in the input timestamp string, which breaks the assumption that the input timestamp is in a specific timezone. Therefore, these 2 functions can return unexpected results. In version 2.4 and later, this problem has been fixed. `to_utc_timestamp` and `from_utc_timestamp` will return null if the input timestamp string contains a timezone. As an example, `from_utc_timestamp('2000-10-10 00:00:00', 'GMT+1')` will return `2000-10-10 01:00:00` in both Spark 2.3 and 2.4. However, `from_utc_timestamp('2000-10-10 00:00:00+00:00', 'GMT+1')`, assuming a local timezone of GMT+8, will return `2000-10-10 09:00:00` in Spark 2.3 but `null` in 2.4. If you don't care about this problem and want to retain the previous behavior to keep your queries unchanged, you can set `spark.sql.function.rejectTimezoneInString` to false. This option will be removed in Spark 3.0 and should only be used as a temporary workaround.
- In version 2.3 and earlier, Spark converts Parquet Hive tables by default but ignores table properties like `TBLPROPERTIES (parquet.compression 'NONE')`. The same applies to ORC Hive table properties like `TBLPROPERTIES (orc.compress 'NONE')` when `spark.sql.hive.convertMetastoreOrc=true`. Since Spark 2.4, Spark supports Parquet/ORC specific table properties while converting Parquet/ORC Hive tables. As an example, `CREATE TABLE t(id int) STORED AS PARQUET TBLPROPERTIES (parquet.compression 'NONE')` would generate Snappy parquet files during insertion in Spark 2.3, whereas in Spark 2.4 the result would be uncompressed parquet files. Setting `spark.sql.hive.convertMetastoreTableProperty` to `false` restores the previous behavior.
- Since Spark 2.0, Spark converts Parquet Hive tables by default for better performance. Since Spark 2.4, Spark converts ORC Hive tables by default, too. This means Spark uses its own ORC support by default instead of Hive SerDe. As an example, `CREATE TABLE t(id int) STORED AS ORC` would be handled with Hive SerDe in Spark 2.3, whereas in Spark 2.4 it would be converted into Spark's ORC data source table and ORC vectorization would be applied. Setting `spark.sql.hive.convertMetastoreOrc` to `false` restores the previous behavior.
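
A short sketch of the opt-out described in the note above (session-level; a Hive-enabled SparkSession is assumed):

```scala
// Fall back to the Spark 2.3 behavior: ORC Hive tables are handled by Hive SerDe
// instead of Spark's native ORC data source.
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "false")

// Equivalent SQL form:
spark.sql("SET spark.sql.hive.convertMetastoreOrc=false")
```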

Member Author

@cloud-fan and @gatorsmile, I updated the text according to the SPARK-24175 guideline.

@SparkQA

SparkQA commented May 4, 2018

Test build #90213 has finished for PR 21186 at commit b746702.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@dongjoon-hyun
Member Author

Retest this please.

@SparkQA

SparkQA commented May 6, 2018

Test build #90263 has finished for PR 21186 at commit b746702.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@dongjoon-hyun
Member Author

dongjoon-hyun commented May 7, 2018

I'll split this into two PRs to make it easier to review.

@dongjoon-hyun changed the title from "[SPARK-22279][SPARK-24112] Enable convertMetastoreOrc and add convertMetastoreTableProperty conf" to "[SPARK-22279][SQL] Enable convertMetastoreOrc by default" on May 7, 2018
@dongjoon-hyun
Member Author

To reduce the review scope, `convertMetastoreTableProperty` moves to #21259.

@SparkQA

SparkQA commented May 7, 2018

Test build #90332 has finished for PR 21186 at commit ddd6872.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan
Contributor

can you resolve the conflicts?

@dongjoon-hyun
Member Author

Sure, it's rebased now.

@SparkQA

SparkQA commented May 9, 2018

Test build #90417 has finished for PR 21186 at commit 10e8319.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan
Contributor

thanks, merging to master!

@asfgit closed this in e3d4349 May 10, 2018
@dongjoon-hyun
Member Author

Thank you, @cloud-fan !

@dongjoon-hyun deleted the SPARK-24112 branch May 10, 2018 15:44
otterc pushed a commit to linkedin/spark that referenced this pull request Mar 22, 2023
We reverted `spark.sql.hive.convertMetastoreOrc` at apache#20536 because we should not ignore the table-specific compression conf. Now, it's resolved via [SPARK-23355](apache@8aa1d7b).

Pass the Jenkins.

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes apache#21186 from dongjoon-hyun/SPARK-24112.