
[SPARK-20937][DOCS] Describe spark.sql.parquet.writeLegacyFormat property in Spark SQL, DataFrames and Datasets Guide #22453

Closed
wants to merge 4 commits into apache:master from seancxmao:SPARK-20937

Conversation

seancxmao
Contributor

What changes were proposed in this pull request?

Describe spark.sql.parquet.writeLegacyFormat property in Spark SQL, DataFrames and Datasets Guide.

How was this patch tested?

N/A
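
For reference, this property can be set like any other Spark SQL configuration, either programmatically or via SQL. A small illustration, assuming an existing SparkSession named `spark`; the value shown is only an example:

// On an existing SparkSession ...
spark.conf.set("spark.sql.parquet.writeLegacyFormat", "true")

// ... or with a SQL command.
spark.sql("SET spark.sql.parquet.writeLegacyFormat=true")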

[SPARK-20937][DOCS] Describe spark.sql.parquet.writeLegacyFormat property in Spark SQL, DataFrames and Datasets Guide
@seancxmao
Contributor Author

@HyukjinKwon Could you please help review this?

@HyukjinKwon
Member

ok to test

@SparkQA

SparkQA commented Sep 18, 2018

Test build #96187 has finished for PR 22453 at commit 3af33a3.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@@ -1002,6 +1002,15 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
</p>
</td>
</tr>
<tr>
<td><code>spark.sql.parquet.writeLegacyFormat</code></td>
Member

This should go with the other Parquet properties if anything, but this one is so old that I don't think it's worth documenting. It shouldn't be used today.

Member

@srowen, actually, this configuration is specifically related to compatibility with other systems such as Impala (not only old Spark versions), where decimals are written in a fixed-length binary format (nowadays Spark writes them in an int-based format). If this configuration is not enabled, those systems are unable to read what Spark wrote.

Given https://stackoverflow.com/questions/44279870/why-cant-impala-read-parquet-files-after-spark-sqls-write and JIRAs like SPARK-20297, I think this configuration is rather important. I actually expected more documentation about this configuration in the first place.

Personally, I have been thinking it would be better to keep this configuration after 3.0 as well, for better compatibility.

Member

This is, of course, something we should remove in the long term, but my impression is that it's better to expose it, explicitly mention that we will deprecate it later, and then remove it.

I have already spent time (for instance in SPARK-20297) explaining how to work around this and why it happens. I was thinking it's better to document this and at least reduce that overhead.

Contributor Author

I'd like to add my 2 cents. We use both Spark and Hive in our Hadoop/Spark clusters, and we have two types of tables: working tables and target tables. Working tables are only used by Spark jobs, while target tables are populated by Spark and exposed to downstream jobs, including Hive jobs. Our data engineers frequently run into this issue when they use Hive to read target tables. In the end we decided to set spark.sql.parquet.writeLegacyFormat=true as the default for target tables and to describe this explicitly in our internal developer guide.
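
Roughly, our write path looks like the sketch below (table names and paths are simplified for illustration; `spark` is an existing SparkSession). Because this is a session-level configuration, the sketch assumes the two kinds of writes do not run concurrently in the same session.

import org.apache.spark.sql.DataFrame

// Working tables: only Spark reads these, so the default (standard) format is fine.
def writeWorkingTable(df: DataFrame, path: String): Unit = {
  spark.conf.set("spark.sql.parquet.writeLegacyFormat", "false")
  df.write.mode("overwrite").parquet(path)
}

// Target tables: downstream Hive jobs read these, so we enable the legacy format.
def writeTargetTable(df: DataFrame, path: String): Unit = {
  spark.conf.set("spark.sql.parquet.writeLegacyFormat", "true")
  df.write.mode("overwrite").parquet(path)
}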

Member

OK, that sounds important to document. But the reasoning in this thread is also useful information, I think. Instead of describing it as a legacy format (implying it's not valid Parquet or something) and saying that it's required for Hive and Impala, can we mention or point to the specific reason that would cause you to need this? The value of the documentation here is in whether it helps the user know when to set it one way or the other.

Member

++1 for more information actually.

Contributor Author

OK, I will update the doc and describe scenarios and reasons why we need this flag.

@HyukjinKwon
Member

cc @jaceklaskowski FYI

@seancxmao
Contributor Author

FYI, here is a brief survey of Parquet decimal support in other computing engines at the time of writing.

Hive:

  • HIVE-19069 Hive can't read int32 and int64 Parquet decimal. Not resolved yet.

Impala:

  • IMPALA-5628 Parquet support for additional valid decimal representations. This is an umbrella JIRA.
  • IMPALA-2494 Impala Unable to scan a Decimal column stored as Bytes. Fix Version/s: Impala 2.11.0.
  • IMPALA-5542 Impala cannot scan Parquet decimal stored as int64_t/int32_t. Fix Version/s: Impala 3.1.0, not released yet.

Presto:

  • issues/7232 Can't read decimal type in parquet files written by spark and referenced as external in the hive metastore.
  • issues/7533 Improve decimal type support in the new Parquet reader. Fixed Version: 0.182.

configuration is not enabled, decimals will be written in int-based format in Spark 1.5 and
above, other systems that only support legacy decimal format (fixed length byte array) will not
be able to read what Spark has written. Note other systems may have added support for the
standard format in more recent versions, which will make this configuration unnecessary. Please
Member

Yeah, I think Hive and Impala also use newer Parquet versions/formats. Isn't it sufficient to say that older versions of Spark (<= 1.4) and older versions of Hive and Impala (do we know which?) use older Parquet formats, and that this enables writing in that way?

Member

I haven't checked closely, but I think Hive still uses binary for decimals (https://github.com/apache/hive/blob/ae008b79b5d52ed6a38875b73025a505725828eb/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java#L503-L541). From my past investigation, the thing is that Parquet supports both ways of writing decimals (https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#decimal) IIRC. They deprecated the int96-based timestamp (https://github.com/apache/parquet-format/blob/master/src/main/thrift/parquet.thrift#L782) but not the decimal representations.
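
For instance, a rough sketch like this (illustrative paths, assuming a spark-shell session) writes the same small-precision decimal column both ways, so the physical types can be compared with a Parquet schema tool:

import spark.implicits._

val df = Seq((1, BigDecimal("12.34")), (2, BigDecimal("56.78")))
  .toDF("id", "amount")
  .selectExpr("id", "CAST(amount AS DECIMAL(9,2)) AS amount")

// Default since Spark 1.5: small-precision decimals are expected to come out int-based.
spark.conf.set("spark.sql.parquet.writeLegacyFormat", "false")
df.write.mode("overwrite").parquet("/tmp/decimal_standard")

// Legacy mode: decimals are expected to come out as fixed-length byte arrays,
// matching what current Hive writes per the DataWritableWriter link above.
spark.conf.set("spark.sql.parquet.writeLegacyFormat", "true")
df.write.mode("overwrite").parquet("/tmp/decimal_legacy")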

Member

I think it somewhat leads to confusion, since we call the option "legacy" even though this isn't actually a legacy format on Parquet's decimal side.

Contributor Author

Hive and Impala do NOT support the new Parquet decimal format yet.

  • HIVE-19069: Hive can't read int32 and int64 Parquet decimal. This issue is not resolved yet. This is consistent with the source-code check by @HyukjinKwon above.
  • IMPALA-5542: Impala cannot scan Parquet decimal stored as int64_t/int32_t. This is resolved, however targeted to Impala 3.1.0, which is a version not released yet. The latest release is 3.0.0 (https://impala.apache.org/downloads.html).

Presto has supported the new Parquet format since 0.182.

  • issues/7533: Improve decimal type support in the new Parquet reader. This patch is included in 0.182. Below is the excerpt:

Fix reading decimal values in the optimized Parquet reader when they are backed by the int32 or int64 types.

Member

It sounds like it isn't quite a legacy format, but one still used by Hive, and even considered valid (if not current) by Parquet? I'm not sure of this part, but I'm basing it on Hyukjin's comment above.

I suggest a somewhat shorter text like this, what do you think? Its length would be more suitable for the config docs below.

If true, then decimal values will be written in Apache Parquet's fixed-length byte array format. This is used by Spark 1.4 and earlier, and systems like Apache Hive and Apache Impala. If false, decimals will be written using the newer int format in Parquet. If Parquet output is intended for use with systems that do not support this newer format, set to true.

Contributor Author

If we must call it "legacy", I'd think of it as a legacy implementation on the Spark side, rather than a legacy format on the Parquet side.
As a comment in SPARK-20297 puts it:

The standard doesn't say that smaller decimals have to be stored in int32/int64, it just is an option for subset of decimal types. int32 and int64 are valid representations for a subset of decimal types. fixed_len_byte_array and binary are a valid representation of any decimal type.

The int32/int64 options were present in the original version of the decimal spec, they just weren't widely implemented. So its not a new/old version thing, it was just an alternative representation that many systems didn't implement.

Anyway, it really leads to confusion.

I really appreciate your suggestion @srowen to make the doc shorter; the text you suggested is more concise and to the point.

One more thing I want to discuss: after investigating the usage of this option, I found that it is not only related to decimals but also to complex types (Array, Map); see the source code excerpt below and the quick sketch after it. Should we mention this in the doc?

// ===================================
// ArrayType and MapType (legacy mode)
// ===================================
// Spark 1.4.x and prior versions convert `ArrayType` with nullable elements into a 3-level
// `LIST` structure. This behavior is somewhat a hybrid of parquet-hive and parquet-avro
// (1.6.0rc3): the 3-level structure is similar to parquet-hive while the 3rd level element
// field name "array" is borrowed from parquet-avro.
case ArrayType(elementType, nullable @ true) if writeLegacyParquetFormat =>
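
A quick sketch to see this (illustrative paths, assuming a spark-shell session): writing the same array column, with nullable elements as in the excerpt above, with the flag off and on should produce the two different LIST layouts.

import spark.implicits._

// Option[Int] elements give an array with nullable elements, matching the case above.
val df = Seq((1, Seq(Some(1), None, Some(3))), (2, Seq(Some(4), Some(5)))).toDF("id", "values")

spark.conf.set("spark.sql.parquet.writeLegacyFormat", "false")
df.write.mode("overwrite").parquet("/tmp/list_standard")  // standard LIST structure

spark.conf.set("spark.sql.parquet.writeLegacyFormat", "true")
df.write.mode("overwrite").parquet("/tmp/list_legacy")    // Spark 1.4.x-style structure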

Member

Let's make it short and get rid of everything orthogonal to the issue itself (I think the issue is specific to decimals). For instance, we could say (based on Sean's comment):

If true, it writes Parquet file in a way of Spark 1.4 and earlier, for instance, decimal values will be written in Apache Parquet's fixed-length byte array format, which other systems such as Apache Hive and Apache Impala use. If false, the newer format in Parquet will be used, for instance, decimals will be written based on int. If Parquet output is intended for use with systems that do not support this newer format, set to true.

Please feel free to change the wording to whatever you think reads better.

@SparkQA

SparkQA commented Sep 24, 2018

Test build #96507 has finished for PR 22453 at commit e6f67c1.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

configuration is not enabled, decimals will be written in int-based format in Spark 1.5 and
above, other systems that only support legacy decimal format (fixed length byte array) will not
be able to read what Spark has written. Note other systems may have added support for the
standard format in more recent versions, which will make this configuration unnecessary. Please
Member

BTW, let's match the doc in SQLConf as well.

Contributor Author

Thanks for your suggestion. I have updated the doc in SQLConf.

@SparkQA

SparkQA commented Sep 25, 2018

Test build #96532 has finished for PR 22453 at commit 6e2680f.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Sep 26, 2018

Test build #96590 has finished for PR 22453 at commit 3e51bd9.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@seancxmao
Contributor Author

Retest this please.

@SparkQA

SparkQA commented Sep 26, 2018

Test build #96608 has finished for PR 22453 at commit 3e51bd9.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

Member

@srowen srowen left a comment

I think the text is close enough.

@HyukjinKwon
Member

Merged to master and branch-2.4.

asfgit pushed a commit that referenced this pull request Sep 26, 2018
[SPARK-20937][DOCS] Describe spark.sql.parquet.writeLegacyFormat property in Spark SQL, DataFrames and Datasets Guide

## What changes were proposed in this pull request?
Describe spark.sql.parquet.writeLegacyFormat property in Spark SQL, DataFrames and Datasets Guide.

## How was this patch tested?
N/A

Closes #22453 from seancxmao/SPARK-20937.

Authored-by: seancxmao <seancxmao@gmail.com>
Signed-off-by: hyukjinkwon <gurwls223@apache.org>
(cherry picked from commit cf5c9c4)
Signed-off-by: hyukjinkwon <gurwls223@apache.org>
@asfgit asfgit closed this in cf5c9c4 Sep 26, 2018
daspalrahul pushed a commit to daspalrahul/spark that referenced this pull request Sep 29, 2018