
the fileSizeInBytes of orc and parquet are inconsistent #1666

Closed
zhangjun0x01 opened this issue Oct 27, 2020 · 2 comments · Fixed by #1697

Comments

@zhangjun0x01
Contributor

I found that the value stored in the fileSizeInBytes field of DataFile is inconsistent between the ORC and Parquet formats. The ORC format stores the deserialized data size, while Parquet stores the actual file size.

This causes a problem. In RewriteDataFilesAction, the default targetSizeInBytes is 128 MB, but with the ORC format, after the rewrite action the data files are only about 10 MB each. Because RewriteDataFilesAction groups the ORC data by the deserialized data size rather than the file size, the newly generated data files never reach 128 MB.
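To make the failure mode concrete, here is a minimal bin-packing sketch (hypothetical, not Iceberg's actual planner code): files are grouped into bins of up to targetSizeInBytes using the recorded size, so if that recorded size is the deserialized data size, each bin closes long before it holds 128 MB of actual file bytes.

  import java.util.ArrayList;
  import java.util.List;

  public class BinPackSketch {
    // Group files into bins of up to targetSizeInBytes, using whatever
    // size was recorded in the table metadata (fileSizeInBytes).
    static List<List<Long>> binPack(List<Long> recordedSizes, long targetSizeInBytes) {
      List<List<Long>> bins = new ArrayList<>();
      List<Long> current = new ArrayList<>();
      long currentTotal = 0;
      for (long size : recordedSizes) {
        if (!current.isEmpty() && currentTotal + size > targetSizeInBytes) {
          bins.add(current);
          current = new ArrayList<>();
          currentTotal = 0;
        }
        current.add(size);
        currentTotal += size;
      }
      if (!current.isEmpty()) {
        bins.add(current);
      }
      return bins;
    }
  }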

The Parquet format behaves normally and meets my expectations.

@rdblue
Contributor

rdblue commented Oct 28, 2020

@shardulm94, can you take a look at the ORC file size metric? It looks like it may be incorrect, which would affect scan planning.

@zhangjun0x01
Contributor Author

Hi @rdblue, @shardulm94:
I read the source code and found that when the DataFile is constructed in the BaseTaskWriter.RollingFileWriter#closeCurrent method, the fileSizeInBytes is obtained from the length() method of the currentAppender, and OrcFileAppender implements length() with the getRawDataSize() method of the ORC Writer. According to the comments on that method, it returns the deserialized data size:

  /**
   * Return the deserialized data size. Raw data size will be compute when
   * writing the file footer. Hence raw data size value will be available only
   * after closing the writer.
   *
   * @return raw data size
   */
  long getRawDataSize();
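For reference, the delegation described above looks roughly like this (a simplified sketch, not the verbatim OrcFileAppender code; the writer setup is elided):

  import org.apache.orc.Writer;

  class OrcFileAppenderSketch {
    private final Writer writer;  // the underlying ORC writer

    OrcFileAppenderSketch(Writer writer) {
      this.writer = writer;
    }

    // This value is recorded as DataFile.fileSizeInBytes, but
    // getRawDataSize() returns the deserialized size, not the number
    // of bytes actually written to the file.
    long length() {
      return writer.getRawDataSize();
    }
  }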

Parquet gets the length from the write position. I don't know which of ORC and Parquet is correct, but I think the length obtained by the Parquet format meets my expectations, because when I inspect the HDFS file with the hdfs fsck command, I find that it splits blocks according to the file size, not the deserialized data size.
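One way to observe the mismatch (a hedged sketch, assuming Hadoop on the classpath; the table scan wiring that produces location and recordedSizeInBytes is elided) is to compare the recorded fileSizeInBytes with the size HDFS reports for the same path. With the behavior described above, ORC files show a large gap while Parquet files match:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class FileSizeCheck {
    // Compare the size recorded in table metadata with the on-disk size.
    static void check(String location, long recordedSizeInBytes) throws Exception {
      Path path = new Path(location);
      FileSystem fs = path.getFileSystem(new Configuration());
      long actualSize = fs.getFileStatus(path).getLen();
      System.out.printf("%s recorded=%d actual=%d%n", location, recordedSizeInBytes, actualSize);
    }
  }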
