I found that the value stored in the fileSizeInBytes field of DataFile is inconsistent between the ORC and Parquet formats: the ORC format stores the deserialized data size, while Parquet stores the actual file size.

This causes a problem. In RewriteDataFilesAction, the default targetSizeInBytes is 128 MB, but with the ORC format the data files produced by the rewrite action are only about 10 MB. Because RewriteDataFilesAction groups the ORC data according to the deserialized data size rather than the file size, the newly generated data files fall well short of 128 MB.

The Parquet format behaves normally and meets my expectations.
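To make the effect concrete, here is a rough back-of-the-envelope sketch. The 12.8:1 compression ratio is an assumption chosen for illustration, not a measured value; the point is only that packing by deserialized size divides the on-disk file size by the compression ratio:

```java
public class SizeMismatch {
    // The rewrite planner packs splits until their *reported* sizes sum to
    // the target. When the reported size is the deserialized size (ORC),
    // the bytes actually written shrink by the compression ratio.
    static long expectedFileSize(long targetSizeInBytes, double compressionRatio) {
        return (long) (targetSizeInBytes / compressionRatio);
    }

    public static void main(String[] args) {
        long target = 128L * 1024 * 1024;  // default targetSizeInBytes: 128 MB
        double ratio = 12.8;               // assumed compression ratio, illustration only
        System.out.println(expectedFileSize(target, ratio) / (1024 * 1024) + " MB"); // 10 MB
    }
}
```

With these assumed numbers, a 128 MB target yields roughly 10 MB files on disk, matching the behavior described above.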
hi @rdblue, @shardulm94:
I read the source code. When the DataFile is constructed in the BaseTaskWriter.RollingFileWriter#closeCurrent method, fileSizeInBytes is obtained from the length() method of the currentAppender, and OrcFileAppender implements length() using the getRawDataSize() method of the ORC Writer. According to the comments on that method, it returns the deserialized data size:
/**
 * Return the deserialized data size. Raw data size will be computed when
* writing the file footer. Hence raw data size value will be available only
* after closing the writer.
*
* @return raw data size
*/
long getRawDataSize();
Parquet gets the length from the write position. I don't know which of ORC and Parquet is correct, but I think the length reported by the Parquet format meets my expectations: when I inspect the HDFS file with the hdfs fsck command, it splits the blocks according to the file size, not the deserialized data size.
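The difference between the two strategies can be sketched in isolation. This is a minimal, self-contained illustration, not the Iceberg implementation: CountingOutputStream stands in for position-based length tracking (the Parquet-style approach), while rawDataSize accumulates the uncompressed row bytes (what an ORC-style getRawDataSize() reports), and gzip stands in for the columnar format's compression:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

public class LengthDemo {
    // Tracks the bytes actually written, the way a position-based length()
    // is derived from the output stream offset rather than from row contents.
    static class CountingOutputStream extends OutputStream {
        long pos = 0;
        private final OutputStream delegate;
        CountingOutputStream(OutputStream delegate) { this.delegate = delegate; }
        @Override public void write(int b) throws IOException { delegate.write(b); pos++; }
    }

    // Returns {deserializedSize, onDiskSize} after writing 1000 rows of 1 KB.
    static long[] writeAndMeasure() throws IOException {
        CountingOutputStream counting = new CountingOutputStream(new ByteArrayOutputStream());
        byte[] row = new byte[1024];  // 1 KB of zeros: highly compressible
        long rawDataSize = 0;         // what an ORC-style length() would report
        try (GZIPOutputStream gzip = new GZIPOutputStream(counting)) {
            for (int i = 0; i < 1000; i++) {
                gzip.write(row);
                rawDataSize += row.length;
            }
        }
        return new long[] { rawDataSize, counting.pos };
    }

    public static void main(String[] args) throws IOException {
        long[] sizes = writeAndMeasure();
        System.out.println("deserialized size: " + sizes[0] + " bytes");
        System.out.println("size on disk:      " + sizes[1] + " bytes");
    }
}
```

Running this shows the deserialized size (1,024,000 bytes) dwarfing the bytes actually on disk, which is exactly the gap that makes the rewrite action undershoot its target when fileSizeInBytes holds the deserialized size.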