[FLINK-5944] Support reading of Snappy files #4683
Conversation
flink-core/pom.xml:

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-shaded-hadoop2</artifactId>
        <version>${project.version}</version>
    </dependency>
Including Hadoop as a dependency in flink-core can be problematic for a number of downstream projects.
I wonder what the exact difference between the Hadoop and vanilla Snappy codecs is. Is it just that the Hadoop Snappy codec adds extra framing?
What do you think about making this dependency compile-time only (scope 'provided')?
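For reference, a sketch of what the 'provided' scope would look like for the dependency shown above in flink-core/pom.xml (assuming the same shaded-hadoop2 artifact; 'provided' keeps it on the compile classpath without pulling it into the runtime dependency tree):

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-shaded-hadoop2</artifactId>
    <version>${project.version}</version>
    <!-- available at compile time, but the user/runtime must supply it -->
    <scope>provided</scope>
</dependency>
```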
Regarding the difference between the codecs: as I understand it, Snappy-compressed files are not splittable on their own. Hadoop therefore splits raw files into blocks and compresses each block separately using regular Snappy. If you take a whole Hadoop Snappy-compressed file, a plain Snappy decoder will not be able to decompress it, since it is not aware of the block boundaries.
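The block framing described above can be illustrated with a minimal, hypothetical sketch: each block carries its uncompressed length plus a length-prefixed compressed chunk, which is why a plain Snappy decoder (expecting one raw compressed stream) chokes on the extra headers. An identity "compressor" stands in for Snappy here so the sketch stays self-contained; the exact header layout of Hadoop's BlockCompressorStream is an assumption for illustration, not a spec quote.

```python
import struct
from io import BytesIO

def write_blocks(data: bytes, block_size: int = 8) -> bytes:
    """Frame data into Hadoop-style blocks: per block, a 4-byte big-endian
    uncompressed length, then a length-prefixed 'compressed' chunk.
    The identity copy below stands in for snappy.compress(block)."""
    out = BytesIO()
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        compressed = block  # identity stand-in for per-block Snappy
        out.write(struct.pack(">I", len(block)))       # uncompressed length
        out.write(struct.pack(">I", len(compressed)))  # chunk length
        out.write(compressed)
    return out.getvalue()

def read_blocks(framed: bytes) -> bytes:
    """Decode the framing. A decoder unaware of these per-block length
    headers would misinterpret them as compressed payload and fail."""
    buf, out = BytesIO(framed), bytearray()
    while True:
        header = buf.read(4)
        if not header:
            break
        remaining = struct.unpack(">I", header)[0]
        while remaining > 0:
            chunk_len = struct.unpack(">I", buf.read(4))[0]
            chunk = buf.read(chunk_len)  # identity stand-in for uncompress
            out.extend(chunk)
            remaining -= len(chunk)
    return bytes(out)
```

A side effect of this framing is splittability: a reader can seek to any block boundary and start decoding there, which is impossible with a single raw Snappy stream.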
Internally we have several users who use Flink to read files generated by Hadoop (e.g. lz4 / gz / snappy), so I think Hadoop support is quite important.
I'm not sure supporting the Xerial Snappy format is a good idea. The two file formats are actually incompatible: it would be quite confusing for users to find out that they can't access the files from Spark / MR / Hive because of a missing configuration.
I suggest we at least make the Hadoop file format the default, or just get rid of the Xerial version of the file format.
Putting the dependency in 'provided' sounds fine to me; if we need even tighter control over the dependency, we can start thinking about a separate module for it.
What do you think?
Yes, it is a good point to make Hadoop Snappy the default codec. I think we could still support Xerial Snappy since it comes for free; I will make these changes once we agree on the dependencies.
Regarding the separate module: what would its contents be?
As I understand it, a user who wants to read HDFS files will need the flink-java module anyway, since it contains Hadoop wrappers like HadoopInputSplit and so on. Do you think it makes sense to put this Hadoop codec there?
As of recently, everything except the Hadoop compatibility package is free of Hadoop dependencies. It should stay that way, so that we can provide a Flink distribution without Hadoop dependencies, since those were causing problems for some users.
Thanks for your comment, Aljoscha.
So there are at least three ways to achieve this: mark the dependency as 'provided', move the Hadoop Snappy codec classes to the flink-java module, or move them to a separate module as @haohui suggested, though I'm not sure what that module should contain.
What is the purpose of the change
Support reading of Snappy-compressed text files (both Xerial and Hadoop Snappy formats).
Brief change log
Verifying this change
Does this pull request potentially affect one of the following parts:
@Public(Evolving): no
Documentation