[FLINK-13437][test] Add Hive SQL E2E test #10709
Conversation
cc @bowenli86 @xuefuz @JingsongLi @KurtYoung @lirui-apache to have a review.
This is base work; we can add more ITCases on top of this PR.
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community review pull requests. Automated checks: last check on commit e580f03 (Fri Feb 28 21:48:31 UTC 2020). Warnings:
Mention the bot in a comment to re-run the automated checks.
Review progress: please see the Pull Request Review Guide for a full explanation of the review process. The bot tracks review progress through labels, which are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
@bowenli86 @JingsongLi Do you guys have time to have a basic look?
@Override
public void before() throws Exception {
    buildDockerImage();
IIUC, building the docker image will take a while for the 1st time, and will be pretty fast for later runs, correct?
 * YarnClusterJobController can be used to fetch the execute log.
 */
public static class YarnClusterJobController implements JobController {
    private List<String> lines;
make it final
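The suggestion above can be sketched as follows. This is a simplified stand-in for the PR's YarnClusterJobController (class and accessor names here are illustrative, not the PR's actual API): the field becomes final and is assigned exactly once in the constructor, so the reference can never be re-pointed later.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class JobControllerSketch {
    // final: assigned exactly once, in the constructor
    private final List<String> lines;

    JobControllerSketch(List<String> lines) {
        // defensive copy so later mutation of the caller's list has no effect
        this.lines = new ArrayList<>(lines);
    }

    List<String> getLines() {
        // hand out a read-only view of the collected log lines
        return Collections.unmodifiableList(lines);
    }
}
```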
localFlinkDir = temporaryFolder.newFolder().toPath();

LOG.info("Copying distribution to {}.", localFlinkDir);
TestUtils.copyDirectory(originalFlinkDir, localFlinkDir);
Why do we need to copy the dist dir?
@Override
public ClusterController startCluster(int numTaskManagers) throws IOException {
    if (!deployFlinkToRemote) {
        yarnCluster.copyLocalFileToYarnMaster(localFlinkDir.toAbsolutePath().toString(), remoteFlinkDir);
Instead of copying the dist dir to the container, can we instead mount the dir into the container when it's started, with the -v option?
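For illustration, the mount-based alternative suggested above might look like this. The image name and both paths are placeholders, not the ones used in the PR; `:ro` mounts the dist read-only so the test cannot mutate the host copy.

```shell
# Hypothetical docker invocation: bind-mount the local Flink dist into the
# container instead of copying it in after startup. All names are placeholders.
LOCAL_FLINK_DIR=/tmp/flink-dist
REMOTE_FLINK_DIR=/opt/flink
DOCKER_CMD="docker run -d -v ${LOCAL_FLINK_DIR}:${REMOTE_FLINK_DIR}:ro hadoop-yarn-master"
echo "${DOCKER_CMD}"
```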
nohup sudo -E -u mapred $HADOOP_PREFIX/bin/mapred historyserver 2>> /var/log/hadoop/historyserver.err >> /var/log/hadoop/historyserver.out &

hdfs dfsadmin -safemode wait
while [ $? -ne 0 ]; do hdfs dfsadmin -safemode wait; done
Why do we want to retry if the command fails? I think it's a potential infinite loop if something goes wrong.
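A bounded retry would avoid the infinite-loop risk raised above. The sketch below caps the attempts and fails loudly when the limit is reached; `hdfs dfsadmin -safemode wait` is replaced by a stub function so the sketch is self-contained, and the attempt limit is an arbitrary illustrative value.

```shell
# Stub standing in for `hdfs dfsadmin -safemode wait` (returns success here).
hdfs_safemode_wait() { return 0; }

max_attempts=5
attempt=1
# Retry until the command succeeds, but give up after max_attempts tries
# instead of looping forever when HDFS never leaves safe mode.
until hdfs_safemode_wait; do
    if [ "${attempt}" -ge "${max_attempts}" ]; then
        echo "HDFS did not leave safe mode after ${max_attempts} attempts" >&2
        exit 1
    fi
    attempt=$((attempt + 1))
    sleep 1
done
echo "safe mode off after ${attempt} attempt(s)"
```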
hdfs dfsadmin -safemode wait
while [ $? -ne 0 ]; do hdfs dfsadmin -safemode wait; done

hdfs dfs -chown hdfs:hadoop /
I think we have disabled dfs.permissions. So why do we need to run these chown commands?
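For reference, "disabled dfs.permissions" typically means an hdfs-site.xml entry along these lines (a hypothetical fragment, not taken from the PR; the property name varies by Hadoop version: dfs.permissions in 1.x, dfs.permissions.enabled in 2.x and later):

```xml
<!-- Disable HDFS permission checking; with this set, chown/chmod calls
     have no effect on access control decisions. -->
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
```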
JobSubmission.JobSubmissionBuilder jobSubmissionBuilder = new JobSubmission.JobSubmissionBuilder(testJarPath);
jobSubmissionBuilder.setParallelism(1)
    .addOption("-ys", "1")
    .addOption("-ytm", "1000")
    .addOption("-yjm", "1000")
    .addOption("-c", HiveReadWriteDataExample.class.getCanonicalName())
    .addArgument("--hiveVersion", hiveVersion)
    .addArgument("--sourceTable", "all_types_table")
    .addArgument("--targetTable", "dest_all_types_table");
I think the job submission code is the same for the 2 test cases. Can we reuse it?
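The reuse asked for above could take the shape of a shared helper that owns the common options and lets each test case supply only what varies. This is a simplified sketch, not the PR's JobSubmission API: the helper just assembles the argument list, the `-c` main-class option is omitted, and the hive version value in the comments is a placeholder.

```java
import java.util.ArrayList;
import java.util.List;

class SubmissionSketch {
    // Shared submission options for both test cases; only the table names
    // (and hive version) differ between them.
    static List<String> buildHiveJobArgs(String hiveVersion, String sourceTable, String targetTable) {
        List<String> args = new ArrayList<>();
        args.add("-ys");           args.add("1");
        args.add("-ytm");          args.add("1000");
        args.add("-yjm");          args.add("1000");
        args.add("--hiveVersion"); args.add(hiveVersion);
        args.add("--sourceTable"); args.add(sourceTable);
        args.add("--targetTable"); args.add(targetTable);
        return args;
    }
}
```

Each ITCase then calls the helper with its own source/target tables instead of repeating the builder chain.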
@lirui-apache @JingsongLi can you guys help review? I can help merge once it passes.
Thanks @zjuwangg for your great work! I will continue working on this.
What's the status of this? With the new SQL filesystem connector I suspect more Flink users will rely on Hive integration. It would be good to try and get this in for 1.12.
Yes, we should move on.
Close this comment! |
What is the purpose of the change
Set up a docker-based YARN cluster and Hive service using the new Java-based test runtime framework, and add HiveConnectorITCase to cover data read/write functionality.
Based on this PR, we can add more tests, such as for functions and views, in the future.
Brief change log
Verifying this change
Does this pull request potentially affect one of the following parts:
- The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
Documentation