[SPARK-13403] [SQL] Pass hadoopConfiguration to HiveConf constructors. #11273
Conversation
cc @marmbrus

Seems reasonable.

ok to test

Test build #51669 has finished for PR 11273 at commit

FWIW also seems reasonable to me; making a new conf is rarely correct. @rdblue can you rebase?

Also, could you fix the title to follow the Spark convention?
sc.hadoopConfiguration may contain Hadoop-specific configuration properties that are not used by Spark SQL's HiveContext, because that configuration is not passed when constructing its instances of HiveConf.
Force-pushed from 32a3dcf to f0f6cee.
Test build #53377 has finished for PR 11273 at commit

Thanks - I'm going to merge this.

Thanks for the reviews!
This commit updates the HiveContext so that sc.hadoopConfiguration is used to instantiate its internal instances of HiveConf.

I tested this by overriding the S3 FileSystem implementation from spark-defaults.conf as "spark.hadoop.fs.s3.impl" (to avoid [HADOOP-12810](https://issues.apache.org/jira/browse/HADOOP-12810)).

Author: Ryan Blue <blue@apache.org>

Closes apache#11273 from rdblue/SPARK-13403-new-hive-conf-from-hadoop-conf.
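The test described above relies on Spark's convention that any property prefixed with `spark.hadoop.` is copied into `sc.hadoopConfiguration` with the prefix stripped; with this patch, such properties also reach the HiveConf instances that HiveContext creates. A minimal `spark-defaults.conf` sketch of that setup (the `S3AFileSystem` class name here is an illustrative value, not one taken from this PR):

```properties
# spark-defaults.conf
# "spark.hadoop.*" entries are copied into sc.hadoopConfiguration
# (prefix stripped). With this change, they are also visible to the
# HiveConf instances constructed by HiveContext.
spark.hadoop.fs.s3.impl   org.apache.hadoop.fs.s3a.S3AFileSystem
```

Before this change, a FileSystem override like the one above would take effect for plain Spark jobs but could be silently ignored by Hive-backed SQL paths, because HiveConf was built without the Hadoop configuration.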