[SPARK-10181][SQL] Do kerberos login for credentials during hive client initialization #9272
Conversation
… login credentials" This reverts commit 59253b3.
Test this please.
@@ -520,6 +520,8 @@ object SparkSubmit {
    }
    if (args.principal != null) {
      require(args.keytab != null, "Keytab must be specified when the keytab is specified")
      sysProps.put("spark.yarn.keytab", args.keytab)
      sysProps.put("spark.yarn.principal", args.principal)
@harishreedharan I see you changed this part of the code last time. If we want to pass these two arguments to Spark SQL, what is the recommended way?
@harishreedharan Hi Hari, could you let us know the preferred way to pass the principal and keytab parameters from spark-submit to Spark SQL? We are waiting for your response to proceed. Thank you!
We might want to look at yarn Client.scala's setupCredentials(), since it looks like it's doing something pretty similar.
@yolandagao Is 40d3c67 for the same issue (or a similar issue)? @steveloughran Can you take a look at this one? I am not sure if it is the same issue that you addressed in SPARK-11265.
@yhuai Hi Yin, SPARK-11265 is a different issue: in the yarn Client.scala code, a hive metastore token is obtained when kerberos is enabled in order to set up the AM container launch context, during which Hive and HiveConf instantiation failed due to a new change in Hive 1.2.1. The SPARK-11265 fix will not solve the problem here. In yarn-client mode, the Hive client with IsolatedClientLoader has no access to these delegation tokens and kerberos tickets (set up via UGI.loginUserFromKeytab in SparkSubmit), as it has a brand-new UserGroupInformation class loaded.
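To illustrate the classloader point with a generic JVM sketch (this is not Spark code, and the jar path is a placeholder): a class loaded through two different classloaders yields two distinct Class objects, each with its own static state, which is why a loginUser set through one copy of UserGroupInformation is invisible to the other.

```scala
import java.io.File
import java.net.URLClassLoader

// Placeholder path; in Spark's case this would be the hadoop-common jar
// on the isolated classloader's classpath.
val jars = Array(new File("/path/to/hadoop-common.jar").toURI.toURL)

// parent = null: do not delegate to the application classloader, so the
// class is loaded fresh, similar to how IsolatedClientLoader loads UGI.
val isolated = new URLClassLoader(jars, null)

val shared   = Class.forName("org.apache.hadoop.security.UserGroupInformation")
val separate = isolated.loadClass("org.apache.hadoop.security.UserGroupInformation")

// Two distinct Class objects mean two independent static `loginUser` fields.
assert(shared ne separate)
```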
@@ -520,6 +520,8 @@ object SparkSubmit {
    }
    if (args.principal != null) {
      require(args.keytab != null, "Keytab must be specified when the keytab is specified")
I know this line wasn't changed in this PR, but since we are in this file anyway, could we maybe clean up this error message? I think it is meant to say "Keytab must be specified when the kerberos principal is specified".
Sure. Corrected the error message. Thanks!
Hi folks,
val sparkConf = new SparkConf
if (sparkConf.contains("spark.yarn.principal") && sparkConf.contains("spark.yarn.keytab")) {
  UserGroupInformation.loginUserFromKeytab(
Before calling this, actually verify that the keytab file exists, and fail with a message that includes the property name, to help people debug the problem. UGI's internal exceptions are rarely informative enough.
This is unrelated to the SPARK-11265 patch; that's all about getting reflection to find the right methods. This is about UGI setup.
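A minimal sketch of the suggested pre-check (the helper name and exact message are illustrative, not the code that was merged):

```scala
import java.io.File

import org.apache.hadoop.security.UserGroupInformation
import org.apache.spark.SparkConf

def loginFromKeytab(sparkConf: SparkConf): Unit = {
  val principal = sparkConf.get("spark.yarn.principal")
  val keytab = sparkConf.get("spark.yarn.keytab")
  // Fail fast with the property name; UGI's own error on a missing file
  // ("login failed ... no keys found") is much harder to diagnose.
  require(new File(keytab).exists(),
    s"Keytab file: $keytab specified in spark.yarn.keytab does not exist")
  UserGroupInformation.loginUserFromKeytab(principal, keytab)
}
```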
val sparkConf = new SparkConf
if (sparkConf.contains("spark.yarn.principal") && sparkConf.contains("spark.yarn.keytab")) {
  UserGroupInformation.loginUserFromKeytab(
    sparkConf.get("spark.yarn.principal"),
Actually, you should call SparkHadoopUtil.get.loginUserFromKeytab(principalName, keytabFilename); maybe the check could be added there.
@steveloughran Good point. Better to check the existence of the keytab file before making the login call: if the keytab doesn't exist, the UGI call will definitely fail, but with an indirect message like "login failed... no keys found...", etc. Added the check.
However, calling SparkHadoopUtil.get.loginUserFromKeytab instead of UserGroupInformation.loginUserFromKeytab in ClientWrapper will not solve the problem, as SparkHadoopUtil is shared and the UserGroupInformation class it uses is not the same one used by SessionState.start in ClientWrapper. Therefore, the program still fails with a no-TGT exception when connecting to the metastore. We are also not able to replace the UGI call in SparkSubmit, as the wrong type of SparkHadoopUtil instance might get created, because the YARN mode flag isn't set in the system until execution reaches yarn Client.scala.
OK, just include the check. (Actually, UGI itself should do that check, shouldn't it? Lazy.)
Updated and tested the change in yarn client and cluster mode, with and without keytab/principal parameters. Please take a look. Thank you!
ok to test
Test build #45308 has finished for PR 9272 at commit
The org.apache.spark.sql.hive.thriftserver.CliSuite timeout failure seems unrelated to the change; it looks like an environment issue. I ran the test suite on my Mac several times with the same sbt command, and it passed every time. Can we retest?
@yolandagao you can try saying "jenkins retest this please" in a comment. (I know it will trigger a retest if you've been whitelisted; I'm less certain if you're not yet whitelisted.)
Test build #45392 has finished for PR 9272 at commit
@@ -150,6 +152,21 @@ private[hive] class ClientWrapper(
    val original = Thread.currentThread().getContextClassLoader
    // Switch to the initClassLoader.
    Thread.currentThread().setContextClassLoader(initClassLoader)

    val sparkConf = new SparkConf
Instead of creating a new SparkConf, can we use SparkEnv.get.conf to get the SparkConf associated with the current SparkContext?
@yhuai Sorry for the late response. I did the testing after changing to SparkEnv.get.conf, but it didn't work. The reason is that yarn Client.scala resets the property spark.yarn.keytab by appending a random string to the keytab file name during setupCredentials; the result is used as the link name in the distributed cache. I think the value used for the link name should actually be kept separate from the original keytab setting, e.g. by using different property names.
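In other words, a sketch of the flow under discussion (not a definitive description of the merged code): SparkSubmit writes the user-supplied values into system properties, yarn.Client later rewrites spark.yarn.keytab in the running conf for the distributed-cache link name, so only a freshly constructed SparkConf, which re-reads the spark.* system properties, still sees the original path.

```scala
import org.apache.spark.{SparkConf, SparkEnv}

// SparkEnv.get.conf may carry the keytab name rewritten by yarn.Client
// (original name plus a random suffix, used as the distributed-cache link).
val envKeytab = SparkEnv.get.conf.get("spark.yarn.keytab")

// A new SparkConf (loadDefaults = true by default) re-reads the spark.*
// system properties that SparkSubmit populated, so it still holds the
// keytab path the user actually passed on the command line.
val originalKeytab = new SparkConf().get("spark.yarn.keytab")
```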
@yolandagao Thanks for the explanation. Can you add comments to your code (including why we need to put those confs into sysProps and why we need to create a new SparkConf here)? Basically, we need to document the flow of how these confs get propagated. Otherwise, it is not obvious why we need this change. Thanks!
@yhuai Sure. I should have done this earlier to make everything clearer. :) Added some comments there; please help review. Thank you!
// SparkConf is needed for the original value of spark.yarn.keytab specified by user,
// as yarn.Client resets it for the link name in distributed cache
val sparkConf = new SparkConf
if (sparkConf.contains("spark.yarn.principal") && sparkConf.contains("spark.yarn.keytab")) {
Let's make it clear that we set these two settings in SparkSubmit.
@yolandagao Thank you for the update. Overall looks good. Left two comments.
@@ -20,6 +20,8 @@ package org.apache.spark.sql.hive.client
import java.io.{File, PrintStream}
import java.util.{Map => JMap}

import org.apache.hadoop.security.UserGroupInformation
Let's move this import down to the place where we have other hadoop related imports. https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide#SparkCodeStyleGuide-Imports is the doc about import ordering.
Test build #45937 has finished for PR 9272 at commit
Thank you Yin for the review. Updated the comments accordingly.
LGTM pending jenkins.
Test build #45944 has finished for PR 9272 at commit
Thanks! Merging to master and branch-1.6.
[SPARK-10181][SQL] Do kerberos login for credentials during hive client initialization

On driver process start-up, UserGroupInformation.loginUserFromKeytab is called with the principal and keytab passed in, and therefore the static var UserGroupInformation.loginUser is set to that principal, with kerberos credentials saved in its private credential set; all threads within the driver process are supposed to see and use these login credentials to authenticate with Hive and Hadoop. However, because of IsolatedClientLoader, the UserGroupInformation class is not shared with hive metastore clients; instead it is loaded separately and so cannot see the kerberos login credentials prepared in the main thread.

The first proposed fix would cause other classloader conflict errors and is not an appropriate solution. This new change does the kerberos login during hive client initialization, which makes credentials ready for the particular hive client instance.

yhuai Please take a look and let me know. If you are not the right person to talk to, could you point me to someone responsible for this?

Author: Yu Gao <ygao@us.ibm.com>
Author: gaoyu <gaoyu@gaoyu-macbookpro.roam.corp.google.com>
Author: Yu Gao <crystalgaoyu@gmail.com>

Closes #9272 from yolandagao/master.

(cherry picked from commit 72c1d68)
Signed-off-by: Yin Huai <yhuai@databricks.com>
Also merged to branch-1.5.