
[SPARK-10181][SQL] Do kerberos login for credentials during hive client initialization #9272

Closed
wants to merge 17 commits into from

Conversation

yolandagao
Contributor

On driver process start up, UserGroupInformation.loginUserFromKeytab is called with the principal and keytab passed in, so the static var UserGroupInformation.loginUser is set to that principal, with kerberos credentials saved in its private credential set, and all threads within the driver process are supposed to see and use these login credentials to authenticate with Hive and Hadoop. However, because of IsolatedClientLoader, the UserGroupInformation class is not shared with hive metastore clients; instead it is loaded separately, and so cannot see the kerberos login credentials prepared in the main thread.

The first proposed fix would cause other classloader conflict errors and is not an appropriate solution. This new change does the kerberos login during hive client initialization, which makes the credentials ready for that particular hive client instance.
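The shape of the change can be sketched as follows (a simplified sketch, not the exact patch; property keys follow the diff in this PR):

```scala
// Inside the hive client wrapper's initialization, while the isolated
// classloader is the context classloader, so the UserGroupInformation
// class referenced here is the isolated copy seen by the metastore client.
import org.apache.hadoop.security.UserGroupInformation
import org.apache.spark.SparkConf

val sparkConf = new SparkConf  // picks up spark.* JVM system properties
if (sparkConf.contains("spark.yarn.principal") &&
    sparkConf.contains("spark.yarn.keytab")) {
  UserGroupInformation.loginUserFromKeytab(
    sparkConf.get("spark.yarn.principal"),
    sparkConf.get("spark.yarn.keytab"))
}
```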

@yhuai Please take a look and let me know. If you are not the right person to talk to, could you point me to someone responsible for this?

@yhuai
Contributor

yhuai commented Oct 26, 2015

Test this please.

@@ -520,6 +520,8 @@ object SparkSubmit {
}
if (args.principal != null) {
require(args.keytab != null, "Keytab must be specified when the keytab is specified")
sysProps.put("spark.yarn.keytab", args.keytab)
sysProps.put("spark.yarn.principal", args.principal)
Contributor

@harishreedharan I see you changed this part of the code last time. If we want to pass these two arguments to Spark SQL, what is the recommended way?

Contributor Author

@harishreedharan Hi Hari, could you let us know the preferred way to pass the principal and keytab parameters from spark-submit to Spark SQL? We are waiting for your response before proceeding. Thank you!

Contributor

We might want to look at yarn Client.scala's setupCredentials(), since it looks like it's doing something pretty similar.

@yhuai
Contributor

yhuai commented Nov 2, 2015

@yolandagao Is 40d3c67 for the same (or a similar) issue? @steveloughran Can you take a look at this one? I am not sure if it is the same issue that you addressed in SPARK-11265.

@yolandagao
Contributor Author

@yhuai Hi Yin, SPARK-11265 is a different issue: in yarn Client.scala, the hive metastore token is obtained when kerberos is enabled while setting up the AM container launch context, during which Hive and HiveConf instantiation failed due to a change in Hive 1.2.1.
YarnSparkHadoopUtil.get.obtainTokensForNamenodes(nns, hadoopConf, credentials)
obtainTokenForHiveMetastore(hadoopConf, credentials)
obtainTokenForHBase(hadoopConf, credentials)
....
setupSecurityToken(amContainer)
UserGroupInformation.getCurrentUser().addCredentials(credentials)

The SPARK-11265 fix will not solve the problem here. In yarn-client mode, the Hive client created with IsolatedClientLoader has no access to these delegation tokens and kerberos tickets (set up via UGI.loginUserFromKeytab in SparkSubmit), because it has a freshly loaded UserGroupInformation class of its own.

@@ -520,6 +520,8 @@ object SparkSubmit {
}
if (args.principal != null) {
require(args.keytab != null, "Keytab must be specified when the keytab is specified")
Contributor

I know this line wasn't changed in this PR, but since we are in this file anyway, could we maybe clean up this error message? I think it is meant to say "Keytab must be specified when the kerberos principal is specified".

Contributor Author

Sure. Corrected the error message. Thanks!

@yolandagao
Contributor Author

Hi folks,
Yarn Client.scala checks the setting from argStrings (passed from SparkSubmit) and sparkConf (which also loads java system properties starting with spark.*). The args settings (--principal, --keytab) will not be available in Spark SQL, but we can take advantage of SparkConf with system properties. Updated the pull request and tested. Please advise if this is what you are looking for. Thank you!
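A minimal sketch of that propagation path (simplified from this PR's diff; a freshly constructed SparkConf copies every JVM system property whose key starts with `spark.` into the conf):

```scala
// In SparkSubmit: expose the command-line arguments as system properties.
sysProps.put("spark.yarn.principal", args.principal)
sysProps.put("spark.yarn.keytab", args.keytab)

// Later, in the driver: `new SparkConf` (loadDefaults = true, the default)
// picks up the spark.* system properties, so the values are visible
// without being passed around explicitly.
val sparkConf = new org.apache.spark.SparkConf
val principal = sparkConf.getOption("spark.yarn.principal")
```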


val sparkConf = new SparkConf
if (sparkConf.contains("spark.yarn.principal") && sparkConf.contains("spark.yarn.keytab")) {
UserGroupInformation.loginUserFromKeytab(
Contributor

Before calling this, actually verify that the keytab file exists, and fail with a message including the property name, to help people debug the problem. UGI's internal exceptions are rarely informative enough.
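A hypothetical shape for that pre-check (property names as used elsewhere in this PR; not the exact code that was merged):

```scala
import java.io.File
import org.apache.hadoop.security.UserGroupInformation

val keytab = sparkConf.get("spark.yarn.keytab")
// Fail fast with the property name in the message, rather than letting
// UGI throw a vague login error later.
require(new File(keytab).exists(),
  s"Keytab file '$keytab' (from spark.yarn.keytab) does not exist")
UserGroupInformation.loginUserFromKeytab(
  sparkConf.get("spark.yarn.principal"), keytab)
```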

@steveloughran
Contributor

This is unrelated to the SPARK-11265 patch; that's all about getting reflection to find the right methods. This is about UGI setup.

val sparkConf = new SparkConf
if (sparkConf.contains("spark.yarn.principal") && sparkConf.contains("spark.yarn.keytab")) {
UserGroupInformation.loginUserFromKeytab(
sparkConf.get("spark.yarn.principal"),
Contributor

Actually, you should call SparkHadoopUtil.get.loginUserFromKeytab(principalName, keytabFilename); maybe the check could be added there.

Contributor Author

@steveloughran Good point. Better to check the existence of the keytab file before making the login call, since if the keytab doesn't exist the UGI call will definitely fail, but with an indirect message like "login failed... no keys found...", etc. Added the check.

However, calling SparkHadoopUtil.get.loginUserFromKeytab instead of UserGroupInformation.loginUserFromKeytab in ClientWrapper will not solve the problem, as SparkHadoopUtil is shared, and the UserGroupInformation class it uses is not the same one used by SessionState.start in ClientWrapper. The program therefore still fails with a "no tgt" exception when connecting to the metastore. We are also not able to replace the UGI call in SparkSubmit, since the wrong type of SparkHadoopUtil instance might get created, because yarn mode isn't set in the system properties until execution reaches yarn Client.scala.

Contributor

OK, just include the check. (Actually, UGI itself should do that check, shouldn't it? Lazy.)

@yolandagao
Contributor Author

Updated and tested the change - in yarn client and cluster mode, with and without keytab/principal parameters. Please take a look. Thank you!

@marmbrus
Contributor

marmbrus commented Nov 7, 2015

ok to test

@SparkQA

SparkQA commented Nov 8, 2015

Test build #45308 has finished for PR 9272 at commit 7d09f5d.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@yolandagao
Contributor Author

The org.apache.spark.sql.hive.thriftserver.CliSuite timeout failure seems unrelated to the change and looks like an environment issue. I ran the test suite on my Mac several times with the same sbt command, and it passed every time.

Can we retest?

@holdenk
Contributor

holdenk commented Nov 9, 2015

@yolandagao you can try saying "jenkins retest this please" in a comment (if you've been whitelisted it will trigger a retest; I'm less certain if you're not yet whitelisted).

@SparkQA

SparkQA commented Nov 9, 2015

Test build #45392 has finished for PR 9272 at commit 7d09f5d.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@@ -150,6 +152,21 @@ private[hive] class ClientWrapper(
val original = Thread.currentThread().getContextClassLoader
// Switch to the initClassLoader.
Thread.currentThread().setContextClassLoader(initClassLoader)

val sparkConf = new SparkConf
Contributor

Instead of creating a new Spark Conf, can we use SparkEnv.get.conf to get the spark conf associated with the current spark context?

Contributor Author

@yhuai Sorry for the late response. I tested after changing to SparkEnv.get.conf, but it didn't work. The reason is that yarn Client.scala resets the property spark.yarn.keytab by appending a random string to the keytab file name during setupCredentials, and that value is then used as the link name in the distributed cache. I think the link name should actually be kept separate from the original keytab setting, e.g. under a different property name.
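For context, a simplified sketch of what yarn Client.scala's setupCredentials() does in this era of the codebase (as described above; the exact code may differ):

```scala
// The original keytab path supplied by the user...
val keytab = sparkConf.get("spark.yarn.keytab")
val f = new java.io.File(keytab)
// ...is replaced in the conf by a randomized distributed-cache link name,
// so a conf obtained later (e.g. via SparkEnv.get.conf) no longer holds
// the real file path.
val linkName = f.getName + "-" + java.util.UUID.randomUUID().toString
sparkConf.set("spark.yarn.keytab", linkName)
```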

Contributor

@yolandagao Thanks for the explanation. Can you add comments to your code (including why we need to put those confs into sysProps and why we need to create a new SparkConf here)? Basically, we need to document how these confs get propagated; otherwise it is not obvious why this change is needed. Thanks!

Contributor Author

@yhuai Sure. I should have done this earlier to make everything clearer. :) Added some comments there; please help review. Thank you!

// SparkConf is needed for the original value of spark.yarn.keytab specified by user,
// as yarn.Client resets it for the link name in distributed cache
val sparkConf = new SparkConf
if (sparkConf.contains("spark.yarn.principal") && sparkConf.contains("spark.yarn.keytab")) {
Contributor

Let's make it clear that we set these two settings in SparkSubmit.

@yhuai
Contributor

yhuai commented Nov 14, 2015

@yolandagao Thank you for the update. Overall looks good. Left two comments.

@@ -20,6 +20,8 @@ package org.apache.spark.sql.hive.client
import java.io.{File, PrintStream}
import java.util.{Map => JMap}

import org.apache.hadoop.security.UserGroupInformation
Contributor

Let's move this import down to the place where we have other hadoop related imports. https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide#SparkCodeStyleGuide-Imports is the doc about import ordering.

@SparkQA

SparkQA commented Nov 14, 2015

Test build #45937 has finished for PR 9272 at commit 1fbc372.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@yolandagao
Contributor Author

Thank you Yin for the review. Updated the comments accordingly.

@yhuai
Contributor

yhuai commented Nov 15, 2015

LGTM pending jenkins.

@SparkQA

SparkQA commented Nov 15, 2015

Test build #45944 has finished for PR 9272 at commit caf51a7.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@yhuai
Contributor

yhuai commented Nov 15, 2015

Thanks! Merging to master and branch 1.6.

asfgit pushed a commit that referenced this pull request Nov 15, 2015
[SPARK-10181][SQL] Do kerberos login for credentials during hive client initialization

Author: Yu Gao <ygao@us.ibm.com>
Author: gaoyu <gaoyu@gaoyu-macbookpro.roam.corp.google.com>
Author: Yu Gao <crystalgaoyu@gmail.com>

Closes #9272 from yolandagao/master.

(cherry picked from commit 72c1d68)
Signed-off-by: Yin Huai <yhuai@databricks.com>
@asfgit asfgit closed this in 72c1d68 Nov 15, 2015
asfgit pushed a commit that referenced this pull request Nov 16, 2015
[SPARK-10181][SQL] Do kerberos login for credentials during hive client initialization
@yhuai
Contributor

yhuai commented Nov 16, 2015

Also merged to branch-1.5.
