[#3489] improvement(hive-catalog): Add user authentication e2e test for Hive catalog #3525

Merged: 63 commits, May 27, 2024

Commits
fd390d0
Add kerberos IT
yuqi1129 May 17, 2024
780e375
Fix
yuqi1129 May 18, 2024
9c0a2ed
Fix test error
yuqi1129 May 18, 2024
3a6a31b
Merge branch 'main' of github.com:datastrato/graviton into issue_3432
yuqi1129 May 20, 2024
3f06416
Add fix.
yuqi1129 May 20, 2024
85e6b73
fix
yuqi1129 May 20, 2024
66640fc
fix
yuqi1129 May 20, 2024
01dbf60
Fix
yuqi1129 May 20, 2024
d6960c8
Merge branch 'main' into issue_3432
yuqi1129 May 20, 2024
e71dde3
Remove unused code.
yuqi1129 May 21, 2024
5a26aa1
Merge remote-tracking branch 'me/issue_3432' into issue_3432
yuqi1129 May 21, 2024
6488d03
Merge branch 'main' into issue_3432
yuqi1129 May 21, 2024
8aa9b4d
optimize code.
yuqi1129 May 21, 2024
4609efb
Merge remote-tracking branch 'me/issue_3432' into issue_3432
yuqi1129 May 21, 2024
30d4c60
Fix mistake
yuqi1129 May 21, 2024
965f63a
Revert the code that check status of `show databases` for Hive contai…
yuqi1129 May 21, 2024
0f8051c
Merge branch 'main' into issue_3432
yuqi1129 May 21, 2024
6919203
Merge main and rebase the code.
yuqi1129 May 22, 2024
ac82116
Merge remote-tracking branch 'me/issue_3432' into issue_3432
yuqi1129 May 22, 2024
b30294c
Merge branch 'issue_3432' into issue_3489
yuqi1129 May 22, 2024
cf212b2
Merge branch 'issue_3432' of github.com:yuqi1129/gravitino into issue…
yuqi1129 May 22, 2024
fee42e5
Merge main and resolve conflicts
yuqi1129 May 23, 2024
3875934
Add user e2e test for Hive catalog
yuqi1129 May 23, 2024
a5d07cb
Fix
yuqi1129 May 23, 2024
56af48f
Fix
yuqi1129 May 23, 2024
05a40ac
Fix test error.
yuqi1129 May 23, 2024
d3a4e5f
fix
yuqi1129 May 23, 2024
291907d
Merge branch 'main' into issue_3489
yuqi1129 May 23, 2024
f94f118
fix
yuqi1129 May 24, 2024
7f50dba
Merge remote-tracking branch 'me/issue_3489' into issue_3489
yuqi1129 May 24, 2024
a231e49
fix
yuqi1129 May 24, 2024
b3cc997
fix
yuqi1129 May 24, 2024
fb29d97
Fix style
yuqi1129 May 24, 2024
845f393
Fix test error.
yuqi1129 May 24, 2024
7c571e4
Fix test error
yuqi1129 May 24, 2024
c73635b
fix ut again
yuqi1129 May 24, 2024
538f91c
Merge branch 'main' into issue_3489
yuqi1129 May 24, 2024
832719c
Fix compile error.
yuqi1129 May 24, 2024
e164532
Merge remote-tracking branch 'me/issue_3489' into issue_3489
yuqi1129 May 24, 2024
84648c0
Fix test
yuqi1129 May 24, 2024
8014be8
Fix test
yuqi1129 May 24, 2024
603bb54
Fix
yuqi1129 May 24, 2024
5bce6c1
Add debug info
yuqi1129 May 25, 2024
c71cbc6
fix
yuqi1129 May 25, 2024
9c200ba
fix
yuqi1129 May 25, 2024
5c11d79
fix
yuqi1129 May 25, 2024
3761b1a
fix
yuqi1129 May 25, 2024
d26e139
fix
yuqi1129 May 25, 2024
a60e6ff
fix
yuqi1129 May 25, 2024
1eab58f
fix again
yuqi1129 May 25, 2024
6524f00
Fix again
yuqi1129 May 25, 2024
05a810a
fix
yuqi1129 May 25, 2024
bb8d5f7
Fix
yuqi1129 May 25, 2024
e8d6a8c
Fix
yuqi1129 May 26, 2024
e6afd7f
Merge branch 'main' into issue_3489
yuqi1129 May 26, 2024
74d1b87
Revert some code.
yuqi1129 May 26, 2024
618e1ca
Merge remote-tracking branch 'me/issue_3489' into issue_3489
yuqi1129 May 26, 2024
f913b68
fix
yuqi1129 May 26, 2024
f63d1cf
Remove some unnecessary log.
yuqi1129 May 26, 2024
7cb5b8b
Revert some code again
yuqi1129 May 26, 2024
b1e04b6
Optimize code.
yuqi1129 May 27, 2024
e004750
Add more assertions
yuqi1129 May 27, 2024
9d9766b
Merge branch 'main' into issue_3489
yuqi1129 May 27, 2024
1 change: 1 addition & 0 deletions .github/workflows/backend-integration-test.yml
@@ -113,3 +113,4 @@ jobs:
distribution/package/logs/gravitino-server.log
catalogs/**/*.log
catalogs/**/*.tar
distribution/**/*.log
SortOrders.java
@@ -10,6 +9 @@
/** Helper methods to create SortOrders to pass into Gravitino. */
public class SortOrders {

/** NONE is used to indicate that there is no sort order. */
public static final SortOrder[] NONE = new SortOrder[0];

/**
* Create a sort order by the given expression with the ascending sort direction and nulls first
* ordering.
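The new `SortOrders.NONE` constant gives callers a named way to say "no sort order" instead of allocating an empty array at every call site. A small usage sketch follows; the package path is assumed from the Gravitino API module layout, and the call site around it is hypothetical:

```java
import com.datastrato.gravitino.rel.expressions.sorts.SortOrder;
import com.datastrato.gravitino.rel.expressions.sorts.SortOrders;

public class SortOrderNoneExample {
  // A hypothetical table-creation call site: an unsorted table passes
  // SortOrders.NONE instead of `new SortOrder[0]`.
  static SortOrder[] sortOrderForUnsortedTable() {
    return SortOrders.NONE;
  }

  public static void main(String[] args) {
    // NONE is simply an empty SortOrder array, as defined in the diff above.
    System.out.println(sortOrderForUnsortedTable().length); // prints 0
  }
}
```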
2 changes: 2 additions & 0 deletions bin/gravitino.sh
@@ -156,6 +156,8 @@ if [ "$JVM_VERSION" -eq 17 ]; then
JAVA_OPTS+=" --add-opens java.security.jgss/sun.security.krb5=ALL-UNNAMED"
fi

#JAVA_OPTS+=" -Djava.securit.krb5.conf=/etc/krb5.conf"

addJarInDir "${GRAVITINO_HOME}/libs"

case "${1}" in
1 change: 1 addition & 0 deletions catalogs/catalog-hive/build.gradle.kts
@@ -165,6 +165,7 @@ tasks.test {

doFirst {
environment("GRAVITINO_CI_HIVE_DOCKER_IMAGE", "datastrato/gravitino-ci-hive:0.1.12")
environment("GRAVITINO_CI_KERBEROS_HIVE_DOCKER_IMAGE", "datastrato/gravitino-ci-kerberos-hive:0.1.1")
}

val init = project.extra.get("initIntegrationTest") as (Test) -> Unit
HiveCatalogOperations.java
@@ -51,6 +51,7 @@
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.lang.reflect.Method;
import java.time.Instant;
import java.util.Arrays;
import java.util.List;
@@ -74,6 +75,7 @@
import org.apache.hadoop.hive.metastore.api.UnknownDBException;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authentication.util.KerberosName;
import org.apache.thrift.TException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -144,7 +146,6 @@ public void initialize(Map<String, String> conf, CatalogInfo info) throws Runtim
// and gravitinoConfig will be passed to Hive config, and gravitinoConfig has higher priority
mergeConfig.forEach(hadoopConf::set);
hiveConf = new HiveConf(hadoopConf, HiveCatalogOperations.class);
UserGroupInformation.setConfiguration(hadoopConf);

initKerberosIfNecessary(conf, hadoopConf);

@@ -173,7 +174,7 @@ private void initKerberosIfNecessary(Map<String, String> conf, Configuration had

String keytabUri =
(String)
catalogPropertiesMetadata.getOrDefault(conf, HiveCatalogPropertiesMeta.KET_TAB_URI);
catalogPropertiesMetadata.getOrDefault(conf, HiveCatalogPropertiesMeta.KEY_TAB_URI);
Preconditions.checkArgument(StringUtils.isNotBlank(keytabUri), "Keytab uri can't be blank");
// TODO: Support to download the file from Kerberos HDFS
Preconditions.checkArgument(
@@ -201,6 +202,10 @@ private void initKerberosIfNecessary(Map<String, String> conf, Configuration had
new ScheduledThreadPoolExecutor(
1, getThreadFactory(String.format("Kerberos-check-%s", info.id())));

LOG.info("krb5 path: {}", System.getProperty("java.security.krb5.conf"));
refreshKerberosConfig();
KerberosName.resetDefaultRealm();
A reviewer (Contributor) asked: What is the purpose of this line?

yuqi1129 (Contributor, Author) replied: `KerberosName.resetDefaultRealm()` resets the cached Kerberos default realm. Some tests use the realm EXAMPLE.COM and others use HADOOPKRB, and a single JVM can only hold one default realm at a time, so without this reset the tests that switch realms fail.
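For context, the pattern described in the reply looks roughly like the sketch below: point the JVM at a different krb5.conf, force the JDK to re-read it, and drop Hadoop's cached default realm before logging in again. The krb5.conf paths are hypothetical, and the reflection call mirrors the `refreshKerberosConfig()` helper added further down in this diff.

```java
import java.lang.reflect.Method;
import org.apache.hadoop.security.authentication.util.KerberosName;

public class KerberosRealmSwitchSketch {

  /**
   * Switch the JVM to another krb5.conf and drop the cached realm so that a later
   * UserGroupInformation.loginUserFromKeytab() picks up the new realm. On JDK 17 the
   * reflective access needs "--add-opens java.security.jgss/sun.security.krb5=ALL-UNNAMED"
   * (see bin/gravitino.sh).
   */
  static void switchRealm(String krb5ConfPath) throws Exception {
    System.setProperty("java.security.krb5.conf", krb5ConfPath);

    // The JDK caches the parsed krb5.conf in sun.security.krb5.Config
    // (com.ibm.security.krb5.internal.Config on IBM JDKs), so force a re-read.
    Class<?> config = Class.forName("sun.security.krb5.Config");
    Method refresh = config.getMethod("refresh");
    refresh.invoke(null);

    // Hadoop caches the default realm inside KerberosName; reset it so a test using
    // EXAMPLE.COM and a later test using HADOOPKRB don't interfere in the same JVM.
    KerberosName.resetDefaultRealm();
  }

  public static void main(String[] args) throws Exception {
    // Hypothetical paths; each file would define a different default_realm.
    switchRealm("/tmp/krb5-example-com.conf");
    switchRealm("/tmp/krb5-hadoopkrb.conf");
  }
}
```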

UserGroupInformation.setConfiguration(hadoopConf);
UserGroupInformation.loginUserFromKeytab(catalogPrincipal, keytabFile.getAbsolutePath());

UserGroupInformation kerberosLoginUgi = UserGroupInformation.getCurrentUser();
@@ -224,10 +229,28 @@ private void initKerberosIfNecessary(Map<String, String> conf, Configuration had

} catch (IOException ioe) {
throw new UncheckedIOException(ioe);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}

private void refreshKerberosConfig() {
Class<?> classRef;
try {
if (System.getProperty("java.vendor").contains("IBM")) {
classRef = Class.forName("com.ibm.security.krb5.internal.Config");
} else {
classRef = Class.forName("sun.security.krb5.Config");
}

Method refershMethod = classRef.getMethod("refresh");
refershMethod.invoke(null);
} catch (Exception e) {
throw new RuntimeException(e);
}
}

@VisibleForTesting
int getClientPoolSize(Map<String, String> conf) {
return (int) catalogPropertiesMetadata.getOrDefault(conf, CLIENT_POOL_SIZE);
HiveCatalogPropertiesMeta.java
@@ -28,7 +28,7 @@ public class HiveCatalogPropertiesMeta extends BaseCatalogPropertiesMetadata {

public static final boolean DEFAULT_IMPERSONATION_ENABLE = false;

public static final String KET_TAB_URI = "kerberos.keytab-uri";
public static final String KEY_TAB_URI = "kerberos.keytab-uri";

public static final String PRINCIPAL = "kerberos.principal";

@@ -69,9 +69,9 @@ public class HiveCatalogPropertiesMeta extends BaseCatalogPropertiesMetadata {
false,
false))
.put(
KET_TAB_URI,
KEY_TAB_URI,
PropertyEntry.stringImmutablePropertyEntry(
KET_TAB_URI, "The uri of key tab for the catalog", false, null, false, false))
KEY_TAB_URI, "The uri of key tab for the catalog", false, null, false, false))
.put(
PRINCIPAL,
PropertyEntry.stringImmutablePropertyEntry(
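The renamed `KEY_TAB_URI` constant is one of the property keys a user sets when creating a Kerberos-enabled Hive catalog, which is what the new e2e test exercises. A hedged sketch of such a properties map is below; the values are placeholders, the `metastore.uris` and `impersonation-enable` key strings are assumed from `HiveCatalogPropertiesMeta` rather than shown in this diff, and the catalog-creation call itself is omitted:

```java
import com.google.common.collect.ImmutableMap;
import java.util.Map;

public class KerberosHiveCatalogProperties {
  public static void main(String[] args) {
    Map<String, String> catalogProperties =
        ImmutableMap.of(
            // Key string assumed from METASTORE_URIS; the URI is a placeholder.
            "metastore.uris", "thrift://hive-metastore:9083",
            // PRINCIPAL ("kerberos.principal") and KEY_TAB_URI ("kerberos.keytab-uri")
            // are the keys defined in this file; the values are placeholders.
            "kerberos.principal", "hive/_HOST@HADOOPKRB",
            "kerberos.keytab-uri", "file:///tmp/hive-catalog.keytab",
            // Key string assumed from IMPERSONATION_ENABLE; it defaults to false.
            "impersonation-enable", "true");

    // These properties would then be passed to the metalake's createCatalog call
    // (not shown in this diff) to drive the Kerberos + impersonation code path.
    catalogProperties.forEach((k, v) -> System.out.println(k + " = " + v));
  }
}
```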
@@ -58,7 +58,7 @@ public Object doAs(
ops.getClientPool()
.run(
client -> {
return client.getDelegationToken(realUser.getUserName(), principal.getName());
return client.getDelegationToken(principal.getName(), realUser.getUserName());
yuqi1129 (Contributor, Author): This is a pre-existing bug; it is just fixed here in passing.
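For context on why the argument order matters: the metastore client's `getDelegationToken(owner, renewer)` issues a token owned by its first argument and renewable by its second. A minimal sketch under the assumption, suggested by the surrounding code, that `principal` is the proxied end user and `realUser` is the Kerberos-authenticated catalog user:

```java
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.thrift.TException;

public class DelegationTokenArgOrder {
  /**
   * Fetch a metastore delegation token for a proxied user. The first argument is the
   * token owner (the end user being impersonated); the second is the renewer (the real,
   * Kerberos-authenticated catalog user). The pre-fix code passed them reversed, so
   * tokens were issued to the wrong user.
   */
  static String tokenForProxiedUser(IMetaStoreClient client, String proxiedUser, String realUser)
      throws TException {
    return client.getDelegationToken(proxiedUser, realUser);
  }
}
```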

});

Token<DelegationTokenIdentifier> delegationToken = new Token<DelegationTokenIdentifier>();
@@ -10,7 +10,7 @@
import static com.datastrato.gravitino.catalog.hive.HiveCatalogPropertiesMeta.CLIENT_POOL_SIZE;
import static com.datastrato.gravitino.catalog.hive.HiveCatalogPropertiesMeta.FETCH_TIMEOUT_SEC;
import static com.datastrato.gravitino.catalog.hive.HiveCatalogPropertiesMeta.IMPERSONATION_ENABLE;
import static com.datastrato.gravitino.catalog.hive.HiveCatalogPropertiesMeta.KET_TAB_URI;
import static com.datastrato.gravitino.catalog.hive.HiveCatalogPropertiesMeta.KEY_TAB_URI;
import static com.datastrato.gravitino.catalog.hive.HiveCatalogPropertiesMeta.METASTORE_URIS;
import static com.datastrato.gravitino.catalog.hive.HiveCatalogPropertiesMeta.PRINCIPAL;
import static com.datastrato.gravitino.connector.BaseCatalog.CATALOG_BYPASS_PREFIX;
@@ -81,7 +81,7 @@ void testPropertyMeta() {
Assertions.assertFalse(
propertyEntryMap.get(CLIENT_POOL_CACHE_EVICTION_INTERVAL_MS).isRequired());
Assertions.assertFalse(propertyEntryMap.get(IMPERSONATION_ENABLE).isRequired());
Assertions.assertFalse(propertyEntryMap.get(KET_TAB_URI).isRequired());
Assertions.assertFalse(propertyEntryMap.get(KEY_TAB_URI).isRequired());
Assertions.assertFalse(propertyEntryMap.get(PRINCIPAL).isRequired());
Assertions.assertFalse(propertyEntryMap.get(CHECK_INTERVAL_SEC).isRequired());
Assertions.assertFalse(propertyEntryMap.get(FETCH_TIMEOUT_SEC).isRequired());
@@ -199,7 +199,9 @@ public static void startup() throws Exception {

@AfterAll
public static void stop() throws IOException {
client.dropMetalake(metalakeName);
if (client != null) {
client.dropMetalake(metalakeName);
}
if (hiveClientPool != null) {
hiveClientPool.close();
}
@@ -216,6 +218,9 @@ public static void stop() throws IOException {
} catch (Exception e) {
LOG.error("Failed to close CloseableGroup", e);
}

AbstractIT.customConfigs.clear();
AbstractIT.client = null;
}

@AfterEach