
"salting" names of service principals created by hdfs / hive when kerberos is enabled #402

@maxgruber19

Description


When one LDAP system serves multiple environments such as dev, int, and prod, service principal names collide whenever the cluster resource name and the namespace name are identical across those environments. Even tricks with multiple organizational units won't solve the problem, because principal names have to be unique within one domain.
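To make the collision concrete, here is a small illustration (the helper function and naming scheme are assumptions for demonstration, not the operator's actual code):

```python
# Hypothetical sketch: two k8s clusters (dev and prod) with identical
# cluster resource and namespace names derive the very same principal
# name in the shared AD domain.
def principal(service, host, realm):
    # service/instance@REALM is the standard Kerberos principal form
    return f"{service}/{host}@{realm}"

dev = principal("dn", "hdfs.namespace.svc.cluster.local", "CLUSTER.LOCAL")
prod = principal("dn", "hdfs.namespace.svc.cluster.local", "CLUSTER.LOCAL")

# Both environments ask the domain to create the same principal,
# which conflicts because names must be unique within one domain.
print(dev == prod)  # → True
```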

We use the same namespace names in different environments (k8s clusters), and I really like that because it makes the platform much easier to use. Switching to namespaces with an environment suffix would do the job, but perhaps we can instead modify the principal names at creation time.

I suggest adding an optional parameter `principalSalt` (there is surely a better name) to SecretClasses with a `kerberosKeytab` backend:

spec:
  backend:
    kerberosKeytab:
      realmName: CLUSTER.LOCAL
      kdc: krb5-kdc
      admin:
        mit:
          kadminServer: krb5-kdc
        # or...
        activeDirectory:
          # ldapServer must match the AD Domain Controller's FQDN or GSSAPI authn will fail
          # You may need to set AD as your fallback DNS resolver in your Kube DNS Corefile
          ldapServer: addc.example.com
          ldapTlsCaSecret:
            namespace: default
            name: secret-operator-ad-ca
          passwordCacheSecret:
            namespace: default
            name: secret-operator-ad-passwords
          principalSalt: dev
          userDistinguishedName: CN=Users,DC=sble,DC=test
          schemaDistinguishedName: CN=Schema,CN=Configuration,DC=sble,DC=test
      adminKeytabSecret:
        namespace: default
        name: secret-provisioner-keytab
      adminPrincipal: stackable-secret-operator

Copied from https://docs.stackable.tech/home/stable/secret-operator/secretclass#backend-kerberoskeytab and modified.

That should result in a principal like `dn/dev-hdfs.namespace.svc.cluster.local@CLUSTER.LOCAL` instead of `dn/hdfs.namespace.svc.cluster.local@CLUSTER.LOCAL`.
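A minimal sketch of how the proposed salt could be applied when building the principal name (the function and the "prefix the instance part with `salt-`" scheme are assumptions based on the example above, not the operator's actual implementation):

```python
def service_principal(service, host, realm, salt=None):
    """Build a Kerberos service principal, optionally prefixing the
    instance (host) part with a salt to keep principals unique across
    environments sharing one LDAP/AD domain."""
    # Assumed scheme: the salt is prepended to the service's DNS name
    # with a dash, as in the dn/dev-hdfs... example from this issue.
    instance = f"{salt}-{host}" if salt else host
    return f"{service}/{instance}@{realm}"

# Without a salt (current behaviour):
service_principal("dn", "hdfs.namespace.svc.cluster.local", "CLUSTER.LOCAL")
# → "dn/hdfs.namespace.svc.cluster.local@CLUSTER.LOCAL"

# With principalSalt: dev
service_principal("dn", "hdfs.namespace.svc.cluster.local", "CLUSTER.LOCAL", salt="dev")
# → "dn/dev-hdfs.namespace.svc.cluster.local@CLUSTER.LOCAL"
```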

I have thought about backwards compatibility for a while now, and I believe it should not be a big problem: no existing cluster is affected as long as the "salt" property is not set. I must confess, though, that I have no idea what the consequences are for HDFS or Hive when the service principal name and the cluster service name no longer match.
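One concrete risk behind that last caveat: GSSAPI clients generally derive the expected server principal from the hostname they connect to, not from the keytab's contents, so a salted principal may no longer match what clients compute. A hedged sketch under that assumption (hypothetical helper, not Hadoop's actual code):

```python
def expected_spn(service, connect_host, realm):
    # Kerberized clients (e.g. Hadoop RPC over GSSAPI) typically build
    # the server principal from the hostname they dialed; the KDC/keytab
    # must then hold a matching principal for authentication to succeed.
    return f"{service}/{connect_host}@{realm}"

# A client dials the plain, unsalted service DNS name...
spn = expected_spn("dn", "hdfs.namespace.svc.cluster.local", "CLUSTER.LOCAL")

# ...but the salted keytab would only contain the dev-prefixed principal,
# so the names differ and authentication would presumably fail unless
# clients are also pointed at the salted name.
print(spn != "dn/dev-hdfs.namespace.svc.cluster.local@CLUSTER.LOCAL")  # → True
```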
