Encryption at rest: Transparent Data at Rest Encryption in HDP 2.3 using Ranger KMS

Install Ranger KMS
  • Start the Ranger KMS install by navigating to the link below in Ambari (prerequisite: Ranger is already installed)

    • Admin -> Stacks/Versions -> Ranger KMS -> Add service
  • Below is a summary of the configurations needed for Ranger KMS settings:

    • Advanced kms-properties
      • REPOSITORY_CONFIG_USERNAME = rangeradmin@HORTONWORKS.COM
      • REPOSITORY_CONFIG_PASSWORD = hortonworks
      • db_password = hortonworks (or whatever you set the MySQL password to when setting up Ranger here)
      • db_root_password = hortonworks (or whatever you set the MySQL password to when setting up Ranger here)
      • KMS_MASTER_KEY_PASSWD = hortonworks (or whatever you wish to set this to be)

Image

  • After setting the above, proceed with the install of Ranger KMS

  • Post install changes:

    • Link core-site.xml: sudo ln -s /etc/hadoop/conf/core-site.xml /etc/ranger/kms/conf/core-site.xml

    • Configure HDFS to access KMS by making the below HDFS config changes (the typical HDFS-side properties are sketched after this list)

    • Advanced kms-site (these should already be set but just confirm)

      • hadoop.kms.authentication.type=kerberos
      • hadoop.kms.authentication.kerberos.keytab=/etc/security/keytabs/spnego.service.keytab
      • hadoop.kms.authentication.kerberos.principal=*
    • Custom kms-site (the proxy user should match the user from REPOSITORY_CONFIG_USERNAME above)

      • hadoop.kms.proxyuser.rangeradmin.users = *
      • hadoop.kms.proxyuser.rangeradmin.hosts = *
      • hadoop.kms.proxyuser.rangeradmin.groups = *

Image
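For reference, the HDFS-side change usually amounts to pointing HDFS at the Ranger KMS key provider. Below is a minimal sketch of what those properties typically look like on this sandbox; the URI assumes the default Ranger KMS port of 9292 and the sandbox hostname, so adjust both for your cluster:

    • Advanced core-site
      • hadoop.security.key.provider.path = kms://http@sandbox.hortonworks.com:9292/kms
    • Advanced hdfs-site
      • dfs.encryption.key.provider.uri = kms://http@sandbox.hortonworks.com:9292/kms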

  • Restart Ranger KMS and HDFS services

  • At this point you can query the KMS for the list of keys it manages: hadoop key list (an example invocation is shown below)
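For example, with a valid Kerberos ticket for the hdfs user (the keytab path and principal below are the same ones used later in this guide), the listing can be run as:
sudo sudo -u hdfs kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@HORTONWORKS.COM
sudo sudo -u hdfs hadoop key list
If no keys have been created yet, no key names will be listed.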

Enable Ranger plugin for KMS
  • In Ambari, under Ranger KMS -> Configs -> Advanced ->

    • Advanced ranger-kms-audit:
      • Audit to DB: Check
      • Audit to HDFS: Check
      • (Optional) Audit to SOLR: Check
      • (Optional) Audit provider summary enabled: Check
      • (Optional) xasecure.audit.is.enabled: true
      • In the value of xasecure.audit.destination.hdfs.dir, replace "NAMENODE_HOSTNAME" with the FQDN of the NameNode
        Image
    • Note: to audit to Solr, you need to have previously installed Solr and made the necessary changes in Ranger settings under Advanced ranger-admin-site
  • Restart KMS

  • Check that KMS audits show up in Solr/Banana and HDFS

hadoop fs -ls /ranger/audit/kms
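To look at an individual KMS audit record in HDFS, you can recurse into the audit directory and cat one of the log files it contains; the date directory and file name below are placeholders only and will differ in your environment:
sudo sudo -u hdfs hdfs dfs -ls -R /ranger/audit/kms
sudo sudo -u hdfs hdfs dfs -cat /ranger/audit/kms/<date-dir>/<audit-log-file>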
Create key from command line
  • Create a key of length 256 from the command line and call it testkeyfromcli
sudo sudo -u hdfs kinit -Vkt /etc/security/keytabs/hdfs.headless.keytab  hdfs@HORTONWORKS.COM
sudo sudo -u hdfs hadoop key create testkeyfromcli -size 256
sudo sudo -u hdfs hadoop key list -metadata
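(Optional) The same hadoop key CLI can also roll the key to a new version; below is a quick sketch using the key created above (rolling is safe, but avoid deleting a key that still protects data you need):
sudo sudo -u hdfs hadoop key roll testkeyfromcli
sudo sudo -u hdfs hadoop key list -metadata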
Create key from Ranger
  • Log in to Ranger as keyadmin/keyadmin at http://sandbox.hortonworks.com:6080

  • Click Encryption tab and select the KMS service from the dropdown. The previously created key should appear

  • Click "Add New Key" and create a new key: testkeyfromui Image

  • Both keys should now appear

Image

  • (Optional) In case of errors, check the following:

    • Click the edit icon next to Ranger > Access Manager > KMS > Sandbox_kms to edit the service. Ensure the correct values are present for the KMS URL, user, and password, and that the test connection works
    • In the previous step, the proxyuser was created for the same user as above
  • If you see the below error when creating zones, you may need to restart HDFS so the NameNode picks up the key provider configuration

RemoteException: Can't create an encryption zone for /enczone2 since no key provider is available.
Create encryption zones
  • Create an encryption zone at /enczone1 with the zone key testkeyfromui. Then list the encryption zones to confirm it was created (expected output is sketched below the commands)
sudo sudo -u hdfs hdfs dfs -mkdir /enczone1
sudo sudo -u hdfs hdfs crypto -createZone -keyName testkeyfromui -path /enczone1
sudo sudo -u hdfs hdfs crypto -listZones 
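If the zone was created successfully, the -listZones output should pair the zone path with its key, similar to the line below (exact formatting can vary by version):
/enczone1  testkeyfromui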
Setup Ranger policy

Since HDFS file encryption/decryption is transparent to its clients, users can read/write files to/from an encryption zone as long as they have permission to access it.

  • As the hdfs user, temporarily open up permissions on the encryption zone
sudo sudo -u hdfs hdfs dfs -chmod 777 /enczone1
  • Create a test file, push it to the encryption zone, and then tighten the permissions back
echo "Hello TDE" >> myfile.txt
sudo hdfs dfs -put myfile.txt /enczone1
sudo sudo -u hdfs hdfs dfs -chmod 700 /enczone1
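As a quick optional sanity check, you can confirm the file landed in the zone and that decryption is transparent to an authorized user: reading the file back as hdfs should print the plaintext "Hello TDE"
sudo sudo -u hdfs hdfs dfs -ls /enczone1
sudo sudo -u hdfs hdfs dfs -cat /enczone1/myfile.txt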
  • Log in to Ranger as admin/admin and set up an HDFS policy so that only the sales group has access to the /enczone1 dir
    • Resource path: /enczone1
    • Recursive: Yes
    • Audit logging: Yes
    • Group permissions: sales and select Read/Write/Execute
    • Image
Audit exercises
  • Access the file as ali. This should succeed as ali is part of the sales group.
sudo su ali
kinit
# enter the password: hortonworks
hadoop fs -cat /enczone1/myfile.txt
  • Access the file as hr1. This should be denied as hr1 is not part of the sales group.
sudo su hr1
kinit
# enter the password: hortonworks
hadoop fs -cat /enczone1/myfile.txt
  • Review the audit in Ranger
    Image

  • View the contents of the raw file in the encryption zone as the hdfs superuser. This should show encrypted characters

sudo sudo -u hdfs hdfs dfs -cat /.reserved/raw/enczone1/myfile.txt
  • Prevent the hdfs user from reading the file by setting the security.hdfs.unreadable.by.superuser attribute. Note that this attribute can only be set on files and can never be removed.
sudo sudo -u hdfs hadoop fs -setfattr -n security.hdfs.unreadable.by.superuser /enczone1/myfile.txt
  • Now, as the hdfs superuser, try to read the file or the contents of the raw file
sudo sudo -u hdfs hdfs dfs -cat /enczone1/myfile.txt
sudo sudo -u hdfs hdfs dfs -cat /.reserved/raw/enczone1/myfile.txt
  • You should get an error similar to below in both cases
Access is denied for hdfs since the superuser is not allowed to perform this operation.
  • You have successfully set up Transparent Data Encryption