
How to setup encrypted persistent volumes #13013

Closed
alikhajeh1 opened this issue Feb 20, 2017 · 8 comments
Labels
component/storage kind/question lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/P2

Comments

@alikhajeh1

We're trying to use gluster volume encryption with heketi, using the instructions at https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.5/Disk%20Encryption.md#7-getting-started-with-crypt-translator. We can mount and use a volume from a host that has the volume's encryption.master-key, but launching a pod fails, presumably because it can't mount the volume: the pod's file system doesn't have the decryption key. It feels like a catch-22. Does anyone know how to set up gluster encryption with heketi/OpenShift?

Version
openshift v1.4.1
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0
Steps To Reproduce
  1. Set up Heketi on OpenShift (https://github.com/heketi/heketi/wiki/OpenShift-Integration---Project-Aplo)
  2. Set up a storage class so PVCs can be created dynamically: https://docs.openshift.org/latest/install_config/persistent_storage/dynamically_provisioning_pvs.html#glusterfs
  3. Create a PVC via OpenShift (a sketch of steps 2 and 3 follows this list).
  4. Follow https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.5/Disk%20Encryption.md#7-getting-started-with-crypt-translator to set up volume encryption from inside one of the glusterfs pods.
  5. Launch a test deploymentconfig and attach the PVC to a pod.
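
A minimal sketch of steps 2 and 3 as they would have looked on this OpenShift/Kubernetes 1.4 setup, where the storage class is still selected via the beta annotation. The resturl, names, and sizes here are placeholders I'm assuming, not values from this issue:

```shell
# Step 2: storage class backed by heketi (resturl and names are placeholders)
oc create -f - <<'EOF'
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8081"
  restauthenabled: "false"
EOF

# Step 3: PVC that selects the class via the pre-1.6 beta annotation
oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
```
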
Current Result

We can mount and use the volume from a host that has the volume's encryption.master-key, but launching a pod that uses the PVC fails, presumably because the pod's file system doesn't have the decryption key. It feels like a catch-22.

Expected Result

The volume should be decrypted and the pod should launch successfully.

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 9, 2018
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 13, 2018
@openshift-bot
Contributor

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@JasonGiedymin

JasonGiedymin commented May 4, 2018

I know this is old but I figured I'd chime in with some information, and a caveat.

This relies on you having already installed GlusterFS and having glusterfs pods running.
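
For example, to confirm that before going further (the namespace is an assumption; use wherever gluster was deployed):

```shell
# namespace is a placeholder; adjust to your deployment
oc get pods -n glusterfs -o wide
```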

Even though heketi can dynamically provision, you should still go ahead and create both the PV and the PVC.
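
A minimal sketch of a statically defined PV plus a matching PVC; the endpoints name, volume path, and sizes are placeholders I'm assuming:

```shell
oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-encrypted-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster   # Endpoints object listing the gluster nodes
    path: encrypted-vol            # the gluster volume name
    readOnly: false
EOF

oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-encrypted-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
EOF
```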

Once a volume is created, you should be able to see the volume info using either heketi-cli from the heketi pod or the gluster CLI from a gluster pod.
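
For example (volume IDs and names are placeholders):

```shell
# from the heketi pod (heketi-cli needs the server set, e.g. via HEKETI_CLI_SERVER)
heketi-cli volume list
heketi-cli volume info <volume-id>

# from a gluster pod
gluster volume list
gluster volume info <volname>
```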

For testing, you can see that there is a large list of mounts on the gluster pod. One of them is /var/lib/glusterfs (or something like it); in this path are the volumes/groups, e.g. /var/lib/glusterfs/vols/.....
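
For example:

```shell
# inside a gluster pod; the exact base path varies by image
# (it may be /var/lib/glusterd rather than /var/lib/glusterfs)
ls /var/lib/glusterfs/vols/
```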

For testing, you can create your key in the glusterfs dir on all your pods, or you can mount a secret with the key into the pod. Once the key exists, you can go ahead and volume set all the options to disable caching and enable the xlator-crypt settings (encryption on). If the commands execute successfully, you should be able to call volume status/info and do a test mount via mount -o. A gluster mount log will appear in /var/log/glusterfs/<some-mount>.log, where you can see whether encryption was enabled.
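
A sketch of that sequence, following the Disk Encryption doc linked in the original report. The volume name, key path, and server are placeholders, and the option names are taken from that doc; treat the whole thing as an assumption to verify against your gluster version:

```shell
VOL=encrypted-vol                    # placeholder volume name
KEY=/etc/gluster-crypt/master-key    # placeholder key path; must exist on every client that mounts

# disable performance translators that are incompatible with the crypt xlator
gluster volume set "$VOL" performance.quick-read off
gluster volume set "$VOL" performance.write-behind off
gluster volume set "$VOL" performance.open-behind off

# enable encryption and point the volume at the master key
gluster volume set "$VOL" encryption on
gluster volume set "$VOL" encryption.master-key "$KEY"

gluster volume info "$VOL"           # confirm the options took effect

# test mount; then check /var/log/glusterfs/<mount>.log for crypt messages
mount -t glusterfs gluster-node.example.com:/"$VOL" /mnt/test
```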

From here, if the volume has encryption enabled all that is required is to use it.

Kubernetes StorageClasses (one of which would exist for glusterfs-storage) allow the use of the mountOptions field, and if you did a test mount you can use those same xlator options.

The gluster storage plugin is compatible and should take all the mount options and use them.
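
A sketch of that (the exact option string is an assumption based on mount.glusterfs's xlator-option= syntax, mirroring whatever worked in your manual test mount; the key must already exist at that path on every node that mounts):

```shell
oc create -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-encrypted
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8081"   # placeholder
mountOptions:
  - "xlator-option=*-crypt.master-key=/etc/gluster-crypt/master-key"
EOF
```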

Other methods also exist, such as setting mountOptions in PersistentVolumes. The legacy way of providing mountOptions was the annotation volume.beta.kubernetes.io/mount-options. I can tell you that the storage utility reads the annotation first and then proceeds to read mountOptions (see here). The team will drop the annotation soon.
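
For example, both forms on a PV (the option string is again an assumption):

```shell
oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv-with-mount-options
  annotations:
    # legacy form; read before spec.mountOptions and slated to be dropped
    volume.beta.kubernetes.io/mount-options: "xlator-option=*-crypt.master-key=/etc/gluster-crypt/master-key"
spec:
  mountOptions:                      # current form
    - "xlator-option=*-crypt.master-key=/etc/gluster-crypt/master-key"
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster     # placeholder Endpoints object
    path: encrypted-vol              # placeholder gluster volume name
EOF
```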

If all goes well, everything should work. You should have good logs, and no errors.

NOW the caveat.

If you ran a test mount from one or more (or all) of the glusterfs pods, and you indeed specified the xlator-crypt key, but gave a fake one to one of the pods, you'll see that nothing works; that is, encryption doesn't actually seem to be working at all. I've found that there are indeed warnings in the logs, but I haven't gotten around to debugging it deeply. It could be that the pods are missing some libraries, or that the crypt xlator is plain broken (without logging errors).

I've looked at the recent mount parser for xlator-crypt, and I know I'm passing in correct keys and that the volume is set up correctly (there are multiple places to confirm this). However, actually getting the mount to be written to with encryption, and trying to verify it, has been a real pain. I cannot actually verify that encryption is being done at all.

With that said, you will find this recent issue posted to the gluster repo. It says that the crypt xlator is going back to an unsupported, experimental state, with no guarantees that it works. This is a good move, as I can't verify that it does, though there is no documentation saying that it doesn't work.
See that here: gluster/glusterfs#399

Would be nice to see RedHat jump on this and work some magic.

@JasonGiedymin

JasonGiedymin commented May 4, 2018

I'm going to reopen this issue, only to see if I can elicit a hands-on test/response from RH. :-)

/reopen

Well, at least I wish I could, teehee.

@openshift-ci-robot

@JasonGiedymin: you can't re-open an issue/PR unless you authored it or you are assigned to it.

In response to this:

I'm going to reopen this issue, only to see if I can elicit a hands-on test/response from RH. :-)

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@alikhajeh1
Author

/reopen

@openshift-bot
Contributor

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
