Support full disk LUKS #550

Closed
schakrava opened this Issue Dec 7, 2014 · 34 comments

@schakrava
Member

schakrava commented Dec 7, 2014

No description provided.

@schakrava schakrava self-assigned this Dec 7, 2014

@schakrava schakrava added this to the Yosemite milestone Dec 7, 2014

@schakrava schakrava modified the milestones: Cole Valley, Yosemite Jan 1, 2015

@schakrava schakrava modified the milestones: Nob Hill, Cole Valley Mar 7, 2015

@schakrava schakrava modified the milestones: Yosemite, Nob Hill Jun 26, 2015

@halemmerich

halemmerich commented Aug 8, 2015

I have done some experiments using LUKS and wanted to share my experiences:
My goal was a Rockstor instance running a pool on encrypted disks, which I have not accomplished so far.
After creating a couple of LUKS disks mapped at /dev/mapper, Rockstor had a problem because the device names apparently can only be 10 characters long. As I was using the serial number as the device-mapper name, the web interface complained. Using shorter names makes the mapped disks appear in the storage view, but all operations, including creation of a pool, fail because the btrfs commands address the device-mapper disks as /dev/name, which is not found.
I then tried to create a btrfs filesystem over the disks manually, but Rockstor did not pick it up correctly. The encrypted disks in the storage view are shown as having btrfs on them, but cannot be imported, probably again because of path issues.
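
For reference, a minimal sketch of the kind of manual setup described above; the short mapper names, pool label and device names here are placeholders, not Rockstor-generated values:

# create LUKS containers on whole disks and open them under short mapper names
cryptsetup luksFormat /dev/sdb
cryptsetup luksOpen /dev/sdb luksdata1    # appears as /dev/mapper/luksdata1
cryptsetup luksFormat /dev/sdc
cryptsetup luksOpen /dev/sdc luksdata2
# build a btrfs pool over the opened volumes
mkfs.btrfs -L pool1 /dev/mapper/luksdata1 /dev/mapper/luksdata2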

@phillxnet

Member

phillxnet commented May 21, 2016

@schakrava Could we change the title of this issue from:-
"Support dmcrypt"
to
"Support full disk LUKS"
From our recent discussion I think this fits better with our current capabilities and plans. Plus, from https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions
we have: "plain dm-crypt (one passphrase, no management, no metadata on disk) and LUKS (multiple user keys with one master key, anti-forensic features, metadata block at start of device, ...)"
So we gain management and an easy way to identify the device as LUKS encrypted, so that scan_disks can hopefully provide an on-the-spot truth during disk rescan (I think anyway, as yet to confirm). This can then inform the db updater to set the new role flag to identify the device as LUKS formatted and hence trigger looking up its /dev/mapper/device-name for unencrypted access.
and:
"First, unless you happen to understand the cryptographic background well, you should use LUKS. It does protect the user from a lot of common mistakes. Plain dm-crypt is for experts."
and:
"Advantages [of LUKS ] are a higher usability, automatic configuration of non-default crypto parameters, defenses against low-entropy passphrases like salting and iterated PBKDF2 passphrase hashing, the ability to change passphrases, and others."
My addition in square brackets in the last quote.

  • We could later, by way of a feature issue, add a LUKS header backup button, and perhaps some additional LUKS password management.

We will also need the "cryptsetup" program installed, which is available in the current repos; just noting so it doesn't get missed.
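
As a rough illustration (an assumed approach, not necessarily what scan_disks ends up doing), the LUKS metadata block makes a container easy to spot from user space:

lsblk -P -o NAME,TYPE,FSTYPE /dev/sdb
# example output: NAME="sdb" TYPE="disk" FSTYPE="crypto_LUKS"
cryptsetup isLuks /dev/sdb && echo "sdb is a LUKS container"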

@phillxnet


Member

phillxnet commented May 21, 2016

@halemmerich Thanks for your pointers / input / sharing on this; as you found, there are a few mechanisms within Rockstor that will require modification to support LUKS encrypted disks. Some work has already been done towards this, and my current task involves another by way of issue #1320 "Revise internal use and format of device names". Once that issue is resolved I was hoping to come back to this issue and work through the required changes.

Those elements that have been improved in order to more easily accommodate LUKS include longer device names (now 64 chars) and the addition of a generic 'role' field within the Disk's db model. I like the serial as mount point idea, by the way; I may have to use that.

@schakrava schakrava assigned phillxnet and unassigned schakrava May 21, 2016

@schakrava


Member

schakrava commented May 21, 2016

@phillxnet, I've assigned the issue to you. I am wondering if that will give you permission to change the title. Could you please try?

@phillxnet


Member

phillxnet commented May 21, 2016

@schakrava Thanks, but still no edit button by title.

@schakrava schakrava changed the title from Support dmcrypt to Support full disk LUKS May 21, 2016

@phillxnet

Member

phillxnet commented May 21, 2016

@schakrava Cheers.

@phillxnet


Member

phillxnet commented Jun 23, 2016

Note: consider the effect of smartctl calls on LUKS-mapped devices, i.e. update smart's get_base_device / get_base_device_byid / get_dev_options to account for LUKS device names so that the base device is passed to smartctl when required.

Please update the following forum thread with significant development on this issue:
https://forum.rockstor.com/t/smart-log-issues/1653
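
Regarding the smartctl note above: by way of illustration, the base device behind an open LUKS mapping can be resolved before the smartctl call (the mapper name and example output are placeholders):

cryptsetup status luks-<uuid> | grep 'device:'
#   device:  /dev/sdb         (example output)
smartctl -a /dev/sdb          # S.M.A.R.T. must query the backing device, not /dev/mapper/*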

@phillxnet


Member

phillxnet commented Jul 6, 2016

Please update the following forum thread with significant development on this issue:
https://forum.rockstor.com/t/luks-dm-crypt/36/9

@ciroiriarte


ciroiriarte commented Jul 10, 2016

Hi! Will this support the following use case?

  • multiple disks encrypted with LUKS (no partitions)
  • multidisk (raid6) BTRFS filesystems on top of the encrypted devices
  • key requested at boot time

It's a common layout and I'm currently running it on openSUSE Leap 42.1; looking forward to migrating the disks to Rockstor...

@phillxnet


Member

phillxnet commented Aug 28, 2016

Please also update the following forum thread with significant development on this issue:
https://forum.rockstor.com/t/at-rest-encryption/1993/1

@phillxnet


Member

phillxnet commented Sep 5, 2016

Please also update the following forum thread with significant development:
https://forum.rockstor.com/t/fail-to-initalize-disk-mounted-with-luks/2034

@phillxnet


Member

phillxnet commented Oct 21, 2016

In revisiting this issue I think it best if we first tidy up the recently added disk role field, which is the intended mechanism to signal the requirement to change the filesystem access point from the raw device to its /dev/mapper LUKS counterpart. Once this task is complete we will have a better basis on which to support LUKS mount / device-name redirection in a cleaner fashion. I intend to link the disk role subsystem improvements issue to this issue so that the enhancements in that area can be tracked and the dependency established.

@phillxnet


Member

phillxnet commented Mar 13, 2017

I am again working on this issue, given that its indicated core dependencies have now been merged.

@phillxnet


Member

phillxnet commented Mar 14, 2017

Update on sub-issues found:
If we are to enable unattended boot then we also need to store keyfiles to unlock our data drives' LUKS containers. For the keyfiles to be secure they in turn need to be on encrypted storage, so no fully unattended power-on with LUKS is securely possible. A 'sister' feature here is encrypted swap and root, which is attainable via the current installer; this in turn allows us to more feasibly store keyfiles in, for example, /root.

When installing Rockstor and selecting anaconda's built-in "Encrypt my data" option we get the following with 3.8.16-1:

lsblk 
NAME                                          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sr0                                            11:0    1 1024M  0 rom   
sda                                             8:0    0    8G  0 disk  
├─sda2                                          8:2    0  820M  0 part  
│ └─luks-df30128a-2e94-428a-ad47-671602d3ce4c 253:1    0  818M  0 crypt [SWAP]
├─sda3                                          8:3    0  6.7G  0 part  
│ └─luks-b8f89d97-f135-450f-9620-80a9fb421403 253:0    0  6.7G  0 crypt /home
└─sda1                                          8:1    0  500M  0 part  /boot

which equates to our rockstor_rockstor pool existing in its own LUKS drive:

Label: 'rockstor_rockstor'  uuid: 60100357-ad15-4727-aec5-0f5abd1cee31
	Total devices 1 FS bytes used 1.55GiB
	devid    1 size 6.71GiB used 2.72GiB path /dev/mapper/luks-b8f89d97-f135-450f-9620-80a9fb421403

But there are problems on our Disks page:
3.8.16-1: [screenshot: 3-8-16-1-luks-root]

However things improve somewhat with the recent changes to disk management in #1622:
3.8.16-16: [screenshot: 3-8-16-16-luks-root]

So some attention is obviously required here, as the previous 'special case' treatment for the root partition/drive is now showing its shortcomings. I intend to have a look at improving this behaviour as part of this issue. The root cause is that the installer works by encrypting 2 partitions, whereas we predominantly address whole disks, bar the prior special root disk assessment, which in this case fails.

Given the above points it may be pertinent to initially address password entry during power-on for all LUKS volumes as a stepping stone, i.e. following the installer's default /etc/crypttab format of:

luks-b8f89d97-f135-450f-9620-80a9fb421403 UUID=b8f89d97-f135-450f-9620-80a9fb421403 none 
luks-df30128a-2e94-428a-ad47-671602d3ce4c UUID=df30128a-2e94-428a-ad47-671602d3ce4c none 

Where 'none' signifies manual password entry on power-up. At a later date this could be enhanced to include keyfile generation and management. This may also fit better with current btrfs development on partial-disk-count capabilities, as each disk is individually unlocked in turn; however, currently no default pool mount is achievable until all members are unlocked. But this 'unlock via Rockstor Web-UI' element would also require some attention on #1547 "insufficient use of btrfs device scan", as currently 'btrfs device scan' is only run on boot, and until LUKS encrypted volumes are unlocked they are not visible to the btrfs subsystem as available pool members.
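
For context, a manual unlock of a single member followed by a rescan would look roughly as follows (UUID taken from the lsblk output above; the rescan step is the #1547 concern):

cryptsetup luksOpen /dev/disk/by-uuid/b8f89d97-f135-450f-9620-80a9fb421403 luks-b8f89d97-f135-450f-9620-80a9fb421403
btrfs device scan    # make the newly opened volume visible to btrfs as a pool member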

@phillxnet


Member

phillxnet commented Mar 14, 2017

Note that it is possible to manually import from the open LUKS container of a default encrypted install:
[screenshot: 3-8-16-16-rockstor-pool-imported]
We then have recognition of the existing rockstor_rockstor pool thereafter.

@phillxnet


Member

phillxnet commented Mar 14, 2017

My initial approach is going to be an investigation into improving the low-level disk recognition to 'back port' LUKS partitions (FSTYPE="crypto_LUKS") to their parent devices, so that we might label such devices accordingly using existing in-place mechanisms. This is potentially problematic, but in the context of only supporting full disk LUKS, as this issue proposes, it may be a way to 'label' a default anaconda-encrypted system disk that is compatible with our existing disk mechanisms. It will however complicate support of encrypted data partitions; but I am of the opinion that this is not required given full disk LUKS support, and the ability to recognise and cope with at least the system partition's LUKS encryption will allow us to move forward with full data disk LUKS support.
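
For reference, the situation being 'back ported' looks roughly like this in lsblk terms (illustrative values): the crypto_LUKS hint sits on the partition while the base device reports nothing.

lsblk -P -p -o NAME,TYPE,FSTYPE /dev/sda
# NAME="/dev/sda"  TYPE="disk" FSTYPE=""
# NAME="/dev/sda3" TYPE="part" FSTYPE="crypto_LUKS"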

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Mar 15, 2017

back propagate crypto_LUKS partition info to the base device #550
In order to appropriately identify a disk as hosting a LUKS container
within one of it's partitions we label the base device with this canonical
indicator. This is the same treatment used to identify mdraid disk
members when any one of their partitions is an mdraid member.
Predominantly used to accommodate default anaconda system disk
encrypted (LUKS) installs (ie swap and btrfs system pool partitions
as LUKS containers).

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Mar 15, 2017

enhance root_disk() to identify LUKS system disk #550
When installing via anakonda default "Encrypt my data" option
we end up with our system pool rockstor_rockstor in a LUKS
container (along with swap in it's own). Previous mdadmin root
drive enhancements added non partition ''/' capability but for
correct identification of the root drive in an open LUKS container
and subsequent auto import of system subvols (/root /home)
root_disk() required an additional clause.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Mar 15, 2017

enhance disk table logic to better flag system disk LUKS and container #550

For default anaconda ''Encrypt my data" installs ie swap and /
btrfs pool in their own LUKS we need to recognise and
appropriately flag via icons and tooltips the LUKS in partition
exception made for system disk and include LUKS container
flagging in main base flag if-else. Also reduced number of
checks by combining mutually exclusive additional flags.
@phillxnet


Member

phillxnet commented Mar 15, 2017

With the above changes we are now able to more appropriately label a default anaconda
Encryption (section)
“Encrypt my data. You’ll set a passphrase next.”
type install.
[screenshot: 550-branch-luks-system-disk-install]
System pool rockstor_rockstor and its associated open LUKS container are now automatically identified and flagged, and the base / backing device is also more appropriately flagged, removing it from role and delete configuration options.
Tooltip on base / backing LUKS container drive reads:
"Disk contains at least one partition hosting a LUKS Container. LUKS in partition is only supported for the Rockstor system drive."

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Mar 20, 2017

add LUKS and bcache devices to hasUserRole UI helper #550
Extend the "hasUserRole" handlebars helper to identify future
user configurable disk roles of full disk LUKS container and
full disk backing and caching bcache devices which we already
label internally via the recent roles system.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Mar 20, 2017

refactor and enhance role flags UI code on Disks page #550
Reduces the number of redundant disk status checks re role
and state of each disk and improve readability and logic
of the associated checks. The intention is to fashion a user
'funnel' to accommodate early LUKS / bcache device role
assignment while attempting to reduce the number of
redundant or unwise configuration options offered. I.e. on
freshly wiped devices a LUKS / bcache tooltip suggestion
introduces the UI entry point to apply such whole disk
configurations.  Note also that a new flag is introduced, that
of a drive mapping (essentially if the device is found to be part
of a pool) this is intended to aid in identifying fresh or already
pool committed drives from the flags alone and works in
concert with LUKS / bcache flags, ie one will replace another.
@phillxnet


Member

phillxnet commented Mar 21, 2017

I strongly suspect that the following commit in this issue has caused a regression for bcache device serial attribution when a LUKS container is backed by bcache:
"back propagate crypto_LUKS partition info to the base device #550"
phillxnet@ae7e09e

My next task is to attempt to resolve this whilst maintaining the facility provided by that commit.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Mar 21, 2017

initial commit of rudimentary 'LUKS format' UI template and logic #550
Includes minor cosmetic and code formatting improvements to
existing disk role UI.
@phillxnet


Member

phillxnet commented Apr 29, 2017

My original intention was to implement a UI component to provide a way to easily back up a LUKS container's header, to guard against physical disk damage in this sensitive area, which forms a single point of failure for the entire consequent volume. However, given the current longevity and complexity of this issue, I now plan to instead include simple command-line instructions to do the same. This can then, at a later date and in another pr, be extended to a UI component on the LUKS config page. This should help to draw this issue to a conclusion sooner.
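
The command-line instructions referred to would be along the following lines (device and backup file path are placeholders; keep the backup copy off the encrypted disk itself):

cryptsetup luksHeaderBackup /dev/sdb --header-backup-file /root/luks-header-sdb.img
# and, should the header ever be damaged:
cryptsetup luksHeaderRestore /dev/sdb --header-backup-file /root/luks-header-sdb.img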

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 2, 2017

only wipe locked LUKS containers with no bootup config #550
Akin to only offering to wipe or import un-managed
disks we mirror this with only allowing the wipe
function on LUKS containers that are both locked
and have no /etc/crypttab entry. A locked container
presents no visible filesystem and is consequently
in a fit state to wipe.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 2, 2017

add a "Wipe locked LUKS container" link #550
Presented conditionally on the LUKS config page
whenever a container is found to be both locked
and have no current /etc/crypttab entry. Links
to the role/wipe page for the associated disk.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 3, 2017

on fresh LUKS format create a fresh LUKS role #550
Without this our device is incorrectly identified until
the next background or forced _update_disk_state(). With
this our next view of the Disk reflects our new role
status.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 6, 2017

make LUKS container wipe validation pertain only to LUKS containers #550


Remove overly broad input validation where non LUKS
containers were inadvertently blocked by recent LUKS
wipe validation additions. Plus a minor code simplification.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 6, 2017

update partition role after wipefs is called #550
Previously we failed to update this role directly after
executing a wipefs which is guaranteed to alter the
partition role's value (filesystem list) or existence
(full disk wipe so no paritions). The Disk.role field
would be correctly updated upon the next
_update_disk_state() but that is insufficient and less
robust in this case.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 7, 2017

update whole disk role after wipefs is called #550
Previously we neglected to update whole disk format
associated roles (LUKS container, bcache caching and
backing devices) which resulted in a disks page ghost
until the next _update_disk_state() auto corrected.
By updating these roles appropriately upon whole disks
wipe the consequent Disk page view is made current.
Also update btrfs_uuid comments upon wipe.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 7, 2017

make fstype flag for bcache cdev consistent with role name #550
Minor flag name change - mostly aesthetic - but should help
with code readability by being consistent with consequent role
and role constants lists.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 7, 2017

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 8, 2017

add user text on LUKS boot up config options #550
Mirror formatting found on sister 'roles' config page

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 9, 2017

hide user crypttab text for open LUKS volumes #550
Boot up config pertains to LUKS containers only and
as we have a dual personality LUKS config page we
ony show this text for LUKS container.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 10, 2017

update LUKS role on LUKS boot up config changes #550
Also ensure we avoid making changes to custom
(non native) crypttab configurations, even during
a LUKS config page submit when viewing these settings.
Require a config change prior to defaulting to
native keyfile creation.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 10, 2017

include LUKS Format in role page title #550
Since we have partnered LUKS format with our
existing wipe on this page we should add this
key function to the page title.
@phillxnet


Member

phillxnet commented May 13, 2017

Noting a thread on the linux-btrfs mailing list re strange interplay between systemd and keyfile activated LUKS containers upon subsequent format:
https://mail-archive.com/linux-btrfs@vger.kernel.org/msg63975.html
Ongoing: the symptoms are that of mapped devices disappearing during mkfs.btrfs on /dev/mapper/dev-name (both the /dev/mapper/dev-name and its target /dev/dm-* go missing, with 100% systemd CPU). I suspect this is only with much newer systemd, but will investigate our behaviour. I have not noticed anything akin to this so far in my tests.
Latest exposition of the issue as of writing:
https://mail-archive.com/linux-btrfs@vger.kernel.org/msg64025.html

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 13, 2017

populate openLUKS role value with status info #550
Previously we had an unused placeholder of
dm-name-uuid. Replace with output from:
'cryptsetup status dev-name' as a dict/json.
The 'openLUKS' role signifies open LUKS volumes.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 13, 2017

add Open LUKS Volume 'status table' to LUKS info page #550
When our dual mode LUKS page is viewing an Open LUKS
Volume, display the role value which equates to a
slightly tailored cyptsetup status dev-name output where
the backing device listed is converted to a by-id name.
@phillxnet


Member

phillxnet commented May 13, 2017

Newly added Open LUKS Volume info table:
[screenshot: open-luks-info-table]
and the same page, but in LUKS Container Config mode, viewing the base backing device:
[screenshot: luks-container-config]

A little more user-facing text to add (as detailed in the tasks above) and some more testing, and I hope to be at the pull request prep / review stage.

Note the Open LUKS Volume info page added here also suffers from the same issue as indicated in #1130 for detached devices. Linking here as the two behaviours share the same cause, which can be addressed in the referenced issue: i.e. that of a missing name reference due to dynamic renaming of detached devices.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 14, 2017

improve LUKS page user text re custom configs #550
Plus minor text enhancements for clarity.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 15, 2017

add cryptsetup to rpm-deps in base-buildout.cfg #550
Actively used by LUKS additions moving forward.
@phillxnet


Member

phillxnet commented May 15, 2017

Last-minute problem detected; I suspect this is related to an occasional device name change:
i.e. display of /dev/mapper type names in 'btrfs fi show' as opposed to the regular canonical names, i.e. /dev/dm-1 etc.
Will look into this on my next session with this issue.
Similar to reports in the above-referenced linux-btrfs mailing list re wandering name types.

@phillxnet


Member

phillxnet commented May 15, 2017

The above-mentioned issue was tracked down to udev failing to recognise and report (via lsblk) the fstype update on our newly formatted LUKS volume. This is akin to issue #1606, fixed via pr #1607, i.e. timely use of 'udevadm trigger' or a less heavyweight counterpart.

@phillxnet


Member

phillxnet commented May 15, 2017

Prior to 'udev trigger' kick we have:
TYPE="crypt" FSTYPE="" LABEL="" UUID=""
and after we have:
TYPE="crypt" FSTYPE="btrfs" LABEL="luks-pool" UUID="9a51f7fb-d82b-496e-aa31-b11f8fe57d89"
So this is looking exactly like the previously reported issue re adding drives with no udev reaction.
I will attempt to produce a more reliable reproducer and apply the same fix used in #1607, only this time after our mkfs.btrfs. Unless of course we have an instance of drive disconnection during mkfs.btrfs, as mentioned in the linux-btrfs mailing list.
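
A sketch of the intended ordering, mirroring the #1607 treatment (exact placement within the pool creation code may differ; the label and mapper name are placeholders):

mkfs.btrfs -L luks-pool /dev/mapper/luks-<uuid>
udevadm trigger    # prompt udev to re-read the new fstype / label / uuid
udevadm settle     # wait for the resulting events to be processed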

@phillxnet

Member

phillxnet commented May 17, 2017

Tasks related to Open LUKS volume format changes affecting block device mapping and udev reporting of both the LUKS container and the consequent open LUKS volume.

  • wiping (wipefs -a) a whole-disk btrfs/ext4 formatted open LUKS volume causes its /dev/dm-* node and /dev/mapper/luks-* symlink to disappear.
May 17 14:41:52 rtest systemd[1]: Stopped /dev/disk/by-uuid/89542d8f-2569-4f81-8eb9-5a15cf358de7.
May 17 14:41:52 rtest systemd[1]: Stopped /dev/disk/by-label/luks-pool.
May 17 14:41:52 rtest systemd[1]: Stopped /dev/disk/by-id/dm-uuid-CRYPT-LUKS1-5037b32095d64c7494e74c673120d32a-luks-5037b320-95d6-4c74-94e7-
May 17 14:41:52 rtest systemd[1]: Stopped /dev/disk/by-id/dm-name-luks-5037b320-95d6-4c74-94e7-4c673120d32a.
May 17 14:41:52 rtest systemd[1]: Stopped /dev/dm-0.
May 17 14:41:52 rtest systemd[1]: Stopped /sys/devices/virtual/block/dm-0.
May 17 14:41:52 rtest systemd[1]: Stopped target Encrypted Volumes.
May 17 14:41:52 rtest systemd[1]: Stopping Encrypted Volumes.
May 17 14:41:52 rtest systemd[1]: Stopping Cryptography Setup for luks-5037b320-95d6-4c74-94e7-4c673120d32a...
May 17 14:41:52 rtest systemd[1]: Stopped Cryptography Setup for luks-5037b320-95d6-4c74-94e7-4c673120d32a.

After an extended period (i.e. around 24 minutes in the given example) the /dev/dm-* node and consequent /dev/mapper/luks-* symlink are restored:

May 17 15:05:43 rtest systemd[1]: Starting Cryptography Setup for luks-5037b320-95d6-4c74-94e7-4c673120d32a...
...
May 17 15:05:43 rtest systemd-cryptsetup[19832]: Set cipher aes, mode xts-plain64, key size 256 bits for device /dev/disk/by-uuid/5037b320-95d6-4c74-94e7-4c673120d32a.
...
May 17 15:05:48 rtest systemd[1]: Started Cryptography Setup for luks-5037b320-95d6-4c74-94e7-4c673120d32a.
  • Intermittent failure of udev to update FSTYPE, LABEL, and UUID after successfully formatting and mounting a new btrfs pool. It is noteworthy that mount by label fails in this scenario; however, our established fail-over of mount by dev succeeds. It is assumed that the missing udev-created by-label symlink is the reason for the mount-by-label failure. This scenario is, as previously recorded, very similar to issue #1606 (pr #1607), where adding a device to a pool could, intermittently, result in the same udev failure to represent the new disk state: i.e. new fstype, label, and uuid.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 17, 2017

improve pool mount log reporting re missing devices #550
Add info logging when failing over to 'mount by dev'
from 'mount by label' and specify the dev names as we
go. Add error logging on missing devices.
@phillxnet


Member

phillxnet commented May 17, 2017

It appears that the auto-generated (by systemd-cryptsetup-generator) systemd service file for each crypto block device (based on its /etc/crypttab entry, if any) in:

/var/run/systemd/generator/systemd-cryptsetup

is what is being auto-run whenever we 'wipefs -a' a btrfs-formatted Open LUKS volume:
i.e.:

systemctl stop systemd-cryptsetup\@luks\\x2d5037b320\\x2d95d6\\x2d4c74\\x2d94e7\\x2d4c673120d32a.service

results in equivalent observed systemd logging:

May 17 17:32:50 rtest systemd[1]: Stopping Cryptography Setup for luks-5037b320-95d6-4c74-94e7-4c673120d32a...
May 17 17:32:50 rtest systemd[1]: Stopped /dev/disk/by-uuid/900d8946-c7c8-4ca7-936a-485323246884.
May 17 17:32:50 rtest systemd[1]: Stopped /dev/disk/by-label/luks-pool.
May 17 17:32:50 rtest systemd[1]: Stopped /dev/disk/by-id/dm-uuid-CRYPT-LUKS1-5037b32095d64c7494e74c673120d32a-luks-5037b320-95d6-4c74-94e7-4c673120d32a.
May 17 17:32:50 rtest systemd[1]: Stopped /dev/disk/by-id/dm-name-luks-5037b320-95d6-4c74-94e7-4c673120d32a.
May 17 17:32:50 rtest systemd[1]: Stopped /dev/dm-0.
May 17 17:32:50 rtest systemd[1]: Stopped /sys/devices/virtual/block/dm-0.
May 17 17:32:50 rtest systemd[1]: Stopped /dev/mapper/luks-5037b320-95d6-4c74-94e7-4c673120d32a.
May 17 17:32:50 rtest systemd[1]: Stopped Cryptography Setup for luks-5037b320-95d6-4c74-94e7-4c673120d32a.

And we again now have missing /dev/dm-* and /dev/mapper/luks-* entries.
However the reverse "start" will, without a keyfile config, request a password via console to re-establish the associated devices.
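
For reference, the mapped device can be re-established by starting the generated unit by hand; systemd-escape yields the escaped instance name (a passphrase is requested at the console unless a keyfile is configured):

unit="systemd-cryptsetup@$(systemd-escape luks-5037b320-95d6-4c74-94e7-4c673120d32a).service"
systemctl start "$unit"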

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 18, 2017

rerun systemd generators after /etc/crypttab changes #550
As our 'source of truth' is /etc/crypttab but systemd
generators now abstract the contents to
/var/run/systemd/generators we should ensure that systemd
re-generates the abstraction in a timely manner.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 19, 2017

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 19, 2017

workaround upstream bug in systemd-escape #550
Less flexible than original but serves our current
use. Required as upstream --template option is
broken for all input cases.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 19, 2017

after wipe of Open LUKS Volume fs, reopen volume #550
When a full dev fs exists on an Open LUKS Volume and
that fs is wiped via wipefs -a, systemd tears down the
entire mapped block device. Re-assert there after by
starting the associated generator service if it exists.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 19, 2017

call 'udevadm trigger' after new pool creation #550
Intermittent failure of udev to update fstype, label,
and uuid of dev after pool creation (1-2 times in 10)
was observed for a luks mapped device. After this
udev call no similar failures were observed. Also
prior to patch many mount by label fails were observed
with consequent fail-over mounts by dev, but with no
consequent dev info updates the pool was not recognized
by our UI. Where as after this patch no failures to mount
by label were observed in the test scenarios. This same
treatment has already been applied to adding disks to an
existing pool in pr #1607 commit:
0358560
@phillxnet


Member

phillxnet commented May 19, 2017

  • As it is a requirement to re-open automatically closed LUKS containers once their Open LUKS Volumes are wiped (if they originally had a full-device fs on them), we must acknowledge to the user that a keyfile config is required: we have, as yet, no method to manually open / mount devices / pools, so must, for the time being, accomplish this in an automated fashion. Hence the reliance on a keyfile for auto authentication where LUKS volumes are concerned, at least to avoid the confusing scenario of an Open LUKS Volume becoming detached directly after being wiped (a minimal keyfile setup sketch follows after this list).

  • Also confirm sane behaviour on Open LUKS Volume removal from an existing multi-disk pool, re appropriately updated fstype, label, and uuid.
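
A minimal sketch of the keyfile arrangement implied above; the device, paths and <uuid> are placeholders, and Rockstor's own generated config may differ:

dd if=/dev/urandom of=/root/keyfile-<uuid> bs=512 count=8    # 4 KiB of random key material
chmod 600 /root/keyfile-<uuid>
cryptsetup luksAddKey /dev/sdb /root/keyfile-<uuid>
# matching /etc/crypttab entry for unattended unlock at boot:
# luks-<uuid>  UUID=<uuid>  /root/keyfile-<uuid>  luks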

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 20, 2017

improve user text on LUKS page re crypttab config options #550
Given our NAS remit we favour headless operation and given
LUKS volumes are closed upon wipe we must favour and recommend
auto authentication via keyfile. Also add systemd command
info for those favouring a non keyfile / command line
management approach.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 20, 2017

improve user text re LUKS boot up config on roles page #550
Best we point out "Auto unlock via keyfile" and location of
this config with icon repeats to improve affordance and ease
first time configuration.

@schakrava schakrava closed this in #1716 May 25, 2017

schakrava added a commit that referenced this issue May 25, 2017

@schakrava schakrava modified the milestones: Point Bonita, Yosemite May 31, 2017
