
Harden mountpoints #531

Merged — 21 commits merged into dev-sec:master on Jul 7, 2022
Conversation

@lbayerlein (Contributor)

PR for adding multiple mountpoints to do some hardening of the filesystem and mountpoint structure.

# Overview of necessary mount options to be checked:
#
#---------------------------------------------------------
#   Mount point              nodev  noexec  nosuid
#   /boot                      v      v       v
#   /dev                              v       v
#   /dev/shm                   v      v       v
#   /home                      v              v
#   /run                       v              v
#   /tmp                       v      v       v
#   /var                       v              v
#   /var/log                   v      v       v
#   /var/log/audit             v      v       v
#   /var/tmp                   v      v       v
#---------------------------------------------------------
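For illustration, the per-mountpoint variables look roughly like this (shown for /boot; the values here are only examples, not necessarily the final defaults):

os_mnt_boot_enabled: false                    # manage /boot as a separate hardened mountpoint at all
os_mnt_boot_src: ''                           # device or UUID backing /boot
os_mnt_boot_filesystem: 'ext4'                # fstype written to /etc/fstab
os_mnt_boot_options: 'nodev,noexec,nosuid'    # mount options from the table above
os_mnt_boot_dir_mode: '0700'                  # permissions applied to the directory itself (example value)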

@lbayerlein (Contributor, Author)

This is a first commit on my side. I tested it and it worked. But there is some work to do.

I wanted you to have a look on it. We want to configure every mountpoint, because in our environment, we have 8 disks per virtual machine assigned. Every mountpoint can be enabled and disabled. I enabled everything, what RedHat has in its default (like /dev, /dev/shm..).

Before adding molecule and readme, can you give feedback to variable names, structure and what is missing on you opinion?

Thank you :)

@rndmh3ro (Member) commented Mar 7, 2022

For me this looks good on first glance.

We could maybe make one task out of the first two.

Instead of:

- name: Mount /boot with hardened options
  mount:
    path: /boot
    src: '{{ os_mnt_boot_src }}'
    fstype: '{{ os_mnt_boot_filesystem }}'
    opts: '{{ os_mnt_boot_options }}'
    state: remounted
  when: os_mnt_boot_enabled | bool

- name: Harden mount options for /boot
  mount:
    path: /boot
    src: '{{ os_mnt_boot_src }}'
    fstype: '{{ os_mnt_boot_filesystem }}'
    opts: '{{ os_mnt_boot_options }}'
    state: present
  when: os_mnt_boot_enabled | bool

we could do:

- name: Mount /boot with hardened options
  mount:
    path: /boot
    src: '{{ os_mnt_boot_src }}'
    fstype: '{{ os_mnt_boot_filesystem }}'
    opts: '{{ os_mnt_boot_options }}'
    state: '{{ item }}'
  when: os_mnt_boot_enabled | bool
  list:
    - remounted
    - present

This would mean much less code.

@schurzi, what do you think?

@schurzi (Contributor) commented Mar 7, 2022

I think it's a good idea to shorten this. Maybe we could take this even further with a second loop over all the mount points, as described in https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#iterating-over-a-list-of-hashes and https://www.netways.de/blog/2020/11/20/ansible-loop-over-multiple-tasks/

- name: harden mountpoints
  include_tasks: minimize_access_mountpoints.yml #(tasks for remount, fstab and dir permissions in this file)
  loop:
    - { path: '/boot', src: '{{ os_mnt_boot_src }}' .... }
    - { path: '/dev', src: '{{ os_mnt_dev_src }}' .... }
    - ...

We can combine both ideas; if we use my idea, I would be fine with 3 tasks in a separate file. If you think a loop with a separate file is too complicated, we can implement only your idea. Personally I like a solution that keeps the number of copy-paste tasks low.
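For illustration, the included file could look roughly like this (just a sketch, assuming each loop item carries path, src, fstype, opts and mode keys; not the final implementation):

- name: Remount {{ item.path }} with hardened options
  mount:
    path: '{{ item.path }}'
    src: '{{ item.src }}'
    fstype: '{{ item.fstype }}'
    opts: '{{ item.opts }}'
    state: remounted

- name: Manage fstab entry for {{ item.path }}
  mount:
    path: '{{ item.path }}'
    src: '{{ item.src }}'
    fstype: '{{ item.fstype }}'
    opts: '{{ item.opts }}'
    state: present

- name: Harden permissions for {{ item.path }} directory
  file:
    dest: '{{ item.path }}'
    owner: 'root'
    group: 'root'
    mode: '{{ item.mode }}'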

I like the many vars; that makes it easy to override things for our users. I also like the chosen mountpoints. They all make sense to me, and depending on your environment you can enable what you need. While we are at it, would it also be good to add a separate mountpoint for Docker to our hardened configuration?

@rndmh3ro (Member) commented Mar 8, 2022

I'm fine with both. @lbayerlein, as long as there are fewer tasks than now, you can choose either solution.

@lbayerlein (Contributor, Author)

Thanks @rndmh3ro, I will push the shortened version on Wednesday. Sorry for my delay.

@lbayerlein (Contributor, Author)

Do I have to add all parameters to your molecule testing file?

@rndmh3ro (Member)

> Do I have to add all parameters to your molecule testing file?

I don't know yet. For this PR we can leave it for now as it is.

(Two resolved review comment threads on roles/os_hardening/tasks/minimize_access.yml)
@lbayerlein (Contributor, Author)

@rndmh3ro Thanks for your hint. I fixed this for every task and pushed it.

@rndmh3ro (Member)

> @rndmh3ro Thanks for your hint. I fixed this for every task and pushed it.

Don't forget to fix all list instances. :)

@lbayerlein (Contributor, Author)

> @rndmh3ro Thanks for your hint. I fixed this for every task and pushed it.
>
> Don't forget to fix all list instances. :)

Oops, yes, sorry. It's done.

@lbayerlein (Contributor, Author)

Is there anything still missing for the merge that I can provide?

@rndmh3ro (Member)

Hey @lbayerlein,

sorry for not answering sooner.

The problem here right now is that the code is not idempotent:

- name: Mount /var/log with hardened options
  mount:
    path: /var/log
    src: '{{ os_mnt_var_log_src }}'
    fstype: '{{ os_mnt_var_log_filesystem }}'
    opts: '{{ os_mnt_var_log_options }}'
    state: '{{ item }}'
  when: os_mnt_var_log_enabled | bool
  loop:
    - remounted
    - present

The remounted step is always happening. We didn't think of that. :)

The remount should only happen if something changes.

@lbayerlein (Contributor, Author)

Hey @rndmh3ro,
no worries, there is enough work to do on my side, too. :)

Hmm, the loop alone does not solve that, does it? Would it be a solution to use register to detect whether there was a change in the previous task? Or should we build a list for a loop to iterate over, driven by a previous task? Do you have an idea?

Thank you

@rndmh3ro (Member)

> Would it be a solution to use register to detect whether there was a change in the previous task?

This is probably the way. Mount it, check for changes, remount if necessary.
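Something roughly like this could work (just a sketch with the /boot variables as an example, not tested):

- name: Manage fstab entry for /boot
  mount:
    path: /boot
    src: '{{ os_mnt_boot_src }}'
    fstype: '{{ os_mnt_boot_filesystem }}'
    opts: '{{ os_mnt_boot_options }}'
    state: present
  register: boot_mount_result
  when: os_mnt_boot_enabled | bool

- name: Remount /boot only if the fstab entry changed
  mount:
    path: /boot
    src: '{{ os_mnt_boot_src }}'
    fstype: '{{ os_mnt_boot_filesystem }}'
    opts: '{{ os_mnt_boot_options }}'
    state: remounted
  when:
    - os_mnt_boot_enabled | bool
    - boot_mount_result is changed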

- Default: `ext4`
- Description: Configure file system for fstab entry /var/log
- `os_mnt_var_log_audit_dir_mode`
- Default: `1777`
Member:

This does not seem correct...

Contributor (Author):

I do not understand

In roles/os_hardening/tasks/minimize_access.yml there is

...
- name: Harden permissions for /var/log/audit directory
  file:
    dest: /var/log/audit
    owner: 'root'
    group: 'root'
    mode: '{{ os_mnt_var_log_audit_dir_mode }}'
...

In roles/os_hardening/defaults.yml there is

...
os_mnt_var_log_filesystem: "ext4"

os_mnt_var_log_audit_dir_mode: '1777'
os_mnt_var_log_audit_enabled: false
os_mnt_var_log_audit_src: ""
...

And in README.md there is

- `os_mnt_var_log_audit_dir_mode`
  - Default: `1777`
  - Description: Set default permissions for /var/log/audit

What am I missing?

dest: /boot
owner: 'root'
group: 'root'
mode: '{{ os_mnt_boot_dir_mode }}'
Member:

We should add the when-condition to all tasks, otherwise it could lead to unexpected surprises for users.

   when: os_mnt_boot_enabled | bool

Contributor (Author):

I think that is not a good idea (with this variable), because that variable asks "is /boot mounted?". If we add a "when" to the directory-hardening task, we also disable hardening of the directory when there is no mountpoint.

We always want to harden the directory instead of introducing a new variable to enable or disable it. If the user does not want to harden the directory, the user can configure a more open mode for /boot. What do you think?

Member:

@schurzi, same problem as in the other comment. What do you think?

Contributor:

I think we are mixing different things here. In this PR, all the os_mnt_*_enabled options are dedicated to managing the mounting of separate devices for our hardening. What you are proposing, @rndmh3ro, is adding a control to disable hardening for a specific directory altogether, am I right?

I think we could do this by using os_mnt_*_enabled as a condition for every task and additionally checking the facts dictionary for existing mount points in the mount tasks. But this might be over-complicating things.
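Roughly like this (only a sketch using the /boot variables as an example, not tested):

- name: Manage fstab entry for /boot
  mount:
    path: /boot
    src: '{{ os_mnt_boot_src }}'
    fstype: '{{ os_mnt_boot_filesystem }}'
    opts: '{{ os_mnt_boot_options }}'
    state: present
  when:
    - os_mnt_boot_enabled | bool
    - ansible_facts['mounts'] | selectattr('mount', 'equalto', '/boot') | list | length > 0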

Contributor (Author):

Oh ok. So a new variable like disable_hardening_all_mountpoints would be a solution? I can add it to all mountpoints as a separate "when" statement.

As you wrote: I added a bool to every mountpoint that is optional (everything could be on one filesystem). All other mountpoints should be managed by this hardening module.

os_mnt_boot_filesystem: 'ext4'

os_mnt_dev_dir_mode: '0755'
os_mnt_dev_enabled: true
Member:

Since almost all mounts are disabled by default, we should disable these, too.

Contributor (Author):

Just my background: /dev is a mountpoint on most (if not every) Linux distribution, and it's a good idea to give users out-of-the-box hardening with good settings. That is how I would expect this collection to work (good settings per default, without making my system unusable).

But sure, you are the collection owner. If you want to disable it because all the others are set to false, I can check again and set everything to false.

Member:

@schurzi, what do you think? Both ways have their pros and cons.

Contributor:

I like the idea of hardening /dev and /dev/shm by default. I believe these mount points are present on every Linux distribution that we support, and the chosen options for /dev/shm seem reasonable and are supported by the CIS hardening guideline (I could find no guideline for /dev, so some further research may be in order). So I vote for true.

While looking at this, I believe there is still something wrong with the filesystem and 'src' variables. Also, ext4 seems wrong in any case here.

Contributor (Author):

Thanks @rndmh3ro @schurzi for your support!

Did not find any suggestions in CIS either. We do this in our environment and do not have any hassle with it.

The filesystem depends on the Linux distribution. If you want, I can check /boot on several distributions. I am not sure, but on RHEL 7 I think it was ext4; on RHEL 8 it is xfs.
I haven't tested it yet. Is Ansible able to autodetect the filesystem? If that is a feature of Ansible, I can default to an empty string unless the user sets it manually.
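Maybe something like the following would work, if the gathered mounts facts contain the fstype (just an untested sketch):

- name: Determine the current filesystem of /boot from gathered facts
  set_fact:
    os_mnt_boot_filesystem: "{{ ansible_facts['mounts'] | selectattr('mount', 'equalto', '/boot') | map(attribute='fstype') | list | first | default('') }}"
  when: os_mnt_boot_filesystem | length == 0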

(Resolved review comment on roles/os_hardening/defaults/main.yml)
@lbayerlein (Contributor, Author)

Can you merge this PR? Is there more to do for this merge?

Thanks

@rndmh3ro (Member) commented Jun 17, 2022

Hey @lbayerlein,

sorry for not responding sooner, I've been very busy.

The PR works. However, there's at least one problem with /dev:

mount gives this:

devtmpfs on /dev type devtmpfs (rw,nosuid,noexec,seclabel,size=12312408k,nr_inodes=3078102,mode=755)

/etc/fstab shows this:

devtmpfs /dev ext4 rw,nosuid,noexec 0 0

Please check /dev/shm and /run, too! Thanks. :)
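The fstype in fstab should presumably match the actual filesystems, so the defaults would have to look roughly like this (illustrative only; the os_mnt_dev_shm_* and os_mnt_run_* names are assumed here):

os_mnt_dev_src: 'devtmpfs'
os_mnt_dev_filesystem: 'devtmpfs'
os_mnt_dev_shm_src: 'tmpfs'
os_mnt_dev_shm_filesystem: 'tmpfs'
os_mnt_run_src: 'tmpfs'
os_mnt_run_filesystem: 'tmpfs'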

Ludwig Bayerlein added 8 commits on June 22, 2022
Ludwig Bayerlein and others added 12 commits on June 22, 2022
@lbayerlein (Contributor, Author)

Hi @rndmh3ro,

thanks for the review. I checked it again and fixed it. Sorry for that, this should now work. Maybe you can take a look again, just to be sure it is all fine now. :)

Thanks again!

@rndmh3ro (Member)

The code works and I'd merge it now. However, the tests are failing for CentOS and Rocky Linux, because there is no /boot folder (or partition) in the containers. Let's think about how we could change that.

@lbayerlein (Contributor, Author)

@rndmh3ro So I added a stat to ensure there is a /boot directory. Is that okay? Can we merge now, please? :)
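Roughly like this (a sketch of the idea; the names may differ from the actual commit):

- name: Check if a /boot directory exists
  stat:
    path: /boot
  register: boot_dir

- name: Harden permissions for /boot directory
  file:
    dest: /boot
    owner: 'root'
    group: 'root'
    mode: '{{ os_mnt_boot_dir_mode }}'
  when: boot_dir.stat.exists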

@rndmh3ro merged commit 488ff6a into dev-sec:master on Jul 7, 2022
@rndmh3ro (Member) commented Jul 7, 2022

Thanks! Finally merged. :)

divialth pushed a commit to divialth/ansible-collection-hardening that referenced this pull request on Aug 3, 2022, with the following commit messages:

* first testing with tasks and variables
* update variables for dir options
* updated permissions and defaults
* fix home dir permissions
* updated tasks with useful variables
* reorder tasks. first remount, then manage fstab and fix permissions on directories. Renaming task names with mountpoints (slashes)
* shorten tasks with list items
* change defaults for /boot directory, because its a bad behaviour, if ansible changes boot entries with a default value
* Update documentation for new parameters to manage mountpoints
* Update roles/os_hardening/tasks/minimize_access.yml
* Update roles/os_hardening/tasks/minimize_access.yml
* Fix state on every new task
* loop instead of list
* testing remount with register
* add remounts with loop over all changed folders
* testing and solving trouble with variable names
* optimize default permissions for var-log-audit
* optimize default permissions for var-log-audit
* change to new optimizied permissions of var-log-audit
* fix some defaults in fstab to configure as mounted
* add stat and check, if boot folder exists

Co-authored-by: Ludwig Bayerlein <bayerlein@bayerlein-networks.com>
Co-authored-by: Sebastian Gumprich <rndmh3ro@users.noreply.github.com>
@lbayerlein deleted the harden_mountpoints branch on August 3, 2022