Harden mountpoints #531
Conversation
Force-pushed from `53925f0` to `5aab189`.
This is a first commit on my side. I tested it and it works, but there is still some work to do. I wanted you to have a look at it. We want to make every mountpoint configurable, because in our environment we have 8 disks assigned per virtual machine. Every mountpoint can be enabled and disabled. I enabled everything that Red Hat has in its defaults (like /dev, /dev/shm, ...). Before adding molecule tests and a readme, can you give feedback on the variable names, the structure, and what is missing in your opinion? Thank you :)
At first glance this looks good to me. We could maybe make one task out of the first two. Instead of:
we could do:
This would mean much less code. @schurzi, what do you think?
I think it's a good idea to shorten this. Maybe we could take this even further with a second loop for all the mount points, as described in https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#iterating-over-a-list-of-hashes and https://www.netways.de/blog/2020/11/20/ansible-loop-over-multiple-tasks/
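A minimal sketch of the loop-over-hashes idea (the variable and list entries here are illustrative, not the final ones from this PR):

```yaml
# Sketch only: one task looping over a list of mount point definitions
# instead of one copy-pasted task per mount point.
- name: Manage hardened mount points
  ansible.posix.mount:
    path: "{{ item.path }}"
    src: "{{ item.src }}"
    fstype: "{{ item.fstype }}"
    opts: "{{ item.opts }}"
    state: present
  loop:
    - { path: /dev/shm, src: tmpfs, fstype: tmpfs, opts: 'nodev,nosuid,noexec', enabled: true }
    - { path: /tmp, src: /dev/vdb1, fstype: ext4, opts: 'nodev,nosuid', enabled: false }
  when: item.enabled | bool
```

Moving the list into the defaults would then let users override individual mount points without touching the tasks.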
We can combine both ideas, or if we use my idea, I would be fine with 3 tasks in a separate file. If you think a loop with a separate file is too complicated, we can implement only your idea. Personally I like a solution that keeps the number of copy-paste tasks low. I like the many vars; that makes it easy for our users to override things. I also like the chosen mountpoints. All make sense to me, and depending on your environment you can enable what you need. While we are at it, it may also be good to add a separate mountpoint for Docker to our hardened configuration?
I'm fine with both. @lbayerlein, as long as there are fewer tasks than now, you can choose either solution.
Thanks @rndmh3ro, I will update the shortened version on Wednesday. Sorry for my delay.
Do I have to add all parameters to your molecule testing file?
I don't know yet. For this PR we can leave it as it is for now.
@rndmh3ro Thanks for your hint. I fixed this for every task and pushed it.
Don't forget to fix all
Oops, yes, sorry. It's done.
Is there anything still missing for the merge that I can provide?
Hey @lbayerlein, sorry for not answering sooner. The problem here right now is that the code is not idempotent:
The remount should only happen if something changes.
Hey @rndmh3ro, hmm, the loop alone does not solve this, does it? Would it be a solution to use
Thank you
This is probably the way: mount it, check for changes, remount if necessary.
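The mount-check-remount approach could be sketched roughly like this (assumed task and variable names, not the exact code from this PR):

```yaml
# Sketch only: register the result of the fstab management task and
# remount only the entries that actually changed, keeping the run idempotent.
- name: Manage fstab entries for hardened mount points
  ansible.posix.mount:
    path: "{{ item.path }}"
    src: "{{ item.src }}"
    fstype: "{{ item.fstype }}"
    opts: "{{ item.opts }}"
    state: present
  loop: "{{ os_hardening_mounts }}"
  register: fstab_result

- name: Remount only changed mount points
  ansible.posix.mount:
    path: "{{ item.item.path }}"
    state: remounted
  loop: "{{ fstab_result.results }}"
  when: item.changed
```

On a second run nothing changes in fstab, so no remount is triggered and the play reports no changes.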
Force-pushed from `7b0ade6` to `a8f0864`.
In `roles/os_hardening/README.md` (outdated):
```
- Default: `ext4`
- Description: Configure file system for fstab entry /var/log
- `os_mnt_var_log_audit_dir_mode`
- Default: `1777`
```
This does not seem correct...
I do not understand
In `roles/os_hardening/tasks/minimize_access.yml` there is

```yaml
- name: Harden permissions for /var/log/audit directory
  file:
    dest: /var/log/audit
    owner: 'root'
    group: 'root'
    mode: '{{ os_mnt_var_log_audit_dir_mode }}'
```

In `roles/os_hardening/defaults.yml` there is

```yaml
os_mnt_var_log_filesystem: "ext4"
os_mnt_var_log_audit_dir_mode: '1777'
os_mnt_var_log_audit_enabled: false
os_mnt_var_log_audit_src: ""
```

And in README.md there is

```
- `os_mnt_var_log_audit_dir_mode`
  - Default: `1777`
  - Description: Set default permissions for /var/log/audit
```

What am I missing?
```yaml
    dest: /boot
    owner: 'root'
    group: 'root'
    mode: '{{ os_mnt_boot_dir_mode }}'
```
We should add the when-condition to all tasks, otherwise it could lead to unexpected surprises for users.

```yaml
when: os_mnt_boot_enabled | bool
```
I think that is not a good idea (with this variable), because I use it to ask "is /boot mounted?". If we add a "when" to the harden-directory task, then the directory is also not hardened when there is no mountpoint.
We always want to harden the directory, rather than introducing a new variable to enable or disable it. If users do not want the directory hardened, they can configure a more open mode for /boot. What do you think?
@schurzi, same problem as other comment. What do you think?
I think we are mixing different things here. In this PR all the `os_mnt_*_enabled` options are dedicated to managing the mounting of separate devices for our hardening. What you are proposing, @rndmh3ro, is adding a control to disable hardening for a specific directory altogether, am I right?
I think we could do this by using `os_mnt_*_enabled` as a condition for every task and additionally checking the facts dictionary for existing mount points in the mount tasks. But this might be over-complicating things.
Oh, OK. So a new variable like `disable_hardening_all_mountpoints` would be a solution? I can add it to all mountpoints as a separate "when" statement.
As you wrote: I will add a bool to every mountpoint that is optional (everything could be on one filesystem). All other mountpoints should be managed by this hardening module.
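The combination discussed above could look roughly like this (a sketch only; `os_mnt_boot_enabled` appears in this PR, while the other variable names and the facts check are illustrative):

```yaml
# Sketch only: gate the mount task on the per-mountpoint flag AND on the
# mount point actually existing in the gathered facts, while the
# directory-hardening task stays unconditional.
- name: Ensure /boot is mounted with hardened options
  ansible.posix.mount:
    path: /boot
    src: "{{ os_mnt_boot_src }}"
    fstype: "{{ os_mnt_boot_filesystem }}"
    opts: "{{ os_mnt_boot_options }}"
    state: present
  when:
    - os_mnt_boot_enabled | bool
    - ansible_facts.mounts | selectattr('mount', 'equalto', '/boot') | list | length > 0
```

This keeps directory hardening always on while only the mount management is optional.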
```yaml
os_mnt_boot_filesystem: 'ext4'
```

```yaml
os_mnt_dev_dir_mode: '0755'
os_mnt_dev_enabled: true
```
Since almost all mounts are disabled by default, we should disable these, too.
Only my background: /dev is a mountpoint on most (or every) Linux distribution, and it's a good idea to support users in getting out-of-the-box hardening with good settings. That's the way I would expect this collection to work (good settings per default, without making my system unusable).
But sure, you are the collection owner. If you want to disable it because all the others are set to `false`, I can check again and set `false` everywhere.
@schurzi, what do you think? Both ways have their pros and cons.
I like the idea of hardening /dev and /dev/shm by default. I believe these mount points are present on every Linux distribution that we support, and the chosen options for /dev/shm seem reasonable and are supported by the hardening guideline from CIS (I could find no guideline for /dev, so some further research might be in order). So I vote for `true`.
While looking at this, I believe there is still something wrong with the `filesystem` and `src` variables. Also, `ext4` seems wrong in any case here.
Thanks @rndmh3ro @schurzi for your support!
I did not find any suggestions in CIS either. We do this in our environment and have no hassle with it.
The filesystem depends on the Linux distribution. If you want, I can check /boot on several distributions. I am not sure, but on RHEL 7 I think it was ext4; on RHEL 8 it is xfs.
I haven't tested it yet. Is Ansible able to autodetect the filesystem? If that is a feature of Ansible, I can default to an empty string unless the user sets it manually.
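One way to auto-detect the filesystem would be to read it from the gathered mount facts (a sketch only, not code from this PR; the detected-fact variable name is illustrative):

```yaml
# Sketch only: look up the current filesystem of /boot from the mount facts
# and use it when no filesystem is configured explicitly.
- name: Detect current filesystem of /boot
  ansible.builtin.set_fact:
    os_mnt_boot_detected_fstype: >-
      {{ ansible_facts.mounts
         | selectattr('mount', 'equalto', '/boot')
         | map(attribute='fstype')
         | first | default('') }}

- name: Fall back to the detected filesystem
  ansible.builtin.set_fact:
    os_mnt_boot_filesystem: "{{ os_mnt_boot_detected_fstype }}"
  when: os_mnt_boot_filesystem | length == 0
```

This avoids hardcoding ext4 while still letting users override the value.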
Can you merge this PR? Is there anything more to do for this merge? Thanks
Hey @lbayerlein, sorry for not responding sooner, I've been very busy. The PR works. However there's at least a problem with
`mount` gives this:
/etc/fstab shows this:
Please check
Force-pushed from `ef87809` to `725fd05`.
Hi @rndmh3ro, thanks for the review. I checked it again and fixed it. Sorry for that; this should now work. Maybe you can take a look again, just to be sure it is all fine now :) Thanks again!
The code works and I'd merge it now. However the tests are failing for CentOS and Rocky Linux, because in the containers there is no
Signed-off-by: Ludwig Bayerlein <bayerlein@bayerlein-networks.com>
@rndmh3ro So I added a
Thanks! Finally merged. :)
* first testing with tasks and variables
* update variables for dir options
* updated permissions and defaults
* fix home dir permissions
* updated tasks with useful variables
* reorder tasks: first remount, then manage fstab and fix permissions on directories; rename task names with mountpoints (slashes)
* shorten tasks with list items
* change defaults for /boot directory, because it is bad behaviour if Ansible changes boot entries with a default value
* Update documentation for new parameters to manage mountpoints
* Update roles/os_hardening/tasks/minimize_access.yml
* Update roles/os_hardening/tasks/minimize_access.yml
* Fix state on every new task
* loop instead of list
* testing remount with register
* add remounts with loop over all changed folders
* testing and solving trouble with variable names
* optimize default permissions for var-log-audit
* optimize default permissions for var-log-audit
* change to new optimized permissions of var-log-audit
* fix some defaults in fstab to configure as mounted
* add stat and check if boot folder exists

Signed-off-by: Ludwig Bayerlein <bayerlein@bayerlein-networks.com>
Co-authored-by: Ludwig Bayerlein <bayerlein@bayerlein-networks.com>
Co-authored-by: Sebastian Gumprich <rndmh3ro@users.noreply.github.com>
PR adding multiple mountpoints to do some hardening on the file and mountpoint structure.