
a way to disable automounting snapshots via access to .zfs dir #3963

Open
jelinekr opened this issue Oct 27, 2015 · 20 comments
Labels
Bot: Not Stale (Override for the stale bot) · Type: Defect (Incorrect behavior, e.g. crash, hang)

Comments

@jelinekr

Hi,

Is there a way to disable automounting snapshots when they are accessed via the .zfs directory? We need this for security reasons: in cases where a too-permissive dirent entry gets fixed, the vulnerability is still present and accessible in older snapshots. The "snapdir" zfs property only allows hiding the directory from readdir(3), but it doesn't prevent automounting snapshots when the directory is accessed. Can we add something similar to the zfs_expire_snapshot variable (i.e. zfs_disable_snapshot_automount or zfs_snapshot_canmount) for this?

Thanks,
Rob

@behlendorf
Contributor

@jelinekr I'm surprised this hasn't come up before. Right now there's no way to disable .zfs snapshot automounting, but I definitely agree there should be. For the moment you'll need to disable it at build time. Probably the best way to do this is to #undef HAVE_AUTOMOUNT in zfs_config.h after running configure.
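
A minimal sketch of that build-time workaround (assuming an in-tree build where zfs_config.h is generated in the top-level build directory):

    ./configure
    # a later #undef cancels the generated #define for all code including this header
    echo '#undef HAVE_AUTOMOUNT' >> zfs_config.h
    make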

As for how to support this, I think extending the snapdir property is the logical thing to do. How about adding a disabled option, where disabled means exactly what you'd expect: the .zfs directory just won't exist.

       snapdir=hidden | visible | disabled
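
Hypothetical usage once such a value exists (dataset name is a placeholder):

    zfs set snapdir=disabled tank/home
    zfs get snapdir tank/home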

@chjohnst

chjohnst commented Dec 9, 2015

I like that proposal

@jelinekr
Author

jelinekr commented Dec 9, 2015

Me too, thanks for looking into this.


@ilovezfs
Contributor

ilovezfs commented Dec 9, 2015

What would be the expected behavior, while snapdir=disabled is set, when a user tries to mkdir .zfs and possibly populate it with data?

@behlendorf
Contributor

Good question. I suppose it will need to fail with a reasonable errno, so that snapdir can be safely enabled again if desired.

@ilovezfs
Contributor

ilovezfs commented Dec 9, 2015

@behlendorf That makes sense. Another issue is whether it makes sense to have the snapdir property controlling .zfs or .zfs/snapshot. It feels like a mistake to conflate snapdir and ctldir. Would it make more sense to have a) two separate properties, snapdir and sharedir, b) three separate properties, snapdir, sharedir, and ctldir, c) just snapdir and ctldir, since a specific need for a separate sharedir property has yet to arise other than in this context, or d) something else?

Yet, I suppose the snapdir property's hidden/visible values have already conflated them, so it may be fine to continue that practice. However, when snapdir=hidden, both .zfs/snapshot and .zfs/shares are actually still available. With snapdir=disabled, I guess both would become unavailable for the sake of preventing the snapshot automounting, possibly making .zfs/shares "collateral damage" depending on the user's intent. Or would you want to leave .zfs/shares access intact even when snapdir=disabled?

As for errnos, currently mkdir .zfs{,/snapshot,/shares} all set the exit status to 1 and print a "File exists" error, while touch .zfs{,/snapshot,/shares} all succeed with exit status 0. Strangely, touch .zfs actually updates the timestamp (is this a bug?), while touch .zfs/snapshot and .zfs/shares do not update the timestamps despite the 0 exit statuses.
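
As a transcript, the behavior described above looks like this (dataset mountpoint is a placeholder):

    $ mkdir /tank/fs/.zfs
    mkdir: cannot create directory '/tank/fs/.zfs': File exists
    $ echo $?
    1
    $ touch /tank/fs/.zfs; echo $?
    0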

@behlendorf
Contributor

This issue was addressed in master by adding support for zfs allow in commit f74b821. Enforcement is now done through delegations, which is consistent with the other OpenZFS platforms. The zfs_admin_snapshot module option was left in place for backwards compatibility.
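
For example, delegating snapshot administration via zfs allow (user and dataset names are placeholders):

    # delegate snapshot creation/destruction and mounting to a user
    zfs allow alice snapshot,mount,destroy tank/home
    # show the delegations now in effect
    zfs allow tank/home
    # legacy module option kept for backwards compatibility
    echo 1 > /sys/module/zfs/parameters/zfs_admin_snapshot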

@wl2018

wl2018 commented Feb 1, 2018

Sorry, could you tell me how to use zfs allow to disable or restrict access to the .zfs snapshot dir? After searching and looking at references on the Internet, I still don't know how to do it.

@pgassmann

pgassmann commented Dec 6, 2021

@behlendorf how does zfs allow resolve this issue?

I was surprised that the snapdirs are accessible even with snapdir=hidden. To me it seems very strange that something is there but is not listed; that's not something I knew was possible (on Linux).

This could be a potential data protection/privacy issue if data that should be deleted is still accessible to applications/containers.
Additionally, through the snapdirs, data could be accessible that is otherwise overmounted (bad design).

If a ZFS filesystem is mounted into a docker container, the snapdirs are available from within the container.

EDIT: The snapshots can be listed, but the content is not accessible. (At least in my case):

/ # ls -lah /data/.zfs/snapshot/migrate20210510/ -alh
ls: /data/.zfs/snapshot/migrate20210510/: Symbolic link loop

I assume many ZFS users are not aware of that doubly-hidden but still available .zfs path and expect snapdir=hidden to mean not available.
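
To illustrate the hidden-but-accessible behavior (dataset name is a placeholder; paths as in the example above):

    $ zfs get -H -o value snapdir data
    hidden
    $ ls -a /data              # .zfs is not listed
    $ ls /data/.zfs/snapshot   # ...but direct access still works
    migrate20210510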

@JanBeh

JanBeh commented Aug 9, 2022

@behlendorf how does zfs allow resolve this issue?

I don't think it resolves this issue. This issue (#3963) should be reopened; see also #9028.

@behlendorf
Contributor

Reopening. Thanks for revisiting this, we should provide a mechanism to at least disable the snapshot directories.

@behlendorf behlendorf reopened this Aug 9, 2022
@behlendorf behlendorf added the Type: Defect Incorrect behavior (e.g. crash, hang) label Aug 9, 2022
@stale

stale bot commented Aug 10, 2023

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the Status: Stale No recent activity for issue label Aug 10, 2023
@JanBeh

JanBeh commented Aug 10, 2023

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.

This issue is still a problem. See also the corresponding FreeBSD issue.

I have explained why this is a potential security problem in this thread and in that comment. I'm surprised that nothing has happened for years, and I got a message from the stale bot that the issue is going to be closed automatically if nobody responds. So here is my response: this is still an issue! And it's security relevant, even though some people argue it's not.

@stale stale bot removed the Status: Stale No recent activity for issue label Aug 10, 2023
@rincebrain rincebrain added the Bot: Not Stale Override for the stale bot label Dec 4, 2023
@JanBeh

JanBeh commented Dec 8, 2023

Note that if acb33ee lands, there may be more people exposed to this potentially security-related issue.

@Fabian-Gruenbichler
Contributor

Hi!

We independently discovered this a while back and reported it privately, without being aware that there was already a public issue about it.

The proposed approach in the private discussion was the following (slightly truncated):

  • We patch the code to set both nosuid and nosgid on automounts in .zfs by default, controllable by a kernel module parameter (see the sketch after this list).
  • We add an option to allow system administrators to set snapdir=disabled, although when the pool is imported by an older kernel module, it will still show snapdir=hidden. This is how we implemented redundant_metadata=none in a way that did not require a disk format change.
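
A sketch of what the first bullet could look like from userspace (the module parameter name zfs_snapdir_nosuid is purely a placeholder, not an existing option):

    # load the module with the hypothetical parameter enabled
    modprobe zfs zfs_snapdir_nosuid=1
    # trigger an automount, then check the mount flags
    ls /tank/data/.zfs/snapshot/snap1 > /dev/null
    mount | grep 'snapshot/snap1'
    # expected: tank/data@snap1 on /tank/data/.zfs/snapshot/snap1 type zfs (ro,nosuid,...)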

If nobody else is already working on this, we'll allocate some dev time in the near future, as we'd really like to see this gap fixed! AFAIK, the above approach is still the way to go.

@JanBeh

JanBeh commented Jan 25, 2024

  • We patch the code to set both nosuid and nosgid on automounts in .zfs by default, controllable by a kernel module parameter.

Note that there are more security implications than SUID and SGID, which I have meanwhile outlined in this thread on the FreeBSD Forum.

  • We add an option to allow system administrators to set snapdir=disabled, although when the pool is imported by an older kernel module, it will still show snapdir=hidden. This is how we implemented redundant_metadata=none in a way that did not require a disk format change.

May I ask why not have an option that limits access to the root user, e.g. something like snapdir=privileged or snapdir=root or snapdir=protect? I would assume that user "root" can/must always be trusted in that matter. Maybe this would make many use-cases easier to handle. (Of course, one could still use zfs clone where desired, but it would require some effort, or at least some caution, to ensure the mountpoint isn't publicly readable.)
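
For reference, the clone-based workaround mentioned above could look like this (names are placeholders; note the brief window between the automatic mount and the chmod):

    # root-only, read-only access to old data without the snapshot automount
    zfs clone -o mountpoint=/root/inspect -o readonly=on tank/data@old tank/inspect
    chmod 700 /root/inspect
    # ...inspect the data, then clean up
    zfs destroy tank/inspect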

If nobody else is already working on this, we'll allocate some dev time in the near future, as we'd really like to see this gap fixed! AFAIK, the above approach is still the way to go..

I think there are several people wanting to see this fixed (and this issue has been lingering for years), so I'd really appreciate it if you provided a solution to the problem. 👍

@Fabian-Gruenbichler
Contributor

  • We patch the code to set both nosuid and nosgid on automounts in .zfs by default, controllable by a kernel module parameter.

Note that there are more security implications than SUID and SGID, which I have meanwhile outlined in this thread on the FreeBSD Forum.

Sure, but (most of?) the others can only be prevented by not giving access in the first place, while for setuid/setgid we have a mount option. Also, it might be fine to allow unprivileged read access to a dataset's snapshots (e.g., an OSTree or similar contents), but we still want to prevent privilege escalation via known-vulnerable, fixed-in-the-meantime setuid/setgid binaries ;)

  • We add an option to allow system administrators to set snapdir=disabled, although when the pool is imported by an older kernel module, it will still show snapdir=hidden. This is how we implemented redundant_metadata=none in a way that did not require a disk format change.

May I ask why not have an option that limits access to the root user, e.g. something like snapdir=privileged or snapdir=root or snapdir=protect? I would assume that user "root" can/must always be trusted in that matter. Maybe this would make many use-cases easier to handle. (Of course, one could still use zfs clone where desired, but it would require some effort or at least some caution to ensure the mountpoint isn't publicly readible.)

Might also work, but I'd have to check whether that makes sense with the way the ctl dir and automounting are currently implemented. Might also be possible as a follow-up.

@JanBeh

JanBeh commented Jan 26, 2024

[…] (most of?) the other[s] [security implications] can only be prevented by not giving access in the first place […]

This argument (edit: without the "most of" restriction) is often brought up, but isn't true in my opinion (as I argued in the linked discussion thread(s)). In short: there are scenarios where you execute a command like chmod g-rwx without any mistake having been made beforehand. For example, before a user is added to a certain group, you might want to change that group's permissions on a certain directory. The way ZFS snapshots are currently automounted, the chmod g-rwx command doesn't take immediate effect, and users added to a group may gain access to files which they were never supposed to see (even though there was no misconfiguration at any time, because the previous members of that group were okay to see the files in that directory). There may also be other cases where access rights to a file change without there having been a misconfiguration beforehand.
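
As a concrete sequence (names are placeholders):

    # group permissions were appropriate for the old members; tighten them
    # before adding a new member:
    chmod g-rwx /tank/data/project
    usermod -aG project newuser
    # yet newuser can still read the old, more permissive copy:
    sudo -u newuser ls /tank/data/.zfs/snapshot/daily/project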

For these reasons, I consider the world-readable automounting of snapshots to be a security flaw. And this is not just related to SUID/SGID flags. However, some people have a different opinion on that.

That said, I still think it's a good idea to remove SUID/SGID in the snapshots by default because it may prevent some potential security problems. 👍

also, it might be fine to allow unprivileged read access to a dataset's snapshots (e.g., an OSTree or similar contents)

Agreed.

but we still want to prevent privilege escalation via known-vulnerable, fixed-in-the-meantime setuid/setgid binaries ;)

As said, this only helps against certain mistakes/errors. But still, I think it's a good idea; one should just be aware that it doesn't cover everything (in particular if you assume that snapshots may have been taken in a potentially insecure state).

  • We add an option to allow system administrators to set snapdir=disabled, although when the pool is imported by an older kernel module, it will still show snapdir=hidden. This is how we implemented redundant_metadata=none in a way that did not require a disk format change.

May I ask why not have an option that limits access to the root user, e.g. something like snapdir=privileged or snapdir=root or snapdir=protect? I would assume that user "root" can/must always be trusted in that matter. Maybe this would make many use-cases easier to handle. (Of course, one could still use zfs clone where desired, but it would require some effort or at least some caution to ensure the mountpoint isn't publicly readible.)

might also work, but I'd have to check whether that makes sense with the way the ctl dir and automounting is currently implemented. might also be possible as a follow up.

This is just a "would be nice to have". If it were possible to disable automounting of snapshots completely, that would at least fix the security problem for now.

For security reasons, I would personally advocate that snapshots not be world-readable by default (whether restricted to root or disabled completely by default). There may be other opinions for backward-compatibility reasons, but I believe security should take precedence here.

@Fabian-Gruenbichler
Contributor

I think we are pretty much on the same page :) When I wrote "not giving access in the first place", I was referring to the snapshot, not the original data. I fully agree that disabling automounting/automounted snapshot access in general (or for non-root) makes sense in a lot of setups!

@JanBeh

JanBeh commented Jan 29, 2024

I think we are pretty much on the same page :) when I wrote "not giving access in the first place" I was referring to the snapshot, not the original data.

Oh okay, sorry for the misunderstanding.

Fabian-Gruenbichler added a commit to Fabian-Gruenbichler/zfs that referenced this issue Mar 6, 2024
in some environments, just making the .zfs control dir hidden from sight
might not be enough. in particular, the following scenarios might
warrant not allowing access at all:
- old snapshots with wrong permissions/ownership
- old snapshots with exploitable setuid/setgid binaries
- old snapshots with sensitive contents

introducing a new 'disabled' value that not only hides the control dir,
but prevents access to its contents by returning ENOENT solves all of
the above.

the new property value takes advantage of 'iuv' semantics ("ignore
unknown value") to automatically fall back to the old default value when
a pool is accessed by an older version of ZFS that doesn't yet know
about 'disabled' semantics.

I think that technically the zfs_dirlook change is enough to prevent
access, but preventing lookups and dir entries in an already opened .zfs
handle might also be a good idea to prevent races when modifying the
property at runtime.

Fixes: openzfs#3963

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>

.zfs: don't return .zfs inode if disabled

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
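
With the referenced patch applied, the intended behavior would roughly be as follows (a sketch; the disabled value exists only in the linked commits, and the dataset name is a placeholder):

    zfs set snapdir=disabled tank/data
    ls /tank/data/.zfs
    ls: cannot access '/tank/data/.zfs': No such file or directory
    # an older module that doesn't know 'disabled' falls back to the default:
    zfs get -H -o value snapdir tank/data
    hidden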