Use IOKit to Create /dev Entries for Every ZFS Dataset #116

Closed · ilovezfs opened this issue Jan 8, 2014 · 28 comments

ilovezfs (Contributor) commented Jan 8, 2014

No description provided.

athei commented Jan 15, 2014

Is it important that the name returned by f_mntfromname is a device file, or is it just important that the name is unique?

What kind of improvement does generating actual device files offer? Network file systems do not generate device files on OS X, and they work fine.

Or are network shares not first-class citizens on OS X? By that I mean: do they behave the same way to userspace applications as filesystems mounted from disk?

@ilovezfs (Contributor Author)

The use of actual device nodes is important for each of the issues mentioned above, as previously stated. Network file systems are not treated as first-class citizens, and they do have unique names.

evansus (Contributor) commented Apr 25, 2014

@lundman I noticed that vfs_fsadd is only called once by zfs.kext, during start.

I found the vfs_fsadd reference in mount.h, but it is vague.

As best I can tell from the documentation, vfs_fsadd should register each mounted filesystem, not a single top-level fs. For example, a simple filesystem driver that presents a single filesystem would call it once at module load. But in that case, multiple instances of the filesystem driver may exist in the kernel, and each one would register its fs using vfs_fsadd.

Does this seem accurate?

lundman (Contributor) commented Apr 25, 2014

Ah hmm, vfs_fsadd registers a new type of filesystem (as in msdos, ntfs, etc.) and should only be called once. Take a look at the msdos.kext sources: http://opensource.apple.com/source/msdosfs/msdosfs-198/msdosfs.kextproj/msdosfs.kmodproj/

Each mount does not need to register a new "type" of filesystem.
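
To illustrate, here is a minimal sketch of that one-time registration as a kext start routine might do it, assuming hypothetical zfs_vfsops_template and zfs_vnodeop_opv_desc tables (the names are illustrative, not the actual o3x symbols):

#include <mach/kmod.h>
#include <sys/mount.h>
#include <sys/vnode.h>
#include <string.h>

/* Hypothetical tables; the real kext supplies its own vfsops/vnodeops. */
extern struct vfsops zfs_vfsops_template;
extern struct vnodeopv_desc zfs_vnodeop_opv_desc;

static struct vnodeopv_desc *zfs_opvdescs[] = { &zfs_vnodeop_opv_desc };
static vfstable_t zfs_vfstable;

kern_return_t zfs_kext_start(kmod_info_t *ki, void *data)
{
    struct vfs_fsentry vfe = { 0 };

    vfe.vfe_vfsops   = &zfs_vfsops_template;
    vfe.vfe_vopcnt   = 1;            /* number of entries in zfs_opvdescs */
    vfe.vfe_opvdescs = zfs_opvdescs;
    strlcpy(vfe.vfe_fsname, "zfs", sizeof (vfe.vfe_fsname));
    vfe.vfe_flags    = VFS_TBLTHREADSAFE | VFS_TBLNOTYPENUM |
        VFS_TBL64BITREADY;

    /* Register the "zfs" filesystem *type* exactly once. Individual
     * datasets are attached later by ordinary mount(2) calls, which
     * never go through vfs_fsadd. */
    return (vfs_fsadd(&vfe, &zfs_vfstable) == 0 ?
        KERN_SUCCESS : KERN_FAILURE);
}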

evansus (Contributor) commented Apr 25, 2014

Right on - my misunderstanding, thanks for clearing that up.

ilovezfs (Contributor Author) commented Feb 7, 2015

Volume rename also needs /dev/disk* entries in order to be enabled in Finder.
#273

ghost commented Aug 2, 2015

Just wondering: what is the current status of this issue? Is there any progress?

lundman (Contributor) commented Aug 2, 2015

"issue116_manual" is the snapshot I am working on, the /dev nodes part works, and snapshots can automount. But ZVOLs at the moment can not be unmounted. It is being worked on when convenient :)

ghost commented Aug 3, 2015

Awesome, thanks!

Keep up the good work!

@dabrahams

Slight bump, a year later. Any progress/anything I can do to help?

lundman (Contributor) commented Jun 20, 2016

This is currently in branch "issue116_manual" and works, if a little roughly. The snapshots sort of mount automatically, but sometimes too late for the triggering process. The zfs_boot work is based on this branch, although I noticed I failed to create a fake entry for the boot device. I will fix that soon.

@dabrahams

I’m not sure which snapshots or triggering process you’re talking about. Care to elucidate?

Thanks!

lundman (Contributor) commented Jun 22, 2016

Let's see now: when access inside ".zfs/snapshot/" is detected, zfs_ctldir.c triggers a (read-only) mount of that snapshot automatically.
https://github.com/openzfsonosx/zfs/blob/issue116_manual/module/zfs/zfs_ctldir.c#L1413

But being second-class citizens in the XNU kernel, we simply have no way to call mount from inside the kernel. We cannot even use the special helper that NFS has, which is a shame, since Apple could have shown there how they want us to do kernel mounts. Why does NFS get to cheat! :)

So we create a new fake /dev/disk entry with a contentHint that matches the zfssnapshot.fs bundle, which (eventually) calls mount on it, and we come back into the kernel. We can then release the process that caused it all to start.
https://github.com/openzfsonosx/zfs/blob/issue116_manual/module/zfs/zfs_osx.cpp#L777

As you can see, it is pretty half-cooked and icky (it even just calls delay rather than using condvars to signal that the mount succeeded).
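
If anyone feels like cleaning that part up, the delay could be replaced with the SPL condvar primitives the port already carries; roughly like this sketch, where the zsnap_* names are hypothetical and not the actual branch code:

/* Sketch only; assumes mutex_init()/cv_init() ran at setup. */
kmutex_t   zsnap_mtx;
kcondvar_t zsnap_cv;
boolean_t  zsnap_mounted = B_FALSE;

/* Kernel side, after publishing the fake /dev/disk entry: block the
 * triggering process until userland finishes the mount, bounded so a
 * wedged helper cannot hang us forever. */
mutex_enter(&zsnap_mtx);
while (!zsnap_mounted) {
    if (cv_timedwait(&zsnap_cv, &zsnap_mtx,
        ddi_get_lbolt() + SEC_TO_TICK(5)) == -1)
        break;    /* timed out; fall through to error handling */
}
mutex_exit(&zsnap_mtx);

/* Mount-completion side (when the snapshot mount re-enters the kernel
 * and succeeds): flag it and wake the waiter. */
mutex_enter(&zsnap_mtx);
zsnap_mounted = B_TRUE;
cv_signal(&zsnap_cv);
mutex_exit(&zsnap_mtx);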

@ilovezfs (Contributor Author)

I wonder if we should file an rdar about the NFS "cheating" since this workaround seems totally unnecessary.

@dabrahams

Radars are good :-)

@dabrahams

Thanks for the explanation. Auto-mounting snapshots is not high on my list of priorities so maybe I'll pursue this angle and see where it leads.


ilovezfs (Contributor Author) commented Jul 5, 2016

@dabrahams which angle do you mean?

@dabrahams

@ilovezfs I mean trying the branch as a way to get Backblaze to recognize the filesystems and actually back them up.

ilovezfs (Contributor Author) commented Jul 5, 2016

@dabrahams did you get a chance to try what Michael Newbery suggested?
https://openzfsonosx.org/forum/viewtopic.php?f=26&t=2787&p=7180&hilit=Backblaze#p7186

@jessiebryan

@dabrahams, @ilovezfs, I use o3x and Backblaze. Backblaze does not officially support backing up a ZFS dataset; however, their file scanner does include these files as part of my root-volume backup. I have verified that my datasets mounted under /zfs/* and /Users/jbryan/Dropbox are backed up and can be restored. Datasets mounted under /Volumes/* do not get backed up, regardless of whether you allow /Volumes/* in your Backblaze preferences.

Screenshots:

  1. Backblaze restore http://prntscr.com/bp784v

ZFS Info:

m83:/ root# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
zpool              14.5T  6.59T   540K  none
zpool/Dropbox       210G  6.59T   210G  /Users/jbryan/Dropbox
zpool/Movies1      6.08T  6.59T  6.05T  /zfs/Movies1
zpool/PLEXMETA     31.0G  6.59T  31.0G  /Volumes/PLEXMETA
zpool/Photos1       192G  6.59T   192G  /zfs/Photos
zpool/TV-Active    4.64T  6.59T  4.10T  /zfs/TV-Active
zpool/TV-Ended     3.16T  6.59T  3.04T  /zfs/TV-Ended
zpool/WHITELINER1   183G  6.59T   183G  /zfs/WHITELINER1
zpool/cfg1          935M  6.59T   935M  /zfs/cfg

In the list above, my "/Volumes/PLEXMETA" dataset is not included in the Backblaze backup. The workaround is to adjust the ZFS dataset's mountpoint path:

sudo zfs set mountpoint=/zfs/PLEXMETA zpool/PLEXMETA

In case you want to use Dropbox over a ZFS Dataset + Backblaze:

sudo zfs set mountpoint=/Users/${USER}/Dropbox zpool/Dropbox

You would need to do this for each dataset you want Backblaze to back up. Backblaze does not consider these separate volumes; they just attach to the root volume, and that's OK for me.

@dabrahams

@jessiebryan Such a simple answer, and it works! Amazing, thank you!

@lennonolium

@jessiebryan @dabrahams

Is that solution still working for you? Backblaze isn't traversing /zfs for me (same perms as /Volumes). I'm wondering if I need to remove the computer and try doing another full backup to BB.

@dabrahams

@lennonolium Yes, it's working for me.

@lennonolium

@dabrahams, thanks. I should have updated last night. After changing mount points along similar lines, I forced BB to run a backup, but it wouldn't pick up the new /zfs/... subdirectories. I sat here for hours tweaking various dataset parameters. Then I woke up this morning and it had started syncing. Ha! All 7TB up on BB and living on ZFS. No problem at all.

lundman mentioned this issue Jul 25, 2017
@Technofrikus

@jessiebryan Thank you very much for the mountpoint idea! I had to reinstall Backblaze because it did not recognize the folder at first and said the backup was complete even though it was not, even leaving aside the zfs folder (but this might be a separate, unrelated Backblaze problem). But now it works perfectly and uploads. Now I just have to wait forever for the large upload to complete :P

mauricev commented Jan 4, 2018

I use Arq to access Backblaze B2 storage (as well as Wasabi). Arq sees ZFS datasets natively.

JMoVS (Contributor) commented Apr 28, 2019

@lundman I think this can be closed, as issue116lite was merged...

@JMoVS JMoVS closed this as completed Apr 28, 2019