Use IOKit to Create /dev Entries for Every ZFS Dataset #116
Is it important that the name returned by f_mntfromname is a device file, or is it just important that the name is unique? What kind of improvements does the generation of actual device files offer? Network file systems do not generate device files on OS X and they work fine. Or are network shares not first-class citizens on OS X? What I mean by this is that they behave in the same way to userspace applications as filesystems mounted from disk.
The use of actual device nodes is important for each of the issues mentioned above, as previously stated. Network file systems are not treated as first-class citizens, and they do have unique names.
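A quick way to compare what the mount-from name looks like for local disks versus network shares is to inspect the mount table; the grep pattern below is only illustrative:

```sh
# Print the mount table; each line reads:
#   <f_mntfromname> on <f_mntonname> (<fstype>, <options>)
mount

# A local HFS+/APFS volume reports a real device node such as /dev/disk1s2,
# whereas an NFS or SMB share reports something like server:/export --
# a unique name, but not a device file. Filter for ZFS entries:
mount | grep zfs
```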
@lundman I noticed that … I found the mount.h reference here, but it is vague. The best I can tell from the documentation, … Does this seem accurate?
Ah, hmm. Each mount does not need to register a new "type" of filesystem.
Right on - my misunderstanding, thanks for clearing that up.
Volume rename also needs /dev/disk* entries in order to be enabled in Finder.
Just wondering: what is the current status of this issue? Is there any progress?
"issue116_manual" is the snapshot I am working on, the /dev nodes part works, and snapshots can automount. But ZVOLs at the moment can not be unmounted. It is being worked on when convenient :) |
Awesome, thanks! Keep up the good work!
Slight bump, a year later. Any progress/anything I can do to help?
This is currently in branch "issue116_manual" and works, if a little roughly. The snapshots sort of mount automatically, but sometimes too late for the triggering process. The zfs_boot is based on this branch, although I noticed I failed to create a fake entry for the boot device. I will fix that soon.
I'm not sure which snapshots or triggering process you're talking about. Care to elucidate? Thanks!
Lessee now: when access inside ".zfs/snapshot/" is detected, … But being second-class citizens in the XNU kernel, there just is no way for us to call … So we create a new fake /dev/disk entry, with a contentHint that matches, … As you can see, it is pretty half-cooked and icky (it even just calls …).
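One way to look at the result of that workaround from userspace, assuming the synthetic entry shows up as, say, disk5 (the device name here is purely hypothetical):

```sh
# List all disk entries, including any synthetic ones created for snapshots.
diskutil list

# Inspect one entry; "disk5" is a placeholder -- substitute the entry that
# appears for the snapshot. The Content (IOContent) field shown there is
# related to the content hint mentioned above.
diskutil info disk5
```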
I wonder if we should file an rdar about the NFS "cheating" since this workaround seems totally unnecessary.
Radars are good :-)
Dave
Thanks for the explanation. Auto-mounting snapshots is not high on my list of priorities, so maybe I'll pursue this angle and see where it leads.
@dabrahams which angle do you mean?
@ilovezfs I mean trying the branch as a way to get Backblaze to recognize the filesystems and actually back them up.
@dabrahams did you get a chance to try what Michael Newbery suggested?
@dabrahams, @ilovezfs, I use o3x and Backblaze. Backblaze does not officially support backing up a ZFS dataset; however, their file scanner does include these files as part of my root volume backup. I have verified that my datasets mounted under /zfs/* and /Users/jbryan/Dropbox are backed up, and can be restored. Datasets mounted under /Volumes/* do not get backed up, regardless of whether you allow /Volumes* in your Backblaze Preferences. (Screenshots omitted.)
ZFS Info: (dataset listing omitted)
In the list above, my "/Volumes/PLEXMETA" is not included in the Backblaze backup. The workaround here is to adjust the ZFS dataset mountpoint path:
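A minimal sketch of that workaround, assuming a pool named tank holding the PLEXMETA dataset (adjust pool, dataset, and target path to your own setup):

```sh
# Check where the dataset currently mounts.
zfs get mountpoint tank/PLEXMETA

# Move it out of /Volumes so Backblaze's scanner treats it as part of the
# root volume; /zfs/PLEXMETA is just an example target path.
sudo zfs set mountpoint=/zfs/PLEXMETA tank/PLEXMETA
```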
In case you want to use Dropbox over a ZFS Dataset + Backblaze:
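A hedged sketch of one way to do that, assuming a dedicated dataset named tank/dropbox and the home directory path used earlier in this comment:

```sh
# Create a dataset that mounts directly at the Dropbox folder; pool,
# dataset, and user names are illustrative.
sudo zfs create -o mountpoint=/Users/jbryan/Dropbox tank/dropbox

# For an existing dataset, just change its mountpoint instead.
sudo zfs set mountpoint=/Users/jbryan/Dropbox tank/dropbox
```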
You would need to do this for each dataset you want Backblaze to back up. Backblaze does not consider these as separate volumes; they just attach to the root volume, and that's OK for me.
@jessiebryan Such a simple answer, and it works! Amazing, thank you!
Is that solution still working for you? Backblaze isn't traversing /zfs for me (same perms as /Volumes). I'm wondering if I need to remove the computer and try doing another full backup to BB.
@lennonolium yes it's working for me
@dabrahams, thanks. I should have updated last night. After changing mount points to a similar strategy, I would force BB to conduct a backup, but it wouldn't pick up the new /zfs/... subdirectories. I sat here for hours tweaking various dataset parameters. Then, I woke up this morning and it had started syncing. Ha! All 7TB up on BB and living on ZFS. No problem at all.
@jessiebryan Thank you very much for the mountpoint idea! I had to reinstall Backblaze because at first it did not recognize the folder and said the backup was complete, even though it was not complete even without the ZFS folder (but this might be a separate Backblaze problem, unrelated to this issue). But now it works perfectly and uploads. Now I just have to wait forever for the large upload to complete :P
I use Arq to access Backblaze B2 storage (as well as Wasabi). Arq sees ZFS datasets natively.
@lundman I think as issue116lite was merged, this can be closed...