Support for ZPL attribute tables and embedded data #2
I'll try to take a look at this and see whether I can make progress in that direction.
@hiliev I've implemented the embedded data in the dnode and detected the System Attribute bonus buffer, however I'm still trying to understand the format of its content. Is it a ZAP-encoded buffer? Update: I think I found it in zdb: `dump_znode(objset_t *os, uint64_t object, void *data, size_t size)`
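For reference, the start of an SA bonus buffer is a small fixed header (`sa_hdr_phys_t`). Below is a minimal parsing sketch, assuming a little-endian pool; the exact bit split of `sa_layout_info` is my reading of `sa_impl.h` and should be double-checked against a real pool:

```python
import struct

SA_MAGIC = 0x2F505A  # value of SA_MAGIC in sa_impl.h

def parse_sa_header(bonus):
    """Parse the sa_hdr_phys_t at the start of an SA bonus buffer.

    Assumed layout (little-endian pool):
      uint32 sa_magic        -- must equal SA_MAGIC
      uint16 sa_layout_info  -- low 10 bits: layout number,
                                next bits: header size in 8-byte units
    """
    magic, layout_info = struct.unpack_from("<IH", bonus, 0)
    if magic != SA_MAGIC:
        raise ValueError("not an SA bonus buffer: magic=%#x" % magic)
    layout_num = layout_info & 0x3FF               # BF32_GET(info, 0, 10)
    hdr_size = ((layout_info >> 10) & 0x3F) << 3   # size in bytes
    return layout_num, hdr_size
```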
This seems to be a bit more complicated than expected. ZFS has an attribute registration mechanism, SA. There is a bunch of layout tables that define the attributes and their offsets. Those are stored ZAP-like in several system objects. The order of the attributes may differ from pool to pool, therefore those system objects have to be parsed and the tables analysed. The objects can be seen in your output from the other issue:
The SA master node (judging from the hex dump, although I haven't really decompressed the embedded data) appears to be a MicroZAP that holds the object IDs of the SA attribute registration and the SA attribute layouts objects. The attributes in the bonus buffer itself are prefixed with a
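The MicroZAP format itself is simple enough to decode directly. A sketch of a decoder, with the field layout taken from my reading of `zap_impl.h` (64-byte `mzap_phys_t` header, then fixed 64-byte entries); the entry names such as `REGISTRY` in the usage below are assumptions about the SA master node's contents:

```python
import struct

ZBT_MICRO = (1 << 63) + 3  # mz_block_type value marking a microZAP block

def parse_microzap(block):
    """Decode a microZAP block into a {name: value} dict.

    Assumed on-disk layout (little-endian pool):
      64-byte mzap_phys_t header (block type, salt, normflags, padding),
      then 64-byte mzap_ent_phys_t entries:
        uint64 mze_value, uint32 mze_cd, uint16 pad, char mze_name[50].
    """
    (block_type,) = struct.unpack_from("<Q", block, 0)
    if block_type != ZBT_MICRO:
        raise ValueError("not a microZAP block")
    entries = {}
    for off in range(64, len(block), 64):
        value, _cd = struct.unpack_from("<QI", block, off)
        name = block[off + 14:off + 64].split(b"\0", 1)[0]
        if name:  # all-zero slots are unused entries
            entries[name.decode("ascii")] = value
    return entries
```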
@hiliev I think you are right, the SA master node (index 32 actually) contains:
May I ask a question: in my test pool datapool I have created a child dataset. One thing I noticed is that py-zfs-rescue collects the top-level MOS dnodes with type 16 as the target datasets to archive. The root dataset "datapool" seems to be in this set (it says there is data in it, however there are no objects inside it), but not the child dataset "datapool/datadir". How is the child-dataset traversal done when starting from the MOS?
I never really looked into how parent-child relationships are implemented. In my case, the MOS was broken and the root dataset was lost. I was happy to just be able to find all accessible datasets and rescue their content.
@hiliev The child dataset dependency seems to be retrieved by:
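A rough sketch of how that traversal could look: each DSL directory carries the object ID of a child-directory ZAP, and recursing over it enumerates the dataset tree. The helpers `read_dsl_dir` and `read_zap` are hypothetical, not py-zfs-rescue's actual API; the field names follow `dsl_dir_phys_t`:

```python
def walk_datasets(mos, dir_objid, path):
    """Recursively enumerate datasets starting from a DSL directory.

    `mos.read_dsl_dir` and `mos.read_zap` are hypothetical helpers: the
    first returns a parsed dsl_dir_phys_t (only dd_head_dataset_obj and
    dd_child_dir_zapobj are used here), the second returns a ZAP object
    as a {name: value} dict. Child DSL directories are listed in the
    ZAP referenced by dd_child_dir_zapobj.
    """
    dsl_dir = mos.read_dsl_dir(dir_objid)
    yield path, dsl_dir.dd_head_dataset_obj
    for name, child_objid in mos.read_zap(dsl_dir.dd_child_dir_zapobj).items():
        if not name.startswith("$"):  # skip internal entries like $MOS, $ORIGIN
            yield from walk_datasets(mos, child_objid, path + "/" + name)
```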
Let me test it on the pool of my server first. As for the
@hiliev: Just want to note that I have now succeeded in retrieving my files. I want to thank you for the py-zfs-rescue repo and the hints you gave. The unsorted patches are on https://github.com/eiselekd/dumpbin-py-zfs-rescue, maybe someone will find them useful in the future.
@hiliev: I also pushed https://github.com/eiselekd/dumpbin-py-zfs-rescue/blob/master/zfs/sa.py#L59 and https://github.com/eiselekd/dumpbin-py-zfs-rescue/blob/master/zfs/dnode.py#L194 which implement a more complete handling of system attributes and bonus type 0x2c. With it, symlinks are also handled. Are you interested in getting a PR?
Sorry, I'm currently moving to a different country and my FreeNAS system is offline in a locker room, so I'm very slow at testing and accepting PRs. I'll be able to work on it again in about a month.
@hiliev: ok, I understand. If you have time, let me know and I will supply PRs. There is one error that you might be interested in: https://github.com/eiselekd/dumpbin-py-zfs-rescue/blob/d21f4c28acee0d26ab3ba227fc7d8b03881dffd8/zfs/blocktree.py#L85 In the original repo the levelcache is a flat array that is shared between levels. I changed it to be a tree instead.
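The difference can be illustrated with a small cache keyed by (level, blkid). This is a sketch of the idea, not the repo's actual code: a flat per-level slot can be evicted while a sibling subtree at the same level still needs it, whereas a keyed cache keeps every branch independent:

```python
class BlockCache:
    """Cache for indirect-block traversal, keyed by (level, blkid).

    A flat per-level slot breaks when two subtrees at the same level are
    visited interleaved: the second visit overwrites the first's block.
    Keying on (level, blkid) keeps each branch independently cached.
    """
    def __init__(self):
        self._cache = {}

    def get(self, level, blkid, loader):
        # loader(level, blkid) reads the block from disk on a miss
        key = (level, blkid)
        if key not in self._cache:
            self._cache[key] = loader(level, blkid)
        return self._cache[key]
```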
Hi again, if you are interested and have time now, I can supply patches. (As py-zfs-rescue enabled me to restore my data, I thought I should contribute back.) Tell me which area you want to address first.
Hi @eiselekd, I'm glad my little project helped you in recovering your data. I had great plans for it and still have a backlog of todos geared towards making it more user-friendly and in particular turning it into a visual ZFS debugger and explorer. Unfortunately, working at a startup company in a completely different field leaves me with zero spare time for this project. If you are willing to take over the CLI branch and develop it further, please feel free to do so. The areas that need attention are perhaps adding a proper command-line interface, pool scrub functionality, and support for raidz with higher parity (e.g., raidz2). If you wish, I can also make you a project collaborator, so you don't have to fork a separate version.
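For the command-line interface item, a possible argparse skeleton; every option name here is invented for illustration, not an existing py-zfs-rescue interface:

```python
import argparse

def build_parser():
    """A hypothetical CLI skeleton for py-zfs-rescue.

    All option names are assumptions for illustration only.
    """
    p = argparse.ArgumentParser(prog="py-zfs-rescue")
    p.add_argument("--label", metavar="DEV",
                   help="device or image to read the vdev label from")
    p.add_argument("--dataset",
                   help="restrict the rescue to a single dataset")
    p.add_argument("--output", default="rescued",
                   help="directory to write rescued files into")
    p.add_argument("--scrub", action="store_true",
                   help="verify block checksums instead of extracting")
    return p
```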
@hiliev You can add me as a collaborator and maybe give me access to a special branch that I can hack around with. I could transfer the improvements from https://github.com/eiselekd/dumpbin-py-zfs-rescue back to your repo:
I sent you an invitation to become a collaborator. It gives you push access and you should be able to create branches on your own. When I find the time, I'll hack on the GUI stuff in a separate branch too.
Accepted, thanks.
@hiliev: Added pull request #12 which adds (from the list above):
Do I have to accept the pull request explicitly, or do your commit rights allow you to merge it yourself?
I didn't try to push it myself. Also, even though I tested the code on Linux (subfolder test/Makefile), I didn't test it with disks from FreeNAS. I have been setting up a home NAS recently (with FreeNAS in a KVM and a SATA controller card passed through), however I find it a bit hard to work with because /usr/ports is disabled and I cannot work FreeBSD-style with it except within jails, which I'm not familiar with. I didn't find any description of how to re-enable /usr/ports in FreeNAS. I could run plain FreeBSD instead, but then I'm not sure what the delta to FreeNAS is.
FreeNAS is based on FreeBSD-STABLE kernels and the ZFS code should be the same as in vanilla FreeBSD. My FreeNAS box is back online and I'll be able to test the code.
I can also try it out on a FreeBSD box over the weekend.
I tested on FreeBSD 11.2 with mdconfig and
Conclusion from my side: OK to push, but create an issue to implement child-dataset support for BSD pools.
That's strange. The ZFS implementation in FreeBSD should be the one closest to the reference implementation in OpenSolaris, as it borrows most of the code directly. Perhaps Linux is the one that handles child datasets differently. This means there are ZFS flavours, and the code should somehow be able to detect the flavour or receive it, e.g., via a command-line argument. In any case, I'm fine with merging and creating a separate issue for ZFS on FreeBSD.
Attribute tables and embedded data are handled.
In order to be able to use py-zfs-rescue on pools created by modern OSes, the following two enhancements are needed: support for the ZPL attribute tables that replace znode_phys_t (bonus data type 0x2c), and support for embedded data.
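The distinction between old and new pools comes down to the dnode's bonus type. A trivial dispatch illustrates it; the type codes are the DMU object-type values from `dmu.h`, while the function name and the returned labels are mine:

```python
DMU_OT_ZNODE = 0x11  # legacy fixed-layout znode_phys_t bonus
DMU_OT_SA = 0x2C     # system-attribute bonus used by modern pools

def classify_bonus(bonustype):
    """Return which decoder a file dnode's bonus buffer needs.

    Old pools store file metadata as a fixed znode_phys_t (type 0x11);
    pools created by modern OSes use the SA mechanism (type 0x2c), where
    the layout must be resolved via the SA registry/layout objects.
    """
    if bonustype == DMU_OT_ZNODE:
        return "znode_phys_t"
    if bonustype == DMU_OT_SA:
        return "system_attributes"
    raise ValueError("unexpected bonus type %#x" % bonustype)
```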