Memory leak in zfs when importing a pool with a txg specified to use for rollback #5389
@behlendorf should I image the drives for backup purposes prior to trying to import with 0.7.X, or am I good if using |
@fling- there's no need to image the drives before trying 0.7.0. Just make sure you don't run |
@fling- any update?
@behlendorf still reproducible.
@behlendorf also the pool becomes importable after wiping all the txgs newer than 13068024, using zfs_revert-0.1.py to destroy the uberblocks containing those txgs. Thanks to @jshoward.
Import also works when I revert to even older txgs. I can snapshot and send old deleted datasets without any issues. For some txgs I get corrupted data and the pool refuses to import, but in general it works.
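(For context on the zfs_revert approach mentioned above: such tools work by scanning the vdev labels for uberblocks and zeroing the slots whose txg exceeds a cutoff, so that import falls back to an older txg. Below is a minimal, hypothetical sketch of the scanning half only. It assumes the default label layout with a 1 KiB uberblock slot size, i.e. ashift <= 10, and a little-endian pool; the function name and structure are illustrative, not taken from zfs_revert-0.1.py itself.)

```python
import struct

UB_MAGIC = 0x00bab10c        # UBERBLOCK_MAGIC ("oo-ba-bloc")
LABEL_SIZE = 256 * 1024      # each of the four vdev labels is 256 KiB
UB_RING_OFFSET = 128 * 1024  # the uberblock ring is the back half of a label
UB_SLOT = 1024               # slot size, assuming ashift <= 10

def scan_uberblocks(dev_path, label_offset=0):
    """Return (slot, txg, timestamp) for every valid uberblock in one label."""
    with open(dev_path, "rb") as f:
        f.seek(label_offset + UB_RING_OFFSET)
        ring = f.read(LABEL_SIZE - UB_RING_OFFSET)
    results = []
    for slot in range(len(ring) // UB_SLOT):
        ub = ring[slot * UB_SLOT:(slot + 1) * UB_SLOT]
        # The magic is stored in the pool's native byte order; check both.
        for endian in ("<", ">"):
            magic, version, txg, guid_sum, timestamp = struct.unpack_from(
                endian + "5Q", ub)
            if magic == UB_MAGIC:
                results.append((slot, txg, timestamp))
                break
    return results
```

Listing the results sorted by txg descending would show the candidate values for `zpool import -T`; a revert tool would then zero every slot whose txg is above the chosen cutoff (on all four labels of every vdev) before attempting the import.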
That's good. So the data on disk is almost certainly intact; we're just requiring too much memory as part of the import. Were the most recent results you reported using the 0.7.1 tag?
@behlendorf this one:
I used the zfs_revert script on a qcow2 snapshot of the pool in qemu. This is the only way I found to get back to the older txgs.
@fling- This issue caught my eye in light of the recent changes to the pool import code (6cb8e53 etc.). As pointed out in that commit's log, the import process now allows much more flexibility when rewinding pools and, along with related commits, can provide better error messages when an import fails. Do you still have this pool? If so, could you try a recent master to see whether the problem still occurs.
Closing. The improved import code (6cb8e53) should handle this better; if there are still problems with specific pools, let's open a new issue.
I have a healthy and importable pool.
But zpool import hangs when I'm trying to import with a txg:
zpool import -o readonly=on -R /mnt/gentoo -T (some-recent-txg-number) tmp
With at least one of the txgs, ZFS starts allocating RAM and stops at ~16 GB.
With all other txgs tested, it never stops allocating, consumes at least 60 GB, and memory usage keeps growing.
The box hangs in both cases; the import never returns.
The issue is reproducible on both FreeBSD and illumos, and the leaking behavior is identical there.
The last tested version is 0.6.5.8.