Cannot destroy snapshots on full drive: "internal error: Channel number out of range" #9849

Open
fake-name opened this issue Jan 16, 2020 · 19 comments · May be fixed by #9895
Labels: Type: Defect (Incorrect behavior, e.g. crash, hang), Type: Regression (Indicates a functional regression)

Comments

@fake-name

fake-name commented Jan 16, 2020

System information

Type Version/Name
Distribution Name Proxmox
Distribution Version 6.1
Linux Kernel Linux longmox 5.3.13-1-pve #1 SMP PVE 5.3.13-1 (Thu, 05 Dec 2019 07:18:14 +0100) x86_64 GNU/Linux
ZFS Version 0.8.2-pve2
SPL Version 0.8.2-pve2

Describe the problem you're observing

So I have a zpool that's degraded. I then tried to create snapshots so I could do a backup, snapshotting the whole pool with sudo zfs snapshot -r rpool@dump.

I now have a pool that's wedged, and I cannot do anything.

I'm not sure how to reproduce this, and I currently have a server that's basically totally hosed.

NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   464G   237G   227G        -         -    67%    51%  1.00x  DEGRADED  -
root@longmox:/# sudo zfs list
NAME                       USED  AVAIL     REFER  MOUNTPOINT
rpool                      501G     0B      168K  /rpool
rpool/ROOT                13.3G     0B       96K  /rpool/ROOT
rpool/ROOT/pve-1          13.3G     0B     13.3G  /
rpool/base-127-disk-0     18.0G  15.5G     2.57G  -
rpool/basevol-119-disk-0   765M     0B      765M  /rpool/basevol-119-disk-0
rpool/data                  96K     0B       96K  /rpool/data
rpool/subvol-101-disk-0   61.5G     0B     61.5G  /rpool/subvol-101-disk-0
rpool/subvol-102-disk-0    921M     0B      921M  /rpool/subvol-102-disk-0
rpool/subvol-108-disk-0   7.84G     0B     7.84G  /rpool/subvol-108-disk-0
rpool/subvol-120-disk-0   2.12G     0B     2.12G  /rpool/subvol-120-disk-0
rpool/subvol-121-disk-0   3.53G     0B     3.53G  /rpool/subvol-121-disk-0
rpool/subvol-122-disk-0   1.47G     0B     1.47G  /rpool/subvol-122-disk-0
rpool/subvol-123-disk-0   1.80G     0B     1.80G  /rpool/subvol-123-disk-0
rpool/subvol-124-disk-0   1.66G     0B     1.66G  /rpool/subvol-124-disk-0
rpool/subvol-125-disk-0   2.07G     0B     2.07G  /rpool/subvol-125-disk-0
rpool/subvol-126-disk-0   1.43G     0B     1.43G  /rpool/subvol-126-disk-0
rpool/subvol-130-disk-0   1.53G     0B     1.53G  /rpool/subvol-130-disk-0
rpool/subvol-131-disk-0   1.12G     0B     1.12G  /rpool/subvol-131-disk-0
rpool/subvol-135-disk-0    919M     0B      919M  /rpool/subvol-135-disk-0
rpool/subvol-138-disk-0   3.10G     0B     3.10G  /rpool/subvol-138-disk-0
rpool/subvol-141-disk-0    832M     0B      832M  /rpool/subvol-141-disk-0
rpool/vm-103-disk-0       56.1G  33.0G     23.1G  -
rpool/vm-104-disk-0       17.0G  10.3G     6.71G  -
rpool/vm-104-disk-1       6.42G     0B     6.42G  none
rpool/vm-104-disk-2        137G   102G     34.1G  -
rpool/vm-105-disk-0       27.0G  15.4G     11.5G  -
rpool/vm-106-disk-0       25.4G  15.5G     9.94G  -
rpool/vm-128-disk-0       18.4G  15.5G     2.90G  -
rpool/vm-129-disk-0       22.0G  15.5G     6.55G  -
rpool/vm-133-disk-0       46.4G  25.8G     20.6G  -
rpool/vm-134-disk-0       21.3G  15.5G     5.84G  -
root@longmox:/# sudo zfs list -t snapshot
NAME                                USED  AVAIL     REFER  MOUNTPOINT
rpool@dump                            0B      -      168K  -
rpool/ROOT@dump                       0B      -       96K  -
rpool/ROOT/pve-1@dump                 0B      -     13.3G  -
rpool/base-127-disk-0@__base__        8K      -     2.57G  -
rpool/base-127-disk-0@dump            0B      -     2.57G  -
rpool/basevol-119-disk-0@__base__     8K      -      765M  -
rpool/basevol-119-disk-0@dump         0B      -      765M  -
rpool/data@dump                       0B      -       96K  -
rpool/subvol-101-disk-0@dump          0B      -     61.5G  -
rpool/subvol-102-disk-0@dump          0B      -      921M  -
rpool/subvol-108-disk-0@dump          0B      -     7.84G  -
rpool/subvol-120-disk-0@dump          0B      -     2.12G  -
rpool/subvol-121-disk-0@dump          0B      -     3.53G  -
rpool/subvol-122-disk-0@dump          0B      -     1.47G  -
rpool/subvol-123-disk-0@dump          0B      -     1.80G  -
rpool/subvol-124-disk-0@dump          0B      -     1.66G  -
rpool/subvol-125-disk-0@dump          0B      -     2.07G  -
rpool/subvol-126-disk-0@dump          0B      -     1.43G  -
rpool/subvol-130-disk-0@dump          0B      -     1.53G  -
rpool/subvol-131-disk-0@dump          0B      -     1.12G  -
rpool/subvol-135-disk-0@dump          0B      -      919M  -
rpool/subvol-138-disk-0@dump          0B      -     3.10G  -
rpool/subvol-141-disk-0@dump          0B      -      832M  -
rpool/vm-103-disk-0@dump              0B      -     23.1G  -
rpool/vm-104-disk-0@dump             11M      -     6.71G  -
rpool/vm-104-disk-1@snap              0B      -     6.42G  -
rpool/vm-104-disk-1@dump              0B      -     6.42G  -
rpool/vm-104-disk-2@dump            487M      -     33.6G  -
rpool/vm-105-disk-0@dump           26.5M      -     11.5G  -
rpool/vm-106-disk-0@dump           7.99M      -     9.94G  -
rpool/vm-128-disk-0@dump              0B      -     2.90G  -
rpool/vm-129-disk-0@dump              0B      -     6.55G  -
rpool/vm-133-disk-0@dump           10.4M      -     20.6G  -
rpool/vm-134-disk-0@dump              0B      -     5.84G  -

I cannot do anything.

root@longmox:/# zfs destroy rpool/vm-104-disk-2@dump
internal error: Channel number out of range
Aborted
root@longmox:/# zfs destroy rpool/data@dump
internal error: Channel number out of range
Aborted
root@longmox:~# rm vzdump-qemu-104-2019_04_22-23_31_57.vma
rm: cannot remove 'vzdump-qemu-104-2019_04_22-23_31_57.vma': No space left on device
root@longmox:~# echo > vzdump-qemu-104-2019_04_22-23_31_57.vma
bash: vzdump-qemu-104-2019_04_22-23_31_57.vma: No space left on device
root@longmox:~# truncate --size=0 vzdump-qemu-104-2019_04_22-23_31_57.vma
truncate: failed to truncate 'vzdump-qemu-104-2019_04_22-23_31_57.vma' at 0 bytes: No space left on device

(vzdump-qemu-104-2019_04_22-23_31_57.vma is a large file I was hoping to delete to free up space.)

I'm not sure what's going on. The underlying zpool apparently has only 51% utilization, yet the filesystems all report 0B available. Additionally, there are no quotas that seem to be causing this:

root@longmox:/# zfs get quota
NAME                               PROPERTY  VALUE  SOURCE
rpool                              quota     none   default
rpool@dump                         quota     -      -
rpool/ROOT                         quota     none   default
rpool/ROOT@dump                    quota     -      -
rpool/ROOT/pve-1                   quota     none   default
rpool/ROOT/pve-1@dump              quota     -      -
rpool/base-127-disk-0              quota     -      -
rpool/base-127-disk-0@__base__     quota     -      -
rpool/base-127-disk-0@dump         quota     -      -
rpool/basevol-119-disk-0           quota     none   default
rpool/basevol-119-disk-0@__base__  quota     -      -
rpool/basevol-119-disk-0@dump      quota     -      -
rpool/data                         quota     none   default
rpool/data@dump                    quota     -      -
rpool/subvol-101-disk-0            quota     none   default
rpool/subvol-101-disk-0@dump       quota     -      -
rpool/subvol-102-disk-0            quota     none   default
rpool/subvol-102-disk-0@dump       quota     -      -
rpool/subvol-108-disk-0            quota     none   default
rpool/subvol-108-disk-0@dump       quota     -      -
rpool/subvol-120-disk-0            quota     none   default
rpool/subvol-120-disk-0@dump       quota     -      -
rpool/subvol-121-disk-0            quota     none   default
rpool/subvol-121-disk-0@dump       quota     -      -
rpool/subvol-122-disk-0            quota     none   default
rpool/subvol-122-disk-0@dump       quota     -      -
rpool/subvol-123-disk-0            quota     none   default
rpool/subvol-123-disk-0@dump       quota     -      -
rpool/subvol-124-disk-0            quota     none   default
rpool/subvol-124-disk-0@dump       quota     -      -
rpool/subvol-125-disk-0            quota     none   default
rpool/subvol-125-disk-0@dump       quota     -      -
rpool/subvol-126-disk-0            quota     none   default
rpool/subvol-126-disk-0@dump       quota     -      -
rpool/subvol-130-disk-0            quota     none   default
rpool/subvol-130-disk-0@dump       quota     -      -
rpool/subvol-131-disk-0            quota     none   default
rpool/subvol-131-disk-0@dump       quota     -      -
rpool/subvol-135-disk-0            quota     none   default
rpool/subvol-135-disk-0@dump       quota     -      -
rpool/subvol-138-disk-0            quota     none   default
rpool/subvol-138-disk-0@dump       quota     -      -
rpool/subvol-141-disk-0            quota     none   default
rpool/subvol-141-disk-0@dump       quota     -      -
rpool/vm-103-disk-0                quota     -      -
rpool/vm-103-disk-0@dump           quota     -      -
rpool/vm-104-disk-0                quota     -      -
rpool/vm-104-disk-0@dump           quota     -      -
rpool/vm-104-disk-1                quota     none   default
rpool/vm-104-disk-1@snap           quota     -      -
rpool/vm-104-disk-1@dump           quota     -      -
rpool/vm-104-disk-2                quota     -      -
rpool/vm-104-disk-2@dump           quota     -      -
rpool/vm-105-disk-0                quota     -      -
rpool/vm-105-disk-0@dump           quota     -      -
rpool/vm-106-disk-0                quota     -      -
rpool/vm-106-disk-0@dump           quota     -      -
rpool/vm-128-disk-0                quota     -      -
rpool/vm-128-disk-0@dump           quota     -      -
rpool/vm-129-disk-0                quota     -      -
rpool/vm-129-disk-0@dump           quota     -      -
rpool/vm-133-disk-0                quota     -      -
rpool/vm-133-disk-0@dump           quota     -      -
rpool/vm-134-disk-0                quota     -      -
rpool/vm-134-disk-0@dump           quota     -      -
root@longmox:/# zfs get refquota
NAME                               PROPERTY  VALUE     SOURCE
rpool                              refquota  none      default
rpool@dump                         refquota  -         -
rpool/ROOT                         refquota  none      default
rpool/ROOT@dump                    refquota  -         -
rpool/ROOT/pve-1                   refquota  none      default
rpool/ROOT/pve-1@dump              refquota  -         -
rpool/base-127-disk-0              refquota  -         -
rpool/base-127-disk-0@__base__     refquota  -         -
rpool/base-127-disk-0@dump         refquota  -         -
rpool/basevol-119-disk-0           refquota  10G       local
rpool/basevol-119-disk-0@__base__  refquota  -         -
rpool/basevol-119-disk-0@dump      refquota  -         -
rpool/data                         refquota  none      default
rpool/data@dump                    refquota  -         -
rpool/subvol-101-disk-0            refquota  100G      local
rpool/subvol-101-disk-0@dump       refquota  -         -
rpool/subvol-102-disk-0            refquota  8G        local
rpool/subvol-102-disk-0@dump       refquota  -         -
rpool/subvol-108-disk-0            refquota  50G       local
rpool/subvol-108-disk-0@dump       refquota  -         -
rpool/subvol-120-disk-0            refquota  10G       local
rpool/subvol-120-disk-0@dump       refquota  -         -
rpool/subvol-121-disk-0            refquota  10G       local
rpool/subvol-121-disk-0@dump       refquota  -         -
rpool/subvol-122-disk-0            refquota  10G       local
rpool/subvol-122-disk-0@dump       refquota  -         -
rpool/subvol-123-disk-0            refquota  8G        local
rpool/subvol-123-disk-0@dump       refquota  -         -
rpool/subvol-124-disk-0            refquota  8G        local
rpool/subvol-124-disk-0@dump       refquota  -         -
rpool/subvol-125-disk-0            refquota  10G       local
rpool/subvol-125-disk-0@dump       refquota  -         -
rpool/subvol-126-disk-0            refquota  10G       local
rpool/subvol-126-disk-0@dump       refquota  -         -
rpool/subvol-130-disk-0            refquota  30G       local
rpool/subvol-130-disk-0@dump       refquota  -         -
rpool/subvol-131-disk-0            refquota  10G       local
rpool/subvol-131-disk-0@dump       refquota  -         -
rpool/subvol-135-disk-0            refquota  10G       local
rpool/subvol-135-disk-0@dump       refquota  -         -
rpool/subvol-138-disk-0            refquota  50G       local
rpool/subvol-138-disk-0@dump       refquota  -         -
rpool/subvol-141-disk-0            refquota  15G       local
rpool/subvol-141-disk-0@dump       refquota  -         -
rpool/vm-103-disk-0                refquota  -         -
rpool/vm-103-disk-0@dump           refquota  -         -
rpool/vm-104-disk-0                refquota  -         -
rpool/vm-104-disk-0@dump           refquota  -         -
rpool/vm-104-disk-1                refquota  none      default
rpool/vm-104-disk-1@snap           refquota  -         -
rpool/vm-104-disk-1@dump           refquota  -         -
rpool/vm-104-disk-2                refquota  -         -
rpool/vm-104-disk-2@dump           refquota  -         -
rpool/vm-105-disk-0                refquota  -         -
rpool/vm-105-disk-0@dump           refquota  -         -
rpool/vm-106-disk-0                refquota  -         -
rpool/vm-106-disk-0@dump           refquota  -         -
rpool/vm-128-disk-0                refquota  -         -
rpool/vm-128-disk-0@dump           refquota  -         -
rpool/vm-129-disk-0                refquota  -         -
rpool/vm-129-disk-0@dump           refquota  -         -
rpool/vm-133-disk-0                refquota  -         -
rpool/vm-133-disk-0@dump           refquota  -         -
rpool/vm-134-disk-0                refquota  -         -
rpool/vm-134-disk-0@dump           refquota  -         -

Additionally, I have no idea where internal error: Channel number out of range is coming from. It doesn't appear to exist in the current source.

@fake-name
Author

Ok, I think the zpool list CAP discrepancy is because it's the total possible space, rather than the actual space. Since this should be a mirror, it's therefore 2x the available space.
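
(For reference, a quick way to compare the two views side by side; these are standard zpool/zfs list options, nothing specific to this system:)

zpool list -o name,size,allocated,free rpool    # pool-level (raw) accounting
zfs list -o name,used,available rpool           # dataset-level (usable) accounting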

@fake-name
Author

fake-name commented Jan 16, 2020

Ok, interesting. This seems to be a regression!

I pulled the drive and stuck it in another machine running the same version (0.8.2-pve2), and had the same issue.

I then put it on an older Proxmox (v5.4) machine:

root@pmox:~# uname -a
Linux pmox 4.15.18-17-pve #1 SMP PVE 4.15.18-43 (Tue, 25 Jun 2019 17:59:49 +0200) x86_64 GNU/Linux
root@pmox:~# modinfo zfs | grep -iw version
version:        0.7.13-pve1~bpo1
root@pmox:~# modinfo spl | grep -iw version
version:        0.7.13-1

and I can delete the snapshots without issue!

@behlendorf added the Type: Defect (Incorrect behavior, e.g. crash, hang) and Type: Regression (Indicates a functional regression) labels Jan 16, 2020
@behlendorf
Contributor

This issue appears to have been caused by your pool being entirely out of free space. Normally this isn't a problem, because a percentage of total capacity is reserved to ensure administrative commands like zfs snapshot and zfs destroy do not fail.

What I suspect happened is that the recursive snapshot was created while the pool was already at, or near, 100% reported capacity. This operation, and possibly others, consumed a significant amount of the reserve space. If more than three quarters of the reserve space is used, commands such as zfs destroy can fail.

The latest versions of ZFS use channel programs internally to remove a large number of datasets in one command more quickly. This is why you saw the internal "Channel number out of range" error when the channel program unexpectedly failed. Trying an older version of ZFS was a good idea: internally things work a little differently there, so you managed not to exceed this limit.

You might also have been able to remove the snapshot by setting the spa_slop_shift module option from 5 to 6. This would have allowed the zfs destroy to use more of that reserved space.
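
For reference, a minimal sketch of how that might look (assuming the parameter is exposed under /sys/module/zfs/parameters, as on most Linux installs; the snapshot name is taken from the report above):

cat /sys/module/zfs/parameters/spa_slop_shift    # check the current value (default is 5)
echo 6 > /sys/module/zfs/parameters/spa_slop_shift
zfs destroy rpool/vm-104-disk-2@dump             # retry the failing destroy
echo 5 > /sys/module/zfs/parameters/spa_slop_shift   # restore the default afterwards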

I'd suggest trying not to run your pool at more than 95% capacity if possible. That will avoid this kind of issue and improve performance.
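
One common way to keep that headroom is to park a small reservation that can be released in an emergency (a sketch only; the dataset name and size here are made up):

zfs create -o refreservation=24G -o canmount=off rpool/headroom   # hold back ~5% of the pool
zfs set refreservation=none rpool/headroom                        # release it if the pool ever fills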

Regardless, I've gone ahead and tagged this as a defect, since internally ZFS should always keep a large enough reserve of free space to avoid this. We may want to look at having zfs destroy fall back to not using a channel program when free space is effectively exhausted.

@fake-name
Author

I'd suggest trying not to run your pool at more than 95% capacity if possible. That will avoid this kind of issue and improve performance.

Without the snapshots, the pool is not full:

NAME                       USED  AVAIL     REFER  MOUNTPOINT
rpool                      370G  79.9G      168K  /rpool
rpool/ROOT                5.16G  79.9G       96K  /rpool/ROOT
rpool/ROOT/pve-1          5.16G  79.9G     5.16G  /
rpool/base-127-disk-0     15.5G  92.8G     2.57G  -
rpool/basevol-119-disk-0   765M  9.25G      765M  /rpool/basevol-119-disk-0
rpool/data                  96K  79.9G       96K  /rpool/data
rpool/subvol-101-disk-0   61.5G  38.5G     61.5G  /rpool/subvol-101-disk-0
rpool/subvol-102-disk-0    921M  7.10G      921M  /rpool/subvol-102-disk-0
rpool/subvol-108-disk-0   7.90G  42.1G     7.90G  /rpool/subvol-108-disk-0
rpool/subvol-120-disk-0   2.09G  7.91G     2.09G  /rpool/subvol-120-disk-0
rpool/subvol-121-disk-0   3.53G  6.47G     3.53G  /rpool/subvol-121-disk-0
rpool/subvol-122-disk-0   1.48G  8.52G     1.48G  /rpool/subvol-122-disk-0
rpool/subvol-123-disk-0   1.80G  6.20G     1.80G  /rpool/subvol-123-disk-0
rpool/subvol-124-disk-0   1.66G  6.34G     1.66G  /rpool/subvol-124-disk-0
rpool/subvol-125-disk-0   2.08G  7.92G     2.08G  /rpool/subvol-125-disk-0
rpool/subvol-126-disk-0   1.43G  8.57G     1.43G  /rpool/subvol-126-disk-0
rpool/subvol-130-disk-0   1.53G  28.5G     1.53G  /rpool/subvol-130-disk-0
rpool/subvol-131-disk-0   1.12G  8.88G     1.12G  /rpool/subvol-131-disk-0
rpool/subvol-135-disk-0    919M  9.10G      919M  /rpool/subvol-135-disk-0
rpool/subvol-138-disk-0   3.15G  46.8G     3.15G  /rpool/subvol-138-disk-0
rpool/subvol-141-disk-0    833M  14.2G      833M  /rpool/subvol-141-disk-0
rpool/vm-103-disk-0       33.0G  89.8G     23.1G  -
rpool/vm-104-disk-0       10.3G  83.5G     6.70G  -
rpool/vm-104-disk-1       6.42G  79.9G     6.42G  none
rpool/vm-104-disk-2        103G   147G     35.6G  -
rpool/vm-105-disk-0       15.5G  83.8G     11.5G  -
rpool/vm-106-disk-0       15.5G  85.4G     9.94G  -
rpool/vm-128-disk-0       15.5G  92.5G     2.90G  -
rpool/vm-129-disk-0       15.5G  88.8G     6.55G  -
rpool/vm-133-disk-0       25.8G  85.1G     20.6G  -
rpool/vm-134-disk-0       15.5G  89.5G     5.84G  -

After clearing out the snapshots there's ~80GB free, so the pool is about 83% full.

@pstch

pstch commented Jan 21, 2020

I'm also observing this problem on a pool that was less than 95% full before making the snapshots.

When the pool uses encryption, falling back to 0.7.x is not an option, so this bug can permanently wedge a ZFS pool (zfs destroy no longer works, and the refreservation cannot be cleared). The only way I found to recover data from such a pool is to send the volumes one by one (since this also breaks replication streams).

EDIT: Increasing spa_slop_shift to 6 didn't change anything. My pool is 230G, has only 118G allocated, and I'm using ZFS 0.8.2.
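
For anyone in the same situation, a rough sketch of the one-by-one approach (pool, dataset, snapshot, and host names here are placeholders):

# full recursive replication streams fail, so send each dataset's snapshot individually
zfs send tank/vm-100-disk-0@snap | ssh backuphost zfs recv backup/vm-100-disk-0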

@loli10K linked a pull request Jan 26, 2020 that will close this issue
@zachariasmaladroit

zachariasmaladroit commented Jan 27, 2020

I'm totally going to run into this situation soon-ish; the pool was 95-97% full recently (3 TB).

(spa_slop_shift is already at 6)

thanks to @loli10K for a suggested fix 👍

@kneutron

kneutron commented Apr 6, 2020

I'm totally going to run into this situation soon-ish, pool was 95-97% full recently (3 TB)

--If you're using a 3TB drive that's nearly full, I suggest replacing it with a 4TB or larger.

zpool set autoexpand=on poolname
zpool replace poolname fulldisk newdisk

--I haven't tried this with a single-disk zpool, but it should be possible to simulate with a file-backed zpool.
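
A rough sketch of how that simulation might look (file paths and sizes here are illustrative only):

truncate -s 1G /tmp/small.img
truncate -s 2G /tmp/big.img
zpool create testpool /tmp/small.img
zpool set autoexpand=on testpool
# ...fill the pool, then swap in the larger backing file...
zpool replace testpool /tmp/small.img /tmp/big.img
zpool list testpool    # FREE should grow once the resilver completes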

@kneutron

kneutron commented Apr 6, 2020

zpool replace poolname fulldisk newdisk

Update: I just verified this works OK with a single file-based pool on OSX. You should see more free space immediately when the resilver finishes.

NOTE: I haven't tried doing this on a live rpool (I don't use them), so it might be best to power off the system and try this from a rescue environment...

@ggzengel
Contributor

ggzengel commented Jan 4, 2021

I got the same problem, but I don't know where the space went. There were 5TB left, and within an hour they were gone.
I had just activated zfs-auto-snapshot, and a vzdump backup to another host was running.

The pool is on LVM, so I was able to grow each vdev from 3T to 3.1T, but the pool still couldn't get more space.

The VMs were still working; I could shut them down, and vzdump still made a backup.
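
For context, growing an LV-backed vdev usually looks something like this (a sketch only; LV and vdev names are taken from the status output below, and the target size is illustrative):

lvextend -L 3.1T /dev/VG1/ZFS01     # repeated for ZFS02..ZFS08
zpool online -e zpool1 VG1-ZFS01    # ask ZFS to expand onto the new LV space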

# modinfo zfs
filename:       /lib/modules/5.4.78-2-pve/zfs/zfs.ko
version:        0.8.5-pve1

# lsb_release -a
No LSB modules are available.
Distributor ID:	Debian
Description:	Debian GNU/Linux 10 (buster)
Release:	10
Codename:	buster
# zfs set primarycache=none zpool1/proxmox/local/vm-301-disk-1
cannot set property for 'zpool1/proxmox/local/vm-301-disk-1': out of space

# zpool set autoexpand=on zpool1
cannot set property for 'zpool1': out of space

# zfs destroy zpool1/proxmox/local/vm-201-disk-0@zfs-auto-snap_hourly-2021-01-03-1917
internal error: Channel number out of range
Aborted

# zpool status -i
  pool: zpool1
 state: ONLINE
  scan: none requested
config:

	NAME               STATE     READ WRITE CKSUM
	zpool1             ONLINE       0     0     0
	  raidz3-0         ONLINE       0     0     0
	    VG1-ZFS01      ONLINE       0     0     0  (uninitialized) # I think this comes from the online resize
	    VG1-ZFS02      ONLINE       0     0     0  (uninitialized)
	    VG1-ZFS03      ONLINE       0     0     0  (uninitialized)
	    VG1-ZFS04      ONLINE       0     0     0  (uninitialized)
	    VG1-ZFS05      ONLINE       0     0     0  (uninitialized)
	    VG1-ZFS06      ONLINE       0     0     0  (uninitialized)
	    VG1-ZFS07      ONLINE       0     0     0  (uninitialized)
	    VG1-ZFS08      ONLINE       0     0     0  (uninitialized)
	cache
	  VG1-ZFS_CACHE01  ONLINE       0     0     0  (uninitialized)
	  VG1-ZFS_CACHE02  ONLINE       0     0     0  (uninitialized)

errors: No known data errors

# zpool get all
NAME    PROPERTY                       VALUE                          SOURCE
zpool1  size                           24.8T                          -
zpool1  capacity                       41%                            -
zpool1  altroot                        -                              default
zpool1  health                         ONLINE                         -
zpool1  guid                           16430499426189636435           -
zpool1  version                        -                              default
zpool1  bootfs                         -                              default
zpool1  delegation                     on                             default
zpool1  autoreplace                    off                            default
zpool1  cachefile                      -                              default
zpool1  failmode                       wait                           default
zpool1  listsnapshots                  off                            default
zpool1  autoexpand                     off                            default
zpool1  dedupditto                     0                              default
zpool1  dedupratio                     1.00x                          -
zpool1  free                           14.4T                          -
zpool1  allocated                      10.4T                          -
zpool1  readonly                       off                            -
zpool1  ashift                         12                             local
zpool1  comment                        -                              default
zpool1  expandsize                     -                              -
zpool1  freeing                        0                              -
zpool1  fragmentation                  2%                             -
zpool1  leaked                         0                              -
zpool1  multihost                      off                            default
zpool1  checkpoint                     -                              -
zpool1  load_guid                      16849743855265002560           -
zpool1  autotrim                       off                            default
zpool1  feature@async_destroy          enabled                        local
zpool1  feature@empty_bpobj            active                         local
zpool1  feature@lz4_compress           active                         local
zpool1  feature@multi_vdev_crash_dump  enabled                        local
zpool1  feature@spacemap_histogram     active                         local
zpool1  feature@enabled_txg            active                         local
zpool1  feature@hole_birth             active                         local
zpool1  feature@extensible_dataset     active                         local
zpool1  feature@embedded_data          active                         local
zpool1  feature@bookmarks              enabled                        local
zpool1  feature@filesystem_limits      enabled                        local
zpool1  feature@large_blocks           enabled                        local
zpool1  feature@large_dnode            enabled                        local
zpool1  feature@sha512                 enabled                        local
zpool1  feature@skein                  enabled                        local
zpool1  feature@edonr                  enabled                        local
zpool1  feature@userobj_accounting     active                         local
zpool1  feature@encryption             enabled                        local
zpool1  feature@project_quota          active                         local
zpool1  feature@device_removal         enabled                        local
zpool1  feature@obsolete_counts        enabled                        local
zpool1  feature@zpool_checkpoint       enabled                        local
zpool1  feature@spacemap_v2            active                         local
zpool1  feature@allocation_classes     enabled                        local
zpool1  feature@resilver_defer         enabled                        local
zpool1  feature@bookmark_v2            enabled                        local
# zfs list -o creation,name,volsize,volblocksize,refreservation,used,referenced,usedbydataset,written,usedbysnapshots,logicalreferenced,logicalused,compressratio,refcompressratio,userrefs,dedup,primarycache,secondarycache,recordsize,avail -t all
CREATION               NAME                                                                                     VOLSIZE  VOLBLOCK  REFRESERV   USED     REFER  USEDDS  WRITTEN  USEDSNAP  LREFER  LUSED  RATIO  REFRATIO  USERREFS          DEDUP  PRIMARYCACHE  SECONDARYCACHE  RECSIZE  AVAIL
Sat Dec 19  0:30 2020  zpool1                                                                                         -         -       none  14.4T      237K    237K        0        0B     46K  5.59T  1.17x     1.00x         -            off           all             all     128K     0B
Sun Jan  3 18:30 2021  zpool1@zfs-auto-snap_frequent-2021-01-03-1830                                                  -         -          -     0B      237K       -     237K         -     46K      -  1.00x     1.00x         0              -           all             all        -      -
Sun Jan  3 18:45 2021  zpool1@zfs-auto-snap_frequent-2021-01-03-1845                                                  -         -          -     0B      237K       -        0         -     46K      -  1.00x     1.00x         0              -           all             all        -      -
Sun Jan  3 19:00 2021  zpool1@zfs-auto-snap_frequent-2021-01-03-1900                                                  -         -          -     0B      237K       -        0         -     46K      -  1.00x     1.00x         0              -           all             all        -      -
Sun Jan  3 19:15 2021  zpool1@zfs-auto-snap_frequent-2021-01-03-1915                                                  -         -          -     0B      237K       -        0         -     46K      -  1.00x     1.00x         0              -           all             all        -      -
Sun Jan  3 19:17 2021  zpool1@zfs-auto-snap_hourly-2021-01-03-1917                                                    -         -          -     0B      237K       -        0         -     46K      -  1.00x     1.00x         0              -           all             all        -      -
Sat Dec 19  0:32 2020  zpool1/proxmox                                                                                 -         -       none  14.4T      237K    237K        0        0B     46K  5.59T  1.17x     1.00x         -            off           all             all     128K     0B
Sun Jan  3 19:17 2021  zpool1/proxmox@zfs-auto-snap_hourly-2021-01-03-1917                                            -         -          -     0B      237K       -     237K         -     46K      -  1.00x     1.00x         0              -           all             all        -      -
Sat Dec 19  0:32 2020  zpool1/proxmox/drbd                                                                            -         -       none   219K      219K    219K        0        0B     42K    42K  1.00x     1.00x         -            off           all             all     128K     0B
Sun Jan  3 19:17 2021  zpool1/proxmox/drbd@zfs-auto-snap_hourly-2021-01-03-1917                                       -         -          -     0B      219K       -     219K         -     42K      -  1.00x     1.00x         0              -           all             all        -      -
Sat Dec 19  0:32 2020  zpool1/proxmox/files                                                                           -         -       none  4.11G     4.11G   4.11G        0        0B   4.60G  4.60G  1.11x     1.11x         -            off           all             all     128K     0B
Sun Jan  3 19:17 2021  zpool1/proxmox/files@zfs-auto-snap_hourly-2021-01-03-1917                                      -         -          -     0B     4.11G       -    4.11G         -   4.60G      -  1.11x     1.11x         0              -           all             all        -      -
Sat Dec 19  0:32 2020  zpool1/proxmox/local                                                                           -         -       none  14.4T      219K    219K        0        0B     42K  5.59T  1.17x     1.00x         -            off           all             all     128K     0B
Sun Jan  3 19:17 2021  zpool1/proxmox/local@zfs-auto-snap_hourly-2021-01-03-1917                                      -         -          -     0B      219K       -     219K         -     42K      -  1.00x     1.00x         0              -           all             all        -      -
Mon Dec 28 10:05 2020  zpool1/proxmox/local/vm-201-disk-0                                                           10G       16K      11.6G  13.3G     1.66G   1.66G    54.9M     54.9M   1.59G  1.64G  1.57x     1.58x         -            off           all             all        -  11.5G
Sun Jan  3 19:17 2021  zpool1/proxmox/local/vm-201-disk-0@zfs-auto-snap_hourly-2021-01-03-1917                      10G         -          -  54.9M     1.66G       -    1.66G         -   1.59G      -  1.57x     1.57x         0              -           all             all        -      -
Mon Dec 28 10:05 2020  zpool1/proxmox/local/vm-201-disk-1                                                            1M       16K      3.14M  3.32M      182K    182K        0        0B    160K   160K  3.33x     3.33x         -            off           all             all        -  3.14M
Sun Jan  3 19:17 2021  zpool1/proxmox/local/vm-201-disk-1@zfs-auto-snap_hourly-2021-01-03-1917                       1M         -          -     0B      182K       -     182K         -    160K      -  3.33x     3.33x         0              -           all             all        -      -
Sun Jan  3 18:43 2021  zpool1/proxmox/local/vm-202-disk-0                                                           10G       16K      11.6G  13.0G     2.91G   2.91G    1.98G      502M   2.78G  3.25G  1.57x     1.57x         -            off           all             all        -  9.61G
Sun Jan  3 19:17 2021  zpool1/proxmox/local/vm-202-disk-0@zfs-auto-snap_hourly-2021-01-03-1917                      10G         -          -   502M     1.42G       -    1.42G         -   1.37G      -  1.58x     1.58x         0              -           all             all        -      -
Sun Jan  3  6:08 2021  zpool1/proxmox/local/vm-202-disk-1                                                            1M       16K      3.14M  3.32M      173K    173K     173K      182K    160K   320K  3.47x     3.63x         -            off           all             all        -  2.97M
Sun Jan  3 19:17 2021  zpool1/proxmox/local/vm-202-disk-1@zfs-auto-snap_hourly-2021-01-03-1917                       1M         -          -   182K      182K       -     182K         -    160K      -  3.33x     3.33x         0              -           all             all        -      -
Sun Dec 27 12:50 2020  zpool1/proxmox/local/vm-301-disk-0                                                            1M       16K      3.14M  3.31M      173K    173K        0        0B    160K   160K  3.63x     3.63x         -            off           all             all        -  3.14M
Sun Jan  3 19:17 2021  zpool1/proxmox/local/vm-301-disk-0@zfs-auto-snap_hourly-2021-01-03-1917                       1M         -          -     0B      173K       -     173K         -    160K      -  3.63x     3.63x         0              -           all             all        -      -
Mon Dec 28  0:27 2020  zpool1/proxmox/local/vm-301-disk-1                                                         3.64T       16K      4.21T  6.53T     2.31T   2.31T    6.37G     6.31G   2.32T  2.32T  1.31x     1.31x         -            off           all             all        -  4.21T
Sun Jan  3 19:17 2021  zpool1/proxmox/local/vm-301-disk-1@zfs-auto-snap_hourly-2021-01-03-1917                    3.64T         -          -  6.31G     2.31T       -    2.31T         -   2.32T      -  1.31x     1.31x         0              -           all             all        -      -
Mon Dec 28 19:58 2020  zpool1/proxmox/local/vm-301-disk-2                                                         3.64T       16K      4.21T  7.82T     3.60T   3.60T    2.65G     2.84G   3.25T  3.26T  1.08x     1.08x         -            off           all             all        -  4.21T
Sun Jan  3 19:17 2021  zpool1/proxmox/local/vm-301-disk-2@zfs-auto-snap_hourly-2021-01-03-1917                    3.64T         -          -  2.84G     3.60T       -    3.60T         -   3.25T      -  1.08x     1.08x         0              -           all             all        -      -
Sat Dec 19  0:32 2020  zpool1/store                                                                                   -         -       none  2.42M      256K    256K        0        0B     50K   478K  1.00x     1.00x         -            off      metadata        metadata     128K     0B
Sun Jan  3 18:30 2021  zpool1/store@zfs-auto-snap_frequent-2021-01-03-1830                                            -         -          -     0B      256K       -     256K         -     50K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 18:45 2021  zpool1/store@zfs-auto-snap_frequent-2021-01-03-1845                                            -         -          -     0B      256K       -        0         -     50K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:00 2021  zpool1/store@zfs-auto-snap_frequent-2021-01-03-1900                                            -         -          -     0B      256K       -        0         -     50K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:15 2021  zpool1/store@zfs-auto-snap_frequent-2021-01-03-1915                                            -         -          -     0B      256K       -        0         -     50K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:17 2021  zpool1/store@zfs-auto-snap_hourly-2021-01-03-1917                                              -         -          -     0B      256K       -        0         -     50K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sat Dec 19  0:32 2020  zpool1/store/3PP-backup                                                                        -         -       none   675K      237K    237K        0        0B     46K   130K  1.00x     1.00x         -            off      metadata        metadata     128K     0B
Sun Jan  3 18:30 2021  zpool1/store/3PP-backup@zfs-auto-snap_frequent-2021-01-03-1830                                 -         -          -     0B      237K       -     237K         -     46K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 18:45 2021  zpool1/store/3PP-backup@zfs-auto-snap_frequent-2021-01-03-1845                                 -         -          -     0B      237K       -        0         -     46K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:00 2021  zpool1/store/3PP-backup@zfs-auto-snap_frequent-2021-01-03-1900                                 -         -          -     0B      237K       -        0         -     46K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:15 2021  zpool1/store/3PP-backup@zfs-auto-snap_frequent-2021-01-03-1915                                 -         -          -     0B      237K       -        0         -     46K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:17 2021  zpool1/store/3PP-backup@zfs-auto-snap_hourly-2021-01-03-1917                                   -         -          -     0B      237K       -        0         -     46K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sat Dec 19  0:32 2020  zpool1/store/3PP-backup/longterm                                                               -         -       none   219K      219K    219K        0        0B     42K    42K  1.00x     1.00x         -            off      metadata        metadata     128K     0B
Sun Jan  3 18:30 2021  zpool1/store/3PP-backup/longterm@zfs-auto-snap_frequent-2021-01-03-1830                        -         -          -     0B      219K       -     219K         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 18:45 2021  zpool1/store/3PP-backup/longterm@zfs-auto-snap_frequent-2021-01-03-1845                        -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:00 2021  zpool1/store/3PP-backup/longterm@zfs-auto-snap_frequent-2021-01-03-1900                        -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:15 2021  zpool1/store/3PP-backup/longterm@zfs-auto-snap_frequent-2021-01-03-1915                        -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:17 2021  zpool1/store/3PP-backup/longterm@zfs-auto-snap_hourly-2021-01-03-1917                          -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sat Dec 19  0:32 2020  zpool1/store/3PP-backup/shortterm                                                              -         -       none   219K      219K    219K        0        0B     42K    42K  1.00x     1.00x         -            off      metadata        metadata     128K     0B
Sun Jan  3 18:30 2021  zpool1/store/3PP-backup/shortterm@zfs-auto-snap_frequent-2021-01-03-1830                       -         -          -     0B      219K       -     219K         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 18:45 2021  zpool1/store/3PP-backup/shortterm@zfs-auto-snap_frequent-2021-01-03-1845                       -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:00 2021  zpool1/store/3PP-backup/shortterm@zfs-auto-snap_frequent-2021-01-03-1900                       -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:15 2021  zpool1/store/3PP-backup/shortterm@zfs-auto-snap_frequent-2021-01-03-1915                       -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:17 2021  zpool1/store/3PP-backup/shortterm@zfs-auto-snap_hourly-2021-01-03-1917                         -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sat Dec 19  0:32 2020  zpool1/store/work                                                                              -         -       none   219K      219K    219K        0        0B     42K    42K  1.00x     1.00x         -            off      metadata        metadata     128K     0B
Sun Jan  3 18:30 2021  zpool1/store/work@zfs-auto-snap_frequent-2021-01-03-1830                                       -         -          -     0B      219K       -     219K         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 18:45 2021  zpool1/store/work@zfs-auto-snap_frequent-2021-01-03-1845                                       -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:00 2021  zpool1/store/work@zfs-auto-snap_frequent-2021-01-03-1900                                       -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:15 2021  zpool1/store/work@zfs-auto-snap_frequent-2021-01-03-1915                                       -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:17 2021  zpool1/store/work@zfs-auto-snap_hourly-2021-01-03-1917                                         -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sat Dec 19  0:32 2020  zpool1/store/zmt-backup                                                                        -         -       none  1.30M      219K    219K        0        0B     42K   256K  1.00x     1.00x         -            off      metadata        metadata     128K     0B
Sun Jan  3 18:30 2021  zpool1/store/zmt-backup@zfs-auto-snap_frequent-2021-01-03-1830                                 -         -          -     0B      219K       -     219K         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 18:45 2021  zpool1/store/zmt-backup@zfs-auto-snap_frequent-2021-01-03-1845                                 -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:00 2021  zpool1/store/zmt-backup@zfs-auto-snap_frequent-2021-01-03-1900                                 -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:15 2021  zpool1/store/zmt-backup@zfs-auto-snap_frequent-2021-01-03-1915                                 -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:17 2021  zpool1/store/zmt-backup@zfs-auto-snap_hourly-2021-01-03-1917                                   -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sat Dec 19  0:32 2020  zpool1/store/zmt-backup/longterm                                                               -         -       none   219K      219K    219K        0        0B     42K    42K  1.00x     1.00x         -            off      metadata        metadata     128K     0B
Sun Jan  3 18:30 2021  zpool1/store/zmt-backup/longterm@zfs-auto-snap_frequent-2021-01-03-1830                        -         -          -     0B      219K       -     219K         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 18:45 2021  zpool1/store/zmt-backup/longterm@zfs-auto-snap_frequent-2021-01-03-1845                        -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:00 2021  zpool1/store/zmt-backup/longterm@zfs-auto-snap_frequent-2021-01-03-1900                        -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:15 2021  zpool1/store/zmt-backup/longterm@zfs-auto-snap_frequent-2021-01-03-1915                        -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:17 2021  zpool1/store/zmt-backup/longterm@zfs-auto-snap_hourly-2021-01-03-1917                          -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sat Dec 19  0:32 2020  zpool1/store/zmt-backup/shortterm                                                              -         -       none   894K      219K    219K        0        0B     42K   172K  1.00x     1.00x         -            off      metadata        metadata     128K     0B
Sun Jan  3 18:30 2021  zpool1/store/zmt-backup/shortterm@zfs-auto-snap_frequent-2021-01-03-1830                       -         -          -     0B      219K       -     219K         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 18:45 2021  zpool1/store/zmt-backup/shortterm@zfs-auto-snap_frequent-2021-01-03-1845                       -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:00 2021  zpool1/store/zmt-backup/shortterm@zfs-auto-snap_frequent-2021-01-03-1900                       -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:15 2021  zpool1/store/zmt-backup/shortterm@zfs-auto-snap_frequent-2021-01-03-1915                       -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:17 2021  zpool1/store/zmt-backup/shortterm@zfs-auto-snap_hourly-2021-01-03-1917                         -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sat Dec 19  0:32 2020  zpool1/store/zmt-backup/shortterm/px-images                                                    -         -       none   675K      237K    237K        0        0B     46K   130K  1.00x     1.00x         -            off      metadata        metadata     128K     0B
Sun Jan  3 18:30 2021  zpool1/store/zmt-backup/shortterm/px-images@zfs-auto-snap_frequent-2021-01-03-1830             -         -          -     0B      237K       -     237K         -     46K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 18:45 2021  zpool1/store/zmt-backup/shortterm/px-images@zfs-auto-snap_frequent-2021-01-03-1845             -         -          -     0B      237K       -        0         -     46K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:00 2021  zpool1/store/zmt-backup/shortterm/px-images@zfs-auto-snap_frequent-2021-01-03-1900             -         -          -     0B      237K       -        0         -     46K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:15 2021  zpool1/store/zmt-backup/shortterm/px-images@zfs-auto-snap_frequent-2021-01-03-1915             -         -          -     0B      237K       -        0         -     46K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:17 2021  zpool1/store/zmt-backup/shortterm/px-images@zfs-auto-snap_hourly-2021-01-03-1917               -         -          -     0B      237K       -        0         -     46K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sat Dec 19  0:32 2020  zpool1/store/zmt-backup/shortterm/px-images/high                                               -         -       none   219K      219K    219K        0        0B     42K    42K  1.00x     1.00x         -            off      metadata        metadata     128K     0B
Sun Jan  3 18:30 2021  zpool1/store/zmt-backup/shortterm/px-images/high@zfs-auto-snap_frequent-2021-01-03-1830        -         -          -     0B      219K       -     219K         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 18:45 2021  zpool1/store/zmt-backup/shortterm/px-images/high@zfs-auto-snap_frequent-2021-01-03-1845        -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:00 2021  zpool1/store/zmt-backup/shortterm/px-images/high@zfs-auto-snap_frequent-2021-01-03-1900        -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:15 2021  zpool1/store/zmt-backup/shortterm/px-images/high@zfs-auto-snap_frequent-2021-01-03-1915        -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:17 2021  zpool1/store/zmt-backup/shortterm/px-images/high@zfs-auto-snap_hourly-2021-01-03-1917          -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sat Dec 19  0:33 2020  zpool1/store/zmt-backup/shortterm/px-images/low                                                -         -       none   219K      219K    219K        0        0B     42K    42K  1.00x     1.00x         -            off      metadata        metadata     128K     0B
Sun Jan  3 18:30 2021  zpool1/store/zmt-backup/shortterm/px-images/low@zfs-auto-snap_frequent-2021-01-03-1830         -         -          -     0B      219K       -     219K         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 18:45 2021  zpool1/store/zmt-backup/shortterm/px-images/low@zfs-auto-snap_frequent-2021-01-03-1845         -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:00 2021  zpool1/store/zmt-backup/shortterm/px-images/low@zfs-auto-snap_frequent-2021-01-03-1900         -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:15 2021  zpool1/store/zmt-backup/shortterm/px-images/low@zfs-auto-snap_frequent-2021-01-03-1915         -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -
Sun Jan  3 19:17 2021  zpool1/store/zmt-backup/shortterm/px-images/low@zfs-auto-snap_hourly-2021-01-03-1917           -         -          -     0B      219K       -        0         -     42K      -  1.00x     1.00x         0              -      metadata        metadata        -      -

@ggzengel
Contributor

ggzengel commented Jan 4, 2021

I found some old data from my console. I don't know how usage could rise so much, or where exactly the space went.
The Proxmox graph shows the rise happened after the first snapshot.

This is from 1-2 days before the disaster:

# zfs list
NAME                                               USED  AVAIL     REFER  MOUNTPOINT
zpool1                                            8.44T  4.81T      237K  /srv/zfs
zpool1/proxmox                                    8.44T  4.81T      237K  /srv/zfs/proxmox
zpool1/proxmox/drbd                                219K  4.81T      219K  /srv/zfs/proxmox/drbd
zpool1/proxmox/files                              4.11G  4.81T     4.11G  /srv/zfs/proxmox/files
zpool1/proxmox/local                              8.44T  4.81T      219K  /srv/zfs/proxmox/local
zpool1/proxmox/local/vm-201-disk-0                11.6G  4.82T     1.66G  -
zpool1/proxmox/local/vm-201-disk-1                3.14M  4.81T      182K  -
zpool1/proxmox/local/vm-301-disk-0                3.14M  4.81T      164K  -
zpool1/proxmox/local/vm-301-disk-1                4.21T  6.71T     2.31T  -
zpool1/proxmox/local/vm-301-disk-2                4.21T  5.42T     3.60T  -
zpool1/store                                      2.69M  4.81T      256K  /srv/zfs/store
zpool1/store/3PP-backup                            675K  4.81T      237K  /srv/zfs/store/3PP-backup
zpool1/store/3PP-backup/longterm                   219K  4.81T      219K  /srv/zfs/store/3PP-backup/longterm
zpool1/store/3PP-backup/shortterm                  219K  4.81T      219K  /srv/zfs/store/3PP-backup/shortterm
zpool1/store/work                                  493K  4.81T      219K  /srv/zfs/store/work
zpool1/store/work/migration                        274K  4.81T      274K  /srv/zfs/store/work/migration
zpool1/store/zmt-backup                           1.30M  4.81T      219K  /srv/zfs/store/zmt-backup
zpool1/store/zmt-backup/longterm                   219K  4.81T      219K  /srv/zfs/store/zmt-backup/longterm
zpool1/store/zmt-backup/shortterm                  894K  4.81T      219K  /srv/zfs/store/zmt-backup/shortterm
zpool1/store/zmt-backup/shortterm/px-images        675K  4.81T      237K  /srv/zfs/store/zmt-backup/shortterm/px-images
zpool1/store/zmt-backup/shortterm/px-images/high   219K  4.81T      219K  /srv/zfs/store/zmt-backup/shortterm/px-images/high
zpool1/store/zmt-backup/shortterm/px-images/low    219K  4.81T      219K  /srv/zfs/store/zmt-backup/shortterm/px-images/low

# zfs get all zpool1/proxmox/local/vm-301-disk-2
NAME                                PROPERTY                        VALUE                           SOURCE
zpool1/proxmox/local/vm-301-disk-2  type                            volume                          -
zpool1/proxmox/local/vm-301-disk-2  creation                        Mon Dec 28 19:58 2020           -
zpool1/proxmox/local/vm-301-disk-2  used                            4.21T                           -
zpool1/proxmox/local/vm-301-disk-2  available                       5.42T                           -
zpool1/proxmox/local/vm-301-disk-2  referenced                      3.60T                           -
zpool1/proxmox/local/vm-301-disk-2  compressratio                   1.08x                           -
zpool1/proxmox/local/vm-301-disk-2  reservation                     none                            default
zpool1/proxmox/local/vm-301-disk-2  volsize                         3.64T                           local
zpool1/proxmox/local/vm-301-disk-2  volblocksize                    16K                             -
zpool1/proxmox/local/vm-301-disk-2  checksum                        on                              default
zpool1/proxmox/local/vm-301-disk-2  compression                     lz4                             inherited from zpool1
zpool1/proxmox/local/vm-301-disk-2  readonly                        off                             default
zpool1/proxmox/local/vm-301-disk-2  createtxg                       162422                          -
zpool1/proxmox/local/vm-301-disk-2  copies                          1                               default
zpool1/proxmox/local/vm-301-disk-2  refreservation                  4.21T                           local
zpool1/proxmox/local/vm-301-disk-2  guid                            6490480769591684477             -
zpool1/proxmox/local/vm-301-disk-2  primarycache                    all                             default
zpool1/proxmox/local/vm-301-disk-2  secondarycache                  all                             default
zpool1/proxmox/local/vm-301-disk-2  usedbysnapshots                 0B                              -
zpool1/proxmox/local/vm-301-disk-2  usedbydataset                   3.60T                           -
zpool1/proxmox/local/vm-301-disk-2  usedbychildren                  0B                              -
zpool1/proxmox/local/vm-301-disk-2  usedbyrefreservation            626G                            -
zpool1/proxmox/local/vm-301-disk-2  logbias                         latency                         default
zpool1/proxmox/local/vm-301-disk-2  objsetid                        718                             -
zpool1/proxmox/local/vm-301-disk-2  dedup                           off                             default
zpool1/proxmox/local/vm-301-disk-2  mlslabel                        none                            default
zpool1/proxmox/local/vm-301-disk-2  sync                            standard                        default
zpool1/proxmox/local/vm-301-disk-2  refcompressratio                1.08x                           -
zpool1/proxmox/local/vm-301-disk-2  written                         3.60T                           -
zpool1/proxmox/local/vm-301-disk-2  logicalused                     3.25T                           -
zpool1/proxmox/local/vm-301-disk-2  logicalreferenced               3.25T                           -
zpool1/proxmox/local/vm-301-disk-2  volmode                         dev                             inherited from zpool1
zpool1/proxmox/local/vm-301-disk-2  snapshot_limit                  none                            default
zpool1/proxmox/local/vm-301-disk-2  snapshot_count                  none                            default
zpool1/proxmox/local/vm-301-disk-2  snapdev                         hidden                          default
zpool1/proxmox/local/vm-301-disk-2  context                         none                            default
zpool1/proxmox/local/vm-301-disk-2  fscontext                       none                            default
zpool1/proxmox/local/vm-301-disk-2  defcontext                      none                            default
zpool1/proxmox/local/vm-301-disk-2  rootcontext                     none                            default
zpool1/proxmox/local/vm-301-disk-2  redundant_metadata              all                             default
zpool1/proxmox/local/vm-301-disk-2  encryption                      off                             default
zpool1/proxmox/local/vm-301-disk-2  keylocation                     none                            default
zpool1/proxmox/local/vm-301-disk-2  keyformat                       none                            default
zpool1/proxmox/local/vm-301-disk-2  pbkdf2iters                     0                               default
zpool1/proxmox/local/vm-301-disk-2  com.sun:auto-snapshot:daily     true                            inherited from zpool1/proxmox
zpool1/proxmox/local/vm-301-disk-2  com.sun:auto-snapshot:weekly    true                            inherited from zpool1/proxmox
zpool1/proxmox/local/vm-301-disk-2  com.sun:auto-snapshot:frequent  false                           inherited from zpool1/proxmox
zpool1/proxmox/local/vm-301-disk-2  com.sun:auto-snapshot           true                            inherited from zpool1/proxmox
zpool1/proxmox/local/vm-301-disk-2  com.sun:auto-snapshot:hourly    true                            inherited from zpool1/proxmox
zpool1/proxmox/local/vm-301-disk-2  com.sun:auto-snapshot:monthly   true                            inherited from zpool1/proxmox

And this after:

# zfs get all zpool1/proxmox/local/vm-301-disk-2
NAME                                PROPERTY                        VALUE                           SOURCE
zpool1/proxmox/local/vm-301-disk-2  type                            volume                          -
zpool1/proxmox/local/vm-301-disk-2  creation                        Mon Dec 28 19:58 2020           -
zpool1/proxmox/local/vm-301-disk-2  used                            7.82T                           -
zpool1/proxmox/local/vm-301-disk-2  available                       4.21T                           -
zpool1/proxmox/local/vm-301-disk-2  referenced                      3.60T                           -
zpool1/proxmox/local/vm-301-disk-2  compressratio                   1.08x                           -
zpool1/proxmox/local/vm-301-disk-2  reservation                     none                            default
zpool1/proxmox/local/vm-301-disk-2  volsize                         3.64T                           local
zpool1/proxmox/local/vm-301-disk-2  volblocksize                    16K                             -
zpool1/proxmox/local/vm-301-disk-2  checksum                        on                              default
zpool1/proxmox/local/vm-301-disk-2  compression                     lz4                             inherited from zpool1
zpool1/proxmox/local/vm-301-disk-2  readonly                        off                             default
zpool1/proxmox/local/vm-301-disk-2  createtxg                       162422                          -
zpool1/proxmox/local/vm-301-disk-2  copies                          1                               default
zpool1/proxmox/local/vm-301-disk-2  refreservation                  4.21T                           local
zpool1/proxmox/local/vm-301-disk-2  guid                            6490480769591684477             -
zpool1/proxmox/local/vm-301-disk-2  primarycache                    all                             default
zpool1/proxmox/local/vm-301-disk-2  secondarycache                  all                             default
zpool1/proxmox/local/vm-301-disk-2  usedbysnapshots                 2.84G                           -
zpool1/proxmox/local/vm-301-disk-2  usedbydataset                   3.60T                           -
zpool1/proxmox/local/vm-301-disk-2  usedbychildren                  0B                              -
zpool1/proxmox/local/vm-301-disk-2  usedbyrefreservation            4.21T                           -
zpool1/proxmox/local/vm-301-disk-2  logbias                         latency                         default
zpool1/proxmox/local/vm-301-disk-2  objsetid                        718                             -
zpool1/proxmox/local/vm-301-disk-2  dedup                           off                             default
zpool1/proxmox/local/vm-301-disk-2  mlslabel                        none                            default
zpool1/proxmox/local/vm-301-disk-2  sync                            standard                        default
zpool1/proxmox/local/vm-301-disk-2  refcompressratio                1.08x                           -
zpool1/proxmox/local/vm-301-disk-2  written                         2.65G                           -
zpool1/proxmox/local/vm-301-disk-2  logicalused                     3.26T                           -
zpool1/proxmox/local/vm-301-disk-2  logicalreferenced               3.25T                           -
zpool1/proxmox/local/vm-301-disk-2  volmode                         dev                             inherited from zpool1
zpool1/proxmox/local/vm-301-disk-2  snapshot_limit                  none                            default
zpool1/proxmox/local/vm-301-disk-2  snapshot_count                  none                            default
zpool1/proxmox/local/vm-301-disk-2  snapdev                         hidden                          default
zpool1/proxmox/local/vm-301-disk-2  context                         none                            default
zpool1/proxmox/local/vm-301-disk-2  fscontext                       none                            default
zpool1/proxmox/local/vm-301-disk-2  defcontext                      none                            default
zpool1/proxmox/local/vm-301-disk-2  rootcontext                     none                            default
zpool1/proxmox/local/vm-301-disk-2  redundant_metadata              all                             default
zpool1/proxmox/local/vm-301-disk-2  encryption                      off                             default
zpool1/proxmox/local/vm-301-disk-2  keylocation                     none                            default
zpool1/proxmox/local/vm-301-disk-2  keyformat                       none                            default
zpool1/proxmox/local/vm-301-disk-2  pbkdf2iters                     0                               default
zpool1/proxmox/local/vm-301-disk-2  com.sun:auto-snapshot:daily     true                            inherited from zpool1/proxmox
zpool1/proxmox/local/vm-301-disk-2  com.sun:auto-snapshot:weekly    true                            inherited from zpool1/proxmox
zpool1/proxmox/local/vm-301-disk-2  com.sun:auto-snapshot:frequent  false                           inherited from zpool1/proxmox
zpool1/proxmox/local/vm-301-disk-2  com.sun:auto-snapshot           true                            inherited from zpool1/proxmox
zpool1/proxmox/local/vm-301-disk-2  com.sun:auto-snapshot:hourly    true                            inherited from zpool1/proxmox
zpool1/proxmox/local/vm-301-disk-2  com.sun:auto-snapshot:monthly   true                            inherited from zpool1/proxmox

@ggzengel
Contributor

ggzengel commented Jan 4, 2021

Export/import does not work.
Raising spa_slop_shift (it was 5; I tried 6 and 20) does not work either.
Is there any hope of getting this thing online again?

I'm not on site, so I'm backing up over a 100 Mbit link, roughly 30 hours one way.
I stopped vzdump, retried the import, and am now backing up with compression.

# zpool import -o autoexpand=on zpool1
cannot import 'zpool1': out of space
	Destroy and re-create the pool from
	a backup source.
# zpool import -d /dev/mapper/ zpool1
# zfs set primarycache=none zpool1/proxmox/local/vm-301-disk-1
cannot set property for 'zpool1/proxmox/local/vm-301-disk-1': out of space

@helamonster

FYI: I just encountered this same error ("internal error: Channel number out of range") while attempting to destroy snapshots on version 2.0.4:

zfs-2.0.4-0york3~20.04
zfs-kmod-2.0.4-0york3~20.04

Linux server 5.4.0-74-generic #83-Ubuntu SMP Sat May 8 02:35:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Description:    Ubuntu 20.04.2 LTS
Release:        20.04
Codename:       focal

I was eventually able to free some space and work around the issue.
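(One workaround often suggested in similar full-pool reports, rather than a confirmed fix: truncate a large expendable file in place instead of unlinking it, then retry the cleanup. The path and snapshot names below are purely hypothetical.)

# truncate -s 0 /pool/scratch/expendable-large-file
# zfs destroy pool/dataset@old-snapshot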

@Henkis

Henkis commented Nov 9, 2021

So, I have the exact same problem. I think it happened after upgrading my ZFS version: I had not used sparse volumes, something changed after the upgrade, and now I am unable to do anything other than read the datasets. I cannot truncate a file, lower refreservation, destroy snapshots or datasets, or anything else. Is there any way to fix this other than destroying my pool? My pool is not actually full; the refreservations are taking the space, and zpool list says I have 8TB free. I am running omnios-r151038-96eabf6ba4.
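(For anyone in the same spot, the space pinned purely by reservations can be checked with standard properties; the pool and zvol names below are placeholders, and on a badly over-committed pool the zfs set itself may still fail with the same error this issue is about.)

# zfs get -r -t volume refreservation,usedbyrefreservation rpool
# zfs set refreservation=none rpool/some-zvol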

@sirbusby

zfs version
zfs-2.0.5-pve1
zfs-kmod-2.0.6-pve1

zfs destroy -f rpool/vm-202-disk-2
cannot destroy 'rpool/vm-202-disk-2': volume has children
use '-r' to destroy the following datasets:
rpool/vm-202-disk-2@zfs-auto-snap_frequent-2021-12-22-1645

zfs destroy -r rpool/vm-202-disk-2
internal error: cannot destroy snapshots: Channel number out of range
Aborted

@ja-cop

ja-cop commented Mar 15, 2022

I'm pretty sure I encountered this on Saturday - I neglected to save all the pertinent information, but I'll post what I have FWIW.


It happened in a single-vdev pool of 250GB where the vdev is a single LUKS device. The remaining free space was probably just a few gigabytes. I knew it was low on disk space, so I wanted to move some VM-related zvols to a different pool, and I ran zfs snapshot -r mypoolname/VMVOL@sendtotank, where VMVOL was the common parent dataset of all my zvols.

The root filesystem and /var/log were on this pool, and my running that command was the last thing it was able to log, so I know this command is what caused the pool to become completely full ("no space left on device").

Most of my zvols accidentally had the default refreservation enabled, which I had not intended. After running zfs-snapshot I tried to zfs-send the snapshots to the target pool, but I got either an internal error or "no space left on device" (I unfortunately can't remember which). Then I tried to remove some bigger files on the filesystem to free up space, but rm also failed with no space left on device. zfs destroy on the snapshots didn't work either, but I can't remember which error message I got. I also tried setting refreservation to none on the zvols, but this gave an internal error.

I booted into a rescue environment and sent all the pool's datasets (not snapshots, which wasn't possible) to another pool, recreated the full pool (zpool destroy, zpool create) and sent some of the datasets back.


The host is running Void Linux/musl.

$ zpool --version                                                                                                                                                     ~
zfs-2.1.2-1
zfs-kmod-2.1.2-1

$ uname -a                                                                                                                                                            ~
Linux myhostname 5.15.26_1 #1 SMP Wed Mar 2 13:44:09 UTC 2022 x86_64 GNU/Linux

Layout of the pool in question (this was run after recreating the pool, so it's not the original):

> zpool status mypoolname                                                                                                                                           ~
  pool: mypoolname
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:00:53 with 0 errors on Mon Mar 14 13:00:54 2022
config:

        NAME                                                     STATE     READ WRITE CKSUM
        mypoolname                                               ONLINE       0     0     0
          /dev/mapper/luks-166a7bfa-54ad-48dd-829b-24e7c252c3aa  ONLINE       0     0     0

errors: No known data errors

The pool's attributes (from after recreating it, but should be the same or very similar):

> zpool get all mypoolname                                                                                                                                          ~
NAME        PROPERTY                       VALUE                          SOURCE
mypoolname  size                           230G                           -
mypoolname  capacity                       10%                            -
mypoolname  altroot                        -                              default
mypoolname  health                         ONLINE                         -
mypoolname  guid                           10132177447976272804           -
mypoolname  version                        -                              default
mypoolname  bootfs                         mypoolname/ROOT/void           local
mypoolname  delegation                     on                             default
mypoolname  autoreplace                    off                            default
mypoolname  cachefile                      -                              default
mypoolname  failmode                       wait                           default
mypoolname  listsnapshots                  off                            default
mypoolname  autoexpand                     off                            default
mypoolname  dedupratio                     1.00x                          -
mypoolname  free                           206G                           -
mypoolname  allocated                      23.7G                          -
mypoolname  readonly                       off                            -
mypoolname  ashift                         12                             local
mypoolname  comment                        -                              default
mypoolname  expandsize                     -                              -
mypoolname  freeing                        0                              -
mypoolname  fragmentation                  0%                             -
mypoolname  leaked                         0                              -
mypoolname  multihost                      off                            default
mypoolname  checkpoint                     -                              -
mypoolname  load_guid                      3557207208251422890            -
mypoolname  autotrim                       on                             local
mypoolname  compatibility                  off                            default
mypoolname  feature@async_destroy          enabled                        local
mypoolname  feature@empty_bpobj            active                         local
mypoolname  feature@lz4_compress           active                         local
mypoolname  feature@multi_vdev_crash_dump  enabled                        local
mypoolname  feature@spacemap_histogram     active                         local
mypoolname  feature@enabled_txg            active                         local
mypoolname  feature@hole_birth             active                         local
mypoolname  feature@extensible_dataset     active                         local
mypoolname  feature@embedded_data          active                         local
mypoolname  feature@bookmarks              enabled                        local
mypoolname  feature@filesystem_limits      enabled                        local
mypoolname  feature@large_blocks           enabled                        local
mypoolname  feature@large_dnode            enabled                        local
mypoolname  feature@sha512                 enabled                        local
mypoolname  feature@skein                  enabled                        local
mypoolname  feature@edonr                  enabled                        local
mypoolname  feature@userobj_accounting     active                         local
mypoolname  feature@encryption             enabled                        local
mypoolname  feature@project_quota          active                         local
mypoolname  feature@device_removal         enabled                        local
mypoolname  feature@obsolete_counts        enabled                        local
mypoolname  feature@zpool_checkpoint       enabled                        local
mypoolname  feature@spacemap_v2            active                         local
mypoolname  feature@allocation_classes     enabled                        local
mypoolname  feature@resilver_defer         enabled                        local
mypoolname  feature@bookmark_v2            enabled                        local
mypoolname  feature@redaction_bookmarks    enabled                        local
mypoolname  feature@redacted_datasets      enabled                        local
mypoolname  feature@bookmark_written       enabled                        local
mypoolname  feature@log_spacemap           active                         local
mypoolname  feature@livelist               enabled                        local
mypoolname  feature@device_rebuild         enabled                        local
mypoolname  feature@zstd_compress          active                         local
mypoolname  feature@draid                  disabled                       local

It was created from my rescue environment running zfs-2.0.4-1, which is probably why not all the features are enabled.

While in the rescue environment, I set spa_slop_shift from the default 5 to 6 via sysfs, but it didn't change the error message when trying to destroy snapshots or remove files.
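(For reference, on Linux this is a zfs module parameter under /sys/module/zfs/parameters; a minimal sketch of checking and raising it, assuming the default of 5:)

# cat /sys/module/zfs/parameters/spa_slop_shift
5
# echo 6 > /sys/module/zfs/parameters/spa_slop_shift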


It was pointed out on IRC that zpool-checkpoint could have made backtracking trivial, which I'm guessing is good advice for anyone who is doing maintenance on pools with very little space left.
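(A minimal sketch of that advice with the standard checkpoint commands, pool name as a placeholder: take a checkpoint before risky maintenance, discard it with -d once everything worked, or export and re-import with --rewind-to-checkpoint to roll the pool back. Note that a checkpoint holds on to freed blocks, so it has to be taken while there is still some headroom.)

# zpool checkpoint mypoolname
# zpool checkpoint -d mypoolname
# zpool export mypoolname
# zpool import --rewind-to-checkpoint mypoolname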

@behlendorf
Contributor

Just for future reference, I wanted to mention that as of the 2.1.3 release the low space handling has been further improved, commit 145af48. Depending on how overcapacity your pool was you may still have encountered issues, but this should help in the future when trying to remove files to free up space.

@lynxis

lynxis commented Feb 19, 2023

I've managed to get into a similar situation where my zpool still has free space but the datasets are full because of snapshots. The zvols are still writable because only the root dataset is full; the volumes themselves still have free space.

Destroying snapshots fails with:

 1676831085   dsl_destroy.c:662:dsl_destroy_snapshots_nvl(): Could not open pool: data/....@...

When the pool is this full, not even zfs program data simple.lua works anymore, even when simple.lua contains nothing more than

return { }

It fails with ECHRNG and the error message "Could not open pool: data".
strace:
ioctl(5, _IOC(_IOC_NONE, 0x5a, 0x48, 0), 0x7ffce335d770) = -1 ECHRNG (Channel number out of range)

For ECHRNG see https://github.com/openzfs/zfs/blob/master/lib/libzfs_core/libzfs_core.c#L1509
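(For anyone wanting to reproduce that check, this is roughly what the test looks like; the pool name "data" is taken from the report above. The same channel-program path is what multi-snapshot destroys go through, which is why they hit the same ECHRNG.)

# cat > simple.lua <<'EOF'
-- trivial channel program: does nothing, returns an empty table
return { }
EOF
# zfs program data simple.lua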

@lynxis

lynxis commented Feb 20, 2023

Furthermore, my zpool seems to have run into very low slop space; changing spa_slop_shift didn't change anything.
I'm running Debian bullseye with zfs-dkms 2.0.3-9. Trying 2.1.9-1~bpo11+1 didn't change anything either.

I managed to remove the snapshots by relaxing the space-check requirement for zcp and zfs.sync.destroy, and the zpool recovered!

Be careful here: it (seemingly) worked for me, but I'm not a ZFS developer and my understanding of ZFS is very new. I don't know whether there are further implications of running this low on slop space.

diff --git a/module/zfs/zcp.c b/module/zfs/zcp.c
index 89ed4f91faa3..c2c5acf6127d 100644
--- a/module/zfs/zcp.c
+++ b/module/zfs/zcp.c
@@ -1157,7 +1157,7 @@ zcp_eval(const char *poolname, const char *program, boolean_t sync,

        if (sync) {
                err = dsl_sync_task_sig(poolname, NULL, zcp_eval_sync,
-                   zcp_eval_sig, &runinfo, 0, ZFS_SPACE_CHECK_ZCP_EVAL);
+                   zcp_eval_sig, &runinfo, 0, ZFS_SPACE_CHECK_NONE);
                if (err != 0)
                        zcp_pool_error(&runinfo, poolname, err);
        } else {
diff --git a/module/zfs/zcp_synctask.c b/module/zfs/zcp_synctask.c
index 058910054d97..c1b76ee1c68d 100644
--- a/module/zfs/zcp_synctask.c
+++ b/module/zfs/zcp_synctask.c
@@ -125,7 +125,7 @@ static const zcp_synctask_info_t zcp_synctask_destroy_info = {
            {.za_name = "defer", .za_lua_type = LUA_TBOOLEAN },
            {NULL, 0}
        },
-       .space_check = ZFS_SPACE_CHECK_DESTROY,
+       .space_check = ZFS_SPACE_CHECK_NONE,
        .blocks_modified = 0
 };

@chazchandler

chazchandler commented Jan 15, 2024

I had this happen to me when taking a snapshot of large, non-sparse zvols (i.e., refreservation not set to none).

$ zfs list -o space tank/asdf
NAME       AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank/asdf  2.76T   461G      825K    193G           268G         0B
$ zfs get refreservation tank/asdf
NAME       PROPERTY        VALUE      SOURCE
tank/asdf  refreservation  268G       received

Setting refreservation to none gave me the space back, but I didn't discover this until after adding a vdev to the pool, so I'm not sure whether I could have made that change while experiencing the low-space issue.
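(Both halves of that are standard commands; a minimal sketch, with the existing dataset name taken from the output above and the new zvol name and size as placeholders: reclaim the space pinned by the reservation, or create future zvols sparse so no refreservation is set at all.)

# zfs set refreservation=none tank/asdf
# zfs create -s -V 100G tank/newvol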
