
Occasional vdev errors #7575

Closed
jgrund opened this issue May 30, 2018 · 2 comments
Labels
Status: Stale (No recent activity for issue)

Comments


jgrund commented May 30, 2018

We are running a series of integration tests in which we wipe disks between runs. Occasionally we see the following error:

May 25 2018 22:39:37.535636341 ereport.fs.zfs.vdev.unknown
        class = "ereport.fs.zfs.vdev.unknown"
        ena = 0x10bc5367b600401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x291550dcbdf96fea
                vdev = 0x75cec2e52c7beef8
        (end detector)
        pool = "zfs_pool_scsi0QEMU_QEMU_HARDDISK_target5"
        pool_guid = 0x291550dcbdf96fea
        pool_state = 0x0
        pool_context = 0x6
        pool_failmode = "wait"
        vdev_guid = 0x75cec2e52c7beef8
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_target5-part1"
        vdev_devid = "scsi-0QEMU_QEMU_HARDDISK_target5-part1"
        vdev_complete_ts = 0x0
        vdev_delta_ts = 0x0
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        parent_guid = 0x291550dcbdf96fea
        parent_type = "root"
        vdev_spare_paths = 
        vdev_spare_guids = 
        prev_state = 0x1
        time = 0x5b0890a9 0x1fed2975 
        eid = 0x2b

This error is only temporary; retrying later allows the pool to be created without error.
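For reference, a minimal sketch of one way to capture ereports like the one above while the tests run (the log path is arbitrary):

# Follow ZFS events verbosely in a background shell while the tests run,
# so transient ereports such as vdev.unknown are recorded.
zpool events -v -f | tee /tmp/zfs-events.log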

Perhaps unrelatedly, we've also seen a case where the pool is actually created even though zpool create returns a non-zero exit code and the above error is emitted.
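One way to handle that case, sketched below under the assumption that zpool list is enough to detect the half-created pool (pool and device names match the commands further down):

# If zpool create exits non-zero, check whether the pool exists anyway
# before retrying, so a half-created pool is not left behind.
pool=zfs_pool_scsi0QEMU_QEMU_HARDDISK_disk15
dev=/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15
if ! zpool create -o cachefile=none -o multihost=on "$pool" "$dev"; then
    if zpool list "$pool" >/dev/null 2>&1; then
        # create reported failure but the pool is present; clean it up
        zpool destroy "$pool"
    fi
fi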

Prior to this error we run commands similar to the following:

zpool destroy zfs_pool_scsi0QEMU_QEMU_HARDDISK_disk15
udevadm settle
zpool labelclear /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15-part1
wipefs -a /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15
udevadm settle
udevadm info --path=/module/zfs
wipefs -a /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15
udevadm settle
parted /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15 mklabel gpt
udevadm settle
zpool create zfs_pool_scsi0QEMU_QEMU_HARDDISK_disk15 -o cachefile=none -o multihost=on /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15

Is there some other way we should be resetting devices between runs, or some other way to wait for the vdev to become known?
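For discussion, a minimal sketch of a retry approach we could fall back on, assuming the vdev.unknown error really is transient; the attempt count and sleep interval are arbitrary:

# Retry pool creation a few times, settling udev between attempts.
pool=zfs_pool_scsi0QEMU_QEMU_HARDDISK_disk15
dev=/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15
for attempt in 1 2 3 4 5; do
    udevadm settle
    if zpool create -o cachefile=none -o multihost=on "$pool" "$dev"; then
        break
    fi
    sleep 5
done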

@rageagainstthebugs

This appears to be a duplicate of #2275; please close this issue and use the ticket template.


stale bot commented Aug 25, 2020

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.

stale bot added the Status: Stale label on Aug 25, 2020
stale bot closed this as completed on Nov 24, 2020