
CKSUM and WRITE errors with 2.2.1 stable, when vdevs are atop LUKS #15533

Closed
Rudd-O opened this issue Nov 16, 2023 · 72 comments
Labels
Type: Defect (Incorrect behavior, e.g. crash, hang)

Comments

@Rudd-O
Contributor

Rudd-O commented Nov 16, 2023

I build and regularly test ZFS from the master branch. A few days ago I built and tested the commit specified in the headline of this issue, deploying it to three machines.

On two of them (the ones that had mirrored pools), a data corruption issue arose: hundreds of WRITE errors would accumulate when deleting snapshots, with no CKSUM errors and no evidence of a hardware problem. I tried a scrub, and that only made the problem worse.

Initially I assumed I had gotten extremely unlucky and hardware was dying, because two mirror drives of one leg were experiencing the issue but none of the drives of the other leg were -- so I decided it was safest to attach a third mirror drive to the first leg (that was $200, oof). Since I had no more drive bays, I popped the new drive into a USB port (USB 2.0!) and attached it to the first leg.

During the resilvering process, the third drive also began experiencing WRITE errors, and the first CKSUM errors.

	NAME                                                                                                STATE     READ WRITE CKSUM
	chest                                                                                               DEGRADED     0     0     0
	  mirror-0                                                                                          DEGRADED     0   308     0
	    dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4  DEGRADED     0   363     2  too many errors
	    dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718  DEGRADED     0   369     0  too many errors
	    dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb  DEGRADED     0   423     0  too many errors
	  mirror-3                                                                                          ONLINE       0     0     0
	    dm-uuid-CRYPT-LUKS2-602229e893a34cc7aa889f19deedbeb1-luks-602229e8-93a3-4cc7-aa88-9f19deedbeb1  ONLINE       0     0     0
	    dm-uuid-CRYPT-LUKS2-12c9127aa687463ab335b2e49adbacc7-luks-12c9127a-a687-463a-b335-b2e49adbacc7  ONLINE       0     0     0
	logs	
	  dm-uuid-CRYPT-LUKS2-9210c7657b8a460ba031aab973ca37c5-luks-9210c765-7b8a-460b-a031-aab973ca37c5    ONLINE       0     0     0
	cache
	  dm-uuid-CRYPT-LUKS2-06a5560306c24ad58d3feb3a03cb0a20-luks-06a55603-06c2-4ad5-8d3f-eb3a03cb0a20    ONLINE       0     0     0

I tried different kernels (6.4, 6.5 from Fedora) to no avail. The error was present either way. zpool clear was followed by a few errors whenever disks were written to, and hundreds of errors whenever snapshots were deleted (I have zfs-auto-snapshot running in the background).

Then, my backup machine began experiencing the same WRITE errors. I can't have this backup die on me, especially now that I have actual data corruption on the big data file server.

At this point I concluded there must be some serious issue with the code, and decided to downgrade all machines to a known-good build. After downgrading the most severely affected machine (whose logs are above) to my build of e47e9bb, everything appears nominal and the resilvering is progressing without issues. Deleting snapshots also is no longer causing issues.

Nonetheless, I have forever lost what appears to be "who knows what" metadata, and of course lost four days to an unsuccessful resilver:

Every 2.0s: zpool status -v chest                                                                                                                                                                                                         penny.dragonfear: Thu Nov 16 15:01:22 2023

  pool: chest
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Nov 16 14:51:20 2023
        486G / 6.70T scanned at 826M/s, 13.4G / 6.66T issued at 22.8M/s
        13.4G resilvered, 0.20% done, 3 days 12:43:30 to go
config:

        NAME                                                                                                STATE     READ WRITE CKSUM
        chest                                                                                               ONLINE       0     0     0
          mirror-0                                                                                          ONLINE       0     0     0
            dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4  ONLINE       0     0     0  (resilvering)
            dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718  ONLINE       0     0     0  (resilvering)
            dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb  ONLINE       0     0     0  (resilvering)
          mirror-3                                                                                          ONLINE       0     0     0
            dm-uuid-CRYPT-LUKS2-602229e893a34cc7aa889f19deedbeb1-luks-602229e8-93a3-4cc7-aa88-9f19deedbeb1  ONLINE       0     0     0
            dm-uuid-CRYPT-LUKS2-12c9127aa687463ab335b2e49adbacc7-luks-12c9127a-a687-463a-b335-b2e49adbacc7  ONLINE       0     0     0
        logs
          dm-uuid-CRYPT-LUKS2-9210c7657b8a460ba031aab973ca37c5-luks-9210c765-7b8a-460b-a031-aab973ca37c5    ONLINE       0     0     0
        cache
          dm-uuid-CRYPT-LUKS2-06a5560306c24ad58d3feb3a03cb0a20-luks-06a55603-06c2-4ad5-8d3f-eb3a03cb0a20    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x16>
        <metadata>:<0x11d>
        <metadata>:<0x34>
        <metadata>:<0x2838>
        <metadata>:<0x3c>
        <metadata>:<0x44>
        <metadata>:<0x656>
        <metadata>:<0x862>
        <metadata>:<0x594>
        <metadata>:<0x3cf>
        <metadata>:<0x2df>
        <metadata>:<0x1f5>
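As a sanity check on the scan line above, the ETA is simply the remaining data to issue divided by the issue rate, in binary units (T = TiB, G = GiB, M = MiB). A quick awk re-derivation from the rounded figures shown in the status output:

```shell
# Re-derive the resilver ETA: (total to issue - issued) / issue rate.
# Inputs are the rounded numbers from the "scan:" line above.
awk 'BEGIN {
    total_gib  = 6.66 * 1024   # 6.66T to issue
    issued_gib = 13.4          # 13.4G issued so far
    rate_mib_s = 22.8          # issue rate, MiB/s
    s = int((total_gib - issued_gib) * 1024 / rate_mib_s)
    printf "%d days %d:%02d to go\n", s / 86400, (s % 86400) / 3600, (s % 3600) / 60
}'
```

This prints "3 days 12:54 to go", within minutes of the reported "3 days 12:43:30 to go"; the discrepancy comes from the status line's own rounding of the inputs.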

In conclusion, something added between e47e9bb..786641d is causing this issue.

@Rudd-O Rudd-O added the Type: Defect (Incorrect behavior, e.g. crash, hang) label Nov 16, 2023
@KungFuJesus

A good number of commits in that range overlap with the state of 2.2. It's a shame that you probably can't bisect, since the data is already in peril. Is there any way you could do that?

@Rudd-O
Contributor Author

Rudd-O commented Nov 16, 2023

If I had a solid reproducer or spare hardware to set an experiment up, I would most certainly run it over here.

@robn
Contributor

robn commented Nov 16, 2023

Was anything logged by the kernel or zed when the write errors happened?

Please clarify "backup machine": is that a machine/pool you're sending streams to? Or just another unrelated computer?

@don-brady
Contributor

The system log should have the type of errors; can you isolate them and post them here? Also, zpool events -v will have more details on the write errors that might help with diagnosis.
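When zpool events -v produces a flood of near-identical ereports (as it does below), a small awk pass can reduce the saved output to one summary line per vdev. A sketch, keyed on the vdev_path and vdev_write_errors fields that appear in the real event records; the here-doc stands in for actual saved output:

```shell
# Summarize saved `zpool events -v` output: per vdev_path, count the
# ereport records and keep the last vdev_write_errors counter seen.
summarize_events() {
    awk '
        /^[A-Z][a-z][a-z] [0-9]/   { path = "(no vdev_path)" }  # new record
        /vdev_path = /             { path = $3 }
        /vdev_write_errors = /     { count[path]++; last[path] = $3 }
        END {
            for (p in last)
                print p, "records:", count[p], "last_write_errors:", last[p]
        }
    '
}

# Stand-in sample; in practice: zpool events -v > events.txt, then
# summarize_events < events.txt
summarize_events <<'EOF'
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        vdev_path = "/dev/mapper/luks-a"
        vdev_write_errors = 0x6d
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        vdev_path = "/dev/mapper/luks-a"
        vdev_write_errors = 0x6e
EOF
```

This condenses hundreds of repeated records into a short per-device tally, which makes it easier to spot whether errors are confined to one mirror leg.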

@Rudd-O
Contributor Author

Rudd-O commented Nov 17, 2023

Yes, please stand by....

Here is a sample of zpool events + journalctl logs from that time:

Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da1155432600401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0xb1310329a9559f6b
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0xb1310329a9559f6b
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718"
        vdev_devid = "dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da114c1b98
        vdev_delta_ts = 0x4c160e
        vdev_read_errors = 0x0
        vdev_write_errors = 0x6d
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0f001f0f
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fbb94000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x0
        zio_level = 0x0
        zio_blkid = 0x1b
        time = 0x65560494 0x68a7db 
        eid = 0x247

Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da1155f72c00401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x9f8a1edbb1089f54
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x9f8a1edbb1089f54
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da114b6eac
        vdev_delta_ts = 0x4bd967
        vdev_read_errors = 0x0
        vdev_write_errors = 0x70
        vdev_cksum_errors = 0x2
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da10fea9fd
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fbb9e000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x0
        zio_level = 0x1
        zio_blkid = 0x1
        time = 0x65560494 0x68a7db 
        eid = 0x248

Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da1156897900401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x9f8a1edbb1089f54
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x9f8a1edbb1089f54
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da114b6eac
        vdev_delta_ts = 0x4bd967
        vdev_read_errors = 0x0
        vdev_write_errors = 0x71
        vdev_cksum_errors = 0x2
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da10fe2150
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fbb9d000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x0
        zio_level = 0x0
        zio_blkid = 0x0
        time = 0x65560494 0x68a7db 
        eid = 0x249

Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da114e33e800401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x9f8a1edbb1089f54
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x9f8a1edbb1089f54
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da114b6eac
        vdev_delta_ts = 0x4bd967
        vdev_read_errors = 0x0
        vdev_write_errors = 0x72
        vdev_cksum_errors = 0x2
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0f096038
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fbb9c000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x3c
        zio_level = 0x0
        zio_blkid = 0x0
        time = 0x65560494 0x68a7db 
        eid = 0x24a

Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da11515e0100401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x9f8a1edbb1089f54
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x9f8a1edbb1089f54
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da114b6eac
        vdev_delta_ts = 0x4bd967
        vdev_read_errors = 0x0
        vdev_write_errors = 0x73
        vdev_cksum_errors = 0x2
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0f07e20b
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fbb9b000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x38
        zio_level = 0x0
        zio_blkid = 0x1
        time = 0x65560494 0x68a7db 
        eid = 0x24b

Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da1151dd4d00401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x9f8a1edbb1089f54
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x9f8a1edbb1089f54
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da114b6eac
        vdev_delta_ts = 0x4bd967
        vdev_read_errors = 0x0
        vdev_write_errors = 0x74
        vdev_cksum_errors = 0x2
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0f06269a
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fbb9a000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x34
        zio_level = 0x0
        zio_blkid = 0x0
        time = 0x65560494 0x68a7db 
        eid = 0x24c

Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da11527bda00401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x9f8a1edbb1089f54
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x9f8a1edbb1089f54
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da114b6eac
        vdev_delta_ts = 0x4bd967
        vdev_read_errors = 0x0
        vdev_write_errors = 0x75
        vdev_cksum_errors = 0x2
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0f054403
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fbb99000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x30
        zio_level = 0x0
        zio_blkid = 0x1
        time = 0x65560494 0x68a7db 
        eid = 0x24d

Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da1153079700401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x9f8a1edbb1089f54
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x9f8a1edbb1089f54
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da114b6eac
        vdev_delta_ts = 0x4bd967
        vdev_read_errors = 0x0
        vdev_write_errors = 0x76
        vdev_cksum_errors = 0x2
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0f04355d
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fbb98000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x1c
        zio_level = 0x0
        zio_blkid = 0x0
        time = 0x65560494 0x68a7db 
        eid = 0x24e

Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da115394d600401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x9f8a1edbb1089f54
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x9f8a1edbb1089f54
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da114b6eac
        vdev_delta_ts = 0x4bd967
        vdev_read_errors = 0x0
        vdev_write_errors = 0x77
        vdev_cksum_errors = 0x2
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0f032e66
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fbb97000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x1
        zio_level = 0x0
        zio_blkid = 0x1
        time = 0x65560494 0x68a7db 
        eid = 0x24f

Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da1154212800401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x9f8a1edbb1089f54
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x9f8a1edbb1089f54
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da114b6eac
        vdev_delta_ts = 0x4bd967
        vdev_read_errors = 0x0
        vdev_write_errors = 0x78
        vdev_cksum_errors = 0x2
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0f024b25
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fbb96000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x0
        zio_level = 0x0
        zio_blkid = 0xdc
        time = 0x65560494 0x68a7db 
        eid = 0x250

Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da1154b52500401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x9f8a1edbb1089f54
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x9f8a1edbb1089f54
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da114b6eac
        vdev_delta_ts = 0x4bd967
        vdev_read_errors = 0x0
        vdev_write_errors = 0x79
        vdev_cksum_errors = 0x2
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0f01227a
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fbb95000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x0
        zio_level = 0x0
        zio_blkid = 0x1e
        time = 0x65560494 0x68a7db 
        eid = 0x251

Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da1155432600401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x9f8a1edbb1089f54
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x9f8a1edbb1089f54
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da114b6eac
        vdev_delta_ts = 0x4bd967
        vdev_read_errors = 0x0
        vdev_write_errors = 0x7a
        vdev_cksum_errors = 0x2
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0f0014cd
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fbb94000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x0
        zio_level = 0x0
        zio_blkid = 0x1b
        time = 0x65560494 0x68a7db 
        eid = 0x252

Nov 16 2023 12:01:24.011858512 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da11a746ea00401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x942ee94b9fd31420
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x942ee94b9fd31420
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da11a6bc0c
        vdev_delta_ts = 0x7367fb
        vdev_read_errors = 0x0
        vdev_write_errors = 0x77
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x40080480
        zio_stage = 0x2000000
        zio_pipeline = 0x2100000
        zio_delay = 0x40c81
        zio_timestamp = 0x2da0e8793f2
        zio_delta = 0x71c1a8
        zio_priority = 0x3
        zio_offset = 0x414fb6d9000
        zio_size = 0x4000
        time = 0x65560494 0xb4f250 
        eid = 0x253

Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da0ee61b6c00401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x942ee94b9fd31420
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x942ee94b9fd31420
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da11a6bc0c
        vdev_delta_ts = 0x7367fb
        vdev_read_errors = 0x0
        vdev_write_errors = 0x78
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0e8a91a5
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fb6dc000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x3cf
        zio_level = 0x0
        zio_blkid = 0x0
        time = 0x65560494 0xc43468 
        eid = 0x254

Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da0ee61b6c00401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x8acc53be158c8a67
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x8acc53be158c8a67
        vdev_type = "mirror"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x0
        vdev_delta_ts = 0x0
        vdev_read_errors = 0x0
        vdev_write_errors = 0x57
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0x2336a45308072aca
        parent_type = "root"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x104080
        zio_stage = 0x2000000
        zio_pipeline = 0x2100000
        zio_delay = 0x0
        zio_timestamp = 0x0
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fb2dc000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x3cf
        zio_level = 0x0
        zio_blkid = 0x0
        time = 0x65560494 0xc43468 
        eid = 0x255

Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da0eea101e00401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x942ee94b9fd31420
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x942ee94b9fd31420
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da11a6bc0c
        vdev_delta_ts = 0x7367fb
        vdev_read_errors = 0x0
        vdev_write_errors = 0x79
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0e89bf0e
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fb6db000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x3cb
        zio_level = 0x1
        zio_blkid = 0x0
        time = 0x65560494 0xc43468 
        eid = 0x256

Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da0eea101e00401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x8acc53be158c8a67
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x8acc53be158c8a67
        vdev_type = "mirror"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x0
        vdev_delta_ts = 0x0
        vdev_read_errors = 0x0
        vdev_write_errors = 0x58
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0x2336a45308072aca
        parent_type = "root"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x104080
        zio_stage = 0x2000000
        zio_pipeline = 0x2100000
        zio_delay = 0x0
        zio_timestamp = 0x0
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fb2db000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x3cb
        zio_level = 0x1
        zio_blkid = 0x0
        time = 0x65560494 0xc43468 
        eid = 0x257

Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da0eebb8fa00401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x942ee94b9fd31420
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x942ee94b9fd31420
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da11a6bc0c
        vdev_delta_ts = 0x7367fb
        vdev_read_errors = 0x0
        vdev_write_errors = 0x7a
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0e88e642
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fb6da000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x3cb
        zio_level = 0x0
        zio_blkid = 0x1
        time = 0x65560494 0xc43468 
        eid = 0x258

Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da0eebb8fa00401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x8acc53be158c8a67
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x8acc53be158c8a67
        vdev_type = "mirror"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x0
        vdev_delta_ts = 0x0
        vdev_read_errors = 0x0
        vdev_write_errors = 0x59
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0x2336a45308072aca
        parent_type = "root"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x104080
        zio_stage = 0x2000000
        zio_pipeline = 0x2100000
        zio_delay = 0x0
        zio_timestamp = 0x0
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fb2da000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x3cb
        zio_level = 0x0
        zio_blkid = 0x1
        time = 0x65560494 0xc43468 
        eid = 0x259

Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
        class = "ereport.fs.zfs.io"
        ena = 0x2da0eed7a3a00401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x2336a45308072aca
                vdev = 0x942ee94b9fd31420
        (end detector)
        pool = "chest"
        pool_guid = 0x2336a45308072aca
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "wait"
        vdev_guid = 0x942ee94b9fd31420
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb"
        vdev_ashift = 0xc
        vdev_complete_ts = 0x2da11a6bc0c
        vdev_delta_ts = 0x7367fb
        vdev_read_errors = 0x0
        vdev_write_errors = 0x7b
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0x8acc53be158c8a67
        parent_type = "mirror"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x5
        zio_flags = 0x380080
        zio_stage = 0x2000000
        zio_pipeline = 0x2e00000
        zio_delay = 0x0
        zio_timestamp = 0x2da0e8793f2
        zio_delta = 0x0
        zio_priority = 0x3
        zio_offset = 0x414fb6d9000
        zio_size = 0x1000
        zio_objset = 0x0
        zio_object = 0x11d
        zio_level = 0x0
        zio_blkid = 0x0
        time = 0x65560494 0xc43468 
        eid = 0x25a

Nov 16 12:01:23 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488164118528 size=16384 flags=1074267264
Nov 16 12:01:23 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488164118528 size=16384 flags=1074267264
Nov 16 12:01:23 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488164118528 size=16384 flags=1074267264
Nov 16 12:01:23 penny.dragonfear zed[19233]: eid=562 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=16384 offset=4488164118528 priority=3 err=5 flags=0x40080480
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488169078784 size=36864 flags=1074267264
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488169078784 size=49152 flags=1074267264
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488164130816 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488164130816 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488164130816 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488169078784 size=49152 flags=1074267264
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488169111552 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488169111552 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear zed[19239]: eid=566 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=4096 offset=4488164118528 priority=3 err=5 flags=0x380080 bookmark=0:285:0:0
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488169111552 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear zed[19241]: eid=563 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=4096 offset=4488164130816 priority=3 err=5 flags=0x380080 bookmark=0:975:0:0
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172371968 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172371968 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172371968 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172371968 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172371968 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172371968 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear zed[19245]: eid=571 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488164118528 priority=3 err=5 flags=0x380080 bookmark=0:285:0:0
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear zed[19248]: eid=567 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=16384 offset=4488164118528 priority=3 err=5 flags=0x40080480
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear zed[19255]: eid=564 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=4096 offset=4488164126720 priority=3 err=5 flags=0x380080 bookmark=0:971:1:0
Nov 16 12:01:24 penny.dragonfear zed[19258]: eid=565 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=4096 offset=4488164122624 priority=3 err=5 flags=0x380080 bookmark=0:971:0:1
Nov 16 12:01:24 penny.dragonfear zed[19261]: eid=573 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169111552 priority=3 err=5 flags=0x380080 bookmark=0:60:0:0
Nov 16 12:01:24 penny.dragonfear zed[19265]: eid=575 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=4096 offset=4488169123840 priority=3 err=5 flags=0x380080 bookmark=0:48:1:0
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear zed[19266]: eid=572 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=36864 offset=4488169078784 priority=3 err=5 flags=0x40080480
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear zed[19268]: eid=574 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=49152 offset=4488169078784 priority=3 err=5 flags=0x40080480
Nov 16 12:01:24 penny.dragonfear zed[19269]: eid=568 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488164130816 priority=3 err=5 flags=0x380080 bookmark=0:975:0:0
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear zed[19252]: eid=570 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488164122624 priority=3 err=5 flags=0x380080 bookmark=0:971:0:1
Nov 16 12:01:24 penny.dragonfear zed[19271]: eid=576 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169107456 priority=3 err=5 flags=0x380080 bookmark=0:56:0:1
Nov 16 12:01:24 penny.dragonfear zed[19275]: eid=578 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169099264 priority=3 err=5 flags=0x380080 bookmark=0:48:0:1
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear zed[19281]: eid=579 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169095168 priority=3 err=5 flags=0x380080 bookmark=0:28:0:0
Nov 16 12:01:24 penny.dragonfear zed[19282]: eid=580 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169091072 priority=3 err=5 flags=0x380080 bookmark=0:1:0:1
Nov 16 12:01:24 penny.dragonfear zed[19278]: eid=569 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488164126720 priority=3 err=5 flags=0x380080 bookmark=0:971:1:0
Nov 16 12:01:24 penny.dragonfear zed[19286]: eid=581 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169086976 priority=3 err=5 flags=0x380080 bookmark=0:0:0:220
Nov 16 12:01:24 penny.dragonfear zed[19287]: eid=577 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169103360 priority=3 err=5 flags=0x380080 bookmark=0:52:0:0
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1572992
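
A small cross-check on the logs above: the kernel `zio` lines print `flags` in decimal while zed prints them in hex, so the same I/O shows up with two different-looking flag values. Converting the decimal values confirms they line up (e.g. the kernel's `flags=1074267264` matches zed's `flags=0x40080480` on eid=562, same offset and size):

```shell
# Convert the decimal zio flags from the kernel log lines to hex so they
# can be matched against the hex flags printed by zed.
printf '0x%x\n' 1074267264   # kernel flags=1074267264 -> 0x40080480 (zed eid=562)
printf '0x%x\n' 1572992      # kernel flags=1572992    -> 0x180080
```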

The media server experienced the failures first. After a day of running the same software on the backup machine (it's just a Borg backup server, no ZFS send or receive), I decided it had to be software instead of hardware. Both machines have ECC memory.


Rudd-O commented Nov 21, 2023

I can confirm that 2.2.0, as released in this repository, does not have the data corruption issue.

@KungFuJesus

2.2.0 or 2.2.1? That's a relief if it's 2.2.0, as that's what went into FreeBSD 14-RELEASE, right?


Rudd-O commented Nov 21, 2023

commit 95785196f26e92d82cf4445654ba84e4a9671c57 (tag: zfs-2.2.0, zfsonlinux/zfs-2.2-release)
Author: Brian Behlendorf <behlendorf1@llnl.gov>
Date:   Thu Oct 12 16:14:14 2023 -0700

    Tag 2.2.0
    
    New Features
    - Block cloning (#13392)
    - Linux container support (#14070, #14097, #12263)
    - Scrub error log (#12812, #12355)
    - BLAKE3 checksums (#12918)
    - Corrective "zfs receive"
    - Vdev and zpool user properties
    
    Performance
    - Fully adaptive ARC (#14359)
    - SHA2 checksums (#13741)
    - Edon-R checksums (#13618)
    - Zstd early abort (#13244)
    - Prefetch improvements (#14603, #14516, #14402, #14243, #13452)
    - General optimization (#14121, #14123, #14039, #13680, #13613,
      #13606, #13576, #13553, #12789, #14925, #14948)
    
    Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>

This is what I built and what I am running right now.


Rudd-O commented Nov 21, 2023

Tonight I'll try to upgrade my backup server to 2.2.1. Wish me luck.


Rudd-O commented Nov 22, 2023

I will report on the results after several days of testing.


chenxiaolong commented Nov 22, 2023

I just ran into this after upgrading to zfs 2.2.1 (immediately after reboot). I'm also on Fedora and running zfs on top of LUKS. I'm seeing write errors but no checksum errors, and zpool scrub gets interrupted almost immediately.

I'm pretty sure it's not a hardware issue. None of the drives report SMART errors, and each vdev shows a similar number of errors per drive even though the two drives in each mirror are connected via different paths (one via an LSI HBA, one via native Intel SATA).

I'm going to try downgrading to zfs 2.2.0 to see if that helps. Unfortunately, I can't downgrade further than that because I've already enabled the new zpool features.

  • OS: Fedora 39 x86_64
  • Kernel: 6.5.12-300.fc39.x86_64
  • ZFS: zfs-2.2.1-1.fc39.x86_64
zpool status -v
  pool: satapool0
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: resilvered 672K in 00:00:01 with 0 errors on Wed Nov 22 11:17:25 2023
config:

        NAME                                             STATE     READ WRITE CKSUM
        satapool0                                        DEGRADED     0     0     0
          mirror-0                                       ONLINE       0     2     0
            luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7SW0A  ONLINE       0     2     0
            luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7AWJG  ONLINE       0     2     0
          mirror-1                                       DEGRADED     0     9     0
            luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGH1YPP  DEGRADED     0    10     0  too many errors
            luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GTU87C  FAULTED      0    10     0  too many errors
          mirror-2                                       ONLINE       0     6     0
            luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG3WA0A  ONLINE       0     7     0
            luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG5L6HA  ONLINE       0     7     0
          mirror-3                                       DEGRADED     0    33     0
            luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGUWTKP  DEGRADED     0    34     0  too many errors
            luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GSX1RC  FAULTED      0    31     0  too many errors
          mirror-4                                       ONLINE       0     2     0
            luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_QBHTL05T  ONLINE       0     2     0
            luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG832LA  ONLINE       0     2     0
          mirror-5                                       DEGRADED     0    29     0
            luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_3WH05X7J  DEGRADED     0    30     0  too many errors
            luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_3HG62TPN  FAULTED      0    28     0  too many errors
          mirror-6                                       ONLINE       0     2     0
            luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG86A8A  ONLINE       0     2     0
            luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_Y5J2Z08C  ONLINE       0     2     0
          mirror-7                                       ONLINE       0     5     0
            luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CHDJSXN  ONLINE       0     5     0
            luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CH3DWZP  ONLINE       0     5     0
        logs
          luks-zfs-nvme-PX04PMC384_77N0A00RTZ6D-part1    ONLINE       0     0     0
        cache
          luks-zfs-nvme-PX04PMC384_77N0A00RTZ6D-part2    ONLINE       0     0     0

errors: No known data errors
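
For anyone who hits the same counters: since the pool itself reports no known data errors, the accumulated WRITE errors can be reset and the pool re-verified once a build without the regression is installed. A minimal sketch, assuming the pool name from the status output above:

```
zpool clear satapool0        # reset the READ/WRITE/CKSUM counters
zpool scrub satapool0        # re-verify on-disk integrity
zpool status -v satapool0    # confirm the counters stay at zero
```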
dmesg

[    0.000000] microcode: updated early: 0x113 -> 0x11d, date = 2023-08-29
[    0.000000] Linux version 6.5.12-300.fc39.x86_64 (mockbuild@cda4963b6857459f9d1b40ea59f8a44a) (gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4), GNU ld version 2.40-13.fc39) #1 SMP PREEMPT_DYNAMIC Mon Nov 20 22:44:24 UTC 2023
[    0.000000] Command line: root=UUID=e3af9a0d-aa4f-4d81-b315-97fa46206986 ro rd.luks.uuid=luks-cf6bc8bd-2dfb-4b60-b004-4a283c0d2d42 rhgb quiet console=tty0 console=ttyS1,115200n8 intel_iommu=on rd.shell=0 systemd.machine_id=ec7321adfdbb412093efcdc435abb26e
[    0.000000] x86/split lock detection: #AC: crashing the kernel on kernel split_locks and warning on user-space split_locks
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009dfff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009e000-0x000000000009efff] reserved
[    0.000000] BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] usable
[    0.000000] BIOS-e820: [mem 0x00000000000a0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007177dfff] usable
[    0.000000] BIOS-e820: [mem 0x000000007177e000-0x000000007487dfff] reserved
[    0.000000] BIOS-e820: [mem 0x000000007487e000-0x000000007499cfff] ACPI data
[    0.000000] BIOS-e820: [mem 0x000000007499d000-0x0000000074ac9fff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x0000000074aca000-0x0000000075ffefff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000075fff000-0x0000000075ffffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000076000000-0x0000000079ffffff] reserved
[    0.000000] BIOS-e820: [mem 0x000000007a800000-0x000000007abfffff] reserved
[    0.000000] BIOS-e820: [mem 0x000000007b000000-0x00000000803fffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000c0000000-0x00000000cfffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fed00000-0x00000000fed00fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fed20000-0x00000000fed7ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000107fbfffff] usable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] e820: update [mem 0x66bb6018-0x66bfbe57] usable ==> usable
[    0.000000] e820: update [mem 0x66bb6018-0x66bfbe57] usable ==> usable
[    0.000000] e820: update [mem 0x66b91018-0x66bb5657] usable ==> usable
[    0.000000] e820: update [mem 0x66b91018-0x66bb5657] usable ==> usable
[    0.000000] e820: update [mem 0x63b68018-0x63b8c657] usable ==> usable
[    0.000000] e820: update [mem 0x63b68018-0x63b8c657] usable ==> usable
[    0.000000] e820: update [mem 0x63b36018-0x63b67657] usable ==> usable
[    0.000000] e820: update [mem 0x63b36018-0x63b67657] usable ==> usable
[    0.000000] e820: update [mem 0x66b83018-0x66b90657] usable ==> usable
[    0.000000] e820: update [mem 0x66b83018-0x66b90657] usable ==> usable
[    0.000000] extended physical RAM map:
[    0.000000] reserve setup_data: [mem 0x0000000000000000-0x000000000009dfff] usable
[    0.000000] reserve setup_data: [mem 0x000000000009e000-0x000000000009efff] reserved
[    0.000000] reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] usable
[    0.000000] reserve setup_data: [mem 0x00000000000a0000-0x00000000000fffff] reserved
[    0.000000] reserve setup_data: [mem 0x0000000000100000-0x0000000063b36017] usable
[    0.000000] reserve setup_data: [mem 0x0000000063b36018-0x0000000063b67657] usable
[    0.000000] reserve setup_data: [mem 0x0000000063b67658-0x0000000063b68017] usable
[    0.000000] reserve setup_data: [mem 0x0000000063b68018-0x0000000063b8c657] usable
[    0.000000] reserve setup_data: [mem 0x0000000063b8c658-0x0000000066b83017] usable
[    0.000000] reserve setup_data: [mem 0x0000000066b83018-0x0000000066b90657] usable
[    0.000000] reserve setup_data: [mem 0x0000000066b90658-0x0000000066b91017] usable
[    0.000000] reserve setup_data: [mem 0x0000000066b91018-0x0000000066bb5657] usable
[    0.000000] reserve setup_data: [mem 0x0000000066bb5658-0x0000000066bb6017] usable
[    0.000000] reserve setup_data: [mem 0x0000000066bb6018-0x0000000066bfbe57] usable
[    0.000000] reserve setup_data: [mem 0x0000000066bfbe58-0x000000007177dfff] usable
[    0.000000] reserve setup_data: [mem 0x000000007177e000-0x000000007487dfff] reserved
[    0.000000] reserve setup_data: [mem 0x000000007487e000-0x000000007499cfff] ACPI data
[    0.000000] reserve setup_data: [mem 0x000000007499d000-0x0000000074ac9fff] ACPI NVS
[    0.000000] reserve setup_data: [mem 0x0000000074aca000-0x0000000075ffefff] reserved
[    0.000000] reserve setup_data: [mem 0x0000000075fff000-0x0000000075ffffff] usable
[    0.000000] reserve setup_data: [mem 0x0000000076000000-0x0000000079ffffff] reserved
[    0.000000] reserve setup_data: [mem 0x000000007a800000-0x000000007abfffff] reserved
[    0.000000] reserve setup_data: [mem 0x000000007b000000-0x00000000803fffff] reserved
[    0.000000] reserve setup_data: [mem 0x00000000c0000000-0x00000000cfffffff] reserved
[    0.000000] reserve setup_data: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
[    0.000000] reserve setup_data: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] reserve setup_data: [mem 0x00000000fed00000-0x00000000fed00fff] reserved
[    0.000000] reserve setup_data: [mem 0x00000000fed20000-0x00000000fed7ffff] reserved
[    0.000000] reserve setup_data: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
[    0.000000] reserve setup_data: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] reserve setup_data: [mem 0x0000000100000000-0x000000107fbfffff] usable
[    0.000000] efi: EFI v2.8 by American Megatrends
[    0.000000] efi: ACPI=0x74a26000 ACPI 2.0=0x74a26014 TPMFinalLog=0x749f5000 SMBIOS=0x75bbc000 SMBIOS 3.0=0x75bbb000 MEMATTR=0x692e7018 RNG=0x748b6f18 INITRD=0x692e1d98 TPMEventLog=0x66bfc018 
[    0.000000] random: crng init done
[    0.000000] efi: Remove mem286: MMIO range=[0xc0000000-0xcfffffff] (256MB) from e820 map
[    0.000000] e820: remove [mem 0xc0000000-0xcfffffff] reserved
[    0.000000] efi: Not removing mem287: MMIO range=[0xfe000000-0xfe010fff] (68KB) from e820 map
[    0.000000] efi: Not removing mem288: MMIO range=[0xfec00000-0xfec00fff] (4KB) from e820 map
[    0.000000] efi: Not removing mem289: MMIO range=[0xfed00000-0xfed00fff] (4KB) from e820 map
[    0.000000] efi: Not removing mem291: MMIO range=[0xfee00000-0xfee00fff] (4KB) from e820 map
[    0.000000] efi: Remove mem292: MMIO range=[0xff000000-0xffffffff] (16MB) from e820 map
[    0.000000] e820: remove [mem 0xff000000-0xffffffff] reserved
[    0.000000] secureboot: Secure boot enabled
[    0.000000] Kernel is locked down from EFI Secure Boot mode; see man kernel_lockdown.7
[    0.000000] SMBIOS 3.5.0 present.
[    0.000000] DMI: Supermicro Super Server/X13SAE-F, BIOS 2.1 04/06/2023
[    0.000000] tsc: Detected 3500.000 MHz processor
[    0.000000] tsc: Detected 3494.400 MHz TSC
[    0.001513] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.001515] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.001525] last_pfn = 0x107fc00 max_arch_pfn = 0x400000000
[    0.001529] total RAM covered: 128960M
[    0.001612] Found optimal setting for mtrr clean up
[    0.001613]  gran_size: 64K 	chunk_size: 128M 	num_reg: 7  	lose cover RAM: 0G
[    0.001616] MTRR map: 6 entries (3 fixed + 3 variable; max 23), built from 10 variable MTRRs
[    0.001618] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
[    0.002157] e820: update [mem 0x7c000000-0xffffffff] usable ==> reserved
[    0.002160] last_pfn = 0x76000 max_arch_pfn = 0x400000000
[    0.018585] Using GB pages for direct mapping
[    0.018586] Incomplete global flushes, disabling PCID
[    0.018768] secureboot: Secure boot enabled
[    0.018769] RAMDISK: [mem 0x5fdc8000-0x6273dfff]
[    0.018773] ACPI: Early table checksum verification disabled
[    0.018776] ACPI: RSDP 0x0000000074A26014 000024 (v02 SUPERM)
[    0.018780] ACPI: XSDT 0x0000000074A25728 000144 (v01 SUPERM SMCI--MB 01072009 AMI  01000013)
[    0.018785] ACPI: FACP 0x000000007499A000 000114 (v06 SUPERM SMCI--MB 01072009 AMI  01000013)
[    0.018789] ACPI: DSDT 0x00000000748FB000 09CE12 (v02 SUPERM SMCI--MB 01072009 INTL 20200717)
[    0.018791] ACPI: FACS 0x0000000074AC8000 000040
[    0.018793] ACPI: SPMI 0x0000000074999000 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
[    0.018796] ACPI: SPMI 0x0000000074998000 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
[    0.018798] ACPI: FIDT 0x00000000748FA000 00009C (v01 SUPERM SMCI--MB 01072009 AMI  00010013)
[    0.018800] ACPI: SSDT 0x000000007499C000 00038C (v02 PmaxDv Pmax_Dev 00000001 INTL 20200717)
[    0.018803] ACPI: SSDT 0x00000000748F4000 005C55 (v02 CpuRef CpuSsdt  00003000 INTL 20200717)
[    0.018805] ACPI: SSDT 0x00000000748F1000 002B7D (v02 SaSsdt SaSsdt   00003000 INTL 20200717)
[    0.018807] ACPI: SSDT 0x00000000748ED000 003359 (v02 INTEL  IgfxSsdt 00003000 INTL 20200717)
[    0.018810] ACPI: HPET 0x000000007499B000 000038 (v01 SUPERM SMCI--MB 01072009 AMI  01000013)
[    0.018812] ACPI: APIC 0x00000000748EC000 0001DC (v05 SUPERM SMCI--MB 01072009 AMI  01000013)
[    0.018814] ACPI: MCFG 0x00000000748EB000 00003C (v01 SUPERM SMCI--MB 01072009 AMI  01000013)
[    0.018816] ACPI: SSDT 0x00000000748E1000 009350 (v02 SUPERM AdlS_Rvp 00001000 INTL 20200717)
[    0.018818] ACPI: SSDT 0x00000000748DF000 001F1A (v02 SUPERM Ther_Rvp 00001000 INTL 20200717)
[    0.018820] ACPI: UEFI 0x00000000749DC000 000048 (v01 SUPERM SMCI--MB 01072009 AMI  01000013)
[    0.018823] ACPI: NHLT 0x00000000748DE000 00002D (v00 SUPERM SMCI--MB 01072009 AMI  01000013)
[    0.018825] ACPI: LPIT 0x00000000748DD000 0000CC (v01 SUPERM SMCI--MB 01072009 AMI  01000013)
[    0.018827] ACPI: SSDT 0x00000000748D9000 002A83 (v02 SUPERM PtidDevc 00001000 INTL 20200717)
[    0.018829] ACPI: SSDT 0x00000000748D0000 008F27 (v02 SUPERM TbtTypeC 00000000 INTL 20200717)
[    0.018831] ACPI: DBGP 0x00000000748CF000 000034 (v01 SUPERM SMCI--MB 01072009 AMI  01000013)
[    0.018833] ACPI: DBG2 0x00000000748CE000 000054 (v00 SUPERM SMCI--MB 01072009 AMI  01000013)
[    0.018836] ACPI: SSDT 0x00000000748CC000 00190A (v02 SUPERM UsbCTabl 00001000 INTL 20200717)
[    0.018838] ACPI: DMAR 0x00000000748CB000 000088 (v02 INTEL  EDK2     00000002      01000013)
[    0.018841] ACPI: FPDT 0x00000000748CA000 000044 (v01 SUPERM A M I    01072009 AMI  01000013)
[    0.018843] ACPI: SSDT 0x00000000748C8000 0012DA (v02 INTEL  xh_adls3 00000000 INTL 20200717)
[    0.018845] ACPI: SSDT 0x00000000748C4000 003AEA (v02 SocGpe SocGpe   00003000 INTL 20200717)
[    0.018847] ACPI: SSDT 0x00000000748C0000 0039DA (v02 SocCmn SocCmn   00003000 INTL 20200717)
[    0.018849] ACPI: SSDT 0x00000000748BF000 000144 (v02 Intel  ADebTabl 00001000 INTL 20200717)
[    0.018850] ACPI: BGRT 0x00000000748BE000 000038 (v01 SUPERM SMCI--MB 01072009 AMI  00010013)
[    0.018852] ACPI: TPM2 0x00000000748BD000 00004C (v04 SUPERM SMCI--MB 00000001 AMI  00000000)
[    0.018854] ACPI: PHAT 0x00000000748BB000 0005F1 (v01 SUPERM SMCI--MB 00000005 MSFT 0100000D)
[    0.018856] ACPI: ASF! 0x00000000748BC000 000074 (v32 SUPERM SMCI--MB 01072009 AMI  01000013)
[    0.018858] ACPI: WSMT 0x00000000748DC000 000028 (v01 SUPERM SMCI--MB 01072009 AMI  00010013)
[    0.018860] ACPI: EINJ 0x00000000748BA000 000130 (v01 AMI    AMI.EINJ 00000000 AMI. 00000000)
[    0.018862] ACPI: ERST 0x00000000748B9000 000230 (v01 AMIER  AMI.ERST 00000000 AMI. 00000000)
[    0.018864] ACPI: BERT 0x00000000748B8000 000030 (v01 AMI    AMI.BERT 00000000 AMI. 00000000)
[    0.018866] ACPI: HEST 0x00000000748B7000 0000A8 (v01 AMI    AMI.HEST 00000000 AMI. 00000000)
[    0.018867] ACPI: Reserving FACP table memory at [mem 0x7499a000-0x7499a113]
[    0.018868] ACPI: Reserving DSDT table memory at [mem 0x748fb000-0x74997e11]
[    0.018869] ACPI: Reserving FACS table memory at [mem 0x74ac8000-0x74ac803f]
[    0.018869] ACPI: Reserving SPMI table memory at [mem 0x74999000-0x74999040]
[    0.018870] ACPI: Reserving SPMI table memory at [mem 0x74998000-0x74998040]
[    0.018871] ACPI: Reserving FIDT table memory at [mem 0x748fa000-0x748fa09b]
[    0.018872] ACPI: Reserving SSDT table memory at [mem 0x7499c000-0x7499c38b]
[    0.018872] ACPI: Reserving SSDT table memory at [mem 0x748f4000-0x748f9c54]
[    0.018873] ACPI: Reserving SSDT table memory at [mem 0x748f1000-0x748f3b7c]
[    0.018874] ACPI: Reserving SSDT table memory at [mem 0x748ed000-0x748f0358]
[    0.018874] ACPI: Reserving HPET table memory at [mem 0x7499b000-0x7499b037]
[    0.018875] ACPI: Reserving APIC table memory at [mem 0x748ec000-0x748ec1db]
[    0.018876] ACPI: Reserving MCFG table memory at [mem 0x748eb000-0x748eb03b]
[    0.018876] ACPI: Reserving SSDT table memory at [mem 0x748e1000-0x748ea34f]
[    0.018877] ACPI: Reserving SSDT table memory at [mem 0x748df000-0x748e0f19]
[    0.018877] ACPI: Reserving UEFI table memory at [mem 0x749dc000-0x749dc047]
[    0.018878] ACPI: Reserving NHLT table memory at [mem 0x748de000-0x748de02c]
[    0.018879] ACPI: Reserving LPIT table memory at [mem 0x748dd000-0x748dd0cb]
[    0.018879] ACPI: Reserving SSDT table memory at [mem 0x748d9000-0x748dba82]
[    0.018880] ACPI: Reserving SSDT table memory at [mem 0x748d0000-0x748d8f26]
[    0.018881] ACPI: Reserving DBGP table memory at [mem 0x748cf000-0x748cf033]
[    0.018881] ACPI: Reserving DBG2 table memory at [mem 0x748ce000-0x748ce053]
[    0.018882] ACPI: Reserving SSDT table memory at [mem 0x748cc000-0x748cd909]
[    0.018883] ACPI: Reserving DMAR table memory at [mem 0x748cb000-0x748cb087]
[    0.018883] ACPI: Reserving FPDT table memory at [mem 0x748ca000-0x748ca043]
[    0.018884] ACPI: Reserving SSDT table memory at [mem 0x748c8000-0x748c92d9]
[    0.018885] ACPI: Reserving SSDT table memory at [mem 0x748c4000-0x748c7ae9]
[    0.018885] ACPI: Reserving SSDT table memory at [mem 0x748c0000-0x748c39d9]
[    0.018886] ACPI: Reserving SSDT table memory at [mem 0x748bf000-0x748bf143]
[    0.018887] ACPI: Reserving BGRT table memory at [mem 0x748be000-0x748be037]
[    0.018887] ACPI: Reserving TPM2 table memory at [mem 0x748bd000-0x748bd04b]
[    0.018888] ACPI: Reserving PHAT table memory at [mem 0x748bb000-0x748bb5f0]
[    0.018889] ACPI: Reserving ASF! table memory at [mem 0x748bc000-0x748bc073]
[    0.018889] ACPI: Reserving WSMT table memory at [mem 0x748dc000-0x748dc027]
[    0.018890] ACPI: Reserving EINJ table memory at [mem 0x748ba000-0x748ba12f]
[    0.018891] ACPI: Reserving ERST table memory at [mem 0x748b9000-0x748b922f]
[    0.018891] ACPI: Reserving BERT table memory at [mem 0x748b8000-0x748b802f]
[    0.018892] ACPI: Reserving HEST table memory at [mem 0x748b7000-0x748b70a7]
[    0.019056] No NUMA configuration found
[    0.019056] Faking a node at [mem 0x0000000000000000-0x000000107fbfffff]
[    0.019062] NODE_DATA(0) allocated [mem 0x107fbd5000-0x107fbfffff]
[    0.019226] Zone ranges:
[    0.019226]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[    0.019228]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
[    0.019229]   Normal   [mem 0x0000000100000000-0x000000107fbfffff]
[    0.019230]   Device   empty
[    0.019231] Movable zone start for each node
[    0.019232] Early memory node ranges
[    0.019232]   node   0: [mem 0x0000000000001000-0x000000000009dfff]
[    0.019233]   node   0: [mem 0x000000000009f000-0x000000000009ffff]
[    0.019234]   node   0: [mem 0x0000000000100000-0x000000007177dfff]
[    0.019234]   node   0: [mem 0x0000000075fff000-0x0000000075ffffff]
[    0.019235]   node   0: [mem 0x0000000100000000-0x000000107fbfffff]
[    0.019238] Initmem setup node 0 [mem 0x0000000000001000-0x000000107fbfffff]
[    0.019242] On node 0, zone DMA: 1 pages in unavailable ranges
[    0.019243] On node 0, zone DMA: 1 pages in unavailable ranges
[    0.019261] On node 0, zone DMA: 96 pages in unavailable ranges
[    0.021359] On node 0, zone DMA32: 18561 pages in unavailable ranges
[    0.095422] On node 0, zone Normal: 8192 pages in unavailable ranges
[    0.095428] On node 0, zone Normal: 1024 pages in unavailable ranges
[    0.096453] ACPI: PM-Timer IO Port: 0x1808
[    0.096460] ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
[    0.096462] ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
[    0.096463] ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
[    0.096463] ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
[    0.096464] ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
[    0.096464] ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
[    0.096464] ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
[    0.096465] ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
[    0.096465] ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
[    0.096466] ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
[    0.096466] ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
[    0.096467] ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
[    0.096467] ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
[    0.096468] ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
[    0.096468] ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
[    0.096469] ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
[    0.096469] ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
[    0.096470] ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
[    0.096470] ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
[    0.096470] ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
[    0.096471] ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
[    0.096471] ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
[    0.096472] ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
[    0.096472] ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
[    0.096508] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
[    0.096510] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.096512] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.096515] ACPI: Using ACPI (MADT) for SMP configuration information
[    0.096516] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[    0.096527] e820: update [mem 0x6867d000-0x688bdfff] usable ==> reserved
[    0.096540] TSC deadline timer available
[    0.096541] smpboot: Allowing 20 CPUs, 0 hotplug CPUs
[    0.096558] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
[    0.096560] PM: hibernation: Registered nosave memory: [mem 0x0009e000-0x0009efff]
[    0.096561] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000fffff]
[    0.096562] PM: hibernation: Registered nosave memory: [mem 0x63b36000-0x63b36fff]
[    0.096563] PM: hibernation: Registered nosave memory: [mem 0x63b67000-0x63b67fff]
[    0.096564] PM: hibernation: Registered nosave memory: [mem 0x63b68000-0x63b68fff]
[    0.096565] PM: hibernation: Registered nosave memory: [mem 0x63b8c000-0x63b8cfff]
[    0.096566] PM: hibernation: Registered nosave memory: [mem 0x66b83000-0x66b83fff]
[    0.096567] PM: hibernation: Registered nosave memory: [mem 0x66b90000-0x66b90fff]
[    0.096567] PM: hibernation: Registered nosave memory: [mem 0x66b91000-0x66b91fff]
[    0.096569] PM: hibernation: Registered nosave memory: [mem 0x66bb5000-0x66bb5fff]
[    0.096569] PM: hibernation: Registered nosave memory: [mem 0x66bb6000-0x66bb6fff]
[    0.096570] PM: hibernation: Registered nosave memory: [mem 0x66bfb000-0x66bfbfff]
[    0.096571] PM: hibernation: Registered nosave memory: [mem 0x6867d000-0x688bdfff]
[    0.096572] PM: hibernation: Registered nosave memory: [mem 0x7177e000-0x7487dfff]
[    0.096573] PM: hibernation: Registered nosave memory: [mem 0x7487e000-0x7499cfff]
[    0.096573] PM: hibernation: Registered nosave memory: [mem 0x7499d000-0x74ac9fff]
[    0.096574] PM: hibernation: Registered nosave memory: [mem 0x74aca000-0x75ffefff]
[    0.096575] PM: hibernation: Registered nosave memory: [mem 0x76000000-0x79ffffff]
[    0.096575] PM: hibernation: Registered nosave memory: [mem 0x7a000000-0x7a7fffff]
[    0.096576] PM: hibernation: Registered nosave memory: [mem 0x7a800000-0x7abfffff]
[    0.096576] PM: hibernation: Registered nosave memory: [mem 0x7ac00000-0x7affffff]
[    0.096577] PM: hibernation: Registered nosave memory: [mem 0x7b000000-0x803fffff]
[    0.096577] PM: hibernation: Registered nosave memory: [mem 0x80400000-0xfdffffff]
[    0.096577] PM: hibernation: Registered nosave memory: [mem 0xfe000000-0xfe010fff]
[    0.096578] PM: hibernation: Registered nosave memory: [mem 0xfe011000-0xfebfffff]
[    0.096578] PM: hibernation: Registered nosave memory: [mem 0xfec00000-0xfec00fff]
[    0.096579] PM: hibernation: Registered nosave memory: [mem 0xfec01000-0xfecfffff]
[    0.096579] PM: hibernation: Registered nosave memory: [mem 0xfed00000-0xfed00fff]
[    0.096580] PM: hibernation: Registered nosave memory: [mem 0xfed01000-0xfed1ffff]
[    0.096580] PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfed7ffff]
[    0.096580] PM: hibernation: Registered nosave memory: [mem 0xfed80000-0xfedfffff]
[    0.096581] PM: hibernation: Registered nosave memory: [mem 0xfee00000-0xfee00fff]
[    0.096581] PM: hibernation: Registered nosave memory: [mem 0xfee01000-0xffffffff]
[    0.096583] [mem 0x80400000-0xfdffffff] available for PCI devices
[    0.096584] Booting paravirtualized kernel on bare hardware
[    0.096586] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
[    0.102045] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:20 nr_cpu_ids:20 nr_node_ids:1
[    0.102722] percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
[    0.102727] pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
[    0.102728] pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 
[    0.102735] pcpu-alloc: [0] 16 17 18 19 -- -- -- -- 
[    0.102753] Kernel command line: root=UUID=e3af9a0d-aa4f-4d81-b315-97fa46206986 ro rd.luks.uuid=luks-cf6bc8bd-2dfb-4b60-b004-4a283c0d2d42 rhgb quiet console=tty0 console=ttyS1,115200n8 intel_iommu=on rd.shell=0 systemd.machine_id=ec7321adfdbb412093efcdc435abb26e
[    0.102812] DMAR: IOMMU enabled
[    0.102832] Unknown kernel command line parameters "rhgb", will be passed to user space.
[    0.107630] Dentry cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
[    0.110042] Inode-cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
[    0.110223] Fallback order for Node 0: 0 
[    0.110226] Built 1 zonelists, mobility grouping on.  Total pages: 16455217
[    0.110227] Policy zone: Normal
[    0.110396] mem auto-init: stack:all(zero), heap alloc:off, heap free:off
[    0.110403] software IO TLB: area num 32.
[    0.209816] Memory: 65361156K/66866292K available (18432K kernel code, 3267K rwdata, 14476K rodata, 4532K init, 17364K bss, 1504876K reserved, 0K cma-reserved)
[    0.210017] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=20, Nodes=1
[    0.210037] ftrace: allocating 53614 entries in 210 pages
[    0.216433] ftrace: allocated 210 pages with 4 groups
[    0.217060] Dynamic Preempt: voluntary
[    0.217103] rcu: Preemptible hierarchical RCU implementation.
[    0.217103] rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=20.
[    0.217104] 	Trampoline variant of Tasks RCU enabled.
[    0.217105] 	Rude variant of Tasks RCU enabled.
[    0.217105] 	Tracing variant of Tasks RCU enabled.
[    0.217106] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
[    0.217107] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=20
[    0.219095] NR_IRQS: 524544, nr_irqs: 2216, preallocated irqs: 16
[    0.219394] rcu: srcu_init: Setting srcu_struct sizes based on contention.
[    0.219650] kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
[    0.219682] Console: colour dummy device 80x25
[    0.219684] printk: console [tty0] enabled
[    0.219734] printk: console [ttyS1] enabled
[    0.219774] ACPI: Core revision 20230331
[    0.220123] hpet: HPET dysfunctional in PC10. Force disabled.
[    0.220124] APIC: Switch to symmetric I/O mode setup
[    0.220126] DMAR: Host address width 46
[    0.220127] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.220133] DMAR: dmar0: reg_base_addr fed90000 ver 4:0 cap 1c0000c40660462 ecap 29a00f0505e
[    0.220135] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.220139] DMAR: dmar1: reg_base_addr fed91000 ver 5:0 cap d2008c40660462 ecap f050da
[    0.220141] DMAR: RMRR base: 0x0000007c000000 end: 0x000000803fffff
[    0.220143] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.220144] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.220145] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.221720] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.221722] x2apic enabled
[    0.221741] Switched APIC routing to cluster x2apic.
[    0.226137] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x325ea749ca1, max_idle_ns: 440795373125 ns
[    0.226143] Calibrating delay loop (skipped), value calculated using timer frequency.. 6988.80 BogoMIPS (lpj=3494400)
[    0.226191] x86/tme: enabled by BIOS
[    0.226192] x86/tme: Unknown policy is active: 0x2
[    0.226193] x86/mktme: No known encryption algorithm is supported: 0x4
[    0.226194] x86/mktme: enabled by BIOS
[    0.226194] x86/mktme: 15 KeyIDs available
[    0.226201] CPU0: Thermal monitoring enabled (TM1)
[    0.226203] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[    0.226329] process: using mwait in idle threads
[    0.226331] CET detected: Indirect Branch Tracking enabled
[    0.226332] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[    0.226333] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
[    0.226335] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[    0.226338] Spectre V2 : Mitigation: Enhanced / Automatic IBRS
[    0.226338] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[    0.226339] Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
[    0.226340] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
[    0.226342] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
[    0.226350] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[    0.226351] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.226352] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.226352] x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
[    0.226353] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.226354] x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
[    0.226355] x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
[    0.227141] Freeing SMP alternatives memory: 48K
[    0.227141] pid_max: default: 32768 minimum: 301
[    0.227141] LSM: initializing lsm=lockdown,capability,yama,selinux,bpf,landlock,integrity
[    0.227141] Yama: becoming mindful.
[    0.227141] SELinux:  Initializing.
[    0.227141] LSM support for eBPF active
[    0.227141] landlock: Up and running.
[    0.227141] Mount-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.227141] Mountpoint-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.227141] smpboot: CPU0: 13th Gen Intel(R) Core(TM) i5-13600K (family: 0x6, model: 0xb7, stepping: 0x1)
[    0.227141] RCU Tasks: Setting shift to 5 and lim to 1 rcu_task_cb_adjust=1.
[    0.227141] RCU Tasks Rude: Setting shift to 5 and lim to 1 rcu_task_cb_adjust=1.
[    0.227141] RCU Tasks Trace: Setting shift to 5 and lim to 1 rcu_task_cb_adjust=1.
[    0.227141] Performance Events: XSAVE Architectural LBR, PEBS fmt4+-baseline,  AnyThread deprecated, Alderlake Hybrid events, 32-deep LBR, full-width counters, Intel PMU driver.
[    0.227141] core: cpu_core PMU driver: 
[    0.227141] ... version:                5
[    0.227141] ... bit width:              48
[    0.227141] ... generic registers:      8
[    0.227141] ... value mask:             0000ffffffffffff
[    0.227141] ... max period:             00007fffffffffff
[    0.227141] ... fixed-purpose events:   4
[    0.227141] ... event mask:             0001000f000000ff
[    0.227141] signal: max sigframe size: 3632
[    0.227141] Estimated ratio of average max frequency by base frequency (times 1024): 1492
[    0.227141] rcu: Hierarchical SRCU implementation.
[    0.227141] rcu: 	Max phase no-delay instances is 400.
[    0.227220] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
[    0.227334] smp: Bringing up secondary CPUs ...
[    0.227393] smpboot: x86: Booting SMP configuration:
[    0.227394] .... node  #0, CPUs:        #2  #4  #6  #8 #10 #12 #13 #14 #15 #16 #17 #18 #19
[    0.007566] core: cpu_atom PMU driver: PEBS-via-PT 
[    0.007566] ... version:                5
[    0.007566] ... bit width:              48
[    0.007566] ... generic registers:      6
[    0.007566] ... value mask:             0000ffffffffffff
[    0.007566] ... max period:             00007fffffffffff
[    0.007566] ... fixed-purpose events:   3
[    0.007566] ... event mask:             000000070000003f
[    0.239215]   #1  #3  #5  #7  #9 #11
[    0.243173] smp: Brought up 1 node, 20 CPUs
[    0.243173] smpboot: Max logical packages: 1
[    0.243173] smpboot: Total of 20 processors activated (139776.00 BogoMIPS)
[    0.245697] devtmpfs: initialized
[    0.245697] x86/mm: Memory block size: 2048MB
[    0.245697] ACPI: PM: Registering ACPI NVS region [mem 0x7499d000-0x74ac9fff] (1232896 bytes)
[    0.245697] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[    0.245697] futex hash table entries: 8192 (order: 7, 524288 bytes, linear)
[    0.246202] pinctrl core: initialized pinctrl subsystem
[    0.246417] PM: RTC time: 16:01:02, date: 2023-11-22
[    0.246694] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[    0.246795] DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations
[    0.246798] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
[    0.246800] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[    0.246809] audit: initializing netlink subsys (disabled)
[    0.246813] audit: type=2000 audit(1700668862.020:1): state=initialized audit_enabled=0 res=1
[    0.246813] thermal_sys: Registered thermal governor 'fair_share'
[    0.246813] thermal_sys: Registered thermal governor 'bang_bang'
[    0.246813] thermal_sys: Registered thermal governor 'step_wise'
[    0.246813] thermal_sys: Registered thermal governor 'user_space'
[    0.246813] cpuidle: using governor menu
[    0.246813] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    0.247155] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xc0000000-0xcfffffff] (base 0xc0000000)
[    0.247160] PCI: not using MMCONFIG
[    0.247161] PCI: Using configuration type 1 for base access
[    0.247434] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    0.247435] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
[    0.247435] HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
[    0.247435] HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
[    0.247435] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
[    0.247435] HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
[    0.247435] cryptd: max_cpu_qlen set to 1000
[    0.247435] raid6: skipped pq benchmark and selected avx2x4
[    0.247435] raid6: using avx2x2 recovery algorithm
[    0.247435] ACPI: Added _OSI(Module Device)
[    0.247435] ACPI: Added _OSI(Processor Device)
[    0.247435] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.247435] ACPI: Added _OSI(Processor Aggregator Device)
[    0.324645] ACPI: 14 ACPI AML tables successfully acquired and loaded
[    0.335625] ACPI: Dynamic OEM Table Load:
[    0.335634] ACPI: SSDT 0xFFFF92CAC2609A00 0001AB (v02 PmRef  Cpu0Psd  00003000 INTL 20200717)
[    0.336269] ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
[    0.339825] ACPI: Dynamic OEM Table Load:
[    0.339831] ACPI: SSDT 0xFFFF92CAC16A8800 000394 (v02 PmRef  Cpu0Cst  00003001 INTL 20200717)
[    0.340598] ACPI: Dynamic OEM Table Load:
[    0.340604] ACPI: SSDT 0xFFFF92CAC2687000 0006AA (v02 PmRef  Cpu0Ist  00003000 INTL 20200717)
[    0.341409] ACPI: Dynamic OEM Table Load:
[    0.341414] ACPI: SSDT 0xFFFF92CAC2686800 0004B5 (v02 PmRef  Cpu0Hwp  00003000 INTL 20200717)
[    0.342358] ACPI: Dynamic OEM Table Load:
[    0.342365] ACPI: SSDT 0xFFFF92CAC16A2000 001BAF (v02 PmRef  ApIst    00003000 INTL 20200717)
[    0.343472] ACPI: Dynamic OEM Table Load:
[    0.343478] ACPI: SSDT 0xFFFF92CAC16A0000 001038 (v02 PmRef  ApHwp    00003000 INTL 20200717)
[    0.344468] ACPI: Dynamic OEM Table Load:
[    0.344474] ACPI: SSDT 0xFFFF92CAC268C000 001349 (v02 PmRef  ApPsd    00003000 INTL 20200717)
[    0.345510] ACPI: Dynamic OEM Table Load:
[    0.345516] ACPI: SSDT 0xFFFF92CAC16B7000 000FBB (v02 PmRef  ApCst    00003000 INTL 20200717)
[    0.352883] ACPI: Interpreter enabled
[    0.352931] ACPI: PM: (supports S0 S3 S4 S5)
[    0.352932] ACPI: Using IOAPIC for interrupt routing
[    0.354040] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xc0000000-0xcfffffff] (base 0xc0000000)
[    0.355303] PCI: MMCONFIG at [mem 0xc0000000-0xcfffffff] reserved as ACPI motherboard resource
[    0.355324] HEST: Table parsing has been initialized.
[    0.355618] GHES: APEI firmware first mode is enabled by APEI bit.
[    0.355620] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.355621] PCI: Ignoring E820 reservations for host bridge windows
[    0.356476] ACPI: Enabled 6 GPEs in block 00 to 7F
[    0.357181] ACPI: \_SB_.PC00.PEG1.PXP_: New power resource
[    0.358050] ACPI: \_SB_.PC00.PEG0.PXP_: New power resource
[    0.360217] ACPI: \_SB_.PC00.RP09.PXP_: New power resource
[    0.361773] ACPI: \_SB_.PC00.RP13.PXP_: New power resource
[    0.363423] ACPI: \_SB_.PC00.RP01.PXP_: New power resource
[    0.365162] ACPI: \_SB_.PC00.RP05.PXP_: New power resource
[    0.368419] ACPI: \_SB_.PC00.RP21.PXP_: New power resource
[    0.370015] ACPI: \_SB_.PC00.RP25.PXP_: New power resource
[    0.374005] ACPI: \_SB_.PC00.PAUD: New power resource
[    0.376601] ACPI: \_SB_.PC00.I2C1.PXTC: New power resource
[    0.380040] ACPI: \_SB_.PC00.CNVW.WRST: New power resource
[    0.380174] ACPI: \SPR4: New power resource
[    0.380282] ACPI: \SPR5: New power resource
[    0.380389] ACPI: \SPR6: New power resource
[    0.380494] ACPI: \SPR7: New power resource
[    0.385000] ACPI: \_TZ_.FN00: New power resource
[    0.385032] ACPI: \_TZ_.FN01: New power resource
[    0.385063] ACPI: \_TZ_.FN02: New power resource
[    0.385093] ACPI: \_TZ_.FN03: New power resource
[    0.385122] ACPI: \_TZ_.FN04: New power resource
[    0.385538] ACPI: \PIN_: New power resource
[    0.385722] ACPI: PCI Root Bridge [PC00] (domain 0000 [bus 00-fe])
[    0.385726] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[    0.386971] acpi PNP0A08:00: _OSC: platform does not support [AER]
[    0.389470] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug SHPCHotplug PME PCIeCapability LTR DPC]
[    0.390683] PCI host bridge to bus 0000:00
[    0.390684] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
[    0.390686] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[    0.390687] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[    0.390688] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000fffff window]
[    0.390689] pci_bus 0000:00: root bus resource [mem 0x80400000-0xbfffffff window]
[    0.390690] pci_bus 0000:00: root bus resource [mem 0x4000000000-0x7fffffffff window]
[    0.390691] pci_bus 0000:00: root bus resource [bus 00-fe]
[    0.603355] pci 0000:00:00.0: [8086:a704] type 00 class 0x060000
[    0.603450] pci 0000:00:01.0: [8086:a70d] type 01 class 0x060400
[    0.603499] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    0.603515] pci 0000:00:01.0: PTM enabled (root), 4ns granularity
[    0.603879] pci 0000:00:01.1: [8086:a72d] type 01 class 0x060400
[    0.603926] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
[    0.603941] pci 0000:00:01.1: PTM enabled (root), 4ns granularity
[    0.604346] pci 0000:00:02.0: [8086:a780] type 00 class 0x038000
[    0.604353] pci 0000:00:02.0: reg 0x10: [mem 0x60eb000000-0x60ebffffff 64bit]
[    0.604358] pci 0000:00:02.0: reg 0x18: [mem 0x4000000000-0x400fffffff 64bit pref]
[    0.604361] pci 0000:00:02.0: reg 0x20: [io  0x5000-0x503f]
[    0.604373] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
[    0.604394] pci 0000:00:02.0: reg 0x344: [mem 0x60e4000000-0x60e4ffffff 64bit]
[    0.604395] pci 0000:00:02.0: VF(n) BAR0 space: [mem 0x60e4000000-0x60eaffffff 64bit] (contains BAR0 for 7 VFs)
[    0.604398] pci 0000:00:02.0: reg 0x34c: [mem 0x6000000000-0x601fffffff 64bit pref]
[    0.604399] pci 0000:00:02.0: VF(n) BAR2 space: [mem 0x6000000000-0x60dfffffff 64bit pref] (contains BAR2 for 7 VFs)
[    0.604509] pci 0000:00:06.0: [8086:a74d] type 01 class 0x060400
[    0.604584] pci 0000:00:06.0: PME# supported from D0 D3hot D3cold
[    0.604608] pci 0000:00:06.0: PTM enabled (root), 4ns granularity
[    0.605019] pci 0000:00:0a.0: [8086:a77d] type 00 class 0x118000
[    0.605026] pci 0000:00:0a.0: reg 0x10: [mem 0x60ec110000-0x60ec117fff 64bit]
[    0.605045] pci 0000:00:0a.0: enabling Extended Tags
[    0.605155] pci 0000:00:14.0: [8086:7ae0] type 00 class 0x0c0330
[    0.605173] pci 0000:00:14.0: reg 0x10: [mem 0x60ec100000-0x60ec10ffff 64bit]
[    0.605249] pci 0000:00:14.0: PME# supported from D3hot D3cold
[    0.606996] pci 0000:00:14.2: [8086:7aa7] type 00 class 0x050000
[    0.607016] pci 0000:00:14.2: reg 0x10: [mem 0x60ec124000-0x60ec127fff 64bit]
[    0.607029] pci 0000:00:14.2: reg 0x18: [mem 0x60ec12c000-0x60ec12cfff 64bit]
[    0.607207] pci 0000:00:15.0: [8086:7acc] type 00 class 0x0c8000
[    0.607235] pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
[    0.607683] pci 0000:00:15.1: [8086:7acd] type 00 class 0x0c8000
[    0.607711] pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
[    0.608093] pci 0000:00:16.0: [8086:7ae8] type 00 class 0x078000
[    0.608115] pci 0000:00:16.0: reg 0x10: [mem 0x60ec129000-0x60ec129fff 64bit]
[    0.608200] pci 0000:00:16.0: PME# supported from D3hot
[    0.608608] pci 0000:00:16.3: [8086:7aeb] type 00 class 0x070002
[    0.608622] pci 0000:00:16.3: reg 0x10: [io  0x50a0-0x50a7]
[    0.608630] pci 0000:00:16.3: reg 0x14: [mem 0x84024000-0x84024fff]
[    0.608768] pci 0000:00:17.0: [8086:7ae2] type 00 class 0x010601
[    0.608781] pci 0000:00:17.0: reg 0x10: [mem 0x84020000-0x84021fff]
[    0.608789] pci 0000:00:17.0: reg 0x14: [mem 0x84023000-0x840230ff]
[    0.608797] pci 0000:00:17.0: reg 0x18: [io  0x5090-0x5097]
[    0.608805] pci 0000:00:17.0: reg 0x1c: [io  0x5080-0x5083]
[    0.608813] pci 0000:00:17.0: reg 0x20: [io  0x5060-0x507f]
[    0.608821] pci 0000:00:17.0: reg 0x24: [mem 0x84022000-0x840227ff]
[    0.608862] pci 0000:00:17.0: PME# supported from D3hot
[    0.609084] pci 0000:00:1a.0: [8086:7ac8] type 01 class 0x060400
[    0.609198] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    0.609240] pci 0000:00:1a.0: PTM enabled (root), 4ns granularity
[    0.609697] pci 0000:00:1b.0: [8086:7ac0] type 01 class 0x060400
[    0.610204] pci 0000:00:1b.4: [8086:7ac4] type 01 class 0x060400
[    0.610316] pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
[    0.610358] pci 0000:00:1b.4: PTM enabled (root), 4ns granularity
[    0.610770] pci 0000:00:1c.0: [8086:7ab8] type 01 class 0x060400
[    0.610876] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[    0.610911] pci 0000:00:1c.0: PTM enabled (root), 4ns granularity
[    0.611311] pci 0000:00:1c.1: [8086:7ab9] type 01 class 0x060400
[    0.611413] pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
[    0.611448] pci 0000:00:1c.1: PTM enabled (root), 4ns granularity
[    0.611877] pci 0000:00:1c.3: [8086:7abb] type 01 class 0x060400
[    0.611983] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[    0.612018] pci 0000:00:1c.3: PTM enabled (root), 4ns granularity
[    0.612438] pci 0000:00:1c.4: [8086:7abc] type 01 class 0x060400
[    0.612543] pci 0000:00:1c.4: PME# supported from D0 D3hot D3cold
[    0.612579] pci 0000:00:1c.4: PTM enabled (root), 4ns granularity
[    0.612974] pci 0000:00:1d.0: [8086:7ab0] type 01 class 0x060400
[    0.613081] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[    0.613116] pci 0000:00:1d.0: PTM enabled (root), 4ns granularity
[    0.613519] pci 0000:00:1f.0: [8086:7a88] type 00 class 0x060100
[    0.613817] pci 0000:00:1f.3: [8086:7ad0] type 00 class 0x040300
[    0.613859] pci 0000:00:1f.3: reg 0x10: [mem 0x60ec120000-0x60ec123fff 64bit]
[    0.613913] pci 0000:00:1f.3: reg 0x20: [mem 0x60ec000000-0x60ec0fffff 64bit]
[    0.614017] pci 0000:00:1f.3: PME# supported from D3hot D3cold
[    0.614096] pci 0000:00:1f.4: [8086:7aa3] type 00 class 0x0c0500
[    0.614127] pci 0000:00:1f.4: reg 0x10: [mem 0x60ec128000-0x60ec1280ff 64bit]
[    0.614159] pci 0000:00:1f.4: reg 0x20: [io  0xefa0-0xefbf]
[    0.614347] pci 0000:00:1f.5: [8086:7aa4] type 00 class 0x0c8000
[    0.614366] pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
[    0.614497] pci 0000:00:1f.6: [8086:1a1c] type 00 class 0x020000
[    0.614523] pci 0000:00:1f.6: reg 0x10: [mem 0x84000000-0x8401ffff]
[    0.614646] pci 0000:00:1f.6: PME# supported from D0 D3hot D3cold
[    0.614844] pci 0000:01:00.0: [1000:0097] type 00 class 0x010700
[    0.614853] pci 0000:01:00.0: reg 0x10: [io  0x4000-0x40ff]
[    0.614861] pci 0000:01:00.0: reg 0x14: [mem 0x83400000-0x8340ffff 64bit]
[    0.614868] pci 0000:01:00.0: reg 0x1c: [mem 0x83200000-0x832fffff 64bit]
[    0.614878] pci 0000:01:00.0: reg 0x30: [mem 0x83100000-0x831fffff pref]
[    0.614882] pci 0000:01:00.0: enabling Extended Tags
[    0.614947] pci 0000:01:00.0: supports D1 D2
[    0.614970] pci 0000:01:00.0: reg 0x174: [mem 0x83300000-0x8330ffff 64bit]
[    0.614971] pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x83300000-0x833fffff 64bit] (contains BAR0 for 16 VFs)
[    0.614978] pci 0000:01:00.0: reg 0x17c: [mem 0x82100000-0x821fffff 64bit]
[    0.614979] pci 0000:01:00.0: VF(n) BAR2 space: [mem 0x82100000-0x830fffff 64bit] (contains BAR2 for 16 VFs)
[    0.615086] pci 0000:00:01.0: PCI bridge to [bus 01]
[    0.615089] pci 0000:00:01.0:   bridge window [io  0x4000-0x4fff]
[    0.615090] pci 0000:00:01.0:   bridge window [mem 0x82100000-0x834fffff]
[    0.615194] pci 0000:02:00.0: [15b3:1013] type 00 class 0x020000
[    0.615295] pci 0000:02:00.0: reg 0x10: [mem 0x60e2000000-0x60e3ffffff 64bit pref]
[    0.615449] pci 0000:02:00.0: reg 0x30: [mem 0x83b00000-0x83bfffff pref]
[    0.615941] pci 0000:02:00.0: PME# supported from D3cold
[    0.616298] pci 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x8 link at 0000:00:01.1 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
[    0.616663] pci 0000:02:00.1: [15b3:1013] type 00 class 0x020000
[    0.616763] pci 0000:02:00.1: reg 0x10: [mem 0x60e0000000-0x60e1ffffff 64bit pref]
[    0.616918] pci 0000:02:00.1: reg 0x30: [mem 0x83a00000-0x83afffff pref]
[    0.617368] pci 0000:02:00.1: PME# supported from D3cold
[    0.617877] pci 0000:00:01.1: PCI bridge to [bus 02]
[    0.617880] pci 0000:00:01.1:   bridge window [mem 0x83a00000-0x83bfffff]
[    0.617882] pci 0000:00:01.1:   bridge window [mem 0x60e0000000-0x60e3ffffff 64bit pref]
[    0.617952] pci 0000:03:00.0: [144d:a808] type 00 class 0x010802
[    0.617967] pci 0000:03:00.0: reg 0x10: [mem 0x83f00000-0x83f03fff 64bit]
[    0.618154] pci 0000:00:06.0: PCI bridge to [bus 03]
[    0.618156] pci 0000:00:06.0:   bridge window [mem 0x83f00000-0x83ffffff]
[    0.618305] pci 0000:04:00.0: [144d:a80a] type 00 class 0x010802
[    0.618329] pci 0000:04:00.0: reg 0x10: [mem 0x83e00000-0x83e03fff 64bit]
[    0.618680] pci 0000:00:1a.0: PCI bridge to [bus 04]
[    0.618684] pci 0000:00:1a.0:   bridge window [mem 0x83e00000-0x83efffff]
[    0.618736] pci 0000:00:1b.0: PCI bridge to [bus 05]
[    0.618881] pci 0000:06:00.0: [144d:a80a] type 00 class 0x010802
[    0.618905] pci 0000:06:00.0: reg 0x10: [mem 0x83d00000-0x83d03fff 64bit]
[    0.619258] pci 0000:00:1b.4: PCI bridge to [bus 06]
[    0.619262] pci 0000:00:1b.4:   bridge window [mem 0x83d00000-0x83dfffff]
[    0.619329] pci 0000:07:00.0: [1283:8893] type 01 class 0x060401
[    0.619492] pci 0000:07:00.0: supports D1 D2
[    0.619492] pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.619553] pci 0000:00:1c.0: PCI bridge to [bus 07-08]
[    0.619608] pci_bus 0000:08: extended config space not accessible
[    0.619701] pci 0000:07:00.0: PCI bridge to [bus 08] (subtractive decode)
[    0.619832] pci 0000:09:00.0: [8086:15f2] type 00 class 0x020000
[    0.619855] pci 0000:09:00.0: reg 0x10: [mem 0x83600000-0x836fffff]
[    0.619891] pci 0000:09:00.0: reg 0x1c: [mem 0x83700000-0x83703fff]
[    0.619927] pci 0000:09:00.0: reg 0x30: [mem 0x83500000-0x835fffff pref]
[    0.620034] pci 0000:09:00.0: PME# supported from D0 D3hot D3cold
[    0.620241] pci 0000:00:1c.1: PCI bridge to [bus 09]
[    0.620246] pci 0000:00:1c.1:   bridge window [mem 0x83500000-0x837fffff]
[    0.620305] pci 0000:0a:00.0: [1a03:1150] type 01 class 0x060400
[    0.620375] pci 0000:0a:00.0: enabling Extended Tags
[    0.620464] pci 0000:0a:00.0: supports D1 D2
[    0.620465] pci 0000:0a:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.620627] pci 0000:00:1c.3: PCI bridge to [bus 0a-0b]
[    0.620630] pci 0000:00:1c.3:   bridge window [io  0x3000-0x3fff]
[    0.620632] pci 0000:00:1c.3:   bridge window [mem 0x81000000-0x820fffff]
[    0.620682] pci_bus 0000:0b: extended config space not accessible
[    0.620699] pci 0000:0b:00.0: [1a03:2000] type 00 class 0x030000
[    0.620721] pci 0000:0b:00.0: reg 0x10: [mem 0x81000000-0x81ffffff]
[    0.620733] pci 0000:0b:00.0: reg 0x14: [mem 0x82000000-0x8203ffff]
[    0.620745] pci 0000:0b:00.0: reg 0x18: [io  0x3000-0x307f]
[    0.620800] pci 0000:0b:00.0: BAR 0: assigned to efifb
[    0.620808] pci 0000:0b:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[    0.620850] pci 0000:0b:00.0: supports D1 D2
[    0.620850] pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    0.620944] pci 0000:0a:00.0: PCI bridge to [bus 0b]
[    0.620951] pci 0000:0a:00.0:   bridge window [io  0x3000-0x3fff]
[    0.620955] pci 0000:0a:00.0:   bridge window [mem 0x81000000-0x820fffff]
[    0.621056] pci 0000:0c:00.0: [144d:a808] type 00 class 0x010802
[    0.621080] pci 0000:0c:00.0: reg 0x10: [mem 0x83c00000-0x83c03fff 64bit]
[    0.621405] pci 0000:00:1c.4: PCI bridge to [bus 0c]
[    0.621409] pci 0000:00:1c.4:   bridge window [mem 0x83c00000-0x83cfffff]
[    0.621467] pci 0000:0d:00.0: [1179:010e] type 00 class 0x010802
[    0.621487] pci 0000:0d:00.0: reg 0x10: [mem 0x83800000-0x838fffff 64bit]
[    0.621519] pci 0000:0d:00.0: reg 0x30: [mem 0x83900000-0x8397ffff pref]
[    0.621676] pci 0000:00:1d.0: PCI bridge to [bus 0d]
[    0.621680] pci 0000:00:1d.0:   bridge window [mem 0x83800000-0x839fffff]
[    0.625951] ACPI: PCI: Interrupt link LNKA configured for IRQ 0
[    0.626035] ACPI: PCI: Interrupt link LNKB configured for IRQ 1
[    0.626118] ACPI: PCI: Interrupt link LNKC configured for IRQ 0
[    0.626205] ACPI: PCI: Interrupt link LNKD configured for IRQ 0
[    0.626288] ACPI: PCI: Interrupt link LNKE configured for IRQ 0
[    0.626370] ACPI: PCI: Interrupt link LNKF configured for IRQ 0
[    0.626452] ACPI: PCI: Interrupt link LNKG configured for IRQ 0
[    0.626535] ACPI: PCI: Interrupt link LNKH configured for IRQ 0
[    0.633153] iommu: Default domain type: Translated
[    0.633153] iommu: DMA domain TLB invalidation policy: lazy mode
[    0.633198] SCSI subsystem initialized
[    0.633202] libata version 3.00 loaded.
[    0.633202] ACPI: bus type USB registered
[    0.633202] usbcore: registered new interface driver usbfs
[    0.633202] usbcore: registered new interface driver hub
[    0.633202] usbcore: registered new device driver usb
[    0.633202] pps_core: LinuxPPS API ver. 1 registered
[    0.633202] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[    0.633202] PTP clock support registered
[    0.633202] EDAC MC: Ver: 3.0.0
[    0.634260] efivars: Registered efivars operations
[    0.634292] NetLabel: Initializing
[    0.634293] NetLabel:  domain hash size = 128
[    0.634294] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
[    0.634303] NetLabel:  unlabeled traffic allowed by default
[    0.634306] mctp: management component transport protocol core
[    0.634306] NET: Registered PF_MCTP protocol family
[    0.634309] PCI: Using ACPI for IRQ routing
[    0.656412] PCI: pci_cache_line_size set to 64 bytes
[    0.656811] pci 0000:00:1f.5: can't claim BAR 0 [mem 0xfe010000-0xfe010fff]: no compatible bridge window
[    0.657224] e820: reserve RAM buffer [mem 0x0009e000-0x0009ffff]
[    0.657225] e820: reserve RAM buffer [mem 0x63b36018-0x63ffffff]
[    0.657226] e820: reserve RAM buffer [mem 0x63b68018-0x63ffffff]
[    0.657227] e820: reserve RAM buffer [mem 0x66b83018-0x67ffffff]
[    0.657228] e820: reserve RAM buffer [mem 0x66b91018-0x67ffffff]
[    0.657228] e820: reserve RAM buffer [mem 0x66bb6018-0x67ffffff]
[    0.657229] e820: reserve RAM buffer [mem 0x6867d000-0x6bffffff]
[    0.657230] e820: reserve RAM buffer [mem 0x7177e000-0x73ffffff]
[    0.657230] e820: reserve RAM buffer [mem 0x76000000-0x77ffffff]
[    0.657231] e820: reserve RAM buffer [mem 0x107fc00000-0x107fffffff]
[    0.657254] pci 0000:0b:00.0: vgaarb: setting as boot VGA device
[    0.657254] pci 0000:0b:00.0: vgaarb: bridge control possible
[    0.657254] pci 0000:0b:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[    0.657254] vgaarb: loaded
[    0.657254] clocksource: Switched to clocksource tsc-early
[    0.663419] VFS: Disk quotas dquot_6.6.0
[    0.663428] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.663486] pnp: PnP ACPI init
[    0.663936] pnp 00:01: [dma 0 disabled]
[    0.664116] system 00:02: [io  0x0a00-0x0a1f] has been reserved
[    0.664118] system 00:02: [io  0x0a20-0x0a2f] has been reserved
[    0.664119] system 00:02: [io  0x0a30-0x0a3f] has been reserved
[    0.664120] system 00:02: [io  0x0a40-0x0a4f] has been reserved
[    0.664121] system 00:02: [io  0x0a50-0x0a5f] has been reserved
[    0.664437] pnp 00:03: [dma 0 disabled]
[    0.664559] system 00:04: [io  0x0680-0x069f] has been reserved
[    0.664560] system 00:04: [io  0x164e-0x164f] has been reserved
[    0.664652] system 00:05: [io  0x1854-0x1857] has been reserved
[    0.666871] system 00:06: [mem 0xfedc0000-0xfedc7fff] has been reserved
[    0.666873] system 00:06: [mem 0xfeda0000-0xfeda0fff] has been reserved
[    0.666874] system 00:06: [mem 0xfeda1000-0xfeda1fff] has been reserved
[    0.666875] system 00:06: [mem 0xc0000000-0xcfffffff] has been reserved
[    0.666876] system 00:06: [mem 0xfed20000-0xfed7ffff] could not be reserved
[    0.666877] system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
[    0.666878] system 00:06: [mem 0xfed45000-0xfed8ffff] could not be reserved
[    0.666879] system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
[    0.667504] system 00:07: [io  0x2000-0x20fe] has been reserved
[    0.668096] pnp: PnP ACPI: found 9 devices
[    0.673266] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[    0.673303] NET: Registered PF_INET protocol family
[    0.673487] IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[    0.675399] tcp_listen_portaddr_hash hash table entries: 32768 (order: 7, 524288 bytes, linear)
[    0.675438] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
[    0.675453] TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear)
[    0.675758] TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
[    0.675908] TCP: Hash tables configured (established 524288 bind 65536)
[    0.676116] MPTCP token hash table entries: 65536 (order: 8, 1572864 bytes, linear)
[    0.676165] UDP hash table entries: 32768 (order: 8, 1048576 bytes, linear)
[    0.676246] UDP-Lite hash table entries: 32768 (order: 8, 1048576 bytes, linear)
[    0.676361] NET: Registered PF_UNIX/PF_LOCAL protocol family
[    0.676366] NET: Registered PF_XDP protocol family
[    0.676378] pci 0000:00:15.0: BAR 0: assigned [mem 0x4010000000-0x4010000fff 64bit]
[    0.676438] pci 0000:00:15.1: BAR 0: assigned [mem 0x4010001000-0x4010001fff 64bit]
[    0.676494] pci 0000:00:1f.5: BAR 0: assigned [mem 0x80400000-0x80400fff]
[    0.676505] pci 0000:00:01.0: PCI bridge to [bus 01]
[    0.676506] pci 0000:00:01.0:   bridge window [io  0x4000-0x4fff]
[    0.676509] pci 0000:00:01.0:   bridge window [mem 0x82100000-0x834fffff]
[    0.676512] pci 0000:00:01.1: PCI bridge to [bus 02]
[    0.676514] pci 0000:00:01.1:   bridge window [mem 0x83a00000-0x83bfffff]
[    0.676516] pci 0000:00:01.1:   bridge window [mem 0x60e0000000-0x60e3ffffff 64bit pref]
[    0.676519] pci 0000:00:06.0: PCI bridge to [bus 03]
[    0.676525] pci 0000:00:06.0:   bridge window [mem 0x83f00000-0x83ffffff]
[    0.676533] pci 0000:00:1a.0: PCI bridge to [bus 04]
[    0.676543] pci 0000:00:1a.0:   bridge window [mem 0x83e00000-0x83efffff]
[    0.676550] pci 0000:00:1b.0: PCI bridge to [bus 05]
[    0.676563] pci 0000:00:1b.4: PCI bridge to [bus 06]
[    0.676569] pci 0000:00:1b.4:   bridge window [mem 0x83d00000-0x83dfffff]
[    0.676577] pci 0000:07:00.0: PCI bridge to [bus 08]
[    0.676599] pci 0000:00:1c.0: PCI bridge to [bus 07-08]
[    0.676609] pci 0000:00:1c.1: PCI bridge to [bus 09]
[    0.676612] pci 0000:00:1c.1:   bridge window [mem 0x83500000-0x837fffff]
[    0.676620] pci 0000:0a:00.0: PCI bridge to [bus 0b]
[    0.676622] pci 0000:0a:00.0:   bridge window [io  0x3000-0x3fff]
[    0.676628] pci 0000:0a:00.0:   bridge window [mem 0x81000000-0x820fffff]
[    0.676638] pci 0000:00:1c.3: PCI bridge to [bus 0a-0b]
[    0.676639] pci 0000:00:1c.3:   bridge window [io  0x3000-0x3fff]
[    0.676643] pci 0000:00:1c.3:   bridge window [mem 0x81000000-0x820fffff]
[    0.676649] pci 0000:00:1c.4: PCI bridge to [bus 0c]
[    0.676653] pci 0000:00:1c.4:   bridge window [mem 0x83c00000-0x83cfffff]
[    0.676660] pci 0000:00:1d.0: PCI bridge to [bus 0d]
[    0.676664] pci 0000:00:1d.0:   bridge window [mem 0x83800000-0x839fffff]
[    0.676670] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
[    0.676672] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
[    0.676673] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[    0.676673] pci_bus 0000:00: resource 7 [mem 0x000e0000-0x000fffff window]
[    0.676674] pci_bus 0000:00: resource 8 [mem 0x80400000-0xbfffffff window]
[    0.676675] pci_bus 0000:00: resource 9 [mem 0x4000000000-0x7fffffffff window]
[    0.676676] pci_bus 0000:01: resource 0 [io  0x4000-0x4fff]
[    0.676677] pci_bus 0000:01: resource 1 [mem 0x82100000-0x834fffff]
[    0.676677] pci_bus 0000:02: resource 1 [mem 0x83a00000-0x83bfffff]
[    0.676678] pci_bus 0000:02: resource 2 [mem 0x60e0000000-0x60e3ffffff 64bit pref]
[    0.676679] pci_bus 0000:03: resource 1 [mem 0x83f00000-0x83ffffff]
[    0.676680] pci_bus 0000:04: resource 1 [mem 0x83e00000-0x83efffff]
[    0.676681] pci_bus 0000:06: resource 1 [mem 0x83d00000-0x83dfffff]
[    0.676682] pci_bus 0000:09: resource 1 [mem 0x83500000-0x837fffff]
[    0.676682] pci_bus 0000:0a: resource 0 [io  0x3000-0x3fff]
[    0.676683] pci_bus 0000:0a: resource 1 [mem 0x81000000-0x820fffff]
[    0.676684] pci_bus 0000:0b: resource 0 [io  0x3000-0x3fff]
[    0.676685] pci_bus 0000:0b: resource 1 [mem 0x81000000-0x820fffff]
[    0.676685] pci_bus 0000:0c: resource 1 [mem 0x83c00000-0x83cfffff]
[    0.676686] pci_bus 0000:0d: resource 1 [mem 0x83800000-0x839fffff]
[    0.677202] pci 0000:02:00.0: enabling device (0000 -> 0002)
[    0.677317] pci 0000:02:00.1: enabling device (0000 -> 0002)
[    0.677724] PCI: CLS 64 bytes, default 64
[    0.677742] DMAR: No ATSR found
[    0.677742] DMAR: No SATC found
[    0.677743] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.677744] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.677745] DMAR: IOMMU feature nwfs inconsistent
[    0.677745] DMAR: IOMMU feature dit inconsistent
[    0.677746] DMAR: IOMMU feature sc_support inconsistent
[    0.677746] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    0.677747] DMAR: dmar0: Using Queued invalidation
[    0.677749] DMAR: dmar1: Using Queued invalidation
[    0.677750] Trying to unpack rootfs image as initramfs...
[    0.678027] pci 0000:00:02.0: Adding to iommu group 0
[    0.678309] pci 0000:00:00.0: Adding to iommu group 1
[    0.678318] pci 0000:00:01.0: Adding to iommu group 2
[    0.678325] pci 0000:00:01.1: Adding to iommu group 3
[    0.678340] pci 0000:00:06.0: Adding to iommu group 4
[    0.678347] pci 0000:00:0a.0: Adding to iommu group 5
[    0.678362] pci 0000:00:14.0: Adding to iommu group 6
[    0.678369] pci 0000:00:14.2: Adding to iommu group 6
[    0.678382] pci 0000:00:15.0: Adding to iommu group 7
[    0.678389] pci 0000:00:15.1: Adding to iommu group 7
[    0.678403] pci 0000:00:16.0: Adding to iommu group 8
[    0.678410] pci 0000:00:16.3: Adding to iommu group 8
[    0.678417] pci 0000:00:17.0: Adding to iommu group 9
[    0.678433] pci 0000:00:1a.0: Adding to iommu group 10
[    0.678453] pci 0000:00:1b.0: Adding to iommu group 11
[    0.678462] pci 0000:00:1b.4: Adding to iommu group 12
[    0.678470] pci 0000:00:1c.0: Adding to iommu group 13
[    0.678478] pci 0000:00:1c.1: Adding to iommu group 14
[    0.678486] pci 0000:00:1c.3: Adding to iommu group 15
[    0.678495] pci 0000:00:1c.4: Adding to iommu group 16
[    0.678504] pci 0000:00:1d.0: Adding to iommu group 17
[    0.678528] pci 0000:00:1f.0: Adding to iommu group 18
[    0.678535] pci 0000:00:1f.3: Adding to iommu group 18
[    0.678543] pci 0000:00:1f.4: Adding to iommu group 18
[    0.678551] pci 0000:00:1f.5: Adding to iommu group 18
[    0.678559] pci 0000:00:1f.6: Adding to iommu group 18
[    0.678568] pci 0000:01:00.0: Adding to iommu group 19
[    0.678593] pci 0000:02:00.0: Adding to iommu group 20
[    0.678615] pci 0000:02:00.1: Adding to iommu group 21
[    0.678626] pci 0000:03:00.0: Adding to iommu group 22
[    0.678642] pci 0000:04:00.0: Adding to iommu group 23
[    0.678650] pci 0000:06:00.0: Adding to iommu group 24
[    0.678658] pci 0000:07:00.0: Adding to iommu group 25
[    0.678667] pci 0000:09:00.0: Adding to iommu group 26
[    0.678675] pci 0000:0a:00.0: Adding to iommu group 27
[    0.678677] pci 0000:0b:00.0: Adding to iommu group 27
[    0.678686] pci 0000:0c:00.0: Adding to iommu group 28
[    0.678694] pci 0000:0d:00.0: Adding to iommu group 29
[    0.680126] DMAR: Intel(R) Virtualization Technology for Directed I/O
[    0.680126] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    0.680127] software IO TLB: mapped [mem 0x000000005bdc8000-0x000000005fdc8000] (64MB)
[    0.680156] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x325ea749ca1, max_idle_ns: 440795373125 ns
[    0.680238] clocksource: Switched to clocksource tsc
[    0.680250] platform rtc_cmos: registered platform RTC device (no PNP device found)
[    0.681425] Initialise system trusted keyrings
[    0.681431] Key type blacklist registered
[    0.681459] workingset: timestamp_bits=36 max_order=24 bucket_order=0
[    0.681470] zbud: loaded
[    0.681723] integrity: Platform Keyring initialized
[    0.681724] integrity: Machine keyring initialized
[    0.685820] NET: Registered PF_ALG protocol family
[    0.685822] xor: automatically using best checksumming function   avx       
[    0.685823] Key type asymmetric registered
[    0.685824] Asymmetric key parser 'x509' registered
[    1.091602] Freeing initrd memory: 42456K
[    1.093719] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 245)
[    1.093741] io scheduler mq-deadline registered
[    1.093743] io scheduler kyber registered
[    1.093748] io scheduler bfq registered
[    1.094907] atomic64_test: passed for x86-64 platform with CX8 and with SSE
[    1.095223] pcieport 0000:00:01.0: PME: Signaling with IRQ 122
[    1.095293] pcieport 0000:00:01.1: PME: Signaling with IRQ 123
[    1.095448] pcieport 0000:00:06.0: PME: Signaling with IRQ 124
[    1.095635] pcieport 0000:00:1a.0: PME: Signaling with IRQ 125
[    1.095754] pcieport 0000:00:1b.0: PME: Signaling with IRQ 126
[    1.095925] pcieport 0000:00:1b.4: PME: Signaling with IRQ 127
[    1.096107] pcieport 0000:00:1c.0: PME: Signaling with IRQ 128
[    1.096284] pcieport 0000:00:1c.1: PME: Signaling with IRQ 129
[    1.096459] pcieport 0000:00:1c.3: PME: Signaling with IRQ 130
[    1.096637] pcieport 0000:00:1c.4: PME: Signaling with IRQ 131
[    1.096814] pcieport 0000:00:1d.0: PME: Signaling with IRQ 132
[    1.096915] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
[    1.097042] Monitor-Mwait will be used to enter C-1 state
[    1.097049] Monitor-Mwait will be used to enter C-2 state
[    1.097053] Monitor-Mwait will be used to enter C-3 state
[    1.097055] ACPI: \_SB_.PR00: Found 3 idle states
[    1.097682] input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input0
[    1.097699] ACPI: button: Sleep Button [SLPB]
[    1.097719] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input1
[    1.097730] ACPI: button: Power Button [PWRB]
[    1.097746] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
[    1.097777] ACPI: button: Power Button [PWRF]
[    1.100620] thermal LNXTHERM:00: registered as thermal_zone0
[    1.100621] ACPI: thermal: Thermal Zone [TZ00] (28 C)
[    1.100719] ERST: Error Record Serialization Table (ERST) support is initialized.
[    1.100721] pstore: Registered erst as persistent store backend
[    1.100795] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[    1.100890] 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
[    1.103769] 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[    1.107277] serial 0000:00:16.3: enabling device (0001 -> 0003)
[    1.107446] 0000:00:16.3: ttyS4 at I/O 0x50a0 (irq = 19, base_baud = 115200) is a 16550A
[    1.107587] hpet_acpi_add: no address or irqs in _CRS
[    1.107625] Non-volatile memory driver v1.3
[    1.107627] Linux agpgart interface v0.103
[    1.114796] ACPI: bus type drm_connector registered
[    1.115670] [drm] Initialized simpledrm 1.0.0 20200625 for simple-framebuffer.0 on minor 0
[    1.115970] fbcon: Deferring console take-over
[    1.115972] simple-framebuffer simple-framebuffer.0: [drm] fb0: simpledrmdrmfb frame buffer device
[    1.128605] intel-lpss 0000:00:15.0: enabling device (0004 -> 0006)
[    1.148603] intel-lpss 0000:00:15.1: enabling device (0004 -> 0006)
[    1.157124] ahci 0000:00:17.0: version 3.0
[    1.167412] ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode
[    1.167414] ahci 0000:00:17.0: flags: 64bit ncq sntf pm led clo only pio slum part ems deso sadm sds 
[    1.188128] scsi host0: ahci
[    1.188428] scsi host1: ahci
[    1.188590] scsi host2: ahci
[    1.188813] scsi host3: ahci
[    1.188966] scsi host4: ahci
[    1.189116] scsi host5: ahci
[    1.189247] scsi host6: ahci
[    1.189368] scsi host7: ahci
[    1.189385] ata1: SATA max UDMA/133 abar m2048@0x84022000 port 0x84022100 irq 133
[    1.189387] ata2: SATA max UDMA/133 abar m2048@0x84022000 port 0x84022180 irq 133
[    1.189388] ata3: SATA max UDMA/133 abar m2048@0x84022000 port 0x84022200 irq 133
[    1.189389] ata4: SATA max UDMA/133 abar m2048@0x84022000 port 0x84022280 irq 133
[    1.189391] ata5: SATA max UDMA/133 abar m2048@0x84022000 port 0x84022300 irq 133
[    1.189392] ata6: SATA max UDMA/133 abar m2048@0x84022000 port 0x84022380 irq 133
[    1.189394] ata7: SATA max UDMA/133 abar m2048@0x84022000 port 0x84022400 irq 133
[    1.189395] ata8: SATA max UDMA/133 abar m2048@0x84022000 port 0x84022480 irq 133
[    1.189703] xhci_hcd 0000:00:14.0: xHCI Host Controller
[    1.189720] xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
[    1.190851] xhci_hcd 0000:00:14.0: hcc params 0x20007fc1 hci version 0x120 quirks 0x0000000200009810
[    1.191070] xhci_hcd 0000:00:14.0: xHCI Host Controller
[    1.191085] xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
[    1.191086] xhci_hcd 0000:00:14.0: Host supports USB 3.2 Enhanced SuperSpeed
[    1.191120] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.05
[    1.191122] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    1.191123] usb usb1: Product: xHCI Host Controller
[    1.191124] usb usb1: Manufacturer: Linux 6.5.12-300.fc39.x86_64 xhci-hcd
[    1.191125] usb usb1: SerialNumber: 0000:00:14.0
[    1.191200] hub 1-0:1.0: USB hub found
[    1.191224] hub 1-0:1.0: 16 ports detected
[    1.192710] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 6.05
[    1.192711] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    1.192712] usb usb2: Product: xHCI Host Controller
[    1.192713] usb usb2: Manufacturer: Linux 6.5.12-300.fc39.x86_64 xhci-hcd
[    1.192713] usb usb2: SerialNumber: 0000:00:14.0
[    1.192752] hub 2-0:1.0: USB hub found
[    1.192772] hub 2-0:1.0: 8 ports detected
[    1.193624] usbcore: registered new interface driver usbserial_generic
[    1.193627] usbserial: USB Serial support registered for generic
[    1.193641] i8042: PNP: No PS/2 controller found.
[    1.193655] mousedev: PS/2 mouse device common for all mice
[    1.193707] rtc_cmos rtc_cmos: RTC can wake from S4
[    1.194577] rtc_cmos rtc_cmos: registered as rtc0
[    1.194754] rtc_cmos rtc_cmos: setting system clock to 2023-11-22T16:01:03 UTC (1700668863)
[    1.194774] rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
[    1.195575] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[    1.195579] device-mapper: uevent: version 1.0.3
[    1.195600] device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
[    1.195640] intel_pstate: Intel P-state driver initializing
[    1.197070] intel_pstate: HWP enabled
[    1.197168] hid: raw HID events driver (C) Jiri Kosina
[    1.197177] usbcore: registered new interface driver usbhid
[    1.197177] usbhid: USB HID core driver
[    1.197201] intel_pmc_core INT33A1:00:  initialized
[    1.197273] drop_monitor: Initializing network drop monitor service
[    1.205849] Initializing XFRM netlink socket
[    1.205868] NET: Registered PF_INET6 protocol family
[    1.208082] Segment Routing with IPv6
[    1.208082] RPL Segment Routing with IPv6
[    1.208086] In-situ OAM (IOAM) with IPv6
[    1.208097] mip6: Mobile IPv6
[    1.208098] NET: Registered PF_PACKET protocol family
[    1.209981] microcode: Microcode Update Driver: v2.2.
[    1.209986] IPI shorthand broadcast: enabled
[    1.209994] AVX2 version of gcm_enc/dec engaged.
[    1.210626] AES CTR mode by8 optimization enabled
[    1.211520] sched_clock: Marking stable (1204001473, 6566790)->(1271940821, -61372558)
[    1.211668] registered taskstats version 1
[    1.211920] Loading compiled-in X.509 certificates
[    1.217093] Loaded X.509 cert 'Fedora kernel signing key: 2d719b71d7ce13a5dfb092a6f464a08d3a63afe0'
[    1.218535] page_owner is disabled
[    1.218561] Key type .fscrypt registered
[    1.218561] Key type fscrypt-provisioning registered
[    1.218829] Btrfs loaded, zoned=yes, fsverity=yes
[    1.218952] pstore: Using crash dump compression: deflate
[    1.218966] Key type big_key registered
[    1.219057] Key type trusted registered
[    1.220714] Key type encrypted registered
[    1.220938] integrity: Loading X.509 certificate: UEFI:db
[    1.227454] integrity: Loaded X.509 cert 'Database Key: 00a97359bd48ef5ad6de84f2ca1b2ab4a2'
[    1.227456] integrity: Loading X.509 certificate: UEFI:db
[    1.227477] integrity: Loaded X.509 cert 'Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed522988a1bd4'
[    1.227478] integrity: Loading X.509 certificate: UEFI:db
[    1.227497] integrity: Loaded X.509 cert 'Microsoft Windows Production PCA 2011: a92902398e16c49778cd90f99e4f9ae17c55af53'
[    1.227739] Loading compiled-in module X.509 certificates
[    1.228306] Loaded X.509 cert 'Fedora kernel signing key: 2d719b71d7ce13a5dfb092a6f464a08d3a63afe0'
[    1.228309] ima: Allocated hash algorithm: sha256
[    1.247274] audit: type=1807 audit(1700668863.550:2): action=measure func=KEXEC_KERNEL_CHECK res=1
[    1.247281] audit: type=1807 audit(1700668863.550:3): action=measure func=MODULE_CHECK res=1
[    1.247283] evm: Initialising EVM extended attributes:
[    1.247284] evm: security.selinux
[    1.247286] evm: security.SMACK64 (disabled)
[    1.247287] evm: security.SMACK64EXEC (disabled)
[    1.247288] evm: security.SMACK64TRANSMUTE (disabled)
[    1.247289] evm: security.SMACK64MMAP (disabled)
[    1.247290] evm: security.apparmor (disabled)
[    1.247291] evm: security.ima
[    1.247292] evm: security.capability
[    1.247293] evm: HMAC attrs: 0x1
[    1.305732] alg: No test for 842 (842-scomp)
[    1.305767] alg: No test for 842 (842-generic)
[    1.378257] PM:   Magic number: 15:343:35
[    1.380563] RAS: Correctable Errors collector initialized.
[    1.380580] Lockdown: swapper/0: hibernation is restricted; see man kernel_lockdown.7
[    1.380593] clk: Disabling unused clocks
[    1.435645] usb 1-4: new high-speed USB device number 2 using xhci_hcd
[    1.494776] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    1.494826] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    1.495497] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    1.495608] ata8: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    1.495672] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    1.496032] ata2.00: supports DRM functions and may not be fully accessible
[    1.496034] ata2.00: ATA-9: WDC WD140EDFZ-11A0VA0, 81.00A81, max UDMA/133
[    1.496457] ata7: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    1.496512] ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    1.496543] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    1.497092] ata7.00: supports DRM functions and may not be fully accessible
[    1.497097] ata7.00: ATA-9: WDC WD140EDFZ-11A0VA0, 81.00A81, max UDMA/133
[    1.497105] ata6.00: supports DRM functions and may not be fully accessible
[    1.497109] ata6.00: ATA-9: WDC WD140EDFZ-11A0VA0, 81.00A81, max UDMA/133
[    1.497185] ata3.00: supports DRM functions and may not be fully accessible
[    1.497190] ata3.00: ATA-9: WDC WD140EDFZ-11A0VA0, 81.00A81, max UDMA/133
[    1.502572] ata2.00: 27344764928 sectors, multi 16: LBA48 NCQ (depth 32), AA
[    1.503783] ata3.00: 27344764928 sectors, multi 16: LBA48 NCQ (depth 32), AA
[    1.503828] ata6.00: 27344764928 sectors, multi 16: LBA48 NCQ (depth 32), AA
[    1.504005] ata2.00: Features: Trust NCQ-sndrcv NCQ-prio
[    1.504098] ata7.00: 27344764928 sectors, multi 16: LBA48 NCQ (depth 32), AA
[    1.505228] ata3.00: Features: Trust NCQ-sndrcv NCQ-prio
[    1.505264] ata6.00: Features: Trust NCQ-sndrcv NCQ-prio
[    1.505521] ata7.00: Features: Trust NCQ-sndrcv NCQ-prio
[    1.507194] ata2.00: supports DRM functions and may not be fully accessible
[    1.508522] ata3.00: supports DRM functions and may not be fully accessible
[    1.508532] ata6.00: supports DRM functions and may not be fully accessible
[    1.509094] ata7.00: supports DRM functions and may not be fully accessible
[    1.515404] ata2.00: configured for UDMA/133
[    1.516811] ata3.00: configured for UDMA/133
[    1.516821] ata6.00: configured for UDMA/133
[    1.517659] ata7.00: configured for UDMA/133
[    1.545402] ata5.00: supports DRM functions and may not be fully accessible
[    1.545410] ata5.00: ATA-11: WDC WD140EDGZ-11B2DA2, 85.00A85, max UDMA/133
[    1.548712] ata5.00: 27344764928 sectors, multi 16: LBA48 NCQ (depth 32), AA
[    1.549167] ata1.00: supports DRM functions and may not be fully accessible
[    1.549175] ata1.00: ATA-11: WDC WD140EDGZ-11B2DA2, 85.00A85, max UDMA/133
[    1.549940] ata5.00: Features: Trust NCQ-sndrcv NCQ-prio
[    1.550656] ata5.00: supports DRM functions and may not be fully accessible
[    1.551927] ata8.00: supports DRM functions and may not be fully accessible
[    1.551934] ata8.00: ATA-11: WDC WD140EDGZ-11B2DA2, 85.00A85, max UDMA/133
[    1.551976] ata1.00: 27344764928 sectors, multi 16: LBA48 NCQ (depth 32), AA
[    1.553004] ata4.00: supports DRM functions and may not be fully accessible
[    1.553010] ata4.00: ATA-11: WDC WD140EDGZ-11B2DA2, 85.00A85, max UDMA/133
[    1.553183] ata1.00: Features: Trust NCQ-sndrcv NCQ-prio
[    1.553909] ata1.00: supports DRM functions and may not be fully accessible
[    1.554454] ata5.00: configured for UDMA/133
[    1.554772] ata8.00: 27344764928 sectors, multi 16: LBA48 NCQ (depth 32), AA
[    1.555453] ata4.00: 27344764928 sectors, multi 16: LBA48 NCQ (depth 32), AA
[    1.555911] ata8.00: Features: Trust NCQ-sndrcv NCQ-prio
[    1.556510] ata4.00: Features: Trust NCQ-sndrcv NCQ-prio
[    1.556635] ata8.00: supports DRM functions and may not be fully accessible
[    1.557237] ata4.00: supports DRM functions and may not be fully accessible
[    1.557698] ata1.00: configured for UDMA/133
[    1.557889] scsi 0:0:0:0: Direct-Access     ATA      WDC WD140EDGZ-11 0A85 PQ: 0 ANSI: 5
[    1.558301] sd 0:0:0:0: Attached scsi generic sg0 type 0
[    1.558527] sd 0:0:0:0: [sda] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    1.558537] sd 0:0:0:0: [sda] 4096-byte physical blocks
[    1.558552] scsi 1:0:0:0: Direct-Access     ATA      WDC WD140EDFZ-11 0A81 PQ: 0 ANSI: 5
[    1.558711] sd 0:0:0:0: [sda] Write Protect is off
[    1.558718] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    1.558880] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.559147] sd 1:0:0:0: Attached scsi generic sg1 type 0
[    1.559159] sd 1:0:0:0: [sdb] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    1.559162] sd 1:0:0:0: [sdb] 4096-byte physical blocks
[    1.559179] sd 1:0:0:0: [sdb] Write Protect is off
[    1.559188] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[    1.559279] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.559330] sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
[    1.559377] sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
[    1.559387] scsi 2:0:0:0: Direct-Access     ATA      WDC WD140EDFZ-11 0A81 PQ: 0 ANSI: 5
[    1.559858] sd 2:0:0:0: Attached scsi generic sg2 type 0
[    1.559944] sd 2:0:0:0: [sdc] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    1.559949] sd 2:0:0:0: [sdc] 4096-byte physical blocks
[    1.559977] sd 2:0:0:0: [sdc] Write Protect is off
[    1.559980] sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[    1.560005] sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.560064] sd 2:0:0:0: [sdc] Preferred minimum I/O size 4096 bytes
[    1.560439] ata8.00: configured for UDMA/133
[    1.561012] ata4.00: configured for UDMA/133
[    1.561103] scsi 3:0:0:0: Direct-Access     ATA      WDC WD140EDGZ-11 0A85 PQ: 0 ANSI: 5
[    1.561389] sd 3:0:0:0: Attached scsi generic sg3 type 0
[    1.561520] sd 3:0:0:0: [sdd] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    1.561526] sd 3:0:0:0: [sdd] 4096-byte physical blocks
[    1.561554] sd 3:0:0:0: [sdd] Write Protect is off
[    1.561557] sd 3:0:0:0: [sdd] Mode Sense: 00 3a 00 00
[    1.561591] sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.561648] sd 3:0:0:0: [sdd] Preferred minimum I/O size 4096 bytes
[    1.561672] scsi 4:0:0:0: Direct-Access     ATA      WDC WD140EDGZ-11 0A85 PQ: 0 ANSI: 5
[    1.562075] sd 4:0:0:0: Attached scsi generic sg4 type 0
[    1.562125] sd 4:0:0:0: [sde] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    1.562129] sd 4:0:0:0: [sde] 4096-byte physical blocks
[    1.562141] sd 4:0:0:0: [sde] Write Protect is off
[    1.562144] sd 4:0:0:0: [sde] Mode Sense: 00 3a 00 00
[    1.562168] sd 4:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.562230] scsi 5:0:0:0: Direct-Access     ATA      WDC WD140EDFZ-11 0A81 PQ: 0 ANSI: 5
[    1.562230] sd 4:0:0:0: [sde] Preferred minimum I/O size 4096 bytes
[    1.562677] sd 5:0:0:0: Attached scsi generic sg5 type 0
[    1.562702] sd 5:0:0:0: [sdf] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    1.562705] sd 5:0:0:0: [sdf] 4096-byte physical blocks
[    1.562720] sd 5:0:0:0: [sdf] Write Protect is off
[    1.562724] sd 5:0:0:0: [sdf] Mode Sense: 00 3a 00 00
[    1.562746] sd 5:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.562766] usb 1-4: New USB device found, idVendor=1d6b, idProduct=0107, bcdDevice= 1.00
[    1.562771] usb 1-4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    1.562773] usb 1-4: Product: USB Virtual Hub
[    1.562775] usb 1-4: Manufacturer: Aspeed
[    1.562777] usb 1-4: SerialNumber: 00000000
[    1.562826] sd 5:0:0:0: [sdf] Preferred minimum I/O size 4096 bytes
[    1.562877] scsi 6:0:0:0: Direct-Access     ATA      WDC WD140EDFZ-11 0A81 PQ: 0 ANSI: 5
[    1.563214] sd 6:0:0:0: Attached scsi generic sg6 type 0
[    1.563306] sd 6:0:0:0: [sdg] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    1.563308] sd 6:0:0:0: [sdg] 4096-byte physical blocks
[    1.563324] sd 6:0:0:0: [sdg] Write Protect is off
[    1.563328] sd 6:0:0:0: [sdg] Mode Sense: 00 3a 00 00
[    1.563353] sd 6:0:0:0: [sdg] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.563456] scsi 7:0:0:0: Direct-Access     ATA      WDC WD140EDGZ-11 0A85 PQ: 0 ANSI: 5
[    1.563531] sd 6:0:0:0: [sdg] Preferred minimum I/O size 4096 bytes
[    1.563777] hub 1-4:1.0: USB hub found
[    1.563874] hub 1-4:1.0: 7 ports detected
[    1.564277] sd 7:0:0:0: Attached scsi generic sg7 type 0
[    1.564335] sd 7:0:0:0: [sdh] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    1.564338] sd 7:0:0:0: [sdh] 4096-byte physical blocks
[    1.564352] sd 7:0:0:0: [sdh] Write Protect is off
[    1.564355] sd 7:0:0:0: [sdh] Mode Sense: 00 3a 00 00
[    1.564377] sd 7:0:0:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.564429] sd 7:0:0:0: [sdh] Preferred minimum I/O size 4096 bytes
[    1.578727] sd 3:0:0:0: [sdd] Attached SCSI disk
[    1.579330] sd 1:0:0:0: [sdb] Attached SCSI disk
[    1.579410] sd 4:0:0:0: [sde] Attached SCSI disk
[    1.583186] sd 0:0:0:0: [sda] Attached SCSI disk
[    1.583415] sd 2:0:0:0: [sdc] Attached SCSI disk
[    1.585641] sd 5:0:0:0: [sdf] Attached SCSI disk
[    1.585911] sd 7:0:0:0: [sdh] Attached SCSI disk
[    1.587830] sd 6:0:0:0: [sdg] Attached SCSI disk
[    1.606530] Freeing unused decrypted memory: 2036K
[    1.607202] Freeing unused kernel image (initmem) memory: 4532K
[    1.607205] Write protecting the kernel read-only data: 34816k
[    1.607824] Freeing unused kernel image (rodata/data gap) memory: 1908K
[    1.613649] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[    1.613653] Run /init as init process
[    1.613654]   with arguments:
[    1.613655]     /init
[    1.613655]     rhgb
[    1.613656]   with environment:
[    1.613656]     HOME=/
[    1.613657]     TERM=linux
[    1.631752] systemd[1]: systemd 254.5-2.fc39 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
[    1.631759] systemd[1]: Detected architecture x86-64.
[    1.631761] systemd[1]: Running in initrd.
[    1.631933] systemd[1]: Hostname set to <sm-1.int.chiller3.com>.
[    1.708618] systemd[1]: bpf-lsm: LSM BPF program attached
[    1.762809] systemd[1]: Queued start job for default target initrd.target.
[    1.778686] systemd[1]: Created slice system-systemd\x2dcryptsetup.slice - Slice /system/systemd-cryptsetup.
[    1.778773] systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
[    1.778802] systemd[1]: Reached target slices.target - Slice Units.
[    1.778816] systemd[1]: Reached target swap.target - Swaps.
[    1.778828] systemd[1]: Reached target timers.target - Timer Units.
[    1.778917] systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
[    1.778993] systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
[    1.779066] systemd[1]: Listening on systemd-journald.socket - Journal Socket.
[    1.779148] systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
[    1.779207] systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
[    1.779218] systemd[1]: Reached target sockets.target - Socket Units.
[    1.779927] systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
[    1.781478] systemd[1]: Starting systemd-journald.service - Journal Service...
[    1.782087] systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
[    1.783250] systemd[1]: Starting systemd-pcrphase-initrd.service - TPM2 PCR Barrier (initrd)...
[    1.783714] systemd[1]: Starting systemd-sysusers.service - Create System Users...
[    1.784149] systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
[    1.787798] systemd-journald[389]: Collecting audit messages is disabled.
[    1.788857] alua: device handler registered
[    1.789595] emc: device handler registered
[    1.790465] rdac: device handler registered
[    1.795685] systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
[    1.807705] systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
[    1.820643] systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
[    1.820682] systemd[1]: Started systemd-journald.service - Journal Service.
[    1.842592] usb 1-4.1: new high-speed USB device number 3 using xhci_hcd
[    1.920307] usb 1-4.1: New USB device found, idVendor=0557, idProduct=9241, bcdDevice= 5.04
[    1.920315] usb 1-4.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[    1.920318] usb 1-4.1: Product: SMCI HID KM
[    1.920321] usb 1-4.1: Manufacturer: Linux 5.4.62 with aspeed_vhub
[    1.925245] input: Linux 5.4.62 with aspeed_vhub SMCI HID KM as /devices/pci0000:00/0000:00:14.0/usb1/1-4/1-4.1/1-4.1:1.0/0003:0557:9241.0001/input/input3
[    1.976789] hid-generic 0003:0557:9241.0001: input,hidraw0: USB HID v1.00 Keyboard [Linux 5.4.62 with aspeed_vhub SMCI HID KM] on usb-0000:00:14.0-4.1/input0
[    1.980226] input: Linux 5.4.62 with aspeed_vhub SMCI HID KM as /devices/pci0000:00/0000:00:14.0/usb1/1-4/1-4.1/1-4.1:1.1/0003:0557:9241.0002/input/input4
[    1.980306] hid-generic 0003:0557:9241.0002: input,hidraw1: USB HID v1.00 Mouse [Linux 5.4.62 with aspeed_vhub SMCI HID KM] on usb-0000:00:14.0-4.1/input1
[    2.046580] usb 1-4.2: new high-speed USB device number 4 using xhci_hcd
[    2.123557] usb 1-4.2: New USB device found, idVendor=0b1f, idProduct=03ee, bcdDevice= 5.04
[    2.123560] usb 1-4.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[    2.123561] usb 1-4.2: Product: RNDIS/Ethernet Gadget
[    2.123562] usb 1-4.2: Manufacturer: Linux 5.4.62 with aspeed_vhub
[    2.131364] usbcore: registered new interface driver cdc_ether
[    2.133085] rndis_host 1-4.2:2.0 usb0: register 'rndis_host' at usb-0000:00:14.0-4.2, RNDIS device, be:3a:f2:b6:05:9f
[    2.133097] usbcore: registered new interface driver rndis_host
[    2.134121] rndis_host 1-4.2:2.0 enp0s20f0u4u2c2: renamed from usb0
[    2.182417] ACPI: video: Video Device [GFX0] (multi-head: yes  rom: no  post: no)
[    2.182797] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/LNXVIDEO:00/input/input5
[    2.215718] ast 0000:0b:00.0: vgaarb: deactivate vga console
[    2.216037] ast 0000:0b:00.0: [drm] P2A bridge disabled, using default configuration
[    2.216039] ast 0000:0b:00.0: [drm] AST 2600 detected
[    2.216045] ast 0000:0b:00.0: [drm] Using analog VGA
[    2.216046] ast 0000:0b:00.0: [drm] dram MCLK=396 Mhz type=1 bus_width=16
[    2.216141] [drm] Initialized ast 0.1.0 20120228 for 0000:0b:00.0 on minor 0
[    2.219089] Intel(R) 2.5G Ethernet Linux Driver
[    2.219091] Copyright(c) 2018 Intel Corporation.
[    2.219129] mpt3sas version 43.100.00.00 loaded
[    2.219259] igc 0000:09:00.0: PTM enabled, 4ns granularity
[    2.219362] mpt3sas_cm0: 63 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (65586660 kB)
[    2.219422] e1000e: Intel(R) PRO/1000 Network Driver
[    2.219424] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[    2.219444] e1000e 0000:00:1f.6: enabling device (0000 -> 0002)
[    2.219520] nvme nvme0: pci function 0000:04:00.0
[    2.219532] nvme nvme2: pci function 0000:0d:00.0
[    2.219607] nvme nvme1: pci function 0000:06:00.0
[    2.219700] nvme 0000:0d:00.0: enabling device (0000 -> 0002)
[    2.219827] fbcon: astdrmfb (fb0) is primary device
[    2.219828] fbcon: Deferring console take-over
[    2.219830] ast 0000:0b:00.0: [drm] fb0: astdrmfb frame buffer device
[    2.224490] e1000e 0000:00:1f.6: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[    2.225533] nvme nvme3: pci function 0000:03:00.0
[    2.228028] nvme nvme4: pci function 0000:0c:00.0
[    2.233882] nvme nvme4: missing or invalid SUBNQN field.
[    2.233907] nvme nvme3: missing or invalid SUBNQN field.
[    2.234017] nvme nvme4: Shutdown timeout set to 8 seconds
[    2.234039] nvme nvme3: Shutdown timeout set to 8 seconds
[    2.239072] nvme nvme0: Shutdown timeout set to 10 seconds
[    2.239962] nvme nvme1: Shutdown timeout set to 10 seconds
[    2.243299] nvme nvme0: 20/0/0 default/read/poll queues
[    2.244340] nvme nvme1: 20/0/0 default/read/poll queues
[    2.245819]  nvme0n1: p1 p2
[    2.252582] nvme nvme3: 20/0/0 default/read/poll queues
[    2.252791] nvme nvme4: 20/0/0 default/read/poll queues
[    2.266818] pps pps0: new PPS source ptp0
[    2.266845] igc 0000:09:00.0 (unnamed net_device) (uninitialized): PHC added
[    2.276570] mpt3sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    2.276587] mpt3sas_cm0: MSI-X vectors supported: 96
[    2.276588] 	 no of cores: 20, max_msix_vectors: -1
[    2.276589] mpt3sas_cm0:  0 20 20
[    2.277005] mpt3sas_cm0: High IOPs queues : disabled
[    2.277007] mpt3sas0-msix0: PCI-MSI-X enabled: IRQ 226
[    2.277007] mpt3sas0-msix1: PCI-MSI-X enabled: IRQ 227
[    2.277008] mpt3sas0-msix2: PCI-MSI-X enabled: IRQ 228
[    2.277009] mpt3sas0-msix3: PCI-MSI-X enabled: IRQ 229
[    2.277009] mpt3sas0-msix4: PCI-MSI-X enabled: IRQ 230
[    2.277010] mpt3sas0-msix5: PCI-MSI-X enabled: IRQ 231
[    2.277011] mpt3sas0-msix6: PCI-MSI-X enabled: IRQ 232
[    2.277011] mpt3sas0-msix7: PCI-MSI-X enabled: IRQ 233
[    2.277012] mpt3sas0-msix8: PCI-MSI-X enabled: IRQ 234
[    2.277012] mpt3sas0-msix9: PCI-MSI-X enabled: IRQ 235
[    2.277013] mpt3sas0-msix10: PCI-MSI-X enabled: IRQ 236
[    2.277013] mpt3sas0-msix11: PCI-MSI-X enabled: IRQ 237
[    2.277014] mpt3sas0-msix12: PCI-MSI-X enabled: IRQ 238
[    2.277015] mpt3sas0-msix13: PCI-MSI-X enabled: IRQ 239
[    2.277015] mpt3sas0-msix14: PCI-MSI-X enabled: IRQ 240
[    2.277016] mpt3sas0-msix15: PCI-MSI-X enabled: IRQ 241
[    2.277016] mpt3sas0-msix16: PCI-MSI-X enabled: IRQ 242
[    2.277017] mpt3sas0-msix17: PCI-MSI-X enabled: IRQ 243
[    2.277017] mpt3sas0-msix18: PCI-MSI-X enabled: IRQ 244
[    2.277018] mpt3sas0-msix19: PCI-MSI-X enabled: IRQ 245
[    2.277019] mpt3sas_cm0: iomem(0x0000000083400000), mapped(0x0000000008105f95), size(65536)
[    2.277021] mpt3sas_cm0: ioport(0x0000000000004000), size(256)
[    2.292836] igc 0000:09:00.0: 4.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x1 link)
[    2.292838] igc 0000:09:00.0 eth0: MAC: 3c:ec:ef:c2:2c:e4
[    2.334081] mpt3sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    2.334083] mpt3sas_cm0: sending message unit reset !!
[    2.335576] mpt3sas_cm0: message unit reset: SUCCESS
[    2.360895] mlx5_core 0000:02:00.0: firmware version: 12.27.1016
[    2.360922] mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x8 link at 0000:00:01.1 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
[    2.363077] mpt3sas_cm0: scatter gather: sge_in_main_msg(1), sge_per_chain(7), sge_per_io(128), chains_per_io(19)
[    2.364387] mpt3sas_cm0: request pool(0x00000000f387c3d8) - dma(0xff600000): depth(9700), frame_size(128), pool_size(1212 kB)
[    2.375654] mpt3sas_cm0: sense pool(0x00000000c48b34d9) - dma(0xfde00000): depth(9463), element_size(96), pool_size (887 kB)
[    2.375768] mpt3sas_cm0: reply pool(0x00000000bbaf065f) - dma(0xfdc00000): depth(9764), frame_size(128), pool_size(1220 kB)
[    2.375779] mpt3sas_cm0: config page(0x0000000096b719e6) - dma(0xfdbee000): size(512)
[    2.375780] mpt3sas_cm0: Allocated physical memory: size(30701 kB)
[    2.375780] mpt3sas_cm0: Current Controller Queue Depth(9460),Max Controller Queue Depth(9584)
[    2.375781] mpt3sas_cm0: Scatter Gather Elements per IO(128)
[    2.419638] mpt3sas_cm0: _base_display_fwpkg_version: complete
[    2.419641] mpt3sas_cm0: FW Package Ver(16.17.00.03)
[    2.419911] mpt3sas_cm0: LSISAS3008: FWVersion(16.00.04.00), ChipRevision(0x02)
[    2.419913] mpt3sas_cm0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[    2.419965] scsi host8: Fusion MPT SAS Host
[    2.421195] mpt3sas_cm0: sending port enable !!
[    2.421640] mpt3sas_cm0: hba_port entry: 000000009d8c9369, port: 255 is added to hba_port list
[    2.422563] mpt3sas_cm0: host_add: handle(0x0001), sas_addr(0x54cd98f05d477e00), phys(8)
[    2.423023] mpt3sas_cm0: handle(0x9) sas_address(0x4433221100000000) port_type(0x1)
[    2.423124] mpt3sas_cm0: handle(0xa) sas_address(0x4433221101000000) port_type(0x1)
[    2.423224] mpt3sas_cm0: handle(0xb) sas_address(0x4433221102000000) port_type(0x1)
[    2.423249] e1000e 0000:00:1f.6 0000:00:1f.6 (uninitialized): registered PHC clock
[    2.423323] mpt3sas_cm0: handle(0xc) sas_address(0x4433221103000000) port_type(0x1)
[    2.423423] mpt3sas_cm0: handle(0xd) sas_address(0x4433221104000000) port_type(0x1)
[    2.423523] mpt3sas_cm0: handle(0xe) sas_address(0x4433221105000000) port_type(0x1)
[    2.423624] mpt3sas_cm0: handle(0x10) sas_address(0x4433221106000000) port_type(0x1)
[    2.423724] mpt3sas_cm0: handle(0xf) sas_address(0x4433221107000000) port_type(0x1)
[    2.432577] mpt3sas_cm0: port enable: SUCCESS
[    2.433212] scsi 8:0:0:0: Direct-Access     ATA      WDC WD140EDFZ-11 0A81 PQ: 0 ANSI: 6
[    2.433214] scsi 8:0:0:0: SATA: handle(0x0009), sas_addr(0x4433221100000000), phy(0), device_name(0x5000cca28edd8866)
[    2.433215] scsi 8:0:0:0: enclosure logical id (0x54cd98f05d477e00), slot(7) 
[    2.433216] scsi 8:0:0:0: enclosure level(0x0001), connector name(     )
[    2.433253] scsi 8:0:0:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    2.433254] scsi 8:0:0:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
[    2.436026]  end_device-8:0: add: handle(0x0009), sas_addr(0x4433221100000000)
[    2.436612] scsi 8:0:1:0: Direct-Access     ATA      WDC WD140EDGZ-11 0A85 PQ: 0 ANSI: 6
[    2.436615] scsi 8:0:1:0: SATA: handle(0x000a), sas_addr(0x4433221101000000), phy(1), device_name(0x5000cca29dc2c4b9)
[    2.436617] scsi 8:0:1:0: enclosure logical id (0x54cd98f05d477e00), slot(6) 
[    2.436618] scsi 8:0:1:0: enclosure level(0x0001), connector name(     )
[    2.436657] scsi 8:0:1:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    2.436659] scsi 8:0:1:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
[    2.439645]  end_device-8:1: add: handle(0x000a), sas_addr(0x4433221101000000)
[    2.440207] scsi 8:0:2:0: Direct-Access     ATA      WDC WD140EDGZ-11 0A85 PQ: 0 ANSI: 6
[    2.440208] scsi 8:0:2:0: SATA: handle(0x000b), sas_addr(0x4433221102000000), phy(2), device_name(0x5000cca2a1cfa50d)
[    2.440209] scsi 8:0:2:0: enclosure logical id (0x54cd98f05d477e00), slot(4) 
[    2.440210] scsi 8:0:2:0: enclosure level(0x0001), connector name(     )
[    2.440246] scsi 8:0:2:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    2.440247] scsi 8:0:2:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
[    2.440990] fbcon: Taking over console
[    2.443274]  end_device-8:2: add: handle(0x000b), sas_addr(0x4433221102000000)
[    2.443924] scsi 8:0:3:0: Direct-Access     ATA      WDC WD140EDFZ-11 0A81 PQ: 0 ANSI: 6
[    2.443926] scsi 8:0:3:0: SATA: handle(0x000c), sas_addr(0x4433221103000000), phy(3), device_name(0x5000cca28fc3ae8b)
[    2.443926] scsi 8:0:3:0: enclosure logical id (0x54cd98f05d477e00), slot(5) 
[    2.443927] scsi 8:0:3:0: enclosure level(0x0001), connector name(     )
[    2.443985] scsi 8:0:3:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    2.443988] scsi 8:0:3:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
[    2.446901]  end_device-8:3: add: handle(0x000c), sas_addr(0x4433221103000000)
[    2.447484] scsi 8:0:4:0: Direct-Access     ATA      WDC WD140EDFZ-11 0A81 PQ: 0 ANSI: 6
[    2.447485] scsi 8:0:4:0: SATA: handle(0x000d), sas_addr(0x4433221104000000), phy(4), device_name(0x5000cca28fc28a36)
[    2.447485] scsi 8:0:4:0: enclosure logical id (0x54cd98f05d477e00), slot(3) 
[    2.447487] scsi 8:0:4:0: enclosure level(0x0001), connector name(     )
[    2.447549] scsi 8:0:4:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    2.447550] scsi 8:0:4:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
[    2.450451]  end_device-8:4: add: handle(0x000d), sas_addr(0x4433221104000000)
[    2.451037] scsi 8:0:5:0: Direct-Access     ATA      WDC WD140EDGZ-11 0A85 PQ: 0 ANSI: 6
[    2.451040] scsi 8:0:5:0: SATA: handle(0x000e), sas_addr(0x4433221105000000), phy(5), device_name(0x5000cca2adcb47c0)
[    2.451041] scsi 8:0:5:0: enclosure logical id (0x54cd98f05d477e00), slot(2) 
[    2.451041] scsi 8:0:5:0: enclosure level(0x0001), connector name(     )
[    2.451102] scsi 8:0:5:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    2.451103] scsi 8:0:5:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
[    2.453241] Console: switching to colour frame buffer device 128x48
[    2.453917]  end_device-8:5: add: handle(0x000e), sas_addr(0x4433221105000000)
[    2.454471] scsi 8:0:6:0: Direct-Access     ATA      WDC WD140EDGZ-11 0A85 PQ: 0 ANSI: 6
[    2.454474] scsi 8:0:6:0: SATA: handle(0x0010), sas_addr(0x4433221106000000), phy(6), device_name(0x5000cca2adcaddda)
[    2.454474] scsi 8:0:6:0: enclosure logical id (0x54cd98f05d477e00), slot(0) 
[    2.454475] scsi 8:0:6:0: enclosure level(0x0001), connector name(     )
[    2.454535] scsi 8:0:6:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    2.454536] scsi 8:0:6:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
[    2.457471]  end_device-8:6: add: handle(0x0010), sas_addr(0x4433221106000000)
[    2.458050] scsi 8:0:7:0: Direct-Access     ATA      WDC WD140EDFZ-11 0A81 PQ: 0 ANSI: 6
[    2.458051] scsi 8:0:7:0: SATA: handle(0x000f), sas_addr(0x4433221107000000), phy(7), device_name(0x5000cca28fc38835)
[    2.458051] scsi 8:0:7:0: enclosure logical id (0x54cd98f05d477e00), slot(1) 
[    2.458052] scsi 8:0:7:0: enclosure level(0x0001), connector name(     )
[    2.458111] scsi 8:0:7:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    2.458111] scsi 8:0:7:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
[    2.461013]  end_device-8:7: add: handle(0x000f), sas_addr(0x4433221107000000)
[    2.461204] sd 8:0:0:0: Attached scsi generic sg8 type 0
[    2.461370] sd 8:0:1:0: Attached scsi generic sg9 type 0
[    2.461445] sd 8:0:2:0: Attached scsi generic sg10 type 0
[    2.461516] sd 8:0:3:0: Attached scsi generic sg11 type 0
[    2.461544] sd 8:0:0:0: [sdi] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    2.461550] sd 8:0:0:0: [sdi] 4096-byte physical blocks
[    2.461640] sd 8:0:4:0: Attached scsi generic sg12 type 0
[    2.461810] sd 8:0:5:0: Attached scsi generic sg13 type 0
[    2.462056] sd 8:0:6:0: Attached scsi generic sg14 type 0
[    2.462114] sd 8:0:1:0: [sdj] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    2.462115] sd 8:0:1:0: [sdj] 4096-byte physical blocks
[    2.462175] sd 8:0:7:0: Attached scsi generic sg15 type 0
[    2.462323] sd 8:0:2:0: [sdk] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    2.462324] sd 8:0:2:0: [sdk] 4096-byte physical blocks
[    2.462345] sd 8:0:3:0: [sdl] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    2.462346] sd 8:0:3:0: [sdl] 4096-byte physical blocks
[    2.462544] sd 8:0:4:0: [sdm] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    2.462545] sd 8:0:4:0: [sdm] 4096-byte physical blocks
[    2.462757] sd 8:0:5:0: [sdn] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    2.462758] sd 8:0:5:0: [sdn] 4096-byte physical blocks
[    2.462874] sd 8:0:6:0: [sdo] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    2.462891] sd 8:0:7:0: [sdp] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
[    2.462891] sd 8:0:6:0: [sdo] 4096-byte physical blocks
[    2.462895] sd 8:0:7:0: [sdp] 4096-byte physical blocks
[    2.465183] sd 8:0:1:0: [sdj] Write Protect is off
[    2.465185] sd 8:0:1:0: [sdj] Mode Sense: 9b 00 10 08
[    2.465216] sd 8:0:2:0: [sdk] Write Protect is off
[    2.465217] sd 8:0:2:0: [sdk] Mode Sense: 9b 00 10 08
[    2.465624] sd 8:0:1:0: [sdj] Write cache: enabled, read cache: enabled, supports DPO and FUA
[    2.465737] sd 8:0:0:0: [sdi] Write Protect is off
[    2.465739] sd 8:0:0:0: [sdi] Mode Sense: 9b 00 10 08
[    2.465767] sd 8:0:2:0: [sdk] Write cache: enabled, read cache: enabled, supports DPO and FUA
[    2.466142] sd 8:0:0:0: [sdi] Write cache: enabled, read cache: enabled, supports DPO and FUA
[    2.466694] sd 8:0:3:0: [sdl] Write Protect is off
[    2.466699] sd 8:0:3:0: [sdl] Mode Sense: 9b 00 10 08
[    2.466763] sd 8:0:4:0: [sdm] Write Protect is off
[    2.466763] sd 8:0:4:0: [sdm] Mode Sense: 9b 00 10 08
[    2.467098] sd 8:0:7:0: [sdp] Write Protect is off
[    2.467099] sd 8:0:7:0: [sdp] Mode Sense: 9b 00 10 08
[    2.467424] sd 8:0:5:0: [sdn] Write Protect is off
[    2.467425] sd 8:0:5:0: [sdn] Mode Sense: 9b 00 10 08
[    2.467652] sd 8:0:3:0: [sdl] Write cache: enabled, read cache: enabled, supports DPO and FUA
[    2.467829] sd 8:0:4:0: [sdm] Write cache: enabled, read cache: enabled, supports DPO and FUA
[    2.467931] sd 8:0:6:0: [sdo] Write Protect is off
[    2.467935] sd 8:0:6:0: [sdo] Mode Sense: 9b 00 10 08
[    2.467978] sd 8:0:7:0: [sdp] Write cache: enabled, read cache: enabled, supports DPO and FUA
[    2.468215] sd 8:0:5:0: [sdn] Write cache: enabled, read cache: enabled, supports DPO and FUA
[    2.468734] sd 8:0:6:0: [sdo] Write cache: enabled, read cache: enabled, supports DPO and FUA
[    2.470506] sd 8:0:1:0: [sdj] Attached SCSI disk
[    2.471026] sd 8:0:2:0: [sdk] Attached SCSI disk
[    2.472160] sd 8:0:0:0: [sdi] Attached SCSI disk
[    2.473940] sd 8:0:3:0: [sdl] Attached SCSI disk
[    2.474029] sd 8:0:4:0: [sdm] Attached SCSI disk
[    2.474099] sd 8:0:7:0: [sdp] Attached SCSI disk
[    2.474762] sd 8:0:5:0: [sdn] Attached SCSI disk
[    2.476038] sd 8:0:6:0: [sdo] Attached SCSI disk
[    2.485377] e1000e 0000:00:1f.6 eth1: (PCI Express:2.5GT/s:Width x1) 3c:ec:ef:c1:0d:0e
[    2.485380] e1000e 0000:00:1f.6 eth1: Intel(R) PRO/1000 Network Connection
[    2.485462] e1000e 0000:00:1f.6 eth1: MAC: 15, PHY: 12, PBA No: 01A0FF-0FF
[    2.612852] mlx5_core 0000:02:00.0: E-Switch: Total vports 2, per vport: max uc(1024) max mc(16384)
[    2.616745] mlx5_core 0000:02:00.0: Port module event: module 0, Cable unplugged
[    2.662734] nvme nvme2: 20/0/0 default/read/poll queues
[    2.667624]  nvme2n1: p1 p2
[    2.675928] igc 0000:09:00.0 eno2: renamed from eth0
[    2.696872] e1000e 0000:00:1f.6 eno1: renamed from eth1
[    2.827000] mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
[    2.828265] mlx5_core 0000:02:00.1: firmware version: 12.27.1016
[    2.828310] mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x8 link at 0000:00:01.1 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
[    2.978807] i915 0000:00:02.0: enabling device (0006 -> 0007)
[    2.979655] i915 0000:00:02.0: [drm] VT-d active for gfx access
[    2.979696] i915 0000:00:02.0: [drm] Using Transparent Hugepages
[    2.979941] ------------[ cut here ]------------
[    2.979942] i915 0000:00:02.0: Port A asks to use VBT vswing/preemph tables
[    2.979956] WARNING: CPU: 2 PID: 623 at drivers/gpu/drm/i915/display/intel_bios.c:2723 intel_bios_init+0x1872/0x1f20 [i915]
[    2.980086] Modules linked in: i915(+) crct10dif_pclmul crc32_pclmul crc32c_intel drm_buddy polyval_clmulni mlx5_core(+) ttm polyval_generic nvme mpt3sas e1000e drm_display_helper igc ghash_clmulni_intel nvme_core sha512_ssse3 mlxfw ast tls i2c_algo_bit ucsi_acpi cec typec_ucsi raid_class nvme_common psample scsi_transport_sas pci_hyperv_intf typec video wmi pinctrl_alderlake rndis_host cdc_ether usbnet mii scsi_dh_rdac scsi_dh_emc scsi_dh_alua dm_multipath
[    2.980103] CPU: 2 PID: 623 Comm: (udev-worker) Not tainted 6.5.12-300.fc39.x86_64 #1
[    2.980104] Hardware name: Supermicro Super Server/X13SAE-F, BIOS 2.1 04/06/2023
[    2.980105] RIP: 0010:intel_bios_init+0x1872/0x1f20 [i915]
[    2.980203] Code: 49 8b 7c 24 08 48 8b 5f 50 48 85 db 75 03 48 8b 1f e8 e2 95 c2 c5 89 e9 48 89 da 48 c7 c7 80 e5 0e c1 48 89 c6 e8 fe bc 26 c5 <0f> 0b e9 e6 fa ff ff 80 fa 01 45 19 db 41 81 e3 15 b7 ff ff 41 81
[    2.980204] RSP: 0018:ffffaf4980e8bad0 EFLAGS: 00010282
[    2.980205] RAX: 0000000000000000 RBX: ffff92cac31c4220 RCX: 0000000000000027
[    2.980206] RDX: ffff92d9ff6a1548 RSI: 0000000000000001 RDI: ffff92d9ff6a1540
[    2.980207] RBP: 0000000000000041 R08: 0000000000000000 R09: ffffaf4980e8b960
[    2.980208] R10: 0000000000000003 R11: ffffffff88345d88 R12: ffff92cadf668000
[    2.980208] R13: ffff92cadf668000 R14: 0000000000000000 R15: 0000000000000000
[    2.980209] FS:  00007fb4511b2980(0000) GS:ffff92d9ff680000(0000) knlGS:0000000000000000
[    2.980210] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    2.980211] CR2: 00007fb450d3aa20 CR3: 000000010f994000 CR4: 0000000000f50ee0
[    2.980212] PKRU: 55555554
[    2.980212] Call Trace:
[    2.980214]  <TASK>
[    2.980214]  ? intel_bios_init+0x1872/0x1f20 [i915]
[    2.980304]  ? __warn+0x81/0x130
[    2.980309]  ? intel_bios_init+0x1872/0x1f20 [i915]
[    2.980397]  ? report_bug+0x171/0x1a0
[    2.980400]  ? prb_read_valid+0x1b/0x30
[    2.980403]  ? handle_bug+0x3c/0x80
[    2.980405]  ? exc_invalid_op+0x17/0x70
[    2.980407]  ? asm_exc_invalid_op+0x1a/0x20
[    2.980410]  ? intel_bios_init+0x1872/0x1f20 [i915]
[    2.980498]  ? drm_vblank_worker_init+0x6b/0x80
[    2.980502]  intel_display_driver_probe_noirq+0x39/0x290 [i915]
[    2.980620]  i915_driver_probe+0x6b9/0xb80 [i915]
[    2.980703]  local_pci_probe+0x42/0xa0
[    2.980707]  pci_device_probe+0xc7/0x240
[    2.980709]  really_probe+0x19b/0x3e0
[    2.980711]  ? __pfx___driver_attach+0x10/0x10
[    2.980712]  __driver_probe_device+0x78/0x160
[    2.980714]  driver_probe_device+0x1f/0x90
[    2.980715]  __driver_attach+0xd2/0x1c0
[    2.980716]  bus_for_each_dev+0x85/0xd0
[    2.980719]  bus_add_driver+0x116/0x220
[    2.980720]  driver_register+0x59/0x100
[    2.980722]  i915_init+0x22/0xc0 [i915]
[    2.980799]  ? __pfx_i915_init+0x10/0x10 [i915]
[    2.980872]  do_one_initcall+0x5a/0x320
[    2.980876]  do_init_module+0x60/0x240
[    2.980878]  __do_sys_init_module+0x17f/0x1b0
[    2.980880]  ? __seccomp_filter+0x32c/0x4f0
[    2.980883]  do_syscall_64+0x5d/0x90
[    2.980887]  ? do_user_addr_fault+0x179/0x640
[    2.980889]  ? exc_page_fault+0x7f/0x180
[    2.980891]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
[    2.980894] RIP: 0033:0x7fb451b9d7fe
[    2.980901] Code: 48 8b 0d 35 16 0c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 af 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 02 16 0c 00 f7 d8 64 89 01 48
[    2.980902] RSP: 002b:00007fff5eb1f868 EFLAGS: 00000246 ORIG_RAX: 00000000000000af
[    2.980904] RAX: ffffffffffffffda RBX: 00005603127b27c0 RCX: 00007fb451b9d7fe
[    2.980904] RDX: 00007fb451ca307d RSI: 00000000008909e6 RDI: 00005603130914d0
[    2.980905] RBP: 00007fff5eb1f920 R08: 000056031274c010 R09: 0000000000000007
[    2.980906] R10: 0000000000000006 R11: 0000000000000246 R12: 00007fb451ca307d
[    2.980906] R13: 0000000000020000 R14: 000056031274cde0 R15: 0000560312798f30
[    2.980908]  </TASK>
[    2.980908] ---[ end trace 0000000000000000 ]---
[    2.982235] i915 0000:00:02.0: [drm] Finished loading DMC firmware i915/adls_dmc_ver2_01.bin (v2.1)
[    3.002365] i915 0000:00:02.0: [drm] GT0: GuC firmware i915/tgl_guc_70.bin version 70.13.1
[    3.002368] i915 0000:00:02.0: [drm] GT0: HuC firmware i915/tgl_huc.bin version 7.9.3
[    3.005295] i915 0000:00:02.0: [drm] GT0: HuC: authenticated for all workloads
[    3.005596] i915 0000:00:02.0: [drm] GT0: GUC: submission enabled
[    3.005597] i915 0000:00:02.0: [drm] GT0: GUC: SLPC enabled
[    3.005992] i915 0000:00:02.0: [drm] GT0: GUC: RC enabled
[    3.006466] i915 0000:00:02.0: [drm] Protected Xe Path (PXP) protected content support initialized
[    3.007133] [drm] Initialized i915 1.6.0 20201103 for 0000:00:02.0 on minor 1
[    3.007836] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
[    3.008047] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
[    3.100457] mlx5_core 0000:02:00.1: E-Switch: Total vports 2, per vport: max uc(1024) max mc(16384)
[    3.105034] mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged
[    3.339801] mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
[    3.343455] mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0
[    3.358748] mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1
[    3.467159] BTRFS: device fsid e3af9a0d-aa4f-4d81-b315-97fa46206986 devid 1 transid 640928 /dev/dm-0 scanned by (udev-worker) (767)
[    3.971160] BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
[    3.971171] BTRFS info (device dm-0): using free space tree
[    3.978354] BTRFS info (device dm-0): enabling ssd optimizations
[    3.978361] BTRFS info (device dm-0): auto enabling async discard
[    4.523501] systemd-journald[389]: Received SIGTERM from PID 1 (systemd).
[    4.576738] audit: type=1404 audit(1700668866.880:4): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
[    4.618671] SELinux:  policy capability network_peer_controls=1
[    4.618675] SELinux:  policy capability open_perms=1
[    4.618676] SELinux:  policy capability extended_socket_class=1
[    4.618676] SELinux:  policy capability always_check_network=0
[    4.618676] SELinux:  policy capability cgroup_seclabel=1
[    4.618677] SELinux:  policy capability nnp_nosuid_transition=1
[    4.618677] SELinux:  policy capability genfs_seclabel_symlinks=1
[    4.618678] SELinux:  policy capability ioctl_skip_cloexec=0
[    4.646879] audit: type=1403 audit(1700668866.950:5): auid=4294967295 ses=4294967295 lsm=selinux res=1
[    4.647334] systemd[1]: Successfully loaded SELinux policy in 70.877ms.
[    4.675862] systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.882ms.
[    4.678582] systemd[1]: systemd 254.5-2.fc39 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
[    4.678587] systemd[1]: Detected architecture x86-64.
[    4.679531] systemd[1]: Installed transient /etc/machine-id file.
[    4.867586] systemd[1]: bpf-lsm: LSM BPF program attached
[    4.985731] zram: Added device: zram0
[    5.170193] systemd[1]: initrd-switch-root.service: Deactivated successfully.
[    5.176628] systemd[1]: Stopped initrd-switch-root.service - Switch Root.
[    5.177361] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
[    5.177654] systemd[1]: Created slice machine.slice - Virtual Machine and Container Slice.
[    5.177936] systemd[1]: Created slice system-getty.slice - Slice /system/getty.
[    5.178178] systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
[    5.178389] systemd[1]: Created slice system-sshd\x2dkeygen.slice - Slice /system/sshd-keygen.
[    5.178595] systemd[1]: Created slice system-syncthing.slice - Slice /system/syncthing.
[    5.179225] systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
[    5.179424] systemd[1]: Created slice system-systemd\x2dzram\x2dsetup.slice - Slice /system/systemd-zram-setup.
[    5.179626] systemd[1]: Created slice user.slice - User and Session Slice.
[    5.179670] systemd[1]: systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch was skipped because of an unmet condition check (ConditionPathExists=!/run/plymouth/pid).
[    5.179882] systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
[    5.180338] systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
[    5.180733] systemd[1]: Reached target blockdev@dev-mapper-luks\x2dcf6bc8bd\x2d2dfb\x2d4b60\x2db004\x2d4a283c0d2d42.target - Block Device Preparation for /dev/mapper/luks-cf6bc8bd-2dfb-4b60-b004-4a283c0d2d42.
[    5.180912] systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
[    5.181086] systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
[    5.181096] systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
[    5.181108] systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
[    5.181144] systemd[1]: Reached target paths.target - Path Units.
[    5.181328] systemd[1]: Reached target slices.target - Slice Units.
[    5.181356] systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
[    5.182444] systemd[1]: Listening on dm-event.socket - Device-mapper event daemon FIFOs.
[    5.183992] systemd[1]: Listening on lvm2-lvmpolld.socket - LVM2 poll daemon socket.
[    5.184047] systemd[1]: multipathd.socket - multipathd control socket was skipped because of an unmet condition check (ConditionPathExists=/etc/multipath.conf).
[    5.203374] systemd[1]: Listening on rpcbind.socket - RPCbind Server Activation Socket.
[    5.203429] systemd[1]: Reached target rpcbind.target - RPC Port Mapper.
[    5.205484] systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
[    5.205634] systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe.
[    5.206374] systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
[    5.207455] systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
[    5.209132] systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
[    5.209373] systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
[    5.209803] systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
[    5.211730] systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
[    5.214078] systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
[    5.215958] systemd[1]: Mounting proc-fs-nfsd.mount - NFSD configuration filesystem...
[    5.217461] systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
[    5.218547] systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
[    5.218721] systemd[1]: auth-rpcgss-module.service - Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
[    5.218875] systemd[1]: iscsi-starter.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/var/lib/iscsi/nodes).
[    5.219979] systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
[    5.221275] systemd[1]: Starting lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
[    5.222712] systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
[    5.223805] systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
[    5.225016] systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
[    5.226350] systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
[    5.227448] systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
[    5.227560] systemd[1]: plymouth-switch-root.service: Deactivated successfully.
[    5.232810] loop: module loaded
[    5.239329] fuse: init (API version 7.38)
[    5.244696] systemd[1]: Stopped plymouth-switch-root.service - Plymouth switch root service.
[    5.245151] systemd[1]: systemd-fsck-root.service: Deactivated successfully.
[    5.259591] systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
[    5.280373] RPC: Registered named UNIX socket transport module.
[    5.280375] RPC: Registered udp transport module.
[    5.280376] RPC: Registered tcp transport module.
[    5.280376] RPC: Registered tcp-with-tls transport module.
[    5.280376] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    5.286725] systemd[1]: Starting systemd-journald.service - Journal Service...
[    5.288195] systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
[    5.289575] systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
[    5.294949] systemd[1]: Starting systemd-pcrmachine.service - TPM2 PCR Machine ID Measurement...
[    5.295719] systemd-journald[947]: Collecting audit messages is enabled.
[    5.296321] systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
[    5.297808] systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
[    5.299774] systemd[1]: Started systemd-journald.service - Journal Service.
[    5.299930] audit: type=1130 audit(1700668867.603:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    5.316830] audit: type=1130 audit(1700668867.620:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    5.347748] audit: type=1130 audit(1700668867.651:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-monitor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    5.356161] audit: type=1130 audit(1700668867.659:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    5.356167] audit: type=1131 audit(1700668867.659:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    5.367712] audit: type=1130 audit(1700668867.671:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    5.367721] audit: type=1131 audit(1700668867.671:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    5.383727] audit: type=1130 audit(1700668867.687:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    5.383734] audit: type=1131 audit(1700668867.687:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    5.399864] audit: type=1130 audit(1700668867.703:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    5.516310] systemd-journald[947]: Received client request to flush runtime journal.
[    5.577110] systemd-journald[947]: /var/log/journal/ec7321adfdbb412093efcdc435abb26e/system.journal: Journal file uses a different sequence number ID, rotating.
[    5.577114] systemd-journald[947]: Rotating system journal.
[    5.695017] BTRFS info: devid 1 device path /dev/mapper/luks-cf6bc8bd-2dfb-4b60-b004-4a283c0d2d42 changed to /dev/dm-0 scanned by (udev-worker) (1008)
[    5.697976] BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/luks-cf6bc8bd-2dfb-4b60-b004-4a283c0d2d42 scanned by (udev-worker) (1008)
[    5.746203] IPMI message handler: version 39.2
[    5.758465] ipmi device interface
[    5.780922] ipmi_si: IPMI System Interface driver
[    5.781435] ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
[    5.781439] ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
[    5.781445] ipmi_si: Adding SMBIOS-specified kcs state machine
[    5.783246] ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
[    5.783422] ipmi_si IPI0001:00: ipmi_platform: [io  0x0ca2] regsize 1 spacing 1 irq 0
[    5.791964] zram0: detected capacity change from 0 to 16777216
[    5.801268] idma64 idma64.0: Found Intel integrated DMA 64-bit
[    5.814700] ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
[    5.814704] ipmi_si: Adding ACPI-specified kcs state machine
[    5.817086] ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
[    5.822689] idma64 idma64.1: Found Intel integrated DMA 64-bit
[    5.829149] mei_me 0000:00:16.0: enabling device (0000 -> 0002)
[    5.839317] ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
[    5.840485] i801_smbus 0000:00:1f.4: SPD Write Disable is set
[    5.840543] i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
[    5.845805] i2c i2c-13: 2/4 memory slots populated (from DMI)
[    5.845808] i2c i2c-13: Memory type 0x22 not supported yet, not instantiating SPD
[    5.847911] iTCO_vendor_support: vendor-support=0
[    5.852652] mei_pxp 0000:00:16.0-fbf6fcf1-96cf-4e2e-a6a6-1bab8cbe36b1: bound 0000:00:02.0 (ops i915_pxp_tee_component_ops [i915])
[    5.853168] iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
[    5.854004] iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
[    5.855982] mei_hdcp 0000:00:16.0-b638ab7e-94e2-4ea2-a552-d1c54b627f04: bound 0000:00:02.0 (ops i915_hdcp_ops [i915])
[    5.862564] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
[    5.862569] RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
[    5.862570] RAPL PMU: hw unit of domain package 2^-14 Joules
[    5.862571] RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules
[    5.864493] ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1c48, dev_id: 0x20)
[    5.963005] Adding 8388604k swap on /dev/zram0.  Priority:100 extents:1 across:8388604k SSDscFS
[    5.993282] snd_hda_intel 0000:00:1f.3: enabling device (0000 -> 0002)
[    5.993621] snd_hda_intel 0000:00:1f.3: bound 0000:00:02.0 (ops i915_audio_component_bind_ops [i915])
[    6.000439] intel_tcc_cooling: TCC Offset locked
[    6.012384] ipmi_si IPI0001:00: IPMI kcs interface initialized
[    6.016957] intel_rapl_msr: PL4 support detected.
[    6.017002] intel_rapl_common: Found RAPL domain package
[    6.017010] intel_rapl_common: Found RAPL domain core
[    6.017012] intel_rapl_common: Found RAPL domain uncore
[    6.030459] ipmi_ssif: IPMI SSIF Interface driver
[    6.057893] snd_hda_codec_realtek hdaudioC0D0: autoconfig for ALC888-VD: line_outs=3 (0x14/0x17/0x16/0x0/0x0) type:line
[    6.057897] snd_hda_codec_realtek hdaudioC0D0:    speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
[    6.057898] snd_hda_codec_realtek hdaudioC0D0:    hp_outs=1 (0x1b/0x0/0x0/0x0/0x0)
[    6.057899] snd_hda_codec_realtek hdaudioC0D0:    mono: mono_out=0x0
[    6.057900] snd_hda_codec_realtek hdaudioC0D0:    dig-out=0x1e/0x0
[    6.057901] snd_hda_codec_realtek hdaudioC0D0:    inputs:
[    6.057901] snd_hda_codec_realtek hdaudioC0D0:      Front Mic=0x19
[    6.057902] snd_hda_codec_realtek hdaudioC0D0:      Rear Mic=0x18
[    6.057903] snd_hda_codec_realtek hdaudioC0D0:      Line=0x1a
[    6.104092] input: HDA Intel PCH Front Mic as /devices/pci0000:00/0000:00:1f.3/sound/card0/input6
[    6.104132] input: HDA Intel PCH Rear Mic as /devices/pci0000:00/0000:00:1f.3/sound/card0/input7
[    6.104159] input: HDA Intel PCH Line as /devices/pci0000:00/0000:00:1f.3/sound/card0/input8
[    6.104185] input: HDA Intel PCH Line Out Front as /devices/pci0000:00/0000:00:1f.3/sound/card0/input9
[    6.104213] input: HDA Intel PCH Line Out Surround as /devices/pci0000:00/0000:00:1f.3/sound/card0/input10
[    6.104239] input: HDA Intel PCH Line Out CLFE as /devices/pci0000:00/0000:00:1f.3/sound/card0/input11
[    6.104265] input: HDA Intel PCH Front Headphone as /devices/pci0000:00/0000:00:1f.3/sound/card0/input12
[    6.104291] input: HDA Intel PCH HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:1f.3/sound/card0/input13
[    6.104316] input: HDA Intel PCH HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:1f.3/sound/card0/input14
[    6.104340] input: HDA Intel PCH HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:1f.3/sound/card0/input15
[    6.104365] input: HDA Intel PCH HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:1f.3/sound/card0/input16
[   11.430069] kauditd_printk_skb: 25 callbacks suppressed
[   11.430071] audit: type=1338 audit(1700668873.732:41): module=crypt op=ctr ppid=1 pid=1094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-cryptse" exe="/usr/lib/systemd/systemd-cryptsetup" subj=system_u:system_r:lvm_t:s0 dev=253:1 error_msg='success' res=1
[   11.430287] audit: type=1300 audit(1700668873.732:41): arch=c000003e syscall=16 success=yes exit=0 a0=4 a1=c138fd09 a2=564c895d29d0 a3=564c895c8410 items=6 ppid=1 pid=1094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-cryptse" exe="/usr/lib/systemd/systemd-cryptsetup" subj=system_u:system_r:lvm_t:s0 key=(null)
[   11.430289] audit: type=1307 audit(1700668873.732:41): cwd="/"
[   11.430291] audit: type=1302 audit(1700668873.732:41): item=0 name=(null) inode=2049 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
[   11.430293] audit: type=1302 audit(1700668873.732:41): item=1 name=(null) inode=28379 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
[   11.430295] audit: type=1302 audit(1700668873.732:41): item=2 name=(null) inode=1046 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
[   11.430297] audit: type=1302 audit(1700668873.732:41): item=3 name=(null) inode=28380 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
[   11.430298] audit: type=1302 audit(1700668873.732:41): item=4 name=(null) inode=28380 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
[   11.430300] audit: type=1302 audit(1700668873.732:41): item=5 name=(null) inode=28381 dev=00:07 mode=0100444 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
[   11.430302] audit: type=1327 audit(1700668873.732:41): proctitle=2F7573722F6C69622F73797374656D642F73797374656D642D6372797074736574757000617474616368006C756B732D7A66732D6E766D652D53616D73756E675F5353445F3937305F45564F5F506C75735F3254425F533539434E5A464E41313436323559002F6465762F6469736B2F62792D69642F6E766D652D53616D7375
[   11.456728] spl: loading out-of-tree module taints kernel.
[   11.585473] zfs: module license 'CDDL' taints kernel.
[   11.585477] Disabling lock debugging due to kernel taint
[   11.585491] zfs: module license taints kernel.
[   12.634620] ZFS: Loaded module v2.2.1-1, ZFS pool version 5000, ZFS filesystem version 5
[   16.625451] kauditd_printk_skb: 12 callbacks suppressed
[   16.625453] audit: type=1338 audit(1700668878.928:45): module=crypt op=ctr ppid=1 pid=1195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-cryptse" exe="/usr/lib/systemd/systemd-cryptsetup" subj=system_u:system_r:lvm_t:s0 dev=253:3 error_msg='success' res=1
[   16.625607] audit: type=1300 audit(1700668878.928:45): arch=c000003e syscall=16 success=yes exit=0 a0=4 a1=c138fd09 a2=55cc7951d9b0 a3=55cc79512600 items=6 ppid=1 pid=1195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-cryptse" exe="/usr/lib/systemd/systemd-cryptsetup" subj=system_u:system_r:lvm_t:s0 key=(null)
[   16.625609] audit: type=1307 audit(1700668878.928:45): cwd="/"
[   16.625611] audit: type=1302 audit(1700668878.928:45): item=0 name=(null) inode=2049 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
[   16.625613] audit: type=1302 audit(1700668878.928:45): item=1 name=(null) inode=45162 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
[   16.625615] audit: type=1302 audit(1700668878.928:45): item=2 name=(null) inode=1046 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
[   16.625617] audit: type=1302 audit(1700668878.928:45): item=3 name=(null) inode=42049 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
[   16.625618] audit: type=1302 audit(1700668878.928:45): item=4 name=(null) inode=42049 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
[   16.625620] audit: type=1302 audit(1700668878.928:45): item=5 name=(null) inode=42050 dev=00:07 mode=0100444 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
[   16.625622] audit: type=1327 audit(1700668878.928:45): proctitle=2F7573722F6C69622F73797374656D642F73797374656D642D6372797074736574757000617474616368006C756B732D7A66732D6174612D5744435F57443134304544465A2D313141305641305F394C47354C364841002F6465762F6469736B2F62792D69642F6174612D5744435F57443134304544465A2D31314130564130
[   25.005762] kauditd_printk_skb: 188 callbacks suppressed
[   25.005764] audit: type=1130 audit(1700668887.309:81): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zfs-import-cache comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   25.946908] audit: type=1130 audit(1700668888.250:82): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zfs-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   25.983858] audit: type=1130 audit(1700668888.287:83): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=plymouth-read-write comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   25.995761] audit: type=1130 audit(1700668888.299:84): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-boot-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   26.008790] audit: type=1130 audit(1700668888.312:85): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   26.025785] audit: type=1130 audit(1700668888.329:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   26.041562] audit: type=1334 audit(1700668888.344:87): prog-id=61 op=LOAD
[   26.041717] audit: type=1334 audit(1700668888.345:88): prog-id=62 op=LOAD
[   26.041726] audit: type=1334 audit(1700668888.345:89): prog-id=63 op=LOAD
[   26.043464] audit: type=1334 audit(1700668888.346:90): prog-id=64 op=LOAD
[   26.216772] dbus-broker-lau[2574]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
[   26.505658] NET: Registered PF_QIPCRTR protocol family
[   27.442587] mlx5_core 0000:02:00.0 enp2s0f0np0: Link down
[   27.610859] mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
[   27.644363] 8021q: 802.1Q VLAN Support v1.8
[   27.644386] 8021q: adding VLAN 0 to HW filter on device enp2s0f0np0
[   27.656123] 8021q: adding VLAN 0 to HW filter on device enp2s0f1np1
[   27.709774] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
[   27.809971] bridge0: port 1(enp2s0f1np1) entered blocking state
[   27.809974] bridge0: port 1(enp2s0f1np1) entered disabled state
[   27.809991] mlx5_core 0000:02:00.1 enp2s0f1np1: entered allmulticast mode
[   27.820266] systemd-journald[947]: /var/log/journal/ec7321adfdbb412093efcdc435abb26e/user-2003.journal: Journal file uses a different sequence number ID, rotating.
[   27.838680] br-vlan-6: port 1(enp2s0f1np1.6) entered blocking state
[   27.838684] br-vlan-6: port 1(enp2s0f1np1.6) entered disabled state
[   27.838702] enp2s0f1np1.6: entered allmulticast mode
[   27.838738] enp2s0f1np1.6: entered promiscuous mode
[   27.838743] mlx5_core 0000:02:00.1 enp2s0f1np1: entered promiscuous mode
[   27.838785] br-vlan-6: port 1(enp2s0f1np1.6) entered blocking state
[   27.838786] br-vlan-6: port 1(enp2s0f1np1.6) entered listening state
[   27.853369] mlx5_core 0000:02:00.1: mlx5e_fs_set_rx_mode_work:842:(pid 210): S-tagged traffic will be dropped while C-tag vlan stripping is enabled
[   28.483789] bridge0: port 1(enp2s0f1np1) entered blocking state
[   28.483792] bridge0: port 1(enp2s0f1np1) entered listening state
[   31.427057] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGUWTKP error=5 type=2 offset=10703067623424 size=122880 flags=1074267264
[   31.427209] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GSX1RC error=5 type=2 offset=10703067623424 size=122880 flags=1074267264
[   31.428041] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_3HG62TPN error=5 type=2 offset=10445368209408 size=110592 flags=1074267264
[   31.428255] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_3WH05X7J error=5 type=2 offset=10445368209408 size=110592 flags=1074267264
[   31.460137] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_3HG62TPN error=5 type=2 offset=10445368209408 size=4096 flags=1605761
[   31.460292] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGUWTKP error=5 type=2 offset=10703067623424 size=4096 flags=1605761
[   31.466081] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GSX1RC error=5 type=2 offset=10703067623424 size=4096 flags=1605761
[   31.466975] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_3WH05X7J error=5 type=2 offset=10445368209408 size=4096 flags=1605761
[   31.478155] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7SW0A error=5 type=2 offset=11201279152128 size=4096 flags=1572992
[   31.478162] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7AWJG error=5 type=2 offset=11201279152128 size=4096 flags=1572992
[   31.478172] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_QBHTL05T error=5 type=2 offset=10531265617920 size=4096 flags=1572992
[   31.478178] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG832LA error=5 type=2 offset=10531265617920 size=4096 flags=1572992
[   31.509977] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7SW0A error=5 type=2 offset=11201279152128 size=4096 flags=1605761
[   31.509981] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7AWJG error=5 type=2 offset=11201279152128 size=4096 flags=1605761
[   31.509983] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG832LA error=5 type=2 offset=10531265617920 size=4096 flags=1605761
[   31.510580] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_QBHTL05T error=5 type=2 offset=10531265617920 size=4096 flags=1605761
[   31.520771] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7SW0A error=5 type=2 offset=11201279152128 size=4096 flags=1572992
[   31.520773] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7AWJG error=5 type=2 offset=11201279152128 size=4096 flags=1572992
[   31.520776] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_QBHTL05T error=5 type=2 offset=10531265617920 size=4096 flags=1572992
[   31.520777] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG832LA error=5 type=2 offset=10531265617920 size=4096 flags=1572992
[   31.530545] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7AWJG error=5 type=2 offset=11201279152128 size=4096 flags=1605761
[   31.530590] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7SW0A error=5 type=2 offset=11201279152128 size=4096 flags=1605761
[   31.530603] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_QBHTL05T error=5 type=2 offset=10531265617920 size=4096 flags=1605761
[   31.530616] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG832LA error=5 type=2 offset=10531265617920 size=4096 flags=1605761
[   31.540902] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7SW0A error=5 type=2 offset=11201279152128 size=4096 flags=1572992
[   31.540908] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7AWJG error=5 type=2 offset=11201279152128 size=4096 flags=1572992
[   31.540917] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG832LA error=5 type=2 offset=10531265617920 size=4096 flags=1572992
[   31.540917] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_QBHTL05T error=5 type=2 offset=10531265617920 size=4096 flags=1572992
[   31.550642] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG832LA error=5 type=2 offset=10531265617920 size=4096 flags=1605761
[   31.550705] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_QBHTL05T error=5 type=2 offset=10531265617920 size=4096 flags=1605761
[   31.550722] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7SW0A error=5 type=2 offset=11201279152128 size=4096 flags=1605761
[   31.550727] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7AWJG error=5 type=2 offset=11201279152128 size=4096 flags=1605761
[   43.201413] br-vlan-6: port 1(enp2s0f1np1.6) entered learning state
[   43.713442] bridge0: port 1(enp2s0f1np1) entered learning state
[   58.561131] br-vlan-6: port 1(enp2s0f1np1.6) entered forwarding state
[   58.561155] br-vlan-6: topology change detected, propagating
[   59.073045] bridge0: port 1(enp2s0f1np1) entered forwarding state
[   60.142817] tun: Universal TUN/TAP device driver, 1.6
[   60.144025] br-vlan-6: port 2(vnet0) entered blocking state
[   60.144030] br-vlan-6: port 2(vnet0) entered disabled state
[   60.144047] vnet0: entered allmulticast mode
[   60.144124] vnet0: entered promiscuous mode
[   60.144330] br-vlan-6: port 2(vnet0) entered blocking state
[   60.144335] br-vlan-6: port 2(vnet0) entered listening state
[   60.524915] br-vlan-6: port 3(vnet1) entered blocking state
[   60.524919] br-vlan-6: port 3(vnet1) entered disabled state
[   60.524942] vnet1: entered allmulticast mode
[   60.524971] vnet1: entered promiscuous mode
[   60.525057] br-vlan-6: port 3(vnet1) entered blocking state
[   60.525059] br-vlan-6: port 3(vnet1) entered listening state
[   60.647633] x86/split lock detection: #AC: CPU 3/KVM/5994 took a split_lock trap at address: 0x7efa7050
[   60.647634] x86/split lock detection: #AC: CPU 2/KVM/5993 took a split_lock trap at address: 0x7efa7050
[   60.647634] x86/split lock detection: #AC: CPU 1/KVM/5992 took a split_lock trap at address: 0x7efa7050
[   60.832739] br-vlan-6: port 4(vnet2) entered blocking state
[   60.832743] br-vlan-6: port 4(vnet2) entered disabled state
[   60.832748] vnet2: entered allmulticast mode
[   60.832779] vnet2: entered promiscuous mode
[   60.832912] br-vlan-6: port 4(vnet2) entered blocking state
[   60.832915] br-vlan-6: port 4(vnet2) entered listening state
[   60.968879] x86/split lock detection: #AC: CPU 1/KVM/6039 took a split_lock trap at address: 0x7efa7050
[   60.968879] x86/split lock detection: #AC: CPU 3/KVM/6041 took a split_lock trap at address: 0x7efa7050
[   60.968881] x86/split lock detection: #AC: CPU 2/KVM/6040 took a split_lock trap at address: 0x7efa7050
[   61.297594] x86/split lock detection: #AC: CPU 2/KVM/6079 took a split_lock trap at address: 0x7efa7050
[   61.297594] x86/split lock detection: #AC: CPU 3/KVM/6080 took a split_lock trap at address: 0x7efa7050
[   61.297597] x86/split lock detection: #AC: CPU 1/KVM/6078 took a split_lock trap at address: 0x7efa7050
[   61.437058] br-vlan-6: port 5(vnet3) entered blocking state
[   61.437063] br-vlan-6: port 5(vnet3) entered disabled state
[   61.437069] vnet3: entered allmulticast mode
[   61.437089] vnet3: entered promiscuous mode
[   61.437171] br-vlan-6: port 5(vnet3) entered blocking state
[   61.437172] br-vlan-6: port 5(vnet3) entered listening state
[   61.442068] virbr1: port 1(vnet4) entered blocking state
[   61.442072] virbr1: port 1(vnet4) entered disabled state
[   61.442081] vnet4: entered allmulticast mode
[   61.442118] vnet4: entered promiscuous mode
[   61.442210] virbr1: port 1(vnet4) entered blocking state
[   61.442212] virbr1: port 1(vnet4) entered listening state
[   61.818999] br-vlan-6: port 6(vnet5) entered blocking state
[   61.819005] br-vlan-6: port 6(vnet5) entered disabled state
[   61.819014] vnet5: entered allmulticast mode
[   61.819045] vnet5: entered promiscuous mode
[   61.819147] br-vlan-6: port 6(vnet5) entered blocking state
[   61.819149] br-vlan-6: port 6(vnet5) entered listening state
[   61.947108] x86/split lock detection: #AC: CPU 2/KVM/6295 took a split_lock trap at address: 0x7efa7050
[   63.321465] RPC: Registered rdma transport module.
[   63.321468] RPC: Registered rdma backchannel transport module.
[   63.454892] NFSD: Using nfsdcld client tracking operations.
[   63.454901] NFSD: no clients to reclaim, skipping NFSv4 grace period (net f0000000)
[   63.488780] virbr1: port 1(vnet4) entered learning state
[   65.442248] virbr1: port 1(vnet4) entered forwarding state
[   65.442260] virbr1: topology change detected, propagating
[   75.222579] br-vlan-6: port 2(vnet0) entered learning state
[   75.733567] br-vlan-6: port 3(vnet1) entered learning state
[   75.733587] br-vlan-6: port 4(vnet2) entered learning state
[   76.245557] br-vlan-6: port 5(vnet3) entered learning state
[   76.757547] br-vlan-6: port 6(vnet5) entered learning state
[   90.582378] br-vlan-6: port 2(vnet0) entered forwarding state
[   90.582387] br-vlan-6: topology change detected, propagating
[   91.093397] br-vlan-6: port 3(vnet1) entered forwarding state
[   91.093418] br-vlan-6: topology change detected, propagating
[   91.093457] br-vlan-6: port 4(vnet2) entered forwarding state
[   91.093470] br-vlan-6: topology change detected, propagating
[   91.605379] br-vlan-6: port 5(vnet3) entered forwarding state
[   91.605399] br-vlan-6: topology change detected, propagating
[   92.117307] br-vlan-6: port 6(vnet5) entered forwarding state
[   92.117322] br-vlan-6: topology change detected, propagating
[  381.387706] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CHDJSXN error=5 type=2 offset=10393830420480 size=4096 flags=1572992
[  381.387719] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CH3DWZP error=5 type=2 offset=10393830420480 size=4096 flags=1572992
[  381.387732] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGUWTKP error=5 type=2 offset=10893321154560 size=4096 flags=1572992
[  381.387769] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_3WH05X7J error=5 type=2 offset=10067416707072 size=4096 flags=1572992
[  381.435069] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CHDJSXN error=5 type=2 offset=10393830420480 size=4096 flags=1605761
[  381.435079] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CH3DWZP error=5 type=2 offset=10393830420480 size=4096 flags=1605761
[  381.435084] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGUWTKP error=5 type=2 offset=10893321154560 size=4096 flags=1605761
[  381.435094] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_3WH05X7J error=5 type=2 offset=10067416707072 size=4096 flags=1605761
[  389.507042] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGH1YPP error=5 type=2 offset=10033077415936 size=4096 flags=1572992
[  389.507068] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CHDJSXN error=5 type=2 offset=10428185280512 size=4096 flags=1572992
[  389.507068] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GTU87C error=5 type=2 offset=10033077415936 size=4096 flags=1572992
[  389.507082] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CH3DWZP error=5 type=2 offset=10428185280512 size=4096 flags=1572992
[  389.507086] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGUWTKP error=5 type=2 offset=10943687331840 size=4096 flags=1572992
[  389.568784] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGH1YPP error=5 type=2 offset=10033077415936 size=4096 flags=1605761
[  389.568791] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GTU87C error=5 type=2 offset=10033077415936 size=4096 flags=1605761
[  389.568798] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CHDJSXN error=5 type=2 offset=10428185280512 size=4096 flags=1605761
[  389.568807] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CH3DWZP error=5 type=2 offset=10428185280512 size=4096 flags=1605761
[  389.568810] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGUWTKP error=5 type=2 offset=10943687331840 size=4096 flags=1605761
[  389.651031] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGH1YPP error=5 type=2 offset=10033077436416 size=4096 flags=1572992
[  389.651041] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GTU87C error=5 type=2 offset=10033077436416 size=4096 flags=1572992
[  389.651050] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CHDJSXN error=5 type=2 offset=10428185300992 size=4096 flags=1572992
[  389.651049] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CH3DWZP error=5 type=2 offset=10428185300992 size=4096 flags=1572992
[  389.651057] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGUWTKP error=5 type=2 offset=10943687352320 size=4096 flags=1572992
[  389.749552] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGH1YPP error=5 type=2 offset=10033077436416 size=4096 flags=1605761
[  389.749564] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CHDJSXN error=5 type=2 offset=10428185300992 size=4096 flags=1605761
[  389.749565] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GTU87C error=5 type=2 offset=10033077436416 size=4096 flags=1605761
[  389.749569] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CH3DWZP error=5 type=2 offset=10428185300992 size=4096 flags=1605761
[  389.749579] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGUWTKP error=5 type=2 offset=10943687352320 size=4096 flags=1605761
[  394.091755] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_3WH05X7J error=5 type=2 offset=10050264371200 size=4096 flags=1572992
[  394.091785] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_QBHTL05T error=5 type=2 offset=10496907218944 size=4096 flags=1572992
[  394.091784] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7SW0A error=5 type=2 offset=11029482332160 size=4096 flags=1572992
[  394.091784] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7AWJG error=5 type=2 offset=11029482332160 size=4096 flags=1572992
[  394.091797] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG832LA error=5 type=2 offset=10496907218944 size=4096 flags=1572992
[  394.143686] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7SW0A error=5 type=2 offset=11029482332160 size=4096 flags=1605761
[  394.143690] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG7AWJG error=5 type=2 offset=11029482332160 size=4096 flags=1605761
[  394.143695] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_QBHTL05T error=5 type=2 offset=10496907218944 size=4096 flags=1605761
[  394.143701] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG832LA error=5 type=2 offset=10496907218944 size=4096 flags=1605761
[  394.144571] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_3WH05X7J error=5 type=2 offset=10050264371200 size=4096 flags=1605761
[  977.041814] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG86A8A error=5 type=2 offset=11115380703232 size=4096 flags=1572992
[  977.041819] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_Y5J2Z08C error=5 type=2 offset=11115380703232 size=4096 flags=1572992
[  977.042131] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGH1YPP error=5 type=2 offset=10033078136832 size=40960 flags=1074267264
[  977.042203] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GTU87C error=5 type=2 offset=10033078136832 size=40960 flags=1074267264
[  977.042509] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG3WA0A error=5 type=2 offset=10617164292096 size=24576 flags=1074267264
[  977.042717] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG5L6HA error=5 type=2 offset=10617164292096 size=24576 flags=1074267264
[  977.100141] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG5L6HA error=5 type=2 offset=10617164292096 size=4096 flags=1605761
[  977.100148] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GTU87C error=5 type=2 offset=10033078136832 size=4096 flags=1605761
[  977.100347] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGH1YPP error=5 type=2 offset=10033078136832 size=4096 flags=1605761
[  977.100503] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG3WA0A error=5 type=2 offset=10617164292096 size=4096 flags=1605761
[  977.100899] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG86A8A error=5 type=2 offset=11115380703232 size=4096 flags=1605761
[  977.100994] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_Y5J2Z08C error=5 type=2 offset=11115380703232 size=4096 flags=1605761
[  977.162666] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG86A8A error=5 type=2 offset=11115380711424 size=4096 flags=1572992
[  977.162672] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_Y5J2Z08C error=5 type=2 offset=11115380711424 size=4096 flags=1572992
[  977.162684] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG3WA0A error=5 type=2 offset=10617164316672 size=4096 flags=1572992
[  977.162688] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGH1YPP error=5 type=2 offset=10033078177792 size=4096 flags=1572992
[  977.162698] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GTU87C error=5 type=2 offset=10033078177792 size=4096 flags=1572992
[  977.162694] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG5L6HA error=5 type=2 offset=10617164316672 size=4096 flags=1572992
[  977.253953] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG86A8A error=5 type=2 offset=11115380711424 size=4096 flags=1605761
[  977.253971] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_Y5J2Z08C error=5 type=2 offset=11115380711424 size=4096 flags=1605761
[  977.253981] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG3WA0A error=5 type=2 offset=10617164316672 size=4096 flags=1605761
[  977.253990] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG5L6HA error=5 type=2 offset=10617164316672 size=4096 flags=1605761
[  977.254000] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GTU87C error=5 type=2 offset=10033078177792 size=4096 flags=1605761
[  977.254014] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGH1YPP error=5 type=2 offset=10033078177792 size=4096 flags=1605761
[  980.172095] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG3WA0A error=5 type=2 offset=10617164607488 size=4096 flags=1572992
[  980.172118] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG5L6HA error=5 type=2 offset=10617164607488 size=4096 flags=1572992
[  980.172162] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGH1YPP error=5 type=2 offset=10033078587392 size=4096 flags=1572992
[  980.172186] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GTU87C error=5 type=2 offset=10033078587392 size=4096 flags=1572992
[  980.172213] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CHDJSXN error=5 type=2 offset=10428185964544 size=4096 flags=1572992
[  980.172232] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CH3DWZP error=5 type=2 offset=10428185964544 size=4096 flags=1572992
[  980.224252] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG3WA0A error=5 type=2 offset=10617164607488 size=4096 flags=1605761
[  980.224261] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG5L6HA error=5 type=2 offset=10617164607488 size=4096 flags=1605761
[  980.224266] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGH1YPP error=5 type=2 offset=10033078587392 size=4096 flags=1605761
[  980.224270] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GTU87C error=5 type=2 offset=10033078587392 size=4096 flags=1605761
[  980.224274] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CHDJSXN error=5 type=2 offset=10428185964544 size=4096 flags=1605761
[  980.224277] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CH3DWZP error=5 type=2 offset=10428185964544 size=4096 flags=1605761
[  980.336765] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG3WA0A error=5 type=2 offset=10617164632064 size=4096 flags=1572992
[  980.336773] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG5L6HA error=5 type=2 offset=10617164632064 size=4096 flags=1572992
[  980.336778] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGH1YPP error=5 type=2 offset=10033078616064 size=4096 flags=1572992
[  980.336784] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CHDJSXN error=5 type=2 offset=10428185989120 size=4096 flags=1572992
[  980.336784] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GTU87C error=5 type=2 offset=10033078616064 size=4096 flags=1572992
[  980.336793] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CH3DWZP error=5 type=2 offset=10428185989120 size=4096 flags=1572992
[  980.429169] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG3WA0A error=5 type=2 offset=10617164632064 size=4096 flags=1605761
[  980.429177] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDFZ-11A0VA0_9LG5L6HA error=5 type=2 offset=10617164632064 size=4096 flags=1605761
[  980.429191] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B1PA0_Y6GTU87C error=5 type=2 offset=10033078616064 size=4096 flags=1605761
[  980.429191] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CGH1YPP error=5 type=2 offset=10033078616064 size=4096 flags=1605761
[  980.429197] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CH3DWZP error=5 type=2 offset=10428185989120 size=4096 flags=1605761
[  980.429198] zio pool=satapool0 vdev=/dev/mapper/luks-zfs-ata-WDC_WD140EDGZ-11B2DA2_2CHDJSXN error=5 type=2 offset=10428185989120 size=4096 flags=1605761
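To see at a glance which vdevs are accumulating errors, dmesg output like the above can be tallied with a short script. This is a hypothetical helper, not part of ZFS; it assumes the exact `zio pool=... vdev=... error=... type=...` line format shown in this thread:

```python
import re
from collections import Counter

# Match the "zio pool=... vdev=... error=... type=..." lines that ZFS
# prints to the kernel log when a zio fails. The regex assumes the exact
# format seen in this thread.
ZIO_RE = re.compile(
    r"zio pool=(?P<pool>\S+) vdev=(?P<vdev>\S+) "
    r"error=(?P<error>\d+) type=(?P<type>\d+)"
)

def summarize(dmesg_lines):
    """Map (vdev, error, type) -> number of matching zio lines."""
    counts = Counter()
    for line in dmesg_lines:
        m = ZIO_RE.search(line)
        if m:
            counts[(m["vdev"], int(m["error"]), int(m["type"]))] += 1
    return counts

sample = [
    "[ 1408.819308] zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1116851789824 size=4096 flags=1572992",
    "[ 1408.845815] zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1116851789824 size=4096 flags=1605761",
]
for (vdev, err, typ), n in sorted(summarize(sample).items()):
    print(f"{vdev}: {n} errors (error={err}, type={typ})")
```

Feeding it `dmesg | grep zio` instead of the sample lines gives a per-vdev error count without hand-counting the log.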

@chenxiaolong

So far, after downgrading to 2.2.0 (I had to build the RPMs from source for Fedora 39), the issue seems to have disappeared. dmesg no longer reports any zfs errors, and the WRITE errors column in zpool status got reset to 0 (without my having run zpool clear). I'm running a scrub now and will report back tomorrow when it completes.

I also just noticed that, like @Rudd-O, I'm using a SLOG and L2ARC. So both of our setups have these things in common: Fedora + kernel 6.5 + LUKS encrypted disks + striped mirrors + SLOG + L2ARC + ECC memory.

In case it matters at all, it doesn't look like I've ever (intentionally or inadvertently) used block cloning:

[chenxiaolong@sm-1]~% zpool get all satapool0 | grep bclone
satapool0  bcloneused                     0                              -
satapool0  bclonesaved                    0                              -
satapool0  bcloneratio                    1.00x                          -

@blind-oracle

blind-oracle commented Nov 22, 2023

Same here, WRITE errors across all vdevs after upgrading from 2.2.0 to 2.2.1 on Ubuntu 22.04 @ kernel 6.2.0-37.

# zpool status
  pool: zfs
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: resilvered 1.64M in 00:00:00 with 0 errors on Wed Nov 22 17:51:00 2023
config:

        NAME          STATE     READ WRITE CKSUM
        zfs           ONLINE       0     0     0
          mirror-0    ONLINE       0     2     0
            D01_89BL  ONLINE       0     2     0
            D02_9WYL  ONLINE       0     2     0
          mirror-1    ONLINE       0     4     0
            D03_89YL  ONLINE       0     4     0
            D04_DA9L  ONLINE       0     4     0
          mirror-2    ONLINE       0     1     0
            D05_P8JL  ONLINE       0     1     0
            D06_8Y7L  ONLINE       0     1     0
          mirror-3    ONLINE       0     1     0
            D07_5B5J  ONLINE       0     1     0
            D08_94SL  ONLINE       0     1     0
        logs
          mirror-4    ONLINE       0     0     0
            SLOG_01   ONLINE       0     0     0
            SLOG_02   ONLINE       0     0     0
        cache
          L2ARC_01    ONLINE       0     0     0
          L2ARC_02    ONLINE       0     0     0

errors: No known data errors
# dmesg | grep zio
[ 1408.819308] zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1116851789824 size=4096 flags=1572992
[ 1408.819329] zio pool=zfs vdev=/dev/mapper/D02_9WYL error=5 type=2 offset=1116851789824 size=4096 flags=1572992
[ 1408.819334] zio pool=zfs vdev=/dev/mapper/D08_94SL error=5 type=2 offset=1082471825408 size=4096 flags=1572992
[ 1408.819726] zio pool=zfs vdev=/dev/mapper/D07_5B5J error=5 type=2 offset=1082471825408 size=4096 flags=1572992
[ 1408.845815] zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1116851789824 size=4096 flags=1605761
[ 1408.845822] zio pool=zfs vdev=/dev/mapper/D02_9WYL error=5 type=2 offset=1116851789824 size=4096 flags=1605761
[ 1408.845825] zio pool=zfs vdev=/dev/mapper/D07_5B5J error=5 type=2 offset=1082471825408 size=4096 flags=1605761
[ 1408.845832] zio pool=zfs vdev=/dev/mapper/D08_94SL error=5 type=2 offset=1082471825408 size=4096 flags=1605761
[26421.390577] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299887616 size=4096 flags=1572992
[26421.390589] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299887616 size=4096 flags=1572992
[26421.433580] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299887616 size=4096 flags=1605761
[26421.433604] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299887616 size=4096 flags=1605761
[26421.473379] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299887616 size=4096 flags=1572992
[26421.473387] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299887616 size=4096 flags=1572992
[26421.497776] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299887616 size=4096 flags=1605761
[26421.497791] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299887616 size=4096 flags=1605761
[26431.662076] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299944960 size=4096 flags=1572992
[26431.662082] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299944960 size=4096 flags=1572992
[26431.681016] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299944960 size=4096 flags=1605761
[26431.681026] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299944960 size=4096 flags=1605761
[26432.250116] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299994112 size=4096 flags=1572992
[26432.250120] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299994112 size=4096 flags=1572992
[26432.272795] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299994112 size=4096 flags=1605761
[26432.272802] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299994112 size=4096 flags=1605761
[29823.446168] zio pool=zfs vdev=/dev/mapper/D05_P8JL error=5 type=2 offset=860650754048 size=4096 flags=1572992
[29823.446183] zio pool=zfs vdev=/dev/mapper/D06_8Y7L error=5 type=2 offset=860650754048 size=4096 flags=1572992
[29823.484845] zio pool=zfs vdev=/dev/mapper/D05_P8JL error=5 type=2 offset=860650754048 size=4096 flags=1605761
[29823.484859] zio pool=zfs vdev=/dev/mapper/D06_8Y7L error=5 type=2 offset=860650754048 size=4096 flags=1605761
[29823.503696] zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1118135730176 size=4096 flags=1572992
[29823.503707] zio pool=zfs vdev=/dev/mapper/D02_9WYL error=5 type=2 offset=1118135730176 size=4096 flags=1572992
[29823.532217] zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1118135730176 size=4096 flags=1605761
[29823.532228] zio pool=zfs vdev=/dev/mapper/D02_9WYL error=5 type=2 offset=1118135730176 size=4096 flags=1605761
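For reference, the `error=5` in all of these zio messages is a raw errno value: 5 is EIO ("Input/output error") on Linux, and `type=2` corresponds to a write in the OpenZFS zio type enum (my reading of the upstream sources; the thread itself doesn't decode these fields). The errno side is easy to confirm:

```python
import errno
import os

# errno 5 on Linux is EIO, the generic I/O error that zio reports
# as error=5 in the kernel log lines above.
print(errno.errorcode[5])  # EIO
print(os.strerror(5))      # Input/output error
```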

@awused

awused commented Nov 22, 2023

With this bug in 2.2.1 and the block cloning bug in 2.2.0, I guess I'll continue putting off upgrading to 2.2.x and Fedora 39/FreeBSD 14. These are the two most serious bugs I've personally seen make it into a released zfs version, and it has happened two releases in a row.

Could 2.2.2 be made a small bug-fix-only release instead of a normal release, so there's more confidence in getting a trustworthy 2.2.x version?

@broizter

[26432.250116] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299994112 size=4096 flags=1572992
[26432.250120] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299994112 size=4096 flags=1572992
[26432.272795] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299994112 size=4096 flags=1605761
[26432.272802] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299994112 size=4096 flags=1605761
[29823.446168] zio pool=zfs vdev=/dev/mapper/D05_P8JL error=5 type=2 offset=860650754048 size=4096 flags=1572992
[29823.446183] zio pool=zfs vdev=/dev/mapper/D06_8Y7L error=5 type=2 offset=860650754048 size=4096 flags=1572992
[29823.484845] zio pool=zfs vdev=/dev/mapper/D05_P8JL error=5 type=2 offset=860650754048 size=4096 flags=1605761
[29823.484859] zio pool=zfs vdev=/dev/mapper/D06_8Y7L error=5 type=2 offset=860650754048 size=4096 flags=1605761
[29823.503696] zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1118135730176 size=4096 flags=1572992
[29823.503707] zio pool=zfs vdev=/dev/mapper/D02_9WYL error=5 type=2 offset=1118135730176 size=4096 flags=1572992
[29823.532217] zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1118135730176 size=4096 flags=1605761
[29823.532228] zio pool=zfs vdev=/dev/mapper/D02_9WYL error=5 type=2 offset=1118135730176 size=4096 flags=1605761

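The dmesg lines above can be summarised per vdev with a quick pipeline (error=5 is EIO, and type=2 is a write zio, matching the WRITE column in `zpool status`). A sketch, with a few sample lines inlined in place of a real `dmesg | grep zio` capture:

```shell
# Tally zio errors per vdev; the printf sample stands in for dmesg output.
printf '%s\n' \
  'zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1116851789824 size=4096 flags=1572992' \
  'zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1116851789824 size=4096 flags=1605761' \
  'zio pool=zfs vdev=/dev/mapper/D02_9WYL error=5 type=2 offset=1116851789824 size=4096 flags=1572992' |
  grep -o 'vdev=[^ ]*' | sort | uniq -c | sort -rn
```

Against a live system, replace the `printf` with `dmesg | grep zio`.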
Are you also running ZFS on top of LUKS? Asking since I see /dev/mapper/ devices.

@Rudd-O
Copy link
Contributor Author

Rudd-O commented Nov 22, 2023

I'm deffo on LUKS, but the copy-paste above from our friends who have repro'd the bug doesn't seem like it's LUKS.

Gotta say that my heart almost came out thru my esophagus when I got Alertmanager alerts about various drives in several machines popping off. If anyone is interested, I'm using https://github.com/Rudd-O/zfs-stats-exporter plus Node Exporter, and the following alerting rules for ZFS:

        - alert: PoolUnhealthy
          expr: zfs_pool_healthy == 0
          for: 10s
          annotations:
            summary: '{{ $labels.zpool }} in {{ $labels.instance }} is degraded or faulted'
        - alert: PoolBadState
          expr: |
            node_zfs_zpool_state{state!="online"} == 1
          for: 10s
          annotations:
            summary: '{{ $labels.zpool }} in {{ $labels.instance }} is in state {{ $labels.state }}'
        - alert: PoolErrored
          expr: zfs_pool_errors_total > 0
          for: 10s
          annotations:
            summary: '{{ $labels.zpool }} in {{ $labels.instance }} has had {{ $value }} {{ $labels.class }} errors'

None of my drives tripped the SMART rules:

        - alert: DiskHot
          expr: smartmon_temperature_celsius_raw_value >= 60
          for: 60s
          annotations:
            summary: '{{ $labels.device }} in {{ $labels.instance }} at {{ $value }}°C'
        - alert: SMARTUnhealthy
          expr: smartmon_device_smart_healthy == 0
          for: 10s
        - alert: SMARTUncorrectableSectorsFound
          expr: smartmon_offline_uncorrectable_raw_value > 0
          for: 10s
          annotations:
            summary: '{{ $value }} bad sectors on {{ $labels.device }} in {{ $labels.instance }}'
        - alert: SMARTPendingSectorsFound
          expr: smartmon_current_pending_sector_raw_value > 0
          for: 10s
          annotations:
            summary: '{{ $value }} pending sectors on {{ $labels.device }} in {{ $labels.instance }}'
        - alert: SMARTReallocatedSectorsCountHigh
          expr: smartmon_reallocated_sector_ct_raw_value > 5
          for: 10s
          annotations:
            summary: '{{ $value }} reallocated sectors on {{ $labels.device }} in {{ $labels.instance }}'
        - alert: SMARTUDMACRCErrorCountHigh
          expr: smartmon_udma_crc_error_count_raw_value > 5
          for: 10s
          annotations:
            summary: '{{ $value }} CRC errors on {{ $labels.device }} in {{ $labels.instance }}'
        - alert: SMARTAttributeAtOrBelowThreshold
          expr: '{__name__=~"smartmon_.*_value", __name__!~"smartmon_.*_raw_value", __name__!~".*power_on_hours.*"} <= {__name__=~"smartmon_.*_threshold"}'
          for: 10s

@broizter
Copy link

broizter commented Nov 22, 2023

Maybe I'm missing something, but everyone who has confirmed the bug in here is running ZFS on top of LUKS. blind-oracle hasn't confirmed it, but given that his device paths reside in /dev/mapper I would guess he is as well.

@Rudd-O
Copy link
Contributor Author

Rudd-O commented Nov 22, 2023

Maybe some funky interaction with device-mapper?

@blind-oracle
Copy link

@broizter Yes, it's running on top of LUKS since it's much faster than the built-in encryption. So yeah, it might be some device-mapper-related bug that is absent in 2.2.0.

@RinCat
Copy link

RinCat commented Nov 23, 2023

I had the same issue using zfs 2.2.1 with LUKS, linux 6.6.2.
It seems all are write errors, and it currently shows
root DEGRADED 0 43.3K 0 too many errors

@Rudd-O
Copy link
Contributor Author

Rudd-O commented Nov 23, 2023

Yep. 2.2.1 has that problem too (kernel 6.5). Reverting to 2.2.0 now.

@Rudd-O
Copy link
Contributor Author

Rudd-O commented Nov 23, 2023

So we know master at the commit in the description, and 2.2.1 both share the issue.

@Rudd-O Rudd-O changed the title Data corruption with commit 786641dcf9a7e35f26a1b4778fc710c7ec0321bf Data corruption with commit 786641dcf9a7e35f26a1b4778fc710c7ec0321bf, and with 2.2.1 stable, when vdevs are atop LUKS Nov 23, 2023
@MajesticFaucet
Copy link

MajesticFaucet commented Nov 23, 2023

Same WRITE error issue on my laptop with two single-disk zpools on LUKS.

  pool: zlaptop_hdd
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
	attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
	using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: resilvered 672K in 00:00:01 with 0 errors on Wed Nov 22 23:33:31 2023
config:

	NAME                      STATE     READ WRITE CKSUM
	zlaptop_hdd               ONLINE       0     0     0
	 /dev/mapper/laptop_hdd  ONLINE       0    64     0

errors: No known data errors

  pool: zlaptop_ssd
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
	attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
	using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: resilvered 2.66M in 00:00:00 with 0 errors on Thu Nov 23 00:05:20 2023
config:

	NAME                           STATE     READ WRITE CKSUM
	zlaptop_ssd                    ONLINE       0     0     0
	 /dev/mapper/laptop_ssd-data  ONLINE       0    28     0

errors: No known data errors
[   32.610897] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=785984929792 size=24576 flags=1074267264
[   32.649825] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=785984929792 size=4096 flags=1605761
[   32.649943] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=51550085120 size=4096 flags=1605761
[   32.651301] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=730149486592 size=4096 flags=1605761
[   32.653041] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=51550203904 size=4096 flags=1589376
[   32.653041] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=730149625856 size=4096 flags=1589376
[   32.653054] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=785985052672 size=4096 flags=1589376
[   32.654378] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=730149625856 size=4096 flags=1605761
[   32.654479] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=51550203904 size=4096 flags=1605761
[   32.654488] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=785985052672 size=4096 flags=1605761
[   43.455629] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=730149642240 size=4096 flags=1589376
[   43.455653] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=785985249280 size=8192 flags=1074267264
[   43.455664] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=51550224384 size=4096 flags=1589376
[   43.529133] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=730149642240 size=4096 flags=1605761
[   43.529150] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=785985249280 size=4096 flags=1605761
[   43.529283] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=51550224384 size=4096 flags=1605761
[  259.390004] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=451307106304 size=4096 flags=1589376
[  259.390031] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=262020669440 size=4096 flags=1589376
[  259.394231] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=262020669440 size=4096 flags=1605761
[  259.394309] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=451307106304 size=4096 flags=1605761
[  300.313484] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=451348086784 size=4096 flags=1589376
[  300.313540] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=262022987776 size=4096 flags=1589376
[  300.319265] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=262022987776 size=4096 flags=1605761
[  300.320045] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=274903932928 size=45056 flags=1074267264
[  300.321407] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=451348086784 size=4096 flags=1605761
[  300.322566] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=274903973888 size=4096 flags=1605761

Running zfs 2.2.1 (upgraded from 2.2.0) on Void Linux, custom 6.1.62 Linux kernel. The disks have a 512-byte logical sector size, but I manually formatted LUKS to use a 4096-byte sector size since it's optimal. The partitions are aligned. My zpools are created with ashift=12.

Since others have noted potential corruption issues with zfs_dmu_offset_next_sync=1 #15526 (comment), I have also tried setting it to 0, and I am still getting new errors. edit: strike-through unrelated
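For anyone checking whether their setup matches this 4 KiB-sector LUKS configuration, the sector size can be inspected and set with cryptsetup. A sketch: the device path `/dev/sdX` is a placeholder, not from this thread.

```shell
# Inspect the sector size of an existing LUKS2 volume; look for
# "sector: 4096 [bytes]" under the "Data segments" section.
cryptsetup luksDump /dev/sdX

# Format a NEW LUKS2 volume with a 4 KiB sector size (DESTROYS DATA):
cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/sdX
```

Note that `--sector-size` is only honoured for LUKS2, and 4096-byte sectors require the partition to be 4 KiB-aligned.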

@Rudd-O
Copy link
Contributor Author

Rudd-O commented Nov 23, 2023

The disks have a 512-byte logical sector size, but I manually formatted LUKS to use a 4096-byte sector size since it's optimal. The partitions are aligned. My zpools are created with ashift=12.

Same here. Important data points!

@chenxiaolong
Copy link

chenxiaolong commented Nov 23, 2023

The disks have a 512-byte logical sector size, but I manually formatted LUKS to use a 4096-byte sector size since it's optimal. The partitions are aligned. My zpools are created with ashift=12.

Same here. Important data points!

Me as well. All of my LUKS volumes are formatted as LUKS2 with a 4 KiB sector size (including the ones backing SLOG and L2ARC).

@blind-oracle
Copy link

In my case all devices are native 4K (SLOG/ARC and spinning disks), so it probably does not matter much.

@blind-oracle
Copy link

@Rudd-O I'd rename the issue; it's more like write errors than data corruption, I think. At least downgrading to 2.2.0 and doing a scrub shows no errors.

@amotin
Copy link
Member

amotin commented Jan 15, 2024

@RichardBelzer bd7a02c is reverted from 2.2.2. Master indeed should get #15588 instead.

behlendorf pushed a commit that referenced this issue Mar 25, 2024
The regular ABD iterators yield data buffers, so they have to map and
unmap pages into kernel memory. If the caller only wants to count
chunks, or can use page pointers directly, then the map/unmap is just
unnecessary overhead.

This adds adb_iterate_page_func, which yields unmapped struct page
instead.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
behlendorf pushed a commit that referenced this issue Mar 25, 2024
This is just renaming the existing functions we're about to replace and
grouping them together to make the next commits easier to follow.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
behlendorf pushed a commit that referenced this issue Mar 25, 2024
Light reshuffle to make it a bit more linear to read and get rid of a
bunch of args that aren't needed in all cases.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
behlendorf pushed a commit that referenced this issue Mar 25, 2024
This is just setting up for the next couple of commits, which will add a
new IO function and a parameter to select it.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
behlendorf pushed a commit that referenced this issue Mar 25, 2024
This commit tackles a number of issues in the way BIOs (`struct bio`)
are constructed for submission to the Linux block layer.

The kernel has a hard upper limit on the number of pages/segments that
can be added to a BIO, as well as a separate limit for each device
(related to its queue depth and other scheduling characteristics).

ZFS counts the number of memory pages in the request ABD
(`abd_nr_pages_off()`) and then uses that as the number of segments to
put into the BIO, up to the hard upper limit. If it requires more than
the limit, it will create multiple BIOs.

Leaving aside the fact that the page count method is wrong (see below),
not limiting to the device segment max means that the device driver will
need to split the BIO in half. This alone is not necessarily a problem,
but it interacts with another issue to cause a much larger problem.

The kernel function to add a segment to a BIO (`bio_add_page()`) takes a
`struct page` pointer and an offset+len within it. A `struct page` can
represent a run of contiguous memory pages (known as a "compound page")
and can be of arbitrary length.

The ZFS functions that count ABD pages and load them into the BIO
(`abd_nr_pages_off()`, `bio_map()` and `abd_bio_map_off()`) will never
consider a page to be more than `PAGE_SIZE` (4K), even if the `struct
page` is for multiple pages. In this case, it will load the same `struct
page` into the BIO multiple times, with the offset adjusted each time.

With a sufficiently large ABD, this can easily lead to the BIO being
entirely filled much earlier than it could have been. This also further
contributes to the problem caused by the incorrect segment limit
calculation, as it's much easier to go past the device limit, and so
require a split.

Again, this is not a problem on its own.

The logic for "never submit more than `PAGE_SIZE`" is actually a little
more subtle: it will never submit a buffer that crosses a 4K page
boundary.

In practice, this is fine, as most ABDs are scattered, that is a list of
complete 4K pages, and so are loaded in as such.

Linear ABDs are typically allocated from slabs, and for small sizes they
are frequently not aligned to page boundaries. For example, a 12K
allocation can span four pages, eg:

     -- 4K -- -- 4K -- -- 4K -- -- 4K --
    |        |        |        |        |
          :## ######## ######## ######:    [1K, 4K, 4K, 3K]

Such an allocation would be loaded into a BIO as you see:

    [1K, 4K, 4K, 3K]

This tends not to be a problem in practice, because even if the BIO were
filled and needed to be split, each half would still have either a start
or end aligned to the logical block size of the device (assuming 4K at
least).
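The per-page segmenting described above can be sketched in a few lines. This is an illustrative toy model of the page-boundary splitting, not the actual ZFS code; the function name is invented.

```python
PAGE_SIZE = 4096

def page_segments(offset: int, length: int) -> list[int]:
    """Split a buffer into the per-page segments that bio_map()-style
    logic would load into a BIO: no segment crosses a 4K boundary."""
    segs = []
    while length > 0:
        # Take bytes up to the next page boundary, or to end of buffer.
        take = min(PAGE_SIZE - (offset % PAGE_SIZE), length)
        segs.append(take)
        offset += take
        length -= take
    return segs

# A 12K slab allocation starting 1K before a page boundary spans four
# pages and loads as [1K, 4K, 4K, 3K]:
print(page_segments(3 * 1024, 12 * 1024))  # [1024, 4096, 4096, 3072]
```

A page-aligned buffer, by contrast, yields only full 4K segments.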

---

In ideal circumstances, these shortcomings don't cause any particular
problems. It's when they start to interact with other ZFS features that
things get interesting.

Aggregation will create a "gang" ABD, which is simply a list of other
ABDs. Iterating over a gang ABD is just iterating over each ABD within
it in turn.

Because the segments are simply loaded in order, we can end up with
uneven segments either side of the "gap" between the two ABDs. For
example, two 12K ABDs might be aggregated and then loaded as:

    [1K, 4K, 4K, 3K, 2K, 4K, 4K, 2K]

Should a split occur, each individual BIO can end up with either a
start or end offset that is not aligned to the logical block size, which
some drivers (eg SCSI) will reject. However, this tends not to happen
because the default aggregation limit usually keeps the BIO small enough
to not require more than one split, and most pages are actually full 4K
pages, so hitting an uneven gap is very rare anyway.

If the pool is under particular memory pressure, then an IO can be
broken down into a "gang block", a 512-byte block composed of a header
and up to three block pointers. Each points to a fragment of the
original write, or in turn, another gang block, breaking the original
data up over and over until space can be found in the pool for each of
them.

Each gang header is a separate 512-byte memory allocation from a slab
that needs to be written down to disk. When the gang header is added to
the BIO, it's a single 512-byte segment.

Pulling all this together, consider a large aggregated write of gang
blocks. This results in a BIO containing lots of 512-byte segments. Given
our tendency to overfill the BIO, a split is likely, and most possible
split points will yield a pair of BIOs that are misaligned. Drivers that
care, like the SCSI driver, will reject them.
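The misalignment risk can be made concrete with a small self-contained model: given a BIO's segment list, count the inter-segment split points that would leave a half not aligned to the logical block size. The helper is invented for illustration only.

```python
def misaligned_split_points(segments: list[int], lbs: int = 4096) -> list[int]:
    """Byte offsets between BIO segments at which a split would produce
    halves that are not aligned to the logical block size `lbs`."""
    bad, total = [], 0
    for seg in segments[:-1]:  # a split can fall between any two segments
        total += seg
        if total % lbs != 0:
            bad.append(total)
    return bad

# Two aggregated 12K slab buffers, each straddling page boundaries,
# loaded as described above:
segs = [1024, 4096, 4096, 3072, 2048, 4096, 4096, 2048]
print(misaligned_split_points(segs))  # most split points are misaligned

# A run of full 4K pages, the common scattered-ABD case, has none:
print(misaligned_split_points([4096] * 8))  # []
```

Mixing in 512-byte gang-header segments makes almost every split point misaligned, which is the failure mode this commit addresses.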

---

This commit is a substantial refactor and rewrite of much of `vdev_disk`
to sort all this out.

`vdev_bio_max_segs()` now returns the ideal maximum size for the device,
if available. There's also a tuneable `zfs_vdev_disk_max_segs` to
override this, to assist with testing.

We scan the ABD up front to count the number of pages within it, and to
confirm that if we submitted all those pages to one or more BIOs, they
could be split at any point without creating a misaligned BIO.  If the
pages in the BIO are not usable (as in any of the above situations), the
ABD is linearised, and then checked again. This is the same technique
used in `vdev_geom` on FreeBSD, adjusted for Linux's variable page size
and allocator quirks.

`vbio_t` is a cleanup and enhancement of the old `dio_request_t`. The
idea is simply that it can hold all the state needed to create, submit
and return multiple BIOs, including all the refcounts, the ABD copy if
it was needed, and so on. Apart from what I hope is a clearer interface,
the major difference is that because we know how many BIOs we'll need up
front, we don't need the old overflow logic that would grow the BIO
array, throw away all the old work and restart. We can get it right from
the start.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
behlendorf pushed a commit that referenced this issue Mar 25, 2024
This makes the submission method selectable at module load time via the
`zfs_vdev_disk_classic` parameter, allowing this change to be backported
to 2.2 safely, and disabled in favour of the "classic" submission method
if new problems come up.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
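As the commit above notes, the classic submission method remains selectable at module load time. A sketch using the standard Linux module-parameter interface (the parameter name is from the commit message; values here assume 1 selects the classic path):

```shell
# Persistently select the classic BIO submission path, e.g. in
# /etc/modprobe.d/zfs.conf:
#   options zfs zfs_vdev_disk_classic=1
# or on the command line when loading the module:
modprobe zfs zfs_vdev_disk_classic=1

# Confirm the value currently in effect once the module is loaded:
cat /sys/module/zfs/parameters/zfs_vdev_disk_classic
```

Since this is a load-time parameter, changing it requires reloading the zfs module (or rebooting) to take effect.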
behlendorf pushed a commit that referenced this issue Mar 25, 2024
Simplifies our code a lot, so we don't have to wait for each and
reassemble them.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
behlendorf pushed a commit that referenced this issue Mar 25, 2024
Before 4.5 (specifically, torvalds/linux@ddc58f2), head and tail pages
in a compound page were refcounted separately. This means that using the
head page without taking a reference to it could see it cleaned up later
before we're finished with it. Specifically, bio_add_page() would take a
reference, and drop its reference after the bio completion callback
returns.

If the zio is executed immediately from the completion callback, this is
usually ok, as any data is referenced through the tail page referenced
by the ABD, and so becomes "live" that way. If there's a delay in zio
execution (high load, error injection), then the head page can be freed,
along with any dirty flags or other indicators that the underlying
memory is used. Later, when the zio completes and that memory is
accessed, it's either unmapped and an unhandled fault takes down the
entire system, or it is mapped and we end up messing around in someone
else's memory. Both of these are very bad.

The solution on these older kernels is to take a reference to the head
page when we use it, and release it when we're done. There's not really
a sensible way under our current structure to do this; the "best" would
be to keep a list of head page references in the ABD, and release them
when the ABD is freed.

Since this additional overhead is totally unnecessary on 4.5+, where
head and tail pages share refcounts, I've opted to simply not use the
compound head in ABD page iteration there. This is theoretically less
efficient (though cleaning up head page references would add overhead),
but it's safe, and we still get the other benefits of not mapping pages
before adding them to a bio and not mis-splitting pages.

There doesn't appear to be an obvious symbol name or config option we
can match on to discover this behaviour in configure (and the mm/page
APIs have changed a lot since then anyway), so I've gone with a simple
version check.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
robn added a commit to robn/zfs that referenced this issue Mar 27, 2024
Before 5.4 we have to do a little math.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes openzfs#15533
Closes openzfs#15588
(cherry picked from commit df04efe)
robn added a commit to robn/zfs that referenced this issue Mar 27, 2024
The regular ABD iterators yield data buffers, so they have to map and
unmap pages into kernel memory. If the caller only wants to count
chunks, or can use page pointers directly, then the map/unmap is just
unnecessary overhead.

This adds adb_iterate_page_func, which yields unmapped struct page
instead.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes openzfs#15533
Closes openzfs#15588
(cherry picked from commit 390b448)
robn added a commit to robn/zfs that referenced this issue Mar 27, 2024
This is just renaming the existing functions we're about to replace and
grouping them together to make the next commits easier to follow.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes openzfs#15533
Closes openzfs#15588
(cherry picked from commit f3b85d7)
robn added a commit to robn/zfs that referenced this issue Mar 27, 2024
Light reshuffle to make it a bit more linear to read and get rid of a
bunch of args that aren't needed in all cases.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes openzfs#15533
Closes openzfs#15588
(cherry picked from commit 867178a)
robn added a commit to robn/zfs that referenced this issue Mar 27, 2024
This is just setting up for the next couple of commits, which will add a
new IO function and a parameter to select it.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes openzfs#15533
Closes openzfs#15588
(cherry picked from commit c4a13ba)
robn added a commit to robn/zfs that referenced this issue Mar 27, 2024
This commit tackles a number of issues in the way BIOs (`struct bio`)
are constructed for submission to the Linux block layer.

The kernel has a hard upper limit on the number of pages/segments that
can be added to a BIO, as well as a separate limit for each device
(related to its queue depth and other scheduling characteristics).

ZFS counts the number of memory pages in the request ABD
(`abd_nr_pages_off()`) and then uses that as the number of segments to
put into the BIO, up to the hard upper limit. If it requires more than
the limit, it will create multiple BIOs.

Leaving aside the fact that the page count method is wrong (see below),
not limiting to the device segment max means that the device driver will
need to split the BIO in half. This alone is not necessarily a problem,
but it interacts with another issue to cause a much larger problem.

The kernel function to add a segment to a BIO (`bio_add_page()`) takes a
`struct page` pointer and an offset+len within it. A `struct page` can
represent a run of contiguous memory pages (known as a "compound page")
and can be of arbitrary length.

The ZFS functions that count ABD pages and load them into the BIO
(`abd_nr_pages_off()`, `bio_map()` and `abd_bio_map_off()`) will never
consider a page to be more than `PAGE_SIZE` (4K), even if the `struct
page` is for multiple pages. In this case, it will load the same `struct
page` into the BIO multiple times, with the offset adjusted each time.

With a sufficiently large ABD, this can easily lead to the BIO being
entirely filled much earlier than it could have been. This also further
contributes to the problem caused by the incorrect segment limit
calculation, as it's much easier to go past the device limit, and so
require a split.

Again, this is not a problem on its own.

The logic for "never submit more than `PAGE_SIZE`" is actually a little
more subtle: it will never submit a buffer that crosses a 4K page
boundary.

In practice, this is fine, as most ABDs are scattered, that is a list of
complete 4K pages, and so are loaded in as such.

Linear ABDs are typically allocated from slabs, and for small sizes they
are frequently not aligned to page boundaries. For example, a 12K
allocation can span four pages, eg:

     -- 4K -- -- 4K -- -- 4K -- -- 4K --
    |        |        |        |        |
          :## ######## ######## ######:    [1K, 4K, 4K, 3K]

Such an allocation would be loaded into a BIO as you see:

    [1K, 4K, 4K, 3K]

This tends not to be a problem in practice, because even if the BIO were
filled and needed to be split, each half would still have either a start
or end aligned to the logical block size of the device (assuming 4K at
least).

---

In ideal circumstances, these shortcomings don't cause any particular
problems. It's when they start to interact with other ZFS features that
things get interesting.

Aggregation will create a "gang" ABD, which is simply a list of other
ABDs. Iterating over a gang ABD is just iterating over each ABD within
it in turn.

Because the segments are simply loaded in order, we can end up with
uneven segments either side of the "gap" between the two ABDs. For
example, two 12K ABDs might be aggregated and then loaded as:

    [1K, 4K, 4K, 3K, 2K, 4K, 4K, 2K]

Should a split occur, each individual BIO can end up with either a
start or end offset that is not aligned to the logical block size, which
some drivers (eg SCSI) will reject. However, this tends not to happen
because the default aggregation limit usually keeps the BIO small enough
to not require more than one split, and most pages are actually full 4K
pages, so hitting an uneven gap is very rare anyway.

If the pool is under particular memory pressure, then an IO can be
broken down into a "gang block", a 512-byte block composed of a header
and up to three block pointers. Each points to a fragment of the
original write, or in turn, another gang block, breaking the original
data up over and over until space can be found in the pool for each of
them.

Each gang header is a separate 512-byte memory allocation from a slab
that needs to be written down to disk. When the gang header is added to
the BIO, it's a single 512-byte segment.

Pulling all this together, consider a large aggregated write of gang
blocks. This results in a BIO containing lots of 512-byte segments. Given
our tendency to overfill the BIO, a split is likely, and most possible
split points will yield a pair of BIOs that are misaligned. Drivers that
care, like the SCSI driver, will reject them.

---

This commit is a substantial refactor and rewrite of much of `vdev_disk`
to sort all this out.

`vdev_bio_max_segs()` now returns the ideal maximum size for the device,
if available. There's also a tuneable `zfs_vdev_disk_max_segs` to
override this, to assist with testing.

We scan the ABD up front to count the number of pages within it, and to
confirm that if we submitted all those pages to one or more BIOs, it
could be split at any point without creating a misaligned BIO.  If the
pages in the BIO are not usable (as in any of the above situations), the
ABD is linearised, and then checked again. This is the same technique
used in `vdev_geom` on FreeBSD, adjusted for Linux's variable page size
and allocator quirks.

`vbio_t` is a cleanup and enhancement of the old `dio_request_t`. The
idea is simply that it can hold all the state needed to create, submit
and return multiple BIOs, including all the refcounts, the ABD copy if
it was needed, and so on. Apart from what I hope is a clearer interface,
the major difference is that because we know how many BIOs we'll need up
front, we don't need the old overflow logic that would grow the BIO
array, throw away all the old work and restart. We can get it right from
the start.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes openzfs#15533
Closes openzfs#15588
(cherry picked from commit 06a1960)
robn added a commit to robn/zfs that referenced this issue Mar 27, 2024
This makes the submission method selectable at module load time via the
`zfs_vdev_disk_classic` parameter, allowing this change to be backported
to 2.2 safely, and disabled in favour of the "classic" submission method
if new problems come up.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes openzfs#15533
Closes openzfs#15588
(cherry picked from commit df2169d)
robn added a commit to robn/zfs that referenced this issue Mar 27, 2024
Simplifies our code a lot, so we don't have to wait for each BIO and
reassemble them.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes openzfs#15533
Closes openzfs#15588
(cherry picked from commit 72fd834)
robn added a commit to robn/zfs that referenced this issue Mar 27, 2024
Before 4.5 (specifically, torvalds/linux@ddc58f2), head and tail pages
in a compound page were refcounted separately. This means that using the
head page without taking a reference to it could see it cleaned up later
before we're finished with it. Specifically, bio_add_page() would take a
reference, and drop its reference after the bio completion callback
returns.

If the zio is executed immediately from the completion callback, this is
usually ok, as any data is referenced through the tail page referenced
by the ABD, and so becomes "live" that way. If there's a delay in zio
execution (high load, error injection), then the head page can be freed,
along with any dirty flags or other indicators that the underlying
memory is used. Later, when the zio completes and that memory is
accessed, it's either unmapped and an unhandled fault takes down the
entire system, or it is mapped and we end up messing around in someone
else's memory. Both of these are very bad.

The solution on these older kernels is to take a reference to the head
page when we use it, and release it when we're done. There's not really
a sensible way under our current structure to do this; the "best" would
be to keep a list of head page references in the ABD, and release them
when the ABD is freed.

Since this additional overhead is totally unnecessary on 4.5+, where
head and tail pages share refcounts, I've opted to simply not use the
compound head in ABD page iteration there. This is theoretically less
efficient (though cleaning up head page references would add overhead),
but it's safe, and we still get the other benefits of not mapping pages
before adding them to a bio and not mis-splitting pages.

There doesn't appear to be an obvious symbol name or config option we
can match on to discover this behaviour in configure (and the mm/page
APIs have changed a lot since then anyway), so I've gone with a simple
version check.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes openzfs#15533
Closes openzfs#15588
(cherry picked from commit c6be6ce)
behlendorf pushed a commit that referenced this issue Mar 28, 2024
Before 5.4 we have to do a little math.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
(cherry picked from commit df04efe)
behlendorf pushed a commit that referenced this issue Mar 28, 2024
The regular ABD iterators yield data buffers, so they have to map and
unmap pages into kernel memory. If the caller only wants to count
chunks, or can use page pointers directly, then the map/unmap is just
unnecessary overhead.

This adds abd_iterate_page_func, which yields unmapped struct page
instead.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
(cherry picked from commit 390b448)
behlendorf pushed a commit that referenced this issue Mar 28, 2024
This is just renaming the existing functions we're about to replace and
grouping them together to make the next commits easier to follow.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
(cherry picked from commit f3b85d7)
behlendorf pushed a commit that referenced this issue Mar 28, 2024
Light reshuffle to make it a bit more linear to read and get rid of a
bunch of args that aren't needed in all cases.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
(cherry picked from commit 867178a)
behlendorf pushed a commit that referenced this issue Mar 28, 2024
This is just setting up for the next couple of commits, which will add a
new IO function and a parameter to select it.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
(cherry picked from commit c4a13ba)
behlendorf pushed a commit that referenced this issue Mar 28, 2024
This commit tackles a number of issues in the way BIOs (`struct bio`)
are constructed for submission to the Linux block layer.

The kernel has a hard upper limit on the number of pages/segments that
can be added to a BIO, as well as a separate limit for each device
(related to its queue depth and other scheduling characteristics).

ZFS counts the number of memory pages in the request ABD
(`abd_nr_pages_off()`, and then uses that as the number of segments to
put into the BIO, up to the hard upper limit. If it requires more than
the limit, it will create multiple BIOs.

Leaving aside the fact that the page count method is wrong (see below),
not limiting to the device segment max means that the device driver will
need to split the BIO in half. This alone is not necessarily a
problem, but it interacts with another issue to cause a much larger
problem.

The kernel function to add a segment to a BIO (`bio_add_page()`) takes a
`struct page` pointer, and offset+len within it. `struct page` can
represent a run of contiguous memory pages (known as a "compound page").
It can be of arbitrary length.

The ZFS functions that count ABD pages and load them into the BIO
(`abd_nr_pages_off()`, `bio_map()` and `abd_bio_map_off()`) will never
consider a page to be more than `PAGE_SIZE` (4K), even if the `struct
page` is for multiple pages. In this case, it will load the same `struct
page` into the BIO multiple times, with the offset adjusted each time.

With a sufficiently large ABD, this can easily lead to the BIO being
entirely filled much earlier than it could have been. This also
further contributes to the problem caused by the incorrect segment limit
calculation, as it's much easier to go past the device limit, and so
require a split.

Again, this is not a problem on its own.

The logic for "never submit more than `PAGE_SIZE`" is actually a little
more subtle. It will actually never submit a buffer that crosses a 4K
page boundary.

In practice, this is fine, as most ABDs are scattered, that is, a list of
complete 4K pages, and so are loaded in as such.

Linear ABDs are typically allocated from slabs, and for small sizes they
are frequently not aligned to page boundaries. For example, a 12K
allocation can span four pages, eg:

     -- 4K -- -- 4K -- -- 4K -- -- 4K --
    |        |        |        |        |
          :## ######## ######## ######:    [1K, 4K, 4K, 3K]

Such an allocation would be loaded into a BIO as you see:

    [1K, 4K, 4K, 3K]

This tends not to be a problem in practice, because even if the BIO were
filled and needed to be split, each half would still have either a start
or end aligned to the logical block size of the device (assuming 4K at
least).
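
The [1K, 4K, 4K, 3K] breakdown above can be reproduced with a small
simulation (a sketch in Python; `page_segments` is a hypothetical helper,
and a 4K page size is assumed):

```python
PAGE = 4096  # assumed page size

def page_segments(addr, size):
    """Split the buffer [addr, addr+size) at 4K page boundaries, the
    way the old BIO-filling code did: a segment never crosses a page
    boundary, even within a compound page."""
    segs = []
    while size > 0:
        # take at most up to the next page boundary
        take = min(size, PAGE - (addr % PAGE))
        segs.append(take)
        addr += take
        size -= take
    return segs

# A 12K slab allocation starting 3K into a page, as in the diagram:
print(page_segments(3 * 1024, 12 * 1024))  # [1024, 4096, 4096, 3072]
```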

---

In ideal circumstances, these shortcomings don't cause any particular
problems. It's when they start to interact with other ZFS features that
things get interesting.

Aggregation will create a "gang" ABD, which is simply a list of other
ABDs. Iterating over a gang ABD is just iterating over each ABD within
it in turn.

Because the segments are simply loaded in order, we can end up with
uneven segments on either side of the "gap" between the two ABDs. For
example, two 12K ABDs might be aggregated and then loaded as:

    [1K, 4K, 4K, 3K, 2K, 4K, 4K, 2K]

Should a split occur, each individual BIO can end up with a start or
end offset that is not aligned to the logical block size, which some
drivers (e.g. SCSI) will reject. However, this tends not to happen
because the default aggregation limit usually keeps the BIO small enough
to not require more than one split, and most pages are actually full 4K
pages, so hitting an uneven gap is very rare anyway.
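
To make the effect of an uneven gap concrete, here is a sketch (Python;
the 4K logical block size and the `misaligned_splits` helper are
illustrative assumptions) that finds the interior segment boundaries of
the aggregated load above where a split would produce a misaligned BIO:

```python
LOGICAL_BLOCK = 4096  # assumed logical block size for illustration

def misaligned_splits(segs, lbs=LOGICAL_BLOCK):
    """Byte offsets at interior segment boundaries that are not a
    multiple of the logical block size. Splitting a BIO at any of
    these yields a half that a driver like SCSI may reject."""
    bad, off = [], 0
    for s in segs[:-1]:
        off += s
        if off % lbs != 0:
            bad.append(off)
    return bad

# Two 12K ABDs aggregated, as above:
segs = [1024, 4096, 4096, 3072, 2048, 4096, 4096, 2048]
print(misaligned_splits(segs))  # six of the seven interior boundaries
```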

If the pool is under particular memory pressure, then an IO can be
broken down into a "gang block", a 512-byte block composed of a header
and up to three block pointers. Each points to a fragment of the
original write, or in turn, another gang block, breaking the original
data up over and over until space can be found in the pool for each of
them.
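
The fan-out described above can be sketched numerically (a hypothetical
`gang_headers` helper, assuming exactly three pointers per 512-byte
header, each pointing at either a fragment or another header; this is an
illustration of the arithmetic, not ZFS's actual allocation policy):

```python
def gang_headers(fragments):
    """Minimum number of 512-byte gang headers needed to reference
    `fragments` pieces of the original write. Each extra header
    consumes one pointer slot in its parent but adds three of its
    own, for a net gain of two slots."""
    headers, capacity = 1, 3
    while capacity < fragments:
        headers += 1
        capacity += 2
    return headers
```

So a write broken into six fragments already needs three separate
512-byte header writes on top of the data itself.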

Each gang header is a separate 512-byte memory allocation from a slab
that needs to be written down to disk. When the gang header is added to
the BIO, it's a single 512-byte segment.

Pulling all this together, consider a large aggregated write of gang
blocks. This results in a BIO containing lots of 512-byte segments. Given
our tendency to overfill the BIO, a split is likely, and most possible
split points will yield a pair of BIOs that are misaligned. Drivers that
care, like the SCSI driver, will reject them.

---

This commit is a substantial refactor and rewrite of much of `vdev_disk`
to sort all this out.

`vdev_bio_max_segs()` now returns the ideal maximum size for the device,
if available. There's also a tuneable `zfs_vdev_disk_max_segs` to
override this, to assist with testing.
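
For instance, the tuneable could be adjusted like any other ZFS module
parameter (a config fragment; paths follow the usual Linux
module-parameter layout, and the value 16 is purely illustrative):

```shell
# Cap BIOs at 16 segments for testing (illustrative value)
echo 16 > /sys/module/zfs/parameters/zfs_vdev_disk_max_segs

# Or set it persistently at module load time
echo "options zfs zfs_vdev_disk_max_segs=16" > /etc/modprobe.d/zfs.conf
```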

We scan the ABD up front to count the number of pages within it, and to
confirm that if we submitted all those pages to one or more BIOs, it
could be split at any point without creating a misaligned BIO.  If the
pages in the BIO are not usable (as in any of the above situations), the
ABD is linearised, and then checked again. This is the same technique
used in `vdev_geom` on FreeBSD, adjusted for Linux's variable page size
and allocator quirks.

`vbio_t` is a cleanup and enhancement of the old `dio_request_t`. The
idea is simply that it can hold all the state needed to create, submit
and return multiple BIOs, including all the refcounts, the ABD copy if
it was needed, and so on. Apart from what I hope is a clearer interface,
the major difference is that because we know how many BIOs we'll need up
front, we don't need the old overflow logic that would grow the BIO
array, throw away all the old work and restart. We can get it right from
the start.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
(cherry picked from commit 06a1960)
behlendorf pushed a commit that referenced this issue Mar 28, 2024
This makes the submission method selectable at module load time via the
`zfs_vdev_disk_classic` parameter, allowing this change to be backported
to 2.2 safely, and disabled in favour of the "classic" submission method
if new problems come up.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
(cherry picked from commit df2169d)
behlendorf pushed a commit that referenced this issue Mar 28, 2024
Simplifies our code a lot, so we don't have to wait for each BIO and
reassemble them.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
(cherry picked from commit 72fd834)
behlendorf pushed a commit that referenced this issue Mar 28, 2024
Before 4.5 (specifically, torvalds/linux@ddc58f2), head and tail pages
in a compound page were refcounted separately. This means that using the
head page without taking a reference to it could see it cleaned up later
before we're finished with it. Specifically, bio_add_page() would take a
reference, and drop its reference after the bio completion callback
returns.

If the zio is executed immediately from the completion callback, this is
usually ok, as any data is referenced through the tail page referenced
by the ABD, and so becomes "live" that way. If there's a delay in zio
execution (high load, error injection), then the head page can be freed,
along with any dirty flags or other indicators that the underlying
memory is used. Later, when the zio completes and that memory is
accessed, it's either unmapped and an unhandled fault takes down the
entire system, or it is mapped and we end up messing around in someone
else's memory. Both of these are very bad.

The solution on these older kernels is to take a reference to the head
page when we use it, and release it when we're done. There's not really
a sensible way under our current structure to do this; the "best" would
be to keep a list of head page references in the ABD, and release them
when the ABD is freed.

Since this additional overhead is totally unnecessary on 4.5+, where
head and tail pages share refcounts, I've opted to simply not use the
compound head in ABD page iteration there. This is theoretically less
efficient (though cleaning up head page references would add overhead),
but it's safe, and we still get the other benefits of not mapping pages
before adding them to a bio and not mis-splitting pages.

There doesn't appear to be an obvious symbol name or config option we
can match on to discover this behaviour in configure (and the mm/page
APIs have changed a lot since then anyway), so I've gone with a simple
version check.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15533
Closes #15588
(cherry picked from commit c6be6ce)
@robn (Contributor) commented May 3, 2024

FYI, 2.2.4 just shipped, with #15588 and followup patches included. If you are still having this problem, you might try setting zfs_vdev_disk_classic=0 in your zfs module parameters and seeing if that helps. If you do try this, please report back with your results, as our hope is to make this the default in the future.
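
One way to apply that suggestion (a config fragment; the file name is
arbitrary, so adjust for your distribution, and rebuild the initramfs if
the zfs module loads from it):

```shell
# Select the new BIO submission method (0 = new, 1 = classic)
echo "options zfs zfs_vdev_disk_classic=0" | sudo tee /etc/modprobe.d/zfs-vdev-disk.conf

# After a reboot, confirm the parameter took effect
cat /sys/module/zfs/parameters/zfs_vdev_disk_classic
```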

Labels
Type: Defect Incorrect behavior (e.g. crash, hang)