From 204d2e79b9a337722af292a06a309c49a51bee6e Mon Sep 17 00:00:00 2001
From: Ricardo Pardini
Date: Thu, 23 Nov 2023 22:28:41 +0100
Subject: [PATCH] WSL2 "boards" `wsl2-x86`/`wsl2-arm64` with current (6.1.y) and edge (6.6.y) kernels with Microsoft patches

> tl;dr: add 4 small-ish UEFI-like kernels, with Microsoft patches & fixes, for use with Microsoft WSL2 on x86/arm64 and 6.1.y/6.6.y

- the boards are UEFI derivatives, using a common `microsoft` vendor include (sketched further below) to:
  - modify `KERNELPATCHDIR`/`LINUXFAMILY` (for now, we don't want those patches in regular UEFI builds / .debs)
  - disable `EXTRAWIFI` (the kernel is for a VM, which will never have Wi-Fi, so it doesn't need any wireless drivers)
  - set `LINUXCONFIG`, so we can use Microsoft's own monolithic kernel config, required for WSL2 (their initrd is a mystery)
- really, what we're mostly interested in right now are the kernels (in the future we might have an "Armbian" WSL2 app in the Microsoft Store)
- `current` `6.1.y`:
  - rebased from https://github.com/microsoft/WSL2-Linux-Kernel/tree/linux-msft-wsl-6.1.y onto real 6.1.y
  - using Microsoft's `.config` exactly (monolithic, there are no `=m`'s)
- `edge` `6.6.y`:
  - also from https://github.com/microsoft/WSL2-Linux-Kernel/tree/linux-msft-wsl-6.1.y, but rebased onto 6.6.y
  - using an updated version of Microsoft's `.config` (monolithic, there are no `=m`'s)
  - dropped 2 of 6.1.y's patches that were actually upstreamed in the meantime:
    - `mm-page_reporting-Add-checks-for-page_reporting_order-param` - mainlined in https://lore.kernel.org/all/1664517699-1085-2-git-send-email-shradhagupta@linux.microsoft.com/
    - `hv_balloon-Add-support-for-configurable-order-free-page-reporting` - mainlined in https://lore.kernel.org/all/1664517699-1085-3-git-send-email-shradhagupta@linux.microsoft.com/
  - dropped the `arm64: hyperv: Enable Hyper-V synthetic clocks/timers` patch, since it causes asm breakage on 6.6.y
    - a shame, but I tried and can't fix it myself - @kelleymh ?
- add my own patch to fix: - `1709-drivers-hv-dxgkrnl-restore-uuid_le_cmp-removed-from-upstream-in-f5b3c341a.patch` due to https://lore.kernel.org/all/20230202145412.87569-1-andriy.shevchenko@linux.intel.com/ landing in 6.6 - `1710-drivers-hv-dxgkrnl-adapt-dxg_remove_vmbus-to-96ec29396-s-reality-void-return.patch` to adapt to https://lore.kernel.org/all/TYCP286MB2323A93C55526E4DF239D3ACCAFA9@TYCP286MB2323.JPNP286.PROD.OUTLOOK.COM/ --- config/boards/wsl2-arm64.csc | 8 + config/boards/wsl2-x86.csc | 8 + config/kernel/linux-wsl2-arm64-current.config | 4457 ++++++++++++++++ config/kernel/linux-wsl2-arm64-edge.config | 4578 +++++++++++++++++ config/kernel/linux-wsl2-x86-current.config | 4349 ++++++++++++++++ config/kernel/linux-wsl2-x86-edge.config | 4476 ++++++++++++++++ .../sources/vendors/microsoft/wsl2.hooks.sh | 23 + ...-use-the-Hyper-V-hypercall-interface.patch | 239 + ...able-Hyper-V-synthetic-clocks-timers.patch | 185 + ...l-compute-device-VMBus-channel-guids.patch | 45 + ...nl-Driver-initialization-and-loading.patch | 966 ++++ ...ge-support-initialize-VMBus-channels.patch | 660 +++ ...xgkrnl-Creation-of-dxgadapter-object.patch | 1160 +++++ ...v-dxg-device-and-dxgprocess-creation.patch | 1847 +++++++ ...numerate-and-open-dxgadapter-objects.patch | 554 ++ ...xgkrnl-Creation-of-dxgdevice-objects.patch | 828 +++ ...gkrnl-Creation-of-dxgcontext-objects.patch | 668 +++ ...ute-device-allocations-and-resources.patch | 2263 ++++++++ ...ation-of-compute-device-sync-objects.patch | 1016 ++++ ...xgkrnl-Operations-using-sync-objects.patch | 1689 ++++++ ...gkrnl-Sharing-of-dxgresource-objects.patch | 1464 ++++++ ...s-hv-dxgkrnl-Sharing-of-sync-objects.patch | 1555 ++++++ ...rnl-Creation-of-paging-queue-objects.patch | 640 +++ ...ution-commands-to-the-compute-device.patch | 450 ++ ...-dxgkrnl-Share-objects-with-the-host.patch | 271 + ...hv-dxgkrnl-Query-the-dxgdevice-state.patch | 454 ++ ...map-CPU-address-to-device-allocation.patch | 498 ++ ...-Manage-device-allocation-properties.patch | 912 ++++ ...rs-hv-dxgkrnl-Flush-heap-transitions.patch | 194 + ...gkrnl-Query-video-memory-information.patch | 237 + ...-drivers-hv-dxgkrnl-The-escape-ioctl.patch | 305 ++ ...l-Ioctl-to-put-device-to-error-state.patch | 180 + ...ery-statistics-and-clock-calibration.patch | 423 ++ ...xgkrnl-Offer-and-reclaim-allocations.patch | 466 ++ ...Ioctls-to-manage-scheduling-priority.patch | 427 ++ ...krnl-Manage-residency-of-allocations.patch | 447 ++ ...age-compute-device-virtual-addresses.patch | 703 +++ ...d-support-to-map-guest-pages-by-host.patch | 313 ++ ...was-defined-in-the-main-linux-branch.patch | 29 + ...s-hv-dxgkrnl-Remove-dxgk_init_ioctls.patch | 100 + ...krnl-Creation-of-dxgsyncfile-objects.patch | 482 ++ ...gkrnl-Use-tracing-instead-of-dev_dbg.patch | 205 + ...dxgkrnl-Implement-D3DKMTWaitSyncFile.patch | 658 +++ ...nd-return-values-from-copy-from-user.patch | 2000 +++++++ ...hv-dxgkrnl-Fix-synchronization-locks.patch | 391 ++ ...ed-file-objects-in-case-of-a-failure.patch | 80 + ...issed-NULL-check-for-resource-object.patch | 51 + ...-dxgkrnl-to-build-for-the-6.1-kernel.patch | 84 + ...m-Support-PCI-BAR-relative-addresses.patch | 80 + ...status-prior-to-creating-pmem-region.patch | 52 + ...hecks-for-page_reporting_order-param.patch | 104 + ...nfigurable-order-free-page-reporting.patch | 202 + ...-use-the-Hyper-V-hypercall-interface.patch | 239 + ...l-compute-device-VMBus-channel-guids.patch | 45 + ...nl-Driver-initialization-and-loading.patch | 966 ++++ ...ge-support-initialize-VMBus-channels.patch | 660 +++ 
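
For context, the `config/sources/vendors/microsoft/wsl2.hooks.sh` include added by this patch (23 lines) is not reproduced in this excerpt. Below is a minimal, hypothetical sketch of what the vendor include described above could look like - the hook name, the `KERNEL_MAJOR_MINOR` derivation and the exact values are assumptions inferred from the commit message and the new file names, not copied from the patch:

```bash
# Hypothetical sketch only -- not the actual contents of wsl2.hooks.sh.
# Hook name and values are assumptions; the real file ships with this patch.
function post_family_config__wsl2_overrides() {
	display_alert "Applying WSL2 overrides" "${BOARD} / ${BRANCH}" "info"

	# Keep the Microsoft patches out of the regular UEFI kernels/.debs by giving
	# these boards their own kernel family and patch directory.
	declare -g LINUXFAMILY="wsl2-${ARCH}"                            # wsl2-arm64 / wsl2-x86
	declare -g KERNELPATCHDIR="${LINUXFAMILY}-${KERNEL_MAJOR_MINOR}" # e.g. wsl2-arm64-6.1

	# Use Microsoft's own monolithic .config (no =m modules), required for WSL2.
	declare -g LINUXCONFIG="linux-${LINUXFAMILY}-${BRANCH}"          # e.g. linux-wsl2-arm64-current

	# A VM kernel will never see Wi-Fi hardware, so skip the extra wireless drivers.
	declare -g EXTRAWIFI="no"
}
```

The board files themselves stay minimal: they only declare `BOARD_NAME`, `BOARDFAMILY`, `BOARD_MAINTAINER` and `KERNEL_TARGET`, then source this include (see `config/boards/wsl2-arm64.csc` / `wsl2-x86.csc` below).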
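
Since the kernel packages are the main deliverable, here is a usage sketch for building them - the exact `compile.sh` command-line form is an assumption, not something this patch documents:

```bash
# Assumed usage (not part of this patch): build only the kernel package for the new boards.
./compile.sh kernel BOARD=wsl2-x86 BRANCH=edge        # x86_64, 6.6.y + Microsoft patches
./compile.sh kernel BOARD=wsl2-arm64 BRANCH=current   # aarch64, 6.1.y + Microsoft patches
```

On the Windows side, the built kernel image would then be referenced via the `kernel=` entry in the `[wsl2]` section of the user's `.wslconfig`.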
...xgkrnl-Creation-of-dxgadapter-object.patch | 1160 +++++ ...v-dxg-device-and-dxgprocess-creation.patch | 1847 +++++++ ...numerate-and-open-dxgadapter-objects.patch | 554 ++ ...xgkrnl-Creation-of-dxgdevice-objects.patch | 828 +++ ...gkrnl-Creation-of-dxgcontext-objects.patch | 668 +++ ...ute-device-allocations-and-resources.patch | 2263 ++++++++ ...ation-of-compute-device-sync-objects.patch | 1016 ++++ ...xgkrnl-Operations-using-sync-objects.patch | 1689 ++++++ ...gkrnl-Sharing-of-dxgresource-objects.patch | 1464 ++++++ ...s-hv-dxgkrnl-Sharing-of-sync-objects.patch | 1555 ++++++ ...rnl-Creation-of-paging-queue-objects.patch | 640 +++ ...ution-commands-to-the-compute-device.patch | 450 ++ ...-dxgkrnl-Share-objects-with-the-host.patch | 271 + ...hv-dxgkrnl-Query-the-dxgdevice-state.patch | 454 ++ ...map-CPU-address-to-device-allocation.patch | 498 ++ ...-Manage-device-allocation-properties.patch | 912 ++++ ...rs-hv-dxgkrnl-Flush-heap-transitions.patch | 194 + ...gkrnl-Query-video-memory-information.patch | 237 + ...-drivers-hv-dxgkrnl-The-escape-ioctl.patch | 305 ++ ...l-Ioctl-to-put-device-to-error-state.patch | 180 + ...ery-statistics-and-clock-calibration.patch | 423 ++ ...xgkrnl-Offer-and-reclaim-allocations.patch | 466 ++ ...Ioctls-to-manage-scheduling-priority.patch | 427 ++ ...krnl-Manage-residency-of-allocations.patch | 447 ++ ...age-compute-device-virtual-addresses.patch | 703 +++ ...d-support-to-map-guest-pages-by-host.patch | 313 ++ ...was-defined-in-the-main-linux-branch.patch | 29 + ...s-hv-dxgkrnl-Remove-dxgk_init_ioctls.patch | 100 + ...krnl-Creation-of-dxgsyncfile-objects.patch | 482 ++ ...gkrnl-Use-tracing-instead-of-dev_dbg.patch | 205 + ...dxgkrnl-Implement-D3DKMTWaitSyncFile.patch | 658 +++ ...nd-return-values-from-copy-from-user.patch | 2000 +++++++ ...hv-dxgkrnl-Fix-synchronization-locks.patch | 391 ++ ...ed-file-objects-in-case-of-a-failure.patch | 80 + ...issed-NULL-check-for-resource-object.patch | 51 + ...-dxgkrnl-to-build-for-the-6.1-kernel.patch | 84 + ...m-Support-PCI-BAR-relative-addresses.patch | 80 + ...status-prior-to-creating-pmem-region.patch | 52 + ...p-removed-from-upstream-in-f5b3c341a.patch | 30 + ...s-to-96ec29396-s-reality-void-return.patch | 41 + patch/kernel/archive/wsl2-x86-6.1 | 1 + patch/kernel/archive/wsl2-x86-6.6 | 1 + 98 files changed, 70635 insertions(+) create mode 100644 config/boards/wsl2-arm64.csc create mode 100644 config/boards/wsl2-x86.csc create mode 100644 config/kernel/linux-wsl2-arm64-current.config create mode 100644 config/kernel/linux-wsl2-arm64-edge.config create mode 100644 config/kernel/linux-wsl2-x86-current.config create mode 100644 config/kernel/linux-wsl2-x86-edge.config create mode 100644 config/sources/vendors/microsoft/wsl2.hooks.sh create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1666-Hyper-V-ARM64-Always-use-the-Hyper-V-hypercall-interface.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1667-arm64-hyperv-Enable-Hyper-V-synthetic-clocks-timers.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1668-drivers-hv-dxgkrnl-Add-virtual-compute-device-VMBus-channel-guids.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1669-drivers-hv-dxgkrnl-Driver-initialization-and-loading.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1670-drivers-hv-dxgkrnl-Add-VMBus-message-support-initialize-VMBus-channels.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1671-drivers-hv-dxgkrnl-Creation-of-dxgadapter-object.patch create mode 100644 
patch/kernel/archive/wsl2-arm64-6.1/1672-drivers-hv-dxgkrnl-Opening-of-dev-dxg-device-and-dxgprocess-creation.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1673-drivers-hv-dxgkrnl-Enumerate-and-open-dxgadapter-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1674-drivers-hv-dxgkrnl-Creation-of-dxgdevice-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1675-drivers-hv-dxgkrnl-Creation-of-dxgcontext-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1676-drivers-hv-dxgkrnl-Creation-of-compute-device-allocations-and-resources.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1677-drivers-hv-dxgkrnl-Creation-of-compute-device-sync-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1678-drivers-hv-dxgkrnl-Operations-using-sync-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1679-drivers-hv-dxgkrnl-Sharing-of-dxgresource-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1680-drivers-hv-dxgkrnl-Sharing-of-sync-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1681-drivers-hv-dxgkrnl-Creation-of-paging-queue-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1682-drivers-hv-dxgkrnl-Submit-execution-commands-to-the-compute-device.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1683-drivers-hv-dxgkrnl-Share-objects-with-the-host.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1684-drivers-hv-dxgkrnl-Query-the-dxgdevice-state.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1685-drivers-hv-dxgkrnl-Map-unmap-CPU-address-to-device-allocation.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1686-drivers-hv-dxgkrnl-Manage-device-allocation-properties.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1687-drivers-hv-dxgkrnl-Flush-heap-transitions.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1688-drivers-hv-dxgkrnl-Query-video-memory-information.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1689-drivers-hv-dxgkrnl-The-escape-ioctl.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1690-drivers-hv-dxgkrnl-Ioctl-to-put-device-to-error-state.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1691-drivers-hv-dxgkrnl-Ioctls-to-query-statistics-and-clock-calibration.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1692-drivers-hv-dxgkrnl-Offer-and-reclaim-allocations.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1693-drivers-hv-dxgkrnl-Ioctls-to-manage-scheduling-priority.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1694-drivers-hv-dxgkrnl-Manage-residency-of-allocations.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1695-drivers-hv-dxgkrnl-Manage-compute-device-virtual-addresses.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1696-drivers-hv-dxgkrnl-Add-support-to-map-guest-pages-by-host.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1697-drivers-hv-dxgkrnl-Removed-struct-vmbus_gpadl-which-was-defined-in-the-main-linux-branch.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1698-drivers-hv-dxgkrnl-Remove-dxgk_init_ioctls.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1699-drivers-hv-dxgkrnl-Creation-of-dxgsyncfile-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1700-drivers-hv-dxgkrnl-Use-tracing-instead-of-dev_dbg.patch create mode 100644 
patch/kernel/archive/wsl2-arm64-6.1/1701-drivers-hv-dxgkrnl-Implement-D3DKMTWaitSyncFile.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1702-drivers-hv-dxgkrnl-Improve-tracing-and-return-values-from-copy-from-user.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1703-drivers-hv-dxgkrnl-Fix-synchronization-locks.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1704-drivers-hv-dxgkrnl-Close-shared-file-objects-in-case-of-a-failure.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1705-drivers-hv-dxgkrnl-Added-missed-NULL-check-for-resource-object.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1706-drivers-hv-dxgkrnl-Fixed-dxgkrnl-to-build-for-the-6.1-kernel.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1707-virtio-pmem-Support-PCI-BAR-relative-addresses.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1708-virtio-pmem-Set-DRIVER_OK-status-prior-to-creating-pmem-region.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1709-mm-page_reporting-Add-checks-for-page_reporting_order-param.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.1/1710-hv_balloon-Add-support-for-configurable-order-free-page-reporting.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1666-Hyper-V-ARM64-Always-use-the-Hyper-V-hypercall-interface.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1667-drivers-hv-dxgkrnl-Add-virtual-compute-device-VMBus-channel-guids.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1668-drivers-hv-dxgkrnl-Driver-initialization-and-loading.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1669-drivers-hv-dxgkrnl-Add-VMBus-message-support-initialize-VMBus-channels.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1670-drivers-hv-dxgkrnl-Creation-of-dxgadapter-object.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1671-drivers-hv-dxgkrnl-Opening-of-dev-dxg-device-and-dxgprocess-creation.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1672-drivers-hv-dxgkrnl-Enumerate-and-open-dxgadapter-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1673-drivers-hv-dxgkrnl-Creation-of-dxgdevice-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1674-drivers-hv-dxgkrnl-Creation-of-dxgcontext-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1675-drivers-hv-dxgkrnl-Creation-of-compute-device-allocations-and-resources.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1676-drivers-hv-dxgkrnl-Creation-of-compute-device-sync-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1677-drivers-hv-dxgkrnl-Operations-using-sync-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1678-drivers-hv-dxgkrnl-Sharing-of-dxgresource-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1679-drivers-hv-dxgkrnl-Sharing-of-sync-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1680-drivers-hv-dxgkrnl-Creation-of-paging-queue-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1681-drivers-hv-dxgkrnl-Submit-execution-commands-to-the-compute-device.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1682-drivers-hv-dxgkrnl-Share-objects-with-the-host.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1683-drivers-hv-dxgkrnl-Query-the-dxgdevice-state.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1684-drivers-hv-dxgkrnl-Map-unmap-CPU-address-to-device-allocation.patch 
create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1685-drivers-hv-dxgkrnl-Manage-device-allocation-properties.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1686-drivers-hv-dxgkrnl-Flush-heap-transitions.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1687-drivers-hv-dxgkrnl-Query-video-memory-information.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1688-drivers-hv-dxgkrnl-The-escape-ioctl.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1689-drivers-hv-dxgkrnl-Ioctl-to-put-device-to-error-state.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1690-drivers-hv-dxgkrnl-Ioctls-to-query-statistics-and-clock-calibration.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1691-drivers-hv-dxgkrnl-Offer-and-reclaim-allocations.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1692-drivers-hv-dxgkrnl-Ioctls-to-manage-scheduling-priority.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1693-drivers-hv-dxgkrnl-Manage-residency-of-allocations.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1694-drivers-hv-dxgkrnl-Manage-compute-device-virtual-addresses.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1695-drivers-hv-dxgkrnl-Add-support-to-map-guest-pages-by-host.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1696-drivers-hv-dxgkrnl-Removed-struct-vmbus_gpadl-which-was-defined-in-the-main-linux-branch.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1697-drivers-hv-dxgkrnl-Remove-dxgk_init_ioctls.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1698-drivers-hv-dxgkrnl-Creation-of-dxgsyncfile-objects.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1699-drivers-hv-dxgkrnl-Use-tracing-instead-of-dev_dbg.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1700-drivers-hv-dxgkrnl-Implement-D3DKMTWaitSyncFile.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1701-drivers-hv-dxgkrnl-Improve-tracing-and-return-values-from-copy-from-user.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1702-drivers-hv-dxgkrnl-Fix-synchronization-locks.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1703-drivers-hv-dxgkrnl-Close-shared-file-objects-in-case-of-a-failure.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1704-drivers-hv-dxgkrnl-Added-missed-NULL-check-for-resource-object.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1705-drivers-hv-dxgkrnl-Fixed-dxgkrnl-to-build-for-the-6.1-kernel.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1706-virtio-pmem-Support-PCI-BAR-relative-addresses.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1707-virtio-pmem-Set-DRIVER_OK-status-prior-to-creating-pmem-region.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1708-drivers-hv-dxgkrnl-restore-uuid_le_cmp-removed-from-upstream-in-f5b3c341a.patch create mode 100644 patch/kernel/archive/wsl2-arm64-6.6/1709-drivers-hv-dxgkrnl-adapt-dxg_remove_vmbus-to-96ec29396-s-reality-void-return.patch create mode 120000 patch/kernel/archive/wsl2-x86-6.1 create mode 120000 patch/kernel/archive/wsl2-x86-6.6 diff --git a/config/boards/wsl2-arm64.csc b/config/boards/wsl2-arm64.csc new file mode 100644 index 000000000000..4ceac208abb3 --- /dev/null +++ b/config/boards/wsl2-arm64.csc @@ -0,0 +1,8 @@ +# aarch64 Windows Subsystem for Linux 2 (Hyper-V) +declare -g BOARD_NAME="WSL2 arm64" +declare -g BOARDFAMILY="uefi-arm64" +declare -g BOARD_MAINTAINER="rpardini" +declare -g 
KERNEL_TARGET="current,edge" + +# Source vendor-specific configuration (common hooks for wsl2 - changes LINUXFAMILY etc) +source "${SRC}/config/sources/vendors/microsoft/wsl2.hooks.sh" diff --git a/config/boards/wsl2-x86.csc b/config/boards/wsl2-x86.csc new file mode 100644 index 000000000000..41a795115649 --- /dev/null +++ b/config/boards/wsl2-x86.csc @@ -0,0 +1,8 @@ +# x86_64 Windows Subsystem for Linux 2 (Hyper-V) +declare -g BOARD_NAME="WSL2 x86" +declare -g BOARDFAMILY="uefi-x86" +declare -g BOARD_MAINTAINER="rpardini" +declare -g KERNEL_TARGET="current,edge" + +# Source vendor-specific configuration (common hooks for wsl2 - changes LINUXFAMILY etc) +source "${SRC}/config/sources/vendors/microsoft/wsl2.hooks.sh" diff --git a/config/kernel/linux-wsl2-arm64-current.config b/config/kernel/linux-wsl2-arm64-current.config new file mode 100644 index 000000000000..074c6b311b84 --- /dev/null +++ b/config/kernel/linux-wsl2-arm64-current.config @@ -0,0 +1,4457 @@ +# +# Automatically generated file; DO NOT EDIT. +# Linux/arm64 6.1.63 Kernel Configuration +# +CONFIG_CC_VERSION_TEXT="aarch64-linux-gnu-gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0" +CONFIG_CC_IS_GCC=y +CONFIG_GCC_VERSION=110400 +CONFIG_CLANG_VERSION=0 +CONFIG_AS_IS_GNU=y +CONFIG_AS_VERSION=23800 +CONFIG_LD_IS_BFD=y +CONFIG_LD_VERSION=23800 +CONFIG_LLD_VERSION=0 +CONFIG_CC_CAN_LINK=y +CONFIG_CC_CAN_LINK_STATIC=y +CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y +CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT=y +CONFIG_CC_HAS_ASM_INLINE=y +CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y +CONFIG_PAHOLE_VERSION=125 +CONFIG_IRQ_WORK=y +CONFIG_BUILDTIME_TABLE_SORT=y +CONFIG_THREAD_INFO_IN_TASK=y + +# +# General setup +# +CONFIG_INIT_ENV_ARG_LIMIT=32 +# CONFIG_COMPILE_TEST is not set +# CONFIG_WERROR is not set +CONFIG_LOCALVERSION="" +# CONFIG_LOCALVERSION_AUTO is not set +CONFIG_BUILD_SALT="" +CONFIG_DEFAULT_INIT="" +CONFIG_DEFAULT_HOSTNAME="(none)" +CONFIG_SYSVIPC=y +CONFIG_SYSVIPC_SYSCTL=y +CONFIG_POSIX_MQUEUE=y +CONFIG_POSIX_MQUEUE_SYSCTL=y +# CONFIG_WATCH_QUEUE is not set +CONFIG_CROSS_MEMORY_ATTACH=y +# CONFIG_USELIB is not set +CONFIG_AUDIT=y +CONFIG_HAVE_ARCH_AUDITSYSCALL=y +CONFIG_AUDITSYSCALL=y + +# +# IRQ subsystem +# +CONFIG_GENERIC_IRQ_PROBE=y +CONFIG_GENERIC_IRQ_SHOW=y +CONFIG_GENERIC_IRQ_SHOW_LEVEL=y +CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y +CONFIG_HARDIRQS_SW_RESEND=y +CONFIG_IRQ_DOMAIN=y +CONFIG_IRQ_DOMAIN_HIERARCHY=y +CONFIG_GENERIC_IRQ_IPI=y +CONFIG_GENERIC_MSI_IRQ=y +CONFIG_GENERIC_MSI_IRQ_DOMAIN=y +CONFIG_IRQ_MSI_IOMMU=y +CONFIG_IRQ_FORCED_THREADING=y +CONFIG_SPARSE_IRQ=y +# CONFIG_GENERIC_IRQ_DEBUGFS is not set +# end of IRQ subsystem + +CONFIG_GENERIC_TIME_VSYSCALL=y +CONFIG_GENERIC_CLOCKEVENTS=y +CONFIG_ARCH_HAS_TICK_BROADCAST=y +CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y +CONFIG_HAVE_POSIX_CPU_TIMERS_TASK_WORK=y +CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y +CONFIG_CONTEXT_TRACKING=y +CONFIG_CONTEXT_TRACKING_IDLE=y + +# +# Timers subsystem +# +CONFIG_TICK_ONESHOT=y +CONFIG_NO_HZ_COMMON=y +# CONFIG_HZ_PERIODIC is not set +CONFIG_NO_HZ_IDLE=y +# CONFIG_NO_HZ_FULL is not set +# CONFIG_NO_HZ is not set +CONFIG_HIGH_RES_TIMERS=y +# end of Timers subsystem + +CONFIG_BPF=y +CONFIG_HAVE_EBPF_JIT=y +CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y + +# +# BPF subsystem +# +CONFIG_BPF_SYSCALL=y +CONFIG_BPF_JIT=y +CONFIG_BPF_JIT_ALWAYS_ON=y +CONFIG_BPF_JIT_DEFAULT_ON=y +CONFIG_BPF_UNPRIV_DEFAULT_OFF=y +# CONFIG_BPF_PRELOAD is not set +# CONFIG_BPF_LSM is not set +# end of BPF subsystem + +CONFIG_PREEMPT_NONE_BUILD=y +CONFIG_PREEMPT_NONE=y +# CONFIG_PREEMPT_VOLUNTARY is not set +# CONFIG_PREEMPT 
is not set +# CONFIG_PREEMPT_DYNAMIC is not set + +# +# CPU/Task time and stats accounting +# +CONFIG_TICK_CPU_ACCOUNTING=y +# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set +CONFIG_IRQ_TIME_ACCOUNTING=y +CONFIG_HAVE_SCHED_AVG_IRQ=y +CONFIG_BSD_PROCESS_ACCT=y +# CONFIG_BSD_PROCESS_ACCT_V3 is not set +CONFIG_TASKSTATS=y +CONFIG_TASK_DELAY_ACCT=y +CONFIG_TASK_XACCT=y +CONFIG_TASK_IO_ACCOUNTING=y +# CONFIG_PSI is not set +# end of CPU/Task time and stats accounting + +# CONFIG_CPU_ISOLATION is not set + +# +# RCU Subsystem +# +CONFIG_TREE_RCU=y +# CONFIG_RCU_EXPERT is not set +CONFIG_SRCU=y +CONFIG_TREE_SRCU=y +CONFIG_TASKS_RCU_GENERIC=y +CONFIG_TASKS_RUDE_RCU=y +CONFIG_TASKS_TRACE_RCU=y +CONFIG_RCU_STALL_COMMON=y +CONFIG_RCU_NEED_SEGCBLIST=y +# end of RCU Subsystem + +CONFIG_IKCONFIG=y +CONFIG_IKCONFIG_PROC=y +# CONFIG_IKHEADERS is not set +CONFIG_LOG_BUF_SHIFT=17 +CONFIG_LOG_CPU_MAX_BUF_SHIFT=12 +CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13 +# CONFIG_PRINTK_INDEX is not set +CONFIG_GENERIC_SCHED_CLOCK=y + +# +# Scheduler features +# +# end of Scheduler features + +CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y +CONFIG_CC_HAS_INT128=y +CONFIG_CC_IMPLICIT_FALLTHROUGH="-Wimplicit-fallthrough=5" +CONFIG_GCC11_NO_ARRAY_BOUNDS=y +CONFIG_CC_NO_ARRAY_BOUNDS=y +CONFIG_ARCH_SUPPORTS_INT128=y +CONFIG_CGROUPS=y +CONFIG_PAGE_COUNTER=y +# CONFIG_CGROUP_FAVOR_DYNMODS is not set +CONFIG_MEMCG=y +CONFIG_MEMCG_KMEM=y +CONFIG_BLK_CGROUP=y +CONFIG_CGROUP_WRITEBACK=y +CONFIG_CGROUP_SCHED=y +CONFIG_FAIR_GROUP_SCHED=y +CONFIG_CFS_BANDWIDTH=y +CONFIG_RT_GROUP_SCHED=y +CONFIG_CGROUP_PIDS=y +CONFIG_CGROUP_RDMA=y +CONFIG_CGROUP_FREEZER=y +CONFIG_CGROUP_HUGETLB=y +CONFIG_CPUSETS=y +CONFIG_PROC_PID_CPUSET=y +CONFIG_CGROUP_DEVICE=y +CONFIG_CGROUP_CPUACCT=y +CONFIG_CGROUP_PERF=y +CONFIG_CGROUP_BPF=y +CONFIG_CGROUP_MISC=y +# CONFIG_CGROUP_DEBUG is not set +CONFIG_SOCK_CGROUP_DATA=y +CONFIG_NAMESPACES=y +CONFIG_UTS_NS=y +CONFIG_TIME_NS=y +CONFIG_IPC_NS=y +CONFIG_USER_NS=y +CONFIG_PID_NS=y +CONFIG_NET_NS=y +CONFIG_CHECKPOINT_RESTORE=y +# CONFIG_SCHED_AUTOGROUP is not set +# CONFIG_SYSFS_DEPRECATED is not set +# CONFIG_RELAY is not set +CONFIG_BLK_DEV_INITRD=y +CONFIG_INITRAMFS_SOURCE="" +CONFIG_RD_GZIP=y +# CONFIG_RD_BZIP2 is not set +# CONFIG_RD_LZMA is not set +# CONFIG_RD_XZ is not set +# CONFIG_RD_LZO is not set +# CONFIG_RD_LZ4 is not set +CONFIG_RD_ZSTD=y +# CONFIG_BOOT_CONFIG is not set +# CONFIG_INITRAMFS_PRESERVE_MTIME is not set +CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y +# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set +CONFIG_LD_ORPHAN_WARN=y +CONFIG_SYSCTL=y +CONFIG_SYSCTL_EXCEPTION_TRACE=y +CONFIG_EXPERT=y +CONFIG_MULTIUSER=y +CONFIG_SGETMASK_SYSCALL=y +# CONFIG_SYSFS_SYSCALL is not set +CONFIG_FHANDLE=y +CONFIG_POSIX_TIMERS=y +CONFIG_PRINTK=y +CONFIG_BUG=y +CONFIG_ELF_CORE=y +CONFIG_BASE_FULL=y +CONFIG_FUTEX=y +CONFIG_FUTEX_PI=y +CONFIG_EPOLL=y +CONFIG_SIGNALFD=y +CONFIG_TIMERFD=y +CONFIG_EVENTFD=y +CONFIG_SHMEM=y +CONFIG_AIO=y +CONFIG_IO_URING=y +CONFIG_ADVISE_SYSCALLS=y +CONFIG_MEMBARRIER=y +CONFIG_KALLSYMS=y +# CONFIG_KALLSYMS_ALL is not set +CONFIG_KALLSYMS_BASE_RELATIVE=y +CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y +CONFIG_KCMP=y +CONFIG_RSEQ=y +# CONFIG_DEBUG_RSEQ is not set +CONFIG_EMBEDDED=y +CONFIG_HAVE_PERF_EVENTS=y +# CONFIG_PC104 is not set + +# +# Kernel Performance Events And Counters +# +CONFIG_PERF_EVENTS=y +# CONFIG_DEBUG_PERF_USE_VMALLOC is not set +# end of Kernel Performance Events And Counters + +CONFIG_SYSTEM_DATA_VERIFICATION=y +# CONFIG_PROFILING is not set +CONFIG_TRACEPOINTS=y +# end of General setup + +CONFIG_ARM64=y 
+CONFIG_GCC_SUPPORTS_DYNAMIC_FTRACE_WITH_REGS=y +CONFIG_64BIT=y +CONFIG_MMU=y +CONFIG_ARM64_PAGE_SHIFT=12 +CONFIG_ARM64_CONT_PTE_SHIFT=4 +CONFIG_ARM64_CONT_PMD_SHIFT=4 +CONFIG_ARCH_MMAP_RND_BITS_MIN=18 +CONFIG_ARCH_MMAP_RND_BITS_MAX=33 +CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=11 +CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16 +CONFIG_STACKTRACE_SUPPORT=y +CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000 +CONFIG_LOCKDEP_SUPPORT=y +CONFIG_GENERIC_BUG=y +CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y +CONFIG_GENERIC_HWEIGHT=y +CONFIG_GENERIC_CSUM=y +CONFIG_GENERIC_CALIBRATE_DELAY=y +CONFIG_ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE=y +CONFIG_SMP=y +CONFIG_KERNEL_MODE_NEON=y +CONFIG_FIX_EARLYCON_MEM=y +CONFIG_PGTABLE_LEVELS=4 +CONFIG_ARCH_SUPPORTS_UPROBES=y +CONFIG_ARCH_PROC_KCORE_TEXT=y + +# +# Platform selection +# +# CONFIG_ARCH_ACTIONS is not set +# CONFIG_ARCH_SUNXI is not set +# CONFIG_ARCH_ALPINE is not set +# CONFIG_ARCH_APPLE is not set +# CONFIG_ARCH_BCM is not set +# CONFIG_ARCH_BERLIN is not set +# CONFIG_ARCH_BITMAIN is not set +# CONFIG_ARCH_EXYNOS is not set +# CONFIG_ARCH_SPARX5 is not set +# CONFIG_ARCH_K3 is not set +# CONFIG_ARCH_LG1K is not set +# CONFIG_ARCH_HISI is not set +# CONFIG_ARCH_KEEMBAY is not set +# CONFIG_ARCH_MEDIATEK is not set +# CONFIG_ARCH_MESON is not set +# CONFIG_ARCH_MVEBU is not set +# CONFIG_ARCH_NXP is not set +# CONFIG_ARCH_NPCM is not set +# CONFIG_ARCH_QCOM is not set +# CONFIG_ARCH_REALTEK is not set +# CONFIG_ARCH_RENESAS is not set +# CONFIG_ARCH_ROCKCHIP is not set +# CONFIG_ARCH_SEATTLE is not set +# CONFIG_ARCH_INTEL_SOCFPGA is not set +# CONFIG_ARCH_SYNQUACER is not set +# CONFIG_ARCH_TEGRA is not set +# CONFIG_ARCH_SPRD is not set +# CONFIG_ARCH_THUNDER is not set +# CONFIG_ARCH_THUNDER2 is not set +# CONFIG_ARCH_UNIPHIER is not set +CONFIG_ARCH_VEXPRESS=y +# CONFIG_ARCH_VISCONTI is not set +# CONFIG_ARCH_XGENE is not set +# CONFIG_ARCH_ZYNQMP is not set +# end of Platform selection + +# +# Kernel Features +# + +# +# ARM errata workarounds via the alternatives framework +# +CONFIG_AMPERE_ERRATUM_AC03_CPU_38=y +CONFIG_ARM64_WORKAROUND_CLEAN_CACHE=y +CONFIG_ARM64_ERRATUM_826319=y +CONFIG_ARM64_ERRATUM_827319=y +CONFIG_ARM64_ERRATUM_824069=y +CONFIG_ARM64_ERRATUM_819472=y +CONFIG_ARM64_ERRATUM_832075=y +CONFIG_ARM64_ERRATUM_843419=y +CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419=y +CONFIG_ARM64_ERRATUM_1024718=y +CONFIG_ARM64_WORKAROUND_SPECULATIVE_AT=y +# CONFIG_ARM64_ERRATUM_1165522 is not set +CONFIG_ARM64_ERRATUM_1319367=y +CONFIG_ARM64_ERRATUM_1530923=y +# CONFIG_ARM64_ERRATUM_2441007 is not set +# CONFIG_ARM64_ERRATUM_1286807 is not set +# CONFIG_ARM64_ERRATUM_1463225 is not set +# CONFIG_ARM64_ERRATUM_1542419 is not set +CONFIG_ARM64_ERRATUM_1508412=y +CONFIG_ARM64_ERRATUM_2051678=y +CONFIG_ARM64_ERRATUM_2077057=y +CONFIG_ARM64_ERRATUM_2658417=y +CONFIG_ARM64_WORKAROUND_TSB_FLUSH_FAILURE=y +CONFIG_ARM64_ERRATUM_2054223=y +CONFIG_ARM64_ERRATUM_2067961=y +# CONFIG_ARM64_ERRATUM_2441009 is not set +# CONFIG_ARM64_ERRATUM_2457168 is not set +CONFIG_ARM64_ERRATUM_2966298=y +# CONFIG_CAVIUM_ERRATUM_22375 is not set +# CONFIG_CAVIUM_ERRATUM_23154 is not set +# CONFIG_CAVIUM_ERRATUM_27456 is not set +# CONFIG_CAVIUM_ERRATUM_30115 is not set +# CONFIG_CAVIUM_TX2_ERRATUM_219 is not set +# CONFIG_FUJITSU_ERRATUM_010001 is not set +# CONFIG_HISILICON_ERRATUM_161600802 is not set +# CONFIG_QCOM_FALKOR_ERRATUM_1003 is not set +# CONFIG_QCOM_FALKOR_ERRATUM_1009 is not set +# CONFIG_QCOM_QDF2400_ERRATUM_0065 is not set +# CONFIG_QCOM_FALKOR_ERRATUM_E1041 is not set +# 
CONFIG_NVIDIA_CARMEL_CNP_ERRATUM is not set +CONFIG_SOCIONEXT_SYNQUACER_PREITS=y +# end of ARM errata workarounds via the alternatives framework + +CONFIG_ARM64_4K_PAGES=y +# CONFIG_ARM64_16K_PAGES is not set +# CONFIG_ARM64_64K_PAGES is not set +# CONFIG_ARM64_VA_BITS_39 is not set +CONFIG_ARM64_VA_BITS_48=y +CONFIG_ARM64_VA_BITS=48 +CONFIG_ARM64_PA_BITS_48=y +CONFIG_ARM64_PA_BITS=48 +# CONFIG_CPU_BIG_ENDIAN is not set +CONFIG_CPU_LITTLE_ENDIAN=y +CONFIG_SCHED_MC=y +# CONFIG_SCHED_CLUSTER is not set +# CONFIG_SCHED_SMT is not set +CONFIG_NR_CPUS=256 +# CONFIG_HOTPLUG_CPU is not set +# CONFIG_NUMA is not set +# CONFIG_HZ_100 is not set +CONFIG_HZ_250=y +# CONFIG_HZ_300 is not set +# CONFIG_HZ_1000 is not set +CONFIG_HZ=250 +CONFIG_SCHED_HRTICK=y +CONFIG_ARCH_SPARSEMEM_ENABLE=y +CONFIG_HW_PERF_EVENTS=y +CONFIG_PARAVIRT=y +# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set +# CONFIG_KEXEC_FILE is not set +# CONFIG_CRASH_DUMP is not set +# CONFIG_XEN is not set +CONFIG_ARCH_FORCE_MAX_ORDER=11 +# CONFIG_UNMAP_KERNEL_AT_EL0 is not set +# CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY is not set +# CONFIG_RODATA_FULL_DEFAULT_ENABLED is not set +# CONFIG_ARM64_SW_TTBR0_PAN is not set +# CONFIG_ARM64_TAGGED_ADDR_ABI is not set +# CONFIG_COMPAT is not set + +# +# ARMv8.1 architectural features +# +CONFIG_ARM64_HW_AFDBM=y +CONFIG_ARM64_PAN=y +CONFIG_AS_HAS_LDAPR=y +CONFIG_AS_HAS_LSE_ATOMICS=y +CONFIG_ARM64_LSE_ATOMICS=y +CONFIG_ARM64_USE_LSE_ATOMICS=y +# end of ARMv8.1 architectural features + +# +# ARMv8.2 architectural features +# +CONFIG_AS_HAS_ARMV8_2=y +CONFIG_AS_HAS_SHA3=y +# CONFIG_ARM64_PMEM is not set +CONFIG_ARM64_RAS_EXTN=y +# CONFIG_ARM64_CNP is not set +# end of ARMv8.2 architectural features + +# +# ARMv8.3 architectural features +# +# CONFIG_ARM64_PTR_AUTH is not set +CONFIG_CC_HAS_BRANCH_PROT_PAC_RET=y +CONFIG_CC_HAS_SIGN_RETURN_ADDRESS=y +CONFIG_AS_HAS_PAC=y +CONFIG_AS_HAS_CFI_NEGATE_RA_STATE=y +# end of ARMv8.3 architectural features + +# +# ARMv8.4 architectural features +# +CONFIG_ARM64_AMU_EXTN=y +CONFIG_AS_HAS_ARMV8_4=y +CONFIG_ARM64_TLB_RANGE=y +# end of ARMv8.4 architectural features + +# +# ARMv8.5 architectural features +# +CONFIG_AS_HAS_ARMV8_5=y +CONFIG_ARM64_BTI=y +CONFIG_CC_HAS_BRANCH_PROT_PAC_RET_BTI=y +CONFIG_ARM64_E0PD=y +CONFIG_ARM64_AS_HAS_MTE=y +# end of ARMv8.5 architectural features + +# +# ARMv8.7 architectural features +# +# CONFIG_ARM64_EPAN is not set +# end of ARMv8.7 architectural features + +CONFIG_ARM64_SVE=y +CONFIG_ARM64_SME=y +CONFIG_ARM64_MODULE_PLTS=y +# CONFIG_ARM64_PSEUDO_NMI is not set +CONFIG_RELOCATABLE=y +# CONFIG_RANDOMIZE_BASE is not set +CONFIG_CC_HAVE_STACKPROTECTOR_SYSREG=y +CONFIG_STACKPROTECTOR_PER_TASK=y +CONFIG_ARCH_NR_GPIO=0 +# end of Kernel Features + +# +# Boot options +# +# CONFIG_ARM64_ACPI_PARKING_PROTOCOL is not set +CONFIG_CMDLINE="" +CONFIG_EFI_STUB=y +CONFIG_EFI=y +# CONFIG_DMI is not set +# end of Boot options + +# +# Power management options +# +# CONFIG_SUSPEND is not set +CONFIG_PM=y +# CONFIG_PM_DEBUG is not set +CONFIG_PM_CLK=y +CONFIG_PM_GENERIC_DOMAINS=y +# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set +CONFIG_PM_GENERIC_DOMAINS_OF=y +# CONFIG_ENERGY_MODEL is not set +CONFIG_ARCH_SUSPEND_POSSIBLE=y +# end of Power management options + +# +# CPU Power Management +# + +# +# CPU Idle +# +# CONFIG_CPU_IDLE is not set +# end of CPU Idle + +# +# CPU Frequency scaling +# +CONFIG_CPU_FREQ=y +# CONFIG_CPU_FREQ_STAT is not set +CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y +# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set +# 
CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set +CONFIG_CPU_FREQ_GOV_PERFORMANCE=y +# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set +# CONFIG_CPU_FREQ_GOV_USERSPACE is not set +# CONFIG_CPU_FREQ_GOV_ONDEMAND is not set +# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set +# CONFIG_CPU_FREQ_GOV_SCHEDUTIL is not set + +# +# CPU frequency scaling drivers +# +# CONFIG_CPUFREQ_DT is not set +# end of CPU Frequency scaling +# end of CPU Power Management + +CONFIG_ARCH_SUPPORTS_ACPI=y +CONFIG_ACPI=y +CONFIG_ACPI_GENERIC_GSI=y +CONFIG_ACPI_CCA_REQUIRED=y +# CONFIG_ACPI_DEBUGGER is not set +CONFIG_ACPI_SPCR_TABLE=y +# CONFIG_ACPI_EC_DEBUGFS is not set +CONFIG_ACPI_AC=y +CONFIG_ACPI_BATTERY=y +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_TINY_POWER_BUTTON is not set +# CONFIG_ACPI_DOCK is not set +CONFIG_ACPI_MCFG=y +# CONFIG_ACPI_PROCESSOR is not set +CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y +# CONFIG_ACPI_TABLE_UPGRADE is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_PCI_SLOT is not set +# CONFIG_ACPI_CONTAINER is not set +# CONFIG_ACPI_HOTPLUG_MEMORY is not set +# CONFIG_ACPI_HED is not set +# CONFIG_ACPI_CUSTOM_METHOD is not set +# CONFIG_ACPI_BGRT is not set +CONFIG_ACPI_REDUCED_HARDWARE_ONLY=y +CONFIG_HAVE_ACPI_APEI=y +# CONFIG_ACPI_APEI is not set +# CONFIG_ACPI_CONFIGFS is not set +# CONFIG_ACPI_PFRUT is not set +CONFIG_ACPI_IORT=y +CONFIG_ACPI_GTDT=y +CONFIG_ACPI_PPTT=y +# CONFIG_PMIC_OPREGION is not set +CONFIG_ACPI_PRMT=y +CONFIG_IRQ_BYPASS_MANAGER=y +CONFIG_HAVE_KVM=y +# CONFIG_VIRTUALIZATION is not set + +# +# General architecture-dependent options +# +CONFIG_CRASH_CORE=y +CONFIG_KPROBES=y +CONFIG_JUMP_LABEL=y +# CONFIG_STATIC_KEYS_SELFTEST is not set +CONFIG_UPROBES=y +CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y +CONFIG_KRETPROBES=y +CONFIG_HAVE_IOREMAP_PROT=y +CONFIG_HAVE_KPROBES=y +CONFIG_HAVE_KRETPROBES=y +CONFIG_ARCH_CORRECT_STACKTRACE_ON_KRETPROBE=y +CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y +CONFIG_HAVE_NMI=y +CONFIG_TRACE_IRQFLAGS_SUPPORT=y +CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y +CONFIG_HAVE_ARCH_TRACEHOOK=y +CONFIG_HAVE_DMA_CONTIGUOUS=y +CONFIG_GENERIC_SMP_IDLE_THREAD=y +CONFIG_GENERIC_IDLE_POLL_SETUP=y +CONFIG_ARCH_HAS_FORTIFY_SOURCE=y +CONFIG_ARCH_HAS_KEEPINITRD=y +CONFIG_ARCH_HAS_SET_MEMORY=y +CONFIG_ARCH_HAS_SET_DIRECT_MAP=y +CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y +CONFIG_ARCH_WANTS_NO_INSTR=y +CONFIG_HAVE_ASM_MODVERSIONS=y +CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y +CONFIG_HAVE_RSEQ=y +CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y +CONFIG_HAVE_HW_BREAKPOINT=y +CONFIG_HAVE_PERF_REGS=y +CONFIG_HAVE_PERF_USER_STACK_DUMP=y +CONFIG_HAVE_ARCH_JUMP_LABEL=y +CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y +CONFIG_MMU_GATHER_TABLE_FREE=y +CONFIG_MMU_GATHER_RCU_TABLE_FREE=y +CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y +CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y +CONFIG_HAVE_CMPXCHG_LOCAL=y +CONFIG_HAVE_CMPXCHG_DOUBLE=y +CONFIG_HAVE_ARCH_SECCOMP=y +CONFIG_HAVE_ARCH_SECCOMP_FILTER=y +CONFIG_SECCOMP=y +CONFIG_SECCOMP_FILTER=y +# CONFIG_SECCOMP_CACHE_DEBUG is not set +CONFIG_HAVE_ARCH_STACKLEAK=y +CONFIG_HAVE_STACKPROTECTOR=y +CONFIG_STACKPROTECTOR=y +CONFIG_STACKPROTECTOR_STRONG=y +CONFIG_ARCH_SUPPORTS_LTO_CLANG=y +CONFIG_ARCH_SUPPORTS_LTO_CLANG_THIN=y +CONFIG_LTO_NONE=y +CONFIG_ARCH_SUPPORTS_CFI_CLANG=y +CONFIG_HAVE_CONTEXT_TRACKING_USER=y +CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y +CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y +CONFIG_HAVE_MOVE_PUD=y +CONFIG_HAVE_MOVE_PMD=y 
+CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y +CONFIG_HAVE_ARCH_HUGE_VMAP=y +CONFIG_HAVE_ARCH_HUGE_VMALLOC=y +CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y +CONFIG_HAVE_MOD_ARCH_SPECIFIC=y +CONFIG_MODULES_USE_ELF_RELA=y +CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK=y +CONFIG_SOFTIRQ_ON_OWN_STACK=y +CONFIG_ARCH_HAS_ELF_RANDOMIZE=y +CONFIG_HAVE_ARCH_MMAP_RND_BITS=y +CONFIG_ARCH_MMAP_RND_BITS=28 +CONFIG_PAGE_SIZE_LESS_THAN_64KB=y +CONFIG_PAGE_SIZE_LESS_THAN_256KB=y +CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT=y +CONFIG_CLONE_BACKWARDS=y +# CONFIG_COMPAT_32BIT_TIME is not set +CONFIG_HAVE_ARCH_VMAP_STACK=y +CONFIG_VMAP_STACK=y +CONFIG_HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET=y +CONFIG_RANDOMIZE_KSTACK_OFFSET=y +# CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT is not set +CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y +CONFIG_STRICT_KERNEL_RWX=y +CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y +CONFIG_STRICT_MODULE_RWX=y +CONFIG_HAVE_ARCH_COMPILER_H=y +CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y +CONFIG_ARCH_USE_MEMREMAP_PROT=y +# CONFIG_LOCK_EVENT_COUNTS is not set +CONFIG_ARCH_HAS_RELR=y +CONFIG_HAVE_PREEMPT_DYNAMIC=y +CONFIG_HAVE_PREEMPT_DYNAMIC_KEY=y +CONFIG_ARCH_WANT_LD_ORPHAN_WARN=y +CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y +CONFIG_ARCH_SUPPORTS_PAGE_TABLE_CHECK=y +CONFIG_ARCH_HAVE_TRACE_MMIO_ACCESS=y + +# +# GCOV-based kernel profiling +# +# CONFIG_GCOV_KERNEL is not set +CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y +# end of GCOV-based kernel profiling + +CONFIG_HAVE_GCC_PLUGINS=y +# end of General architecture-dependent options + +CONFIG_RT_MUTEXES=y +CONFIG_BASE_SMALL=0 +CONFIG_MODULES=y +CONFIG_MODULE_FORCE_LOAD=y +CONFIG_MODULE_UNLOAD=y +CONFIG_MODULE_FORCE_UNLOAD=y +# CONFIG_MODULE_UNLOAD_TAINT_TRACKING is not set +CONFIG_MODVERSIONS=y +CONFIG_ASM_MODVERSIONS=y +CONFIG_MODULE_SRCVERSION_ALL=y +# CONFIG_MODULE_SIG is not set +CONFIG_MODULE_COMPRESS_NONE=y +# CONFIG_MODULE_COMPRESS_GZIP is not set +# CONFIG_MODULE_COMPRESS_XZ is not set +# CONFIG_MODULE_COMPRESS_ZSTD is not set +# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set +CONFIG_MODPROBE_PATH="/sbin/modprobe" +# CONFIG_TRIM_UNUSED_KSYMS is not set +CONFIG_MODULES_TREE_LOOKUP=y +CONFIG_BLOCK=y +CONFIG_BLOCK_LEGACY_AUTOLOAD=y +CONFIG_BLK_DEV_BSG_COMMON=y +CONFIG_BLK_DEV_BSGLIB=y +# CONFIG_BLK_DEV_INTEGRITY is not set +# CONFIG_BLK_DEV_ZONED is not set +# CONFIG_BLK_DEV_THROTTLING is not set +# CONFIG_BLK_WBT is not set +# CONFIG_BLK_CGROUP_IOLATENCY is not set +# CONFIG_BLK_CGROUP_IOCOST is not set +# CONFIG_BLK_CGROUP_IOPRIO is not set +# CONFIG_BLK_DEBUG_FS is not set +# CONFIG_BLK_SED_OPAL is not set +# CONFIG_BLK_INLINE_ENCRYPTION is not set + +# +# Partition Types +# +CONFIG_PARTITION_ADVANCED=y +# CONFIG_ACORN_PARTITION is not set +# CONFIG_AIX_PARTITION is not set +# CONFIG_OSF_PARTITION is not set +# CONFIG_AMIGA_PARTITION is not set +# CONFIG_ATARI_PARTITION is not set +# CONFIG_MAC_PARTITION is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_BSD_DISKLABEL is not set +# CONFIG_MINIX_SUBPARTITION is not set +# CONFIG_SOLARIS_X86_PARTITION is not set +# CONFIG_UNIXWARE_DISKLABEL is not set +# CONFIG_LDM_PARTITION is not set +# CONFIG_SGI_PARTITION is not set +# CONFIG_ULTRIX_PARTITION is not set +# CONFIG_SUN_PARTITION is not set +# CONFIG_KARMA_PARTITION is not set +CONFIG_EFI_PARTITION=y +# CONFIG_SYSV68_PARTITION is not set +# CONFIG_CMDLINE_PARTITION is not set +# end of Partition Types + +CONFIG_BLK_MQ_PCI=y +CONFIG_BLK_MQ_VIRTIO=y +CONFIG_BLK_PM=y +CONFIG_BLOCK_HOLDER_DEPRECATED=y +CONFIG_BLK_MQ_STACKING=y + +# +# IO Schedulers +# +# CONFIG_MQ_IOSCHED_DEADLINE is not set +# 
CONFIG_MQ_IOSCHED_KYBER is not set +# CONFIG_IOSCHED_BFQ is not set +# end of IO Schedulers + +CONFIG_ASN1=y +CONFIG_ARCH_INLINE_SPIN_TRYLOCK=y +CONFIG_ARCH_INLINE_SPIN_TRYLOCK_BH=y +CONFIG_ARCH_INLINE_SPIN_LOCK=y +CONFIG_ARCH_INLINE_SPIN_LOCK_BH=y +CONFIG_ARCH_INLINE_SPIN_LOCK_IRQ=y +CONFIG_ARCH_INLINE_SPIN_LOCK_IRQSAVE=y +CONFIG_ARCH_INLINE_SPIN_UNLOCK=y +CONFIG_ARCH_INLINE_SPIN_UNLOCK_BH=y +CONFIG_ARCH_INLINE_SPIN_UNLOCK_IRQ=y +CONFIG_ARCH_INLINE_SPIN_UNLOCK_IRQRESTORE=y +CONFIG_ARCH_INLINE_READ_LOCK=y +CONFIG_ARCH_INLINE_READ_LOCK_BH=y +CONFIG_ARCH_INLINE_READ_LOCK_IRQ=y +CONFIG_ARCH_INLINE_READ_LOCK_IRQSAVE=y +CONFIG_ARCH_INLINE_READ_UNLOCK=y +CONFIG_ARCH_INLINE_READ_UNLOCK_BH=y +CONFIG_ARCH_INLINE_READ_UNLOCK_IRQ=y +CONFIG_ARCH_INLINE_READ_UNLOCK_IRQRESTORE=y +CONFIG_ARCH_INLINE_WRITE_LOCK=y +CONFIG_ARCH_INLINE_WRITE_LOCK_BH=y +CONFIG_ARCH_INLINE_WRITE_LOCK_IRQ=y +CONFIG_ARCH_INLINE_WRITE_LOCK_IRQSAVE=y +CONFIG_ARCH_INLINE_WRITE_UNLOCK=y +CONFIG_ARCH_INLINE_WRITE_UNLOCK_BH=y +CONFIG_ARCH_INLINE_WRITE_UNLOCK_IRQ=y +CONFIG_ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE=y +CONFIG_INLINE_SPIN_TRYLOCK=y +CONFIG_INLINE_SPIN_TRYLOCK_BH=y +CONFIG_INLINE_SPIN_LOCK=y +CONFIG_INLINE_SPIN_LOCK_BH=y +CONFIG_INLINE_SPIN_LOCK_IRQ=y +CONFIG_INLINE_SPIN_LOCK_IRQSAVE=y +CONFIG_INLINE_SPIN_UNLOCK_BH=y +CONFIG_INLINE_SPIN_UNLOCK_IRQ=y +CONFIG_INLINE_SPIN_UNLOCK_IRQRESTORE=y +CONFIG_INLINE_READ_LOCK=y +CONFIG_INLINE_READ_LOCK_BH=y +CONFIG_INLINE_READ_LOCK_IRQ=y +CONFIG_INLINE_READ_LOCK_IRQSAVE=y +CONFIG_INLINE_READ_UNLOCK=y +CONFIG_INLINE_READ_UNLOCK_BH=y +CONFIG_INLINE_READ_UNLOCK_IRQ=y +CONFIG_INLINE_READ_UNLOCK_IRQRESTORE=y +CONFIG_INLINE_WRITE_LOCK=y +CONFIG_INLINE_WRITE_LOCK_BH=y +CONFIG_INLINE_WRITE_LOCK_IRQ=y +CONFIG_INLINE_WRITE_LOCK_IRQSAVE=y +CONFIG_INLINE_WRITE_UNLOCK=y +CONFIG_INLINE_WRITE_UNLOCK_BH=y +CONFIG_INLINE_WRITE_UNLOCK_IRQ=y +CONFIG_INLINE_WRITE_UNLOCK_IRQRESTORE=y +CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y +CONFIG_MUTEX_SPIN_ON_OWNER=y +CONFIG_RWSEM_SPIN_ON_OWNER=y +CONFIG_LOCK_SPIN_ON_OWNER=y +CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y +CONFIG_QUEUED_SPINLOCKS=y +CONFIG_ARCH_USE_QUEUED_RWLOCKS=y +CONFIG_QUEUED_RWLOCKS=y +CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y +CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y +CONFIG_FREEZER=y + +# +# Executable file formats +# +CONFIG_BINFMT_ELF=y +CONFIG_ARCH_BINFMT_ELF_STATE=y +CONFIG_ARCH_BINFMT_ELF_EXTRA_PHDRS=y +CONFIG_ARCH_HAVE_ELF_PROT=y +CONFIG_ARCH_USE_GNU_PROPERTY=y +CONFIG_ELFCORE=y +CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y +CONFIG_BINFMT_SCRIPT=y +CONFIG_BINFMT_MISC=y +CONFIG_COREDUMP=y +# end of Executable file formats + +# +# Memory Management options +# +CONFIG_SWAP=y +# CONFIG_ZSWAP is not set + +# +# SLAB allocator options +# +# CONFIG_SLAB is not set +CONFIG_SLUB=y +# CONFIG_SLOB is not set +# CONFIG_SLAB_MERGE_DEFAULT is not set +# CONFIG_SLAB_FREELIST_RANDOM is not set +# CONFIG_SLAB_FREELIST_HARDENED is not set +# CONFIG_SLUB_STATS is not set +# CONFIG_SLUB_CPU_PARTIAL is not set +# end of SLAB allocator options + +# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set +# CONFIG_COMPAT_BRK is not set +CONFIG_SPARSEMEM=y +CONFIG_SPARSEMEM_EXTREME=y +CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y +CONFIG_SPARSEMEM_VMEMMAP=y +CONFIG_HAVE_FAST_GUP=y +CONFIG_ARCH_KEEP_MEMBLOCK=y +CONFIG_MEMORY_ISOLATION=y +CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y +CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y +CONFIG_MEMORY_HOTPLUG=y +# CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set +CONFIG_MEMORY_HOTREMOVE=y +CONFIG_MHP_MEMMAP_ON_MEMORY=y +CONFIG_SPLIT_PTLOCK_CPUS=4 +CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y 
+CONFIG_MEMORY_BALLOON=y +# CONFIG_BALLOON_COMPACTION is not set +CONFIG_COMPACTION=y +CONFIG_COMPACT_UNEVICTABLE_DEFAULT=1 +CONFIG_PAGE_REPORTING=y +CONFIG_MIGRATION=y +CONFIG_DEVICE_MIGRATION=y +CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y +CONFIG_ARCH_ENABLE_THP_MIGRATION=y +CONFIG_CONTIG_ALLOC=y +CONFIG_PHYS_ADDR_T_64BIT=y +# CONFIG_KSM is not set +CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 +CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y +# CONFIG_MEMORY_FAILURE is not set +CONFIG_ARCH_WANTS_THP_SWAP=y +CONFIG_TRANSPARENT_HUGEPAGE=y +CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y +# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set +CONFIG_THP_SWAP=y +# CONFIG_READ_ONLY_THP_FOR_FS is not set +# CONFIG_CMA is not set +CONFIG_GENERIC_EARLY_IOREMAP=y +# CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set +# CONFIG_IDLE_PAGE_TRACKING is not set +CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y +CONFIG_ARCH_HAS_CURRENT_STACK_POINTER=y +CONFIG_ARCH_HAS_PTE_DEVMAP=y +CONFIG_ARCH_HAS_ZONE_DMA_SET=y +# CONFIG_ZONE_DMA is not set +# CONFIG_ZONE_DMA32 is not set +CONFIG_ZONE_DEVICE=y +# CONFIG_DEVICE_PRIVATE is not set +CONFIG_VMAP_PFN=y +CONFIG_VM_EVENT_COUNTERS=y +# CONFIG_PERCPU_STATS is not set +# CONFIG_GUP_TEST is not set +CONFIG_ARCH_HAS_PTE_SPECIAL=y +CONFIG_ANON_VMA_NAME=y +CONFIG_USERFAULTFD=y +CONFIG_HAVE_ARCH_USERFAULTFD_MINOR=y +# CONFIG_LRU_GEN is not set +CONFIG_LOCK_MM_AND_FIND_VMA=y + +# +# Data Access Monitoring +# +# CONFIG_DAMON is not set +# end of Data Access Monitoring +# end of Memory Management options + +CONFIG_NET=y +CONFIG_NET_INGRESS=y +CONFIG_NET_EGRESS=y +CONFIG_SKB_EXTENSIONS=y + +# +# Networking options +# +CONFIG_PACKET=y +CONFIG_PACKET_DIAG=y +CONFIG_UNIX=y +CONFIG_UNIX_SCM=y +CONFIG_AF_UNIX_OOB=y +CONFIG_UNIX_DIAG=y +# CONFIG_TLS is not set +CONFIG_XFRM=y +CONFIG_XFRM_ALGO=y +CONFIG_XFRM_USER=y +# CONFIG_XFRM_INTERFACE is not set +# CONFIG_XFRM_SUB_POLICY is not set +# CONFIG_XFRM_MIGRATE is not set +# CONFIG_XFRM_STATISTICS is not set +CONFIG_XFRM_ESP=y +# CONFIG_NET_KEY is not set +# CONFIG_XDP_SOCKETS is not set +CONFIG_INET=y +# CONFIG_IP_MULTICAST is not set +CONFIG_IP_ADVANCED_ROUTER=y +# CONFIG_IP_FIB_TRIE_STATS is not set +CONFIG_IP_MULTIPLE_TABLES=y +# CONFIG_IP_ROUTE_MULTIPATH is not set +# CONFIG_IP_ROUTE_VERBOSE is not set +CONFIG_IP_PNP=y +CONFIG_IP_PNP_DHCP=y +# CONFIG_IP_PNP_BOOTP is not set +# CONFIG_IP_PNP_RARP is not set +CONFIG_NET_IPIP=y +# CONFIG_NET_IPGRE_DEMUX is not set +CONFIG_NET_IP_TUNNEL=y +CONFIG_SYN_COOKIES=y +# CONFIG_NET_IPVTI is not set +CONFIG_NET_UDP_TUNNEL=y +# CONFIG_NET_FOU is not set +# CONFIG_NET_FOU_IP_TUNNELS is not set +# CONFIG_INET_AH is not set +CONFIG_INET_ESP=y +# CONFIG_INET_ESP_OFFLOAD is not set +# CONFIG_INET_ESPINTCP is not set +# CONFIG_INET_IPCOMP is not set +CONFIG_INET_TABLE_PERTURB_ORDER=16 +CONFIG_INET_TUNNEL=y +CONFIG_INET_DIAG=y +CONFIG_INET_TCP_DIAG=y +CONFIG_INET_UDP_DIAG=y +CONFIG_INET_RAW_DIAG=y +# CONFIG_INET_DIAG_DESTROY is not set +# CONFIG_TCP_CONG_ADVANCED is not set +CONFIG_TCP_CONG_CUBIC=y +CONFIG_DEFAULT_TCP_CONG="cubic" +# CONFIG_TCP_MD5SIG is not set +CONFIG_IPV6=y +# CONFIG_IPV6_ROUTER_PREF is not set +CONFIG_IPV6_OPTIMISTIC_DAD=y +# CONFIG_INET6_AH is not set +# CONFIG_INET6_ESP is not set +# CONFIG_INET6_IPCOMP is not set +# CONFIG_IPV6_MIP6 is not set +# CONFIG_IPV6_ILA is not set +# CONFIG_IPV6_VTI is not set +CONFIG_IPV6_SIT=y +# CONFIG_IPV6_SIT_6RD is not set +CONFIG_IPV6_NDISC_NODETYPE=y +# CONFIG_IPV6_TUNNEL is not set +# CONFIG_IPV6_MULTIPLE_TABLES is not set +# CONFIG_IPV6_MROUTE is not set +# CONFIG_IPV6_SEG6_LWTUNNEL is not set +# 
CONFIG_IPV6_SEG6_HMAC is not set +# CONFIG_IPV6_RPL_LWTUNNEL is not set +# CONFIG_IPV6_IOAM6_LWTUNNEL is not set +# CONFIG_NETLABEL is not set +# CONFIG_MPTCP is not set +CONFIG_NETWORK_SECMARK=y +CONFIG_NET_PTP_CLASSIFY=y +CONFIG_NETWORK_PHY_TIMESTAMPING=y +CONFIG_NETFILTER=y +CONFIG_NETFILTER_ADVANCED=y +CONFIG_BRIDGE_NETFILTER=y + +# +# Core Netfilter Configuration +# +CONFIG_NETFILTER_INGRESS=y +CONFIG_NETFILTER_EGRESS=y +CONFIG_NETFILTER_SKIP_EGRESS=y +CONFIG_NETFILTER_NETLINK=y +CONFIG_NETFILTER_FAMILY_BRIDGE=y +CONFIG_NETFILTER_FAMILY_ARP=y +# CONFIG_NETFILTER_NETLINK_HOOK is not set +# CONFIG_NETFILTER_NETLINK_ACCT is not set +CONFIG_NETFILTER_NETLINK_QUEUE=y +CONFIG_NETFILTER_NETLINK_LOG=y +# CONFIG_NETFILTER_NETLINK_OSF is not set +CONFIG_NF_CONNTRACK=y +CONFIG_NF_LOG_SYSLOG=y +CONFIG_NETFILTER_CONNCOUNT=y +CONFIG_NF_CONNTRACK_MARK=y +# CONFIG_NF_CONNTRACK_SECMARK is not set +# CONFIG_NF_CONNTRACK_ZONES is not set +# CONFIG_NF_CONNTRACK_PROCFS is not set +CONFIG_NF_CONNTRACK_EVENTS=y +# CONFIG_NF_CONNTRACK_TIMEOUT is not set +# CONFIG_NF_CONNTRACK_TIMESTAMP is not set +# CONFIG_NF_CONNTRACK_LABELS is not set +# CONFIG_NF_CT_PROTO_DCCP is not set +CONFIG_NF_CT_PROTO_GRE=y +# CONFIG_NF_CT_PROTO_SCTP is not set +# CONFIG_NF_CT_PROTO_UDPLITE is not set +CONFIG_NF_CONNTRACK_AMANDA=y +CONFIG_NF_CONNTRACK_FTP=y +CONFIG_NF_CONNTRACK_H323=y +CONFIG_NF_CONNTRACK_IRC=y +CONFIG_NF_CONNTRACK_BROADCAST=y +CONFIG_NF_CONNTRACK_NETBIOS_NS=y +# CONFIG_NF_CONNTRACK_SNMP is not set +CONFIG_NF_CONNTRACK_PPTP=y +CONFIG_NF_CONNTRACK_SANE=y +CONFIG_NF_CONNTRACK_SIP=y +CONFIG_NF_CONNTRACK_TFTP=y +CONFIG_NF_CT_NETLINK=y +# CONFIG_NETFILTER_NETLINK_GLUE_CT is not set +CONFIG_NF_NAT=y +CONFIG_NF_NAT_AMANDA=y +CONFIG_NF_NAT_FTP=y +CONFIG_NF_NAT_IRC=y +CONFIG_NF_NAT_SIP=y +CONFIG_NF_NAT_TFTP=y +CONFIG_NF_NAT_REDIRECT=y +CONFIG_NF_NAT_MASQUERADE=y +CONFIG_NETFILTER_SYNPROXY=y +CONFIG_NF_TABLES=y +CONFIG_NF_TABLES_INET=y +# CONFIG_NF_TABLES_NETDEV is not set +CONFIG_NFT_NUMGEN=y +CONFIG_NFT_CT=y +CONFIG_NFT_CONNLIMIT=y +CONFIG_NFT_LOG=y +CONFIG_NFT_LIMIT=y +CONFIG_NFT_MASQ=y +CONFIG_NFT_REDIR=y +CONFIG_NFT_NAT=y +CONFIG_NFT_TUNNEL=y +CONFIG_NFT_OBJREF=y +# CONFIG_NFT_QUEUE is not set +# CONFIG_NFT_QUOTA is not set +CONFIG_NFT_REJECT=y +CONFIG_NFT_REJECT_INET=y +CONFIG_NFT_COMPAT=y +# CONFIG_NFT_HASH is not set +CONFIG_NFT_XFRM=y +CONFIG_NFT_SOCKET=y +# CONFIG_NFT_OSF is not set +# CONFIG_NFT_TPROXY is not set +# CONFIG_NFT_SYNPROXY is not set +# CONFIG_NF_FLOW_TABLE is not set +CONFIG_NETFILTER_XTABLES=y + +# +# Xtables combined modules +# +CONFIG_NETFILTER_XT_MARK=y +# CONFIG_NETFILTER_XT_CONNMARK is not set +CONFIG_NETFILTER_XT_SET=y + +# +# Xtables targets +# +# CONFIG_NETFILTER_XT_TARGET_AUDIT is not set +CONFIG_NETFILTER_XT_TARGET_CHECKSUM=y +# CONFIG_NETFILTER_XT_TARGET_CLASSIFY is not set +# CONFIG_NETFILTER_XT_TARGET_CONNMARK is not set +# CONFIG_NETFILTER_XT_TARGET_CT is not set +# CONFIG_NETFILTER_XT_TARGET_DSCP is not set +CONFIG_NETFILTER_XT_TARGET_HL=y +# CONFIG_NETFILTER_XT_TARGET_HMARK is not set +# CONFIG_NETFILTER_XT_TARGET_IDLETIMER is not set +CONFIG_NETFILTER_XT_TARGET_LOG=y +CONFIG_NETFILTER_XT_TARGET_MARK=y +CONFIG_NETFILTER_XT_NAT=y +CONFIG_NETFILTER_XT_TARGET_NETMAP=y +CONFIG_NETFILTER_XT_TARGET_NFLOG=y +# CONFIG_NETFILTER_XT_TARGET_NFQUEUE is not set +# CONFIG_NETFILTER_XT_TARGET_NOTRACK is not set +# CONFIG_NETFILTER_XT_TARGET_RATEEST is not set +CONFIG_NETFILTER_XT_TARGET_REDIRECT=y +CONFIG_NETFILTER_XT_TARGET_MASQUERADE=y +# CONFIG_NETFILTER_XT_TARGET_TEE is not set +# 
CONFIG_NETFILTER_XT_TARGET_TPROXY is not set +# CONFIG_NETFILTER_XT_TARGET_TRACE is not set +CONFIG_NETFILTER_XT_TARGET_SECMARK=y +CONFIG_NETFILTER_XT_TARGET_TCPMSS=y +# CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP is not set + +# +# Xtables matches +# +CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=y +# CONFIG_NETFILTER_XT_MATCH_BPF is not set +CONFIG_NETFILTER_XT_MATCH_CGROUP=y +# CONFIG_NETFILTER_XT_MATCH_CLUSTER is not set +CONFIG_NETFILTER_XT_MATCH_COMMENT=y +# CONFIG_NETFILTER_XT_MATCH_CONNBYTES is not set +# CONFIG_NETFILTER_XT_MATCH_CONNLABEL is not set +# CONFIG_NETFILTER_XT_MATCH_CONNLIMIT is not set +# CONFIG_NETFILTER_XT_MATCH_CONNMARK is not set +CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y +# CONFIG_NETFILTER_XT_MATCH_CPU is not set +# CONFIG_NETFILTER_XT_MATCH_DCCP is not set +# CONFIG_NETFILTER_XT_MATCH_DEVGROUP is not set +# CONFIG_NETFILTER_XT_MATCH_DSCP is not set +CONFIG_NETFILTER_XT_MATCH_ECN=y +# CONFIG_NETFILTER_XT_MATCH_ESP is not set +# CONFIG_NETFILTER_XT_MATCH_HASHLIMIT is not set +# CONFIG_NETFILTER_XT_MATCH_HELPER is not set +CONFIG_NETFILTER_XT_MATCH_HL=y +# CONFIG_NETFILTER_XT_MATCH_IPCOMP is not set +# CONFIG_NETFILTER_XT_MATCH_IPRANGE is not set +CONFIG_NETFILTER_XT_MATCH_IPVS=y +# CONFIG_NETFILTER_XT_MATCH_L2TP is not set +# CONFIG_NETFILTER_XT_MATCH_LENGTH is not set +CONFIG_NETFILTER_XT_MATCH_LIMIT=y +# CONFIG_NETFILTER_XT_MATCH_MAC is not set +# CONFIG_NETFILTER_XT_MATCH_MARK is not set +CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y +# CONFIG_NETFILTER_XT_MATCH_NFACCT is not set +# CONFIG_NETFILTER_XT_MATCH_OSF is not set +CONFIG_NETFILTER_XT_MATCH_OWNER=y +# CONFIG_NETFILTER_XT_MATCH_POLICY is not set +CONFIG_NETFILTER_XT_MATCH_PHYSDEV=y +# CONFIG_NETFILTER_XT_MATCH_PKTTYPE is not set +# CONFIG_NETFILTER_XT_MATCH_QUOTA is not set +# CONFIG_NETFILTER_XT_MATCH_RATEEST is not set +# CONFIG_NETFILTER_XT_MATCH_REALM is not set +# CONFIG_NETFILTER_XT_MATCH_RECENT is not set +# CONFIG_NETFILTER_XT_MATCH_SCTP is not set +# CONFIG_NETFILTER_XT_MATCH_SOCKET is not set +# CONFIG_NETFILTER_XT_MATCH_STATE is not set +CONFIG_NETFILTER_XT_MATCH_STATISTIC=y +# CONFIG_NETFILTER_XT_MATCH_STRING is not set +# CONFIG_NETFILTER_XT_MATCH_TCPMSS is not set +# CONFIG_NETFILTER_XT_MATCH_TIME is not set +# CONFIG_NETFILTER_XT_MATCH_U32 is not set +# end of Core Netfilter Configuration + +CONFIG_IP_SET=y +CONFIG_IP_SET_MAX=256 +CONFIG_IP_SET_BITMAP_IP=y +CONFIG_IP_SET_BITMAP_IPMAC=y +CONFIG_IP_SET_BITMAP_PORT=y +CONFIG_IP_SET_HASH_IP=y +CONFIG_IP_SET_HASH_IPMARK=y +CONFIG_IP_SET_HASH_IPPORT=y +CONFIG_IP_SET_HASH_IPPORTIP=y +CONFIG_IP_SET_HASH_IPPORTNET=y +CONFIG_IP_SET_HASH_IPMAC=y +CONFIG_IP_SET_HASH_MAC=y +CONFIG_IP_SET_HASH_NETPORTNET=y +CONFIG_IP_SET_HASH_NET=y +CONFIG_IP_SET_HASH_NETNET=y +CONFIG_IP_SET_HASH_NETPORT=y +CONFIG_IP_SET_HASH_NETIFACE=y +# CONFIG_IP_SET_LIST_SET is not set +CONFIG_IP_VS=y +# CONFIG_IP_VS_IPV6 is not set +# CONFIG_IP_VS_DEBUG is not set +CONFIG_IP_VS_TAB_BITS=12 + +# +# IPVS transport protocol load balancing support +# +CONFIG_IP_VS_PROTO_TCP=y +CONFIG_IP_VS_PROTO_UDP=y +# CONFIG_IP_VS_PROTO_ESP is not set +# CONFIG_IP_VS_PROTO_AH is not set +# CONFIG_IP_VS_PROTO_SCTP is not set + +# +# IPVS scheduler +# +CONFIG_IP_VS_RR=y +CONFIG_IP_VS_WRR=y +# CONFIG_IP_VS_LC is not set +# CONFIG_IP_VS_WLC is not set +# CONFIG_IP_VS_FO is not set +# CONFIG_IP_VS_OVF is not set +# CONFIG_IP_VS_LBLC is not set +# CONFIG_IP_VS_LBLCR is not set +# CONFIG_IP_VS_DH is not set +CONFIG_IP_VS_SH=y +# CONFIG_IP_VS_MH is not set +# CONFIG_IP_VS_SED is not set +# CONFIG_IP_VS_NQ is not set +# 
CONFIG_IP_VS_TWOS is not set + +# +# IPVS SH scheduler +# +CONFIG_IP_VS_SH_TAB_BITS=8 + +# +# IPVS MH scheduler +# +CONFIG_IP_VS_MH_TAB_INDEX=12 + +# +# IPVS application helper +# +# CONFIG_IP_VS_FTP is not set +CONFIG_IP_VS_NFCT=y +# CONFIG_IP_VS_PE_SIP is not set + +# +# IP: Netfilter Configuration +# +CONFIG_NF_DEFRAG_IPV4=y +CONFIG_NF_SOCKET_IPV4=y +# CONFIG_NF_TPROXY_IPV4 is not set +CONFIG_NF_TABLES_IPV4=y +CONFIG_NFT_REJECT_IPV4=y +# CONFIG_NFT_DUP_IPV4 is not set +# CONFIG_NFT_FIB_IPV4 is not set +# CONFIG_NF_TABLES_ARP is not set +# CONFIG_NF_DUP_IPV4 is not set +# CONFIG_NF_LOG_ARP is not set +CONFIG_NF_LOG_IPV4=y +CONFIG_NF_REJECT_IPV4=y +CONFIG_NF_NAT_PPTP=y +CONFIG_NF_NAT_H323=y +CONFIG_IP_NF_IPTABLES=y +CONFIG_IP_NF_MATCH_AH=y +CONFIG_IP_NF_MATCH_ECN=y +CONFIG_IP_NF_MATCH_RPFILTER=y +CONFIG_IP_NF_MATCH_TTL=y +CONFIG_IP_NF_FILTER=y +CONFIG_IP_NF_TARGET_REJECT=y +CONFIG_IP_NF_TARGET_SYNPROXY=y +CONFIG_IP_NF_NAT=y +CONFIG_IP_NF_TARGET_MASQUERADE=y +CONFIG_IP_NF_TARGET_NETMAP=y +CONFIG_IP_NF_TARGET_REDIRECT=y +CONFIG_IP_NF_MANGLE=y +CONFIG_IP_NF_TARGET_CLUSTERIP=y +CONFIG_IP_NF_TARGET_ECN=y +CONFIG_IP_NF_TARGET_TTL=y +CONFIG_IP_NF_RAW=y +CONFIG_IP_NF_SECURITY=y +CONFIG_IP_NF_ARPTABLES=y +CONFIG_IP_NF_ARPFILTER=y +CONFIG_IP_NF_ARP_MANGLE=y +# end of IP: Netfilter Configuration + +# +# IPv6: Netfilter Configuration +# +CONFIG_NF_SOCKET_IPV6=y +# CONFIG_NF_TPROXY_IPV6 is not set +CONFIG_NF_TABLES_IPV6=y +CONFIG_NFT_REJECT_IPV6=y +# CONFIG_NFT_DUP_IPV6 is not set +# CONFIG_NFT_FIB_IPV6 is not set +# CONFIG_NF_DUP_IPV6 is not set +CONFIG_NF_REJECT_IPV6=y +CONFIG_NF_LOG_IPV6=y +CONFIG_IP6_NF_IPTABLES=y +CONFIG_IP6_NF_MATCH_AH=y +CONFIG_IP6_NF_MATCH_EUI64=y +CONFIG_IP6_NF_MATCH_FRAG=y +CONFIG_IP6_NF_MATCH_OPTS=y +CONFIG_IP6_NF_MATCH_HL=y +CONFIG_IP6_NF_MATCH_IPV6HEADER=y +CONFIG_IP6_NF_MATCH_MH=y +CONFIG_IP6_NF_MATCH_RPFILTER=y +CONFIG_IP6_NF_MATCH_RT=y +CONFIG_IP6_NF_MATCH_SRH=y +CONFIG_IP6_NF_TARGET_HL=y +CONFIG_IP6_NF_FILTER=y +CONFIG_IP6_NF_TARGET_REJECT=y +CONFIG_IP6_NF_TARGET_SYNPROXY=y +CONFIG_IP6_NF_MANGLE=y +CONFIG_IP6_NF_RAW=y +CONFIG_IP6_NF_SECURITY=y +CONFIG_IP6_NF_NAT=y +CONFIG_IP6_NF_TARGET_MASQUERADE=y +CONFIG_IP6_NF_TARGET_NPT=y +# end of IPv6: Netfilter Configuration + +CONFIG_NF_DEFRAG_IPV6=y +# CONFIG_NF_TABLES_BRIDGE is not set +# CONFIG_NF_CONNTRACK_BRIDGE is not set +# CONFIG_BRIDGE_NF_EBTABLES is not set +# CONFIG_BPFILTER is not set +# CONFIG_IP_DCCP is not set +CONFIG_IP_SCTP=y +# CONFIG_SCTP_DBG_OBJCNT is not set +CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5=y +# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1 is not set +# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set +CONFIG_SCTP_COOKIE_HMAC_MD5=y +# CONFIG_SCTP_COOKIE_HMAC_SHA1 is not set +CONFIG_INET_SCTP_DIAG=y +# CONFIG_RDS is not set +# CONFIG_TIPC is not set +# CONFIG_ATM is not set +# CONFIG_L2TP is not set +CONFIG_STP=y +CONFIG_BRIDGE=y +CONFIG_BRIDGE_IGMP_SNOOPING=y +CONFIG_BRIDGE_VLAN_FILTERING=y +# CONFIG_BRIDGE_MRP is not set +# CONFIG_BRIDGE_CFM is not set +# CONFIG_NET_DSA is not set +CONFIG_VLAN_8021Q=y +# CONFIG_VLAN_8021Q_GVRP is not set +# CONFIG_VLAN_8021Q_MVRP is not set +CONFIG_LLC=y +# CONFIG_LLC2 is not set +# CONFIG_ATALK is not set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_PHONET is not set +# CONFIG_6LOWPAN is not set +# CONFIG_IEEE802154 is not set +CONFIG_NET_SCHED=y + +# +# Queueing/Scheduling +# +# CONFIG_NET_SCH_CBQ is not set +# CONFIG_NET_SCH_HTB is not set +# CONFIG_NET_SCH_HFSC is not set +# CONFIG_NET_SCH_PRIO is not set +CONFIG_NET_SCH_MULTIQ=y +# CONFIG_NET_SCH_RED is 
not set +# CONFIG_NET_SCH_SFB is not set +# CONFIG_NET_SCH_SFQ is not set +# CONFIG_NET_SCH_TEQL is not set +# CONFIG_NET_SCH_TBF is not set +# CONFIG_NET_SCH_CBS is not set +# CONFIG_NET_SCH_ETF is not set +# CONFIG_NET_SCH_TAPRIO is not set +# CONFIG_NET_SCH_GRED is not set +# CONFIG_NET_SCH_DSMARK is not set +# CONFIG_NET_SCH_NETEM is not set +# CONFIG_NET_SCH_DRR is not set +# CONFIG_NET_SCH_MQPRIO is not set +# CONFIG_NET_SCH_SKBPRIO is not set +# CONFIG_NET_SCH_CHOKE is not set +# CONFIG_NET_SCH_QFQ is not set +# CONFIG_NET_SCH_CODEL is not set +CONFIG_NET_SCH_FQ_CODEL=y +# CONFIG_NET_SCH_CAKE is not set +# CONFIG_NET_SCH_FQ is not set +# CONFIG_NET_SCH_HHF is not set +# CONFIG_NET_SCH_PIE is not set +CONFIG_NET_SCH_INGRESS=y +# CONFIG_NET_SCH_PLUG is not set +# CONFIG_NET_SCH_ETS is not set +CONFIG_NET_SCH_DEFAULT=y +CONFIG_DEFAULT_FQ_CODEL=y +# CONFIG_DEFAULT_PFIFO_FAST is not set +CONFIG_DEFAULT_NET_SCH="fq_codel" + +# +# Classification +# +CONFIG_NET_CLS=y +# CONFIG_NET_CLS_BASIC is not set +# CONFIG_NET_CLS_ROUTE4 is not set +# CONFIG_NET_CLS_FW is not set +CONFIG_NET_CLS_U32=y +CONFIG_CLS_U32_PERF=y +CONFIG_CLS_U32_MARK=y +# CONFIG_NET_CLS_FLOW is not set +CONFIG_NET_CLS_CGROUP=y +CONFIG_NET_CLS_BPF=y +CONFIG_NET_CLS_FLOWER=y +# CONFIG_NET_CLS_MATCHALL is not set +# CONFIG_NET_EMATCH is not set +CONFIG_NET_CLS_ACT=y +# CONFIG_NET_ACT_POLICE is not set +# CONFIG_NET_ACT_GACT is not set +CONFIG_NET_ACT_MIRRED=y +# CONFIG_NET_ACT_SAMPLE is not set +CONFIG_NET_ACT_IPT=y +# CONFIG_NET_ACT_NAT is not set +# CONFIG_NET_ACT_PEDIT is not set +# CONFIG_NET_ACT_SIMP is not set +# CONFIG_NET_ACT_SKBEDIT is not set +# CONFIG_NET_ACT_CSUM is not set +# CONFIG_NET_ACT_MPLS is not set +# CONFIG_NET_ACT_VLAN is not set +CONFIG_NET_ACT_BPF=y +# CONFIG_NET_ACT_CONNMARK is not set +# CONFIG_NET_ACT_CTINFO is not set +# CONFIG_NET_ACT_SKBMOD is not set +# CONFIG_NET_ACT_IFE is not set +# CONFIG_NET_ACT_TUNNEL_KEY is not set +# CONFIG_NET_ACT_GATE is not set +# CONFIG_NET_TC_SKB_EXT is not set +CONFIG_NET_SCH_FIFO=y +# CONFIG_DCB is not set +CONFIG_DNS_RESOLVER=y +# CONFIG_BATMAN_ADV is not set +# CONFIG_OPENVSWITCH is not set +CONFIG_VSOCKETS=y +CONFIG_VSOCKETS_DIAG=y +# CONFIG_VSOCKETS_LOOPBACK is not set +# CONFIG_VIRTIO_VSOCKETS is not set +CONFIG_HYPERV_VSOCKETS=y +CONFIG_NETLINK_DIAG=y +# CONFIG_MPLS is not set +# CONFIG_NET_NSH is not set +# CONFIG_HSR is not set +CONFIG_NET_SWITCHDEV=y +CONFIG_NET_L3_MASTER_DEV=y +# CONFIG_QRTR is not set +# CONFIG_NET_NCSI is not set +# CONFIG_PCPU_DEV_REFCNT is not set +CONFIG_RPS=y +CONFIG_RFS_ACCEL=y +CONFIG_SOCK_RX_QUEUE_MAPPING=y +CONFIG_XPS=y +CONFIG_CGROUP_NET_PRIO=y +CONFIG_CGROUP_NET_CLASSID=y +CONFIG_NET_RX_BUSY_POLL=y +CONFIG_BQL=y +# CONFIG_BPF_STREAM_PARSER is not set +CONFIG_NET_FLOW_LIMIT=y + +# +# Network testing +# +# CONFIG_NET_PKTGEN is not set +CONFIG_NET_DROP_MONITOR=y +# end of Network testing +# end of Networking options + +# CONFIG_HAMRADIO is not set +# CONFIG_CAN is not set +# CONFIG_BT is not set +# CONFIG_AF_RXRPC is not set +# CONFIG_AF_KCM is not set +# CONFIG_MCTP is not set +CONFIG_FIB_RULES=y +# CONFIG_WIRELESS is not set +# CONFIG_RFKILL is not set +CONFIG_NET_9P=y +CONFIG_NET_9P_FD=y +CONFIG_NET_9P_VIRTIO=y +# CONFIG_NET_9P_DEBUG is not set +# CONFIG_CAIF is not set +CONFIG_CEPH_LIB=y +# CONFIG_CEPH_LIB_PRETTYDEBUG is not set +# CONFIG_CEPH_LIB_USE_DNS_RESOLVER is not set +# CONFIG_NFC is not set +# CONFIG_PSAMPLE is not set +# CONFIG_NET_IFE is not set +# CONFIG_LWTUNNEL is not set +CONFIG_DST_CACHE=y +CONFIG_GRO_CELLS=y 
+CONFIG_NET_SOCK_MSG=y +CONFIG_PAGE_POOL=y +# CONFIG_PAGE_POOL_STATS is not set +CONFIG_FAILOVER=y +# CONFIG_ETHTOOL_NETLINK is not set + +# +# Device Drivers +# +CONFIG_ARM_AMBA=y +CONFIG_HAVE_PCI=y +CONFIG_PCI=y +CONFIG_PCI_DOMAINS=y +CONFIG_PCI_DOMAINS_GENERIC=y +CONFIG_PCI_SYSCALL=y +CONFIG_PCIEPORTBUS=y +CONFIG_PCIEAER=y +# CONFIG_PCIEAER_INJECT is not set +# CONFIG_PCIE_ECRC is not set +CONFIG_PCIEASPM=y +CONFIG_PCIEASPM_DEFAULT=y +# CONFIG_PCIEASPM_POWERSAVE is not set +# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set +# CONFIG_PCIEASPM_PERFORMANCE is not set +CONFIG_PCIE_PME=y +# CONFIG_PCIE_DPC is not set +# CONFIG_PCIE_PTM is not set +CONFIG_PCI_MSI=y +CONFIG_PCI_MSI_IRQ_DOMAIN=y +CONFIG_PCI_QUIRKS=y +# CONFIG_PCI_DEBUG is not set +# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set +# CONFIG_PCI_STUB is not set +# CONFIG_PCI_PF_STUB is not set +CONFIG_PCI_ATS=y +CONFIG_PCI_ECAM=y +CONFIG_PCI_IOV=y +# CONFIG_PCI_PRI is not set +# CONFIG_PCI_PASID is not set +# CONFIG_PCI_P2PDMA is not set +CONFIG_PCI_LABEL=y +CONFIG_PCI_HYPERV=y +# CONFIG_PCIE_BUS_TUNE_OFF is not set +CONFIG_PCIE_BUS_DEFAULT=y +# CONFIG_PCIE_BUS_SAFE is not set +# CONFIG_PCIE_BUS_PERFORMANCE is not set +# CONFIG_PCIE_BUS_PEER2PEER is not set +# CONFIG_VGA_ARB is not set +# CONFIG_HOTPLUG_PCI is not set + +# +# PCI controller drivers +# +# CONFIG_PCI_FTPCI100 is not set +# CONFIG_PCI_HOST_GENERIC is not set +# CONFIG_PCIE_XILINX is not set +# CONFIG_PCI_XGENE is not set +# CONFIG_PCIE_ALTERA is not set +# CONFIG_PCI_HOST_THUNDER_PEM is not set +# CONFIG_PCI_HOST_THUNDER_ECAM is not set +CONFIG_PCI_HYPERV_INTERFACE=y +# CONFIG_PCIE_MICROCHIP_HOST is not set + +# +# DesignWare PCI Core Support +# +# CONFIG_PCIE_DW_PLAT_HOST is not set +# CONFIG_PCI_HISI is not set +# CONFIG_PCIE_KIRIN is not set +# CONFIG_PCI_MESON is not set +# CONFIG_PCIE_AL is not set +# end of DesignWare PCI Core Support + +# +# Mobiveil PCIe Core Support +# +# end of Mobiveil PCIe Core Support + +# +# Cadence PCIe controllers support +# +# CONFIG_PCIE_CADENCE_PLAT_HOST is not set +# CONFIG_PCI_J721E_HOST is not set +# end of Cadence PCIe controllers support +# end of PCI controller drivers + +# +# PCI Endpoint +# +# CONFIG_PCI_ENDPOINT is not set +# end of PCI Endpoint + +# +# PCI switch controller drivers +# +# CONFIG_PCI_SW_SWITCHTEC is not set +# end of PCI switch controller drivers + +# CONFIG_CXL_BUS is not set +# CONFIG_PCCARD is not set +# CONFIG_RAPIDIO is not set + +# +# Generic Driver Options +# +CONFIG_UEVENT_HELPER=y +CONFIG_UEVENT_HELPER_PATH="" +CONFIG_DEVTMPFS=y +CONFIG_DEVTMPFS_MOUNT=y +CONFIG_DEVTMPFS_SAFE=y +CONFIG_STANDALONE=y +CONFIG_PREVENT_FIRMWARE_BUILD=y + +# +# Firmware loader +# +CONFIG_FW_LOADER=y +CONFIG_FW_LOADER_PAGED_BUF=y +CONFIG_FW_LOADER_SYSFS=y +CONFIG_EXTRA_FIRMWARE="" +# CONFIG_FW_LOADER_USER_HELPER is not set +# CONFIG_FW_LOADER_COMPRESS is not set +CONFIG_FW_UPLOAD=y +# end of Firmware loader + +CONFIG_ALLOW_DEV_COREDUMP=y +# CONFIG_DEBUG_DRIVER is not set +# CONFIG_DEBUG_DEVRES is not set +# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set +# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set +CONFIG_GENERIC_CPU_AUTOPROBE=y +CONFIG_GENERIC_CPU_VULNERABILITIES=y +CONFIG_SOC_BUS=y +CONFIG_DMA_SHARED_BUFFER=y +# CONFIG_DMA_FENCE_TRACE is not set +CONFIG_GENERIC_ARCH_TOPOLOGY=y +# end of Generic Driver Options + +# +# Bus devices +# +# CONFIG_BRCMSTB_GISB_ARB is not set +# CONFIG_VEXPRESS_CONFIG is not set +# CONFIG_MHI_BUS is not set +# CONFIG_MHI_BUS_EP is not set +# end of Bus devices + +CONFIG_CONNECTOR=y +CONFIG_PROC_EVENTS=y + +# 
+# Firmware Drivers +# + +# +# ARM System Control and Management Interface Protocol +# +# CONFIG_ARM_SCMI_PROTOCOL is not set +# end of ARM System Control and Management Interface Protocol + +CONFIG_FIRMWARE_MEMMAP=y +# CONFIG_ISCSI_IBFT is not set +# CONFIG_FW_CFG_SYSFS is not set +# CONFIG_SYSFB_SIMPLEFB is not set +# CONFIG_ARM_FFA_TRANSPORT is not set +# CONFIG_GOOGLE_FIRMWARE is not set + +# +# EFI (Extensible Firmware Interface) Support +# +CONFIG_EFI_ESRT=y +CONFIG_EFI_PARAMS_FROM_FDT=y +CONFIG_EFI_RUNTIME_WRAPPERS=y +CONFIG_EFI_GENERIC_STUB=y +# CONFIG_EFI_ZBOOT is not set +# CONFIG_EFI_ARMSTUB_DTB_LOADER is not set +CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y +# CONFIG_EFI_BOOTLOADER_CONTROL is not set +# CONFIG_EFI_CAPSULE_LOADER is not set +# CONFIG_EFI_TEST is not set +CONFIG_RESET_ATTACK_MITIGATION=y +# CONFIG_EFI_DISABLE_PCI_DMA is not set +CONFIG_EFI_EARLYCON=y +# CONFIG_EFI_CUSTOM_SSDT_OVERLAYS is not set +# CONFIG_EFI_DISABLE_RUNTIME is not set +CONFIG_EFI_COCO_SECRET=y +# end of EFI (Extensible Firmware Interface) Support + +CONFIG_ARM_PSCI_FW=y +CONFIG_HAVE_ARM_SMCCC=y +CONFIG_HAVE_ARM_SMCCC_DISCOVERY=y +CONFIG_ARM_SMCCC_SOC_ID=y + +# +# Tegra firmware driver +# +# end of Tegra firmware driver +# end of Firmware Drivers + +# CONFIG_GNSS is not set +# CONFIG_MTD is not set +CONFIG_DTC=y +CONFIG_OF=y +# CONFIG_OF_UNITTEST is not set +CONFIG_OF_FLATTREE=y +CONFIG_OF_EARLY_FLATTREE=y +CONFIG_OF_KOBJ=y +CONFIG_OF_ADDRESS=y +CONFIG_OF_IRQ=y +CONFIG_OF_RESERVED_MEM=y +# CONFIG_OF_OVERLAY is not set +# CONFIG_PARPORT is not set +CONFIG_PNP=y +# CONFIG_PNP_DEBUG_MESSAGES is not set + +# +# Protocols +# +CONFIG_PNPACPI=y +CONFIG_BLK_DEV=y +# CONFIG_BLK_DEV_NULL_BLK is not set +# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set +CONFIG_BLK_DEV_LOOP=y +CONFIG_BLK_DEV_LOOP_MIN_COUNT=8 +# CONFIG_BLK_DEV_DRBD is not set +# CONFIG_BLK_DEV_NBD is not set +CONFIG_BLK_DEV_RAM=y +CONFIG_BLK_DEV_RAM_COUNT=16 +CONFIG_BLK_DEV_RAM_SIZE=65536 +# CONFIG_CDROM_PKTCDVD is not set +# CONFIG_ATA_OVER_ETH is not set +CONFIG_VIRTIO_BLK=y +# CONFIG_BLK_DEV_RBD is not set +# CONFIG_BLK_DEV_UBLK is not set + +# +# NVME Support +# +# CONFIG_BLK_DEV_NVME is not set +# CONFIG_NVME_FC is not set +# CONFIG_NVME_TCP is not set +# end of NVME Support + +# +# Misc devices +# +# CONFIG_AD525X_DPOT is not set +# CONFIG_DUMMY_IRQ is not set +# CONFIG_PHANTOM is not set +# CONFIG_TIFM_CORE is not set +# CONFIG_ICS932S401 is not set +# CONFIG_ENCLOSURE_SERVICES is not set +# CONFIG_HP_ILO is not set +# CONFIG_APDS9802ALS is not set +# CONFIG_ISL29003 is not set +# CONFIG_ISL29020 is not set +# CONFIG_SENSORS_TSL2550 is not set +# CONFIG_SENSORS_BH1770 is not set +# CONFIG_SENSORS_APDS990X is not set +# CONFIG_HMC6352 is not set +# CONFIG_DS1682 is not set +# CONFIG_SRAM is not set +# CONFIG_DW_XDATA_PCIE is not set +# CONFIG_PCI_ENDPOINT_TEST is not set +# CONFIG_XILINX_SDFEC is not set +# CONFIG_OPEN_DICE is not set +CONFIG_VCPU_STALL_DETECTOR=y +# CONFIG_C2PORT is not set + +# +# EEPROM support +# +# CONFIG_EEPROM_AT24 is not set +# CONFIG_EEPROM_LEGACY is not set +# CONFIG_EEPROM_MAX6875 is not set +# CONFIG_EEPROM_93CX6 is not set +# CONFIG_EEPROM_IDT_89HPESX is not set +# CONFIG_EEPROM_EE1004 is not set +# end of EEPROM support + +# CONFIG_CB710_CORE is not set + +# +# Texas Instruments shared transport line discipline +# +# CONFIG_TI_ST is not set +# end of Texas Instruments shared transport line discipline + +# CONFIG_SENSORS_LIS3_I2C is not set +# CONFIG_ALTERA_STAPL is not set +# CONFIG_VMWARE_VMCI is not set 
+# CONFIG_GENWQE is not set +# CONFIG_ECHO is not set +# CONFIG_BCM_VK is not set +# CONFIG_MISC_ALCOR_PCI is not set +# CONFIG_MISC_RTSX_PCI is not set +# CONFIG_MISC_RTSX_USB is not set +# CONFIG_HABANA_AI is not set +# CONFIG_UACCE is not set +# CONFIG_PVPANIC is not set +# CONFIG_GP_PCI1XXXX is not set +# end of Misc devices + +# +# SCSI device support +# +CONFIG_SCSI_MOD=y +# CONFIG_RAID_ATTRS is not set +CONFIG_SCSI_COMMON=y +CONFIG_SCSI=y +CONFIG_SCSI_DMA=y +# CONFIG_SCSI_PROC_FS is not set + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +# CONFIG_CHR_DEV_ST is not set +# CONFIG_BLK_DEV_SR is not set +CONFIG_CHR_DEV_SG=y +CONFIG_BLK_DEV_BSG=y +# CONFIG_CHR_DEV_SCH is not set +# CONFIG_SCSI_CONSTANTS is not set +# CONFIG_SCSI_LOGGING is not set +# CONFIG_SCSI_SCAN_ASYNC is not set + +# +# SCSI Transports +# +# CONFIG_SCSI_SPI_ATTRS is not set +# CONFIG_SCSI_FC_ATTRS is not set +# CONFIG_SCSI_ISCSI_ATTRS is not set +# CONFIG_SCSI_SAS_ATTRS is not set +# CONFIG_SCSI_SAS_LIBSAS is not set +# CONFIG_SCSI_SRP_ATTRS is not set +# end of SCSI Transports + +CONFIG_SCSI_LOWLEVEL=y +# CONFIG_ISCSI_TCP is not set +# CONFIG_ISCSI_BOOT_SYSFS is not set +# CONFIG_SCSI_CXGB3_ISCSI is not set +# CONFIG_SCSI_CXGB4_ISCSI is not set +# CONFIG_SCSI_BNX2_ISCSI is not set +# CONFIG_BE2ISCSI is not set +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_HPSA is not set +# CONFIG_SCSI_3W_9XXX is not set +# CONFIG_SCSI_3W_SAS is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AACRAID is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC79XX is not set +# CONFIG_SCSI_AIC94XX is not set +# CONFIG_SCSI_MVSAS is not set +# CONFIG_SCSI_MVUMI is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_ARCMSR is not set +# CONFIG_SCSI_ESAS2R is not set +# CONFIG_MEGARAID_NEWGEN is not set +# CONFIG_MEGARAID_LEGACY is not set +# CONFIG_MEGARAID_SAS is not set +# CONFIG_SCSI_MPT3SAS is not set +# CONFIG_SCSI_MPT2SAS is not set +# CONFIG_SCSI_MPI3MR is not set +# CONFIG_SCSI_SMARTPQI is not set +# CONFIG_SCSI_HPTIOP is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_MYRB is not set +# CONFIG_SCSI_MYRS is not set +CONFIG_HYPERV_STORAGE=y +# CONFIG_SCSI_SNIC is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_FDOMAIN_PCI is not set +# CONFIG_SCSI_IPS is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_STEX is not set +# CONFIG_SCSI_SYM53C8XX_2 is not set +# CONFIG_SCSI_QLOGIC_1280 is not set +# CONFIG_SCSI_QLA_ISCSI is not set +# CONFIG_SCSI_DC395x is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_WD719X is not set +# CONFIG_SCSI_DEBUG is not set +# CONFIG_SCSI_PMCRAID is not set +# CONFIG_SCSI_PM8001 is not set +CONFIG_SCSI_VIRTIO=y +# CONFIG_SCSI_DH is not set +# end of SCSI device support + +# CONFIG_ATA is not set +CONFIG_MD=y +CONFIG_BLK_DEV_MD=y +# CONFIG_MD_AUTODETECT is not set +# CONFIG_MD_LINEAR is not set +CONFIG_MD_RAID0=y +CONFIG_MD_RAID1=y +CONFIG_MD_RAID10=y +CONFIG_MD_RAID456=y +# CONFIG_MD_MULTIPATH is not set +# CONFIG_MD_FAULTY is not set +# CONFIG_BCACHE is not set +CONFIG_BLK_DEV_DM_BUILTIN=y +CONFIG_BLK_DEV_DM=y +# CONFIG_DM_DEBUG is not set +CONFIG_DM_BUFIO=y +# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set +CONFIG_DM_BIO_PRISON=y +CONFIG_DM_PERSISTENT_DATA=y +# CONFIG_DM_UNSTRIPED is not set +CONFIG_DM_CRYPT=y +# CONFIG_DM_SNAPSHOT is not set +CONFIG_DM_THIN_PROVISIONING=y +# CONFIG_DM_CACHE is not set +# CONFIG_DM_WRITECACHE is not set +# CONFIG_DM_EBS is not set +# 
CONFIG_DM_ERA is not set +# CONFIG_DM_CLONE is not set +# CONFIG_DM_MIRROR is not set +CONFIG_DM_RAID=y +# CONFIG_DM_ZERO is not set +# CONFIG_DM_MULTIPATH is not set +# CONFIG_DM_DELAY is not set +# CONFIG_DM_DUST is not set +# CONFIG_DM_INIT is not set +# CONFIG_DM_UEVENT is not set +# CONFIG_DM_FLAKEY is not set +CONFIG_DM_VERITY=y +CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y +# CONFIG_DM_VERITY_FEC is not set +# CONFIG_DM_SWITCH is not set +# CONFIG_DM_LOG_WRITES is not set +# CONFIG_DM_INTEGRITY is not set +# CONFIG_DM_AUDIT is not set +# CONFIG_TARGET_CORE is not set +# CONFIG_FUSION is not set + +# +# IEEE 1394 (FireWire) support +# +# CONFIG_FIREWIRE is not set +# CONFIG_FIREWIRE_NOSY is not set +# end of IEEE 1394 (FireWire) support + +CONFIG_NETDEVICES=y +CONFIG_MII=y +CONFIG_NET_CORE=y +CONFIG_BONDING=y +CONFIG_DUMMY=y +CONFIG_WIREGUARD=y +# CONFIG_WIREGUARD_DEBUG is not set +# CONFIG_EQUALIZER is not set +# CONFIG_NET_FC is not set +# CONFIG_IFB is not set +CONFIG_NET_TEAM=y +# CONFIG_NET_TEAM_MODE_BROADCAST is not set +# CONFIG_NET_TEAM_MODE_ROUNDROBIN is not set +# CONFIG_NET_TEAM_MODE_RANDOM is not set +# CONFIG_NET_TEAM_MODE_ACTIVEBACKUP is not set +# CONFIG_NET_TEAM_MODE_LOADBALANCE is not set +CONFIG_MACVLAN=y +CONFIG_MACVTAP=y +CONFIG_IPVLAN_L3S=y +CONFIG_IPVLAN=y +CONFIG_IPVTAP=y +CONFIG_VXLAN=y +CONFIG_GENEVE=y +# CONFIG_BAREUDP is not set +# CONFIG_GTP is not set +# CONFIG_MACSEC is not set +# CONFIG_NETCONSOLE is not set +CONFIG_TUN=y +CONFIG_TAP=y +# CONFIG_TUN_VNET_CROSS_LE is not set +CONFIG_VETH=y +CONFIG_VIRTIO_NET=y +# CONFIG_NLMON is not set +# CONFIG_ARCNET is not set +CONFIG_ETHERNET=y +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_NET_VENDOR_ADAPTEC is not set +# CONFIG_NET_VENDOR_AGERE is not set +# CONFIG_NET_VENDOR_ALACRITECH is not set +# CONFIG_NET_VENDOR_ALTEON is not set +# CONFIG_ALTERA_TSE is not set +# CONFIG_NET_VENDOR_AMAZON is not set +# CONFIG_NET_VENDOR_AMD is not set +# CONFIG_NET_VENDOR_AQUANTIA is not set +# CONFIG_NET_VENDOR_ARC is not set +# CONFIG_NET_VENDOR_ASIX is not set +# CONFIG_NET_VENDOR_ATHEROS is not set +# CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_VENDOR_CADENCE is not set +# CONFIG_NET_VENDOR_CAVIUM is not set +# CONFIG_NET_VENDOR_CHELSIO is not set +# CONFIG_NET_VENDOR_CISCO is not set +# CONFIG_NET_VENDOR_CORTINA is not set +# CONFIG_NET_VENDOR_DAVICOM is not set +# CONFIG_DNET is not set +# CONFIG_NET_VENDOR_DEC is not set +# CONFIG_NET_VENDOR_DLINK is not set +# CONFIG_NET_VENDOR_EMULEX is not set +# CONFIG_NET_VENDOR_ENGLEDER is not set +# CONFIG_NET_VENDOR_EZCHIP is not set +# CONFIG_NET_VENDOR_FUNGIBLE is not set +# CONFIG_NET_VENDOR_GOOGLE is not set +# CONFIG_NET_VENDOR_HISILICON is not set +# CONFIG_NET_VENDOR_HUAWEI is not set +# CONFIG_NET_VENDOR_INTEL is not set +# CONFIG_NET_VENDOR_WANGXUN is not set +# CONFIG_JME is not set +# CONFIG_NET_VENDOR_LITEX is not set +# CONFIG_NET_VENDOR_MARVELL is not set +# CONFIG_NET_VENDOR_MELLANOX is not set +# CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROCHIP is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set +# CONFIG_NET_VENDOR_MICROSOFT is not set +# CONFIG_NET_VENDOR_MYRI is not set +# CONFIG_FEALNX is not set +# CONFIG_NET_VENDOR_NI is not set +# CONFIG_NET_VENDOR_NATSEMI is not set +# CONFIG_NET_VENDOR_NETERION is not set +# CONFIG_NET_VENDOR_NETRONOME is not set +# CONFIG_NET_VENDOR_NVIDIA is not set +# CONFIG_NET_VENDOR_OKI is not set +# CONFIG_ETHOC is not set +# CONFIG_NET_VENDOR_PACKET_ENGINES is not set +# CONFIG_NET_VENDOR_PENSANDO is not 
set +# CONFIG_NET_VENDOR_QLOGIC is not set +# CONFIG_NET_VENDOR_BROCADE is not set +# CONFIG_NET_VENDOR_QUALCOMM is not set +# CONFIG_NET_VENDOR_RDC is not set +# CONFIG_NET_VENDOR_REALTEK is not set +# CONFIG_NET_VENDOR_RENESAS is not set +# CONFIG_NET_VENDOR_ROCKER is not set +# CONFIG_NET_VENDOR_SAMSUNG is not set +# CONFIG_NET_VENDOR_SEEQ is not set +# CONFIG_NET_VENDOR_SILAN is not set +# CONFIG_NET_VENDOR_SIS is not set +# CONFIG_NET_VENDOR_SOLARFLARE is not set +# CONFIG_NET_VENDOR_SMSC is not set +# CONFIG_NET_VENDOR_SOCIONEXT is not set +# CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SUN is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set +# CONFIG_NET_VENDOR_TEHUTI is not set +# CONFIG_NET_VENDOR_TI is not set +# CONFIG_NET_VENDOR_VERTEXCOM is not set +# CONFIG_NET_VENDOR_VIA is not set +# CONFIG_NET_VENDOR_WIZNET is not set +# CONFIG_NET_VENDOR_XILINX is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_NET_SB1000 is not set +# CONFIG_PHYLIB is not set +# CONFIG_PSE_CONTROLLER is not set +# CONFIG_MDIO_DEVICE is not set + +# +# PCS device drivers +# +# end of PCS device drivers + +CONFIG_PPP=y +CONFIG_PPP_BSDCOMP=y +CONFIG_PPP_DEFLATE=y +CONFIG_PPP_FILTER=y +CONFIG_PPP_MPPE=y +CONFIG_PPP_MULTILINK=y +CONFIG_PPPOE=y +CONFIG_PPP_ASYNC=y +CONFIG_PPP_SYNC_TTY=y +# CONFIG_SLIP is not set +CONFIG_SLHC=y +CONFIG_USB_NET_DRIVERS=y +# CONFIG_USB_CATC is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_RTL8150 is not set +# CONFIG_USB_RTL8152 is not set +# CONFIG_USB_LAN78XX is not set +CONFIG_USB_USBNET=y +# CONFIG_USB_NET_AX8817X is not set +# CONFIG_USB_NET_AX88179_178A is not set +CONFIG_USB_NET_CDCETHER=y +# CONFIG_USB_NET_CDC_EEM is not set +CONFIG_USB_NET_CDC_NCM=y +# CONFIG_USB_NET_HUAWEI_CDC_NCM is not set +# CONFIG_USB_NET_CDC_MBIM is not set +# CONFIG_USB_NET_DM9601 is not set +# CONFIG_USB_NET_SR9700 is not set +# CONFIG_USB_NET_SR9800 is not set +# CONFIG_USB_NET_SMSC75XX is not set +# CONFIG_USB_NET_SMSC95XX is not set +# CONFIG_USB_NET_GL620A is not set +# CONFIG_USB_NET_NET1080 is not set +# CONFIG_USB_NET_PLUSB is not set +# CONFIG_USB_NET_MCS7830 is not set +# CONFIG_USB_NET_RNDIS_HOST is not set +# CONFIG_USB_NET_CDC_SUBSET is not set +# CONFIG_USB_NET_ZAURUS is not set +# CONFIG_USB_NET_CX82310_ETH is not set +# CONFIG_USB_NET_KALMIA is not set +# CONFIG_USB_NET_QMI_WWAN is not set +# CONFIG_USB_NET_INT51X1 is not set +# CONFIG_USB_IPHETH is not set +# CONFIG_USB_SIERRA_NET is not set +# CONFIG_USB_VL600 is not set +# CONFIG_USB_NET_CH9200 is not set +# CONFIG_USB_NET_AQC111 is not set +CONFIG_USB_RTL8153_ECM=y +# CONFIG_WLAN is not set +# CONFIG_WAN is not set + +# +# Wireless WAN +# +# CONFIG_WWAN is not set +# end of Wireless WAN + +# CONFIG_VMXNET3 is not set +# CONFIG_FUJITSU_ES is not set +CONFIG_HYPERV_NET=y +# CONFIG_NETDEVSIM is not set +CONFIG_NET_FAILOVER=y +# CONFIG_ISDN is not set + +# +# Input device support +# +CONFIG_INPUT=y +# CONFIG_INPUT_FF_MEMLESS is not set +# CONFIG_INPUT_SPARSEKMAP is not set +# CONFIG_INPUT_MATRIXKMAP is not set + +# +# Userland interfaces +# +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set +# CONFIG_INPUT_EVBUG is not set + +# +# Input Device Drivers +# +# CONFIG_INPUT_KEYBOARD is not set +# CONFIG_INPUT_MOUSE is not set +# CONFIG_INPUT_JOYSTICK is not set +# CONFIG_INPUT_TABLET is not set +# CONFIG_INPUT_TOUCHSCREEN is not set +# CONFIG_INPUT_MISC is not set +# CONFIG_RMI4_CORE is not set + +# +# 
Hardware I/O ports +# +CONFIG_SERIO=y +CONFIG_SERIO_SERPORT=y +# CONFIG_SERIO_AMBAKMI is not set +# CONFIG_SERIO_PCIPS2 is not set +# CONFIG_SERIO_LIBPS2 is not set +CONFIG_SERIO_RAW=y +# CONFIG_SERIO_ALTERA_PS2 is not set +# CONFIG_SERIO_PS2MULT is not set +# CONFIG_SERIO_ARC_PS2 is not set +# CONFIG_SERIO_APBPS2 is not set +CONFIG_HYPERV_KEYBOARD=y +# CONFIG_SERIO_GPIO_PS2 is not set +# CONFIG_USERIO is not set +# CONFIG_GAMEPORT is not set +# end of Hardware I/O ports +# end of Input device support + +# +# Character devices +# +CONFIG_TTY=y +CONFIG_VT=y +CONFIG_CONSOLE_TRANSLATIONS=y +CONFIG_VT_CONSOLE=y +CONFIG_HW_CONSOLE=y +# CONFIG_VT_HW_CONSOLE_BINDING is not set +CONFIG_UNIX98_PTYS=y +CONFIG_LEGACY_PTYS=y +CONFIG_LEGACY_PTY_COUNT=256 +# CONFIG_LDISC_AUTOLOAD is not set + +# +# Serial drivers +# +CONFIG_SERIAL_EARLYCON=y +CONFIG_SERIAL_8250=y +CONFIG_SERIAL_8250_DEPRECATED_OPTIONS=y +# CONFIG_SERIAL_8250_PNP is not set +# CONFIG_SERIAL_8250_16550A_VARIANTS is not set +# CONFIG_SERIAL_8250_FINTEK is not set +CONFIG_SERIAL_8250_CONSOLE=y +CONFIG_SERIAL_8250_PCI=y +# CONFIG_SERIAL_8250_EXAR is not set +CONFIG_SERIAL_8250_NR_UARTS=4 +CONFIG_SERIAL_8250_RUNTIME_UARTS=4 +# CONFIG_SERIAL_8250_EXTENDED is not set +CONFIG_SERIAL_8250_FSL=y +# CONFIG_SERIAL_8250_DW is not set +# CONFIG_SERIAL_8250_RT288X is not set +# CONFIG_SERIAL_8250_PERICOM is not set +# CONFIG_SERIAL_OF_PLATFORM is not set + +# +# Non-8250 serial port support +# +# CONFIG_SERIAL_AMBA_PL010 is not set +CONFIG_SERIAL_AMBA_PL011=y +CONFIG_SERIAL_AMBA_PL011_CONSOLE=y +# CONFIG_SERIAL_EARLYCON_ARM_SEMIHOST is not set +# CONFIG_SERIAL_UARTLITE is not set +CONFIG_SERIAL_CORE=y +CONFIG_SERIAL_CORE_CONSOLE=y +# CONFIG_SERIAL_JSM is not set +# CONFIG_SERIAL_SIFIVE is not set +# CONFIG_SERIAL_SCCNXP is not set +# CONFIG_SERIAL_SC16IS7XX is not set +# CONFIG_SERIAL_ALTERA_JTAGUART is not set +# CONFIG_SERIAL_ALTERA_UART is not set +# CONFIG_SERIAL_XILINX_PS_UART is not set +# CONFIG_SERIAL_ARC is not set +# CONFIG_SERIAL_RP2 is not set +# CONFIG_SERIAL_FSL_LPUART is not set +# CONFIG_SERIAL_FSL_LINFLEXUART is not set +# CONFIG_SERIAL_CONEXANT_DIGICOLOR is not set +# CONFIG_SERIAL_SPRD is not set +# end of Serial drivers + +CONFIG_SERIAL_MCTRL_GPIO=y +# CONFIG_SERIAL_NONSTANDARD is not set +# CONFIG_N_GSM is not set +# CONFIG_NOZOMI is not set +# CONFIG_NULL_TTY is not set +CONFIG_HVC_DRIVER=y +# CONFIG_HVC_DCC is not set +# CONFIG_SERIAL_DEV_BUS is not set +# CONFIG_TTY_PRINTK is not set +CONFIG_VIRTIO_CONSOLE=y +# CONFIG_IPMI_HANDLER is not set +# CONFIG_HW_RANDOM is not set +# CONFIG_APPLICOM is not set +CONFIG_DEVMEM=y +# CONFIG_DEVPORT is not set +# CONFIG_TCG_TPM is not set +# CONFIG_XILLYBUS is not set +# CONFIG_XILLYUSB is not set +# CONFIG_RANDOM_TRUST_CPU is not set +# CONFIG_RANDOM_TRUST_BOOTLOADER is not set +# end of Character devices + +# +# I2C support +# +CONFIG_I2C=y +# CONFIG_ACPI_I2C_OPREGION is not set +CONFIG_I2C_BOARDINFO=y +# CONFIG_I2C_COMPAT is not set +# CONFIG_I2C_CHARDEV is not set +# CONFIG_I2C_MUX is not set +# CONFIG_I2C_HELPER_AUTO is not set +# CONFIG_I2C_SMBUS is not set + +# +# I2C Algorithms +# +CONFIG_I2C_ALGOBIT=y +# CONFIG_I2C_ALGOPCF is not set +# CONFIG_I2C_ALGOPCA is not set +# end of I2C Algorithms + +# +# I2C Hardware Bus support +# + +# +# PC SMBus host controller drivers +# +# CONFIG_I2C_ALI1535 is not set +# CONFIG_I2C_ALI1563 is not set +# CONFIG_I2C_ALI15X3 is not set +# CONFIG_I2C_AMD756 is not set +# CONFIG_I2C_AMD8111 is not set +# CONFIG_I2C_AMD_MP2 is not set +# CONFIG_I2C_I801 is 
not set +# CONFIG_I2C_ISCH is not set +# CONFIG_I2C_PIIX4 is not set +# CONFIG_I2C_NFORCE2 is not set +# CONFIG_I2C_NVIDIA_GPU is not set +# CONFIG_I2C_SIS5595 is not set +# CONFIG_I2C_SIS630 is not set +# CONFIG_I2C_SIS96X is not set +# CONFIG_I2C_VIA is not set +# CONFIG_I2C_VIAPRO is not set + +# +# ACPI drivers +# +# CONFIG_I2C_SCMI is not set + +# +# I2C system bus drivers (mostly embedded / system-on-chip) +# +# CONFIG_I2C_CADENCE is not set +# CONFIG_I2C_CBUS_GPIO is not set +# CONFIG_I2C_DESIGNWARE_PLATFORM is not set +# CONFIG_I2C_DESIGNWARE_PCI is not set +# CONFIG_I2C_EMEV2 is not set +# CONFIG_I2C_GPIO is not set +# CONFIG_I2C_HISI is not set +# CONFIG_I2C_NOMADIK is not set +# CONFIG_I2C_OCORES is not set +# CONFIG_I2C_PCA_PLATFORM is not set +# CONFIG_I2C_RK3X is not set +# CONFIG_I2C_SIMTEC is not set +# CONFIG_I2C_VERSATILE is not set +# CONFIG_I2C_THUNDERX is not set +# CONFIG_I2C_XILINX is not set + +# +# External I2C/SMBus adapter drivers +# +# CONFIG_I2C_DIOLAN_U2C is not set +# CONFIG_I2C_CP2615 is not set +# CONFIG_I2C_PCI1XXXX is not set +# CONFIG_I2C_ROBOTFUZZ_OSIF is not set +# CONFIG_I2C_TAOS_EVM is not set +# CONFIG_I2C_TINY_USB is not set + +# +# Other I2C/SMBus bus drivers +# +# CONFIG_I2C_VIRTIO is not set +# end of I2C Hardware Bus support + +# CONFIG_I2C_STUB is not set +# CONFIG_I2C_SLAVE is not set +# CONFIG_I2C_DEBUG_CORE is not set +# CONFIG_I2C_DEBUG_ALGO is not set +# CONFIG_I2C_DEBUG_BUS is not set +# end of I2C support + +# CONFIG_I3C is not set +# CONFIG_SPI is not set +# CONFIG_SPMI is not set +# CONFIG_HSI is not set +CONFIG_PPS=y +# CONFIG_PPS_DEBUG is not set + +# +# PPS clients support +# +# CONFIG_PPS_CLIENT_KTIMER is not set +# CONFIG_PPS_CLIENT_LDISC is not set +# CONFIG_PPS_CLIENT_GPIO is not set + +# +# PPS generators support +# + +# +# PTP clock support +# +CONFIG_PTP_1588_CLOCK=y +CONFIG_PTP_1588_CLOCK_OPTIONAL=y + +# +# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks. 
+# +# CONFIG_PTP_1588_CLOCK_KVM is not set +# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set +# CONFIG_PTP_1588_CLOCK_IDTCM is not set +# end of PTP clock support + +# CONFIG_PINCTRL is not set +CONFIG_GPIOLIB=y +CONFIG_GPIOLIB_FASTPATH_LIMIT=512 +CONFIG_OF_GPIO=y +CONFIG_GPIO_ACPI=y +# CONFIG_DEBUG_GPIO is not set +# CONFIG_GPIO_SYSFS is not set +# CONFIG_GPIO_CDEV is not set + +# +# Memory mapped GPIO drivers +# +# CONFIG_GPIO_74XX_MMIO is not set +# CONFIG_GPIO_ALTERA is not set +# CONFIG_GPIO_AMDPT is not set +# CONFIG_GPIO_CADENCE is not set +# CONFIG_GPIO_DWAPB is not set +# CONFIG_GPIO_FTGPIO010 is not set +# CONFIG_GPIO_GENERIC_PLATFORM is not set +# CONFIG_GPIO_GRGPIO is not set +# CONFIG_GPIO_HISI is not set +# CONFIG_GPIO_HLWD is not set +# CONFIG_GPIO_MB86S7X is not set +# CONFIG_GPIO_PL061 is not set +# CONFIG_GPIO_SIFIVE is not set +# CONFIG_GPIO_XGENE is not set +# CONFIG_GPIO_XILINX is not set +# CONFIG_GPIO_AMD_FCH is not set +# end of Memory mapped GPIO drivers + +# +# I2C GPIO expanders +# +# CONFIG_GPIO_ADNP is not set +# CONFIG_GPIO_GW_PLD is not set +# CONFIG_GPIO_MAX7300 is not set +# CONFIG_GPIO_MAX732X is not set +# CONFIG_GPIO_PCA953X is not set +# CONFIG_GPIO_PCA9570 is not set +# CONFIG_GPIO_PCF857X is not set +# CONFIG_GPIO_TPIC2810 is not set +# end of I2C GPIO expanders + +# +# MFD GPIO expanders +# +# end of MFD GPIO expanders + +# +# PCI GPIO expanders +# +# CONFIG_GPIO_BT8XX is not set +# CONFIG_GPIO_PCI_IDIO_16 is not set +# CONFIG_GPIO_PCIE_IDIO_24 is not set +# CONFIG_GPIO_RDC321X is not set +# end of PCI GPIO expanders + +# +# USB GPIO expanders +# +# end of USB GPIO expanders + +# +# Virtual GPIO drivers +# +# CONFIG_GPIO_AGGREGATOR is not set +# CONFIG_GPIO_MOCKUP is not set +# CONFIG_GPIO_VIRTIO is not set +# CONFIG_GPIO_SIM is not set +# end of Virtual GPIO drivers + +# CONFIG_W1 is not set +CONFIG_POWER_RESET=y +# CONFIG_POWER_RESET_GPIO is not set +# CONFIG_POWER_RESET_GPIO_RESTART is not set +# CONFIG_POWER_RESET_LTC2952 is not set +# CONFIG_POWER_RESET_RESTART is not set +# CONFIG_POWER_RESET_XGENE is not set +# CONFIG_POWER_RESET_SYSCON is not set +# CONFIG_POWER_RESET_SYSCON_POWEROFF is not set +# CONFIG_NVMEM_REBOOT_MODE is not set +CONFIG_POWER_SUPPLY=y +# CONFIG_POWER_SUPPLY_DEBUG is not set +# CONFIG_PDA_POWER is not set +# CONFIG_IP5XXX_POWER is not set +# CONFIG_TEST_POWER is not set +# CONFIG_CHARGER_ADP5061 is not set +# CONFIG_BATTERY_CW2015 is not set +# CONFIG_BATTERY_DS2780 is not set +# CONFIG_BATTERY_DS2781 is not set +# CONFIG_BATTERY_DS2782 is not set +# CONFIG_BATTERY_SAMSUNG_SDI is not set +# CONFIG_BATTERY_SBS is not set +# CONFIG_CHARGER_SBS is not set +# CONFIG_BATTERY_BQ27XXX is not set +# CONFIG_BATTERY_MAX17040 is not set +# CONFIG_BATTERY_MAX17042 is not set +# CONFIG_CHARGER_MAX8903 is not set +# CONFIG_CHARGER_LP8727 is not set +# CONFIG_CHARGER_GPIO is not set +# CONFIG_CHARGER_LT3651 is not set +# CONFIG_CHARGER_LTC4162L is not set +# CONFIG_CHARGER_DETECTOR_MAX14656 is not set +# CONFIG_CHARGER_MAX77976 is not set +# CONFIG_CHARGER_BQ2415X is not set +# CONFIG_CHARGER_BQ24257 is not set +# CONFIG_CHARGER_BQ24735 is not set +# CONFIG_CHARGER_BQ2515X is not set +# CONFIG_CHARGER_BQ25890 is not set +# CONFIG_CHARGER_BQ25980 is not set +# CONFIG_CHARGER_BQ256XX is not set +# CONFIG_BATTERY_GAUGE_LTC2941 is not set +# CONFIG_BATTERY_GOLDFISH is not set +# CONFIG_BATTERY_RT5033 is not set +# CONFIG_CHARGER_RT9455 is not set +# CONFIG_CHARGER_BD99954 is not set +# CONFIG_BATTERY_UG3105 is not set +# CONFIG_HWMON is not set 
+# CONFIG_THERMAL is not set +# CONFIG_WATCHDOG is not set +CONFIG_SSB_POSSIBLE=y +# CONFIG_SSB is not set +CONFIG_BCMA_POSSIBLE=y +# CONFIG_BCMA is not set + +# +# Multifunction device drivers +# +# CONFIG_MFD_ACT8945A is not set +# CONFIG_MFD_AS3711 is not set +# CONFIG_MFD_AS3722 is not set +# CONFIG_PMIC_ADP5520 is not set +# CONFIG_MFD_AAT2870_CORE is not set +# CONFIG_MFD_ATMEL_FLEXCOM is not set +# CONFIG_MFD_ATMEL_HLCDC is not set +# CONFIG_MFD_BCM590XX is not set +# CONFIG_MFD_BD9571MWV is not set +# CONFIG_MFD_AXP20X_I2C is not set +# CONFIG_MFD_MADERA is not set +# CONFIG_PMIC_DA903X is not set +# CONFIG_MFD_DA9052_I2C is not set +# CONFIG_MFD_DA9055 is not set +# CONFIG_MFD_DA9062 is not set +# CONFIG_MFD_DA9063 is not set +# CONFIG_MFD_DA9150 is not set +# CONFIG_MFD_DLN2 is not set +# CONFIG_MFD_GATEWORKS_GSC is not set +# CONFIG_MFD_MC13XXX_I2C is not set +# CONFIG_MFD_MP2629 is not set +# CONFIG_MFD_HI6421_PMIC is not set +# CONFIG_HTC_PASIC3 is not set +# CONFIG_HTC_I2CPLD is not set +# CONFIG_LPC_ICH is not set +# CONFIG_LPC_SCH is not set +# CONFIG_MFD_IQS62X is not set +# CONFIG_MFD_JANZ_CMODIO is not set +# CONFIG_MFD_KEMPLD is not set +# CONFIG_MFD_88PM800 is not set +# CONFIG_MFD_88PM805 is not set +# CONFIG_MFD_88PM860X is not set +# CONFIG_MFD_MAX14577 is not set +# CONFIG_MFD_MAX77620 is not set +# CONFIG_MFD_MAX77650 is not set +# CONFIG_MFD_MAX77686 is not set +# CONFIG_MFD_MAX77693 is not set +# CONFIG_MFD_MAX77714 is not set +# CONFIG_MFD_MAX77843 is not set +# CONFIG_MFD_MAX8907 is not set +# CONFIG_MFD_MAX8925 is not set +# CONFIG_MFD_MAX8997 is not set +# CONFIG_MFD_MAX8998 is not set +# CONFIG_MFD_MT6360 is not set +# CONFIG_MFD_MT6370 is not set +# CONFIG_MFD_MT6397 is not set +# CONFIG_MFD_MENF21BMC is not set +# CONFIG_MFD_VIPERBOARD is not set +# CONFIG_MFD_NTXEC is not set +# CONFIG_MFD_RETU is not set +# CONFIG_MFD_PCF50633 is not set +# CONFIG_MFD_SY7636A is not set +# CONFIG_MFD_RDC321X is not set +# CONFIG_MFD_RT4831 is not set +# CONFIG_MFD_RT5033 is not set +# CONFIG_MFD_RT5120 is not set +# CONFIG_MFD_RC5T583 is not set +# CONFIG_MFD_RK808 is not set +# CONFIG_MFD_RN5T618 is not set +# CONFIG_MFD_SEC_CORE is not set +# CONFIG_MFD_SI476X_CORE is not set +# CONFIG_MFD_SM501 is not set +# CONFIG_MFD_SKY81452 is not set +# CONFIG_MFD_STMPE is not set +# CONFIG_MFD_SYSCON is not set +# CONFIG_MFD_TI_AM335X_TSCADC is not set +# CONFIG_MFD_LP3943 is not set +# CONFIG_MFD_LP8788 is not set +# CONFIG_MFD_TI_LMU is not set +# CONFIG_MFD_PALMAS is not set +# CONFIG_TPS6105X is not set +# CONFIG_TPS65010 is not set +# CONFIG_TPS6507X is not set +# CONFIG_MFD_TPS65086 is not set +# CONFIG_MFD_TPS65090 is not set +# CONFIG_MFD_TPS65217 is not set +# CONFIG_MFD_TI_LP873X is not set +# CONFIG_MFD_TI_LP87565 is not set +# CONFIG_MFD_TPS65218 is not set +# CONFIG_MFD_TPS6586X is not set +# CONFIG_MFD_TPS65910 is not set +# CONFIG_MFD_TPS65912_I2C is not set +# CONFIG_TWL4030_CORE is not set +# CONFIG_TWL6040_CORE is not set +# CONFIG_MFD_WL1273_CORE is not set +# CONFIG_MFD_LM3533 is not set +# CONFIG_MFD_TC3589X is not set +# CONFIG_MFD_TQMX86 is not set +# CONFIG_MFD_VX855 is not set +# CONFIG_MFD_LOCHNAGAR is not set +# CONFIG_MFD_ARIZONA_I2C is not set +# CONFIG_MFD_WM8400 is not set +# CONFIG_MFD_WM831X_I2C is not set +# CONFIG_MFD_WM8350_I2C is not set +# CONFIG_MFD_WM8994 is not set +# CONFIG_MFD_ROHM_BD718XX is not set +# CONFIG_MFD_ROHM_BD71828 is not set +# CONFIG_MFD_ROHM_BD957XMUF is not set +# CONFIG_MFD_STPMIC1 is not set +# CONFIG_MFD_STMFX is not 
set +# CONFIG_MFD_ATC260X_I2C is not set +# CONFIG_MFD_QCOM_PM8008 is not set +# CONFIG_MFD_RSMU_I2C is not set +# end of Multifunction device drivers + +# CONFIG_REGULATOR is not set +# CONFIG_RC_CORE is not set + +# +# CEC support +# +# CONFIG_MEDIA_CEC_SUPPORT is not set +# end of CEC support + +# CONFIG_MEDIA_SUPPORT is not set + +# +# Graphics support +# +CONFIG_DRM=y +# CONFIG_DRM_DEBUG_MM is not set +# CONFIG_DRM_DEBUG_MODESET_LOCK is not set +# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set +CONFIG_DRM_GEM_SHMEM_HELPER=y + +# +# ARM devices +# +# CONFIG_DRM_HDLCD is not set +# CONFIG_DRM_MALI_DISPLAY is not set +# CONFIG_DRM_KOMEDA is not set +# end of ARM devices + +# CONFIG_DRM_RADEON is not set +# CONFIG_DRM_AMDGPU is not set +# CONFIG_DRM_NOUVEAU is not set +CONFIG_DRM_VGEM=y +# CONFIG_DRM_VKMS is not set +# CONFIG_DRM_VMWGFX is not set +# CONFIG_DRM_UDL is not set +# CONFIG_DRM_AST is not set +# CONFIG_DRM_MGAG200 is not set +# CONFIG_DRM_RCAR_DW_HDMI is not set +# CONFIG_DRM_RCAR_USE_LVDS is not set +# CONFIG_DRM_RCAR_USE_MIPI_DSI is not set +# CONFIG_DRM_QXL is not set +# CONFIG_DRM_VIRTIO_GPU is not set +CONFIG_DRM_PANEL=y + +# +# Display Panels +# +# CONFIG_DRM_PANEL_SAMSUNG_S6E88A0_AMS452EF01 is not set +# CONFIG_DRM_PANEL_SAMSUNG_S6E8AA0 is not set +# end of Display Panels + +CONFIG_DRM_BRIDGE=y +CONFIG_DRM_PANEL_BRIDGE=y + +# +# Display Interface Bridges +# +# CONFIG_DRM_CDNS_DSI is not set +# CONFIG_DRM_CHIPONE_ICN6211 is not set +# CONFIG_DRM_CHRONTEL_CH7033 is not set +# CONFIG_DRM_DISPLAY_CONNECTOR is not set +# CONFIG_DRM_ITE_IT6505 is not set +# CONFIG_DRM_LONTIUM_LT8912B is not set +# CONFIG_DRM_LONTIUM_LT9211 is not set +# CONFIG_DRM_LONTIUM_LT9611 is not set +# CONFIG_DRM_LONTIUM_LT9611UXC is not set +# CONFIG_DRM_ITE_IT66121 is not set +# CONFIG_DRM_LVDS_CODEC is not set +# CONFIG_DRM_MEGACHIPS_STDPXXXX_GE_B850V3_FW is not set +# CONFIG_DRM_NWL_MIPI_DSI is not set +# CONFIG_DRM_NXP_PTN3460 is not set +# CONFIG_DRM_PARADE_PS8622 is not set +# CONFIG_DRM_PARADE_PS8640 is not set +# CONFIG_DRM_SIL_SII8620 is not set +# CONFIG_DRM_SII902X is not set +# CONFIG_DRM_SII9234 is not set +# CONFIG_DRM_SIMPLE_BRIDGE is not set +# CONFIG_DRM_THINE_THC63LVD1024 is not set +# CONFIG_DRM_TOSHIBA_TC358762 is not set +# CONFIG_DRM_TOSHIBA_TC358764 is not set +# CONFIG_DRM_TOSHIBA_TC358767 is not set +# CONFIG_DRM_TOSHIBA_TC358768 is not set +# CONFIG_DRM_TOSHIBA_TC358775 is not set +# CONFIG_DRM_TI_DLPC3433 is not set +# CONFIG_DRM_TI_TFP410 is not set +# CONFIG_DRM_TI_SN65DSI83 is not set +# CONFIG_DRM_TI_SN65DSI86 is not set +# CONFIG_DRM_TI_TPD12S015 is not set +# CONFIG_DRM_ANALOGIX_ANX6345 is not set +# CONFIG_DRM_ANALOGIX_ANX78XX is not set +# CONFIG_DRM_ANALOGIX_ANX7625 is not set +# CONFIG_DRM_I2C_ADV7511 is not set +# CONFIG_DRM_CDNS_MHDP8546 is not set +# end of Display Interface Bridges + +# CONFIG_DRM_ETNAVIV is not set +# CONFIG_DRM_HISI_HIBMC is not set +# CONFIG_DRM_HISI_KIRIN is not set +# CONFIG_DRM_LOGICVC is not set +# CONFIG_DRM_ARCPGU is not set +# CONFIG_DRM_BOCHS is not set +# CONFIG_DRM_CIRRUS_QEMU is not set +# CONFIG_DRM_GM12U320 is not set +# CONFIG_DRM_SIMPLEDRM is not set +# CONFIG_DRM_PL111 is not set +# CONFIG_DRM_LIMA is not set +# CONFIG_DRM_PANFROST is not set +# CONFIG_DRM_TIDSS is not set +# CONFIG_DRM_GUD is not set +# CONFIG_DRM_SSD130X is not set +# CONFIG_DRM_HYPERV is not set +# CONFIG_DRM_LEGACY is not set +CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y +CONFIG_DRM_NOMODESET=y + +# +# Frame buffer Devices +# +CONFIG_FB_CMDLINE=y +# CONFIG_FB is not 
set +# end of Frame buffer Devices + +# +# Backlight & LCD device support +# +# CONFIG_LCD_CLASS_DEVICE is not set +# CONFIG_BACKLIGHT_CLASS_DEVICE is not set +# end of Backlight & LCD device support + +CONFIG_HDMI=y + +# +# Console display driver support +# +CONFIG_DUMMY_CONSOLE=y +CONFIG_DUMMY_CONSOLE_COLUMNS=80 +CONFIG_DUMMY_CONSOLE_ROWS=25 +# end of Console display driver support +# end of Graphics support + +# CONFIG_SOUND is not set + +# +# HID support +# +CONFIG_HID=y +# CONFIG_HID_BATTERY_STRENGTH is not set +# CONFIG_HIDRAW is not set +# CONFIG_UHID is not set +CONFIG_HID_GENERIC=y + +# +# Special HID drivers +# +# CONFIG_HID_A4TECH is not set +# CONFIG_HID_ACCUTOUCH is not set +# CONFIG_HID_ACRUX is not set +# CONFIG_HID_APPLEIR is not set +# CONFIG_HID_AUREAL is not set +# CONFIG_HID_BELKIN is not set +# CONFIG_HID_BETOP_FF is not set +# CONFIG_HID_CHERRY is not set +# CONFIG_HID_CHICONY is not set +# CONFIG_HID_COUGAR is not set +# CONFIG_HID_MACALLY is not set +# CONFIG_HID_CMEDIA is not set +# CONFIG_HID_CREATIVE_SB0540 is not set +# CONFIG_HID_CYPRESS is not set +# CONFIG_HID_DRAGONRISE is not set +# CONFIG_HID_EMS_FF is not set +# CONFIG_HID_ELECOM is not set +# CONFIG_HID_ELO is not set +# CONFIG_HID_EZKEY is not set +# CONFIG_HID_GEMBIRD is not set +# CONFIG_HID_GFRM is not set +# CONFIG_HID_GLORIOUS is not set +# CONFIG_HID_HOLTEK is not set +# CONFIG_HID_VIVALDI is not set +# CONFIG_HID_KEYTOUCH is not set +# CONFIG_HID_KYE is not set +# CONFIG_HID_UCLOGIC is not set +# CONFIG_HID_WALTOP is not set +# CONFIG_HID_VIEWSONIC is not set +# CONFIG_HID_VRC2 is not set +# CONFIG_HID_XIAOMI is not set +# CONFIG_HID_GYRATION is not set +# CONFIG_HID_ICADE is not set +# CONFIG_HID_ITE is not set +# CONFIG_HID_JABRA is not set +# CONFIG_HID_TWINHAN is not set +# CONFIG_HID_KENSINGTON is not set +# CONFIG_HID_LCPOWER is not set +# CONFIG_HID_LENOVO is not set +# CONFIG_HID_LETSKETCH is not set +# CONFIG_HID_MAGICMOUSE is not set +# CONFIG_HID_MALTRON is not set +# CONFIG_HID_MAYFLASH is not set +# CONFIG_HID_MEGAWORLD_FF is not set +# CONFIG_HID_REDRAGON is not set +# CONFIG_HID_MICROSOFT is not set +# CONFIG_HID_MONTEREY is not set +# CONFIG_HID_MULTITOUCH is not set +# CONFIG_HID_NTI is not set +# CONFIG_HID_NTRIG is not set +# CONFIG_HID_ORTEK is not set +# CONFIG_HID_PANTHERLORD is not set +# CONFIG_HID_PENMOUNT is not set +# CONFIG_HID_PETALYNX is not set +# CONFIG_HID_PICOLCD is not set +# CONFIG_HID_PLANTRONICS is not set +# CONFIG_HID_PXRC is not set +# CONFIG_HID_RAZER is not set +# CONFIG_HID_PRIMAX is not set +# CONFIG_HID_RETRODE is not set +# CONFIG_HID_ROCCAT is not set +# CONFIG_HID_SAITEK is not set +# CONFIG_HID_SAMSUNG is not set +# CONFIG_HID_SEMITEK is not set +# CONFIG_HID_SIGMAMICRO is not set +# CONFIG_HID_SPEEDLINK is not set +# CONFIG_HID_STEAM is not set +# CONFIG_HID_STEELSERIES is not set +# CONFIG_HID_SUNPLUS is not set +# CONFIG_HID_RMI is not set +# CONFIG_HID_GREENASIA is not set +# CONFIG_HID_HYPERV_MOUSE is not set +# CONFIG_HID_SMARTJOYPLUS is not set +# CONFIG_HID_TIVO is not set +# CONFIG_HID_TOPSEED is not set +# CONFIG_HID_TOPRE is not set +# CONFIG_HID_THRUSTMASTER is not set +# CONFIG_HID_UDRAW_PS3 is not set +# CONFIG_HID_WACOM is not set +# CONFIG_HID_XINMO is not set +# CONFIG_HID_ZEROPLUS is not set +# CONFIG_HID_ZYDACRON is not set +# CONFIG_HID_SENSOR_HUB is not set +# CONFIG_HID_ALPS is not set +# CONFIG_HID_MCP2221 is not set +# end of Special HID drivers + +# +# USB HID support +# +CONFIG_USB_HID=y +# CONFIG_HID_PID is not set +# 
CONFIG_USB_HIDDEV is not set +# end of USB HID support + +# +# I2C HID support +# +# CONFIG_I2C_HID_ACPI is not set +# CONFIG_I2C_HID_OF is not set +# CONFIG_I2C_HID_OF_ELAN is not set +# CONFIG_I2C_HID_OF_GOODIX is not set +# end of I2C HID support +# end of HID support + +CONFIG_USB_OHCI_LITTLE_ENDIAN=y +CONFIG_USB_SUPPORT=y +CONFIG_USB_COMMON=y +# CONFIG_USB_ULPI_BUS is not set +# CONFIG_USB_CONN_GPIO is not set +CONFIG_USB_ARCH_HAS_HCD=y +CONFIG_USB=y +# CONFIG_USB_PCI is not set +CONFIG_USB_ANNOUNCE_NEW_DEVICES=y + +# +# Miscellaneous USB options +# +CONFIG_USB_DEFAULT_PERSIST=y +# CONFIG_USB_FEW_INIT_RETRIES is not set +# CONFIG_USB_DYNAMIC_MINORS is not set +# CONFIG_USB_OTG is not set +# CONFIG_USB_OTG_PRODUCTLIST is not set +# CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB is not set +CONFIG_USB_AUTOSUSPEND_DELAY=2 +# CONFIG_USB_MON is not set + +# +# USB Host Controller Drivers +# +# CONFIG_USB_C67X00_HCD is not set +# CONFIG_USB_XHCI_HCD is not set +# CONFIG_USB_EHCI_HCD is not set +# CONFIG_USB_OXU210HP_HCD is not set +# CONFIG_USB_ISP116X_HCD is not set +# CONFIG_USB_FOTG210_HCD is not set +# CONFIG_USB_OHCI_HCD is not set +# CONFIG_USB_SL811_HCD is not set +# CONFIG_USB_R8A66597_HCD is not set +# CONFIG_USB_HCD_TEST_MODE is not set + +# +# USB Device Class drivers +# +CONFIG_USB_ACM=y +# CONFIG_USB_PRINTER is not set +# CONFIG_USB_WDM is not set +# CONFIG_USB_TMC is not set + +# +# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may +# + +# +# also be needed; see USB_STORAGE Help for more info +# +# CONFIG_USB_STORAGE is not set + +# +# USB Imaging devices +# +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_MICROTEK is not set +CONFIG_USBIP_CORE=y +CONFIG_USBIP_VHCI_HCD=y +CONFIG_USBIP_VHCI_HC_PORTS=8 +CONFIG_USBIP_VHCI_NR_HCS=1 +# CONFIG_USBIP_HOST is not set +# CONFIG_USBIP_DEBUG is not set +# CONFIG_USB_CDNS_SUPPORT is not set +# CONFIG_USB_MUSB_HDRC is not set +# CONFIG_USB_DWC3 is not set +# CONFIG_USB_DWC2 is not set +# CONFIG_USB_ISP1760 is not set + +# +# USB port drivers +# +CONFIG_USB_SERIAL=y +# CONFIG_USB_SERIAL_CONSOLE is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_SIMPLE is not set +# CONFIG_USB_SERIAL_AIRCABLE is not set +# CONFIG_USB_SERIAL_ARK3116 is not set +# CONFIG_USB_SERIAL_BELKIN is not set +CONFIG_USB_SERIAL_CH341=y +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +CONFIG_USB_SERIAL_CP210X=y +# CONFIG_USB_SERIAL_CYPRESS_M8 is not set +# CONFIG_USB_SERIAL_EMPEG is not set +CONFIG_USB_SERIAL_FTDI_SIO=y +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IPAQ is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_EDGEPORT_TI is not set +# CONFIG_USB_SERIAL_F81232 is not set +# CONFIG_USB_SERIAL_F8153X is not set +# CONFIG_USB_SERIAL_GARMIN is not set +# CONFIG_USB_SERIAL_IPW is not set +# CONFIG_USB_SERIAL_IUU is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KLSI is not set +# CONFIG_USB_SERIAL_KOBIL_SCT is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_METRO is not set +# CONFIG_USB_SERIAL_MOS7720 is not set +# CONFIG_USB_SERIAL_MOS7840 is not set +# CONFIG_USB_SERIAL_MXUPORT is not set +# CONFIG_USB_SERIAL_NAVMAN is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_OTI6858 is not set +# CONFIG_USB_SERIAL_QCAUX is not set +# CONFIG_USB_SERIAL_QUALCOMM is not set +# CONFIG_USB_SERIAL_SPCP8X5 is not set +# CONFIG_USB_SERIAL_SAFE is not set 
+# CONFIG_USB_SERIAL_SIERRAWIRELESS is not set +# CONFIG_USB_SERIAL_SYMBOL is not set +# CONFIG_USB_SERIAL_TI is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_OPTION is not set +# CONFIG_USB_SERIAL_OMNINET is not set +# CONFIG_USB_SERIAL_OPTICON is not set +# CONFIG_USB_SERIAL_XSENS_MT is not set +# CONFIG_USB_SERIAL_WISHBONE is not set +# CONFIG_USB_SERIAL_SSU100 is not set +# CONFIG_USB_SERIAL_QT2 is not set +# CONFIG_USB_SERIAL_UPD78F0730 is not set +# CONFIG_USB_SERIAL_XR is not set +# CONFIG_USB_SERIAL_DEBUG is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_EMI62 is not set +# CONFIG_USB_EMI26 is not set +# CONFIG_USB_ADUTUX is not set +# CONFIG_USB_SEVSEG is not set +# CONFIG_USB_LEGOTOWER is not set +# CONFIG_USB_LCD is not set +# CONFIG_USB_CYPRESS_CY7C63 is not set +# CONFIG_USB_CYTHERM is not set +# CONFIG_USB_IDMOUSE is not set +# CONFIG_USB_FTDI_ELAN is not set +# CONFIG_USB_APPLEDISPLAY is not set +# CONFIG_APPLE_MFI_FASTCHARGE is not set +# CONFIG_USB_LD is not set +# CONFIG_USB_TRANCEVIBRATOR is not set +# CONFIG_USB_IOWARRIOR is not set +# CONFIG_USB_TEST is not set +# CONFIG_USB_EHSET_TEST_FIXTURE is not set +# CONFIG_USB_ISIGHTFW is not set +# CONFIG_USB_YUREX is not set +# CONFIG_USB_EZUSB_FX2 is not set +# CONFIG_USB_HUB_USB251XB is not set +# CONFIG_USB_HSIC_USB3503 is not set +# CONFIG_USB_HSIC_USB4604 is not set +# CONFIG_USB_LINK_LAYER_TEST is not set +# CONFIG_USB_ONBOARD_HUB is not set + +# +# USB Physical Layer drivers +# +# CONFIG_NOP_USB_XCEIV is not set +# CONFIG_USB_GPIO_VBUS is not set +# CONFIG_USB_ISP1301 is not set +# CONFIG_USB_ULPI is not set +# end of USB Physical Layer drivers + +# CONFIG_USB_GADGET is not set +# CONFIG_TYPEC is not set +# CONFIG_USB_ROLE_SWITCH is not set +# CONFIG_MMC is not set +# CONFIG_SCSI_UFSHCD is not set +# CONFIG_MEMSTICK is not set +# CONFIG_NEW_LEDS is not set +# CONFIG_ACCESSIBILITY is not set +# CONFIG_INFINIBAND is not set +CONFIG_EDAC_SUPPORT=y +# CONFIG_EDAC is not set +CONFIG_RTC_LIB=y +CONFIG_RTC_CLASS=y +CONFIG_RTC_HCTOSYS=y +CONFIG_RTC_HCTOSYS_DEVICE="rtc0" +CONFIG_RTC_SYSTOHC=y +CONFIG_RTC_SYSTOHC_DEVICE="rtc0" +# CONFIG_RTC_DEBUG is not set +CONFIG_RTC_NVMEM=y + +# +# RTC interfaces +# +CONFIG_RTC_INTF_SYSFS=y +CONFIG_RTC_INTF_PROC=y +CONFIG_RTC_INTF_DEV=y +CONFIG_RTC_INTF_DEV_UIE_EMUL=y +# CONFIG_RTC_DRV_TEST is not set + +# +# I2C RTC drivers +# +# CONFIG_RTC_DRV_ABB5ZES3 is not set +# CONFIG_RTC_DRV_ABEOZ9 is not set +# CONFIG_RTC_DRV_ABX80X is not set +# CONFIG_RTC_DRV_DS1307 is not set +# CONFIG_RTC_DRV_DS1374 is not set +# CONFIG_RTC_DRV_DS1672 is not set +# CONFIG_RTC_DRV_HYM8563 is not set +# CONFIG_RTC_DRV_MAX6900 is not set +# CONFIG_RTC_DRV_NCT3018Y is not set +# CONFIG_RTC_DRV_RS5C372 is not set +# CONFIG_RTC_DRV_ISL1208 is not set +# CONFIG_RTC_DRV_ISL12022 is not set +# CONFIG_RTC_DRV_ISL12026 is not set +# CONFIG_RTC_DRV_X1205 is not set +# CONFIG_RTC_DRV_PCF8523 is not set +# CONFIG_RTC_DRV_PCF85063 is not set +# CONFIG_RTC_DRV_PCF85363 is not set +# CONFIG_RTC_DRV_PCF8563 is not set +# CONFIG_RTC_DRV_PCF8583 is not set +# CONFIG_RTC_DRV_M41T80 is not set +# CONFIG_RTC_DRV_BQ32K is not set +# CONFIG_RTC_DRV_S35390A is not set +# CONFIG_RTC_DRV_FM3130 is not set +# CONFIG_RTC_DRV_RX8010 is not set +# CONFIG_RTC_DRV_RX8581 is not set +# CONFIG_RTC_DRV_RX8025 is not set +# CONFIG_RTC_DRV_EM3027 is not set +# CONFIG_RTC_DRV_RV3028 is not set +# CONFIG_RTC_DRV_RV3032 is not set +# CONFIG_RTC_DRV_RV8803 is not set +# CONFIG_RTC_DRV_SD3078 is not set + +# +# SPI RTC 
drivers +# +CONFIG_RTC_I2C_AND_SPI=y + +# +# SPI and I2C RTC drivers +# +# CONFIG_RTC_DRV_DS3232 is not set +# CONFIG_RTC_DRV_PCF2127 is not set +# CONFIG_RTC_DRV_RV3029C2 is not set +# CONFIG_RTC_DRV_RX6110 is not set + +# +# Platform RTC drivers +# +# CONFIG_RTC_DRV_DS1286 is not set +# CONFIG_RTC_DRV_DS1511 is not set +# CONFIG_RTC_DRV_DS1553 is not set +# CONFIG_RTC_DRV_DS1685_FAMILY is not set +# CONFIG_RTC_DRV_DS1742 is not set +# CONFIG_RTC_DRV_DS2404 is not set +# CONFIG_RTC_DRV_EFI is not set +# CONFIG_RTC_DRV_STK17TA8 is not set +# CONFIG_RTC_DRV_M48T86 is not set +# CONFIG_RTC_DRV_M48T35 is not set +# CONFIG_RTC_DRV_M48T59 is not set +# CONFIG_RTC_DRV_MSM6242 is not set +# CONFIG_RTC_DRV_BQ4802 is not set +# CONFIG_RTC_DRV_RP5C01 is not set +# CONFIG_RTC_DRV_V3020 is not set +# CONFIG_RTC_DRV_ZYNQMP is not set + +# +# on-CPU RTC drivers +# +# CONFIG_RTC_DRV_PL030 is not set +# CONFIG_RTC_DRV_PL031 is not set +# CONFIG_RTC_DRV_CADENCE is not set +# CONFIG_RTC_DRV_FTRTC010 is not set +# CONFIG_RTC_DRV_R7301 is not set + +# +# HID Sensor RTC drivers +# +# CONFIG_RTC_DRV_GOLDFISH is not set +# CONFIG_DMADEVICES is not set + +# +# DMABUF options +# +CONFIG_SYNC_FILE=y +# CONFIG_SW_SYNC is not set +# CONFIG_UDMABUF is not set +# CONFIG_DMABUF_MOVE_NOTIFY is not set +# CONFIG_DMABUF_DEBUG is not set +# CONFIG_DMABUF_SELFTESTS is not set +# CONFIG_DMABUF_HEAPS is not set +# CONFIG_DMABUF_SYSFS_STATS is not set +# end of DMABUF options + +# CONFIG_AUXDISPLAY is not set +CONFIG_UIO=y +# CONFIG_UIO_CIF is not set +CONFIG_UIO_PDRV_GENIRQ=y +CONFIG_UIO_DMEM_GENIRQ=y +# CONFIG_UIO_AEC is not set +# CONFIG_UIO_SERCOS3 is not set +# CONFIG_UIO_PCI_GENERIC is not set +# CONFIG_UIO_NETX is not set +# CONFIG_UIO_PRUSS is not set +# CONFIG_UIO_MF624 is not set +# CONFIG_UIO_HV_GENERIC is not set +CONFIG_VFIO=y +CONFIG_VFIO_IOMMU_TYPE1=y +CONFIG_VFIO_VIRQFD=y +# CONFIG_VFIO_NOIOMMU is not set +CONFIG_VFIO_PCI_CORE=y +CONFIG_VFIO_PCI_MMAP=y +CONFIG_VFIO_PCI_INTX=y +CONFIG_VFIO_PCI=y +# CONFIG_VFIO_PLATFORM is not set +CONFIG_VFIO_MDEV=y +# CONFIG_VIRT_DRIVERS is not set +CONFIG_VIRTIO_ANCHOR=y +CONFIG_VIRTIO=y +CONFIG_VIRTIO_PCI_LIB=y +CONFIG_VIRTIO_MENU=y +CONFIG_VIRTIO_PCI=y +# CONFIG_VIRTIO_PCI_LEGACY is not set +# CONFIG_VIRTIO_VDPA is not set +CONFIG_VIRTIO_PMEM=y +CONFIG_VIRTIO_BALLOON=y +CONFIG_VIRTIO_INPUT=y +CONFIG_VIRTIO_MMIO=y +# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set +CONFIG_VDPA=y +# CONFIG_VDPA_USER is not set +# CONFIG_IFCVF is not set +# CONFIG_VP_VDPA is not set +CONFIG_VHOST_IOTLB=y +CONFIG_VHOST=y +CONFIG_VHOST_MENU=y +CONFIG_VHOST_NET=y +# CONFIG_VHOST_VSOCK is not set +CONFIG_VHOST_VDPA=y +# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set + +# +# Microsoft Hyper-V guest support +# +CONFIG_HYPERV=y +CONFIG_HYPERV_TIMER=y +CONFIG_HYPERV_UTILS=y +CONFIG_HYPERV_BALLOON=y +CONFIG_DXGKRNL=y +# end of Microsoft Hyper-V guest support + +# CONFIG_GREYBUS is not set +# CONFIG_COMEDI is not set +# CONFIG_STAGING is not set +# CONFIG_GOLDFISH is not set +# CONFIG_CHROME_PLATFORMS is not set +# CONFIG_MELLANOX_PLATFORM is not set +# CONFIG_SURFACE_PLATFORMS is not set +CONFIG_HAVE_CLK=y +CONFIG_HAVE_CLK_PREPARE=y +CONFIG_COMMON_CLK=y + +# +# Clock driver for ARM Reference designs +# +# CONFIG_CLK_ICST is not set +# CONFIG_CLK_SP810 is not set +# end of Clock driver for ARM Reference designs + +# CONFIG_COMMON_CLK_MAX9485 is not set +# CONFIG_COMMON_CLK_SI5341 is not set +# CONFIG_COMMON_CLK_SI5351 is not set +# CONFIG_COMMON_CLK_SI514 is not set +# CONFIG_COMMON_CLK_SI544 is not set +# 
CONFIG_COMMON_CLK_SI570 is not set +# CONFIG_COMMON_CLK_CDCE706 is not set +# CONFIG_COMMON_CLK_CDCE925 is not set +# CONFIG_COMMON_CLK_CS2000_CP is not set +# CONFIG_COMMON_CLK_AXI_CLKGEN is not set +# CONFIG_COMMON_CLK_XGENE is not set +# CONFIG_COMMON_CLK_RS9_PCIE is not set +# CONFIG_COMMON_CLK_VC5 is not set +# CONFIG_COMMON_CLK_VC7 is not set +# CONFIG_COMMON_CLK_FIXED_MMIO is not set +# CONFIG_XILINX_VCU is not set +# CONFIG_COMMON_CLK_XLNX_CLKWZRD is not set +# CONFIG_HWSPINLOCK is not set + +# +# Clock Source drivers +# +CONFIG_TIMER_OF=y +CONFIG_TIMER_ACPI=y +CONFIG_TIMER_PROBE=y +CONFIG_ARM_ARCH_TIMER=y +# CONFIG_ARM_ARCH_TIMER_EVTSTREAM is not set +CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND=y +# CONFIG_FSL_ERRATUM_A008585 is not set +# CONFIG_HISILICON_ERRATUM_161010101 is not set +CONFIG_ARM64_ERRATUM_858921=y +# CONFIG_MICROCHIP_PIT64B is not set +# end of Clock Source drivers + +# CONFIG_MAILBOX is not set +CONFIG_IOMMU_IOVA=y +CONFIG_IOMMU_API=y +CONFIG_IOMMU_SUPPORT=y + +# +# Generic IOMMU Pagetable Support +# +CONFIG_IOMMU_IO_PGTABLE=y +CONFIG_IOMMU_IO_PGTABLE_LPAE=y +# CONFIG_IOMMU_IO_PGTABLE_LPAE_SELFTEST is not set +# CONFIG_IOMMU_IO_PGTABLE_ARMV7S is not set +# CONFIG_IOMMU_IO_PGTABLE_DART is not set +# end of Generic IOMMU Pagetable Support + +# CONFIG_IOMMU_DEBUGFS is not set +CONFIG_IOMMU_DEFAULT_DMA_STRICT=y +# CONFIG_IOMMU_DEFAULT_DMA_LAZY is not set +# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set +CONFIG_OF_IOMMU=y +CONFIG_IOMMU_DMA=y +CONFIG_ARM_SMMU=y +# CONFIG_ARM_SMMU_LEGACY_DT_BINDINGS is not set +CONFIG_ARM_SMMU_DISABLE_BYPASS_BY_DEFAULT=y +CONFIG_ARM_SMMU_V3=y +# CONFIG_ARM_SMMU_V3_SVA is not set +# CONFIG_VIRTIO_IOMMU is not set + +# +# Remoteproc drivers +# +# CONFIG_REMOTEPROC is not set +# end of Remoteproc drivers + +# +# Rpmsg drivers +# +# CONFIG_RPMSG_VIRTIO is not set +# end of Rpmsg drivers + +# CONFIG_SOUNDWIRE is not set + +# +# SOC (System On Chip) specific Drivers +# + +# +# Amlogic SoC drivers +# +# end of Amlogic SoC drivers + +# +# Broadcom SoC drivers +# +# CONFIG_SOC_BRCMSTB is not set +# end of Broadcom SoC drivers + +# +# NXP/Freescale QorIQ SoC drivers +# +# CONFIG_QUICC_ENGINE is not set +# end of NXP/Freescale QorIQ SoC drivers + +# +# fujitsu SoC drivers +# +# CONFIG_A64FX_DIAG is not set +# end of fujitsu SoC drivers + +# +# i.MX SoC drivers +# +# end of i.MX SoC drivers + +# +# Enable LiteX SoC Builder specific drivers +# +# CONFIG_LITEX_SOC_CONTROLLER is not set +# end of Enable LiteX SoC Builder specific drivers + +# +# Qualcomm SoC drivers +# +# end of Qualcomm SoC drivers + +# CONFIG_SOC_TI is not set + +# +# Xilinx SoC drivers +# +# end of Xilinx SoC drivers +# end of SOC (System On Chip) specific Drivers + +# CONFIG_PM_DEVFREQ is not set +# CONFIG_EXTCON is not set +# CONFIG_MEMORY is not set +# CONFIG_IIO is not set +# CONFIG_NTB is not set +# CONFIG_PWM is not set + +# +# IRQ chip support +# +CONFIG_IRQCHIP=y +CONFIG_ARM_GIC=y +CONFIG_ARM_GIC_MAX_NR=1 +CONFIG_ARM_GIC_V2M=y +CONFIG_ARM_GIC_V3=y +CONFIG_ARM_GIC_V3_ITS=y +CONFIG_ARM_GIC_V3_ITS_PCI=y +# CONFIG_AL_FIC is not set +# CONFIG_XILINX_INTC is not set +CONFIG_PARTITION_PERCPU=y +# end of IRQ chip support + +# CONFIG_IPACK_BUS is not set +# CONFIG_RESET_CONTROLLER is not set + +# +# PHY Subsystem +# +CONFIG_GENERIC_PHY=y +# CONFIG_PHY_XGENE is not set +# CONFIG_PHY_CAN_TRANSCEIVER is not set + +# +# PHY drivers for Broadcom platforms +# +# CONFIG_BCM_KONA_USB2_PHY is not set +# end of PHY drivers for Broadcom platforms + +# CONFIG_PHY_CADENCE_TORRENT is not set +# 
CONFIG_PHY_CADENCE_DPHY is not set +# CONFIG_PHY_CADENCE_DPHY_RX is not set +# CONFIG_PHY_CADENCE_SALVO is not set +# CONFIG_PHY_PXA_28NM_HSIC is not set +# CONFIG_PHY_PXA_28NM_USB2 is not set +# CONFIG_PHY_MAPPHONE_MDM6600 is not set +# end of PHY Subsystem + +# CONFIG_POWERCAP is not set +# CONFIG_MCB is not set + +# +# Performance monitor support +# +# CONFIG_ARM_CCI_PMU is not set +CONFIG_ARM_CCN=y +# CONFIG_ARM_CMN is not set +CONFIG_ARM_PMU=y +CONFIG_ARM_PMU_ACPI=y +# CONFIG_ARM_SMMU_V3_PMU is not set +# CONFIG_ARM_DSU_PMU is not set +# CONFIG_ARM_SPE_PMU is not set +# CONFIG_ARM_DMC620_PMU is not set +# CONFIG_ALIBABA_UNCORE_DRW_PMU is not set +# CONFIG_HISI_PMU is not set +# CONFIG_HISI_PCIE_PMU is not set +# CONFIG_HNS3_PMU is not set +# end of Performance monitor support + +CONFIG_RAS=y +# CONFIG_USB4 is not set + +# +# Android +# +# CONFIG_ANDROID_BINDER_IPC is not set +# end of Android + +CONFIG_LIBNVDIMM=y +CONFIG_BLK_DEV_PMEM=y +CONFIG_ND_CLAIM=y +CONFIG_ND_BTT=y +CONFIG_BTT=y +CONFIG_ND_PFN=y +CONFIG_NVDIMM_PFN=y +CONFIG_NVDIMM_DAX=y +# CONFIG_OF_PMEM is not set +CONFIG_DAX=y +CONFIG_DEV_DAX=y +CONFIG_DEV_DAX_PMEM=y +CONFIG_DEV_DAX_KMEM=y +CONFIG_NVMEM=y +# CONFIG_NVMEM_SYSFS is not set +# CONFIG_NVMEM_RMEM is not set + +# +# HW tracing support +# +# CONFIG_STM is not set +# CONFIG_INTEL_TH is not set +# CONFIG_HISI_PTT is not set +# end of HW tracing support + +# CONFIG_FPGA is not set +# CONFIG_FSI is not set +# CONFIG_TEE is not set +# CONFIG_SIOX is not set +# CONFIG_SLIMBUS is not set +# CONFIG_INTERCONNECT is not set +# CONFIG_COUNTER is not set +# CONFIG_PECI is not set +# CONFIG_HTE is not set +# end of Device Drivers + +# +# File systems +# +CONFIG_DCACHE_WORD_ACCESS=y +# CONFIG_VALIDATE_FS_PARSER is not set +CONFIG_FS_IOMAP=y +# CONFIG_EXT2_FS is not set +# CONFIG_EXT3_FS is not set +CONFIG_EXT4_FS=y +CONFIG_EXT4_USE_FOR_EXT2=y +CONFIG_EXT4_FS_POSIX_ACL=y +CONFIG_EXT4_FS_SECURITY=y +# CONFIG_EXT4_DEBUG is not set +CONFIG_JBD2=y +# CONFIG_JBD2_DEBUG is not set +CONFIG_FS_MBCACHE=y +# CONFIG_REISERFS_FS is not set +# CONFIG_JFS_FS is not set +CONFIG_XFS_FS=y +# CONFIG_XFS_SUPPORT_V4 is not set +CONFIG_XFS_QUOTA=y +CONFIG_XFS_POSIX_ACL=y +CONFIG_XFS_RT=y +CONFIG_XFS_ONLINE_SCRUB=y +CONFIG_XFS_ONLINE_REPAIR=y +# CONFIG_XFS_WARN is not set +# CONFIG_XFS_DEBUG is not set +# CONFIG_GFS2_FS is not set +CONFIG_BTRFS_FS=y +CONFIG_BTRFS_FS_POSIX_ACL=y +# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set +# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set +# CONFIG_BTRFS_DEBUG is not set +# CONFIG_BTRFS_ASSERT is not set +# CONFIG_BTRFS_FS_REF_VERIFY is not set +# CONFIG_NILFS2_FS is not set +# CONFIG_F2FS_FS is not set +CONFIG_FS_DAX=y +CONFIG_FS_DAX_PMD=y +CONFIG_FS_POSIX_ACL=y +CONFIG_EXPORTFS=y +CONFIG_EXPORTFS_BLOCK_OPS=y +CONFIG_FILE_LOCKING=y +# CONFIG_FS_ENCRYPTION is not set +# CONFIG_FS_VERITY is not set +CONFIG_FSNOTIFY=y +CONFIG_DNOTIFY=y +CONFIG_INOTIFY_USER=y +CONFIG_FANOTIFY=y +CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y +CONFIG_QUOTA=y +# CONFIG_QUOTA_NETLINK_INTERFACE is not set +# CONFIG_PRINT_QUOTA_WARNING is not set +# CONFIG_QUOTA_DEBUG is not set +# CONFIG_QFMT_V1 is not set +# CONFIG_QFMT_V2 is not set +CONFIG_QUOTACTL=y +CONFIG_AUTOFS4_FS=y +CONFIG_AUTOFS_FS=y +CONFIG_FUSE_FS=y +CONFIG_CUSE=y +CONFIG_VIRTIO_FS=y +CONFIG_FUSE_DAX=y +CONFIG_OVERLAY_FS=y +# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set +# CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW is not set +# CONFIG_OVERLAY_FS_INDEX is not set +# CONFIG_OVERLAY_FS_XINO_AUTO is not set +# CONFIG_OVERLAY_FS_METACOPY is not set + 
+# +# Caches +# +CONFIG_NETFS_SUPPORT=y +# CONFIG_NETFS_STATS is not set +CONFIG_FSCACHE=y +# CONFIG_FSCACHE_STATS is not set +# CONFIG_FSCACHE_DEBUG is not set +# CONFIG_CACHEFILES is not set +# end of Caches + +# +# CD-ROM/DVD Filesystems +# +CONFIG_ISO9660_FS=y +CONFIG_JOLIET=y +CONFIG_ZISOFS=y +CONFIG_UDF_FS=y +# end of CD-ROM/DVD Filesystems + +# +# DOS/FAT/EXFAT/NT Filesystems +# +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y +CONFIG_VFAT_FS=y +CONFIG_FAT_DEFAULT_CODEPAGE=437 +CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1" +# CONFIG_FAT_DEFAULT_UTF8 is not set +# CONFIG_EXFAT_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS3_FS is not set +# end of DOS/FAT/EXFAT/NT Filesystems + +# +# Pseudo filesystems +# +CONFIG_PROC_FS=y +CONFIG_PROC_KCORE=y +CONFIG_PROC_SYSCTL=y +CONFIG_PROC_PAGE_MONITOR=y +CONFIG_PROC_CHILDREN=y +CONFIG_KERNFS=y +CONFIG_SYSFS=y +CONFIG_TMPFS=y +CONFIG_TMPFS_POSIX_ACL=y +CONFIG_TMPFS_XATTR=y +# CONFIG_TMPFS_INODE64 is not set +CONFIG_ARCH_SUPPORTS_HUGETLBFS=y +CONFIG_HUGETLBFS=y +CONFIG_HUGETLB_PAGE=y +CONFIG_MEMFD_CREATE=y +CONFIG_ARCH_HAS_GIGANTIC_PAGE=y +# CONFIG_CONFIGFS_FS is not set +# CONFIG_EFIVAR_FS is not set +# end of Pseudo filesystems + +CONFIG_MISC_FILESYSTEMS=y +# CONFIG_ORANGEFS_FS is not set +# CONFIG_ADFS_FS is not set +# CONFIG_AFFS_FS is not set +# CONFIG_ECRYPT_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_HFSPLUS_FS is not set +# CONFIG_BEFS_FS is not set +# CONFIG_BFS_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_SQUASHFS=y +# CONFIG_SQUASHFS_FILE_CACHE is not set +CONFIG_SQUASHFS_FILE_DIRECT=y +CONFIG_SQUASHFS_DECOMP_SINGLE=y +# CONFIG_SQUASHFS_DECOMP_MULTI is not set +# CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU is not set +CONFIG_SQUASHFS_XATTR=y +CONFIG_SQUASHFS_ZLIB=y +CONFIG_SQUASHFS_LZ4=y +CONFIG_SQUASHFS_LZO=y +CONFIG_SQUASHFS_XZ=y +CONFIG_SQUASHFS_ZSTD=y +# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set +# CONFIG_SQUASHFS_EMBEDDED is not set +CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3 +# CONFIG_VXFS_FS is not set +# CONFIG_MINIX_FS is not set +# CONFIG_OMFS_FS is not set +# CONFIG_HPFS_FS is not set +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX6FS_FS is not set +# CONFIG_ROMFS_FS is not set +# CONFIG_PSTORE is not set +# CONFIG_SYSV_FS is not set +# CONFIG_UFS_FS is not set +CONFIG_EROFS_FS=y +# CONFIG_EROFS_FS_DEBUG is not set +CONFIG_EROFS_FS_XATTR=y +CONFIG_EROFS_FS_POSIX_ACL=y +CONFIG_EROFS_FS_SECURITY=y +CONFIG_EROFS_FS_ZIP=y +# CONFIG_EROFS_FS_ZIP_LZMA is not set +CONFIG_NETWORK_FILESYSTEMS=y +CONFIG_NFS_FS=y +CONFIG_NFS_V2=y +CONFIG_NFS_V3=y +# CONFIG_NFS_V3_ACL is not set +CONFIG_NFS_V4=y +# CONFIG_NFS_SWAP is not set +CONFIG_NFS_V4_1=y +# CONFIG_NFS_V4_2 is not set +CONFIG_PNFS_FILE_LAYOUT=y +CONFIG_PNFS_BLOCK=y +CONFIG_PNFS_FLEXFILE_LAYOUT=y +CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org" +# CONFIG_NFS_V4_1_MIGRATION is not set +CONFIG_ROOT_NFS=y +# CONFIG_NFS_FSCACHE is not set +# CONFIG_NFS_USE_LEGACY_DNS is not set +CONFIG_NFS_USE_KERNEL_DNS=y +# CONFIG_NFS_DISABLE_UDP_SUPPORT is not set +CONFIG_NFSD=y +CONFIG_NFSD_V2_ACL=y +CONFIG_NFSD_V3_ACL=y +CONFIG_NFSD_V4=y +CONFIG_NFSD_PNFS=y +CONFIG_NFSD_BLOCKLAYOUT=y +CONFIG_NFSD_SCSILAYOUT=y +CONFIG_NFSD_FLEXFILELAYOUT=y +CONFIG_NFSD_V4_SECURITY_LABEL=y +CONFIG_GRACE_PERIOD=y +CONFIG_LOCKD=y +CONFIG_LOCKD_V4=y +CONFIG_NFS_ACL_SUPPORT=y +CONFIG_NFS_COMMON=y +CONFIG_SUNRPC=y +CONFIG_SUNRPC_GSS=y +CONFIG_SUNRPC_BACKCHANNEL=y +# CONFIG_RPCSEC_GSS_KRB5 is not set +# CONFIG_SUNRPC_DEBUG is not set +CONFIG_CEPH_FS=y +CONFIG_CEPH_FSCACHE=y 
+CONFIG_CEPH_FS_POSIX_ACL=y +# CONFIG_CEPH_FS_SECURITY_LABEL is not set +CONFIG_CIFS=y +# CONFIG_CIFS_STATS2 is not set +CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y +# CONFIG_CIFS_UPCALL is not set +CONFIG_CIFS_XATTR=y +CONFIG_CIFS_POSIX=y +# CONFIG_CIFS_DEBUG is not set +# CONFIG_CIFS_DFS_UPCALL is not set +# CONFIG_CIFS_SWN_UPCALL is not set +# CONFIG_CIFS_FSCACHE is not set +# CONFIG_CIFS_ROOT is not set +# CONFIG_SMB_SERVER is not set +CONFIG_SMBFS=y +# CONFIG_CODA_FS is not set +# CONFIG_AFS_FS is not set +CONFIG_9P_FS=y +CONFIG_9P_FSCACHE=y +CONFIG_9P_FS_POSIX_ACL=y +CONFIG_9P_FS_SECURITY=y +CONFIG_NLS=y +CONFIG_NLS_DEFAULT="iso8859-1" +CONFIG_NLS_CODEPAGE_437=y +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# CONFIG_NLS_CODEPAGE_932 is not set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1250 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +CONFIG_NLS_ASCII=y +CONFIG_NLS_ISO8859_1=y +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_MAC_ROMAN is not set +# CONFIG_NLS_MAC_CELTIC is not set +# CONFIG_NLS_MAC_CENTEURO is not set +# CONFIG_NLS_MAC_CROATIAN is not set +# CONFIG_NLS_MAC_CYRILLIC is not set +# CONFIG_NLS_MAC_GAELIC is not set +# CONFIG_NLS_MAC_GREEK is not set +# CONFIG_NLS_MAC_ICELAND is not set +# CONFIG_NLS_MAC_INUIT is not set +# CONFIG_NLS_MAC_ROMANIAN is not set +# CONFIG_NLS_MAC_TURKISH is not set +CONFIG_NLS_UTF8=y +# CONFIG_UNICODE is not set +CONFIG_IO_WQ=y +# end of File systems + +# +# Security options +# +CONFIG_KEYS=y +# CONFIG_KEYS_REQUEST_CACHE is not set +# CONFIG_PERSISTENT_KEYRINGS is not set +# CONFIG_BIG_KEYS is not set +# CONFIG_TRUSTED_KEYS is not set +# CONFIG_ENCRYPTED_KEYS is not set +# CONFIG_KEY_DH_OPERATIONS is not set +CONFIG_SECURITY_DMESG_RESTRICT=y +CONFIG_SECURITY=y +# CONFIG_SECURITYFS is not set +# CONFIG_SECURITY_NETWORK is not set +CONFIG_SECURITY_PATH=y +CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y +CONFIG_HARDENED_USERCOPY=y +CONFIG_FORTIFY_SOURCE=y +# CONFIG_STATIC_USERMODEHELPER is not set +# CONFIG_SECURITY_SMACK is not set +# CONFIG_SECURITY_TOMOYO is not set +# CONFIG_SECURITY_APPARMOR is not set +# CONFIG_SECURITY_LOADPIN is not set +# CONFIG_SECURITY_YAMA is not set +# CONFIG_SECURITY_SAFESETID is not set +# CONFIG_SECURITY_LOCKDOWN_LSM is not set +CONFIG_SECURITY_LANDLOCK=y +# CONFIG_INTEGRITY is not set +# CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT is not set +CONFIG_DEFAULT_SECURITY_DAC=y +CONFIG_LSM="landlock,lockdown,yama,loadpin,safesetid,integrity,bpf" + +# +# Kernel hardening options +# + +# +# Memory initialization +# 
+CONFIG_INIT_STACK_NONE=y +# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set +# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set +CONFIG_CC_HAS_ZERO_CALL_USED_REGS=y +CONFIG_ZERO_CALL_USED_REGS=y +# end of Memory initialization + +CONFIG_RANDSTRUCT_NONE=y +# end of Kernel hardening options +# end of Security options + +CONFIG_XOR_BLOCKS=y +CONFIG_ASYNC_CORE=y +CONFIG_ASYNC_MEMCPY=y +CONFIG_ASYNC_XOR=y +CONFIG_ASYNC_PQ=y +CONFIG_ASYNC_RAID6_RECOV=y +CONFIG_CRYPTO=y + +# +# Crypto core or helper +# +CONFIG_CRYPTO_ALGAPI=y +CONFIG_CRYPTO_ALGAPI2=y +CONFIG_CRYPTO_AEAD=y +CONFIG_CRYPTO_AEAD2=y +CONFIG_CRYPTO_SKCIPHER=y +CONFIG_CRYPTO_SKCIPHER2=y +CONFIG_CRYPTO_HASH=y +CONFIG_CRYPTO_HASH2=y +CONFIG_CRYPTO_RNG=y +CONFIG_CRYPTO_RNG2=y +CONFIG_CRYPTO_RNG_DEFAULT=y +CONFIG_CRYPTO_AKCIPHER2=y +CONFIG_CRYPTO_AKCIPHER=y +CONFIG_CRYPTO_KPP2=y +CONFIG_CRYPTO_ACOMP2=y +CONFIG_CRYPTO_MANAGER=y +CONFIG_CRYPTO_MANAGER2=y +# CONFIG_CRYPTO_USER is not set +CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y +CONFIG_CRYPTO_GF128MUL=y +CONFIG_CRYPTO_NULL=y +CONFIG_CRYPTO_NULL2=y +# CONFIG_CRYPTO_PCRYPT is not set +# CONFIG_CRYPTO_CRYPTD is not set +CONFIG_CRYPTO_AUTHENC=y +# CONFIG_CRYPTO_TEST is not set +# end of Crypto core or helper + +# +# Public-key cryptography +# +CONFIG_CRYPTO_RSA=y +# CONFIG_CRYPTO_DH is not set +# CONFIG_CRYPTO_ECDH is not set +# CONFIG_CRYPTO_ECDSA is not set +# CONFIG_CRYPTO_ECRDSA is not set +# CONFIG_CRYPTO_SM2 is not set +# CONFIG_CRYPTO_CURVE25519 is not set +# end of Public-key cryptography + +# +# Block ciphers +# +CONFIG_CRYPTO_AES=y +# CONFIG_CRYPTO_AES_TI is not set +# CONFIG_CRYPTO_ANUBIS is not set +# CONFIG_CRYPTO_ARIA is not set +# CONFIG_CRYPTO_BLOWFISH is not set +# CONFIG_CRYPTO_CAMELLIA is not set +# CONFIG_CRYPTO_CAST5 is not set +# CONFIG_CRYPTO_CAST6 is not set +CONFIG_CRYPTO_DES=y +# CONFIG_CRYPTO_FCRYPT is not set +# CONFIG_CRYPTO_KHAZAD is not set +# CONFIG_CRYPTO_SEED is not set +# CONFIG_CRYPTO_SERPENT is not set +# CONFIG_CRYPTO_SM4_GENERIC is not set +# CONFIG_CRYPTO_TEA is not set +# CONFIG_CRYPTO_TWOFISH is not set +# end of Block ciphers + +# +# Length-preserving ciphers and modes +# +# CONFIG_CRYPTO_ADIANTUM is not set +CONFIG_CRYPTO_ARC4=y +# CONFIG_CRYPTO_CHACHA20 is not set +CONFIG_CRYPTO_CBC=y +# CONFIG_CRYPTO_CFB is not set +CONFIG_CRYPTO_CTR=y +CONFIG_CRYPTO_CTS=y +CONFIG_CRYPTO_ECB=y +# CONFIG_CRYPTO_HCTR2 is not set +# CONFIG_CRYPTO_KEYWRAP is not set +# CONFIG_CRYPTO_LRW is not set +# CONFIG_CRYPTO_OFB is not set +# CONFIG_CRYPTO_PCBC is not set +CONFIG_CRYPTO_XTS=y +# end of Length-preserving ciphers and modes + +# +# AEAD (authenticated encryption with associated data) ciphers +# +# CONFIG_CRYPTO_AEGIS128 is not set +# CONFIG_CRYPTO_CHACHA20POLY1305 is not set +CONFIG_CRYPTO_CCM=y +CONFIG_CRYPTO_GCM=y +CONFIG_CRYPTO_SEQIV=y +CONFIG_CRYPTO_ECHAINIV=y +CONFIG_CRYPTO_ESSIV=y +# end of AEAD (authenticated encryption with associated data) ciphers + +# +# Hashes, digests, and MACs +# +CONFIG_CRYPTO_BLAKE2B=y +CONFIG_CRYPTO_CMAC=y +CONFIG_CRYPTO_GHASH=y +CONFIG_CRYPTO_HMAC=y +CONFIG_CRYPTO_MD4=y +CONFIG_CRYPTO_MD5=y +# CONFIG_CRYPTO_MICHAEL_MIC is not set +# CONFIG_CRYPTO_POLY1305 is not set +# CONFIG_CRYPTO_RMD160 is not set +CONFIG_CRYPTO_SHA1=y +CONFIG_CRYPTO_SHA256=y +CONFIG_CRYPTO_SHA512=y +# CONFIG_CRYPTO_SHA3 is not set +# CONFIG_CRYPTO_SM3_GENERIC is not set +# CONFIG_CRYPTO_STREEBOG is not set +# CONFIG_CRYPTO_VMAC is not set +# CONFIG_CRYPTO_WP512 is not set +# CONFIG_CRYPTO_XCBC is not set +CONFIG_CRYPTO_XXHASH=y +# end of Hashes, digests, and MACs + +# +# CRCs 
(cyclic redundancy checks) +# +CONFIG_CRYPTO_CRC32C=y +# CONFIG_CRYPTO_CRC32 is not set +# CONFIG_CRYPTO_CRCT10DIF is not set +# end of CRCs (cyclic redundancy checks) + +# +# Compression +# +# CONFIG_CRYPTO_DEFLATE is not set +# CONFIG_CRYPTO_LZO is not set +# CONFIG_CRYPTO_842 is not set +# CONFIG_CRYPTO_LZ4 is not set +# CONFIG_CRYPTO_LZ4HC is not set +# CONFIG_CRYPTO_ZSTD is not set +# end of Compression + +# +# Random number generation +# +# CONFIG_CRYPTO_ANSI_CPRNG is not set +CONFIG_CRYPTO_DRBG_MENU=y +CONFIG_CRYPTO_DRBG_HMAC=y +# CONFIG_CRYPTO_DRBG_HASH is not set +# CONFIG_CRYPTO_DRBG_CTR is not set +CONFIG_CRYPTO_DRBG=y +CONFIG_CRYPTO_JITTERENTROPY=y +# end of Random number generation + +# +# Userspace interface +# +CONFIG_CRYPTO_USER_API=y +CONFIG_CRYPTO_USER_API_HASH=y +CONFIG_CRYPTO_USER_API_SKCIPHER=y +# CONFIG_CRYPTO_USER_API_RNG is not set +# CONFIG_CRYPTO_USER_API_AEAD is not set +CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE=y +# end of Userspace interface + +CONFIG_CRYPTO_HASH_INFO=y +# CONFIG_CRYPTO_NHPOLY1305_NEON is not set +CONFIG_CRYPTO_CHACHA20_NEON=y + +# +# Accelerated Cryptographic Algorithms for CPU (arm64) +# +# CONFIG_CRYPTO_GHASH_ARM64_CE is not set +CONFIG_CRYPTO_POLY1305_NEON=y +# CONFIG_CRYPTO_SHA1_ARM64_CE is not set +# CONFIG_CRYPTO_SHA256_ARM64 is not set +# CONFIG_CRYPTO_SHA2_ARM64_CE is not set +# CONFIG_CRYPTO_SHA512_ARM64 is not set +# CONFIG_CRYPTO_SHA512_ARM64_CE is not set +# CONFIG_CRYPTO_SHA3_ARM64 is not set +# CONFIG_CRYPTO_SM3_NEON is not set +# CONFIG_CRYPTO_SM3_ARM64_CE is not set +# CONFIG_CRYPTO_POLYVAL_ARM64_CE is not set +# CONFIG_CRYPTO_AES_ARM64 is not set +# CONFIG_CRYPTO_AES_ARM64_CE is not set +# CONFIG_CRYPTO_AES_ARM64_CE_BLK is not set +# CONFIG_CRYPTO_AES_ARM64_NEON_BLK is not set +# CONFIG_CRYPTO_AES_ARM64_BS is not set +# CONFIG_CRYPTO_SM4_ARM64_CE is not set +# CONFIG_CRYPTO_SM4_ARM64_CE_BLK is not set +# CONFIG_CRYPTO_SM4_ARM64_NEON_BLK is not set +# CONFIG_CRYPTO_AES_ARM64_CE_CCM is not set +# end of Accelerated Cryptographic Algorithms for CPU (arm64) + +CONFIG_CRYPTO_HW=y +# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set +# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set +# CONFIG_CRYPTO_DEV_CCP is not set +# CONFIG_CRYPTO_DEV_QAT_DH895xCC is not set +# CONFIG_CRYPTO_DEV_QAT_C3XXX is not set +# CONFIG_CRYPTO_DEV_QAT_C62X is not set +# CONFIG_CRYPTO_DEV_QAT_4XXX is not set +# CONFIG_CRYPTO_DEV_QAT_DH895xCCVF is not set +# CONFIG_CRYPTO_DEV_QAT_C3XXXVF is not set +# CONFIG_CRYPTO_DEV_QAT_C62XVF is not set +# CONFIG_CRYPTO_DEV_NITROX_CNN55XX is not set +# CONFIG_CRYPTO_DEV_CAVIUM_ZIP is not set +# CONFIG_CRYPTO_DEV_VIRTIO is not set +# CONFIG_CRYPTO_DEV_SAFEXCEL is not set +# CONFIG_CRYPTO_DEV_CCREE is not set +# CONFIG_CRYPTO_DEV_HISI_SEC is not set +# CONFIG_CRYPTO_DEV_HISI_SEC2 is not set +# CONFIG_CRYPTO_DEV_HISI_ZIP is not set +# CONFIG_CRYPTO_DEV_HISI_HPRE is not set +# CONFIG_CRYPTO_DEV_HISI_TRNG is not set +# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set +CONFIG_ASYMMETRIC_KEY_TYPE=y +CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y +CONFIG_X509_CERTIFICATE_PARSER=y +# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set +CONFIG_PKCS7_MESSAGE_PARSER=y +# CONFIG_PKCS7_TEST_KEY is not set +# CONFIG_SIGNED_PE_FILE_VERIFICATION is not set +CONFIG_FIPS_SIGNATURE_SELFTEST=y + +# +# Certificates for signature checking +# +CONFIG_SYSTEM_TRUSTED_KEYRING=y +CONFIG_SYSTEM_TRUSTED_KEYS="" +# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set +# CONFIG_SECONDARY_TRUSTED_KEYRING is not set +# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set +# end of Certificates for signature checking 
+ +CONFIG_BINARY_PRINTF=y + +# +# Library routines +# +CONFIG_RAID6_PQ=y +# CONFIG_RAID6_PQ_BENCHMARK is not set +# CONFIG_PACKING is not set +CONFIG_BITREVERSE=y +CONFIG_HAVE_ARCH_BITREVERSE=y +CONFIG_GENERIC_STRNCPY_FROM_USER=y +CONFIG_GENERIC_STRNLEN_USER=y +CONFIG_GENERIC_NET_UTILS=y +# CONFIG_CORDIC is not set +# CONFIG_PRIME_NUMBERS is not set +CONFIG_RATIONAL=y +CONFIG_GENERIC_PCI_IOMAP=y +CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y +CONFIG_ARCH_HAS_FAST_MULTIPLIER=y +CONFIG_ARCH_USE_SYM_ANNOTATIONS=y +# CONFIG_INDIRECT_PIO is not set +# CONFIG_TRACE_MMIO_ACCESS is not set + +# +# Crypto library routines +# +CONFIG_CRYPTO_LIB_UTILS=y +CONFIG_CRYPTO_LIB_AES=y +CONFIG_CRYPTO_LIB_ARC4=y +CONFIG_CRYPTO_LIB_BLAKE2S_GENERIC=y +CONFIG_CRYPTO_ARCH_HAVE_LIB_CHACHA=y +CONFIG_CRYPTO_LIB_CHACHA_GENERIC=y +CONFIG_CRYPTO_LIB_CHACHA=y +CONFIG_CRYPTO_LIB_CURVE25519_GENERIC=y +CONFIG_CRYPTO_LIB_CURVE25519=y +CONFIG_CRYPTO_LIB_DES=y +CONFIG_CRYPTO_LIB_POLY1305_RSIZE=9 +CONFIG_CRYPTO_ARCH_HAVE_LIB_POLY1305=y +CONFIG_CRYPTO_LIB_POLY1305=y +CONFIG_CRYPTO_LIB_CHACHA20POLY1305=y +CONFIG_CRYPTO_LIB_SHA1=y +CONFIG_CRYPTO_LIB_SHA256=y +# end of Crypto library routines + +CONFIG_CRC_CCITT=y +CONFIG_CRC16=y +# CONFIG_CRC_T10DIF is not set +# CONFIG_CRC64_ROCKSOFT is not set +CONFIG_CRC_ITU_T=y +CONFIG_CRC32=y +# CONFIG_CRC32_SELFTEST is not set +CONFIG_CRC32_SLICEBY8=y +# CONFIG_CRC32_SLICEBY4 is not set +# CONFIG_CRC32_SARWATE is not set +# CONFIG_CRC32_BIT is not set +# CONFIG_CRC64 is not set +# CONFIG_CRC4 is not set +# CONFIG_CRC7 is not set +CONFIG_LIBCRC32C=y +# CONFIG_CRC8 is not set +CONFIG_XXHASH=y +CONFIG_AUDIT_GENERIC=y +CONFIG_AUDIT_ARCH_COMPAT_GENERIC=y +# CONFIG_RANDOM32_SELFTEST is not set +CONFIG_ZLIB_INFLATE=y +CONFIG_ZLIB_DEFLATE=y +CONFIG_LZO_COMPRESS=y +CONFIG_LZO_DECOMPRESS=y +CONFIG_LZ4_DECOMPRESS=y +CONFIG_ZSTD_COMMON=y +CONFIG_ZSTD_COMPRESS=y +CONFIG_ZSTD_DECOMPRESS=y +CONFIG_XZ_DEC=y +# CONFIG_XZ_DEC_X86 is not set +# CONFIG_XZ_DEC_POWERPC is not set +# CONFIG_XZ_DEC_IA64 is not set +# CONFIG_XZ_DEC_ARM is not set +# CONFIG_XZ_DEC_ARMTHUMB is not set +# CONFIG_XZ_DEC_SPARC is not set +# CONFIG_XZ_DEC_MICROLZMA is not set +# CONFIG_XZ_DEC_TEST is not set +CONFIG_DECOMPRESS_GZIP=y +CONFIG_DECOMPRESS_ZSTD=y +CONFIG_GENERIC_ALLOCATOR=y +CONFIG_TEXTSEARCH=y +CONFIG_TEXTSEARCH_KMP=y +CONFIG_INTERVAL_TREE=y +CONFIG_XARRAY_MULTI=y +CONFIG_ASSOCIATIVE_ARRAY=y +CONFIG_HAS_IOMEM=y +CONFIG_HAS_IOPORT_MAP=y +CONFIG_HAS_DMA=y +CONFIG_DMA_OPS=y +CONFIG_NEED_SG_DMA_LENGTH=y +CONFIG_NEED_DMA_MAP_STATE=y +CONFIG_ARCH_DMA_ADDR_T_64BIT=y +CONFIG_DMA_DECLARE_COHERENT=y +CONFIG_ARCH_HAS_SETUP_DMA_OPS=y +CONFIG_ARCH_HAS_TEARDOWN_DMA_OPS=y +CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE=y +CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU=y +CONFIG_ARCH_HAS_DMA_PREP_COHERENT=y +CONFIG_SWIOTLB=y +# CONFIG_DMA_RESTRICTED_POOL is not set +CONFIG_DMA_NONCOHERENT_MMAP=y +CONFIG_DMA_COHERENT_POOL=y +CONFIG_DMA_DIRECT_REMAP=y +# CONFIG_DMA_API_DEBUG is not set +# CONFIG_DMA_MAP_BENCHMARK is not set +CONFIG_SGL_ALLOC=y +# CONFIG_FORCE_NR_CPUS is not set +CONFIG_CPU_RMAP=y +CONFIG_DQL=y +CONFIG_GLOB=y +# CONFIG_GLOB_SELFTEST is not set +CONFIG_NLATTR=y +CONFIG_CLZ_TAB=y +CONFIG_IRQ_POLL=y +CONFIG_MPILIB=y +CONFIG_LIBFDT=y +CONFIG_OID_REGISTRY=y +CONFIG_UCS2_STRING=y +CONFIG_HAVE_GENERIC_VDSO=y +CONFIG_GENERIC_GETTIMEOFDAY=y +CONFIG_GENERIC_VDSO_TIME_NS=y +CONFIG_FONT_SUPPORT=y +CONFIG_FONT_8x16=y +CONFIG_FONT_AUTOSELECT=y +CONFIG_SG_POOL=y +CONFIG_MEMREGION=y +CONFIG_ARCH_STACKWALK=y +CONFIG_SBITMAP=y +# end of Library routines + 
+CONFIG_GENERIC_IOREMAP=y +CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED=y + +# +# Kernel hacking +# + +# +# printk and dmesg options +# +CONFIG_PRINTK_TIME=y +# CONFIG_PRINTK_CALLER is not set +# CONFIG_STACKTRACE_BUILD_ID is not set +CONFIG_CONSOLE_LOGLEVEL_DEFAULT=2 +CONFIG_CONSOLE_LOGLEVEL_QUIET=1 +CONFIG_MESSAGE_LOGLEVEL_DEFAULT=1 +# CONFIG_BOOT_PRINTK_DELAY is not set +# CONFIG_DYNAMIC_DEBUG is not set +# CONFIG_DYNAMIC_DEBUG_CORE is not set +# CONFIG_SYMBOLIC_ERRNAME is not set +CONFIG_DEBUG_BUGVERBOSE=y +# end of printk and dmesg options + +# CONFIG_DEBUG_KERNEL is not set +# CONFIG_DEBUG_MISC is not set + +# +# Compile-time checks and compiler options +# +# CONFIG_DEBUG_INFO is not set +CONFIG_AS_HAS_NON_CONST_LEB128=y +# CONFIG_DEBUG_INFO_NONE is not set +CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y +# CONFIG_DEBUG_INFO_DWARF4 is not set +# CONFIG_DEBUG_INFO_DWARF5 is not set +# CONFIG_DEBUG_INFO_REDUCED is not set +# CONFIG_DEBUG_INFO_COMPRESSED is not set +# CONFIG_DEBUG_INFO_SPLIT is not set +CONFIG_DEBUG_INFO_BTF=y +CONFIG_PAHOLE_HAS_SPLIT_BTF=y +CONFIG_DEBUG_INFO_BTF_MODULES=y +# CONFIG_MODULE_ALLOW_BTF_MISMATCH is not set +# CONFIG_GDB_SCRIPTS is not set +CONFIG_FRAME_WARN=2048 +# CONFIG_STRIP_ASM_SYMS is not set +# CONFIG_READABLE_ASM is not set +# CONFIG_HEADERS_INSTALL is not set +# CONFIG_DEBUG_SECTION_MISMATCH is not set +# CONFIG_SECTION_MISMATCH_WARN_ONLY is not set +# CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B is not set +CONFIG_ARCH_WANT_FRAME_POINTERS=y +CONFIG_FRAME_POINTER=y +# CONFIG_VMLINUX_MAP is not set +# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set +# end of Compile-time checks and compiler options + +# +# Generic Kernel Debugging Instruments +# +# CONFIG_MAGIC_SYSRQ is not set +CONFIG_DEBUG_FS=y +CONFIG_DEBUG_FS_ALLOW_ALL=y +# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set +# CONFIG_DEBUG_FS_ALLOW_NONE is not set +CONFIG_HAVE_ARCH_KGDB=y +# CONFIG_KGDB is not set +CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y +# CONFIG_UBSAN is not set +CONFIG_HAVE_ARCH_KCSAN=y +CONFIG_HAVE_KCSAN_COMPILER=y +# CONFIG_KCSAN is not set +# end of Generic Kernel Debugging Instruments + +# +# Networking Debugging +# +# CONFIG_NET_DEV_REFCNT_TRACKER is not set +# CONFIG_NET_NS_REFCNT_TRACKER is not set +# CONFIG_DEBUG_NET is not set +# end of Networking Debugging + +# +# Memory Debugging +# +CONFIG_PAGE_EXTENSION=y +# CONFIG_DEBUG_PAGEALLOC is not set +# CONFIG_SLUB_DEBUG is not set +# CONFIG_PAGE_OWNER is not set +# CONFIG_PAGE_POISONING is not set +# CONFIG_DEBUG_PAGE_REF is not set +# CONFIG_DEBUG_RODATA_TEST is not set +CONFIG_ARCH_HAS_DEBUG_WX=y +# CONFIG_DEBUG_WX is not set +CONFIG_GENERIC_PTDUMP=y +# CONFIG_PTDUMP_DEBUGFS is not set +# CONFIG_DEBUG_OBJECTS is not set +# CONFIG_SHRINKER_DEBUG is not set +CONFIG_HAVE_DEBUG_KMEMLEAK=y +# CONFIG_DEBUG_KMEMLEAK is not set +# CONFIG_DEBUG_STACK_USAGE is not set +CONFIG_SCHED_STACK_END_CHECK=y +CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y +# CONFIG_DEBUG_VM is not set +# CONFIG_DEBUG_VM_PGTABLE is not set +CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y +# CONFIG_DEBUG_VIRTUAL is not set +CONFIG_DEBUG_MEMORY_INIT=y +# CONFIG_DEBUG_PER_CPU_MAPS is not set +CONFIG_HAVE_ARCH_KASAN=y +CONFIG_HAVE_ARCH_KASAN_SW_TAGS=y +CONFIG_HAVE_ARCH_KASAN_VMALLOC=y +CONFIG_CC_HAS_KASAN_GENERIC=y +CONFIG_CC_HAS_KASAN_SW_TAGS=y +CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y +# CONFIG_KASAN is not set +CONFIG_HAVE_ARCH_KFENCE=y +# CONFIG_KFENCE is not set +# end of Memory Debugging + +# CONFIG_DEBUG_SHIRQ is not set + +# +# Debug Oops, Lockups and Hangs +# +CONFIG_PANIC_ON_OOPS=y 
+CONFIG_PANIC_ON_OOPS_VALUE=1 +CONFIG_PANIC_TIMEOUT=0 +# CONFIG_SOFTLOCKUP_DETECTOR is not set +# CONFIG_DETECT_HUNG_TASK is not set +# CONFIG_WQ_WATCHDOG is not set +# CONFIG_TEST_LOCKUP is not set +# end of Debug Oops, Lockups and Hangs + +# +# Scheduler Debugging +# +CONFIG_SCHED_DEBUG=y +CONFIG_SCHED_INFO=y +CONFIG_SCHEDSTATS=y +# end of Scheduler Debugging + +# CONFIG_DEBUG_TIMEKEEPING is not set + +# +# Lock Debugging (spinlocks, mutexes, etc...) +# +CONFIG_LOCK_DEBUGGING_SUPPORT=y +# CONFIG_PROVE_LOCKING is not set +# CONFIG_LOCK_STAT is not set +# CONFIG_DEBUG_RT_MUTEXES is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_DEBUG_MUTEXES is not set +# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set +# CONFIG_DEBUG_RWSEMS is not set +# CONFIG_DEBUG_LOCK_ALLOC is not set +# CONFIG_DEBUG_ATOMIC_SLEEP is not set +# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set +# CONFIG_LOCK_TORTURE_TEST is not set +# CONFIG_WW_MUTEX_SELFTEST is not set +# CONFIG_SCF_TORTURE_TEST is not set +# CONFIG_CSD_LOCK_WAIT_DEBUG is not set +# end of Lock Debugging (spinlocks, mutexes, etc...) + +# CONFIG_DEBUG_IRQFLAGS is not set +CONFIG_STACKTRACE=y +# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set +# CONFIG_DEBUG_KOBJECT is not set + +# +# Debug kernel data structures +# +CONFIG_DEBUG_LIST=y +# CONFIG_DEBUG_PLIST is not set +CONFIG_DEBUG_SG=y +CONFIG_DEBUG_NOTIFIERS=y +# CONFIG_BUG_ON_DATA_CORRUPTION is not set +# CONFIG_DEBUG_MAPLE_TREE is not set +# end of Debug kernel data structures + +CONFIG_DEBUG_CREDENTIALS=y + +# +# RCU Debugging +# +# CONFIG_RCU_SCALE_TEST is not set +# CONFIG_RCU_TORTURE_TEST is not set +# CONFIG_RCU_REF_SCALE_TEST is not set +CONFIG_RCU_CPU_STALL_TIMEOUT=60 +CONFIG_RCU_EXP_CPU_STALL_TIMEOUT=0 +# CONFIG_RCU_TRACE is not set +# CONFIG_RCU_EQS_DEBUG is not set +# end of RCU Debugging + +# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set +# CONFIG_LATENCYTOP is not set +CONFIG_NOP_TRACER=y +CONFIG_HAVE_FUNCTION_TRACER=y +CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y +CONFIG_HAVE_DYNAMIC_FTRACE=y +CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y +CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y +CONFIG_HAVE_SYSCALL_TRACEPOINTS=y +CONFIG_HAVE_C_RECORDMCOUNT=y +CONFIG_TRACER_MAX_TRACE=y +CONFIG_TRACE_CLOCK=y +CONFIG_RING_BUFFER=y +CONFIG_EVENT_TRACING=y +CONFIG_CONTEXT_SWITCH_TRACER=y +CONFIG_RING_BUFFER_ALLOW_SWAP=y +CONFIG_TRACING=y +CONFIG_GENERIC_TRACER=y +CONFIG_TRACING_SUPPORT=y +CONFIG_FTRACE=y +# CONFIG_BOOTTIME_TRACING is not set +CONFIG_FUNCTION_TRACER=y +CONFIG_FUNCTION_GRAPH_TRACER=y +CONFIG_DYNAMIC_FTRACE=y +CONFIG_DYNAMIC_FTRACE_WITH_REGS=y +CONFIG_FUNCTION_PROFILER=y +CONFIG_STACK_TRACER=y +# CONFIG_IRQSOFF_TRACER is not set +CONFIG_SCHED_TRACER=y +CONFIG_HWLAT_TRACER=y +# CONFIG_OSNOISE_TRACER is not set +# CONFIG_TIMERLAT_TRACER is not set +CONFIG_FTRACE_SYSCALLS=y +CONFIG_TRACER_SNAPSHOT=y +CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y +CONFIG_BRANCH_PROFILE_NONE=y +# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set +# CONFIG_BLK_DEV_IO_TRACE is not set +CONFIG_KPROBE_EVENTS=y +# CONFIG_KPROBE_EVENTS_ON_NOTRACE is not set +CONFIG_UPROBE_EVENTS=y +CONFIG_BPF_EVENTS=y +CONFIG_DYNAMIC_EVENTS=y +CONFIG_PROBE_EVENTS=y +CONFIG_FTRACE_MCOUNT_RECORD=y +CONFIG_FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY=y +# CONFIG_SYNTH_EVENTS is not set +# CONFIG_HIST_TRIGGERS is not set +# CONFIG_TRACE_EVENT_INJECT is not set +# CONFIG_TRACEPOINT_BENCHMARK is not set +# CONFIG_RING_BUFFER_BENCHMARK is not set +# CONFIG_TRACE_EVAL_MAP_FILE is not set +# CONFIG_FTRACE_RECORD_RECURSION is not set +# CONFIG_FTRACE_STARTUP_TEST is not set +# 
CONFIG_RING_BUFFER_STARTUP_TEST is not set +# CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS is not set +# CONFIG_PREEMPTIRQ_DELAY_TEST is not set +# CONFIG_KPROBE_EVENT_GEN_TEST is not set +# CONFIG_RV is not set +# CONFIG_SAMPLES is not set +# CONFIG_STRICT_DEVMEM is not set + +# +# arm64 Debugging +# +# CONFIG_PID_IN_CONTEXTIDR is not set +# CONFIG_DEBUG_EFI is not set +# CONFIG_ARM64_RELOC_TEST is not set +# CONFIG_CORESIGHT is not set +# end of arm64 Debugging + +# +# Kernel Testing and Coverage +# +# CONFIG_KUNIT is not set +# CONFIG_NOTIFIER_ERROR_INJECTION is not set +# CONFIG_FUNCTION_ERROR_INJECTION is not set +# CONFIG_FAULT_INJECTION is not set +CONFIG_ARCH_HAS_KCOV=y +CONFIG_CC_HAS_SANCOV_TRACE_PC=y +# CONFIG_RUNTIME_TESTING_MENU is not set +CONFIG_ARCH_USE_MEMTEST=y +# CONFIG_MEMTEST is not set +# CONFIG_HYPERV_TESTING is not set +# end of Kernel Testing and Coverage + +# +# Rust hacking +# +# end of Rust hacking +# end of Kernel hacking diff --git a/config/kernel/linux-wsl2-arm64-edge.config b/config/kernel/linux-wsl2-arm64-edge.config new file mode 100644 index 000000000000..05f9a929c91d --- /dev/null +++ b/config/kernel/linux-wsl2-arm64-edge.config @@ -0,0 +1,4578 @@ +# +# Automatically generated file; DO NOT EDIT. +# Linux/arm64 6.6.2 Kernel Configuration +# +CONFIG_CC_VERSION_TEXT="aarch64-linux-gnu-gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0" +CONFIG_CC_IS_GCC=y +CONFIG_GCC_VERSION=110400 +CONFIG_CLANG_VERSION=0 +CONFIG_AS_IS_GNU=y +CONFIG_AS_VERSION=23800 +CONFIG_LD_IS_BFD=y +CONFIG_LD_VERSION=23800 +CONFIG_LLD_VERSION=0 +CONFIG_CC_CAN_LINK=y +CONFIG_CC_CAN_LINK_STATIC=y +CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y +CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT=y +CONFIG_CC_HAS_ASM_INLINE=y +CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y +CONFIG_PAHOLE_VERSION=125 +CONFIG_IRQ_WORK=y +CONFIG_BUILDTIME_TABLE_SORT=y +CONFIG_THREAD_INFO_IN_TASK=y + +# +# General setup +# +CONFIG_INIT_ENV_ARG_LIMIT=32 +# CONFIG_COMPILE_TEST is not set +# CONFIG_WERROR is not set +CONFIG_LOCALVERSION="" +# CONFIG_LOCALVERSION_AUTO is not set +CONFIG_BUILD_SALT="" +CONFIG_DEFAULT_INIT="" +CONFIG_DEFAULT_HOSTNAME="(none)" +CONFIG_SYSVIPC=y +CONFIG_SYSVIPC_SYSCTL=y +CONFIG_POSIX_MQUEUE=y +CONFIG_POSIX_MQUEUE_SYSCTL=y +# CONFIG_WATCH_QUEUE is not set +CONFIG_CROSS_MEMORY_ATTACH=y +# CONFIG_USELIB is not set +CONFIG_AUDIT=y +CONFIG_HAVE_ARCH_AUDITSYSCALL=y +CONFIG_AUDITSYSCALL=y + +# +# IRQ subsystem +# +CONFIG_GENERIC_IRQ_PROBE=y +CONFIG_GENERIC_IRQ_SHOW=y +CONFIG_GENERIC_IRQ_SHOW_LEVEL=y +CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y +CONFIG_HARDIRQS_SW_RESEND=y +CONFIG_IRQ_DOMAIN=y +CONFIG_IRQ_DOMAIN_HIERARCHY=y +CONFIG_GENERIC_IRQ_IPI=y +CONFIG_GENERIC_MSI_IRQ=y +CONFIG_IRQ_MSI_IOMMU=y +CONFIG_IRQ_FORCED_THREADING=y +CONFIG_SPARSE_IRQ=y +# CONFIG_GENERIC_IRQ_DEBUGFS is not set +# end of IRQ subsystem + +CONFIG_GENERIC_TIME_VSYSCALL=y +CONFIG_GENERIC_CLOCKEVENTS=y +CONFIG_ARCH_HAS_TICK_BROADCAST=y +CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y +CONFIG_HAVE_POSIX_CPU_TIMERS_TASK_WORK=y +CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y +CONFIG_CONTEXT_TRACKING=y +CONFIG_CONTEXT_TRACKING_IDLE=y + +# +# Timers subsystem +# +CONFIG_TICK_ONESHOT=y +CONFIG_NO_HZ_COMMON=y +# CONFIG_HZ_PERIODIC is not set +CONFIG_NO_HZ_IDLE=y +# CONFIG_NO_HZ_FULL is not set +# CONFIG_NO_HZ is not set +CONFIG_HIGH_RES_TIMERS=y +# end of Timers subsystem + +CONFIG_BPF=y +CONFIG_HAVE_EBPF_JIT=y +CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y + +# +# BPF subsystem +# +CONFIG_BPF_SYSCALL=y +CONFIG_BPF_JIT=y +CONFIG_BPF_JIT_ALWAYS_ON=y +CONFIG_BPF_JIT_DEFAULT_ON=y +CONFIG_BPF_UNPRIV_DEFAULT_OFF=y +# 
CONFIG_BPF_PRELOAD is not set +# CONFIG_BPF_LSM is not set +# end of BPF subsystem + +CONFIG_PREEMPT_NONE_BUILD=y +CONFIG_PREEMPT_NONE=y +# CONFIG_PREEMPT_VOLUNTARY is not set +# CONFIG_PREEMPT is not set +# CONFIG_PREEMPT_DYNAMIC is not set + +# +# CPU/Task time and stats accounting +# +CONFIG_TICK_CPU_ACCOUNTING=y +# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set +CONFIG_IRQ_TIME_ACCOUNTING=y +CONFIG_HAVE_SCHED_AVG_IRQ=y +CONFIG_BSD_PROCESS_ACCT=y +# CONFIG_BSD_PROCESS_ACCT_V3 is not set +CONFIG_TASKSTATS=y +CONFIG_TASK_DELAY_ACCT=y +CONFIG_TASK_XACCT=y +CONFIG_TASK_IO_ACCOUNTING=y +# CONFIG_PSI is not set +# end of CPU/Task time and stats accounting + +# CONFIG_CPU_ISOLATION is not set + +# +# RCU Subsystem +# +CONFIG_TREE_RCU=y +# CONFIG_RCU_EXPERT is not set +CONFIG_TREE_SRCU=y +CONFIG_TASKS_RCU_GENERIC=y +CONFIG_TASKS_RUDE_RCU=y +CONFIG_TASKS_TRACE_RCU=y +CONFIG_RCU_STALL_COMMON=y +CONFIG_RCU_NEED_SEGCBLIST=y +# end of RCU Subsystem + +CONFIG_IKCONFIG=y +CONFIG_IKCONFIG_PROC=y +# CONFIG_IKHEADERS is not set +CONFIG_LOG_BUF_SHIFT=17 +CONFIG_LOG_CPU_MAX_BUF_SHIFT=12 +# CONFIG_PRINTK_INDEX is not set +CONFIG_GENERIC_SCHED_CLOCK=y + +# +# Scheduler features +# +# end of Scheduler features + +CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y +CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y +CONFIG_CC_HAS_INT128=y +CONFIG_CC_IMPLICIT_FALLTHROUGH="-Wimplicit-fallthrough=5" +CONFIG_GCC11_NO_ARRAY_BOUNDS=y +CONFIG_CC_NO_ARRAY_BOUNDS=y +CONFIG_ARCH_SUPPORTS_INT128=y +CONFIG_CGROUPS=y +CONFIG_PAGE_COUNTER=y +# CONFIG_CGROUP_FAVOR_DYNMODS is not set +CONFIG_MEMCG=y +CONFIG_MEMCG_KMEM=y +CONFIG_BLK_CGROUP=y +CONFIG_CGROUP_WRITEBACK=y +CONFIG_CGROUP_SCHED=y +CONFIG_FAIR_GROUP_SCHED=y +CONFIG_CFS_BANDWIDTH=y +CONFIG_RT_GROUP_SCHED=y +CONFIG_SCHED_MM_CID=y +CONFIG_CGROUP_PIDS=y +CONFIG_CGROUP_RDMA=y +CONFIG_CGROUP_FREEZER=y +CONFIG_CGROUP_HUGETLB=y +CONFIG_CPUSETS=y +CONFIG_PROC_PID_CPUSET=y +CONFIG_CGROUP_DEVICE=y +CONFIG_CGROUP_CPUACCT=y +CONFIG_CGROUP_PERF=y +CONFIG_CGROUP_BPF=y +CONFIG_CGROUP_MISC=y +# CONFIG_CGROUP_DEBUG is not set +CONFIG_SOCK_CGROUP_DATA=y +CONFIG_NAMESPACES=y +CONFIG_UTS_NS=y +CONFIG_TIME_NS=y +CONFIG_IPC_NS=y +CONFIG_USER_NS=y +CONFIG_PID_NS=y +CONFIG_NET_NS=y +CONFIG_CHECKPOINT_RESTORE=y +# CONFIG_SCHED_AUTOGROUP is not set +# CONFIG_RELAY is not set +CONFIG_BLK_DEV_INITRD=y +CONFIG_INITRAMFS_SOURCE="" +CONFIG_RD_GZIP=y +# CONFIG_RD_BZIP2 is not set +# CONFIG_RD_LZMA is not set +# CONFIG_RD_XZ is not set +# CONFIG_RD_LZO is not set +# CONFIG_RD_LZ4 is not set +CONFIG_RD_ZSTD=y +# CONFIG_BOOT_CONFIG is not set +# CONFIG_INITRAMFS_PRESERVE_MTIME is not set +CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y +# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set +CONFIG_LD_ORPHAN_WARN=y +CONFIG_LD_ORPHAN_WARN_LEVEL="warn" +CONFIG_SYSCTL=y +CONFIG_SYSCTL_EXCEPTION_TRACE=y +CONFIG_EXPERT=y +CONFIG_MULTIUSER=y +CONFIG_SGETMASK_SYSCALL=y +# CONFIG_SYSFS_SYSCALL is not set +CONFIG_FHANDLE=y +CONFIG_POSIX_TIMERS=y +CONFIG_PRINTK=y +CONFIG_BUG=y +CONFIG_ELF_CORE=y +CONFIG_BASE_FULL=y +CONFIG_FUTEX=y +CONFIG_FUTEX_PI=y +CONFIG_EPOLL=y +CONFIG_SIGNALFD=y +CONFIG_TIMERFD=y +CONFIG_EVENTFD=y +CONFIG_SHMEM=y +CONFIG_AIO=y +CONFIG_IO_URING=y +CONFIG_ADVISE_SYSCALLS=y +CONFIG_MEMBARRIER=y +CONFIG_KALLSYMS=y +# CONFIG_KALLSYMS_SELFTEST is not set +# CONFIG_KALLSYMS_ALL is not set +CONFIG_KALLSYMS_BASE_RELATIVE=y +CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y +CONFIG_KCMP=y +CONFIG_RSEQ=y +CONFIG_CACHESTAT_SYSCALL=y +# CONFIG_DEBUG_RSEQ is not set +CONFIG_HAVE_PERF_EVENTS=y +# CONFIG_PC104 is not set + +# +# Kernel Performance Events And Counters 
+# +CONFIG_PERF_EVENTS=y +# CONFIG_DEBUG_PERF_USE_VMALLOC is not set +# end of Kernel Performance Events And Counters + +CONFIG_SYSTEM_DATA_VERIFICATION=y +# CONFIG_PROFILING is not set +CONFIG_TRACEPOINTS=y + +# +# Kexec and crash features +# +CONFIG_CRASH_CORE=y +# CONFIG_KEXEC_FILE is not set +# end of Kexec and crash features +# end of General setup + +CONFIG_ARM64=y +CONFIG_GCC_SUPPORTS_DYNAMIC_FTRACE_WITH_ARGS=y +CONFIG_64BIT=y +CONFIG_MMU=y +CONFIG_ARM64_PAGE_SHIFT=12 +CONFIG_ARM64_CONT_PTE_SHIFT=4 +CONFIG_ARM64_CONT_PMD_SHIFT=4 +CONFIG_ARCH_MMAP_RND_BITS_MIN=18 +CONFIG_ARCH_MMAP_RND_BITS_MAX=33 +CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=11 +CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16 +CONFIG_STACKTRACE_SUPPORT=y +CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000 +CONFIG_LOCKDEP_SUPPORT=y +CONFIG_GENERIC_BUG=y +CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y +CONFIG_GENERIC_HWEIGHT=y +CONFIG_GENERIC_CSUM=y +CONFIG_GENERIC_CALIBRATE_DELAY=y +CONFIG_SMP=y +CONFIG_KERNEL_MODE_NEON=y +CONFIG_FIX_EARLYCON_MEM=y +CONFIG_PGTABLE_LEVELS=4 +CONFIG_ARCH_SUPPORTS_UPROBES=y +CONFIG_ARCH_PROC_KCORE_TEXT=y +CONFIG_BUILTIN_RETURN_ADDRESS_STRIPS_PAC=y + +# +# Platform selection +# +# CONFIG_ARCH_ACTIONS is not set +# CONFIG_ARCH_SUNXI is not set +# CONFIG_ARCH_ALPINE is not set +# CONFIG_ARCH_APPLE is not set +# CONFIG_ARCH_BCM is not set +# CONFIG_ARCH_BERLIN is not set +# CONFIG_ARCH_BITMAIN is not set +# CONFIG_ARCH_EXYNOS is not set +# CONFIG_ARCH_SPARX5 is not set +# CONFIG_ARCH_K3 is not set +# CONFIG_ARCH_LG1K is not set +# CONFIG_ARCH_HISI is not set +# CONFIG_ARCH_KEEMBAY is not set +# CONFIG_ARCH_MEDIATEK is not set +# CONFIG_ARCH_MESON is not set +# CONFIG_ARCH_MVEBU is not set +# CONFIG_ARCH_NXP is not set +# CONFIG_ARCH_MA35 is not set +# CONFIG_ARCH_NPCM is not set +# CONFIG_ARCH_QCOM is not set +# CONFIG_ARCH_REALTEK is not set +# CONFIG_ARCH_RENESAS is not set +# CONFIG_ARCH_ROCKCHIP is not set +# CONFIG_ARCH_SEATTLE is not set +# CONFIG_ARCH_INTEL_SOCFPGA is not set +# CONFIG_ARCH_STM32 is not set +# CONFIG_ARCH_SYNQUACER is not set +# CONFIG_ARCH_TEGRA is not set +# CONFIG_ARCH_SPRD is not set +# CONFIG_ARCH_THUNDER is not set +# CONFIG_ARCH_THUNDER2 is not set +# CONFIG_ARCH_UNIPHIER is not set +CONFIG_ARCH_VEXPRESS=y +# CONFIG_ARCH_VISCONTI is not set +# CONFIG_ARCH_XGENE is not set +# CONFIG_ARCH_ZYNQMP is not set +# end of Platform selection + +# +# Kernel Features +# + +# +# ARM errata workarounds via the alternatives framework +# +CONFIG_AMPERE_ERRATUM_AC03_CPU_38=y +CONFIG_ARM64_WORKAROUND_CLEAN_CACHE=y +CONFIG_ARM64_ERRATUM_826319=y +CONFIG_ARM64_ERRATUM_827319=y +CONFIG_ARM64_ERRATUM_824069=y +CONFIG_ARM64_ERRATUM_819472=y +CONFIG_ARM64_ERRATUM_832075=y +CONFIG_ARM64_ERRATUM_843419=y +CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419=y +CONFIG_ARM64_ERRATUM_1024718=y +CONFIG_ARM64_WORKAROUND_SPECULATIVE_AT=y +# CONFIG_ARM64_ERRATUM_1165522 is not set +CONFIG_ARM64_ERRATUM_1319367=y +CONFIG_ARM64_ERRATUM_1530923=y +# CONFIG_ARM64_ERRATUM_2441007 is not set +# CONFIG_ARM64_ERRATUM_1286807 is not set +# CONFIG_ARM64_ERRATUM_1463225 is not set +# CONFIG_ARM64_ERRATUM_1542419 is not set +CONFIG_ARM64_ERRATUM_1508412=y +CONFIG_ARM64_ERRATUM_2051678=y +CONFIG_ARM64_ERRATUM_2077057=y +CONFIG_ARM64_ERRATUM_2658417=y +CONFIG_ARM64_WORKAROUND_TSB_FLUSH_FAILURE=y +CONFIG_ARM64_ERRATUM_2054223=y +CONFIG_ARM64_ERRATUM_2067961=y +# CONFIG_ARM64_ERRATUM_2441009 is not set +# CONFIG_ARM64_ERRATUM_2457168 is not set +CONFIG_ARM64_ERRATUM_2645198=y +CONFIG_ARM64_ERRATUM_2966298=y +# CONFIG_CAVIUM_ERRATUM_22375 is not set 
+# CONFIG_CAVIUM_ERRATUM_23154 is not set +# CONFIG_CAVIUM_ERRATUM_27456 is not set +# CONFIG_CAVIUM_ERRATUM_30115 is not set +# CONFIG_CAVIUM_TX2_ERRATUM_219 is not set +# CONFIG_FUJITSU_ERRATUM_010001 is not set +# CONFIG_HISILICON_ERRATUM_161600802 is not set +# CONFIG_QCOM_FALKOR_ERRATUM_1003 is not set +# CONFIG_QCOM_FALKOR_ERRATUM_1009 is not set +# CONFIG_QCOM_QDF2400_ERRATUM_0065 is not set +# CONFIG_QCOM_FALKOR_ERRATUM_E1041 is not set +# CONFIG_NVIDIA_CARMEL_CNP_ERRATUM is not set +CONFIG_ROCKCHIP_ERRATUM_3588001=y +CONFIG_SOCIONEXT_SYNQUACER_PREITS=y +# end of ARM errata workarounds via the alternatives framework + +CONFIG_ARM64_4K_PAGES=y +# CONFIG_ARM64_16K_PAGES is not set +# CONFIG_ARM64_64K_PAGES is not set +# CONFIG_ARM64_VA_BITS_39 is not set +CONFIG_ARM64_VA_BITS_48=y +CONFIG_ARM64_VA_BITS=48 +CONFIG_ARM64_PA_BITS_48=y +CONFIG_ARM64_PA_BITS=48 +# CONFIG_CPU_BIG_ENDIAN is not set +CONFIG_CPU_LITTLE_ENDIAN=y +CONFIG_SCHED_MC=y +# CONFIG_SCHED_CLUSTER is not set +# CONFIG_SCHED_SMT is not set +CONFIG_NR_CPUS=256 +# CONFIG_HOTPLUG_CPU is not set +# CONFIG_NUMA is not set +# CONFIG_HZ_100 is not set +CONFIG_HZ_250=y +# CONFIG_HZ_300 is not set +# CONFIG_HZ_1000 is not set +CONFIG_HZ=250 +CONFIG_SCHED_HRTICK=y +CONFIG_ARCH_SPARSEMEM_ENABLE=y +CONFIG_HW_PERF_EVENTS=y +CONFIG_PARAVIRT=y +# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set +CONFIG_ARCH_SUPPORTS_KEXEC_FILE=y +CONFIG_ARCH_SUPPORTS_KEXEC_SIG=y +CONFIG_ARCH_SUPPORTS_KEXEC_IMAGE_VERIFY_SIG=y +CONFIG_ARCH_DEFAULT_KEXEC_IMAGE_VERIFY_SIG=y +CONFIG_ARCH_SUPPORTS_CRASH_DUMP=y +# CONFIG_XEN is not set +CONFIG_ARCH_FORCE_MAX_ORDER=10 +# CONFIG_UNMAP_KERNEL_AT_EL0 is not set +# CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY is not set +# CONFIG_RODATA_FULL_DEFAULT_ENABLED is not set +# CONFIG_ARM64_SW_TTBR0_PAN is not set +# CONFIG_ARM64_TAGGED_ADDR_ABI is not set +# CONFIG_COMPAT is not set + +# +# ARMv8.1 architectural features +# +CONFIG_ARM64_HW_AFDBM=y +CONFIG_ARM64_PAN=y +CONFIG_AS_HAS_LSE_ATOMICS=y +CONFIG_ARM64_LSE_ATOMICS=y +CONFIG_ARM64_USE_LSE_ATOMICS=y +# end of ARMv8.1 architectural features + +# +# ARMv8.2 architectural features +# +CONFIG_AS_HAS_ARMV8_2=y +CONFIG_AS_HAS_SHA3=y +# CONFIG_ARM64_PMEM is not set +CONFIG_ARM64_RAS_EXTN=y +# CONFIG_ARM64_CNP is not set +# end of ARMv8.2 architectural features + +# +# ARMv8.3 architectural features +# +# CONFIG_ARM64_PTR_AUTH is not set +CONFIG_CC_HAS_BRANCH_PROT_PAC_RET=y +CONFIG_CC_HAS_SIGN_RETURN_ADDRESS=y +CONFIG_AS_HAS_ARMV8_3=y +CONFIG_AS_HAS_CFI_NEGATE_RA_STATE=y +CONFIG_AS_HAS_LDAPR=y +# end of ARMv8.3 architectural features + +# +# ARMv8.4 architectural features +# +CONFIG_ARM64_AMU_EXTN=y +CONFIG_AS_HAS_ARMV8_4=y +CONFIG_ARM64_TLB_RANGE=y +# end of ARMv8.4 architectural features + +# +# ARMv8.5 architectural features +# +CONFIG_AS_HAS_ARMV8_5=y +CONFIG_ARM64_BTI=y +CONFIG_CC_HAS_BRANCH_PROT_PAC_RET_BTI=y +CONFIG_ARM64_E0PD=y +CONFIG_ARM64_AS_HAS_MTE=y +# end of ARMv8.5 architectural features + +# +# ARMv8.7 architectural features +# +# CONFIG_ARM64_EPAN is not set +# end of ARMv8.7 architectural features + +CONFIG_ARM64_SVE=y +CONFIG_ARM64_SME=y +# CONFIG_ARM64_PSEUDO_NMI is not set +CONFIG_RELOCATABLE=y +# CONFIG_RANDOMIZE_BASE is not set +CONFIG_CC_HAVE_STACKPROTECTOR_SYSREG=y +CONFIG_STACKPROTECTOR_PER_TASK=y +# end of Kernel Features + +# +# Boot options +# +# CONFIG_ARM64_ACPI_PARKING_PROTOCOL is not set +CONFIG_CMDLINE="" +CONFIG_EFI_STUB=y +CONFIG_EFI=y +# CONFIG_DMI is not set +# end of Boot options + +# +# Power management options +# +# CONFIG_SUSPEND is not set 
+CONFIG_PM=y +# CONFIG_PM_DEBUG is not set +CONFIG_PM_CLK=y +CONFIG_PM_GENERIC_DOMAINS=y +# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set +CONFIG_PM_GENERIC_DOMAINS_OF=y +# CONFIG_ENERGY_MODEL is not set +CONFIG_ARCH_SUSPEND_POSSIBLE=y +# end of Power management options + +# +# CPU Power Management +# + +# +# CPU Idle +# +# CONFIG_CPU_IDLE is not set +# end of CPU Idle + +# +# CPU Frequency scaling +# +CONFIG_CPU_FREQ=y +# CONFIG_CPU_FREQ_STAT is not set +CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y +# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set +CONFIG_CPU_FREQ_GOV_PERFORMANCE=y +# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set +# CONFIG_CPU_FREQ_GOV_USERSPACE is not set +# CONFIG_CPU_FREQ_GOV_ONDEMAND is not set +# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set +# CONFIG_CPU_FREQ_GOV_SCHEDUTIL is not set + +# +# CPU frequency scaling drivers +# +# CONFIG_CPUFREQ_DT is not set +# CONFIG_CPUFREQ_DT_PLATDEV is not set +# end of CPU Frequency scaling +# end of CPU Power Management + +CONFIG_ARCH_SUPPORTS_ACPI=y +CONFIG_ACPI=y +CONFIG_ACPI_GENERIC_GSI=y +CONFIG_ACPI_CCA_REQUIRED=y +# CONFIG_ACPI_DEBUGGER is not set +CONFIG_ACPI_SPCR_TABLE=y +# CONFIG_ACPI_FPDT is not set +# CONFIG_ACPI_EC_DEBUGFS is not set +CONFIG_ACPI_AC=y +CONFIG_ACPI_BATTERY=y +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_TINY_POWER_BUTTON is not set +# CONFIG_ACPI_DOCK is not set +CONFIG_ACPI_MCFG=y +# CONFIG_ACPI_PROCESSOR is not set +CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y +# CONFIG_ACPI_TABLE_UPGRADE is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_PCI_SLOT is not set +# CONFIG_ACPI_CONTAINER is not set +# CONFIG_ACPI_HOTPLUG_MEMORY is not set +# CONFIG_ACPI_HED is not set +# CONFIG_ACPI_CUSTOM_METHOD is not set +# CONFIG_ACPI_BGRT is not set +CONFIG_ACPI_REDUCED_HARDWARE_ONLY=y +CONFIG_HAVE_ACPI_APEI=y +# CONFIG_ACPI_APEI is not set +# CONFIG_ACPI_CONFIGFS is not set +# CONFIG_ACPI_PFRUT is not set +CONFIG_ACPI_IORT=y +CONFIG_ACPI_GTDT=y +CONFIG_ACPI_APMT=y +CONFIG_ACPI_PPTT=y +# CONFIG_ACPI_FFH is not set +# CONFIG_PMIC_OPREGION is not set +CONFIG_ACPI_PRMT=y +CONFIG_IRQ_BYPASS_MANAGER=y +CONFIG_HAVE_KVM=y +# CONFIG_VIRTUALIZATION is not set + +# +# General architecture-dependent options +# +CONFIG_KPROBES=y +CONFIG_JUMP_LABEL=y +# CONFIG_STATIC_KEYS_SELFTEST is not set +CONFIG_UPROBES=y +CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y +CONFIG_KRETPROBES=y +CONFIG_HAVE_IOREMAP_PROT=y +CONFIG_HAVE_KPROBES=y +CONFIG_HAVE_KRETPROBES=y +CONFIG_ARCH_CORRECT_STACKTRACE_ON_KRETPROBE=y +CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y +CONFIG_HAVE_NMI=y +CONFIG_TRACE_IRQFLAGS_SUPPORT=y +CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y +CONFIG_HAVE_ARCH_TRACEHOOK=y +CONFIG_HAVE_DMA_CONTIGUOUS=y +CONFIG_GENERIC_SMP_IDLE_THREAD=y +CONFIG_GENERIC_IDLE_POLL_SETUP=y +CONFIG_ARCH_HAS_FORTIFY_SOURCE=y +CONFIG_ARCH_HAS_KEEPINITRD=y +CONFIG_ARCH_HAS_SET_MEMORY=y +CONFIG_ARCH_HAS_SET_DIRECT_MAP=y +CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y +CONFIG_ARCH_WANTS_NO_INSTR=y +CONFIG_HAVE_ASM_MODVERSIONS=y +CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y +CONFIG_HAVE_RSEQ=y +CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y +CONFIG_HAVE_HW_BREAKPOINT=y +CONFIG_HAVE_PERF_REGS=y +CONFIG_HAVE_PERF_USER_STACK_DUMP=y +CONFIG_HAVE_ARCH_JUMP_LABEL=y +CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y +CONFIG_MMU_GATHER_TABLE_FREE=y +CONFIG_MMU_GATHER_RCU_TABLE_FREE=y +CONFIG_MMU_LAZY_TLB_REFCOUNT=y 
+CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y +CONFIG_ARCH_HAS_NMI_SAFE_THIS_CPU_OPS=y +CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y +CONFIG_HAVE_CMPXCHG_LOCAL=y +CONFIG_HAVE_CMPXCHG_DOUBLE=y +CONFIG_HAVE_ARCH_SECCOMP=y +CONFIG_HAVE_ARCH_SECCOMP_FILTER=y +CONFIG_SECCOMP=y +CONFIG_SECCOMP_FILTER=y +# CONFIG_SECCOMP_CACHE_DEBUG is not set +CONFIG_HAVE_ARCH_STACKLEAK=y +CONFIG_HAVE_STACKPROTECTOR=y +CONFIG_STACKPROTECTOR=y +CONFIG_STACKPROTECTOR_STRONG=y +CONFIG_ARCH_SUPPORTS_LTO_CLANG=y +CONFIG_ARCH_SUPPORTS_LTO_CLANG_THIN=y +CONFIG_LTO_NONE=y +CONFIG_ARCH_SUPPORTS_CFI_CLANG=y +CONFIG_HAVE_CONTEXT_TRACKING_USER=y +CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y +CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y +CONFIG_HAVE_MOVE_PUD=y +CONFIG_HAVE_MOVE_PMD=y +CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y +CONFIG_HAVE_ARCH_HUGE_VMAP=y +CONFIG_HAVE_ARCH_HUGE_VMALLOC=y +CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y +CONFIG_ARCH_WANT_PMD_MKWRITE=y +CONFIG_HAVE_MOD_ARCH_SPECIFIC=y +CONFIG_MODULES_USE_ELF_RELA=y +CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK=y +CONFIG_SOFTIRQ_ON_OWN_STACK=y +CONFIG_ARCH_HAS_ELF_RANDOMIZE=y +CONFIG_HAVE_ARCH_MMAP_RND_BITS=y +CONFIG_ARCH_MMAP_RND_BITS=28 +CONFIG_PAGE_SIZE_LESS_THAN_64KB=y +CONFIG_PAGE_SIZE_LESS_THAN_256KB=y +CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT=y +CONFIG_CLONE_BACKWARDS=y +# CONFIG_COMPAT_32BIT_TIME is not set +CONFIG_HAVE_ARCH_VMAP_STACK=y +CONFIG_VMAP_STACK=y +CONFIG_HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET=y +CONFIG_RANDOMIZE_KSTACK_OFFSET=y +# CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT is not set +CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y +CONFIG_STRICT_KERNEL_RWX=y +CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y +CONFIG_STRICT_MODULE_RWX=y +CONFIG_HAVE_ARCH_COMPILER_H=y +CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y +CONFIG_ARCH_USE_MEMREMAP_PROT=y +# CONFIG_LOCK_EVENT_COUNTS is not set +CONFIG_ARCH_HAS_RELR=y +CONFIG_HAVE_PREEMPT_DYNAMIC=y +CONFIG_HAVE_PREEMPT_DYNAMIC_KEY=y +CONFIG_ARCH_WANT_LD_ORPHAN_WARN=y +CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y +CONFIG_ARCH_SUPPORTS_PAGE_TABLE_CHECK=y +CONFIG_ARCH_HAVE_TRACE_MMIO_ACCESS=y + +# +# GCOV-based kernel profiling +# +# CONFIG_GCOV_KERNEL is not set +CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y +# end of GCOV-based kernel profiling + +CONFIG_HAVE_GCC_PLUGINS=y +CONFIG_FUNCTION_ALIGNMENT_4B=y +CONFIG_FUNCTION_ALIGNMENT_8B=y +CONFIG_FUNCTION_ALIGNMENT=8 +# end of General architecture-dependent options + +CONFIG_RT_MUTEXES=y +CONFIG_BASE_SMALL=0 +CONFIG_MODULES=y +# CONFIG_MODULE_DEBUG is not set +CONFIG_MODULE_FORCE_LOAD=y +CONFIG_MODULE_UNLOAD=y +CONFIG_MODULE_FORCE_UNLOAD=y +# CONFIG_MODULE_UNLOAD_TAINT_TRACKING is not set +CONFIG_MODVERSIONS=y +CONFIG_ASM_MODVERSIONS=y +CONFIG_MODULE_SRCVERSION_ALL=y +# CONFIG_MODULE_SIG is not set +CONFIG_MODULE_COMPRESS_NONE=y +# CONFIG_MODULE_COMPRESS_GZIP is not set +# CONFIG_MODULE_COMPRESS_XZ is not set +# CONFIG_MODULE_COMPRESS_ZSTD is not set +# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set +CONFIG_MODPROBE_PATH="/sbin/modprobe" +# CONFIG_TRIM_UNUSED_KSYMS is not set +CONFIG_MODULES_TREE_LOOKUP=y +CONFIG_BLOCK=y +CONFIG_BLOCK_LEGACY_AUTOLOAD=y +CONFIG_BLK_CGROUP_PUNT_BIO=y +CONFIG_BLK_DEV_BSG_COMMON=y +CONFIG_BLK_DEV_BSGLIB=y +# CONFIG_BLK_DEV_INTEGRITY is not set +# CONFIG_BLK_DEV_ZONED is not set +# CONFIG_BLK_DEV_THROTTLING is not set +# CONFIG_BLK_WBT is not set +# CONFIG_BLK_CGROUP_IOLATENCY is not set +# CONFIG_BLK_CGROUP_IOCOST is not set +# CONFIG_BLK_CGROUP_IOPRIO is not set +# CONFIG_BLK_DEBUG_FS is not set +# CONFIG_BLK_SED_OPAL is not set +# CONFIG_BLK_INLINE_ENCRYPTION is not set + +# +# Partition Types +# +CONFIG_PARTITION_ADVANCED=y +# 
CONFIG_ACORN_PARTITION is not set +# CONFIG_AIX_PARTITION is not set +# CONFIG_OSF_PARTITION is not set +# CONFIG_AMIGA_PARTITION is not set +# CONFIG_ATARI_PARTITION is not set +# CONFIG_MAC_PARTITION is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_BSD_DISKLABEL is not set +# CONFIG_MINIX_SUBPARTITION is not set +# CONFIG_SOLARIS_X86_PARTITION is not set +# CONFIG_UNIXWARE_DISKLABEL is not set +# CONFIG_LDM_PARTITION is not set +# CONFIG_SGI_PARTITION is not set +# CONFIG_ULTRIX_PARTITION is not set +# CONFIG_SUN_PARTITION is not set +# CONFIG_KARMA_PARTITION is not set +CONFIG_EFI_PARTITION=y +# CONFIG_SYSV68_PARTITION is not set +# CONFIG_CMDLINE_PARTITION is not set +# end of Partition Types + +CONFIG_BLK_MQ_PCI=y +CONFIG_BLK_MQ_VIRTIO=y +CONFIG_BLK_PM=y +CONFIG_BLOCK_HOLDER_DEPRECATED=y +CONFIG_BLK_MQ_STACKING=y + +# +# IO Schedulers +# +# CONFIG_MQ_IOSCHED_DEADLINE is not set +# CONFIG_MQ_IOSCHED_KYBER is not set +# CONFIG_IOSCHED_BFQ is not set +# end of IO Schedulers + +CONFIG_ASN1=y +CONFIG_ARCH_INLINE_SPIN_TRYLOCK=y +CONFIG_ARCH_INLINE_SPIN_TRYLOCK_BH=y +CONFIG_ARCH_INLINE_SPIN_LOCK=y +CONFIG_ARCH_INLINE_SPIN_LOCK_BH=y +CONFIG_ARCH_INLINE_SPIN_LOCK_IRQ=y +CONFIG_ARCH_INLINE_SPIN_LOCK_IRQSAVE=y +CONFIG_ARCH_INLINE_SPIN_UNLOCK=y +CONFIG_ARCH_INLINE_SPIN_UNLOCK_BH=y +CONFIG_ARCH_INLINE_SPIN_UNLOCK_IRQ=y +CONFIG_ARCH_INLINE_SPIN_UNLOCK_IRQRESTORE=y +CONFIG_ARCH_INLINE_READ_LOCK=y +CONFIG_ARCH_INLINE_READ_LOCK_BH=y +CONFIG_ARCH_INLINE_READ_LOCK_IRQ=y +CONFIG_ARCH_INLINE_READ_LOCK_IRQSAVE=y +CONFIG_ARCH_INLINE_READ_UNLOCK=y +CONFIG_ARCH_INLINE_READ_UNLOCK_BH=y +CONFIG_ARCH_INLINE_READ_UNLOCK_IRQ=y +CONFIG_ARCH_INLINE_READ_UNLOCK_IRQRESTORE=y +CONFIG_ARCH_INLINE_WRITE_LOCK=y +CONFIG_ARCH_INLINE_WRITE_LOCK_BH=y +CONFIG_ARCH_INLINE_WRITE_LOCK_IRQ=y +CONFIG_ARCH_INLINE_WRITE_LOCK_IRQSAVE=y +CONFIG_ARCH_INLINE_WRITE_UNLOCK=y +CONFIG_ARCH_INLINE_WRITE_UNLOCK_BH=y +CONFIG_ARCH_INLINE_WRITE_UNLOCK_IRQ=y +CONFIG_ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE=y +CONFIG_INLINE_SPIN_TRYLOCK=y +CONFIG_INLINE_SPIN_TRYLOCK_BH=y +CONFIG_INLINE_SPIN_LOCK=y +CONFIG_INLINE_SPIN_LOCK_BH=y +CONFIG_INLINE_SPIN_LOCK_IRQ=y +CONFIG_INLINE_SPIN_LOCK_IRQSAVE=y +CONFIG_INLINE_SPIN_UNLOCK_BH=y +CONFIG_INLINE_SPIN_UNLOCK_IRQ=y +CONFIG_INLINE_SPIN_UNLOCK_IRQRESTORE=y +CONFIG_INLINE_READ_LOCK=y +CONFIG_INLINE_READ_LOCK_BH=y +CONFIG_INLINE_READ_LOCK_IRQ=y +CONFIG_INLINE_READ_LOCK_IRQSAVE=y +CONFIG_INLINE_READ_UNLOCK=y +CONFIG_INLINE_READ_UNLOCK_BH=y +CONFIG_INLINE_READ_UNLOCK_IRQ=y +CONFIG_INLINE_READ_UNLOCK_IRQRESTORE=y +CONFIG_INLINE_WRITE_LOCK=y +CONFIG_INLINE_WRITE_LOCK_BH=y +CONFIG_INLINE_WRITE_LOCK_IRQ=y +CONFIG_INLINE_WRITE_LOCK_IRQSAVE=y +CONFIG_INLINE_WRITE_UNLOCK=y +CONFIG_INLINE_WRITE_UNLOCK_BH=y +CONFIG_INLINE_WRITE_UNLOCK_IRQ=y +CONFIG_INLINE_WRITE_UNLOCK_IRQRESTORE=y +CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y +CONFIG_MUTEX_SPIN_ON_OWNER=y +CONFIG_RWSEM_SPIN_ON_OWNER=y +CONFIG_LOCK_SPIN_ON_OWNER=y +CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y +CONFIG_QUEUED_SPINLOCKS=y +CONFIG_ARCH_USE_QUEUED_RWLOCKS=y +CONFIG_QUEUED_RWLOCKS=y +CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y +CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y +CONFIG_FREEZER=y + +# +# Executable file formats +# +CONFIG_BINFMT_ELF=y +CONFIG_ARCH_BINFMT_ELF_STATE=y +CONFIG_ARCH_BINFMT_ELF_EXTRA_PHDRS=y +CONFIG_ARCH_HAVE_ELF_PROT=y +CONFIG_ARCH_USE_GNU_PROPERTY=y +CONFIG_ELFCORE=y +CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y +CONFIG_BINFMT_SCRIPT=y +CONFIG_BINFMT_MISC=y +CONFIG_COREDUMP=y +# end of Executable file formats + +# +# Memory Management options +# +CONFIG_SWAP=y +# CONFIG_ZSWAP is 
not set + +# +# SLAB allocator options +# +# CONFIG_SLAB_DEPRECATED is not set +CONFIG_SLUB=y +# CONFIG_SLUB_TINY is not set +# CONFIG_SLAB_MERGE_DEFAULT is not set +# CONFIG_SLAB_FREELIST_RANDOM is not set +# CONFIG_SLAB_FREELIST_HARDENED is not set +# CONFIG_SLUB_STATS is not set +# CONFIG_SLUB_CPU_PARTIAL is not set +# CONFIG_RANDOM_KMALLOC_CACHES is not set +# end of SLAB allocator options + +# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set +# CONFIG_COMPAT_BRK is not set +CONFIG_SPARSEMEM=y +CONFIG_SPARSEMEM_EXTREME=y +CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y +CONFIG_SPARSEMEM_VMEMMAP=y +CONFIG_HAVE_FAST_GUP=y +CONFIG_ARCH_KEEP_MEMBLOCK=y +CONFIG_MEMORY_ISOLATION=y +CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y +CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y +CONFIG_MEMORY_HOTPLUG=y +# CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set +CONFIG_MEMORY_HOTREMOVE=y +CONFIG_MHP_MEMMAP_ON_MEMORY=y +CONFIG_ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE=y +CONFIG_SPLIT_PTLOCK_CPUS=4 +CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y +CONFIG_MEMORY_BALLOON=y +# CONFIG_BALLOON_COMPACTION is not set +CONFIG_COMPACTION=y +CONFIG_COMPACT_UNEVICTABLE_DEFAULT=1 +CONFIG_PAGE_REPORTING=y +CONFIG_MIGRATION=y +CONFIG_DEVICE_MIGRATION=y +CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y +CONFIG_ARCH_ENABLE_THP_MIGRATION=y +CONFIG_CONTIG_ALLOC=y +CONFIG_PHYS_ADDR_T_64BIT=y +# CONFIG_KSM is not set +CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 +CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y +# CONFIG_MEMORY_FAILURE is not set +CONFIG_ARCH_WANTS_THP_SWAP=y +CONFIG_TRANSPARENT_HUGEPAGE=y +CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y +# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set +CONFIG_THP_SWAP=y +# CONFIG_READ_ONLY_THP_FOR_FS is not set +# CONFIG_CMA is not set +CONFIG_GENERIC_EARLY_IOREMAP=y +# CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set +# CONFIG_IDLE_PAGE_TRACKING is not set +CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y +CONFIG_ARCH_HAS_CURRENT_STACK_POINTER=y +CONFIG_ARCH_HAS_PTE_DEVMAP=y +CONFIG_ARCH_HAS_ZONE_DMA_SET=y +# CONFIG_ZONE_DMA is not set +# CONFIG_ZONE_DMA32 is not set +CONFIG_ZONE_DEVICE=y +# CONFIG_DEVICE_PRIVATE is not set +CONFIG_VM_EVENT_COUNTERS=y +# CONFIG_PERCPU_STATS is not set +# CONFIG_GUP_TEST is not set +# CONFIG_DMAPOOL_TEST is not set +CONFIG_ARCH_HAS_PTE_SPECIAL=y +CONFIG_MEMFD_CREATE=y +CONFIG_SECRETMEM=y +CONFIG_ANON_VMA_NAME=y +CONFIG_USERFAULTFD=y +CONFIG_HAVE_ARCH_USERFAULTFD_MINOR=y +# CONFIG_LRU_GEN is not set +CONFIG_ARCH_SUPPORTS_PER_VMA_LOCK=y +CONFIG_PER_VMA_LOCK=y +CONFIG_LOCK_MM_AND_FIND_VMA=y + +# +# Data Access Monitoring +# +# CONFIG_DAMON is not set +# end of Data Access Monitoring +# end of Memory Management options + +CONFIG_NET=y +CONFIG_NET_INGRESS=y +CONFIG_NET_EGRESS=y +CONFIG_NET_XGRESS=y +CONFIG_SKB_EXTENSIONS=y + +# +# Networking options +# +CONFIG_PACKET=y +CONFIG_PACKET_DIAG=y +CONFIG_UNIX=y +CONFIG_UNIX_SCM=y +CONFIG_AF_UNIX_OOB=y +CONFIG_UNIX_DIAG=y +# CONFIG_TLS is not set +CONFIG_XFRM=y +CONFIG_XFRM_ALGO=y +CONFIG_XFRM_USER=y +# CONFIG_XFRM_INTERFACE is not set +# CONFIG_XFRM_SUB_POLICY is not set +# CONFIG_XFRM_MIGRATE is not set +# CONFIG_XFRM_STATISTICS is not set +CONFIG_XFRM_ESP=y +# CONFIG_NET_KEY is not set +# CONFIG_XDP_SOCKETS is not set +CONFIG_NET_HANDSHAKE=y +CONFIG_INET=y +# CONFIG_IP_MULTICAST is not set +CONFIG_IP_ADVANCED_ROUTER=y +# CONFIG_IP_FIB_TRIE_STATS is not set +CONFIG_IP_MULTIPLE_TABLES=y +# CONFIG_IP_ROUTE_MULTIPATH is not set +# CONFIG_IP_ROUTE_VERBOSE is not set +CONFIG_IP_PNP=y +CONFIG_IP_PNP_DHCP=y +# CONFIG_IP_PNP_BOOTP is not set +# CONFIG_IP_PNP_RARP is not set +CONFIG_NET_IPIP=y +# CONFIG_NET_IPGRE_DEMUX is not 
set +CONFIG_NET_IP_TUNNEL=y +CONFIG_SYN_COOKIES=y +# CONFIG_NET_IPVTI is not set +CONFIG_NET_UDP_TUNNEL=y +# CONFIG_NET_FOU is not set +# CONFIG_NET_FOU_IP_TUNNELS is not set +# CONFIG_INET_AH is not set +CONFIG_INET_ESP=y +# CONFIG_INET_ESP_OFFLOAD is not set +# CONFIG_INET_ESPINTCP is not set +# CONFIG_INET_IPCOMP is not set +CONFIG_INET_TABLE_PERTURB_ORDER=16 +CONFIG_INET_TUNNEL=y +CONFIG_INET_DIAG=y +CONFIG_INET_TCP_DIAG=y +CONFIG_INET_UDP_DIAG=y +CONFIG_INET_RAW_DIAG=y +# CONFIG_INET_DIAG_DESTROY is not set +# CONFIG_TCP_CONG_ADVANCED is not set +CONFIG_TCP_CONG_CUBIC=y +CONFIG_DEFAULT_TCP_CONG="cubic" +# CONFIG_TCP_MD5SIG is not set +CONFIG_IPV6=y +# CONFIG_IPV6_ROUTER_PREF is not set +CONFIG_IPV6_OPTIMISTIC_DAD=y +# CONFIG_INET6_AH is not set +# CONFIG_INET6_ESP is not set +# CONFIG_INET6_IPCOMP is not set +# CONFIG_IPV6_MIP6 is not set +# CONFIG_IPV6_ILA is not set +# CONFIG_IPV6_VTI is not set +CONFIG_IPV6_SIT=y +# CONFIG_IPV6_SIT_6RD is not set +CONFIG_IPV6_NDISC_NODETYPE=y +# CONFIG_IPV6_TUNNEL is not set +# CONFIG_IPV6_MULTIPLE_TABLES is not set +# CONFIG_IPV6_MROUTE is not set +# CONFIG_IPV6_SEG6_LWTUNNEL is not set +# CONFIG_IPV6_SEG6_HMAC is not set +# CONFIG_IPV6_RPL_LWTUNNEL is not set +# CONFIG_IPV6_IOAM6_LWTUNNEL is not set +# CONFIG_NETLABEL is not set +# CONFIG_MPTCP is not set +CONFIG_NETWORK_SECMARK=y +CONFIG_NET_PTP_CLASSIFY=y +CONFIG_NETWORK_PHY_TIMESTAMPING=y +CONFIG_NETFILTER=y +CONFIG_NETFILTER_ADVANCED=y +CONFIG_BRIDGE_NETFILTER=y + +# +# Core Netfilter Configuration +# +CONFIG_NETFILTER_INGRESS=y +CONFIG_NETFILTER_EGRESS=y +CONFIG_NETFILTER_SKIP_EGRESS=y +CONFIG_NETFILTER_NETLINK=y +CONFIG_NETFILTER_FAMILY_BRIDGE=y +CONFIG_NETFILTER_FAMILY_ARP=y +CONFIG_NETFILTER_BPF_LINK=y +# CONFIG_NETFILTER_NETLINK_HOOK is not set +# CONFIG_NETFILTER_NETLINK_ACCT is not set +CONFIG_NETFILTER_NETLINK_QUEUE=y +CONFIG_NETFILTER_NETLINK_LOG=y +# CONFIG_NETFILTER_NETLINK_OSF is not set +CONFIG_NF_CONNTRACK=y +CONFIG_NF_LOG_SYSLOG=y +CONFIG_NETFILTER_CONNCOUNT=y +CONFIG_NF_CONNTRACK_MARK=y +# CONFIG_NF_CONNTRACK_SECMARK is not set +# CONFIG_NF_CONNTRACK_ZONES is not set +# CONFIG_NF_CONNTRACK_PROCFS is not set +CONFIG_NF_CONNTRACK_EVENTS=y +# CONFIG_NF_CONNTRACK_TIMEOUT is not set +# CONFIG_NF_CONNTRACK_TIMESTAMP is not set +# CONFIG_NF_CONNTRACK_LABELS is not set +# CONFIG_NF_CT_PROTO_DCCP is not set +CONFIG_NF_CT_PROTO_GRE=y +# CONFIG_NF_CT_PROTO_SCTP is not set +# CONFIG_NF_CT_PROTO_UDPLITE is not set +CONFIG_NF_CONNTRACK_AMANDA=y +CONFIG_NF_CONNTRACK_FTP=y +CONFIG_NF_CONNTRACK_H323=y +CONFIG_NF_CONNTRACK_IRC=y +CONFIG_NF_CONNTRACK_BROADCAST=y +CONFIG_NF_CONNTRACK_NETBIOS_NS=y +# CONFIG_NF_CONNTRACK_SNMP is not set +CONFIG_NF_CONNTRACK_PPTP=y +CONFIG_NF_CONNTRACK_SANE=y +CONFIG_NF_CONNTRACK_SIP=y +CONFIG_NF_CONNTRACK_TFTP=y +CONFIG_NF_CT_NETLINK=y +# CONFIG_NETFILTER_NETLINK_GLUE_CT is not set +CONFIG_NF_NAT=y +CONFIG_NF_NAT_AMANDA=y +CONFIG_NF_NAT_FTP=y +CONFIG_NF_NAT_IRC=y +CONFIG_NF_NAT_SIP=y +CONFIG_NF_NAT_TFTP=y +CONFIG_NF_NAT_REDIRECT=y +CONFIG_NF_NAT_MASQUERADE=y +CONFIG_NETFILTER_SYNPROXY=y +CONFIG_NF_TABLES=y +CONFIG_NF_TABLES_INET=y +# CONFIG_NF_TABLES_NETDEV is not set +CONFIG_NFT_NUMGEN=y +CONFIG_NFT_CT=y +CONFIG_NFT_CONNLIMIT=y +CONFIG_NFT_LOG=y +CONFIG_NFT_LIMIT=y +CONFIG_NFT_MASQ=y +CONFIG_NFT_REDIR=y +CONFIG_NFT_NAT=y +CONFIG_NFT_TUNNEL=y +# CONFIG_NFT_QUEUE is not set +# CONFIG_NFT_QUOTA is not set +CONFIG_NFT_REJECT=y +CONFIG_NFT_REJECT_INET=y +CONFIG_NFT_COMPAT=y +# CONFIG_NFT_HASH is not set +CONFIG_NFT_XFRM=y +CONFIG_NFT_SOCKET=y +# CONFIG_NFT_OSF is not 
set +# CONFIG_NFT_TPROXY is not set +# CONFIG_NFT_SYNPROXY is not set +# CONFIG_NF_FLOW_TABLE is not set +CONFIG_NETFILTER_XTABLES=y + +# +# Xtables combined modules +# +CONFIG_NETFILTER_XT_MARK=y +# CONFIG_NETFILTER_XT_CONNMARK is not set +CONFIG_NETFILTER_XT_SET=y + +# +# Xtables targets +# +# CONFIG_NETFILTER_XT_TARGET_AUDIT is not set +CONFIG_NETFILTER_XT_TARGET_CHECKSUM=y +# CONFIG_NETFILTER_XT_TARGET_CLASSIFY is not set +# CONFIG_NETFILTER_XT_TARGET_CONNMARK is not set +# CONFIG_NETFILTER_XT_TARGET_CT is not set +# CONFIG_NETFILTER_XT_TARGET_DSCP is not set +CONFIG_NETFILTER_XT_TARGET_HL=y +# CONFIG_NETFILTER_XT_TARGET_HMARK is not set +# CONFIG_NETFILTER_XT_TARGET_IDLETIMER is not set +CONFIG_NETFILTER_XT_TARGET_LOG=y +CONFIG_NETFILTER_XT_TARGET_MARK=y +CONFIG_NETFILTER_XT_NAT=y +CONFIG_NETFILTER_XT_TARGET_NETMAP=y +CONFIG_NETFILTER_XT_TARGET_NFLOG=y +# CONFIG_NETFILTER_XT_TARGET_NFQUEUE is not set +# CONFIG_NETFILTER_XT_TARGET_NOTRACK is not set +# CONFIG_NETFILTER_XT_TARGET_RATEEST is not set +CONFIG_NETFILTER_XT_TARGET_REDIRECT=y +CONFIG_NETFILTER_XT_TARGET_MASQUERADE=y +# CONFIG_NETFILTER_XT_TARGET_TEE is not set +# CONFIG_NETFILTER_XT_TARGET_TPROXY is not set +# CONFIG_NETFILTER_XT_TARGET_TRACE is not set +CONFIG_NETFILTER_XT_TARGET_SECMARK=y +CONFIG_NETFILTER_XT_TARGET_TCPMSS=y +# CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP is not set + +# +# Xtables matches +# +CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=y +# CONFIG_NETFILTER_XT_MATCH_BPF is not set +CONFIG_NETFILTER_XT_MATCH_CGROUP=y +# CONFIG_NETFILTER_XT_MATCH_CLUSTER is not set +CONFIG_NETFILTER_XT_MATCH_COMMENT=y +# CONFIG_NETFILTER_XT_MATCH_CONNBYTES is not set +# CONFIG_NETFILTER_XT_MATCH_CONNLABEL is not set +# CONFIG_NETFILTER_XT_MATCH_CONNLIMIT is not set +# CONFIG_NETFILTER_XT_MATCH_CONNMARK is not set +CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y +# CONFIG_NETFILTER_XT_MATCH_CPU is not set +# CONFIG_NETFILTER_XT_MATCH_DCCP is not set +# CONFIG_NETFILTER_XT_MATCH_DEVGROUP is not set +# CONFIG_NETFILTER_XT_MATCH_DSCP is not set +CONFIG_NETFILTER_XT_MATCH_ECN=y +# CONFIG_NETFILTER_XT_MATCH_ESP is not set +# CONFIG_NETFILTER_XT_MATCH_HASHLIMIT is not set +# CONFIG_NETFILTER_XT_MATCH_HELPER is not set +CONFIG_NETFILTER_XT_MATCH_HL=y +# CONFIG_NETFILTER_XT_MATCH_IPCOMP is not set +# CONFIG_NETFILTER_XT_MATCH_IPRANGE is not set +CONFIG_NETFILTER_XT_MATCH_IPVS=y +# CONFIG_NETFILTER_XT_MATCH_L2TP is not set +# CONFIG_NETFILTER_XT_MATCH_LENGTH is not set +CONFIG_NETFILTER_XT_MATCH_LIMIT=y +# CONFIG_NETFILTER_XT_MATCH_MAC is not set +# CONFIG_NETFILTER_XT_MATCH_MARK is not set +CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y +# CONFIG_NETFILTER_XT_MATCH_NFACCT is not set +# CONFIG_NETFILTER_XT_MATCH_OSF is not set +CONFIG_NETFILTER_XT_MATCH_OWNER=y +# CONFIG_NETFILTER_XT_MATCH_POLICY is not set +CONFIG_NETFILTER_XT_MATCH_PHYSDEV=y +# CONFIG_NETFILTER_XT_MATCH_PKTTYPE is not set +# CONFIG_NETFILTER_XT_MATCH_QUOTA is not set +# CONFIG_NETFILTER_XT_MATCH_RATEEST is not set +# CONFIG_NETFILTER_XT_MATCH_REALM is not set +# CONFIG_NETFILTER_XT_MATCH_RECENT is not set +# CONFIG_NETFILTER_XT_MATCH_SCTP is not set +# CONFIG_NETFILTER_XT_MATCH_SOCKET is not set +# CONFIG_NETFILTER_XT_MATCH_STATE is not set +CONFIG_NETFILTER_XT_MATCH_STATISTIC=y +# CONFIG_NETFILTER_XT_MATCH_STRING is not set +# CONFIG_NETFILTER_XT_MATCH_TCPMSS is not set +# CONFIG_NETFILTER_XT_MATCH_TIME is not set +# CONFIG_NETFILTER_XT_MATCH_U32 is not set +# end of Core Netfilter Configuration + +CONFIG_IP_SET=y +CONFIG_IP_SET_MAX=256 +CONFIG_IP_SET_BITMAP_IP=y +CONFIG_IP_SET_BITMAP_IPMAC=y 
+CONFIG_IP_SET_BITMAP_PORT=y +CONFIG_IP_SET_HASH_IP=y +CONFIG_IP_SET_HASH_IPMARK=y +CONFIG_IP_SET_HASH_IPPORT=y +CONFIG_IP_SET_HASH_IPPORTIP=y +CONFIG_IP_SET_HASH_IPPORTNET=y +CONFIG_IP_SET_HASH_IPMAC=y +CONFIG_IP_SET_HASH_MAC=y +CONFIG_IP_SET_HASH_NETPORTNET=y +CONFIG_IP_SET_HASH_NET=y +CONFIG_IP_SET_HASH_NETNET=y +CONFIG_IP_SET_HASH_NETPORT=y +CONFIG_IP_SET_HASH_NETIFACE=y +# CONFIG_IP_SET_LIST_SET is not set +CONFIG_IP_VS=y +# CONFIG_IP_VS_IPV6 is not set +# CONFIG_IP_VS_DEBUG is not set +CONFIG_IP_VS_TAB_BITS=12 + +# +# IPVS transport protocol load balancing support +# +CONFIG_IP_VS_PROTO_TCP=y +CONFIG_IP_VS_PROTO_UDP=y +# CONFIG_IP_VS_PROTO_ESP is not set +# CONFIG_IP_VS_PROTO_AH is not set +# CONFIG_IP_VS_PROTO_SCTP is not set + +# +# IPVS scheduler +# +CONFIG_IP_VS_RR=y +CONFIG_IP_VS_WRR=y +# CONFIG_IP_VS_LC is not set +# CONFIG_IP_VS_WLC is not set +# CONFIG_IP_VS_FO is not set +# CONFIG_IP_VS_OVF is not set +# CONFIG_IP_VS_LBLC is not set +# CONFIG_IP_VS_LBLCR is not set +# CONFIG_IP_VS_DH is not set +CONFIG_IP_VS_SH=y +# CONFIG_IP_VS_MH is not set +# CONFIG_IP_VS_SED is not set +# CONFIG_IP_VS_NQ is not set +# CONFIG_IP_VS_TWOS is not set + +# +# IPVS SH scheduler +# +CONFIG_IP_VS_SH_TAB_BITS=8 + +# +# IPVS MH scheduler +# +CONFIG_IP_VS_MH_TAB_INDEX=12 + +# +# IPVS application helper +# +# CONFIG_IP_VS_FTP is not set +CONFIG_IP_VS_NFCT=y +# CONFIG_IP_VS_PE_SIP is not set + +# +# IP: Netfilter Configuration +# +CONFIG_NF_DEFRAG_IPV4=y +CONFIG_NF_SOCKET_IPV4=y +# CONFIG_NF_TPROXY_IPV4 is not set +CONFIG_NF_TABLES_IPV4=y +CONFIG_NFT_REJECT_IPV4=y +# CONFIG_NFT_DUP_IPV4 is not set +# CONFIG_NFT_FIB_IPV4 is not set +# CONFIG_NF_TABLES_ARP is not set +# CONFIG_NF_DUP_IPV4 is not set +# CONFIG_NF_LOG_ARP is not set +CONFIG_NF_LOG_IPV4=y +CONFIG_NF_REJECT_IPV4=y +CONFIG_NF_NAT_PPTP=y +CONFIG_NF_NAT_H323=y +CONFIG_IP_NF_IPTABLES=y +CONFIG_IP_NF_MATCH_AH=y +CONFIG_IP_NF_MATCH_ECN=y +CONFIG_IP_NF_MATCH_RPFILTER=y +CONFIG_IP_NF_MATCH_TTL=y +CONFIG_IP_NF_FILTER=y +CONFIG_IP_NF_TARGET_REJECT=y +CONFIG_IP_NF_TARGET_SYNPROXY=y +CONFIG_IP_NF_NAT=y +CONFIG_IP_NF_TARGET_MASQUERADE=y +CONFIG_IP_NF_TARGET_NETMAP=y +CONFIG_IP_NF_TARGET_REDIRECT=y +CONFIG_IP_NF_MANGLE=y +CONFIG_IP_NF_TARGET_ECN=y +CONFIG_IP_NF_TARGET_TTL=y +CONFIG_IP_NF_RAW=y +CONFIG_IP_NF_SECURITY=y +CONFIG_IP_NF_ARPTABLES=y +CONFIG_IP_NF_ARPFILTER=y +CONFIG_IP_NF_ARP_MANGLE=y +# end of IP: Netfilter Configuration + +# +# IPv6: Netfilter Configuration +# +CONFIG_NF_SOCKET_IPV6=y +# CONFIG_NF_TPROXY_IPV6 is not set +CONFIG_NF_TABLES_IPV6=y +CONFIG_NFT_REJECT_IPV6=y +# CONFIG_NFT_DUP_IPV6 is not set +# CONFIG_NFT_FIB_IPV6 is not set +# CONFIG_NF_DUP_IPV6 is not set +CONFIG_NF_REJECT_IPV6=y +CONFIG_NF_LOG_IPV6=y +CONFIG_IP6_NF_IPTABLES=y +CONFIG_IP6_NF_MATCH_AH=y +CONFIG_IP6_NF_MATCH_EUI64=y +CONFIG_IP6_NF_MATCH_FRAG=y +CONFIG_IP6_NF_MATCH_OPTS=y +CONFIG_IP6_NF_MATCH_HL=y +CONFIG_IP6_NF_MATCH_IPV6HEADER=y +CONFIG_IP6_NF_MATCH_MH=y +CONFIG_IP6_NF_MATCH_RPFILTER=y +CONFIG_IP6_NF_MATCH_RT=y +CONFIG_IP6_NF_MATCH_SRH=y +CONFIG_IP6_NF_TARGET_HL=y +CONFIG_IP6_NF_FILTER=y +CONFIG_IP6_NF_TARGET_REJECT=y +CONFIG_IP6_NF_TARGET_SYNPROXY=y +CONFIG_IP6_NF_MANGLE=y +CONFIG_IP6_NF_RAW=y +CONFIG_IP6_NF_SECURITY=y +CONFIG_IP6_NF_NAT=y +CONFIG_IP6_NF_TARGET_MASQUERADE=y +CONFIG_IP6_NF_TARGET_NPT=y +# end of IPv6: Netfilter Configuration + +CONFIG_NF_DEFRAG_IPV6=y +# CONFIG_NF_TABLES_BRIDGE is not set +# CONFIG_NF_CONNTRACK_BRIDGE is not set +# CONFIG_BRIDGE_NF_EBTABLES is not set +# CONFIG_BPFILTER is not set +# CONFIG_IP_DCCP is not set +CONFIG_IP_SCTP=y 
+# CONFIG_SCTP_DBG_OBJCNT is not set +CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5=y +# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1 is not set +# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set +CONFIG_SCTP_COOKIE_HMAC_MD5=y +# CONFIG_SCTP_COOKIE_HMAC_SHA1 is not set +CONFIG_INET_SCTP_DIAG=y +# CONFIG_RDS is not set +# CONFIG_TIPC is not set +# CONFIG_ATM is not set +# CONFIG_L2TP is not set +CONFIG_STP=y +CONFIG_BRIDGE=y +CONFIG_BRIDGE_IGMP_SNOOPING=y +CONFIG_BRIDGE_VLAN_FILTERING=y +# CONFIG_BRIDGE_MRP is not set +# CONFIG_BRIDGE_CFM is not set +# CONFIG_NET_DSA is not set +CONFIG_VLAN_8021Q=y +# CONFIG_VLAN_8021Q_GVRP is not set +# CONFIG_VLAN_8021Q_MVRP is not set +CONFIG_LLC=y +# CONFIG_LLC2 is not set +# CONFIG_ATALK is not set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_PHONET is not set +# CONFIG_6LOWPAN is not set +# CONFIG_IEEE802154 is not set +CONFIG_NET_SCHED=y + +# +# Queueing/Scheduling +# +# CONFIG_NET_SCH_HTB is not set +# CONFIG_NET_SCH_HFSC is not set +# CONFIG_NET_SCH_PRIO is not set +CONFIG_NET_SCH_MULTIQ=y +# CONFIG_NET_SCH_RED is not set +# CONFIG_NET_SCH_SFB is not set +# CONFIG_NET_SCH_SFQ is not set +# CONFIG_NET_SCH_TEQL is not set +# CONFIG_NET_SCH_TBF is not set +# CONFIG_NET_SCH_CBS is not set +# CONFIG_NET_SCH_ETF is not set +# CONFIG_NET_SCH_TAPRIO is not set +# CONFIG_NET_SCH_GRED is not set +# CONFIG_NET_SCH_NETEM is not set +# CONFIG_NET_SCH_DRR is not set +# CONFIG_NET_SCH_MQPRIO is not set +# CONFIG_NET_SCH_SKBPRIO is not set +# CONFIG_NET_SCH_CHOKE is not set +# CONFIG_NET_SCH_QFQ is not set +# CONFIG_NET_SCH_CODEL is not set +CONFIG_NET_SCH_FQ_CODEL=y +# CONFIG_NET_SCH_CAKE is not set +# CONFIG_NET_SCH_FQ is not set +# CONFIG_NET_SCH_HHF is not set +# CONFIG_NET_SCH_PIE is not set +CONFIG_NET_SCH_INGRESS=y +# CONFIG_NET_SCH_PLUG is not set +# CONFIG_NET_SCH_ETS is not set +CONFIG_NET_SCH_DEFAULT=y +CONFIG_DEFAULT_FQ_CODEL=y +# CONFIG_DEFAULT_PFIFO_FAST is not set +CONFIG_DEFAULT_NET_SCH="fq_codel" + +# +# Classification +# +CONFIG_NET_CLS=y +# CONFIG_NET_CLS_BASIC is not set +# CONFIG_NET_CLS_ROUTE4 is not set +# CONFIG_NET_CLS_FW is not set +CONFIG_NET_CLS_U32=y +CONFIG_CLS_U32_PERF=y +CONFIG_CLS_U32_MARK=y +# CONFIG_NET_CLS_FLOW is not set +CONFIG_NET_CLS_CGROUP=y +CONFIG_NET_CLS_BPF=y +CONFIG_NET_CLS_FLOWER=y +# CONFIG_NET_CLS_MATCHALL is not set +# CONFIG_NET_EMATCH is not set +CONFIG_NET_CLS_ACT=y +# CONFIG_NET_ACT_POLICE is not set +# CONFIG_NET_ACT_GACT is not set +CONFIG_NET_ACT_MIRRED=y +# CONFIG_NET_ACT_SAMPLE is not set +CONFIG_NET_ACT_IPT=y +# CONFIG_NET_ACT_NAT is not set +# CONFIG_NET_ACT_PEDIT is not set +# CONFIG_NET_ACT_SIMP is not set +# CONFIG_NET_ACT_SKBEDIT is not set +# CONFIG_NET_ACT_CSUM is not set +# CONFIG_NET_ACT_MPLS is not set +# CONFIG_NET_ACT_VLAN is not set +CONFIG_NET_ACT_BPF=y +# CONFIG_NET_ACT_CONNMARK is not set +# CONFIG_NET_ACT_CTINFO is not set +# CONFIG_NET_ACT_SKBMOD is not set +# CONFIG_NET_ACT_IFE is not set +# CONFIG_NET_ACT_TUNNEL_KEY is not set +# CONFIG_NET_ACT_GATE is not set +# CONFIG_NET_TC_SKB_EXT is not set +CONFIG_NET_SCH_FIFO=y +# CONFIG_DCB is not set +CONFIG_DNS_RESOLVER=y +# CONFIG_BATMAN_ADV is not set +# CONFIG_OPENVSWITCH is not set +CONFIG_VSOCKETS=y +CONFIG_VSOCKETS_DIAG=y +# CONFIG_VSOCKETS_LOOPBACK is not set +# CONFIG_VIRTIO_VSOCKETS is not set +CONFIG_HYPERV_VSOCKETS=y +CONFIG_NETLINK_DIAG=y +# CONFIG_MPLS is not set +# CONFIG_NET_NSH is not set +# CONFIG_HSR is not set +CONFIG_NET_SWITCHDEV=y +CONFIG_NET_L3_MASTER_DEV=y +# CONFIG_QRTR is not set +# CONFIG_NET_NCSI is not set +# 
CONFIG_PCPU_DEV_REFCNT is not set +CONFIG_MAX_SKB_FRAGS=17 +CONFIG_RPS=y +CONFIG_RFS_ACCEL=y +CONFIG_SOCK_RX_QUEUE_MAPPING=y +CONFIG_XPS=y +CONFIG_CGROUP_NET_PRIO=y +CONFIG_CGROUP_NET_CLASSID=y +CONFIG_NET_RX_BUSY_POLL=y +CONFIG_BQL=y +# CONFIG_BPF_STREAM_PARSER is not set +CONFIG_NET_FLOW_LIMIT=y + +# +# Network testing +# +# CONFIG_NET_PKTGEN is not set +CONFIG_NET_DROP_MONITOR=y +# end of Network testing +# end of Networking options + +# CONFIG_HAMRADIO is not set +# CONFIG_CAN is not set +# CONFIG_BT is not set +# CONFIG_AF_RXRPC is not set +# CONFIG_AF_KCM is not set +# CONFIG_MCTP is not set +CONFIG_FIB_RULES=y +# CONFIG_WIRELESS is not set +# CONFIG_RFKILL is not set +CONFIG_NET_9P=y +CONFIG_NET_9P_FD=y +CONFIG_NET_9P_VIRTIO=y +# CONFIG_NET_9P_DEBUG is not set +# CONFIG_CAIF is not set +CONFIG_CEPH_LIB=y +# CONFIG_CEPH_LIB_PRETTYDEBUG is not set +# CONFIG_CEPH_LIB_USE_DNS_RESOLVER is not set +# CONFIG_NFC is not set +# CONFIG_PSAMPLE is not set +# CONFIG_NET_IFE is not set +# CONFIG_LWTUNNEL is not set +CONFIG_DST_CACHE=y +CONFIG_GRO_CELLS=y +CONFIG_NET_SOCK_MSG=y +CONFIG_PAGE_POOL=y +# CONFIG_PAGE_POOL_STATS is not set +CONFIG_FAILOVER=y +# CONFIG_ETHTOOL_NETLINK is not set + +# +# Device Drivers +# +CONFIG_ARM_AMBA=y +CONFIG_HAVE_PCI=y +CONFIG_PCI=y +CONFIG_PCI_DOMAINS=y +CONFIG_PCI_DOMAINS_GENERIC=y +CONFIG_PCI_SYSCALL=y +CONFIG_PCIEPORTBUS=y +CONFIG_PCIEAER=y +# CONFIG_PCIEAER_INJECT is not set +# CONFIG_PCIE_ECRC is not set +CONFIG_PCIEASPM=y +CONFIG_PCIEASPM_DEFAULT=y +# CONFIG_PCIEASPM_POWERSAVE is not set +# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set +# CONFIG_PCIEASPM_PERFORMANCE is not set +CONFIG_PCIE_PME=y +# CONFIG_PCIE_DPC is not set +# CONFIG_PCIE_PTM is not set +CONFIG_PCI_MSI=y +CONFIG_PCI_QUIRKS=y +# CONFIG_PCI_DEBUG is not set +# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set +# CONFIG_PCI_STUB is not set +# CONFIG_PCI_PF_STUB is not set +CONFIG_PCI_ATS=y +CONFIG_PCI_ECAM=y +CONFIG_PCI_IOV=y +# CONFIG_PCI_PRI is not set +# CONFIG_PCI_PASID is not set +# CONFIG_PCI_P2PDMA is not set +CONFIG_PCI_LABEL=y +CONFIG_PCI_HYPERV=y +# CONFIG_PCI_DYNAMIC_OF_NODES is not set +# CONFIG_PCIE_BUS_TUNE_OFF is not set +CONFIG_PCIE_BUS_DEFAULT=y +# CONFIG_PCIE_BUS_SAFE is not set +# CONFIG_PCIE_BUS_PERFORMANCE is not set +# CONFIG_PCIE_BUS_PEER2PEER is not set +# CONFIG_VGA_ARB is not set +# CONFIG_HOTPLUG_PCI is not set + +# +# PCI controller drivers +# +# CONFIG_PCIE_ALTERA is not set +# CONFIG_PCI_HOST_THUNDER_PEM is not set +# CONFIG_PCI_HOST_THUNDER_ECAM is not set +# CONFIG_PCI_FTPCI100 is not set +# CONFIG_PCI_HOST_GENERIC is not set +# CONFIG_PCIE_MICROCHIP_HOST is not set +CONFIG_PCI_HYPERV_INTERFACE=y +# CONFIG_PCI_XGENE is not set +# CONFIG_PCIE_XILINX is not set + +# +# Cadence-based PCIe controllers +# +# CONFIG_PCIE_CADENCE_PLAT_HOST is not set +# CONFIG_PCI_J721E_HOST is not set +# end of Cadence-based PCIe controllers + +# +# DesignWare-based PCIe controllers +# +# CONFIG_PCIE_AL is not set +# CONFIG_PCI_MESON is not set +# CONFIG_PCI_HISI is not set +# CONFIG_PCIE_KIRIN is not set +# CONFIG_PCIE_DW_PLAT_HOST is not set +# end of DesignWare-based PCIe controllers + +# +# Mobiveil-based PCIe controllers +# +# end of Mobiveil-based PCIe controllers +# end of PCI controller drivers + +# +# PCI Endpoint +# +# CONFIG_PCI_ENDPOINT is not set +# end of PCI Endpoint + +# +# PCI switch controller drivers +# +# CONFIG_PCI_SW_SWITCHTEC is not set +# end of PCI switch controller drivers + +# CONFIG_CXL_BUS is not set +# CONFIG_PCCARD is not set +# CONFIG_RAPIDIO is not set + +# +# 
Generic Driver Options +# +CONFIG_UEVENT_HELPER=y +CONFIG_UEVENT_HELPER_PATH="" +CONFIG_DEVTMPFS=y +CONFIG_DEVTMPFS_MOUNT=y +CONFIG_DEVTMPFS_SAFE=y +CONFIG_STANDALONE=y +CONFIG_PREVENT_FIRMWARE_BUILD=y + +# +# Firmware loader +# +CONFIG_FW_LOADER=y +CONFIG_FW_LOADER_PAGED_BUF=y +CONFIG_FW_LOADER_SYSFS=y +CONFIG_EXTRA_FIRMWARE="" +# CONFIG_FW_LOADER_USER_HELPER is not set +# CONFIG_FW_LOADER_COMPRESS is not set +CONFIG_FW_UPLOAD=y +# end of Firmware loader + +CONFIG_ALLOW_DEV_COREDUMP=y +# CONFIG_DEBUG_DRIVER is not set +# CONFIG_DEBUG_DEVRES is not set +# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set +# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set +CONFIG_GENERIC_CPU_AUTOPROBE=y +CONFIG_GENERIC_CPU_VULNERABILITIES=y +CONFIG_SOC_BUS=y +CONFIG_DMA_SHARED_BUFFER=y +# CONFIG_DMA_FENCE_TRACE is not set +CONFIG_GENERIC_ARCH_TOPOLOGY=y +# CONFIG_FW_DEVLINK_SYNC_STATE_TIMEOUT is not set +# end of Generic Driver Options + +# +# Bus devices +# +# CONFIG_BRCMSTB_GISB_ARB is not set +# CONFIG_VEXPRESS_CONFIG is not set +# CONFIG_MHI_BUS is not set +# CONFIG_MHI_BUS_EP is not set +# end of Bus devices + +# +# Cache Drivers +# +# end of Cache Drivers + +CONFIG_CONNECTOR=y +CONFIG_PROC_EVENTS=y + +# +# Firmware Drivers +# + +# +# ARM System Control and Management Interface Protocol +# +# CONFIG_ARM_SCMI_PROTOCOL is not set +# end of ARM System Control and Management Interface Protocol + +CONFIG_FIRMWARE_MEMMAP=y +# CONFIG_ISCSI_IBFT is not set +# CONFIG_FW_CFG_SYSFS is not set +# CONFIG_SYSFB_SIMPLEFB is not set +# CONFIG_ARM_FFA_TRANSPORT is not set +# CONFIG_GOOGLE_FIRMWARE is not set + +# +# EFI (Extensible Firmware Interface) Support +# +CONFIG_EFI_ESRT=y +CONFIG_EFI_PARAMS_FROM_FDT=y +CONFIG_EFI_RUNTIME_WRAPPERS=y +CONFIG_EFI_GENERIC_STUB=y +# CONFIG_EFI_ZBOOT is not set +# CONFIG_EFI_ARMSTUB_DTB_LOADER is not set +# CONFIG_EFI_BOOTLOADER_CONTROL is not set +# CONFIG_EFI_CAPSULE_LOADER is not set +# CONFIG_EFI_TEST is not set +CONFIG_RESET_ATTACK_MITIGATION=y +# CONFIG_EFI_DISABLE_PCI_DMA is not set +CONFIG_EFI_EARLYCON=y +# CONFIG_EFI_CUSTOM_SSDT_OVERLAYS is not set +# CONFIG_EFI_DISABLE_RUNTIME is not set +CONFIG_EFI_COCO_SECRET=y +# end of EFI (Extensible Firmware Interface) Support + +CONFIG_ARM_PSCI_FW=y +CONFIG_HAVE_ARM_SMCCC=y +CONFIG_HAVE_ARM_SMCCC_DISCOVERY=y +CONFIG_ARM_SMCCC_SOC_ID=y + +# +# Tegra firmware driver +# +# end of Tegra firmware driver +# end of Firmware Drivers + +# CONFIG_GNSS is not set +# CONFIG_MTD is not set +CONFIG_DTC=y +CONFIG_OF=y +# CONFIG_OF_UNITTEST is not set +CONFIG_OF_FLATTREE=y +CONFIG_OF_EARLY_FLATTREE=y +CONFIG_OF_KOBJ=y +CONFIG_OF_ADDRESS=y +CONFIG_OF_IRQ=y +CONFIG_OF_RESERVED_MEM=y +# CONFIG_OF_OVERLAY is not set +# CONFIG_PARPORT is not set +CONFIG_PNP=y +# CONFIG_PNP_DEBUG_MESSAGES is not set + +# +# Protocols +# +CONFIG_PNPACPI=y +CONFIG_BLK_DEV=y +# CONFIG_BLK_DEV_NULL_BLK is not set +# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set +CONFIG_BLK_DEV_LOOP=y +CONFIG_BLK_DEV_LOOP_MIN_COUNT=8 +# CONFIG_BLK_DEV_DRBD is not set +# CONFIG_BLK_DEV_NBD is not set +CONFIG_BLK_DEV_RAM=y +CONFIG_BLK_DEV_RAM_COUNT=16 +CONFIG_BLK_DEV_RAM_SIZE=65536 +# CONFIG_CDROM_PKTCDVD is not set +# CONFIG_ATA_OVER_ETH is not set +CONFIG_VIRTIO_BLK=y +# CONFIG_BLK_DEV_RBD is not set +# CONFIG_BLK_DEV_UBLK is not set + +# +# NVME Support +# +# CONFIG_BLK_DEV_NVME is not set +# CONFIG_NVME_FC is not set +# CONFIG_NVME_TCP is not set +# end of NVME Support + +# +# Misc devices +# +# CONFIG_AD525X_DPOT is not set +# CONFIG_DUMMY_IRQ is not set +# CONFIG_PHANTOM is not set +# CONFIG_TIFM_CORE is 
not set +# CONFIG_ICS932S401 is not set +# CONFIG_ENCLOSURE_SERVICES is not set +# CONFIG_HP_ILO is not set +# CONFIG_APDS9802ALS is not set +# CONFIG_ISL29003 is not set +# CONFIG_ISL29020 is not set +# CONFIG_SENSORS_TSL2550 is not set +# CONFIG_SENSORS_BH1770 is not set +# CONFIG_SENSORS_APDS990X is not set +# CONFIG_HMC6352 is not set +# CONFIG_DS1682 is not set +# CONFIG_SRAM is not set +# CONFIG_DW_XDATA_PCIE is not set +# CONFIG_PCI_ENDPOINT_TEST is not set +# CONFIG_XILINX_SDFEC is not set +# CONFIG_OPEN_DICE is not set +CONFIG_VCPU_STALL_DETECTOR=y +# CONFIG_C2PORT is not set + +# +# EEPROM support +# +# CONFIG_EEPROM_AT24 is not set +# CONFIG_EEPROM_LEGACY is not set +# CONFIG_EEPROM_MAX6875 is not set +# CONFIG_EEPROM_93CX6 is not set +# CONFIG_EEPROM_IDT_89HPESX is not set +# CONFIG_EEPROM_EE1004 is not set +# end of EEPROM support + +# CONFIG_CB710_CORE is not set + +# +# Texas Instruments shared transport line discipline +# +# CONFIG_TI_ST is not set +# end of Texas Instruments shared transport line discipline + +# CONFIG_SENSORS_LIS3_I2C is not set +# CONFIG_ALTERA_STAPL is not set +# CONFIG_VMWARE_VMCI is not set +# CONFIG_GENWQE is not set +# CONFIG_ECHO is not set +# CONFIG_BCM_VK is not set +# CONFIG_MISC_ALCOR_PCI is not set +# CONFIG_MISC_RTSX_PCI is not set +# CONFIG_MISC_RTSX_USB is not set +# CONFIG_UACCE is not set +# CONFIG_PVPANIC is not set +# end of Misc devices + +# +# SCSI device support +# +CONFIG_SCSI_MOD=y +# CONFIG_RAID_ATTRS is not set +CONFIG_SCSI_COMMON=y +CONFIG_SCSI=y +CONFIG_SCSI_DMA=y +# CONFIG_SCSI_PROC_FS is not set + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +# CONFIG_CHR_DEV_ST is not set +# CONFIG_BLK_DEV_SR is not set +CONFIG_CHR_DEV_SG=y +CONFIG_BLK_DEV_BSG=y +# CONFIG_CHR_DEV_SCH is not set +# CONFIG_SCSI_CONSTANTS is not set +# CONFIG_SCSI_LOGGING is not set +# CONFIG_SCSI_SCAN_ASYNC is not set + +# +# SCSI Transports +# +# CONFIG_SCSI_SPI_ATTRS is not set +# CONFIG_SCSI_FC_ATTRS is not set +# CONFIG_SCSI_ISCSI_ATTRS is not set +# CONFIG_SCSI_SAS_ATTRS is not set +# CONFIG_SCSI_SAS_LIBSAS is not set +# CONFIG_SCSI_SRP_ATTRS is not set +# end of SCSI Transports + +CONFIG_SCSI_LOWLEVEL=y +# CONFIG_ISCSI_TCP is not set +# CONFIG_ISCSI_BOOT_SYSFS is not set +# CONFIG_SCSI_CXGB3_ISCSI is not set +# CONFIG_SCSI_CXGB4_ISCSI is not set +# CONFIG_SCSI_BNX2_ISCSI is not set +# CONFIG_BE2ISCSI is not set +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_HPSA is not set +# CONFIG_SCSI_3W_9XXX is not set +# CONFIG_SCSI_3W_SAS is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AACRAID is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC79XX is not set +# CONFIG_SCSI_AIC94XX is not set +# CONFIG_SCSI_MVSAS is not set +# CONFIG_SCSI_MVUMI is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_ARCMSR is not set +# CONFIG_SCSI_ESAS2R is not set +# CONFIG_MEGARAID_NEWGEN is not set +# CONFIG_MEGARAID_LEGACY is not set +# CONFIG_MEGARAID_SAS is not set +# CONFIG_SCSI_MPT3SAS is not set +# CONFIG_SCSI_MPT2SAS is not set +# CONFIG_SCSI_MPI3MR is not set +# CONFIG_SCSI_SMARTPQI is not set +# CONFIG_SCSI_HPTIOP is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_MYRB is not set +# CONFIG_SCSI_MYRS is not set +CONFIG_HYPERV_STORAGE=y +# CONFIG_SCSI_SNIC is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_FDOMAIN_PCI is not set +# CONFIG_SCSI_IPS is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_STEX is not set +# CONFIG_SCSI_SYM53C8XX_2 is not set 
+# CONFIG_SCSI_IPR is not set +# CONFIG_SCSI_QLOGIC_1280 is not set +# CONFIG_SCSI_QLA_ISCSI is not set +# CONFIG_SCSI_DC395x is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_WD719X is not set +# CONFIG_SCSI_DEBUG is not set +# CONFIG_SCSI_PMCRAID is not set +# CONFIG_SCSI_PM8001 is not set +CONFIG_SCSI_VIRTIO=y +# CONFIG_SCSI_DH is not set +# end of SCSI device support + +# CONFIG_ATA is not set +CONFIG_MD=y +CONFIG_BLK_DEV_MD=y +# CONFIG_MD_AUTODETECT is not set +CONFIG_MD_BITMAP_FILE=y +# CONFIG_MD_LINEAR is not set +CONFIG_MD_RAID0=y +CONFIG_MD_RAID1=y +CONFIG_MD_RAID10=y +CONFIG_MD_RAID456=y +# CONFIG_MD_MULTIPATH is not set +# CONFIG_MD_FAULTY is not set +# CONFIG_BCACHE is not set +CONFIG_BLK_DEV_DM_BUILTIN=y +CONFIG_BLK_DEV_DM=y +# CONFIG_DM_DEBUG is not set +CONFIG_DM_BUFIO=y +# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set +CONFIG_DM_BIO_PRISON=y +CONFIG_DM_PERSISTENT_DATA=y +# CONFIG_DM_UNSTRIPED is not set +CONFIG_DM_CRYPT=y +# CONFIG_DM_SNAPSHOT is not set +CONFIG_DM_THIN_PROVISIONING=y +# CONFIG_DM_CACHE is not set +# CONFIG_DM_WRITECACHE is not set +# CONFIG_DM_EBS is not set +# CONFIG_DM_ERA is not set +# CONFIG_DM_CLONE is not set +# CONFIG_DM_MIRROR is not set +CONFIG_DM_RAID=y +# CONFIG_DM_ZERO is not set +# CONFIG_DM_MULTIPATH is not set +# CONFIG_DM_DELAY is not set +# CONFIG_DM_DUST is not set +# CONFIG_DM_INIT is not set +# CONFIG_DM_UEVENT is not set +# CONFIG_DM_FLAKEY is not set +CONFIG_DM_VERITY=y +CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y +# CONFIG_DM_VERITY_FEC is not set +# CONFIG_DM_SWITCH is not set +# CONFIG_DM_LOG_WRITES is not set +# CONFIG_DM_INTEGRITY is not set +# CONFIG_DM_AUDIT is not set +# CONFIG_TARGET_CORE is not set +# CONFIG_FUSION is not set + +# +# IEEE 1394 (FireWire) support +# +# CONFIG_FIREWIRE is not set +# CONFIG_FIREWIRE_NOSY is not set +# end of IEEE 1394 (FireWire) support + +CONFIG_NETDEVICES=y +CONFIG_MII=y +CONFIG_NET_CORE=y +CONFIG_BONDING=y +CONFIG_DUMMY=y +CONFIG_WIREGUARD=y +# CONFIG_WIREGUARD_DEBUG is not set +# CONFIG_EQUALIZER is not set +# CONFIG_NET_FC is not set +# CONFIG_IFB is not set +CONFIG_NET_TEAM=y +# CONFIG_NET_TEAM_MODE_BROADCAST is not set +# CONFIG_NET_TEAM_MODE_ROUNDROBIN is not set +# CONFIG_NET_TEAM_MODE_RANDOM is not set +# CONFIG_NET_TEAM_MODE_ACTIVEBACKUP is not set +# CONFIG_NET_TEAM_MODE_LOADBALANCE is not set +CONFIG_MACVLAN=y +CONFIG_MACVTAP=y +CONFIG_IPVLAN_L3S=y +CONFIG_IPVLAN=y +CONFIG_IPVTAP=y +CONFIG_VXLAN=y +CONFIG_GENEVE=y +# CONFIG_BAREUDP is not set +# CONFIG_GTP is not set +# CONFIG_MACSEC is not set +# CONFIG_NETCONSOLE is not set +CONFIG_TUN=y +CONFIG_TAP=y +# CONFIG_TUN_VNET_CROSS_LE is not set +CONFIG_VETH=y +CONFIG_VIRTIO_NET=y +# CONFIG_NLMON is not set +# CONFIG_ARCNET is not set +CONFIG_ETHERNET=y +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_NET_VENDOR_ADAPTEC is not set +# CONFIG_NET_VENDOR_AGERE is not set +# CONFIG_NET_VENDOR_ALACRITECH is not set +# CONFIG_NET_VENDOR_ALTEON is not set +# CONFIG_ALTERA_TSE is not set +# CONFIG_NET_VENDOR_AMAZON is not set +# CONFIG_NET_VENDOR_AMD is not set +# CONFIG_NET_VENDOR_AQUANTIA is not set +# CONFIG_NET_VENDOR_ARC is not set +# CONFIG_NET_VENDOR_ASIX is not set +# CONFIG_NET_VENDOR_ATHEROS is not set +# CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_VENDOR_CADENCE is not set +# CONFIG_NET_VENDOR_CAVIUM is not set +# CONFIG_NET_VENDOR_CHELSIO is not set +# CONFIG_NET_VENDOR_CISCO is not set +# CONFIG_NET_VENDOR_CORTINA is not set +# CONFIG_NET_VENDOR_DAVICOM is not set +# CONFIG_DNET is not set +# CONFIG_NET_VENDOR_DEC 
is not set +# CONFIG_NET_VENDOR_DLINK is not set +# CONFIG_NET_VENDOR_EMULEX is not set +# CONFIG_NET_VENDOR_ENGLEDER is not set +# CONFIG_NET_VENDOR_EZCHIP is not set +# CONFIG_NET_VENDOR_FUNGIBLE is not set +# CONFIG_NET_VENDOR_GOOGLE is not set +# CONFIG_NET_VENDOR_HISILICON is not set +# CONFIG_NET_VENDOR_HUAWEI is not set +# CONFIG_NET_VENDOR_INTEL is not set +# CONFIG_JME is not set +# CONFIG_NET_VENDOR_LITEX is not set +# CONFIG_NET_VENDOR_MARVELL is not set +# CONFIG_NET_VENDOR_MELLANOX is not set +# CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROCHIP is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set +# CONFIG_NET_VENDOR_MICROSOFT is not set +# CONFIG_NET_VENDOR_MYRI is not set +# CONFIG_FEALNX is not set +# CONFIG_NET_VENDOR_NI is not set +# CONFIG_NET_VENDOR_NATSEMI is not set +# CONFIG_NET_VENDOR_NETERION is not set +# CONFIG_NET_VENDOR_NETRONOME is not set +# CONFIG_NET_VENDOR_NVIDIA is not set +# CONFIG_NET_VENDOR_OKI is not set +# CONFIG_ETHOC is not set +# CONFIG_NET_VENDOR_PACKET_ENGINES is not set +# CONFIG_NET_VENDOR_PENSANDO is not set +# CONFIG_NET_VENDOR_QLOGIC is not set +# CONFIG_NET_VENDOR_BROCADE is not set +# CONFIG_NET_VENDOR_QUALCOMM is not set +# CONFIG_NET_VENDOR_RDC is not set +# CONFIG_NET_VENDOR_REALTEK is not set +# CONFIG_NET_VENDOR_RENESAS is not set +# CONFIG_NET_VENDOR_ROCKER is not set +# CONFIG_NET_VENDOR_SAMSUNG is not set +# CONFIG_NET_VENDOR_SEEQ is not set +# CONFIG_NET_VENDOR_SILAN is not set +# CONFIG_NET_VENDOR_SIS is not set +# CONFIG_NET_VENDOR_SOLARFLARE is not set +# CONFIG_NET_VENDOR_SMSC is not set +# CONFIG_NET_VENDOR_SOCIONEXT is not set +# CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SUN is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set +# CONFIG_NET_VENDOR_TEHUTI is not set +# CONFIG_NET_VENDOR_TI is not set +# CONFIG_NET_VENDOR_VERTEXCOM is not set +# CONFIG_NET_VENDOR_VIA is not set +# CONFIG_NET_VENDOR_WANGXUN is not set +# CONFIG_NET_VENDOR_WIZNET is not set +# CONFIG_NET_VENDOR_XILINX is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_NET_SB1000 is not set +# CONFIG_PHYLIB is not set +# CONFIG_PSE_CONTROLLER is not set +# CONFIG_MDIO_DEVICE is not set + +# +# PCS device drivers +# +# end of PCS device drivers + +CONFIG_PPP=y +CONFIG_PPP_BSDCOMP=y +CONFIG_PPP_DEFLATE=y +CONFIG_PPP_FILTER=y +CONFIG_PPP_MPPE=y +CONFIG_PPP_MULTILINK=y +CONFIG_PPPOE=y +# CONFIG_PPPOE_HASH_BITS_1 is not set +# CONFIG_PPPOE_HASH_BITS_2 is not set +CONFIG_PPPOE_HASH_BITS_4=y +# CONFIG_PPPOE_HASH_BITS_8 is not set +CONFIG_PPPOE_HASH_BITS=4 +CONFIG_PPP_ASYNC=y +CONFIG_PPP_SYNC_TTY=y +# CONFIG_SLIP is not set +CONFIG_SLHC=y +CONFIG_USB_NET_DRIVERS=y +# CONFIG_USB_CATC is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_RTL8150 is not set +# CONFIG_USB_RTL8152 is not set +# CONFIG_USB_LAN78XX is not set +CONFIG_USB_USBNET=y +# CONFIG_USB_NET_AX8817X is not set +# CONFIG_USB_NET_AX88179_178A is not set +CONFIG_USB_NET_CDCETHER=y +# CONFIG_USB_NET_CDC_EEM is not set +CONFIG_USB_NET_CDC_NCM=y +# CONFIG_USB_NET_HUAWEI_CDC_NCM is not set +# CONFIG_USB_NET_CDC_MBIM is not set +# CONFIG_USB_NET_DM9601 is not set +# CONFIG_USB_NET_SR9700 is not set +# CONFIG_USB_NET_SR9800 is not set +# CONFIG_USB_NET_SMSC75XX is not set +# CONFIG_USB_NET_SMSC95XX is not set +# CONFIG_USB_NET_GL620A is not set +# CONFIG_USB_NET_NET1080 is not set +# CONFIG_USB_NET_PLUSB is not set +# CONFIG_USB_NET_MCS7830 is not set +# CONFIG_USB_NET_RNDIS_HOST is not set +# CONFIG_USB_NET_CDC_SUBSET is 
not set +# CONFIG_USB_NET_ZAURUS is not set +# CONFIG_USB_NET_CX82310_ETH is not set +# CONFIG_USB_NET_KALMIA is not set +# CONFIG_USB_NET_QMI_WWAN is not set +# CONFIG_USB_NET_INT51X1 is not set +# CONFIG_USB_IPHETH is not set +# CONFIG_USB_SIERRA_NET is not set +# CONFIG_USB_VL600 is not set +# CONFIG_USB_NET_CH9200 is not set +# CONFIG_USB_NET_AQC111 is not set +CONFIG_USB_RTL8153_ECM=y +# CONFIG_WLAN is not set +# CONFIG_WAN is not set + +# +# Wireless WAN +# +# CONFIG_WWAN is not set +# end of Wireless WAN + +# CONFIG_VMXNET3 is not set +# CONFIG_FUJITSU_ES is not set +CONFIG_HYPERV_NET=y +# CONFIG_NETDEVSIM is not set +CONFIG_NET_FAILOVER=y +# CONFIG_ISDN is not set + +# +# Input device support +# +CONFIG_INPUT=y +# CONFIG_INPUT_FF_MEMLESS is not set +# CONFIG_INPUT_SPARSEKMAP is not set +# CONFIG_INPUT_MATRIXKMAP is not set + +# +# Userland interfaces +# +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set +# CONFIG_INPUT_EVBUG is not set + +# +# Input Device Drivers +# +# CONFIG_INPUT_KEYBOARD is not set +# CONFIG_INPUT_MOUSE is not set +# CONFIG_INPUT_JOYSTICK is not set +# CONFIG_INPUT_TABLET is not set +# CONFIG_INPUT_TOUCHSCREEN is not set +# CONFIG_INPUT_MISC is not set +# CONFIG_RMI4_CORE is not set + +# +# Hardware I/O ports +# +CONFIG_SERIO=y +CONFIG_SERIO_SERPORT=y +# CONFIG_SERIO_AMBAKMI is not set +# CONFIG_SERIO_PCIPS2 is not set +# CONFIG_SERIO_LIBPS2 is not set +CONFIG_SERIO_RAW=y +# CONFIG_SERIO_ALTERA_PS2 is not set +# CONFIG_SERIO_PS2MULT is not set +# CONFIG_SERIO_ARC_PS2 is not set +# CONFIG_SERIO_APBPS2 is not set +CONFIG_HYPERV_KEYBOARD=y +# CONFIG_SERIO_GPIO_PS2 is not set +# CONFIG_USERIO is not set +# CONFIG_GAMEPORT is not set +# end of Hardware I/O ports +# end of Input device support + +# +# Character devices +# +CONFIG_TTY=y +CONFIG_VT=y +CONFIG_CONSOLE_TRANSLATIONS=y +CONFIG_VT_CONSOLE=y +CONFIG_HW_CONSOLE=y +# CONFIG_VT_HW_CONSOLE_BINDING is not set +CONFIG_UNIX98_PTYS=y +CONFIG_LEGACY_PTYS=y +CONFIG_LEGACY_PTY_COUNT=256 +CONFIG_LEGACY_TIOCSTI=y +# CONFIG_LDISC_AUTOLOAD is not set + +# +# Serial drivers +# +CONFIG_SERIAL_EARLYCON=y +CONFIG_SERIAL_8250=y +CONFIG_SERIAL_8250_DEPRECATED_OPTIONS=y +# CONFIG_SERIAL_8250_PNP is not set +# CONFIG_SERIAL_8250_16550A_VARIANTS is not set +# CONFIG_SERIAL_8250_FINTEK is not set +CONFIG_SERIAL_8250_CONSOLE=y +CONFIG_SERIAL_8250_PCILIB=y +CONFIG_SERIAL_8250_PCI=y +# CONFIG_SERIAL_8250_EXAR is not set +CONFIG_SERIAL_8250_NR_UARTS=4 +CONFIG_SERIAL_8250_RUNTIME_UARTS=4 +# CONFIG_SERIAL_8250_EXTENDED is not set +# CONFIG_SERIAL_8250_PCI1XXXX is not set +CONFIG_SERIAL_8250_FSL=y +# CONFIG_SERIAL_8250_DW is not set +# CONFIG_SERIAL_8250_RT288X is not set +# CONFIG_SERIAL_8250_PERICOM is not set +# CONFIG_SERIAL_OF_PLATFORM is not set + +# +# Non-8250 serial port support +# +# CONFIG_SERIAL_AMBA_PL010 is not set +CONFIG_SERIAL_AMBA_PL011=y +CONFIG_SERIAL_AMBA_PL011_CONSOLE=y +# CONFIG_SERIAL_EARLYCON_SEMIHOST is not set +# CONFIG_SERIAL_UARTLITE is not set +CONFIG_SERIAL_CORE=y +CONFIG_SERIAL_CORE_CONSOLE=y +# CONFIG_SERIAL_JSM is not set +# CONFIG_SERIAL_SIFIVE is not set +# CONFIG_SERIAL_SCCNXP is not set +# CONFIG_SERIAL_SC16IS7XX is not set +# CONFIG_SERIAL_ALTERA_JTAGUART is not set +# CONFIG_SERIAL_ALTERA_UART is not set +# CONFIG_SERIAL_XILINX_PS_UART is not set +# CONFIG_SERIAL_ARC is not set +# CONFIG_SERIAL_RP2 is not set +# CONFIG_SERIAL_FSL_LPUART is not set +# CONFIG_SERIAL_FSL_LINFLEXUART is not set +# CONFIG_SERIAL_CONEXANT_DIGICOLOR is not set +# 
CONFIG_SERIAL_SPRD is not set +# end of Serial drivers + +CONFIG_SERIAL_MCTRL_GPIO=y +# CONFIG_SERIAL_NONSTANDARD is not set +# CONFIG_N_GSM is not set +# CONFIG_NOZOMI is not set +# CONFIG_NULL_TTY is not set +CONFIG_HVC_DRIVER=y +# CONFIG_HVC_DCC is not set +# CONFIG_SERIAL_DEV_BUS is not set +# CONFIG_TTY_PRINTK is not set +CONFIG_VIRTIO_CONSOLE=y +# CONFIG_IPMI_HANDLER is not set +# CONFIG_HW_RANDOM is not set +# CONFIG_APPLICOM is not set +CONFIG_DEVMEM=y +# CONFIG_DEVPORT is not set +# CONFIG_TCG_TPM is not set +# CONFIG_XILLYBUS is not set +# CONFIG_XILLYUSB is not set +# end of Character devices + +# +# I2C support +# +CONFIG_I2C=y +# CONFIG_ACPI_I2C_OPREGION is not set +CONFIG_I2C_BOARDINFO=y +# CONFIG_I2C_COMPAT is not set +# CONFIG_I2C_CHARDEV is not set +# CONFIG_I2C_MUX is not set +# CONFIG_I2C_HELPER_AUTO is not set +# CONFIG_I2C_SMBUS is not set + +# +# I2C Algorithms +# +CONFIG_I2C_ALGOBIT=y +# CONFIG_I2C_ALGOPCF is not set +# CONFIG_I2C_ALGOPCA is not set +# end of I2C Algorithms + +# +# I2C Hardware Bus support +# + +# +# PC SMBus host controller drivers +# +# CONFIG_I2C_ALI1535 is not set +# CONFIG_I2C_ALI1563 is not set +# CONFIG_I2C_ALI15X3 is not set +# CONFIG_I2C_AMD756 is not set +# CONFIG_I2C_AMD8111 is not set +# CONFIG_I2C_AMD_MP2 is not set +# CONFIG_I2C_I801 is not set +# CONFIG_I2C_ISCH is not set +# CONFIG_I2C_PIIX4 is not set +# CONFIG_I2C_NFORCE2 is not set +# CONFIG_I2C_NVIDIA_GPU is not set +# CONFIG_I2C_SIS5595 is not set +# CONFIG_I2C_SIS630 is not set +# CONFIG_I2C_SIS96X is not set +# CONFIG_I2C_VIA is not set +# CONFIG_I2C_VIAPRO is not set + +# +# ACPI drivers +# +# CONFIG_I2C_SCMI is not set + +# +# I2C system bus drivers (mostly embedded / system-on-chip) +# +# CONFIG_I2C_CADENCE is not set +# CONFIG_I2C_CBUS_GPIO is not set +# CONFIG_I2C_DESIGNWARE_PLATFORM is not set +# CONFIG_I2C_DESIGNWARE_PCI is not set +# CONFIG_I2C_EMEV2 is not set +# CONFIG_I2C_GPIO is not set +# CONFIG_I2C_HISI is not set +# CONFIG_I2C_NOMADIK is not set +# CONFIG_I2C_OCORES is not set +# CONFIG_I2C_PCA_PLATFORM is not set +# CONFIG_I2C_RK3X is not set +# CONFIG_I2C_SIMTEC is not set +# CONFIG_I2C_VERSATILE is not set +# CONFIG_I2C_THUNDERX is not set +# CONFIG_I2C_XILINX is not set + +# +# External I2C/SMBus adapter drivers +# +# CONFIG_I2C_DIOLAN_U2C is not set +# CONFIG_I2C_CP2615 is not set +# CONFIG_I2C_PCI1XXXX is not set +# CONFIG_I2C_ROBOTFUZZ_OSIF is not set +# CONFIG_I2C_TAOS_EVM is not set +# CONFIG_I2C_TINY_USB is not set + +# +# Other I2C/SMBus bus drivers +# +# CONFIG_I2C_MLXCPLD is not set +# CONFIG_I2C_VIRTIO is not set +# end of I2C Hardware Bus support + +# CONFIG_I2C_STUB is not set +# CONFIG_I2C_SLAVE is not set +# CONFIG_I2C_DEBUG_CORE is not set +# CONFIG_I2C_DEBUG_ALGO is not set +# CONFIG_I2C_DEBUG_BUS is not set +# end of I2C support + +# CONFIG_I3C is not set +# CONFIG_SPI is not set +# CONFIG_SPMI is not set +# CONFIG_HSI is not set +CONFIG_PPS=y +# CONFIG_PPS_DEBUG is not set + +# +# PPS clients support +# +# CONFIG_PPS_CLIENT_KTIMER is not set +# CONFIG_PPS_CLIENT_LDISC is not set +# CONFIG_PPS_CLIENT_GPIO is not set + +# +# PPS generators support +# + +# +# PTP clock support +# +CONFIG_PTP_1588_CLOCK=y +CONFIG_PTP_1588_CLOCK_OPTIONAL=y + +# +# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks. 
+# +# CONFIG_PTP_1588_CLOCK_KVM is not set +# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set +# CONFIG_PTP_1588_CLOCK_IDTCM is not set +# CONFIG_PTP_1588_CLOCK_MOCK is not set +# end of PTP clock support + +# CONFIG_PINCTRL is not set +CONFIG_GPIOLIB=y +CONFIG_GPIOLIB_FASTPATH_LIMIT=512 +CONFIG_OF_GPIO=y +CONFIG_GPIO_ACPI=y +# CONFIG_DEBUG_GPIO is not set +# CONFIG_GPIO_SYSFS is not set +# CONFIG_GPIO_CDEV is not set + +# +# Memory mapped GPIO drivers +# +# CONFIG_GPIO_74XX_MMIO is not set +# CONFIG_GPIO_ALTERA is not set +# CONFIG_GPIO_AMDPT is not set +# CONFIG_GPIO_CADENCE is not set +# CONFIG_GPIO_DWAPB is not set +# CONFIG_GPIO_FTGPIO010 is not set +# CONFIG_GPIO_GENERIC_PLATFORM is not set +# CONFIG_GPIO_GRGPIO is not set +# CONFIG_GPIO_HISI is not set +# CONFIG_GPIO_HLWD is not set +# CONFIG_GPIO_MB86S7X is not set +# CONFIG_GPIO_PL061 is not set +# CONFIG_GPIO_SIFIVE is not set +# CONFIG_GPIO_XGENE is not set +# CONFIG_GPIO_XILINX is not set +# CONFIG_GPIO_AMD_FCH is not set +# end of Memory mapped GPIO drivers + +# +# I2C GPIO expanders +# +# CONFIG_GPIO_ADNP is not set +# CONFIG_GPIO_FXL6408 is not set +# CONFIG_GPIO_DS4520 is not set +# CONFIG_GPIO_GW_PLD is not set +# CONFIG_GPIO_MAX7300 is not set +# CONFIG_GPIO_MAX732X is not set +# CONFIG_GPIO_PCA953X is not set +# CONFIG_GPIO_PCA9570 is not set +# CONFIG_GPIO_PCF857X is not set +# CONFIG_GPIO_TPIC2810 is not set +# end of I2C GPIO expanders + +# +# MFD GPIO expanders +# +# end of MFD GPIO expanders + +# +# PCI GPIO expanders +# +# CONFIG_GPIO_BT8XX is not set +# CONFIG_GPIO_PCI_IDIO_16 is not set +# CONFIG_GPIO_PCIE_IDIO_24 is not set +# CONFIG_GPIO_RDC321X is not set +# end of PCI GPIO expanders + +# +# USB GPIO expanders +# +# end of USB GPIO expanders + +# +# Virtual GPIO drivers +# +# CONFIG_GPIO_AGGREGATOR is not set +# CONFIG_GPIO_LATCH is not set +# CONFIG_GPIO_MOCKUP is not set +# CONFIG_GPIO_VIRTIO is not set +# CONFIG_GPIO_SIM is not set +# end of Virtual GPIO drivers + +# CONFIG_W1 is not set +CONFIG_POWER_RESET=y +# CONFIG_POWER_RESET_GPIO is not set +# CONFIG_POWER_RESET_GPIO_RESTART is not set +# CONFIG_POWER_RESET_LTC2952 is not set +# CONFIG_POWER_RESET_RESTART is not set +# CONFIG_POWER_RESET_XGENE is not set +# CONFIG_POWER_RESET_SYSCON is not set +# CONFIG_POWER_RESET_SYSCON_POWEROFF is not set +# CONFIG_NVMEM_REBOOT_MODE is not set +CONFIG_POWER_SUPPLY=y +# CONFIG_POWER_SUPPLY_DEBUG is not set +# CONFIG_IP5XXX_POWER is not set +# CONFIG_TEST_POWER is not set +# CONFIG_CHARGER_ADP5061 is not set +# CONFIG_BATTERY_CW2015 is not set +# CONFIG_BATTERY_DS2780 is not set +# CONFIG_BATTERY_DS2781 is not set +# CONFIG_BATTERY_DS2782 is not set +# CONFIG_BATTERY_SAMSUNG_SDI is not set +# CONFIG_BATTERY_SBS is not set +# CONFIG_CHARGER_SBS is not set +# CONFIG_BATTERY_BQ27XXX is not set +# CONFIG_BATTERY_MAX17040 is not set +# CONFIG_BATTERY_MAX17042 is not set +# CONFIG_CHARGER_MAX8903 is not set +# CONFIG_CHARGER_LP8727 is not set +# CONFIG_CHARGER_GPIO is not set +# CONFIG_CHARGER_LT3651 is not set +# CONFIG_CHARGER_LTC4162L is not set +# CONFIG_CHARGER_DETECTOR_MAX14656 is not set +# CONFIG_CHARGER_MAX77976 is not set +# CONFIG_CHARGER_BQ2415X is not set +# CONFIG_CHARGER_BQ24257 is not set +# CONFIG_CHARGER_BQ24735 is not set +# CONFIG_CHARGER_BQ2515X is not set +# CONFIG_CHARGER_BQ25890 is not set +# CONFIG_CHARGER_BQ25980 is not set +# CONFIG_CHARGER_BQ256XX is not set +# CONFIG_BATTERY_GAUGE_LTC2941 is not set +# CONFIG_BATTERY_GOLDFISH is not set +# CONFIG_BATTERY_RT5033 is not set +# CONFIG_CHARGER_RT9455 is 
not set +# CONFIG_CHARGER_BD99954 is not set +# CONFIG_BATTERY_UG3105 is not set +# CONFIG_HWMON is not set +# CONFIG_THERMAL is not set +# CONFIG_WATCHDOG is not set +CONFIG_SSB_POSSIBLE=y +# CONFIG_SSB is not set +CONFIG_BCMA_POSSIBLE=y +# CONFIG_BCMA is not set + +# +# Multifunction device drivers +# +# CONFIG_MFD_ACT8945A is not set +# CONFIG_MFD_AS3711 is not set +# CONFIG_MFD_SMPRO is not set +# CONFIG_MFD_AS3722 is not set +# CONFIG_PMIC_ADP5520 is not set +# CONFIG_MFD_AAT2870_CORE is not set +# CONFIG_MFD_ATMEL_FLEXCOM is not set +# CONFIG_MFD_ATMEL_HLCDC is not set +# CONFIG_MFD_BCM590XX is not set +# CONFIG_MFD_BD9571MWV is not set +# CONFIG_MFD_AXP20X_I2C is not set +# CONFIG_MFD_CS42L43_I2C is not set +# CONFIG_MFD_MADERA is not set +# CONFIG_MFD_MAX5970 is not set +# CONFIG_PMIC_DA903X is not set +# CONFIG_MFD_DA9052_I2C is not set +# CONFIG_MFD_DA9055 is not set +# CONFIG_MFD_DA9062 is not set +# CONFIG_MFD_DA9063 is not set +# CONFIG_MFD_DA9150 is not set +# CONFIG_MFD_DLN2 is not set +# CONFIG_MFD_GATEWORKS_GSC is not set +# CONFIG_MFD_MC13XXX_I2C is not set +# CONFIG_MFD_MP2629 is not set +# CONFIG_MFD_HI6421_PMIC is not set +# CONFIG_LPC_ICH is not set +# CONFIG_LPC_SCH is not set +# CONFIG_MFD_IQS62X is not set +# CONFIG_MFD_JANZ_CMODIO is not set +# CONFIG_MFD_KEMPLD is not set +# CONFIG_MFD_88PM800 is not set +# CONFIG_MFD_88PM805 is not set +# CONFIG_MFD_88PM860X is not set +# CONFIG_MFD_MAX14577 is not set +# CONFIG_MFD_MAX77541 is not set +# CONFIG_MFD_MAX77620 is not set +# CONFIG_MFD_MAX77650 is not set +# CONFIG_MFD_MAX77686 is not set +# CONFIG_MFD_MAX77693 is not set +# CONFIG_MFD_MAX77714 is not set +# CONFIG_MFD_MAX77843 is not set +# CONFIG_MFD_MAX8907 is not set +# CONFIG_MFD_MAX8925 is not set +# CONFIG_MFD_MAX8997 is not set +# CONFIG_MFD_MAX8998 is not set +# CONFIG_MFD_MT6360 is not set +# CONFIG_MFD_MT6370 is not set +# CONFIG_MFD_MT6397 is not set +# CONFIG_MFD_MENF21BMC is not set +# CONFIG_MFD_VIPERBOARD is not set +# CONFIG_MFD_NTXEC is not set +# CONFIG_MFD_RETU is not set +# CONFIG_MFD_PCF50633 is not set +# CONFIG_MFD_SY7636A is not set +# CONFIG_MFD_RDC321X is not set +# CONFIG_MFD_RT4831 is not set +# CONFIG_MFD_RT5033 is not set +# CONFIG_MFD_RT5120 is not set +# CONFIG_MFD_RC5T583 is not set +# CONFIG_MFD_RK8XX_I2C is not set +# CONFIG_MFD_RN5T618 is not set +# CONFIG_MFD_SEC_CORE is not set +# CONFIG_MFD_SI476X_CORE is not set +# CONFIG_MFD_SM501 is not set +# CONFIG_MFD_SKY81452 is not set +# CONFIG_MFD_STMPE is not set +# CONFIG_MFD_SYSCON is not set +# CONFIG_MFD_TI_AM335X_TSCADC is not set +# CONFIG_MFD_LP3943 is not set +# CONFIG_MFD_LP8788 is not set +# CONFIG_MFD_TI_LMU is not set +# CONFIG_MFD_PALMAS is not set +# CONFIG_TPS6105X is not set +# CONFIG_TPS65010 is not set +# CONFIG_TPS6507X is not set +# CONFIG_MFD_TPS65086 is not set +# CONFIG_MFD_TPS65090 is not set +# CONFIG_MFD_TPS65217 is not set +# CONFIG_MFD_TI_LP873X is not set +# CONFIG_MFD_TI_LP87565 is not set +# CONFIG_MFD_TPS65218 is not set +# CONFIG_MFD_TPS65219 is not set +# CONFIG_MFD_TPS6586X is not set +# CONFIG_MFD_TPS65910 is not set +# CONFIG_MFD_TPS65912_I2C is not set +# CONFIG_MFD_TPS6594_I2C is not set +# CONFIG_TWL4030_CORE is not set +# CONFIG_TWL6040_CORE is not set +# CONFIG_MFD_WL1273_CORE is not set +# CONFIG_MFD_LM3533 is not set +# CONFIG_MFD_TC3589X is not set +# CONFIG_MFD_TQMX86 is not set +# CONFIG_MFD_VX855 is not set +# CONFIG_MFD_LOCHNAGAR is not set +# CONFIG_MFD_ARIZONA_I2C is not set +# CONFIG_MFD_WM8400 is not set +# CONFIG_MFD_WM831X_I2C 
is not set +# CONFIG_MFD_WM8350_I2C is not set +# CONFIG_MFD_WM8994 is not set +# CONFIG_MFD_ROHM_BD718XX is not set +# CONFIG_MFD_ROHM_BD71828 is not set +# CONFIG_MFD_ROHM_BD957XMUF is not set +# CONFIG_MFD_STPMIC1 is not set +# CONFIG_MFD_STMFX is not set +# CONFIG_MFD_ATC260X_I2C is not set +# CONFIG_MFD_QCOM_PM8008 is not set +# CONFIG_MFD_RSMU_I2C is not set +# end of Multifunction device drivers + +# CONFIG_REGULATOR is not set +# CONFIG_RC_CORE is not set + +# +# CEC support +# +# CONFIG_MEDIA_CEC_SUPPORT is not set +# end of CEC support + +# CONFIG_MEDIA_SUPPORT is not set + +# +# Graphics support +# +CONFIG_VIDEO_CMDLINE=y +CONFIG_VIDEO_NOMODESET=y +# CONFIG_AUXDISPLAY is not set +CONFIG_DRM=y +# CONFIG_DRM_DEBUG_MM is not set +# CONFIG_DRM_DEBUG_MODESET_LOCK is not set +# CONFIG_DRM_FBDEV_EMULATION is not set +# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set +CONFIG_DRM_GEM_SHMEM_HELPER=y + +# +# ARM devices +# +# CONFIG_DRM_HDLCD is not set +# CONFIG_DRM_MALI_DISPLAY is not set +# CONFIG_DRM_KOMEDA is not set +# end of ARM devices + +# CONFIG_DRM_RADEON is not set +# CONFIG_DRM_AMDGPU is not set +# CONFIG_DRM_NOUVEAU is not set +CONFIG_DRM_VGEM=y +# CONFIG_DRM_VKMS is not set +# CONFIG_DRM_VMWGFX is not set +# CONFIG_DRM_UDL is not set +# CONFIG_DRM_AST is not set +# CONFIG_DRM_MGAG200 is not set +# CONFIG_DRM_QXL is not set +# CONFIG_DRM_VIRTIO_GPU is not set +CONFIG_DRM_PANEL=y + +# +# Display Panels +# +# CONFIG_DRM_PANEL_SAMSUNG_S6E88A0_AMS452EF01 is not set +# CONFIG_DRM_PANEL_SAMSUNG_S6E8AA0 is not set +# end of Display Panels + +CONFIG_DRM_BRIDGE=y +CONFIG_DRM_PANEL_BRIDGE=y + +# +# Display Interface Bridges +# +# CONFIG_DRM_CHIPONE_ICN6211 is not set +# CONFIG_DRM_CHRONTEL_CH7033 is not set +# CONFIG_DRM_DISPLAY_CONNECTOR is not set +# CONFIG_DRM_ITE_IT6505 is not set +# CONFIG_DRM_LONTIUM_LT8912B is not set +# CONFIG_DRM_LONTIUM_LT9211 is not set +# CONFIG_DRM_LONTIUM_LT9611 is not set +# CONFIG_DRM_LONTIUM_LT9611UXC is not set +# CONFIG_DRM_ITE_IT66121 is not set +# CONFIG_DRM_LVDS_CODEC is not set +# CONFIG_DRM_MEGACHIPS_STDPXXXX_GE_B850V3_FW is not set +# CONFIG_DRM_NWL_MIPI_DSI is not set +# CONFIG_DRM_NXP_PTN3460 is not set +# CONFIG_DRM_PARADE_PS8622 is not set +# CONFIG_DRM_PARADE_PS8640 is not set +# CONFIG_DRM_SAMSUNG_DSIM is not set +# CONFIG_DRM_SIL_SII8620 is not set +# CONFIG_DRM_SII902X is not set +# CONFIG_DRM_SII9234 is not set +# CONFIG_DRM_SIMPLE_BRIDGE is not set +# CONFIG_DRM_THINE_THC63LVD1024 is not set +# CONFIG_DRM_TOSHIBA_TC358762 is not set +# CONFIG_DRM_TOSHIBA_TC358764 is not set +# CONFIG_DRM_TOSHIBA_TC358767 is not set +# CONFIG_DRM_TOSHIBA_TC358768 is not set +# CONFIG_DRM_TOSHIBA_TC358775 is not set +# CONFIG_DRM_TI_DLPC3433 is not set +# CONFIG_DRM_TI_TFP410 is not set +# CONFIG_DRM_TI_SN65DSI83 is not set +# CONFIG_DRM_TI_SN65DSI86 is not set +# CONFIG_DRM_TI_TPD12S015 is not set +# CONFIG_DRM_ANALOGIX_ANX6345 is not set +# CONFIG_DRM_ANALOGIX_ANX78XX is not set +# CONFIG_DRM_ANALOGIX_ANX7625 is not set +# CONFIG_DRM_I2C_ADV7511 is not set +# CONFIG_DRM_CDNS_DSI is not set +# CONFIG_DRM_CDNS_MHDP8546 is not set +# end of Display Interface Bridges + +# CONFIG_DRM_LOONGSON is not set +# CONFIG_DRM_ETNAVIV is not set +# CONFIG_DRM_HISI_HIBMC is not set +# CONFIG_DRM_HISI_KIRIN is not set +# CONFIG_DRM_LOGICVC is not set +# CONFIG_DRM_ARCPGU is not set +# CONFIG_DRM_BOCHS is not set +# CONFIG_DRM_CIRRUS_QEMU is not set +# CONFIG_DRM_GM12U320 is not set +# CONFIG_DRM_SIMPLEDRM is not set +# CONFIG_DRM_PL111 is not set +# CONFIG_DRM_LIMA is not 
set +# CONFIG_DRM_PANFROST is not set +# CONFIG_DRM_TIDSS is not set +# CONFIG_DRM_GUD is not set +# CONFIG_DRM_SSD130X is not set +# CONFIG_DRM_HYPERV is not set +# CONFIG_DRM_LEGACY is not set +CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y + +# +# Frame buffer Devices +# +# CONFIG_FB is not set +# end of Frame buffer Devices + +# +# Backlight & LCD device support +# +# CONFIG_LCD_CLASS_DEVICE is not set +# CONFIG_BACKLIGHT_CLASS_DEVICE is not set +# end of Backlight & LCD device support + +CONFIG_HDMI=y + +# +# Console display driver support +# +CONFIG_DUMMY_CONSOLE=y +CONFIG_DUMMY_CONSOLE_COLUMNS=80 +CONFIG_DUMMY_CONSOLE_ROWS=25 +# end of Console display driver support +# end of Graphics support + +# CONFIG_DRM_ACCEL is not set +# CONFIG_SOUND is not set +CONFIG_HID_SUPPORT=y +CONFIG_HID=y +# CONFIG_HID_BATTERY_STRENGTH is not set +# CONFIG_HIDRAW is not set +# CONFIG_UHID is not set +CONFIG_HID_GENERIC=y + +# +# Special HID drivers +# +# CONFIG_HID_A4TECH is not set +# CONFIG_HID_ACCUTOUCH is not set +# CONFIG_HID_ACRUX is not set +# CONFIG_HID_APPLEIR is not set +# CONFIG_HID_AUREAL is not set +# CONFIG_HID_BELKIN is not set +# CONFIG_HID_BETOP_FF is not set +# CONFIG_HID_CHERRY is not set +# CONFIG_HID_CHICONY is not set +# CONFIG_HID_COUGAR is not set +# CONFIG_HID_MACALLY is not set +# CONFIG_HID_CMEDIA is not set +# CONFIG_HID_CREATIVE_SB0540 is not set +# CONFIG_HID_CYPRESS is not set +# CONFIG_HID_DRAGONRISE is not set +# CONFIG_HID_EMS_FF is not set +# CONFIG_HID_ELECOM is not set +# CONFIG_HID_ELO is not set +# CONFIG_HID_EVISION is not set +# CONFIG_HID_EZKEY is not set +# CONFIG_HID_GEMBIRD is not set +# CONFIG_HID_GFRM is not set +# CONFIG_HID_GLORIOUS is not set +# CONFIG_HID_HOLTEK is not set +# CONFIG_HID_GOOGLE_STADIA_FF is not set +# CONFIG_HID_VIVALDI is not set +# CONFIG_HID_KEYTOUCH is not set +# CONFIG_HID_KYE is not set +# CONFIG_HID_UCLOGIC is not set +# CONFIG_HID_WALTOP is not set +# CONFIG_HID_VIEWSONIC is not set +# CONFIG_HID_VRC2 is not set +# CONFIG_HID_XIAOMI is not set +# CONFIG_HID_GYRATION is not set +# CONFIG_HID_ICADE is not set +# CONFIG_HID_ITE is not set +# CONFIG_HID_JABRA is not set +# CONFIG_HID_TWINHAN is not set +# CONFIG_HID_KENSINGTON is not set +# CONFIG_HID_LCPOWER is not set +# CONFIG_HID_LENOVO is not set +# CONFIG_HID_LETSKETCH is not set +# CONFIG_HID_MAGICMOUSE is not set +# CONFIG_HID_MALTRON is not set +# CONFIG_HID_MAYFLASH is not set +# CONFIG_HID_MEGAWORLD_FF is not set +# CONFIG_HID_REDRAGON is not set +# CONFIG_HID_MICROSOFT is not set +# CONFIG_HID_MONTEREY is not set +# CONFIG_HID_MULTITOUCH is not set +# CONFIG_HID_NTI is not set +# CONFIG_HID_NTRIG is not set +# CONFIG_HID_ORTEK is not set +# CONFIG_HID_PANTHERLORD is not set +# CONFIG_HID_PENMOUNT is not set +# CONFIG_HID_PETALYNX is not set +# CONFIG_HID_PICOLCD is not set +# CONFIG_HID_PLANTRONICS is not set +# CONFIG_HID_PXRC is not set +# CONFIG_HID_RAZER is not set +# CONFIG_HID_PRIMAX is not set +# CONFIG_HID_RETRODE is not set +# CONFIG_HID_ROCCAT is not set +# CONFIG_HID_SAITEK is not set +# CONFIG_HID_SAMSUNG is not set +# CONFIG_HID_SEMITEK is not set +# CONFIG_HID_SIGMAMICRO is not set +# CONFIG_HID_SPEEDLINK is not set +# CONFIG_HID_STEAM is not set +# CONFIG_HID_STEELSERIES is not set +# CONFIG_HID_SUNPLUS is not set +# CONFIG_HID_RMI is not set +# CONFIG_HID_GREENASIA is not set +# CONFIG_HID_HYPERV_MOUSE is not set +# CONFIG_HID_SMARTJOYPLUS is not set +# CONFIG_HID_TIVO is not set +# CONFIG_HID_TOPSEED is not set +# CONFIG_HID_TOPRE is not set +# 
CONFIG_HID_THRUSTMASTER is not set +# CONFIG_HID_UDRAW_PS3 is not set +# CONFIG_HID_WACOM is not set +# CONFIG_HID_XINMO is not set +# CONFIG_HID_ZEROPLUS is not set +# CONFIG_HID_ZYDACRON is not set +# CONFIG_HID_SENSOR_HUB is not set +# CONFIG_HID_ALPS is not set +# CONFIG_HID_MCP2221 is not set +# end of Special HID drivers + +# +# HID-BPF support +# +# CONFIG_HID_BPF is not set +# end of HID-BPF support + +# +# USB HID support +# +CONFIG_USB_HID=y +# CONFIG_HID_PID is not set +# CONFIG_USB_HIDDEV is not set +# end of USB HID support + +CONFIG_I2C_HID=y +# CONFIG_I2C_HID_ACPI is not set +# CONFIG_I2C_HID_OF is not set +# CONFIG_I2C_HID_OF_ELAN is not set +# CONFIG_I2C_HID_OF_GOODIX is not set +CONFIG_USB_OHCI_LITTLE_ENDIAN=y +CONFIG_USB_SUPPORT=y +CONFIG_USB_COMMON=y +# CONFIG_USB_ULPI_BUS is not set +# CONFIG_USB_CONN_GPIO is not set +CONFIG_USB_ARCH_HAS_HCD=y +CONFIG_USB=y +# CONFIG_USB_PCI is not set +CONFIG_USB_ANNOUNCE_NEW_DEVICES=y + +# +# Miscellaneous USB options +# +CONFIG_USB_DEFAULT_PERSIST=y +# CONFIG_USB_FEW_INIT_RETRIES is not set +# CONFIG_USB_DYNAMIC_MINORS is not set +# CONFIG_USB_OTG is not set +# CONFIG_USB_OTG_PRODUCTLIST is not set +# CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB is not set +CONFIG_USB_AUTOSUSPEND_DELAY=2 +# CONFIG_USB_MON is not set + +# +# USB Host Controller Drivers +# +# CONFIG_USB_C67X00_HCD is not set +# CONFIG_USB_XHCI_HCD is not set +# CONFIG_USB_EHCI_HCD is not set +# CONFIG_USB_OXU210HP_HCD is not set +# CONFIG_USB_ISP116X_HCD is not set +# CONFIG_USB_OHCI_HCD is not set +# CONFIG_USB_SL811_HCD is not set +# CONFIG_USB_R8A66597_HCD is not set +# CONFIG_USB_HCD_TEST_MODE is not set + +# +# USB Device Class drivers +# +CONFIG_USB_ACM=y +# CONFIG_USB_PRINTER is not set +# CONFIG_USB_WDM is not set +# CONFIG_USB_TMC is not set + +# +# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may +# + +# +# also be needed; see USB_STORAGE Help for more info +# +# CONFIG_USB_STORAGE is not set + +# +# USB Imaging devices +# +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_MICROTEK is not set +CONFIG_USBIP_CORE=y +CONFIG_USBIP_VHCI_HCD=y +CONFIG_USBIP_VHCI_HC_PORTS=8 +CONFIG_USBIP_VHCI_NR_HCS=1 +# CONFIG_USBIP_HOST is not set +# CONFIG_USBIP_DEBUG is not set + +# +# USB dual-mode controller drivers +# +# CONFIG_USB_CDNS_SUPPORT is not set +# CONFIG_USB_MUSB_HDRC is not set +# CONFIG_USB_DWC3 is not set +# CONFIG_USB_DWC2 is not set +# CONFIG_USB_ISP1760 is not set + +# +# USB port drivers +# +CONFIG_USB_SERIAL=y +# CONFIG_USB_SERIAL_CONSOLE is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_SIMPLE is not set +# CONFIG_USB_SERIAL_AIRCABLE is not set +# CONFIG_USB_SERIAL_ARK3116 is not set +# CONFIG_USB_SERIAL_BELKIN is not set +CONFIG_USB_SERIAL_CH341=y +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +CONFIG_USB_SERIAL_CP210X=y +# CONFIG_USB_SERIAL_CYPRESS_M8 is not set +# CONFIG_USB_SERIAL_EMPEG is not set +CONFIG_USB_SERIAL_FTDI_SIO=y +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IPAQ is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_EDGEPORT_TI is not set +# CONFIG_USB_SERIAL_F81232 is not set +# CONFIG_USB_SERIAL_F8153X is not set +# CONFIG_USB_SERIAL_GARMIN is not set +# CONFIG_USB_SERIAL_IPW is not set +# CONFIG_USB_SERIAL_IUU is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KLSI is not set +# CONFIG_USB_SERIAL_KOBIL_SCT is not set +# CONFIG_USB_SERIAL_MCT_U232 is 
not set +# CONFIG_USB_SERIAL_METRO is not set +# CONFIG_USB_SERIAL_MOS7720 is not set +# CONFIG_USB_SERIAL_MOS7840 is not set +# CONFIG_USB_SERIAL_MXUPORT is not set +# CONFIG_USB_SERIAL_NAVMAN is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_OTI6858 is not set +# CONFIG_USB_SERIAL_QCAUX is not set +# CONFIG_USB_SERIAL_QUALCOMM is not set +# CONFIG_USB_SERIAL_SPCP8X5 is not set +# CONFIG_USB_SERIAL_SAFE is not set +# CONFIG_USB_SERIAL_SIERRAWIRELESS is not set +# CONFIG_USB_SERIAL_SYMBOL is not set +# CONFIG_USB_SERIAL_TI is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_OPTION is not set +# CONFIG_USB_SERIAL_OMNINET is not set +# CONFIG_USB_SERIAL_OPTICON is not set +# CONFIG_USB_SERIAL_XSENS_MT is not set +# CONFIG_USB_SERIAL_WISHBONE is not set +# CONFIG_USB_SERIAL_SSU100 is not set +# CONFIG_USB_SERIAL_QT2 is not set +# CONFIG_USB_SERIAL_UPD78F0730 is not set +# CONFIG_USB_SERIAL_XR is not set +# CONFIG_USB_SERIAL_DEBUG is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_EMI62 is not set +# CONFIG_USB_EMI26 is not set +# CONFIG_USB_ADUTUX is not set +# CONFIG_USB_SEVSEG is not set +# CONFIG_USB_LEGOTOWER is not set +# CONFIG_USB_LCD is not set +# CONFIG_USB_CYPRESS_CY7C63 is not set +# CONFIG_USB_CYTHERM is not set +# CONFIG_USB_IDMOUSE is not set +# CONFIG_USB_APPLEDISPLAY is not set +# CONFIG_APPLE_MFI_FASTCHARGE is not set +# CONFIG_USB_LD is not set +# CONFIG_USB_TRANCEVIBRATOR is not set +# CONFIG_USB_IOWARRIOR is not set +# CONFIG_USB_TEST is not set +# CONFIG_USB_EHSET_TEST_FIXTURE is not set +# CONFIG_USB_ISIGHTFW is not set +# CONFIG_USB_YUREX is not set +# CONFIG_USB_EZUSB_FX2 is not set +# CONFIG_USB_HUB_USB251XB is not set +# CONFIG_USB_HSIC_USB3503 is not set +# CONFIG_USB_HSIC_USB4604 is not set +# CONFIG_USB_LINK_LAYER_TEST is not set +# CONFIG_USB_ONBOARD_HUB is not set + +# +# USB Physical Layer drivers +# +# CONFIG_NOP_USB_XCEIV is not set +# CONFIG_USB_GPIO_VBUS is not set +# CONFIG_USB_ISP1301 is not set +# CONFIG_USB_ULPI is not set +# end of USB Physical Layer drivers + +# CONFIG_USB_GADGET is not set +# CONFIG_TYPEC is not set +# CONFIG_USB_ROLE_SWITCH is not set +# CONFIG_MMC is not set +# CONFIG_SCSI_UFSHCD is not set +# CONFIG_MEMSTICK is not set +# CONFIG_NEW_LEDS is not set +# CONFIG_ACCESSIBILITY is not set +# CONFIG_INFINIBAND is not set +CONFIG_EDAC_SUPPORT=y +# CONFIG_EDAC is not set +CONFIG_RTC_LIB=y +CONFIG_RTC_CLASS=y +CONFIG_RTC_HCTOSYS=y +CONFIG_RTC_HCTOSYS_DEVICE="rtc0" +CONFIG_RTC_SYSTOHC=y +CONFIG_RTC_SYSTOHC_DEVICE="rtc0" +# CONFIG_RTC_DEBUG is not set +CONFIG_RTC_NVMEM=y + +# +# RTC interfaces +# +CONFIG_RTC_INTF_SYSFS=y +CONFIG_RTC_INTF_PROC=y +CONFIG_RTC_INTF_DEV=y +CONFIG_RTC_INTF_DEV_UIE_EMUL=y +# CONFIG_RTC_DRV_TEST is not set + +# +# I2C RTC drivers +# +# CONFIG_RTC_DRV_ABB5ZES3 is not set +# CONFIG_RTC_DRV_ABEOZ9 is not set +# CONFIG_RTC_DRV_ABX80X is not set +# CONFIG_RTC_DRV_DS1307 is not set +# CONFIG_RTC_DRV_DS1374 is not set +# CONFIG_RTC_DRV_DS1672 is not set +# CONFIG_RTC_DRV_HYM8563 is not set +# CONFIG_RTC_DRV_MAX6900 is not set +# CONFIG_RTC_DRV_NCT3018Y is not set +# CONFIG_RTC_DRV_RS5C372 is not set +# CONFIG_RTC_DRV_ISL1208 is not set +# CONFIG_RTC_DRV_ISL12022 is not set +# CONFIG_RTC_DRV_ISL12026 is not set +# CONFIG_RTC_DRV_X1205 is not set +# CONFIG_RTC_DRV_PCF8523 is not set +# CONFIG_RTC_DRV_PCF85063 is not set +# CONFIG_RTC_DRV_PCF85363 is not set +# CONFIG_RTC_DRV_PCF8563 is not set +# CONFIG_RTC_DRV_PCF8583 is not set +# CONFIG_RTC_DRV_M41T80 is not set +# 
CONFIG_RTC_DRV_BQ32K is not set +# CONFIG_RTC_DRV_S35390A is not set +# CONFIG_RTC_DRV_FM3130 is not set +# CONFIG_RTC_DRV_RX8010 is not set +# CONFIG_RTC_DRV_RX8581 is not set +# CONFIG_RTC_DRV_RX8025 is not set +# CONFIG_RTC_DRV_EM3027 is not set +# CONFIG_RTC_DRV_RV3028 is not set +# CONFIG_RTC_DRV_RV3032 is not set +# CONFIG_RTC_DRV_RV8803 is not set +# CONFIG_RTC_DRV_SD3078 is not set + +# +# SPI RTC drivers +# +CONFIG_RTC_I2C_AND_SPI=y + +# +# SPI and I2C RTC drivers +# +# CONFIG_RTC_DRV_DS3232 is not set +# CONFIG_RTC_DRV_PCF2127 is not set +# CONFIG_RTC_DRV_RV3029C2 is not set +# CONFIG_RTC_DRV_RX6110 is not set + +# +# Platform RTC drivers +# +# CONFIG_RTC_DRV_DS1286 is not set +# CONFIG_RTC_DRV_DS1511 is not set +# CONFIG_RTC_DRV_DS1553 is not set +# CONFIG_RTC_DRV_DS1685_FAMILY is not set +# CONFIG_RTC_DRV_DS1742 is not set +# CONFIG_RTC_DRV_DS2404 is not set +# CONFIG_RTC_DRV_EFI is not set +# CONFIG_RTC_DRV_STK17TA8 is not set +# CONFIG_RTC_DRV_M48T86 is not set +# CONFIG_RTC_DRV_M48T35 is not set +# CONFIG_RTC_DRV_M48T59 is not set +# CONFIG_RTC_DRV_MSM6242 is not set +# CONFIG_RTC_DRV_RP5C01 is not set +# CONFIG_RTC_DRV_ZYNQMP is not set + +# +# on-CPU RTC drivers +# +# CONFIG_RTC_DRV_PL030 is not set +# CONFIG_RTC_DRV_PL031 is not set +# CONFIG_RTC_DRV_CADENCE is not set +# CONFIG_RTC_DRV_FTRTC010 is not set +# CONFIG_RTC_DRV_R7301 is not set + +# +# HID Sensor RTC drivers +# +# CONFIG_RTC_DRV_GOLDFISH is not set +# CONFIG_DMADEVICES is not set + +# +# DMABUF options +# +CONFIG_SYNC_FILE=y +# CONFIG_SW_SYNC is not set +# CONFIG_UDMABUF is not set +# CONFIG_DMABUF_MOVE_NOTIFY is not set +# CONFIG_DMABUF_DEBUG is not set +# CONFIG_DMABUF_SELFTESTS is not set +# CONFIG_DMABUF_HEAPS is not set +# CONFIG_DMABUF_SYSFS_STATS is not set +# end of DMABUF options + +CONFIG_UIO=y +# CONFIG_UIO_CIF is not set +CONFIG_UIO_PDRV_GENIRQ=y +CONFIG_UIO_DMEM_GENIRQ=y +# CONFIG_UIO_AEC is not set +# CONFIG_UIO_SERCOS3 is not set +# CONFIG_UIO_PCI_GENERIC is not set +# CONFIG_UIO_NETX is not set +# CONFIG_UIO_PRUSS is not set +# CONFIG_UIO_MF624 is not set +# CONFIG_UIO_HV_GENERIC is not set +CONFIG_VFIO=y +CONFIG_VFIO_GROUP=y +CONFIG_VFIO_CONTAINER=y +CONFIG_VFIO_IOMMU_TYPE1=y +# CONFIG_VFIO_NOIOMMU is not set +CONFIG_VFIO_VIRQFD=y + +# +# VFIO support for PCI devices +# +CONFIG_VFIO_PCI_CORE=y +CONFIG_VFIO_PCI_MMAP=y +CONFIG_VFIO_PCI_INTX=y +CONFIG_VFIO_PCI=y +# end of VFIO support for PCI devices + +# +# VFIO support for platform devices +# +# CONFIG_VFIO_PLATFORM is not set +# CONFIG_VFIO_AMBA is not set +# end of VFIO support for platform devices + +# CONFIG_VIRT_DRIVERS is not set +CONFIG_VIRTIO_ANCHOR=y +CONFIG_VIRTIO=y +CONFIG_VIRTIO_PCI_LIB=y +CONFIG_VIRTIO_MENU=y +CONFIG_VIRTIO_PCI=y +# CONFIG_VIRTIO_PCI_LEGACY is not set +# CONFIG_VIRTIO_VDPA is not set +CONFIG_VIRTIO_PMEM=y +CONFIG_VIRTIO_BALLOON=y +CONFIG_VIRTIO_INPUT=y +CONFIG_VIRTIO_MMIO=y +# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set +CONFIG_VDPA=y +# CONFIG_VDPA_USER is not set +# CONFIG_IFCVF is not set +# CONFIG_MLX5_VDPA_STEERING_DEBUG is not set +# CONFIG_VP_VDPA is not set +# CONFIG_SNET_VDPA is not set +CONFIG_VHOST_IOTLB=y +CONFIG_VHOST_TASK=y +CONFIG_VHOST=y +CONFIG_VHOST_MENU=y +CONFIG_VHOST_NET=y +# CONFIG_VHOST_VSOCK is not set +CONFIG_VHOST_VDPA=y +# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set + +# +# Microsoft Hyper-V guest support +# +CONFIG_HYPERV=y +CONFIG_HYPERV_UTILS=y +CONFIG_HYPERV_BALLOON=y +CONFIG_DXGKRNL=y +# end of Microsoft Hyper-V guest support + +# CONFIG_GREYBUS is not set +# CONFIG_COMEDI is not 
set +# CONFIG_STAGING is not set +# CONFIG_GOLDFISH is not set +# CONFIG_CHROME_PLATFORMS is not set +# CONFIG_MELLANOX_PLATFORM is not set +# CONFIG_SURFACE_PLATFORMS is not set +CONFIG_HAVE_CLK=y +CONFIG_HAVE_CLK_PREPARE=y +CONFIG_COMMON_CLK=y + +# +# Clock driver for ARM Reference designs +# +# CONFIG_CLK_ICST is not set +# CONFIG_CLK_SP810 is not set +# end of Clock driver for ARM Reference designs + +# CONFIG_COMMON_CLK_MAX9485 is not set +# CONFIG_COMMON_CLK_SI5341 is not set +# CONFIG_COMMON_CLK_SI5351 is not set +# CONFIG_COMMON_CLK_SI514 is not set +# CONFIG_COMMON_CLK_SI544 is not set +# CONFIG_COMMON_CLK_SI570 is not set +# CONFIG_COMMON_CLK_CDCE706 is not set +# CONFIG_COMMON_CLK_CDCE925 is not set +# CONFIG_COMMON_CLK_CS2000_CP is not set +# CONFIG_COMMON_CLK_AXI_CLKGEN is not set +# CONFIG_COMMON_CLK_XGENE is not set +# CONFIG_COMMON_CLK_RS9_PCIE is not set +# CONFIG_COMMON_CLK_SI521XX is not set +# CONFIG_COMMON_CLK_VC3 is not set +# CONFIG_COMMON_CLK_VC5 is not set +# CONFIG_COMMON_CLK_VC7 is not set +# CONFIG_COMMON_CLK_FIXED_MMIO is not set +# CONFIG_XILINX_VCU is not set +# CONFIG_COMMON_CLK_XLNX_CLKWZRD is not set +# CONFIG_HWSPINLOCK is not set + +# +# Clock Source drivers +# +CONFIG_TIMER_OF=y +CONFIG_TIMER_ACPI=y +CONFIG_TIMER_PROBE=y +CONFIG_ARM_ARCH_TIMER=y +# CONFIG_ARM_ARCH_TIMER_EVTSTREAM is not set +CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND=y +# CONFIG_FSL_ERRATUM_A008585 is not set +# CONFIG_HISILICON_ERRATUM_161010101 is not set +CONFIG_ARM64_ERRATUM_858921=y +# end of Clock Source drivers + +# CONFIG_MAILBOX is not set +CONFIG_IOMMU_IOVA=y +CONFIG_IOMMU_API=y +CONFIG_IOMMU_SUPPORT=y + +# +# Generic IOMMU Pagetable Support +# +CONFIG_IOMMU_IO_PGTABLE=y +CONFIG_IOMMU_IO_PGTABLE_LPAE=y +# CONFIG_IOMMU_IO_PGTABLE_LPAE_SELFTEST is not set +# CONFIG_IOMMU_IO_PGTABLE_ARMV7S is not set +# CONFIG_IOMMU_IO_PGTABLE_DART is not set +# end of Generic IOMMU Pagetable Support + +# CONFIG_IOMMU_DEBUGFS is not set +CONFIG_IOMMU_DEFAULT_DMA_STRICT=y +# CONFIG_IOMMU_DEFAULT_DMA_LAZY is not set +# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set +CONFIG_OF_IOMMU=y +CONFIG_IOMMU_DMA=y +# CONFIG_IOMMUFD is not set +CONFIG_ARM_SMMU=y +# CONFIG_ARM_SMMU_LEGACY_DT_BINDINGS is not set +CONFIG_ARM_SMMU_DISABLE_BYPASS_BY_DEFAULT=y +CONFIG_ARM_SMMU_V3=y +# CONFIG_ARM_SMMU_V3_SVA is not set +# CONFIG_VIRTIO_IOMMU is not set + +# +# Remoteproc drivers +# +# CONFIG_REMOTEPROC is not set +# end of Remoteproc drivers + +# +# Rpmsg drivers +# +# CONFIG_RPMSG_VIRTIO is not set +# end of Rpmsg drivers + +# CONFIG_SOUNDWIRE is not set + +# +# SOC (System On Chip) specific Drivers +# + +# +# Amlogic SoC drivers +# +# end of Amlogic SoC drivers + +# +# Broadcom SoC drivers +# +# CONFIG_SOC_BRCMSTB is not set +# end of Broadcom SoC drivers + +# +# NXP/Freescale QorIQ SoC drivers +# +# CONFIG_QUICC_ENGINE is not set +# end of NXP/Freescale QorIQ SoC drivers + +# +# fujitsu SoC drivers +# +# CONFIG_A64FX_DIAG is not set +# end of fujitsu SoC drivers + +# +# i.MX SoC drivers +# +# end of i.MX SoC drivers + +# +# Enable LiteX SoC Builder specific drivers +# +# CONFIG_LITEX_SOC_CONTROLLER is not set +# end of Enable LiteX SoC Builder specific drivers + +# CONFIG_WPCM450_SOC is not set + +# +# Qualcomm SoC drivers +# +# end of Qualcomm SoC drivers + +# CONFIG_SOC_TI is not set + +# +# Xilinx SoC drivers +# +# end of Xilinx SoC drivers +# end of SOC (System On Chip) specific Drivers + +# CONFIG_PM_DEVFREQ is not set +# CONFIG_EXTCON is not set +# CONFIG_MEMORY is not set +# CONFIG_IIO is not set +# CONFIG_NTB is 
not set +# CONFIG_PWM is not set + +# +# IRQ chip support +# +CONFIG_IRQCHIP=y +CONFIG_ARM_GIC=y +CONFIG_ARM_GIC_MAX_NR=1 +CONFIG_ARM_GIC_V2M=y +CONFIG_ARM_GIC_V3=y +CONFIG_ARM_GIC_V3_ITS=y +CONFIG_ARM_GIC_V3_ITS_PCI=y +# CONFIG_AL_FIC is not set +# CONFIG_XILINX_INTC is not set +CONFIG_PARTITION_PERCPU=y +# end of IRQ chip support + +# CONFIG_IPACK_BUS is not set +# CONFIG_RESET_CONTROLLER is not set + +# +# PHY Subsystem +# +CONFIG_GENERIC_PHY=y +# CONFIG_PHY_CAN_TRANSCEIVER is not set + +# +# PHY drivers for Broadcom platforms +# +# CONFIG_BCM_KONA_USB2_PHY is not set +# end of PHY drivers for Broadcom platforms + +# CONFIG_PHY_CADENCE_TORRENT is not set +# CONFIG_PHY_CADENCE_DPHY is not set +# CONFIG_PHY_CADENCE_DPHY_RX is not set +# CONFIG_PHY_CADENCE_SALVO is not set +# CONFIG_PHY_PXA_28NM_HSIC is not set +# CONFIG_PHY_PXA_28NM_USB2 is not set +# CONFIG_PHY_MAPPHONE_MDM6600 is not set +# end of PHY Subsystem + +# CONFIG_POWERCAP is not set +# CONFIG_MCB is not set + +# +# Performance monitor support +# +# CONFIG_ARM_CCI_PMU is not set +CONFIG_ARM_CCN=y +# CONFIG_ARM_CMN is not set +CONFIG_ARM_PMU=y +CONFIG_ARM_PMU_ACPI=y +# CONFIG_ARM_SMMU_V3_PMU is not set +CONFIG_ARM_PMUV3=y +# CONFIG_ARM_DSU_PMU is not set +# CONFIG_ARM_SPE_PMU is not set +# CONFIG_ARM_DMC620_PMU is not set +# CONFIG_ALIBABA_UNCORE_DRW_PMU is not set +# CONFIG_HISI_PMU is not set +# CONFIG_HISI_PCIE_PMU is not set +# CONFIG_HNS3_PMU is not set +# CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU is not set +# end of Performance monitor support + +CONFIG_RAS=y +# CONFIG_USB4 is not set + +# +# Android +# +# CONFIG_ANDROID_BINDER_IPC is not set +# end of Android + +CONFIG_LIBNVDIMM=y +CONFIG_BLK_DEV_PMEM=y +CONFIG_ND_CLAIM=y +CONFIG_ND_BTT=y +CONFIG_BTT=y +CONFIG_ND_PFN=y +CONFIG_NVDIMM_PFN=y +CONFIG_NVDIMM_DAX=y +# CONFIG_OF_PMEM is not set +CONFIG_DAX=y +CONFIG_DEV_DAX=y +CONFIG_DEV_DAX_PMEM=y +CONFIG_DEV_DAX_KMEM=y +CONFIG_NVMEM=y +# CONFIG_NVMEM_SYSFS is not set + +# +# Layout Types +# +# CONFIG_NVMEM_LAYOUT_SL28_VPD is not set +# CONFIG_NVMEM_LAYOUT_ONIE_TLV is not set +# end of Layout Types + +# CONFIG_NVMEM_RMEM is not set + +# +# HW tracing support +# +# CONFIG_STM is not set +# CONFIG_INTEL_TH is not set +# CONFIG_HISI_PTT is not set +# end of HW tracing support + +# CONFIG_FPGA is not set +# CONFIG_FSI is not set +# CONFIG_TEE is not set +# CONFIG_SIOX is not set +# CONFIG_SLIMBUS is not set +# CONFIG_INTERCONNECT is not set +# CONFIG_COUNTER is not set +# CONFIG_PECI is not set +# CONFIG_HTE is not set +# CONFIG_CDX_BUS is not set +# end of Device Drivers + +# +# File systems +# +CONFIG_DCACHE_WORD_ACCESS=y +# CONFIG_VALIDATE_FS_PARSER is not set +CONFIG_FS_IOMAP=y +CONFIG_BUFFER_HEAD=y +CONFIG_LEGACY_DIRECT_IO=y +# CONFIG_EXT2_FS is not set +# CONFIG_EXT3_FS is not set +CONFIG_EXT4_FS=y +CONFIG_EXT4_USE_FOR_EXT2=y +CONFIG_EXT4_FS_POSIX_ACL=y +CONFIG_EXT4_FS_SECURITY=y +# CONFIG_EXT4_DEBUG is not set +CONFIG_JBD2=y +# CONFIG_JBD2_DEBUG is not set +CONFIG_FS_MBCACHE=y +# CONFIG_REISERFS_FS is not set +# CONFIG_JFS_FS is not set +CONFIG_XFS_FS=y +# CONFIG_XFS_SUPPORT_V4 is not set +CONFIG_XFS_SUPPORT_ASCII_CI=y +CONFIG_XFS_QUOTA=y +CONFIG_XFS_POSIX_ACL=y +CONFIG_XFS_RT=y +CONFIG_XFS_DRAIN_INTENTS=y +CONFIG_XFS_ONLINE_SCRUB=y +CONFIG_XFS_ONLINE_SCRUB_STATS=y +CONFIG_XFS_ONLINE_REPAIR=y +CONFIG_XFS_DEBUG=y +CONFIG_XFS_ASSERT_FATAL=y +# CONFIG_GFS2_FS is not set +CONFIG_BTRFS_FS=y +CONFIG_BTRFS_FS_POSIX_ACL=y +# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set +# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set +# 
CONFIG_BTRFS_DEBUG is not set +# CONFIG_BTRFS_ASSERT is not set +# CONFIG_BTRFS_FS_REF_VERIFY is not set +# CONFIG_NILFS2_FS is not set +# CONFIG_F2FS_FS is not set +CONFIG_FS_DAX=y +CONFIG_FS_DAX_PMD=y +CONFIG_FS_POSIX_ACL=y +CONFIG_EXPORTFS=y +CONFIG_EXPORTFS_BLOCK_OPS=y +CONFIG_FILE_LOCKING=y +# CONFIG_FS_ENCRYPTION is not set +# CONFIG_FS_VERITY is not set +CONFIG_FSNOTIFY=y +CONFIG_DNOTIFY=y +CONFIG_INOTIFY_USER=y +CONFIG_FANOTIFY=y +CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y +CONFIG_QUOTA=y +# CONFIG_QUOTA_NETLINK_INTERFACE is not set +# CONFIG_QUOTA_DEBUG is not set +# CONFIG_QFMT_V1 is not set +# CONFIG_QFMT_V2 is not set +CONFIG_QUOTACTL=y +CONFIG_AUTOFS_FS=y +CONFIG_FUSE_FS=y +CONFIG_CUSE=y +CONFIG_VIRTIO_FS=y +CONFIG_FUSE_DAX=y +CONFIG_OVERLAY_FS=y +# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set +# CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW is not set +# CONFIG_OVERLAY_FS_INDEX is not set +# CONFIG_OVERLAY_FS_XINO_AUTO is not set +# CONFIG_OVERLAY_FS_METACOPY is not set +# CONFIG_OVERLAY_FS_DEBUG is not set + +# +# Caches +# +CONFIG_NETFS_SUPPORT=y +# CONFIG_NETFS_STATS is not set +CONFIG_FSCACHE=y +# CONFIG_FSCACHE_STATS is not set +# CONFIG_FSCACHE_DEBUG is not set +# CONFIG_CACHEFILES is not set +# end of Caches + +# +# CD-ROM/DVD Filesystems +# +CONFIG_ISO9660_FS=y +CONFIG_JOLIET=y +CONFIG_ZISOFS=y +CONFIG_UDF_FS=y +# end of CD-ROM/DVD Filesystems + +# +# DOS/FAT/EXFAT/NT Filesystems +# +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y +CONFIG_VFAT_FS=y +CONFIG_FAT_DEFAULT_CODEPAGE=437 +CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1" +# CONFIG_FAT_DEFAULT_UTF8 is not set +# CONFIG_EXFAT_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS3_FS is not set +# end of DOS/FAT/EXFAT/NT Filesystems + +# +# Pseudo filesystems +# +CONFIG_PROC_FS=y +CONFIG_PROC_KCORE=y +CONFIG_PROC_SYSCTL=y +CONFIG_PROC_PAGE_MONITOR=y +CONFIG_PROC_CHILDREN=y +CONFIG_KERNFS=y +CONFIG_SYSFS=y +CONFIG_TMPFS=y +CONFIG_TMPFS_POSIX_ACL=y +CONFIG_TMPFS_XATTR=y +# CONFIG_TMPFS_INODE64 is not set +# CONFIG_TMPFS_QUOTA is not set +CONFIG_ARCH_SUPPORTS_HUGETLBFS=y +CONFIG_HUGETLBFS=y +CONFIG_HUGETLB_PAGE=y +CONFIG_ARCH_HAS_GIGANTIC_PAGE=y +# CONFIG_CONFIGFS_FS is not set +# CONFIG_EFIVAR_FS is not set +# end of Pseudo filesystems + +CONFIG_MISC_FILESYSTEMS=y +# CONFIG_ORANGEFS_FS is not set +# CONFIG_ADFS_FS is not set +# CONFIG_AFFS_FS is not set +# CONFIG_ECRYPT_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_HFSPLUS_FS is not set +# CONFIG_BEFS_FS is not set +# CONFIG_BFS_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_SQUASHFS=y +# CONFIG_SQUASHFS_FILE_CACHE is not set +CONFIG_SQUASHFS_FILE_DIRECT=y +CONFIG_SQUASHFS_DECOMP_SINGLE=y +# CONFIG_SQUASHFS_CHOICE_DECOMP_BY_MOUNT is not set +CONFIG_SQUASHFS_COMPILE_DECOMP_SINGLE=y +# CONFIG_SQUASHFS_COMPILE_DECOMP_MULTI is not set +# CONFIG_SQUASHFS_COMPILE_DECOMP_MULTI_PERCPU is not set +CONFIG_SQUASHFS_XATTR=y +CONFIG_SQUASHFS_ZLIB=y +CONFIG_SQUASHFS_LZ4=y +CONFIG_SQUASHFS_LZO=y +CONFIG_SQUASHFS_XZ=y +CONFIG_SQUASHFS_ZSTD=y +# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set +# CONFIG_SQUASHFS_EMBEDDED is not set +CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3 +# CONFIG_VXFS_FS is not set +# CONFIG_MINIX_FS is not set +# CONFIG_OMFS_FS is not set +# CONFIG_HPFS_FS is not set +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX6FS_FS is not set +# CONFIG_ROMFS_FS is not set +# CONFIG_PSTORE is not set +# CONFIG_SYSV_FS is not set +# CONFIG_UFS_FS is not set +CONFIG_EROFS_FS=y +# CONFIG_EROFS_FS_DEBUG is not set +CONFIG_EROFS_FS_XATTR=y +CONFIG_EROFS_FS_POSIX_ACL=y 
+CONFIG_EROFS_FS_SECURITY=y +CONFIG_EROFS_FS_ZIP=y +# CONFIG_EROFS_FS_ZIP_LZMA is not set +# CONFIG_EROFS_FS_ZIP_DEFLATE is not set +# CONFIG_EROFS_FS_PCPU_KTHREAD is not set +CONFIG_NETWORK_FILESYSTEMS=y +CONFIG_NFS_FS=y +CONFIG_NFS_V2=y +CONFIG_NFS_V3=y +# CONFIG_NFS_V3_ACL is not set +CONFIG_NFS_V4=y +# CONFIG_NFS_SWAP is not set +CONFIG_NFS_V4_1=y +# CONFIG_NFS_V4_2 is not set +CONFIG_PNFS_FILE_LAYOUT=y +CONFIG_PNFS_BLOCK=y +CONFIG_PNFS_FLEXFILE_LAYOUT=y +CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org" +# CONFIG_NFS_V4_1_MIGRATION is not set +CONFIG_ROOT_NFS=y +# CONFIG_NFS_FSCACHE is not set +# CONFIG_NFS_USE_LEGACY_DNS is not set +CONFIG_NFS_USE_KERNEL_DNS=y +# CONFIG_NFS_DISABLE_UDP_SUPPORT is not set +CONFIG_NFSD=y +# CONFIG_NFSD_V2 is not set +CONFIG_NFSD_V3_ACL=y +CONFIG_NFSD_V4=y +CONFIG_NFSD_PNFS=y +CONFIG_NFSD_BLOCKLAYOUT=y +CONFIG_NFSD_SCSILAYOUT=y +CONFIG_NFSD_FLEXFILELAYOUT=y +CONFIG_NFSD_V4_SECURITY_LABEL=y +CONFIG_GRACE_PERIOD=y +CONFIG_LOCKD=y +CONFIG_LOCKD_V4=y +CONFIG_NFS_ACL_SUPPORT=y +CONFIG_NFS_COMMON=y +CONFIG_SUNRPC=y +CONFIG_SUNRPC_GSS=y +CONFIG_SUNRPC_BACKCHANNEL=y +CONFIG_RPCSEC_GSS_KRB5=y +CONFIG_RPCSEC_GSS_KRB5_ENCTYPES_AES_SHA1=y +# CONFIG_RPCSEC_GSS_KRB5_ENCTYPES_AES_SHA2 is not set +# CONFIG_SUNRPC_DEBUG is not set +CONFIG_CEPH_FS=y +CONFIG_CEPH_FSCACHE=y +CONFIG_CEPH_FS_POSIX_ACL=y +# CONFIG_CEPH_FS_SECURITY_LABEL is not set +CONFIG_CIFS=y +# CONFIG_CIFS_STATS2 is not set +CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y +# CONFIG_CIFS_UPCALL is not set +CONFIG_CIFS_XATTR=y +CONFIG_CIFS_POSIX=y +# CONFIG_CIFS_DEBUG is not set +# CONFIG_CIFS_DFS_UPCALL is not set +# CONFIG_CIFS_SWN_UPCALL is not set +# CONFIG_CIFS_FSCACHE is not set +# CONFIG_CIFS_ROOT is not set +# CONFIG_SMB_SERVER is not set +CONFIG_SMBFS=y +# CONFIG_CODA_FS is not set +# CONFIG_AFS_FS is not set +CONFIG_9P_FS=y +CONFIG_9P_FSCACHE=y +CONFIG_9P_FS_POSIX_ACL=y +CONFIG_9P_FS_SECURITY=y +CONFIG_NLS=y +CONFIG_NLS_DEFAULT="iso8859-1" +CONFIG_NLS_CODEPAGE_437=y +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# CONFIG_NLS_CODEPAGE_932 is not set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1250 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +CONFIG_NLS_ASCII=y +CONFIG_NLS_ISO8859_1=y +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_MAC_ROMAN is not set +# CONFIG_NLS_MAC_CELTIC is not set +# CONFIG_NLS_MAC_CENTEURO is not set +# CONFIG_NLS_MAC_CROATIAN is not set +# CONFIG_NLS_MAC_CYRILLIC is not set +# CONFIG_NLS_MAC_GAELIC is not set +# CONFIG_NLS_MAC_GREEK is not set +# 
CONFIG_NLS_MAC_ICELAND is not set +# CONFIG_NLS_MAC_INUIT is not set +# CONFIG_NLS_MAC_ROMANIAN is not set +# CONFIG_NLS_MAC_TURKISH is not set +CONFIG_NLS_UTF8=y +CONFIG_NLS_UCS2_UTILS=y +# CONFIG_UNICODE is not set +CONFIG_IO_WQ=y +# end of File systems + +# +# Security options +# +CONFIG_KEYS=y +# CONFIG_KEYS_REQUEST_CACHE is not set +# CONFIG_PERSISTENT_KEYRINGS is not set +# CONFIG_BIG_KEYS is not set +# CONFIG_TRUSTED_KEYS is not set +# CONFIG_ENCRYPTED_KEYS is not set +# CONFIG_KEY_DH_OPERATIONS is not set +CONFIG_SECURITY_DMESG_RESTRICT=y +CONFIG_SECURITY=y +# CONFIG_SECURITYFS is not set +# CONFIG_SECURITY_NETWORK is not set +CONFIG_SECURITY_PATH=y +CONFIG_HARDENED_USERCOPY=y +CONFIG_FORTIFY_SOURCE=y +# CONFIG_STATIC_USERMODEHELPER is not set +# CONFIG_SECURITY_SMACK is not set +# CONFIG_SECURITY_TOMOYO is not set +# CONFIG_SECURITY_APPARMOR is not set +# CONFIG_SECURITY_LOADPIN is not set +# CONFIG_SECURITY_YAMA is not set +# CONFIG_SECURITY_SAFESETID is not set +# CONFIG_SECURITY_LOCKDOWN_LSM is not set +CONFIG_SECURITY_LANDLOCK=y +# CONFIG_INTEGRITY is not set +# CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT is not set +CONFIG_DEFAULT_SECURITY_DAC=y +CONFIG_LSM="landlock,lockdown,yama,loadpin,safesetid,integrity,bpf" + +# +# Kernel hardening options +# + +# +# Memory initialization +# +CONFIG_INIT_STACK_NONE=y +# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set +# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set +CONFIG_CC_HAS_ZERO_CALL_USED_REGS=y +CONFIG_ZERO_CALL_USED_REGS=y +# end of Memory initialization + +# +# Hardening of kernel data structures +# +CONFIG_LIST_HARDENED=y +# CONFIG_BUG_ON_DATA_CORRUPTION is not set +# end of Hardening of kernel data structures + +CONFIG_RANDSTRUCT_NONE=y +# end of Kernel hardening options +# end of Security options + +CONFIG_XOR_BLOCKS=y +CONFIG_ASYNC_CORE=y +CONFIG_ASYNC_MEMCPY=y +CONFIG_ASYNC_XOR=y +CONFIG_ASYNC_PQ=y +CONFIG_ASYNC_RAID6_RECOV=y +CONFIG_CRYPTO=y + +# +# Crypto core or helper +# +CONFIG_CRYPTO_ALGAPI=y +CONFIG_CRYPTO_ALGAPI2=y +CONFIG_CRYPTO_AEAD=y +CONFIG_CRYPTO_AEAD2=y +CONFIG_CRYPTO_SIG2=y +CONFIG_CRYPTO_SKCIPHER=y +CONFIG_CRYPTO_SKCIPHER2=y +CONFIG_CRYPTO_HASH=y +CONFIG_CRYPTO_HASH2=y +CONFIG_CRYPTO_RNG=y +CONFIG_CRYPTO_RNG2=y +CONFIG_CRYPTO_RNG_DEFAULT=y +CONFIG_CRYPTO_AKCIPHER2=y +CONFIG_CRYPTO_AKCIPHER=y +CONFIG_CRYPTO_KPP2=y +CONFIG_CRYPTO_ACOMP2=y +CONFIG_CRYPTO_MANAGER=y +CONFIG_CRYPTO_MANAGER2=y +# CONFIG_CRYPTO_USER is not set +CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y +CONFIG_CRYPTO_NULL=y +CONFIG_CRYPTO_NULL2=y +# CONFIG_CRYPTO_PCRYPT is not set +# CONFIG_CRYPTO_CRYPTD is not set +CONFIG_CRYPTO_AUTHENC=y +# CONFIG_CRYPTO_TEST is not set +# end of Crypto core or helper + +# +# Public-key cryptography +# +CONFIG_CRYPTO_RSA=y +# CONFIG_CRYPTO_DH is not set +# CONFIG_CRYPTO_ECDH is not set +# CONFIG_CRYPTO_ECDSA is not set +# CONFIG_CRYPTO_ECRDSA is not set +# CONFIG_CRYPTO_SM2 is not set +# CONFIG_CRYPTO_CURVE25519 is not set +# end of Public-key cryptography + +# +# Block ciphers +# +CONFIG_CRYPTO_AES=y +# CONFIG_CRYPTO_AES_TI is not set +# CONFIG_CRYPTO_ANUBIS is not set +# CONFIG_CRYPTO_ARIA is not set +# CONFIG_CRYPTO_BLOWFISH is not set +# CONFIG_CRYPTO_CAMELLIA is not set +# CONFIG_CRYPTO_CAST5 is not set +# CONFIG_CRYPTO_CAST6 is not set +CONFIG_CRYPTO_DES=y +# CONFIG_CRYPTO_FCRYPT is not set +# CONFIG_CRYPTO_KHAZAD is not set +# CONFIG_CRYPTO_SEED is not set +# CONFIG_CRYPTO_SERPENT is not set +# CONFIG_CRYPTO_SM4_GENERIC is not set +# CONFIG_CRYPTO_TEA is not set +# CONFIG_CRYPTO_TWOFISH is not set +# end of Block ciphers + 
+# +# Length-preserving ciphers and modes +# +# CONFIG_CRYPTO_ADIANTUM is not set +CONFIG_CRYPTO_ARC4=y +# CONFIG_CRYPTO_CHACHA20 is not set +CONFIG_CRYPTO_CBC=y +# CONFIG_CRYPTO_CFB is not set +CONFIG_CRYPTO_CTR=y +CONFIG_CRYPTO_CTS=y +CONFIG_CRYPTO_ECB=y +# CONFIG_CRYPTO_HCTR2 is not set +# CONFIG_CRYPTO_KEYWRAP is not set +# CONFIG_CRYPTO_LRW is not set +# CONFIG_CRYPTO_OFB is not set +# CONFIG_CRYPTO_PCBC is not set +CONFIG_CRYPTO_XTS=y +# end of Length-preserving ciphers and modes + +# +# AEAD (authenticated encryption with associated data) ciphers +# +# CONFIG_CRYPTO_AEGIS128 is not set +# CONFIG_CRYPTO_CHACHA20POLY1305 is not set +CONFIG_CRYPTO_CCM=y +CONFIG_CRYPTO_GCM=y +CONFIG_CRYPTO_GENIV=y +CONFIG_CRYPTO_SEQIV=y +CONFIG_CRYPTO_ECHAINIV=y +CONFIG_CRYPTO_ESSIV=y +# end of AEAD (authenticated encryption with associated data) ciphers + +# +# Hashes, digests, and MACs +# +CONFIG_CRYPTO_BLAKE2B=y +CONFIG_CRYPTO_CMAC=y +CONFIG_CRYPTO_GHASH=y +CONFIG_CRYPTO_HMAC=y +CONFIG_CRYPTO_MD4=y +CONFIG_CRYPTO_MD5=y +# CONFIG_CRYPTO_MICHAEL_MIC is not set +# CONFIG_CRYPTO_POLY1305 is not set +# CONFIG_CRYPTO_RMD160 is not set +CONFIG_CRYPTO_SHA1=y +CONFIG_CRYPTO_SHA256=y +CONFIG_CRYPTO_SHA512=y +CONFIG_CRYPTO_SHA3=y +# CONFIG_CRYPTO_SM3_GENERIC is not set +# CONFIG_CRYPTO_STREEBOG is not set +# CONFIG_CRYPTO_VMAC is not set +# CONFIG_CRYPTO_WP512 is not set +# CONFIG_CRYPTO_XCBC is not set +CONFIG_CRYPTO_XXHASH=y +# end of Hashes, digests, and MACs + +# +# CRCs (cyclic redundancy checks) +# +CONFIG_CRYPTO_CRC32C=y +# CONFIG_CRYPTO_CRC32 is not set +# CONFIG_CRYPTO_CRCT10DIF is not set +# end of CRCs (cyclic redundancy checks) + +# +# Compression +# +# CONFIG_CRYPTO_DEFLATE is not set +# CONFIG_CRYPTO_LZO is not set +# CONFIG_CRYPTO_842 is not set +# CONFIG_CRYPTO_LZ4 is not set +# CONFIG_CRYPTO_LZ4HC is not set +# CONFIG_CRYPTO_ZSTD is not set +# end of Compression + +# +# Random number generation +# +# CONFIG_CRYPTO_ANSI_CPRNG is not set +CONFIG_CRYPTO_DRBG_MENU=y +CONFIG_CRYPTO_DRBG_HMAC=y +# CONFIG_CRYPTO_DRBG_HASH is not set +# CONFIG_CRYPTO_DRBG_CTR is not set +CONFIG_CRYPTO_DRBG=y +CONFIG_CRYPTO_JITTERENTROPY=y +# CONFIG_CRYPTO_JITTERENTROPY_TESTINTERFACE is not set +# end of Random number generation + +# +# Userspace interface +# +CONFIG_CRYPTO_USER_API=y +CONFIG_CRYPTO_USER_API_HASH=y +CONFIG_CRYPTO_USER_API_SKCIPHER=y +# CONFIG_CRYPTO_USER_API_RNG is not set +# CONFIG_CRYPTO_USER_API_AEAD is not set +CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE=y +# end of Userspace interface + +CONFIG_CRYPTO_HASH_INFO=y +# CONFIG_CRYPTO_NHPOLY1305_NEON is not set +CONFIG_CRYPTO_CHACHA20_NEON=y + +# +# Accelerated Cryptographic Algorithms for CPU (arm64) +# +# CONFIG_CRYPTO_GHASH_ARM64_CE is not set +CONFIG_CRYPTO_POLY1305_NEON=y +# CONFIG_CRYPTO_SHA1_ARM64_CE is not set +# CONFIG_CRYPTO_SHA256_ARM64 is not set +# CONFIG_CRYPTO_SHA2_ARM64_CE is not set +# CONFIG_CRYPTO_SHA512_ARM64 is not set +# CONFIG_CRYPTO_SHA512_ARM64_CE is not set +# CONFIG_CRYPTO_SHA3_ARM64 is not set +# CONFIG_CRYPTO_SM3_NEON is not set +# CONFIG_CRYPTO_SM3_ARM64_CE is not set +# CONFIG_CRYPTO_POLYVAL_ARM64_CE is not set +# CONFIG_CRYPTO_AES_ARM64 is not set +# CONFIG_CRYPTO_AES_ARM64_CE is not set +# CONFIG_CRYPTO_AES_ARM64_CE_BLK is not set +# CONFIG_CRYPTO_AES_ARM64_NEON_BLK is not set +# CONFIG_CRYPTO_AES_ARM64_BS is not set +# CONFIG_CRYPTO_SM4_ARM64_CE is not set +# CONFIG_CRYPTO_SM4_ARM64_CE_BLK is not set +# CONFIG_CRYPTO_SM4_ARM64_NEON_BLK is not set +# CONFIG_CRYPTO_AES_ARM64_CE_CCM is not set +# CONFIG_CRYPTO_SM4_ARM64_CE_CCM 
is not set +# CONFIG_CRYPTO_SM4_ARM64_CE_GCM is not set +# end of Accelerated Cryptographic Algorithms for CPU (arm64) + +CONFIG_CRYPTO_HW=y +# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set +# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set +# CONFIG_CRYPTO_DEV_CCP is not set +# CONFIG_CRYPTO_DEV_NITROX_CNN55XX is not set +# CONFIG_CRYPTO_DEV_QAT_DH895xCC is not set +# CONFIG_CRYPTO_DEV_QAT_C3XXX is not set +# CONFIG_CRYPTO_DEV_QAT_C62X is not set +# CONFIG_CRYPTO_DEV_QAT_4XXX is not set +# CONFIG_CRYPTO_DEV_QAT_DH895xCCVF is not set +# CONFIG_CRYPTO_DEV_QAT_C3XXXVF is not set +# CONFIG_CRYPTO_DEV_QAT_C62XVF is not set +# CONFIG_CRYPTO_DEV_CAVIUM_ZIP is not set +# CONFIG_CRYPTO_DEV_VIRTIO is not set +# CONFIG_CRYPTO_DEV_SAFEXCEL is not set +# CONFIG_CRYPTO_DEV_CCREE is not set +# CONFIG_CRYPTO_DEV_HISI_SEC is not set +# CONFIG_CRYPTO_DEV_HISI_SEC2 is not set +# CONFIG_CRYPTO_DEV_HISI_ZIP is not set +# CONFIG_CRYPTO_DEV_HISI_HPRE is not set +# CONFIG_CRYPTO_DEV_HISI_TRNG is not set +# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set +CONFIG_ASYMMETRIC_KEY_TYPE=y +CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y +CONFIG_X509_CERTIFICATE_PARSER=y +# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set +CONFIG_PKCS7_MESSAGE_PARSER=y +# CONFIG_PKCS7_TEST_KEY is not set +# CONFIG_SIGNED_PE_FILE_VERIFICATION is not set +CONFIG_FIPS_SIGNATURE_SELFTEST=y + +# +# Certificates for signature checking +# +CONFIG_SYSTEM_TRUSTED_KEYRING=y +CONFIG_SYSTEM_TRUSTED_KEYS="" +# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set +# CONFIG_SECONDARY_TRUSTED_KEYRING is not set +# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set +# end of Certificates for signature checking + +CONFIG_BINARY_PRINTF=y + +# +# Library routines +# +CONFIG_RAID6_PQ=y +# CONFIG_RAID6_PQ_BENCHMARK is not set +# CONFIG_PACKING is not set +CONFIG_BITREVERSE=y +CONFIG_HAVE_ARCH_BITREVERSE=y +CONFIG_GENERIC_STRNCPY_FROM_USER=y +CONFIG_GENERIC_STRNLEN_USER=y +CONFIG_GENERIC_NET_UTILS=y +# CONFIG_CORDIC is not set +# CONFIG_PRIME_NUMBERS is not set +CONFIG_RATIONAL=y +CONFIG_GENERIC_PCI_IOMAP=y +CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y +CONFIG_ARCH_HAS_FAST_MULTIPLIER=y +CONFIG_ARCH_USE_SYM_ANNOTATIONS=y +# CONFIG_INDIRECT_PIO is not set +# CONFIG_TRACE_MMIO_ACCESS is not set + +# +# Crypto library routines +# +CONFIG_CRYPTO_LIB_UTILS=y +CONFIG_CRYPTO_LIB_AES=y +CONFIG_CRYPTO_LIB_ARC4=y +CONFIG_CRYPTO_LIB_GF128MUL=y +CONFIG_CRYPTO_LIB_BLAKE2S_GENERIC=y +CONFIG_CRYPTO_ARCH_HAVE_LIB_CHACHA=y +CONFIG_CRYPTO_LIB_CHACHA_GENERIC=y +CONFIG_CRYPTO_LIB_CHACHA=y +CONFIG_CRYPTO_LIB_CURVE25519_GENERIC=y +CONFIG_CRYPTO_LIB_CURVE25519=y +CONFIG_CRYPTO_LIB_DES=y +CONFIG_CRYPTO_LIB_POLY1305_RSIZE=9 +CONFIG_CRYPTO_ARCH_HAVE_LIB_POLY1305=y +CONFIG_CRYPTO_LIB_POLY1305=y +CONFIG_CRYPTO_LIB_CHACHA20POLY1305=y +CONFIG_CRYPTO_LIB_SHA1=y +CONFIG_CRYPTO_LIB_SHA256=y +# end of Crypto library routines + +CONFIG_CRC_CCITT=y +CONFIG_CRC16=y +# CONFIG_CRC_T10DIF is not set +# CONFIG_CRC64_ROCKSOFT is not set +CONFIG_CRC_ITU_T=y +CONFIG_CRC32=y +# CONFIG_CRC32_SELFTEST is not set +CONFIG_CRC32_SLICEBY8=y +# CONFIG_CRC32_SLICEBY4 is not set +# CONFIG_CRC32_SARWATE is not set +# CONFIG_CRC32_BIT is not set +# CONFIG_CRC64 is not set +# CONFIG_CRC4 is not set +# CONFIG_CRC7 is not set +CONFIG_LIBCRC32C=y +# CONFIG_CRC8 is not set +CONFIG_XXHASH=y +CONFIG_AUDIT_GENERIC=y +CONFIG_AUDIT_ARCH_COMPAT_GENERIC=y +# CONFIG_RANDOM32_SELFTEST is not set +CONFIG_ZLIB_INFLATE=y +CONFIG_ZLIB_DEFLATE=y +CONFIG_LZO_COMPRESS=y +CONFIG_LZO_DECOMPRESS=y +CONFIG_LZ4_DECOMPRESS=y +CONFIG_ZSTD_COMMON=y +CONFIG_ZSTD_COMPRESS=y +CONFIG_ZSTD_DECOMPRESS=y 
+CONFIG_XZ_DEC=y +# CONFIG_XZ_DEC_X86 is not set +# CONFIG_XZ_DEC_POWERPC is not set +# CONFIG_XZ_DEC_IA64 is not set +# CONFIG_XZ_DEC_ARM is not set +# CONFIG_XZ_DEC_ARMTHUMB is not set +# CONFIG_XZ_DEC_SPARC is not set +# CONFIG_XZ_DEC_MICROLZMA is not set +# CONFIG_XZ_DEC_TEST is not set +CONFIG_DECOMPRESS_GZIP=y +CONFIG_DECOMPRESS_ZSTD=y +CONFIG_GENERIC_ALLOCATOR=y +CONFIG_TEXTSEARCH=y +CONFIG_TEXTSEARCH_KMP=y +CONFIG_INTERVAL_TREE=y +CONFIG_XARRAY_MULTI=y +CONFIG_ASSOCIATIVE_ARRAY=y +CONFIG_HAS_IOMEM=y +CONFIG_HAS_IOPORT=y +CONFIG_HAS_IOPORT_MAP=y +CONFIG_HAS_DMA=y +CONFIG_DMA_OPS=y +CONFIG_NEED_SG_DMA_FLAGS=y +CONFIG_NEED_SG_DMA_LENGTH=y +CONFIG_NEED_DMA_MAP_STATE=y +CONFIG_ARCH_DMA_ADDR_T_64BIT=y +CONFIG_DMA_DECLARE_COHERENT=y +CONFIG_ARCH_HAS_SETUP_DMA_OPS=y +CONFIG_ARCH_HAS_TEARDOWN_DMA_OPS=y +CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE=y +CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU=y +CONFIG_ARCH_HAS_DMA_PREP_COHERENT=y +CONFIG_SWIOTLB=y +# CONFIG_SWIOTLB_DYNAMIC is not set +CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC=y +# CONFIG_DMA_RESTRICTED_POOL is not set +CONFIG_DMA_NONCOHERENT_MMAP=y +CONFIG_DMA_COHERENT_POOL=y +CONFIG_DMA_DIRECT_REMAP=y +# CONFIG_DMA_API_DEBUG is not set +# CONFIG_DMA_MAP_BENCHMARK is not set +CONFIG_SGL_ALLOC=y +# CONFIG_FORCE_NR_CPUS is not set +CONFIG_CPU_RMAP=y +CONFIG_DQL=y +CONFIG_GLOB=y +# CONFIG_GLOB_SELFTEST is not set +CONFIG_NLATTR=y +CONFIG_CLZ_TAB=y +CONFIG_IRQ_POLL=y +CONFIG_MPILIB=y +CONFIG_LIBFDT=y +CONFIG_OID_REGISTRY=y +CONFIG_UCS2_STRING=y +CONFIG_HAVE_GENERIC_VDSO=y +CONFIG_GENERIC_GETTIMEOFDAY=y +CONFIG_GENERIC_VDSO_TIME_NS=y +CONFIG_FONT_SUPPORT=y +CONFIG_FONT_8x16=y +CONFIG_FONT_AUTOSELECT=y +CONFIG_SG_POOL=y +CONFIG_MEMREGION=y +CONFIG_ARCH_STACKWALK=y +CONFIG_SBITMAP=y +# end of Library routines + +CONFIG_GENERIC_IOREMAP=y +CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED=y + +# +# Kernel hacking +# + +# +# printk and dmesg options +# +CONFIG_PRINTK_TIME=y +# CONFIG_PRINTK_CALLER is not set +# CONFIG_STACKTRACE_BUILD_ID is not set +CONFIG_CONSOLE_LOGLEVEL_DEFAULT=2 +CONFIG_CONSOLE_LOGLEVEL_QUIET=1 +CONFIG_MESSAGE_LOGLEVEL_DEFAULT=1 +# CONFIG_BOOT_PRINTK_DELAY is not set +# CONFIG_DYNAMIC_DEBUG is not set +# CONFIG_DYNAMIC_DEBUG_CORE is not set +# CONFIG_SYMBOLIC_ERRNAME is not set +CONFIG_DEBUG_BUGVERBOSE=y +# end of printk and dmesg options + +# CONFIG_DEBUG_KERNEL is not set +# CONFIG_DEBUG_MISC is not set + +# +# Compile-time checks and compiler options +# +# CONFIG_DEBUG_INFO is not set +CONFIG_AS_HAS_NON_CONST_LEB128=y +# CONFIG_DEBUG_INFO_NONE is not set +CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y +# CONFIG_DEBUG_INFO_DWARF4 is not set +# CONFIG_DEBUG_INFO_DWARF5 is not set +# CONFIG_DEBUG_INFO_REDUCED is not set +CONFIG_DEBUG_INFO_COMPRESSED_NONE=y +# CONFIG_DEBUG_INFO_COMPRESSED_ZLIB is not set +# CONFIG_DEBUG_INFO_SPLIT is not set +CONFIG_DEBUG_INFO_BTF=y +CONFIG_PAHOLE_HAS_SPLIT_BTF=y +CONFIG_PAHOLE_HAS_LANG_EXCLUDE=y +CONFIG_DEBUG_INFO_BTF_MODULES=y +# CONFIG_MODULE_ALLOW_BTF_MISMATCH is not set +# CONFIG_GDB_SCRIPTS is not set +CONFIG_FRAME_WARN=2048 +# CONFIG_STRIP_ASM_SYMS is not set +# CONFIG_READABLE_ASM is not set +# CONFIG_HEADERS_INSTALL is not set +# CONFIG_DEBUG_SECTION_MISMATCH is not set +# CONFIG_SECTION_MISMATCH_WARN_ONLY is not set +# CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B is not set +CONFIG_ARCH_WANT_FRAME_POINTERS=y +CONFIG_FRAME_POINTER=y +# CONFIG_VMLINUX_MAP is not set +# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set +# end of Compile-time checks and compiler options + +# +# Generic Kernel Debugging Instruments +# +# CONFIG_MAGIC_SYSRQ is 
not set +CONFIG_DEBUG_FS=y +CONFIG_DEBUG_FS_ALLOW_ALL=y +# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set +# CONFIG_DEBUG_FS_ALLOW_NONE is not set +CONFIG_HAVE_ARCH_KGDB=y +# CONFIG_KGDB is not set +CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y +# CONFIG_UBSAN is not set +CONFIG_HAVE_ARCH_KCSAN=y +CONFIG_HAVE_KCSAN_COMPILER=y +# CONFIG_KCSAN is not set +# end of Generic Kernel Debugging Instruments + +# +# Networking Debugging +# +# CONFIG_NET_DEV_REFCNT_TRACKER is not set +# CONFIG_NET_NS_REFCNT_TRACKER is not set +# CONFIG_DEBUG_NET is not set +# end of Networking Debugging + +# +# Memory Debugging +# +CONFIG_PAGE_EXTENSION=y +# CONFIG_DEBUG_PAGEALLOC is not set +# CONFIG_SLUB_DEBUG is not set +# CONFIG_PAGE_OWNER is not set +# CONFIG_PAGE_POISONING is not set +# CONFIG_DEBUG_PAGE_REF is not set +# CONFIG_DEBUG_RODATA_TEST is not set +CONFIG_ARCH_HAS_DEBUG_WX=y +# CONFIG_DEBUG_WX is not set +CONFIG_GENERIC_PTDUMP=y +# CONFIG_PTDUMP_DEBUGFS is not set +CONFIG_HAVE_DEBUG_KMEMLEAK=y +# CONFIG_DEBUG_KMEMLEAK is not set +# CONFIG_PER_VMA_LOCK_STATS is not set +# CONFIG_DEBUG_OBJECTS is not set +# CONFIG_SHRINKER_DEBUG is not set +# CONFIG_DEBUG_STACK_USAGE is not set +CONFIG_SCHED_STACK_END_CHECK=y +CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y +# CONFIG_DEBUG_VM is not set +# CONFIG_DEBUG_VM_PGTABLE is not set +CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y +# CONFIG_DEBUG_VIRTUAL is not set +CONFIG_DEBUG_MEMORY_INIT=y +# CONFIG_DEBUG_PER_CPU_MAPS is not set +CONFIG_HAVE_ARCH_KASAN=y +CONFIG_HAVE_ARCH_KASAN_SW_TAGS=y +CONFIG_HAVE_ARCH_KASAN_VMALLOC=y +CONFIG_CC_HAS_KASAN_GENERIC=y +CONFIG_CC_HAS_KASAN_SW_TAGS=y +CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y +# CONFIG_KASAN is not set +CONFIG_HAVE_ARCH_KFENCE=y +# CONFIG_KFENCE is not set +# end of Memory Debugging + +# CONFIG_DEBUG_SHIRQ is not set + +# +# Debug Oops, Lockups and Hangs +# +CONFIG_PANIC_ON_OOPS=y +CONFIG_PANIC_ON_OOPS_VALUE=1 +CONFIG_PANIC_TIMEOUT=0 +# CONFIG_SOFTLOCKUP_DETECTOR is not set +CONFIG_HAVE_HARDLOCKUP_DETECTOR_BUDDY=y +# CONFIG_HARDLOCKUP_DETECTOR is not set +# CONFIG_DETECT_HUNG_TASK is not set +# CONFIG_WQ_WATCHDOG is not set +# CONFIG_WQ_CPU_INTENSIVE_REPORT is not set +# CONFIG_TEST_LOCKUP is not set +# end of Debug Oops, Lockups and Hangs + +# +# Scheduler Debugging +# +CONFIG_SCHED_DEBUG=y +CONFIG_SCHED_INFO=y +CONFIG_SCHEDSTATS=y +# end of Scheduler Debugging + +# CONFIG_DEBUG_TIMEKEEPING is not set + +# +# Lock Debugging (spinlocks, mutexes, etc...) +# +CONFIG_LOCK_DEBUGGING_SUPPORT=y +# CONFIG_PROVE_LOCKING is not set +# CONFIG_LOCK_STAT is not set +# CONFIG_DEBUG_RT_MUTEXES is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_DEBUG_MUTEXES is not set +# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set +# CONFIG_DEBUG_RWSEMS is not set +# CONFIG_DEBUG_LOCK_ALLOC is not set +# CONFIG_DEBUG_ATOMIC_SLEEP is not set +# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set +# CONFIG_LOCK_TORTURE_TEST is not set +# CONFIG_WW_MUTEX_SELFTEST is not set +# CONFIG_SCF_TORTURE_TEST is not set +# CONFIG_CSD_LOCK_WAIT_DEBUG is not set +# end of Lock Debugging (spinlocks, mutexes, etc...) 
+ +# CONFIG_DEBUG_IRQFLAGS is not set +CONFIG_STACKTRACE=y +# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set +# CONFIG_DEBUG_KOBJECT is not set + +# +# Debug kernel data structures +# +CONFIG_DEBUG_LIST=y +# CONFIG_DEBUG_PLIST is not set +CONFIG_DEBUG_SG=y +CONFIG_DEBUG_NOTIFIERS=y +# CONFIG_DEBUG_MAPLE_TREE is not set +# end of Debug kernel data structures + +CONFIG_DEBUG_CREDENTIALS=y + +# +# RCU Debugging +# +# CONFIG_RCU_SCALE_TEST is not set +# CONFIG_RCU_TORTURE_TEST is not set +# CONFIG_RCU_REF_SCALE_TEST is not set +CONFIG_RCU_CPU_STALL_TIMEOUT=60 +CONFIG_RCU_EXP_CPU_STALL_TIMEOUT=0 +# CONFIG_RCU_CPU_STALL_CPUTIME is not set +# CONFIG_RCU_TRACE is not set +# CONFIG_RCU_EQS_DEBUG is not set +# end of RCU Debugging + +# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set +# CONFIG_LATENCYTOP is not set +# CONFIG_DEBUG_CGROUP_REF is not set +CONFIG_NOP_TRACER=y +CONFIG_HAVE_FUNCTION_TRACER=y +CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y +CONFIG_HAVE_FUNCTION_GRAPH_RETVAL=y +CONFIG_HAVE_DYNAMIC_FTRACE=y +CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y +CONFIG_HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS=y +CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y +CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y +CONFIG_HAVE_SYSCALL_TRACEPOINTS=y +CONFIG_HAVE_C_RECORDMCOUNT=y +CONFIG_TRACER_MAX_TRACE=y +CONFIG_TRACE_CLOCK=y +CONFIG_RING_BUFFER=y +CONFIG_EVENT_TRACING=y +CONFIG_CONTEXT_SWITCH_TRACER=y +CONFIG_RING_BUFFER_ALLOW_SWAP=y +CONFIG_TRACING=y +CONFIG_GENERIC_TRACER=y +CONFIG_TRACING_SUPPORT=y +CONFIG_FTRACE=y +# CONFIG_BOOTTIME_TRACING is not set +CONFIG_FUNCTION_TRACER=y +CONFIG_FUNCTION_GRAPH_TRACER=y +# CONFIG_FUNCTION_GRAPH_RETVAL is not set +CONFIG_DYNAMIC_FTRACE=y +CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y +CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS=y +CONFIG_DYNAMIC_FTRACE_WITH_ARGS=y +CONFIG_FUNCTION_PROFILER=y +CONFIG_STACK_TRACER=y +# CONFIG_IRQSOFF_TRACER is not set +CONFIG_SCHED_TRACER=y +CONFIG_HWLAT_TRACER=y +# CONFIG_OSNOISE_TRACER is not set +# CONFIG_TIMERLAT_TRACER is not set +CONFIG_FTRACE_SYSCALLS=y +CONFIG_TRACER_SNAPSHOT=y +CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y +CONFIG_BRANCH_PROFILE_NONE=y +# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set +# CONFIG_BLK_DEV_IO_TRACE is not set +CONFIG_PROBE_EVENTS_BTF_ARGS=y +CONFIG_KPROBE_EVENTS=y +# CONFIG_KPROBE_EVENTS_ON_NOTRACE is not set +CONFIG_UPROBE_EVENTS=y +CONFIG_BPF_EVENTS=y +CONFIG_DYNAMIC_EVENTS=y +CONFIG_PROBE_EVENTS=y +CONFIG_FTRACE_MCOUNT_RECORD=y +CONFIG_FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY=y +# CONFIG_SYNTH_EVENTS is not set +# CONFIG_USER_EVENTS is not set +# CONFIG_HIST_TRIGGERS is not set +# CONFIG_TRACE_EVENT_INJECT is not set +# CONFIG_TRACEPOINT_BENCHMARK is not set +# CONFIG_RING_BUFFER_BENCHMARK is not set +# CONFIG_TRACE_EVAL_MAP_FILE is not set +# CONFIG_FTRACE_RECORD_RECURSION is not set +# CONFIG_FTRACE_STARTUP_TEST is not set +# CONFIG_RING_BUFFER_STARTUP_TEST is not set +# CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS is not set +# CONFIG_PREEMPTIRQ_DELAY_TEST is not set +# CONFIG_KPROBE_EVENT_GEN_TEST is not set +# CONFIG_RV is not set +# CONFIG_SAMPLES is not set +CONFIG_HAVE_SAMPLE_FTRACE_DIRECT=y +CONFIG_HAVE_SAMPLE_FTRACE_DIRECT_MULTI=y +# CONFIG_STRICT_DEVMEM is not set + +# +# arm64 Debugging +# +# CONFIG_PID_IN_CONTEXTIDR is not set +# CONFIG_DEBUG_EFI is not set +# CONFIG_ARM64_RELOC_TEST is not set +# CONFIG_CORESIGHT is not set +# end of arm64 Debugging + +# +# Kernel Testing and Coverage +# +# CONFIG_KUNIT is not set +# CONFIG_NOTIFIER_ERROR_INJECTION is not set +# CONFIG_FUNCTION_ERROR_INJECTION is not set +# CONFIG_FAULT_INJECTION is not set 
+CONFIG_ARCH_HAS_KCOV=y +CONFIG_CC_HAS_SANCOV_TRACE_PC=y +# CONFIG_RUNTIME_TESTING_MENU is not set +CONFIG_ARCH_USE_MEMTEST=y +# CONFIG_MEMTEST is not set +# CONFIG_HYPERV_TESTING is not set +# end of Kernel Testing and Coverage + +# +# Rust hacking +# +# end of Rust hacking +# end of Kernel hacking diff --git a/config/kernel/linux-wsl2-x86-current.config b/config/kernel/linux-wsl2-x86-current.config new file mode 100644 index 000000000000..96ed414f4e2c --- /dev/null +++ b/config/kernel/linux-wsl2-x86-current.config @@ -0,0 +1,4349 @@ +# +# Automatically generated file; DO NOT EDIT. +# Linux/x86_64 6.1.63 Kernel Configuration +# +CONFIG_CC_VERSION_TEXT="gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0" +CONFIG_CC_IS_GCC=y +CONFIG_GCC_VERSION=110400 +CONFIG_CLANG_VERSION=0 +CONFIG_AS_IS_GNU=y +CONFIG_AS_VERSION=23800 +CONFIG_LD_IS_BFD=y +CONFIG_LD_VERSION=23800 +CONFIG_LLD_VERSION=0 +CONFIG_CC_CAN_LINK=y +CONFIG_CC_CAN_LINK_STATIC=y +CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y +CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT=y +CONFIG_CC_HAS_ASM_INLINE=y +CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y +CONFIG_PAHOLE_VERSION=125 +CONFIG_IRQ_WORK=y +CONFIG_BUILDTIME_TABLE_SORT=y +CONFIG_THREAD_INFO_IN_TASK=y + +# +# General setup +# +CONFIG_INIT_ENV_ARG_LIMIT=32 +# CONFIG_COMPILE_TEST is not set +# CONFIG_WERROR is not set +CONFIG_LOCALVERSION="" +# CONFIG_LOCALVERSION_AUTO is not set +CONFIG_BUILD_SALT="" +CONFIG_HAVE_KERNEL_GZIP=y +CONFIG_HAVE_KERNEL_BZIP2=y +CONFIG_HAVE_KERNEL_LZMA=y +CONFIG_HAVE_KERNEL_XZ=y +CONFIG_HAVE_KERNEL_LZO=y +CONFIG_HAVE_KERNEL_LZ4=y +CONFIG_HAVE_KERNEL_ZSTD=y +CONFIG_KERNEL_GZIP=y +# CONFIG_KERNEL_BZIP2 is not set +# CONFIG_KERNEL_LZMA is not set +# CONFIG_KERNEL_XZ is not set +# CONFIG_KERNEL_LZO is not set +# CONFIG_KERNEL_LZ4 is not set +# CONFIG_KERNEL_ZSTD is not set +CONFIG_DEFAULT_INIT="" +CONFIG_DEFAULT_HOSTNAME="(none)" +CONFIG_SYSVIPC=y +CONFIG_SYSVIPC_SYSCTL=y +CONFIG_SYSVIPC_COMPAT=y +CONFIG_POSIX_MQUEUE=y +CONFIG_POSIX_MQUEUE_SYSCTL=y +# CONFIG_WATCH_QUEUE is not set +CONFIG_CROSS_MEMORY_ATTACH=y +# CONFIG_USELIB is not set +CONFIG_AUDIT=y +CONFIG_HAVE_ARCH_AUDITSYSCALL=y +CONFIG_AUDITSYSCALL=y + +# +# IRQ subsystem +# +CONFIG_GENERIC_IRQ_PROBE=y +CONFIG_GENERIC_IRQ_SHOW=y +CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y +CONFIG_GENERIC_PENDING_IRQ=y +CONFIG_GENERIC_IRQ_MIGRATION=y +CONFIG_HARDIRQS_SW_RESEND=y +CONFIG_IRQ_DOMAIN=y +CONFIG_IRQ_DOMAIN_HIERARCHY=y +CONFIG_GENERIC_MSI_IRQ=y +CONFIG_GENERIC_MSI_IRQ_DOMAIN=y +CONFIG_IRQ_MSI_IOMMU=y +CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR=y +CONFIG_GENERIC_IRQ_RESERVATION_MODE=y +CONFIG_IRQ_FORCED_THREADING=y +CONFIG_SPARSE_IRQ=y +# CONFIG_GENERIC_IRQ_DEBUGFS is not set +# end of IRQ subsystem + +CONFIG_CLOCKSOURCE_WATCHDOG=y +CONFIG_ARCH_CLOCKSOURCE_INIT=y +CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE=y +CONFIG_GENERIC_TIME_VSYSCALL=y +CONFIG_GENERIC_CLOCKEVENTS=y +CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y +CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y +CONFIG_GENERIC_CMOS_UPDATE=y +CONFIG_HAVE_POSIX_CPU_TIMERS_TASK_WORK=y +CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y +CONFIG_CONTEXT_TRACKING=y +CONFIG_CONTEXT_TRACKING_IDLE=y + +# +# Timers subsystem +# +CONFIG_TICK_ONESHOT=y +CONFIG_NO_HZ_COMMON=y +# CONFIG_HZ_PERIODIC is not set +CONFIG_NO_HZ_IDLE=y +# CONFIG_NO_HZ_FULL is not set +# CONFIG_NO_HZ is not set +CONFIG_HIGH_RES_TIMERS=y +CONFIG_CLOCKSOURCE_WATCHDOG_MAX_SKEW_US=100 +# end of Timers subsystem + +CONFIG_BPF=y +CONFIG_HAVE_EBPF_JIT=y +CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y + +# +# BPF subsystem +# +CONFIG_BPF_SYSCALL=y +CONFIG_BPF_JIT=y +CONFIG_BPF_JIT_ALWAYS_ON=y 
+CONFIG_BPF_JIT_DEFAULT_ON=y +CONFIG_BPF_UNPRIV_DEFAULT_OFF=y +# CONFIG_BPF_PRELOAD is not set +# CONFIG_BPF_LSM is not set +# end of BPF subsystem + +CONFIG_PREEMPT_NONE_BUILD=y +CONFIG_PREEMPT_NONE=y +# CONFIG_PREEMPT_VOLUNTARY is not set +# CONFIG_PREEMPT is not set +# CONFIG_PREEMPT_DYNAMIC is not set +# CONFIG_SCHED_CORE is not set + +# +# CPU/Task time and stats accounting +# +CONFIG_TICK_CPU_ACCOUNTING=y +# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set +# CONFIG_IRQ_TIME_ACCOUNTING is not set +CONFIG_BSD_PROCESS_ACCT=y +# CONFIG_BSD_PROCESS_ACCT_V3 is not set +CONFIG_TASKSTATS=y +CONFIG_TASK_DELAY_ACCT=y +CONFIG_TASK_XACCT=y +CONFIG_TASK_IO_ACCOUNTING=y +# CONFIG_PSI is not set +# end of CPU/Task time and stats accounting + +# CONFIG_CPU_ISOLATION is not set + +# +# RCU Subsystem +# +CONFIG_TREE_RCU=y +# CONFIG_RCU_EXPERT is not set +CONFIG_SRCU=y +CONFIG_TREE_SRCU=y +CONFIG_TASKS_RCU_GENERIC=y +CONFIG_TASKS_RUDE_RCU=y +CONFIG_TASKS_TRACE_RCU=y +CONFIG_RCU_STALL_COMMON=y +CONFIG_RCU_NEED_SEGCBLIST=y +# end of RCU Subsystem + +CONFIG_IKCONFIG=y +CONFIG_IKCONFIG_PROC=y +# CONFIG_IKHEADERS is not set +CONFIG_LOG_BUF_SHIFT=17 +CONFIG_LOG_CPU_MAX_BUF_SHIFT=12 +CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13 +# CONFIG_PRINTK_INDEX is not set +CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y + +# +# Scheduler features +# +# CONFIG_UCLAMP_TASK is not set +# end of Scheduler features + +CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y +CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y +CONFIG_CC_HAS_INT128=y +CONFIG_CC_IMPLICIT_FALLTHROUGH="-Wimplicit-fallthrough=5" +CONFIG_GCC11_NO_ARRAY_BOUNDS=y +CONFIG_CC_NO_ARRAY_BOUNDS=y +CONFIG_ARCH_SUPPORTS_INT128=y +CONFIG_CGROUPS=y +CONFIG_PAGE_COUNTER=y +# CONFIG_CGROUP_FAVOR_DYNMODS is not set +CONFIG_MEMCG=y +CONFIG_MEMCG_KMEM=y +CONFIG_BLK_CGROUP=y +CONFIG_CGROUP_WRITEBACK=y +CONFIG_CGROUP_SCHED=y +CONFIG_FAIR_GROUP_SCHED=y +CONFIG_CFS_BANDWIDTH=y +CONFIG_RT_GROUP_SCHED=y +CONFIG_CGROUP_PIDS=y +CONFIG_CGROUP_RDMA=y +CONFIG_CGROUP_FREEZER=y +CONFIG_CGROUP_HUGETLB=y +CONFIG_CPUSETS=y +CONFIG_PROC_PID_CPUSET=y +CONFIG_CGROUP_DEVICE=y +CONFIG_CGROUP_CPUACCT=y +CONFIG_CGROUP_PERF=y +CONFIG_CGROUP_BPF=y +CONFIG_CGROUP_MISC=y +# CONFIG_CGROUP_DEBUG is not set +CONFIG_SOCK_CGROUP_DATA=y +CONFIG_NAMESPACES=y +CONFIG_UTS_NS=y +CONFIG_TIME_NS=y +CONFIG_IPC_NS=y +CONFIG_USER_NS=y +CONFIG_PID_NS=y +CONFIG_NET_NS=y +CONFIG_CHECKPOINT_RESTORE=y +# CONFIG_SCHED_AUTOGROUP is not set +# CONFIG_SYSFS_DEPRECATED is not set +# CONFIG_RELAY is not set +CONFIG_BLK_DEV_INITRD=y +CONFIG_INITRAMFS_SOURCE="" +CONFIG_RD_GZIP=y +# CONFIG_RD_BZIP2 is not set +# CONFIG_RD_LZMA is not set +# CONFIG_RD_XZ is not set +# CONFIG_RD_LZO is not set +# CONFIG_RD_LZ4 is not set +CONFIG_RD_ZSTD=y +# CONFIG_BOOT_CONFIG is not set +# CONFIG_INITRAMFS_PRESERVE_MTIME is not set +CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y +# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set +CONFIG_LD_ORPHAN_WARN=y +CONFIG_SYSCTL=y +CONFIG_HAVE_UID16=y +CONFIG_SYSCTL_EXCEPTION_TRACE=y +CONFIG_HAVE_PCSPKR_PLATFORM=y +CONFIG_EXPERT=y +# CONFIG_UID16 is not set +CONFIG_MULTIUSER=y +CONFIG_SGETMASK_SYSCALL=y +CONFIG_SYSFS_SYSCALL=y +CONFIG_FHANDLE=y +CONFIG_POSIX_TIMERS=y +CONFIG_PRINTK=y +CONFIG_BUG=y +CONFIG_ELF_CORE=y +CONFIG_PCSPKR_PLATFORM=y +CONFIG_BASE_FULL=y +CONFIG_FUTEX=y +CONFIG_FUTEX_PI=y +CONFIG_EPOLL=y +CONFIG_SIGNALFD=y +CONFIG_TIMERFD=y +CONFIG_EVENTFD=y +CONFIG_SHMEM=y +CONFIG_AIO=y +CONFIG_IO_URING=y +CONFIG_ADVISE_SYSCALLS=y +CONFIG_MEMBARRIER=y +CONFIG_KALLSYMS=y +# CONFIG_KALLSYMS_ALL is not set +CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y 
+CONFIG_KALLSYMS_BASE_RELATIVE=y +CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y +CONFIG_KCMP=y +CONFIG_RSEQ=y +# CONFIG_DEBUG_RSEQ is not set +# CONFIG_EMBEDDED is not set +CONFIG_HAVE_PERF_EVENTS=y +CONFIG_GUEST_PERF_EVENTS=y +# CONFIG_PC104 is not set + +# +# Kernel Performance Events And Counters +# +CONFIG_PERF_EVENTS=y +# CONFIG_DEBUG_PERF_USE_VMALLOC is not set +# end of Kernel Performance Events And Counters + +CONFIG_SYSTEM_DATA_VERIFICATION=y +# CONFIG_PROFILING is not set +CONFIG_TRACEPOINTS=y +# end of General setup + +CONFIG_64BIT=y +CONFIG_X86_64=y +CONFIG_X86=y +CONFIG_INSTRUCTION_DECODER=y +CONFIG_OUTPUT_FORMAT="elf64-x86-64" +CONFIG_LOCKDEP_SUPPORT=y +CONFIG_STACKTRACE_SUPPORT=y +CONFIG_MMU=y +CONFIG_ARCH_MMAP_RND_BITS_MIN=28 +CONFIG_ARCH_MMAP_RND_BITS_MAX=32 +CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=8 +CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16 +CONFIG_GENERIC_ISA_DMA=y +CONFIG_GENERIC_BUG=y +CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y +CONFIG_ARCH_MAY_HAVE_PC_FDC=y +CONFIG_GENERIC_CALIBRATE_DELAY=y +CONFIG_ARCH_HAS_CPU_RELAX=y +CONFIG_ARCH_HIBERNATION_POSSIBLE=y +CONFIG_ARCH_NR_GPIO=1024 +CONFIG_ARCH_SUSPEND_POSSIBLE=y +CONFIG_AUDIT_ARCH=y +CONFIG_HAVE_INTEL_TXT=y +CONFIG_X86_64_SMP=y +CONFIG_ARCH_SUPPORTS_UPROBES=y +CONFIG_FIX_EARLYCON_MEM=y +CONFIG_DYNAMIC_PHYSICAL_MASK=y +CONFIG_PGTABLE_LEVELS=4 +CONFIG_CC_HAS_SANE_STACKPROTECTOR=y + +# +# Processor type and features +# +CONFIG_SMP=y +CONFIG_X86_FEATURE_NAMES=y +CONFIG_X86_X2APIC=y +# CONFIG_X86_MPPARSE is not set +# CONFIG_GOLDFISH is not set +# CONFIG_X86_CPU_RESCTRL is not set +# CONFIG_X86_EXTENDED_PLATFORM is not set +# CONFIG_X86_INTEL_LPSS is not set +# CONFIG_X86_AMD_PLATFORM_DEVICE is not set +# CONFIG_IOSF_MBI is not set +CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y +# CONFIG_SCHED_OMIT_FRAME_POINTER is not set +CONFIG_HYPERVISOR_GUEST=y +CONFIG_PARAVIRT=y +# CONFIG_PARAVIRT_DEBUG is not set +CONFIG_PARAVIRT_SPINLOCKS=y +CONFIG_X86_HV_CALLBACK_VECTOR=y +# CONFIG_XEN is not set +# CONFIG_KVM_GUEST is not set +# CONFIG_ARCH_CPUIDLE_HALTPOLL is not set +# CONFIG_PVH is not set +# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set +# CONFIG_JAILHOUSE_GUEST is not set +# CONFIG_ACRN_GUEST is not set +CONFIG_INTEL_TDX_GUEST=y +# CONFIG_MK8 is not set +# CONFIG_MPSC is not set +CONFIG_MCORE2=y +# CONFIG_MATOM is not set +# CONFIG_GENERIC_CPU is not set +CONFIG_X86_INTERNODE_CACHE_SHIFT=6 +CONFIG_X86_L1_CACHE_SHIFT=6 +CONFIG_X86_INTEL_USERCOPY=y +CONFIG_X86_USE_PPRO_CHECKSUM=y +CONFIG_X86_P6_NOP=y +CONFIG_X86_TSC=y +CONFIG_X86_CMPXCHG64=y +CONFIG_X86_CMOV=y +CONFIG_X86_MINIMUM_CPU_FAMILY=64 +CONFIG_X86_DEBUGCTLMSR=y +CONFIG_IA32_FEAT_CTL=y +CONFIG_X86_VMX_FEATURE_NAMES=y +CONFIG_PROCESSOR_SELECT=y +CONFIG_CPU_SUP_INTEL=y +CONFIG_CPU_SUP_AMD=y +# CONFIG_CPU_SUP_HYGON is not set +CONFIG_CPU_SUP_CENTAUR=y +# CONFIG_CPU_SUP_ZHAOXIN is not set +CONFIG_HPET_TIMER=y +CONFIG_HPET_EMULATE_RTC=y +CONFIG_DMI=y +# CONFIG_GART_IOMMU is not set +# CONFIG_MAXSMP is not set +CONFIG_NR_CPUS_RANGE_BEGIN=2 +CONFIG_NR_CPUS_RANGE_END=512 +CONFIG_NR_CPUS_DEFAULT=64 +CONFIG_NR_CPUS=256 +CONFIG_SCHED_CLUSTER=y +CONFIG_SCHED_SMT=y +CONFIG_SCHED_MC=y +# CONFIG_SCHED_MC_PRIO is not set +CONFIG_X86_LOCAL_APIC=y +CONFIG_X86_IO_APIC=y +# CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS is not set +CONFIG_X86_MCE=y +# CONFIG_X86_MCELOG_LEGACY is not set +CONFIG_X86_MCE_INTEL=y +CONFIG_X86_MCE_AMD=y +CONFIG_X86_MCE_THRESHOLD=y +# CONFIG_X86_MCE_INJECT is not set + +# +# Performance monitoring +# +# CONFIG_PERF_EVENTS_INTEL_UNCORE is not set +# CONFIG_PERF_EVENTS_INTEL_RAPL is not set +# 
CONFIG_PERF_EVENTS_INTEL_CSTATE is not set +# CONFIG_PERF_EVENTS_AMD_POWER is not set +# CONFIG_PERF_EVENTS_AMD_UNCORE is not set +CONFIG_PERF_EVENTS_AMD_BRS=y +# end of Performance monitoring + +CONFIG_X86_16BIT=y +CONFIG_X86_ESPFIX64=y +CONFIG_X86_VSYSCALL_EMULATION=y +# CONFIG_X86_IOPL_IOPERM is not set +# CONFIG_MICROCODE is not set +# CONFIG_X86_MSR is not set +# CONFIG_X86_CPUID is not set +# CONFIG_X86_5LEVEL is not set +CONFIG_X86_DIRECT_GBPAGES=y +# CONFIG_X86_CPA_STATISTICS is not set +CONFIG_X86_MEM_ENCRYPT=y +# CONFIG_AMD_MEM_ENCRYPT is not set +# CONFIG_NUMA is not set +CONFIG_ARCH_SPARSEMEM_ENABLE=y +CONFIG_ARCH_SPARSEMEM_DEFAULT=y +# CONFIG_ARCH_MEMORY_PROBE is not set +CONFIG_ARCH_PROC_KCORE_TEXT=y +CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000 +CONFIG_X86_PMEM_LEGACY_DEVICE=y +CONFIG_X86_PMEM_LEGACY=y +# CONFIG_X86_CHECK_BIOS_CORRUPTION is not set +CONFIG_MTRR=y +# CONFIG_MTRR_SANITIZER is not set +CONFIG_X86_PAT=y +CONFIG_ARCH_USES_PG_UNCACHED=y +CONFIG_X86_UMIP=y +CONFIG_CC_HAS_IBT=y +# CONFIG_X86_KERNEL_IBT is not set +CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y +CONFIG_X86_INTEL_TSX_MODE_OFF=y +# CONFIG_X86_INTEL_TSX_MODE_ON is not set +# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set +# CONFIG_X86_SGX is not set +CONFIG_EFI=y +CONFIG_EFI_STUB=y +CONFIG_EFI_MIXED=y +CONFIG_HZ_100=y +# CONFIG_HZ_250 is not set +# CONFIG_HZ_300 is not set +# CONFIG_HZ_1000 is not set +CONFIG_HZ=100 +CONFIG_SCHED_HRTICK=y +# CONFIG_KEXEC is not set +# CONFIG_KEXEC_FILE is not set +# CONFIG_CRASH_DUMP is not set +CONFIG_PHYSICAL_START=0x1000000 +CONFIG_RELOCATABLE=y +CONFIG_RANDOMIZE_BASE=y +CONFIG_X86_NEED_RELOCS=y +CONFIG_PHYSICAL_ALIGN=0x1000000 +CONFIG_DYNAMIC_MEMORY_LAYOUT=y +CONFIG_RANDOMIZE_MEMORY=y +CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING=0xa +CONFIG_HOTPLUG_CPU=y +# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set +# CONFIG_DEBUG_HOTPLUG_CPU0 is not set +# CONFIG_COMPAT_VDSO is not set +# CONFIG_LEGACY_VSYSCALL_XONLY is not set +CONFIG_LEGACY_VSYSCALL_NONE=y +# CONFIG_CMDLINE_BOOL is not set +CONFIG_MODIFY_LDT_SYSCALL=y +# CONFIG_STRICT_SIGALTSTACK_SIZE is not set +CONFIG_HAVE_LIVEPATCH=y +# end of Processor type and features + +CONFIG_CC_HAS_SLS=y +CONFIG_CC_HAS_RETURN_THUNK=y +CONFIG_SPECULATION_MITIGATIONS=y +CONFIG_PAGE_TABLE_ISOLATION=y +CONFIG_RETPOLINE=y +CONFIG_RETHUNK=y +CONFIG_CPU_UNRET_ENTRY=y +CONFIG_CPU_IBPB_ENTRY=y +CONFIG_CPU_IBRS_ENTRY=y +CONFIG_CPU_SRSO=y +# CONFIG_SLS is not set +# CONFIG_GDS_FORCE_MITIGATION is not set +CONFIG_ARCH_HAS_ADD_PAGES=y +CONFIG_ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE=y + +# +# Power management and ACPI options +# +# CONFIG_SUSPEND is not set +# CONFIG_HIBERNATION is not set +# CONFIG_PM is not set +# CONFIG_ENERGY_MODEL is not set +CONFIG_ARCH_SUPPORTS_ACPI=y +CONFIG_ACPI=y +CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y +CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y +CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y +# CONFIG_ACPI_DEBUGGER is not set +# CONFIG_ACPI_SPCR_TABLE is not set +# CONFIG_ACPI_FPDT is not set +CONFIG_ACPI_LPIT=y +# CONFIG_ACPI_REV_OVERRIDE_POSSIBLE is not set +# CONFIG_ACPI_EC_DEBUGFS is not set +CONFIG_ACPI_AC=y +CONFIG_ACPI_BATTERY=y +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_TINY_POWER_BUTTON is not set +# CONFIG_ACPI_FAN is not set +# CONFIG_ACPI_DOCK is not set +CONFIG_ACPI_CPU_FREQ_PSS=y +CONFIG_ACPI_PROCESSOR_CSTATE=y +CONFIG_ACPI_PROCESSOR_IDLE=y +CONFIG_ACPI_CPPC_LIB=y +CONFIG_ACPI_PROCESSOR=y +CONFIG_ACPI_HOTPLUG_CPU=y +# CONFIG_ACPI_PROCESSOR_AGGREGATOR is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y +# 
CONFIG_ACPI_TABLE_UPGRADE is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_PCI_SLOT is not set +CONFIG_ACPI_CONTAINER=y +# CONFIG_ACPI_HOTPLUG_MEMORY is not set +CONFIG_ACPI_HOTPLUG_IOAPIC=y +# CONFIG_ACPI_SBS is not set +# CONFIG_ACPI_HED is not set +# CONFIG_ACPI_CUSTOM_METHOD is not set +# CONFIG_ACPI_BGRT is not set +# CONFIG_ACPI_REDUCED_HARDWARE_ONLY is not set +CONFIG_ACPI_NFIT=y +# CONFIG_NFIT_SECURITY_DEBUG is not set +CONFIG_HAVE_ACPI_APEI=y +CONFIG_HAVE_ACPI_APEI_NMI=y +# CONFIG_ACPI_APEI is not set +# CONFIG_ACPI_DPTF is not set +# CONFIG_ACPI_CONFIGFS is not set +# CONFIG_ACPI_PFRUT is not set +CONFIG_ACPI_PCC=y +# CONFIG_PMIC_OPREGION is not set +# CONFIG_ACPI_PRMT is not set +# CONFIG_X86_PM_TIMER is not set + +# +# CPU Frequency scaling +# +CONFIG_CPU_FREQ=y +CONFIG_CPU_FREQ_GOV_ATTR_SET=y +# CONFIG_CPU_FREQ_STAT is not set +CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y +# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set +CONFIG_CPU_FREQ_GOV_PERFORMANCE=y +# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set +# CONFIG_CPU_FREQ_GOV_USERSPACE is not set +# CONFIG_CPU_FREQ_GOV_ONDEMAND is not set +# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set +CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y + +# +# CPU frequency scaling drivers +# +# CONFIG_X86_INTEL_PSTATE is not set +# CONFIG_X86_PCC_CPUFREQ is not set +CONFIG_X86_AMD_PSTATE=y +# CONFIG_X86_AMD_PSTATE_UT is not set +# CONFIG_X86_ACPI_CPUFREQ is not set +# CONFIG_X86_SPEEDSTEP_CENTRINO is not set +# CONFIG_X86_P4_CLOCKMOD is not set + +# +# shared options +# +# end of CPU Frequency scaling + +# +# CPU Idle +# +CONFIG_CPU_IDLE=y +# CONFIG_CPU_IDLE_GOV_LADDER is not set +CONFIG_CPU_IDLE_GOV_MENU=y +# CONFIG_CPU_IDLE_GOV_TEO is not set +# end of CPU Idle + +# CONFIG_INTEL_IDLE is not set +# end of Power management and ACPI options + +# +# Bus options (PCI etc.) +# +CONFIG_PCI_DIRECT=y +# CONFIG_PCI_MMCONFIG is not set +# CONFIG_PCI_CNB20LE_QUIRK is not set +# CONFIG_ISA_BUS is not set +CONFIG_ISA_DMA_API=y +CONFIG_AMD_NB=y +# end of Bus options (PCI etc.) 
+ +# +# Binary Emulations +# +CONFIG_IA32_EMULATION=y +# CONFIG_X86_X32_ABI is not set +CONFIG_COMPAT_32=y +CONFIG_COMPAT=y +CONFIG_COMPAT_FOR_U64_ALIGNMENT=y +# end of Binary Emulations + +CONFIG_HAVE_KVM=y +CONFIG_HAVE_KVM_PFNCACHE=y +CONFIG_HAVE_KVM_IRQCHIP=y +CONFIG_HAVE_KVM_IRQFD=y +CONFIG_HAVE_KVM_IRQ_ROUTING=y +CONFIG_HAVE_KVM_DIRTY_RING=y +CONFIG_HAVE_KVM_DIRTY_RING_TSO=y +CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL=y +CONFIG_HAVE_KVM_EVENTFD=y +CONFIG_KVM_MMIO=y +CONFIG_KVM_ASYNC_PF=y +CONFIG_HAVE_KVM_MSI=y +CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y +CONFIG_KVM_VFIO=y +CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y +CONFIG_KVM_COMPAT=y +CONFIG_HAVE_KVM_IRQ_BYPASS=y +CONFIG_HAVE_KVM_NO_POLL=y +CONFIG_KVM_XFER_TO_GUEST_WORK=y +CONFIG_VIRTUALIZATION=y +CONFIG_KVM=y +CONFIG_KVM_WERROR=y +CONFIG_KVM_INTEL=y +CONFIG_KVM_AMD=y +# CONFIG_KVM_XEN is not set +CONFIG_AS_AVX512=y +CONFIG_AS_SHA1_NI=y +CONFIG_AS_SHA256_NI=y +CONFIG_AS_TPAUSE=y + +# +# General architecture-dependent options +# +CONFIG_CRASH_CORE=y +CONFIG_HOTPLUG_SMT=y +CONFIG_GENERIC_ENTRY=y +CONFIG_KPROBES=y +CONFIG_JUMP_LABEL=y +# CONFIG_STATIC_KEYS_SELFTEST is not set +# CONFIG_STATIC_CALL_SELFTEST is not set +CONFIG_OPTPROBES=y +CONFIG_KPROBES_ON_FTRACE=y +CONFIG_UPROBES=y +CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y +CONFIG_ARCH_USE_BUILTIN_BSWAP=y +CONFIG_KRETPROBES=y +CONFIG_KRETPROBE_ON_RETHOOK=y +CONFIG_USER_RETURN_NOTIFIER=y +CONFIG_HAVE_IOREMAP_PROT=y +CONFIG_HAVE_KPROBES=y +CONFIG_HAVE_KRETPROBES=y +CONFIG_HAVE_OPTPROBES=y +CONFIG_HAVE_KPROBES_ON_FTRACE=y +CONFIG_ARCH_CORRECT_STACKTRACE_ON_KRETPROBE=y +CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y +CONFIG_HAVE_NMI=y +CONFIG_TRACE_IRQFLAGS_SUPPORT=y +CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y +CONFIG_HAVE_ARCH_TRACEHOOK=y +CONFIG_HAVE_DMA_CONTIGUOUS=y +CONFIG_GENERIC_SMP_IDLE_THREAD=y +CONFIG_ARCH_HAS_FORTIFY_SOURCE=y +CONFIG_ARCH_HAS_SET_MEMORY=y +CONFIG_ARCH_HAS_SET_DIRECT_MAP=y +CONFIG_ARCH_HAS_CPU_FINALIZE_INIT=y +CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y +CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT=y +CONFIG_ARCH_WANTS_NO_INSTR=y +CONFIG_HAVE_ASM_MODVERSIONS=y +CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y +CONFIG_HAVE_RSEQ=y +CONFIG_HAVE_RUST=y +CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y +CONFIG_HAVE_HW_BREAKPOINT=y +CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y +CONFIG_HAVE_USER_RETURN_NOTIFIER=y +CONFIG_HAVE_PERF_EVENTS_NMI=y +CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y +CONFIG_HAVE_PERF_REGS=y +CONFIG_HAVE_PERF_USER_STACK_DUMP=y +CONFIG_HAVE_ARCH_JUMP_LABEL=y +CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y +CONFIG_MMU_GATHER_TABLE_FREE=y +CONFIG_MMU_GATHER_RCU_TABLE_FREE=y +CONFIG_MMU_GATHER_MERGE_VMAS=y +CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y +CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y +CONFIG_HAVE_CMPXCHG_LOCAL=y +CONFIG_HAVE_CMPXCHG_DOUBLE=y +CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y +CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y +CONFIG_HAVE_ARCH_SECCOMP=y +CONFIG_HAVE_ARCH_SECCOMP_FILTER=y +CONFIG_SECCOMP=y +CONFIG_SECCOMP_FILTER=y +# CONFIG_SECCOMP_CACHE_DEBUG is not set +CONFIG_HAVE_ARCH_STACKLEAK=y +CONFIG_HAVE_STACKPROTECTOR=y +CONFIG_STACKPROTECTOR=y +CONFIG_STACKPROTECTOR_STRONG=y +CONFIG_ARCH_SUPPORTS_LTO_CLANG=y +CONFIG_ARCH_SUPPORTS_LTO_CLANG_THIN=y +CONFIG_LTO_NONE=y +CONFIG_ARCH_SUPPORTS_CFI_CLANG=y +CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y +CONFIG_HAVE_CONTEXT_TRACKING_USER=y +CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK=y +CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y +CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y +CONFIG_HAVE_MOVE_PUD=y +CONFIG_HAVE_MOVE_PMD=y +CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y +CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y 
+CONFIG_HAVE_ARCH_HUGE_VMAP=y +CONFIG_HAVE_ARCH_HUGE_VMALLOC=y +CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y +CONFIG_HAVE_ARCH_SOFT_DIRTY=y +CONFIG_HAVE_MOD_ARCH_SPECIFIC=y +CONFIG_MODULES_USE_ELF_RELA=y +CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y +CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK=y +CONFIG_SOFTIRQ_ON_OWN_STACK=y +CONFIG_ARCH_HAS_ELF_RANDOMIZE=y +CONFIG_HAVE_ARCH_MMAP_RND_BITS=y +CONFIG_HAVE_EXIT_THREAD=y +CONFIG_ARCH_MMAP_RND_BITS=28 +CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y +CONFIG_ARCH_MMAP_RND_COMPAT_BITS=8 +CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES=y +CONFIG_PAGE_SIZE_LESS_THAN_64KB=y +CONFIG_PAGE_SIZE_LESS_THAN_256KB=y +CONFIG_HAVE_OBJTOOL=y +CONFIG_HAVE_JUMP_LABEL_HACK=y +CONFIG_HAVE_NOINSTR_HACK=y +CONFIG_HAVE_NOINSTR_VALIDATION=y +CONFIG_HAVE_UACCESS_VALIDATION=y +CONFIG_HAVE_STACK_VALIDATION=y +CONFIG_HAVE_RELIABLE_STACKTRACE=y +CONFIG_OLD_SIGSUSPEND3=y +CONFIG_COMPAT_OLD_SIGACTION=y +CONFIG_COMPAT_32BIT_TIME=y +CONFIG_HAVE_ARCH_VMAP_STACK=y +CONFIG_VMAP_STACK=y +CONFIG_HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET=y +CONFIG_RANDOMIZE_KSTACK_OFFSET=y +# CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT is not set +CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y +CONFIG_STRICT_KERNEL_RWX=y +CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y +CONFIG_STRICT_MODULE_RWX=y +CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y +CONFIG_ARCH_USE_MEMREMAP_PROT=y +# CONFIG_LOCK_EVENT_COUNTS is not set +CONFIG_ARCH_HAS_MEM_ENCRYPT=y +CONFIG_ARCH_HAS_CC_PLATFORM=y +CONFIG_HAVE_STATIC_CALL=y +CONFIG_HAVE_STATIC_CALL_INLINE=y +CONFIG_HAVE_PREEMPT_DYNAMIC=y +CONFIG_HAVE_PREEMPT_DYNAMIC_CALL=y +CONFIG_ARCH_WANT_LD_ORPHAN_WARN=y +CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y +CONFIG_ARCH_SUPPORTS_PAGE_TABLE_CHECK=y +CONFIG_ARCH_HAS_ELFCORE_COMPAT=y +CONFIG_ARCH_HAS_PARANOID_L1D_FLUSH=y +CONFIG_DYNAMIC_SIGFRAME=y +CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG=y + +# +# GCOV-based kernel profiling +# +# CONFIG_GCOV_KERNEL is not set +CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y +# end of GCOV-based kernel profiling + +CONFIG_HAVE_GCC_PLUGINS=y +# end of General architecture-dependent options + +CONFIG_RT_MUTEXES=y +CONFIG_BASE_SMALL=0 +CONFIG_MODULES=y +CONFIG_MODULE_FORCE_LOAD=y +CONFIG_MODULE_UNLOAD=y +CONFIG_MODULE_FORCE_UNLOAD=y +# CONFIG_MODULE_UNLOAD_TAINT_TRACKING is not set +CONFIG_MODVERSIONS=y +CONFIG_ASM_MODVERSIONS=y +CONFIG_MODULE_SRCVERSION_ALL=y +# CONFIG_MODULE_SIG is not set +CONFIG_MODULE_COMPRESS_NONE=y +# CONFIG_MODULE_COMPRESS_GZIP is not set +# CONFIG_MODULE_COMPRESS_XZ is not set +# CONFIG_MODULE_COMPRESS_ZSTD is not set +# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set +CONFIG_MODPROBE_PATH="/sbin/modprobe" +# CONFIG_TRIM_UNUSED_KSYMS is not set +CONFIG_MODULES_TREE_LOOKUP=y +CONFIG_BLOCK=y +CONFIG_BLOCK_LEGACY_AUTOLOAD=y +CONFIG_BLK_DEV_BSG_COMMON=y +CONFIG_BLK_DEV_BSGLIB=y +# CONFIG_BLK_DEV_INTEGRITY is not set +# CONFIG_BLK_DEV_ZONED is not set +# CONFIG_BLK_DEV_THROTTLING is not set +# CONFIG_BLK_WBT is not set +# CONFIG_BLK_CGROUP_IOLATENCY is not set +# CONFIG_BLK_CGROUP_IOCOST is not set +# CONFIG_BLK_CGROUP_IOPRIO is not set +# CONFIG_BLK_DEBUG_FS is not set +# CONFIG_BLK_SED_OPAL is not set +# CONFIG_BLK_INLINE_ENCRYPTION is not set + +# +# Partition Types +# +CONFIG_PARTITION_ADVANCED=y +# CONFIG_ACORN_PARTITION is not set +# CONFIG_AIX_PARTITION is not set +# CONFIG_OSF_PARTITION is not set +# CONFIG_AMIGA_PARTITION is not set +# CONFIG_ATARI_PARTITION is not set +# CONFIG_MAC_PARTITION is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_BSD_DISKLABEL is not set +# CONFIG_MINIX_SUBPARTITION is not set +# CONFIG_SOLARIS_X86_PARTITION is not set +# CONFIG_UNIXWARE_DISKLABEL is 
not set +# CONFIG_LDM_PARTITION is not set +# CONFIG_SGI_PARTITION is not set +# CONFIG_ULTRIX_PARTITION is not set +# CONFIG_SUN_PARTITION is not set +# CONFIG_KARMA_PARTITION is not set +CONFIG_EFI_PARTITION=y +# CONFIG_SYSV68_PARTITION is not set +# CONFIG_CMDLINE_PARTITION is not set +# end of Partition Types + +CONFIG_BLOCK_COMPAT=y +CONFIG_BLK_MQ_PCI=y +CONFIG_BLK_MQ_VIRTIO=y +CONFIG_BLOCK_HOLDER_DEPRECATED=y +CONFIG_BLK_MQ_STACKING=y + +# +# IO Schedulers +# +# CONFIG_MQ_IOSCHED_DEADLINE is not set +# CONFIG_MQ_IOSCHED_KYBER is not set +# CONFIG_IOSCHED_BFQ is not set +# end of IO Schedulers + +CONFIG_PREEMPT_NOTIFIERS=y +CONFIG_PADATA=y +CONFIG_ASN1=y +CONFIG_INLINE_SPIN_UNLOCK_IRQ=y +CONFIG_INLINE_READ_UNLOCK=y +CONFIG_INLINE_READ_UNLOCK_IRQ=y +CONFIG_INLINE_WRITE_UNLOCK=y +CONFIG_INLINE_WRITE_UNLOCK_IRQ=y +CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y +CONFIG_MUTEX_SPIN_ON_OWNER=y +CONFIG_RWSEM_SPIN_ON_OWNER=y +CONFIG_LOCK_SPIN_ON_OWNER=y +CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y +CONFIG_QUEUED_SPINLOCKS=y +CONFIG_ARCH_USE_QUEUED_RWLOCKS=y +CONFIG_QUEUED_RWLOCKS=y +CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y +CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE=y +CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y +CONFIG_FREEZER=y + +# +# Executable file formats +# +CONFIG_BINFMT_ELF=y +CONFIG_COMPAT_BINFMT_ELF=y +CONFIG_ELFCORE=y +CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y +CONFIG_BINFMT_SCRIPT=y +CONFIG_BINFMT_MISC=y +CONFIG_COREDUMP=y +# end of Executable file formats + +# +# Memory Management options +# +CONFIG_SWAP=y +# CONFIG_ZSWAP is not set + +# +# SLAB allocator options +# +# CONFIG_SLAB is not set +CONFIG_SLUB=y +# CONFIG_SLOB is not set +# CONFIG_SLAB_MERGE_DEFAULT is not set +# CONFIG_SLAB_FREELIST_RANDOM is not set +# CONFIG_SLAB_FREELIST_HARDENED is not set +# CONFIG_SLUB_STATS is not set +# CONFIG_SLUB_CPU_PARTIAL is not set +# end of SLAB allocator options + +# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set +# CONFIG_COMPAT_BRK is not set +CONFIG_SPARSEMEM=y +CONFIG_SPARSEMEM_EXTREME=y +CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y +CONFIG_SPARSEMEM_VMEMMAP=y +CONFIG_HAVE_FAST_GUP=y +CONFIG_MEMORY_ISOLATION=y +CONFIG_HAVE_BOOTMEM_INFO_NODE=y +CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y +CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y +CONFIG_MEMORY_HOTPLUG=y +# CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set +CONFIG_MEMORY_HOTREMOVE=y +CONFIG_MHP_MEMMAP_ON_MEMORY=y +CONFIG_SPLIT_PTLOCK_CPUS=4 +CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y +CONFIG_MEMORY_BALLOON=y +# CONFIG_BALLOON_COMPACTION is not set +CONFIG_COMPACTION=y +CONFIG_COMPACT_UNEVICTABLE_DEFAULT=1 +CONFIG_PAGE_REPORTING=y +CONFIG_MIGRATION=y +CONFIG_DEVICE_MIGRATION=y +CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y +CONFIG_ARCH_ENABLE_THP_MIGRATION=y +CONFIG_CONTIG_ALLOC=y +CONFIG_PHYS_ADDR_T_64BIT=y +CONFIG_MMU_NOTIFIER=y +CONFIG_KSM=y +CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 +CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y +# CONFIG_MEMORY_FAILURE is not set +CONFIG_ARCH_WANT_GENERAL_HUGETLB=y +CONFIG_ARCH_WANTS_THP_SWAP=y +CONFIG_TRANSPARENT_HUGEPAGE=y +CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y +# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set +CONFIG_THP_SWAP=y +# CONFIG_READ_ONLY_THP_FOR_FS is not set +CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y +CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y +CONFIG_HAVE_SETUP_PER_CPU_AREA=y +# CONFIG_CMA is not set +# CONFIG_MEM_SOFT_DIRTY is not set +CONFIG_GENERIC_EARLY_IOREMAP=y +CONFIG_DEFERRED_STRUCT_PAGE_INIT=y +# CONFIG_IDLE_PAGE_TRACKING is not set +CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y +CONFIG_ARCH_HAS_CURRENT_STACK_POINTER=y +CONFIG_ARCH_HAS_PTE_DEVMAP=y 
+CONFIG_ARCH_HAS_ZONE_DMA_SET=y +CONFIG_ZONE_DMA=y +CONFIG_ZONE_DMA32=y +CONFIG_ZONE_DEVICE=y +# CONFIG_DEVICE_PRIVATE is not set +CONFIG_VMAP_PFN=y +CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y +CONFIG_ARCH_HAS_PKEYS=y +CONFIG_VM_EVENT_COUNTERS=y +# CONFIG_PERCPU_STATS is not set +# CONFIG_GUP_TEST is not set +CONFIG_ARCH_HAS_PTE_SPECIAL=y +CONFIG_SECRETMEM=y +CONFIG_ANON_VMA_NAME=y +CONFIG_USERFAULTFD=y +CONFIG_HAVE_ARCH_USERFAULTFD_WP=y +CONFIG_HAVE_ARCH_USERFAULTFD_MINOR=y +CONFIG_PTE_MARKER=y +CONFIG_PTE_MARKER_UFFD_WP=y +# CONFIG_LRU_GEN is not set +CONFIG_LOCK_MM_AND_FIND_VMA=y + +# +# Data Access Monitoring +# +# CONFIG_DAMON is not set +# end of Data Access Monitoring +# end of Memory Management options + +CONFIG_NET=y +CONFIG_NET_INGRESS=y +CONFIG_NET_EGRESS=y +CONFIG_SKB_EXTENSIONS=y + +# +# Networking options +# +CONFIG_PACKET=y +CONFIG_PACKET_DIAG=y +CONFIG_UNIX=y +CONFIG_UNIX_SCM=y +CONFIG_AF_UNIX_OOB=y +CONFIG_UNIX_DIAG=y +# CONFIG_TLS is not set +CONFIG_XFRM=y +CONFIG_XFRM_ALGO=y +CONFIG_XFRM_USER=y +# CONFIG_XFRM_USER_COMPAT is not set +# CONFIG_XFRM_INTERFACE is not set +# CONFIG_XFRM_SUB_POLICY is not set +# CONFIG_XFRM_MIGRATE is not set +# CONFIG_XFRM_STATISTICS is not set +CONFIG_XFRM_ESP=y +# CONFIG_NET_KEY is not set +# CONFIG_XDP_SOCKETS is not set +CONFIG_INET=y +# CONFIG_IP_MULTICAST is not set +CONFIG_IP_ADVANCED_ROUTER=y +# CONFIG_IP_FIB_TRIE_STATS is not set +CONFIG_IP_MULTIPLE_TABLES=y +# CONFIG_IP_ROUTE_MULTIPATH is not set +# CONFIG_IP_ROUTE_VERBOSE is not set +CONFIG_IP_PNP=y +CONFIG_IP_PNP_DHCP=y +# CONFIG_IP_PNP_BOOTP is not set +# CONFIG_IP_PNP_RARP is not set +CONFIG_NET_IPIP=y +# CONFIG_NET_IPGRE_DEMUX is not set +CONFIG_NET_IP_TUNNEL=y +CONFIG_SYN_COOKIES=y +# CONFIG_NET_IPVTI is not set +CONFIG_NET_UDP_TUNNEL=y +# CONFIG_NET_FOU is not set +# CONFIG_NET_FOU_IP_TUNNELS is not set +# CONFIG_INET_AH is not set +CONFIG_INET_ESP=y +# CONFIG_INET_ESP_OFFLOAD is not set +# CONFIG_INET_ESPINTCP is not set +# CONFIG_INET_IPCOMP is not set +CONFIG_INET_TABLE_PERTURB_ORDER=16 +CONFIG_INET_TUNNEL=y +CONFIG_INET_DIAG=y +CONFIG_INET_TCP_DIAG=y +CONFIG_INET_UDP_DIAG=y +CONFIG_INET_RAW_DIAG=y +# CONFIG_INET_DIAG_DESTROY is not set +# CONFIG_TCP_CONG_ADVANCED is not set +CONFIG_TCP_CONG_CUBIC=y +CONFIG_DEFAULT_TCP_CONG="cubic" +# CONFIG_TCP_MD5SIG is not set +CONFIG_IPV6=y +# CONFIG_IPV6_ROUTER_PREF is not set +CONFIG_IPV6_OPTIMISTIC_DAD=y +# CONFIG_INET6_AH is not set +# CONFIG_INET6_ESP is not set +# CONFIG_INET6_IPCOMP is not set +# CONFIG_IPV6_MIP6 is not set +# CONFIG_IPV6_ILA is not set +# CONFIG_IPV6_VTI is not set +CONFIG_IPV6_SIT=y +# CONFIG_IPV6_SIT_6RD is not set +CONFIG_IPV6_NDISC_NODETYPE=y +# CONFIG_IPV6_TUNNEL is not set +# CONFIG_IPV6_MULTIPLE_TABLES is not set +# CONFIG_IPV6_MROUTE is not set +# CONFIG_IPV6_SEG6_LWTUNNEL is not set +# CONFIG_IPV6_SEG6_HMAC is not set +# CONFIG_IPV6_RPL_LWTUNNEL is not set +# CONFIG_IPV6_IOAM6_LWTUNNEL is not set +# CONFIG_NETLABEL is not set +# CONFIG_MPTCP is not set +CONFIG_NETWORK_SECMARK=y +CONFIG_NET_PTP_CLASSIFY=y +CONFIG_NETWORK_PHY_TIMESTAMPING=y +CONFIG_NETFILTER=y +CONFIG_NETFILTER_ADVANCED=y +CONFIG_BRIDGE_NETFILTER=y + +# +# Core Netfilter Configuration +# +CONFIG_NETFILTER_INGRESS=y +CONFIG_NETFILTER_EGRESS=y +CONFIG_NETFILTER_SKIP_EGRESS=y +CONFIG_NETFILTER_NETLINK=y +CONFIG_NETFILTER_FAMILY_BRIDGE=y +CONFIG_NETFILTER_FAMILY_ARP=y +# CONFIG_NETFILTER_NETLINK_HOOK is not set +# CONFIG_NETFILTER_NETLINK_ACCT is not set +CONFIG_NETFILTER_NETLINK_QUEUE=y +CONFIG_NETFILTER_NETLINK_LOG=y +# CONFIG_NETFILTER_NETLINK_OSF 
is not set +CONFIG_NF_CONNTRACK=y +CONFIG_NF_LOG_SYSLOG=y +CONFIG_NETFILTER_CONNCOUNT=y +CONFIG_NF_CONNTRACK_MARK=y +# CONFIG_NF_CONNTRACK_SECMARK is not set +# CONFIG_NF_CONNTRACK_ZONES is not set +# CONFIG_NF_CONNTRACK_PROCFS is not set +CONFIG_NF_CONNTRACK_EVENTS=y +# CONFIG_NF_CONNTRACK_TIMEOUT is not set +# CONFIG_NF_CONNTRACK_TIMESTAMP is not set +# CONFIG_NF_CONNTRACK_LABELS is not set +# CONFIG_NF_CT_PROTO_DCCP is not set +CONFIG_NF_CT_PROTO_GRE=y +# CONFIG_NF_CT_PROTO_SCTP is not set +# CONFIG_NF_CT_PROTO_UDPLITE is not set +CONFIG_NF_CONNTRACK_AMANDA=y +CONFIG_NF_CONNTRACK_FTP=y +CONFIG_NF_CONNTRACK_H323=y +CONFIG_NF_CONNTRACK_IRC=y +CONFIG_NF_CONNTRACK_BROADCAST=y +CONFIG_NF_CONNTRACK_NETBIOS_NS=y +# CONFIG_NF_CONNTRACK_SNMP is not set +CONFIG_NF_CONNTRACK_PPTP=y +CONFIG_NF_CONNTRACK_SANE=y +CONFIG_NF_CONNTRACK_SIP=y +CONFIG_NF_CONNTRACK_TFTP=y +CONFIG_NF_CT_NETLINK=y +# CONFIG_NETFILTER_NETLINK_GLUE_CT is not set +CONFIG_NF_NAT=y +CONFIG_NF_NAT_AMANDA=y +CONFIG_NF_NAT_FTP=y +CONFIG_NF_NAT_IRC=y +CONFIG_NF_NAT_SIP=y +CONFIG_NF_NAT_TFTP=y +CONFIG_NF_NAT_REDIRECT=y +CONFIG_NF_NAT_MASQUERADE=y +CONFIG_NETFILTER_SYNPROXY=y +CONFIG_NF_TABLES=y +CONFIG_NF_TABLES_INET=y +# CONFIG_NF_TABLES_NETDEV is not set +CONFIG_NFT_NUMGEN=y +CONFIG_NFT_CT=y +CONFIG_NFT_CONNLIMIT=y +CONFIG_NFT_LOG=y +CONFIG_NFT_LIMIT=y +CONFIG_NFT_MASQ=y +CONFIG_NFT_REDIR=y +CONFIG_NFT_NAT=y +CONFIG_NFT_TUNNEL=y +CONFIG_NFT_OBJREF=y +# CONFIG_NFT_QUEUE is not set +# CONFIG_NFT_QUOTA is not set +CONFIG_NFT_REJECT=y +CONFIG_NFT_REJECT_INET=y +CONFIG_NFT_COMPAT=y +# CONFIG_NFT_HASH is not set +CONFIG_NFT_XFRM=y +CONFIG_NFT_SOCKET=y +# CONFIG_NFT_OSF is not set +# CONFIG_NFT_TPROXY is not set +# CONFIG_NFT_SYNPROXY is not set +# CONFIG_NF_FLOW_TABLE is not set +CONFIG_NETFILTER_XTABLES=y +# CONFIG_NETFILTER_XTABLES_COMPAT is not set + +# +# Xtables combined modules +# +CONFIG_NETFILTER_XT_MARK=y +# CONFIG_NETFILTER_XT_CONNMARK is not set +CONFIG_NETFILTER_XT_SET=y + +# +# Xtables targets +# +# CONFIG_NETFILTER_XT_TARGET_AUDIT is not set +CONFIG_NETFILTER_XT_TARGET_CHECKSUM=y +# CONFIG_NETFILTER_XT_TARGET_CLASSIFY is not set +# CONFIG_NETFILTER_XT_TARGET_CONNMARK is not set +# CONFIG_NETFILTER_XT_TARGET_CT is not set +# CONFIG_NETFILTER_XT_TARGET_DSCP is not set +CONFIG_NETFILTER_XT_TARGET_HL=y +# CONFIG_NETFILTER_XT_TARGET_HMARK is not set +# CONFIG_NETFILTER_XT_TARGET_IDLETIMER is not set +CONFIG_NETFILTER_XT_TARGET_LOG=y +CONFIG_NETFILTER_XT_TARGET_MARK=y +CONFIG_NETFILTER_XT_NAT=y +CONFIG_NETFILTER_XT_TARGET_NETMAP=y +CONFIG_NETFILTER_XT_TARGET_NFLOG=y +# CONFIG_NETFILTER_XT_TARGET_NFQUEUE is not set +# CONFIG_NETFILTER_XT_TARGET_NOTRACK is not set +# CONFIG_NETFILTER_XT_TARGET_RATEEST is not set +CONFIG_NETFILTER_XT_TARGET_REDIRECT=y +CONFIG_NETFILTER_XT_TARGET_MASQUERADE=y +# CONFIG_NETFILTER_XT_TARGET_TEE is not set +# CONFIG_NETFILTER_XT_TARGET_TPROXY is not set +# CONFIG_NETFILTER_XT_TARGET_TRACE is not set +CONFIG_NETFILTER_XT_TARGET_SECMARK=y +CONFIG_NETFILTER_XT_TARGET_TCPMSS=y +# CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP is not set + +# +# Xtables matches +# +CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=y +# CONFIG_NETFILTER_XT_MATCH_BPF is not set +CONFIG_NETFILTER_XT_MATCH_CGROUP=y +# CONFIG_NETFILTER_XT_MATCH_CLUSTER is not set +CONFIG_NETFILTER_XT_MATCH_COMMENT=y +# CONFIG_NETFILTER_XT_MATCH_CONNBYTES is not set +# CONFIG_NETFILTER_XT_MATCH_CONNLABEL is not set +# CONFIG_NETFILTER_XT_MATCH_CONNLIMIT is not set +# CONFIG_NETFILTER_XT_MATCH_CONNMARK is not set +CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y +# 
CONFIG_NETFILTER_XT_MATCH_CPU is not set +# CONFIG_NETFILTER_XT_MATCH_DCCP is not set +# CONFIG_NETFILTER_XT_MATCH_DEVGROUP is not set +# CONFIG_NETFILTER_XT_MATCH_DSCP is not set +CONFIG_NETFILTER_XT_MATCH_ECN=y +# CONFIG_NETFILTER_XT_MATCH_ESP is not set +# CONFIG_NETFILTER_XT_MATCH_HASHLIMIT is not set +# CONFIG_NETFILTER_XT_MATCH_HELPER is not set +CONFIG_NETFILTER_XT_MATCH_HL=y +# CONFIG_NETFILTER_XT_MATCH_IPCOMP is not set +# CONFIG_NETFILTER_XT_MATCH_IPRANGE is not set +CONFIG_NETFILTER_XT_MATCH_IPVS=y +# CONFIG_NETFILTER_XT_MATCH_L2TP is not set +# CONFIG_NETFILTER_XT_MATCH_LENGTH is not set +CONFIG_NETFILTER_XT_MATCH_LIMIT=y +# CONFIG_NETFILTER_XT_MATCH_MAC is not set +# CONFIG_NETFILTER_XT_MATCH_MARK is not set +CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y +# CONFIG_NETFILTER_XT_MATCH_NFACCT is not set +# CONFIG_NETFILTER_XT_MATCH_OSF is not set +CONFIG_NETFILTER_XT_MATCH_OWNER=y +# CONFIG_NETFILTER_XT_MATCH_POLICY is not set +CONFIG_NETFILTER_XT_MATCH_PHYSDEV=y +# CONFIG_NETFILTER_XT_MATCH_PKTTYPE is not set +# CONFIG_NETFILTER_XT_MATCH_QUOTA is not set +# CONFIG_NETFILTER_XT_MATCH_RATEEST is not set +# CONFIG_NETFILTER_XT_MATCH_REALM is not set +# CONFIG_NETFILTER_XT_MATCH_RECENT is not set +# CONFIG_NETFILTER_XT_MATCH_SCTP is not set +# CONFIG_NETFILTER_XT_MATCH_SOCKET is not set +# CONFIG_NETFILTER_XT_MATCH_STATE is not set +CONFIG_NETFILTER_XT_MATCH_STATISTIC=y +# CONFIG_NETFILTER_XT_MATCH_STRING is not set +# CONFIG_NETFILTER_XT_MATCH_TCPMSS is not set +# CONFIG_NETFILTER_XT_MATCH_TIME is not set +# CONFIG_NETFILTER_XT_MATCH_U32 is not set +# end of Core Netfilter Configuration + +CONFIG_IP_SET=y +CONFIG_IP_SET_MAX=256 +CONFIG_IP_SET_BITMAP_IP=y +CONFIG_IP_SET_BITMAP_IPMAC=y +CONFIG_IP_SET_BITMAP_PORT=y +CONFIG_IP_SET_HASH_IP=y +CONFIG_IP_SET_HASH_IPMARK=y +CONFIG_IP_SET_HASH_IPPORT=y +CONFIG_IP_SET_HASH_IPPORTIP=y +CONFIG_IP_SET_HASH_IPPORTNET=y +CONFIG_IP_SET_HASH_IPMAC=y +CONFIG_IP_SET_HASH_MAC=y +CONFIG_IP_SET_HASH_NETPORTNET=y +CONFIG_IP_SET_HASH_NET=y +CONFIG_IP_SET_HASH_NETNET=y +CONFIG_IP_SET_HASH_NETPORT=y +CONFIG_IP_SET_HASH_NETIFACE=y +# CONFIG_IP_SET_LIST_SET is not set +CONFIG_IP_VS=y +# CONFIG_IP_VS_IPV6 is not set +# CONFIG_IP_VS_DEBUG is not set +CONFIG_IP_VS_TAB_BITS=12 + +# +# IPVS transport protocol load balancing support +# +CONFIG_IP_VS_PROTO_TCP=y +CONFIG_IP_VS_PROTO_UDP=y +# CONFIG_IP_VS_PROTO_ESP is not set +# CONFIG_IP_VS_PROTO_AH is not set +# CONFIG_IP_VS_PROTO_SCTP is not set + +# +# IPVS scheduler +# +CONFIG_IP_VS_RR=y +CONFIG_IP_VS_WRR=y +# CONFIG_IP_VS_LC is not set +# CONFIG_IP_VS_WLC is not set +# CONFIG_IP_VS_FO is not set +# CONFIG_IP_VS_OVF is not set +# CONFIG_IP_VS_LBLC is not set +# CONFIG_IP_VS_LBLCR is not set +# CONFIG_IP_VS_DH is not set +CONFIG_IP_VS_SH=y +# CONFIG_IP_VS_MH is not set +# CONFIG_IP_VS_SED is not set +# CONFIG_IP_VS_NQ is not set +# CONFIG_IP_VS_TWOS is not set + +# +# IPVS SH scheduler +# +CONFIG_IP_VS_SH_TAB_BITS=8 + +# +# IPVS MH scheduler +# +CONFIG_IP_VS_MH_TAB_INDEX=12 + +# +# IPVS application helper +# +# CONFIG_IP_VS_FTP is not set +CONFIG_IP_VS_NFCT=y +# CONFIG_IP_VS_PE_SIP is not set + +# +# IP: Netfilter Configuration +# +CONFIG_NF_DEFRAG_IPV4=y +CONFIG_NF_SOCKET_IPV4=y +# CONFIG_NF_TPROXY_IPV4 is not set +CONFIG_NF_TABLES_IPV4=y +CONFIG_NFT_REJECT_IPV4=y +# CONFIG_NFT_DUP_IPV4 is not set +# CONFIG_NFT_FIB_IPV4 is not set +# CONFIG_NF_TABLES_ARP is not set +# CONFIG_NF_DUP_IPV4 is not set +# CONFIG_NF_LOG_ARP is not set +CONFIG_NF_LOG_IPV4=y +CONFIG_NF_REJECT_IPV4=y +CONFIG_NF_NAT_PPTP=y +CONFIG_NF_NAT_H323=y 
+CONFIG_IP_NF_IPTABLES=y +CONFIG_IP_NF_MATCH_AH=y +CONFIG_IP_NF_MATCH_ECN=y +CONFIG_IP_NF_MATCH_RPFILTER=y +CONFIG_IP_NF_MATCH_TTL=y +CONFIG_IP_NF_FILTER=y +CONFIG_IP_NF_TARGET_REJECT=y +CONFIG_IP_NF_TARGET_SYNPROXY=y +CONFIG_IP_NF_NAT=y +CONFIG_IP_NF_TARGET_MASQUERADE=y +CONFIG_IP_NF_TARGET_NETMAP=y +CONFIG_IP_NF_TARGET_REDIRECT=y +CONFIG_IP_NF_MANGLE=y +CONFIG_IP_NF_TARGET_CLUSTERIP=y +CONFIG_IP_NF_TARGET_ECN=y +CONFIG_IP_NF_TARGET_TTL=y +CONFIG_IP_NF_RAW=y +CONFIG_IP_NF_SECURITY=y +CONFIG_IP_NF_ARPTABLES=y +CONFIG_IP_NF_ARPFILTER=y +CONFIG_IP_NF_ARP_MANGLE=y +# end of IP: Netfilter Configuration + +# +# IPv6: Netfilter Configuration +# +CONFIG_NF_SOCKET_IPV6=y +# CONFIG_NF_TPROXY_IPV6 is not set +CONFIG_NF_TABLES_IPV6=y +CONFIG_NFT_REJECT_IPV6=y +# CONFIG_NFT_DUP_IPV6 is not set +# CONFIG_NFT_FIB_IPV6 is not set +# CONFIG_NF_DUP_IPV6 is not set +CONFIG_NF_REJECT_IPV6=y +CONFIG_NF_LOG_IPV6=y +CONFIG_IP6_NF_IPTABLES=y +CONFIG_IP6_NF_MATCH_AH=y +CONFIG_IP6_NF_MATCH_EUI64=y +CONFIG_IP6_NF_MATCH_FRAG=y +CONFIG_IP6_NF_MATCH_OPTS=y +CONFIG_IP6_NF_MATCH_HL=y +CONFIG_IP6_NF_MATCH_IPV6HEADER=y +CONFIG_IP6_NF_MATCH_MH=y +CONFIG_IP6_NF_MATCH_RPFILTER=y +CONFIG_IP6_NF_MATCH_RT=y +CONFIG_IP6_NF_MATCH_SRH=y +CONFIG_IP6_NF_TARGET_HL=y +CONFIG_IP6_NF_FILTER=y +CONFIG_IP6_NF_TARGET_REJECT=y +CONFIG_IP6_NF_TARGET_SYNPROXY=y +CONFIG_IP6_NF_MANGLE=y +CONFIG_IP6_NF_RAW=y +CONFIG_IP6_NF_SECURITY=y +CONFIG_IP6_NF_NAT=y +CONFIG_IP6_NF_TARGET_MASQUERADE=y +CONFIG_IP6_NF_TARGET_NPT=y +# end of IPv6: Netfilter Configuration + +CONFIG_NF_DEFRAG_IPV6=y +# CONFIG_NF_TABLES_BRIDGE is not set +# CONFIG_NF_CONNTRACK_BRIDGE is not set +# CONFIG_BRIDGE_NF_EBTABLES is not set +# CONFIG_BPFILTER is not set +# CONFIG_IP_DCCP is not set +CONFIG_IP_SCTP=y +# CONFIG_SCTP_DBG_OBJCNT is not set +CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5=y +# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1 is not set +# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set +CONFIG_SCTP_COOKIE_HMAC_MD5=y +# CONFIG_SCTP_COOKIE_HMAC_SHA1 is not set +CONFIG_INET_SCTP_DIAG=y +# CONFIG_RDS is not set +# CONFIG_TIPC is not set +# CONFIG_ATM is not set +# CONFIG_L2TP is not set +CONFIG_STP=y +CONFIG_BRIDGE=y +CONFIG_BRIDGE_IGMP_SNOOPING=y +CONFIG_BRIDGE_VLAN_FILTERING=y +# CONFIG_BRIDGE_MRP is not set +# CONFIG_BRIDGE_CFM is not set +# CONFIG_NET_DSA is not set +CONFIG_VLAN_8021Q=y +# CONFIG_VLAN_8021Q_GVRP is not set +# CONFIG_VLAN_8021Q_MVRP is not set +CONFIG_LLC=y +# CONFIG_LLC2 is not set +# CONFIG_ATALK is not set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_PHONET is not set +# CONFIG_6LOWPAN is not set +# CONFIG_IEEE802154 is not set +CONFIG_NET_SCHED=y + +# +# Queueing/Scheduling +# +# CONFIG_NET_SCH_CBQ is not set +# CONFIG_NET_SCH_HTB is not set +# CONFIG_NET_SCH_HFSC is not set +# CONFIG_NET_SCH_PRIO is not set +CONFIG_NET_SCH_MULTIQ=y +# CONFIG_NET_SCH_RED is not set +# CONFIG_NET_SCH_SFB is not set +# CONFIG_NET_SCH_SFQ is not set +# CONFIG_NET_SCH_TEQL is not set +# CONFIG_NET_SCH_TBF is not set +# CONFIG_NET_SCH_CBS is not set +# CONFIG_NET_SCH_ETF is not set +# CONFIG_NET_SCH_TAPRIO is not set +# CONFIG_NET_SCH_GRED is not set +# CONFIG_NET_SCH_DSMARK is not set +# CONFIG_NET_SCH_NETEM is not set +# CONFIG_NET_SCH_DRR is not set +# CONFIG_NET_SCH_MQPRIO is not set +# CONFIG_NET_SCH_SKBPRIO is not set +# CONFIG_NET_SCH_CHOKE is not set +# CONFIG_NET_SCH_QFQ is not set +# CONFIG_NET_SCH_CODEL is not set +CONFIG_NET_SCH_FQ_CODEL=y +# CONFIG_NET_SCH_CAKE is not set +# CONFIG_NET_SCH_FQ is not set +# CONFIG_NET_SCH_HHF is not set +# 
CONFIG_NET_SCH_PIE is not set +CONFIG_NET_SCH_INGRESS=y +# CONFIG_NET_SCH_PLUG is not set +# CONFIG_NET_SCH_ETS is not set +CONFIG_NET_SCH_DEFAULT=y +CONFIG_DEFAULT_FQ_CODEL=y +# CONFIG_DEFAULT_PFIFO_FAST is not set +CONFIG_DEFAULT_NET_SCH="fq_codel" + +# +# Classification +# +CONFIG_NET_CLS=y +# CONFIG_NET_CLS_BASIC is not set +# CONFIG_NET_CLS_ROUTE4 is not set +# CONFIG_NET_CLS_FW is not set +CONFIG_NET_CLS_U32=y +CONFIG_CLS_U32_PERF=y +CONFIG_CLS_U32_MARK=y +# CONFIG_NET_CLS_FLOW is not set +CONFIG_NET_CLS_CGROUP=y +CONFIG_NET_CLS_BPF=y +CONFIG_NET_CLS_FLOWER=y +# CONFIG_NET_CLS_MATCHALL is not set +# CONFIG_NET_EMATCH is not set +CONFIG_NET_CLS_ACT=y +# CONFIG_NET_ACT_POLICE is not set +# CONFIG_NET_ACT_GACT is not set +CONFIG_NET_ACT_MIRRED=y +# CONFIG_NET_ACT_SAMPLE is not set +CONFIG_NET_ACT_IPT=y +# CONFIG_NET_ACT_NAT is not set +# CONFIG_NET_ACT_PEDIT is not set +# CONFIG_NET_ACT_SIMP is not set +# CONFIG_NET_ACT_SKBEDIT is not set +# CONFIG_NET_ACT_CSUM is not set +# CONFIG_NET_ACT_MPLS is not set +# CONFIG_NET_ACT_VLAN is not set +CONFIG_NET_ACT_BPF=y +# CONFIG_NET_ACT_CONNMARK is not set +# CONFIG_NET_ACT_CTINFO is not set +# CONFIG_NET_ACT_SKBMOD is not set +# CONFIG_NET_ACT_IFE is not set +# CONFIG_NET_ACT_TUNNEL_KEY is not set +# CONFIG_NET_ACT_GATE is not set +# CONFIG_NET_TC_SKB_EXT is not set +CONFIG_NET_SCH_FIFO=y +# CONFIG_DCB is not set +CONFIG_DNS_RESOLVER=y +# CONFIG_BATMAN_ADV is not set +# CONFIG_OPENVSWITCH is not set +CONFIG_VSOCKETS=y +CONFIG_VSOCKETS_DIAG=y +# CONFIG_VSOCKETS_LOOPBACK is not set +# CONFIG_VIRTIO_VSOCKETS is not set +CONFIG_HYPERV_VSOCKETS=y +CONFIG_NETLINK_DIAG=y +# CONFIG_MPLS is not set +# CONFIG_NET_NSH is not set +# CONFIG_HSR is not set +CONFIG_NET_SWITCHDEV=y +CONFIG_NET_L3_MASTER_DEV=y +# CONFIG_QRTR is not set +# CONFIG_NET_NCSI is not set +# CONFIG_PCPU_DEV_REFCNT is not set +CONFIG_RPS=y +CONFIG_RFS_ACCEL=y +CONFIG_SOCK_RX_QUEUE_MAPPING=y +CONFIG_XPS=y +CONFIG_CGROUP_NET_PRIO=y +CONFIG_CGROUP_NET_CLASSID=y +CONFIG_NET_RX_BUSY_POLL=y +CONFIG_BQL=y +# CONFIG_BPF_STREAM_PARSER is not set +CONFIG_NET_FLOW_LIMIT=y + +# +# Network testing +# +# CONFIG_NET_PKTGEN is not set +CONFIG_NET_DROP_MONITOR=y +# end of Network testing +# end of Networking options + +# CONFIG_HAMRADIO is not set +# CONFIG_CAN is not set +# CONFIG_BT is not set +# CONFIG_AF_RXRPC is not set +# CONFIG_AF_KCM is not set +# CONFIG_MCTP is not set +CONFIG_FIB_RULES=y +# CONFIG_WIRELESS is not set +# CONFIG_RFKILL is not set +CONFIG_NET_9P=y +CONFIG_NET_9P_FD=y +CONFIG_NET_9P_VIRTIO=y +# CONFIG_NET_9P_DEBUG is not set +# CONFIG_CAIF is not set +CONFIG_CEPH_LIB=y +# CONFIG_CEPH_LIB_PRETTYDEBUG is not set +# CONFIG_CEPH_LIB_USE_DNS_RESOLVER is not set +# CONFIG_NFC is not set +# CONFIG_PSAMPLE is not set +# CONFIG_NET_IFE is not set +# CONFIG_LWTUNNEL is not set +CONFIG_DST_CACHE=y +CONFIG_GRO_CELLS=y +CONFIG_NET_SOCK_MSG=y +CONFIG_PAGE_POOL=y +# CONFIG_PAGE_POOL_STATS is not set +CONFIG_FAILOVER=y +# CONFIG_ETHTOOL_NETLINK is not set + +# +# Device Drivers +# +CONFIG_HAVE_EISA=y +# CONFIG_EISA is not set +CONFIG_HAVE_PCI=y +CONFIG_PCI=y +CONFIG_PCI_DOMAINS=y +CONFIG_PCIEPORTBUS=y +CONFIG_PCIEAER=y +# CONFIG_PCIEAER_INJECT is not set +# CONFIG_PCIE_ECRC is not set +CONFIG_PCIEASPM=y +CONFIG_PCIEASPM_DEFAULT=y +# CONFIG_PCIEASPM_POWERSAVE is not set +# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set +# CONFIG_PCIEASPM_PERFORMANCE is not set +# CONFIG_PCIE_DPC is not set +# CONFIG_PCIE_PTM is not set +CONFIG_PCI_MSI=y +CONFIG_PCI_MSI_IRQ_DOMAIN=y +CONFIG_PCI_QUIRKS=y +# 
CONFIG_PCI_DEBUG is not set +# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set +# CONFIG_PCI_STUB is not set +# CONFIG_PCI_PF_STUB is not set +CONFIG_PCI_ATS=y +CONFIG_PCI_LOCKLESS_CONFIG=y +CONFIG_PCI_IOV=y +CONFIG_PCI_PRI=y +CONFIG_PCI_PASID=y +# CONFIG_PCI_P2PDMA is not set +CONFIG_PCI_LABEL=y +CONFIG_PCI_HYPERV=y +# CONFIG_PCIE_BUS_TUNE_OFF is not set +CONFIG_PCIE_BUS_DEFAULT=y +# CONFIG_PCIE_BUS_SAFE is not set +# CONFIG_PCIE_BUS_PERFORMANCE is not set +# CONFIG_PCIE_BUS_PEER2PEER is not set +# CONFIG_VGA_ARB is not set +# CONFIG_HOTPLUG_PCI is not set + +# +# PCI controller drivers +# +# CONFIG_VMD is not set +CONFIG_PCI_HYPERV_INTERFACE=y + +# +# DesignWare PCI Core Support +# +# CONFIG_PCIE_DW_PLAT_HOST is not set +# CONFIG_PCI_MESON is not set +# end of DesignWare PCI Core Support + +# +# Mobiveil PCIe Core Support +# +# end of Mobiveil PCIe Core Support + +# +# Cadence PCIe controllers support +# +# end of Cadence PCIe controllers support +# end of PCI controller drivers + +# +# PCI Endpoint +# +# CONFIG_PCI_ENDPOINT is not set +# end of PCI Endpoint + +# +# PCI switch controller drivers +# +# CONFIG_PCI_SW_SWITCHTEC is not set +# end of PCI switch controller drivers + +# CONFIG_CXL_BUS is not set +# CONFIG_PCCARD is not set +# CONFIG_RAPIDIO is not set + +# +# Generic Driver Options +# +CONFIG_UEVENT_HELPER=y +CONFIG_UEVENT_HELPER_PATH="" +CONFIG_DEVTMPFS=y +CONFIG_DEVTMPFS_MOUNT=y +CONFIG_DEVTMPFS_SAFE=y +CONFIG_STANDALONE=y +CONFIG_PREVENT_FIRMWARE_BUILD=y + +# +# Firmware loader +# +CONFIG_FW_LOADER=y +CONFIG_FW_LOADER_PAGED_BUF=y +CONFIG_FW_LOADER_SYSFS=y +CONFIG_EXTRA_FIRMWARE="" +# CONFIG_FW_LOADER_USER_HELPER is not set +# CONFIG_FW_LOADER_COMPRESS is not set +CONFIG_FW_UPLOAD=y +# end of Firmware loader + +CONFIG_ALLOW_DEV_COREDUMP=y +# CONFIG_DEBUG_DRIVER is not set +# CONFIG_DEBUG_DEVRES is not set +# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set +# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set +CONFIG_GENERIC_CPU_AUTOPROBE=y +CONFIG_GENERIC_CPU_VULNERABILITIES=y +CONFIG_DMA_SHARED_BUFFER=y +# CONFIG_DMA_FENCE_TRACE is not set +# end of Generic Driver Options + +# +# Bus devices +# +# CONFIG_MHI_BUS is not set +# CONFIG_MHI_BUS_EP is not set +# end of Bus devices + +CONFIG_CONNECTOR=y +CONFIG_PROC_EVENTS=y + +# +# Firmware Drivers +# + +# +# ARM System Control and Management Interface Protocol +# +# end of ARM System Control and Management Interface Protocol + +# CONFIG_EDD is not set +CONFIG_FIRMWARE_MEMMAP=y +# CONFIG_DMIID is not set +# CONFIG_DMI_SYSFS is not set +CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y +# CONFIG_ISCSI_IBFT is not set +# CONFIG_FW_CFG_SYSFS is not set +# CONFIG_SYSFB_SIMPLEFB is not set +# CONFIG_GOOGLE_FIRMWARE is not set + +# +# EFI (Extensible Firmware Interface) Support +# +CONFIG_EFI_ESRT=y +# CONFIG_EFI_FAKE_MEMMAP is not set +CONFIG_EFI_DXE_MEM_ATTRIBUTES=y +CONFIG_EFI_RUNTIME_WRAPPERS=y +CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y +# CONFIG_EFI_BOOTLOADER_CONTROL is not set +# CONFIG_EFI_CAPSULE_LOADER is not set +# CONFIG_EFI_TEST is not set +# CONFIG_APPLE_PROPERTIES is not set +CONFIG_RESET_ATTACK_MITIGATION=y +# CONFIG_EFI_RCI2_TABLE is not set +# CONFIG_EFI_DISABLE_PCI_DMA is not set +CONFIG_EFI_EARLYCON=y +# CONFIG_EFI_CUSTOM_SSDT_OVERLAYS is not set +# CONFIG_EFI_DISABLE_RUNTIME is not set +CONFIG_EFI_COCO_SECRET=y +# end of EFI (Extensible Firmware Interface) Support + +# +# Tegra firmware driver +# +# end of Tegra firmware driver +# end of Firmware Drivers + +# CONFIG_GNSS is not set +# CONFIG_MTD is not set +# CONFIG_OF is not set 
+CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y +# CONFIG_PARPORT is not set +CONFIG_PNP=y +# CONFIG_PNP_DEBUG_MESSAGES is not set + +# +# Protocols +# +CONFIG_PNPACPI=y +CONFIG_BLK_DEV=y +# CONFIG_BLK_DEV_NULL_BLK is not set +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set +CONFIG_BLK_DEV_LOOP=y +CONFIG_BLK_DEV_LOOP_MIN_COUNT=8 +# CONFIG_BLK_DEV_DRBD is not set +# CONFIG_BLK_DEV_NBD is not set +CONFIG_BLK_DEV_RAM=y +CONFIG_BLK_DEV_RAM_COUNT=16 +CONFIG_BLK_DEV_RAM_SIZE=65536 +# CONFIG_CDROM_PKTCDVD is not set +# CONFIG_ATA_OVER_ETH is not set +CONFIG_VIRTIO_BLK=y +# CONFIG_BLK_DEV_RBD is not set +# CONFIG_BLK_DEV_UBLK is not set + +# +# NVME Support +# +# CONFIG_BLK_DEV_NVME is not set +# CONFIG_NVME_FC is not set +# CONFIG_NVME_TCP is not set +# end of NVME Support + +# +# Misc devices +# +# CONFIG_AD525X_DPOT is not set +# CONFIG_DUMMY_IRQ is not set +# CONFIG_IBM_ASM is not set +# CONFIG_PHANTOM is not set +# CONFIG_TIFM_CORE is not set +# CONFIG_ICS932S401 is not set +# CONFIG_ENCLOSURE_SERVICES is not set +# CONFIG_HP_ILO is not set +# CONFIG_APDS9802ALS is not set +# CONFIG_ISL29003 is not set +# CONFIG_ISL29020 is not set +# CONFIG_SENSORS_TSL2550 is not set +# CONFIG_SENSORS_BH1770 is not set +# CONFIG_SENSORS_APDS990X is not set +# CONFIG_HMC6352 is not set +# CONFIG_DS1682 is not set +# CONFIG_SRAM is not set +# CONFIG_DW_XDATA_PCIE is not set +# CONFIG_PCI_ENDPOINT_TEST is not set +# CONFIG_XILINX_SDFEC is not set +# CONFIG_C2PORT is not set + +# +# EEPROM support +# +# CONFIG_EEPROM_AT24 is not set +# CONFIG_EEPROM_LEGACY is not set +# CONFIG_EEPROM_MAX6875 is not set +# CONFIG_EEPROM_93CX6 is not set +# CONFIG_EEPROM_IDT_89HPESX is not set +# CONFIG_EEPROM_EE1004 is not set +# end of EEPROM support + +# CONFIG_CB710_CORE is not set + +# +# Texas Instruments shared transport line discipline +# +# end of Texas Instruments shared transport line discipline + +# CONFIG_SENSORS_LIS3_I2C is not set +# CONFIG_ALTERA_STAPL is not set +# CONFIG_INTEL_MEI is not set +# CONFIG_INTEL_MEI_ME is not set +# CONFIG_INTEL_MEI_TXE is not set +# CONFIG_VMWARE_VMCI is not set +# CONFIG_GENWQE is not set +# CONFIG_ECHO is not set +# CONFIG_BCM_VK is not set +# CONFIG_MISC_ALCOR_PCI is not set +# CONFIG_MISC_RTSX_PCI is not set +# CONFIG_MISC_RTSX_USB is not set +# CONFIG_HABANA_AI is not set +# CONFIG_UACCE is not set +# CONFIG_PVPANIC is not set +# end of Misc devices + +# +# SCSI device support +# +CONFIG_SCSI_MOD=y +# CONFIG_RAID_ATTRS is not set +CONFIG_SCSI_COMMON=y +CONFIG_SCSI=y +CONFIG_SCSI_DMA=y +# CONFIG_SCSI_PROC_FS is not set + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +# CONFIG_CHR_DEV_ST is not set +# CONFIG_BLK_DEV_SR is not set +CONFIG_CHR_DEV_SG=y +CONFIG_BLK_DEV_BSG=y +# CONFIG_CHR_DEV_SCH is not set +# CONFIG_SCSI_CONSTANTS is not set +# CONFIG_SCSI_LOGGING is not set +# CONFIG_SCSI_SCAN_ASYNC is not set + +# +# SCSI Transports +# +# CONFIG_SCSI_SPI_ATTRS is not set +# CONFIG_SCSI_FC_ATTRS is not set +# CONFIG_SCSI_ISCSI_ATTRS is not set +# CONFIG_SCSI_SAS_ATTRS is not set +# CONFIG_SCSI_SAS_LIBSAS is not set +# CONFIG_SCSI_SRP_ATTRS is not set +# end of SCSI Transports + +CONFIG_SCSI_LOWLEVEL=y +# CONFIG_ISCSI_TCP is not set +# CONFIG_ISCSI_BOOT_SYSFS is not set +# CONFIG_SCSI_CXGB3_ISCSI is not set +# CONFIG_SCSI_CXGB4_ISCSI is not set +# CONFIG_SCSI_BNX2_ISCSI is not set +# CONFIG_BE2ISCSI is not set +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_HPSA is not set +# CONFIG_SCSI_3W_9XXX is not set +# CONFIG_SCSI_3W_SAS is 
not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AACRAID is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC79XX is not set +# CONFIG_SCSI_AIC94XX is not set +# CONFIG_SCSI_MVSAS is not set +# CONFIG_SCSI_MVUMI is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_ARCMSR is not set +# CONFIG_SCSI_ESAS2R is not set +# CONFIG_MEGARAID_NEWGEN is not set +# CONFIG_MEGARAID_LEGACY is not set +# CONFIG_MEGARAID_SAS is not set +# CONFIG_SCSI_MPT3SAS is not set +# CONFIG_SCSI_MPT2SAS is not set +# CONFIG_SCSI_MPI3MR is not set +# CONFIG_SCSI_SMARTPQI is not set +# CONFIG_SCSI_HPTIOP is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_MYRB is not set +# CONFIG_SCSI_MYRS is not set +# CONFIG_VMWARE_PVSCSI is not set +CONFIG_HYPERV_STORAGE=y +# CONFIG_SCSI_SNIC is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_FDOMAIN_PCI is not set +# CONFIG_SCSI_ISCI is not set +# CONFIG_SCSI_IPS is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_STEX is not set +# CONFIG_SCSI_SYM53C8XX_2 is not set +# CONFIG_SCSI_QLOGIC_1280 is not set +# CONFIG_SCSI_QLA_ISCSI is not set +# CONFIG_SCSI_DC395x is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_WD719X is not set +# CONFIG_SCSI_DEBUG is not set +# CONFIG_SCSI_PMCRAID is not set +# CONFIG_SCSI_PM8001 is not set +CONFIG_SCSI_VIRTIO=y +# CONFIG_SCSI_DH is not set +# end of SCSI device support + +# CONFIG_ATA is not set +CONFIG_MD=y +CONFIG_BLK_DEV_MD=y +# CONFIG_MD_AUTODETECT is not set +# CONFIG_MD_LINEAR is not set +CONFIG_MD_RAID0=y +CONFIG_MD_RAID1=y +CONFIG_MD_RAID10=y +CONFIG_MD_RAID456=y +# CONFIG_MD_MULTIPATH is not set +# CONFIG_MD_FAULTY is not set +# CONFIG_BCACHE is not set +CONFIG_BLK_DEV_DM_BUILTIN=y +CONFIG_BLK_DEV_DM=y +# CONFIG_DM_DEBUG is not set +CONFIG_DM_BUFIO=y +# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set +CONFIG_DM_BIO_PRISON=y +CONFIG_DM_PERSISTENT_DATA=y +# CONFIG_DM_UNSTRIPED is not set +CONFIG_DM_CRYPT=y +# CONFIG_DM_SNAPSHOT is not set +CONFIG_DM_THIN_PROVISIONING=y +# CONFIG_DM_CACHE is not set +# CONFIG_DM_WRITECACHE is not set +# CONFIG_DM_EBS is not set +# CONFIG_DM_ERA is not set +# CONFIG_DM_CLONE is not set +# CONFIG_DM_MIRROR is not set +CONFIG_DM_RAID=y +# CONFIG_DM_ZERO is not set +# CONFIG_DM_MULTIPATH is not set +# CONFIG_DM_DELAY is not set +# CONFIG_DM_DUST is not set +# CONFIG_DM_INIT is not set +# CONFIG_DM_UEVENT is not set +# CONFIG_DM_FLAKEY is not set +CONFIG_DM_VERITY=y +CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y +# CONFIG_DM_VERITY_FEC is not set +# CONFIG_DM_SWITCH is not set +# CONFIG_DM_LOG_WRITES is not set +# CONFIG_DM_INTEGRITY is not set +# CONFIG_DM_AUDIT is not set +# CONFIG_TARGET_CORE is not set +# CONFIG_FUSION is not set + +# +# IEEE 1394 (FireWire) support +# +# CONFIG_FIREWIRE is not set +# CONFIG_FIREWIRE_NOSY is not set +# end of IEEE 1394 (FireWire) support + +# CONFIG_MACINTOSH_DRIVERS is not set +CONFIG_NETDEVICES=y +CONFIG_MII=y +CONFIG_NET_CORE=y +CONFIG_BONDING=y +CONFIG_DUMMY=y +CONFIG_WIREGUARD=y +# CONFIG_WIREGUARD_DEBUG is not set +# CONFIG_EQUALIZER is not set +# CONFIG_NET_FC is not set +# CONFIG_IFB is not set +CONFIG_NET_TEAM=y +# CONFIG_NET_TEAM_MODE_BROADCAST is not set +# CONFIG_NET_TEAM_MODE_ROUNDROBIN is not set +# CONFIG_NET_TEAM_MODE_RANDOM is not set +# CONFIG_NET_TEAM_MODE_ACTIVEBACKUP is not set +# CONFIG_NET_TEAM_MODE_LOADBALANCE is not set +CONFIG_MACVLAN=y +CONFIG_MACVTAP=y +CONFIG_IPVLAN_L3S=y +CONFIG_IPVLAN=y +CONFIG_IPVTAP=y +CONFIG_VXLAN=y +CONFIG_GENEVE=y +# CONFIG_BAREUDP 
is not set +# CONFIG_GTP is not set +# CONFIG_MACSEC is not set +# CONFIG_NETCONSOLE is not set +CONFIG_TUN=y +CONFIG_TAP=y +# CONFIG_TUN_VNET_CROSS_LE is not set +CONFIG_VETH=y +CONFIG_VIRTIO_NET=y +# CONFIG_NLMON is not set +# CONFIG_ARCNET is not set +CONFIG_ETHERNET=y +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_NET_VENDOR_ADAPTEC is not set +# CONFIG_NET_VENDOR_AGERE is not set +# CONFIG_NET_VENDOR_ALACRITECH is not set +# CONFIG_NET_VENDOR_ALTEON is not set +# CONFIG_ALTERA_TSE is not set +# CONFIG_NET_VENDOR_AMAZON is not set +# CONFIG_NET_VENDOR_AMD is not set +# CONFIG_NET_VENDOR_AQUANTIA is not set +# CONFIG_NET_VENDOR_ARC is not set +# CONFIG_NET_VENDOR_ASIX is not set +# CONFIG_NET_VENDOR_ATHEROS is not set +# CONFIG_CX_ECAT is not set +# CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_VENDOR_CADENCE is not set +# CONFIG_NET_VENDOR_CAVIUM is not set +# CONFIG_NET_VENDOR_CHELSIO is not set +# CONFIG_NET_VENDOR_CISCO is not set +# CONFIG_NET_VENDOR_CORTINA is not set +# CONFIG_NET_VENDOR_DAVICOM is not set +# CONFIG_DNET is not set +# CONFIG_NET_VENDOR_DEC is not set +# CONFIG_NET_VENDOR_DLINK is not set +# CONFIG_NET_VENDOR_EMULEX is not set +# CONFIG_NET_VENDOR_ENGLEDER is not set +# CONFIG_NET_VENDOR_EZCHIP is not set +# CONFIG_NET_VENDOR_FUNGIBLE is not set +# CONFIG_NET_VENDOR_GOOGLE is not set +# CONFIG_NET_VENDOR_HUAWEI is not set +# CONFIG_NET_VENDOR_INTEL is not set +# CONFIG_NET_VENDOR_WANGXUN is not set +# CONFIG_JME is not set +# CONFIG_NET_VENDOR_LITEX is not set +# CONFIG_NET_VENDOR_MARVELL is not set +# CONFIG_NET_VENDOR_MELLANOX is not set +# CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROCHIP is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set +# CONFIG_NET_VENDOR_MICROSOFT is not set +# CONFIG_NET_VENDOR_MYRI is not set +# CONFIG_FEALNX is not set +# CONFIG_NET_VENDOR_NI is not set +# CONFIG_NET_VENDOR_NATSEMI is not set +# CONFIG_NET_VENDOR_NETERION is not set +# CONFIG_NET_VENDOR_NETRONOME is not set +# CONFIG_NET_VENDOR_NVIDIA is not set +# CONFIG_NET_VENDOR_OKI is not set +# CONFIG_ETHOC is not set +# CONFIG_NET_VENDOR_PACKET_ENGINES is not set +# CONFIG_NET_VENDOR_PENSANDO is not set +# CONFIG_NET_VENDOR_QLOGIC is not set +# CONFIG_NET_VENDOR_BROCADE is not set +# CONFIG_NET_VENDOR_QUALCOMM is not set +# CONFIG_NET_VENDOR_RDC is not set +# CONFIG_NET_VENDOR_REALTEK is not set +# CONFIG_NET_VENDOR_RENESAS is not set +# CONFIG_NET_VENDOR_ROCKER is not set +# CONFIG_NET_VENDOR_SAMSUNG is not set +# CONFIG_NET_VENDOR_SEEQ is not set +# CONFIG_NET_VENDOR_SILAN is not set +# CONFIG_NET_VENDOR_SIS is not set +# CONFIG_NET_VENDOR_SOLARFLARE is not set +# CONFIG_NET_VENDOR_SMSC is not set +# CONFIG_NET_VENDOR_SOCIONEXT is not set +# CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SUN is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set +# CONFIG_NET_VENDOR_TEHUTI is not set +# CONFIG_NET_VENDOR_TI is not set +# CONFIG_NET_VENDOR_VERTEXCOM is not set +# CONFIG_NET_VENDOR_VIA is not set +# CONFIG_NET_VENDOR_WIZNET is not set +# CONFIG_NET_VENDOR_XILINX is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_NET_SB1000 is not set +# CONFIG_PHYLIB is not set +# CONFIG_PSE_CONTROLLER is not set +# CONFIG_MDIO_DEVICE is not set + +# +# PCS device drivers +# +# end of PCS device drivers + +CONFIG_PPP=y +CONFIG_PPP_BSDCOMP=y +CONFIG_PPP_DEFLATE=y +CONFIG_PPP_FILTER=y +CONFIG_PPP_MPPE=y +CONFIG_PPP_MULTILINK=y +CONFIG_PPPOE=y +CONFIG_PPP_ASYNC=y +CONFIG_PPP_SYNC_TTY=y +# CONFIG_SLIP is not set +CONFIG_SLHC=y 
+CONFIG_USB_NET_DRIVERS=y +# CONFIG_USB_CATC is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_RTL8150 is not set +# CONFIG_USB_RTL8152 is not set +# CONFIG_USB_LAN78XX is not set +CONFIG_USB_USBNET=y +# CONFIG_USB_NET_AX8817X is not set +# CONFIG_USB_NET_AX88179_178A is not set +CONFIG_USB_NET_CDCETHER=y +# CONFIG_USB_NET_CDC_EEM is not set +CONFIG_USB_NET_CDC_NCM=y +# CONFIG_USB_NET_HUAWEI_CDC_NCM is not set +# CONFIG_USB_NET_CDC_MBIM is not set +# CONFIG_USB_NET_DM9601 is not set +# CONFIG_USB_NET_SR9700 is not set +# CONFIG_USB_NET_SR9800 is not set +# CONFIG_USB_NET_SMSC75XX is not set +# CONFIG_USB_NET_SMSC95XX is not set +# CONFIG_USB_NET_GL620A is not set +# CONFIG_USB_NET_NET1080 is not set +# CONFIG_USB_NET_PLUSB is not set +# CONFIG_USB_NET_MCS7830 is not set +# CONFIG_USB_NET_RNDIS_HOST is not set +# CONFIG_USB_NET_CDC_SUBSET is not set +# CONFIG_USB_NET_ZAURUS is not set +# CONFIG_USB_NET_CX82310_ETH is not set +# CONFIG_USB_NET_KALMIA is not set +# CONFIG_USB_NET_QMI_WWAN is not set +# CONFIG_USB_NET_INT51X1 is not set +# CONFIG_USB_IPHETH is not set +# CONFIG_USB_SIERRA_NET is not set +# CONFIG_USB_VL600 is not set +# CONFIG_USB_NET_CH9200 is not set +# CONFIG_USB_NET_AQC111 is not set +CONFIG_USB_RTL8153_ECM=y +# CONFIG_WLAN is not set +# CONFIG_WAN is not set + +# +# Wireless WAN +# +# CONFIG_WWAN is not set +# end of Wireless WAN + +# CONFIG_VMXNET3 is not set +# CONFIG_FUJITSU_ES is not set +CONFIG_HYPERV_NET=y +# CONFIG_NETDEVSIM is not set +CONFIG_NET_FAILOVER=y +# CONFIG_ISDN is not set + +# +# Input device support +# +CONFIG_INPUT=y +# CONFIG_INPUT_FF_MEMLESS is not set +# CONFIG_INPUT_SPARSEKMAP is not set +# CONFIG_INPUT_MATRIXKMAP is not set + +# +# Userland interfaces +# +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set +# CONFIG_INPUT_EVBUG is not set + +# +# Input Device Drivers +# +# CONFIG_INPUT_KEYBOARD is not set +# CONFIG_INPUT_MOUSE is not set +# CONFIG_INPUT_JOYSTICK is not set +# CONFIG_INPUT_TABLET is not set +# CONFIG_INPUT_TOUCHSCREEN is not set +# CONFIG_INPUT_MISC is not set +# CONFIG_RMI4_CORE is not set + +# +# Hardware I/O ports +# +CONFIG_SERIO=y +CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y +# CONFIG_SERIO_I8042 is not set +CONFIG_SERIO_SERPORT=y +# CONFIG_SERIO_CT82C710 is not set +# CONFIG_SERIO_PCIPS2 is not set +# CONFIG_SERIO_LIBPS2 is not set +CONFIG_SERIO_RAW=y +# CONFIG_SERIO_ALTERA_PS2 is not set +# CONFIG_SERIO_PS2MULT is not set +# CONFIG_SERIO_ARC_PS2 is not set +CONFIG_HYPERV_KEYBOARD=y +# CONFIG_USERIO is not set +# CONFIG_GAMEPORT is not set +# end of Hardware I/O ports +# end of Input device support + +# +# Character devices +# +CONFIG_TTY=y +CONFIG_VT=y +CONFIG_CONSOLE_TRANSLATIONS=y +CONFIG_VT_CONSOLE=y +CONFIG_HW_CONSOLE=y +# CONFIG_VT_HW_CONSOLE_BINDING is not set +CONFIG_UNIX98_PTYS=y +CONFIG_LEGACY_PTYS=y +CONFIG_LEGACY_PTY_COUNT=256 +# CONFIG_LDISC_AUTOLOAD is not set + +# +# Serial drivers +# +CONFIG_SERIAL_EARLYCON=y +CONFIG_SERIAL_8250=y +CONFIG_SERIAL_8250_DEPRECATED_OPTIONS=y +CONFIG_SERIAL_8250_PNP=y +# CONFIG_SERIAL_8250_16550A_VARIANTS is not set +# CONFIG_SERIAL_8250_FINTEK is not set +CONFIG_SERIAL_8250_CONSOLE=y +CONFIG_SERIAL_8250_PCI=y +# CONFIG_SERIAL_8250_EXAR is not set +CONFIG_SERIAL_8250_NR_UARTS=32 +CONFIG_SERIAL_8250_RUNTIME_UARTS=4 +# CONFIG_SERIAL_8250_EXTENDED is not set +# CONFIG_SERIAL_8250_DW is not set +# CONFIG_SERIAL_8250_RT288X is not set +# CONFIG_SERIAL_8250_LPSS is not set +# CONFIG_SERIAL_8250_MID is not 
set +# CONFIG_SERIAL_8250_PERICOM is not set + +# +# Non-8250 serial port support +# +# CONFIG_SERIAL_UARTLITE is not set +CONFIG_SERIAL_CORE=y +CONFIG_SERIAL_CORE_CONSOLE=y +# CONFIG_SERIAL_JSM is not set +# CONFIG_SERIAL_LANTIQ is not set +# CONFIG_SERIAL_SCCNXP is not set +# CONFIG_SERIAL_SC16IS7XX is not set +# CONFIG_SERIAL_ALTERA_JTAGUART is not set +# CONFIG_SERIAL_ALTERA_UART is not set +# CONFIG_SERIAL_ARC is not set +# CONFIG_SERIAL_RP2 is not set +# CONFIG_SERIAL_FSL_LPUART is not set +# CONFIG_SERIAL_FSL_LINFLEXUART is not set +# CONFIG_SERIAL_SPRD is not set +# end of Serial drivers + +# CONFIG_SERIAL_NONSTANDARD is not set +# CONFIG_N_GSM is not set +# CONFIG_NOZOMI is not set +# CONFIG_NULL_TTY is not set +CONFIG_HVC_DRIVER=y +# CONFIG_SERIAL_DEV_BUS is not set +# CONFIG_TTY_PRINTK is not set +CONFIG_VIRTIO_CONSOLE=y +# CONFIG_IPMI_HANDLER is not set +# CONFIG_HW_RANDOM is not set +# CONFIG_APPLICOM is not set +# CONFIG_MWAVE is not set +CONFIG_DEVMEM=y +CONFIG_NVRAM=y +# CONFIG_DEVPORT is not set +# CONFIG_HPET is not set +# CONFIG_HANGCHECK_TIMER is not set +# CONFIG_TCG_TPM is not set +# CONFIG_TELCLOCK is not set +# CONFIG_XILLYBUS is not set +# CONFIG_XILLYUSB is not set +CONFIG_RANDOM_TRUST_CPU=y +# CONFIG_RANDOM_TRUST_BOOTLOADER is not set +# end of Character devices + +# +# I2C support +# +CONFIG_I2C=y +# CONFIG_ACPI_I2C_OPREGION is not set +CONFIG_I2C_BOARDINFO=y +# CONFIG_I2C_COMPAT is not set +# CONFIG_I2C_CHARDEV is not set +# CONFIG_I2C_MUX is not set +# CONFIG_I2C_HELPER_AUTO is not set +# CONFIG_I2C_SMBUS is not set + +# +# I2C Algorithms +# +CONFIG_I2C_ALGOBIT=y +# CONFIG_I2C_ALGOPCF is not set +# CONFIG_I2C_ALGOPCA is not set +# end of I2C Algorithms + +# +# I2C Hardware Bus support +# + +# +# PC SMBus host controller drivers +# +# CONFIG_I2C_ALI1535 is not set +# CONFIG_I2C_ALI1563 is not set +# CONFIG_I2C_ALI15X3 is not set +# CONFIG_I2C_AMD756 is not set +# CONFIG_I2C_AMD8111 is not set +# CONFIG_I2C_AMD_MP2 is not set +# CONFIG_I2C_I801 is not set +# CONFIG_I2C_ISCH is not set +# CONFIG_I2C_ISMT is not set +# CONFIG_I2C_PIIX4 is not set +# CONFIG_I2C_NFORCE2 is not set +# CONFIG_I2C_NVIDIA_GPU is not set +# CONFIG_I2C_SIS5595 is not set +# CONFIG_I2C_SIS630 is not set +# CONFIG_I2C_SIS96X is not set +# CONFIG_I2C_VIA is not set +# CONFIG_I2C_VIAPRO is not set + +# +# ACPI drivers +# +# CONFIG_I2C_SCMI is not set + +# +# I2C system bus drivers (mostly embedded / system-on-chip) +# +# CONFIG_I2C_DESIGNWARE_PLATFORM is not set +# CONFIG_I2C_DESIGNWARE_PCI is not set +# CONFIG_I2C_EMEV2 is not set +# CONFIG_I2C_OCORES is not set +# CONFIG_I2C_PCA_PLATFORM is not set +# CONFIG_I2C_SIMTEC is not set +# CONFIG_I2C_XILINX is not set + +# +# External I2C/SMBus adapter drivers +# +# CONFIG_I2C_DIOLAN_U2C is not set +# CONFIG_I2C_CP2615 is not set +# CONFIG_I2C_PCI1XXXX is not set +# CONFIG_I2C_ROBOTFUZZ_OSIF is not set +# CONFIG_I2C_TAOS_EVM is not set +# CONFIG_I2C_TINY_USB is not set + +# +# Other I2C/SMBus bus drivers +# +# CONFIG_I2C_MLXCPLD is not set +# CONFIG_I2C_VIRTIO is not set +# end of I2C Hardware Bus support + +# CONFIG_I2C_STUB is not set +# CONFIG_I2C_SLAVE is not set +# CONFIG_I2C_DEBUG_CORE is not set +# CONFIG_I2C_DEBUG_ALGO is not set +# CONFIG_I2C_DEBUG_BUS is not set +# end of I2C support + +# CONFIG_I3C is not set +# CONFIG_SPI is not set +# CONFIG_SPMI is not set +# CONFIG_HSI is not set +CONFIG_PPS=y +# CONFIG_PPS_DEBUG is not set + +# +# PPS clients support +# +# CONFIG_PPS_CLIENT_KTIMER is not set +# CONFIG_PPS_CLIENT_LDISC is not set +# 
CONFIG_PPS_CLIENT_GPIO is not set + +# +# PPS generators support +# + +# +# PTP clock support +# +CONFIG_PTP_1588_CLOCK=y +CONFIG_PTP_1588_CLOCK_OPTIONAL=y + +# +# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks. +# +# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set +# CONFIG_PTP_1588_CLOCK_IDTCM is not set +# CONFIG_PTP_1588_CLOCK_VMW is not set +# end of PTP clock support + +# CONFIG_PINCTRL is not set +# CONFIG_GPIOLIB is not set +# CONFIG_W1 is not set +# CONFIG_POWER_RESET is not set +CONFIG_POWER_SUPPLY=y +# CONFIG_POWER_SUPPLY_DEBUG is not set +# CONFIG_PDA_POWER is not set +# CONFIG_IP5XXX_POWER is not set +# CONFIG_TEST_POWER is not set +# CONFIG_CHARGER_ADP5061 is not set +# CONFIG_BATTERY_CW2015 is not set +# CONFIG_BATTERY_DS2780 is not set +# CONFIG_BATTERY_DS2781 is not set +# CONFIG_BATTERY_DS2782 is not set +# CONFIG_BATTERY_SAMSUNG_SDI is not set +# CONFIG_BATTERY_SBS is not set +# CONFIG_CHARGER_SBS is not set +# CONFIG_BATTERY_BQ27XXX is not set +# CONFIG_BATTERY_MAX17040 is not set +# CONFIG_BATTERY_MAX17042 is not set +# CONFIG_CHARGER_MAX8903 is not set +# CONFIG_CHARGER_LP8727 is not set +# CONFIG_CHARGER_LTC4162L is not set +# CONFIG_CHARGER_MAX77976 is not set +# CONFIG_CHARGER_BQ2415X is not set +# CONFIG_BATTERY_GAUGE_LTC2941 is not set +# CONFIG_BATTERY_GOLDFISH is not set +# CONFIG_BATTERY_RT5033 is not set +# CONFIG_CHARGER_BD99954 is not set +# CONFIG_BATTERY_UG3105 is not set +# CONFIG_HWMON is not set +CONFIG_THERMAL=y +# CONFIG_THERMAL_NETLINK is not set +# CONFIG_THERMAL_STATISTICS is not set +CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0 +# CONFIG_THERMAL_WRITABLE_TRIPS is not set +CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y +# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set +# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set +# CONFIG_THERMAL_GOV_FAIR_SHARE is not set +CONFIG_THERMAL_GOV_STEP_WISE=y +# CONFIG_THERMAL_GOV_BANG_BANG is not set +# CONFIG_THERMAL_GOV_USER_SPACE is not set +# CONFIG_THERMAL_EMULATION is not set + +# +# Intel thermal drivers +# +# CONFIG_INTEL_POWERCLAMP is not set +CONFIG_X86_THERMAL_VECTOR=y +# CONFIG_X86_PKG_TEMP_THERMAL is not set +# CONFIG_INTEL_SOC_DTS_THERMAL is not set + +# +# ACPI INT340X thermal drivers +# +# CONFIG_INT340X_THERMAL is not set +# end of ACPI INT340X thermal drivers + +# CONFIG_INTEL_PCH_THERMAL is not set +# CONFIG_INTEL_TCC_COOLING is not set +# CONFIG_INTEL_HFI_THERMAL is not set +# end of Intel thermal drivers + +# CONFIG_WATCHDOG is not set +CONFIG_SSB_POSSIBLE=y +# CONFIG_SSB is not set +CONFIG_BCMA_POSSIBLE=y +# CONFIG_BCMA is not set + +# +# Multifunction device drivers +# +# CONFIG_MFD_AS3711 is not set +# CONFIG_PMIC_ADP5520 is not set +# CONFIG_MFD_BCM590XX is not set +# CONFIG_MFD_BD9571MWV is not set +# CONFIG_MFD_AXP20X_I2C is not set +# CONFIG_MFD_MADERA is not set +# CONFIG_PMIC_DA903X is not set +# CONFIG_MFD_DA9052_I2C is not set +# CONFIG_MFD_DA9055 is not set +# CONFIG_MFD_DA9062 is not set +# CONFIG_MFD_DA9063 is not set +# CONFIG_MFD_DA9150 is not set +# CONFIG_MFD_DLN2 is not set +# CONFIG_MFD_MC13XXX_I2C is not set +# CONFIG_MFD_MP2629 is not set +# CONFIG_HTC_PASIC3 is not set +# CONFIG_MFD_INTEL_QUARK_I2C_GPIO is not set +# CONFIG_LPC_ICH is not set +# CONFIG_LPC_SCH is not set +# CONFIG_MFD_INTEL_LPSS_ACPI is not set +# CONFIG_MFD_INTEL_LPSS_PCI is not set +# CONFIG_MFD_IQS62X is not set +# CONFIG_MFD_JANZ_CMODIO is not set +# CONFIG_MFD_KEMPLD is not set +# CONFIG_MFD_88PM800 is not set +# CONFIG_MFD_88PM805 is not set +# CONFIG_MFD_88PM860X is not set +# 
CONFIG_MFD_MAX14577 is not set +# CONFIG_MFD_MAX77693 is not set +# CONFIG_MFD_MAX77843 is not set +# CONFIG_MFD_MAX8907 is not set +# CONFIG_MFD_MAX8925 is not set +# CONFIG_MFD_MAX8997 is not set +# CONFIG_MFD_MAX8998 is not set +# CONFIG_MFD_MT6360 is not set +# CONFIG_MFD_MT6370 is not set +# CONFIG_MFD_MT6397 is not set +# CONFIG_MFD_MENF21BMC is not set +# CONFIG_MFD_VIPERBOARD is not set +# CONFIG_MFD_RETU is not set +# CONFIG_MFD_PCF50633 is not set +# CONFIG_MFD_SY7636A is not set +# CONFIG_MFD_RDC321X is not set +# CONFIG_MFD_RT4831 is not set +# CONFIG_MFD_RT5033 is not set +# CONFIG_MFD_RT5120 is not set +# CONFIG_MFD_RC5T583 is not set +# CONFIG_MFD_SI476X_CORE is not set +# CONFIG_MFD_SM501 is not set +# CONFIG_MFD_SKY81452 is not set +# CONFIG_MFD_SYSCON is not set +# CONFIG_MFD_TI_AM335X_TSCADC is not set +# CONFIG_MFD_LP3943 is not set +# CONFIG_MFD_LP8788 is not set +# CONFIG_MFD_TI_LMU is not set +# CONFIG_MFD_PALMAS is not set +# CONFIG_TPS6105X is not set +# CONFIG_TPS6507X is not set +# CONFIG_MFD_TPS65086 is not set +# CONFIG_MFD_TPS65090 is not set +# CONFIG_MFD_TI_LP873X is not set +# CONFIG_MFD_TPS6586X is not set +# CONFIG_MFD_TPS65912_I2C is not set +# CONFIG_TWL4030_CORE is not set +# CONFIG_TWL6040_CORE is not set +# CONFIG_MFD_WL1273_CORE is not set +# CONFIG_MFD_LM3533 is not set +# CONFIG_MFD_TQMX86 is not set +# CONFIG_MFD_VX855 is not set +# CONFIG_MFD_ARIZONA_I2C is not set +# CONFIG_MFD_WM8400 is not set +# CONFIG_MFD_WM831X_I2C is not set +# CONFIG_MFD_WM8350_I2C is not set +# CONFIG_MFD_WM8994 is not set +# CONFIG_MFD_ATC260X_I2C is not set +# end of Multifunction device drivers + +# CONFIG_REGULATOR is not set +# CONFIG_RC_CORE is not set + +# +# CEC support +# +# CONFIG_MEDIA_CEC_SUPPORT is not set +# end of CEC support + +# CONFIG_MEDIA_SUPPORT is not set + +# +# Graphics support +# +# CONFIG_AGP is not set +# CONFIG_VGA_SWITCHEROO is not set +CONFIG_DRM=y +# CONFIG_DRM_DEBUG_MM is not set +# CONFIG_DRM_DEBUG_MODESET_LOCK is not set +# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set +CONFIG_DRM_GEM_SHMEM_HELPER=y + +# +# ARM devices +# +# end of ARM devices + +# CONFIG_DRM_RADEON is not set +# CONFIG_DRM_AMDGPU is not set +# CONFIG_DRM_NOUVEAU is not set +# CONFIG_DRM_I915 is not set +CONFIG_DRM_VGEM=y +# CONFIG_DRM_VKMS is not set +# CONFIG_DRM_VMWGFX is not set +# CONFIG_DRM_GMA500 is not set +# CONFIG_DRM_UDL is not set +# CONFIG_DRM_AST is not set +# CONFIG_DRM_MGAG200 is not set +# CONFIG_DRM_QXL is not set +# CONFIG_DRM_VIRTIO_GPU is not set +CONFIG_DRM_PANEL=y + +# +# Display Panels +# +# end of Display Panels + +CONFIG_DRM_BRIDGE=y +CONFIG_DRM_PANEL_BRIDGE=y + +# +# Display Interface Bridges +# +# CONFIG_DRM_ANALOGIX_ANX78XX is not set +# end of Display Interface Bridges + +# CONFIG_DRM_ETNAVIV is not set +# CONFIG_DRM_BOCHS is not set +# CONFIG_DRM_CIRRUS_QEMU is not set +# CONFIG_DRM_GM12U320 is not set +# CONFIG_DRM_SIMPLEDRM is not set +# CONFIG_DRM_VBOXVIDEO is not set +# CONFIG_DRM_GUD is not set +# CONFIG_DRM_SSD130X is not set +# CONFIG_DRM_HYPERV is not set +# CONFIG_DRM_LEGACY is not set +CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y +CONFIG_DRM_NOMODESET=y + +# +# Frame buffer Devices +# +CONFIG_FB_CMDLINE=y +# CONFIG_FB is not set +# end of Frame buffer Devices + +# +# Backlight & LCD device support +# +# CONFIG_LCD_CLASS_DEVICE is not set +# CONFIG_BACKLIGHT_CLASS_DEVICE is not set +# end of Backlight & LCD device support + +CONFIG_HDMI=y + +# +# Console display driver support +# +# CONFIG_VGA_CONSOLE is not set +CONFIG_DUMMY_CONSOLE=y 
+CONFIG_DUMMY_CONSOLE_COLUMNS=80 +CONFIG_DUMMY_CONSOLE_ROWS=25 +# end of Console display driver support +# end of Graphics support + +# CONFIG_SOUND is not set + +# +# HID support +# +CONFIG_HID=y +# CONFIG_HID_BATTERY_STRENGTH is not set +# CONFIG_HIDRAW is not set +# CONFIG_UHID is not set +CONFIG_HID_GENERIC=y + +# +# Special HID drivers +# +# CONFIG_HID_A4TECH is not set +# CONFIG_HID_ACCUTOUCH is not set +# CONFIG_HID_ACRUX is not set +# CONFIG_HID_APPLEIR is not set +# CONFIG_HID_AUREAL is not set +# CONFIG_HID_BELKIN is not set +# CONFIG_HID_BETOP_FF is not set +# CONFIG_HID_CHERRY is not set +# CONFIG_HID_CHICONY is not set +# CONFIG_HID_COUGAR is not set +# CONFIG_HID_MACALLY is not set +# CONFIG_HID_CMEDIA is not set +# CONFIG_HID_CREATIVE_SB0540 is not set +# CONFIG_HID_CYPRESS is not set +# CONFIG_HID_DRAGONRISE is not set +# CONFIG_HID_EMS_FF is not set +# CONFIG_HID_ELECOM is not set +# CONFIG_HID_ELO is not set +# CONFIG_HID_EZKEY is not set +# CONFIG_HID_GEMBIRD is not set +# CONFIG_HID_GFRM is not set +# CONFIG_HID_GLORIOUS is not set +# CONFIG_HID_HOLTEK is not set +# CONFIG_HID_VIVALDI is not set +# CONFIG_HID_KEYTOUCH is not set +# CONFIG_HID_KYE is not set +# CONFIG_HID_UCLOGIC is not set +# CONFIG_HID_WALTOP is not set +# CONFIG_HID_VIEWSONIC is not set +# CONFIG_HID_VRC2 is not set +# CONFIG_HID_XIAOMI is not set +# CONFIG_HID_GYRATION is not set +# CONFIG_HID_ICADE is not set +# CONFIG_HID_ITE is not set +# CONFIG_HID_JABRA is not set +# CONFIG_HID_TWINHAN is not set +# CONFIG_HID_KENSINGTON is not set +# CONFIG_HID_LCPOWER is not set +# CONFIG_HID_LENOVO is not set +# CONFIG_HID_LETSKETCH is not set +# CONFIG_HID_MAGICMOUSE is not set +# CONFIG_HID_MALTRON is not set +# CONFIG_HID_MAYFLASH is not set +# CONFIG_HID_MEGAWORLD_FF is not set +# CONFIG_HID_REDRAGON is not set +# CONFIG_HID_MICROSOFT is not set +# CONFIG_HID_MONTEREY is not set +# CONFIG_HID_MULTITOUCH is not set +# CONFIG_HID_NTI is not set +# CONFIG_HID_NTRIG is not set +# CONFIG_HID_ORTEK is not set +# CONFIG_HID_PANTHERLORD is not set +# CONFIG_HID_PENMOUNT is not set +# CONFIG_HID_PETALYNX is not set +# CONFIG_HID_PICOLCD is not set +# CONFIG_HID_PLANTRONICS is not set +# CONFIG_HID_PXRC is not set +# CONFIG_HID_RAZER is not set +# CONFIG_HID_PRIMAX is not set +# CONFIG_HID_RETRODE is not set +# CONFIG_HID_ROCCAT is not set +# CONFIG_HID_SAITEK is not set +# CONFIG_HID_SAMSUNG is not set +# CONFIG_HID_SEMITEK is not set +# CONFIG_HID_SIGMAMICRO is not set +# CONFIG_HID_SPEEDLINK is not set +# CONFIG_HID_STEAM is not set +# CONFIG_HID_STEELSERIES is not set +# CONFIG_HID_SUNPLUS is not set +# CONFIG_HID_RMI is not set +# CONFIG_HID_GREENASIA is not set +# CONFIG_HID_HYPERV_MOUSE is not set +# CONFIG_HID_SMARTJOYPLUS is not set +# CONFIG_HID_TIVO is not set +# CONFIG_HID_TOPSEED is not set +# CONFIG_HID_TOPRE is not set +# CONFIG_HID_THRUSTMASTER is not set +# CONFIG_HID_UDRAW_PS3 is not set +# CONFIG_HID_WACOM is not set +# CONFIG_HID_XINMO is not set +# CONFIG_HID_ZEROPLUS is not set +# CONFIG_HID_ZYDACRON is not set +# CONFIG_HID_SENSOR_HUB is not set +# CONFIG_HID_ALPS is not set +# end of Special HID drivers + +# +# USB HID support +# +CONFIG_USB_HID=y +# CONFIG_HID_PID is not set +# CONFIG_USB_HIDDEV is not set +# end of USB HID support + +# +# I2C HID support +# +# CONFIG_I2C_HID_ACPI is not set +# end of I2C HID support + +# +# Intel ISH HID support +# +# CONFIG_INTEL_ISH_HID is not set +# end of Intel ISH HID support + +# +# AMD SFH HID Support +# +# CONFIG_AMD_SFH_HID is not set +# end of 
AMD SFH HID Support +# end of HID support + +CONFIG_USB_OHCI_LITTLE_ENDIAN=y +CONFIG_USB_SUPPORT=y +CONFIG_USB_COMMON=y +# CONFIG_USB_ULPI_BUS is not set +CONFIG_USB_ARCH_HAS_HCD=y +CONFIG_USB=y +# CONFIG_USB_PCI is not set +CONFIG_USB_ANNOUNCE_NEW_DEVICES=y + +# +# Miscellaneous USB options +# +CONFIG_USB_DEFAULT_PERSIST=y +# CONFIG_USB_FEW_INIT_RETRIES is not set +# CONFIG_USB_DYNAMIC_MINORS is not set +# CONFIG_USB_OTG_PRODUCTLIST is not set +# CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB is not set +CONFIG_USB_AUTOSUSPEND_DELAY=2 +# CONFIG_USB_MON is not set + +# +# USB Host Controller Drivers +# +# CONFIG_USB_C67X00_HCD is not set +# CONFIG_USB_XHCI_HCD is not set +# CONFIG_USB_EHCI_HCD is not set +# CONFIG_USB_OXU210HP_HCD is not set +# CONFIG_USB_ISP116X_HCD is not set +# CONFIG_USB_FOTG210_HCD is not set +# CONFIG_USB_OHCI_HCD is not set +# CONFIG_USB_SL811_HCD is not set +# CONFIG_USB_R8A66597_HCD is not set +# CONFIG_USB_HCD_TEST_MODE is not set + +# +# USB Device Class drivers +# +CONFIG_USB_ACM=y +# CONFIG_USB_PRINTER is not set +# CONFIG_USB_WDM is not set +# CONFIG_USB_TMC is not set + +# +# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may +# + +# +# also be needed; see USB_STORAGE Help for more info +# +# CONFIG_USB_STORAGE is not set + +# +# USB Imaging devices +# +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_MICROTEK is not set +CONFIG_USBIP_CORE=y +CONFIG_USBIP_VHCI_HCD=y +CONFIG_USBIP_VHCI_HC_PORTS=8 +CONFIG_USBIP_VHCI_NR_HCS=1 +# CONFIG_USBIP_HOST is not set +# CONFIG_USBIP_DEBUG is not set +# CONFIG_USB_CDNS_SUPPORT is not set +# CONFIG_USB_MUSB_HDRC is not set +# CONFIG_USB_DWC3 is not set +# CONFIG_USB_DWC2 is not set +# CONFIG_USB_ISP1760 is not set + +# +# USB port drivers +# +CONFIG_USB_SERIAL=y +# CONFIG_USB_SERIAL_CONSOLE is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_SIMPLE is not set +# CONFIG_USB_SERIAL_AIRCABLE is not set +# CONFIG_USB_SERIAL_ARK3116 is not set +# CONFIG_USB_SERIAL_BELKIN is not set +CONFIG_USB_SERIAL_CH341=y +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +CONFIG_USB_SERIAL_CP210X=y +# CONFIG_USB_SERIAL_CYPRESS_M8 is not set +# CONFIG_USB_SERIAL_EMPEG is not set +CONFIG_USB_SERIAL_FTDI_SIO=y +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IPAQ is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_EDGEPORT_TI is not set +# CONFIG_USB_SERIAL_F81232 is not set +# CONFIG_USB_SERIAL_F8153X is not set +# CONFIG_USB_SERIAL_GARMIN is not set +# CONFIG_USB_SERIAL_IPW is not set +# CONFIG_USB_SERIAL_IUU is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KLSI is not set +# CONFIG_USB_SERIAL_KOBIL_SCT is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_METRO is not set +# CONFIG_USB_SERIAL_MOS7720 is not set +# CONFIG_USB_SERIAL_MOS7840 is not set +# CONFIG_USB_SERIAL_MXUPORT is not set +# CONFIG_USB_SERIAL_NAVMAN is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_OTI6858 is not set +# CONFIG_USB_SERIAL_QCAUX is not set +# CONFIG_USB_SERIAL_QUALCOMM is not set +# CONFIG_USB_SERIAL_SPCP8X5 is not set +# CONFIG_USB_SERIAL_SAFE is not set +# CONFIG_USB_SERIAL_SIERRAWIRELESS is not set +# CONFIG_USB_SERIAL_SYMBOL is not set +# CONFIG_USB_SERIAL_TI is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_OPTION is not set +# CONFIG_USB_SERIAL_OMNINET is not set +# CONFIG_USB_SERIAL_OPTICON is not set +# 
CONFIG_USB_SERIAL_XSENS_MT is not set +# CONFIG_USB_SERIAL_WISHBONE is not set +# CONFIG_USB_SERIAL_SSU100 is not set +# CONFIG_USB_SERIAL_QT2 is not set +# CONFIG_USB_SERIAL_UPD78F0730 is not set +# CONFIG_USB_SERIAL_XR is not set +# CONFIG_USB_SERIAL_DEBUG is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_EMI62 is not set +# CONFIG_USB_EMI26 is not set +# CONFIG_USB_ADUTUX is not set +# CONFIG_USB_SEVSEG is not set +# CONFIG_USB_LEGOTOWER is not set +# CONFIG_USB_LCD is not set +# CONFIG_USB_CYPRESS_CY7C63 is not set +# CONFIG_USB_CYTHERM is not set +# CONFIG_USB_IDMOUSE is not set +# CONFIG_USB_FTDI_ELAN is not set +# CONFIG_USB_APPLEDISPLAY is not set +# CONFIG_APPLE_MFI_FASTCHARGE is not set +# CONFIG_USB_LD is not set +# CONFIG_USB_TRANCEVIBRATOR is not set +# CONFIG_USB_IOWARRIOR is not set +# CONFIG_USB_TEST is not set +# CONFIG_USB_EHSET_TEST_FIXTURE is not set +# CONFIG_USB_ISIGHTFW is not set +# CONFIG_USB_YUREX is not set +# CONFIG_USB_EZUSB_FX2 is not set +# CONFIG_USB_HUB_USB251XB is not set +# CONFIG_USB_HSIC_USB3503 is not set +# CONFIG_USB_HSIC_USB4604 is not set +# CONFIG_USB_LINK_LAYER_TEST is not set + +# +# USB Physical Layer drivers +# +# CONFIG_NOP_USB_XCEIV is not set +# CONFIG_USB_ISP1301 is not set +# end of USB Physical Layer drivers + +# CONFIG_USB_GADGET is not set +# CONFIG_TYPEC is not set +# CONFIG_USB_ROLE_SWITCH is not set +# CONFIG_MMC is not set +# CONFIG_SCSI_UFSHCD is not set +# CONFIG_MEMSTICK is not set +# CONFIG_NEW_LEDS is not set +# CONFIG_ACCESSIBILITY is not set +# CONFIG_INFINIBAND is not set +CONFIG_EDAC_ATOMIC_SCRUB=y +CONFIG_EDAC_SUPPORT=y +# CONFIG_EDAC is not set +CONFIG_RTC_LIB=y +CONFIG_RTC_MC146818_LIB=y +CONFIG_RTC_CLASS=y +CONFIG_RTC_HCTOSYS=y +CONFIG_RTC_HCTOSYS_DEVICE="rtc0" +CONFIG_RTC_SYSTOHC=y +CONFIG_RTC_SYSTOHC_DEVICE="rtc0" +# CONFIG_RTC_DEBUG is not set +CONFIG_RTC_NVMEM=y + +# +# RTC interfaces +# +CONFIG_RTC_INTF_SYSFS=y +CONFIG_RTC_INTF_PROC=y +CONFIG_RTC_INTF_DEV=y +CONFIG_RTC_INTF_DEV_UIE_EMUL=y +# CONFIG_RTC_DRV_TEST is not set + +# +# I2C RTC drivers +# +# CONFIG_RTC_DRV_ABB5ZES3 is not set +# CONFIG_RTC_DRV_ABEOZ9 is not set +# CONFIG_RTC_DRV_ABX80X is not set +# CONFIG_RTC_DRV_DS1307 is not set +# CONFIG_RTC_DRV_DS1374 is not set +# CONFIG_RTC_DRV_DS1672 is not set +# CONFIG_RTC_DRV_MAX6900 is not set +# CONFIG_RTC_DRV_RS5C372 is not set +# CONFIG_RTC_DRV_ISL1208 is not set +# CONFIG_RTC_DRV_ISL12022 is not set +# CONFIG_RTC_DRV_X1205 is not set +# CONFIG_RTC_DRV_PCF8523 is not set +# CONFIG_RTC_DRV_PCF85063 is not set +# CONFIG_RTC_DRV_PCF85363 is not set +# CONFIG_RTC_DRV_PCF8563 is not set +# CONFIG_RTC_DRV_PCF8583 is not set +# CONFIG_RTC_DRV_M41T80 is not set +# CONFIG_RTC_DRV_BQ32K is not set +# CONFIG_RTC_DRV_S35390A is not set +# CONFIG_RTC_DRV_FM3130 is not set +# CONFIG_RTC_DRV_RX8010 is not set +# CONFIG_RTC_DRV_RX8581 is not set +# CONFIG_RTC_DRV_RX8025 is not set +# CONFIG_RTC_DRV_EM3027 is not set +# CONFIG_RTC_DRV_RV3028 is not set +# CONFIG_RTC_DRV_RV3032 is not set +# CONFIG_RTC_DRV_RV8803 is not set +# CONFIG_RTC_DRV_SD3078 is not set + +# +# SPI RTC drivers +# +CONFIG_RTC_I2C_AND_SPI=y + +# +# SPI and I2C RTC drivers +# +# CONFIG_RTC_DRV_DS3232 is not set +# CONFIG_RTC_DRV_PCF2127 is not set +# CONFIG_RTC_DRV_RV3029C2 is not set +# CONFIG_RTC_DRV_RX6110 is not set + +# +# Platform RTC drivers +# +CONFIG_RTC_DRV_CMOS=y +# CONFIG_RTC_DRV_DS1286 is not set +# CONFIG_RTC_DRV_DS1511 is not set +# CONFIG_RTC_DRV_DS1553 is not set +# CONFIG_RTC_DRV_DS1685_FAMILY is not set +# 
CONFIG_RTC_DRV_DS1742 is not set +# CONFIG_RTC_DRV_DS2404 is not set +# CONFIG_RTC_DRV_STK17TA8 is not set +# CONFIG_RTC_DRV_M48T86 is not set +# CONFIG_RTC_DRV_M48T35 is not set +# CONFIG_RTC_DRV_M48T59 is not set +# CONFIG_RTC_DRV_MSM6242 is not set +# CONFIG_RTC_DRV_BQ4802 is not set +# CONFIG_RTC_DRV_RP5C01 is not set +# CONFIG_RTC_DRV_V3020 is not set + +# +# on-CPU RTC drivers +# +# CONFIG_RTC_DRV_FTRTC010 is not set + +# +# HID Sensor RTC drivers +# +# CONFIG_RTC_DRV_GOLDFISH is not set +# CONFIG_DMADEVICES is not set + +# +# DMABUF options +# +CONFIG_SYNC_FILE=y +# CONFIG_SW_SYNC is not set +# CONFIG_UDMABUF is not set +# CONFIG_DMABUF_MOVE_NOTIFY is not set +# CONFIG_DMABUF_DEBUG is not set +# CONFIG_DMABUF_SELFTESTS is not set +# CONFIG_DMABUF_HEAPS is not set +# CONFIG_DMABUF_SYSFS_STATS is not set +# end of DMABUF options + +# CONFIG_AUXDISPLAY is not set +CONFIG_UIO=y +# CONFIG_UIO_CIF is not set +CONFIG_UIO_PDRV_GENIRQ=y +CONFIG_UIO_DMEM_GENIRQ=y +# CONFIG_UIO_AEC is not set +# CONFIG_UIO_SERCOS3 is not set +# CONFIG_UIO_PCI_GENERIC is not set +# CONFIG_UIO_NETX is not set +# CONFIG_UIO_PRUSS is not set +# CONFIG_UIO_MF624 is not set +# CONFIG_UIO_HV_GENERIC is not set +CONFIG_VFIO=y +CONFIG_VFIO_IOMMU_TYPE1=y +CONFIG_VFIO_VIRQFD=y +# CONFIG_VFIO_NOIOMMU is not set +CONFIG_VFIO_PCI_CORE=y +CONFIG_VFIO_PCI_MMAP=y +CONFIG_VFIO_PCI_INTX=y +CONFIG_VFIO_PCI=y +# CONFIG_VFIO_PCI_IGD is not set +CONFIG_VFIO_MDEV=y +CONFIG_IRQ_BYPASS_MANAGER=y +# CONFIG_VIRT_DRIVERS is not set +CONFIG_VIRTIO_ANCHOR=y +CONFIG_VIRTIO=y +CONFIG_VIRTIO_PCI_LIB=y +CONFIG_VIRTIO_MENU=y +CONFIG_VIRTIO_PCI=y +# CONFIG_VIRTIO_PCI_LEGACY is not set +# CONFIG_VIRTIO_VDPA is not set +CONFIG_VIRTIO_PMEM=y +CONFIG_VIRTIO_BALLOON=y +CONFIG_VIRTIO_INPUT=y +CONFIG_VIRTIO_MMIO=y +# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set +CONFIG_VDPA=y +# CONFIG_VDPA_USER is not set +# CONFIG_IFCVF is not set +# CONFIG_VP_VDPA is not set +# CONFIG_ALIBABA_ENI_VDPA is not set +CONFIG_VHOST_IOTLB=y +CONFIG_VHOST=y +CONFIG_VHOST_MENU=y +CONFIG_VHOST_NET=y +# CONFIG_VHOST_VSOCK is not set +CONFIG_VHOST_VDPA=y +# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set + +# +# Microsoft Hyper-V guest support +# +CONFIG_HYPERV=y +CONFIG_HYPERV_TIMER=y +CONFIG_HYPERV_UTILS=y +CONFIG_HYPERV_BALLOON=y +CONFIG_DXGKRNL=y +# end of Microsoft Hyper-V guest support + +# CONFIG_GREYBUS is not set +# CONFIG_COMEDI is not set +# CONFIG_STAGING is not set +# CONFIG_CHROME_PLATFORMS is not set +# CONFIG_MELLANOX_PLATFORM is not set +# CONFIG_SURFACE_PLATFORMS is not set +# CONFIG_X86_PLATFORM_DEVICES is not set +# CONFIG_P2SB is not set +CONFIG_HAVE_CLK=y +CONFIG_HAVE_CLK_PREPARE=y +CONFIG_COMMON_CLK=y +# CONFIG_COMMON_CLK_MAX9485 is not set +# CONFIG_COMMON_CLK_SI5341 is not set +# CONFIG_COMMON_CLK_SI5351 is not set +# CONFIG_COMMON_CLK_SI544 is not set +# CONFIG_COMMON_CLK_CDCE706 is not set +# CONFIG_COMMON_CLK_CS2000_CP is not set +# CONFIG_XILINX_VCU is not set +# CONFIG_HWSPINLOCK is not set + +# +# Clock Source drivers +# +CONFIG_CLKEVT_I8253=y +CONFIG_I8253_LOCK=y +CONFIG_CLKBLD_I8253=y +# end of Clock Source drivers + +CONFIG_MAILBOX=y +CONFIG_PCC=y +# CONFIG_ALTERA_MBOX is not set +CONFIG_IOMMU_IOVA=y +CONFIG_IOASID=y +CONFIG_IOMMU_API=y +CONFIG_IOMMU_SUPPORT=y + +# +# Generic IOMMU Pagetable Support +# +CONFIG_IOMMU_IO_PGTABLE=y +# end of Generic IOMMU Pagetable Support + +# CONFIG_IOMMU_DEBUGFS is not set +# CONFIG_IOMMU_DEFAULT_DMA_STRICT is not set +CONFIG_IOMMU_DEFAULT_DMA_LAZY=y +# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set 
+CONFIG_IOMMU_DMA=y +CONFIG_AMD_IOMMU=y +# CONFIG_AMD_IOMMU_V2 is not set +CONFIG_DMAR_TABLE=y +CONFIG_INTEL_IOMMU=y +# CONFIG_INTEL_IOMMU_SVM is not set +# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set +CONFIG_INTEL_IOMMU_FLOPPY_WA=y +# CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON is not set +# CONFIG_IRQ_REMAP is not set +# CONFIG_HYPERV_IOMMU is not set +# CONFIG_VIRTIO_IOMMU is not set + +# +# Remoteproc drivers +# +# CONFIG_REMOTEPROC is not set +# end of Remoteproc drivers + +# +# Rpmsg drivers +# +# CONFIG_RPMSG_QCOM_GLINK_RPM is not set +# CONFIG_RPMSG_VIRTIO is not set +# end of Rpmsg drivers + +# CONFIG_SOUNDWIRE is not set + +# +# SOC (System On Chip) specific Drivers +# + +# +# Amlogic SoC drivers +# +# end of Amlogic SoC drivers + +# +# Broadcom SoC drivers +# +# end of Broadcom SoC drivers + +# +# NXP/Freescale QorIQ SoC drivers +# +# end of NXP/Freescale QorIQ SoC drivers + +# +# fujitsu SoC drivers +# +# end of fujitsu SoC drivers + +# +# i.MX SoC drivers +# +# end of i.MX SoC drivers + +# +# Enable LiteX SoC Builder specific drivers +# +# end of Enable LiteX SoC Builder specific drivers + +# +# Qualcomm SoC drivers +# +# end of Qualcomm SoC drivers + +# CONFIG_SOC_TI is not set + +# +# Xilinx SoC drivers +# +# end of Xilinx SoC drivers +# end of SOC (System On Chip) specific Drivers + +# CONFIG_PM_DEVFREQ is not set +# CONFIG_EXTCON is not set +# CONFIG_MEMORY is not set +# CONFIG_IIO is not set +# CONFIG_NTB is not set +# CONFIG_PWM is not set + +# +# IRQ chip support +# +# end of IRQ chip support + +# CONFIG_IPACK_BUS is not set +# CONFIG_RESET_CONTROLLER is not set + +# +# PHY Subsystem +# +CONFIG_GENERIC_PHY=y +# CONFIG_USB_LGM_PHY is not set +# CONFIG_PHY_CAN_TRANSCEIVER is not set + +# +# PHY drivers for Broadcom platforms +# +# CONFIG_BCM_KONA_USB2_PHY is not set +# end of PHY drivers for Broadcom platforms + +# CONFIG_PHY_PXA_28NM_HSIC is not set +# CONFIG_PHY_PXA_28NM_USB2 is not set +# CONFIG_PHY_INTEL_LGM_EMMC is not set +# end of PHY Subsystem + +# CONFIG_POWERCAP is not set +# CONFIG_MCB is not set + +# +# Performance monitor support +# +# end of Performance monitor support + +CONFIG_RAS=y +# CONFIG_USB4 is not set + +# +# Android +# +# CONFIG_ANDROID_BINDER_IPC is not set +# end of Android + +CONFIG_LIBNVDIMM=y +CONFIG_BLK_DEV_PMEM=y +CONFIG_ND_CLAIM=y +CONFIG_ND_BTT=y +CONFIG_BTT=y +CONFIG_ND_PFN=y +CONFIG_NVDIMM_PFN=y +CONFIG_NVDIMM_DAX=y +CONFIG_DAX=y +CONFIG_DEV_DAX=y +CONFIG_DEV_DAX_PMEM=y +CONFIG_DEV_DAX_KMEM=y +CONFIG_NVMEM=y +# CONFIG_NVMEM_SYSFS is not set +# CONFIG_NVMEM_RMEM is not set + +# +# HW tracing support +# +# CONFIG_STM is not set +# CONFIG_INTEL_TH is not set +# end of HW tracing support + +# CONFIG_FPGA is not set +# CONFIG_TEE is not set +# CONFIG_SIOX is not set +# CONFIG_SLIMBUS is not set +# CONFIG_INTERCONNECT is not set +# CONFIG_COUNTER is not set +# CONFIG_PECI is not set +# CONFIG_HTE is not set +# end of Device Drivers + +# +# File systems +# +CONFIG_DCACHE_WORD_ACCESS=y +# CONFIG_VALIDATE_FS_PARSER is not set +CONFIG_FS_IOMAP=y +# CONFIG_EXT2_FS is not set +# CONFIG_EXT3_FS is not set +CONFIG_EXT4_FS=y +CONFIG_EXT4_USE_FOR_EXT2=y +CONFIG_EXT4_FS_POSIX_ACL=y +CONFIG_EXT4_FS_SECURITY=y +# CONFIG_EXT4_DEBUG is not set +CONFIG_JBD2=y +# CONFIG_JBD2_DEBUG is not set +CONFIG_FS_MBCACHE=y +# CONFIG_REISERFS_FS is not set +# CONFIG_JFS_FS is not set +CONFIG_XFS_FS=y +# CONFIG_XFS_SUPPORT_V4 is not set +CONFIG_XFS_QUOTA=y +CONFIG_XFS_POSIX_ACL=y +CONFIG_XFS_RT=y +CONFIG_XFS_ONLINE_SCRUB=y +CONFIG_XFS_ONLINE_REPAIR=y +# CONFIG_XFS_WARN is not 
set +# CONFIG_XFS_DEBUG is not set +# CONFIG_GFS2_FS is not set +CONFIG_BTRFS_FS=y +CONFIG_BTRFS_FS_POSIX_ACL=y +# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set +# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set +# CONFIG_BTRFS_DEBUG is not set +# CONFIG_BTRFS_ASSERT is not set +# CONFIG_BTRFS_FS_REF_VERIFY is not set +# CONFIG_NILFS2_FS is not set +# CONFIG_F2FS_FS is not set +CONFIG_FS_DAX=y +CONFIG_FS_DAX_PMD=y +CONFIG_FS_POSIX_ACL=y +CONFIG_EXPORTFS=y +CONFIG_EXPORTFS_BLOCK_OPS=y +CONFIG_FILE_LOCKING=y +# CONFIG_FS_ENCRYPTION is not set +# CONFIG_FS_VERITY is not set +CONFIG_FSNOTIFY=y +CONFIG_DNOTIFY=y +CONFIG_INOTIFY_USER=y +CONFIG_FANOTIFY=y +CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y +CONFIG_QUOTA=y +# CONFIG_QUOTA_NETLINK_INTERFACE is not set +# CONFIG_PRINT_QUOTA_WARNING is not set +# CONFIG_QUOTA_DEBUG is not set +# CONFIG_QFMT_V1 is not set +# CONFIG_QFMT_V2 is not set +CONFIG_QUOTACTL=y +CONFIG_AUTOFS4_FS=y +CONFIG_AUTOFS_FS=y +CONFIG_FUSE_FS=y +CONFIG_CUSE=y +CONFIG_VIRTIO_FS=y +CONFIG_FUSE_DAX=y +CONFIG_OVERLAY_FS=y +# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set +# CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW is not set +# CONFIG_OVERLAY_FS_INDEX is not set +# CONFIG_OVERLAY_FS_XINO_AUTO is not set +# CONFIG_OVERLAY_FS_METACOPY is not set + +# +# Caches +# +CONFIG_NETFS_SUPPORT=y +# CONFIG_NETFS_STATS is not set +CONFIG_FSCACHE=y +# CONFIG_FSCACHE_STATS is not set +# CONFIG_FSCACHE_DEBUG is not set +# CONFIG_CACHEFILES is not set +# end of Caches + +# +# CD-ROM/DVD Filesystems +# +CONFIG_ISO9660_FS=y +CONFIG_JOLIET=y +CONFIG_ZISOFS=y +CONFIG_UDF_FS=y +# end of CD-ROM/DVD Filesystems + +# +# DOS/FAT/EXFAT/NT Filesystems +# +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y +CONFIG_VFAT_FS=y +CONFIG_FAT_DEFAULT_CODEPAGE=437 +CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1" +# CONFIG_FAT_DEFAULT_UTF8 is not set +# CONFIG_EXFAT_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS3_FS is not set +# end of DOS/FAT/EXFAT/NT Filesystems + +# +# Pseudo filesystems +# +CONFIG_PROC_FS=y +CONFIG_PROC_KCORE=y +CONFIG_PROC_SYSCTL=y +CONFIG_PROC_PAGE_MONITOR=y +CONFIG_PROC_CHILDREN=y +CONFIG_PROC_PID_ARCH_STATUS=y +CONFIG_KERNFS=y +CONFIG_SYSFS=y +CONFIG_TMPFS=y +CONFIG_TMPFS_POSIX_ACL=y +CONFIG_TMPFS_XATTR=y +# CONFIG_TMPFS_INODE64 is not set +CONFIG_HUGETLBFS=y +CONFIG_HUGETLB_PAGE=y +CONFIG_ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=y +CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=y +# CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON is not set +CONFIG_MEMFD_CREATE=y +CONFIG_ARCH_HAS_GIGANTIC_PAGE=y +# CONFIG_CONFIGFS_FS is not set +# CONFIG_EFIVAR_FS is not set +# end of Pseudo filesystems + +CONFIG_MISC_FILESYSTEMS=y +# CONFIG_ORANGEFS_FS is not set +# CONFIG_ADFS_FS is not set +# CONFIG_AFFS_FS is not set +# CONFIG_ECRYPT_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_HFSPLUS_FS is not set +# CONFIG_BEFS_FS is not set +# CONFIG_BFS_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_SQUASHFS=y +# CONFIG_SQUASHFS_FILE_CACHE is not set +CONFIG_SQUASHFS_FILE_DIRECT=y +CONFIG_SQUASHFS_DECOMP_SINGLE=y +# CONFIG_SQUASHFS_DECOMP_MULTI is not set +# CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU is not set +CONFIG_SQUASHFS_XATTR=y +CONFIG_SQUASHFS_ZLIB=y +CONFIG_SQUASHFS_LZ4=y +CONFIG_SQUASHFS_LZO=y +CONFIG_SQUASHFS_XZ=y +CONFIG_SQUASHFS_ZSTD=y +# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set +# CONFIG_SQUASHFS_EMBEDDED is not set +CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3 +# CONFIG_VXFS_FS is not set +# CONFIG_MINIX_FS is not set +# CONFIG_OMFS_FS is not set +# CONFIG_HPFS_FS is not set +# CONFIG_QNX4FS_FS is not set +# 
CONFIG_QNX6FS_FS is not set +# CONFIG_ROMFS_FS is not set +# CONFIG_PSTORE is not set +# CONFIG_SYSV_FS is not set +# CONFIG_UFS_FS is not set +CONFIG_EROFS_FS=y +# CONFIG_EROFS_FS_DEBUG is not set +CONFIG_EROFS_FS_XATTR=y +CONFIG_EROFS_FS_POSIX_ACL=y +CONFIG_EROFS_FS_SECURITY=y +CONFIG_EROFS_FS_ZIP=y +# CONFIG_EROFS_FS_ZIP_LZMA is not set +CONFIG_NETWORK_FILESYSTEMS=y +CONFIG_NFS_FS=y +CONFIG_NFS_V2=y +CONFIG_NFS_V3=y +# CONFIG_NFS_V3_ACL is not set +CONFIG_NFS_V4=y +# CONFIG_NFS_SWAP is not set +CONFIG_NFS_V4_1=y +# CONFIG_NFS_V4_2 is not set +CONFIG_PNFS_FILE_LAYOUT=y +CONFIG_PNFS_BLOCK=y +CONFIG_PNFS_FLEXFILE_LAYOUT=y +CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org" +# CONFIG_NFS_V4_1_MIGRATION is not set +CONFIG_ROOT_NFS=y +# CONFIG_NFS_FSCACHE is not set +# CONFIG_NFS_USE_LEGACY_DNS is not set +CONFIG_NFS_USE_KERNEL_DNS=y +# CONFIG_NFS_DISABLE_UDP_SUPPORT is not set +CONFIG_NFSD=y +CONFIG_NFSD_V2_ACL=y +CONFIG_NFSD_V3_ACL=y +CONFIG_NFSD_V4=y +CONFIG_NFSD_PNFS=y +CONFIG_NFSD_BLOCKLAYOUT=y +CONFIG_NFSD_SCSILAYOUT=y +CONFIG_NFSD_FLEXFILELAYOUT=y +CONFIG_NFSD_V4_SECURITY_LABEL=y +CONFIG_GRACE_PERIOD=y +CONFIG_LOCKD=y +CONFIG_LOCKD_V4=y +CONFIG_NFS_ACL_SUPPORT=y +CONFIG_NFS_COMMON=y +CONFIG_SUNRPC=y +CONFIG_SUNRPC_GSS=y +CONFIG_SUNRPC_BACKCHANNEL=y +# CONFIG_RPCSEC_GSS_KRB5 is not set +# CONFIG_SUNRPC_DEBUG is not set +CONFIG_CEPH_FS=y +CONFIG_CEPH_FSCACHE=y +CONFIG_CEPH_FS_POSIX_ACL=y +# CONFIG_CEPH_FS_SECURITY_LABEL is not set +CONFIG_CIFS=y +# CONFIG_CIFS_STATS2 is not set +CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y +# CONFIG_CIFS_UPCALL is not set +CONFIG_CIFS_XATTR=y +CONFIG_CIFS_POSIX=y +# CONFIG_CIFS_DEBUG is not set +# CONFIG_CIFS_DFS_UPCALL is not set +# CONFIG_CIFS_SWN_UPCALL is not set +# CONFIG_CIFS_FSCACHE is not set +# CONFIG_CIFS_ROOT is not set +# CONFIG_SMB_SERVER is not set +CONFIG_SMBFS=y +# CONFIG_CODA_FS is not set +# CONFIG_AFS_FS is not set +CONFIG_9P_FS=y +CONFIG_9P_FSCACHE=y +CONFIG_9P_FS_POSIX_ACL=y +CONFIG_9P_FS_SECURITY=y +CONFIG_NLS=y +CONFIG_NLS_DEFAULT="iso8859-1" +CONFIG_NLS_CODEPAGE_437=y +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# CONFIG_NLS_CODEPAGE_932 is not set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1250 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +CONFIG_NLS_ASCII=y +CONFIG_NLS_ISO8859_1=y +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_MAC_ROMAN is not set +# CONFIG_NLS_MAC_CELTIC is not set +# CONFIG_NLS_MAC_CENTEURO is not set +# CONFIG_NLS_MAC_CROATIAN is not set +# CONFIG_NLS_MAC_CYRILLIC is not set +# 
CONFIG_NLS_MAC_GAELIC is not set +# CONFIG_NLS_MAC_GREEK is not set +# CONFIG_NLS_MAC_ICELAND is not set +# CONFIG_NLS_MAC_INUIT is not set +# CONFIG_NLS_MAC_ROMANIAN is not set +# CONFIG_NLS_MAC_TURKISH is not set +CONFIG_NLS_UTF8=y +# CONFIG_UNICODE is not set +CONFIG_IO_WQ=y +# end of File systems + +# +# Security options +# +CONFIG_KEYS=y +# CONFIG_KEYS_REQUEST_CACHE is not set +# CONFIG_PERSISTENT_KEYRINGS is not set +# CONFIG_BIG_KEYS is not set +# CONFIG_TRUSTED_KEYS is not set +# CONFIG_ENCRYPTED_KEYS is not set +# CONFIG_KEY_DH_OPERATIONS is not set +CONFIG_SECURITY_DMESG_RESTRICT=y +CONFIG_SECURITY=y +# CONFIG_SECURITYFS is not set +# CONFIG_SECURITY_NETWORK is not set +CONFIG_SECURITY_PATH=y +# CONFIG_INTEL_TXT is not set +CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y +CONFIG_HARDENED_USERCOPY=y +CONFIG_FORTIFY_SOURCE=y +# CONFIG_STATIC_USERMODEHELPER is not set +# CONFIG_SECURITY_SMACK is not set +# CONFIG_SECURITY_TOMOYO is not set +# CONFIG_SECURITY_APPARMOR is not set +# CONFIG_SECURITY_LOADPIN is not set +# CONFIG_SECURITY_YAMA is not set +# CONFIG_SECURITY_SAFESETID is not set +# CONFIG_SECURITY_LOCKDOWN_LSM is not set +CONFIG_SECURITY_LANDLOCK=y +# CONFIG_INTEGRITY is not set +# CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT is not set +CONFIG_DEFAULT_SECURITY_DAC=y +CONFIG_LSM="landlock,lockdown,yama,loadpin,safesetid,integrity,bpf" + +# +# Kernel hardening options +# + +# +# Memory initialization +# +CONFIG_INIT_STACK_NONE=y +# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set +# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set +CONFIG_CC_HAS_ZERO_CALL_USED_REGS=y +CONFIG_ZERO_CALL_USED_REGS=y +# end of Memory initialization + +CONFIG_RANDSTRUCT_NONE=y +# end of Kernel hardening options +# end of Security options + +CONFIG_XOR_BLOCKS=y +CONFIG_ASYNC_CORE=y +CONFIG_ASYNC_MEMCPY=y +CONFIG_ASYNC_XOR=y +CONFIG_ASYNC_PQ=y +CONFIG_ASYNC_RAID6_RECOV=y +CONFIG_CRYPTO=y + +# +# Crypto core or helper +# +CONFIG_CRYPTO_ALGAPI=y +CONFIG_CRYPTO_ALGAPI2=y +CONFIG_CRYPTO_AEAD=y +CONFIG_CRYPTO_AEAD2=y +CONFIG_CRYPTO_SKCIPHER=y +CONFIG_CRYPTO_SKCIPHER2=y +CONFIG_CRYPTO_HASH=y +CONFIG_CRYPTO_HASH2=y +CONFIG_CRYPTO_RNG=y +CONFIG_CRYPTO_RNG2=y +CONFIG_CRYPTO_RNG_DEFAULT=y +CONFIG_CRYPTO_AKCIPHER2=y +CONFIG_CRYPTO_AKCIPHER=y +CONFIG_CRYPTO_KPP2=y +CONFIG_CRYPTO_ACOMP2=y +CONFIG_CRYPTO_MANAGER=y +CONFIG_CRYPTO_MANAGER2=y +# CONFIG_CRYPTO_USER is not set +CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y +CONFIG_CRYPTO_GF128MUL=y +CONFIG_CRYPTO_NULL=y +CONFIG_CRYPTO_NULL2=y +# CONFIG_CRYPTO_PCRYPT is not set +# CONFIG_CRYPTO_CRYPTD is not set +CONFIG_CRYPTO_AUTHENC=y +# CONFIG_CRYPTO_TEST is not set +# end of Crypto core or helper + +# +# Public-key cryptography +# +CONFIG_CRYPTO_RSA=y +# CONFIG_CRYPTO_DH is not set +# CONFIG_CRYPTO_ECDH is not set +# CONFIG_CRYPTO_ECDSA is not set +# CONFIG_CRYPTO_ECRDSA is not set +# CONFIG_CRYPTO_SM2 is not set +# CONFIG_CRYPTO_CURVE25519 is not set +# end of Public-key cryptography + +# +# Block ciphers +# +CONFIG_CRYPTO_AES=y +# CONFIG_CRYPTO_AES_TI is not set +# CONFIG_CRYPTO_ANUBIS is not set +# CONFIG_CRYPTO_ARIA is not set +# CONFIG_CRYPTO_BLOWFISH is not set +# CONFIG_CRYPTO_CAMELLIA is not set +# CONFIG_CRYPTO_CAST5 is not set +# CONFIG_CRYPTO_CAST6 is not set +CONFIG_CRYPTO_DES=y +# CONFIG_CRYPTO_FCRYPT is not set +# CONFIG_CRYPTO_KHAZAD is not set +# CONFIG_CRYPTO_SEED is not set +# CONFIG_CRYPTO_SERPENT is not set +# CONFIG_CRYPTO_SM4_GENERIC is not set +# CONFIG_CRYPTO_TEA is not set +# CONFIG_CRYPTO_TWOFISH is not set +# end of Block ciphers + +# +# Length-preserving ciphers and 
modes +# +# CONFIG_CRYPTO_ADIANTUM is not set +CONFIG_CRYPTO_ARC4=y +# CONFIG_CRYPTO_CHACHA20 is not set +CONFIG_CRYPTO_CBC=y +# CONFIG_CRYPTO_CFB is not set +CONFIG_CRYPTO_CTR=y +CONFIG_CRYPTO_CTS=y +CONFIG_CRYPTO_ECB=y +# CONFIG_CRYPTO_HCTR2 is not set +# CONFIG_CRYPTO_KEYWRAP is not set +# CONFIG_CRYPTO_LRW is not set +# CONFIG_CRYPTO_OFB is not set +# CONFIG_CRYPTO_PCBC is not set +CONFIG_CRYPTO_XTS=y +# end of Length-preserving ciphers and modes + +# +# AEAD (authenticated encryption with associated data) ciphers +# +# CONFIG_CRYPTO_AEGIS128 is not set +# CONFIG_CRYPTO_CHACHA20POLY1305 is not set +CONFIG_CRYPTO_CCM=y +CONFIG_CRYPTO_GCM=y +CONFIG_CRYPTO_SEQIV=y +CONFIG_CRYPTO_ECHAINIV=y +CONFIG_CRYPTO_ESSIV=y +# end of AEAD (authenticated encryption with associated data) ciphers + +# +# Hashes, digests, and MACs +# +CONFIG_CRYPTO_BLAKE2B=y +CONFIG_CRYPTO_CMAC=y +CONFIG_CRYPTO_GHASH=y +CONFIG_CRYPTO_HMAC=y +CONFIG_CRYPTO_MD4=y +CONFIG_CRYPTO_MD5=y +# CONFIG_CRYPTO_MICHAEL_MIC is not set +# CONFIG_CRYPTO_POLY1305 is not set +# CONFIG_CRYPTO_RMD160 is not set +CONFIG_CRYPTO_SHA1=y +CONFIG_CRYPTO_SHA256=y +CONFIG_CRYPTO_SHA512=y +# CONFIG_CRYPTO_SHA3 is not set +# CONFIG_CRYPTO_SM3_GENERIC is not set +# CONFIG_CRYPTO_STREEBOG is not set +# CONFIG_CRYPTO_VMAC is not set +# CONFIG_CRYPTO_WP512 is not set +# CONFIG_CRYPTO_XCBC is not set +CONFIG_CRYPTO_XXHASH=y +# end of Hashes, digests, and MACs + +# +# CRCs (cyclic redundancy checks) +# +CONFIG_CRYPTO_CRC32C=y +# CONFIG_CRYPTO_CRC32 is not set +# CONFIG_CRYPTO_CRCT10DIF is not set +# end of CRCs (cyclic redundancy checks) + +# +# Compression +# +# CONFIG_CRYPTO_DEFLATE is not set +# CONFIG_CRYPTO_LZO is not set +# CONFIG_CRYPTO_842 is not set +# CONFIG_CRYPTO_LZ4 is not set +# CONFIG_CRYPTO_LZ4HC is not set +# CONFIG_CRYPTO_ZSTD is not set +# end of Compression + +# +# Random number generation +# +# CONFIG_CRYPTO_ANSI_CPRNG is not set +CONFIG_CRYPTO_DRBG_MENU=y +CONFIG_CRYPTO_DRBG_HMAC=y +# CONFIG_CRYPTO_DRBG_HASH is not set +# CONFIG_CRYPTO_DRBG_CTR is not set +CONFIG_CRYPTO_DRBG=y +CONFIG_CRYPTO_JITTERENTROPY=y +# end of Random number generation + +# +# Userspace interface +# +CONFIG_CRYPTO_USER_API=y +CONFIG_CRYPTO_USER_API_HASH=y +CONFIG_CRYPTO_USER_API_SKCIPHER=y +# CONFIG_CRYPTO_USER_API_RNG is not set +# CONFIG_CRYPTO_USER_API_AEAD is not set +CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE=y +# end of Userspace interface + +CONFIG_CRYPTO_HASH_INFO=y + +# +# Accelerated Cryptographic Algorithms for CPU (x86) +# +CONFIG_CRYPTO_CURVE25519_X86=y +# CONFIG_CRYPTO_AES_NI_INTEL is not set +# CONFIG_CRYPTO_BLOWFISH_X86_64 is not set +# CONFIG_CRYPTO_CAMELLIA_X86_64 is not set +# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64 is not set +# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64 is not set +# CONFIG_CRYPTO_CAST5_AVX_X86_64 is not set +# CONFIG_CRYPTO_CAST6_AVX_X86_64 is not set +# CONFIG_CRYPTO_DES3_EDE_X86_64 is not set +# CONFIG_CRYPTO_SERPENT_SSE2_X86_64 is not set +# CONFIG_CRYPTO_SERPENT_AVX_X86_64 is not set +# CONFIG_CRYPTO_SERPENT_AVX2_X86_64 is not set +# CONFIG_CRYPTO_SM4_AESNI_AVX_X86_64 is not set +# CONFIG_CRYPTO_SM4_AESNI_AVX2_X86_64 is not set +# CONFIG_CRYPTO_TWOFISH_X86_64 is not set +# CONFIG_CRYPTO_TWOFISH_X86_64_3WAY is not set +# CONFIG_CRYPTO_TWOFISH_AVX_X86_64 is not set +# CONFIG_CRYPTO_ARIA_AESNI_AVX_X86_64 is not set +CONFIG_CRYPTO_CHACHA20_X86_64=y +# CONFIG_CRYPTO_AEGIS128_AESNI_SSE2 is not set +# CONFIG_CRYPTO_NHPOLY1305_SSE2 is not set +# CONFIG_CRYPTO_NHPOLY1305_AVX2 is not set +CONFIG_CRYPTO_BLAKE2S_X86=y +# 
CONFIG_CRYPTO_POLYVAL_CLMUL_NI is not set +CONFIG_CRYPTO_POLY1305_X86_64=y +# CONFIG_CRYPTO_SHA1_SSSE3 is not set +# CONFIG_CRYPTO_SHA256_SSSE3 is not set +# CONFIG_CRYPTO_SHA512_SSSE3 is not set +# CONFIG_CRYPTO_SM3_AVX_X86_64 is not set +# CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL is not set +# CONFIG_CRYPTO_CRC32C_INTEL is not set +# CONFIG_CRYPTO_CRC32_PCLMUL is not set +# end of Accelerated Cryptographic Algorithms for CPU (x86) + +CONFIG_CRYPTO_HW=y +# CONFIG_CRYPTO_DEV_PADLOCK is not set +# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set +# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set +# CONFIG_CRYPTO_DEV_CCP is not set +# CONFIG_CRYPTO_DEV_QAT_DH895xCC is not set +# CONFIG_CRYPTO_DEV_QAT_C3XXX is not set +# CONFIG_CRYPTO_DEV_QAT_C62X is not set +# CONFIG_CRYPTO_DEV_QAT_4XXX is not set +# CONFIG_CRYPTO_DEV_QAT_DH895xCCVF is not set +# CONFIG_CRYPTO_DEV_QAT_C3XXXVF is not set +# CONFIG_CRYPTO_DEV_QAT_C62XVF is not set +# CONFIG_CRYPTO_DEV_NITROX_CNN55XX is not set +# CONFIG_CRYPTO_DEV_VIRTIO is not set +# CONFIG_CRYPTO_DEV_SAFEXCEL is not set +# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set +CONFIG_ASYMMETRIC_KEY_TYPE=y +CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y +CONFIG_X509_CERTIFICATE_PARSER=y +# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set +CONFIG_PKCS7_MESSAGE_PARSER=y +# CONFIG_PKCS7_TEST_KEY is not set +# CONFIG_SIGNED_PE_FILE_VERIFICATION is not set +CONFIG_FIPS_SIGNATURE_SELFTEST=y + +# +# Certificates for signature checking +# +CONFIG_SYSTEM_TRUSTED_KEYRING=y +CONFIG_SYSTEM_TRUSTED_KEYS="" +# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set +# CONFIG_SECONDARY_TRUSTED_KEYRING is not set +# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set +# end of Certificates for signature checking + +CONFIG_BINARY_PRINTF=y + +# +# Library routines +# +CONFIG_RAID6_PQ=y +# CONFIG_RAID6_PQ_BENCHMARK is not set +# CONFIG_PACKING is not set +CONFIG_BITREVERSE=y +CONFIG_GENERIC_STRNCPY_FROM_USER=y +CONFIG_GENERIC_STRNLEN_USER=y +CONFIG_GENERIC_NET_UTILS=y +# CONFIG_CORDIC is not set +# CONFIG_PRIME_NUMBERS is not set +CONFIG_RATIONAL=y +CONFIG_GENERIC_PCI_IOMAP=y +CONFIG_GENERIC_IOMAP=y +CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y +CONFIG_ARCH_HAS_FAST_MULTIPLIER=y +CONFIG_ARCH_USE_SYM_ANNOTATIONS=y + +# +# Crypto library routines +# +CONFIG_CRYPTO_LIB_UTILS=y +CONFIG_CRYPTO_LIB_AES=y +CONFIG_CRYPTO_LIB_ARC4=y +CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S=y +CONFIG_CRYPTO_LIB_BLAKE2S_GENERIC=y +CONFIG_CRYPTO_ARCH_HAVE_LIB_CHACHA=y +CONFIG_CRYPTO_LIB_CHACHA_GENERIC=y +CONFIG_CRYPTO_LIB_CHACHA=y +CONFIG_CRYPTO_ARCH_HAVE_LIB_CURVE25519=y +CONFIG_CRYPTO_LIB_CURVE25519_GENERIC=y +CONFIG_CRYPTO_LIB_CURVE25519=y +CONFIG_CRYPTO_LIB_DES=y +CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11 +CONFIG_CRYPTO_ARCH_HAVE_LIB_POLY1305=y +CONFIG_CRYPTO_LIB_POLY1305_GENERIC=y +CONFIG_CRYPTO_LIB_POLY1305=y +CONFIG_CRYPTO_LIB_CHACHA20POLY1305=y +CONFIG_CRYPTO_LIB_SHA1=y +CONFIG_CRYPTO_LIB_SHA256=y +# end of Crypto library routines + +CONFIG_CRC_CCITT=y +CONFIG_CRC16=y +# CONFIG_CRC_T10DIF is not set +# CONFIG_CRC64_ROCKSOFT is not set +CONFIG_CRC_ITU_T=y +CONFIG_CRC32=y +# CONFIG_CRC32_SELFTEST is not set +CONFIG_CRC32_SLICEBY8=y +# CONFIG_CRC32_SLICEBY4 is not set +# CONFIG_CRC32_SARWATE is not set +# CONFIG_CRC32_BIT is not set +# CONFIG_CRC64 is not set +# CONFIG_CRC4 is not set +# CONFIG_CRC7 is not set +CONFIG_LIBCRC32C=y +# CONFIG_CRC8 is not set +CONFIG_XXHASH=y +# CONFIG_RANDOM32_SELFTEST is not set +CONFIG_ZLIB_INFLATE=y +CONFIG_ZLIB_DEFLATE=y +CONFIG_LZO_COMPRESS=y +CONFIG_LZO_DECOMPRESS=y +CONFIG_LZ4_DECOMPRESS=y +CONFIG_ZSTD_COMMON=y +CONFIG_ZSTD_COMPRESS=y 
+CONFIG_ZSTD_DECOMPRESS=y +CONFIG_XZ_DEC=y +# CONFIG_XZ_DEC_X86 is not set +# CONFIG_XZ_DEC_POWERPC is not set +# CONFIG_XZ_DEC_IA64 is not set +# CONFIG_XZ_DEC_ARM is not set +# CONFIG_XZ_DEC_ARMTHUMB is not set +# CONFIG_XZ_DEC_SPARC is not set +# CONFIG_XZ_DEC_MICROLZMA is not set +# CONFIG_XZ_DEC_TEST is not set +CONFIG_DECOMPRESS_GZIP=y +CONFIG_DECOMPRESS_ZSTD=y +CONFIG_GENERIC_ALLOCATOR=y +CONFIG_TEXTSEARCH=y +CONFIG_TEXTSEARCH_KMP=y +CONFIG_INTERVAL_TREE=y +CONFIG_XARRAY_MULTI=y +CONFIG_ASSOCIATIVE_ARRAY=y +CONFIG_HAS_IOMEM=y +CONFIG_HAS_IOPORT_MAP=y +CONFIG_HAS_DMA=y +CONFIG_DMA_OPS=y +CONFIG_NEED_SG_DMA_LENGTH=y +CONFIG_NEED_DMA_MAP_STATE=y +CONFIG_ARCH_DMA_ADDR_T_64BIT=y +CONFIG_ARCH_HAS_FORCE_DMA_UNENCRYPTED=y +CONFIG_SWIOTLB=y +# CONFIG_DMA_API_DEBUG is not set +# CONFIG_DMA_MAP_BENCHMARK is not set +CONFIG_SGL_ALLOC=y +# CONFIG_FORCE_NR_CPUS is not set +CONFIG_CPU_RMAP=y +CONFIG_DQL=y +CONFIG_GLOB=y +# CONFIG_GLOB_SELFTEST is not set +CONFIG_NLATTR=y +CONFIG_CLZ_TAB=y +# CONFIG_IRQ_POLL is not set +CONFIG_MPILIB=y +CONFIG_OID_REGISTRY=y +CONFIG_UCS2_STRING=y +CONFIG_HAVE_GENERIC_VDSO=y +CONFIG_GENERIC_GETTIMEOFDAY=y +CONFIG_GENERIC_VDSO_TIME_NS=y +CONFIG_FONT_SUPPORT=y +CONFIG_FONT_8x16=y +CONFIG_FONT_AUTOSELECT=y +CONFIG_SG_POOL=y +CONFIG_ARCH_HAS_PMEM_API=y +CONFIG_MEMREGION=y +CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y +CONFIG_ARCH_HAS_COPY_MC=y +CONFIG_ARCH_STACKWALK=y +CONFIG_SBITMAP=y +# end of Library routines + +# +# Kernel hacking +# + +# +# printk and dmesg options +# +CONFIG_PRINTK_TIME=y +# CONFIG_PRINTK_CALLER is not set +# CONFIG_STACKTRACE_BUILD_ID is not set +CONFIG_CONSOLE_LOGLEVEL_DEFAULT=2 +CONFIG_CONSOLE_LOGLEVEL_QUIET=4 +CONFIG_MESSAGE_LOGLEVEL_DEFAULT=1 +# CONFIG_BOOT_PRINTK_DELAY is not set +# CONFIG_DYNAMIC_DEBUG is not set +# CONFIG_DYNAMIC_DEBUG_CORE is not set +# CONFIG_SYMBOLIC_ERRNAME is not set +CONFIG_DEBUG_BUGVERBOSE=y +# end of printk and dmesg options + +# CONFIG_DEBUG_KERNEL is not set +# CONFIG_DEBUG_MISC is not set + +# +# Compile-time checks and compiler options +# +# CONFIG_DEBUG_INFO is not set +CONFIG_AS_HAS_NON_CONST_LEB128=y +# CONFIG_DEBUG_INFO_NONE is not set +CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y +# CONFIG_DEBUG_INFO_DWARF4 is not set +# CONFIG_DEBUG_INFO_DWARF5 is not set +# CONFIG_DEBUG_INFO_REDUCED is not set +# CONFIG_DEBUG_INFO_COMPRESSED is not set +# CONFIG_DEBUG_INFO_SPLIT is not set +CONFIG_DEBUG_INFO_BTF=y +CONFIG_PAHOLE_HAS_SPLIT_BTF=y +CONFIG_DEBUG_INFO_BTF_MODULES=y +# CONFIG_MODULE_ALLOW_BTF_MISMATCH is not set +# CONFIG_GDB_SCRIPTS is not set +CONFIG_FRAME_WARN=1024 +# CONFIG_STRIP_ASM_SYMS is not set +# CONFIG_READABLE_ASM is not set +# CONFIG_HEADERS_INSTALL is not set +# CONFIG_DEBUG_SECTION_MISMATCH is not set +# CONFIG_SECTION_MISMATCH_WARN_ONLY is not set +# CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B is not set +CONFIG_OBJTOOL=y +# CONFIG_VMLINUX_MAP is not set +# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set +# end of Compile-time checks and compiler options + +# +# Generic Kernel Debugging Instruments +# +# CONFIG_MAGIC_SYSRQ is not set +CONFIG_DEBUG_FS=y +CONFIG_DEBUG_FS_ALLOW_ALL=y +# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set +# CONFIG_DEBUG_FS_ALLOW_NONE is not set +CONFIG_HAVE_ARCH_KGDB=y +# CONFIG_KGDB is not set +CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y +# CONFIG_UBSAN is not set +CONFIG_HAVE_ARCH_KCSAN=y +CONFIG_HAVE_KCSAN_COMPILER=y +# CONFIG_KCSAN is not set +# end of Generic Kernel Debugging Instruments + +# +# Networking Debugging +# +# CONFIG_NET_DEV_REFCNT_TRACKER is not set +# CONFIG_NET_NS_REFCNT_TRACKER 
is not set +# CONFIG_DEBUG_NET is not set +# end of Networking Debugging + +# +# Memory Debugging +# +CONFIG_PAGE_EXTENSION=y +# CONFIG_DEBUG_PAGEALLOC is not set +# CONFIG_SLUB_DEBUG is not set +# CONFIG_PAGE_OWNER is not set +# CONFIG_PAGE_POISONING is not set +# CONFIG_DEBUG_PAGE_REF is not set +# CONFIG_DEBUG_RODATA_TEST is not set +CONFIG_ARCH_HAS_DEBUG_WX=y +# CONFIG_DEBUG_WX is not set +CONFIG_GENERIC_PTDUMP=y +# CONFIG_PTDUMP_DEBUGFS is not set +# CONFIG_DEBUG_OBJECTS is not set +# CONFIG_SHRINKER_DEBUG is not set +CONFIG_HAVE_DEBUG_KMEMLEAK=y +# CONFIG_DEBUG_KMEMLEAK is not set +# CONFIG_DEBUG_STACK_USAGE is not set +CONFIG_SCHED_STACK_END_CHECK=y +CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y +# CONFIG_DEBUG_VM is not set +# CONFIG_DEBUG_VM_PGTABLE is not set +CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y +# CONFIG_DEBUG_VIRTUAL is not set +CONFIG_DEBUG_MEMORY_INIT=y +# CONFIG_DEBUG_PER_CPU_MAPS is not set +CONFIG_ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP=y +# CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP is not set +CONFIG_HAVE_ARCH_KASAN=y +CONFIG_HAVE_ARCH_KASAN_VMALLOC=y +CONFIG_CC_HAS_KASAN_GENERIC=y +CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y +# CONFIG_KASAN is not set +CONFIG_HAVE_ARCH_KFENCE=y +# CONFIG_KFENCE is not set +CONFIG_HAVE_ARCH_KMSAN=y +# end of Memory Debugging + +# CONFIG_DEBUG_SHIRQ is not set + +# +# Debug Oops, Lockups and Hangs +# +CONFIG_PANIC_ON_OOPS=y +CONFIG_PANIC_ON_OOPS_VALUE=1 +CONFIG_PANIC_TIMEOUT=0 +# CONFIG_SOFTLOCKUP_DETECTOR is not set +CONFIG_HARDLOCKUP_CHECK_TIMESTAMP=y +# CONFIG_HARDLOCKUP_DETECTOR is not set +# CONFIG_DETECT_HUNG_TASK is not set +# CONFIG_WQ_WATCHDOG is not set +# CONFIG_TEST_LOCKUP is not set +# end of Debug Oops, Lockups and Hangs + +# +# Scheduler Debugging +# +CONFIG_SCHED_DEBUG=y +CONFIG_SCHED_INFO=y +CONFIG_SCHEDSTATS=y +# end of Scheduler Debugging + +# CONFIG_DEBUG_TIMEKEEPING is not set + +# +# Lock Debugging (spinlocks, mutexes, etc...) +# +CONFIG_LOCK_DEBUGGING_SUPPORT=y +# CONFIG_PROVE_LOCKING is not set +# CONFIG_LOCK_STAT is not set +# CONFIG_DEBUG_RT_MUTEXES is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_DEBUG_MUTEXES is not set +# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set +# CONFIG_DEBUG_RWSEMS is not set +# CONFIG_DEBUG_LOCK_ALLOC is not set +# CONFIG_DEBUG_ATOMIC_SLEEP is not set +# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set +# CONFIG_LOCK_TORTURE_TEST is not set +# CONFIG_WW_MUTEX_SELFTEST is not set +# CONFIG_SCF_TORTURE_TEST is not set +# CONFIG_CSD_LOCK_WAIT_DEBUG is not set +# end of Lock Debugging (spinlocks, mutexes, etc...) 
+ +# CONFIG_DEBUG_IRQFLAGS is not set +CONFIG_STACKTRACE=y +# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set +# CONFIG_DEBUG_KOBJECT is not set + +# +# Debug kernel data structures +# +CONFIG_DEBUG_LIST=y +# CONFIG_DEBUG_PLIST is not set +CONFIG_DEBUG_SG=y +CONFIG_DEBUG_NOTIFIERS=y +# CONFIG_BUG_ON_DATA_CORRUPTION is not set +# CONFIG_DEBUG_MAPLE_TREE is not set +# end of Debug kernel data structures + +CONFIG_DEBUG_CREDENTIALS=y + +# +# RCU Debugging +# +# CONFIG_RCU_SCALE_TEST is not set +# CONFIG_RCU_TORTURE_TEST is not set +# CONFIG_RCU_REF_SCALE_TEST is not set +CONFIG_RCU_CPU_STALL_TIMEOUT=60 +CONFIG_RCU_EXP_CPU_STALL_TIMEOUT=0 +# CONFIG_RCU_TRACE is not set +# CONFIG_RCU_EQS_DEBUG is not set +# end of RCU Debugging + +# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set +# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set +# CONFIG_LATENCYTOP is not set +CONFIG_USER_STACKTRACE_SUPPORT=y +CONFIG_NOP_TRACER=y +CONFIG_HAVE_RETHOOK=y +CONFIG_RETHOOK=y +CONFIG_HAVE_FUNCTION_TRACER=y +CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y +CONFIG_HAVE_DYNAMIC_FTRACE=y +CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y +CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y +CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y +CONFIG_HAVE_DYNAMIC_FTRACE_NO_PATCHABLE=y +CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y +CONFIG_HAVE_SYSCALL_TRACEPOINTS=y +CONFIG_HAVE_FENTRY=y +CONFIG_HAVE_OBJTOOL_MCOUNT=y +CONFIG_HAVE_C_RECORDMCOUNT=y +CONFIG_HAVE_BUILDTIME_MCOUNT_SORT=y +CONFIG_BUILDTIME_MCOUNT_SORT=y +CONFIG_TRACER_MAX_TRACE=y +CONFIG_TRACE_CLOCK=y +CONFIG_RING_BUFFER=y +CONFIG_EVENT_TRACING=y +CONFIG_CONTEXT_SWITCH_TRACER=y +CONFIG_RING_BUFFER_ALLOW_SWAP=y +CONFIG_TRACING=y +CONFIG_GENERIC_TRACER=y +CONFIG_TRACING_SUPPORT=y +CONFIG_FTRACE=y +# CONFIG_BOOTTIME_TRACING is not set +CONFIG_FUNCTION_TRACER=y +CONFIG_FUNCTION_GRAPH_TRACER=y +CONFIG_DYNAMIC_FTRACE=y +CONFIG_DYNAMIC_FTRACE_WITH_REGS=y +CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y +CONFIG_DYNAMIC_FTRACE_WITH_ARGS=y +CONFIG_FPROBE=y +CONFIG_FUNCTION_PROFILER=y +CONFIG_STACK_TRACER=y +# CONFIG_IRQSOFF_TRACER is not set +CONFIG_SCHED_TRACER=y +CONFIG_HWLAT_TRACER=y +# CONFIG_OSNOISE_TRACER is not set +# CONFIG_TIMERLAT_TRACER is not set +# CONFIG_MMIOTRACE is not set +CONFIG_FTRACE_SYSCALLS=y +CONFIG_TRACER_SNAPSHOT=y +CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y +CONFIG_BRANCH_PROFILE_NONE=y +# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set +# CONFIG_BLK_DEV_IO_TRACE is not set +CONFIG_KPROBE_EVENTS=y +# CONFIG_KPROBE_EVENTS_ON_NOTRACE is not set +CONFIG_UPROBE_EVENTS=y +CONFIG_BPF_EVENTS=y +CONFIG_DYNAMIC_EVENTS=y +CONFIG_PROBE_EVENTS=y +CONFIG_FTRACE_MCOUNT_RECORD=y +CONFIG_FTRACE_MCOUNT_USE_CC=y +# CONFIG_SYNTH_EVENTS is not set +# CONFIG_HIST_TRIGGERS is not set +# CONFIG_TRACE_EVENT_INJECT is not set +# CONFIG_TRACEPOINT_BENCHMARK is not set +# CONFIG_RING_BUFFER_BENCHMARK is not set +# CONFIG_TRACE_EVAL_MAP_FILE is not set +# CONFIG_FTRACE_RECORD_RECURSION is not set +# CONFIG_FTRACE_STARTUP_TEST is not set +# CONFIG_FTRACE_SORT_STARTUP_TEST is not set +# CONFIG_RING_BUFFER_STARTUP_TEST is not set +# CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS is not set +# CONFIG_PREEMPTIRQ_DELAY_TEST is not set +# CONFIG_KPROBE_EVENT_GEN_TEST is not set +# CONFIG_RV is not set +# CONFIG_PROVIDE_OHCI1394_DMA_INIT is not set +# CONFIG_SAMPLES is not set +CONFIG_HAVE_SAMPLE_FTRACE_DIRECT=y +CONFIG_HAVE_SAMPLE_FTRACE_DIRECT_MULTI=y +CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y +# CONFIG_STRICT_DEVMEM is not set + +# +# x86 Debugging +# +# CONFIG_X86_VERBOSE_BOOTUP is not set +CONFIG_EARLY_PRINTK=y +# CONFIG_EARLY_PRINTK_DBGP is not set +# 
CONFIG_EARLY_PRINTK_USB_XDBC is not set +# CONFIG_EFI_PGT_DUMP is not set +# CONFIG_DEBUG_TLBFLUSH is not set +CONFIG_HAVE_MMIOTRACE_SUPPORT=y +# CONFIG_X86_DECODER_SELFTEST is not set +CONFIG_IO_DELAY_0X80=y +# CONFIG_IO_DELAY_0XED is not set +# CONFIG_IO_DELAY_UDELAY is not set +# CONFIG_IO_DELAY_NONE is not set +# CONFIG_DEBUG_BOOT_PARAMS is not set +# CONFIG_CPA_DEBUG is not set +# CONFIG_DEBUG_ENTRY is not set +# CONFIG_DEBUG_NMI_SELFTEST is not set +# CONFIG_X86_DEBUG_FPU is not set +# CONFIG_PUNIT_ATOM_DEBUG is not set +CONFIG_UNWINDER_ORC=y +# CONFIG_UNWINDER_FRAME_POINTER is not set +# CONFIG_UNWINDER_GUESS is not set +# end of x86 Debugging + +# +# Kernel Testing and Coverage +# +# CONFIG_KUNIT is not set +# CONFIG_NOTIFIER_ERROR_INJECTION is not set +# CONFIG_FUNCTION_ERROR_INJECTION is not set +# CONFIG_FAULT_INJECTION is not set +CONFIG_ARCH_HAS_KCOV=y +CONFIG_CC_HAS_SANCOV_TRACE_PC=y +# CONFIG_KCOV is not set +# CONFIG_RUNTIME_TESTING_MENU is not set +CONFIG_ARCH_USE_MEMTEST=y +# CONFIG_MEMTEST is not set +# CONFIG_HYPERV_TESTING is not set +# end of Kernel Testing and Coverage + +# +# Rust hacking +# +# end of Rust hacking +# end of Kernel hacking diff --git a/config/kernel/linux-wsl2-x86-edge.config b/config/kernel/linux-wsl2-x86-edge.config new file mode 100644 index 000000000000..540b9a9fcb69 --- /dev/null +++ b/config/kernel/linux-wsl2-x86-edge.config @@ -0,0 +1,4476 @@ +# +# Automatically generated file; DO NOT EDIT. +# Linux/x86_64 6.6.2 Kernel Configuration +# +CONFIG_CC_VERSION_TEXT="gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0" +CONFIG_CC_IS_GCC=y +CONFIG_GCC_VERSION=110400 +CONFIG_CLANG_VERSION=0 +CONFIG_AS_IS_GNU=y +CONFIG_AS_VERSION=23800 +CONFIG_LD_IS_BFD=y +CONFIG_LD_VERSION=23800 +CONFIG_LLD_VERSION=0 +CONFIG_CC_CAN_LINK=y +CONFIG_CC_CAN_LINK_STATIC=y +CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y +CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT=y +CONFIG_TOOLS_SUPPORT_RELR=y +CONFIG_CC_HAS_ASM_INLINE=y +CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y +CONFIG_PAHOLE_VERSION=125 +CONFIG_IRQ_WORK=y +CONFIG_BUILDTIME_TABLE_SORT=y +CONFIG_THREAD_INFO_IN_TASK=y + +# +# General setup +# +CONFIG_INIT_ENV_ARG_LIMIT=32 +# CONFIG_COMPILE_TEST is not set +# CONFIG_WERROR is not set +CONFIG_LOCALVERSION="" +# CONFIG_LOCALVERSION_AUTO is not set +CONFIG_BUILD_SALT="" +CONFIG_HAVE_KERNEL_GZIP=y +CONFIG_HAVE_KERNEL_BZIP2=y +CONFIG_HAVE_KERNEL_LZMA=y +CONFIG_HAVE_KERNEL_XZ=y +CONFIG_HAVE_KERNEL_LZO=y +CONFIG_HAVE_KERNEL_LZ4=y +CONFIG_HAVE_KERNEL_ZSTD=y +CONFIG_KERNEL_GZIP=y +# CONFIG_KERNEL_BZIP2 is not set +# CONFIG_KERNEL_LZMA is not set +# CONFIG_KERNEL_XZ is not set +# CONFIG_KERNEL_LZO is not set +# CONFIG_KERNEL_LZ4 is not set +# CONFIG_KERNEL_ZSTD is not set +CONFIG_DEFAULT_INIT="" +CONFIG_DEFAULT_HOSTNAME="(none)" +CONFIG_SYSVIPC=y +CONFIG_SYSVIPC_SYSCTL=y +CONFIG_SYSVIPC_COMPAT=y +CONFIG_POSIX_MQUEUE=y +CONFIG_POSIX_MQUEUE_SYSCTL=y +# CONFIG_WATCH_QUEUE is not set +CONFIG_CROSS_MEMORY_ATTACH=y +# CONFIG_USELIB is not set +CONFIG_AUDIT=y +CONFIG_HAVE_ARCH_AUDITSYSCALL=y +CONFIG_AUDITSYSCALL=y + +# +# IRQ subsystem +# +CONFIG_GENERIC_IRQ_PROBE=y +CONFIG_GENERIC_IRQ_SHOW=y +CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y +CONFIG_GENERIC_PENDING_IRQ=y +CONFIG_GENERIC_IRQ_MIGRATION=y +CONFIG_HARDIRQS_SW_RESEND=y +CONFIG_IRQ_DOMAIN=y +CONFIG_IRQ_DOMAIN_HIERARCHY=y +CONFIG_GENERIC_MSI_IRQ=y +CONFIG_IRQ_MSI_IOMMU=y +CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR=y +CONFIG_GENERIC_IRQ_RESERVATION_MODE=y +CONFIG_IRQ_FORCED_THREADING=y +CONFIG_SPARSE_IRQ=y +# CONFIG_GENERIC_IRQ_DEBUGFS is not set +# end of IRQ subsystem + 
+CONFIG_CLOCKSOURCE_WATCHDOG=y +CONFIG_ARCH_CLOCKSOURCE_INIT=y +CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE=y +CONFIG_GENERIC_TIME_VSYSCALL=y +CONFIG_GENERIC_CLOCKEVENTS=y +CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y +CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y +CONFIG_GENERIC_CMOS_UPDATE=y +CONFIG_HAVE_POSIX_CPU_TIMERS_TASK_WORK=y +CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y +CONFIG_CONTEXT_TRACKING=y +CONFIG_CONTEXT_TRACKING_IDLE=y + +# +# Timers subsystem +# +CONFIG_TICK_ONESHOT=y +CONFIG_NO_HZ_COMMON=y +# CONFIG_HZ_PERIODIC is not set +CONFIG_NO_HZ_IDLE=y +# CONFIG_NO_HZ_FULL is not set +# CONFIG_NO_HZ is not set +CONFIG_HIGH_RES_TIMERS=y +CONFIG_CLOCKSOURCE_WATCHDOG_MAX_SKEW_US=100 +# end of Timers subsystem + +CONFIG_BPF=y +CONFIG_HAVE_EBPF_JIT=y +CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y + +# +# BPF subsystem +# +CONFIG_BPF_SYSCALL=y +CONFIG_BPF_JIT=y +CONFIG_BPF_JIT_ALWAYS_ON=y +CONFIG_BPF_JIT_DEFAULT_ON=y +CONFIG_BPF_UNPRIV_DEFAULT_OFF=y +# CONFIG_BPF_PRELOAD is not set +# CONFIG_BPF_LSM is not set +# end of BPF subsystem + +CONFIG_PREEMPT_NONE_BUILD=y +CONFIG_PREEMPT_NONE=y +# CONFIG_PREEMPT_VOLUNTARY is not set +# CONFIG_PREEMPT is not set +# CONFIG_PREEMPT_DYNAMIC is not set +# CONFIG_SCHED_CORE is not set + +# +# CPU/Task time and stats accounting +# +CONFIG_TICK_CPU_ACCOUNTING=y +# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set +# CONFIG_IRQ_TIME_ACCOUNTING is not set +CONFIG_BSD_PROCESS_ACCT=y +# CONFIG_BSD_PROCESS_ACCT_V3 is not set +CONFIG_TASKSTATS=y +CONFIG_TASK_DELAY_ACCT=y +CONFIG_TASK_XACCT=y +CONFIG_TASK_IO_ACCOUNTING=y +# CONFIG_PSI is not set +# end of CPU/Task time and stats accounting + +# CONFIG_CPU_ISOLATION is not set + +# +# RCU Subsystem +# +CONFIG_TREE_RCU=y +# CONFIG_RCU_EXPERT is not set +CONFIG_TREE_SRCU=y +CONFIG_TASKS_RCU_GENERIC=y +CONFIG_TASKS_RUDE_RCU=y +CONFIG_TASKS_TRACE_RCU=y +CONFIG_RCU_STALL_COMMON=y +CONFIG_RCU_NEED_SEGCBLIST=y +# end of RCU Subsystem + +CONFIG_IKCONFIG=y +CONFIG_IKCONFIG_PROC=y +# CONFIG_IKHEADERS is not set +CONFIG_LOG_BUF_SHIFT=17 +CONFIG_LOG_CPU_MAX_BUF_SHIFT=12 +# CONFIG_PRINTK_INDEX is not set +CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y + +# +# Scheduler features +# +# CONFIG_UCLAMP_TASK is not set +# end of Scheduler features + +CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y +CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y +CONFIG_CC_HAS_INT128=y +CONFIG_CC_IMPLICIT_FALLTHROUGH="-Wimplicit-fallthrough=5" +CONFIG_GCC11_NO_ARRAY_BOUNDS=y +CONFIG_CC_NO_ARRAY_BOUNDS=y +CONFIG_ARCH_SUPPORTS_INT128=y +CONFIG_CGROUPS=y +CONFIG_PAGE_COUNTER=y +# CONFIG_CGROUP_FAVOR_DYNMODS is not set +CONFIG_MEMCG=y +CONFIG_MEMCG_KMEM=y +CONFIG_BLK_CGROUP=y +CONFIG_CGROUP_WRITEBACK=y +CONFIG_CGROUP_SCHED=y +CONFIG_FAIR_GROUP_SCHED=y +CONFIG_CFS_BANDWIDTH=y +CONFIG_RT_GROUP_SCHED=y +CONFIG_SCHED_MM_CID=y +CONFIG_CGROUP_PIDS=y +CONFIG_CGROUP_RDMA=y +CONFIG_CGROUP_FREEZER=y +CONFIG_CGROUP_HUGETLB=y +CONFIG_CPUSETS=y +CONFIG_PROC_PID_CPUSET=y +CONFIG_CGROUP_DEVICE=y +CONFIG_CGROUP_CPUACCT=y +CONFIG_CGROUP_PERF=y +CONFIG_CGROUP_BPF=y +CONFIG_CGROUP_MISC=y +# CONFIG_CGROUP_DEBUG is not set +CONFIG_SOCK_CGROUP_DATA=y +CONFIG_NAMESPACES=y +CONFIG_UTS_NS=y +CONFIG_TIME_NS=y +CONFIG_IPC_NS=y +CONFIG_USER_NS=y +CONFIG_PID_NS=y +CONFIG_NET_NS=y +CONFIG_CHECKPOINT_RESTORE=y +# CONFIG_SCHED_AUTOGROUP is not set +# CONFIG_RELAY is not set +CONFIG_BLK_DEV_INITRD=y +CONFIG_INITRAMFS_SOURCE="" +CONFIG_RD_GZIP=y +# CONFIG_RD_BZIP2 is not set +# CONFIG_RD_LZMA is not set +# CONFIG_RD_XZ is not set +# CONFIG_RD_LZO is not set +# CONFIG_RD_LZ4 is not set +CONFIG_RD_ZSTD=y +# CONFIG_BOOT_CONFIG is not set +# 
CONFIG_INITRAMFS_PRESERVE_MTIME is not set +CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y +# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set +CONFIG_LD_ORPHAN_WARN=y +CONFIG_LD_ORPHAN_WARN_LEVEL="warn" +CONFIG_SYSCTL=y +CONFIG_HAVE_UID16=y +CONFIG_SYSCTL_EXCEPTION_TRACE=y +CONFIG_HAVE_PCSPKR_PLATFORM=y +CONFIG_EXPERT=y +# CONFIG_UID16 is not set +CONFIG_MULTIUSER=y +CONFIG_SGETMASK_SYSCALL=y +CONFIG_SYSFS_SYSCALL=y +CONFIG_FHANDLE=y +CONFIG_POSIX_TIMERS=y +CONFIG_PRINTK=y +CONFIG_BUG=y +CONFIG_ELF_CORE=y +CONFIG_PCSPKR_PLATFORM=y +CONFIG_BASE_FULL=y +CONFIG_FUTEX=y +CONFIG_FUTEX_PI=y +CONFIG_EPOLL=y +CONFIG_SIGNALFD=y +CONFIG_TIMERFD=y +CONFIG_EVENTFD=y +CONFIG_SHMEM=y +CONFIG_AIO=y +CONFIG_IO_URING=y +CONFIG_ADVISE_SYSCALLS=y +CONFIG_MEMBARRIER=y +CONFIG_KALLSYMS=y +# CONFIG_KALLSYMS_SELFTEST is not set +# CONFIG_KALLSYMS_ALL is not set +CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y +CONFIG_KALLSYMS_BASE_RELATIVE=y +CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y +CONFIG_KCMP=y +CONFIG_RSEQ=y +CONFIG_CACHESTAT_SYSCALL=y +# CONFIG_DEBUG_RSEQ is not set +CONFIG_HAVE_PERF_EVENTS=y +CONFIG_GUEST_PERF_EVENTS=y +# CONFIG_PC104 is not set + +# +# Kernel Performance Events And Counters +# +CONFIG_PERF_EVENTS=y +# CONFIG_DEBUG_PERF_USE_VMALLOC is not set +# end of Kernel Performance Events And Counters + +CONFIG_SYSTEM_DATA_VERIFICATION=y +# CONFIG_PROFILING is not set +CONFIG_TRACEPOINTS=y + +# +# Kexec and crash features +# +CONFIG_CRASH_CORE=y +# CONFIG_KEXEC is not set +# CONFIG_KEXEC_FILE is not set +# CONFIG_CRASH_DUMP is not set +# end of Kexec and crash features +# end of General setup + +CONFIG_64BIT=y +CONFIG_X86_64=y +CONFIG_X86=y +CONFIG_INSTRUCTION_DECODER=y +CONFIG_OUTPUT_FORMAT="elf64-x86-64" +CONFIG_LOCKDEP_SUPPORT=y +CONFIG_STACKTRACE_SUPPORT=y +CONFIG_MMU=y +CONFIG_ARCH_MMAP_RND_BITS_MIN=28 +CONFIG_ARCH_MMAP_RND_BITS_MAX=32 +CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=8 +CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16 +CONFIG_GENERIC_ISA_DMA=y +CONFIG_GENERIC_BUG=y +CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y +CONFIG_ARCH_MAY_HAVE_PC_FDC=y +CONFIG_GENERIC_CALIBRATE_DELAY=y +CONFIG_ARCH_HAS_CPU_RELAX=y +CONFIG_ARCH_HIBERNATION_POSSIBLE=y +CONFIG_ARCH_SUSPEND_POSSIBLE=y +CONFIG_AUDIT_ARCH=y +CONFIG_HAVE_INTEL_TXT=y +CONFIG_X86_64_SMP=y +CONFIG_ARCH_SUPPORTS_UPROBES=y +CONFIG_FIX_EARLYCON_MEM=y +CONFIG_DYNAMIC_PHYSICAL_MASK=y +CONFIG_PGTABLE_LEVELS=4 +CONFIG_CC_HAS_SANE_STACKPROTECTOR=y + +# +# Processor type and features +# +CONFIG_SMP=y +CONFIG_X86_X2APIC=y +# CONFIG_X86_MPPARSE is not set +# CONFIG_GOLDFISH is not set +# CONFIG_X86_CPU_RESCTRL is not set +# CONFIG_X86_EXTENDED_PLATFORM is not set +# CONFIG_X86_INTEL_LPSS is not set +# CONFIG_X86_AMD_PLATFORM_DEVICE is not set +# CONFIG_IOSF_MBI is not set +CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y +# CONFIG_SCHED_OMIT_FRAME_POINTER is not set +CONFIG_HYPERVISOR_GUEST=y +CONFIG_PARAVIRT=y +# CONFIG_PARAVIRT_DEBUG is not set +CONFIG_PARAVIRT_SPINLOCKS=y +CONFIG_X86_HV_CALLBACK_VECTOR=y +# CONFIG_XEN is not set +# CONFIG_KVM_GUEST is not set +# CONFIG_ARCH_CPUIDLE_HALTPOLL is not set +# CONFIG_PVH is not set +# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set +# CONFIG_JAILHOUSE_GUEST is not set +# CONFIG_ACRN_GUEST is not set +CONFIG_INTEL_TDX_GUEST=y +# CONFIG_MK8 is not set +# CONFIG_MPSC is not set +CONFIG_MCORE2=y +# CONFIG_MATOM is not set +# CONFIG_GENERIC_CPU is not set +CONFIG_X86_INTERNODE_CACHE_SHIFT=6 +CONFIG_X86_L1_CACHE_SHIFT=6 +CONFIG_X86_INTEL_USERCOPY=y +CONFIG_X86_USE_PPRO_CHECKSUM=y +CONFIG_X86_P6_NOP=y +CONFIG_X86_TSC=y +CONFIG_X86_CMPXCHG64=y +CONFIG_X86_CMOV=y 
+CONFIG_X86_MINIMUM_CPU_FAMILY=64 +CONFIG_X86_DEBUGCTLMSR=y +CONFIG_IA32_FEAT_CTL=y +CONFIG_X86_VMX_FEATURE_NAMES=y +CONFIG_PROCESSOR_SELECT=y +CONFIG_CPU_SUP_INTEL=y +CONFIG_CPU_SUP_AMD=y +# CONFIG_CPU_SUP_HYGON is not set +CONFIG_CPU_SUP_CENTAUR=y +# CONFIG_CPU_SUP_ZHAOXIN is not set +CONFIG_HPET_TIMER=y +CONFIG_HPET_EMULATE_RTC=y +CONFIG_DMI=y +# CONFIG_GART_IOMMU is not set +# CONFIG_MAXSMP is not set +CONFIG_NR_CPUS_RANGE_BEGIN=2 +CONFIG_NR_CPUS_RANGE_END=512 +CONFIG_NR_CPUS_DEFAULT=64 +CONFIG_NR_CPUS=256 +CONFIG_SCHED_CLUSTER=y +CONFIG_SCHED_SMT=y +CONFIG_SCHED_MC=y +# CONFIG_SCHED_MC_PRIO is not set +CONFIG_X86_LOCAL_APIC=y +CONFIG_X86_IO_APIC=y +# CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS is not set +CONFIG_X86_MCE=y +# CONFIG_X86_MCELOG_LEGACY is not set +CONFIG_X86_MCE_INTEL=y +CONFIG_X86_MCE_AMD=y +CONFIG_X86_MCE_THRESHOLD=y +# CONFIG_X86_MCE_INJECT is not set + +# +# Performance monitoring +# +# CONFIG_PERF_EVENTS_INTEL_UNCORE is not set +# CONFIG_PERF_EVENTS_INTEL_RAPL is not set +# CONFIG_PERF_EVENTS_INTEL_CSTATE is not set +# CONFIG_PERF_EVENTS_AMD_POWER is not set +# CONFIG_PERF_EVENTS_AMD_UNCORE is not set +CONFIG_PERF_EVENTS_AMD_BRS=y +# end of Performance monitoring + +CONFIG_X86_16BIT=y +CONFIG_X86_ESPFIX64=y +CONFIG_X86_VSYSCALL_EMULATION=y +# CONFIG_X86_IOPL_IOPERM is not set +CONFIG_MICROCODE=y +# CONFIG_MICROCODE_LATE_LOADING is not set +# CONFIG_X86_MSR is not set +# CONFIG_X86_CPUID is not set +# CONFIG_X86_5LEVEL is not set +CONFIG_X86_DIRECT_GBPAGES=y +# CONFIG_X86_CPA_STATISTICS is not set +CONFIG_X86_MEM_ENCRYPT=y +# CONFIG_AMD_MEM_ENCRYPT is not set +# CONFIG_NUMA is not set +CONFIG_ARCH_SPARSEMEM_ENABLE=y +CONFIG_ARCH_SPARSEMEM_DEFAULT=y +# CONFIG_ARCH_MEMORY_PROBE is not set +CONFIG_ARCH_PROC_KCORE_TEXT=y +CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000 +CONFIG_X86_PMEM_LEGACY_DEVICE=y +CONFIG_X86_PMEM_LEGACY=y +# CONFIG_X86_CHECK_BIOS_CORRUPTION is not set +CONFIG_MTRR=y +# CONFIG_MTRR_SANITIZER is not set +CONFIG_X86_PAT=y +CONFIG_ARCH_USES_PG_UNCACHED=y +CONFIG_X86_UMIP=y +CONFIG_CC_HAS_IBT=y +# CONFIG_X86_KERNEL_IBT is not set +CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y +CONFIG_X86_INTEL_TSX_MODE_OFF=y +# CONFIG_X86_INTEL_TSX_MODE_ON is not set +# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set +# CONFIG_X86_SGX is not set +# CONFIG_X86_USER_SHADOW_STACK is not set +CONFIG_EFI=y +CONFIG_EFI_STUB=y +CONFIG_EFI_HANDOVER_PROTOCOL=y +CONFIG_EFI_MIXED=y +# CONFIG_EFI_FAKE_MEMMAP is not set +# CONFIG_EFI_RUNTIME_MAP is not set +CONFIG_HZ_100=y +# CONFIG_HZ_250 is not set +# CONFIG_HZ_300 is not set +# CONFIG_HZ_1000 is not set +CONFIG_HZ=100 +CONFIG_SCHED_HRTICK=y +CONFIG_ARCH_SUPPORTS_KEXEC=y +CONFIG_ARCH_SUPPORTS_KEXEC_FILE=y +CONFIG_ARCH_SUPPORTS_KEXEC_SIG=y +CONFIG_ARCH_SUPPORTS_KEXEC_SIG_FORCE=y +CONFIG_ARCH_SUPPORTS_KEXEC_BZIMAGE_VERIFY_SIG=y +CONFIG_ARCH_SUPPORTS_KEXEC_JUMP=y +CONFIG_ARCH_SUPPORTS_CRASH_DUMP=y +CONFIG_ARCH_SUPPORTS_CRASH_HOTPLUG=y +CONFIG_PHYSICAL_START=0x1000000 +CONFIG_RELOCATABLE=y +CONFIG_RANDOMIZE_BASE=y +CONFIG_X86_NEED_RELOCS=y +CONFIG_PHYSICAL_ALIGN=0x1000000 +CONFIG_DYNAMIC_MEMORY_LAYOUT=y +CONFIG_RANDOMIZE_MEMORY=y +CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING=0xa +# CONFIG_ADDRESS_MASKING is not set +CONFIG_HOTPLUG_CPU=y +# CONFIG_COMPAT_VDSO is not set +# CONFIG_LEGACY_VSYSCALL_XONLY is not set +CONFIG_LEGACY_VSYSCALL_NONE=y +# CONFIG_CMDLINE_BOOL is not set +CONFIG_MODIFY_LDT_SYSCALL=y +# CONFIG_STRICT_SIGALTSTACK_SIZE is not set +CONFIG_HAVE_LIVEPATCH=y +# end of Processor type and features + +CONFIG_CC_HAS_SLS=y 
+CONFIG_CC_HAS_RETURN_THUNK=y +CONFIG_CC_HAS_ENTRY_PADDING=y +CONFIG_FUNCTION_PADDING_CFI=11 +CONFIG_FUNCTION_PADDING_BYTES=16 +CONFIG_CALL_PADDING=y +CONFIG_HAVE_CALL_THUNKS=y +CONFIG_CALL_THUNKS=y +CONFIG_PREFIX_SYMBOLS=y +CONFIG_SPECULATION_MITIGATIONS=y +CONFIG_PAGE_TABLE_ISOLATION=y +CONFIG_RETPOLINE=y +CONFIG_RETHUNK=y +CONFIG_CPU_UNRET_ENTRY=y +CONFIG_CALL_DEPTH_TRACKING=y +# CONFIG_CALL_THUNKS_DEBUG is not set +CONFIG_CPU_IBPB_ENTRY=y +CONFIG_CPU_IBRS_ENTRY=y +CONFIG_CPU_SRSO=y +# CONFIG_SLS is not set +# CONFIG_GDS_FORCE_MITIGATION is not set +CONFIG_ARCH_HAS_ADD_PAGES=y + +# +# Power management and ACPI options +# +# CONFIG_SUSPEND is not set +# CONFIG_HIBERNATION is not set +# CONFIG_PM is not set +# CONFIG_ENERGY_MODEL is not set +CONFIG_ARCH_SUPPORTS_ACPI=y +CONFIG_ACPI=y +CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y +CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y +CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y +# CONFIG_ACPI_DEBUGGER is not set +# CONFIG_ACPI_SPCR_TABLE is not set +# CONFIG_ACPI_FPDT is not set +CONFIG_ACPI_LPIT=y +# CONFIG_ACPI_REV_OVERRIDE_POSSIBLE is not set +# CONFIG_ACPI_EC_DEBUGFS is not set +CONFIG_ACPI_AC=y +CONFIG_ACPI_BATTERY=y +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_TINY_POWER_BUTTON is not set +# CONFIG_ACPI_FAN is not set +# CONFIG_ACPI_DOCK is not set +CONFIG_ACPI_CPU_FREQ_PSS=y +CONFIG_ACPI_PROCESSOR_CSTATE=y +CONFIG_ACPI_PROCESSOR_IDLE=y +CONFIG_ACPI_CPPC_LIB=y +CONFIG_ACPI_PROCESSOR=y +CONFIG_ACPI_HOTPLUG_CPU=y +# CONFIG_ACPI_PROCESSOR_AGGREGATOR is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y +# CONFIG_ACPI_TABLE_UPGRADE is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_PCI_SLOT is not set +CONFIG_ACPI_CONTAINER=y +# CONFIG_ACPI_HOTPLUG_MEMORY is not set +CONFIG_ACPI_HOTPLUG_IOAPIC=y +# CONFIG_ACPI_SBS is not set +# CONFIG_ACPI_HED is not set +# CONFIG_ACPI_CUSTOM_METHOD is not set +# CONFIG_ACPI_BGRT is not set +# CONFIG_ACPI_REDUCED_HARDWARE_ONLY is not set +CONFIG_ACPI_NFIT=y +# CONFIG_NFIT_SECURITY_DEBUG is not set +CONFIG_HAVE_ACPI_APEI=y +CONFIG_HAVE_ACPI_APEI_NMI=y +# CONFIG_ACPI_APEI is not set +# CONFIG_ACPI_DPTF is not set +# CONFIG_ACPI_CONFIGFS is not set +# CONFIG_ACPI_PFRUT is not set +CONFIG_ACPI_PCC=y +# CONFIG_ACPI_FFH is not set +# CONFIG_PMIC_OPREGION is not set +# CONFIG_ACPI_PRMT is not set +# CONFIG_X86_PM_TIMER is not set + +# +# CPU Frequency scaling +# +CONFIG_CPU_FREQ=y +CONFIG_CPU_FREQ_GOV_ATTR_SET=y +# CONFIG_CPU_FREQ_STAT is not set +CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y +# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set +# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set +CONFIG_CPU_FREQ_GOV_PERFORMANCE=y +# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set +# CONFIG_CPU_FREQ_GOV_USERSPACE is not set +# CONFIG_CPU_FREQ_GOV_ONDEMAND is not set +# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set +CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y + +# +# CPU frequency scaling drivers +# +# CONFIG_X86_INTEL_PSTATE is not set +# CONFIG_X86_PCC_CPUFREQ is not set +CONFIG_X86_AMD_PSTATE=y +CONFIG_X86_AMD_PSTATE_DEFAULT_MODE=3 +# CONFIG_X86_AMD_PSTATE_UT is not set +# CONFIG_X86_ACPI_CPUFREQ is not set +# CONFIG_X86_SPEEDSTEP_CENTRINO is not set +# CONFIG_X86_P4_CLOCKMOD is not set + +# +# shared options +# +# end of CPU Frequency scaling + +# +# CPU Idle +# +CONFIG_CPU_IDLE=y +# CONFIG_CPU_IDLE_GOV_LADDER is not set +CONFIG_CPU_IDLE_GOV_MENU=y +# 
CONFIG_CPU_IDLE_GOV_TEO is not set +# end of CPU Idle + +# CONFIG_INTEL_IDLE is not set +# end of Power management and ACPI options + +# +# Bus options (PCI etc.) +# +CONFIG_PCI_DIRECT=y +# CONFIG_PCI_MMCONFIG is not set +# CONFIG_PCI_CNB20LE_QUIRK is not set +# CONFIG_ISA_BUS is not set +CONFIG_ISA_DMA_API=y +CONFIG_AMD_NB=y +# end of Bus options (PCI etc.) + +# +# Binary Emulations +# +CONFIG_IA32_EMULATION=y +# CONFIG_X86_X32_ABI is not set +CONFIG_COMPAT_32=y +CONFIG_COMPAT=y +CONFIG_COMPAT_FOR_U64_ALIGNMENT=y +# end of Binary Emulations + +CONFIG_HAVE_KVM=y +CONFIG_HAVE_KVM_PFNCACHE=y +CONFIG_HAVE_KVM_IRQCHIP=y +CONFIG_HAVE_KVM_IRQFD=y +CONFIG_HAVE_KVM_IRQ_ROUTING=y +CONFIG_HAVE_KVM_DIRTY_RING=y +CONFIG_HAVE_KVM_DIRTY_RING_TSO=y +CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL=y +CONFIG_HAVE_KVM_EVENTFD=y +CONFIG_KVM_MMIO=y +CONFIG_KVM_ASYNC_PF=y +CONFIG_HAVE_KVM_MSI=y +CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y +CONFIG_KVM_VFIO=y +CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y +CONFIG_KVM_COMPAT=y +CONFIG_HAVE_KVM_IRQ_BYPASS=y +CONFIG_HAVE_KVM_NO_POLL=y +CONFIG_KVM_XFER_TO_GUEST_WORK=y +CONFIG_KVM_GENERIC_HARDWARE_ENABLING=y +CONFIG_VIRTUALIZATION=y +CONFIG_KVM=y +CONFIG_KVM_WERROR=y +CONFIG_KVM_INTEL=y +CONFIG_KVM_AMD=y +CONFIG_KVM_SMM=y +# CONFIG_KVM_XEN is not set +# CONFIG_KVM_PROVE_MMU is not set +CONFIG_AS_AVX512=y +CONFIG_AS_SHA1_NI=y +CONFIG_AS_SHA256_NI=y +CONFIG_AS_TPAUSE=y +CONFIG_AS_GFNI=y +CONFIG_AS_WRUSS=y + +# +# General architecture-dependent options +# +CONFIG_HOTPLUG_SMT=y +CONFIG_HOTPLUG_CORE_SYNC=y +CONFIG_HOTPLUG_CORE_SYNC_DEAD=y +CONFIG_HOTPLUG_CORE_SYNC_FULL=y +CONFIG_HOTPLUG_SPLIT_STARTUP=y +CONFIG_HOTPLUG_PARALLEL=y +CONFIG_GENERIC_ENTRY=y +CONFIG_KPROBES=y +CONFIG_JUMP_LABEL=y +# CONFIG_STATIC_KEYS_SELFTEST is not set +# CONFIG_STATIC_CALL_SELFTEST is not set +CONFIG_OPTPROBES=y +CONFIG_KPROBES_ON_FTRACE=y +CONFIG_UPROBES=y +CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y +CONFIG_ARCH_USE_BUILTIN_BSWAP=y +CONFIG_KRETPROBES=y +CONFIG_KRETPROBE_ON_RETHOOK=y +CONFIG_USER_RETURN_NOTIFIER=y +CONFIG_HAVE_IOREMAP_PROT=y +CONFIG_HAVE_KPROBES=y +CONFIG_HAVE_KRETPROBES=y +CONFIG_HAVE_OPTPROBES=y +CONFIG_HAVE_KPROBES_ON_FTRACE=y +CONFIG_ARCH_CORRECT_STACKTRACE_ON_KRETPROBE=y +CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y +CONFIG_HAVE_NMI=y +CONFIG_TRACE_IRQFLAGS_SUPPORT=y +CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y +CONFIG_HAVE_ARCH_TRACEHOOK=y +CONFIG_HAVE_DMA_CONTIGUOUS=y +CONFIG_GENERIC_SMP_IDLE_THREAD=y +CONFIG_ARCH_HAS_FORTIFY_SOURCE=y +CONFIG_ARCH_HAS_SET_MEMORY=y +CONFIG_ARCH_HAS_SET_DIRECT_MAP=y +CONFIG_ARCH_HAS_CPU_FINALIZE_INIT=y +CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y +CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT=y +CONFIG_ARCH_WANTS_NO_INSTR=y +CONFIG_HAVE_ASM_MODVERSIONS=y +CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y +CONFIG_HAVE_RSEQ=y +CONFIG_HAVE_RUST=y +CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y +CONFIG_HAVE_HW_BREAKPOINT=y +CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y +CONFIG_HAVE_USER_RETURN_NOTIFIER=y +CONFIG_HAVE_PERF_EVENTS_NMI=y +CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y +CONFIG_HAVE_PERF_REGS=y +CONFIG_HAVE_PERF_USER_STACK_DUMP=y +CONFIG_HAVE_ARCH_JUMP_LABEL=y +CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y +CONFIG_MMU_GATHER_TABLE_FREE=y +CONFIG_MMU_GATHER_RCU_TABLE_FREE=y +CONFIG_MMU_GATHER_MERGE_VMAS=y +CONFIG_MMU_LAZY_TLB_REFCOUNT=y +CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y +CONFIG_ARCH_HAS_NMI_SAFE_THIS_CPU_OPS=y +CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y +CONFIG_HAVE_CMPXCHG_LOCAL=y +CONFIG_HAVE_CMPXCHG_DOUBLE=y +CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y +CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y +CONFIG_HAVE_ARCH_SECCOMP=y 
+CONFIG_HAVE_ARCH_SECCOMP_FILTER=y +CONFIG_SECCOMP=y +CONFIG_SECCOMP_FILTER=y +# CONFIG_SECCOMP_CACHE_DEBUG is not set +CONFIG_HAVE_ARCH_STACKLEAK=y +CONFIG_HAVE_STACKPROTECTOR=y +CONFIG_STACKPROTECTOR=y +CONFIG_STACKPROTECTOR_STRONG=y +CONFIG_ARCH_SUPPORTS_LTO_CLANG=y +CONFIG_ARCH_SUPPORTS_LTO_CLANG_THIN=y +CONFIG_LTO_NONE=y +CONFIG_ARCH_SUPPORTS_CFI_CLANG=y +CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y +CONFIG_HAVE_CONTEXT_TRACKING_USER=y +CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK=y +CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y +CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y +CONFIG_HAVE_MOVE_PUD=y +CONFIG_HAVE_MOVE_PMD=y +CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y +CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y +CONFIG_HAVE_ARCH_HUGE_VMAP=y +CONFIG_HAVE_ARCH_HUGE_VMALLOC=y +CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y +CONFIG_ARCH_WANT_PMD_MKWRITE=y +CONFIG_HAVE_ARCH_SOFT_DIRTY=y +CONFIG_HAVE_MOD_ARCH_SPECIFIC=y +CONFIG_MODULES_USE_ELF_RELA=y +CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y +CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK=y +CONFIG_SOFTIRQ_ON_OWN_STACK=y +CONFIG_ARCH_HAS_ELF_RANDOMIZE=y +CONFIG_HAVE_ARCH_MMAP_RND_BITS=y +CONFIG_HAVE_EXIT_THREAD=y +CONFIG_ARCH_MMAP_RND_BITS=28 +CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y +CONFIG_ARCH_MMAP_RND_COMPAT_BITS=8 +CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES=y +CONFIG_PAGE_SIZE_LESS_THAN_64KB=y +CONFIG_PAGE_SIZE_LESS_THAN_256KB=y +CONFIG_HAVE_OBJTOOL=y +CONFIG_HAVE_JUMP_LABEL_HACK=y +CONFIG_HAVE_NOINSTR_HACK=y +CONFIG_HAVE_NOINSTR_VALIDATION=y +CONFIG_HAVE_UACCESS_VALIDATION=y +CONFIG_HAVE_STACK_VALIDATION=y +CONFIG_HAVE_RELIABLE_STACKTRACE=y +CONFIG_OLD_SIGSUSPEND3=y +CONFIG_COMPAT_OLD_SIGACTION=y +CONFIG_COMPAT_32BIT_TIME=y +CONFIG_HAVE_ARCH_VMAP_STACK=y +CONFIG_VMAP_STACK=y +CONFIG_HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET=y +CONFIG_RANDOMIZE_KSTACK_OFFSET=y +# CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT is not set +CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y +CONFIG_STRICT_KERNEL_RWX=y +CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y +CONFIG_STRICT_MODULE_RWX=y +CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y +CONFIG_ARCH_USE_MEMREMAP_PROT=y +# CONFIG_LOCK_EVENT_COUNTS is not set +CONFIG_ARCH_HAS_MEM_ENCRYPT=y +CONFIG_ARCH_HAS_CC_PLATFORM=y +CONFIG_HAVE_STATIC_CALL=y +CONFIG_HAVE_STATIC_CALL_INLINE=y +CONFIG_HAVE_PREEMPT_DYNAMIC=y +CONFIG_HAVE_PREEMPT_DYNAMIC_CALL=y +CONFIG_ARCH_WANT_LD_ORPHAN_WARN=y +CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y +CONFIG_ARCH_SUPPORTS_PAGE_TABLE_CHECK=y +CONFIG_ARCH_HAS_ELFCORE_COMPAT=y +CONFIG_ARCH_HAS_PARANOID_L1D_FLUSH=y +CONFIG_DYNAMIC_SIGFRAME=y +CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG=y + +# +# GCOV-based kernel profiling +# +# CONFIG_GCOV_KERNEL is not set +CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y +# end of GCOV-based kernel profiling + +CONFIG_HAVE_GCC_PLUGINS=y +CONFIG_FUNCTION_ALIGNMENT_4B=y +CONFIG_FUNCTION_ALIGNMENT_16B=y +CONFIG_FUNCTION_ALIGNMENT=16 +# end of General architecture-dependent options + +CONFIG_RT_MUTEXES=y +CONFIG_BASE_SMALL=0 +CONFIG_MODULES=y +# CONFIG_MODULE_DEBUG is not set +CONFIG_MODULE_FORCE_LOAD=y +CONFIG_MODULE_UNLOAD=y +CONFIG_MODULE_FORCE_UNLOAD=y +# CONFIG_MODULE_UNLOAD_TAINT_TRACKING is not set +CONFIG_MODVERSIONS=y +CONFIG_ASM_MODVERSIONS=y +CONFIG_MODULE_SRCVERSION_ALL=y +# CONFIG_MODULE_SIG is not set +CONFIG_MODULE_COMPRESS_NONE=y +# CONFIG_MODULE_COMPRESS_GZIP is not set +# CONFIG_MODULE_COMPRESS_XZ is not set +# CONFIG_MODULE_COMPRESS_ZSTD is not set +# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set +CONFIG_MODPROBE_PATH="/sbin/modprobe" +# CONFIG_TRIM_UNUSED_KSYMS is not set +CONFIG_MODULES_TREE_LOOKUP=y +CONFIG_BLOCK=y +CONFIG_BLOCK_LEGACY_AUTOLOAD=y 
+CONFIG_BLK_CGROUP_PUNT_BIO=y +CONFIG_BLK_DEV_BSG_COMMON=y +CONFIG_BLK_DEV_BSGLIB=y +# CONFIG_BLK_DEV_INTEGRITY is not set +# CONFIG_BLK_DEV_ZONED is not set +# CONFIG_BLK_DEV_THROTTLING is not set +# CONFIG_BLK_WBT is not set +# CONFIG_BLK_CGROUP_IOLATENCY is not set +# CONFIG_BLK_CGROUP_IOCOST is not set +# CONFIG_BLK_CGROUP_IOPRIO is not set +# CONFIG_BLK_DEBUG_FS is not set +# CONFIG_BLK_SED_OPAL is not set +# CONFIG_BLK_INLINE_ENCRYPTION is not set + +# +# Partition Types +# +CONFIG_PARTITION_ADVANCED=y +# CONFIG_ACORN_PARTITION is not set +# CONFIG_AIX_PARTITION is not set +# CONFIG_OSF_PARTITION is not set +# CONFIG_AMIGA_PARTITION is not set +# CONFIG_ATARI_PARTITION is not set +# CONFIG_MAC_PARTITION is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_BSD_DISKLABEL is not set +# CONFIG_MINIX_SUBPARTITION is not set +# CONFIG_SOLARIS_X86_PARTITION is not set +# CONFIG_UNIXWARE_DISKLABEL is not set +# CONFIG_LDM_PARTITION is not set +# CONFIG_SGI_PARTITION is not set +# CONFIG_ULTRIX_PARTITION is not set +# CONFIG_SUN_PARTITION is not set +# CONFIG_KARMA_PARTITION is not set +CONFIG_EFI_PARTITION=y +# CONFIG_SYSV68_PARTITION is not set +# CONFIG_CMDLINE_PARTITION is not set +# end of Partition Types + +CONFIG_BLK_MQ_PCI=y +CONFIG_BLK_MQ_VIRTIO=y +CONFIG_BLOCK_HOLDER_DEPRECATED=y +CONFIG_BLK_MQ_STACKING=y + +# +# IO Schedulers +# +# CONFIG_MQ_IOSCHED_DEADLINE is not set +# CONFIG_MQ_IOSCHED_KYBER is not set +# CONFIG_IOSCHED_BFQ is not set +# end of IO Schedulers + +CONFIG_PREEMPT_NOTIFIERS=y +CONFIG_PADATA=y +CONFIG_ASN1=y +CONFIG_INLINE_SPIN_UNLOCK_IRQ=y +CONFIG_INLINE_READ_UNLOCK=y +CONFIG_INLINE_READ_UNLOCK_IRQ=y +CONFIG_INLINE_WRITE_UNLOCK=y +CONFIG_INLINE_WRITE_UNLOCK_IRQ=y +CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y +CONFIG_MUTEX_SPIN_ON_OWNER=y +CONFIG_RWSEM_SPIN_ON_OWNER=y +CONFIG_LOCK_SPIN_ON_OWNER=y +CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y +CONFIG_QUEUED_SPINLOCKS=y +CONFIG_ARCH_USE_QUEUED_RWLOCKS=y +CONFIG_QUEUED_RWLOCKS=y +CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y +CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE=y +CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y +CONFIG_FREEZER=y + +# +# Executable file formats +# +CONFIG_BINFMT_ELF=y +CONFIG_COMPAT_BINFMT_ELF=y +CONFIG_ELFCORE=y +CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y +CONFIG_BINFMT_SCRIPT=y +CONFIG_BINFMT_MISC=y +CONFIG_COREDUMP=y +# end of Executable file formats + +# +# Memory Management options +# +CONFIG_SWAP=y +# CONFIG_ZSWAP is not set + +# +# SLAB allocator options +# +# CONFIG_SLAB_DEPRECATED is not set +CONFIG_SLUB=y +# CONFIG_SLUB_TINY is not set +# CONFIG_SLAB_MERGE_DEFAULT is not set +# CONFIG_SLAB_FREELIST_RANDOM is not set +# CONFIG_SLAB_FREELIST_HARDENED is not set +# CONFIG_SLUB_STATS is not set +# CONFIG_SLUB_CPU_PARTIAL is not set +# CONFIG_RANDOM_KMALLOC_CACHES is not set +# end of SLAB allocator options + +# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set +# CONFIG_COMPAT_BRK is not set +CONFIG_SPARSEMEM=y +CONFIG_SPARSEMEM_EXTREME=y +CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y +CONFIG_SPARSEMEM_VMEMMAP=y +CONFIG_ARCH_WANT_OPTIMIZE_DAX_VMEMMAP=y +CONFIG_ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP=y +CONFIG_HAVE_FAST_GUP=y +CONFIG_MEMORY_ISOLATION=y +CONFIG_HAVE_BOOTMEM_INFO_NODE=y +CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y +CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y +CONFIG_MEMORY_HOTPLUG=y +# CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set +CONFIG_MEMORY_HOTREMOVE=y +CONFIG_MHP_MEMMAP_ON_MEMORY=y +CONFIG_ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE=y +CONFIG_SPLIT_PTLOCK_CPUS=4 +CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y +CONFIG_MEMORY_BALLOON=y +# CONFIG_BALLOON_COMPACTION is not 
set +CONFIG_COMPACTION=y +CONFIG_COMPACT_UNEVICTABLE_DEFAULT=1 +CONFIG_PAGE_REPORTING=y +CONFIG_MIGRATION=y +CONFIG_DEVICE_MIGRATION=y +CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y +CONFIG_ARCH_ENABLE_THP_MIGRATION=y +CONFIG_CONTIG_ALLOC=y +CONFIG_PHYS_ADDR_T_64BIT=y +CONFIG_MMU_NOTIFIER=y +CONFIG_KSM=y +CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 +CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y +# CONFIG_MEMORY_FAILURE is not set +CONFIG_ARCH_WANT_GENERAL_HUGETLB=y +CONFIG_ARCH_WANTS_THP_SWAP=y +CONFIG_TRANSPARENT_HUGEPAGE=y +CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y +# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set +CONFIG_THP_SWAP=y +# CONFIG_READ_ONLY_THP_FOR_FS is not set +CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y +CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y +CONFIG_HAVE_SETUP_PER_CPU_AREA=y +# CONFIG_CMA is not set +# CONFIG_MEM_SOFT_DIRTY is not set +CONFIG_GENERIC_EARLY_IOREMAP=y +CONFIG_DEFERRED_STRUCT_PAGE_INIT=y +# CONFIG_IDLE_PAGE_TRACKING is not set +CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y +CONFIG_ARCH_HAS_CURRENT_STACK_POINTER=y +CONFIG_ARCH_HAS_PTE_DEVMAP=y +CONFIG_ARCH_HAS_ZONE_DMA_SET=y +CONFIG_ZONE_DMA=y +CONFIG_ZONE_DMA32=y +CONFIG_ZONE_DEVICE=y +# CONFIG_DEVICE_PRIVATE is not set +CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y +CONFIG_ARCH_HAS_PKEYS=y +CONFIG_VM_EVENT_COUNTERS=y +# CONFIG_PERCPU_STATS is not set +# CONFIG_GUP_TEST is not set +# CONFIG_DMAPOOL_TEST is not set +CONFIG_ARCH_HAS_PTE_SPECIAL=y +CONFIG_MEMFD_CREATE=y +CONFIG_SECRETMEM=y +CONFIG_ANON_VMA_NAME=y +CONFIG_USERFAULTFD=y +CONFIG_HAVE_ARCH_USERFAULTFD_WP=y +CONFIG_HAVE_ARCH_USERFAULTFD_MINOR=y +CONFIG_PTE_MARKER_UFFD_WP=y +# CONFIG_LRU_GEN is not set +CONFIG_ARCH_SUPPORTS_PER_VMA_LOCK=y +CONFIG_PER_VMA_LOCK=y +CONFIG_LOCK_MM_AND_FIND_VMA=y + +# +# Data Access Monitoring +# +# CONFIG_DAMON is not set +# end of Data Access Monitoring +# end of Memory Management options + +CONFIG_NET=y +CONFIG_NET_INGRESS=y +CONFIG_NET_EGRESS=y +CONFIG_NET_XGRESS=y +CONFIG_SKB_EXTENSIONS=y + +# +# Networking options +# +CONFIG_PACKET=y +CONFIG_PACKET_DIAG=y +CONFIG_UNIX=y +CONFIG_UNIX_SCM=y +CONFIG_AF_UNIX_OOB=y +CONFIG_UNIX_DIAG=y +# CONFIG_TLS is not set +CONFIG_XFRM=y +CONFIG_XFRM_ALGO=y +CONFIG_XFRM_USER=y +# CONFIG_XFRM_USER_COMPAT is not set +# CONFIG_XFRM_INTERFACE is not set +# CONFIG_XFRM_SUB_POLICY is not set +# CONFIG_XFRM_MIGRATE is not set +# CONFIG_XFRM_STATISTICS is not set +CONFIG_XFRM_ESP=y +# CONFIG_NET_KEY is not set +# CONFIG_XDP_SOCKETS is not set +CONFIG_NET_HANDSHAKE=y +CONFIG_INET=y +# CONFIG_IP_MULTICAST is not set +CONFIG_IP_ADVANCED_ROUTER=y +# CONFIG_IP_FIB_TRIE_STATS is not set +CONFIG_IP_MULTIPLE_TABLES=y +# CONFIG_IP_ROUTE_MULTIPATH is not set +# CONFIG_IP_ROUTE_VERBOSE is not set +CONFIG_IP_PNP=y +CONFIG_IP_PNP_DHCP=y +# CONFIG_IP_PNP_BOOTP is not set +# CONFIG_IP_PNP_RARP is not set +CONFIG_NET_IPIP=y +# CONFIG_NET_IPGRE_DEMUX is not set +CONFIG_NET_IP_TUNNEL=y +CONFIG_SYN_COOKIES=y +# CONFIG_NET_IPVTI is not set +CONFIG_NET_UDP_TUNNEL=y +# CONFIG_NET_FOU is not set +# CONFIG_NET_FOU_IP_TUNNELS is not set +# CONFIG_INET_AH is not set +CONFIG_INET_ESP=y +# CONFIG_INET_ESP_OFFLOAD is not set +# CONFIG_INET_ESPINTCP is not set +# CONFIG_INET_IPCOMP is not set +CONFIG_INET_TABLE_PERTURB_ORDER=16 +CONFIG_INET_TUNNEL=y +CONFIG_INET_DIAG=y +CONFIG_INET_TCP_DIAG=y +CONFIG_INET_UDP_DIAG=y +CONFIG_INET_RAW_DIAG=y +# CONFIG_INET_DIAG_DESTROY is not set +# CONFIG_TCP_CONG_ADVANCED is not set +CONFIG_TCP_CONG_CUBIC=y +CONFIG_DEFAULT_TCP_CONG="cubic" +# CONFIG_TCP_MD5SIG is not set +CONFIG_IPV6=y +# CONFIG_IPV6_ROUTER_PREF is not set 
+CONFIG_IPV6_OPTIMISTIC_DAD=y +# CONFIG_INET6_AH is not set +# CONFIG_INET6_ESP is not set +# CONFIG_INET6_IPCOMP is not set +# CONFIG_IPV6_MIP6 is not set +# CONFIG_IPV6_ILA is not set +# CONFIG_IPV6_VTI is not set +CONFIG_IPV6_SIT=y +# CONFIG_IPV6_SIT_6RD is not set +CONFIG_IPV6_NDISC_NODETYPE=y +# CONFIG_IPV6_TUNNEL is not set +# CONFIG_IPV6_MULTIPLE_TABLES is not set +# CONFIG_IPV6_MROUTE is not set +# CONFIG_IPV6_SEG6_LWTUNNEL is not set +# CONFIG_IPV6_SEG6_HMAC is not set +# CONFIG_IPV6_RPL_LWTUNNEL is not set +# CONFIG_IPV6_IOAM6_LWTUNNEL is not set +# CONFIG_NETLABEL is not set +# CONFIG_MPTCP is not set +CONFIG_NETWORK_SECMARK=y +CONFIG_NET_PTP_CLASSIFY=y +CONFIG_NETWORK_PHY_TIMESTAMPING=y +CONFIG_NETFILTER=y +CONFIG_NETFILTER_ADVANCED=y +CONFIG_BRIDGE_NETFILTER=y + +# +# Core Netfilter Configuration +# +CONFIG_NETFILTER_INGRESS=y +CONFIG_NETFILTER_EGRESS=y +CONFIG_NETFILTER_SKIP_EGRESS=y +CONFIG_NETFILTER_NETLINK=y +CONFIG_NETFILTER_FAMILY_BRIDGE=y +CONFIG_NETFILTER_FAMILY_ARP=y +CONFIG_NETFILTER_BPF_LINK=y +# CONFIG_NETFILTER_NETLINK_HOOK is not set +# CONFIG_NETFILTER_NETLINK_ACCT is not set +CONFIG_NETFILTER_NETLINK_QUEUE=y +CONFIG_NETFILTER_NETLINK_LOG=y +# CONFIG_NETFILTER_NETLINK_OSF is not set +CONFIG_NF_CONNTRACK=y +CONFIG_NF_LOG_SYSLOG=y +CONFIG_NETFILTER_CONNCOUNT=y +CONFIG_NF_CONNTRACK_MARK=y +# CONFIG_NF_CONNTRACK_SECMARK is not set +# CONFIG_NF_CONNTRACK_ZONES is not set +# CONFIG_NF_CONNTRACK_PROCFS is not set +CONFIG_NF_CONNTRACK_EVENTS=y +# CONFIG_NF_CONNTRACK_TIMEOUT is not set +# CONFIG_NF_CONNTRACK_TIMESTAMP is not set +# CONFIG_NF_CONNTRACK_LABELS is not set +# CONFIG_NF_CT_PROTO_DCCP is not set +CONFIG_NF_CT_PROTO_GRE=y +# CONFIG_NF_CT_PROTO_SCTP is not set +# CONFIG_NF_CT_PROTO_UDPLITE is not set +CONFIG_NF_CONNTRACK_AMANDA=y +CONFIG_NF_CONNTRACK_FTP=y +CONFIG_NF_CONNTRACK_H323=y +CONFIG_NF_CONNTRACK_IRC=y +CONFIG_NF_CONNTRACK_BROADCAST=y +CONFIG_NF_CONNTRACK_NETBIOS_NS=y +# CONFIG_NF_CONNTRACK_SNMP is not set +CONFIG_NF_CONNTRACK_PPTP=y +CONFIG_NF_CONNTRACK_SANE=y +CONFIG_NF_CONNTRACK_SIP=y +CONFIG_NF_CONNTRACK_TFTP=y +CONFIG_NF_CT_NETLINK=y +# CONFIG_NETFILTER_NETLINK_GLUE_CT is not set +CONFIG_NF_NAT=y +CONFIG_NF_NAT_AMANDA=y +CONFIG_NF_NAT_FTP=y +CONFIG_NF_NAT_IRC=y +CONFIG_NF_NAT_SIP=y +CONFIG_NF_NAT_TFTP=y +CONFIG_NF_NAT_REDIRECT=y +CONFIG_NF_NAT_MASQUERADE=y +CONFIG_NETFILTER_SYNPROXY=y +CONFIG_NF_TABLES=y +CONFIG_NF_TABLES_INET=y +# CONFIG_NF_TABLES_NETDEV is not set +CONFIG_NFT_NUMGEN=y +CONFIG_NFT_CT=y +CONFIG_NFT_CONNLIMIT=y +CONFIG_NFT_LOG=y +CONFIG_NFT_LIMIT=y +CONFIG_NFT_MASQ=y +CONFIG_NFT_REDIR=y +CONFIG_NFT_NAT=y +CONFIG_NFT_TUNNEL=y +# CONFIG_NFT_QUEUE is not set +# CONFIG_NFT_QUOTA is not set +CONFIG_NFT_REJECT=y +CONFIG_NFT_REJECT_INET=y +CONFIG_NFT_COMPAT=y +# CONFIG_NFT_HASH is not set +CONFIG_NFT_XFRM=y +CONFIG_NFT_SOCKET=y +# CONFIG_NFT_OSF is not set +# CONFIG_NFT_TPROXY is not set +# CONFIG_NFT_SYNPROXY is not set +# CONFIG_NF_FLOW_TABLE is not set +CONFIG_NETFILTER_XTABLES=y +# CONFIG_NETFILTER_XTABLES_COMPAT is not set + +# +# Xtables combined modules +# +CONFIG_NETFILTER_XT_MARK=y +# CONFIG_NETFILTER_XT_CONNMARK is not set +CONFIG_NETFILTER_XT_SET=y + +# +# Xtables targets +# +# CONFIG_NETFILTER_XT_TARGET_AUDIT is not set +CONFIG_NETFILTER_XT_TARGET_CHECKSUM=y +# CONFIG_NETFILTER_XT_TARGET_CLASSIFY is not set +# CONFIG_NETFILTER_XT_TARGET_CONNMARK is not set +# CONFIG_NETFILTER_XT_TARGET_CT is not set +# CONFIG_NETFILTER_XT_TARGET_DSCP is not set +CONFIG_NETFILTER_XT_TARGET_HL=y +# CONFIG_NETFILTER_XT_TARGET_HMARK is not set +# 
CONFIG_NETFILTER_XT_TARGET_IDLETIMER is not set +CONFIG_NETFILTER_XT_TARGET_LOG=y +CONFIG_NETFILTER_XT_TARGET_MARK=y +CONFIG_NETFILTER_XT_NAT=y +CONFIG_NETFILTER_XT_TARGET_NETMAP=y +CONFIG_NETFILTER_XT_TARGET_NFLOG=y +# CONFIG_NETFILTER_XT_TARGET_NFQUEUE is not set +# CONFIG_NETFILTER_XT_TARGET_NOTRACK is not set +# CONFIG_NETFILTER_XT_TARGET_RATEEST is not set +CONFIG_NETFILTER_XT_TARGET_REDIRECT=y +CONFIG_NETFILTER_XT_TARGET_MASQUERADE=y +# CONFIG_NETFILTER_XT_TARGET_TEE is not set +# CONFIG_NETFILTER_XT_TARGET_TPROXY is not set +# CONFIG_NETFILTER_XT_TARGET_TRACE is not set +CONFIG_NETFILTER_XT_TARGET_SECMARK=y +CONFIG_NETFILTER_XT_TARGET_TCPMSS=y +# CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP is not set + +# +# Xtables matches +# +CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=y +# CONFIG_NETFILTER_XT_MATCH_BPF is not set +CONFIG_NETFILTER_XT_MATCH_CGROUP=y +# CONFIG_NETFILTER_XT_MATCH_CLUSTER is not set +CONFIG_NETFILTER_XT_MATCH_COMMENT=y +# CONFIG_NETFILTER_XT_MATCH_CONNBYTES is not set +# CONFIG_NETFILTER_XT_MATCH_CONNLABEL is not set +# CONFIG_NETFILTER_XT_MATCH_CONNLIMIT is not set +# CONFIG_NETFILTER_XT_MATCH_CONNMARK is not set +CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y +# CONFIG_NETFILTER_XT_MATCH_CPU is not set +# CONFIG_NETFILTER_XT_MATCH_DCCP is not set +# CONFIG_NETFILTER_XT_MATCH_DEVGROUP is not set +# CONFIG_NETFILTER_XT_MATCH_DSCP is not set +CONFIG_NETFILTER_XT_MATCH_ECN=y +# CONFIG_NETFILTER_XT_MATCH_ESP is not set +# CONFIG_NETFILTER_XT_MATCH_HASHLIMIT is not set +# CONFIG_NETFILTER_XT_MATCH_HELPER is not set +CONFIG_NETFILTER_XT_MATCH_HL=y +# CONFIG_NETFILTER_XT_MATCH_IPCOMP is not set +# CONFIG_NETFILTER_XT_MATCH_IPRANGE is not set +CONFIG_NETFILTER_XT_MATCH_IPVS=y +# CONFIG_NETFILTER_XT_MATCH_L2TP is not set +# CONFIG_NETFILTER_XT_MATCH_LENGTH is not set +CONFIG_NETFILTER_XT_MATCH_LIMIT=y +# CONFIG_NETFILTER_XT_MATCH_MAC is not set +# CONFIG_NETFILTER_XT_MATCH_MARK is not set +CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y +# CONFIG_NETFILTER_XT_MATCH_NFACCT is not set +# CONFIG_NETFILTER_XT_MATCH_OSF is not set +CONFIG_NETFILTER_XT_MATCH_OWNER=y +# CONFIG_NETFILTER_XT_MATCH_POLICY is not set +CONFIG_NETFILTER_XT_MATCH_PHYSDEV=y +# CONFIG_NETFILTER_XT_MATCH_PKTTYPE is not set +# CONFIG_NETFILTER_XT_MATCH_QUOTA is not set +# CONFIG_NETFILTER_XT_MATCH_RATEEST is not set +# CONFIG_NETFILTER_XT_MATCH_REALM is not set +# CONFIG_NETFILTER_XT_MATCH_RECENT is not set +# CONFIG_NETFILTER_XT_MATCH_SCTP is not set +# CONFIG_NETFILTER_XT_MATCH_SOCKET is not set +# CONFIG_NETFILTER_XT_MATCH_STATE is not set +CONFIG_NETFILTER_XT_MATCH_STATISTIC=y +# CONFIG_NETFILTER_XT_MATCH_STRING is not set +# CONFIG_NETFILTER_XT_MATCH_TCPMSS is not set +# CONFIG_NETFILTER_XT_MATCH_TIME is not set +# CONFIG_NETFILTER_XT_MATCH_U32 is not set +# end of Core Netfilter Configuration + +CONFIG_IP_SET=y +CONFIG_IP_SET_MAX=256 +CONFIG_IP_SET_BITMAP_IP=y +CONFIG_IP_SET_BITMAP_IPMAC=y +CONFIG_IP_SET_BITMAP_PORT=y +CONFIG_IP_SET_HASH_IP=y +CONFIG_IP_SET_HASH_IPMARK=y +CONFIG_IP_SET_HASH_IPPORT=y +CONFIG_IP_SET_HASH_IPPORTIP=y +CONFIG_IP_SET_HASH_IPPORTNET=y +CONFIG_IP_SET_HASH_IPMAC=y +CONFIG_IP_SET_HASH_MAC=y +CONFIG_IP_SET_HASH_NETPORTNET=y +CONFIG_IP_SET_HASH_NET=y +CONFIG_IP_SET_HASH_NETNET=y +CONFIG_IP_SET_HASH_NETPORT=y +CONFIG_IP_SET_HASH_NETIFACE=y +# CONFIG_IP_SET_LIST_SET is not set +CONFIG_IP_VS=y +# CONFIG_IP_VS_IPV6 is not set +# CONFIG_IP_VS_DEBUG is not set +CONFIG_IP_VS_TAB_BITS=12 + +# +# IPVS transport protocol load balancing support +# +CONFIG_IP_VS_PROTO_TCP=y +CONFIG_IP_VS_PROTO_UDP=y +# CONFIG_IP_VS_PROTO_ESP 
is not set +# CONFIG_IP_VS_PROTO_AH is not set +# CONFIG_IP_VS_PROTO_SCTP is not set + +# +# IPVS scheduler +# +CONFIG_IP_VS_RR=y +CONFIG_IP_VS_WRR=y +# CONFIG_IP_VS_LC is not set +# CONFIG_IP_VS_WLC is not set +# CONFIG_IP_VS_FO is not set +# CONFIG_IP_VS_OVF is not set +# CONFIG_IP_VS_LBLC is not set +# CONFIG_IP_VS_LBLCR is not set +# CONFIG_IP_VS_DH is not set +CONFIG_IP_VS_SH=y +# CONFIG_IP_VS_MH is not set +# CONFIG_IP_VS_SED is not set +# CONFIG_IP_VS_NQ is not set +# CONFIG_IP_VS_TWOS is not set + +# +# IPVS SH scheduler +# +CONFIG_IP_VS_SH_TAB_BITS=8 + +# +# IPVS MH scheduler +# +CONFIG_IP_VS_MH_TAB_INDEX=12 + +# +# IPVS application helper +# +# CONFIG_IP_VS_FTP is not set +CONFIG_IP_VS_NFCT=y +# CONFIG_IP_VS_PE_SIP is not set + +# +# IP: Netfilter Configuration +# +CONFIG_NF_DEFRAG_IPV4=y +CONFIG_NF_SOCKET_IPV4=y +# CONFIG_NF_TPROXY_IPV4 is not set +CONFIG_NF_TABLES_IPV4=y +CONFIG_NFT_REJECT_IPV4=y +# CONFIG_NFT_DUP_IPV4 is not set +# CONFIG_NFT_FIB_IPV4 is not set +# CONFIG_NF_TABLES_ARP is not set +# CONFIG_NF_DUP_IPV4 is not set +# CONFIG_NF_LOG_ARP is not set +CONFIG_NF_LOG_IPV4=y +CONFIG_NF_REJECT_IPV4=y +CONFIG_NF_NAT_PPTP=y +CONFIG_NF_NAT_H323=y +CONFIG_IP_NF_IPTABLES=y +CONFIG_IP_NF_MATCH_AH=y +CONFIG_IP_NF_MATCH_ECN=y +CONFIG_IP_NF_MATCH_RPFILTER=y +CONFIG_IP_NF_MATCH_TTL=y +CONFIG_IP_NF_FILTER=y +CONFIG_IP_NF_TARGET_REJECT=y +CONFIG_IP_NF_TARGET_SYNPROXY=y +CONFIG_IP_NF_NAT=y +CONFIG_IP_NF_TARGET_MASQUERADE=y +CONFIG_IP_NF_TARGET_NETMAP=y +CONFIG_IP_NF_TARGET_REDIRECT=y +CONFIG_IP_NF_MANGLE=y +CONFIG_IP_NF_TARGET_ECN=y +CONFIG_IP_NF_TARGET_TTL=y +CONFIG_IP_NF_RAW=y +CONFIG_IP_NF_SECURITY=y +CONFIG_IP_NF_ARPTABLES=y +CONFIG_IP_NF_ARPFILTER=y +CONFIG_IP_NF_ARP_MANGLE=y +# end of IP: Netfilter Configuration + +# +# IPv6: Netfilter Configuration +# +CONFIG_NF_SOCKET_IPV6=y +# CONFIG_NF_TPROXY_IPV6 is not set +CONFIG_NF_TABLES_IPV6=y +CONFIG_NFT_REJECT_IPV6=y +# CONFIG_NFT_DUP_IPV6 is not set +# CONFIG_NFT_FIB_IPV6 is not set +# CONFIG_NF_DUP_IPV6 is not set +CONFIG_NF_REJECT_IPV6=y +CONFIG_NF_LOG_IPV6=y +CONFIG_IP6_NF_IPTABLES=y +CONFIG_IP6_NF_MATCH_AH=y +CONFIG_IP6_NF_MATCH_EUI64=y +CONFIG_IP6_NF_MATCH_FRAG=y +CONFIG_IP6_NF_MATCH_OPTS=y +CONFIG_IP6_NF_MATCH_HL=y +CONFIG_IP6_NF_MATCH_IPV6HEADER=y +CONFIG_IP6_NF_MATCH_MH=y +CONFIG_IP6_NF_MATCH_RPFILTER=y +CONFIG_IP6_NF_MATCH_RT=y +CONFIG_IP6_NF_MATCH_SRH=y +CONFIG_IP6_NF_TARGET_HL=y +CONFIG_IP6_NF_FILTER=y +CONFIG_IP6_NF_TARGET_REJECT=y +CONFIG_IP6_NF_TARGET_SYNPROXY=y +CONFIG_IP6_NF_MANGLE=y +CONFIG_IP6_NF_RAW=y +CONFIG_IP6_NF_SECURITY=y +CONFIG_IP6_NF_NAT=y +CONFIG_IP6_NF_TARGET_MASQUERADE=y +CONFIG_IP6_NF_TARGET_NPT=y +# end of IPv6: Netfilter Configuration + +CONFIG_NF_DEFRAG_IPV6=y +# CONFIG_NF_TABLES_BRIDGE is not set +# CONFIG_NF_CONNTRACK_BRIDGE is not set +# CONFIG_BRIDGE_NF_EBTABLES is not set +# CONFIG_BPFILTER is not set +# CONFIG_IP_DCCP is not set +CONFIG_IP_SCTP=y +# CONFIG_SCTP_DBG_OBJCNT is not set +CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5=y +# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1 is not set +# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set +CONFIG_SCTP_COOKIE_HMAC_MD5=y +# CONFIG_SCTP_COOKIE_HMAC_SHA1 is not set +CONFIG_INET_SCTP_DIAG=y +# CONFIG_RDS is not set +# CONFIG_TIPC is not set +# CONFIG_ATM is not set +# CONFIG_L2TP is not set +CONFIG_STP=y +CONFIG_BRIDGE=y +CONFIG_BRIDGE_IGMP_SNOOPING=y +CONFIG_BRIDGE_VLAN_FILTERING=y +# CONFIG_BRIDGE_MRP is not set +# CONFIG_BRIDGE_CFM is not set +# CONFIG_NET_DSA is not set +CONFIG_VLAN_8021Q=y +# CONFIG_VLAN_8021Q_GVRP is not set +# CONFIG_VLAN_8021Q_MVRP is not 
set +CONFIG_LLC=y +# CONFIG_LLC2 is not set +# CONFIG_ATALK is not set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_PHONET is not set +# CONFIG_6LOWPAN is not set +# CONFIG_IEEE802154 is not set +CONFIG_NET_SCHED=y + +# +# Queueing/Scheduling +# +# CONFIG_NET_SCH_HTB is not set +# CONFIG_NET_SCH_HFSC is not set +# CONFIG_NET_SCH_PRIO is not set +CONFIG_NET_SCH_MULTIQ=y +# CONFIG_NET_SCH_RED is not set +# CONFIG_NET_SCH_SFB is not set +# CONFIG_NET_SCH_SFQ is not set +# CONFIG_NET_SCH_TEQL is not set +# CONFIG_NET_SCH_TBF is not set +# CONFIG_NET_SCH_CBS is not set +# CONFIG_NET_SCH_ETF is not set +# CONFIG_NET_SCH_TAPRIO is not set +# CONFIG_NET_SCH_GRED is not set +# CONFIG_NET_SCH_NETEM is not set +# CONFIG_NET_SCH_DRR is not set +# CONFIG_NET_SCH_MQPRIO is not set +# CONFIG_NET_SCH_SKBPRIO is not set +# CONFIG_NET_SCH_CHOKE is not set +# CONFIG_NET_SCH_QFQ is not set +# CONFIG_NET_SCH_CODEL is not set +CONFIG_NET_SCH_FQ_CODEL=y +# CONFIG_NET_SCH_CAKE is not set +# CONFIG_NET_SCH_FQ is not set +# CONFIG_NET_SCH_HHF is not set +# CONFIG_NET_SCH_PIE is not set +CONFIG_NET_SCH_INGRESS=y +# CONFIG_NET_SCH_PLUG is not set +# CONFIG_NET_SCH_ETS is not set +CONFIG_NET_SCH_DEFAULT=y +CONFIG_DEFAULT_FQ_CODEL=y +# CONFIG_DEFAULT_PFIFO_FAST is not set +CONFIG_DEFAULT_NET_SCH="fq_codel" + +# +# Classification +# +CONFIG_NET_CLS=y +# CONFIG_NET_CLS_BASIC is not set +# CONFIG_NET_CLS_ROUTE4 is not set +# CONFIG_NET_CLS_FW is not set +CONFIG_NET_CLS_U32=y +CONFIG_CLS_U32_PERF=y +CONFIG_CLS_U32_MARK=y +# CONFIG_NET_CLS_FLOW is not set +CONFIG_NET_CLS_CGROUP=y +CONFIG_NET_CLS_BPF=y +CONFIG_NET_CLS_FLOWER=y +# CONFIG_NET_CLS_MATCHALL is not set +# CONFIG_NET_EMATCH is not set +CONFIG_NET_CLS_ACT=y +# CONFIG_NET_ACT_POLICE is not set +# CONFIG_NET_ACT_GACT is not set +CONFIG_NET_ACT_MIRRED=y +# CONFIG_NET_ACT_SAMPLE is not set +CONFIG_NET_ACT_IPT=y +# CONFIG_NET_ACT_NAT is not set +# CONFIG_NET_ACT_PEDIT is not set +# CONFIG_NET_ACT_SIMP is not set +# CONFIG_NET_ACT_SKBEDIT is not set +# CONFIG_NET_ACT_CSUM is not set +# CONFIG_NET_ACT_MPLS is not set +# CONFIG_NET_ACT_VLAN is not set +CONFIG_NET_ACT_BPF=y +# CONFIG_NET_ACT_CONNMARK is not set +# CONFIG_NET_ACT_CTINFO is not set +# CONFIG_NET_ACT_SKBMOD is not set +# CONFIG_NET_ACT_IFE is not set +# CONFIG_NET_ACT_TUNNEL_KEY is not set +# CONFIG_NET_ACT_GATE is not set +# CONFIG_NET_TC_SKB_EXT is not set +CONFIG_NET_SCH_FIFO=y +# CONFIG_DCB is not set +CONFIG_DNS_RESOLVER=y +# CONFIG_BATMAN_ADV is not set +# CONFIG_OPENVSWITCH is not set +CONFIG_VSOCKETS=y +CONFIG_VSOCKETS_DIAG=y +# CONFIG_VSOCKETS_LOOPBACK is not set +# CONFIG_VIRTIO_VSOCKETS is not set +CONFIG_HYPERV_VSOCKETS=y +CONFIG_NETLINK_DIAG=y +# CONFIG_MPLS is not set +# CONFIG_NET_NSH is not set +# CONFIG_HSR is not set +CONFIG_NET_SWITCHDEV=y +CONFIG_NET_L3_MASTER_DEV=y +# CONFIG_QRTR is not set +# CONFIG_NET_NCSI is not set +# CONFIG_PCPU_DEV_REFCNT is not set +CONFIG_MAX_SKB_FRAGS=17 +CONFIG_RPS=y +CONFIG_RFS_ACCEL=y +CONFIG_SOCK_RX_QUEUE_MAPPING=y +CONFIG_XPS=y +CONFIG_CGROUP_NET_PRIO=y +CONFIG_CGROUP_NET_CLASSID=y +CONFIG_NET_RX_BUSY_POLL=y +CONFIG_BQL=y +# CONFIG_BPF_STREAM_PARSER is not set +CONFIG_NET_FLOW_LIMIT=y + +# +# Network testing +# +# CONFIG_NET_PKTGEN is not set +CONFIG_NET_DROP_MONITOR=y +# end of Network testing +# end of Networking options + +# CONFIG_HAMRADIO is not set +# CONFIG_CAN is not set +# CONFIG_BT is not set +# CONFIG_AF_RXRPC is not set +# CONFIG_AF_KCM is not set +# CONFIG_MCTP is not set +CONFIG_FIB_RULES=y +# CONFIG_WIRELESS is not set +# 
CONFIG_RFKILL is not set +CONFIG_NET_9P=y +CONFIG_NET_9P_FD=y +CONFIG_NET_9P_VIRTIO=y +# CONFIG_NET_9P_DEBUG is not set +# CONFIG_CAIF is not set +CONFIG_CEPH_LIB=y +# CONFIG_CEPH_LIB_PRETTYDEBUG is not set +# CONFIG_CEPH_LIB_USE_DNS_RESOLVER is not set +# CONFIG_NFC is not set +# CONFIG_PSAMPLE is not set +# CONFIG_NET_IFE is not set +# CONFIG_LWTUNNEL is not set +CONFIG_DST_CACHE=y +CONFIG_GRO_CELLS=y +CONFIG_NET_SOCK_MSG=y +CONFIG_PAGE_POOL=y +# CONFIG_PAGE_POOL_STATS is not set +CONFIG_FAILOVER=y +# CONFIG_ETHTOOL_NETLINK is not set + +# +# Device Drivers +# +CONFIG_HAVE_EISA=y +# CONFIG_EISA is not set +CONFIG_HAVE_PCI=y +CONFIG_PCI=y +CONFIG_PCI_DOMAINS=y +CONFIG_PCIEPORTBUS=y +CONFIG_PCIEAER=y +# CONFIG_PCIEAER_INJECT is not set +# CONFIG_PCIE_ECRC is not set +CONFIG_PCIEASPM=y +CONFIG_PCIEASPM_DEFAULT=y +# CONFIG_PCIEASPM_POWERSAVE is not set +# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set +# CONFIG_PCIEASPM_PERFORMANCE is not set +# CONFIG_PCIE_DPC is not set +# CONFIG_PCIE_PTM is not set +CONFIG_PCI_MSI=y +CONFIG_PCI_QUIRKS=y +# CONFIG_PCI_DEBUG is not set +# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set +# CONFIG_PCI_STUB is not set +# CONFIG_PCI_PF_STUB is not set +CONFIG_PCI_ATS=y +CONFIG_PCI_LOCKLESS_CONFIG=y +CONFIG_PCI_IOV=y +CONFIG_PCI_PRI=y +CONFIG_PCI_PASID=y +# CONFIG_PCI_P2PDMA is not set +CONFIG_PCI_LABEL=y +CONFIG_PCI_HYPERV=y +# CONFIG_PCIE_BUS_TUNE_OFF is not set +CONFIG_PCIE_BUS_DEFAULT=y +# CONFIG_PCIE_BUS_SAFE is not set +# CONFIG_PCIE_BUS_PERFORMANCE is not set +# CONFIG_PCIE_BUS_PEER2PEER is not set +# CONFIG_VGA_ARB is not set +# CONFIG_HOTPLUG_PCI is not set + +# +# PCI controller drivers +# +# CONFIG_VMD is not set +CONFIG_PCI_HYPERV_INTERFACE=y + +# +# Cadence-based PCIe controllers +# +# end of Cadence-based PCIe controllers + +# +# DesignWare-based PCIe controllers +# +# CONFIG_PCI_MESON is not set +# CONFIG_PCIE_DW_PLAT_HOST is not set +# end of DesignWare-based PCIe controllers + +# +# Mobiveil-based PCIe controllers +# +# end of Mobiveil-based PCIe controllers +# end of PCI controller drivers + +# +# PCI Endpoint +# +# CONFIG_PCI_ENDPOINT is not set +# end of PCI Endpoint + +# +# PCI switch controller drivers +# +# CONFIG_PCI_SW_SWITCHTEC is not set +# end of PCI switch controller drivers + +# CONFIG_CXL_BUS is not set +# CONFIG_PCCARD is not set +# CONFIG_RAPIDIO is not set + +# +# Generic Driver Options +# +CONFIG_UEVENT_HELPER=y +CONFIG_UEVENT_HELPER_PATH="" +CONFIG_DEVTMPFS=y +CONFIG_DEVTMPFS_MOUNT=y +CONFIG_DEVTMPFS_SAFE=y +CONFIG_STANDALONE=y +CONFIG_PREVENT_FIRMWARE_BUILD=y + +# +# Firmware loader +# +CONFIG_FW_LOADER=y +CONFIG_FW_LOADER_PAGED_BUF=y +CONFIG_FW_LOADER_SYSFS=y +CONFIG_EXTRA_FIRMWARE="" +# CONFIG_FW_LOADER_USER_HELPER is not set +# CONFIG_FW_LOADER_COMPRESS is not set +CONFIG_FW_UPLOAD=y +# end of Firmware loader + +CONFIG_ALLOW_DEV_COREDUMP=y +# CONFIG_DEBUG_DRIVER is not set +# CONFIG_DEBUG_DEVRES is not set +# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set +# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set +CONFIG_GENERIC_CPU_AUTOPROBE=y +CONFIG_GENERIC_CPU_VULNERABILITIES=y +CONFIG_DMA_SHARED_BUFFER=y +# CONFIG_DMA_FENCE_TRACE is not set +# CONFIG_FW_DEVLINK_SYNC_STATE_TIMEOUT is not set +# end of Generic Driver Options + +# +# Bus devices +# +# CONFIG_MHI_BUS is not set +# CONFIG_MHI_BUS_EP is not set +# end of Bus devices + +# +# Cache Drivers +# +# end of Cache Drivers + +CONFIG_CONNECTOR=y +CONFIG_PROC_EVENTS=y + +# +# Firmware Drivers +# + +# +# ARM System Control and Management Interface Protocol +# +# end of ARM System Control and 
Management Interface Protocol + +# CONFIG_EDD is not set +CONFIG_FIRMWARE_MEMMAP=y +# CONFIG_DMIID is not set +# CONFIG_DMI_SYSFS is not set +CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y +# CONFIG_ISCSI_IBFT is not set +# CONFIG_FW_CFG_SYSFS is not set +# CONFIG_SYSFB_SIMPLEFB is not set +# CONFIG_GOOGLE_FIRMWARE is not set + +# +# EFI (Extensible Firmware Interface) Support +# +CONFIG_EFI_ESRT=y +CONFIG_EFI_DXE_MEM_ATTRIBUTES=y +CONFIG_EFI_RUNTIME_WRAPPERS=y +# CONFIG_EFI_BOOTLOADER_CONTROL is not set +# CONFIG_EFI_CAPSULE_LOADER is not set +# CONFIG_EFI_TEST is not set +# CONFIG_APPLE_PROPERTIES is not set +CONFIG_RESET_ATTACK_MITIGATION=y +# CONFIG_EFI_RCI2_TABLE is not set +# CONFIG_EFI_DISABLE_PCI_DMA is not set +CONFIG_EFI_EARLYCON=y +# CONFIG_EFI_CUSTOM_SSDT_OVERLAYS is not set +# CONFIG_EFI_DISABLE_RUNTIME is not set +CONFIG_EFI_COCO_SECRET=y +CONFIG_UNACCEPTED_MEMORY=y +# end of EFI (Extensible Firmware Interface) Support + +# +# Tegra firmware driver +# +# end of Tegra firmware driver +# end of Firmware Drivers + +# CONFIG_GNSS is not set +# CONFIG_MTD is not set +# CONFIG_OF is not set +CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y +# CONFIG_PARPORT is not set +CONFIG_PNP=y +# CONFIG_PNP_DEBUG_MESSAGES is not set + +# +# Protocols +# +CONFIG_PNPACPI=y +CONFIG_BLK_DEV=y +# CONFIG_BLK_DEV_NULL_BLK is not set +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set +CONFIG_BLK_DEV_LOOP=y +CONFIG_BLK_DEV_LOOP_MIN_COUNT=8 +# CONFIG_BLK_DEV_DRBD is not set +# CONFIG_BLK_DEV_NBD is not set +CONFIG_BLK_DEV_RAM=y +CONFIG_BLK_DEV_RAM_COUNT=16 +CONFIG_BLK_DEV_RAM_SIZE=65536 +# CONFIG_CDROM_PKTCDVD is not set +# CONFIG_ATA_OVER_ETH is not set +CONFIG_VIRTIO_BLK=y +# CONFIG_BLK_DEV_RBD is not set +# CONFIG_BLK_DEV_UBLK is not set + +# +# NVME Support +# +# CONFIG_BLK_DEV_NVME is not set +# CONFIG_NVME_FC is not set +# CONFIG_NVME_TCP is not set +# end of NVME Support + +# +# Misc devices +# +# CONFIG_AD525X_DPOT is not set +# CONFIG_DUMMY_IRQ is not set +# CONFIG_IBM_ASM is not set +# CONFIG_PHANTOM is not set +# CONFIG_TIFM_CORE is not set +# CONFIG_ICS932S401 is not set +# CONFIG_ENCLOSURE_SERVICES is not set +# CONFIG_HP_ILO is not set +# CONFIG_APDS9802ALS is not set +# CONFIG_ISL29003 is not set +# CONFIG_ISL29020 is not set +# CONFIG_SENSORS_TSL2550 is not set +# CONFIG_SENSORS_BH1770 is not set +# CONFIG_SENSORS_APDS990X is not set +# CONFIG_HMC6352 is not set +# CONFIG_DS1682 is not set +# CONFIG_SRAM is not set +# CONFIG_DW_XDATA_PCIE is not set +# CONFIG_PCI_ENDPOINT_TEST is not set +# CONFIG_XILINX_SDFEC is not set +# CONFIG_C2PORT is not set + +# +# EEPROM support +# +# CONFIG_EEPROM_AT24 is not set +# CONFIG_EEPROM_LEGACY is not set +# CONFIG_EEPROM_MAX6875 is not set +# CONFIG_EEPROM_93CX6 is not set +# CONFIG_EEPROM_IDT_89HPESX is not set +# CONFIG_EEPROM_EE1004 is not set +# end of EEPROM support + +# CONFIG_CB710_CORE is not set + +# +# Texas Instruments shared transport line discipline +# +# end of Texas Instruments shared transport line discipline + +# CONFIG_SENSORS_LIS3_I2C is not set +# CONFIG_ALTERA_STAPL is not set +# CONFIG_INTEL_MEI is not set +# CONFIG_INTEL_MEI_ME is not set +# CONFIG_INTEL_MEI_TXE is not set +# CONFIG_VMWARE_VMCI is not set +# CONFIG_GENWQE is not set +# CONFIG_ECHO is not set +# CONFIG_BCM_VK is not set +# CONFIG_MISC_ALCOR_PCI is not set +# CONFIG_MISC_RTSX_PCI is not set +# CONFIG_MISC_RTSX_USB is not set +# CONFIG_UACCE is not set +# CONFIG_PVPANIC is not set +# end of Misc devices + +# +# SCSI device support +# +CONFIG_SCSI_MOD=y +# 
CONFIG_RAID_ATTRS is not set +CONFIG_SCSI_COMMON=y +CONFIG_SCSI=y +CONFIG_SCSI_DMA=y +# CONFIG_SCSI_PROC_FS is not set + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +# CONFIG_CHR_DEV_ST is not set +# CONFIG_BLK_DEV_SR is not set +CONFIG_CHR_DEV_SG=y +CONFIG_BLK_DEV_BSG=y +# CONFIG_CHR_DEV_SCH is not set +# CONFIG_SCSI_CONSTANTS is not set +# CONFIG_SCSI_LOGGING is not set +# CONFIG_SCSI_SCAN_ASYNC is not set + +# +# SCSI Transports +# +# CONFIG_SCSI_SPI_ATTRS is not set +# CONFIG_SCSI_FC_ATTRS is not set +# CONFIG_SCSI_ISCSI_ATTRS is not set +# CONFIG_SCSI_SAS_ATTRS is not set +# CONFIG_SCSI_SAS_LIBSAS is not set +# CONFIG_SCSI_SRP_ATTRS is not set +# end of SCSI Transports + +CONFIG_SCSI_LOWLEVEL=y +# CONFIG_ISCSI_TCP is not set +# CONFIG_ISCSI_BOOT_SYSFS is not set +# CONFIG_SCSI_CXGB3_ISCSI is not set +# CONFIG_SCSI_CXGB4_ISCSI is not set +# CONFIG_SCSI_BNX2_ISCSI is not set +# CONFIG_BE2ISCSI is not set +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_HPSA is not set +# CONFIG_SCSI_3W_9XXX is not set +# CONFIG_SCSI_3W_SAS is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AACRAID is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC79XX is not set +# CONFIG_SCSI_AIC94XX is not set +# CONFIG_SCSI_MVSAS is not set +# CONFIG_SCSI_MVUMI is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_ARCMSR is not set +# CONFIG_SCSI_ESAS2R is not set +# CONFIG_MEGARAID_NEWGEN is not set +# CONFIG_MEGARAID_LEGACY is not set +# CONFIG_MEGARAID_SAS is not set +# CONFIG_SCSI_MPT3SAS is not set +# CONFIG_SCSI_MPT2SAS is not set +# CONFIG_SCSI_MPI3MR is not set +# CONFIG_SCSI_SMARTPQI is not set +# CONFIG_SCSI_HPTIOP is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_MYRB is not set +# CONFIG_SCSI_MYRS is not set +# CONFIG_VMWARE_PVSCSI is not set +CONFIG_HYPERV_STORAGE=y +# CONFIG_SCSI_SNIC is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_FDOMAIN_PCI is not set +# CONFIG_SCSI_ISCI is not set +# CONFIG_SCSI_IPS is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_STEX is not set +# CONFIG_SCSI_SYM53C8XX_2 is not set +# CONFIG_SCSI_IPR is not set +# CONFIG_SCSI_QLOGIC_1280 is not set +# CONFIG_SCSI_QLA_ISCSI is not set +# CONFIG_SCSI_DC395x is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_WD719X is not set +# CONFIG_SCSI_DEBUG is not set +# CONFIG_SCSI_PMCRAID is not set +# CONFIG_SCSI_PM8001 is not set +CONFIG_SCSI_VIRTIO=y +# CONFIG_SCSI_DH is not set +# end of SCSI device support + +# CONFIG_ATA is not set +CONFIG_MD=y +CONFIG_BLK_DEV_MD=y +# CONFIG_MD_AUTODETECT is not set +CONFIG_MD_BITMAP_FILE=y +# CONFIG_MD_LINEAR is not set +CONFIG_MD_RAID0=y +CONFIG_MD_RAID1=y +CONFIG_MD_RAID10=y +CONFIG_MD_RAID456=y +# CONFIG_MD_MULTIPATH is not set +# CONFIG_MD_FAULTY is not set +# CONFIG_BCACHE is not set +CONFIG_BLK_DEV_DM_BUILTIN=y +CONFIG_BLK_DEV_DM=y +# CONFIG_DM_DEBUG is not set +CONFIG_DM_BUFIO=y +# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set +CONFIG_DM_BIO_PRISON=y +CONFIG_DM_PERSISTENT_DATA=y +# CONFIG_DM_UNSTRIPED is not set +CONFIG_DM_CRYPT=y +# CONFIG_DM_SNAPSHOT is not set +CONFIG_DM_THIN_PROVISIONING=y +# CONFIG_DM_CACHE is not set +# CONFIG_DM_WRITECACHE is not set +# CONFIG_DM_EBS is not set +# CONFIG_DM_ERA is not set +# CONFIG_DM_CLONE is not set +# CONFIG_DM_MIRROR is not set +CONFIG_DM_RAID=y +# CONFIG_DM_ZERO is not set +# CONFIG_DM_MULTIPATH is not set +# CONFIG_DM_DELAY is not set +# CONFIG_DM_DUST is not set +# CONFIG_DM_INIT is not set +# CONFIG_DM_UEVENT 
is not set +# CONFIG_DM_FLAKEY is not set +CONFIG_DM_VERITY=y +CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y +# CONFIG_DM_VERITY_FEC is not set +# CONFIG_DM_SWITCH is not set +# CONFIG_DM_LOG_WRITES is not set +# CONFIG_DM_INTEGRITY is not set +# CONFIG_DM_AUDIT is not set +# CONFIG_TARGET_CORE is not set +# CONFIG_FUSION is not set + +# +# IEEE 1394 (FireWire) support +# +# CONFIG_FIREWIRE is not set +# CONFIG_FIREWIRE_NOSY is not set +# end of IEEE 1394 (FireWire) support + +# CONFIG_MACINTOSH_DRIVERS is not set +CONFIG_NETDEVICES=y +CONFIG_MII=y +CONFIG_NET_CORE=y +CONFIG_BONDING=y +CONFIG_DUMMY=y +CONFIG_WIREGUARD=y +# CONFIG_WIREGUARD_DEBUG is not set +# CONFIG_EQUALIZER is not set +# CONFIG_NET_FC is not set +# CONFIG_IFB is not set +CONFIG_NET_TEAM=y +# CONFIG_NET_TEAM_MODE_BROADCAST is not set +# CONFIG_NET_TEAM_MODE_ROUNDROBIN is not set +# CONFIG_NET_TEAM_MODE_RANDOM is not set +# CONFIG_NET_TEAM_MODE_ACTIVEBACKUP is not set +# CONFIG_NET_TEAM_MODE_LOADBALANCE is not set +CONFIG_MACVLAN=y +CONFIG_MACVTAP=y +CONFIG_IPVLAN_L3S=y +CONFIG_IPVLAN=y +CONFIG_IPVTAP=y +CONFIG_VXLAN=y +CONFIG_GENEVE=y +# CONFIG_BAREUDP is not set +# CONFIG_GTP is not set +# CONFIG_MACSEC is not set +# CONFIG_NETCONSOLE is not set +CONFIG_TUN=y +CONFIG_TAP=y +# CONFIG_TUN_VNET_CROSS_LE is not set +CONFIG_VETH=y +CONFIG_VIRTIO_NET=y +# CONFIG_NLMON is not set +# CONFIG_ARCNET is not set +CONFIG_ETHERNET=y +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_NET_VENDOR_ADAPTEC is not set +# CONFIG_NET_VENDOR_AGERE is not set +# CONFIG_NET_VENDOR_ALACRITECH is not set +# CONFIG_NET_VENDOR_ALTEON is not set +# CONFIG_ALTERA_TSE is not set +# CONFIG_NET_VENDOR_AMAZON is not set +# CONFIG_NET_VENDOR_AMD is not set +# CONFIG_NET_VENDOR_AQUANTIA is not set +# CONFIG_NET_VENDOR_ARC is not set +# CONFIG_NET_VENDOR_ASIX is not set +# CONFIG_NET_VENDOR_ATHEROS is not set +# CONFIG_CX_ECAT is not set +# CONFIG_NET_VENDOR_BROADCOM is not set +# CONFIG_NET_VENDOR_CADENCE is not set +# CONFIG_NET_VENDOR_CAVIUM is not set +# CONFIG_NET_VENDOR_CHELSIO is not set +# CONFIG_NET_VENDOR_CISCO is not set +# CONFIG_NET_VENDOR_CORTINA is not set +# CONFIG_NET_VENDOR_DAVICOM is not set +# CONFIG_DNET is not set +# CONFIG_NET_VENDOR_DEC is not set +# CONFIG_NET_VENDOR_DLINK is not set +# CONFIG_NET_VENDOR_EMULEX is not set +# CONFIG_NET_VENDOR_ENGLEDER is not set +# CONFIG_NET_VENDOR_EZCHIP is not set +# CONFIG_NET_VENDOR_FUNGIBLE is not set +# CONFIG_NET_VENDOR_GOOGLE is not set +# CONFIG_NET_VENDOR_HUAWEI is not set +# CONFIG_NET_VENDOR_INTEL is not set +# CONFIG_JME is not set +# CONFIG_NET_VENDOR_LITEX is not set +# CONFIG_NET_VENDOR_MARVELL is not set +# CONFIG_NET_VENDOR_MELLANOX is not set +# CONFIG_NET_VENDOR_MICREL is not set +# CONFIG_NET_VENDOR_MICROCHIP is not set +# CONFIG_NET_VENDOR_MICROSEMI is not set +# CONFIG_NET_VENDOR_MICROSOFT is not set +# CONFIG_NET_VENDOR_MYRI is not set +# CONFIG_FEALNX is not set +# CONFIG_NET_VENDOR_NI is not set +# CONFIG_NET_VENDOR_NATSEMI is not set +# CONFIG_NET_VENDOR_NETERION is not set +# CONFIG_NET_VENDOR_NETRONOME is not set +# CONFIG_NET_VENDOR_NVIDIA is not set +# CONFIG_NET_VENDOR_OKI is not set +# CONFIG_ETHOC is not set +# CONFIG_NET_VENDOR_PACKET_ENGINES is not set +# CONFIG_NET_VENDOR_PENSANDO is not set +# CONFIG_NET_VENDOR_QLOGIC is not set +# CONFIG_NET_VENDOR_BROCADE is not set +# CONFIG_NET_VENDOR_QUALCOMM is not set +# CONFIG_NET_VENDOR_RDC is not set +# CONFIG_NET_VENDOR_REALTEK is not set +# CONFIG_NET_VENDOR_RENESAS is not set +# CONFIG_NET_VENDOR_ROCKER is not set +# 
CONFIG_NET_VENDOR_SAMSUNG is not set +# CONFIG_NET_VENDOR_SEEQ is not set +# CONFIG_NET_VENDOR_SILAN is not set +# CONFIG_NET_VENDOR_SIS is not set +# CONFIG_NET_VENDOR_SOLARFLARE is not set +# CONFIG_NET_VENDOR_SMSC is not set +# CONFIG_NET_VENDOR_SOCIONEXT is not set +# CONFIG_NET_VENDOR_STMICRO is not set +# CONFIG_NET_VENDOR_SUN is not set +# CONFIG_NET_VENDOR_SYNOPSYS is not set +# CONFIG_NET_VENDOR_TEHUTI is not set +# CONFIG_NET_VENDOR_TI is not set +# CONFIG_NET_VENDOR_VERTEXCOM is not set +# CONFIG_NET_VENDOR_VIA is not set +# CONFIG_NET_VENDOR_WANGXUN is not set +# CONFIG_NET_VENDOR_WIZNET is not set +# CONFIG_NET_VENDOR_XILINX is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_NET_SB1000 is not set +# CONFIG_PHYLIB is not set +# CONFIG_PSE_CONTROLLER is not set +# CONFIG_MDIO_DEVICE is not set + +# +# PCS device drivers +# +# end of PCS device drivers + +CONFIG_PPP=y +CONFIG_PPP_BSDCOMP=y +CONFIG_PPP_DEFLATE=y +CONFIG_PPP_FILTER=y +CONFIG_PPP_MPPE=y +CONFIG_PPP_MULTILINK=y +CONFIG_PPPOE=y +# CONFIG_PPPOE_HASH_BITS_1 is not set +# CONFIG_PPPOE_HASH_BITS_2 is not set +CONFIG_PPPOE_HASH_BITS_4=y +# CONFIG_PPPOE_HASH_BITS_8 is not set +CONFIG_PPPOE_HASH_BITS=4 +CONFIG_PPP_ASYNC=y +CONFIG_PPP_SYNC_TTY=y +# CONFIG_SLIP is not set +CONFIG_SLHC=y +CONFIG_USB_NET_DRIVERS=y +# CONFIG_USB_CATC is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_RTL8150 is not set +# CONFIG_USB_RTL8152 is not set +# CONFIG_USB_LAN78XX is not set +CONFIG_USB_USBNET=y +# CONFIG_USB_NET_AX8817X is not set +# CONFIG_USB_NET_AX88179_178A is not set +CONFIG_USB_NET_CDCETHER=y +# CONFIG_USB_NET_CDC_EEM is not set +CONFIG_USB_NET_CDC_NCM=y +# CONFIG_USB_NET_HUAWEI_CDC_NCM is not set +# CONFIG_USB_NET_CDC_MBIM is not set +# CONFIG_USB_NET_DM9601 is not set +# CONFIG_USB_NET_SR9700 is not set +# CONFIG_USB_NET_SR9800 is not set +# CONFIG_USB_NET_SMSC75XX is not set +# CONFIG_USB_NET_SMSC95XX is not set +# CONFIG_USB_NET_GL620A is not set +# CONFIG_USB_NET_NET1080 is not set +# CONFIG_USB_NET_PLUSB is not set +# CONFIG_USB_NET_MCS7830 is not set +# CONFIG_USB_NET_RNDIS_HOST is not set +# CONFIG_USB_NET_CDC_SUBSET is not set +# CONFIG_USB_NET_ZAURUS is not set +# CONFIG_USB_NET_CX82310_ETH is not set +# CONFIG_USB_NET_KALMIA is not set +# CONFIG_USB_NET_QMI_WWAN is not set +# CONFIG_USB_NET_INT51X1 is not set +# CONFIG_USB_IPHETH is not set +# CONFIG_USB_SIERRA_NET is not set +# CONFIG_USB_VL600 is not set +# CONFIG_USB_NET_CH9200 is not set +# CONFIG_USB_NET_AQC111 is not set +CONFIG_USB_RTL8153_ECM=y +# CONFIG_WLAN is not set +# CONFIG_WAN is not set + +# +# Wireless WAN +# +# CONFIG_WWAN is not set +# end of Wireless WAN + +# CONFIG_VMXNET3 is not set +# CONFIG_FUJITSU_ES is not set +CONFIG_HYPERV_NET=y +# CONFIG_NETDEVSIM is not set +CONFIG_NET_FAILOVER=y +# CONFIG_ISDN is not set + +# +# Input device support +# +CONFIG_INPUT=y +# CONFIG_INPUT_FF_MEMLESS is not set +# CONFIG_INPUT_SPARSEKMAP is not set +# CONFIG_INPUT_MATRIXKMAP is not set + +# +# Userland interfaces +# +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set +# CONFIG_INPUT_EVBUG is not set + +# +# Input Device Drivers +# +# CONFIG_INPUT_KEYBOARD is not set +# CONFIG_INPUT_MOUSE is not set +# CONFIG_INPUT_JOYSTICK is not set +# CONFIG_INPUT_TABLET is not set +# CONFIG_INPUT_TOUCHSCREEN is not set +# CONFIG_INPUT_MISC is not set +# CONFIG_RMI4_CORE is not set + +# +# Hardware I/O ports +# +CONFIG_SERIO=y +CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y +# 
CONFIG_SERIO_I8042 is not set +CONFIG_SERIO_SERPORT=y +# CONFIG_SERIO_CT82C710 is not set +# CONFIG_SERIO_PCIPS2 is not set +# CONFIG_SERIO_LIBPS2 is not set +CONFIG_SERIO_RAW=y +# CONFIG_SERIO_ALTERA_PS2 is not set +# CONFIG_SERIO_PS2MULT is not set +# CONFIG_SERIO_ARC_PS2 is not set +CONFIG_HYPERV_KEYBOARD=y +# CONFIG_USERIO is not set +# CONFIG_GAMEPORT is not set +# end of Hardware I/O ports +# end of Input device support + +# +# Character devices +# +CONFIG_TTY=y +CONFIG_VT=y +CONFIG_CONSOLE_TRANSLATIONS=y +CONFIG_VT_CONSOLE=y +CONFIG_HW_CONSOLE=y +# CONFIG_VT_HW_CONSOLE_BINDING is not set +CONFIG_UNIX98_PTYS=y +CONFIG_LEGACY_PTYS=y +CONFIG_LEGACY_PTY_COUNT=256 +CONFIG_LEGACY_TIOCSTI=y +# CONFIG_LDISC_AUTOLOAD is not set + +# +# Serial drivers +# +CONFIG_SERIAL_EARLYCON=y +CONFIG_SERIAL_8250=y +CONFIG_SERIAL_8250_DEPRECATED_OPTIONS=y +CONFIG_SERIAL_8250_PNP=y +# CONFIG_SERIAL_8250_16550A_VARIANTS is not set +# CONFIG_SERIAL_8250_FINTEK is not set +CONFIG_SERIAL_8250_CONSOLE=y +CONFIG_SERIAL_8250_PCILIB=y +CONFIG_SERIAL_8250_PCI=y +# CONFIG_SERIAL_8250_EXAR is not set +CONFIG_SERIAL_8250_NR_UARTS=32 +CONFIG_SERIAL_8250_RUNTIME_UARTS=4 +# CONFIG_SERIAL_8250_EXTENDED is not set +# CONFIG_SERIAL_8250_PCI1XXXX is not set +# CONFIG_SERIAL_8250_DW is not set +# CONFIG_SERIAL_8250_RT288X is not set +# CONFIG_SERIAL_8250_LPSS is not set +# CONFIG_SERIAL_8250_MID is not set +# CONFIG_SERIAL_8250_PERICOM is not set + +# +# Non-8250 serial port support +# +# CONFIG_SERIAL_UARTLITE is not set +CONFIG_SERIAL_CORE=y +CONFIG_SERIAL_CORE_CONSOLE=y +# CONFIG_SERIAL_JSM is not set +# CONFIG_SERIAL_LANTIQ is not set +# CONFIG_SERIAL_SCCNXP is not set +# CONFIG_SERIAL_SC16IS7XX is not set +# CONFIG_SERIAL_ALTERA_JTAGUART is not set +# CONFIG_SERIAL_ALTERA_UART is not set +# CONFIG_SERIAL_ARC is not set +# CONFIG_SERIAL_RP2 is not set +# CONFIG_SERIAL_FSL_LPUART is not set +# CONFIG_SERIAL_FSL_LINFLEXUART is not set +# CONFIG_SERIAL_SPRD is not set +# end of Serial drivers + +# CONFIG_SERIAL_NONSTANDARD is not set +# CONFIG_N_GSM is not set +# CONFIG_NOZOMI is not set +# CONFIG_NULL_TTY is not set +CONFIG_HVC_DRIVER=y +# CONFIG_SERIAL_DEV_BUS is not set +# CONFIG_TTY_PRINTK is not set +CONFIG_VIRTIO_CONSOLE=y +# CONFIG_IPMI_HANDLER is not set +# CONFIG_HW_RANDOM is not set +# CONFIG_APPLICOM is not set +# CONFIG_MWAVE is not set +CONFIG_DEVMEM=y +CONFIG_NVRAM=y +# CONFIG_DEVPORT is not set +# CONFIG_HPET is not set +# CONFIG_HANGCHECK_TIMER is not set +# CONFIG_TCG_TPM is not set +# CONFIG_TELCLOCK is not set +# CONFIG_XILLYBUS is not set +# CONFIG_XILLYUSB is not set +# end of Character devices + +# +# I2C support +# +CONFIG_I2C=y +# CONFIG_ACPI_I2C_OPREGION is not set +CONFIG_I2C_BOARDINFO=y +# CONFIG_I2C_COMPAT is not set +# CONFIG_I2C_CHARDEV is not set +# CONFIG_I2C_MUX is not set +# CONFIG_I2C_HELPER_AUTO is not set +# CONFIG_I2C_SMBUS is not set + +# +# I2C Algorithms +# +CONFIG_I2C_ALGOBIT=y +# CONFIG_I2C_ALGOPCF is not set +# CONFIG_I2C_ALGOPCA is not set +# end of I2C Algorithms + +# +# I2C Hardware Bus support +# + +# +# PC SMBus host controller drivers +# +# CONFIG_I2C_ALI1535 is not set +# CONFIG_I2C_ALI1563 is not set +# CONFIG_I2C_ALI15X3 is not set +# CONFIG_I2C_AMD756 is not set +# CONFIG_I2C_AMD8111 is not set +# CONFIG_I2C_AMD_MP2 is not set +# CONFIG_I2C_I801 is not set +# CONFIG_I2C_ISCH is not set +# CONFIG_I2C_ISMT is not set +# CONFIG_I2C_PIIX4 is not set +# CONFIG_I2C_NFORCE2 is not set +# CONFIG_I2C_NVIDIA_GPU is not set +# CONFIG_I2C_SIS5595 is not set +# CONFIG_I2C_SIS630 is not 
set +# CONFIG_I2C_SIS96X is not set +# CONFIG_I2C_VIA is not set +# CONFIG_I2C_VIAPRO is not set + +# +# ACPI drivers +# +# CONFIG_I2C_SCMI is not set + +# +# I2C system bus drivers (mostly embedded / system-on-chip) +# +# CONFIG_I2C_DESIGNWARE_PLATFORM is not set +# CONFIG_I2C_DESIGNWARE_PCI is not set +# CONFIG_I2C_EMEV2 is not set +# CONFIG_I2C_OCORES is not set +# CONFIG_I2C_PCA_PLATFORM is not set +# CONFIG_I2C_SIMTEC is not set +# CONFIG_I2C_XILINX is not set + +# +# External I2C/SMBus adapter drivers +# +# CONFIG_I2C_DIOLAN_U2C is not set +# CONFIG_I2C_CP2615 is not set +# CONFIG_I2C_PCI1XXXX is not set +# CONFIG_I2C_ROBOTFUZZ_OSIF is not set +# CONFIG_I2C_TAOS_EVM is not set +# CONFIG_I2C_TINY_USB is not set + +# +# Other I2C/SMBus bus drivers +# +# CONFIG_I2C_MLXCPLD is not set +# CONFIG_I2C_VIRTIO is not set +# end of I2C Hardware Bus support + +# CONFIG_I2C_STUB is not set +# CONFIG_I2C_SLAVE is not set +# CONFIG_I2C_DEBUG_CORE is not set +# CONFIG_I2C_DEBUG_ALGO is not set +# CONFIG_I2C_DEBUG_BUS is not set +# end of I2C support + +# CONFIG_I3C is not set +# CONFIG_SPI is not set +# CONFIG_SPMI is not set +# CONFIG_HSI is not set +CONFIG_PPS=y +# CONFIG_PPS_DEBUG is not set + +# +# PPS clients support +# +# CONFIG_PPS_CLIENT_KTIMER is not set +# CONFIG_PPS_CLIENT_LDISC is not set +# CONFIG_PPS_CLIENT_GPIO is not set + +# +# PPS generators support +# + +# +# PTP clock support +# +CONFIG_PTP_1588_CLOCK=y +CONFIG_PTP_1588_CLOCK_OPTIONAL=y + +# +# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks. +# +# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set +# CONFIG_PTP_1588_CLOCK_IDTCM is not set +# CONFIG_PTP_1588_CLOCK_MOCK is not set +# CONFIG_PTP_1588_CLOCK_VMW is not set +# end of PTP clock support + +# CONFIG_PINCTRL is not set +# CONFIG_GPIOLIB is not set +# CONFIG_W1 is not set +# CONFIG_POWER_RESET is not set +CONFIG_POWER_SUPPLY=y +# CONFIG_POWER_SUPPLY_DEBUG is not set +# CONFIG_IP5XXX_POWER is not set +# CONFIG_TEST_POWER is not set +# CONFIG_CHARGER_ADP5061 is not set +# CONFIG_BATTERY_CW2015 is not set +# CONFIG_BATTERY_DS2780 is not set +# CONFIG_BATTERY_DS2781 is not set +# CONFIG_BATTERY_DS2782 is not set +# CONFIG_BATTERY_SAMSUNG_SDI is not set +# CONFIG_BATTERY_SBS is not set +# CONFIG_CHARGER_SBS is not set +# CONFIG_BATTERY_BQ27XXX is not set +# CONFIG_BATTERY_MAX17040 is not set +# CONFIG_BATTERY_MAX17042 is not set +# CONFIG_CHARGER_MAX8903 is not set +# CONFIG_CHARGER_LP8727 is not set +# CONFIG_CHARGER_LTC4162L is not set +# CONFIG_CHARGER_MAX77976 is not set +# CONFIG_CHARGER_BQ2415X is not set +# CONFIG_BATTERY_GAUGE_LTC2941 is not set +# CONFIG_BATTERY_GOLDFISH is not set +# CONFIG_BATTERY_RT5033 is not set +# CONFIG_CHARGER_BD99954 is not set +# CONFIG_BATTERY_UG3105 is not set +# CONFIG_HWMON is not set +CONFIG_THERMAL=y +# CONFIG_THERMAL_NETLINK is not set +# CONFIG_THERMAL_STATISTICS is not set +CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0 +# CONFIG_THERMAL_WRITABLE_TRIPS is not set +CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y +# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set +# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set +# CONFIG_THERMAL_GOV_FAIR_SHARE is not set +CONFIG_THERMAL_GOV_STEP_WISE=y +# CONFIG_THERMAL_GOV_BANG_BANG is not set +# CONFIG_THERMAL_GOV_USER_SPACE is not set +# CONFIG_THERMAL_EMULATION is not set + +# +# Intel thermal drivers +# +# CONFIG_INTEL_POWERCLAMP is not set +CONFIG_X86_THERMAL_VECTOR=y +# CONFIG_X86_PKG_TEMP_THERMAL is not set +# CONFIG_INTEL_SOC_DTS_THERMAL is not set + +# +# ACPI INT340X thermal drivers 
+# +# CONFIG_INT340X_THERMAL is not set +# end of ACPI INT340X thermal drivers + +# CONFIG_INTEL_PCH_THERMAL is not set +# CONFIG_INTEL_TCC_COOLING is not set +# CONFIG_INTEL_HFI_THERMAL is not set +# end of Intel thermal drivers + +# CONFIG_WATCHDOG is not set +CONFIG_SSB_POSSIBLE=y +# CONFIG_SSB is not set +CONFIG_BCMA_POSSIBLE=y +# CONFIG_BCMA is not set + +# +# Multifunction device drivers +# +# CONFIG_MFD_AS3711 is not set +# CONFIG_MFD_SMPRO is not set +# CONFIG_PMIC_ADP5520 is not set +# CONFIG_MFD_BCM590XX is not set +# CONFIG_MFD_BD9571MWV is not set +# CONFIG_MFD_AXP20X_I2C is not set +# CONFIG_MFD_CS42L43_I2C is not set +# CONFIG_MFD_MADERA is not set +# CONFIG_PMIC_DA903X is not set +# CONFIG_MFD_DA9052_I2C is not set +# CONFIG_MFD_DA9055 is not set +# CONFIG_MFD_DA9062 is not set +# CONFIG_MFD_DA9063 is not set +# CONFIG_MFD_DA9150 is not set +# CONFIG_MFD_DLN2 is not set +# CONFIG_MFD_MC13XXX_I2C is not set +# CONFIG_MFD_MP2629 is not set +# CONFIG_MFD_INTEL_QUARK_I2C_GPIO is not set +# CONFIG_LPC_ICH is not set +# CONFIG_LPC_SCH is not set +# CONFIG_MFD_INTEL_LPSS_ACPI is not set +# CONFIG_MFD_INTEL_LPSS_PCI is not set +# CONFIG_MFD_IQS62X is not set +# CONFIG_MFD_JANZ_CMODIO is not set +# CONFIG_MFD_KEMPLD is not set +# CONFIG_MFD_88PM800 is not set +# CONFIG_MFD_88PM805 is not set +# CONFIG_MFD_88PM860X is not set +# CONFIG_MFD_MAX14577 is not set +# CONFIG_MFD_MAX77541 is not set +# CONFIG_MFD_MAX77693 is not set +# CONFIG_MFD_MAX77843 is not set +# CONFIG_MFD_MAX8907 is not set +# CONFIG_MFD_MAX8925 is not set +# CONFIG_MFD_MAX8997 is not set +# CONFIG_MFD_MAX8998 is not set +# CONFIG_MFD_MT6360 is not set +# CONFIG_MFD_MT6370 is not set +# CONFIG_MFD_MT6397 is not set +# CONFIG_MFD_MENF21BMC is not set +# CONFIG_MFD_VIPERBOARD is not set +# CONFIG_MFD_RETU is not set +# CONFIG_MFD_PCF50633 is not set +# CONFIG_MFD_SY7636A is not set +# CONFIG_MFD_RDC321X is not set +# CONFIG_MFD_RT4831 is not set +# CONFIG_MFD_RT5033 is not set +# CONFIG_MFD_RT5120 is not set +# CONFIG_MFD_RC5T583 is not set +# CONFIG_MFD_SI476X_CORE is not set +# CONFIG_MFD_SM501 is not set +# CONFIG_MFD_SKY81452 is not set +# CONFIG_MFD_SYSCON is not set +# CONFIG_MFD_TI_AM335X_TSCADC is not set +# CONFIG_MFD_LP3943 is not set +# CONFIG_MFD_LP8788 is not set +# CONFIG_MFD_TI_LMU is not set +# CONFIG_MFD_PALMAS is not set +# CONFIG_TPS6105X is not set +# CONFIG_TPS6507X is not set +# CONFIG_MFD_TPS65086 is not set +# CONFIG_MFD_TPS65090 is not set +# CONFIG_MFD_TI_LP873X is not set +# CONFIG_MFD_TPS6586X is not set +# CONFIG_MFD_TPS65912_I2C is not set +# CONFIG_MFD_TPS6594_I2C is not set +# CONFIG_TWL4030_CORE is not set +# CONFIG_TWL6040_CORE is not set +# CONFIG_MFD_WL1273_CORE is not set +# CONFIG_MFD_LM3533 is not set +# CONFIG_MFD_TQMX86 is not set +# CONFIG_MFD_VX855 is not set +# CONFIG_MFD_ARIZONA_I2C is not set +# CONFIG_MFD_WM8400 is not set +# CONFIG_MFD_WM831X_I2C is not set +# CONFIG_MFD_WM8350_I2C is not set +# CONFIG_MFD_WM8994 is not set +# CONFIG_MFD_ATC260X_I2C is not set +# end of Multifunction device drivers + +# CONFIG_REGULATOR is not set +# CONFIG_RC_CORE is not set + +# +# CEC support +# +# CONFIG_MEDIA_CEC_SUPPORT is not set +# end of CEC support + +# CONFIG_MEDIA_SUPPORT is not set + +# +# Graphics support +# +CONFIG_VIDEO_CMDLINE=y +CONFIG_VIDEO_NOMODESET=y +# CONFIG_AUXDISPLAY is not set +# CONFIG_AGP is not set +# CONFIG_VGA_SWITCHEROO is not set +CONFIG_DRM=y +# CONFIG_DRM_DEBUG_MM is not set +# CONFIG_DRM_DEBUG_MODESET_LOCK is not set +# CONFIG_DRM_FBDEV_EMULATION is not 
set +# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set +CONFIG_DRM_GEM_SHMEM_HELPER=y + +# +# ARM devices +# +# end of ARM devices + +# CONFIG_DRM_RADEON is not set +# CONFIG_DRM_AMDGPU is not set +# CONFIG_DRM_NOUVEAU is not set +# CONFIG_DRM_I915 is not set +CONFIG_DRM_VGEM=y +# CONFIG_DRM_VKMS is not set +# CONFIG_DRM_VMWGFX is not set +# CONFIG_DRM_GMA500 is not set +# CONFIG_DRM_UDL is not set +# CONFIG_DRM_AST is not set +# CONFIG_DRM_MGAG200 is not set +# CONFIG_DRM_QXL is not set +# CONFIG_DRM_VIRTIO_GPU is not set +CONFIG_DRM_PANEL=y + +# +# Display Panels +# +# end of Display Panels + +CONFIG_DRM_BRIDGE=y +CONFIG_DRM_PANEL_BRIDGE=y + +# +# Display Interface Bridges +# +# CONFIG_DRM_ANALOGIX_ANX78XX is not set +# end of Display Interface Bridges + +# CONFIG_DRM_LOONGSON is not set +# CONFIG_DRM_ETNAVIV is not set +# CONFIG_DRM_BOCHS is not set +# CONFIG_DRM_CIRRUS_QEMU is not set +# CONFIG_DRM_GM12U320 is not set +# CONFIG_DRM_SIMPLEDRM is not set +# CONFIG_DRM_VBOXVIDEO is not set +# CONFIG_DRM_GUD is not set +# CONFIG_DRM_SSD130X is not set +# CONFIG_DRM_HYPERV is not set +# CONFIG_DRM_LEGACY is not set +CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y + +# +# Frame buffer Devices +# +# CONFIG_FB is not set +# end of Frame buffer Devices + +# +# Backlight & LCD device support +# +# CONFIG_LCD_CLASS_DEVICE is not set +# CONFIG_BACKLIGHT_CLASS_DEVICE is not set +# end of Backlight & LCD device support + +CONFIG_HDMI=y + +# +# Console display driver support +# +# CONFIG_VGA_CONSOLE is not set +CONFIG_DUMMY_CONSOLE=y +CONFIG_DUMMY_CONSOLE_COLUMNS=80 +CONFIG_DUMMY_CONSOLE_ROWS=25 +# end of Console display driver support +# end of Graphics support + +# CONFIG_DRM_ACCEL is not set +# CONFIG_SOUND is not set +CONFIG_HID_SUPPORT=y +CONFIG_HID=y +# CONFIG_HID_BATTERY_STRENGTH is not set +# CONFIG_HIDRAW is not set +# CONFIG_UHID is not set +CONFIG_HID_GENERIC=y + +# +# Special HID drivers +# +# CONFIG_HID_A4TECH is not set +# CONFIG_HID_ACCUTOUCH is not set +# CONFIG_HID_ACRUX is not set +# CONFIG_HID_APPLEIR is not set +# CONFIG_HID_AUREAL is not set +# CONFIG_HID_BELKIN is not set +# CONFIG_HID_BETOP_FF is not set +# CONFIG_HID_CHERRY is not set +# CONFIG_HID_CHICONY is not set +# CONFIG_HID_COUGAR is not set +# CONFIG_HID_MACALLY is not set +# CONFIG_HID_CMEDIA is not set +# CONFIG_HID_CREATIVE_SB0540 is not set +# CONFIG_HID_CYPRESS is not set +# CONFIG_HID_DRAGONRISE is not set +# CONFIG_HID_EMS_FF is not set +# CONFIG_HID_ELECOM is not set +# CONFIG_HID_ELO is not set +# CONFIG_HID_EVISION is not set +# CONFIG_HID_EZKEY is not set +# CONFIG_HID_GEMBIRD is not set +# CONFIG_HID_GFRM is not set +# CONFIG_HID_GLORIOUS is not set +# CONFIG_HID_HOLTEK is not set +# CONFIG_HID_GOOGLE_STADIA_FF is not set +# CONFIG_HID_VIVALDI is not set +# CONFIG_HID_KEYTOUCH is not set +# CONFIG_HID_KYE is not set +# CONFIG_HID_UCLOGIC is not set +# CONFIG_HID_WALTOP is not set +# CONFIG_HID_VIEWSONIC is not set +# CONFIG_HID_VRC2 is not set +# CONFIG_HID_XIAOMI is not set +# CONFIG_HID_GYRATION is not set +# CONFIG_HID_ICADE is not set +# CONFIG_HID_ITE is not set +# CONFIG_HID_JABRA is not set +# CONFIG_HID_TWINHAN is not set +# CONFIG_HID_KENSINGTON is not set +# CONFIG_HID_LCPOWER is not set +# CONFIG_HID_LENOVO is not set +# CONFIG_HID_LETSKETCH is not set +# CONFIG_HID_MAGICMOUSE is not set +# CONFIG_HID_MALTRON is not set +# CONFIG_HID_MAYFLASH is not set +# CONFIG_HID_MEGAWORLD_FF is not set +# CONFIG_HID_REDRAGON is not set +# CONFIG_HID_MICROSOFT is not set +# CONFIG_HID_MONTEREY is not set +# 
CONFIG_HID_MULTITOUCH is not set +# CONFIG_HID_NTI is not set +# CONFIG_HID_NTRIG is not set +# CONFIG_HID_ORTEK is not set +# CONFIG_HID_PANTHERLORD is not set +# CONFIG_HID_PENMOUNT is not set +# CONFIG_HID_PETALYNX is not set +# CONFIG_HID_PICOLCD is not set +# CONFIG_HID_PLANTRONICS is not set +# CONFIG_HID_PXRC is not set +# CONFIG_HID_RAZER is not set +# CONFIG_HID_PRIMAX is not set +# CONFIG_HID_RETRODE is not set +# CONFIG_HID_ROCCAT is not set +# CONFIG_HID_SAITEK is not set +# CONFIG_HID_SAMSUNG is not set +# CONFIG_HID_SEMITEK is not set +# CONFIG_HID_SIGMAMICRO is not set +# CONFIG_HID_SPEEDLINK is not set +# CONFIG_HID_STEAM is not set +# CONFIG_HID_STEELSERIES is not set +# CONFIG_HID_SUNPLUS is not set +# CONFIG_HID_RMI is not set +# CONFIG_HID_GREENASIA is not set +# CONFIG_HID_HYPERV_MOUSE is not set +# CONFIG_HID_SMARTJOYPLUS is not set +# CONFIG_HID_TIVO is not set +# CONFIG_HID_TOPSEED is not set +# CONFIG_HID_TOPRE is not set +# CONFIG_HID_THRUSTMASTER is not set +# CONFIG_HID_UDRAW_PS3 is not set +# CONFIG_HID_WACOM is not set +# CONFIG_HID_XINMO is not set +# CONFIG_HID_ZEROPLUS is not set +# CONFIG_HID_ZYDACRON is not set +# CONFIG_HID_SENSOR_HUB is not set +# CONFIG_HID_ALPS is not set +# CONFIG_HID_MCP2221 is not set +# end of Special HID drivers + +# +# HID-BPF support +# +# CONFIG_HID_BPF is not set +# end of HID-BPF support + +# +# USB HID support +# +CONFIG_USB_HID=y +# CONFIG_HID_PID is not set +# CONFIG_USB_HIDDEV is not set +# end of USB HID support + +CONFIG_I2C_HID=y +# CONFIG_I2C_HID_ACPI is not set +# CONFIG_I2C_HID_OF is not set + +# +# Intel ISH HID support +# +# CONFIG_INTEL_ISH_HID is not set +# end of Intel ISH HID support + +# +# AMD SFH HID Support +# +# CONFIG_AMD_SFH_HID is not set +# end of AMD SFH HID Support + +CONFIG_USB_OHCI_LITTLE_ENDIAN=y +CONFIG_USB_SUPPORT=y +CONFIG_USB_COMMON=y +# CONFIG_USB_ULPI_BUS is not set +CONFIG_USB_ARCH_HAS_HCD=y +CONFIG_USB=y +# CONFIG_USB_PCI is not set +CONFIG_USB_ANNOUNCE_NEW_DEVICES=y + +# +# Miscellaneous USB options +# +CONFIG_USB_DEFAULT_PERSIST=y +# CONFIG_USB_FEW_INIT_RETRIES is not set +# CONFIG_USB_DYNAMIC_MINORS is not set +# CONFIG_USB_OTG_PRODUCTLIST is not set +# CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB is not set +CONFIG_USB_AUTOSUSPEND_DELAY=2 +# CONFIG_USB_MON is not set + +# +# USB Host Controller Drivers +# +# CONFIG_USB_C67X00_HCD is not set +# CONFIG_USB_XHCI_HCD is not set +# CONFIG_USB_EHCI_HCD is not set +# CONFIG_USB_OXU210HP_HCD is not set +# CONFIG_USB_ISP116X_HCD is not set +# CONFIG_USB_OHCI_HCD is not set +# CONFIG_USB_SL811_HCD is not set +# CONFIG_USB_R8A66597_HCD is not set +# CONFIG_USB_HCD_TEST_MODE is not set + +# +# USB Device Class drivers +# +CONFIG_USB_ACM=y +# CONFIG_USB_PRINTER is not set +# CONFIG_USB_WDM is not set +# CONFIG_USB_TMC is not set + +# +# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may +# + +# +# also be needed; see USB_STORAGE Help for more info +# +# CONFIG_USB_STORAGE is not set + +# +# USB Imaging devices +# +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_MICROTEK is not set +CONFIG_USBIP_CORE=y +CONFIG_USBIP_VHCI_HCD=y +CONFIG_USBIP_VHCI_HC_PORTS=8 +CONFIG_USBIP_VHCI_NR_HCS=1 +# CONFIG_USBIP_HOST is not set +# CONFIG_USBIP_DEBUG is not set + +# +# USB dual-mode controller drivers +# +# CONFIG_USB_CDNS_SUPPORT is not set +# CONFIG_USB_MUSB_HDRC is not set +# CONFIG_USB_DWC3 is not set +# CONFIG_USB_DWC2 is not set +# CONFIG_USB_ISP1760 is not set + +# +# USB port drivers +# +CONFIG_USB_SERIAL=y +# CONFIG_USB_SERIAL_CONSOLE is not set +# 
CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_SIMPLE is not set +# CONFIG_USB_SERIAL_AIRCABLE is not set +# CONFIG_USB_SERIAL_ARK3116 is not set +# CONFIG_USB_SERIAL_BELKIN is not set +CONFIG_USB_SERIAL_CH341=y +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +CONFIG_USB_SERIAL_CP210X=y +# CONFIG_USB_SERIAL_CYPRESS_M8 is not set +# CONFIG_USB_SERIAL_EMPEG is not set +CONFIG_USB_SERIAL_FTDI_SIO=y +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IPAQ is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_EDGEPORT_TI is not set +# CONFIG_USB_SERIAL_F81232 is not set +# CONFIG_USB_SERIAL_F8153X is not set +# CONFIG_USB_SERIAL_GARMIN is not set +# CONFIG_USB_SERIAL_IPW is not set +# CONFIG_USB_SERIAL_IUU is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KLSI is not set +# CONFIG_USB_SERIAL_KOBIL_SCT is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_METRO is not set +# CONFIG_USB_SERIAL_MOS7720 is not set +# CONFIG_USB_SERIAL_MOS7840 is not set +# CONFIG_USB_SERIAL_MXUPORT is not set +# CONFIG_USB_SERIAL_NAVMAN is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_OTI6858 is not set +# CONFIG_USB_SERIAL_QCAUX is not set +# CONFIG_USB_SERIAL_QUALCOMM is not set +# CONFIG_USB_SERIAL_SPCP8X5 is not set +# CONFIG_USB_SERIAL_SAFE is not set +# CONFIG_USB_SERIAL_SIERRAWIRELESS is not set +# CONFIG_USB_SERIAL_SYMBOL is not set +# CONFIG_USB_SERIAL_TI is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_OPTION is not set +# CONFIG_USB_SERIAL_OMNINET is not set +# CONFIG_USB_SERIAL_OPTICON is not set +# CONFIG_USB_SERIAL_XSENS_MT is not set +# CONFIG_USB_SERIAL_WISHBONE is not set +# CONFIG_USB_SERIAL_SSU100 is not set +# CONFIG_USB_SERIAL_QT2 is not set +# CONFIG_USB_SERIAL_UPD78F0730 is not set +# CONFIG_USB_SERIAL_XR is not set +# CONFIG_USB_SERIAL_DEBUG is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_EMI62 is not set +# CONFIG_USB_EMI26 is not set +# CONFIG_USB_ADUTUX is not set +# CONFIG_USB_SEVSEG is not set +# CONFIG_USB_LEGOTOWER is not set +# CONFIG_USB_LCD is not set +# CONFIG_USB_CYPRESS_CY7C63 is not set +# CONFIG_USB_CYTHERM is not set +# CONFIG_USB_IDMOUSE is not set +# CONFIG_USB_APPLEDISPLAY is not set +# CONFIG_APPLE_MFI_FASTCHARGE is not set +# CONFIG_USB_LD is not set +# CONFIG_USB_TRANCEVIBRATOR is not set +# CONFIG_USB_IOWARRIOR is not set +# CONFIG_USB_TEST is not set +# CONFIG_USB_EHSET_TEST_FIXTURE is not set +# CONFIG_USB_ISIGHTFW is not set +# CONFIG_USB_YUREX is not set +# CONFIG_USB_EZUSB_FX2 is not set +# CONFIG_USB_HUB_USB251XB is not set +# CONFIG_USB_HSIC_USB3503 is not set +# CONFIG_USB_HSIC_USB4604 is not set +# CONFIG_USB_LINK_LAYER_TEST is not set + +# +# USB Physical Layer drivers +# +# CONFIG_NOP_USB_XCEIV is not set +# CONFIG_USB_ISP1301 is not set +# end of USB Physical Layer drivers + +# CONFIG_USB_GADGET is not set +# CONFIG_TYPEC is not set +# CONFIG_USB_ROLE_SWITCH is not set +# CONFIG_MMC is not set +# CONFIG_SCSI_UFSHCD is not set +# CONFIG_MEMSTICK is not set +# CONFIG_NEW_LEDS is not set +# CONFIG_ACCESSIBILITY is not set +# CONFIG_INFINIBAND is not set +CONFIG_EDAC_ATOMIC_SCRUB=y +CONFIG_EDAC_SUPPORT=y +# CONFIG_EDAC is not set +CONFIG_RTC_LIB=y +CONFIG_RTC_MC146818_LIB=y +CONFIG_RTC_CLASS=y +CONFIG_RTC_HCTOSYS=y +CONFIG_RTC_HCTOSYS_DEVICE="rtc0" +CONFIG_RTC_SYSTOHC=y +CONFIG_RTC_SYSTOHC_DEVICE="rtc0" +# 
CONFIG_RTC_DEBUG is not set +CONFIG_RTC_NVMEM=y + +# +# RTC interfaces +# +CONFIG_RTC_INTF_SYSFS=y +CONFIG_RTC_INTF_PROC=y +CONFIG_RTC_INTF_DEV=y +CONFIG_RTC_INTF_DEV_UIE_EMUL=y +# CONFIG_RTC_DRV_TEST is not set + +# +# I2C RTC drivers +# +# CONFIG_RTC_DRV_ABB5ZES3 is not set +# CONFIG_RTC_DRV_ABEOZ9 is not set +# CONFIG_RTC_DRV_ABX80X is not set +# CONFIG_RTC_DRV_DS1307 is not set +# CONFIG_RTC_DRV_DS1374 is not set +# CONFIG_RTC_DRV_DS1672 is not set +# CONFIG_RTC_DRV_MAX6900 is not set +# CONFIG_RTC_DRV_RS5C372 is not set +# CONFIG_RTC_DRV_ISL1208 is not set +# CONFIG_RTC_DRV_ISL12022 is not set +# CONFIG_RTC_DRV_X1205 is not set +# CONFIG_RTC_DRV_PCF8523 is not set +# CONFIG_RTC_DRV_PCF85063 is not set +# CONFIG_RTC_DRV_PCF85363 is not set +# CONFIG_RTC_DRV_PCF8563 is not set +# CONFIG_RTC_DRV_PCF8583 is not set +# CONFIG_RTC_DRV_M41T80 is not set +# CONFIG_RTC_DRV_BQ32K is not set +# CONFIG_RTC_DRV_S35390A is not set +# CONFIG_RTC_DRV_FM3130 is not set +# CONFIG_RTC_DRV_RX8010 is not set +# CONFIG_RTC_DRV_RX8581 is not set +# CONFIG_RTC_DRV_RX8025 is not set +# CONFIG_RTC_DRV_EM3027 is not set +# CONFIG_RTC_DRV_RV3028 is not set +# CONFIG_RTC_DRV_RV3032 is not set +# CONFIG_RTC_DRV_RV8803 is not set +# CONFIG_RTC_DRV_SD3078 is not set + +# +# SPI RTC drivers +# +CONFIG_RTC_I2C_AND_SPI=y + +# +# SPI and I2C RTC drivers +# +# CONFIG_RTC_DRV_DS3232 is not set +# CONFIG_RTC_DRV_PCF2127 is not set +# CONFIG_RTC_DRV_RV3029C2 is not set +# CONFIG_RTC_DRV_RX6110 is not set + +# +# Platform RTC drivers +# +CONFIG_RTC_DRV_CMOS=y +# CONFIG_RTC_DRV_DS1286 is not set +# CONFIG_RTC_DRV_DS1511 is not set +# CONFIG_RTC_DRV_DS1553 is not set +# CONFIG_RTC_DRV_DS1685_FAMILY is not set +# CONFIG_RTC_DRV_DS1742 is not set +# CONFIG_RTC_DRV_DS2404 is not set +# CONFIG_RTC_DRV_STK17TA8 is not set +# CONFIG_RTC_DRV_M48T86 is not set +# CONFIG_RTC_DRV_M48T35 is not set +# CONFIG_RTC_DRV_M48T59 is not set +# CONFIG_RTC_DRV_MSM6242 is not set +# CONFIG_RTC_DRV_RP5C01 is not set + +# +# on-CPU RTC drivers +# +# CONFIG_RTC_DRV_FTRTC010 is not set + +# +# HID Sensor RTC drivers +# +# CONFIG_RTC_DRV_GOLDFISH is not set +# CONFIG_DMADEVICES is not set + +# +# DMABUF options +# +CONFIG_SYNC_FILE=y +# CONFIG_SW_SYNC is not set +# CONFIG_UDMABUF is not set +# CONFIG_DMABUF_MOVE_NOTIFY is not set +# CONFIG_DMABUF_DEBUG is not set +# CONFIG_DMABUF_SELFTESTS is not set +# CONFIG_DMABUF_HEAPS is not set +# CONFIG_DMABUF_SYSFS_STATS is not set +# end of DMABUF options + +CONFIG_UIO=y +# CONFIG_UIO_CIF is not set +CONFIG_UIO_PDRV_GENIRQ=y +CONFIG_UIO_DMEM_GENIRQ=y +# CONFIG_UIO_AEC is not set +# CONFIG_UIO_SERCOS3 is not set +# CONFIG_UIO_PCI_GENERIC is not set +# CONFIG_UIO_NETX is not set +# CONFIG_UIO_PRUSS is not set +# CONFIG_UIO_MF624 is not set +# CONFIG_UIO_HV_GENERIC is not set +CONFIG_VFIO=y +CONFIG_VFIO_GROUP=y +CONFIG_VFIO_CONTAINER=y +CONFIG_VFIO_IOMMU_TYPE1=y +# CONFIG_VFIO_NOIOMMU is not set +CONFIG_VFIO_VIRQFD=y + +# +# VFIO support for PCI devices +# +CONFIG_VFIO_PCI_CORE=y +CONFIG_VFIO_PCI_MMAP=y +CONFIG_VFIO_PCI_INTX=y +CONFIG_VFIO_PCI=y +# CONFIG_VFIO_PCI_IGD is not set +# end of VFIO support for PCI devices + +CONFIG_IRQ_BYPASS_MANAGER=y +# CONFIG_VIRT_DRIVERS is not set +CONFIG_VIRTIO_ANCHOR=y +CONFIG_VIRTIO=y +CONFIG_VIRTIO_PCI_LIB=y +CONFIG_VIRTIO_MENU=y +CONFIG_VIRTIO_PCI=y +# CONFIG_VIRTIO_PCI_LEGACY is not set +# CONFIG_VIRTIO_VDPA is not set +CONFIG_VIRTIO_PMEM=y +CONFIG_VIRTIO_BALLOON=y +CONFIG_VIRTIO_INPUT=y +CONFIG_VIRTIO_MMIO=y +# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set +CONFIG_VDPA=y 
+# CONFIG_VDPA_USER is not set +# CONFIG_IFCVF is not set +# CONFIG_MLX5_VDPA_STEERING_DEBUG is not set +# CONFIG_VP_VDPA is not set +# CONFIG_ALIBABA_ENI_VDPA is not set +# CONFIG_SNET_VDPA is not set +CONFIG_VHOST_IOTLB=y +CONFIG_VHOST_TASK=y +CONFIG_VHOST=y +CONFIG_VHOST_MENU=y +CONFIG_VHOST_NET=y +# CONFIG_VHOST_VSOCK is not set +CONFIG_VHOST_VDPA=y +# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set + +# +# Microsoft Hyper-V guest support +# +CONFIG_HYPERV=y +# CONFIG_HYPERV_VTL_MODE is not set +CONFIG_HYPERV_TIMER=y +CONFIG_HYPERV_UTILS=y +CONFIG_HYPERV_BALLOON=y +CONFIG_DXGKRNL=y +# end of Microsoft Hyper-V guest support + +# CONFIG_GREYBUS is not set +# CONFIG_COMEDI is not set +# CONFIG_STAGING is not set +# CONFIG_CHROME_PLATFORMS is not set +# CONFIG_MELLANOX_PLATFORM is not set +# CONFIG_SURFACE_PLATFORMS is not set +# CONFIG_X86_PLATFORM_DEVICES is not set +CONFIG_HAVE_CLK=y +CONFIG_HAVE_CLK_PREPARE=y +CONFIG_COMMON_CLK=y +# CONFIG_COMMON_CLK_MAX9485 is not set +# CONFIG_COMMON_CLK_SI5341 is not set +# CONFIG_COMMON_CLK_SI5351 is not set +# CONFIG_COMMON_CLK_SI544 is not set +# CONFIG_COMMON_CLK_CDCE706 is not set +# CONFIG_COMMON_CLK_CS2000_CP is not set +# CONFIG_XILINX_VCU is not set +# CONFIG_HWSPINLOCK is not set + +# +# Clock Source drivers +# +CONFIG_CLKEVT_I8253=y +CONFIG_I8253_LOCK=y +CONFIG_CLKBLD_I8253=y +# end of Clock Source drivers + +CONFIG_MAILBOX=y +CONFIG_PCC=y +# CONFIG_ALTERA_MBOX is not set +CONFIG_IOMMU_IOVA=y +CONFIG_IOMMU_API=y +CONFIG_IOMMU_SUPPORT=y + +# +# Generic IOMMU Pagetable Support +# +CONFIG_IOMMU_IO_PGTABLE=y +# end of Generic IOMMU Pagetable Support + +# CONFIG_IOMMU_DEBUGFS is not set +# CONFIG_IOMMU_DEFAULT_DMA_STRICT is not set +CONFIG_IOMMU_DEFAULT_DMA_LAZY=y +# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set +CONFIG_IOMMU_DMA=y +CONFIG_AMD_IOMMU=y +# CONFIG_AMD_IOMMU_V2 is not set +CONFIG_DMAR_TABLE=y +CONFIG_INTEL_IOMMU=y +# CONFIG_INTEL_IOMMU_SVM is not set +# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set +CONFIG_INTEL_IOMMU_FLOPPY_WA=y +# CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON is not set +CONFIG_INTEL_IOMMU_PERF_EVENTS=y +# CONFIG_IOMMUFD is not set +# CONFIG_IRQ_REMAP is not set +# CONFIG_HYPERV_IOMMU is not set +# CONFIG_VIRTIO_IOMMU is not set + +# +# Remoteproc drivers +# +# CONFIG_REMOTEPROC is not set +# end of Remoteproc drivers + +# +# Rpmsg drivers +# +# CONFIG_RPMSG_QCOM_GLINK_RPM is not set +# CONFIG_RPMSG_VIRTIO is not set +# end of Rpmsg drivers + +# CONFIG_SOUNDWIRE is not set + +# +# SOC (System On Chip) specific Drivers +# + +# +# Amlogic SoC drivers +# +# end of Amlogic SoC drivers + +# +# Broadcom SoC drivers +# +# end of Broadcom SoC drivers + +# +# NXP/Freescale QorIQ SoC drivers +# +# end of NXP/Freescale QorIQ SoC drivers + +# +# fujitsu SoC drivers +# +# end of fujitsu SoC drivers + +# +# i.MX SoC drivers +# +# end of i.MX SoC drivers + +# +# Enable LiteX SoC Builder specific drivers +# +# end of Enable LiteX SoC Builder specific drivers + +# CONFIG_WPCM450_SOC is not set + +# +# Qualcomm SoC drivers +# +# end of Qualcomm SoC drivers + +# CONFIG_SOC_TI is not set + +# +# Xilinx SoC drivers +# +# end of Xilinx SoC drivers +# end of SOC (System On Chip) specific Drivers + +# CONFIG_PM_DEVFREQ is not set +# CONFIG_EXTCON is not set +# CONFIG_MEMORY is not set +# CONFIG_IIO is not set +# CONFIG_NTB is not set +# CONFIG_PWM is not set + +# +# IRQ chip support +# +# end of IRQ chip support + +# CONFIG_IPACK_BUS is not set +# CONFIG_RESET_CONTROLLER is not set + +# +# PHY Subsystem +# +CONFIG_GENERIC_PHY=y +# 
CONFIG_USB_LGM_PHY is not set +# CONFIG_PHY_CAN_TRANSCEIVER is not set + +# +# PHY drivers for Broadcom platforms +# +# CONFIG_BCM_KONA_USB2_PHY is not set +# end of PHY drivers for Broadcom platforms + +# CONFIG_PHY_PXA_28NM_HSIC is not set +# CONFIG_PHY_PXA_28NM_USB2 is not set +# CONFIG_PHY_INTEL_LGM_EMMC is not set +# end of PHY Subsystem + +# CONFIG_POWERCAP is not set +# CONFIG_MCB is not set + +# +# Performance monitor support +# +# end of Performance monitor support + +CONFIG_RAS=y +# CONFIG_USB4 is not set + +# +# Android +# +# CONFIG_ANDROID_BINDER_IPC is not set +# end of Android + +CONFIG_LIBNVDIMM=y +CONFIG_BLK_DEV_PMEM=y +CONFIG_ND_CLAIM=y +CONFIG_ND_BTT=y +CONFIG_BTT=y +CONFIG_ND_PFN=y +CONFIG_NVDIMM_PFN=y +CONFIG_NVDIMM_DAX=y +CONFIG_DAX=y +CONFIG_DEV_DAX=y +CONFIG_DEV_DAX_PMEM=y +CONFIG_DEV_DAX_KMEM=y +CONFIG_NVMEM=y +# CONFIG_NVMEM_SYSFS is not set + +# +# Layout Types +# +# CONFIG_NVMEM_LAYOUT_SL28_VPD is not set +# CONFIG_NVMEM_LAYOUT_ONIE_TLV is not set +# end of Layout Types + +# CONFIG_NVMEM_RMEM is not set + +# +# HW tracing support +# +# CONFIG_STM is not set +# CONFIG_INTEL_TH is not set +# end of HW tracing support + +# CONFIG_FPGA is not set +# CONFIG_TEE is not set +# CONFIG_SIOX is not set +# CONFIG_SLIMBUS is not set +# CONFIG_INTERCONNECT is not set +# CONFIG_COUNTER is not set +# CONFIG_PECI is not set +# CONFIG_HTE is not set +# end of Device Drivers + +# +# File systems +# +CONFIG_DCACHE_WORD_ACCESS=y +# CONFIG_VALIDATE_FS_PARSER is not set +CONFIG_FS_IOMAP=y +CONFIG_BUFFER_HEAD=y +CONFIG_LEGACY_DIRECT_IO=y +# CONFIG_EXT2_FS is not set +# CONFIG_EXT3_FS is not set +CONFIG_EXT4_FS=y +CONFIG_EXT4_USE_FOR_EXT2=y +CONFIG_EXT4_FS_POSIX_ACL=y +CONFIG_EXT4_FS_SECURITY=y +# CONFIG_EXT4_DEBUG is not set +CONFIG_JBD2=y +# CONFIG_JBD2_DEBUG is not set +CONFIG_FS_MBCACHE=y +# CONFIG_REISERFS_FS is not set +# CONFIG_JFS_FS is not set +CONFIG_XFS_FS=y +# CONFIG_XFS_SUPPORT_V4 is not set +CONFIG_XFS_SUPPORT_ASCII_CI=y +CONFIG_XFS_QUOTA=y +CONFIG_XFS_POSIX_ACL=y +CONFIG_XFS_RT=y +CONFIG_XFS_DRAIN_INTENTS=y +CONFIG_XFS_ONLINE_SCRUB=y +CONFIG_XFS_ONLINE_SCRUB_STATS=y +CONFIG_XFS_ONLINE_REPAIR=y +CONFIG_XFS_DEBUG=y +CONFIG_XFS_ASSERT_FATAL=y +# CONFIG_GFS2_FS is not set +CONFIG_BTRFS_FS=y +CONFIG_BTRFS_FS_POSIX_ACL=y +# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set +# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set +# CONFIG_BTRFS_DEBUG is not set +# CONFIG_BTRFS_ASSERT is not set +# CONFIG_BTRFS_FS_REF_VERIFY is not set +# CONFIG_NILFS2_FS is not set +# CONFIG_F2FS_FS is not set +CONFIG_FS_DAX=y +CONFIG_FS_DAX_PMD=y +CONFIG_FS_POSIX_ACL=y +CONFIG_EXPORTFS=y +CONFIG_EXPORTFS_BLOCK_OPS=y +CONFIG_FILE_LOCKING=y +# CONFIG_FS_ENCRYPTION is not set +# CONFIG_FS_VERITY is not set +CONFIG_FSNOTIFY=y +CONFIG_DNOTIFY=y +CONFIG_INOTIFY_USER=y +CONFIG_FANOTIFY=y +CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y +CONFIG_QUOTA=y +# CONFIG_QUOTA_NETLINK_INTERFACE is not set +# CONFIG_QUOTA_DEBUG is not set +# CONFIG_QFMT_V1 is not set +# CONFIG_QFMT_V2 is not set +CONFIG_QUOTACTL=y +CONFIG_AUTOFS_FS=y +CONFIG_FUSE_FS=y +CONFIG_CUSE=y +CONFIG_VIRTIO_FS=y +CONFIG_FUSE_DAX=y +CONFIG_OVERLAY_FS=y +# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set +# CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW is not set +# CONFIG_OVERLAY_FS_INDEX is not set +# CONFIG_OVERLAY_FS_XINO_AUTO is not set +# CONFIG_OVERLAY_FS_METACOPY is not set +# CONFIG_OVERLAY_FS_DEBUG is not set + +# +# Caches +# +CONFIG_NETFS_SUPPORT=y +# CONFIG_NETFS_STATS is not set +CONFIG_FSCACHE=y +# CONFIG_FSCACHE_STATS is not set +# CONFIG_FSCACHE_DEBUG is not set 
+# CONFIG_CACHEFILES is not set +# end of Caches + +# +# CD-ROM/DVD Filesystems +# +CONFIG_ISO9660_FS=y +CONFIG_JOLIET=y +CONFIG_ZISOFS=y +CONFIG_UDF_FS=y +# end of CD-ROM/DVD Filesystems + +# +# DOS/FAT/EXFAT/NT Filesystems +# +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y +CONFIG_VFAT_FS=y +CONFIG_FAT_DEFAULT_CODEPAGE=437 +CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1" +# CONFIG_FAT_DEFAULT_UTF8 is not set +# CONFIG_EXFAT_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS3_FS is not set +# end of DOS/FAT/EXFAT/NT Filesystems + +# +# Pseudo filesystems +# +CONFIG_PROC_FS=y +CONFIG_PROC_KCORE=y +CONFIG_PROC_SYSCTL=y +CONFIG_PROC_PAGE_MONITOR=y +CONFIG_PROC_CHILDREN=y +CONFIG_PROC_PID_ARCH_STATUS=y +CONFIG_KERNFS=y +CONFIG_SYSFS=y +CONFIG_TMPFS=y +CONFIG_TMPFS_POSIX_ACL=y +CONFIG_TMPFS_XATTR=y +# CONFIG_TMPFS_INODE64 is not set +# CONFIG_TMPFS_QUOTA is not set +CONFIG_HUGETLBFS=y +CONFIG_HUGETLB_PAGE=y +CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=y +# CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON is not set +CONFIG_ARCH_HAS_GIGANTIC_PAGE=y +# CONFIG_CONFIGFS_FS is not set +# CONFIG_EFIVAR_FS is not set +# end of Pseudo filesystems + +CONFIG_MISC_FILESYSTEMS=y +# CONFIG_ORANGEFS_FS is not set +# CONFIG_ADFS_FS is not set +# CONFIG_AFFS_FS is not set +# CONFIG_ECRYPT_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_HFSPLUS_FS is not set +# CONFIG_BEFS_FS is not set +# CONFIG_BFS_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_SQUASHFS=y +# CONFIG_SQUASHFS_FILE_CACHE is not set +CONFIG_SQUASHFS_FILE_DIRECT=y +CONFIG_SQUASHFS_DECOMP_SINGLE=y +# CONFIG_SQUASHFS_CHOICE_DECOMP_BY_MOUNT is not set +CONFIG_SQUASHFS_COMPILE_DECOMP_SINGLE=y +# CONFIG_SQUASHFS_COMPILE_DECOMP_MULTI is not set +# CONFIG_SQUASHFS_COMPILE_DECOMP_MULTI_PERCPU is not set +CONFIG_SQUASHFS_XATTR=y +CONFIG_SQUASHFS_ZLIB=y +CONFIG_SQUASHFS_LZ4=y +CONFIG_SQUASHFS_LZO=y +CONFIG_SQUASHFS_XZ=y +CONFIG_SQUASHFS_ZSTD=y +# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set +# CONFIG_SQUASHFS_EMBEDDED is not set +CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3 +# CONFIG_VXFS_FS is not set +# CONFIG_MINIX_FS is not set +# CONFIG_OMFS_FS is not set +# CONFIG_HPFS_FS is not set +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX6FS_FS is not set +# CONFIG_ROMFS_FS is not set +# CONFIG_PSTORE is not set +# CONFIG_SYSV_FS is not set +# CONFIG_UFS_FS is not set +CONFIG_EROFS_FS=y +# CONFIG_EROFS_FS_DEBUG is not set +CONFIG_EROFS_FS_XATTR=y +CONFIG_EROFS_FS_POSIX_ACL=y +CONFIG_EROFS_FS_SECURITY=y +CONFIG_EROFS_FS_ZIP=y +# CONFIG_EROFS_FS_ZIP_LZMA is not set +# CONFIG_EROFS_FS_ZIP_DEFLATE is not set +# CONFIG_EROFS_FS_PCPU_KTHREAD is not set +CONFIG_NETWORK_FILESYSTEMS=y +CONFIG_NFS_FS=y +CONFIG_NFS_V2=y +CONFIG_NFS_V3=y +# CONFIG_NFS_V3_ACL is not set +CONFIG_NFS_V4=y +# CONFIG_NFS_SWAP is not set +CONFIG_NFS_V4_1=y +# CONFIG_NFS_V4_2 is not set +CONFIG_PNFS_FILE_LAYOUT=y +CONFIG_PNFS_BLOCK=y +CONFIG_PNFS_FLEXFILE_LAYOUT=y +CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org" +# CONFIG_NFS_V4_1_MIGRATION is not set +CONFIG_ROOT_NFS=y +# CONFIG_NFS_FSCACHE is not set +# CONFIG_NFS_USE_LEGACY_DNS is not set +CONFIG_NFS_USE_KERNEL_DNS=y +# CONFIG_NFS_DISABLE_UDP_SUPPORT is not set +CONFIG_NFSD=y +# CONFIG_NFSD_V2 is not set +CONFIG_NFSD_V3_ACL=y +CONFIG_NFSD_V4=y +CONFIG_NFSD_PNFS=y +CONFIG_NFSD_BLOCKLAYOUT=y +CONFIG_NFSD_SCSILAYOUT=y +CONFIG_NFSD_FLEXFILELAYOUT=y +CONFIG_NFSD_V4_SECURITY_LABEL=y +CONFIG_GRACE_PERIOD=y +CONFIG_LOCKD=y +CONFIG_LOCKD_V4=y +CONFIG_NFS_ACL_SUPPORT=y +CONFIG_NFS_COMMON=y +CONFIG_SUNRPC=y +CONFIG_SUNRPC_GSS=y 
+CONFIG_SUNRPC_BACKCHANNEL=y +CONFIG_RPCSEC_GSS_KRB5=y +CONFIG_RPCSEC_GSS_KRB5_ENCTYPES_AES_SHA1=y +# CONFIG_RPCSEC_GSS_KRB5_ENCTYPES_AES_SHA2 is not set +# CONFIG_SUNRPC_DEBUG is not set +CONFIG_CEPH_FS=y +CONFIG_CEPH_FSCACHE=y +CONFIG_CEPH_FS_POSIX_ACL=y +# CONFIG_CEPH_FS_SECURITY_LABEL is not set +CONFIG_CIFS=y +# CONFIG_CIFS_STATS2 is not set +CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y +# CONFIG_CIFS_UPCALL is not set +CONFIG_CIFS_XATTR=y +CONFIG_CIFS_POSIX=y +# CONFIG_CIFS_DEBUG is not set +# CONFIG_CIFS_DFS_UPCALL is not set +# CONFIG_CIFS_SWN_UPCALL is not set +# CONFIG_CIFS_FSCACHE is not set +# CONFIG_CIFS_ROOT is not set +# CONFIG_SMB_SERVER is not set +CONFIG_SMBFS=y +# CONFIG_CODA_FS is not set +# CONFIG_AFS_FS is not set +CONFIG_9P_FS=y +CONFIG_9P_FSCACHE=y +CONFIG_9P_FS_POSIX_ACL=y +CONFIG_9P_FS_SECURITY=y +CONFIG_NLS=y +CONFIG_NLS_DEFAULT="iso8859-1" +CONFIG_NLS_CODEPAGE_437=y +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# CONFIG_NLS_CODEPAGE_932 is not set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1250 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +CONFIG_NLS_ASCII=y +CONFIG_NLS_ISO8859_1=y +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_MAC_ROMAN is not set +# CONFIG_NLS_MAC_CELTIC is not set +# CONFIG_NLS_MAC_CENTEURO is not set +# CONFIG_NLS_MAC_CROATIAN is not set +# CONFIG_NLS_MAC_CYRILLIC is not set +# CONFIG_NLS_MAC_GAELIC is not set +# CONFIG_NLS_MAC_GREEK is not set +# CONFIG_NLS_MAC_ICELAND is not set +# CONFIG_NLS_MAC_INUIT is not set +# CONFIG_NLS_MAC_ROMANIAN is not set +# CONFIG_NLS_MAC_TURKISH is not set +CONFIG_NLS_UTF8=y +CONFIG_NLS_UCS2_UTILS=y +# CONFIG_UNICODE is not set +CONFIG_IO_WQ=y +# end of File systems + +# +# Security options +# +CONFIG_KEYS=y +# CONFIG_KEYS_REQUEST_CACHE is not set +# CONFIG_PERSISTENT_KEYRINGS is not set +# CONFIG_BIG_KEYS is not set +# CONFIG_TRUSTED_KEYS is not set +# CONFIG_ENCRYPTED_KEYS is not set +# CONFIG_KEY_DH_OPERATIONS is not set +CONFIG_SECURITY_DMESG_RESTRICT=y +CONFIG_SECURITY=y +# CONFIG_SECURITYFS is not set +# CONFIG_SECURITY_NETWORK is not set +CONFIG_SECURITY_PATH=y +# CONFIG_INTEL_TXT is not set +CONFIG_HARDENED_USERCOPY=y +CONFIG_FORTIFY_SOURCE=y +# CONFIG_STATIC_USERMODEHELPER is not set +# CONFIG_SECURITY_SMACK is not set +# CONFIG_SECURITY_TOMOYO is not set +# CONFIG_SECURITY_APPARMOR is not set +# CONFIG_SECURITY_LOADPIN is not set +# CONFIG_SECURITY_YAMA is not set +# CONFIG_SECURITY_SAFESETID is not set +# CONFIG_SECURITY_LOCKDOWN_LSM is not set +CONFIG_SECURITY_LANDLOCK=y +# 
CONFIG_INTEGRITY is not set +# CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT is not set +CONFIG_DEFAULT_SECURITY_DAC=y +CONFIG_LSM="landlock,lockdown,yama,loadpin,safesetid,integrity,bpf" + +# +# Kernel hardening options +# + +# +# Memory initialization +# +CONFIG_INIT_STACK_NONE=y +# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set +# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set +CONFIG_CC_HAS_ZERO_CALL_USED_REGS=y +CONFIG_ZERO_CALL_USED_REGS=y +# end of Memory initialization + +# +# Hardening of kernel data structures +# +CONFIG_LIST_HARDENED=y +# CONFIG_BUG_ON_DATA_CORRUPTION is not set +# end of Hardening of kernel data structures + +CONFIG_RANDSTRUCT_NONE=y +# end of Kernel hardening options +# end of Security options + +CONFIG_XOR_BLOCKS=y +CONFIG_ASYNC_CORE=y +CONFIG_ASYNC_MEMCPY=y +CONFIG_ASYNC_XOR=y +CONFIG_ASYNC_PQ=y +CONFIG_ASYNC_RAID6_RECOV=y +CONFIG_CRYPTO=y + +# +# Crypto core or helper +# +CONFIG_CRYPTO_ALGAPI=y +CONFIG_CRYPTO_ALGAPI2=y +CONFIG_CRYPTO_AEAD=y +CONFIG_CRYPTO_AEAD2=y +CONFIG_CRYPTO_SIG2=y +CONFIG_CRYPTO_SKCIPHER=y +CONFIG_CRYPTO_SKCIPHER2=y +CONFIG_CRYPTO_HASH=y +CONFIG_CRYPTO_HASH2=y +CONFIG_CRYPTO_RNG=y +CONFIG_CRYPTO_RNG2=y +CONFIG_CRYPTO_RNG_DEFAULT=y +CONFIG_CRYPTO_AKCIPHER2=y +CONFIG_CRYPTO_AKCIPHER=y +CONFIG_CRYPTO_KPP2=y +CONFIG_CRYPTO_ACOMP2=y +CONFIG_CRYPTO_MANAGER=y +CONFIG_CRYPTO_MANAGER2=y +# CONFIG_CRYPTO_USER is not set +CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y +CONFIG_CRYPTO_NULL=y +CONFIG_CRYPTO_NULL2=y +# CONFIG_CRYPTO_PCRYPT is not set +# CONFIG_CRYPTO_CRYPTD is not set +CONFIG_CRYPTO_AUTHENC=y +# CONFIG_CRYPTO_TEST is not set +# end of Crypto core or helper + +# +# Public-key cryptography +# +CONFIG_CRYPTO_RSA=y +# CONFIG_CRYPTO_DH is not set +# CONFIG_CRYPTO_ECDH is not set +# CONFIG_CRYPTO_ECDSA is not set +# CONFIG_CRYPTO_ECRDSA is not set +# CONFIG_CRYPTO_SM2 is not set +# CONFIG_CRYPTO_CURVE25519 is not set +# end of Public-key cryptography + +# +# Block ciphers +# +CONFIG_CRYPTO_AES=y +# CONFIG_CRYPTO_AES_TI is not set +# CONFIG_CRYPTO_ANUBIS is not set +# CONFIG_CRYPTO_ARIA is not set +# CONFIG_CRYPTO_BLOWFISH is not set +# CONFIG_CRYPTO_CAMELLIA is not set +# CONFIG_CRYPTO_CAST5 is not set +# CONFIG_CRYPTO_CAST6 is not set +CONFIG_CRYPTO_DES=y +# CONFIG_CRYPTO_FCRYPT is not set +# CONFIG_CRYPTO_KHAZAD is not set +# CONFIG_CRYPTO_SEED is not set +# CONFIG_CRYPTO_SERPENT is not set +# CONFIG_CRYPTO_SM4_GENERIC is not set +# CONFIG_CRYPTO_TEA is not set +# CONFIG_CRYPTO_TWOFISH is not set +# end of Block ciphers + +# +# Length-preserving ciphers and modes +# +# CONFIG_CRYPTO_ADIANTUM is not set +CONFIG_CRYPTO_ARC4=y +# CONFIG_CRYPTO_CHACHA20 is not set +CONFIG_CRYPTO_CBC=y +# CONFIG_CRYPTO_CFB is not set +CONFIG_CRYPTO_CTR=y +CONFIG_CRYPTO_CTS=y +CONFIG_CRYPTO_ECB=y +# CONFIG_CRYPTO_HCTR2 is not set +# CONFIG_CRYPTO_KEYWRAP is not set +# CONFIG_CRYPTO_LRW is not set +# CONFIG_CRYPTO_OFB is not set +# CONFIG_CRYPTO_PCBC is not set +CONFIG_CRYPTO_XTS=y +# end of Length-preserving ciphers and modes + +# +# AEAD (authenticated encryption with associated data) ciphers +# +# CONFIG_CRYPTO_AEGIS128 is not set +# CONFIG_CRYPTO_CHACHA20POLY1305 is not set +CONFIG_CRYPTO_CCM=y +CONFIG_CRYPTO_GCM=y +CONFIG_CRYPTO_GENIV=y +CONFIG_CRYPTO_SEQIV=y +CONFIG_CRYPTO_ECHAINIV=y +CONFIG_CRYPTO_ESSIV=y +# end of AEAD (authenticated encryption with associated data) ciphers + +# +# Hashes, digests, and MACs +# +CONFIG_CRYPTO_BLAKE2B=y +CONFIG_CRYPTO_CMAC=y +CONFIG_CRYPTO_GHASH=y +CONFIG_CRYPTO_HMAC=y +CONFIG_CRYPTO_MD4=y +CONFIG_CRYPTO_MD5=y +# CONFIG_CRYPTO_MICHAEL_MIC is not 
set +# CONFIG_CRYPTO_POLY1305 is not set +# CONFIG_CRYPTO_RMD160 is not set +CONFIG_CRYPTO_SHA1=y +CONFIG_CRYPTO_SHA256=y +CONFIG_CRYPTO_SHA512=y +CONFIG_CRYPTO_SHA3=y +# CONFIG_CRYPTO_SM3_GENERIC is not set +# CONFIG_CRYPTO_STREEBOG is not set +# CONFIG_CRYPTO_VMAC is not set +# CONFIG_CRYPTO_WP512 is not set +# CONFIG_CRYPTO_XCBC is not set +CONFIG_CRYPTO_XXHASH=y +# end of Hashes, digests, and MACs + +# +# CRCs (cyclic redundancy checks) +# +CONFIG_CRYPTO_CRC32C=y +# CONFIG_CRYPTO_CRC32 is not set +# CONFIG_CRYPTO_CRCT10DIF is not set +# end of CRCs (cyclic redundancy checks) + +# +# Compression +# +# CONFIG_CRYPTO_DEFLATE is not set +# CONFIG_CRYPTO_LZO is not set +# CONFIG_CRYPTO_842 is not set +# CONFIG_CRYPTO_LZ4 is not set +# CONFIG_CRYPTO_LZ4HC is not set +# CONFIG_CRYPTO_ZSTD is not set +# end of Compression + +# +# Random number generation +# +# CONFIG_CRYPTO_ANSI_CPRNG is not set +CONFIG_CRYPTO_DRBG_MENU=y +CONFIG_CRYPTO_DRBG_HMAC=y +# CONFIG_CRYPTO_DRBG_HASH is not set +# CONFIG_CRYPTO_DRBG_CTR is not set +CONFIG_CRYPTO_DRBG=y +CONFIG_CRYPTO_JITTERENTROPY=y +# CONFIG_CRYPTO_JITTERENTROPY_TESTINTERFACE is not set +# end of Random number generation + +# +# Userspace interface +# +CONFIG_CRYPTO_USER_API=y +CONFIG_CRYPTO_USER_API_HASH=y +CONFIG_CRYPTO_USER_API_SKCIPHER=y +# CONFIG_CRYPTO_USER_API_RNG is not set +# CONFIG_CRYPTO_USER_API_AEAD is not set +CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE=y +# end of Userspace interface + +CONFIG_CRYPTO_HASH_INFO=y + +# +# Accelerated Cryptographic Algorithms for CPU (x86) +# +CONFIG_CRYPTO_CURVE25519_X86=y +# CONFIG_CRYPTO_AES_NI_INTEL is not set +# CONFIG_CRYPTO_BLOWFISH_X86_64 is not set +# CONFIG_CRYPTO_CAMELLIA_X86_64 is not set +# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64 is not set +# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64 is not set +# CONFIG_CRYPTO_CAST5_AVX_X86_64 is not set +# CONFIG_CRYPTO_CAST6_AVX_X86_64 is not set +# CONFIG_CRYPTO_DES3_EDE_X86_64 is not set +# CONFIG_CRYPTO_SERPENT_SSE2_X86_64 is not set +# CONFIG_CRYPTO_SERPENT_AVX_X86_64 is not set +# CONFIG_CRYPTO_SERPENT_AVX2_X86_64 is not set +# CONFIG_CRYPTO_SM4_AESNI_AVX_X86_64 is not set +# CONFIG_CRYPTO_SM4_AESNI_AVX2_X86_64 is not set +# CONFIG_CRYPTO_TWOFISH_X86_64 is not set +# CONFIG_CRYPTO_TWOFISH_X86_64_3WAY is not set +# CONFIG_CRYPTO_TWOFISH_AVX_X86_64 is not set +# CONFIG_CRYPTO_ARIA_AESNI_AVX_X86_64 is not set +# CONFIG_CRYPTO_ARIA_AESNI_AVX2_X86_64 is not set +# CONFIG_CRYPTO_ARIA_GFNI_AVX512_X86_64 is not set +CONFIG_CRYPTO_CHACHA20_X86_64=y +# CONFIG_CRYPTO_AEGIS128_AESNI_SSE2 is not set +# CONFIG_CRYPTO_NHPOLY1305_SSE2 is not set +# CONFIG_CRYPTO_NHPOLY1305_AVX2 is not set +CONFIG_CRYPTO_BLAKE2S_X86=y +# CONFIG_CRYPTO_POLYVAL_CLMUL_NI is not set +CONFIG_CRYPTO_POLY1305_X86_64=y +# CONFIG_CRYPTO_SHA1_SSSE3 is not set +# CONFIG_CRYPTO_SHA256_SSSE3 is not set +# CONFIG_CRYPTO_SHA512_SSSE3 is not set +# CONFIG_CRYPTO_SM3_AVX_X86_64 is not set +# CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL is not set +# CONFIG_CRYPTO_CRC32C_INTEL is not set +# CONFIG_CRYPTO_CRC32_PCLMUL is not set +# end of Accelerated Cryptographic Algorithms for CPU (x86) + +CONFIG_CRYPTO_HW=y +# CONFIG_CRYPTO_DEV_PADLOCK is not set +# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set +# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set +# CONFIG_CRYPTO_DEV_CCP is not set +# CONFIG_CRYPTO_DEV_NITROX_CNN55XX is not set +# CONFIG_CRYPTO_DEV_QAT_DH895xCC is not set +# CONFIG_CRYPTO_DEV_QAT_C3XXX is not set +# CONFIG_CRYPTO_DEV_QAT_C62X is not set +# CONFIG_CRYPTO_DEV_QAT_4XXX is not set +# CONFIG_CRYPTO_DEV_QAT_DH895xCCVF 
is not set +# CONFIG_CRYPTO_DEV_QAT_C3XXXVF is not set +# CONFIG_CRYPTO_DEV_QAT_C62XVF is not set +# CONFIG_CRYPTO_DEV_VIRTIO is not set +# CONFIG_CRYPTO_DEV_SAFEXCEL is not set +# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set +CONFIG_ASYMMETRIC_KEY_TYPE=y +CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y +CONFIG_X509_CERTIFICATE_PARSER=y +# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set +CONFIG_PKCS7_MESSAGE_PARSER=y +# CONFIG_PKCS7_TEST_KEY is not set +# CONFIG_SIGNED_PE_FILE_VERIFICATION is not set +CONFIG_FIPS_SIGNATURE_SELFTEST=y + +# +# Certificates for signature checking +# +CONFIG_SYSTEM_TRUSTED_KEYRING=y +CONFIG_SYSTEM_TRUSTED_KEYS="" +# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set +# CONFIG_SECONDARY_TRUSTED_KEYRING is not set +# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set +# end of Certificates for signature checking + +CONFIG_BINARY_PRINTF=y + +# +# Library routines +# +CONFIG_RAID6_PQ=y +# CONFIG_RAID6_PQ_BENCHMARK is not set +# CONFIG_PACKING is not set +CONFIG_BITREVERSE=y +CONFIG_GENERIC_STRNCPY_FROM_USER=y +CONFIG_GENERIC_STRNLEN_USER=y +CONFIG_GENERIC_NET_UTILS=y +# CONFIG_CORDIC is not set +# CONFIG_PRIME_NUMBERS is not set +CONFIG_RATIONAL=y +CONFIG_GENERIC_PCI_IOMAP=y +CONFIG_GENERIC_IOMAP=y +CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y +CONFIG_ARCH_HAS_FAST_MULTIPLIER=y +CONFIG_ARCH_USE_SYM_ANNOTATIONS=y + +# +# Crypto library routines +# +CONFIG_CRYPTO_LIB_UTILS=y +CONFIG_CRYPTO_LIB_AES=y +CONFIG_CRYPTO_LIB_ARC4=y +CONFIG_CRYPTO_LIB_GF128MUL=y +CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S=y +CONFIG_CRYPTO_LIB_BLAKE2S_GENERIC=y +CONFIG_CRYPTO_ARCH_HAVE_LIB_CHACHA=y +CONFIG_CRYPTO_LIB_CHACHA_GENERIC=y +CONFIG_CRYPTO_LIB_CHACHA=y +CONFIG_CRYPTO_ARCH_HAVE_LIB_CURVE25519=y +CONFIG_CRYPTO_LIB_CURVE25519_GENERIC=y +CONFIG_CRYPTO_LIB_CURVE25519=y +CONFIG_CRYPTO_LIB_DES=y +CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11 +CONFIG_CRYPTO_ARCH_HAVE_LIB_POLY1305=y +CONFIG_CRYPTO_LIB_POLY1305_GENERIC=y +CONFIG_CRYPTO_LIB_POLY1305=y +CONFIG_CRYPTO_LIB_CHACHA20POLY1305=y +CONFIG_CRYPTO_LIB_SHA1=y +CONFIG_CRYPTO_LIB_SHA256=y +# end of Crypto library routines + +CONFIG_CRC_CCITT=y +CONFIG_CRC16=y +# CONFIG_CRC_T10DIF is not set +# CONFIG_CRC64_ROCKSOFT is not set +CONFIG_CRC_ITU_T=y +CONFIG_CRC32=y +# CONFIG_CRC32_SELFTEST is not set +CONFIG_CRC32_SLICEBY8=y +# CONFIG_CRC32_SLICEBY4 is not set +# CONFIG_CRC32_SARWATE is not set +# CONFIG_CRC32_BIT is not set +# CONFIG_CRC64 is not set +# CONFIG_CRC4 is not set +# CONFIG_CRC7 is not set +CONFIG_LIBCRC32C=y +# CONFIG_CRC8 is not set +CONFIG_XXHASH=y +# CONFIG_RANDOM32_SELFTEST is not set +CONFIG_ZLIB_INFLATE=y +CONFIG_ZLIB_DEFLATE=y +CONFIG_LZO_COMPRESS=y +CONFIG_LZO_DECOMPRESS=y +CONFIG_LZ4_DECOMPRESS=y +CONFIG_ZSTD_COMMON=y +CONFIG_ZSTD_COMPRESS=y +CONFIG_ZSTD_DECOMPRESS=y +CONFIG_XZ_DEC=y +# CONFIG_XZ_DEC_X86 is not set +# CONFIG_XZ_DEC_POWERPC is not set +# CONFIG_XZ_DEC_IA64 is not set +# CONFIG_XZ_DEC_ARM is not set +# CONFIG_XZ_DEC_ARMTHUMB is not set +# CONFIG_XZ_DEC_SPARC is not set +# CONFIG_XZ_DEC_MICROLZMA is not set +# CONFIG_XZ_DEC_TEST is not set +CONFIG_DECOMPRESS_GZIP=y +CONFIG_DECOMPRESS_ZSTD=y +CONFIG_GENERIC_ALLOCATOR=y +CONFIG_TEXTSEARCH=y +CONFIG_TEXTSEARCH_KMP=y +CONFIG_INTERVAL_TREE=y +CONFIG_XARRAY_MULTI=y +CONFIG_ASSOCIATIVE_ARRAY=y +CONFIG_HAS_IOMEM=y +CONFIG_HAS_IOPORT=y +CONFIG_HAS_IOPORT_MAP=y +CONFIG_HAS_DMA=y +CONFIG_DMA_OPS=y +CONFIG_NEED_SG_DMA_FLAGS=y +CONFIG_NEED_SG_DMA_LENGTH=y +CONFIG_NEED_DMA_MAP_STATE=y +CONFIG_ARCH_DMA_ADDR_T_64BIT=y +CONFIG_ARCH_HAS_FORCE_DMA_UNENCRYPTED=y +CONFIG_SWIOTLB=y +# CONFIG_SWIOTLB_DYNAMIC is not set +# 
CONFIG_DMA_API_DEBUG is not set +# CONFIG_DMA_MAP_BENCHMARK is not set +CONFIG_SGL_ALLOC=y +# CONFIG_FORCE_NR_CPUS is not set +CONFIG_CPU_RMAP=y +CONFIG_DQL=y +CONFIG_GLOB=y +# CONFIG_GLOB_SELFTEST is not set +CONFIG_NLATTR=y +CONFIG_CLZ_TAB=y +# CONFIG_IRQ_POLL is not set +CONFIG_MPILIB=y +CONFIG_OID_REGISTRY=y +CONFIG_UCS2_STRING=y +CONFIG_HAVE_GENERIC_VDSO=y +CONFIG_GENERIC_GETTIMEOFDAY=y +CONFIG_GENERIC_VDSO_TIME_NS=y +CONFIG_FONT_SUPPORT=y +CONFIG_FONT_8x16=y +CONFIG_FONT_AUTOSELECT=y +CONFIG_SG_POOL=y +CONFIG_ARCH_HAS_PMEM_API=y +CONFIG_MEMREGION=y +CONFIG_ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION=y +CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y +CONFIG_ARCH_HAS_COPY_MC=y +CONFIG_ARCH_STACKWALK=y +CONFIG_SBITMAP=y +# end of Library routines + +# +# Kernel hacking +# + +# +# printk and dmesg options +# +CONFIG_PRINTK_TIME=y +# CONFIG_PRINTK_CALLER is not set +# CONFIG_STACKTRACE_BUILD_ID is not set +CONFIG_CONSOLE_LOGLEVEL_DEFAULT=2 +CONFIG_CONSOLE_LOGLEVEL_QUIET=4 +CONFIG_MESSAGE_LOGLEVEL_DEFAULT=1 +# CONFIG_BOOT_PRINTK_DELAY is not set +# CONFIG_DYNAMIC_DEBUG is not set +# CONFIG_DYNAMIC_DEBUG_CORE is not set +# CONFIG_SYMBOLIC_ERRNAME is not set +CONFIG_DEBUG_BUGVERBOSE=y +# end of printk and dmesg options + +# CONFIG_DEBUG_KERNEL is not set +# CONFIG_DEBUG_MISC is not set + +# +# Compile-time checks and compiler options +# +# CONFIG_DEBUG_INFO is not set +CONFIG_AS_HAS_NON_CONST_LEB128=y +# CONFIG_DEBUG_INFO_NONE is not set +CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y +# CONFIG_DEBUG_INFO_DWARF4 is not set +# CONFIG_DEBUG_INFO_DWARF5 is not set +# CONFIG_DEBUG_INFO_REDUCED is not set +CONFIG_DEBUG_INFO_COMPRESSED_NONE=y +# CONFIG_DEBUG_INFO_COMPRESSED_ZLIB is not set +# CONFIG_DEBUG_INFO_SPLIT is not set +CONFIG_DEBUG_INFO_BTF=y +CONFIG_PAHOLE_HAS_SPLIT_BTF=y +CONFIG_PAHOLE_HAS_LANG_EXCLUDE=y +CONFIG_DEBUG_INFO_BTF_MODULES=y +# CONFIG_MODULE_ALLOW_BTF_MISMATCH is not set +# CONFIG_GDB_SCRIPTS is not set +CONFIG_FRAME_WARN=1024 +# CONFIG_STRIP_ASM_SYMS is not set +# CONFIG_READABLE_ASM is not set +# CONFIG_HEADERS_INSTALL is not set +# CONFIG_DEBUG_SECTION_MISMATCH is not set +# CONFIG_SECTION_MISMATCH_WARN_ONLY is not set +# CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B is not set +CONFIG_OBJTOOL=y +# CONFIG_VMLINUX_MAP is not set +# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set +# end of Compile-time checks and compiler options + +# +# Generic Kernel Debugging Instruments +# +# CONFIG_MAGIC_SYSRQ is not set +CONFIG_DEBUG_FS=y +CONFIG_DEBUG_FS_ALLOW_ALL=y +# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set +# CONFIG_DEBUG_FS_ALLOW_NONE is not set +CONFIG_HAVE_ARCH_KGDB=y +# CONFIG_KGDB is not set +CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y +# CONFIG_UBSAN is not set +CONFIG_HAVE_ARCH_KCSAN=y +CONFIG_HAVE_KCSAN_COMPILER=y +# CONFIG_KCSAN is not set +# end of Generic Kernel Debugging Instruments + +# +# Networking Debugging +# +# CONFIG_NET_DEV_REFCNT_TRACKER is not set +# CONFIG_NET_NS_REFCNT_TRACKER is not set +# CONFIG_DEBUG_NET is not set +# end of Networking Debugging + +# +# Memory Debugging +# +CONFIG_PAGE_EXTENSION=y +# CONFIG_DEBUG_PAGEALLOC is not set +# CONFIG_SLUB_DEBUG is not set +# CONFIG_PAGE_OWNER is not set +# CONFIG_PAGE_POISONING is not set +# CONFIG_DEBUG_PAGE_REF is not set +# CONFIG_DEBUG_RODATA_TEST is not set +CONFIG_ARCH_HAS_DEBUG_WX=y +# CONFIG_DEBUG_WX is not set +CONFIG_GENERIC_PTDUMP=y +# CONFIG_PTDUMP_DEBUGFS is not set +CONFIG_HAVE_DEBUG_KMEMLEAK=y +# CONFIG_DEBUG_KMEMLEAK is not set +# CONFIG_PER_VMA_LOCK_STATS is not set +# CONFIG_DEBUG_OBJECTS is not set +# CONFIG_SHRINKER_DEBUG is 
not set +# CONFIG_DEBUG_STACK_USAGE is not set +CONFIG_SCHED_STACK_END_CHECK=y +CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y +# CONFIG_DEBUG_VM is not set +# CONFIG_DEBUG_VM_PGTABLE is not set +CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y +# CONFIG_DEBUG_VIRTUAL is not set +CONFIG_DEBUG_MEMORY_INIT=y +# CONFIG_DEBUG_PER_CPU_MAPS is not set +CONFIG_ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP=y +# CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP is not set +CONFIG_HAVE_ARCH_KASAN=y +CONFIG_HAVE_ARCH_KASAN_VMALLOC=y +CONFIG_CC_HAS_KASAN_GENERIC=y +CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y +# CONFIG_KASAN is not set +CONFIG_HAVE_ARCH_KFENCE=y +# CONFIG_KFENCE is not set +CONFIG_HAVE_ARCH_KMSAN=y +# end of Memory Debugging + +# CONFIG_DEBUG_SHIRQ is not set + +# +# Debug Oops, Lockups and Hangs +# +CONFIG_PANIC_ON_OOPS=y +CONFIG_PANIC_ON_OOPS_VALUE=1 +CONFIG_PANIC_TIMEOUT=0 +# CONFIG_SOFTLOCKUP_DETECTOR is not set +CONFIG_HAVE_HARDLOCKUP_DETECTOR_BUDDY=y +# CONFIG_HARDLOCKUP_DETECTOR is not set +CONFIG_HARDLOCKUP_CHECK_TIMESTAMP=y +# CONFIG_DETECT_HUNG_TASK is not set +# CONFIG_WQ_WATCHDOG is not set +# CONFIG_WQ_CPU_INTENSIVE_REPORT is not set +# CONFIG_TEST_LOCKUP is not set +# end of Debug Oops, Lockups and Hangs + +# +# Scheduler Debugging +# +CONFIG_SCHED_DEBUG=y +CONFIG_SCHED_INFO=y +CONFIG_SCHEDSTATS=y +# end of Scheduler Debugging + +# CONFIG_DEBUG_TIMEKEEPING is not set + +# +# Lock Debugging (spinlocks, mutexes, etc...) +# +CONFIG_LOCK_DEBUGGING_SUPPORT=y +# CONFIG_PROVE_LOCKING is not set +# CONFIG_LOCK_STAT is not set +# CONFIG_DEBUG_RT_MUTEXES is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_DEBUG_MUTEXES is not set +# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set +# CONFIG_DEBUG_RWSEMS is not set +# CONFIG_DEBUG_LOCK_ALLOC is not set +# CONFIG_DEBUG_ATOMIC_SLEEP is not set +# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set +# CONFIG_LOCK_TORTURE_TEST is not set +# CONFIG_WW_MUTEX_SELFTEST is not set +# CONFIG_SCF_TORTURE_TEST is not set +# CONFIG_CSD_LOCK_WAIT_DEBUG is not set +# end of Lock Debugging (spinlocks, mutexes, etc...) 
+ +# CONFIG_NMI_CHECK_CPU is not set +# CONFIG_DEBUG_IRQFLAGS is not set +CONFIG_STACKTRACE=y +# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set +# CONFIG_DEBUG_KOBJECT is not set + +# +# Debug kernel data structures +# +CONFIG_DEBUG_LIST=y +# CONFIG_DEBUG_PLIST is not set +CONFIG_DEBUG_SG=y +CONFIG_DEBUG_NOTIFIERS=y +# CONFIG_DEBUG_MAPLE_TREE is not set +# end of Debug kernel data structures + +CONFIG_DEBUG_CREDENTIALS=y + +# +# RCU Debugging +# +# CONFIG_RCU_SCALE_TEST is not set +# CONFIG_RCU_TORTURE_TEST is not set +# CONFIG_RCU_REF_SCALE_TEST is not set +CONFIG_RCU_CPU_STALL_TIMEOUT=60 +CONFIG_RCU_EXP_CPU_STALL_TIMEOUT=0 +# CONFIG_RCU_CPU_STALL_CPUTIME is not set +# CONFIG_RCU_TRACE is not set +# CONFIG_RCU_EQS_DEBUG is not set +# end of RCU Debugging + +# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set +# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set +# CONFIG_LATENCYTOP is not set +# CONFIG_DEBUG_CGROUP_REF is not set +CONFIG_USER_STACKTRACE_SUPPORT=y +CONFIG_NOP_TRACER=y +CONFIG_HAVE_RETHOOK=y +CONFIG_RETHOOK=y +CONFIG_HAVE_FUNCTION_TRACER=y +CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y +CONFIG_HAVE_FUNCTION_GRAPH_RETVAL=y +CONFIG_HAVE_DYNAMIC_FTRACE=y +CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y +CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y +CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y +CONFIG_HAVE_DYNAMIC_FTRACE_NO_PATCHABLE=y +CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y +CONFIG_HAVE_SYSCALL_TRACEPOINTS=y +CONFIG_HAVE_FENTRY=y +CONFIG_HAVE_OBJTOOL_MCOUNT=y +CONFIG_HAVE_OBJTOOL_NOP_MCOUNT=y +CONFIG_HAVE_C_RECORDMCOUNT=y +CONFIG_HAVE_BUILDTIME_MCOUNT_SORT=y +CONFIG_BUILDTIME_MCOUNT_SORT=y +CONFIG_TRACER_MAX_TRACE=y +CONFIG_TRACE_CLOCK=y +CONFIG_RING_BUFFER=y +CONFIG_EVENT_TRACING=y +CONFIG_CONTEXT_SWITCH_TRACER=y +CONFIG_RING_BUFFER_ALLOW_SWAP=y +CONFIG_TRACING=y +CONFIG_GENERIC_TRACER=y +CONFIG_TRACING_SUPPORT=y +CONFIG_FTRACE=y +# CONFIG_BOOTTIME_TRACING is not set +CONFIG_FUNCTION_TRACER=y +CONFIG_FUNCTION_GRAPH_TRACER=y +# CONFIG_FUNCTION_GRAPH_RETVAL is not set +CONFIG_DYNAMIC_FTRACE=y +CONFIG_DYNAMIC_FTRACE_WITH_REGS=y +CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y +CONFIG_DYNAMIC_FTRACE_WITH_ARGS=y +CONFIG_FPROBE=y +CONFIG_FUNCTION_PROFILER=y +CONFIG_STACK_TRACER=y +# CONFIG_IRQSOFF_TRACER is not set +CONFIG_SCHED_TRACER=y +CONFIG_HWLAT_TRACER=y +# CONFIG_OSNOISE_TRACER is not set +# CONFIG_TIMERLAT_TRACER is not set +# CONFIG_MMIOTRACE is not set +CONFIG_FTRACE_SYSCALLS=y +CONFIG_TRACER_SNAPSHOT=y +CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y +CONFIG_BRANCH_PROFILE_NONE=y +# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set +# CONFIG_BLK_DEV_IO_TRACE is not set +CONFIG_FPROBE_EVENTS=y +CONFIG_PROBE_EVENTS_BTF_ARGS=y +CONFIG_KPROBE_EVENTS=y +# CONFIG_KPROBE_EVENTS_ON_NOTRACE is not set +CONFIG_UPROBE_EVENTS=y +CONFIG_BPF_EVENTS=y +CONFIG_DYNAMIC_EVENTS=y +CONFIG_PROBE_EVENTS=y +CONFIG_FTRACE_MCOUNT_RECORD=y +CONFIG_FTRACE_MCOUNT_USE_CC=y +# CONFIG_SYNTH_EVENTS is not set +# CONFIG_USER_EVENTS is not set +# CONFIG_HIST_TRIGGERS is not set +# CONFIG_TRACE_EVENT_INJECT is not set +# CONFIG_TRACEPOINT_BENCHMARK is not set +# CONFIG_RING_BUFFER_BENCHMARK is not set +# CONFIG_TRACE_EVAL_MAP_FILE is not set +# CONFIG_FTRACE_RECORD_RECURSION is not set +# CONFIG_FTRACE_STARTUP_TEST is not set +# CONFIG_FTRACE_SORT_STARTUP_TEST is not set +# CONFIG_RING_BUFFER_STARTUP_TEST is not set +# CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS is not set +# CONFIG_PREEMPTIRQ_DELAY_TEST is not set +# CONFIG_KPROBE_EVENT_GEN_TEST is not set +# CONFIG_RV is not set +# CONFIG_PROVIDE_OHCI1394_DMA_INIT is not set +# CONFIG_SAMPLES is not set 
+CONFIG_HAVE_SAMPLE_FTRACE_DIRECT=y +CONFIG_HAVE_SAMPLE_FTRACE_DIRECT_MULTI=y +CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y +# CONFIG_STRICT_DEVMEM is not set + +# +# x86 Debugging +# +# CONFIG_X86_VERBOSE_BOOTUP is not set +CONFIG_EARLY_PRINTK=y +# CONFIG_EARLY_PRINTK_DBGP is not set +# CONFIG_EARLY_PRINTK_USB_XDBC is not set +# CONFIG_EFI_PGT_DUMP is not set +# CONFIG_DEBUG_TLBFLUSH is not set +CONFIG_HAVE_MMIOTRACE_SUPPORT=y +# CONFIG_X86_DECODER_SELFTEST is not set +CONFIG_IO_DELAY_0X80=y +# CONFIG_IO_DELAY_0XED is not set +# CONFIG_IO_DELAY_UDELAY is not set +# CONFIG_IO_DELAY_NONE is not set +# CONFIG_DEBUG_BOOT_PARAMS is not set +# CONFIG_CPA_DEBUG is not set +# CONFIG_DEBUG_ENTRY is not set +# CONFIG_DEBUG_NMI_SELFTEST is not set +# CONFIG_X86_DEBUG_FPU is not set +# CONFIG_PUNIT_ATOM_DEBUG is not set +CONFIG_UNWINDER_ORC=y +# CONFIG_UNWINDER_FRAME_POINTER is not set +# CONFIG_UNWINDER_GUESS is not set +# end of x86 Debugging + +# +# Kernel Testing and Coverage +# +# CONFIG_KUNIT is not set +# CONFIG_NOTIFIER_ERROR_INJECTION is not set +# CONFIG_FUNCTION_ERROR_INJECTION is not set +# CONFIG_FAULT_INJECTION is not set +CONFIG_ARCH_HAS_KCOV=y +CONFIG_CC_HAS_SANCOV_TRACE_PC=y +# CONFIG_KCOV is not set +# CONFIG_RUNTIME_TESTING_MENU is not set +CONFIG_ARCH_USE_MEMTEST=y +# CONFIG_MEMTEST is not set +# CONFIG_HYPERV_TESTING is not set +# end of Kernel Testing and Coverage + +# +# Rust hacking +# +# end of Rust hacking +# end of Kernel hacking diff --git a/config/sources/vendors/microsoft/wsl2.hooks.sh b/config/sources/vendors/microsoft/wsl2.hooks.sh new file mode 100644 index 000000000000..8783cc3b0704 --- /dev/null +++ b/config/sources/vendors/microsoft/wsl2.hooks.sh @@ -0,0 +1,23 @@ +# A separate LINUXFAMILY and thus kernel .debs for wsl2; one day we might consider merging wsl2/hyperv patches into generic uefi +function post_family_config__wsl2() { + : "${LINUXFAMILY:?"LINUXFAMILY not set"}" + declare -g LINUXFAMILY="wsl2-${LINUXFAMILY}" + declare -g LINUXCONFIG="linux-${LINUXFAMILY}-${BRANCH}" + + # We _definitely_ don't want any extra drivers in these kernels -- it's purely a VM/Hyper-V thing + declare -g -r EXTRAWIFI="no" # readonly global +} + +function post_family_config_branch_current__wsl2() { + declare -g KERNEL_MAJOR_MINOR="6.1" # Major and minor versions of this kernel. For mainline caching. + declare -g KERNELBRANCH="branch:linux-6.1.y" # Branch or tag to build from. It should match MAJOR_MINOR + declare -g KERNELPATCHDIR="archive/${LINUXFAMILY}-${KERNEL_MAJOR_MINOR}" # Microsoft patches + display_alert "Using mainline kernel ${KERNELBRANCH} for" "${BOARD}" "info" +} + +function post_family_config_branch_edge__wsl2() { + declare -g KERNEL_MAJOR_MINOR="6.6" # Major and minor versions of this kernel. For mainline caching. + declare -g KERNELBRANCH="branch:linux-6.6.y" # Branch or tag to build from. 
It should match MAJOR_MINOR + declare -g KERNELPATCHDIR="archive/${LINUXFAMILY}-${KERNEL_MAJOR_MINOR}" # Microsoft patches + display_alert "Using mainline kernel ${KERNELBRANCH} for" "${BOARD}" "info" +} diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1666-Hyper-V-ARM64-Always-use-the-Hyper-V-hypercall-interface.patch b/patch/kernel/archive/wsl2-arm64-6.1/1666-Hyper-V-ARM64-Always-use-the-Hyper-V-hypercall-interface.patch new file mode 100644 index 000000000000..0c35ef5b5fa1 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1666-Hyper-V-ARM64-Always-use-the-Hyper-V-hypercall-interface.patch @@ -0,0 +1,239 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Sunil Muthuswamy +Date: Mon, 3 May 2021 14:17:52 -0700 +Subject: Hyper-V: ARM64: Always use the Hyper-V hypercall interface + +This patch forces the use of the Hyper-V hypercall interface, +instead of the architectural SMCCC interface on ARM64 because +not all versions of Windows support the SMCCC interface. All +versions of Windows will support the Hyper-V hypercall interface, +so this change should be both forward and backward compatible. + +Signed-off-by: Sunil Muthuswamy + +[tyhicks: Forward ported to v5.15] +Signed-off-by: Tyler Hicks +[kms: Forward ported to v6.1] +Signed-off-by: Kelsey Steele +--- + arch/arm64/hyperv/Makefile | 2 +- + arch/arm64/hyperv/hv_core.c | 57 ++++----- + arch/arm64/hyperv/hv_hvc.S | 61 ++++++++++ + arch/arm64/include/asm/mshyperv.h | 4 + + 4 files changed, 91 insertions(+), 33 deletions(-) + +diff --git a/arch/arm64/hyperv/Makefile b/arch/arm64/hyperv/Makefile +index 87c31c001da9..4cbeaa36d189 100644 +--- a/arch/arm64/hyperv/Makefile ++++ b/arch/arm64/hyperv/Makefile +@@ -1,2 +1,2 @@ + # SPDX-License-Identifier: GPL-2.0 +-obj-y := hv_core.o mshyperv.o ++obj-y := hv_core.o mshyperv.o hv_hvc.o +diff --git a/arch/arm64/hyperv/hv_core.c b/arch/arm64/hyperv/hv_core.c +index b54c34793701..e7010b2a587c 100644 +--- a/arch/arm64/hyperv/hv_core.c ++++ b/arch/arm64/hyperv/hv_core.c +@@ -23,16 +23,13 @@ + */ + u64 hv_do_hypercall(u64 control, void *input, void *output) + { +- struct arm_smccc_res res; + u64 input_address; + u64 output_address; + + input_address = input ? virt_to_phys(input) : 0; + output_address = output ? virt_to_phys(output) : 0; + +- arm_smccc_1_1_hvc(HV_FUNC_ID, control, +- input_address, output_address, &res); +- return res.a0; ++ return hv_do_hvc(control, input_address, output_address); + } + EXPORT_SYMBOL_GPL(hv_do_hypercall); + +@@ -41,27 +38,33 @@ EXPORT_SYMBOL_GPL(hv_do_hypercall); + * with arguments in registers instead of physical memory. + * Avoids the overhead of virt_to_phys for simple hypercalls. + */ +- + u64 hv_do_fast_hypercall8(u16 code, u64 input) + { +- struct arm_smccc_res res; + u64 control; + + control = (u64)code | HV_HYPERCALL_FAST_BIT; +- +- arm_smccc_1_1_hvc(HV_FUNC_ID, control, input, &res); +- return res.a0; ++ return hv_do_hvc(control, input); + } + EXPORT_SYMBOL_GPL(hv_do_fast_hypercall8); + ++union hv_hypercall_status { ++ u64 as_uint64; ++ struct { ++ u16 status; ++ u16 reserved; ++ u16 reps_completed; /* Low 12 bits */ ++ u16 reserved2; ++ }; ++}; ++ + /* + * Set a single VP register to a 64-bit value. 
+ */ + void hv_set_vpreg(u32 msr, u64 value) + { +- struct arm_smccc_res res; ++ union hv_hypercall_status status; + +- arm_smccc_1_1_hvc(HV_FUNC_ID, ++ status.as_uint64 = hv_do_hvc( + HVCALL_SET_VP_REGISTERS | HV_HYPERCALL_FAST_BIT | + HV_HYPERCALL_REP_COMP_1, + HV_PARTITION_ID_SELF, +@@ -69,15 +72,14 @@ void hv_set_vpreg(u32 msr, u64 value) + msr, + 0, + value, +- 0, +- &res); ++ 0); + + /* + * Something is fundamentally broken in the hypervisor if + * setting a VP register fails. There's really no way to + * continue as a guest VM, so panic. + */ +- BUG_ON(!hv_result_success(res.a0)); ++ BUG_ON(status.status != HV_STATUS_SUCCESS); + } + EXPORT_SYMBOL_GPL(hv_set_vpreg); + +@@ -90,31 +92,22 @@ EXPORT_SYMBOL_GPL(hv_set_vpreg); + + void hv_get_vpreg_128(u32 msr, struct hv_get_vp_registers_output *result) + { +- struct arm_smccc_1_2_regs args; +- struct arm_smccc_1_2_regs res; +- +- args.a0 = HV_FUNC_ID; +- args.a1 = HVCALL_GET_VP_REGISTERS | HV_HYPERCALL_FAST_BIT | +- HV_HYPERCALL_REP_COMP_1; +- args.a2 = HV_PARTITION_ID_SELF; +- args.a3 = HV_VP_INDEX_SELF; +- args.a4 = msr; ++ u64 status; + +- /* +- * Use the SMCCC 1.2 interface because the results are in registers +- * beyond X0-X3. +- */ +- arm_smccc_1_2_hvc(&args, &res); ++ status = hv_do_hvc_fast_get( ++ HVCALL_GET_VP_REGISTERS | HV_HYPERCALL_FAST_BIT | ++ HV_HYPERCALL_REP_COMP_1, ++ HV_PARTITION_ID_SELF, ++ HV_VP_INDEX_SELF, ++ msr, ++ result); + + /* + * Something is fundamentally broken in the hypervisor if + * getting a VP register fails. There's really no way to + * continue as a guest VM, so panic. + */ +- BUG_ON(!hv_result_success(res.a0)); +- +- result->as64.low = res.a6; +- result->as64.high = res.a7; ++ BUG_ON((status & HV_HYPERCALL_RESULT_MASK) != HV_STATUS_SUCCESS); + } + EXPORT_SYMBOL_GPL(hv_get_vpreg_128); + +diff --git a/arch/arm64/hyperv/hv_hvc.S b/arch/arm64/hyperv/hv_hvc.S +new file mode 100644 +index 000000000000..c22d34ccd0aa +--- /dev/null ++++ b/arch/arm64/hyperv/hv_hvc.S +@@ -0,0 +1,61 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++ ++/* ++ * Microsoft Hyper-V hypervisor invocation routines ++ * ++ * Copyright (C) 2018, Microsoft, Inc. ++ * ++ * Author : Michael Kelley ++ * ++ * This program is free software; you can redistribute it and/or modify it ++ * under the terms of the GNU General Public License version 2 as published ++ * by the Free Software Foundation. ++ * ++ * This program is distributed in the hope that it will be useful, but ++ * WITHOUT ANY WARRANTY; without even the implied warranty of ++ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or ++ * NON INFRINGEMENT. See the GNU General Public License for more ++ * details. ++ */ ++ ++#include ++#include ++ ++ .text ++/* ++ * Do the HVC instruction. For Hyper-V the argument is always 1. ++ * x0 contains the hypercall control value, while additional registers ++ * vary depending on the hypercall, and whether the hypercall arguments ++ * are in memory or in registers (a "fast" hypercall per the Hyper-V ++ * TLFS). When the arguments are in memory x1 is the guest physical ++ * address of the input arguments, and x2 is the guest physical ++ * address of the output arguments. When the arguments are in ++ * registers, the register values depends on the hypercall. Note ++ * that this version cannot return any values in registers. ++ */ ++SYM_FUNC_START(hv_do_hvc) ++ hvc #1 ++ ret ++SYM_FUNC_END(hv_do_hvc) ++ ++/* ++ * This variant of HVC invocation is for hv_get_vpreg and ++ * hv_get_vpreg_128. 
The input parameters are passed in registers ++ * along with a pointer in x4 to where the output result should ++ * be stored. The output is returned in x15 and x16. x19 is used as ++ * scratch space to avoid buildng a stack frame, as Hyper-V does ++ * not preserve registers x0-x17. ++ */ ++SYM_FUNC_START(hv_do_hvc_fast_get) ++ /* ++ * Stash away x19 register so that it can be used as a scratch ++ * register and pop it at the end. ++ */ ++ str x19, [sp, #-16]! ++ mov x19, x4 ++ hvc #1 ++ str x15,[x19] ++ str x16,[x19,#8] ++ ldr x19, [sp], #16 ++ ret ++SYM_FUNC_END(hv_do_hvc_fast_get) +diff --git a/arch/arm64/include/asm/mshyperv.h b/arch/arm64/include/asm/mshyperv.h +index 20070a847304..f87a450e5b6b 100644 +--- a/arch/arm64/include/asm/mshyperv.h ++++ b/arch/arm64/include/asm/mshyperv.h +@@ -22,6 +22,10 @@ + #include + #include + ++extern u64 hv_do_hvc(u64 control, ...); ++extern u64 hv_do_hvc_fast_get(u64 control, u64 input1, u64 input2, u64 input3, ++ struct hv_get_vp_registers_output *output); ++ + /* + * Declare calls to get and set Hyper-V VP register values on ARM64, which + * requires a hypercall. +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1667-arm64-hyperv-Enable-Hyper-V-synthetic-clocks-timers.patch b/patch/kernel/archive/wsl2-arm64-6.1/1667-arm64-hyperv-Enable-Hyper-V-synthetic-clocks-timers.patch new file mode 100644 index 000000000000..cd58a824a933 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1667-arm64-hyperv-Enable-Hyper-V-synthetic-clocks-timers.patch @@ -0,0 +1,185 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Michael Kelley +Date: Mon, 28 Feb 2022 08:41:24 -0800 +Subject: arm64: hyperv: Enable Hyper-V synthetic clocks/timers + +This patch adds support for Hyper-V synthetic clocks and timers on +ARM64. Upstream code assumes changes to Hyper-V that were made +in Fall 2021 that fully virtualize the ARM64 architectural counter +and timer so that the driver in drivers/clocksource/arm_arch_timer.c +can be used. But older versions of Hyper-V don't have this +support and must use the Hyper-V synthetic clocks and timers. +As such, this patch is out-of-tree code. + +This patch does two related things. First it splits the general +Hyper-V initialization code to create hyperv_early_init() that runs +much earlier during kernel boot. This early init function is needed +so that core Hyper-V functionality is ready before the synthetic clocks +and timers are initialized. Second, it adds Hyper-V clock and timer +initialization via TIMER_ACPI_DECLARE() and hyperv_timer_init() +in the Hyper-V clocksource driver in drivers/clocksource/hyperv_timer.c. 
+ +Signed-off-by: Michael Kelley +[tyhicks: Forward port around a minor text conflict caused by commit + 245b993d8f6c ("clocksource: hyper-v: unexport __init-annotated + hv_init_clocksource()") +Signed-off-by: Tyler Hicks +[kms: Forward port to 6.1] +Signed-off-by: Kelsey Steele +--- + arch/arm64/hyperv/mshyperv.c | 15 +++++--- + arch/arm64/include/asm/mshyperv.h | 18 ++++++++++ + arch/arm64/kernel/setup.c | 4 +++ + drivers/clocksource/hyperv_timer.c | 14 ++++++++ + drivers/hv/Kconfig | 2 +- + 5 files changed, 47 insertions(+), 6 deletions(-) + +diff --git a/arch/arm64/hyperv/mshyperv.c b/arch/arm64/hyperv/mshyperv.c +index a406454578f0..0a868d490ef5 100644 +--- a/arch/arm64/hyperv/mshyperv.c ++++ b/arch/arm64/hyperv/mshyperv.c +@@ -19,12 +19,11 @@ + + static bool hyperv_initialized; + +-static int __init hyperv_init(void) ++void __init hyperv_early_init(void) + { + struct hv_get_vp_registers_output result; + u32 a, b, c, d; + u64 guest_id; +- int ret; + + /* + * Allow for a kernel built with CONFIG_HYPERV to be running in +@@ -32,10 +31,10 @@ static int __init hyperv_init(void) + * In such cases, do nothing and return success. + */ + if (acpi_disabled) +- return 0; ++ return; + + if (strncmp((char *)&acpi_gbl_FADT.hypervisor_id, "MsHyperV", 8)) +- return 0; ++ return; + + /* Setup the guest ID */ + guest_id = hv_generate_guest_id(LINUX_VERSION_CODE); +@@ -63,6 +62,13 @@ static int __init hyperv_init(void) + pr_info("Hyper-V: Host Build %d.%d.%d.%d-%d-%d\n", + b >> 16, b & 0xFFFF, a, d & 0xFFFFFF, c, d >> 24); + ++ hyperv_initialized = true; ++} ++ ++static int __init hyperv_init(void) ++{ ++ int ret; ++ + ret = hv_common_init(); + if (ret) + return ret; +@@ -74,7 +80,6 @@ static int __init hyperv_init(void) + return ret; + } + +- hyperv_initialized = true; + return 0; + } + +diff --git a/arch/arm64/include/asm/mshyperv.h b/arch/arm64/include/asm/mshyperv.h +index f87a450e5b6b..713bebd87d6c 100644 +--- a/arch/arm64/include/asm/mshyperv.h ++++ b/arch/arm64/include/asm/mshyperv.h +@@ -21,6 +21,13 @@ + #include + #include + #include ++#include ++ ++#if IS_ENABLED(CONFIG_HYPERV) ++void __init hyperv_early_init(void); ++#else ++static inline void hyperv_early_init(void) {}; ++#endif + + extern u64 hv_do_hvc(u64 control, ...); + extern u64 hv_do_hvc_fast_get(u64 control, u64 input1, u64 input2, u64 input3, +@@ -45,6 +52,17 @@ static inline u64 hv_get_register(unsigned int reg) + return hv_get_vpreg(reg); + } + ++/* Define the interrupt ID used by STIMER0 Direct Mode interrupts. This ++ * value can't come from ACPI tables because it is needed before the ++ * Linux ACPI subsystem is initialized. 
++ */ ++#define HYPERV_STIMER0_VECTOR 31 ++ ++static inline u64 hv_get_raw_timer(void) ++{ ++ return arch_timer_read_counter(); ++} ++ + /* SMCCC hypercall parameters */ + #define HV_SMCCC_FUNC_NUMBER 1 + #define HV_FUNC_ID ARM_SMCCC_CALL_VAL( \ +diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c +index fea3223704b6..b4e4f3e6ea20 100644 +--- a/arch/arm64/kernel/setup.c ++++ b/arch/arm64/kernel/setup.c +@@ -50,6 +50,7 @@ + #include + #include + #include ++#include + #include + + static int num_standard_resources; +@@ -343,6 +344,9 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p) + if (acpi_disabled) + unflatten_device_tree(); + ++ /* Do after acpi_boot_table_init() so local FADT is available */ ++ hyperv_early_init(); ++ + bootmem_init(); + + kasan_init(); +diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c +index 18de1f439ffd..bccbeab3fa46 100644 +--- a/drivers/clocksource/hyperv_timer.c ++++ b/drivers/clocksource/hyperv_timer.c +@@ -566,3 +566,17 @@ void __init hv_init_clocksource(void) + hv_sched_clock_offset = hv_read_reference_counter(); + hv_setup_sched_clock(read_hv_sched_clock_msr); + } ++ ++/* Initialize everything on ARM64 */ ++static int __init hyperv_timer_init(struct acpi_table_header *table) ++{ ++ if (!hv_is_hyperv_initialized()) ++ return -EINVAL; ++ ++ hv_init_clocksource(); ++ if (hv_stimer_alloc(true)) ++ return -EINVAL; ++ ++ return 0; ++} ++TIMER_ACPI_DECLARE(hyperv, ACPI_SIG_GTDT, hyperv_timer_init); +diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig +index 0747a8f1fcee..6802f981ba8c 100644 +--- a/drivers/hv/Kconfig ++++ b/drivers/hv/Kconfig +@@ -14,7 +14,7 @@ config HYPERV + system. + + config HYPERV_TIMER +- def_bool HYPERV && X86 ++ def_bool HYPERV + + config HYPERV_UTILS + tristate "Microsoft Hyper-V Utilities driver" +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1668-drivers-hv-dxgkrnl-Add-virtual-compute-device-VMBus-channel-guids.patch b/patch/kernel/archive/wsl2-arm64-6.1/1668-drivers-hv-dxgkrnl-Add-virtual-compute-device-VMBus-channel-guids.patch new file mode 100644 index 000000000000..b02aaf114f50 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1668-drivers-hv-dxgkrnl-Add-virtual-compute-device-VMBus-channel-guids.patch @@ -0,0 +1,45 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 15 Feb 2022 18:11:52 -0800 +Subject: drivers: hv: dxgkrnl: Add virtual compute device VMBus channel guids + +Add VMBus channel guids, which are used by hyper-v virtual compute +device driver. 
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + include/linux/hyperv.h | 16 ++++++++++ + 1 file changed, 16 insertions(+) + +diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h +index 646f1da9f27e..00d6ee8cdb94 100644 +--- a/include/linux/hyperv.h ++++ b/include/linux/hyperv.h +@@ -1457,6 +1457,22 @@ void vmbus_free_mmio(resource_size_t start, resource_size_t size); + .guid = GUID_INIT(0xda0a7802, 0xe377, 0x4aac, 0x8e, 0x77, \ + 0x05, 0x58, 0xeb, 0x10, 0x73, 0xf8) + ++/* ++ * GPU paravirtualization global DXGK channel ++ * {DDE9CBC0-5060-4436-9448-EA1254A5D177} ++ */ ++#define HV_GPUP_DXGK_GLOBAL_GUID \ ++ .guid = GUID_INIT(0xdde9cbc0, 0x5060, 0x4436, 0x94, 0x48, \ ++ 0xea, 0x12, 0x54, 0xa5, 0xd1, 0x77) ++ ++/* ++ * GPU paravirtualization per virtual GPU DXGK channel ++ * {6E382D18-3336-4F4B-ACC4-2B7703D4DF4A} ++ */ ++#define HV_GPUP_DXGK_VGPU_GUID \ ++ .guid = GUID_INIT(0x6e382d18, 0x3336, 0x4f4b, 0xac, 0xc4, \ ++ 0x2b, 0x77, 0x3, 0xd4, 0xdf, 0x4a) ++ + /* + * Synthetic FC GUID + * {2f9bcc4a-0069-4af3-b76b-6fd0be528cda} +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1669-drivers-hv-dxgkrnl-Driver-initialization-and-loading.patch b/patch/kernel/archive/wsl2-arm64-6.1/1669-drivers-hv-dxgkrnl-Driver-initialization-and-loading.patch new file mode 100644 index 000000000000..a146188aa9ed --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1669-drivers-hv-dxgkrnl-Driver-initialization-and-loading.patch @@ -0,0 +1,966 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 24 Mar 2021 11:10:28 -0700 +Subject: drivers: hv: dxgkrnl: Driver initialization and loading + +- Create skeleton and add basic functionality for the Hyper-V +compute device driver (dxgkrnl). + +- Register for PCI and VMBus driver notifications and handle +initialization of VMBus channels. + +- Connect the dxgkrnl module to the drivers/hv/ Makefile and Kconfig + +- Create a MAINTAINERS entry + +A VMBus channel is a communication interface between the Hyper-V guest +and the host. The are two type of VMBus channels, used in the driver: + - the global channel + - per virtual compute device channel + +A PCI device is created for each virtual compute device, projected +by the host. The device vendor is PCI_VENDOR_ID_MICROSOFT and device +id is PCI_DEVICE_ID_VIRTUAL_RENDER. dxg_pci_probe_device handles +arrival of such devices. The PCI config space of the virtual compute +device has luid of the corresponding virtual compute device VM +bus channel. This is how the compute device adapter objects are +linked to VMBus channels. + +VMBus interface version is exchanged by reading/writing the PCI config +space of the virtual compute device. + +The IO space is used to handle CPU accessible compute device +allocations. Hyper-V allocates IO space for the global VMBus channel. 
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + MAINTAINERS | 7 + + drivers/hv/Kconfig | 2 + + drivers/hv/Makefile | 1 + + drivers/hv/dxgkrnl/Kconfig | 26 + + drivers/hv/dxgkrnl/Makefile | 5 + + drivers/hv/dxgkrnl/dxgkrnl.h | 155 +++ + drivers/hv/dxgkrnl/dxgmodule.c | 506 ++++++++++ + drivers/hv/dxgkrnl/dxgvmbus.c | 92 ++ + drivers/hv/dxgkrnl/dxgvmbus.h | 19 + + include/uapi/misc/d3dkmthk.h | 27 + + 10 files changed, 840 insertions(+) + +diff --git a/MAINTAINERS b/MAINTAINERS +index 07a9c274c0e2..e79dae6368a1 100644 +--- a/MAINTAINERS ++++ b/MAINTAINERS +@@ -9551,6 +9551,13 @@ F: Documentation/devicetree/bindings/mtd/ti,am654-hbmc.yaml + F: drivers/mtd/hyperbus/ + F: include/linux/mtd/hyperbus.h + ++Hyper-V vGPU DRIVER ++M: Iouri Tarassov ++L: linux-hyperv@vger.kernel.org ++S: Supported ++F: drivers/hv/dxgkrnl/ ++F: include/uapi/misc/d3dkmthk.h ++ + HYPERVISOR VIRTUAL CONSOLE DRIVER + L: linuxppc-dev@lists.ozlabs.org + S: Odd Fixes +diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig +index 6802f981ba8c..5a7aa3e567ab 100644 +--- a/drivers/hv/Kconfig ++++ b/drivers/hv/Kconfig +@@ -30,4 +30,6 @@ config HYPERV_BALLOON + help + Select this option to enable Hyper-V Balloon driver. + ++source "drivers/hv/dxgkrnl/Kconfig" ++ + endmenu +diff --git a/drivers/hv/Makefile b/drivers/hv/Makefile +index d76df5c8c2a9..aa1cbdb5d0d2 100644 +--- a/drivers/hv/Makefile ++++ b/drivers/hv/Makefile +@@ -2,6 +2,7 @@ + obj-$(CONFIG_HYPERV) += hv_vmbus.o + obj-$(CONFIG_HYPERV_UTILS) += hv_utils.o + obj-$(CONFIG_HYPERV_BALLOON) += hv_balloon.o ++obj-$(CONFIG_DXGKRNL) += dxgkrnl/ + + CFLAGS_hv_trace.o = -I$(src) + CFLAGS_hv_balloon.o = -I$(src) +diff --git a/drivers/hv/dxgkrnl/Kconfig b/drivers/hv/dxgkrnl/Kconfig +new file mode 100644 +index 000000000000..bcd92bbff939 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/Kconfig +@@ -0,0 +1,26 @@ ++# SPDX-License-Identifier: GPL-2.0 ++# Configuration for the hyper-v virtual compute driver (dxgkrnl) ++# ++ ++config DXGKRNL ++ tristate "Microsoft Paravirtualized GPU support" ++ depends on HYPERV ++ depends on 64BIT || COMPILE_TEST ++ help ++ This driver supports paravirtualized virtual compute devices, exposed ++ by Microsoft Hyper-V when Linux is running inside of a virtual machine ++ hosted by Windows. The virtual machines needs to be configured to use ++ host compute adapters. The driver name is dxgkrnl. ++ ++ An example of such virtual machine is a Windows Subsystem for ++ Linux container. When such container is instantiated, the Windows host ++ assigns compatible host GPU adapters to the container. The corresponding ++ virtual GPU devices appear on the PCI bus in the container. These ++ devices are enumerated and accessed by this driver. ++ ++ Communications with the driver are done by using the Microsoft libdxcore ++ library, which translates the D3DKMT interface ++ ++ to the driver IOCTLs. The virtual GPU devices are paravirtualized, ++ which means that access to the hardware is done in the host. The driver ++ communicates with the host using Hyper-V VM bus communication channels. +diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile +new file mode 100644 +index 000000000000..76349064b60a +--- /dev/null ++++ b/drivers/hv/dxgkrnl/Makefile +@@ -0,0 +1,5 @@ ++# SPDX-License-Identifier: GPL-2.0 ++# Makefile for the hyper-v compute device driver (dxgkrnl). 
++ ++obj-$(CONFIG_DXGKRNL) += dxgkrnl.o ++dxgkrnl-y := dxgmodule.o dxgvmbus.o +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +new file mode 100644 +index 000000000000..f7900840d1ed +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -0,0 +1,155 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Headers for internal objects ++ * ++ */ ++ ++#ifndef _DXGKRNL_H ++#define _DXGKRNL_H ++ ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++ ++struct dxgadapter; ++ ++/* ++ * Driver private data. ++ * A single /dev/dxg device is created per virtual machine. ++ */ ++struct dxgdriver{ ++ struct dxgglobal *dxgglobal; ++ struct device *dxgdev; ++ struct pci_driver pci_drv; ++ struct hv_driver vmbus_drv; ++}; ++extern struct dxgdriver dxgdrv; ++ ++#define DXGDEV dxgdrv.dxgdev ++ ++struct dxgvmbuschannel { ++ struct vmbus_channel *channel; ++ struct hv_device *hdev; ++ spinlock_t packet_list_mutex; ++ struct list_head packet_list_head; ++ struct kmem_cache *packet_cache; ++ atomic64_t packet_request_id; ++}; ++ ++int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev); ++void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch); ++void dxgvmbuschannel_receive(void *ctx); ++ ++/* ++ * The structure defines an offered vGPU vm bus channel. ++ */ ++struct dxgvgpuchannel { ++ struct list_head vgpu_ch_list_entry; ++ struct winluid adapter_luid; ++ struct hv_device *hdev; ++}; ++ ++struct dxgglobal { ++ struct dxgdriver *drvdata; ++ struct dxgvmbuschannel channel; ++ struct hv_device *hdev; ++ u32 num_adapters; ++ u32 vmbus_ver; /* Interface version */ ++ struct resource *mem; ++ u64 mmiospace_base; ++ u64 mmiospace_size; ++ struct miscdevice dxgdevice; ++ struct mutex device_mutex; ++ ++ /* ++ * List of the vGPU VM bus channels (dxgvgpuchannel) ++ * Protected by device_mutex ++ */ ++ struct list_head vgpu_ch_list_head; ++ ++ /* protects acces to the global VM bus channel */ ++ struct rw_semaphore channel_lock; ++ ++ bool global_channel_initialized; ++ bool async_msg_enabled; ++ bool misc_registered; ++ bool pci_registered; ++ bool vmbus_registered; ++}; ++ ++static inline struct dxgglobal *dxggbl(void) ++{ ++ return dxgdrv.dxgglobal; ++} ++ ++struct dxgprocess { ++ /* Placeholder */ ++}; ++ ++/* ++ * The convention is that VNBus instance id is a GUID, but the host sets ++ * the lower part of the value to the host adapter LUID. The function ++ * provides the necessary conversion. ++ */ ++static inline void guid_to_luid(guid_t *guid, struct winluid *luid) ++{ ++ *luid = *(struct winluid *)&guid->b[0]; ++} ++ ++/* ++ * VM bus interface ++ * ++ */ ++ ++/* ++ * The interface version is used to ensure that the host and the guest use the ++ * same VM bus protocol. It needs to be incremented every time the VM bus ++ * interface changes. DXGK_VMBUS_LAST_COMPATIBLE_INTERFACE_VERSION is ++ * incremented each time the earlier versions of the interface are no longer ++ * compatible with the current version. ++ */ ++#define DXGK_VMBUS_INTERFACE_VERSION_OLD 27 ++#define DXGK_VMBUS_INTERFACE_VERSION 40 ++#define DXGK_VMBUS_LAST_COMPATIBLE_INTERFACE_VERSION 16 ++ ++#ifdef DEBUG ++ ++void dxgk_validate_ioctls(void); ++ ++#define DXG_TRACE(fmt, ...) do { \ ++ trace_printk(dev_fmt(fmt) "\n", ##__VA_ARGS__); \ ++} while (0) ++ ++#define DXG_ERR(fmt, ...) 
do { \ ++ dev_err(DXGDEV, fmt, ##__VA_ARGS__); \ ++ trace_printk("*** dxgkerror *** " dev_fmt(fmt) "\n", ##__VA_ARGS__); \ ++} while (0) ++ ++#else ++ ++#define DXG_TRACE(...) ++#define DXG_ERR(fmt, ...) do { \ ++ dev_err(DXGDEV, fmt, ##__VA_ARGS__); \ ++} while (0) ++ ++#endif /* DEBUG */ ++ ++#endif +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +new file mode 100644 +index 000000000000..de02edc4d023 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -0,0 +1,506 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Interface with Linux kernel, PCI driver and the VM bus driver ++ * ++ */ ++ ++#include ++#include ++#include ++#include ++#include "dxgkrnl.h" ++ ++#define PCI_VENDOR_ID_MICROSOFT 0x1414 ++#define PCI_DEVICE_ID_VIRTUAL_RENDER 0x008E ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt ++ ++/* ++ * Interface from dxgglobal ++ */ ++ ++struct vmbus_channel *dxgglobal_get_vmbus(void) ++{ ++ return dxggbl()->channel.channel; ++} ++ ++struct dxgvmbuschannel *dxgglobal_get_dxgvmbuschannel(void) ++{ ++ return &dxggbl()->channel; ++} ++ ++int dxgglobal_acquire_channel_lock(void) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ down_read(&dxgglobal->channel_lock); ++ if (dxgglobal->channel.channel == NULL) { ++ DXG_ERR("Failed to acquire global channel lock"); ++ return -ENODEV; ++ } else { ++ return 0; ++ } ++} ++ ++void dxgglobal_release_channel_lock(void) ++{ ++ up_read(&dxggbl()->channel_lock); ++} ++ ++const struct file_operations dxgk_fops = { ++ .owner = THIS_MODULE, ++}; ++ ++/* ++ * Interface with the PCI driver ++ */ ++ ++/* ++ * Part of the PCI config space of the compute device is used for ++ * configuration data. Reading/writing of the PCI config space is forwarded ++ * to the host. ++ * ++ * Below are offsets in the PCI config spaces for various configuration values. ++ */ ++ ++/* Compute device VM bus channel instance ID */ ++#define DXGK_VMBUS_CHANNEL_ID_OFFSET 192 ++ ++/* DXGK_VMBUS_INTERFACE_VERSION (u32) */ ++#define DXGK_VMBUS_VERSION_OFFSET (DXGK_VMBUS_CHANNEL_ID_OFFSET + \ ++ sizeof(guid_t)) ++ ++/* Luid of the virtual GPU on the host (struct winluid) */ ++#define DXGK_VMBUS_VGPU_LUID_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \ ++ sizeof(u32)) ++ ++/* The guest writes its capabilities to this address */ ++#define DXGK_VMBUS_GUESTCAPS_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \ ++ sizeof(u32)) ++ ++/* Capabilities of the guest driver, reported to the host */ ++struct dxgk_vmbus_guestcaps { ++ union { ++ struct { ++ u32 wsl2 : 1; ++ u32 reserved : 31; ++ }; ++ u32 guest_caps; ++ }; ++}; ++ ++/* ++ * A helper function to read PCI config space. 
++ */ ++static int dxg_pci_read_dwords(struct pci_dev *dev, int offset, int size, ++ void *val) ++{ ++ int off = offset; ++ int ret; ++ int i; ++ ++ /* Make sure the offset and size are 32 bit aligned */ ++ if (offset & 3 || size & 3) ++ return -EINVAL; ++ ++ for (i = 0; i < size / sizeof(int); i++) { ++ ret = pci_read_config_dword(dev, off, &((int *)val)[i]); ++ if (ret) { ++ DXG_ERR("Failed to read PCI config: %d", off); ++ return ret; ++ } ++ off += sizeof(int); ++ } ++ return 0; ++} ++ ++static int dxg_pci_probe_device(struct pci_dev *dev, ++ const struct pci_device_id *id) ++{ ++ int ret; ++ guid_t guid; ++ u32 vmbus_interface_ver = DXGK_VMBUS_INTERFACE_VERSION; ++ struct winluid vgpu_luid = {}; ++ struct dxgk_vmbus_guestcaps guest_caps = {.wsl2 = 1}; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_lock(&dxgglobal->device_mutex); ++ ++ if (dxgglobal->vmbus_ver == 0) { ++ /* Report capabilities to the host */ ++ ++ ret = pci_write_config_dword(dev, DXGK_VMBUS_GUESTCAPS_OFFSET, ++ guest_caps.guest_caps); ++ if (ret) ++ goto cleanup; ++ ++ /* Negotiate the VM bus version */ ++ ++ ret = pci_read_config_dword(dev, DXGK_VMBUS_VERSION_OFFSET, ++ &vmbus_interface_ver); ++ if (ret == 0 && vmbus_interface_ver != 0) ++ dxgglobal->vmbus_ver = vmbus_interface_ver; ++ else ++ dxgglobal->vmbus_ver = DXGK_VMBUS_INTERFACE_VERSION_OLD; ++ ++ if (dxgglobal->vmbus_ver < DXGK_VMBUS_INTERFACE_VERSION) ++ goto read_channel_id; ++ ++ ret = pci_write_config_dword(dev, DXGK_VMBUS_VERSION_OFFSET, ++ DXGK_VMBUS_INTERFACE_VERSION); ++ if (ret) ++ goto cleanup; ++ ++ if (dxgglobal->vmbus_ver > DXGK_VMBUS_INTERFACE_VERSION) ++ dxgglobal->vmbus_ver = DXGK_VMBUS_INTERFACE_VERSION; ++ } ++ ++read_channel_id: ++ ++ /* Get the VM bus channel ID for the virtual GPU */ ++ ret = dxg_pci_read_dwords(dev, DXGK_VMBUS_CHANNEL_ID_OFFSET, ++ sizeof(guid), (int *)&guid); ++ if (ret) ++ goto cleanup; ++ ++ if (dxgglobal->vmbus_ver >= DXGK_VMBUS_INTERFACE_VERSION) { ++ ret = dxg_pci_read_dwords(dev, DXGK_VMBUS_VGPU_LUID_OFFSET, ++ sizeof(vgpu_luid), &vgpu_luid); ++ if (ret) ++ goto cleanup; ++ } ++ ++ DXG_TRACE("Adapter channel: %pUb", &guid); ++ DXG_TRACE("Vmbus interface version: %d", dxgglobal->vmbus_ver); ++ DXG_TRACE("Host luid: %x-%x", vgpu_luid.b, vgpu_luid.a); ++ ++cleanup: ++ ++ mutex_unlock(&dxgglobal->device_mutex); ++ ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++static void dxg_pci_remove_device(struct pci_dev *dev) ++{ ++ /* Placeholder */ ++} ++ ++static struct pci_device_id dxg_pci_id_table[] = { ++ { ++ .vendor = PCI_VENDOR_ID_MICROSOFT, ++ .device = PCI_DEVICE_ID_VIRTUAL_RENDER, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID ++ }, ++ { 0 } ++}; ++ ++/* ++ * Interface with the VM bus driver ++ */ ++ ++static int dxgglobal_getiospace(struct dxgglobal *dxgglobal) ++{ ++ /* Get mmio space for the global channel */ ++ struct hv_device *hdev = dxgglobal->hdev; ++ struct vmbus_channel *channel = hdev->channel; ++ resource_size_t pot_start = 0; ++ resource_size_t pot_end = -1; ++ int ret; ++ ++ dxgglobal->mmiospace_size = channel->offermsg.offer.mmio_megabytes; ++ if (dxgglobal->mmiospace_size == 0) { ++ DXG_TRACE("Zero mmio space is offered"); ++ return -ENOMEM; ++ } ++ dxgglobal->mmiospace_size <<= 20; ++ DXG_TRACE("mmio offered: %llx", dxgglobal->mmiospace_size); ++ ++ ret = vmbus_allocate_mmio(&dxgglobal->mem, hdev, pot_start, pot_end, ++ dxgglobal->mmiospace_size, 0x10000, false); ++ if (ret) { ++ DXG_ERR("Unable to allocate mmio memory: %d", ret); ++ return ret; ++ } ++ 
dxgglobal->mmiospace_size = dxgglobal->mem->end - ++ dxgglobal->mem->start + 1; ++ dxgglobal->mmiospace_base = dxgglobal->mem->start; ++ DXG_TRACE("mmio allocated %llx %llx %llx %llx", ++ dxgglobal->mmiospace_base, dxgglobal->mmiospace_size, ++ dxgglobal->mem->start, dxgglobal->mem->end); ++ ++ return 0; ++} ++ ++int dxgglobal_init_global_channel(void) ++{ ++ int ret = 0; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = dxgvmbuschannel_init(&dxgglobal->channel, dxgglobal->hdev); ++ if (ret) { ++ DXG_ERR("dxgvmbuschannel_init failed: %d", ret); ++ goto error; ++ } ++ ++ ret = dxgglobal_getiospace(dxgglobal); ++ if (ret) { ++ DXG_ERR("getiospace failed: %d", ret); ++ goto error; ++ } ++ ++ hv_set_drvdata(dxgglobal->hdev, dxgglobal); ++ ++error: ++ return ret; ++} ++ ++void dxgglobal_destroy_global_channel(void) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ down_write(&dxgglobal->channel_lock); ++ ++ dxgglobal->global_channel_initialized = false; ++ ++ if (dxgglobal->mem) { ++ vmbus_free_mmio(dxgglobal->mmiospace_base, ++ dxgglobal->mmiospace_size); ++ dxgglobal->mem = NULL; ++ } ++ ++ dxgvmbuschannel_destroy(&dxgglobal->channel); ++ ++ if (dxgglobal->hdev) { ++ hv_set_drvdata(dxgglobal->hdev, NULL); ++ dxgglobal->hdev = NULL; ++ } ++ ++ up_write(&dxgglobal->channel_lock); ++} ++ ++static const struct hv_vmbus_device_id dxg_vmbus_id_table[] = { ++ /* Per GPU Device GUID */ ++ { HV_GPUP_DXGK_VGPU_GUID }, ++ /* Global Dxgkgnl channel for the virtual machine */ ++ { HV_GPUP_DXGK_GLOBAL_GUID }, ++ { } ++}; ++ ++static int dxg_probe_vmbus(struct hv_device *hdev, ++ const struct hv_vmbus_device_id *dev_id) ++{ ++ int ret = 0; ++ struct winluid luid; ++ struct dxgvgpuchannel *vgpuch; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_lock(&dxgglobal->device_mutex); ++ ++ if (uuid_le_cmp(hdev->dev_type, dxg_vmbus_id_table[0].guid) == 0) { ++ /* This is a new virtual GPU channel */ ++ guid_to_luid(&hdev->channel->offermsg.offer.if_instance, &luid); ++ DXG_TRACE("vGPU channel: %pUb", ++ &hdev->channel->offermsg.offer.if_instance); ++ vgpuch = kzalloc(sizeof(struct dxgvgpuchannel), GFP_KERNEL); ++ if (vgpuch == NULL) { ++ ret = -ENOMEM; ++ goto error; ++ } ++ vgpuch->adapter_luid = luid; ++ vgpuch->hdev = hdev; ++ list_add_tail(&vgpuch->vgpu_ch_list_entry, ++ &dxgglobal->vgpu_ch_list_head); ++ } else if (uuid_le_cmp(hdev->dev_type, ++ dxg_vmbus_id_table[1].guid) == 0) { ++ /* This is the global Dxgkgnl channel */ ++ DXG_TRACE("Global channel: %pUb", ++ &hdev->channel->offermsg.offer.if_instance); ++ if (dxgglobal->hdev) { ++ /* This device should appear only once */ ++ DXG_ERR("global channel already exists"); ++ ret = -EBADE; ++ goto error; ++ } ++ dxgglobal->hdev = hdev; ++ } else { ++ /* Unknown device type */ ++ DXG_ERR("Unknown VM bus device type"); ++ ret = -ENODEV; ++ } ++ ++error: ++ ++ mutex_unlock(&dxgglobal->device_mutex); ++ ++ return ret; ++} ++ ++static int dxg_remove_vmbus(struct hv_device *hdev) ++{ ++ int ret = 0; ++ struct dxgvgpuchannel *vgpu_channel; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_lock(&dxgglobal->device_mutex); ++ ++ if (uuid_le_cmp(hdev->dev_type, dxg_vmbus_id_table[0].guid) == 0) { ++ DXG_TRACE("Remove virtual GPU channel"); ++ list_for_each_entry(vgpu_channel, ++ &dxgglobal->vgpu_ch_list_head, ++ vgpu_ch_list_entry) { ++ if (vgpu_channel->hdev == hdev) { ++ list_del(&vgpu_channel->vgpu_ch_list_entry); ++ kfree(vgpu_channel); ++ break; ++ } ++ } ++ } else if (uuid_le_cmp(hdev->dev_type, ++ dxg_vmbus_id_table[1].guid) == 0) { ++ 
DXG_TRACE("Remove global channel device"); ++ dxgglobal_destroy_global_channel(); ++ } else { ++ /* Unknown device type */ ++ DXG_ERR("Unknown device type"); ++ ret = -ENODEV; ++ } ++ ++ mutex_unlock(&dxgglobal->device_mutex); ++ ++ return ret; ++} ++ ++MODULE_DEVICE_TABLE(vmbus, dxg_vmbus_id_table); ++MODULE_DEVICE_TABLE(pci, dxg_pci_id_table); ++ ++/* ++ * Global driver data ++ */ ++ ++struct dxgdriver dxgdrv = { ++ .vmbus_drv.name = KBUILD_MODNAME, ++ .vmbus_drv.id_table = dxg_vmbus_id_table, ++ .vmbus_drv.probe = dxg_probe_vmbus, ++ .vmbus_drv.remove = dxg_remove_vmbus, ++ .vmbus_drv.driver = { ++ .probe_type = PROBE_PREFER_ASYNCHRONOUS, ++ }, ++ .pci_drv.name = KBUILD_MODNAME, ++ .pci_drv.id_table = dxg_pci_id_table, ++ .pci_drv.probe = dxg_pci_probe_device, ++ .pci_drv.remove = dxg_pci_remove_device ++}; ++ ++static struct dxgglobal *dxgglobal_create(void) ++{ ++ struct dxgglobal *dxgglobal; ++ ++ dxgglobal = kzalloc(sizeof(struct dxgglobal), GFP_KERNEL); ++ if (!dxgglobal) ++ return NULL; ++ ++ mutex_init(&dxgglobal->device_mutex); ++ ++ INIT_LIST_HEAD(&dxgglobal->vgpu_ch_list_head); ++ ++ init_rwsem(&dxgglobal->channel_lock); ++ ++ return dxgglobal; ++} ++ ++static void dxgglobal_destroy(struct dxgglobal *dxgglobal) ++{ ++ if (dxgglobal) { ++ mutex_lock(&dxgglobal->device_mutex); ++ dxgglobal_destroy_global_channel(); ++ mutex_unlock(&dxgglobal->device_mutex); ++ ++ if (dxgglobal->vmbus_registered) ++ vmbus_driver_unregister(&dxgdrv.vmbus_drv); ++ ++ dxgglobal_destroy_global_channel(); ++ ++ if (dxgglobal->pci_registered) ++ pci_unregister_driver(&dxgdrv.pci_drv); ++ ++ if (dxgglobal->misc_registered) ++ misc_deregister(&dxgglobal->dxgdevice); ++ ++ dxgglobal->drvdata->dxgdev = NULL; ++ ++ kfree(dxgglobal); ++ dxgglobal = NULL; ++ } ++} ++ ++static int __init dxg_drv_init(void) ++{ ++ int ret; ++ struct dxgglobal *dxgglobal = NULL; ++ ++ dxgglobal = dxgglobal_create(); ++ if (dxgglobal == NULL) { ++ pr_err("dxgglobal_init failed"); ++ ret = -ENOMEM; ++ goto error; ++ } ++ dxgglobal->drvdata = &dxgdrv; ++ ++ dxgglobal->dxgdevice.minor = MISC_DYNAMIC_MINOR; ++ dxgglobal->dxgdevice.name = "dxg"; ++ dxgglobal->dxgdevice.fops = &dxgk_fops; ++ dxgglobal->dxgdevice.mode = 0666; ++ ret = misc_register(&dxgglobal->dxgdevice); ++ if (ret) { ++ pr_err("misc_register failed: %d", ret); ++ goto error; ++ } ++ dxgglobal->misc_registered = true; ++ dxgdrv.dxgdev = dxgglobal->dxgdevice.this_device; ++ dxgdrv.dxgglobal = dxgglobal; ++ ++ ret = vmbus_driver_register(&dxgdrv.vmbus_drv); ++ if (ret) { ++ DXG_ERR("vmbus_driver_register failed: %d", ret); ++ goto error; ++ } ++ dxgglobal->vmbus_registered = true; ++ ++ ret = pci_register_driver(&dxgdrv.pci_drv); ++ if (ret) { ++ DXG_ERR("pci_driver_register failed: %d", ret); ++ goto error; ++ } ++ dxgglobal->pci_registered = true; ++ ++ return 0; ++ ++error: ++ /* This function does the cleanup */ ++ dxgglobal_destroy(dxgglobal); ++ dxgdrv.dxgglobal = NULL; ++ ++ return ret; ++} ++ ++static void __exit dxg_drv_exit(void) ++{ ++ dxgglobal_destroy(dxgdrv.dxgglobal); ++} ++ ++module_init(dxg_drv_init); ++module_exit(dxg_drv_exit); ++ ++MODULE_LICENSE("GPL"); ++MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual compute device Driver"); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +new file mode 100644 +index 000000000000..deb880e34377 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -0,0 +1,92 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. 
++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * VM bus interface implementation ++ * ++ */ ++ ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include "dxgkrnl.h" ++#include "dxgvmbus.h" ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt ++ ++#define RING_BUFSIZE (256 * 1024) ++ ++/* ++ * The structure is used to track VM bus packets, waiting for completion. ++ */ ++struct dxgvmbuspacket { ++ struct list_head packet_list_entry; ++ u64 request_id; ++ struct completion wait; ++ void *buffer; ++ u32 buffer_length; ++ int status; ++ bool completed; ++}; ++ ++int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev) ++{ ++ int ret; ++ ++ ch->hdev = hdev; ++ spin_lock_init(&ch->packet_list_mutex); ++ INIT_LIST_HEAD(&ch->packet_list_head); ++ atomic64_set(&ch->packet_request_id, 0); ++ ++ ch->packet_cache = kmem_cache_create("DXGK packet cache", ++ sizeof(struct dxgvmbuspacket), 0, ++ 0, NULL); ++ if (ch->packet_cache == NULL) { ++ DXG_ERR("packet_cache alloc failed"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++#if LINUX_VERSION_CODE >= KERNEL_VERSION(5,15,0) ++ hdev->channel->max_pkt_size = DXG_MAX_VM_BUS_PACKET_SIZE; ++#endif ++ ret = vmbus_open(hdev->channel, RING_BUFSIZE, RING_BUFSIZE, ++ NULL, 0, dxgvmbuschannel_receive, ch); ++ if (ret) { ++ DXG_ERR("vmbus_open failed: %d", ret); ++ goto cleanup; ++ } ++ ++ ch->channel = hdev->channel; ++ ++cleanup: ++ ++ return ret; ++} ++ ++void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch) ++{ ++ kmem_cache_destroy(ch->packet_cache); ++ ch->packet_cache = NULL; ++ ++ if (ch->channel) { ++ vmbus_close(ch->channel); ++ ch->channel = NULL; ++ } ++} ++ ++/* Receive callback for messages from the host */ ++void dxgvmbuschannel_receive(void *ctx) ++{ ++} +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +new file mode 100644 +index 000000000000..6cdca5e03d1f +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -0,0 +1,19 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * VM bus interface with the host definitions ++ * ++ */ ++ ++#ifndef _DXGVMBUS_H ++#define _DXGVMBUS_H ++ ++#define DXG_MAX_VM_BUS_PACKET_SIZE (1024 * 128) ++ ++#endif /* _DXGVMBUS_H */ +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +new file mode 100644 +index 000000000000..5d973604400c +--- /dev/null ++++ b/include/uapi/misc/d3dkmthk.h +@@ -0,0 +1,27 @@ ++/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ ++ ++/* ++ * Copyright (c) 2019, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * User mode WDDM interface definitions ++ * ++ */ ++ ++#ifndef _D3DKMTHK_H ++#define _D3DKMTHK_H ++ ++/* ++ * Matches the Windows LUID definition. ++ * LUID is a locally unique identifier (similar to GUID, but not global), ++ * which is guaranteed to be unique intil the computer is rebooted. 
++ */ ++struct winluid { ++ __u32 a; ++ __u32 b; ++}; ++ ++#endif /* _D3DKMTHK_H */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1670-drivers-hv-dxgkrnl-Add-VMBus-message-support-initialize-VMBus-channels.patch b/patch/kernel/archive/wsl2-arm64-6.1/1670-drivers-hv-dxgkrnl-Add-VMBus-message-support-initialize-VMBus-channels.patch new file mode 100644 index 000000000000..384618c56dd0 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1670-drivers-hv-dxgkrnl-Add-VMBus-message-support-initialize-VMBus-channels.patch @@ -0,0 +1,660 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 15 Feb 2022 18:53:07 -0800 +Subject: drivers: hv: dxgkrnl: Add VMBus message support, initialize VMBus + channels. + +Implement support for sending/receiving VMBus messages between +the host and the guest. + +Initialize the VMBus channels and notify the host about IO space +settings of the VMBus global channel. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 14 + + drivers/hv/dxgkrnl/dxgmodule.c | 9 +- + drivers/hv/dxgkrnl/dxgvmbus.c | 318 ++++++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 67 ++ + drivers/hv/dxgkrnl/ioctl.c | 24 + + drivers/hv/dxgkrnl/misc.h | 72 +++ + include/uapi/misc/d3dkmthk.h | 34 + + 7 files changed, 536 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index f7900840d1ed..52b9e82c51e6 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -28,6 +28,8 @@ + #include + #include + #include ++#include "misc.h" ++#include + + struct dxgadapter; + +@@ -100,6 +102,13 @@ static inline struct dxgglobal *dxggbl(void) + return dxgdrv.dxgglobal; + } + ++int dxgglobal_init_global_channel(void); ++void dxgglobal_destroy_global_channel(void); ++struct vmbus_channel *dxgglobal_get_vmbus(void); ++struct dxgvmbuschannel *dxgglobal_get_dxgvmbuschannel(void); ++int dxgglobal_acquire_channel_lock(void); ++void dxgglobal_release_channel_lock(void); ++ + struct dxgprocess { + /* Placeholder */ + }; +@@ -130,6 +139,11 @@ static inline void guid_to_luid(guid_t *guid, struct winluid *luid) + #define DXGK_VMBUS_INTERFACE_VERSION 40 + #define DXGK_VMBUS_LAST_COMPATIBLE_INTERFACE_VERSION 16 + ++void dxgvmb_initialize(void); ++int dxgvmb_send_set_iospace_region(u64 start, u64 len); ++ ++int ntstatus2int(struct ntstatus status); ++ + #ifdef DEBUG + + void dxgk_validate_ioctls(void); +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index de02edc4d023..e55639dc0adc 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -260,6 +260,13 @@ int dxgglobal_init_global_channel(void) + goto error; + } + ++ ret = dxgvmb_send_set_iospace_region(dxgglobal->mmiospace_base, ++ dxgglobal->mmiospace_size); ++ if (ret < 0) { ++ DXG_ERR("send_set_iospace_region failed"); ++ goto error; ++ } ++ + hv_set_drvdata(dxgglobal->hdev, dxgglobal); + + error: +@@ -429,8 +436,6 @@ static void dxgglobal_destroy(struct dxgglobal *dxgglobal) + if (dxgglobal->vmbus_registered) + vmbus_driver_unregister(&dxgdrv.vmbus_drv); + +- dxgglobal_destroy_global_channel(); +- + if (dxgglobal->pci_registered) + pci_unregister_driver(&dxgdrv.pci_drv); + +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index deb880e34377..a4365739826a 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -40,6 +40,121 @@ struct 
dxgvmbuspacket { + bool completed; + }; + ++struct dxgvmb_ext_header { ++ /* Offset from the start of the message to DXGKVMB_COMMAND_BASE */ ++ u32 command_offset; ++ u32 reserved; ++ struct winluid vgpu_luid; ++}; ++ ++#define VMBUSMESSAGEONSTACK 64 ++ ++struct dxgvmbusmsg { ++/* Points to the allocated buffer */ ++ struct dxgvmb_ext_header *hdr; ++/* Points to dxgkvmb_command_vm_to_host or dxgkvmb_command_vgpu_to_host */ ++ void *msg; ++/* The vm bus channel, used to pass the message to the host */ ++ struct dxgvmbuschannel *channel; ++/* Message size in bytes including the header and the payload */ ++ u32 size; ++/* Buffer used for small messages */ ++ char msg_on_stack[VMBUSMESSAGEONSTACK]; ++}; ++ ++struct dxgvmbusmsgres { ++/* Points to the allocated buffer */ ++ struct dxgvmb_ext_header *hdr; ++/* Points to dxgkvmb_command_vm_to_host or dxgkvmb_command_vgpu_to_host */ ++ void *msg; ++/* The vm bus channel, used to pass the message to the host */ ++ struct dxgvmbuschannel *channel; ++/* Message size in bytes including the header, the payload and the result */ ++ u32 size; ++/* Result buffer size in bytes */ ++ u32 res_size; ++/* Points to the result within the allocated buffer */ ++ void *res; ++}; ++ ++static int init_message(struct dxgvmbusmsg *msg, ++ struct dxgprocess *process, u32 size) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ bool use_ext_header = dxgglobal->vmbus_ver >= ++ DXGK_VMBUS_INTERFACE_VERSION; ++ ++ if (use_ext_header) ++ size += sizeof(struct dxgvmb_ext_header); ++ msg->size = size; ++ if (size <= VMBUSMESSAGEONSTACK) { ++ msg->hdr = (void *)msg->msg_on_stack; ++ memset(msg->hdr, 0, size); ++ } else { ++ msg->hdr = vzalloc(size); ++ if (msg->hdr == NULL) ++ return -ENOMEM; ++ } ++ if (use_ext_header) { ++ msg->msg = (char *)&msg->hdr[1]; ++ msg->hdr->command_offset = sizeof(msg->hdr[0]); ++ } else { ++ msg->msg = (char *)msg->hdr; ++ } ++ msg->channel = &dxgglobal->channel; ++ return 0; ++} ++ ++static void free_message(struct dxgvmbusmsg *msg, struct dxgprocess *process) ++{ ++ if (msg->hdr && (char *)msg->hdr != msg->msg_on_stack) ++ vfree(msg->hdr); ++} ++ ++/* ++ * Helper functions ++ */ ++ ++int ntstatus2int(struct ntstatus status) ++{ ++ if (NT_SUCCESS(status)) ++ return (int)status.v; ++ switch (status.v) { ++ case STATUS_OBJECT_NAME_COLLISION: ++ return -EEXIST; ++ case STATUS_NO_MEMORY: ++ return -ENOMEM; ++ case STATUS_INVALID_PARAMETER: ++ return -EINVAL; ++ case STATUS_OBJECT_NAME_INVALID: ++ case STATUS_OBJECT_NAME_NOT_FOUND: ++ return -ENOENT; ++ case STATUS_TIMEOUT: ++ return -EAGAIN; ++ case STATUS_BUFFER_TOO_SMALL: ++ return -EOVERFLOW; ++ case STATUS_DEVICE_REMOVED: ++ return -ENODEV; ++ case STATUS_ACCESS_DENIED: ++ return -EACCES; ++ case STATUS_NOT_SUPPORTED: ++ return -EPERM; ++ case STATUS_ILLEGAL_INSTRUCTION: ++ return -EOPNOTSUPP; ++ case STATUS_INVALID_HANDLE: ++ return -EBADF; ++ case STATUS_GRAPHICS_ALLOCATION_BUSY: ++ return -EINPROGRESS; ++ case STATUS_OBJECT_TYPE_MISMATCH: ++ return -EPROTOTYPE; ++ case STATUS_NOT_IMPLEMENTED: ++ return -EPERM; ++ default: ++ return -EINVAL; ++ } ++} ++ + int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev) + { + int ret; +@@ -86,7 +201,210 @@ void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch) + } + } + ++static void command_vm_to_host_init1(struct dxgkvmb_command_vm_to_host *command, ++ enum dxgkvmb_commandtype_global type) ++{ ++ command->command_type = type; ++ command->process.v = 0; ++ command->command_id = 0; ++ command->channel_type = DXGKVMB_VM_TO_HOST; 
++} ++ ++static void process_inband_packet(struct dxgvmbuschannel *channel, ++ struct vmpacket_descriptor *desc) ++{ ++ u32 packet_length = hv_pkt_datalen(desc); ++ struct dxgkvmb_command_host_to_vm *packet; ++ ++ if (packet_length < sizeof(struct dxgkvmb_command_host_to_vm)) { ++ DXG_ERR("Invalid global packet"); ++ } else { ++ packet = hv_pkt_data(desc); ++ DXG_TRACE("global packet %d", ++ packet->command_type); ++ switch (packet->command_type) { ++ case DXGK_VMBCOMMAND_SIGNALGUESTEVENT: ++ case DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE: ++ break; ++ case DXGK_VMBCOMMAND_SENDWNFNOTIFICATION: ++ break; ++ default: ++ DXG_ERR("unexpected host message %d", ++ packet->command_type); ++ } ++ } ++} ++ ++static void process_completion_packet(struct dxgvmbuschannel *channel, ++ struct vmpacket_descriptor *desc) ++{ ++ struct dxgvmbuspacket *packet = NULL; ++ struct dxgvmbuspacket *entry; ++ u32 packet_length = hv_pkt_datalen(desc); ++ unsigned long flags; ++ ++ spin_lock_irqsave(&channel->packet_list_mutex, flags); ++ list_for_each_entry(entry, &channel->packet_list_head, ++ packet_list_entry) { ++ if (desc->trans_id == entry->request_id) { ++ packet = entry; ++ list_del(&packet->packet_list_entry); ++ packet->completed = true; ++ break; ++ } ++ } ++ spin_unlock_irqrestore(&channel->packet_list_mutex, flags); ++ if (packet) { ++ if (packet->buffer_length) { ++ if (packet_length < packet->buffer_length) { ++ DXG_TRACE("invalid size %d Expected:%d", ++ packet_length, ++ packet->buffer_length); ++ packet->status = -EOVERFLOW; ++ } else { ++ memcpy(packet->buffer, hv_pkt_data(desc), ++ packet->buffer_length); ++ } ++ } ++ complete(&packet->wait); ++ } else { ++ DXG_ERR("did not find packet to complete"); ++ } ++} ++ + /* Receive callback for messages from the host */ + void dxgvmbuschannel_receive(void *ctx) + { ++ struct dxgvmbuschannel *channel = ctx; ++ struct vmpacket_descriptor *desc; ++ u32 packet_length = 0; ++ ++ foreach_vmbus_pkt(desc, channel->channel) { ++ packet_length = hv_pkt_datalen(desc); ++ DXG_TRACE("next packet (id, size, type): %llu %d %d", ++ desc->trans_id, packet_length, desc->type); ++ if (desc->type == VM_PKT_COMP) { ++ process_completion_packet(channel, desc); ++ } else { ++ if (desc->type != VM_PKT_DATA_INBAND) ++ DXG_ERR("unexpected packet type"); ++ else ++ process_inband_packet(channel, desc); ++ } ++ } ++} ++ ++int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel, ++ void *command, ++ u32 cmd_size, ++ void *result, ++ u32 result_size) ++{ ++ int ret; ++ struct dxgvmbuspacket *packet = NULL; ++ ++ if (cmd_size > DXG_MAX_VM_BUS_PACKET_SIZE || ++ result_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("%s invalid data size", __func__); ++ return -EINVAL; ++ } ++ ++ packet = kmem_cache_alloc(channel->packet_cache, 0); ++ if (packet == NULL) { ++ DXG_ERR("kmem_cache_alloc failed"); ++ return -ENOMEM; ++ } ++ ++ packet->request_id = atomic64_inc_return(&channel->packet_request_id); ++ init_completion(&packet->wait); ++ packet->buffer = result; ++ packet->buffer_length = result_size; ++ packet->status = 0; ++ packet->completed = false; ++ spin_lock_irq(&channel->packet_list_mutex); ++ list_add_tail(&packet->packet_list_entry, &channel->packet_list_head); ++ spin_unlock_irq(&channel->packet_list_mutex); ++ ++ ret = vmbus_sendpacket(channel->channel, command, cmd_size, ++ packet->request_id, VM_PKT_DATA_INBAND, ++ VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); ++ if (ret) { ++ DXG_ERR("vmbus_sendpacket failed: %x", ret); ++ spin_lock_irq(&channel->packet_list_mutex); ++ 
list_del(&packet->packet_list_entry); ++ spin_unlock_irq(&channel->packet_list_mutex); ++ goto cleanup; ++ } ++ ++ DXG_TRACE("waiting completion: %llu", packet->request_id); ++ ret = wait_for_completion_killable(&packet->wait); ++ if (ret) { ++ DXG_ERR("wait_for_completion failed: %x", ret); ++ spin_lock_irq(&channel->packet_list_mutex); ++ if (!packet->completed) ++ list_del(&packet->packet_list_entry); ++ spin_unlock_irq(&channel->packet_list_mutex); ++ goto cleanup; ++ } ++ DXG_TRACE("completion done: %llu %x", ++ packet->request_id, packet->status); ++ ret = packet->status; ++ ++cleanup: ++ ++ kmem_cache_free(channel->packet_cache, packet); ++ if (ret < 0) ++ DXG_TRACE("Error: %x", ret); ++ return ret; ++} ++ ++static int ++dxgvmb_send_sync_msg_ntstatus(struct dxgvmbuschannel *channel, ++ void *command, u32 cmd_size) ++{ ++ struct ntstatus status; ++ int ret; ++ ++ ret = dxgvmb_send_sync_msg(channel, command, cmd_size, ++ &status, sizeof(status)); ++ if (ret >= 0) ++ ret = ntstatus2int(status); ++ return ret; ++} ++ ++/* ++ * Global messages to the host ++ */ ++ ++int dxgvmb_send_set_iospace_region(u64 start, u64 len) ++{ ++ int ret; ++ struct dxgkvmb_command_setiospaceregion *command; ++ struct dxgvmbusmsg msg; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, NULL, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ command_vm_to_host_init1(&command->hdr, ++ DXGK_VMBCOMMAND_SETIOSPACEREGION); ++ command->start = start; ++ command->length = len; ++ ret = dxgvmb_send_sync_msg_ntstatus(&dxgglobal->channel, msg.hdr, ++ msg.size); ++ if (ret < 0) ++ DXG_ERR("send_set_iospace_region failed %x", ret); ++ ++ dxgglobal_release_channel_lock(); ++cleanup: ++ free_message(&msg, NULL); ++ if (ret) ++ DXG_TRACE("Error: %d", ret); ++ return ret; + } +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 6cdca5e03d1f..b1bdd6039b73 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -16,4 +16,71 @@ + + #define DXG_MAX_VM_BUS_PACKET_SIZE (1024 * 128) + ++enum dxgkvmb_commandchanneltype { ++ DXGKVMB_VGPU_TO_HOST, ++ DXGKVMB_VM_TO_HOST, ++ DXGKVMB_HOST_TO_VM ++}; ++ ++/* ++ * ++ * Commands, sent to the host via the guest global VM bus channel ++ * DXG_GUEST_GLOBAL_VMBUS ++ * ++ */ ++ ++enum dxgkvmb_commandtype_global { ++ DXGK_VMBCOMMAND_VM_TO_HOST_FIRST = 1000, ++ DXGK_VMBCOMMAND_CREATEPROCESS = DXGK_VMBCOMMAND_VM_TO_HOST_FIRST, ++ DXGK_VMBCOMMAND_DESTROYPROCESS = 1001, ++ DXGK_VMBCOMMAND_OPENSYNCOBJECT = 1002, ++ DXGK_VMBCOMMAND_DESTROYSYNCOBJECT = 1003, ++ DXGK_VMBCOMMAND_CREATENTSHAREDOBJECT = 1004, ++ DXGK_VMBCOMMAND_DESTROYNTSHAREDOBJECT = 1005, ++ DXGK_VMBCOMMAND_SIGNALFENCE = 1006, ++ DXGK_VMBCOMMAND_NOTIFYPROCESSFREEZE = 1007, ++ DXGK_VMBCOMMAND_NOTIFYPROCESSTHAW = 1008, ++ DXGK_VMBCOMMAND_QUERYETWSESSION = 1009, ++ DXGK_VMBCOMMAND_SETIOSPACEREGION = 1010, ++ DXGK_VMBCOMMAND_COMPLETETRANSACTION = 1011, ++ DXGK_VMBCOMMAND_SHAREOBJECTWITHHOST = 1021, ++ DXGK_VMBCOMMAND_INVALID_VM_TO_HOST ++}; ++ ++/* ++ * Commands, sent by the host to the VM ++ */ ++enum dxgkvmb_commandtype_host_to_vm { ++ DXGK_VMBCOMMAND_SIGNALGUESTEVENT, ++ DXGK_VMBCOMMAND_PROPAGATEPRESENTHISTORYTOKEN, ++ DXGK_VMBCOMMAND_SETGUESTDATA, ++ DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE, ++ DXGK_VMBCOMMAND_SENDWNFNOTIFICATION, ++ DXGK_VMBCOMMAND_INVALID_HOST_TO_VM ++}; ++ ++struct dxgkvmb_command_vm_to_host { ++ u64 command_id; ++ struct 
d3dkmthandle process; ++ enum dxgkvmb_commandchanneltype channel_type; ++ enum dxgkvmb_commandtype_global command_type; ++}; ++ ++struct dxgkvmb_command_host_to_vm { ++ u64 command_id; ++ struct d3dkmthandle process; ++ u32 channel_type : 8; ++ u32 async_msg : 1; ++ u32 reserved : 23; ++ enum dxgkvmb_commandtype_host_to_vm command_type; ++}; ++ ++/* Returns ntstatus */ ++struct dxgkvmb_command_setiospaceregion { ++ struct dxgkvmb_command_vm_to_host hdr; ++ u64 start; ++ u64 length; ++ u32 shared_page_gpadl; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +new file mode 100644 +index 000000000000..23ecd15b0cd7 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -0,0 +1,24 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Ioctl implementation ++ * ++ */ ++ ++#include ++#include ++#include ++#include ++#include ++ ++#include "dxgkrnl.h" ++#include "dxgvmbus.h" ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +new file mode 100644 +index 000000000000..4c6047c32a20 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -0,0 +1,72 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Misc definitions ++ * ++ */ ++ ++#ifndef _MISC_H_ ++#define _MISC_H_ ++ ++extern const struct d3dkmthandle zerohandle; ++ ++/* ++ * Synchronization lock hierarchy. ++ * ++ * The higher enum value, the higher is the lock order. ++ * When a lower lock ois held, the higher lock should not be acquired. ++ * ++ * channel_lock ++ * device_mutex ++ */ ++ ++/* ++ * Some of the Windows return codes, which needs to be translated to Linux ++ * IOCTL return codes. Positive values are success codes and need to be ++ * returned from the driver IOCTLs. libdxcore.so depends on returning ++ * specific return codes. 
++ */ ++#define STATUS_SUCCESS ((int)(0)) ++#define STATUS_OBJECT_NAME_INVALID ((int)(0xC0000033L)) ++#define STATUS_DEVICE_REMOVED ((int)(0xC00002B6L)) ++#define STATUS_INVALID_HANDLE ((int)(0xC0000008L)) ++#define STATUS_ILLEGAL_INSTRUCTION ((int)(0xC000001DL)) ++#define STATUS_NOT_IMPLEMENTED ((int)(0xC0000002L)) ++#define STATUS_PENDING ((int)(0x00000103L)) ++#define STATUS_ACCESS_DENIED ((int)(0xC0000022L)) ++#define STATUS_BUFFER_TOO_SMALL ((int)(0xC0000023L)) ++#define STATUS_OBJECT_TYPE_MISMATCH ((int)(0xC0000024L)) ++#define STATUS_GRAPHICS_ALLOCATION_BUSY ((int)(0xC01E0102L)) ++#define STATUS_NOT_SUPPORTED ((int)(0xC00000BBL)) ++#define STATUS_TIMEOUT ((int)(0x00000102L)) ++#define STATUS_INVALID_PARAMETER ((int)(0xC000000DL)) ++#define STATUS_NO_MEMORY ((int)(0xC0000017L)) ++#define STATUS_OBJECT_NAME_COLLISION ((int)(0xC0000035L)) ++#define STATUS_OBJECT_NAME_NOT_FOUND ((int)(0xC0000034L)) ++ ++ ++#define NT_SUCCESS(status) (status.v >= 0) ++ ++#ifndef DEBUG ++ ++#define DXGKRNL_ASSERT(exp) ++ ++#else ++ ++#define DXGKRNL_ASSERT(exp) \ ++do { \ ++ if (!(exp)) { \ ++ dump_stack(); \ ++ BUG_ON(true); \ ++ } \ ++} while (0) ++ ++#endif /* DEBUG */ ++ ++#endif /* _MISC_H_ */ +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 5d973604400c..2ea04cc02a1f 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -14,6 +14,40 @@ + #ifndef _D3DKMTHK_H + #define _D3DKMTHK_H + ++/* ++ * This structure matches the definition of D3DKMTHANDLE in Windows. ++ * The handle is opaque in user mode. It is used by user mode applications to ++ * represent kernel mode objects, created by dxgkrnl. ++ */ ++struct d3dkmthandle { ++ union { ++ struct { ++ __u32 instance : 6; ++ __u32 index : 24; ++ __u32 unique : 2; ++ }; ++ __u32 v; ++ }; ++}; ++ ++/* ++ * VM bus messages return Windows' NTSTATUS, which is integer and only negative ++ * value indicates a failure. A positive number is a success and needs to be ++ * returned to user mode as the IOCTL return code. Negative status codes are ++ * converted to Linux error codes. ++ */ ++struct ntstatus { ++ union { ++ struct { ++ int code : 16; ++ int facility : 13; ++ int customer : 1; ++ int severity : 2; ++ }; ++ int v; ++ }; ++}; ++ + /* + * Matches the Windows LUID definition. + * LUID is a locally unique identifier (similar to GUID, but not global), +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1671-drivers-hv-dxgkrnl-Creation-of-dxgadapter-object.patch b/patch/kernel/archive/wsl2-arm64-6.1/1671-drivers-hv-dxgkrnl-Creation-of-dxgadapter-object.patch new file mode 100644 index 000000000000..901bfd24efea --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1671-drivers-hv-dxgkrnl-Creation-of-dxgadapter-object.patch @@ -0,0 +1,1160 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 15 Feb 2022 19:00:38 -0800 +Subject: drivers: hv: dxgkrnl: Creation of dxgadapter object + +Handle creation and destruction of dxgadapter object, which +represents a virtual compute device, projected to the VM by +the host. The dxgadapter object is created when the +corresponding VMBus channel is offered by Hyper-V. + +There could be multiple virtual compute device objects, projected +by the host to VM. They are enumerated by issuing IOCTLs to +the /dev/dxg device. + +The adapter object can start functioning only when the global VMBus +channel and the corresponding per device VMBus channel are +initialized. 
Notifications about arrival of a virtual compute PCI +device and VMBus channels can happen in any order. Therefore, +the initial dxgadapter object state is DXGADAPTER_STATE_WAITING_VMBUS. +A list of VMBus channels and a list of waiting dxgadapter objects +are maintained. When dxgkrnl is notified about a VMBus channel +arrival, if tries to start all adapters, which are not started yet. + +Properties of the adapter object are determined by sending VMBus +messages to the host to the corresponding VMBus channel. + +When the per virtual compute device VMBus channel or the global +channel are destroyed, the adapter object is destroyed. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/Makefile | 2 +- + drivers/hv/dxgkrnl/dxgadapter.c | 170 ++++++++ + drivers/hv/dxgkrnl/dxgkrnl.h | 85 ++++ + drivers/hv/dxgkrnl/dxgmodule.c | 204 ++++++++- + drivers/hv/dxgkrnl/dxgvmbus.c | 217 +++++++++- + drivers/hv/dxgkrnl/dxgvmbus.h | 128 ++++++ + drivers/hv/dxgkrnl/misc.c | 37 ++ + drivers/hv/dxgkrnl/misc.h | 24 +- + 8 files changed, 844 insertions(+), 23 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile +index 76349064b60a..2ed07d877c91 100644 +--- a/drivers/hv/dxgkrnl/Makefile ++++ b/drivers/hv/dxgkrnl/Makefile +@@ -2,4 +2,4 @@ + # Makefile for the hyper-v compute device driver (dxgkrnl). + + obj-$(CONFIG_DXGKRNL) += dxgkrnl.o +-dxgkrnl-y := dxgmodule.o dxgvmbus.o ++dxgkrnl-y := dxgmodule.o misc.o dxgadapter.o ioctl.o dxgvmbus.o +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +new file mode 100644 +index 000000000000..07d47699d255 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -0,0 +1,170 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. 
++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Implementation of dxgadapter and its objects ++ * ++ */ ++ ++#include ++#include ++#include ++#include ++ ++#include "dxgkrnl.h" ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt ++ ++int dxgadapter_set_vmbus(struct dxgadapter *adapter, struct hv_device *hdev) ++{ ++ int ret; ++ ++ guid_to_luid(&hdev->channel->offermsg.offer.if_instance, ++ &adapter->luid); ++ DXG_TRACE("%x:%x %p %pUb", ++ adapter->luid.b, adapter->luid.a, hdev->channel, ++ &hdev->channel->offermsg.offer.if_instance); ++ ++ ret = dxgvmbuschannel_init(&adapter->channel, hdev); ++ if (ret) ++ goto cleanup; ++ ++ adapter->channel.adapter = adapter; ++ adapter->hv_dev = hdev; ++ ++ ret = dxgvmb_send_open_adapter(adapter); ++ if (ret < 0) { ++ DXG_ERR("dxgvmb_send_open_adapter failed: %d", ret); ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_get_internal_adapter_info(adapter); ++ ++cleanup: ++ if (ret) ++ DXG_ERR("Failed to set vmbus: %d", ret); ++ return ret; ++} ++ ++void dxgadapter_start(struct dxgadapter *adapter) ++{ ++ struct dxgvgpuchannel *ch = NULL; ++ struct dxgvgpuchannel *entry; ++ int ret; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ DXG_TRACE("%x-%x", adapter->luid.a, adapter->luid.b); ++ ++ /* Find the corresponding vGPU vm bus channel */ ++ list_for_each_entry(entry, &dxgglobal->vgpu_ch_list_head, ++ vgpu_ch_list_entry) { ++ if (memcmp(&adapter->luid, ++ &entry->adapter_luid, ++ sizeof(struct winluid)) == 0) { ++ ch = entry; ++ break; ++ } ++ } ++ if (ch == NULL) { ++ DXG_TRACE("vGPU chanel is not ready"); ++ return; ++ } ++ ++ /* The global channel is initialized when the first adapter starts */ ++ if (!dxgglobal->global_channel_initialized) { ++ ret = dxgglobal_init_global_channel(); ++ if (ret) { ++ dxgglobal_destroy_global_channel(); ++ return; ++ } ++ dxgglobal->global_channel_initialized = true; ++ } ++ ++ /* Initialize vGPU vm bus channel */ ++ ret = dxgadapter_set_vmbus(adapter, ch->hdev); ++ if (ret) { ++ DXG_ERR("Failed to start adapter %p", adapter); ++ adapter->adapter_state = DXGADAPTER_STATE_STOPPED; ++ return; ++ } ++ ++ adapter->adapter_state = DXGADAPTER_STATE_ACTIVE; ++ DXG_TRACE("Adapter started %p", adapter); ++} ++ ++void dxgadapter_stop(struct dxgadapter *adapter) ++{ ++ bool adapter_stopped = false; ++ ++ down_write(&adapter->core_lock); ++ if (!adapter->stopping_adapter) ++ adapter->stopping_adapter = true; ++ else ++ adapter_stopped = true; ++ up_write(&adapter->core_lock); ++ ++ if (adapter_stopped) ++ return; ++ ++ if (dxgadapter_acquire_lock_exclusive(adapter) == 0) { ++ dxgvmb_send_close_adapter(adapter); ++ dxgadapter_release_lock_exclusive(adapter); ++ } ++ dxgvmbuschannel_destroy(&adapter->channel); ++ ++ adapter->adapter_state = DXGADAPTER_STATE_STOPPED; ++} ++ ++void dxgadapter_release(struct kref *refcount) ++{ ++ struct dxgadapter *adapter; ++ ++ adapter = container_of(refcount, struct dxgadapter, adapter_kref); ++ DXG_TRACE("%p", adapter); ++ kfree(adapter); ++} ++ ++bool dxgadapter_is_active(struct dxgadapter *adapter) ++{ ++ return adapter->adapter_state == DXGADAPTER_STATE_ACTIVE; ++} ++ ++int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter) ++{ ++ down_write(&adapter->core_lock); ++ if (adapter->adapter_state != DXGADAPTER_STATE_ACTIVE) { ++ dxgadapter_release_lock_exclusive(adapter); ++ return -ENODEV; ++ } ++ return 0; ++} ++ ++void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter) ++{ ++ down_write(&adapter->core_lock); ++} ++ ++void 
dxgadapter_release_lock_exclusive(struct dxgadapter *adapter) ++{ ++ up_write(&adapter->core_lock); ++} ++ ++int dxgadapter_acquire_lock_shared(struct dxgadapter *adapter) ++{ ++ down_read(&adapter->core_lock); ++ if (adapter->adapter_state == DXGADAPTER_STATE_ACTIVE) ++ return 0; ++ dxgadapter_release_lock_shared(adapter); ++ return -ENODEV; ++} ++ ++void dxgadapter_release_lock_shared(struct dxgadapter *adapter) ++{ ++ up_read(&adapter->core_lock); ++} +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 52b9e82c51e6..ba2a7c6001aa 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -47,9 +47,39 @@ extern struct dxgdriver dxgdrv; + + #define DXGDEV dxgdrv.dxgdev + ++struct dxgk_device_types { ++ u32 post_device:1; ++ u32 post_device_certain:1; ++ u32 software_device:1; ++ u32 soft_gpu_device:1; ++ u32 warp_device:1; ++ u32 bdd_device:1; ++ u32 support_miracast:1; ++ u32 mismatched_lda:1; ++ u32 indirect_display_device:1; ++ u32 xbox_one_device:1; ++ u32 child_id_support_dwm_clone:1; ++ u32 child_id_support_dwm_clone2:1; ++ u32 has_internal_panel:1; ++ u32 rfx_vgpu_device:1; ++ u32 virtual_render_device:1; ++ u32 support_preserve_boot_display:1; ++ u32 is_uefi_frame_buffer:1; ++ u32 removable_device:1; ++ u32 virtual_monitor_device:1; ++}; ++ ++enum dxgobjectstate { ++ DXGOBJECTSTATE_CREATED, ++ DXGOBJECTSTATE_ACTIVE, ++ DXGOBJECTSTATE_STOPPED, ++ DXGOBJECTSTATE_DESTROYED, ++}; ++ + struct dxgvmbuschannel { + struct vmbus_channel *channel; + struct hv_device *hdev; ++ struct dxgadapter *adapter; + spinlock_t packet_list_mutex; + struct list_head packet_list_head; + struct kmem_cache *packet_cache; +@@ -81,6 +111,10 @@ struct dxgglobal { + struct miscdevice dxgdevice; + struct mutex device_mutex; + ++ /* list of created adapters */ ++ struct list_head adapter_list_head; ++ struct rw_semaphore adapter_list_lock; ++ + /* + * List of the vGPU VM bus channels (dxgvgpuchannel) + * Protected by device_mutex +@@ -102,6 +136,10 @@ static inline struct dxgglobal *dxggbl(void) + return dxgdrv.dxgglobal; + } + ++int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, ++ struct winluid host_vgpu_luid); ++void dxgglobal_acquire_adapter_list_lock(enum dxglockstate state); ++void dxgglobal_release_adapter_list_lock(enum dxglockstate state); + int dxgglobal_init_global_channel(void); + void dxgglobal_destroy_global_channel(void); + struct vmbus_channel *dxgglobal_get_vmbus(void); +@@ -113,6 +151,47 @@ struct dxgprocess { + /* Placeholder */ + }; + ++enum dxgadapter_state { ++ DXGADAPTER_STATE_ACTIVE = 0, ++ DXGADAPTER_STATE_STOPPED = 1, ++ DXGADAPTER_STATE_WAITING_VMBUS = 2, ++}; ++ ++/* ++ * This object represents the grapchis adapter. 
++ * Objects, which take reference on the adapter: ++ * - dxgglobal ++ * - adapter handle (struct d3dkmthandle) ++ */ ++struct dxgadapter { ++ struct rw_semaphore core_lock; ++ struct kref adapter_kref; ++ /* Entry in the list of adapters in dxgglobal */ ++ struct list_head adapter_list_entry; ++ struct pci_dev *pci_dev; ++ struct hv_device *hv_dev; ++ struct dxgvmbuschannel channel; ++ struct d3dkmthandle host_handle; ++ enum dxgadapter_state adapter_state; ++ struct winluid host_adapter_luid; ++ struct winluid host_vgpu_luid; ++ struct winluid luid; /* VM bus channel luid */ ++ u16 device_description[80]; ++ u16 device_instance_id[WIN_MAX_PATH]; ++ bool stopping_adapter; ++}; ++ ++int dxgadapter_set_vmbus(struct dxgadapter *adapter, struct hv_device *hdev); ++bool dxgadapter_is_active(struct dxgadapter *adapter); ++void dxgadapter_start(struct dxgadapter *adapter); ++void dxgadapter_stop(struct dxgadapter *adapter); ++void dxgadapter_release(struct kref *refcount); ++int dxgadapter_acquire_lock_shared(struct dxgadapter *adapter); ++void dxgadapter_release_lock_shared(struct dxgadapter *adapter); ++int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter); ++void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter); ++void dxgadapter_release_lock_exclusive(struct dxgadapter *adapter); ++ + /* + * The convention is that VNBus instance id is a GUID, but the host sets + * the lower part of the value to the host adapter LUID. The function +@@ -141,6 +220,12 @@ static inline void guid_to_luid(guid_t *guid, struct winluid *luid) + + void dxgvmb_initialize(void); + int dxgvmb_send_set_iospace_region(u64 start, u64 len); ++int dxgvmb_send_open_adapter(struct dxgadapter *adapter); ++int dxgvmb_send_close_adapter(struct dxgadapter *adapter); ++int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter); ++int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel, ++ void *command, ++ u32 cmd_size); + + int ntstatus2int(struct ntstatus status); + +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index e55639dc0adc..ef80b920f010 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -55,6 +55,156 @@ void dxgglobal_release_channel_lock(void) + up_read(&dxggbl()->channel_lock); + } + ++void dxgglobal_acquire_adapter_list_lock(enum dxglockstate state) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (state == DXGLOCK_EXCL) ++ down_write(&dxgglobal->adapter_list_lock); ++ else ++ down_read(&dxgglobal->adapter_list_lock); ++} ++ ++void dxgglobal_release_adapter_list_lock(enum dxglockstate state) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (state == DXGLOCK_EXCL) ++ up_write(&dxgglobal->adapter_list_lock); ++ else ++ up_read(&dxgglobal->adapter_list_lock); ++} ++ ++/* ++ * Returns a pointer to dxgadapter object, which corresponds to the given PCI ++ * device, or NULL. ++ */ ++static struct dxgadapter *find_pci_adapter(struct pci_dev *dev) ++{ ++ struct dxgadapter *entry; ++ struct dxgadapter *adapter = NULL; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL); ++ ++ list_for_each_entry(entry, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (dev == entry->pci_dev) { ++ adapter = entry; ++ break; ++ } ++ } ++ ++ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); ++ return adapter; ++} ++ ++/* ++ * Returns a pointer to dxgadapter object, which has the givel LUID ++ * device, or NULL. 
++ */ ++static struct dxgadapter *find_adapter(struct winluid *luid) ++{ ++ struct dxgadapter *entry; ++ struct dxgadapter *adapter = NULL; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL); ++ ++ list_for_each_entry(entry, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (memcmp(luid, &entry->luid, sizeof(struct winluid)) == 0) { ++ adapter = entry; ++ break; ++ } ++ } ++ ++ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); ++ return adapter; ++} ++ ++/* ++ * Creates a new dxgadapter object, which represents a virtual GPU, projected ++ * by the host. ++ * The adapter is in the waiting state. It will become active when the global ++ * VM bus channel and the adapter VM bus channel are created. ++ */ ++int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, ++ struct winluid host_vgpu_luid) ++{ ++ struct dxgadapter *adapter; ++ int ret = 0; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ adapter = kzalloc(sizeof(struct dxgadapter), GFP_KERNEL); ++ if (adapter == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ adapter->adapter_state = DXGADAPTER_STATE_WAITING_VMBUS; ++ adapter->host_vgpu_luid = host_vgpu_luid; ++ kref_init(&adapter->adapter_kref); ++ init_rwsem(&adapter->core_lock); ++ ++ adapter->pci_dev = dev; ++ guid_to_luid(guid, &adapter->luid); ++ ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL); ++ ++ list_add_tail(&adapter->adapter_list_entry, ++ &dxgglobal->adapter_list_head); ++ dxgglobal->num_adapters++; ++ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); ++ ++ DXG_TRACE("new adapter added %p %x-%x", adapter, ++ adapter->luid.a, adapter->luid.b); ++cleanup: ++ return ret; ++} ++ ++/* ++ * Attempts to start dxgadapter objects, which are not active yet. ++ */ ++static void dxgglobal_start_adapters(void) ++{ ++ struct dxgadapter *adapter; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (dxgglobal->hdev == NULL) { ++ DXG_TRACE("Global channel is not ready"); ++ return; ++ } ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL); ++ list_for_each_entry(adapter, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (adapter->adapter_state == DXGADAPTER_STATE_WAITING_VMBUS) ++ dxgadapter_start(adapter); ++ } ++ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); ++} ++ ++/* ++ * Stopsthe active dxgadapter objects. 
++ */ ++static void dxgglobal_stop_adapters(void) ++{ ++ struct dxgadapter *adapter; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (dxgglobal->hdev == NULL) { ++ DXG_TRACE("Global channel is not ready"); ++ return; ++ } ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL); ++ list_for_each_entry(adapter, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (adapter->adapter_state == DXGADAPTER_STATE_ACTIVE) ++ dxgadapter_stop(adapter); ++ } ++ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); ++} ++ + const struct file_operations dxgk_fops = { + .owner = THIS_MODULE, + }; +@@ -182,6 +332,15 @@ static int dxg_pci_probe_device(struct pci_dev *dev, + DXG_TRACE("Vmbus interface version: %d", dxgglobal->vmbus_ver); + DXG_TRACE("Host luid: %x-%x", vgpu_luid.b, vgpu_luid.a); + ++ /* Create new virtual GPU adapter */ ++ ret = dxgglobal_create_adapter(dev, &guid, vgpu_luid); ++ if (ret) ++ goto cleanup; ++ ++ /* Attempt to start the adapter in case VM bus channels are created */ ++ ++ dxgglobal_start_adapters(); ++ + cleanup: + + mutex_unlock(&dxgglobal->device_mutex); +@@ -193,7 +352,25 @@ static int dxg_pci_probe_device(struct pci_dev *dev, + + static void dxg_pci_remove_device(struct pci_dev *dev) + { +- /* Placeholder */ ++ struct dxgadapter *adapter; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_lock(&dxgglobal->device_mutex); ++ ++ adapter = find_pci_adapter(dev); ++ if (adapter) { ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL); ++ list_del(&adapter->adapter_list_entry); ++ dxgglobal->num_adapters--; ++ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); ++ ++ dxgadapter_stop(adapter); ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ } else { ++ DXG_ERR("Failed to find dxgadapter for pcidev"); ++ } ++ ++ mutex_unlock(&dxgglobal->device_mutex); + } + + static struct pci_device_id dxg_pci_id_table[] = { +@@ -297,6 +474,25 @@ void dxgglobal_destroy_global_channel(void) + up_write(&dxgglobal->channel_lock); + } + ++static void dxgglobal_stop_adapter_vmbus(struct hv_device *hdev) ++{ ++ struct dxgadapter *adapter = NULL; ++ struct winluid luid; ++ ++ guid_to_luid(&hdev->channel->offermsg.offer.if_instance, &luid); ++ ++ DXG_TRACE("Stopping adapter %x:%x", luid.b, luid.a); ++ ++ adapter = find_adapter(&luid); ++ ++ if (adapter && adapter->adapter_state == DXGADAPTER_STATE_ACTIVE) { ++ down_write(&adapter->core_lock); ++ dxgvmbuschannel_destroy(&adapter->channel); ++ adapter->adapter_state = DXGADAPTER_STATE_STOPPED; ++ up_write(&adapter->core_lock); ++ } ++} ++ + static const struct hv_vmbus_device_id dxg_vmbus_id_table[] = { + /* Per GPU Device GUID */ + { HV_GPUP_DXGK_VGPU_GUID }, +@@ -329,6 +525,7 @@ static int dxg_probe_vmbus(struct hv_device *hdev, + vgpuch->hdev = hdev; + list_add_tail(&vgpuch->vgpu_ch_list_entry, + &dxgglobal->vgpu_ch_list_head); ++ dxgglobal_start_adapters(); + } else if (uuid_le_cmp(hdev->dev_type, + dxg_vmbus_id_table[1].guid) == 0) { + /* This is the global Dxgkgnl channel */ +@@ -341,6 +538,7 @@ static int dxg_probe_vmbus(struct hv_device *hdev, + goto error; + } + dxgglobal->hdev = hdev; ++ dxgglobal_start_adapters(); + } else { + /* Unknown device type */ + DXG_ERR("Unknown VM bus device type"); +@@ -364,6 +562,7 @@ static int dxg_remove_vmbus(struct hv_device *hdev) + + if (uuid_le_cmp(hdev->dev_type, dxg_vmbus_id_table[0].guid) == 0) { + DXG_TRACE("Remove virtual GPU channel"); ++ dxgglobal_stop_adapter_vmbus(hdev); + list_for_each_entry(vgpu_channel, + &dxgglobal->vgpu_ch_list_head, + vgpu_ch_list_entry) { +@@ -420,6 +619,8 
@@ static struct dxgglobal *dxgglobal_create(void) + mutex_init(&dxgglobal->device_mutex); + + INIT_LIST_HEAD(&dxgglobal->vgpu_ch_list_head); ++ INIT_LIST_HEAD(&dxgglobal->adapter_list_head); ++ init_rwsem(&dxgglobal->adapter_list_lock); + + init_rwsem(&dxgglobal->channel_lock); + +@@ -430,6 +631,7 @@ static void dxgglobal_destroy(struct dxgglobal *dxgglobal) + { + if (dxgglobal) { + mutex_lock(&dxgglobal->device_mutex); ++ dxgglobal_stop_adapters(); + dxgglobal_destroy_global_channel(); + mutex_unlock(&dxgglobal->device_mutex); + +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index a4365739826a..6d4b8d9d8d07 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -77,7 +77,7 @@ struct dxgvmbusmsgres { + void *res; + }; + +-static int init_message(struct dxgvmbusmsg *msg, ++static int init_message(struct dxgvmbusmsg *msg, struct dxgadapter *adapter, + struct dxgprocess *process, u32 size) + { + struct dxgglobal *dxgglobal = dxggbl(); +@@ -99,10 +99,15 @@ static int init_message(struct dxgvmbusmsg *msg, + if (use_ext_header) { + msg->msg = (char *)&msg->hdr[1]; + msg->hdr->command_offset = sizeof(msg->hdr[0]); ++ if (adapter) ++ msg->hdr->vgpu_luid = adapter->host_vgpu_luid; + } else { + msg->msg = (char *)msg->hdr; + } +- msg->channel = &dxgglobal->channel; ++ if (adapter && !dxgglobal->async_msg_enabled) ++ msg->channel = &adapter->channel; ++ else ++ msg->channel = &dxgglobal->channel; + return 0; + } + +@@ -116,6 +121,37 @@ static void free_message(struct dxgvmbusmsg *msg, struct dxgprocess *process) + * Helper functions + */ + ++static void command_vm_to_host_init2(struct dxgkvmb_command_vm_to_host *command, ++ enum dxgkvmb_commandtype_global t, ++ struct d3dkmthandle process) ++{ ++ command->command_type = t; ++ command->process = process; ++ command->command_id = 0; ++ command->channel_type = DXGKVMB_VM_TO_HOST; ++} ++ ++static void command_vgpu_to_host_init1(struct dxgkvmb_command_vgpu_to_host ++ *command, ++ enum dxgkvmb_commandtype type) ++{ ++ command->command_type = type; ++ command->process.v = 0; ++ command->command_id = 0; ++ command->channel_type = DXGKVMB_VGPU_TO_HOST; ++} ++ ++static void command_vgpu_to_host_init2(struct dxgkvmb_command_vgpu_to_host ++ *command, ++ enum dxgkvmb_commandtype type, ++ struct d3dkmthandle process) ++{ ++ command->command_type = type; ++ command->process = process; ++ command->command_id = 0; ++ command->channel_type = DXGKVMB_VGPU_TO_HOST; ++} ++ + int ntstatus2int(struct ntstatus status) + { + if (NT_SUCCESS(status)) +@@ -216,22 +252,26 @@ static void process_inband_packet(struct dxgvmbuschannel *channel, + u32 packet_length = hv_pkt_datalen(desc); + struct dxgkvmb_command_host_to_vm *packet; + +- if (packet_length < sizeof(struct dxgkvmb_command_host_to_vm)) { +- DXG_ERR("Invalid global packet"); +- } else { +- packet = hv_pkt_data(desc); +- DXG_TRACE("global packet %d", +- packet->command_type); +- switch (packet->command_type) { +- case DXGK_VMBCOMMAND_SIGNALGUESTEVENT: +- case DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE: +- break; +- case DXGK_VMBCOMMAND_SENDWNFNOTIFICATION: +- break; +- default: +- DXG_ERR("unexpected host message %d", ++ if (channel->adapter == NULL) { ++ if (packet_length < sizeof(struct dxgkvmb_command_host_to_vm)) { ++ DXG_ERR("Invalid global packet"); ++ } else { ++ packet = hv_pkt_data(desc); ++ DXG_TRACE("global packet %d", + packet->command_type); ++ switch (packet->command_type) { ++ case DXGK_VMBCOMMAND_SIGNALGUESTEVENT: ++ case 
DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE: ++ break; ++ case DXGK_VMBCOMMAND_SENDWNFNOTIFICATION: ++ break; ++ default: ++ DXG_ERR("unexpected host message %d", ++ packet->command_type); ++ } + } ++ } else { ++ DXG_ERR("Unexpected packet for adapter channel"); + } + } + +@@ -279,6 +319,7 @@ void dxgvmbuschannel_receive(void *ctx) + struct vmpacket_descriptor *desc; + u32 packet_length = 0; + ++ DXG_TRACE("New adapter message: %p", channel->adapter); + foreach_vmbus_pkt(desc, channel->channel) { + packet_length = hv_pkt_datalen(desc); + DXG_TRACE("next packet (id, size, type): %llu %d %d", +@@ -302,6 +343,8 @@ int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel, + { + int ret; + struct dxgvmbuspacket *packet = NULL; ++ struct dxgkvmb_command_vm_to_host *cmd1; ++ struct dxgkvmb_command_vgpu_to_host *cmd2; + + if (cmd_size > DXG_MAX_VM_BUS_PACKET_SIZE || + result_size > DXG_MAX_VM_BUS_PACKET_SIZE) { +@@ -315,6 +358,16 @@ int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel, + return -ENOMEM; + } + ++ if (channel->adapter == NULL) { ++ cmd1 = command; ++ DXG_TRACE("send_sync_msg global: %d %p %d %d", ++ cmd1->command_type, command, cmd_size, result_size); ++ } else { ++ cmd2 = command; ++ DXG_TRACE("send_sync_msg adapter: %d %p %d %d", ++ cmd2->command_type, command, cmd_size, result_size); ++ } ++ + packet->request_id = atomic64_inc_return(&channel->packet_request_id); + init_completion(&packet->wait); + packet->buffer = result; +@@ -358,6 +411,41 @@ int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel, + return ret; + } + ++int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel, ++ void *command, ++ u32 cmd_size) ++{ ++ int ret; ++ int try_count = 0; ++ ++ if (cmd_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("%s invalid data size", __func__); ++ return -EINVAL; ++ } ++ ++ if (channel->adapter) { ++ DXG_ERR("Async message sent to the adapter channel"); ++ return -EINVAL; ++ } ++ ++ do { ++ ret = vmbus_sendpacket(channel->channel, command, cmd_size, ++ 0, VM_PKT_DATA_INBAND, 0); ++ /* ++ * -EAGAIN is returned when the VM bus ring buffer if full. ++ * Wait 2ms to allow the host to process messages and try again. 
++ */ ++ if (ret == -EAGAIN) { ++ usleep_range(1000, 2000); ++ try_count++; ++ } ++ } while (ret == -EAGAIN && try_count < 5000); ++ if (ret < 0) ++ DXG_ERR("vmbus_sendpacket failed: %x", ret); ++ ++ return ret; ++} ++ + static int + dxgvmb_send_sync_msg_ntstatus(struct dxgvmbuschannel *channel, + void *command, u32 cmd_size) +@@ -383,7 +471,7 @@ int dxgvmb_send_set_iospace_region(u64 start, u64 len) + struct dxgvmbusmsg msg; + struct dxgglobal *dxgglobal = dxggbl(); + +- ret = init_message(&msg, NULL, sizeof(*command)); ++ ret = init_message(&msg, NULL, NULL, sizeof(*command)); + if (ret) + return ret; + command = (void *)msg.msg; +@@ -408,3 +496,98 @@ int dxgvmb_send_set_iospace_region(u64 start, u64 len) + DXG_TRACE("Error: %d", ret); + return ret; + } ++ ++/* ++ * Virtual GPU messages to the host ++ */ ++ ++int dxgvmb_send_open_adapter(struct dxgadapter *adapter) ++{ ++ int ret; ++ struct dxgkvmb_command_openadapter *command; ++ struct dxgkvmb_command_openadapter_return result = { }; ++ struct dxgvmbusmsg msg; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, adapter, NULL, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init1(&command->hdr, DXGK_VMBCOMMAND_OPENADAPTER); ++ command->vmbus_interface_version = dxgglobal->vmbus_ver; ++ command->vmbus_last_compatible_interface_version = ++ DXGK_VMBUS_LAST_COMPATIBLE_INTERFACE_VERSION; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result.status); ++ adapter->host_handle = result.host_adapter_handle; ++ ++cleanup: ++ free_message(&msg, NULL); ++ if (ret) ++ DXG_ERR("Failed to open adapter: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_close_adapter(struct dxgadapter *adapter) ++{ ++ int ret; ++ struct dxgkvmb_command_closeadapter *command; ++ struct dxgvmbusmsg msg; ++ ++ ret = init_message(&msg, adapter, NULL, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init1(&command->hdr, DXGK_VMBCOMMAND_CLOSEADAPTER); ++ command->host_handle = adapter->host_handle; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ NULL, 0); ++ free_message(&msg, NULL); ++ if (ret) ++ DXG_ERR("Failed to close adapter: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter) ++{ ++ int ret; ++ struct dxgkvmb_command_getinternaladapterinfo *command; ++ struct dxgkvmb_command_getinternaladapterinfo_return result = { }; ++ struct dxgvmbusmsg msg; ++ u32 result_size = sizeof(result); ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, adapter, NULL, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init1(&command->hdr, ++ DXGK_VMBCOMMAND_GETINTERNALADAPTERINFO); ++ if (dxgglobal->vmbus_ver < DXGK_VMBUS_INTERFACE_VERSION) ++ result_size -= sizeof(struct winluid); ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, result_size); ++ if (ret >= 0) { ++ adapter->host_adapter_luid = result.host_adapter_luid; ++ adapter->host_vgpu_luid = result.host_vgpu_luid; ++ wcsncpy(adapter->device_description, result.device_description, ++ sizeof(adapter->device_description) / sizeof(u16)); ++ wcsncpy(adapter->device_instance_id, result.device_instance_id, ++ sizeof(adapter->device_instance_id) / sizeof(u16)); ++ dxgglobal->async_msg_enabled = result.async_msg_enabled != 0; ++ } ++ 
free_message(&msg, NULL); ++ if (ret) ++ DXG_ERR("Failed to get adapter info: %d", ret); ++ return ret; ++} +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index b1bdd6039b73..584cdd3db6c0 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -47,6 +47,83 @@ enum dxgkvmb_commandtype_global { + DXGK_VMBCOMMAND_INVALID_VM_TO_HOST + }; + ++/* ++ * ++ * Commands, sent to the host via the per adapter VM bus channel ++ * DXG_GUEST_VGPU_VMBUS ++ * ++ */ ++ ++enum dxgkvmb_commandtype { ++ DXGK_VMBCOMMAND_CREATEDEVICE = 0, ++ DXGK_VMBCOMMAND_DESTROYDEVICE = 1, ++ DXGK_VMBCOMMAND_QUERYADAPTERINFO = 2, ++ DXGK_VMBCOMMAND_DDIQUERYADAPTERINFO = 3, ++ DXGK_VMBCOMMAND_CREATEALLOCATION = 4, ++ DXGK_VMBCOMMAND_DESTROYALLOCATION = 5, ++ DXGK_VMBCOMMAND_CREATECONTEXTVIRTUAL = 6, ++ DXGK_VMBCOMMAND_DESTROYCONTEXT = 7, ++ DXGK_VMBCOMMAND_CREATESYNCOBJECT = 8, ++ DXGK_VMBCOMMAND_CREATEPAGINGQUEUE = 9, ++ DXGK_VMBCOMMAND_DESTROYPAGINGQUEUE = 10, ++ DXGK_VMBCOMMAND_MAKERESIDENT = 11, ++ DXGK_VMBCOMMAND_EVICT = 12, ++ DXGK_VMBCOMMAND_ESCAPE = 13, ++ DXGK_VMBCOMMAND_OPENADAPTER = 14, ++ DXGK_VMBCOMMAND_CLOSEADAPTER = 15, ++ DXGK_VMBCOMMAND_FREEGPUVIRTUALADDRESS = 16, ++ DXGK_VMBCOMMAND_MAPGPUVIRTUALADDRESS = 17, ++ DXGK_VMBCOMMAND_RESERVEGPUVIRTUALADDRESS = 18, ++ DXGK_VMBCOMMAND_UPDATEGPUVIRTUALADDRESS = 19, ++ DXGK_VMBCOMMAND_SUBMITCOMMAND = 20, ++ dxgk_vmbcommand_queryvideomemoryinfo = 21, ++ DXGK_VMBCOMMAND_WAITFORSYNCOBJECTFROMCPU = 22, ++ DXGK_VMBCOMMAND_LOCK2 = 23, ++ DXGK_VMBCOMMAND_UNLOCK2 = 24, ++ DXGK_VMBCOMMAND_WAITFORSYNCOBJECTFROMGPU = 25, ++ DXGK_VMBCOMMAND_SIGNALSYNCOBJECT = 26, ++ DXGK_VMBCOMMAND_SIGNALFENCENTSHAREDBYREF = 27, ++ DXGK_VMBCOMMAND_GETDEVICESTATE = 28, ++ DXGK_VMBCOMMAND_MARKDEVICEASERROR = 29, ++ DXGK_VMBCOMMAND_ADAPTERSTOP = 30, ++ DXGK_VMBCOMMAND_SETQUEUEDLIMIT = 31, ++ DXGK_VMBCOMMAND_OPENRESOURCE = 32, ++ DXGK_VMBCOMMAND_SETCONTEXTSCHEDULINGPRIORITY = 33, ++ DXGK_VMBCOMMAND_PRESENTHISTORYTOKEN = 34, ++ DXGK_VMBCOMMAND_SETREDIRECTEDFLIPFENCEVALUE = 35, ++ DXGK_VMBCOMMAND_GETINTERNALADAPTERINFO = 36, ++ DXGK_VMBCOMMAND_FLUSHHEAPTRANSITIONS = 37, ++ DXGK_VMBCOMMAND_BLT = 38, ++ DXGK_VMBCOMMAND_DDIGETSTANDARDALLOCATIONDRIVERDATA = 39, ++ DXGK_VMBCOMMAND_CDDGDICOMMAND = 40, ++ DXGK_VMBCOMMAND_QUERYALLOCATIONRESIDENCY = 41, ++ DXGK_VMBCOMMAND_FLUSHDEVICE = 42, ++ DXGK_VMBCOMMAND_FLUSHADAPTER = 43, ++ DXGK_VMBCOMMAND_DDIGETNODEMETADATA = 44, ++ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE = 45, ++ DXGK_VMBCOMMAND_ISSYNCOBJECTSIGNALED = 46, ++ DXGK_VMBCOMMAND_CDDSYNCGPUACCESS = 47, ++ DXGK_VMBCOMMAND_QUERYSTATISTICS = 48, ++ DXGK_VMBCOMMAND_CHANGEVIDEOMEMORYRESERVATION = 49, ++ DXGK_VMBCOMMAND_CREATEHWQUEUE = 50, ++ DXGK_VMBCOMMAND_DESTROYHWQUEUE = 51, ++ DXGK_VMBCOMMAND_SUBMITCOMMANDTOHWQUEUE = 52, ++ DXGK_VMBCOMMAND_GETDRIVERSTOREFILE = 53, ++ DXGK_VMBCOMMAND_READDRIVERSTOREFILE = 54, ++ DXGK_VMBCOMMAND_GETNEXTHARDLINK = 55, ++ DXGK_VMBCOMMAND_UPDATEALLOCATIONPROPERTY = 56, ++ DXGK_VMBCOMMAND_OFFERALLOCATIONS = 57, ++ DXGK_VMBCOMMAND_RECLAIMALLOCATIONS = 58, ++ DXGK_VMBCOMMAND_SETALLOCATIONPRIORITY = 59, ++ DXGK_VMBCOMMAND_GETALLOCATIONPRIORITY = 60, ++ DXGK_VMBCOMMAND_GETCONTEXTSCHEDULINGPRIORITY = 61, ++ DXGK_VMBCOMMAND_QUERYCLOCKCALIBRATION = 62, ++ DXGK_VMBCOMMAND_QUERYRESOURCEINFO = 64, ++ DXGK_VMBCOMMAND_LOGEVENT = 65, ++ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMPAGES = 66, ++ DXGK_VMBCOMMAND_INVALID ++}; ++ + /* + * Commands, sent by the host to the VM + */ +@@ -66,6 +143,15 @@ struct dxgkvmb_command_vm_to_host { + enum 
dxgkvmb_commandtype_global command_type; + }; + ++struct dxgkvmb_command_vgpu_to_host { ++ u64 command_id; ++ struct d3dkmthandle process; ++ u32 channel_type : 8; ++ u32 async_msg : 1; ++ u32 reserved : 23; ++ enum dxgkvmb_commandtype command_type; ++}; ++ + struct dxgkvmb_command_host_to_vm { + u64 command_id; + struct d3dkmthandle process; +@@ -83,4 +169,46 @@ struct dxgkvmb_command_setiospaceregion { + u32 shared_page_gpadl; + }; + ++struct dxgkvmb_command_openadapter { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ u32 vmbus_interface_version; ++ u32 vmbus_last_compatible_interface_version; ++ struct winluid guest_adapter_luid; ++}; ++ ++struct dxgkvmb_command_openadapter_return { ++ struct d3dkmthandle host_adapter_handle; ++ struct ntstatus status; ++ u32 vmbus_interface_version; ++ u32 vmbus_last_compatible_interface_version; ++}; ++ ++struct dxgkvmb_command_closeadapter { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle host_handle; ++}; ++ ++struct dxgkvmb_command_getinternaladapterinfo { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++}; ++ ++struct dxgkvmb_command_getinternaladapterinfo_return { ++ struct dxgk_device_types device_types; ++ u32 driver_store_copy_mode; ++ u32 driver_ddi_version; ++ u32 secure_virtual_machine : 1; ++ u32 virtual_machine_reset : 1; ++ u32 is_vail_supported : 1; ++ u32 hw_sch_enabled : 1; ++ u32 hw_sch_capable : 1; ++ u32 va_backed_vm : 1; ++ u32 async_msg_enabled : 1; ++ u32 hw_support_state : 2; ++ u32 reserved : 23; ++ struct winluid host_adapter_luid; ++ u16 device_description[80]; ++ u16 device_instance_id[WIN_MAX_PATH]; ++ struct winluid host_vgpu_luid; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/misc.c b/drivers/hv/dxgkrnl/misc.c +new file mode 100644 +index 000000000000..cb1e0635bebc +--- /dev/null ++++ b/drivers/hv/dxgkrnl/misc.c +@@ -0,0 +1,37 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2019, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Helper functions ++ * ++ */ ++ ++#include ++#include ++#include ++ ++#include "dxgkrnl.h" ++#include "misc.h" ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt ++ ++u16 *wcsncpy(u16 *dest, const u16 *src, size_t n) ++{ ++ int i; ++ ++ for (i = 0; i < n; i++) { ++ dest[i] = src[i]; ++ if (src[i] == 0) { ++ i++; ++ break; ++ } ++ } ++ dest[i - 1] = 0; ++ return dest; ++} +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index 4c6047c32a20..d292e9a9bb7f 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -14,18 +14,34 @@ + #ifndef _MISC_H_ + #define _MISC_H_ + ++/* Max characters in Windows path */ ++#define WIN_MAX_PATH 260 ++ + extern const struct d3dkmthandle zerohandle; + + /* + * Synchronization lock hierarchy. + * +- * The higher enum value, the higher is the lock order. +- * When a lower lock ois held, the higher lock should not be acquired. ++ * The locks here are in the order from lowest to highest. ++ * When a lower lock is held, the higher lock should not be acquired. 
+ * +- * channel_lock +- * device_mutex ++ * channel_lock (VMBus channel lock) ++ * fd_mutex ++ * plistmutex (process list mutex) ++ * table_lock (handle table lock) ++ * core_lock (dxgadapter lock) ++ * device_lock (dxgdevice lock) ++ * adapter_list_lock ++ * device_mutex (dxgglobal mutex) + */ + ++u16 *wcsncpy(u16 *dest, const u16 *src, size_t n); ++ ++enum dxglockstate { ++ DXGLOCK_SHARED, ++ DXGLOCK_EXCL ++}; ++ + /* + * Some of the Windows return codes, which needs to be translated to Linux + * IOCTL return codes. Positive values are success codes and need to be +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1672-drivers-hv-dxgkrnl-Opening-of-dev-dxg-device-and-dxgprocess-creation.patch b/patch/kernel/archive/wsl2-arm64-6.1/1672-drivers-hv-dxgkrnl-Opening-of-dev-dxg-device-and-dxgprocess-creation.patch new file mode 100644 index 000000000000..413f14c3461c --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1672-drivers-hv-dxgkrnl-Opening-of-dev-dxg-device-and-dxgprocess-creation.patch @@ -0,0 +1,1847 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 15 Feb 2022 19:12:48 -0800 +Subject: drivers: hv: dxgkrnl: Opening of /dev/dxg device and dxgprocess + creation + +- Implement opening of the device (/dev/dxg) file object and creation of +dxgprocess objects. + +- Add VM bus messages to create and destroy the host side of a dxgprocess +object. + +- Implement the handle manager, which manages d3dkmthandle handles +for the internal process objects. The handles are used by a user mode +client to reference dxgkrnl objects. + +dxgprocess is created for each process, which opens /dev/dxg. +dxgprocess is ref counted, so the existing dxgprocess objects is used +for a process, which opens the device object multiple time. +dxgprocess is destroyed when the file object is released. + +A corresponding dxgprocess object is created on the host for every +dxgprocess object in the guest. + +When a dxgkrnl object is created, in most cases the corresponding +object is created in the host. The VM references the host objects by +handles (d3dkmthandle). d3dkmthandle values for a host object and +the corresponding VM object are the same. A host handle is allocated +first and its value is assigned to the guest object. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/Makefile | 2 +- + drivers/hv/dxgkrnl/dxgadapter.c | 72 ++ + drivers/hv/dxgkrnl/dxgkrnl.h | 95 +- + drivers/hv/dxgkrnl/dxgmodule.c | 97 ++ + drivers/hv/dxgkrnl/dxgprocess.c | 262 +++++ + drivers/hv/dxgkrnl/dxgvmbus.c | 164 +++ + drivers/hv/dxgkrnl/dxgvmbus.h | 36 + + drivers/hv/dxgkrnl/hmgr.c | 563 ++++++++++ + drivers/hv/dxgkrnl/hmgr.h | 112 ++ + drivers/hv/dxgkrnl/ioctl.c | 60 + + drivers/hv/dxgkrnl/misc.h | 9 +- + include/uapi/misc/d3dkmthk.h | 103 ++ + 12 files changed, 1569 insertions(+), 6 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile +index 2ed07d877c91..9d821e83448a 100644 +--- a/drivers/hv/dxgkrnl/Makefile ++++ b/drivers/hv/dxgkrnl/Makefile +@@ -2,4 +2,4 @@ + # Makefile for the hyper-v compute device driver (dxgkrnl). 
+ + obj-$(CONFIG_DXGKRNL) += dxgkrnl.o +-dxgkrnl-y := dxgmodule.o misc.o dxgadapter.o ioctl.o dxgvmbus.o ++dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 07d47699d255..fa0d6beca157 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -100,6 +100,7 @@ void dxgadapter_start(struct dxgadapter *adapter) + + void dxgadapter_stop(struct dxgadapter *adapter) + { ++ struct dxgprocess_adapter *entry; + bool adapter_stopped = false; + + down_write(&adapter->core_lock); +@@ -112,6 +113,15 @@ void dxgadapter_stop(struct dxgadapter *adapter) + if (adapter_stopped) + return; + ++ dxgglobal_acquire_process_adapter_lock(); ++ ++ list_for_each_entry(entry, &adapter->adapter_process_list_head, ++ adapter_process_list_entry) { ++ dxgprocess_adapter_stop(entry); ++ } ++ ++ dxgglobal_release_process_adapter_lock(); ++ + if (dxgadapter_acquire_lock_exclusive(adapter) == 0) { + dxgvmb_send_close_adapter(adapter); + dxgadapter_release_lock_exclusive(adapter); +@@ -135,6 +145,21 @@ bool dxgadapter_is_active(struct dxgadapter *adapter) + return adapter->adapter_state == DXGADAPTER_STATE_ACTIVE; + } + ++/* Protected by dxgglobal_acquire_process_adapter_lock */ ++void dxgadapter_add_process(struct dxgadapter *adapter, ++ struct dxgprocess_adapter *process_info) ++{ ++ DXG_TRACE("%p %p", adapter, process_info); ++ list_add_tail(&process_info->adapter_process_list_entry, ++ &adapter->adapter_process_list_head); ++} ++ ++void dxgadapter_remove_process(struct dxgprocess_adapter *process_info) ++{ ++ DXG_TRACE("%p %p", process_info->adapter, process_info); ++ list_del(&process_info->adapter_process_list_entry); ++} ++ + int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter) + { + down_write(&adapter->core_lock); +@@ -168,3 +193,50 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter) + { + up_read(&adapter->core_lock); + } ++ ++struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, ++ struct dxgadapter *adapter) ++{ ++ struct dxgprocess_adapter *adapter_info; ++ ++ adapter_info = kzalloc(sizeof(*adapter_info), GFP_KERNEL); ++ if (adapter_info) { ++ if (kref_get_unless_zero(&adapter->adapter_kref) == 0) { ++ DXG_ERR("failed to acquire adapter reference"); ++ goto cleanup; ++ } ++ adapter_info->adapter = adapter; ++ adapter_info->process = process; ++ adapter_info->refcount = 1; ++ list_add_tail(&adapter_info->process_adapter_list_entry, ++ &process->process_adapter_list_head); ++ dxgadapter_add_process(adapter, adapter_info); ++ } ++ return adapter_info; ++cleanup: ++ if (adapter_info) ++ kfree(adapter_info); ++ return NULL; ++} ++ ++void dxgprocess_adapter_stop(struct dxgprocess_adapter *adapter_info) ++{ ++} ++ ++void dxgprocess_adapter_destroy(struct dxgprocess_adapter *adapter_info) ++{ ++ dxgadapter_remove_process(adapter_info); ++ kref_put(&adapter_info->adapter->adapter_kref, dxgadapter_release); ++ list_del(&adapter_info->process_adapter_list_entry); ++ kfree(adapter_info); ++} ++ ++/* ++ * Must be called when dxgglobal::process_adapter_mutex is held ++ */ ++void dxgprocess_adapter_release(struct dxgprocess_adapter *adapter_info) ++{ ++ adapter_info->refcount--; ++ if (adapter_info->refcount == 0) ++ dxgprocess_adapter_destroy(adapter_info); ++} +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index ba2a7c6001aa..b089d126f801 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ 
b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -29,8 +29,10 @@ + #include + #include + #include "misc.h" ++#include "hmgr.h" + #include + ++struct dxgprocess; + struct dxgadapter; + + /* +@@ -111,6 +113,10 @@ struct dxgglobal { + struct miscdevice dxgdevice; + struct mutex device_mutex; + ++ /* list of created processes */ ++ struct list_head plisthead; ++ struct mutex plistmutex; ++ + /* list of created adapters */ + struct list_head adapter_list_head; + struct rw_semaphore adapter_list_lock; +@@ -124,6 +130,9 @@ struct dxgglobal { + /* protects acces to the global VM bus channel */ + struct rw_semaphore channel_lock; + ++ /* protects the dxgprocess_adapter lists */ ++ struct mutex process_adapter_mutex; ++ + bool global_channel_initialized; + bool async_msg_enabled; + bool misc_registered; +@@ -144,13 +153,84 @@ int dxgglobal_init_global_channel(void); + void dxgglobal_destroy_global_channel(void); + struct vmbus_channel *dxgglobal_get_vmbus(void); + struct dxgvmbuschannel *dxgglobal_get_dxgvmbuschannel(void); ++void dxgglobal_acquire_process_adapter_lock(void); ++void dxgglobal_release_process_adapter_lock(void); + int dxgglobal_acquire_channel_lock(void); + void dxgglobal_release_channel_lock(void); + ++/* ++ * Describes adapter information for each process ++ */ ++struct dxgprocess_adapter { ++ /* Entry in dxgadapter::adapter_process_list_head */ ++ struct list_head adapter_process_list_entry; ++ /* Entry in dxgprocess::process_adapter_list_head */ ++ struct list_head process_adapter_list_entry; ++ struct dxgadapter *adapter; ++ struct dxgprocess *process; ++ int refcount; ++}; ++ ++struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, ++ struct dxgadapter ++ *adapter); ++void dxgprocess_adapter_release(struct dxgprocess_adapter *adapter); ++void dxgprocess_adapter_stop(struct dxgprocess_adapter *adapter_info); ++void dxgprocess_adapter_destroy(struct dxgprocess_adapter *adapter_info); ++ ++/* ++ * The structure represents a process, which opened the /dev/dxg device. ++ * A corresponding object is created on the host. ++ */ + struct dxgprocess { +- /* Placeholder */ ++ /* ++ * Process list entry in dxgglobal. ++ * Protected by the dxgglobal->plistmutex. ++ */ ++ struct list_head plistentry; ++ pid_t pid; ++ pid_t tgid; ++ /* how many time the process was opened */ ++ struct kref process_kref; ++ /* ++ * This handle table is used for all objects except dxgadapter ++ * The handle table lock order is higher than the local_handle_table ++ * lock ++ */ ++ struct hmgrtable handle_table; ++ /* ++ * This handle table is used for dxgadapter objects. ++ * The handle table lock order is lowest. 
++ */ ++ struct hmgrtable local_handle_table; ++ /* Handle of the corresponding objec on the host */ ++ struct d3dkmthandle host_handle; ++ ++ /* List of opened adapters (dxgprocess_adapter) */ ++ struct list_head process_adapter_list_head; + }; + ++struct dxgprocess *dxgprocess_create(void); ++void dxgprocess_destroy(struct dxgprocess *process); ++void dxgprocess_release(struct kref *refcount); ++int dxgprocess_open_adapter(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle *handle); ++int dxgprocess_close_adapter(struct dxgprocess *process, ++ struct d3dkmthandle handle); ++struct dxgadapter *dxgprocess_get_adapter(struct dxgprocess *process, ++ struct d3dkmthandle handle); ++struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process, ++ struct d3dkmthandle handle); ++void dxgprocess_ht_lock_shared_down(struct dxgprocess *process); ++void dxgprocess_ht_lock_shared_up(struct dxgprocess *process); ++void dxgprocess_ht_lock_exclusive_down(struct dxgprocess *process); ++void dxgprocess_ht_lock_exclusive_up(struct dxgprocess *process); ++struct dxgprocess_adapter *dxgprocess_get_adapter_info(struct dxgprocess ++ *process, ++ struct dxgadapter ++ *adapter); ++ + enum dxgadapter_state { + DXGADAPTER_STATE_ACTIVE = 0, + DXGADAPTER_STATE_STOPPED = 1, +@@ -168,6 +248,8 @@ struct dxgadapter { + struct kref adapter_kref; + /* Entry in the list of adapters in dxgglobal */ + struct list_head adapter_list_entry; ++ /* The list of dxgprocess_adapter entries */ ++ struct list_head adapter_process_list_head; + struct pci_dev *pci_dev; + struct hv_device *hv_dev; + struct dxgvmbuschannel channel; +@@ -191,6 +273,12 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter); + int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter); + void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter); + void dxgadapter_release_lock_exclusive(struct dxgadapter *adapter); ++void dxgadapter_add_process(struct dxgadapter *adapter, ++ struct dxgprocess_adapter *process_info); ++void dxgadapter_remove_process(struct dxgprocess_adapter *process_info); ++ ++long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2); ++long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2); + + /* + * The convention is that VNBus instance id is a GUID, but the host sets +@@ -220,9 +308,14 @@ static inline void guid_to_luid(guid_t *guid, struct winluid *luid) + + void dxgvmb_initialize(void); + int dxgvmb_send_set_iospace_region(u64 start, u64 len); ++int dxgvmb_send_create_process(struct dxgprocess *process); ++int dxgvmb_send_destroy_process(struct d3dkmthandle process); + int dxgvmb_send_open_adapter(struct dxgadapter *adapter); + int dxgvmb_send_close_adapter(struct dxgadapter *adapter); + int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter); ++int dxgvmb_send_query_adapter_info(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryadapterinfo *args); + int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel, + void *command, + u32 cmd_size); +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index ef80b920f010..17c22001ca6c 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -123,6 +123,20 @@ static struct dxgadapter *find_adapter(struct winluid *luid) + return adapter; + } + ++void dxgglobal_acquire_process_adapter_lock(void) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_lock(&dxgglobal->process_adapter_mutex); 
++} ++ ++void dxgglobal_release_process_adapter_lock(void) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_unlock(&dxgglobal->process_adapter_mutex); ++} ++ + /* + * Creates a new dxgadapter object, which represents a virtual GPU, projected + * by the host. +@@ -147,6 +161,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, + kref_init(&adapter->adapter_kref); + init_rwsem(&adapter->core_lock); + ++ INIT_LIST_HEAD(&adapter->adapter_process_list_head); + adapter->pci_dev = dev; + guid_to_luid(guid, &adapter->luid); + +@@ -205,8 +220,87 @@ static void dxgglobal_stop_adapters(void) + dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); + } + ++/* ++ * Returns dxgprocess for the current executing process. ++ * Creates dxgprocess if it doesn't exist. ++ */ ++static struct dxgprocess *dxgglobal_get_current_process(void) ++{ ++ /* ++ * Find the DXG process for the current process. ++ * A new process is created if necessary. ++ */ ++ struct dxgprocess *process = NULL; ++ struct dxgprocess *entry = NULL; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_lock(&dxgglobal->plistmutex); ++ list_for_each_entry(entry, &dxgglobal->plisthead, plistentry) { ++ /* All threads of a process have the same thread group ID */ ++ if (entry->tgid == current->tgid) { ++ if (kref_get_unless_zero(&entry->process_kref)) { ++ process = entry; ++ DXG_TRACE("found dxgprocess"); ++ } else { ++ DXG_TRACE("process is destroyed"); ++ } ++ break; ++ } ++ } ++ mutex_unlock(&dxgglobal->plistmutex); ++ ++ if (process == NULL) ++ process = dxgprocess_create(); ++ ++ return process; ++} ++ ++/* ++ * File operations for the /dev/dxg device ++ */ ++ ++static int dxgk_open(struct inode *n, struct file *f) ++{ ++ int ret = 0; ++ struct dxgprocess *process; ++ ++ DXG_TRACE("%p %d %d", f, current->pid, current->tgid); ++ ++ /* Find/create a dxgprocess structure for this process */ ++ process = dxgglobal_get_current_process(); ++ ++ if (process) { ++ f->private_data = process; ++ } else { ++ DXG_TRACE("cannot create dxgprocess"); ++ ret = -EBADF; ++ } ++ ++ return ret; ++} ++ ++static int dxgk_release(struct inode *n, struct file *f) ++{ ++ struct dxgprocess *process; ++ ++ process = (struct dxgprocess *)f->private_data; ++ DXG_TRACE("%p, %p", f, process); ++ ++ if (process == NULL) ++ return -EINVAL; ++ ++ kref_put(&process->process_kref, dxgprocess_release); ++ ++ f->private_data = NULL; ++ return 0; ++} ++ + const struct file_operations dxgk_fops = { + .owner = THIS_MODULE, ++ .open = dxgk_open, ++ .release = dxgk_release, ++ .compat_ioctl = dxgk_compat_ioctl, ++ .unlocked_ioctl = dxgk_unlocked_ioctl, + }; + + /* +@@ -616,7 +710,10 @@ static struct dxgglobal *dxgglobal_create(void) + if (!dxgglobal) + return NULL; + ++ INIT_LIST_HEAD(&dxgglobal->plisthead); ++ mutex_init(&dxgglobal->plistmutex); + mutex_init(&dxgglobal->device_mutex); ++ mutex_init(&dxgglobal->process_adapter_mutex); + + INIT_LIST_HEAD(&dxgglobal->vgpu_ch_list_head); + INIT_LIST_HEAD(&dxgglobal->adapter_list_head); +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +new file mode 100644 +index 000000000000..ab9a01e3c8c8 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -0,0 +1,262 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. 
++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * DXGPROCESS implementation ++ * ++ */ ++ ++#include "dxgkrnl.h" ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt ++ ++/* ++ * Creates a new dxgprocess object ++ * Must be called when dxgglobal->plistmutex is held ++ */ ++struct dxgprocess *dxgprocess_create(void) ++{ ++ struct dxgprocess *process; ++ int ret; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ process = kzalloc(sizeof(struct dxgprocess), GFP_KERNEL); ++ if (process != NULL) { ++ DXG_TRACE("new dxgprocess created"); ++ process->pid = current->pid; ++ process->tgid = current->tgid; ++ ret = dxgvmb_send_create_process(process); ++ if (ret < 0) { ++ DXG_TRACE("send_create_process failed"); ++ kfree(process); ++ process = NULL; ++ } else { ++ INIT_LIST_HEAD(&process->plistentry); ++ kref_init(&process->process_kref); ++ ++ mutex_lock(&dxgglobal->plistmutex); ++ list_add_tail(&process->plistentry, ++ &dxgglobal->plisthead); ++ mutex_unlock(&dxgglobal->plistmutex); ++ ++ hmgrtable_init(&process->handle_table, process); ++ hmgrtable_init(&process->local_handle_table, process); ++ INIT_LIST_HEAD(&process->process_adapter_list_head); ++ } ++ } ++ return process; ++} ++ ++void dxgprocess_destroy(struct dxgprocess *process) ++{ ++ int i; ++ enum hmgrentry_type t; ++ struct d3dkmthandle h; ++ void *o; ++ struct dxgprocess_adapter *entry; ++ struct dxgprocess_adapter *tmp; ++ ++ /* Destroy all adapter state */ ++ dxgglobal_acquire_process_adapter_lock(); ++ list_for_each_entry_safe(entry, tmp, ++ &process->process_adapter_list_head, ++ process_adapter_list_entry) { ++ dxgprocess_adapter_destroy(entry); ++ } ++ dxgglobal_release_process_adapter_lock(); ++ ++ i = 0; ++ while (hmgrtable_next_entry(&process->local_handle_table, ++ &i, &t, &h, &o)) { ++ switch (t) { ++ case HMGRENTRY_TYPE_DXGADAPTER: ++ dxgprocess_close_adapter(process, h); ++ break; ++ default: ++ DXG_ERR("invalid entry in handle table %d", t); ++ break; ++ } ++ } ++ ++ hmgrtable_destroy(&process->handle_table); ++ hmgrtable_destroy(&process->local_handle_table); ++} ++ ++void dxgprocess_release(struct kref *refcount) ++{ ++ struct dxgprocess *process; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ process = container_of(refcount, struct dxgprocess, process_kref); ++ ++ mutex_lock(&dxgglobal->plistmutex); ++ list_del(&process->plistentry); ++ mutex_unlock(&dxgglobal->plistmutex); ++ ++ dxgprocess_destroy(process); ++ ++ if (process->host_handle.v) ++ dxgvmb_send_destroy_process(process->host_handle); ++ kfree(process); ++} ++ ++struct dxgprocess_adapter *dxgprocess_get_adapter_info(struct dxgprocess ++ *process, ++ struct dxgadapter ++ *adapter) ++{ ++ struct dxgprocess_adapter *entry; ++ ++ list_for_each_entry(entry, &process->process_adapter_list_head, ++ process_adapter_list_entry) { ++ if (adapter == entry->adapter) { ++ DXG_TRACE("Found process info %p", entry); ++ return entry; ++ } ++ } ++ return NULL; ++} ++ ++/* ++ * Dxgprocess takes references on dxgadapter and dxgprocess_adapter. ++ * ++ * The process_adapter lock is held. 
++ * ++ */ ++int dxgprocess_open_adapter(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle *h) ++{ ++ int ret = 0; ++ struct dxgprocess_adapter *adapter_info; ++ struct d3dkmthandle handle; ++ ++ h->v = 0; ++ adapter_info = dxgprocess_get_adapter_info(process, adapter); ++ if (adapter_info == NULL) { ++ DXG_TRACE("creating new process adapter info"); ++ adapter_info = dxgprocess_adapter_create(process, adapter); ++ if (adapter_info == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ } else { ++ adapter_info->refcount++; ++ } ++ ++ handle = hmgrtable_alloc_handle_safe(&process->local_handle_table, ++ adapter, HMGRENTRY_TYPE_DXGADAPTER, ++ true); ++ if (handle.v) { ++ *h = handle; ++ } else { ++ DXG_ERR("failed to create adapter handle"); ++ ret = -ENOMEM; ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (adapter_info) ++ dxgprocess_adapter_release(adapter_info); ++ } ++ ++ return ret; ++} ++ ++int dxgprocess_close_adapter(struct dxgprocess *process, ++ struct d3dkmthandle handle) ++{ ++ struct dxgadapter *adapter; ++ struct dxgprocess_adapter *adapter_info; ++ int ret = 0; ++ ++ if (handle.v == 0) ++ return 0; ++ ++ hmgrtable_lock(&process->local_handle_table, DXGLOCK_EXCL); ++ adapter = dxgprocess_get_adapter(process, handle); ++ if (adapter) ++ hmgrtable_free_handle(&process->local_handle_table, ++ HMGRENTRY_TYPE_DXGADAPTER, handle); ++ hmgrtable_unlock(&process->local_handle_table, DXGLOCK_EXCL); ++ ++ if (adapter) { ++ adapter_info = dxgprocess_get_adapter_info(process, adapter); ++ if (adapter_info) { ++ dxgglobal_acquire_process_adapter_lock(); ++ dxgprocess_adapter_release(adapter_info); ++ dxgglobal_release_process_adapter_lock(); ++ } else { ++ ret = -EINVAL; ++ } ++ } else { ++ DXG_ERR("Adapter not found %x", handle.v); ++ ret = -EINVAL; ++ } ++ ++ return ret; ++} ++ ++struct dxgadapter *dxgprocess_get_adapter(struct dxgprocess *process, ++ struct d3dkmthandle handle) ++{ ++ struct dxgadapter *adapter; ++ ++ adapter = hmgrtable_get_object_by_type(&process->local_handle_table, ++ HMGRENTRY_TYPE_DXGADAPTER, ++ handle); ++ if (adapter == NULL) ++ DXG_ERR("Adapter not found %x", handle.v); ++ return adapter; ++} ++ ++/* ++ * Gets the adapter object from the process handle table. ++ * The adapter object is referenced. ++ * The function acquired the handle table lock shared. 
++ */ ++struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process, ++ struct d3dkmthandle handle) ++{ ++ struct dxgadapter *adapter; ++ ++ hmgrtable_lock(&process->local_handle_table, DXGLOCK_SHARED); ++ adapter = hmgrtable_get_object_by_type(&process->local_handle_table, ++ HMGRENTRY_TYPE_DXGADAPTER, ++ handle); ++ if (adapter == NULL) ++ DXG_ERR("adapter_by_handle failed %x", handle.v); ++ else if (kref_get_unless_zero(&adapter->adapter_kref) == 0) { ++ DXG_ERR("failed to acquire adapter reference"); ++ adapter = NULL; ++ } ++ hmgrtable_unlock(&process->local_handle_table, DXGLOCK_SHARED); ++ return adapter; ++} ++ ++void dxgprocess_ht_lock_shared_down(struct dxgprocess *process) ++{ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); ++} ++ ++void dxgprocess_ht_lock_shared_up(struct dxgprocess *process) ++{ ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED); ++} ++ ++void dxgprocess_ht_lock_exclusive_down(struct dxgprocess *process) ++{ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++} ++ ++void dxgprocess_ht_lock_exclusive_up(struct dxgprocess *process) ++{ ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++} +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 6d4b8d9d8d07..0abf45d0d3f7 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -497,6 +497,87 @@ int dxgvmb_send_set_iospace_region(u64 start, u64 len) + return ret; + } + ++int dxgvmb_send_create_process(struct dxgprocess *process) ++{ ++ int ret; ++ struct dxgkvmb_command_createprocess *command; ++ struct dxgkvmb_command_createprocess_return result = { 0 }; ++ struct dxgvmbusmsg msg; ++ char s[WIN_MAX_PATH]; ++ int i; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, NULL, process, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ command_vm_to_host_init1(&command->hdr, DXGK_VMBCOMMAND_CREATEPROCESS); ++ command->process = process; ++ command->process_id = process->pid; ++ command->linux_process = 1; ++ s[0] = 0; ++ __get_task_comm(s, WIN_MAX_PATH, current); ++ for (i = 0; i < WIN_MAX_PATH; i++) { ++ command->process_name[i] = s[i]; ++ if (s[i] == 0) ++ break; ++ } ++ ++ ret = dxgvmb_send_sync_msg(&dxgglobal->channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) { ++ DXG_ERR("create_process failed %d", ret); ++ } else if (result.hprocess.v == 0) { ++ DXG_ERR("create_process returned 0 handle"); ++ ret = -ENOTRECOVERABLE; ++ } else { ++ process->host_handle = result.hprocess; ++ DXG_TRACE("create_process returned %x", ++ process->host_handle.v); ++ } ++ ++ dxgglobal_release_channel_lock(); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_destroy_process(struct d3dkmthandle process) ++{ ++ int ret; ++ struct dxgkvmb_command_destroyprocess *command; ++ struct dxgvmbusmsg msg; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, NULL, NULL, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ command_vm_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_DESTROYPROCESS, ++ process); ++ ret = dxgvmb_send_sync_msg_ntstatus(&dxgglobal->channel, ++ msg.hdr, msg.size); ++ dxgglobal_release_channel_lock(); ++ ++cleanup: ++ free_message(&msg, NULL); ++ if (ret) ++ 
DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + /* + * Virtual GPU messages to the host + */ +@@ -591,3 +672,86 @@ int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter) + DXG_ERR("Failed to get adapter info: %d", ret); + return ret; + } ++ ++int dxgvmb_send_query_adapter_info(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryadapterinfo *args) ++{ ++ struct dxgkvmb_command_queryadapterinfo *command; ++ u32 cmd_size = sizeof(*command) + args->private_data_size - 1; ++ int ret; ++ u32 private_data_size; ++ void *private_data; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ ret = copy_from_user(command->private_data, ++ args->private_data, args->private_data_size); ++ if (ret) { ++ DXG_ERR("Faled to copy private data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_QUERYADAPTERINFO, ++ process->host_handle); ++ command->private_data_size = args->private_data_size; ++ command->query_type = args->type; ++ ++ if (dxgglobal->vmbus_ver >= DXGK_VMBUS_INTERFACE_VERSION) { ++ private_data = msg.msg; ++ private_data_size = command->private_data_size + ++ sizeof(struct ntstatus); ++ } else { ++ private_data = command->private_data; ++ private_data_size = command->private_data_size; ++ } ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ private_data, private_data_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ if (dxgglobal->vmbus_ver >= DXGK_VMBUS_INTERFACE_VERSION) { ++ ret = ntstatus2int(*(struct ntstatus *)private_data); ++ if (ret < 0) ++ goto cleanup; ++ private_data = (char *)private_data + sizeof(struct ntstatus); ++ } ++ ++ switch (args->type) { ++ case _KMTQAITYPE_ADAPTERTYPE: ++ case _KMTQAITYPE_ADAPTERTYPE_RENDER: ++ { ++ struct d3dkmt_adaptertype *adapter_type = ++ (void *)private_data; ++ adapter_type->paravirtualized = 1; ++ adapter_type->display_supported = 0; ++ adapter_type->post_device = 0; ++ adapter_type->indirect_display_device = 0; ++ adapter_type->acg_supported = 0; ++ adapter_type->support_set_timings_from_vidpn = 0; ++ break; ++ } ++ default: ++ break; ++ } ++ ret = copy_to_user(args->private_data, private_data, ++ args->private_data_size); ++ if (ret) { ++ DXG_ERR("Faled to copy private data to user"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 584cdd3db6c0..a805a396e083 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -14,7 +14,11 @@ + #ifndef _DXGVMBUS_H + #define _DXGVMBUS_H + ++struct dxgprocess; ++struct dxgadapter; ++ + #define DXG_MAX_VM_BUS_PACKET_SIZE (1024 * 128) ++#define DXG_VM_PROCESS_NAME_LENGTH 260 + + enum dxgkvmb_commandchanneltype { + DXGKVMB_VGPU_TO_HOST, +@@ -169,6 +173,26 @@ struct dxgkvmb_command_setiospaceregion { + u32 shared_page_gpadl; + }; + ++struct dxgkvmb_command_createprocess { ++ struct dxgkvmb_command_vm_to_host hdr; ++ void *process; ++ u64 process_id; ++ u16 process_name[DXG_VM_PROCESS_NAME_LENGTH + 1]; ++ u8 csrss_process:1; ++ u8 dwm_process:1; ++ u8 wow64_process:1; ++ u8 linux_process:1; ++}; ++ ++struct dxgkvmb_command_createprocess_return { ++ struct d3dkmthandle hprocess; ++}; ++ ++// The command returns ntstatus ++struct 
dxgkvmb_command_destroyprocess { ++ struct dxgkvmb_command_vm_to_host hdr; ++}; ++ + struct dxgkvmb_command_openadapter { + struct dxgkvmb_command_vgpu_to_host hdr; + u32 vmbus_interface_version; +@@ -211,4 +235,16 @@ struct dxgkvmb_command_getinternaladapterinfo_return { + struct winluid host_vgpu_luid; + }; + ++struct dxgkvmb_command_queryadapterinfo { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ enum kmtqueryadapterinfotype query_type; ++ u32 private_data_size; ++ u8 private_data[1]; ++}; ++ ++struct dxgkvmb_command_queryadapterinfo_return { ++ struct ntstatus status; ++ u8 private_data[1]; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/hmgr.c b/drivers/hv/dxgkrnl/hmgr.c +new file mode 100644 +index 000000000000..526b50f46d96 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/hmgr.c +@@ -0,0 +1,563 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Handle manager implementation ++ * ++ */ ++ ++#include ++#include ++#include ++ ++#include "misc.h" ++#include "dxgkrnl.h" ++#include "hmgr.h" ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt ++ ++const struct d3dkmthandle zerohandle; ++ ++/* ++ * Handle parameters ++ */ ++#define HMGRHANDLE_INSTANCE_BITS 6 ++#define HMGRHANDLE_INDEX_BITS 24 ++#define HMGRHANDLE_UNIQUE_BITS 2 ++ ++#define HMGRHANDLE_INSTANCE_SHIFT 0 ++#define HMGRHANDLE_INDEX_SHIFT \ ++ (HMGRHANDLE_INSTANCE_BITS + HMGRHANDLE_INSTANCE_SHIFT) ++#define HMGRHANDLE_UNIQUE_SHIFT \ ++ (HMGRHANDLE_INDEX_BITS + HMGRHANDLE_INDEX_SHIFT) ++ ++#define HMGRHANDLE_INSTANCE_MASK \ ++ (((1 << HMGRHANDLE_INSTANCE_BITS) - 1) << HMGRHANDLE_INSTANCE_SHIFT) ++#define HMGRHANDLE_INDEX_MASK \ ++ (((1 << HMGRHANDLE_INDEX_BITS) - 1) << HMGRHANDLE_INDEX_SHIFT) ++#define HMGRHANDLE_UNIQUE_MASK \ ++ (((1 << HMGRHANDLE_UNIQUE_BITS) - 1) << HMGRHANDLE_UNIQUE_SHIFT) ++ ++#define HMGRHANDLE_INSTANCE_MAX ((1 << HMGRHANDLE_INSTANCE_BITS) - 1) ++#define HMGRHANDLE_INDEX_MAX ((1 << HMGRHANDLE_INDEX_BITS) - 1) ++#define HMGRHANDLE_UNIQUE_MAX ((1 << HMGRHANDLE_UNIQUE_BITS) - 1) ++ ++/* ++ * Handle entry ++ */ ++struct hmgrentry { ++ union { ++ void *object; ++ struct { ++ u32 prev_free_index; ++ u32 next_free_index; ++ }; ++ }; ++ u32 type:HMGRENTRY_TYPE_BITS + 1; ++ u32 unique:HMGRHANDLE_UNIQUE_BITS; ++ u32 instance:HMGRHANDLE_INSTANCE_BITS; ++ u32 destroyed:1; ++}; ++ ++#define HMGRTABLE_SIZE_INCREMENT 1024 ++#define HMGRTABLE_MIN_FREE_ENTRIES 128 ++#define HMGRTABLE_INVALID_INDEX (~((1 << HMGRHANDLE_INDEX_BITS) - 1)) ++#define HMGRTABLE_SIZE_MAX 0xFFFFFFF ++ ++static u32 table_size_increment = HMGRTABLE_SIZE_INCREMENT; ++ ++static u32 get_unique(struct d3dkmthandle h) ++{ ++ return (h.v & HMGRHANDLE_UNIQUE_MASK) >> HMGRHANDLE_UNIQUE_SHIFT; ++} ++ ++static u32 get_index(struct d3dkmthandle h) ++{ ++ return (h.v & HMGRHANDLE_INDEX_MASK) >> HMGRHANDLE_INDEX_SHIFT; ++} ++ ++static bool is_handle_valid(struct hmgrtable *table, struct d3dkmthandle h, ++ bool ignore_destroyed, enum hmgrentry_type t) ++{ ++ u32 index = get_index(h); ++ u32 unique = get_unique(h); ++ struct hmgrentry *entry; ++ ++ if (index >= table->table_size) { ++ DXG_ERR("Invalid index %x %d", h.v, index); ++ return false; ++ } ++ ++ entry = &table->entry_table[index]; ++ if (unique != entry->unique) { ++ DXG_ERR("Invalid unique %x %d %d %d %p", ++ h.v, unique, entry->unique, index, entry->object); ++ return false; ++ } ++ ++ if (entry->destroyed && !ignore_destroyed) { ++ DXG_ERR("Invalid destroyed value"); 
++ return false; ++ } ++ ++ if (entry->type == HMGRENTRY_TYPE_FREE) { ++ DXG_ERR("Entry is freed %x %d", h.v, index); ++ return false; ++ } ++ ++ if (t != HMGRENTRY_TYPE_FREE && t != entry->type) { ++ DXG_ERR("type mismatch %x %d %d", h.v, t, entry->type); ++ return false; ++ } ++ ++ return true; ++} ++ ++static struct d3dkmthandle build_handle(u32 index, u32 unique, u32 instance) ++{ ++ struct d3dkmthandle handle; ++ ++ handle.v = (index << HMGRHANDLE_INDEX_SHIFT) & HMGRHANDLE_INDEX_MASK; ++ handle.v |= (unique << HMGRHANDLE_UNIQUE_SHIFT) & ++ HMGRHANDLE_UNIQUE_MASK; ++ handle.v |= (instance << HMGRHANDLE_INSTANCE_SHIFT) & ++ HMGRHANDLE_INSTANCE_MASK; ++ ++ return handle; ++} ++ ++inline u32 hmgrtable_get_used_entry_count(struct hmgrtable *table) ++{ ++ DXGKRNL_ASSERT(table->table_size >= table->free_count); ++ return (table->table_size - table->free_count); ++} ++ ++bool hmgrtable_mark_destroyed(struct hmgrtable *table, struct d3dkmthandle h) ++{ ++ if (!is_handle_valid(table, h, false, HMGRENTRY_TYPE_FREE)) ++ return false; ++ ++ table->entry_table[get_index(h)].destroyed = true; ++ return true; ++} ++ ++bool hmgrtable_unmark_destroyed(struct hmgrtable *table, struct d3dkmthandle h) ++{ ++ if (!is_handle_valid(table, h, true, HMGRENTRY_TYPE_FREE)) ++ return true; ++ ++ DXGKRNL_ASSERT(table->entry_table[get_index(h)].destroyed); ++ table->entry_table[get_index(h)].destroyed = 0; ++ return true; ++} ++ ++static bool expand_table(struct hmgrtable *table, u32 NumEntries) ++{ ++ u32 new_table_size; ++ struct hmgrentry *new_entry; ++ u32 table_index; ++ u32 new_free_count; ++ u32 prev_free_index; ++ u32 tail_index = table->free_handle_list_tail; ++ ++ /* The tail should point to the last free element in the list */ ++ if (table->free_count != 0) { ++ if (tail_index >= table->table_size || ++ table->entry_table[tail_index].next_free_index != ++ HMGRTABLE_INVALID_INDEX) { ++ DXG_ERR("corruption"); ++ DXG_ERR("tail_index: %x", tail_index); ++ DXG_ERR("table size: %x", table->table_size); ++ DXG_ERR("free_count: %d", table->free_count); ++ DXG_ERR("NumEntries: %x", NumEntries); ++ return false; ++ } ++ } ++ ++ new_free_count = table_size_increment + table->free_count; ++ new_table_size = table->table_size + table_size_increment; ++ if (new_table_size < NumEntries) { ++ new_free_count += NumEntries - new_table_size; ++ new_table_size = NumEntries; ++ } ++ ++ if (new_table_size > HMGRHANDLE_INDEX_MAX) { ++ DXG_ERR("Invalid new table size"); ++ return false; ++ } ++ ++ new_entry = (struct hmgrentry *) ++ vzalloc(new_table_size * sizeof(struct hmgrentry)); ++ if (new_entry == NULL) { ++ DXG_ERR("allocation failed"); ++ return false; ++ } ++ ++ if (table->entry_table) { ++ memcpy(new_entry, table->entry_table, ++ table->table_size * sizeof(struct hmgrentry)); ++ vfree(table->entry_table); ++ } else { ++ table->free_handle_list_head = 0; ++ } ++ ++ table->entry_table = new_entry; ++ ++ /* Initialize new table entries and add to the free list */ ++ table_index = table->table_size; ++ ++ prev_free_index = table->free_handle_list_tail; ++ ++ while (table_index < new_table_size) { ++ struct hmgrentry *entry = &table->entry_table[table_index]; ++ ++ entry->prev_free_index = prev_free_index; ++ entry->next_free_index = table_index + 1; ++ entry->type = HMGRENTRY_TYPE_FREE; ++ entry->unique = 1; ++ entry->instance = 0; ++ prev_free_index = table_index; ++ ++ table_index++; ++ } ++ ++ table->entry_table[table_index - 1].next_free_index = ++ (u32) HMGRTABLE_INVALID_INDEX; ++ ++ if (table->free_count != 0) { ++ 
/* Link the current free list with the new entries */ ++ struct hmgrentry *entry; ++ ++ entry = &table->entry_table[table->free_handle_list_tail]; ++ entry->next_free_index = table->table_size; ++ } ++ table->free_handle_list_tail = new_table_size - 1; ++ if (table->free_handle_list_head == HMGRTABLE_INVALID_INDEX) ++ table->free_handle_list_head = table->table_size; ++ ++ table->table_size = new_table_size; ++ table->free_count = new_free_count; ++ ++ return true; ++} ++ ++void hmgrtable_init(struct hmgrtable *table, struct dxgprocess *process) ++{ ++ table->process = process; ++ table->entry_table = NULL; ++ table->table_size = 0; ++ table->free_handle_list_head = HMGRTABLE_INVALID_INDEX; ++ table->free_handle_list_tail = HMGRTABLE_INVALID_INDEX; ++ table->free_count = 0; ++ init_rwsem(&table->table_lock); ++} ++ ++void hmgrtable_destroy(struct hmgrtable *table) ++{ ++ if (table->entry_table) { ++ vfree(table->entry_table); ++ table->entry_table = NULL; ++ } ++} ++ ++void hmgrtable_lock(struct hmgrtable *table, enum dxglockstate state) ++{ ++ if (state == DXGLOCK_EXCL) ++ down_write(&table->table_lock); ++ else ++ down_read(&table->table_lock); ++} ++ ++void hmgrtable_unlock(struct hmgrtable *table, enum dxglockstate state) ++{ ++ if (state == DXGLOCK_EXCL) ++ up_write(&table->table_lock); ++ else ++ up_read(&table->table_lock); ++} ++ ++struct d3dkmthandle hmgrtable_alloc_handle(struct hmgrtable *table, ++ void *object, ++ enum hmgrentry_type type, ++ bool make_valid) ++{ ++ u32 index; ++ struct hmgrentry *entry; ++ u32 unique; ++ ++ DXGKRNL_ASSERT(type <= HMGRENTRY_TYPE_LIMIT); ++ DXGKRNL_ASSERT(type > HMGRENTRY_TYPE_FREE); ++ ++ if (table->free_count <= HMGRTABLE_MIN_FREE_ENTRIES) { ++ if (!expand_table(table, 0)) { ++ DXG_ERR("hmgrtable expand_table failed"); ++ return zerohandle; ++ } ++ } ++ ++ if (table->free_handle_list_head >= table->table_size) { ++ DXG_ERR("hmgrtable corrupted handle table head"); ++ return zerohandle; ++ } ++ ++ index = table->free_handle_list_head; ++ entry = &table->entry_table[index]; ++ ++ if (entry->type != HMGRENTRY_TYPE_FREE) { ++ DXG_ERR("hmgrtable expected free handle"); ++ return zerohandle; ++ } ++ ++ table->free_handle_list_head = entry->next_free_index; ++ ++ if (entry->next_free_index != table->free_handle_list_tail) { ++ if (entry->next_free_index >= table->table_size) { ++ DXG_ERR("hmgrtable invalid next free index"); ++ return zerohandle; ++ } ++ table->entry_table[entry->next_free_index].prev_free_index = ++ HMGRTABLE_INVALID_INDEX; ++ } ++ ++ unique = table->entry_table[index].unique; ++ ++ table->entry_table[index].object = object; ++ table->entry_table[index].type = type; ++ table->entry_table[index].instance = 0; ++ table->entry_table[index].destroyed = !make_valid; ++ table->free_count--; ++ DXGKRNL_ASSERT(table->free_count <= table->table_size); ++ ++ return build_handle(index, unique, table->entry_table[index].instance); ++} ++ ++int hmgrtable_assign_handle_safe(struct hmgrtable *table, ++ void *object, ++ enum hmgrentry_type type, ++ struct d3dkmthandle h) ++{ ++ int ret; ++ ++ hmgrtable_lock(table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(table, object, type, h); ++ hmgrtable_unlock(table, DXGLOCK_EXCL); ++ return ret; ++} ++ ++int hmgrtable_assign_handle(struct hmgrtable *table, void *object, ++ enum hmgrentry_type type, struct d3dkmthandle h) ++{ ++ u32 index = get_index(h); ++ u32 unique = get_unique(h); ++ struct hmgrentry *entry = NULL; ++ ++ DXG_TRACE("%x, %d %p, %p", h.v, index, object, table); ++ ++ if (index >= 
HMGRHANDLE_INDEX_MAX) { ++ DXG_ERR("handle index is too big: %x %d", h.v, index); ++ return -EINVAL; ++ } ++ ++ if (index >= table->table_size) { ++ u32 new_size = index + table_size_increment; ++ ++ if (new_size > HMGRHANDLE_INDEX_MAX) ++ new_size = HMGRHANDLE_INDEX_MAX; ++ if (!expand_table(table, new_size)) { ++ DXG_ERR("failed to expand handle table %d", ++ new_size); ++ return -ENOMEM; ++ } ++ } ++ ++ entry = &table->entry_table[index]; ++ ++ if (entry->type != HMGRENTRY_TYPE_FREE) { ++ DXG_ERR("the entry is not free: %d %x", entry->type, ++ hmgrtable_build_entry_handle(table, index).v); ++ return -EINVAL; ++ } ++ ++ if (index != table->free_handle_list_tail) { ++ if (entry->next_free_index >= table->table_size) { ++ DXG_ERR("hmgr: invalid next free index %d", ++ entry->next_free_index); ++ return -EINVAL; ++ } ++ table->entry_table[entry->next_free_index].prev_free_index = ++ entry->prev_free_index; ++ } else { ++ table->free_handle_list_tail = entry->prev_free_index; ++ } ++ ++ if (index != table->free_handle_list_head) { ++ if (entry->prev_free_index >= table->table_size) { ++ DXG_ERR("hmgr: invalid next prev index %d", ++ entry->prev_free_index); ++ return -EINVAL; ++ } ++ table->entry_table[entry->prev_free_index].next_free_index = ++ entry->next_free_index; ++ } else { ++ table->free_handle_list_head = entry->next_free_index; ++ } ++ ++ entry->prev_free_index = HMGRTABLE_INVALID_INDEX; ++ entry->next_free_index = HMGRTABLE_INVALID_INDEX; ++ entry->object = object; ++ entry->type = type; ++ entry->instance = 0; ++ entry->unique = unique; ++ entry->destroyed = false; ++ ++ table->free_count--; ++ DXGKRNL_ASSERT(table->free_count <= table->table_size); ++ return 0; ++} ++ ++struct d3dkmthandle hmgrtable_alloc_handle_safe(struct hmgrtable *table, ++ void *obj, ++ enum hmgrentry_type type, ++ bool make_valid) ++{ ++ struct d3dkmthandle h; ++ ++ hmgrtable_lock(table, DXGLOCK_EXCL); ++ h = hmgrtable_alloc_handle(table, obj, type, make_valid); ++ hmgrtable_unlock(table, DXGLOCK_EXCL); ++ return h; ++} ++ ++void hmgrtable_free_handle(struct hmgrtable *table, enum hmgrentry_type t, ++ struct d3dkmthandle h) ++{ ++ struct hmgrentry *entry; ++ u32 i = get_index(h); ++ ++ DXG_TRACE("%p %x", table, h.v); ++ ++ /* Ignore the destroyed flag when checking the handle */ ++ if (is_handle_valid(table, h, true, t)) { ++ DXGKRNL_ASSERT(table->free_count < table->table_size); ++ entry = &table->entry_table[i]; ++ entry->unique = 1; ++ entry->type = HMGRENTRY_TYPE_FREE; ++ entry->destroyed = 0; ++ if (entry->unique != HMGRHANDLE_UNIQUE_MAX) ++ entry->unique += 1; ++ else ++ entry->unique = 1; ++ ++ table->free_count++; ++ DXGKRNL_ASSERT(table->free_count <= table->table_size); ++ ++ /* ++ * Insert the index to the free list at the tail. 
++ */ ++ entry->next_free_index = HMGRTABLE_INVALID_INDEX; ++ entry->prev_free_index = table->free_handle_list_tail; ++ entry = &table->entry_table[table->free_handle_list_tail]; ++ entry->next_free_index = i; ++ table->free_handle_list_tail = i; ++ } else { ++ DXG_ERR("Invalid handle to free: %d %x", i, h.v); ++ } ++} ++ ++void hmgrtable_free_handle_safe(struct hmgrtable *table, enum hmgrentry_type t, ++ struct d3dkmthandle h) ++{ ++ hmgrtable_lock(table, DXGLOCK_EXCL); ++ hmgrtable_free_handle(table, t, h); ++ hmgrtable_unlock(table, DXGLOCK_EXCL); ++} ++ ++struct d3dkmthandle hmgrtable_build_entry_handle(struct hmgrtable *table, ++ u32 index) ++{ ++ DXGKRNL_ASSERT(index < table->table_size); ++ ++ return build_handle(index, table->entry_table[index].unique, ++ table->entry_table[index].instance); ++} ++ ++void *hmgrtable_get_object(struct hmgrtable *table, struct d3dkmthandle h) ++{ ++ if (!is_handle_valid(table, h, false, HMGRENTRY_TYPE_FREE)) ++ return NULL; ++ ++ return table->entry_table[get_index(h)].object; ++} ++ ++void *hmgrtable_get_object_by_type(struct hmgrtable *table, ++ enum hmgrentry_type type, ++ struct d3dkmthandle h) ++{ ++ if (!is_handle_valid(table, h, false, type)) { ++ DXG_ERR("Invalid handle %x", h.v); ++ return NULL; ++ } ++ return table->entry_table[get_index(h)].object; ++} ++ ++void *hmgrtable_get_entry_object(struct hmgrtable *table, u32 index) ++{ ++ DXGKRNL_ASSERT(index < table->table_size); ++ DXGKRNL_ASSERT(table->entry_table[index].type != HMGRENTRY_TYPE_FREE); ++ ++ return table->entry_table[index].object; ++} ++ ++static enum hmgrentry_type hmgrtable_get_entry_type(struct hmgrtable *table, ++ u32 index) ++{ ++ DXGKRNL_ASSERT(index < table->table_size); ++ return (enum hmgrentry_type)table->entry_table[index].type; ++} ++ ++enum hmgrentry_type hmgrtable_get_object_type(struct hmgrtable *table, ++ struct d3dkmthandle h) ++{ ++ if (!is_handle_valid(table, h, false, HMGRENTRY_TYPE_FREE)) ++ return HMGRENTRY_TYPE_FREE; ++ ++ return hmgrtable_get_entry_type(table, get_index(h)); ++} ++ ++void *hmgrtable_get_object_ignore_destroyed(struct hmgrtable *table, ++ struct d3dkmthandle h, ++ enum hmgrentry_type type) ++{ ++ if (!is_handle_valid(table, h, true, type)) ++ return NULL; ++ return table->entry_table[get_index(h)].object; ++} ++ ++bool hmgrtable_next_entry(struct hmgrtable *tbl, ++ u32 *index, ++ enum hmgrentry_type *type, ++ struct d3dkmthandle *handle, ++ void **object) ++{ ++ u32 i; ++ struct hmgrentry *entry; ++ ++ for (i = *index; i < tbl->table_size; i++) { ++ entry = &tbl->entry_table[i]; ++ if (entry->type != HMGRENTRY_TYPE_FREE) { ++ *index = i + 1; ++ *object = entry->object; ++ *handle = build_handle(i, entry->unique, ++ entry->instance); ++ *type = entry->type; ++ return true; ++ } ++ } ++ return false; ++} +diff --git a/drivers/hv/dxgkrnl/hmgr.h b/drivers/hv/dxgkrnl/hmgr.h +new file mode 100644 +index 000000000000..23eec301137f +--- /dev/null ++++ b/drivers/hv/dxgkrnl/hmgr.h +@@ -0,0 +1,112 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Handle manager definitions ++ * ++ */ ++ ++#ifndef _HMGR_H_ ++#define _HMGR_H_ ++ ++#include "misc.h" ++ ++struct hmgrentry; ++ ++/* ++ * Handle manager table. ++ * ++ * Implementation notes: ++ * A list of free handles is built on top of the array of table entries. ++ * free_handle_list_head is the index of the first entry in the list. 
++ * m_FreeHandleListTail is the index of an entry in the list, which is ++ * HMGRTABLE_MIN_FREE_ENTRIES from the head. It means that when a handle is ++ * freed, the next time the handle can be re-used is after allocating ++ * HMGRTABLE_MIN_FREE_ENTRIES number of handles. ++ * Handles are allocated from the start of the list and free handles are ++ * inserted after the tail of the list. ++ * ++ */ ++struct hmgrtable { ++ struct dxgprocess *process; ++ struct hmgrentry *entry_table; ++ u32 free_handle_list_head; ++ u32 free_handle_list_tail; ++ u32 table_size; ++ u32 free_count; ++ struct rw_semaphore table_lock; ++}; ++ ++/* ++ * Handle entry data types. ++ */ ++#define HMGRENTRY_TYPE_BITS 5 ++ ++enum hmgrentry_type { ++ HMGRENTRY_TYPE_FREE = 0, ++ HMGRENTRY_TYPE_DXGADAPTER = 1, ++ HMGRENTRY_TYPE_DXGSHAREDRESOURCE = 2, ++ HMGRENTRY_TYPE_DXGDEVICE = 3, ++ HMGRENTRY_TYPE_DXGRESOURCE = 4, ++ HMGRENTRY_TYPE_DXGALLOCATION = 5, ++ HMGRENTRY_TYPE_DXGOVERLAY = 6, ++ HMGRENTRY_TYPE_DXGCONTEXT = 7, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT = 8, ++ HMGRENTRY_TYPE_DXGKEYEDMUTEX = 9, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE = 10, ++ HMGRENTRY_TYPE_DXGDEVICESYNCOBJECT = 11, ++ HMGRENTRY_TYPE_DXGPROCESS = 12, ++ HMGRENTRY_TYPE_DXGSHAREDVMOBJECT = 13, ++ HMGRENTRY_TYPE_DXGPROTECTEDSESSION = 14, ++ HMGRENTRY_TYPE_DXGHWQUEUE = 15, ++ HMGRENTRY_TYPE_DXGREMOTEBUNDLEOBJECT = 16, ++ HMGRENTRY_TYPE_DXGCOMPOSITIONSURFACEOBJECT = 17, ++ HMGRENTRY_TYPE_DXGCOMPOSITIONSURFACEPROXY = 18, ++ HMGRENTRY_TYPE_DXGTRACKEDWORKLOAD = 19, ++ HMGRENTRY_TYPE_LIMIT = ((1 << HMGRENTRY_TYPE_BITS) - 1), ++ HMGRENTRY_TYPE_MONITOREDFENCE = HMGRENTRY_TYPE_LIMIT + 1, ++}; ++ ++void hmgrtable_init(struct hmgrtable *tbl, struct dxgprocess *process); ++void hmgrtable_destroy(struct hmgrtable *tbl); ++void hmgrtable_lock(struct hmgrtable *tbl, enum dxglockstate state); ++void hmgrtable_unlock(struct hmgrtable *tbl, enum dxglockstate state); ++struct d3dkmthandle hmgrtable_alloc_handle(struct hmgrtable *tbl, void *object, ++ enum hmgrentry_type t, bool make_valid); ++struct d3dkmthandle hmgrtable_alloc_handle_safe(struct hmgrtable *tbl, ++ void *obj, ++ enum hmgrentry_type t, ++ bool reserve); ++int hmgrtable_assign_handle(struct hmgrtable *tbl, void *obj, ++ enum hmgrentry_type, struct d3dkmthandle h); ++int hmgrtable_assign_handle_safe(struct hmgrtable *tbl, void *obj, ++ enum hmgrentry_type t, struct d3dkmthandle h); ++void hmgrtable_free_handle(struct hmgrtable *tbl, enum hmgrentry_type t, ++ struct d3dkmthandle h); ++void hmgrtable_free_handle_safe(struct hmgrtable *tbl, enum hmgrentry_type t, ++ struct d3dkmthandle h); ++struct d3dkmthandle hmgrtable_build_entry_handle(struct hmgrtable *tbl, ++ u32 index); ++enum hmgrentry_type hmgrtable_get_object_type(struct hmgrtable *tbl, ++ struct d3dkmthandle h); ++void *hmgrtable_get_object(struct hmgrtable *tbl, struct d3dkmthandle h); ++void *hmgrtable_get_object_by_type(struct hmgrtable *tbl, enum hmgrentry_type t, ++ struct d3dkmthandle h); ++void *hmgrtable_get_object_ignore_destroyed(struct hmgrtable *tbl, ++ struct d3dkmthandle h, ++ enum hmgrentry_type t); ++bool hmgrtable_mark_destroyed(struct hmgrtable *tbl, struct d3dkmthandle h); ++bool hmgrtable_unmark_destroyed(struct hmgrtable *tbl, struct d3dkmthandle h); ++void *hmgrtable_get_entry_object(struct hmgrtable *tbl, u32 index); ++bool hmgrtable_next_entry(struct hmgrtable *tbl, ++ u32 *start_index, ++ enum hmgrentry_type *type, ++ struct d3dkmthandle *handle, ++ void **object); ++ ++#endif +diff --git a/drivers/hv/dxgkrnl/ioctl.c 
b/drivers/hv/dxgkrnl/ioctl.c +index 23ecd15b0cd7..60e38d104517 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -22,3 +22,63 @@ + + #undef pr_fmt + #define pr_fmt(fmt) "dxgk: " fmt ++ ++struct ioctl_desc { ++ int (*ioctl_callback)(struct dxgprocess *p, void __user *arg); ++ u32 ioctl; ++ u32 arg_size; ++}; ++ ++static struct ioctl_desc ioctls[] = { ++ ++}; ++ ++/* ++ * IOCTL processing ++ * The driver IOCTLs return ++ * - 0 in case of success ++ * - positive values, which are Windows NTSTATUS (for example, STATUS_PENDING). ++ * Positive values are success codes. ++ * - Linux negative error codes ++ */ ++static int dxgk_ioctl(struct file *f, unsigned int p1, unsigned long p2) ++{ ++ int code = _IOC_NR(p1); ++ int status; ++ struct dxgprocess *process; ++ ++ if (code < 1 || code >= ARRAY_SIZE(ioctls)) { ++ DXG_ERR("bad ioctl %x %x %x %x", ++ code, _IOC_TYPE(p1), _IOC_SIZE(p1), _IOC_DIR(p1)); ++ return -ENOTTY; ++ } ++ if (ioctls[code].ioctl_callback == NULL) { ++ DXG_ERR("ioctl callback is NULL %x", code); ++ return -ENOTTY; ++ } ++ if (ioctls[code].ioctl != p1) { ++ DXG_ERR("ioctl mismatch. Code: %x User: %x Kernel: %x", ++ code, p1, ioctls[code].ioctl); ++ return -ENOTTY; ++ } ++ process = (struct dxgprocess *)f->private_data; ++ if (process->tgid != current->tgid) { ++ DXG_ERR("Call from a wrong process: %d %d", ++ process->tgid, current->tgid); ++ return -ENOTTY; ++ } ++ status = ioctls[code].ioctl_callback(process, (void *__user)p2); ++ return status; ++} ++ ++long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2) ++{ ++ DXG_TRACE("compat ioctl %x", p1); ++ return dxgk_ioctl(f, p1, p2); ++} ++ ++long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2) ++{ ++ DXG_TRACE("unlocked ioctl %x Code:%d", p1, _IOC_NR(p1)); ++ return dxgk_ioctl(f, p1, p2); ++} +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index d292e9a9bb7f..dc849a8ed3f2 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -27,10 +27,11 @@ extern const struct d3dkmthandle zerohandle; + * + * channel_lock (VMBus channel lock) + * fd_mutex +- * plistmutex (process list mutex) +- * table_lock (handle table lock) +- * core_lock (dxgadapter lock) +- * device_lock (dxgdevice lock) ++ * plistmutex ++ * table_lock ++ * core_lock ++ * device_lock ++ * process_adapter_mutex + * adapter_list_lock + * device_mutex (dxgglobal mutex) + */ +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 2ea04cc02a1f..c675d5827ed5 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -58,4 +58,107 @@ struct winluid { + __u32 b; + }; + ++#define D3DKMT_ADAPTERS_MAX 64 ++ ++struct d3dkmt_adapterinfo { ++ struct d3dkmthandle adapter_handle; ++ struct winluid adapter_luid; ++ __u32 num_sources; ++ __u32 present_move_regions_preferred; ++}; ++ ++struct d3dkmt_enumadapters2 { ++ __u32 num_adapters; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ struct d3dkmt_adapterinfo *adapters; ++#else ++ __u64 *adapters; ++#endif ++}; ++ ++struct d3dkmt_closeadapter { ++ struct d3dkmthandle adapter_handle; ++}; ++ ++struct d3dkmt_openadapterfromluid { ++ struct winluid adapter_luid; ++ struct d3dkmthandle adapter_handle; ++}; ++ ++struct d3dkmt_adaptertype { ++ union { ++ struct { ++ __u32 render_supported:1; ++ __u32 display_supported:1; ++ __u32 software_device:1; ++ __u32 post_device:1; ++ __u32 hybrid_discrete:1; ++ __u32 hybrid_integrated:1; ++ __u32 indirect_display_device:1; ++ __u32 paravirtualized:1; ++ 
__u32 acg_supported:1; ++ __u32 support_set_timings_from_vidpn:1; ++ __u32 detachable:1; ++ __u32 compute_only:1; ++ __u32 prototype:1; ++ __u32 reserved:19; ++ }; ++ __u32 value; ++ }; ++}; ++ ++enum kmtqueryadapterinfotype { ++ _KMTQAITYPE_UMDRIVERPRIVATE = 0, ++ _KMTQAITYPE_ADAPTERTYPE = 15, ++ _KMTQAITYPE_ADAPTERTYPE_RENDER = 57 ++}; ++ ++struct d3dkmt_queryadapterinfo { ++ struct d3dkmthandle adapter; ++ enum kmtqueryadapterinfotype type; ++#ifdef __KERNEL__ ++ void *private_data; ++#else ++ __u64 private_data; ++#endif ++ __u32 private_data_size; ++}; ++ ++union d3dkmt_enumadapters_filter { ++ struct { ++ __u64 include_compute_only:1; ++ __u64 include_display_only:1; ++ __u64 reserved:62; ++ }; ++ __u64 value; ++}; ++ ++struct d3dkmt_enumadapters3 { ++ union d3dkmt_enumadapters_filter filter; ++ __u32 adapter_count; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ struct d3dkmt_adapterinfo *adapters; ++#else ++ __u64 adapters; ++#endif ++}; ++ ++/* ++ * Dxgkrnl Graphics Port Driver ioctl definitions ++ * ++ */ ++ ++#define LX_DXOPENADAPTERFROMLUID \ ++ _IOWR(0x47, 0x01, struct d3dkmt_openadapterfromluid) ++#define LX_DXQUERYADAPTERINFO \ ++ _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) ++#define LX_DXENUMADAPTERS2 \ ++ _IOWR(0x47, 0x14, struct d3dkmt_enumadapters2) ++#define LX_DXCLOSEADAPTER \ ++ _IOWR(0x47, 0x15, struct d3dkmt_closeadapter) ++#define LX_DXENUMADAPTERS3 \ ++ _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) ++ + #endif /* _D3DKMTHK_H */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1673-drivers-hv-dxgkrnl-Enumerate-and-open-dxgadapter-objects.patch b/patch/kernel/archive/wsl2-arm64-6.1/1673-drivers-hv-dxgkrnl-Enumerate-and-open-dxgadapter-objects.patch new file mode 100644 index 000000000000..42920ec0d2cc --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1673-drivers-hv-dxgkrnl-Enumerate-and-open-dxgadapter-objects.patch @@ -0,0 +1,554 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Mon, 21 Mar 2022 19:18:50 -0700 +Subject: drivers: hv: dxgkrnl: Enumerate and open dxgadapter objects + +Implement ioctls to enumerate dxgadapter objects: + - The LX_DXENUMADAPTERS2 ioctl + - The LX_DXENUMADAPTERS3 ioctl. + +Implement ioctls to open an adapter by LUID and to close an adapter +handle: + - The LX_DXOPENADAPTERFROMLUID ioctl + - The LX_DXCLOSEADAPTER ioctl + +Implement the ioctl to query dxgadapter information: + - The LX_DXQUERYADAPTERINFO ioctl + +When a dxgadapter is enumerated, it is implicitly opened and +a handle (d3dkmthandle) is created in the current process handle +table. The handle is returned to the caller and can be used +by user mode to reference the VGPU adapter in other ioctls. + +The caller is responsible for closing the adapter when it is no +longer used by sending the LX_DXCLOSEADAPTER ioctl. + +A dxgprocess has a list of opened dxgadapter objects +(dxgprocess_adapter is used to represent the entry in the list). +A dxgadapter also has a list of dxgprocess_adapter objects. +This is needed for cleanup because either a process or an adapter +could be destroyed first.
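/*
 * Editor's illustrative sketch, not part of the quoted patch: the
 * user-space call sequence the commit message above describes. Each
 * adapter returned by LX_DXENUMADAPTERS2 is implicitly opened, so the
 * caller must close every handle with LX_DXCLOSEADAPTER. Assumptions
 * not taken from the patch: the uapi header is installed as
 * <misc/d3dkmthk.h> and the driver's character device is /dev/dxg.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/types.h>
#include <misc/d3dkmthk.h>

int main(void)
{
	struct d3dkmt_adapterinfo info[D3DKMT_ADAPTERS_MAX];
	struct d3dkmt_enumadapters2 args;
	struct d3dkmt_closeadapter close_args;
	__u32 i;
	int fd;

	fd = open("/dev/dxg", O_RDWR);
	if (fd < 0) {
		perror("open /dev/dxg");
		return 1;
	}

	memset(info, 0, sizeof(info));
	memset(&args, 0, sizeof(args));
	args.num_adapters = D3DKMT_ADAPTERS_MAX;
	args.adapters = (__u64 *)info;

	if (ioctl(fd, LX_DXENUMADAPTERS2, &args) == 0) {
		for (i = 0; i < args.num_adapters; i++) {
			printf("adapter %u: handle %x luid %x:%x\n", i,
			       info[i].adapter_handle.v,
			       info[i].adapter_luid.b,
			       info[i].adapter_luid.a);
			/* Enumeration opened this adapter for us; close it. */
			close_args.adapter_handle = info[i].adapter_handle;
			ioctl(fd, LX_DXCLOSEADAPTER, &close_args);
		}
	}

	close(fd);
	return 0;
}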
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgmodule.c | 3 + + drivers/hv/dxgkrnl/ioctl.c | 482 +++++++++- + 2 files changed, 484 insertions(+), 1 deletion(-) + +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 17c22001ca6c..fbe1c58ecb46 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -721,6 +721,9 @@ static struct dxgglobal *dxgglobal_create(void) + + init_rwsem(&dxgglobal->channel_lock); + ++#ifdef DEBUG ++ dxgk_validate_ioctls(); ++#endif + return dxgglobal; + } + +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 60e38d104517..b08ea9430093 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -29,8 +29,472 @@ struct ioctl_desc { + u32 arg_size; + }; + +-static struct ioctl_desc ioctls[] = { ++#ifdef DEBUG ++static char *errorstr(int ret) ++{ ++ return ret < 0 ? "err" : ""; ++} ++#endif ++ ++static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_openadapterfromluid args; ++ int ret; ++ struct dxgadapter *entry; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmt_openadapterfromluid *__user result = inargs; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("Faled to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED); ++ dxgglobal_acquire_process_adapter_lock(); ++ ++ list_for_each_entry(entry, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (dxgadapter_acquire_lock_shared(entry) == 0) { ++ if (*(u64 *) &entry->luid == ++ *(u64 *) &args.adapter_luid) { ++ ret = dxgprocess_open_adapter(process, entry, ++ &args.adapter_handle); ++ ++ if (ret >= 0) { ++ ret = copy_to_user( ++ &result->adapter_handle, ++ &args.adapter_handle, ++ sizeof(struct d3dkmthandle)); ++ if (ret) ++ ret = -EINVAL; ++ } ++ adapter = entry; ++ } ++ dxgadapter_release_lock_shared(entry); ++ if (adapter) ++ break; ++ } ++ } ++ ++ dxgglobal_release_process_adapter_lock(); ++ dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED); ++ ++ if (args.adapter_handle.v == 0) ++ ret = -EINVAL; ++ ++cleanup: ++ ++ if (ret < 0) ++ dxgprocess_close_adapter(process, args.adapter_handle); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkp_enum_adapters(struct dxgprocess *process, ++ union d3dkmt_enumadapters_filter filter, ++ u32 adapter_count_max, ++ struct d3dkmt_adapterinfo *__user info_out, ++ u32 * __user adapter_count_out) ++{ ++ int ret = 0; ++ struct dxgadapter *entry; ++ struct d3dkmt_adapterinfo *info = NULL; ++ struct dxgadapter **adapters = NULL; ++ int adapter_count = 0; ++ int i; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (info_out == NULL || adapter_count_max == 0) { ++ ret = copy_to_user(adapter_count_out, ++ &dxgglobal->num_adapters, sizeof(u32)); ++ if (ret) { ++ DXG_ERR("copy_to_user faled"); ++ ret = -EINVAL; ++ } ++ goto cleanup; ++ } ++ ++ if (adapter_count_max > 0xFFFF) { ++ DXG_ERR("too many adapters"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ info = vzalloc(sizeof(struct d3dkmt_adapterinfo) * adapter_count_max); ++ if (info == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ adapters = vzalloc(sizeof(struct dxgadapter *) * adapter_count_max); ++ if (adapters == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ 
dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED); ++ dxgglobal_acquire_process_adapter_lock(); + ++ list_for_each_entry(entry, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (dxgadapter_acquire_lock_shared(entry) == 0) { ++ struct d3dkmt_adapterinfo *inf = &info[adapter_count]; ++ ++ ret = dxgprocess_open_adapter(process, entry, ++ &inf->adapter_handle); ++ if (ret >= 0) { ++ inf->adapter_luid = entry->luid; ++ adapters[adapter_count] = entry; ++ DXG_TRACE("adapter: %x %x:%x", ++ inf->adapter_handle.v, ++ inf->adapter_luid.b, ++ inf->adapter_luid.a); ++ adapter_count++; ++ } ++ dxgadapter_release_lock_shared(entry); ++ } ++ if (ret < 0) ++ break; ++ } ++ ++ dxgglobal_release_process_adapter_lock(); ++ dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED); ++ ++ if (adapter_count > adapter_count_max) { ++ ret = STATUS_BUFFER_TOO_SMALL; ++ DXG_TRACE("Too many adapters"); ++ ret = copy_to_user(adapter_count_out, ++ &dxgglobal->num_adapters, sizeof(u32)); ++ if (ret) { ++ DXG_ERR("copy_to_user failed"); ++ ret = -EINVAL; ++ } ++ goto cleanup; ++ } ++ ++ ret = copy_to_user(adapter_count_out, &adapter_count, ++ sizeof(adapter_count)); ++ if (ret) { ++ DXG_ERR("failed to copy adapter_count"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_to_user(info_out, info, sizeof(info[0]) * adapter_count); ++ if (ret) { ++ DXG_ERR("failed to copy adapter info"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (ret >= 0) { ++ DXG_TRACE("found %d adapters", adapter_count); ++ goto success; ++ } ++ if (info) { ++ for (i = 0; i < adapter_count; i++) ++ dxgprocess_close_adapter(process, ++ info[i].adapter_handle); ++ } ++success: ++ if (info) ++ vfree(info); ++ if (adapters) ++ vfree(adapters); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_enumadapters2 args; ++ int ret; ++ struct dxgadapter *entry; ++ struct d3dkmt_adapterinfo *info = NULL; ++ struct dxgadapter **adapters = NULL; ++ int adapter_count = 0; ++ int i; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.adapters == NULL) { ++ DXG_TRACE("buffer is NULL"); ++ args.num_adapters = dxgglobal->num_adapters; ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy args to user"); ++ ret = -EINVAL; ++ } ++ goto cleanup; ++ } ++ if (args.num_adapters < dxgglobal->num_adapters) { ++ args.num_adapters = dxgglobal->num_adapters; ++ DXG_TRACE("buffer is too small"); ++ ret = -EOVERFLOW; ++ goto cleanup; ++ } ++ ++ if (args.num_adapters > D3DKMT_ADAPTERS_MAX) { ++ DXG_TRACE("too many adapters"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ info = vzalloc(sizeof(struct d3dkmt_adapterinfo) * args.num_adapters); ++ if (info == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ adapters = vzalloc(sizeof(struct dxgadapter *) * args.num_adapters); ++ if (adapters == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED); ++ dxgglobal_acquire_process_adapter_lock(); ++ ++ list_for_each_entry(entry, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (dxgadapter_acquire_lock_shared(entry) == 0) { ++ struct d3dkmt_adapterinfo *inf = &info[adapter_count]; ++ ++ ret = dxgprocess_open_adapter(process, entry, ++ &inf->adapter_handle); ++ if (ret >= 0) { ++ 
inf->adapter_luid = entry->luid; ++ adapters[adapter_count] = entry; ++ DXG_TRACE("adapter: %x %llx", ++ inf->adapter_handle.v, ++ *(u64 *) &inf->adapter_luid); ++ adapter_count++; ++ } ++ dxgadapter_release_lock_shared(entry); ++ } ++ if (ret < 0) ++ break; ++ } ++ ++ dxgglobal_release_process_adapter_lock(); ++ dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED); ++ ++ args.num_adapters = adapter_count; ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy args to user"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_to_user(args.adapters, info, ++ sizeof(info[0]) * args.num_adapters); ++ if (ret) { ++ DXG_ERR("failed to copy adapter info to user"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (info) { ++ for (i = 0; i < args.num_adapters; i++) { ++ dxgprocess_close_adapter(process, ++ info[i].adapter_handle); ++ } ++ } ++ } else { ++ DXG_TRACE("found %d adapters", args.num_adapters); ++ } ++ ++ if (info) ++ vfree(info); ++ if (adapters) ++ vfree(adapters); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_enum_adapters3(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_enumadapters3 args; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgkp_enum_adapters(process, args.filter, ++ args.adapter_count, ++ args.adapters, ++ &((struct d3dkmt_enumadapters3 *)inargs)-> ++ adapter_count); ++ ++cleanup: ++ ++ DXG_TRACE("ioctl: %s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_close_adapter(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmthandle args; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgprocess_close_adapter(process, args); ++ if (ret < 0) ++ DXG_ERR("failed to close adapter: %d", ret); ++ ++cleanup: ++ ++ DXG_TRACE("ioctl: %s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_query_adapter_info(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_queryadapterinfo args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.private_data_size > DXG_MAX_VM_BUS_PACKET_SIZE || ++ args.private_data_size == 0) { ++ DXG_ERR("invalid private data size"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ DXG_TRACE("Type: %d Size: %x", args.type, args.private_data_size); ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = dxgvmb_send_query_adapter_info(process, adapter, &args); ++ ++ dxgadapter_release_lock_shared(adapter); ++ ++cleanup: ++ ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static struct ioctl_desc ioctls[] = { ++/* 0x00 */ {}, ++/* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, ++/* 0x02 */ {}, ++/* 0x03 */ {}, ++/* 0x04 */ {}, ++/* 0x05 */ {}, ++/* 0x06 */ {}, ++/* 0x07 */ {}, ++/* 0x08 */ {}, ++/* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO}, ++/* 
0x0a */ {}, ++/* 0x0b */ {}, ++/* 0x0c */ {}, ++/* 0x0d */ {}, ++/* 0x0e */ {}, ++/* 0x0f */ {}, ++/* 0x10 */ {}, ++/* 0x11 */ {}, ++/* 0x12 */ {}, ++/* 0x13 */ {}, ++/* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2}, ++/* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER}, ++/* 0x16 */ {}, ++/* 0x17 */ {}, ++/* 0x18 */ {}, ++/* 0x19 */ {}, ++/* 0x1a */ {}, ++/* 0x1b */ {}, ++/* 0x1c */ {}, ++/* 0x1d */ {}, ++/* 0x1e */ {}, ++/* 0x1f */ {}, ++/* 0x20 */ {}, ++/* 0x21 */ {}, ++/* 0x22 */ {}, ++/* 0x23 */ {}, ++/* 0x24 */ {}, ++/* 0x25 */ {}, ++/* 0x26 */ {}, ++/* 0x27 */ {}, ++/* 0x28 */ {}, ++/* 0x29 */ {}, ++/* 0x2a */ {}, ++/* 0x2b */ {}, ++/* 0x2c */ {}, ++/* 0x2d */ {}, ++/* 0x2e */ {}, ++/* 0x2f */ {}, ++/* 0x30 */ {}, ++/* 0x31 */ {}, ++/* 0x32 */ {}, ++/* 0x33 */ {}, ++/* 0x34 */ {}, ++/* 0x35 */ {}, ++/* 0x36 */ {}, ++/* 0x37 */ {}, ++/* 0x38 */ {}, ++/* 0x39 */ {}, ++/* 0x3a */ {}, ++/* 0x3b */ {}, ++/* 0x3c */ {}, ++/* 0x3d */ {}, ++/* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3}, ++/* 0x3f */ {}, ++/* 0x40 */ {}, ++/* 0x41 */ {}, ++/* 0x42 */ {}, ++/* 0x43 */ {}, ++/* 0x44 */ {}, ++/* 0x45 */ {}, + }; + + /* +@@ -82,3 +546,19 @@ long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2) + DXG_TRACE("unlocked ioctl %x Code:%d", p1, _IOC_NR(p1)); + return dxgk_ioctl(f, p1, p2); + } ++ ++#ifdef DEBUG ++void dxgk_validate_ioctls(void) ++{ ++ int i; ++ ++ for (i=0; i < ARRAY_SIZE(ioctls); i++) ++ { ++ if (ioctls[i].ioctl && _IOC_NR(ioctls[i].ioctl) != i) ++ { ++ DXG_ERR("Invalid ioctl"); ++ DXGKRNL_ASSERT(0); ++ } ++ } ++} ++#endif +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1674-drivers-hv-dxgkrnl-Creation-of-dxgdevice-objects.patch b/patch/kernel/archive/wsl2-arm64-6.1/1674-drivers-hv-dxgkrnl-Creation-of-dxgdevice-objects.patch new file mode 100644 index 000000000000..28ae3c0856b3 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1674-drivers-hv-dxgkrnl-Creation-of-dxgdevice-objects.patch @@ -0,0 +1,828 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 1 Feb 2022 17:23:58 -0800 +Subject: drivers: hv: dxgkrnl: Creation of dxgdevice objects + +Implement ioctls for creation and destruction of dxgdevice +objects: + - the LX_DXCREATEDEVICE ioctl + - the LX_DXDESTROYDEVICE ioctl + +A dxgdevice object represents a container of other virtual +compute device objects (allocations, sync objects, contexts, +etc.). It belongs to a dxgadapter object. 
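/*
 * Editor's illustrative sketch, not part of the quoted patch: how a
 * caller is expected to pair LX_DXCREATEDEVICE with LX_DXDESTROYDEVICE
 * once it holds an adapter handle from LX_DXENUMADAPTERS2 (see the
 * sketch after the previous patch). Same assumptions as before, not
 * taken from the patch: uapi header installed as <misc/d3dkmthk.h>,
 * device node /dev/dxg.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/types.h>
#include <misc/d3dkmthk.h>

int main(void)
{
	struct d3dkmt_adapterinfo info[D3DKMT_ADAPTERS_MAX];
	struct d3dkmt_enumadapters2 enum_args;
	struct d3dkmt_createdevice create;
	struct d3dkmt_destroydevice destroy;
	struct d3dkmt_closeadapter close_args;
	__u32 i;
	int fd, ret;

	fd = open("/dev/dxg", O_RDWR);
	if (fd < 0) {
		perror("open /dev/dxg");
		return 1;
	}

	memset(info, 0, sizeof(info));
	memset(&enum_args, 0, sizeof(enum_args));
	enum_args.num_adapters = D3DKMT_ADAPTERS_MAX;
	enum_args.adapters = (__u64 *)info;
	ret = ioctl(fd, LX_DXENUMADAPTERS2, &enum_args);
	if (ret || enum_args.num_adapters == 0) {
		fprintf(stderr, "no virtual GPU adapters\n");
		close(fd);
		return 1;
	}

	/* A dxgdevice is created on, and owned by, a specific adapter. */
	memset(&create, 0, sizeof(create));
	create.adapter = info[0].adapter_handle;
	if (ioctl(fd, LX_DXCREATEDEVICE, &create) == 0) {
		printf("device handle %x\n", create.device.v);
		/* The driver filled in the device handle on success. */
		destroy.device = create.device;
		ioctl(fd, LX_DXDESTROYDEVICE, &destroy);
	}

	/* Close every adapter handle that enumeration opened. */
	for (i = 0; i < enum_args.num_adapters; i++) {
		close_args.adapter_handle = info[i].adapter_handle;
		ioctl(fd, LX_DXCLOSEADAPTER, &close_args);
	}

	close(fd);
	return 0;
}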
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 187 ++++++++++ + drivers/hv/dxgkrnl/dxgkrnl.h | 58 +++ + drivers/hv/dxgkrnl/dxgprocess.c | 43 +++ + drivers/hv/dxgkrnl/dxgvmbus.c | 80 ++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 22 ++ + drivers/hv/dxgkrnl/ioctl.c | 130 ++++++- + drivers/hv/dxgkrnl/misc.h | 8 +- + include/uapi/misc/d3dkmthk.h | 82 ++++ + 8 files changed, 604 insertions(+), 6 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index fa0d6beca157..a9a341716eba 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -194,6 +194,122 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter) + up_read(&adapter->core_lock); + } + ++struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, ++ struct dxgprocess *process) ++{ ++ struct dxgdevice *device; ++ int ret; ++ ++ device = kzalloc(sizeof(struct dxgdevice), GFP_KERNEL); ++ if (device) { ++ kref_init(&device->device_kref); ++ device->adapter = adapter; ++ device->process = process; ++ kref_get(&adapter->adapter_kref); ++ init_rwsem(&device->device_lock); ++ INIT_LIST_HEAD(&device->pqueue_list_head); ++ device->object_state = DXGOBJECTSTATE_CREATED; ++ device->execution_state = _D3DKMT_DEVICEEXECUTION_ACTIVE; ++ ++ ret = dxgprocess_adapter_add_device(process, adapter, device); ++ if (ret < 0) { ++ kref_put(&device->device_kref, dxgdevice_release); ++ device = NULL; ++ } ++ } ++ return device; ++} ++ ++void dxgdevice_stop(struct dxgdevice *device) ++{ ++} ++ ++void dxgdevice_mark_destroyed(struct dxgdevice *device) ++{ ++ down_write(&device->device_lock); ++ device->object_state = DXGOBJECTSTATE_DESTROYED; ++ up_write(&device->device_lock); ++} ++ ++void dxgdevice_destroy(struct dxgdevice *device) ++{ ++ struct dxgprocess *process = device->process; ++ struct dxgadapter *adapter = device->adapter; ++ struct d3dkmthandle device_handle = {}; ++ ++ DXG_TRACE("Destroying device: %p", device); ++ ++ down_write(&device->device_lock); ++ ++ if (device->object_state != DXGOBJECTSTATE_ACTIVE) ++ goto cleanup; ++ ++ device->object_state = DXGOBJECTSTATE_DESTROYED; ++ ++ dxgdevice_stop(device); ++ ++ /* Guest handles need to be released before the host handles */ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ if (device->handle_valid) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGDEVICE, device->handle); ++ device_handle = device->handle; ++ device->handle_valid = 0; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (device_handle.v) { ++ up_write(&device->device_lock); ++ if (dxgadapter_acquire_lock_shared(adapter) == 0) { ++ dxgvmb_send_destroy_device(adapter, process, ++ device_handle); ++ dxgadapter_release_lock_shared(adapter); ++ } ++ down_write(&device->device_lock); ++ } ++ ++cleanup: ++ ++ if (device->adapter) { ++ dxgprocess_adapter_remove_device(device); ++ kref_put(&device->adapter->adapter_kref, dxgadapter_release); ++ device->adapter = NULL; ++ } ++ ++ up_write(&device->device_lock); ++ ++ kref_put(&device->device_kref, dxgdevice_release); ++ DXG_TRACE("Device destroyed"); ++} ++ ++int dxgdevice_acquire_lock_shared(struct dxgdevice *device) ++{ ++ down_read(&device->device_lock); ++ if (!dxgdevice_is_active(device)) { ++ up_read(&device->device_lock); ++ return -ENODEV; ++ } ++ return 0; ++} ++ ++void dxgdevice_release_lock_shared(struct dxgdevice *device) ++{ ++ up_read(&device->device_lock); ++} ++ 
++bool dxgdevice_is_active(struct dxgdevice *device) ++{ ++ return device->object_state == DXGOBJECTSTATE_ACTIVE; ++} ++ ++void dxgdevice_release(struct kref *refcount) ++{ ++ struct dxgdevice *device; ++ ++ device = container_of(refcount, struct dxgdevice, device_kref); ++ kfree(device); ++} ++ + struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + struct dxgadapter *adapter) + { +@@ -208,6 +324,8 @@ struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + adapter_info->adapter = adapter; + adapter_info->process = process; + adapter_info->refcount = 1; ++ mutex_init(&adapter_info->device_list_mutex); ++ INIT_LIST_HEAD(&adapter_info->device_list_head); + list_add_tail(&adapter_info->process_adapter_list_entry, + &process->process_adapter_list_head); + dxgadapter_add_process(adapter, adapter_info); +@@ -221,10 +339,34 @@ struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + + void dxgprocess_adapter_stop(struct dxgprocess_adapter *adapter_info) + { ++ struct dxgdevice *device; ++ ++ mutex_lock(&adapter_info->device_list_mutex); ++ list_for_each_entry(device, &adapter_info->device_list_head, ++ device_list_entry) { ++ dxgdevice_stop(device); ++ } ++ mutex_unlock(&adapter_info->device_list_mutex); + } + + void dxgprocess_adapter_destroy(struct dxgprocess_adapter *adapter_info) + { ++ struct dxgdevice *device; ++ ++ mutex_lock(&adapter_info->device_list_mutex); ++ while (!list_empty(&adapter_info->device_list_head)) { ++ device = list_first_entry(&adapter_info->device_list_head, ++ struct dxgdevice, device_list_entry); ++ list_del(&device->device_list_entry); ++ device->device_list_entry.next = NULL; ++ mutex_unlock(&adapter_info->device_list_mutex); ++ dxgvmb_send_flush_device(device, ++ DXGDEVICE_FLUSHSCHEDULER_DEVICE_TERMINATE); ++ dxgdevice_destroy(device); ++ mutex_lock(&adapter_info->device_list_mutex); ++ } ++ mutex_unlock(&adapter_info->device_list_mutex); ++ + dxgadapter_remove_process(adapter_info); + kref_put(&adapter_info->adapter->adapter_kref, dxgadapter_release); + list_del(&adapter_info->process_adapter_list_entry); +@@ -240,3 +382,48 @@ void dxgprocess_adapter_release(struct dxgprocess_adapter *adapter_info) + if (adapter_info->refcount == 0) + dxgprocess_adapter_destroy(adapter_info); + } ++ ++int dxgprocess_adapter_add_device(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct dxgdevice *device) ++{ ++ struct dxgprocess_adapter *entry; ++ struct dxgprocess_adapter *adapter_info = NULL; ++ int ret = 0; ++ ++ dxgglobal_acquire_process_adapter_lock(); ++ ++ list_for_each_entry(entry, &process->process_adapter_list_head, ++ process_adapter_list_entry) { ++ if (entry->adapter == adapter) { ++ adapter_info = entry; ++ break; ++ } ++ } ++ if (adapter_info == NULL) { ++ DXG_ERR("failed to find process adapter info"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ mutex_lock(&adapter_info->device_list_mutex); ++ list_add_tail(&device->device_list_entry, ++ &adapter_info->device_list_head); ++ device->adapter_info = adapter_info; ++ mutex_unlock(&adapter_info->device_list_mutex); ++ ++cleanup: ++ ++ dxgglobal_release_process_adapter_lock(); ++ return ret; ++} ++ ++void dxgprocess_adapter_remove_device(struct dxgdevice *device) ++{ ++ DXG_TRACE("Removing device: %p", device); ++ mutex_lock(&device->adapter_info->device_list_mutex); ++ if (device->device_list_entry.next) { ++ list_del(&device->device_list_entry); ++ device->device_list_entry.next = NULL; ++ } ++ 
mutex_unlock(&device->adapter_info->device_list_mutex); ++} +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index b089d126f801..45ac1f25cc5e 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -34,6 +34,7 @@ + + struct dxgprocess; + struct dxgadapter; ++struct dxgdevice; + + /* + * Driver private data. +@@ -71,6 +72,10 @@ struct dxgk_device_types { + u32 virtual_monitor_device:1; + }; + ++enum dxgdevice_flushschedulerreason { ++ DXGDEVICE_FLUSHSCHEDULER_DEVICE_TERMINATE = 4, ++}; ++ + enum dxgobjectstate { + DXGOBJECTSTATE_CREATED, + DXGOBJECTSTATE_ACTIVE, +@@ -166,6 +171,9 @@ struct dxgprocess_adapter { + struct list_head adapter_process_list_entry; + /* Entry in dxgprocess::process_adapter_list_head */ + struct list_head process_adapter_list_entry; ++ /* List of all dxgdevice objects created for the process on adapter */ ++ struct list_head device_list_head; ++ struct mutex device_list_mutex; + struct dxgadapter *adapter; + struct dxgprocess *process; + int refcount; +@@ -175,6 +183,10 @@ struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + struct dxgadapter + *adapter); + void dxgprocess_adapter_release(struct dxgprocess_adapter *adapter); ++int dxgprocess_adapter_add_device(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct dxgdevice *device); ++void dxgprocess_adapter_remove_device(struct dxgdevice *device); + void dxgprocess_adapter_stop(struct dxgprocess_adapter *adapter_info); + void dxgprocess_adapter_destroy(struct dxgprocess_adapter *adapter_info); + +@@ -222,6 +234,11 @@ struct dxgadapter *dxgprocess_get_adapter(struct dxgprocess *process, + struct d3dkmthandle handle); + struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process, + struct d3dkmthandle handle); ++struct dxgdevice *dxgprocess_device_by_handle(struct dxgprocess *process, ++ struct d3dkmthandle handle); ++struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process, ++ enum hmgrentry_type t, ++ struct d3dkmthandle h); + void dxgprocess_ht_lock_shared_down(struct dxgprocess *process); + void dxgprocess_ht_lock_shared_up(struct dxgprocess *process); + void dxgprocess_ht_lock_exclusive_down(struct dxgprocess *process); +@@ -241,6 +258,7 @@ enum dxgadapter_state { + * This object represents the grapchis adapter. + * Objects, which take reference on the adapter: + * - dxgglobal ++ * - dxgdevice + * - adapter handle (struct d3dkmthandle) + */ + struct dxgadapter { +@@ -277,6 +295,38 @@ void dxgadapter_add_process(struct dxgadapter *adapter, + struct dxgprocess_adapter *process_info); + void dxgadapter_remove_process(struct dxgprocess_adapter *process_info); + ++/* ++ * The object represent the device object. ++ * The following objects take reference on the device ++ * - device handle (struct d3dkmthandle) ++ */ ++struct dxgdevice { ++ enum dxgobjectstate object_state; ++ /* Device takes reference on the adapter */ ++ struct dxgadapter *adapter; ++ struct dxgprocess_adapter *adapter_info; ++ struct dxgprocess *process; ++ /* Entry in the DGXPROCESS_ADAPTER device list */ ++ struct list_head device_list_entry; ++ struct kref device_kref; ++ /* Protects destcruction of the device object */ ++ struct rw_semaphore device_lock; ++ /* List of paging queues. Protected by process handle table lock. 
*/ ++ struct list_head pqueue_list_head; ++ struct d3dkmthandle handle; ++ enum d3dkmt_deviceexecution_state execution_state; ++ u32 handle_valid; ++}; ++ ++struct dxgdevice *dxgdevice_create(struct dxgadapter *a, struct dxgprocess *p); ++void dxgdevice_destroy(struct dxgdevice *device); ++void dxgdevice_stop(struct dxgdevice *device); ++void dxgdevice_mark_destroyed(struct dxgdevice *device); ++int dxgdevice_acquire_lock_shared(struct dxgdevice *dev); ++void dxgdevice_release_lock_shared(struct dxgdevice *dev); ++void dxgdevice_release(struct kref *refcount); ++bool dxgdevice_is_active(struct dxgdevice *dev); ++ + long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2); + long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2); + +@@ -313,6 +363,14 @@ int dxgvmb_send_destroy_process(struct d3dkmthandle process); + int dxgvmb_send_open_adapter(struct dxgadapter *adapter); + int dxgvmb_send_close_adapter(struct dxgadapter *adapter); + int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter); ++struct d3dkmthandle dxgvmb_send_create_device(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmt_createdevice *args); ++int dxgvmb_send_destroy_device(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmthandle h); ++int dxgvmb_send_flush_device(struct dxgdevice *device, ++ enum dxgdevice_flushschedulerreason reason); + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index ab9a01e3c8c8..8373f681e822 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -241,6 +241,49 @@ struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process, + return adapter; + } + ++struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process, ++ enum hmgrentry_type t, ++ struct d3dkmthandle handle) ++{ ++ struct dxgdevice *device = NULL; ++ void *obj; ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); ++ obj = hmgrtable_get_object_by_type(&process->handle_table, t, handle); ++ if (obj) { ++ struct d3dkmthandle device_handle = {}; ++ ++ switch (t) { ++ case HMGRENTRY_TYPE_DXGDEVICE: ++ device = obj; ++ break; ++ default: ++ DXG_ERR("invalid handle type: %d", t); ++ break; ++ } ++ if (device == NULL) ++ device = hmgrtable_get_object_by_type( ++ &process->handle_table, ++ HMGRENTRY_TYPE_DXGDEVICE, ++ device_handle); ++ if (device) ++ if (kref_get_unless_zero(&device->device_kref) == 0) ++ device = NULL; ++ } ++ if (device == NULL) ++ DXG_ERR("device_by_handle failed: %d %x", t, handle.v); ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED); ++ return device; ++} ++ ++struct dxgdevice *dxgprocess_device_by_handle(struct dxgprocess *process, ++ struct d3dkmthandle handle) ++{ ++ return dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGDEVICE, ++ handle); ++} ++ + void dxgprocess_ht_lock_shared_down(struct dxgprocess *process) + { + hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 0abf45d0d3f7..73804d11ec49 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -673,6 +673,86 @@ int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter) + return ret; + } + ++struct d3dkmthandle dxgvmb_send_create_device(struct dxgadapter *adapter, ++ struct 
dxgprocess *process, ++ struct d3dkmt_createdevice *args) ++{ ++ int ret; ++ struct dxgkvmb_command_createdevice *command; ++ struct dxgkvmb_command_createdevice_return result = { }; ++ struct dxgvmbusmsg msg; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_CREATEDEVICE, ++ process->host_handle); ++ command->flags = args->flags; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) ++ result.device.v = 0; ++ free_message(&msg, process); ++cleanup: ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return result.device; ++} ++ ++int dxgvmb_send_destroy_device(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmthandle h) ++{ ++ int ret; ++ struct dxgkvmb_command_destroydevice *command; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_DESTROYDEVICE, ++ process->host_handle); ++ command->device = h; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_flush_device(struct dxgdevice *device, ++ enum dxgdevice_flushschedulerreason reason) ++{ ++ int ret; ++ struct dxgkvmb_command_flushdevice *command; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ struct dxgprocess *process = device->process; ++ ++ ret = init_message(&msg, device->adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_FLUSHDEVICE, ++ process->host_handle); ++ command->device = device->handle; ++ command->reason = reason; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index a805a396e083..4ccf45765954 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -247,4 +247,26 @@ struct dxgkvmb_command_queryadapterinfo_return { + u8 private_data[1]; + }; + ++struct dxgkvmb_command_createdevice { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_createdeviceflags flags; ++ bool cdd_device; ++ void *error_code; ++}; ++ ++struct dxgkvmb_command_createdevice_return { ++ struct d3dkmthandle device; ++}; ++ ++struct dxgkvmb_command_destroydevice { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++}; ++ ++struct dxgkvmb_command_flushdevice { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ enum dxgdevice_flushschedulerreason reason; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index b08ea9430093..405e8b92913e 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -424,10 +424,136 @@ dxgkio_query_adapter_info(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_create_device(struct dxgprocess *process, void *__user inargs) ++{ ++ struct 
d3dkmt_createdevice args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ struct d3dkmthandle host_device_handle = {}; ++ bool adapter_locked = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* The call acquires reference on the adapter */ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgdevice_create(adapter, process); ++ if (device == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) ++ goto cleanup; ++ ++ adapter_locked = true; ++ ++ host_device_handle = dxgvmb_send_create_device(adapter, process, &args); ++ if (host_device_handle.v) { ++ ret = copy_to_user(&((struct d3dkmt_createdevice *)inargs)-> ++ device, &host_device_handle, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy device handle"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, device, ++ HMGRENTRY_TYPE_DXGDEVICE, ++ host_device_handle); ++ if (ret >= 0) { ++ device->handle = host_device_handle; ++ device->handle_valid = 1; ++ device->object_state = DXGOBJECTSTATE_ACTIVE; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (host_device_handle.v) ++ dxgvmb_send_destroy_device(adapter, process, ++ host_device_handle); ++ if (device) ++ dxgdevice_destroy(device); ++ } ++ ++ if (adapter_locked) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_destroy_device(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_destroydevice args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ device = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGDEVICE, ++ args.device); ++ if (device) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGDEVICE, args.device); ++ device->handle_valid = 0; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (device == NULL) { ++ DXG_ERR("invalid device handle: %x", args.device.v); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ++ dxgdevice_destroy(device); ++ ++ if (dxgadapter_acquire_lock_shared(adapter) == 0) { ++ dxgvmb_send_destroy_device(adapter, process, args.device); ++ dxgadapter_release_lock_shared(adapter); ++ } ++ ++cleanup: ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, +-/* 0x02 */ {}, ++/* 0x02 */ {dxgkio_create_device, LX_DXCREATEDEVICE}, + /* 0x03 */ {}, + /* 0x04 */ {}, + /* 0x05 */ {}, +@@ -450,7 +576,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x16 */ {}, + /* 0x17 */ {}, + /* 0x18 */ {}, +-/* 0x19 */ {}, ++/* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE}, + /* 0x1a */ {}, + /* 0x1b */ {}, + /* 
0x1c */ {}, +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index dc849a8ed3f2..e0bd33b365b0 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -27,10 +27,10 @@ extern const struct d3dkmthandle zerohandle; + * + * channel_lock (VMBus channel lock) + * fd_mutex +- * plistmutex +- * table_lock +- * core_lock +- * device_lock ++ * plistmutex (process list mutex) ++ * table_lock (handle table lock) ++ * core_lock (dxgadapter lock) ++ * device_lock (dxgdevice lock) + * process_adapter_mutex + * adapter_list_lock + * device_mutex (dxgglobal mutex) +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index c675d5827ed5..7414f0f5ce8e 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -86,6 +86,74 @@ struct d3dkmt_openadapterfromluid { + struct d3dkmthandle adapter_handle; + }; + ++struct d3dddi_allocationlist { ++ struct d3dkmthandle allocation; ++ union { ++ struct { ++ __u32 write_operation :1; ++ __u32 do_not_retire_instance :1; ++ __u32 offer_priority :3; ++ __u32 reserved :27; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dddi_patchlocationlist { ++ __u32 allocation_index; ++ union { ++ struct { ++ __u32 slot_id:24; ++ __u32 reserved:8; ++ }; ++ __u32 value; ++ }; ++ __u32 driver_id; ++ __u32 allocation_offset; ++ __u32 patch_offset; ++ __u32 split_offset; ++}; ++ ++struct d3dkmt_createdeviceflags { ++ __u32 legacy_mode:1; ++ __u32 request_vSync:1; ++ __u32 disable_gpu_timeout:1; ++ __u32 gdi_device:1; ++ __u32 reserved:28; ++}; ++ ++struct d3dkmt_createdevice { ++ struct d3dkmthandle adapter; ++ __u32 reserved3; ++ struct d3dkmt_createdeviceflags flags; ++ struct d3dkmthandle device; ++#ifdef __KERNEL__ ++ void *command_buffer; ++#else ++ __u64 command_buffer; ++#endif ++ __u32 command_buffer_size; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ struct d3dddi_allocationlist *allocation_list; ++#else ++ __u64 allocation_list; ++#endif ++ __u32 allocation_list_size; ++ __u32 reserved1; ++#ifdef __KERNEL__ ++ struct d3dddi_patchlocationlist *patch_location_list; ++#else ++ __u64 patch_location_list; ++#endif ++ __u32 patch_location_list_size; ++ __u32 reserved2; ++}; ++ ++struct d3dkmt_destroydevice { ++ struct d3dkmthandle device; ++}; ++ + struct d3dkmt_adaptertype { + union { + struct { +@@ -125,6 +193,16 @@ struct d3dkmt_queryadapterinfo { + __u32 private_data_size; + }; + ++enum d3dkmt_deviceexecution_state { ++ _D3DKMT_DEVICEEXECUTION_ACTIVE = 1, ++ _D3DKMT_DEVICEEXECUTION_RESET = 2, ++ _D3DKMT_DEVICEEXECUTION_HUNG = 3, ++ _D3DKMT_DEVICEEXECUTION_STOPPED = 4, ++ _D3DKMT_DEVICEEXECUTION_ERROR_OUTOFMEMORY = 5, ++ _D3DKMT_DEVICEEXECUTION_ERROR_DMAFAULT = 6, ++ _D3DKMT_DEVICEEXECUTION_ERROR_DMAPAGEFAULT = 7, ++}; ++ + union d3dkmt_enumadapters_filter { + struct { + __u64 include_compute_only:1; +@@ -152,12 +230,16 @@ struct d3dkmt_enumadapters3 { + + #define LX_DXOPENADAPTERFROMLUID \ + _IOWR(0x47, 0x01, struct d3dkmt_openadapterfromluid) ++#define LX_DXCREATEDEVICE \ ++ _IOWR(0x47, 0x02, struct d3dkmt_createdevice) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXENUMADAPTERS2 \ + _IOWR(0x47, 0x14, struct d3dkmt_enumadapters2) + #define LX_DXCLOSEADAPTER \ + _IOWR(0x47, 0x15, struct d3dkmt_closeadapter) ++#define LX_DXDESTROYDEVICE \ ++ _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXENUMADAPTERS3 \ + _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) + +-- +Armbian + diff --git 
a/patch/kernel/archive/wsl2-arm64-6.1/1675-drivers-hv-dxgkrnl-Creation-of-dxgcontext-objects.patch b/patch/kernel/archive/wsl2-arm64-6.1/1675-drivers-hv-dxgkrnl-Creation-of-dxgcontext-objects.patch new file mode 100644 index 000000000000..73403cb5b4a1 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1675-drivers-hv-dxgkrnl-Creation-of-dxgcontext-objects.patch @@ -0,0 +1,668 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 1 Feb 2022 17:03:47 -0800 +Subject: drivers: hv: dxgkrnl: Creation of dxgcontext objects + +Implement ioctls for creation/destruction of dxgcontext +objects: + - the LX_DXCREATECONTEXTVIRTUAL ioctl + - the LX_DXDESTROYCONTEXT ioctl. + +A dxgcontext object represents a compute device execution thread. +Compute device DMA buffers and synchronization operations are +submitted for execution to a dxgcontext. dxgcontext objects +belong to a dxgdevice object. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 103 ++++++ + drivers/hv/dxgkrnl/dxgkrnl.h | 38 +++ + drivers/hv/dxgkrnl/dxgprocess.c | 4 + + drivers/hv/dxgkrnl/dxgvmbus.c | 101 +++++- + drivers/hv/dxgkrnl/dxgvmbus.h | 18 + + drivers/hv/dxgkrnl/ioctl.c | 168 +++++++++- + drivers/hv/dxgkrnl/misc.h | 1 + + include/uapi/misc/d3dkmthk.h | 47 +++ + 8 files changed, 477 insertions(+), 3 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index a9a341716eba..cd103e092ac2 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -206,7 +206,9 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, + device->adapter = adapter; + device->process = process; + kref_get(&adapter->adapter_kref); ++ INIT_LIST_HEAD(&device->context_list_head); + init_rwsem(&device->device_lock); ++ init_rwsem(&device->context_list_lock); + INIT_LIST_HEAD(&device->pqueue_list_head); + device->object_state = DXGOBJECTSTATE_CREATED; + device->execution_state = _D3DKMT_DEVICEEXECUTION_ACTIVE; +@@ -248,6 +250,20 @@ void dxgdevice_destroy(struct dxgdevice *device) + + dxgdevice_stop(device); + ++ { ++ struct dxgcontext *context; ++ struct dxgcontext *tmp; ++ ++ DXG_TRACE("destroying contexts"); ++ dxgdevice_acquire_context_list_lock(device); ++ list_for_each_entry_safe(context, tmp, ++ &device->context_list_head, ++ context_list_entry) { ++ dxgcontext_destroy(process, context); ++ } ++ dxgdevice_release_context_list_lock(device); ++ } ++ + /* Guest handles need to be released before the host handles */ + hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); + if (device->handle_valid) { +@@ -302,6 +318,32 @@ bool dxgdevice_is_active(struct dxgdevice *device) + return device->object_state == DXGOBJECTSTATE_ACTIVE; + } + ++void dxgdevice_acquire_context_list_lock(struct dxgdevice *device) ++{ ++ down_write(&device->context_list_lock); ++} ++ ++void dxgdevice_release_context_list_lock(struct dxgdevice *device) ++{ ++ up_write(&device->context_list_lock); ++} ++ ++void dxgdevice_add_context(struct dxgdevice *device, struct dxgcontext *context) ++{ ++ down_write(&device->context_list_lock); ++ list_add_tail(&context->context_list_entry, &device->context_list_head); ++ up_write(&device->context_list_lock); ++} ++ ++void dxgdevice_remove_context(struct dxgdevice *device, ++ struct dxgcontext *context) ++{ ++ if (context->context_list_entry.next) { ++ list_del(&context->context_list_entry); ++ context->context_list_entry.next = NULL; ++ 
} ++} ++ + void dxgdevice_release(struct kref *refcount) + { + struct dxgdevice *device; +@@ -310,6 +352,67 @@ void dxgdevice_release(struct kref *refcount) + kfree(device); + } + ++struct dxgcontext *dxgcontext_create(struct dxgdevice *device) ++{ ++ struct dxgcontext *context; ++ ++ context = kzalloc(sizeof(struct dxgcontext), GFP_KERNEL); ++ if (context) { ++ kref_init(&context->context_kref); ++ context->device = device; ++ context->process = device->process; ++ context->device_handle = device->handle; ++ kref_get(&device->device_kref); ++ INIT_LIST_HEAD(&context->hwqueue_list_head); ++ init_rwsem(&context->hwqueue_list_lock); ++ dxgdevice_add_context(device, context); ++ context->object_state = DXGOBJECTSTATE_ACTIVE; ++ } ++ return context; ++} ++ ++/* ++ * Called when the device context list lock is held ++ */ ++void dxgcontext_destroy(struct dxgprocess *process, struct dxgcontext *context) ++{ ++ DXG_TRACE("Destroying context %p", context); ++ context->object_state = DXGOBJECTSTATE_DESTROYED; ++ if (context->device) { ++ if (context->handle.v) { ++ hmgrtable_free_handle_safe(&process->handle_table, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ context->handle); ++ } ++ dxgdevice_remove_context(context->device, context); ++ kref_put(&context->device->device_kref, dxgdevice_release); ++ } ++ kref_put(&context->context_kref, dxgcontext_release); ++} ++ ++void dxgcontext_destroy_safe(struct dxgprocess *process, ++ struct dxgcontext *context) ++{ ++ struct dxgdevice *device = context->device; ++ ++ dxgdevice_acquire_context_list_lock(device); ++ dxgcontext_destroy(process, context); ++ dxgdevice_release_context_list_lock(device); ++} ++ ++bool dxgcontext_is_active(struct dxgcontext *context) ++{ ++ return context->object_state == DXGOBJECTSTATE_ACTIVE; ++} ++ ++void dxgcontext_release(struct kref *refcount) ++{ ++ struct dxgcontext *context; ++ ++ context = container_of(refcount, struct dxgcontext, context_kref); ++ kfree(context); ++} ++ + struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + struct dxgadapter *adapter) + { +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 45ac1f25cc5e..a3d8d3c9f37d 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -35,6 +35,7 @@ + struct dxgprocess; + struct dxgadapter; + struct dxgdevice; ++struct dxgcontext; + + /* + * Driver private data. +@@ -298,6 +299,7 @@ void dxgadapter_remove_process(struct dxgprocess_adapter *process_info); + /* + * The object represent the device object. + * The following objects take reference on the device ++ * - dxgcontext + * - device handle (struct d3dkmthandle) + */ + struct dxgdevice { +@@ -311,6 +313,8 @@ struct dxgdevice { + struct kref device_kref; + /* Protects destcruction of the device object */ + struct rw_semaphore device_lock; ++ struct rw_semaphore context_list_lock; ++ struct list_head context_list_head; + /* List of paging queues. Protected by process handle table lock. 
*/ + struct list_head pqueue_list_head; + struct d3dkmthandle handle; +@@ -325,7 +329,33 @@ void dxgdevice_mark_destroyed(struct dxgdevice *device); + int dxgdevice_acquire_lock_shared(struct dxgdevice *dev); + void dxgdevice_release_lock_shared(struct dxgdevice *dev); + void dxgdevice_release(struct kref *refcount); ++void dxgdevice_add_context(struct dxgdevice *dev, struct dxgcontext *ctx); ++void dxgdevice_remove_context(struct dxgdevice *dev, struct dxgcontext *ctx); + bool dxgdevice_is_active(struct dxgdevice *dev); ++void dxgdevice_acquire_context_list_lock(struct dxgdevice *dev); ++void dxgdevice_release_context_list_lock(struct dxgdevice *dev); ++ ++/* ++ * The object represent the execution context of a device. ++ */ ++struct dxgcontext { ++ enum dxgobjectstate object_state; ++ struct dxgdevice *device; ++ struct dxgprocess *process; ++ /* entry in the device context list */ ++ struct list_head context_list_entry; ++ struct list_head hwqueue_list_head; ++ struct rw_semaphore hwqueue_list_lock; ++ struct kref context_kref; ++ struct d3dkmthandle handle; ++ struct d3dkmthandle device_handle; ++}; ++ ++struct dxgcontext *dxgcontext_create(struct dxgdevice *dev); ++void dxgcontext_destroy(struct dxgprocess *pr, struct dxgcontext *ctx); ++void dxgcontext_destroy_safe(struct dxgprocess *pr, struct dxgcontext *ctx); ++void dxgcontext_release(struct kref *refcount); ++bool dxgcontext_is_active(struct dxgcontext *ctx); + + long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2); + long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2); +@@ -371,6 +401,14 @@ int dxgvmb_send_destroy_device(struct dxgadapter *adapter, + struct d3dkmthandle h); + int dxgvmb_send_flush_device(struct dxgdevice *device, + enum dxgdevice_flushschedulerreason reason); ++struct d3dkmthandle ++dxgvmb_send_create_context(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmt_createcontextvirtual ++ *args); ++int dxgvmb_send_destroy_context(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmthandle h); + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index 8373f681e822..ca307beb9a9a 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -257,6 +257,10 @@ struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process, + case HMGRENTRY_TYPE_DXGDEVICE: + device = obj; + break; ++ case HMGRENTRY_TYPE_DXGCONTEXT: ++ device_handle = ++ ((struct dxgcontext *)obj)->device_handle; ++ break; + default: + DXG_ERR("invalid handle type: %d", t); + break; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 73804d11ec49..e66aac7c13cb 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -731,7 +731,7 @@ int dxgvmb_send_flush_device(struct dxgdevice *device, + enum dxgdevice_flushschedulerreason reason) + { + int ret; +- struct dxgkvmb_command_flushdevice *command; ++ struct dxgkvmb_command_flushdevice *command = NULL; + struct dxgvmbusmsg msg = {.hdr = NULL}; + struct dxgprocess *process = device->process; + +@@ -745,6 +745,105 @@ int dxgvmb_send_flush_device(struct dxgdevice *device, + command->device = device->handle; + command->reason = reason; + ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) 
++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++struct d3dkmthandle ++dxgvmb_send_create_context(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmt_createcontextvirtual *args) ++{ ++ struct dxgkvmb_command_createcontextvirtual *command = NULL; ++ u32 cmd_size; ++ int ret; ++ struct d3dkmthandle context = {}; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ if (args->priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("PrivateDriverDataSize is invalid"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ cmd_size = sizeof(struct dxgkvmb_command_createcontextvirtual) + ++ args->priv_drv_data_size - 1; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CREATECONTEXTVIRTUAL, ++ process->host_handle); ++ command->device = args->device; ++ command->node_ordinal = args->node_ordinal; ++ command->engine_affinity = args->engine_affinity; ++ command->flags = args->flags; ++ command->client_hint = args->client_hint; ++ command->priv_drv_data_size = args->priv_drv_data_size; ++ if (args->priv_drv_data_size) { ++ ret = copy_from_user(command->priv_drv_data, ++ args->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("Faled to copy private data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ /* Input command is returned back as output */ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ command, cmd_size); ++ if (ret < 0) { ++ goto cleanup; ++ } else { ++ context = command->context; ++ if (args->priv_drv_data_size) { ++ ret = copy_to_user(args->priv_drv_data, ++ command->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret) { ++ dev_err(DXGDEV, ++ "Faled to copy private data to user"); ++ ret = -EINVAL; ++ dxgvmb_send_destroy_context(adapter, process, ++ context); ++ context.v = 0; ++ } ++ } ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return context; ++} ++ ++int dxgvmb_send_destroy_context(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmthandle h) ++{ ++ int ret; ++ struct dxgkvmb_command_destroycontext *command; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_DESTROYCONTEXT, ++ process->host_handle); ++ command->context = h; ++ + ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); + cleanup: + free_message(&msg, process); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 4ccf45765954..ebcb7b0f62c1 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -269,4 +269,22 @@ struct dxgkvmb_command_flushdevice { + enum dxgdevice_flushschedulerreason reason; + }; + ++struct dxgkvmb_command_createcontextvirtual { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle context; ++ struct d3dkmthandle device; ++ u32 node_ordinal; ++ u32 engine_affinity; ++ struct d3dddi_createcontextflags flags; ++ enum d3dkmt_clienthint client_hint; ++ u32 priv_drv_data_size; ++ u8 priv_drv_data[1]; ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_destroycontext { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle context; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 
405e8b92913e..5d10ebd2ce6a 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -550,13 +550,177 @@ dxgkio_destroy_device(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_create_context_virtual(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_createcontextvirtual args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgcontext *context = NULL; ++ struct d3dkmthandle host_context_handle = {}; ++ bool device_lock_acquired = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. ++ */ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) ++ goto cleanup; ++ ++ device_lock_acquired = true; ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ context = dxgcontext_create(device); ++ if (context == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ host_context_handle = dxgvmb_send_create_context(adapter, ++ process, &args); ++ if (host_context_handle.v) { ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, context, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ host_context_handle); ++ if (ret >= 0) ++ context->handle = host_context_handle; ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ if (ret < 0) ++ goto cleanup; ++ ret = copy_to_user(&((struct d3dkmt_createcontextvirtual *) ++ inargs)->context, &host_context_handle, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy context handle"); ++ ret = -EINVAL; ++ } ++ } else { ++ DXG_ERR("invalid host handle"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (host_context_handle.v) { ++ dxgvmb_send_destroy_context(adapter, process, ++ host_context_handle); ++ } ++ if (context) ++ dxgcontext_destroy_safe(process, context); ++ } ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) { ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_destroycontext args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgcontext *context = NULL; ++ struct dxgdevice *device = NULL; ++ struct d3dkmthandle device_handle = {}; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ context = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (context) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGCONTEXT, args.context); ++ context->handle.v = 0; ++ device_handle = context->device_handle; ++ context->object_state = DXGOBJECTSTATE_DESTROYED; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (context == NULL) { ++ 
DXG_ERR("invalid context handle: %x", args.context.v); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. ++ */ ++ device = dxgprocess_device_by_handle(process, device_handle); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_destroy_context(adapter, process, args.context); ++ ++ dxgcontext_destroy_safe(process, context); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, + /* 0x02 */ {dxgkio_create_device, LX_DXCREATEDEVICE}, + /* 0x03 */ {}, +-/* 0x04 */ {}, +-/* 0x05 */ {}, ++/* 0x04 */ {dxgkio_create_context_virtual, LX_DXCREATECONTEXTVIRTUAL}, ++/* 0x05 */ {dxgkio_destroy_context, LX_DXDESTROYCONTEXT}, + /* 0x06 */ {}, + /* 0x07 */ {}, + /* 0x08 */ {}, +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index e0bd33b365b0..3a9637f0b5e2 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -29,6 +29,7 @@ extern const struct d3dkmthandle zerohandle; + * fd_mutex + * plistmutex (process list mutex) + * table_lock (handle table lock) ++ * context_list_lock + * core_lock (dxgadapter lock) + * device_lock (dxgdevice lock) + * process_adapter_mutex +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 7414f0f5ce8e..4ba0070b061f 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -154,6 +154,49 @@ struct d3dkmt_destroydevice { + struct d3dkmthandle device; + }; + ++enum d3dkmt_clienthint { ++ _D3DKMT_CLIENTHNT_UNKNOWN = 0, ++ _D3DKMT_CLIENTHINT_OPENGL = 1, ++ _D3DKMT_CLIENTHINT_CDD = 2, ++ _D3DKMT_CLIENTHINT_DX7 = 7, ++ _D3DKMT_CLIENTHINT_DX8 = 8, ++ _D3DKMT_CLIENTHINT_DX9 = 9, ++ _D3DKMT_CLIENTHINT_DX10 = 10, ++}; ++ ++struct d3dddi_createcontextflags { ++ union { ++ struct { ++ __u32 null_rendering:1; ++ __u32 initial_data:1; ++ __u32 disable_gpu_timeout:1; ++ __u32 synchronization_only:1; ++ __u32 hw_queue_supported:1; ++ __u32 reserved:27; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_destroycontext { ++ struct d3dkmthandle context; ++}; ++ ++struct d3dkmt_createcontextvirtual { ++ struct d3dkmthandle device; ++ __u32 node_ordinal; ++ __u32 engine_affinity; ++ struct d3dddi_createcontextflags flags; ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ __u32 priv_drv_data_size; ++ enum d3dkmt_clienthint client_hint; ++ struct d3dkmthandle context; ++}; ++ + struct d3dkmt_adaptertype { + union { + struct { +@@ -232,6 +275,10 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x01, struct d3dkmt_openadapterfromluid) + #define LX_DXCREATEDEVICE \ + _IOWR(0x47, 0x02, struct d3dkmt_createdevice) ++#define LX_DXCREATECONTEXTVIRTUAL \ ++ _IOWR(0x47, 0x04, struct d3dkmt_createcontextvirtual) ++#define LX_DXDESTROYCONTEXT \ ++ _IOWR(0x47, 0x05, struct d3dkmt_destroycontext) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXENUMADAPTERS2 \ +-- +Armbian + diff --git 
a/patch/kernel/archive/wsl2-arm64-6.1/1676-drivers-hv-dxgkrnl-Creation-of-compute-device-allocations-and-resources.patch b/patch/kernel/archive/wsl2-arm64-6.1/1676-drivers-hv-dxgkrnl-Creation-of-compute-device-allocations-and-resources.patch new file mode 100644 index 000000000000..d4323904b8b4 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1676-drivers-hv-dxgkrnl-Creation-of-compute-device-allocations-and-resources.patch @@ -0,0 +1,2263 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 1 Feb 2022 15:37:52 -0800 +Subject: drivers: hv: dxgkrnl: Creation of compute device allocations and + resources + +Implemented ioctls to create and destroy virtual compute device +allocations (dxgallocation) and resources (dxgresource): + - the LX_DXCREATEALLOCATION ioctl, + - the LX_DXDESTROYALLOCATION2 ioctl. + +Compute device allocations (dxgallocation objects) represent memory +allocations, which can be accessed by the device. Allocations can +be created around existing system memory (provided by an application) +or memory allocated by dxgkrnl on the host. + +Compute device resources (dxgresource objects) represent containers of +compute device allocations. Allocations can be dynamically added to +and removed from a resource. + +Each allocation/resource has associated driver private data, which +is provided during creation. + +Each created resource or allocation has a handle (d3dkmthandle), +which is used to reference the corresponding object in other ioctls. + +A dxgallocation can be resident (meaning that it is accessible by +the compute device) or evicted. When an allocation is evicted, +its content is stored in the backing store in system memory. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 282 ++++ + drivers/hv/dxgkrnl/dxgkrnl.h | 113 ++ + drivers/hv/dxgkrnl/dxgmodule.c | 1 + + drivers/hv/dxgkrnl/dxgvmbus.c | 649 ++++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 123 ++ + drivers/hv/dxgkrnl/ioctl.c | 631 ++++++++- + drivers/hv/dxgkrnl/misc.h | 3 + + include/uapi/misc/d3dkmthk.h | 204 +++ + 8 files changed, 2004 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index cd103e092ac2..402caa81a5db 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -207,8 +207,11 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, + device->process = process; + kref_get(&adapter->adapter_kref); + INIT_LIST_HEAD(&device->context_list_head); ++ INIT_LIST_HEAD(&device->alloc_list_head); ++ INIT_LIST_HEAD(&device->resource_list_head); + init_rwsem(&device->device_lock); + init_rwsem(&device->context_list_lock); ++ init_rwsem(&device->alloc_list_lock); + INIT_LIST_HEAD(&device->pqueue_list_head); + device->object_state = DXGOBJECTSTATE_CREATED; + device->execution_state = _D3DKMT_DEVICEEXECUTION_ACTIVE; +@@ -224,6 +227,14 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, + + void dxgdevice_stop(struct dxgdevice *device) + { ++ struct dxgallocation *alloc; ++ ++ DXG_TRACE("Destroying device: %p", device); ++ dxgdevice_acquire_alloc_list_lock(device); ++ list_for_each_entry(alloc, &device->alloc_list_head, alloc_list_entry) { ++ dxgallocation_stop(alloc); ++ } ++ dxgdevice_release_alloc_list_lock(device); + } + + void dxgdevice_mark_destroyed(struct dxgdevice *device) +@@ -250,6 +261,33 @@ void dxgdevice_destroy(struct dxgdevice *device) + + 
dxgdevice_stop(device); + ++ dxgdevice_acquire_alloc_list_lock(device); ++ ++ { ++ struct dxgallocation *alloc; ++ struct dxgallocation *tmp; ++ ++ DXG_TRACE("destroying allocations"); ++ list_for_each_entry_safe(alloc, tmp, &device->alloc_list_head, ++ alloc_list_entry) { ++ dxgallocation_destroy(alloc); ++ } ++ } ++ ++ { ++ struct dxgresource *resource; ++ struct dxgresource *tmp; ++ ++ DXG_TRACE("destroying resources"); ++ list_for_each_entry_safe(resource, tmp, ++ &device->resource_list_head, ++ resource_list_entry) { ++ dxgresource_destroy(resource); ++ } ++ } ++ ++ dxgdevice_release_alloc_list_lock(device); ++ + { + struct dxgcontext *context; + struct dxgcontext *tmp; +@@ -328,6 +366,26 @@ void dxgdevice_release_context_list_lock(struct dxgdevice *device) + up_write(&device->context_list_lock); + } + ++void dxgdevice_acquire_alloc_list_lock(struct dxgdevice *device) ++{ ++ down_write(&device->alloc_list_lock); ++} ++ ++void dxgdevice_release_alloc_list_lock(struct dxgdevice *device) ++{ ++ up_write(&device->alloc_list_lock); ++} ++ ++void dxgdevice_acquire_alloc_list_lock_shared(struct dxgdevice *device) ++{ ++ down_read(&device->alloc_list_lock); ++} ++ ++void dxgdevice_release_alloc_list_lock_shared(struct dxgdevice *device) ++{ ++ up_read(&device->alloc_list_lock); ++} ++ + void dxgdevice_add_context(struct dxgdevice *device, struct dxgcontext *context) + { + down_write(&device->context_list_lock); +@@ -344,6 +402,161 @@ void dxgdevice_remove_context(struct dxgdevice *device, + } + } + ++void dxgdevice_add_alloc(struct dxgdevice *device, struct dxgallocation *alloc) ++{ ++ dxgdevice_acquire_alloc_list_lock(device); ++ list_add_tail(&alloc->alloc_list_entry, &device->alloc_list_head); ++ kref_get(&device->device_kref); ++ alloc->owner.device = device; ++ dxgdevice_release_alloc_list_lock(device); ++} ++ ++void dxgdevice_remove_alloc(struct dxgdevice *device, ++ struct dxgallocation *alloc) ++{ ++ if (alloc->alloc_list_entry.next) { ++ list_del(&alloc->alloc_list_entry); ++ alloc->alloc_list_entry.next = NULL; ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++} ++ ++void dxgdevice_remove_alloc_safe(struct dxgdevice *device, ++ struct dxgallocation *alloc) ++{ ++ dxgdevice_acquire_alloc_list_lock(device); ++ dxgdevice_remove_alloc(device, alloc); ++ dxgdevice_release_alloc_list_lock(device); ++} ++ ++void dxgdevice_add_resource(struct dxgdevice *device, struct dxgresource *res) ++{ ++ dxgdevice_acquire_alloc_list_lock(device); ++ list_add_tail(&res->resource_list_entry, &device->resource_list_head); ++ kref_get(&device->device_kref); ++ dxgdevice_release_alloc_list_lock(device); ++} ++ ++void dxgdevice_remove_resource(struct dxgdevice *device, ++ struct dxgresource *res) ++{ ++ if (res->resource_list_entry.next) { ++ list_del(&res->resource_list_entry); ++ res->resource_list_entry.next = NULL; ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++} ++ ++struct dxgresource *dxgresource_create(struct dxgdevice *device) ++{ ++ struct dxgresource *resource; ++ ++ resource = kzalloc(sizeof(struct dxgresource), GFP_KERNEL); ++ if (resource) { ++ kref_init(&resource->resource_kref); ++ resource->device = device; ++ resource->process = device->process; ++ resource->object_state = DXGOBJECTSTATE_ACTIVE; ++ mutex_init(&resource->resource_mutex); ++ INIT_LIST_HEAD(&resource->alloc_list_head); ++ dxgdevice_add_resource(device, resource); ++ } ++ return resource; ++} ++ ++void dxgresource_free_handle(struct dxgresource *resource) ++{ ++ struct dxgallocation *alloc; ++ struct 
dxgprocess *process; ++ ++ if (resource->handle_valid) { ++ process = resource->device->process; ++ hmgrtable_free_handle_safe(&process->handle_table, ++ HMGRENTRY_TYPE_DXGRESOURCE, ++ resource->handle); ++ resource->handle_valid = 0; ++ } ++ list_for_each_entry(alloc, &resource->alloc_list_head, ++ alloc_list_entry) { ++ dxgallocation_free_handle(alloc); ++ } ++} ++ ++void dxgresource_destroy(struct dxgresource *resource) ++{ ++ /* device->alloc_list_lock is held */ ++ struct dxgallocation *alloc; ++ struct dxgallocation *tmp; ++ struct d3dkmt_destroyallocation2 args = { }; ++ int destroyed = test_and_set_bit(0, &resource->flags); ++ struct dxgdevice *device = resource->device; ++ ++ if (!destroyed) { ++ dxgresource_free_handle(resource); ++ if (resource->handle.v) { ++ args.device = device->handle; ++ args.resource = resource->handle; ++ dxgvmb_send_destroy_allocation(device->process, ++ device, &args, NULL); ++ resource->handle.v = 0; ++ } ++ list_for_each_entry_safe(alloc, tmp, &resource->alloc_list_head, ++ alloc_list_entry) { ++ dxgallocation_destroy(alloc); ++ } ++ dxgdevice_remove_resource(device, resource); ++ } ++ kref_put(&resource->resource_kref, dxgresource_release); ++} ++ ++void dxgresource_release(struct kref *refcount) ++{ ++ struct dxgresource *resource; ++ ++ resource = container_of(refcount, struct dxgresource, resource_kref); ++ kfree(resource); ++} ++ ++bool dxgresource_is_active(struct dxgresource *resource) ++{ ++ return resource->object_state == DXGOBJECTSTATE_ACTIVE; ++} ++ ++int dxgresource_add_alloc(struct dxgresource *resource, ++ struct dxgallocation *alloc) ++{ ++ int ret = -ENODEV; ++ struct dxgdevice *device = resource->device; ++ ++ dxgdevice_acquire_alloc_list_lock(device); ++ if (dxgresource_is_active(resource)) { ++ list_add_tail(&alloc->alloc_list_entry, ++ &resource->alloc_list_head); ++ alloc->owner.resource = resource; ++ ret = 0; ++ } ++ alloc->resource_owner = 1; ++ dxgdevice_release_alloc_list_lock(device); ++ return ret; ++} ++ ++void dxgresource_remove_alloc(struct dxgresource *resource, ++ struct dxgallocation *alloc) ++{ ++ if (alloc->alloc_list_entry.next) { ++ list_del(&alloc->alloc_list_entry); ++ alloc->alloc_list_entry.next = NULL; ++ } ++} ++ ++void dxgresource_remove_alloc_safe(struct dxgresource *resource, ++ struct dxgallocation *alloc) ++{ ++ dxgdevice_acquire_alloc_list_lock(resource->device); ++ dxgresource_remove_alloc(resource, alloc); ++ dxgdevice_release_alloc_list_lock(resource->device); ++} ++ + void dxgdevice_release(struct kref *refcount) + { + struct dxgdevice *device; +@@ -413,6 +626,75 @@ void dxgcontext_release(struct kref *refcount) + kfree(context); + } + ++struct dxgallocation *dxgallocation_create(struct dxgprocess *process) ++{ ++ struct dxgallocation *alloc; ++ ++ alloc = kzalloc(sizeof(struct dxgallocation), GFP_KERNEL); ++ if (alloc) ++ alloc->process = process; ++ return alloc; ++} ++ ++void dxgallocation_stop(struct dxgallocation *alloc) ++{ ++ if (alloc->pages) { ++ release_pages(alloc->pages, alloc->num_pages); ++ vfree(alloc->pages); ++ alloc->pages = NULL; ++ } ++} ++ ++void dxgallocation_free_handle(struct dxgallocation *alloc) ++{ ++ dxgprocess_ht_lock_exclusive_down(alloc->process); ++ if (alloc->handle_valid) { ++ hmgrtable_free_handle(&alloc->process->handle_table, ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ alloc->alloc_handle); ++ alloc->handle_valid = 0; ++ } ++ dxgprocess_ht_lock_exclusive_up(alloc->process); ++} ++ ++void dxgallocation_destroy(struct dxgallocation *alloc) ++{ ++ struct dxgprocess 
*process = alloc->process; ++ struct d3dkmt_destroyallocation2 args = { }; ++ ++ dxgallocation_stop(alloc); ++ if (alloc->resource_owner) ++ dxgresource_remove_alloc(alloc->owner.resource, alloc); ++ else if (alloc->owner.device) ++ dxgdevice_remove_alloc(alloc->owner.device, alloc); ++ dxgallocation_free_handle(alloc); ++ if (alloc->alloc_handle.v && !alloc->resource_owner) { ++ args.device = alloc->owner.device->handle; ++ args.alloc_count = 1; ++ dxgvmb_send_destroy_allocation(process, ++ alloc->owner.device, ++ &args, &alloc->alloc_handle); ++ } ++#ifdef _MAIN_KERNEL_ ++ if (alloc->gpadl.gpadl_handle) { ++ DXG_TRACE("Teardown gpadl %d", ++ alloc->gpadl.gpadl_handle); ++ vmbus_teardown_gpadl(dxgglobal_get_vmbus(), &alloc->gpadl); ++ alloc->gpadl.gpadl_handle = 0; ++ } ++else ++ if (alloc->gpadl) { ++ DXG_TRACE("Teardown gpadl %d", ++ alloc->gpadl); ++ vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl); ++ alloc->gpadl = 0; ++ } ++#endif ++ if (alloc->priv_drv_data) ++ vfree(alloc->priv_drv_data); ++ kfree(alloc); ++} ++ + struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + struct dxgadapter *adapter) + { +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index a3d8d3c9f37d..fa053fb6ac9c 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -36,6 +36,8 @@ struct dxgprocess; + struct dxgadapter; + struct dxgdevice; + struct dxgcontext; ++struct dxgallocation; ++struct dxgresource; + + /* + * Driver private data. +@@ -269,6 +271,8 @@ struct dxgadapter { + struct list_head adapter_list_entry; + /* The list of dxgprocess_adapter entries */ + struct list_head adapter_process_list_head; ++ /* This lock protects shared resource and syncobject lists */ ++ struct rw_semaphore shared_resource_list_lock; + struct pci_dev *pci_dev; + struct hv_device *hv_dev; + struct dxgvmbuschannel channel; +@@ -315,6 +319,10 @@ struct dxgdevice { + struct rw_semaphore device_lock; + struct rw_semaphore context_list_lock; + struct list_head context_list_head; ++ /* List of device allocations */ ++ struct rw_semaphore alloc_list_lock; ++ struct list_head alloc_list_head; ++ struct list_head resource_list_head; + /* List of paging queues. Protected by process handle table lock. 
*/ + struct list_head pqueue_list_head; + struct d3dkmthandle handle; +@@ -331,9 +339,19 @@ void dxgdevice_release_lock_shared(struct dxgdevice *dev); + void dxgdevice_release(struct kref *refcount); + void dxgdevice_add_context(struct dxgdevice *dev, struct dxgcontext *ctx); + void dxgdevice_remove_context(struct dxgdevice *dev, struct dxgcontext *ctx); ++void dxgdevice_add_alloc(struct dxgdevice *dev, struct dxgallocation *a); ++void dxgdevice_remove_alloc(struct dxgdevice *dev, struct dxgallocation *a); ++void dxgdevice_remove_alloc_safe(struct dxgdevice *dev, ++ struct dxgallocation *a); ++void dxgdevice_add_resource(struct dxgdevice *dev, struct dxgresource *res); ++void dxgdevice_remove_resource(struct dxgdevice *dev, struct dxgresource *res); + bool dxgdevice_is_active(struct dxgdevice *dev); + void dxgdevice_acquire_context_list_lock(struct dxgdevice *dev); + void dxgdevice_release_context_list_lock(struct dxgdevice *dev); ++void dxgdevice_acquire_alloc_list_lock(struct dxgdevice *dev); ++void dxgdevice_release_alloc_list_lock(struct dxgdevice *dev); ++void dxgdevice_acquire_alloc_list_lock_shared(struct dxgdevice *dev); ++void dxgdevice_release_alloc_list_lock_shared(struct dxgdevice *dev); + + /* + * The object represent the execution context of a device. +@@ -357,6 +375,83 @@ void dxgcontext_destroy_safe(struct dxgprocess *pr, struct dxgcontext *ctx); + void dxgcontext_release(struct kref *refcount); + bool dxgcontext_is_active(struct dxgcontext *ctx); + ++struct dxgresource { ++ struct kref resource_kref; ++ enum dxgobjectstate object_state; ++ struct d3dkmthandle handle; ++ struct list_head alloc_list_head; ++ struct list_head resource_list_entry; ++ struct list_head shared_resource_list_entry; ++ struct dxgdevice *device; ++ struct dxgprocess *process; ++ /* Protects adding allocations to resource and resource destruction */ ++ struct mutex resource_mutex; ++ u64 private_runtime_handle; ++ union { ++ struct { ++ u32 destroyed:1; /* Must be the first */ ++ u32 handle_valid:1; ++ u32 reserved:30; ++ }; ++ long flags; ++ }; ++}; ++ ++struct dxgresource *dxgresource_create(struct dxgdevice *dev); ++void dxgresource_destroy(struct dxgresource *res); ++void dxgresource_free_handle(struct dxgresource *res); ++void dxgresource_release(struct kref *refcount); ++int dxgresource_add_alloc(struct dxgresource *res, ++ struct dxgallocation *a); ++void dxgresource_remove_alloc(struct dxgresource *res, struct dxgallocation *a); ++void dxgresource_remove_alloc_safe(struct dxgresource *res, ++ struct dxgallocation *a); ++bool dxgresource_is_active(struct dxgresource *res); ++ ++struct privdata { ++ u32 data_size; ++ u8 data[1]; ++}; ++ ++struct dxgallocation { ++ /* Entry in the device list or resource list (when resource exists) */ ++ struct list_head alloc_list_entry; ++ /* Allocation owner */ ++ union { ++ struct dxgdevice *device; ++ struct dxgresource *resource; ++ } owner; ++ struct dxgprocess *process; ++ /* Pointer to private driver data desc. Used for shared resources */ ++ struct privdata *priv_drv_data; ++ struct d3dkmthandle alloc_handle; ++ /* Set to 1 when allocation belongs to resource. 
*/ ++ u32 resource_owner:1; ++ /* Set to 1 when the allocatio is mapped as cached */ ++ u32 cached:1; ++ u32 handle_valid:1; ++ /* GPADL address list for existing sysmem allocations */ ++#ifdef _MAIN_KERNEL_ ++ struct vmbus_gpadl gpadl; ++#else ++ u32 gpadl; ++#endif ++ /* Number of pages in the 'pages' array */ ++ u32 num_pages; ++ /* ++ * CPU address from the existing sysmem allocation, or ++ * mapped to the CPU visible backing store in the IO space ++ */ ++ void *cpu_address; ++ /* Describes pages for the existing sysmem allocation */ ++ struct page **pages; ++}; ++ ++struct dxgallocation *dxgallocation_create(struct dxgprocess *process); ++void dxgallocation_stop(struct dxgallocation *a); ++void dxgallocation_destroy(struct dxgallocation *a); ++void dxgallocation_free_handle(struct dxgallocation *a); ++ + long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2); + long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2); + +@@ -409,9 +504,27 @@ dxgvmb_send_create_context(struct dxgadapter *adapter, + int dxgvmb_send_destroy_context(struct dxgadapter *adapter, + struct dxgprocess *process, + struct d3dkmthandle h); ++int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev, ++ struct d3dkmt_createallocation *args, ++ struct d3dkmt_createallocation *__user inargs, ++ struct dxgresource *res, ++ struct dxgallocation **allocs, ++ struct d3dddi_allocationinfo2 *alloc_info, ++ struct d3dkmt_createstandardallocation *stda); ++int dxgvmb_send_destroy_allocation(struct dxgprocess *pr, struct dxgdevice *dev, ++ struct d3dkmt_destroyallocation2 *args, ++ struct d3dkmthandle *alloc_handles); + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); ++int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, ++ enum d3dkmdt_standardallocationtype t, ++ struct d3dkmdt_gdisurfacedata *data, ++ u32 physical_adapter_index, ++ u32 *alloc_priv_driver_size, ++ void *prive_alloc_data, ++ u32 *res_priv_data_size, ++ void *priv_res_data); + int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel, + void *command, + u32 cmd_size); +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index fbe1c58ecb46..053ce6f3e083 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -162,6 +162,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, + init_rwsem(&adapter->core_lock); + + INIT_LIST_HEAD(&adapter->adapter_process_list_head); ++ init_rwsem(&adapter->shared_resource_list_lock); + adapter->pci_dev = dev; + guid_to_luid(guid, &adapter->luid); + +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index e66aac7c13cb..14b51a3c6afc 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -111,6 +111,41 @@ static int init_message(struct dxgvmbusmsg *msg, struct dxgadapter *adapter, + return 0; + } + ++static int init_message_res(struct dxgvmbusmsgres *msg, ++ struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ u32 size, ++ u32 result_size) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ bool use_ext_header = dxgglobal->vmbus_ver >= ++ DXGK_VMBUS_INTERFACE_VERSION; ++ ++ if (use_ext_header) ++ size += sizeof(struct dxgvmb_ext_header); ++ msg->size = size; ++ msg->res_size += (result_size + 7) & ~7; ++ size += msg->res_size; ++ msg->hdr = vzalloc(size); ++ if (msg->hdr == NULL) { ++ DXG_ERR("Failed to allocate VM bus message: %d", size); ++ return 
-ENOMEM; ++ } ++ if (use_ext_header) { ++ msg->msg = (char *)&msg->hdr[1]; ++ msg->hdr->command_offset = sizeof(msg->hdr[0]); ++ msg->hdr->vgpu_luid = adapter->host_vgpu_luid; ++ } else { ++ msg->msg = (char *)msg->hdr; ++ } ++ msg->res = (char *)msg->hdr + msg->size; ++ if (dxgglobal->async_msg_enabled) ++ msg->channel = &dxgglobal->channel; ++ else ++ msg->channel = &adapter->channel; ++ return 0; ++} ++ + static void free_message(struct dxgvmbusmsg *msg, struct dxgprocess *process) + { + if (msg->hdr && (char *)msg->hdr != msg->msg_on_stack) +@@ -852,6 +887,620 @@ int dxgvmb_send_destroy_context(struct dxgadapter *adapter, + return ret; + } + ++static int ++copy_private_data(struct d3dkmt_createallocation *args, ++ struct dxgkvmb_command_createallocation *command, ++ struct d3dddi_allocationinfo2 *input_alloc_info, ++ struct d3dkmt_createstandardallocation *standard_alloc) ++{ ++ struct dxgkvmb_command_createallocation_allocinfo *alloc_info; ++ struct d3dddi_allocationinfo2 *input_alloc; ++ int ret = 0; ++ int i; ++ u8 *private_data_dest = (u8 *) &command[1] + ++ (args->alloc_count * ++ sizeof(struct dxgkvmb_command_createallocation_allocinfo)); ++ ++ if (args->private_runtime_data_size) { ++ ret = copy_from_user(private_data_dest, ++ args->private_runtime_data, ++ args->private_runtime_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy runtime data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ private_data_dest += args->private_runtime_data_size; ++ } ++ ++ if (args->flags.standard_allocation) { ++ DXG_TRACE("private data offset %d", ++ (u32) (private_data_dest - (u8 *) command)); ++ ++ args->priv_drv_data_size = sizeof(*args->standard_allocation); ++ memcpy(private_data_dest, standard_alloc, ++ sizeof(*standard_alloc)); ++ private_data_dest += args->priv_drv_data_size; ++ } else if (args->priv_drv_data_size) { ++ ret = copy_from_user(private_data_dest, ++ args->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy private data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ private_data_dest += args->priv_drv_data_size; ++ } ++ ++ alloc_info = (void *)&command[1]; ++ input_alloc = input_alloc_info; ++ if (input_alloc_info[0].sysmem) ++ command->flags.existing_sysmem = 1; ++ for (i = 0; i < args->alloc_count; i++) { ++ alloc_info->flags = input_alloc->flags.value; ++ alloc_info->vidpn_source_id = input_alloc->vidpn_source_id; ++ alloc_info->priv_drv_data_size = ++ input_alloc->priv_drv_data_size; ++ if (input_alloc->priv_drv_data_size) { ++ ret = copy_from_user(private_data_dest, ++ input_alloc->priv_drv_data, ++ input_alloc->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ private_data_dest += input_alloc->priv_drv_data_size; ++ } ++ alloc_info++; ++ input_alloc++; ++ } ++ ++cleanup: ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++static ++int create_existing_sysmem(struct dxgdevice *device, ++ struct dxgkvmb_command_allocinfo_return *host_alloc, ++ struct dxgallocation *dxgalloc, ++ bool read_only, ++ const void *sysmem) ++{ ++ int ret1 = 0; ++ void *kmem = NULL; ++ int ret = 0; ++ struct dxgkvmb_command_setexistingsysmemstore *set_store_command; ++ u64 alloc_size = host_alloc->allocation_size; ++ u32 npages = alloc_size >> PAGE_SHIFT; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, device->adapter, device->process, ++ sizeof(*set_store_command)); ++ if (ret) ++ goto cleanup; ++ set_store_command = (void *)msg.msg; ++ ++ /* ++ * Create a 
guest physical address list and set it as the allocation ++ * backing store in the host. This is done after creating the host ++ * allocation, because only now the allocation size is known. ++ */ ++ ++ DXG_TRACE("Alloc size: %lld", alloc_size); ++ ++ dxgalloc->cpu_address = (void *)sysmem; ++ dxgalloc->pages = vzalloc(npages * sizeof(void *)); ++ if (dxgalloc->pages == NULL) { ++ DXG_ERR("failed to allocate pages"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret1 = get_user_pages_fast((unsigned long)sysmem, npages, !read_only, ++ dxgalloc->pages); ++ if (ret1 != npages) { ++ DXG_ERR("get_user_pages_fast failed: %d", ret1); ++ if (ret1 > 0 && ret1 < npages) ++ release_pages(dxgalloc->pages, ret1); ++ vfree(dxgalloc->pages); ++ dxgalloc->pages = NULL; ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ kmem = vmap(dxgalloc->pages, npages, VM_MAP, PAGE_KERNEL); ++ if (kmem == NULL) { ++ DXG_ERR("vmap failed"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret1 = vmbus_establish_gpadl(dxgglobal_get_vmbus(), kmem, ++ alloc_size, &dxgalloc->gpadl); ++ if (ret1) { ++ DXG_ERR("establish_gpadl failed: %d", ret1); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle); ++ ++ command_vgpu_to_host_init2(&set_store_command->hdr, ++ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE, ++ device->process->host_handle); ++ set_store_command->device = device->handle; ++ set_store_command->device = device->handle; ++ set_store_command->allocation = host_alloc->allocation; ++#ifdef _MAIN_KERNEL_ ++ set_store_command->gpadl = dxgalloc->gpadl.gpadl_handle; ++#else ++ set_store_command->gpadl = dxgalloc->gpadl; ++#endif ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ if (ret < 0) ++ DXG_ERR("failed to set existing store: %x", ret); ++ ++cleanup: ++ if (kmem) ++ vunmap(kmem); ++ free_message(&msg, device->process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++static int ++process_allocation_handles(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct d3dkmt_createallocation *args, ++ struct dxgkvmb_command_createallocation_return *res, ++ struct dxgallocation **dxgalloc, ++ struct dxgresource *resource) ++{ ++ int ret = 0; ++ int i; ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ if (args->flags.create_resource) { ++ ret = hmgrtable_assign_handle(&process->handle_table, resource, ++ HMGRENTRY_TYPE_DXGRESOURCE, ++ res->resource); ++ if (ret < 0) { ++ DXG_ERR("failed to assign resource handle %x", ++ res->resource.v); ++ } else { ++ resource->handle = res->resource; ++ resource->handle_valid = 1; ++ } ++ } ++ for (i = 0; i < args->alloc_count; i++) { ++ struct dxgkvmb_command_allocinfo_return *host_alloc; ++ ++ host_alloc = &res->allocation_info[i]; ++ ret = hmgrtable_assign_handle(&process->handle_table, ++ dxgalloc[i], ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ host_alloc->allocation); ++ if (ret < 0) { ++ DXG_ERR("failed assign alloc handle %x %d %d", ++ host_alloc->allocation.v, ++ args->alloc_count, i); ++ break; ++ } ++ dxgalloc[i]->alloc_handle = host_alloc->allocation; ++ dxgalloc[i]->handle_valid = 1; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++static int ++create_local_allocations(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct d3dkmt_createallocation *args, ++ struct d3dkmt_createallocation *__user input_args, ++ struct d3dddi_allocationinfo2 *alloc_info, ++ struct dxgkvmb_command_createallocation_return 
*result, ++ struct dxgresource *resource, ++ struct dxgallocation **dxgalloc, ++ u32 destroy_buffer_size) ++{ ++ int i; ++ int alloc_count = args->alloc_count; ++ u8 *alloc_private_data = NULL; ++ int ret = 0; ++ int ret1; ++ struct dxgkvmb_command_destroyallocation *destroy_buf; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, device->adapter, process, ++ destroy_buffer_size); ++ if (ret) ++ goto cleanup; ++ destroy_buf = (void *)msg.msg; ++ ++ /* Prepare the command to destroy allocation in case of failure */ ++ command_vgpu_to_host_init2(&destroy_buf->hdr, ++ DXGK_VMBCOMMAND_DESTROYALLOCATION, ++ process->host_handle); ++ destroy_buf->device = args->device; ++ destroy_buf->resource = args->resource; ++ destroy_buf->alloc_count = alloc_count; ++ destroy_buf->flags.assume_not_in_use = 1; ++ for (i = 0; i < alloc_count; i++) { ++ DXG_TRACE("host allocation: %d %x", ++ i, result->allocation_info[i].allocation.v); ++ destroy_buf->allocations[i] = ++ result->allocation_info[i].allocation; ++ } ++ ++ if (args->flags.create_resource) { ++ DXG_TRACE("new resource: %x", result->resource.v); ++ ret = copy_to_user(&input_args->resource, &result->resource, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy resource handle"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ alloc_private_data = (u8 *) result + ++ sizeof(struct dxgkvmb_command_createallocation_return) + ++ sizeof(struct dxgkvmb_command_allocinfo_return) * (alloc_count - 1); ++ ++ for (i = 0; i < alloc_count; i++) { ++ struct dxgkvmb_command_allocinfo_return *host_alloc; ++ struct d3dddi_allocationinfo2 *user_alloc; ++ ++ host_alloc = &result->allocation_info[i]; ++ user_alloc = &alloc_info[i]; ++ dxgalloc[i]->num_pages = ++ host_alloc->allocation_size >> PAGE_SHIFT; ++ if (user_alloc->sysmem) { ++ ret = create_existing_sysmem(device, host_alloc, ++ dxgalloc[i], ++ args->flags.read_only != 0, ++ user_alloc->sysmem); ++ if (ret < 0) ++ goto cleanup; ++ } ++ dxgalloc[i]->cached = host_alloc->allocation_flags.cached; ++ if (host_alloc->priv_drv_data_size) { ++ ret = copy_to_user(user_alloc->priv_drv_data, ++ alloc_private_data, ++ host_alloc->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy private data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ alloc_private_data += host_alloc->priv_drv_data_size; ++ } ++ ret = copy_to_user(&args->allocation_info[i].allocation, ++ &host_alloc->allocation, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy alloc handle"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ ret = process_allocation_handles(process, device, args, result, ++ dxgalloc, resource); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(&input_args->global_share, &args->global_share, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy global share"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ /* Free local handles before freeing the handles in the host */ ++ dxgdevice_acquire_alloc_list_lock(device); ++ if (dxgalloc) ++ for (i = 0; i < alloc_count; i++) ++ if (dxgalloc[i]) ++ dxgallocation_free_handle(dxgalloc[i]); ++ if (resource && args->flags.create_resource) ++ dxgresource_free_handle(resource); ++ dxgdevice_release_alloc_list_lock(device); ++ ++ /* Destroy allocations in the host to unmap gpadls */ ++ ret1 = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, ++ msg.size); ++ if (ret1 < 0) ++ DXG_ERR("failed to destroy allocations: %x", ++ ret1); ++ ++ dxgdevice_acquire_alloc_list_lock(device); 
++ if (dxgalloc) { ++ for (i = 0; i < alloc_count; i++) { ++ if (dxgalloc[i]) { ++ dxgalloc[i]->alloc_handle.v = 0; ++ dxgallocation_destroy(dxgalloc[i]); ++ dxgalloc[i] = NULL; ++ } ++ } ++ } ++ if (resource && args->flags.create_resource) { ++ /* ++ * Prevent the resource memory from freeing. ++ * It will be freed in the top level function. ++ */ ++ kref_get(&resource->resource_kref); ++ dxgresource_destroy(resource); ++ } ++ dxgdevice_release_alloc_list_lock(device); ++ } ++ ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_create_allocation(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct d3dkmt_createallocation *args, ++ struct d3dkmt_createallocation *__user ++ input_args, ++ struct dxgresource *resource, ++ struct dxgallocation **dxgalloc, ++ struct d3dddi_allocationinfo2 *alloc_info, ++ struct d3dkmt_createstandardallocation ++ *standard_alloc) ++{ ++ struct dxgkvmb_command_createallocation *command = NULL; ++ struct dxgkvmb_command_createallocation_return *result = NULL; ++ int ret = -EINVAL; ++ int i; ++ u32 result_size = 0; ++ u32 cmd_size = 0; ++ u32 destroy_buffer_size = 0; ++ u32 priv_drv_data_size; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ if (args->private_runtime_data_size >= DXG_MAX_VM_BUS_PACKET_SIZE || ++ args->priv_drv_data_size >= DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EOVERFLOW; ++ goto cleanup; ++ } ++ ++ /* ++ * Preallocate the buffer, which will be used for destruction in case ++ * of a failure ++ */ ++ destroy_buffer_size = sizeof(struct dxgkvmb_command_destroyallocation) + ++ args->alloc_count * sizeof(struct d3dkmthandle); ++ ++ /* Compute the total private driver size */ ++ ++ priv_drv_data_size = 0; ++ ++ for (i = 0; i < args->alloc_count; i++) { ++ if (alloc_info[i].priv_drv_data_size >= ++ DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EOVERFLOW; ++ goto cleanup; ++ } else { ++ priv_drv_data_size += alloc_info[i].priv_drv_data_size; ++ } ++ if (priv_drv_data_size >= DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EOVERFLOW; ++ goto cleanup; ++ } ++ } ++ ++ /* ++ * Private driver data for the result includes only per allocation ++ * private data ++ */ ++ result_size = sizeof(struct dxgkvmb_command_createallocation_return) + ++ (args->alloc_count - 1) * ++ sizeof(struct dxgkvmb_command_allocinfo_return) + ++ priv_drv_data_size; ++ result = vzalloc(result_size); ++ if (result == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ /* Private drv data for the command includes the global private data */ ++ priv_drv_data_size += args->priv_drv_data_size; ++ ++ cmd_size = sizeof(struct dxgkvmb_command_createallocation) + ++ args->alloc_count * ++ sizeof(struct dxgkvmb_command_createallocation_allocinfo) + ++ args->private_runtime_data_size + priv_drv_data_size; ++ if (cmd_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EOVERFLOW; ++ goto cleanup; ++ } ++ ++ DXG_TRACE("command size, driver_data_size %d %d %ld %ld", ++ cmd_size, priv_drv_data_size, ++ sizeof(struct dxgkvmb_command_createallocation), ++ sizeof(struct dxgkvmb_command_createallocation_allocinfo)); ++ ++ ret = init_message(&msg, device->adapter, process, ++ cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CREATEALLOCATION, ++ process->host_handle); ++ command->device = args->device; ++ command->flags = args->flags; ++ command->resource = args->resource; ++ command->private_runtime_resource_handle = ++ args->private_runtime_resource_handle; ++ 
command->alloc_count = args->alloc_count; ++ command->private_runtime_data_size = args->private_runtime_data_size; ++ command->priv_drv_data_size = args->priv_drv_data_size; ++ ++ ret = copy_private_data(args, command, alloc_info, standard_alloc); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ result, result_size); ++ if (ret < 0) { ++ DXG_ERR("send_create_allocation failed %x", ret); ++ goto cleanup; ++ } ++ ++ ret = create_local_allocations(process, device, args, input_args, ++ alloc_info, result, resource, dxgalloc, ++ destroy_buffer_size); ++cleanup: ++ ++ if (result) ++ vfree(result); ++ free_message(&msg, process); ++ ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_destroy_allocation(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct d3dkmt_destroyallocation2 *args, ++ struct d3dkmthandle *alloc_handles) ++{ ++ struct dxgkvmb_command_destroyallocation *destroy_buffer; ++ u32 destroy_buffer_size; ++ int ret; ++ int allocations_size = args->alloc_count * sizeof(struct d3dkmthandle); ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ destroy_buffer_size = sizeof(struct dxgkvmb_command_destroyallocation) + ++ allocations_size; ++ ++ ret = init_message(&msg, device->adapter, process, ++ destroy_buffer_size); ++ if (ret) ++ goto cleanup; ++ destroy_buffer = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&destroy_buffer->hdr, ++ DXGK_VMBCOMMAND_DESTROYALLOCATION, ++ process->host_handle); ++ destroy_buffer->device = args->device; ++ destroy_buffer->resource = args->resource; ++ destroy_buffer->alloc_count = args->alloc_count; ++ destroy_buffer->flags = args->flags; ++ if (allocations_size) ++ memcpy(destroy_buffer->allocations, alloc_handles, ++ allocations_size); ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, ++ enum d3dkmdt_standardallocationtype alloctype, ++ struct d3dkmdt_gdisurfacedata *alloc_data, ++ u32 physical_adapter_index, ++ u32 *alloc_priv_driver_size, ++ void *priv_alloc_data, ++ u32 *res_priv_data_size, ++ void *priv_res_data) ++{ ++ struct dxgkvmb_command_getstandardallocprivdata *command; ++ struct dxgkvmb_command_getstandardallocprivdata_return *result = NULL; ++ u32 result_size = sizeof(*result); ++ int ret; ++ struct dxgvmbusmsgres msg = {.hdr = NULL}; ++ ++ if (priv_alloc_data) ++ result_size += *alloc_priv_driver_size; ++ if (priv_res_data) ++ result_size += *res_priv_data_size; ++ ret = init_message_res(&msg, device->adapter, device->process, ++ sizeof(*command), result_size); ++ if (ret) ++ goto cleanup; ++ command = msg.msg; ++ result = msg.res; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_DDIGETSTANDARDALLOCATIONDRIVERDATA, ++ device->process->host_handle); ++ ++ command->alloc_type = alloctype; ++ command->priv_driver_data_size = *alloc_priv_driver_size; ++ command->physical_adapter_index = physical_adapter_index; ++ command->priv_driver_resource_size = *res_priv_data_size; ++ switch (alloctype) { ++ case _D3DKMDT_STANDARDALLOCATION_GDISURFACE: ++ command->gdi_surface = *alloc_data; ++ break; ++ case _D3DKMDT_STANDARDALLOCATION_SHAREDPRIMARYSURFACE: ++ case _D3DKMDT_STANDARDALLOCATION_SHADOWSURFACE: ++ case _D3DKMDT_STANDARDALLOCATION_STAGINGSURFACE: ++ default: ++ DXG_ERR("Invalid standard alloc type"); ++ goto cleanup; ++ } ++ ++ ret = 
dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ result, msg.res_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result->status); ++ if (ret < 0) ++ goto cleanup; ++ ++ if (*alloc_priv_driver_size && ++ result->priv_driver_data_size != *alloc_priv_driver_size) { ++ DXG_ERR("Priv data size mismatch"); ++ goto cleanup; ++ } ++ if (*res_priv_data_size && ++ result->priv_driver_resource_size != *res_priv_data_size) { ++ DXG_ERR("Resource priv data size mismatch"); ++ goto cleanup; ++ } ++ *alloc_priv_driver_size = result->priv_driver_data_size; ++ *res_priv_data_size = result->priv_driver_resource_size; ++ if (priv_alloc_data) { ++ memcpy(priv_alloc_data, &result[1], ++ result->priv_driver_data_size); ++ } ++ if (priv_res_data) { ++ memcpy(priv_res_data, ++ (char *)(&result[1]) + result->priv_driver_data_size, ++ result->priv_driver_resource_size); ++ } ++ ++cleanup: ++ ++ free_message((struct dxgvmbusmsg *)&msg, device->process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index ebcb7b0f62c1..4b7466d1b9f2 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -173,6 +173,14 @@ struct dxgkvmb_command_setiospaceregion { + u32 shared_page_gpadl; + }; + ++/* Returns ntstatus */ ++struct dxgkvmb_command_setexistingsysmemstore { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle allocation; ++ u32 gpadl; ++}; ++ + struct dxgkvmb_command_createprocess { + struct dxgkvmb_command_vm_to_host hdr; + void *process; +@@ -269,6 +277,121 @@ struct dxgkvmb_command_flushdevice { + enum dxgdevice_flushschedulerreason reason; + }; + ++struct dxgkvmb_command_createallocation_allocinfo { ++ u32 flags; ++ u32 priv_drv_data_size; ++ u32 vidpn_source_id; ++}; ++ ++struct dxgkvmb_command_createallocation { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++ u32 private_runtime_data_size; ++ u32 priv_drv_data_size; ++ u32 alloc_count; ++ struct d3dkmt_createallocationflags flags; ++ u64 private_runtime_resource_handle; ++ bool make_resident; ++/* dxgkvmb_command_createallocation_allocinfo alloc_info[alloc_count]; */ ++/* u8 private_rutime_data[private_runtime_data_size] */ ++/* u8 priv_drv_data[] for each alloc_info */ ++}; ++ ++struct dxgkvmb_command_getstandardallocprivdata { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ enum d3dkmdt_standardallocationtype alloc_type; ++ u32 priv_driver_data_size; ++ u32 priv_driver_resource_size; ++ u32 physical_adapter_index; ++ union { ++ struct d3dkmdt_sharedprimarysurfacedata primary; ++ struct d3dkmdt_shadowsurfacedata shadow; ++ struct d3dkmdt_stagingsurfacedata staging; ++ struct d3dkmdt_gdisurfacedata gdi_surface; ++ }; ++}; ++ ++struct dxgkvmb_command_getstandardallocprivdata_return { ++ struct ntstatus status; ++ u32 priv_driver_data_size; ++ u32 priv_driver_resource_size; ++ union { ++ struct d3dkmdt_sharedprimarysurfacedata primary; ++ struct d3dkmdt_shadowsurfacedata shadow; ++ struct d3dkmdt_stagingsurfacedata staging; ++ struct d3dkmdt_gdisurfacedata gdi_surface; ++ }; ++/* char alloc_priv_data[priv_driver_data_size]; */ ++/* char resource_priv_data[priv_driver_resource_size]; */ ++}; ++ ++struct dxgkarg_describeallocation { ++ u64 allocation; ++ u32 width; ++ u32 height; ++ u32 
format; ++ u32 multisample_method; ++ struct d3dddi_rational refresh_rate; ++ u32 private_driver_attribute; ++ u32 flags; ++ u32 rotation; ++}; ++ ++struct dxgkvmb_allocflags { ++ union { ++ u32 flags; ++ struct { ++ u32 primary:1; ++ u32 cdd_primary:1; ++ u32 dod_primary:1; ++ u32 overlay:1; ++ u32 reserved6:1; ++ u32 capture:1; ++ u32 reserved0:4; ++ u32 reserved1:1; ++ u32 existing_sysmem:1; ++ u32 stereo:1; ++ u32 direct_flip:1; ++ u32 hardware_protected:1; ++ u32 reserved2:1; ++ u32 reserved3:1; ++ u32 reserved4:1; ++ u32 protected:1; ++ u32 cached:1; ++ u32 independent_primary:1; ++ u32 reserved:11; ++ }; ++ }; ++}; ++ ++struct dxgkvmb_command_allocinfo_return { ++ struct d3dkmthandle allocation; ++ u32 priv_drv_data_size; ++ struct dxgkvmb_allocflags allocation_flags; ++ u64 allocation_size; ++ struct dxgkarg_describeallocation driver_info; ++}; ++ ++struct dxgkvmb_command_createallocation_return { ++ struct d3dkmt_createallocationflags flags; ++ struct d3dkmthandle resource; ++ struct d3dkmthandle global_share; ++ u32 vgpu_flags; ++ struct dxgkvmb_command_allocinfo_return allocation_info[1]; ++ /* Private driver data for allocations */ ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_destroyallocation { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++ u32 alloc_count; ++ struct d3dddicb_destroyallocation2flags flags; ++ struct d3dkmthandle allocations[1]; ++}; ++ + struct dxgkvmb_command_createcontextvirtual { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmthandle context; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 5d10ebd2ce6a..0eaa577d7ed4 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -714,6 +714,633 @@ dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++get_standard_alloc_priv_data(struct dxgdevice *device, ++ struct d3dkmt_createstandardallocation *alloc_info, ++ u32 *standard_alloc_priv_data_size, ++ void **standard_alloc_priv_data, ++ u32 *standard_res_priv_data_size, ++ void **standard_res_priv_data) ++{ ++ int ret; ++ struct d3dkmdt_gdisurfacedata gdi_data = { }; ++ u32 priv_data_size = 0; ++ u32 res_priv_data_size = 0; ++ void *priv_data = NULL; ++ void *res_priv_data = NULL; ++ ++ gdi_data.type = _D3DKMDT_GDISURFACE_TEXTURE_CROSSADAPTER; ++ gdi_data.width = alloc_info->existing_heap_data.size; ++ gdi_data.height = 1; ++ gdi_data.format = _D3DDDIFMT_UNKNOWN; ++ ++ *standard_alloc_priv_data_size = 0; ++ ret = dxgvmb_send_get_stdalloc_data(device, ++ _D3DKMDT_STANDARDALLOCATION_GDISURFACE, ++ &gdi_data, 0, ++ &priv_data_size, NULL, ++ &res_priv_data_size, ++ NULL); ++ if (ret < 0) ++ goto cleanup; ++ DXG_TRACE("Priv data size: %d", priv_data_size); ++ if (priv_data_size == 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ priv_data = vzalloc(priv_data_size); ++ if (priv_data == NULL) { ++ ret = -ENOMEM; ++ DXG_ERR("failed to allocate memory for priv data: %d", ++ priv_data_size); ++ goto cleanup; ++ } ++ if (res_priv_data_size) { ++ res_priv_data = vzalloc(res_priv_data_size); ++ if (res_priv_data == NULL) { ++ ret = -ENOMEM; ++ dev_err(DXGDEV, ++ "failed to alloc memory for res priv data: %d", ++ res_priv_data_size); ++ goto cleanup; ++ } ++ } ++ ret = dxgvmb_send_get_stdalloc_data(device, ++ _D3DKMDT_STANDARDALLOCATION_GDISURFACE, ++ &gdi_data, 0, ++ &priv_data_size, ++ priv_data, ++ &res_priv_data_size, ++ res_priv_data); ++ if (ret < 0) ++ goto cleanup; ++ 
*standard_alloc_priv_data_size = priv_data_size; ++ *standard_alloc_priv_data = priv_data; ++ *standard_res_priv_data_size = res_priv_data_size; ++ *standard_res_priv_data = res_priv_data; ++ priv_data = NULL; ++ res_priv_data = NULL; ++ ++cleanup: ++ if (priv_data) ++ vfree(priv_data); ++ if (res_priv_data) ++ vfree(res_priv_data); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++static int ++dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_createallocation args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ struct d3dddi_allocationinfo2 *alloc_info = NULL; ++ struct d3dkmt_createstandardallocation standard_alloc; ++ u32 alloc_info_size = 0; ++ struct dxgresource *resource = NULL; ++ struct dxgallocation **dxgalloc = NULL; ++ bool resource_mutex_acquired = false; ++ u32 standard_alloc_priv_data_size = 0; ++ void *standard_alloc_priv_data = NULL; ++ u32 res_priv_data_size = 0; ++ void *res_priv_data = NULL; ++ int i; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.alloc_count > D3DKMT_CREATEALLOCATION_MAX || ++ args.alloc_count == 0) { ++ DXG_ERR("invalid number of allocations to create"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ alloc_info_size = sizeof(struct d3dddi_allocationinfo2) * ++ args.alloc_count; ++ alloc_info = vzalloc(alloc_info_size); ++ if (alloc_info == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user(alloc_info, args.allocation_info, ++ alloc_info_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc info"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ for (i = 0; i < args.alloc_count; i++) { ++ if (args.flags.standard_allocation) { ++ if (alloc_info[i].priv_drv_data_size != 0) { ++ DXG_ERR("private data size not zero"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ if (alloc_info[i].priv_drv_data_size >= ++ DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("private data size too big: %d %d %ld", ++ i, alloc_info[i].priv_drv_data_size, ++ sizeof(alloc_info[0])); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ if (args.flags.existing_section || args.flags.create_protected) { ++ DXG_ERR("invalid allocation flags"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.flags.standard_allocation) { ++ if (args.standard_allocation == NULL) { ++ DXG_ERR("invalid standard allocation"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_from_user(&standard_alloc, ++ args.standard_allocation, ++ sizeof(standard_alloc)); ++ if (ret) { ++ DXG_ERR("failed to copy std alloc data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ if (standard_alloc.type == ++ _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP) { ++ if (alloc_info[0].sysmem == NULL || ++ (unsigned long)alloc_info[0].sysmem & ++ (PAGE_SIZE - 1)) { ++ DXG_ERR("invalid sysmem pointer"); ++ ret = STATUS_INVALID_PARAMETER; ++ goto cleanup; ++ } ++ if (!args.flags.existing_sysmem) { ++ DXG_ERR("expect existing_sysmem flag"); ++ ret = STATUS_INVALID_PARAMETER; ++ goto cleanup; ++ } ++ } else if (standard_alloc.type == ++ _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER) { ++ if (args.flags.existing_sysmem) { ++ DXG_ERR("existing_sysmem flag invalid"); ++ ret = STATUS_INVALID_PARAMETER; ++ goto cleanup; ++ ++ } ++ if (alloc_info[0].sysmem != NULL) { ++ DXG_ERR("sysmem should be NULL"); ++ ret = STATUS_INVALID_PARAMETER; ++ goto cleanup; ++ } ++ } else { ++ DXG_ERR("invalid standard allocation 
type"); ++ ret = STATUS_INVALID_PARAMETER; ++ goto cleanup; ++ } ++ ++ if (args.priv_drv_data_size != 0 || ++ args.alloc_count != 1 || ++ standard_alloc.existing_heap_data.size == 0 || ++ standard_alloc.existing_heap_data.size & (PAGE_SIZE - 1)) { ++ DXG_ERR("invalid standard allocation"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ args.priv_drv_data_size = ++ sizeof(struct d3dkmt_createstandardallocation); ++ } ++ ++ if (args.flags.create_shared && !args.flags.create_resource) { ++ DXG_ERR("create_resource must be set for create_shared"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. ++ */ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ kref_put(&device->device_kref, dxgdevice_release); ++ device = NULL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ if (args.flags.standard_allocation) { ++ ret = get_standard_alloc_priv_data(device, ++ &standard_alloc, ++ &standard_alloc_priv_data_size, ++ &standard_alloc_priv_data, ++ &res_priv_data_size, ++ &res_priv_data); ++ if (ret < 0) ++ goto cleanup; ++ DXG_TRACE("Alloc private data: %d", ++ standard_alloc_priv_data_size); ++ } ++ ++ if (args.flags.create_resource) { ++ resource = dxgresource_create(device); ++ if (resource == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ resource->private_runtime_handle = ++ args.private_runtime_resource_handle; ++ } else { ++ if (args.resource.v) { ++ /* Adding new allocations to the given resource */ ++ ++ dxgprocess_ht_lock_shared_down(process); ++ resource = hmgrtable_get_object_by_type( ++ &process->handle_table, ++ HMGRENTRY_TYPE_DXGRESOURCE, ++ args.resource); ++ kref_get(&resource->resource_kref); ++ dxgprocess_ht_lock_shared_up(process); ++ ++ if (resource == NULL || resource->device != device) { ++ DXG_ERR("invalid resource handle %x", ++ args.resource.v); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ /* Synchronize with resource destruction */ ++ mutex_lock(&resource->resource_mutex); ++ if (!dxgresource_is_active(resource)) { ++ mutex_unlock(&resource->resource_mutex); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ resource_mutex_acquired = true; ++ } ++ } ++ ++ dxgalloc = vzalloc(sizeof(struct dxgallocation *) * args.alloc_count); ++ if (dxgalloc == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ for (i = 0; i < args.alloc_count; i++) { ++ struct dxgallocation *alloc; ++ u32 priv_data_size; ++ ++ if (args.flags.standard_allocation) ++ priv_data_size = standard_alloc_priv_data_size; ++ else ++ priv_data_size = alloc_info[i].priv_drv_data_size; ++ ++ if (alloc_info[i].sysmem && !args.flags.standard_allocation) { ++ if ((unsigned long) ++ alloc_info[i].sysmem & (PAGE_SIZE - 1)) { ++ DXG_ERR("invalid sysmem alloc %d, %p", ++ i, alloc_info[i].sysmem); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ if ((alloc_info[0].sysmem == NULL) != ++ (alloc_info[i].sysmem == NULL)) { ++ DXG_ERR("All allocs must have sysmem pointer"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ dxgalloc[i] = dxgallocation_create(process); ++ if (dxgalloc[i] == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ alloc = dxgalloc[i]; ++ ++ if (resource) { ++ ret = dxgresource_add_alloc(resource, alloc); ++ if (ret < 0) ++ goto 
cleanup; ++ } else { ++ dxgdevice_add_alloc(device, alloc); ++ } ++ if (args.flags.create_shared) { ++ /* Remember alloc private data to use it during open */ ++ alloc->priv_drv_data = vzalloc(priv_data_size + ++ offsetof(struct privdata, data)); ++ if (alloc->priv_drv_data == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ if (args.flags.standard_allocation) { ++ memcpy(alloc->priv_drv_data->data, ++ standard_alloc_priv_data, ++ priv_data_size); ++ } else { ++ ret = copy_from_user( ++ alloc->priv_drv_data->data, ++ alloc_info[i].priv_drv_data, ++ priv_data_size); ++ if (ret) { ++ dev_err(DXGDEV, ++ "failed to copy priv data"); ++ ret = -EFAULT; ++ goto cleanup; ++ } ++ } ++ alloc->priv_drv_data->data_size = priv_data_size; ++ } ++ } ++ ++ ret = dxgvmb_send_create_allocation(process, device, &args, inargs, ++ resource, dxgalloc, alloc_info, ++ &standard_alloc); ++cleanup: ++ ++ if (resource_mutex_acquired) { ++ mutex_unlock(&resource->resource_mutex); ++ kref_put(&resource->resource_kref, dxgresource_release); ++ } ++ if (ret < 0) { ++ if (dxgalloc) { ++ for (i = 0; i < args.alloc_count; i++) { ++ if (dxgalloc[i]) ++ dxgallocation_destroy(dxgalloc[i]); ++ } ++ } ++ if (resource && args.flags.create_resource) { ++ dxgresource_destroy(resource); ++ } ++ } ++ if (dxgalloc) ++ vfree(dxgalloc); ++ if (standard_alloc_priv_data) ++ vfree(standard_alloc_priv_data); ++ if (res_priv_data) ++ vfree(res_priv_data); ++ if (alloc_info) ++ vfree(alloc_info); ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) { ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int validate_alloc(struct dxgallocation *alloc0, ++ struct dxgallocation *alloc, ++ struct dxgdevice *device, ++ struct d3dkmthandle alloc_handle) ++{ ++ u32 fail_reason; ++ ++ if (alloc == NULL) { ++ fail_reason = 1; ++ goto cleanup; ++ } ++ if (alloc->resource_owner != alloc0->resource_owner) { ++ fail_reason = 2; ++ goto cleanup; ++ } ++ if (alloc->resource_owner) { ++ if (alloc->owner.resource != alloc0->owner.resource) { ++ fail_reason = 3; ++ goto cleanup; ++ } ++ if (alloc->owner.resource->device != device) { ++ fail_reason = 4; ++ goto cleanup; ++ } ++ } else { ++ if (alloc->owner.device != device) { ++ fail_reason = 6; ++ goto cleanup; ++ } ++ } ++ return 0; ++cleanup: ++ DXG_ERR("Alloc validation failed: reason: %d %x", ++ fail_reason, alloc_handle.v); ++ return -EINVAL; ++} ++ ++static int ++dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_destroyallocation2 args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int ret; ++ struct d3dkmthandle *alloc_handles = NULL; ++ struct dxgallocation **allocs = NULL; ++ struct dxgresource *resource = NULL; ++ int i; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.alloc_count > D3DKMT_CREATEALLOCATION_MAX || ++ ((args.alloc_count == 0) == (args.resource.v == 0))) { ++ DXG_ERR("invalid number of allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.alloc_count) { ++ u32 handle_size = sizeof(struct d3dkmthandle) * ++ args.alloc_count; ++ ++ alloc_handles = vzalloc(handle_size); ++ if (alloc_handles == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ allocs = vzalloc(sizeof(struct dxgallocation *) * ++ args.alloc_count); ++ if 
(allocs == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user(alloc_handles, args.allocations, ++ handle_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. ++ */ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* Acquire the device lock to synchronize with the device destriction */ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ kref_put(&device->device_kref, dxgdevice_release); ++ device = NULL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ /* ++ * Destroy the local allocation handles first. If the host handle ++ * is destroyed first, another object could be assigned to the process ++ * table at the same place as the allocation handle and it will fail. ++ */ ++ if (args.alloc_count) { ++ dxgprocess_ht_lock_exclusive_down(process); ++ for (i = 0; i < args.alloc_count; i++) { ++ allocs[i] = ++ hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ alloc_handles[i]); ++ ret = ++ validate_alloc(allocs[0], allocs[i], device, ++ alloc_handles[i]); ++ if (ret < 0) { ++ dxgprocess_ht_lock_exclusive_up(process); ++ goto cleanup; ++ } ++ } ++ dxgprocess_ht_lock_exclusive_up(process); ++ for (i = 0; i < args.alloc_count; i++) ++ dxgallocation_free_handle(allocs[i]); ++ } else { ++ struct dxgallocation *alloc; ++ ++ dxgprocess_ht_lock_exclusive_down(process); ++ resource = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGRESOURCE, ++ args.resource); ++ if (resource == NULL) { ++ DXG_ERR("Invalid resource handle: %x", ++ args.resource.v); ++ ret = -EINVAL; ++ } else if (resource->device != device) { ++ DXG_ERR("Resource belongs to wrong device: %x", ++ args.resource.v); ++ ret = -EINVAL; ++ } else { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGRESOURCE, ++ args.resource); ++ resource->object_state = DXGOBJECTSTATE_DESTROYED; ++ resource->handle.v = 0; ++ resource->handle_valid = 0; ++ } ++ dxgprocess_ht_lock_exclusive_up(process); ++ ++ if (ret < 0) ++ goto cleanup; ++ ++ dxgdevice_acquire_alloc_list_lock_shared(device); ++ list_for_each_entry(alloc, &resource->alloc_list_head, ++ alloc_list_entry) { ++ dxgallocation_free_handle(alloc); ++ } ++ dxgdevice_release_alloc_list_lock_shared(device); ++ } ++ ++ if (args.alloc_count && allocs[0]->resource_owner) ++ resource = allocs[0]->owner.resource; ++ ++ if (resource) { ++ kref_get(&resource->resource_kref); ++ mutex_lock(&resource->resource_mutex); ++ } ++ ++ ret = dxgvmb_send_destroy_allocation(process, device, &args, ++ alloc_handles); ++ ++ /* ++ * Destroy the allocations after the host destroyed it. ++ * The allocation gpadl teardown will wait until the host unmaps its ++ * gpadl. 
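++ * This ordering keeps the pages backing a sysmem allocation from being ++ * reused by the guest while the host may still have them mapped.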
++ */ ++ dxgdevice_acquire_alloc_list_lock(device); ++ if (args.alloc_count) { ++ for (i = 0; i < args.alloc_count; i++) { ++ if (allocs[i]) { ++ allocs[i]->alloc_handle.v = 0; ++ dxgallocation_destroy(allocs[i]); ++ } ++ } ++ } else { ++ dxgresource_destroy(resource); ++ } ++ dxgdevice_release_alloc_list_lock(device); ++ ++ if (resource) { ++ mutex_unlock(&resource->resource_mutex); ++ kref_put(&resource->resource_kref, dxgresource_release); ++ } ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) { ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ ++ if (alloc_handles) ++ vfree(alloc_handles); ++ ++ if (allocs) ++ vfree(allocs); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, +@@ -721,7 +1348,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x03 */ {}, + /* 0x04 */ {dxgkio_create_context_virtual, LX_DXCREATECONTEXTVIRTUAL}, + /* 0x05 */ {dxgkio_destroy_context, LX_DXDESTROYCONTEXT}, +-/* 0x06 */ {}, ++/* 0x06 */ {dxgkio_create_allocation, LX_DXCREATEALLOCATION}, + /* 0x07 */ {}, + /* 0x08 */ {}, + /* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO}, +@@ -734,7 +1361,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x10 */ {}, + /* 0x11 */ {}, + /* 0x12 */ {}, +-/* 0x13 */ {}, ++/* 0x13 */ {dxgkio_destroy_allocation, LX_DXDESTROYALLOCATION2}, + /* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2}, + /* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER}, + /* 0x16 */ {}, +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index 3a9637f0b5e2..a51b29a6a68f 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -30,6 +30,9 @@ extern const struct d3dkmthandle zerohandle; + * plistmutex (process list mutex) + * table_lock (handle table lock) + * context_list_lock ++ * alloc_list_lock ++ * resource_mutex ++ * shared_resource_list_lock + * core_lock (dxgadapter lock) + * device_lock (dxgdevice lock) + * process_adapter_mutex +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 4ba0070b061f..cf670b9c4dc2 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -58,6 +58,7 @@ struct winluid { + __u32 b; + }; + ++#define D3DKMT_CREATEALLOCATION_MAX 1024 + #define D3DKMT_ADAPTERS_MAX 64 + + struct d3dkmt_adapterinfo { +@@ -197,6 +198,205 @@ struct d3dkmt_createcontextvirtual { + struct d3dkmthandle context; + }; + ++enum d3dkmdt_gdisurfacetype { ++ _D3DKMDT_GDISURFACE_INVALID = 0, ++ _D3DKMDT_GDISURFACE_TEXTURE = 1, ++ _D3DKMDT_GDISURFACE_STAGING_CPUVISIBLE = 2, ++ _D3DKMDT_GDISURFACE_STAGING = 3, ++ _D3DKMDT_GDISURFACE_LOOKUPTABLE = 4, ++ _D3DKMDT_GDISURFACE_EXISTINGSYSMEM = 5, ++ _D3DKMDT_GDISURFACE_TEXTURE_CPUVISIBLE = 6, ++ _D3DKMDT_GDISURFACE_TEXTURE_CROSSADAPTER = 7, ++ _D3DKMDT_GDISURFACE_TEXTURE_CPUVISIBLE_CROSSADAPTER = 8, ++}; ++ ++struct d3dddi_rational { ++ __u32 numerator; ++ __u32 denominator; ++}; ++ ++enum d3dddiformat { ++ _D3DDDIFMT_UNKNOWN = 0, ++}; ++ ++struct d3dkmdt_gdisurfacedata { ++ __u32 width; ++ __u32 height; ++ __u32 format; ++ enum d3dkmdt_gdisurfacetype type; ++ __u32 flags; ++ __u32 pitch; ++}; ++ ++struct d3dkmdt_stagingsurfacedata { ++ __u32 width; ++ __u32 height; ++ __u32 pitch; ++}; ++ ++struct d3dkmdt_sharedprimarysurfacedata { ++ __u32 width; ++ __u32 height; ++ enum d3dddiformat format; ++ struct d3dddi_rational 
refresh_rate; ++ __u32 vidpn_source_id; ++}; ++ ++struct d3dkmdt_shadowsurfacedata { ++ __u32 width; ++ __u32 height; ++ enum d3dddiformat format; ++ __u32 pitch; ++}; ++ ++enum d3dkmdt_standardallocationtype { ++ _D3DKMDT_STANDARDALLOCATION_SHAREDPRIMARYSURFACE = 1, ++ _D3DKMDT_STANDARDALLOCATION_SHADOWSURFACE = 2, ++ _D3DKMDT_STANDARDALLOCATION_STAGINGSURFACE = 3, ++ _D3DKMDT_STANDARDALLOCATION_GDISURFACE = 4, ++}; ++ ++enum d3dkmt_standardallocationtype { ++ _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1, ++ _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2, ++}; ++ ++struct d3dkmt_standardallocation_existingheap { ++ __u64 size; ++}; ++ ++struct d3dkmt_createstandardallocationflags { ++ union { ++ struct { ++ __u32 reserved:32; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_createstandardallocation { ++ enum d3dkmt_standardallocationtype type; ++ __u32 reserved; ++ struct d3dkmt_standardallocation_existingheap existing_heap_data; ++ struct d3dkmt_createstandardallocationflags flags; ++ __u32 reserved1; ++}; ++ ++struct d3dddi_allocationinfo2 { ++ struct d3dkmthandle allocation; ++#ifdef __KERNEL__ ++ const void *sysmem; ++#else ++ __u64 sysmem; ++#endif ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ __u32 priv_drv_data_size; ++ __u32 vidpn_source_id; ++ union { ++ struct { ++ __u32 primary:1; ++ __u32 stereo:1; ++ __u32 override_priority:1; ++ __u32 reserved:29; ++ }; ++ __u32 value; ++ } flags; ++ __u64 gpu_virtual_address; ++ union { ++ __u32 priority; ++ __u64 unused; ++ }; ++ __u64 reserved[5]; ++}; ++ ++struct d3dkmt_createallocationflags { ++ union { ++ struct { ++ __u32 create_resource:1; ++ __u32 create_shared:1; ++ __u32 non_secure:1; ++ __u32 create_protected:1; ++ __u32 restrict_shared_access:1; ++ __u32 existing_sysmem:1; ++ __u32 nt_security_sharing:1; ++ __u32 read_only:1; ++ __u32 create_write_combined:1; ++ __u32 create_cached:1; ++ __u32 swap_chain_back_buffer:1; ++ __u32 cross_adapter:1; ++ __u32 open_cross_adapter:1; ++ __u32 partial_shared_creation:1; ++ __u32 zeroed:1; ++ __u32 write_watch:1; ++ __u32 standard_allocation:1; ++ __u32 existing_section:1; ++ __u32 reserved:14; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_createallocation { ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++ struct d3dkmthandle global_share; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ const void *private_runtime_data; ++#else ++ __u64 private_runtime_data; ++#endif ++ __u32 private_runtime_data_size; ++ __u32 reserved1; ++ union { ++#ifdef __KERNEL__ ++ struct d3dkmt_createstandardallocation *standard_allocation; ++ const void *priv_drv_data; ++#else ++ __u64 standard_allocation; ++ __u64 priv_drv_data; ++#endif ++ }; ++ __u32 priv_drv_data_size; ++ __u32 alloc_count; ++#ifdef __KERNEL__ ++ struct d3dddi_allocationinfo2 *allocation_info; ++#else ++ __u64 allocation_info; ++#endif ++ struct d3dkmt_createallocationflags flags; ++ __u32 reserved2; ++ __u64 private_runtime_resource_handle; ++}; ++ ++struct d3dddicb_destroyallocation2flags { ++ union { ++ struct { ++ __u32 assume_not_in_use:1; ++ __u32 synchronous_destroy:1; ++ __u32 reserved:29; ++ __u32 system_use_only:1; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_destroyallocation2 { ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++#ifdef __KERNEL__ ++ const struct d3dkmthandle *allocations; ++#else ++ __u64 allocations; ++#endif ++ __u32 alloc_count; ++ struct d3dddicb_destroyallocation2flags flags; ++}; ++ + struct d3dkmt_adaptertype { + union { 
+ struct { +@@ -279,8 +479,12 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x04, struct d3dkmt_createcontextvirtual) + #define LX_DXDESTROYCONTEXT \ + _IOWR(0x47, 0x05, struct d3dkmt_destroycontext) ++#define LX_DXCREATEALLOCATION \ ++ _IOWR(0x47, 0x06, struct d3dkmt_createallocation) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) ++#define LX_DXDESTROYALLOCATION2 \ ++ _IOWR(0x47, 0x13, struct d3dkmt_destroyallocation2) + #define LX_DXENUMADAPTERS2 \ + _IOWR(0x47, 0x14, struct d3dkmt_enumadapters2) + #define LX_DXCLOSEADAPTER \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1677-drivers-hv-dxgkrnl-Creation-of-compute-device-sync-objects.patch b/patch/kernel/archive/wsl2-arm64-6.1/1677-drivers-hv-dxgkrnl-Creation-of-compute-device-sync-objects.patch new file mode 100644 index 000000000000..3b0d750f67c2 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1677-drivers-hv-dxgkrnl-Creation-of-compute-device-sync-objects.patch @@ -0,0 +1,1016 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 1 Feb 2022 14:38:32 -0800 +Subject: drivers: hv: dxgkrnl: Creation of compute device sync objects + +Implement ioctls to create and destroy compute device sync objects: + - the LX_DXCREATESYNCHRONIZATIONOBJECT ioctl, + - the LX_DXDESTROYSYNCHRONIZATIONOBJECT ioctl. + +Compute device synchronization objects are used to synchronize +execution of compute device commands, which are queued to +different execution contexts (dxgcontext objects). + +There are several types of sync objects (mutex, monitored +fence, CPU event, fence). A "signal" or a "wait" operation +could be queued to an execution context. + +Monitored fence sync objects are particularly important. +A monitored fence object has a fence value, which could be +monitored by the compute device or by the CPU. Therefore, a CPU +virtual address is allocated during object creation to allow +an application to read the fence value. dxg_map_iospace and +dxg_unmap_iospace implement creation of the CPU virtual address.
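+
+As a rough illustration of how user mode is expected to consume this (not
+part of this patch; the uapi header install location and the way the dxg
+device file descriptor is obtained are assumptions), a monitored fence can
+be created and its value polled through the returned CPU virtual address:
+
+	#include <sys/ioctl.h>
+	#include <linux/types.h>
+	#include <misc/d3dkmthk.h>	/* assumed install location of the uapi header */
+
+	/* dxg_fd: an already opened file descriptor for the dxgkrnl device node.
+	 * Spins until the monitored fence reaches 'target'; a real client would
+	 * use the wait ioctls added later in this series instead of busy-waiting.
+	 */
+	static int poll_monitored_fence(int dxg_fd, struct d3dkmthandle device,
+					__u64 target)
+	{
+		struct d3dkmt_createsynchronizationobject2 args = { };
+		volatile __u64 *fence;
+
+		args.device = device;
+		args.info.type = _D3DDDI_MONITORED_FENCE;
+		if (ioctl(dxg_fd, LX_DXCREATESYNCHRONIZATIONOBJECT, &args) < 0)
+			return -1;
+		/* fence_cpu_virtual_address points into the IO space mapping
+		 * created by dxg_map_iospace() on the kernel side. */
+		fence = args.info.monitored_fence.fence_cpu_virtual_address;
+		while (*fence < target)
+			;
+		return 0;
+	}
+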
+This mapping is done as follows: +- The host allocates a portion of the guest IO space, which is mapped + to the actual fence value memory on the host +- The host returns the guest IO space address to the guest +- The guest allocates a CPU virtual address and updates page tables + to point to the IO space address + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 184 +++++++++ + drivers/hv/dxgkrnl/dxgkrnl.h | 80 ++++ + drivers/hv/dxgkrnl/dxgmodule.c | 1 + + drivers/hv/dxgkrnl/dxgprocess.c | 16 + + drivers/hv/dxgkrnl/dxgvmbus.c | 205 ++++++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 20 + + drivers/hv/dxgkrnl/ioctl.c | 130 +++++- + include/uapi/misc/d3dkmthk.h | 95 +++++ + 8 files changed, 729 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 402caa81a5db..d2f2b96527e6 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -160,6 +160,24 @@ void dxgadapter_remove_process(struct dxgprocess_adapter *process_info) + list_del(&process_info->adapter_process_list_entry); + } + ++void dxgadapter_add_syncobj(struct dxgadapter *adapter, ++ struct dxgsyncobject *object) ++{ ++ down_write(&adapter->shared_resource_list_lock); ++ list_add_tail(&object->syncobj_list_entry, &adapter->syncobj_list_head); ++ up_write(&adapter->shared_resource_list_lock); ++} ++ ++void dxgadapter_remove_syncobj(struct dxgsyncobject *object) ++{ ++ down_write(&object->adapter->shared_resource_list_lock); ++ if (object->syncobj_list_entry.next) { ++ list_del(&object->syncobj_list_entry); ++ object->syncobj_list_entry.next = NULL; ++ } ++ up_write(&object->adapter->shared_resource_list_lock); ++} ++ + int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter) + { + down_write(&adapter->core_lock); +@@ -213,6 +231,7 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, + init_rwsem(&device->context_list_lock); + init_rwsem(&device->alloc_list_lock); + INIT_LIST_HEAD(&device->pqueue_list_head); ++ INIT_LIST_HEAD(&device->syncobj_list_head); + device->object_state = DXGOBJECTSTATE_CREATED; + device->execution_state = _D3DKMT_DEVICEEXECUTION_ACTIVE; + +@@ -228,6 +247,7 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, + void dxgdevice_stop(struct dxgdevice *device) + { + struct dxgallocation *alloc; ++ struct dxgsyncobject *syncobj; + + DXG_TRACE("Destroying device: %p", device); + dxgdevice_acquire_alloc_list_lock(device); +@@ -235,6 +255,14 @@ void dxgdevice_stop(struct dxgdevice *device) + dxgallocation_stop(alloc); + } + dxgdevice_release_alloc_list_lock(device); ++ ++ hmgrtable_lock(&device->process->handle_table, DXGLOCK_EXCL); ++ list_for_each_entry(syncobj, &device->syncobj_list_head, ++ syncobj_list_entry) { ++ dxgsyncobject_stop(syncobj); ++ } ++ hmgrtable_unlock(&device->process->handle_table, DXGLOCK_EXCL); ++ DXG_TRACE("Device stopped: %p", device); + } + + void dxgdevice_mark_destroyed(struct dxgdevice *device) +@@ -263,6 +291,20 @@ void dxgdevice_destroy(struct dxgdevice *device) + + dxgdevice_acquire_alloc_list_lock(device); + ++ while (!list_empty(&device->syncobj_list_head)) { ++ struct dxgsyncobject *syncobj = ++ list_first_entry(&device->syncobj_list_head, ++ struct dxgsyncobject, ++ syncobj_list_entry); ++ list_del(&syncobj->syncobj_list_entry); ++ syncobj->syncobj_list_entry.next = NULL; ++ dxgdevice_release_alloc_list_lock(device); ++ ++ dxgsyncobject_destroy(process, syncobj); ++ ++
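/* dxgsyncobject_destroy() ran without the list lock; take it again before ++ * looking at the sync object list. */ ++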
dxgdevice_acquire_alloc_list_lock(device); ++ } ++ + { + struct dxgallocation *alloc; + struct dxgallocation *tmp; +@@ -565,6 +607,30 @@ void dxgdevice_release(struct kref *refcount) + kfree(device); + } + ++void dxgdevice_add_syncobj(struct dxgdevice *device, ++ struct dxgsyncobject *syncobj) ++{ ++ dxgdevice_acquire_alloc_list_lock(device); ++ list_add_tail(&syncobj->syncobj_list_entry, &device->syncobj_list_head); ++ kref_get(&syncobj->syncobj_kref); ++ dxgdevice_release_alloc_list_lock(device); ++} ++ ++void dxgdevice_remove_syncobj(struct dxgsyncobject *entry) ++{ ++ struct dxgdevice *device = entry->device; ++ ++ dxgdevice_acquire_alloc_list_lock(device); ++ if (entry->syncobj_list_entry.next) { ++ list_del(&entry->syncobj_list_entry); ++ entry->syncobj_list_entry.next = NULL; ++ kref_put(&entry->syncobj_kref, dxgsyncobject_release); ++ } ++ dxgdevice_release_alloc_list_lock(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ entry->device = NULL; ++} ++ + struct dxgcontext *dxgcontext_create(struct dxgdevice *device) + { + struct dxgcontext *context; +@@ -812,3 +878,121 @@ void dxgprocess_adapter_remove_device(struct dxgdevice *device) + } + mutex_unlock(&device->adapter_info->device_list_mutex); + } ++ ++struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct dxgadapter *adapter, ++ enum ++ d3dddi_synchronizationobject_type ++ type, ++ struct ++ d3dddi_synchronizationobject_flags ++ flags) ++{ ++ struct dxgsyncobject *syncobj; ++ ++ syncobj = kzalloc(sizeof(*syncobj), GFP_KERNEL); ++ if (syncobj == NULL) ++ goto cleanup; ++ syncobj->type = type; ++ syncobj->process = process; ++ switch (type) { ++ case _D3DDDI_MONITORED_FENCE: ++ case _D3DDDI_PERIODIC_MONITORED_FENCE: ++ syncobj->monitored_fence = 1; ++ break; ++ default: ++ break; ++ } ++ if (flags.shared) { ++ syncobj->shared = 1; ++ if (!flags.nt_security_sharing) { ++ DXG_ERR("nt_security_sharing must be set"); ++ goto cleanup; ++ } ++ } ++ ++ kref_init(&syncobj->syncobj_kref); ++ ++ if (syncobj->monitored_fence) { ++ syncobj->device = device; ++ syncobj->device_handle = device->handle; ++ kref_get(&device->device_kref); ++ dxgdevice_add_syncobj(device, syncobj); ++ } else { ++ dxgadapter_add_syncobj(adapter, syncobj); ++ } ++ syncobj->adapter = adapter; ++ kref_get(&adapter->adapter_kref); ++ ++ DXG_TRACE("Syncobj created: %p", syncobj); ++ return syncobj; ++cleanup: ++ if (syncobj) ++ kfree(syncobj); ++ return NULL; ++} ++ ++void dxgsyncobject_destroy(struct dxgprocess *process, ++ struct dxgsyncobject *syncobj) ++{ ++ int destroyed; ++ ++ DXG_TRACE("Destroying syncobj: %p", syncobj); ++ ++ dxgsyncobject_stop(syncobj); ++ ++ destroyed = test_and_set_bit(0, &syncobj->flags); ++ if (!destroyed) { ++ DXG_TRACE("Deleting handle: %x", syncobj->handle.v); ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ if (syncobj->handle.v) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ syncobj->handle); ++ syncobj->handle.v = 0; ++ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release); ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (syncobj->monitored_fence) ++ dxgdevice_remove_syncobj(syncobj); ++ else ++ dxgadapter_remove_syncobj(syncobj); ++ if (syncobj->adapter) { ++ kref_put(&syncobj->adapter->adapter_kref, ++ dxgadapter_release); ++ syncobj->adapter = NULL; ++ } ++ } ++ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release); ++} ++ ++void dxgsyncobject_stop(struct dxgsyncobject *syncobj) ++{ ++ 
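/* Bit 1 of 'flags' is the stopped bit (bit 0 is destroyed); only the first ++ * caller of dxgsyncobject_stop() tears down the fence mapping. */ ++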
int stopped = test_and_set_bit(1, &syncobj->flags); ++ ++ if (!stopped) { ++ DXG_TRACE("Stopping syncobj"); ++ if (syncobj->monitored_fence) { ++ if (syncobj->mapped_address) { ++ int ret = ++ dxg_unmap_iospace(syncobj->mapped_address, ++ PAGE_SIZE); ++ ++ (void)ret; ++ DXG_TRACE("unmap fence %d %p", ++ ret, syncobj->mapped_address); ++ syncobj->mapped_address = NULL; ++ } ++ } ++ } ++} ++ ++void dxgsyncobject_release(struct kref *refcount) ++{ ++ struct dxgsyncobject *syncobj; ++ ++ syncobj = container_of(refcount, struct dxgsyncobject, syncobj_kref); ++ kfree(syncobj); ++} +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index fa053fb6ac9c..1b9410c9152b 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -38,6 +38,7 @@ struct dxgdevice; + struct dxgcontext; + struct dxgallocation; + struct dxgresource; ++struct dxgsyncobject; + + /* + * Driver private data. +@@ -100,6 +101,56 @@ int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev); + void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch); + void dxgvmbuschannel_receive(void *ctx); + ++/* ++ * This is GPU synchronization object, which is used to synchronize execution ++ * between GPU contextx/hardware queues or for tracking GPU execution progress. ++ * A dxgsyncobject is created when somebody creates a syncobject or opens a ++ * shared syncobject. ++ * A syncobject belongs to an adapter, unless it is a cross-adapter object. ++ * Cross adapter syncobjects are currently not implemented. ++ * ++ * D3DDDI_MONITORED_FENCE and D3DDDI_PERIODIC_MONITORED_FENCE are called ++ * "device" syncobject, because the belong to a device (dxgdevice). ++ * Device syncobjects are inserted to a list in dxgdevice. ++ * ++ */ ++struct dxgsyncobject { ++ struct kref syncobj_kref; ++ enum d3dddi_synchronizationobject_type type; ++ /* ++ * List entry in dxgdevice for device sync objects. ++ * List entry in dxgadapter for other objects ++ */ ++ struct list_head syncobj_list_entry; ++ /* Adapter, the syncobject belongs to. NULL for stopped sync obejcts. */ ++ struct dxgadapter *adapter; ++ /* ++ * Pointer to the device, which was used to create the object. ++ * This is NULL for non-device syncbjects ++ */ ++ struct dxgdevice *device; ++ struct dxgprocess *process; ++ /* CPU virtual address of the fence value for "device" syncobjects */ ++ void *mapped_address; ++ /* Handle in the process handle table */ ++ struct d3dkmthandle handle; ++ /* Cached handle of the device. Used to avoid device dereference. */ ++ struct d3dkmthandle device_handle; ++ union { ++ struct { ++ /* Must be the first bit */ ++ u32 destroyed:1; ++ /* Must be the second bit */ ++ u32 stopped:1; ++ /* device syncobject */ ++ u32 monitored_fence:1; ++ u32 shared:1; ++ u32 reserved:27; ++ }; ++ long flags; ++ }; ++}; ++ + /* + * The structure defines an offered vGPU vm bus channel. 
+ */ +@@ -109,6 +160,20 @@ struct dxgvgpuchannel { + struct hv_device *hdev; + }; + ++struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct dxgadapter *adapter, ++ enum ++ d3dddi_synchronizationobject_type ++ type, ++ struct ++ d3dddi_synchronizationobject_flags ++ flags); ++void dxgsyncobject_destroy(struct dxgprocess *process, ++ struct dxgsyncobject *syncobj); ++void dxgsyncobject_stop(struct dxgsyncobject *syncobj); ++void dxgsyncobject_release(struct kref *refcount); ++ + struct dxgglobal { + struct dxgdriver *drvdata; + struct dxgvmbuschannel channel; +@@ -271,6 +336,8 @@ struct dxgadapter { + struct list_head adapter_list_entry; + /* The list of dxgprocess_adapter entries */ + struct list_head adapter_process_list_head; ++ /* List of all non-device dxgsyncobject objects */ ++ struct list_head syncobj_list_head; + /* This lock protects shared resource and syncobject lists */ + struct rw_semaphore shared_resource_list_lock; + struct pci_dev *pci_dev; +@@ -296,6 +363,9 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter); + int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter); + void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter); + void dxgadapter_release_lock_exclusive(struct dxgadapter *adapter); ++void dxgadapter_add_syncobj(struct dxgadapter *adapter, ++ struct dxgsyncobject *so); ++void dxgadapter_remove_syncobj(struct dxgsyncobject *so); + void dxgadapter_add_process(struct dxgadapter *adapter, + struct dxgprocess_adapter *process_info); + void dxgadapter_remove_process(struct dxgprocess_adapter *process_info); +@@ -325,6 +395,7 @@ struct dxgdevice { + struct list_head resource_list_head; + /* List of paging queues. Protected by process handle table lock. */ + struct list_head pqueue_list_head; ++ struct list_head syncobj_list_head; + struct d3dkmthandle handle; + enum d3dkmt_deviceexecution_state execution_state; + u32 handle_valid; +@@ -345,6 +416,8 @@ void dxgdevice_remove_alloc_safe(struct dxgdevice *dev, + struct dxgallocation *a); + void dxgdevice_add_resource(struct dxgdevice *dev, struct dxgresource *res); + void dxgdevice_remove_resource(struct dxgdevice *dev, struct dxgresource *res); ++void dxgdevice_add_syncobj(struct dxgdevice *dev, struct dxgsyncobject *so); ++void dxgdevice_remove_syncobj(struct dxgsyncobject *so); + bool dxgdevice_is_active(struct dxgdevice *dev); + void dxgdevice_acquire_context_list_lock(struct dxgdevice *dev); + void dxgdevice_release_context_list_lock(struct dxgdevice *dev); +@@ -455,6 +528,7 @@ void dxgallocation_free_handle(struct dxgallocation *a); + long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2); + long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2); + ++int dxg_unmap_iospace(void *va, u32 size); + /* + * The convention is that VNBus instance id is a GUID, but the host sets + * the lower part of the value to the host adapter LUID. 
The function +@@ -514,6 +588,12 @@ int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + int dxgvmb_send_destroy_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + struct d3dkmt_destroyallocation2 *args, + struct d3dkmthandle *alloc_handles); ++int dxgvmb_send_create_sync_object(struct dxgprocess *pr, ++ struct dxgadapter *adapter, ++ struct d3dkmt_createsynchronizationobject2 ++ *args, struct dxgsyncobject *so); ++int dxgvmb_send_destroy_sync_object(struct dxgprocess *pr, ++ struct d3dkmthandle h); + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 053ce6f3e083..9bc8931c5043 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -162,6 +162,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, + init_rwsem(&adapter->core_lock); + + INIT_LIST_HEAD(&adapter->adapter_process_list_head); ++ INIT_LIST_HEAD(&adapter->syncobj_list_head); + init_rwsem(&adapter->shared_resource_list_lock); + adapter->pci_dev = dev; + guid_to_luid(guid, &adapter->luid); +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index ca307beb9a9a..a41985ef438d 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -59,6 +59,7 @@ void dxgprocess_destroy(struct dxgprocess *process) + enum hmgrentry_type t; + struct d3dkmthandle h; + void *o; ++ struct dxgsyncobject *syncobj; + struct dxgprocess_adapter *entry; + struct dxgprocess_adapter *tmp; + +@@ -84,6 +85,21 @@ void dxgprocess_destroy(struct dxgprocess *process) + } + } + ++ i = 0; ++ while (hmgrtable_next_entry(&process->handle_table, &i, &t, &h, &o)) { ++ switch (t) { ++ case HMGRENTRY_TYPE_DXGSYNCOBJECT: ++ DXG_TRACE("Destroy syncobj: %p %d", o, i); ++ syncobj = o; ++ syncobj->handle.v = 0; ++ dxgsyncobject_destroy(process, syncobj); ++ break; ++ default: ++ DXG_ERR("invalid entry in handle table %d", t); ++ break; ++ } ++ } ++ + hmgrtable_destroy(&process->handle_table); + hmgrtable_destroy(&process->local_handle_table); + } +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 14b51a3c6afc..d323afc85249 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -495,6 +495,88 @@ dxgvmb_send_sync_msg_ntstatus(struct dxgvmbuschannel *channel, + return ret; + } + ++static int check_iospace_address(unsigned long address, u32 size) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (address < dxgglobal->mmiospace_base || ++ size > dxgglobal->mmiospace_size || ++ address >= (dxgglobal->mmiospace_base + ++ dxgglobal->mmiospace_size - size)) { ++ DXG_ERR("invalid iospace address %lx", address); ++ return -EINVAL; ++ } ++ return 0; ++} ++ ++int dxg_unmap_iospace(void *va, u32 size) ++{ ++ int ret = 0; ++ ++ DXG_TRACE("Unmapping io space: %p %x", va, size); ++ ++ /* ++ * When an app calls exit(), dxgkrnl is called to close the device ++ * with current->mm equal to NULL. 
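++ * In that case the address space is already gone, there is nothing left ++ * to unmap, and the function returns success without calling vm_munmap().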
++ */ ++ if (current->mm) { ++ ret = vm_munmap((unsigned long)va, size); ++ if (ret) { ++ DXG_ERR("vm_munmap failed %d", ret); ++ return -ENOTRECOVERABLE; ++ } ++ } ++ return 0; ++} ++ ++static u8 *dxg_map_iospace(u64 iospace_address, u32 size, ++ unsigned long protection, bool cached) ++{ ++ struct vm_area_struct *vma; ++ unsigned long va; ++ int ret = 0; ++ ++ DXG_TRACE("Mapping io space: %llx %x %lx", ++ iospace_address, size, protection); ++ if (check_iospace_address(iospace_address, size) < 0) { ++ DXG_ERR("invalid address to map"); ++ return NULL; ++ } ++ ++ va = vm_mmap(NULL, 0, size, protection, MAP_SHARED | MAP_ANONYMOUS, 0); ++ if ((long)va <= 0) { ++ DXG_ERR("vm_mmap failed %lx %d", va, size); ++ return NULL; ++ } ++ ++ mmap_read_lock(current->mm); ++ vma = find_vma(current->mm, (unsigned long)va); ++ if (vma) { ++ pgprot_t prot = vma->vm_page_prot; ++ ++ if (!cached) ++ prot = pgprot_writecombine(prot); ++ DXG_TRACE("vma: %lx %lx %lx", ++ vma->vm_start, vma->vm_end, va); ++ vma->vm_pgoff = iospace_address >> PAGE_SHIFT; ++ ret = io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff, ++ size, prot); ++ if (ret) ++ DXG_ERR("io_remap_pfn_range failed: %d", ret); ++ } else { ++ DXG_ERR("failed to find vma: %p %lx", vma, va); ++ ret = -ENOMEM; ++ } ++ mmap_read_unlock(current->mm); ++ ++ if (ret) { ++ dxg_unmap_iospace((void *)va, size); ++ return NULL; ++ } ++ DXG_TRACE("Mapped VA: %lx", va); ++ return (u8 *) va; ++} ++ + /* + * Global messages to the host + */ +@@ -613,6 +695,39 @@ int dxgvmb_send_destroy_process(struct d3dkmthandle process) + return ret; + } + ++int dxgvmb_send_destroy_sync_object(struct dxgprocess *process, ++ struct d3dkmthandle sync_object) ++{ ++ struct dxgkvmb_command_destroysyncobject *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, NULL, process, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ command_vm_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_DESTROYSYNCOBJECT, ++ process->host_handle); ++ command->sync_object = sync_object; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(dxgglobal_get_dxgvmbuschannel(), ++ msg.hdr, msg.size); ++ ++ dxgglobal_release_channel_lock(); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + /* + * Virtual GPU messages to the host + */ +@@ -1023,7 +1138,11 @@ int create_existing_sysmem(struct dxgdevice *device, + ret = -ENOMEM; + goto cleanup; + } ++#ifdef _MAIN_KERNEL_ + DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle); ++#else ++ DXG_TRACE("New gpadl %d", dxgalloc->gpadl); ++#endif + + command_vgpu_to_host_init2(&set_store_command->hdr, + DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE, +@@ -1501,6 +1620,92 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + return ret; + } + ++static void set_result(struct d3dkmt_createsynchronizationobject2 *args, ++ u64 fence_gpu_va, u8 *va) ++{ ++ args->info.periodic_monitored_fence.fence_gpu_virtual_address = ++ fence_gpu_va; ++ args->info.periodic_monitored_fence.fence_cpu_virtual_address = va; ++} ++ ++int ++dxgvmb_send_create_sync_object(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_createsynchronizationobject2 *args, ++ struct dxgsyncobject *syncobj) ++{ ++ struct dxgkvmb_command_createsyncobject_return result = { }; ++ struct dxgkvmb_command_createsyncobject *command; ++ int ret; ++ u8 *va = 0; ++ struct dxgvmbusmsg msg = {.hdr 
= NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CREATESYNCOBJECT, ++ process->host_handle); ++ command->args = *args; ++ command->client_hint = 1; /* CLIENTHINT_UMD */ ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, &result, ++ sizeof(result)); ++ if (ret < 0) { ++ DXG_ERR("failed %d", ret); ++ goto cleanup; ++ } ++ args->sync_object = result.sync_object; ++ if (syncobj->shared) { ++ if (result.global_sync_object.v == 0) { ++ DXG_ERR("shared handle is 0"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ args->info.shared_handle = result.global_sync_object; ++ } ++ ++ if (syncobj->monitored_fence) { ++ va = dxg_map_iospace(result.fence_storage_address, PAGE_SIZE, ++ PROT_READ | PROT_WRITE, true); ++ if (va == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ if (args->info.type == _D3DDDI_MONITORED_FENCE) { ++ args->info.monitored_fence.fence_gpu_virtual_address = ++ result.fence_gpu_va; ++ args->info.monitored_fence.fence_cpu_virtual_address = ++ va; ++ { ++ unsigned long value; ++ ++ DXG_TRACE("fence cpu va: %p", va); ++ ret = copy_from_user(&value, va, ++ sizeof(u64)); ++ if (ret) { ++ DXG_ERR("failed to read fence"); ++ ret = -EINVAL; ++ } else { ++ DXG_TRACE("fence value:%lx", ++ value); ++ } ++ } ++ } else { ++ set_result(args, result.fence_gpu_va, va); ++ } ++ syncobj->mapped_address = va; ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 4b7466d1b9f2..bbf5f31cdf81 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -410,4 +410,24 @@ struct dxgkvmb_command_destroycontext { + struct d3dkmthandle context; + }; + ++struct dxgkvmb_command_createsyncobject { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_createsynchronizationobject2 args; ++ u32 client_hint; ++}; ++ ++struct dxgkvmb_command_createsyncobject_return { ++ struct d3dkmthandle sync_object; ++ struct d3dkmthandle global_sync_object; ++ u64 fence_gpu_va; ++ u64 fence_storage_address; ++ u32 fence_storage_offset; ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_destroysyncobject { ++ struct dxgkvmb_command_vm_to_host hdr; ++ struct d3dkmthandle sync_object; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 0eaa577d7ed4..4bba1e209f33 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -1341,6 +1341,132 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_createsynchronizationobject2 args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct dxgsyncobject *syncobj = NULL; ++ bool device_lock_acquired = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ 
if (ret < 0) ++ goto cleanup; ++ ++ device_lock_acquired = true; ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ syncobj = dxgsyncobject_create(process, device, adapter, args.info.type, ++ args.info.flags); ++ if (syncobj == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_create_sync_object(process, adapter, &args, syncobj); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, syncobj, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ args.sync_object); ++ if (ret >= 0) ++ syncobj->handle = args.sync_object; ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (syncobj) { ++ dxgsyncobject_destroy(process, syncobj); ++ if (args.sync_object.v) ++ dxgvmb_send_destroy_sync_object(process, ++ args.sync_object); ++ } ++ } ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_destroysynchronizationobject args; ++ struct dxgsyncobject *syncobj = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ DXG_TRACE("handle 0x%x", args.sync_object.v); ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ syncobj = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ args.sync_object); ++ if (syncobj) { ++ DXG_TRACE("syncobj 0x%p", syncobj); ++ syncobj->handle.v = 0; ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ args.sync_object); ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (syncobj == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ dxgsyncobject_destroy(process, syncobj); ++ ++ ret = dxgvmb_send_destroy_sync_object(process, args.sync_object); ++ ++cleanup: ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, +@@ -1358,7 +1484,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x0d */ {}, + /* 0x0e */ {}, + /* 0x0f */ {}, +-/* 0x10 */ {}, ++/* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT}, + /* 0x11 */ {}, + /* 0x12 */ {}, + /* 0x13 */ {dxgkio_destroy_allocation, LX_DXDESTROYALLOCATION2}, +@@ -1371,7 +1497,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x1a */ {}, + /* 0x1b */ {}, + /* 0x1c */ {}, +-/* 0x1d */ {}, ++/* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT}, + /* 0x1e */ {}, + /* 0x1f */ {}, + /* 0x20 */ {}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index cf670b9c4dc2..4e1069f41d76 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -256,6 +256,97 @@ enum d3dkmdt_standardallocationtype { + _D3DKMDT_STANDARDALLOCATION_GDISURFACE = 4, + }; + ++struct d3dddi_synchronizationobject_flags { 
++ union { ++ struct { ++ __u32 shared:1; ++ __u32 nt_security_sharing:1; ++ __u32 cross_adapter:1; ++ __u32 top_of_pipeline:1; ++ __u32 no_signal:1; ++ __u32 no_wait:1; ++ __u32 no_signal_max_value_on_tdr:1; ++ __u32 no_gpu_access:1; ++ __u32 reserved:23; ++ }; ++ __u32 value; ++ }; ++}; ++ ++enum d3dddi_synchronizationobject_type { ++ _D3DDDI_SYNCHRONIZATION_MUTEX = 1, ++ _D3DDDI_SEMAPHORE = 2, ++ _D3DDDI_FENCE = 3, ++ _D3DDDI_CPU_NOTIFICATION = 4, ++ _D3DDDI_MONITORED_FENCE = 5, ++ _D3DDDI_PERIODIC_MONITORED_FENCE = 6, ++ _D3DDDI_SYNCHRONIZATION_TYPE_LIMIT ++}; ++ ++struct d3dddi_synchronizationobjectinfo2 { ++ enum d3dddi_synchronizationobject_type type; ++ struct d3dddi_synchronizationobject_flags flags; ++ union { ++ struct { ++ __u32 initial_state; ++ } synchronization_mutex; ++ ++ struct { ++ __u32 max_count; ++ __u32 initial_count; ++ } semaphore; ++ ++ struct { ++ __u64 fence_value; ++ } fence; ++ ++ struct { ++ __u64 event; ++ } cpu_notification; ++ ++ struct { ++ __u64 initial_fence_value; ++#ifdef __KERNEL__ ++ void *fence_cpu_virtual_address; ++#else ++ __u64 *fence_cpu_virtual_address; ++#endif ++ __u64 fence_gpu_virtual_address; ++ __u32 engine_affinity; ++ } monitored_fence; ++ ++ struct { ++ struct d3dkmthandle adapter; ++ __u32 vidpn_target_id; ++ __u64 time; ++#ifdef __KERNEL__ ++ void *fence_cpu_virtual_address; ++#else ++ __u64 fence_cpu_virtual_address; ++#endif ++ __u64 fence_gpu_virtual_address; ++ __u32 engine_affinity; ++ } periodic_monitored_fence; ++ ++ struct { ++ __u64 reserved[8]; ++ } reserved; ++ }; ++ struct d3dkmthandle shared_handle; ++}; ++ ++struct d3dkmt_createsynchronizationobject2 { ++ struct d3dkmthandle device; ++ __u32 reserved; ++ struct d3dddi_synchronizationobjectinfo2 info; ++ struct d3dkmthandle sync_object; ++ __u32 reserved1; ++}; ++ ++struct d3dkmt_destroysynchronizationobject { ++ struct d3dkmthandle sync_object; ++}; ++ + enum d3dkmt_standardallocationtype { + _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1, + _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2, +@@ -483,6 +574,8 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x06, struct d3dkmt_createallocation) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) ++#define LX_DXCREATESYNCHRONIZATIONOBJECT \ ++ _IOWR(0x47, 0x10, struct d3dkmt_createsynchronizationobject2) + #define LX_DXDESTROYALLOCATION2 \ + _IOWR(0x47, 0x13, struct d3dkmt_destroyallocation2) + #define LX_DXENUMADAPTERS2 \ +@@ -491,6 +584,8 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x15, struct d3dkmt_closeadapter) + #define LX_DXDESTROYDEVICE \ + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) ++#define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ ++ _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) + #define LX_DXENUMADAPTERS3 \ + _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) + +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1678-drivers-hv-dxgkrnl-Operations-using-sync-objects.patch b/patch/kernel/archive/wsl2-arm64-6.1/1678-drivers-hv-dxgkrnl-Operations-using-sync-objects.patch new file mode 100644 index 000000000000..731bcaac98c6 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1678-drivers-hv-dxgkrnl-Operations-using-sync-objects.patch @@ -0,0 +1,1689 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 1 Feb 2022 13:59:23 -0800 +Subject: drivers: hv: dxgkrnl: Operations using sync objects + +Implement ioctls to submit operations with compute device +sync objects: + - the 
LX_DXSIGNALSYNCHRONIZATIONOBJECT ioctl. + The ioctl is used to submit a signal to a sync object. + - the LX_DXWAITFORSYNCHRONIZATIONOBJECT ioctl. + The ioctl is used to submit a wait for a sync object + - the LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU ioctl + The ioctl is used to signal to a monitored fence sync object + from a CPU thread. + - the LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU ioctl. + The ioctl is used to submit a signal to a monitored fence + sync object.. + - the LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2 ioctl. + The ioctl is used to submit a signal to a monitored fence + sync object. + - the LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU ioctl. + The ioctl is used to submit a wait for a monitored fence + sync object. + +Compute device synchronization objects are used to synchronize +execution of DMA buffers between different execution contexts. +Operations with sync objects include "signal" and "wait". A wait +for a sync object is satisfied when the sync object is signaled. + +A signal operation could be submitted to a compute device context or +the sync object could be signaled by a CPU thread. + +To improve performance, submitting operations to the host is done +asynchronously when the host supports it. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 38 +- + drivers/hv/dxgkrnl/dxgkrnl.h | 62 + + drivers/hv/dxgkrnl/dxgmodule.c | 102 +- + drivers/hv/dxgkrnl/dxgvmbus.c | 219 ++- + drivers/hv/dxgkrnl/dxgvmbus.h | 48 + + drivers/hv/dxgkrnl/ioctl.c | 702 +++++++++- + drivers/hv/dxgkrnl/misc.h | 2 + + include/uapi/misc/d3dkmthk.h | 159 +++ + 8 files changed, 1311 insertions(+), 21 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index d2f2b96527e6..04d827a15c54 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -249,7 +249,7 @@ void dxgdevice_stop(struct dxgdevice *device) + struct dxgallocation *alloc; + struct dxgsyncobject *syncobj; + +- DXG_TRACE("Destroying device: %p", device); ++ DXG_TRACE("Stopping device: %p", device); + dxgdevice_acquire_alloc_list_lock(device); + list_for_each_entry(alloc, &device->alloc_list_head, alloc_list_entry) { + dxgallocation_stop(alloc); +@@ -743,15 +743,13 @@ void dxgallocation_destroy(struct dxgallocation *alloc) + } + #ifdef _MAIN_KERNEL_ + if (alloc->gpadl.gpadl_handle) { +- DXG_TRACE("Teardown gpadl %d", +- alloc->gpadl.gpadl_handle); ++ DXG_TRACE("Teardown gpadl %d", alloc->gpadl.gpadl_handle); + vmbus_teardown_gpadl(dxgglobal_get_vmbus(), &alloc->gpadl); + alloc->gpadl.gpadl_handle = 0; + } + else + if (alloc->gpadl) { +- DXG_TRACE("Teardown gpadl %d", +- alloc->gpadl); ++ DXG_TRACE("Teardown gpadl %d", alloc->gpadl); + vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl); + alloc->gpadl = 0; + } +@@ -901,6 +899,13 @@ struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, + case _D3DDDI_PERIODIC_MONITORED_FENCE: + syncobj->monitored_fence = 1; + break; ++ case _D3DDDI_CPU_NOTIFICATION: ++ syncobj->cpu_event = 1; ++ syncobj->host_event = kzalloc(sizeof(*syncobj->host_event), ++ GFP_KERNEL); ++ if (syncobj->host_event == NULL) ++ goto cleanup; ++ break; + default: + break; + } +@@ -928,6 +933,8 @@ struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, + DXG_TRACE("Syncobj created: %p", syncobj); + return syncobj; + cleanup: ++ if (syncobj->host_event) ++ kfree(syncobj->host_event); + if (syncobj) + kfree(syncobj); + return NULL; +@@ -937,6 +944,7 
@@ void dxgsyncobject_destroy(struct dxgprocess *process, + struct dxgsyncobject *syncobj) + { + int destroyed; ++ struct dxghosteventcpu *host_event; + + DXG_TRACE("Destroying syncobj: %p", syncobj); + +@@ -955,6 +963,16 @@ void dxgsyncobject_destroy(struct dxgprocess *process, + } + hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); + ++ if (syncobj->cpu_event) { ++ host_event = syncobj->host_event; ++ if (host_event->cpu_event) { ++ eventfd_ctx_put(host_event->cpu_event); ++ if (host_event->hdr.event_id) ++ dxgglobal_remove_host_event( ++ &host_event->hdr); ++ host_event->cpu_event = NULL; ++ } ++ } + if (syncobj->monitored_fence) + dxgdevice_remove_syncobj(syncobj); + else +@@ -971,16 +989,14 @@ void dxgsyncobject_destroy(struct dxgprocess *process, + void dxgsyncobject_stop(struct dxgsyncobject *syncobj) + { + int stopped = test_and_set_bit(1, &syncobj->flags); ++ int ret; + + if (!stopped) { + DXG_TRACE("Stopping syncobj"); + if (syncobj->monitored_fence) { + if (syncobj->mapped_address) { +- int ret = +- dxg_unmap_iospace(syncobj->mapped_address, +- PAGE_SIZE); +- +- (void)ret; ++ ret = dxg_unmap_iospace(syncobj->mapped_address, ++ PAGE_SIZE); + DXG_TRACE("unmap fence %d %p", + ret, syncobj->mapped_address); + syncobj->mapped_address = NULL; +@@ -994,5 +1010,7 @@ void dxgsyncobject_release(struct kref *refcount) + struct dxgsyncobject *syncobj; + + syncobj = container_of(refcount, struct dxgsyncobject, syncobj_kref); ++ if (syncobj->host_event) ++ kfree(syncobj->host_event); + kfree(syncobj); + } +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 1b9410c9152b..8431523f42de 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -101,6 +101,29 @@ int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev); + void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch); + void dxgvmbuschannel_receive(void *ctx); + ++/* ++ * The structure describes an event, which will be signaled by ++ * a message from host. ++ */ ++enum dxghosteventtype { ++ dxghostevent_cpu_event = 1, ++}; ++ ++struct dxghostevent { ++ struct list_head host_event_list_entry; ++ u64 event_id; ++ enum dxghosteventtype event_type; ++}; ++ ++struct dxghosteventcpu { ++ struct dxghostevent hdr; ++ struct dxgprocess *process; ++ struct eventfd_ctx *cpu_event; ++ struct completion *completion_event; ++ bool destroy_after_signal; ++ bool remove_from_list; ++}; ++ + /* + * This is GPU synchronization object, which is used to synchronize execution + * between GPU contextx/hardware queues or for tracking GPU execution progress. 
+@@ -130,6 +153,8 @@ struct dxgsyncobject { + */ + struct dxgdevice *device; + struct dxgprocess *process; ++ /* Used by D3DDDI_CPU_NOTIFICATION objects */ ++ struct dxghosteventcpu *host_event; + /* CPU virtual address of the fence value for "device" syncobjects */ + void *mapped_address; + /* Handle in the process handle table */ +@@ -144,6 +169,7 @@ struct dxgsyncobject { + u32 stopped:1; + /* device syncobject */ + u32 monitored_fence:1; ++ u32 cpu_event:1; + u32 shared:1; + u32 reserved:27; + }; +@@ -206,6 +232,11 @@ struct dxgglobal { + /* protects the dxgprocess_adapter lists */ + struct mutex process_adapter_mutex; + ++ /* list of events, waiting to be signaled by the host */ ++ struct list_head host_event_list_head; ++ spinlock_t host_event_list_mutex; ++ atomic64_t host_event_id; ++ + bool global_channel_initialized; + bool async_msg_enabled; + bool misc_registered; +@@ -228,6 +259,11 @@ struct vmbus_channel *dxgglobal_get_vmbus(void); + struct dxgvmbuschannel *dxgglobal_get_dxgvmbuschannel(void); + void dxgglobal_acquire_process_adapter_lock(void); + void dxgglobal_release_process_adapter_lock(void); ++void dxgglobal_add_host_event(struct dxghostevent *hostevent); ++void dxgglobal_remove_host_event(struct dxghostevent *hostevent); ++u64 dxgglobal_new_host_event_id(void); ++void dxgglobal_signal_host_event(u64 event_id); ++struct dxghostevent *dxgglobal_get_host_event(u64 event_id); + int dxgglobal_acquire_channel_lock(void); + void dxgglobal_release_channel_lock(void); + +@@ -594,6 +630,31 @@ int dxgvmb_send_create_sync_object(struct dxgprocess *pr, + *args, struct dxgsyncobject *so); + int dxgvmb_send_destroy_sync_object(struct dxgprocess *pr, + struct d3dkmthandle h); ++int dxgvmb_send_signal_sync_object(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dddicb_signalflags flags, ++ u64 legacy_fence_value, ++ struct d3dkmthandle context, ++ u32 object_count, ++ struct d3dkmthandle *object, ++ u32 context_count, ++ struct d3dkmthandle *contexts, ++ u32 fence_count, u64 *fences, ++ struct eventfd_ctx *cpu_event, ++ struct d3dkmthandle device); ++int dxgvmb_send_wait_sync_object_gpu(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle context, ++ u32 object_count, ++ struct d3dkmthandle *objects, ++ u64 *fences, ++ bool legacy_fence); ++int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct ++ d3dkmt_waitforsynchronizationobjectfromcpu ++ *args, ++ u64 cpu_event); + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); +@@ -609,6 +670,7 @@ int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel, + void *command, + u32 cmd_size); + ++void signal_host_cpu_event(struct dxghostevent *eventhdr); + int ntstatus2int(struct ntstatus status); + + #ifdef DEBUG +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 9bc8931c5043..5a5ca8791d27 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -123,6 +123,102 @@ static struct dxgadapter *find_adapter(struct winluid *luid) + return adapter; + } + ++void dxgglobal_add_host_event(struct dxghostevent *event) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ spin_lock_irq(&dxgglobal->host_event_list_mutex); ++ list_add_tail(&event->host_event_list_entry, ++ &dxgglobal->host_event_list_head); ++ spin_unlock_irq(&dxgglobal->host_event_list_mutex); ++} ++ ++void dxgglobal_remove_host_event(struct 
dxghostevent *event) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ spin_lock_irq(&dxgglobal->host_event_list_mutex); ++ if (event->host_event_list_entry.next != NULL) { ++ list_del(&event->host_event_list_entry); ++ event->host_event_list_entry.next = NULL; ++ } ++ spin_unlock_irq(&dxgglobal->host_event_list_mutex); ++} ++ ++void signal_host_cpu_event(struct dxghostevent *eventhdr) ++{ ++ struct dxghosteventcpu *event = (struct dxghosteventcpu *)eventhdr; ++ ++ if (event->remove_from_list || ++ event->destroy_after_signal) { ++ list_del(&eventhdr->host_event_list_entry); ++ eventhdr->host_event_list_entry.next = NULL; ++ } ++ if (event->cpu_event) { ++ DXG_TRACE("signal cpu event"); ++ eventfd_signal(event->cpu_event, 1); ++ if (event->destroy_after_signal) ++ eventfd_ctx_put(event->cpu_event); ++ } else { ++ DXG_TRACE("signal completion"); ++ complete(event->completion_event); ++ } ++ if (event->destroy_after_signal) { ++ DXG_TRACE("destroying event %p", event); ++ kfree(event); ++ } ++} ++ ++void dxgglobal_signal_host_event(u64 event_id) ++{ ++ struct dxghostevent *event; ++ unsigned long flags; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ DXG_TRACE("Signaling host event %lld", event_id); ++ ++ spin_lock_irqsave(&dxgglobal->host_event_list_mutex, flags); ++ list_for_each_entry(event, &dxgglobal->host_event_list_head, ++ host_event_list_entry) { ++ if (event->event_id == event_id) { ++ DXG_TRACE("found event to signal"); ++ if (event->event_type == dxghostevent_cpu_event) ++ signal_host_cpu_event(event); ++ else ++ DXG_ERR("Unknown host event type"); ++ break; ++ } ++ } ++ spin_unlock_irqrestore(&dxgglobal->host_event_list_mutex, flags); ++} ++ ++struct dxghostevent *dxgglobal_get_host_event(u64 event_id) ++{ ++ struct dxghostevent *entry; ++ struct dxghostevent *event = NULL; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ spin_lock_irq(&dxgglobal->host_event_list_mutex); ++ list_for_each_entry(entry, &dxgglobal->host_event_list_head, ++ host_event_list_entry) { ++ if (entry->event_id == event_id) { ++ list_del(&entry->host_event_list_entry); ++ entry->host_event_list_entry.next = NULL; ++ event = entry; ++ break; ++ } ++ } ++ spin_unlock_irq(&dxgglobal->host_event_list_mutex); ++ return event; ++} ++ ++u64 dxgglobal_new_host_event_id(void) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ return atomic64_inc_return(&dxgglobal->host_event_id); ++} ++ + void dxgglobal_acquire_process_adapter_lock(void) + { + struct dxgglobal *dxgglobal = dxggbl(); +@@ -720,12 +816,16 @@ static struct dxgglobal *dxgglobal_create(void) + INIT_LIST_HEAD(&dxgglobal->vgpu_ch_list_head); + INIT_LIST_HEAD(&dxgglobal->adapter_list_head); + init_rwsem(&dxgglobal->adapter_list_lock); +- + init_rwsem(&dxgglobal->channel_lock); + ++ INIT_LIST_HEAD(&dxgglobal->host_event_list_head); ++ spin_lock_init(&dxgglobal->host_event_list_mutex); ++ atomic64_set(&dxgglobal->host_event_id, 1); ++ + #ifdef DEBUG + dxgk_validate_ioctls(); + #endif ++ + return dxgglobal; + } + +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index d323afc85249..6b2dea24a509 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -281,6 +281,22 @@ static void command_vm_to_host_init1(struct dxgkvmb_command_vm_to_host *command, + command->channel_type = DXGKVMB_VM_TO_HOST; + } + ++static void signal_guest_event(struct dxgkvmb_command_host_to_vm *packet, ++ u32 packet_length) ++{ ++ struct dxgkvmb_command_signalguestevent *command = (void *)packet; ++ ++ if (packet_length < 
sizeof(struct dxgkvmb_command_signalguestevent)) { ++ DXG_ERR("invalid signal guest event packet size"); ++ return; ++ } ++ if (command->event == 0) { ++ DXG_ERR("invalid event pointer"); ++ return; ++ } ++ dxgglobal_signal_host_event(command->event); ++} ++ + static void process_inband_packet(struct dxgvmbuschannel *channel, + struct vmpacket_descriptor *desc) + { +@@ -297,6 +313,7 @@ static void process_inband_packet(struct dxgvmbuschannel *channel, + switch (packet->command_type) { + case DXGK_VMBCOMMAND_SIGNALGUESTEVENT: + case DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE: ++ signal_guest_event(packet, packet_length); + break; + case DXGK_VMBCOMMAND_SENDWNFNOTIFICATION: + break; +@@ -959,7 +976,7 @@ dxgvmb_send_create_context(struct dxgadapter *adapter, + command->priv_drv_data, + args->priv_drv_data_size); + if (ret) { +- dev_err(DXGDEV, ++ DXG_ERR( + "Faled to copy private data to user"); + ret = -EINVAL; + dxgvmb_send_destroy_context(adapter, process, +@@ -1706,6 +1723,206 @@ dxgvmb_send_create_sync_object(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_signal_sync_object(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dddicb_signalflags flags, ++ u64 legacy_fence_value, ++ struct d3dkmthandle context, ++ u32 object_count, ++ struct d3dkmthandle __user *objects, ++ u32 context_count, ++ struct d3dkmthandle __user *contexts, ++ u32 fence_count, ++ u64 __user *fences, ++ struct eventfd_ctx *cpu_event_handle, ++ struct d3dkmthandle device) ++{ ++ int ret; ++ struct dxgkvmb_command_signalsyncobject *command; ++ u32 object_size = object_count * sizeof(struct d3dkmthandle); ++ u32 context_size = context_count * sizeof(struct d3dkmthandle); ++ u32 fence_size = fences ? fence_count * sizeof(u64) : 0; ++ u8 *current_pos; ++ u32 cmd_size = sizeof(struct dxgkvmb_command_signalsyncobject) + ++ object_size + context_size + fence_size; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (context.v) ++ cmd_size += sizeof(struct d3dkmthandle); ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_SIGNALSYNCOBJECT, ++ process->host_handle); ++ ++ if (flags.enqueue_cpu_event) ++ command->cpu_event_handle = (u64) cpu_event_handle; ++ else ++ command->device = device; ++ command->flags = flags; ++ command->fence_value = legacy_fence_value; ++ command->object_count = object_count; ++ command->context_count = context_count; ++ current_pos = (u8 *) &command[1]; ++ ret = copy_from_user(current_pos, objects, object_size); ++ if (ret) { ++ DXG_ERR("Failed to read objects %p %d", ++ objects, object_size); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ current_pos += object_size; ++ if (context.v) { ++ command->context_count++; ++ *(struct d3dkmthandle *) current_pos = context; ++ current_pos += sizeof(struct d3dkmthandle); ++ } ++ if (context_size) { ++ ret = copy_from_user(current_pos, contexts, context_size); ++ if (ret) { ++ DXG_ERR("Failed to read contexts %p %d", ++ contexts, context_size); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ current_pos += context_size; ++ } ++ if (fence_size) { ++ ret = copy_from_user(current_pos, fences, fence_size); ++ if (ret) { ++ DXG_ERR("Failed to read fences %p %d", ++ fences, fence_size); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ if (dxgglobal->async_msg_enabled) { ++ command->hdr.async_msg = 1; ++ ret = dxgvmb_send_async_msg(msg.channel, msg.hdr, msg.size); 
++ } else { ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, ++ msg.size); ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct ++ d3dkmt_waitforsynchronizationobjectfromcpu ++ *args, ++ u64 cpu_event) ++{ ++ int ret = -EINVAL; ++ struct dxgkvmb_command_waitforsyncobjectfromcpu *command; ++ u32 object_size = args->object_count * sizeof(struct d3dkmthandle); ++ u32 fence_size = args->object_count * sizeof(u64); ++ u8 *current_pos; ++ u32 cmd_size = sizeof(*command) + object_size + fence_size; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_WAITFORSYNCOBJECTFROMCPU, ++ process->host_handle); ++ command->device = args->device; ++ command->flags = args->flags; ++ command->object_count = args->object_count; ++ command->guest_event_pointer = (u64) cpu_event; ++ current_pos = (u8 *) &command[1]; ++ ++ ret = copy_from_user(current_pos, args->objects, object_size); ++ if (ret) { ++ DXG_ERR("failed to copy objects"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ current_pos += object_size; ++ ret = copy_from_user(current_pos, args->fence_values, ++ fence_size); ++ if (ret) { ++ DXG_ERR("failed to copy fences"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_wait_sync_object_gpu(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle context, ++ u32 object_count, ++ struct d3dkmthandle *objects, ++ u64 *fences, ++ bool legacy_fence) ++{ ++ int ret; ++ struct dxgkvmb_command_waitforsyncobjectfromgpu *command; ++ u32 fence_size = object_count * sizeof(u64); ++ u32 object_size = object_count * sizeof(struct d3dkmthandle); ++ u8 *current_pos; ++ u32 cmd_size = object_size + fence_size - sizeof(u64) + ++ sizeof(struct dxgkvmb_command_waitforsyncobjectfromgpu); ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (object_count == 0 || object_count > D3DDDI_MAX_OBJECT_WAITED_ON) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_WAITFORSYNCOBJECTFROMGPU, ++ process->host_handle); ++ command->context = context; ++ command->object_count = object_count; ++ command->legacy_fence_object = legacy_fence; ++ current_pos = (u8 *) command->fence_values; ++ memcpy(current_pos, fences, fence_size); ++ current_pos += fence_size; ++ memcpy(current_pos, objects, object_size); ++ ++ if (dxgglobal->async_msg_enabled) { ++ command->hdr.async_msg = 1; ++ ret = dxgvmb_send_async_msg(msg.channel, msg.hdr, msg.size); ++ } else { ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, ++ msg.size); ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 
bbf5f31cdf81..89fecbcefbc8 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -165,6 +165,13 @@ struct dxgkvmb_command_host_to_vm { + enum dxgkvmb_commandtype_host_to_vm command_type; + }; + ++struct dxgkvmb_command_signalguestevent { ++ struct dxgkvmb_command_host_to_vm hdr; ++ u64 event; ++ u64 process_id; ++ bool dereference_event; ++}; ++ + /* Returns ntstatus */ + struct dxgkvmb_command_setiospaceregion { + struct dxgkvmb_command_vm_to_host hdr; +@@ -430,4 +437,45 @@ struct dxgkvmb_command_destroysyncobject { + struct d3dkmthandle sync_object; + }; + ++/* The command returns ntstatus */ ++struct dxgkvmb_command_signalsyncobject { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ u32 object_count; ++ struct d3dddicb_signalflags flags; ++ u32 context_count; ++ u64 fence_value; ++ union { ++ /* Pointer to the guest event object */ ++ u64 cpu_event_handle; ++ /* Non zero when signal from CPU is done */ ++ struct d3dkmthandle device; ++ }; ++ /* struct d3dkmthandle ObjectHandleArray[object_count] */ ++ /* struct d3dkmthandle ContextArray[context_count] */ ++ /* u64 MonitoredFenceValueArray[object_count] */ ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_waitforsyncobjectfromcpu { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ u32 object_count; ++ struct d3dddi_waitforsynchronizationobjectfromcpu_flags flags; ++ u64 guest_event_pointer; ++ bool dereference_event; ++ /* struct d3dkmthandle ObjectHandleArray[object_count] */ ++ /* u64 FenceValueArray [object_count] */ ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_waitforsyncobjectfromgpu { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle context; ++ /* Must be 1 when bLegacyFenceObject is TRUE */ ++ u32 object_count; ++ bool legacy_fence_object; ++ u64 fence_values[1]; ++ /* struct d3dkmthandle ObjectHandles[object_count] */ ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 4bba1e209f33..0025e1ee2d4d 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -759,7 +759,7 @@ get_standard_alloc_priv_data(struct dxgdevice *device, + res_priv_data = vzalloc(res_priv_data_size); + if (res_priv_data == NULL) { + ret = -ENOMEM; +- dev_err(DXGDEV, ++ DXG_ERR( + "failed to alloc memory for res priv data: %d", + res_priv_data_size); + goto cleanup; +@@ -1065,7 +1065,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + alloc_info[i].priv_drv_data, + priv_data_size); + if (ret) { +- dev_err(DXGDEV, ++ DXG_ERR( + "failed to copy priv data"); + ret = -EFAULT; + goto cleanup; +@@ -1348,8 +1348,10 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + struct d3dkmt_createsynchronizationobject2 args; + struct dxgdevice *device = NULL; + struct dxgadapter *adapter = NULL; ++ struct eventfd_ctx *event = NULL; + struct dxgsyncobject *syncobj = NULL; + bool device_lock_acquired = false; ++ struct dxghosteventcpu *host_event = NULL; + + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { +@@ -1384,6 +1386,27 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + goto cleanup; + } + ++ if (args.info.type == _D3DDDI_CPU_NOTIFICATION) { ++ event = eventfd_ctx_fdget((int) ++ args.info.cpu_notification.event); ++ if (IS_ERR(event)) { ++ DXG_ERR("failed to reference the event"); ++ event = NULL; ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ host_event = syncobj->host_event; ++ 
host_event->hdr.event_id = dxgglobal_new_host_event_id(); ++ host_event->cpu_event = event; ++ host_event->remove_from_list = false; ++ host_event->destroy_after_signal = false; ++ host_event->hdr.event_type = dxghostevent_cpu_event; ++ dxgglobal_add_host_event(&host_event->hdr); ++ args.info.cpu_notification.event = host_event->hdr.event_id; ++ DXG_TRACE("creating CPU notification event: %lld", ++ args.info.cpu_notification.event); ++ } ++ + ret = dxgvmb_send_create_sync_object(process, adapter, &args, syncobj); + if (ret < 0) + goto cleanup; +@@ -1411,7 +1434,10 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + if (args.sync_object.v) + dxgvmb_send_destroy_sync_object(process, + args.sync_object); ++ event = NULL; + } ++ if (event) ++ eventfd_ctx_put(event); + } + if (adapter) + dxgadapter_release_lock_shared(adapter); +@@ -1467,6 +1493,659 @@ dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_signal_sync_object(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_signalsynchronizationobject2 args; ++ struct d3dkmt_signalsynchronizationobject2 *__user in_args = inargs; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int ret; ++ u32 fence_count = 1; ++ struct eventfd_ctx *event = NULL; ++ struct dxghosteventcpu *host_event = NULL; ++ bool host_event_added = false; ++ u64 host_event_id = 0; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.context_count >= D3DDDI_MAX_BROADCAST_CONTEXT || ++ args.object_count > D3DDDI_MAX_OBJECT_SIGNALED) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.flags.enqueue_cpu_event) { ++ host_event = kzalloc(sizeof(*host_event), GFP_KERNEL); ++ if (host_event == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ host_event->process = process; ++ event = eventfd_ctx_fdget((int)args.cpu_event_handle); ++ if (IS_ERR(event)) { ++ DXG_ERR("failed to reference the event"); ++ event = NULL; ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ fence_count = 0; ++ host_event->cpu_event = event; ++ host_event_id = dxgglobal_new_host_event_id(); ++ host_event->hdr.event_type = dxghostevent_cpu_event; ++ host_event->hdr.event_id = host_event_id; ++ host_event->remove_from_list = true; ++ host_event->destroy_after_signal = true; ++ dxgglobal_add_host_event(&host_event->hdr); ++ host_event_added = true; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_signal_sync_object(process, adapter, ++ args.flags, args.fence.fence_value, ++ args.context, args.object_count, ++ in_args->object_array, ++ args.context_count, ++ in_args->contexts, fence_count, ++ NULL, (void *)host_event_id, ++ zerohandle); ++ ++ /* ++ * When the send operation succeeds, the host event will be destroyed ++ * after signal from the host ++ */ ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (host_event_added) { ++ /* The event might be signaled and destroyed by host */ ++ host_event = (struct dxghosteventcpu *) ++ dxgglobal_get_host_event(host_event_id); ++ if (host_event) { ++ eventfd_ctx_put(event); ++ event = NULL; ++ kfree(host_event); ++ host_event = NULL; ++ } ++ } ++ if 
(event) ++ eventfd_ctx_put(event); ++ if (host_event) ++ kfree(host_event); ++ } ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_signal_sync_object_cpu(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_signalsynchronizationobjectfromcpu args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ if (args.object_count == 0 || ++ args.object_count > D3DDDI_MAX_OBJECT_SIGNALED) { ++ DXG_TRACE("Too many syncobjects : %d", args.object_count); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_signal_sync_object(process, adapter, ++ args.flags, 0, zerohandle, ++ args.object_count, args.objects, 0, ++ NULL, args.object_count, ++ args.fence_values, NULL, ++ args.device); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_signal_sync_object_gpu(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_signalsynchronizationobjectfromgpu args; ++ struct d3dkmt_signalsynchronizationobjectfromgpu *__user user_args = ++ inargs; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dddicb_signalflags flags = { }; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count == 0 || ++ args.object_count > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_signal_sync_object(process, adapter, ++ flags, 0, zerohandle, ++ args.object_count, ++ args.objects, 1, ++ &user_args->context, ++ args.object_count, ++ args.monitored_fence_values, NULL, ++ zerohandle); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_signal_sync_object_gpu2(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_signalsynchronizationobjectfromgpu2 args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmthandle context_handle; ++ struct eventfd_ctx *event = NULL; ++ u64 *fences = NULL; ++ u32 fence_count = 0; ++ int ret; ++ struct dxghosteventcpu *host_event = NULL; ++ bool host_event_added = false; ++ u64 host_event_id = 0; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input 
args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.flags.enqueue_cpu_event) { ++ if (args.object_count != 0 || args.cpu_event_handle == 0) { ++ DXG_ERR("Bad input in EnqueueCpuEvent: %d %lld", ++ args.object_count, args.cpu_event_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } else if (args.object_count == 0 || ++ args.object_count > DXG_MAX_VM_BUS_PACKET_SIZE || ++ args.context_count == 0 || ++ args.context_count > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("Invalid input: %d %d", ++ args.object_count, args.context_count); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = copy_from_user(&context_handle, args.contexts, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy context handle"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.flags.enqueue_cpu_event) { ++ host_event = kzalloc(sizeof(*host_event), GFP_KERNEL); ++ if (host_event == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ host_event->process = process; ++ event = eventfd_ctx_fdget((int)args.cpu_event_handle); ++ if (IS_ERR(event)) { ++ DXG_ERR("failed to reference the event"); ++ event = NULL; ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ fence_count = 0; ++ host_event->cpu_event = event; ++ host_event_id = dxgglobal_new_host_event_id(); ++ host_event->hdr.event_id = host_event_id; ++ host_event->hdr.event_type = dxghostevent_cpu_event; ++ host_event->remove_from_list = true; ++ host_event->destroy_after_signal = true; ++ dxgglobal_add_host_event(&host_event->hdr); ++ host_event_added = true; ++ } else { ++ fences = args.monitored_fence_values; ++ fence_count = args.object_count; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ context_handle); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_signal_sync_object(process, adapter, ++ args.flags, 0, zerohandle, ++ args.object_count, args.objects, ++ args.context_count, args.contexts, ++ fence_count, fences, ++ (void *)host_event_id, zerohandle); ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (host_event_added) { ++ /* The event might be signaled and destroyed by host */ ++ host_event = (struct dxghosteventcpu *) ++ dxgglobal_get_host_event(host_event_id); ++ if (host_event) { ++ eventfd_ctx_put(event); ++ event = NULL; ++ kfree(host_event); ++ host_event = NULL; ++ } ++ } ++ if (event) ++ eventfd_ctx_put(event); ++ if (host_event) ++ kfree(host_event); ++ } ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_wait_sync_object(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_waitforsynchronizationobject2 args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count > D3DDDI_MAX_OBJECT_WAITED_ON || ++ args.object_count == 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ 
if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ DXG_TRACE("Fence value: %lld", args.fence.fence_value); ++ ret = dxgvmb_send_wait_sync_object_gpu(process, adapter, ++ args.context, args.object_count, ++ args.object_array, ++ &args.fence.fence_value, true); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_wait_sync_object_cpu(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_waitforsynchronizationobjectfromcpu args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct eventfd_ctx *event = NULL; ++ struct dxghosteventcpu host_event = { }; ++ struct dxghosteventcpu *async_host_event = NULL; ++ struct completion local_event = { }; ++ u64 event_id = 0; ++ int ret; ++ bool host_event_added = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count > DXG_MAX_VM_BUS_PACKET_SIZE || ++ args.object_count == 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.async_event) { ++ async_host_event = kzalloc(sizeof(*async_host_event), ++ GFP_KERNEL); ++ if (async_host_event == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ async_host_event->process = process; ++ event = eventfd_ctx_fdget((int)args.async_event); ++ if (IS_ERR(event)) { ++ DXG_ERR("failed to reference the event"); ++ event = NULL; ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ async_host_event->cpu_event = event; ++ async_host_event->hdr.event_id = dxgglobal_new_host_event_id(); ++ async_host_event->destroy_after_signal = true; ++ async_host_event->hdr.event_type = dxghostevent_cpu_event; ++ dxgglobal_add_host_event(&async_host_event->hdr); ++ event_id = async_host_event->hdr.event_id; ++ host_event_added = true; ++ } else { ++ init_completion(&local_event); ++ host_event.completion_event = &local_event; ++ host_event.hdr.event_id = dxgglobal_new_host_event_id(); ++ host_event.hdr.event_type = dxghostevent_cpu_event; ++ dxgglobal_add_host_event(&host_event.hdr); ++ event_id = host_event.hdr.event_id; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_wait_sync_object_cpu(process, adapter, ++ &args, event_id); ++ if (ret < 0) ++ goto cleanup; ++ ++ if (args.async_event == 0) { ++ dxgadapter_release_lock_shared(adapter); ++ adapter = NULL; ++ ret = wait_for_completion_interruptible(&local_event); ++ if (ret) { ++ DXG_ERR("wait_completion_interruptible: %d", ++ ret); ++ ret = -ERESTARTSYS; ++ } ++ } ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ if (host_event.hdr.event_id) ++ dxgglobal_remove_host_event(&host_event.hdr); ++ if (ret < 0) { ++ if (host_event_added) { ++ async_host_event = (struct dxghosteventcpu *) ++ dxgglobal_get_host_event(event_id); ++ if (async_host_event) { ++ if (async_host_event->hdr.event_type == ++ dxghostevent_cpu_event) { ++ eventfd_ctx_put(event); ++ event = NULL; ++ kfree(async_host_event); ++ async_host_event = NULL; ++ } else { ++ DXG_ERR("Invalid event type"); ++ 
DXGKRNL_ASSERT(0); ++ } ++ } ++ } ++ if (event) ++ eventfd_ctx_put(event); ++ if (async_host_event) ++ kfree(async_host_event); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_waitforsynchronizationobjectfromgpu args; ++ struct dxgcontext *context = NULL; ++ struct d3dkmthandle device_handle = {}; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct dxgsyncobject *syncobj = NULL; ++ struct d3dkmthandle *objects = NULL; ++ u32 object_size; ++ u64 *fences = NULL; ++ int ret; ++ enum hmgrentry_type syncobj_type = HMGRENTRY_TYPE_FREE; ++ bool monitored_fence = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count > DXG_MAX_VM_BUS_PACKET_SIZE || ++ args.object_count == 0) { ++ DXG_ERR("Invalid object count: %d", args.object_count); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ object_size = sizeof(struct d3dkmthandle) * args.object_count; ++ objects = vzalloc(object_size); ++ if (objects == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user(objects, args.objects, object_size); ++ if (ret) { ++ DXG_ERR("failed to copy objects"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); ++ context = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (context) { ++ device_handle = context->device_handle; ++ syncobj_type = ++ hmgrtable_get_object_type(&process->handle_table, ++ objects[0]); ++ } ++ if (device_handle.v == 0) { ++ DXG_ERR("Invalid context handle: %x", args.context.v); ++ ret = -EINVAL; ++ } else { ++ if (syncobj_type == HMGRENTRY_TYPE_MONITOREDFENCE) { ++ monitored_fence = true; ++ } else if (syncobj_type == HMGRENTRY_TYPE_DXGSYNCOBJECT) { ++ syncobj = ++ hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ objects[0]); ++ if (syncobj == NULL) { ++ DXG_ERR("Invalid syncobj: %x", ++ objects[0].v); ++ ret = -EINVAL; ++ } else { ++ monitored_fence = syncobj->monitored_fence; ++ } ++ } else { ++ DXG_ERR("Invalid syncobj type: %x", ++ objects[0].v); ++ ret = -EINVAL; ++ } ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED); ++ ++ if (ret < 0) ++ goto cleanup; ++ ++ if (monitored_fence) { ++ object_size = sizeof(u64) * args.object_count; ++ fences = vzalloc(object_size); ++ if (fences == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user(fences, args.monitored_fence_values, ++ object_size); ++ if (ret) { ++ DXG_ERR("failed to copy fences"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } else { ++ fences = &args.fence_value; ++ } ++ ++ device = dxgprocess_device_by_handle(process, device_handle); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_wait_sync_object_gpu(process, adapter, ++ args.context, args.object_count, ++ objects, fences, ++ !monitored_fence); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ if (objects) ++ vfree(objects); ++ if (fences && fences != &args.fence_value) ++ vfree(fences); ++ ++ DXG_TRACE("ioctl:%s 
%d", errorstr(ret), ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, +@@ -1485,8 +2164,8 @@ static struct ioctl_desc ioctls[] = { + /* 0x0e */ {}, + /* 0x0f */ {}, + /* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT}, +-/* 0x11 */ {}, +-/* 0x12 */ {}, ++/* 0x11 */ {dxgkio_signal_sync_object, LX_DXSIGNALSYNCHRONIZATIONOBJECT}, ++/* 0x12 */ {dxgkio_wait_sync_object, LX_DXWAITFORSYNCHRONIZATIONOBJECT}, + /* 0x13 */ {dxgkio_destroy_allocation, LX_DXDESTROYALLOCATION2}, + /* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2}, + /* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER}, +@@ -1517,17 +2196,22 @@ static struct ioctl_desc ioctls[] = { + /* 0x2e */ {}, + /* 0x2f */ {}, + /* 0x30 */ {}, +-/* 0x31 */ {}, +-/* 0x32 */ {}, +-/* 0x33 */ {}, ++/* 0x31 */ {dxgkio_signal_sync_object_cpu, ++ LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU}, ++/* 0x32 */ {dxgkio_signal_sync_object_gpu, ++ LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU}, ++/* 0x33 */ {dxgkio_signal_sync_object_gpu2, ++ LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2}, + /* 0x34 */ {}, + /* 0x35 */ {}, + /* 0x36 */ {}, + /* 0x37 */ {}, + /* 0x38 */ {}, + /* 0x39 */ {}, +-/* 0x3a */ {}, +-/* 0x3b */ {}, ++/* 0x3a */ {dxgkio_wait_sync_object_cpu, ++ LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU}, ++/* 0x3b */ {dxgkio_wait_sync_object_gpu, ++ LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU}, + /* 0x3c */ {}, + /* 0x3d */ {}, + /* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3}, +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index a51b29a6a68f..ee2ebfdd1c13 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -25,6 +25,8 @@ extern const struct d3dkmthandle zerohandle; + * The locks here are in the order from lowest to highest. + * When a lower lock is held, the higher lock should not be acquired. 
+ * ++ * device_list_mutex ++ * host_event_list_mutex + * channel_lock (VMBus channel lock) + * fd_mutex + * plistmutex (process list mutex) +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 4e1069f41d76..39055b0c1069 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -60,6 +60,9 @@ struct winluid { + + #define D3DKMT_CREATEALLOCATION_MAX 1024 + #define D3DKMT_ADAPTERS_MAX 64 ++#define D3DDDI_MAX_BROADCAST_CONTEXT 64 ++#define D3DDDI_MAX_OBJECT_WAITED_ON 32 ++#define D3DDDI_MAX_OBJECT_SIGNALED 32 + + struct d3dkmt_adapterinfo { + struct d3dkmthandle adapter_handle; +@@ -343,6 +346,148 @@ struct d3dkmt_createsynchronizationobject2 { + __u32 reserved1; + }; + ++struct d3dkmt_waitforsynchronizationobject2 { ++ struct d3dkmthandle context; ++ __u32 object_count; ++ struct d3dkmthandle object_array[D3DDDI_MAX_OBJECT_WAITED_ON]; ++ union { ++ struct { ++ __u64 fence_value; ++ } fence; ++ __u64 reserved[8]; ++ }; ++}; ++ ++struct d3dddicb_signalflags { ++ union { ++ struct { ++ __u32 signal_at_submission:1; ++ __u32 enqueue_cpu_event:1; ++ __u32 allow_fence_rewind:1; ++ __u32 reserved:28; ++ __u32 DXGK_SIGNAL_FLAG_INTERNAL0:1; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_signalsynchronizationobject2 { ++ struct d3dkmthandle context; ++ __u32 object_count; ++ struct d3dkmthandle object_array[D3DDDI_MAX_OBJECT_SIGNALED]; ++ struct d3dddicb_signalflags flags; ++ __u32 context_count; ++ struct d3dkmthandle contexts[D3DDDI_MAX_BROADCAST_CONTEXT]; ++ union { ++ struct { ++ __u64 fence_value; ++ } fence; ++ __u64 cpu_event_handle; ++ __u64 reserved[8]; ++ }; ++}; ++ ++struct d3dddi_waitforsynchronizationobjectfromcpu_flags { ++ union { ++ struct { ++ __u32 wait_any:1; ++ __u32 reserved:31; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_waitforsynchronizationobjectfromcpu { ++ struct d3dkmthandle device; ++ __u32 object_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++ __u64 *fence_values; ++#else ++ __u64 objects; ++ __u64 fence_values; ++#endif ++ __u64 async_event; ++ struct d3dddi_waitforsynchronizationobjectfromcpu_flags flags; ++}; ++ ++struct d3dkmt_signalsynchronizationobjectfromcpu { ++ struct d3dkmthandle device; ++ __u32 object_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++ __u64 *fence_values; ++#else ++ __u64 objects; ++ __u64 fence_values; ++#endif ++ struct d3dddicb_signalflags flags; ++}; ++ ++struct d3dkmt_waitforsynchronizationobjectfromgpu { ++ struct d3dkmthandle context; ++ __u32 object_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++#else ++ __u64 objects; ++#endif ++ union { ++#ifdef __KERNEL__ ++ __u64 *monitored_fence_values; ++#else ++ __u64 monitored_fence_values; ++#endif ++ __u64 fence_value; ++ __u64 reserved[8]; ++ }; ++}; ++ ++struct d3dkmt_signalsynchronizationobjectfromgpu { ++ struct d3dkmthandle context; ++ __u32 object_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++#else ++ __u64 objects; ++#endif ++ union { ++#ifdef __KERNEL__ ++ __u64 *monitored_fence_values; ++#else ++ __u64 monitored_fence_values; ++#endif ++ __u64 reserved[8]; ++ }; ++}; ++ ++struct d3dkmt_signalsynchronizationobjectfromgpu2 { ++ __u32 object_count; ++ __u32 reserved1; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++#else ++ __u64 objects; ++#endif ++ struct d3dddicb_signalflags flags; ++ __u32 context_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *contexts; ++#else ++ __u64 contexts; ++#endif ++ union { ++ __u64 fence_value; ++ __u64 cpu_event_handle; 
++#ifdef __KERNEL__ ++ __u64 *monitored_fence_values; ++#else ++ __u64 monitored_fence_values; ++#endif ++ __u64 reserved[8]; ++ }; ++}; ++ + struct d3dkmt_destroysynchronizationobject { + struct d3dkmthandle sync_object; + }; +@@ -576,6 +721,10 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXCREATESYNCHRONIZATIONOBJECT \ + _IOWR(0x47, 0x10, struct d3dkmt_createsynchronizationobject2) ++#define LX_DXSIGNALSYNCHRONIZATIONOBJECT \ ++ _IOWR(0x47, 0x11, struct d3dkmt_signalsynchronizationobject2) ++#define LX_DXWAITFORSYNCHRONIZATIONOBJECT \ ++ _IOWR(0x47, 0x12, struct d3dkmt_waitforsynchronizationobject2) + #define LX_DXDESTROYALLOCATION2 \ + _IOWR(0x47, 0x13, struct d3dkmt_destroyallocation2) + #define LX_DXENUMADAPTERS2 \ +@@ -586,6 +735,16 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ + _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) ++#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \ ++ _IOWR(0x47, 0x31, struct d3dkmt_signalsynchronizationobjectfromcpu) ++#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU \ ++ _IOWR(0x47, 0x32, struct d3dkmt_signalsynchronizationobjectfromgpu) ++#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2 \ ++ _IOWR(0x47, 0x33, struct d3dkmt_signalsynchronizationobjectfromgpu2) ++#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \ ++ _IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu) ++#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \ ++ _IOWR(0x47, 0x3b, struct d3dkmt_waitforsynchronizationobjectfromgpu) + #define LX_DXENUMADAPTERS3 \ + _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) + +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1679-drivers-hv-dxgkrnl-Sharing-of-dxgresource-objects.patch b/patch/kernel/archive/wsl2-arm64-6.1/1679-drivers-hv-dxgkrnl-Sharing-of-dxgresource-objects.patch new file mode 100644 index 000000000000..b199663fd638 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1679-drivers-hv-dxgkrnl-Sharing-of-dxgresource-objects.patch @@ -0,0 +1,1464 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Mon, 31 Jan 2022 17:52:31 -0800 +Subject: drivers: hv: dxgkrnl: Sharing of dxgresource objects + +Implement creation of shared resources and ioctls for sharing +dxgresource objects between processes in the virtual machine. + +A dxgresource object is a collection of dxgallocation objects. +The driver API allows addition/removal of allocations to a resource, +but has limitations on addition/removal of allocations to a shared +resource. When a resource is "sealed", addition/removal of allocations +is not allowed. + +Resources are shared using file descriptor (FD) handles. The name +"NT handle" is used to be compatible with Windows implementation. + +An FD handle is created by the LX_DXSHAREOBJECTS ioctl. The given FD +handle could be sent to another process using any Linux API. + +To use a shared resource object in other ioctls the object needs to be +opened using its FD handle. An resource object is opened by the +LX_DXOPENRESOURCEFROMNTHANDLE ioctl. This ioctl returns a d3dkmthandle +value, which can be used to reference the resource object. + +The LX_DXQUERYRESOURCEINFOFROMNTHANDLE ioctl is used to query private +driver data of a shared resource object. This private data needs to be +used to actually open the object using the LX_DXOPENRESOURCEFROMNTHANDLE +ioctl. 
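+
+On the note above that the FD handle for a shared resource "could be sent to
+another process using any Linux API": the conventional mechanism is SCM_RIGHTS
+ancillary data over an AF_UNIX socket. The helper below is an illustrative
+sketch, not part of the patch; it assumes `fd` came from the LX_DXSHAREOBJECTS
+ioctl and `sock` is an already-connected Unix-domain socket, and it uses only
+standard POSIX calls. The receiving process would then pass the FD it receives
+to LX_DXQUERYRESOURCEINFOFROMNTHANDLE and LX_DXOPENRESOURCEFROMNTHANDLE as
+described above.
+
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/uio.h>
+
+/* Illustrative only: hand a dxgkrnl shared-resource FD to another process. */
+static int send_shared_fd(int sock, int fd)
+{
+	char ctrl[CMSG_SPACE(sizeof(int))] = { 0 };
+	char dummy = 'x';
+	struct iovec io = { .iov_base = &dummy, .iov_len = 1 };
+	struct msghdr msg = { 0 };
+	struct cmsghdr *cmsg;
+
+	msg.msg_iov = &io;
+	msg.msg_iovlen = 1;
+	msg.msg_control = ctrl;
+	msg.msg_controllen = sizeof(ctrl);
+
+	cmsg = CMSG_FIRSTHDR(&msg);
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
+	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
+
+	/* One byte of real payload is required to carry the ancillary data. */
+	return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
+}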
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 81 + + drivers/hv/dxgkrnl/dxgkrnl.h | 77 + + drivers/hv/dxgkrnl/dxgmodule.c | 1 + + drivers/hv/dxgkrnl/dxgvmbus.c | 127 ++ + drivers/hv/dxgkrnl/dxgvmbus.h | 30 + + drivers/hv/dxgkrnl/ioctl.c | 792 +++++++++- + include/uapi/misc/d3dkmthk.h | 96 ++ + 7 files changed, 1200 insertions(+), 4 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 04d827a15c54..26fce9aba4f3 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -160,6 +160,17 @@ void dxgadapter_remove_process(struct dxgprocess_adapter *process_info) + list_del(&process_info->adapter_process_list_entry); + } + ++void dxgadapter_remove_shared_resource(struct dxgadapter *adapter, ++ struct dxgsharedresource *object) ++{ ++ down_write(&adapter->shared_resource_list_lock); ++ if (object->shared_resource_list_entry.next) { ++ list_del(&object->shared_resource_list_entry); ++ object->shared_resource_list_entry.next = NULL; ++ } ++ up_write(&adapter->shared_resource_list_lock); ++} ++ + void dxgadapter_add_syncobj(struct dxgadapter *adapter, + struct dxgsyncobject *object) + { +@@ -489,6 +500,69 @@ void dxgdevice_remove_resource(struct dxgdevice *device, + } + } + ++struct dxgsharedresource *dxgsharedresource_create(struct dxgadapter *adapter) ++{ ++ struct dxgsharedresource *resource; ++ ++ resource = kzalloc(sizeof(*resource), GFP_KERNEL); ++ if (resource) { ++ INIT_LIST_HEAD(&resource->resource_list_head); ++ kref_init(&resource->sresource_kref); ++ mutex_init(&resource->fd_mutex); ++ resource->adapter = adapter; ++ } ++ return resource; ++} ++ ++void dxgsharedresource_destroy(struct kref *refcount) ++{ ++ struct dxgsharedresource *resource; ++ ++ resource = container_of(refcount, struct dxgsharedresource, ++ sresource_kref); ++ if (resource->runtime_private_data) ++ vfree(resource->runtime_private_data); ++ if (resource->resource_private_data) ++ vfree(resource->resource_private_data); ++ if (resource->alloc_private_data_sizes) ++ vfree(resource->alloc_private_data_sizes); ++ if (resource->alloc_private_data) ++ vfree(resource->alloc_private_data); ++ kfree(resource); ++} ++ ++void dxgsharedresource_add_resource(struct dxgsharedresource *shared_resource, ++ struct dxgresource *resource) ++{ ++ down_write(&shared_resource->adapter->shared_resource_list_lock); ++ DXG_TRACE("Adding resource: %p %p", shared_resource, resource); ++ list_add_tail(&resource->shared_resource_list_entry, ++ &shared_resource->resource_list_head); ++ kref_get(&shared_resource->sresource_kref); ++ kref_get(&resource->resource_kref); ++ resource->shared_owner = shared_resource; ++ up_write(&shared_resource->adapter->shared_resource_list_lock); ++} ++ ++void dxgsharedresource_remove_resource(struct dxgsharedresource ++ *shared_resource, ++ struct dxgresource *resource) ++{ ++ struct dxgadapter *adapter = shared_resource->adapter; ++ ++ down_write(&adapter->shared_resource_list_lock); ++ DXG_TRACE("Removing resource: %p %p", shared_resource, resource); ++ if (resource->shared_resource_list_entry.next) { ++ list_del(&resource->shared_resource_list_entry); ++ resource->shared_resource_list_entry.next = NULL; ++ kref_put(&shared_resource->sresource_kref, ++ dxgsharedresource_destroy); ++ resource->shared_owner = NULL; ++ kref_put(&resource->resource_kref, dxgresource_release); ++ } ++ up_write(&adapter->shared_resource_list_lock); ++} ++ + struct dxgresource 
*dxgresource_create(struct dxgdevice *device) + { + struct dxgresource *resource; +@@ -532,6 +606,7 @@ void dxgresource_destroy(struct dxgresource *resource) + struct d3dkmt_destroyallocation2 args = { }; + int destroyed = test_and_set_bit(0, &resource->flags); + struct dxgdevice *device = resource->device; ++ struct dxgsharedresource *shared_resource; + + if (!destroyed) { + dxgresource_free_handle(resource); +@@ -547,6 +622,12 @@ void dxgresource_destroy(struct dxgresource *resource) + dxgallocation_destroy(alloc); + } + dxgdevice_remove_resource(device, resource); ++ shared_resource = resource->shared_owner; ++ if (shared_resource) { ++ dxgsharedresource_remove_resource(shared_resource, ++ resource); ++ resource->shared_owner = NULL; ++ } + } + kref_put(&resource->resource_kref, dxgresource_release); + } +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 8431523f42de..0336e1843223 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -38,6 +38,7 @@ struct dxgdevice; + struct dxgcontext; + struct dxgallocation; + struct dxgresource; ++struct dxgsharedresource; + struct dxgsyncobject; + + /* +@@ -372,6 +373,8 @@ struct dxgadapter { + struct list_head adapter_list_entry; + /* The list of dxgprocess_adapter entries */ + struct list_head adapter_process_list_head; ++ /* List of all dxgsharedresource objects */ ++ struct list_head shared_resource_list_head; + /* List of all non-device dxgsyncobject objects */ + struct list_head syncobj_list_head; + /* This lock protects shared resource and syncobject lists */ +@@ -405,6 +408,8 @@ void dxgadapter_remove_syncobj(struct dxgsyncobject *so); + void dxgadapter_add_process(struct dxgadapter *adapter, + struct dxgprocess_adapter *process_info); + void dxgadapter_remove_process(struct dxgprocess_adapter *process_info); ++void dxgadapter_remove_shared_resource(struct dxgadapter *adapter, ++ struct dxgsharedresource *object); + + /* + * The object represent the device object. +@@ -484,6 +489,64 @@ void dxgcontext_destroy_safe(struct dxgprocess *pr, struct dxgcontext *ctx); + void dxgcontext_release(struct kref *refcount); + bool dxgcontext_is_active(struct dxgcontext *ctx); + ++/* ++ * A shared resource object is created to track the list of dxgresource objects, ++ * which are opened for the same underlying shared resource. ++ * Objects are shared by using a file descriptor handle. ++ * FD is created by calling dxgk_share_objects and providing shandle to ++ * dxgsharedresource. The FD points to a dxgresource object, which is created ++ * by calling dxgk_open_resource_nt. dxgresource object is referenced by the ++ * FD. ++ * ++ * The object is referenced by every dxgresource in its list. ++ * ++ */ ++struct dxgsharedresource { ++ /* Every dxgresource object in the resource list takes a reference */ ++ struct kref sresource_kref; ++ struct dxgadapter *adapter; ++ /* List of dxgresource objects, opened for the shared resource. 
*/ ++ /* Protected by dxgadapter::shared_resource_list_lock */ ++ struct list_head resource_list_head; ++ /* Entry in the list of dxgsharedresource in dxgadapter */ ++ /* Protected by dxgadapter::shared_resource_list_lock */ ++ struct list_head shared_resource_list_entry; ++ struct mutex fd_mutex; ++ /* Referenced by file descriptors */ ++ int host_shared_handle_nt_reference; ++ /* Corresponding global handle in the host */ ++ struct d3dkmthandle host_shared_handle; ++ /* ++ * When the sync object is shared by NT handle, this is the ++ * corresponding handle in the host ++ */ ++ struct d3dkmthandle host_shared_handle_nt; ++ /* Values below are computed when the resource is sealed */ ++ u32 runtime_private_data_size; ++ u32 alloc_private_data_size; ++ u32 resource_private_data_size; ++ u32 allocation_count; ++ union { ++ struct { ++ /* Cannot add new allocations */ ++ u32 sealed:1; ++ u32 reserved:31; ++ }; ++ long flags; ++ }; ++ u32 *alloc_private_data_sizes; ++ u8 *alloc_private_data; ++ u8 *runtime_private_data; ++ u8 *resource_private_data; ++}; ++ ++struct dxgsharedresource *dxgsharedresource_create(struct dxgadapter *adapter); ++void dxgsharedresource_destroy(struct kref *refcount); ++void dxgsharedresource_add_resource(struct dxgsharedresource *sres, ++ struct dxgresource *res); ++void dxgsharedresource_remove_resource(struct dxgsharedresource *sres, ++ struct dxgresource *res); ++ + struct dxgresource { + struct kref resource_kref; + enum dxgobjectstate object_state; +@@ -504,6 +567,8 @@ struct dxgresource { + }; + long flags; + }; ++ /* Owner of the shared resource */ ++ struct dxgsharedresource *shared_owner; + }; + + struct dxgresource *dxgresource_create(struct dxgdevice *dev); +@@ -658,6 +723,18 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); ++int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process, ++ struct d3dkmthandle object, ++ struct d3dkmthandle *shared_handle); ++int dxgvmb_send_destroy_nt_shared_object(struct d3dkmthandle shared_handle); ++int dxgvmb_send_open_resource(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle device, ++ struct d3dkmthandle global_share, ++ u32 allocation_count, ++ u32 total_priv_drv_data_size, ++ struct d3dkmthandle *resource_handle, ++ struct d3dkmthandle *alloc_handles); + int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + enum d3dkmdt_standardallocationtype t, + struct d3dkmdt_gdisurfacedata *data, +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 5a5ca8791d27..69e221613af9 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -258,6 +258,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, + init_rwsem(&adapter->core_lock); + + INIT_LIST_HEAD(&adapter->adapter_process_list_head); ++ INIT_LIST_HEAD(&adapter->shared_resource_list_head); + INIT_LIST_HEAD(&adapter->syncobj_list_head); + init_rwsem(&adapter->shared_resource_list_lock); + adapter->pci_dev = dev; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 6b2dea24a509..b3a4377c8b0b 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -712,6 +712,79 @@ int dxgvmb_send_destroy_process(struct d3dkmthandle process) + return ret; + } + ++int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process, ++ struct d3dkmthandle object, ++ 
struct d3dkmthandle *shared_handle) ++{ ++ struct dxgkvmb_command_createntsharedobject *command; ++ int ret; ++ struct dxgvmbusmsg msg; ++ ++ ret = init_message(&msg, NULL, process, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vm_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CREATENTSHAREDOBJECT, ++ process->host_handle); ++ command->object = object; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = dxgvmb_send_sync_msg(dxgglobal_get_dxgvmbuschannel(), ++ msg.hdr, msg.size, shared_handle, ++ sizeof(*shared_handle)); ++ ++ dxgglobal_release_channel_lock(); ++ ++ if (ret < 0) ++ goto cleanup; ++ if (shared_handle->v == 0) { ++ DXG_ERR("failed to create NT shared object"); ++ ret = -ENOTRECOVERABLE; ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_destroy_nt_shared_object(struct d3dkmthandle shared_handle) ++{ ++ struct dxgkvmb_command_destroyntsharedobject *command; ++ int ret; ++ struct dxgvmbusmsg msg; ++ ++ ret = init_message(&msg, NULL, NULL, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vm_to_host_init1(&command->hdr, ++ DXGK_VMBCOMMAND_DESTROYNTSHAREDOBJECT); ++ command->shared_handle = shared_handle; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(dxgglobal_get_dxgvmbuschannel(), ++ msg.hdr, msg.size); ++ ++ dxgglobal_release_channel_lock(); ++ ++cleanup: ++ free_message(&msg, NULL); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_destroy_sync_object(struct dxgprocess *process, + struct d3dkmthandle sync_object) + { +@@ -1552,6 +1625,60 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_open_resource(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle device, ++ struct d3dkmthandle global_share, ++ u32 allocation_count, ++ u32 total_priv_drv_data_size, ++ struct d3dkmthandle *resource_handle, ++ struct d3dkmthandle *alloc_handles) ++{ ++ struct dxgkvmb_command_openresource *command; ++ struct dxgkvmb_command_openresource_return *result; ++ struct d3dkmthandle *handles; ++ int ret; ++ int i; ++ u32 result_size = allocation_count * sizeof(struct d3dkmthandle) + ++ sizeof(*result); ++ struct dxgvmbusmsgres msg = {.hdr = NULL}; ++ ++ ret = init_message_res(&msg, adapter, process, sizeof(*command), ++ result_size); ++ if (ret) ++ goto cleanup; ++ command = msg.msg; ++ result = msg.res; ++ ++ command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_OPENRESOURCE, ++ process->host_handle); ++ command->device = device; ++ command->nt_security_sharing = 1; ++ command->global_share = global_share; ++ command->allocation_count = allocation_count; ++ command->total_priv_drv_data_size = total_priv_drv_data_size; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ result, msg.res_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result->status); ++ if (ret < 0) ++ goto cleanup; ++ ++ *resource_handle = result->resource; ++ handles = (struct d3dkmthandle *) &result[1]; ++ for (i = 0; i < allocation_count; i++) ++ alloc_handles[i] = handles[i]; ++ ++cleanup: ++ free_message((struct dxgvmbusmsg *)&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + enum d3dkmdt_standardallocationtype 
alloctype, + struct d3dkmdt_gdisurfacedata *alloc_data, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 89fecbcefbc8..73d7adac60a1 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -172,6 +172,21 @@ struct dxgkvmb_command_signalguestevent { + bool dereference_event; + }; + ++/* ++ * The command returns struct d3dkmthandle of a shared object for the ++ * given pre-process object ++ */ ++struct dxgkvmb_command_createntsharedobject { ++ struct dxgkvmb_command_vm_to_host hdr; ++ struct d3dkmthandle object; ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_destroyntsharedobject { ++ struct dxgkvmb_command_vm_to_host hdr; ++ struct d3dkmthandle shared_handle; ++}; ++ + /* Returns ntstatus */ + struct dxgkvmb_command_setiospaceregion { + struct dxgkvmb_command_vm_to_host hdr; +@@ -305,6 +320,21 @@ struct dxgkvmb_command_createallocation { + /* u8 priv_drv_data[] for each alloc_info */ + }; + ++struct dxgkvmb_command_openresource { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ bool nt_security_sharing; ++ struct d3dkmthandle global_share; ++ u32 allocation_count; ++ u32 total_priv_drv_data_size; ++}; ++ ++struct dxgkvmb_command_openresource_return { ++ struct d3dkmthandle resource; ++ struct ntstatus status; ++/* struct d3dkmthandle allocation[allocation_count]; */ ++}; ++ + struct dxgkvmb_command_getstandardallocprivdata { + struct dxgkvmb_command_vgpu_to_host hdr; + enum d3dkmdt_standardallocationtype alloc_type; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 0025e1ee2d4d..abb64f6c3a59 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -36,8 +36,35 @@ static char *errorstr(int ret) + } + #endif + ++static int dxgsharedresource_release(struct inode *inode, struct file *file) ++{ ++ struct dxgsharedresource *resource = file->private_data; ++ ++ DXG_TRACE("Release resource: %p", resource); ++ mutex_lock(&resource->fd_mutex); ++ kref_get(&resource->sresource_kref); ++ resource->host_shared_handle_nt_reference--; ++ if (resource->host_shared_handle_nt_reference == 0) { ++ if (resource->host_shared_handle_nt.v) { ++ dxgvmb_send_destroy_nt_shared_object( ++ resource->host_shared_handle_nt); ++ DXG_TRACE("Resource host_handle_nt destroyed: %x", ++ resource->host_shared_handle_nt.v); ++ resource->host_shared_handle_nt.v = 0; ++ } ++ kref_put(&resource->sresource_kref, dxgsharedresource_destroy); ++ } ++ mutex_unlock(&resource->fd_mutex); ++ kref_put(&resource->sresource_kref, dxgsharedresource_destroy); ++ return 0; ++} ++ ++static const struct file_operations dxg_resource_fops = { ++ .release = dxgsharedresource_release, ++}; ++ + static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, +- void *__user inargs) ++ void *__user inargs) + { + struct d3dkmt_openadapterfromluid args; + int ret; +@@ -212,6 +239,98 @@ dxgkp_enum_adapters(struct dxgprocess *process, + return ret; + } + ++static int dxgsharedresource_seal(struct dxgsharedresource *shared_resource) ++{ ++ int ret = 0; ++ int i = 0; ++ u8 *private_data; ++ u32 data_size; ++ struct dxgresource *resource; ++ struct dxgallocation *alloc; ++ ++ DXG_TRACE("Sealing resource: %p", shared_resource); ++ ++ down_write(&shared_resource->adapter->shared_resource_list_lock); ++ if (shared_resource->sealed) { ++ DXG_TRACE("Resource already sealed"); ++ goto cleanup; ++ } ++ shared_resource->sealed = 1; ++ if (!list_empty(&shared_resource->resource_list_head)) { ++ resource 
= ++ list_first_entry(&shared_resource->resource_list_head, ++ struct dxgresource, ++ shared_resource_list_entry); ++ DXG_TRACE("First resource: %p", resource); ++ mutex_lock(&resource->resource_mutex); ++ list_for_each_entry(alloc, &resource->alloc_list_head, ++ alloc_list_entry) { ++ DXG_TRACE("Resource alloc: %p %d", alloc, ++ alloc->priv_drv_data->data_size); ++ shared_resource->allocation_count++; ++ shared_resource->alloc_private_data_size += ++ alloc->priv_drv_data->data_size; ++ if (shared_resource->alloc_private_data_size < ++ alloc->priv_drv_data->data_size) { ++ DXG_ERR("alloc private data overflow"); ++ ret = -EINVAL; ++ goto cleanup1; ++ } ++ } ++ if (shared_resource->alloc_private_data_size == 0) { ++ ret = -EINVAL; ++ goto cleanup1; ++ } ++ shared_resource->alloc_private_data = ++ vzalloc(shared_resource->alloc_private_data_size); ++ if (shared_resource->alloc_private_data == NULL) { ++ ret = -EINVAL; ++ goto cleanup1; ++ } ++ shared_resource->alloc_private_data_sizes = ++ vzalloc(sizeof(u32)*shared_resource->allocation_count); ++ if (shared_resource->alloc_private_data_sizes == NULL) { ++ ret = -EINVAL; ++ goto cleanup1; ++ } ++ private_data = shared_resource->alloc_private_data; ++ data_size = shared_resource->alloc_private_data_size; ++ i = 0; ++ list_for_each_entry(alloc, &resource->alloc_list_head, ++ alloc_list_entry) { ++ u32 alloc_data_size = alloc->priv_drv_data->data_size; ++ ++ if (alloc_data_size) { ++ if (data_size < alloc_data_size) { ++ dev_err(DXGDEV, ++ "Invalid private data size"); ++ ret = -EINVAL; ++ goto cleanup1; ++ } ++ shared_resource->alloc_private_data_sizes[i] = ++ alloc_data_size; ++ memcpy(private_data, ++ alloc->priv_drv_data->data, ++ alloc_data_size); ++ vfree(alloc->priv_drv_data); ++ alloc->priv_drv_data = NULL; ++ private_data += alloc_data_size; ++ data_size -= alloc_data_size; ++ } ++ i++; ++ } ++ if (data_size != 0) { ++ DXG_ERR("Data size mismatch"); ++ ret = -EINVAL; ++ } ++cleanup1: ++ mutex_unlock(&resource->resource_mutex); ++ } ++cleanup: ++ up_write(&shared_resource->adapter->shared_resource_list_lock); ++ return ret; ++} ++ + static int + dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs) + { +@@ -803,6 +922,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + u32 alloc_info_size = 0; + struct dxgresource *resource = NULL; + struct dxgallocation **dxgalloc = NULL; ++ struct dxgsharedresource *shared_resource = NULL; + bool resource_mutex_acquired = false; + u32 standard_alloc_priv_data_size = 0; + void *standard_alloc_priv_data = NULL; +@@ -973,6 +1093,76 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + } + resource->private_runtime_handle = + args.private_runtime_resource_handle; ++ if (args.flags.create_shared) { ++ if (!args.flags.nt_security_sharing) { ++ dev_err(DXGDEV, ++ "nt_security_sharing must be set"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ shared_resource = dxgsharedresource_create(adapter); ++ if (shared_resource == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ shared_resource->runtime_private_data_size = ++ args.priv_drv_data_size; ++ shared_resource->resource_private_data_size = ++ args.priv_drv_data_size; ++ ++ shared_resource->runtime_private_data_size = ++ args.private_runtime_data_size; ++ shared_resource->resource_private_data_size = ++ args.priv_drv_data_size; ++ dxgsharedresource_add_resource(shared_resource, ++ resource); ++ if (args.flags.standard_allocation) { ++ shared_resource->resource_private_data = ++ res_priv_data; 
++ shared_resource->resource_private_data_size = ++ res_priv_data_size; ++ res_priv_data = NULL; ++ } ++ if (args.private_runtime_data_size) { ++ shared_resource->runtime_private_data = ++ vzalloc(args.private_runtime_data_size); ++ if (shared_resource->runtime_private_data == ++ NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user( ++ shared_resource->runtime_private_data, ++ args.private_runtime_data, ++ args.private_runtime_data_size); ++ if (ret) { ++ dev_err(DXGDEV, ++ "failed to copy runtime data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ if (args.priv_drv_data_size && ++ !args.flags.standard_allocation) { ++ shared_resource->resource_private_data = ++ vzalloc(args.priv_drv_data_size); ++ if (shared_resource->resource_private_data == ++ NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user( ++ shared_resource->resource_private_data, ++ args.priv_drv_data, ++ args.priv_drv_data_size); ++ if (ret) { ++ dev_err(DXGDEV, ++ "failed to copy res data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ } + } else { + if (args.resource.v) { + /* Adding new allocations to the given resource */ +@@ -991,6 +1181,12 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + ret = -EINVAL; + goto cleanup; + } ++ if (resource->shared_owner && ++ resource->shared_owner->sealed) { ++ DXG_ERR("Resource is sealed"); ++ ret = -EINVAL; ++ goto cleanup; ++ } + /* Synchronize with resource destruction */ + mutex_lock(&resource->resource_mutex); + if (!dxgresource_is_active(resource)) { +@@ -1092,9 +1288,16 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + } + } + if (resource && args.flags.create_resource) { ++ if (shared_resource) { ++ dxgsharedresource_remove_resource ++ (shared_resource, resource); ++ } + dxgresource_destroy(resource); + } + } ++ if (shared_resource) ++ kref_put(&shared_resource->sresource_kref, ++ dxgsharedresource_destroy); + if (dxgalloc) + vfree(dxgalloc); + if (standard_alloc_priv_data) +@@ -1140,6 +1343,10 @@ static int validate_alloc(struct dxgallocation *alloc0, + fail_reason = 4; + goto cleanup; + } ++ if (alloc->owner.resource->shared_owner) { ++ fail_reason = 5; ++ goto cleanup; ++ } + } else { + if (alloc->owner.device != device) { + fail_reason = 6; +@@ -2146,6 +2353,582 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgsharedresource_get_host_nt_handle(struct dxgsharedresource *resource, ++ struct dxgprocess *process, ++ struct d3dkmthandle objecthandle) ++{ ++ int ret = 0; ++ ++ mutex_lock(&resource->fd_mutex); ++ if (resource->host_shared_handle_nt_reference == 0) { ++ ret = dxgvmb_send_create_nt_shared_object(process, ++ objecthandle, ++ &resource->host_shared_handle_nt); ++ if (ret < 0) ++ goto cleanup; ++ DXG_TRACE("Resource host_shared_handle_ht: %x", ++ resource->host_shared_handle_nt.v); ++ kref_get(&resource->sresource_kref); ++ } ++ resource->host_shared_handle_nt_reference++; ++cleanup: ++ mutex_unlock(&resource->fd_mutex); ++ return ret; ++} ++ ++enum dxg_sharedobject_type { ++ DXG_SHARED_RESOURCE ++}; ++ ++static int get_object_fd(enum dxg_sharedobject_type type, ++ void *object, int *fdout) ++{ ++ struct file *file; ++ int fd; ++ ++ fd = get_unused_fd_flags(O_CLOEXEC); ++ if (fd < 0) { ++ DXG_ERR("get_unused_fd_flags failed: %x", fd); ++ return -ENOTRECOVERABLE; ++ } ++ ++ switch (type) { ++ case DXG_SHARED_RESOURCE: ++ file = anon_inode_getfile("dxgresource", ++ &dxg_resource_fops, object, 0); ++ 
break; ++ default: ++ return -EINVAL; ++ }; ++ if (IS_ERR(file)) { ++ DXG_ERR("anon_inode_getfile failed: %x", fd); ++ put_unused_fd(fd); ++ return -ENOTRECOVERABLE; ++ } ++ ++ fd_install(fd, file); ++ *fdout = fd; ++ return 0; ++} ++ ++static int ++dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_shareobjects args; ++ enum hmgrentry_type object_type; ++ struct dxgsyncobject *syncobj = NULL; ++ struct dxgresource *resource = NULL; ++ struct dxgsharedresource *shared_resource = NULL; ++ struct d3dkmthandle *handles = NULL; ++ int object_fd = -1; ++ void *obj = NULL; ++ u32 handle_size; ++ int ret; ++ u64 tmp = 0; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count == 0 || args.object_count > 1) { ++ DXG_ERR("invalid object count %d", args.object_count); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ handle_size = args.object_count * sizeof(struct d3dkmthandle); ++ ++ handles = vzalloc(handle_size); ++ if (handles == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user(handles, args.objects, handle_size); ++ if (ret) { ++ DXG_ERR("failed to copy object handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ DXG_TRACE("Sharing handle: %x", handles[0].v); ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); ++ object_type = hmgrtable_get_object_type(&process->handle_table, ++ handles[0]); ++ obj = hmgrtable_get_object(&process->handle_table, handles[0]); ++ if (obj == NULL) { ++ DXG_ERR("invalid object handle %x", handles[0].v); ++ ret = -EINVAL; ++ } else { ++ switch (object_type) { ++ case HMGRENTRY_TYPE_DXGRESOURCE: ++ resource = obj; ++ if (resource->shared_owner) { ++ kref_get(&resource->resource_kref); ++ shared_resource = resource->shared_owner; ++ } else { ++ resource = NULL; ++ DXG_ERR("resource object shared"); ++ ret = -EINVAL; ++ } ++ break; ++ default: ++ DXG_ERR("invalid object type %d", object_type); ++ ret = -EINVAL; ++ break; ++ } ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED); ++ ++ if (ret < 0) ++ goto cleanup; ++ ++ switch (object_type) { ++ case HMGRENTRY_TYPE_DXGRESOURCE: ++ ret = get_object_fd(DXG_SHARED_RESOURCE, shared_resource, ++ &object_fd); ++ if (ret < 0) { ++ DXG_ERR("get_object_fd failed for resource"); ++ goto cleanup; ++ } ++ ret = dxgsharedresource_get_host_nt_handle(shared_resource, ++ process, handles[0]); ++ if (ret < 0) { ++ DXG_ERR("get_host_res_nt_handle failed"); ++ goto cleanup; ++ } ++ ret = dxgsharedresource_seal(shared_resource); ++ if (ret < 0) { ++ DXG_ERR("dxgsharedresource_seal failed"); ++ goto cleanup; ++ } ++ break; ++ default: ++ ret = -EINVAL; ++ break; ++ } ++ ++ if (ret < 0) ++ goto cleanup; ++ ++ DXG_TRACE("Object FD: %x", object_fd); ++ ++ tmp = (u64) object_fd; ++ ++ ret = copy_to_user(args.shared_handle, &tmp, sizeof(u64)); ++ if (ret < 0) ++ DXG_ERR("failed to copy shared handle"); ++ ++cleanup: ++ if (ret < 0) { ++ if (object_fd >= 0) ++ put_unused_fd(object_fd); ++ } ++ ++ if (handles) ++ vfree(handles); ++ ++ if (syncobj) ++ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release); ++ ++ if (resource) ++ kref_put(&resource->resource_kref, dxgresource_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_query_resource_info_nt(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_queryresourceinfofromnthandle args; ++ int ret; ++ struct dxgdevice 
*device = NULL; ++ struct dxgsharedresource *shared_resource = NULL; ++ struct file *file = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ file = fget(args.nt_handle); ++ if (!file) { ++ DXG_ERR("failed to get file from handle: %llx", ++ args.nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (file->f_op != &dxg_resource_fops) { ++ DXG_ERR("invalid fd: %llx", args.nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ shared_resource = file->private_data; ++ if (shared_resource == NULL) { ++ DXG_ERR("invalid private data: %llx", args.nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ kref_put(&device->device_kref, dxgdevice_release); ++ device = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgsharedresource_seal(shared_resource); ++ if (ret < 0) ++ goto cleanup; ++ ++ args.private_runtime_data_size = ++ shared_resource->runtime_private_data_size; ++ args.resource_priv_drv_data_size = ++ shared_resource->resource_private_data_size; ++ args.allocation_count = shared_resource->allocation_count; ++ args.total_priv_drv_data_size = ++ shared_resource->alloc_private_data_size; ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (file) ++ fput(file); ++ if (device) ++ dxgdevice_release_lock_shared(device); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++assign_resource_handles(struct dxgprocess *process, ++ struct dxgsharedresource *shared_resource, ++ struct d3dkmt_openresourcefromnthandle *args, ++ struct d3dkmthandle resource_handle, ++ struct dxgresource *resource, ++ struct dxgallocation **allocs, ++ struct d3dkmthandle *handles) ++{ ++ int ret; ++ int i; ++ u8 *cur_priv_data; ++ u32 total_priv_data_size = 0; ++ struct d3dddi_openallocationinfo2 open_alloc_info = { }; ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, resource, ++ HMGRENTRY_TYPE_DXGRESOURCE, ++ resource_handle); ++ if (ret < 0) ++ goto cleanup; ++ resource->handle = resource_handle; ++ resource->handle_valid = 1; ++ cur_priv_data = args->total_priv_drv_data; ++ for (i = 0; i < args->allocation_count; i++) { ++ ret = hmgrtable_assign_handle(&process->handle_table, allocs[i], ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ handles[i]); ++ if (ret < 0) ++ goto cleanup; ++ allocs[i]->alloc_handle = handles[i]; ++ allocs[i]->handle_valid = 1; ++ open_alloc_info.allocation = handles[i]; ++ if (shared_resource->alloc_private_data_sizes) ++ open_alloc_info.priv_drv_data_size = ++ shared_resource->alloc_private_data_sizes[i]; ++ else ++ open_alloc_info.priv_drv_data_size = 0; ++ ++ total_priv_data_size += open_alloc_info.priv_drv_data_size; ++ open_alloc_info.priv_drv_data = cur_priv_data; ++ cur_priv_data += open_alloc_info.priv_drv_data_size; ++ ++ ret = copy_to_user(&args->open_alloc_info[i], ++ &open_alloc_info, ++ sizeof(open_alloc_info)); ++ if (ret) { ++ DXG_ERR("failed to copy alloc info"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ args->total_priv_drv_data_size = total_priv_data_size; ++cleanup: ++ 
hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ if (ret < 0) { ++ for (i = 0; i < args->allocation_count; i++) ++ dxgallocation_free_handle(allocs[i]); ++ dxgresource_free_handle(resource); ++ } ++ return ret; ++} ++ ++static int ++open_resource(struct dxgprocess *process, ++ struct d3dkmt_openresourcefromnthandle *args, ++ __user struct d3dkmthandle *res_out, ++ __user u32 *total_driver_data_size_out) ++{ ++ int ret = 0; ++ int i; ++ struct d3dkmthandle *alloc_handles = NULL; ++ int alloc_handles_size = sizeof(struct d3dkmthandle) * ++ args->allocation_count; ++ struct dxgsharedresource *shared_resource = NULL; ++ struct dxgresource *resource = NULL; ++ struct dxgallocation **allocs = NULL; ++ struct d3dkmthandle global_share = {}; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmthandle resource_handle = {}; ++ struct file *file = NULL; ++ ++ DXG_TRACE("Opening resource handle: %llx", args->nt_handle); ++ ++ file = fget(args->nt_handle); ++ if (!file) { ++ DXG_ERR("failed to get file from handle: %llx", ++ args->nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ if (file->f_op != &dxg_resource_fops) { ++ DXG_ERR("invalid fd type: %llx", args->nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ shared_resource = file->private_data; ++ if (shared_resource == NULL) { ++ DXG_ERR("invalid private data: %llx", args->nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ if (kref_get_unless_zero(&shared_resource->sresource_kref) == 0) ++ shared_resource = NULL; ++ else ++ global_share = shared_resource->host_shared_handle_nt; ++ ++ if (shared_resource == NULL) { ++ DXG_ERR("Invalid shared resource handle: %x", ++ (u32)args->nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ DXG_TRACE("Shared resource: %p %x", shared_resource, ++ global_share.v); ++ ++ device = dxgprocess_device_by_handle(process, args->device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ kref_put(&device->device_kref, dxgdevice_release); ++ device = NULL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgsharedresource_seal(shared_resource); ++ if (ret < 0) ++ goto cleanup; ++ ++ if (args->allocation_count != shared_resource->allocation_count || ++ args->private_runtime_data_size < ++ shared_resource->runtime_private_data_size || ++ args->resource_priv_drv_data_size < ++ shared_resource->resource_private_data_size || ++ args->total_priv_drv_data_size < ++ shared_resource->alloc_private_data_size) { ++ ret = -EINVAL; ++ DXG_ERR("Invalid data sizes"); ++ goto cleanup; ++ } ++ ++ alloc_handles = vzalloc(alloc_handles_size); ++ if (alloc_handles == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ allocs = vzalloc(sizeof(void *) * args->allocation_count); ++ if (allocs == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ resource = dxgresource_create(device); ++ if (resource == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ dxgsharedresource_add_resource(shared_resource, resource); ++ ++ for (i = 0; i < args->allocation_count; i++) { ++ allocs[i] = dxgallocation_create(process); ++ if (allocs[i] == NULL) ++ goto cleanup; ++ ret = dxgresource_add_alloc(resource, allocs[i]); ++ if (ret < 0) ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_open_resource(process, adapter, ++ device->handle, global_share, ++ args->allocation_count, ++ 
args->total_priv_drv_data_size, ++ &resource_handle, alloc_handles); ++ if (ret < 0) { ++ DXG_ERR("dxgvmb_send_open_resource failed"); ++ goto cleanup; ++ } ++ ++ if (shared_resource->runtime_private_data_size) { ++ ret = copy_to_user(args->private_runtime_data, ++ shared_resource->runtime_private_data, ++ shared_resource->runtime_private_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy runtime data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ if (shared_resource->resource_private_data_size) { ++ ret = copy_to_user(args->resource_priv_drv_data, ++ shared_resource->resource_private_data, ++ shared_resource->resource_private_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy resource data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ if (shared_resource->alloc_private_data_size) { ++ ret = copy_to_user(args->total_priv_drv_data, ++ shared_resource->alloc_private_data, ++ shared_resource->alloc_private_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ ret = assign_resource_handles(process, shared_resource, args, ++ resource_handle, resource, allocs, ++ alloc_handles); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(res_out, &resource_handle, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy resource handle to user"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = copy_to_user(total_driver_data_size_out, ++ &args->total_priv_drv_data_size, sizeof(u32)); ++ if (ret) { ++ DXG_ERR("failed to copy total driver data size"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (resource_handle.v) { ++ struct d3dkmt_destroyallocation2 tmp = { }; ++ ++ tmp.flags.assume_not_in_use = 1; ++ tmp.device = args->device; ++ tmp.resource = resource_handle; ++ ret = dxgvmb_send_destroy_allocation(process, device, ++ &tmp, NULL); ++ } ++ if (resource) ++ dxgresource_destroy(resource); ++ } ++ ++ if (file) ++ fput(file); ++ if (allocs) ++ vfree(allocs); ++ if (shared_resource) ++ kref_put(&shared_resource->sresource_kref, ++ dxgsharedresource_destroy); ++ if (alloc_handles) ++ vfree(alloc_handles); ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ dxgdevice_release_lock_shared(device); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ return ret; ++} ++ ++static int ++dxgkio_open_resource_nt(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_openresourcefromnthandle args; ++ struct d3dkmt_openresourcefromnthandle *__user args_user = inargs; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = open_resource(process, &args, ++ &args_user->resource, ++ &args_user->total_priv_drv_data_size); ++ ++cleanup: ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, +@@ -2215,10 +2998,11 @@ static struct ioctl_desc ioctls[] = { + /* 0x3c */ {}, + /* 0x3d */ {}, + /* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3}, +-/* 0x3f */ {}, ++/* 0x3f */ {dxgkio_share_objects, LX_DXSHAREOBJECTS}, + /* 0x40 */ {}, +-/* 0x41 */ {}, +-/* 0x42 */ {}, ++/* 0x41 */ {dxgkio_query_resource_info_nt, ++ LX_DXQUERYRESOURCEINFOFROMNTHANDLE}, ++/* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE}, + /* 0x43 */ {}, + /* 0x44 */ {}, + /* 0x45 */ {}, 
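
The hunks above wire LX_DXSHAREOBJECTS, LX_DXQUERYRESOURCEINFOFROMNTHANDLE and LX_DXOPENRESOURCEFROMNTHANDLE into the ioctl table; the uapi structures they consume are added to d3dkmthk.h in the next hunk. The block below is not part of the vendored patch: it is a minimal user-space sketch of the calling sequence those ioctls enable, assuming the dxgkrnl character device (commonly /dev/dxg) is already open as dxg_fd, that the uapi header is installed as <misc/d3dkmthk.h>, and that `res` is a resource handle previously created with flags.create_shared and flags.nt_security_sharing set (see dxgkio_create_allocation above). The helper names are illustrative only.

	/* Illustrative user-space sketch -- not part of the vendored patch. */
	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <misc/d3dkmthk.h>	/* assumed install path of the uapi header */

	/* Sharing process: turn a shared resource handle into a file descriptor. */
	static int share_resource_fd(int dxg_fd, struct d3dkmthandle res)
	{
		struct d3dkmt_shareobjects share;
		uint64_t out_fd = 0;

		memset(&share, 0, sizeof(share));
		share.object_count = 1;			/* the handler accepts exactly one object */
		share.objects = (uintptr_t)&res;	/* array of struct d3dkmthandle */
		share.shared_handle = (uintptr_t)&out_fd; /* kernel writes the new FD here as a u64 */

		if (ioctl(dxg_fd, LX_DXSHAREOBJECTS, &share) < 0)
			return -1;
		return (int)out_fd;	/* can be passed to another process, e.g. via SCM_RIGHTS */
	}

	/* Receiving process: query buffer sizes before opening the shared resource. */
	static int query_shared_resource(int dxg_fd, struct d3dkmthandle device,
					 int res_fd,
					 struct d3dkmt_queryresourceinfofromnthandle *info)
	{
		memset(info, 0, sizeof(*info));
		info->device = device;
		info->nt_handle = (uint64_t)res_fd;
		return ioctl(dxg_fd, LX_DXQUERYRESOURCEINFOFROMNTHANDLE, info);
	}

On success the kernel fills allocation_count and the three *_data_size fields; a subsequent d3dkmt_openresourcefromnthandle passed to LX_DXOPENRESOURCEFROMNTHANDLE must supply buffers at least that large, matching the size checks performed in open_resource() above.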
+diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 39055b0c1069..f74564cf7ee9 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -682,6 +682,94 @@ enum d3dkmt_deviceexecution_state { + _D3DKMT_DEVICEEXECUTION_ERROR_DMAPAGEFAULT = 7, + }; + ++struct d3dddi_openallocationinfo2 { ++ struct d3dkmthandle allocation; ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ __u32 priv_drv_data_size; ++ __u64 gpu_va; ++ __u64 reserved[6]; ++}; ++ ++struct d3dkmt_openresourcefromnthandle { ++ struct d3dkmthandle device; ++ __u32 reserved; ++ __u64 nt_handle; ++ __u32 allocation_count; ++ __u32 reserved1; ++#ifdef __KERNEL__ ++ struct d3dddi_openallocationinfo2 *open_alloc_info; ++#else ++ __u64 open_alloc_info; ++#endif ++ int private_runtime_data_size; ++ __u32 reserved2; ++#ifdef __KERNEL__ ++ void *private_runtime_data; ++#else ++ __u64 private_runtime_data; ++#endif ++ __u32 resource_priv_drv_data_size; ++ __u32 reserved3; ++#ifdef __KERNEL__ ++ void *resource_priv_drv_data; ++#else ++ __u64 resource_priv_drv_data; ++#endif ++ __u32 total_priv_drv_data_size; ++#ifdef __KERNEL__ ++ void *total_priv_drv_data; ++#else ++ __u64 total_priv_drv_data; ++#endif ++ struct d3dkmthandle resource; ++ struct d3dkmthandle keyed_mutex; ++#ifdef __KERNEL__ ++ void *keyed_mutex_private_data; ++#else ++ __u64 keyed_mutex_private_data; ++#endif ++ __u32 keyed_mutex_private_data_size; ++ struct d3dkmthandle sync_object; ++}; ++ ++struct d3dkmt_queryresourceinfofromnthandle { ++ struct d3dkmthandle device; ++ __u32 reserved; ++ __u64 nt_handle; ++#ifdef __KERNEL__ ++ void *private_runtime_data; ++#else ++ __u64 private_runtime_data; ++#endif ++ __u32 private_runtime_data_size; ++ __u32 total_priv_drv_data_size; ++ __u32 resource_priv_drv_data_size; ++ __u32 allocation_count; ++}; ++ ++struct d3dkmt_shareobjects { ++ __u32 object_count; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ const struct d3dkmthandle *objects; ++ void *object_attr; /* security attributes */ ++#else ++ __u64 objects; ++ __u64 object_attr; ++#endif ++ __u32 desired_access; ++ __u32 reserved1; ++#ifdef __KERNEL__ ++ __u64 *shared_handle; /* output file descriptors */ ++#else ++ __u64 shared_handle; ++#endif ++}; ++ + union d3dkmt_enumadapters_filter { + struct { + __u64 include_compute_only:1; +@@ -747,5 +835,13 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x3b, struct d3dkmt_waitforsynchronizationobjectfromgpu) + #define LX_DXENUMADAPTERS3 \ + _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) ++#define LX_DXSHAREOBJECTS \ ++ _IOWR(0x47, 0x3f, struct d3dkmt_shareobjects) ++#define LX_DXOPENSYNCOBJECTFROMNTHANDLE2 \ ++ _IOWR(0x47, 0x40, struct d3dkmt_opensyncobjectfromnthandle2) ++#define LX_DXQUERYRESOURCEINFOFROMNTHANDLE \ ++ _IOWR(0x47, 0x41, struct d3dkmt_queryresourceinfofromnthandle) ++#define LX_DXOPENRESOURCEFROMNTHANDLE \ ++ _IOWR(0x47, 0x42, struct d3dkmt_openresourcefromnthandle) + + #endif /* _D3DKMTHK_H */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1680-drivers-hv-dxgkrnl-Sharing-of-sync-objects.patch b/patch/kernel/archive/wsl2-arm64-6.1/1680-drivers-hv-dxgkrnl-Sharing-of-sync-objects.patch new file mode 100644 index 000000000000..5e47bde59c2c --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1680-drivers-hv-dxgkrnl-Sharing-of-sync-objects.patch @@ -0,0 +1,1555 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Mon, 31 Jan 2022 16:41:28 -0800 +Subject: drivers: hv: 
dxgkrnl: Sharing of sync objects + +Implement creation of a shared sync objects and the ioctl for sharing +dxgsyncobject objects between processes in the virtual machine. + +Sync objects are shared using file descriptor (FD) handles. +The name "NT handle" is used to be compatible with Windows implementation. + +An FD handle is created by the LX_DXSHAREOBJECTS ioctl. The created FD +handle could be sent to another process using any Linux API. + +To use a shared sync object in other ioctls, the object needs to be +opened using its FD handle. A sync object is opened by the +LX_DXOPENSYNCOBJECTFROMNTHANDLE2 ioctl, which returns a d3dkmthandle +value. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 181 ++- + drivers/hv/dxgkrnl/dxgkrnl.h | 96 ++ + drivers/hv/dxgkrnl/dxgmodule.c | 1 + + drivers/hv/dxgkrnl/dxgprocess.c | 4 + + drivers/hv/dxgkrnl/dxgvmbus.c | 221 ++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 35 + + drivers/hv/dxgkrnl/ioctl.c | 556 +++++++++- + include/uapi/misc/d3dkmthk.h | 93 ++ + 8 files changed, 1181 insertions(+), 6 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 26fce9aba4f3..f59173f13559 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -171,6 +171,26 @@ void dxgadapter_remove_shared_resource(struct dxgadapter *adapter, + up_write(&adapter->shared_resource_list_lock); + } + ++void dxgadapter_add_shared_syncobj(struct dxgadapter *adapter, ++ struct dxgsharedsyncobject *object) ++{ ++ down_write(&adapter->shared_resource_list_lock); ++ list_add_tail(&object->adapter_shared_syncobj_list_entry, ++ &adapter->adapter_shared_syncobj_list_head); ++ up_write(&adapter->shared_resource_list_lock); ++} ++ ++void dxgadapter_remove_shared_syncobj(struct dxgadapter *adapter, ++ struct dxgsharedsyncobject *object) ++{ ++ down_write(&adapter->shared_resource_list_lock); ++ if (object->adapter_shared_syncobj_list_entry.next) { ++ list_del(&object->adapter_shared_syncobj_list_entry); ++ object->adapter_shared_syncobj_list_entry.next = NULL; ++ } ++ up_write(&adapter->shared_resource_list_lock); ++} ++ + void dxgadapter_add_syncobj(struct dxgadapter *adapter, + struct dxgsyncobject *object) + { +@@ -622,7 +642,7 @@ void dxgresource_destroy(struct dxgresource *resource) + dxgallocation_destroy(alloc); + } + dxgdevice_remove_resource(device, resource); +- shared_resource = resource->shared_owner; ++ shared_resource = resource->shared_owner; + if (shared_resource) { + dxgsharedresource_remove_resource(shared_resource, + resource); +@@ -736,6 +756,9 @@ struct dxgcontext *dxgcontext_create(struct dxgdevice *device) + */ + void dxgcontext_destroy(struct dxgprocess *process, struct dxgcontext *context) + { ++ struct dxghwqueue *hwqueue; ++ struct dxghwqueue *tmp; ++ + DXG_TRACE("Destroying context %p", context); + context->object_state = DXGOBJECTSTATE_DESTROYED; + if (context->device) { +@@ -747,6 +770,10 @@ void dxgcontext_destroy(struct dxgprocess *process, struct dxgcontext *context) + dxgdevice_remove_context(context->device, context); + kref_put(&context->device->device_kref, dxgdevice_release); + } ++ list_for_each_entry_safe(hwqueue, tmp, &context->hwqueue_list_head, ++ hwqueue_list_entry) { ++ dxghwqueue_destroy(process, hwqueue); ++ } + kref_put(&context->context_kref, dxgcontext_release); + } + +@@ -773,6 +800,38 @@ void dxgcontext_release(struct kref *refcount) + kfree(context); + } + ++int dxgcontext_add_hwqueue(struct 
dxgcontext *context, ++ struct dxghwqueue *hwqueue) ++{ ++ int ret = 0; ++ ++ down_write(&context->hwqueue_list_lock); ++ if (dxgcontext_is_active(context)) ++ list_add_tail(&hwqueue->hwqueue_list_entry, ++ &context->hwqueue_list_head); ++ else ++ ret = -ENODEV; ++ up_write(&context->hwqueue_list_lock); ++ return ret; ++} ++ ++void dxgcontext_remove_hwqueue(struct dxgcontext *context, ++ struct dxghwqueue *hwqueue) ++{ ++ if (hwqueue->hwqueue_list_entry.next) { ++ list_del(&hwqueue->hwqueue_list_entry); ++ hwqueue->hwqueue_list_entry.next = NULL; ++ } ++} ++ ++void dxgcontext_remove_hwqueue_safe(struct dxgcontext *context, ++ struct dxghwqueue *hwqueue) ++{ ++ down_write(&context->hwqueue_list_lock); ++ dxgcontext_remove_hwqueue(context, hwqueue); ++ up_write(&context->hwqueue_list_lock); ++} ++ + struct dxgallocation *dxgallocation_create(struct dxgprocess *process) + { + struct dxgallocation *alloc; +@@ -958,6 +1017,63 @@ void dxgprocess_adapter_remove_device(struct dxgdevice *device) + mutex_unlock(&device->adapter_info->device_list_mutex); + } + ++struct dxgsharedsyncobject *dxgsharedsyncobj_create(struct dxgadapter *adapter, ++ struct dxgsyncobject *so) ++{ ++ struct dxgsharedsyncobject *syncobj; ++ ++ syncobj = kzalloc(sizeof(*syncobj), GFP_KERNEL); ++ if (syncobj) { ++ kref_init(&syncobj->ssyncobj_kref); ++ INIT_LIST_HEAD(&syncobj->shared_syncobj_list_head); ++ syncobj->adapter = adapter; ++ syncobj->type = so->type; ++ syncobj->monitored_fence = so->monitored_fence; ++ dxgadapter_add_shared_syncobj(adapter, syncobj); ++ kref_get(&adapter->adapter_kref); ++ init_rwsem(&syncobj->syncobj_list_lock); ++ mutex_init(&syncobj->fd_mutex); ++ } ++ return syncobj; ++} ++ ++void dxgsharedsyncobj_release(struct kref *refcount) ++{ ++ struct dxgsharedsyncobject *syncobj; ++ ++ syncobj = container_of(refcount, struct dxgsharedsyncobject, ++ ssyncobj_kref); ++ DXG_TRACE("Destroying shared sync object %p", syncobj); ++ if (syncobj->adapter) { ++ dxgadapter_remove_shared_syncobj(syncobj->adapter, ++ syncobj); ++ kref_put(&syncobj->adapter->adapter_kref, ++ dxgadapter_release); ++ } ++ kfree(syncobj); ++} ++ ++void dxgsharedsyncobj_add_syncobj(struct dxgsharedsyncobject *shared, ++ struct dxgsyncobject *syncobj) ++{ ++ DXG_TRACE("Add syncobj 0x%p 0x%p", shared, syncobj); ++ kref_get(&shared->ssyncobj_kref); ++ down_write(&shared->syncobj_list_lock); ++ list_add(&syncobj->shared_syncobj_list_entry, ++ &shared->shared_syncobj_list_head); ++ syncobj->shared_owner = shared; ++ up_write(&shared->syncobj_list_lock); ++} ++ ++void dxgsharedsyncobj_remove_syncobj(struct dxgsharedsyncobject *shared, ++ struct dxgsyncobject *syncobj) ++{ ++ DXG_TRACE("Remove syncobj 0x%p", shared); ++ down_write(&shared->syncobj_list_lock); ++ list_del(&syncobj->shared_syncobj_list_entry); ++ up_write(&shared->syncobj_list_lock); ++} ++ + struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, + struct dxgdevice *device, + struct dxgadapter *adapter, +@@ -1091,7 +1207,70 @@ void dxgsyncobject_release(struct kref *refcount) + struct dxgsyncobject *syncobj; + + syncobj = container_of(refcount, struct dxgsyncobject, syncobj_kref); ++ if (syncobj->shared_owner) { ++ dxgsharedsyncobj_remove_syncobj(syncobj->shared_owner, ++ syncobj); ++ kref_put(&syncobj->shared_owner->ssyncobj_kref, ++ dxgsharedsyncobj_release); ++ } + if (syncobj->host_event) + kfree(syncobj->host_event); + kfree(syncobj); + } ++ ++struct dxghwqueue *dxghwqueue_create(struct dxgcontext *context) ++{ ++ struct dxgprocess *process = 
context->device->process; ++ struct dxghwqueue *hwqueue = kzalloc(sizeof(*hwqueue), GFP_KERNEL); ++ ++ if (hwqueue) { ++ kref_init(&hwqueue->hwqueue_kref); ++ hwqueue->context = context; ++ hwqueue->process = process; ++ hwqueue->device_handle = context->device->handle; ++ if (dxgcontext_add_hwqueue(context, hwqueue) < 0) { ++ kref_put(&hwqueue->hwqueue_kref, dxghwqueue_release); ++ hwqueue = NULL; ++ } else { ++ kref_get(&context->context_kref); ++ } ++ } ++ return hwqueue; ++} ++ ++void dxghwqueue_destroy(struct dxgprocess *process, struct dxghwqueue *hwqueue) ++{ ++ DXG_TRACE("Destroyng hwqueue %p", hwqueue); ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ if (hwqueue->handle.v) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ hwqueue->handle); ++ hwqueue->handle.v = 0; ++ } ++ if (hwqueue->progress_fence_sync_object.v) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_MONITOREDFENCE, ++ hwqueue->progress_fence_sync_object); ++ hwqueue->progress_fence_sync_object.v = 0; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (hwqueue->progress_fence_mapped_address) { ++ dxg_unmap_iospace(hwqueue->progress_fence_mapped_address, ++ PAGE_SIZE); ++ hwqueue->progress_fence_mapped_address = NULL; ++ } ++ dxgcontext_remove_hwqueue_safe(hwqueue->context, hwqueue); ++ ++ kref_put(&hwqueue->context->context_kref, dxgcontext_release); ++ kref_put(&hwqueue->hwqueue_kref, dxghwqueue_release); ++} ++ ++void dxghwqueue_release(struct kref *refcount) ++{ ++ struct dxghwqueue *hwqueue; ++ ++ hwqueue = container_of(refcount, struct dxghwqueue, hwqueue_kref); ++ kfree(hwqueue); ++} +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 0336e1843223..0330352b9c06 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -40,6 +40,8 @@ struct dxgallocation; + struct dxgresource; + struct dxgsharedresource; + struct dxgsyncobject; ++struct dxgsharedsyncobject; ++struct dxghwqueue; + + /* + * Driver private data. +@@ -137,6 +139,18 @@ struct dxghosteventcpu { + * "device" syncobject, because the belong to a device (dxgdevice). + * Device syncobjects are inserted to a list in dxgdevice. + * ++ * A syncobject can be "shared", meaning that it could be opened by many ++ * processes. ++ * ++ * Shared syncobjects are inserted to a list in its owner ++ * (dxgsharedsyncobject). ++ * A syncobject can be shared by using a global handle or by using ++ * "NT security handle". ++ * When global handle sharing is used, the handle is created durinig object ++ * creation. ++ * When "NT security" is used, the handle for sharing is create be calling ++ * dxgk_share_objects. On Linux "NT handle" is represented by a file ++ * descriptor. FD points to dxgsharedsyncobject. + */ + struct dxgsyncobject { + struct kref syncobj_kref; +@@ -146,6 +160,8 @@ struct dxgsyncobject { + * List entry in dxgadapter for other objects + */ + struct list_head syncobj_list_entry; ++ /* List entry in the dxgsharedsyncobject object for shared synobjects */ ++ struct list_head shared_syncobj_list_entry; + /* Adapter, the syncobject belongs to. NULL for stopped sync obejcts. 
*/ + struct dxgadapter *adapter; + /* +@@ -156,6 +172,8 @@ struct dxgsyncobject { + struct dxgprocess *process; + /* Used by D3DDDI_CPU_NOTIFICATION objects */ + struct dxghosteventcpu *host_event; ++ /* Owner object for shared syncobjects */ ++ struct dxgsharedsyncobject *shared_owner; + /* CPU virtual address of the fence value for "device" syncobjects */ + void *mapped_address; + /* Handle in the process handle table */ +@@ -187,6 +205,41 @@ struct dxgvgpuchannel { + struct hv_device *hdev; + }; + ++/* ++ * The object is used as parent of all sync objects, created for a shared ++ * syncobject. When a shared syncobject is created without NT security, the ++ * handle in the global handle table will point to this object. ++ */ ++struct dxgsharedsyncobject { ++ struct kref ssyncobj_kref; ++ /* Referenced by file descriptors */ ++ int host_shared_handle_nt_reference; ++ /* Corresponding handle in the host global handle table */ ++ struct d3dkmthandle host_shared_handle; ++ /* ++ * When the sync object is shared by NT handle, this is the ++ * corresponding handle in the host ++ */ ++ struct d3dkmthandle host_shared_handle_nt; ++ /* Protects access to host_shared_handle_nt */ ++ struct mutex fd_mutex; ++ struct rw_semaphore syncobj_list_lock; ++ struct list_head shared_syncobj_list_head; ++ struct list_head adapter_shared_syncobj_list_entry; ++ struct dxgadapter *adapter; ++ enum d3dddi_synchronizationobject_type type; ++ u32 monitored_fence:1; ++}; ++ ++struct dxgsharedsyncobject *dxgsharedsyncobj_create(struct dxgadapter *adapter, ++ struct dxgsyncobject ++ *syncobj); ++void dxgsharedsyncobj_release(struct kref *refcount); ++void dxgsharedsyncobj_add_syncobj(struct dxgsharedsyncobject *sharedsyncobj, ++ struct dxgsyncobject *syncobj); ++void dxgsharedsyncobj_remove_syncobj(struct dxgsharedsyncobject *sharedsyncobj, ++ struct dxgsyncobject *syncobj); ++ + struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, + struct dxgdevice *device, + struct dxgadapter *adapter, +@@ -375,6 +428,8 @@ struct dxgadapter { + struct list_head adapter_process_list_head; + /* List of all dxgsharedresource objects */ + struct list_head shared_resource_list_head; ++ /* List of all dxgsharedsyncobject objects */ ++ struct list_head adapter_shared_syncobj_list_head; + /* List of all non-device dxgsyncobject objects */ + struct list_head syncobj_list_head; + /* This lock protects shared resource and syncobject lists */ +@@ -402,6 +457,10 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter); + int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter); + void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter); + void dxgadapter_release_lock_exclusive(struct dxgadapter *adapter); ++void dxgadapter_add_shared_syncobj(struct dxgadapter *adapter, ++ struct dxgsharedsyncobject *so); ++void dxgadapter_remove_shared_syncobj(struct dxgadapter *adapter, ++ struct dxgsharedsyncobject *so); + void dxgadapter_add_syncobj(struct dxgadapter *adapter, + struct dxgsyncobject *so); + void dxgadapter_remove_syncobj(struct dxgsyncobject *so); +@@ -487,8 +546,32 @@ struct dxgcontext *dxgcontext_create(struct dxgdevice *dev); + void dxgcontext_destroy(struct dxgprocess *pr, struct dxgcontext *ctx); + void dxgcontext_destroy_safe(struct dxgprocess *pr, struct dxgcontext *ctx); + void dxgcontext_release(struct kref *refcount); ++int dxgcontext_add_hwqueue(struct dxgcontext *ctx, ++ struct dxghwqueue *hq); ++void dxgcontext_remove_hwqueue(struct dxgcontext *ctx, struct dxghwqueue *hq); ++void 
dxgcontext_remove_hwqueue_safe(struct dxgcontext *ctx, ++ struct dxghwqueue *hq); + bool dxgcontext_is_active(struct dxgcontext *ctx); + ++/* ++ * The object represent the execution hardware queue of a device. ++ */ ++struct dxghwqueue { ++ /* entry in the context hw queue list */ ++ struct list_head hwqueue_list_entry; ++ struct kref hwqueue_kref; ++ struct dxgcontext *context; ++ struct dxgprocess *process; ++ struct d3dkmthandle progress_fence_sync_object; ++ struct d3dkmthandle handle; ++ struct d3dkmthandle device_handle; ++ void *progress_fence_mapped_address; ++}; ++ ++struct dxghwqueue *dxghwqueue_create(struct dxgcontext *ctx); ++void dxghwqueue_destroy(struct dxgprocess *pr, struct dxghwqueue *hq); ++void dxghwqueue_release(struct kref *refcount); ++ + /* + * A shared resource object is created to track the list of dxgresource objects, + * which are opened for the same underlying shared resource. +@@ -720,9 +803,22 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + d3dkmt_waitforsynchronizationobjectfromcpu + *args, + u64 cpu_event); ++int dxgvmb_send_create_hwqueue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_createhwqueue *args, ++ struct d3dkmt_createhwqueue *__user inargs, ++ struct dxghwqueue *hq); ++int dxgvmb_send_destroy_hwqueue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle handle); + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); ++int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, ++ struct dxgvmbuschannel *channel, ++ struct d3dkmt_opensyncobjectfromnthandle2 ++ *args, ++ struct dxgsyncobject *syncobj); + int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process, + struct d3dkmthandle object, + struct d3dkmthandle *shared_handle); +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 69e221613af9..8cbe1095599f 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -259,6 +259,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, + + INIT_LIST_HEAD(&adapter->adapter_process_list_head); + INIT_LIST_HEAD(&adapter->shared_resource_list_head); ++ INIT_LIST_HEAD(&adapter->adapter_shared_syncobj_list_head); + INIT_LIST_HEAD(&adapter->syncobj_list_head); + init_rwsem(&adapter->shared_resource_list_lock); + adapter->pci_dev = dev; +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index a41985ef438d..4021084ebd78 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -277,6 +277,10 @@ struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process, + device_handle = + ((struct dxgcontext *)obj)->device_handle; + break; ++ case HMGRENTRY_TYPE_DXGHWQUEUE: ++ device_handle = ++ ((struct dxghwqueue *)obj)->device_handle; ++ break; + default: + DXG_ERR("invalid handle type: %d", t); + break; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index b3a4377c8b0b..e83600945de1 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -712,6 +712,69 @@ int dxgvmb_send_destroy_process(struct d3dkmthandle process) + return ret; + } + ++int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, ++ struct dxgvmbuschannel *channel, ++ struct d3dkmt_opensyncobjectfromnthandle2 ++ *args, ++ struct dxgsyncobject *syncobj) ++{ ++ struct dxgkvmb_command_opensyncobject *command; ++ struct 
dxgkvmb_command_opensyncobject_return result = { }; ++ int ret; ++ struct dxgvmbusmsg msg; ++ ++ ret = init_message(&msg, NULL, process, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vm_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_OPENSYNCOBJECT, ++ process->host_handle); ++ command->device = args->device; ++ command->global_sync_object = syncobj->shared_owner->host_shared_handle; ++ command->flags = args->flags; ++ if (syncobj->monitored_fence) ++ command->engine_affinity = ++ args->monitored_fence.engine_affinity; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = dxgvmb_send_sync_msg(channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ ++ dxgglobal_release_channel_lock(); ++ ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result.status); ++ if (ret < 0) ++ goto cleanup; ++ ++ args->sync_object = result.sync_object; ++ if (syncobj->monitored_fence) { ++ void *va = dxg_map_iospace(result.guest_cpu_physical_address, ++ PAGE_SIZE, PROT_READ | PROT_WRITE, ++ true); ++ if (va == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ args->monitored_fence.fence_value_cpu_va = va; ++ args->monitored_fence.fence_value_gpu_va = ++ result.gpu_virtual_address; ++ syncobj->mapped_address = va; ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process, + struct d3dkmthandle object, + struct d3dkmthandle *shared_handle) +@@ -2050,6 +2113,164 @@ int dxgvmb_send_wait_sync_object_gpu(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_create_hwqueue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_createhwqueue *args, ++ struct d3dkmt_createhwqueue *__user inargs, ++ struct dxghwqueue *hwqueue) ++{ ++ struct dxgkvmb_command_createhwqueue *command = NULL; ++ u32 cmd_size = sizeof(struct dxgkvmb_command_createhwqueue); ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ if (args->priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("invalid private driver data size: %d", ++ args->priv_drv_data_size); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args->priv_drv_data_size) ++ cmd_size += args->priv_drv_data_size - 1; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CREATEHWQUEUE, ++ process->host_handle); ++ command->context = args->context; ++ command->flags = args->flags; ++ command->priv_drv_data_size = args->priv_drv_data_size; ++ if (args->priv_drv_data_size) { ++ ret = copy_from_user(command->priv_drv_data, ++ args->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy private data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ command, cmd_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(command->status); ++ if (ret < 0) { ++ DXG_ERR("dxgvmb_send_sync_msg failed: %x", ++ command->status.v); ++ goto cleanup; ++ } ++ ++ ret = hmgrtable_assign_handle_safe(&process->handle_table, hwqueue, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ command->hwqueue); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = hmgrtable_assign_handle_safe(&process->handle_table, ++ NULL, ++ HMGRENTRY_TYPE_MONITOREDFENCE, ++ command->hwqueue_progress_fence); ++ if (ret < 0) ++ goto cleanup; ++ ++ hwqueue->handle = 
command->hwqueue; ++ hwqueue->progress_fence_sync_object = command->hwqueue_progress_fence; ++ ++ hwqueue->progress_fence_mapped_address = ++ dxg_map_iospace((u64)command->hwqueue_progress_fence_cpuva, ++ PAGE_SIZE, PROT_READ | PROT_WRITE, true); ++ if (hwqueue->progress_fence_mapped_address == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ ret = copy_to_user(&inargs->queue, &command->hwqueue, ++ sizeof(struct d3dkmthandle)); ++ if (ret < 0) { ++ DXG_ERR("failed to copy hwqueue handle"); ++ goto cleanup; ++ } ++ ret = copy_to_user(&inargs->queue_progress_fence, ++ &command->hwqueue_progress_fence, ++ sizeof(struct d3dkmthandle)); ++ if (ret < 0) { ++ DXG_ERR("failed to progress fence"); ++ goto cleanup; ++ } ++ ret = copy_to_user(&inargs->queue_progress_fence_cpu_va, ++ &hwqueue->progress_fence_mapped_address, ++ sizeof(inargs->queue_progress_fence_cpu_va)); ++ if (ret < 0) { ++ DXG_ERR("failed to copy fence cpu va"); ++ goto cleanup; ++ } ++ ret = copy_to_user(&inargs->queue_progress_fence_gpu_va, ++ &command->hwqueue_progress_fence_gpuva, ++ sizeof(u64)); ++ if (ret < 0) { ++ DXG_ERR("failed to copy fence gpu va"); ++ goto cleanup; ++ } ++ if (args->priv_drv_data_size) { ++ ret = copy_to_user(args->priv_drv_data, ++ command->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret < 0) ++ DXG_ERR("failed to copy private data"); ++ } ++ ++cleanup: ++ if (ret < 0) { ++ DXG_ERR("failed %x", ret); ++ if (hwqueue->handle.v) { ++ hmgrtable_free_handle_safe(&process->handle_table, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ hwqueue->handle); ++ hwqueue->handle.v = 0; ++ } ++ if (command && command->hwqueue.v) ++ dxgvmb_send_destroy_hwqueue(process, adapter, ++ command->hwqueue); ++ } ++ free_message(&msg, process); ++ return ret; ++} ++ ++int dxgvmb_send_destroy_hwqueue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle handle) ++{ ++ int ret; ++ struct dxgkvmb_command_destroyhwqueue *command; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_DESTROYHWQUEUE, ++ process->host_handle); ++ command->hwqueue = handle; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 73d7adac60a1..2e2fd1ae5ec2 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -172,6 +172,21 @@ struct dxgkvmb_command_signalguestevent { + bool dereference_event; + }; + ++struct dxgkvmb_command_opensyncobject { ++ struct dxgkvmb_command_vm_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle global_sync_object; ++ u32 engine_affinity; ++ struct d3dddi_synchronizationobject_flags flags; ++}; ++ ++struct dxgkvmb_command_opensyncobject_return { ++ struct d3dkmthandle sync_object; ++ struct ntstatus status; ++ u64 gpu_virtual_address; ++ u64 guest_cpu_physical_address; ++}; ++ + /* + * The command returns struct d3dkmthandle of a shared object for the + * given pre-process object +@@ -508,4 +523,24 @@ struct dxgkvmb_command_waitforsyncobjectfromgpu { + /* struct d3dkmthandle ObjectHandles[object_count] */ + }; + ++/* Returns the same 
structure */ ++struct dxgkvmb_command_createhwqueue { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct ntstatus status; ++ struct d3dkmthandle hwqueue; ++ struct d3dkmthandle hwqueue_progress_fence; ++ void *hwqueue_progress_fence_cpuva; ++ u64 hwqueue_progress_fence_gpuva; ++ struct d3dkmthandle context; ++ struct d3dddi_createhwqueueflags flags; ++ u32 priv_drv_data_size; ++ char priv_drv_data[1]; ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_destroyhwqueue { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle hwqueue; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index abb64f6c3a59..3cfc1c40e0bb 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -36,6 +36,33 @@ static char *errorstr(int ret) + } + #endif + ++static int dxgsyncobj_release(struct inode *inode, struct file *file) ++{ ++ struct dxgsharedsyncobject *syncobj = file->private_data; ++ ++ DXG_TRACE("Release syncobj: %p", syncobj); ++ mutex_lock(&syncobj->fd_mutex); ++ kref_get(&syncobj->ssyncobj_kref); ++ syncobj->host_shared_handle_nt_reference--; ++ if (syncobj->host_shared_handle_nt_reference == 0) { ++ if (syncobj->host_shared_handle_nt.v) { ++ dxgvmb_send_destroy_nt_shared_object( ++ syncobj->host_shared_handle_nt); ++ DXG_TRACE("Syncobj host_handle_nt destroyed: %x", ++ syncobj->host_shared_handle_nt.v); ++ syncobj->host_shared_handle_nt.v = 0; ++ } ++ kref_put(&syncobj->ssyncobj_kref, dxgsharedsyncobj_release); ++ } ++ mutex_unlock(&syncobj->fd_mutex); ++ kref_put(&syncobj->ssyncobj_kref, dxgsharedsyncobj_release); ++ return 0; ++} ++ ++static const struct file_operations dxg_syncobj_fops = { ++ .release = dxgsyncobj_release, ++}; ++ + static int dxgsharedresource_release(struct inode *inode, struct file *file) + { + struct dxgsharedresource *resource = file->private_data; +@@ -833,6 +860,156 @@ dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_create_hwqueue(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_createhwqueue args; ++ struct dxgdevice *device = NULL; ++ struct dxgcontext *context = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct dxghwqueue *hwqueue = NULL; ++ int ret; ++ bool device_lock_acquired = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. 
++ */ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) ++ goto cleanup; ++ ++ device_lock_acquired = true; ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); ++ context = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED); ++ ++ if (context == NULL) { ++ DXG_ERR("Invalid context handle %x", args.context.v); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hwqueue = dxghwqueue_create(context); ++ if (hwqueue == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_create_hwqueue(process, adapter, &args, ++ inargs, hwqueue); ++ ++cleanup: ++ ++ if (ret < 0 && hwqueue) ++ dxghwqueue_destroy(process, hwqueue); ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int dxgkio_destroy_hwqueue(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_destroyhwqueue args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxghwqueue *hwqueue = NULL; ++ struct d3dkmthandle device_handle = {}; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ hwqueue = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ args.queue); ++ if (hwqueue) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGHWQUEUE, args.queue); ++ hwqueue->handle.v = 0; ++ device_handle = hwqueue->device_handle; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (hwqueue == NULL) { ++ DXG_ERR("invalid hwqueue handle: %x", args.queue.v); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. 
++ */ ++ device = dxgprocess_device_by_handle(process, device_handle); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_destroy_hwqueue(process, adapter, args.queue); ++ ++ dxghwqueue_destroy(process, hwqueue); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + get_standard_alloc_priv_data(struct dxgdevice *device, + struct d3dkmt_createstandardallocation *alloc_info, +@@ -1548,6 +1725,164 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_submitsignalsyncobjectstohwqueue args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmthandle hwqueue = {}; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.hwqueue_count > D3DDDI_MAX_BROADCAST_CONTEXT || ++ args.hwqueue_count == 0) { ++ DXG_ERR("invalid hwqueue count: %d", ++ args.hwqueue_count); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count > D3DDDI_MAX_OBJECT_SIGNALED || ++ args.object_count == 0) { ++ DXG_ERR("invalid number of syncobjects: %d", ++ args.object_count); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = copy_from_user(&hwqueue, args.hwqueues, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy hwqueue handle"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ hwqueue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_signal_sync_object(process, adapter, ++ args.flags, 0, zerohandle, ++ args.object_count, args.objects, ++ args.hwqueue_count, args.hwqueues, ++ args.object_count, ++ args.fence_values, NULL, ++ zerohandle); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_submitwaitforsyncobjectstohwqueue args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int ret; ++ struct d3dkmthandle *objects = NULL; ++ u32 object_size; ++ u64 *fences = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count > D3DDDI_MAX_OBJECT_WAITED_ON || ++ args.object_count == 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ object_size = sizeof(struct d3dkmthandle) * args.object_count; ++ objects = vzalloc(object_size); ++ if (objects == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user(objects, args.objects, object_size); ++ if (ret) { ++ DXG_ERR("failed to copy objects"); ++ ret = -EINVAL; ++ goto 
cleanup; ++ } ++ ++ object_size = sizeof(u64) * args.object_count; ++ fences = vzalloc(object_size); ++ if (fences == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user(fences, args.fence_values, object_size); ++ if (ret) { ++ DXG_ERR("failed to copy fence values"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ args.hwqueue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_wait_sync_object_gpu(process, adapter, ++ args.hwqueue, args.object_count, ++ objects, fences, false); ++ ++cleanup: ++ ++ if (objects) ++ vfree(objects); ++ if (fences) ++ vfree(fences); ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + { +@@ -1558,6 +1893,7 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + struct eventfd_ctx *event = NULL; + struct dxgsyncobject *syncobj = NULL; + bool device_lock_acquired = false; ++ struct dxgsharedsyncobject *syncobjgbl = NULL; + struct dxghosteventcpu *host_event = NULL; + + ret = copy_from_user(&args, inargs, sizeof(args)); +@@ -1618,6 +1954,22 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + if (ret < 0) + goto cleanup; + ++ if (args.info.flags.shared) { ++ if (args.info.shared_handle.v == 0) { ++ DXG_ERR("shared handle should not be 0"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ syncobjgbl = dxgsharedsyncobj_create(device->adapter, syncobj); ++ if (syncobjgbl == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ dxgsharedsyncobj_add_syncobj(syncobjgbl, syncobj); ++ ++ syncobjgbl->host_shared_handle = args.info.shared_handle; ++ } ++ + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy output args"); +@@ -1646,6 +1998,8 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + if (event) + eventfd_ctx_put(event); + } ++ if (syncobjgbl) ++ kref_put(&syncobjgbl->ssyncobj_kref, dxgsharedsyncobj_release); + if (adapter) + dxgadapter_release_lock_shared(adapter); + if (device_lock_acquired) +@@ -1700,6 +2054,140 @@ dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_opensyncobjectfromnthandle2 args; ++ struct dxgsyncobject *syncobj = NULL; ++ struct dxgsharedsyncobject *syncobj_fd = NULL; ++ struct file *file = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dddi_synchronizationobject_flags flags = { }; ++ int ret; ++ bool device_lock_acquired = false; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ args.sync_object.v = 0; ++ ++ if (args.device.v) { ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ return -EINVAL; ++ goto cleanup; ++ } ++ } else { ++ DXG_ERR("device handle is missing"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = 
dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) ++ goto cleanup; ++ ++ device_lock_acquired = true; ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ file = fget(args.nt_handle); ++ if (!file) { ++ DXG_ERR("failed to get file from handle: %llx", ++ args.nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (file->f_op != &dxg_syncobj_fops) { ++ DXG_ERR("invalid fd: %llx", args.nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ syncobj_fd = file->private_data; ++ if (syncobj_fd == NULL) { ++ DXG_ERR("invalid private data: %llx", args.nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ flags.shared = 1; ++ flags.nt_security_sharing = 1; ++ syncobj = dxgsyncobject_create(process, device, adapter, ++ syncobj_fd->type, flags); ++ if (syncobj == NULL) { ++ DXG_ERR("failed to create sync object"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ dxgsharedsyncobj_add_syncobj(syncobj_fd, syncobj); ++ ++ ret = dxgvmb_send_open_sync_object_nt(process, &dxgglobal->channel, ++ &args, syncobj); ++ if (ret < 0) { ++ DXG_ERR("failed to open sync object on host: %x", ++ syncobj_fd->host_shared_handle.v); ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, syncobj, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ args.sync_object); ++ if (ret >= 0) { ++ syncobj->handle = args.sync_object; ++ kref_get(&syncobj->syncobj_kref); ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret == 0) ++ goto success; ++ DXG_ERR("failed to copy output args"); ++ ++cleanup: ++ ++ if (syncobj) { ++ dxgsyncobject_destroy(process, syncobj); ++ syncobj = NULL; ++ } ++ ++ if (args.sync_object.v) ++ dxgvmb_send_destroy_sync_object(process, args.sync_object); ++ ++success: ++ ++ if (file) ++ fput(file); ++ if (syncobj) ++ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release); ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_signal_sync_object(struct dxgprocess *process, void *__user inargs) + { +@@ -2353,6 +2841,30 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgsharedsyncobj_get_host_nt_handle(struct dxgsharedsyncobject *syncobj, ++ struct dxgprocess *process, ++ struct d3dkmthandle objecthandle) ++{ ++ int ret = 0; ++ ++ mutex_lock(&syncobj->fd_mutex); ++ if (syncobj->host_shared_handle_nt_reference == 0) { ++ ret = dxgvmb_send_create_nt_shared_object(process, ++ objecthandle, ++ &syncobj->host_shared_handle_nt); ++ if (ret < 0) ++ goto cleanup; ++ DXG_TRACE("Host_shared_handle_ht: %x", ++ syncobj->host_shared_handle_nt.v); ++ kref_get(&syncobj->ssyncobj_kref); ++ } ++ syncobj->host_shared_handle_nt_reference++; ++cleanup: ++ mutex_unlock(&syncobj->fd_mutex); ++ return ret; ++} ++ + static int + dxgsharedresource_get_host_nt_handle(struct dxgsharedresource *resource, + struct dxgprocess *process, +@@ -2378,6 +2890,7 @@ dxgsharedresource_get_host_nt_handle(struct dxgsharedresource *resource, + } + + enum dxg_sharedobject_type { ++ DXG_SHARED_SYNCOBJECT, + DXG_SHARED_RESOURCE + }; + +@@ -2394,6 +2907,10 @@ static int 
get_object_fd(enum dxg_sharedobject_type type, + } + + switch (type) { ++ case DXG_SHARED_SYNCOBJECT: ++ file = anon_inode_getfile("dxgsyncobj", ++ &dxg_syncobj_fops, object, 0); ++ break; + case DXG_SHARED_RESOURCE: + file = anon_inode_getfile("dxgresource", + &dxg_resource_fops, object, 0); +@@ -2419,6 +2936,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + enum hmgrentry_type object_type; + struct dxgsyncobject *syncobj = NULL; + struct dxgresource *resource = NULL; ++ struct dxgsharedsyncobject *shared_syncobj = NULL; + struct dxgsharedresource *shared_resource = NULL; + struct d3dkmthandle *handles = NULL; + int object_fd = -1; +@@ -2465,6 +2983,17 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + ret = -EINVAL; + } else { + switch (object_type) { ++ case HMGRENTRY_TYPE_DXGSYNCOBJECT: ++ syncobj = obj; ++ if (syncobj->shared) { ++ kref_get(&syncobj->syncobj_kref); ++ shared_syncobj = syncobj->shared_owner; ++ } else { ++ DXG_ERR("sync object is not shared"); ++ syncobj = NULL; ++ ret = -EINVAL; ++ } ++ break; + case HMGRENTRY_TYPE_DXGRESOURCE: + resource = obj; + if (resource->shared_owner) { +@@ -2488,6 +3017,21 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + goto cleanup; + + switch (object_type) { ++ case HMGRENTRY_TYPE_DXGSYNCOBJECT: ++ ret = get_object_fd(DXG_SHARED_SYNCOBJECT, shared_syncobj, ++ &object_fd); ++ if (ret < 0) { ++ DXG_ERR("get_object_fd failed for sync object"); ++ goto cleanup; ++ } ++ ret = dxgsharedsyncobj_get_host_nt_handle(shared_syncobj, ++ process, ++ handles[0]); ++ if (ret < 0) { ++ DXG_ERR("get_host_nt_handle failed"); ++ goto cleanup; ++ } ++ break; + case HMGRENTRY_TYPE_DXGRESOURCE: + ret = get_object_fd(DXG_SHARED_RESOURCE, shared_resource, + &object_fd); +@@ -2954,10 +3498,10 @@ static struct ioctl_desc ioctls[] = { + /* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER}, + /* 0x16 */ {}, + /* 0x17 */ {}, +-/* 0x18 */ {}, ++/* 0x18 */ {dxgkio_create_hwqueue, LX_DXCREATEHWQUEUE}, + /* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE}, + /* 0x1a */ {}, +-/* 0x1b */ {}, ++/* 0x1b */ {dxgkio_destroy_hwqueue, LX_DXDESTROYHWQUEUE}, + /* 0x1c */ {}, + /* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT}, + /* 0x1e */ {}, +@@ -2986,8 +3530,10 @@ static struct ioctl_desc ioctls[] = { + /* 0x33 */ {dxgkio_signal_sync_object_gpu2, + LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2}, + /* 0x34 */ {}, +-/* 0x35 */ {}, +-/* 0x36 */ {}, ++/* 0x35 */ {dxgkio_submit_signal_to_hwqueue, ++ LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE}, ++/* 0x36 */ {dxgkio_submit_wait_to_hwqueue, ++ LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE}, + /* 0x37 */ {}, + /* 0x38 */ {}, + /* 0x39 */ {}, +@@ -2999,7 +3545,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x3d */ {}, + /* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3}, + /* 0x3f */ {dxgkio_share_objects, LX_DXSHAREOBJECTS}, +-/* 0x40 */ {}, ++/* 0x40 */ {dxgkio_open_sync_object_nt, LX_DXOPENSYNCOBJECTFROMNTHANDLE2}, + /* 0x41 */ {dxgkio_query_resource_info_nt, + LX_DXQUERYRESOURCEINFOFROMNTHANDLE}, + /* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index f74564cf7ee9..a78252901c8d 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -201,6 +201,16 @@ struct d3dkmt_createcontextvirtual { + struct d3dkmthandle context; + }; + ++struct d3dddi_createhwqueueflags { ++ union { ++ struct { ++ __u32 disable_gpu_timeout:1; ++ __u32 
reserved:31; ++ }; ++ __u32 value; ++ }; ++}; ++ + enum d3dkmdt_gdisurfacetype { + _D3DKMDT_GDISURFACE_INVALID = 0, + _D3DKMDT_GDISURFACE_TEXTURE = 1, +@@ -694,6 +704,81 @@ struct d3dddi_openallocationinfo2 { + __u64 reserved[6]; + }; + ++struct d3dkmt_createhwqueue { ++ struct d3dkmthandle context; ++ struct d3dddi_createhwqueueflags flags; ++ __u32 priv_drv_data_size; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ struct d3dkmthandle queue; ++ struct d3dkmthandle queue_progress_fence; ++#ifdef __KERNEL__ ++ void *queue_progress_fence_cpu_va; ++#else ++ __u64 queue_progress_fence_cpu_va; ++#endif ++ __u64 queue_progress_fence_gpu_va; ++}; ++ ++struct d3dkmt_destroyhwqueue { ++ struct d3dkmthandle queue; ++}; ++ ++struct d3dkmt_submitwaitforsyncobjectstohwqueue { ++ struct d3dkmthandle hwqueue; ++ __u32 object_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++ __u64 *fence_values; ++#else ++ __u64 objects; ++ __u64 fence_values; ++#endif ++}; ++ ++struct d3dkmt_submitsignalsyncobjectstohwqueue { ++ struct d3dddicb_signalflags flags; ++ __u32 hwqueue_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *hwqueues; ++#else ++ __u64 hwqueues; ++#endif ++ __u32 object_count; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++ __u64 *fence_values; ++#else ++ __u64 objects; ++ __u64 fence_values; ++#endif ++}; ++ ++struct d3dkmt_opensyncobjectfromnthandle2 { ++ __u64 nt_handle; ++ struct d3dkmthandle device; ++ struct d3dddi_synchronizationobject_flags flags; ++ struct d3dkmthandle sync_object; ++ __u32 reserved1; ++ union { ++ struct { ++#ifdef __KERNEL__ ++ void *fence_value_cpu_va; ++#else ++ __u64 fence_value_cpu_va; ++#endif ++ __u64 fence_value_gpu_va; ++ __u32 engine_affinity; ++ } monitored_fence; ++ __u64 reserved[8]; ++ }; ++}; ++ + struct d3dkmt_openresourcefromnthandle { + struct d3dkmthandle device; + __u32 reserved; +@@ -819,6 +904,10 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x14, struct d3dkmt_enumadapters2) + #define LX_DXCLOSEADAPTER \ + _IOWR(0x47, 0x15, struct d3dkmt_closeadapter) ++#define LX_DXCREATEHWQUEUE \ ++ _IOWR(0x47, 0x18, struct d3dkmt_createhwqueue) ++#define LX_DXDESTROYHWQUEUE \ ++ _IOWR(0x47, 0x1b, struct d3dkmt_destroyhwqueue) + #define LX_DXDESTROYDEVICE \ + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ +@@ -829,6 +918,10 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x32, struct d3dkmt_signalsynchronizationobjectfromgpu) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2 \ + _IOWR(0x47, 0x33, struct d3dkmt_signalsynchronizationobjectfromgpu2) ++#define LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE \ ++ _IOWR(0x47, 0x35, struct d3dkmt_submitsignalsyncobjectstohwqueue) ++#define LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE \ ++ _IOWR(0x47, 0x36, struct d3dkmt_submitwaitforsyncobjectstohwqueue) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1681-drivers-hv-dxgkrnl-Creation-of-paging-queue-objects.patch b/patch/kernel/archive/wsl2-arm64-6.1/1681-drivers-hv-dxgkrnl-Creation-of-paging-queue-objects.patch new file mode 100644 index 000000000000..4410308be34e --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1681-drivers-hv-dxgkrnl-Creation-of-paging-queue-objects.patch @@ -0,0 +1,640 @@ +From 0000000000000000000000000000000000000000 Mon Sep 
17 00:00:00 2001 +From: Iouri Tarassov +Date: Thu, 20 Jan 2022 15:15:18 -0800 +Subject: drivers: hv: dxgkrnl: Creation of paging queue objects. + +Implement ioctls for creation/destruction of the paging queue objects: + - LX_DXCREATEPAGINGQUEUE, + - LX_DXDESTROYPAGINGQUEUE + +Paging queue objects (dxgpagingqueue) contain operations, which +handle residency of device accessible allocations. An allocation is +resident, when the device has access to it. For example, the allocation +resides in local device memory or device page tables point to system +memory which is made non-pageable. + +Each paging queue has an associated monitored fence sync object, which +is used to detect when a paging operation is completed. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 89 +++++ + drivers/hv/dxgkrnl/dxgkrnl.h | 24 ++ + drivers/hv/dxgkrnl/dxgprocess.c | 4 + + drivers/hv/dxgkrnl/dxgvmbus.c | 74 ++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 17 + + drivers/hv/dxgkrnl/ioctl.c | 189 +++++++++- + include/uapi/misc/d3dkmthk.h | 27 ++ + 7 files changed, 418 insertions(+), 6 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index f59173f13559..410f08768bad 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -278,6 +278,7 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, + void dxgdevice_stop(struct dxgdevice *device) + { + struct dxgallocation *alloc; ++ struct dxgpagingqueue *pqueue; + struct dxgsyncobject *syncobj; + + DXG_TRACE("Stopping device: %p", device); +@@ -288,6 +289,10 @@ void dxgdevice_stop(struct dxgdevice *device) + dxgdevice_release_alloc_list_lock(device); + + hmgrtable_lock(&device->process->handle_table, DXGLOCK_EXCL); ++ list_for_each_entry(pqueue, &device->pqueue_list_head, ++ pqueue_list_entry) { ++ dxgpagingqueue_stop(pqueue); ++ } + list_for_each_entry(syncobj, &device->syncobj_list_head, + syncobj_list_entry) { + dxgsyncobject_stop(syncobj); +@@ -375,6 +380,17 @@ void dxgdevice_destroy(struct dxgdevice *device) + dxgdevice_release_context_list_lock(device); + } + ++ { ++ struct dxgpagingqueue *tmp; ++ struct dxgpagingqueue *pqueue; ++ ++ DXG_TRACE("destroying paging queues"); ++ list_for_each_entry_safe(pqueue, tmp, &device->pqueue_list_head, ++ pqueue_list_entry) { ++ dxgpagingqueue_destroy(pqueue); ++ } ++ } ++ + /* Guest handles need to be released before the host handles */ + hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); + if (device->handle_valid) { +@@ -708,6 +724,26 @@ void dxgdevice_release(struct kref *refcount) + kfree(device); + } + ++void dxgdevice_add_paging_queue(struct dxgdevice *device, ++ struct dxgpagingqueue *entry) ++{ ++ dxgdevice_acquire_alloc_list_lock(device); ++ list_add_tail(&entry->pqueue_list_entry, &device->pqueue_list_head); ++ dxgdevice_release_alloc_list_lock(device); ++} ++ ++void dxgdevice_remove_paging_queue(struct dxgpagingqueue *pqueue) ++{ ++ struct dxgdevice *device = pqueue->device; ++ ++ dxgdevice_acquire_alloc_list_lock(device); ++ if (pqueue->pqueue_list_entry.next) { ++ list_del(&pqueue->pqueue_list_entry); ++ pqueue->pqueue_list_entry.next = NULL; ++ } ++ dxgdevice_release_alloc_list_lock(device); ++} ++ + void dxgdevice_add_syncobj(struct dxgdevice *device, + struct dxgsyncobject *syncobj) + { +@@ -899,6 +935,59 @@ else + kfree(alloc); + } + ++struct dxgpagingqueue *dxgpagingqueue_create(struct dxgdevice *device) ++{ ++ struct dxgpagingqueue *pqueue; ++ ++ 
pqueue = kzalloc(sizeof(*pqueue), GFP_KERNEL); ++ if (pqueue) { ++ pqueue->device = device; ++ pqueue->process = device->process; ++ pqueue->device_handle = device->handle; ++ dxgdevice_add_paging_queue(device, pqueue); ++ } ++ return pqueue; ++} ++ ++void dxgpagingqueue_stop(struct dxgpagingqueue *pqueue) ++{ ++ int ret; ++ ++ if (pqueue->mapped_address) { ++ ret = dxg_unmap_iospace(pqueue->mapped_address, PAGE_SIZE); ++ DXG_TRACE("fence is unmapped %d %p", ++ ret, pqueue->mapped_address); ++ pqueue->mapped_address = NULL; ++ } ++} ++ ++void dxgpagingqueue_destroy(struct dxgpagingqueue *pqueue) ++{ ++ struct dxgprocess *process = pqueue->process; ++ ++ DXG_TRACE("Destroying pqueue %p %x", pqueue, pqueue->handle.v); ++ ++ dxgpagingqueue_stop(pqueue); ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ if (pqueue->handle.v) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ pqueue->handle); ++ pqueue->handle.v = 0; ++ } ++ if (pqueue->syncobj_handle.v) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_MONITOREDFENCE, ++ pqueue->syncobj_handle); ++ pqueue->syncobj_handle.v = 0; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ if (pqueue->device) ++ dxgdevice_remove_paging_queue(pqueue); ++ kfree(pqueue); ++} ++ + struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + struct dxgadapter *adapter) + { +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 0330352b9c06..440d1f9b8882 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -104,6 +104,16 @@ int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev); + void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch); + void dxgvmbuschannel_receive(void *ctx); + ++struct dxgpagingqueue { ++ struct dxgdevice *device; ++ struct dxgprocess *process; ++ struct list_head pqueue_list_entry; ++ struct d3dkmthandle device_handle; ++ struct d3dkmthandle handle; ++ struct d3dkmthandle syncobj_handle; ++ void *mapped_address; ++}; ++ + /* + * The structure describes an event, which will be signaled by + * a message from host. +@@ -127,6 +137,10 @@ struct dxghosteventcpu { + bool remove_from_list; + }; + ++struct dxgpagingqueue *dxgpagingqueue_create(struct dxgdevice *device); ++void dxgpagingqueue_destroy(struct dxgpagingqueue *pqueue); ++void dxgpagingqueue_stop(struct dxgpagingqueue *pqueue); ++ + /* + * This is GPU synchronization object, which is used to synchronize execution + * between GPU contextx/hardware queues or for tracking GPU execution progress. 
+@@ -516,6 +530,9 @@ void dxgdevice_remove_alloc_safe(struct dxgdevice *dev, + struct dxgallocation *a); + void dxgdevice_add_resource(struct dxgdevice *dev, struct dxgresource *res); + void dxgdevice_remove_resource(struct dxgdevice *dev, struct dxgresource *res); ++void dxgdevice_add_paging_queue(struct dxgdevice *dev, ++ struct dxgpagingqueue *pqueue); ++void dxgdevice_remove_paging_queue(struct dxgpagingqueue *pqueue); + void dxgdevice_add_syncobj(struct dxgdevice *dev, struct dxgsyncobject *so); + void dxgdevice_remove_syncobj(struct dxgsyncobject *so); + bool dxgdevice_is_active(struct dxgdevice *dev); +@@ -762,6 +779,13 @@ dxgvmb_send_create_context(struct dxgadapter *adapter, + int dxgvmb_send_destroy_context(struct dxgadapter *adapter, + struct dxgprocess *process, + struct d3dkmthandle h); ++int dxgvmb_send_create_paging_queue(struct dxgprocess *pr, ++ struct dxgdevice *dev, ++ struct d3dkmt_createpagingqueue *args, ++ struct dxgpagingqueue *pq); ++int dxgvmb_send_destroy_paging_queue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle h); + int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + struct d3dkmt_createallocation *args, + struct d3dkmt_createallocation *__user inargs, +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index 4021084ebd78..5de3f8ccb448 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -277,6 +277,10 @@ struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process, + device_handle = + ((struct dxgcontext *)obj)->device_handle; + break; ++ case HMGRENTRY_TYPE_DXGPAGINGQUEUE: ++ device_handle = ++ ((struct dxgpagingqueue *)obj)->device_handle; ++ break; + case HMGRENTRY_TYPE_DXGHWQUEUE: + device_handle = + ((struct dxghwqueue *)obj)->device_handle; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index e83600945de1..c9c00b288ae0 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1155,6 +1155,80 @@ int dxgvmb_send_destroy_context(struct dxgadapter *adapter, + return ret; + } + ++int dxgvmb_send_create_paging_queue(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct d3dkmt_createpagingqueue *args, ++ struct dxgpagingqueue *pqueue) ++{ ++ struct dxgkvmb_command_createpagingqueue_return result; ++ struct dxgkvmb_command_createpagingqueue *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, device->adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CREATEPAGINGQUEUE, ++ process->host_handle); ++ command->args = *args; ++ args->paging_queue.v = 0; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, &result, ++ sizeof(result)); ++ if (ret < 0) { ++ DXG_ERR("send_create_paging_queue failed %x", ret); ++ goto cleanup; ++ } ++ ++ args->paging_queue = result.paging_queue; ++ args->sync_object = result.sync_object; ++ args->fence_cpu_virtual_address = ++ dxg_map_iospace(result.fence_storage_physical_address, PAGE_SIZE, ++ PROT_READ | PROT_WRITE, true); ++ if (args->fence_cpu_virtual_address == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ pqueue->mapped_address = args->fence_cpu_virtual_address; ++ pqueue->handle = args->paging_queue; ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int 
dxgvmb_send_destroy_paging_queue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle h) ++{ ++ int ret; ++ struct dxgkvmb_command_destroypagingqueue *command; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_DESTROYPAGINGQUEUE, ++ process->host_handle); ++ command->paging_queue = h; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, NULL, 0); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + static int + copy_private_data(struct d3dkmt_createallocation *args, + struct dxgkvmb_command_createallocation *command, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 2e2fd1ae5ec2..aba075d374c9 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -462,6 +462,23 @@ struct dxgkvmb_command_destroycontext { + struct d3dkmthandle context; + }; + ++struct dxgkvmb_command_createpagingqueue { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_createpagingqueue args; ++}; ++ ++struct dxgkvmb_command_createpagingqueue_return { ++ struct d3dkmthandle paging_queue; ++ struct d3dkmthandle sync_object; ++ u64 fence_storage_physical_address; ++ u64 fence_storage_offset; ++}; ++ ++struct dxgkvmb_command_destroypagingqueue { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle paging_queue; ++}; ++ + struct dxgkvmb_command_createsyncobject { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_createsynchronizationobject2 args; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 3cfc1c40e0bb..a2d236f5eff5 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -329,7 +329,7 @@ static int dxgsharedresource_seal(struct dxgsharedresource *shared_resource) + + if (alloc_data_size) { + if (data_size < alloc_data_size) { +- dev_err(DXGDEV, ++ DXG_ERR( + "Invalid private data size"); + ret = -EINVAL; + goto cleanup1; +@@ -1010,6 +1010,183 @@ static int dxgkio_destroy_hwqueue(struct dxgprocess *process, + return ret; + } + ++static int ++dxgkio_create_paging_queue(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_createpagingqueue args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct dxgpagingqueue *pqueue = NULL; ++ int ret; ++ struct d3dkmthandle host_handle = {}; ++ bool device_lock_acquired = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. 
++ */ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) ++ goto cleanup; ++ ++ device_lock_acquired = true; ++ adapter = device->adapter; ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ pqueue = dxgpagingqueue_create(device); ++ if (pqueue == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_create_paging_queue(process, device, &args, pqueue); ++ if (ret >= 0) { ++ host_handle = args.paging_queue; ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, pqueue, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ host_handle); ++ if (ret >= 0) { ++ pqueue->handle = host_handle; ++ ret = hmgrtable_assign_handle(&process->handle_table, ++ NULL, ++ HMGRENTRY_TYPE_MONITOREDFENCE, ++ args.sync_object); ++ if (ret >= 0) ++ pqueue->syncobj_handle = args.sync_object; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ /* should not fail after this */ ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (pqueue) ++ dxgpagingqueue_destroy(pqueue); ++ if (host_handle.v) ++ dxgvmb_send_destroy_paging_queue(process, ++ adapter, ++ host_handle); ++ } ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) { ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_destroy_paging_queue(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dddi_destroypagingqueue args; ++ struct dxgpagingqueue *paging_queue = NULL; ++ int ret; ++ struct d3dkmthandle device_handle = {}; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ paging_queue = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.paging_queue); ++ if (paging_queue) { ++ device_handle = paging_queue->device_handle; ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.paging_queue); ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_MONITOREDFENCE, ++ paging_queue->syncobj_handle); ++ paging_queue->syncobj_handle.v = 0; ++ paging_queue->handle.v = 0; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. 
++ */ ++ if (device_handle.v) ++ device = dxgprocess_device_by_handle(process, device_handle); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ kref_put(&device->device_kref, dxgdevice_release); ++ device = NULL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_destroy_paging_queue(process, adapter, ++ args.paging_queue); ++ ++ dxgpagingqueue_destroy(paging_queue); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) { ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + get_standard_alloc_priv_data(struct dxgdevice *device, + struct d3dkmt_createstandardallocation *alloc_info, +@@ -1272,7 +1449,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + args.private_runtime_resource_handle; + if (args.flags.create_shared) { + if (!args.flags.nt_security_sharing) { +- dev_err(DXGDEV, ++ DXG_ERR( + "nt_security_sharing must be set"); + ret = -EINVAL; + goto cleanup; +@@ -1313,7 +1490,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + args.private_runtime_data, + args.private_runtime_data_size); + if (ret) { +- dev_err(DXGDEV, ++ DXG_ERR( + "failed to copy runtime data"); + ret = -EINVAL; + goto cleanup; +@@ -1333,7 +1510,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + args.priv_drv_data, + args.priv_drv_data_size); + if (ret) { +- dev_err(DXGDEV, ++ DXG_ERR( + "failed to copy res data"); + ret = -EINVAL; + goto cleanup; +@@ -3481,7 +3658,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x04 */ {dxgkio_create_context_virtual, LX_DXCREATECONTEXTVIRTUAL}, + /* 0x05 */ {dxgkio_destroy_context, LX_DXDESTROYCONTEXT}, + /* 0x06 */ {dxgkio_create_allocation, LX_DXCREATEALLOCATION}, +-/* 0x07 */ {}, ++/* 0x07 */ {dxgkio_create_paging_queue, LX_DXCREATEPAGINGQUEUE}, + /* 0x08 */ {}, + /* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO}, + /* 0x0a */ {}, +@@ -3502,7 +3679,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE}, + /* 0x1a */ {}, + /* 0x1b */ {dxgkio_destroy_hwqueue, LX_DXDESTROYHWQUEUE}, +-/* 0x1c */ {}, ++/* 0x1c */ {dxgkio_destroy_paging_queue, LX_DXDESTROYPAGINGQUEUE}, + /* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT}, + /* 0x1e */ {}, + /* 0x1f */ {}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index a78252901c8d..6ec70852de6e 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -211,6 +211,29 @@ struct d3dddi_createhwqueueflags { + }; + }; + ++enum d3dddi_pagingqueue_priority { ++ _D3DDDI_PAGINGQUEUE_PRIORITY_BELOW_NORMAL = -1, ++ _D3DDDI_PAGINGQUEUE_PRIORITY_NORMAL = 0, ++ _D3DDDI_PAGINGQUEUE_PRIORITY_ABOVE_NORMAL = 1, ++}; ++ ++struct d3dkmt_createpagingqueue { ++ struct d3dkmthandle device; ++ enum d3dddi_pagingqueue_priority priority; ++ struct d3dkmthandle paging_queue; ++ struct d3dkmthandle sync_object; ++#ifdef __KERNEL__ ++ void *fence_cpu_virtual_address; ++#else ++ __u64 fence_cpu_virtual_address; ++#endif ++ __u32 physical_adapter_index; ++}; ++ ++struct d3dddi_destroypagingqueue { ++ struct d3dkmthandle paging_queue; ++}; ++ + enum 
d3dkmdt_gdisurfacetype { + _D3DKMDT_GDISURFACE_INVALID = 0, + _D3DKMDT_GDISURFACE_TEXTURE = 1, +@@ -890,6 +913,8 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x05, struct d3dkmt_destroycontext) + #define LX_DXCREATEALLOCATION \ + _IOWR(0x47, 0x06, struct d3dkmt_createallocation) ++#define LX_DXCREATEPAGINGQUEUE \ ++ _IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXCREATESYNCHRONIZATIONOBJECT \ +@@ -908,6 +933,8 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x18, struct d3dkmt_createhwqueue) + #define LX_DXDESTROYHWQUEUE \ + _IOWR(0x47, 0x1b, struct d3dkmt_destroyhwqueue) ++#define LX_DXDESTROYPAGINGQUEUE \ ++ _IOWR(0x47, 0x1c, struct d3dddi_destroypagingqueue) + #define LX_DXDESTROYDEVICE \ + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1682-drivers-hv-dxgkrnl-Submit-execution-commands-to-the-compute-device.patch b/patch/kernel/archive/wsl2-arm64-6.1/1682-drivers-hv-dxgkrnl-Submit-execution-commands-to-the-compute-device.patch new file mode 100644 index 000000000000..243b807aa1a4 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1682-drivers-hv-dxgkrnl-Submit-execution-commands-to-the-compute-device.patch @@ -0,0 +1,450 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 19 Jan 2022 18:02:09 -0800 +Subject: drivers: hv: dxgkrnl: Submit execution commands to the compute device + +Implements ioctls for submission of compute device buffers for execution: + - LX_DXSUBMITCOMMAND + The ioctl is used to submit a command buffer to the device, + working in the "packet scheduling" mode. + + - LX_DXSUBMITCOMMANDTOHWQUEUE + The ioctl is used to submit a command buffer to the device, + working in the "hardware scheduling" mode. + +To improve performance both ioctls use asynchronous VM bus messages +to communicate with the host as these are high frequency operations. 
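+
+As a rough illustration only (this snippet is not part of the patch), a
+user-mode client would reach the new hardware-queue path through the UAPI
+added below roughly as in the following sketch. The dxg_fd descriptor, the
+<misc/d3dkmthk.h> install path and the zero-initialized optional fields are
+assumptions made purely for this example:
+
+	/*
+	 * Hypothetical user-space sketch: submit a command buffer to a
+	 * hardware queue. dxg_fd is assumed to be an open descriptor for
+	 * the dxgkrnl character device.
+	 */
+	#include <linux/types.h>
+	#include <stdint.h>
+	#include <stdio.h>
+	#include <string.h>
+	#include <sys/ioctl.h>
+	#include <misc/d3dkmthk.h>	/* assumed UAPI header install path */
+
+	int submit_to_hwqueue(int dxg_fd, struct d3dkmthandle hwqueue,
+			      void *cmd_buf, __u32 cmd_len, __u64 fence_id)
+	{
+		struct d3dkmt_submitcommandtohwqueue args;
+
+		memset(&args, 0, sizeof(args));
+		args.hwqueue = hwqueue;		/* from LX_DXCREATEHWQUEUE */
+		args.hwqueue_progress_fence_id = fence_id;
+		args.command_buffer = (__u64)(uintptr_t)cmd_buf;
+		args.command_length = cmd_len;
+		/* no private driver data / written primaries in this sketch */
+
+		if (ioctl(dxg_fd, LX_DXSUBMITCOMMANDTOHWQUEUE, &args) < 0) {
+			perror("LX_DXSUBMITCOMMANDTOHWQUEUE");
+			return -1;
+		}
+		return 0;
+	}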
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 6 + + drivers/hv/dxgkrnl/dxgvmbus.c | 113 +++++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 14 + + drivers/hv/dxgkrnl/ioctl.c | 127 +++++++++- + include/uapi/misc/d3dkmthk.h | 58 +++++ + 5 files changed, 316 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 440d1f9b8882..ab97bc53b124 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -796,6 +796,9 @@ int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + int dxgvmb_send_destroy_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + struct d3dkmt_destroyallocation2 *args, + struct d3dkmthandle *alloc_handles); ++int dxgvmb_send_submit_command(struct dxgprocess *pr, ++ struct dxgadapter *adapter, ++ struct d3dkmt_submitcommand *args); + int dxgvmb_send_create_sync_object(struct dxgprocess *pr, + struct dxgadapter *adapter, + struct d3dkmt_createsynchronizationobject2 +@@ -838,6 +841,9 @@ int dxgvmb_send_destroy_hwqueue(struct dxgprocess *process, + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); ++int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_submitcommandtohwqueue *a); + int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, + struct dxgvmbuschannel *channel, + struct d3dkmt_opensyncobjectfromnthandle2 +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index c9c00b288ae0..7cb04fec217e 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1901,6 +1901,61 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + return ret; + } + ++int dxgvmb_send_submit_command(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_submitcommand *args) ++{ ++ int ret; ++ u32 cmd_size; ++ struct dxgkvmb_command_submitcommand *command; ++ u32 hbufsize = args->num_history_buffers * sizeof(struct d3dkmthandle); ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ cmd_size = sizeof(struct dxgkvmb_command_submitcommand) + ++ hbufsize + args->priv_drv_data_size; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ ret = copy_from_user(&command[1], args->history_buffer_array, ++ hbufsize); ++ if (ret) { ++ DXG_ERR(" failed to copy history buffer"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_from_user((u8 *) &command[1] + hbufsize, ++ args->priv_drv_data, args->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy history priv data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_SUBMITCOMMAND, ++ process->host_handle); ++ command->args = *args; ++ ++ if (dxgglobal->async_msg_enabled) { ++ command->hdr.async_msg = 1; ++ ret = dxgvmb_send_async_msg(msg.channel, msg.hdr, msg.size); ++ } else { ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, ++ msg.size); ++ } ++ ++cleanup: ++ ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + static void set_result(struct d3dkmt_createsynchronizationobject2 *args, + u64 fence_gpu_va, u8 *va) + { +@@ -2427,3 +2482,61 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + DXG_TRACE("err: 
%d", ret); + return ret; + } ++ ++int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_submitcommandtohwqueue ++ *args) ++{ ++ int ret = -EINVAL; ++ u32 cmd_size; ++ struct dxgkvmb_command_submitcommandtohwqueue *command; ++ u32 primaries_size = args->num_primaries * sizeof(struct d3dkmthandle); ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ cmd_size = sizeof(*command) + args->priv_drv_data_size + primaries_size; ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ if (primaries_size) { ++ ret = copy_from_user(&command[1], args->written_primaries, ++ primaries_size); ++ if (ret) { ++ DXG_ERR("failed to copy primaries handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ if (args->priv_drv_data_size) { ++ ret = copy_from_user((char *)&command[1] + primaries_size, ++ args->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy primaries data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_SUBMITCOMMANDTOHWQUEUE, ++ process->host_handle); ++ command->args = *args; ++ ++ if (dxgglobal->async_msg_enabled) { ++ command->hdr.async_msg = 1; ++ ret = dxgvmb_send_async_msg(msg.channel, msg.hdr, msg.size); ++ } else { ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, ++ msg.size); ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index aba075d374c9..acfdbde09e82 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -314,6 +314,20 @@ struct dxgkvmb_command_flushdevice { + enum dxgdevice_flushschedulerreason reason; + }; + ++struct dxgkvmb_command_submitcommand { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_submitcommand args; ++ /* HistoryBufferHandles */ ++ /* PrivateDriverData */ ++}; ++ ++struct dxgkvmb_command_submitcommandtohwqueue { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_submitcommandtohwqueue args; ++ /* Written primaries */ ++ /* PrivateDriverData */ ++}; ++ + struct dxgkvmb_command_createallocation_allocinfo { + u32 flags; + u32 priv_drv_data_size; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index a2d236f5eff5..9128694c8e78 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -1902,6 +1902,129 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_submit_command(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_submitcommand args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.broadcast_context_count > D3DDDI_MAX_BROADCAST_CONTEXT || ++ args.broadcast_context_count == 0) { ++ DXG_ERR("invalid number of contexts"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("invalid private data size"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.num_history_buffers > 1024) { ++ DXG_ERR("invalid number of history buffers"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.num_primaries > 
DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("invalid number of primaries"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.broadcast_context[0]); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_submit_command(process, adapter, &args); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_submit_command_to_hwqueue(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_submitcommandtohwqueue args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("invalid private data size"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.num_primaries > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("invalid number of primaries"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ args.hwqueue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_submit_command_hwqueue(process, adapter, &args); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs) + { +@@ -3666,7 +3789,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x0c */ {}, + /* 0x0d */ {}, + /* 0x0e */ {}, +-/* 0x0f */ {}, ++/* 0x0f */ {dxgkio_submit_command, LX_DXSUBMITCOMMAND}, + /* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT}, + /* 0x11 */ {dxgkio_signal_sync_object, LX_DXSIGNALSYNCHRONIZATIONOBJECT}, + /* 0x12 */ {dxgkio_wait_sync_object, LX_DXWAITFORSYNCHRONIZATIONOBJECT}, +@@ -3706,7 +3829,7 @@ static struct ioctl_desc ioctls[] = { + LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU}, + /* 0x33 */ {dxgkio_signal_sync_object_gpu2, + LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2}, +-/* 0x34 */ {}, ++/* 0x34 */ {dxgkio_submit_command_to_hwqueue, LX_DXSUBMITCOMMANDTOHWQUEUE}, + /* 0x35 */ {dxgkio_submit_signal_to_hwqueue, + LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE}, + /* 0x36 */ {dxgkio_submit_wait_to_hwqueue, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 6ec70852de6e..9238115d165d 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -58,6 +58,8 @@ struct winluid { + __u32 b; + }; + ++#define D3DDDI_MAX_WRITTEN_PRIMARIES 16 ++ + #define D3DKMT_CREATEALLOCATION_MAX 1024 + #define D3DKMT_ADAPTERS_MAX 64 + #define D3DDDI_MAX_BROADCAST_CONTEXT 64 +@@ -525,6 +527,58 @@ struct d3dkmt_destroysynchronizationobject { + struct d3dkmthandle sync_object; + }; + ++struct d3dkmt_submitcommandflags { ++ __u32 null_rendering:1; ++ __u32 present_redirected:1; 
++ __u32 reserved:30; ++}; ++ ++struct d3dkmt_submitcommand { ++ __u64 command_buffer; ++ __u32 command_length; ++ struct d3dkmt_submitcommandflags flags; ++ __u64 present_history_token; ++ __u32 broadcast_context_count; ++ struct d3dkmthandle broadcast_context[D3DDDI_MAX_BROADCAST_CONTEXT]; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ __u32 priv_drv_data_size; ++ __u32 num_primaries; ++ struct d3dkmthandle written_primaries[D3DDDI_MAX_WRITTEN_PRIMARIES]; ++ __u32 num_history_buffers; ++ __u32 reserved1; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *history_buffer_array; ++#else ++ __u64 history_buffer_array; ++#endif ++}; ++ ++struct d3dkmt_submitcommandtohwqueue { ++ struct d3dkmthandle hwqueue; ++ __u32 reserved; ++ __u64 hwqueue_progress_fence_id; ++ __u64 command_buffer; ++ __u32 command_length; ++ __u32 priv_drv_data_size; ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ __u32 num_primaries; ++ __u32 reserved1; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *written_primaries; ++#else ++ __u64 written_primaries; ++#endif ++}; ++ + enum d3dkmt_standardallocationtype { + _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1, + _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2, +@@ -917,6 +971,8 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) ++#define LX_DXSUBMITCOMMAND \ ++ _IOWR(0x47, 0x0f, struct d3dkmt_submitcommand) + #define LX_DXCREATESYNCHRONIZATIONOBJECT \ + _IOWR(0x47, 0x10, struct d3dkmt_createsynchronizationobject2) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECT \ +@@ -945,6 +1001,8 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x32, struct d3dkmt_signalsynchronizationobjectfromgpu) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2 \ + _IOWR(0x47, 0x33, struct d3dkmt_signalsynchronizationobjectfromgpu2) ++#define LX_DXSUBMITCOMMANDTOHWQUEUE \ ++ _IOWR(0x47, 0x34, struct d3dkmt_submitcommandtohwqueue) + #define LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE \ + _IOWR(0x47, 0x35, struct d3dkmt_submitsignalsyncobjectstohwqueue) + #define LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1683-drivers-hv-dxgkrnl-Share-objects-with-the-host.patch b/patch/kernel/archive/wsl2-arm64-6.1/1683-drivers-hv-dxgkrnl-Share-objects-with-the-host.patch new file mode 100644 index 000000000000..433f03238b69 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1683-drivers-hv-dxgkrnl-Share-objects-with-the-host.patch @@ -0,0 +1,271 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Sat, 7 Aug 2021 18:11:34 -0700 +Subject: drivers: hv: dxgkrnl: Share objects with the host + +Implement the LX_DXSHAREOBJECTWITHHOST ioctl. +This ioctl is used to create a Windows NT handle on the host +for the given shared object (resource or sync object). The NT +handle is returned to the caller. The caller could share the NT +handle with a host application, which needs to access the object. +The host application can open the shared resource using the NT +handle. This way the guest and the host have access to the same +object. + +Fix incorrect handling of error results from copy_from_user(). 
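+
+For illustration only (this snippet is not part of the patch), a guest
+process that already owns a shared resource or sync object handle would
+obtain the host NT handle roughly as sketched below; the dxg_fd descriptor
+and the header include path are assumptions for this example:
+
+	/*
+	 * Hypothetical user-space sketch of LX_DXSHAREOBJECTWITHHOST.
+	 * dxg_fd is assumed to be an open descriptor for the dxgkrnl
+	 * character device.
+	 */
+	#include <linux/types.h>
+	#include <stdio.h>
+	#include <sys/ioctl.h>
+	#include <misc/d3dkmthk.h>	/* assumed UAPI header install path */
+
+	int share_with_host(int dxg_fd, struct d3dkmthandle device,
+			    struct d3dkmthandle object, __u64 *nt_handle)
+	{
+		struct d3dkmt_shareobjectwithhost args = {
+			.device_handle = device, /* from device creation */
+			.object_handle = object, /* shared resource/syncobj */
+		};
+
+		if (ioctl(dxg_fd, LX_DXSHAREOBJECTWITHHOST, &args) < 0) {
+			perror("LX_DXSHAREOBJECTWITHHOST");
+			return -1;
+		}
+		/* This NT handle can now be passed to a host application. */
+		*nt_handle = args.object_vail_nt_handle;
+		return 0;
+	}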
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 2 + + drivers/hv/dxgkrnl/dxgvmbus.c | 60 +++++++++- + drivers/hv/dxgkrnl/dxgvmbus.h | 18 +++ + drivers/hv/dxgkrnl/ioctl.c | 38 +++++- + include/uapi/misc/d3dkmthk.h | 9 ++ + 5 files changed, 120 insertions(+), 7 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index ab97bc53b124..a39d11d76e41 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -872,6 +872,8 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel, + void *command, + u32 cmd_size); ++int dxgvmb_send_share_object_with_host(struct dxgprocess *process, ++ struct d3dkmt_shareobjectwithhost *args); + + void signal_host_cpu_event(struct dxghostevent *eventhdr); + int ntstatus2int(struct ntstatus status); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 7cb04fec217e..67a16de622e0 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -881,6 +881,50 @@ int dxgvmb_send_destroy_sync_object(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_share_object_with_host(struct dxgprocess *process, ++ struct d3dkmt_shareobjectwithhost *args) ++{ ++ struct dxgkvmb_command_shareobjectwithhost *command; ++ struct dxgkvmb_command_shareobjectwithhost_return result = {}; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, NULL, process, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ command_vm_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_SHAREOBJECTWITHHOST, ++ process->host_handle); ++ command->device_handle = args->device_handle; ++ command->object_handle = args->object_handle; ++ ++ ret = dxgvmb_send_sync_msg(dxgglobal_get_dxgvmbuschannel(), ++ msg.hdr, msg.size, &result, sizeof(result)); ++ ++ dxgglobal_release_channel_lock(); ++ ++ if (ret || !NT_SUCCESS(result.status)) { ++ if (ret == 0) ++ ret = ntstatus2int(result.status); ++ DXG_ERR("Host failed to share object with host: %d %x", ++ ret, result.status.v); ++ goto cleanup; ++ } ++ args->object_vail_nt_handle = result.vail_nt_handle; ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_ERR("err: %d", ret); ++ return ret; ++} ++ + /* + * Virtual GPU messages to the host + */ +@@ -2323,37 +2367,43 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + + ret = copy_to_user(&inargs->queue, &command->hwqueue, + sizeof(struct d3dkmthandle)); +- if (ret < 0) { ++ if (ret) { + DXG_ERR("failed to copy hwqueue handle"); ++ ret = -EINVAL; + goto cleanup; + } + ret = copy_to_user(&inargs->queue_progress_fence, + &command->hwqueue_progress_fence, + sizeof(struct d3dkmthandle)); +- if (ret < 0) { ++ if (ret) { + DXG_ERR("failed to progress fence"); ++ ret = -EINVAL; + goto cleanup; + } + ret = copy_to_user(&inargs->queue_progress_fence_cpu_va, + &hwqueue->progress_fence_mapped_address, + sizeof(inargs->queue_progress_fence_cpu_va)); +- if (ret < 0) { ++ if (ret) { + DXG_ERR("failed to copy fence cpu va"); ++ ret = -EINVAL; + goto cleanup; + } + ret = copy_to_user(&inargs->queue_progress_fence_gpu_va, + &command->hwqueue_progress_fence_gpuva, + sizeof(u64)); +- if (ret < 0) { ++ if (ret) { + DXG_ERR("failed to copy fence gpu va"); ++ ret = -EINVAL; + goto cleanup; + } + if 
(args->priv_drv_data_size) { + ret = copy_to_user(args->priv_drv_data, + command->priv_drv_data, + args->priv_drv_data_size); +- if (ret < 0) ++ if (ret) { + DXG_ERR("failed to copy private data"); ++ ret = -EINVAL; ++ } + } + + cleanup: +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index acfdbde09e82..c1f693917d99 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -574,4 +574,22 @@ struct dxgkvmb_command_destroyhwqueue { + struct d3dkmthandle hwqueue; + }; + ++struct dxgkvmb_command_shareobjectwithhost { ++ struct dxgkvmb_command_vm_to_host hdr; ++ struct d3dkmthandle device_handle; ++ struct d3dkmthandle object_handle; ++ u64 reserved; ++}; ++ ++struct dxgkvmb_command_shareobjectwithhost_return { ++ struct ntstatus status; ++ u32 alignment; ++ u64 vail_nt_handle; ++}; ++ ++int ++dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel, ++ void *command, u32 command_size, void *result, ++ u32 result_size); ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 9128694c8e78..ac052836ce27 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -2460,6 +2460,7 @@ dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs) + if (ret == 0) + goto success; + DXG_ERR("failed to copy output args"); ++ ret = -EINVAL; + + cleanup: + +@@ -3364,8 +3365,10 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + tmp = (u64) object_fd; + + ret = copy_to_user(args.shared_handle, &tmp, sizeof(u64)); +- if (ret < 0) ++ if (ret) { + DXG_ERR("failed to copy shared handle"); ++ ret = -EINVAL; ++ } + + cleanup: + if (ret < 0) { +@@ -3773,6 +3776,37 @@ dxgkio_open_resource_nt(struct dxgprocess *process, + return ret; + } + ++static int ++dxgkio_share_object_with_host(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_shareobjectwithhost args; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_share_object_with_host(process, &args); ++ if (ret) { ++ DXG_ERR("dxgvmb_send_share_object_with_host dailed"); ++ goto cleanup; ++ } ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy data to user"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, +@@ -3850,7 +3884,7 @@ static struct ioctl_desc ioctls[] = { + LX_DXQUERYRESOURCEINFOFROMNTHANDLE}, + /* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE}, + /* 0x43 */ {}, +-/* 0x44 */ {}, ++/* 0x44 */ {dxgkio_share_object_with_host, LX_DXSHAREOBJECTWITHHOST}, + /* 0x45 */ {}, + }; + +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 9238115d165d..895861505e6e 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -952,6 +952,13 @@ struct d3dkmt_enumadapters3 { + #endif + }; + ++struct d3dkmt_shareobjectwithhost { ++ struct d3dkmthandle device_handle; ++ struct d3dkmthandle object_handle; ++ __u64 reserved; ++ __u64 object_vail_nt_handle; ++}; ++ + /* + * Dxgkrnl Graphics Port Driver ioctl definitions + * +@@ -1021,5 +1028,7 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x41, struct d3dkmt_queryresourceinfofromnthandle) + #define 
LX_DXOPENRESOURCEFROMNTHANDLE \ + _IOWR(0x47, 0x42, struct d3dkmt_openresourcefromnthandle) ++#define LX_DXSHAREOBJECTWITHHOST \ ++ _IOWR(0x47, 0x44, struct d3dkmt_shareobjectwithhost) + + #endif /* _D3DKMTHK_H */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1684-drivers-hv-dxgkrnl-Query-the-dxgdevice-state.patch b/patch/kernel/archive/wsl2-arm64-6.1/1684-drivers-hv-dxgkrnl-Query-the-dxgdevice-state.patch new file mode 100644 index 000000000000..604dcb0ffaf5 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1684-drivers-hv-dxgkrnl-Query-the-dxgdevice-state.patch @@ -0,0 +1,454 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 19 Jan 2022 16:53:47 -0800 +Subject: drivers: hv: dxgkrnl: Query the dxgdevice state + +Implement the ioctl to query the dxgdevice state - LX_DXGETDEVICESTATE. +The IOCTL is used to query the state of the given dxgdevice object (active, +error, etc.). + +A call to the dxgdevice execution state could be high frequency. +The following method is used to avoid sending a synchronous VM +bus message to the host for every call: +- When a dxgdevice is created, a pointer to dxgglobal->device_state_counter + is sent to the host +- Every time the device state on the host is changed, the host will send + an asynchronous message to the guest (DXGK_VMBCOMMAND_SETGUESTDATA) and + the guest will increment the device_state_counter value. +- the dxgdevice object has execution_state_counter member, which is equal + to dxgglobal->device_state_counter value at the time when + LX_DXGETDEVICESTATE was last processed.. +- if execution_state_counter is different from device_state_counter, the + dxgk_vmbcommand_getdevicestate VM bus message is sent to the host. + Otherwise, the cached value is returned to the caller. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 11 + + drivers/hv/dxgkrnl/dxgmodule.c | 1 - + drivers/hv/dxgkrnl/dxgvmbus.c | 68 +++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 26 +++ + drivers/hv/dxgkrnl/ioctl.c | 66 +++++- + include/uapi/misc/d3dkmthk.h | 101 +++++++++- + 6 files changed, 261 insertions(+), 12 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index a39d11d76e41..b131c3b43838 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -268,12 +268,18 @@ void dxgsyncobject_destroy(struct dxgprocess *process, + void dxgsyncobject_stop(struct dxgsyncobject *syncobj); + void dxgsyncobject_release(struct kref *refcount); + ++/* ++ * device_state_counter - incremented every time the execition state of ++ * a DXGDEVICE is changed in the host. Used to optimize access to the ++ * device execution state. 
++ */ + struct dxgglobal { + struct dxgdriver *drvdata; + struct dxgvmbuschannel channel; + struct hv_device *hdev; + u32 num_adapters; + u32 vmbus_ver; /* Interface version */ ++ atomic_t device_state_counter; + struct resource *mem; + u64 mmiospace_base; + u64 mmiospace_size; +@@ -512,6 +518,7 @@ struct dxgdevice { + struct list_head syncobj_list_head; + struct d3dkmthandle handle; + enum d3dkmt_deviceexecution_state execution_state; ++ int execution_state_counter; + u32 handle_valid; + }; + +@@ -849,6 +856,10 @@ int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, + struct d3dkmt_opensyncobjectfromnthandle2 + *args, + struct dxgsyncobject *syncobj); ++int dxgvmb_send_get_device_state(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_getdevicestate *args, ++ struct d3dkmt_getdevicestate *__user inargs); + int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process, + struct d3dkmthandle object, + struct d3dkmthandle *shared_handle); +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 8cbe1095599f..5c364a46b65f 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -827,7 +827,6 @@ static struct dxgglobal *dxgglobal_create(void) + #ifdef DEBUG + dxgk_validate_ioctls(); + #endif +- + return dxgglobal; + } + +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 67a16de622e0..ed800dc09180 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -281,6 +281,24 @@ static void command_vm_to_host_init1(struct dxgkvmb_command_vm_to_host *command, + command->channel_type = DXGKVMB_VM_TO_HOST; + } + ++static void set_guest_data(struct dxgkvmb_command_host_to_vm *packet, ++ u32 packet_length) ++{ ++ struct dxgkvmb_command_setguestdata *command = (void *)packet; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ DXG_TRACE("Setting guest data: %d %d %p %p", ++ command->data_type, ++ command->data32, ++ command->guest_pointer, ++ &dxgglobal->device_state_counter); ++ if (command->data_type == SETGUESTDATA_DATATYPE_DWORD && ++ command->guest_pointer == &dxgglobal->device_state_counter && ++ command->data32 != 0) { ++ atomic_inc(&dxgglobal->device_state_counter); ++ } ++} ++ + static void signal_guest_event(struct dxgkvmb_command_host_to_vm *packet, + u32 packet_length) + { +@@ -311,6 +329,9 @@ static void process_inband_packet(struct dxgvmbuschannel *channel, + DXG_TRACE("global packet %d", + packet->command_type); + switch (packet->command_type) { ++ case DXGK_VMBCOMMAND_SETGUESTDATA: ++ set_guest_data(packet, packet_length); ++ break; + case DXGK_VMBCOMMAND_SIGNALGUESTEVENT: + case DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE: + signal_guest_event(packet, packet_length); +@@ -1028,6 +1049,7 @@ struct d3dkmthandle dxgvmb_send_create_device(struct dxgadapter *adapter, + struct dxgkvmb_command_createdevice *command; + struct dxgkvmb_command_createdevice_return result = { }; + struct dxgvmbusmsg msg; ++ struct dxgglobal *dxgglobal = dxggbl(); + + ret = init_message(&msg, adapter, process, sizeof(*command)); + if (ret) +@@ -1037,6 +1059,7 @@ struct d3dkmthandle dxgvmb_send_create_device(struct dxgadapter *adapter, + command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_CREATEDEVICE, + process->host_handle); + command->flags = args->flags; ++ command->error_code = &dxgglobal->device_state_counter; + + ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, + &result, sizeof(result)); +@@ -1806,6 +1829,51 @@ int 
dxgvmb_send_destroy_allocation(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_get_device_state(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_getdevicestate *args, ++ struct d3dkmt_getdevicestate *__user output) ++{ ++ int ret; ++ struct dxgkvmb_command_getdevicestate *command; ++ struct dxgkvmb_command_getdevicestate_return result = { }; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_GETDEVICESTATE, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result.status); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(output, &result.args, sizeof(result.args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EINVAL; ++ } ++ ++ if (args->state_type == _D3DKMT_DEVICESTATE_EXECUTION) ++ args->execution_state = result.args.execution_state; ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_open_resource(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmthandle device, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index c1f693917d99..6ca1068b0d4c 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -172,6 +172,22 @@ struct dxgkvmb_command_signalguestevent { + bool dereference_event; + }; + ++enum set_guestdata_type { ++ SETGUESTDATA_DATATYPE_DWORD = 0, ++ SETGUESTDATA_DATATYPE_UINT64 = 1 ++}; ++ ++struct dxgkvmb_command_setguestdata { ++ struct dxgkvmb_command_host_to_vm hdr; ++ void *guest_pointer; ++ union { ++ u64 data64; ++ u32 data32; ++ }; ++ u32 dereference : 1; ++ u32 data_type : 4; ++}; ++ + struct dxgkvmb_command_opensyncobject { + struct dxgkvmb_command_vm_to_host hdr; + struct d3dkmthandle device; +@@ -574,6 +590,16 @@ struct dxgkvmb_command_destroyhwqueue { + struct d3dkmthandle hwqueue; + }; + ++struct dxgkvmb_command_getdevicestate { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_getdevicestate args; ++}; ++ ++struct dxgkvmb_command_getdevicestate_return { ++ struct d3dkmt_getdevicestate args; ++ struct ntstatus status; ++}; ++ + struct dxgkvmb_command_shareobjectwithhost { + struct dxgkvmb_command_vm_to_host hdr; + struct d3dkmthandle device_handle; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index ac052836ce27..26d410fd6e99 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3142,6 +3142,70 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_getdevicestate args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int global_device_state_counter = 0; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ 
if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ if (args.state_type == _D3DKMT_DEVICESTATE_EXECUTION) { ++ global_device_state_counter = ++ atomic_read(&dxgglobal->device_state_counter); ++ if (device->execution_state_counter == ++ global_device_state_counter) { ++ args.execution_state = device->execution_state; ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy args to user"); ++ ret = -EINVAL; ++ } ++ goto cleanup; ++ } ++ } ++ ++ ret = dxgvmb_send_get_device_state(process, adapter, &args, inargs); ++ ++ if (ret == 0 && args.state_type == _D3DKMT_DEVICESTATE_EXECUTION) { ++ device->execution_state = args.execution_state; ++ device->execution_state_counter = global_device_state_counter; ++ } ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ if (ret < 0) ++ DXG_ERR("Failed to get device state %x", ret); ++ ++ return ret; ++} ++ + static int + dxgsharedsyncobj_get_host_nt_handle(struct dxgsharedsyncobject *syncobj, + struct dxgprocess *process, +@@ -3822,7 +3886,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x0b */ {}, + /* 0x0c */ {}, + /* 0x0d */ {}, +-/* 0x0e */ {}, ++/* 0x0e */ {dxgkio_get_device_state, LX_DXGETDEVICESTATE}, + /* 0x0f */ {dxgkio_submit_command, LX_DXSUBMITCOMMAND}, + /* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT}, + /* 0x11 */ {dxgkio_signal_sync_object, LX_DXSIGNALSYNCHRONIZATIONOBJECT}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 895861505e6e..8a013b07e88a 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -236,6 +236,95 @@ struct d3dddi_destroypagingqueue { + struct d3dkmthandle paging_queue; + }; + ++enum dxgk_render_pipeline_stage { ++ _DXGK_RENDER_PIPELINE_STAGE_UNKNOWN = 0, ++ _DXGK_RENDER_PIPELINE_STAGE_INPUT_ASSEMBLER = 1, ++ _DXGK_RENDER_PIPELINE_STAGE_VERTEX_SHADER = 2, ++ _DXGK_RENDER_PIPELINE_STAGE_GEOMETRY_SHADER = 3, ++ _DXGK_RENDER_PIPELINE_STAGE_STREAM_OUTPUT = 4, ++ _DXGK_RENDER_PIPELINE_STAGE_RASTERIZER = 5, ++ _DXGK_RENDER_PIPELINE_STAGE_PIXEL_SHADER = 6, ++ _DXGK_RENDER_PIPELINE_STAGE_OUTPUT_MERGER = 7, ++}; ++ ++enum dxgk_page_fault_flags { ++ _DXGK_PAGE_FAULT_WRITE = 0x1, ++ _DXGK_PAGE_FAULT_FENCE_INVALID = 0x2, ++ _DXGK_PAGE_FAULT_ADAPTER_RESET_REQUIRED = 0x4, ++ _DXGK_PAGE_FAULT_ENGINE_RESET_REQUIRED = 0x8, ++ _DXGK_PAGE_FAULT_FATAL_HARDWARE_ERROR = 0x10, ++ _DXGK_PAGE_FAULT_IOMMU = 0x20, ++ _DXGK_PAGE_FAULT_HW_CONTEXT_VALID = 0x40, ++ _DXGK_PAGE_FAULT_PROCESS_HANDLE_VALID = 0x80, ++}; ++ ++enum dxgk_general_error_code { ++ _DXGK_GENERAL_ERROR_PAGE_FAULT = 0, ++ _DXGK_GENERAL_ERROR_INVALID_INSTRUCTION = 1, ++}; ++ ++struct dxgk_fault_error_code { ++ union { ++ struct { ++ __u32 is_device_specific_code:1; ++ enum dxgk_general_error_code general_error_code:31; ++ }; ++ struct { ++ __u32 is_device_specific_code_reserved_bit:1; ++ __u32 device_specific_code:31; ++ }; ++ }; ++}; ++ ++struct d3dkmt_devicereset_state { ++ union { ++ struct { ++ __u32 desktop_switched:1; ++ __u32 reserved:31; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_devicepagefault_state { ++ __u64 faulted_primitive_api_sequence_number; ++ enum dxgk_render_pipeline_stage faulted_pipeline_stage; ++ __u32 faulted_bind_table_entry; ++ enum dxgk_page_fault_flags page_fault_flags; ++ struct dxgk_fault_error_code fault_error_code; ++ __u64 faulted_virtual_address; ++}; ++ ++enum d3dkmt_deviceexecution_state { ++ 
_D3DKMT_DEVICEEXECUTION_ACTIVE = 1, ++ _D3DKMT_DEVICEEXECUTION_RESET = 2, ++ _D3DKMT_DEVICEEXECUTION_HUNG = 3, ++ _D3DKMT_DEVICEEXECUTION_STOPPED = 4, ++ _D3DKMT_DEVICEEXECUTION_ERROR_OUTOFMEMORY = 5, ++ _D3DKMT_DEVICEEXECUTION_ERROR_DMAFAULT = 6, ++ _D3DKMT_DEVICEEXECUTION_ERROR_DMAPAGEFAULT = 7, ++}; ++ ++enum d3dkmt_devicestate_type { ++ _D3DKMT_DEVICESTATE_EXECUTION = 1, ++ _D3DKMT_DEVICESTATE_PRESENT = 2, ++ _D3DKMT_DEVICESTATE_RESET = 3, ++ _D3DKMT_DEVICESTATE_PRESENT_DWM = 4, ++ _D3DKMT_DEVICESTATE_PAGE_FAULT = 5, ++ _D3DKMT_DEVICESTATE_PRESENT_QUEUE = 6, ++}; ++ ++struct d3dkmt_getdevicestate { ++ struct d3dkmthandle device; ++ enum d3dkmt_devicestate_type state_type; ++ union { ++ enum d3dkmt_deviceexecution_state execution_state; ++ struct d3dkmt_devicereset_state reset_state; ++ struct d3dkmt_devicepagefault_state page_fault_state; ++ char alignment[48]; ++ }; ++}; ++ + enum d3dkmdt_gdisurfacetype { + _D3DKMDT_GDISURFACE_INVALID = 0, + _D3DKMDT_GDISURFACE_TEXTURE = 1, +@@ -759,16 +848,6 @@ struct d3dkmt_queryadapterinfo { + __u32 private_data_size; + }; + +-enum d3dkmt_deviceexecution_state { +- _D3DKMT_DEVICEEXECUTION_ACTIVE = 1, +- _D3DKMT_DEVICEEXECUTION_RESET = 2, +- _D3DKMT_DEVICEEXECUTION_HUNG = 3, +- _D3DKMT_DEVICEEXECUTION_STOPPED = 4, +- _D3DKMT_DEVICEEXECUTION_ERROR_OUTOFMEMORY = 5, +- _D3DKMT_DEVICEEXECUTION_ERROR_DMAFAULT = 6, +- _D3DKMT_DEVICEEXECUTION_ERROR_DMAPAGEFAULT = 7, +-}; +- + struct d3dddi_openallocationinfo2 { + struct d3dkmthandle allocation; + #ifdef __KERNEL__ +@@ -978,6 +1057,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) ++#define LX_DXGETDEVICESTATE \ ++ _IOWR(0x47, 0x0e, struct d3dkmt_getdevicestate) + #define LX_DXSUBMITCOMMAND \ + _IOWR(0x47, 0x0f, struct d3dkmt_submitcommand) + #define LX_DXCREATESYNCHRONIZATIONOBJECT \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1685-drivers-hv-dxgkrnl-Map-unmap-CPU-address-to-device-allocation.patch b/patch/kernel/archive/wsl2-arm64-6.1/1685-drivers-hv-dxgkrnl-Map-unmap-CPU-address-to-device-allocation.patch new file mode 100644 index 000000000000..0f2e123431fd --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1685-drivers-hv-dxgkrnl-Map-unmap-CPU-address-to-device-allocation.patch @@ -0,0 +1,498 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 19 Jan 2022 13:58:28 -0800 +Subject: drivers: hv: dxgkrnl: Map(unmap) CPU address to device allocation + +Implement ioctls to map/unmap CPU virtual addresses to compute device +allocations - LX_DXLOCK2 and LX_DXUNLOCK2. + +The LX_DXLOCK2 ioctl maps a CPU virtual address to a compute device +allocation. The allocation could be located in system memory or local +device memory on the host. When the device allocation is created +from the guest system memory (existing sysmem allocation), the +allocation CPU address is known and is returned to the caller. +For other CPU visible allocations the code flow is the following: +1. A VM bus message is sent to the host to map the allocation +2. The host allocates a portion of the guest IO space and maps it + to the allocation backing store. The IO space address of the + allocation is returned back to the guest. +3. The guest allocates a CPU virtual address and maps it to the IO + space (see the dxg_map_iospace function). +4. 
The CPU VA is returned back to the caller +cpu_address_mapped and cpu_address_refcount are used to track how +many times an allocation was mapped. + +The LX_DXUNLOCK2 ioctl unmaps a CPU virtual address from a compute +device allocation. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 11 + + drivers/hv/dxgkrnl/dxgkrnl.h | 14 + + drivers/hv/dxgkrnl/dxgvmbus.c | 107 +++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 19 ++ + drivers/hv/dxgkrnl/ioctl.c | 160 +++++++++- + include/uapi/misc/d3dkmthk.h | 30 ++ + 6 files changed, 339 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 410f08768bad..23f00db7637e 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -885,6 +885,15 @@ void dxgallocation_stop(struct dxgallocation *alloc) + vfree(alloc->pages); + alloc->pages = NULL; + } ++ dxgprocess_ht_lock_exclusive_down(alloc->process); ++ if (alloc->cpu_address_mapped) { ++ dxg_unmap_iospace(alloc->cpu_address, ++ alloc->num_pages << PAGE_SHIFT); ++ alloc->cpu_address_mapped = false; ++ alloc->cpu_address = NULL; ++ alloc->cpu_address_refcount = 0; ++ } ++ dxgprocess_ht_lock_exclusive_up(alloc->process); + } + + void dxgallocation_free_handle(struct dxgallocation *alloc) +@@ -932,6 +941,8 @@ else + #endif + if (alloc->priv_drv_data) + vfree(alloc->priv_drv_data); ++ if (alloc->cpu_address_mapped) ++ pr_err("Alloc IO space is mapped: %p", alloc); + kfree(alloc); + } + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index b131c3b43838..1d6b552f1c1a 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -708,6 +708,8 @@ struct dxgallocation { + struct d3dkmthandle alloc_handle; + /* Set to 1 when allocation belongs to resource. */ + u32 resource_owner:1; ++ /* Set to 1 when 'cpu_address' is mapped to the IO space. */ ++ u32 cpu_address_mapped:1; + /* Set to 1 when the allocatio is mapped as cached */ + u32 cached:1; + u32 handle_valid:1; +@@ -719,6 +721,11 @@ struct dxgallocation { + #endif + /* Number of pages in the 'pages' array */ + u32 num_pages; ++ /* ++ * How many times dxgk_lock2 is called to allocation, which is mapped ++ * to IO space. 
++ */ ++ u32 cpu_address_refcount; + /* + * CPU address from the existing sysmem allocation, or + * mapped to the CPU visible backing store in the IO space +@@ -837,6 +844,13 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + d3dkmt_waitforsynchronizationobjectfromcpu + *args, + u64 cpu_event); ++int dxgvmb_send_lock2(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_lock2 *args, ++ struct d3dkmt_lock2 *__user outargs); ++int dxgvmb_send_unlock2(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_unlock2 *args); + int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_createhwqueue *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index ed800dc09180..a80f84d9065a 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -2354,6 +2354,113 @@ int dxgvmb_send_wait_sync_object_gpu(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_lock2(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_lock2 *args, ++ struct d3dkmt_lock2 *__user outargs) ++{ ++ int ret; ++ struct dxgkvmb_command_lock2 *command; ++ struct dxgkvmb_command_lock2_return result = { }; ++ struct dxgallocation *alloc = NULL; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_LOCK2, process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result.status); ++ if (ret < 0) ++ goto cleanup; ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ alloc = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ args->allocation); ++ if (alloc == NULL) { ++ DXG_ERR("invalid alloc"); ++ ret = -EINVAL; ++ } else { ++ if (alloc->cpu_address) { ++ args->data = alloc->cpu_address; ++ if (alloc->cpu_address_mapped) ++ alloc->cpu_address_refcount++; ++ } else { ++ u64 offset = (u64)result.cpu_visible_buffer_offset; ++ ++ args->data = dxg_map_iospace(offset, ++ alloc->num_pages << PAGE_SHIFT, ++ PROT_READ | PROT_WRITE, alloc->cached); ++ if (args->data) { ++ alloc->cpu_address_refcount = 1; ++ alloc->cpu_address_mapped = true; ++ alloc->cpu_address = args->data; ++ } ++ } ++ if (args->data == NULL) { ++ ret = -ENOMEM; ++ } else { ++ ret = copy_to_user(&outargs->data, &args->data, ++ sizeof(args->data)); ++ if (ret) { ++ DXG_ERR("failed to copy data"); ++ ret = -EINVAL; ++ alloc->cpu_address_refcount--; ++ if (alloc->cpu_address_refcount == 0) { ++ dxg_unmap_iospace(alloc->cpu_address, ++ alloc->num_pages << PAGE_SHIFT); ++ alloc->cpu_address_mapped = false; ++ alloc->cpu_address = NULL; ++ } ++ } ++ } ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_unlock2(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_unlock2 *args) ++{ ++ int ret; ++ struct dxgkvmb_command_unlock2 *command; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ 
DXGK_VMBCOMMAND_UNLOCK2, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_createhwqueue *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 6ca1068b0d4c..447bb1ba391b 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -570,6 +570,25 @@ struct dxgkvmb_command_waitforsyncobjectfromgpu { + /* struct d3dkmthandle ObjectHandles[object_count] */ + }; + ++struct dxgkvmb_command_lock2 { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_lock2 args; ++ bool use_legacy_lock; ++ u32 flags; ++ u32 priv_drv_data; ++}; ++ ++struct dxgkvmb_command_lock2_return { ++ struct ntstatus status; ++ void *cpu_visible_buffer_offset; ++}; ++ ++struct dxgkvmb_command_unlock2 { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_unlock2 args; ++ bool use_legacy_unlock; ++}; ++ + /* Returns the same structure */ + struct dxgkvmb_command_createhwqueue { + struct dxgkvmb_command_vgpu_to_host hdr; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 26d410fd6e99..37e218443310 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3142,6 +3142,162 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_lock2(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_lock2 args; ++ struct d3dkmt_lock2 *__user result = inargs; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgallocation *alloc = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ args.data = NULL; ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ alloc = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ args.allocation); ++ if (alloc == NULL) { ++ ret = -EINVAL; ++ } else { ++ if (alloc->cpu_address) { ++ ret = copy_to_user(&result->data, ++ &alloc->cpu_address, ++ sizeof(args.data)); ++ if (ret == 0) { ++ args.data = alloc->cpu_address; ++ if (alloc->cpu_address_mapped) ++ alloc->cpu_address_refcount++; ++ } else { ++ DXG_ERR("Failed to copy cpu address"); ++ ret = -EINVAL; ++ } ++ } ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ if (ret < 0) ++ goto cleanup; ++ if (args.data) ++ goto success; ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. 
++ */ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_lock2(process, adapter, &args, result); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++success: ++ DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret); ++ return ret; ++} ++ ++static int ++dxgkio_unlock2(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_unlock2 args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgallocation *alloc = NULL; ++ bool done = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ alloc = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ args.allocation); ++ if (alloc == NULL) { ++ ret = -EINVAL; ++ } else { ++ if (alloc->cpu_address == NULL) { ++ DXG_ERR("Allocation is not locked: %p", alloc); ++ ret = -EINVAL; ++ } else if (alloc->cpu_address_mapped) { ++ if (alloc->cpu_address_refcount > 0) { ++ alloc->cpu_address_refcount--; ++ if (alloc->cpu_address_refcount != 0) { ++ done = true; ++ } else { ++ dxg_unmap_iospace(alloc->cpu_address, ++ alloc->num_pages << PAGE_SHIFT); ++ alloc->cpu_address_mapped = false; ++ alloc->cpu_address = NULL; ++ } ++ } else { ++ DXG_ERR("Invalid cpu access refcount"); ++ done = true; ++ } ++ } ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ if (done) ++ goto success; ++ if (ret < 0) ++ goto cleanup; ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. 
++ */ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_unlock2(process, adapter, &args); ++ ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++success: ++ DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret); ++ return ret; ++} ++ + static int + dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) + { +@@ -3909,7 +4065,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x22 */ {}, + /* 0x23 */ {}, + /* 0x24 */ {}, +-/* 0x25 */ {}, ++/* 0x25 */ {dxgkio_lock2, LX_DXLOCK2}, + /* 0x26 */ {}, + /* 0x27 */ {}, + /* 0x28 */ {}, +@@ -3932,7 +4088,7 @@ static struct ioctl_desc ioctls[] = { + LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE}, + /* 0x36 */ {dxgkio_submit_wait_to_hwqueue, + LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE}, +-/* 0x37 */ {}, ++/* 0x37 */ {dxgkio_unlock2, LX_DXUNLOCK2}, + /* 0x38 */ {}, + /* 0x39 */ {}, + /* 0x3a */ {dxgkio_wait_sync_object_cpu, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 8a013b07e88a..b498f09e694d 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -668,6 +668,32 @@ struct d3dkmt_submitcommandtohwqueue { + #endif + }; + ++struct d3dddicb_lock2flags { ++ union { ++ struct { ++ __u32 reserved:32; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_lock2 { ++ struct d3dkmthandle device; ++ struct d3dkmthandle allocation; ++ struct d3dddicb_lock2flags flags; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ void *data; ++#else ++ __u64 data; ++#endif ++}; ++ ++struct d3dkmt_unlock2 { ++ struct d3dkmthandle device; ++ struct d3dkmthandle allocation; ++}; ++ + enum d3dkmt_standardallocationtype { + _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1, + _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2, +@@ -1083,6 +1109,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ + _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) ++#define LX_DXLOCK2 \ ++ _IOWR(0x47, 0x25, struct d3dkmt_lock2) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x31, struct d3dkmt_signalsynchronizationobjectfromcpu) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU \ +@@ -1095,6 +1123,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x35, struct d3dkmt_submitsignalsyncobjectstohwqueue) + #define LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE \ + _IOWR(0x47, 0x36, struct d3dkmt_submitwaitforsyncobjectstohwqueue) ++#define LX_DXUNLOCK2 \ ++ _IOWR(0x47, 0x37, struct d3dkmt_unlock2) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1686-drivers-hv-dxgkrnl-Manage-device-allocation-properties.patch b/patch/kernel/archive/wsl2-arm64-6.1/1686-drivers-hv-dxgkrnl-Manage-device-allocation-properties.patch new file mode 100644 index 000000000000..02b8c5eeba69 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1686-drivers-hv-dxgkrnl-Manage-device-allocation-properties.patch @@ -0,0 +1,912 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 19 Jan 2022 
11:14:22 -0800 +Subject: drivers: hv: dxgkrnl: Manage device allocation properties + +Implement ioctls to manage properties of a compute device allocation: + - LX_DXUPDATEALLOCPROPERTY, + - LX_DXSETALLOCATIONPRIORITY, + - LX_DXGETALLOCATIONPRIORITY, + - LX_DXQUERYALLOCATIONRESIDENCY. + - LX_DXCHANGEVIDEOMEMORYRESERVATION, + +The LX_DXUPDATEALLOCPROPERTY ioctl requests the host to update +various properties of a compute devoce allocation. + +The LX_DXSETALLOCATIONPRIORITY and LX_DXGETALLOCATIONPRIORITY ioctls +are used to set/get allocation priority, which defines the +importance of the allocation to be in the local device memory. + +The LX_DXQUERYALLOCATIONRESIDENCY ioctl queries if the allocation +is located in the compute device accessible memory. + +The LX_DXCHANGEVIDEOMEMORYRESERVATION ioctl changes compute device +memory reservation of an allocation. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 21 + + drivers/hv/dxgkrnl/dxgvmbus.c | 300 ++++++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 50 ++ + drivers/hv/dxgkrnl/ioctl.c | 217 ++++++- + include/uapi/misc/d3dkmthk.h | 127 ++++ + 5 files changed, 708 insertions(+), 7 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 1d6b552f1c1a..7fefe4617488 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -851,6 +851,23 @@ int dxgvmb_send_lock2(struct dxgprocess *process, + int dxgvmb_send_unlock2(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_unlock2 *args); ++int dxgvmb_send_update_alloc_property(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dddi_updateallocproperty *args, ++ struct d3dddi_updateallocproperty *__user ++ inargs); ++int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_setallocationpriority *a); ++int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_getallocationpriority *a); ++int dxgvmb_send_change_vidmem_reservation(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle other_process, ++ struct ++ d3dkmt_changevideomemoryreservation ++ *args); + int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_createhwqueue *args, +@@ -870,6 +887,10 @@ int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, + struct d3dkmt_opensyncobjectfromnthandle2 + *args, + struct dxgsyncobject *syncobj); ++int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryallocationresidency ++ *args); + int dxgvmb_send_get_device_state(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_getdevicestate *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index a80f84d9065a..dd2c97fee27b 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1829,6 +1829,79 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryallocationresidency ++ *args) ++{ ++ int ret = -EINVAL; ++ struct dxgkvmb_command_queryallocationresidency *command = NULL; ++ u32 cmd_size = sizeof(*command); ++ u32 alloc_size = 0; ++ u32 result_allocation_size = 0; ++ struct 
dxgkvmb_command_queryallocationresidency_return *result = NULL; ++ u32 result_size = sizeof(*result); ++ struct dxgvmbusmsgres msg = {.hdr = NULL}; ++ ++ if (args->allocation_count > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args->allocation_count) { ++ alloc_size = args->allocation_count * ++ sizeof(struct d3dkmthandle); ++ cmd_size += alloc_size; ++ result_allocation_size = args->allocation_count * ++ sizeof(args->residency_status[0]); ++ } else { ++ result_allocation_size = sizeof(args->residency_status[0]); ++ } ++ result_size += result_allocation_size; ++ ++ ret = init_message_res(&msg, adapter, process, cmd_size, result_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ result = msg.res; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_QUERYALLOCATIONRESIDENCY, ++ process->host_handle); ++ command->args = *args; ++ if (alloc_size) { ++ ret = copy_from_user(&command[1], args->allocations, ++ alloc_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ result, msg.res_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result->status); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(args->residency_status, &result[1], ++ result_allocation_size); ++ if (ret) { ++ DXG_ERR("failed to copy residency status"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ free_message((struct dxgvmbusmsg *)&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_get_device_state(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_getdevicestate *args, +@@ -2461,6 +2534,233 @@ int dxgvmb_send_unlock2(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_update_alloc_property(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dddi_updateallocproperty *args, ++ struct d3dddi_updateallocproperty *__user ++ inargs) ++{ ++ int ret; ++ int ret1; ++ struct dxgkvmb_command_updateallocationproperty *command; ++ struct dxgkvmb_command_updateallocationproperty_return result = { }; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_UPDATEALLOCATIONPROPERTY, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ ++ if (ret < 0) ++ goto cleanup; ++ ret = ntstatus2int(result.status); ++ /* STATUS_PENING is a success code > 0 */ ++ if (ret == STATUS_PENDING) { ++ ret1 = copy_to_user(&inargs->paging_fence_value, ++ &result.paging_fence_value, ++ sizeof(u64)); ++ if (ret1) { ++ DXG_ERR("failed to copy paging fence"); ++ ret = -EINVAL; ++ } ++ } ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_setallocationpriority *args) ++{ ++ u32 cmd_size = sizeof(struct dxgkvmb_command_setallocationpriority); ++ u32 alloc_size = 0; ++ u32 priority_size = 0; ++ struct dxgkvmb_command_setallocationpriority *command; ++ int ret; ++ struct d3dkmthandle *allocations; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ if (args->allocation_count > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EINVAL; ++ 
goto cleanup; ++ } ++ if (args->resource.v) { ++ priority_size = sizeof(u32); ++ if (args->allocation_count != 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } else { ++ if (args->allocation_count == 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ alloc_size = args->allocation_count * ++ sizeof(struct d3dkmthandle); ++ cmd_size += alloc_size; ++ priority_size = sizeof(u32) * args->allocation_count; ++ } ++ cmd_size += priority_size; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_SETALLOCATIONPRIORITY, ++ process->host_handle); ++ command->device = args->device; ++ command->allocation_count = args->allocation_count; ++ command->resource = args->resource; ++ allocations = (struct d3dkmthandle *) &command[1]; ++ ret = copy_from_user(allocations, args->allocation_list, ++ alloc_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc handle"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_from_user((u8 *) allocations + alloc_size, ++ args->priorities, priority_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc priority"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_getallocationpriority *args) ++{ ++ u32 cmd_size = sizeof(struct dxgkvmb_command_getallocationpriority); ++ u32 result_size; ++ u32 alloc_size = 0; ++ u32 priority_size = 0; ++ struct dxgkvmb_command_getallocationpriority *command; ++ struct dxgkvmb_command_getallocationpriority_return *result; ++ int ret; ++ struct d3dkmthandle *allocations; ++ struct dxgvmbusmsgres msg = {.hdr = NULL}; ++ ++ if (args->allocation_count > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ if (args->resource.v) { ++ priority_size = sizeof(u32); ++ if (args->allocation_count != 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } else { ++ if (args->allocation_count == 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ alloc_size = args->allocation_count * ++ sizeof(struct d3dkmthandle); ++ cmd_size += alloc_size; ++ priority_size = sizeof(u32) * args->allocation_count; ++ } ++ result_size = sizeof(*result) + priority_size; ++ ++ ret = init_message_res(&msg, adapter, process, cmd_size, result_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ result = msg.res; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_GETALLOCATIONPRIORITY, ++ process->host_handle); ++ command->device = args->device; ++ command->allocation_count = args->allocation_count; ++ command->resource = args->resource; ++ allocations = (struct d3dkmthandle *) &command[1]; ++ ret = copy_from_user(allocations, args->allocation_list, ++ alloc_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, ++ msg.size + msg.res_size, ++ result, msg.res_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result->status); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(args->priorities, ++ (u8 *) result + sizeof(*result), ++ priority_size); ++ if (ret) { ++ DXG_ERR("failed to copy priorities"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ free_message((struct dxgvmbusmsg *)&msg, process); ++ 
if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_change_vidmem_reservation(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle other_process, ++ struct ++ d3dkmt_changevideomemoryreservation ++ *args) ++{ ++ struct dxgkvmb_command_changevideomemoryreservation *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CHANGEVIDEOMEMORYRESERVATION, ++ process->host_handle); ++ command->args = *args; ++ command->args.process = other_process.v; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_createhwqueue *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 447bb1ba391b..dbb01b9ab066 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -308,6 +308,29 @@ struct dxgkvmb_command_queryadapterinfo_return { + u8 private_data[1]; + }; + ++/* Returns ntstatus */ ++struct dxgkvmb_command_setallocationpriority { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++ u32 allocation_count; ++ /* struct d3dkmthandle allocations[allocation_count or 0]; */ ++ /* u32 priorities[allocation_count or 1]; */ ++}; ++ ++struct dxgkvmb_command_getallocationpriority { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++ u32 allocation_count; ++ /* struct d3dkmthandle allocations[allocation_count or 0]; */ ++}; ++ ++struct dxgkvmb_command_getallocationpriority_return { ++ struct ntstatus status; ++ /* u32 priorities[allocation_count or 1]; */ ++}; ++ + struct dxgkvmb_command_createdevice { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_createdeviceflags flags; +@@ -589,6 +612,22 @@ struct dxgkvmb_command_unlock2 { + bool use_legacy_unlock; + }; + ++struct dxgkvmb_command_updateallocationproperty { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dddi_updateallocproperty args; ++}; ++ ++struct dxgkvmb_command_updateallocationproperty_return { ++ u64 paging_fence_value; ++ struct ntstatus status; ++}; ++ ++/* Returns ntstatus */ ++struct dxgkvmb_command_changevideomemoryreservation { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_changevideomemoryreservation args; ++}; ++ + /* Returns the same structure */ + struct dxgkvmb_command_createhwqueue { + struct dxgkvmb_command_vgpu_to_host hdr; +@@ -609,6 +648,17 @@ struct dxgkvmb_command_destroyhwqueue { + struct d3dkmthandle hwqueue; + }; + ++struct dxgkvmb_command_queryallocationresidency { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_queryallocationresidency args; ++ /* struct d3dkmthandle allocations[0 or number of allocations] */ ++}; ++ ++struct dxgkvmb_command_queryallocationresidency_return { ++ struct ntstatus status; ++ /* d3dkmt_allocationresidencystatus[NumAllocations] */ ++}; ++ + struct dxgkvmb_command_getdevicestate { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_getdevicestate args; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 37e218443310..b626e2518ff2 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c 
++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3214,7 +3214,7 @@ dxgkio_lock2(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + + success: +- DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); + return ret; + } + +@@ -3294,7 +3294,209 @@ dxgkio_unlock2(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + + success: +- DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_update_alloc_property(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dddi_updateallocproperty args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.paging_queue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_update_alloc_property(process, adapter, ++ &args, inargs); ++ ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_query_alloc_residency(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_queryallocationresidency args; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if ((args.allocation_count == 0) == (args.resource.v == 0)) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ret = dxgvmb_send_query_alloc_residency(process, adapter, &args); ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_set_allocation_priority(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_setallocationpriority args; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ret = dxgvmb_send_set_allocation_priority(process, adapter, &args); ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); 
++ return ret; ++} ++ ++static int ++dxgkio_get_allocation_priority(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_getallocationpriority args; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ret = dxgvmb_send_get_allocation_priority(process, adapter, &args); ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_changevideomemoryreservation args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ bool adapter_locked = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.process != 0) { ++ DXG_ERR("setting memory reservation for other process"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ adapter_locked = true; ++ args.adapter.v = 0; ++ ret = dxgvmb_send_change_vidmem_reservation(process, adapter, ++ zerohandle, &args); ++ ++cleanup: ++ ++ if (adapter_locked) ++ dxgadapter_release_lock_shared(adapter); ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); + return ret; + } + +@@ -4050,7 +4252,8 @@ static struct ioctl_desc ioctls[] = { + /* 0x13 */ {dxgkio_destroy_allocation, LX_DXDESTROYALLOCATION2}, + /* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2}, + /* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER}, +-/* 0x16 */ {}, ++/* 0x16 */ {dxgkio_change_vidmem_reservation, ++ LX_DXCHANGEVIDEOMEMORYRESERVATION}, + /* 0x17 */ {}, + /* 0x18 */ {dxgkio_create_hwqueue, LX_DXCREATEHWQUEUE}, + /* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE}, +@@ -4070,11 +4273,11 @@ static struct ioctl_desc ioctls[] = { + /* 0x27 */ {}, + /* 0x28 */ {}, + /* 0x29 */ {}, +-/* 0x2a */ {}, ++/* 0x2a */ {dxgkio_query_alloc_residency, LX_DXQUERYALLOCATIONRESIDENCY}, + /* 0x2b */ {}, + /* 0x2c */ {}, + /* 0x2d */ {}, +-/* 0x2e */ {}, ++/* 0x2e */ {dxgkio_set_allocation_priority, LX_DXSETALLOCATIONPRIORITY}, + /* 0x2f */ {}, + /* 0x30 */ {}, + /* 0x31 */ {dxgkio_signal_sync_object_cpu, +@@ -4089,13 +4292,13 @@ static struct ioctl_desc ioctls[] = { + /* 0x36 */ {dxgkio_submit_wait_to_hwqueue, + LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE}, + /* 0x37 */ {dxgkio_unlock2, LX_DXUNLOCK2}, +-/* 0x38 */ {}, ++/* 0x38 */ {dxgkio_update_alloc_property, LX_DXUPDATEALLOCPROPERTY}, + /* 0x39 */ {}, + /* 0x3a */ {dxgkio_wait_sync_object_cpu, + LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU}, + /* 0x3b */ {dxgkio_wait_sync_object_gpu, + LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU}, +-/* 0x3c */ {}, ++/* 0x3c */ {dxgkio_get_allocation_priority, 
LX_DXGETALLOCATIONPRIORITY}, + /* 0x3d */ {}, + /* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3}, + /* 0x3f */ {dxgkio_share_objects, LX_DXSHAREOBJECTS}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index b498f09e694d..af381101fd90 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -668,6 +668,63 @@ struct d3dkmt_submitcommandtohwqueue { + #endif + }; + ++struct d3dkmt_setallocationpriority { ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++#ifdef __KERNEL__ ++ const struct d3dkmthandle *allocation_list; ++#else ++ __u64 allocation_list; ++#endif ++ __u32 allocation_count; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ const __u32 *priorities; ++#else ++ __u64 priorities; ++#endif ++}; ++ ++struct d3dkmt_getallocationpriority { ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++#ifdef __KERNEL__ ++ const struct d3dkmthandle *allocation_list; ++#else ++ __u64 allocation_list; ++#endif ++ __u32 allocation_count; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ __u32 *priorities; ++#else ++ __u64 priorities; ++#endif ++}; ++ ++enum d3dkmt_allocationresidencystatus { ++ _D3DKMT_ALLOCATIONRESIDENCYSTATUS_RESIDENTINGPUMEMORY = 1, ++ _D3DKMT_ALLOCATIONRESIDENCYSTATUS_RESIDENTINSHAREDMEMORY = 2, ++ _D3DKMT_ALLOCATIONRESIDENCYSTATUS_NOTRESIDENT = 3, ++}; ++ ++struct d3dkmt_queryallocationresidency { ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *allocations; ++#else ++ __u64 allocations; ++#endif ++ __u32 allocation_count; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ enum d3dkmt_allocationresidencystatus *residency_status; ++#else ++ __u64 residency_status; ++#endif ++}; ++ + struct d3dddicb_lock2flags { + union { + struct { +@@ -835,6 +892,11 @@ struct d3dkmt_destroyallocation2 { + struct d3dddicb_destroyallocation2flags flags; + }; + ++enum d3dkmt_memory_segment_group { ++ _D3DKMT_MEMORY_SEGMENT_GROUP_LOCAL = 0, ++ _D3DKMT_MEMORY_SEGMENT_GROUP_NON_LOCAL = 1 ++}; ++ + struct d3dkmt_adaptertype { + union { + struct { +@@ -886,6 +948,61 @@ struct d3dddi_openallocationinfo2 { + __u64 reserved[6]; + }; + ++struct d3dddi_updateallocproperty_flags { ++ union { ++ struct { ++ __u32 accessed_physically:1; ++ __u32 reserved:31; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dddi_segmentpreference { ++ union { ++ struct { ++ __u32 segment_id0:5; ++ __u32 direction0:1; ++ __u32 segment_id1:5; ++ __u32 direction1:1; ++ __u32 segment_id2:5; ++ __u32 direction2:1; ++ __u32 segment_id3:5; ++ __u32 direction3:1; ++ __u32 segment_id4:5; ++ __u32 direction4:1; ++ __u32 reserved:2; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dddi_updateallocproperty { ++ struct d3dkmthandle paging_queue; ++ struct d3dkmthandle allocation; ++ __u32 supported_segment_set; ++ struct d3dddi_segmentpreference preferred_segment; ++ struct d3dddi_updateallocproperty_flags flags; ++ __u64 paging_fence_value; ++ union { ++ struct { ++ __u32 set_accessed_physically:1; ++ __u32 set_supported_segmentSet:1; ++ __u32 set_preferred_segment:1; ++ __u32 reserved:29; ++ }; ++ __u32 property_mask_value; ++ }; ++}; ++ ++struct d3dkmt_changevideomemoryreservation { ++ __u64 process; ++ struct d3dkmthandle adapter; ++ enum d3dkmt_memory_segment_group memory_segment_group; ++ __u64 reservation; ++ __u32 physical_adapter_index; ++}; ++ + struct d3dkmt_createhwqueue { + struct d3dkmthandle context; + struct d3dddi_createhwqueueflags flags; +@@ -1099,6 +1216,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 
0x14, struct d3dkmt_enumadapters2) + #define LX_DXCLOSEADAPTER \ + _IOWR(0x47, 0x15, struct d3dkmt_closeadapter) ++#define LX_DXCHANGEVIDEOMEMORYRESERVATION \ ++ _IOWR(0x47, 0x16, struct d3dkmt_changevideomemoryreservation) + #define LX_DXCREATEHWQUEUE \ + _IOWR(0x47, 0x18, struct d3dkmt_createhwqueue) + #define LX_DXDESTROYHWQUEUE \ +@@ -1111,6 +1230,10 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) + #define LX_DXLOCK2 \ + _IOWR(0x47, 0x25, struct d3dkmt_lock2) ++#define LX_DXQUERYALLOCATIONRESIDENCY \ ++ _IOWR(0x47, 0x2a, struct d3dkmt_queryallocationresidency) ++#define LX_DXSETALLOCATIONPRIORITY \ ++ _IOWR(0x47, 0x2e, struct d3dkmt_setallocationpriority) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x31, struct d3dkmt_signalsynchronizationobjectfromcpu) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU \ +@@ -1125,10 +1248,14 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x36, struct d3dkmt_submitwaitforsyncobjectstohwqueue) + #define LX_DXUNLOCK2 \ + _IOWR(0x47, 0x37, struct d3dkmt_unlock2) ++#define LX_DXUPDATEALLOCPROPERTY \ ++ _IOWR(0x47, 0x38, struct d3dddi_updateallocproperty) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \ + _IOWR(0x47, 0x3b, struct d3dkmt_waitforsynchronizationobjectfromgpu) ++#define LX_DXGETALLOCATIONPRIORITY \ ++ _IOWR(0x47, 0x3c, struct d3dkmt_getallocationpriority) + #define LX_DXENUMADAPTERS3 \ + _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) + #define LX_DXSHAREOBJECTS \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1687-drivers-hv-dxgkrnl-Flush-heap-transitions.patch b/patch/kernel/archive/wsl2-arm64-6.1/1687-drivers-hv-dxgkrnl-Flush-heap-transitions.patch new file mode 100644 index 000000000000..61a4f862b297 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1687-drivers-hv-dxgkrnl-Flush-heap-transitions.patch @@ -0,0 +1,194 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 18 Jan 2022 17:25:37 -0800 +Subject: drivers: hv: dxgkrnl: Flush heap transitions + +Implement the ioctl to flush heap transitions +(LX_DXFLUSHHEAPTRANSITIONS). + +The ioctl is used to ensure that the video memory manager on the host +flushes all internal operations. 
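
[Editorial illustration, not part of the patch: a minimal user-space sketch of how a caller might exercise this ioctl against the /dev/dxg device node exposed by dxgkrnl. The header path and the layout of struct d3dkmt_flushheaptransitions (assumed here to carry only the adapter handle) are assumptions; the authoritative definitions are the ones added to include/uapi/misc/d3dkmthk.h by this series.]

/* Illustrative sketch only. Assumes the uapi header is installed as
 * <misc/d3dkmthk.h> and that d3dkmt_flushheaptransitions contains just
 * the adapter handle (an assumption, not confirmed by this excerpt). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <misc/d3dkmthk.h>

static int flush_heap_transitions(struct d3dkmthandle adapter)
{
	struct d3dkmt_flushheaptransitions args = { .adapter = adapter };
	int fd = open("/dev/dxg", O_RDWR);	/* dxgkrnl device node in the WSL2 guest */
	int ret;

	if (fd < 0)
		return -1;
	/* Ask the host video memory manager to flush its pending operations. */
	ret = ioctl(fd, LX_DXFLUSHHEAPTRANSITIONS, &args);
	if (ret < 0)
		perror("LX_DXFLUSHHEAPTRANSITIONS");
	close(fd);
	return ret;
}

[The call is synchronous: per the commit message above, it returns once the host-side flush has completed, so callers can use it as a barrier before operations that depend on prior heap transitions.]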
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 2 +- + drivers/hv/dxgkrnl/dxgkrnl.h | 3 + + drivers/hv/dxgkrnl/dxgvmbus.c | 23 +++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 5 + + drivers/hv/dxgkrnl/ioctl.c | 49 +++++++++- + include/uapi/misc/d3dkmthk.h | 6 ++ + 6 files changed, 86 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 23f00db7637e..6f763e326a65 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -942,7 +942,7 @@ else + if (alloc->priv_drv_data) + vfree(alloc->priv_drv_data); + if (alloc->cpu_address_mapped) +- pr_err("Alloc IO space is mapped: %p", alloc); ++ DXG_ERR("Alloc IO space is mapped: %p", alloc); + kfree(alloc); + } + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 7fefe4617488..ced9dd294f5f 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -882,6 +882,9 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_submitcommandtohwqueue *a); ++int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_flushheaptransitions *arg); + int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, + struct dxgvmbuschannel *channel, + struct d3dkmt_opensyncobjectfromnthandle2 +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index dd2c97fee27b..928fad5f133b 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1829,6 +1829,29 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_flushheaptransitions *args) ++{ ++ struct dxgkvmb_command_flushheaptransitions *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_FLUSHHEAPTRANSITIONS, ++ process->host_handle); ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryallocationresidency +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index dbb01b9ab066..d232eb234e2c 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -367,6 +367,11 @@ struct dxgkvmb_command_submitcommandtohwqueue { + /* PrivateDriverData */ + }; + ++/* Returns ntstatus */ ++struct dxgkvmb_command_flushheaptransitions { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++}; ++ + struct dxgkvmb_command_createallocation_allocinfo { + u32 flags; + u32 priv_drv_data_size; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index b626e2518ff2..8b7d00e4c881 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3500,6 +3500,53 @@ dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs + return ret; + } + ++static int ++dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user 
inargs) ++{ ++ struct d3dkmt_flushheaptransitions args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ bool adapter_locked = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ adapter_locked = true; ++ ++ args.adapter = adapter->host_handle; ++ ret = dxgvmb_send_flush_heap_transitions(process, adapter, &args); ++ if (ret < 0) ++ goto cleanup; ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (adapter_locked) ++ dxgadapter_release_lock_shared(adapter); ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ return ret; ++} ++ + static int + dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) + { +@@ -4262,7 +4309,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x1c */ {dxgkio_destroy_paging_queue, LX_DXDESTROYPAGINGQUEUE}, + /* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT}, + /* 0x1e */ {}, +-/* 0x1f */ {}, ++/* 0x1f */ {dxgkio_flush_heap_transitions, LX_DXFLUSHHEAPTRANSITIONS}, + /* 0x20 */ {}, + /* 0x21 */ {}, + /* 0x22 */ {}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index af381101fd90..873feb951129 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -936,6 +936,10 @@ struct d3dkmt_queryadapterinfo { + __u32 private_data_size; + }; + ++struct d3dkmt_flushheaptransitions { ++ struct d3dkmthandle adapter; ++}; ++ + struct d3dddi_openallocationinfo2 { + struct d3dkmthandle allocation; + #ifdef __KERNEL__ +@@ -1228,6 +1232,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ + _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) ++#define LX_DXFLUSHHEAPTRANSITIONS \ ++ _IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions) + #define LX_DXLOCK2 \ + _IOWR(0x47, 0x25, struct d3dkmt_lock2) + #define LX_DXQUERYALLOCATIONRESIDENCY \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1688-drivers-hv-dxgkrnl-Query-video-memory-information.patch b/patch/kernel/archive/wsl2-arm64-6.1/1688-drivers-hv-dxgkrnl-Query-video-memory-information.patch new file mode 100644 index 000000000000..462c1f6b3f2f --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1688-drivers-hv-dxgkrnl-Query-video-memory-information.patch @@ -0,0 +1,237 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 8 Feb 2022 18:34:07 -0800 +Subject: drivers: hv: dxgkrnl: Query video memory information + +Implement the ioctl to query video memory information from the host +(LX_DXQUERYVIDEOMEMORYINFO). 
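+
+For illustration only (not part of the original Microsoft patch), a
+minimal user-mode sketch of querying the local segment group through
+/dev/dxg; the uapi header path, dxg_fd and the adapter handle are assumed,
+and the process field must stay 0 (the calling process):
+
+/* Hypothetical usage sketch, not part of this patch. */
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <misc/d3dkmthk.h>      /* assumes the installed uapi header */
+
+static int query_local_vidmem(int dxg_fd, struct d3dkmthandle adapter)
+{
+        struct d3dkmt_queryvideomemoryinfo args = {
+                .process = 0,           /* 0 = current process, required */
+                .adapter = adapter,
+                .memory_segment_group = _D3DKMT_MEMORY_SEGMENT_GROUP_LOCAL,
+                .physical_adapter_index = 0,
+        };
+        int ret = ioctl(dxg_fd, LX_DXQUERYVIDEOMEMORYINFO, &args);
+
+        if (ret == 0)
+                printf("budget=%llu used=%llu\n",
+                       (unsigned long long)args.budget,
+                       (unsigned long long)args.current_usage);
+        return ret;
+}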
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 5 + + drivers/hv/dxgkrnl/dxgvmbus.c | 64 ++++++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 14 ++ + drivers/hv/dxgkrnl/ioctl.c | 50 +++++++- + include/uapi/misc/d3dkmthk.h | 13 ++ + 5 files changed, 145 insertions(+), 1 deletion(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index ced9dd294f5f..b6a7288a4177 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -894,6 +894,11 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryallocationresidency + *args); ++int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryvideomemoryinfo *args, ++ struct d3dkmt_queryvideomemoryinfo ++ *__user iargs); + int dxgvmb_send_get_device_state(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_getdevicestate *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 928fad5f133b..48ff49456057 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1925,6 +1925,70 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryvideomemoryinfo *args, ++ struct d3dkmt_queryvideomemoryinfo *__user ++ output) ++{ ++ int ret; ++ struct dxgkvmb_command_queryvideomemoryinfo *command; ++ struct dxgkvmb_command_queryvideomemoryinfo_return result = { }; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ command_vgpu_to_host_init2(&command->hdr, ++ dxgk_vmbcommand_queryvideomemoryinfo, ++ process->host_handle); ++ command->adapter = args->adapter; ++ command->memory_segment_group = args->memory_segment_group; ++ command->physical_adapter_index = args->physical_adapter_index; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(&output->budget, &result.budget, ++ sizeof(output->budget)); ++ if (ret) { ++ pr_err("%s failed to copy budget", __func__); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_to_user(&output->current_usage, &result.current_usage, ++ sizeof(output->current_usage)); ++ if (ret) { ++ pr_err("%s failed to copy current usage", __func__); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_to_user(&output->current_reservation, ++ &result.current_reservation, ++ sizeof(output->current_reservation)); ++ if (ret) { ++ pr_err("%s failed to copy reservation", __func__); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_to_user(&output->available_for_reservation, ++ &result.available_for_reservation, ++ sizeof(output->available_for_reservation)); ++ if (ret) { ++ pr_err("%s failed to copy avail reservation", __func__); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ dev_dbg(DXGDEV, "err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_get_device_state(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_getdevicestate *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index d232eb234e2c..a1549983d50f 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ 
b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -664,6 +664,20 @@ struct dxgkvmb_command_queryallocationresidency_return { + /* d3dkmt_allocationresidencystatus[NumAllocations] */ + }; + ++struct dxgkvmb_command_queryvideomemoryinfo { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle adapter; ++ enum d3dkmt_memory_segment_group memory_segment_group; ++ u32 physical_adapter_index; ++}; ++ ++struct dxgkvmb_command_queryvideomemoryinfo_return { ++ u64 budget; ++ u64 current_usage; ++ u64 current_reservation; ++ u64 available_for_reservation; ++}; ++ + struct dxgkvmb_command_getdevicestate { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_getdevicestate args; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 8b7d00e4c881..e692b127e219 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3547,6 +3547,54 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_query_vidmem_info(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_queryvideomemoryinfo args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ bool adapter_locked = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.process != 0) { ++ DXG_ERR("query vidmem info from another process"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ adapter_locked = true; ++ ++ args.adapter = adapter->host_handle; ++ ret = dxgvmb_send_query_vidmem_info(process, adapter, &args, inargs); ++ ++cleanup: ++ ++ if (adapter_locked) ++ dxgadapter_release_lock_shared(adapter); ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ if (ret < 0) ++ DXG_ERR("failed: %x", ret); ++ return ret; ++} ++ + static int + dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) + { +@@ -4287,7 +4335,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x07 */ {dxgkio_create_paging_queue, LX_DXCREATEPAGINGQUEUE}, + /* 0x08 */ {}, + /* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO}, +-/* 0x0a */ {}, ++/* 0x0a */ {dxgkio_query_vidmem_info, LX_DXQUERYVIDEOMEMORYINFO}, + /* 0x0b */ {}, + /* 0x0c */ {}, + /* 0x0d */ {}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 873feb951129..b7d8b1d91cfc 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -897,6 +897,17 @@ enum d3dkmt_memory_segment_group { + _D3DKMT_MEMORY_SEGMENT_GROUP_NON_LOCAL = 1 + }; + ++struct d3dkmt_queryvideomemoryinfo { ++ __u64 process; ++ struct d3dkmthandle adapter; ++ enum d3dkmt_memory_segment_group memory_segment_group; ++ __u64 budget; ++ __u64 current_usage; ++ __u64 current_reservation; ++ __u64 available_for_reservation; ++ __u32 physical_adapter_index; ++}; ++ + struct d3dkmt_adaptertype { + union { + struct { +@@ -1204,6 +1215,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) ++#define LX_DXQUERYVIDEOMEMORYINFO \ ++ _IOWR(0x47, 0x0a, struct d3dkmt_queryvideomemoryinfo) + #define LX_DXGETDEVICESTATE \ + _IOWR(0x47, 0x0e, struct d3dkmt_getdevicestate) + 
#define LX_DXSUBMITCOMMAND \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1689-drivers-hv-dxgkrnl-The-escape-ioctl.patch b/patch/kernel/archive/wsl2-arm64-6.1/1689-drivers-hv-dxgkrnl-The-escape-ioctl.patch new file mode 100644 index 000000000000..30de89e8310e --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1689-drivers-hv-dxgkrnl-The-escape-ioctl.patch @@ -0,0 +1,305 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 18 Jan 2022 15:50:30 -0800 +Subject: drivers: hv: dxgkrnl: The escape ioctl + +Implement the escape ioctl (LX_DXESCAPE). + +This ioctl is used to send/receive private data between user mode +compute device driver (guest) and kernel mode compute device +driver (host). It allows the user mode driver to extend the virtual +compute device API. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 3 + + drivers/hv/dxgkrnl/dxgvmbus.c | 75 +++++++++- + drivers/hv/dxgkrnl/dxgvmbus.h | 12 ++ + drivers/hv/dxgkrnl/ioctl.c | 42 +++++- + include/uapi/misc/d3dkmthk.h | 41 +++++ + 5 files changed, 167 insertions(+), 6 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index b6a7288a4177..dafc721ed6cf 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -894,6 +894,9 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryallocationresidency + *args); ++int dxgvmb_send_escape(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_escape *args); + int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryvideomemoryinfo *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 48ff49456057..8bdd49bc7aa6 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1925,6 +1925,70 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_escape(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_escape *args) ++{ ++ int ret; ++ struct dxgkvmb_command_escape *command = NULL; ++ u32 cmd_size = sizeof(*command); ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ if (args->priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ cmd_size = cmd_size - sizeof(args->priv_drv_data[0]) + ++ args->priv_drv_data_size; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_ESCAPE, ++ process->host_handle); ++ command->adapter = args->adapter; ++ command->device = args->device; ++ command->type = args->type; ++ command->flags = args->flags; ++ command->priv_drv_data_size = args->priv_drv_data_size; ++ command->context = args->context; ++ if (args->priv_drv_data_size) { ++ ret = copy_from_user(command->priv_drv_data, ++ args->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy priv data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ command->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ if (args->priv_drv_data_size) { ++ ret = copy_to_user(args->priv_drv_data, ++ command->priv_drv_data, ++ args->priv_drv_data_size); 
++ if (ret) { ++ DXG_ERR("failed to copy priv data"); ++ ret = -EINVAL; ++ } ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryvideomemoryinfo *args, +@@ -1955,14 +2019,14 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + ret = copy_to_user(&output->budget, &result.budget, + sizeof(output->budget)); + if (ret) { +- pr_err("%s failed to copy budget", __func__); ++ DXG_ERR("failed to copy budget"); + ret = -EINVAL; + goto cleanup; + } + ret = copy_to_user(&output->current_usage, &result.current_usage, + sizeof(output->current_usage)); + if (ret) { +- pr_err("%s failed to copy current usage", __func__); ++ DXG_ERR("failed to copy current usage"); + ret = -EINVAL; + goto cleanup; + } +@@ -1970,7 +2034,7 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + &result.current_reservation, + sizeof(output->current_reservation)); + if (ret) { +- pr_err("%s failed to copy reservation", __func__); ++ DXG_ERR("failed to copy reservation"); + ret = -EINVAL; + goto cleanup; + } +@@ -1978,14 +2042,14 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + &result.available_for_reservation, + sizeof(output->available_for_reservation)); + if (ret) { +- pr_err("%s failed to copy avail reservation", __func__); ++ DXG_ERR("failed to copy avail reservation"); + ret = -EINVAL; + } + + cleanup: + free_message(&msg, process); + if (ret) +- dev_dbg(DXGDEV, "err: %d", ret); ++ DXG_TRACE("err: %d", ret); + return ret; + } + +@@ -3152,3 +3216,4 @@ int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, + DXG_TRACE("err: %d", ret); + return ret; + } ++ +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index a1549983d50f..e1c2ed7b1580 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -664,6 +664,18 @@ struct dxgkvmb_command_queryallocationresidency_return { + /* d3dkmt_allocationresidencystatus[NumAllocations] */ + }; + ++/* Returns only private data */ ++struct dxgkvmb_command_escape { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle adapter; ++ struct d3dkmthandle device; ++ enum d3dkmt_escapetype type; ++ struct d3dddi_escapeflags flags; ++ u32 priv_drv_data_size; ++ struct d3dkmthandle context; ++ u8 priv_drv_data[1]; ++}; ++ + struct dxgkvmb_command_queryvideomemoryinfo { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmthandle adapter; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index e692b127e219..78de76abce2d 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3547,6 +3547,46 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_escape(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_escape args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ bool adapter_locked = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ adapter_locked = true; ++ ++ args.adapter = adapter->host_handle; ++ ret = dxgvmb_send_escape(process, adapter, &args); ++ 
++cleanup: ++ ++ if (adapter_locked) ++ dxgadapter_release_lock_shared(adapter); ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_query_vidmem_info(struct dxgprocess *process, void *__user inargs) + { +@@ -4338,7 +4378,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x0a */ {dxgkio_query_vidmem_info, LX_DXQUERYVIDEOMEMORYINFO}, + /* 0x0b */ {}, + /* 0x0c */ {}, +-/* 0x0d */ {}, ++/* 0x0d */ {dxgkio_escape, LX_DXESCAPE}, + /* 0x0e */ {dxgkio_get_device_state, LX_DXGETDEVICESTATE}, + /* 0x0f */ {dxgkio_submit_command, LX_DXSUBMITCOMMAND}, + /* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index b7d8b1d91cfc..749edf28bd43 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -236,6 +236,45 @@ struct d3dddi_destroypagingqueue { + struct d3dkmthandle paging_queue; + }; + ++enum d3dkmt_escapetype { ++ _D3DKMT_ESCAPE_DRIVERPRIVATE = 0, ++ _D3DKMT_ESCAPE_VIDMM = 1, ++ _D3DKMT_ESCAPE_VIDSCH = 3, ++ _D3DKMT_ESCAPE_DEVICE = 4, ++ _D3DKMT_ESCAPE_DRT_TEST = 8, ++}; ++ ++struct d3dddi_escapeflags { ++ union { ++ struct { ++ __u32 hardware_access:1; ++ __u32 device_status_query:1; ++ __u32 change_frame_latency:1; ++ __u32 no_adapter_synchronization:1; ++ __u32 reserved:1; ++ __u32 virtual_machine_data:1; ++ __u32 driver_known_escape:1; ++ __u32 driver_common_escape:1; ++ __u32 reserved2:24; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_escape { ++ struct d3dkmthandle adapter; ++ struct d3dkmthandle device; ++ enum d3dkmt_escapetype type; ++ struct d3dddi_escapeflags flags; ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ __u32 priv_drv_data_size; ++ struct d3dkmthandle context; ++}; ++ + enum dxgk_render_pipeline_stage { + _DXGK_RENDER_PIPELINE_STAGE_UNKNOWN = 0, + _DXGK_RENDER_PIPELINE_STAGE_INPUT_ASSEMBLER = 1, +@@ -1217,6 +1256,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXQUERYVIDEOMEMORYINFO \ + _IOWR(0x47, 0x0a, struct d3dkmt_queryvideomemoryinfo) ++#define LX_DXESCAPE \ ++ _IOWR(0x47, 0x0d, struct d3dkmt_escape) + #define LX_DXGETDEVICESTATE \ + _IOWR(0x47, 0x0e, struct d3dkmt_getdevicestate) + #define LX_DXSUBMITCOMMAND \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1690-drivers-hv-dxgkrnl-Ioctl-to-put-device-to-error-state.patch b/patch/kernel/archive/wsl2-arm64-6.1/1690-drivers-hv-dxgkrnl-Ioctl-to-put-device-to-error-state.patch new file mode 100644 index 000000000000..faf991f53182 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1690-drivers-hv-dxgkrnl-Ioctl-to-put-device-to-error-state.patch @@ -0,0 +1,180 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 9 Feb 2022 10:57:57 -0800 +Subject: drivers: hv: dxgkrnl: Ioctl to put device to error state + +Implement the ioctl to put the virtual compute device to the error +state (LX_DXMARKDEVICEASERROR). + +This ioctl is used by the user mode driver when it detects an +unrecoverable error condition. + +When a compute device is put to the error state, all subsequent +ioctl calls to the device will fail. 
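+
+For illustration only (not part of the original Microsoft patch), a
+minimal user-mode sketch; dxg_fd and the device handle (from an earlier
+LX_DXCREATEDEVICE call) are assumed, and the generic reason code is used:
+
+/* Hypothetical usage sketch, not part of this patch. */
+#include <sys/ioctl.h>
+#include <misc/d3dkmthk.h>      /* assumes the installed uapi header */
+
+static int mark_device_as_error(int dxg_fd, struct d3dkmthandle device)
+{
+        struct d3dkmt_markdeviceaserror args = {
+                .device = device,
+                .reason = _D3DKMT_DEVICE_ERROR_REASON_GENERIC,
+        };
+
+        /* After this call, further ioctls on the device are expected to fail. */
+        return ioctl(dxg_fd, LX_DXMARKDEVICEASERROR, &args);
+}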
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 3 + + drivers/hv/dxgkrnl/dxgvmbus.c | 25 ++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 5 ++ + drivers/hv/dxgkrnl/ioctl.c | 38 +++++++++- + include/uapi/misc/d3dkmthk.h | 12 +++ + 5 files changed, 82 insertions(+), 1 deletion(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index dafc721ed6cf..b454c7430f06 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -856,6 +856,9 @@ int dxgvmb_send_update_alloc_property(struct dxgprocess *process, + struct d3dddi_updateallocproperty *args, + struct d3dddi_updateallocproperty *__user + inargs); ++int dxgvmb_send_mark_device_as_error(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_markdeviceaserror *args); + int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_setallocationpriority *a); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 8bdd49bc7aa6..f7264b12a477 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -2730,6 +2730,31 @@ int dxgvmb_send_update_alloc_property(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_mark_device_as_error(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_markdeviceaserror *args) ++{ ++ struct dxgkvmb_command_markdeviceaserror *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_MARKDEVICEASERROR, ++ process->host_handle); ++ command->args = *args; ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_setallocationpriority *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index e1c2ed7b1580..a66e11097bb2 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -627,6 +627,11 @@ struct dxgkvmb_command_updateallocationproperty_return { + struct ntstatus status; + }; + ++struct dxgkvmb_command_markdeviceaserror { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_markdeviceaserror args; ++}; ++ + /* Returns ntstatus */ + struct dxgkvmb_command_changevideomemoryreservation { + struct dxgkvmb_command_vgpu_to_host hdr; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 78de76abce2d..ce4af610ada7 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3341,6 +3341,42 @@ dxgkio_update_alloc_property(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_mark_device_as_error(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_markdeviceaserror args; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; 
++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ device->execution_state = _D3DKMT_DEVICEEXECUTION_RESET; ++ ret = dxgvmb_send_mark_device_as_error(process, adapter, &args); ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_query_alloc_residency(struct dxgprocess *process, void *__user inargs) + { +@@ -4404,7 +4440,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x23 */ {}, + /* 0x24 */ {}, + /* 0x25 */ {dxgkio_lock2, LX_DXLOCK2}, +-/* 0x26 */ {}, ++/* 0x26 */ {dxgkio_mark_device_as_error, LX_DXMARKDEVICEASERROR}, + /* 0x27 */ {}, + /* 0x28 */ {}, + /* 0x29 */ {}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 749edf28bd43..ce5a638a886d 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -790,6 +790,16 @@ struct d3dkmt_unlock2 { + struct d3dkmthandle allocation; + }; + ++enum d3dkmt_device_error_reason { ++ _D3DKMT_DEVICE_ERROR_REASON_GENERIC = 0x80000000, ++ _D3DKMT_DEVICE_ERROR_REASON_DRIVER_ERROR = 0x80000006, ++}; ++ ++struct d3dkmt_markdeviceaserror { ++ struct d3dkmthandle device; ++ enum d3dkmt_device_error_reason reason; ++}; ++ + enum d3dkmt_standardallocationtype { + _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1, + _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2, +@@ -1290,6 +1300,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions) + #define LX_DXLOCK2 \ + _IOWR(0x47, 0x25, struct d3dkmt_lock2) ++#define LX_DXMARKDEVICEASERROR \ ++ _IOWR(0x47, 0x26, struct d3dkmt_markdeviceaserror) + #define LX_DXQUERYALLOCATIONRESIDENCY \ + _IOWR(0x47, 0x2a, struct d3dkmt_queryallocationresidency) + #define LX_DXSETALLOCATIONPRIORITY \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1691-drivers-hv-dxgkrnl-Ioctls-to-query-statistics-and-clock-calibration.patch b/patch/kernel/archive/wsl2-arm64-6.1/1691-drivers-hv-dxgkrnl-Ioctls-to-query-statistics-and-clock-calibration.patch new file mode 100644 index 000000000000..32760960bd12 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1691-drivers-hv-dxgkrnl-Ioctls-to-query-statistics-and-clock-calibration.patch @@ -0,0 +1,423 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 9 Feb 2022 11:01:57 -0800 +Subject: drivers: hv: dxgkrnl: Ioctls to query statistics and clock + calibration + +Implement ioctls to query statistics from the VGPU device +(LX_DXQUERYSTATISTICS) and to query clock calibration +(LX_DXQUERYCLOCKCALIBRATION). + +The LX_DXQUERYSTATISTICS ioctl is used to query various statistics from +the compute device on the host. + +The LX_DXQUERYCLOCKCALIBRATION ioctl queries the compute device clock +and is used for performance monitoring. 
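+
+For illustration only (not part of the original Microsoft patch), a
+minimal user-mode sketch of sampling the GPU/CPU clock pair via
+LX_DXQUERYCLOCKCALIBRATION; dxg_fd and the adapter handle are assumed,
+and node 0 / physical adapter 0 are picked arbitrarily:
+
+/* Hypothetical usage sketch, not part of this patch. */
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <misc/d3dkmthk.h>      /* assumes the installed uapi header */
+
+static int sample_gpu_clock(int dxg_fd, struct d3dkmthandle adapter)
+{
+        struct d3dkmt_queryclockcalibration args = {
+                .adapter = adapter,
+                .node_ordinal = 0,
+                .physical_adapter_index = 0,
+        };
+        int ret = ioctl(dxg_fd, LX_DXQUERYCLOCKCALIBRATION, &args);
+
+        if (ret == 0)
+                printf("gpu=%llu cpu=%llu freq=%llu\n",
+                       (unsigned long long)args.clock_data.gpu_clock_counter,
+                       (unsigned long long)args.clock_data.cpu_clock_counter,
+                       (unsigned long long)args.clock_data.gpu_frequency);
+        return ret;
+}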
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 8 + + drivers/hv/dxgkrnl/dxgvmbus.c | 77 +++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 21 ++ + drivers/hv/dxgkrnl/ioctl.c | 111 +++++++++- + include/uapi/misc/d3dkmthk.h | 62 ++++++ + 5 files changed, 277 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index b454c7430f06..a55873bdd9a6 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -885,6 +885,11 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_submitcommandtohwqueue *a); ++int dxgvmb_send_query_clock_calibration(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryclockcalibration *a, ++ struct d3dkmt_queryclockcalibration ++ *__user inargs); + int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_flushheaptransitions *arg); +@@ -929,6 +934,9 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + void *prive_alloc_data, + u32 *res_priv_data_size, + void *priv_res_data); ++int dxgvmb_send_query_statistics(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_querystatistics *args); + int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel, + void *command, + u32 cmd_size); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index f7264b12a477..9a1864bb4e14 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1829,6 +1829,48 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_query_clock_calibration(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryclockcalibration ++ *args, ++ struct d3dkmt_queryclockcalibration ++ *__user inargs) ++{ ++ struct dxgkvmb_command_queryclockcalibration *command; ++ struct dxgkvmb_command_queryclockcalibration_return result; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_QUERYCLOCKCALIBRATION, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) ++ goto cleanup; ++ ret = copy_to_user(&inargs->clock_data, &result.clock_data, ++ sizeof(result.clock_data)); ++ if (ret) { ++ pr_err("%s failed to copy clock data", __func__); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = ntstatus2int(result.status); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_flushheaptransitions *args) +@@ -3242,3 +3284,38 @@ int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_query_statistics(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_querystatistics *args) ++{ ++ struct dxgkvmb_command_querystatistics *command; ++ struct dxgkvmb_command_querystatistics_return *result; ++ int ret; ++ struct dxgvmbusmsgres msg = {.hdr = NULL}; ++ ++ ret = init_message_res(&msg, adapter, 
process, sizeof(*command), ++ sizeof(*result)); ++ if (ret) ++ goto cleanup; ++ command = msg.msg; ++ result = msg.res; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_QUERYSTATISTICS, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ result, msg.res_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ args->result = result->result; ++ ret = ntstatus2int(result->status); ++ ++cleanup: ++ free_message((struct dxgvmbusmsg *)&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index a66e11097bb2..17768ed0e68d 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -372,6 +372,16 @@ struct dxgkvmb_command_flushheaptransitions { + struct dxgkvmb_command_vgpu_to_host hdr; + }; + ++struct dxgkvmb_command_queryclockcalibration { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_queryclockcalibration args; ++}; ++ ++struct dxgkvmb_command_queryclockcalibration_return { ++ struct ntstatus status; ++ struct dxgk_gpuclockdata clock_data; ++}; ++ + struct dxgkvmb_command_createallocation_allocinfo { + u32 flags; + u32 priv_drv_data_size; +@@ -408,6 +418,17 @@ struct dxgkvmb_command_openresource_return { + /* struct d3dkmthandle allocation[allocation_count]; */ + }; + ++struct dxgkvmb_command_querystatistics { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_querystatistics args; ++}; ++ ++struct dxgkvmb_command_querystatistics_return { ++ struct ntstatus status; ++ u32 reserved; ++ struct d3dkmt_querystatistics_result result; ++}; ++ + struct dxgkvmb_command_getstandardallocprivdata { + struct dxgkvmb_command_vgpu_to_host hdr; + enum d3dkmdt_standardallocationtype alloc_type; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index ce4af610ada7..4babb21f38a9 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -149,6 +149,65 @@ static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, + return ret; + } + ++static int dxgkio_query_statistics(struct dxgprocess *process, ++ void __user *inargs) ++{ ++ struct d3dkmt_querystatistics *args; ++ int ret; ++ struct dxgadapter *entry; ++ struct dxgadapter *adapter = NULL; ++ struct winluid tmp; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ args = vzalloc(sizeof(struct d3dkmt_querystatistics)); ++ if (args == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ ret = copy_from_user(args, inargs, sizeof(*args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED); ++ list_for_each_entry(entry, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (dxgadapter_acquire_lock_shared(entry) == 0) { ++ if (*(u64 *) &entry->luid == ++ *(u64 *) &args->adapter_luid) { ++ adapter = entry; ++ break; ++ } ++ dxgadapter_release_lock_shared(entry); ++ } ++ } ++ dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED); ++ if (adapter) { ++ tmp = args->adapter_luid; ++ args->adapter_luid = adapter->host_adapter_luid; ++ ret = dxgvmb_send_query_statistics(process, adapter, args); ++ if (ret >= 0) { ++ args->adapter_luid = tmp; ++ ret = copy_to_user(inargs, args, sizeof(*args)); ++ if (ret) { ++ DXG_ERR("failed to copy args"); ++ ret = -EINVAL; ++ } ++ } ++ dxgadapter_release_lock_shared(adapter); ++ } ++ ++cleanup: ++ if (args) ++ vfree(args); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), 
ret); ++ return ret; ++} ++ + static int + dxgkp_enum_adapters(struct dxgprocess *process, + union d3dkmt_enumadapters_filter filter, +@@ -3536,6 +3595,54 @@ dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs + return ret; + } + ++static int ++dxgkio_query_clock_calibration(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_queryclockcalibration args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ bool adapter_locked = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ adapter_locked = true; ++ ++ args.adapter = adapter->host_handle; ++ ret = dxgvmb_send_query_clock_calibration(process, adapter, ++ &args, inargs); ++ if (ret < 0) ++ goto cleanup; ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (adapter_locked) ++ dxgadapter_release_lock_shared(adapter); ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ return ret; ++} ++ + static int + dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs) + { +@@ -4470,14 +4577,14 @@ static struct ioctl_desc ioctls[] = { + /* 0x3b */ {dxgkio_wait_sync_object_gpu, + LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU}, + /* 0x3c */ {dxgkio_get_allocation_priority, LX_DXGETALLOCATIONPRIORITY}, +-/* 0x3d */ {}, ++/* 0x3d */ {dxgkio_query_clock_calibration, LX_DXQUERYCLOCKCALIBRATION}, + /* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3}, + /* 0x3f */ {dxgkio_share_objects, LX_DXSHAREOBJECTS}, + /* 0x40 */ {dxgkio_open_sync_object_nt, LX_DXOPENSYNCOBJECTFROMNTHANDLE2}, + /* 0x41 */ {dxgkio_query_resource_info_nt, + LX_DXQUERYRESOURCEINFOFROMNTHANDLE}, + /* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE}, +-/* 0x43 */ {}, ++/* 0x43 */ {dxgkio_query_statistics, LX_DXQUERYSTATISTICS}, + /* 0x44 */ {dxgkio_share_object_with_host, LX_DXSHAREOBJECTWITHHOST}, + /* 0x45 */ {}, + }; +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index ce5a638a886d..ea18242ceb83 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -996,6 +996,34 @@ struct d3dkmt_queryadapterinfo { + __u32 private_data_size; + }; + ++#pragma pack(push, 1) ++ ++struct dxgk_gpuclockdata_flags { ++ union { ++ struct { ++ __u32 context_management_processor:1; ++ __u32 reserved:31; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct dxgk_gpuclockdata { ++ __u64 gpu_frequency; ++ __u64 gpu_clock_counter; ++ __u64 cpu_clock_counter; ++ struct dxgk_gpuclockdata_flags flags; ++} __packed; ++ ++struct d3dkmt_queryclockcalibration { ++ struct d3dkmthandle adapter; ++ __u32 node_ordinal; ++ __u32 physical_adapter_index; ++ struct dxgk_gpuclockdata clock_data; ++}; ++ ++#pragma pack(pop) ++ + struct d3dkmt_flushheaptransitions { + struct d3dkmthandle adapter; + }; +@@ -1238,6 +1266,36 @@ struct d3dkmt_enumadapters3 { + #endif + }; + ++enum d3dkmt_querystatistics_type { ++ _D3DKMT_QUERYSTATISTICS_ADAPTER = 0, ++ _D3DKMT_QUERYSTATISTICS_PROCESS = 1, ++ _D3DKMT_QUERYSTATISTICS_PROCESS_ADAPTER = 2, ++ _D3DKMT_QUERYSTATISTICS_SEGMENT = 3, ++ _D3DKMT_QUERYSTATISTICS_PROCESS_SEGMENT = 4, ++ 
_D3DKMT_QUERYSTATISTICS_NODE = 5, ++ _D3DKMT_QUERYSTATISTICS_PROCESS_NODE = 6, ++ _D3DKMT_QUERYSTATISTICS_VIDPNSOURCE = 7, ++ _D3DKMT_QUERYSTATISTICS_PROCESS_VIDPNSOURCE = 8, ++ _D3DKMT_QUERYSTATISTICS_PROCESS_SEGMENT_GROUP = 9, ++ _D3DKMT_QUERYSTATISTICS_PHYSICAL_ADAPTER = 10, ++}; ++ ++struct d3dkmt_querystatistics_result { ++ char size[0x308]; ++}; ++ ++struct d3dkmt_querystatistics { ++ union { ++ struct { ++ enum d3dkmt_querystatistics_type type; ++ struct winluid adapter_luid; ++ __u64 process; ++ struct d3dkmt_querystatistics_result result; ++ }; ++ char size[0x328]; ++ }; ++}; ++ + struct d3dkmt_shareobjectwithhost { + struct d3dkmthandle device_handle; + struct d3dkmthandle object_handle; +@@ -1328,6 +1386,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x3b, struct d3dkmt_waitforsynchronizationobjectfromgpu) + #define LX_DXGETALLOCATIONPRIORITY \ + _IOWR(0x47, 0x3c, struct d3dkmt_getallocationpriority) ++#define LX_DXQUERYCLOCKCALIBRATION \ ++ _IOWR(0x47, 0x3d, struct d3dkmt_queryclockcalibration) + #define LX_DXENUMADAPTERS3 \ + _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) + #define LX_DXSHAREOBJECTS \ +@@ -1338,6 +1398,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x41, struct d3dkmt_queryresourceinfofromnthandle) + #define LX_DXOPENRESOURCEFROMNTHANDLE \ + _IOWR(0x47, 0x42, struct d3dkmt_openresourcefromnthandle) ++#define LX_DXQUERYSTATISTICS \ ++ _IOWR(0x47, 0x43, struct d3dkmt_querystatistics) + #define LX_DXSHAREOBJECTWITHHOST \ + _IOWR(0x47, 0x44, struct d3dkmt_shareobjectwithhost) + +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1692-drivers-hv-dxgkrnl-Offer-and-reclaim-allocations.patch b/patch/kernel/archive/wsl2-arm64-6.1/1692-drivers-hv-dxgkrnl-Offer-and-reclaim-allocations.patch new file mode 100644 index 000000000000..b4a0af9e1ada --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1692-drivers-hv-dxgkrnl-Offer-and-reclaim-allocations.patch @@ -0,0 +1,466 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 18 Jan 2022 15:01:55 -0800 +Subject: drivers: hv: dxgkrnl: Offer and reclaim allocations + +Implement ioctls to offer and reclaim compute device allocations: + - LX_DXOFFERALLOCATIONS, + - LX_DXRECLAIMALLOCATIONS2 + +When a user mode driver (UMD) does not need to access an allocation, +it can "offer" it by issuing the LX_DXOFFERALLOCATIONS ioctl. This +means that the allocation is not in use and its local device memory +could be evicted. The freed space could be given to another allocation. +When the allocation is again needed, the UMD can attempt to"reclaim" +the allocation by issuing the LX_DXRECLAIMALLOCATIONS2 ioctl. If the +allocation is still not evicted, the reclaim operation succeeds and no +other action is required. If the reclaim operation fails, the caller +must restore the content of the allocation before it can be used by +the device. 
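+
+For illustration only (not part of the original Microsoft patch), a
+minimal user-mode sketch of the offer/reclaim sequence; dxg_fd, the device
+and paging-queue handles, and the allocation handle array are assumed to
+come from earlier ioctls, and exactly one of the resources/allocations
+pointers may be set (allocations here, resources left 0):
+
+/* Hypothetical usage sketch, not part of this patch. */
+#include <stdint.h>
+#include <sys/ioctl.h>
+#include <misc/d3dkmthk.h>      /* assumes the installed uapi header */
+
+static int offer_then_reclaim(int dxg_fd, struct d3dkmthandle device,
+                              struct d3dkmthandle paging_queue,
+                              struct d3dkmthandle *allocs, __u32 count,
+                              enum d3dddi_reclaim_result *results)
+{
+        struct d3dkmt_offerallocations offer = {
+                .device = device,
+                .allocations = (__u64)(uintptr_t)allocs,  /* resources stays 0 */
+                .allocation_count = count,
+                .priority = _D3DKMT_OFFER_PRIORITY_NORMAL,
+        };
+        struct d3dkmt_reclaimallocations2 reclaim = {
+                .paging_queue = paging_queue,
+                .allocation_count = count,
+                .allocations = (__u64)(uintptr_t)allocs,
+                .results = (__u64)(uintptr_t)results,     /* per-allocation outcome */
+        };
+        int ret = ioctl(dxg_fd, LX_DXOFFERALLOCATIONS, &offer);
+
+        if (ret)
+                return ret;
+        /* On success, check results[i] for _D3DDDI_RECLAIM_RESULT_DISCARDED. */
+        return ioctl(dxg_fd, LX_DXRECLAIMALLOCATIONS2, &reclaim);
+}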
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 8 + + drivers/hv/dxgkrnl/dxgvmbus.c | 124 +++++++++- + drivers/hv/dxgkrnl/dxgvmbus.h | 27 ++ + drivers/hv/dxgkrnl/ioctl.c | 117 ++++++++- + include/uapi/misc/d3dkmthk.h | 67 +++++ + 5 files changed, 340 insertions(+), 3 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index a55873bdd9a6..494ea8fb0bb3 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -865,6 +865,14 @@ int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, + int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_getallocationpriority *a); ++int dxgvmb_send_offer_allocations(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_offerallocations *args); ++int dxgvmb_send_reclaim_allocations(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle device, ++ struct d3dkmt_reclaimallocations2 *args, ++ u64 __user *paging_fence_value); + int dxgvmb_send_change_vidmem_reservation(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmthandle other_process, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 9a1864bb4e14..8448fd78975b 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1858,7 +1858,7 @@ int dxgvmb_send_query_clock_calibration(struct dxgprocess *process, + ret = copy_to_user(&inargs->clock_data, &result.clock_data, + sizeof(result.clock_data)); + if (ret) { +- pr_err("%s failed to copy clock data", __func__); ++ DXG_ERR("failed to copy clock data"); + ret = -EINVAL; + goto cleanup; + } +@@ -2949,6 +2949,128 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_offer_allocations(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_offerallocations *args) ++{ ++ struct dxgkvmb_command_offerallocations *command; ++ int ret = -EINVAL; ++ u32 alloc_size = sizeof(struct d3dkmthandle) * args->allocation_count; ++ u32 cmd_size = sizeof(struct dxgkvmb_command_offerallocations) + ++ alloc_size - sizeof(struct d3dkmthandle); ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_OFFERALLOCATIONS, ++ process->host_handle); ++ command->flags = args->flags; ++ command->priority = args->priority; ++ command->device = args->device; ++ command->allocation_count = args->allocation_count; ++ if (args->resources) { ++ command->resources = true; ++ ret = copy_from_user(command->allocations, args->resources, ++ alloc_size); ++ } else { ++ ret = copy_from_user(command->allocations, ++ args->allocations, alloc_size); ++ } ++ if (ret) { ++ DXG_ERR("failed to copy input handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ pr_debug("err: %s %d", __func__, ret); ++ return ret; ++} ++ ++int dxgvmb_send_reclaim_allocations(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle device, ++ struct d3dkmt_reclaimallocations2 *args, ++ u64 __user *paging_fence_value) ++{ ++ struct dxgkvmb_command_reclaimallocations *command; ++ struct 
dxgkvmb_command_reclaimallocations_return *result; ++ int ret; ++ u32 alloc_size = sizeof(struct d3dkmthandle) * args->allocation_count; ++ u32 cmd_size = sizeof(struct dxgkvmb_command_reclaimallocations) + ++ alloc_size - sizeof(struct d3dkmthandle); ++ u32 result_size = sizeof(*result); ++ struct dxgvmbusmsgres msg = {.hdr = NULL}; ++ ++ if (args->results) ++ result_size += (args->allocation_count - 1) * ++ sizeof(enum d3dddi_reclaim_result); ++ ++ ret = init_message_res(&msg, adapter, process, cmd_size, result_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ result = msg.res; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_RECLAIMALLOCATIONS, ++ process->host_handle); ++ command->device = device; ++ command->paging_queue = args->paging_queue; ++ command->allocation_count = args->allocation_count; ++ command->write_results = args->results != NULL; ++ if (args->resources) { ++ command->resources = true; ++ ret = copy_from_user(command->allocations, args->resources, ++ alloc_size); ++ } else { ++ ret = copy_from_user(command->allocations, ++ args->allocations, alloc_size); ++ } ++ if (ret) { ++ DXG_ERR("failed to copy input handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ result, msg.res_size); ++ if (ret < 0) ++ goto cleanup; ++ ret = copy_to_user(paging_fence_value, ++ &result->paging_fence_value, sizeof(u64)); ++ if (ret) { ++ DXG_ERR("failed to copy paging fence"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = ntstatus2int(result->status); ++ if (NT_SUCCESS(result->status) && args->results) { ++ ret = copy_to_user(args->results, result->discarded, ++ sizeof(result->discarded[0]) * ++ args->allocation_count); ++ if (ret) { ++ DXG_ERR("failed to copy results"); ++ ret = -EINVAL; ++ } ++ } ++ ++cleanup: ++ free_message((struct dxgvmbusmsg *)&msg, process); ++ if (ret) ++ pr_debug("err: %s %d", __func__, ret); ++ return ret; ++} ++ + int dxgvmb_send_change_vidmem_reservation(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmthandle other_process, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 17768ed0e68d..558c6576a262 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -653,6 +653,33 @@ struct dxgkvmb_command_markdeviceaserror { + struct d3dkmt_markdeviceaserror args; + }; + ++/* Returns ntstatus */ ++struct dxgkvmb_command_offerallocations { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ u32 allocation_count; ++ enum d3dkmt_offer_priority priority; ++ struct d3dkmt_offer_flags flags; ++ bool resources; ++ struct d3dkmthandle allocations[1]; ++}; ++ ++struct dxgkvmb_command_reclaimallocations { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle paging_queue; ++ u32 allocation_count; ++ bool resources; ++ bool write_results; ++ struct d3dkmthandle allocations[1]; ++}; ++ ++struct dxgkvmb_command_reclaimallocations_return { ++ u64 paging_fence_value; ++ struct ntstatus status; ++ enum d3dddi_reclaim_result discarded[1]; ++}; ++ + /* Returns ntstatus */ + struct dxgkvmb_command_changevideomemoryreservation { + struct dxgkvmb_command_vgpu_to_host hdr; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 4babb21f38a9..fa880aa0196a 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -1961,6 +1961,119 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void 
*__user inargs) + return ret; + } + ++static int ++dxgkio_offer_allocations(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_offerallocations args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.allocation_count > D3DKMT_MAKERESIDENT_ALLOC_MAX || ++ args.allocation_count == 0) { ++ DXG_ERR("invalid number of allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if ((args.resources == NULL) == (args.allocations == NULL)) { ++ DXG_ERR("invalid pointer to resources/allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_offer_allocations(process, adapter, &args); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_reclaim_allocations(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_reclaimallocations2 args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmt_reclaimallocations2 * __user in_args = inargs; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.allocation_count > D3DKMT_MAKERESIDENT_ALLOC_MAX || ++ args.allocation_count == 0) { ++ DXG_ERR("invalid number of allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if ((args.resources == NULL) == (args.allocations == NULL)) { ++ DXG_ERR("invalid pointer to resources/allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.paging_queue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_reclaim_allocations(process, adapter, ++ device->handle, &args, ++ &in_args->paging_fence_value); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_submit_command(struct dxgprocess *process, void *__user inargs) + { +@@ -4548,12 +4661,12 @@ static struct ioctl_desc ioctls[] = { + /* 0x24 */ {}, + /* 0x25 */ {dxgkio_lock2, LX_DXLOCK2}, + /* 0x26 */ {dxgkio_mark_device_as_error, LX_DXMARKDEVICEASERROR}, +-/* 0x27 */ {}, ++/* 0x27 */ {dxgkio_offer_allocations, LX_DXOFFERALLOCATIONS}, + /* 0x28 */ {}, + /* 0x29 */ {}, + /* 0x2a */ {dxgkio_query_alloc_residency, LX_DXQUERYALLOCATIONRESIDENCY}, + /* 0x2b */ {}, +-/* 0x2c */ {}, ++/* 0x2c */ {dxgkio_reclaim_allocations, LX_DXRECLAIMALLOCATIONS2}, + /* 0x2d */ {}, + /* 0x2e */ {dxgkio_set_allocation_priority, LX_DXSETALLOCATIONPRIORITY}, + /* 0x2f */ {}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 
ea18242ceb83..46b9f6d303bf 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -61,6 +61,7 @@ struct winluid { + #define D3DDDI_MAX_WRITTEN_PRIMARIES 16 + + #define D3DKMT_CREATEALLOCATION_MAX 1024 ++#define D3DKMT_MAKERESIDENT_ALLOC_MAX (1024 * 10) + #define D3DKMT_ADAPTERS_MAX 64 + #define D3DDDI_MAX_BROADCAST_CONTEXT 64 + #define D3DDDI_MAX_OBJECT_WAITED_ON 32 +@@ -1087,6 +1088,68 @@ struct d3dddi_updateallocproperty { + }; + }; + ++enum d3dkmt_offer_priority { ++ _D3DKMT_OFFER_PRIORITY_LOW = 1, ++ _D3DKMT_OFFER_PRIORITY_NORMAL = 2, ++ _D3DKMT_OFFER_PRIORITY_HIGH = 3, ++ _D3DKMT_OFFER_PRIORITY_AUTO = 4, ++}; ++ ++struct d3dkmt_offer_flags { ++ union { ++ struct { ++ __u32 offer_immediately:1; ++ __u32 allow_decommit:1; ++ __u32 reserved:30; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_offerallocations { ++ struct d3dkmthandle device; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *resources; ++ const struct d3dkmthandle *allocations; ++#else ++ __u64 resources; ++ __u64 allocations; ++#endif ++ __u32 allocation_count; ++ enum d3dkmt_offer_priority priority; ++ struct d3dkmt_offer_flags flags; ++ __u32 reserved1; ++}; ++ ++enum d3dddi_reclaim_result { ++ _D3DDDI_RECLAIM_RESULT_OK = 0, ++ _D3DDDI_RECLAIM_RESULT_DISCARDED = 1, ++ _D3DDDI_RECLAIM_RESULT_NOT_COMMITTED = 2, ++}; ++ ++struct d3dkmt_reclaimallocations2 { ++ struct d3dkmthandle paging_queue; ++ __u32 allocation_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *resources; ++ struct d3dkmthandle *allocations; ++#else ++ __u64 resources; ++ __u64 allocations; ++#endif ++ union { ++#ifdef __KERNEL__ ++ __u32 *discarded; ++ enum d3dddi_reclaim_result *results; ++#else ++ __u64 discarded; ++ __u64 results; ++#endif ++ }; ++ __u64 paging_fence_value; ++}; ++ + struct d3dkmt_changevideomemoryreservation { + __u64 process; + struct d3dkmthandle adapter; +@@ -1360,8 +1423,12 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x25, struct d3dkmt_lock2) + #define LX_DXMARKDEVICEASERROR \ + _IOWR(0x47, 0x26, struct d3dkmt_markdeviceaserror) ++#define LX_DXOFFERALLOCATIONS \ ++ _IOWR(0x47, 0x27, struct d3dkmt_offerallocations) + #define LX_DXQUERYALLOCATIONRESIDENCY \ + _IOWR(0x47, 0x2a, struct d3dkmt_queryallocationresidency) ++#define LX_DXRECLAIMALLOCATIONS2 \ ++ _IOWR(0x47, 0x2c, struct d3dkmt_reclaimallocations2) + #define LX_DXSETALLOCATIONPRIORITY \ + _IOWR(0x47, 0x2e, struct d3dkmt_setallocationpriority) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1693-drivers-hv-dxgkrnl-Ioctls-to-manage-scheduling-priority.patch b/patch/kernel/archive/wsl2-arm64-6.1/1693-drivers-hv-dxgkrnl-Ioctls-to-manage-scheduling-priority.patch new file mode 100644 index 000000000000..3e736173ad4c --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1693-drivers-hv-dxgkrnl-Ioctls-to-manage-scheduling-priority.patch @@ -0,0 +1,427 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Fri, 14 Jan 2022 17:57:41 -0800 +Subject: drivers: hv: dxgkrnl: Ioctls to manage scheduling priority + +Implement iocts to manage compute device scheduling priority: + - LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY + - LX_DXGETCONTEXTSCHEDULINGPRIORITY + - LX_DXSETCONTEXTINPROCESSSCHEDULINGPRIORITY + - LX_DXSETCONTEXTSCHEDULINGPRIORITY + +Each compute device execution context has an assigned scheduling +priority. It is used by the compute device scheduler on the host to +pick contexts for execution. 
There is a global priority and a +priority within a process. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 9 + + drivers/hv/dxgkrnl/dxgvmbus.c | 67 +++- + drivers/hv/dxgkrnl/dxgvmbus.h | 19 + + drivers/hv/dxgkrnl/ioctl.c | 177 +++++++++- + include/uapi/misc/d3dkmthk.h | 28 ++ + 5 files changed, 294 insertions(+), 6 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 494ea8fb0bb3..02d10bdcc820 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -865,6 +865,15 @@ int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, + int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_getallocationpriority *a); ++int dxgvmb_send_set_context_sch_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle context, ++ int priority, bool in_process); ++int dxgvmb_send_get_context_sch_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle context, ++ int *priority, ++ bool in_process); + int dxgvmb_send_offer_allocations(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_offerallocations *args); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 8448fd78975b..9a610d48bed7 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -2949,6 +2949,69 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_set_context_sch_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle context, ++ int priority, ++ bool in_process) ++{ ++ struct dxgkvmb_command_setcontextschedulingpriority2 *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_SETCONTEXTSCHEDULINGPRIORITY, ++ process->host_handle); ++ command->context = context; ++ command->priority = priority; ++ command->in_process = in_process; ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_get_context_sch_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle context, ++ int *priority, ++ bool in_process) ++{ ++ struct dxgkvmb_command_getcontextschedulingpriority *command; ++ struct dxgkvmb_command_getcontextschedulingpriority_return result = { }; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_GETCONTEXTSCHEDULINGPRIORITY, ++ process->host_handle); ++ command->context = context; ++ command->in_process = in_process; ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret >= 0) { ++ ret = ntstatus2int(result.status); ++ *priority = result.priority; ++ } ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_offer_allocations(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_offerallocations 
*args) +@@ -2991,7 +3054,7 @@ int dxgvmb_send_offer_allocations(struct dxgprocess *process, + cleanup: + free_message(&msg, process); + if (ret) +- pr_debug("err: %s %d", __func__, ret); ++ DXG_TRACE("err: %d", ret); + return ret; + } + +@@ -3067,7 +3130,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process, + cleanup: + free_message((struct dxgvmbusmsg *)&msg, process); + if (ret) +- pr_debug("err: %s %d", __func__, ret); ++ DXG_TRACE("err: %d", ret); + return ret; + } + +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 558c6576a262..509482e1f870 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -331,6 +331,25 @@ struct dxgkvmb_command_getallocationpriority_return { + /* u32 priorities[allocation_count or 1]; */ + }; + ++/* Returns ntstatus */ ++struct dxgkvmb_command_setcontextschedulingpriority2 { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle context; ++ int priority; ++ bool in_process; ++}; ++ ++struct dxgkvmb_command_getcontextschedulingpriority { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle context; ++ bool in_process; ++}; ++ ++struct dxgkvmb_command_getcontextschedulingpriority_return { ++ struct ntstatus status; ++ int priority; ++}; ++ + struct dxgkvmb_command_createdevice { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_createdeviceflags flags; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index fa880aa0196a..bc0adebe52ae 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3660,6 +3660,171 @@ dxgkio_get_allocation_priority(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++set_context_scheduling_priority(struct dxgprocess *process, ++ struct d3dkmthandle hcontext, ++ int priority, bool in_process) ++{ ++ int ret = 0; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ hcontext); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ret = dxgvmb_send_set_context_sch_priority(process, adapter, ++ hcontext, priority, ++ in_process); ++ if (ret < 0) ++ DXG_ERR("send_set_context_scheduling_priority failed"); ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ return ret; ++} ++ ++static int ++dxgkio_set_context_scheduling_priority(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_setcontextschedulingpriority args; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = set_context_scheduling_priority(process, args.context, ++ args.priority, false); ++cleanup: ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++get_context_scheduling_priority(struct dxgprocess *process, ++ struct d3dkmthandle hcontext, ++ int __user *priority, ++ bool in_process) ++{ ++ int ret; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int pri = 0; ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ hcontext); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = 
device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ret = dxgvmb_send_get_context_sch_priority(process, adapter, ++ hcontext, &pri, in_process); ++ if (ret < 0) ++ goto cleanup; ++ ret = copy_to_user(priority, &pri, sizeof(pri)); ++ if (ret) { ++ DXG_ERR("failed to copy priority to user"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ return ret; ++} ++ ++static int ++dxgkio_get_context_scheduling_priority(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_getcontextschedulingpriority args; ++ struct d3dkmt_getcontextschedulingpriority __user *input = inargs; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = get_context_scheduling_priority(process, args.context, ++ &input->priority, false); ++cleanup: ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_set_context_process_scheduling_priority(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_setcontextinprocessschedulingpriority args; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = set_context_scheduling_priority(process, args.context, ++ args.priority, true); ++cleanup: ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_get_context_process_scheduling_priority(struct dxgprocess *process, ++ void __user *inargs) ++{ ++ struct d3dkmt_getcontextinprocessschedulingpriority args; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = get_context_scheduling_priority(process, args.context, ++ &((struct d3dkmt_getcontextinprocessschedulingpriority *) ++ inargs)->priority, true); ++cleanup: ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs) + { +@@ -4655,8 +4820,10 @@ static struct ioctl_desc ioctls[] = { + /* 0x1e */ {}, + /* 0x1f */ {dxgkio_flush_heap_transitions, LX_DXFLUSHHEAPTRANSITIONS}, + /* 0x20 */ {}, +-/* 0x21 */ {}, +-/* 0x22 */ {}, ++/* 0x21 */ {dxgkio_get_context_process_scheduling_priority, ++ LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY}, ++/* 0x22 */ {dxgkio_get_context_scheduling_priority, ++ LX_DXGETCONTEXTSCHEDULINGPRIORITY}, + /* 0x23 */ {}, + /* 0x24 */ {}, + /* 0x25 */ {dxgkio_lock2, LX_DXLOCK2}, +@@ -4669,8 +4836,10 @@ static struct ioctl_desc ioctls[] = { + /* 0x2c */ {dxgkio_reclaim_allocations, LX_DXRECLAIMALLOCATIONS2}, + /* 0x2d */ {}, + /* 0x2e */ {dxgkio_set_allocation_priority, LX_DXSETALLOCATIONPRIORITY}, +-/* 0x2f */ {}, +-/* 0x30 */ {}, ++/* 0x2f */ {dxgkio_set_context_process_scheduling_priority, ++ LX_DXSETCONTEXTINPROCESSSCHEDULINGPRIORITY}, ++/* 0x30 */ {dxgkio_set_context_scheduling_priority, ++ LX_DXSETCONTEXTSCHEDULINGPRIORITY}, + /* 0x31 */ {dxgkio_signal_sync_object_cpu, + LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU}, + /* 0x32 */ {dxgkio_signal_sync_object_gpu, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 46b9f6d303bf..a9bafab97c18 100644 +--- 
a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -708,6 +708,26 @@ struct d3dkmt_submitcommandtohwqueue { + #endif + }; + ++struct d3dkmt_setcontextschedulingpriority { ++ struct d3dkmthandle context; ++ int priority; ++}; ++ ++struct d3dkmt_setcontextinprocessschedulingpriority { ++ struct d3dkmthandle context; ++ int priority; ++}; ++ ++struct d3dkmt_getcontextschedulingpriority { ++ struct d3dkmthandle context; ++ int priority; ++}; ++ ++struct d3dkmt_getcontextinprocessschedulingpriority { ++ struct d3dkmthandle context; ++ int priority; ++}; ++ + struct d3dkmt_setallocationpriority { + struct d3dkmthandle device; + struct d3dkmthandle resource; +@@ -1419,6 +1439,10 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) + #define LX_DXFLUSHHEAPTRANSITIONS \ + _IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions) ++#define LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY \ ++ _IOWR(0x47, 0x21, struct d3dkmt_getcontextinprocessschedulingpriority) ++#define LX_DXGETCONTEXTSCHEDULINGPRIORITY \ ++ _IOWR(0x47, 0x22, struct d3dkmt_getcontextschedulingpriority) + #define LX_DXLOCK2 \ + _IOWR(0x47, 0x25, struct d3dkmt_lock2) + #define LX_DXMARKDEVICEASERROR \ +@@ -1431,6 +1455,10 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x2c, struct d3dkmt_reclaimallocations2) + #define LX_DXSETALLOCATIONPRIORITY \ + _IOWR(0x47, 0x2e, struct d3dkmt_setallocationpriority) ++#define LX_DXSETCONTEXTINPROCESSSCHEDULINGPRIORITY \ ++ _IOWR(0x47, 0x2f, struct d3dkmt_setcontextinprocessschedulingpriority) ++#define LX_DXSETCONTEXTSCHEDULINGPRIORITY \ ++ _IOWR(0x47, 0x30, struct d3dkmt_setcontextschedulingpriority) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x31, struct d3dkmt_signalsynchronizationobjectfromcpu) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1694-drivers-hv-dxgkrnl-Manage-residency-of-allocations.patch b/patch/kernel/archive/wsl2-arm64-6.1/1694-drivers-hv-dxgkrnl-Manage-residency-of-allocations.patch new file mode 100644 index 000000000000..4c579a39fe80 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1694-drivers-hv-dxgkrnl-Manage-residency-of-allocations.patch @@ -0,0 +1,447 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Fri, 14 Jan 2022 17:33:52 -0800 +Subject: drivers: hv: dxgkrnl: Manage residency of allocations + +Implement ioctls to manage residency of compute device allocations: + - LX_DXMAKERESIDENT, + - LX_DXEVICT. + +An allocation is "resident" when the compute devoce is setup to +access it. It means that the allocation is in the local device +memory or in non-pageable system memory. + +The current design does not support on demand compute device page +faulting. An allocation must be resident before the compute device +is allowed to access it. + +The LX_DXMAKERESIDENT ioctl instructs the video memory manager to +make the given allocations resident. The operation is submitted to +a paging queue (dxgpagingqueue). When the ioctl returns a "pending" +status, a monitored fence sync object can be used to synchronize +with the completion of the operation. + +The LX_DXEVICT ioctl istructs the video memory manager to evict +the given allocations from device accessible memory. 
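+
+A hypothetical user-mode sketch of driving these two ioctls is shown below.
+It is illustrative only and not part of this patch: it assumes the uapi
+header is installed as <misc/d3dkmthk.h>, that fd is an open handle to the
+/dev/dxg device node, and that the paging queue, device and allocation
+handles were obtained through the earlier creation ioctls.
+
+	#include <stdint.h>
+	#include <string.h>
+	#include <sys/ioctl.h>
+	#include <linux/types.h>
+	#include <misc/d3dkmthk.h>
+
+	static int make_resident_then_evict(int fd,
+					    struct d3dkmthandle paging_queue,
+					    struct d3dkmthandle device,
+					    struct d3dkmthandle alloc)
+	{
+		struct d3dddi_makeresident mr;
+		struct d3dkmt_evict ev;
+		int ret;
+
+		/* Queue a make-resident operation for one allocation. */
+		memset(&mr, 0, sizeof(mr));
+		mr.paging_queue = paging_queue;
+		mr.alloc_count = 1;
+		mr.allocation_list = (uintptr_t)&alloc;	/* __u64 in user mode */
+		ret = ioctl(fd, LX_DXMAKERESIDENT, &mr);
+		if (ret < 0)
+			return ret;
+		/* ret > 0 is STATUS_PENDING: wait on mr.paging_fence_value. */
+
+		/* Evict the same allocation from device accessible memory. */
+		memset(&ev, 0, sizeof(ev));
+		ev.device = device;
+		ev.alloc_count = 1;
+		ev.allocations = (uintptr_t)&alloc;
+		return ioctl(fd, LX_DXEVICT, &ev);
+	}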
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 4 + + drivers/hv/dxgkrnl/dxgvmbus.c | 98 +++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 27 ++ + drivers/hv/dxgkrnl/ioctl.c | 141 +++++++++- + include/uapi/misc/d3dkmthk.h | 54 ++++ + 5 files changed, 322 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 02d10bdcc820..93c3ceb23865 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -810,6 +810,10 @@ int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + int dxgvmb_send_destroy_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + struct d3dkmt_destroyallocation2 *args, + struct d3dkmthandle *alloc_handles); ++int dxgvmb_send_make_resident(struct dxgprocess *pr, struct dxgadapter *adapter, ++ struct d3dddi_makeresident *args); ++int dxgvmb_send_evict(struct dxgprocess *pr, struct dxgadapter *adapter, ++ struct d3dkmt_evict *args); + int dxgvmb_send_submit_command(struct dxgprocess *pr, + struct dxgadapter *adapter, + struct d3dkmt_submitcommand *args); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 9a610d48bed7..f4c4a7e7ad8b 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -2279,6 +2279,104 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + return ret; + } + ++int dxgvmb_send_make_resident(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dddi_makeresident *args) ++{ ++ int ret; ++ u32 cmd_size; ++ struct dxgkvmb_command_makeresident_return result = { }; ++ struct dxgkvmb_command_makeresident *command = NULL; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ cmd_size = (args->alloc_count - 1) * sizeof(struct d3dkmthandle) + ++ sizeof(struct dxgkvmb_command_makeresident); ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ ret = copy_from_user(command->allocations, args->allocation_list, ++ args->alloc_count * ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy alloc handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_MAKERESIDENT, ++ process->host_handle); ++ command->alloc_count = args->alloc_count; ++ command->paging_queue = args->paging_queue; ++ command->flags = args->flags; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) { ++ DXG_ERR("send_make_resident failed %x", ret); ++ goto cleanup; ++ } ++ ++ args->paging_fence_value = result.paging_fence_value; ++ args->num_bytes_to_trim = result.num_bytes_to_trim; ++ ret = ntstatus2int(result.status); ++ ++cleanup: ++ ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_evict(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_evict *args) ++{ ++ int ret; ++ u32 cmd_size; ++ struct dxgkvmb_command_evict_return result = { }; ++ struct dxgkvmb_command_evict *command = NULL; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ cmd_size = (args->alloc_count - 1) * sizeof(struct d3dkmthandle) + ++ sizeof(struct dxgkvmb_command_evict); ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ret = copy_from_user(command->allocations, args->allocations, ++ args->alloc_count * ++ sizeof(struct 
d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy alloc handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_EVICT, process->host_handle); ++ command->alloc_count = args->alloc_count; ++ command->device = args->device; ++ command->flags = args->flags; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) { ++ DXG_ERR("send_evict failed %x", ret); ++ goto cleanup; ++ } ++ args->num_bytes_to_trim = result.num_bytes_to_trim; ++ ++cleanup: ++ ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_submit_command(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_submitcommand *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 509482e1f870..23f92ab9f8ad 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -372,6 +372,33 @@ struct dxgkvmb_command_flushdevice { + enum dxgdevice_flushschedulerreason reason; + }; + ++struct dxgkvmb_command_makeresident { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle paging_queue; ++ struct d3dddi_makeresident_flags flags; ++ u32 alloc_count; ++ struct d3dkmthandle allocations[1]; ++}; ++ ++struct dxgkvmb_command_makeresident_return { ++ u64 paging_fence_value; ++ u64 num_bytes_to_trim; ++ struct ntstatus status; ++}; ++ ++struct dxgkvmb_command_evict { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dddi_evict_flags flags; ++ u32 alloc_count; ++ struct d3dkmthandle allocations[1]; ++}; ++ ++struct dxgkvmb_command_evict_return { ++ u64 num_bytes_to_trim; ++}; ++ + struct dxgkvmb_command_submitcommand { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_submitcommand args; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index bc0adebe52ae..2700da51bc01 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -1961,6 +1961,143 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_make_resident(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret, ret2; ++ struct d3dddi_makeresident args; ++ struct d3dddi_makeresident *input = inargs; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.alloc_count > D3DKMT_MAKERESIDENT_ALLOC_MAX || ++ args.alloc_count == 0) { ++ DXG_ERR("invalid number of allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ if (args.paging_queue.v == 0) { ++ DXG_ERR("paging queue is missing"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.paging_queue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_make_resident(process, adapter, &args); ++ if (ret < 0) ++ goto cleanup; ++ /* STATUS_PENING is a success code > 0. 
It is returned to user mode */ ++ if (!(ret == STATUS_PENDING || ret == 0)) { ++ DXG_ERR("Unexpected error %x", ret); ++ goto cleanup; ++ } ++ ++ ret2 = copy_to_user(&input->paging_fence_value, ++ &args.paging_fence_value, sizeof(u64)); ++ if (ret2) { ++ DXG_ERR("failed to copy paging fence"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret2 = copy_to_user(&input->num_bytes_to_trim, ++ &args.num_bytes_to_trim, sizeof(u64)); ++ if (ret2) { ++ DXG_ERR("failed to copy bytes to trim"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ ++ return ret; ++} ++ ++static int ++dxgkio_evict(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_evict args; ++ struct d3dkmt_evict *input = inargs; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.alloc_count > D3DKMT_MAKERESIDENT_ALLOC_MAX || ++ args.alloc_count == 0) { ++ DXG_ERR("invalid number of allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_evict(process, adapter, &args); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(&input->num_bytes_to_trim, ++ &args.num_bytes_to_trim, sizeof(u64)); ++ if (ret) { ++ DXG_ERR("failed to copy bytes to trim to user"); ++ ret = -EINVAL; ++ } ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_offer_allocations(struct dxgprocess *process, void *__user inargs) + { +@@ -4797,7 +4934,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x08 */ {}, + /* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO}, + /* 0x0a */ {dxgkio_query_vidmem_info, LX_DXQUERYVIDEOMEMORYINFO}, +-/* 0x0b */ {}, ++/* 0x0b */ {dxgkio_make_resident, LX_DXMAKERESIDENT}, + /* 0x0c */ {}, + /* 0x0d */ {dxgkio_escape, LX_DXESCAPE}, + /* 0x0e */ {dxgkio_get_device_state, LX_DXGETDEVICESTATE}, +@@ -4817,7 +4954,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x1b */ {dxgkio_destroy_hwqueue, LX_DXDESTROYHWQUEUE}, + /* 0x1c */ {dxgkio_destroy_paging_queue, LX_DXDESTROYPAGINGQUEUE}, + /* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT}, +-/* 0x1e */ {}, ++/* 0x1e */ {dxgkio_evict, LX_DXEVICT}, + /* 0x1f */ {dxgkio_flush_heap_transitions, LX_DXFLUSHHEAPTRANSITIONS}, + /* 0x20 */ {}, + /* 0x21 */ {dxgkio_get_context_process_scheduling_priority, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index a9bafab97c18..944f9d1e73d6 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -962,6 +962,56 @@ struct d3dkmt_destroyallocation2 { + struct d3dddicb_destroyallocation2flags flags; + }; + ++struct d3dddi_makeresident_flags { ++ union { ++ struct { ++ __u32 cant_trim_further:1; ++ __u32 must_succeed:1; ++ __u32 reserved:30; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dddi_makeresident 
{ ++ struct d3dkmthandle paging_queue; ++ __u32 alloc_count; ++#ifdef __KERNEL__ ++ const struct d3dkmthandle *allocation_list; ++ const __u32 *priority_list; ++#else ++ __u64 allocation_list; ++ __u64 priority_list; ++#endif ++ struct d3dddi_makeresident_flags flags; ++ __u64 paging_fence_value; ++ __u64 num_bytes_to_trim; ++}; ++ ++struct d3dddi_evict_flags { ++ union { ++ struct { ++ __u32 evict_only_if_necessary:1; ++ __u32 not_written_to:1; ++ __u32 reserved:30; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_evict { ++ struct d3dkmthandle device; ++ __u32 alloc_count; ++#ifdef __KERNEL__ ++ const struct d3dkmthandle *allocations; ++#else ++ __u64 allocations; ++#endif ++ struct d3dddi_evict_flags flags; ++ __u32 reserved; ++ __u64 num_bytes_to_trim; ++}; ++ + enum d3dkmt_memory_segment_group { + _D3DKMT_MEMORY_SEGMENT_GROUP_LOCAL = 0, + _D3DKMT_MEMORY_SEGMENT_GROUP_NON_LOCAL = 1 +@@ -1407,6 +1457,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXQUERYVIDEOMEMORYINFO \ + _IOWR(0x47, 0x0a, struct d3dkmt_queryvideomemoryinfo) ++#define LX_DXMAKERESIDENT \ ++ _IOWR(0x47, 0x0b, struct d3dddi_makeresident) + #define LX_DXESCAPE \ + _IOWR(0x47, 0x0d, struct d3dkmt_escape) + #define LX_DXGETDEVICESTATE \ +@@ -1437,6 +1489,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ + _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) ++#define LX_DXEVICT \ ++ _IOWR(0x47, 0x1e, struct d3dkmt_evict) + #define LX_DXFLUSHHEAPTRANSITIONS \ + _IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions) + #define LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1695-drivers-hv-dxgkrnl-Manage-compute-device-virtual-addresses.patch b/patch/kernel/archive/wsl2-arm64-6.1/1695-drivers-hv-dxgkrnl-Manage-compute-device-virtual-addresses.patch new file mode 100644 index 000000000000..633f1005806c --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1695-drivers-hv-dxgkrnl-Manage-compute-device-virtual-addresses.patch @@ -0,0 +1,703 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Fri, 14 Jan 2022 17:13:04 -0800 +Subject: drivers: hv: dxgkrnl: Manage compute device virtual addresses + +Implement ioctls to manage compute device virtual addresses (VA): + - LX_DXRESERVEGPUVIRTUALADDRESS, + - LX_DXFREEGPUVIRTUALADDRESS, + - LX_DXMAPGPUVIRTUALADDRESS, + - LX_DXUPDATEGPUVIRTUALADDRESS. + +Compute devices access memory by using virtual addressses. +Each process has a dedicated VA space. The video memory manager +on the host is responsible with updating device page tables +before submitting a DMA buffer for execution. + +The LX_DXRESERVEGPUVIRTUALADDRESS ioctl reserves a portion of the +process compute device VA space. + +The LX_DXMAPGPUVIRTUALADDRESS ioctl reserves a portion of the process +compute device VA space and maps it to the given compute device +allocation. + +The LX_DXFREEGPUVIRTUALADDRESS frees the previously reserved portion +of the compute device VA space. + +The LX_DXUPDATEGPUVIRTUALADDRESS ioctl adds operations to modify the +compute device VA space to a compute device execution context. It +allows the operations to be queued and synchronized with execution +of other compute device DMA buffers.. 
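+
+A hypothetical user-mode sketch of the reserve/free pair is shown below. It
+is illustrative only and not part of this patch: it assumes the uapi header
+is installed as <misc/d3dkmthk.h>, that fd is an open handle to the /dev/dxg
+device node, and that adapter is a valid adapter handle returned by the
+earlier adapter-open ioctl.
+
+	#include <stdint.h>
+	#include <string.h>
+	#include <sys/ioctl.h>
+	#include <linux/types.h>
+	#include <misc/d3dkmthk.h>
+
+	/* Reserve 16 MiB of compute device VA space, then release it. */
+	static int reserve_and_free_va(int fd, struct d3dkmthandle adapter)
+	{
+		struct d3dddi_reservegpuvirtualaddress res;
+		struct d3dkmt_freegpuvirtualaddress fr;
+		int ret;
+
+		memset(&res, 0, sizeof(res));
+		res.adapter = adapter;
+		res.size = 16 * 1024 * 1024;
+		res.reservation_type = _D3DDDIGPUVA_RESERVE_NO_ACCESS;
+		/* min/max addresses left at 0 here; the host picks the range
+		 * (illustrative default, not mandated by this patch).
+		 */
+		ret = ioctl(fd, LX_DXRESERVEGPUVIRTUALADDRESS, &res);
+		if (ret < 0)
+			return ret;
+
+		/* res.virtual_address holds the reserved base address. */
+		memset(&fr, 0, sizeof(fr));
+		fr.adapter = adapter;
+		fr.base_address = res.virtual_address;
+		fr.size = res.size;
+		return ioctl(fd, LX_DXFREEGPUVIRTUALADDRESS, &fr);
+	}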
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 10 + + drivers/hv/dxgkrnl/dxgvmbus.c | 150 ++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 38 ++ + drivers/hv/dxgkrnl/ioctl.c | 228 +++++++++- + include/uapi/misc/d3dkmthk.h | 126 +++++ + 5 files changed, 548 insertions(+), 4 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 93c3ceb23865..93bc9b41aa41 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -817,6 +817,16 @@ int dxgvmb_send_evict(struct dxgprocess *pr, struct dxgadapter *adapter, + int dxgvmb_send_submit_command(struct dxgprocess *pr, + struct dxgadapter *adapter, + struct d3dkmt_submitcommand *args); ++int dxgvmb_send_map_gpu_va(struct dxgprocess *pr, struct d3dkmthandle h, ++ struct dxgadapter *adapter, ++ struct d3dddi_mapgpuvirtualaddress *args); ++int dxgvmb_send_reserve_gpu_va(struct dxgprocess *pr, ++ struct dxgadapter *adapter, ++ struct d3dddi_reservegpuvirtualaddress *args); ++int dxgvmb_send_free_gpu_va(struct dxgprocess *pr, struct dxgadapter *adapter, ++ struct d3dkmt_freegpuvirtualaddress *args); ++int dxgvmb_send_update_gpu_va(struct dxgprocess *pr, struct dxgadapter *adapter, ++ struct d3dkmt_updategpuvirtualaddress *args); + int dxgvmb_send_create_sync_object(struct dxgprocess *pr, + struct dxgadapter *adapter, + struct d3dkmt_createsynchronizationobject2 +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index f4c4a7e7ad8b..425a1ab87bd6 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -2432,6 +2432,156 @@ int dxgvmb_send_submit_command(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_map_gpu_va(struct dxgprocess *process, ++ struct d3dkmthandle device, ++ struct dxgadapter *adapter, ++ struct d3dddi_mapgpuvirtualaddress *args) ++{ ++ struct dxgkvmb_command_mapgpuvirtualaddress *command; ++ struct dxgkvmb_command_mapgpuvirtualaddress_return result; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_MAPGPUVIRTUALADDRESS, ++ process->host_handle); ++ command->args = *args; ++ command->device = device; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, &result, ++ sizeof(result)); ++ if (ret < 0) ++ goto cleanup; ++ args->virtual_address = result.virtual_address; ++ args->paging_fence_value = result.paging_fence_value; ++ ret = ntstatus2int(result.status); ++ ++cleanup: ++ ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_reserve_gpu_va(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dddi_reservegpuvirtualaddress *args) ++{ ++ struct dxgkvmb_command_reservegpuvirtualaddress *command; ++ struct dxgkvmb_command_reservegpuvirtualaddress_return result; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_RESERVEGPUVIRTUALADDRESS, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, &result, ++ sizeof(result)); ++ args->virtual_address = result.virtual_address; ++ ++cleanup: ++ free_message(&msg, 
process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_free_gpu_va(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_freegpuvirtualaddress *args) ++{ ++ struct dxgkvmb_command_freegpuvirtualaddress *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_FREEGPUVIRTUALADDRESS, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_update_gpu_va(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_updategpuvirtualaddress *args) ++{ ++ struct dxgkvmb_command_updategpuvirtualaddress *command; ++ u32 cmd_size; ++ u32 op_size; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ if (args->num_operations == 0 || ++ (DXG_MAX_VM_BUS_PACKET_SIZE / ++ sizeof(struct d3dddi_updategpuvirtualaddress_operation)) < ++ args->num_operations) { ++ ret = -EINVAL; ++ DXG_ERR("Invalid number of operations: %d", ++ args->num_operations); ++ goto cleanup; ++ } ++ ++ op_size = args->num_operations * ++ sizeof(struct d3dddi_updategpuvirtualaddress_operation); ++ cmd_size = sizeof(struct dxgkvmb_command_updategpuvirtualaddress) + ++ op_size - sizeof(args->operations[0]); ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_UPDATEGPUVIRTUALADDRESS, ++ process->host_handle); ++ command->fence_value = args->fence_value; ++ command->device = args->device; ++ command->context = args->context; ++ command->fence_object = args->fence_object; ++ command->num_operations = args->num_operations; ++ command->flags = args->flags.value; ++ ret = copy_from_user(command->operations, args->operations, ++ op_size); ++ if (ret) { ++ DXG_ERR("failed to copy operations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + static void set_result(struct d3dkmt_createsynchronizationobject2 *args, + u64 fence_gpu_va, u8 *va) + { +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 23f92ab9f8ad..88967ff6a505 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -418,6 +418,44 @@ struct dxgkvmb_command_flushheaptransitions { + struct dxgkvmb_command_vgpu_to_host hdr; + }; + ++struct dxgkvmb_command_freegpuvirtualaddress { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_freegpuvirtualaddress args; ++}; ++ ++struct dxgkvmb_command_mapgpuvirtualaddress { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dddi_mapgpuvirtualaddress args; ++ struct d3dkmthandle device; ++}; ++ ++struct dxgkvmb_command_mapgpuvirtualaddress_return { ++ u64 virtual_address; ++ u64 paging_fence_value; ++ struct ntstatus status; ++}; ++ ++struct dxgkvmb_command_reservegpuvirtualaddress { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dddi_reservegpuvirtualaddress args; ++}; ++ ++struct dxgkvmb_command_reservegpuvirtualaddress_return { ++ u64 virtual_address; ++ u64 paging_fence_value; ++}; ++ 
++struct dxgkvmb_command_updategpuvirtualaddress { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ u64 fence_value; ++ struct d3dkmthandle device; ++ struct d3dkmthandle context; ++ struct d3dkmthandle fence_object; ++ u32 num_operations; ++ u32 flags; ++ struct d3dddi_updategpuvirtualaddress_operation operations[1]; ++}; ++ + struct dxgkvmb_command_queryclockcalibration { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_queryclockcalibration args; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 2700da51bc01..f6700e974f25 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -2492,6 +2492,226 @@ dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret, ret2; ++ struct d3dddi_mapgpuvirtualaddress args; ++ struct d3dddi_mapgpuvirtualaddress *input = inargs; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.paging_queue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_map_gpu_va(process, zerohandle, adapter, &args); ++ if (ret < 0) ++ goto cleanup; ++ /* STATUS_PENING is a success code > 0. It is returned to user mode */ ++ if (!(ret == STATUS_PENDING || ret == 0)) { ++ DXG_ERR("Unexpected error %x", ret); ++ goto cleanup; ++ } ++ ++ ret2 = copy_to_user(&input->paging_fence_value, ++ &args.paging_fence_value, sizeof(u64)); ++ if (ret2) { ++ DXG_ERR("failed to copy paging fence to user"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret2 = copy_to_user(&input->virtual_address, &args.virtual_address, ++ sizeof(args.virtual_address)); ++ if (ret2) { ++ DXG_ERR("failed to copy va to user"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_reserve_gpu_va(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dddi_reservegpuvirtualaddress args; ++ struct d3dddi_reservegpuvirtualaddress *input = inargs; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.adapter); ++ if (device == NULL) { ++ DXG_ERR("invalid adapter or paging queue: 0x%x", ++ args.adapter.v); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ kref_get(&adapter->adapter_kref); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } else { ++ args.adapter = adapter->host_handle; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ adapter = NULL; ++ goto cleanup; ++ } 
++ ++ ret = dxgvmb_send_reserve_gpu_va(process, adapter, &args); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(&input->virtual_address, &args.virtual_address, ++ sizeof(args.virtual_address)); ++ if (ret) { ++ DXG_ERR("failed to copy VA to user"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (adapter) { ++ dxgadapter_release_lock_shared(adapter); ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_free_gpu_va(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_freegpuvirtualaddress args; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ args.adapter = adapter->host_handle; ++ ret = dxgvmb_send_free_gpu_va(process, adapter, &args); ++ ++cleanup: ++ ++ if (adapter) { ++ dxgadapter_release_lock_shared(adapter); ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ } ++ ++ return ret; ++} ++ ++static int ++dxgkio_update_gpu_va(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_updategpuvirtualaddress args; ++ struct d3dkmt_updategpuvirtualaddress *input = inargs; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_update_gpu_va(process, adapter, &args); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(&input->fence_value, &args.fence_value, ++ sizeof(args.fence_value)); ++ if (ret) { ++ DXG_ERR("failed to copy fence value to user"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ return ret; ++} ++ + static int + dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + { +@@ -4931,11 +5151,11 @@ static struct ioctl_desc ioctls[] = { + /* 0x05 */ {dxgkio_destroy_context, LX_DXDESTROYCONTEXT}, + /* 0x06 */ {dxgkio_create_allocation, LX_DXCREATEALLOCATION}, + /* 0x07 */ {dxgkio_create_paging_queue, LX_DXCREATEPAGINGQUEUE}, +-/* 0x08 */ {}, ++/* 0x08 */ {dxgkio_reserve_gpu_va, LX_DXRESERVEGPUVIRTUALADDRESS}, + /* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO}, + /* 0x0a */ {dxgkio_query_vidmem_info, LX_DXQUERYVIDEOMEMORYINFO}, + /* 0x0b */ {dxgkio_make_resident, LX_DXMAKERESIDENT}, +-/* 0x0c */ {}, ++/* 0x0c */ {dxgkio_map_gpu_va, LX_DXMAPGPUVIRTUALADDRESS}, + /* 0x0d */ {dxgkio_escape, LX_DXESCAPE}, + /* 0x0e */ {dxgkio_get_device_state, LX_DXGETDEVICESTATE}, + /* 0x0f */ {dxgkio_submit_command, LX_DXSUBMITCOMMAND}, +@@ -4956,7 +5176,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x1d */ {dxgkio_destroy_sync_object, 
LX_DXDESTROYSYNCHRONIZATIONOBJECT}, + /* 0x1e */ {dxgkio_evict, LX_DXEVICT}, + /* 0x1f */ {dxgkio_flush_heap_transitions, LX_DXFLUSHHEAPTRANSITIONS}, +-/* 0x20 */ {}, ++/* 0x20 */ {dxgkio_free_gpu_va, LX_DXFREEGPUVIRTUALADDRESS}, + /* 0x21 */ {dxgkio_get_context_process_scheduling_priority, + LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY}, + /* 0x22 */ {dxgkio_get_context_scheduling_priority, +@@ -4990,7 +5210,7 @@ static struct ioctl_desc ioctls[] = { + LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE}, + /* 0x37 */ {dxgkio_unlock2, LX_DXUNLOCK2}, + /* 0x38 */ {dxgkio_update_alloc_property, LX_DXUPDATEALLOCPROPERTY}, +-/* 0x39 */ {}, ++/* 0x39 */ {dxgkio_update_gpu_va, LX_DXUPDATEGPUVIRTUALADDRESS}, + /* 0x3a */ {dxgkio_wait_sync_object_cpu, + LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU}, + /* 0x3b */ {dxgkio_wait_sync_object_gpu, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 944f9d1e73d6..1f60f5120e1d 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -1012,6 +1012,124 @@ struct d3dkmt_evict { + __u64 num_bytes_to_trim; + }; + ++struct d3dddigpuva_protection_type { ++ union { ++ struct { ++ __u64 write:1; ++ __u64 execute:1; ++ __u64 zero:1; ++ __u64 no_access:1; ++ __u64 system_use_only:1; ++ __u64 reserved:59; ++ }; ++ __u64 value; ++ }; ++}; ++ ++enum d3dddi_updategpuvirtualaddress_operation_type { ++ _D3DDDI_UPDATEGPUVIRTUALADDRESS_MAP = 0, ++ _D3DDDI_UPDATEGPUVIRTUALADDRESS_UNMAP = 1, ++ _D3DDDI_UPDATEGPUVIRTUALADDRESS_COPY = 2, ++ _D3DDDI_UPDATEGPUVIRTUALADDRESS_MAP_PROTECT = 3, ++}; ++ ++struct d3dddi_updategpuvirtualaddress_operation { ++ enum d3dddi_updategpuvirtualaddress_operation_type operation; ++ union { ++ struct { ++ __u64 base_address; ++ __u64 size; ++ struct d3dkmthandle allocation; ++ __u64 allocation_offset; ++ __u64 allocation_size; ++ } map; ++ struct { ++ __u64 base_address; ++ __u64 size; ++ struct d3dkmthandle allocation; ++ __u64 allocation_offset; ++ __u64 allocation_size; ++ struct d3dddigpuva_protection_type protection; ++ __u64 driver_protection; ++ } map_protect; ++ struct { ++ __u64 base_address; ++ __u64 size; ++ struct d3dddigpuva_protection_type protection; ++ } unmap; ++ struct { ++ __u64 source_address; ++ __u64 size; ++ __u64 dest_address; ++ } copy; ++ }; ++}; ++ ++enum d3dddigpuva_reservation_type { ++ _D3DDDIGPUVA_RESERVE_NO_ACCESS = 0, ++ _D3DDDIGPUVA_RESERVE_ZERO = 1, ++ _D3DDDIGPUVA_RESERVE_NO_COMMIT = 2 ++}; ++ ++struct d3dkmt_updategpuvirtualaddress { ++ struct d3dkmthandle device; ++ struct d3dkmthandle context; ++ struct d3dkmthandle fence_object; ++ __u32 num_operations; ++#ifdef __KERNEL__ ++ struct d3dddi_updategpuvirtualaddress_operation *operations; ++#else ++ __u64 operations; ++#endif ++ __u32 reserved0; ++ __u32 reserved1; ++ __u64 reserved2; ++ __u64 fence_value; ++ union { ++ struct { ++ __u32 do_not_wait:1; ++ __u32 reserved:31; ++ }; ++ __u32 value; ++ } flags; ++ __u32 reserved3; ++}; ++ ++struct d3dddi_mapgpuvirtualaddress { ++ struct d3dkmthandle paging_queue; ++ __u64 base_address; ++ __u64 minimum_address; ++ __u64 maximum_address; ++ struct d3dkmthandle allocation; ++ __u64 offset_in_pages; ++ __u64 size_in_pages; ++ struct d3dddigpuva_protection_type protection; ++ __u64 driver_protection; ++ __u32 reserved0; ++ __u64 reserved1; ++ __u64 virtual_address; ++ __u64 paging_fence_value; ++}; ++ ++struct d3dddi_reservegpuvirtualaddress { ++ struct d3dkmthandle adapter; ++ __u64 base_address; ++ __u64 minimum_address; ++ __u64 maximum_address; ++ __u64 size; ++ enum 
d3dddigpuva_reservation_type reservation_type; ++ __u64 driver_protection; ++ __u64 virtual_address; ++ __u64 paging_fence_value; ++}; ++ ++struct d3dkmt_freegpuvirtualaddress { ++ struct d3dkmthandle adapter; ++ __u32 reserved; ++ __u64 base_address; ++ __u64 size; ++}; ++ + enum d3dkmt_memory_segment_group { + _D3DKMT_MEMORY_SEGMENT_GROUP_LOCAL = 0, + _D3DKMT_MEMORY_SEGMENT_GROUP_NON_LOCAL = 1 +@@ -1453,12 +1571,16 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x06, struct d3dkmt_createallocation) + #define LX_DXCREATEPAGINGQUEUE \ + _IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue) ++#define LX_DXRESERVEGPUVIRTUALADDRESS \ ++ _IOWR(0x47, 0x08, struct d3dddi_reservegpuvirtualaddress) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXQUERYVIDEOMEMORYINFO \ + _IOWR(0x47, 0x0a, struct d3dkmt_queryvideomemoryinfo) + #define LX_DXMAKERESIDENT \ + _IOWR(0x47, 0x0b, struct d3dddi_makeresident) ++#define LX_DXMAPGPUVIRTUALADDRESS \ ++ _IOWR(0x47, 0x0c, struct d3dddi_mapgpuvirtualaddress) + #define LX_DXESCAPE \ + _IOWR(0x47, 0x0d, struct d3dkmt_escape) + #define LX_DXGETDEVICESTATE \ +@@ -1493,6 +1615,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x1e, struct d3dkmt_evict) + #define LX_DXFLUSHHEAPTRANSITIONS \ + _IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions) ++#define LX_DXFREEGPUVIRTUALADDRESS \ ++ _IOWR(0x47, 0x20, struct d3dkmt_freegpuvirtualaddress) + #define LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY \ + _IOWR(0x47, 0x21, struct d3dkmt_getcontextinprocessschedulingpriority) + #define LX_DXGETCONTEXTSCHEDULINGPRIORITY \ +@@ -1529,6 +1653,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x37, struct d3dkmt_unlock2) + #define LX_DXUPDATEALLOCPROPERTY \ + _IOWR(0x47, 0x38, struct d3dddi_updateallocproperty) ++#define LX_DXUPDATEGPUVIRTUALADDRESS \ ++ _IOWR(0x47, 0x39, struct d3dkmt_updategpuvirtualaddress) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1696-drivers-hv-dxgkrnl-Add-support-to-map-guest-pages-by-host.patch b/patch/kernel/archive/wsl2-arm64-6.1/1696-drivers-hv-dxgkrnl-Add-support-to-map-guest-pages-by-host.patch new file mode 100644 index 000000000000..1d4f77001bf6 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1696-drivers-hv-dxgkrnl-Add-support-to-map-guest-pages-by-host.patch @@ -0,0 +1,313 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Fri, 8 Oct 2021 14:17:39 -0700 +Subject: drivers: hv: dxgkrnl: Add support to map guest pages by host + +Implement support for mapping guest memory pages by the host. +This removes hyper-v limitations of using GPADL (guest physical +address list). + +Dxgkrnl uses hyper-v GPADLs to share guest system memory with the +host. This method has limitations: +- a single GPADL can represent only ~32MB of memory +- there is a limit of how much memory the total size of GPADLs + in a VM can represent. +To avoid these limitations the host implemented mapping guest memory +pages. Presence of this support is determined by reading PCI config +space. 
When the support is enabled, dxgkrnl does not use GPADLs and +instead uses the following code flow: +- memory pages of an existing system memory buffer are pinned +- PFNs of the pages are sent to the host via a VM bus message +- the host maps the PFNs to get access to the memory + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/Makefile | 2 +- + drivers/hv/dxgkrnl/dxgkrnl.h | 1 + + drivers/hv/dxgkrnl/dxgmodule.c | 33 ++- + drivers/hv/dxgkrnl/dxgvmbus.c | 117 +++++++--- + drivers/hv/dxgkrnl/dxgvmbus.h | 10 + + drivers/hv/dxgkrnl/misc.c | 1 + + 6 files changed, 129 insertions(+), 35 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile +index 9d821e83448a..fc85a47a6ad5 100644 +--- a/drivers/hv/dxgkrnl/Makefile ++++ b/drivers/hv/dxgkrnl/Makefile +@@ -2,4 +2,4 @@ + # Makefile for the hyper-v compute device driver (dxgkrnl). + + obj-$(CONFIG_DXGKRNL) += dxgkrnl.o +-dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o ++dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 93bc9b41aa41..091dbe999d33 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -316,6 +316,7 @@ struct dxgglobal { + bool misc_registered; + bool pci_registered; + bool vmbus_registered; ++ bool map_guest_pages_enabled; + }; + + static inline struct dxgglobal *dxggbl(void) +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 5c364a46b65f..b1b612b90fc1 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -147,7 +147,7 @@ void dxgglobal_remove_host_event(struct dxghostevent *event) + + void signal_host_cpu_event(struct dxghostevent *eventhdr) + { +- struct dxghosteventcpu *event = (struct dxghosteventcpu *)eventhdr; ++ struct dxghosteventcpu *event = (struct dxghosteventcpu *)eventhdr; + + if (event->remove_from_list || + event->destroy_after_signal) { +@@ -426,7 +426,11 @@ const struct file_operations dxgk_fops = { + #define DXGK_VMBUS_VGPU_LUID_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \ + sizeof(u32)) + +-/* The guest writes its capabilities to this address */ ++/* The host caps (dxgk_vmbus_hostcaps) */ ++#define DXGK_VMBUS_HOSTCAPS_OFFSET (DXGK_VMBUS_VGPU_LUID_OFFSET + \ ++ sizeof(struct winluid)) ++ ++/* The guest writes its capavilities to this adderss */ + #define DXGK_VMBUS_GUESTCAPS_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \ + sizeof(u32)) + +@@ -441,6 +445,23 @@ struct dxgk_vmbus_guestcaps { + }; + }; + ++/* ++ * The structure defines features, supported by the host. ++ * ++ * map_guest_memory ++ * Host can map guest memory pages, so the guest can avoid using GPADLs ++ * to represent existing system memory allocations. ++ */ ++struct dxgk_vmbus_hostcaps { ++ union { ++ struct { ++ u32 map_guest_memory : 1; ++ u32 reserved : 31; ++ }; ++ u32 host_caps; ++ }; ++}; ++ + /* + * A helper function to read PCI config space. 
+ */ +@@ -475,6 +496,7 @@ static int dxg_pci_probe_device(struct pci_dev *dev, + struct winluid vgpu_luid = {}; + struct dxgk_vmbus_guestcaps guest_caps = {.wsl2 = 1}; + struct dxgglobal *dxgglobal = dxggbl(); ++ struct dxgk_vmbus_hostcaps host_caps = {}; + + mutex_lock(&dxgglobal->device_mutex); + +@@ -503,6 +525,13 @@ static int dxg_pci_probe_device(struct pci_dev *dev, + if (ret) + goto cleanup; + ++ ret = pci_read_config_dword(dev, DXGK_VMBUS_HOSTCAPS_OFFSET, ++ &host_caps.host_caps); ++ if (ret == 0) { ++ if (host_caps.map_guest_memory) ++ dxgglobal->map_guest_pages_enabled = true; ++ } ++ + if (dxgglobal->vmbus_ver > DXGK_VMBUS_INTERFACE_VERSION) + dxgglobal->vmbus_ver = DXGK_VMBUS_INTERFACE_VERSION; + } +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 425a1ab87bd6..4d7807909284 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1383,15 +1383,19 @@ int create_existing_sysmem(struct dxgdevice *device, + void *kmem = NULL; + int ret = 0; + struct dxgkvmb_command_setexistingsysmemstore *set_store_command; ++ struct dxgkvmb_command_setexistingsysmempages *set_pages_command; + u64 alloc_size = host_alloc->allocation_size; + u32 npages = alloc_size >> PAGE_SHIFT; + struct dxgvmbusmsg msg = {.hdr = NULL}; +- +- ret = init_message(&msg, device->adapter, device->process, +- sizeof(*set_store_command)); +- if (ret) +- goto cleanup; +- set_store_command = (void *)msg.msg; ++ const u32 max_pfns_in_message = ++ (DXG_MAX_VM_BUS_PACKET_SIZE - sizeof(*set_pages_command) - ++ PAGE_SIZE) / sizeof(__u64); ++ u32 alloc_offset_in_pages = 0; ++ struct page **page_in; ++ u64 *pfn; ++ u32 pages_to_send; ++ u32 i; ++ struct dxgglobal *dxgglobal = dxggbl(); + + /* + * Create a guest physical address list and set it as the allocation +@@ -1402,6 +1406,7 @@ int create_existing_sysmem(struct dxgdevice *device, + DXG_TRACE("Alloc size: %lld", alloc_size); + + dxgalloc->cpu_address = (void *)sysmem; ++ + dxgalloc->pages = vzalloc(npages * sizeof(void *)); + if (dxgalloc->pages == NULL) { + DXG_ERR("failed to allocate pages"); +@@ -1419,39 +1424,87 @@ int create_existing_sysmem(struct dxgdevice *device, + ret = -ENOMEM; + goto cleanup; + } +- kmem = vmap(dxgalloc->pages, npages, VM_MAP, PAGE_KERNEL); +- if (kmem == NULL) { +- DXG_ERR("vmap failed"); +- ret = -ENOMEM; +- goto cleanup; +- } +- ret1 = vmbus_establish_gpadl(dxgglobal_get_vmbus(), kmem, +- alloc_size, &dxgalloc->gpadl); +- if (ret1) { +- DXG_ERR("establish_gpadl failed: %d", ret1); +- ret = -ENOMEM; +- goto cleanup; +- } ++ if (!dxgglobal->map_guest_pages_enabled) { ++ ret = init_message(&msg, device->adapter, device->process, ++ sizeof(*set_store_command)); ++ if (ret) ++ goto cleanup; ++ set_store_command = (void *)msg.msg; ++ ++ kmem = vmap(dxgalloc->pages, npages, VM_MAP, PAGE_KERNEL); ++ if (kmem == NULL) { ++ DXG_ERR("vmap failed"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret1 = vmbus_establish_gpadl(dxgglobal_get_vmbus(), kmem, ++ alloc_size, &dxgalloc->gpadl); ++ if (ret1) { ++ DXG_ERR("establish_gpadl failed: %d", ret1); ++ ret = -ENOMEM; ++ goto cleanup; ++ } + #ifdef _MAIN_KERNEL_ +- DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle); ++ DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle); + #else +- DXG_TRACE("New gpadl %d", dxgalloc->gpadl); ++ DXG_TRACE("New gpadl %d", dxgalloc->gpadl); + #endif + +- command_vgpu_to_host_init2(&set_store_command->hdr, +- DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE, +- device->process->host_handle); +- set_store_command->device = 
device->handle; +- set_store_command->device = device->handle; +- set_store_command->allocation = host_alloc->allocation; ++ command_vgpu_to_host_init2(&set_store_command->hdr, ++ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE, ++ device->process->host_handle); ++ set_store_command->device = device->handle; ++ set_store_command->allocation = host_alloc->allocation; + #ifdef _MAIN_KERNEL_ +- set_store_command->gpadl = dxgalloc->gpadl.gpadl_handle; ++ set_store_command->gpadl = dxgalloc->gpadl.gpadl_handle; + #else +- set_store_command->gpadl = dxgalloc->gpadl; ++ set_store_command->gpadl = dxgalloc->gpadl; + #endif +- ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); +- if (ret < 0) +- DXG_ERR("failed to set existing store: %x", ret); ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, ++ msg.size); ++ if (ret < 0) ++ DXG_ERR("failed set existing store: %x", ret); ++ } else { ++ /* ++ * Send the list of the allocation PFNs to the host. The host ++ * will map the pages for GPU access. ++ */ ++ ++ ret = init_message(&msg, device->adapter, device->process, ++ sizeof(*set_pages_command) + ++ max_pfns_in_message * sizeof(u64)); ++ if (ret) ++ goto cleanup; ++ set_pages_command = (void *)msg.msg; ++ command_vgpu_to_host_init2(&set_pages_command->hdr, ++ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMPAGES, ++ device->process->host_handle); ++ set_pages_command->device = device->handle; ++ set_pages_command->allocation = host_alloc->allocation; ++ ++ page_in = dxgalloc->pages; ++ while (alloc_offset_in_pages < npages) { ++ pfn = (u64 *)((char *)msg.msg + ++ sizeof(*set_pages_command)); ++ pages_to_send = min(npages - alloc_offset_in_pages, ++ max_pfns_in_message); ++ set_pages_command->num_pages = pages_to_send; ++ set_pages_command->alloc_offset_in_pages = ++ alloc_offset_in_pages; ++ ++ for (i = 0; i < pages_to_send; i++) ++ *pfn++ = page_to_pfn(*page_in++); ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, ++ msg.hdr, ++ msg.size); ++ if (ret < 0) { ++ DXG_ERR("failed set existing pages: %x", ret); ++ break; ++ } ++ alloc_offset_in_pages += pages_to_send; ++ } ++ } + + cleanup: + if (kmem) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 88967ff6a505..b4a98f7c2522 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -234,6 +234,16 @@ struct dxgkvmb_command_setexistingsysmemstore { + u32 gpadl; + }; + ++/* Returns ntstatus */ ++struct dxgkvmb_command_setexistingsysmempages { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle allocation; ++ u32 num_pages; ++ u32 alloc_offset_in_pages; ++ /* u64 pfn_array[num_pages] */ ++}; ++ + struct dxgkvmb_command_createprocess { + struct dxgkvmb_command_vm_to_host hdr; + void *process; +diff --git a/drivers/hv/dxgkrnl/misc.c b/drivers/hv/dxgkrnl/misc.c +index cb1e0635bebc..4a1309d80ee5 100644 +--- a/drivers/hv/dxgkrnl/misc.c ++++ b/drivers/hv/dxgkrnl/misc.c +@@ -35,3 +35,4 @@ u16 *wcsncpy(u16 *dest, const u16 *src, size_t n) + dest[i - 1] = 0; + return dest; + } ++ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1697-drivers-hv-dxgkrnl-Removed-struct-vmbus_gpadl-which-was-defined-in-the-main-linux-branch.patch b/patch/kernel/archive/wsl2-arm64-6.1/1697-drivers-hv-dxgkrnl-Removed-struct-vmbus_gpadl-which-was-defined-in-the-main-linux-branch.patch new file mode 100644 index 000000000000..3dd5107c34f8 --- /dev/null +++ 
b/patch/kernel/archive/wsl2-arm64-6.1/1697-drivers-hv-dxgkrnl-Removed-struct-vmbus_gpadl-which-was-defined-in-the-main-linux-branch.patch @@ -0,0 +1,29 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Mon, 21 Mar 2022 20:32:44 -0700 +Subject: drivers: hv: dxgkrnl: Removed struct vmbus_gpadl, which was defined + in the main linux branch + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 6f763e326a65..236febbc6fca 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -932,7 +932,7 @@ void dxgallocation_destroy(struct dxgallocation *alloc) + vmbus_teardown_gpadl(dxgglobal_get_vmbus(), &alloc->gpadl); + alloc->gpadl.gpadl_handle = 0; + } +-else ++#else + if (alloc->gpadl) { + DXG_TRACE("Teardown gpadl %d", alloc->gpadl); + vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl); +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1698-drivers-hv-dxgkrnl-Remove-dxgk_init_ioctls.patch b/patch/kernel/archive/wsl2-arm64-6.1/1698-drivers-hv-dxgkrnl-Remove-dxgk_init_ioctls.patch new file mode 100644 index 000000000000..ab84c3939431 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1698-drivers-hv-dxgkrnl-Remove-dxgk_init_ioctls.patch @@ -0,0 +1,100 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 22 Mar 2022 10:32:54 -0700 +Subject: drivers: hv: dxgkrnl: Remove dxgk_init_ioctls + +The array of ioctls is initialized statically to remove the unnecessary +function. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgmodule.c | 2 +- + drivers/hv/dxgkrnl/ioctl.c | 15 +++++----- + 2 files changed, 8 insertions(+), 9 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index b1b612b90fc1..f1245a9d8826 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -300,7 +300,7 @@ static void dxgglobal_start_adapters(void) + } + + /* +- * Stopsthe active dxgadapter objects. ++ * Stop the active dxgadapter objects. 
+ */ + static void dxgglobal_stop_adapters(void) + { +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index f6700e974f25..8732a66040a0 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -26,7 +26,6 @@ + struct ioctl_desc { + int (*ioctl_callback)(struct dxgprocess *p, void __user *arg); + u32 ioctl; +- u32 arg_size; + }; + + #ifdef DEBUG +@@ -91,7 +90,7 @@ static const struct file_operations dxg_resource_fops = { + }; + + static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, +- void *__user inargs) ++ void *__user inargs) + { + struct d3dkmt_openadapterfromluid args; + int ret; +@@ -1002,7 +1001,7 @@ dxgkio_create_hwqueue(struct dxgprocess *process, void *__user inargs) + } + + static int dxgkio_destroy_hwqueue(struct dxgprocess *process, +- void *__user inargs) ++ void *__user inargs) + { + struct d3dkmt_destroyhwqueue args; + int ret; +@@ -2280,7 +2279,8 @@ dxgkio_submit_command(struct dxgprocess *process, void *__user inargs) + } + + static int +-dxgkio_submit_command_to_hwqueue(struct dxgprocess *process, void *__user inargs) ++dxgkio_submit_command_to_hwqueue(struct dxgprocess *process, ++ void *__user inargs) + { + int ret; + struct d3dkmt_submitcommandtohwqueue args; +@@ -5087,8 +5087,7 @@ open_resource(struct dxgprocess *process, + } + + static int +-dxgkio_open_resource_nt(struct dxgprocess *process, +- void *__user inargs) ++dxgkio_open_resource_nt(struct dxgprocess *process, void *__user inargs) + { + struct d3dkmt_openresourcefromnthandle args; + struct d3dkmt_openresourcefromnthandle *__user args_user = inargs; +@@ -5166,7 +5165,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2}, + /* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER}, + /* 0x16 */ {dxgkio_change_vidmem_reservation, +- LX_DXCHANGEVIDEOMEMORYRESERVATION}, ++ LX_DXCHANGEVIDEOMEMORYRESERVATION}, + /* 0x17 */ {}, + /* 0x18 */ {dxgkio_create_hwqueue, LX_DXCREATEHWQUEUE}, + /* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE}, +@@ -5205,7 +5204,7 @@ static struct ioctl_desc ioctls[] = { + LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2}, + /* 0x34 */ {dxgkio_submit_command_to_hwqueue, LX_DXSUBMITCOMMANDTOHWQUEUE}, + /* 0x35 */ {dxgkio_submit_signal_to_hwqueue, +- LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE}, ++ LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE}, + /* 0x36 */ {dxgkio_submit_wait_to_hwqueue, + LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE}, + /* 0x37 */ {dxgkio_unlock2, LX_DXUNLOCK2}, +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1699-drivers-hv-dxgkrnl-Creation-of-dxgsyncfile-objects.patch b/patch/kernel/archive/wsl2-arm64-6.1/1699-drivers-hv-dxgkrnl-Creation-of-dxgsyncfile-objects.patch new file mode 100644 index 000000000000..221f67a88890 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1699-drivers-hv-dxgkrnl-Creation-of-dxgsyncfile-objects.patch @@ -0,0 +1,482 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 22 Mar 2022 11:02:49 -0700 +Subject: drivers: hv: dxgkrnl: Creation of dxgsyncfile objects + +Implement the ioctl to create a dxgsyncfile object +(LX_DXCREATESYNCFILE). This object is a wrapper around a monitored +fence sync object and a fence value. + +dxgsyncfile is built on top of the Linux sync_file object and +provides a way for the user mode to synchronize with the execution +of the device DMA packets. + +The ioctl creates a dxgsyncfile object for the given GPU synchronization +object and a fence value. 
A file descriptor of the sync_file object +is returned to the caller. The caller could wait for the object by using +poll(). When the underlying GPU synchronization object is signaled on +the host, the host sends a message to the virtual machine and the +sync_file object is signaled. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/Kconfig | 2 + + drivers/hv/dxgkrnl/Makefile | 2 +- + drivers/hv/dxgkrnl/dxgkrnl.h | 2 + + drivers/hv/dxgkrnl/dxgmodule.c | 12 + + drivers/hv/dxgkrnl/dxgsyncfile.c | 215 ++++++++++ + drivers/hv/dxgkrnl/dxgsyncfile.h | 30 ++ + drivers/hv/dxgkrnl/dxgvmbus.c | 33 +- + drivers/hv/dxgkrnl/ioctl.c | 5 +- + include/uapi/misc/d3dkmthk.h | 9 + + 9 files changed, 294 insertions(+), 16 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/Kconfig b/drivers/hv/dxgkrnl/Kconfig +index bcd92bbff939..782692610887 100644 +--- a/drivers/hv/dxgkrnl/Kconfig ++++ b/drivers/hv/dxgkrnl/Kconfig +@@ -6,6 +6,8 @@ config DXGKRNL + tristate "Microsoft Paravirtualized GPU support" + depends on HYPERV + depends on 64BIT || COMPILE_TEST ++ select DMA_SHARED_BUFFER ++ select SYNC_FILE + help + This driver supports paravirtualized virtual compute devices, exposed + by Microsoft Hyper-V when Linux is running inside of a virtual machine +diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile +index fc85a47a6ad5..89824cda670a 100644 +--- a/drivers/hv/dxgkrnl/Makefile ++++ b/drivers/hv/dxgkrnl/Makefile +@@ -2,4 +2,4 @@ + # Makefile for the hyper-v compute device driver (dxgkrnl). + + obj-$(CONFIG_DXGKRNL) += dxgkrnl.o +-dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o ++dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o dxgsyncfile.o +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 091dbe999d33..3a69e3b34e1c 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -120,6 +120,7 @@ struct dxgpagingqueue { + */ + enum dxghosteventtype { + dxghostevent_cpu_event = 1, ++ dxghostevent_dma_fence = 2, + }; + + struct dxghostevent { +@@ -858,6 +859,7 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + struct + d3dkmt_waitforsynchronizationobjectfromcpu + *args, ++ bool user_address, + u64 cpu_event); + int dxgvmb_send_lock2(struct dxgprocess *process, + struct dxgadapter *adapter, +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index f1245a9d8826..af51fcd35697 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -16,6 +16,7 @@ + #include + #include + #include "dxgkrnl.h" ++#include "dxgsyncfile.h" + + #define PCI_VENDOR_ID_MICROSOFT 0x1414 + #define PCI_DEVICE_ID_VIRTUAL_RENDER 0x008E +@@ -145,6 +146,15 @@ void dxgglobal_remove_host_event(struct dxghostevent *event) + spin_unlock_irq(&dxgglobal->host_event_list_mutex); + } + ++static void signal_dma_fence(struct dxghostevent *eventhdr) ++{ ++ struct dxgsyncpoint *event = (struct dxgsyncpoint *)eventhdr; ++ ++ event->fence_value++; ++ list_del(&eventhdr->host_event_list_entry); ++ dma_fence_signal(&event->base); ++} ++ + void signal_host_cpu_event(struct dxghostevent *eventhdr) + { + struct dxghosteventcpu *event = (struct dxghosteventcpu *)eventhdr; +@@ -184,6 +194,8 @@ void dxgglobal_signal_host_event(u64 event_id) + DXG_TRACE("found event to signal"); + if (event->event_type == dxghostevent_cpu_event) + signal_host_cpu_event(event); ++ else if (event->event_type == 
dxghostevent_dma_fence) ++ signal_dma_fence(event); + else + DXG_ERR("Unknown host event type"); + break; +diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.c b/drivers/hv/dxgkrnl/dxgsyncfile.c +new file mode 100644 +index 000000000000..88fd78f08fbe +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgsyncfile.c +@@ -0,0 +1,215 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Ioctl implementation ++ * ++ */ ++ ++#include ++#include ++#include ++#include ++#include ++ ++#include "dxgkrnl.h" ++#include "dxgvmbus.h" ++#include "dxgsyncfile.h" ++ ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt ++ ++#ifdef DEBUG ++static char *errorstr(int ret) ++{ ++ return ret < 0 ? "err" : ""; ++} ++#endif ++ ++static const struct dma_fence_ops dxgdmafence_ops; ++ ++static struct dxgsyncpoint *to_syncpoint(struct dma_fence *fence) ++{ ++ if (fence->ops != &dxgdmafence_ops) ++ return NULL; ++ return container_of(fence, struct dxgsyncpoint, base); ++} ++ ++int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_createsyncfile args; ++ struct dxgsyncpoint *pt = NULL; ++ int ret = 0; ++ int fd = get_unused_fd_flags(O_CLOEXEC); ++ struct sync_file *sync_file = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmt_waitforsynchronizationobjectfromcpu waitargs = {}; ++ ++ if (fd < 0) { ++ DXG_ERR("get_unused_fd_flags failed: %d", fd); ++ ret = fd; ++ goto cleanup; ++ } ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EFAULT; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ DXG_ERR("dxgprocess_device_by_handle failed"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ DXG_ERR("dxgdevice_acquire_lock_shared failed"); ++ device = NULL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ DXG_ERR("dxgadapter_acquire_lock_shared failed"); ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ pt = kzalloc(sizeof(*pt), GFP_KERNEL); ++ if (!pt) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ spin_lock_init(&pt->lock); ++ pt->fence_value = args.fence_value; ++ pt->context = dma_fence_context_alloc(1); ++ pt->hdr.event_id = dxgglobal_new_host_event_id(); ++ pt->hdr.event_type = dxghostevent_dma_fence; ++ dxgglobal_add_host_event(&pt->hdr); ++ ++ dma_fence_init(&pt->base, &dxgdmafence_ops, &pt->lock, ++ pt->context, args.fence_value); ++ ++ sync_file = sync_file_create(&pt->base); ++ if (sync_file == NULL) { ++ DXG_ERR("sync_file_create failed"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ dma_fence_put(&pt->base); ++ ++ waitargs.device = args.device; ++ waitargs.object_count = 1; ++ waitargs.objects = &args.monitored_fence; ++ waitargs.fence_values = &args.fence_value; ++ ret = dxgvmb_send_wait_sync_object_cpu(process, adapter, ++ &waitargs, false, ++ pt->hdr.event_id); ++ if (ret < 0) { ++ DXG_ERR("dxgvmb_send_wait_sync_object_cpu failed"); ++ goto cleanup; ++ } ++ ++ args.sync_file_handle = (u64)fd; ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EFAULT; ++ goto cleanup; ++ } ++ ++ fd_install(fd, sync_file->file); ++ ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if 
(device) ++ dxgdevice_release_lock_shared(device); ++ if (ret) { ++ if (sync_file) { ++ fput(sync_file->file); ++ /* sync_file_release will destroy dma_fence */ ++ pt = NULL; ++ } ++ if (pt) ++ dma_fence_put(&pt->base); ++ if (fd >= 0) ++ put_unused_fd(fd); ++ } ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static const char *dxgdmafence_get_driver_name(struct dma_fence *fence) ++{ ++ return "dxgkrnl"; ++} ++ ++static const char *dxgdmafence_get_timeline_name(struct dma_fence *fence) ++{ ++ return "no_timeline"; ++} ++ ++static void dxgdmafence_release(struct dma_fence *fence) ++{ ++ struct dxgsyncpoint *syncpoint; ++ ++ syncpoint = to_syncpoint(fence); ++ if (syncpoint) { ++ if (syncpoint->hdr.event_id) ++ dxgglobal_get_host_event(syncpoint->hdr.event_id); ++ kfree(syncpoint); ++ } ++} ++ ++static bool dxgdmafence_signaled(struct dma_fence *fence) ++{ ++ struct dxgsyncpoint *syncpoint; ++ ++ syncpoint = to_syncpoint(fence); ++ if (syncpoint == 0) ++ return true; ++ return __dma_fence_is_later(syncpoint->fence_value, fence->seqno, ++ fence->ops); ++} ++ ++static bool dxgdmafence_enable_signaling(struct dma_fence *fence) ++{ ++ return true; ++} ++ ++static void dxgdmafence_value_str(struct dma_fence *fence, ++ char *str, int size) ++{ ++ snprintf(str, size, "%lld", fence->seqno); ++} ++ ++static void dxgdmafence_timeline_value_str(struct dma_fence *fence, ++ char *str, int size) ++{ ++ struct dxgsyncpoint *syncpoint; ++ ++ syncpoint = to_syncpoint(fence); ++ snprintf(str, size, "%lld", syncpoint->fence_value); ++} ++ ++static const struct dma_fence_ops dxgdmafence_ops = { ++ .get_driver_name = dxgdmafence_get_driver_name, ++ .get_timeline_name = dxgdmafence_get_timeline_name, ++ .enable_signaling = dxgdmafence_enable_signaling, ++ .signaled = dxgdmafence_signaled, ++ .release = dxgdmafence_release, ++ .fence_value_str = dxgdmafence_value_str, ++ .timeline_value_str = dxgdmafence_timeline_value_str, ++}; +diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.h b/drivers/hv/dxgkrnl/dxgsyncfile.h +new file mode 100644 +index 000000000000..207ef9b30f67 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgsyncfile.h +@@ -0,0 +1,30 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. 
++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Headers for sync file objects ++ * ++ */ ++ ++#ifndef _DXGSYNCFILE_H ++#define _DXGSYNCFILE_H ++ ++#include ++ ++int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs); ++ ++struct dxgsyncpoint { ++ struct dxghostevent hdr; ++ struct dma_fence base; ++ u64 fence_value; ++ u64 context; ++ spinlock_t lock; ++ u64 u64; ++}; ++ ++#endif /* _DXGSYNCFILE_H */ +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 4d7807909284..913ea3cabb31 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -2820,6 +2820,7 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + struct + d3dkmt_waitforsynchronizationobjectfromcpu + *args, ++ bool user_address, + u64 cpu_event) + { + int ret = -EINVAL; +@@ -2844,19 +2845,25 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + command->guest_event_pointer = (u64) cpu_event; + current_pos = (u8 *) &command[1]; + +- ret = copy_from_user(current_pos, args->objects, object_size); +- if (ret) { +- DXG_ERR("failed to copy objects"); +- ret = -EINVAL; +- goto cleanup; +- } +- current_pos += object_size; +- ret = copy_from_user(current_pos, args->fence_values, +- fence_size); +- if (ret) { +- DXG_ERR("failed to copy fences"); +- ret = -EINVAL; +- goto cleanup; ++ if (user_address) { ++ ret = copy_from_user(current_pos, args->objects, object_size); ++ if (ret) { ++ DXG_ERR("failed to copy objects"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ current_pos += object_size; ++ ret = copy_from_user(current_pos, args->fence_values, ++ fence_size); ++ if (ret) { ++ DXG_ERR("failed to copy fences"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } else { ++ memcpy(current_pos, args->objects, object_size); ++ current_pos += object_size; ++ memcpy(current_pos, args->fence_values, fence_size); + } + + ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 8732a66040a0..6c26aafb0619 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -19,6 +19,7 @@ + + #include "dxgkrnl.h" + #include "dxgvmbus.h" ++#include "dxgsyncfile.h" + + #undef pr_fmt + #define pr_fmt(fmt) "dxgk: " fmt +@@ -3488,7 +3489,7 @@ dxgkio_wait_sync_object_cpu(struct dxgprocess *process, void *__user inargs) + } + + ret = dxgvmb_send_wait_sync_object_cpu(process, adapter, +- &args, event_id); ++ &args, true, event_id); + if (ret < 0) + goto cleanup; + +@@ -5224,7 +5225,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE}, + /* 0x43 */ {dxgkio_query_statistics, LX_DXQUERYSTATISTICS}, + /* 0x44 */ {dxgkio_share_object_with_host, LX_DXSHAREOBJECTWITHHOST}, +-/* 0x45 */ {}, ++/* 0x45 */ {dxgkio_create_sync_file, LX_DXCREATESYNCFILE}, + }; + + /* +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 1f60f5120e1d..c7f168425dc7 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -1554,6 +1554,13 @@ struct d3dkmt_shareobjectwithhost { + __u64 object_vail_nt_handle; + }; + ++struct d3dkmt_createsyncfile { ++ struct d3dkmthandle device; ++ struct d3dkmthandle monitored_fence; ++ __u64 fence_value; ++ __u64 sync_file_handle; /* out */ ++}; ++ + /* + * Dxgkrnl Graphics Port Driver ioctl definitions + * +@@ -1677,5 +1684,7 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x43, struct d3dkmt_querystatistics) + 
#define LX_DXSHAREOBJECTWITHHOST \ + _IOWR(0x47, 0x44, struct d3dkmt_shareobjectwithhost) ++#define LX_DXCREATESYNCFILE \ ++ _IOWR(0x47, 0x45, struct d3dkmt_createsyncfile) + + #endif /* _D3DKMTHK_H */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1700-drivers-hv-dxgkrnl-Use-tracing-instead-of-dev_dbg.patch b/patch/kernel/archive/wsl2-arm64-6.1/1700-drivers-hv-dxgkrnl-Use-tracing-instead-of-dev_dbg.patch new file mode 100644 index 000000000000..5795bc96d37c --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1700-drivers-hv-dxgkrnl-Use-tracing-instead-of-dev_dbg.patch @@ -0,0 +1,205 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Thu, 24 Mar 2022 15:03:41 -0700 +Subject: drivers: hv: dxgkrnl: Use tracing instead of dev_dbg + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 4 +-- + drivers/hv/dxgkrnl/dxgmodule.c | 5 ++- + drivers/hv/dxgkrnl/dxgprocess.c | 6 ++-- + drivers/hv/dxgkrnl/dxgvmbus.c | 4 +-- + drivers/hv/dxgkrnl/hmgr.c | 16 +++++----- + drivers/hv/dxgkrnl/ioctl.c | 8 ++--- + drivers/hv/dxgkrnl/misc.c | 4 +-- + 7 files changed, 25 insertions(+), 22 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 236febbc6fca..3d8bec295b87 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -18,8 +18,8 @@ + + #include "dxgkrnl.h" + +-#undef pr_fmt +-#define pr_fmt(fmt) "dxgk: " fmt ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt + + int dxgadapter_set_vmbus(struct dxgadapter *adapter, struct hv_device *hdev) + { +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index af51fcd35697..08feae97e845 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -24,6 +24,9 @@ + #undef pr_fmt + #define pr_fmt(fmt) "dxgk: " fmt + ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt ++ + /* + * Interface from dxgglobal + */ +@@ -442,7 +445,7 @@ const struct file_operations dxgk_fops = { + #define DXGK_VMBUS_HOSTCAPS_OFFSET (DXGK_VMBUS_VGPU_LUID_OFFSET + \ + sizeof(struct winluid)) + +-/* The guest writes its capavilities to this adderss */ ++/* The guest writes its capabilities to this address */ + #define DXGK_VMBUS_GUESTCAPS_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \ + sizeof(u32)) + +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index 5de3f8ccb448..afef196c0588 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -13,8 +13,8 @@ + + #include "dxgkrnl.h" + +-#undef pr_fmt +-#define pr_fmt(fmt) "dxgk: " fmt ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt + + /* + * Creates a new dxgprocess object +@@ -248,7 +248,7 @@ struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process, + HMGRENTRY_TYPE_DXGADAPTER, + handle); + if (adapter == NULL) +- DXG_ERR("adapter_by_handle failed %x", handle.v); ++ DXG_TRACE("adapter_by_handle failed %x", handle.v); + else if (kref_get_unless_zero(&adapter->adapter_kref) == 0) { + DXG_ERR("failed to acquire adapter reference"); + adapter = NULL; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 913ea3cabb31..d53d4254be63 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -22,8 +22,8 @@ + #include "dxgkrnl.h" + #include "dxgvmbus.h" + +-#undef pr_fmt +-#define pr_fmt(fmt) "dxgk: " fmt ++#undef dev_fmt ++#define dev_fmt(fmt) 
"dxgk: " fmt + + #define RING_BUFSIZE (256 * 1024) + +diff --git a/drivers/hv/dxgkrnl/hmgr.c b/drivers/hv/dxgkrnl/hmgr.c +index 526b50f46d96..24101d0091ab 100644 +--- a/drivers/hv/dxgkrnl/hmgr.c ++++ b/drivers/hv/dxgkrnl/hmgr.c +@@ -19,8 +19,8 @@ + #include "dxgkrnl.h" + #include "hmgr.h" + +-#undef pr_fmt +-#define pr_fmt(fmt) "dxgk: " fmt ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt + + const struct d3dkmthandle zerohandle; + +@@ -90,29 +90,29 @@ static bool is_handle_valid(struct hmgrtable *table, struct d3dkmthandle h, + struct hmgrentry *entry; + + if (index >= table->table_size) { +- DXG_ERR("Invalid index %x %d", h.v, index); ++ DXG_TRACE("Invalid index %x %d", h.v, index); + return false; + } + + entry = &table->entry_table[index]; + if (unique != entry->unique) { +- DXG_ERR("Invalid unique %x %d %d %d %p", ++ DXG_TRACE("Invalid unique %x %d %d %d %p", + h.v, unique, entry->unique, index, entry->object); + return false; + } + + if (entry->destroyed && !ignore_destroyed) { +- DXG_ERR("Invalid destroyed value"); ++ DXG_TRACE("Invalid destroyed value"); + return false; + } + + if (entry->type == HMGRENTRY_TYPE_FREE) { +- DXG_ERR("Entry is freed %x %d", h.v, index); ++ DXG_TRACE("Entry is freed %x %d", h.v, index); + return false; + } + + if (t != HMGRENTRY_TYPE_FREE && t != entry->type) { +- DXG_ERR("type mismatch %x %d %d", h.v, t, entry->type); ++ DXG_TRACE("type mismatch %x %d %d", h.v, t, entry->type); + return false; + } + +@@ -500,7 +500,7 @@ void *hmgrtable_get_object_by_type(struct hmgrtable *table, + struct d3dkmthandle h) + { + if (!is_handle_valid(table, h, false, type)) { +- DXG_ERR("Invalid handle %x", h.v); ++ DXG_TRACE("Invalid handle %x", h.v); + return NULL; + } + return table->entry_table[get_index(h)].object; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 6c26aafb0619..4db23cd55b24 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -21,8 +21,8 @@ + #include "dxgvmbus.h" + #include "dxgsyncfile.h" + +-#undef pr_fmt +-#define pr_fmt(fmt) "dxgk: " fmt ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt + + struct ioctl_desc { + int (*ioctl_callback)(struct dxgprocess *p, void __user *arg); +@@ -556,7 +556,7 @@ dxgkio_enum_adapters3(struct dxgprocess *process, void *__user inargs) + + cleanup: + +- DXG_TRACE("ioctl: %s %d", errorstr(ret), ret); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); + return ret; + } + +@@ -5242,7 +5242,7 @@ static int dxgk_ioctl(struct file *f, unsigned int p1, unsigned long p2) + int status; + struct dxgprocess *process; + +- if (code < 1 || code >= ARRAY_SIZE(ioctls)) { ++ if (code < 1 || code >= ARRAY_SIZE(ioctls)) { + DXG_ERR("bad ioctl %x %x %x %x", + code, _IOC_TYPE(p1), _IOC_SIZE(p1), _IOC_DIR(p1)); + return -ENOTTY; +diff --git a/drivers/hv/dxgkrnl/misc.c b/drivers/hv/dxgkrnl/misc.c +index 4a1309d80ee5..4bf6fe80d22a 100644 +--- a/drivers/hv/dxgkrnl/misc.c ++++ b/drivers/hv/dxgkrnl/misc.c +@@ -18,8 +18,8 @@ + #include "dxgkrnl.h" + #include "misc.h" + +-#undef pr_fmt +-#define pr_fmt(fmt) "dxgk: " fmt ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt + + u16 *wcsncpy(u16 *dest, const u16 *src, size_t n) + { +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1701-drivers-hv-dxgkrnl-Implement-D3DKMTWaitSyncFile.patch b/patch/kernel/archive/wsl2-arm64-6.1/1701-drivers-hv-dxgkrnl-Implement-D3DKMTWaitSyncFile.patch new file mode 100644 index 000000000000..dbf33a8e06c1 --- /dev/null +++ 
b/patch/kernel/archive/wsl2-arm64-6.1/1701-drivers-hv-dxgkrnl-Implement-D3DKMTWaitSyncFile.patch @@ -0,0 +1,658 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Mon, 2 May 2022 11:46:48 -0700 +Subject: drivers: hv: dxgkrnl: Implement D3DKMTWaitSyncFile + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 11 + + drivers/hv/dxgkrnl/dxgmodule.c | 7 +- + drivers/hv/dxgkrnl/dxgprocess.c | 12 +- + drivers/hv/dxgkrnl/dxgsyncfile.c | 291 +++++++++- + drivers/hv/dxgkrnl/dxgsyncfile.h | 3 + + drivers/hv/dxgkrnl/dxgvmbus.c | 49 ++ + drivers/hv/dxgkrnl/ioctl.c | 16 +- + include/uapi/misc/d3dkmthk.h | 23 + + 8 files changed, 396 insertions(+), 16 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 3a69e3b34e1c..d92e1348ccfb 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -254,6 +254,10 @@ void dxgsharedsyncobj_add_syncobj(struct dxgsharedsyncobject *sharedsyncobj, + struct dxgsyncobject *syncobj); + void dxgsharedsyncobj_remove_syncobj(struct dxgsharedsyncobject *sharedsyncobj, + struct dxgsyncobject *syncobj); ++int dxgsharedsyncobj_get_host_nt_handle(struct dxgsharedsyncobject *syncobj, ++ struct dxgprocess *process, ++ struct d3dkmthandle objecthandle); ++void dxgsharedsyncobj_put(struct dxgsharedsyncobject *syncobj); + + struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, + struct dxgdevice *device, +@@ -384,6 +388,8 @@ struct dxgprocess { + pid_t tgid; + /* how many time the process was opened */ + struct kref process_kref; ++ /* protects the object memory */ ++ struct kref process_mem_kref; + /* + * This handle table is used for all objects except dxgadapter + * The handle table lock order is higher than the local_handle_table +@@ -405,6 +411,7 @@ struct dxgprocess { + struct dxgprocess *dxgprocess_create(void); + void dxgprocess_destroy(struct dxgprocess *process); + void dxgprocess_release(struct kref *refcount); ++void dxgprocess_mem_release(struct kref *refcount); + int dxgprocess_open_adapter(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmthandle *handle); +@@ -932,6 +939,10 @@ int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, + struct d3dkmt_opensyncobjectfromnthandle2 + *args, + struct dxgsyncobject *syncobj); ++int dxgvmb_send_open_sync_object(struct dxgprocess *process, ++ struct d3dkmthandle device, ++ struct d3dkmthandle host_shared_syncobj, ++ struct d3dkmthandle *syncobj); + int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryallocationresidency +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 08feae97e845..5570f35954d4 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -149,10 +149,11 @@ void dxgglobal_remove_host_event(struct dxghostevent *event) + spin_unlock_irq(&dxgglobal->host_event_list_mutex); + } + +-static void signal_dma_fence(struct dxghostevent *eventhdr) ++static void dxg_signal_dma_fence(struct dxghostevent *eventhdr) + { + struct dxgsyncpoint *event = (struct dxgsyncpoint *)eventhdr; + ++ DXG_TRACE("syncpoint: %px, fence: %lld", event, event->fence_value); + event->fence_value++; + list_del(&eventhdr->host_event_list_entry); + dma_fence_signal(&event->base); +@@ -198,7 +199,7 @@ void dxgglobal_signal_host_event(u64 event_id) + if (event->event_type == 
dxghostevent_cpu_event) + signal_host_cpu_event(event); + else if (event->event_type == dxghostevent_dma_fence) +- signal_dma_fence(event); ++ dxg_signal_dma_fence(event); + else + DXG_ERR("Unknown host event type"); + break; +@@ -355,6 +356,7 @@ static struct dxgprocess *dxgglobal_get_current_process(void) + if (entry->tgid == current->tgid) { + if (kref_get_unless_zero(&entry->process_kref)) { + process = entry; ++ kref_get(&entry->process_mem_kref); + DXG_TRACE("found dxgprocess"); + } else { + DXG_TRACE("process is destroyed"); +@@ -405,6 +407,7 @@ static int dxgk_release(struct inode *n, struct file *f) + return -EINVAL; + + kref_put(&process->process_kref, dxgprocess_release); ++ kref_put(&process->process_mem_kref, dxgprocess_mem_release); + + f->private_data = NULL; + return 0; +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index afef196c0588..e77e3a4983f8 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -39,6 +39,7 @@ struct dxgprocess *dxgprocess_create(void) + } else { + INIT_LIST_HEAD(&process->plistentry); + kref_init(&process->process_kref); ++ kref_init(&process->process_mem_kref); + + mutex_lock(&dxgglobal->plistmutex); + list_add_tail(&process->plistentry, +@@ -117,8 +118,17 @@ void dxgprocess_release(struct kref *refcount) + + dxgprocess_destroy(process); + +- if (process->host_handle.v) ++ if (process->host_handle.v) { + dxgvmb_send_destroy_process(process->host_handle); ++ process->host_handle.v = 0; ++ } ++} ++ ++void dxgprocess_mem_release(struct kref *refcount) ++{ ++ struct dxgprocess *process; ++ ++ process = container_of(refcount, struct dxgprocess, process_mem_kref); + kfree(process); + } + +diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.c b/drivers/hv/dxgkrnl/dxgsyncfile.c +index 88fd78f08fbe..9d5832c90ad7 100644 +--- a/drivers/hv/dxgkrnl/dxgsyncfile.c ++++ b/drivers/hv/dxgkrnl/dxgsyncfile.c +@@ -9,6 +9,20 @@ + * Dxgkrnl Graphics Driver + * Ioctl implementation + * ++ * dxgsyncpoint: ++ * - pointer to dxgsharedsyncobject ++ * - host_shared_handle_nt_reference incremented ++ * - list of (process, local syncobj d3dkmthandle) pairs ++ * wait for sync file ++ * - get dxgsyncpoint ++ * - if process doesn't have a local syncobj ++ * - create local dxgsyncobject ++ * - send open syncobj to the host ++ * - Send wait for syncobj to the context ++ * dxgsyncpoint destruction ++ * - walk the list of (process, local syncobj) ++ * - destroy syncobj ++ * - remove reference to dxgsharedsyncobject + */ + + #include +@@ -45,12 +59,15 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) + struct d3dkmt_createsyncfile args; + struct dxgsyncpoint *pt = NULL; + int ret = 0; +- int fd = get_unused_fd_flags(O_CLOEXEC); ++ int fd; + struct sync_file *sync_file = NULL; + struct dxgdevice *device = NULL; + struct dxgadapter *adapter = NULL; ++ struct dxgsyncobject *syncobj = NULL; + struct d3dkmt_waitforsynchronizationobjectfromcpu waitargs = {}; ++ bool device_lock_acquired = false; + ++ fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) { + DXG_ERR("get_unused_fd_flags failed: %d", fd); + ret = fd; +@@ -74,9 +91,9 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) + ret = dxgdevice_acquire_lock_shared(device); + if (ret < 0) { + DXG_ERR("dxgdevice_acquire_lock_shared failed"); +- device = NULL; + goto cleanup; + } ++ device_lock_acquired = true; + + adapter = device->adapter; + ret = dxgadapter_acquire_lock_shared(adapter); +@@ -109,6 +126,30 @@ int 
dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) + } + dma_fence_put(&pt->base); + ++ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); ++ syncobj = hmgrtable_get_object(&process->handle_table, ++ args.monitored_fence); ++ if (syncobj == NULL) { ++ DXG_ERR("invalid syncobj handle %x", args.monitored_fence.v); ++ ret = -EINVAL; ++ } else { ++ if (syncobj->shared) { ++ kref_get(&syncobj->syncobj_kref); ++ pt->shared_syncobj = syncobj->shared_owner; ++ } ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED); ++ ++ if (pt->shared_syncobj) { ++ ret = dxgsharedsyncobj_get_host_nt_handle(pt->shared_syncobj, ++ process, ++ args.monitored_fence); ++ if (ret) ++ pt->shared_syncobj = NULL; ++ } ++ if (ret) ++ goto cleanup; ++ + waitargs.device = args.device; + waitargs.object_count = 1; + waitargs.objects = &args.monitored_fence; +@@ -132,10 +173,15 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) + fd_install(fd, sync_file->file); + + cleanup: ++ if (syncobj && syncobj->shared) ++ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release); + if (adapter) + dxgadapter_release_lock_shared(adapter); +- if (device) +- dxgdevice_release_lock_shared(device); ++ if (device) { ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } + if (ret) { + if (sync_file) { + fput(sync_file->file); +@@ -151,6 +197,228 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++int dxgkio_open_syncobj_from_syncfile(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_opensyncobjectfromsyncfile args; ++ int ret = 0; ++ struct dxgsyncpoint *pt = NULL; ++ struct dma_fence *dmafence = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct dxgsyncobject *syncobj = NULL; ++ struct d3dddi_synchronizationobject_flags flags = { }; ++ struct d3dkmt_opensyncobjectfromnthandle2 openargs = { }; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EFAULT; ++ goto cleanup; ++ } ++ ++ dmafence = sync_file_get_fence(args.sync_file_handle); ++ if (dmafence == NULL) { ++ DXG_ERR("failed to get dmafence from handle: %llx", ++ args.sync_file_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ pt = to_syncpoint(dmafence); ++ if (pt->shared_syncobj == NULL) { ++ DXG_ERR("Sync object is not shared"); ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ DXG_ERR("dxgprocess_device_by_handle failed"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ DXG_ERR("dxgdevice_acquire_lock_shared failed"); ++ kref_put(&device->device_kref, dxgdevice_release); ++ device = NULL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ DXG_ERR("dxgadapter_acquire_lock_shared failed"); ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ flags.shared = 1; ++ flags.nt_security_sharing = 1; ++ syncobj = dxgsyncobject_create(process, device, adapter, ++ _D3DDDI_MONITORED_FENCE, flags); ++ if (syncobj == NULL) { ++ DXG_ERR("failed to create sync object"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ dxgsharedsyncobj_add_syncobj(pt->shared_syncobj, syncobj); ++ ++ /* Open the shared syncobj to get a local handle */ ++ ++ 
openargs.device = device->handle; ++ openargs.flags.shared = 1; ++ openargs.flags.nt_security_sharing = 1; ++ openargs.flags.no_signal = 1; ++ ++ ret = dxgvmb_send_open_sync_object_nt(process, ++ &dxgglobal->channel, &openargs, syncobj); ++ if (ret) { ++ DXG_ERR("Failed to open shared syncobj on host"); ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, ++ syncobj, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ openargs.sync_object); ++ if (ret == 0) { ++ syncobj->handle = openargs.sync_object; ++ kref_get(&syncobj->syncobj_kref); ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ args.syncobj = openargs.sync_object; ++ args.fence_value = pt->fence_value; ++ args.fence_value_cpu_va = openargs.monitored_fence.fence_value_cpu_va; ++ args.fence_value_gpu_va = openargs.monitored_fence.fence_value_gpu_va; ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EFAULT; ++ } ++ ++cleanup: ++ if (dmafence) ++ dma_fence_put(dmafence); ++ if (ret) { ++ if (syncobj) { ++ dxgsyncobject_destroy(process, syncobj); ++ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release); ++ } ++ } ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) { ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++int dxgkio_wait_sync_file(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_waitsyncfile args; ++ struct dma_fence *dmafence = NULL; ++ int ret = 0; ++ struct dxgsyncpoint *pt = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmthandle syncobj_handle = {}; ++ bool device_lock_acquired = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EFAULT; ++ goto cleanup; ++ } ++ ++ dmafence = sync_file_get_fence(args.sync_file_handle); ++ if (dmafence == NULL) { ++ DXG_ERR("failed to get dmafence from handle: %llx", ++ args.sync_file_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ pt = to_syncpoint(dmafence); ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ DXG_ERR("dxgdevice_acquire_lock_shared failed"); ++ device = NULL; ++ goto cleanup; ++ } ++ device_lock_acquired = true; ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ DXG_ERR("dxgadapter_acquire_lock_shared failed"); ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ /* Open the shared syncobj to get a local handle */ ++ if (pt->shared_syncobj == NULL) { ++ DXG_ERR("Sync object is not shared"); ++ goto cleanup; ++ } ++ ret = dxgvmb_send_open_sync_object(process, ++ device->handle, ++ pt->shared_syncobj->host_shared_handle, ++ &syncobj_handle); ++ if (ret) { ++ DXG_ERR("Failed to open shared syncobj on host"); ++ goto cleanup; ++ } ++ ++ /* Ask the host to insert the syncobj to the context queue */ ++ ret = dxgvmb_send_wait_sync_object_gpu(process, adapter, ++ args.context, 1, ++ &syncobj_handle, ++ &pt->fence_value, ++ false); ++ if (ret < 0) { ++ DXG_ERR("dxgvmb_send_wait_sync_object_cpu failed"); ++ goto cleanup; ++ } ++ ++ /* ++ * Destroy the local syncobject immediately. 
This will not unblock ++ * GPU waiters, but will unblock CPU waiter, which includes the sync ++ * file itself. ++ */ ++ ret = dxgvmb_send_destroy_sync_object(process, syncobj_handle); ++ ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) { ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ if (dmafence) ++ dma_fence_put(dmafence); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static const char *dxgdmafence_get_driver_name(struct dma_fence *fence) + { + return "dxgkrnl"; +@@ -166,11 +434,16 @@ static void dxgdmafence_release(struct dma_fence *fence) + struct dxgsyncpoint *syncpoint; + + syncpoint = to_syncpoint(fence); +- if (syncpoint) { +- if (syncpoint->hdr.event_id) +- dxgglobal_get_host_event(syncpoint->hdr.event_id); +- kfree(syncpoint); +- } ++ if (syncpoint == NULL) ++ return; ++ ++ if (syncpoint->hdr.event_id) ++ dxgglobal_get_host_event(syncpoint->hdr.event_id); ++ ++ if (syncpoint->shared_syncobj) ++ dxgsharedsyncobj_put(syncpoint->shared_syncobj); ++ ++ kfree(syncpoint); + } + + static bool dxgdmafence_signaled(struct dma_fence *fence) +diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.h b/drivers/hv/dxgkrnl/dxgsyncfile.h +index 207ef9b30f67..292b7f718987 100644 +--- a/drivers/hv/dxgkrnl/dxgsyncfile.h ++++ b/drivers/hv/dxgkrnl/dxgsyncfile.h +@@ -17,10 +17,13 @@ + #include + + int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs); ++int dxgkio_wait_sync_file(struct dxgprocess *process, void *__user inargs); ++int dxgkio_open_syncobj_from_syncfile(struct dxgprocess *p, void *__user args); + + struct dxgsyncpoint { + struct dxghostevent hdr; + struct dma_fence base; ++ struct dxgsharedsyncobject *shared_syncobj; + u64 fence_value; + u64 context; + spinlock_t lock; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index d53d4254be63..36f4d4e84d3e 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -796,6 +796,55 @@ int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_open_sync_object(struct dxgprocess *process, ++ struct d3dkmthandle device, ++ struct d3dkmthandle host_shared_syncobj, ++ struct d3dkmthandle *syncobj) ++{ ++ struct dxgkvmb_command_opensyncobject *command; ++ struct dxgkvmb_command_opensyncobject_return result = { }; ++ int ret; ++ struct dxgvmbusmsg msg; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, NULL, process, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vm_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_OPENSYNCOBJECT, ++ process->host_handle); ++ command->device = device; ++ command->global_sync_object = host_shared_syncobj; ++ command->flags.shared = 1; ++ command->flags.nt_security_sharing = 1; ++ command->flags.no_signal = 1; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = dxgvmb_send_sync_msg(&dxgglobal->channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ ++ dxgglobal_release_channel_lock(); ++ ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result.status); ++ if (ret < 0) ++ goto cleanup; ++ ++ *syncobj = result.sync_object; ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process, + struct d3dkmthandle object, + struct d3dkmthandle 
*shared_handle) +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 4db23cd55b24..622904d5c3a9 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -36,10 +36,8 @@ static char *errorstr(int ret) + } + #endif + +-static int dxgsyncobj_release(struct inode *inode, struct file *file) ++void dxgsharedsyncobj_put(struct dxgsharedsyncobject *syncobj) + { +- struct dxgsharedsyncobject *syncobj = file->private_data; +- + DXG_TRACE("Release syncobj: %p", syncobj); + mutex_lock(&syncobj->fd_mutex); + kref_get(&syncobj->ssyncobj_kref); +@@ -56,6 +54,13 @@ static int dxgsyncobj_release(struct inode *inode, struct file *file) + } + mutex_unlock(&syncobj->fd_mutex); + kref_put(&syncobj->ssyncobj_kref, dxgsharedsyncobj_release); ++} ++ ++static int dxgsyncobj_release(struct inode *inode, struct file *file) ++{ ++ struct dxgsharedsyncobject *syncobj = file->private_data; ++ ++ dxgsharedsyncobj_put(syncobj); + return 0; + } + +@@ -4478,7 +4483,7 @@ dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) + return ret; + } + +-static int ++int + dxgsharedsyncobj_get_host_nt_handle(struct dxgsharedsyncobject *syncobj, + struct dxgprocess *process, + struct d3dkmthandle objecthandle) +@@ -5226,6 +5231,9 @@ static struct ioctl_desc ioctls[] = { + /* 0x43 */ {dxgkio_query_statistics, LX_DXQUERYSTATISTICS}, + /* 0x44 */ {dxgkio_share_object_with_host, LX_DXSHAREOBJECTWITHHOST}, + /* 0x45 */ {dxgkio_create_sync_file, LX_DXCREATESYNCFILE}, ++/* 0x46 */ {dxgkio_wait_sync_file, LX_DXWAITSYNCFILE}, ++/* 0x46 */ {dxgkio_open_syncobj_from_syncfile, ++ LX_DXOPENSYNCOBJECTFROMSYNCFILE}, + }; + + /* +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index c7f168425dc7..1eaa3f038322 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -1561,6 +1561,25 @@ struct d3dkmt_createsyncfile { + __u64 sync_file_handle; /* out */ + }; + ++struct d3dkmt_waitsyncfile { ++ __u64 sync_file_handle; ++ struct d3dkmthandle context; ++ __u32 reserved; ++}; ++ ++struct d3dkmt_opensyncobjectfromsyncfile { ++ __u64 sync_file_handle; ++ struct d3dkmthandle device; ++ struct d3dkmthandle syncobj; /* out */ ++ __u64 fence_value; /* out */ ++#ifdef __KERNEL__ ++ void *fence_value_cpu_va; /* out */ ++#else ++ __u64 fence_value_cpu_va; /* out */ ++#endif ++ __u64 fence_value_gpu_va; /* out */ ++}; ++ + /* + * Dxgkrnl Graphics Port Driver ioctl definitions + * +@@ -1686,5 +1705,9 @@ struct d3dkmt_createsyncfile { + _IOWR(0x47, 0x44, struct d3dkmt_shareobjectwithhost) + #define LX_DXCREATESYNCFILE \ + _IOWR(0x47, 0x45, struct d3dkmt_createsyncfile) ++#define LX_DXWAITSYNCFILE \ ++ _IOWR(0x47, 0x46, struct d3dkmt_waitsyncfile) ++#define LX_DXOPENSYNCOBJECTFROMSYNCFILE \ ++ _IOWR(0x47, 0x47, struct d3dkmt_opensyncobjectfromsyncfile) + + #endif /* _D3DKMTHK_H */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1702-drivers-hv-dxgkrnl-Improve-tracing-and-return-values-from-copy-from-user.patch b/patch/kernel/archive/wsl2-arm64-6.1/1702-drivers-hv-dxgkrnl-Improve-tracing-and-return-values-from-copy-from-user.patch new file mode 100644 index 000000000000..5cda6870cc54 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1702-drivers-hv-dxgkrnl-Improve-tracing-and-return-values-from-copy-from-user.patch @@ -0,0 +1,2000 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Fri, 6 May 2022 19:19:09 -0700 +Subject: drivers: hv: dxgkrnl: Improve tracing and return values from 
copy + from user + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 17 +- + drivers/hv/dxgkrnl/dxgmodule.c | 1 + + drivers/hv/dxgkrnl/dxgsyncfile.c | 13 +- + drivers/hv/dxgkrnl/dxgvmbus.c | 98 +-- + drivers/hv/dxgkrnl/ioctl.c | 327 +++++----- + 5 files changed, 225 insertions(+), 231 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index d92e1348ccfb..f63aa6f7a9dc 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -999,18 +999,25 @@ void dxgk_validate_ioctls(void); + trace_printk(dev_fmt(fmt) "\n", ##__VA_ARGS__); \ + } while (0) + +-#define DXG_ERR(fmt, ...) do { \ +- dev_err(DXGDEV, fmt, ##__VA_ARGS__); \ +- trace_printk("*** dxgkerror *** " dev_fmt(fmt) "\n", ##__VA_ARGS__); \ ++#define DXG_ERR(fmt, ...) do { \ ++ dev_err(DXGDEV, "%s: " fmt, __func__, ##__VA_ARGS__); \ ++ trace_printk("*** dxgkerror *** " dev_fmt(fmt) "\n", ##__VA_ARGS__); \ + } while (0) + + #else + + #define DXG_TRACE(...) +-#define DXG_ERR(fmt, ...) do { \ +- dev_err(DXGDEV, fmt, ##__VA_ARGS__); \ ++#define DXG_ERR(fmt, ...) do { \ ++ dev_err(DXGDEV, "%s: " fmt, __func__, ##__VA_ARGS__); \ + } while (0) + + #endif /* DEBUG */ + ++#define DXG_TRACE_IOCTL_END(ret) do { \ ++ if (ret < 0) \ ++ DXG_ERR("Ioctl failed: %d", ret); \ ++ else \ ++ DXG_TRACE("Ioctl returned: %d", ret); \ ++} while (0) ++ + #endif +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 5570f35954d4..aa27931a3447 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -961,3 +961,4 @@ module_exit(dxg_drv_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual compute device Driver"); ++MODULE_VERSION("2.0.0"); +diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.c b/drivers/hv/dxgkrnl/dxgsyncfile.c +index 9d5832c90ad7..f3b3e8dd4568 100644 +--- a/drivers/hv/dxgkrnl/dxgsyncfile.c ++++ b/drivers/hv/dxgkrnl/dxgsyncfile.c +@@ -38,13 +38,6 @@ + #undef dev_fmt + #define dev_fmt(fmt) "dxgk: " fmt + +-#ifdef DEBUG +-static char *errorstr(int ret) +-{ +- return ret < 0 ? 
"err" : ""; +-} +-#endif +- + static const struct dma_fence_ops dxgdmafence_ops; + + static struct dxgsyncpoint *to_syncpoint(struct dma_fence *fence) +@@ -193,7 +186,7 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) + if (fd >= 0) + put_unused_fd(fd); + } +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -317,7 +310,7 @@ int dxgkio_open_syncobj_from_syncfile(struct dxgprocess *process, + kref_put(&device->device_kref, dxgdevice_release); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -415,7 +408,7 @@ int dxgkio_wait_sync_file(struct dxgprocess *process, void *__user inargs) + if (dmafence) + dma_fence_put(dmafence); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 36f4d4e84d3e..566ccb6d01c9 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1212,7 +1212,7 @@ dxgvmb_send_create_context(struct dxgadapter *adapter, + args->priv_drv_data_size); + if (ret) { + DXG_ERR("Faled to copy private data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -1230,7 +1230,7 @@ dxgvmb_send_create_context(struct dxgadapter *adapter, + if (ret) { + DXG_ERR( + "Faled to copy private data to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + dxgvmb_send_destroy_context(adapter, process, + context); + context.v = 0; +@@ -1365,7 +1365,7 @@ copy_private_data(struct d3dkmt_createallocation *args, + args->private_runtime_data_size); + if (ret) { + DXG_ERR("failed to copy runtime data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + private_data_dest += args->private_runtime_data_size; +@@ -1385,7 +1385,7 @@ copy_private_data(struct d3dkmt_createallocation *args, + args->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy private data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + private_data_dest += args->priv_drv_data_size; +@@ -1406,7 +1406,7 @@ copy_private_data(struct d3dkmt_createallocation *args, + input_alloc->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy alloc data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + private_data_dest += input_alloc->priv_drv_data_size; +@@ -1658,7 +1658,7 @@ create_local_allocations(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy resource handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -1690,7 +1690,7 @@ create_local_allocations(struct dxgprocess *process, + host_alloc->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy private data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + alloc_private_data += host_alloc->priv_drv_data_size; +@@ -1700,7 +1700,7 @@ create_local_allocations(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy alloc handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -1714,7 +1714,7 @@ create_local_allocations(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy global share"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -1961,7 +1961,7 @@ int dxgvmb_send_query_clock_calibration(struct dxgprocess *process, + sizeof(result.clock_data)); + if (ret) { + DXG_ERR("failed to copy clock data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = 
ntstatus2int(result.status); +@@ -2041,7 +2041,7 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + alloc_size); + if (ret) { + DXG_ERR("failed to copy alloc handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -2059,7 +2059,7 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + result_allocation_size); + if (ret) { + DXG_ERR("failed to copy residency status"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -2105,7 +2105,7 @@ int dxgvmb_send_escape(struct dxgprocess *process, + args->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy priv data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -2164,14 +2164,14 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + sizeof(output->budget)); + if (ret) { + DXG_ERR("failed to copy budget"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(&output->current_usage, &result.current_usage, + sizeof(output->current_usage)); + if (ret) { + DXG_ERR("failed to copy current usage"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(&output->current_reservation, +@@ -2179,7 +2179,7 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + sizeof(output->current_reservation)); + if (ret) { + DXG_ERR("failed to copy reservation"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(&output->available_for_reservation, +@@ -2187,7 +2187,7 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + sizeof(output->available_for_reservation)); + if (ret) { + DXG_ERR("failed to copy avail reservation"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -2229,7 +2229,7 @@ int dxgvmb_send_get_device_state(struct dxgprocess *process, + ret = copy_to_user(output, &result.args, sizeof(result.args)); + if (ret) { + DXG_ERR("failed to copy output args"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + if (args->state_type == _D3DKMT_DEVICESTATE_EXECUTION) +@@ -2404,7 +2404,7 @@ int dxgvmb_send_make_resident(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy alloc handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + command_vgpu_to_host_init2(&command->hdr, +@@ -2454,7 +2454,7 @@ int dxgvmb_send_evict(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy alloc handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + command_vgpu_to_host_init2(&command->hdr, +@@ -2502,14 +2502,14 @@ int dxgvmb_send_submit_command(struct dxgprocess *process, + hbufsize); + if (ret) { + DXG_ERR(" failed to copy history buffer"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_from_user((u8 *) &command[1] + hbufsize, + args->priv_drv_data, args->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy history priv data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2671,7 +2671,7 @@ int dxgvmb_send_update_gpu_va(struct dxgprocess *process, + op_size); + if (ret) { + DXG_ERR("failed to copy operations"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2751,7 +2751,7 @@ dxgvmb_send_create_sync_object(struct dxgprocess *process, + sizeof(u64)); + if (ret) { + DXG_ERR("failed to read fence"); +- ret = -EINVAL; ++ ret = -EFAULT; + } else { + DXG_TRACE("fence value:%lx", + value); +@@ -2820,7 +2820,7 @@ int dxgvmb_send_signal_sync_object(struct dxgprocess *process, + if (ret) { + DXG_ERR("Failed to read objects %p 
%d", + objects, object_size); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + current_pos += object_size; +@@ -2834,7 +2834,7 @@ int dxgvmb_send_signal_sync_object(struct dxgprocess *process, + if (ret) { + DXG_ERR("Failed to read contexts %p %d", + contexts, context_size); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + current_pos += context_size; +@@ -2844,7 +2844,7 @@ int dxgvmb_send_signal_sync_object(struct dxgprocess *process, + if (ret) { + DXG_ERR("Failed to read fences %p %d", + fences, fence_size); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -2898,7 +2898,7 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + ret = copy_from_user(current_pos, args->objects, object_size); + if (ret) { + DXG_ERR("failed to copy objects"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + current_pos += object_size; +@@ -2906,7 +2906,7 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + fence_size); + if (ret) { + DXG_ERR("failed to copy fences"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } else { +@@ -3037,7 +3037,7 @@ int dxgvmb_send_lock2(struct dxgprocess *process, + sizeof(args->data)); + if (ret) { + DXG_ERR("failed to copy data"); +- ret = -EINVAL; ++ ret = -EFAULT; + alloc->cpu_address_refcount--; + if (alloc->cpu_address_refcount == 0) { + dxg_unmap_iospace(alloc->cpu_address, +@@ -3119,7 +3119,7 @@ int dxgvmb_send_update_alloc_property(struct dxgprocess *process, + sizeof(u64)); + if (ret1) { + DXG_ERR("failed to copy paging fence"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + } + cleanup: +@@ -3204,14 +3204,14 @@ int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, + alloc_size); + if (ret) { + DXG_ERR("failed to copy alloc handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_from_user((u8 *) allocations + alloc_size, + args->priorities, priority_size); + if (ret) { + DXG_ERR("failed to copy alloc priority"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3277,7 +3277,7 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, + alloc_size); + if (ret) { + DXG_ERR("failed to copy alloc handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3296,7 +3296,7 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, + priority_size); + if (ret) { + DXG_ERR("failed to copy priorities"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -3402,7 +3402,7 @@ int dxgvmb_send_offer_allocations(struct dxgprocess *process, + } + if (ret) { + DXG_ERR("failed to copy input handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3457,7 +3457,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process, + } + if (ret) { + DXG_ERR("failed to copy input handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3469,7 +3469,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process, + &result->paging_fence_value, sizeof(u64)); + if (ret) { + DXG_ERR("failed to copy paging fence"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3480,7 +3480,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process, + args->allocation_count); + if (ret) { + DXG_ERR("failed to copy results"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + } + +@@ -3559,7 +3559,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + args->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy private data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; 
+ } + } +@@ -3604,7 +3604,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy hwqueue handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(&inargs->queue_progress_fence, +@@ -3612,7 +3612,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to progress fence"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(&inargs->queue_progress_fence_cpu_va, +@@ -3620,7 +3620,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + sizeof(inargs->queue_progress_fence_cpu_va)); + if (ret) { + DXG_ERR("failed to copy fence cpu va"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(&inargs->queue_progress_fence_gpu_va, +@@ -3628,7 +3628,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + sizeof(u64)); + if (ret) { + DXG_ERR("failed to copy fence gpu va"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + if (args->priv_drv_data_size) { +@@ -3637,7 +3637,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + args->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy private data"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + } + +@@ -3706,7 +3706,7 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + args->private_data, args->private_data_size); + if (ret) { + DXG_ERR("Faled to copy private data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3758,7 +3758,7 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + args->private_data_size); + if (ret) { + DXG_ERR("Faled to copy private data to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -3791,7 +3791,7 @@ int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, + primaries_size); + if (ret) { + DXG_ERR("failed to copy primaries handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -3801,7 +3801,7 @@ int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, + args->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy primaries data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 622904d5c3a9..3dc9e76f4f3d 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -29,13 +29,6 @@ struct ioctl_desc { + u32 ioctl; + }; + +-#ifdef DEBUG +-static char *errorstr(int ret) +-{ +- return ret < 0 ? 
"err" : ""; +-} +-#endif +- + void dxgsharedsyncobj_put(struct dxgsharedsyncobject *syncobj) + { + DXG_TRACE("Release syncobj: %p", syncobj); +@@ -108,7 +101,7 @@ static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("Faled to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -129,7 +122,7 @@ static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, + &args.adapter_handle, + sizeof(struct d3dkmthandle)); + if (ret) +- ret = -EINVAL; ++ ret = -EFAULT; + } + adapter = entry; + } +@@ -150,7 +143,7 @@ static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, + if (ret < 0) + dxgprocess_close_adapter(process, args.adapter_handle); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -173,7 +166,7 @@ static int dxgkio_query_statistics(struct dxgprocess *process, + ret = copy_from_user(args, inargs, sizeof(*args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -199,7 +192,7 @@ static int dxgkio_query_statistics(struct dxgprocess *process, + ret = copy_to_user(inargs, args, sizeof(*args)); + if (ret) { + DXG_ERR("failed to copy args"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + } + dxgadapter_release_lock_shared(adapter); +@@ -209,7 +202,7 @@ static int dxgkio_query_statistics(struct dxgprocess *process, + if (args) + vfree(args); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -233,7 +226,7 @@ dxgkp_enum_adapters(struct dxgprocess *process, + &dxgglobal->num_adapters, sizeof(u32)); + if (ret) { + DXG_ERR("copy_to_user faled"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + goto cleanup; + } +@@ -291,7 +284,7 @@ dxgkp_enum_adapters(struct dxgprocess *process, + &dxgglobal->num_adapters, sizeof(u32)); + if (ret) { + DXG_ERR("copy_to_user failed"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + goto cleanup; + } +@@ -300,13 +293,13 @@ dxgkp_enum_adapters(struct dxgprocess *process, + sizeof(adapter_count)); + if (ret) { + DXG_ERR("failed to copy adapter_count"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(info_out, info, sizeof(info[0]) * adapter_count); + if (ret) { + DXG_ERR("failed to copy adapter info"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -326,7 +319,7 @@ dxgkp_enum_adapters(struct dxgprocess *process, + if (adapters) + vfree(adapters); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -437,7 +430,7 @@ dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -447,7 +440,7 @@ dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy args to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + goto cleanup; + } +@@ -508,14 +501,14 @@ dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy args to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(args.adapters, info, + sizeof(info[0]) * args.num_adapters); + if (ret) { + DXG_ERR("failed to copy adapter info to user"); +- ret = 
-EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -536,7 +529,7 @@ dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs) + if (adapters) + vfree(adapters); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -549,7 +542,7 @@ dxgkio_enum_adapters3(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -561,7 +554,7 @@ dxgkio_enum_adapters3(struct dxgprocess *process, void *__user inargs) + + cleanup: + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -574,7 +567,7 @@ dxgkio_close_adapter(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -584,7 +577,7 @@ dxgkio_close_adapter(struct dxgprocess *process, void *__user inargs) + + cleanup: + +- DXG_TRACE("ioctl: %s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -598,7 +591,7 @@ dxgkio_query_adapter_info(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -630,7 +623,7 @@ dxgkio_query_adapter_info(struct dxgprocess *process, void *__user inargs) + if (adapter) + kref_put(&adapter->adapter_kref, dxgadapter_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -647,7 +640,7 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -677,7 +670,7 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs) + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy device handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -709,7 +702,7 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs) + if (adapter) + kref_put(&adapter->adapter_kref, dxgadapter_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -724,7 +717,7 @@ dxgkio_destroy_device(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -756,7 +749,7 @@ dxgkio_destroy_device(struct dxgprocess *process, void *__user inargs) + + cleanup: + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -774,7 +767,7 @@ dxgkio_create_context_virtual(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -824,7 +817,7 @@ dxgkio_create_context_virtual(struct dxgprocess *process, void *__user inargs) + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy context handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + } else { + DXG_ERR("invalid host handle"); +@@ -851,7 +844,7 @@ dxgkio_create_context_virtual(struct dxgprocess *process, void 
*__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -868,7 +861,7 @@ dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -920,7 +913,7 @@ dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -938,7 +931,7 @@ dxgkio_create_hwqueue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1002,7 +995,7 @@ dxgkio_create_hwqueue(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -1019,7 +1012,7 @@ static int dxgkio_destroy_hwqueue(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1070,7 +1063,7 @@ static int dxgkio_destroy_hwqueue(struct dxgprocess *process, + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -1088,7 +1081,7 @@ dxgkio_create_paging_queue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1128,7 +1121,7 @@ dxgkio_create_paging_queue(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1169,7 +1162,7 @@ dxgkio_create_paging_queue(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -1186,7 +1179,7 @@ dxgkio_destroy_paging_queue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1247,7 +1240,7 @@ dxgkio_destroy_paging_queue(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -1351,7 +1344,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1373,7 +1366,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + alloc_info_size); + if (ret) { + DXG_ERR("failed to copy alloc info"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1412,7 +1405,7 @@ 
dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + sizeof(standard_alloc)); + if (ret) { + DXG_ERR("failed to copy std alloc data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + if (standard_alloc.type == +@@ -1556,7 +1549,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + if (ret) { + DXG_ERR( + "failed to copy runtime data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -1576,7 +1569,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + if (ret) { + DXG_ERR( + "failed to copy res data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -1733,7 +1726,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -1793,7 +1786,7 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1823,7 +1816,7 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + handle_size); + if (ret) { + DXG_ERR("failed to copy alloc handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -1962,7 +1955,7 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + if (allocs) + vfree(allocs); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -1978,7 +1971,7 @@ dxgkio_make_resident(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2022,7 +2015,7 @@ dxgkio_make_resident(struct dxgprocess *process, void *__user inargs) + &args.paging_fence_value, sizeof(u64)); + if (ret2) { + DXG_ERR("failed to copy paging fence"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2030,7 +2023,7 @@ dxgkio_make_resident(struct dxgprocess *process, void *__user inargs) + &args.num_bytes_to_trim, sizeof(u64)); + if (ret2) { + DXG_ERR("failed to copy bytes to trim"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2041,7 +2034,7 @@ dxgkio_make_resident(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + + return ret; + } +@@ -2058,7 +2051,7 @@ dxgkio_evict(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2090,7 +2083,7 @@ dxgkio_evict(struct dxgprocess *process, void *__user inargs) + &args.num_bytes_to_trim, sizeof(u64)); + if (ret) { + DXG_ERR("failed to copy bytes to trim to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + cleanup: + +@@ -2099,7 +2092,7 @@ dxgkio_evict(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2114,7 +2107,7 @@ dxgkio_offer_allocations(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + 
if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2153,7 +2146,7 @@ dxgkio_offer_allocations(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2169,7 +2162,7 @@ dxgkio_reclaim_allocations(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2212,7 +2205,7 @@ dxgkio_reclaim_allocations(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2227,7 +2220,7 @@ dxgkio_submit_command(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2280,7 +2273,7 @@ dxgkio_submit_command(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2296,7 +2289,7 @@ dxgkio_submit_command_to_hwqueue(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2336,7 +2329,7 @@ dxgkio_submit_command_to_hwqueue(struct dxgprocess *process, + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2352,7 +2345,7 @@ dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2376,7 +2369,7 @@ dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs) + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy hwqueue handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2410,7 +2403,7 @@ dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2428,7 +2421,7 @@ dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2447,7 +2440,7 @@ dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(objects, args.objects, object_size); + if (ret) { + DXG_ERR("failed to copy objects"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2460,7 +2453,7 @@ dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(fences, args.fence_values, object_size); + if (ret) { + DXG_ERR("failed to copy fence values"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2494,7 +2487,7 @@ 
dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2510,7 +2503,7 @@ dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2542,7 +2535,7 @@ dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs) + &args.paging_fence_value, sizeof(u64)); + if (ret2) { + DXG_ERR("failed to copy paging fence to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2550,7 +2543,7 @@ dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs) + sizeof(args.virtual_address)); + if (ret2) { + DXG_ERR("failed to copy va to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2561,7 +2554,7 @@ dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2577,7 +2570,7 @@ dxgkio_reserve_gpu_va(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2614,7 +2607,7 @@ dxgkio_reserve_gpu_va(struct dxgprocess *process, void *__user inargs) + sizeof(args.virtual_address)); + if (ret) { + DXG_ERR("failed to copy VA to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -2624,7 +2617,7 @@ dxgkio_reserve_gpu_va(struct dxgprocess *process, void *__user inargs) + kref_put(&adapter->adapter_kref, dxgadapter_release); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2638,7 +2631,7 @@ dxgkio_free_gpu_va(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2680,7 +2673,7 @@ dxgkio_update_gpu_va(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2705,7 +2698,7 @@ dxgkio_update_gpu_va(struct dxgprocess *process, void *__user inargs) + sizeof(args.fence_value)); + if (ret) { + DXG_ERR("failed to copy fence value to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -2734,7 +2727,7 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2808,7 +2801,7 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy output args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2842,7 +2835,7 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2856,7 +2849,7 @@ 
dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2885,7 +2878,7 @@ dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs) + + cleanup: + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2906,7 +2899,7 @@ dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2995,7 +2988,7 @@ dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs) + if (ret == 0) + goto success; + DXG_ERR("failed to copy output args"); +- ret = -EINVAL; ++ ret = -EFAULT; + + cleanup: + +@@ -3020,7 +3013,7 @@ dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3041,7 +3034,7 @@ dxgkio_signal_sync_object(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3129,7 +3122,7 @@ dxgkio_signal_sync_object(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3144,7 +3137,7 @@ dxgkio_signal_sync_object_cpu(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + if (args.object_count == 0 || +@@ -3181,7 +3174,7 @@ dxgkio_signal_sync_object_cpu(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3199,7 +3192,7 @@ dxgkio_signal_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3240,7 +3233,7 @@ dxgkio_signal_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3262,7 +3255,7 @@ dxgkio_signal_sync_object_gpu2(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3287,7 +3280,7 @@ dxgkio_signal_sync_object_gpu2(struct dxgprocess *process, void *__user inargs) + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy context handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3365,7 +3358,7 @@ dxgkio_signal_sync_object_gpu2(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", 
errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3380,7 +3373,7 @@ dxgkio_wait_sync_object(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3418,7 +3411,7 @@ dxgkio_wait_sync_object(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3439,7 +3432,7 @@ dxgkio_wait_sync_object_cpu(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3540,7 +3533,7 @@ dxgkio_wait_sync_object_cpu(struct dxgprocess *process, void *__user inargs) + kfree(async_host_event); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3563,7 +3556,7 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3583,7 +3576,7 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(objects, args.objects, object_size); + if (ret) { + DXG_ERR("failed to copy objects"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3637,7 +3630,7 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + object_size); + if (ret) { + DXG_ERR("failed to copy fences"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } else { +@@ -3673,7 +3666,7 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + if (fences && fences != &args.fence_value) + vfree(fences); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3690,7 +3683,7 @@ dxgkio_lock2(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3712,7 +3705,7 @@ dxgkio_lock2(struct dxgprocess *process, void *__user inargs) + alloc->cpu_address_refcount++; + } else { + DXG_ERR("Failed to copy cpu address"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + } + } +@@ -3749,7 +3742,7 @@ dxgkio_lock2(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + + success: +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3766,7 +3759,7 @@ dxgkio_unlock2(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3829,7 +3822,7 @@ dxgkio_unlock2(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + + success: +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3844,7 +3837,7 @@ dxgkio_update_alloc_property(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = 
-EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3872,7 +3865,7 @@ dxgkio_update_alloc_property(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3887,7 +3880,7 @@ dxgkio_mark_device_as_error(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + device = dxgprocess_device_by_handle(process, args.device); +@@ -3908,7 +3901,7 @@ dxgkio_mark_device_as_error(struct dxgprocess *process, void *__user inargs) + dxgadapter_release_lock_shared(adapter); + if (device) + kref_put(&device->device_kref, dxgdevice_release); +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3923,7 +3916,7 @@ dxgkio_query_alloc_residency(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3949,7 +3942,7 @@ dxgkio_query_alloc_residency(struct dxgprocess *process, void *__user inargs) + dxgadapter_release_lock_shared(adapter); + if (device) + kref_put(&device->device_kref, dxgdevice_release); +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3964,7 +3957,7 @@ dxgkio_set_allocation_priority(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + device = dxgprocess_device_by_handle(process, args.device); +@@ -3984,7 +3977,7 @@ dxgkio_set_allocation_priority(struct dxgprocess *process, void *__user inargs) + dxgadapter_release_lock_shared(adapter); + if (device) + kref_put(&device->device_kref, dxgdevice_release); +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3999,7 +3992,7 @@ dxgkio_get_allocation_priority(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + device = dxgprocess_device_by_handle(process, args.device); +@@ -4019,7 +4012,7 @@ dxgkio_get_allocation_priority(struct dxgprocess *process, void *__user inargs) + dxgadapter_release_lock_shared(adapter); + if (device) + kref_put(&device->device_kref, dxgdevice_release); +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4069,14 +4062,14 @@ dxgkio_set_context_scheduling_priority(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + + ret = set_context_scheduling_priority(process, args.context, + args.priority, false); + cleanup: +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4111,7 +4104,7 @@ get_context_scheduling_priority(struct dxgprocess *process, + ret = copy_to_user(priority, &pri, sizeof(pri)); + if (ret) { + DXG_ERR("failed to copy priority to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -4134,14 +4127,14 @@ 
dxgkio_get_context_scheduling_priority(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + + ret = get_context_scheduling_priority(process, args.context, + &input->priority, false); + cleanup: +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4155,14 +4148,14 @@ dxgkio_set_context_process_scheduling_priority(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + + ret = set_context_scheduling_priority(process, args.context, + args.priority, true); + cleanup: +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4176,7 +4169,7 @@ dxgkio_get_context_process_scheduling_priority(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4184,7 +4177,7 @@ dxgkio_get_context_process_scheduling_priority(struct dxgprocess *process, + &((struct d3dkmt_getcontextinprocessschedulingpriority *) + inargs)->priority, true); + cleanup: +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4199,7 +4192,7 @@ dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4232,7 +4225,7 @@ dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs + if (adapter) + kref_put(&adapter->adapter_kref, dxgadapter_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4247,7 +4240,7 @@ dxgkio_query_clock_calibration(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4272,7 +4265,7 @@ dxgkio_query_clock_calibration(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy output args"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -4295,7 +4288,7 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4319,7 +4312,7 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy output args"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -4341,7 +4334,7 @@ dxgkio_escape(struct dxgprocess *process, void *__user inargs) + + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4367,7 +4360,7 @@ dxgkio_escape(struct dxgprocess *process, void *__user inargs) + dxgadapter_release_lock_shared(adapter); + if (adapter) + kref_put(&adapter->adapter_kref, dxgadapter_release); +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + 
+@@ -4382,7 +4375,7 @@ dxgkio_query_vidmem_info(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4432,7 +4425,7 @@ dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4458,7 +4451,7 @@ dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy args to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + goto cleanup; + } +@@ -4590,7 +4583,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4610,7 +4603,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(handles, args.objects, handle_size); + if (ret) { + DXG_ERR("failed to copy object handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4708,7 +4701,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(args.shared_handle, &tmp, sizeof(u64)); + if (ret) { + DXG_ERR("failed to copy shared handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -4726,7 +4719,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + if (resource) + kref_put(&resource->resource_kref, dxgresource_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4742,7 +4735,7 @@ dxgkio_query_resource_info_nt(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4795,7 +4788,7 @@ dxgkio_query_resource_info_nt(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy output args"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -4807,7 +4800,7 @@ dxgkio_query_resource_info_nt(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4859,7 +4852,7 @@ assign_resource_handles(struct dxgprocess *process, + sizeof(open_alloc_info)); + if (ret) { + DXG_ERR("failed to copy alloc info"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -5009,7 +5002,7 @@ open_resource(struct dxgprocess *process, + shared_resource->runtime_private_data_size); + if (ret) { + DXG_ERR("failed to copy runtime data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -5020,7 +5013,7 @@ open_resource(struct dxgprocess *process, + shared_resource->resource_private_data_size); + if (ret) { + DXG_ERR("failed to copy resource data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -5031,7 +5024,7 @@ open_resource(struct dxgprocess *process, + shared_resource->alloc_private_data_size); + if (ret) { + DXG_ERR("failed to copy alloc data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -5046,7 +5039,7 @@ 
open_resource(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy resource handle to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -5054,7 +5047,7 @@ open_resource(struct dxgprocess *process, + &args->total_priv_drv_data_size, sizeof(u32)); + if (ret) { + DXG_ERR("failed to copy total driver data size"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -5102,7 +5095,7 @@ dxgkio_open_resource_nt(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -5112,7 +5105,7 @@ dxgkio_open_resource_nt(struct dxgprocess *process, void *__user inargs) + + cleanup: + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -5125,7 +5118,7 @@ dxgkio_share_object_with_host(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -5138,12 +5131,12 @@ dxgkio_share_object_with_host(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy data to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1703-drivers-hv-dxgkrnl-Fix-synchronization-locks.patch b/patch/kernel/archive/wsl2-arm64-6.1/1703-drivers-hv-dxgkrnl-Fix-synchronization-locks.patch new file mode 100644 index 000000000000..2c643b7be7d8 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1703-drivers-hv-dxgkrnl-Fix-synchronization-locks.patch @@ -0,0 +1,391 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Mon, 13 Jun 2022 14:18:10 -0700 +Subject: drivers: hv: dxgkrnl: Fix synchronization locks + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 19 ++- + drivers/hv/dxgkrnl/dxgkrnl.h | 8 +- + drivers/hv/dxgkrnl/dxgmodule.c | 3 +- + drivers/hv/dxgkrnl/dxgprocess.c | 11 +- + drivers/hv/dxgkrnl/dxgvmbus.c | 85 +++++++--- + drivers/hv/dxgkrnl/ioctl.c | 24 ++- + drivers/hv/dxgkrnl/misc.h | 1 + + 7 files changed, 101 insertions(+), 50 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 3d8bec295b87..d9d45bd4a31e 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -136,7 +136,7 @@ void dxgadapter_release(struct kref *refcount) + struct dxgadapter *adapter; + + adapter = container_of(refcount, struct dxgadapter, adapter_kref); +- DXG_TRACE("%p", adapter); ++ DXG_TRACE("Destroying adapter: %px", adapter); + kfree(adapter); + } + +@@ -270,6 +270,8 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, + if (ret < 0) { + kref_put(&device->device_kref, dxgdevice_release); + device = NULL; ++ } else { ++ DXG_TRACE("dxgdevice created: %px", device); + } + } + return device; +@@ -413,11 +415,8 @@ void dxgdevice_destroy(struct dxgdevice *device) + + cleanup: + +- if (device->adapter) { ++ if (device->adapter) + dxgprocess_adapter_remove_device(device); +- kref_put(&device->adapter->adapter_kref, dxgadapter_release); +- device->adapter = NULL; +- 
} + + up_write(&device->device_lock); + +@@ -721,6 +720,8 @@ void dxgdevice_release(struct kref *refcount) + struct dxgdevice *device; + + device = container_of(refcount, struct dxgdevice, device_kref); ++ DXG_TRACE("Destroying device: %px", device); ++ kref_put(&device->adapter->adapter_kref, dxgadapter_release); + kfree(device); + } + +@@ -999,6 +1000,9 @@ void dxgpagingqueue_destroy(struct dxgpagingqueue *pqueue) + kfree(pqueue); + } + ++/* ++ * Process_adapter_mutex is held. ++ */ + struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + struct dxgadapter *adapter) + { +@@ -1108,7 +1112,7 @@ int dxgprocess_adapter_add_device(struct dxgprocess *process, + + void dxgprocess_adapter_remove_device(struct dxgdevice *device) + { +- DXG_TRACE("Removing device: %p", device); ++ DXG_TRACE("Removing device: %px", device); + mutex_lock(&device->adapter_info->device_list_mutex); + if (device->device_list_entry.next) { + list_del(&device->device_list_entry); +@@ -1147,8 +1151,7 @@ void dxgsharedsyncobj_release(struct kref *refcount) + if (syncobj->adapter) { + dxgadapter_remove_shared_syncobj(syncobj->adapter, + syncobj); +- kref_put(&syncobj->adapter->adapter_kref, +- dxgadapter_release); ++ kref_put(&syncobj->adapter->adapter_kref, dxgadapter_release); + } + kfree(syncobj); + } +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index f63aa6f7a9dc..1b40d6e39085 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -404,7 +404,10 @@ struct dxgprocess { + /* Handle of the corresponding objec on the host */ + struct d3dkmthandle host_handle; + +- /* List of opened adapters (dxgprocess_adapter) */ ++ /* ++ * List of opened adapters (dxgprocess_adapter). ++ * Protected by process_adapter_mutex. ++ */ + struct list_head process_adapter_list_head; + }; + +@@ -451,6 +454,8 @@ enum dxgadapter_state { + struct dxgadapter { + struct rw_semaphore core_lock; + struct kref adapter_kref; ++ /* Protects creation and destruction of dxgdevice objects */ ++ struct mutex device_creation_lock; + /* Entry in the list of adapters in dxgglobal */ + struct list_head adapter_list_entry; + /* The list of dxgprocess_adapter entries */ +@@ -997,6 +1002,7 @@ void dxgk_validate_ioctls(void); + + #define DXG_TRACE(fmt, ...) do { \ + trace_printk(dev_fmt(fmt) "\n", ##__VA_ARGS__); \ ++ dev_dbg(DXGDEV, "%s: " fmt, __func__, ##__VA_ARGS__); \ + } while (0) + + #define DXG_ERR(fmt, ...) 
do { \ +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index aa27931a3447..f419597f711a 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -272,6 +272,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, + adapter->host_vgpu_luid = host_vgpu_luid; + kref_init(&adapter->adapter_kref); + init_rwsem(&adapter->core_lock); ++ mutex_init(&adapter->device_creation_lock); + + INIT_LIST_HEAD(&adapter->adapter_process_list_head); + INIT_LIST_HEAD(&adapter->shared_resource_list_head); +@@ -961,4 +962,4 @@ module_exit(dxg_drv_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual compute device Driver"); +-MODULE_VERSION("2.0.0"); ++MODULE_VERSION("2.0.1"); +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index e77e3a4983f8..fd51fd968049 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -214,14 +214,15 @@ int dxgprocess_close_adapter(struct dxgprocess *process, + hmgrtable_unlock(&process->local_handle_table, DXGLOCK_EXCL); + + if (adapter) { ++ mutex_lock(&adapter->device_creation_lock); ++ dxgglobal_acquire_process_adapter_lock(); + adapter_info = dxgprocess_get_adapter_info(process, adapter); +- if (adapter_info) { +- dxgglobal_acquire_process_adapter_lock(); ++ if (adapter_info) + dxgprocess_adapter_release(adapter_info); +- dxgglobal_release_process_adapter_lock(); +- } else { ++ else + ret = -EINVAL; +- } ++ dxgglobal_release_process_adapter_lock(); ++ mutex_unlock(&adapter->device_creation_lock); + } else { + DXG_ERR("Adapter not found %x", handle.v); + ret = -EINVAL; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 566ccb6d01c9..8c99f141482e 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1573,8 +1573,27 @@ process_allocation_handles(struct dxgprocess *process, + struct dxgresource *resource) + { + int ret = 0; +- int i; ++ int i = 0; ++ int k; ++ struct dxgkvmb_command_allocinfo_return *host_alloc; + ++ /* ++ * Assign handle to the internal objects, so VM bus messages will be ++ * sent to the host to free them during object destruction. ++ */ ++ if (args->flags.create_resource) ++ resource->handle = res->resource; ++ for (i = 0; i < args->alloc_count; i++) { ++ host_alloc = &res->allocation_info[i]; ++ dxgalloc[i]->alloc_handle = host_alloc->allocation; ++ } ++ ++ /* ++ * Assign handle to the handle table. ++ * In case of a failure all handles should be freed. ++ * When the function returns, the objects could be destroyed by ++ * handle immediately. 
++ */ + hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); + if (args->flags.create_resource) { + ret = hmgrtable_assign_handle(&process->handle_table, resource, +@@ -1583,14 +1602,12 @@ process_allocation_handles(struct dxgprocess *process, + if (ret < 0) { + DXG_ERR("failed to assign resource handle %x", + res->resource.v); ++ goto cleanup; + } else { +- resource->handle = res->resource; + resource->handle_valid = 1; + } + } + for (i = 0; i < args->alloc_count; i++) { +- struct dxgkvmb_command_allocinfo_return *host_alloc; +- + host_alloc = &res->allocation_info[i]; + ret = hmgrtable_assign_handle(&process->handle_table, + dxgalloc[i], +@@ -1602,9 +1619,26 @@ process_allocation_handles(struct dxgprocess *process, + args->alloc_count, i); + break; + } +- dxgalloc[i]->alloc_handle = host_alloc->allocation; + dxgalloc[i]->handle_valid = 1; + } ++ if (ret < 0) { ++ if (args->flags.create_resource) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGRESOURCE, ++ res->resource); ++ resource->handle_valid = 0; ++ } ++ for (k = 0; k < i; k++) { ++ host_alloc = &res->allocation_info[i]; ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ host_alloc->allocation); ++ dxgalloc[i]->handle_valid = 0; ++ } ++ } ++ ++cleanup: ++ + hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); + + if (ret) +@@ -1705,18 +1739,17 @@ create_local_allocations(struct dxgprocess *process, + } + } + +- ret = process_allocation_handles(process, device, args, result, +- dxgalloc, resource); +- if (ret < 0) +- goto cleanup; +- + ret = copy_to_user(&input_args->global_share, &args->global_share, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy global share"); + ret = -EFAULT; ++ goto cleanup; + } + ++ ret = process_allocation_handles(process, device, args, result, ++ dxgalloc, resource); ++ + cleanup: + + if (ret < 0) { +@@ -3576,22 +3609,6 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + goto cleanup; + } + +- ret = hmgrtable_assign_handle_safe(&process->handle_table, hwqueue, +- HMGRENTRY_TYPE_DXGHWQUEUE, +- command->hwqueue); +- if (ret < 0) +- goto cleanup; +- +- ret = hmgrtable_assign_handle_safe(&process->handle_table, +- NULL, +- HMGRENTRY_TYPE_MONITOREDFENCE, +- command->hwqueue_progress_fence); +- if (ret < 0) +- goto cleanup; +- +- hwqueue->handle = command->hwqueue; +- hwqueue->progress_fence_sync_object = command->hwqueue_progress_fence; +- + hwqueue->progress_fence_mapped_address = + dxg_map_iospace((u64)command->hwqueue_progress_fence_cpuva, + PAGE_SIZE, PROT_READ | PROT_WRITE, true); +@@ -3641,6 +3658,22 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + } + } + ++ ret = hmgrtable_assign_handle_safe(&process->handle_table, ++ NULL, ++ HMGRENTRY_TYPE_MONITOREDFENCE, ++ command->hwqueue_progress_fence); ++ if (ret < 0) ++ goto cleanup; ++ ++ hwqueue->progress_fence_sync_object = command->hwqueue_progress_fence; ++ hwqueue->handle = command->hwqueue; ++ ++ ret = hmgrtable_assign_handle_safe(&process->handle_table, hwqueue, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ command->hwqueue); ++ if (ret < 0) ++ hwqueue->handle.v = 0; ++ + cleanup: + if (ret < 0) { + DXG_ERR("failed %x", ret); +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 3dc9e76f4f3d..7c72790f917f 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -636,6 +636,7 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs) + struct dxgdevice *device = NULL; + struct d3dkmthandle 
host_device_handle = {}; + bool adapter_locked = false; ++ bool device_creation_locked = false; + + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { +@@ -651,6 +652,9 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs) + goto cleanup; + } + ++ mutex_lock(&adapter->device_creation_lock); ++ device_creation_locked = true; ++ + device = dxgdevice_create(adapter, process); + if (device == NULL) { + ret = -ENOMEM; +@@ -699,6 +703,9 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs) + if (adapter_locked) + dxgadapter_release_lock_shared(adapter); + ++ if (device_creation_locked) ++ mutex_unlock(&adapter->device_creation_lock); ++ + if (adapter) + kref_put(&adapter->adapter_kref, dxgadapter_release); + +@@ -803,22 +810,21 @@ dxgkio_create_context_virtual(struct dxgprocess *process, void *__user inargs) + host_context_handle = dxgvmb_send_create_context(adapter, + process, &args); + if (host_context_handle.v) { +- hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); +- ret = hmgrtable_assign_handle(&process->handle_table, context, +- HMGRENTRY_TYPE_DXGCONTEXT, +- host_context_handle); +- if (ret >= 0) +- context->handle = host_context_handle; +- hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); +- if (ret < 0) +- goto cleanup; + ret = copy_to_user(&((struct d3dkmt_createcontextvirtual *) + inargs)->context, &host_context_handle, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy context handle"); + ret = -EFAULT; ++ goto cleanup; + } ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, context, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ host_context_handle); ++ if (ret >= 0) ++ context->handle = host_context_handle; ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); + } else { + DXG_ERR("invalid host handle"); + ret = -EINVAL; +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index ee2ebfdd1c13..9fcab4ae2c0c 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -38,6 +38,7 @@ extern const struct d3dkmthandle zerohandle; + * core_lock (dxgadapter lock) + * device_lock (dxgdevice lock) + * process_adapter_mutex ++ * device_creation_lock in dxgadapter + * adapter_list_lock + * device_mutex (dxgglobal mutex) + */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1704-drivers-hv-dxgkrnl-Close-shared-file-objects-in-case-of-a-failure.patch b/patch/kernel/archive/wsl2-arm64-6.1/1704-drivers-hv-dxgkrnl-Close-shared-file-objects-in-case-of-a-failure.patch new file mode 100644 index 000000000000..c13eff3e946e --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1704-drivers-hv-dxgkrnl-Close-shared-file-objects-in-case-of-a-failure.patch @@ -0,0 +1,80 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 28 Jun 2022 17:26:11 -0700 +Subject: drivers: hv: dxgkrnl: Close shared file objects in case of a failure + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/ioctl.c | 14 +++++++--- + 1 file changed, 10 insertions(+), 4 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 7c72790f917f..69324510c9e2 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -4536,7 +4536,7 @@ enum dxg_sharedobject_type { + }; + + static int get_object_fd(enum dxg_sharedobject_type type, +- void *object, int *fdout) ++ void *object, int *fdout, struct 
file **filp) + { + struct file *file; + int fd; +@@ -4565,8 +4565,8 @@ static int get_object_fd(enum dxg_sharedobject_type type, + return -ENOTRECOVERABLE; + } + +- fd_install(fd, file); + *fdout = fd; ++ *filp = file; + return 0; + } + +@@ -4581,6 +4581,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + struct dxgsharedresource *shared_resource = NULL; + struct d3dkmthandle *handles = NULL; + int object_fd = -1; ++ struct file *filp = NULL; + void *obj = NULL; + u32 handle_size; + int ret; +@@ -4660,7 +4661,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + switch (object_type) { + case HMGRENTRY_TYPE_DXGSYNCOBJECT: + ret = get_object_fd(DXG_SHARED_SYNCOBJECT, shared_syncobj, +- &object_fd); ++ &object_fd, &filp); + if (ret < 0) { + DXG_ERR("get_object_fd failed for sync object"); + goto cleanup; +@@ -4675,7 +4676,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + break; + case HMGRENTRY_TYPE_DXGRESOURCE: + ret = get_object_fd(DXG_SHARED_RESOURCE, shared_resource, +- &object_fd); ++ &object_fd, &filp); + if (ret < 0) { + DXG_ERR("get_object_fd failed for resource"); + goto cleanup; +@@ -4708,10 +4709,15 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + if (ret) { + DXG_ERR("failed to copy shared handle"); + ret = -EFAULT; ++ goto cleanup; + } + ++ fd_install(object_fd, filp); ++ + cleanup: + if (ret < 0) { ++ if (filp) ++ fput(filp); + if (object_fd >= 0) + put_unused_fd(object_fd); + } +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1705-drivers-hv-dxgkrnl-Added-missed-NULL-check-for-resource-object.patch b/patch/kernel/archive/wsl2-arm64-6.1/1705-drivers-hv-dxgkrnl-Added-missed-NULL-check-for-resource-object.patch new file mode 100644 index 000000000000..db8494533f63 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1705-drivers-hv-dxgkrnl-Added-missed-NULL-check-for-resource-object.patch @@ -0,0 +1,51 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 29 Jun 2022 10:04:23 -0700 +Subject: drivers: hv: dxgkrnl: Added missed NULL check for resource object + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/ioctl.c | 10 ++++++---- + 1 file changed, 6 insertions(+), 4 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 69324510c9e2..98350583943e 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -1589,7 +1589,8 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + &process->handle_table, + HMGRENTRY_TYPE_DXGRESOURCE, + args.resource); +- kref_get(&resource->resource_kref); ++ if (resource != NULL) ++ kref_get(&resource->resource_kref); + dxgprocess_ht_lock_shared_up(process); + + if (resource == NULL || resource->device != device) { +@@ -1693,10 +1694,8 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + &standard_alloc); + cleanup: + +- if (resource_mutex_acquired) { ++ if (resource_mutex_acquired) + mutex_unlock(&resource->resource_mutex); +- kref_put(&resource->resource_kref, dxgresource_release); +- } + if (ret < 0) { + if (dxgalloc) { + for (i = 0; i < args.alloc_count; i++) { +@@ -1727,6 +1726,9 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + if (adapter) + dxgadapter_release_lock_shared(adapter); + ++ if (resource && !args.flags.create_resource) ++ kref_put(&resource->resource_kref, 
dxgresource_release); ++ + if (device) { + dxgdevice_release_lock_shared(device); + kref_put(&device->device_kref, dxgdevice_release); +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1706-drivers-hv-dxgkrnl-Fixed-dxgkrnl-to-build-for-the-6.1-kernel.patch b/patch/kernel/archive/wsl2-arm64-6.1/1706-drivers-hv-dxgkrnl-Fixed-dxgkrnl-to-build-for-the-6.1-kernel.patch new file mode 100644 index 000000000000..3efcc7ef401b --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1706-drivers-hv-dxgkrnl-Fixed-dxgkrnl-to-build-for-the-6.1-kernel.patch @@ -0,0 +1,84 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Thu, 26 Jan 2023 10:49:41 -0800 +Subject: drivers: hv: dxgkrnl: Fixed dxgkrnl to build for the 6.1 kernel + +Definition for GPADL was changed from u32 to struct vmbus_gpadl. + +Signed-off-by: Iouri Tarassov +--- + drivers/hv/dxgkrnl/dxgadapter.c | 8 -------- + drivers/hv/dxgkrnl/dxgkrnl.h | 4 ---- + drivers/hv/dxgkrnl/dxgvmbus.c | 8 -------- + 3 files changed, 20 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index d9d45bd4a31e..bcd19b7267d1 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -927,19 +927,11 @@ void dxgallocation_destroy(struct dxgallocation *alloc) + alloc->owner.device, + &args, &alloc->alloc_handle); + } +-#ifdef _MAIN_KERNEL_ + if (alloc->gpadl.gpadl_handle) { + DXG_TRACE("Teardown gpadl %d", alloc->gpadl.gpadl_handle); + vmbus_teardown_gpadl(dxgglobal_get_vmbus(), &alloc->gpadl); + alloc->gpadl.gpadl_handle = 0; + } +-#else +- if (alloc->gpadl) { +- DXG_TRACE("Teardown gpadl %d", alloc->gpadl); +- vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl); +- alloc->gpadl = 0; +- } +-#endif + if (alloc->priv_drv_data) + vfree(alloc->priv_drv_data); + if (alloc->cpu_address_mapped) +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 1b40d6e39085..c5ed23cb90df 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -728,11 +728,7 @@ struct dxgallocation { + u32 cached:1; + u32 handle_valid:1; + /* GPADL address list for existing sysmem allocations */ +-#ifdef _MAIN_KERNEL_ + struct vmbus_gpadl gpadl; +-#else +- u32 gpadl; +-#endif + /* Number of pages in the 'pages' array */ + u32 num_pages; + /* +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 8c99f141482e..eb3f4c5153a6 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1493,22 +1493,14 @@ int create_existing_sysmem(struct dxgdevice *device, + ret = -ENOMEM; + goto cleanup; + } +-#ifdef _MAIN_KERNEL_ + DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle); +-#else +- DXG_TRACE("New gpadl %d", dxgalloc->gpadl); +-#endif + + command_vgpu_to_host_init2(&set_store_command->hdr, + DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE, + device->process->host_handle); + set_store_command->device = device->handle; + set_store_command->allocation = host_alloc->allocation; +-#ifdef _MAIN_KERNEL_ + set_store_command->gpadl = dxgalloc->gpadl.gpadl_handle; +-#else +- set_store_command->gpadl = dxgalloc->gpadl; +-#endif + ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, + msg.size); + if (ret < 0) +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1707-virtio-pmem-Support-PCI-BAR-relative-addresses.patch b/patch/kernel/archive/wsl2-arm64-6.1/1707-virtio-pmem-Support-PCI-BAR-relative-addresses.patch new file mode 100644 index 
000000000000..1367f97d3e36 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1707-virtio-pmem-Support-PCI-BAR-relative-addresses.patch @@ -0,0 +1,80 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Taylor Stark +Date: Thu, 15 Jul 2021 15:35:05 -0700 +Subject: virtio-pmem: Support PCI BAR-relative addresses + +Update virtio-pmem to allow for the pmem region to be specified in either +guest absolute terms or as a PCI BAR-relative address. This is required +to support virtio-pmem in Hyper-V, since Hyper-V only allows PCI devices +to operate on PCI memory ranges defined via BARs. + +Virtio-pmem will check for a shared memory window and use that if found, +else it will fallback to using the guest absolute addresses in +virtio_pmem_config. This was chosen over defining a new feature bit, +since it's similar to how virtio-fs is configured. + +Signed-off-by: Taylor Stark + +Link: https://lore.kernel.org/r/20210715223505.GA29329@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net +Signed-off-by: Tyler Hicks +--- + drivers/nvdimm/virtio_pmem.c | 21 ++++++++-- + drivers/nvdimm/virtio_pmem.h | 3 ++ + 2 files changed, 20 insertions(+), 4 deletions(-) + +diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c +index 20da455d2ef6..8998a0c03c2f 100644 +--- a/drivers/nvdimm/virtio_pmem.c ++++ b/drivers/nvdimm/virtio_pmem.c +@@ -37,6 +37,8 @@ static int virtio_pmem_probe(struct virtio_device *vdev) + struct virtio_pmem *vpmem; + struct resource res; + int err = 0; ++ bool have_shm_region; ++ struct virtio_shm_region pmem_region; + + if (!vdev->config->get) { + dev_err(&vdev->dev, "%s failure: config access disabled\n", +@@ -58,10 +60,21 @@ static int virtio_pmem_probe(struct virtio_device *vdev) + goto out_err; + } + +- virtio_cread_le(vpmem->vdev, struct virtio_pmem_config, +- start, &vpmem->start); +- virtio_cread_le(vpmem->vdev, struct virtio_pmem_config, +- size, &vpmem->size); ++ /* Retrieve the pmem device's address and size. It may have been supplied ++ * as a PCI BAR-relative shared memory region, or as a guest absolute address. 
++ */ ++ have_shm_region = virtio_get_shm_region(vpmem->vdev, &pmem_region, ++ VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION); ++ ++ if (have_shm_region) { ++ vpmem->start = pmem_region.addr; ++ vpmem->size = pmem_region.len; ++ } else { ++ virtio_cread_le(vpmem->vdev, struct virtio_pmem_config, ++ start, &vpmem->start); ++ virtio_cread_le(vpmem->vdev, struct virtio_pmem_config, ++ size, &vpmem->size); ++ } + + res.start = vpmem->start; + res.end = vpmem->start + vpmem->size - 1; +diff --git a/drivers/nvdimm/virtio_pmem.h b/drivers/nvdimm/virtio_pmem.h +index 0dddefe594c4..62bb564e81cb 100644 +--- a/drivers/nvdimm/virtio_pmem.h ++++ b/drivers/nvdimm/virtio_pmem.h +@@ -50,6 +50,9 @@ struct virtio_pmem { + __u64 size; + }; + ++/* For the id field in virtio_pci_shm_cap */ ++#define VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION 0 ++ + void virtio_pmem_host_ack(struct virtqueue *vq); + int async_pmem_flush(struct nd_region *nd_region, struct bio *bio); + #endif +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1708-virtio-pmem-Set-DRIVER_OK-status-prior-to-creating-pmem-region.patch b/patch/kernel/archive/wsl2-arm64-6.1/1708-virtio-pmem-Set-DRIVER_OK-status-prior-to-creating-pmem-region.patch new file mode 100644 index 000000000000..a69a86d80c4a --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1708-virtio-pmem-Set-DRIVER_OK-status-prior-to-creating-pmem-region.patch @@ -0,0 +1,52 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Taylor Stark +Date: Thu, 15 Jul 2021 15:36:38 -0700 +Subject: virtio-pmem: Set DRIVER_OK status prior to creating pmem region + +Update virtio-pmem to call virtio_device_ready prior to creating the pmem +region. Otherwise, the guest may try to access the pmem region prior to +the DRIVER_OK status being set. + +In the case of Hyper-V, the backing pmem file isn't mapped to the guest +until the DRIVER_OK status is set. Therefore, attempts to access the pmem +region can cause the guest to crash. Hyper-V could map the file earlier, +for example at VM creation, but we didn't want to pay the mapping cost if +the device is never used. Additionally, it felt weird to allow the guest +to access the region prior to the device fully coming online. + +Signed-off-by: Taylor Stark +Reviewed-by: Pankaj Gupta + +Link: https://lore.kernel.org/r/20210715223638.GA29649@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net +Signed-off-by: Tyler Hicks +--- + drivers/nvdimm/virtio_pmem.c | 6 ++++++ + 1 file changed, 6 insertions(+) + +diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c +index 8998a0c03c2f..1b5924caa1c6 100644 +--- a/drivers/nvdimm/virtio_pmem.c ++++ b/drivers/nvdimm/virtio_pmem.c +@@ -91,6 +91,11 @@ static int virtio_pmem_probe(struct virtio_device *vdev) + + dev_set_drvdata(&vdev->dev, vpmem->nvdimm_bus); + ++ /* Online the device prior to creating a pmem region, to ensure that ++ * the region is never touched while the device is offline. 
++ */ ++ virtio_device_ready(vdev); ++ + ndr_desc.res = &res; + ndr_desc.numa_node = nid; + ndr_desc.flush = async_pmem_flush; +@@ -111,6 +116,7 @@ static int virtio_pmem_probe(struct virtio_device *vdev) + } + return 0; + out_nd: ++ vdev->config->reset(vdev); + virtio_reset_device(vdev); + nvdimm_bus_unregister(vpmem->nvdimm_bus); + out_vq: +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1709-mm-page_reporting-Add-checks-for-page_reporting_order-param.patch b/patch/kernel/archive/wsl2-arm64-6.1/1709-mm-page_reporting-Add-checks-for-page_reporting_order-param.patch new file mode 100644 index 000000000000..39ed2ab2dacd --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1709-mm-page_reporting-Add-checks-for-page_reporting_order-param.patch @@ -0,0 +1,104 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Shradha Gupta +Date: Thu, 29 Sep 2022 23:01:38 -0700 +Subject: mm/page_reporting: Add checks for page_reporting_order param + +Current code allows the page_reporting_order parameter to be changed +via sysfs to any integer value. The new value is used immediately +in page reporting code with no validation, which could cause incorrect +behavior. Fix this by adding validation of the new value. +Export this parameter for use in the driver that is calling the +page_reporting_register(). + +This is needed by drivers like hv_balloon to know the order of the +pages reported. Traditionally the values provided in the kernel boot +line or subsequently changed via sysfs take priority therefore, if +page_reporting_order parameter's value is set, it takes precedence +over the value passed while registering with the driver. + +Signed-off-by: Shradha Gupta +Reviewed-by: Michael Kelley +Acked-by: Andrew Morton +Link: https://lore.kernel.org/r/1664517699-1085-2-git-send-email-shradhagupta@linux.microsoft.com +Signed-off-by: Wei Liu +(cherry picked from commit aebb02ce8b36d20464404206b89069dc9239a7f0) +Link: https://microsoft.visualstudio.com/OS/_workitems/edit/42270731/ +Signed-off-by: Kelsey Steele +--- + mm/page_reporting.c | 50 +++++++++- + 1 file changed, 45 insertions(+), 5 deletions(-) + +diff --git a/mm/page_reporting.c b/mm/page_reporting.c +index 382958eef8a9..79a8554f024c 100644 +--- a/mm/page_reporting.c ++++ b/mm/page_reporting.c +@@ -11,10 +11,42 @@ + #include "page_reporting.h" + #include "internal.h" + +-unsigned int page_reporting_order = MAX_ORDER; +-module_param(page_reporting_order, uint, 0644); ++/* Initialize to an unsupported value */ ++unsigned int page_reporting_order = -1; ++ ++static int page_order_update_notify(const char *val, const struct kernel_param *kp) ++{ ++ /* ++ * If param is set beyond this limit, order is set to default ++ * pageblock_order value ++ */ ++ return param_set_uint_minmax(val, kp, 0, MAX_ORDER-1); ++} ++ ++static const struct kernel_param_ops page_reporting_param_ops = { ++ .set = &page_order_update_notify, ++ /* ++ * For the get op, use param_get_int instead of param_get_uint. ++ * This is to make sure that when unset the initialized value of ++ * -1 is shown correctly ++ */ ++ .get = ¶m_get_int, ++}; ++ ++module_param_cb(page_reporting_order, &page_reporting_param_ops, ++ &page_reporting_order, 0644); + MODULE_PARM_DESC(page_reporting_order, "Set page reporting order"); + ++/* ++ * This symbol is also a kernel parameter. Export the page_reporting_order ++ * symbol so that other drivers can access it to control order values without ++ * having to introduce another configurable parameter. 
Only one driver can ++ * register with the page_reporting driver for the service, so we have just ++ * one control parameter for the use case(which can be accessed in both ++ * drivers) ++ */ ++EXPORT_SYMBOL_GPL(page_reporting_order); ++ + #define PAGE_REPORTING_DELAY (2 * HZ) + static struct page_reporting_dev_info __rcu *pr_dev_info __read_mostly; + +@@ -330,10 +362,18 @@ int page_reporting_register(struct page_reporting_dev_info *prdev) + } + + /* +- * Update the page reporting order if it's specified by driver. +- * Otherwise, it falls back to @pageblock_order. ++ * If the page_reporting_order value is not set, we check if ++ * an order is provided from the driver that is performing the ++ * registration. If that is not provided either, we default to ++ * pageblock_order. + */ +- page_reporting_order = prdev->order ? : pageblock_order; ++ ++ if (page_reporting_order == -1) { ++ if (prdev->order > 0 && prdev->order <= MAX_ORDER) ++ page_reporting_order = prdev->order; ++ else ++ page_reporting_order = pageblock_order; ++ } + + /* initialize state and work structures */ + atomic_set(&prdev->state, PAGE_REPORTING_IDLE); +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.1/1710-hv_balloon-Add-support-for-configurable-order-free-page-reporting.patch b/patch/kernel/archive/wsl2-arm64-6.1/1710-hv_balloon-Add-support-for-configurable-order-free-page-reporting.patch new file mode 100644 index 000000000000..25af8da6825e --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.1/1710-hv_balloon-Add-support-for-configurable-order-free-page-reporting.patch @@ -0,0 +1,202 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Shradha Gupta +Date: Thu, 29 Sep 2022 23:01:39 -0700 +Subject: hv_balloon: Add support for configurable order free page reporting + +Newer versions of Hyper-V allow reporting unused guest pages in chunks +smaller than 2 Mbytes. Using smaller chunks allows reporting more +unused guest pages, but with increased overhead in the finding the +small chunks. To make this tradeoff configurable, use the existing +page_reporting_order module parameter to control the reporting order. +Drop and refine checks that restricted the minimun page reporting order +to 2Mbytes size pages. Add appropriate checks to make sure the +underlying Hyper-V versions support cold discard hints of any order +(and not just starting from 9) + +Signed-off-by: Shradha Gupta +Reviewed-by: Michael Kelley +Link: https://lore.kernel.org/r/1664517699-1085-3-git-send-email-shradhagupta@linux.microsoft.com +Signed-off-by: Wei Liu +(cherry picked from commit dc60f2db39c3f8da4490c1ed827022bbc925d81c) +Link: https://microsoft.visualstudio.com/OS/_workitems/edit/42270731/ +Signed-off-by: Kelsey Steele +--- + drivers/hv/hv_balloon.c | 94 +++++++--- + 1 file changed, 73 insertions(+), 21 deletions(-) + +diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c +index f98c849096f7..64ac5bdee3a6 100644 +--- a/drivers/hv/hv_balloon.c ++++ b/drivers/hv/hv_balloon.c +@@ -469,12 +469,16 @@ static bool do_hot_add; + * the specified number of seconds. + */ + static uint pressure_report_delay = 45; ++extern unsigned int page_reporting_order; ++#define HV_MAX_FAILURES 2 + + /* + * The last time we posted a pressure report to host. 
+ */ + static unsigned long last_post_time; + ++static int hv_hypercall_multi_failure; ++ + module_param(hot_add, bool, (S_IRUGO | S_IWUSR)); + MODULE_PARM_DESC(hot_add, "If set attempt memory hot_add"); + +@@ -579,6 +583,10 @@ static struct hv_dynmem_device dm_device; + + static void post_status(struct hv_dynmem_device *dm); + ++static void enable_page_reporting(void); ++ ++static void disable_page_reporting(void); ++ + #ifdef CONFIG_MEMORY_HOTPLUG + static inline bool has_pfn_is_backed(struct hv_hotadd_state *has, + unsigned long pfn) +@@ -1418,6 +1426,18 @@ static int dm_thread_func(void *dm_dev) + */ + reinit_completion(&dm_device.config_event); + post_status(dm); ++ /* ++ * disable free page reporting if multiple hypercall ++ * failure flag set. It is not done in the page_reporting ++ * callback context as that causes a deadlock between ++ * page_reporting_process() and page_reporting_unregister() ++ */ ++ if (hv_hypercall_multi_failure >= HV_MAX_FAILURES) { ++ pr_err("Multiple failures in cold memory discard hypercall, disabling page reporting\n"); ++ disable_page_reporting(); ++ /* Reset the flag after disabling reporting */ ++ hv_hypercall_multi_failure = 0; ++ } + } + + return 0; +@@ -1593,20 +1613,20 @@ static void balloon_onchannelcallback(void *context) + + } + +-/* Hyper-V only supports reporting 2MB pages or higher */ +-#define HV_MIN_PAGE_REPORTING_ORDER 9 +-#define HV_MIN_PAGE_REPORTING_LEN (HV_HYP_PAGE_SIZE << HV_MIN_PAGE_REPORTING_ORDER) ++#define HV_LARGE_REPORTING_ORDER 9 ++#define HV_LARGE_REPORTING_LEN (HV_HYP_PAGE_SIZE << \ ++ HV_LARGE_REPORTING_ORDER) + static int hv_free_page_report(struct page_reporting_dev_info *pr_dev_info, + struct scatterlist *sgl, unsigned int nents) + { + unsigned long flags; + struct hv_memory_hint *hint; +- int i; ++ int i, order; + u64 status; + struct scatterlist *sg; + + WARN_ON_ONCE(nents > HV_MEMORY_HINT_MAX_GPA_PAGE_RANGES); +- WARN_ON_ONCE(sgl->length < HV_MIN_PAGE_REPORTING_LEN); ++ WARN_ON_ONCE(sgl->length < (HV_HYP_PAGE_SIZE << page_reporting_order)); + local_irq_save(flags); + hint = *(struct hv_memory_hint **)this_cpu_ptr(hyperv_pcpu_input_arg); + if (!hint) { +@@ -1621,21 +1641,53 @@ static int hv_free_page_report(struct page_reporting_dev_info *pr_dev_info, + + range = &hint->ranges[i]; + range->address_space = 0; +- /* page reporting only reports 2MB pages or higher */ +- range->page.largepage = 1; +- range->page.additional_pages = +- (sg->length / HV_MIN_PAGE_REPORTING_LEN) - 1; +- range->page_size = HV_GPA_PAGE_RANGE_PAGE_SIZE_2MB; +- range->base_large_pfn = +- page_to_hvpfn(sg_page(sg)) >> HV_MIN_PAGE_REPORTING_ORDER; ++ order = get_order(sg->length); ++ /* ++ * Hyper-V expects the additional_pages field in the units ++ * of one of these 3 sizes, 4Kbytes, 2Mbytes or 1Gbytes. ++ * This is dictated by the values of the fields page.largesize ++ * and page_size. ++ * This code however, only uses 4Kbytes and 2Mbytes units ++ * and not 1Gbytes unit. 
++ */ ++ ++ /* page reporting for pages 2MB or higher */ ++ if (order >= HV_LARGE_REPORTING_ORDER ) { ++ range->page.largepage = 1; ++ range->page_size = HV_GPA_PAGE_RANGE_PAGE_SIZE_2MB; ++ range->base_large_pfn = page_to_hvpfn( ++ sg_page(sg)) >> HV_LARGE_REPORTING_ORDER; ++ range->page.additional_pages = ++ (sg->length / HV_LARGE_REPORTING_LEN) - 1; ++ } else { ++ /* Page reporting for pages below 2MB */ ++ range->page.basepfn = page_to_hvpfn(sg_page(sg)); ++ range->page.largepage = false; ++ range->page.additional_pages = ++ (sg->length / HV_HYP_PAGE_SIZE) - 1; ++ } ++ + } + + status = hv_do_rep_hypercall(HV_EXT_CALL_MEMORY_HEAT_HINT, nents, 0, + hint, NULL); + local_irq_restore(flags); +- if ((status & HV_HYPERCALL_RESULT_MASK) != HV_STATUS_SUCCESS) { ++ if (!hv_result_success(status)) { ++ + pr_err("Cold memory discard hypercall failed with status %llx\n", +- status); ++ status); ++ if (hv_hypercall_multi_failure > 0) ++ hv_hypercall_multi_failure++; ++ ++ if (hv_result(status) == HV_STATUS_INVALID_PARAMETER) { ++ pr_err("Underlying Hyper-V does not support order less than 9. Hypercall failed\n"); ++ pr_err("Defaulting to page_reporting_order %d\n", ++ pageblock_order); ++ page_reporting_order = pageblock_order; ++ hv_hypercall_multi_failure++; ++ return -EINVAL; ++ } ++ + return -EINVAL; + } + +@@ -1646,12 +1698,6 @@ static void enable_page_reporting(void) + { + int ret; + +- /* Essentially, validating 'PAGE_REPORTING_MIN_ORDER' is big enough. */ +- if (pageblock_order < HV_MIN_PAGE_REPORTING_ORDER) { +- pr_debug("Cold memory discard is only supported on 2MB pages and above\n"); +- return; +- } +- + if (!hv_query_ext_cap(HV_EXT_CAPABILITY_MEMORY_COLD_DISCARD_HINT)) { + pr_debug("Cold memory discard hint not supported by Hyper-V\n"); + return; +@@ -1659,12 +1705,18 @@ static void enable_page_reporting(void) + + BUILD_BUG_ON(PAGE_REPORTING_CAPACITY > HV_MEMORY_HINT_MAX_GPA_PAGE_RANGES); + dm_device.pr_dev_info.report = hv_free_page_report; ++ /* ++ * We let the page_reporting_order parameter decide the order ++ * in the page_reporting code ++ */ ++ dm_device.pr_dev_info.order = 0; + ret = page_reporting_register(&dm_device.pr_dev_info); + if (ret < 0) { + dm_device.pr_dev_info.report = NULL; + pr_err("Failed to enable cold memory discard: %d\n", ret); + } else { +- pr_info("Cold memory discard hint enabled\n"); ++ pr_info("Cold memory discard hint enabled with order %d\n", ++ page_reporting_order); + } + } + +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1666-Hyper-V-ARM64-Always-use-the-Hyper-V-hypercall-interface.patch b/patch/kernel/archive/wsl2-arm64-6.6/1666-Hyper-V-ARM64-Always-use-the-Hyper-V-hypercall-interface.patch new file mode 100644 index 000000000000..0c35ef5b5fa1 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1666-Hyper-V-ARM64-Always-use-the-Hyper-V-hypercall-interface.patch @@ -0,0 +1,239 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Sunil Muthuswamy +Date: Mon, 3 May 2021 14:17:52 -0700 +Subject: Hyper-V: ARM64: Always use the Hyper-V hypercall interface + +This patch forces the use of the Hyper-V hypercall interface, +instead of the architectural SMCCC interface on ARM64 because +not all versions of Windows support the SMCCC interface. All +versions of Windows will support the Hyper-V hypercall interface, +so this change should be both forward and backward compatible. 
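+
+For reviewers, the resulting call path is deliberately simple: build the
+hypercall control word, trap into the hypervisor with "hvc #1", and read the
+status back from x0. Below is a minimal sketch, condensed from the
+hv_do_hypercall()/hv_do_fast_hypercall8() hunks in the diff that follows
+(the hv_do_hvc() helper and HV_HYPERCALL_FAST_BIT are the names that diff
+declares or uses); it is an illustration only and is not applied anywhere:
+
+  #include <linux/mm.h>          /* virt_to_phys() */
+  #include <asm/mshyperv.h>      /* hv_do_hvc(), HV_HYPERCALL_FAST_BIT */
+
+  /* Slow hypercall: input/output blocks live in guest-physical memory. */
+  static u64 sketch_slow_hypercall(u64 control, void *input, void *output)
+  {
+          u64 input_pa  = input  ? virt_to_phys(input)  : 0;
+          u64 output_pa = output ? virt_to_phys(output) : 0;
+
+          /* hv_do_hvc() issues "hvc #1" with the control word in x0. */
+          return hv_do_hvc(control, input_pa, output_pa);
+  }
+
+  /* Fast hypercall: the single argument travels in a register, signalled
+   * by HV_HYPERCALL_FAST_BIT in the control word. */
+  static u64 sketch_fast_hypercall8(u16 code, u64 input)
+  {
+          return hv_do_hvc((u64)code | HV_HYPERCALL_FAST_BIT, input);
+  }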
+ +Signed-off-by: Sunil Muthuswamy + +[tyhicks: Forward ported to v5.15] +Signed-off-by: Tyler Hicks +[kms: Forward ported to v6.1] +Signed-off-by: Kelsey Steele +--- + arch/arm64/hyperv/Makefile | 2 +- + arch/arm64/hyperv/hv_core.c | 57 ++++----- + arch/arm64/hyperv/hv_hvc.S | 61 ++++++++++ + arch/arm64/include/asm/mshyperv.h | 4 + + 4 files changed, 91 insertions(+), 33 deletions(-) + +diff --git a/arch/arm64/hyperv/Makefile b/arch/arm64/hyperv/Makefile +index 87c31c001da9..4cbeaa36d189 100644 +--- a/arch/arm64/hyperv/Makefile ++++ b/arch/arm64/hyperv/Makefile +@@ -1,2 +1,2 @@ + # SPDX-License-Identifier: GPL-2.0 +-obj-y := hv_core.o mshyperv.o ++obj-y := hv_core.o mshyperv.o hv_hvc.o +diff --git a/arch/arm64/hyperv/hv_core.c b/arch/arm64/hyperv/hv_core.c +index b54c34793701..e7010b2a587c 100644 +--- a/arch/arm64/hyperv/hv_core.c ++++ b/arch/arm64/hyperv/hv_core.c +@@ -23,16 +23,13 @@ + */ + u64 hv_do_hypercall(u64 control, void *input, void *output) + { +- struct arm_smccc_res res; + u64 input_address; + u64 output_address; + + input_address = input ? virt_to_phys(input) : 0; + output_address = output ? virt_to_phys(output) : 0; + +- arm_smccc_1_1_hvc(HV_FUNC_ID, control, +- input_address, output_address, &res); +- return res.a0; ++ return hv_do_hvc(control, input_address, output_address); + } + EXPORT_SYMBOL_GPL(hv_do_hypercall); + +@@ -41,27 +38,33 @@ EXPORT_SYMBOL_GPL(hv_do_hypercall); + * with arguments in registers instead of physical memory. + * Avoids the overhead of virt_to_phys for simple hypercalls. + */ +- + u64 hv_do_fast_hypercall8(u16 code, u64 input) + { +- struct arm_smccc_res res; + u64 control; + + control = (u64)code | HV_HYPERCALL_FAST_BIT; +- +- arm_smccc_1_1_hvc(HV_FUNC_ID, control, input, &res); +- return res.a0; ++ return hv_do_hvc(control, input); + } + EXPORT_SYMBOL_GPL(hv_do_fast_hypercall8); + ++union hv_hypercall_status { ++ u64 as_uint64; ++ struct { ++ u16 status; ++ u16 reserved; ++ u16 reps_completed; /* Low 12 bits */ ++ u16 reserved2; ++ }; ++}; ++ + /* + * Set a single VP register to a 64-bit value. + */ + void hv_set_vpreg(u32 msr, u64 value) + { +- struct arm_smccc_res res; ++ union hv_hypercall_status status; + +- arm_smccc_1_1_hvc(HV_FUNC_ID, ++ status.as_uint64 = hv_do_hvc( + HVCALL_SET_VP_REGISTERS | HV_HYPERCALL_FAST_BIT | + HV_HYPERCALL_REP_COMP_1, + HV_PARTITION_ID_SELF, +@@ -69,15 +72,14 @@ void hv_set_vpreg(u32 msr, u64 value) + msr, + 0, + value, +- 0, +- &res); ++ 0); + + /* + * Something is fundamentally broken in the hypervisor if + * setting a VP register fails. There's really no way to + * continue as a guest VM, so panic. + */ +- BUG_ON(!hv_result_success(res.a0)); ++ BUG_ON(status.status != HV_STATUS_SUCCESS); + } + EXPORT_SYMBOL_GPL(hv_set_vpreg); + +@@ -90,31 +92,22 @@ EXPORT_SYMBOL_GPL(hv_set_vpreg); + + void hv_get_vpreg_128(u32 msr, struct hv_get_vp_registers_output *result) + { +- struct arm_smccc_1_2_regs args; +- struct arm_smccc_1_2_regs res; +- +- args.a0 = HV_FUNC_ID; +- args.a1 = HVCALL_GET_VP_REGISTERS | HV_HYPERCALL_FAST_BIT | +- HV_HYPERCALL_REP_COMP_1; +- args.a2 = HV_PARTITION_ID_SELF; +- args.a3 = HV_VP_INDEX_SELF; +- args.a4 = msr; ++ u64 status; + +- /* +- * Use the SMCCC 1.2 interface because the results are in registers +- * beyond X0-X3. 
+- */ +- arm_smccc_1_2_hvc(&args, &res); ++ status = hv_do_hvc_fast_get( ++ HVCALL_GET_VP_REGISTERS | HV_HYPERCALL_FAST_BIT | ++ HV_HYPERCALL_REP_COMP_1, ++ HV_PARTITION_ID_SELF, ++ HV_VP_INDEX_SELF, ++ msr, ++ result); + + /* + * Something is fundamentally broken in the hypervisor if + * getting a VP register fails. There's really no way to + * continue as a guest VM, so panic. + */ +- BUG_ON(!hv_result_success(res.a0)); +- +- result->as64.low = res.a6; +- result->as64.high = res.a7; ++ BUG_ON((status & HV_HYPERCALL_RESULT_MASK) != HV_STATUS_SUCCESS); + } + EXPORT_SYMBOL_GPL(hv_get_vpreg_128); + +diff --git a/arch/arm64/hyperv/hv_hvc.S b/arch/arm64/hyperv/hv_hvc.S +new file mode 100644 +index 000000000000..c22d34ccd0aa +--- /dev/null ++++ b/arch/arm64/hyperv/hv_hvc.S +@@ -0,0 +1,61 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++ ++/* ++ * Microsoft Hyper-V hypervisor invocation routines ++ * ++ * Copyright (C) 2018, Microsoft, Inc. ++ * ++ * Author : Michael Kelley ++ * ++ * This program is free software; you can redistribute it and/or modify it ++ * under the terms of the GNU General Public License version 2 as published ++ * by the Free Software Foundation. ++ * ++ * This program is distributed in the hope that it will be useful, but ++ * WITHOUT ANY WARRANTY; without even the implied warranty of ++ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or ++ * NON INFRINGEMENT. See the GNU General Public License for more ++ * details. ++ */ ++ ++#include ++#include ++ ++ .text ++/* ++ * Do the HVC instruction. For Hyper-V the argument is always 1. ++ * x0 contains the hypercall control value, while additional registers ++ * vary depending on the hypercall, and whether the hypercall arguments ++ * are in memory or in registers (a "fast" hypercall per the Hyper-V ++ * TLFS). When the arguments are in memory x1 is the guest physical ++ * address of the input arguments, and x2 is the guest physical ++ * address of the output arguments. When the arguments are in ++ * registers, the register values depends on the hypercall. Note ++ * that this version cannot return any values in registers. ++ */ ++SYM_FUNC_START(hv_do_hvc) ++ hvc #1 ++ ret ++SYM_FUNC_END(hv_do_hvc) ++ ++/* ++ * This variant of HVC invocation is for hv_get_vpreg and ++ * hv_get_vpreg_128. The input parameters are passed in registers ++ * along with a pointer in x4 to where the output result should ++ * be stored. The output is returned in x15 and x16. x19 is used as ++ * scratch space to avoid buildng a stack frame, as Hyper-V does ++ * not preserve registers x0-x17. ++ */ ++SYM_FUNC_START(hv_do_hvc_fast_get) ++ /* ++ * Stash away x19 register so that it can be used as a scratch ++ * register and pop it at the end. ++ */ ++ str x19, [sp, #-16]! ++ mov x19, x4 ++ hvc #1 ++ str x15,[x19] ++ str x16,[x19,#8] ++ ldr x19, [sp], #16 ++ ret ++SYM_FUNC_END(hv_do_hvc_fast_get) +diff --git a/arch/arm64/include/asm/mshyperv.h b/arch/arm64/include/asm/mshyperv.h +index 20070a847304..f87a450e5b6b 100644 +--- a/arch/arm64/include/asm/mshyperv.h ++++ b/arch/arm64/include/asm/mshyperv.h +@@ -22,6 +22,10 @@ + #include + #include + ++extern u64 hv_do_hvc(u64 control, ...); ++extern u64 hv_do_hvc_fast_get(u64 control, u64 input1, u64 input2, u64 input3, ++ struct hv_get_vp_registers_output *output); ++ + /* + * Declare calls to get and set Hyper-V VP register values on ARM64, which + * requires a hypercall. 
+-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1667-drivers-hv-dxgkrnl-Add-virtual-compute-device-VMBus-channel-guids.patch b/patch/kernel/archive/wsl2-arm64-6.6/1667-drivers-hv-dxgkrnl-Add-virtual-compute-device-VMBus-channel-guids.patch new file mode 100644 index 000000000000..e76bcefe5392 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1667-drivers-hv-dxgkrnl-Add-virtual-compute-device-VMBus-channel-guids.patch @@ -0,0 +1,45 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 15 Feb 2022 18:11:52 -0800 +Subject: drivers: hv: dxgkrnl: Add virtual compute device VMBus channel guids + +Add VMBus channel guids, which are used by hyper-v virtual compute +device driver. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + include/linux/hyperv.h | 16 ++++++++++ + 1 file changed, 16 insertions(+) + +diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h +index 2b00faf98017..caf62f602cf8 100644 +--- a/include/linux/hyperv.h ++++ b/include/linux/hyperv.h +@@ -1451,6 +1451,22 @@ void vmbus_free_mmio(resource_size_t start, resource_size_t size); + .guid = GUID_INIT(0xda0a7802, 0xe377, 0x4aac, 0x8e, 0x77, \ + 0x05, 0x58, 0xeb, 0x10, 0x73, 0xf8) + ++/* ++ * GPU paravirtualization global DXGK channel ++ * {DDE9CBC0-5060-4436-9448-EA1254A5D177} ++ */ ++#define HV_GPUP_DXGK_GLOBAL_GUID \ ++ .guid = GUID_INIT(0xdde9cbc0, 0x5060, 0x4436, 0x94, 0x48, \ ++ 0xea, 0x12, 0x54, 0xa5, 0xd1, 0x77) ++ ++/* ++ * GPU paravirtualization per virtual GPU DXGK channel ++ * {6E382D18-3336-4F4B-ACC4-2B7703D4DF4A} ++ */ ++#define HV_GPUP_DXGK_VGPU_GUID \ ++ .guid = GUID_INIT(0x6e382d18, 0x3336, 0x4f4b, 0xac, 0xc4, \ ++ 0x2b, 0x77, 0x3, 0xd4, 0xdf, 0x4a) ++ + /* + * Synthetic FC GUID + * {2f9bcc4a-0069-4af3-b76b-6fd0be528cda} +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1668-drivers-hv-dxgkrnl-Driver-initialization-and-loading.patch b/patch/kernel/archive/wsl2-arm64-6.6/1668-drivers-hv-dxgkrnl-Driver-initialization-and-loading.patch new file mode 100644 index 000000000000..ef5bafdeb009 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1668-drivers-hv-dxgkrnl-Driver-initialization-and-loading.patch @@ -0,0 +1,966 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 24 Mar 2021 11:10:28 -0700 +Subject: drivers: hv: dxgkrnl: Driver initialization and loading + +- Create skeleton and add basic functionality for the Hyper-V +compute device driver (dxgkrnl). + +- Register for PCI and VMBus driver notifications and handle +initialization of VMBus channels. + +- Connect the dxgkrnl module to the drivers/hv/ Makefile and Kconfig + +- Create a MAINTAINERS entry + +A VMBus channel is a communication interface between the Hyper-V guest +and the host. The are two type of VMBus channels, used in the driver: + - the global channel + - per virtual compute device channel + +A PCI device is created for each virtual compute device, projected +by the host. The device vendor is PCI_VENDOR_ID_MICROSOFT and device +id is PCI_DEVICE_ID_VIRTUAL_RENDER. dxg_pci_probe_device handles +arrival of such devices. The PCI config space of the virtual compute +device has luid of the corresponding virtual compute device VM +bus channel. This is how the compute device adapter objects are +linked to VMBus channels. + +VMBus interface version is exchanged by reading/writing the PCI config +space of the virtual compute device. 
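+
+For reviewers, the version handshake and channel lookup described above
+amount to a handful of PCI config-space accesses at fixed offsets. Below is
+a minimal sketch, condensed from the dxg_pci_probe_device() hunk in the diff
+that follows (DXGK_VMBUS_CHANNEL_ID_OFFSET, DXGK_VMBUS_VERSION_OFFSET and
+DXGK_VMBUS_INTERFACE_VERSION are the values that hunk defines); it is an
+illustration only and is not applied anywhere:
+
+  #include <linux/pci.h>
+  #include <linux/uuid.h>
+
+  #define DXGK_VMBUS_CHANNEL_ID_OFFSET  192
+  #define DXGK_VMBUS_VERSION_OFFSET     (DXGK_VMBUS_CHANNEL_ID_OFFSET + \
+                                         sizeof(guid_t))
+  #define DXGK_VMBUS_INTERFACE_VERSION  40
+
+  /* Read the VMBus channel instance GUID (whose low 8 bytes double as the
+   * host adapter LUID) and negotiate the VM bus interface version. */
+  static int sketch_probe_handshake(struct pci_dev *dev,
+                                    guid_t *channel_id, u32 *host_ver)
+  {
+          int i, ret;
+
+          /* The GUID is read one dword at a time, as the probe path does. */
+          for (i = 0; i < sizeof(*channel_id) / sizeof(u32); i++) {
+                  ret = pci_read_config_dword(dev,
+                                  DXGK_VMBUS_CHANNEL_ID_OFFSET + i * 4,
+                                  &((u32 *)channel_id)[i]);
+                  if (ret)
+                          return ret;
+          }
+
+          /* Read the version the host offers, then write back the version
+           * this guest driver implements. */
+          ret = pci_read_config_dword(dev, DXGK_VMBUS_VERSION_OFFSET,
+                                      host_ver);
+          if (ret)
+                  return ret;
+          return pci_write_config_dword(dev, DXGK_VMBUS_VERSION_OFFSET,
+                                        DXGK_VMBUS_INTERFACE_VERSION);
+  }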
+ +The IO space is used to handle CPU accessible compute device +allocations. Hyper-V allocates IO space for the global VMBus channel. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + MAINTAINERS | 7 + + drivers/hv/Kconfig | 2 + + drivers/hv/Makefile | 1 + + drivers/hv/dxgkrnl/Kconfig | 26 + + drivers/hv/dxgkrnl/Makefile | 5 + + drivers/hv/dxgkrnl/dxgkrnl.h | 155 +++ + drivers/hv/dxgkrnl/dxgmodule.c | 506 ++++++++++ + drivers/hv/dxgkrnl/dxgvmbus.c | 92 ++ + drivers/hv/dxgkrnl/dxgvmbus.h | 19 + + include/uapi/misc/d3dkmthk.h | 27 + + 10 files changed, 840 insertions(+) + +diff --git a/MAINTAINERS b/MAINTAINERS +index dd5de540ec0b..67a87715c001 100644 +--- a/MAINTAINERS ++++ b/MAINTAINERS +@@ -9771,6 +9771,13 @@ F: Documentation/devicetree/bindings/mtd/ti,am654-hbmc.yaml + F: drivers/mtd/hyperbus/ + F: include/linux/mtd/hyperbus.h + ++Hyper-V vGPU DRIVER ++M: Iouri Tarassov ++L: linux-hyperv@vger.kernel.org ++S: Supported ++F: drivers/hv/dxgkrnl/ ++F: include/uapi/misc/d3dkmthk.h ++ + HYPERVISOR VIRTUAL CONSOLE DRIVER + L: linuxppc-dev@lists.ozlabs.org + S: Odd Fixes +diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig +index 00242107d62e..51cce0cc9d5c 100644 +--- a/drivers/hv/Kconfig ++++ b/drivers/hv/Kconfig +@@ -54,4 +54,6 @@ config HYPERV_BALLOON + help + Select this option to enable Hyper-V Balloon driver. + ++source "drivers/hv/dxgkrnl/Kconfig" ++ + endmenu +diff --git a/drivers/hv/Makefile b/drivers/hv/Makefile +index d76df5c8c2a9..aa1cbdb5d0d2 100644 +--- a/drivers/hv/Makefile ++++ b/drivers/hv/Makefile +@@ -2,6 +2,7 @@ + obj-$(CONFIG_HYPERV) += hv_vmbus.o + obj-$(CONFIG_HYPERV_UTILS) += hv_utils.o + obj-$(CONFIG_HYPERV_BALLOON) += hv_balloon.o ++obj-$(CONFIG_DXGKRNL) += dxgkrnl/ + + CFLAGS_hv_trace.o = -I$(src) + CFLAGS_hv_balloon.o = -I$(src) +diff --git a/drivers/hv/dxgkrnl/Kconfig b/drivers/hv/dxgkrnl/Kconfig +new file mode 100644 +index 000000000000..bcd92bbff939 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/Kconfig +@@ -0,0 +1,26 @@ ++# SPDX-License-Identifier: GPL-2.0 ++# Configuration for the hyper-v virtual compute driver (dxgkrnl) ++# ++ ++config DXGKRNL ++ tristate "Microsoft Paravirtualized GPU support" ++ depends on HYPERV ++ depends on 64BIT || COMPILE_TEST ++ help ++ This driver supports paravirtualized virtual compute devices, exposed ++ by Microsoft Hyper-V when Linux is running inside of a virtual machine ++ hosted by Windows. The virtual machines needs to be configured to use ++ host compute adapters. The driver name is dxgkrnl. ++ ++ An example of such virtual machine is a Windows Subsystem for ++ Linux container. When such container is instantiated, the Windows host ++ assigns compatible host GPU adapters to the container. The corresponding ++ virtual GPU devices appear on the PCI bus in the container. These ++ devices are enumerated and accessed by this driver. ++ ++ Communications with the driver are done by using the Microsoft libdxcore ++ library, which translates the D3DKMT interface ++ ++ to the driver IOCTLs. The virtual GPU devices are paravirtualized, ++ which means that access to the hardware is done in the host. The driver ++ communicates with the host using Hyper-V VM bus communication channels. +diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile +new file mode 100644 +index 000000000000..76349064b60a +--- /dev/null ++++ b/drivers/hv/dxgkrnl/Makefile +@@ -0,0 +1,5 @@ ++# SPDX-License-Identifier: GPL-2.0 ++# Makefile for the hyper-v compute device driver (dxgkrnl). 
++ ++obj-$(CONFIG_DXGKRNL) += dxgkrnl.o ++dxgkrnl-y := dxgmodule.o dxgvmbus.o +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +new file mode 100644 +index 000000000000..f7900840d1ed +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -0,0 +1,155 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Headers for internal objects ++ * ++ */ ++ ++#ifndef _DXGKRNL_H ++#define _DXGKRNL_H ++ ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++ ++struct dxgadapter; ++ ++/* ++ * Driver private data. ++ * A single /dev/dxg device is created per virtual machine. ++ */ ++struct dxgdriver{ ++ struct dxgglobal *dxgglobal; ++ struct device *dxgdev; ++ struct pci_driver pci_drv; ++ struct hv_driver vmbus_drv; ++}; ++extern struct dxgdriver dxgdrv; ++ ++#define DXGDEV dxgdrv.dxgdev ++ ++struct dxgvmbuschannel { ++ struct vmbus_channel *channel; ++ struct hv_device *hdev; ++ spinlock_t packet_list_mutex; ++ struct list_head packet_list_head; ++ struct kmem_cache *packet_cache; ++ atomic64_t packet_request_id; ++}; ++ ++int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev); ++void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch); ++void dxgvmbuschannel_receive(void *ctx); ++ ++/* ++ * The structure defines an offered vGPU vm bus channel. ++ */ ++struct dxgvgpuchannel { ++ struct list_head vgpu_ch_list_entry; ++ struct winluid adapter_luid; ++ struct hv_device *hdev; ++}; ++ ++struct dxgglobal { ++ struct dxgdriver *drvdata; ++ struct dxgvmbuschannel channel; ++ struct hv_device *hdev; ++ u32 num_adapters; ++ u32 vmbus_ver; /* Interface version */ ++ struct resource *mem; ++ u64 mmiospace_base; ++ u64 mmiospace_size; ++ struct miscdevice dxgdevice; ++ struct mutex device_mutex; ++ ++ /* ++ * List of the vGPU VM bus channels (dxgvgpuchannel) ++ * Protected by device_mutex ++ */ ++ struct list_head vgpu_ch_list_head; ++ ++ /* protects acces to the global VM bus channel */ ++ struct rw_semaphore channel_lock; ++ ++ bool global_channel_initialized; ++ bool async_msg_enabled; ++ bool misc_registered; ++ bool pci_registered; ++ bool vmbus_registered; ++}; ++ ++static inline struct dxgglobal *dxggbl(void) ++{ ++ return dxgdrv.dxgglobal; ++} ++ ++struct dxgprocess { ++ /* Placeholder */ ++}; ++ ++/* ++ * The convention is that VNBus instance id is a GUID, but the host sets ++ * the lower part of the value to the host adapter LUID. The function ++ * provides the necessary conversion. ++ */ ++static inline void guid_to_luid(guid_t *guid, struct winluid *luid) ++{ ++ *luid = *(struct winluid *)&guid->b[0]; ++} ++ ++/* ++ * VM bus interface ++ * ++ */ ++ ++/* ++ * The interface version is used to ensure that the host and the guest use the ++ * same VM bus protocol. It needs to be incremented every time the VM bus ++ * interface changes. DXGK_VMBUS_LAST_COMPATIBLE_INTERFACE_VERSION is ++ * incremented each time the earlier versions of the interface are no longer ++ * compatible with the current version. ++ */ ++#define DXGK_VMBUS_INTERFACE_VERSION_OLD 27 ++#define DXGK_VMBUS_INTERFACE_VERSION 40 ++#define DXGK_VMBUS_LAST_COMPATIBLE_INTERFACE_VERSION 16 ++ ++#ifdef DEBUG ++ ++void dxgk_validate_ioctls(void); ++ ++#define DXG_TRACE(fmt, ...) do { \ ++ trace_printk(dev_fmt(fmt) "\n", ##__VA_ARGS__); \ ++} while (0) ++ ++#define DXG_ERR(fmt, ...) 
do { \ ++ dev_err(DXGDEV, fmt, ##__VA_ARGS__); \ ++ trace_printk("*** dxgkerror *** " dev_fmt(fmt) "\n", ##__VA_ARGS__); \ ++} while (0) ++ ++#else ++ ++#define DXG_TRACE(...) ++#define DXG_ERR(fmt, ...) do { \ ++ dev_err(DXGDEV, fmt, ##__VA_ARGS__); \ ++} while (0) ++ ++#endif /* DEBUG */ ++ ++#endif +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +new file mode 100644 +index 000000000000..de02edc4d023 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -0,0 +1,506 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Interface with Linux kernel, PCI driver and the VM bus driver ++ * ++ */ ++ ++#include ++#include ++#include ++#include ++#include "dxgkrnl.h" ++ ++#define PCI_VENDOR_ID_MICROSOFT 0x1414 ++#define PCI_DEVICE_ID_VIRTUAL_RENDER 0x008E ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt ++ ++/* ++ * Interface from dxgglobal ++ */ ++ ++struct vmbus_channel *dxgglobal_get_vmbus(void) ++{ ++ return dxggbl()->channel.channel; ++} ++ ++struct dxgvmbuschannel *dxgglobal_get_dxgvmbuschannel(void) ++{ ++ return &dxggbl()->channel; ++} ++ ++int dxgglobal_acquire_channel_lock(void) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ down_read(&dxgglobal->channel_lock); ++ if (dxgglobal->channel.channel == NULL) { ++ DXG_ERR("Failed to acquire global channel lock"); ++ return -ENODEV; ++ } else { ++ return 0; ++ } ++} ++ ++void dxgglobal_release_channel_lock(void) ++{ ++ up_read(&dxggbl()->channel_lock); ++} ++ ++const struct file_operations dxgk_fops = { ++ .owner = THIS_MODULE, ++}; ++ ++/* ++ * Interface with the PCI driver ++ */ ++ ++/* ++ * Part of the PCI config space of the compute device is used for ++ * configuration data. Reading/writing of the PCI config space is forwarded ++ * to the host. ++ * ++ * Below are offsets in the PCI config spaces for various configuration values. ++ */ ++ ++/* Compute device VM bus channel instance ID */ ++#define DXGK_VMBUS_CHANNEL_ID_OFFSET 192 ++ ++/* DXGK_VMBUS_INTERFACE_VERSION (u32) */ ++#define DXGK_VMBUS_VERSION_OFFSET (DXGK_VMBUS_CHANNEL_ID_OFFSET + \ ++ sizeof(guid_t)) ++ ++/* Luid of the virtual GPU on the host (struct winluid) */ ++#define DXGK_VMBUS_VGPU_LUID_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \ ++ sizeof(u32)) ++ ++/* The guest writes its capabilities to this address */ ++#define DXGK_VMBUS_GUESTCAPS_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \ ++ sizeof(u32)) ++ ++/* Capabilities of the guest driver, reported to the host */ ++struct dxgk_vmbus_guestcaps { ++ union { ++ struct { ++ u32 wsl2 : 1; ++ u32 reserved : 31; ++ }; ++ u32 guest_caps; ++ }; ++}; ++ ++/* ++ * A helper function to read PCI config space. 
++ */ ++static int dxg_pci_read_dwords(struct pci_dev *dev, int offset, int size, ++ void *val) ++{ ++ int off = offset; ++ int ret; ++ int i; ++ ++ /* Make sure the offset and size are 32 bit aligned */ ++ if (offset & 3 || size & 3) ++ return -EINVAL; ++ ++ for (i = 0; i < size / sizeof(int); i++) { ++ ret = pci_read_config_dword(dev, off, &((int *)val)[i]); ++ if (ret) { ++ DXG_ERR("Failed to read PCI config: %d", off); ++ return ret; ++ } ++ off += sizeof(int); ++ } ++ return 0; ++} ++ ++static int dxg_pci_probe_device(struct pci_dev *dev, ++ const struct pci_device_id *id) ++{ ++ int ret; ++ guid_t guid; ++ u32 vmbus_interface_ver = DXGK_VMBUS_INTERFACE_VERSION; ++ struct winluid vgpu_luid = {}; ++ struct dxgk_vmbus_guestcaps guest_caps = {.wsl2 = 1}; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_lock(&dxgglobal->device_mutex); ++ ++ if (dxgglobal->vmbus_ver == 0) { ++ /* Report capabilities to the host */ ++ ++ ret = pci_write_config_dword(dev, DXGK_VMBUS_GUESTCAPS_OFFSET, ++ guest_caps.guest_caps); ++ if (ret) ++ goto cleanup; ++ ++ /* Negotiate the VM bus version */ ++ ++ ret = pci_read_config_dword(dev, DXGK_VMBUS_VERSION_OFFSET, ++ &vmbus_interface_ver); ++ if (ret == 0 && vmbus_interface_ver != 0) ++ dxgglobal->vmbus_ver = vmbus_interface_ver; ++ else ++ dxgglobal->vmbus_ver = DXGK_VMBUS_INTERFACE_VERSION_OLD; ++ ++ if (dxgglobal->vmbus_ver < DXGK_VMBUS_INTERFACE_VERSION) ++ goto read_channel_id; ++ ++ ret = pci_write_config_dword(dev, DXGK_VMBUS_VERSION_OFFSET, ++ DXGK_VMBUS_INTERFACE_VERSION); ++ if (ret) ++ goto cleanup; ++ ++ if (dxgglobal->vmbus_ver > DXGK_VMBUS_INTERFACE_VERSION) ++ dxgglobal->vmbus_ver = DXGK_VMBUS_INTERFACE_VERSION; ++ } ++ ++read_channel_id: ++ ++ /* Get the VM bus channel ID for the virtual GPU */ ++ ret = dxg_pci_read_dwords(dev, DXGK_VMBUS_CHANNEL_ID_OFFSET, ++ sizeof(guid), (int *)&guid); ++ if (ret) ++ goto cleanup; ++ ++ if (dxgglobal->vmbus_ver >= DXGK_VMBUS_INTERFACE_VERSION) { ++ ret = dxg_pci_read_dwords(dev, DXGK_VMBUS_VGPU_LUID_OFFSET, ++ sizeof(vgpu_luid), &vgpu_luid); ++ if (ret) ++ goto cleanup; ++ } ++ ++ DXG_TRACE("Adapter channel: %pUb", &guid); ++ DXG_TRACE("Vmbus interface version: %d", dxgglobal->vmbus_ver); ++ DXG_TRACE("Host luid: %x-%x", vgpu_luid.b, vgpu_luid.a); ++ ++cleanup: ++ ++ mutex_unlock(&dxgglobal->device_mutex); ++ ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++static void dxg_pci_remove_device(struct pci_dev *dev) ++{ ++ /* Placeholder */ ++} ++ ++static struct pci_device_id dxg_pci_id_table[] = { ++ { ++ .vendor = PCI_VENDOR_ID_MICROSOFT, ++ .device = PCI_DEVICE_ID_VIRTUAL_RENDER, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID ++ }, ++ { 0 } ++}; ++ ++/* ++ * Interface with the VM bus driver ++ */ ++ ++static int dxgglobal_getiospace(struct dxgglobal *dxgglobal) ++{ ++ /* Get mmio space for the global channel */ ++ struct hv_device *hdev = dxgglobal->hdev; ++ struct vmbus_channel *channel = hdev->channel; ++ resource_size_t pot_start = 0; ++ resource_size_t pot_end = -1; ++ int ret; ++ ++ dxgglobal->mmiospace_size = channel->offermsg.offer.mmio_megabytes; ++ if (dxgglobal->mmiospace_size == 0) { ++ DXG_TRACE("Zero mmio space is offered"); ++ return -ENOMEM; ++ } ++ dxgglobal->mmiospace_size <<= 20; ++ DXG_TRACE("mmio offered: %llx", dxgglobal->mmiospace_size); ++ ++ ret = vmbus_allocate_mmio(&dxgglobal->mem, hdev, pot_start, pot_end, ++ dxgglobal->mmiospace_size, 0x10000, false); ++ if (ret) { ++ DXG_ERR("Unable to allocate mmio memory: %d", ret); ++ return ret; ++ } ++ 
dxgglobal->mmiospace_size = dxgglobal->mem->end - ++ dxgglobal->mem->start + 1; ++ dxgglobal->mmiospace_base = dxgglobal->mem->start; ++ DXG_TRACE("mmio allocated %llx %llx %llx %llx", ++ dxgglobal->mmiospace_base, dxgglobal->mmiospace_size, ++ dxgglobal->mem->start, dxgglobal->mem->end); ++ ++ return 0; ++} ++ ++int dxgglobal_init_global_channel(void) ++{ ++ int ret = 0; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = dxgvmbuschannel_init(&dxgglobal->channel, dxgglobal->hdev); ++ if (ret) { ++ DXG_ERR("dxgvmbuschannel_init failed: %d", ret); ++ goto error; ++ } ++ ++ ret = dxgglobal_getiospace(dxgglobal); ++ if (ret) { ++ DXG_ERR("getiospace failed: %d", ret); ++ goto error; ++ } ++ ++ hv_set_drvdata(dxgglobal->hdev, dxgglobal); ++ ++error: ++ return ret; ++} ++ ++void dxgglobal_destroy_global_channel(void) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ down_write(&dxgglobal->channel_lock); ++ ++ dxgglobal->global_channel_initialized = false; ++ ++ if (dxgglobal->mem) { ++ vmbus_free_mmio(dxgglobal->mmiospace_base, ++ dxgglobal->mmiospace_size); ++ dxgglobal->mem = NULL; ++ } ++ ++ dxgvmbuschannel_destroy(&dxgglobal->channel); ++ ++ if (dxgglobal->hdev) { ++ hv_set_drvdata(dxgglobal->hdev, NULL); ++ dxgglobal->hdev = NULL; ++ } ++ ++ up_write(&dxgglobal->channel_lock); ++} ++ ++static const struct hv_vmbus_device_id dxg_vmbus_id_table[] = { ++ /* Per GPU Device GUID */ ++ { HV_GPUP_DXGK_VGPU_GUID }, ++ /* Global Dxgkgnl channel for the virtual machine */ ++ { HV_GPUP_DXGK_GLOBAL_GUID }, ++ { } ++}; ++ ++static int dxg_probe_vmbus(struct hv_device *hdev, ++ const struct hv_vmbus_device_id *dev_id) ++{ ++ int ret = 0; ++ struct winluid luid; ++ struct dxgvgpuchannel *vgpuch; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_lock(&dxgglobal->device_mutex); ++ ++ if (uuid_le_cmp(hdev->dev_type, dxg_vmbus_id_table[0].guid) == 0) { ++ /* This is a new virtual GPU channel */ ++ guid_to_luid(&hdev->channel->offermsg.offer.if_instance, &luid); ++ DXG_TRACE("vGPU channel: %pUb", ++ &hdev->channel->offermsg.offer.if_instance); ++ vgpuch = kzalloc(sizeof(struct dxgvgpuchannel), GFP_KERNEL); ++ if (vgpuch == NULL) { ++ ret = -ENOMEM; ++ goto error; ++ } ++ vgpuch->adapter_luid = luid; ++ vgpuch->hdev = hdev; ++ list_add_tail(&vgpuch->vgpu_ch_list_entry, ++ &dxgglobal->vgpu_ch_list_head); ++ } else if (uuid_le_cmp(hdev->dev_type, ++ dxg_vmbus_id_table[1].guid) == 0) { ++ /* This is the global Dxgkgnl channel */ ++ DXG_TRACE("Global channel: %pUb", ++ &hdev->channel->offermsg.offer.if_instance); ++ if (dxgglobal->hdev) { ++ /* This device should appear only once */ ++ DXG_ERR("global channel already exists"); ++ ret = -EBADE; ++ goto error; ++ } ++ dxgglobal->hdev = hdev; ++ } else { ++ /* Unknown device type */ ++ DXG_ERR("Unknown VM bus device type"); ++ ret = -ENODEV; ++ } ++ ++error: ++ ++ mutex_unlock(&dxgglobal->device_mutex); ++ ++ return ret; ++} ++ ++static int dxg_remove_vmbus(struct hv_device *hdev) ++{ ++ int ret = 0; ++ struct dxgvgpuchannel *vgpu_channel; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_lock(&dxgglobal->device_mutex); ++ ++ if (uuid_le_cmp(hdev->dev_type, dxg_vmbus_id_table[0].guid) == 0) { ++ DXG_TRACE("Remove virtual GPU channel"); ++ list_for_each_entry(vgpu_channel, ++ &dxgglobal->vgpu_ch_list_head, ++ vgpu_ch_list_entry) { ++ if (vgpu_channel->hdev == hdev) { ++ list_del(&vgpu_channel->vgpu_ch_list_entry); ++ kfree(vgpu_channel); ++ break; ++ } ++ } ++ } else if (uuid_le_cmp(hdev->dev_type, ++ dxg_vmbus_id_table[1].guid) == 0) { ++ 
DXG_TRACE("Remove global channel device"); ++ dxgglobal_destroy_global_channel(); ++ } else { ++ /* Unknown device type */ ++ DXG_ERR("Unknown device type"); ++ ret = -ENODEV; ++ } ++ ++ mutex_unlock(&dxgglobal->device_mutex); ++ ++ return ret; ++} ++ ++MODULE_DEVICE_TABLE(vmbus, dxg_vmbus_id_table); ++MODULE_DEVICE_TABLE(pci, dxg_pci_id_table); ++ ++/* ++ * Global driver data ++ */ ++ ++struct dxgdriver dxgdrv = { ++ .vmbus_drv.name = KBUILD_MODNAME, ++ .vmbus_drv.id_table = dxg_vmbus_id_table, ++ .vmbus_drv.probe = dxg_probe_vmbus, ++ .vmbus_drv.remove = dxg_remove_vmbus, ++ .vmbus_drv.driver = { ++ .probe_type = PROBE_PREFER_ASYNCHRONOUS, ++ }, ++ .pci_drv.name = KBUILD_MODNAME, ++ .pci_drv.id_table = dxg_pci_id_table, ++ .pci_drv.probe = dxg_pci_probe_device, ++ .pci_drv.remove = dxg_pci_remove_device ++}; ++ ++static struct dxgglobal *dxgglobal_create(void) ++{ ++ struct dxgglobal *dxgglobal; ++ ++ dxgglobal = kzalloc(sizeof(struct dxgglobal), GFP_KERNEL); ++ if (!dxgglobal) ++ return NULL; ++ ++ mutex_init(&dxgglobal->device_mutex); ++ ++ INIT_LIST_HEAD(&dxgglobal->vgpu_ch_list_head); ++ ++ init_rwsem(&dxgglobal->channel_lock); ++ ++ return dxgglobal; ++} ++ ++static void dxgglobal_destroy(struct dxgglobal *dxgglobal) ++{ ++ if (dxgglobal) { ++ mutex_lock(&dxgglobal->device_mutex); ++ dxgglobal_destroy_global_channel(); ++ mutex_unlock(&dxgglobal->device_mutex); ++ ++ if (dxgglobal->vmbus_registered) ++ vmbus_driver_unregister(&dxgdrv.vmbus_drv); ++ ++ dxgglobal_destroy_global_channel(); ++ ++ if (dxgglobal->pci_registered) ++ pci_unregister_driver(&dxgdrv.pci_drv); ++ ++ if (dxgglobal->misc_registered) ++ misc_deregister(&dxgglobal->dxgdevice); ++ ++ dxgglobal->drvdata->dxgdev = NULL; ++ ++ kfree(dxgglobal); ++ dxgglobal = NULL; ++ } ++} ++ ++static int __init dxg_drv_init(void) ++{ ++ int ret; ++ struct dxgglobal *dxgglobal = NULL; ++ ++ dxgglobal = dxgglobal_create(); ++ if (dxgglobal == NULL) { ++ pr_err("dxgglobal_init failed"); ++ ret = -ENOMEM; ++ goto error; ++ } ++ dxgglobal->drvdata = &dxgdrv; ++ ++ dxgglobal->dxgdevice.minor = MISC_DYNAMIC_MINOR; ++ dxgglobal->dxgdevice.name = "dxg"; ++ dxgglobal->dxgdevice.fops = &dxgk_fops; ++ dxgglobal->dxgdevice.mode = 0666; ++ ret = misc_register(&dxgglobal->dxgdevice); ++ if (ret) { ++ pr_err("misc_register failed: %d", ret); ++ goto error; ++ } ++ dxgglobal->misc_registered = true; ++ dxgdrv.dxgdev = dxgglobal->dxgdevice.this_device; ++ dxgdrv.dxgglobal = dxgglobal; ++ ++ ret = vmbus_driver_register(&dxgdrv.vmbus_drv); ++ if (ret) { ++ DXG_ERR("vmbus_driver_register failed: %d", ret); ++ goto error; ++ } ++ dxgglobal->vmbus_registered = true; ++ ++ ret = pci_register_driver(&dxgdrv.pci_drv); ++ if (ret) { ++ DXG_ERR("pci_driver_register failed: %d", ret); ++ goto error; ++ } ++ dxgglobal->pci_registered = true; ++ ++ return 0; ++ ++error: ++ /* This function does the cleanup */ ++ dxgglobal_destroy(dxgglobal); ++ dxgdrv.dxgglobal = NULL; ++ ++ return ret; ++} ++ ++static void __exit dxg_drv_exit(void) ++{ ++ dxgglobal_destroy(dxgdrv.dxgglobal); ++} ++ ++module_init(dxg_drv_init); ++module_exit(dxg_drv_exit); ++ ++MODULE_LICENSE("GPL"); ++MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual compute device Driver"); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +new file mode 100644 +index 000000000000..deb880e34377 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -0,0 +1,92 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. 
++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * VM bus interface implementation ++ * ++ */ ++ ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include "dxgkrnl.h" ++#include "dxgvmbus.h" ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt ++ ++#define RING_BUFSIZE (256 * 1024) ++ ++/* ++ * The structure is used to track VM bus packets, waiting for completion. ++ */ ++struct dxgvmbuspacket { ++ struct list_head packet_list_entry; ++ u64 request_id; ++ struct completion wait; ++ void *buffer; ++ u32 buffer_length; ++ int status; ++ bool completed; ++}; ++ ++int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev) ++{ ++ int ret; ++ ++ ch->hdev = hdev; ++ spin_lock_init(&ch->packet_list_mutex); ++ INIT_LIST_HEAD(&ch->packet_list_head); ++ atomic64_set(&ch->packet_request_id, 0); ++ ++ ch->packet_cache = kmem_cache_create("DXGK packet cache", ++ sizeof(struct dxgvmbuspacket), 0, ++ 0, NULL); ++ if (ch->packet_cache == NULL) { ++ DXG_ERR("packet_cache alloc failed"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++#if LINUX_VERSION_CODE >= KERNEL_VERSION(5,15,0) ++ hdev->channel->max_pkt_size = DXG_MAX_VM_BUS_PACKET_SIZE; ++#endif ++ ret = vmbus_open(hdev->channel, RING_BUFSIZE, RING_BUFSIZE, ++ NULL, 0, dxgvmbuschannel_receive, ch); ++ if (ret) { ++ DXG_ERR("vmbus_open failed: %d", ret); ++ goto cleanup; ++ } ++ ++ ch->channel = hdev->channel; ++ ++cleanup: ++ ++ return ret; ++} ++ ++void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch) ++{ ++ kmem_cache_destroy(ch->packet_cache); ++ ch->packet_cache = NULL; ++ ++ if (ch->channel) { ++ vmbus_close(ch->channel); ++ ch->channel = NULL; ++ } ++} ++ ++/* Receive callback for messages from the host */ ++void dxgvmbuschannel_receive(void *ctx) ++{ ++} +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +new file mode 100644 +index 000000000000..6cdca5e03d1f +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -0,0 +1,19 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * VM bus interface with the host definitions ++ * ++ */ ++ ++#ifndef _DXGVMBUS_H ++#define _DXGVMBUS_H ++ ++#define DXG_MAX_VM_BUS_PACKET_SIZE (1024 * 128) ++ ++#endif /* _DXGVMBUS_H */ +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +new file mode 100644 +index 000000000000..5d973604400c +--- /dev/null ++++ b/include/uapi/misc/d3dkmthk.h +@@ -0,0 +1,27 @@ ++/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ ++ ++/* ++ * Copyright (c) 2019, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * User mode WDDM interface definitions ++ * ++ */ ++ ++#ifndef _D3DKMTHK_H ++#define _D3DKMTHK_H ++ ++/* ++ * Matches the Windows LUID definition. ++ * LUID is a locally unique identifier (similar to GUID, but not global), ++ * which is guaranteed to be unique intil the computer is rebooted. 
++ */ ++struct winluid { ++ __u32 a; ++ __u32 b; ++}; ++ ++#endif /* _D3DKMTHK_H */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1669-drivers-hv-dxgkrnl-Add-VMBus-message-support-initialize-VMBus-channels.patch b/patch/kernel/archive/wsl2-arm64-6.6/1669-drivers-hv-dxgkrnl-Add-VMBus-message-support-initialize-VMBus-channels.patch new file mode 100644 index 000000000000..384618c56dd0 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1669-drivers-hv-dxgkrnl-Add-VMBus-message-support-initialize-VMBus-channels.patch @@ -0,0 +1,660 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 15 Feb 2022 18:53:07 -0800 +Subject: drivers: hv: dxgkrnl: Add VMBus message support, initialize VMBus + channels. + +Implement support for sending/receiving VMBus messages between +the host and the guest. + +Initialize the VMBus channels and notify the host about IO space +settings of the VMBus global channel. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 14 + + drivers/hv/dxgkrnl/dxgmodule.c | 9 +- + drivers/hv/dxgkrnl/dxgvmbus.c | 318 ++++++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 67 ++ + drivers/hv/dxgkrnl/ioctl.c | 24 + + drivers/hv/dxgkrnl/misc.h | 72 +++ + include/uapi/misc/d3dkmthk.h | 34 + + 7 files changed, 536 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index f7900840d1ed..52b9e82c51e6 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -28,6 +28,8 @@ + #include + #include + #include ++#include "misc.h" ++#include + + struct dxgadapter; + +@@ -100,6 +102,13 @@ static inline struct dxgglobal *dxggbl(void) + return dxgdrv.dxgglobal; + } + ++int dxgglobal_init_global_channel(void); ++void dxgglobal_destroy_global_channel(void); ++struct vmbus_channel *dxgglobal_get_vmbus(void); ++struct dxgvmbuschannel *dxgglobal_get_dxgvmbuschannel(void); ++int dxgglobal_acquire_channel_lock(void); ++void dxgglobal_release_channel_lock(void); ++ + struct dxgprocess { + /* Placeholder */ + }; +@@ -130,6 +139,11 @@ static inline void guid_to_luid(guid_t *guid, struct winluid *luid) + #define DXGK_VMBUS_INTERFACE_VERSION 40 + #define DXGK_VMBUS_LAST_COMPATIBLE_INTERFACE_VERSION 16 + ++void dxgvmb_initialize(void); ++int dxgvmb_send_set_iospace_region(u64 start, u64 len); ++ ++int ntstatus2int(struct ntstatus status); ++ + #ifdef DEBUG + + void dxgk_validate_ioctls(void); +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index de02edc4d023..e55639dc0adc 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -260,6 +260,13 @@ int dxgglobal_init_global_channel(void) + goto error; + } + ++ ret = dxgvmb_send_set_iospace_region(dxgglobal->mmiospace_base, ++ dxgglobal->mmiospace_size); ++ if (ret < 0) { ++ DXG_ERR("send_set_iospace_region failed"); ++ goto error; ++ } ++ + hv_set_drvdata(dxgglobal->hdev, dxgglobal); + + error: +@@ -429,8 +436,6 @@ static void dxgglobal_destroy(struct dxgglobal *dxgglobal) + if (dxgglobal->vmbus_registered) + vmbus_driver_unregister(&dxgdrv.vmbus_drv); + +- dxgglobal_destroy_global_channel(); +- + if (dxgglobal->pci_registered) + pci_unregister_driver(&dxgdrv.pci_drv); + +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index deb880e34377..a4365739826a 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -40,6 +40,121 @@ struct 
dxgvmbuspacket { + bool completed; + }; + ++struct dxgvmb_ext_header { ++ /* Offset from the start of the message to DXGKVMB_COMMAND_BASE */ ++ u32 command_offset; ++ u32 reserved; ++ struct winluid vgpu_luid; ++}; ++ ++#define VMBUSMESSAGEONSTACK 64 ++ ++struct dxgvmbusmsg { ++/* Points to the allocated buffer */ ++ struct dxgvmb_ext_header *hdr; ++/* Points to dxgkvmb_command_vm_to_host or dxgkvmb_command_vgpu_to_host */ ++ void *msg; ++/* The vm bus channel, used to pass the message to the host */ ++ struct dxgvmbuschannel *channel; ++/* Message size in bytes including the header and the payload */ ++ u32 size; ++/* Buffer used for small messages */ ++ char msg_on_stack[VMBUSMESSAGEONSTACK]; ++}; ++ ++struct dxgvmbusmsgres { ++/* Points to the allocated buffer */ ++ struct dxgvmb_ext_header *hdr; ++/* Points to dxgkvmb_command_vm_to_host or dxgkvmb_command_vgpu_to_host */ ++ void *msg; ++/* The vm bus channel, used to pass the message to the host */ ++ struct dxgvmbuschannel *channel; ++/* Message size in bytes including the header, the payload and the result */ ++ u32 size; ++/* Result buffer size in bytes */ ++ u32 res_size; ++/* Points to the result within the allocated buffer */ ++ void *res; ++}; ++ ++static int init_message(struct dxgvmbusmsg *msg, ++ struct dxgprocess *process, u32 size) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ bool use_ext_header = dxgglobal->vmbus_ver >= ++ DXGK_VMBUS_INTERFACE_VERSION; ++ ++ if (use_ext_header) ++ size += sizeof(struct dxgvmb_ext_header); ++ msg->size = size; ++ if (size <= VMBUSMESSAGEONSTACK) { ++ msg->hdr = (void *)msg->msg_on_stack; ++ memset(msg->hdr, 0, size); ++ } else { ++ msg->hdr = vzalloc(size); ++ if (msg->hdr == NULL) ++ return -ENOMEM; ++ } ++ if (use_ext_header) { ++ msg->msg = (char *)&msg->hdr[1]; ++ msg->hdr->command_offset = sizeof(msg->hdr[0]); ++ } else { ++ msg->msg = (char *)msg->hdr; ++ } ++ msg->channel = &dxgglobal->channel; ++ return 0; ++} ++ ++static void free_message(struct dxgvmbusmsg *msg, struct dxgprocess *process) ++{ ++ if (msg->hdr && (char *)msg->hdr != msg->msg_on_stack) ++ vfree(msg->hdr); ++} ++ ++/* ++ * Helper functions ++ */ ++ ++int ntstatus2int(struct ntstatus status) ++{ ++ if (NT_SUCCESS(status)) ++ return (int)status.v; ++ switch (status.v) { ++ case STATUS_OBJECT_NAME_COLLISION: ++ return -EEXIST; ++ case STATUS_NO_MEMORY: ++ return -ENOMEM; ++ case STATUS_INVALID_PARAMETER: ++ return -EINVAL; ++ case STATUS_OBJECT_NAME_INVALID: ++ case STATUS_OBJECT_NAME_NOT_FOUND: ++ return -ENOENT; ++ case STATUS_TIMEOUT: ++ return -EAGAIN; ++ case STATUS_BUFFER_TOO_SMALL: ++ return -EOVERFLOW; ++ case STATUS_DEVICE_REMOVED: ++ return -ENODEV; ++ case STATUS_ACCESS_DENIED: ++ return -EACCES; ++ case STATUS_NOT_SUPPORTED: ++ return -EPERM; ++ case STATUS_ILLEGAL_INSTRUCTION: ++ return -EOPNOTSUPP; ++ case STATUS_INVALID_HANDLE: ++ return -EBADF; ++ case STATUS_GRAPHICS_ALLOCATION_BUSY: ++ return -EINPROGRESS; ++ case STATUS_OBJECT_TYPE_MISMATCH: ++ return -EPROTOTYPE; ++ case STATUS_NOT_IMPLEMENTED: ++ return -EPERM; ++ default: ++ return -EINVAL; ++ } ++} ++ + int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev) + { + int ret; +@@ -86,7 +201,210 @@ void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch) + } + } + ++static void command_vm_to_host_init1(struct dxgkvmb_command_vm_to_host *command, ++ enum dxgkvmb_commandtype_global type) ++{ ++ command->command_type = type; ++ command->process.v = 0; ++ command->command_id = 0; ++ command->channel_type = DXGKVMB_VM_TO_HOST; 
++} ++ ++static void process_inband_packet(struct dxgvmbuschannel *channel, ++ struct vmpacket_descriptor *desc) ++{ ++ u32 packet_length = hv_pkt_datalen(desc); ++ struct dxgkvmb_command_host_to_vm *packet; ++ ++ if (packet_length < sizeof(struct dxgkvmb_command_host_to_vm)) { ++ DXG_ERR("Invalid global packet"); ++ } else { ++ packet = hv_pkt_data(desc); ++ DXG_TRACE("global packet %d", ++ packet->command_type); ++ switch (packet->command_type) { ++ case DXGK_VMBCOMMAND_SIGNALGUESTEVENT: ++ case DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE: ++ break; ++ case DXGK_VMBCOMMAND_SENDWNFNOTIFICATION: ++ break; ++ default: ++ DXG_ERR("unexpected host message %d", ++ packet->command_type); ++ } ++ } ++} ++ ++static void process_completion_packet(struct dxgvmbuschannel *channel, ++ struct vmpacket_descriptor *desc) ++{ ++ struct dxgvmbuspacket *packet = NULL; ++ struct dxgvmbuspacket *entry; ++ u32 packet_length = hv_pkt_datalen(desc); ++ unsigned long flags; ++ ++ spin_lock_irqsave(&channel->packet_list_mutex, flags); ++ list_for_each_entry(entry, &channel->packet_list_head, ++ packet_list_entry) { ++ if (desc->trans_id == entry->request_id) { ++ packet = entry; ++ list_del(&packet->packet_list_entry); ++ packet->completed = true; ++ break; ++ } ++ } ++ spin_unlock_irqrestore(&channel->packet_list_mutex, flags); ++ if (packet) { ++ if (packet->buffer_length) { ++ if (packet_length < packet->buffer_length) { ++ DXG_TRACE("invalid size %d Expected:%d", ++ packet_length, ++ packet->buffer_length); ++ packet->status = -EOVERFLOW; ++ } else { ++ memcpy(packet->buffer, hv_pkt_data(desc), ++ packet->buffer_length); ++ } ++ } ++ complete(&packet->wait); ++ } else { ++ DXG_ERR("did not find packet to complete"); ++ } ++} ++ + /* Receive callback for messages from the host */ + void dxgvmbuschannel_receive(void *ctx) + { ++ struct dxgvmbuschannel *channel = ctx; ++ struct vmpacket_descriptor *desc; ++ u32 packet_length = 0; ++ ++ foreach_vmbus_pkt(desc, channel->channel) { ++ packet_length = hv_pkt_datalen(desc); ++ DXG_TRACE("next packet (id, size, type): %llu %d %d", ++ desc->trans_id, packet_length, desc->type); ++ if (desc->type == VM_PKT_COMP) { ++ process_completion_packet(channel, desc); ++ } else { ++ if (desc->type != VM_PKT_DATA_INBAND) ++ DXG_ERR("unexpected packet type"); ++ else ++ process_inband_packet(channel, desc); ++ } ++ } ++} ++ ++int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel, ++ void *command, ++ u32 cmd_size, ++ void *result, ++ u32 result_size) ++{ ++ int ret; ++ struct dxgvmbuspacket *packet = NULL; ++ ++ if (cmd_size > DXG_MAX_VM_BUS_PACKET_SIZE || ++ result_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("%s invalid data size", __func__); ++ return -EINVAL; ++ } ++ ++ packet = kmem_cache_alloc(channel->packet_cache, 0); ++ if (packet == NULL) { ++ DXG_ERR("kmem_cache_alloc failed"); ++ return -ENOMEM; ++ } ++ ++ packet->request_id = atomic64_inc_return(&channel->packet_request_id); ++ init_completion(&packet->wait); ++ packet->buffer = result; ++ packet->buffer_length = result_size; ++ packet->status = 0; ++ packet->completed = false; ++ spin_lock_irq(&channel->packet_list_mutex); ++ list_add_tail(&packet->packet_list_entry, &channel->packet_list_head); ++ spin_unlock_irq(&channel->packet_list_mutex); ++ ++ ret = vmbus_sendpacket(channel->channel, command, cmd_size, ++ packet->request_id, VM_PKT_DATA_INBAND, ++ VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); ++ if (ret) { ++ DXG_ERR("vmbus_sendpacket failed: %x", ret); ++ spin_lock_irq(&channel->packet_list_mutex); ++ 
list_del(&packet->packet_list_entry); ++ spin_unlock_irq(&channel->packet_list_mutex); ++ goto cleanup; ++ } ++ ++ DXG_TRACE("waiting completion: %llu", packet->request_id); ++ ret = wait_for_completion_killable(&packet->wait); ++ if (ret) { ++ DXG_ERR("wait_for_completion failed: %x", ret); ++ spin_lock_irq(&channel->packet_list_mutex); ++ if (!packet->completed) ++ list_del(&packet->packet_list_entry); ++ spin_unlock_irq(&channel->packet_list_mutex); ++ goto cleanup; ++ } ++ DXG_TRACE("completion done: %llu %x", ++ packet->request_id, packet->status); ++ ret = packet->status; ++ ++cleanup: ++ ++ kmem_cache_free(channel->packet_cache, packet); ++ if (ret < 0) ++ DXG_TRACE("Error: %x", ret); ++ return ret; ++} ++ ++static int ++dxgvmb_send_sync_msg_ntstatus(struct dxgvmbuschannel *channel, ++ void *command, u32 cmd_size) ++{ ++ struct ntstatus status; ++ int ret; ++ ++ ret = dxgvmb_send_sync_msg(channel, command, cmd_size, ++ &status, sizeof(status)); ++ if (ret >= 0) ++ ret = ntstatus2int(status); ++ return ret; ++} ++ ++/* ++ * Global messages to the host ++ */ ++ ++int dxgvmb_send_set_iospace_region(u64 start, u64 len) ++{ ++ int ret; ++ struct dxgkvmb_command_setiospaceregion *command; ++ struct dxgvmbusmsg msg; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, NULL, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ command_vm_to_host_init1(&command->hdr, ++ DXGK_VMBCOMMAND_SETIOSPACEREGION); ++ command->start = start; ++ command->length = len; ++ ret = dxgvmb_send_sync_msg_ntstatus(&dxgglobal->channel, msg.hdr, ++ msg.size); ++ if (ret < 0) ++ DXG_ERR("send_set_iospace_region failed %x", ret); ++ ++ dxgglobal_release_channel_lock(); ++cleanup: ++ free_message(&msg, NULL); ++ if (ret) ++ DXG_TRACE("Error: %d", ret); ++ return ret; + } +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 6cdca5e03d1f..b1bdd6039b73 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -16,4 +16,71 @@ + + #define DXG_MAX_VM_BUS_PACKET_SIZE (1024 * 128) + ++enum dxgkvmb_commandchanneltype { ++ DXGKVMB_VGPU_TO_HOST, ++ DXGKVMB_VM_TO_HOST, ++ DXGKVMB_HOST_TO_VM ++}; ++ ++/* ++ * ++ * Commands, sent to the host via the guest global VM bus channel ++ * DXG_GUEST_GLOBAL_VMBUS ++ * ++ */ ++ ++enum dxgkvmb_commandtype_global { ++ DXGK_VMBCOMMAND_VM_TO_HOST_FIRST = 1000, ++ DXGK_VMBCOMMAND_CREATEPROCESS = DXGK_VMBCOMMAND_VM_TO_HOST_FIRST, ++ DXGK_VMBCOMMAND_DESTROYPROCESS = 1001, ++ DXGK_VMBCOMMAND_OPENSYNCOBJECT = 1002, ++ DXGK_VMBCOMMAND_DESTROYSYNCOBJECT = 1003, ++ DXGK_VMBCOMMAND_CREATENTSHAREDOBJECT = 1004, ++ DXGK_VMBCOMMAND_DESTROYNTSHAREDOBJECT = 1005, ++ DXGK_VMBCOMMAND_SIGNALFENCE = 1006, ++ DXGK_VMBCOMMAND_NOTIFYPROCESSFREEZE = 1007, ++ DXGK_VMBCOMMAND_NOTIFYPROCESSTHAW = 1008, ++ DXGK_VMBCOMMAND_QUERYETWSESSION = 1009, ++ DXGK_VMBCOMMAND_SETIOSPACEREGION = 1010, ++ DXGK_VMBCOMMAND_COMPLETETRANSACTION = 1011, ++ DXGK_VMBCOMMAND_SHAREOBJECTWITHHOST = 1021, ++ DXGK_VMBCOMMAND_INVALID_VM_TO_HOST ++}; ++ ++/* ++ * Commands, sent by the host to the VM ++ */ ++enum dxgkvmb_commandtype_host_to_vm { ++ DXGK_VMBCOMMAND_SIGNALGUESTEVENT, ++ DXGK_VMBCOMMAND_PROPAGATEPRESENTHISTORYTOKEN, ++ DXGK_VMBCOMMAND_SETGUESTDATA, ++ DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE, ++ DXGK_VMBCOMMAND_SENDWNFNOTIFICATION, ++ DXGK_VMBCOMMAND_INVALID_HOST_TO_VM ++}; ++ ++struct dxgkvmb_command_vm_to_host { ++ u64 command_id; ++ struct 
d3dkmthandle process; ++ enum dxgkvmb_commandchanneltype channel_type; ++ enum dxgkvmb_commandtype_global command_type; ++}; ++ ++struct dxgkvmb_command_host_to_vm { ++ u64 command_id; ++ struct d3dkmthandle process; ++ u32 channel_type : 8; ++ u32 async_msg : 1; ++ u32 reserved : 23; ++ enum dxgkvmb_commandtype_host_to_vm command_type; ++}; ++ ++/* Returns ntstatus */ ++struct dxgkvmb_command_setiospaceregion { ++ struct dxgkvmb_command_vm_to_host hdr; ++ u64 start; ++ u64 length; ++ u32 shared_page_gpadl; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +new file mode 100644 +index 000000000000..23ecd15b0cd7 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -0,0 +1,24 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Ioctl implementation ++ * ++ */ ++ ++#include ++#include ++#include ++#include ++#include ++ ++#include "dxgkrnl.h" ++#include "dxgvmbus.h" ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +new file mode 100644 +index 000000000000..4c6047c32a20 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -0,0 +1,72 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Misc definitions ++ * ++ */ ++ ++#ifndef _MISC_H_ ++#define _MISC_H_ ++ ++extern const struct d3dkmthandle zerohandle; ++ ++/* ++ * Synchronization lock hierarchy. ++ * ++ * The higher enum value, the higher is the lock order. ++ * When a lower lock ois held, the higher lock should not be acquired. ++ * ++ * channel_lock ++ * device_mutex ++ */ ++ ++/* ++ * Some of the Windows return codes, which needs to be translated to Linux ++ * IOCTL return codes. Positive values are success codes and need to be ++ * returned from the driver IOCTLs. libdxcore.so depends on returning ++ * specific return codes. 
++ */ ++#define STATUS_SUCCESS ((int)(0)) ++#define STATUS_OBJECT_NAME_INVALID ((int)(0xC0000033L)) ++#define STATUS_DEVICE_REMOVED ((int)(0xC00002B6L)) ++#define STATUS_INVALID_HANDLE ((int)(0xC0000008L)) ++#define STATUS_ILLEGAL_INSTRUCTION ((int)(0xC000001DL)) ++#define STATUS_NOT_IMPLEMENTED ((int)(0xC0000002L)) ++#define STATUS_PENDING ((int)(0x00000103L)) ++#define STATUS_ACCESS_DENIED ((int)(0xC0000022L)) ++#define STATUS_BUFFER_TOO_SMALL ((int)(0xC0000023L)) ++#define STATUS_OBJECT_TYPE_MISMATCH ((int)(0xC0000024L)) ++#define STATUS_GRAPHICS_ALLOCATION_BUSY ((int)(0xC01E0102L)) ++#define STATUS_NOT_SUPPORTED ((int)(0xC00000BBL)) ++#define STATUS_TIMEOUT ((int)(0x00000102L)) ++#define STATUS_INVALID_PARAMETER ((int)(0xC000000DL)) ++#define STATUS_NO_MEMORY ((int)(0xC0000017L)) ++#define STATUS_OBJECT_NAME_COLLISION ((int)(0xC0000035L)) ++#define STATUS_OBJECT_NAME_NOT_FOUND ((int)(0xC0000034L)) ++ ++ ++#define NT_SUCCESS(status) (status.v >= 0) ++ ++#ifndef DEBUG ++ ++#define DXGKRNL_ASSERT(exp) ++ ++#else ++ ++#define DXGKRNL_ASSERT(exp) \ ++do { \ ++ if (!(exp)) { \ ++ dump_stack(); \ ++ BUG_ON(true); \ ++ } \ ++} while (0) ++ ++#endif /* DEBUG */ ++ ++#endif /* _MISC_H_ */ +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 5d973604400c..2ea04cc02a1f 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -14,6 +14,40 @@ + #ifndef _D3DKMTHK_H + #define _D3DKMTHK_H + ++/* ++ * This structure matches the definition of D3DKMTHANDLE in Windows. ++ * The handle is opaque in user mode. It is used by user mode applications to ++ * represent kernel mode objects, created by dxgkrnl. ++ */ ++struct d3dkmthandle { ++ union { ++ struct { ++ __u32 instance : 6; ++ __u32 index : 24; ++ __u32 unique : 2; ++ }; ++ __u32 v; ++ }; ++}; ++ ++/* ++ * VM bus messages return Windows' NTSTATUS, which is integer and only negative ++ * value indicates a failure. A positive number is a success and needs to be ++ * returned to user mode as the IOCTL return code. Negative status codes are ++ * converted to Linux error codes. ++ */ ++struct ntstatus { ++ union { ++ struct { ++ int code : 16; ++ int facility : 13; ++ int customer : 1; ++ int severity : 2; ++ }; ++ int v; ++ }; ++}; ++ + /* + * Matches the Windows LUID definition. + * LUID is a locally unique identifier (similar to GUID, but not global), +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1670-drivers-hv-dxgkrnl-Creation-of-dxgadapter-object.patch b/patch/kernel/archive/wsl2-arm64-6.6/1670-drivers-hv-dxgkrnl-Creation-of-dxgadapter-object.patch new file mode 100644 index 000000000000..901bfd24efea --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1670-drivers-hv-dxgkrnl-Creation-of-dxgadapter-object.patch @@ -0,0 +1,1160 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 15 Feb 2022 19:00:38 -0800 +Subject: drivers: hv: dxgkrnl: Creation of dxgadapter object + +Handle creation and destruction of dxgadapter object, which +represents a virtual compute device, projected to the VM by +the host. The dxgadapter object is created when the +corresponding VMBus channel is offered by Hyper-V. + +There could be multiple virtual compute device objects, projected +by the host to VM. They are enumerated by issuing IOCTLs to +the /dev/dxg device. + +The adapter object can start functioning only when the global VMBus +channel and the corresponding per device VMBus channel are +initialized. 
Notifications about arrival of a virtual compute PCI +device and VMBus channels can happen in any order. Therefore, +the initial dxgadapter object state is DXGADAPTER_STATE_WAITING_VMBUS. +A list of VMBus channels and a list of waiting dxgadapter objects +are maintained. When dxgkrnl is notified about a VMBus channel +arrival, if tries to start all adapters, which are not started yet. + +Properties of the adapter object are determined by sending VMBus +messages to the host to the corresponding VMBus channel. + +When the per virtual compute device VMBus channel or the global +channel are destroyed, the adapter object is destroyed. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/Makefile | 2 +- + drivers/hv/dxgkrnl/dxgadapter.c | 170 ++++++++ + drivers/hv/dxgkrnl/dxgkrnl.h | 85 ++++ + drivers/hv/dxgkrnl/dxgmodule.c | 204 ++++++++- + drivers/hv/dxgkrnl/dxgvmbus.c | 217 +++++++++- + drivers/hv/dxgkrnl/dxgvmbus.h | 128 ++++++ + drivers/hv/dxgkrnl/misc.c | 37 ++ + drivers/hv/dxgkrnl/misc.h | 24 +- + 8 files changed, 844 insertions(+), 23 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile +index 76349064b60a..2ed07d877c91 100644 +--- a/drivers/hv/dxgkrnl/Makefile ++++ b/drivers/hv/dxgkrnl/Makefile +@@ -2,4 +2,4 @@ + # Makefile for the hyper-v compute device driver (dxgkrnl). + + obj-$(CONFIG_DXGKRNL) += dxgkrnl.o +-dxgkrnl-y := dxgmodule.o dxgvmbus.o ++dxgkrnl-y := dxgmodule.o misc.o dxgadapter.o ioctl.o dxgvmbus.o +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +new file mode 100644 +index 000000000000..07d47699d255 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -0,0 +1,170 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. 
++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Implementation of dxgadapter and its objects ++ * ++ */ ++ ++#include ++#include ++#include ++#include ++ ++#include "dxgkrnl.h" ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt ++ ++int dxgadapter_set_vmbus(struct dxgadapter *adapter, struct hv_device *hdev) ++{ ++ int ret; ++ ++ guid_to_luid(&hdev->channel->offermsg.offer.if_instance, ++ &adapter->luid); ++ DXG_TRACE("%x:%x %p %pUb", ++ adapter->luid.b, adapter->luid.a, hdev->channel, ++ &hdev->channel->offermsg.offer.if_instance); ++ ++ ret = dxgvmbuschannel_init(&adapter->channel, hdev); ++ if (ret) ++ goto cleanup; ++ ++ adapter->channel.adapter = adapter; ++ adapter->hv_dev = hdev; ++ ++ ret = dxgvmb_send_open_adapter(adapter); ++ if (ret < 0) { ++ DXG_ERR("dxgvmb_send_open_adapter failed: %d", ret); ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_get_internal_adapter_info(adapter); ++ ++cleanup: ++ if (ret) ++ DXG_ERR("Failed to set vmbus: %d", ret); ++ return ret; ++} ++ ++void dxgadapter_start(struct dxgadapter *adapter) ++{ ++ struct dxgvgpuchannel *ch = NULL; ++ struct dxgvgpuchannel *entry; ++ int ret; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ DXG_TRACE("%x-%x", adapter->luid.a, adapter->luid.b); ++ ++ /* Find the corresponding vGPU vm bus channel */ ++ list_for_each_entry(entry, &dxgglobal->vgpu_ch_list_head, ++ vgpu_ch_list_entry) { ++ if (memcmp(&adapter->luid, ++ &entry->adapter_luid, ++ sizeof(struct winluid)) == 0) { ++ ch = entry; ++ break; ++ } ++ } ++ if (ch == NULL) { ++ DXG_TRACE("vGPU chanel is not ready"); ++ return; ++ } ++ ++ /* The global channel is initialized when the first adapter starts */ ++ if (!dxgglobal->global_channel_initialized) { ++ ret = dxgglobal_init_global_channel(); ++ if (ret) { ++ dxgglobal_destroy_global_channel(); ++ return; ++ } ++ dxgglobal->global_channel_initialized = true; ++ } ++ ++ /* Initialize vGPU vm bus channel */ ++ ret = dxgadapter_set_vmbus(adapter, ch->hdev); ++ if (ret) { ++ DXG_ERR("Failed to start adapter %p", adapter); ++ adapter->adapter_state = DXGADAPTER_STATE_STOPPED; ++ return; ++ } ++ ++ adapter->adapter_state = DXGADAPTER_STATE_ACTIVE; ++ DXG_TRACE("Adapter started %p", adapter); ++} ++ ++void dxgadapter_stop(struct dxgadapter *adapter) ++{ ++ bool adapter_stopped = false; ++ ++ down_write(&adapter->core_lock); ++ if (!adapter->stopping_adapter) ++ adapter->stopping_adapter = true; ++ else ++ adapter_stopped = true; ++ up_write(&adapter->core_lock); ++ ++ if (adapter_stopped) ++ return; ++ ++ if (dxgadapter_acquire_lock_exclusive(adapter) == 0) { ++ dxgvmb_send_close_adapter(adapter); ++ dxgadapter_release_lock_exclusive(adapter); ++ } ++ dxgvmbuschannel_destroy(&adapter->channel); ++ ++ adapter->adapter_state = DXGADAPTER_STATE_STOPPED; ++} ++ ++void dxgadapter_release(struct kref *refcount) ++{ ++ struct dxgadapter *adapter; ++ ++ adapter = container_of(refcount, struct dxgadapter, adapter_kref); ++ DXG_TRACE("%p", adapter); ++ kfree(adapter); ++} ++ ++bool dxgadapter_is_active(struct dxgadapter *adapter) ++{ ++ return adapter->adapter_state == DXGADAPTER_STATE_ACTIVE; ++} ++ ++int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter) ++{ ++ down_write(&adapter->core_lock); ++ if (adapter->adapter_state != DXGADAPTER_STATE_ACTIVE) { ++ dxgadapter_release_lock_exclusive(adapter); ++ return -ENODEV; ++ } ++ return 0; ++} ++ ++void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter) ++{ ++ down_write(&adapter->core_lock); ++} ++ ++void 
dxgadapter_release_lock_exclusive(struct dxgadapter *adapter) ++{ ++ up_write(&adapter->core_lock); ++} ++ ++int dxgadapter_acquire_lock_shared(struct dxgadapter *adapter) ++{ ++ down_read(&adapter->core_lock); ++ if (adapter->adapter_state == DXGADAPTER_STATE_ACTIVE) ++ return 0; ++ dxgadapter_release_lock_shared(adapter); ++ return -ENODEV; ++} ++ ++void dxgadapter_release_lock_shared(struct dxgadapter *adapter) ++{ ++ up_read(&adapter->core_lock); ++} +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 52b9e82c51e6..ba2a7c6001aa 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -47,9 +47,39 @@ extern struct dxgdriver dxgdrv; + + #define DXGDEV dxgdrv.dxgdev + ++struct dxgk_device_types { ++ u32 post_device:1; ++ u32 post_device_certain:1; ++ u32 software_device:1; ++ u32 soft_gpu_device:1; ++ u32 warp_device:1; ++ u32 bdd_device:1; ++ u32 support_miracast:1; ++ u32 mismatched_lda:1; ++ u32 indirect_display_device:1; ++ u32 xbox_one_device:1; ++ u32 child_id_support_dwm_clone:1; ++ u32 child_id_support_dwm_clone2:1; ++ u32 has_internal_panel:1; ++ u32 rfx_vgpu_device:1; ++ u32 virtual_render_device:1; ++ u32 support_preserve_boot_display:1; ++ u32 is_uefi_frame_buffer:1; ++ u32 removable_device:1; ++ u32 virtual_monitor_device:1; ++}; ++ ++enum dxgobjectstate { ++ DXGOBJECTSTATE_CREATED, ++ DXGOBJECTSTATE_ACTIVE, ++ DXGOBJECTSTATE_STOPPED, ++ DXGOBJECTSTATE_DESTROYED, ++}; ++ + struct dxgvmbuschannel { + struct vmbus_channel *channel; + struct hv_device *hdev; ++ struct dxgadapter *adapter; + spinlock_t packet_list_mutex; + struct list_head packet_list_head; + struct kmem_cache *packet_cache; +@@ -81,6 +111,10 @@ struct dxgglobal { + struct miscdevice dxgdevice; + struct mutex device_mutex; + ++ /* list of created adapters */ ++ struct list_head adapter_list_head; ++ struct rw_semaphore adapter_list_lock; ++ + /* + * List of the vGPU VM bus channels (dxgvgpuchannel) + * Protected by device_mutex +@@ -102,6 +136,10 @@ static inline struct dxgglobal *dxggbl(void) + return dxgdrv.dxgglobal; + } + ++int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, ++ struct winluid host_vgpu_luid); ++void dxgglobal_acquire_adapter_list_lock(enum dxglockstate state); ++void dxgglobal_release_adapter_list_lock(enum dxglockstate state); + int dxgglobal_init_global_channel(void); + void dxgglobal_destroy_global_channel(void); + struct vmbus_channel *dxgglobal_get_vmbus(void); +@@ -113,6 +151,47 @@ struct dxgprocess { + /* Placeholder */ + }; + ++enum dxgadapter_state { ++ DXGADAPTER_STATE_ACTIVE = 0, ++ DXGADAPTER_STATE_STOPPED = 1, ++ DXGADAPTER_STATE_WAITING_VMBUS = 2, ++}; ++ ++/* ++ * This object represents the grapchis adapter. 
++ * Objects, which take reference on the adapter: ++ * - dxgglobal ++ * - adapter handle (struct d3dkmthandle) ++ */ ++struct dxgadapter { ++ struct rw_semaphore core_lock; ++ struct kref adapter_kref; ++ /* Entry in the list of adapters in dxgglobal */ ++ struct list_head adapter_list_entry; ++ struct pci_dev *pci_dev; ++ struct hv_device *hv_dev; ++ struct dxgvmbuschannel channel; ++ struct d3dkmthandle host_handle; ++ enum dxgadapter_state adapter_state; ++ struct winluid host_adapter_luid; ++ struct winluid host_vgpu_luid; ++ struct winluid luid; /* VM bus channel luid */ ++ u16 device_description[80]; ++ u16 device_instance_id[WIN_MAX_PATH]; ++ bool stopping_adapter; ++}; ++ ++int dxgadapter_set_vmbus(struct dxgadapter *adapter, struct hv_device *hdev); ++bool dxgadapter_is_active(struct dxgadapter *adapter); ++void dxgadapter_start(struct dxgadapter *adapter); ++void dxgadapter_stop(struct dxgadapter *adapter); ++void dxgadapter_release(struct kref *refcount); ++int dxgadapter_acquire_lock_shared(struct dxgadapter *adapter); ++void dxgadapter_release_lock_shared(struct dxgadapter *adapter); ++int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter); ++void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter); ++void dxgadapter_release_lock_exclusive(struct dxgadapter *adapter); ++ + /* + * The convention is that VNBus instance id is a GUID, but the host sets + * the lower part of the value to the host adapter LUID. The function +@@ -141,6 +220,12 @@ static inline void guid_to_luid(guid_t *guid, struct winluid *luid) + + void dxgvmb_initialize(void); + int dxgvmb_send_set_iospace_region(u64 start, u64 len); ++int dxgvmb_send_open_adapter(struct dxgadapter *adapter); ++int dxgvmb_send_close_adapter(struct dxgadapter *adapter); ++int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter); ++int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel, ++ void *command, ++ u32 cmd_size); + + int ntstatus2int(struct ntstatus status); + +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index e55639dc0adc..ef80b920f010 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -55,6 +55,156 @@ void dxgglobal_release_channel_lock(void) + up_read(&dxggbl()->channel_lock); + } + ++void dxgglobal_acquire_adapter_list_lock(enum dxglockstate state) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (state == DXGLOCK_EXCL) ++ down_write(&dxgglobal->adapter_list_lock); ++ else ++ down_read(&dxgglobal->adapter_list_lock); ++} ++ ++void dxgglobal_release_adapter_list_lock(enum dxglockstate state) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (state == DXGLOCK_EXCL) ++ up_write(&dxgglobal->adapter_list_lock); ++ else ++ up_read(&dxgglobal->adapter_list_lock); ++} ++ ++/* ++ * Returns a pointer to dxgadapter object, which corresponds to the given PCI ++ * device, or NULL. ++ */ ++static struct dxgadapter *find_pci_adapter(struct pci_dev *dev) ++{ ++ struct dxgadapter *entry; ++ struct dxgadapter *adapter = NULL; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL); ++ ++ list_for_each_entry(entry, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (dev == entry->pci_dev) { ++ adapter = entry; ++ break; ++ } ++ } ++ ++ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); ++ return adapter; ++} ++ ++/* ++ * Returns a pointer to dxgadapter object, which has the givel LUID ++ * device, or NULL. 
++ */ ++static struct dxgadapter *find_adapter(struct winluid *luid) ++{ ++ struct dxgadapter *entry; ++ struct dxgadapter *adapter = NULL; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL); ++ ++ list_for_each_entry(entry, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (memcmp(luid, &entry->luid, sizeof(struct winluid)) == 0) { ++ adapter = entry; ++ break; ++ } ++ } ++ ++ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); ++ return adapter; ++} ++ ++/* ++ * Creates a new dxgadapter object, which represents a virtual GPU, projected ++ * by the host. ++ * The adapter is in the waiting state. It will become active when the global ++ * VM bus channel and the adapter VM bus channel are created. ++ */ ++int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, ++ struct winluid host_vgpu_luid) ++{ ++ struct dxgadapter *adapter; ++ int ret = 0; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ adapter = kzalloc(sizeof(struct dxgadapter), GFP_KERNEL); ++ if (adapter == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ adapter->adapter_state = DXGADAPTER_STATE_WAITING_VMBUS; ++ adapter->host_vgpu_luid = host_vgpu_luid; ++ kref_init(&adapter->adapter_kref); ++ init_rwsem(&adapter->core_lock); ++ ++ adapter->pci_dev = dev; ++ guid_to_luid(guid, &adapter->luid); ++ ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL); ++ ++ list_add_tail(&adapter->adapter_list_entry, ++ &dxgglobal->adapter_list_head); ++ dxgglobal->num_adapters++; ++ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); ++ ++ DXG_TRACE("new adapter added %p %x-%x", adapter, ++ adapter->luid.a, adapter->luid.b); ++cleanup: ++ return ret; ++} ++ ++/* ++ * Attempts to start dxgadapter objects, which are not active yet. ++ */ ++static void dxgglobal_start_adapters(void) ++{ ++ struct dxgadapter *adapter; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (dxgglobal->hdev == NULL) { ++ DXG_TRACE("Global channel is not ready"); ++ return; ++ } ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL); ++ list_for_each_entry(adapter, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (adapter->adapter_state == DXGADAPTER_STATE_WAITING_VMBUS) ++ dxgadapter_start(adapter); ++ } ++ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); ++} ++ ++/* ++ * Stopsthe active dxgadapter objects. 
++ */ ++static void dxgglobal_stop_adapters(void) ++{ ++ struct dxgadapter *adapter; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (dxgglobal->hdev == NULL) { ++ DXG_TRACE("Global channel is not ready"); ++ return; ++ } ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL); ++ list_for_each_entry(adapter, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (adapter->adapter_state == DXGADAPTER_STATE_ACTIVE) ++ dxgadapter_stop(adapter); ++ } ++ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); ++} ++ + const struct file_operations dxgk_fops = { + .owner = THIS_MODULE, + }; +@@ -182,6 +332,15 @@ static int dxg_pci_probe_device(struct pci_dev *dev, + DXG_TRACE("Vmbus interface version: %d", dxgglobal->vmbus_ver); + DXG_TRACE("Host luid: %x-%x", vgpu_luid.b, vgpu_luid.a); + ++ /* Create new virtual GPU adapter */ ++ ret = dxgglobal_create_adapter(dev, &guid, vgpu_luid); ++ if (ret) ++ goto cleanup; ++ ++ /* Attempt to start the adapter in case VM bus channels are created */ ++ ++ dxgglobal_start_adapters(); ++ + cleanup: + + mutex_unlock(&dxgglobal->device_mutex); +@@ -193,7 +352,25 @@ static int dxg_pci_probe_device(struct pci_dev *dev, + + static void dxg_pci_remove_device(struct pci_dev *dev) + { +- /* Placeholder */ ++ struct dxgadapter *adapter; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_lock(&dxgglobal->device_mutex); ++ ++ adapter = find_pci_adapter(dev); ++ if (adapter) { ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL); ++ list_del(&adapter->adapter_list_entry); ++ dxgglobal->num_adapters--; ++ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); ++ ++ dxgadapter_stop(adapter); ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ } else { ++ DXG_ERR("Failed to find dxgadapter for pcidev"); ++ } ++ ++ mutex_unlock(&dxgglobal->device_mutex); + } + + static struct pci_device_id dxg_pci_id_table[] = { +@@ -297,6 +474,25 @@ void dxgglobal_destroy_global_channel(void) + up_write(&dxgglobal->channel_lock); + } + ++static void dxgglobal_stop_adapter_vmbus(struct hv_device *hdev) ++{ ++ struct dxgadapter *adapter = NULL; ++ struct winluid luid; ++ ++ guid_to_luid(&hdev->channel->offermsg.offer.if_instance, &luid); ++ ++ DXG_TRACE("Stopping adapter %x:%x", luid.b, luid.a); ++ ++ adapter = find_adapter(&luid); ++ ++ if (adapter && adapter->adapter_state == DXGADAPTER_STATE_ACTIVE) { ++ down_write(&adapter->core_lock); ++ dxgvmbuschannel_destroy(&adapter->channel); ++ adapter->adapter_state = DXGADAPTER_STATE_STOPPED; ++ up_write(&adapter->core_lock); ++ } ++} ++ + static const struct hv_vmbus_device_id dxg_vmbus_id_table[] = { + /* Per GPU Device GUID */ + { HV_GPUP_DXGK_VGPU_GUID }, +@@ -329,6 +525,7 @@ static int dxg_probe_vmbus(struct hv_device *hdev, + vgpuch->hdev = hdev; + list_add_tail(&vgpuch->vgpu_ch_list_entry, + &dxgglobal->vgpu_ch_list_head); ++ dxgglobal_start_adapters(); + } else if (uuid_le_cmp(hdev->dev_type, + dxg_vmbus_id_table[1].guid) == 0) { + /* This is the global Dxgkgnl channel */ +@@ -341,6 +538,7 @@ static int dxg_probe_vmbus(struct hv_device *hdev, + goto error; + } + dxgglobal->hdev = hdev; ++ dxgglobal_start_adapters(); + } else { + /* Unknown device type */ + DXG_ERR("Unknown VM bus device type"); +@@ -364,6 +562,7 @@ static int dxg_remove_vmbus(struct hv_device *hdev) + + if (uuid_le_cmp(hdev->dev_type, dxg_vmbus_id_table[0].guid) == 0) { + DXG_TRACE("Remove virtual GPU channel"); ++ dxgglobal_stop_adapter_vmbus(hdev); + list_for_each_entry(vgpu_channel, + &dxgglobal->vgpu_ch_list_head, + vgpu_ch_list_entry) { +@@ -420,6 +619,8 
@@ static struct dxgglobal *dxgglobal_create(void) + mutex_init(&dxgglobal->device_mutex); + + INIT_LIST_HEAD(&dxgglobal->vgpu_ch_list_head); ++ INIT_LIST_HEAD(&dxgglobal->adapter_list_head); ++ init_rwsem(&dxgglobal->adapter_list_lock); + + init_rwsem(&dxgglobal->channel_lock); + +@@ -430,6 +631,7 @@ static void dxgglobal_destroy(struct dxgglobal *dxgglobal) + { + if (dxgglobal) { + mutex_lock(&dxgglobal->device_mutex); ++ dxgglobal_stop_adapters(); + dxgglobal_destroy_global_channel(); + mutex_unlock(&dxgglobal->device_mutex); + +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index a4365739826a..6d4b8d9d8d07 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -77,7 +77,7 @@ struct dxgvmbusmsgres { + void *res; + }; + +-static int init_message(struct dxgvmbusmsg *msg, ++static int init_message(struct dxgvmbusmsg *msg, struct dxgadapter *adapter, + struct dxgprocess *process, u32 size) + { + struct dxgglobal *dxgglobal = dxggbl(); +@@ -99,10 +99,15 @@ static int init_message(struct dxgvmbusmsg *msg, + if (use_ext_header) { + msg->msg = (char *)&msg->hdr[1]; + msg->hdr->command_offset = sizeof(msg->hdr[0]); ++ if (adapter) ++ msg->hdr->vgpu_luid = adapter->host_vgpu_luid; + } else { + msg->msg = (char *)msg->hdr; + } +- msg->channel = &dxgglobal->channel; ++ if (adapter && !dxgglobal->async_msg_enabled) ++ msg->channel = &adapter->channel; ++ else ++ msg->channel = &dxgglobal->channel; + return 0; + } + +@@ -116,6 +121,37 @@ static void free_message(struct dxgvmbusmsg *msg, struct dxgprocess *process) + * Helper functions + */ + ++static void command_vm_to_host_init2(struct dxgkvmb_command_vm_to_host *command, ++ enum dxgkvmb_commandtype_global t, ++ struct d3dkmthandle process) ++{ ++ command->command_type = t; ++ command->process = process; ++ command->command_id = 0; ++ command->channel_type = DXGKVMB_VM_TO_HOST; ++} ++ ++static void command_vgpu_to_host_init1(struct dxgkvmb_command_vgpu_to_host ++ *command, ++ enum dxgkvmb_commandtype type) ++{ ++ command->command_type = type; ++ command->process.v = 0; ++ command->command_id = 0; ++ command->channel_type = DXGKVMB_VGPU_TO_HOST; ++} ++ ++static void command_vgpu_to_host_init2(struct dxgkvmb_command_vgpu_to_host ++ *command, ++ enum dxgkvmb_commandtype type, ++ struct d3dkmthandle process) ++{ ++ command->command_type = type; ++ command->process = process; ++ command->command_id = 0; ++ command->channel_type = DXGKVMB_VGPU_TO_HOST; ++} ++ + int ntstatus2int(struct ntstatus status) + { + if (NT_SUCCESS(status)) +@@ -216,22 +252,26 @@ static void process_inband_packet(struct dxgvmbuschannel *channel, + u32 packet_length = hv_pkt_datalen(desc); + struct dxgkvmb_command_host_to_vm *packet; + +- if (packet_length < sizeof(struct dxgkvmb_command_host_to_vm)) { +- DXG_ERR("Invalid global packet"); +- } else { +- packet = hv_pkt_data(desc); +- DXG_TRACE("global packet %d", +- packet->command_type); +- switch (packet->command_type) { +- case DXGK_VMBCOMMAND_SIGNALGUESTEVENT: +- case DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE: +- break; +- case DXGK_VMBCOMMAND_SENDWNFNOTIFICATION: +- break; +- default: +- DXG_ERR("unexpected host message %d", ++ if (channel->adapter == NULL) { ++ if (packet_length < sizeof(struct dxgkvmb_command_host_to_vm)) { ++ DXG_ERR("Invalid global packet"); ++ } else { ++ packet = hv_pkt_data(desc); ++ DXG_TRACE("global packet %d", + packet->command_type); ++ switch (packet->command_type) { ++ case DXGK_VMBCOMMAND_SIGNALGUESTEVENT: ++ case 
DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE: ++ break; ++ case DXGK_VMBCOMMAND_SENDWNFNOTIFICATION: ++ break; ++ default: ++ DXG_ERR("unexpected host message %d", ++ packet->command_type); ++ } + } ++ } else { ++ DXG_ERR("Unexpected packet for adapter channel"); + } + } + +@@ -279,6 +319,7 @@ void dxgvmbuschannel_receive(void *ctx) + struct vmpacket_descriptor *desc; + u32 packet_length = 0; + ++ DXG_TRACE("New adapter message: %p", channel->adapter); + foreach_vmbus_pkt(desc, channel->channel) { + packet_length = hv_pkt_datalen(desc); + DXG_TRACE("next packet (id, size, type): %llu %d %d", +@@ -302,6 +343,8 @@ int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel, + { + int ret; + struct dxgvmbuspacket *packet = NULL; ++ struct dxgkvmb_command_vm_to_host *cmd1; ++ struct dxgkvmb_command_vgpu_to_host *cmd2; + + if (cmd_size > DXG_MAX_VM_BUS_PACKET_SIZE || + result_size > DXG_MAX_VM_BUS_PACKET_SIZE) { +@@ -315,6 +358,16 @@ int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel, + return -ENOMEM; + } + ++ if (channel->adapter == NULL) { ++ cmd1 = command; ++ DXG_TRACE("send_sync_msg global: %d %p %d %d", ++ cmd1->command_type, command, cmd_size, result_size); ++ } else { ++ cmd2 = command; ++ DXG_TRACE("send_sync_msg adapter: %d %p %d %d", ++ cmd2->command_type, command, cmd_size, result_size); ++ } ++ + packet->request_id = atomic64_inc_return(&channel->packet_request_id); + init_completion(&packet->wait); + packet->buffer = result; +@@ -358,6 +411,41 @@ int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel, + return ret; + } + ++int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel, ++ void *command, ++ u32 cmd_size) ++{ ++ int ret; ++ int try_count = 0; ++ ++ if (cmd_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("%s invalid data size", __func__); ++ return -EINVAL; ++ } ++ ++ if (channel->adapter) { ++ DXG_ERR("Async message sent to the adapter channel"); ++ return -EINVAL; ++ } ++ ++ do { ++ ret = vmbus_sendpacket(channel->channel, command, cmd_size, ++ 0, VM_PKT_DATA_INBAND, 0); ++ /* ++ * -EAGAIN is returned when the VM bus ring buffer if full. ++ * Wait 2ms to allow the host to process messages and try again. 
++ */ ++ if (ret == -EAGAIN) { ++ usleep_range(1000, 2000); ++ try_count++; ++ } ++ } while (ret == -EAGAIN && try_count < 5000); ++ if (ret < 0) ++ DXG_ERR("vmbus_sendpacket failed: %x", ret); ++ ++ return ret; ++} ++ + static int + dxgvmb_send_sync_msg_ntstatus(struct dxgvmbuschannel *channel, + void *command, u32 cmd_size) +@@ -383,7 +471,7 @@ int dxgvmb_send_set_iospace_region(u64 start, u64 len) + struct dxgvmbusmsg msg; + struct dxgglobal *dxgglobal = dxggbl(); + +- ret = init_message(&msg, NULL, sizeof(*command)); ++ ret = init_message(&msg, NULL, NULL, sizeof(*command)); + if (ret) + return ret; + command = (void *)msg.msg; +@@ -408,3 +496,98 @@ int dxgvmb_send_set_iospace_region(u64 start, u64 len) + DXG_TRACE("Error: %d", ret); + return ret; + } ++ ++/* ++ * Virtual GPU messages to the host ++ */ ++ ++int dxgvmb_send_open_adapter(struct dxgadapter *adapter) ++{ ++ int ret; ++ struct dxgkvmb_command_openadapter *command; ++ struct dxgkvmb_command_openadapter_return result = { }; ++ struct dxgvmbusmsg msg; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, adapter, NULL, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init1(&command->hdr, DXGK_VMBCOMMAND_OPENADAPTER); ++ command->vmbus_interface_version = dxgglobal->vmbus_ver; ++ command->vmbus_last_compatible_interface_version = ++ DXGK_VMBUS_LAST_COMPATIBLE_INTERFACE_VERSION; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result.status); ++ adapter->host_handle = result.host_adapter_handle; ++ ++cleanup: ++ free_message(&msg, NULL); ++ if (ret) ++ DXG_ERR("Failed to open adapter: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_close_adapter(struct dxgadapter *adapter) ++{ ++ int ret; ++ struct dxgkvmb_command_closeadapter *command; ++ struct dxgvmbusmsg msg; ++ ++ ret = init_message(&msg, adapter, NULL, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init1(&command->hdr, DXGK_VMBCOMMAND_CLOSEADAPTER); ++ command->host_handle = adapter->host_handle; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ NULL, 0); ++ free_message(&msg, NULL); ++ if (ret) ++ DXG_ERR("Failed to close adapter: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter) ++{ ++ int ret; ++ struct dxgkvmb_command_getinternaladapterinfo *command; ++ struct dxgkvmb_command_getinternaladapterinfo_return result = { }; ++ struct dxgvmbusmsg msg; ++ u32 result_size = sizeof(result); ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, adapter, NULL, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init1(&command->hdr, ++ DXGK_VMBCOMMAND_GETINTERNALADAPTERINFO); ++ if (dxgglobal->vmbus_ver < DXGK_VMBUS_INTERFACE_VERSION) ++ result_size -= sizeof(struct winluid); ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, result_size); ++ if (ret >= 0) { ++ adapter->host_adapter_luid = result.host_adapter_luid; ++ adapter->host_vgpu_luid = result.host_vgpu_luid; ++ wcsncpy(adapter->device_description, result.device_description, ++ sizeof(adapter->device_description) / sizeof(u16)); ++ wcsncpy(adapter->device_instance_id, result.device_instance_id, ++ sizeof(adapter->device_instance_id) / sizeof(u16)); ++ dxgglobal->async_msg_enabled = result.async_msg_enabled != 0; ++ } ++ 
free_message(&msg, NULL); ++ if (ret) ++ DXG_ERR("Failed to get adapter info: %d", ret); ++ return ret; ++} +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index b1bdd6039b73..584cdd3db6c0 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -47,6 +47,83 @@ enum dxgkvmb_commandtype_global { + DXGK_VMBCOMMAND_INVALID_VM_TO_HOST + }; + ++/* ++ * ++ * Commands, sent to the host via the per adapter VM bus channel ++ * DXG_GUEST_VGPU_VMBUS ++ * ++ */ ++ ++enum dxgkvmb_commandtype { ++ DXGK_VMBCOMMAND_CREATEDEVICE = 0, ++ DXGK_VMBCOMMAND_DESTROYDEVICE = 1, ++ DXGK_VMBCOMMAND_QUERYADAPTERINFO = 2, ++ DXGK_VMBCOMMAND_DDIQUERYADAPTERINFO = 3, ++ DXGK_VMBCOMMAND_CREATEALLOCATION = 4, ++ DXGK_VMBCOMMAND_DESTROYALLOCATION = 5, ++ DXGK_VMBCOMMAND_CREATECONTEXTVIRTUAL = 6, ++ DXGK_VMBCOMMAND_DESTROYCONTEXT = 7, ++ DXGK_VMBCOMMAND_CREATESYNCOBJECT = 8, ++ DXGK_VMBCOMMAND_CREATEPAGINGQUEUE = 9, ++ DXGK_VMBCOMMAND_DESTROYPAGINGQUEUE = 10, ++ DXGK_VMBCOMMAND_MAKERESIDENT = 11, ++ DXGK_VMBCOMMAND_EVICT = 12, ++ DXGK_VMBCOMMAND_ESCAPE = 13, ++ DXGK_VMBCOMMAND_OPENADAPTER = 14, ++ DXGK_VMBCOMMAND_CLOSEADAPTER = 15, ++ DXGK_VMBCOMMAND_FREEGPUVIRTUALADDRESS = 16, ++ DXGK_VMBCOMMAND_MAPGPUVIRTUALADDRESS = 17, ++ DXGK_VMBCOMMAND_RESERVEGPUVIRTUALADDRESS = 18, ++ DXGK_VMBCOMMAND_UPDATEGPUVIRTUALADDRESS = 19, ++ DXGK_VMBCOMMAND_SUBMITCOMMAND = 20, ++ dxgk_vmbcommand_queryvideomemoryinfo = 21, ++ DXGK_VMBCOMMAND_WAITFORSYNCOBJECTFROMCPU = 22, ++ DXGK_VMBCOMMAND_LOCK2 = 23, ++ DXGK_VMBCOMMAND_UNLOCK2 = 24, ++ DXGK_VMBCOMMAND_WAITFORSYNCOBJECTFROMGPU = 25, ++ DXGK_VMBCOMMAND_SIGNALSYNCOBJECT = 26, ++ DXGK_VMBCOMMAND_SIGNALFENCENTSHAREDBYREF = 27, ++ DXGK_VMBCOMMAND_GETDEVICESTATE = 28, ++ DXGK_VMBCOMMAND_MARKDEVICEASERROR = 29, ++ DXGK_VMBCOMMAND_ADAPTERSTOP = 30, ++ DXGK_VMBCOMMAND_SETQUEUEDLIMIT = 31, ++ DXGK_VMBCOMMAND_OPENRESOURCE = 32, ++ DXGK_VMBCOMMAND_SETCONTEXTSCHEDULINGPRIORITY = 33, ++ DXGK_VMBCOMMAND_PRESENTHISTORYTOKEN = 34, ++ DXGK_VMBCOMMAND_SETREDIRECTEDFLIPFENCEVALUE = 35, ++ DXGK_VMBCOMMAND_GETINTERNALADAPTERINFO = 36, ++ DXGK_VMBCOMMAND_FLUSHHEAPTRANSITIONS = 37, ++ DXGK_VMBCOMMAND_BLT = 38, ++ DXGK_VMBCOMMAND_DDIGETSTANDARDALLOCATIONDRIVERDATA = 39, ++ DXGK_VMBCOMMAND_CDDGDICOMMAND = 40, ++ DXGK_VMBCOMMAND_QUERYALLOCATIONRESIDENCY = 41, ++ DXGK_VMBCOMMAND_FLUSHDEVICE = 42, ++ DXGK_VMBCOMMAND_FLUSHADAPTER = 43, ++ DXGK_VMBCOMMAND_DDIGETNODEMETADATA = 44, ++ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE = 45, ++ DXGK_VMBCOMMAND_ISSYNCOBJECTSIGNALED = 46, ++ DXGK_VMBCOMMAND_CDDSYNCGPUACCESS = 47, ++ DXGK_VMBCOMMAND_QUERYSTATISTICS = 48, ++ DXGK_VMBCOMMAND_CHANGEVIDEOMEMORYRESERVATION = 49, ++ DXGK_VMBCOMMAND_CREATEHWQUEUE = 50, ++ DXGK_VMBCOMMAND_DESTROYHWQUEUE = 51, ++ DXGK_VMBCOMMAND_SUBMITCOMMANDTOHWQUEUE = 52, ++ DXGK_VMBCOMMAND_GETDRIVERSTOREFILE = 53, ++ DXGK_VMBCOMMAND_READDRIVERSTOREFILE = 54, ++ DXGK_VMBCOMMAND_GETNEXTHARDLINK = 55, ++ DXGK_VMBCOMMAND_UPDATEALLOCATIONPROPERTY = 56, ++ DXGK_VMBCOMMAND_OFFERALLOCATIONS = 57, ++ DXGK_VMBCOMMAND_RECLAIMALLOCATIONS = 58, ++ DXGK_VMBCOMMAND_SETALLOCATIONPRIORITY = 59, ++ DXGK_VMBCOMMAND_GETALLOCATIONPRIORITY = 60, ++ DXGK_VMBCOMMAND_GETCONTEXTSCHEDULINGPRIORITY = 61, ++ DXGK_VMBCOMMAND_QUERYCLOCKCALIBRATION = 62, ++ DXGK_VMBCOMMAND_QUERYRESOURCEINFO = 64, ++ DXGK_VMBCOMMAND_LOGEVENT = 65, ++ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMPAGES = 66, ++ DXGK_VMBCOMMAND_INVALID ++}; ++ + /* + * Commands, sent by the host to the VM + */ +@@ -66,6 +143,15 @@ struct dxgkvmb_command_vm_to_host { + enum 
dxgkvmb_commandtype_global command_type; + }; + ++struct dxgkvmb_command_vgpu_to_host { ++ u64 command_id; ++ struct d3dkmthandle process; ++ u32 channel_type : 8; ++ u32 async_msg : 1; ++ u32 reserved : 23; ++ enum dxgkvmb_commandtype command_type; ++}; ++ + struct dxgkvmb_command_host_to_vm { + u64 command_id; + struct d3dkmthandle process; +@@ -83,4 +169,46 @@ struct dxgkvmb_command_setiospaceregion { + u32 shared_page_gpadl; + }; + ++struct dxgkvmb_command_openadapter { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ u32 vmbus_interface_version; ++ u32 vmbus_last_compatible_interface_version; ++ struct winluid guest_adapter_luid; ++}; ++ ++struct dxgkvmb_command_openadapter_return { ++ struct d3dkmthandle host_adapter_handle; ++ struct ntstatus status; ++ u32 vmbus_interface_version; ++ u32 vmbus_last_compatible_interface_version; ++}; ++ ++struct dxgkvmb_command_closeadapter { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle host_handle; ++}; ++ ++struct dxgkvmb_command_getinternaladapterinfo { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++}; ++ ++struct dxgkvmb_command_getinternaladapterinfo_return { ++ struct dxgk_device_types device_types; ++ u32 driver_store_copy_mode; ++ u32 driver_ddi_version; ++ u32 secure_virtual_machine : 1; ++ u32 virtual_machine_reset : 1; ++ u32 is_vail_supported : 1; ++ u32 hw_sch_enabled : 1; ++ u32 hw_sch_capable : 1; ++ u32 va_backed_vm : 1; ++ u32 async_msg_enabled : 1; ++ u32 hw_support_state : 2; ++ u32 reserved : 23; ++ struct winluid host_adapter_luid; ++ u16 device_description[80]; ++ u16 device_instance_id[WIN_MAX_PATH]; ++ struct winluid host_vgpu_luid; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/misc.c b/drivers/hv/dxgkrnl/misc.c +new file mode 100644 +index 000000000000..cb1e0635bebc +--- /dev/null ++++ b/drivers/hv/dxgkrnl/misc.c +@@ -0,0 +1,37 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2019, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Helper functions ++ * ++ */ ++ ++#include ++#include ++#include ++ ++#include "dxgkrnl.h" ++#include "misc.h" ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt ++ ++u16 *wcsncpy(u16 *dest, const u16 *src, size_t n) ++{ ++ int i; ++ ++ for (i = 0; i < n; i++) { ++ dest[i] = src[i]; ++ if (src[i] == 0) { ++ i++; ++ break; ++ } ++ } ++ dest[i - 1] = 0; ++ return dest; ++} +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index 4c6047c32a20..d292e9a9bb7f 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -14,18 +14,34 @@ + #ifndef _MISC_H_ + #define _MISC_H_ + ++/* Max characters in Windows path */ ++#define WIN_MAX_PATH 260 ++ + extern const struct d3dkmthandle zerohandle; + + /* + * Synchronization lock hierarchy. + * +- * The higher enum value, the higher is the lock order. +- * When a lower lock ois held, the higher lock should not be acquired. ++ * The locks here are in the order from lowest to highest. ++ * When a lower lock is held, the higher lock should not be acquired. 
+ * +- * channel_lock +- * device_mutex ++ * channel_lock (VMBus channel lock) ++ * fd_mutex ++ * plistmutex (process list mutex) ++ * table_lock (handle table lock) ++ * core_lock (dxgadapter lock) ++ * device_lock (dxgdevice lock) ++ * adapter_list_lock ++ * device_mutex (dxgglobal mutex) + */ + ++u16 *wcsncpy(u16 *dest, const u16 *src, size_t n); ++ ++enum dxglockstate { ++ DXGLOCK_SHARED, ++ DXGLOCK_EXCL ++}; ++ + /* + * Some of the Windows return codes, which needs to be translated to Linux + * IOCTL return codes. Positive values are success codes and need to be +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1671-drivers-hv-dxgkrnl-Opening-of-dev-dxg-device-and-dxgprocess-creation.patch b/patch/kernel/archive/wsl2-arm64-6.6/1671-drivers-hv-dxgkrnl-Opening-of-dev-dxg-device-and-dxgprocess-creation.patch new file mode 100644 index 000000000000..413f14c3461c --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1671-drivers-hv-dxgkrnl-Opening-of-dev-dxg-device-and-dxgprocess-creation.patch @@ -0,0 +1,1847 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 15 Feb 2022 19:12:48 -0800 +Subject: drivers: hv: dxgkrnl: Opening of /dev/dxg device and dxgprocess + creation + +- Implement opening of the device (/dev/dxg) file object and creation of +dxgprocess objects. + +- Add VM bus messages to create and destroy the host side of a dxgprocess +object. + +- Implement the handle manager, which manages d3dkmthandle handles +for the internal process objects. The handles are used by a user mode +client to reference dxgkrnl objects. + +dxgprocess is created for each process, which opens /dev/dxg. +dxgprocess is ref counted, so the existing dxgprocess objects is used +for a process, which opens the device object multiple time. +dxgprocess is destroyed when the file object is released. + +A corresponding dxgprocess object is created on the host for every +dxgprocess object in the guest. + +When a dxgkrnl object is created, in most cases the corresponding +object is created in the host. The VM references the host objects by +handles (d3dkmthandle). d3dkmthandle values for a host object and +the corresponding VM object are the same. A host handle is allocated +first and its value is assigned to the guest object. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/Makefile | 2 +- + drivers/hv/dxgkrnl/dxgadapter.c | 72 ++ + drivers/hv/dxgkrnl/dxgkrnl.h | 95 +- + drivers/hv/dxgkrnl/dxgmodule.c | 97 ++ + drivers/hv/dxgkrnl/dxgprocess.c | 262 +++++ + drivers/hv/dxgkrnl/dxgvmbus.c | 164 +++ + drivers/hv/dxgkrnl/dxgvmbus.h | 36 + + drivers/hv/dxgkrnl/hmgr.c | 563 ++++++++++ + drivers/hv/dxgkrnl/hmgr.h | 112 ++ + drivers/hv/dxgkrnl/ioctl.c | 60 + + drivers/hv/dxgkrnl/misc.h | 9 +- + include/uapi/misc/d3dkmthk.h | 103 ++ + 12 files changed, 1569 insertions(+), 6 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile +index 2ed07d877c91..9d821e83448a 100644 +--- a/drivers/hv/dxgkrnl/Makefile ++++ b/drivers/hv/dxgkrnl/Makefile +@@ -2,4 +2,4 @@ + # Makefile for the hyper-v compute device driver (dxgkrnl). 
+ + obj-$(CONFIG_DXGKRNL) += dxgkrnl.o +-dxgkrnl-y := dxgmodule.o misc.o dxgadapter.o ioctl.o dxgvmbus.o ++dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 07d47699d255..fa0d6beca157 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -100,6 +100,7 @@ void dxgadapter_start(struct dxgadapter *adapter) + + void dxgadapter_stop(struct dxgadapter *adapter) + { ++ struct dxgprocess_adapter *entry; + bool adapter_stopped = false; + + down_write(&adapter->core_lock); +@@ -112,6 +113,15 @@ void dxgadapter_stop(struct dxgadapter *adapter) + if (adapter_stopped) + return; + ++ dxgglobal_acquire_process_adapter_lock(); ++ ++ list_for_each_entry(entry, &adapter->adapter_process_list_head, ++ adapter_process_list_entry) { ++ dxgprocess_adapter_stop(entry); ++ } ++ ++ dxgglobal_release_process_adapter_lock(); ++ + if (dxgadapter_acquire_lock_exclusive(adapter) == 0) { + dxgvmb_send_close_adapter(adapter); + dxgadapter_release_lock_exclusive(adapter); +@@ -135,6 +145,21 @@ bool dxgadapter_is_active(struct dxgadapter *adapter) + return adapter->adapter_state == DXGADAPTER_STATE_ACTIVE; + } + ++/* Protected by dxgglobal_acquire_process_adapter_lock */ ++void dxgadapter_add_process(struct dxgadapter *adapter, ++ struct dxgprocess_adapter *process_info) ++{ ++ DXG_TRACE("%p %p", adapter, process_info); ++ list_add_tail(&process_info->adapter_process_list_entry, ++ &adapter->adapter_process_list_head); ++} ++ ++void dxgadapter_remove_process(struct dxgprocess_adapter *process_info) ++{ ++ DXG_TRACE("%p %p", process_info->adapter, process_info); ++ list_del(&process_info->adapter_process_list_entry); ++} ++ + int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter) + { + down_write(&adapter->core_lock); +@@ -168,3 +193,50 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter) + { + up_read(&adapter->core_lock); + } ++ ++struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, ++ struct dxgadapter *adapter) ++{ ++ struct dxgprocess_adapter *adapter_info; ++ ++ adapter_info = kzalloc(sizeof(*adapter_info), GFP_KERNEL); ++ if (adapter_info) { ++ if (kref_get_unless_zero(&adapter->adapter_kref) == 0) { ++ DXG_ERR("failed to acquire adapter reference"); ++ goto cleanup; ++ } ++ adapter_info->adapter = adapter; ++ adapter_info->process = process; ++ adapter_info->refcount = 1; ++ list_add_tail(&adapter_info->process_adapter_list_entry, ++ &process->process_adapter_list_head); ++ dxgadapter_add_process(adapter, adapter_info); ++ } ++ return adapter_info; ++cleanup: ++ if (adapter_info) ++ kfree(adapter_info); ++ return NULL; ++} ++ ++void dxgprocess_adapter_stop(struct dxgprocess_adapter *adapter_info) ++{ ++} ++ ++void dxgprocess_adapter_destroy(struct dxgprocess_adapter *adapter_info) ++{ ++ dxgadapter_remove_process(adapter_info); ++ kref_put(&adapter_info->adapter->adapter_kref, dxgadapter_release); ++ list_del(&adapter_info->process_adapter_list_entry); ++ kfree(adapter_info); ++} ++ ++/* ++ * Must be called when dxgglobal::process_adapter_mutex is held ++ */ ++void dxgprocess_adapter_release(struct dxgprocess_adapter *adapter_info) ++{ ++ adapter_info->refcount--; ++ if (adapter_info->refcount == 0) ++ dxgprocess_adapter_destroy(adapter_info); ++} +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index ba2a7c6001aa..b089d126f801 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ 
b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -29,8 +29,10 @@ + #include + #include + #include "misc.h" ++#include "hmgr.h" + #include + ++struct dxgprocess; + struct dxgadapter; + + /* +@@ -111,6 +113,10 @@ struct dxgglobal { + struct miscdevice dxgdevice; + struct mutex device_mutex; + ++ /* list of created processes */ ++ struct list_head plisthead; ++ struct mutex plistmutex; ++ + /* list of created adapters */ + struct list_head adapter_list_head; + struct rw_semaphore adapter_list_lock; +@@ -124,6 +130,9 @@ struct dxgglobal { + /* protects acces to the global VM bus channel */ + struct rw_semaphore channel_lock; + ++ /* protects the dxgprocess_adapter lists */ ++ struct mutex process_adapter_mutex; ++ + bool global_channel_initialized; + bool async_msg_enabled; + bool misc_registered; +@@ -144,13 +153,84 @@ int dxgglobal_init_global_channel(void); + void dxgglobal_destroy_global_channel(void); + struct vmbus_channel *dxgglobal_get_vmbus(void); + struct dxgvmbuschannel *dxgglobal_get_dxgvmbuschannel(void); ++void dxgglobal_acquire_process_adapter_lock(void); ++void dxgglobal_release_process_adapter_lock(void); + int dxgglobal_acquire_channel_lock(void); + void dxgglobal_release_channel_lock(void); + ++/* ++ * Describes adapter information for each process ++ */ ++struct dxgprocess_adapter { ++ /* Entry in dxgadapter::adapter_process_list_head */ ++ struct list_head adapter_process_list_entry; ++ /* Entry in dxgprocess::process_adapter_list_head */ ++ struct list_head process_adapter_list_entry; ++ struct dxgadapter *adapter; ++ struct dxgprocess *process; ++ int refcount; ++}; ++ ++struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, ++ struct dxgadapter ++ *adapter); ++void dxgprocess_adapter_release(struct dxgprocess_adapter *adapter); ++void dxgprocess_adapter_stop(struct dxgprocess_adapter *adapter_info); ++void dxgprocess_adapter_destroy(struct dxgprocess_adapter *adapter_info); ++ ++/* ++ * The structure represents a process, which opened the /dev/dxg device. ++ * A corresponding object is created on the host. ++ */ + struct dxgprocess { +- /* Placeholder */ ++ /* ++ * Process list entry in dxgglobal. ++ * Protected by the dxgglobal->plistmutex. ++ */ ++ struct list_head plistentry; ++ pid_t pid; ++ pid_t tgid; ++ /* how many time the process was opened */ ++ struct kref process_kref; ++ /* ++ * This handle table is used for all objects except dxgadapter ++ * The handle table lock order is higher than the local_handle_table ++ * lock ++ */ ++ struct hmgrtable handle_table; ++ /* ++ * This handle table is used for dxgadapter objects. ++ * The handle table lock order is lowest. 
++ */ ++ struct hmgrtable local_handle_table; ++ /* Handle of the corresponding objec on the host */ ++ struct d3dkmthandle host_handle; ++ ++ /* List of opened adapters (dxgprocess_adapter) */ ++ struct list_head process_adapter_list_head; + }; + ++struct dxgprocess *dxgprocess_create(void); ++void dxgprocess_destroy(struct dxgprocess *process); ++void dxgprocess_release(struct kref *refcount); ++int dxgprocess_open_adapter(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle *handle); ++int dxgprocess_close_adapter(struct dxgprocess *process, ++ struct d3dkmthandle handle); ++struct dxgadapter *dxgprocess_get_adapter(struct dxgprocess *process, ++ struct d3dkmthandle handle); ++struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process, ++ struct d3dkmthandle handle); ++void dxgprocess_ht_lock_shared_down(struct dxgprocess *process); ++void dxgprocess_ht_lock_shared_up(struct dxgprocess *process); ++void dxgprocess_ht_lock_exclusive_down(struct dxgprocess *process); ++void dxgprocess_ht_lock_exclusive_up(struct dxgprocess *process); ++struct dxgprocess_adapter *dxgprocess_get_adapter_info(struct dxgprocess ++ *process, ++ struct dxgadapter ++ *adapter); ++ + enum dxgadapter_state { + DXGADAPTER_STATE_ACTIVE = 0, + DXGADAPTER_STATE_STOPPED = 1, +@@ -168,6 +248,8 @@ struct dxgadapter { + struct kref adapter_kref; + /* Entry in the list of adapters in dxgglobal */ + struct list_head adapter_list_entry; ++ /* The list of dxgprocess_adapter entries */ ++ struct list_head adapter_process_list_head; + struct pci_dev *pci_dev; + struct hv_device *hv_dev; + struct dxgvmbuschannel channel; +@@ -191,6 +273,12 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter); + int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter); + void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter); + void dxgadapter_release_lock_exclusive(struct dxgadapter *adapter); ++void dxgadapter_add_process(struct dxgadapter *adapter, ++ struct dxgprocess_adapter *process_info); ++void dxgadapter_remove_process(struct dxgprocess_adapter *process_info); ++ ++long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2); ++long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2); + + /* + * The convention is that VNBus instance id is a GUID, but the host sets +@@ -220,9 +308,14 @@ static inline void guid_to_luid(guid_t *guid, struct winluid *luid) + + void dxgvmb_initialize(void); + int dxgvmb_send_set_iospace_region(u64 start, u64 len); ++int dxgvmb_send_create_process(struct dxgprocess *process); ++int dxgvmb_send_destroy_process(struct d3dkmthandle process); + int dxgvmb_send_open_adapter(struct dxgadapter *adapter); + int dxgvmb_send_close_adapter(struct dxgadapter *adapter); + int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter); ++int dxgvmb_send_query_adapter_info(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryadapterinfo *args); + int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel, + void *command, + u32 cmd_size); +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index ef80b920f010..17c22001ca6c 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -123,6 +123,20 @@ static struct dxgadapter *find_adapter(struct winluid *luid) + return adapter; + } + ++void dxgglobal_acquire_process_adapter_lock(void) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_lock(&dxgglobal->process_adapter_mutex); 
++} ++ ++void dxgglobal_release_process_adapter_lock(void) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_unlock(&dxgglobal->process_adapter_mutex); ++} ++ + /* + * Creates a new dxgadapter object, which represents a virtual GPU, projected + * by the host. +@@ -147,6 +161,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, + kref_init(&adapter->adapter_kref); + init_rwsem(&adapter->core_lock); + ++ INIT_LIST_HEAD(&adapter->adapter_process_list_head); + adapter->pci_dev = dev; + guid_to_luid(guid, &adapter->luid); + +@@ -205,8 +220,87 @@ static void dxgglobal_stop_adapters(void) + dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL); + } + ++/* ++ * Returns dxgprocess for the current executing process. ++ * Creates dxgprocess if it doesn't exist. ++ */ ++static struct dxgprocess *dxgglobal_get_current_process(void) ++{ ++ /* ++ * Find the DXG process for the current process. ++ * A new process is created if necessary. ++ */ ++ struct dxgprocess *process = NULL; ++ struct dxgprocess *entry = NULL; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ mutex_lock(&dxgglobal->plistmutex); ++ list_for_each_entry(entry, &dxgglobal->plisthead, plistentry) { ++ /* All threads of a process have the same thread group ID */ ++ if (entry->tgid == current->tgid) { ++ if (kref_get_unless_zero(&entry->process_kref)) { ++ process = entry; ++ DXG_TRACE("found dxgprocess"); ++ } else { ++ DXG_TRACE("process is destroyed"); ++ } ++ break; ++ } ++ } ++ mutex_unlock(&dxgglobal->plistmutex); ++ ++ if (process == NULL) ++ process = dxgprocess_create(); ++ ++ return process; ++} ++ ++/* ++ * File operations for the /dev/dxg device ++ */ ++ ++static int dxgk_open(struct inode *n, struct file *f) ++{ ++ int ret = 0; ++ struct dxgprocess *process; ++ ++ DXG_TRACE("%p %d %d", f, current->pid, current->tgid); ++ ++ /* Find/create a dxgprocess structure for this process */ ++ process = dxgglobal_get_current_process(); ++ ++ if (process) { ++ f->private_data = process; ++ } else { ++ DXG_TRACE("cannot create dxgprocess"); ++ ret = -EBADF; ++ } ++ ++ return ret; ++} ++ ++static int dxgk_release(struct inode *n, struct file *f) ++{ ++ struct dxgprocess *process; ++ ++ process = (struct dxgprocess *)f->private_data; ++ DXG_TRACE("%p, %p", f, process); ++ ++ if (process == NULL) ++ return -EINVAL; ++ ++ kref_put(&process->process_kref, dxgprocess_release); ++ ++ f->private_data = NULL; ++ return 0; ++} ++ + const struct file_operations dxgk_fops = { + .owner = THIS_MODULE, ++ .open = dxgk_open, ++ .release = dxgk_release, ++ .compat_ioctl = dxgk_compat_ioctl, ++ .unlocked_ioctl = dxgk_unlocked_ioctl, + }; + + /* +@@ -616,7 +710,10 @@ static struct dxgglobal *dxgglobal_create(void) + if (!dxgglobal) + return NULL; + ++ INIT_LIST_HEAD(&dxgglobal->plisthead); ++ mutex_init(&dxgglobal->plistmutex); + mutex_init(&dxgglobal->device_mutex); ++ mutex_init(&dxgglobal->process_adapter_mutex); + + INIT_LIST_HEAD(&dxgglobal->vgpu_ch_list_head); + INIT_LIST_HEAD(&dxgglobal->adapter_list_head); +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +new file mode 100644 +index 000000000000..ab9a01e3c8c8 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -0,0 +1,262 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. 
++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * DXGPROCESS implementation ++ * ++ */ ++ ++#include "dxgkrnl.h" ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt ++ ++/* ++ * Creates a new dxgprocess object ++ * Must be called when dxgglobal->plistmutex is held ++ */ ++struct dxgprocess *dxgprocess_create(void) ++{ ++ struct dxgprocess *process; ++ int ret; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ process = kzalloc(sizeof(struct dxgprocess), GFP_KERNEL); ++ if (process != NULL) { ++ DXG_TRACE("new dxgprocess created"); ++ process->pid = current->pid; ++ process->tgid = current->tgid; ++ ret = dxgvmb_send_create_process(process); ++ if (ret < 0) { ++ DXG_TRACE("send_create_process failed"); ++ kfree(process); ++ process = NULL; ++ } else { ++ INIT_LIST_HEAD(&process->plistentry); ++ kref_init(&process->process_kref); ++ ++ mutex_lock(&dxgglobal->plistmutex); ++ list_add_tail(&process->plistentry, ++ &dxgglobal->plisthead); ++ mutex_unlock(&dxgglobal->plistmutex); ++ ++ hmgrtable_init(&process->handle_table, process); ++ hmgrtable_init(&process->local_handle_table, process); ++ INIT_LIST_HEAD(&process->process_adapter_list_head); ++ } ++ } ++ return process; ++} ++ ++void dxgprocess_destroy(struct dxgprocess *process) ++{ ++ int i; ++ enum hmgrentry_type t; ++ struct d3dkmthandle h; ++ void *o; ++ struct dxgprocess_adapter *entry; ++ struct dxgprocess_adapter *tmp; ++ ++ /* Destroy all adapter state */ ++ dxgglobal_acquire_process_adapter_lock(); ++ list_for_each_entry_safe(entry, tmp, ++ &process->process_adapter_list_head, ++ process_adapter_list_entry) { ++ dxgprocess_adapter_destroy(entry); ++ } ++ dxgglobal_release_process_adapter_lock(); ++ ++ i = 0; ++ while (hmgrtable_next_entry(&process->local_handle_table, ++ &i, &t, &h, &o)) { ++ switch (t) { ++ case HMGRENTRY_TYPE_DXGADAPTER: ++ dxgprocess_close_adapter(process, h); ++ break; ++ default: ++ DXG_ERR("invalid entry in handle table %d", t); ++ break; ++ } ++ } ++ ++ hmgrtable_destroy(&process->handle_table); ++ hmgrtable_destroy(&process->local_handle_table); ++} ++ ++void dxgprocess_release(struct kref *refcount) ++{ ++ struct dxgprocess *process; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ process = container_of(refcount, struct dxgprocess, process_kref); ++ ++ mutex_lock(&dxgglobal->plistmutex); ++ list_del(&process->plistentry); ++ mutex_unlock(&dxgglobal->plistmutex); ++ ++ dxgprocess_destroy(process); ++ ++ if (process->host_handle.v) ++ dxgvmb_send_destroy_process(process->host_handle); ++ kfree(process); ++} ++ ++struct dxgprocess_adapter *dxgprocess_get_adapter_info(struct dxgprocess ++ *process, ++ struct dxgadapter ++ *adapter) ++{ ++ struct dxgprocess_adapter *entry; ++ ++ list_for_each_entry(entry, &process->process_adapter_list_head, ++ process_adapter_list_entry) { ++ if (adapter == entry->adapter) { ++ DXG_TRACE("Found process info %p", entry); ++ return entry; ++ } ++ } ++ return NULL; ++} ++ ++/* ++ * Dxgprocess takes references on dxgadapter and dxgprocess_adapter. ++ * ++ * The process_adapter lock is held. 
++ * ++ */ ++int dxgprocess_open_adapter(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle *h) ++{ ++ int ret = 0; ++ struct dxgprocess_adapter *adapter_info; ++ struct d3dkmthandle handle; ++ ++ h->v = 0; ++ adapter_info = dxgprocess_get_adapter_info(process, adapter); ++ if (adapter_info == NULL) { ++ DXG_TRACE("creating new process adapter info"); ++ adapter_info = dxgprocess_adapter_create(process, adapter); ++ if (adapter_info == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ } else { ++ adapter_info->refcount++; ++ } ++ ++ handle = hmgrtable_alloc_handle_safe(&process->local_handle_table, ++ adapter, HMGRENTRY_TYPE_DXGADAPTER, ++ true); ++ if (handle.v) { ++ *h = handle; ++ } else { ++ DXG_ERR("failed to create adapter handle"); ++ ret = -ENOMEM; ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (adapter_info) ++ dxgprocess_adapter_release(adapter_info); ++ } ++ ++ return ret; ++} ++ ++int dxgprocess_close_adapter(struct dxgprocess *process, ++ struct d3dkmthandle handle) ++{ ++ struct dxgadapter *adapter; ++ struct dxgprocess_adapter *adapter_info; ++ int ret = 0; ++ ++ if (handle.v == 0) ++ return 0; ++ ++ hmgrtable_lock(&process->local_handle_table, DXGLOCK_EXCL); ++ adapter = dxgprocess_get_adapter(process, handle); ++ if (adapter) ++ hmgrtable_free_handle(&process->local_handle_table, ++ HMGRENTRY_TYPE_DXGADAPTER, handle); ++ hmgrtable_unlock(&process->local_handle_table, DXGLOCK_EXCL); ++ ++ if (adapter) { ++ adapter_info = dxgprocess_get_adapter_info(process, adapter); ++ if (adapter_info) { ++ dxgglobal_acquire_process_adapter_lock(); ++ dxgprocess_adapter_release(adapter_info); ++ dxgglobal_release_process_adapter_lock(); ++ } else { ++ ret = -EINVAL; ++ } ++ } else { ++ DXG_ERR("Adapter not found %x", handle.v); ++ ret = -EINVAL; ++ } ++ ++ return ret; ++} ++ ++struct dxgadapter *dxgprocess_get_adapter(struct dxgprocess *process, ++ struct d3dkmthandle handle) ++{ ++ struct dxgadapter *adapter; ++ ++ adapter = hmgrtable_get_object_by_type(&process->local_handle_table, ++ HMGRENTRY_TYPE_DXGADAPTER, ++ handle); ++ if (adapter == NULL) ++ DXG_ERR("Adapter not found %x", handle.v); ++ return adapter; ++} ++ ++/* ++ * Gets the adapter object from the process handle table. ++ * The adapter object is referenced. ++ * The function acquired the handle table lock shared. 
++ */ ++struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process, ++ struct d3dkmthandle handle) ++{ ++ struct dxgadapter *adapter; ++ ++ hmgrtable_lock(&process->local_handle_table, DXGLOCK_SHARED); ++ adapter = hmgrtable_get_object_by_type(&process->local_handle_table, ++ HMGRENTRY_TYPE_DXGADAPTER, ++ handle); ++ if (adapter == NULL) ++ DXG_ERR("adapter_by_handle failed %x", handle.v); ++ else if (kref_get_unless_zero(&adapter->adapter_kref) == 0) { ++ DXG_ERR("failed to acquire adapter reference"); ++ adapter = NULL; ++ } ++ hmgrtable_unlock(&process->local_handle_table, DXGLOCK_SHARED); ++ return adapter; ++} ++ ++void dxgprocess_ht_lock_shared_down(struct dxgprocess *process) ++{ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); ++} ++ ++void dxgprocess_ht_lock_shared_up(struct dxgprocess *process) ++{ ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED); ++} ++ ++void dxgprocess_ht_lock_exclusive_down(struct dxgprocess *process) ++{ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++} ++ ++void dxgprocess_ht_lock_exclusive_up(struct dxgprocess *process) ++{ ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++} +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 6d4b8d9d8d07..0abf45d0d3f7 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -497,6 +497,87 @@ int dxgvmb_send_set_iospace_region(u64 start, u64 len) + return ret; + } + ++int dxgvmb_send_create_process(struct dxgprocess *process) ++{ ++ int ret; ++ struct dxgkvmb_command_createprocess *command; ++ struct dxgkvmb_command_createprocess_return result = { 0 }; ++ struct dxgvmbusmsg msg; ++ char s[WIN_MAX_PATH]; ++ int i; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, NULL, process, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ command_vm_to_host_init1(&command->hdr, DXGK_VMBCOMMAND_CREATEPROCESS); ++ command->process = process; ++ command->process_id = process->pid; ++ command->linux_process = 1; ++ s[0] = 0; ++ __get_task_comm(s, WIN_MAX_PATH, current); ++ for (i = 0; i < WIN_MAX_PATH; i++) { ++ command->process_name[i] = s[i]; ++ if (s[i] == 0) ++ break; ++ } ++ ++ ret = dxgvmb_send_sync_msg(&dxgglobal->channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) { ++ DXG_ERR("create_process failed %d", ret); ++ } else if (result.hprocess.v == 0) { ++ DXG_ERR("create_process returned 0 handle"); ++ ret = -ENOTRECOVERABLE; ++ } else { ++ process->host_handle = result.hprocess; ++ DXG_TRACE("create_process returned %x", ++ process->host_handle.v); ++ } ++ ++ dxgglobal_release_channel_lock(); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_destroy_process(struct d3dkmthandle process) ++{ ++ int ret; ++ struct dxgkvmb_command_destroyprocess *command; ++ struct dxgvmbusmsg msg; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, NULL, NULL, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ command_vm_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_DESTROYPROCESS, ++ process); ++ ret = dxgvmb_send_sync_msg_ntstatus(&dxgglobal->channel, ++ msg.hdr, msg.size); ++ dxgglobal_release_channel_lock(); ++ ++cleanup: ++ free_message(&msg, NULL); ++ if (ret) ++ 
DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + /* + * Virtual GPU messages to the host + */ +@@ -591,3 +672,86 @@ int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter) + DXG_ERR("Failed to get adapter info: %d", ret); + return ret; + } ++ ++int dxgvmb_send_query_adapter_info(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryadapterinfo *args) ++{ ++ struct dxgkvmb_command_queryadapterinfo *command; ++ u32 cmd_size = sizeof(*command) + args->private_data_size - 1; ++ int ret; ++ u32 private_data_size; ++ void *private_data; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ ret = copy_from_user(command->private_data, ++ args->private_data, args->private_data_size); ++ if (ret) { ++ DXG_ERR("Faled to copy private data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_QUERYADAPTERINFO, ++ process->host_handle); ++ command->private_data_size = args->private_data_size; ++ command->query_type = args->type; ++ ++ if (dxgglobal->vmbus_ver >= DXGK_VMBUS_INTERFACE_VERSION) { ++ private_data = msg.msg; ++ private_data_size = command->private_data_size + ++ sizeof(struct ntstatus); ++ } else { ++ private_data = command->private_data; ++ private_data_size = command->private_data_size; ++ } ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ private_data, private_data_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ if (dxgglobal->vmbus_ver >= DXGK_VMBUS_INTERFACE_VERSION) { ++ ret = ntstatus2int(*(struct ntstatus *)private_data); ++ if (ret < 0) ++ goto cleanup; ++ private_data = (char *)private_data + sizeof(struct ntstatus); ++ } ++ ++ switch (args->type) { ++ case _KMTQAITYPE_ADAPTERTYPE: ++ case _KMTQAITYPE_ADAPTERTYPE_RENDER: ++ { ++ struct d3dkmt_adaptertype *adapter_type = ++ (void *)private_data; ++ adapter_type->paravirtualized = 1; ++ adapter_type->display_supported = 0; ++ adapter_type->post_device = 0; ++ adapter_type->indirect_display_device = 0; ++ adapter_type->acg_supported = 0; ++ adapter_type->support_set_timings_from_vidpn = 0; ++ break; ++ } ++ default: ++ break; ++ } ++ ret = copy_to_user(args->private_data, private_data, ++ args->private_data_size); ++ if (ret) { ++ DXG_ERR("Faled to copy private data to user"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 584cdd3db6c0..a805a396e083 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -14,7 +14,11 @@ + #ifndef _DXGVMBUS_H + #define _DXGVMBUS_H + ++struct dxgprocess; ++struct dxgadapter; ++ + #define DXG_MAX_VM_BUS_PACKET_SIZE (1024 * 128) ++#define DXG_VM_PROCESS_NAME_LENGTH 260 + + enum dxgkvmb_commandchanneltype { + DXGKVMB_VGPU_TO_HOST, +@@ -169,6 +173,26 @@ struct dxgkvmb_command_setiospaceregion { + u32 shared_page_gpadl; + }; + ++struct dxgkvmb_command_createprocess { ++ struct dxgkvmb_command_vm_to_host hdr; ++ void *process; ++ u64 process_id; ++ u16 process_name[DXG_VM_PROCESS_NAME_LENGTH + 1]; ++ u8 csrss_process:1; ++ u8 dwm_process:1; ++ u8 wow64_process:1; ++ u8 linux_process:1; ++}; ++ ++struct dxgkvmb_command_createprocess_return { ++ struct d3dkmthandle hprocess; ++}; ++ ++// The command returns ntstatus ++struct 
dxgkvmb_command_destroyprocess { ++ struct dxgkvmb_command_vm_to_host hdr; ++}; ++ + struct dxgkvmb_command_openadapter { + struct dxgkvmb_command_vgpu_to_host hdr; + u32 vmbus_interface_version; +@@ -211,4 +235,16 @@ struct dxgkvmb_command_getinternaladapterinfo_return { + struct winluid host_vgpu_luid; + }; + ++struct dxgkvmb_command_queryadapterinfo { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ enum kmtqueryadapterinfotype query_type; ++ u32 private_data_size; ++ u8 private_data[1]; ++}; ++ ++struct dxgkvmb_command_queryadapterinfo_return { ++ struct ntstatus status; ++ u8 private_data[1]; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/hmgr.c b/drivers/hv/dxgkrnl/hmgr.c +new file mode 100644 +index 000000000000..526b50f46d96 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/hmgr.c +@@ -0,0 +1,563 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Handle manager implementation ++ * ++ */ ++ ++#include ++#include ++#include ++ ++#include "misc.h" ++#include "dxgkrnl.h" ++#include "hmgr.h" ++ ++#undef pr_fmt ++#define pr_fmt(fmt) "dxgk: " fmt ++ ++const struct d3dkmthandle zerohandle; ++ ++/* ++ * Handle parameters ++ */ ++#define HMGRHANDLE_INSTANCE_BITS 6 ++#define HMGRHANDLE_INDEX_BITS 24 ++#define HMGRHANDLE_UNIQUE_BITS 2 ++ ++#define HMGRHANDLE_INSTANCE_SHIFT 0 ++#define HMGRHANDLE_INDEX_SHIFT \ ++ (HMGRHANDLE_INSTANCE_BITS + HMGRHANDLE_INSTANCE_SHIFT) ++#define HMGRHANDLE_UNIQUE_SHIFT \ ++ (HMGRHANDLE_INDEX_BITS + HMGRHANDLE_INDEX_SHIFT) ++ ++#define HMGRHANDLE_INSTANCE_MASK \ ++ (((1 << HMGRHANDLE_INSTANCE_BITS) - 1) << HMGRHANDLE_INSTANCE_SHIFT) ++#define HMGRHANDLE_INDEX_MASK \ ++ (((1 << HMGRHANDLE_INDEX_BITS) - 1) << HMGRHANDLE_INDEX_SHIFT) ++#define HMGRHANDLE_UNIQUE_MASK \ ++ (((1 << HMGRHANDLE_UNIQUE_BITS) - 1) << HMGRHANDLE_UNIQUE_SHIFT) ++ ++#define HMGRHANDLE_INSTANCE_MAX ((1 << HMGRHANDLE_INSTANCE_BITS) - 1) ++#define HMGRHANDLE_INDEX_MAX ((1 << HMGRHANDLE_INDEX_BITS) - 1) ++#define HMGRHANDLE_UNIQUE_MAX ((1 << HMGRHANDLE_UNIQUE_BITS) - 1) ++ ++/* ++ * Handle entry ++ */ ++struct hmgrentry { ++ union { ++ void *object; ++ struct { ++ u32 prev_free_index; ++ u32 next_free_index; ++ }; ++ }; ++ u32 type:HMGRENTRY_TYPE_BITS + 1; ++ u32 unique:HMGRHANDLE_UNIQUE_BITS; ++ u32 instance:HMGRHANDLE_INSTANCE_BITS; ++ u32 destroyed:1; ++}; ++ ++#define HMGRTABLE_SIZE_INCREMENT 1024 ++#define HMGRTABLE_MIN_FREE_ENTRIES 128 ++#define HMGRTABLE_INVALID_INDEX (~((1 << HMGRHANDLE_INDEX_BITS) - 1)) ++#define HMGRTABLE_SIZE_MAX 0xFFFFFFF ++ ++static u32 table_size_increment = HMGRTABLE_SIZE_INCREMENT; ++ ++static u32 get_unique(struct d3dkmthandle h) ++{ ++ return (h.v & HMGRHANDLE_UNIQUE_MASK) >> HMGRHANDLE_UNIQUE_SHIFT; ++} ++ ++static u32 get_index(struct d3dkmthandle h) ++{ ++ return (h.v & HMGRHANDLE_INDEX_MASK) >> HMGRHANDLE_INDEX_SHIFT; ++} ++ ++static bool is_handle_valid(struct hmgrtable *table, struct d3dkmthandle h, ++ bool ignore_destroyed, enum hmgrentry_type t) ++{ ++ u32 index = get_index(h); ++ u32 unique = get_unique(h); ++ struct hmgrentry *entry; ++ ++ if (index >= table->table_size) { ++ DXG_ERR("Invalid index %x %d", h.v, index); ++ return false; ++ } ++ ++ entry = &table->entry_table[index]; ++ if (unique != entry->unique) { ++ DXG_ERR("Invalid unique %x %d %d %d %p", ++ h.v, unique, entry->unique, index, entry->object); ++ return false; ++ } ++ ++ if (entry->destroyed && !ignore_destroyed) { ++ DXG_ERR("Invalid destroyed value"); 
++ return false; ++ } ++ ++ if (entry->type == HMGRENTRY_TYPE_FREE) { ++ DXG_ERR("Entry is freed %x %d", h.v, index); ++ return false; ++ } ++ ++ if (t != HMGRENTRY_TYPE_FREE && t != entry->type) { ++ DXG_ERR("type mismatch %x %d %d", h.v, t, entry->type); ++ return false; ++ } ++ ++ return true; ++} ++ ++static struct d3dkmthandle build_handle(u32 index, u32 unique, u32 instance) ++{ ++ struct d3dkmthandle handle; ++ ++ handle.v = (index << HMGRHANDLE_INDEX_SHIFT) & HMGRHANDLE_INDEX_MASK; ++ handle.v |= (unique << HMGRHANDLE_UNIQUE_SHIFT) & ++ HMGRHANDLE_UNIQUE_MASK; ++ handle.v |= (instance << HMGRHANDLE_INSTANCE_SHIFT) & ++ HMGRHANDLE_INSTANCE_MASK; ++ ++ return handle; ++} ++ ++inline u32 hmgrtable_get_used_entry_count(struct hmgrtable *table) ++{ ++ DXGKRNL_ASSERT(table->table_size >= table->free_count); ++ return (table->table_size - table->free_count); ++} ++ ++bool hmgrtable_mark_destroyed(struct hmgrtable *table, struct d3dkmthandle h) ++{ ++ if (!is_handle_valid(table, h, false, HMGRENTRY_TYPE_FREE)) ++ return false; ++ ++ table->entry_table[get_index(h)].destroyed = true; ++ return true; ++} ++ ++bool hmgrtable_unmark_destroyed(struct hmgrtable *table, struct d3dkmthandle h) ++{ ++ if (!is_handle_valid(table, h, true, HMGRENTRY_TYPE_FREE)) ++ return true; ++ ++ DXGKRNL_ASSERT(table->entry_table[get_index(h)].destroyed); ++ table->entry_table[get_index(h)].destroyed = 0; ++ return true; ++} ++ ++static bool expand_table(struct hmgrtable *table, u32 NumEntries) ++{ ++ u32 new_table_size; ++ struct hmgrentry *new_entry; ++ u32 table_index; ++ u32 new_free_count; ++ u32 prev_free_index; ++ u32 tail_index = table->free_handle_list_tail; ++ ++ /* The tail should point to the last free element in the list */ ++ if (table->free_count != 0) { ++ if (tail_index >= table->table_size || ++ table->entry_table[tail_index].next_free_index != ++ HMGRTABLE_INVALID_INDEX) { ++ DXG_ERR("corruption"); ++ DXG_ERR("tail_index: %x", tail_index); ++ DXG_ERR("table size: %x", table->table_size); ++ DXG_ERR("free_count: %d", table->free_count); ++ DXG_ERR("NumEntries: %x", NumEntries); ++ return false; ++ } ++ } ++ ++ new_free_count = table_size_increment + table->free_count; ++ new_table_size = table->table_size + table_size_increment; ++ if (new_table_size < NumEntries) { ++ new_free_count += NumEntries - new_table_size; ++ new_table_size = NumEntries; ++ } ++ ++ if (new_table_size > HMGRHANDLE_INDEX_MAX) { ++ DXG_ERR("Invalid new table size"); ++ return false; ++ } ++ ++ new_entry = (struct hmgrentry *) ++ vzalloc(new_table_size * sizeof(struct hmgrentry)); ++ if (new_entry == NULL) { ++ DXG_ERR("allocation failed"); ++ return false; ++ } ++ ++ if (table->entry_table) { ++ memcpy(new_entry, table->entry_table, ++ table->table_size * sizeof(struct hmgrentry)); ++ vfree(table->entry_table); ++ } else { ++ table->free_handle_list_head = 0; ++ } ++ ++ table->entry_table = new_entry; ++ ++ /* Initialize new table entries and add to the free list */ ++ table_index = table->table_size; ++ ++ prev_free_index = table->free_handle_list_tail; ++ ++ while (table_index < new_table_size) { ++ struct hmgrentry *entry = &table->entry_table[table_index]; ++ ++ entry->prev_free_index = prev_free_index; ++ entry->next_free_index = table_index + 1; ++ entry->type = HMGRENTRY_TYPE_FREE; ++ entry->unique = 1; ++ entry->instance = 0; ++ prev_free_index = table_index; ++ ++ table_index++; ++ } ++ ++ table->entry_table[table_index - 1].next_free_index = ++ (u32) HMGRTABLE_INVALID_INDEX; ++ ++ if (table->free_count != 0) { ++ 
/* Link the current free list with the new entries */ ++ struct hmgrentry *entry; ++ ++ entry = &table->entry_table[table->free_handle_list_tail]; ++ entry->next_free_index = table->table_size; ++ } ++ table->free_handle_list_tail = new_table_size - 1; ++ if (table->free_handle_list_head == HMGRTABLE_INVALID_INDEX) ++ table->free_handle_list_head = table->table_size; ++ ++ table->table_size = new_table_size; ++ table->free_count = new_free_count; ++ ++ return true; ++} ++ ++void hmgrtable_init(struct hmgrtable *table, struct dxgprocess *process) ++{ ++ table->process = process; ++ table->entry_table = NULL; ++ table->table_size = 0; ++ table->free_handle_list_head = HMGRTABLE_INVALID_INDEX; ++ table->free_handle_list_tail = HMGRTABLE_INVALID_INDEX; ++ table->free_count = 0; ++ init_rwsem(&table->table_lock); ++} ++ ++void hmgrtable_destroy(struct hmgrtable *table) ++{ ++ if (table->entry_table) { ++ vfree(table->entry_table); ++ table->entry_table = NULL; ++ } ++} ++ ++void hmgrtable_lock(struct hmgrtable *table, enum dxglockstate state) ++{ ++ if (state == DXGLOCK_EXCL) ++ down_write(&table->table_lock); ++ else ++ down_read(&table->table_lock); ++} ++ ++void hmgrtable_unlock(struct hmgrtable *table, enum dxglockstate state) ++{ ++ if (state == DXGLOCK_EXCL) ++ up_write(&table->table_lock); ++ else ++ up_read(&table->table_lock); ++} ++ ++struct d3dkmthandle hmgrtable_alloc_handle(struct hmgrtable *table, ++ void *object, ++ enum hmgrentry_type type, ++ bool make_valid) ++{ ++ u32 index; ++ struct hmgrentry *entry; ++ u32 unique; ++ ++ DXGKRNL_ASSERT(type <= HMGRENTRY_TYPE_LIMIT); ++ DXGKRNL_ASSERT(type > HMGRENTRY_TYPE_FREE); ++ ++ if (table->free_count <= HMGRTABLE_MIN_FREE_ENTRIES) { ++ if (!expand_table(table, 0)) { ++ DXG_ERR("hmgrtable expand_table failed"); ++ return zerohandle; ++ } ++ } ++ ++ if (table->free_handle_list_head >= table->table_size) { ++ DXG_ERR("hmgrtable corrupted handle table head"); ++ return zerohandle; ++ } ++ ++ index = table->free_handle_list_head; ++ entry = &table->entry_table[index]; ++ ++ if (entry->type != HMGRENTRY_TYPE_FREE) { ++ DXG_ERR("hmgrtable expected free handle"); ++ return zerohandle; ++ } ++ ++ table->free_handle_list_head = entry->next_free_index; ++ ++ if (entry->next_free_index != table->free_handle_list_tail) { ++ if (entry->next_free_index >= table->table_size) { ++ DXG_ERR("hmgrtable invalid next free index"); ++ return zerohandle; ++ } ++ table->entry_table[entry->next_free_index].prev_free_index = ++ HMGRTABLE_INVALID_INDEX; ++ } ++ ++ unique = table->entry_table[index].unique; ++ ++ table->entry_table[index].object = object; ++ table->entry_table[index].type = type; ++ table->entry_table[index].instance = 0; ++ table->entry_table[index].destroyed = !make_valid; ++ table->free_count--; ++ DXGKRNL_ASSERT(table->free_count <= table->table_size); ++ ++ return build_handle(index, unique, table->entry_table[index].instance); ++} ++ ++int hmgrtable_assign_handle_safe(struct hmgrtable *table, ++ void *object, ++ enum hmgrentry_type type, ++ struct d3dkmthandle h) ++{ ++ int ret; ++ ++ hmgrtable_lock(table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(table, object, type, h); ++ hmgrtable_unlock(table, DXGLOCK_EXCL); ++ return ret; ++} ++ ++int hmgrtable_assign_handle(struct hmgrtable *table, void *object, ++ enum hmgrentry_type type, struct d3dkmthandle h) ++{ ++ u32 index = get_index(h); ++ u32 unique = get_unique(h); ++ struct hmgrentry *entry = NULL; ++ ++ DXG_TRACE("%x, %d %p, %p", h.v, index, object, table); ++ ++ if (index >= 
HMGRHANDLE_INDEX_MAX) { ++ DXG_ERR("handle index is too big: %x %d", h.v, index); ++ return -EINVAL; ++ } ++ ++ if (index >= table->table_size) { ++ u32 new_size = index + table_size_increment; ++ ++ if (new_size > HMGRHANDLE_INDEX_MAX) ++ new_size = HMGRHANDLE_INDEX_MAX; ++ if (!expand_table(table, new_size)) { ++ DXG_ERR("failed to expand handle table %d", ++ new_size); ++ return -ENOMEM; ++ } ++ } ++ ++ entry = &table->entry_table[index]; ++ ++ if (entry->type != HMGRENTRY_TYPE_FREE) { ++ DXG_ERR("the entry is not free: %d %x", entry->type, ++ hmgrtable_build_entry_handle(table, index).v); ++ return -EINVAL; ++ } ++ ++ if (index != table->free_handle_list_tail) { ++ if (entry->next_free_index >= table->table_size) { ++ DXG_ERR("hmgr: invalid next free index %d", ++ entry->next_free_index); ++ return -EINVAL; ++ } ++ table->entry_table[entry->next_free_index].prev_free_index = ++ entry->prev_free_index; ++ } else { ++ table->free_handle_list_tail = entry->prev_free_index; ++ } ++ ++ if (index != table->free_handle_list_head) { ++ if (entry->prev_free_index >= table->table_size) { ++ DXG_ERR("hmgr: invalid next prev index %d", ++ entry->prev_free_index); ++ return -EINVAL; ++ } ++ table->entry_table[entry->prev_free_index].next_free_index = ++ entry->next_free_index; ++ } else { ++ table->free_handle_list_head = entry->next_free_index; ++ } ++ ++ entry->prev_free_index = HMGRTABLE_INVALID_INDEX; ++ entry->next_free_index = HMGRTABLE_INVALID_INDEX; ++ entry->object = object; ++ entry->type = type; ++ entry->instance = 0; ++ entry->unique = unique; ++ entry->destroyed = false; ++ ++ table->free_count--; ++ DXGKRNL_ASSERT(table->free_count <= table->table_size); ++ return 0; ++} ++ ++struct d3dkmthandle hmgrtable_alloc_handle_safe(struct hmgrtable *table, ++ void *obj, ++ enum hmgrentry_type type, ++ bool make_valid) ++{ ++ struct d3dkmthandle h; ++ ++ hmgrtable_lock(table, DXGLOCK_EXCL); ++ h = hmgrtable_alloc_handle(table, obj, type, make_valid); ++ hmgrtable_unlock(table, DXGLOCK_EXCL); ++ return h; ++} ++ ++void hmgrtable_free_handle(struct hmgrtable *table, enum hmgrentry_type t, ++ struct d3dkmthandle h) ++{ ++ struct hmgrentry *entry; ++ u32 i = get_index(h); ++ ++ DXG_TRACE("%p %x", table, h.v); ++ ++ /* Ignore the destroyed flag when checking the handle */ ++ if (is_handle_valid(table, h, true, t)) { ++ DXGKRNL_ASSERT(table->free_count < table->table_size); ++ entry = &table->entry_table[i]; ++ entry->unique = 1; ++ entry->type = HMGRENTRY_TYPE_FREE; ++ entry->destroyed = 0; ++ if (entry->unique != HMGRHANDLE_UNIQUE_MAX) ++ entry->unique += 1; ++ else ++ entry->unique = 1; ++ ++ table->free_count++; ++ DXGKRNL_ASSERT(table->free_count <= table->table_size); ++ ++ /* ++ * Insert the index to the free list at the tail. 
++ */ ++ entry->next_free_index = HMGRTABLE_INVALID_INDEX; ++ entry->prev_free_index = table->free_handle_list_tail; ++ entry = &table->entry_table[table->free_handle_list_tail]; ++ entry->next_free_index = i; ++ table->free_handle_list_tail = i; ++ } else { ++ DXG_ERR("Invalid handle to free: %d %x", i, h.v); ++ } ++} ++ ++void hmgrtable_free_handle_safe(struct hmgrtable *table, enum hmgrentry_type t, ++ struct d3dkmthandle h) ++{ ++ hmgrtable_lock(table, DXGLOCK_EXCL); ++ hmgrtable_free_handle(table, t, h); ++ hmgrtable_unlock(table, DXGLOCK_EXCL); ++} ++ ++struct d3dkmthandle hmgrtable_build_entry_handle(struct hmgrtable *table, ++ u32 index) ++{ ++ DXGKRNL_ASSERT(index < table->table_size); ++ ++ return build_handle(index, table->entry_table[index].unique, ++ table->entry_table[index].instance); ++} ++ ++void *hmgrtable_get_object(struct hmgrtable *table, struct d3dkmthandle h) ++{ ++ if (!is_handle_valid(table, h, false, HMGRENTRY_TYPE_FREE)) ++ return NULL; ++ ++ return table->entry_table[get_index(h)].object; ++} ++ ++void *hmgrtable_get_object_by_type(struct hmgrtable *table, ++ enum hmgrentry_type type, ++ struct d3dkmthandle h) ++{ ++ if (!is_handle_valid(table, h, false, type)) { ++ DXG_ERR("Invalid handle %x", h.v); ++ return NULL; ++ } ++ return table->entry_table[get_index(h)].object; ++} ++ ++void *hmgrtable_get_entry_object(struct hmgrtable *table, u32 index) ++{ ++ DXGKRNL_ASSERT(index < table->table_size); ++ DXGKRNL_ASSERT(table->entry_table[index].type != HMGRENTRY_TYPE_FREE); ++ ++ return table->entry_table[index].object; ++} ++ ++static enum hmgrentry_type hmgrtable_get_entry_type(struct hmgrtable *table, ++ u32 index) ++{ ++ DXGKRNL_ASSERT(index < table->table_size); ++ return (enum hmgrentry_type)table->entry_table[index].type; ++} ++ ++enum hmgrentry_type hmgrtable_get_object_type(struct hmgrtable *table, ++ struct d3dkmthandle h) ++{ ++ if (!is_handle_valid(table, h, false, HMGRENTRY_TYPE_FREE)) ++ return HMGRENTRY_TYPE_FREE; ++ ++ return hmgrtable_get_entry_type(table, get_index(h)); ++} ++ ++void *hmgrtable_get_object_ignore_destroyed(struct hmgrtable *table, ++ struct d3dkmthandle h, ++ enum hmgrentry_type type) ++{ ++ if (!is_handle_valid(table, h, true, type)) ++ return NULL; ++ return table->entry_table[get_index(h)].object; ++} ++ ++bool hmgrtable_next_entry(struct hmgrtable *tbl, ++ u32 *index, ++ enum hmgrentry_type *type, ++ struct d3dkmthandle *handle, ++ void **object) ++{ ++ u32 i; ++ struct hmgrentry *entry; ++ ++ for (i = *index; i < tbl->table_size; i++) { ++ entry = &tbl->entry_table[i]; ++ if (entry->type != HMGRENTRY_TYPE_FREE) { ++ *index = i + 1; ++ *object = entry->object; ++ *handle = build_handle(i, entry->unique, ++ entry->instance); ++ *type = entry->type; ++ return true; ++ } ++ } ++ return false; ++} +diff --git a/drivers/hv/dxgkrnl/hmgr.h b/drivers/hv/dxgkrnl/hmgr.h +new file mode 100644 +index 000000000000..23eec301137f +--- /dev/null ++++ b/drivers/hv/dxgkrnl/hmgr.h +@@ -0,0 +1,112 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Handle manager definitions ++ * ++ */ ++ ++#ifndef _HMGR_H_ ++#define _HMGR_H_ ++ ++#include "misc.h" ++ ++struct hmgrentry; ++ ++/* ++ * Handle manager table. ++ * ++ * Implementation notes: ++ * A list of free handles is built on top of the array of table entries. ++ * free_handle_list_head is the index of the first entry in the list. 
++ * m_FreeHandleListTail is the index of an entry in the list, which is ++ * HMGRTABLE_MIN_FREE_ENTRIES from the head. It means that when a handle is ++ * freed, the next time the handle can be re-used is after allocating ++ * HMGRTABLE_MIN_FREE_ENTRIES number of handles. ++ * Handles are allocated from the start of the list and free handles are ++ * inserted after the tail of the list. ++ * ++ */ ++struct hmgrtable { ++ struct dxgprocess *process; ++ struct hmgrentry *entry_table; ++ u32 free_handle_list_head; ++ u32 free_handle_list_tail; ++ u32 table_size; ++ u32 free_count; ++ struct rw_semaphore table_lock; ++}; ++ ++/* ++ * Handle entry data types. ++ */ ++#define HMGRENTRY_TYPE_BITS 5 ++ ++enum hmgrentry_type { ++ HMGRENTRY_TYPE_FREE = 0, ++ HMGRENTRY_TYPE_DXGADAPTER = 1, ++ HMGRENTRY_TYPE_DXGSHAREDRESOURCE = 2, ++ HMGRENTRY_TYPE_DXGDEVICE = 3, ++ HMGRENTRY_TYPE_DXGRESOURCE = 4, ++ HMGRENTRY_TYPE_DXGALLOCATION = 5, ++ HMGRENTRY_TYPE_DXGOVERLAY = 6, ++ HMGRENTRY_TYPE_DXGCONTEXT = 7, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT = 8, ++ HMGRENTRY_TYPE_DXGKEYEDMUTEX = 9, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE = 10, ++ HMGRENTRY_TYPE_DXGDEVICESYNCOBJECT = 11, ++ HMGRENTRY_TYPE_DXGPROCESS = 12, ++ HMGRENTRY_TYPE_DXGSHAREDVMOBJECT = 13, ++ HMGRENTRY_TYPE_DXGPROTECTEDSESSION = 14, ++ HMGRENTRY_TYPE_DXGHWQUEUE = 15, ++ HMGRENTRY_TYPE_DXGREMOTEBUNDLEOBJECT = 16, ++ HMGRENTRY_TYPE_DXGCOMPOSITIONSURFACEOBJECT = 17, ++ HMGRENTRY_TYPE_DXGCOMPOSITIONSURFACEPROXY = 18, ++ HMGRENTRY_TYPE_DXGTRACKEDWORKLOAD = 19, ++ HMGRENTRY_TYPE_LIMIT = ((1 << HMGRENTRY_TYPE_BITS) - 1), ++ HMGRENTRY_TYPE_MONITOREDFENCE = HMGRENTRY_TYPE_LIMIT + 1, ++}; ++ ++void hmgrtable_init(struct hmgrtable *tbl, struct dxgprocess *process); ++void hmgrtable_destroy(struct hmgrtable *tbl); ++void hmgrtable_lock(struct hmgrtable *tbl, enum dxglockstate state); ++void hmgrtable_unlock(struct hmgrtable *tbl, enum dxglockstate state); ++struct d3dkmthandle hmgrtable_alloc_handle(struct hmgrtable *tbl, void *object, ++ enum hmgrentry_type t, bool make_valid); ++struct d3dkmthandle hmgrtable_alloc_handle_safe(struct hmgrtable *tbl, ++ void *obj, ++ enum hmgrentry_type t, ++ bool reserve); ++int hmgrtable_assign_handle(struct hmgrtable *tbl, void *obj, ++ enum hmgrentry_type, struct d3dkmthandle h); ++int hmgrtable_assign_handle_safe(struct hmgrtable *tbl, void *obj, ++ enum hmgrentry_type t, struct d3dkmthandle h); ++void hmgrtable_free_handle(struct hmgrtable *tbl, enum hmgrentry_type t, ++ struct d3dkmthandle h); ++void hmgrtable_free_handle_safe(struct hmgrtable *tbl, enum hmgrentry_type t, ++ struct d3dkmthandle h); ++struct d3dkmthandle hmgrtable_build_entry_handle(struct hmgrtable *tbl, ++ u32 index); ++enum hmgrentry_type hmgrtable_get_object_type(struct hmgrtable *tbl, ++ struct d3dkmthandle h); ++void *hmgrtable_get_object(struct hmgrtable *tbl, struct d3dkmthandle h); ++void *hmgrtable_get_object_by_type(struct hmgrtable *tbl, enum hmgrentry_type t, ++ struct d3dkmthandle h); ++void *hmgrtable_get_object_ignore_destroyed(struct hmgrtable *tbl, ++ struct d3dkmthandle h, ++ enum hmgrentry_type t); ++bool hmgrtable_mark_destroyed(struct hmgrtable *tbl, struct d3dkmthandle h); ++bool hmgrtable_unmark_destroyed(struct hmgrtable *tbl, struct d3dkmthandle h); ++void *hmgrtable_get_entry_object(struct hmgrtable *tbl, u32 index); ++bool hmgrtable_next_entry(struct hmgrtable *tbl, ++ u32 *start_index, ++ enum hmgrentry_type *type, ++ struct d3dkmthandle *handle, ++ void **object); ++ ++#endif +diff --git a/drivers/hv/dxgkrnl/ioctl.c 
b/drivers/hv/dxgkrnl/ioctl.c +index 23ecd15b0cd7..60e38d104517 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -22,3 +22,63 @@ + + #undef pr_fmt + #define pr_fmt(fmt) "dxgk: " fmt ++ ++struct ioctl_desc { ++ int (*ioctl_callback)(struct dxgprocess *p, void __user *arg); ++ u32 ioctl; ++ u32 arg_size; ++}; ++ ++static struct ioctl_desc ioctls[] = { ++ ++}; ++ ++/* ++ * IOCTL processing ++ * The driver IOCTLs return ++ * - 0 in case of success ++ * - positive values, which are Windows NTSTATUS (for example, STATUS_PENDING). ++ * Positive values are success codes. ++ * - Linux negative error codes ++ */ ++static int dxgk_ioctl(struct file *f, unsigned int p1, unsigned long p2) ++{ ++ int code = _IOC_NR(p1); ++ int status; ++ struct dxgprocess *process; ++ ++ if (code < 1 || code >= ARRAY_SIZE(ioctls)) { ++ DXG_ERR("bad ioctl %x %x %x %x", ++ code, _IOC_TYPE(p1), _IOC_SIZE(p1), _IOC_DIR(p1)); ++ return -ENOTTY; ++ } ++ if (ioctls[code].ioctl_callback == NULL) { ++ DXG_ERR("ioctl callback is NULL %x", code); ++ return -ENOTTY; ++ } ++ if (ioctls[code].ioctl != p1) { ++ DXG_ERR("ioctl mismatch. Code: %x User: %x Kernel: %x", ++ code, p1, ioctls[code].ioctl); ++ return -ENOTTY; ++ } ++ process = (struct dxgprocess *)f->private_data; ++ if (process->tgid != current->tgid) { ++ DXG_ERR("Call from a wrong process: %d %d", ++ process->tgid, current->tgid); ++ return -ENOTTY; ++ } ++ status = ioctls[code].ioctl_callback(process, (void *__user)p2); ++ return status; ++} ++ ++long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2) ++{ ++ DXG_TRACE("compat ioctl %x", p1); ++ return dxgk_ioctl(f, p1, p2); ++} ++ ++long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2) ++{ ++ DXG_TRACE("unlocked ioctl %x Code:%d", p1, _IOC_NR(p1)); ++ return dxgk_ioctl(f, p1, p2); ++} +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index d292e9a9bb7f..dc849a8ed3f2 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -27,10 +27,11 @@ extern const struct d3dkmthandle zerohandle; + * + * channel_lock (VMBus channel lock) + * fd_mutex +- * plistmutex (process list mutex) +- * table_lock (handle table lock) +- * core_lock (dxgadapter lock) +- * device_lock (dxgdevice lock) ++ * plistmutex ++ * table_lock ++ * core_lock ++ * device_lock ++ * process_adapter_mutex + * adapter_list_lock + * device_mutex (dxgglobal mutex) + */ +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 2ea04cc02a1f..c675d5827ed5 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -58,4 +58,107 @@ struct winluid { + __u32 b; + }; + ++#define D3DKMT_ADAPTERS_MAX 64 ++ ++struct d3dkmt_adapterinfo { ++ struct d3dkmthandle adapter_handle; ++ struct winluid adapter_luid; ++ __u32 num_sources; ++ __u32 present_move_regions_preferred; ++}; ++ ++struct d3dkmt_enumadapters2 { ++ __u32 num_adapters; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ struct d3dkmt_adapterinfo *adapters; ++#else ++ __u64 *adapters; ++#endif ++}; ++ ++struct d3dkmt_closeadapter { ++ struct d3dkmthandle adapter_handle; ++}; ++ ++struct d3dkmt_openadapterfromluid { ++ struct winluid adapter_luid; ++ struct d3dkmthandle adapter_handle; ++}; ++ ++struct d3dkmt_adaptertype { ++ union { ++ struct { ++ __u32 render_supported:1; ++ __u32 display_supported:1; ++ __u32 software_device:1; ++ __u32 post_device:1; ++ __u32 hybrid_discrete:1; ++ __u32 hybrid_integrated:1; ++ __u32 indirect_display_device:1; ++ __u32 paravirtualized:1; ++ 
__u32 acg_supported:1; ++ __u32 support_set_timings_from_vidpn:1; ++ __u32 detachable:1; ++ __u32 compute_only:1; ++ __u32 prototype:1; ++ __u32 reserved:19; ++ }; ++ __u32 value; ++ }; ++}; ++ ++enum kmtqueryadapterinfotype { ++ _KMTQAITYPE_UMDRIVERPRIVATE = 0, ++ _KMTQAITYPE_ADAPTERTYPE = 15, ++ _KMTQAITYPE_ADAPTERTYPE_RENDER = 57 ++}; ++ ++struct d3dkmt_queryadapterinfo { ++ struct d3dkmthandle adapter; ++ enum kmtqueryadapterinfotype type; ++#ifdef __KERNEL__ ++ void *private_data; ++#else ++ __u64 private_data; ++#endif ++ __u32 private_data_size; ++}; ++ ++union d3dkmt_enumadapters_filter { ++ struct { ++ __u64 include_compute_only:1; ++ __u64 include_display_only:1; ++ __u64 reserved:62; ++ }; ++ __u64 value; ++}; ++ ++struct d3dkmt_enumadapters3 { ++ union d3dkmt_enumadapters_filter filter; ++ __u32 adapter_count; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ struct d3dkmt_adapterinfo *adapters; ++#else ++ __u64 adapters; ++#endif ++}; ++ ++/* ++ * Dxgkrnl Graphics Port Driver ioctl definitions ++ * ++ */ ++ ++#define LX_DXOPENADAPTERFROMLUID \ ++ _IOWR(0x47, 0x01, struct d3dkmt_openadapterfromluid) ++#define LX_DXQUERYADAPTERINFO \ ++ _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) ++#define LX_DXENUMADAPTERS2 \ ++ _IOWR(0x47, 0x14, struct d3dkmt_enumadapters2) ++#define LX_DXCLOSEADAPTER \ ++ _IOWR(0x47, 0x15, struct d3dkmt_closeadapter) ++#define LX_DXENUMADAPTERS3 \ ++ _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) ++ + #endif /* _D3DKMTHK_H */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1672-drivers-hv-dxgkrnl-Enumerate-and-open-dxgadapter-objects.patch b/patch/kernel/archive/wsl2-arm64-6.6/1672-drivers-hv-dxgkrnl-Enumerate-and-open-dxgadapter-objects.patch new file mode 100644 index 000000000000..42920ec0d2cc --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1672-drivers-hv-dxgkrnl-Enumerate-and-open-dxgadapter-objects.patch @@ -0,0 +1,554 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Mon, 21 Mar 2022 19:18:50 -0700 +Subject: drivers: hv: dxgkrnl: Enumerate and open dxgadapter objects + +Implement ioctls to enumerate dxgadapter objects: + - The LX_DXENUMADAPTERS2 ioctl + - The LX_DXENUMADAPTERS3 ioctl. + +Implement ioctls to open adapter by LUID and to close adapter +handle: + - The LX_DXOPENADAPTERFROMLUID ioctl + - the LX_DXCLOSEADAPTER ioctl + +Impllement the ioctl to query dxgadapter information: + - The LX_DXQUERYADAPTERINFO ioctl + +When a dxgadapter is enumerated, it is implicitely opened and +a handle (d3dkmthandle) is created in the current process handle +table. The handle is returned to the caller and can be used +by user mode to reference the VGPU adapter in other ioctls. + +The caller is responsible to close the adapter when it is not +longer used by sending the LX_DXCLOSEADAPTER ioctl. + +A dxgprocess has a list of opened dxgadapter objects +(dxgprocess_adapter is used to represent the entry in the list). +A dxgadapter also has a list of dxgprocess_adapter objects. +This is needed for cleanup because either a process or an adapter +could be destroyed first. 
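For context, a minimal user-space sketch of the enumerate/query/close flow described above. This is not part of the patch itself; it assumes the UAPI header added by this series is reachable as <misc/d3dkmthk.h> on the include path, and it omits most error handling:

    /*
     * Illustrative only: enumerate adapters on /dev/dxg, query the adapter
     * type, then close the implicitly opened handles.
     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <misc/d3dkmthk.h>   /* assumed install path for the new UAPI header */

    int main(void)
    {
            struct d3dkmt_adapterinfo info[D3DKMT_ADAPTERS_MAX];
            struct d3dkmt_enumadapters3 enum_args;
            unsigned int i;
            int fd;

            /* Opening /dev/dxg creates the dxgprocess for this process. */
            fd = open("/dev/dxg", O_RDWR);
            if (fd < 0)
                    return 1;

            memset(&enum_args, 0, sizeof(enum_args));
            enum_args.adapter_count = D3DKMT_ADAPTERS_MAX;
            enum_args.adapters = (uintptr_t)info;   /* __u64 in the user-mode view */

            /* Every adapter returned here is implicitly opened for this process. */
            if (ioctl(fd, LX_DXENUMADAPTERS3, &enum_args) == 0) {
                    for (i = 0; i < enum_args.adapter_count; i++) {
                            struct d3dkmt_adaptertype atype;
                            struct d3dkmt_queryadapterinfo query;
                            struct d3dkmt_closeadapter close_args;

                            memset(&atype, 0, sizeof(atype));
                            memset(&query, 0, sizeof(query));
                            query.adapter = info[i].adapter_handle;
                            query.type = _KMTQAITYPE_ADAPTERTYPE;
                            query.private_data = (uintptr_t)&atype;
                            query.private_data_size = sizeof(atype);
                            if (ioctl(fd, LX_DXQUERYADAPTERINFO, &query) == 0)
                                    printf("adapter %x paravirtualized: %u\n",
                                           info[i].adapter_handle.v,
                                           atype.paravirtualized);

                            /* The caller must close every handle it no longer needs. */
                            close_args.adapter_handle = info[i].adapter_handle;
                            ioctl(fd, LX_DXCLOSEADAPTER, &close_args);
                    }
            }
            close(fd);
            return 0;
    }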
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgmodule.c | 3 + + drivers/hv/dxgkrnl/ioctl.c | 482 +++++++++- + 2 files changed, 484 insertions(+), 1 deletion(-) + +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 17c22001ca6c..fbe1c58ecb46 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -721,6 +721,9 @@ static struct dxgglobal *dxgglobal_create(void) + + init_rwsem(&dxgglobal->channel_lock); + ++#ifdef DEBUG ++ dxgk_validate_ioctls(); ++#endif + return dxgglobal; + } + +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 60e38d104517..b08ea9430093 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -29,8 +29,472 @@ struct ioctl_desc { + u32 arg_size; + }; + +-static struct ioctl_desc ioctls[] = { ++#ifdef DEBUG ++static char *errorstr(int ret) ++{ ++ return ret < 0 ? "err" : ""; ++} ++#endif ++ ++static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_openadapterfromluid args; ++ int ret; ++ struct dxgadapter *entry; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmt_openadapterfromluid *__user result = inargs; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("Faled to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED); ++ dxgglobal_acquire_process_adapter_lock(); ++ ++ list_for_each_entry(entry, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (dxgadapter_acquire_lock_shared(entry) == 0) { ++ if (*(u64 *) &entry->luid == ++ *(u64 *) &args.adapter_luid) { ++ ret = dxgprocess_open_adapter(process, entry, ++ &args.adapter_handle); ++ ++ if (ret >= 0) { ++ ret = copy_to_user( ++ &result->adapter_handle, ++ &args.adapter_handle, ++ sizeof(struct d3dkmthandle)); ++ if (ret) ++ ret = -EINVAL; ++ } ++ adapter = entry; ++ } ++ dxgadapter_release_lock_shared(entry); ++ if (adapter) ++ break; ++ } ++ } ++ ++ dxgglobal_release_process_adapter_lock(); ++ dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED); ++ ++ if (args.adapter_handle.v == 0) ++ ret = -EINVAL; ++ ++cleanup: ++ ++ if (ret < 0) ++ dxgprocess_close_adapter(process, args.adapter_handle); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkp_enum_adapters(struct dxgprocess *process, ++ union d3dkmt_enumadapters_filter filter, ++ u32 adapter_count_max, ++ struct d3dkmt_adapterinfo *__user info_out, ++ u32 * __user adapter_count_out) ++{ ++ int ret = 0; ++ struct dxgadapter *entry; ++ struct d3dkmt_adapterinfo *info = NULL; ++ struct dxgadapter **adapters = NULL; ++ int adapter_count = 0; ++ int i; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (info_out == NULL || adapter_count_max == 0) { ++ ret = copy_to_user(adapter_count_out, ++ &dxgglobal->num_adapters, sizeof(u32)); ++ if (ret) { ++ DXG_ERR("copy_to_user faled"); ++ ret = -EINVAL; ++ } ++ goto cleanup; ++ } ++ ++ if (adapter_count_max > 0xFFFF) { ++ DXG_ERR("too many adapters"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ info = vzalloc(sizeof(struct d3dkmt_adapterinfo) * adapter_count_max); ++ if (info == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ adapters = vzalloc(sizeof(struct dxgadapter *) * adapter_count_max); ++ if (adapters == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ 
dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED); ++ dxgglobal_acquire_process_adapter_lock(); + ++ list_for_each_entry(entry, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (dxgadapter_acquire_lock_shared(entry) == 0) { ++ struct d3dkmt_adapterinfo *inf = &info[adapter_count]; ++ ++ ret = dxgprocess_open_adapter(process, entry, ++ &inf->adapter_handle); ++ if (ret >= 0) { ++ inf->adapter_luid = entry->luid; ++ adapters[adapter_count] = entry; ++ DXG_TRACE("adapter: %x %x:%x", ++ inf->adapter_handle.v, ++ inf->adapter_luid.b, ++ inf->adapter_luid.a); ++ adapter_count++; ++ } ++ dxgadapter_release_lock_shared(entry); ++ } ++ if (ret < 0) ++ break; ++ } ++ ++ dxgglobal_release_process_adapter_lock(); ++ dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED); ++ ++ if (adapter_count > adapter_count_max) { ++ ret = STATUS_BUFFER_TOO_SMALL; ++ DXG_TRACE("Too many adapters"); ++ ret = copy_to_user(adapter_count_out, ++ &dxgglobal->num_adapters, sizeof(u32)); ++ if (ret) { ++ DXG_ERR("copy_to_user failed"); ++ ret = -EINVAL; ++ } ++ goto cleanup; ++ } ++ ++ ret = copy_to_user(adapter_count_out, &adapter_count, ++ sizeof(adapter_count)); ++ if (ret) { ++ DXG_ERR("failed to copy adapter_count"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_to_user(info_out, info, sizeof(info[0]) * adapter_count); ++ if (ret) { ++ DXG_ERR("failed to copy adapter info"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (ret >= 0) { ++ DXG_TRACE("found %d adapters", adapter_count); ++ goto success; ++ } ++ if (info) { ++ for (i = 0; i < adapter_count; i++) ++ dxgprocess_close_adapter(process, ++ info[i].adapter_handle); ++ } ++success: ++ if (info) ++ vfree(info); ++ if (adapters) ++ vfree(adapters); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_enumadapters2 args; ++ int ret; ++ struct dxgadapter *entry; ++ struct d3dkmt_adapterinfo *info = NULL; ++ struct dxgadapter **adapters = NULL; ++ int adapter_count = 0; ++ int i; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.adapters == NULL) { ++ DXG_TRACE("buffer is NULL"); ++ args.num_adapters = dxgglobal->num_adapters; ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy args to user"); ++ ret = -EINVAL; ++ } ++ goto cleanup; ++ } ++ if (args.num_adapters < dxgglobal->num_adapters) { ++ args.num_adapters = dxgglobal->num_adapters; ++ DXG_TRACE("buffer is too small"); ++ ret = -EOVERFLOW; ++ goto cleanup; ++ } ++ ++ if (args.num_adapters > D3DKMT_ADAPTERS_MAX) { ++ DXG_TRACE("too many adapters"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ info = vzalloc(sizeof(struct d3dkmt_adapterinfo) * args.num_adapters); ++ if (info == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ adapters = vzalloc(sizeof(struct dxgadapter *) * args.num_adapters); ++ if (adapters == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED); ++ dxgglobal_acquire_process_adapter_lock(); ++ ++ list_for_each_entry(entry, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (dxgadapter_acquire_lock_shared(entry) == 0) { ++ struct d3dkmt_adapterinfo *inf = &info[adapter_count]; ++ ++ ret = dxgprocess_open_adapter(process, entry, ++ &inf->adapter_handle); ++ if (ret >= 0) { ++ 
inf->adapter_luid = entry->luid; ++ adapters[adapter_count] = entry; ++ DXG_TRACE("adapter: %x %llx", ++ inf->adapter_handle.v, ++ *(u64 *) &inf->adapter_luid); ++ adapter_count++; ++ } ++ dxgadapter_release_lock_shared(entry); ++ } ++ if (ret < 0) ++ break; ++ } ++ ++ dxgglobal_release_process_adapter_lock(); ++ dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED); ++ ++ args.num_adapters = adapter_count; ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy args to user"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_to_user(args.adapters, info, ++ sizeof(info[0]) * args.num_adapters); ++ if (ret) { ++ DXG_ERR("failed to copy adapter info to user"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (info) { ++ for (i = 0; i < args.num_adapters; i++) { ++ dxgprocess_close_adapter(process, ++ info[i].adapter_handle); ++ } ++ } ++ } else { ++ DXG_TRACE("found %d adapters", args.num_adapters); ++ } ++ ++ if (info) ++ vfree(info); ++ if (adapters) ++ vfree(adapters); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_enum_adapters3(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_enumadapters3 args; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgkp_enum_adapters(process, args.filter, ++ args.adapter_count, ++ args.adapters, ++ &((struct d3dkmt_enumadapters3 *)inargs)-> ++ adapter_count); ++ ++cleanup: ++ ++ DXG_TRACE("ioctl: %s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_close_adapter(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmthandle args; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgprocess_close_adapter(process, args); ++ if (ret < 0) ++ DXG_ERR("failed to close adapter: %d", ret); ++ ++cleanup: ++ ++ DXG_TRACE("ioctl: %s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_query_adapter_info(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_queryadapterinfo args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.private_data_size > DXG_MAX_VM_BUS_PACKET_SIZE || ++ args.private_data_size == 0) { ++ DXG_ERR("invalid private data size"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ DXG_TRACE("Type: %d Size: %x", args.type, args.private_data_size); ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = dxgvmb_send_query_adapter_info(process, adapter, &args); ++ ++ dxgadapter_release_lock_shared(adapter); ++ ++cleanup: ++ ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static struct ioctl_desc ioctls[] = { ++/* 0x00 */ {}, ++/* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, ++/* 0x02 */ {}, ++/* 0x03 */ {}, ++/* 0x04 */ {}, ++/* 0x05 */ {}, ++/* 0x06 */ {}, ++/* 0x07 */ {}, ++/* 0x08 */ {}, ++/* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO}, ++/* 
0x0a */ {}, ++/* 0x0b */ {}, ++/* 0x0c */ {}, ++/* 0x0d */ {}, ++/* 0x0e */ {}, ++/* 0x0f */ {}, ++/* 0x10 */ {}, ++/* 0x11 */ {}, ++/* 0x12 */ {}, ++/* 0x13 */ {}, ++/* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2}, ++/* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER}, ++/* 0x16 */ {}, ++/* 0x17 */ {}, ++/* 0x18 */ {}, ++/* 0x19 */ {}, ++/* 0x1a */ {}, ++/* 0x1b */ {}, ++/* 0x1c */ {}, ++/* 0x1d */ {}, ++/* 0x1e */ {}, ++/* 0x1f */ {}, ++/* 0x20 */ {}, ++/* 0x21 */ {}, ++/* 0x22 */ {}, ++/* 0x23 */ {}, ++/* 0x24 */ {}, ++/* 0x25 */ {}, ++/* 0x26 */ {}, ++/* 0x27 */ {}, ++/* 0x28 */ {}, ++/* 0x29 */ {}, ++/* 0x2a */ {}, ++/* 0x2b */ {}, ++/* 0x2c */ {}, ++/* 0x2d */ {}, ++/* 0x2e */ {}, ++/* 0x2f */ {}, ++/* 0x30 */ {}, ++/* 0x31 */ {}, ++/* 0x32 */ {}, ++/* 0x33 */ {}, ++/* 0x34 */ {}, ++/* 0x35 */ {}, ++/* 0x36 */ {}, ++/* 0x37 */ {}, ++/* 0x38 */ {}, ++/* 0x39 */ {}, ++/* 0x3a */ {}, ++/* 0x3b */ {}, ++/* 0x3c */ {}, ++/* 0x3d */ {}, ++/* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3}, ++/* 0x3f */ {}, ++/* 0x40 */ {}, ++/* 0x41 */ {}, ++/* 0x42 */ {}, ++/* 0x43 */ {}, ++/* 0x44 */ {}, ++/* 0x45 */ {}, + }; + + /* +@@ -82,3 +546,19 @@ long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2) + DXG_TRACE("unlocked ioctl %x Code:%d", p1, _IOC_NR(p1)); + return dxgk_ioctl(f, p1, p2); + } ++ ++#ifdef DEBUG ++void dxgk_validate_ioctls(void) ++{ ++ int i; ++ ++ for (i=0; i < ARRAY_SIZE(ioctls); i++) ++ { ++ if (ioctls[i].ioctl && _IOC_NR(ioctls[i].ioctl) != i) ++ { ++ DXG_ERR("Invalid ioctl"); ++ DXGKRNL_ASSERT(0); ++ } ++ } ++} ++#endif +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1673-drivers-hv-dxgkrnl-Creation-of-dxgdevice-objects.patch b/patch/kernel/archive/wsl2-arm64-6.6/1673-drivers-hv-dxgkrnl-Creation-of-dxgdevice-objects.patch new file mode 100644 index 000000000000..28ae3c0856b3 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1673-drivers-hv-dxgkrnl-Creation-of-dxgdevice-objects.patch @@ -0,0 +1,828 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 1 Feb 2022 17:23:58 -0800 +Subject: drivers: hv: dxgkrnl: Creation of dxgdevice objects + +Implement ioctls for creation and destruction of dxgdevice +objects: + - the LX_DXCREATEDEVICE ioctl + - the LX_DXDESTROYDEVICE ioctl + +A dxgdevice object represents a container of other virtual +compute device objects (allocations, sync objects, contexts, +etc.). It belongs to a dxgadapter object. 
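For orientation only (not part of the patch series), here is a minimal userspace sketch of how a WSL2 guest client might exercise the two ioctls described above; it assumes the /dev/dxg node created by dxgkrnl, the misc/d3dkmthk.h uapi header added by this series, and an adapter handle previously obtained via LX_DXOPENADAPTERFROMLUID:

/* Hypothetical illustration only -- not part of dxgkrnl or this patch. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <misc/d3dkmthk.h>	/* uapi header introduced by this series */

static int create_and_destroy_device(int dxg_fd, struct d3dkmthandle adapter)
{
	struct d3dkmt_createdevice create = { .adapter = adapter };
	struct d3dkmt_destroydevice destroy;
	int ret;

	/* LX_DXCREATEDEVICE returns the host-created device handle in .device */
	ret = ioctl(dxg_fd, LX_DXCREATEDEVICE, &create);
	if (ret < 0)
		return ret;

	/* LX_DXDESTROYDEVICE releases the device and every object it contains */
	destroy.device = create.device;
	return ioctl(dxg_fd, LX_DXDESTROYDEVICE, &destroy);
}

In practice the dxg_fd would come from open("/dev/dxg", O_RDWR) and the adapter handle from the enumeration ioctls shown earlier; the sketch only illustrates the call flow, not error handling or the real D3DKMT runtime.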
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 187 ++++++++++ + drivers/hv/dxgkrnl/dxgkrnl.h | 58 +++ + drivers/hv/dxgkrnl/dxgprocess.c | 43 +++ + drivers/hv/dxgkrnl/dxgvmbus.c | 80 ++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 22 ++ + drivers/hv/dxgkrnl/ioctl.c | 130 ++++++- + drivers/hv/dxgkrnl/misc.h | 8 +- + include/uapi/misc/d3dkmthk.h | 82 ++++ + 8 files changed, 604 insertions(+), 6 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index fa0d6beca157..a9a341716eba 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -194,6 +194,122 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter) + up_read(&adapter->core_lock); + } + ++struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, ++ struct dxgprocess *process) ++{ ++ struct dxgdevice *device; ++ int ret; ++ ++ device = kzalloc(sizeof(struct dxgdevice), GFP_KERNEL); ++ if (device) { ++ kref_init(&device->device_kref); ++ device->adapter = adapter; ++ device->process = process; ++ kref_get(&adapter->adapter_kref); ++ init_rwsem(&device->device_lock); ++ INIT_LIST_HEAD(&device->pqueue_list_head); ++ device->object_state = DXGOBJECTSTATE_CREATED; ++ device->execution_state = _D3DKMT_DEVICEEXECUTION_ACTIVE; ++ ++ ret = dxgprocess_adapter_add_device(process, adapter, device); ++ if (ret < 0) { ++ kref_put(&device->device_kref, dxgdevice_release); ++ device = NULL; ++ } ++ } ++ return device; ++} ++ ++void dxgdevice_stop(struct dxgdevice *device) ++{ ++} ++ ++void dxgdevice_mark_destroyed(struct dxgdevice *device) ++{ ++ down_write(&device->device_lock); ++ device->object_state = DXGOBJECTSTATE_DESTROYED; ++ up_write(&device->device_lock); ++} ++ ++void dxgdevice_destroy(struct dxgdevice *device) ++{ ++ struct dxgprocess *process = device->process; ++ struct dxgadapter *adapter = device->adapter; ++ struct d3dkmthandle device_handle = {}; ++ ++ DXG_TRACE("Destroying device: %p", device); ++ ++ down_write(&device->device_lock); ++ ++ if (device->object_state != DXGOBJECTSTATE_ACTIVE) ++ goto cleanup; ++ ++ device->object_state = DXGOBJECTSTATE_DESTROYED; ++ ++ dxgdevice_stop(device); ++ ++ /* Guest handles need to be released before the host handles */ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ if (device->handle_valid) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGDEVICE, device->handle); ++ device_handle = device->handle; ++ device->handle_valid = 0; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (device_handle.v) { ++ up_write(&device->device_lock); ++ if (dxgadapter_acquire_lock_shared(adapter) == 0) { ++ dxgvmb_send_destroy_device(adapter, process, ++ device_handle); ++ dxgadapter_release_lock_shared(adapter); ++ } ++ down_write(&device->device_lock); ++ } ++ ++cleanup: ++ ++ if (device->adapter) { ++ dxgprocess_adapter_remove_device(device); ++ kref_put(&device->adapter->adapter_kref, dxgadapter_release); ++ device->adapter = NULL; ++ } ++ ++ up_write(&device->device_lock); ++ ++ kref_put(&device->device_kref, dxgdevice_release); ++ DXG_TRACE("Device destroyed"); ++} ++ ++int dxgdevice_acquire_lock_shared(struct dxgdevice *device) ++{ ++ down_read(&device->device_lock); ++ if (!dxgdevice_is_active(device)) { ++ up_read(&device->device_lock); ++ return -ENODEV; ++ } ++ return 0; ++} ++ ++void dxgdevice_release_lock_shared(struct dxgdevice *device) ++{ ++ up_read(&device->device_lock); ++} ++ 
++bool dxgdevice_is_active(struct dxgdevice *device) ++{ ++ return device->object_state == DXGOBJECTSTATE_ACTIVE; ++} ++ ++void dxgdevice_release(struct kref *refcount) ++{ ++ struct dxgdevice *device; ++ ++ device = container_of(refcount, struct dxgdevice, device_kref); ++ kfree(device); ++} ++ + struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + struct dxgadapter *adapter) + { +@@ -208,6 +324,8 @@ struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + adapter_info->adapter = adapter; + adapter_info->process = process; + adapter_info->refcount = 1; ++ mutex_init(&adapter_info->device_list_mutex); ++ INIT_LIST_HEAD(&adapter_info->device_list_head); + list_add_tail(&adapter_info->process_adapter_list_entry, + &process->process_adapter_list_head); + dxgadapter_add_process(adapter, adapter_info); +@@ -221,10 +339,34 @@ struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + + void dxgprocess_adapter_stop(struct dxgprocess_adapter *adapter_info) + { ++ struct dxgdevice *device; ++ ++ mutex_lock(&adapter_info->device_list_mutex); ++ list_for_each_entry(device, &adapter_info->device_list_head, ++ device_list_entry) { ++ dxgdevice_stop(device); ++ } ++ mutex_unlock(&adapter_info->device_list_mutex); + } + + void dxgprocess_adapter_destroy(struct dxgprocess_adapter *adapter_info) + { ++ struct dxgdevice *device; ++ ++ mutex_lock(&adapter_info->device_list_mutex); ++ while (!list_empty(&adapter_info->device_list_head)) { ++ device = list_first_entry(&adapter_info->device_list_head, ++ struct dxgdevice, device_list_entry); ++ list_del(&device->device_list_entry); ++ device->device_list_entry.next = NULL; ++ mutex_unlock(&adapter_info->device_list_mutex); ++ dxgvmb_send_flush_device(device, ++ DXGDEVICE_FLUSHSCHEDULER_DEVICE_TERMINATE); ++ dxgdevice_destroy(device); ++ mutex_lock(&adapter_info->device_list_mutex); ++ } ++ mutex_unlock(&adapter_info->device_list_mutex); ++ + dxgadapter_remove_process(adapter_info); + kref_put(&adapter_info->adapter->adapter_kref, dxgadapter_release); + list_del(&adapter_info->process_adapter_list_entry); +@@ -240,3 +382,48 @@ void dxgprocess_adapter_release(struct dxgprocess_adapter *adapter_info) + if (adapter_info->refcount == 0) + dxgprocess_adapter_destroy(adapter_info); + } ++ ++int dxgprocess_adapter_add_device(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct dxgdevice *device) ++{ ++ struct dxgprocess_adapter *entry; ++ struct dxgprocess_adapter *adapter_info = NULL; ++ int ret = 0; ++ ++ dxgglobal_acquire_process_adapter_lock(); ++ ++ list_for_each_entry(entry, &process->process_adapter_list_head, ++ process_adapter_list_entry) { ++ if (entry->adapter == adapter) { ++ adapter_info = entry; ++ break; ++ } ++ } ++ if (adapter_info == NULL) { ++ DXG_ERR("failed to find process adapter info"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ mutex_lock(&adapter_info->device_list_mutex); ++ list_add_tail(&device->device_list_entry, ++ &adapter_info->device_list_head); ++ device->adapter_info = adapter_info; ++ mutex_unlock(&adapter_info->device_list_mutex); ++ ++cleanup: ++ ++ dxgglobal_release_process_adapter_lock(); ++ return ret; ++} ++ ++void dxgprocess_adapter_remove_device(struct dxgdevice *device) ++{ ++ DXG_TRACE("Removing device: %p", device); ++ mutex_lock(&device->adapter_info->device_list_mutex); ++ if (device->device_list_entry.next) { ++ list_del(&device->device_list_entry); ++ device->device_list_entry.next = NULL; ++ } ++ 
mutex_unlock(&device->adapter_info->device_list_mutex); ++} +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index b089d126f801..45ac1f25cc5e 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -34,6 +34,7 @@ + + struct dxgprocess; + struct dxgadapter; ++struct dxgdevice; + + /* + * Driver private data. +@@ -71,6 +72,10 @@ struct dxgk_device_types { + u32 virtual_monitor_device:1; + }; + ++enum dxgdevice_flushschedulerreason { ++ DXGDEVICE_FLUSHSCHEDULER_DEVICE_TERMINATE = 4, ++}; ++ + enum dxgobjectstate { + DXGOBJECTSTATE_CREATED, + DXGOBJECTSTATE_ACTIVE, +@@ -166,6 +171,9 @@ struct dxgprocess_adapter { + struct list_head adapter_process_list_entry; + /* Entry in dxgprocess::process_adapter_list_head */ + struct list_head process_adapter_list_entry; ++ /* List of all dxgdevice objects created for the process on adapter */ ++ struct list_head device_list_head; ++ struct mutex device_list_mutex; + struct dxgadapter *adapter; + struct dxgprocess *process; + int refcount; +@@ -175,6 +183,10 @@ struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + struct dxgadapter + *adapter); + void dxgprocess_adapter_release(struct dxgprocess_adapter *adapter); ++int dxgprocess_adapter_add_device(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct dxgdevice *device); ++void dxgprocess_adapter_remove_device(struct dxgdevice *device); + void dxgprocess_adapter_stop(struct dxgprocess_adapter *adapter_info); + void dxgprocess_adapter_destroy(struct dxgprocess_adapter *adapter_info); + +@@ -222,6 +234,11 @@ struct dxgadapter *dxgprocess_get_adapter(struct dxgprocess *process, + struct d3dkmthandle handle); + struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process, + struct d3dkmthandle handle); ++struct dxgdevice *dxgprocess_device_by_handle(struct dxgprocess *process, ++ struct d3dkmthandle handle); ++struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process, ++ enum hmgrentry_type t, ++ struct d3dkmthandle h); + void dxgprocess_ht_lock_shared_down(struct dxgprocess *process); + void dxgprocess_ht_lock_shared_up(struct dxgprocess *process); + void dxgprocess_ht_lock_exclusive_down(struct dxgprocess *process); +@@ -241,6 +258,7 @@ enum dxgadapter_state { + * This object represents the grapchis adapter. + * Objects, which take reference on the adapter: + * - dxgglobal ++ * - dxgdevice + * - adapter handle (struct d3dkmthandle) + */ + struct dxgadapter { +@@ -277,6 +295,38 @@ void dxgadapter_add_process(struct dxgadapter *adapter, + struct dxgprocess_adapter *process_info); + void dxgadapter_remove_process(struct dxgprocess_adapter *process_info); + ++/* ++ * The object represent the device object. ++ * The following objects take reference on the device ++ * - device handle (struct d3dkmthandle) ++ */ ++struct dxgdevice { ++ enum dxgobjectstate object_state; ++ /* Device takes reference on the adapter */ ++ struct dxgadapter *adapter; ++ struct dxgprocess_adapter *adapter_info; ++ struct dxgprocess *process; ++ /* Entry in the DGXPROCESS_ADAPTER device list */ ++ struct list_head device_list_entry; ++ struct kref device_kref; ++ /* Protects destcruction of the device object */ ++ struct rw_semaphore device_lock; ++ /* List of paging queues. Protected by process handle table lock. 
*/ ++ struct list_head pqueue_list_head; ++ struct d3dkmthandle handle; ++ enum d3dkmt_deviceexecution_state execution_state; ++ u32 handle_valid; ++}; ++ ++struct dxgdevice *dxgdevice_create(struct dxgadapter *a, struct dxgprocess *p); ++void dxgdevice_destroy(struct dxgdevice *device); ++void dxgdevice_stop(struct dxgdevice *device); ++void dxgdevice_mark_destroyed(struct dxgdevice *device); ++int dxgdevice_acquire_lock_shared(struct dxgdevice *dev); ++void dxgdevice_release_lock_shared(struct dxgdevice *dev); ++void dxgdevice_release(struct kref *refcount); ++bool dxgdevice_is_active(struct dxgdevice *dev); ++ + long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2); + long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2); + +@@ -313,6 +363,14 @@ int dxgvmb_send_destroy_process(struct d3dkmthandle process); + int dxgvmb_send_open_adapter(struct dxgadapter *adapter); + int dxgvmb_send_close_adapter(struct dxgadapter *adapter); + int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter); ++struct d3dkmthandle dxgvmb_send_create_device(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmt_createdevice *args); ++int dxgvmb_send_destroy_device(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmthandle h); ++int dxgvmb_send_flush_device(struct dxgdevice *device, ++ enum dxgdevice_flushschedulerreason reason); + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index ab9a01e3c8c8..8373f681e822 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -241,6 +241,49 @@ struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process, + return adapter; + } + ++struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process, ++ enum hmgrentry_type t, ++ struct d3dkmthandle handle) ++{ ++ struct dxgdevice *device = NULL; ++ void *obj; ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); ++ obj = hmgrtable_get_object_by_type(&process->handle_table, t, handle); ++ if (obj) { ++ struct d3dkmthandle device_handle = {}; ++ ++ switch (t) { ++ case HMGRENTRY_TYPE_DXGDEVICE: ++ device = obj; ++ break; ++ default: ++ DXG_ERR("invalid handle type: %d", t); ++ break; ++ } ++ if (device == NULL) ++ device = hmgrtable_get_object_by_type( ++ &process->handle_table, ++ HMGRENTRY_TYPE_DXGDEVICE, ++ device_handle); ++ if (device) ++ if (kref_get_unless_zero(&device->device_kref) == 0) ++ device = NULL; ++ } ++ if (device == NULL) ++ DXG_ERR("device_by_handle failed: %d %x", t, handle.v); ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED); ++ return device; ++} ++ ++struct dxgdevice *dxgprocess_device_by_handle(struct dxgprocess *process, ++ struct d3dkmthandle handle) ++{ ++ return dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGDEVICE, ++ handle); ++} ++ + void dxgprocess_ht_lock_shared_down(struct dxgprocess *process) + { + hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 0abf45d0d3f7..73804d11ec49 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -673,6 +673,86 @@ int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter) + return ret; + } + ++struct d3dkmthandle dxgvmb_send_create_device(struct dxgadapter *adapter, ++ struct 
dxgprocess *process, ++ struct d3dkmt_createdevice *args) ++{ ++ int ret; ++ struct dxgkvmb_command_createdevice *command; ++ struct dxgkvmb_command_createdevice_return result = { }; ++ struct dxgvmbusmsg msg; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_CREATEDEVICE, ++ process->host_handle); ++ command->flags = args->flags; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) ++ result.device.v = 0; ++ free_message(&msg, process); ++cleanup: ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return result.device; ++} ++ ++int dxgvmb_send_destroy_device(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmthandle h) ++{ ++ int ret; ++ struct dxgkvmb_command_destroydevice *command; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_DESTROYDEVICE, ++ process->host_handle); ++ command->device = h; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_flush_device(struct dxgdevice *device, ++ enum dxgdevice_flushschedulerreason reason) ++{ ++ int ret; ++ struct dxgkvmb_command_flushdevice *command; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ struct dxgprocess *process = device->process; ++ ++ ret = init_message(&msg, device->adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_FLUSHDEVICE, ++ process->host_handle); ++ command->device = device->handle; ++ command->reason = reason; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index a805a396e083..4ccf45765954 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -247,4 +247,26 @@ struct dxgkvmb_command_queryadapterinfo_return { + u8 private_data[1]; + }; + ++struct dxgkvmb_command_createdevice { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_createdeviceflags flags; ++ bool cdd_device; ++ void *error_code; ++}; ++ ++struct dxgkvmb_command_createdevice_return { ++ struct d3dkmthandle device; ++}; ++ ++struct dxgkvmb_command_destroydevice { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++}; ++ ++struct dxgkvmb_command_flushdevice { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ enum dxgdevice_flushschedulerreason reason; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index b08ea9430093..405e8b92913e 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -424,10 +424,136 @@ dxgkio_query_adapter_info(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_create_device(struct dxgprocess *process, void *__user inargs) ++{ ++ struct 
d3dkmt_createdevice args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ struct d3dkmthandle host_device_handle = {}; ++ bool adapter_locked = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* The call acquires reference on the adapter */ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgdevice_create(adapter, process); ++ if (device == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) ++ goto cleanup; ++ ++ adapter_locked = true; ++ ++ host_device_handle = dxgvmb_send_create_device(adapter, process, &args); ++ if (host_device_handle.v) { ++ ret = copy_to_user(&((struct d3dkmt_createdevice *)inargs)-> ++ device, &host_device_handle, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy device handle"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, device, ++ HMGRENTRY_TYPE_DXGDEVICE, ++ host_device_handle); ++ if (ret >= 0) { ++ device->handle = host_device_handle; ++ device->handle_valid = 1; ++ device->object_state = DXGOBJECTSTATE_ACTIVE; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (host_device_handle.v) ++ dxgvmb_send_destroy_device(adapter, process, ++ host_device_handle); ++ if (device) ++ dxgdevice_destroy(device); ++ } ++ ++ if (adapter_locked) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_destroy_device(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_destroydevice args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ device = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGDEVICE, ++ args.device); ++ if (device) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGDEVICE, args.device); ++ device->handle_valid = 0; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (device == NULL) { ++ DXG_ERR("invalid device handle: %x", args.device.v); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ++ dxgdevice_destroy(device); ++ ++ if (dxgadapter_acquire_lock_shared(adapter) == 0) { ++ dxgvmb_send_destroy_device(adapter, process, args.device); ++ dxgadapter_release_lock_shared(adapter); ++ } ++ ++cleanup: ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, +-/* 0x02 */ {}, ++/* 0x02 */ {dxgkio_create_device, LX_DXCREATEDEVICE}, + /* 0x03 */ {}, + /* 0x04 */ {}, + /* 0x05 */ {}, +@@ -450,7 +576,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x16 */ {}, + /* 0x17 */ {}, + /* 0x18 */ {}, +-/* 0x19 */ {}, ++/* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE}, + /* 0x1a */ {}, + /* 0x1b */ {}, + /* 
0x1c */ {}, +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index dc849a8ed3f2..e0bd33b365b0 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -27,10 +27,10 @@ extern const struct d3dkmthandle zerohandle; + * + * channel_lock (VMBus channel lock) + * fd_mutex +- * plistmutex +- * table_lock +- * core_lock +- * device_lock ++ * plistmutex (process list mutex) ++ * table_lock (handle table lock) ++ * core_lock (dxgadapter lock) ++ * device_lock (dxgdevice lock) + * process_adapter_mutex + * adapter_list_lock + * device_mutex (dxgglobal mutex) +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index c675d5827ed5..7414f0f5ce8e 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -86,6 +86,74 @@ struct d3dkmt_openadapterfromluid { + struct d3dkmthandle adapter_handle; + }; + ++struct d3dddi_allocationlist { ++ struct d3dkmthandle allocation; ++ union { ++ struct { ++ __u32 write_operation :1; ++ __u32 do_not_retire_instance :1; ++ __u32 offer_priority :3; ++ __u32 reserved :27; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dddi_patchlocationlist { ++ __u32 allocation_index; ++ union { ++ struct { ++ __u32 slot_id:24; ++ __u32 reserved:8; ++ }; ++ __u32 value; ++ }; ++ __u32 driver_id; ++ __u32 allocation_offset; ++ __u32 patch_offset; ++ __u32 split_offset; ++}; ++ ++struct d3dkmt_createdeviceflags { ++ __u32 legacy_mode:1; ++ __u32 request_vSync:1; ++ __u32 disable_gpu_timeout:1; ++ __u32 gdi_device:1; ++ __u32 reserved:28; ++}; ++ ++struct d3dkmt_createdevice { ++ struct d3dkmthandle adapter; ++ __u32 reserved3; ++ struct d3dkmt_createdeviceflags flags; ++ struct d3dkmthandle device; ++#ifdef __KERNEL__ ++ void *command_buffer; ++#else ++ __u64 command_buffer; ++#endif ++ __u32 command_buffer_size; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ struct d3dddi_allocationlist *allocation_list; ++#else ++ __u64 allocation_list; ++#endif ++ __u32 allocation_list_size; ++ __u32 reserved1; ++#ifdef __KERNEL__ ++ struct d3dddi_patchlocationlist *patch_location_list; ++#else ++ __u64 patch_location_list; ++#endif ++ __u32 patch_location_list_size; ++ __u32 reserved2; ++}; ++ ++struct d3dkmt_destroydevice { ++ struct d3dkmthandle device; ++}; ++ + struct d3dkmt_adaptertype { + union { + struct { +@@ -125,6 +193,16 @@ struct d3dkmt_queryadapterinfo { + __u32 private_data_size; + }; + ++enum d3dkmt_deviceexecution_state { ++ _D3DKMT_DEVICEEXECUTION_ACTIVE = 1, ++ _D3DKMT_DEVICEEXECUTION_RESET = 2, ++ _D3DKMT_DEVICEEXECUTION_HUNG = 3, ++ _D3DKMT_DEVICEEXECUTION_STOPPED = 4, ++ _D3DKMT_DEVICEEXECUTION_ERROR_OUTOFMEMORY = 5, ++ _D3DKMT_DEVICEEXECUTION_ERROR_DMAFAULT = 6, ++ _D3DKMT_DEVICEEXECUTION_ERROR_DMAPAGEFAULT = 7, ++}; ++ + union d3dkmt_enumadapters_filter { + struct { + __u64 include_compute_only:1; +@@ -152,12 +230,16 @@ struct d3dkmt_enumadapters3 { + + #define LX_DXOPENADAPTERFROMLUID \ + _IOWR(0x47, 0x01, struct d3dkmt_openadapterfromluid) ++#define LX_DXCREATEDEVICE \ ++ _IOWR(0x47, 0x02, struct d3dkmt_createdevice) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXENUMADAPTERS2 \ + _IOWR(0x47, 0x14, struct d3dkmt_enumadapters2) + #define LX_DXCLOSEADAPTER \ + _IOWR(0x47, 0x15, struct d3dkmt_closeadapter) ++#define LX_DXDESTROYDEVICE \ ++ _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXENUMADAPTERS3 \ + _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) + +-- +Armbian + diff --git 
a/patch/kernel/archive/wsl2-arm64-6.6/1674-drivers-hv-dxgkrnl-Creation-of-dxgcontext-objects.patch b/patch/kernel/archive/wsl2-arm64-6.6/1674-drivers-hv-dxgkrnl-Creation-of-dxgcontext-objects.patch new file mode 100644 index 000000000000..73403cb5b4a1 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1674-drivers-hv-dxgkrnl-Creation-of-dxgcontext-objects.patch @@ -0,0 +1,668 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 1 Feb 2022 17:03:47 -0800 +Subject: drivers: hv: dxgkrnl: Creation of dxgcontext objects + +Implement ioctls for creation/destruction of dxgcontext +objects: + - the LX_DXCREATECONTEXTVIRTUAL ioctl + - the LX_DXDESTROYCONTEXT ioctl. + +A dxgcontext object represents a compute device execution thread. +Compute device DMA buffers and synchronization operations are +submitted for execution to a dxgcontext. dxgcontext objects +belong to a dxgdevice object. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 103 ++++ + drivers/hv/dxgkrnl/dxgkrnl.h | 38 +++ + drivers/hv/dxgkrnl/dxgprocess.c | 4 + + drivers/hv/dxgkrnl/dxgvmbus.c | 101 +++++- + drivers/hv/dxgkrnl/dxgvmbus.h | 18 + + drivers/hv/dxgkrnl/ioctl.c | 168 +++++++++- + drivers/hv/dxgkrnl/misc.h | 1 + + include/uapi/misc/d3dkmthk.h | 47 +++ + 8 files changed, 477 insertions(+), 3 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index a9a341716eba..cd103e092ac2 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -206,7 +206,9 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, + device->adapter = adapter; + device->process = process; + kref_get(&adapter->adapter_kref); ++ INIT_LIST_HEAD(&device->context_list_head); + init_rwsem(&device->device_lock); ++ init_rwsem(&device->context_list_lock); + INIT_LIST_HEAD(&device->pqueue_list_head); + device->object_state = DXGOBJECTSTATE_CREATED; + device->execution_state = _D3DKMT_DEVICEEXECUTION_ACTIVE; +@@ -248,6 +250,20 @@ void dxgdevice_destroy(struct dxgdevice *device) + + dxgdevice_stop(device); + ++ { ++ struct dxgcontext *context; ++ struct dxgcontext *tmp; ++ ++ DXG_TRACE("destroying contexts"); ++ dxgdevice_acquire_context_list_lock(device); ++ list_for_each_entry_safe(context, tmp, ++ &device->context_list_head, ++ context_list_entry) { ++ dxgcontext_destroy(process, context); ++ } ++ dxgdevice_release_context_list_lock(device); ++ } ++ + /* Guest handles need to be released before the host handles */ + hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); + if (device->handle_valid) { +@@ -302,6 +318,32 @@ bool dxgdevice_is_active(struct dxgdevice *device) + return device->object_state == DXGOBJECTSTATE_ACTIVE; + } + ++void dxgdevice_acquire_context_list_lock(struct dxgdevice *device) ++{ ++ down_write(&device->context_list_lock); ++} ++ ++void dxgdevice_release_context_list_lock(struct dxgdevice *device) ++{ ++ up_write(&device->context_list_lock); ++} ++ ++void dxgdevice_add_context(struct dxgdevice *device, struct dxgcontext *context) ++{ ++ down_write(&device->context_list_lock); ++ list_add_tail(&context->context_list_entry, &device->context_list_head); ++ up_write(&device->context_list_lock); ++} ++ ++void dxgdevice_remove_context(struct dxgdevice *device, ++ struct dxgcontext *context) ++{ ++ if (context->context_list_entry.next) { ++ list_del(&context->context_list_entry); ++ context->context_list_entry.next = NULL; ++ 
} ++} ++ + void dxgdevice_release(struct kref *refcount) + { + struct dxgdevice *device; +@@ -310,6 +352,67 @@ void dxgdevice_release(struct kref *refcount) + kfree(device); + } + ++struct dxgcontext *dxgcontext_create(struct dxgdevice *device) ++{ ++ struct dxgcontext *context; ++ ++ context = kzalloc(sizeof(struct dxgcontext), GFP_KERNEL); ++ if (context) { ++ kref_init(&context->context_kref); ++ context->device = device; ++ context->process = device->process; ++ context->device_handle = device->handle; ++ kref_get(&device->device_kref); ++ INIT_LIST_HEAD(&context->hwqueue_list_head); ++ init_rwsem(&context->hwqueue_list_lock); ++ dxgdevice_add_context(device, context); ++ context->object_state = DXGOBJECTSTATE_ACTIVE; ++ } ++ return context; ++} ++ ++/* ++ * Called when the device context list lock is held ++ */ ++void dxgcontext_destroy(struct dxgprocess *process, struct dxgcontext *context) ++{ ++ DXG_TRACE("Destroying context %p", context); ++ context->object_state = DXGOBJECTSTATE_DESTROYED; ++ if (context->device) { ++ if (context->handle.v) { ++ hmgrtable_free_handle_safe(&process->handle_table, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ context->handle); ++ } ++ dxgdevice_remove_context(context->device, context); ++ kref_put(&context->device->device_kref, dxgdevice_release); ++ } ++ kref_put(&context->context_kref, dxgcontext_release); ++} ++ ++void dxgcontext_destroy_safe(struct dxgprocess *process, ++ struct dxgcontext *context) ++{ ++ struct dxgdevice *device = context->device; ++ ++ dxgdevice_acquire_context_list_lock(device); ++ dxgcontext_destroy(process, context); ++ dxgdevice_release_context_list_lock(device); ++} ++ ++bool dxgcontext_is_active(struct dxgcontext *context) ++{ ++ return context->object_state == DXGOBJECTSTATE_ACTIVE; ++} ++ ++void dxgcontext_release(struct kref *refcount) ++{ ++ struct dxgcontext *context; ++ ++ context = container_of(refcount, struct dxgcontext, context_kref); ++ kfree(context); ++} ++ + struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + struct dxgadapter *adapter) + { +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 45ac1f25cc5e..a3d8d3c9f37d 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -35,6 +35,7 @@ + struct dxgprocess; + struct dxgadapter; + struct dxgdevice; ++struct dxgcontext; + + /* + * Driver private data. +@@ -298,6 +299,7 @@ void dxgadapter_remove_process(struct dxgprocess_adapter *process_info); + /* + * The object represent the device object. + * The following objects take reference on the device ++ * - dxgcontext + * - device handle (struct d3dkmthandle) + */ + struct dxgdevice { +@@ -311,6 +313,8 @@ struct dxgdevice { + struct kref device_kref; + /* Protects destcruction of the device object */ + struct rw_semaphore device_lock; ++ struct rw_semaphore context_list_lock; ++ struct list_head context_list_head; + /* List of paging queues. Protected by process handle table lock. 
*/ + struct list_head pqueue_list_head; + struct d3dkmthandle handle; +@@ -325,7 +329,33 @@ void dxgdevice_mark_destroyed(struct dxgdevice *device); + int dxgdevice_acquire_lock_shared(struct dxgdevice *dev); + void dxgdevice_release_lock_shared(struct dxgdevice *dev); + void dxgdevice_release(struct kref *refcount); ++void dxgdevice_add_context(struct dxgdevice *dev, struct dxgcontext *ctx); ++void dxgdevice_remove_context(struct dxgdevice *dev, struct dxgcontext *ctx); + bool dxgdevice_is_active(struct dxgdevice *dev); ++void dxgdevice_acquire_context_list_lock(struct dxgdevice *dev); ++void dxgdevice_release_context_list_lock(struct dxgdevice *dev); ++ ++/* ++ * The object represent the execution context of a device. ++ */ ++struct dxgcontext { ++ enum dxgobjectstate object_state; ++ struct dxgdevice *device; ++ struct dxgprocess *process; ++ /* entry in the device context list */ ++ struct list_head context_list_entry; ++ struct list_head hwqueue_list_head; ++ struct rw_semaphore hwqueue_list_lock; ++ struct kref context_kref; ++ struct d3dkmthandle handle; ++ struct d3dkmthandle device_handle; ++}; ++ ++struct dxgcontext *dxgcontext_create(struct dxgdevice *dev); ++void dxgcontext_destroy(struct dxgprocess *pr, struct dxgcontext *ctx); ++void dxgcontext_destroy_safe(struct dxgprocess *pr, struct dxgcontext *ctx); ++void dxgcontext_release(struct kref *refcount); ++bool dxgcontext_is_active(struct dxgcontext *ctx); + + long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2); + long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2); +@@ -371,6 +401,14 @@ int dxgvmb_send_destroy_device(struct dxgadapter *adapter, + struct d3dkmthandle h); + int dxgvmb_send_flush_device(struct dxgdevice *device, + enum dxgdevice_flushschedulerreason reason); ++struct d3dkmthandle ++dxgvmb_send_create_context(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmt_createcontextvirtual ++ *args); ++int dxgvmb_send_destroy_context(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmthandle h); + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index 8373f681e822..ca307beb9a9a 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -257,6 +257,10 @@ struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process, + case HMGRENTRY_TYPE_DXGDEVICE: + device = obj; + break; ++ case HMGRENTRY_TYPE_DXGCONTEXT: ++ device_handle = ++ ((struct dxgcontext *)obj)->device_handle; ++ break; + default: + DXG_ERR("invalid handle type: %d", t); + break; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 73804d11ec49..e66aac7c13cb 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -731,7 +731,7 @@ int dxgvmb_send_flush_device(struct dxgdevice *device, + enum dxgdevice_flushschedulerreason reason) + { + int ret; +- struct dxgkvmb_command_flushdevice *command; ++ struct dxgkvmb_command_flushdevice *command = NULL; + struct dxgvmbusmsg msg = {.hdr = NULL}; + struct dxgprocess *process = device->process; + +@@ -745,6 +745,105 @@ int dxgvmb_send_flush_device(struct dxgdevice *device, + command->device = device->handle; + command->reason = reason; + ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) 
++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++struct d3dkmthandle ++dxgvmb_send_create_context(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmt_createcontextvirtual *args) ++{ ++ struct dxgkvmb_command_createcontextvirtual *command = NULL; ++ u32 cmd_size; ++ int ret; ++ struct d3dkmthandle context = {}; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ if (args->priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("PrivateDriverDataSize is invalid"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ cmd_size = sizeof(struct dxgkvmb_command_createcontextvirtual) + ++ args->priv_drv_data_size - 1; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CREATECONTEXTVIRTUAL, ++ process->host_handle); ++ command->device = args->device; ++ command->node_ordinal = args->node_ordinal; ++ command->engine_affinity = args->engine_affinity; ++ command->flags = args->flags; ++ command->client_hint = args->client_hint; ++ command->priv_drv_data_size = args->priv_drv_data_size; ++ if (args->priv_drv_data_size) { ++ ret = copy_from_user(command->priv_drv_data, ++ args->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("Faled to copy private data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ /* Input command is returned back as output */ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ command, cmd_size); ++ if (ret < 0) { ++ goto cleanup; ++ } else { ++ context = command->context; ++ if (args->priv_drv_data_size) { ++ ret = copy_to_user(args->priv_drv_data, ++ command->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret) { ++ dev_err(DXGDEV, ++ "Faled to copy private data to user"); ++ ret = -EINVAL; ++ dxgvmb_send_destroy_context(adapter, process, ++ context); ++ context.v = 0; ++ } ++ } ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return context; ++} ++ ++int dxgvmb_send_destroy_context(struct dxgadapter *adapter, ++ struct dxgprocess *process, ++ struct d3dkmthandle h) ++{ ++ int ret; ++ struct dxgkvmb_command_destroycontext *command; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_DESTROYCONTEXT, ++ process->host_handle); ++ command->context = h; ++ + ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); + cleanup: + free_message(&msg, process); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 4ccf45765954..ebcb7b0f62c1 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -269,4 +269,22 @@ struct dxgkvmb_command_flushdevice { + enum dxgdevice_flushschedulerreason reason; + }; + ++struct dxgkvmb_command_createcontextvirtual { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle context; ++ struct d3dkmthandle device; ++ u32 node_ordinal; ++ u32 engine_affinity; ++ struct d3dddi_createcontextflags flags; ++ enum d3dkmt_clienthint client_hint; ++ u32 priv_drv_data_size; ++ u8 priv_drv_data[1]; ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_destroycontext { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle context; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 
405e8b92913e..5d10ebd2ce6a 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -550,13 +550,177 @@ dxgkio_destroy_device(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_create_context_virtual(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_createcontextvirtual args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgcontext *context = NULL; ++ struct d3dkmthandle host_context_handle = {}; ++ bool device_lock_acquired = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. ++ */ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) ++ goto cleanup; ++ ++ device_lock_acquired = true; ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ context = dxgcontext_create(device); ++ if (context == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ host_context_handle = dxgvmb_send_create_context(adapter, ++ process, &args); ++ if (host_context_handle.v) { ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, context, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ host_context_handle); ++ if (ret >= 0) ++ context->handle = host_context_handle; ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ if (ret < 0) ++ goto cleanup; ++ ret = copy_to_user(&((struct d3dkmt_createcontextvirtual *) ++ inargs)->context, &host_context_handle, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy context handle"); ++ ret = -EINVAL; ++ } ++ } else { ++ DXG_ERR("invalid host handle"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (host_context_handle.v) { ++ dxgvmb_send_destroy_context(adapter, process, ++ host_context_handle); ++ } ++ if (context) ++ dxgcontext_destroy_safe(process, context); ++ } ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) { ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_destroycontext args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgcontext *context = NULL; ++ struct dxgdevice *device = NULL; ++ struct d3dkmthandle device_handle = {}; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ context = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (context) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGCONTEXT, args.context); ++ context->handle.v = 0; ++ device_handle = context->device_handle; ++ context->object_state = DXGOBJECTSTATE_DESTROYED; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (context == NULL) { ++ 
DXG_ERR("invalid context handle: %x", args.context.v); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. ++ */ ++ device = dxgprocess_device_by_handle(process, device_handle); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_destroy_context(adapter, process, args.context); ++ ++ dxgcontext_destroy_safe(process, context); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, + /* 0x02 */ {dxgkio_create_device, LX_DXCREATEDEVICE}, + /* 0x03 */ {}, +-/* 0x04 */ {}, +-/* 0x05 */ {}, ++/* 0x04 */ {dxgkio_create_context_virtual, LX_DXCREATECONTEXTVIRTUAL}, ++/* 0x05 */ {dxgkio_destroy_context, LX_DXDESTROYCONTEXT}, + /* 0x06 */ {}, + /* 0x07 */ {}, + /* 0x08 */ {}, +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index e0bd33b365b0..3a9637f0b5e2 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -29,6 +29,7 @@ extern const struct d3dkmthandle zerohandle; + * fd_mutex + * plistmutex (process list mutex) + * table_lock (handle table lock) ++ * context_list_lock + * core_lock (dxgadapter lock) + * device_lock (dxgdevice lock) + * process_adapter_mutex +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 7414f0f5ce8e..4ba0070b061f 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -154,6 +154,49 @@ struct d3dkmt_destroydevice { + struct d3dkmthandle device; + }; + ++enum d3dkmt_clienthint { ++ _D3DKMT_CLIENTHNT_UNKNOWN = 0, ++ _D3DKMT_CLIENTHINT_OPENGL = 1, ++ _D3DKMT_CLIENTHINT_CDD = 2, ++ _D3DKMT_CLIENTHINT_DX7 = 7, ++ _D3DKMT_CLIENTHINT_DX8 = 8, ++ _D3DKMT_CLIENTHINT_DX9 = 9, ++ _D3DKMT_CLIENTHINT_DX10 = 10, ++}; ++ ++struct d3dddi_createcontextflags { ++ union { ++ struct { ++ __u32 null_rendering:1; ++ __u32 initial_data:1; ++ __u32 disable_gpu_timeout:1; ++ __u32 synchronization_only:1; ++ __u32 hw_queue_supported:1; ++ __u32 reserved:27; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_destroycontext { ++ struct d3dkmthandle context; ++}; ++ ++struct d3dkmt_createcontextvirtual { ++ struct d3dkmthandle device; ++ __u32 node_ordinal; ++ __u32 engine_affinity; ++ struct d3dddi_createcontextflags flags; ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ __u32 priv_drv_data_size; ++ enum d3dkmt_clienthint client_hint; ++ struct d3dkmthandle context; ++}; ++ + struct d3dkmt_adaptertype { + union { + struct { +@@ -232,6 +275,10 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x01, struct d3dkmt_openadapterfromluid) + #define LX_DXCREATEDEVICE \ + _IOWR(0x47, 0x02, struct d3dkmt_createdevice) ++#define LX_DXCREATECONTEXTVIRTUAL \ ++ _IOWR(0x47, 0x04, struct d3dkmt_createcontextvirtual) ++#define LX_DXDESTROYCONTEXT \ ++ _IOWR(0x47, 0x05, struct d3dkmt_destroycontext) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXENUMADAPTERS2 \ +-- +Armbian + diff --git 
a/patch/kernel/archive/wsl2-arm64-6.6/1675-drivers-hv-dxgkrnl-Creation-of-compute-device-allocations-and-resources.patch b/patch/kernel/archive/wsl2-arm64-6.6/1675-drivers-hv-dxgkrnl-Creation-of-compute-device-allocations-and-resources.patch new file mode 100644 index 000000000000..d4323904b8b4 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1675-drivers-hv-dxgkrnl-Creation-of-compute-device-allocations-and-resources.patch @@ -0,0 +1,2263 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 1 Feb 2022 15:37:52 -0800 +Subject: drivers: hv: dxgkrnl: Creation of compute device allocations and + resources + +Implemented ioctls to create and destroy virtual compute device +allocations (dxgallocation) and resources (dxgresource): + - the LX_DXCREATEALLOCATION ioctl, + - the LX_DXDESTROYALLOCATION2 ioctl. + +Compute device allocations (dxgallocation objects) represent memory +allocations, which can be accessed by the device. Allocations can +be created around existing system memory (provided by an application) +or memory allocated by dxgkrnl on the host. + +Compute device resources (dxgresource objects) represent containers of +compute device allocations. Allocations can be dynamically added to or +removed from a resource. + +Each allocation/resource has associated driver private data, which +is provided during creation. + +Each created resource or allocation has a handle (d3dkmthandle), +which is used to reference the corresponding object in other ioctls. + +A dxgallocation can be resident (meaning that it is accessible by +the compute device) or evicted. When an allocation is evicted, +its content is stored in the backing store in system memory. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 282 ++++ + drivers/hv/dxgkrnl/dxgkrnl.h | 113 ++ + drivers/hv/dxgkrnl/dxgmodule.c | 1 + + drivers/hv/dxgkrnl/dxgvmbus.c | 649 ++++++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 123 ++ + drivers/hv/dxgkrnl/ioctl.c | 631 ++++++++- + drivers/hv/dxgkrnl/misc.h | 3 + + include/uapi/misc/d3dkmthk.h | 204 +++ + 8 files changed, 2004 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index cd103e092ac2..402caa81a5db 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -207,8 +207,11 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, + device->process = process; + kref_get(&adapter->adapter_kref); + INIT_LIST_HEAD(&device->context_list_head); ++ INIT_LIST_HEAD(&device->alloc_list_head); ++ INIT_LIST_HEAD(&device->resource_list_head); + init_rwsem(&device->device_lock); + init_rwsem(&device->context_list_lock); ++ init_rwsem(&device->alloc_list_lock); + INIT_LIST_HEAD(&device->pqueue_list_head); + device->object_state = DXGOBJECTSTATE_CREATED; + device->execution_state = _D3DKMT_DEVICEEXECUTION_ACTIVE; +@@ -224,6 +227,14 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, + + void dxgdevice_stop(struct dxgdevice *device) + { ++ struct dxgallocation *alloc; ++ ++ DXG_TRACE("Destroying device: %p", device); ++ dxgdevice_acquire_alloc_list_lock(device); ++ list_for_each_entry(alloc, &device->alloc_list_head, alloc_list_entry) { ++ dxgallocation_stop(alloc); ++ } ++ dxgdevice_release_alloc_list_lock(device); + } + + void dxgdevice_mark_destroyed(struct dxgdevice *device) +@@ -250,6 +261,33 @@ void dxgdevice_destroy(struct dxgdevice *device) + + 
dxgdevice_stop(device); + ++ dxgdevice_acquire_alloc_list_lock(device); ++ ++ { ++ struct dxgallocation *alloc; ++ struct dxgallocation *tmp; ++ ++ DXG_TRACE("destroying allocations"); ++ list_for_each_entry_safe(alloc, tmp, &device->alloc_list_head, ++ alloc_list_entry) { ++ dxgallocation_destroy(alloc); ++ } ++ } ++ ++ { ++ struct dxgresource *resource; ++ struct dxgresource *tmp; ++ ++ DXG_TRACE("destroying resources"); ++ list_for_each_entry_safe(resource, tmp, ++ &device->resource_list_head, ++ resource_list_entry) { ++ dxgresource_destroy(resource); ++ } ++ } ++ ++ dxgdevice_release_alloc_list_lock(device); ++ + { + struct dxgcontext *context; + struct dxgcontext *tmp; +@@ -328,6 +366,26 @@ void dxgdevice_release_context_list_lock(struct dxgdevice *device) + up_write(&device->context_list_lock); + } + ++void dxgdevice_acquire_alloc_list_lock(struct dxgdevice *device) ++{ ++ down_write(&device->alloc_list_lock); ++} ++ ++void dxgdevice_release_alloc_list_lock(struct dxgdevice *device) ++{ ++ up_write(&device->alloc_list_lock); ++} ++ ++void dxgdevice_acquire_alloc_list_lock_shared(struct dxgdevice *device) ++{ ++ down_read(&device->alloc_list_lock); ++} ++ ++void dxgdevice_release_alloc_list_lock_shared(struct dxgdevice *device) ++{ ++ up_read(&device->alloc_list_lock); ++} ++ + void dxgdevice_add_context(struct dxgdevice *device, struct dxgcontext *context) + { + down_write(&device->context_list_lock); +@@ -344,6 +402,161 @@ void dxgdevice_remove_context(struct dxgdevice *device, + } + } + ++void dxgdevice_add_alloc(struct dxgdevice *device, struct dxgallocation *alloc) ++{ ++ dxgdevice_acquire_alloc_list_lock(device); ++ list_add_tail(&alloc->alloc_list_entry, &device->alloc_list_head); ++ kref_get(&device->device_kref); ++ alloc->owner.device = device; ++ dxgdevice_release_alloc_list_lock(device); ++} ++ ++void dxgdevice_remove_alloc(struct dxgdevice *device, ++ struct dxgallocation *alloc) ++{ ++ if (alloc->alloc_list_entry.next) { ++ list_del(&alloc->alloc_list_entry); ++ alloc->alloc_list_entry.next = NULL; ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++} ++ ++void dxgdevice_remove_alloc_safe(struct dxgdevice *device, ++ struct dxgallocation *alloc) ++{ ++ dxgdevice_acquire_alloc_list_lock(device); ++ dxgdevice_remove_alloc(device, alloc); ++ dxgdevice_release_alloc_list_lock(device); ++} ++ ++void dxgdevice_add_resource(struct dxgdevice *device, struct dxgresource *res) ++{ ++ dxgdevice_acquire_alloc_list_lock(device); ++ list_add_tail(&res->resource_list_entry, &device->resource_list_head); ++ kref_get(&device->device_kref); ++ dxgdevice_release_alloc_list_lock(device); ++} ++ ++void dxgdevice_remove_resource(struct dxgdevice *device, ++ struct dxgresource *res) ++{ ++ if (res->resource_list_entry.next) { ++ list_del(&res->resource_list_entry); ++ res->resource_list_entry.next = NULL; ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++} ++ ++struct dxgresource *dxgresource_create(struct dxgdevice *device) ++{ ++ struct dxgresource *resource; ++ ++ resource = kzalloc(sizeof(struct dxgresource), GFP_KERNEL); ++ if (resource) { ++ kref_init(&resource->resource_kref); ++ resource->device = device; ++ resource->process = device->process; ++ resource->object_state = DXGOBJECTSTATE_ACTIVE; ++ mutex_init(&resource->resource_mutex); ++ INIT_LIST_HEAD(&resource->alloc_list_head); ++ dxgdevice_add_resource(device, resource); ++ } ++ return resource; ++} ++ ++void dxgresource_free_handle(struct dxgresource *resource) ++{ ++ struct dxgallocation *alloc; ++ struct 
dxgprocess *process; ++ ++ if (resource->handle_valid) { ++ process = resource->device->process; ++ hmgrtable_free_handle_safe(&process->handle_table, ++ HMGRENTRY_TYPE_DXGRESOURCE, ++ resource->handle); ++ resource->handle_valid = 0; ++ } ++ list_for_each_entry(alloc, &resource->alloc_list_head, ++ alloc_list_entry) { ++ dxgallocation_free_handle(alloc); ++ } ++} ++ ++void dxgresource_destroy(struct dxgresource *resource) ++{ ++ /* device->alloc_list_lock is held */ ++ struct dxgallocation *alloc; ++ struct dxgallocation *tmp; ++ struct d3dkmt_destroyallocation2 args = { }; ++ int destroyed = test_and_set_bit(0, &resource->flags); ++ struct dxgdevice *device = resource->device; ++ ++ if (!destroyed) { ++ dxgresource_free_handle(resource); ++ if (resource->handle.v) { ++ args.device = device->handle; ++ args.resource = resource->handle; ++ dxgvmb_send_destroy_allocation(device->process, ++ device, &args, NULL); ++ resource->handle.v = 0; ++ } ++ list_for_each_entry_safe(alloc, tmp, &resource->alloc_list_head, ++ alloc_list_entry) { ++ dxgallocation_destroy(alloc); ++ } ++ dxgdevice_remove_resource(device, resource); ++ } ++ kref_put(&resource->resource_kref, dxgresource_release); ++} ++ ++void dxgresource_release(struct kref *refcount) ++{ ++ struct dxgresource *resource; ++ ++ resource = container_of(refcount, struct dxgresource, resource_kref); ++ kfree(resource); ++} ++ ++bool dxgresource_is_active(struct dxgresource *resource) ++{ ++ return resource->object_state == DXGOBJECTSTATE_ACTIVE; ++} ++ ++int dxgresource_add_alloc(struct dxgresource *resource, ++ struct dxgallocation *alloc) ++{ ++ int ret = -ENODEV; ++ struct dxgdevice *device = resource->device; ++ ++ dxgdevice_acquire_alloc_list_lock(device); ++ if (dxgresource_is_active(resource)) { ++ list_add_tail(&alloc->alloc_list_entry, ++ &resource->alloc_list_head); ++ alloc->owner.resource = resource; ++ ret = 0; ++ } ++ alloc->resource_owner = 1; ++ dxgdevice_release_alloc_list_lock(device); ++ return ret; ++} ++ ++void dxgresource_remove_alloc(struct dxgresource *resource, ++ struct dxgallocation *alloc) ++{ ++ if (alloc->alloc_list_entry.next) { ++ list_del(&alloc->alloc_list_entry); ++ alloc->alloc_list_entry.next = NULL; ++ } ++} ++ ++void dxgresource_remove_alloc_safe(struct dxgresource *resource, ++ struct dxgallocation *alloc) ++{ ++ dxgdevice_acquire_alloc_list_lock(resource->device); ++ dxgresource_remove_alloc(resource, alloc); ++ dxgdevice_release_alloc_list_lock(resource->device); ++} ++ + void dxgdevice_release(struct kref *refcount) + { + struct dxgdevice *device; +@@ -413,6 +626,75 @@ void dxgcontext_release(struct kref *refcount) + kfree(context); + } + ++struct dxgallocation *dxgallocation_create(struct dxgprocess *process) ++{ ++ struct dxgallocation *alloc; ++ ++ alloc = kzalloc(sizeof(struct dxgallocation), GFP_KERNEL); ++ if (alloc) ++ alloc->process = process; ++ return alloc; ++} ++ ++void dxgallocation_stop(struct dxgallocation *alloc) ++{ ++ if (alloc->pages) { ++ release_pages(alloc->pages, alloc->num_pages); ++ vfree(alloc->pages); ++ alloc->pages = NULL; ++ } ++} ++ ++void dxgallocation_free_handle(struct dxgallocation *alloc) ++{ ++ dxgprocess_ht_lock_exclusive_down(alloc->process); ++ if (alloc->handle_valid) { ++ hmgrtable_free_handle(&alloc->process->handle_table, ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ alloc->alloc_handle); ++ alloc->handle_valid = 0; ++ } ++ dxgprocess_ht_lock_exclusive_up(alloc->process); ++} ++ ++void dxgallocation_destroy(struct dxgallocation *alloc) ++{ ++ struct dxgprocess 
*process = alloc->process;
++ struct d3dkmt_destroyallocation2 args = { };
++
++ dxgallocation_stop(alloc);
++ if (alloc->resource_owner)
++ dxgresource_remove_alloc(alloc->owner.resource, alloc);
++ else if (alloc->owner.device)
++ dxgdevice_remove_alloc(alloc->owner.device, alloc);
++ dxgallocation_free_handle(alloc);
++ if (alloc->alloc_handle.v && !alloc->resource_owner) {
++ args.device = alloc->owner.device->handle;
++ args.alloc_count = 1;
++ dxgvmb_send_destroy_allocation(process,
++ alloc->owner.device,
++ &args, &alloc->alloc_handle);
++ }
++#ifdef _MAIN_KERNEL_
++ if (alloc->gpadl.gpadl_handle) {
++ DXG_TRACE("Teardown gpadl %d",
++ alloc->gpadl.gpadl_handle);
++ vmbus_teardown_gpadl(dxgglobal_get_vmbus(), &alloc->gpadl);
++ alloc->gpadl.gpadl_handle = 0;
++ }
++#else
++ if (alloc->gpadl) {
++ DXG_TRACE("Teardown gpadl %d",
++ alloc->gpadl);
++ vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl);
++ alloc->gpadl = 0;
++ }
++#endif
++ if (alloc->priv_drv_data)
++ vfree(alloc->priv_drv_data);
++ kfree(alloc);
++}
++
+ struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process,
+ struct dxgadapter *adapter)
+ {
+diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
+index a3d8d3c9f37d..fa053fb6ac9c 100644
+--- a/drivers/hv/dxgkrnl/dxgkrnl.h
++++ b/drivers/hv/dxgkrnl/dxgkrnl.h
+@@ -36,6 +36,8 @@ struct dxgprocess;
+ struct dxgadapter;
+ struct dxgdevice;
+ struct dxgcontext;
++struct dxgallocation;
++struct dxgresource;
+
+ /*
+ * Driver private data.
+@@ -269,6 +271,8 @@ struct dxgadapter {
+ struct list_head adapter_list_entry;
+ /* The list of dxgprocess_adapter entries */
+ struct list_head adapter_process_list_head;
++ /* This lock protects shared resource and syncobject lists */
++ struct rw_semaphore shared_resource_list_lock;
+ struct pci_dev *pci_dev;
+ struct hv_device *hv_dev;
+ struct dxgvmbuschannel channel;
+@@ -315,6 +319,10 @@ struct dxgdevice {
+ struct rw_semaphore device_lock;
+ struct rw_semaphore context_list_lock;
+ struct list_head context_list_head;
++ /* List of device allocations */
++ struct rw_semaphore alloc_list_lock;
++ struct list_head alloc_list_head;
++ struct list_head resource_list_head;
+ /* List of paging queues. Protected by process handle table lock.
*/ + struct list_head pqueue_list_head; + struct d3dkmthandle handle; +@@ -331,9 +339,19 @@ void dxgdevice_release_lock_shared(struct dxgdevice *dev); + void dxgdevice_release(struct kref *refcount); + void dxgdevice_add_context(struct dxgdevice *dev, struct dxgcontext *ctx); + void dxgdevice_remove_context(struct dxgdevice *dev, struct dxgcontext *ctx); ++void dxgdevice_add_alloc(struct dxgdevice *dev, struct dxgallocation *a); ++void dxgdevice_remove_alloc(struct dxgdevice *dev, struct dxgallocation *a); ++void dxgdevice_remove_alloc_safe(struct dxgdevice *dev, ++ struct dxgallocation *a); ++void dxgdevice_add_resource(struct dxgdevice *dev, struct dxgresource *res); ++void dxgdevice_remove_resource(struct dxgdevice *dev, struct dxgresource *res); + bool dxgdevice_is_active(struct dxgdevice *dev); + void dxgdevice_acquire_context_list_lock(struct dxgdevice *dev); + void dxgdevice_release_context_list_lock(struct dxgdevice *dev); ++void dxgdevice_acquire_alloc_list_lock(struct dxgdevice *dev); ++void dxgdevice_release_alloc_list_lock(struct dxgdevice *dev); ++void dxgdevice_acquire_alloc_list_lock_shared(struct dxgdevice *dev); ++void dxgdevice_release_alloc_list_lock_shared(struct dxgdevice *dev); + + /* + * The object represent the execution context of a device. +@@ -357,6 +375,83 @@ void dxgcontext_destroy_safe(struct dxgprocess *pr, struct dxgcontext *ctx); + void dxgcontext_release(struct kref *refcount); + bool dxgcontext_is_active(struct dxgcontext *ctx); + ++struct dxgresource { ++ struct kref resource_kref; ++ enum dxgobjectstate object_state; ++ struct d3dkmthandle handle; ++ struct list_head alloc_list_head; ++ struct list_head resource_list_entry; ++ struct list_head shared_resource_list_entry; ++ struct dxgdevice *device; ++ struct dxgprocess *process; ++ /* Protects adding allocations to resource and resource destruction */ ++ struct mutex resource_mutex; ++ u64 private_runtime_handle; ++ union { ++ struct { ++ u32 destroyed:1; /* Must be the first */ ++ u32 handle_valid:1; ++ u32 reserved:30; ++ }; ++ long flags; ++ }; ++}; ++ ++struct dxgresource *dxgresource_create(struct dxgdevice *dev); ++void dxgresource_destroy(struct dxgresource *res); ++void dxgresource_free_handle(struct dxgresource *res); ++void dxgresource_release(struct kref *refcount); ++int dxgresource_add_alloc(struct dxgresource *res, ++ struct dxgallocation *a); ++void dxgresource_remove_alloc(struct dxgresource *res, struct dxgallocation *a); ++void dxgresource_remove_alloc_safe(struct dxgresource *res, ++ struct dxgallocation *a); ++bool dxgresource_is_active(struct dxgresource *res); ++ ++struct privdata { ++ u32 data_size; ++ u8 data[1]; ++}; ++ ++struct dxgallocation { ++ /* Entry in the device list or resource list (when resource exists) */ ++ struct list_head alloc_list_entry; ++ /* Allocation owner */ ++ union { ++ struct dxgdevice *device; ++ struct dxgresource *resource; ++ } owner; ++ struct dxgprocess *process; ++ /* Pointer to private driver data desc. Used for shared resources */ ++ struct privdata *priv_drv_data; ++ struct d3dkmthandle alloc_handle; ++ /* Set to 1 when allocation belongs to resource. 
*/
++ u32 resource_owner:1;
++ /* Set to 1 when the allocation is mapped as cached */
++ u32 cached:1;
++ u32 handle_valid:1;
++ /* GPADL address list for existing sysmem allocations */
++#ifdef _MAIN_KERNEL_
++ struct vmbus_gpadl gpadl;
++#else
++ u32 gpadl;
++#endif
++ /* Number of pages in the 'pages' array */
++ u32 num_pages;
++ /*
++ * CPU address from the existing sysmem allocation, or
++ * mapped to the CPU visible backing store in the IO space
++ */
++ void *cpu_address;
++ /* Describes pages for the existing sysmem allocation */
++ struct page **pages;
++};
++
++struct dxgallocation *dxgallocation_create(struct dxgprocess *process);
++void dxgallocation_stop(struct dxgallocation *a);
++void dxgallocation_destroy(struct dxgallocation *a);
++void dxgallocation_free_handle(struct dxgallocation *a);
++
+ long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2);
+ long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2);
+
+@@ -409,9 +504,27 @@ dxgvmb_send_create_context(struct dxgadapter *adapter,
+ int dxgvmb_send_destroy_context(struct dxgadapter *adapter,
+ struct dxgprocess *process,
+ struct d3dkmthandle h);
++int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev,
++ struct d3dkmt_createallocation *args,
++ struct d3dkmt_createallocation *__user inargs,
++ struct dxgresource *res,
++ struct dxgallocation **allocs,
++ struct d3dddi_allocationinfo2 *alloc_info,
++ struct d3dkmt_createstandardallocation *stda);
++int dxgvmb_send_destroy_allocation(struct dxgprocess *pr, struct dxgdevice *dev,
++ struct d3dkmt_destroyallocation2 *args,
++ struct d3dkmthandle *alloc_handles);
+ int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_queryadapterinfo *args);
++int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device,
++ enum d3dkmdt_standardallocationtype t,
++ struct d3dkmdt_gdisurfacedata *data,
++ u32 physical_adapter_index,
++ u32 *alloc_priv_driver_size,
++ void *priv_alloc_data,
++ u32 *res_priv_data_size,
++ void *priv_res_data);
+ int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel,
+ void *command,
+ u32 cmd_size);
+diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
+index fbe1c58ecb46..053ce6f3e083 100644
+--- a/drivers/hv/dxgkrnl/dxgmodule.c
++++ b/drivers/hv/dxgkrnl/dxgmodule.c
+@@ -162,6 +162,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid,
+ init_rwsem(&adapter->core_lock);
+
+ INIT_LIST_HEAD(&adapter->adapter_process_list_head);
++ init_rwsem(&adapter->shared_resource_list_lock);
+ adapter->pci_dev = dev;
+ guid_to_luid(guid, &adapter->luid);
+
+diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
+index e66aac7c13cb..14b51a3c6afc 100644
+--- a/drivers/hv/dxgkrnl/dxgvmbus.c
++++ b/drivers/hv/dxgkrnl/dxgvmbus.c
+@@ -111,6 +111,41 @@ static int init_message(struct dxgvmbusmsg *msg, struct dxgadapter *adapter,
+ return 0;
+ }
+
++static int init_message_res(struct dxgvmbusmsgres *msg,
++ struct dxgadapter *adapter,
++ struct dxgprocess *process,
++ u32 size,
++ u32 result_size)
++{
++ struct dxgglobal *dxgglobal = dxggbl();
++ bool use_ext_header = dxgglobal->vmbus_ver >=
++ DXGK_VMBUS_INTERFACE_VERSION;
++
++ if (use_ext_header)
++ size += sizeof(struct dxgvmb_ext_header);
++ msg->size = size;
++ msg->res_size += (result_size + 7) & ~7;
++ size += msg->res_size;
++ msg->hdr = vzalloc(size);
++ if (msg->hdr == NULL) {
++ DXG_ERR("Failed to allocate VM bus message: %d", size);
++ return 
-ENOMEM; ++ } ++ if (use_ext_header) { ++ msg->msg = (char *)&msg->hdr[1]; ++ msg->hdr->command_offset = sizeof(msg->hdr[0]); ++ msg->hdr->vgpu_luid = adapter->host_vgpu_luid; ++ } else { ++ msg->msg = (char *)msg->hdr; ++ } ++ msg->res = (char *)msg->hdr + msg->size; ++ if (dxgglobal->async_msg_enabled) ++ msg->channel = &dxgglobal->channel; ++ else ++ msg->channel = &adapter->channel; ++ return 0; ++} ++ + static void free_message(struct dxgvmbusmsg *msg, struct dxgprocess *process) + { + if (msg->hdr && (char *)msg->hdr != msg->msg_on_stack) +@@ -852,6 +887,620 @@ int dxgvmb_send_destroy_context(struct dxgadapter *adapter, + return ret; + } + ++static int ++copy_private_data(struct d3dkmt_createallocation *args, ++ struct dxgkvmb_command_createallocation *command, ++ struct d3dddi_allocationinfo2 *input_alloc_info, ++ struct d3dkmt_createstandardallocation *standard_alloc) ++{ ++ struct dxgkvmb_command_createallocation_allocinfo *alloc_info; ++ struct d3dddi_allocationinfo2 *input_alloc; ++ int ret = 0; ++ int i; ++ u8 *private_data_dest = (u8 *) &command[1] + ++ (args->alloc_count * ++ sizeof(struct dxgkvmb_command_createallocation_allocinfo)); ++ ++ if (args->private_runtime_data_size) { ++ ret = copy_from_user(private_data_dest, ++ args->private_runtime_data, ++ args->private_runtime_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy runtime data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ private_data_dest += args->private_runtime_data_size; ++ } ++ ++ if (args->flags.standard_allocation) { ++ DXG_TRACE("private data offset %d", ++ (u32) (private_data_dest - (u8 *) command)); ++ ++ args->priv_drv_data_size = sizeof(*args->standard_allocation); ++ memcpy(private_data_dest, standard_alloc, ++ sizeof(*standard_alloc)); ++ private_data_dest += args->priv_drv_data_size; ++ } else if (args->priv_drv_data_size) { ++ ret = copy_from_user(private_data_dest, ++ args->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy private data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ private_data_dest += args->priv_drv_data_size; ++ } ++ ++ alloc_info = (void *)&command[1]; ++ input_alloc = input_alloc_info; ++ if (input_alloc_info[0].sysmem) ++ command->flags.existing_sysmem = 1; ++ for (i = 0; i < args->alloc_count; i++) { ++ alloc_info->flags = input_alloc->flags.value; ++ alloc_info->vidpn_source_id = input_alloc->vidpn_source_id; ++ alloc_info->priv_drv_data_size = ++ input_alloc->priv_drv_data_size; ++ if (input_alloc->priv_drv_data_size) { ++ ret = copy_from_user(private_data_dest, ++ input_alloc->priv_drv_data, ++ input_alloc->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ private_data_dest += input_alloc->priv_drv_data_size; ++ } ++ alloc_info++; ++ input_alloc++; ++ } ++ ++cleanup: ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++static ++int create_existing_sysmem(struct dxgdevice *device, ++ struct dxgkvmb_command_allocinfo_return *host_alloc, ++ struct dxgallocation *dxgalloc, ++ bool read_only, ++ const void *sysmem) ++{ ++ int ret1 = 0; ++ void *kmem = NULL; ++ int ret = 0; ++ struct dxgkvmb_command_setexistingsysmemstore *set_store_command; ++ u64 alloc_size = host_alloc->allocation_size; ++ u32 npages = alloc_size >> PAGE_SHIFT; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, device->adapter, device->process, ++ sizeof(*set_store_command)); ++ if (ret) ++ goto cleanup; ++ set_store_command = (void *)msg.msg; ++ ++ /* ++ * Create a 
guest physical address list and set it as the allocation ++ * backing store in the host. This is done after creating the host ++ * allocation, because only now the allocation size is known. ++ */ ++ ++ DXG_TRACE("Alloc size: %lld", alloc_size); ++ ++ dxgalloc->cpu_address = (void *)sysmem; ++ dxgalloc->pages = vzalloc(npages * sizeof(void *)); ++ if (dxgalloc->pages == NULL) { ++ DXG_ERR("failed to allocate pages"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret1 = get_user_pages_fast((unsigned long)sysmem, npages, !read_only, ++ dxgalloc->pages); ++ if (ret1 != npages) { ++ DXG_ERR("get_user_pages_fast failed: %d", ret1); ++ if (ret1 > 0 && ret1 < npages) ++ release_pages(dxgalloc->pages, ret1); ++ vfree(dxgalloc->pages); ++ dxgalloc->pages = NULL; ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ kmem = vmap(dxgalloc->pages, npages, VM_MAP, PAGE_KERNEL); ++ if (kmem == NULL) { ++ DXG_ERR("vmap failed"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret1 = vmbus_establish_gpadl(dxgglobal_get_vmbus(), kmem, ++ alloc_size, &dxgalloc->gpadl); ++ if (ret1) { ++ DXG_ERR("establish_gpadl failed: %d", ret1); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle); ++ ++ command_vgpu_to_host_init2(&set_store_command->hdr, ++ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE, ++ device->process->host_handle); ++ set_store_command->device = device->handle; ++ set_store_command->device = device->handle; ++ set_store_command->allocation = host_alloc->allocation; ++#ifdef _MAIN_KERNEL_ ++ set_store_command->gpadl = dxgalloc->gpadl.gpadl_handle; ++#else ++ set_store_command->gpadl = dxgalloc->gpadl; ++#endif ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ if (ret < 0) ++ DXG_ERR("failed to set existing store: %x", ret); ++ ++cleanup: ++ if (kmem) ++ vunmap(kmem); ++ free_message(&msg, device->process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++static int ++process_allocation_handles(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct d3dkmt_createallocation *args, ++ struct dxgkvmb_command_createallocation_return *res, ++ struct dxgallocation **dxgalloc, ++ struct dxgresource *resource) ++{ ++ int ret = 0; ++ int i; ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ if (args->flags.create_resource) { ++ ret = hmgrtable_assign_handle(&process->handle_table, resource, ++ HMGRENTRY_TYPE_DXGRESOURCE, ++ res->resource); ++ if (ret < 0) { ++ DXG_ERR("failed to assign resource handle %x", ++ res->resource.v); ++ } else { ++ resource->handle = res->resource; ++ resource->handle_valid = 1; ++ } ++ } ++ for (i = 0; i < args->alloc_count; i++) { ++ struct dxgkvmb_command_allocinfo_return *host_alloc; ++ ++ host_alloc = &res->allocation_info[i]; ++ ret = hmgrtable_assign_handle(&process->handle_table, ++ dxgalloc[i], ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ host_alloc->allocation); ++ if (ret < 0) { ++ DXG_ERR("failed assign alloc handle %x %d %d", ++ host_alloc->allocation.v, ++ args->alloc_count, i); ++ break; ++ } ++ dxgalloc[i]->alloc_handle = host_alloc->allocation; ++ dxgalloc[i]->handle_valid = 1; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++static int ++create_local_allocations(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct d3dkmt_createallocation *args, ++ struct d3dkmt_createallocation *__user input_args, ++ struct d3dddi_allocationinfo2 *alloc_info, ++ struct dxgkvmb_command_createallocation_return 
*result, ++ struct dxgresource *resource, ++ struct dxgallocation **dxgalloc, ++ u32 destroy_buffer_size) ++{ ++ int i; ++ int alloc_count = args->alloc_count; ++ u8 *alloc_private_data = NULL; ++ int ret = 0; ++ int ret1; ++ struct dxgkvmb_command_destroyallocation *destroy_buf; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, device->adapter, process, ++ destroy_buffer_size); ++ if (ret) ++ goto cleanup; ++ destroy_buf = (void *)msg.msg; ++ ++ /* Prepare the command to destroy allocation in case of failure */ ++ command_vgpu_to_host_init2(&destroy_buf->hdr, ++ DXGK_VMBCOMMAND_DESTROYALLOCATION, ++ process->host_handle); ++ destroy_buf->device = args->device; ++ destroy_buf->resource = args->resource; ++ destroy_buf->alloc_count = alloc_count; ++ destroy_buf->flags.assume_not_in_use = 1; ++ for (i = 0; i < alloc_count; i++) { ++ DXG_TRACE("host allocation: %d %x", ++ i, result->allocation_info[i].allocation.v); ++ destroy_buf->allocations[i] = ++ result->allocation_info[i].allocation; ++ } ++ ++ if (args->flags.create_resource) { ++ DXG_TRACE("new resource: %x", result->resource.v); ++ ret = copy_to_user(&input_args->resource, &result->resource, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy resource handle"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ alloc_private_data = (u8 *) result + ++ sizeof(struct dxgkvmb_command_createallocation_return) + ++ sizeof(struct dxgkvmb_command_allocinfo_return) * (alloc_count - 1); ++ ++ for (i = 0; i < alloc_count; i++) { ++ struct dxgkvmb_command_allocinfo_return *host_alloc; ++ struct d3dddi_allocationinfo2 *user_alloc; ++ ++ host_alloc = &result->allocation_info[i]; ++ user_alloc = &alloc_info[i]; ++ dxgalloc[i]->num_pages = ++ host_alloc->allocation_size >> PAGE_SHIFT; ++ if (user_alloc->sysmem) { ++ ret = create_existing_sysmem(device, host_alloc, ++ dxgalloc[i], ++ args->flags.read_only != 0, ++ user_alloc->sysmem); ++ if (ret < 0) ++ goto cleanup; ++ } ++ dxgalloc[i]->cached = host_alloc->allocation_flags.cached; ++ if (host_alloc->priv_drv_data_size) { ++ ret = copy_to_user(user_alloc->priv_drv_data, ++ alloc_private_data, ++ host_alloc->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy private data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ alloc_private_data += host_alloc->priv_drv_data_size; ++ } ++ ret = copy_to_user(&args->allocation_info[i].allocation, ++ &host_alloc->allocation, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy alloc handle"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ ret = process_allocation_handles(process, device, args, result, ++ dxgalloc, resource); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(&input_args->global_share, &args->global_share, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy global share"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ /* Free local handles before freeing the handles in the host */ ++ dxgdevice_acquire_alloc_list_lock(device); ++ if (dxgalloc) ++ for (i = 0; i < alloc_count; i++) ++ if (dxgalloc[i]) ++ dxgallocation_free_handle(dxgalloc[i]); ++ if (resource && args->flags.create_resource) ++ dxgresource_free_handle(resource); ++ dxgdevice_release_alloc_list_lock(device); ++ ++ /* Destroy allocations in the host to unmap gpadls */ ++ ret1 = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, ++ msg.size); ++ if (ret1 < 0) ++ DXG_ERR("failed to destroy allocations: %x", ++ ret1); ++ ++ dxgdevice_acquire_alloc_list_lock(device); 
++ if (dxgalloc) { ++ for (i = 0; i < alloc_count; i++) { ++ if (dxgalloc[i]) { ++ dxgalloc[i]->alloc_handle.v = 0; ++ dxgallocation_destroy(dxgalloc[i]); ++ dxgalloc[i] = NULL; ++ } ++ } ++ } ++ if (resource && args->flags.create_resource) { ++ /* ++ * Prevent the resource memory from freeing. ++ * It will be freed in the top level function. ++ */ ++ kref_get(&resource->resource_kref); ++ dxgresource_destroy(resource); ++ } ++ dxgdevice_release_alloc_list_lock(device); ++ } ++ ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_create_allocation(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct d3dkmt_createallocation *args, ++ struct d3dkmt_createallocation *__user ++ input_args, ++ struct dxgresource *resource, ++ struct dxgallocation **dxgalloc, ++ struct d3dddi_allocationinfo2 *alloc_info, ++ struct d3dkmt_createstandardallocation ++ *standard_alloc) ++{ ++ struct dxgkvmb_command_createallocation *command = NULL; ++ struct dxgkvmb_command_createallocation_return *result = NULL; ++ int ret = -EINVAL; ++ int i; ++ u32 result_size = 0; ++ u32 cmd_size = 0; ++ u32 destroy_buffer_size = 0; ++ u32 priv_drv_data_size; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ if (args->private_runtime_data_size >= DXG_MAX_VM_BUS_PACKET_SIZE || ++ args->priv_drv_data_size >= DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EOVERFLOW; ++ goto cleanup; ++ } ++ ++ /* ++ * Preallocate the buffer, which will be used for destruction in case ++ * of a failure ++ */ ++ destroy_buffer_size = sizeof(struct dxgkvmb_command_destroyallocation) + ++ args->alloc_count * sizeof(struct d3dkmthandle); ++ ++ /* Compute the total private driver size */ ++ ++ priv_drv_data_size = 0; ++ ++ for (i = 0; i < args->alloc_count; i++) { ++ if (alloc_info[i].priv_drv_data_size >= ++ DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EOVERFLOW; ++ goto cleanup; ++ } else { ++ priv_drv_data_size += alloc_info[i].priv_drv_data_size; ++ } ++ if (priv_drv_data_size >= DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EOVERFLOW; ++ goto cleanup; ++ } ++ } ++ ++ /* ++ * Private driver data for the result includes only per allocation ++ * private data ++ */ ++ result_size = sizeof(struct dxgkvmb_command_createallocation_return) + ++ (args->alloc_count - 1) * ++ sizeof(struct dxgkvmb_command_allocinfo_return) + ++ priv_drv_data_size; ++ result = vzalloc(result_size); ++ if (result == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ /* Private drv data for the command includes the global private data */ ++ priv_drv_data_size += args->priv_drv_data_size; ++ ++ cmd_size = sizeof(struct dxgkvmb_command_createallocation) + ++ args->alloc_count * ++ sizeof(struct dxgkvmb_command_createallocation_allocinfo) + ++ args->private_runtime_data_size + priv_drv_data_size; ++ if (cmd_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EOVERFLOW; ++ goto cleanup; ++ } ++ ++ DXG_TRACE("command size, driver_data_size %d %d %ld %ld", ++ cmd_size, priv_drv_data_size, ++ sizeof(struct dxgkvmb_command_createallocation), ++ sizeof(struct dxgkvmb_command_createallocation_allocinfo)); ++ ++ ret = init_message(&msg, device->adapter, process, ++ cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CREATEALLOCATION, ++ process->host_handle); ++ command->device = args->device; ++ command->flags = args->flags; ++ command->resource = args->resource; ++ command->private_runtime_resource_handle = ++ args->private_runtime_resource_handle; ++ 
command->alloc_count = args->alloc_count; ++ command->private_runtime_data_size = args->private_runtime_data_size; ++ command->priv_drv_data_size = args->priv_drv_data_size; ++ ++ ret = copy_private_data(args, command, alloc_info, standard_alloc); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ result, result_size); ++ if (ret < 0) { ++ DXG_ERR("send_create_allocation failed %x", ret); ++ goto cleanup; ++ } ++ ++ ret = create_local_allocations(process, device, args, input_args, ++ alloc_info, result, resource, dxgalloc, ++ destroy_buffer_size); ++cleanup: ++ ++ if (result) ++ vfree(result); ++ free_message(&msg, process); ++ ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_destroy_allocation(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct d3dkmt_destroyallocation2 *args, ++ struct d3dkmthandle *alloc_handles) ++{ ++ struct dxgkvmb_command_destroyallocation *destroy_buffer; ++ u32 destroy_buffer_size; ++ int ret; ++ int allocations_size = args->alloc_count * sizeof(struct d3dkmthandle); ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ destroy_buffer_size = sizeof(struct dxgkvmb_command_destroyallocation) + ++ allocations_size; ++ ++ ret = init_message(&msg, device->adapter, process, ++ destroy_buffer_size); ++ if (ret) ++ goto cleanup; ++ destroy_buffer = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&destroy_buffer->hdr, ++ DXGK_VMBCOMMAND_DESTROYALLOCATION, ++ process->host_handle); ++ destroy_buffer->device = args->device; ++ destroy_buffer->resource = args->resource; ++ destroy_buffer->alloc_count = args->alloc_count; ++ destroy_buffer->flags = args->flags; ++ if (allocations_size) ++ memcpy(destroy_buffer->allocations, alloc_handles, ++ allocations_size); ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, ++ enum d3dkmdt_standardallocationtype alloctype, ++ struct d3dkmdt_gdisurfacedata *alloc_data, ++ u32 physical_adapter_index, ++ u32 *alloc_priv_driver_size, ++ void *priv_alloc_data, ++ u32 *res_priv_data_size, ++ void *priv_res_data) ++{ ++ struct dxgkvmb_command_getstandardallocprivdata *command; ++ struct dxgkvmb_command_getstandardallocprivdata_return *result = NULL; ++ u32 result_size = sizeof(*result); ++ int ret; ++ struct dxgvmbusmsgres msg = {.hdr = NULL}; ++ ++ if (priv_alloc_data) ++ result_size += *alloc_priv_driver_size; ++ if (priv_res_data) ++ result_size += *res_priv_data_size; ++ ret = init_message_res(&msg, device->adapter, device->process, ++ sizeof(*command), result_size); ++ if (ret) ++ goto cleanup; ++ command = msg.msg; ++ result = msg.res; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_DDIGETSTANDARDALLOCATIONDRIVERDATA, ++ device->process->host_handle); ++ ++ command->alloc_type = alloctype; ++ command->priv_driver_data_size = *alloc_priv_driver_size; ++ command->physical_adapter_index = physical_adapter_index; ++ command->priv_driver_resource_size = *res_priv_data_size; ++ switch (alloctype) { ++ case _D3DKMDT_STANDARDALLOCATION_GDISURFACE: ++ command->gdi_surface = *alloc_data; ++ break; ++ case _D3DKMDT_STANDARDALLOCATION_SHAREDPRIMARYSURFACE: ++ case _D3DKMDT_STANDARDALLOCATION_SHADOWSURFACE: ++ case _D3DKMDT_STANDARDALLOCATION_STAGINGSURFACE: ++ default: ++ DXG_ERR("Invalid standard alloc type"); ++ goto cleanup; ++ } ++ ++ ret = 
dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ result, msg.res_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result->status); ++ if (ret < 0) ++ goto cleanup; ++ ++ if (*alloc_priv_driver_size && ++ result->priv_driver_data_size != *alloc_priv_driver_size) { ++ DXG_ERR("Priv data size mismatch"); ++ goto cleanup; ++ } ++ if (*res_priv_data_size && ++ result->priv_driver_resource_size != *res_priv_data_size) { ++ DXG_ERR("Resource priv data size mismatch"); ++ goto cleanup; ++ } ++ *alloc_priv_driver_size = result->priv_driver_data_size; ++ *res_priv_data_size = result->priv_driver_resource_size; ++ if (priv_alloc_data) { ++ memcpy(priv_alloc_data, &result[1], ++ result->priv_driver_data_size); ++ } ++ if (priv_res_data) { ++ memcpy(priv_res_data, ++ (char *)(&result[1]) + result->priv_driver_data_size, ++ result->priv_driver_resource_size); ++ } ++ ++cleanup: ++ ++ free_message((struct dxgvmbusmsg *)&msg, device->process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index ebcb7b0f62c1..4b7466d1b9f2 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -173,6 +173,14 @@ struct dxgkvmb_command_setiospaceregion { + u32 shared_page_gpadl; + }; + ++/* Returns ntstatus */ ++struct dxgkvmb_command_setexistingsysmemstore { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle allocation; ++ u32 gpadl; ++}; ++ + struct dxgkvmb_command_createprocess { + struct dxgkvmb_command_vm_to_host hdr; + void *process; +@@ -269,6 +277,121 @@ struct dxgkvmb_command_flushdevice { + enum dxgdevice_flushschedulerreason reason; + }; + ++struct dxgkvmb_command_createallocation_allocinfo { ++ u32 flags; ++ u32 priv_drv_data_size; ++ u32 vidpn_source_id; ++}; ++ ++struct dxgkvmb_command_createallocation { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++ u32 private_runtime_data_size; ++ u32 priv_drv_data_size; ++ u32 alloc_count; ++ struct d3dkmt_createallocationflags flags; ++ u64 private_runtime_resource_handle; ++ bool make_resident; ++/* dxgkvmb_command_createallocation_allocinfo alloc_info[alloc_count]; */ ++/* u8 private_rutime_data[private_runtime_data_size] */ ++/* u8 priv_drv_data[] for each alloc_info */ ++}; ++ ++struct dxgkvmb_command_getstandardallocprivdata { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ enum d3dkmdt_standardallocationtype alloc_type; ++ u32 priv_driver_data_size; ++ u32 priv_driver_resource_size; ++ u32 physical_adapter_index; ++ union { ++ struct d3dkmdt_sharedprimarysurfacedata primary; ++ struct d3dkmdt_shadowsurfacedata shadow; ++ struct d3dkmdt_stagingsurfacedata staging; ++ struct d3dkmdt_gdisurfacedata gdi_surface; ++ }; ++}; ++ ++struct dxgkvmb_command_getstandardallocprivdata_return { ++ struct ntstatus status; ++ u32 priv_driver_data_size; ++ u32 priv_driver_resource_size; ++ union { ++ struct d3dkmdt_sharedprimarysurfacedata primary; ++ struct d3dkmdt_shadowsurfacedata shadow; ++ struct d3dkmdt_stagingsurfacedata staging; ++ struct d3dkmdt_gdisurfacedata gdi_surface; ++ }; ++/* char alloc_priv_data[priv_driver_data_size]; */ ++/* char resource_priv_data[priv_driver_resource_size]; */ ++}; ++ ++struct dxgkarg_describeallocation { ++ u64 allocation; ++ u32 width; ++ u32 height; ++ u32 
format; ++ u32 multisample_method; ++ struct d3dddi_rational refresh_rate; ++ u32 private_driver_attribute; ++ u32 flags; ++ u32 rotation; ++}; ++ ++struct dxgkvmb_allocflags { ++ union { ++ u32 flags; ++ struct { ++ u32 primary:1; ++ u32 cdd_primary:1; ++ u32 dod_primary:1; ++ u32 overlay:1; ++ u32 reserved6:1; ++ u32 capture:1; ++ u32 reserved0:4; ++ u32 reserved1:1; ++ u32 existing_sysmem:1; ++ u32 stereo:1; ++ u32 direct_flip:1; ++ u32 hardware_protected:1; ++ u32 reserved2:1; ++ u32 reserved3:1; ++ u32 reserved4:1; ++ u32 protected:1; ++ u32 cached:1; ++ u32 independent_primary:1; ++ u32 reserved:11; ++ }; ++ }; ++}; ++ ++struct dxgkvmb_command_allocinfo_return { ++ struct d3dkmthandle allocation; ++ u32 priv_drv_data_size; ++ struct dxgkvmb_allocflags allocation_flags; ++ u64 allocation_size; ++ struct dxgkarg_describeallocation driver_info; ++}; ++ ++struct dxgkvmb_command_createallocation_return { ++ struct d3dkmt_createallocationflags flags; ++ struct d3dkmthandle resource; ++ struct d3dkmthandle global_share; ++ u32 vgpu_flags; ++ struct dxgkvmb_command_allocinfo_return allocation_info[1]; ++ /* Private driver data for allocations */ ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_destroyallocation { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++ u32 alloc_count; ++ struct d3dddicb_destroyallocation2flags flags; ++ struct d3dkmthandle allocations[1]; ++}; ++ + struct dxgkvmb_command_createcontextvirtual { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmthandle context; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 5d10ebd2ce6a..0eaa577d7ed4 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -714,6 +714,633 @@ dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++get_standard_alloc_priv_data(struct dxgdevice *device, ++ struct d3dkmt_createstandardallocation *alloc_info, ++ u32 *standard_alloc_priv_data_size, ++ void **standard_alloc_priv_data, ++ u32 *standard_res_priv_data_size, ++ void **standard_res_priv_data) ++{ ++ int ret; ++ struct d3dkmdt_gdisurfacedata gdi_data = { }; ++ u32 priv_data_size = 0; ++ u32 res_priv_data_size = 0; ++ void *priv_data = NULL; ++ void *res_priv_data = NULL; ++ ++ gdi_data.type = _D3DKMDT_GDISURFACE_TEXTURE_CROSSADAPTER; ++ gdi_data.width = alloc_info->existing_heap_data.size; ++ gdi_data.height = 1; ++ gdi_data.format = _D3DDDIFMT_UNKNOWN; ++ ++ *standard_alloc_priv_data_size = 0; ++ ret = dxgvmb_send_get_stdalloc_data(device, ++ _D3DKMDT_STANDARDALLOCATION_GDISURFACE, ++ &gdi_data, 0, ++ &priv_data_size, NULL, ++ &res_priv_data_size, ++ NULL); ++ if (ret < 0) ++ goto cleanup; ++ DXG_TRACE("Priv data size: %d", priv_data_size); ++ if (priv_data_size == 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ priv_data = vzalloc(priv_data_size); ++ if (priv_data == NULL) { ++ ret = -ENOMEM; ++ DXG_ERR("failed to allocate memory for priv data: %d", ++ priv_data_size); ++ goto cleanup; ++ } ++ if (res_priv_data_size) { ++ res_priv_data = vzalloc(res_priv_data_size); ++ if (res_priv_data == NULL) { ++ ret = -ENOMEM; ++ dev_err(DXGDEV, ++ "failed to alloc memory for res priv data: %d", ++ res_priv_data_size); ++ goto cleanup; ++ } ++ } ++ ret = dxgvmb_send_get_stdalloc_data(device, ++ _D3DKMDT_STANDARDALLOCATION_GDISURFACE, ++ &gdi_data, 0, ++ &priv_data_size, ++ priv_data, ++ &res_priv_data_size, ++ res_priv_data); ++ if (ret < 0) ++ goto cleanup; ++ 
*standard_alloc_priv_data_size = priv_data_size; ++ *standard_alloc_priv_data = priv_data; ++ *standard_res_priv_data_size = res_priv_data_size; ++ *standard_res_priv_data = res_priv_data; ++ priv_data = NULL; ++ res_priv_data = NULL; ++ ++cleanup: ++ if (priv_data) ++ vfree(priv_data); ++ if (res_priv_data) ++ vfree(res_priv_data); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++static int ++dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_createallocation args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ struct d3dddi_allocationinfo2 *alloc_info = NULL; ++ struct d3dkmt_createstandardallocation standard_alloc; ++ u32 alloc_info_size = 0; ++ struct dxgresource *resource = NULL; ++ struct dxgallocation **dxgalloc = NULL; ++ bool resource_mutex_acquired = false; ++ u32 standard_alloc_priv_data_size = 0; ++ void *standard_alloc_priv_data = NULL; ++ u32 res_priv_data_size = 0; ++ void *res_priv_data = NULL; ++ int i; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.alloc_count > D3DKMT_CREATEALLOCATION_MAX || ++ args.alloc_count == 0) { ++ DXG_ERR("invalid number of allocations to create"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ alloc_info_size = sizeof(struct d3dddi_allocationinfo2) * ++ args.alloc_count; ++ alloc_info = vzalloc(alloc_info_size); ++ if (alloc_info == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user(alloc_info, args.allocation_info, ++ alloc_info_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc info"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ for (i = 0; i < args.alloc_count; i++) { ++ if (args.flags.standard_allocation) { ++ if (alloc_info[i].priv_drv_data_size != 0) { ++ DXG_ERR("private data size not zero"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ if (alloc_info[i].priv_drv_data_size >= ++ DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("private data size too big: %d %d %ld", ++ i, alloc_info[i].priv_drv_data_size, ++ sizeof(alloc_info[0])); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ if (args.flags.existing_section || args.flags.create_protected) { ++ DXG_ERR("invalid allocation flags"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.flags.standard_allocation) { ++ if (args.standard_allocation == NULL) { ++ DXG_ERR("invalid standard allocation"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_from_user(&standard_alloc, ++ args.standard_allocation, ++ sizeof(standard_alloc)); ++ if (ret) { ++ DXG_ERR("failed to copy std alloc data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ if (standard_alloc.type == ++ _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP) { ++ if (alloc_info[0].sysmem == NULL || ++ (unsigned long)alloc_info[0].sysmem & ++ (PAGE_SIZE - 1)) { ++ DXG_ERR("invalid sysmem pointer"); ++ ret = STATUS_INVALID_PARAMETER; ++ goto cleanup; ++ } ++ if (!args.flags.existing_sysmem) { ++ DXG_ERR("expect existing_sysmem flag"); ++ ret = STATUS_INVALID_PARAMETER; ++ goto cleanup; ++ } ++ } else if (standard_alloc.type == ++ _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER) { ++ if (args.flags.existing_sysmem) { ++ DXG_ERR("existing_sysmem flag invalid"); ++ ret = STATUS_INVALID_PARAMETER; ++ goto cleanup; ++ ++ } ++ if (alloc_info[0].sysmem != NULL) { ++ DXG_ERR("sysmem should be NULL"); ++ ret = STATUS_INVALID_PARAMETER; ++ goto cleanup; ++ } ++ } else { ++ DXG_ERR("invalid standard allocation 
type"); ++ ret = STATUS_INVALID_PARAMETER; ++ goto cleanup; ++ } ++ ++ if (args.priv_drv_data_size != 0 || ++ args.alloc_count != 1 || ++ standard_alloc.existing_heap_data.size == 0 || ++ standard_alloc.existing_heap_data.size & (PAGE_SIZE - 1)) { ++ DXG_ERR("invalid standard allocation"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ args.priv_drv_data_size = ++ sizeof(struct d3dkmt_createstandardallocation); ++ } ++ ++ if (args.flags.create_shared && !args.flags.create_resource) { ++ DXG_ERR("create_resource must be set for create_shared"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. ++ */ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ kref_put(&device->device_kref, dxgdevice_release); ++ device = NULL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ if (args.flags.standard_allocation) { ++ ret = get_standard_alloc_priv_data(device, ++ &standard_alloc, ++ &standard_alloc_priv_data_size, ++ &standard_alloc_priv_data, ++ &res_priv_data_size, ++ &res_priv_data); ++ if (ret < 0) ++ goto cleanup; ++ DXG_TRACE("Alloc private data: %d", ++ standard_alloc_priv_data_size); ++ } ++ ++ if (args.flags.create_resource) { ++ resource = dxgresource_create(device); ++ if (resource == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ resource->private_runtime_handle = ++ args.private_runtime_resource_handle; ++ } else { ++ if (args.resource.v) { ++ /* Adding new allocations to the given resource */ ++ ++ dxgprocess_ht_lock_shared_down(process); ++ resource = hmgrtable_get_object_by_type( ++ &process->handle_table, ++ HMGRENTRY_TYPE_DXGRESOURCE, ++ args.resource); ++ kref_get(&resource->resource_kref); ++ dxgprocess_ht_lock_shared_up(process); ++ ++ if (resource == NULL || resource->device != device) { ++ DXG_ERR("invalid resource handle %x", ++ args.resource.v); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ /* Synchronize with resource destruction */ ++ mutex_lock(&resource->resource_mutex); ++ if (!dxgresource_is_active(resource)) { ++ mutex_unlock(&resource->resource_mutex); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ resource_mutex_acquired = true; ++ } ++ } ++ ++ dxgalloc = vzalloc(sizeof(struct dxgallocation *) * args.alloc_count); ++ if (dxgalloc == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ for (i = 0; i < args.alloc_count; i++) { ++ struct dxgallocation *alloc; ++ u32 priv_data_size; ++ ++ if (args.flags.standard_allocation) ++ priv_data_size = standard_alloc_priv_data_size; ++ else ++ priv_data_size = alloc_info[i].priv_drv_data_size; ++ ++ if (alloc_info[i].sysmem && !args.flags.standard_allocation) { ++ if ((unsigned long) ++ alloc_info[i].sysmem & (PAGE_SIZE - 1)) { ++ DXG_ERR("invalid sysmem alloc %d, %p", ++ i, alloc_info[i].sysmem); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ if ((alloc_info[0].sysmem == NULL) != ++ (alloc_info[i].sysmem == NULL)) { ++ DXG_ERR("All allocs must have sysmem pointer"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ dxgalloc[i] = dxgallocation_create(process); ++ if (dxgalloc[i] == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ alloc = dxgalloc[i]; ++ ++ if (resource) { ++ ret = dxgresource_add_alloc(resource, alloc); ++ if (ret < 0) ++ goto 
cleanup; ++ } else { ++ dxgdevice_add_alloc(device, alloc); ++ } ++ if (args.flags.create_shared) { ++ /* Remember alloc private data to use it during open */ ++ alloc->priv_drv_data = vzalloc(priv_data_size + ++ offsetof(struct privdata, data)); ++ if (alloc->priv_drv_data == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ if (args.flags.standard_allocation) { ++ memcpy(alloc->priv_drv_data->data, ++ standard_alloc_priv_data, ++ priv_data_size); ++ } else { ++ ret = copy_from_user( ++ alloc->priv_drv_data->data, ++ alloc_info[i].priv_drv_data, ++ priv_data_size); ++ if (ret) { ++ dev_err(DXGDEV, ++ "failed to copy priv data"); ++ ret = -EFAULT; ++ goto cleanup; ++ } ++ } ++ alloc->priv_drv_data->data_size = priv_data_size; ++ } ++ } ++ ++ ret = dxgvmb_send_create_allocation(process, device, &args, inargs, ++ resource, dxgalloc, alloc_info, ++ &standard_alloc); ++cleanup: ++ ++ if (resource_mutex_acquired) { ++ mutex_unlock(&resource->resource_mutex); ++ kref_put(&resource->resource_kref, dxgresource_release); ++ } ++ if (ret < 0) { ++ if (dxgalloc) { ++ for (i = 0; i < args.alloc_count; i++) { ++ if (dxgalloc[i]) ++ dxgallocation_destroy(dxgalloc[i]); ++ } ++ } ++ if (resource && args.flags.create_resource) { ++ dxgresource_destroy(resource); ++ } ++ } ++ if (dxgalloc) ++ vfree(dxgalloc); ++ if (standard_alloc_priv_data) ++ vfree(standard_alloc_priv_data); ++ if (res_priv_data) ++ vfree(res_priv_data); ++ if (alloc_info) ++ vfree(alloc_info); ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) { ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int validate_alloc(struct dxgallocation *alloc0, ++ struct dxgallocation *alloc, ++ struct dxgdevice *device, ++ struct d3dkmthandle alloc_handle) ++{ ++ u32 fail_reason; ++ ++ if (alloc == NULL) { ++ fail_reason = 1; ++ goto cleanup; ++ } ++ if (alloc->resource_owner != alloc0->resource_owner) { ++ fail_reason = 2; ++ goto cleanup; ++ } ++ if (alloc->resource_owner) { ++ if (alloc->owner.resource != alloc0->owner.resource) { ++ fail_reason = 3; ++ goto cleanup; ++ } ++ if (alloc->owner.resource->device != device) { ++ fail_reason = 4; ++ goto cleanup; ++ } ++ } else { ++ if (alloc->owner.device != device) { ++ fail_reason = 6; ++ goto cleanup; ++ } ++ } ++ return 0; ++cleanup: ++ DXG_ERR("Alloc validation failed: reason: %d %x", ++ fail_reason, alloc_handle.v); ++ return -EINVAL; ++} ++ ++static int ++dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_destroyallocation2 args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int ret; ++ struct d3dkmthandle *alloc_handles = NULL; ++ struct dxgallocation **allocs = NULL; ++ struct dxgresource *resource = NULL; ++ int i; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.alloc_count > D3DKMT_CREATEALLOCATION_MAX || ++ ((args.alloc_count == 0) == (args.resource.v == 0))) { ++ DXG_ERR("invalid number of allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.alloc_count) { ++ u32 handle_size = sizeof(struct d3dkmthandle) * ++ args.alloc_count; ++ ++ alloc_handles = vzalloc(handle_size); ++ if (alloc_handles == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ allocs = vzalloc(sizeof(struct dxgallocation *) * ++ args.alloc_count); ++ if 
(allocs == NULL) {
++ ret = -ENOMEM;
++ goto cleanup;
++ }
++ ret = copy_from_user(alloc_handles, args.allocations,
++ handle_size);
++ if (ret) {
++ DXG_ERR("failed to copy alloc handles");
++ ret = -EINVAL;
++ goto cleanup;
++ }
++ }
++
++ /*
++ * The call acquires reference on the device. It is safe to access the
++ * adapter, because the device holds reference on it.
++ */
++ device = dxgprocess_device_by_handle(process, args.device);
++ if (device == NULL) {
++ ret = -EINVAL;
++ goto cleanup;
++ }
++
++ /* Acquire the device lock to synchronize with the device destruction */
++ ret = dxgdevice_acquire_lock_shared(device);
++ if (ret < 0) {
++ kref_put(&device->device_kref, dxgdevice_release);
++ device = NULL;
++ goto cleanup;
++ }
++
++ adapter = device->adapter;
++ ret = dxgadapter_acquire_lock_shared(adapter);
++ if (ret < 0) {
++ adapter = NULL;
++ goto cleanup;
++ }
++
++ /*
++ * Destroy the local allocation handles first. If the host handle
++ * is destroyed first, another object could be assigned to the process
++ * table at the same place as the allocation handle and it will fail.
++ */
++ if (args.alloc_count) {
++ dxgprocess_ht_lock_exclusive_down(process);
++ for (i = 0; i < args.alloc_count; i++) {
++ allocs[i] =
++ hmgrtable_get_object_by_type(&process->handle_table,
++ HMGRENTRY_TYPE_DXGALLOCATION,
++ alloc_handles[i]);
++ ret =
++ validate_alloc(allocs[0], allocs[i], device,
++ alloc_handles[i]);
++ if (ret < 0) {
++ dxgprocess_ht_lock_exclusive_up(process);
++ goto cleanup;
++ }
++ }
++ dxgprocess_ht_lock_exclusive_up(process);
++ for (i = 0; i < args.alloc_count; i++)
++ dxgallocation_free_handle(allocs[i]);
++ } else {
++ struct dxgallocation *alloc;
++
++ dxgprocess_ht_lock_exclusive_down(process);
++ resource = hmgrtable_get_object_by_type(&process->handle_table,
++ HMGRENTRY_TYPE_DXGRESOURCE,
++ args.resource);
++ if (resource == NULL) {
++ DXG_ERR("Invalid resource handle: %x",
++ args.resource.v);
++ ret = -EINVAL;
++ } else if (resource->device != device) {
++ DXG_ERR("Resource belongs to wrong device: %x",
++ args.resource.v);
++ ret = -EINVAL;
++ } else {
++ hmgrtable_free_handle(&process->handle_table,
++ HMGRENTRY_TYPE_DXGRESOURCE,
++ args.resource);
++ resource->object_state = DXGOBJECTSTATE_DESTROYED;
++ resource->handle.v = 0;
++ resource->handle_valid = 0;
++ }
++ dxgprocess_ht_lock_exclusive_up(process);
++
++ if (ret < 0)
++ goto cleanup;
++
++ dxgdevice_acquire_alloc_list_lock_shared(device);
++ list_for_each_entry(alloc, &resource->alloc_list_head,
++ alloc_list_entry) {
++ dxgallocation_free_handle(alloc);
++ }
++ dxgdevice_release_alloc_list_lock_shared(device);
++ }
++
++ if (args.alloc_count && allocs[0]->resource_owner)
++ resource = allocs[0]->owner.resource;
++
++ if (resource) {
++ kref_get(&resource->resource_kref);
++ mutex_lock(&resource->resource_mutex);
++ }
++
++ ret = dxgvmb_send_destroy_allocation(process, device, &args,
++ alloc_handles);
++
++ /*
++ * Destroy the allocations after the host destroyed them.
++ * The allocation gpadl teardown will wait until the host unmaps its
++ * gpadl. 
++ */ ++ dxgdevice_acquire_alloc_list_lock(device); ++ if (args.alloc_count) { ++ for (i = 0; i < args.alloc_count; i++) { ++ if (allocs[i]) { ++ allocs[i]->alloc_handle.v = 0; ++ dxgallocation_destroy(allocs[i]); ++ } ++ } ++ } else { ++ dxgresource_destroy(resource); ++ } ++ dxgdevice_release_alloc_list_lock(device); ++ ++ if (resource) { ++ mutex_unlock(&resource->resource_mutex); ++ kref_put(&resource->resource_kref, dxgresource_release); ++ } ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) { ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ ++ if (alloc_handles) ++ vfree(alloc_handles); ++ ++ if (allocs) ++ vfree(allocs); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, +@@ -721,7 +1348,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x03 */ {}, + /* 0x04 */ {dxgkio_create_context_virtual, LX_DXCREATECONTEXTVIRTUAL}, + /* 0x05 */ {dxgkio_destroy_context, LX_DXDESTROYCONTEXT}, +-/* 0x06 */ {}, ++/* 0x06 */ {dxgkio_create_allocation, LX_DXCREATEALLOCATION}, + /* 0x07 */ {}, + /* 0x08 */ {}, + /* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO}, +@@ -734,7 +1361,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x10 */ {}, + /* 0x11 */ {}, + /* 0x12 */ {}, +-/* 0x13 */ {}, ++/* 0x13 */ {dxgkio_destroy_allocation, LX_DXDESTROYALLOCATION2}, + /* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2}, + /* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER}, + /* 0x16 */ {}, +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index 3a9637f0b5e2..a51b29a6a68f 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -30,6 +30,9 @@ extern const struct d3dkmthandle zerohandle; + * plistmutex (process list mutex) + * table_lock (handle table lock) + * context_list_lock ++ * alloc_list_lock ++ * resource_mutex ++ * shared_resource_list_lock + * core_lock (dxgadapter lock) + * device_lock (dxgdevice lock) + * process_adapter_mutex +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 4ba0070b061f..cf670b9c4dc2 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -58,6 +58,7 @@ struct winluid { + __u32 b; + }; + ++#define D3DKMT_CREATEALLOCATION_MAX 1024 + #define D3DKMT_ADAPTERS_MAX 64 + + struct d3dkmt_adapterinfo { +@@ -197,6 +198,205 @@ struct d3dkmt_createcontextvirtual { + struct d3dkmthandle context; + }; + ++enum d3dkmdt_gdisurfacetype { ++ _D3DKMDT_GDISURFACE_INVALID = 0, ++ _D3DKMDT_GDISURFACE_TEXTURE = 1, ++ _D3DKMDT_GDISURFACE_STAGING_CPUVISIBLE = 2, ++ _D3DKMDT_GDISURFACE_STAGING = 3, ++ _D3DKMDT_GDISURFACE_LOOKUPTABLE = 4, ++ _D3DKMDT_GDISURFACE_EXISTINGSYSMEM = 5, ++ _D3DKMDT_GDISURFACE_TEXTURE_CPUVISIBLE = 6, ++ _D3DKMDT_GDISURFACE_TEXTURE_CROSSADAPTER = 7, ++ _D3DKMDT_GDISURFACE_TEXTURE_CPUVISIBLE_CROSSADAPTER = 8, ++}; ++ ++struct d3dddi_rational { ++ __u32 numerator; ++ __u32 denominator; ++}; ++ ++enum d3dddiformat { ++ _D3DDDIFMT_UNKNOWN = 0, ++}; ++ ++struct d3dkmdt_gdisurfacedata { ++ __u32 width; ++ __u32 height; ++ __u32 format; ++ enum d3dkmdt_gdisurfacetype type; ++ __u32 flags; ++ __u32 pitch; ++}; ++ ++struct d3dkmdt_stagingsurfacedata { ++ __u32 width; ++ __u32 height; ++ __u32 pitch; ++}; ++ ++struct d3dkmdt_sharedprimarysurfacedata { ++ __u32 width; ++ __u32 height; ++ enum d3dddiformat format; ++ struct d3dddi_rational 
refresh_rate; ++ __u32 vidpn_source_id; ++}; ++ ++struct d3dkmdt_shadowsurfacedata { ++ __u32 width; ++ __u32 height; ++ enum d3dddiformat format; ++ __u32 pitch; ++}; ++ ++enum d3dkmdt_standardallocationtype { ++ _D3DKMDT_STANDARDALLOCATION_SHAREDPRIMARYSURFACE = 1, ++ _D3DKMDT_STANDARDALLOCATION_SHADOWSURFACE = 2, ++ _D3DKMDT_STANDARDALLOCATION_STAGINGSURFACE = 3, ++ _D3DKMDT_STANDARDALLOCATION_GDISURFACE = 4, ++}; ++ ++enum d3dkmt_standardallocationtype { ++ _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1, ++ _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2, ++}; ++ ++struct d3dkmt_standardallocation_existingheap { ++ __u64 size; ++}; ++ ++struct d3dkmt_createstandardallocationflags { ++ union { ++ struct { ++ __u32 reserved:32; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_createstandardallocation { ++ enum d3dkmt_standardallocationtype type; ++ __u32 reserved; ++ struct d3dkmt_standardallocation_existingheap existing_heap_data; ++ struct d3dkmt_createstandardallocationflags flags; ++ __u32 reserved1; ++}; ++ ++struct d3dddi_allocationinfo2 { ++ struct d3dkmthandle allocation; ++#ifdef __KERNEL__ ++ const void *sysmem; ++#else ++ __u64 sysmem; ++#endif ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ __u32 priv_drv_data_size; ++ __u32 vidpn_source_id; ++ union { ++ struct { ++ __u32 primary:1; ++ __u32 stereo:1; ++ __u32 override_priority:1; ++ __u32 reserved:29; ++ }; ++ __u32 value; ++ } flags; ++ __u64 gpu_virtual_address; ++ union { ++ __u32 priority; ++ __u64 unused; ++ }; ++ __u64 reserved[5]; ++}; ++ ++struct d3dkmt_createallocationflags { ++ union { ++ struct { ++ __u32 create_resource:1; ++ __u32 create_shared:1; ++ __u32 non_secure:1; ++ __u32 create_protected:1; ++ __u32 restrict_shared_access:1; ++ __u32 existing_sysmem:1; ++ __u32 nt_security_sharing:1; ++ __u32 read_only:1; ++ __u32 create_write_combined:1; ++ __u32 create_cached:1; ++ __u32 swap_chain_back_buffer:1; ++ __u32 cross_adapter:1; ++ __u32 open_cross_adapter:1; ++ __u32 partial_shared_creation:1; ++ __u32 zeroed:1; ++ __u32 write_watch:1; ++ __u32 standard_allocation:1; ++ __u32 existing_section:1; ++ __u32 reserved:14; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_createallocation { ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++ struct d3dkmthandle global_share; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ const void *private_runtime_data; ++#else ++ __u64 private_runtime_data; ++#endif ++ __u32 private_runtime_data_size; ++ __u32 reserved1; ++ union { ++#ifdef __KERNEL__ ++ struct d3dkmt_createstandardallocation *standard_allocation; ++ const void *priv_drv_data; ++#else ++ __u64 standard_allocation; ++ __u64 priv_drv_data; ++#endif ++ }; ++ __u32 priv_drv_data_size; ++ __u32 alloc_count; ++#ifdef __KERNEL__ ++ struct d3dddi_allocationinfo2 *allocation_info; ++#else ++ __u64 allocation_info; ++#endif ++ struct d3dkmt_createallocationflags flags; ++ __u32 reserved2; ++ __u64 private_runtime_resource_handle; ++}; ++ ++struct d3dddicb_destroyallocation2flags { ++ union { ++ struct { ++ __u32 assume_not_in_use:1; ++ __u32 synchronous_destroy:1; ++ __u32 reserved:29; ++ __u32 system_use_only:1; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_destroyallocation2 { ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++#ifdef __KERNEL__ ++ const struct d3dkmthandle *allocations; ++#else ++ __u64 allocations; ++#endif ++ __u32 alloc_count; ++ struct d3dddicb_destroyallocation2flags flags; ++}; ++ + struct d3dkmt_adaptertype { + union { 
+ struct {
+@@ -279,8 +479,12 @@ struct d3dkmt_enumadapters3 {
+ _IOWR(0x47, 0x04, struct d3dkmt_createcontextvirtual)
+ #define LX_DXDESTROYCONTEXT \
+ _IOWR(0x47, 0x05, struct d3dkmt_destroycontext)
++#define LX_DXCREATEALLOCATION \
++ _IOWR(0x47, 0x06, struct d3dkmt_createallocation)
+ #define LX_DXQUERYADAPTERINFO \
+ _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
++#define LX_DXDESTROYALLOCATION2 \
++ _IOWR(0x47, 0x13, struct d3dkmt_destroyallocation2)
+ #define LX_DXENUMADAPTERS2 \
+ _IOWR(0x47, 0x14, struct d3dkmt_enumadapters2)
+ #define LX_DXCLOSEADAPTER \
+--
+Armbian
+
diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1676-drivers-hv-dxgkrnl-Creation-of-compute-device-sync-objects.patch b/patch/kernel/archive/wsl2-arm64-6.6/1676-drivers-hv-dxgkrnl-Creation-of-compute-device-sync-objects.patch
new file mode 100644
index 000000000000..3b0d750f67c2
--- /dev/null
+++ b/patch/kernel/archive/wsl2-arm64-6.6/1676-drivers-hv-dxgkrnl-Creation-of-compute-device-sync-objects.patch
@@ -0,0 +1,1016 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Iouri Tarassov
+Date: Tue, 1 Feb 2022 14:38:32 -0800
+Subject: drivers: hv: dxgkrnl: Creation of compute device sync objects
+
+Implement ioctls to create and destroy compute device sync objects:
+ - the LX_DXCREATESYNCHRONIZATIONOBJECT ioctl,
+ - the LX_DXDESTROYSYNCHRONIZATIONOBJECT ioctl.
+
+Compute device synchronization objects are used to synchronize
+execution of compute device commands, which are queued to
+different execution contexts (dxgcontext objects).
+
+There are several types of sync objects (mutex, monitored
+fence, CPU event, fence). A "signal" or a "wait" operation
+could be queued to an execution context.
+
+Monitored fence sync objects are particularly important.
+A monitored fence object has a fence value, which could be
+monitored by the compute device or by the CPU. Therefore, a CPU
+virtual address is allocated during object creation to allow
+an application to read the fence value. dxg_map_iospace and
+dxg_unmap_iospace implement creation of the CPU virtual address.
+This is done as follows:
+- The host allocates a portion of the guest IO space, which is mapped
+ to the actual fence value memory on the host
+- The host returns the guest IO space address to the guest
+- The guest allocates a CPU virtual address and updates page tables
+ to point to the IO space address
+
+Signed-off-by: Iouri Tarassov
+[kms: Forward port to v6.1]
+Signed-off-by: Kelsey Steele
+---
+ drivers/hv/dxgkrnl/dxgadapter.c | 184 +++++++++
+ drivers/hv/dxgkrnl/dxgkrnl.h | 80 ++++
+ drivers/hv/dxgkrnl/dxgmodule.c | 1 +
+ drivers/hv/dxgkrnl/dxgprocess.c | 16 +
+ drivers/hv/dxgkrnl/dxgvmbus.c | 205 ++++++++++
+ drivers/hv/dxgkrnl/dxgvmbus.h | 20 +
+ drivers/hv/dxgkrnl/ioctl.c | 130 +++++-
+ include/uapi/misc/d3dkmthk.h | 95 +++++
+ 8 files changed, 729 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
+index 402caa81a5db..d2f2b96527e6 100644
+--- a/drivers/hv/dxgkrnl/dxgadapter.c
++++ b/drivers/hv/dxgkrnl/dxgadapter.c
+@@ -160,6 +160,24 @@ void dxgadapter_remove_process(struct dxgprocess_adapter *process_info)
+ list_del(&process_info->adapter_process_list_entry);
+ }
+
++void dxgadapter_add_syncobj(struct dxgadapter *adapter,
++ struct dxgsyncobject *object)
++{
++ down_write(&adapter->shared_resource_list_lock);
++ list_add_tail(&object->syncobj_list_entry, &adapter->syncobj_list_head);
++ up_write(&adapter->shared_resource_list_lock);
++}
++
++void dxgadapter_remove_syncobj(struct dxgsyncobject *object)
++{
++ down_write(&object->adapter->shared_resource_list_lock);
++ if (object->syncobj_list_entry.next) {
++ list_del(&object->syncobj_list_entry);
++ object->syncobj_list_entry.next = NULL;
++ }
++ up_write(&object->adapter->shared_resource_list_lock);
++}
++
+ int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter)
+ {
+ down_write(&adapter->core_lock);
+@@ -213,6 +231,7 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter,
+ init_rwsem(&device->context_list_lock);
+ init_rwsem(&device->alloc_list_lock);
+ INIT_LIST_HEAD(&device->pqueue_list_head);
++ INIT_LIST_HEAD(&device->syncobj_list_head);
+ device->object_state = DXGOBJECTSTATE_CREATED;
+ device->execution_state = _D3DKMT_DEVICEEXECUTION_ACTIVE;
+
+@@ -228,6 +247,7 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter,
+ void dxgdevice_stop(struct dxgdevice *device)
+ {
+ struct dxgallocation *alloc;
++ struct dxgsyncobject *syncobj;
+
+ DXG_TRACE("Destroying device: %p", device);
+ dxgdevice_acquire_alloc_list_lock(device);
+@@ -235,6 +255,14 @@ void dxgdevice_stop(struct dxgdevice *device)
+ dxgallocation_stop(alloc);
+ }
+ dxgdevice_release_alloc_list_lock(device);
++
++ hmgrtable_lock(&device->process->handle_table, DXGLOCK_EXCL);
++ list_for_each_entry(syncobj, &device->syncobj_list_head,
++ syncobj_list_entry) {
++ dxgsyncobject_stop(syncobj);
++ }
++ hmgrtable_unlock(&device->process->handle_table, DXGLOCK_EXCL);
++ DXG_TRACE("Device stopped: %p", device);
+ }
+
+ void dxgdevice_mark_destroyed(struct dxgdevice *device)
+@@ -263,6 +291,20 @@ void dxgdevice_destroy(struct dxgdevice *device)
+
+ dxgdevice_acquire_alloc_list_lock(device);
+
++ while (!list_empty(&device->syncobj_list_head)) {
++ struct dxgsyncobject *syncobj =
++ list_first_entry(&device->syncobj_list_head,
++ struct dxgsyncobject,
++ syncobj_list_entry);
++ list_del(&syncobj->syncobj_list_entry);
++ syncobj->syncobj_list_entry.next = NULL;
++ dxgdevice_release_alloc_list_lock(device);
++
++ dxgsyncobject_destroy(process, syncobj);
++
++ 
dxgdevice_acquire_alloc_list_lock(device); ++ } ++ + { + struct dxgallocation *alloc; + struct dxgallocation *tmp; +@@ -565,6 +607,30 @@ void dxgdevice_release(struct kref *refcount) + kfree(device); + } + ++void dxgdevice_add_syncobj(struct dxgdevice *device, ++ struct dxgsyncobject *syncobj) ++{ ++ dxgdevice_acquire_alloc_list_lock(device); ++ list_add_tail(&syncobj->syncobj_list_entry, &device->syncobj_list_head); ++ kref_get(&syncobj->syncobj_kref); ++ dxgdevice_release_alloc_list_lock(device); ++} ++ ++void dxgdevice_remove_syncobj(struct dxgsyncobject *entry) ++{ ++ struct dxgdevice *device = entry->device; ++ ++ dxgdevice_acquire_alloc_list_lock(device); ++ if (entry->syncobj_list_entry.next) { ++ list_del(&entry->syncobj_list_entry); ++ entry->syncobj_list_entry.next = NULL; ++ kref_put(&entry->syncobj_kref, dxgsyncobject_release); ++ } ++ dxgdevice_release_alloc_list_lock(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ entry->device = NULL; ++} ++ + struct dxgcontext *dxgcontext_create(struct dxgdevice *device) + { + struct dxgcontext *context; +@@ -812,3 +878,121 @@ void dxgprocess_adapter_remove_device(struct dxgdevice *device) + } + mutex_unlock(&device->adapter_info->device_list_mutex); + } ++ ++struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct dxgadapter *adapter, ++ enum ++ d3dddi_synchronizationobject_type ++ type, ++ struct ++ d3dddi_synchronizationobject_flags ++ flags) ++{ ++ struct dxgsyncobject *syncobj; ++ ++ syncobj = kzalloc(sizeof(*syncobj), GFP_KERNEL); ++ if (syncobj == NULL) ++ goto cleanup; ++ syncobj->type = type; ++ syncobj->process = process; ++ switch (type) { ++ case _D3DDDI_MONITORED_FENCE: ++ case _D3DDDI_PERIODIC_MONITORED_FENCE: ++ syncobj->monitored_fence = 1; ++ break; ++ default: ++ break; ++ } ++ if (flags.shared) { ++ syncobj->shared = 1; ++ if (!flags.nt_security_sharing) { ++ DXG_ERR("nt_security_sharing must be set"); ++ goto cleanup; ++ } ++ } ++ ++ kref_init(&syncobj->syncobj_kref); ++ ++ if (syncobj->monitored_fence) { ++ syncobj->device = device; ++ syncobj->device_handle = device->handle; ++ kref_get(&device->device_kref); ++ dxgdevice_add_syncobj(device, syncobj); ++ } else { ++ dxgadapter_add_syncobj(adapter, syncobj); ++ } ++ syncobj->adapter = adapter; ++ kref_get(&adapter->adapter_kref); ++ ++ DXG_TRACE("Syncobj created: %p", syncobj); ++ return syncobj; ++cleanup: ++ if (syncobj) ++ kfree(syncobj); ++ return NULL; ++} ++ ++void dxgsyncobject_destroy(struct dxgprocess *process, ++ struct dxgsyncobject *syncobj) ++{ ++ int destroyed; ++ ++ DXG_TRACE("Destroying syncobj: %p", syncobj); ++ ++ dxgsyncobject_stop(syncobj); ++ ++ destroyed = test_and_set_bit(0, &syncobj->flags); ++ if (!destroyed) { ++ DXG_TRACE("Deleting handle: %x", syncobj->handle.v); ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ if (syncobj->handle.v) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ syncobj->handle); ++ syncobj->handle.v = 0; ++ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release); ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (syncobj->monitored_fence) ++ dxgdevice_remove_syncobj(syncobj); ++ else ++ dxgadapter_remove_syncobj(syncobj); ++ if (syncobj->adapter) { ++ kref_put(&syncobj->adapter->adapter_kref, ++ dxgadapter_release); ++ syncobj->adapter = NULL; ++ } ++ } ++ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release); ++} ++ ++void dxgsyncobject_stop(struct dxgsyncobject *syncobj) ++{ ++ 
int stopped = test_and_set_bit(1, &syncobj->flags); ++ ++ if (!stopped) { ++ DXG_TRACE("Stopping syncobj"); ++ if (syncobj->monitored_fence) { ++ if (syncobj->mapped_address) { ++ int ret = ++ dxg_unmap_iospace(syncobj->mapped_address, ++ PAGE_SIZE); ++ ++ (void)ret; ++ DXG_TRACE("unmap fence %d %p", ++ ret, syncobj->mapped_address); ++ syncobj->mapped_address = NULL; ++ } ++ } ++ } ++} ++ ++void dxgsyncobject_release(struct kref *refcount) ++{ ++ struct dxgsyncobject *syncobj; ++ ++ syncobj = container_of(refcount, struct dxgsyncobject, syncobj_kref); ++ kfree(syncobj); ++} +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index fa053fb6ac9c..1b9410c9152b 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -38,6 +38,7 @@ struct dxgdevice; + struct dxgcontext; + struct dxgallocation; + struct dxgresource; ++struct dxgsyncobject; + + /* + * Driver private data. +@@ -100,6 +101,56 @@ int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev); + void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch); + void dxgvmbuschannel_receive(void *ctx); + ++/* ++ * This is GPU synchronization object, which is used to synchronize execution ++ * between GPU contextx/hardware queues or for tracking GPU execution progress. ++ * A dxgsyncobject is created when somebody creates a syncobject or opens a ++ * shared syncobject. ++ * A syncobject belongs to an adapter, unless it is a cross-adapter object. ++ * Cross adapter syncobjects are currently not implemented. ++ * ++ * D3DDDI_MONITORED_FENCE and D3DDDI_PERIODIC_MONITORED_FENCE are called ++ * "device" syncobject, because the belong to a device (dxgdevice). ++ * Device syncobjects are inserted to a list in dxgdevice. ++ * ++ */ ++struct dxgsyncobject { ++ struct kref syncobj_kref; ++ enum d3dddi_synchronizationobject_type type; ++ /* ++ * List entry in dxgdevice for device sync objects. ++ * List entry in dxgadapter for other objects ++ */ ++ struct list_head syncobj_list_entry; ++ /* Adapter, the syncobject belongs to. NULL for stopped sync obejcts. */ ++ struct dxgadapter *adapter; ++ /* ++ * Pointer to the device, which was used to create the object. ++ * This is NULL for non-device syncbjects ++ */ ++ struct dxgdevice *device; ++ struct dxgprocess *process; ++ /* CPU virtual address of the fence value for "device" syncobjects */ ++ void *mapped_address; ++ /* Handle in the process handle table */ ++ struct d3dkmthandle handle; ++ /* Cached handle of the device. Used to avoid device dereference. */ ++ struct d3dkmthandle device_handle; ++ union { ++ struct { ++ /* Must be the first bit */ ++ u32 destroyed:1; ++ /* Must be the second bit */ ++ u32 stopped:1; ++ /* device syncobject */ ++ u32 monitored_fence:1; ++ u32 shared:1; ++ u32 reserved:27; ++ }; ++ long flags; ++ }; ++}; ++ + /* + * The structure defines an offered vGPU vm bus channel. 
+ */ +@@ -109,6 +160,20 @@ struct dxgvgpuchannel { + struct hv_device *hdev; + }; + ++struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct dxgadapter *adapter, ++ enum ++ d3dddi_synchronizationobject_type ++ type, ++ struct ++ d3dddi_synchronizationobject_flags ++ flags); ++void dxgsyncobject_destroy(struct dxgprocess *process, ++ struct dxgsyncobject *syncobj); ++void dxgsyncobject_stop(struct dxgsyncobject *syncobj); ++void dxgsyncobject_release(struct kref *refcount); ++ + struct dxgglobal { + struct dxgdriver *drvdata; + struct dxgvmbuschannel channel; +@@ -271,6 +336,8 @@ struct dxgadapter { + struct list_head adapter_list_entry; + /* The list of dxgprocess_adapter entries */ + struct list_head adapter_process_list_head; ++ /* List of all non-device dxgsyncobject objects */ ++ struct list_head syncobj_list_head; + /* This lock protects shared resource and syncobject lists */ + struct rw_semaphore shared_resource_list_lock; + struct pci_dev *pci_dev; +@@ -296,6 +363,9 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter); + int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter); + void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter); + void dxgadapter_release_lock_exclusive(struct dxgadapter *adapter); ++void dxgadapter_add_syncobj(struct dxgadapter *adapter, ++ struct dxgsyncobject *so); ++void dxgadapter_remove_syncobj(struct dxgsyncobject *so); + void dxgadapter_add_process(struct dxgadapter *adapter, + struct dxgprocess_adapter *process_info); + void dxgadapter_remove_process(struct dxgprocess_adapter *process_info); +@@ -325,6 +395,7 @@ struct dxgdevice { + struct list_head resource_list_head; + /* List of paging queues. Protected by process handle table lock. */ + struct list_head pqueue_list_head; ++ struct list_head syncobj_list_head; + struct d3dkmthandle handle; + enum d3dkmt_deviceexecution_state execution_state; + u32 handle_valid; +@@ -345,6 +416,8 @@ void dxgdevice_remove_alloc_safe(struct dxgdevice *dev, + struct dxgallocation *a); + void dxgdevice_add_resource(struct dxgdevice *dev, struct dxgresource *res); + void dxgdevice_remove_resource(struct dxgdevice *dev, struct dxgresource *res); ++void dxgdevice_add_syncobj(struct dxgdevice *dev, struct dxgsyncobject *so); ++void dxgdevice_remove_syncobj(struct dxgsyncobject *so); + bool dxgdevice_is_active(struct dxgdevice *dev); + void dxgdevice_acquire_context_list_lock(struct dxgdevice *dev); + void dxgdevice_release_context_list_lock(struct dxgdevice *dev); +@@ -455,6 +528,7 @@ void dxgallocation_free_handle(struct dxgallocation *a); + long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2); + long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2); + ++int dxg_unmap_iospace(void *va, u32 size); + /* + * The convention is that VNBus instance id is a GUID, but the host sets + * the lower part of the value to the host adapter LUID. 
The function +@@ -514,6 +588,12 @@ int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + int dxgvmb_send_destroy_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + struct d3dkmt_destroyallocation2 *args, + struct d3dkmthandle *alloc_handles); ++int dxgvmb_send_create_sync_object(struct dxgprocess *pr, ++ struct dxgadapter *adapter, ++ struct d3dkmt_createsynchronizationobject2 ++ *args, struct dxgsyncobject *so); ++int dxgvmb_send_destroy_sync_object(struct dxgprocess *pr, ++ struct d3dkmthandle h); + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 053ce6f3e083..9bc8931c5043 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -162,6 +162,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, + init_rwsem(&adapter->core_lock); + + INIT_LIST_HEAD(&adapter->adapter_process_list_head); ++ INIT_LIST_HEAD(&adapter->syncobj_list_head); + init_rwsem(&adapter->shared_resource_list_lock); + adapter->pci_dev = dev; + guid_to_luid(guid, &adapter->luid); +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index ca307beb9a9a..a41985ef438d 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -59,6 +59,7 @@ void dxgprocess_destroy(struct dxgprocess *process) + enum hmgrentry_type t; + struct d3dkmthandle h; + void *o; ++ struct dxgsyncobject *syncobj; + struct dxgprocess_adapter *entry; + struct dxgprocess_adapter *tmp; + +@@ -84,6 +85,21 @@ void dxgprocess_destroy(struct dxgprocess *process) + } + } + ++ i = 0; ++ while (hmgrtable_next_entry(&process->handle_table, &i, &t, &h, &o)) { ++ switch (t) { ++ case HMGRENTRY_TYPE_DXGSYNCOBJECT: ++ DXG_TRACE("Destroy syncobj: %p %d", o, i); ++ syncobj = o; ++ syncobj->handle.v = 0; ++ dxgsyncobject_destroy(process, syncobj); ++ break; ++ default: ++ DXG_ERR("invalid entry in handle table %d", t); ++ break; ++ } ++ } ++ + hmgrtable_destroy(&process->handle_table); + hmgrtable_destroy(&process->local_handle_table); + } +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 14b51a3c6afc..d323afc85249 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -495,6 +495,88 @@ dxgvmb_send_sync_msg_ntstatus(struct dxgvmbuschannel *channel, + return ret; + } + ++static int check_iospace_address(unsigned long address, u32 size) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (address < dxgglobal->mmiospace_base || ++ size > dxgglobal->mmiospace_size || ++ address >= (dxgglobal->mmiospace_base + ++ dxgglobal->mmiospace_size - size)) { ++ DXG_ERR("invalid iospace address %lx", address); ++ return -EINVAL; ++ } ++ return 0; ++} ++ ++int dxg_unmap_iospace(void *va, u32 size) ++{ ++ int ret = 0; ++ ++ DXG_TRACE("Unmapping io space: %p %x", va, size); ++ ++ /* ++ * When an app calls exit(), dxgkrnl is called to close the device ++ * with current->mm equal to NULL. 
++ */ ++ if (current->mm) { ++ ret = vm_munmap((unsigned long)va, size); ++ if (ret) { ++ DXG_ERR("vm_munmap failed %d", ret); ++ return -ENOTRECOVERABLE; ++ } ++ } ++ return 0; ++} ++ ++static u8 *dxg_map_iospace(u64 iospace_address, u32 size, ++ unsigned long protection, bool cached) ++{ ++ struct vm_area_struct *vma; ++ unsigned long va; ++ int ret = 0; ++ ++ DXG_TRACE("Mapping io space: %llx %x %lx", ++ iospace_address, size, protection); ++ if (check_iospace_address(iospace_address, size) < 0) { ++ DXG_ERR("invalid address to map"); ++ return NULL; ++ } ++ ++ va = vm_mmap(NULL, 0, size, protection, MAP_SHARED | MAP_ANONYMOUS, 0); ++ if ((long)va <= 0) { ++ DXG_ERR("vm_mmap failed %lx %d", va, size); ++ return NULL; ++ } ++ ++ mmap_read_lock(current->mm); ++ vma = find_vma(current->mm, (unsigned long)va); ++ if (vma) { ++ pgprot_t prot = vma->vm_page_prot; ++ ++ if (!cached) ++ prot = pgprot_writecombine(prot); ++ DXG_TRACE("vma: %lx %lx %lx", ++ vma->vm_start, vma->vm_end, va); ++ vma->vm_pgoff = iospace_address >> PAGE_SHIFT; ++ ret = io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff, ++ size, prot); ++ if (ret) ++ DXG_ERR("io_remap_pfn_range failed: %d", ret); ++ } else { ++ DXG_ERR("failed to find vma: %p %lx", vma, va); ++ ret = -ENOMEM; ++ } ++ mmap_read_unlock(current->mm); ++ ++ if (ret) { ++ dxg_unmap_iospace((void *)va, size); ++ return NULL; ++ } ++ DXG_TRACE("Mapped VA: %lx", va); ++ return (u8 *) va; ++} ++ + /* + * Global messages to the host + */ +@@ -613,6 +695,39 @@ int dxgvmb_send_destroy_process(struct d3dkmthandle process) + return ret; + } + ++int dxgvmb_send_destroy_sync_object(struct dxgprocess *process, ++ struct d3dkmthandle sync_object) ++{ ++ struct dxgkvmb_command_destroysyncobject *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, NULL, process, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ command_vm_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_DESTROYSYNCOBJECT, ++ process->host_handle); ++ command->sync_object = sync_object; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(dxgglobal_get_dxgvmbuschannel(), ++ msg.hdr, msg.size); ++ ++ dxgglobal_release_channel_lock(); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + /* + * Virtual GPU messages to the host + */ +@@ -1023,7 +1138,11 @@ int create_existing_sysmem(struct dxgdevice *device, + ret = -ENOMEM; + goto cleanup; + } ++#ifdef _MAIN_KERNEL_ + DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle); ++#else ++ DXG_TRACE("New gpadl %d", dxgalloc->gpadl); ++#endif + + command_vgpu_to_host_init2(&set_store_command->hdr, + DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE, +@@ -1501,6 +1620,92 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + return ret; + } + ++static void set_result(struct d3dkmt_createsynchronizationobject2 *args, ++ u64 fence_gpu_va, u8 *va) ++{ ++ args->info.periodic_monitored_fence.fence_gpu_virtual_address = ++ fence_gpu_va; ++ args->info.periodic_monitored_fence.fence_cpu_virtual_address = va; ++} ++ ++int ++dxgvmb_send_create_sync_object(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_createsynchronizationobject2 *args, ++ struct dxgsyncobject *syncobj) ++{ ++ struct dxgkvmb_command_createsyncobject_return result = { }; ++ struct dxgkvmb_command_createsyncobject *command; ++ int ret; ++ u8 *va = 0; ++ struct dxgvmbusmsg msg = {.hdr 
= NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CREATESYNCOBJECT, ++ process->host_handle); ++ command->args = *args; ++ command->client_hint = 1; /* CLIENTHINT_UMD */ ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, &result, ++ sizeof(result)); ++ if (ret < 0) { ++ DXG_ERR("failed %d", ret); ++ goto cleanup; ++ } ++ args->sync_object = result.sync_object; ++ if (syncobj->shared) { ++ if (result.global_sync_object.v == 0) { ++ DXG_ERR("shared handle is 0"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ args->info.shared_handle = result.global_sync_object; ++ } ++ ++ if (syncobj->monitored_fence) { ++ va = dxg_map_iospace(result.fence_storage_address, PAGE_SIZE, ++ PROT_READ | PROT_WRITE, true); ++ if (va == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ if (args->info.type == _D3DDDI_MONITORED_FENCE) { ++ args->info.monitored_fence.fence_gpu_virtual_address = ++ result.fence_gpu_va; ++ args->info.monitored_fence.fence_cpu_virtual_address = ++ va; ++ { ++ unsigned long value; ++ ++ DXG_TRACE("fence cpu va: %p", va); ++ ret = copy_from_user(&value, va, ++ sizeof(u64)); ++ if (ret) { ++ DXG_ERR("failed to read fence"); ++ ret = -EINVAL; ++ } else { ++ DXG_TRACE("fence value:%lx", ++ value); ++ } ++ } ++ } else { ++ set_result(args, result.fence_gpu_va, va); ++ } ++ syncobj->mapped_address = va; ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 4b7466d1b9f2..bbf5f31cdf81 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -410,4 +410,24 @@ struct dxgkvmb_command_destroycontext { + struct d3dkmthandle context; + }; + ++struct dxgkvmb_command_createsyncobject { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_createsynchronizationobject2 args; ++ u32 client_hint; ++}; ++ ++struct dxgkvmb_command_createsyncobject_return { ++ struct d3dkmthandle sync_object; ++ struct d3dkmthandle global_sync_object; ++ u64 fence_gpu_va; ++ u64 fence_storage_address; ++ u32 fence_storage_offset; ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_destroysyncobject { ++ struct dxgkvmb_command_vm_to_host hdr; ++ struct d3dkmthandle sync_object; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 0eaa577d7ed4..4bba1e209f33 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -1341,6 +1341,132 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_createsynchronizationobject2 args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct dxgsyncobject *syncobj = NULL; ++ bool device_lock_acquired = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ 
if (ret < 0) ++ goto cleanup; ++ ++ device_lock_acquired = true; ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ syncobj = dxgsyncobject_create(process, device, adapter, args.info.type, ++ args.info.flags); ++ if (syncobj == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_create_sync_object(process, adapter, &args, syncobj); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, syncobj, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ args.sync_object); ++ if (ret >= 0) ++ syncobj->handle = args.sync_object; ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (syncobj) { ++ dxgsyncobject_destroy(process, syncobj); ++ if (args.sync_object.v) ++ dxgvmb_send_destroy_sync_object(process, ++ args.sync_object); ++ } ++ } ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_destroysynchronizationobject args; ++ struct dxgsyncobject *syncobj = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ DXG_TRACE("handle 0x%x", args.sync_object.v); ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ syncobj = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ args.sync_object); ++ if (syncobj) { ++ DXG_TRACE("syncobj 0x%p", syncobj); ++ syncobj->handle.v = 0; ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ args.sync_object); ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (syncobj == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ dxgsyncobject_destroy(process, syncobj); ++ ++ ret = dxgvmb_send_destroy_sync_object(process, args.sync_object); ++ ++cleanup: ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, +@@ -1358,7 +1484,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x0d */ {}, + /* 0x0e */ {}, + /* 0x0f */ {}, +-/* 0x10 */ {}, ++/* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT}, + /* 0x11 */ {}, + /* 0x12 */ {}, + /* 0x13 */ {dxgkio_destroy_allocation, LX_DXDESTROYALLOCATION2}, +@@ -1371,7 +1497,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x1a */ {}, + /* 0x1b */ {}, + /* 0x1c */ {}, +-/* 0x1d */ {}, ++/* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT}, + /* 0x1e */ {}, + /* 0x1f */ {}, + /* 0x20 */ {}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index cf670b9c4dc2..4e1069f41d76 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -256,6 +256,97 @@ enum d3dkmdt_standardallocationtype { + _D3DKMDT_STANDARDALLOCATION_GDISURFACE = 4, + }; + ++struct d3dddi_synchronizationobject_flags { 
++ union { ++ struct { ++ __u32 shared:1; ++ __u32 nt_security_sharing:1; ++ __u32 cross_adapter:1; ++ __u32 top_of_pipeline:1; ++ __u32 no_signal:1; ++ __u32 no_wait:1; ++ __u32 no_signal_max_value_on_tdr:1; ++ __u32 no_gpu_access:1; ++ __u32 reserved:23; ++ }; ++ __u32 value; ++ }; ++}; ++ ++enum d3dddi_synchronizationobject_type { ++ _D3DDDI_SYNCHRONIZATION_MUTEX = 1, ++ _D3DDDI_SEMAPHORE = 2, ++ _D3DDDI_FENCE = 3, ++ _D3DDDI_CPU_NOTIFICATION = 4, ++ _D3DDDI_MONITORED_FENCE = 5, ++ _D3DDDI_PERIODIC_MONITORED_FENCE = 6, ++ _D3DDDI_SYNCHRONIZATION_TYPE_LIMIT ++}; ++ ++struct d3dddi_synchronizationobjectinfo2 { ++ enum d3dddi_synchronizationobject_type type; ++ struct d3dddi_synchronizationobject_flags flags; ++ union { ++ struct { ++ __u32 initial_state; ++ } synchronization_mutex; ++ ++ struct { ++ __u32 max_count; ++ __u32 initial_count; ++ } semaphore; ++ ++ struct { ++ __u64 fence_value; ++ } fence; ++ ++ struct { ++ __u64 event; ++ } cpu_notification; ++ ++ struct { ++ __u64 initial_fence_value; ++#ifdef __KERNEL__ ++ void *fence_cpu_virtual_address; ++#else ++ __u64 *fence_cpu_virtual_address; ++#endif ++ __u64 fence_gpu_virtual_address; ++ __u32 engine_affinity; ++ } monitored_fence; ++ ++ struct { ++ struct d3dkmthandle adapter; ++ __u32 vidpn_target_id; ++ __u64 time; ++#ifdef __KERNEL__ ++ void *fence_cpu_virtual_address; ++#else ++ __u64 fence_cpu_virtual_address; ++#endif ++ __u64 fence_gpu_virtual_address; ++ __u32 engine_affinity; ++ } periodic_monitored_fence; ++ ++ struct { ++ __u64 reserved[8]; ++ } reserved; ++ }; ++ struct d3dkmthandle shared_handle; ++}; ++ ++struct d3dkmt_createsynchronizationobject2 { ++ struct d3dkmthandle device; ++ __u32 reserved; ++ struct d3dddi_synchronizationobjectinfo2 info; ++ struct d3dkmthandle sync_object; ++ __u32 reserved1; ++}; ++ ++struct d3dkmt_destroysynchronizationobject { ++ struct d3dkmthandle sync_object; ++}; ++ + enum d3dkmt_standardallocationtype { + _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1, + _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2, +@@ -483,6 +574,8 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x06, struct d3dkmt_createallocation) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) ++#define LX_DXCREATESYNCHRONIZATIONOBJECT \ ++ _IOWR(0x47, 0x10, struct d3dkmt_createsynchronizationobject2) + #define LX_DXDESTROYALLOCATION2 \ + _IOWR(0x47, 0x13, struct d3dkmt_destroyallocation2) + #define LX_DXENUMADAPTERS2 \ +@@ -491,6 +584,8 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x15, struct d3dkmt_closeadapter) + #define LX_DXDESTROYDEVICE \ + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) ++#define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ ++ _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) + #define LX_DXENUMADAPTERS3 \ + _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) + +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1677-drivers-hv-dxgkrnl-Operations-using-sync-objects.patch b/patch/kernel/archive/wsl2-arm64-6.6/1677-drivers-hv-dxgkrnl-Operations-using-sync-objects.patch new file mode 100644 index 000000000000..731bcaac98c6 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1677-drivers-hv-dxgkrnl-Operations-using-sync-objects.patch @@ -0,0 +1,1689 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 1 Feb 2022 13:59:23 -0800 +Subject: drivers: hv: dxgkrnl: Operations using sync objects + +Implement ioctls to submit operations with compute device +sync objects: + - the 
LX_DXSIGNALSYNCHRONIZATIONOBJECT ioctl. + The ioctl is used to submit a signal to a sync object. + - the LX_DXWAITFORSYNCHRONIZATIONOBJECT ioctl. + The ioctl is used to submit a wait for a sync object + - the LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU ioctl + The ioctl is used to signal to a monitored fence sync object + from a CPU thread. + - the LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU ioctl. + The ioctl is used to submit a signal to a monitored fence + sync object.. + - the LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2 ioctl. + The ioctl is used to submit a signal to a monitored fence + sync object. + - the LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU ioctl. + The ioctl is used to submit a wait for a monitored fence + sync object. + +Compute device synchronization objects are used to synchronize +execution of DMA buffers between different execution contexts. +Operations with sync objects include "signal" and "wait". A wait +for a sync object is satisfied when the sync object is signaled. + +A signal operation could be submitted to a compute device context or +the sync object could be signaled by a CPU thread. + +To improve performance, submitting operations to the host is done +asynchronously when the host supports it. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 38 +- + drivers/hv/dxgkrnl/dxgkrnl.h | 62 + + drivers/hv/dxgkrnl/dxgmodule.c | 102 +- + drivers/hv/dxgkrnl/dxgvmbus.c | 219 ++- + drivers/hv/dxgkrnl/dxgvmbus.h | 48 + + drivers/hv/dxgkrnl/ioctl.c | 702 +++++++++- + drivers/hv/dxgkrnl/misc.h | 2 + + include/uapi/misc/d3dkmthk.h | 159 +++ + 8 files changed, 1311 insertions(+), 21 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index d2f2b96527e6..04d827a15c54 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -249,7 +249,7 @@ void dxgdevice_stop(struct dxgdevice *device) + struct dxgallocation *alloc; + struct dxgsyncobject *syncobj; + +- DXG_TRACE("Destroying device: %p", device); ++ DXG_TRACE("Stopping device: %p", device); + dxgdevice_acquire_alloc_list_lock(device); + list_for_each_entry(alloc, &device->alloc_list_head, alloc_list_entry) { + dxgallocation_stop(alloc); +@@ -743,15 +743,13 @@ void dxgallocation_destroy(struct dxgallocation *alloc) + } + #ifdef _MAIN_KERNEL_ + if (alloc->gpadl.gpadl_handle) { +- DXG_TRACE("Teardown gpadl %d", +- alloc->gpadl.gpadl_handle); ++ DXG_TRACE("Teardown gpadl %d", alloc->gpadl.gpadl_handle); + vmbus_teardown_gpadl(dxgglobal_get_vmbus(), &alloc->gpadl); + alloc->gpadl.gpadl_handle = 0; + } + else + if (alloc->gpadl) { +- DXG_TRACE("Teardown gpadl %d", +- alloc->gpadl); ++ DXG_TRACE("Teardown gpadl %d", alloc->gpadl); + vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl); + alloc->gpadl = 0; + } +@@ -901,6 +899,13 @@ struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, + case _D3DDDI_PERIODIC_MONITORED_FENCE: + syncobj->monitored_fence = 1; + break; ++ case _D3DDDI_CPU_NOTIFICATION: ++ syncobj->cpu_event = 1; ++ syncobj->host_event = kzalloc(sizeof(*syncobj->host_event), ++ GFP_KERNEL); ++ if (syncobj->host_event == NULL) ++ goto cleanup; ++ break; + default: + break; + } +@@ -928,6 +933,8 @@ struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, + DXG_TRACE("Syncobj created: %p", syncobj); + return syncobj; + cleanup: ++ if (syncobj->host_event) ++ kfree(syncobj->host_event); + if (syncobj) + kfree(syncobj); + return NULL; +@@ -937,6 +944,7 
@@ void dxgsyncobject_destroy(struct dxgprocess *process, + struct dxgsyncobject *syncobj) + { + int destroyed; ++ struct dxghosteventcpu *host_event; + + DXG_TRACE("Destroying syncobj: %p", syncobj); + +@@ -955,6 +963,16 @@ void dxgsyncobject_destroy(struct dxgprocess *process, + } + hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); + ++ if (syncobj->cpu_event) { ++ host_event = syncobj->host_event; ++ if (host_event->cpu_event) { ++ eventfd_ctx_put(host_event->cpu_event); ++ if (host_event->hdr.event_id) ++ dxgglobal_remove_host_event( ++ &host_event->hdr); ++ host_event->cpu_event = NULL; ++ } ++ } + if (syncobj->monitored_fence) + dxgdevice_remove_syncobj(syncobj); + else +@@ -971,16 +989,14 @@ void dxgsyncobject_destroy(struct dxgprocess *process, + void dxgsyncobject_stop(struct dxgsyncobject *syncobj) + { + int stopped = test_and_set_bit(1, &syncobj->flags); ++ int ret; + + if (!stopped) { + DXG_TRACE("Stopping syncobj"); + if (syncobj->monitored_fence) { + if (syncobj->mapped_address) { +- int ret = +- dxg_unmap_iospace(syncobj->mapped_address, +- PAGE_SIZE); +- +- (void)ret; ++ ret = dxg_unmap_iospace(syncobj->mapped_address, ++ PAGE_SIZE); + DXG_TRACE("unmap fence %d %p", + ret, syncobj->mapped_address); + syncobj->mapped_address = NULL; +@@ -994,5 +1010,7 @@ void dxgsyncobject_release(struct kref *refcount) + struct dxgsyncobject *syncobj; + + syncobj = container_of(refcount, struct dxgsyncobject, syncobj_kref); ++ if (syncobj->host_event) ++ kfree(syncobj->host_event); + kfree(syncobj); + } +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 1b9410c9152b..8431523f42de 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -101,6 +101,29 @@ int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev); + void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch); + void dxgvmbuschannel_receive(void *ctx); + ++/* ++ * The structure describes an event, which will be signaled by ++ * a message from host. ++ */ ++enum dxghosteventtype { ++ dxghostevent_cpu_event = 1, ++}; ++ ++struct dxghostevent { ++ struct list_head host_event_list_entry; ++ u64 event_id; ++ enum dxghosteventtype event_type; ++}; ++ ++struct dxghosteventcpu { ++ struct dxghostevent hdr; ++ struct dxgprocess *process; ++ struct eventfd_ctx *cpu_event; ++ struct completion *completion_event; ++ bool destroy_after_signal; ++ bool remove_from_list; ++}; ++ + /* + * This is GPU synchronization object, which is used to synchronize execution + * between GPU contextx/hardware queues or for tracking GPU execution progress. 
+@@ -130,6 +153,8 @@ struct dxgsyncobject { + */ + struct dxgdevice *device; + struct dxgprocess *process; ++ /* Used by D3DDDI_CPU_NOTIFICATION objects */ ++ struct dxghosteventcpu *host_event; + /* CPU virtual address of the fence value for "device" syncobjects */ + void *mapped_address; + /* Handle in the process handle table */ +@@ -144,6 +169,7 @@ struct dxgsyncobject { + u32 stopped:1; + /* device syncobject */ + u32 monitored_fence:1; ++ u32 cpu_event:1; + u32 shared:1; + u32 reserved:27; + }; +@@ -206,6 +232,11 @@ struct dxgglobal { + /* protects the dxgprocess_adapter lists */ + struct mutex process_adapter_mutex; + ++ /* list of events, waiting to be signaled by the host */ ++ struct list_head host_event_list_head; ++ spinlock_t host_event_list_mutex; ++ atomic64_t host_event_id; ++ + bool global_channel_initialized; + bool async_msg_enabled; + bool misc_registered; +@@ -228,6 +259,11 @@ struct vmbus_channel *dxgglobal_get_vmbus(void); + struct dxgvmbuschannel *dxgglobal_get_dxgvmbuschannel(void); + void dxgglobal_acquire_process_adapter_lock(void); + void dxgglobal_release_process_adapter_lock(void); ++void dxgglobal_add_host_event(struct dxghostevent *hostevent); ++void dxgglobal_remove_host_event(struct dxghostevent *hostevent); ++u64 dxgglobal_new_host_event_id(void); ++void dxgglobal_signal_host_event(u64 event_id); ++struct dxghostevent *dxgglobal_get_host_event(u64 event_id); + int dxgglobal_acquire_channel_lock(void); + void dxgglobal_release_channel_lock(void); + +@@ -594,6 +630,31 @@ int dxgvmb_send_create_sync_object(struct dxgprocess *pr, + *args, struct dxgsyncobject *so); + int dxgvmb_send_destroy_sync_object(struct dxgprocess *pr, + struct d3dkmthandle h); ++int dxgvmb_send_signal_sync_object(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dddicb_signalflags flags, ++ u64 legacy_fence_value, ++ struct d3dkmthandle context, ++ u32 object_count, ++ struct d3dkmthandle *object, ++ u32 context_count, ++ struct d3dkmthandle *contexts, ++ u32 fence_count, u64 *fences, ++ struct eventfd_ctx *cpu_event, ++ struct d3dkmthandle device); ++int dxgvmb_send_wait_sync_object_gpu(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle context, ++ u32 object_count, ++ struct d3dkmthandle *objects, ++ u64 *fences, ++ bool legacy_fence); ++int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct ++ d3dkmt_waitforsynchronizationobjectfromcpu ++ *args, ++ u64 cpu_event); + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); +@@ -609,6 +670,7 @@ int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel, + void *command, + u32 cmd_size); + ++void signal_host_cpu_event(struct dxghostevent *eventhdr); + int ntstatus2int(struct ntstatus status); + + #ifdef DEBUG +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 9bc8931c5043..5a5ca8791d27 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -123,6 +123,102 @@ static struct dxgadapter *find_adapter(struct winluid *luid) + return adapter; + } + ++void dxgglobal_add_host_event(struct dxghostevent *event) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ spin_lock_irq(&dxgglobal->host_event_list_mutex); ++ list_add_tail(&event->host_event_list_entry, ++ &dxgglobal->host_event_list_head); ++ spin_unlock_irq(&dxgglobal->host_event_list_mutex); ++} ++ ++void dxgglobal_remove_host_event(struct 
dxghostevent *event) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ spin_lock_irq(&dxgglobal->host_event_list_mutex); ++ if (event->host_event_list_entry.next != NULL) { ++ list_del(&event->host_event_list_entry); ++ event->host_event_list_entry.next = NULL; ++ } ++ spin_unlock_irq(&dxgglobal->host_event_list_mutex); ++} ++ ++void signal_host_cpu_event(struct dxghostevent *eventhdr) ++{ ++ struct dxghosteventcpu *event = (struct dxghosteventcpu *)eventhdr; ++ ++ if (event->remove_from_list || ++ event->destroy_after_signal) { ++ list_del(&eventhdr->host_event_list_entry); ++ eventhdr->host_event_list_entry.next = NULL; ++ } ++ if (event->cpu_event) { ++ DXG_TRACE("signal cpu event"); ++ eventfd_signal(event->cpu_event, 1); ++ if (event->destroy_after_signal) ++ eventfd_ctx_put(event->cpu_event); ++ } else { ++ DXG_TRACE("signal completion"); ++ complete(event->completion_event); ++ } ++ if (event->destroy_after_signal) { ++ DXG_TRACE("destroying event %p", event); ++ kfree(event); ++ } ++} ++ ++void dxgglobal_signal_host_event(u64 event_id) ++{ ++ struct dxghostevent *event; ++ unsigned long flags; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ DXG_TRACE("Signaling host event %lld", event_id); ++ ++ spin_lock_irqsave(&dxgglobal->host_event_list_mutex, flags); ++ list_for_each_entry(event, &dxgglobal->host_event_list_head, ++ host_event_list_entry) { ++ if (event->event_id == event_id) { ++ DXG_TRACE("found event to signal"); ++ if (event->event_type == dxghostevent_cpu_event) ++ signal_host_cpu_event(event); ++ else ++ DXG_ERR("Unknown host event type"); ++ break; ++ } ++ } ++ spin_unlock_irqrestore(&dxgglobal->host_event_list_mutex, flags); ++} ++ ++struct dxghostevent *dxgglobal_get_host_event(u64 event_id) ++{ ++ struct dxghostevent *entry; ++ struct dxghostevent *event = NULL; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ spin_lock_irq(&dxgglobal->host_event_list_mutex); ++ list_for_each_entry(entry, &dxgglobal->host_event_list_head, ++ host_event_list_entry) { ++ if (entry->event_id == event_id) { ++ list_del(&entry->host_event_list_entry); ++ entry->host_event_list_entry.next = NULL; ++ event = entry; ++ break; ++ } ++ } ++ spin_unlock_irq(&dxgglobal->host_event_list_mutex); ++ return event; ++} ++ ++u64 dxgglobal_new_host_event_id(void) ++{ ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ return atomic64_inc_return(&dxgglobal->host_event_id); ++} ++ + void dxgglobal_acquire_process_adapter_lock(void) + { + struct dxgglobal *dxgglobal = dxggbl(); +@@ -720,12 +816,16 @@ static struct dxgglobal *dxgglobal_create(void) + INIT_LIST_HEAD(&dxgglobal->vgpu_ch_list_head); + INIT_LIST_HEAD(&dxgglobal->adapter_list_head); + init_rwsem(&dxgglobal->adapter_list_lock); +- + init_rwsem(&dxgglobal->channel_lock); + ++ INIT_LIST_HEAD(&dxgglobal->host_event_list_head); ++ spin_lock_init(&dxgglobal->host_event_list_mutex); ++ atomic64_set(&dxgglobal->host_event_id, 1); ++ + #ifdef DEBUG + dxgk_validate_ioctls(); + #endif ++ + return dxgglobal; + } + +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index d323afc85249..6b2dea24a509 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -281,6 +281,22 @@ static void command_vm_to_host_init1(struct dxgkvmb_command_vm_to_host *command, + command->channel_type = DXGKVMB_VM_TO_HOST; + } + ++static void signal_guest_event(struct dxgkvmb_command_host_to_vm *packet, ++ u32 packet_length) ++{ ++ struct dxgkvmb_command_signalguestevent *command = (void *)packet; ++ ++ if (packet_length < 
sizeof(struct dxgkvmb_command_signalguestevent)) { ++ DXG_ERR("invalid signal guest event packet size"); ++ return; ++ } ++ if (command->event == 0) { ++ DXG_ERR("invalid event pointer"); ++ return; ++ } ++ dxgglobal_signal_host_event(command->event); ++} ++ + static void process_inband_packet(struct dxgvmbuschannel *channel, + struct vmpacket_descriptor *desc) + { +@@ -297,6 +313,7 @@ static void process_inband_packet(struct dxgvmbuschannel *channel, + switch (packet->command_type) { + case DXGK_VMBCOMMAND_SIGNALGUESTEVENT: + case DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE: ++ signal_guest_event(packet, packet_length); + break; + case DXGK_VMBCOMMAND_SENDWNFNOTIFICATION: + break; +@@ -959,7 +976,7 @@ dxgvmb_send_create_context(struct dxgadapter *adapter, + command->priv_drv_data, + args->priv_drv_data_size); + if (ret) { +- dev_err(DXGDEV, ++ DXG_ERR( + "Faled to copy private data to user"); + ret = -EINVAL; + dxgvmb_send_destroy_context(adapter, process, +@@ -1706,6 +1723,206 @@ dxgvmb_send_create_sync_object(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_signal_sync_object(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dddicb_signalflags flags, ++ u64 legacy_fence_value, ++ struct d3dkmthandle context, ++ u32 object_count, ++ struct d3dkmthandle __user *objects, ++ u32 context_count, ++ struct d3dkmthandle __user *contexts, ++ u32 fence_count, ++ u64 __user *fences, ++ struct eventfd_ctx *cpu_event_handle, ++ struct d3dkmthandle device) ++{ ++ int ret; ++ struct dxgkvmb_command_signalsyncobject *command; ++ u32 object_size = object_count * sizeof(struct d3dkmthandle); ++ u32 context_size = context_count * sizeof(struct d3dkmthandle); ++ u32 fence_size = fences ? fence_count * sizeof(u64) : 0; ++ u8 *current_pos; ++ u32 cmd_size = sizeof(struct dxgkvmb_command_signalsyncobject) + ++ object_size + context_size + fence_size; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (context.v) ++ cmd_size += sizeof(struct d3dkmthandle); ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_SIGNALSYNCOBJECT, ++ process->host_handle); ++ ++ if (flags.enqueue_cpu_event) ++ command->cpu_event_handle = (u64) cpu_event_handle; ++ else ++ command->device = device; ++ command->flags = flags; ++ command->fence_value = legacy_fence_value; ++ command->object_count = object_count; ++ command->context_count = context_count; ++ current_pos = (u8 *) &command[1]; ++ ret = copy_from_user(current_pos, objects, object_size); ++ if (ret) { ++ DXG_ERR("Failed to read objects %p %d", ++ objects, object_size); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ current_pos += object_size; ++ if (context.v) { ++ command->context_count++; ++ *(struct d3dkmthandle *) current_pos = context; ++ current_pos += sizeof(struct d3dkmthandle); ++ } ++ if (context_size) { ++ ret = copy_from_user(current_pos, contexts, context_size); ++ if (ret) { ++ DXG_ERR("Failed to read contexts %p %d", ++ contexts, context_size); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ current_pos += context_size; ++ } ++ if (fence_size) { ++ ret = copy_from_user(current_pos, fences, fence_size); ++ if (ret) { ++ DXG_ERR("Failed to read fences %p %d", ++ fences, fence_size); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ if (dxgglobal->async_msg_enabled) { ++ command->hdr.async_msg = 1; ++ ret = dxgvmb_send_async_msg(msg.channel, msg.hdr, msg.size); 
++ } else { ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, ++ msg.size); ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct ++ d3dkmt_waitforsynchronizationobjectfromcpu ++ *args, ++ u64 cpu_event) ++{ ++ int ret = -EINVAL; ++ struct dxgkvmb_command_waitforsyncobjectfromcpu *command; ++ u32 object_size = args->object_count * sizeof(struct d3dkmthandle); ++ u32 fence_size = args->object_count * sizeof(u64); ++ u8 *current_pos; ++ u32 cmd_size = sizeof(*command) + object_size + fence_size; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_WAITFORSYNCOBJECTFROMCPU, ++ process->host_handle); ++ command->device = args->device; ++ command->flags = args->flags; ++ command->object_count = args->object_count; ++ command->guest_event_pointer = (u64) cpu_event; ++ current_pos = (u8 *) &command[1]; ++ ++ ret = copy_from_user(current_pos, args->objects, object_size); ++ if (ret) { ++ DXG_ERR("failed to copy objects"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ current_pos += object_size; ++ ret = copy_from_user(current_pos, args->fence_values, ++ fence_size); ++ if (ret) { ++ DXG_ERR("failed to copy fences"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_wait_sync_object_gpu(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle context, ++ u32 object_count, ++ struct d3dkmthandle *objects, ++ u64 *fences, ++ bool legacy_fence) ++{ ++ int ret; ++ struct dxgkvmb_command_waitforsyncobjectfromgpu *command; ++ u32 fence_size = object_count * sizeof(u64); ++ u32 object_size = object_count * sizeof(struct d3dkmthandle); ++ u8 *current_pos; ++ u32 cmd_size = object_size + fence_size - sizeof(u64) + ++ sizeof(struct dxgkvmb_command_waitforsyncobjectfromgpu); ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ if (object_count == 0 || object_count > D3DDDI_MAX_OBJECT_WAITED_ON) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_WAITFORSYNCOBJECTFROMGPU, ++ process->host_handle); ++ command->context = context; ++ command->object_count = object_count; ++ command->legacy_fence_object = legacy_fence; ++ current_pos = (u8 *) command->fence_values; ++ memcpy(current_pos, fences, fence_size); ++ current_pos += fence_size; ++ memcpy(current_pos, objects, object_size); ++ ++ if (dxgglobal->async_msg_enabled) { ++ command->hdr.async_msg = 1; ++ ret = dxgvmb_send_async_msg(msg.channel, msg.hdr, msg.size); ++ } else { ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, ++ msg.size); ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 
bbf5f31cdf81..89fecbcefbc8 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -165,6 +165,13 @@ struct dxgkvmb_command_host_to_vm { + enum dxgkvmb_commandtype_host_to_vm command_type; + }; + ++struct dxgkvmb_command_signalguestevent { ++ struct dxgkvmb_command_host_to_vm hdr; ++ u64 event; ++ u64 process_id; ++ bool dereference_event; ++}; ++ + /* Returns ntstatus */ + struct dxgkvmb_command_setiospaceregion { + struct dxgkvmb_command_vm_to_host hdr; +@@ -430,4 +437,45 @@ struct dxgkvmb_command_destroysyncobject { + struct d3dkmthandle sync_object; + }; + ++/* The command returns ntstatus */ ++struct dxgkvmb_command_signalsyncobject { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ u32 object_count; ++ struct d3dddicb_signalflags flags; ++ u32 context_count; ++ u64 fence_value; ++ union { ++ /* Pointer to the guest event object */ ++ u64 cpu_event_handle; ++ /* Non zero when signal from CPU is done */ ++ struct d3dkmthandle device; ++ }; ++ /* struct d3dkmthandle ObjectHandleArray[object_count] */ ++ /* struct d3dkmthandle ContextArray[context_count] */ ++ /* u64 MonitoredFenceValueArray[object_count] */ ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_waitforsyncobjectfromcpu { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ u32 object_count; ++ struct d3dddi_waitforsynchronizationobjectfromcpu_flags flags; ++ u64 guest_event_pointer; ++ bool dereference_event; ++ /* struct d3dkmthandle ObjectHandleArray[object_count] */ ++ /* u64 FenceValueArray [object_count] */ ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_waitforsyncobjectfromgpu { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle context; ++ /* Must be 1 when bLegacyFenceObject is TRUE */ ++ u32 object_count; ++ bool legacy_fence_object; ++ u64 fence_values[1]; ++ /* struct d3dkmthandle ObjectHandles[object_count] */ ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 4bba1e209f33..0025e1ee2d4d 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -759,7 +759,7 @@ get_standard_alloc_priv_data(struct dxgdevice *device, + res_priv_data = vzalloc(res_priv_data_size); + if (res_priv_data == NULL) { + ret = -ENOMEM; +- dev_err(DXGDEV, ++ DXG_ERR( + "failed to alloc memory for res priv data: %d", + res_priv_data_size); + goto cleanup; +@@ -1065,7 +1065,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + alloc_info[i].priv_drv_data, + priv_data_size); + if (ret) { +- dev_err(DXGDEV, ++ DXG_ERR( + "failed to copy priv data"); + ret = -EFAULT; + goto cleanup; +@@ -1348,8 +1348,10 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + struct d3dkmt_createsynchronizationobject2 args; + struct dxgdevice *device = NULL; + struct dxgadapter *adapter = NULL; ++ struct eventfd_ctx *event = NULL; + struct dxgsyncobject *syncobj = NULL; + bool device_lock_acquired = false; ++ struct dxghosteventcpu *host_event = NULL; + + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { +@@ -1384,6 +1386,27 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + goto cleanup; + } + ++ if (args.info.type == _D3DDDI_CPU_NOTIFICATION) { ++ event = eventfd_ctx_fdget((int) ++ args.info.cpu_notification.event); ++ if (IS_ERR(event)) { ++ DXG_ERR("failed to reference the event"); ++ event = NULL; ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ host_event = syncobj->host_event; ++ 
host_event->hdr.event_id = dxgglobal_new_host_event_id(); ++ host_event->cpu_event = event; ++ host_event->remove_from_list = false; ++ host_event->destroy_after_signal = false; ++ host_event->hdr.event_type = dxghostevent_cpu_event; ++ dxgglobal_add_host_event(&host_event->hdr); ++ args.info.cpu_notification.event = host_event->hdr.event_id; ++ DXG_TRACE("creating CPU notification event: %lld", ++ args.info.cpu_notification.event); ++ } ++ + ret = dxgvmb_send_create_sync_object(process, adapter, &args, syncobj); + if (ret < 0) + goto cleanup; +@@ -1411,7 +1434,10 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + if (args.sync_object.v) + dxgvmb_send_destroy_sync_object(process, + args.sync_object); ++ event = NULL; + } ++ if (event) ++ eventfd_ctx_put(event); + } + if (adapter) + dxgadapter_release_lock_shared(adapter); +@@ -1467,6 +1493,659 @@ dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_signal_sync_object(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_signalsynchronizationobject2 args; ++ struct d3dkmt_signalsynchronizationobject2 *__user in_args = inargs; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int ret; ++ u32 fence_count = 1; ++ struct eventfd_ctx *event = NULL; ++ struct dxghosteventcpu *host_event = NULL; ++ bool host_event_added = false; ++ u64 host_event_id = 0; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.context_count >= D3DDDI_MAX_BROADCAST_CONTEXT || ++ args.object_count > D3DDDI_MAX_OBJECT_SIGNALED) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.flags.enqueue_cpu_event) { ++ host_event = kzalloc(sizeof(*host_event), GFP_KERNEL); ++ if (host_event == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ host_event->process = process; ++ event = eventfd_ctx_fdget((int)args.cpu_event_handle); ++ if (IS_ERR(event)) { ++ DXG_ERR("failed to reference the event"); ++ event = NULL; ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ fence_count = 0; ++ host_event->cpu_event = event; ++ host_event_id = dxgglobal_new_host_event_id(); ++ host_event->hdr.event_type = dxghostevent_cpu_event; ++ host_event->hdr.event_id = host_event_id; ++ host_event->remove_from_list = true; ++ host_event->destroy_after_signal = true; ++ dxgglobal_add_host_event(&host_event->hdr); ++ host_event_added = true; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_signal_sync_object(process, adapter, ++ args.flags, args.fence.fence_value, ++ args.context, args.object_count, ++ in_args->object_array, ++ args.context_count, ++ in_args->contexts, fence_count, ++ NULL, (void *)host_event_id, ++ zerohandle); ++ ++ /* ++ * When the send operation succeeds, the host event will be destroyed ++ * after signal from the host ++ */ ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (host_event_added) { ++ /* The event might be signaled and destroyed by host */ ++ host_event = (struct dxghosteventcpu *) ++ dxgglobal_get_host_event(host_event_id); ++ if (host_event) { ++ eventfd_ctx_put(event); ++ event = NULL; ++ kfree(host_event); ++ host_event = NULL; ++ } ++ } ++ if 
(event) ++ eventfd_ctx_put(event); ++ if (host_event) ++ kfree(host_event); ++ } ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_signal_sync_object_cpu(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_signalsynchronizationobjectfromcpu args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ if (args.object_count == 0 || ++ args.object_count > D3DDDI_MAX_OBJECT_SIGNALED) { ++ DXG_TRACE("Too many syncobjects : %d", args.object_count); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_signal_sync_object(process, adapter, ++ args.flags, 0, zerohandle, ++ args.object_count, args.objects, 0, ++ NULL, args.object_count, ++ args.fence_values, NULL, ++ args.device); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_signal_sync_object_gpu(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_signalsynchronizationobjectfromgpu args; ++ struct d3dkmt_signalsynchronizationobjectfromgpu *__user user_args = ++ inargs; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dddicb_signalflags flags = { }; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count == 0 || ++ args.object_count > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_signal_sync_object(process, adapter, ++ flags, 0, zerohandle, ++ args.object_count, ++ args.objects, 1, ++ &user_args->context, ++ args.object_count, ++ args.monitored_fence_values, NULL, ++ zerohandle); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_signal_sync_object_gpu2(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_signalsynchronizationobjectfromgpu2 args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmthandle context_handle; ++ struct eventfd_ctx *event = NULL; ++ u64 *fences = NULL; ++ u32 fence_count = 0; ++ int ret; ++ struct dxghosteventcpu *host_event = NULL; ++ bool host_event_added = false; ++ u64 host_event_id = 0; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input 
args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.flags.enqueue_cpu_event) { ++ if (args.object_count != 0 || args.cpu_event_handle == 0) { ++ DXG_ERR("Bad input in EnqueueCpuEvent: %d %lld", ++ args.object_count, args.cpu_event_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } else if (args.object_count == 0 || ++ args.object_count > DXG_MAX_VM_BUS_PACKET_SIZE || ++ args.context_count == 0 || ++ args.context_count > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("Invalid input: %d %d", ++ args.object_count, args.context_count); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = copy_from_user(&context_handle, args.contexts, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy context handle"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.flags.enqueue_cpu_event) { ++ host_event = kzalloc(sizeof(*host_event), GFP_KERNEL); ++ if (host_event == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ host_event->process = process; ++ event = eventfd_ctx_fdget((int)args.cpu_event_handle); ++ if (IS_ERR(event)) { ++ DXG_ERR("failed to reference the event"); ++ event = NULL; ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ fence_count = 0; ++ host_event->cpu_event = event; ++ host_event_id = dxgglobal_new_host_event_id(); ++ host_event->hdr.event_id = host_event_id; ++ host_event->hdr.event_type = dxghostevent_cpu_event; ++ host_event->remove_from_list = true; ++ host_event->destroy_after_signal = true; ++ dxgglobal_add_host_event(&host_event->hdr); ++ host_event_added = true; ++ } else { ++ fences = args.monitored_fence_values; ++ fence_count = args.object_count; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ context_handle); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_signal_sync_object(process, adapter, ++ args.flags, 0, zerohandle, ++ args.object_count, args.objects, ++ args.context_count, args.contexts, ++ fence_count, fences, ++ (void *)host_event_id, zerohandle); ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (host_event_added) { ++ /* The event might be signaled and destroyed by host */ ++ host_event = (struct dxghosteventcpu *) ++ dxgglobal_get_host_event(host_event_id); ++ if (host_event) { ++ eventfd_ctx_put(event); ++ event = NULL; ++ kfree(host_event); ++ host_event = NULL; ++ } ++ } ++ if (event) ++ eventfd_ctx_put(event); ++ if (host_event) ++ kfree(host_event); ++ } ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_wait_sync_object(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_waitforsynchronizationobject2 args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count > D3DDDI_MAX_OBJECT_WAITED_ON || ++ args.object_count == 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ 
if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ DXG_TRACE("Fence value: %lld", args.fence.fence_value); ++ ret = dxgvmb_send_wait_sync_object_gpu(process, adapter, ++ args.context, args.object_count, ++ args.object_array, ++ &args.fence.fence_value, true); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_wait_sync_object_cpu(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_waitforsynchronizationobjectfromcpu args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct eventfd_ctx *event = NULL; ++ struct dxghosteventcpu host_event = { }; ++ struct dxghosteventcpu *async_host_event = NULL; ++ struct completion local_event = { }; ++ u64 event_id = 0; ++ int ret; ++ bool host_event_added = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count > DXG_MAX_VM_BUS_PACKET_SIZE || ++ args.object_count == 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.async_event) { ++ async_host_event = kzalloc(sizeof(*async_host_event), ++ GFP_KERNEL); ++ if (async_host_event == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ async_host_event->process = process; ++ event = eventfd_ctx_fdget((int)args.async_event); ++ if (IS_ERR(event)) { ++ DXG_ERR("failed to reference the event"); ++ event = NULL; ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ async_host_event->cpu_event = event; ++ async_host_event->hdr.event_id = dxgglobal_new_host_event_id(); ++ async_host_event->destroy_after_signal = true; ++ async_host_event->hdr.event_type = dxghostevent_cpu_event; ++ dxgglobal_add_host_event(&async_host_event->hdr); ++ event_id = async_host_event->hdr.event_id; ++ host_event_added = true; ++ } else { ++ init_completion(&local_event); ++ host_event.completion_event = &local_event; ++ host_event.hdr.event_id = dxgglobal_new_host_event_id(); ++ host_event.hdr.event_type = dxghostevent_cpu_event; ++ dxgglobal_add_host_event(&host_event.hdr); ++ event_id = host_event.hdr.event_id; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_wait_sync_object_cpu(process, adapter, ++ &args, event_id); ++ if (ret < 0) ++ goto cleanup; ++ ++ if (args.async_event == 0) { ++ dxgadapter_release_lock_shared(adapter); ++ adapter = NULL; ++ ret = wait_for_completion_interruptible(&local_event); ++ if (ret) { ++ DXG_ERR("wait_completion_interruptible: %d", ++ ret); ++ ret = -ERESTARTSYS; ++ } ++ } ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ if (host_event.hdr.event_id) ++ dxgglobal_remove_host_event(&host_event.hdr); ++ if (ret < 0) { ++ if (host_event_added) { ++ async_host_event = (struct dxghosteventcpu *) ++ dxgglobal_get_host_event(event_id); ++ if (async_host_event) { ++ if (async_host_event->hdr.event_type == ++ dxghostevent_cpu_event) { ++ eventfd_ctx_put(event); ++ event = NULL; ++ kfree(async_host_event); ++ async_host_event = NULL; ++ } else { ++ DXG_ERR("Invalid event type"); ++ 
DXGKRNL_ASSERT(0); ++ } ++ } ++ } ++ if (event) ++ eventfd_ctx_put(event); ++ if (async_host_event) ++ kfree(async_host_event); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_waitforsynchronizationobjectfromgpu args; ++ struct dxgcontext *context = NULL; ++ struct d3dkmthandle device_handle = {}; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct dxgsyncobject *syncobj = NULL; ++ struct d3dkmthandle *objects = NULL; ++ u32 object_size; ++ u64 *fences = NULL; ++ int ret; ++ enum hmgrentry_type syncobj_type = HMGRENTRY_TYPE_FREE; ++ bool monitored_fence = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count > DXG_MAX_VM_BUS_PACKET_SIZE || ++ args.object_count == 0) { ++ DXG_ERR("Invalid object count: %d", args.object_count); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ object_size = sizeof(struct d3dkmthandle) * args.object_count; ++ objects = vzalloc(object_size); ++ if (objects == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user(objects, args.objects, object_size); ++ if (ret) { ++ DXG_ERR("failed to copy objects"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); ++ context = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (context) { ++ device_handle = context->device_handle; ++ syncobj_type = ++ hmgrtable_get_object_type(&process->handle_table, ++ objects[0]); ++ } ++ if (device_handle.v == 0) { ++ DXG_ERR("Invalid context handle: %x", args.context.v); ++ ret = -EINVAL; ++ } else { ++ if (syncobj_type == HMGRENTRY_TYPE_MONITOREDFENCE) { ++ monitored_fence = true; ++ } else if (syncobj_type == HMGRENTRY_TYPE_DXGSYNCOBJECT) { ++ syncobj = ++ hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ objects[0]); ++ if (syncobj == NULL) { ++ DXG_ERR("Invalid syncobj: %x", ++ objects[0].v); ++ ret = -EINVAL; ++ } else { ++ monitored_fence = syncobj->monitored_fence; ++ } ++ } else { ++ DXG_ERR("Invalid syncobj type: %x", ++ objects[0].v); ++ ret = -EINVAL; ++ } ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED); ++ ++ if (ret < 0) ++ goto cleanup; ++ ++ if (monitored_fence) { ++ object_size = sizeof(u64) * args.object_count; ++ fences = vzalloc(object_size); ++ if (fences == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user(fences, args.monitored_fence_values, ++ object_size); ++ if (ret) { ++ DXG_ERR("failed to copy fences"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } else { ++ fences = &args.fence_value; ++ } ++ ++ device = dxgprocess_device_by_handle(process, device_handle); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_wait_sync_object_gpu(process, adapter, ++ args.context, args.object_count, ++ objects, fences, ++ !monitored_fence); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ if (objects) ++ vfree(objects); ++ if (fences && fences != &args.fence_value) ++ vfree(fences); ++ ++ DXG_TRACE("ioctl:%s 
%d", errorstr(ret), ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, +@@ -1485,8 +2164,8 @@ static struct ioctl_desc ioctls[] = { + /* 0x0e */ {}, + /* 0x0f */ {}, + /* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT}, +-/* 0x11 */ {}, +-/* 0x12 */ {}, ++/* 0x11 */ {dxgkio_signal_sync_object, LX_DXSIGNALSYNCHRONIZATIONOBJECT}, ++/* 0x12 */ {dxgkio_wait_sync_object, LX_DXWAITFORSYNCHRONIZATIONOBJECT}, + /* 0x13 */ {dxgkio_destroy_allocation, LX_DXDESTROYALLOCATION2}, + /* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2}, + /* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER}, +@@ -1517,17 +2196,22 @@ static struct ioctl_desc ioctls[] = { + /* 0x2e */ {}, + /* 0x2f */ {}, + /* 0x30 */ {}, +-/* 0x31 */ {}, +-/* 0x32 */ {}, +-/* 0x33 */ {}, ++/* 0x31 */ {dxgkio_signal_sync_object_cpu, ++ LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU}, ++/* 0x32 */ {dxgkio_signal_sync_object_gpu, ++ LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU}, ++/* 0x33 */ {dxgkio_signal_sync_object_gpu2, ++ LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2}, + /* 0x34 */ {}, + /* 0x35 */ {}, + /* 0x36 */ {}, + /* 0x37 */ {}, + /* 0x38 */ {}, + /* 0x39 */ {}, +-/* 0x3a */ {}, +-/* 0x3b */ {}, ++/* 0x3a */ {dxgkio_wait_sync_object_cpu, ++ LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU}, ++/* 0x3b */ {dxgkio_wait_sync_object_gpu, ++ LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU}, + /* 0x3c */ {}, + /* 0x3d */ {}, + /* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3}, +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index a51b29a6a68f..ee2ebfdd1c13 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -25,6 +25,8 @@ extern const struct d3dkmthandle zerohandle; + * The locks here are in the order from lowest to highest. + * When a lower lock is held, the higher lock should not be acquired. 
+ * ++ * device_list_mutex ++ * host_event_list_mutex + * channel_lock (VMBus channel lock) + * fd_mutex + * plistmutex (process list mutex) +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 4e1069f41d76..39055b0c1069 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -60,6 +60,9 @@ struct winluid { + + #define D3DKMT_CREATEALLOCATION_MAX 1024 + #define D3DKMT_ADAPTERS_MAX 64 ++#define D3DDDI_MAX_BROADCAST_CONTEXT 64 ++#define D3DDDI_MAX_OBJECT_WAITED_ON 32 ++#define D3DDDI_MAX_OBJECT_SIGNALED 32 + + struct d3dkmt_adapterinfo { + struct d3dkmthandle adapter_handle; +@@ -343,6 +346,148 @@ struct d3dkmt_createsynchronizationobject2 { + __u32 reserved1; + }; + ++struct d3dkmt_waitforsynchronizationobject2 { ++ struct d3dkmthandle context; ++ __u32 object_count; ++ struct d3dkmthandle object_array[D3DDDI_MAX_OBJECT_WAITED_ON]; ++ union { ++ struct { ++ __u64 fence_value; ++ } fence; ++ __u64 reserved[8]; ++ }; ++}; ++ ++struct d3dddicb_signalflags { ++ union { ++ struct { ++ __u32 signal_at_submission:1; ++ __u32 enqueue_cpu_event:1; ++ __u32 allow_fence_rewind:1; ++ __u32 reserved:28; ++ __u32 DXGK_SIGNAL_FLAG_INTERNAL0:1; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_signalsynchronizationobject2 { ++ struct d3dkmthandle context; ++ __u32 object_count; ++ struct d3dkmthandle object_array[D3DDDI_MAX_OBJECT_SIGNALED]; ++ struct d3dddicb_signalflags flags; ++ __u32 context_count; ++ struct d3dkmthandle contexts[D3DDDI_MAX_BROADCAST_CONTEXT]; ++ union { ++ struct { ++ __u64 fence_value; ++ } fence; ++ __u64 cpu_event_handle; ++ __u64 reserved[8]; ++ }; ++}; ++ ++struct d3dddi_waitforsynchronizationobjectfromcpu_flags { ++ union { ++ struct { ++ __u32 wait_any:1; ++ __u32 reserved:31; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_waitforsynchronizationobjectfromcpu { ++ struct d3dkmthandle device; ++ __u32 object_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++ __u64 *fence_values; ++#else ++ __u64 objects; ++ __u64 fence_values; ++#endif ++ __u64 async_event; ++ struct d3dddi_waitforsynchronizationobjectfromcpu_flags flags; ++}; ++ ++struct d3dkmt_signalsynchronizationobjectfromcpu { ++ struct d3dkmthandle device; ++ __u32 object_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++ __u64 *fence_values; ++#else ++ __u64 objects; ++ __u64 fence_values; ++#endif ++ struct d3dddicb_signalflags flags; ++}; ++ ++struct d3dkmt_waitforsynchronizationobjectfromgpu { ++ struct d3dkmthandle context; ++ __u32 object_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++#else ++ __u64 objects; ++#endif ++ union { ++#ifdef __KERNEL__ ++ __u64 *monitored_fence_values; ++#else ++ __u64 monitored_fence_values; ++#endif ++ __u64 fence_value; ++ __u64 reserved[8]; ++ }; ++}; ++ ++struct d3dkmt_signalsynchronizationobjectfromgpu { ++ struct d3dkmthandle context; ++ __u32 object_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++#else ++ __u64 objects; ++#endif ++ union { ++#ifdef __KERNEL__ ++ __u64 *monitored_fence_values; ++#else ++ __u64 monitored_fence_values; ++#endif ++ __u64 reserved[8]; ++ }; ++}; ++ ++struct d3dkmt_signalsynchronizationobjectfromgpu2 { ++ __u32 object_count; ++ __u32 reserved1; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++#else ++ __u64 objects; ++#endif ++ struct d3dddicb_signalflags flags; ++ __u32 context_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *contexts; ++#else ++ __u64 contexts; ++#endif ++ union { ++ __u64 fence_value; ++ __u64 cpu_event_handle; 
++#ifdef __KERNEL__ ++ __u64 *monitored_fence_values; ++#else ++ __u64 monitored_fence_values; ++#endif ++ __u64 reserved[8]; ++ }; ++}; ++ + struct d3dkmt_destroysynchronizationobject { + struct d3dkmthandle sync_object; + }; +@@ -576,6 +721,10 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXCREATESYNCHRONIZATIONOBJECT \ + _IOWR(0x47, 0x10, struct d3dkmt_createsynchronizationobject2) ++#define LX_DXSIGNALSYNCHRONIZATIONOBJECT \ ++ _IOWR(0x47, 0x11, struct d3dkmt_signalsynchronizationobject2) ++#define LX_DXWAITFORSYNCHRONIZATIONOBJECT \ ++ _IOWR(0x47, 0x12, struct d3dkmt_waitforsynchronizationobject2) + #define LX_DXDESTROYALLOCATION2 \ + _IOWR(0x47, 0x13, struct d3dkmt_destroyallocation2) + #define LX_DXENUMADAPTERS2 \ +@@ -586,6 +735,16 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ + _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) ++#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \ ++ _IOWR(0x47, 0x31, struct d3dkmt_signalsynchronizationobjectfromcpu) ++#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU \ ++ _IOWR(0x47, 0x32, struct d3dkmt_signalsynchronizationobjectfromgpu) ++#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2 \ ++ _IOWR(0x47, 0x33, struct d3dkmt_signalsynchronizationobjectfromgpu2) ++#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \ ++ _IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu) ++#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \ ++ _IOWR(0x47, 0x3b, struct d3dkmt_waitforsynchronizationobjectfromgpu) + #define LX_DXENUMADAPTERS3 \ + _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) + +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1678-drivers-hv-dxgkrnl-Sharing-of-dxgresource-objects.patch b/patch/kernel/archive/wsl2-arm64-6.6/1678-drivers-hv-dxgkrnl-Sharing-of-dxgresource-objects.patch new file mode 100644 index 000000000000..b199663fd638 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1678-drivers-hv-dxgkrnl-Sharing-of-dxgresource-objects.patch @@ -0,0 +1,1464 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Mon, 31 Jan 2022 17:52:31 -0800 +Subject: drivers: hv: dxgkrnl: Sharing of dxgresource objects + +Implement creation of shared resources and ioctls for sharing +dxgresource objects between processes in the virtual machine. + +A dxgresource object is a collection of dxgallocation objects. +The driver API allows addition/removal of allocations to a resource, +but has limitations on addition/removal of allocations to a shared +resource. When a resource is "sealed", addition/removal of allocations +is not allowed. + +Resources are shared using file descriptor (FD) handles. The name +"NT handle" is used to be compatible with Windows implementation. + +An FD handle is created by the LX_DXSHAREOBJECTS ioctl. The given FD +handle could be sent to another process using any Linux API. + +To use a shared resource object in other ioctls the object needs to be +opened using its FD handle. An resource object is opened by the +LX_DXOPENRESOURCEFROMNTHANDLE ioctl. This ioctl returns a d3dkmthandle +value, which can be used to reference the resource object. + +The LX_DXQUERYRESOURCEINFOFROMNTHANDLE ioctl is used to query private +driver data of a shared resource object. This private data needs to be +used to actually open the object using the LX_DXOPENRESOURCEFROMNTHANDLE +ioctl. 
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 81 + + drivers/hv/dxgkrnl/dxgkrnl.h | 77 + + drivers/hv/dxgkrnl/dxgmodule.c | 1 + + drivers/hv/dxgkrnl/dxgvmbus.c | 127 ++ + drivers/hv/dxgkrnl/dxgvmbus.h | 30 + + drivers/hv/dxgkrnl/ioctl.c | 792 +++++++++- + include/uapi/misc/d3dkmthk.h | 96 ++ + 7 files changed, 1200 insertions(+), 4 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 04d827a15c54..26fce9aba4f3 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -160,6 +160,17 @@ void dxgadapter_remove_process(struct dxgprocess_adapter *process_info) + list_del(&process_info->adapter_process_list_entry); + } + ++void dxgadapter_remove_shared_resource(struct dxgadapter *adapter, ++ struct dxgsharedresource *object) ++{ ++ down_write(&adapter->shared_resource_list_lock); ++ if (object->shared_resource_list_entry.next) { ++ list_del(&object->shared_resource_list_entry); ++ object->shared_resource_list_entry.next = NULL; ++ } ++ up_write(&adapter->shared_resource_list_lock); ++} ++ + void dxgadapter_add_syncobj(struct dxgadapter *adapter, + struct dxgsyncobject *object) + { +@@ -489,6 +500,69 @@ void dxgdevice_remove_resource(struct dxgdevice *device, + } + } + ++struct dxgsharedresource *dxgsharedresource_create(struct dxgadapter *adapter) ++{ ++ struct dxgsharedresource *resource; ++ ++ resource = kzalloc(sizeof(*resource), GFP_KERNEL); ++ if (resource) { ++ INIT_LIST_HEAD(&resource->resource_list_head); ++ kref_init(&resource->sresource_kref); ++ mutex_init(&resource->fd_mutex); ++ resource->adapter = adapter; ++ } ++ return resource; ++} ++ ++void dxgsharedresource_destroy(struct kref *refcount) ++{ ++ struct dxgsharedresource *resource; ++ ++ resource = container_of(refcount, struct dxgsharedresource, ++ sresource_kref); ++ if (resource->runtime_private_data) ++ vfree(resource->runtime_private_data); ++ if (resource->resource_private_data) ++ vfree(resource->resource_private_data); ++ if (resource->alloc_private_data_sizes) ++ vfree(resource->alloc_private_data_sizes); ++ if (resource->alloc_private_data) ++ vfree(resource->alloc_private_data); ++ kfree(resource); ++} ++ ++void dxgsharedresource_add_resource(struct dxgsharedresource *shared_resource, ++ struct dxgresource *resource) ++{ ++ down_write(&shared_resource->adapter->shared_resource_list_lock); ++ DXG_TRACE("Adding resource: %p %p", shared_resource, resource); ++ list_add_tail(&resource->shared_resource_list_entry, ++ &shared_resource->resource_list_head); ++ kref_get(&shared_resource->sresource_kref); ++ kref_get(&resource->resource_kref); ++ resource->shared_owner = shared_resource; ++ up_write(&shared_resource->adapter->shared_resource_list_lock); ++} ++ ++void dxgsharedresource_remove_resource(struct dxgsharedresource ++ *shared_resource, ++ struct dxgresource *resource) ++{ ++ struct dxgadapter *adapter = shared_resource->adapter; ++ ++ down_write(&adapter->shared_resource_list_lock); ++ DXG_TRACE("Removing resource: %p %p", shared_resource, resource); ++ if (resource->shared_resource_list_entry.next) { ++ list_del(&resource->shared_resource_list_entry); ++ resource->shared_resource_list_entry.next = NULL; ++ kref_put(&shared_resource->sresource_kref, ++ dxgsharedresource_destroy); ++ resource->shared_owner = NULL; ++ kref_put(&resource->resource_kref, dxgresource_release); ++ } ++ up_write(&adapter->shared_resource_list_lock); ++} ++ + struct dxgresource 
*dxgresource_create(struct dxgdevice *device) + { + struct dxgresource *resource; +@@ -532,6 +606,7 @@ void dxgresource_destroy(struct dxgresource *resource) + struct d3dkmt_destroyallocation2 args = { }; + int destroyed = test_and_set_bit(0, &resource->flags); + struct dxgdevice *device = resource->device; ++ struct dxgsharedresource *shared_resource; + + if (!destroyed) { + dxgresource_free_handle(resource); +@@ -547,6 +622,12 @@ void dxgresource_destroy(struct dxgresource *resource) + dxgallocation_destroy(alloc); + } + dxgdevice_remove_resource(device, resource); ++ shared_resource = resource->shared_owner; ++ if (shared_resource) { ++ dxgsharedresource_remove_resource(shared_resource, ++ resource); ++ resource->shared_owner = NULL; ++ } + } + kref_put(&resource->resource_kref, dxgresource_release); + } +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 8431523f42de..0336e1843223 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -38,6 +38,7 @@ struct dxgdevice; + struct dxgcontext; + struct dxgallocation; + struct dxgresource; ++struct dxgsharedresource; + struct dxgsyncobject; + + /* +@@ -372,6 +373,8 @@ struct dxgadapter { + struct list_head adapter_list_entry; + /* The list of dxgprocess_adapter entries */ + struct list_head adapter_process_list_head; ++ /* List of all dxgsharedresource objects */ ++ struct list_head shared_resource_list_head; + /* List of all non-device dxgsyncobject objects */ + struct list_head syncobj_list_head; + /* This lock protects shared resource and syncobject lists */ +@@ -405,6 +408,8 @@ void dxgadapter_remove_syncobj(struct dxgsyncobject *so); + void dxgadapter_add_process(struct dxgadapter *adapter, + struct dxgprocess_adapter *process_info); + void dxgadapter_remove_process(struct dxgprocess_adapter *process_info); ++void dxgadapter_remove_shared_resource(struct dxgadapter *adapter, ++ struct dxgsharedresource *object); + + /* + * The object represent the device object. +@@ -484,6 +489,64 @@ void dxgcontext_destroy_safe(struct dxgprocess *pr, struct dxgcontext *ctx); + void dxgcontext_release(struct kref *refcount); + bool dxgcontext_is_active(struct dxgcontext *ctx); + ++/* ++ * A shared resource object is created to track the list of dxgresource objects, ++ * which are opened for the same underlying shared resource. ++ * Objects are shared by using a file descriptor handle. ++ * FD is created by calling dxgk_share_objects and providing shandle to ++ * dxgsharedresource. The FD points to a dxgresource object, which is created ++ * by calling dxgk_open_resource_nt. dxgresource object is referenced by the ++ * FD. ++ * ++ * The object is referenced by every dxgresource in its list. ++ * ++ */ ++struct dxgsharedresource { ++ /* Every dxgresource object in the resource list takes a reference */ ++ struct kref sresource_kref; ++ struct dxgadapter *adapter; ++ /* List of dxgresource objects, opened for the shared resource. 
*/ ++ /* Protected by dxgadapter::shared_resource_list_lock */ ++ struct list_head resource_list_head; ++ /* Entry in the list of dxgsharedresource in dxgadapter */ ++ /* Protected by dxgadapter::shared_resource_list_lock */ ++ struct list_head shared_resource_list_entry; ++ struct mutex fd_mutex; ++ /* Referenced by file descriptors */ ++ int host_shared_handle_nt_reference; ++ /* Corresponding global handle in the host */ ++ struct d3dkmthandle host_shared_handle; ++ /* ++ * When the sync object is shared by NT handle, this is the ++ * corresponding handle in the host ++ */ ++ struct d3dkmthandle host_shared_handle_nt; ++ /* Values below are computed when the resource is sealed */ ++ u32 runtime_private_data_size; ++ u32 alloc_private_data_size; ++ u32 resource_private_data_size; ++ u32 allocation_count; ++ union { ++ struct { ++ /* Cannot add new allocations */ ++ u32 sealed:1; ++ u32 reserved:31; ++ }; ++ long flags; ++ }; ++ u32 *alloc_private_data_sizes; ++ u8 *alloc_private_data; ++ u8 *runtime_private_data; ++ u8 *resource_private_data; ++}; ++ ++struct dxgsharedresource *dxgsharedresource_create(struct dxgadapter *adapter); ++void dxgsharedresource_destroy(struct kref *refcount); ++void dxgsharedresource_add_resource(struct dxgsharedresource *sres, ++ struct dxgresource *res); ++void dxgsharedresource_remove_resource(struct dxgsharedresource *sres, ++ struct dxgresource *res); ++ + struct dxgresource { + struct kref resource_kref; + enum dxgobjectstate object_state; +@@ -504,6 +567,8 @@ struct dxgresource { + }; + long flags; + }; ++ /* Owner of the shared resource */ ++ struct dxgsharedresource *shared_owner; + }; + + struct dxgresource *dxgresource_create(struct dxgdevice *dev); +@@ -658,6 +723,18 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); ++int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process, ++ struct d3dkmthandle object, ++ struct d3dkmthandle *shared_handle); ++int dxgvmb_send_destroy_nt_shared_object(struct d3dkmthandle shared_handle); ++int dxgvmb_send_open_resource(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle device, ++ struct d3dkmthandle global_share, ++ u32 allocation_count, ++ u32 total_priv_drv_data_size, ++ struct d3dkmthandle *resource_handle, ++ struct d3dkmthandle *alloc_handles); + int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + enum d3dkmdt_standardallocationtype t, + struct d3dkmdt_gdisurfacedata *data, +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 5a5ca8791d27..69e221613af9 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -258,6 +258,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, + init_rwsem(&adapter->core_lock); + + INIT_LIST_HEAD(&adapter->adapter_process_list_head); ++ INIT_LIST_HEAD(&adapter->shared_resource_list_head); + INIT_LIST_HEAD(&adapter->syncobj_list_head); + init_rwsem(&adapter->shared_resource_list_lock); + adapter->pci_dev = dev; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 6b2dea24a509..b3a4377c8b0b 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -712,6 +712,79 @@ int dxgvmb_send_destroy_process(struct d3dkmthandle process) + return ret; + } + ++int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process, ++ struct d3dkmthandle object, ++ 
struct d3dkmthandle *shared_handle) ++{ ++ struct dxgkvmb_command_createntsharedobject *command; ++ int ret; ++ struct dxgvmbusmsg msg; ++ ++ ret = init_message(&msg, NULL, process, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vm_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CREATENTSHAREDOBJECT, ++ process->host_handle); ++ command->object = object; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = dxgvmb_send_sync_msg(dxgglobal_get_dxgvmbuschannel(), ++ msg.hdr, msg.size, shared_handle, ++ sizeof(*shared_handle)); ++ ++ dxgglobal_release_channel_lock(); ++ ++ if (ret < 0) ++ goto cleanup; ++ if (shared_handle->v == 0) { ++ DXG_ERR("failed to create NT shared object"); ++ ret = -ENOTRECOVERABLE; ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_destroy_nt_shared_object(struct d3dkmthandle shared_handle) ++{ ++ struct dxgkvmb_command_destroyntsharedobject *command; ++ int ret; ++ struct dxgvmbusmsg msg; ++ ++ ret = init_message(&msg, NULL, NULL, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vm_to_host_init1(&command->hdr, ++ DXGK_VMBCOMMAND_DESTROYNTSHAREDOBJECT); ++ command->shared_handle = shared_handle; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(dxgglobal_get_dxgvmbuschannel(), ++ msg.hdr, msg.size); ++ ++ dxgglobal_release_channel_lock(); ++ ++cleanup: ++ free_message(&msg, NULL); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_destroy_sync_object(struct dxgprocess *process, + struct d3dkmthandle sync_object) + { +@@ -1552,6 +1625,60 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_open_resource(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle device, ++ struct d3dkmthandle global_share, ++ u32 allocation_count, ++ u32 total_priv_drv_data_size, ++ struct d3dkmthandle *resource_handle, ++ struct d3dkmthandle *alloc_handles) ++{ ++ struct dxgkvmb_command_openresource *command; ++ struct dxgkvmb_command_openresource_return *result; ++ struct d3dkmthandle *handles; ++ int ret; ++ int i; ++ u32 result_size = allocation_count * sizeof(struct d3dkmthandle) + ++ sizeof(*result); ++ struct dxgvmbusmsgres msg = {.hdr = NULL}; ++ ++ ret = init_message_res(&msg, adapter, process, sizeof(*command), ++ result_size); ++ if (ret) ++ goto cleanup; ++ command = msg.msg; ++ result = msg.res; ++ ++ command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_OPENRESOURCE, ++ process->host_handle); ++ command->device = device; ++ command->nt_security_sharing = 1; ++ command->global_share = global_share; ++ command->allocation_count = allocation_count; ++ command->total_priv_drv_data_size = total_priv_drv_data_size; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ result, msg.res_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result->status); ++ if (ret < 0) ++ goto cleanup; ++ ++ *resource_handle = result->resource; ++ handles = (struct d3dkmthandle *) &result[1]; ++ for (i = 0; i < allocation_count; i++) ++ alloc_handles[i] = handles[i]; ++ ++cleanup: ++ free_message((struct dxgvmbusmsg *)&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + enum d3dkmdt_standardallocationtype 
alloctype, + struct d3dkmdt_gdisurfacedata *alloc_data, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 89fecbcefbc8..73d7adac60a1 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -172,6 +172,21 @@ struct dxgkvmb_command_signalguestevent { + bool dereference_event; + }; + ++/* ++ * The command returns struct d3dkmthandle of a shared object for the ++ * given pre-process object ++ */ ++struct dxgkvmb_command_createntsharedobject { ++ struct dxgkvmb_command_vm_to_host hdr; ++ struct d3dkmthandle object; ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_destroyntsharedobject { ++ struct dxgkvmb_command_vm_to_host hdr; ++ struct d3dkmthandle shared_handle; ++}; ++ + /* Returns ntstatus */ + struct dxgkvmb_command_setiospaceregion { + struct dxgkvmb_command_vm_to_host hdr; +@@ -305,6 +320,21 @@ struct dxgkvmb_command_createallocation { + /* u8 priv_drv_data[] for each alloc_info */ + }; + ++struct dxgkvmb_command_openresource { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ bool nt_security_sharing; ++ struct d3dkmthandle global_share; ++ u32 allocation_count; ++ u32 total_priv_drv_data_size; ++}; ++ ++struct dxgkvmb_command_openresource_return { ++ struct d3dkmthandle resource; ++ struct ntstatus status; ++/* struct d3dkmthandle allocation[allocation_count]; */ ++}; ++ + struct dxgkvmb_command_getstandardallocprivdata { + struct dxgkvmb_command_vgpu_to_host hdr; + enum d3dkmdt_standardallocationtype alloc_type; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 0025e1ee2d4d..abb64f6c3a59 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -36,8 +36,35 @@ static char *errorstr(int ret) + } + #endif + ++static int dxgsharedresource_release(struct inode *inode, struct file *file) ++{ ++ struct dxgsharedresource *resource = file->private_data; ++ ++ DXG_TRACE("Release resource: %p", resource); ++ mutex_lock(&resource->fd_mutex); ++ kref_get(&resource->sresource_kref); ++ resource->host_shared_handle_nt_reference--; ++ if (resource->host_shared_handle_nt_reference == 0) { ++ if (resource->host_shared_handle_nt.v) { ++ dxgvmb_send_destroy_nt_shared_object( ++ resource->host_shared_handle_nt); ++ DXG_TRACE("Resource host_handle_nt destroyed: %x", ++ resource->host_shared_handle_nt.v); ++ resource->host_shared_handle_nt.v = 0; ++ } ++ kref_put(&resource->sresource_kref, dxgsharedresource_destroy); ++ } ++ mutex_unlock(&resource->fd_mutex); ++ kref_put(&resource->sresource_kref, dxgsharedresource_destroy); ++ return 0; ++} ++ ++static const struct file_operations dxg_resource_fops = { ++ .release = dxgsharedresource_release, ++}; ++ + static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, +- void *__user inargs) ++ void *__user inargs) + { + struct d3dkmt_openadapterfromluid args; + int ret; +@@ -212,6 +239,98 @@ dxgkp_enum_adapters(struct dxgprocess *process, + return ret; + } + ++static int dxgsharedresource_seal(struct dxgsharedresource *shared_resource) ++{ ++ int ret = 0; ++ int i = 0; ++ u8 *private_data; ++ u32 data_size; ++ struct dxgresource *resource; ++ struct dxgallocation *alloc; ++ ++ DXG_TRACE("Sealing resource: %p", shared_resource); ++ ++ down_write(&shared_resource->adapter->shared_resource_list_lock); ++ if (shared_resource->sealed) { ++ DXG_TRACE("Resource already sealed"); ++ goto cleanup; ++ } ++ shared_resource->sealed = 1; ++ if (!list_empty(&shared_resource->resource_list_head)) { ++ resource 
= ++ list_first_entry(&shared_resource->resource_list_head, ++ struct dxgresource, ++ shared_resource_list_entry); ++ DXG_TRACE("First resource: %p", resource); ++ mutex_lock(&resource->resource_mutex); ++ list_for_each_entry(alloc, &resource->alloc_list_head, ++ alloc_list_entry) { ++ DXG_TRACE("Resource alloc: %p %d", alloc, ++ alloc->priv_drv_data->data_size); ++ shared_resource->allocation_count++; ++ shared_resource->alloc_private_data_size += ++ alloc->priv_drv_data->data_size; ++ if (shared_resource->alloc_private_data_size < ++ alloc->priv_drv_data->data_size) { ++ DXG_ERR("alloc private data overflow"); ++ ret = -EINVAL; ++ goto cleanup1; ++ } ++ } ++ if (shared_resource->alloc_private_data_size == 0) { ++ ret = -EINVAL; ++ goto cleanup1; ++ } ++ shared_resource->alloc_private_data = ++ vzalloc(shared_resource->alloc_private_data_size); ++ if (shared_resource->alloc_private_data == NULL) { ++ ret = -EINVAL; ++ goto cleanup1; ++ } ++ shared_resource->alloc_private_data_sizes = ++ vzalloc(sizeof(u32)*shared_resource->allocation_count); ++ if (shared_resource->alloc_private_data_sizes == NULL) { ++ ret = -EINVAL; ++ goto cleanup1; ++ } ++ private_data = shared_resource->alloc_private_data; ++ data_size = shared_resource->alloc_private_data_size; ++ i = 0; ++ list_for_each_entry(alloc, &resource->alloc_list_head, ++ alloc_list_entry) { ++ u32 alloc_data_size = alloc->priv_drv_data->data_size; ++ ++ if (alloc_data_size) { ++ if (data_size < alloc_data_size) { ++ dev_err(DXGDEV, ++ "Invalid private data size"); ++ ret = -EINVAL; ++ goto cleanup1; ++ } ++ shared_resource->alloc_private_data_sizes[i] = ++ alloc_data_size; ++ memcpy(private_data, ++ alloc->priv_drv_data->data, ++ alloc_data_size); ++ vfree(alloc->priv_drv_data); ++ alloc->priv_drv_data = NULL; ++ private_data += alloc_data_size; ++ data_size -= alloc_data_size; ++ } ++ i++; ++ } ++ if (data_size != 0) { ++ DXG_ERR("Data size mismatch"); ++ ret = -EINVAL; ++ } ++cleanup1: ++ mutex_unlock(&resource->resource_mutex); ++ } ++cleanup: ++ up_write(&shared_resource->adapter->shared_resource_list_lock); ++ return ret; ++} ++ + static int + dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs) + { +@@ -803,6 +922,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + u32 alloc_info_size = 0; + struct dxgresource *resource = NULL; + struct dxgallocation **dxgalloc = NULL; ++ struct dxgsharedresource *shared_resource = NULL; + bool resource_mutex_acquired = false; + u32 standard_alloc_priv_data_size = 0; + void *standard_alloc_priv_data = NULL; +@@ -973,6 +1093,76 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + } + resource->private_runtime_handle = + args.private_runtime_resource_handle; ++ if (args.flags.create_shared) { ++ if (!args.flags.nt_security_sharing) { ++ dev_err(DXGDEV, ++ "nt_security_sharing must be set"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ shared_resource = dxgsharedresource_create(adapter); ++ if (shared_resource == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ shared_resource->runtime_private_data_size = ++ args.priv_drv_data_size; ++ shared_resource->resource_private_data_size = ++ args.priv_drv_data_size; ++ ++ shared_resource->runtime_private_data_size = ++ args.private_runtime_data_size; ++ shared_resource->resource_private_data_size = ++ args.priv_drv_data_size; ++ dxgsharedresource_add_resource(shared_resource, ++ resource); ++ if (args.flags.standard_allocation) { ++ shared_resource->resource_private_data = ++ res_priv_data; 
++ shared_resource->resource_private_data_size = ++ res_priv_data_size; ++ res_priv_data = NULL; ++ } ++ if (args.private_runtime_data_size) { ++ shared_resource->runtime_private_data = ++ vzalloc(args.private_runtime_data_size); ++ if (shared_resource->runtime_private_data == ++ NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user( ++ shared_resource->runtime_private_data, ++ args.private_runtime_data, ++ args.private_runtime_data_size); ++ if (ret) { ++ dev_err(DXGDEV, ++ "failed to copy runtime data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ if (args.priv_drv_data_size && ++ !args.flags.standard_allocation) { ++ shared_resource->resource_private_data = ++ vzalloc(args.priv_drv_data_size); ++ if (shared_resource->resource_private_data == ++ NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user( ++ shared_resource->resource_private_data, ++ args.priv_drv_data, ++ args.priv_drv_data_size); ++ if (ret) { ++ dev_err(DXGDEV, ++ "failed to copy res data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ } + } else { + if (args.resource.v) { + /* Adding new allocations to the given resource */ +@@ -991,6 +1181,12 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + ret = -EINVAL; + goto cleanup; + } ++ if (resource->shared_owner && ++ resource->shared_owner->sealed) { ++ DXG_ERR("Resource is sealed"); ++ ret = -EINVAL; ++ goto cleanup; ++ } + /* Synchronize with resource destruction */ + mutex_lock(&resource->resource_mutex); + if (!dxgresource_is_active(resource)) { +@@ -1092,9 +1288,16 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + } + } + if (resource && args.flags.create_resource) { ++ if (shared_resource) { ++ dxgsharedresource_remove_resource ++ (shared_resource, resource); ++ } + dxgresource_destroy(resource); + } + } ++ if (shared_resource) ++ kref_put(&shared_resource->sresource_kref, ++ dxgsharedresource_destroy); + if (dxgalloc) + vfree(dxgalloc); + if (standard_alloc_priv_data) +@@ -1140,6 +1343,10 @@ static int validate_alloc(struct dxgallocation *alloc0, + fail_reason = 4; + goto cleanup; + } ++ if (alloc->owner.resource->shared_owner) { ++ fail_reason = 5; ++ goto cleanup; ++ } + } else { + if (alloc->owner.device != device) { + fail_reason = 6; +@@ -2146,6 +2353,582 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgsharedresource_get_host_nt_handle(struct dxgsharedresource *resource, ++ struct dxgprocess *process, ++ struct d3dkmthandle objecthandle) ++{ ++ int ret = 0; ++ ++ mutex_lock(&resource->fd_mutex); ++ if (resource->host_shared_handle_nt_reference == 0) { ++ ret = dxgvmb_send_create_nt_shared_object(process, ++ objecthandle, ++ &resource->host_shared_handle_nt); ++ if (ret < 0) ++ goto cleanup; ++ DXG_TRACE("Resource host_shared_handle_ht: %x", ++ resource->host_shared_handle_nt.v); ++ kref_get(&resource->sresource_kref); ++ } ++ resource->host_shared_handle_nt_reference++; ++cleanup: ++ mutex_unlock(&resource->fd_mutex); ++ return ret; ++} ++ ++enum dxg_sharedobject_type { ++ DXG_SHARED_RESOURCE ++}; ++ ++static int get_object_fd(enum dxg_sharedobject_type type, ++ void *object, int *fdout) ++{ ++ struct file *file; ++ int fd; ++ ++ fd = get_unused_fd_flags(O_CLOEXEC); ++ if (fd < 0) { ++ DXG_ERR("get_unused_fd_flags failed: %x", fd); ++ return -ENOTRECOVERABLE; ++ } ++ ++ switch (type) { ++ case DXG_SHARED_RESOURCE: ++ file = anon_inode_getfile("dxgresource", ++ &dxg_resource_fops, object, 0); ++ 
break; ++ default: ++ return -EINVAL; ++ }; ++ if (IS_ERR(file)) { ++ DXG_ERR("anon_inode_getfile failed: %x", fd); ++ put_unused_fd(fd); ++ return -ENOTRECOVERABLE; ++ } ++ ++ fd_install(fd, file); ++ *fdout = fd; ++ return 0; ++} ++ ++static int ++dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_shareobjects args; ++ enum hmgrentry_type object_type; ++ struct dxgsyncobject *syncobj = NULL; ++ struct dxgresource *resource = NULL; ++ struct dxgsharedresource *shared_resource = NULL; ++ struct d3dkmthandle *handles = NULL; ++ int object_fd = -1; ++ void *obj = NULL; ++ u32 handle_size; ++ int ret; ++ u64 tmp = 0; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count == 0 || args.object_count > 1) { ++ DXG_ERR("invalid object count %d", args.object_count); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ handle_size = args.object_count * sizeof(struct d3dkmthandle); ++ ++ handles = vzalloc(handle_size); ++ if (handles == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user(handles, args.objects, handle_size); ++ if (ret) { ++ DXG_ERR("failed to copy object handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ DXG_TRACE("Sharing handle: %x", handles[0].v); ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); ++ object_type = hmgrtable_get_object_type(&process->handle_table, ++ handles[0]); ++ obj = hmgrtable_get_object(&process->handle_table, handles[0]); ++ if (obj == NULL) { ++ DXG_ERR("invalid object handle %x", handles[0].v); ++ ret = -EINVAL; ++ } else { ++ switch (object_type) { ++ case HMGRENTRY_TYPE_DXGRESOURCE: ++ resource = obj; ++ if (resource->shared_owner) { ++ kref_get(&resource->resource_kref); ++ shared_resource = resource->shared_owner; ++ } else { ++ resource = NULL; ++ DXG_ERR("resource object shared"); ++ ret = -EINVAL; ++ } ++ break; ++ default: ++ DXG_ERR("invalid object type %d", object_type); ++ ret = -EINVAL; ++ break; ++ } ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED); ++ ++ if (ret < 0) ++ goto cleanup; ++ ++ switch (object_type) { ++ case HMGRENTRY_TYPE_DXGRESOURCE: ++ ret = get_object_fd(DXG_SHARED_RESOURCE, shared_resource, ++ &object_fd); ++ if (ret < 0) { ++ DXG_ERR("get_object_fd failed for resource"); ++ goto cleanup; ++ } ++ ret = dxgsharedresource_get_host_nt_handle(shared_resource, ++ process, handles[0]); ++ if (ret < 0) { ++ DXG_ERR("get_host_res_nt_handle failed"); ++ goto cleanup; ++ } ++ ret = dxgsharedresource_seal(shared_resource); ++ if (ret < 0) { ++ DXG_ERR("dxgsharedresource_seal failed"); ++ goto cleanup; ++ } ++ break; ++ default: ++ ret = -EINVAL; ++ break; ++ } ++ ++ if (ret < 0) ++ goto cleanup; ++ ++ DXG_TRACE("Object FD: %x", object_fd); ++ ++ tmp = (u64) object_fd; ++ ++ ret = copy_to_user(args.shared_handle, &tmp, sizeof(u64)); ++ if (ret < 0) ++ DXG_ERR("failed to copy shared handle"); ++ ++cleanup: ++ if (ret < 0) { ++ if (object_fd >= 0) ++ put_unused_fd(object_fd); ++ } ++ ++ if (handles) ++ vfree(handles); ++ ++ if (syncobj) ++ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release); ++ ++ if (resource) ++ kref_put(&resource->resource_kref, dxgresource_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_query_resource_info_nt(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_queryresourceinfofromnthandle args; ++ int ret; ++ struct dxgdevice 
*device = NULL; ++ struct dxgsharedresource *shared_resource = NULL; ++ struct file *file = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ file = fget(args.nt_handle); ++ if (!file) { ++ DXG_ERR("failed to get file from handle: %llx", ++ args.nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (file->f_op != &dxg_resource_fops) { ++ DXG_ERR("invalid fd: %llx", args.nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ shared_resource = file->private_data; ++ if (shared_resource == NULL) { ++ DXG_ERR("invalid private data: %llx", args.nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ kref_put(&device->device_kref, dxgdevice_release); ++ device = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgsharedresource_seal(shared_resource); ++ if (ret < 0) ++ goto cleanup; ++ ++ args.private_runtime_data_size = ++ shared_resource->runtime_private_data_size; ++ args.resource_priv_drv_data_size = ++ shared_resource->resource_private_data_size; ++ args.allocation_count = shared_resource->allocation_count; ++ args.total_priv_drv_data_size = ++ shared_resource->alloc_private_data_size; ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (file) ++ fput(file); ++ if (device) ++ dxgdevice_release_lock_shared(device); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++assign_resource_handles(struct dxgprocess *process, ++ struct dxgsharedresource *shared_resource, ++ struct d3dkmt_openresourcefromnthandle *args, ++ struct d3dkmthandle resource_handle, ++ struct dxgresource *resource, ++ struct dxgallocation **allocs, ++ struct d3dkmthandle *handles) ++{ ++ int ret; ++ int i; ++ u8 *cur_priv_data; ++ u32 total_priv_data_size = 0; ++ struct d3dddi_openallocationinfo2 open_alloc_info = { }; ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, resource, ++ HMGRENTRY_TYPE_DXGRESOURCE, ++ resource_handle); ++ if (ret < 0) ++ goto cleanup; ++ resource->handle = resource_handle; ++ resource->handle_valid = 1; ++ cur_priv_data = args->total_priv_drv_data; ++ for (i = 0; i < args->allocation_count; i++) { ++ ret = hmgrtable_assign_handle(&process->handle_table, allocs[i], ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ handles[i]); ++ if (ret < 0) ++ goto cleanup; ++ allocs[i]->alloc_handle = handles[i]; ++ allocs[i]->handle_valid = 1; ++ open_alloc_info.allocation = handles[i]; ++ if (shared_resource->alloc_private_data_sizes) ++ open_alloc_info.priv_drv_data_size = ++ shared_resource->alloc_private_data_sizes[i]; ++ else ++ open_alloc_info.priv_drv_data_size = 0; ++ ++ total_priv_data_size += open_alloc_info.priv_drv_data_size; ++ open_alloc_info.priv_drv_data = cur_priv_data; ++ cur_priv_data += open_alloc_info.priv_drv_data_size; ++ ++ ret = copy_to_user(&args->open_alloc_info[i], ++ &open_alloc_info, ++ sizeof(open_alloc_info)); ++ if (ret) { ++ DXG_ERR("failed to copy alloc info"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ args->total_priv_drv_data_size = total_priv_data_size; ++cleanup: ++ 
hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ if (ret < 0) { ++ for (i = 0; i < args->allocation_count; i++) ++ dxgallocation_free_handle(allocs[i]); ++ dxgresource_free_handle(resource); ++ } ++ return ret; ++} ++ ++static int ++open_resource(struct dxgprocess *process, ++ struct d3dkmt_openresourcefromnthandle *args, ++ __user struct d3dkmthandle *res_out, ++ __user u32 *total_driver_data_size_out) ++{ ++ int ret = 0; ++ int i; ++ struct d3dkmthandle *alloc_handles = NULL; ++ int alloc_handles_size = sizeof(struct d3dkmthandle) * ++ args->allocation_count; ++ struct dxgsharedresource *shared_resource = NULL; ++ struct dxgresource *resource = NULL; ++ struct dxgallocation **allocs = NULL; ++ struct d3dkmthandle global_share = {}; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmthandle resource_handle = {}; ++ struct file *file = NULL; ++ ++ DXG_TRACE("Opening resource handle: %llx", args->nt_handle); ++ ++ file = fget(args->nt_handle); ++ if (!file) { ++ DXG_ERR("failed to get file from handle: %llx", ++ args->nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ if (file->f_op != &dxg_resource_fops) { ++ DXG_ERR("invalid fd type: %llx", args->nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ shared_resource = file->private_data; ++ if (shared_resource == NULL) { ++ DXG_ERR("invalid private data: %llx", args->nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ if (kref_get_unless_zero(&shared_resource->sresource_kref) == 0) ++ shared_resource = NULL; ++ else ++ global_share = shared_resource->host_shared_handle_nt; ++ ++ if (shared_resource == NULL) { ++ DXG_ERR("Invalid shared resource handle: %x", ++ (u32)args->nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ DXG_TRACE("Shared resource: %p %x", shared_resource, ++ global_share.v); ++ ++ device = dxgprocess_device_by_handle(process, args->device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ kref_put(&device->device_kref, dxgdevice_release); ++ device = NULL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgsharedresource_seal(shared_resource); ++ if (ret < 0) ++ goto cleanup; ++ ++ if (args->allocation_count != shared_resource->allocation_count || ++ args->private_runtime_data_size < ++ shared_resource->runtime_private_data_size || ++ args->resource_priv_drv_data_size < ++ shared_resource->resource_private_data_size || ++ args->total_priv_drv_data_size < ++ shared_resource->alloc_private_data_size) { ++ ret = -EINVAL; ++ DXG_ERR("Invalid data sizes"); ++ goto cleanup; ++ } ++ ++ alloc_handles = vzalloc(alloc_handles_size); ++ if (alloc_handles == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ allocs = vzalloc(sizeof(void *) * args->allocation_count); ++ if (allocs == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ resource = dxgresource_create(device); ++ if (resource == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ dxgsharedresource_add_resource(shared_resource, resource); ++ ++ for (i = 0; i < args->allocation_count; i++) { ++ allocs[i] = dxgallocation_create(process); ++ if (allocs[i] == NULL) ++ goto cleanup; ++ ret = dxgresource_add_alloc(resource, allocs[i]); ++ if (ret < 0) ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_open_resource(process, adapter, ++ device->handle, global_share, ++ args->allocation_count, ++ 
args->total_priv_drv_data_size, ++ &resource_handle, alloc_handles); ++ if (ret < 0) { ++ DXG_ERR("dxgvmb_send_open_resource failed"); ++ goto cleanup; ++ } ++ ++ if (shared_resource->runtime_private_data_size) { ++ ret = copy_to_user(args->private_runtime_data, ++ shared_resource->runtime_private_data, ++ shared_resource->runtime_private_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy runtime data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ if (shared_resource->resource_private_data_size) { ++ ret = copy_to_user(args->resource_priv_drv_data, ++ shared_resource->resource_private_data, ++ shared_resource->resource_private_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy resource data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ if (shared_resource->alloc_private_data_size) { ++ ret = copy_to_user(args->total_priv_drv_data, ++ shared_resource->alloc_private_data, ++ shared_resource->alloc_private_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ ret = assign_resource_handles(process, shared_resource, args, ++ resource_handle, resource, allocs, ++ alloc_handles); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(res_out, &resource_handle, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy resource handle to user"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = copy_to_user(total_driver_data_size_out, ++ &args->total_priv_drv_data_size, sizeof(u32)); ++ if (ret) { ++ DXG_ERR("failed to copy total driver data size"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (resource_handle.v) { ++ struct d3dkmt_destroyallocation2 tmp = { }; ++ ++ tmp.flags.assume_not_in_use = 1; ++ tmp.device = args->device; ++ tmp.resource = resource_handle; ++ ret = dxgvmb_send_destroy_allocation(process, device, ++ &tmp, NULL); ++ } ++ if (resource) ++ dxgresource_destroy(resource); ++ } ++ ++ if (file) ++ fput(file); ++ if (allocs) ++ vfree(allocs); ++ if (shared_resource) ++ kref_put(&shared_resource->sresource_kref, ++ dxgsharedresource_destroy); ++ if (alloc_handles) ++ vfree(alloc_handles); ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ dxgdevice_release_lock_shared(device); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ return ret; ++} ++ ++static int ++dxgkio_open_resource_nt(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_openresourcefromnthandle args; ++ struct d3dkmt_openresourcefromnthandle *__user args_user = inargs; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = open_resource(process, &args, ++ &args_user->resource, ++ &args_user->total_priv_drv_data_size); ++ ++cleanup: ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, +@@ -2215,10 +2998,11 @@ static struct ioctl_desc ioctls[] = { + /* 0x3c */ {}, + /* 0x3d */ {}, + /* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3}, +-/* 0x3f */ {}, ++/* 0x3f */ {dxgkio_share_objects, LX_DXSHAREOBJECTS}, + /* 0x40 */ {}, +-/* 0x41 */ {}, +-/* 0x42 */ {}, ++/* 0x41 */ {dxgkio_query_resource_info_nt, ++ LX_DXQUERYRESOURCEINFOFROMNTHANDLE}, ++/* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE}, + /* 0x43 */ {}, + /* 0x44 */ {}, + /* 0x45 */ {}, 
+diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 39055b0c1069..f74564cf7ee9 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -682,6 +682,94 @@ enum d3dkmt_deviceexecution_state { + _D3DKMT_DEVICEEXECUTION_ERROR_DMAPAGEFAULT = 7, + }; + ++struct d3dddi_openallocationinfo2 { ++ struct d3dkmthandle allocation; ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ __u32 priv_drv_data_size; ++ __u64 gpu_va; ++ __u64 reserved[6]; ++}; ++ ++struct d3dkmt_openresourcefromnthandle { ++ struct d3dkmthandle device; ++ __u32 reserved; ++ __u64 nt_handle; ++ __u32 allocation_count; ++ __u32 reserved1; ++#ifdef __KERNEL__ ++ struct d3dddi_openallocationinfo2 *open_alloc_info; ++#else ++ __u64 open_alloc_info; ++#endif ++ int private_runtime_data_size; ++ __u32 reserved2; ++#ifdef __KERNEL__ ++ void *private_runtime_data; ++#else ++ __u64 private_runtime_data; ++#endif ++ __u32 resource_priv_drv_data_size; ++ __u32 reserved3; ++#ifdef __KERNEL__ ++ void *resource_priv_drv_data; ++#else ++ __u64 resource_priv_drv_data; ++#endif ++ __u32 total_priv_drv_data_size; ++#ifdef __KERNEL__ ++ void *total_priv_drv_data; ++#else ++ __u64 total_priv_drv_data; ++#endif ++ struct d3dkmthandle resource; ++ struct d3dkmthandle keyed_mutex; ++#ifdef __KERNEL__ ++ void *keyed_mutex_private_data; ++#else ++ __u64 keyed_mutex_private_data; ++#endif ++ __u32 keyed_mutex_private_data_size; ++ struct d3dkmthandle sync_object; ++}; ++ ++struct d3dkmt_queryresourceinfofromnthandle { ++ struct d3dkmthandle device; ++ __u32 reserved; ++ __u64 nt_handle; ++#ifdef __KERNEL__ ++ void *private_runtime_data; ++#else ++ __u64 private_runtime_data; ++#endif ++ __u32 private_runtime_data_size; ++ __u32 total_priv_drv_data_size; ++ __u32 resource_priv_drv_data_size; ++ __u32 allocation_count; ++}; ++ ++struct d3dkmt_shareobjects { ++ __u32 object_count; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ const struct d3dkmthandle *objects; ++ void *object_attr; /* security attributes */ ++#else ++ __u64 objects; ++ __u64 object_attr; ++#endif ++ __u32 desired_access; ++ __u32 reserved1; ++#ifdef __KERNEL__ ++ __u64 *shared_handle; /* output file descriptors */ ++#else ++ __u64 shared_handle; ++#endif ++}; ++ + union d3dkmt_enumadapters_filter { + struct { + __u64 include_compute_only:1; +@@ -747,5 +835,13 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x3b, struct d3dkmt_waitforsynchronizationobjectfromgpu) + #define LX_DXENUMADAPTERS3 \ + _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) ++#define LX_DXSHAREOBJECTS \ ++ _IOWR(0x47, 0x3f, struct d3dkmt_shareobjects) ++#define LX_DXOPENSYNCOBJECTFROMNTHANDLE2 \ ++ _IOWR(0x47, 0x40, struct d3dkmt_opensyncobjectfromnthandle2) ++#define LX_DXQUERYRESOURCEINFOFROMNTHANDLE \ ++ _IOWR(0x47, 0x41, struct d3dkmt_queryresourceinfofromnthandle) ++#define LX_DXOPENRESOURCEFROMNTHANDLE \ ++ _IOWR(0x47, 0x42, struct d3dkmt_openresourcefromnthandle) + + #endif /* _D3DKMTHK_H */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1679-drivers-hv-dxgkrnl-Sharing-of-sync-objects.patch b/patch/kernel/archive/wsl2-arm64-6.6/1679-drivers-hv-dxgkrnl-Sharing-of-sync-objects.patch new file mode 100644 index 000000000000..5e47bde59c2c --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1679-drivers-hv-dxgkrnl-Sharing-of-sync-objects.patch @@ -0,0 +1,1555 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Mon, 31 Jan 2022 16:41:28 -0800 +Subject: drivers: hv: 
dxgkrnl: Sharing of sync objects + +Implement creation of a shared sync objects and the ioctl for sharing +dxgsyncobject objects between processes in the virtual machine. + +Sync objects are shared using file descriptor (FD) handles. +The name "NT handle" is used to be compatible with Windows implementation. + +An FD handle is created by the LX_DXSHAREOBJECTS ioctl. The created FD +handle could be sent to another process using any Linux API. + +To use a shared sync object in other ioctls, the object needs to be +opened using its FD handle. A sync object is opened by the +LX_DXOPENSYNCOBJECTFROMNTHANDLE2 ioctl, which returns a d3dkmthandle +value. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 181 ++- + drivers/hv/dxgkrnl/dxgkrnl.h | 96 ++ + drivers/hv/dxgkrnl/dxgmodule.c | 1 + + drivers/hv/dxgkrnl/dxgprocess.c | 4 + + drivers/hv/dxgkrnl/dxgvmbus.c | 221 ++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 35 + + drivers/hv/dxgkrnl/ioctl.c | 556 +++++++++- + include/uapi/misc/d3dkmthk.h | 93 ++ + 8 files changed, 1181 insertions(+), 6 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 26fce9aba4f3..f59173f13559 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -171,6 +171,26 @@ void dxgadapter_remove_shared_resource(struct dxgadapter *adapter, + up_write(&adapter->shared_resource_list_lock); + } + ++void dxgadapter_add_shared_syncobj(struct dxgadapter *adapter, ++ struct dxgsharedsyncobject *object) ++{ ++ down_write(&adapter->shared_resource_list_lock); ++ list_add_tail(&object->adapter_shared_syncobj_list_entry, ++ &adapter->adapter_shared_syncobj_list_head); ++ up_write(&adapter->shared_resource_list_lock); ++} ++ ++void dxgadapter_remove_shared_syncobj(struct dxgadapter *adapter, ++ struct dxgsharedsyncobject *object) ++{ ++ down_write(&adapter->shared_resource_list_lock); ++ if (object->adapter_shared_syncobj_list_entry.next) { ++ list_del(&object->adapter_shared_syncobj_list_entry); ++ object->adapter_shared_syncobj_list_entry.next = NULL; ++ } ++ up_write(&adapter->shared_resource_list_lock); ++} ++ + void dxgadapter_add_syncobj(struct dxgadapter *adapter, + struct dxgsyncobject *object) + { +@@ -622,7 +642,7 @@ void dxgresource_destroy(struct dxgresource *resource) + dxgallocation_destroy(alloc); + } + dxgdevice_remove_resource(device, resource); +- shared_resource = resource->shared_owner; ++ shared_resource = resource->shared_owner; + if (shared_resource) { + dxgsharedresource_remove_resource(shared_resource, + resource); +@@ -736,6 +756,9 @@ struct dxgcontext *dxgcontext_create(struct dxgdevice *device) + */ + void dxgcontext_destroy(struct dxgprocess *process, struct dxgcontext *context) + { ++ struct dxghwqueue *hwqueue; ++ struct dxghwqueue *tmp; ++ + DXG_TRACE("Destroying context %p", context); + context->object_state = DXGOBJECTSTATE_DESTROYED; + if (context->device) { +@@ -747,6 +770,10 @@ void dxgcontext_destroy(struct dxgprocess *process, struct dxgcontext *context) + dxgdevice_remove_context(context->device, context); + kref_put(&context->device->device_kref, dxgdevice_release); + } ++ list_for_each_entry_safe(hwqueue, tmp, &context->hwqueue_list_head, ++ hwqueue_list_entry) { ++ dxghwqueue_destroy(process, hwqueue); ++ } + kref_put(&context->context_kref, dxgcontext_release); + } + +@@ -773,6 +800,38 @@ void dxgcontext_release(struct kref *refcount) + kfree(context); + } + ++int dxgcontext_add_hwqueue(struct 
dxgcontext *context, ++ struct dxghwqueue *hwqueue) ++{ ++ int ret = 0; ++ ++ down_write(&context->hwqueue_list_lock); ++ if (dxgcontext_is_active(context)) ++ list_add_tail(&hwqueue->hwqueue_list_entry, ++ &context->hwqueue_list_head); ++ else ++ ret = -ENODEV; ++ up_write(&context->hwqueue_list_lock); ++ return ret; ++} ++ ++void dxgcontext_remove_hwqueue(struct dxgcontext *context, ++ struct dxghwqueue *hwqueue) ++{ ++ if (hwqueue->hwqueue_list_entry.next) { ++ list_del(&hwqueue->hwqueue_list_entry); ++ hwqueue->hwqueue_list_entry.next = NULL; ++ } ++} ++ ++void dxgcontext_remove_hwqueue_safe(struct dxgcontext *context, ++ struct dxghwqueue *hwqueue) ++{ ++ down_write(&context->hwqueue_list_lock); ++ dxgcontext_remove_hwqueue(context, hwqueue); ++ up_write(&context->hwqueue_list_lock); ++} ++ + struct dxgallocation *dxgallocation_create(struct dxgprocess *process) + { + struct dxgallocation *alloc; +@@ -958,6 +1017,63 @@ void dxgprocess_adapter_remove_device(struct dxgdevice *device) + mutex_unlock(&device->adapter_info->device_list_mutex); + } + ++struct dxgsharedsyncobject *dxgsharedsyncobj_create(struct dxgadapter *adapter, ++ struct dxgsyncobject *so) ++{ ++ struct dxgsharedsyncobject *syncobj; ++ ++ syncobj = kzalloc(sizeof(*syncobj), GFP_KERNEL); ++ if (syncobj) { ++ kref_init(&syncobj->ssyncobj_kref); ++ INIT_LIST_HEAD(&syncobj->shared_syncobj_list_head); ++ syncobj->adapter = adapter; ++ syncobj->type = so->type; ++ syncobj->monitored_fence = so->monitored_fence; ++ dxgadapter_add_shared_syncobj(adapter, syncobj); ++ kref_get(&adapter->adapter_kref); ++ init_rwsem(&syncobj->syncobj_list_lock); ++ mutex_init(&syncobj->fd_mutex); ++ } ++ return syncobj; ++} ++ ++void dxgsharedsyncobj_release(struct kref *refcount) ++{ ++ struct dxgsharedsyncobject *syncobj; ++ ++ syncobj = container_of(refcount, struct dxgsharedsyncobject, ++ ssyncobj_kref); ++ DXG_TRACE("Destroying shared sync object %p", syncobj); ++ if (syncobj->adapter) { ++ dxgadapter_remove_shared_syncobj(syncobj->adapter, ++ syncobj); ++ kref_put(&syncobj->adapter->adapter_kref, ++ dxgadapter_release); ++ } ++ kfree(syncobj); ++} ++ ++void dxgsharedsyncobj_add_syncobj(struct dxgsharedsyncobject *shared, ++ struct dxgsyncobject *syncobj) ++{ ++ DXG_TRACE("Add syncobj 0x%p 0x%p", shared, syncobj); ++ kref_get(&shared->ssyncobj_kref); ++ down_write(&shared->syncobj_list_lock); ++ list_add(&syncobj->shared_syncobj_list_entry, ++ &shared->shared_syncobj_list_head); ++ syncobj->shared_owner = shared; ++ up_write(&shared->syncobj_list_lock); ++} ++ ++void dxgsharedsyncobj_remove_syncobj(struct dxgsharedsyncobject *shared, ++ struct dxgsyncobject *syncobj) ++{ ++ DXG_TRACE("Remove syncobj 0x%p", shared); ++ down_write(&shared->syncobj_list_lock); ++ list_del(&syncobj->shared_syncobj_list_entry); ++ up_write(&shared->syncobj_list_lock); ++} ++ + struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, + struct dxgdevice *device, + struct dxgadapter *adapter, +@@ -1091,7 +1207,70 @@ void dxgsyncobject_release(struct kref *refcount) + struct dxgsyncobject *syncobj; + + syncobj = container_of(refcount, struct dxgsyncobject, syncobj_kref); ++ if (syncobj->shared_owner) { ++ dxgsharedsyncobj_remove_syncobj(syncobj->shared_owner, ++ syncobj); ++ kref_put(&syncobj->shared_owner->ssyncobj_kref, ++ dxgsharedsyncobj_release); ++ } + if (syncobj->host_event) + kfree(syncobj->host_event); + kfree(syncobj); + } ++ ++struct dxghwqueue *dxghwqueue_create(struct dxgcontext *context) ++{ ++ struct dxgprocess *process = 
context->device->process; ++ struct dxghwqueue *hwqueue = kzalloc(sizeof(*hwqueue), GFP_KERNEL); ++ ++ if (hwqueue) { ++ kref_init(&hwqueue->hwqueue_kref); ++ hwqueue->context = context; ++ hwqueue->process = process; ++ hwqueue->device_handle = context->device->handle; ++ if (dxgcontext_add_hwqueue(context, hwqueue) < 0) { ++ kref_put(&hwqueue->hwqueue_kref, dxghwqueue_release); ++ hwqueue = NULL; ++ } else { ++ kref_get(&context->context_kref); ++ } ++ } ++ return hwqueue; ++} ++ ++void dxghwqueue_destroy(struct dxgprocess *process, struct dxghwqueue *hwqueue) ++{ ++ DXG_TRACE("Destroyng hwqueue %p", hwqueue); ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ if (hwqueue->handle.v) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ hwqueue->handle); ++ hwqueue->handle.v = 0; ++ } ++ if (hwqueue->progress_fence_sync_object.v) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_MONITOREDFENCE, ++ hwqueue->progress_fence_sync_object); ++ hwqueue->progress_fence_sync_object.v = 0; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (hwqueue->progress_fence_mapped_address) { ++ dxg_unmap_iospace(hwqueue->progress_fence_mapped_address, ++ PAGE_SIZE); ++ hwqueue->progress_fence_mapped_address = NULL; ++ } ++ dxgcontext_remove_hwqueue_safe(hwqueue->context, hwqueue); ++ ++ kref_put(&hwqueue->context->context_kref, dxgcontext_release); ++ kref_put(&hwqueue->hwqueue_kref, dxghwqueue_release); ++} ++ ++void dxghwqueue_release(struct kref *refcount) ++{ ++ struct dxghwqueue *hwqueue; ++ ++ hwqueue = container_of(refcount, struct dxghwqueue, hwqueue_kref); ++ kfree(hwqueue); ++} +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 0336e1843223..0330352b9c06 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -40,6 +40,8 @@ struct dxgallocation; + struct dxgresource; + struct dxgsharedresource; + struct dxgsyncobject; ++struct dxgsharedsyncobject; ++struct dxghwqueue; + + /* + * Driver private data. +@@ -137,6 +139,18 @@ struct dxghosteventcpu { + * "device" syncobject, because the belong to a device (dxgdevice). + * Device syncobjects are inserted to a list in dxgdevice. + * ++ * A syncobject can be "shared", meaning that it could be opened by many ++ * processes. ++ * ++ * Shared syncobjects are inserted to a list in its owner ++ * (dxgsharedsyncobject). ++ * A syncobject can be shared by using a global handle or by using ++ * "NT security handle". ++ * When global handle sharing is used, the handle is created durinig object ++ * creation. ++ * When "NT security" is used, the handle for sharing is create be calling ++ * dxgk_share_objects. On Linux "NT handle" is represented by a file ++ * descriptor. FD points to dxgsharedsyncobject. + */ + struct dxgsyncobject { + struct kref syncobj_kref; +@@ -146,6 +160,8 @@ struct dxgsyncobject { + * List entry in dxgadapter for other objects + */ + struct list_head syncobj_list_entry; ++ /* List entry in the dxgsharedsyncobject object for shared synobjects */ ++ struct list_head shared_syncobj_list_entry; + /* Adapter, the syncobject belongs to. NULL for stopped sync obejcts. 
*/ + struct dxgadapter *adapter; + /* +@@ -156,6 +172,8 @@ struct dxgsyncobject { + struct dxgprocess *process; + /* Used by D3DDDI_CPU_NOTIFICATION objects */ + struct dxghosteventcpu *host_event; ++ /* Owner object for shared syncobjects */ ++ struct dxgsharedsyncobject *shared_owner; + /* CPU virtual address of the fence value for "device" syncobjects */ + void *mapped_address; + /* Handle in the process handle table */ +@@ -187,6 +205,41 @@ struct dxgvgpuchannel { + struct hv_device *hdev; + }; + ++/* ++ * The object is used as parent of all sync objects, created for a shared ++ * syncobject. When a shared syncobject is created without NT security, the ++ * handle in the global handle table will point to this object. ++ */ ++struct dxgsharedsyncobject { ++ struct kref ssyncobj_kref; ++ /* Referenced by file descriptors */ ++ int host_shared_handle_nt_reference; ++ /* Corresponding handle in the host global handle table */ ++ struct d3dkmthandle host_shared_handle; ++ /* ++ * When the sync object is shared by NT handle, this is the ++ * corresponding handle in the host ++ */ ++ struct d3dkmthandle host_shared_handle_nt; ++ /* Protects access to host_shared_handle_nt */ ++ struct mutex fd_mutex; ++ struct rw_semaphore syncobj_list_lock; ++ struct list_head shared_syncobj_list_head; ++ struct list_head adapter_shared_syncobj_list_entry; ++ struct dxgadapter *adapter; ++ enum d3dddi_synchronizationobject_type type; ++ u32 monitored_fence:1; ++}; ++ ++struct dxgsharedsyncobject *dxgsharedsyncobj_create(struct dxgadapter *adapter, ++ struct dxgsyncobject ++ *syncobj); ++void dxgsharedsyncobj_release(struct kref *refcount); ++void dxgsharedsyncobj_add_syncobj(struct dxgsharedsyncobject *sharedsyncobj, ++ struct dxgsyncobject *syncobj); ++void dxgsharedsyncobj_remove_syncobj(struct dxgsharedsyncobject *sharedsyncobj, ++ struct dxgsyncobject *syncobj); ++ + struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, + struct dxgdevice *device, + struct dxgadapter *adapter, +@@ -375,6 +428,8 @@ struct dxgadapter { + struct list_head adapter_process_list_head; + /* List of all dxgsharedresource objects */ + struct list_head shared_resource_list_head; ++ /* List of all dxgsharedsyncobject objects */ ++ struct list_head adapter_shared_syncobj_list_head; + /* List of all non-device dxgsyncobject objects */ + struct list_head syncobj_list_head; + /* This lock protects shared resource and syncobject lists */ +@@ -402,6 +457,10 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter); + int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter); + void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter); + void dxgadapter_release_lock_exclusive(struct dxgadapter *adapter); ++void dxgadapter_add_shared_syncobj(struct dxgadapter *adapter, ++ struct dxgsharedsyncobject *so); ++void dxgadapter_remove_shared_syncobj(struct dxgadapter *adapter, ++ struct dxgsharedsyncobject *so); + void dxgadapter_add_syncobj(struct dxgadapter *adapter, + struct dxgsyncobject *so); + void dxgadapter_remove_syncobj(struct dxgsyncobject *so); +@@ -487,8 +546,32 @@ struct dxgcontext *dxgcontext_create(struct dxgdevice *dev); + void dxgcontext_destroy(struct dxgprocess *pr, struct dxgcontext *ctx); + void dxgcontext_destroy_safe(struct dxgprocess *pr, struct dxgcontext *ctx); + void dxgcontext_release(struct kref *refcount); ++int dxgcontext_add_hwqueue(struct dxgcontext *ctx, ++ struct dxghwqueue *hq); ++void dxgcontext_remove_hwqueue(struct dxgcontext *ctx, struct dxghwqueue *hq); ++void 
dxgcontext_remove_hwqueue_safe(struct dxgcontext *ctx, ++ struct dxghwqueue *hq); + bool dxgcontext_is_active(struct dxgcontext *ctx); + ++/* ++ * The object represent the execution hardware queue of a device. ++ */ ++struct dxghwqueue { ++ /* entry in the context hw queue list */ ++ struct list_head hwqueue_list_entry; ++ struct kref hwqueue_kref; ++ struct dxgcontext *context; ++ struct dxgprocess *process; ++ struct d3dkmthandle progress_fence_sync_object; ++ struct d3dkmthandle handle; ++ struct d3dkmthandle device_handle; ++ void *progress_fence_mapped_address; ++}; ++ ++struct dxghwqueue *dxghwqueue_create(struct dxgcontext *ctx); ++void dxghwqueue_destroy(struct dxgprocess *pr, struct dxghwqueue *hq); ++void dxghwqueue_release(struct kref *refcount); ++ + /* + * A shared resource object is created to track the list of dxgresource objects, + * which are opened for the same underlying shared resource. +@@ -720,9 +803,22 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + d3dkmt_waitforsynchronizationobjectfromcpu + *args, + u64 cpu_event); ++int dxgvmb_send_create_hwqueue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_createhwqueue *args, ++ struct d3dkmt_createhwqueue *__user inargs, ++ struct dxghwqueue *hq); ++int dxgvmb_send_destroy_hwqueue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle handle); + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); ++int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, ++ struct dxgvmbuschannel *channel, ++ struct d3dkmt_opensyncobjectfromnthandle2 ++ *args, ++ struct dxgsyncobject *syncobj); + int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process, + struct d3dkmthandle object, + struct d3dkmthandle *shared_handle); +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 69e221613af9..8cbe1095599f 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -259,6 +259,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, + + INIT_LIST_HEAD(&adapter->adapter_process_list_head); + INIT_LIST_HEAD(&adapter->shared_resource_list_head); ++ INIT_LIST_HEAD(&adapter->adapter_shared_syncobj_list_head); + INIT_LIST_HEAD(&adapter->syncobj_list_head); + init_rwsem(&adapter->shared_resource_list_lock); + adapter->pci_dev = dev; +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index a41985ef438d..4021084ebd78 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -277,6 +277,10 @@ struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process, + device_handle = + ((struct dxgcontext *)obj)->device_handle; + break; ++ case HMGRENTRY_TYPE_DXGHWQUEUE: ++ device_handle = ++ ((struct dxghwqueue *)obj)->device_handle; ++ break; + default: + DXG_ERR("invalid handle type: %d", t); + break; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index b3a4377c8b0b..e83600945de1 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -712,6 +712,69 @@ int dxgvmb_send_destroy_process(struct d3dkmthandle process) + return ret; + } + ++int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, ++ struct dxgvmbuschannel *channel, ++ struct d3dkmt_opensyncobjectfromnthandle2 ++ *args, ++ struct dxgsyncobject *syncobj) ++{ ++ struct dxgkvmb_command_opensyncobject *command; ++ struct 
dxgkvmb_command_opensyncobject_return result = { }; ++ int ret; ++ struct dxgvmbusmsg msg; ++ ++ ret = init_message(&msg, NULL, process, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vm_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_OPENSYNCOBJECT, ++ process->host_handle); ++ command->device = args->device; ++ command->global_sync_object = syncobj->shared_owner->host_shared_handle; ++ command->flags = args->flags; ++ if (syncobj->monitored_fence) ++ command->engine_affinity = ++ args->monitored_fence.engine_affinity; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = dxgvmb_send_sync_msg(channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ ++ dxgglobal_release_channel_lock(); ++ ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result.status); ++ if (ret < 0) ++ goto cleanup; ++ ++ args->sync_object = result.sync_object; ++ if (syncobj->monitored_fence) { ++ void *va = dxg_map_iospace(result.guest_cpu_physical_address, ++ PAGE_SIZE, PROT_READ | PROT_WRITE, ++ true); ++ if (va == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ args->monitored_fence.fence_value_cpu_va = va; ++ args->monitored_fence.fence_value_gpu_va = ++ result.gpu_virtual_address; ++ syncobj->mapped_address = va; ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process, + struct d3dkmthandle object, + struct d3dkmthandle *shared_handle) +@@ -2050,6 +2113,164 @@ int dxgvmb_send_wait_sync_object_gpu(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_create_hwqueue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_createhwqueue *args, ++ struct d3dkmt_createhwqueue *__user inargs, ++ struct dxghwqueue *hwqueue) ++{ ++ struct dxgkvmb_command_createhwqueue *command = NULL; ++ u32 cmd_size = sizeof(struct dxgkvmb_command_createhwqueue); ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ if (args->priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("invalid private driver data size: %d", ++ args->priv_drv_data_size); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args->priv_drv_data_size) ++ cmd_size += args->priv_drv_data_size - 1; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CREATEHWQUEUE, ++ process->host_handle); ++ command->context = args->context; ++ command->flags = args->flags; ++ command->priv_drv_data_size = args->priv_drv_data_size; ++ if (args->priv_drv_data_size) { ++ ret = copy_from_user(command->priv_drv_data, ++ args->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy private data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ command, cmd_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(command->status); ++ if (ret < 0) { ++ DXG_ERR("dxgvmb_send_sync_msg failed: %x", ++ command->status.v); ++ goto cleanup; ++ } ++ ++ ret = hmgrtable_assign_handle_safe(&process->handle_table, hwqueue, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ command->hwqueue); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = hmgrtable_assign_handle_safe(&process->handle_table, ++ NULL, ++ HMGRENTRY_TYPE_MONITOREDFENCE, ++ command->hwqueue_progress_fence); ++ if (ret < 0) ++ goto cleanup; ++ ++ hwqueue->handle = 
command->hwqueue; ++ hwqueue->progress_fence_sync_object = command->hwqueue_progress_fence; ++ ++ hwqueue->progress_fence_mapped_address = ++ dxg_map_iospace((u64)command->hwqueue_progress_fence_cpuva, ++ PAGE_SIZE, PROT_READ | PROT_WRITE, true); ++ if (hwqueue->progress_fence_mapped_address == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ ret = copy_to_user(&inargs->queue, &command->hwqueue, ++ sizeof(struct d3dkmthandle)); ++ if (ret < 0) { ++ DXG_ERR("failed to copy hwqueue handle"); ++ goto cleanup; ++ } ++ ret = copy_to_user(&inargs->queue_progress_fence, ++ &command->hwqueue_progress_fence, ++ sizeof(struct d3dkmthandle)); ++ if (ret < 0) { ++ DXG_ERR("failed to progress fence"); ++ goto cleanup; ++ } ++ ret = copy_to_user(&inargs->queue_progress_fence_cpu_va, ++ &hwqueue->progress_fence_mapped_address, ++ sizeof(inargs->queue_progress_fence_cpu_va)); ++ if (ret < 0) { ++ DXG_ERR("failed to copy fence cpu va"); ++ goto cleanup; ++ } ++ ret = copy_to_user(&inargs->queue_progress_fence_gpu_va, ++ &command->hwqueue_progress_fence_gpuva, ++ sizeof(u64)); ++ if (ret < 0) { ++ DXG_ERR("failed to copy fence gpu va"); ++ goto cleanup; ++ } ++ if (args->priv_drv_data_size) { ++ ret = copy_to_user(args->priv_drv_data, ++ command->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret < 0) ++ DXG_ERR("failed to copy private data"); ++ } ++ ++cleanup: ++ if (ret < 0) { ++ DXG_ERR("failed %x", ret); ++ if (hwqueue->handle.v) { ++ hmgrtable_free_handle_safe(&process->handle_table, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ hwqueue->handle); ++ hwqueue->handle.v = 0; ++ } ++ if (command && command->hwqueue.v) ++ dxgvmb_send_destroy_hwqueue(process, adapter, ++ command->hwqueue); ++ } ++ free_message(&msg, process); ++ return ret; ++} ++ ++int dxgvmb_send_destroy_hwqueue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle handle) ++{ ++ int ret; ++ struct dxgkvmb_command_destroyhwqueue *command; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_DESTROYHWQUEUE, ++ process->host_handle); ++ command->hwqueue = handle; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 73d7adac60a1..2e2fd1ae5ec2 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -172,6 +172,21 @@ struct dxgkvmb_command_signalguestevent { + bool dereference_event; + }; + ++struct dxgkvmb_command_opensyncobject { ++ struct dxgkvmb_command_vm_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle global_sync_object; ++ u32 engine_affinity; ++ struct d3dddi_synchronizationobject_flags flags; ++}; ++ ++struct dxgkvmb_command_opensyncobject_return { ++ struct d3dkmthandle sync_object; ++ struct ntstatus status; ++ u64 gpu_virtual_address; ++ u64 guest_cpu_physical_address; ++}; ++ + /* + * The command returns struct d3dkmthandle of a shared object for the + * given pre-process object +@@ -508,4 +523,24 @@ struct dxgkvmb_command_waitforsyncobjectfromgpu { + /* struct d3dkmthandle ObjectHandles[object_count] */ + }; + ++/* Returns the same 
structure */ ++struct dxgkvmb_command_createhwqueue { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct ntstatus status; ++ struct d3dkmthandle hwqueue; ++ struct d3dkmthandle hwqueue_progress_fence; ++ void *hwqueue_progress_fence_cpuva; ++ u64 hwqueue_progress_fence_gpuva; ++ struct d3dkmthandle context; ++ struct d3dddi_createhwqueueflags flags; ++ u32 priv_drv_data_size; ++ char priv_drv_data[1]; ++}; ++ ++/* The command returns ntstatus */ ++struct dxgkvmb_command_destroyhwqueue { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle hwqueue; ++}; ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index abb64f6c3a59..3cfc1c40e0bb 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -36,6 +36,33 @@ static char *errorstr(int ret) + } + #endif + ++static int dxgsyncobj_release(struct inode *inode, struct file *file) ++{ ++ struct dxgsharedsyncobject *syncobj = file->private_data; ++ ++ DXG_TRACE("Release syncobj: %p", syncobj); ++ mutex_lock(&syncobj->fd_mutex); ++ kref_get(&syncobj->ssyncobj_kref); ++ syncobj->host_shared_handle_nt_reference--; ++ if (syncobj->host_shared_handle_nt_reference == 0) { ++ if (syncobj->host_shared_handle_nt.v) { ++ dxgvmb_send_destroy_nt_shared_object( ++ syncobj->host_shared_handle_nt); ++ DXG_TRACE("Syncobj host_handle_nt destroyed: %x", ++ syncobj->host_shared_handle_nt.v); ++ syncobj->host_shared_handle_nt.v = 0; ++ } ++ kref_put(&syncobj->ssyncobj_kref, dxgsharedsyncobj_release); ++ } ++ mutex_unlock(&syncobj->fd_mutex); ++ kref_put(&syncobj->ssyncobj_kref, dxgsharedsyncobj_release); ++ return 0; ++} ++ ++static const struct file_operations dxg_syncobj_fops = { ++ .release = dxgsyncobj_release, ++}; ++ + static int dxgsharedresource_release(struct inode *inode, struct file *file) + { + struct dxgsharedresource *resource = file->private_data; +@@ -833,6 +860,156 @@ dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_create_hwqueue(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_createhwqueue args; ++ struct dxgdevice *device = NULL; ++ struct dxgcontext *context = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct dxghwqueue *hwqueue = NULL; ++ int ret; ++ bool device_lock_acquired = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. 
++ */ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) ++ goto cleanup; ++ ++ device_lock_acquired = true; ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); ++ context = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED); ++ ++ if (context == NULL) { ++ DXG_ERR("Invalid context handle %x", args.context.v); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hwqueue = dxghwqueue_create(context); ++ if (hwqueue == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_create_hwqueue(process, adapter, &args, ++ inargs, hwqueue); ++ ++cleanup: ++ ++ if (ret < 0 && hwqueue) ++ dxghwqueue_destroy(process, hwqueue); ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int dxgkio_destroy_hwqueue(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_destroyhwqueue args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxghwqueue *hwqueue = NULL; ++ struct d3dkmthandle device_handle = {}; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ hwqueue = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ args.queue); ++ if (hwqueue) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGHWQUEUE, args.queue); ++ hwqueue->handle.v = 0; ++ device_handle = hwqueue->device_handle; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (hwqueue == NULL) { ++ DXG_ERR("invalid hwqueue handle: %x", args.queue.v); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. 
++ */ ++ device = dxgprocess_device_by_handle(process, device_handle); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_destroy_hwqueue(process, adapter, args.queue); ++ ++ dxghwqueue_destroy(process, hwqueue); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + get_standard_alloc_priv_data(struct dxgdevice *device, + struct d3dkmt_createstandardallocation *alloc_info, +@@ -1548,6 +1725,164 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_submitsignalsyncobjectstohwqueue args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmthandle hwqueue = {}; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.hwqueue_count > D3DDDI_MAX_BROADCAST_CONTEXT || ++ args.hwqueue_count == 0) { ++ DXG_ERR("invalid hwqueue count: %d", ++ args.hwqueue_count); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count > D3DDDI_MAX_OBJECT_SIGNALED || ++ args.object_count == 0) { ++ DXG_ERR("invalid number of syncobjects: %d", ++ args.object_count); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = copy_from_user(&hwqueue, args.hwqueues, ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy hwqueue handle"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ hwqueue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_signal_sync_object(process, adapter, ++ args.flags, 0, zerohandle, ++ args.object_count, args.objects, ++ args.hwqueue_count, args.hwqueues, ++ args.object_count, ++ args.fence_values, NULL, ++ zerohandle); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_submitwaitforsyncobjectstohwqueue args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int ret; ++ struct d3dkmthandle *objects = NULL; ++ u32 object_size; ++ u64 *fences = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.object_count > D3DDDI_MAX_OBJECT_WAITED_ON || ++ args.object_count == 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ object_size = sizeof(struct d3dkmthandle) * args.object_count; ++ objects = vzalloc(object_size); ++ if (objects == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user(objects, args.objects, object_size); ++ if (ret) { ++ DXG_ERR("failed to copy objects"); ++ ret = -EINVAL; ++ goto 
cleanup; ++ } ++ ++ object_size = sizeof(u64) * args.object_count; ++ fences = vzalloc(object_size); ++ if (fences == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret = copy_from_user(fences, args.fence_values, object_size); ++ if (ret) { ++ DXG_ERR("failed to copy fence values"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ args.hwqueue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_wait_sync_object_gpu(process, adapter, ++ args.hwqueue, args.object_count, ++ objects, fences, false); ++ ++cleanup: ++ ++ if (objects) ++ vfree(objects); ++ if (fences) ++ vfree(fences); ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + { +@@ -1558,6 +1893,7 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + struct eventfd_ctx *event = NULL; + struct dxgsyncobject *syncobj = NULL; + bool device_lock_acquired = false; ++ struct dxgsharedsyncobject *syncobjgbl = NULL; + struct dxghosteventcpu *host_event = NULL; + + ret = copy_from_user(&args, inargs, sizeof(args)); +@@ -1618,6 +1954,22 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + if (ret < 0) + goto cleanup; + ++ if (args.info.flags.shared) { ++ if (args.info.shared_handle.v == 0) { ++ DXG_ERR("shared handle should not be 0"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ syncobjgbl = dxgsharedsyncobj_create(device->adapter, syncobj); ++ if (syncobjgbl == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ dxgsharedsyncobj_add_syncobj(syncobjgbl, syncobj); ++ ++ syncobjgbl->host_shared_handle = args.info.shared_handle; ++ } ++ + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy output args"); +@@ -1646,6 +1998,8 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + if (event) + eventfd_ctx_put(event); + } ++ if (syncobjgbl) ++ kref_put(&syncobjgbl->ssyncobj_kref, dxgsharedsyncobj_release); + if (adapter) + dxgadapter_release_lock_shared(adapter); + if (device_lock_acquired) +@@ -1700,6 +2054,140 @@ dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_opensyncobjectfromnthandle2 args; ++ struct dxgsyncobject *syncobj = NULL; ++ struct dxgsharedsyncobject *syncobj_fd = NULL; ++ struct file *file = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dddi_synchronizationobject_flags flags = { }; ++ int ret; ++ bool device_lock_acquired = false; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ args.sync_object.v = 0; ++ ++ if (args.device.v) { ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ return -EINVAL; ++ goto cleanup; ++ } ++ } else { ++ DXG_ERR("device handle is missing"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = 
dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) ++ goto cleanup; ++ ++ device_lock_acquired = true; ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ file = fget(args.nt_handle); ++ if (!file) { ++ DXG_ERR("failed to get file from handle: %llx", ++ args.nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (file->f_op != &dxg_syncobj_fops) { ++ DXG_ERR("invalid fd: %llx", args.nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ syncobj_fd = file->private_data; ++ if (syncobj_fd == NULL) { ++ DXG_ERR("invalid private data: %llx", args.nt_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ flags.shared = 1; ++ flags.nt_security_sharing = 1; ++ syncobj = dxgsyncobject_create(process, device, adapter, ++ syncobj_fd->type, flags); ++ if (syncobj == NULL) { ++ DXG_ERR("failed to create sync object"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ dxgsharedsyncobj_add_syncobj(syncobj_fd, syncobj); ++ ++ ret = dxgvmb_send_open_sync_object_nt(process, &dxgglobal->channel, ++ &args, syncobj); ++ if (ret < 0) { ++ DXG_ERR("failed to open sync object on host: %x", ++ syncobj_fd->host_shared_handle.v); ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, syncobj, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ args.sync_object); ++ if (ret >= 0) { ++ syncobj->handle = args.sync_object; ++ kref_get(&syncobj->syncobj_kref); ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret == 0) ++ goto success; ++ DXG_ERR("failed to copy output args"); ++ ++cleanup: ++ ++ if (syncobj) { ++ dxgsyncobject_destroy(process, syncobj); ++ syncobj = NULL; ++ } ++ ++ if (args.sync_object.v) ++ dxgvmb_send_destroy_sync_object(process, args.sync_object); ++ ++success: ++ ++ if (file) ++ fput(file); ++ if (syncobj) ++ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release); ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_signal_sync_object(struct dxgprocess *process, void *__user inargs) + { +@@ -2353,6 +2841,30 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgsharedsyncobj_get_host_nt_handle(struct dxgsharedsyncobject *syncobj, ++ struct dxgprocess *process, ++ struct d3dkmthandle objecthandle) ++{ ++ int ret = 0; ++ ++ mutex_lock(&syncobj->fd_mutex); ++ if (syncobj->host_shared_handle_nt_reference == 0) { ++ ret = dxgvmb_send_create_nt_shared_object(process, ++ objecthandle, ++ &syncobj->host_shared_handle_nt); ++ if (ret < 0) ++ goto cleanup; ++ DXG_TRACE("Host_shared_handle_ht: %x", ++ syncobj->host_shared_handle_nt.v); ++ kref_get(&syncobj->ssyncobj_kref); ++ } ++ syncobj->host_shared_handle_nt_reference++; ++cleanup: ++ mutex_unlock(&syncobj->fd_mutex); ++ return ret; ++} ++ + static int + dxgsharedresource_get_host_nt_handle(struct dxgsharedresource *resource, + struct dxgprocess *process, +@@ -2378,6 +2890,7 @@ dxgsharedresource_get_host_nt_handle(struct dxgsharedresource *resource, + } + + enum dxg_sharedobject_type { ++ DXG_SHARED_SYNCOBJECT, + DXG_SHARED_RESOURCE + }; + +@@ -2394,6 +2907,10 @@ static int 
get_object_fd(enum dxg_sharedobject_type type, + } + + switch (type) { ++ case DXG_SHARED_SYNCOBJECT: ++ file = anon_inode_getfile("dxgsyncobj", ++ &dxg_syncobj_fops, object, 0); ++ break; + case DXG_SHARED_RESOURCE: + file = anon_inode_getfile("dxgresource", + &dxg_resource_fops, object, 0); +@@ -2419,6 +2936,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + enum hmgrentry_type object_type; + struct dxgsyncobject *syncobj = NULL; + struct dxgresource *resource = NULL; ++ struct dxgsharedsyncobject *shared_syncobj = NULL; + struct dxgsharedresource *shared_resource = NULL; + struct d3dkmthandle *handles = NULL; + int object_fd = -1; +@@ -2465,6 +2983,17 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + ret = -EINVAL; + } else { + switch (object_type) { ++ case HMGRENTRY_TYPE_DXGSYNCOBJECT: ++ syncobj = obj; ++ if (syncobj->shared) { ++ kref_get(&syncobj->syncobj_kref); ++ shared_syncobj = syncobj->shared_owner; ++ } else { ++ DXG_ERR("sync object is not shared"); ++ syncobj = NULL; ++ ret = -EINVAL; ++ } ++ break; + case HMGRENTRY_TYPE_DXGRESOURCE: + resource = obj; + if (resource->shared_owner) { +@@ -2488,6 +3017,21 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + goto cleanup; + + switch (object_type) { ++ case HMGRENTRY_TYPE_DXGSYNCOBJECT: ++ ret = get_object_fd(DXG_SHARED_SYNCOBJECT, shared_syncobj, ++ &object_fd); ++ if (ret < 0) { ++ DXG_ERR("get_object_fd failed for sync object"); ++ goto cleanup; ++ } ++ ret = dxgsharedsyncobj_get_host_nt_handle(shared_syncobj, ++ process, ++ handles[0]); ++ if (ret < 0) { ++ DXG_ERR("get_host_nt_handle failed"); ++ goto cleanup; ++ } ++ break; + case HMGRENTRY_TYPE_DXGRESOURCE: + ret = get_object_fd(DXG_SHARED_RESOURCE, shared_resource, + &object_fd); +@@ -2954,10 +3498,10 @@ static struct ioctl_desc ioctls[] = { + /* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER}, + /* 0x16 */ {}, + /* 0x17 */ {}, +-/* 0x18 */ {}, ++/* 0x18 */ {dxgkio_create_hwqueue, LX_DXCREATEHWQUEUE}, + /* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE}, + /* 0x1a */ {}, +-/* 0x1b */ {}, ++/* 0x1b */ {dxgkio_destroy_hwqueue, LX_DXDESTROYHWQUEUE}, + /* 0x1c */ {}, + /* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT}, + /* 0x1e */ {}, +@@ -2986,8 +3530,10 @@ static struct ioctl_desc ioctls[] = { + /* 0x33 */ {dxgkio_signal_sync_object_gpu2, + LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2}, + /* 0x34 */ {}, +-/* 0x35 */ {}, +-/* 0x36 */ {}, ++/* 0x35 */ {dxgkio_submit_signal_to_hwqueue, ++ LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE}, ++/* 0x36 */ {dxgkio_submit_wait_to_hwqueue, ++ LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE}, + /* 0x37 */ {}, + /* 0x38 */ {}, + /* 0x39 */ {}, +@@ -2999,7 +3545,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x3d */ {}, + /* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3}, + /* 0x3f */ {dxgkio_share_objects, LX_DXSHAREOBJECTS}, +-/* 0x40 */ {}, ++/* 0x40 */ {dxgkio_open_sync_object_nt, LX_DXOPENSYNCOBJECTFROMNTHANDLE2}, + /* 0x41 */ {dxgkio_query_resource_info_nt, + LX_DXQUERYRESOURCEINFOFROMNTHANDLE}, + /* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index f74564cf7ee9..a78252901c8d 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -201,6 +201,16 @@ struct d3dkmt_createcontextvirtual { + struct d3dkmthandle context; + }; + ++struct d3dddi_createhwqueueflags { ++ union { ++ struct { ++ __u32 disable_gpu_timeout:1; ++ __u32 
reserved:31; ++ }; ++ __u32 value; ++ }; ++}; ++ + enum d3dkmdt_gdisurfacetype { + _D3DKMDT_GDISURFACE_INVALID = 0, + _D3DKMDT_GDISURFACE_TEXTURE = 1, +@@ -694,6 +704,81 @@ struct d3dddi_openallocationinfo2 { + __u64 reserved[6]; + }; + ++struct d3dkmt_createhwqueue { ++ struct d3dkmthandle context; ++ struct d3dddi_createhwqueueflags flags; ++ __u32 priv_drv_data_size; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ struct d3dkmthandle queue; ++ struct d3dkmthandle queue_progress_fence; ++#ifdef __KERNEL__ ++ void *queue_progress_fence_cpu_va; ++#else ++ __u64 queue_progress_fence_cpu_va; ++#endif ++ __u64 queue_progress_fence_gpu_va; ++}; ++ ++struct d3dkmt_destroyhwqueue { ++ struct d3dkmthandle queue; ++}; ++ ++struct d3dkmt_submitwaitforsyncobjectstohwqueue { ++ struct d3dkmthandle hwqueue; ++ __u32 object_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++ __u64 *fence_values; ++#else ++ __u64 objects; ++ __u64 fence_values; ++#endif ++}; ++ ++struct d3dkmt_submitsignalsyncobjectstohwqueue { ++ struct d3dddicb_signalflags flags; ++ __u32 hwqueue_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *hwqueues; ++#else ++ __u64 hwqueues; ++#endif ++ __u32 object_count; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *objects; ++ __u64 *fence_values; ++#else ++ __u64 objects; ++ __u64 fence_values; ++#endif ++}; ++ ++struct d3dkmt_opensyncobjectfromnthandle2 { ++ __u64 nt_handle; ++ struct d3dkmthandle device; ++ struct d3dddi_synchronizationobject_flags flags; ++ struct d3dkmthandle sync_object; ++ __u32 reserved1; ++ union { ++ struct { ++#ifdef __KERNEL__ ++ void *fence_value_cpu_va; ++#else ++ __u64 fence_value_cpu_va; ++#endif ++ __u64 fence_value_gpu_va; ++ __u32 engine_affinity; ++ } monitored_fence; ++ __u64 reserved[8]; ++ }; ++}; ++ + struct d3dkmt_openresourcefromnthandle { + struct d3dkmthandle device; + __u32 reserved; +@@ -819,6 +904,10 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x14, struct d3dkmt_enumadapters2) + #define LX_DXCLOSEADAPTER \ + _IOWR(0x47, 0x15, struct d3dkmt_closeadapter) ++#define LX_DXCREATEHWQUEUE \ ++ _IOWR(0x47, 0x18, struct d3dkmt_createhwqueue) ++#define LX_DXDESTROYHWQUEUE \ ++ _IOWR(0x47, 0x1b, struct d3dkmt_destroyhwqueue) + #define LX_DXDESTROYDEVICE \ + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ +@@ -829,6 +918,10 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x32, struct d3dkmt_signalsynchronizationobjectfromgpu) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2 \ + _IOWR(0x47, 0x33, struct d3dkmt_signalsynchronizationobjectfromgpu2) ++#define LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE \ ++ _IOWR(0x47, 0x35, struct d3dkmt_submitsignalsyncobjectstohwqueue) ++#define LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE \ ++ _IOWR(0x47, 0x36, struct d3dkmt_submitwaitforsyncobjectstohwqueue) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1680-drivers-hv-dxgkrnl-Creation-of-paging-queue-objects.patch b/patch/kernel/archive/wsl2-arm64-6.6/1680-drivers-hv-dxgkrnl-Creation-of-paging-queue-objects.patch new file mode 100644 index 000000000000..4410308be34e --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1680-drivers-hv-dxgkrnl-Creation-of-paging-queue-objects.patch @@ -0,0 +1,640 @@ +From 0000000000000000000000000000000000000000 Mon Sep 
17 00:00:00 2001 +From: Iouri Tarassov +Date: Thu, 20 Jan 2022 15:15:18 -0800 +Subject: drivers: hv: dxgkrnl: Creation of paging queue objects. + +Implement ioctls for creation/destruction of the paging queue objects: + - LX_DXCREATEPAGINGQUEUE, + - LX_DXDESTROYPAGINGQUEUE + +Paging queue objects (dxgpagingqueue) contain operations, which +handle residency of device accessible allocations. An allocation is +resident, when the device has access to it. For example, the allocation +resides in local device memory or device page tables point to system +memory which is made non-pageable. + +Each paging queue has an associated monitored fence sync object, which +is used to detect when a paging operation is completed. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 89 +++++ + drivers/hv/dxgkrnl/dxgkrnl.h | 24 ++ + drivers/hv/dxgkrnl/dxgprocess.c | 4 + + drivers/hv/dxgkrnl/dxgvmbus.c | 74 ++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 17 + + drivers/hv/dxgkrnl/ioctl.c | 189 +++++++++- + include/uapi/misc/d3dkmthk.h | 27 ++ + 7 files changed, 418 insertions(+), 6 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index f59173f13559..410f08768bad 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -278,6 +278,7 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, + void dxgdevice_stop(struct dxgdevice *device) + { + struct dxgallocation *alloc; ++ struct dxgpagingqueue *pqueue; + struct dxgsyncobject *syncobj; + + DXG_TRACE("Stopping device: %p", device); +@@ -288,6 +289,10 @@ void dxgdevice_stop(struct dxgdevice *device) + dxgdevice_release_alloc_list_lock(device); + + hmgrtable_lock(&device->process->handle_table, DXGLOCK_EXCL); ++ list_for_each_entry(pqueue, &device->pqueue_list_head, ++ pqueue_list_entry) { ++ dxgpagingqueue_stop(pqueue); ++ } + list_for_each_entry(syncobj, &device->syncobj_list_head, + syncobj_list_entry) { + dxgsyncobject_stop(syncobj); +@@ -375,6 +380,17 @@ void dxgdevice_destroy(struct dxgdevice *device) + dxgdevice_release_context_list_lock(device); + } + ++ { ++ struct dxgpagingqueue *tmp; ++ struct dxgpagingqueue *pqueue; ++ ++ DXG_TRACE("destroying paging queues"); ++ list_for_each_entry_safe(pqueue, tmp, &device->pqueue_list_head, ++ pqueue_list_entry) { ++ dxgpagingqueue_destroy(pqueue); ++ } ++ } ++ + /* Guest handles need to be released before the host handles */ + hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); + if (device->handle_valid) { +@@ -708,6 +724,26 @@ void dxgdevice_release(struct kref *refcount) + kfree(device); + } + ++void dxgdevice_add_paging_queue(struct dxgdevice *device, ++ struct dxgpagingqueue *entry) ++{ ++ dxgdevice_acquire_alloc_list_lock(device); ++ list_add_tail(&entry->pqueue_list_entry, &device->pqueue_list_head); ++ dxgdevice_release_alloc_list_lock(device); ++} ++ ++void dxgdevice_remove_paging_queue(struct dxgpagingqueue *pqueue) ++{ ++ struct dxgdevice *device = pqueue->device; ++ ++ dxgdevice_acquire_alloc_list_lock(device); ++ if (pqueue->pqueue_list_entry.next) { ++ list_del(&pqueue->pqueue_list_entry); ++ pqueue->pqueue_list_entry.next = NULL; ++ } ++ dxgdevice_release_alloc_list_lock(device); ++} ++ + void dxgdevice_add_syncobj(struct dxgdevice *device, + struct dxgsyncobject *syncobj) + { +@@ -899,6 +935,59 @@ else + kfree(alloc); + } + ++struct dxgpagingqueue *dxgpagingqueue_create(struct dxgdevice *device) ++{ ++ struct dxgpagingqueue *pqueue; ++ ++ 
pqueue = kzalloc(sizeof(*pqueue), GFP_KERNEL); ++ if (pqueue) { ++ pqueue->device = device; ++ pqueue->process = device->process; ++ pqueue->device_handle = device->handle; ++ dxgdevice_add_paging_queue(device, pqueue); ++ } ++ return pqueue; ++} ++ ++void dxgpagingqueue_stop(struct dxgpagingqueue *pqueue) ++{ ++ int ret; ++ ++ if (pqueue->mapped_address) { ++ ret = dxg_unmap_iospace(pqueue->mapped_address, PAGE_SIZE); ++ DXG_TRACE("fence is unmapped %d %p", ++ ret, pqueue->mapped_address); ++ pqueue->mapped_address = NULL; ++ } ++} ++ ++void dxgpagingqueue_destroy(struct dxgpagingqueue *pqueue) ++{ ++ struct dxgprocess *process = pqueue->process; ++ ++ DXG_TRACE("Destroying pqueue %p %x", pqueue, pqueue->handle.v); ++ ++ dxgpagingqueue_stop(pqueue); ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ if (pqueue->handle.v) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ pqueue->handle); ++ pqueue->handle.v = 0; ++ } ++ if (pqueue->syncobj_handle.v) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_MONITOREDFENCE, ++ pqueue->syncobj_handle); ++ pqueue->syncobj_handle.v = 0; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ if (pqueue->device) ++ dxgdevice_remove_paging_queue(pqueue); ++ kfree(pqueue); ++} ++ + struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + struct dxgadapter *adapter) + { +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 0330352b9c06..440d1f9b8882 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -104,6 +104,16 @@ int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev); + void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch); + void dxgvmbuschannel_receive(void *ctx); + ++struct dxgpagingqueue { ++ struct dxgdevice *device; ++ struct dxgprocess *process; ++ struct list_head pqueue_list_entry; ++ struct d3dkmthandle device_handle; ++ struct d3dkmthandle handle; ++ struct d3dkmthandle syncobj_handle; ++ void *mapped_address; ++}; ++ + /* + * The structure describes an event, which will be signaled by + * a message from host. +@@ -127,6 +137,10 @@ struct dxghosteventcpu { + bool remove_from_list; + }; + ++struct dxgpagingqueue *dxgpagingqueue_create(struct dxgdevice *device); ++void dxgpagingqueue_destroy(struct dxgpagingqueue *pqueue); ++void dxgpagingqueue_stop(struct dxgpagingqueue *pqueue); ++ + /* + * This is GPU synchronization object, which is used to synchronize execution + * between GPU contextx/hardware queues or for tracking GPU execution progress. 
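Reviewer note (illustrative only, not part of the patch): like the hwqueue progress fence in the previous patch, each paging queue keeps a monitored-fence page mapped into the kernel (the mapped_address member above) so completion of paging operations can be detected by reading the fence value. A minimal sketch of that mapping pattern follows; read_progress_fence() and fence_gpa are made-up names, while dxg_map_iospace()/dxg_unmap_iospace() are the helpers this series already uses.

	/*
	 * Illustrative sketch -- mirrors the pattern used by
	 * dxgvmb_send_create_hwqueue() and dxgvmb_send_create_paging_queue().
	 * "fence_gpa" stands for the physical address returned by the host
	 * (e.g. fence_storage_physical_address).
	 */
	static u64 read_progress_fence(u64 fence_gpa)
	{
		void *va;
		u64 value = 0;

		va = dxg_map_iospace(fence_gpa, PAGE_SIZE, PROT_READ | PROT_WRITE, true);
		if (va) {
			value = READ_ONCE(*(u64 *)va);	/* current progress-fence value */
			dxg_unmap_iospace(va, PAGE_SIZE);
		}
		return value;
	}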
+@@ -516,6 +530,9 @@ void dxgdevice_remove_alloc_safe(struct dxgdevice *dev, + struct dxgallocation *a); + void dxgdevice_add_resource(struct dxgdevice *dev, struct dxgresource *res); + void dxgdevice_remove_resource(struct dxgdevice *dev, struct dxgresource *res); ++void dxgdevice_add_paging_queue(struct dxgdevice *dev, ++ struct dxgpagingqueue *pqueue); ++void dxgdevice_remove_paging_queue(struct dxgpagingqueue *pqueue); + void dxgdevice_add_syncobj(struct dxgdevice *dev, struct dxgsyncobject *so); + void dxgdevice_remove_syncobj(struct dxgsyncobject *so); + bool dxgdevice_is_active(struct dxgdevice *dev); +@@ -762,6 +779,13 @@ dxgvmb_send_create_context(struct dxgadapter *adapter, + int dxgvmb_send_destroy_context(struct dxgadapter *adapter, + struct dxgprocess *process, + struct d3dkmthandle h); ++int dxgvmb_send_create_paging_queue(struct dxgprocess *pr, ++ struct dxgdevice *dev, ++ struct d3dkmt_createpagingqueue *args, ++ struct dxgpagingqueue *pq); ++int dxgvmb_send_destroy_paging_queue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle h); + int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + struct d3dkmt_createallocation *args, + struct d3dkmt_createallocation *__user inargs, +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index 4021084ebd78..5de3f8ccb448 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -277,6 +277,10 @@ struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process, + device_handle = + ((struct dxgcontext *)obj)->device_handle; + break; ++ case HMGRENTRY_TYPE_DXGPAGINGQUEUE: ++ device_handle = ++ ((struct dxgpagingqueue *)obj)->device_handle; ++ break; + case HMGRENTRY_TYPE_DXGHWQUEUE: + device_handle = + ((struct dxghwqueue *)obj)->device_handle; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index e83600945de1..c9c00b288ae0 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1155,6 +1155,80 @@ int dxgvmb_send_destroy_context(struct dxgadapter *adapter, + return ret; + } + ++int dxgvmb_send_create_paging_queue(struct dxgprocess *process, ++ struct dxgdevice *device, ++ struct d3dkmt_createpagingqueue *args, ++ struct dxgpagingqueue *pqueue) ++{ ++ struct dxgkvmb_command_createpagingqueue_return result; ++ struct dxgkvmb_command_createpagingqueue *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, device->adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CREATEPAGINGQUEUE, ++ process->host_handle); ++ command->args = *args; ++ args->paging_queue.v = 0; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, &result, ++ sizeof(result)); ++ if (ret < 0) { ++ DXG_ERR("send_create_paging_queue failed %x", ret); ++ goto cleanup; ++ } ++ ++ args->paging_queue = result.paging_queue; ++ args->sync_object = result.sync_object; ++ args->fence_cpu_virtual_address = ++ dxg_map_iospace(result.fence_storage_physical_address, PAGE_SIZE, ++ PROT_READ | PROT_WRITE, true); ++ if (args->fence_cpu_virtual_address == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ pqueue->mapped_address = args->fence_cpu_virtual_address; ++ pqueue->handle = args->paging_queue; ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int 
dxgvmb_send_destroy_paging_queue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle h) ++{ ++ int ret; ++ struct dxgkvmb_command_destroypagingqueue *command; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_DESTROYPAGINGQUEUE, ++ process->host_handle); ++ command->paging_queue = h; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, NULL, 0); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + static int + copy_private_data(struct d3dkmt_createallocation *args, + struct dxgkvmb_command_createallocation *command, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 2e2fd1ae5ec2..aba075d374c9 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -462,6 +462,23 @@ struct dxgkvmb_command_destroycontext { + struct d3dkmthandle context; + }; + ++struct dxgkvmb_command_createpagingqueue { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_createpagingqueue args; ++}; ++ ++struct dxgkvmb_command_createpagingqueue_return { ++ struct d3dkmthandle paging_queue; ++ struct d3dkmthandle sync_object; ++ u64 fence_storage_physical_address; ++ u64 fence_storage_offset; ++}; ++ ++struct dxgkvmb_command_destroypagingqueue { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle paging_queue; ++}; ++ + struct dxgkvmb_command_createsyncobject { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_createsynchronizationobject2 args; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 3cfc1c40e0bb..a2d236f5eff5 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -329,7 +329,7 @@ static int dxgsharedresource_seal(struct dxgsharedresource *shared_resource) + + if (alloc_data_size) { + if (data_size < alloc_data_size) { +- dev_err(DXGDEV, ++ DXG_ERR( + "Invalid private data size"); + ret = -EINVAL; + goto cleanup1; +@@ -1010,6 +1010,183 @@ static int dxgkio_destroy_hwqueue(struct dxgprocess *process, + return ret; + } + ++static int ++dxgkio_create_paging_queue(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_createpagingqueue args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct dxgpagingqueue *pqueue = NULL; ++ int ret; ++ struct d3dkmthandle host_handle = {}; ++ bool device_lock_acquired = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. 
++ */ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) ++ goto cleanup; ++ ++ device_lock_acquired = true; ++ adapter = device->adapter; ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ pqueue = dxgpagingqueue_create(device); ++ if (pqueue == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_create_paging_queue(process, device, &args, pqueue); ++ if (ret >= 0) { ++ host_handle = args.paging_queue; ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, pqueue, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ host_handle); ++ if (ret >= 0) { ++ pqueue->handle = host_handle; ++ ret = hmgrtable_assign_handle(&process->handle_table, ++ NULL, ++ HMGRENTRY_TYPE_MONITOREDFENCE, ++ args.sync_object); ++ if (ret >= 0) ++ pqueue->syncobj_handle = args.sync_object; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ /* should not fail after this */ ++ } ++ ++cleanup: ++ ++ if (ret < 0) { ++ if (pqueue) ++ dxgpagingqueue_destroy(pqueue); ++ if (host_handle.v) ++ dxgvmb_send_destroy_paging_queue(process, ++ adapter, ++ host_handle); ++ } ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) { ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_destroy_paging_queue(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dddi_destroypagingqueue args; ++ struct dxgpagingqueue *paging_queue = NULL; ++ int ret; ++ struct d3dkmthandle device_handle = {}; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ paging_queue = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.paging_queue); ++ if (paging_queue) { ++ device_handle = paging_queue->device_handle; ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.paging_queue); ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_MONITOREDFENCE, ++ paging_queue->syncobj_handle); ++ paging_queue->syncobj_handle.v = 0; ++ paging_queue->handle.v = 0; ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. 
++ */ ++ if (device_handle.v) ++ device = dxgprocess_device_by_handle(process, device_handle); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ kref_put(&device->device_kref, dxgdevice_release); ++ device = NULL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_destroy_paging_queue(process, adapter, ++ args.paging_queue); ++ ++ dxgpagingqueue_destroy(paging_queue); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) { ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + get_standard_alloc_priv_data(struct dxgdevice *device, + struct d3dkmt_createstandardallocation *alloc_info, +@@ -1272,7 +1449,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + args.private_runtime_resource_handle; + if (args.flags.create_shared) { + if (!args.flags.nt_security_sharing) { +- dev_err(DXGDEV, ++ DXG_ERR( + "nt_security_sharing must be set"); + ret = -EINVAL; + goto cleanup; +@@ -1313,7 +1490,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + args.private_runtime_data, + args.private_runtime_data_size); + if (ret) { +- dev_err(DXGDEV, ++ DXG_ERR( + "failed to copy runtime data"); + ret = -EINVAL; + goto cleanup; +@@ -1333,7 +1510,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + args.priv_drv_data, + args.priv_drv_data_size); + if (ret) { +- dev_err(DXGDEV, ++ DXG_ERR( + "failed to copy res data"); + ret = -EINVAL; + goto cleanup; +@@ -3481,7 +3658,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x04 */ {dxgkio_create_context_virtual, LX_DXCREATECONTEXTVIRTUAL}, + /* 0x05 */ {dxgkio_destroy_context, LX_DXDESTROYCONTEXT}, + /* 0x06 */ {dxgkio_create_allocation, LX_DXCREATEALLOCATION}, +-/* 0x07 */ {}, ++/* 0x07 */ {dxgkio_create_paging_queue, LX_DXCREATEPAGINGQUEUE}, + /* 0x08 */ {}, + /* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO}, + /* 0x0a */ {}, +@@ -3502,7 +3679,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE}, + /* 0x1a */ {}, + /* 0x1b */ {dxgkio_destroy_hwqueue, LX_DXDESTROYHWQUEUE}, +-/* 0x1c */ {}, ++/* 0x1c */ {dxgkio_destroy_paging_queue, LX_DXDESTROYPAGINGQUEUE}, + /* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT}, + /* 0x1e */ {}, + /* 0x1f */ {}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index a78252901c8d..6ec70852de6e 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -211,6 +211,29 @@ struct d3dddi_createhwqueueflags { + }; + }; + ++enum d3dddi_pagingqueue_priority { ++ _D3DDDI_PAGINGQUEUE_PRIORITY_BELOW_NORMAL = -1, ++ _D3DDDI_PAGINGQUEUE_PRIORITY_NORMAL = 0, ++ _D3DDDI_PAGINGQUEUE_PRIORITY_ABOVE_NORMAL = 1, ++}; ++ ++struct d3dkmt_createpagingqueue { ++ struct d3dkmthandle device; ++ enum d3dddi_pagingqueue_priority priority; ++ struct d3dkmthandle paging_queue; ++ struct d3dkmthandle sync_object; ++#ifdef __KERNEL__ ++ void *fence_cpu_virtual_address; ++#else ++ __u64 fence_cpu_virtual_address; ++#endif ++ __u32 physical_adapter_index; ++}; ++ ++struct d3dddi_destroypagingqueue { ++ struct d3dkmthandle paging_queue; ++}; ++ + enum 
d3dkmdt_gdisurfacetype { + _D3DKMDT_GDISURFACE_INVALID = 0, + _D3DKMDT_GDISURFACE_TEXTURE = 1, +@@ -890,6 +913,8 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x05, struct d3dkmt_destroycontext) + #define LX_DXCREATEALLOCATION \ + _IOWR(0x47, 0x06, struct d3dkmt_createallocation) ++#define LX_DXCREATEPAGINGQUEUE \ ++ _IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXCREATESYNCHRONIZATIONOBJECT \ +@@ -908,6 +933,8 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x18, struct d3dkmt_createhwqueue) + #define LX_DXDESTROYHWQUEUE \ + _IOWR(0x47, 0x1b, struct d3dkmt_destroyhwqueue) ++#define LX_DXDESTROYPAGINGQUEUE \ ++ _IOWR(0x47, 0x1c, struct d3dddi_destroypagingqueue) + #define LX_DXDESTROYDEVICE \ + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1681-drivers-hv-dxgkrnl-Submit-execution-commands-to-the-compute-device.patch b/patch/kernel/archive/wsl2-arm64-6.6/1681-drivers-hv-dxgkrnl-Submit-execution-commands-to-the-compute-device.patch new file mode 100644 index 000000000000..243b807aa1a4 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1681-drivers-hv-dxgkrnl-Submit-execution-commands-to-the-compute-device.patch @@ -0,0 +1,450 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 19 Jan 2022 18:02:09 -0800 +Subject: drivers: hv: dxgkrnl: Submit execution commands to the compute device + +Implements ioctls for submission of compute device buffers for execution: + - LX_DXSUBMITCOMMAND + The ioctl is used to submit a command buffer to the device, + working in the "packet scheduling" mode. + + - LX_DXSUBMITCOMMANDTOHWQUEUE + The ioctl is used to submit a command buffer to the device, + working in the "hardware scheduling" mode. + +To improve performance both ioctls use asynchronous VM bus messages +to communicate with the host as these are high frequency operations. 
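+
+A minimal user-space sketch (illustrative only, not part of the patch) of
+driving LX_DXSUBMITCOMMAND through the /dev/dxg character device exposed by
+dxgkrnl, using the uapi structures added below; the context handle and the
+GPU virtual address of the command buffer are assumed to come from earlier
+D3DKMT calls and are placeholders here:
+
+  #include <string.h>
+  #include <sys/ioctl.h>
+  #include <misc/d3dkmthk.h>   /* uapi header added by this series */
+
+  int submit_cmd(int dxg_fd, struct d3dkmthandle ctx,
+                 __u64 cmd_buf_gpu_va, __u32 cmd_len)
+  {
+          struct d3dkmt_submitcommand args;
+
+          memset(&args, 0, sizeof(args));
+          args.command_buffer = cmd_buf_gpu_va; /* GPU VA of the DMA buffer */
+          args.command_length = cmd_len;
+          args.broadcast_context_count = 1;     /* single target context */
+          args.broadcast_context[0] = ctx;
+          /* no history buffers, written primaries or private driver data */
+
+          return ioctl(dxg_fd, LX_DXSUBMITCOMMAND, &args);
+  }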
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 6 + + drivers/hv/dxgkrnl/dxgvmbus.c | 113 +++++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 14 + + drivers/hv/dxgkrnl/ioctl.c | 127 +++++++++- + include/uapi/misc/d3dkmthk.h | 58 +++++ + 5 files changed, 316 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 440d1f9b8882..ab97bc53b124 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -796,6 +796,9 @@ int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + int dxgvmb_send_destroy_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + struct d3dkmt_destroyallocation2 *args, + struct d3dkmthandle *alloc_handles); ++int dxgvmb_send_submit_command(struct dxgprocess *pr, ++ struct dxgadapter *adapter, ++ struct d3dkmt_submitcommand *args); + int dxgvmb_send_create_sync_object(struct dxgprocess *pr, + struct dxgadapter *adapter, + struct d3dkmt_createsynchronizationobject2 +@@ -838,6 +841,9 @@ int dxgvmb_send_destroy_hwqueue(struct dxgprocess *process, + int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryadapterinfo *args); ++int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_submitcommandtohwqueue *a); + int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, + struct dxgvmbuschannel *channel, + struct d3dkmt_opensyncobjectfromnthandle2 +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index c9c00b288ae0..7cb04fec217e 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1901,6 +1901,61 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + return ret; + } + ++int dxgvmb_send_submit_command(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_submitcommand *args) ++{ ++ int ret; ++ u32 cmd_size; ++ struct dxgkvmb_command_submitcommand *command; ++ u32 hbufsize = args->num_history_buffers * sizeof(struct d3dkmthandle); ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ cmd_size = sizeof(struct dxgkvmb_command_submitcommand) + ++ hbufsize + args->priv_drv_data_size; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ ret = copy_from_user(&command[1], args->history_buffer_array, ++ hbufsize); ++ if (ret) { ++ DXG_ERR(" failed to copy history buffer"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_from_user((u8 *) &command[1] + hbufsize, ++ args->priv_drv_data, args->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy history priv data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_SUBMITCOMMAND, ++ process->host_handle); ++ command->args = *args; ++ ++ if (dxgglobal->async_msg_enabled) { ++ command->hdr.async_msg = 1; ++ ret = dxgvmb_send_async_msg(msg.channel, msg.hdr, msg.size); ++ } else { ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, ++ msg.size); ++ } ++ ++cleanup: ++ ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + static void set_result(struct d3dkmt_createsynchronizationobject2 *args, + u64 fence_gpu_va, u8 *va) + { +@@ -2427,3 +2482,61 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + DXG_TRACE("err: 
%d", ret); + return ret; + } ++ ++int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_submitcommandtohwqueue ++ *args) ++{ ++ int ret = -EINVAL; ++ u32 cmd_size; ++ struct dxgkvmb_command_submitcommandtohwqueue *command; ++ u32 primaries_size = args->num_primaries * sizeof(struct d3dkmthandle); ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ cmd_size = sizeof(*command) + args->priv_drv_data_size + primaries_size; ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ if (primaries_size) { ++ ret = copy_from_user(&command[1], args->written_primaries, ++ primaries_size); ++ if (ret) { ++ DXG_ERR("failed to copy primaries handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ if (args->priv_drv_data_size) { ++ ret = copy_from_user((char *)&command[1] + primaries_size, ++ args->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy primaries data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_SUBMITCOMMANDTOHWQUEUE, ++ process->host_handle); ++ command->args = *args; ++ ++ if (dxgglobal->async_msg_enabled) { ++ command->hdr.async_msg = 1; ++ ret = dxgvmb_send_async_msg(msg.channel, msg.hdr, msg.size); ++ } else { ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, ++ msg.size); ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index aba075d374c9..acfdbde09e82 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -314,6 +314,20 @@ struct dxgkvmb_command_flushdevice { + enum dxgdevice_flushschedulerreason reason; + }; + ++struct dxgkvmb_command_submitcommand { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_submitcommand args; ++ /* HistoryBufferHandles */ ++ /* PrivateDriverData */ ++}; ++ ++struct dxgkvmb_command_submitcommandtohwqueue { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_submitcommandtohwqueue args; ++ /* Written primaries */ ++ /* PrivateDriverData */ ++}; ++ + struct dxgkvmb_command_createallocation_allocinfo { + u32 flags; + u32 priv_drv_data_size; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index a2d236f5eff5..9128694c8e78 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -1902,6 +1902,129 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_submit_command(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_submitcommand args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.broadcast_context_count > D3DDDI_MAX_BROADCAST_CONTEXT || ++ args.broadcast_context_count == 0) { ++ DXG_ERR("invalid number of contexts"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("invalid private data size"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.num_history_buffers > 1024) { ++ DXG_ERR("invalid number of history buffers"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.num_primaries > 
DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("invalid number of primaries"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.broadcast_context[0]); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_submit_command(process, adapter, &args); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_submit_command_to_hwqueue(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_submitcommandtohwqueue args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("invalid private data size"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.num_primaries > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ DXG_ERR("invalid number of primaries"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ args.hwqueue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_submit_command_hwqueue(process, adapter, &args); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs) + { +@@ -3666,7 +3789,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x0c */ {}, + /* 0x0d */ {}, + /* 0x0e */ {}, +-/* 0x0f */ {}, ++/* 0x0f */ {dxgkio_submit_command, LX_DXSUBMITCOMMAND}, + /* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT}, + /* 0x11 */ {dxgkio_signal_sync_object, LX_DXSIGNALSYNCHRONIZATIONOBJECT}, + /* 0x12 */ {dxgkio_wait_sync_object, LX_DXWAITFORSYNCHRONIZATIONOBJECT}, +@@ -3706,7 +3829,7 @@ static struct ioctl_desc ioctls[] = { + LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU}, + /* 0x33 */ {dxgkio_signal_sync_object_gpu2, + LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2}, +-/* 0x34 */ {}, ++/* 0x34 */ {dxgkio_submit_command_to_hwqueue, LX_DXSUBMITCOMMANDTOHWQUEUE}, + /* 0x35 */ {dxgkio_submit_signal_to_hwqueue, + LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE}, + /* 0x36 */ {dxgkio_submit_wait_to_hwqueue, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 6ec70852de6e..9238115d165d 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -58,6 +58,8 @@ struct winluid { + __u32 b; + }; + ++#define D3DDDI_MAX_WRITTEN_PRIMARIES 16 ++ + #define D3DKMT_CREATEALLOCATION_MAX 1024 + #define D3DKMT_ADAPTERS_MAX 64 + #define D3DDDI_MAX_BROADCAST_CONTEXT 64 +@@ -525,6 +527,58 @@ struct d3dkmt_destroysynchronizationobject { + struct d3dkmthandle sync_object; + }; + ++struct d3dkmt_submitcommandflags { ++ __u32 null_rendering:1; ++ __u32 present_redirected:1; 
++ __u32 reserved:30; ++}; ++ ++struct d3dkmt_submitcommand { ++ __u64 command_buffer; ++ __u32 command_length; ++ struct d3dkmt_submitcommandflags flags; ++ __u64 present_history_token; ++ __u32 broadcast_context_count; ++ struct d3dkmthandle broadcast_context[D3DDDI_MAX_BROADCAST_CONTEXT]; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ __u32 priv_drv_data_size; ++ __u32 num_primaries; ++ struct d3dkmthandle written_primaries[D3DDDI_MAX_WRITTEN_PRIMARIES]; ++ __u32 num_history_buffers; ++ __u32 reserved1; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *history_buffer_array; ++#else ++ __u64 history_buffer_array; ++#endif ++}; ++ ++struct d3dkmt_submitcommandtohwqueue { ++ struct d3dkmthandle hwqueue; ++ __u32 reserved; ++ __u64 hwqueue_progress_fence_id; ++ __u64 command_buffer; ++ __u32 command_length; ++ __u32 priv_drv_data_size; ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ __u32 num_primaries; ++ __u32 reserved1; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *written_primaries; ++#else ++ __u64 written_primaries; ++#endif ++}; ++ + enum d3dkmt_standardallocationtype { + _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1, + _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2, +@@ -917,6 +971,8 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) ++#define LX_DXSUBMITCOMMAND \ ++ _IOWR(0x47, 0x0f, struct d3dkmt_submitcommand) + #define LX_DXCREATESYNCHRONIZATIONOBJECT \ + _IOWR(0x47, 0x10, struct d3dkmt_createsynchronizationobject2) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECT \ +@@ -945,6 +1001,8 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x32, struct d3dkmt_signalsynchronizationobjectfromgpu) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2 \ + _IOWR(0x47, 0x33, struct d3dkmt_signalsynchronizationobjectfromgpu2) ++#define LX_DXSUBMITCOMMANDTOHWQUEUE \ ++ _IOWR(0x47, 0x34, struct d3dkmt_submitcommandtohwqueue) + #define LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE \ + _IOWR(0x47, 0x35, struct d3dkmt_submitsignalsyncobjectstohwqueue) + #define LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1682-drivers-hv-dxgkrnl-Share-objects-with-the-host.patch b/patch/kernel/archive/wsl2-arm64-6.6/1682-drivers-hv-dxgkrnl-Share-objects-with-the-host.patch new file mode 100644 index 000000000000..433f03238b69 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1682-drivers-hv-dxgkrnl-Share-objects-with-the-host.patch @@ -0,0 +1,271 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Sat, 7 Aug 2021 18:11:34 -0700 +Subject: drivers: hv: dxgkrnl: Share objects with the host + +Implement the LX_DXSHAREOBJECTWITHHOST ioctl. +This ioctl is used to create a Windows NT handle on the host +for the given shared object (resource or sync object). The NT +handle is returned to the caller. The caller could share the NT +handle with a host application, which needs to access the object. +The host application can open the shared resource using the NT +handle. This way the guest and the host have access to the same +object. + +Fix incorrect handling of error results from copy_from_user(). 
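+
+The copy_from_user()/copy_to_user() fixes below rely on the kernel convention
+that both helpers return the number of bytes left uncopied (0 on success),
+never a negative errno, so checks of the form "if (ret < 0)" can never catch
+a partial copy. A hedged sketch of the corrected pattern (the helper name is
+illustrative and not part of the driver; DXG_ERR is the driver's own macro):
+
+  #include <linux/uaccess.h>
+
+  static int dxg_copy_in(void *dst, const void __user *src, size_t size)
+  {
+          /* non-zero return means some bytes were left uncopied */
+          if (copy_from_user(dst, src, size)) {
+                  DXG_ERR("failed to copy input args");
+                  return -EINVAL; /* errno chosen to match the driver */
+          }
+          return 0;
+  }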
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 2 + + drivers/hv/dxgkrnl/dxgvmbus.c | 60 +++++++++- + drivers/hv/dxgkrnl/dxgvmbus.h | 18 +++ + drivers/hv/dxgkrnl/ioctl.c | 38 +++++- + include/uapi/misc/d3dkmthk.h | 9 ++ + 5 files changed, 120 insertions(+), 7 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index ab97bc53b124..a39d11d76e41 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -872,6 +872,8 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel, + void *command, + u32 cmd_size); ++int dxgvmb_send_share_object_with_host(struct dxgprocess *process, ++ struct d3dkmt_shareobjectwithhost *args); + + void signal_host_cpu_event(struct dxghostevent *eventhdr); + int ntstatus2int(struct ntstatus status); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 7cb04fec217e..67a16de622e0 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -881,6 +881,50 @@ int dxgvmb_send_destroy_sync_object(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_share_object_with_host(struct dxgprocess *process, ++ struct d3dkmt_shareobjectwithhost *args) ++{ ++ struct dxgkvmb_command_shareobjectwithhost *command; ++ struct dxgkvmb_command_shareobjectwithhost_return result = {}; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, NULL, process, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ command_vm_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_SHAREOBJECTWITHHOST, ++ process->host_handle); ++ command->device_handle = args->device_handle; ++ command->object_handle = args->object_handle; ++ ++ ret = dxgvmb_send_sync_msg(dxgglobal_get_dxgvmbuschannel(), ++ msg.hdr, msg.size, &result, sizeof(result)); ++ ++ dxgglobal_release_channel_lock(); ++ ++ if (ret || !NT_SUCCESS(result.status)) { ++ if (ret == 0) ++ ret = ntstatus2int(result.status); ++ DXG_ERR("Host failed to share object with host: %d %x", ++ ret, result.status.v); ++ goto cleanup; ++ } ++ args->object_vail_nt_handle = result.vail_nt_handle; ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_ERR("err: %d", ret); ++ return ret; ++} ++ + /* + * Virtual GPU messages to the host + */ +@@ -2323,37 +2367,43 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + + ret = copy_to_user(&inargs->queue, &command->hwqueue, + sizeof(struct d3dkmthandle)); +- if (ret < 0) { ++ if (ret) { + DXG_ERR("failed to copy hwqueue handle"); ++ ret = -EINVAL; + goto cleanup; + } + ret = copy_to_user(&inargs->queue_progress_fence, + &command->hwqueue_progress_fence, + sizeof(struct d3dkmthandle)); +- if (ret < 0) { ++ if (ret) { + DXG_ERR("failed to progress fence"); ++ ret = -EINVAL; + goto cleanup; + } + ret = copy_to_user(&inargs->queue_progress_fence_cpu_va, + &hwqueue->progress_fence_mapped_address, + sizeof(inargs->queue_progress_fence_cpu_va)); +- if (ret < 0) { ++ if (ret) { + DXG_ERR("failed to copy fence cpu va"); ++ ret = -EINVAL; + goto cleanup; + } + ret = copy_to_user(&inargs->queue_progress_fence_gpu_va, + &command->hwqueue_progress_fence_gpuva, + sizeof(u64)); +- if (ret < 0) { ++ if (ret) { + DXG_ERR("failed to copy fence gpu va"); ++ ret = -EINVAL; + goto cleanup; + } + if 
(args->priv_drv_data_size) { + ret = copy_to_user(args->priv_drv_data, + command->priv_drv_data, + args->priv_drv_data_size); +- if (ret < 0) ++ if (ret) { + DXG_ERR("failed to copy private data"); ++ ret = -EINVAL; ++ } + } + + cleanup: +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index acfdbde09e82..c1f693917d99 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -574,4 +574,22 @@ struct dxgkvmb_command_destroyhwqueue { + struct d3dkmthandle hwqueue; + }; + ++struct dxgkvmb_command_shareobjectwithhost { ++ struct dxgkvmb_command_vm_to_host hdr; ++ struct d3dkmthandle device_handle; ++ struct d3dkmthandle object_handle; ++ u64 reserved; ++}; ++ ++struct dxgkvmb_command_shareobjectwithhost_return { ++ struct ntstatus status; ++ u32 alignment; ++ u64 vail_nt_handle; ++}; ++ ++int ++dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel, ++ void *command, u32 command_size, void *result, ++ u32 result_size); ++ + #endif /* _DXGVMBUS_H */ +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 9128694c8e78..ac052836ce27 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -2460,6 +2460,7 @@ dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs) + if (ret == 0) + goto success; + DXG_ERR("failed to copy output args"); ++ ret = -EINVAL; + + cleanup: + +@@ -3364,8 +3365,10 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + tmp = (u64) object_fd; + + ret = copy_to_user(args.shared_handle, &tmp, sizeof(u64)); +- if (ret < 0) ++ if (ret) { + DXG_ERR("failed to copy shared handle"); ++ ret = -EINVAL; ++ } + + cleanup: + if (ret < 0) { +@@ -3773,6 +3776,37 @@ dxgkio_open_resource_nt(struct dxgprocess *process, + return ret; + } + ++static int ++dxgkio_share_object_with_host(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_shareobjectwithhost args; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_share_object_with_host(process, &args); ++ if (ret) { ++ DXG_ERR("dxgvmb_send_share_object_with_host dailed"); ++ goto cleanup; ++ } ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy data to user"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static struct ioctl_desc ioctls[] = { + /* 0x00 */ {}, + /* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID}, +@@ -3850,7 +3884,7 @@ static struct ioctl_desc ioctls[] = { + LX_DXQUERYRESOURCEINFOFROMNTHANDLE}, + /* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE}, + /* 0x43 */ {}, +-/* 0x44 */ {}, ++/* 0x44 */ {dxgkio_share_object_with_host, LX_DXSHAREOBJECTWITHHOST}, + /* 0x45 */ {}, + }; + +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 9238115d165d..895861505e6e 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -952,6 +952,13 @@ struct d3dkmt_enumadapters3 { + #endif + }; + ++struct d3dkmt_shareobjectwithhost { ++ struct d3dkmthandle device_handle; ++ struct d3dkmthandle object_handle; ++ __u64 reserved; ++ __u64 object_vail_nt_handle; ++}; ++ + /* + * Dxgkrnl Graphics Port Driver ioctl definitions + * +@@ -1021,5 +1028,7 @@ struct d3dkmt_enumadapters3 { + _IOWR(0x47, 0x41, struct d3dkmt_queryresourceinfofromnthandle) + #define 
LX_DXOPENRESOURCEFROMNTHANDLE \ + _IOWR(0x47, 0x42, struct d3dkmt_openresourcefromnthandle) ++#define LX_DXSHAREOBJECTWITHHOST \ ++ _IOWR(0x47, 0x44, struct d3dkmt_shareobjectwithhost) + + #endif /* _D3DKMTHK_H */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1683-drivers-hv-dxgkrnl-Query-the-dxgdevice-state.patch b/patch/kernel/archive/wsl2-arm64-6.6/1683-drivers-hv-dxgkrnl-Query-the-dxgdevice-state.patch new file mode 100644 index 000000000000..604dcb0ffaf5 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1683-drivers-hv-dxgkrnl-Query-the-dxgdevice-state.patch @@ -0,0 +1,454 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 19 Jan 2022 16:53:47 -0800 +Subject: drivers: hv: dxgkrnl: Query the dxgdevice state + +Implement the ioctl to query the dxgdevice state - LX_DXGETDEVICESTATE. +The IOCTL is used to query the state of the given dxgdevice object (active, +error, etc.). + +A call to the dxgdevice execution state could be high frequency. +The following method is used to avoid sending a synchronous VM +bus message to the host for every call: +- When a dxgdevice is created, a pointer to dxgglobal->device_state_counter + is sent to the host +- Every time the device state on the host is changed, the host will send + an asynchronous message to the guest (DXGK_VMBCOMMAND_SETGUESTDATA) and + the guest will increment the device_state_counter value. +- the dxgdevice object has execution_state_counter member, which is equal + to dxgglobal->device_state_counter value at the time when + LX_DXGETDEVICESTATE was last processed.. +- if execution_state_counter is different from device_state_counter, the + dxgk_vmbcommand_getdevicestate VM bus message is sent to the host. + Otherwise, the cached value is returned to the caller. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 11 + + drivers/hv/dxgkrnl/dxgmodule.c | 1 - + drivers/hv/dxgkrnl/dxgvmbus.c | 68 +++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 26 +++ + drivers/hv/dxgkrnl/ioctl.c | 66 +++++- + include/uapi/misc/d3dkmthk.h | 101 +++++++++- + 6 files changed, 261 insertions(+), 12 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index a39d11d76e41..b131c3b43838 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -268,12 +268,18 @@ void dxgsyncobject_destroy(struct dxgprocess *process, + void dxgsyncobject_stop(struct dxgsyncobject *syncobj); + void dxgsyncobject_release(struct kref *refcount); + ++/* ++ * device_state_counter - incremented every time the execition state of ++ * a DXGDEVICE is changed in the host. Used to optimize access to the ++ * device execution state. 
++ */ + struct dxgglobal { + struct dxgdriver *drvdata; + struct dxgvmbuschannel channel; + struct hv_device *hdev; + u32 num_adapters; + u32 vmbus_ver; /* Interface version */ ++ atomic_t device_state_counter; + struct resource *mem; + u64 mmiospace_base; + u64 mmiospace_size; +@@ -512,6 +518,7 @@ struct dxgdevice { + struct list_head syncobj_list_head; + struct d3dkmthandle handle; + enum d3dkmt_deviceexecution_state execution_state; ++ int execution_state_counter; + u32 handle_valid; + }; + +@@ -849,6 +856,10 @@ int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, + struct d3dkmt_opensyncobjectfromnthandle2 + *args, + struct dxgsyncobject *syncobj); ++int dxgvmb_send_get_device_state(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_getdevicestate *args, ++ struct d3dkmt_getdevicestate *__user inargs); + int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process, + struct d3dkmthandle object, + struct d3dkmthandle *shared_handle); +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 8cbe1095599f..5c364a46b65f 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -827,7 +827,6 @@ static struct dxgglobal *dxgglobal_create(void) + #ifdef DEBUG + dxgk_validate_ioctls(); + #endif +- + return dxgglobal; + } + +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 67a16de622e0..ed800dc09180 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -281,6 +281,24 @@ static void command_vm_to_host_init1(struct dxgkvmb_command_vm_to_host *command, + command->channel_type = DXGKVMB_VM_TO_HOST; + } + ++static void set_guest_data(struct dxgkvmb_command_host_to_vm *packet, ++ u32 packet_length) ++{ ++ struct dxgkvmb_command_setguestdata *command = (void *)packet; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ DXG_TRACE("Setting guest data: %d %d %p %p", ++ command->data_type, ++ command->data32, ++ command->guest_pointer, ++ &dxgglobal->device_state_counter); ++ if (command->data_type == SETGUESTDATA_DATATYPE_DWORD && ++ command->guest_pointer == &dxgglobal->device_state_counter && ++ command->data32 != 0) { ++ atomic_inc(&dxgglobal->device_state_counter); ++ } ++} ++ + static void signal_guest_event(struct dxgkvmb_command_host_to_vm *packet, + u32 packet_length) + { +@@ -311,6 +329,9 @@ static void process_inband_packet(struct dxgvmbuschannel *channel, + DXG_TRACE("global packet %d", + packet->command_type); + switch (packet->command_type) { ++ case DXGK_VMBCOMMAND_SETGUESTDATA: ++ set_guest_data(packet, packet_length); ++ break; + case DXGK_VMBCOMMAND_SIGNALGUESTEVENT: + case DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE: + signal_guest_event(packet, packet_length); +@@ -1028,6 +1049,7 @@ struct d3dkmthandle dxgvmb_send_create_device(struct dxgadapter *adapter, + struct dxgkvmb_command_createdevice *command; + struct dxgkvmb_command_createdevice_return result = { }; + struct dxgvmbusmsg msg; ++ struct dxgglobal *dxgglobal = dxggbl(); + + ret = init_message(&msg, adapter, process, sizeof(*command)); + if (ret) +@@ -1037,6 +1059,7 @@ struct d3dkmthandle dxgvmb_send_create_device(struct dxgadapter *adapter, + command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_CREATEDEVICE, + process->host_handle); + command->flags = args->flags; ++ command->error_code = &dxgglobal->device_state_counter; + + ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, + &result, sizeof(result)); +@@ -1806,6 +1829,51 @@ int 
dxgvmb_send_destroy_allocation(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_get_device_state(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_getdevicestate *args, ++ struct d3dkmt_getdevicestate *__user output) ++{ ++ int ret; ++ struct dxgkvmb_command_getdevicestate *command; ++ struct dxgkvmb_command_getdevicestate_return result = { }; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_GETDEVICESTATE, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result.status); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(output, &result.args, sizeof(result.args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EINVAL; ++ } ++ ++ if (args->state_type == _D3DKMT_DEVICESTATE_EXECUTION) ++ args->execution_state = result.args.execution_state; ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_open_resource(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmthandle device, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index c1f693917d99..6ca1068b0d4c 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -172,6 +172,22 @@ struct dxgkvmb_command_signalguestevent { + bool dereference_event; + }; + ++enum set_guestdata_type { ++ SETGUESTDATA_DATATYPE_DWORD = 0, ++ SETGUESTDATA_DATATYPE_UINT64 = 1 ++}; ++ ++struct dxgkvmb_command_setguestdata { ++ struct dxgkvmb_command_host_to_vm hdr; ++ void *guest_pointer; ++ union { ++ u64 data64; ++ u32 data32; ++ }; ++ u32 dereference : 1; ++ u32 data_type : 4; ++}; ++ + struct dxgkvmb_command_opensyncobject { + struct dxgkvmb_command_vm_to_host hdr; + struct d3dkmthandle device; +@@ -574,6 +590,16 @@ struct dxgkvmb_command_destroyhwqueue { + struct d3dkmthandle hwqueue; + }; + ++struct dxgkvmb_command_getdevicestate { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_getdevicestate args; ++}; ++ ++struct dxgkvmb_command_getdevicestate_return { ++ struct d3dkmt_getdevicestate args; ++ struct ntstatus status; ++}; ++ + struct dxgkvmb_command_shareobjectwithhost { + struct dxgkvmb_command_vm_to_host hdr; + struct d3dkmthandle device_handle; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index ac052836ce27..26d410fd6e99 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3142,6 +3142,70 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_getdevicestate args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int global_device_state_counter = 0; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ 
if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ if (args.state_type == _D3DKMT_DEVICESTATE_EXECUTION) { ++ global_device_state_counter = ++ atomic_read(&dxgglobal->device_state_counter); ++ if (device->execution_state_counter == ++ global_device_state_counter) { ++ args.execution_state = device->execution_state; ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy args to user"); ++ ret = -EINVAL; ++ } ++ goto cleanup; ++ } ++ } ++ ++ ret = dxgvmb_send_get_device_state(process, adapter, &args, inargs); ++ ++ if (ret == 0 && args.state_type == _D3DKMT_DEVICESTATE_EXECUTION) { ++ device->execution_state = args.execution_state; ++ device->execution_state_counter = global_device_state_counter; ++ } ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ if (ret < 0) ++ DXG_ERR("Failed to get device state %x", ret); ++ ++ return ret; ++} ++ + static int + dxgsharedsyncobj_get_host_nt_handle(struct dxgsharedsyncobject *syncobj, + struct dxgprocess *process, +@@ -3822,7 +3886,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x0b */ {}, + /* 0x0c */ {}, + /* 0x0d */ {}, +-/* 0x0e */ {}, ++/* 0x0e */ {dxgkio_get_device_state, LX_DXGETDEVICESTATE}, + /* 0x0f */ {dxgkio_submit_command, LX_DXSUBMITCOMMAND}, + /* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT}, + /* 0x11 */ {dxgkio_signal_sync_object, LX_DXSIGNALSYNCHRONIZATIONOBJECT}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 895861505e6e..8a013b07e88a 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -236,6 +236,95 @@ struct d3dddi_destroypagingqueue { + struct d3dkmthandle paging_queue; + }; + ++enum dxgk_render_pipeline_stage { ++ _DXGK_RENDER_PIPELINE_STAGE_UNKNOWN = 0, ++ _DXGK_RENDER_PIPELINE_STAGE_INPUT_ASSEMBLER = 1, ++ _DXGK_RENDER_PIPELINE_STAGE_VERTEX_SHADER = 2, ++ _DXGK_RENDER_PIPELINE_STAGE_GEOMETRY_SHADER = 3, ++ _DXGK_RENDER_PIPELINE_STAGE_STREAM_OUTPUT = 4, ++ _DXGK_RENDER_PIPELINE_STAGE_RASTERIZER = 5, ++ _DXGK_RENDER_PIPELINE_STAGE_PIXEL_SHADER = 6, ++ _DXGK_RENDER_PIPELINE_STAGE_OUTPUT_MERGER = 7, ++}; ++ ++enum dxgk_page_fault_flags { ++ _DXGK_PAGE_FAULT_WRITE = 0x1, ++ _DXGK_PAGE_FAULT_FENCE_INVALID = 0x2, ++ _DXGK_PAGE_FAULT_ADAPTER_RESET_REQUIRED = 0x4, ++ _DXGK_PAGE_FAULT_ENGINE_RESET_REQUIRED = 0x8, ++ _DXGK_PAGE_FAULT_FATAL_HARDWARE_ERROR = 0x10, ++ _DXGK_PAGE_FAULT_IOMMU = 0x20, ++ _DXGK_PAGE_FAULT_HW_CONTEXT_VALID = 0x40, ++ _DXGK_PAGE_FAULT_PROCESS_HANDLE_VALID = 0x80, ++}; ++ ++enum dxgk_general_error_code { ++ _DXGK_GENERAL_ERROR_PAGE_FAULT = 0, ++ _DXGK_GENERAL_ERROR_INVALID_INSTRUCTION = 1, ++}; ++ ++struct dxgk_fault_error_code { ++ union { ++ struct { ++ __u32 is_device_specific_code:1; ++ enum dxgk_general_error_code general_error_code:31; ++ }; ++ struct { ++ __u32 is_device_specific_code_reserved_bit:1; ++ __u32 device_specific_code:31; ++ }; ++ }; ++}; ++ ++struct d3dkmt_devicereset_state { ++ union { ++ struct { ++ __u32 desktop_switched:1; ++ __u32 reserved:31; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_devicepagefault_state { ++ __u64 faulted_primitive_api_sequence_number; ++ enum dxgk_render_pipeline_stage faulted_pipeline_stage; ++ __u32 faulted_bind_table_entry; ++ enum dxgk_page_fault_flags page_fault_flags; ++ struct dxgk_fault_error_code fault_error_code; ++ __u64 faulted_virtual_address; ++}; ++ ++enum d3dkmt_deviceexecution_state { ++ 
_D3DKMT_DEVICEEXECUTION_ACTIVE = 1, ++ _D3DKMT_DEVICEEXECUTION_RESET = 2, ++ _D3DKMT_DEVICEEXECUTION_HUNG = 3, ++ _D3DKMT_DEVICEEXECUTION_STOPPED = 4, ++ _D3DKMT_DEVICEEXECUTION_ERROR_OUTOFMEMORY = 5, ++ _D3DKMT_DEVICEEXECUTION_ERROR_DMAFAULT = 6, ++ _D3DKMT_DEVICEEXECUTION_ERROR_DMAPAGEFAULT = 7, ++}; ++ ++enum d3dkmt_devicestate_type { ++ _D3DKMT_DEVICESTATE_EXECUTION = 1, ++ _D3DKMT_DEVICESTATE_PRESENT = 2, ++ _D3DKMT_DEVICESTATE_RESET = 3, ++ _D3DKMT_DEVICESTATE_PRESENT_DWM = 4, ++ _D3DKMT_DEVICESTATE_PAGE_FAULT = 5, ++ _D3DKMT_DEVICESTATE_PRESENT_QUEUE = 6, ++}; ++ ++struct d3dkmt_getdevicestate { ++ struct d3dkmthandle device; ++ enum d3dkmt_devicestate_type state_type; ++ union { ++ enum d3dkmt_deviceexecution_state execution_state; ++ struct d3dkmt_devicereset_state reset_state; ++ struct d3dkmt_devicepagefault_state page_fault_state; ++ char alignment[48]; ++ }; ++}; ++ + enum d3dkmdt_gdisurfacetype { + _D3DKMDT_GDISURFACE_INVALID = 0, + _D3DKMDT_GDISURFACE_TEXTURE = 1, +@@ -759,16 +848,6 @@ struct d3dkmt_queryadapterinfo { + __u32 private_data_size; + }; + +-enum d3dkmt_deviceexecution_state { +- _D3DKMT_DEVICEEXECUTION_ACTIVE = 1, +- _D3DKMT_DEVICEEXECUTION_RESET = 2, +- _D3DKMT_DEVICEEXECUTION_HUNG = 3, +- _D3DKMT_DEVICEEXECUTION_STOPPED = 4, +- _D3DKMT_DEVICEEXECUTION_ERROR_OUTOFMEMORY = 5, +- _D3DKMT_DEVICEEXECUTION_ERROR_DMAFAULT = 6, +- _D3DKMT_DEVICEEXECUTION_ERROR_DMAPAGEFAULT = 7, +-}; +- + struct d3dddi_openallocationinfo2 { + struct d3dkmthandle allocation; + #ifdef __KERNEL__ +@@ -978,6 +1057,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) ++#define LX_DXGETDEVICESTATE \ ++ _IOWR(0x47, 0x0e, struct d3dkmt_getdevicestate) + #define LX_DXSUBMITCOMMAND \ + _IOWR(0x47, 0x0f, struct d3dkmt_submitcommand) + #define LX_DXCREATESYNCHRONIZATIONOBJECT \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1684-drivers-hv-dxgkrnl-Map-unmap-CPU-address-to-device-allocation.patch b/patch/kernel/archive/wsl2-arm64-6.6/1684-drivers-hv-dxgkrnl-Map-unmap-CPU-address-to-device-allocation.patch new file mode 100644 index 000000000000..0f2e123431fd --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1684-drivers-hv-dxgkrnl-Map-unmap-CPU-address-to-device-allocation.patch @@ -0,0 +1,498 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 19 Jan 2022 13:58:28 -0800 +Subject: drivers: hv: dxgkrnl: Map(unmap) CPU address to device allocation + +Implement ioctls to map/unmap CPU virtual addresses to compute device +allocations - LX_DXLOCK2 and LX_DXUNLOCK2. + +The LX_DXLOCK2 ioctl maps a CPU virtual address to a compute device +allocation. The allocation could be located in system memory or local +device memory on the host. When the device allocation is created +from the guest system memory (existing sysmem allocation), the +allocation CPU address is known and is returned to the caller. +For other CPU visible allocations the code flow is the following: +1. A VM bus message is sent to the host to map the allocation +2. The host allocates a portion of the guest IO space and maps it + to the allocation backing store. The IO space address of the + allocation is returned back to the guest. +3. The guest allocates a CPU virtual address and maps it to the IO + space (see the dxg_map_iospace function). +4. 
The CPU VA is returned back to the caller +cpu_address_mapped and cpu_address_refcount are used to track how +many times an allocation was mapped. + +The LX_DXUNLOCK2 ioctl unmaps a CPU virtual address from a compute +device allocation. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 11 + + drivers/hv/dxgkrnl/dxgkrnl.h | 14 + + drivers/hv/dxgkrnl/dxgvmbus.c | 107 +++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 19 ++ + drivers/hv/dxgkrnl/ioctl.c | 160 +++++++++- + include/uapi/misc/d3dkmthk.h | 30 ++ + 6 files changed, 339 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 410f08768bad..23f00db7637e 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -885,6 +885,15 @@ void dxgallocation_stop(struct dxgallocation *alloc) + vfree(alloc->pages); + alloc->pages = NULL; + } ++ dxgprocess_ht_lock_exclusive_down(alloc->process); ++ if (alloc->cpu_address_mapped) { ++ dxg_unmap_iospace(alloc->cpu_address, ++ alloc->num_pages << PAGE_SHIFT); ++ alloc->cpu_address_mapped = false; ++ alloc->cpu_address = NULL; ++ alloc->cpu_address_refcount = 0; ++ } ++ dxgprocess_ht_lock_exclusive_up(alloc->process); + } + + void dxgallocation_free_handle(struct dxgallocation *alloc) +@@ -932,6 +941,8 @@ else + #endif + if (alloc->priv_drv_data) + vfree(alloc->priv_drv_data); ++ if (alloc->cpu_address_mapped) ++ pr_err("Alloc IO space is mapped: %p", alloc); + kfree(alloc); + } + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index b131c3b43838..1d6b552f1c1a 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -708,6 +708,8 @@ struct dxgallocation { + struct d3dkmthandle alloc_handle; + /* Set to 1 when allocation belongs to resource. */ + u32 resource_owner:1; ++ /* Set to 1 when 'cpu_address' is mapped to the IO space. */ ++ u32 cpu_address_mapped:1; + /* Set to 1 when the allocatio is mapped as cached */ + u32 cached:1; + u32 handle_valid:1; +@@ -719,6 +721,11 @@ struct dxgallocation { + #endif + /* Number of pages in the 'pages' array */ + u32 num_pages; ++ /* ++ * How many times dxgk_lock2 is called to allocation, which is mapped ++ * to IO space. 
++ */ ++ u32 cpu_address_refcount; + /* + * CPU address from the existing sysmem allocation, or + * mapped to the CPU visible backing store in the IO space +@@ -837,6 +844,13 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + d3dkmt_waitforsynchronizationobjectfromcpu + *args, + u64 cpu_event); ++int dxgvmb_send_lock2(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_lock2 *args, ++ struct d3dkmt_lock2 *__user outargs); ++int dxgvmb_send_unlock2(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_unlock2 *args); + int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_createhwqueue *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index ed800dc09180..a80f84d9065a 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -2354,6 +2354,113 @@ int dxgvmb_send_wait_sync_object_gpu(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_lock2(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_lock2 *args, ++ struct d3dkmt_lock2 *__user outargs) ++{ ++ int ret; ++ struct dxgkvmb_command_lock2 *command; ++ struct dxgkvmb_command_lock2_return result = { }; ++ struct dxgallocation *alloc = NULL; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_LOCK2, process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result.status); ++ if (ret < 0) ++ goto cleanup; ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ alloc = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ args->allocation); ++ if (alloc == NULL) { ++ DXG_ERR("invalid alloc"); ++ ret = -EINVAL; ++ } else { ++ if (alloc->cpu_address) { ++ args->data = alloc->cpu_address; ++ if (alloc->cpu_address_mapped) ++ alloc->cpu_address_refcount++; ++ } else { ++ u64 offset = (u64)result.cpu_visible_buffer_offset; ++ ++ args->data = dxg_map_iospace(offset, ++ alloc->num_pages << PAGE_SHIFT, ++ PROT_READ | PROT_WRITE, alloc->cached); ++ if (args->data) { ++ alloc->cpu_address_refcount = 1; ++ alloc->cpu_address_mapped = true; ++ alloc->cpu_address = args->data; ++ } ++ } ++ if (args->data == NULL) { ++ ret = -ENOMEM; ++ } else { ++ ret = copy_to_user(&outargs->data, &args->data, ++ sizeof(args->data)); ++ if (ret) { ++ DXG_ERR("failed to copy data"); ++ ret = -EINVAL; ++ alloc->cpu_address_refcount--; ++ if (alloc->cpu_address_refcount == 0) { ++ dxg_unmap_iospace(alloc->cpu_address, ++ alloc->num_pages << PAGE_SHIFT); ++ alloc->cpu_address_mapped = false; ++ alloc->cpu_address = NULL; ++ } ++ } ++ } ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_unlock2(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_unlock2 *args) ++{ ++ int ret; ++ struct dxgkvmb_command_unlock2 *command; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ 
DXGK_VMBCOMMAND_UNLOCK2, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_createhwqueue *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 6ca1068b0d4c..447bb1ba391b 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -570,6 +570,25 @@ struct dxgkvmb_command_waitforsyncobjectfromgpu { + /* struct d3dkmthandle ObjectHandles[object_count] */ + }; + ++struct dxgkvmb_command_lock2 { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_lock2 args; ++ bool use_legacy_lock; ++ u32 flags; ++ u32 priv_drv_data; ++}; ++ ++struct dxgkvmb_command_lock2_return { ++ struct ntstatus status; ++ void *cpu_visible_buffer_offset; ++}; ++ ++struct dxgkvmb_command_unlock2 { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_unlock2 args; ++ bool use_legacy_unlock; ++}; ++ + /* Returns the same structure */ + struct dxgkvmb_command_createhwqueue { + struct dxgkvmb_command_vgpu_to_host hdr; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 26d410fd6e99..37e218443310 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3142,6 +3142,162 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_lock2(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_lock2 args; ++ struct d3dkmt_lock2 *__user result = inargs; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgallocation *alloc = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ args.data = NULL; ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ alloc = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ args.allocation); ++ if (alloc == NULL) { ++ ret = -EINVAL; ++ } else { ++ if (alloc->cpu_address) { ++ ret = copy_to_user(&result->data, ++ &alloc->cpu_address, ++ sizeof(args.data)); ++ if (ret == 0) { ++ args.data = alloc->cpu_address; ++ if (alloc->cpu_address_mapped) ++ alloc->cpu_address_refcount++; ++ } else { ++ DXG_ERR("Failed to copy cpu address"); ++ ret = -EINVAL; ++ } ++ } ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ if (ret < 0) ++ goto cleanup; ++ if (args.data) ++ goto success; ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. 
++ */ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_lock2(process, adapter, &args, result); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++success: ++ DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret); ++ return ret; ++} ++ ++static int ++dxgkio_unlock2(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_unlock2 args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgallocation *alloc = NULL; ++ bool done = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ alloc = hmgrtable_get_object_by_type(&process->handle_table, ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ args.allocation); ++ if (alloc == NULL) { ++ ret = -EINVAL; ++ } else { ++ if (alloc->cpu_address == NULL) { ++ DXG_ERR("Allocation is not locked: %p", alloc); ++ ret = -EINVAL; ++ } else if (alloc->cpu_address_mapped) { ++ if (alloc->cpu_address_refcount > 0) { ++ alloc->cpu_address_refcount--; ++ if (alloc->cpu_address_refcount != 0) { ++ done = true; ++ } else { ++ dxg_unmap_iospace(alloc->cpu_address, ++ alloc->num_pages << PAGE_SHIFT); ++ alloc->cpu_address_mapped = false; ++ alloc->cpu_address = NULL; ++ } ++ } else { ++ DXG_ERR("Invalid cpu access refcount"); ++ done = true; ++ } ++ } ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ if (done) ++ goto success; ++ if (ret < 0) ++ goto cleanup; ++ ++ /* ++ * The call acquires reference on the device. It is safe to access the ++ * adapter, because the device holds reference on it. 
++ */ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_unlock2(process, adapter, &args); ++ ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++success: ++ DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret); ++ return ret; ++} ++ + static int + dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) + { +@@ -3909,7 +4065,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x22 */ {}, + /* 0x23 */ {}, + /* 0x24 */ {}, +-/* 0x25 */ {}, ++/* 0x25 */ {dxgkio_lock2, LX_DXLOCK2}, + /* 0x26 */ {}, + /* 0x27 */ {}, + /* 0x28 */ {}, +@@ -3932,7 +4088,7 @@ static struct ioctl_desc ioctls[] = { + LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE}, + /* 0x36 */ {dxgkio_submit_wait_to_hwqueue, + LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE}, +-/* 0x37 */ {}, ++/* 0x37 */ {dxgkio_unlock2, LX_DXUNLOCK2}, + /* 0x38 */ {}, + /* 0x39 */ {}, + /* 0x3a */ {dxgkio_wait_sync_object_cpu, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 8a013b07e88a..b498f09e694d 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -668,6 +668,32 @@ struct d3dkmt_submitcommandtohwqueue { + #endif + }; + ++struct d3dddicb_lock2flags { ++ union { ++ struct { ++ __u32 reserved:32; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_lock2 { ++ struct d3dkmthandle device; ++ struct d3dkmthandle allocation; ++ struct d3dddicb_lock2flags flags; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ void *data; ++#else ++ __u64 data; ++#endif ++}; ++ ++struct d3dkmt_unlock2 { ++ struct d3dkmthandle device; ++ struct d3dkmthandle allocation; ++}; ++ + enum d3dkmt_standardallocationtype { + _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1, + _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2, +@@ -1083,6 +1109,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ + _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) ++#define LX_DXLOCK2 \ ++ _IOWR(0x47, 0x25, struct d3dkmt_lock2) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x31, struct d3dkmt_signalsynchronizationobjectfromcpu) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU \ +@@ -1095,6 +1123,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x35, struct d3dkmt_submitsignalsyncobjectstohwqueue) + #define LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE \ + _IOWR(0x47, 0x36, struct d3dkmt_submitwaitforsyncobjectstohwqueue) ++#define LX_DXUNLOCK2 \ ++ _IOWR(0x47, 0x37, struct d3dkmt_unlock2) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1685-drivers-hv-dxgkrnl-Manage-device-allocation-properties.patch b/patch/kernel/archive/wsl2-arm64-6.6/1685-drivers-hv-dxgkrnl-Manage-device-allocation-properties.patch new file mode 100644 index 000000000000..02b8c5eeba69 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1685-drivers-hv-dxgkrnl-Manage-device-allocation-properties.patch @@ -0,0 +1,912 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 19 Jan 2022 
11:14:22 -0800 +Subject: drivers: hv: dxgkrnl: Manage device allocation properties + +Implement ioctls to manage properties of a compute device allocation: + - LX_DXUPDATEALLOCPROPERTY, + - LX_DXSETALLOCATIONPRIORITY, + - LX_DXGETALLOCATIONPRIORITY, + - LX_DXQUERYALLOCATIONRESIDENCY. + - LX_DXCHANGEVIDEOMEMORYRESERVATION, + +The LX_DXUPDATEALLOCPROPERTY ioctl requests the host to update +various properties of a compute devoce allocation. + +The LX_DXSETALLOCATIONPRIORITY and LX_DXGETALLOCATIONPRIORITY ioctls +are used to set/get allocation priority, which defines the +importance of the allocation to be in the local device memory. + +The LX_DXQUERYALLOCATIONRESIDENCY ioctl queries if the allocation +is located in the compute device accessible memory. + +The LX_DXCHANGEVIDEOMEMORYRESERVATION ioctl changes compute device +memory reservation of an allocation. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 21 + + drivers/hv/dxgkrnl/dxgvmbus.c | 300 ++++++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 50 ++ + drivers/hv/dxgkrnl/ioctl.c | 217 ++++++- + include/uapi/misc/d3dkmthk.h | 127 ++++ + 5 files changed, 708 insertions(+), 7 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 1d6b552f1c1a..7fefe4617488 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -851,6 +851,23 @@ int dxgvmb_send_lock2(struct dxgprocess *process, + int dxgvmb_send_unlock2(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_unlock2 *args); ++int dxgvmb_send_update_alloc_property(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dddi_updateallocproperty *args, ++ struct d3dddi_updateallocproperty *__user ++ inargs); ++int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_setallocationpriority *a); ++int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_getallocationpriority *a); ++int dxgvmb_send_change_vidmem_reservation(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle other_process, ++ struct ++ d3dkmt_changevideomemoryreservation ++ *args); + int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_createhwqueue *args, +@@ -870,6 +887,10 @@ int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, + struct d3dkmt_opensyncobjectfromnthandle2 + *args, + struct dxgsyncobject *syncobj); ++int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryallocationresidency ++ *args); + int dxgvmb_send_get_device_state(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_getdevicestate *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index a80f84d9065a..dd2c97fee27b 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1829,6 +1829,79 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryallocationresidency ++ *args) ++{ ++ int ret = -EINVAL; ++ struct dxgkvmb_command_queryallocationresidency *command = NULL; ++ u32 cmd_size = sizeof(*command); ++ u32 alloc_size = 0; ++ u32 result_allocation_size = 0; ++ struct 
dxgkvmb_command_queryallocationresidency_return *result = NULL; ++ u32 result_size = sizeof(*result); ++ struct dxgvmbusmsgres msg = {.hdr = NULL}; ++ ++ if (args->allocation_count > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args->allocation_count) { ++ alloc_size = args->allocation_count * ++ sizeof(struct d3dkmthandle); ++ cmd_size += alloc_size; ++ result_allocation_size = args->allocation_count * ++ sizeof(args->residency_status[0]); ++ } else { ++ result_allocation_size = sizeof(args->residency_status[0]); ++ } ++ result_size += result_allocation_size; ++ ++ ret = init_message_res(&msg, adapter, process, cmd_size, result_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ result = msg.res; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_QUERYALLOCATIONRESIDENCY, ++ process->host_handle); ++ command->args = *args; ++ if (alloc_size) { ++ ret = copy_from_user(&command[1], args->allocations, ++ alloc_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ result, msg.res_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result->status); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(args->residency_status, &result[1], ++ result_allocation_size); ++ if (ret) { ++ DXG_ERR("failed to copy residency status"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ free_message((struct dxgvmbusmsg *)&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_get_device_state(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_getdevicestate *args, +@@ -2461,6 +2534,233 @@ int dxgvmb_send_unlock2(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_update_alloc_property(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dddi_updateallocproperty *args, ++ struct d3dddi_updateallocproperty *__user ++ inargs) ++{ ++ int ret; ++ int ret1; ++ struct dxgkvmb_command_updateallocationproperty *command; ++ struct dxgkvmb_command_updateallocationproperty_return result = { }; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_UPDATEALLOCATIONPROPERTY, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ ++ if (ret < 0) ++ goto cleanup; ++ ret = ntstatus2int(result.status); ++ /* STATUS_PENING is a success code > 0 */ ++ if (ret == STATUS_PENDING) { ++ ret1 = copy_to_user(&inargs->paging_fence_value, ++ &result.paging_fence_value, ++ sizeof(u64)); ++ if (ret1) { ++ DXG_ERR("failed to copy paging fence"); ++ ret = -EINVAL; ++ } ++ } ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_setallocationpriority *args) ++{ ++ u32 cmd_size = sizeof(struct dxgkvmb_command_setallocationpriority); ++ u32 alloc_size = 0; ++ u32 priority_size = 0; ++ struct dxgkvmb_command_setallocationpriority *command; ++ int ret; ++ struct d3dkmthandle *allocations; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ if (args->allocation_count > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EINVAL; ++ 
goto cleanup; ++ } ++ if (args->resource.v) { ++ priority_size = sizeof(u32); ++ if (args->allocation_count != 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } else { ++ if (args->allocation_count == 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ alloc_size = args->allocation_count * ++ sizeof(struct d3dkmthandle); ++ cmd_size += alloc_size; ++ priority_size = sizeof(u32) * args->allocation_count; ++ } ++ cmd_size += priority_size; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_SETALLOCATIONPRIORITY, ++ process->host_handle); ++ command->device = args->device; ++ command->allocation_count = args->allocation_count; ++ command->resource = args->resource; ++ allocations = (struct d3dkmthandle *) &command[1]; ++ ret = copy_from_user(allocations, args->allocation_list, ++ alloc_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc handle"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_from_user((u8 *) allocations + alloc_size, ++ args->priorities, priority_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc priority"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_getallocationpriority *args) ++{ ++ u32 cmd_size = sizeof(struct dxgkvmb_command_getallocationpriority); ++ u32 result_size; ++ u32 alloc_size = 0; ++ u32 priority_size = 0; ++ struct dxgkvmb_command_getallocationpriority *command; ++ struct dxgkvmb_command_getallocationpriority_return *result; ++ int ret; ++ struct d3dkmthandle *allocations; ++ struct dxgvmbusmsgres msg = {.hdr = NULL}; ++ ++ if (args->allocation_count > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ if (args->resource.v) { ++ priority_size = sizeof(u32); ++ if (args->allocation_count != 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } else { ++ if (args->allocation_count == 0) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ alloc_size = args->allocation_count * ++ sizeof(struct d3dkmthandle); ++ cmd_size += alloc_size; ++ priority_size = sizeof(u32) * args->allocation_count; ++ } ++ result_size = sizeof(*result) + priority_size; ++ ++ ret = init_message_res(&msg, adapter, process, cmd_size, result_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ result = msg.res; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_GETALLOCATIONPRIORITY, ++ process->host_handle); ++ command->device = args->device; ++ command->allocation_count = args->allocation_count; ++ command->resource = args->resource; ++ allocations = (struct d3dkmthandle *) &command[1]; ++ ret = copy_from_user(allocations, args->allocation_list, ++ alloc_size); ++ if (ret) { ++ DXG_ERR("failed to copy alloc handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, ++ msg.size + msg.res_size, ++ result, msg.res_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result->status); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(args->priorities, ++ (u8 *) result + sizeof(*result), ++ priority_size); ++ if (ret) { ++ DXG_ERR("failed to copy priorities"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ free_message((struct dxgvmbusmsg *)&msg, process); ++ 
if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_change_vidmem_reservation(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle other_process, ++ struct ++ d3dkmt_changevideomemoryreservation ++ *args) ++{ ++ struct dxgkvmb_command_changevideomemoryreservation *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_CHANGEVIDEOMEMORYRESERVATION, ++ process->host_handle); ++ command->args = *args; ++ command->args.process = other_process.v; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_createhwqueue *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 447bb1ba391b..dbb01b9ab066 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -308,6 +308,29 @@ struct dxgkvmb_command_queryadapterinfo_return { + u8 private_data[1]; + }; + ++/* Returns ntstatus */ ++struct dxgkvmb_command_setallocationpriority { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++ u32 allocation_count; ++ /* struct d3dkmthandle allocations[allocation_count or 0]; */ ++ /* u32 priorities[allocation_count or 1]; */ ++}; ++ ++struct dxgkvmb_command_getallocationpriority { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++ u32 allocation_count; ++ /* struct d3dkmthandle allocations[allocation_count or 0]; */ ++}; ++ ++struct dxgkvmb_command_getallocationpriority_return { ++ struct ntstatus status; ++ /* u32 priorities[allocation_count or 1]; */ ++}; ++ + struct dxgkvmb_command_createdevice { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_createdeviceflags flags; +@@ -589,6 +612,22 @@ struct dxgkvmb_command_unlock2 { + bool use_legacy_unlock; + }; + ++struct dxgkvmb_command_updateallocationproperty { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dddi_updateallocproperty args; ++}; ++ ++struct dxgkvmb_command_updateallocationproperty_return { ++ u64 paging_fence_value; ++ struct ntstatus status; ++}; ++ ++/* Returns ntstatus */ ++struct dxgkvmb_command_changevideomemoryreservation { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_changevideomemoryreservation args; ++}; ++ + /* Returns the same structure */ + struct dxgkvmb_command_createhwqueue { + struct dxgkvmb_command_vgpu_to_host hdr; +@@ -609,6 +648,17 @@ struct dxgkvmb_command_destroyhwqueue { + struct d3dkmthandle hwqueue; + }; + ++struct dxgkvmb_command_queryallocationresidency { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_queryallocationresidency args; ++ /* struct d3dkmthandle allocations[0 or number of allocations] */ ++}; ++ ++struct dxgkvmb_command_queryallocationresidency_return { ++ struct ntstatus status; ++ /* d3dkmt_allocationresidencystatus[NumAllocations] */ ++}; ++ + struct dxgkvmb_command_getdevicestate { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_getdevicestate args; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 37e218443310..b626e2518ff2 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c 
++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3214,7 +3214,7 @@ dxgkio_lock2(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + + success: +- DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); + return ret; + } + +@@ -3294,7 +3294,209 @@ dxgkio_unlock2(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + + success: +- DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_update_alloc_property(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dddi_updateallocproperty args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.paging_queue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_update_alloc_property(process, adapter, ++ &args, inargs); ++ ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_query_alloc_residency(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_queryallocationresidency args; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if ((args.allocation_count == 0) == (args.resource.v == 0)) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ret = dxgvmb_send_query_alloc_residency(process, adapter, &args); ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_set_allocation_priority(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_setallocationpriority args; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ret = dxgvmb_send_set_allocation_priority(process, adapter, &args); ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); 
++ return ret; ++} ++ ++static int ++dxgkio_get_allocation_priority(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_getallocationpriority args; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ret = dxgvmb_send_get_allocation_priority(process, adapter, &args); ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_changevideomemoryreservation args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ bool adapter_locked = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.process != 0) { ++ DXG_ERR("setting memory reservation for other process"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ adapter_locked = true; ++ args.adapter.v = 0; ++ ret = dxgvmb_send_change_vidmem_reservation(process, adapter, ++ zerohandle, &args); ++ ++cleanup: ++ ++ if (adapter_locked) ++ dxgadapter_release_lock_shared(adapter); ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); + return ret; + } + +@@ -4050,7 +4252,8 @@ static struct ioctl_desc ioctls[] = { + /* 0x13 */ {dxgkio_destroy_allocation, LX_DXDESTROYALLOCATION2}, + /* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2}, + /* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER}, +-/* 0x16 */ {}, ++/* 0x16 */ {dxgkio_change_vidmem_reservation, ++ LX_DXCHANGEVIDEOMEMORYRESERVATION}, + /* 0x17 */ {}, + /* 0x18 */ {dxgkio_create_hwqueue, LX_DXCREATEHWQUEUE}, + /* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE}, +@@ -4070,11 +4273,11 @@ static struct ioctl_desc ioctls[] = { + /* 0x27 */ {}, + /* 0x28 */ {}, + /* 0x29 */ {}, +-/* 0x2a */ {}, ++/* 0x2a */ {dxgkio_query_alloc_residency, LX_DXQUERYALLOCATIONRESIDENCY}, + /* 0x2b */ {}, + /* 0x2c */ {}, + /* 0x2d */ {}, +-/* 0x2e */ {}, ++/* 0x2e */ {dxgkio_set_allocation_priority, LX_DXSETALLOCATIONPRIORITY}, + /* 0x2f */ {}, + /* 0x30 */ {}, + /* 0x31 */ {dxgkio_signal_sync_object_cpu, +@@ -4089,13 +4292,13 @@ static struct ioctl_desc ioctls[] = { + /* 0x36 */ {dxgkio_submit_wait_to_hwqueue, + LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE}, + /* 0x37 */ {dxgkio_unlock2, LX_DXUNLOCK2}, +-/* 0x38 */ {}, ++/* 0x38 */ {dxgkio_update_alloc_property, LX_DXUPDATEALLOCPROPERTY}, + /* 0x39 */ {}, + /* 0x3a */ {dxgkio_wait_sync_object_cpu, + LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU}, + /* 0x3b */ {dxgkio_wait_sync_object_gpu, + LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU}, +-/* 0x3c */ {}, ++/* 0x3c */ {dxgkio_get_allocation_priority, 
LX_DXGETALLOCATIONPRIORITY}, + /* 0x3d */ {}, + /* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3}, + /* 0x3f */ {dxgkio_share_objects, LX_DXSHAREOBJECTS}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index b498f09e694d..af381101fd90 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -668,6 +668,63 @@ struct d3dkmt_submitcommandtohwqueue { + #endif + }; + ++struct d3dkmt_setallocationpriority { ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++#ifdef __KERNEL__ ++ const struct d3dkmthandle *allocation_list; ++#else ++ __u64 allocation_list; ++#endif ++ __u32 allocation_count; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ const __u32 *priorities; ++#else ++ __u64 priorities; ++#endif ++}; ++ ++struct d3dkmt_getallocationpriority { ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++#ifdef __KERNEL__ ++ const struct d3dkmthandle *allocation_list; ++#else ++ __u64 allocation_list; ++#endif ++ __u32 allocation_count; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ __u32 *priorities; ++#else ++ __u64 priorities; ++#endif ++}; ++ ++enum d3dkmt_allocationresidencystatus { ++ _D3DKMT_ALLOCATIONRESIDENCYSTATUS_RESIDENTINGPUMEMORY = 1, ++ _D3DKMT_ALLOCATIONRESIDENCYSTATUS_RESIDENTINSHAREDMEMORY = 2, ++ _D3DKMT_ALLOCATIONRESIDENCYSTATUS_NOTRESIDENT = 3, ++}; ++ ++struct d3dkmt_queryallocationresidency { ++ struct d3dkmthandle device; ++ struct d3dkmthandle resource; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *allocations; ++#else ++ __u64 allocations; ++#endif ++ __u32 allocation_count; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ enum d3dkmt_allocationresidencystatus *residency_status; ++#else ++ __u64 residency_status; ++#endif ++}; ++ + struct d3dddicb_lock2flags { + union { + struct { +@@ -835,6 +892,11 @@ struct d3dkmt_destroyallocation2 { + struct d3dddicb_destroyallocation2flags flags; + }; + ++enum d3dkmt_memory_segment_group { ++ _D3DKMT_MEMORY_SEGMENT_GROUP_LOCAL = 0, ++ _D3DKMT_MEMORY_SEGMENT_GROUP_NON_LOCAL = 1 ++}; ++ + struct d3dkmt_adaptertype { + union { + struct { +@@ -886,6 +948,61 @@ struct d3dddi_openallocationinfo2 { + __u64 reserved[6]; + }; + ++struct d3dddi_updateallocproperty_flags { ++ union { ++ struct { ++ __u32 accessed_physically:1; ++ __u32 reserved:31; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dddi_segmentpreference { ++ union { ++ struct { ++ __u32 segment_id0:5; ++ __u32 direction0:1; ++ __u32 segment_id1:5; ++ __u32 direction1:1; ++ __u32 segment_id2:5; ++ __u32 direction2:1; ++ __u32 segment_id3:5; ++ __u32 direction3:1; ++ __u32 segment_id4:5; ++ __u32 direction4:1; ++ __u32 reserved:2; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dddi_updateallocproperty { ++ struct d3dkmthandle paging_queue; ++ struct d3dkmthandle allocation; ++ __u32 supported_segment_set; ++ struct d3dddi_segmentpreference preferred_segment; ++ struct d3dddi_updateallocproperty_flags flags; ++ __u64 paging_fence_value; ++ union { ++ struct { ++ __u32 set_accessed_physically:1; ++ __u32 set_supported_segmentSet:1; ++ __u32 set_preferred_segment:1; ++ __u32 reserved:29; ++ }; ++ __u32 property_mask_value; ++ }; ++}; ++ ++struct d3dkmt_changevideomemoryreservation { ++ __u64 process; ++ struct d3dkmthandle adapter; ++ enum d3dkmt_memory_segment_group memory_segment_group; ++ __u64 reservation; ++ __u32 physical_adapter_index; ++}; ++ + struct d3dkmt_createhwqueue { + struct d3dkmthandle context; + struct d3dddi_createhwqueueflags flags; +@@ -1099,6 +1216,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 
0x14, struct d3dkmt_enumadapters2) + #define LX_DXCLOSEADAPTER \ + _IOWR(0x47, 0x15, struct d3dkmt_closeadapter) ++#define LX_DXCHANGEVIDEOMEMORYRESERVATION \ ++ _IOWR(0x47, 0x16, struct d3dkmt_changevideomemoryreservation) + #define LX_DXCREATEHWQUEUE \ + _IOWR(0x47, 0x18, struct d3dkmt_createhwqueue) + #define LX_DXDESTROYHWQUEUE \ +@@ -1111,6 +1230,10 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) + #define LX_DXLOCK2 \ + _IOWR(0x47, 0x25, struct d3dkmt_lock2) ++#define LX_DXQUERYALLOCATIONRESIDENCY \ ++ _IOWR(0x47, 0x2a, struct d3dkmt_queryallocationresidency) ++#define LX_DXSETALLOCATIONPRIORITY \ ++ _IOWR(0x47, 0x2e, struct d3dkmt_setallocationpriority) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x31, struct d3dkmt_signalsynchronizationobjectfromcpu) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU \ +@@ -1125,10 +1248,14 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x36, struct d3dkmt_submitwaitforsyncobjectstohwqueue) + #define LX_DXUNLOCK2 \ + _IOWR(0x47, 0x37, struct d3dkmt_unlock2) ++#define LX_DXUPDATEALLOCPROPERTY \ ++ _IOWR(0x47, 0x38, struct d3dddi_updateallocproperty) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \ + _IOWR(0x47, 0x3b, struct d3dkmt_waitforsynchronizationobjectfromgpu) ++#define LX_DXGETALLOCATIONPRIORITY \ ++ _IOWR(0x47, 0x3c, struct d3dkmt_getallocationpriority) + #define LX_DXENUMADAPTERS3 \ + _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) + #define LX_DXSHAREOBJECTS \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1686-drivers-hv-dxgkrnl-Flush-heap-transitions.patch b/patch/kernel/archive/wsl2-arm64-6.6/1686-drivers-hv-dxgkrnl-Flush-heap-transitions.patch new file mode 100644 index 000000000000..61a4f862b297 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1686-drivers-hv-dxgkrnl-Flush-heap-transitions.patch @@ -0,0 +1,194 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 18 Jan 2022 17:25:37 -0800 +Subject: drivers: hv: dxgkrnl: Flush heap transitions + +Implement the ioctl to flush heap transitions +(LX_DXFLUSHHEAPTRANSITIONS). + +The ioctl is used to ensure that the video memory manager on the host +flushes all internal operations. 
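+
+For illustration only, a minimal user-space sketch of driving this
+ioctl, assuming the dxgkrnl node is exposed as /dev/dxg, the UAPI
+header is installed as <misc/d3dkmthk.h>, and `adapter` holds a
+handle obtained earlier from adapter enumeration (all of these are
+assumptions, not guarantees of this patch):
+
+  #include <fcntl.h>
+  #include <unistd.h>
+  #include <sys/ioctl.h>
+  #include <misc/d3dkmthk.h>        /* assumed install path of the UAPI header */
+
+  /* Ask the host video memory manager to flush pending heap transitions. */
+  static int flush_heap_transitions(struct d3dkmthandle adapter)
+  {
+          struct d3dkmt_flushheaptransitions args = { .adapter = adapter };
+          int fd = open("/dev/dxg", O_RDWR);   /* assumed device node */
+          int ret;
+
+          if (fd < 0)
+                  return -1;
+          ret = ioctl(fd, LX_DXFLUSHHEAPTRANSITIONS, &args);
+          close(fd);
+          return ret;
+  }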
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 2 +- + drivers/hv/dxgkrnl/dxgkrnl.h | 3 + + drivers/hv/dxgkrnl/dxgvmbus.c | 23 +++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 5 + + drivers/hv/dxgkrnl/ioctl.c | 49 +++++++++- + include/uapi/misc/d3dkmthk.h | 6 ++ + 6 files changed, 86 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 23f00db7637e..6f763e326a65 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -942,7 +942,7 @@ else + if (alloc->priv_drv_data) + vfree(alloc->priv_drv_data); + if (alloc->cpu_address_mapped) +- pr_err("Alloc IO space is mapped: %p", alloc); ++ DXG_ERR("Alloc IO space is mapped: %p", alloc); + kfree(alloc); + } + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 7fefe4617488..ced9dd294f5f 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -882,6 +882,9 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_submitcommandtohwqueue *a); ++int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_flushheaptransitions *arg); + int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, + struct dxgvmbuschannel *channel, + struct d3dkmt_opensyncobjectfromnthandle2 +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index dd2c97fee27b..928fad5f133b 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1829,6 +1829,29 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_flushheaptransitions *args) ++{ ++ struct dxgkvmb_command_flushheaptransitions *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_FLUSHHEAPTRANSITIONS, ++ process->host_handle); ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryallocationresidency +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index dbb01b9ab066..d232eb234e2c 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -367,6 +367,11 @@ struct dxgkvmb_command_submitcommandtohwqueue { + /* PrivateDriverData */ + }; + ++/* Returns ntstatus */ ++struct dxgkvmb_command_flushheaptransitions { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++}; ++ + struct dxgkvmb_command_createallocation_allocinfo { + u32 flags; + u32 priv_drv_data_size; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index b626e2518ff2..8b7d00e4c881 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3500,6 +3500,53 @@ dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs + return ret; + } + ++static int ++dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user 
inargs) ++{ ++ struct d3dkmt_flushheaptransitions args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ bool adapter_locked = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ adapter_locked = true; ++ ++ args.adapter = adapter->host_handle; ++ ret = dxgvmb_send_flush_heap_transitions(process, adapter, &args); ++ if (ret < 0) ++ goto cleanup; ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (adapter_locked) ++ dxgadapter_release_lock_shared(adapter); ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ return ret; ++} ++ + static int + dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) + { +@@ -4262,7 +4309,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x1c */ {dxgkio_destroy_paging_queue, LX_DXDESTROYPAGINGQUEUE}, + /* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT}, + /* 0x1e */ {}, +-/* 0x1f */ {}, ++/* 0x1f */ {dxgkio_flush_heap_transitions, LX_DXFLUSHHEAPTRANSITIONS}, + /* 0x20 */ {}, + /* 0x21 */ {}, + /* 0x22 */ {}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index af381101fd90..873feb951129 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -936,6 +936,10 @@ struct d3dkmt_queryadapterinfo { + __u32 private_data_size; + }; + ++struct d3dkmt_flushheaptransitions { ++ struct d3dkmthandle adapter; ++}; ++ + struct d3dddi_openallocationinfo2 { + struct d3dkmthandle allocation; + #ifdef __KERNEL__ +@@ -1228,6 +1232,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ + _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) ++#define LX_DXFLUSHHEAPTRANSITIONS \ ++ _IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions) + #define LX_DXLOCK2 \ + _IOWR(0x47, 0x25, struct d3dkmt_lock2) + #define LX_DXQUERYALLOCATIONRESIDENCY \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1687-drivers-hv-dxgkrnl-Query-video-memory-information.patch b/patch/kernel/archive/wsl2-arm64-6.6/1687-drivers-hv-dxgkrnl-Query-video-memory-information.patch new file mode 100644 index 000000000000..462c1f6b3f2f --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1687-drivers-hv-dxgkrnl-Query-video-memory-information.patch @@ -0,0 +1,237 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 8 Feb 2022 18:34:07 -0800 +Subject: drivers: hv: dxgkrnl: Query video memory information + +Implement the ioctl to query video memory information from the host +(LX_DXQUERYVIDEOMEMORYINFO). 
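+
+As a hedged usage sketch (assuming an already opened /dev/dxg file
+descriptor `fd`, the UAPI header installed as <misc/d3dkmthk.h>, and
+`adapter` obtained from adapter enumeration), the caller fills the
+input fields and the driver writes the budget and usage fields back:
+
+  #include <stdio.h>
+  #include <sys/ioctl.h>
+  #include <misc/d3dkmthk.h>        /* assumed install path of the UAPI header */
+
+  /* Query local-segment memory information for the calling process. */
+  static void print_vidmem_info(int fd, struct d3dkmthandle adapter)
+  {
+          struct d3dkmt_queryvideomemoryinfo info = {
+                  .process = 0,               /* must be 0: the calling process */
+                  .adapter = adapter,
+                  .memory_segment_group = _D3DKMT_MEMORY_SEGMENT_GROUP_LOCAL,
+                  .physical_adapter_index = 0,
+          };
+
+          if (ioctl(fd, LX_DXQUERYVIDEOMEMORYINFO, &info) == 0)
+                  printf("budget %llu, in use %llu\n",
+                         (unsigned long long)info.budget,
+                         (unsigned long long)info.current_usage);
+  }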
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 5 + + drivers/hv/dxgkrnl/dxgvmbus.c | 64 ++++++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 14 ++ + drivers/hv/dxgkrnl/ioctl.c | 50 +++++++- + include/uapi/misc/d3dkmthk.h | 13 ++ + 5 files changed, 145 insertions(+), 1 deletion(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index ced9dd294f5f..b6a7288a4177 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -894,6 +894,11 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryallocationresidency + *args); ++int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryvideomemoryinfo *args, ++ struct d3dkmt_queryvideomemoryinfo ++ *__user iargs); + int dxgvmb_send_get_device_state(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_getdevicestate *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 928fad5f133b..48ff49456057 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1925,6 +1925,70 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryvideomemoryinfo *args, ++ struct d3dkmt_queryvideomemoryinfo *__user ++ output) ++{ ++ int ret; ++ struct dxgkvmb_command_queryvideomemoryinfo *command; ++ struct dxgkvmb_command_queryvideomemoryinfo_return result = { }; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ command_vgpu_to_host_init2(&command->hdr, ++ dxgk_vmbcommand_queryvideomemoryinfo, ++ process->host_handle); ++ command->adapter = args->adapter; ++ command->memory_segment_group = args->memory_segment_group; ++ command->physical_adapter_index = args->physical_adapter_index; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(&output->budget, &result.budget, ++ sizeof(output->budget)); ++ if (ret) { ++ pr_err("%s failed to copy budget", __func__); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_to_user(&output->current_usage, &result.current_usage, ++ sizeof(output->current_usage)); ++ if (ret) { ++ pr_err("%s failed to copy current usage", __func__); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_to_user(&output->current_reservation, ++ &result.current_reservation, ++ sizeof(output->current_reservation)); ++ if (ret) { ++ pr_err("%s failed to copy reservation", __func__); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = copy_to_user(&output->available_for_reservation, ++ &result.available_for_reservation, ++ sizeof(output->available_for_reservation)); ++ if (ret) { ++ pr_err("%s failed to copy avail reservation", __func__); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ dev_dbg(DXGDEV, "err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_get_device_state(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_getdevicestate *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index d232eb234e2c..a1549983d50f 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ 
b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -664,6 +664,20 @@ struct dxgkvmb_command_queryallocationresidency_return { + /* d3dkmt_allocationresidencystatus[NumAllocations] */ + }; + ++struct dxgkvmb_command_queryvideomemoryinfo { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle adapter; ++ enum d3dkmt_memory_segment_group memory_segment_group; ++ u32 physical_adapter_index; ++}; ++ ++struct dxgkvmb_command_queryvideomemoryinfo_return { ++ u64 budget; ++ u64 current_usage; ++ u64 current_reservation; ++ u64 available_for_reservation; ++}; ++ + struct dxgkvmb_command_getdevicestate { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_getdevicestate args; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 8b7d00e4c881..e692b127e219 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3547,6 +3547,54 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_query_vidmem_info(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_queryvideomemoryinfo args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ bool adapter_locked = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.process != 0) { ++ DXG_ERR("query vidmem info from another process"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ adapter_locked = true; ++ ++ args.adapter = adapter->host_handle; ++ ret = dxgvmb_send_query_vidmem_info(process, adapter, &args, inargs); ++ ++cleanup: ++ ++ if (adapter_locked) ++ dxgadapter_release_lock_shared(adapter); ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ if (ret < 0) ++ DXG_ERR("failed: %x", ret); ++ return ret; ++} ++ + static int + dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) + { +@@ -4287,7 +4335,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x07 */ {dxgkio_create_paging_queue, LX_DXCREATEPAGINGQUEUE}, + /* 0x08 */ {}, + /* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO}, +-/* 0x0a */ {}, ++/* 0x0a */ {dxgkio_query_vidmem_info, LX_DXQUERYVIDEOMEMORYINFO}, + /* 0x0b */ {}, + /* 0x0c */ {}, + /* 0x0d */ {}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 873feb951129..b7d8b1d91cfc 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -897,6 +897,17 @@ enum d3dkmt_memory_segment_group { + _D3DKMT_MEMORY_SEGMENT_GROUP_NON_LOCAL = 1 + }; + ++struct d3dkmt_queryvideomemoryinfo { ++ __u64 process; ++ struct d3dkmthandle adapter; ++ enum d3dkmt_memory_segment_group memory_segment_group; ++ __u64 budget; ++ __u64 current_usage; ++ __u64 current_reservation; ++ __u64 available_for_reservation; ++ __u32 physical_adapter_index; ++}; ++ + struct d3dkmt_adaptertype { + union { + struct { +@@ -1204,6 +1215,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) ++#define LX_DXQUERYVIDEOMEMORYINFO \ ++ _IOWR(0x47, 0x0a, struct d3dkmt_queryvideomemoryinfo) + #define LX_DXGETDEVICESTATE \ + _IOWR(0x47, 0x0e, struct d3dkmt_getdevicestate) + 
#define LX_DXSUBMITCOMMAND \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1688-drivers-hv-dxgkrnl-The-escape-ioctl.patch b/patch/kernel/archive/wsl2-arm64-6.6/1688-drivers-hv-dxgkrnl-The-escape-ioctl.patch new file mode 100644 index 000000000000..30de89e8310e --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1688-drivers-hv-dxgkrnl-The-escape-ioctl.patch @@ -0,0 +1,305 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 18 Jan 2022 15:50:30 -0800 +Subject: drivers: hv: dxgkrnl: The escape ioctl + +Implement the escape ioctl (LX_DXESCAPE). + +This ioctl is used to send/receive private data between user mode +compute device driver (guest) and kernel mode compute device +driver (host). It allows the user mode driver to extend the virtual +compute device API. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 3 + + drivers/hv/dxgkrnl/dxgvmbus.c | 75 +++++++++- + drivers/hv/dxgkrnl/dxgvmbus.h | 12 ++ + drivers/hv/dxgkrnl/ioctl.c | 42 +++++- + include/uapi/misc/d3dkmthk.h | 41 +++++ + 5 files changed, 167 insertions(+), 6 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index b6a7288a4177..dafc721ed6cf 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -894,6 +894,9 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryallocationresidency + *args); ++int dxgvmb_send_escape(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_escape *args); + int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryvideomemoryinfo *args, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 48ff49456057..8bdd49bc7aa6 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1925,6 +1925,70 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_escape(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_escape *args) ++{ ++ int ret; ++ struct dxgkvmb_command_escape *command = NULL; ++ u32 cmd_size = sizeof(*command); ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ if (args->priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ cmd_size = cmd_size - sizeof(args->priv_drv_data[0]) + ++ args->priv_drv_data_size; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_ESCAPE, ++ process->host_handle); ++ command->adapter = args->adapter; ++ command->device = args->device; ++ command->type = args->type; ++ command->flags = args->flags; ++ command->priv_drv_data_size = args->priv_drv_data_size; ++ command->context = args->context; ++ if (args->priv_drv_data_size) { ++ ret = copy_from_user(command->priv_drv_data, ++ args->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret) { ++ DXG_ERR("failed to copy priv data"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ command->priv_drv_data, ++ args->priv_drv_data_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ if (args->priv_drv_data_size) { ++ ret = copy_to_user(args->priv_drv_data, ++ command->priv_drv_data, ++ args->priv_drv_data_size); 
++ if (ret) { ++ DXG_ERR("failed to copy priv data"); ++ ret = -EINVAL; ++ } ++ } ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryvideomemoryinfo *args, +@@ -1955,14 +2019,14 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + ret = copy_to_user(&output->budget, &result.budget, + sizeof(output->budget)); + if (ret) { +- pr_err("%s failed to copy budget", __func__); ++ DXG_ERR("failed to copy budget"); + ret = -EINVAL; + goto cleanup; + } + ret = copy_to_user(&output->current_usage, &result.current_usage, + sizeof(output->current_usage)); + if (ret) { +- pr_err("%s failed to copy current usage", __func__); ++ DXG_ERR("failed to copy current usage"); + ret = -EINVAL; + goto cleanup; + } +@@ -1970,7 +2034,7 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + &result.current_reservation, + sizeof(output->current_reservation)); + if (ret) { +- pr_err("%s failed to copy reservation", __func__); ++ DXG_ERR("failed to copy reservation"); + ret = -EINVAL; + goto cleanup; + } +@@ -1978,14 +2042,14 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + &result.available_for_reservation, + sizeof(output->available_for_reservation)); + if (ret) { +- pr_err("%s failed to copy avail reservation", __func__); ++ DXG_ERR("failed to copy avail reservation"); + ret = -EINVAL; + } + + cleanup: + free_message(&msg, process); + if (ret) +- dev_dbg(DXGDEV, "err: %d", ret); ++ DXG_TRACE("err: %d", ret); + return ret; + } + +@@ -3152,3 +3216,4 @@ int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, + DXG_TRACE("err: %d", ret); + return ret; + } ++ +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index a1549983d50f..e1c2ed7b1580 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -664,6 +664,18 @@ struct dxgkvmb_command_queryallocationresidency_return { + /* d3dkmt_allocationresidencystatus[NumAllocations] */ + }; + ++/* Returns only private data */ ++struct dxgkvmb_command_escape { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle adapter; ++ struct d3dkmthandle device; ++ enum d3dkmt_escapetype type; ++ struct d3dddi_escapeflags flags; ++ u32 priv_drv_data_size; ++ struct d3dkmthandle context; ++ u8 priv_drv_data[1]; ++}; ++ + struct dxgkvmb_command_queryvideomemoryinfo { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmthandle adapter; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index e692b127e219..78de76abce2d 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3547,6 +3547,46 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_escape(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_escape args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ bool adapter_locked = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ adapter_locked = true; ++ ++ args.adapter = adapter->host_handle; ++ ret = dxgvmb_send_escape(process, adapter, &args); ++ 
++cleanup: ++ ++ if (adapter_locked) ++ dxgadapter_release_lock_shared(adapter); ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_query_vidmem_info(struct dxgprocess *process, void *__user inargs) + { +@@ -4338,7 +4378,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x0a */ {dxgkio_query_vidmem_info, LX_DXQUERYVIDEOMEMORYINFO}, + /* 0x0b */ {}, + /* 0x0c */ {}, +-/* 0x0d */ {}, ++/* 0x0d */ {dxgkio_escape, LX_DXESCAPE}, + /* 0x0e */ {dxgkio_get_device_state, LX_DXGETDEVICESTATE}, + /* 0x0f */ {dxgkio_submit_command, LX_DXSUBMITCOMMAND}, + /* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index b7d8b1d91cfc..749edf28bd43 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -236,6 +236,45 @@ struct d3dddi_destroypagingqueue { + struct d3dkmthandle paging_queue; + }; + ++enum d3dkmt_escapetype { ++ _D3DKMT_ESCAPE_DRIVERPRIVATE = 0, ++ _D3DKMT_ESCAPE_VIDMM = 1, ++ _D3DKMT_ESCAPE_VIDSCH = 3, ++ _D3DKMT_ESCAPE_DEVICE = 4, ++ _D3DKMT_ESCAPE_DRT_TEST = 8, ++}; ++ ++struct d3dddi_escapeflags { ++ union { ++ struct { ++ __u32 hardware_access:1; ++ __u32 device_status_query:1; ++ __u32 change_frame_latency:1; ++ __u32 no_adapter_synchronization:1; ++ __u32 reserved:1; ++ __u32 virtual_machine_data:1; ++ __u32 driver_known_escape:1; ++ __u32 driver_common_escape:1; ++ __u32 reserved2:24; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_escape { ++ struct d3dkmthandle adapter; ++ struct d3dkmthandle device; ++ enum d3dkmt_escapetype type; ++ struct d3dddi_escapeflags flags; ++#ifdef __KERNEL__ ++ void *priv_drv_data; ++#else ++ __u64 priv_drv_data; ++#endif ++ __u32 priv_drv_data_size; ++ struct d3dkmthandle context; ++}; ++ + enum dxgk_render_pipeline_stage { + _DXGK_RENDER_PIPELINE_STAGE_UNKNOWN = 0, + _DXGK_RENDER_PIPELINE_STAGE_INPUT_ASSEMBLER = 1, +@@ -1217,6 +1256,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXQUERYVIDEOMEMORYINFO \ + _IOWR(0x47, 0x0a, struct d3dkmt_queryvideomemoryinfo) ++#define LX_DXESCAPE \ ++ _IOWR(0x47, 0x0d, struct d3dkmt_escape) + #define LX_DXGETDEVICESTATE \ + _IOWR(0x47, 0x0e, struct d3dkmt_getdevicestate) + #define LX_DXSUBMITCOMMAND \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1689-drivers-hv-dxgkrnl-Ioctl-to-put-device-to-error-state.patch b/patch/kernel/archive/wsl2-arm64-6.6/1689-drivers-hv-dxgkrnl-Ioctl-to-put-device-to-error-state.patch new file mode 100644 index 000000000000..faf991f53182 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1689-drivers-hv-dxgkrnl-Ioctl-to-put-device-to-error-state.patch @@ -0,0 +1,180 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 9 Feb 2022 10:57:57 -0800 +Subject: drivers: hv: dxgkrnl: Ioctl to put device to error state + +Implement the ioctl to put the virtual compute device to the error +state (LX_DXMARKDEVICEASERROR). + +This ioctl is used by the user mode driver when it detects an +unrecoverable error condition. + +When a compute device is put to the error state, all subsequent +ioctl calls to the device will fail. 
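+
+A minimal sketch of how a user-mode driver might use this ioctl,
+assuming an open /dev/dxg file descriptor `fd`, the UAPI header
+installed as <misc/d3dkmthk.h>, and `device` holding the handle
+returned when the device was created (assumptions for illustration):
+
+  #include <sys/ioctl.h>
+  #include <misc/d3dkmthk.h>        /* assumed install path of the UAPI header */
+
+  /* Push the device into the error state after an unrecoverable UMD error. */
+  static int mark_device_as_error(int fd, struct d3dkmthandle device)
+  {
+          struct d3dkmt_markdeviceaserror args = {
+                  .device = device,
+                  .reason = _D3DKMT_DEVICE_ERROR_REASON_DRIVER_ERROR,
+          };
+
+          return ioctl(fd, LX_DXMARKDEVICEASERROR, &args);
+  }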
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 3 + + drivers/hv/dxgkrnl/dxgvmbus.c | 25 ++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 5 ++ + drivers/hv/dxgkrnl/ioctl.c | 38 +++++++++- + include/uapi/misc/d3dkmthk.h | 12 +++ + 5 files changed, 82 insertions(+), 1 deletion(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index dafc721ed6cf..b454c7430f06 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -856,6 +856,9 @@ int dxgvmb_send_update_alloc_property(struct dxgprocess *process, + struct d3dddi_updateallocproperty *args, + struct d3dddi_updateallocproperty *__user + inargs); ++int dxgvmb_send_mark_device_as_error(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_markdeviceaserror *args); + int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_setallocationpriority *a); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 8bdd49bc7aa6..f7264b12a477 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -2730,6 +2730,31 @@ int dxgvmb_send_update_alloc_property(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_mark_device_as_error(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_markdeviceaserror *args) ++{ ++ struct dxgkvmb_command_markdeviceaserror *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_MARKDEVICEASERROR, ++ process->host_handle); ++ command->args = *args; ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_setallocationpriority *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index e1c2ed7b1580..a66e11097bb2 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -627,6 +627,11 @@ struct dxgkvmb_command_updateallocationproperty_return { + struct ntstatus status; + }; + ++struct dxgkvmb_command_markdeviceaserror { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_markdeviceaserror args; ++}; ++ + /* Returns ntstatus */ + struct dxgkvmb_command_changevideomemoryreservation { + struct dxgkvmb_command_vgpu_to_host hdr; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 78de76abce2d..ce4af610ada7 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3341,6 +3341,42 @@ dxgkio_update_alloc_property(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_mark_device_as_error(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_markdeviceaserror args; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; 
++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ device->execution_state = _D3DKMT_DEVICEEXECUTION_RESET; ++ ret = dxgvmb_send_mark_device_as_error(process, adapter, &args); ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_query_alloc_residency(struct dxgprocess *process, void *__user inargs) + { +@@ -4404,7 +4440,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x23 */ {}, + /* 0x24 */ {}, + /* 0x25 */ {dxgkio_lock2, LX_DXLOCK2}, +-/* 0x26 */ {}, ++/* 0x26 */ {dxgkio_mark_device_as_error, LX_DXMARKDEVICEASERROR}, + /* 0x27 */ {}, + /* 0x28 */ {}, + /* 0x29 */ {}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 749edf28bd43..ce5a638a886d 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -790,6 +790,16 @@ struct d3dkmt_unlock2 { + struct d3dkmthandle allocation; + }; + ++enum d3dkmt_device_error_reason { ++ _D3DKMT_DEVICE_ERROR_REASON_GENERIC = 0x80000000, ++ _D3DKMT_DEVICE_ERROR_REASON_DRIVER_ERROR = 0x80000006, ++}; ++ ++struct d3dkmt_markdeviceaserror { ++ struct d3dkmthandle device; ++ enum d3dkmt_device_error_reason reason; ++}; ++ + enum d3dkmt_standardallocationtype { + _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1, + _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2, +@@ -1290,6 +1300,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions) + #define LX_DXLOCK2 \ + _IOWR(0x47, 0x25, struct d3dkmt_lock2) ++#define LX_DXMARKDEVICEASERROR \ ++ _IOWR(0x47, 0x26, struct d3dkmt_markdeviceaserror) + #define LX_DXQUERYALLOCATIONRESIDENCY \ + _IOWR(0x47, 0x2a, struct d3dkmt_queryallocationresidency) + #define LX_DXSETALLOCATIONPRIORITY \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1690-drivers-hv-dxgkrnl-Ioctls-to-query-statistics-and-clock-calibration.patch b/patch/kernel/archive/wsl2-arm64-6.6/1690-drivers-hv-dxgkrnl-Ioctls-to-query-statistics-and-clock-calibration.patch new file mode 100644 index 000000000000..32760960bd12 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1690-drivers-hv-dxgkrnl-Ioctls-to-query-statistics-and-clock-calibration.patch @@ -0,0 +1,423 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 9 Feb 2022 11:01:57 -0800 +Subject: drivers: hv: dxgkrnl: Ioctls to query statistics and clock + calibration + +Implement ioctls to query statistics from the VGPU device +(LX_DXQUERYSTATISTICS) and to query clock calibration +(LX_DXQUERYCLOCKCALIBRATION). + +The LX_DXQUERYSTATISTICS ioctl is used to query various statistics from +the compute device on the host. + +The LX_DXQUERYCLOCKCALIBRATION ioctl queries the compute device clock +and is used for performance monitoring. 
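+
+Illustration only (not part of the upstream patch): how user mode might
+sample the clock data through the new ioctl; dxg_fd and adapter_handle are
+placeholder names for an open /dev/dxg descriptor and an adapter handle
+obtained from an earlier adapter-open ioctl:
+
+    struct d3dkmt_queryclockcalibration args = {
+        .adapter = adapter_handle,
+        .node_ordinal = 0,
+        .physical_adapter_index = 0,
+    };
+
+    /* On success the kernel fills args.clock_data from the host. */
+    if (ioctl(dxg_fd, LX_DXQUERYCLOCKCALIBRATION, &args) == 0)
+        printf("GPU counter %llu at %llu Hz, CPU counter %llu\n",
+               (unsigned long long)args.clock_data.gpu_clock_counter,
+               (unsigned long long)args.clock_data.gpu_frequency,
+               (unsigned long long)args.clock_data.cpu_clock_counter);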
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 8 + + drivers/hv/dxgkrnl/dxgvmbus.c | 77 +++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 21 ++ + drivers/hv/dxgkrnl/ioctl.c | 111 +++++++++- + include/uapi/misc/d3dkmthk.h | 62 ++++++ + 5 files changed, 277 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index b454c7430f06..a55873bdd9a6 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -885,6 +885,11 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_submitcommandtohwqueue *a); ++int dxgvmb_send_query_clock_calibration(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryclockcalibration *a, ++ struct d3dkmt_queryclockcalibration ++ *__user inargs); + int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_flushheaptransitions *arg); +@@ -929,6 +934,9 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + void *prive_alloc_data, + u32 *res_priv_data_size, + void *priv_res_data); ++int dxgvmb_send_query_statistics(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_querystatistics *args); + int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel, + void *command, + u32 cmd_size); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index f7264b12a477..9a1864bb4e14 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1829,6 +1829,48 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_query_clock_calibration(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_queryclockcalibration ++ *args, ++ struct d3dkmt_queryclockcalibration ++ *__user inargs) ++{ ++ struct dxgkvmb_command_queryclockcalibration *command; ++ struct dxgkvmb_command_queryclockcalibration_return result; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_QUERYCLOCKCALIBRATION, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) ++ goto cleanup; ++ ret = copy_to_user(&inargs->clock_data, &result.clock_data, ++ sizeof(result.clock_data)); ++ if (ret) { ++ pr_err("%s failed to copy clock data", __func__); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ret = ntstatus2int(result.status); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_flushheaptransitions *args) +@@ -3242,3 +3284,38 @@ int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_query_statistics(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_querystatistics *args) ++{ ++ struct dxgkvmb_command_querystatistics *command; ++ struct dxgkvmb_command_querystatistics_return *result; ++ int ret; ++ struct dxgvmbusmsgres msg = {.hdr = NULL}; ++ ++ ret = init_message_res(&msg, adapter, 
process, sizeof(*command), ++ sizeof(*result)); ++ if (ret) ++ goto cleanup; ++ command = msg.msg; ++ result = msg.res; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_QUERYSTATISTICS, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ result, msg.res_size); ++ if (ret < 0) ++ goto cleanup; ++ ++ args->result = result->result; ++ ret = ntstatus2int(result->status); ++ ++cleanup: ++ free_message((struct dxgvmbusmsg *)&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index a66e11097bb2..17768ed0e68d 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -372,6 +372,16 @@ struct dxgkvmb_command_flushheaptransitions { + struct dxgkvmb_command_vgpu_to_host hdr; + }; + ++struct dxgkvmb_command_queryclockcalibration { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_queryclockcalibration args; ++}; ++ ++struct dxgkvmb_command_queryclockcalibration_return { ++ struct ntstatus status; ++ struct dxgk_gpuclockdata clock_data; ++}; ++ + struct dxgkvmb_command_createallocation_allocinfo { + u32 flags; + u32 priv_drv_data_size; +@@ -408,6 +418,17 @@ struct dxgkvmb_command_openresource_return { + /* struct d3dkmthandle allocation[allocation_count]; */ + }; + ++struct dxgkvmb_command_querystatistics { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_querystatistics args; ++}; ++ ++struct dxgkvmb_command_querystatistics_return { ++ struct ntstatus status; ++ u32 reserved; ++ struct d3dkmt_querystatistics_result result; ++}; ++ + struct dxgkvmb_command_getstandardallocprivdata { + struct dxgkvmb_command_vgpu_to_host hdr; + enum d3dkmdt_standardallocationtype alloc_type; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index ce4af610ada7..4babb21f38a9 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -149,6 +149,65 @@ static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, + return ret; + } + ++static int dxgkio_query_statistics(struct dxgprocess *process, ++ void __user *inargs) ++{ ++ struct d3dkmt_querystatistics *args; ++ int ret; ++ struct dxgadapter *entry; ++ struct dxgadapter *adapter = NULL; ++ struct winluid tmp; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ args = vzalloc(sizeof(struct d3dkmt_querystatistics)); ++ if (args == NULL) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ ret = copy_from_user(args, inargs, sizeof(*args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED); ++ list_for_each_entry(entry, &dxgglobal->adapter_list_head, ++ adapter_list_entry) { ++ if (dxgadapter_acquire_lock_shared(entry) == 0) { ++ if (*(u64 *) &entry->luid == ++ *(u64 *) &args->adapter_luid) { ++ adapter = entry; ++ break; ++ } ++ dxgadapter_release_lock_shared(entry); ++ } ++ } ++ dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED); ++ if (adapter) { ++ tmp = args->adapter_luid; ++ args->adapter_luid = adapter->host_adapter_luid; ++ ret = dxgvmb_send_query_statistics(process, adapter, args); ++ if (ret >= 0) { ++ args->adapter_luid = tmp; ++ ret = copy_to_user(inargs, args, sizeof(*args)); ++ if (ret) { ++ DXG_ERR("failed to copy args"); ++ ret = -EINVAL; ++ } ++ } ++ dxgadapter_release_lock_shared(adapter); ++ } ++ ++cleanup: ++ if (args) ++ vfree(args); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), 
ret); ++ return ret; ++} ++ + static int + dxgkp_enum_adapters(struct dxgprocess *process, + union d3dkmt_enumadapters_filter filter, +@@ -3536,6 +3595,54 @@ dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs + return ret; + } + ++static int ++dxgkio_query_clock_calibration(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_queryclockcalibration args; ++ int ret; ++ struct dxgadapter *adapter = NULL; ++ bool adapter_locked = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ adapter_locked = true; ++ ++ args.adapter = adapter->host_handle; ++ ret = dxgvmb_send_query_clock_calibration(process, adapter, ++ &args, inargs); ++ if (ret < 0) ++ goto cleanup; ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (adapter_locked) ++ dxgadapter_release_lock_shared(adapter); ++ if (adapter) ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ return ret; ++} ++ + static int + dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs) + { +@@ -4470,14 +4577,14 @@ static struct ioctl_desc ioctls[] = { + /* 0x3b */ {dxgkio_wait_sync_object_gpu, + LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU}, + /* 0x3c */ {dxgkio_get_allocation_priority, LX_DXGETALLOCATIONPRIORITY}, +-/* 0x3d */ {}, ++/* 0x3d */ {dxgkio_query_clock_calibration, LX_DXQUERYCLOCKCALIBRATION}, + /* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3}, + /* 0x3f */ {dxgkio_share_objects, LX_DXSHAREOBJECTS}, + /* 0x40 */ {dxgkio_open_sync_object_nt, LX_DXOPENSYNCOBJECTFROMNTHANDLE2}, + /* 0x41 */ {dxgkio_query_resource_info_nt, + LX_DXQUERYRESOURCEINFOFROMNTHANDLE}, + /* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE}, +-/* 0x43 */ {}, ++/* 0x43 */ {dxgkio_query_statistics, LX_DXQUERYSTATISTICS}, + /* 0x44 */ {dxgkio_share_object_with_host, LX_DXSHAREOBJECTWITHHOST}, + /* 0x45 */ {}, + }; +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index ce5a638a886d..ea18242ceb83 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -996,6 +996,34 @@ struct d3dkmt_queryadapterinfo { + __u32 private_data_size; + }; + ++#pragma pack(push, 1) ++ ++struct dxgk_gpuclockdata_flags { ++ union { ++ struct { ++ __u32 context_management_processor:1; ++ __u32 reserved:31; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct dxgk_gpuclockdata { ++ __u64 gpu_frequency; ++ __u64 gpu_clock_counter; ++ __u64 cpu_clock_counter; ++ struct dxgk_gpuclockdata_flags flags; ++} __packed; ++ ++struct d3dkmt_queryclockcalibration { ++ struct d3dkmthandle adapter; ++ __u32 node_ordinal; ++ __u32 physical_adapter_index; ++ struct dxgk_gpuclockdata clock_data; ++}; ++ ++#pragma pack(pop) ++ + struct d3dkmt_flushheaptransitions { + struct d3dkmthandle adapter; + }; +@@ -1238,6 +1266,36 @@ struct d3dkmt_enumadapters3 { + #endif + }; + ++enum d3dkmt_querystatistics_type { ++ _D3DKMT_QUERYSTATISTICS_ADAPTER = 0, ++ _D3DKMT_QUERYSTATISTICS_PROCESS = 1, ++ _D3DKMT_QUERYSTATISTICS_PROCESS_ADAPTER = 2, ++ _D3DKMT_QUERYSTATISTICS_SEGMENT = 3, ++ _D3DKMT_QUERYSTATISTICS_PROCESS_SEGMENT = 4, ++ 
_D3DKMT_QUERYSTATISTICS_NODE = 5, ++ _D3DKMT_QUERYSTATISTICS_PROCESS_NODE = 6, ++ _D3DKMT_QUERYSTATISTICS_VIDPNSOURCE = 7, ++ _D3DKMT_QUERYSTATISTICS_PROCESS_VIDPNSOURCE = 8, ++ _D3DKMT_QUERYSTATISTICS_PROCESS_SEGMENT_GROUP = 9, ++ _D3DKMT_QUERYSTATISTICS_PHYSICAL_ADAPTER = 10, ++}; ++ ++struct d3dkmt_querystatistics_result { ++ char size[0x308]; ++}; ++ ++struct d3dkmt_querystatistics { ++ union { ++ struct { ++ enum d3dkmt_querystatistics_type type; ++ struct winluid adapter_luid; ++ __u64 process; ++ struct d3dkmt_querystatistics_result result; ++ }; ++ char size[0x328]; ++ }; ++}; ++ + struct d3dkmt_shareobjectwithhost { + struct d3dkmthandle device_handle; + struct d3dkmthandle object_handle; +@@ -1328,6 +1386,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x3b, struct d3dkmt_waitforsynchronizationobjectfromgpu) + #define LX_DXGETALLOCATIONPRIORITY \ + _IOWR(0x47, 0x3c, struct d3dkmt_getallocationpriority) ++#define LX_DXQUERYCLOCKCALIBRATION \ ++ _IOWR(0x47, 0x3d, struct d3dkmt_queryclockcalibration) + #define LX_DXENUMADAPTERS3 \ + _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3) + #define LX_DXSHAREOBJECTS \ +@@ -1338,6 +1398,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x41, struct d3dkmt_queryresourceinfofromnthandle) + #define LX_DXOPENRESOURCEFROMNTHANDLE \ + _IOWR(0x47, 0x42, struct d3dkmt_openresourcefromnthandle) ++#define LX_DXQUERYSTATISTICS \ ++ _IOWR(0x47, 0x43, struct d3dkmt_querystatistics) + #define LX_DXSHAREOBJECTWITHHOST \ + _IOWR(0x47, 0x44, struct d3dkmt_shareobjectwithhost) + +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1691-drivers-hv-dxgkrnl-Offer-and-reclaim-allocations.patch b/patch/kernel/archive/wsl2-arm64-6.6/1691-drivers-hv-dxgkrnl-Offer-and-reclaim-allocations.patch new file mode 100644 index 000000000000..b4a0af9e1ada --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1691-drivers-hv-dxgkrnl-Offer-and-reclaim-allocations.patch @@ -0,0 +1,466 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 18 Jan 2022 15:01:55 -0800 +Subject: drivers: hv: dxgkrnl: Offer and reclaim allocations + +Implement ioctls to offer and reclaim compute device allocations: + - LX_DXOFFERALLOCATIONS, + - LX_DXRECLAIMALLOCATIONS2 + +When a user mode driver (UMD) does not need to access an allocation, +it can "offer" it by issuing the LX_DXOFFERALLOCATIONS ioctl. This +means that the allocation is not in use and its local device memory +could be evicted. The freed space could be given to another allocation. +When the allocation is again needed, the UMD can attempt to"reclaim" +the allocation by issuing the LX_DXRECLAIMALLOCATIONS2 ioctl. If the +allocation is still not evicted, the reclaim operation succeeds and no +other action is required. If the reclaim operation fails, the caller +must restore the content of the allocation before it can be used by +the device. 
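+
+For illustration only (not part of the upstream patch), the offer/reclaim
+round trip as a UMD might issue it; dxg_fd, device_handle,
+paging_queue_handle, N and handles[] are placeholders set up by earlier
+ioctls in the series:
+
+    struct d3dkmthandle handles[N];          /* allocations not in use */
+    enum d3dddi_reclaim_result results[N];
+
+    struct d3dkmt_offerallocations offer = {
+        .device = device_handle,
+        .allocations = (__u64)(uintptr_t)handles, /* or .resources, not both */
+        .allocation_count = N,
+        .priority = _D3DKMT_OFFER_PRIORITY_NORMAL,
+    };
+    ioctl(dxg_fd, LX_DXOFFERALLOCATIONS, &offer);
+
+    /* Later, before the allocations are used again: */
+    struct d3dkmt_reclaimallocations2 reclaim = {
+        .paging_queue = paging_queue_handle,
+        .allocation_count = N,
+        .allocations = (__u64)(uintptr_t)handles,
+        .results = (__u64)(uintptr_t)results,
+    };
+    ioctl(dxg_fd, LX_DXRECLAIMALLOCATIONS2, &reclaim);
+    /* results[i] == _D3DDDI_RECLAIM_RESULT_DISCARDED means the content was
+     * evicted and must be restored before the device may use it. */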
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 8 + + drivers/hv/dxgkrnl/dxgvmbus.c | 124 +++++++++- + drivers/hv/dxgkrnl/dxgvmbus.h | 27 ++ + drivers/hv/dxgkrnl/ioctl.c | 117 ++++++++- + include/uapi/misc/d3dkmthk.h | 67 +++++ + 5 files changed, 340 insertions(+), 3 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index a55873bdd9a6..494ea8fb0bb3 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -865,6 +865,14 @@ int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, + int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_getallocationpriority *a); ++int dxgvmb_send_offer_allocations(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_offerallocations *args); ++int dxgvmb_send_reclaim_allocations(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle device, ++ struct d3dkmt_reclaimallocations2 *args, ++ u64 __user *paging_fence_value); + int dxgvmb_send_change_vidmem_reservation(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmthandle other_process, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 9a1864bb4e14..8448fd78975b 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1858,7 +1858,7 @@ int dxgvmb_send_query_clock_calibration(struct dxgprocess *process, + ret = copy_to_user(&inargs->clock_data, &result.clock_data, + sizeof(result.clock_data)); + if (ret) { +- pr_err("%s failed to copy clock data", __func__); ++ DXG_ERR("failed to copy clock data"); + ret = -EINVAL; + goto cleanup; + } +@@ -2949,6 +2949,128 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_offer_allocations(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_offerallocations *args) ++{ ++ struct dxgkvmb_command_offerallocations *command; ++ int ret = -EINVAL; ++ u32 alloc_size = sizeof(struct d3dkmthandle) * args->allocation_count; ++ u32 cmd_size = sizeof(struct dxgkvmb_command_offerallocations) + ++ alloc_size - sizeof(struct d3dkmthandle); ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_OFFERALLOCATIONS, ++ process->host_handle); ++ command->flags = args->flags; ++ command->priority = args->priority; ++ command->device = args->device; ++ command->allocation_count = args->allocation_count; ++ if (args->resources) { ++ command->resources = true; ++ ret = copy_from_user(command->allocations, args->resources, ++ alloc_size); ++ } else { ++ ret = copy_from_user(command->allocations, ++ args->allocations, alloc_size); ++ } ++ if (ret) { ++ DXG_ERR("failed to copy input handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ pr_debug("err: %s %d", __func__, ret); ++ return ret; ++} ++ ++int dxgvmb_send_reclaim_allocations(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle device, ++ struct d3dkmt_reclaimallocations2 *args, ++ u64 __user *paging_fence_value) ++{ ++ struct dxgkvmb_command_reclaimallocations *command; ++ struct 
dxgkvmb_command_reclaimallocations_return *result; ++ int ret; ++ u32 alloc_size = sizeof(struct d3dkmthandle) * args->allocation_count; ++ u32 cmd_size = sizeof(struct dxgkvmb_command_reclaimallocations) + ++ alloc_size - sizeof(struct d3dkmthandle); ++ u32 result_size = sizeof(*result); ++ struct dxgvmbusmsgres msg = {.hdr = NULL}; ++ ++ if (args->results) ++ result_size += (args->allocation_count - 1) * ++ sizeof(enum d3dddi_reclaim_result); ++ ++ ret = init_message_res(&msg, adapter, process, cmd_size, result_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ result = msg.res; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_RECLAIMALLOCATIONS, ++ process->host_handle); ++ command->device = device; ++ command->paging_queue = args->paging_queue; ++ command->allocation_count = args->allocation_count; ++ command->write_results = args->results != NULL; ++ if (args->resources) { ++ command->resources = true; ++ ret = copy_from_user(command->allocations, args->resources, ++ alloc_size); ++ } else { ++ ret = copy_from_user(command->allocations, ++ args->allocations, alloc_size); ++ } ++ if (ret) { ++ DXG_ERR("failed to copy input handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ result, msg.res_size); ++ if (ret < 0) ++ goto cleanup; ++ ret = copy_to_user(paging_fence_value, ++ &result->paging_fence_value, sizeof(u64)); ++ if (ret) { ++ DXG_ERR("failed to copy paging fence"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = ntstatus2int(result->status); ++ if (NT_SUCCESS(result->status) && args->results) { ++ ret = copy_to_user(args->results, result->discarded, ++ sizeof(result->discarded[0]) * ++ args->allocation_count); ++ if (ret) { ++ DXG_ERR("failed to copy results"); ++ ret = -EINVAL; ++ } ++ } ++ ++cleanup: ++ free_message((struct dxgvmbusmsg *)&msg, process); ++ if (ret) ++ pr_debug("err: %s %d", __func__, ret); ++ return ret; ++} ++ + int dxgvmb_send_change_vidmem_reservation(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmthandle other_process, +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 17768ed0e68d..558c6576a262 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -653,6 +653,33 @@ struct dxgkvmb_command_markdeviceaserror { + struct d3dkmt_markdeviceaserror args; + }; + ++/* Returns ntstatus */ ++struct dxgkvmb_command_offerallocations { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ u32 allocation_count; ++ enum d3dkmt_offer_priority priority; ++ struct d3dkmt_offer_flags flags; ++ bool resources; ++ struct d3dkmthandle allocations[1]; ++}; ++ ++struct dxgkvmb_command_reclaimallocations { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle paging_queue; ++ u32 allocation_count; ++ bool resources; ++ bool write_results; ++ struct d3dkmthandle allocations[1]; ++}; ++ ++struct dxgkvmb_command_reclaimallocations_return { ++ u64 paging_fence_value; ++ struct ntstatus status; ++ enum d3dddi_reclaim_result discarded[1]; ++}; ++ + /* Returns ntstatus */ + struct dxgkvmb_command_changevideomemoryreservation { + struct dxgkvmb_command_vgpu_to_host hdr; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 4babb21f38a9..fa880aa0196a 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -1961,6 +1961,119 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void 
*__user inargs) + return ret; + } + ++static int ++dxgkio_offer_allocations(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_offerallocations args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.allocation_count > D3DKMT_MAKERESIDENT_ALLOC_MAX || ++ args.allocation_count == 0) { ++ DXG_ERR("invalid number of allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if ((args.resources == NULL) == (args.allocations == NULL)) { ++ DXG_ERR("invalid pointer to resources/allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_offer_allocations(process, adapter, &args); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_reclaim_allocations(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_reclaimallocations2 args; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmt_reclaimallocations2 * __user in_args = inargs; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.allocation_count > D3DKMT_MAKERESIDENT_ALLOC_MAX || ++ args.allocation_count == 0) { ++ DXG_ERR("invalid number of allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if ((args.resources == NULL) == (args.allocations == NULL)) { ++ DXG_ERR("invalid pointer to resources/allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.paging_queue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_reclaim_allocations(process, adapter, ++ device->handle, &args, ++ &in_args->paging_fence_value); ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_submit_command(struct dxgprocess *process, void *__user inargs) + { +@@ -4548,12 +4661,12 @@ static struct ioctl_desc ioctls[] = { + /* 0x24 */ {}, + /* 0x25 */ {dxgkio_lock2, LX_DXLOCK2}, + /* 0x26 */ {dxgkio_mark_device_as_error, LX_DXMARKDEVICEASERROR}, +-/* 0x27 */ {}, ++/* 0x27 */ {dxgkio_offer_allocations, LX_DXOFFERALLOCATIONS}, + /* 0x28 */ {}, + /* 0x29 */ {}, + /* 0x2a */ {dxgkio_query_alloc_residency, LX_DXQUERYALLOCATIONRESIDENCY}, + /* 0x2b */ {}, +-/* 0x2c */ {}, ++/* 0x2c */ {dxgkio_reclaim_allocations, LX_DXRECLAIMALLOCATIONS2}, + /* 0x2d */ {}, + /* 0x2e */ {dxgkio_set_allocation_priority, LX_DXSETALLOCATIONPRIORITY}, + /* 0x2f */ {}, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 
ea18242ceb83..46b9f6d303bf 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -61,6 +61,7 @@ struct winluid { + #define D3DDDI_MAX_WRITTEN_PRIMARIES 16 + + #define D3DKMT_CREATEALLOCATION_MAX 1024 ++#define D3DKMT_MAKERESIDENT_ALLOC_MAX (1024 * 10) + #define D3DKMT_ADAPTERS_MAX 64 + #define D3DDDI_MAX_BROADCAST_CONTEXT 64 + #define D3DDDI_MAX_OBJECT_WAITED_ON 32 +@@ -1087,6 +1088,68 @@ struct d3dddi_updateallocproperty { + }; + }; + ++enum d3dkmt_offer_priority { ++ _D3DKMT_OFFER_PRIORITY_LOW = 1, ++ _D3DKMT_OFFER_PRIORITY_NORMAL = 2, ++ _D3DKMT_OFFER_PRIORITY_HIGH = 3, ++ _D3DKMT_OFFER_PRIORITY_AUTO = 4, ++}; ++ ++struct d3dkmt_offer_flags { ++ union { ++ struct { ++ __u32 offer_immediately:1; ++ __u32 allow_decommit:1; ++ __u32 reserved:30; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_offerallocations { ++ struct d3dkmthandle device; ++ __u32 reserved; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *resources; ++ const struct d3dkmthandle *allocations; ++#else ++ __u64 resources; ++ __u64 allocations; ++#endif ++ __u32 allocation_count; ++ enum d3dkmt_offer_priority priority; ++ struct d3dkmt_offer_flags flags; ++ __u32 reserved1; ++}; ++ ++enum d3dddi_reclaim_result { ++ _D3DDDI_RECLAIM_RESULT_OK = 0, ++ _D3DDDI_RECLAIM_RESULT_DISCARDED = 1, ++ _D3DDDI_RECLAIM_RESULT_NOT_COMMITTED = 2, ++}; ++ ++struct d3dkmt_reclaimallocations2 { ++ struct d3dkmthandle paging_queue; ++ __u32 allocation_count; ++#ifdef __KERNEL__ ++ struct d3dkmthandle *resources; ++ struct d3dkmthandle *allocations; ++#else ++ __u64 resources; ++ __u64 allocations; ++#endif ++ union { ++#ifdef __KERNEL__ ++ __u32 *discarded; ++ enum d3dddi_reclaim_result *results; ++#else ++ __u64 discarded; ++ __u64 results; ++#endif ++ }; ++ __u64 paging_fence_value; ++}; ++ + struct d3dkmt_changevideomemoryreservation { + __u64 process; + struct d3dkmthandle adapter; +@@ -1360,8 +1423,12 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x25, struct d3dkmt_lock2) + #define LX_DXMARKDEVICEASERROR \ + _IOWR(0x47, 0x26, struct d3dkmt_markdeviceaserror) ++#define LX_DXOFFERALLOCATIONS \ ++ _IOWR(0x47, 0x27, struct d3dkmt_offerallocations) + #define LX_DXQUERYALLOCATIONRESIDENCY \ + _IOWR(0x47, 0x2a, struct d3dkmt_queryallocationresidency) ++#define LX_DXRECLAIMALLOCATIONS2 \ ++ _IOWR(0x47, 0x2c, struct d3dkmt_reclaimallocations2) + #define LX_DXSETALLOCATIONPRIORITY \ + _IOWR(0x47, 0x2e, struct d3dkmt_setallocationpriority) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1692-drivers-hv-dxgkrnl-Ioctls-to-manage-scheduling-priority.patch b/patch/kernel/archive/wsl2-arm64-6.6/1692-drivers-hv-dxgkrnl-Ioctls-to-manage-scheduling-priority.patch new file mode 100644 index 000000000000..3e736173ad4c --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1692-drivers-hv-dxgkrnl-Ioctls-to-manage-scheduling-priority.patch @@ -0,0 +1,427 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Fri, 14 Jan 2022 17:57:41 -0800 +Subject: drivers: hv: dxgkrnl: Ioctls to manage scheduling priority + +Implement iocts to manage compute device scheduling priority: + - LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY + - LX_DXGETCONTEXTSCHEDULINGPRIORITY + - LX_DXSETCONTEXTINPROCESSSCHEDULINGPRIORITY + - LX_DXSETCONTEXTSCHEDULINGPRIORITY + +Each compute device execution context has an assigned scheduling +priority. It is used by the compute device scheduler on the host to +pick contexts for execution. 
There is a global priority and a +priority within a process. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 9 + + drivers/hv/dxgkrnl/dxgvmbus.c | 67 +++- + drivers/hv/dxgkrnl/dxgvmbus.h | 19 + + drivers/hv/dxgkrnl/ioctl.c | 177 +++++++++- + include/uapi/misc/d3dkmthk.h | 28 ++ + 5 files changed, 294 insertions(+), 6 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 494ea8fb0bb3..02d10bdcc820 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -865,6 +865,15 @@ int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, + int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_getallocationpriority *a); ++int dxgvmb_send_set_context_sch_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle context, ++ int priority, bool in_process); ++int dxgvmb_send_get_context_sch_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle context, ++ int *priority, ++ bool in_process); + int dxgvmb_send_offer_allocations(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_offerallocations *args); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 8448fd78975b..9a610d48bed7 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -2949,6 +2949,69 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_set_context_sch_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle context, ++ int priority, ++ bool in_process) ++{ ++ struct dxgkvmb_command_setcontextschedulingpriority2 *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_SETCONTEXTSCHEDULINGPRIORITY, ++ process->host_handle); ++ command->context = context; ++ command->priority = priority; ++ command->in_process = in_process; ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_get_context_sch_priority(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmthandle context, ++ int *priority, ++ bool in_process) ++{ ++ struct dxgkvmb_command_getcontextschedulingpriority *command; ++ struct dxgkvmb_command_getcontextschedulingpriority_return result = { }; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_GETCONTEXTSCHEDULINGPRIORITY, ++ process->host_handle); ++ command->context = context; ++ command->in_process = in_process; ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret >= 0) { ++ ret = ntstatus2int(result.status); ++ *priority = result.priority; ++ } ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_offer_allocations(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_offerallocations 
*args) +@@ -2991,7 +3054,7 @@ int dxgvmb_send_offer_allocations(struct dxgprocess *process, + cleanup: + free_message(&msg, process); + if (ret) +- pr_debug("err: %s %d", __func__, ret); ++ DXG_TRACE("err: %d", ret); + return ret; + } + +@@ -3067,7 +3130,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process, + cleanup: + free_message((struct dxgvmbusmsg *)&msg, process); + if (ret) +- pr_debug("err: %s %d", __func__, ret); ++ DXG_TRACE("err: %d", ret); + return ret; + } + +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 558c6576a262..509482e1f870 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -331,6 +331,25 @@ struct dxgkvmb_command_getallocationpriority_return { + /* u32 priorities[allocation_count or 1]; */ + }; + ++/* Returns ntstatus */ ++struct dxgkvmb_command_setcontextschedulingpriority2 { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle context; ++ int priority; ++ bool in_process; ++}; ++ ++struct dxgkvmb_command_getcontextschedulingpriority { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle context; ++ bool in_process; ++}; ++ ++struct dxgkvmb_command_getcontextschedulingpriority_return { ++ struct ntstatus status; ++ int priority; ++}; ++ + struct dxgkvmb_command_createdevice { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_createdeviceflags flags; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index fa880aa0196a..bc0adebe52ae 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -3660,6 +3660,171 @@ dxgkio_get_allocation_priority(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++set_context_scheduling_priority(struct dxgprocess *process, ++ struct d3dkmthandle hcontext, ++ int priority, bool in_process) ++{ ++ int ret = 0; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ hcontext); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ret = dxgvmb_send_set_context_sch_priority(process, adapter, ++ hcontext, priority, ++ in_process); ++ if (ret < 0) ++ DXG_ERR("send_set_context_scheduling_priority failed"); ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ return ret; ++} ++ ++static int ++dxgkio_set_context_scheduling_priority(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_setcontextschedulingpriority args; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = set_context_scheduling_priority(process, args.context, ++ args.priority, false); ++cleanup: ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++get_context_scheduling_priority(struct dxgprocess *process, ++ struct d3dkmthandle hcontext, ++ int __user *priority, ++ bool in_process) ++{ ++ int ret; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ int pri = 0; ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ hcontext); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = 
device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ret = dxgvmb_send_get_context_sch_priority(process, adapter, ++ hcontext, &pri, in_process); ++ if (ret < 0) ++ goto cleanup; ++ ret = copy_to_user(priority, &pri, sizeof(pri)); ++ if (ret) { ++ DXG_ERR("failed to copy priority to user"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ return ret; ++} ++ ++static int ++dxgkio_get_context_scheduling_priority(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_getcontextschedulingpriority args; ++ struct d3dkmt_getcontextschedulingpriority __user *input = inargs; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = get_context_scheduling_priority(process, args.context, ++ &input->priority, false); ++cleanup: ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_set_context_process_scheduling_priority(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_setcontextinprocessschedulingpriority args; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = set_context_scheduling_priority(process, args.context, ++ args.priority, true); ++cleanup: ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_get_context_process_scheduling_priority(struct dxgprocess *process, ++ void __user *inargs) ++{ ++ struct d3dkmt_getcontextinprocessschedulingpriority args; ++ int ret; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = get_context_scheduling_priority(process, args.context, ++ &((struct d3dkmt_getcontextinprocessschedulingpriority *) ++ inargs)->priority, true); ++cleanup: ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs) + { +@@ -4655,8 +4820,10 @@ static struct ioctl_desc ioctls[] = { + /* 0x1e */ {}, + /* 0x1f */ {dxgkio_flush_heap_transitions, LX_DXFLUSHHEAPTRANSITIONS}, + /* 0x20 */ {}, +-/* 0x21 */ {}, +-/* 0x22 */ {}, ++/* 0x21 */ {dxgkio_get_context_process_scheduling_priority, ++ LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY}, ++/* 0x22 */ {dxgkio_get_context_scheduling_priority, ++ LX_DXGETCONTEXTSCHEDULINGPRIORITY}, + /* 0x23 */ {}, + /* 0x24 */ {}, + /* 0x25 */ {dxgkio_lock2, LX_DXLOCK2}, +@@ -4669,8 +4836,10 @@ static struct ioctl_desc ioctls[] = { + /* 0x2c */ {dxgkio_reclaim_allocations, LX_DXRECLAIMALLOCATIONS2}, + /* 0x2d */ {}, + /* 0x2e */ {dxgkio_set_allocation_priority, LX_DXSETALLOCATIONPRIORITY}, +-/* 0x2f */ {}, +-/* 0x30 */ {}, ++/* 0x2f */ {dxgkio_set_context_process_scheduling_priority, ++ LX_DXSETCONTEXTINPROCESSSCHEDULINGPRIORITY}, ++/* 0x30 */ {dxgkio_set_context_scheduling_priority, ++ LX_DXSETCONTEXTSCHEDULINGPRIORITY}, + /* 0x31 */ {dxgkio_signal_sync_object_cpu, + LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU}, + /* 0x32 */ {dxgkio_signal_sync_object_gpu, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 46b9f6d303bf..a9bafab97c18 100644 +--- 
a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -708,6 +708,26 @@ struct d3dkmt_submitcommandtohwqueue { + #endif + }; + ++struct d3dkmt_setcontextschedulingpriority { ++ struct d3dkmthandle context; ++ int priority; ++}; ++ ++struct d3dkmt_setcontextinprocessschedulingpriority { ++ struct d3dkmthandle context; ++ int priority; ++}; ++ ++struct d3dkmt_getcontextschedulingpriority { ++ struct d3dkmthandle context; ++ int priority; ++}; ++ ++struct d3dkmt_getcontextinprocessschedulingpriority { ++ struct d3dkmthandle context; ++ int priority; ++}; ++ + struct d3dkmt_setallocationpriority { + struct d3dkmthandle device; + struct d3dkmthandle resource; +@@ -1419,6 +1439,10 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) + #define LX_DXFLUSHHEAPTRANSITIONS \ + _IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions) ++#define LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY \ ++ _IOWR(0x47, 0x21, struct d3dkmt_getcontextinprocessschedulingpriority) ++#define LX_DXGETCONTEXTSCHEDULINGPRIORITY \ ++ _IOWR(0x47, 0x22, struct d3dkmt_getcontextschedulingpriority) + #define LX_DXLOCK2 \ + _IOWR(0x47, 0x25, struct d3dkmt_lock2) + #define LX_DXMARKDEVICEASERROR \ +@@ -1431,6 +1455,10 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x2c, struct d3dkmt_reclaimallocations2) + #define LX_DXSETALLOCATIONPRIORITY \ + _IOWR(0x47, 0x2e, struct d3dkmt_setallocationpriority) ++#define LX_DXSETCONTEXTINPROCESSSCHEDULINGPRIORITY \ ++ _IOWR(0x47, 0x2f, struct d3dkmt_setcontextinprocessschedulingpriority) ++#define LX_DXSETCONTEXTSCHEDULINGPRIORITY \ ++ _IOWR(0x47, 0x30, struct d3dkmt_setcontextschedulingpriority) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x31, struct d3dkmt_signalsynchronizationobjectfromcpu) + #define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1693-drivers-hv-dxgkrnl-Manage-residency-of-allocations.patch b/patch/kernel/archive/wsl2-arm64-6.6/1693-drivers-hv-dxgkrnl-Manage-residency-of-allocations.patch new file mode 100644 index 000000000000..4c579a39fe80 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1693-drivers-hv-dxgkrnl-Manage-residency-of-allocations.patch @@ -0,0 +1,447 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Fri, 14 Jan 2022 17:33:52 -0800 +Subject: drivers: hv: dxgkrnl: Manage residency of allocations + +Implement ioctls to manage residency of compute device allocations: + - LX_DXMAKERESIDENT, + - LX_DXEVICT. + +An allocation is "resident" when the compute devoce is setup to +access it. It means that the allocation is in the local device +memory or in non-pageable system memory. + +The current design does not support on demand compute device page +faulting. An allocation must be resident before the compute device +is allowed to access it. + +The LX_DXMAKERESIDENT ioctl instructs the video memory manager to +make the given allocations resident. The operation is submitted to +a paging queue (dxgpagingqueue). When the ioctl returns a "pending" +status, a monitored fence sync object can be used to synchronize +with the completion of the operation. + +The LX_DXEVICT ioctl istructs the video memory manager to evict +the given allocations from device accessible memory. 
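+
+Illustration only (not part of the upstream patch): the basic residency
+round trip from user mode; dxg_fd, device_handle, paging_queue_handle, N
+and handles[] are placeholders obtained from earlier ioctls in the series:
+
+    struct d3dddi_makeresident mr = {
+        .paging_queue = paging_queue_handle,
+        .alloc_count = N,
+        .allocation_list = (__u64)(uintptr_t)handles,
+    };
+    int r = ioctl(dxg_fd, LX_DXMAKERESIDENT, &mr);
+    /* r == 0: the allocations are resident; a pending status means the
+     * operation completes asynchronously and mr.paging_fence_value must be
+     * waited on via a monitored fence sync object. */
+
+    struct d3dkmt_evict ev = {
+        .device = device_handle,
+        .alloc_count = N,
+        .allocations = (__u64)(uintptr_t)handles,
+    };
+    ioctl(dxg_fd, LX_DXEVICT, &ev);
+    /* ev.num_bytes_to_trim tells the caller how much it should trim. */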
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 4 + + drivers/hv/dxgkrnl/dxgvmbus.c | 98 +++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 27 ++ + drivers/hv/dxgkrnl/ioctl.c | 141 +++++++++- + include/uapi/misc/d3dkmthk.h | 54 ++++ + 5 files changed, 322 insertions(+), 2 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 02d10bdcc820..93c3ceb23865 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -810,6 +810,10 @@ int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + int dxgvmb_send_destroy_allocation(struct dxgprocess *pr, struct dxgdevice *dev, + struct d3dkmt_destroyallocation2 *args, + struct d3dkmthandle *alloc_handles); ++int dxgvmb_send_make_resident(struct dxgprocess *pr, struct dxgadapter *adapter, ++ struct d3dddi_makeresident *args); ++int dxgvmb_send_evict(struct dxgprocess *pr, struct dxgadapter *adapter, ++ struct d3dkmt_evict *args); + int dxgvmb_send_submit_command(struct dxgprocess *pr, + struct dxgadapter *adapter, + struct d3dkmt_submitcommand *args); +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 9a610d48bed7..f4c4a7e7ad8b 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -2279,6 +2279,104 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device, + return ret; + } + ++int dxgvmb_send_make_resident(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dddi_makeresident *args) ++{ ++ int ret; ++ u32 cmd_size; ++ struct dxgkvmb_command_makeresident_return result = { }; ++ struct dxgkvmb_command_makeresident *command = NULL; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ cmd_size = (args->alloc_count - 1) * sizeof(struct d3dkmthandle) + ++ sizeof(struct dxgkvmb_command_makeresident); ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ ret = copy_from_user(command->allocations, args->allocation_list, ++ args->alloc_count * ++ sizeof(struct d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy alloc handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_MAKERESIDENT, ++ process->host_handle); ++ command->alloc_count = args->alloc_count; ++ command->paging_queue = args->paging_queue; ++ command->flags = args->flags; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) { ++ DXG_ERR("send_make_resident failed %x", ret); ++ goto cleanup; ++ } ++ ++ args->paging_fence_value = result.paging_fence_value; ++ args->num_bytes_to_trim = result.num_bytes_to_trim; ++ ret = ntstatus2int(result.status); ++ ++cleanup: ++ ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_evict(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_evict *args) ++{ ++ int ret; ++ u32 cmd_size; ++ struct dxgkvmb_command_evict_return result = { }; ++ struct dxgkvmb_command_evict *command = NULL; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ cmd_size = (args->alloc_count - 1) * sizeof(struct d3dkmthandle) + ++ sizeof(struct dxgkvmb_command_evict); ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ret = copy_from_user(command->allocations, args->allocations, ++ args->alloc_count * ++ sizeof(struct 
d3dkmthandle)); ++ if (ret) { ++ DXG_ERR("failed to copy alloc handles"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_EVICT, process->host_handle); ++ command->alloc_count = args->alloc_count; ++ command->device = args->device; ++ command->flags = args->flags; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ if (ret < 0) { ++ DXG_ERR("send_evict failed %x", ret); ++ goto cleanup; ++ } ++ args->num_bytes_to_trim = result.num_bytes_to_trim; ++ ++cleanup: ++ ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_submit_command(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_submitcommand *args) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 509482e1f870..23f92ab9f8ad 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -372,6 +372,33 @@ struct dxgkvmb_command_flushdevice { + enum dxgdevice_flushschedulerreason reason; + }; + ++struct dxgkvmb_command_makeresident { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle paging_queue; ++ struct d3dddi_makeresident_flags flags; ++ u32 alloc_count; ++ struct d3dkmthandle allocations[1]; ++}; ++ ++struct dxgkvmb_command_makeresident_return { ++ u64 paging_fence_value; ++ u64 num_bytes_to_trim; ++ struct ntstatus status; ++}; ++ ++struct dxgkvmb_command_evict { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dddi_evict_flags flags; ++ u32 alloc_count; ++ struct d3dkmthandle allocations[1]; ++}; ++ ++struct dxgkvmb_command_evict_return { ++ u64 num_bytes_to_trim; ++}; ++ + struct dxgkvmb_command_submitcommand { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_submitcommand args; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index bc0adebe52ae..2700da51bc01 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -1961,6 +1961,143 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_make_resident(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret, ret2; ++ struct d3dddi_makeresident args; ++ struct d3dddi_makeresident *input = inargs; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.alloc_count > D3DKMT_MAKERESIDENT_ALLOC_MAX || ++ args.alloc_count == 0) { ++ DXG_ERR("invalid number of allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ if (args.paging_queue.v == 0) { ++ DXG_ERR("paging queue is missing"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.paging_queue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_make_resident(process, adapter, &args); ++ if (ret < 0) ++ goto cleanup; ++ /* STATUS_PENING is a success code > 0. 
It is returned to user mode */ ++ if (!(ret == STATUS_PENDING || ret == 0)) { ++ DXG_ERR("Unexpected error %x", ret); ++ goto cleanup; ++ } ++ ++ ret2 = copy_to_user(&input->paging_fence_value, ++ &args.paging_fence_value, sizeof(u64)); ++ if (ret2) { ++ DXG_ERR("failed to copy paging fence"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret2 = copy_to_user(&input->num_bytes_to_trim, ++ &args.num_bytes_to_trim, sizeof(u64)); ++ if (ret2) { ++ DXG_ERR("failed to copy bytes to trim"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ ++ return ret; ++} ++ ++static int ++dxgkio_evict(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_evict args; ++ struct d3dkmt_evict *input = inargs; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ if (args.alloc_count > D3DKMT_MAKERESIDENT_ALLOC_MAX || ++ args.alloc_count == 0) { ++ DXG_ERR("invalid number of allocations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_evict(process, adapter, &args); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(&input->num_bytes_to_trim, ++ &args.num_bytes_to_trim, sizeof(u64)); ++ if (ret) { ++ DXG_ERR("failed to copy bytes to trim to user"); ++ ret = -EINVAL; ++ } ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static int + dxgkio_offer_allocations(struct dxgprocess *process, void *__user inargs) + { +@@ -4797,7 +4934,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x08 */ {}, + /* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO}, + /* 0x0a */ {dxgkio_query_vidmem_info, LX_DXQUERYVIDEOMEMORYINFO}, +-/* 0x0b */ {}, ++/* 0x0b */ {dxgkio_make_resident, LX_DXMAKERESIDENT}, + /* 0x0c */ {}, + /* 0x0d */ {dxgkio_escape, LX_DXESCAPE}, + /* 0x0e */ {dxgkio_get_device_state, LX_DXGETDEVICESTATE}, +@@ -4817,7 +4954,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x1b */ {dxgkio_destroy_hwqueue, LX_DXDESTROYHWQUEUE}, + /* 0x1c */ {dxgkio_destroy_paging_queue, LX_DXDESTROYPAGINGQUEUE}, + /* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT}, +-/* 0x1e */ {}, ++/* 0x1e */ {dxgkio_evict, LX_DXEVICT}, + /* 0x1f */ {dxgkio_flush_heap_transitions, LX_DXFLUSHHEAPTRANSITIONS}, + /* 0x20 */ {}, + /* 0x21 */ {dxgkio_get_context_process_scheduling_priority, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index a9bafab97c18..944f9d1e73d6 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -962,6 +962,56 @@ struct d3dkmt_destroyallocation2 { + struct d3dddicb_destroyallocation2flags flags; + }; + ++struct d3dddi_makeresident_flags { ++ union { ++ struct { ++ __u32 cant_trim_further:1; ++ __u32 must_succeed:1; ++ __u32 reserved:30; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dddi_makeresident 
{ ++ struct d3dkmthandle paging_queue; ++ __u32 alloc_count; ++#ifdef __KERNEL__ ++ const struct d3dkmthandle *allocation_list; ++ const __u32 *priority_list; ++#else ++ __u64 allocation_list; ++ __u64 priority_list; ++#endif ++ struct d3dddi_makeresident_flags flags; ++ __u64 paging_fence_value; ++ __u64 num_bytes_to_trim; ++}; ++ ++struct d3dddi_evict_flags { ++ union { ++ struct { ++ __u32 evict_only_if_necessary:1; ++ __u32 not_written_to:1; ++ __u32 reserved:30; ++ }; ++ __u32 value; ++ }; ++}; ++ ++struct d3dkmt_evict { ++ struct d3dkmthandle device; ++ __u32 alloc_count; ++#ifdef __KERNEL__ ++ const struct d3dkmthandle *allocations; ++#else ++ __u64 allocations; ++#endif ++ struct d3dddi_evict_flags flags; ++ __u32 reserved; ++ __u64 num_bytes_to_trim; ++}; ++ + enum d3dkmt_memory_segment_group { + _D3DKMT_MEMORY_SEGMENT_GROUP_LOCAL = 0, + _D3DKMT_MEMORY_SEGMENT_GROUP_NON_LOCAL = 1 +@@ -1407,6 +1457,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXQUERYVIDEOMEMORYINFO \ + _IOWR(0x47, 0x0a, struct d3dkmt_queryvideomemoryinfo) ++#define LX_DXMAKERESIDENT \ ++ _IOWR(0x47, 0x0b, struct d3dddi_makeresident) + #define LX_DXESCAPE \ + _IOWR(0x47, 0x0d, struct d3dkmt_escape) + #define LX_DXGETDEVICESTATE \ +@@ -1437,6 +1489,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x19, struct d3dkmt_destroydevice) + #define LX_DXDESTROYSYNCHRONIZATIONOBJECT \ + _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject) ++#define LX_DXEVICT \ ++ _IOWR(0x47, 0x1e, struct d3dkmt_evict) + #define LX_DXFLUSHHEAPTRANSITIONS \ + _IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions) + #define LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1694-drivers-hv-dxgkrnl-Manage-compute-device-virtual-addresses.patch b/patch/kernel/archive/wsl2-arm64-6.6/1694-drivers-hv-dxgkrnl-Manage-compute-device-virtual-addresses.patch new file mode 100644 index 000000000000..633f1005806c --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1694-drivers-hv-dxgkrnl-Manage-compute-device-virtual-addresses.patch @@ -0,0 +1,703 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Fri, 14 Jan 2022 17:13:04 -0800 +Subject: drivers: hv: dxgkrnl: Manage compute device virtual addresses + +Implement ioctls to manage compute device virtual addresses (VA): + - LX_DXRESERVEGPUVIRTUALADDRESS, + - LX_DXFREEGPUVIRTUALADDRESS, + - LX_DXMAPGPUVIRTUALADDRESS, + - LX_DXUPDATEGPUVIRTUALADDRESS. + +Compute devices access memory by using virtual addresses. +Each process has a dedicated VA space. The video memory manager +on the host is responsible for updating device page tables +before submitting a DMA buffer for execution. + +The LX_DXRESERVEGPUVIRTUALADDRESS ioctl reserves a portion of the +process compute device VA space. + +The LX_DXMAPGPUVIRTUALADDRESS ioctl reserves a portion of the process +compute device VA space and maps it to the given compute device +allocation. + +The LX_DXFREEGPUVIRTUALADDRESS ioctl frees the previously reserved portion +of the compute device VA space. + +The LX_DXUPDATEGPUVIRTUALADDRESS ioctl adds operations to modify the +compute device VA space to a compute device execution context. It +allows the operations to be queued and synchronized with execution +of other compute device DMA buffers.
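+
+As an illustration only (an editor's sketch, not part of the upstream
+patch), user mode is expected to drive the reserve ioctl roughly as
+follows. The struct, enum and ioctl names come from the uapi header
+touched by this series; dxg_fd (a descriptor for the dxgkrnl character
+device, normally /dev/dxg) and adapter_handle are placeholders obtained
+through the existing open/enumeration ioctls, and the size value is
+arbitrary:
+
+    #include <sys/ioctl.h>
+    #include <misc/d3dkmthk.h>
+
+    static int reserve_example(int dxg_fd, struct d3dkmthandle adapter_handle,
+                               __u64 *va)
+    {
+        struct d3dddi_reservegpuvirtualaddress resv = {
+            .adapter = adapter_handle,  /* adapter or paging queue handle */
+            .size = 1 << 20,            /* size of the VA range to reserve */
+            .reservation_type = _D3DDDIGPUVA_RESERVE_NO_ACCESS,
+        };
+
+        if (ioctl(dxg_fd, LX_DXRESERVEGPUVIRTUALADDRESS, &resv))
+            return -1;                  /* errno is set on failure */
+        *va = resv.virtual_address;     /* reserved GPU VA on success */
+        return 0;
+    }
+
+The same calling pattern applies to LX_DXMAPGPUVIRTUALADDRESS,
+LX_DXFREEGPUVIRTUALADDRESS and LX_DXUPDATEGPUVIRTUALADDRESS with their
+respective argument structures.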
+ +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 10 + + drivers/hv/dxgkrnl/dxgvmbus.c | 150 ++++++ + drivers/hv/dxgkrnl/dxgvmbus.h | 38 ++ + drivers/hv/dxgkrnl/ioctl.c | 228 +++++++++- + include/uapi/misc/d3dkmthk.h | 126 +++++ + 5 files changed, 548 insertions(+), 4 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 93c3ceb23865..93bc9b41aa41 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -817,6 +817,16 @@ int dxgvmb_send_evict(struct dxgprocess *pr, struct dxgadapter *adapter, + int dxgvmb_send_submit_command(struct dxgprocess *pr, + struct dxgadapter *adapter, + struct d3dkmt_submitcommand *args); ++int dxgvmb_send_map_gpu_va(struct dxgprocess *pr, struct d3dkmthandle h, ++ struct dxgadapter *adapter, ++ struct d3dddi_mapgpuvirtualaddress *args); ++int dxgvmb_send_reserve_gpu_va(struct dxgprocess *pr, ++ struct dxgadapter *adapter, ++ struct d3dddi_reservegpuvirtualaddress *args); ++int dxgvmb_send_free_gpu_va(struct dxgprocess *pr, struct dxgadapter *adapter, ++ struct d3dkmt_freegpuvirtualaddress *args); ++int dxgvmb_send_update_gpu_va(struct dxgprocess *pr, struct dxgadapter *adapter, ++ struct d3dkmt_updategpuvirtualaddress *args); + int dxgvmb_send_create_sync_object(struct dxgprocess *pr, + struct dxgadapter *adapter, + struct d3dkmt_createsynchronizationobject2 +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index f4c4a7e7ad8b..425a1ab87bd6 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -2432,6 +2432,156 @@ int dxgvmb_send_submit_command(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_map_gpu_va(struct dxgprocess *process, ++ struct d3dkmthandle device, ++ struct dxgadapter *adapter, ++ struct d3dddi_mapgpuvirtualaddress *args) ++{ ++ struct dxgkvmb_command_mapgpuvirtualaddress *command; ++ struct dxgkvmb_command_mapgpuvirtualaddress_return result; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_MAPGPUVIRTUALADDRESS, ++ process->host_handle); ++ command->args = *args; ++ command->device = device; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, &result, ++ sizeof(result)); ++ if (ret < 0) ++ goto cleanup; ++ args->virtual_address = result.virtual_address; ++ args->paging_fence_value = result.paging_fence_value; ++ ret = ntstatus2int(result.status); ++ ++cleanup: ++ ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_reserve_gpu_va(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dddi_reservegpuvirtualaddress *args) ++{ ++ struct dxgkvmb_command_reservegpuvirtualaddress *command; ++ struct dxgkvmb_command_reservegpuvirtualaddress_return result; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_RESERVEGPUVIRTUALADDRESS, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, &result, ++ sizeof(result)); ++ args->virtual_address = result.virtual_address; ++ ++cleanup: ++ free_message(&msg, 
process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_free_gpu_va(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_freegpuvirtualaddress *args) ++{ ++ struct dxgkvmb_command_freegpuvirtualaddress *command; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ ret = init_message(&msg, adapter, process, sizeof(*command)); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_FREEGPUVIRTUALADDRESS, ++ process->host_handle); ++ command->args = *args; ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ ++int dxgvmb_send_update_gpu_va(struct dxgprocess *process, ++ struct dxgadapter *adapter, ++ struct d3dkmt_updategpuvirtualaddress *args) ++{ ++ struct dxgkvmb_command_updategpuvirtualaddress *command; ++ u32 cmd_size; ++ u32 op_size; ++ int ret; ++ struct dxgvmbusmsg msg = {.hdr = NULL}; ++ ++ if (args->num_operations == 0 || ++ (DXG_MAX_VM_BUS_PACKET_SIZE / ++ sizeof(struct d3dddi_updategpuvirtualaddress_operation)) < ++ args->num_operations) { ++ ret = -EINVAL; ++ DXG_ERR("Invalid number of operations: %d", ++ args->num_operations); ++ goto cleanup; ++ } ++ ++ op_size = args->num_operations * ++ sizeof(struct d3dddi_updategpuvirtualaddress_operation); ++ cmd_size = sizeof(struct dxgkvmb_command_updategpuvirtualaddress) + ++ op_size - sizeof(args->operations[0]); ++ ++ ret = init_message(&msg, adapter, process, cmd_size); ++ if (ret) ++ goto cleanup; ++ command = (void *)msg.msg; ++ ++ command_vgpu_to_host_init2(&command->hdr, ++ DXGK_VMBCOMMAND_UPDATEGPUVIRTUALADDRESS, ++ process->host_handle); ++ command->fence_value = args->fence_value; ++ command->device = args->device; ++ command->context = args->context; ++ command->fence_object = args->fence_object; ++ command->num_operations = args->num_operations; ++ command->flags = args->flags.value; ++ ret = copy_from_user(command->operations, args->operations, ++ op_size); ++ if (ret) { ++ DXG_ERR("failed to copy operations"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + static void set_result(struct d3dkmt_createsynchronizationobject2 *args, + u64 fence_gpu_va, u8 *va) + { +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 23f92ab9f8ad..88967ff6a505 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -418,6 +418,44 @@ struct dxgkvmb_command_flushheaptransitions { + struct dxgkvmb_command_vgpu_to_host hdr; + }; + ++struct dxgkvmb_command_freegpuvirtualaddress { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmt_freegpuvirtualaddress args; ++}; ++ ++struct dxgkvmb_command_mapgpuvirtualaddress { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dddi_mapgpuvirtualaddress args; ++ struct d3dkmthandle device; ++}; ++ ++struct dxgkvmb_command_mapgpuvirtualaddress_return { ++ u64 virtual_address; ++ u64 paging_fence_value; ++ struct ntstatus status; ++}; ++ ++struct dxgkvmb_command_reservegpuvirtualaddress { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dddi_reservegpuvirtualaddress args; ++}; ++ ++struct dxgkvmb_command_reservegpuvirtualaddress_return { ++ u64 virtual_address; ++ u64 paging_fence_value; ++}; ++ 
++struct dxgkvmb_command_updategpuvirtualaddress { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ u64 fence_value; ++ struct d3dkmthandle device; ++ struct d3dkmthandle context; ++ struct d3dkmthandle fence_object; ++ u32 num_operations; ++ u32 flags; ++ struct d3dddi_updategpuvirtualaddress_operation operations[1]; ++}; ++ + struct dxgkvmb_command_queryclockcalibration { + struct dxgkvmb_command_vgpu_to_host hdr; + struct d3dkmt_queryclockcalibration args; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 2700da51bc01..f6700e974f25 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -2492,6 +2492,226 @@ dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++static int ++dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret, ret2; ++ struct d3dddi_mapgpuvirtualaddress args; ++ struct d3dddi_mapgpuvirtualaddress *input = inargs; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.paging_queue); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_map_gpu_va(process, zerohandle, adapter, &args); ++ if (ret < 0) ++ goto cleanup; ++ /* STATUS_PENING is a success code > 0. It is returned to user mode */ ++ if (!(ret == STATUS_PENDING || ret == 0)) { ++ DXG_ERR("Unexpected error %x", ret); ++ goto cleanup; ++ } ++ ++ ret2 = copy_to_user(&input->paging_fence_value, ++ &args.paging_fence_value, sizeof(u64)); ++ if (ret2) { ++ DXG_ERR("failed to copy paging fence to user"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret2 = copy_to_user(&input->virtual_address, &args.virtual_address, ++ sizeof(args.virtual_address)); ++ if (ret2) { ++ DXG_ERR("failed to copy va to user"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_reserve_gpu_va(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dddi_reservegpuvirtualaddress args; ++ struct d3dddi_reservegpuvirtualaddress *input = inargs; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGPAGINGQUEUE, ++ args.adapter); ++ if (device == NULL) { ++ DXG_ERR("invalid adapter or paging queue: 0x%x", ++ args.adapter.v); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ adapter = device->adapter; ++ kref_get(&adapter->adapter_kref); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } else { ++ args.adapter = adapter->host_handle; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ adapter = NULL; ++ goto cleanup; ++ } 
++ ++ ret = dxgvmb_send_reserve_gpu_va(process, adapter, &args); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(&input->virtual_address, &args.virtual_address, ++ sizeof(args.virtual_address)); ++ if (ret) { ++ DXG_ERR("failed to copy VA to user"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (adapter) { ++ dxgadapter_release_lock_shared(adapter); ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static int ++dxgkio_free_gpu_va(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_freegpuvirtualaddress args; ++ struct dxgadapter *adapter = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = dxgprocess_adapter_by_handle(process, args.adapter); ++ if (adapter == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ args.adapter = adapter->host_handle; ++ ret = dxgvmb_send_free_gpu_va(process, adapter, &args); ++ ++cleanup: ++ ++ if (adapter) { ++ dxgadapter_release_lock_shared(adapter); ++ kref_put(&adapter->adapter_kref, dxgadapter_release); ++ } ++ ++ return ret; ++} ++ ++static int ++dxgkio_update_gpu_va(struct dxgprocess *process, void *__user inargs) ++{ ++ int ret; ++ struct d3dkmt_updategpuvirtualaddress args; ++ struct d3dkmt_updategpuvirtualaddress *input = inargs; ++ struct dxgadapter *adapter = NULL; ++ struct dxgdevice *device = NULL; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ ret = dxgvmb_send_update_gpu_va(process, adapter, &args); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = copy_to_user(&input->fence_value, &args.fence_value, ++ sizeof(args.fence_value)); ++ if (ret) { ++ DXG_ERR("failed to copy fence value to user"); ++ ret = -EINVAL; ++ } ++ ++cleanup: ++ ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) ++ kref_put(&device->device_kref, dxgdevice_release); ++ ++ return ret; ++} ++ + static int + dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + { +@@ -4931,11 +5151,11 @@ static struct ioctl_desc ioctls[] = { + /* 0x05 */ {dxgkio_destroy_context, LX_DXDESTROYCONTEXT}, + /* 0x06 */ {dxgkio_create_allocation, LX_DXCREATEALLOCATION}, + /* 0x07 */ {dxgkio_create_paging_queue, LX_DXCREATEPAGINGQUEUE}, +-/* 0x08 */ {}, ++/* 0x08 */ {dxgkio_reserve_gpu_va, LX_DXRESERVEGPUVIRTUALADDRESS}, + /* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO}, + /* 0x0a */ {dxgkio_query_vidmem_info, LX_DXQUERYVIDEOMEMORYINFO}, + /* 0x0b */ {dxgkio_make_resident, LX_DXMAKERESIDENT}, +-/* 0x0c */ {}, ++/* 0x0c */ {dxgkio_map_gpu_va, LX_DXMAPGPUVIRTUALADDRESS}, + /* 0x0d */ {dxgkio_escape, LX_DXESCAPE}, + /* 0x0e */ {dxgkio_get_device_state, LX_DXGETDEVICESTATE}, + /* 0x0f */ {dxgkio_submit_command, LX_DXSUBMITCOMMAND}, +@@ -4956,7 +5176,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x1d */ {dxgkio_destroy_sync_object, 
LX_DXDESTROYSYNCHRONIZATIONOBJECT}, + /* 0x1e */ {dxgkio_evict, LX_DXEVICT}, + /* 0x1f */ {dxgkio_flush_heap_transitions, LX_DXFLUSHHEAPTRANSITIONS}, +-/* 0x20 */ {}, ++/* 0x20 */ {dxgkio_free_gpu_va, LX_DXFREEGPUVIRTUALADDRESS}, + /* 0x21 */ {dxgkio_get_context_process_scheduling_priority, + LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY}, + /* 0x22 */ {dxgkio_get_context_scheduling_priority, +@@ -4990,7 +5210,7 @@ static struct ioctl_desc ioctls[] = { + LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE}, + /* 0x37 */ {dxgkio_unlock2, LX_DXUNLOCK2}, + /* 0x38 */ {dxgkio_update_alloc_property, LX_DXUPDATEALLOCPROPERTY}, +-/* 0x39 */ {}, ++/* 0x39 */ {dxgkio_update_gpu_va, LX_DXUPDATEGPUVIRTUALADDRESS}, + /* 0x3a */ {dxgkio_wait_sync_object_cpu, + LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU}, + /* 0x3b */ {dxgkio_wait_sync_object_gpu, +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 944f9d1e73d6..1f60f5120e1d 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -1012,6 +1012,124 @@ struct d3dkmt_evict { + __u64 num_bytes_to_trim; + }; + ++struct d3dddigpuva_protection_type { ++ union { ++ struct { ++ __u64 write:1; ++ __u64 execute:1; ++ __u64 zero:1; ++ __u64 no_access:1; ++ __u64 system_use_only:1; ++ __u64 reserved:59; ++ }; ++ __u64 value; ++ }; ++}; ++ ++enum d3dddi_updategpuvirtualaddress_operation_type { ++ _D3DDDI_UPDATEGPUVIRTUALADDRESS_MAP = 0, ++ _D3DDDI_UPDATEGPUVIRTUALADDRESS_UNMAP = 1, ++ _D3DDDI_UPDATEGPUVIRTUALADDRESS_COPY = 2, ++ _D3DDDI_UPDATEGPUVIRTUALADDRESS_MAP_PROTECT = 3, ++}; ++ ++struct d3dddi_updategpuvirtualaddress_operation { ++ enum d3dddi_updategpuvirtualaddress_operation_type operation; ++ union { ++ struct { ++ __u64 base_address; ++ __u64 size; ++ struct d3dkmthandle allocation; ++ __u64 allocation_offset; ++ __u64 allocation_size; ++ } map; ++ struct { ++ __u64 base_address; ++ __u64 size; ++ struct d3dkmthandle allocation; ++ __u64 allocation_offset; ++ __u64 allocation_size; ++ struct d3dddigpuva_protection_type protection; ++ __u64 driver_protection; ++ } map_protect; ++ struct { ++ __u64 base_address; ++ __u64 size; ++ struct d3dddigpuva_protection_type protection; ++ } unmap; ++ struct { ++ __u64 source_address; ++ __u64 size; ++ __u64 dest_address; ++ } copy; ++ }; ++}; ++ ++enum d3dddigpuva_reservation_type { ++ _D3DDDIGPUVA_RESERVE_NO_ACCESS = 0, ++ _D3DDDIGPUVA_RESERVE_ZERO = 1, ++ _D3DDDIGPUVA_RESERVE_NO_COMMIT = 2 ++}; ++ ++struct d3dkmt_updategpuvirtualaddress { ++ struct d3dkmthandle device; ++ struct d3dkmthandle context; ++ struct d3dkmthandle fence_object; ++ __u32 num_operations; ++#ifdef __KERNEL__ ++ struct d3dddi_updategpuvirtualaddress_operation *operations; ++#else ++ __u64 operations; ++#endif ++ __u32 reserved0; ++ __u32 reserved1; ++ __u64 reserved2; ++ __u64 fence_value; ++ union { ++ struct { ++ __u32 do_not_wait:1; ++ __u32 reserved:31; ++ }; ++ __u32 value; ++ } flags; ++ __u32 reserved3; ++}; ++ ++struct d3dddi_mapgpuvirtualaddress { ++ struct d3dkmthandle paging_queue; ++ __u64 base_address; ++ __u64 minimum_address; ++ __u64 maximum_address; ++ struct d3dkmthandle allocation; ++ __u64 offset_in_pages; ++ __u64 size_in_pages; ++ struct d3dddigpuva_protection_type protection; ++ __u64 driver_protection; ++ __u32 reserved0; ++ __u64 reserved1; ++ __u64 virtual_address; ++ __u64 paging_fence_value; ++}; ++ ++struct d3dddi_reservegpuvirtualaddress { ++ struct d3dkmthandle adapter; ++ __u64 base_address; ++ __u64 minimum_address; ++ __u64 maximum_address; ++ __u64 size; ++ enum 
d3dddigpuva_reservation_type reservation_type; ++ __u64 driver_protection; ++ __u64 virtual_address; ++ __u64 paging_fence_value; ++}; ++ ++struct d3dkmt_freegpuvirtualaddress { ++ struct d3dkmthandle adapter; ++ __u32 reserved; ++ __u64 base_address; ++ __u64 size; ++}; ++ + enum d3dkmt_memory_segment_group { + _D3DKMT_MEMORY_SEGMENT_GROUP_LOCAL = 0, + _D3DKMT_MEMORY_SEGMENT_GROUP_NON_LOCAL = 1 +@@ -1453,12 +1571,16 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x06, struct d3dkmt_createallocation) + #define LX_DXCREATEPAGINGQUEUE \ + _IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue) ++#define LX_DXRESERVEGPUVIRTUALADDRESS \ ++ _IOWR(0x47, 0x08, struct d3dddi_reservegpuvirtualaddress) + #define LX_DXQUERYADAPTERINFO \ + _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo) + #define LX_DXQUERYVIDEOMEMORYINFO \ + _IOWR(0x47, 0x0a, struct d3dkmt_queryvideomemoryinfo) + #define LX_DXMAKERESIDENT \ + _IOWR(0x47, 0x0b, struct d3dddi_makeresident) ++#define LX_DXMAPGPUVIRTUALADDRESS \ ++ _IOWR(0x47, 0x0c, struct d3dddi_mapgpuvirtualaddress) + #define LX_DXESCAPE \ + _IOWR(0x47, 0x0d, struct d3dkmt_escape) + #define LX_DXGETDEVICESTATE \ +@@ -1493,6 +1615,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x1e, struct d3dkmt_evict) + #define LX_DXFLUSHHEAPTRANSITIONS \ + _IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions) ++#define LX_DXFREEGPUVIRTUALADDRESS \ ++ _IOWR(0x47, 0x20, struct d3dkmt_freegpuvirtualaddress) + #define LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY \ + _IOWR(0x47, 0x21, struct d3dkmt_getcontextinprocessschedulingpriority) + #define LX_DXGETCONTEXTSCHEDULINGPRIORITY \ +@@ -1529,6 +1653,8 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x37, struct d3dkmt_unlock2) + #define LX_DXUPDATEALLOCPROPERTY \ + _IOWR(0x47, 0x38, struct d3dddi_updateallocproperty) ++#define LX_DXUPDATEGPUVIRTUALADDRESS \ ++ _IOWR(0x47, 0x39, struct d3dkmt_updategpuvirtualaddress) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \ + _IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu) + #define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1695-drivers-hv-dxgkrnl-Add-support-to-map-guest-pages-by-host.patch b/patch/kernel/archive/wsl2-arm64-6.6/1695-drivers-hv-dxgkrnl-Add-support-to-map-guest-pages-by-host.patch new file mode 100644 index 000000000000..1d4f77001bf6 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1695-drivers-hv-dxgkrnl-Add-support-to-map-guest-pages-by-host.patch @@ -0,0 +1,313 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Fri, 8 Oct 2021 14:17:39 -0700 +Subject: drivers: hv: dxgkrnl: Add support to map guest pages by host + +Implement support for mapping guest memory pages by the host. +This removes hyper-v limitations of using GPADL (guest physical +address list). + +Dxgkrnl uses hyper-v GPADLs to share guest system memory with the +host. This method has limitations: +- a single GPADL can represent only ~32MB of memory +- there is a limit of how much memory the total size of GPADLs + in a VM can represent. +To avoid these limitations the host implemented mapping guest memory +pages. Presence of this support is determined by reading PCI config +space. 
When the support is enabled, dxgkrnl does not use GPADLs and +instead uses the following code flow: +- memory pages of an existing system memory buffer are pinned +- PFNs of the pages are sent to the host via a VM bus message +- the host maps the PFNs to get access to the memory + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/Makefile | 2 +- + drivers/hv/dxgkrnl/dxgkrnl.h | 1 + + drivers/hv/dxgkrnl/dxgmodule.c | 33 ++- + drivers/hv/dxgkrnl/dxgvmbus.c | 117 +++++++--- + drivers/hv/dxgkrnl/dxgvmbus.h | 10 + + drivers/hv/dxgkrnl/misc.c | 1 + + 6 files changed, 129 insertions(+), 35 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile +index 9d821e83448a..fc85a47a6ad5 100644 +--- a/drivers/hv/dxgkrnl/Makefile ++++ b/drivers/hv/dxgkrnl/Makefile +@@ -2,4 +2,4 @@ + # Makefile for the hyper-v compute device driver (dxgkrnl). + + obj-$(CONFIG_DXGKRNL) += dxgkrnl.o +-dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o ++dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 93bc9b41aa41..091dbe999d33 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -316,6 +316,7 @@ struct dxgglobal { + bool misc_registered; + bool pci_registered; + bool vmbus_registered; ++ bool map_guest_pages_enabled; + }; + + static inline struct dxgglobal *dxggbl(void) +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 5c364a46b65f..b1b612b90fc1 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -147,7 +147,7 @@ void dxgglobal_remove_host_event(struct dxghostevent *event) + + void signal_host_cpu_event(struct dxghostevent *eventhdr) + { +- struct dxghosteventcpu *event = (struct dxghosteventcpu *)eventhdr; ++ struct dxghosteventcpu *event = (struct dxghosteventcpu *)eventhdr; + + if (event->remove_from_list || + event->destroy_after_signal) { +@@ -426,7 +426,11 @@ const struct file_operations dxgk_fops = { + #define DXGK_VMBUS_VGPU_LUID_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \ + sizeof(u32)) + +-/* The guest writes its capabilities to this address */ ++/* The host caps (dxgk_vmbus_hostcaps) */ ++#define DXGK_VMBUS_HOSTCAPS_OFFSET (DXGK_VMBUS_VGPU_LUID_OFFSET + \ ++ sizeof(struct winluid)) ++ ++/* The guest writes its capavilities to this adderss */ + #define DXGK_VMBUS_GUESTCAPS_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \ + sizeof(u32)) + +@@ -441,6 +445,23 @@ struct dxgk_vmbus_guestcaps { + }; + }; + ++/* ++ * The structure defines features, supported by the host. ++ * ++ * map_guest_memory ++ * Host can map guest memory pages, so the guest can avoid using GPADLs ++ * to represent existing system memory allocations. ++ */ ++struct dxgk_vmbus_hostcaps { ++ union { ++ struct { ++ u32 map_guest_memory : 1; ++ u32 reserved : 31; ++ }; ++ u32 host_caps; ++ }; ++}; ++ + /* + * A helper function to read PCI config space. 
+ */ +@@ -475,6 +496,7 @@ static int dxg_pci_probe_device(struct pci_dev *dev, + struct winluid vgpu_luid = {}; + struct dxgk_vmbus_guestcaps guest_caps = {.wsl2 = 1}; + struct dxgglobal *dxgglobal = dxggbl(); ++ struct dxgk_vmbus_hostcaps host_caps = {}; + + mutex_lock(&dxgglobal->device_mutex); + +@@ -503,6 +525,13 @@ static int dxg_pci_probe_device(struct pci_dev *dev, + if (ret) + goto cleanup; + ++ ret = pci_read_config_dword(dev, DXGK_VMBUS_HOSTCAPS_OFFSET, ++ &host_caps.host_caps); ++ if (ret == 0) { ++ if (host_caps.map_guest_memory) ++ dxgglobal->map_guest_pages_enabled = true; ++ } ++ + if (dxgglobal->vmbus_ver > DXGK_VMBUS_INTERFACE_VERSION) + dxgglobal->vmbus_ver = DXGK_VMBUS_INTERFACE_VERSION; + } +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 425a1ab87bd6..4d7807909284 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1383,15 +1383,19 @@ int create_existing_sysmem(struct dxgdevice *device, + void *kmem = NULL; + int ret = 0; + struct dxgkvmb_command_setexistingsysmemstore *set_store_command; ++ struct dxgkvmb_command_setexistingsysmempages *set_pages_command; + u64 alloc_size = host_alloc->allocation_size; + u32 npages = alloc_size >> PAGE_SHIFT; + struct dxgvmbusmsg msg = {.hdr = NULL}; +- +- ret = init_message(&msg, device->adapter, device->process, +- sizeof(*set_store_command)); +- if (ret) +- goto cleanup; +- set_store_command = (void *)msg.msg; ++ const u32 max_pfns_in_message = ++ (DXG_MAX_VM_BUS_PACKET_SIZE - sizeof(*set_pages_command) - ++ PAGE_SIZE) / sizeof(__u64); ++ u32 alloc_offset_in_pages = 0; ++ struct page **page_in; ++ u64 *pfn; ++ u32 pages_to_send; ++ u32 i; ++ struct dxgglobal *dxgglobal = dxggbl(); + + /* + * Create a guest physical address list and set it as the allocation +@@ -1402,6 +1406,7 @@ int create_existing_sysmem(struct dxgdevice *device, + DXG_TRACE("Alloc size: %lld", alloc_size); + + dxgalloc->cpu_address = (void *)sysmem; ++ + dxgalloc->pages = vzalloc(npages * sizeof(void *)); + if (dxgalloc->pages == NULL) { + DXG_ERR("failed to allocate pages"); +@@ -1419,39 +1424,87 @@ int create_existing_sysmem(struct dxgdevice *device, + ret = -ENOMEM; + goto cleanup; + } +- kmem = vmap(dxgalloc->pages, npages, VM_MAP, PAGE_KERNEL); +- if (kmem == NULL) { +- DXG_ERR("vmap failed"); +- ret = -ENOMEM; +- goto cleanup; +- } +- ret1 = vmbus_establish_gpadl(dxgglobal_get_vmbus(), kmem, +- alloc_size, &dxgalloc->gpadl); +- if (ret1) { +- DXG_ERR("establish_gpadl failed: %d", ret1); +- ret = -ENOMEM; +- goto cleanup; +- } ++ if (!dxgglobal->map_guest_pages_enabled) { ++ ret = init_message(&msg, device->adapter, device->process, ++ sizeof(*set_store_command)); ++ if (ret) ++ goto cleanup; ++ set_store_command = (void *)msg.msg; ++ ++ kmem = vmap(dxgalloc->pages, npages, VM_MAP, PAGE_KERNEL); ++ if (kmem == NULL) { ++ DXG_ERR("vmap failed"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ret1 = vmbus_establish_gpadl(dxgglobal_get_vmbus(), kmem, ++ alloc_size, &dxgalloc->gpadl); ++ if (ret1) { ++ DXG_ERR("establish_gpadl failed: %d", ret1); ++ ret = -ENOMEM; ++ goto cleanup; ++ } + #ifdef _MAIN_KERNEL_ +- DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle); ++ DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle); + #else +- DXG_TRACE("New gpadl %d", dxgalloc->gpadl); ++ DXG_TRACE("New gpadl %d", dxgalloc->gpadl); + #endif + +- command_vgpu_to_host_init2(&set_store_command->hdr, +- DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE, +- device->process->host_handle); +- set_store_command->device = 
device->handle; +- set_store_command->device = device->handle; +- set_store_command->allocation = host_alloc->allocation; ++ command_vgpu_to_host_init2(&set_store_command->hdr, ++ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE, ++ device->process->host_handle); ++ set_store_command->device = device->handle; ++ set_store_command->allocation = host_alloc->allocation; + #ifdef _MAIN_KERNEL_ +- set_store_command->gpadl = dxgalloc->gpadl.gpadl_handle; ++ set_store_command->gpadl = dxgalloc->gpadl.gpadl_handle; + #else +- set_store_command->gpadl = dxgalloc->gpadl; ++ set_store_command->gpadl = dxgalloc->gpadl; + #endif +- ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); +- if (ret < 0) +- DXG_ERR("failed to set existing store: %x", ret); ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, ++ msg.size); ++ if (ret < 0) ++ DXG_ERR("failed set existing store: %x", ret); ++ } else { ++ /* ++ * Send the list of the allocation PFNs to the host. The host ++ * will map the pages for GPU access. ++ */ ++ ++ ret = init_message(&msg, device->adapter, device->process, ++ sizeof(*set_pages_command) + ++ max_pfns_in_message * sizeof(u64)); ++ if (ret) ++ goto cleanup; ++ set_pages_command = (void *)msg.msg; ++ command_vgpu_to_host_init2(&set_pages_command->hdr, ++ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMPAGES, ++ device->process->host_handle); ++ set_pages_command->device = device->handle; ++ set_pages_command->allocation = host_alloc->allocation; ++ ++ page_in = dxgalloc->pages; ++ while (alloc_offset_in_pages < npages) { ++ pfn = (u64 *)((char *)msg.msg + ++ sizeof(*set_pages_command)); ++ pages_to_send = min(npages - alloc_offset_in_pages, ++ max_pfns_in_message); ++ set_pages_command->num_pages = pages_to_send; ++ set_pages_command->alloc_offset_in_pages = ++ alloc_offset_in_pages; ++ ++ for (i = 0; i < pages_to_send; i++) ++ *pfn++ = page_to_pfn(*page_in++); ++ ++ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, ++ msg.hdr, ++ msg.size); ++ if (ret < 0) { ++ DXG_ERR("failed set existing pages: %x", ret); ++ break; ++ } ++ alloc_offset_in_pages += pages_to_send; ++ } ++ } + + cleanup: + if (kmem) +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h +index 88967ff6a505..b4a98f7c2522 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.h ++++ b/drivers/hv/dxgkrnl/dxgvmbus.h +@@ -234,6 +234,16 @@ struct dxgkvmb_command_setexistingsysmemstore { + u32 gpadl; + }; + ++/* Returns ntstatus */ ++struct dxgkvmb_command_setexistingsysmempages { ++ struct dxgkvmb_command_vgpu_to_host hdr; ++ struct d3dkmthandle device; ++ struct d3dkmthandle allocation; ++ u32 num_pages; ++ u32 alloc_offset_in_pages; ++ /* u64 pfn_array[num_pages] */ ++}; ++ + struct dxgkvmb_command_createprocess { + struct dxgkvmb_command_vm_to_host hdr; + void *process; +diff --git a/drivers/hv/dxgkrnl/misc.c b/drivers/hv/dxgkrnl/misc.c +index cb1e0635bebc..4a1309d80ee5 100644 +--- a/drivers/hv/dxgkrnl/misc.c ++++ b/drivers/hv/dxgkrnl/misc.c +@@ -35,3 +35,4 @@ u16 *wcsncpy(u16 *dest, const u16 *src, size_t n) + dest[i - 1] = 0; + return dest; + } ++ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1696-drivers-hv-dxgkrnl-Removed-struct-vmbus_gpadl-which-was-defined-in-the-main-linux-branch.patch b/patch/kernel/archive/wsl2-arm64-6.6/1696-drivers-hv-dxgkrnl-Removed-struct-vmbus_gpadl-which-was-defined-in-the-main-linux-branch.patch new file mode 100644 index 000000000000..3dd5107c34f8 --- /dev/null +++ 
b/patch/kernel/archive/wsl2-arm64-6.6/1696-drivers-hv-dxgkrnl-Removed-struct-vmbus_gpadl-which-was-defined-in-the-main-linux-branch.patch @@ -0,0 +1,29 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Mon, 21 Mar 2022 20:32:44 -0700 +Subject: drivers: hv: dxgkrnl: Removed struct vmbus_gpadl, which was defined + in the main linux branch + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 6f763e326a65..236febbc6fca 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -932,7 +932,7 @@ void dxgallocation_destroy(struct dxgallocation *alloc) + vmbus_teardown_gpadl(dxgglobal_get_vmbus(), &alloc->gpadl); + alloc->gpadl.gpadl_handle = 0; + } +-else ++#else + if (alloc->gpadl) { + DXG_TRACE("Teardown gpadl %d", alloc->gpadl); + vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl); +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1697-drivers-hv-dxgkrnl-Remove-dxgk_init_ioctls.patch b/patch/kernel/archive/wsl2-arm64-6.6/1697-drivers-hv-dxgkrnl-Remove-dxgk_init_ioctls.patch new file mode 100644 index 000000000000..ab84c3939431 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1697-drivers-hv-dxgkrnl-Remove-dxgk_init_ioctls.patch @@ -0,0 +1,100 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 22 Mar 2022 10:32:54 -0700 +Subject: drivers: hv: dxgkrnl: Remove dxgk_init_ioctls + +The array of ioctls is initialized statically to remove the unnecessary +function. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgmodule.c | 2 +- + drivers/hv/dxgkrnl/ioctl.c | 15 +++++----- + 2 files changed, 8 insertions(+), 9 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index b1b612b90fc1..f1245a9d8826 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -300,7 +300,7 @@ static void dxgglobal_start_adapters(void) + } + + /* +- * Stopsthe active dxgadapter objects. ++ * Stop the active dxgadapter objects. 
+ */ + static void dxgglobal_stop_adapters(void) + { +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index f6700e974f25..8732a66040a0 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -26,7 +26,6 @@ + struct ioctl_desc { + int (*ioctl_callback)(struct dxgprocess *p, void __user *arg); + u32 ioctl; +- u32 arg_size; + }; + + #ifdef DEBUG +@@ -91,7 +90,7 @@ static const struct file_operations dxg_resource_fops = { + }; + + static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, +- void *__user inargs) ++ void *__user inargs) + { + struct d3dkmt_openadapterfromluid args; + int ret; +@@ -1002,7 +1001,7 @@ dxgkio_create_hwqueue(struct dxgprocess *process, void *__user inargs) + } + + static int dxgkio_destroy_hwqueue(struct dxgprocess *process, +- void *__user inargs) ++ void *__user inargs) + { + struct d3dkmt_destroyhwqueue args; + int ret; +@@ -2280,7 +2279,8 @@ dxgkio_submit_command(struct dxgprocess *process, void *__user inargs) + } + + static int +-dxgkio_submit_command_to_hwqueue(struct dxgprocess *process, void *__user inargs) ++dxgkio_submit_command_to_hwqueue(struct dxgprocess *process, ++ void *__user inargs) + { + int ret; + struct d3dkmt_submitcommandtohwqueue args; +@@ -5087,8 +5087,7 @@ open_resource(struct dxgprocess *process, + } + + static int +-dxgkio_open_resource_nt(struct dxgprocess *process, +- void *__user inargs) ++dxgkio_open_resource_nt(struct dxgprocess *process, void *__user inargs) + { + struct d3dkmt_openresourcefromnthandle args; + struct d3dkmt_openresourcefromnthandle *__user args_user = inargs; +@@ -5166,7 +5165,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2}, + /* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER}, + /* 0x16 */ {dxgkio_change_vidmem_reservation, +- LX_DXCHANGEVIDEOMEMORYRESERVATION}, ++ LX_DXCHANGEVIDEOMEMORYRESERVATION}, + /* 0x17 */ {}, + /* 0x18 */ {dxgkio_create_hwqueue, LX_DXCREATEHWQUEUE}, + /* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE}, +@@ -5205,7 +5204,7 @@ static struct ioctl_desc ioctls[] = { + LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2}, + /* 0x34 */ {dxgkio_submit_command_to_hwqueue, LX_DXSUBMITCOMMANDTOHWQUEUE}, + /* 0x35 */ {dxgkio_submit_signal_to_hwqueue, +- LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE}, ++ LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE}, + /* 0x36 */ {dxgkio_submit_wait_to_hwqueue, + LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE}, + /* 0x37 */ {dxgkio_unlock2, LX_DXUNLOCK2}, +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1698-drivers-hv-dxgkrnl-Creation-of-dxgsyncfile-objects.patch b/patch/kernel/archive/wsl2-arm64-6.6/1698-drivers-hv-dxgkrnl-Creation-of-dxgsyncfile-objects.patch new file mode 100644 index 000000000000..221f67a88890 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1698-drivers-hv-dxgkrnl-Creation-of-dxgsyncfile-objects.patch @@ -0,0 +1,482 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 22 Mar 2022 11:02:49 -0700 +Subject: drivers: hv: dxgkrnl: Creation of dxgsyncfile objects + +Implement the ioctl to create a dxgsyncfile object +(LX_DXCREATESYNCFILE). This object is a wrapper around a monitored +fence sync object and a fence value. + +dxgsyncfile is built on top of the Linux sync_file object and +provides a way for the user mode to synchronize with the execution +of the device DMA packets. + +The ioctl creates a dxgsyncfile object for the given GPU synchronization +object and a fence value. 
A file descriptor of the sync_file object +is returned to the caller. The caller could wait for the object by using +poll(). When the underlying GPU synchronization object is signaled on +the host, the host sends a message to the virtual machine and the +sync_file object is signaled. + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/Kconfig | 2 + + drivers/hv/dxgkrnl/Makefile | 2 +- + drivers/hv/dxgkrnl/dxgkrnl.h | 2 + + drivers/hv/dxgkrnl/dxgmodule.c | 12 + + drivers/hv/dxgkrnl/dxgsyncfile.c | 215 ++++++++++ + drivers/hv/dxgkrnl/dxgsyncfile.h | 30 ++ + drivers/hv/dxgkrnl/dxgvmbus.c | 33 +- + drivers/hv/dxgkrnl/ioctl.c | 5 +- + include/uapi/misc/d3dkmthk.h | 9 + + 9 files changed, 294 insertions(+), 16 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/Kconfig b/drivers/hv/dxgkrnl/Kconfig +index bcd92bbff939..782692610887 100644 +--- a/drivers/hv/dxgkrnl/Kconfig ++++ b/drivers/hv/dxgkrnl/Kconfig +@@ -6,6 +6,8 @@ config DXGKRNL + tristate "Microsoft Paravirtualized GPU support" + depends on HYPERV + depends on 64BIT || COMPILE_TEST ++ select DMA_SHARED_BUFFER ++ select SYNC_FILE + help + This driver supports paravirtualized virtual compute devices, exposed + by Microsoft Hyper-V when Linux is running inside of a virtual machine +diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile +index fc85a47a6ad5..89824cda670a 100644 +--- a/drivers/hv/dxgkrnl/Makefile ++++ b/drivers/hv/dxgkrnl/Makefile +@@ -2,4 +2,4 @@ + # Makefile for the hyper-v compute device driver (dxgkrnl). + + obj-$(CONFIG_DXGKRNL) += dxgkrnl.o +-dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o ++dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o dxgsyncfile.o +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 091dbe999d33..3a69e3b34e1c 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -120,6 +120,7 @@ struct dxgpagingqueue { + */ + enum dxghosteventtype { + dxghostevent_cpu_event = 1, ++ dxghostevent_dma_fence = 2, + }; + + struct dxghostevent { +@@ -858,6 +859,7 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + struct + d3dkmt_waitforsynchronizationobjectfromcpu + *args, ++ bool user_address, + u64 cpu_event); + int dxgvmb_send_lock2(struct dxgprocess *process, + struct dxgadapter *adapter, +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index f1245a9d8826..af51fcd35697 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -16,6 +16,7 @@ + #include + #include + #include "dxgkrnl.h" ++#include "dxgsyncfile.h" + + #define PCI_VENDOR_ID_MICROSOFT 0x1414 + #define PCI_DEVICE_ID_VIRTUAL_RENDER 0x008E +@@ -145,6 +146,15 @@ void dxgglobal_remove_host_event(struct dxghostevent *event) + spin_unlock_irq(&dxgglobal->host_event_list_mutex); + } + ++static void signal_dma_fence(struct dxghostevent *eventhdr) ++{ ++ struct dxgsyncpoint *event = (struct dxgsyncpoint *)eventhdr; ++ ++ event->fence_value++; ++ list_del(&eventhdr->host_event_list_entry); ++ dma_fence_signal(&event->base); ++} ++ + void signal_host_cpu_event(struct dxghostevent *eventhdr) + { + struct dxghosteventcpu *event = (struct dxghosteventcpu *)eventhdr; +@@ -184,6 +194,8 @@ void dxgglobal_signal_host_event(u64 event_id) + DXG_TRACE("found event to signal"); + if (event->event_type == dxghostevent_cpu_event) + signal_host_cpu_event(event); ++ else if (event->event_type == 
dxghostevent_dma_fence) ++ signal_dma_fence(event); + else + DXG_ERR("Unknown host event type"); + break; +diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.c b/drivers/hv/dxgkrnl/dxgsyncfile.c +new file mode 100644 +index 000000000000..88fd78f08fbe +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgsyncfile.c +@@ -0,0 +1,215 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. ++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Ioctl implementation ++ * ++ */ ++ ++#include ++#include ++#include ++#include ++#include ++ ++#include "dxgkrnl.h" ++#include "dxgvmbus.h" ++#include "dxgsyncfile.h" ++ ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt ++ ++#ifdef DEBUG ++static char *errorstr(int ret) ++{ ++ return ret < 0 ? "err" : ""; ++} ++#endif ++ ++static const struct dma_fence_ops dxgdmafence_ops; ++ ++static struct dxgsyncpoint *to_syncpoint(struct dma_fence *fence) ++{ ++ if (fence->ops != &dxgdmafence_ops) ++ return NULL; ++ return container_of(fence, struct dxgsyncpoint, base); ++} ++ ++int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_createsyncfile args; ++ struct dxgsyncpoint *pt = NULL; ++ int ret = 0; ++ int fd = get_unused_fd_flags(O_CLOEXEC); ++ struct sync_file *sync_file = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmt_waitforsynchronizationobjectfromcpu waitargs = {}; ++ ++ if (fd < 0) { ++ DXG_ERR("get_unused_fd_flags failed: %d", fd); ++ ret = fd; ++ goto cleanup; ++ } ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EFAULT; ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ DXG_ERR("dxgprocess_device_by_handle failed"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ DXG_ERR("dxgdevice_acquire_lock_shared failed"); ++ device = NULL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ DXG_ERR("dxgadapter_acquire_lock_shared failed"); ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ pt = kzalloc(sizeof(*pt), GFP_KERNEL); ++ if (!pt) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ spin_lock_init(&pt->lock); ++ pt->fence_value = args.fence_value; ++ pt->context = dma_fence_context_alloc(1); ++ pt->hdr.event_id = dxgglobal_new_host_event_id(); ++ pt->hdr.event_type = dxghostevent_dma_fence; ++ dxgglobal_add_host_event(&pt->hdr); ++ ++ dma_fence_init(&pt->base, &dxgdmafence_ops, &pt->lock, ++ pt->context, args.fence_value); ++ ++ sync_file = sync_file_create(&pt->base); ++ if (sync_file == NULL) { ++ DXG_ERR("sync_file_create failed"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ dma_fence_put(&pt->base); ++ ++ waitargs.device = args.device; ++ waitargs.object_count = 1; ++ waitargs.objects = &args.monitored_fence; ++ waitargs.fence_values = &args.fence_value; ++ ret = dxgvmb_send_wait_sync_object_cpu(process, adapter, ++ &waitargs, false, ++ pt->hdr.event_id); ++ if (ret < 0) { ++ DXG_ERR("dxgvmb_send_wait_sync_object_cpu failed"); ++ goto cleanup; ++ } ++ ++ args.sync_file_handle = (u64)fd; ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EFAULT; ++ goto cleanup; ++ } ++ ++ fd_install(fd, sync_file->file); ++ ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if 
(device) ++ dxgdevice_release_lock_shared(device); ++ if (ret) { ++ if (sync_file) { ++ fput(sync_file->file); ++ /* sync_file_release will destroy dma_fence */ ++ pt = NULL; ++ } ++ if (pt) ++ dma_fence_put(&pt->base); ++ if (fd >= 0) ++ put_unused_fd(fd); ++ } ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++static const char *dxgdmafence_get_driver_name(struct dma_fence *fence) ++{ ++ return "dxgkrnl"; ++} ++ ++static const char *dxgdmafence_get_timeline_name(struct dma_fence *fence) ++{ ++ return "no_timeline"; ++} ++ ++static void dxgdmafence_release(struct dma_fence *fence) ++{ ++ struct dxgsyncpoint *syncpoint; ++ ++ syncpoint = to_syncpoint(fence); ++ if (syncpoint) { ++ if (syncpoint->hdr.event_id) ++ dxgglobal_get_host_event(syncpoint->hdr.event_id); ++ kfree(syncpoint); ++ } ++} ++ ++static bool dxgdmafence_signaled(struct dma_fence *fence) ++{ ++ struct dxgsyncpoint *syncpoint; ++ ++ syncpoint = to_syncpoint(fence); ++ if (syncpoint == 0) ++ return true; ++ return __dma_fence_is_later(syncpoint->fence_value, fence->seqno, ++ fence->ops); ++} ++ ++static bool dxgdmafence_enable_signaling(struct dma_fence *fence) ++{ ++ return true; ++} ++ ++static void dxgdmafence_value_str(struct dma_fence *fence, ++ char *str, int size) ++{ ++ snprintf(str, size, "%lld", fence->seqno); ++} ++ ++static void dxgdmafence_timeline_value_str(struct dma_fence *fence, ++ char *str, int size) ++{ ++ struct dxgsyncpoint *syncpoint; ++ ++ syncpoint = to_syncpoint(fence); ++ snprintf(str, size, "%lld", syncpoint->fence_value); ++} ++ ++static const struct dma_fence_ops dxgdmafence_ops = { ++ .get_driver_name = dxgdmafence_get_driver_name, ++ .get_timeline_name = dxgdmafence_get_timeline_name, ++ .enable_signaling = dxgdmafence_enable_signaling, ++ .signaled = dxgdmafence_signaled, ++ .release = dxgdmafence_release, ++ .fence_value_str = dxgdmafence_value_str, ++ .timeline_value_str = dxgdmafence_timeline_value_str, ++}; +diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.h b/drivers/hv/dxgkrnl/dxgsyncfile.h +new file mode 100644 +index 000000000000..207ef9b30f67 +--- /dev/null ++++ b/drivers/hv/dxgkrnl/dxgsyncfile.h +@@ -0,0 +1,30 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++ ++/* ++ * Copyright (c) 2022, Microsoft Corporation. 
++ * ++ * Author: ++ * Iouri Tarassov ++ * ++ * Dxgkrnl Graphics Driver ++ * Headers for sync file objects ++ * ++ */ ++ ++#ifndef _DXGSYNCFILE_H ++#define _DXGSYNCFILE_H ++ ++#include ++ ++int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs); ++ ++struct dxgsyncpoint { ++ struct dxghostevent hdr; ++ struct dma_fence base; ++ u64 fence_value; ++ u64 context; ++ spinlock_t lock; ++ u64 u64; ++}; ++ ++#endif /* _DXGSYNCFILE_H */ +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 4d7807909284..913ea3cabb31 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -2820,6 +2820,7 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + struct + d3dkmt_waitforsynchronizationobjectfromcpu + *args, ++ bool user_address, + u64 cpu_event) + { + int ret = -EINVAL; +@@ -2844,19 +2845,25 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + command->guest_event_pointer = (u64) cpu_event; + current_pos = (u8 *) &command[1]; + +- ret = copy_from_user(current_pos, args->objects, object_size); +- if (ret) { +- DXG_ERR("failed to copy objects"); +- ret = -EINVAL; +- goto cleanup; +- } +- current_pos += object_size; +- ret = copy_from_user(current_pos, args->fence_values, +- fence_size); +- if (ret) { +- DXG_ERR("failed to copy fences"); +- ret = -EINVAL; +- goto cleanup; ++ if (user_address) { ++ ret = copy_from_user(current_pos, args->objects, object_size); ++ if (ret) { ++ DXG_ERR("failed to copy objects"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ current_pos += object_size; ++ ret = copy_from_user(current_pos, args->fence_values, ++ fence_size); ++ if (ret) { ++ DXG_ERR("failed to copy fences"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ } else { ++ memcpy(current_pos, args->objects, object_size); ++ current_pos += object_size; ++ memcpy(current_pos, args->fence_values, fence_size); + } + + ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size); +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 8732a66040a0..6c26aafb0619 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -19,6 +19,7 @@ + + #include "dxgkrnl.h" + #include "dxgvmbus.h" ++#include "dxgsyncfile.h" + + #undef pr_fmt + #define pr_fmt(fmt) "dxgk: " fmt +@@ -3488,7 +3489,7 @@ dxgkio_wait_sync_object_cpu(struct dxgprocess *process, void *__user inargs) + } + + ret = dxgvmb_send_wait_sync_object_cpu(process, adapter, +- &args, event_id); ++ &args, true, event_id); + if (ret < 0) + goto cleanup; + +@@ -5224,7 +5225,7 @@ static struct ioctl_desc ioctls[] = { + /* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE}, + /* 0x43 */ {dxgkio_query_statistics, LX_DXQUERYSTATISTICS}, + /* 0x44 */ {dxgkio_share_object_with_host, LX_DXSHAREOBJECTWITHHOST}, +-/* 0x45 */ {}, ++/* 0x45 */ {dxgkio_create_sync_file, LX_DXCREATESYNCFILE}, + }; + + /* +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index 1f60f5120e1d..c7f168425dc7 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -1554,6 +1554,13 @@ struct d3dkmt_shareobjectwithhost { + __u64 object_vail_nt_handle; + }; + ++struct d3dkmt_createsyncfile { ++ struct d3dkmthandle device; ++ struct d3dkmthandle monitored_fence; ++ __u64 fence_value; ++ __u64 sync_file_handle; /* out */ ++}; ++ + /* + * Dxgkrnl Graphics Port Driver ioctl definitions + * +@@ -1677,5 +1684,7 @@ struct d3dkmt_shareobjectwithhost { + _IOWR(0x47, 0x43, struct d3dkmt_querystatistics) + 
#define LX_DXSHAREOBJECTWITHHOST \ + _IOWR(0x47, 0x44, struct d3dkmt_shareobjectwithhost) ++#define LX_DXCREATESYNCFILE \ ++ _IOWR(0x47, 0x45, struct d3dkmt_createsyncfile) + + #endif /* _D3DKMTHK_H */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1699-drivers-hv-dxgkrnl-Use-tracing-instead-of-dev_dbg.patch b/patch/kernel/archive/wsl2-arm64-6.6/1699-drivers-hv-dxgkrnl-Use-tracing-instead-of-dev_dbg.patch new file mode 100644 index 000000000000..5795bc96d37c --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1699-drivers-hv-dxgkrnl-Use-tracing-instead-of-dev_dbg.patch @@ -0,0 +1,205 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Thu, 24 Mar 2022 15:03:41 -0700 +Subject: drivers: hv: dxgkrnl: Use tracing instead of dev_dbg + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 4 +-- + drivers/hv/dxgkrnl/dxgmodule.c | 5 ++- + drivers/hv/dxgkrnl/dxgprocess.c | 6 ++-- + drivers/hv/dxgkrnl/dxgvmbus.c | 4 +-- + drivers/hv/dxgkrnl/hmgr.c | 16 +++++----- + drivers/hv/dxgkrnl/ioctl.c | 8 ++--- + drivers/hv/dxgkrnl/misc.c | 4 +-- + 7 files changed, 25 insertions(+), 22 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 236febbc6fca..3d8bec295b87 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -18,8 +18,8 @@ + + #include "dxgkrnl.h" + +-#undef pr_fmt +-#define pr_fmt(fmt) "dxgk: " fmt ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt + + int dxgadapter_set_vmbus(struct dxgadapter *adapter, struct hv_device *hdev) + { +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index af51fcd35697..08feae97e845 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -24,6 +24,9 @@ + #undef pr_fmt + #define pr_fmt(fmt) "dxgk: " fmt + ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt ++ + /* + * Interface from dxgglobal + */ +@@ -442,7 +445,7 @@ const struct file_operations dxgk_fops = { + #define DXGK_VMBUS_HOSTCAPS_OFFSET (DXGK_VMBUS_VGPU_LUID_OFFSET + \ + sizeof(struct winluid)) + +-/* The guest writes its capavilities to this adderss */ ++/* The guest writes its capabilities to this address */ + #define DXGK_VMBUS_GUESTCAPS_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \ + sizeof(u32)) + +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index 5de3f8ccb448..afef196c0588 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -13,8 +13,8 @@ + + #include "dxgkrnl.h" + +-#undef pr_fmt +-#define pr_fmt(fmt) "dxgk: " fmt ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt + + /* + * Creates a new dxgprocess object +@@ -248,7 +248,7 @@ struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process, + HMGRENTRY_TYPE_DXGADAPTER, + handle); + if (adapter == NULL) +- DXG_ERR("adapter_by_handle failed %x", handle.v); ++ DXG_TRACE("adapter_by_handle failed %x", handle.v); + else if (kref_get_unless_zero(&adapter->adapter_kref) == 0) { + DXG_ERR("failed to acquire adapter reference"); + adapter = NULL; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 913ea3cabb31..d53d4254be63 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -22,8 +22,8 @@ + #include "dxgkrnl.h" + #include "dxgvmbus.h" + +-#undef pr_fmt +-#define pr_fmt(fmt) "dxgk: " fmt ++#undef dev_fmt ++#define dev_fmt(fmt) 
"dxgk: " fmt + + #define RING_BUFSIZE (256 * 1024) + +diff --git a/drivers/hv/dxgkrnl/hmgr.c b/drivers/hv/dxgkrnl/hmgr.c +index 526b50f46d96..24101d0091ab 100644 +--- a/drivers/hv/dxgkrnl/hmgr.c ++++ b/drivers/hv/dxgkrnl/hmgr.c +@@ -19,8 +19,8 @@ + #include "dxgkrnl.h" + #include "hmgr.h" + +-#undef pr_fmt +-#define pr_fmt(fmt) "dxgk: " fmt ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt + + const struct d3dkmthandle zerohandle; + +@@ -90,29 +90,29 @@ static bool is_handle_valid(struct hmgrtable *table, struct d3dkmthandle h, + struct hmgrentry *entry; + + if (index >= table->table_size) { +- DXG_ERR("Invalid index %x %d", h.v, index); ++ DXG_TRACE("Invalid index %x %d", h.v, index); + return false; + } + + entry = &table->entry_table[index]; + if (unique != entry->unique) { +- DXG_ERR("Invalid unique %x %d %d %d %p", ++ DXG_TRACE("Invalid unique %x %d %d %d %p", + h.v, unique, entry->unique, index, entry->object); + return false; + } + + if (entry->destroyed && !ignore_destroyed) { +- DXG_ERR("Invalid destroyed value"); ++ DXG_TRACE("Invalid destroyed value"); + return false; + } + + if (entry->type == HMGRENTRY_TYPE_FREE) { +- DXG_ERR("Entry is freed %x %d", h.v, index); ++ DXG_TRACE("Entry is freed %x %d", h.v, index); + return false; + } + + if (t != HMGRENTRY_TYPE_FREE && t != entry->type) { +- DXG_ERR("type mismatch %x %d %d", h.v, t, entry->type); ++ DXG_TRACE("type mismatch %x %d %d", h.v, t, entry->type); + return false; + } + +@@ -500,7 +500,7 @@ void *hmgrtable_get_object_by_type(struct hmgrtable *table, + struct d3dkmthandle h) + { + if (!is_handle_valid(table, h, false, type)) { +- DXG_ERR("Invalid handle %x", h.v); ++ DXG_TRACE("Invalid handle %x", h.v); + return NULL; + } + return table->entry_table[get_index(h)].object; +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 6c26aafb0619..4db23cd55b24 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -21,8 +21,8 @@ + #include "dxgvmbus.h" + #include "dxgsyncfile.h" + +-#undef pr_fmt +-#define pr_fmt(fmt) "dxgk: " fmt ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt + + struct ioctl_desc { + int (*ioctl_callback)(struct dxgprocess *p, void __user *arg); +@@ -556,7 +556,7 @@ dxgkio_enum_adapters3(struct dxgprocess *process, void *__user inargs) + + cleanup: + +- DXG_TRACE("ioctl: %s %d", errorstr(ret), ret); ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); + return ret; + } + +@@ -5242,7 +5242,7 @@ static int dxgk_ioctl(struct file *f, unsigned int p1, unsigned long p2) + int status; + struct dxgprocess *process; + +- if (code < 1 || code >= ARRAY_SIZE(ioctls)) { ++ if (code < 1 || code >= ARRAY_SIZE(ioctls)) { + DXG_ERR("bad ioctl %x %x %x %x", + code, _IOC_TYPE(p1), _IOC_SIZE(p1), _IOC_DIR(p1)); + return -ENOTTY; +diff --git a/drivers/hv/dxgkrnl/misc.c b/drivers/hv/dxgkrnl/misc.c +index 4a1309d80ee5..4bf6fe80d22a 100644 +--- a/drivers/hv/dxgkrnl/misc.c ++++ b/drivers/hv/dxgkrnl/misc.c +@@ -18,8 +18,8 @@ + #include "dxgkrnl.h" + #include "misc.h" + +-#undef pr_fmt +-#define pr_fmt(fmt) "dxgk: " fmt ++#undef dev_fmt ++#define dev_fmt(fmt) "dxgk: " fmt + + u16 *wcsncpy(u16 *dest, const u16 *src, size_t n) + { +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1700-drivers-hv-dxgkrnl-Implement-D3DKMTWaitSyncFile.patch b/patch/kernel/archive/wsl2-arm64-6.6/1700-drivers-hv-dxgkrnl-Implement-D3DKMTWaitSyncFile.patch new file mode 100644 index 000000000000..dbf33a8e06c1 --- /dev/null +++ 
b/patch/kernel/archive/wsl2-arm64-6.6/1700-drivers-hv-dxgkrnl-Implement-D3DKMTWaitSyncFile.patch @@ -0,0 +1,658 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Mon, 2 May 2022 11:46:48 -0700 +Subject: drivers: hv: dxgkrnl: Implement D3DKMTWaitSyncFile + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 11 + + drivers/hv/dxgkrnl/dxgmodule.c | 7 +- + drivers/hv/dxgkrnl/dxgprocess.c | 12 +- + drivers/hv/dxgkrnl/dxgsyncfile.c | 291 +++++++++- + drivers/hv/dxgkrnl/dxgsyncfile.h | 3 + + drivers/hv/dxgkrnl/dxgvmbus.c | 49 ++ + drivers/hv/dxgkrnl/ioctl.c | 16 +- + include/uapi/misc/d3dkmthk.h | 23 + + 8 files changed, 396 insertions(+), 16 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 3a69e3b34e1c..d92e1348ccfb 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -254,6 +254,10 @@ void dxgsharedsyncobj_add_syncobj(struct dxgsharedsyncobject *sharedsyncobj, + struct dxgsyncobject *syncobj); + void dxgsharedsyncobj_remove_syncobj(struct dxgsharedsyncobject *sharedsyncobj, + struct dxgsyncobject *syncobj); ++int dxgsharedsyncobj_get_host_nt_handle(struct dxgsharedsyncobject *syncobj, ++ struct dxgprocess *process, ++ struct d3dkmthandle objecthandle); ++void dxgsharedsyncobj_put(struct dxgsharedsyncobject *syncobj); + + struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process, + struct dxgdevice *device, +@@ -384,6 +388,8 @@ struct dxgprocess { + pid_t tgid; + /* how many time the process was opened */ + struct kref process_kref; ++ /* protects the object memory */ ++ struct kref process_mem_kref; + /* + * This handle table is used for all objects except dxgadapter + * The handle table lock order is higher than the local_handle_table +@@ -405,6 +411,7 @@ struct dxgprocess { + struct dxgprocess *dxgprocess_create(void); + void dxgprocess_destroy(struct dxgprocess *process); + void dxgprocess_release(struct kref *refcount); ++void dxgprocess_mem_release(struct kref *refcount); + int dxgprocess_open_adapter(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmthandle *handle); +@@ -932,6 +939,10 @@ int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, + struct d3dkmt_opensyncobjectfromnthandle2 + *args, + struct dxgsyncobject *syncobj); ++int dxgvmb_send_open_sync_object(struct dxgprocess *process, ++ struct d3dkmthandle device, ++ struct d3dkmthandle host_shared_syncobj, ++ struct d3dkmthandle *syncobj); + int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + struct dxgadapter *adapter, + struct d3dkmt_queryallocationresidency +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 08feae97e845..5570f35954d4 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -149,10 +149,11 @@ void dxgglobal_remove_host_event(struct dxghostevent *event) + spin_unlock_irq(&dxgglobal->host_event_list_mutex); + } + +-static void signal_dma_fence(struct dxghostevent *eventhdr) ++static void dxg_signal_dma_fence(struct dxghostevent *eventhdr) + { + struct dxgsyncpoint *event = (struct dxgsyncpoint *)eventhdr; + ++ DXG_TRACE("syncpoint: %px, fence: %lld", event, event->fence_value); + event->fence_value++; + list_del(&eventhdr->host_event_list_entry); + dma_fence_signal(&event->base); +@@ -198,7 +199,7 @@ void dxgglobal_signal_host_event(u64 event_id) + if (event->event_type == 
dxghostevent_cpu_event) + signal_host_cpu_event(event); + else if (event->event_type == dxghostevent_dma_fence) +- signal_dma_fence(event); ++ dxg_signal_dma_fence(event); + else + DXG_ERR("Unknown host event type"); + break; +@@ -355,6 +356,7 @@ static struct dxgprocess *dxgglobal_get_current_process(void) + if (entry->tgid == current->tgid) { + if (kref_get_unless_zero(&entry->process_kref)) { + process = entry; ++ kref_get(&entry->process_mem_kref); + DXG_TRACE("found dxgprocess"); + } else { + DXG_TRACE("process is destroyed"); +@@ -405,6 +407,7 @@ static int dxgk_release(struct inode *n, struct file *f) + return -EINVAL; + + kref_put(&process->process_kref, dxgprocess_release); ++ kref_put(&process->process_mem_kref, dxgprocess_mem_release); + + f->private_data = NULL; + return 0; +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index afef196c0588..e77e3a4983f8 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -39,6 +39,7 @@ struct dxgprocess *dxgprocess_create(void) + } else { + INIT_LIST_HEAD(&process->plistentry); + kref_init(&process->process_kref); ++ kref_init(&process->process_mem_kref); + + mutex_lock(&dxgglobal->plistmutex); + list_add_tail(&process->plistentry, +@@ -117,8 +118,17 @@ void dxgprocess_release(struct kref *refcount) + + dxgprocess_destroy(process); + +- if (process->host_handle.v) ++ if (process->host_handle.v) { + dxgvmb_send_destroy_process(process->host_handle); ++ process->host_handle.v = 0; ++ } ++} ++ ++void dxgprocess_mem_release(struct kref *refcount) ++{ ++ struct dxgprocess *process; ++ ++ process = container_of(refcount, struct dxgprocess, process_mem_kref); + kfree(process); + } + +diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.c b/drivers/hv/dxgkrnl/dxgsyncfile.c +index 88fd78f08fbe..9d5832c90ad7 100644 +--- a/drivers/hv/dxgkrnl/dxgsyncfile.c ++++ b/drivers/hv/dxgkrnl/dxgsyncfile.c +@@ -9,6 +9,20 @@ + * Dxgkrnl Graphics Driver + * Ioctl implementation + * ++ * dxgsyncpoint: ++ * - pointer to dxgsharedsyncobject ++ * - host_shared_handle_nt_reference incremented ++ * - list of (process, local syncobj d3dkmthandle) pairs ++ * wait for sync file ++ * - get dxgsyncpoint ++ * - if process doesn't have a local syncobj ++ * - create local dxgsyncobject ++ * - send open syncobj to the host ++ * - Send wait for syncobj to the context ++ * dxgsyncpoint destruction ++ * - walk the list of (process, local syncobj) ++ * - destroy syncobj ++ * - remove reference to dxgsharedsyncobject + */ + + #include +@@ -45,12 +59,15 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) + struct d3dkmt_createsyncfile args; + struct dxgsyncpoint *pt = NULL; + int ret = 0; +- int fd = get_unused_fd_flags(O_CLOEXEC); ++ int fd; + struct sync_file *sync_file = NULL; + struct dxgdevice *device = NULL; + struct dxgadapter *adapter = NULL; ++ struct dxgsyncobject *syncobj = NULL; + struct d3dkmt_waitforsynchronizationobjectfromcpu waitargs = {}; ++ bool device_lock_acquired = false; + ++ fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) { + DXG_ERR("get_unused_fd_flags failed: %d", fd); + ret = fd; +@@ -74,9 +91,9 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) + ret = dxgdevice_acquire_lock_shared(device); + if (ret < 0) { + DXG_ERR("dxgdevice_acquire_lock_shared failed"); +- device = NULL; + goto cleanup; + } ++ device_lock_acquired = true; + + adapter = device->adapter; + ret = dxgadapter_acquire_lock_shared(adapter); +@@ -109,6 +126,30 @@ int 
dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) + } + dma_fence_put(&pt->base); + ++ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED); ++ syncobj = hmgrtable_get_object(&process->handle_table, ++ args.monitored_fence); ++ if (syncobj == NULL) { ++ DXG_ERR("invalid syncobj handle %x", args.monitored_fence.v); ++ ret = -EINVAL; ++ } else { ++ if (syncobj->shared) { ++ kref_get(&syncobj->syncobj_kref); ++ pt->shared_syncobj = syncobj->shared_owner; ++ } ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED); ++ ++ if (pt->shared_syncobj) { ++ ret = dxgsharedsyncobj_get_host_nt_handle(pt->shared_syncobj, ++ process, ++ args.monitored_fence); ++ if (ret) ++ pt->shared_syncobj = NULL; ++ } ++ if (ret) ++ goto cleanup; ++ + waitargs.device = args.device; + waitargs.object_count = 1; + waitargs.objects = &args.monitored_fence; +@@ -132,10 +173,15 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) + fd_install(fd, sync_file->file); + + cleanup: ++ if (syncobj && syncobj->shared) ++ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release); + if (adapter) + dxgadapter_release_lock_shared(adapter); +- if (device) +- dxgdevice_release_lock_shared(device); ++ if (device) { ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } + if (ret) { + if (sync_file) { + fput(sync_file->file); +@@ -151,6 +197,228 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) + return ret; + } + ++int dxgkio_open_syncobj_from_syncfile(struct dxgprocess *process, ++ void *__user inargs) ++{ ++ struct d3dkmt_opensyncobjectfromsyncfile args; ++ int ret = 0; ++ struct dxgsyncpoint *pt = NULL; ++ struct dma_fence *dmafence = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct dxgsyncobject *syncobj = NULL; ++ struct d3dddi_synchronizationobject_flags flags = { }; ++ struct d3dkmt_opensyncobjectfromnthandle2 openargs = { }; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EFAULT; ++ goto cleanup; ++ } ++ ++ dmafence = sync_file_get_fence(args.sync_file_handle); ++ if (dmafence == NULL) { ++ DXG_ERR("failed to get dmafence from handle: %llx", ++ args.sync_file_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ pt = to_syncpoint(dmafence); ++ if (pt->shared_syncobj == NULL) { ++ DXG_ERR("Sync object is not shared"); ++ goto cleanup; ++ } ++ ++ device = dxgprocess_device_by_handle(process, args.device); ++ if (device == NULL) { ++ DXG_ERR("dxgprocess_device_by_handle failed"); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ DXG_ERR("dxgdevice_acquire_lock_shared failed"); ++ kref_put(&device->device_kref, dxgdevice_release); ++ device = NULL; ++ goto cleanup; ++ } ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ DXG_ERR("dxgadapter_acquire_lock_shared failed"); ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ flags.shared = 1; ++ flags.nt_security_sharing = 1; ++ syncobj = dxgsyncobject_create(process, device, adapter, ++ _D3DDDI_MONITORED_FENCE, flags); ++ if (syncobj == NULL) { ++ DXG_ERR("failed to create sync object"); ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ dxgsharedsyncobj_add_syncobj(pt->shared_syncobj, syncobj); ++ ++ /* Open the shared syncobj to get a local handle */ ++ ++ 
openargs.device = device->handle; ++ openargs.flags.shared = 1; ++ openargs.flags.nt_security_sharing = 1; ++ openargs.flags.no_signal = 1; ++ ++ ret = dxgvmb_send_open_sync_object_nt(process, ++ &dxgglobal->channel, &openargs, syncobj); ++ if (ret) { ++ DXG_ERR("Failed to open shared syncobj on host"); ++ goto cleanup; ++ } ++ ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, ++ syncobj, ++ HMGRENTRY_TYPE_DXGSYNCOBJECT, ++ openargs.sync_object); ++ if (ret == 0) { ++ syncobj->handle = openargs.sync_object; ++ kref_get(&syncobj->syncobj_kref); ++ } ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); ++ ++ args.syncobj = openargs.sync_object; ++ args.fence_value = pt->fence_value; ++ args.fence_value_cpu_va = openargs.monitored_fence.fence_value_cpu_va; ++ args.fence_value_gpu_va = openargs.monitored_fence.fence_value_gpu_va; ++ ++ ret = copy_to_user(inargs, &args, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy output args"); ++ ret = -EFAULT; ++ } ++ ++cleanup: ++ if (dmafence) ++ dma_fence_put(dmafence); ++ if (ret) { ++ if (syncobj) { ++ dxgsyncobject_destroy(process, syncobj); ++ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release); ++ } ++ } ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) { ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ ++int dxgkio_wait_sync_file(struct dxgprocess *process, void *__user inargs) ++{ ++ struct d3dkmt_waitsyncfile args; ++ struct dma_fence *dmafence = NULL; ++ int ret = 0; ++ struct dxgsyncpoint *pt = NULL; ++ struct dxgdevice *device = NULL; ++ struct dxgadapter *adapter = NULL; ++ struct d3dkmthandle syncobj_handle = {}; ++ bool device_lock_acquired = false; ++ ++ ret = copy_from_user(&args, inargs, sizeof(args)); ++ if (ret) { ++ DXG_ERR("failed to copy input args"); ++ ret = -EFAULT; ++ goto cleanup; ++ } ++ ++ dmafence = sync_file_get_fence(args.sync_file_handle); ++ if (dmafence == NULL) { ++ DXG_ERR("failed to get dmafence from handle: %llx", ++ args.sync_file_handle); ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ pt = to_syncpoint(dmafence); ++ ++ device = dxgprocess_device_by_object_handle(process, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ args.context); ++ if (device == NULL) { ++ ret = -EINVAL; ++ goto cleanup; ++ } ++ ++ ret = dxgdevice_acquire_lock_shared(device); ++ if (ret < 0) { ++ DXG_ERR("dxgdevice_acquire_lock_shared failed"); ++ device = NULL; ++ goto cleanup; ++ } ++ device_lock_acquired = true; ++ ++ adapter = device->adapter; ++ ret = dxgadapter_acquire_lock_shared(adapter); ++ if (ret < 0) { ++ DXG_ERR("dxgadapter_acquire_lock_shared failed"); ++ adapter = NULL; ++ goto cleanup; ++ } ++ ++ /* Open the shared syncobj to get a local handle */ ++ if (pt->shared_syncobj == NULL) { ++ DXG_ERR("Sync object is not shared"); ++ goto cleanup; ++ } ++ ret = dxgvmb_send_open_sync_object(process, ++ device->handle, ++ pt->shared_syncobj->host_shared_handle, ++ &syncobj_handle); ++ if (ret) { ++ DXG_ERR("Failed to open shared syncobj on host"); ++ goto cleanup; ++ } ++ ++ /* Ask the host to insert the syncobj to the context queue */ ++ ret = dxgvmb_send_wait_sync_object_gpu(process, adapter, ++ args.context, 1, ++ &syncobj_handle, ++ &pt->fence_value, ++ false); ++ if (ret < 0) { ++ DXG_ERR("dxgvmb_send_wait_sync_object_cpu failed"); ++ goto cleanup; ++ } ++ ++ /* ++ * Destroy the local syncobject immediately. 
This will not unblock ++ * GPU waiters, but will unblock CPU waiter, which includes the sync ++ * file itself. ++ */ ++ ret = dxgvmb_send_destroy_sync_object(process, syncobj_handle); ++ ++cleanup: ++ if (adapter) ++ dxgadapter_release_lock_shared(adapter); ++ if (device) { ++ if (device_lock_acquired) ++ dxgdevice_release_lock_shared(device); ++ kref_put(&device->device_kref, dxgdevice_release); ++ } ++ if (dmafence) ++ dma_fence_put(dmafence); ++ ++ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ return ret; ++} ++ + static const char *dxgdmafence_get_driver_name(struct dma_fence *fence) + { + return "dxgkrnl"; +@@ -166,11 +434,16 @@ static void dxgdmafence_release(struct dma_fence *fence) + struct dxgsyncpoint *syncpoint; + + syncpoint = to_syncpoint(fence); +- if (syncpoint) { +- if (syncpoint->hdr.event_id) +- dxgglobal_get_host_event(syncpoint->hdr.event_id); +- kfree(syncpoint); +- } ++ if (syncpoint == NULL) ++ return; ++ ++ if (syncpoint->hdr.event_id) ++ dxgglobal_get_host_event(syncpoint->hdr.event_id); ++ ++ if (syncpoint->shared_syncobj) ++ dxgsharedsyncobj_put(syncpoint->shared_syncobj); ++ ++ kfree(syncpoint); + } + + static bool dxgdmafence_signaled(struct dma_fence *fence) +diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.h b/drivers/hv/dxgkrnl/dxgsyncfile.h +index 207ef9b30f67..292b7f718987 100644 +--- a/drivers/hv/dxgkrnl/dxgsyncfile.h ++++ b/drivers/hv/dxgkrnl/dxgsyncfile.h +@@ -17,10 +17,13 @@ + #include + + int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs); ++int dxgkio_wait_sync_file(struct dxgprocess *process, void *__user inargs); ++int dxgkio_open_syncobj_from_syncfile(struct dxgprocess *p, void *__user args); + + struct dxgsyncpoint { + struct dxghostevent hdr; + struct dma_fence base; ++ struct dxgsharedsyncobject *shared_syncobj; + u64 fence_value; + u64 context; + spinlock_t lock; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index d53d4254be63..36f4d4e84d3e 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -796,6 +796,55 @@ int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process, + return ret; + } + ++int dxgvmb_send_open_sync_object(struct dxgprocess *process, ++ struct d3dkmthandle device, ++ struct d3dkmthandle host_shared_syncobj, ++ struct d3dkmthandle *syncobj) ++{ ++ struct dxgkvmb_command_opensyncobject *command; ++ struct dxgkvmb_command_opensyncobject_return result = { }; ++ int ret; ++ struct dxgvmbusmsg msg; ++ struct dxgglobal *dxgglobal = dxggbl(); ++ ++ ret = init_message(&msg, NULL, process, sizeof(*command)); ++ if (ret) ++ return ret; ++ command = (void *)msg.msg; ++ ++ command_vm_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_OPENSYNCOBJECT, ++ process->host_handle); ++ command->device = device; ++ command->global_sync_object = host_shared_syncobj; ++ command->flags.shared = 1; ++ command->flags.nt_security_sharing = 1; ++ command->flags.no_signal = 1; ++ ++ ret = dxgglobal_acquire_channel_lock(); ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = dxgvmb_send_sync_msg(&dxgglobal->channel, msg.hdr, msg.size, ++ &result, sizeof(result)); ++ ++ dxgglobal_release_channel_lock(); ++ ++ if (ret < 0) ++ goto cleanup; ++ ++ ret = ntstatus2int(result.status); ++ if (ret < 0) ++ goto cleanup; ++ ++ *syncobj = result.sync_object; ++ ++cleanup: ++ free_message(&msg, process); ++ if (ret) ++ DXG_TRACE("err: %d", ret); ++ return ret; ++} ++ + int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process, + struct d3dkmthandle object, + struct d3dkmthandle 
*shared_handle) +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 4db23cd55b24..622904d5c3a9 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -36,10 +36,8 @@ static char *errorstr(int ret) + } + #endif + +-static int dxgsyncobj_release(struct inode *inode, struct file *file) ++void dxgsharedsyncobj_put(struct dxgsharedsyncobject *syncobj) + { +- struct dxgsharedsyncobject *syncobj = file->private_data; +- + DXG_TRACE("Release syncobj: %p", syncobj); + mutex_lock(&syncobj->fd_mutex); + kref_get(&syncobj->ssyncobj_kref); +@@ -56,6 +54,13 @@ static int dxgsyncobj_release(struct inode *inode, struct file *file) + } + mutex_unlock(&syncobj->fd_mutex); + kref_put(&syncobj->ssyncobj_kref, dxgsharedsyncobj_release); ++} ++ ++static int dxgsyncobj_release(struct inode *inode, struct file *file) ++{ ++ struct dxgsharedsyncobject *syncobj = file->private_data; ++ ++ dxgsharedsyncobj_put(syncobj); + return 0; + } + +@@ -4478,7 +4483,7 @@ dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) + return ret; + } + +-static int ++int + dxgsharedsyncobj_get_host_nt_handle(struct dxgsharedsyncobject *syncobj, + struct dxgprocess *process, + struct d3dkmthandle objecthandle) +@@ -5226,6 +5231,9 @@ static struct ioctl_desc ioctls[] = { + /* 0x43 */ {dxgkio_query_statistics, LX_DXQUERYSTATISTICS}, + /* 0x44 */ {dxgkio_share_object_with_host, LX_DXSHAREOBJECTWITHHOST}, + /* 0x45 */ {dxgkio_create_sync_file, LX_DXCREATESYNCFILE}, ++/* 0x46 */ {dxgkio_wait_sync_file, LX_DXWAITSYNCFILE}, ++/* 0x46 */ {dxgkio_open_syncobj_from_syncfile, ++ LX_DXOPENSYNCOBJECTFROMSYNCFILE}, + }; + + /* +diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h +index c7f168425dc7..1eaa3f038322 100644 +--- a/include/uapi/misc/d3dkmthk.h ++++ b/include/uapi/misc/d3dkmthk.h +@@ -1561,6 +1561,25 @@ struct d3dkmt_createsyncfile { + __u64 sync_file_handle; /* out */ + }; + ++struct d3dkmt_waitsyncfile { ++ __u64 sync_file_handle; ++ struct d3dkmthandle context; ++ __u32 reserved; ++}; ++ ++struct d3dkmt_opensyncobjectfromsyncfile { ++ __u64 sync_file_handle; ++ struct d3dkmthandle device; ++ struct d3dkmthandle syncobj; /* out */ ++ __u64 fence_value; /* out */ ++#ifdef __KERNEL__ ++ void *fence_value_cpu_va; /* out */ ++#else ++ __u64 fence_value_cpu_va; /* out */ ++#endif ++ __u64 fence_value_gpu_va; /* out */ ++}; ++ + /* + * Dxgkrnl Graphics Port Driver ioctl definitions + * +@@ -1686,5 +1705,9 @@ struct d3dkmt_createsyncfile { + _IOWR(0x47, 0x44, struct d3dkmt_shareobjectwithhost) + #define LX_DXCREATESYNCFILE \ + _IOWR(0x47, 0x45, struct d3dkmt_createsyncfile) ++#define LX_DXWAITSYNCFILE \ ++ _IOWR(0x47, 0x46, struct d3dkmt_waitsyncfile) ++#define LX_DXOPENSYNCOBJECTFROMSYNCFILE \ ++ _IOWR(0x47, 0x47, struct d3dkmt_opensyncobjectfromsyncfile) + + #endif /* _D3DKMTHK_H */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1701-drivers-hv-dxgkrnl-Improve-tracing-and-return-values-from-copy-from-user.patch b/patch/kernel/archive/wsl2-arm64-6.6/1701-drivers-hv-dxgkrnl-Improve-tracing-and-return-values-from-copy-from-user.patch new file mode 100644 index 000000000000..5cda6870cc54 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1701-drivers-hv-dxgkrnl-Improve-tracing-and-return-values-from-copy-from-user.patch @@ -0,0 +1,2000 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Fri, 6 May 2022 19:19:09 -0700 +Subject: drivers: hv: dxgkrnl: Improve tracing and return values from 
copy + from user + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgkrnl.h | 17 +- + drivers/hv/dxgkrnl/dxgmodule.c | 1 + + drivers/hv/dxgkrnl/dxgsyncfile.c | 13 +- + drivers/hv/dxgkrnl/dxgvmbus.c | 98 +-- + drivers/hv/dxgkrnl/ioctl.c | 327 +++++----- + 5 files changed, 225 insertions(+), 231 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index d92e1348ccfb..f63aa6f7a9dc 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -999,18 +999,25 @@ void dxgk_validate_ioctls(void); + trace_printk(dev_fmt(fmt) "\n", ##__VA_ARGS__); \ + } while (0) + +-#define DXG_ERR(fmt, ...) do { \ +- dev_err(DXGDEV, fmt, ##__VA_ARGS__); \ +- trace_printk("*** dxgkerror *** " dev_fmt(fmt) "\n", ##__VA_ARGS__); \ ++#define DXG_ERR(fmt, ...) do { \ ++ dev_err(DXGDEV, "%s: " fmt, __func__, ##__VA_ARGS__); \ ++ trace_printk("*** dxgkerror *** " dev_fmt(fmt) "\n", ##__VA_ARGS__); \ + } while (0) + + #else + + #define DXG_TRACE(...) +-#define DXG_ERR(fmt, ...) do { \ +- dev_err(DXGDEV, fmt, ##__VA_ARGS__); \ ++#define DXG_ERR(fmt, ...) do { \ ++ dev_err(DXGDEV, "%s: " fmt, __func__, ##__VA_ARGS__); \ + } while (0) + + #endif /* DEBUG */ + ++#define DXG_TRACE_IOCTL_END(ret) do { \ ++ if (ret < 0) \ ++ DXG_ERR("Ioctl failed: %d", ret); \ ++ else \ ++ DXG_TRACE("Ioctl returned: %d", ret); \ ++} while (0) ++ + #endif +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 5570f35954d4..aa27931a3447 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -961,3 +961,4 @@ module_exit(dxg_drv_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual compute device Driver"); ++MODULE_VERSION("2.0.0"); +diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.c b/drivers/hv/dxgkrnl/dxgsyncfile.c +index 9d5832c90ad7..f3b3e8dd4568 100644 +--- a/drivers/hv/dxgkrnl/dxgsyncfile.c ++++ b/drivers/hv/dxgkrnl/dxgsyncfile.c +@@ -38,13 +38,6 @@ + #undef dev_fmt + #define dev_fmt(fmt) "dxgk: " fmt + +-#ifdef DEBUG +-static char *errorstr(int ret) +-{ +- return ret < 0 ? 
"err" : ""; +-} +-#endif +- + static const struct dma_fence_ops dxgdmafence_ops; + + static struct dxgsyncpoint *to_syncpoint(struct dma_fence *fence) +@@ -193,7 +186,7 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs) + if (fd >= 0) + put_unused_fd(fd); + } +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -317,7 +310,7 @@ int dxgkio_open_syncobj_from_syncfile(struct dxgprocess *process, + kref_put(&device->device_kref, dxgdevice_release); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -415,7 +408,7 @@ int dxgkio_wait_sync_file(struct dxgprocess *process, void *__user inargs) + if (dmafence) + dma_fence_put(dmafence); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 36f4d4e84d3e..566ccb6d01c9 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1212,7 +1212,7 @@ dxgvmb_send_create_context(struct dxgadapter *adapter, + args->priv_drv_data_size); + if (ret) { + DXG_ERR("Faled to copy private data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -1230,7 +1230,7 @@ dxgvmb_send_create_context(struct dxgadapter *adapter, + if (ret) { + DXG_ERR( + "Faled to copy private data to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + dxgvmb_send_destroy_context(adapter, process, + context); + context.v = 0; +@@ -1365,7 +1365,7 @@ copy_private_data(struct d3dkmt_createallocation *args, + args->private_runtime_data_size); + if (ret) { + DXG_ERR("failed to copy runtime data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + private_data_dest += args->private_runtime_data_size; +@@ -1385,7 +1385,7 @@ copy_private_data(struct d3dkmt_createallocation *args, + args->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy private data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + private_data_dest += args->priv_drv_data_size; +@@ -1406,7 +1406,7 @@ copy_private_data(struct d3dkmt_createallocation *args, + input_alloc->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy alloc data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + private_data_dest += input_alloc->priv_drv_data_size; +@@ -1658,7 +1658,7 @@ create_local_allocations(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy resource handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -1690,7 +1690,7 @@ create_local_allocations(struct dxgprocess *process, + host_alloc->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy private data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + alloc_private_data += host_alloc->priv_drv_data_size; +@@ -1700,7 +1700,7 @@ create_local_allocations(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy alloc handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -1714,7 +1714,7 @@ create_local_allocations(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy global share"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -1961,7 +1961,7 @@ int dxgvmb_send_query_clock_calibration(struct dxgprocess *process, + sizeof(result.clock_data)); + if (ret) { + DXG_ERR("failed to copy clock data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = 
ntstatus2int(result.status); +@@ -2041,7 +2041,7 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + alloc_size); + if (ret) { + DXG_ERR("failed to copy alloc handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -2059,7 +2059,7 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process, + result_allocation_size); + if (ret) { + DXG_ERR("failed to copy residency status"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -2105,7 +2105,7 @@ int dxgvmb_send_escape(struct dxgprocess *process, + args->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy priv data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -2164,14 +2164,14 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + sizeof(output->budget)); + if (ret) { + DXG_ERR("failed to copy budget"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(&output->current_usage, &result.current_usage, + sizeof(output->current_usage)); + if (ret) { + DXG_ERR("failed to copy current usage"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(&output->current_reservation, +@@ -2179,7 +2179,7 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + sizeof(output->current_reservation)); + if (ret) { + DXG_ERR("failed to copy reservation"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(&output->available_for_reservation, +@@ -2187,7 +2187,7 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process, + sizeof(output->available_for_reservation)); + if (ret) { + DXG_ERR("failed to copy avail reservation"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -2229,7 +2229,7 @@ int dxgvmb_send_get_device_state(struct dxgprocess *process, + ret = copy_to_user(output, &result.args, sizeof(result.args)); + if (ret) { + DXG_ERR("failed to copy output args"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + if (args->state_type == _D3DKMT_DEVICESTATE_EXECUTION) +@@ -2404,7 +2404,7 @@ int dxgvmb_send_make_resident(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy alloc handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + command_vgpu_to_host_init2(&command->hdr, +@@ -2454,7 +2454,7 @@ int dxgvmb_send_evict(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy alloc handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + command_vgpu_to_host_init2(&command->hdr, +@@ -2502,14 +2502,14 @@ int dxgvmb_send_submit_command(struct dxgprocess *process, + hbufsize); + if (ret) { + DXG_ERR(" failed to copy history buffer"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_from_user((u8 *) &command[1] + hbufsize, + args->priv_drv_data, args->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy history priv data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2671,7 +2671,7 @@ int dxgvmb_send_update_gpu_va(struct dxgprocess *process, + op_size); + if (ret) { + DXG_ERR("failed to copy operations"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2751,7 +2751,7 @@ dxgvmb_send_create_sync_object(struct dxgprocess *process, + sizeof(u64)); + if (ret) { + DXG_ERR("failed to read fence"); +- ret = -EINVAL; ++ ret = -EFAULT; + } else { + DXG_TRACE("fence value:%lx", + value); +@@ -2820,7 +2820,7 @@ int dxgvmb_send_signal_sync_object(struct dxgprocess *process, + if (ret) { + DXG_ERR("Failed to read objects %p 
%d", + objects, object_size); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + current_pos += object_size; +@@ -2834,7 +2834,7 @@ int dxgvmb_send_signal_sync_object(struct dxgprocess *process, + if (ret) { + DXG_ERR("Failed to read contexts %p %d", + contexts, context_size); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + current_pos += context_size; +@@ -2844,7 +2844,7 @@ int dxgvmb_send_signal_sync_object(struct dxgprocess *process, + if (ret) { + DXG_ERR("Failed to read fences %p %d", + fences, fence_size); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -2898,7 +2898,7 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + ret = copy_from_user(current_pos, args->objects, object_size); + if (ret) { + DXG_ERR("failed to copy objects"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + current_pos += object_size; +@@ -2906,7 +2906,7 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process, + fence_size); + if (ret) { + DXG_ERR("failed to copy fences"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } else { +@@ -3037,7 +3037,7 @@ int dxgvmb_send_lock2(struct dxgprocess *process, + sizeof(args->data)); + if (ret) { + DXG_ERR("failed to copy data"); +- ret = -EINVAL; ++ ret = -EFAULT; + alloc->cpu_address_refcount--; + if (alloc->cpu_address_refcount == 0) { + dxg_unmap_iospace(alloc->cpu_address, +@@ -3119,7 +3119,7 @@ int dxgvmb_send_update_alloc_property(struct dxgprocess *process, + sizeof(u64)); + if (ret1) { + DXG_ERR("failed to copy paging fence"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + } + cleanup: +@@ -3204,14 +3204,14 @@ int dxgvmb_send_set_allocation_priority(struct dxgprocess *process, + alloc_size); + if (ret) { + DXG_ERR("failed to copy alloc handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_from_user((u8 *) allocations + alloc_size, + args->priorities, priority_size); + if (ret) { + DXG_ERR("failed to copy alloc priority"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3277,7 +3277,7 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, + alloc_size); + if (ret) { + DXG_ERR("failed to copy alloc handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3296,7 +3296,7 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process, + priority_size); + if (ret) { + DXG_ERR("failed to copy priorities"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -3402,7 +3402,7 @@ int dxgvmb_send_offer_allocations(struct dxgprocess *process, + } + if (ret) { + DXG_ERR("failed to copy input handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3457,7 +3457,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process, + } + if (ret) { + DXG_ERR("failed to copy input handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3469,7 +3469,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process, + &result->paging_fence_value, sizeof(u64)); + if (ret) { + DXG_ERR("failed to copy paging fence"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3480,7 +3480,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process, + args->allocation_count); + if (ret) { + DXG_ERR("failed to copy results"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + } + +@@ -3559,7 +3559,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + args->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy private data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; 
+ } + } +@@ -3604,7 +3604,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy hwqueue handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(&inargs->queue_progress_fence, +@@ -3612,7 +3612,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to progress fence"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(&inargs->queue_progress_fence_cpu_va, +@@ -3620,7 +3620,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + sizeof(inargs->queue_progress_fence_cpu_va)); + if (ret) { + DXG_ERR("failed to copy fence cpu va"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(&inargs->queue_progress_fence_gpu_va, +@@ -3628,7 +3628,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + sizeof(u64)); + if (ret) { + DXG_ERR("failed to copy fence gpu va"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + if (args->priv_drv_data_size) { +@@ -3637,7 +3637,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + args->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy private data"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + } + +@@ -3706,7 +3706,7 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + args->private_data, args->private_data_size); + if (ret) { + DXG_ERR("Faled to copy private data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3758,7 +3758,7 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process, + args->private_data_size); + if (ret) { + DXG_ERR("Faled to copy private data to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -3791,7 +3791,7 @@ int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, + primaries_size); + if (ret) { + DXG_ERR("failed to copy primaries handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -3801,7 +3801,7 @@ int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process, + args->priv_drv_data_size); + if (ret) { + DXG_ERR("failed to copy primaries data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 622904d5c3a9..3dc9e76f4f3d 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -29,13 +29,6 @@ struct ioctl_desc { + u32 ioctl; + }; + +-#ifdef DEBUG +-static char *errorstr(int ret) +-{ +- return ret < 0 ? 
"err" : ""; +-} +-#endif +- + void dxgsharedsyncobj_put(struct dxgsharedsyncobject *syncobj) + { + DXG_TRACE("Release syncobj: %p", syncobj); +@@ -108,7 +101,7 @@ static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("Faled to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -129,7 +122,7 @@ static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, + &args.adapter_handle, + sizeof(struct d3dkmthandle)); + if (ret) +- ret = -EINVAL; ++ ret = -EFAULT; + } + adapter = entry; + } +@@ -150,7 +143,7 @@ static int dxgkio_open_adapter_from_luid(struct dxgprocess *process, + if (ret < 0) + dxgprocess_close_adapter(process, args.adapter_handle); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -173,7 +166,7 @@ static int dxgkio_query_statistics(struct dxgprocess *process, + ret = copy_from_user(args, inargs, sizeof(*args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -199,7 +192,7 @@ static int dxgkio_query_statistics(struct dxgprocess *process, + ret = copy_to_user(inargs, args, sizeof(*args)); + if (ret) { + DXG_ERR("failed to copy args"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + } + dxgadapter_release_lock_shared(adapter); +@@ -209,7 +202,7 @@ static int dxgkio_query_statistics(struct dxgprocess *process, + if (args) + vfree(args); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -233,7 +226,7 @@ dxgkp_enum_adapters(struct dxgprocess *process, + &dxgglobal->num_adapters, sizeof(u32)); + if (ret) { + DXG_ERR("copy_to_user faled"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + goto cleanup; + } +@@ -291,7 +284,7 @@ dxgkp_enum_adapters(struct dxgprocess *process, + &dxgglobal->num_adapters, sizeof(u32)); + if (ret) { + DXG_ERR("copy_to_user failed"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + goto cleanup; + } +@@ -300,13 +293,13 @@ dxgkp_enum_adapters(struct dxgprocess *process, + sizeof(adapter_count)); + if (ret) { + DXG_ERR("failed to copy adapter_count"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(info_out, info, sizeof(info[0]) * adapter_count); + if (ret) { + DXG_ERR("failed to copy adapter info"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -326,7 +319,7 @@ dxgkp_enum_adapters(struct dxgprocess *process, + if (adapters) + vfree(adapters); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -437,7 +430,7 @@ dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -447,7 +440,7 @@ dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy args to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + goto cleanup; + } +@@ -508,14 +501,14 @@ dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy args to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + ret = copy_to_user(args.adapters, info, + sizeof(info[0]) * args.num_adapters); + if (ret) { + DXG_ERR("failed to copy adapter info to user"); +- ret = 
-EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -536,7 +529,7 @@ dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs) + if (adapters) + vfree(adapters); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -549,7 +542,7 @@ dxgkio_enum_adapters3(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -561,7 +554,7 @@ dxgkio_enum_adapters3(struct dxgprocess *process, void *__user inargs) + + cleanup: + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -574,7 +567,7 @@ dxgkio_close_adapter(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -584,7 +577,7 @@ dxgkio_close_adapter(struct dxgprocess *process, void *__user inargs) + + cleanup: + +- DXG_TRACE("ioctl: %s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -598,7 +591,7 @@ dxgkio_query_adapter_info(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -630,7 +623,7 @@ dxgkio_query_adapter_info(struct dxgprocess *process, void *__user inargs) + if (adapter) + kref_put(&adapter->adapter_kref, dxgadapter_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -647,7 +640,7 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -677,7 +670,7 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs) + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy device handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -709,7 +702,7 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs) + if (adapter) + kref_put(&adapter->adapter_kref, dxgadapter_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -724,7 +717,7 @@ dxgkio_destroy_device(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -756,7 +749,7 @@ dxgkio_destroy_device(struct dxgprocess *process, void *__user inargs) + + cleanup: + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -774,7 +767,7 @@ dxgkio_create_context_virtual(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -824,7 +817,7 @@ dxgkio_create_context_virtual(struct dxgprocess *process, void *__user inargs) + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy context handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + } else { + DXG_ERR("invalid host handle"); +@@ -851,7 +844,7 @@ dxgkio_create_context_virtual(struct dxgprocess *process, void 
*__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -868,7 +861,7 @@ dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -920,7 +913,7 @@ dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -938,7 +931,7 @@ dxgkio_create_hwqueue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1002,7 +995,7 @@ dxgkio_create_hwqueue(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -1019,7 +1012,7 @@ static int dxgkio_destroy_hwqueue(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1070,7 +1063,7 @@ static int dxgkio_destroy_hwqueue(struct dxgprocess *process, + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -1088,7 +1081,7 @@ dxgkio_create_paging_queue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1128,7 +1121,7 @@ dxgkio_create_paging_queue(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1169,7 +1162,7 @@ dxgkio_create_paging_queue(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -1186,7 +1179,7 @@ dxgkio_destroy_paging_queue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1247,7 +1240,7 @@ dxgkio_destroy_paging_queue(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -1351,7 +1344,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1373,7 +1366,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + alloc_info_size); + if (ret) { + DXG_ERR("failed to copy alloc info"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1412,7 +1405,7 @@ 
dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + sizeof(standard_alloc)); + if (ret) { + DXG_ERR("failed to copy std alloc data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + if (standard_alloc.type == +@@ -1556,7 +1549,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + if (ret) { + DXG_ERR( + "failed to copy runtime data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -1576,7 +1569,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + if (ret) { + DXG_ERR( + "failed to copy res data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -1733,7 +1726,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -1793,7 +1786,7 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -1823,7 +1816,7 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + handle_size); + if (ret) { + DXG_ERR("failed to copy alloc handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -1962,7 +1955,7 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs) + if (allocs) + vfree(allocs); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -1978,7 +1971,7 @@ dxgkio_make_resident(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2022,7 +2015,7 @@ dxgkio_make_resident(struct dxgprocess *process, void *__user inargs) + &args.paging_fence_value, sizeof(u64)); + if (ret2) { + DXG_ERR("failed to copy paging fence"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2030,7 +2023,7 @@ dxgkio_make_resident(struct dxgprocess *process, void *__user inargs) + &args.num_bytes_to_trim, sizeof(u64)); + if (ret2) { + DXG_ERR("failed to copy bytes to trim"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2041,7 +2034,7 @@ dxgkio_make_resident(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + + return ret; + } +@@ -2058,7 +2051,7 @@ dxgkio_evict(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2090,7 +2083,7 @@ dxgkio_evict(struct dxgprocess *process, void *__user inargs) + &args.num_bytes_to_trim, sizeof(u64)); + if (ret) { + DXG_ERR("failed to copy bytes to trim to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + cleanup: + +@@ -2099,7 +2092,7 @@ dxgkio_evict(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2114,7 +2107,7 @@ dxgkio_offer_allocations(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + 
if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2153,7 +2146,7 @@ dxgkio_offer_allocations(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2169,7 +2162,7 @@ dxgkio_reclaim_allocations(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2212,7 +2205,7 @@ dxgkio_reclaim_allocations(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2227,7 +2220,7 @@ dxgkio_submit_command(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2280,7 +2273,7 @@ dxgkio_submit_command(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2296,7 +2289,7 @@ dxgkio_submit_command_to_hwqueue(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2336,7 +2329,7 @@ dxgkio_submit_command_to_hwqueue(struct dxgprocess *process, + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2352,7 +2345,7 @@ dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2376,7 +2369,7 @@ dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs) + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy hwqueue handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2410,7 +2403,7 @@ dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2428,7 +2421,7 @@ dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2447,7 +2440,7 @@ dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(objects, args.objects, object_size); + if (ret) { + DXG_ERR("failed to copy objects"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2460,7 +2453,7 @@ dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(fences, args.fence_values, object_size); + if (ret) { + DXG_ERR("failed to copy fence values"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2494,7 +2487,7 @@ 
dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2510,7 +2503,7 @@ dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2542,7 +2535,7 @@ dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs) + &args.paging_fence_value, sizeof(u64)); + if (ret2) { + DXG_ERR("failed to copy paging fence to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2550,7 +2543,7 @@ dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs) + sizeof(args.virtual_address)); + if (ret2) { + DXG_ERR("failed to copy va to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2561,7 +2554,7 @@ dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2577,7 +2570,7 @@ dxgkio_reserve_gpu_va(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2614,7 +2607,7 @@ dxgkio_reserve_gpu_va(struct dxgprocess *process, void *__user inargs) + sizeof(args.virtual_address)); + if (ret) { + DXG_ERR("failed to copy VA to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -2624,7 +2617,7 @@ dxgkio_reserve_gpu_va(struct dxgprocess *process, void *__user inargs) + kref_put(&adapter->adapter_kref, dxgadapter_release); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2638,7 +2631,7 @@ dxgkio_free_gpu_va(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2680,7 +2673,7 @@ dxgkio_update_gpu_va(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2705,7 +2698,7 @@ dxgkio_update_gpu_va(struct dxgprocess *process, void *__user inargs) + sizeof(args.fence_value)); + if (ret) { + DXG_ERR("failed to copy fence value to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -2734,7 +2727,7 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2808,7 +2801,7 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy output args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2842,7 +2835,7 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2856,7 +2849,7 @@ 
dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2885,7 +2878,7 @@ dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs) + + cleanup: + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -2906,7 +2899,7 @@ dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -2995,7 +2988,7 @@ dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs) + if (ret == 0) + goto success; + DXG_ERR("failed to copy output args"); +- ret = -EINVAL; ++ ret = -EFAULT; + + cleanup: + +@@ -3020,7 +3013,7 @@ dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3041,7 +3034,7 @@ dxgkio_signal_sync_object(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3129,7 +3122,7 @@ dxgkio_signal_sync_object(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3144,7 +3137,7 @@ dxgkio_signal_sync_object_cpu(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + if (args.object_count == 0 || +@@ -3181,7 +3174,7 @@ dxgkio_signal_sync_object_cpu(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3199,7 +3192,7 @@ dxgkio_signal_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3240,7 +3233,7 @@ dxgkio_signal_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3262,7 +3255,7 @@ dxgkio_signal_sync_object_gpu2(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3287,7 +3280,7 @@ dxgkio_signal_sync_object_gpu2(struct dxgprocess *process, void *__user inargs) + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy context handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3365,7 +3358,7 @@ dxgkio_signal_sync_object_gpu2(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", 
errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3380,7 +3373,7 @@ dxgkio_wait_sync_object(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3418,7 +3411,7 @@ dxgkio_wait_sync_object(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3439,7 +3432,7 @@ dxgkio_wait_sync_object_cpu(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3540,7 +3533,7 @@ dxgkio_wait_sync_object_cpu(struct dxgprocess *process, void *__user inargs) + kfree(async_host_event); + } + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3563,7 +3556,7 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3583,7 +3576,7 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(objects, args.objects, object_size); + if (ret) { + DXG_ERR("failed to copy objects"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3637,7 +3630,7 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + object_size); + if (ret) { + DXG_ERR("failed to copy fences"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } else { +@@ -3673,7 +3666,7 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs) + if (fences && fences != &args.fence_value) + vfree(fences); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3690,7 +3683,7 @@ dxgkio_lock2(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3712,7 +3705,7 @@ dxgkio_lock2(struct dxgprocess *process, void *__user inargs) + alloc->cpu_address_refcount++; + } else { + DXG_ERR("Failed to copy cpu address"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + } + } +@@ -3749,7 +3742,7 @@ dxgkio_lock2(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + + success: +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3766,7 +3759,7 @@ dxgkio_unlock2(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3829,7 +3822,7 @@ dxgkio_unlock2(struct dxgprocess *process, void *__user inargs) + kref_put(&device->device_kref, dxgdevice_release); + + success: +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3844,7 +3837,7 @@ dxgkio_update_alloc_property(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = 
-EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3872,7 +3865,7 @@ dxgkio_update_alloc_property(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3887,7 +3880,7 @@ dxgkio_mark_device_as_error(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + device = dxgprocess_device_by_handle(process, args.device); +@@ -3908,7 +3901,7 @@ dxgkio_mark_device_as_error(struct dxgprocess *process, void *__user inargs) + dxgadapter_release_lock_shared(adapter); + if (device) + kref_put(&device->device_kref, dxgdevice_release); +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3923,7 +3916,7 @@ dxgkio_query_alloc_residency(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -3949,7 +3942,7 @@ dxgkio_query_alloc_residency(struct dxgprocess *process, void *__user inargs) + dxgadapter_release_lock_shared(adapter); + if (device) + kref_put(&device->device_kref, dxgdevice_release); +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3964,7 +3957,7 @@ dxgkio_set_allocation_priority(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + device = dxgprocess_device_by_handle(process, args.device); +@@ -3984,7 +3977,7 @@ dxgkio_set_allocation_priority(struct dxgprocess *process, void *__user inargs) + dxgadapter_release_lock_shared(adapter); + if (device) + kref_put(&device->device_kref, dxgdevice_release); +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -3999,7 +3992,7 @@ dxgkio_get_allocation_priority(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + device = dxgprocess_device_by_handle(process, args.device); +@@ -4019,7 +4012,7 @@ dxgkio_get_allocation_priority(struct dxgprocess *process, void *__user inargs) + dxgadapter_release_lock_shared(adapter); + if (device) + kref_put(&device->device_kref, dxgdevice_release); +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4069,14 +4062,14 @@ dxgkio_set_context_scheduling_priority(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + + ret = set_context_scheduling_priority(process, args.context, + args.priority, false); + cleanup: +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4111,7 +4104,7 @@ get_context_scheduling_priority(struct dxgprocess *process, + ret = copy_to_user(priority, &pri, sizeof(pri)); + if (ret) { + DXG_ERR("failed to copy priority to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -4134,14 +4127,14 @@ 
dxgkio_get_context_scheduling_priority(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + + ret = get_context_scheduling_priority(process, args.context, + &input->priority, false); + cleanup: +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4155,14 +4148,14 @@ dxgkio_set_context_process_scheduling_priority(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + + ret = set_context_scheduling_priority(process, args.context, + args.priority, true); + cleanup: +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4176,7 +4169,7 @@ dxgkio_get_context_process_scheduling_priority(struct dxgprocess *process, + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4184,7 +4177,7 @@ dxgkio_get_context_process_scheduling_priority(struct dxgprocess *process, + &((struct d3dkmt_getcontextinprocessschedulingpriority *) + inargs)->priority, true); + cleanup: +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4199,7 +4192,7 @@ dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4232,7 +4225,7 @@ dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs + if (adapter) + kref_put(&adapter->adapter_kref, dxgadapter_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4247,7 +4240,7 @@ dxgkio_query_clock_calibration(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4272,7 +4265,7 @@ dxgkio_query_clock_calibration(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy output args"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -4295,7 +4288,7 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4319,7 +4312,7 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy output args"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -4341,7 +4334,7 @@ dxgkio_escape(struct dxgprocess *process, void *__user inargs) + + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4367,7 +4360,7 @@ dxgkio_escape(struct dxgprocess *process, void *__user inargs) + dxgadapter_release_lock_shared(adapter); + if (adapter) + kref_put(&adapter->adapter_kref, dxgadapter_release); +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + 
+@@ -4382,7 +4375,7 @@ dxgkio_query_vidmem_info(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4432,7 +4425,7 @@ dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4458,7 +4451,7 @@ dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy args to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + goto cleanup; + } +@@ -4590,7 +4583,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4610,7 +4603,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(handles, args.objects, handle_size); + if (ret) { + DXG_ERR("failed to copy object handles"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4708,7 +4701,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(args.shared_handle, &tmp, sizeof(u64)); + if (ret) { + DXG_ERR("failed to copy shared handle"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -4726,7 +4719,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + if (resource) + kref_put(&resource->resource_kref, dxgresource_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4742,7 +4735,7 @@ dxgkio_query_resource_info_nt(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -4795,7 +4788,7 @@ dxgkio_query_resource_info_nt(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy output args"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -4807,7 +4800,7 @@ dxgkio_query_resource_info_nt(struct dxgprocess *process, void *__user inargs) + if (device) + kref_put(&device->device_kref, dxgdevice_release); + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -4859,7 +4852,7 @@ assign_resource_handles(struct dxgprocess *process, + sizeof(open_alloc_info)); + if (ret) { + DXG_ERR("failed to copy alloc info"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -5009,7 +5002,7 @@ open_resource(struct dxgprocess *process, + shared_resource->runtime_private_data_size); + if (ret) { + DXG_ERR("failed to copy runtime data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -5020,7 +5013,7 @@ open_resource(struct dxgprocess *process, + shared_resource->resource_private_data_size); + if (ret) { + DXG_ERR("failed to copy resource data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -5031,7 +5024,7 @@ open_resource(struct dxgprocess *process, + shared_resource->alloc_private_data_size); + if (ret) { + DXG_ERR("failed to copy alloc data"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + } +@@ -5046,7 +5039,7 @@ 
open_resource(struct dxgprocess *process, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy resource handle to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -5054,7 +5047,7 @@ open_resource(struct dxgprocess *process, + &args->total_priv_drv_data_size, sizeof(u32)); + if (ret) { + DXG_ERR("failed to copy total driver data size"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: +@@ -5102,7 +5095,7 @@ dxgkio_open_resource_nt(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -5112,7 +5105,7 @@ dxgkio_open_resource_nt(struct dxgprocess *process, void *__user inargs) + + cleanup: + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +@@ -5125,7 +5118,7 @@ dxgkio_share_object_with_host(struct dxgprocess *process, void *__user inargs) + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy input args"); +- ret = -EINVAL; ++ ret = -EFAULT; + goto cleanup; + } + +@@ -5138,12 +5131,12 @@ dxgkio_share_object_with_host(struct dxgprocess *process, void *__user inargs) + ret = copy_to_user(inargs, &args, sizeof(args)); + if (ret) { + DXG_ERR("failed to copy data to user"); +- ret = -EINVAL; ++ ret = -EFAULT; + } + + cleanup: + +- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret); ++ DXG_TRACE_IOCTL_END(ret); + return ret; + } + +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1702-drivers-hv-dxgkrnl-Fix-synchronization-locks.patch b/patch/kernel/archive/wsl2-arm64-6.6/1702-drivers-hv-dxgkrnl-Fix-synchronization-locks.patch new file mode 100644 index 000000000000..2c643b7be7d8 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1702-drivers-hv-dxgkrnl-Fix-synchronization-locks.patch @@ -0,0 +1,391 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Mon, 13 Jun 2022 14:18:10 -0700 +Subject: drivers: hv: dxgkrnl: Fix synchronization locks + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/dxgadapter.c | 19 ++- + drivers/hv/dxgkrnl/dxgkrnl.h | 8 +- + drivers/hv/dxgkrnl/dxgmodule.c | 3 +- + drivers/hv/dxgkrnl/dxgprocess.c | 11 +- + drivers/hv/dxgkrnl/dxgvmbus.c | 85 +++++++--- + drivers/hv/dxgkrnl/ioctl.c | 24 ++- + drivers/hv/dxgkrnl/misc.h | 1 + + 7 files changed, 101 insertions(+), 50 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index 3d8bec295b87..d9d45bd4a31e 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -136,7 +136,7 @@ void dxgadapter_release(struct kref *refcount) + struct dxgadapter *adapter; + + adapter = container_of(refcount, struct dxgadapter, adapter_kref); +- DXG_TRACE("%p", adapter); ++ DXG_TRACE("Destroying adapter: %px", adapter); + kfree(adapter); + } + +@@ -270,6 +270,8 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter, + if (ret < 0) { + kref_put(&device->device_kref, dxgdevice_release); + device = NULL; ++ } else { ++ DXG_TRACE("dxgdevice created: %px", device); + } + } + return device; +@@ -413,11 +415,8 @@ void dxgdevice_destroy(struct dxgdevice *device) + + cleanup: + +- if (device->adapter) { ++ if (device->adapter) + dxgprocess_adapter_remove_device(device); +- kref_put(&device->adapter->adapter_kref, dxgadapter_release); +- device->adapter = NULL; +- 
} + + up_write(&device->device_lock); + +@@ -721,6 +720,8 @@ void dxgdevice_release(struct kref *refcount) + struct dxgdevice *device; + + device = container_of(refcount, struct dxgdevice, device_kref); ++ DXG_TRACE("Destroying device: %px", device); ++ kref_put(&device->adapter->adapter_kref, dxgadapter_release); + kfree(device); + } + +@@ -999,6 +1000,9 @@ void dxgpagingqueue_destroy(struct dxgpagingqueue *pqueue) + kfree(pqueue); + } + ++/* ++ * Process_adapter_mutex is held. ++ */ + struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process, + struct dxgadapter *adapter) + { +@@ -1108,7 +1112,7 @@ int dxgprocess_adapter_add_device(struct dxgprocess *process, + + void dxgprocess_adapter_remove_device(struct dxgdevice *device) + { +- DXG_TRACE("Removing device: %p", device); ++ DXG_TRACE("Removing device: %px", device); + mutex_lock(&device->adapter_info->device_list_mutex); + if (device->device_list_entry.next) { + list_del(&device->device_list_entry); +@@ -1147,8 +1151,7 @@ void dxgsharedsyncobj_release(struct kref *refcount) + if (syncobj->adapter) { + dxgadapter_remove_shared_syncobj(syncobj->adapter, + syncobj); +- kref_put(&syncobj->adapter->adapter_kref, +- dxgadapter_release); ++ kref_put(&syncobj->adapter->adapter_kref, dxgadapter_release); + } + kfree(syncobj); + } +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index f63aa6f7a9dc..1b40d6e39085 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -404,7 +404,10 @@ struct dxgprocess { + /* Handle of the corresponding objec on the host */ + struct d3dkmthandle host_handle; + +- /* List of opened adapters (dxgprocess_adapter) */ ++ /* ++ * List of opened adapters (dxgprocess_adapter). ++ * Protected by process_adapter_mutex. ++ */ + struct list_head process_adapter_list_head; + }; + +@@ -451,6 +454,8 @@ enum dxgadapter_state { + struct dxgadapter { + struct rw_semaphore core_lock; + struct kref adapter_kref; ++ /* Protects creation and destruction of dxgdevice objects */ ++ struct mutex device_creation_lock; + /* Entry in the list of adapters in dxgglobal */ + struct list_head adapter_list_entry; + /* The list of dxgprocess_adapter entries */ +@@ -997,6 +1002,7 @@ void dxgk_validate_ioctls(void); + + #define DXG_TRACE(fmt, ...) do { \ + trace_printk(dev_fmt(fmt) "\n", ##__VA_ARGS__); \ ++ dev_dbg(DXGDEV, "%s: " fmt, __func__, ##__VA_ARGS__); \ + } while (0) + + #define DXG_ERR(fmt, ...) 
do { \ +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index aa27931a3447..f419597f711a 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -272,6 +272,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid, + adapter->host_vgpu_luid = host_vgpu_luid; + kref_init(&adapter->adapter_kref); + init_rwsem(&adapter->core_lock); ++ mutex_init(&adapter->device_creation_lock); + + INIT_LIST_HEAD(&adapter->adapter_process_list_head); + INIT_LIST_HEAD(&adapter->shared_resource_list_head); +@@ -961,4 +962,4 @@ module_exit(dxg_drv_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual compute device Driver"); +-MODULE_VERSION("2.0.0"); ++MODULE_VERSION("2.0.1"); +diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c +index e77e3a4983f8..fd51fd968049 100644 +--- a/drivers/hv/dxgkrnl/dxgprocess.c ++++ b/drivers/hv/dxgkrnl/dxgprocess.c +@@ -214,14 +214,15 @@ int dxgprocess_close_adapter(struct dxgprocess *process, + hmgrtable_unlock(&process->local_handle_table, DXGLOCK_EXCL); + + if (adapter) { ++ mutex_lock(&adapter->device_creation_lock); ++ dxgglobal_acquire_process_adapter_lock(); + adapter_info = dxgprocess_get_adapter_info(process, adapter); +- if (adapter_info) { +- dxgglobal_acquire_process_adapter_lock(); ++ if (adapter_info) + dxgprocess_adapter_release(adapter_info); +- dxgglobal_release_process_adapter_lock(); +- } else { ++ else + ret = -EINVAL; +- } ++ dxgglobal_release_process_adapter_lock(); ++ mutex_unlock(&adapter->device_creation_lock); + } else { + DXG_ERR("Adapter not found %x", handle.v); + ret = -EINVAL; +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 566ccb6d01c9..8c99f141482e 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1573,8 +1573,27 @@ process_allocation_handles(struct dxgprocess *process, + struct dxgresource *resource) + { + int ret = 0; +- int i; ++ int i = 0; ++ int k; ++ struct dxgkvmb_command_allocinfo_return *host_alloc; + ++ /* ++ * Assign handle to the internal objects, so VM bus messages will be ++ * sent to the host to free them during object destruction. ++ */ ++ if (args->flags.create_resource) ++ resource->handle = res->resource; ++ for (i = 0; i < args->alloc_count; i++) { ++ host_alloc = &res->allocation_info[i]; ++ dxgalloc[i]->alloc_handle = host_alloc->allocation; ++ } ++ ++ /* ++ * Assign handle to the handle table. ++ * In case of a failure all handles should be freed. ++ * When the function returns, the objects could be destroyed by ++ * handle immediately. 
++ */ + hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); + if (args->flags.create_resource) { + ret = hmgrtable_assign_handle(&process->handle_table, resource, +@@ -1583,14 +1602,12 @@ process_allocation_handles(struct dxgprocess *process, + if (ret < 0) { + DXG_ERR("failed to assign resource handle %x", + res->resource.v); ++ goto cleanup; + } else { +- resource->handle = res->resource; + resource->handle_valid = 1; + } + } + for (i = 0; i < args->alloc_count; i++) { +- struct dxgkvmb_command_allocinfo_return *host_alloc; +- + host_alloc = &res->allocation_info[i]; + ret = hmgrtable_assign_handle(&process->handle_table, + dxgalloc[i], +@@ -1602,9 +1619,26 @@ process_allocation_handles(struct dxgprocess *process, + args->alloc_count, i); + break; + } +- dxgalloc[i]->alloc_handle = host_alloc->allocation; + dxgalloc[i]->handle_valid = 1; + } ++ if (ret < 0) { ++ if (args->flags.create_resource) { ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGRESOURCE, ++ res->resource); ++ resource->handle_valid = 0; ++ } ++ for (k = 0; k < i; k++) { ++ host_alloc = &res->allocation_info[i]; ++ hmgrtable_free_handle(&process->handle_table, ++ HMGRENTRY_TYPE_DXGALLOCATION, ++ host_alloc->allocation); ++ dxgalloc[i]->handle_valid = 0; ++ } ++ } ++ ++cleanup: ++ + hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); + + if (ret) +@@ -1705,18 +1739,17 @@ create_local_allocations(struct dxgprocess *process, + } + } + +- ret = process_allocation_handles(process, device, args, result, +- dxgalloc, resource); +- if (ret < 0) +- goto cleanup; +- + ret = copy_to_user(&input_args->global_share, &args->global_share, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy global share"); + ret = -EFAULT; ++ goto cleanup; + } + ++ ret = process_allocation_handles(process, device, args, result, ++ dxgalloc, resource); ++ + cleanup: + + if (ret < 0) { +@@ -3576,22 +3609,6 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + goto cleanup; + } + +- ret = hmgrtable_assign_handle_safe(&process->handle_table, hwqueue, +- HMGRENTRY_TYPE_DXGHWQUEUE, +- command->hwqueue); +- if (ret < 0) +- goto cleanup; +- +- ret = hmgrtable_assign_handle_safe(&process->handle_table, +- NULL, +- HMGRENTRY_TYPE_MONITOREDFENCE, +- command->hwqueue_progress_fence); +- if (ret < 0) +- goto cleanup; +- +- hwqueue->handle = command->hwqueue; +- hwqueue->progress_fence_sync_object = command->hwqueue_progress_fence; +- + hwqueue->progress_fence_mapped_address = + dxg_map_iospace((u64)command->hwqueue_progress_fence_cpuva, + PAGE_SIZE, PROT_READ | PROT_WRITE, true); +@@ -3641,6 +3658,22 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process, + } + } + ++ ret = hmgrtable_assign_handle_safe(&process->handle_table, ++ NULL, ++ HMGRENTRY_TYPE_MONITOREDFENCE, ++ command->hwqueue_progress_fence); ++ if (ret < 0) ++ goto cleanup; ++ ++ hwqueue->progress_fence_sync_object = command->hwqueue_progress_fence; ++ hwqueue->handle = command->hwqueue; ++ ++ ret = hmgrtable_assign_handle_safe(&process->handle_table, hwqueue, ++ HMGRENTRY_TYPE_DXGHWQUEUE, ++ command->hwqueue); ++ if (ret < 0) ++ hwqueue->handle.v = 0; ++ + cleanup: + if (ret < 0) { + DXG_ERR("failed %x", ret); +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 3dc9e76f4f3d..7c72790f917f 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -636,6 +636,7 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs) + struct dxgdevice *device = NULL; + struct d3dkmthandle 
host_device_handle = {}; + bool adapter_locked = false; ++ bool device_creation_locked = false; + + ret = copy_from_user(&args, inargs, sizeof(args)); + if (ret) { +@@ -651,6 +652,9 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs) + goto cleanup; + } + ++ mutex_lock(&adapter->device_creation_lock); ++ device_creation_locked = true; ++ + device = dxgdevice_create(adapter, process); + if (device == NULL) { + ret = -ENOMEM; +@@ -699,6 +703,9 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs) + if (adapter_locked) + dxgadapter_release_lock_shared(adapter); + ++ if (device_creation_locked) ++ mutex_unlock(&adapter->device_creation_lock); ++ + if (adapter) + kref_put(&adapter->adapter_kref, dxgadapter_release); + +@@ -803,22 +810,21 @@ dxgkio_create_context_virtual(struct dxgprocess *process, void *__user inargs) + host_context_handle = dxgvmb_send_create_context(adapter, + process, &args); + if (host_context_handle.v) { +- hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); +- ret = hmgrtable_assign_handle(&process->handle_table, context, +- HMGRENTRY_TYPE_DXGCONTEXT, +- host_context_handle); +- if (ret >= 0) +- context->handle = host_context_handle; +- hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); +- if (ret < 0) +- goto cleanup; + ret = copy_to_user(&((struct d3dkmt_createcontextvirtual *) + inargs)->context, &host_context_handle, + sizeof(struct d3dkmthandle)); + if (ret) { + DXG_ERR("failed to copy context handle"); + ret = -EFAULT; ++ goto cleanup; + } ++ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL); ++ ret = hmgrtable_assign_handle(&process->handle_table, context, ++ HMGRENTRY_TYPE_DXGCONTEXT, ++ host_context_handle); ++ if (ret >= 0) ++ context->handle = host_context_handle; ++ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL); + } else { + DXG_ERR("invalid host handle"); + ret = -EINVAL; +diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h +index ee2ebfdd1c13..9fcab4ae2c0c 100644 +--- a/drivers/hv/dxgkrnl/misc.h ++++ b/drivers/hv/dxgkrnl/misc.h +@@ -38,6 +38,7 @@ extern const struct d3dkmthandle zerohandle; + * core_lock (dxgadapter lock) + * device_lock (dxgdevice lock) + * process_adapter_mutex ++ * device_creation_lock in dxgadapter + * adapter_list_lock + * device_mutex (dxgglobal mutex) + */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1703-drivers-hv-dxgkrnl-Close-shared-file-objects-in-case-of-a-failure.patch b/patch/kernel/archive/wsl2-arm64-6.6/1703-drivers-hv-dxgkrnl-Close-shared-file-objects-in-case-of-a-failure.patch new file mode 100644 index 000000000000..c13eff3e946e --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1703-drivers-hv-dxgkrnl-Close-shared-file-objects-in-case-of-a-failure.patch @@ -0,0 +1,80 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Tue, 28 Jun 2022 17:26:11 -0700 +Subject: drivers: hv: dxgkrnl: Close shared file objects in case of a failure + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/ioctl.c | 14 +++++++--- + 1 file changed, 10 insertions(+), 4 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 7c72790f917f..69324510c9e2 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -4536,7 +4536,7 @@ enum dxg_sharedobject_type { + }; + + static int get_object_fd(enum dxg_sharedobject_type type, +- void *object, int *fdout) ++ void *object, int *fdout, struct 
file **filp) + { + struct file *file; + int fd; +@@ -4565,8 +4565,8 @@ static int get_object_fd(enum dxg_sharedobject_type type, + return -ENOTRECOVERABLE; + } + +- fd_install(fd, file); + *fdout = fd; ++ *filp = file; + return 0; + } + +@@ -4581,6 +4581,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + struct dxgsharedresource *shared_resource = NULL; + struct d3dkmthandle *handles = NULL; + int object_fd = -1; ++ struct file *filp = NULL; + void *obj = NULL; + u32 handle_size; + int ret; +@@ -4660,7 +4661,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + switch (object_type) { + case HMGRENTRY_TYPE_DXGSYNCOBJECT: + ret = get_object_fd(DXG_SHARED_SYNCOBJECT, shared_syncobj, +- &object_fd); ++ &object_fd, &filp); + if (ret < 0) { + DXG_ERR("get_object_fd failed for sync object"); + goto cleanup; +@@ -4675,7 +4676,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + break; + case HMGRENTRY_TYPE_DXGRESOURCE: + ret = get_object_fd(DXG_SHARED_RESOURCE, shared_resource, +- &object_fd); ++ &object_fd, &filp); + if (ret < 0) { + DXG_ERR("get_object_fd failed for resource"); + goto cleanup; +@@ -4708,10 +4709,15 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs) + if (ret) { + DXG_ERR("failed to copy shared handle"); + ret = -EFAULT; ++ goto cleanup; + } + ++ fd_install(object_fd, filp); ++ + cleanup: + if (ret < 0) { ++ if (filp) ++ fput(filp); + if (object_fd >= 0) + put_unused_fd(object_fd); + } +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1704-drivers-hv-dxgkrnl-Added-missed-NULL-check-for-resource-object.patch b/patch/kernel/archive/wsl2-arm64-6.6/1704-drivers-hv-dxgkrnl-Added-missed-NULL-check-for-resource-object.patch new file mode 100644 index 000000000000..db8494533f63 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1704-drivers-hv-dxgkrnl-Added-missed-NULL-check-for-resource-object.patch @@ -0,0 +1,51 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Wed, 29 Jun 2022 10:04:23 -0700 +Subject: drivers: hv: dxgkrnl: Added missed NULL check for resource object + +Signed-off-by: Iouri Tarassov +[kms: Forward port to v6.1] +Signed-off-by: Kelsey Steele +--- + drivers/hv/dxgkrnl/ioctl.c | 10 ++++++---- + 1 file changed, 6 insertions(+), 4 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c +index 69324510c9e2..98350583943e 100644 +--- a/drivers/hv/dxgkrnl/ioctl.c ++++ b/drivers/hv/dxgkrnl/ioctl.c +@@ -1589,7 +1589,8 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + &process->handle_table, + HMGRENTRY_TYPE_DXGRESOURCE, + args.resource); +- kref_get(&resource->resource_kref); ++ if (resource != NULL) ++ kref_get(&resource->resource_kref); + dxgprocess_ht_lock_shared_up(process); + + if (resource == NULL || resource->device != device) { +@@ -1693,10 +1694,8 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + &standard_alloc); + cleanup: + +- if (resource_mutex_acquired) { ++ if (resource_mutex_acquired) + mutex_unlock(&resource->resource_mutex); +- kref_put(&resource->resource_kref, dxgresource_release); +- } + if (ret < 0) { + if (dxgalloc) { + for (i = 0; i < args.alloc_count; i++) { +@@ -1727,6 +1726,9 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs) + if (adapter) + dxgadapter_release_lock_shared(adapter); + ++ if (resource && !args.flags.create_resource) ++ kref_put(&resource->resource_kref, 
dxgresource_release); ++ + if (device) { + dxgdevice_release_lock_shared(device); + kref_put(&device->device_kref, dxgdevice_release); +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1705-drivers-hv-dxgkrnl-Fixed-dxgkrnl-to-build-for-the-6.1-kernel.patch b/patch/kernel/archive/wsl2-arm64-6.6/1705-drivers-hv-dxgkrnl-Fixed-dxgkrnl-to-build-for-the-6.1-kernel.patch new file mode 100644 index 000000000000..3efcc7ef401b --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1705-drivers-hv-dxgkrnl-Fixed-dxgkrnl-to-build-for-the-6.1-kernel.patch @@ -0,0 +1,84 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Iouri Tarassov +Date: Thu, 26 Jan 2023 10:49:41 -0800 +Subject: drivers: hv: dxgkrnl: Fixed dxgkrnl to build for the 6.1 kernel + +Definition for GPADL was changed from u32 to struct vmbus_gpadl. + +Signed-off-by: Iouri Tarassov +--- + drivers/hv/dxgkrnl/dxgadapter.c | 8 -------- + drivers/hv/dxgkrnl/dxgkrnl.h | 4 ---- + drivers/hv/dxgkrnl/dxgvmbus.c | 8 -------- + 3 files changed, 20 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c +index d9d45bd4a31e..bcd19b7267d1 100644 +--- a/drivers/hv/dxgkrnl/dxgadapter.c ++++ b/drivers/hv/dxgkrnl/dxgadapter.c +@@ -927,19 +927,11 @@ void dxgallocation_destroy(struct dxgallocation *alloc) + alloc->owner.device, + &args, &alloc->alloc_handle); + } +-#ifdef _MAIN_KERNEL_ + if (alloc->gpadl.gpadl_handle) { + DXG_TRACE("Teardown gpadl %d", alloc->gpadl.gpadl_handle); + vmbus_teardown_gpadl(dxgglobal_get_vmbus(), &alloc->gpadl); + alloc->gpadl.gpadl_handle = 0; + } +-#else +- if (alloc->gpadl) { +- DXG_TRACE("Teardown gpadl %d", alloc->gpadl); +- vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl); +- alloc->gpadl = 0; +- } +-#endif + if (alloc->priv_drv_data) + vfree(alloc->priv_drv_data); + if (alloc->cpu_address_mapped) +diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h +index 1b40d6e39085..c5ed23cb90df 100644 +--- a/drivers/hv/dxgkrnl/dxgkrnl.h ++++ b/drivers/hv/dxgkrnl/dxgkrnl.h +@@ -728,11 +728,7 @@ struct dxgallocation { + u32 cached:1; + u32 handle_valid:1; + /* GPADL address list for existing sysmem allocations */ +-#ifdef _MAIN_KERNEL_ + struct vmbus_gpadl gpadl; +-#else +- u32 gpadl; +-#endif + /* Number of pages in the 'pages' array */ + u32 num_pages; + /* +diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c +index 8c99f141482e..eb3f4c5153a6 100644 +--- a/drivers/hv/dxgkrnl/dxgvmbus.c ++++ b/drivers/hv/dxgkrnl/dxgvmbus.c +@@ -1493,22 +1493,14 @@ int create_existing_sysmem(struct dxgdevice *device, + ret = -ENOMEM; + goto cleanup; + } +-#ifdef _MAIN_KERNEL_ + DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle); +-#else +- DXG_TRACE("New gpadl %d", dxgalloc->gpadl); +-#endif + + command_vgpu_to_host_init2(&set_store_command->hdr, + DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE, + device->process->host_handle); + set_store_command->device = device->handle; + set_store_command->allocation = host_alloc->allocation; +-#ifdef _MAIN_KERNEL_ + set_store_command->gpadl = dxgalloc->gpadl.gpadl_handle; +-#else +- set_store_command->gpadl = dxgalloc->gpadl; +-#endif + ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, + msg.size); + if (ret < 0) +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1706-virtio-pmem-Support-PCI-BAR-relative-addresses.patch b/patch/kernel/archive/wsl2-arm64-6.6/1706-virtio-pmem-Support-PCI-BAR-relative-addresses.patch new file mode 100644 index 
000000000000..ff20036ef714 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1706-virtio-pmem-Support-PCI-BAR-relative-addresses.patch @@ -0,0 +1,80 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Taylor Stark +Date: Thu, 15 Jul 2021 15:35:05 -0700 +Subject: virtio-pmem: Support PCI BAR-relative addresses + +Update virtio-pmem to allow for the pmem region to be specified in either +guest absolute terms or as a PCI BAR-relative address. This is required +to support virtio-pmem in Hyper-V, since Hyper-V only allows PCI devices +to operate on PCI memory ranges defined via BARs. + +Virtio-pmem will check for a shared memory window and use that if found, +else it will fallback to using the guest absolute addresses in +virtio_pmem_config. This was chosen over defining a new feature bit, +since it's similar to how virtio-fs is configured. + +Signed-off-by: Taylor Stark + +Link: https://lore.kernel.org/r/20210715223505.GA29329@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net +Signed-off-by: Tyler Hicks +--- + drivers/nvdimm/virtio_pmem.c | 21 ++++++++-- + drivers/nvdimm/virtio_pmem.h | 3 ++ + 2 files changed, 20 insertions(+), 4 deletions(-) + +diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c +index a92eb172f0e7..ec20d642f030 100644 +--- a/drivers/nvdimm/virtio_pmem.c ++++ b/drivers/nvdimm/virtio_pmem.c +@@ -36,6 +36,8 @@ static int virtio_pmem_probe(struct virtio_device *vdev) + struct virtio_pmem *vpmem; + struct resource res; + int err = 0; ++ bool have_shm_region; ++ struct virtio_shm_region pmem_region; + + if (!vdev->config->get) { + dev_err(&vdev->dev, "%s failure: config access disabled\n", +@@ -57,10 +59,21 @@ static int virtio_pmem_probe(struct virtio_device *vdev) + goto out_err; + } + +- virtio_cread_le(vpmem->vdev, struct virtio_pmem_config, +- start, &vpmem->start); +- virtio_cread_le(vpmem->vdev, struct virtio_pmem_config, +- size, &vpmem->size); ++ /* Retrieve the pmem device's address and size. It may have been supplied ++ * as a PCI BAR-relative shared memory region, or as a guest absolute address. 
++ */ ++ have_shm_region = virtio_get_shm_region(vpmem->vdev, &pmem_region, ++ VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION); ++ ++ if (have_shm_region) { ++ vpmem->start = pmem_region.addr; ++ vpmem->size = pmem_region.len; ++ } else { ++ virtio_cread_le(vpmem->vdev, struct virtio_pmem_config, ++ start, &vpmem->start); ++ virtio_cread_le(vpmem->vdev, struct virtio_pmem_config, ++ size, &vpmem->size); ++ } + + res.start = vpmem->start; + res.end = vpmem->start + vpmem->size - 1; +diff --git a/drivers/nvdimm/virtio_pmem.h b/drivers/nvdimm/virtio_pmem.h +index 0dddefe594c4..62bb564e81cb 100644 +--- a/drivers/nvdimm/virtio_pmem.h ++++ b/drivers/nvdimm/virtio_pmem.h +@@ -50,6 +50,9 @@ struct virtio_pmem { + __u64 size; + }; + ++/* For the id field in virtio_pci_shm_cap */ ++#define VIRTIO_PMEM_SHMCAP_ID_PMEM_REGION 0 ++ + void virtio_pmem_host_ack(struct virtqueue *vq); + int async_pmem_flush(struct nd_region *nd_region, struct bio *bio); + #endif +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1707-virtio-pmem-Set-DRIVER_OK-status-prior-to-creating-pmem-region.patch b/patch/kernel/archive/wsl2-arm64-6.6/1707-virtio-pmem-Set-DRIVER_OK-status-prior-to-creating-pmem-region.patch new file mode 100644 index 000000000000..aaf3ef986246 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1707-virtio-pmem-Set-DRIVER_OK-status-prior-to-creating-pmem-region.patch @@ -0,0 +1,52 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Taylor Stark +Date: Thu, 15 Jul 2021 15:36:38 -0700 +Subject: virtio-pmem: Set DRIVER_OK status prior to creating pmem region + +Update virtio-pmem to call virtio_device_ready prior to creating the pmem +region. Otherwise, the guest may try to access the pmem region prior to +the DRIVER_OK status being set. + +In the case of Hyper-V, the backing pmem file isn't mapped to the guest +until the DRIVER_OK status is set. Therefore, attempts to access the pmem +region can cause the guest to crash. Hyper-V could map the file earlier, +for example at VM creation, but we didn't want to pay the mapping cost if +the device is never used. Additionally, it felt weird to allow the guest +to access the region prior to the device fully coming online. + +Signed-off-by: Taylor Stark +Reviewed-by: Pankaj Gupta + +Link: https://lore.kernel.org/r/20210715223638.GA29649@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net +Signed-off-by: Tyler Hicks +--- + drivers/nvdimm/virtio_pmem.c | 6 ++++++ + 1 file changed, 6 insertions(+) + +diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c +index ec20d642f030..48dfc5d1c3a4 100644 +--- a/drivers/nvdimm/virtio_pmem.c ++++ b/drivers/nvdimm/virtio_pmem.c +@@ -90,6 +90,11 @@ static int virtio_pmem_probe(struct virtio_device *vdev) + + dev_set_drvdata(&vdev->dev, vpmem->nvdimm_bus); + ++ /* Online the device prior to creating a pmem region, to ensure that ++ * the region is never touched while the device is offline. 
++ */ ++ virtio_device_ready(vdev); ++ + ndr_desc.res = &res; + + ndr_desc.numa_node = memory_add_physaddr_to_nid(res.start); +@@ -118,6 +123,7 @@ static int virtio_pmem_probe(struct virtio_device *vdev) + } + return 0; + out_nd: ++ vdev->config->reset(vdev); + virtio_reset_device(vdev); + nvdimm_bus_unregister(vpmem->nvdimm_bus); + out_vq: +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1708-drivers-hv-dxgkrnl-restore-uuid_le_cmp-removed-from-upstream-in-f5b3c341a.patch b/patch/kernel/archive/wsl2-arm64-6.6/1708-drivers-hv-dxgkrnl-restore-uuid_le_cmp-removed-from-upstream-in-f5b3c341a.patch new file mode 100644 index 000000000000..b892e4871aea --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1708-drivers-hv-dxgkrnl-restore-uuid_le_cmp-removed-from-upstream-in-f5b3c341a.patch @@ -0,0 +1,30 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Shradha Gupta +Date: Fri, 30 Sep 2022 08:01:38 +0200 +Subject: drivers: hv: dxgkrnl: restore `uuid_le_cmp` removed from upstream in + f5b3c341a + +--- + drivers/hv/dxgkrnl/dxgmodule.c | 6 ++++++ + 1 file changed, 6 insertions(+) + +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index f419597f711a..1deef95a79cf 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -27,6 +27,12 @@ + #undef dev_fmt + #define dev_fmt(fmt) "dxgk: " fmt + ++// Was removed from include/linux/uuid.h in f5b3c341a: "mei: Move uuid_le_cmp() to its only user" -- this would be the 2nd user ;-) ++static inline int uuid_le_cmp(const guid_t u1, const guid_t u2) ++{ ++ return memcmp(&u1, &u2, sizeof(guid_t)); ++} ++ + /* + * Interface from dxgglobal + */ +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-arm64-6.6/1709-drivers-hv-dxgkrnl-adapt-dxg_remove_vmbus-to-96ec29396-s-reality-void-return.patch b/patch/kernel/archive/wsl2-arm64-6.6/1709-drivers-hv-dxgkrnl-adapt-dxg_remove_vmbus-to-96ec29396-s-reality-void-return.patch new file mode 100644 index 000000000000..530c3a288ec2 --- /dev/null +++ b/patch/kernel/archive/wsl2-arm64-6.6/1709-drivers-hv-dxgkrnl-adapt-dxg_remove_vmbus-to-96ec29396-s-reality-void-return.patch @@ -0,0 +1,41 @@ +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001 +From: Ricardo Pardini +Date: Sun, 26 Nov 2023 13:44:08 +0100 +Subject: drivers: hv: dxgkrnl: adapt dxg_remove_vmbus to 96ec29396's reality + (void return) + +--- + drivers/hv/dxgkrnl/dxgmodule.c | 6 +----- + 1 file changed, 1 insertion(+), 5 deletions(-) + +diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c +index 1deef95a79cf..c91b659b3c41 100644 +--- a/drivers/hv/dxgkrnl/dxgmodule.c ++++ b/drivers/hv/dxgkrnl/dxgmodule.c +@@ -800,9 +800,8 @@ static int dxg_probe_vmbus(struct hv_device *hdev, + return ret; + } + +-static int dxg_remove_vmbus(struct hv_device *hdev) ++static void dxg_remove_vmbus(struct hv_device *hdev) + { +- int ret = 0; + struct dxgvgpuchannel *vgpu_channel; + struct dxgglobal *dxgglobal = dxggbl(); + +@@ -827,12 +826,9 @@ static int dxg_remove_vmbus(struct hv_device *hdev) + } else { + /* Unknown device type */ + DXG_ERR("Unknown device type"); +- ret = -ENODEV; + } + + mutex_unlock(&dxgglobal->device_mutex); +- +- return ret; + } + + MODULE_DEVICE_TABLE(vmbus, dxg_vmbus_id_table); +-- +Armbian + diff --git a/patch/kernel/archive/wsl2-x86-6.1 b/patch/kernel/archive/wsl2-x86-6.1 new file mode 120000 index 000000000000..7c68d5517bb6 --- /dev/null +++ b/patch/kernel/archive/wsl2-x86-6.1 @@ -0,0 +1 @@ +wsl2-arm64-6.1 \ No 
newline at end of file diff --git a/patch/kernel/archive/wsl2-x86-6.6 b/patch/kernel/archive/wsl2-x86-6.6 new file mode 120000 index 000000000000..e8710a96aeb5 --- /dev/null +++ b/patch/kernel/archive/wsl2-x86-6.6 @@ -0,0 +1 @@ +wsl2-arm64-6.6 \ No newline at end of file
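
Note on the ioctl.c hunks above: switching the error code from -EINVAL to -EFAULT after a failed copy_from_user()/copy_to_user() follows the usual kernel convention -- EFAULT reports a bad user-space pointer, while EINVAL is reserved for arguments that are semantically invalid. DXG_TRACE_IOCTL_END() itself is introduced by an earlier dxgkrnl patch in this series and is not visible in this excerpt; the sketch below is only an illustration of the resulting handler shape, and it assumes the macro wraps the same trace helper it replaces. dxgkio_example() and struct d3dkmt_example are hypothetical names used purely for illustration; DXG_TRACE(), DXG_ERR() and errorstr() are the helpers already used throughout the patches above.

/*
 * Illustrative sketch only -- the real DXG_TRACE_IOCTL_END definition
 * comes from an earlier patch in this series and may differ.
 */
#define DXG_TRACE_IOCTL_END(ret) \
	DXG_TRACE("ioctl end: %s %d", errorstr(ret), ret)

/* Resulting shape of a dxgkrnl ioctl handler after this series: */
static int dxgkio_example(struct dxgprocess *process, void *__user inargs)
{
	struct d3dkmt_example args;	/* hypothetical per-ioctl argument struct */
	int ret;

	ret = copy_from_user(&args, inargs, sizeof(args));
	if (ret) {
		DXG_ERR("failed to copy input args");
		ret = -EFAULT;	/* bad user pointer, not an invalid argument */
		goto cleanup;
	}

	/* ... per-ioctl work, releasing references on the way out ... */

cleanup:
	DXG_TRACE_IOCTL_END(ret);
	return ret;
}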