kernel warning without nested virtualization #19

Closed
tmakatos opened this issue Nov 15, 2019 · 2 comments
Labels: bug, duplicate

Comments


tmakatos commented Nov 15, 2019

The following was observed when testing MUSER with kernel 5.4 (need to get exact version):

[ 3299.086419] muser muser: muser_create_dev: new device 00000000-0000-0000-0000-000000000000
[ 3299.086445] vfio_mdev 00000000-0000-0000-0000-000000000000: Adding to iommu group 61
[ 3299.086447] vfio_mdev 00000000-0000-0000-0000-000000000000: MDEV: group_id = 61
[ 3333.950405] L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.
[ 3333.977350] muser muser: libmuser_mmap_dma: mmap_dma: end 0x7F9A995FB000 - start 0x7F9A9955B000 (0xA0000), off = 0x0
[ 3333.977359] ------------[ cut here ]------------
[ 3333.977364] WARNING: CPU: 0 PID: 41540 at /home/changpe1/kernel/linux/mm/rmap.c:1199 page_add_file_rmap+0x1cd/0x210
[ 3333.977365] Modules linked in: muser(OE) vfio_mdev mdev vfio_pci vfio_virqfd vfio_iommu_type1 vfio xt_CHECKSUM iptable_mangle xt_MASQUERADE iptable_nat nf_nat xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 tun bridge stp llc ebtable_filter ebtables ip6table_filter ip6_tables iscsi_tcp libiscsi_tcp rpcrdma ib_isert iscsi_target_mod ib_iser libiscsi scsi_transport_iscsi ib_srpt target_core_mod ib_srp scsi_transport_srp ib_ipoib rdma_ucm ib_umad rdma_cm ib_cm iw_cm mlx5_ib ib_uverbs ib_core intel_rapl_msr intel_rapl_common ppdev parport_pc parport sb_edac x86_pkg_temp_thermal intel_powerclamp fuse coretemp vmw_vsock_vmci_transport kvm_intel vsock kvm vmw_vmci irqbypass sunrpc crct10dif_pclmul crc32_pclmul ghash_clmulni_intel iTCO_wdt iTCO_vendor_support intel_cstate intel_uncore intel_rapl_perf ipmi_si mei_me ipmi_devintf ipmi_msghandler pcspkr joydev mei mxm_wmi ioatdma lpc_ich i2c_i801 acpi_power_meter acpi_pad xfs libcrc32c mlx5_core mgag200 drm_kms_helper drm_vram_helper ttm drm
[ 3333.977401]  igb nvme crc32c_intel nvme_core mlxfw pci_hyperv_intf ptp dca pps_core i2c_algo_bit wmi
[ 3333.977406] CPU: 0 PID: 41540 Comm: reactor_0 Tainted: G           OE     5.4.0-rc7+ #1
[ 3333.977407] Hardware name: Intel Corporation S2600WT2R/S2600WT2R, BIOS SE5C610.86B.01.01.0015.012820160943 01/28/2016
[ 3333.977408] RIP: 0010:page_add_file_rmap+0x1cd/0x210
[ 3333.977410] Code: 49 fd ff 48 63 54 24 04 e9 fc fe ff ff 48 c7 c6 10 0f 13 8f 48 89 df e8 a1 54 fe ff 0f 0b 48 89 87 80 00 00 00 e9 1c ff ff ff <0f> 0b e9 80 fe ff ff be 0f 00 00 00 48 89 c7 e8 3f 30 fd ff e9 0d
[ 3333.977411] RSP: 0018:ffffa9ce06fa7ca8 EFLAGS: 00010246
[ 3333.977412] RAX: 0017ffffc001000e RBX: fffff74218918000 RCX: 0000000000000000
[ 3333.977412] RDX: 0000000000000000 RSI: 0000000000000000 RDI: fffff74218918000
[ 3333.977413] RBP: 00007f9a9955b000 R08: 000fffffffe00000 R09: 00000000000a0000
[ 3333.977414] R10: ffff964547e73d08 R11: 0000000000000000 R12: ffff9645123e0ad8
[ 3333.977414] R13: ffff96451b7e0000 R14: 8000000000000027 R15: ffff9645468ab6a8
[ 3333.977415] FS:  00007f9a8ec63700(0000) GS:ffff96455f200000(0000) knlGS:0000000000000000
[ 3333.977416] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3333.977416] CR2: 00007f9a9814d738 CR3: 000000085a874002 CR4: 00000000001626f0
[ 3333.977417] Call Trace:
[ 3333.977423]  vm_insert_page+0x19a/0x2a0
[ 3333.977428]  vm_insert_pages+0x40/0x150 [muser]
[ 3333.977430]  libmuser_mmap+0xb1/0x380 [muser]
[ 3333.977431]  ? kmem_cache_alloc+0x166/0x220
[ 3333.977433]  mmap_region+0x3fd/0x600
[ 3333.977435]  do_mmap+0x479/0x5f0
[ 3333.977439]  ? security_mmap_file+0x5e/0xc0
[ 3333.977442]  vm_mmap_pgoff+0xd2/0x120
[ 3333.977444]  ksys_mmap_pgoff+0x199/0x230
[ 3333.977459]  do_syscall_64+0x55/0x180
[ 3333.977463]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

This was not with nested virtualization. QEMU was run as follows (need to get exact QEMU version as well):

build/x86_64-softmmu/qemu-system-x86_64 --enable-kvm -cpu host -smp 4 -m 4G -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on -numa node,memdev=mem0 -drive file=/root/fedora.img,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -net user,hostfwd=tcp::10022-:22 -net nic -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/00000000-0000-0000-0000-000000000000 -vnc 0.0.0.0:1
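
For context, the [muser] frames in the trace (libmuser_mmap -> vm_insert_pages) correspond to the module mapping the DMA region into the calling process one page at a time with vm_insert_page(), which is how page_add_file_rmap() is reached. A minimal sketch of that pattern (function and parameter names are made up for illustration, this is not the actual muser source):

#include <linux/mm.h>

/*
 * Sketch only: the per-page insertion loop implied by the call trace above.
 * Names are hypothetical, not muser's.
 */
static int sketch_insert_pages(struct vm_area_struct *vma,
                               struct page **pages, unsigned long npages)
{
        unsigned long addr = vma->vm_start;
        unsigned long i;
        int err;

        for (i = 0; i < npages; i++, addr += PAGE_SIZE) {
                /* vm_insert_page() -> insert_page() -> page_add_file_rmap(),
                 * where the WARNING above is emitted */
                err = vm_insert_page(vma, addr, pages[i]);
                if (err)
                        return err;
        }
        return 0;
}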
tmakatos added the bug label on Nov 15, 2019

tmakatos commented Feb 5, 2020

I just noticed that QEMU used huge pages, so this might have nothing to do with non-nested virtualization at all. Maybe related to #29.
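
If the hugepage backing is indeed the trigger, the pages being inserted would be compound (huge) pages, which is an unusual input for a plain vm_insert_page() and could plausibly be what page_add_file_rmap() warns about. A quick, purely hypothetical diagnostic (not the eventual fix) would be to log that case before inserting:

#include <linux/mm.h>
#include <linux/printk.h>

/*
 * Hypothetical diagnostic only: report when a compound (huge) page is about
 * to be handed to vm_insert_page(), to confirm the hugepage theory.
 */
static int sketch_insert_one(struct vm_area_struct *vma, unsigned long addr,
                             struct page *page)
{
        if (PageCompound(page))
                pr_warn_once("inserting compound page at vaddr %#lx\n", addr);
        return vm_insert_page(vma, addr, page);
}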

tmakatos commented

I see libmuser_mmap_dma in the stack trace, so this bug should be fixed by commit ea1c32f.

tmakatos added the duplicate label on Feb 26, 2020