User perms are not handled correctly #18
Comments
Please note that 0x800 is not a valid user permission with compressed 128-bit capabilities. See Table 5.6 (permission bit mapping) in the CHERI Architecture document. Currently bits 11-14 are "emulated" for legacy support while CheriBSD transitions to the new permission bit model. This should go away at some point. dbc013c should fix the problem you are seeing above, but you need to use a valid user permission bit.
Agree with 0x800 not being valid.
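For illustration, here is a hedged sketch of what "use a valid user permission bit" can look like in code. The field position and mask are assumptions standing in for Table 5.6 (they are only chosen to be consistent with the 0x800 -> 0x8000 symptom reported below), and it assumes the cheri_perms_and() helper from <cheriintrin.h>; adjust both for your target:

```c
#include <cheriintrin.h>
#include <stdbool.h>

/* Assumed user-permission field for compressed (CHERI-128) capabilities;
 * the authoritative layout is Table 5.6 of the architecture document. */
#define USER_PERMS_SHIFT 15
#define USER_PERMS_MASK  (0xfUL << USER_PERMS_SHIFT)

/* 0x800 (bit 11) falls outside this field, so it is rejected here. */
static bool user_perm_is_valid(unsigned long uperms)
{
    return uperms != 0 && (uperms & ~USER_PERMS_MASK) == 0;
}

/* Keep the hardware permissions, but only the requested (valid) user bits. */
static void *__capability set_user_perms(void *__capability cap,
                                         unsigned long uperms)
{
    if (!user_perm_is_valid(uperms)) {
        return cap;                              /* e.g. reject 0x800 */
    }
    return cheri_perms_and(cap, ~USER_PERMS_MASK | uperms);
}
```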
Start a VM with libvirt; while the guest OS is running, enter the poweroff command using the xhci keyboard. ASAN then reports the following memory-leak stack: Direct leak of 80 byte(s) in 5 object(s) allocated from: #0 0xfffd1e6431cb in __interceptor_malloc (/lib64/libasan.so.4+0xd31cb) #1 0xfffd1e107163 in g_malloc (/lib64/libglib-2.0.so.0+0x57163) #2 0xaaad39051367 in qemu_sglist_init /qemu/dma-helpers.c:43 #3 0xaaad3947c407 in pci_dma_sglist_init /qemu/include/hw/pci/pci.h:842 #4 0xaaad3947c407 in xhci_xfer_create_sgl /qemu/hw/usb/hcd-xhci.c:1446 #5 0xaaad3947c407 in xhci_setup_packet /qemu/hw/usb/hcd-xhci.c:1618 #6 0xaaad3948625f in xhci_submit /qemu/hw/usb/hcd-xhci.c:1827 #7 0xaaad3948625f in xhci_fire_transfer /qemu/hw/usb/hcd-xhci.c:1839 #8 0xaaad3948625f in xhci_kick_epctx /qemu/hw/usb/hcd-xhci.c:1991 #9 0xaaad3948f537 in xhci_doorbell_write /qemu/hw/usb/hcd-xhci.c:3158 #10 0xaaad38bcbfc7 in memory_region_write_accessor /qemu/memory.c:483 #11 0xaaad38bc654f in access_with_adjusted_size /qemu/memory.c:544 #12 0xaaad38bd1877 in memory_region_dispatch_write /qemu/memory.c:1482 #13 0xaaad38b1c77f in flatview_write_continue /qemu/exec.c:3167 #14 0xaaad38b1ca83 in flatview_write /qemu/exec.c:3207 #15 0xaaad38b268db in address_space_write /qemu/exec.c:3297 #16 0xaaad38bf909b in kvm_cpu_exec /qemu/accel/kvm/kvm-all.c:2383 #17 0xaaad38bb063f in qemu_kvm_cpu_thread_fn /qemu/cpus.c:1246 #18 0xaaad39821c93 in qemu_thread_start /qemu/util/qemu-thread-posix.c:519 #19 0xfffd1c8378bb (/lib64/libpthread.so.0+0x78bb) #20 0xfffd1c77616b (/lib64/libc.so.6+0xd616b) Reported-by: Euler Robot <euler.robot@huawei.com> Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com> Message-id: 20200110105855.81144-1-kuhn.chenqun@huawei.com Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
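The leak is the QEMUSGList set up in xhci_xfer_create_sgl() never being destroyed on this path. A minimal sketch of the pairing that avoids this class of leak (the surrounding function is illustrative, not the actual patch):

```c
#include "qemu/osdep.h"
#include "sysemu/dma.h"

/* Sketch: every qemu_sglist_init() needs a matching qemu_sglist_destroy(),
 * including on error and cancellation paths. */
static int xfer_map_sketch(QEMUSGList *sgl, DeviceState *dev, AddressSpace *as,
                           dma_addr_t base, dma_addr_t len)
{
    qemu_sglist_init(sgl, dev, 1, as);
    qemu_sglist_add(sgl, base, len);

    if (len == 0) {                 /* stand-in for a failed packet setup */
        qemu_sglist_destroy(sgl);   /* without this, ASAN reports the leak above */
        return -1;
    }
    return 0;
}
```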
All paths that lead to bdrv_backup_top_drop(), except for the call from backup_clean(), imply that the BDS AioContext has already been acquired, so doing it there too can potentially lead to QEMU hanging on AIO_WAIT_WHILE(). An easy way to trigger this situation is by issuing a two actions transaction, with a proper and a bogus blockdev-backup, so the second one will trigger a rollback. This will trigger a hang with an stack trace like this one: #0 0x00007fb680c75016 in __GI_ppoll (fds=0x55e74580f7c0, nfds=1, timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:39 #1 0x000055e743386e09 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77 #2 0x000055e743386e09 in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:336 #3 0x000055e743388dc4 in aio_poll (ctx=0x55e7458925d0, blocking=blocking@entry=true) at util/aio-posix.c:669 #4 0x000055e743305dea in bdrv_flush (bs=bs@entry=0x55e74593c0d0) at block/io.c:2878 #5 0x000055e7432be58e in bdrv_close (bs=0x55e74593c0d0) at block.c:4017 #6 0x000055e7432be58e in bdrv_delete (bs=<optimized out>) at block.c:4262 #7 0x000055e7432be58e in bdrv_unref (bs=bs@entry=0x55e74593c0d0) at block.c:5644 #8 0x000055e743316b9b in bdrv_backup_top_drop (bs=bs@entry=0x55e74593c0d0) at block/backup-top.c:273 #9 0x000055e74331461f in backup_job_create (job_id=0x0, bs=bs@entry=0x55e7458d5820, target=target@entry=0x55e74589f640, speed=0, sync_mode=MIRROR_SYNC_MODE_FULL, sync_bitmap=sync_bitmap@entry=0x0, bitmap_mode=BITMAP_SYNC_MODE_ON_SUCCESS, compress=false, filter_node_name=0x0, on_source_error=BLOCKDEV_ON_ERROR_REPORT, on_target_error=BLOCKDEV_ON_ERROR_REPORT, creation_flags=0, cb=0x0, opaque=0x0, txn=0x0, errp=0x7ffddfd1efb0) at block/backup.c:478 #10 0x000055e74315bc52 in do_backup_common (backup=backup@entry=0x55e746c066d0, bs=bs@entry=0x55e7458d5820, target_bs=target_bs@entry=0x55e74589f640, aio_context=aio_context@entry=0x55e7458a91e0, txn=txn@entry=0x0, errp=errp@entry=0x7ffddfd1efb0) at blockdev.c:3580 #11 0x000055e74315c37c in do_blockdev_backup (backup=backup@entry=0x55e746c066d0, txn=0x0, errp=errp@entry=0x7ffddfd1efb0) at /usr/src/debug/qemu-kvm-4.2.0-2.module+el8.2.0+5135+ed3b2489.x86_64/./qapi/qapi-types-block-core.h:1492 #12 0x000055e74315c449 in blockdev_backup_prepare (common=0x55e746a8de90, errp=0x7ffddfd1f018) at blockdev.c:1885 #13 0x000055e743160152 in qmp_transaction (dev_list=<optimized out>, has_props=<optimized out>, props=0x55e7467fe2c0, errp=errp@entry=0x7ffddfd1f088) at blockdev.c:2340 #14 0x000055e743287ff5 in qmp_marshal_transaction (args=<optimized out>, ret=<optimized out>, errp=0x7ffddfd1f0f8) at qapi/qapi-commands-transaction.c:44 #15 0x000055e74333de6c in do_qmp_dispatch (errp=0x7ffddfd1f0f0, allow_oob=<optimized out>, request=<optimized out>, cmds=0x55e743c28d60 <qmp_commands>) at qapi/qmp-dispatch.c:132 #16 0x000055e74333de6c in qmp_dispatch (cmds=0x55e743c28d60 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:175 #17 0x000055e74325c061 in monitor_qmp_dispatch (mon=0x55e745908030, req=<optimized out>) at monitor/qmp.c:145 #18 0x000055e74325c6fa in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:234 #19 0x000055e743385866 in aio_bh_call (bh=0x55e745807ae0) at util/async.c:117 #20 0x000055e743385866 in aio_bh_poll (ctx=ctx@entry=0x55e7458067a0) at util/async.c:117 #21 0x000055e743388c54 in aio_dispatch 
(ctx=0x55e7458067a0) at util/aio-posix.c:459 #22 0x000055e743385742 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260 #23 0x00007fb68543e67d in g_main_dispatch (context=0x55e745893a40) at gmain.c:3176 #24 0x00007fb68543e67d in g_main_context_dispatch (context=context@entry=0x55e745893a40) at gmain.c:3829 #25 0x000055e743387d08 in glib_pollfds_poll () at util/main-loop.c:219 #26 0x000055e743387d08 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242 #27 0x000055e743387d08 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518 #28 0x000055e74316a3c1 in main_loop () at vl.c:1828 #29 0x000055e743016a72 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4504 Fix this by not acquiring the AioContext there, and ensuring all paths leading to it have it already acquired (backup_clean()). RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1782111 Signed-off-by: Sergio Lopez <slp@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
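In practical terms, the convention after this fix is that the caller owns the AioContext around anything that may drain and run AIO_WAIT_WHILE(). A sketch of the backup_clean() side of that convention (not the literal patch):

```c
#include "qemu/osdep.h"
#include "block/block_int.h"

/* Sketch: acquire the BDS AioContext in the outer caller; the callee
 * (bdrv_backup_top_drop) no longer acquires it itself, so nested drained
 * sections cannot deadlock in AIO_WAIT_WHILE(). */
static void backup_clean_sketch(BlockDriverState *backup_top)
{
    AioContext *ctx = bdrv_get_aio_context(backup_top);

    aio_context_acquire(ctx);
    bdrv_backup_top_drop(backup_top);
    aio_context_release(ctx);
}
```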
Dirty map addition and removal functions are not acquiring the BDS AioContext, while they may call code that expects it to be acquired. This may trigger a crash with a stack trace like this one: #0 0x00007f0ef146370f in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 #1 0x00007f0ef144db25 in __GI_abort () at abort.c:79 #2 0x0000565022294dce in error_exit (err=<optimized out>, msg=msg@entry=0x56502243a730 <__func__.16350> "qemu_mutex_unlock_impl") at util/qemu-thread-posix.c:36 #3 0x00005650222950ba in qemu_mutex_unlock_impl (mutex=mutex@entry=0x5650244b0240, file=file@entry=0x565022439adf "util/async.c", line=line@entry=526) at util/qemu-thread-posix.c:108 #4 0x0000565022290029 in aio_context_release (ctx=ctx@entry=0x5650244b01e0) at util/async.c:526 #5 0x000056502221cd08 in bdrv_can_store_new_dirty_bitmap (bs=bs@entry=0x5650244dc820, name=name@entry=0x56502481d360 "bitmap1", granularity=granularity@entry=65536, errp=errp@entry=0x7fff22831718) at block/dirty-bitmap.c:542 #6 0x000056502206ae53 in qmp_block_dirty_bitmap_add (errp=0x7fff22831718, disabled=false, has_disabled=<optimized out>, persistent=<optimized out>, has_persistent=true, granularity=65536, has_granularity=<optimized out>, name=0x56502481d360 "bitmap1", node=<optimized out>) at blockdev.c:2894 #7 0x000056502206ae53 in qmp_block_dirty_bitmap_add (node=<optimized out>, name=0x56502481d360 "bitmap1", has_granularity=<optimized out>, granularity=<optimized out>, has_persistent=true, persistent=<optimized out>, has_disabled=false, disabled=false, errp=0x7fff22831718) at blockdev.c:2856 #8 0x00005650221847a3 in qmp_marshal_block_dirty_bitmap_add (args=<optimized out>, ret=<optimized out>, errp=0x7fff22831798) at qapi/qapi-commands-block-core.c:651 #9 0x0000565022247e6c in do_qmp_dispatch (errp=0x7fff22831790, allow_oob=<optimized out>, request=<optimized out>, cmds=0x565022b32d60 <qmp_commands>) at qapi/qmp-dispatch.c:132 #10 0x0000565022247e6c in qmp_dispatch (cmds=0x565022b32d60 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:175 #11 0x0000565022166061 in monitor_qmp_dispatch (mon=0x56502450faa0, req=<optimized out>) at monitor/qmp.c:145 #12 0x00005650221666fa in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:234 #13 0x000056502228f866 in aio_bh_call (bh=0x56502440eae0) at util/async.c:117 #14 0x000056502228f866 in aio_bh_poll (ctx=ctx@entry=0x56502440d7a0) at util/async.c:117 #15 0x0000565022292c54 in aio_dispatch (ctx=0x56502440d7a0) at util/aio-posix.c:459 #16 0x000056502228f742 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260 #17 0x00007f0ef5ce667d in g_main_dispatch (context=0x56502449aa40) at gmain.c:3176 #18 0x00007f0ef5ce667d in g_main_context_dispatch (context=context@entry=0x56502449aa40) at gmain.c:3829 #19 0x0000565022291d08 in glib_pollfds_poll () at util/main-loop.c:219 #20 0x0000565022291d08 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242 #21 0x0000565022291d08 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518 #22 0x00005650220743c1 in main_loop () at vl.c:1828 #23 0x0000565021f20a72 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4504 Fix this by acquiring the AioContext at qmp_block_dirty_bitmap_add() and qmp_block_dirty_bitmap_remove().
RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1782175 Signed-off-by: Sergio Lopez <slp@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
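A compact sketch of the shape of that fix, with the QMP handler reduced to just the locking (the real qmp_block_dirty_bitmap_add() does considerably more):

```c
#include "qemu/osdep.h"
#include "block/block_int.h"
#include "qapi/error.h"

/* Sketch: hold the BDS AioContext for the duration of the handler so the
 * dirty-bitmap code it calls can release/re-acquire it safely. */
static void block_dirty_bitmap_add_sketch(BlockDriverState *bs, const char *name,
                                          uint32_t granularity, Error **errp)
{
    AioContext *aio_context = bdrv_get_aio_context(bs);

    aio_context_acquire(aio_context);
    if (!bdrv_can_store_new_dirty_bitmap(bs, name, granularity, errp)) {
        aio_context_release(aio_context);
        return;
    }
    bdrv_create_dirty_bitmap(bs, granularity, name, errp);
    aio_context_release(aio_context);
}
```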
external_snapshot_abort() calls to bdrv_set_backing_hd(), which returns state->old_bs to the main AioContext, as it's intended to be used then the BDS is going to be released. As that's not the case when aborting an external snapshot, return it to the AioContext it was before the call. This issue can be triggered by issuing a transaction with two actions, a proper blockdev-snapshot-sync and a bogus one, so the second will trigger a transaction abort. This results in a crash with an stack trace like this one: #0 0x00007fa1048b28df in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 #1 0x00007fa10489ccf5 in __GI_abort () at abort.c:79 #2 0x00007fa10489cbc9 in __assert_fail_base (fmt=0x7fa104a03300 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x5572240b44d8 "bdrv_get_aio_context(old_bs) == bdrv_get_aio_context(new_bs)", file=0x557224014d30 "block.c", line=2240, function=<optimized out>) at assert.c:92 #3 0x00007fa1048aae96 in __GI___assert_fail (assertion=assertion@entry=0x5572240b44d8 "bdrv_get_aio_context(old_bs) == bdrv_get_aio_context(new_bs)", file=file@entry=0x557224014d30 "block.c", line=line@entry=2240, function=function@entry=0x5572240b5d60 <__PRETTY_FUNCTION__.31620> "bdrv_replace_child_noperm") at assert.c:101 #4 0x0000557223e631f8 in bdrv_replace_child_noperm (child=0x557225b9c980, new_bs=new_bs@entry=0x557225c42e40) at block.c:2240 #5 0x0000557223e68be7 in bdrv_replace_node (from=0x557226951a60, to=0x557225c42e40, errp=0x5572247d6138 <error_abort>) at block.c:4196 #6 0x0000557223d069c4 in external_snapshot_abort (common=0x557225d7e170) at blockdev.c:1731 #7 0x0000557223d069c4 in external_snapshot_abort (common=0x557225d7e170) at blockdev.c:1717 #8 0x0000557223d09013 in qmp_transaction (dev_list=<optimized out>, has_props=<optimized out>, props=0x557225cc7d70, errp=errp@entry=0x7ffe704c0c98) at blockdev.c:2360 #9 0x0000557223e32085 in qmp_marshal_transaction (args=<optimized out>, ret=<optimized out>, errp=0x7ffe704c0d08) at qapi/qapi-commands-transaction.c:44 #10 0x0000557223ee798c in do_qmp_dispatch (errp=0x7ffe704c0d00, allow_oob=<optimized out>, request=<optimized out>, cmds=0x5572247d3cc0 <qmp_commands>) at qapi/qmp-dispatch.c:132 #11 0x0000557223ee798c in qmp_dispatch (cmds=0x5572247d3cc0 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:175 #12 0x0000557223e06141 in monitor_qmp_dispatch (mon=0x557225c69ff0, req=<optimized out>) at monitor/qmp.c:120 #13 0x0000557223e0678a in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:209 #14 0x0000557223f2f366 in aio_bh_call (bh=0x557225b9dc60) at util/async.c:117 #15 0x0000557223f2f366 in aio_bh_poll (ctx=ctx@entry=0x557225b9c840) at util/async.c:117 #16 0x0000557223f32754 in aio_dispatch (ctx=0x557225b9c840) at util/aio-posix.c:459 #17 0x0000557223f2f242 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260 #18 0x00007fa10913467d in g_main_dispatch (context=0x557225c28e80) at gmain.c:3176 #19 0x00007fa10913467d in g_main_context_dispatch (context=context@entry=0x557225c28e80) at gmain.c:3829 #20 0x0000557223f31808 in glib_pollfds_poll () at util/main-loop.c:219 #21 0x0000557223f31808 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242 #22 0x0000557223f31808 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518 #23 0x0000557223d13201 in main_loop () at vl.c:1828 #24 0x0000557223bbfb82 in main (argc=<optimized out>, argv=<optimized out>, 
envp=<optimized out>) at vl.c:4504 RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1779036 Signed-off-by: Sergio Lopez <slp@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
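The abort path described above can be sketched roughly like this; the helper name and the way the original context is remembered are assumptions, and the real patch also has to juggle which context lock is currently held:

```c
#include "qemu/osdep.h"
#include "block/block.h"

/* Sketch: bdrv_set_backing_hd() may have moved state->old_bs to the main
 * AioContext; since the BDS is not being released on abort, move it back to
 * the context it lived in before the call. */
static void restore_old_bs_context_sketch(BlockDriverState *old_bs,
                                          AioContext *orig_ctx)
{
    if (bdrv_get_aio_context(old_bs) != orig_ctx) {
        bdrv_try_set_aio_context(old_bs, orig_ctx, NULL);
    }
}
```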
Coverity reports: *** CID 1419387: Memory - illegal accesses (OVERRUN) /hw/hppa/dino.c: 267 in dino_chip_read_with_attrs() 261 val = s->ilr & s->imr & s->icr; 262 break; 263 case DINO_TOC_ADDR: 264 val = s->toc_addr; 265 break; 266 case DINO_GMASK ... DINO_TLTIM: >>> CID 1419387: Memory - illegal accesses (OVERRUN) >>> Overrunning array "s->reg800" of 12 4-byte elements at element index 12 (byte offset 48) using index "(addr - 2048UL) / 4UL" (which evaluates to 12). 267 val = s->reg800[(addr - DINO_GMASK) / 4]; 268 if (addr == DINO_PAMR) { 269 val &= ~0x01; /* LSB is hardwired to 0 */ 270 } 271 if (addr == DINO_MLTIM) { 272 val &= ~0x07; /* 3 LSB are hardwired to 0 */ *** CID 1419393: Memory - corruptions (OVERRUN) /hw/hppa/dino.c: 363 in dino_chip_write_with_attrs() 357 /* These registers are read-only. */ 358 break; 359 360 case DINO_GMASK ... DINO_TLTIM: 361 i = (addr - DINO_GMASK) / 4; 362 val &= reg800_keep_bits[i]; >>> CID 1419393: Memory - corruptions (OVERRUN) >>> Overrunning array "s->reg800" of 12 4-byte elements at element index 12 (byte offset 48) using index "i" (which evaluates to 12). 363 s->reg800[i] = val; 364 break; 365 366 default: 367 /* Controlled by dino_chip_mem_valid above. */ 368 g_assert_not_reached(); *** CID 1419394: Memory - illegal accesses (OVERRUN) /hw/hppa/dino.c: 362 in dino_chip_write_with_attrs() 356 case DINO_IRR1: 357 /* These registers are read-only. */ 358 break; 359 360 case DINO_GMASK ... DINO_TLTIM: 361 i = (addr - DINO_GMASK) / 4; >>> CID 1419394: Memory - illegal accesses (OVERRUN) >>> Overrunning array "reg800_keep_bits" of 12 4-byte elements at element index 12 (byte offset 48) using index "i" (which evaluates to 12). 362 val &= reg800_keep_bits[i]; 363 s->reg800[i] = val; 364 break; 365 366 default: 367 /* Controlled by dino_chip_mem_valid above. */ Indeed the array should contain 13 entries, the undocumented register 0x82c is missing. Fix by increasing the array size and adding the missing register. 
CID 1419387 can be verified with: $ echo x 0xfff80830 | hppa-softmmu/qemu-system-hppa -S -monitor stdio -display none QEMU 4.2.50 monitor - type 'help' for more information (qemu) x 0xfff80830 qemu/hw/hppa/dino.c:267:15: runtime error: index 12 out of bounds for type 'uint32_t [12]' SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /home/phil/source/qemu/hw/hppa/dino.c:267:15 in 00000000fff80830: 0x00000000 and CID 1419393/1419394 with: $ echo writeb 0xfff80830 0x69 \ | hppa-softmmu/qemu-system-hppa -S -accel qtest -qtest stdio -display none [I 1581634452.654113] OPENED [R +4.105415] writeb 0xfff80830 0x69 qemu/hw/hppa/dino.c:362:16: runtime error: index 12 out of bounds for type 'const uint32_t [12]' SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior qemu/hw/hppa/dino.c:362:16 in ================================================================= ==29607==ERROR: AddressSanitizer: global-buffer-overflow on address 0x5577dae32f30 at pc 0x5577d93f2463 bp 0x7ffd97ea11b0 sp 0x7ffd97ea11a8 READ of size 4 at 0x5577dae32f30 thread T0 #0 0x5577d93f2462 in dino_chip_write_with_attrs qemu/hw/hppa/dino.c:362:16 #1 0x5577d9025664 in memory_region_write_with_attrs_accessor qemu/memory.c:503:12 #2 0x5577d9024920 in access_with_adjusted_size qemu/memory.c:539:18 #3 0x5577d9023608 in memory_region_dispatch_write qemu/memory.c:1482:13 #4 0x5577d8e3177a in flatview_write_continue qemu/exec.c:3166:23 #5 0x5577d8e20357 in flatview_write qemu/exec.c:3206:14 #6 0x5577d8e1fef4 in address_space_write qemu/exec.c:3296:18 #7 0x5577d8e20693 in address_space_rw qemu/exec.c:3306:16 #8 0x5577d9011595 in qtest_process_command qemu/qtest.c:432:13 #9 0x5577d900d19f in qtest_process_inbuf qemu/qtest.c:705:9 #10 0x5577d900ca22 in qtest_read qemu/qtest.c:717:5 #11 0x5577da8c4254 in qemu_chr_be_write_impl qemu/chardev/char.c:183:9 #12 0x5577da8c430c in qemu_chr_be_write qemu/chardev/char.c:195:9 #13 0x5577da8cf587 in fd_chr_read qemu/chardev/char-fd.c:68:9 #14 0x5577da9836cd in qio_channel_fd_source_dispatch qemu/io/channel-watch.c:84:12 #15 0x7faf44509ecc in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x4fecc) #16 0x5577dab75f96 in glib_pollfds_poll qemu/util/main-loop.c:219:9 #17 0x5577dab74797 in os_host_main_loop_wait qemu/util/main-loop.c:242:5 #18 0x5577dab7435a in main_loop_wait qemu/util/main-loop.c:518:11 #19 0x5577d9514eb3 in main_loop qemu/vl.c:1682:9 #20 0x5577d950699d in main qemu/vl.c:4450:5 #21 0x7faf41a87f42 in __libc_start_main (/lib64/libc.so.6+0x23f42) #22 0x5577d8cd4d4d in _start (qemu/build/sanitizer/hppa-softmmu/qemu-system-hppa+0x1256d4d) 0x5577dae32f30 is located 0 bytes to the right of global variable 'reg800_keep_bits' defined in 'qemu/hw/hppa/dino.c:87:23' (0x5577dae32f00) of size 48 SUMMARY: AddressSanitizer: global-buffer-overflow qemu/hw/hppa/dino.c:362:16 in dino_chip_write_with_attrs Shadow bytes around the buggy address: 0x0aaf7b5be590: 00 f9 f9 f9 f9 f9 f9 f9 00 02 f9 f9 f9 f9 f9 f9 0x0aaf7b5be5a0: 07 f9 f9 f9 f9 f9 f9 f9 07 f9 f9 f9 f9 f9 f9 f9 0x0aaf7b5be5b0: 07 f9 f9 f9 f9 f9 f9 f9 00 00 00 00 00 00 00 00 0x0aaf7b5be5c0: 00 00 00 02 f9 f9 f9 f9 00 00 00 00 00 00 00 00 0x0aaf7b5be5d0: 00 00 00 00 00 00 00 00 00 00 00 03 f9 f9 f9 f9 =>0x0aaf7b5be5e0: 00 00 00 00 00 00[f9]f9 f9 f9 f9 f9 00 00 00 00 0x0aaf7b5be5f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0aaf7b5be600: 00 00 01 f9 f9 f9 f9 f9 00 00 00 00 07 f9 f9 f9 0x0aaf7b5be610: f9 f9 f9 f9 00 00 00 00 00 00 00 00 00 00 00 00 0x0aaf7b5be620: 00 00 00 05 f9 f9 f9 f9 00 00 00 00 07 f9 f9 f9 0x0aaf7b5be630: f9 f9 f9 f9 
00 00 f9 f9 f9 f9 f9 f9 07 f9 f9 f9 Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==29607==ABORTING Fixes: Coverity CID 1419387 / 1419393 / 1419394 (commit 1809259) Acked-by: Helge Deller <deller@gmx.de> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20200218063355.18577-3-f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
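A sketch of the shape of the fix: the 0x800 register block spans 13 words, so both the state array and the keep-bits table grow by one entry to cover the undocumented register 0x82c. The DINO_TLTIM value and the keep-bits entry below are assumptions, shown only to make the size arithmetic concrete:

```c
#include <stdint.h>

#define DINO_GMASK   0x800
#define DINO_TLTIM   0x830                                   /* assumed last reg */
#define DINO800_REGS ((DINO_TLTIM - DINO_GMASK) / 4 + 1)     /* 13, not 12 */

/* With 13 entries, (addr - DINO_GMASK) / 4 can legally reach index 12. */
static uint32_t reg800[DINO800_REGS];
static const uint32_t reg800_keep_bits[DINO800_REGS] = {
    0 /* ... existing entries, plus one for the undocumented 0x82c ... */
};
```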
The virtqueue code sets up MemoryRegionCaches to access the virtqueue guest RAM data structures. The code currently assumes that VRingMemoryRegionCaches is initialized before device emulation code accesses the virtqueue. An assertion will fail in vring_get_region_caches() when this is not true. Device fuzzing found a case where this assumption is false (see below). Virtqueue guest RAM addresses can also be changed from a vCPU thread while an IOThread is accessing the virtqueue. This breaks the same assumption but this time the caches could become invalid partway through the virtqueue code. The code fetches the caches RCU pointer multiple times so we will need to validate the pointer every time it is fetched. Add checks each time we call vring_get_region_caches() and treat invalid caches as a nop: memory stores are ignored and memory reads return 0. The fuzz test failure is as follows: $ qemu -M pc -device virtio-blk-pci,id=drv0,drive=drive0,addr=4.0 \ -drive if=none,id=drive0,file=null-co://,format=raw,auto-read-only=off \ -drive if=none,id=drive1,file=null-co://,file.read-zeroes=on,format=raw \ -display none \ -qtest stdio endianness outl 0xcf8 0x80002020 outl 0xcfc 0xe0000000 outl 0xcf8 0x80002004 outw 0xcfc 0x7 write 0xe0000000 0x24 0x00ffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab5cffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab0000000001 inb 0x4 writew 0xe000001c 0x1 write 0xe0000014 0x1 0x0d The following error message is produced: qemu-system-x86_64: /home/stefanha/qemu/hw/virtio/virtio.c:286: vring_get_region_caches: Assertion `caches != NULL' failed. The backtrace looks like this: #0 0x00007ffff5520625 in raise () at /lib64/libc.so.6 #1 0x00007ffff55098d9 in abort () at /lib64/libc.so.6 #2 0x00007ffff55097a9 in _nl_load_domain.cold () at /lib64/libc.so.6 #3 0x00007ffff5518a66 in annobin_assert.c_end () at /lib64/libc.so.6 #4 0x00005555559073da in vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:286 #5 vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:283 #6 0x000055555590818d in vring_used_flags_set_bit (mask=1, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398 #7 virtio_queue_split_set_notification (enable=0, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398 #8 virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:451 #9 0x0000555555908512 in virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:444 #10 0x00005555558c697a in virtio_blk_handle_vq (s=0x5555575c57e0, vq=0x5555575ceea0) at qemu/hw/block/virtio-blk.c:775 #11 0x0000555555907836 in virtio_queue_notify_aio_vq (vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:2244 #12 0x0000555555cb5dd7 in aio_dispatch_handlers (ctx=ctx@entry=0x55555671a420) at util/aio-posix.c:429 #13 0x0000555555cb67a8 in aio_dispatch (ctx=0x55555671a420) at util/aio-posix.c:460 #14 0x0000555555cb307e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260 #15 0x00007ffff7bbc510 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0 #16 0x0000555555cb5848 in glib_pollfds_poll () at util/main-loop.c:219 #17 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242 #18 main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518 #19 0x00005555559b20c9 in main_loop () at vl.c:1683 #20 
0x0000555555838115 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4441 Reported-by: Alexander Bulekov <alxndr@bu.edu> Cc: Michael Tsirkin <mst@redhat.com> Cc: Cornelia Huck <cohuck@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: qemu-stable@nongnu.org Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20200207104619.164892-1-stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
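The defensive pattern described above looks roughly like this; VRingMemoryRegionCaches, VRingUsed and vring_get_region_caches() are internal to hw/virtio/virtio.c, and the accessor body is reduced to the NULL handling:

```c
/* Sketch: callers check for a NULL cache instead of asserting, so an
 * uninitialized or invalidated cache turns the access into a nop. */
static uint16_t vring_used_idx_sketch(VirtQueue *vq)
{
    VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);

    if (!caches) {
        return 0;       /* invalid caches: memory reads return 0 */
    }
    return virtio_lduw_phys_cached(vq->vdev, &caches->used,
                                   offsetof(VRingUsed, idx));
}

static void vring_used_idx_set_sketch(VirtQueue *vq, uint16_t val)
{
    VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);

    if (!caches) {
        return;         /* invalid caches: memory stores are ignored */
    }
    virtio_stw_phys_cached(vq->vdev, &caches->used,
                           offsetof(VRingUsed, idx), val);
}
```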
'crypto_opts' was not freed in qcow2_close(); this patch fixes the leak shown in the stack below: Direct leak of 24 byte(s) in 1 object(s) allocated from: #0 0x7f0edd81f970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970) #1 0x7f0edc6d149d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d) #2 0x55d7eaede63d in qobject_input_start_struct /mnt/sdb/qemu-new/qemu_test/qemu/qapi/qobject-input-visitor.c:295 #3 0x55d7eaed78b8 in visit_start_struct /mnt/sdb/qemu-new/qemu_test/qemu/qapi/qapi-visit-core.c:49 #4 0x55d7eaf5140b in visit_type_QCryptoBlockOpenOptions qapi/qapi-visit-crypto.c:290 #5 0x55d7eae43af3 in block_crypto_open_opts_init /mnt/sdb/qemu-new/qemu_test/qemu/block/crypto.c:163 #6 0x55d7eacd2924 in qcow2_update_options_prepare /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:1148 #7 0x55d7eacd33f7 in qcow2_update_options /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:1232 #8 0x55d7eacd9680 in qcow2_do_open /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:1512 #9 0x55d7eacdc55e in qcow2_open_entry /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:1792 #10 0x55d7eacdc8fe in qcow2_open /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:1819 #11 0x55d7eac3742d in bdrv_open_driver /mnt/sdb/qemu-new/qemu_test/qemu/block.c:1317 #12 0x55d7eac3e990 in bdrv_open_common /mnt/sdb/qemu-new/qemu_test/qemu/block.c:1575 #13 0x55d7eac4442c in bdrv_open_inherit /mnt/sdb/qemu-new/qemu_test/qemu/block.c:3126 #14 0x55d7eac45c3f in bdrv_open /mnt/sdb/qemu-new/qemu_test/qemu/block.c:3219 #15 0x55d7ead8e8a4 in blk_new_open /mnt/sdb/qemu-new/qemu_test/qemu/block/block-backend.c:397 #16 0x55d7eacde74c in qcow2_co_create /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:3534 #17 0x55d7eacdfa6d in qcow2_co_create_opts /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:3668 #18 0x55d7eac1c678 in bdrv_create_co_entry /mnt/sdb/qemu-new/qemu_test/qemu/block.c:485 #19 0x55d7eb0024d2 in coroutine_trampoline /mnt/sdb/qemu-new/qemu_test/qemu/util/coroutine-ucontext.c:115 Reported-by: Euler Robot <euler.robot@huawei.com> Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200227012950.12256-2-pannengyuan@huawei.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
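The corresponding cleanup can be sketched as follows; qcow2_close() performs many other teardown steps that are omitted here, and the helper name is illustrative:

```c
#include "qemu/osdep.h"
#include "qapi/qapi-types-crypto.h"

/* Sketch: free the QAPI-allocated crypto options on close, mirroring the
 * allocation done via qcow2_update_options_prepare() in the stack above.
 * BDRVQcow2State is qcow2's driver state struct (block/qcow2.h). */
static void qcow2_free_crypto_opts_sketch(BDRVQcow2State *s)
{
    qapi_free_QCryptoBlockOpenOptions(s->crypto_opts);
    s->crypto_opts = NULL;
}
```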
A use-after-free is possible: bdrv_unref_child() leaves bs->backing freed but not NULL. bdrv_attach_child() may produce a nested polling loop due to drain, and then access to the freed pointer is possible. I've produced the following crash on iotest 30 with modified code. It does not reproduce on master, but still seems possible: #0 __strcmp_avx2 () at /lib64/libc.so.6 #1 bdrv_backing_overridden (bs=0x55c9d3cc2060) at block.c:6350 #2 bdrv_refresh_filename (bs=0x55c9d3cc2060) at block.c:6404 #3 bdrv_backing_attach (c=0x55c9d48e5520) at block.c:1063 #4 bdrv_replace_child_noperm (child=child@entry=0x55c9d48e5520, new_bs=new_bs@entry=0x55c9d3cc2060) at block.c:2290 #5 bdrv_replace_child (child=child@entry=0x55c9d48e5520, new_bs=new_bs@entry=0x55c9d3cc2060) at block.c:2320 #6 bdrv_root_attach_child (child_bs=child_bs@entry=0x55c9d3cc2060, child_name=child_name@entry=0x55c9d241d478 "backing", child_role=child_role@entry=0x55c9d26ecee0 <child_backing>, ctx=<optimized out>, perm=<optimized out>, shared_perm=21, opaque=0x55c9d3c5a3d0, errp=0x7ffd117108e0) at block.c:2424 #7 bdrv_attach_child (parent_bs=parent_bs@entry=0x55c9d3c5a3d0, child_bs=child_bs@entry=0x55c9d3cc2060, child_name=child_name@entry=0x55c9d241d478 "backing", child_role=child_role@entry=0x55c9d26ecee0 <child_backing>, errp=errp@entry=0x7ffd117108e0) at block.c:5876 #8 in bdrv_set_backing_hd (bs=bs@entry=0x55c9d3c5a3d0, backing_hd=backing_hd@entry=0x55c9d3cc2060, errp=errp@entry=0x7ffd117108e0) at block.c:2576 #9 stream_prepare (job=0x55c9d49d84a0) at block/stream.c:150 #10 job_prepare (job=0x55c9d49d84a0) at job.c:761 #11 job_txn_apply (txn=<optimized out>, fn=<optimized out>) at job.c:145 #12 job_do_finalize (job=0x55c9d49d84a0) at job.c:778 #13 job_completed_txn_success (job=0x55c9d49d84a0) at job.c:832 #14 job_completed (job=0x55c9d49d84a0) at job.c:845 #15 job_completed (job=0x55c9d49d84a0) at job.c:836 #16 job_exit (opaque=0x55c9d49d84a0) at job.c:864 #17 aio_bh_call (bh=0x55c9d471a160) at util/async.c:117 #18 aio_bh_poll (ctx=ctx@entry=0x55c9d3c46720) at util/async.c:117 #19 aio_poll (ctx=ctx@entry=0x55c9d3c46720, blocking=blocking@entry=true) at util/aio-posix.c:728 #20 bdrv_parent_drained_begin_single (poll=true, c=0x55c9d3d558f0) at block/io.c:121 #21 bdrv_parent_drained_begin_single (c=c@entry=0x55c9d3d558f0, poll=poll@entry=true) at block/io.c:114 #22 bdrv_replace_child_noperm (child=child@entry=0x55c9d3d558f0, new_bs=new_bs@entry=0x55c9d3d27300) at block.c:2258 #23 bdrv_replace_child (child=child@entry=0x55c9d3d558f0, new_bs=new_bs@entry=0x55c9d3d27300) at block.c:2320 #24 bdrv_root_attach_child (child_bs=child_bs@entry=0x55c9d3d27300, child_name=child_name@entry=0x55c9d241d478 "backing", child_role=child_role@entry=0x55c9d26ecee0 <child_backing>, ctx=<optimized out>, perm=<optimized out>, shared_perm=21, opaque=0x55c9d3cc2060, errp=0x7ffd11710c60) at block.c:2424 #25 bdrv_attach_child (parent_bs=parent_bs@entry=0x55c9d3cc2060, child_bs=child_bs@entry=0x55c9d3d27300, child_name=child_name@entry=0x55c9d241d478 "backing", child_role=child_role@entry=0x55c9d26ecee0 <child_backing>, errp=errp@entry=0x7ffd11710c60) at block.c:5876 #26 bdrv_set_backing_hd (bs=bs@entry=0x55c9d3cc2060, backing_hd=backing_hd@entry=0x55c9d3d27300, errp=errp@entry=0x7ffd11710c60) at block.c:2576 #27 stream_prepare (job=0x55c9d495ead0) at block/stream.c:150 ... Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20200316060631.30052-2-vsementsov@virtuozzo.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
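A sketch of the kind of change that closes the dangling-pointer window (simplified from the bdrv_set_backing_hd() path described above):

```c
#include "qemu/osdep.h"
#include "block/block_int.h"

/* Sketch: clear bs->backing before anything that can poll, so a nested
 * aio_poll() triggered by drain never sees the freed child. */
static void detach_backing_sketch(BlockDriverState *bs)
{
    if (bs->backing) {
        bdrv_unref_child(bs, bs->backing);
        bs->backing = NULL;     /* do not leave a freed pointer behind */
    }
}
```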
All the Coverity-specific definitions of qemu_mutex_lock() and friends have a trailing semicolon. This works fine almost everywhere because of QEMU's mandatory-braces coding style and because most callsites are simple, but target/s390x/sigp.c has a use of qemu_mutex_trylock() as an if() statement, which makes the ';' a syntax error: "../target/s390x/sigp.c", line 461: warning #18: expected a ")" if (qemu_mutex_trylock(&qemu_sigp_mutex)) { ^ Remove the bogus semicolons from the macro definitions. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200319193323.2038-4-peter.maydell@linaro.org
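The problem and the fix in macro form, as a simplified rendering of those Coverity-only definitions (not the exact header contents):

```c
/* Before: the trailing ';' means that expanding the macro inside
 * `if (qemu_mutex_trylock(&m)) { ... }` produces `if (...;) {`,
 * which is the syntax error reported above. */
#define qemu_mutex_trylock(m) qemu_mutex_trylock_impl(m, __FILE__, __LINE__);

/* After: no trailing semicolon, so the macro behaves like a function call
 * and can be used as an expression inside an if() condition. */
#undef qemu_mutex_trylock
#define qemu_mutex_trylock(m) qemu_mutex_trylock_impl(m, __FILE__, __LINE__)
```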
when s->inflight is freed, vhost_dev_free_inflight may try to access s->inflight->addr, it will retrigger the following issue. ==7309==ERROR: AddressSanitizer: heap-use-after-free on address 0x604001020d18 at pc 0x555555ce948a bp 0x7fffffffb170 sp 0x7fffffffb160 READ of size 8 at 0x604001020d18 thread T0 #0 0x555555ce9489 in vhost_dev_free_inflight /root/smartx/qemu-el7/qemu-test/hw/virtio/vhost.c:1473 #1 0x555555cd86eb in virtio_reset /root/smartx/qemu-el7/qemu-test/hw/virtio/virtio.c:1214 #2 0x5555560d3eff in virtio_pci_reset hw/virtio/virtio-pci.c:1859 #3 0x555555f2ac53 in device_set_realized hw/core/qdev.c:893 #4 0x5555561d572c in property_set_bool qom/object.c:1925 #5 0x5555561de8de in object_property_set_qobject qom/qom-qobject.c:27 #6 0x5555561d99f4 in object_property_set_bool qom/object.c:1188 #7 0x555555e50ae7 in qdev_device_add /root/smartx/qemu-el7/qemu-test/qdev-monitor.c:626 #8 0x555555e51213 in qmp_device_add /root/smartx/qemu-el7/qemu-test/qdev-monitor.c:806 #9 0x555555e8ff40 in hmp_device_add /root/smartx/qemu-el7/qemu-test/hmp.c:1951 #10 0x555555be889a in handle_hmp_command /root/smartx/qemu-el7/qemu-test/monitor.c:3404 #11 0x555555beac8b in monitor_command_cb /root/smartx/qemu-el7/qemu-test/monitor.c:4296 #12 0x555556433eb7 in readline_handle_byte util/readline.c:393 #13 0x555555be89ec in monitor_read /root/smartx/qemu-el7/qemu-test/monitor.c:4279 #14 0x5555563285cc in tcp_chr_read chardev/char-socket.c:470 #15 0x7ffff670b968 in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x4a968) #16 0x55555640727c in glib_pollfds_poll util/main-loop.c:215 #17 0x55555640727c in os_host_main_loop_wait util/main-loop.c:238 #18 0x55555640727c in main_loop_wait util/main-loop.c:497 #19 0x555555b2d0bf in main_loop /root/smartx/qemu-el7/qemu-test/vl.c:2013 #20 0x555555b2d0bf in main /root/smartx/qemu-el7/qemu-test/vl.c:4776 #21 0x7fffdd2eb444 in __libc_start_main (/lib64/libc.so.6+0x22444) #22 0x555555b3767a (/root/smartx/qemu-el7/qemu-test/x86_64-softmmu/qemu-system-x86_64+0x5e367a) 0x604001020d18 is located 8 bytes inside of 40-byte region [0x604001020d10,0x604001020d38) freed by thread T0 here: #0 0x7ffff6f00508 in __interceptor_free (/lib64/libasan.so.4+0xde508) #1 0x7ffff671107d in g_free (/lib64/libglib-2.0.so.0+0x5007d) previously allocated by thread T0 here: #0 0x7ffff6f00a88 in __interceptor_calloc (/lib64/libasan.so.4+0xdea88) #1 0x7ffff6710fc5 in g_malloc0 (/lib64/libglib-2.0.so.0+0x4ffc5) SUMMARY: AddressSanitizer: heap-use-after-free /root/smartx/qemu-el7/qemu-test/hw/virtio/vhost.c:1473 in vhost_dev_free_inflight Shadow bytes around the buggy address: 0x0c08801fc150: fa fa 00 00 00 00 04 fa fa fa fd fd fd fd fd fa 0x0c08801fc160: fa fa fd fd fd fd fd fd fa fa 00 00 00 00 04 fa 0x0c08801fc170: fa fa 00 00 00 00 00 01 fa fa 00 00 00 00 04 fa 0x0c08801fc180: fa fa 00 00 00 00 00 01 fa fa 00 00 00 00 00 01 0x0c08801fc190: fa fa 00 00 00 00 00 fa fa fa 00 00 00 00 04 fa =>0x0c08801fc1a0: fa fa fd[fd]fd fd fd fa fa fa fd fd fd fd fd fa 0x0c08801fc1b0: fa fa fd fd fd fd fd fa fa fa fd fd fd fd fd fa 0x0c08801fc1c0: fa fa 00 00 00 00 00 fa fa fa fd fd fd fd fd fd 0x0c08801fc1d0: fa fa 00 00 00 00 00 01 fa fa fd fd fd fd fd fa 0x0c08801fc1e0: fa fa fd fd fd fd fd fa fa fa fd fd fd fd fd fd 0x0c08801fc1f0: fa fa 00 00 00 00 00 01 fa fa fd fd fd fd fd fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right 
redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb ==7309==ABORTING Signed-off-by: Li Feng <fengli@smartx.com> Message-Id: <20200417101707.14467-1-fengli@smartx.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
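The use-after-free goes away if the free routine releases the mapping and then clears the fields, roughly like this (a sketch of the idea, not the exact patch):

```c
#include "qemu/osdep.h"
#include "hw/virtio/vhost.h"
#include "qemu/memfd.h"

/* Sketch: make freeing the inflight region idempotent, so a later
 * virtio_reset() cannot dereference an already-freed mapping. */
static void vhost_dev_free_inflight_sketch(struct vhost_inflight *inflight)
{
    if (inflight && inflight->addr) {
        qemu_memfd_free(inflight->addr, inflight->size, inflight->fd);
        inflight->addr = NULL;
        inflight->fd = -1;
    }
}
```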
On restart, we were scheduling a BH to process queued requests, which would run before starting up the data plane, leading to those requests being assigned and started on coroutines on the main context. This could cause requests to be wrongly processed in parallel from different threads (the main thread and the iothread managing the data plane), potentially leading to multiple issues. For example, stopping and resuming a VM multiple times while the guest is generating I/O on a virtio_blk device can trigger a crash with a stack trace looking like this one: <------> Thread 2 (Thread 0x7ff736765700 (LWP 1062503)): #0 0x00005567a13b99d6 in iov_memset (iov=0x6563617073206f4e, iov_cnt=1717922848, offset=516096, fillc=0, bytes=7018105756081554803) at util/iov.c:69 #1 0x00005567a13bab73 in qemu_iovec_memset (qiov=0x7ff73ec99748, offset=516096, fillc=0, bytes=7018105756081554803) at util/iov.c:530 #2 0x00005567a12f411c in qemu_laio_process_completion (laiocb=0x7ff6512ee6c0) at block/linux-aio.c:86 #3 0x00005567a12f42ff in qemu_laio_process_completions (s=0x7ff7182e8420) at block/linux-aio.c:217 #4 0x00005567a12f480d in ioq_submit (s=0x7ff7182e8420) at block/linux-aio.c:323 #5 0x00005567a12f43d9 in qemu_laio_process_completions_and_submit (s=0x7ff7182e8420) at block/linux-aio.c:236 #6 0x00005567a12f44c2 in qemu_laio_poll_cb (opaque=0x7ff7182e8430) at block/linux-aio.c:267 #7 0x00005567a13aed83 in run_poll_handlers_once (ctx=0x5567a2b58c70, timeout=0x7ff7367645f8) at util/aio-posix.c:520 #8 0x00005567a13aee9f in run_poll_handlers (ctx=0x5567a2b58c70, max_ns=16000, timeout=0x7ff7367645f8) at util/aio-posix.c:562 #9 0x00005567a13aefde in try_poll_mode (ctx=0x5567a2b58c70, timeout=0x7ff7367645f8) at util/aio-posix.c:597 #10 0x00005567a13af115 in aio_poll (ctx=0x5567a2b58c70, blocking=true) at util/aio-posix.c:639 #11 0x00005567a109acca in iothread_run (opaque=0x5567a2b29760) at iothread.c:75 #12 0x00005567a13b2790 in qemu_thread_start (args=0x5567a2b694c0) at util/qemu-thread-posix.c:519 #13 0x00007ff73eedf2de in start_thread () at /lib64/libpthread.so.0 #14 0x00007ff73ec10e83 in clone () at /lib64/libc.so.6 Thread 1 (Thread 0x7ff743986f00 (LWP 1062500)): #0 0x00005567a13b99d6 in iov_memset (iov=0x6563617073206f4e, iov_cnt=1717922848, offset=516096, fillc=0, bytes=7018105756081554803) at util/iov.c:69 #1 0x00005567a13bab73 in qemu_iovec_memset (qiov=0x7ff73ec99748, offset=516096, fillc=0, bytes=7018105756081554803) at util/iov.c:530 #2 0x00005567a12f411c in qemu_laio_process_completion (laiocb=0x7ff6512ee6c0) at block/linux-aio.c:86 #3 0x00005567a12f42ff in qemu_laio_process_completions (s=0x7ff7182e8420) at block/linux-aio.c:217 #4 0x00005567a12f480d in ioq_submit (s=0x7ff7182e8420) at block/linux-aio.c:323 #5 0x00005567a12f4a2f in laio_do_submit (fd=19, laiocb=0x7ff5f4ff9ae0, offset=472363008, type=2) at block/linux-aio.c:375 #6 0x00005567a12f4af2 in laio_co_submit (bs=0x5567a2b8c460, s=0x7ff7182e8420, fd=19, offset=472363008, qiov=0x7ff5f4ff9ca0, type=2) at block/linux-aio.c:394 #7 0x00005567a12f1803 in raw_co_prw (bs=0x5567a2b8c460, offset=472363008, bytes=20480, qiov=0x7ff5f4ff9ca0, type=2) at block/file-posix.c:1892 #8 0x00005567a12f1941 in raw_co_pwritev (bs=0x5567a2b8c460, offset=472363008, bytes=20480, qiov=0x7ff5f4ff9ca0, flags=0) at block/file-posix.c:1925 #9 0x00005567a12fe3e1 in bdrv_driver_pwritev (bs=0x5567a2b8c460, offset=472363008, bytes=20480, qiov=0x7ff5f4ff9ca0, qiov_offset=0, flags=0) at block/io.c:1183 #10 0x00005567a1300340 in bdrv_aligned_pwritev (child=0x5567a2b5b070, req=0x7ff5f4ff9db0, offset=472363008, bytes=20480, align=512, qiov=0x7ff72c0425b8, qiov_offset=0, flags=0) at block/io.c:1980 #11 0x00005567a1300b29 in bdrv_co_pwritev_part (child=0x5567a2b5b070, offset=472363008, bytes=20480, qiov=0x7ff72c0425b8, qiov_offset=0, flags=0) at block/io.c:2137 #12 0x00005567a12baba1 in qcow2_co_pwritev_task (bs=0x5567a2b92740, file_cluster_offset=472317952, offset=487305216, bytes=20480, qiov=0x7ff72c0425b8, qiov_offset=0, l2meta=0x0) at block/qcow2.c:2444 #13 0x00005567a12bacdb in qcow2_co_pwritev_task_entry (task=0x5567a2b48540) at block/qcow2.c:2475 #14 0x00005567a13167d8 in aio_task_co (opaque=0x5567a2b48540) at block/aio_task.c:45 #15 0x00005567a13cf00c in coroutine_trampoline (i0=738245600, i1=32759) at util/coroutine-ucontext.c:115 #16 0x00007ff73eb622e0 in __start_context () at /lib64/libc.so.6 #17 0x00007ff6626f1350 in () #18 0x0000000000000000 in () <------> This is also known to cause crashes with this message (assertion failed): aio_co_schedule: Co-routine was already scheduled in 'aio_co_schedule' RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1812765 Signed-off-by: Sergio Lopez <slp@redhat.com> Message-Id: <20200603093240.40489-3-slp@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
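The idea behind the fix can be sketched like this: queued requests must be resumed in the AioContext that owns the device's data plane rather than on the main loop, for example by scheduling the restart bottom half there. The restart callback name is an assumption, not the actual QEMU function:

```c
#include "qemu/osdep.h"
#include "sysemu/block-backend.h"
#include "block/aio.h"

/* Sketch: run the "process queued requests" BH in the BlockBackend's
 * AioContext, so the iothread managing the data plane handles the requests
 * instead of the main thread racing with it. */
static void schedule_restart_sketch(BlockBackend *blk, QEMUBHFunc *restart_bh,
                                    void *opaque)
{
    aio_bh_schedule_oneshot(blk_get_aio_context(blk), restart_bh, opaque);
}
```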
Since commit d70c996, when enabling the PMU we get: $ qemu-system-aarch64 -cpu host,pmu=on -M virt,accel=kvm,gic-version=3 Segmentation fault (core dumped) Thread 1 "qemu-system-aar" received signal SIGSEGV, Segmentation fault. 0x0000aaaaaae356d0 in kvm_ioctl (s=0x0, type=44547) at accel/kvm/kvm-all.c:2588 2588 ret = ioctl(s->fd, type, arg); (gdb) bt #0 0x0000aaaaaae356d0 in kvm_ioctl (s=0x0, type=44547) at accel/kvm/kvm-all.c:2588 #1 0x0000aaaaaae31568 in kvm_check_extension (s=0x0, extension=126) at accel/kvm/kvm-all.c:916 #2 0x0000aaaaaafce254 in kvm_arm_pmu_supported (cpu=0xaaaaac214ab0) at target/arm/kvm.c:213 #3 0x0000aaaaaafc0f94 in arm_set_pmu (obj=0xaaaaac214ab0, value=true, errp=0xffffffffe438) at target/arm/cpu.c:1111 #4 0x0000aaaaab5533ac in property_set_bool (obj=0xaaaaac214ab0, v=0xaaaaac223a80, name=0xaaaaac11a970 "pmu", opaque=0xaaaaac222730, errp=0xffffffffe438) at qom/object.c:2170 #5 0x0000aaaaab5512f0 in object_property_set (obj=0xaaaaac214ab0, v=0xaaaaac223a80, name=0xaaaaac11a970 "pmu", errp=0xffffffffe438) at qom/object.c:1328 #6 0x0000aaaaab551e10 in object_property_parse (obj=0xaaaaac214ab0, string=0xaaaaac11b4c0 "on", name=0xaaaaac11a970 "pmu", errp=0xffffffffe438) at qom/object.c:1561 #7 0x0000aaaaab54ee8c in object_apply_global_props (obj=0xaaaaac214ab0, props=0xaaaaac018e20, errp=0xaaaaabd6fd88 <error_fatal>) at qom/object.c:407 #8 0x0000aaaaab1dd5a4 in qdev_prop_set_globals (dev=0xaaaaac214ab0) at hw/core/qdev-properties.c:1218 #9 0x0000aaaaab1d9fac in device_post_init (obj=0xaaaaac214ab0) at hw/core/qdev.c:1050 ... #15 0x0000aaaaab54f310 in object_initialize_with_type (obj=0xaaaaac214ab0, size=52208, type=0xaaaaabe237f0) at qom/object.c:512 #16 0x0000aaaaab54fa24 in object_new_with_type (type=0xaaaaabe237f0) at qom/object.c:687 #17 0x0000aaaaab54fa80 in object_new (typename=0xaaaaabe23970 "host-arm-cpu") at qom/object.c:702 #18 0x0000aaaaaaf04a74 in machvirt_init (machine=0xaaaaac0a8550) at hw/arm/virt.c:1770 #19 0x0000aaaaab1e8720 in machine_run_board_init (machine=0xaaaaac0a8550) at hw/core/machine.c:1138 #20 0x0000aaaaaaf95394 in qemu_init (argc=5, argv=0xffffffffea58, envp=0xffffffffea88) at softmmu/vl.c:4348 #21 0x0000aaaaaada3f74 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at softmmu/main.c:48 This is because in frame #2, cpu->kvm_state is still NULL (the vCPU is not yet realized). KVM has a hard requirement of all cores supporting the same feature set. We only need to check if the accelerator supports a feature, not each vCPU individually. Fix by removing the 'CPUState *cpu' argument from the kvm_arm_<FEATURE>_supported() functions. Fixes: d70c996 ('Use CPUState::kvm_state in kvm_arm_pmu_supported') Reported-by: Haibo Xu <haibo.xu@linaro.org> Reviewed-by: Andrew Jones <drjones@redhat.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
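The shape of that fix can be sketched as follows: query the accelerator state directly instead of going through a vCPU that has not been realized yet (simplified; the _sketch suffix marks it as illustrative rather than the exact declaration):

```c
#include "qemu/osdep.h"
#include "sysemu/kvm.h"
#include <linux/kvm.h>

/* Sketch: the feature is a property of the accelerator, so use the global
 * kvm_state rather than cpu->kvm_state, which is NULL before realize. */
static bool kvm_arm_pmu_supported_sketch(void)
{
    return kvm_check_extension(kvm_state, KVM_CAP_ARM_PMU_V3);
}
```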
The stored permissions do not correspond to the loaded ones.
Permissions 0x800 become 0x8000.