forked from qemu/qemu
pc-bios: Update SeaBIOS to skip clearing screen #1
Closed
dcpleung wants to merge 1 commit into zephyrproject-rtos:zephyr from dcpleung:fix_x86_clearing_screen
Conversation
This helps with looking at emulator outputs (from both running locally and sanitycheck), as clearing the screen and disabling line wrapping affect the developer's terminal.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
galak (Member) requested changes on Oct 17, 2019:
I think this would be better done in the sdk repo as we haven't used this qemu repo in some time.
dcpleung (Author) replied:
Moved to zephyrproject-rtos/sdk-ng#124
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
Currently offloads disabled by guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET command are not preserved on VM migration. Instead all offloads reported by guest features (via VIRTIO_PCI_GUEST_FEATURES) get enabled.
What happens is: first the VirtIONet::curr_guest_offloads gets restored and offloads are getting set correctly:

#0 qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
#1 virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
#2 virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
#3 vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:168
#4 virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
#5 virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
#6 vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
#7 vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
#8 qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
#9 qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
#10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
#11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449

However later on the features are getting restored, and offloads get reset to everything supported by features:

#0 qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
#1 virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
#2 virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
#3 virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
#4 virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
#5 virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
#6 vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
#7 vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
#8 qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
#9 qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
#10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
#11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449

Fix this by preserving the state in saved_guest_offloads field and pushing out offload initialization to the new post load hook.

Cc: qemu-stable@nongnu.org
Signed-off-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 7788c3f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
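The ordering problem and its fix are easy to model outside QEMU. Below is a minimal, self-contained C sketch (hypothetical names, not the actual QEMU code) of the pattern described above: snapshot the migrated value before the feature-restore step can clobber it, then reapply it from a post-load hook that runs last:

```c
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint64_t curr_guest_offloads;   /* live value, migrated */
    uint64_t saved_guest_offloads;  /* snapshot used by the post-load hook */
} NetState;

/* Step 1: device post-load restores the migrated value and saves a copy. */
static void device_post_load(NetState *s, uint64_t migrated)
{
    s->curr_guest_offloads = migrated;
    s->saved_guest_offloads = migrated;
}

/* Step 2: feature restore resets offloads to everything the features
 * allow -- this is the step that used to discard the guest's choice. */
static void set_features(NetState *s, uint64_t features)
{
    s->curr_guest_offloads = features;
}

/* Step 3: the new post-load hook runs after feature restore and
 * reapplies the snapshot, so guest-disabled offloads survive. */
static void post_load_hook(NetState *s)
{
    s->curr_guest_offloads = s->saved_guest_offloads;
}

int main(void)
{
    NetState s = {0};
    device_post_load(&s, 0x1);   /* guest had only csum enabled */
    set_features(&s, 0x7);       /* features would re-enable tso4/tso6 */
    post_load_hook(&s);
    printf("offloads after load: 0x%llx\n",
           (unsigned long long)s.curr_guest_offloads); /* prints 0x1 */
    return 0;
}
```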
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
When 'system_reset' is called, the main loop clears the memory
region cache before the BH has a chance to execute. Later, when
the deferred function is called, some assumptions that were
made when scheduling it are no longer true when it actually
executes.
This is what happens using a virtio-blk device (fresh RHEL7.8 install):
$ (sleep 12.3; echo system_reset; sleep 12.3; echo system_reset; sleep 1; echo q) \
| qemu-system-x86_64 -m 4G -smp 8 -boot menu=on \
-device virtio-blk-pci,id=image1,drive=drive_image1 \
-drive file=/var/lib/libvirt/images/rhel78.qcow2,if=none,id=drive_image1,format=qcow2,cache=none \
-device virtio-net-pci,netdev=net0,id=nic0,mac=52:54:00:c4:e7:84 \
-netdev tap,id=net0,script=/bin/true,downscript=/bin/true,vhost=on \
-monitor stdio -serial null -nographic
(qemu) system_reset
(qemu) system_reset
(qemu) qemu-system-x86_64: hw/virtio/virtio.c:225: vring_get_region_caches: Assertion `caches != NULL' failed.
Aborted
(gdb) bt
Thread 1 (Thread 0x7f109c17b680 (LWP 10939)):
#0 0x00005604083296d1 in vring_get_region_caches (vq=0x56040a24bdd0) at hw/virtio/virtio.c:227
#1 0x000056040832972b in vring_avail_flags (vq=0x56040a24bdd0) at hw/virtio/virtio.c:235
#2 0x000056040832d13d in virtio_should_notify (vdev=0x56040a240630, vq=0x56040a24bdd0) at hw/virtio/virtio.c:1648
#3 0x000056040832d1f8 in virtio_notify_irqfd (vdev=0x56040a240630, vq=0x56040a24bdd0) at hw/virtio/virtio.c:1662
#4 0x00005604082d213d in notify_guest_bh (opaque=0x56040a243ec0) at hw/block/dataplane/virtio-blk.c:75
#5 0x000056040883dc35 in aio_bh_call (bh=0x56040a243f10) at util/async.c:90
#6 0x000056040883dccd in aio_bh_poll (ctx=0x560409161980) at util/async.c:118
#7 0x0000560408842af7 in aio_dispatch (ctx=0x560409161980) at util/aio-posix.c:460
#8 0x000056040883e068 in aio_ctx_dispatch (source=0x560409161980, callback=0x0, user_data=0x0) at util/async.c:261
#9 0x00007f10a8fca06d in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#10 0x0000560408841445 in glib_pollfds_poll () at util/main-loop.c:215
#11 0x00005604088414bf in os_host_main_loop_wait (timeout=0) at util/main-loop.c:238
#12 0x00005604088415c4 in main_loop_wait (nonblocking=0) at util/main-loop.c:514
#13 0x0000560408416b1e in main_loop () at vl.c:1923
#14 0x000056040841e0e8 in main (argc=20, argv=0x7ffc2c3f9c58, envp=0x7ffc2c3f9d00) at vl.c:4578
Fix this by cancelling the BH when the virtio dataplane is stopped.
[This version of the patch was modified as discussed with Philippe on
the mailing list thread.
--Stefan]
Reported-by: Yihuang Yu <yihyu@redhat.com>
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Fixes: https://bugs.launchpad.net/qemu/+bug/1839428
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190816171503.24761-1-philmd@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit ebb6ff2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
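The shape of this fix -- cancel deferred work when the subsystem that scheduled it stops -- can be shown with a minimal, self-contained C sketch (hypothetical names, not the actual QEMU code):

```c
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    bool scheduled;
    void (*fn)(void *);
    void *opaque;
} BottomHalf;

static void bh_schedule(BottomHalf *bh) { bh->scheduled = true; }
static void bh_cancel(BottomHalf *bh)   { bh->scheduled = false; }

/* The main loop only invokes a BH that is still scheduled. */
static void main_loop_poll(BottomHalf *bh)
{
    if (bh->scheduled) {
        bh->scheduled = false;
        bh->fn(bh->opaque);
    }
}

static void notify_guest(void *opaque)
{
    /* Would touch vring caches here; crashes if they were already reset. */
    printf("notify guest: %s\n", (char *)opaque);
}

static void dataplane_stop(BottomHalf *bh)
{
    bh_cancel(bh);  /* the fix: drop the now-stale deferred work */
}

int main(void)
{
    BottomHalf bh = { .fn = notify_guest, .opaque = "vq0" };
    bh_schedule(&bh);
    dataplane_stop(&bh);  /* system_reset path */
    main_loop_poll(&bh);  /* callback no longer runs on torn-down state */
    return 0;
}
```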
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
The 'blockdev-create' QMP command was introduced as experimental feature in commit b0292b8, using the assert() debug call. It got promoted to 'stable' command in 3fb588a, but the assert call was not removed.
Some block drivers are optional, and bdrv_find_format() might return a NULL value, triggering the assertion. Stable code is not expected to abort, so return an error instead.
This is easily reproducible when libnfs is not installed:

./configure
[...]
module support no
Block whitelist (rw)
Block whitelist (ro)
libiscsi support yes
libnfs support no
[...]

Start QEMU:

$ qemu-system-x86_64 -S -qmp unix:/tmp/qemu.qmp,server,nowait

Send the 'blockdev-create' with the 'nfs' driver:

$ ( cat << 'EOF'
{'execute': 'qmp_capabilities'}
{'execute': 'blockdev-create', 'arguments': {'job-id': 'x', 'options': {'size': 0, 'driver': 'nfs', 'location': {'path': '/', 'server': {'host': '::1', 'type': 'inet'}}}}, 'id': 'x'}
EOF
) | socat STDIO UNIX:/tmp/qemu.qmp
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 1, "major": 4}, "package": "v4.1.0-733-g89ea03a7dc"}, "capabilities": ["oob"]}}
{"return": {}}

QEMU crashes:

$ gdb qemu-system-x86_64 core
Program received signal SIGSEGV, Segmentation fault.
(gdb) bt
#0 0x00007ffff510957f in raise () at /lib64/libc.so.6
#1 0x00007ffff50f3895 in abort () at /lib64/libc.so.6
#2 0x00007ffff50f3769 in _nl_load_domain.cold.0 () at /lib64/libc.so.6
#3 0x00007ffff5101a26 in .annobin_assert.c_end () at /lib64/libc.so.6
#4 0x0000555555d7e1f1 in qmp_blockdev_create (job_id=0x555556baee40 "x", options=0x555557666610, errp=0x7fffffffc770) at block/create.c:69
#5 0x0000555555c96b52 in qmp_marshal_blockdev_create (args=0x7fffdc003830, ret=0x7fffffffc7f8, errp=0x7fffffffc7f0) at qapi/qapi-commands-block-core.c:1314
#6 0x0000555555deb0a0 in do_qmp_dispatch (cmds=0x55555645de70 <qmp_commands>, request=0x7fffdc005c70, allow_oob=false, errp=0x7fffffffc898) at qapi/qmp-dispatch.c:131
#7 0x0000555555deb2a1 in qmp_dispatch (cmds=0x55555645de70 <qmp_commands>, request=0x7fffdc005c70, allow_oob=false) at qapi/qmp-dispatch.c:174

With this patch applied, QEMU returns a QMP error:

{'execute': 'blockdev-create', 'arguments': {'job-id': 'x', 'options': {'size': 0, 'driver': 'nfs', 'location': {'path': '/', 'server': {'host': '::1', 'type': 'inet'}}}}, 'id': 'x'}
{"id": "x", "error": {"class": "GenericError", "desc": "Block driver 'nfs' not found or not supported"}}

Cc: qemu-stable@nongnu.org
Reported-by: Xu Tian <xutian@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit d90d5ca)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
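The assert-to-error pattern the commit describes can be sketched in self-contained C (hypothetical names and driver table, not the actual QEMU code): a lookup that may legitimately fail must report an error instead of aborting the process:

```c
#include <stdio.h>
#include <string.h>

static const char *known_drivers[] = { "qcow2", "raw" };

static const char *find_format(const char *name)
{
    for (size_t i = 0; i < sizeof(known_drivers) / sizeof(known_drivers[0]); i++) {
        if (strcmp(known_drivers[i], name) == 0) {
            return known_drivers[i];
        }
    }
    return NULL;  /* optional driver not compiled in */
}

static int qmp_create(const char *driver, char *err, size_t errlen)
{
    const char *drv = find_format(driver);
    /* Before the fix: assert(drv); -- aborts the whole process. */
    if (!drv) {
        snprintf(err, errlen,
                 "Block driver '%s' not found or not supported", driver);
        return -1;
    }
    return 0;
}

int main(void)
{
    char err[128];
    if (qmp_create("nfs", err, sizeof(err)) < 0) {
        printf("{\"error\": {\"class\": \"GenericError\", \"desc\": \"%s\"}}\n", err);
    }
    return 0;
}
```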
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
Even for block nodes with bs->drv == NULL, we can't just ignore a bdrv_set_aio_context() call. Leaving the node in its old context can mean that it's still in an iothread context in bdrv_close_all() during shutdown, resulting in an attempted unlock of the AioContext lock which we don't hold.
This is an example stack trace of a related crash:

#0 0x00007ffff59da57f in raise () at /lib64/libc.so.6
#1 0x00007ffff59c4895 in abort () at /lib64/libc.so.6
#2 0x0000555555b97b1e in error_exit (err=<optimized out>, msg=msg@entry=0x555555d386d0 <__func__.19059> "qemu_mutex_unlock_impl") at util/qemu-thread-posix.c:36
#3 0x0000555555b97f7f in qemu_mutex_unlock_impl (mutex=mutex@entry=0x5555568002f0, file=file@entry=0x555555d378df "util/async.c", line=line@entry=507) at util/qemu-thread-posix.c:97
#4 0x0000555555b92f55 in aio_context_release (ctx=ctx@entry=0x555556800290) at util/async.c:507
#5 0x0000555555b05cf8 in bdrv_prwv_co (child=child@entry=0x7fffc80012f0, offset=offset@entry=131072, qiov=qiov@entry=0x7fffffffd4f0, is_write=is_write@entry=true, flags=flags@entry=0) at block/io.c:833
#6 0x0000555555b060a9 in bdrv_pwritev (qiov=0x7fffffffd4f0, offset=131072, child=0x7fffc80012f0) at block/io.c:990
#7 0x0000555555b060a9 in bdrv_pwrite (child=0x7fffc80012f0, offset=131072, buf=<optimized out>, bytes=<optimized out>) at block/io.c:990
#8 0x0000555555ae172b in qcow2_cache_entry_flush (bs=bs@entry=0x555556810680, c=c@entry=0x5555568cc740, i=i@entry=0) at block/qcow2-cache.c:51
#9 0x0000555555ae18dd in qcow2_cache_write (bs=bs@entry=0x555556810680, c=0x5555568cc740) at block/qcow2-cache.c:248
#10 0x0000555555ae15de in qcow2_cache_flush (bs=0x555556810680, c=<optimized out>) at block/qcow2-cache.c:259
#11 0x0000555555ae16b1 in qcow2_cache_flush_dependency (c=0x5555568a1700, c=0x5555568a1700, bs=0x555556810680) at block/qcow2-cache.c:194
#12 0x0000555555ae16b1 in qcow2_cache_entry_flush (bs=bs@entry=0x555556810680, c=c@entry=0x5555568a1700, i=i@entry=0) at block/qcow2-cache.c:194
#13 0x0000555555ae18dd in qcow2_cache_write (bs=bs@entry=0x555556810680, c=0x5555568a1700) at block/qcow2-cache.c:248
#14 0x0000555555ae15de in qcow2_cache_flush (bs=bs@entry=0x555556810680, c=<optimized out>) at block/qcow2-cache.c:259
#15 0x0000555555ad242c in qcow2_inactivate (bs=bs@entry=0x555556810680) at block/qcow2.c:2124
#16 0x0000555555ad2590 in qcow2_close (bs=0x555556810680) at block/qcow2.c:2153
#17 0x0000555555ab0c62 in bdrv_close (bs=0x555556810680) at block.c:3358
#18 0x0000555555ab0c62 in bdrv_delete (bs=0x555556810680) at block.c:3542
#19 0x0000555555ab0c62 in bdrv_unref (bs=0x555556810680) at block.c:4598
#20 0x0000555555af4d72 in blk_remove_bs (blk=blk@entry=0x5555568103d0) at block/block-backend.c:785
#21 0x0000555555af4dbb in blk_remove_all_bs () at block/block-backend.c:483
#22 0x0000555555aae02f in bdrv_close_all () at block.c:3412
#23 0x00005555557f9796 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4776

The reproducer I used is a qcow2 image on a gluster volume, where the virtual disk size (4 GB) is larger than the gluster volume size (64M), so we can easily trigger an ENOSPC. This backend is assigned to a virtio-blk device using an iothread, and then from the guest a 'dd if=/dev/zero of=/dev/vda bs=1G count=1' causes the VM to stop because of an I/O error. qemu_gluster_co_flush_to_disk() sets bs->drv = NULL on error, so when virtio-blk stops the dataplane, the block nodes stay in the iothread AioContext.
A 'quit' monitor command issued from this paused state crashes the process.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1631227
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
(cherry picked from commit 1bffe1a)
*drop context dependency on e64f25f
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
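The essential pattern -- move the node to its new context even when its driver is already gone -- can be shown with a minimal standalone C sketch (hypothetical names, not the actual QEMU code):

```c
#include <stdio.h>

typedef struct { const char *name; } AioContext;

typedef struct {
    const void *drv;   /* cleared by gluster on a fatal flush error */
    AioContext *ctx;   /* context the node currently lives in */
} BlockNode;

static AioContext iothread_ctx = { "iothread" };
static AioContext main_ctx = { "main" };

static void set_aio_context(BlockNode *bs, AioContext *new_ctx)
{
    if (bs->drv) {
        /* driver-specific detach/attach callbacks would run here */
    }
    /* The fix: don't bail out when bs->drv == NULL; the node must still
     * move, or shutdown later unlocks a lock it doesn't hold. */
    bs->ctx = new_ctx;
}

int main(void)
{
    BlockNode bs = { .drv = NULL, .ctx = &iothread_ctx };
    set_aio_context(&bs, &main_ctx);  /* dataplane-stop path */
    printf("node lives in: %s\n", bs.ctx->name);  /* "main" */
    return 0;
}
```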
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
Reading the RX_DATA register when the RX_FIFO is empty triggers an abort. This can be easily reproduced:

$ qemu-system-arm -M emcraft-sf2 -monitor stdio -S
QEMU 4.0.50 monitor - type 'help' for more information
(qemu) x 0x40001010
Aborted (core dumped)

(gdb) bt
#1 0x00007f035874f895 in abort () at /lib64/libc.so.6
#2 0x00005628686591ff in fifo8_pop (fifo=0x56286a9a4c68) at util/fifo8.c:66
#3 0x00005628683e0b8e in fifo32_pop (fifo=0x56286a9a4c68) at include/qemu/fifo32.h:137
#4 0x00005628683e0efb in spi_read (opaque=0x56286a9a4850, addr=4, size=4) at hw/ssi/mss-spi.c:168
#5 0x0000562867f96801 in memory_region_read_accessor (mr=0x56286a9a4b60, addr=16, value=0x7ffeecb0c5c8, size=4, shift=0, mask=4294967295, attrs=...) at memory.c:439
#6 0x0000562867f96cdb in access_with_adjusted_size (addr=16, value=0x7ffeecb0c5c8, size=4, access_size_min=1, access_size_max=4, access_fn=0x562867f967c3 <memory_region_read_accessor>, mr=0x56286a9a4b60, attrs=...) at memory.c:569
#7 0x0000562867f99940 in memory_region_dispatch_read1 (mr=0x56286a9a4b60, addr=16, pval=0x7ffeecb0c5c8, size=4, attrs=...) at memory.c:1420
#8 0x0000562867f99a08 in memory_region_dispatch_read (mr=0x56286a9a4b60, addr=16, pval=0x7ffeecb0c5c8, size=4, attrs=...) at memory.c:1447
#9 0x0000562867f38721 in flatview_read_continue (fv=0x56286aec6360, addr=1073745936, attrs=..., buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4, addr1=16, l=4, mr=0x56286a9a4b60) at exec.c:3385
#10 0x0000562867f38874 in flatview_read (fv=0x56286aec6360, addr=1073745936, attrs=..., buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4) at exec.c:3423
#11 0x0000562867f388ea in address_space_read_full (as=0x56286aa3e890, addr=1073745936, attrs=..., buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4) at exec.c:3436
#12 0x0000562867f389c5 in address_space_rw (as=0x56286aa3e890, addr=1073745936, attrs=..., buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4, is_write=false) at exec.c:3466
#13 0x0000562867f3bdd7 in cpu_memory_rw_debug (cpu=0x56286aa19d00, addr=1073745936, buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4, is_write=0) at exec.c:3976
#14 0x000056286811ed51 in memory_dump (mon=0x56286a8c32d0, count=1, format=120, wsize=4, addr=1073745936, is_physical=0) at monitor/misc.c:730
#15 0x000056286811eff1 in hmp_memory_dump (mon=0x56286a8c32d0, qdict=0x56286b15c400) at monitor/misc.c:785
#16 0x00005628684740ee in handle_hmp_command (mon=0x56286a8c32d0, cmdline=0x56286a8caeb2 "0x40001010") at monitor/hmp.c:1082

From the datasheet "Actel SmartFusion Microcontroller Subsystem User's Guide" Rev.1, Table 13-3 "SPI Register Summary", this register has a reset value of 0.
Check the FIFO is not empty before accessing it, else log an error message.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20190709113715.7761-3-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit c0bccee)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
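A minimal standalone C sketch of the guard described above (hypothetical FIFO type, not the actual QEMU fifo32 implementation): check for emptiness before popping, and on an empty RX FIFO log an error and return the register's documented reset value of 0:

```c
#include <stdio.h>
#include <stdint.h>

#define FIFO_CAP 16

typedef struct {
    uint32_t data[FIFO_CAP];
    unsigned head, num;
} Fifo32;

static int fifo32_is_empty(const Fifo32 *f) { return f->num == 0; }

static uint32_t fifo32_pop(Fifo32 *f)
{
    uint32_t v = f->data[f->head];
    f->head = (f->head + 1) % FIFO_CAP;
    f->num--;
    return v;
}

static uint32_t spi_read_rx_data(Fifo32 *rx)
{
    if (fifo32_is_empty(rx)) {
        fprintf(stderr, "spi: guest read RX_DATA while RX FIFO is empty\n");
        return 0;  /* reset value per the datasheet */
    }
    return fifo32_pop(rx);
}

int main(void)
{
    Fifo32 rx = {0};
    printf("0x%08x\n", spi_read_rx_data(&rx));  /* logs and returns 0 */
    return 0;
}
```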
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
In the previous commit we fixed a crash when the guest reads a register that pops from an empty FIFO. By auditing the repository, we found another similar use with an easy way to reproduce:

$ qemu-system-aarch64 -M xlnx-zcu102 -monitor stdio -S
QEMU 4.0.50 monitor - type 'help' for more information
(qemu) xp/b 0xfd4a0134
Aborted (core dumped)

(gdb) bt
#0 0x00007f6936dea57f in raise () at /lib64/libc.so.6
#1 0x00007f6936dd4895 in abort () at /lib64/libc.so.6
#2 0x0000561ad32975ec in xlnx_dp_aux_pop_rx_fifo (s=0x7f692babee70) at hw/display/xlnx_dp.c:431
#3 0x0000561ad3297dc0 in xlnx_dp_read (opaque=0x7f692babee70, offset=77, size=4) at hw/display/xlnx_dp.c:667
#4 0x0000561ad321b896 in memory_region_read_accessor (mr=0x7f692babf620, addr=308, value=0x7ffe05c1db88, size=4, shift=0, mask=4294967295, attrs=...) at memory.c:439
#5 0x0000561ad321bd70 in access_with_adjusted_size (addr=308, value=0x7ffe05c1db88, size=1, access_size_min=4, access_size_max=4, access_fn=0x561ad321b858 <memory_region_read_accessor>, mr=0x7f692babf620, attrs=...) at memory.c:569
#6 0x0000561ad321e9d5 in memory_region_dispatch_read1 (mr=0x7f692babf620, addr=308, pval=0x7ffe05c1db88, size=1, attrs=...) at memory.c:1420
#7 0x0000561ad321ea9d in memory_region_dispatch_read (mr=0x7f692babf620, addr=308, pval=0x7ffe05c1db88, size=1, attrs=...) at memory.c:1447
#8 0x0000561ad31bd742 in flatview_read_continue (fv=0x561ad69c04f0, addr=4249485620, attrs=..., buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", len=1, addr1=308, l=1, mr=0x7f692babf620) at exec.c:3385
#9 0x0000561ad31bd895 in flatview_read (fv=0x561ad69c04f0, addr=4249485620, attrs=..., buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", len=1) at exec.c:3423
#10 0x0000561ad31bd90b in address_space_read_full (as=0x561ad5bb3020, addr=4249485620, attrs=..., buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", len=1) at exec.c:3436
#11 0x0000561ad33b1c42 in address_space_read (len=1, buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", attrs=..., addr=4249485620, as=0x561ad5bb3020) at include/exec/memory.h:2131
#12 0x0000561ad33b1c42 in memory_dump (mon=0x561ad59c4530, count=1, format=120, wsize=1, addr=4249485620, is_physical=1) at monitor/misc.c:723
#13 0x0000561ad33b1fc1 in hmp_physical_memory_dump (mon=0x561ad59c4530, qdict=0x561ad6c6fd00) at monitor/misc.c:795
#14 0x0000561ad37b4a9f in handle_hmp_command (mon=0x561ad59c4530, cmdline=0x561ad59d0f22 "/b 0x00000000fd4a0134") at monitor/hmp.c:1082

Fix by checking the FIFO is not empty before popping from it. The datasheet is not clear about the reset value of this register; we choose to return '0'.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20190709113715.7761-4-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit a09ef50)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
In the current implementation there is a memory leak when
nbd_client_connect() returns an error status. Here is an easy way to
reproduce:
1. run qemu-iotests as follows and check the result with ASan:
./check -raw 143
Following is the ASan output backtrace:
Direct leak of 40 byte(s) in 1 object(s) allocated from:
#0 0x7f629688a560 in calloc (/usr/lib64/libasan.so.3+0xc7560)
#1 0x7f6295e7e015 in g_malloc0 (/usr/lib64/libglib-2.0.so.0+0x50015)
#2 0x56281dab4642 in qobject_input_start_struct /mnt/sdb/qemu-4.2.0-rc0/qapi/qobject-input-visitor.c:295
#3 0x56281dab1a04 in visit_start_struct /mnt/sdb/qemu-4.2.0-rc0/qapi/qapi-visit-core.c:49
#4 0x56281dad1827 in visit_type_SocketAddress qapi/qapi-visit-sockets.c:386
#5 0x56281da8062f in nbd_config /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1716
#6 0x56281da8062f in nbd_process_options /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1829
#7 0x56281da8062f in nbd_open /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1873
Direct leak of 15 byte(s) in 1 object(s) allocated from:
#0 0x7f629688a3a0 in malloc (/usr/lib64/libasan.so.3+0xc73a0)
#1 0x7f6295e7dfbd in g_malloc (/usr/lib64/libglib-2.0.so.0+0x4ffbd)
#2 0x7f6295e96ace in g_strdup (/usr/lib64/libglib-2.0.so.0+0x68ace)
#3 0x56281da804ac in nbd_process_options /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1834
#4 0x56281da804ac in nbd_open /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1873
Indirect leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x7f629688a3a0 in malloc (/usr/lib64/libasan.so.3+0xc73a0)
#1 0x7f6295e7dfbd in g_malloc (/usr/lib64/libglib-2.0.so.0+0x4ffbd)
#2 0x7f6295e96ace in g_strdup (/usr/lib64/libglib-2.0.so.0+0x68ace)
#3 0x56281dab41a3 in qobject_input_type_str_keyval /mnt/sdb/qemu-4.2.0-rc0/qapi/qobject-input-visitor.c:536
#4 0x56281dab2ee9 in visit_type_str /mnt/sdb/qemu-4.2.0-rc0/qapi/qapi-visit-core.c:297
#5 0x56281dad0fa1 in visit_type_UnixSocketAddress_members qapi/qapi-visit-sockets.c:141
#6 0x56281dad17b6 in visit_type_SocketAddress_members qapi/qapi-visit-sockets.c:366
#7 0x56281dad186a in visit_type_SocketAddress qapi/qapi-visit-sockets.c:393
#8 0x56281da8062f in nbd_config /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1716
#9 0x56281da8062f in nbd_process_options /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1829
#10 0x56281da8062f in nbd_open /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1873
Fixes: 8f071c9
Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Cc: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <1575517528-44312-3-git-send-email-pannengyuan@huawei.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit 8198cf5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
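The leak pattern and its fix are simple to sketch in standalone C (hypothetical names, not the actual nbd.c code): everything allocated while parsing options must also be released on the error path, typically through a single cleanup label:

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
    char *export;
    char *host;
} Options;

static int client_connect(const Options *o)
{
    (void)o;
    return -1;  /* simulate nbd_client_connect() failing */
}

static int open_device(void)
{
    int ret;
    Options o = {0};

    o.export = strdup("disk0");
    o.host = strdup("::1");

    ret = client_connect(&o);
    if (ret < 0) {
        goto fail;  /* before the fix this returned directly, leaking o.* */
    }
    return 0;

fail:
    free(o.export);
    free(o.host);
    return ret;
}

int main(void) { return open_device() ? 1 : 0; }
```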
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
The virtqueue code sets up MemoryRegionCaches to access the virtqueue
guest RAM data structures. The code currently assumes that
VRingMemoryRegionCaches is initialized before device emulation code
accesses the virtqueue. An assertion will fail in
vring_get_region_caches() when this is not true. Device fuzzing found a
case where this assumption is false (see below).
Virtqueue guest RAM addresses can also be changed from a vCPU thread
while an IOThread is accessing the virtqueue. This breaks the same
assumption but this time the caches could become invalid partway through
the virtqueue code. The code fetches the caches RCU pointer multiple
times so we will need to validate the pointer every time it is fetched.
Add checks each time we call vring_get_region_caches() and treat invalid
caches as a nop: memory stores are ignored and memory reads return 0.
The fuzz test failure is as follows:
$ qemu -M pc -device virtio-blk-pci,id=drv0,drive=drive0,addr=4.0 \
-drive if=none,id=drive0,file=null-co://,format=raw,auto-read-only=off \
-drive if=none,id=drive1,file=null-co://,file.read-zeroes=on,format=raw \
-display none \
-qtest stdio
endianness
outl 0xcf8 0x80002020
outl 0xcfc 0xe0000000
outl 0xcf8 0x80002004
outw 0xcfc 0x7
write 0xe0000000 0x24 0x00ffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab5cffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab0000000001
inb 0x4
writew 0xe000001c 0x1
write 0xe0000014 0x1 0x0d
The following error message is produced:
qemu-system-x86_64: /home/stefanha/qemu/hw/virtio/virtio.c:286: vring_get_region_caches: Assertion `caches != NULL' failed.
The backtrace looks like this:
#0 0x00007ffff5520625 in raise () at /lib64/libc.so.6
#1 0x00007ffff55098d9 in abort () at /lib64/libc.so.6
#2 0x00007ffff55097a9 in _nl_load_domain.cold () at /lib64/libc.so.6
#3 0x00007ffff5518a66 in annobin_assert.c_end () at /lib64/libc.so.6
#4 0x00005555559073da in vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:286
#5 vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:283
#6 0x000055555590818d in vring_used_flags_set_bit (mask=1, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
#7 virtio_queue_split_set_notification (enable=0, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
#8 virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:451
#9 0x0000555555908512 in virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:444
#10 0x00005555558c697a in virtio_blk_handle_vq (s=0x5555575c57e0, vq=0x5555575ceea0) at qemu/hw/block/virtio-blk.c:775
#11 0x0000555555907836 in virtio_queue_notify_aio_vq (vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:2244
#12 0x0000555555cb5dd7 in aio_dispatch_handlers (ctx=ctx@entry=0x55555671a420) at util/aio-posix.c:429
#13 0x0000555555cb67a8 in aio_dispatch (ctx=0x55555671a420) at util/aio-posix.c:460
#14 0x0000555555cb307e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
#15 0x00007ffff7bbc510 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#16 0x0000555555cb5848 in glib_pollfds_poll () at util/main-loop.c:219
#17 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
#18 main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
#19 0x00005555559b20c9 in main_loop () at vl.c:1683
#20 0x0000555555838115 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4441
Reported-by: Alexander Bulekov <alxndr@bu.edu>
Cc: Michael Tsirkin <mst@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20200207104619.164892-1-stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit abdd16f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
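The "treat invalid caches as a nop" rule described above can be modeled in a minimal, self-contained C sketch (hypothetical types, not the actual QEMU vring code): fetch the pointer once per access and make reads return 0 and writes do nothing when it is missing:

```c
#include <stdio.h>
#include <stdint.h>

typedef struct { uint16_t used_flags; } RegionCaches;

typedef struct {
    RegionCaches *caches;  /* may be NULL until set up, or after a reset */
} VirtQueue;

static uint16_t vring_used_flags_read(VirtQueue *vq)
{
    RegionCaches *c = vq->caches;  /* validate on every fetch */
    if (!c) {
        return 0;   /* read through invalid caches yields 0 */
    }
    return c->used_flags;
}

static void vring_used_flags_write(VirtQueue *vq, uint16_t v)
{
    RegionCaches *c = vq->caches;
    if (!c) {
        return;     /* store through invalid caches is ignored */
    }
    c->used_flags = v;
}

int main(void)
{
    VirtQueue vq = { .caches = NULL };
    vring_used_flags_write(&vq, 1);              /* silently ignored */
    printf("%u\n", vring_used_flags_read(&vq));  /* prints 0, no abort */
    return 0;
}
```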
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
v->vq was not cleaned up in virtio_9p_device_unrealize; the memory leak stack is as follows:

Direct leak of 14336 byte(s) in 2 object(s) allocated from:
#0 0x7f819ae43970 (/lib64/libasan.so.5+0xef970) ??:?
#1 0x7f819872f49d (/lib64/libglib-2.0.so.0+0x5249d) ??:?
#2 0x55a3a58da624 (./x86_64-softmmu/qemu-system-x86_64+0x2c14624) /mnt/sdb/qemu/hw/virtio/virtio.c:2327
#3 0x55a3a571bac7 (./x86_64-softmmu/qemu-system-x86_64+0x2a55ac7) /mnt/sdb/qemu/hw/9pfs/virtio-9p-device.c:209
#4 0x55a3a58e7bc6 (./x86_64-softmmu/qemu-system-x86_64+0x2c21bc6) /mnt/sdb/qemu/hw/virtio/virtio.c:3504
#5 0x55a3a5ebfb37 (./x86_64-softmmu/qemu-system-x86_64+0x31f9b37) /mnt/sdb/qemu/hw/core/qdev.c:876

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Message-Id: <20200117060927.51996-2-pannengyuan@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
Acked-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 9580d60)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
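This commit, and the similar virtio-crypto and vhost-user-blk fixes below, all follow one rule: every queue allocated in realize must be freed in unrealize. A minimal standalone C sketch of that pairing (hypothetical names, not the actual QEMU virtio API):

```c
#include <stdlib.h>

typedef struct { void *ring; } VirtQueue;

typedef struct { VirtQueue *vq; } Dev;

static VirtQueue *add_queue(void)
{
    VirtQueue *vq = calloc(1, sizeof(*vq));
    vq->ring = calloc(128, 16);
    return vq;
}

static void delete_queue(VirtQueue *vq)
{
    free(vq->ring);
    free(vq);
}

static void device_realize(Dev *d) { d->vq = add_queue(); }

static void device_unrealize(Dev *d)
{
    delete_queue(d->vq);  /* the fix: this call used to be missing */
    d->vq = NULL;
}

int main(void)
{
    Dev d;
    device_realize(&d);
    device_unrealize(&d);  /* ASan no longer reports the allocation */
    return 0;
}
```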
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
If we call the qmp 'query-block' while qemu is working on
'block-commit', it will cause memleaks; the memory leak stack is as
follows:
Indirect leak of 12360 byte(s) in 3 object(s) allocated from:
#0 0x7f80f0b6d970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
#1 0x7f80ee86049d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
#2 0x55ea95b5bb67 in qdict_new /mnt/sdb/qemu-4.2.0-rc0/qobject/qdict.c:29
#3 0x55ea956cd043 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6427
#4 0x55ea956cc950 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6399
#5 0x55ea956cc950 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6399
#6 0x55ea956cc950 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6399
#7 0x55ea958818ea in bdrv_block_device_info /mnt/sdb/qemu-4.2.0-rc0/block/qapi.c:56
#8 0x55ea958879de in bdrv_query_info /mnt/sdb/qemu-4.2.0-rc0/block/qapi.c:392
#9 0x55ea9588b58f in qmp_query_block /mnt/sdb/qemu-4.2.0-rc0/block/qapi.c:578
#10 0x55ea95567392 in qmp_marshal_query_block qapi/qapi-commands-block-core.c:95
Indirect leak of 4120 byte(s) in 1 object(s) allocated from:
#0 0x7f80f0b6d970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
#1 0x7f80ee86049d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
#2 0x55ea95b5bb67 in qdict_new /mnt/sdb/qemu-4.2.0-rc0/qobject/qdict.c:29
#3 0x55ea956cd043 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6427
#4 0x55ea956cc950 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6399
#5 0x55ea956cc950 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6399
#6 0x55ea9569f301 in bdrv_backing_attach /mnt/sdb/qemu-4.2.0-rc0/block.c:1064
#7 0x55ea956a99dd in bdrv_replace_child_noperm /mnt/sdb/qemu-4.2.0-rc0/block.c:2283
#8 0x55ea956b9b53 in bdrv_replace_node /mnt/sdb/qemu-4.2.0-rc0/block.c:4196
#9 0x55ea956b9e49 in bdrv_append /mnt/sdb/qemu-4.2.0-rc0/block.c:4236
#10 0x55ea958c3472 in commit_start /mnt/sdb/qemu-4.2.0-rc0/block/commit.c:306
#11 0x55ea94b68ab0 in qmp_block_commit /mnt/sdb/qemu-4.2.0-rc0/blockdev.c:3459
#12 0x55ea9556a7a7 in qmp_marshal_block_commit qapi/qapi-commands-block-core.c:407
Fixes: bb808d5
Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Message-id: 20200116085600.24056-1-pannengyuan@huawei.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit cb89561)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
Similar to other virtio devices, ctrl_vq was not deleted in virtio_crypto_device_unrealize; this patch fixes it.
This device already maintains vq pointers, so we can use the new virtio_delete_queue function directly to do the cleanup.
The leak stack:
Direct leak of 10752 byte(s) in 3 object(s) allocated from:
#0 0x7f4c024b1970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
#1 0x7f4c018be49d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
#2 0x55a2f8017279 in virtio_add_queue /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio.c:2333
#3 0x55a2f8057035 in virtio_crypto_device_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio-crypto.c:814
#4 0x55a2f8005d80 in virtio_device_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio.c:3531
#5 0x55a2f8497d1b in device_set_realized /mnt/sdb/qemu-new/qemu_test/qemu/hw/core/qdev.c:891
#6 0x55a2f8b48595 in property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:2238
#7 0x55a2f8b54fad in object_property_set_qobject /mnt/sdb/qemu-new/qemu_test/qemu/qom/qom-qobject.c:26
#8 0x55a2f8b4de2c in object_property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:1390
#9 0x55a2f80609c9 in virtio_crypto_pci_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio-crypto-pci.c:58
Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Cc: "Gonglei (Arei)" <arei.gonglei@huawei.com>
Message-Id: <20200225075554.10835-5-pannengyuan@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit d56e1c8)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
virtio queues were not deleted in unrealize, and also on the error path
in realize; this patch fixes these memleaks. The leak stack is as follows:
Direct leak of 114688 byte(s) in 16 object(s) allocated from:
#0 0x7f24024fdbf0 in calloc (/lib64/libasan.so.3+0xcabf0)
#1 0x7f2401642015 in g_malloc0 (/lib64/libglib-2.0.so.0+0x50015)
#2 0x55ad175a6447 in virtio_add_queue /mnt/sdb/qemu/hw/virtio/virtio.c:2327
#3 0x55ad17570cf9 in vhost_user_blk_device_realize /mnt/sdb/qemu/hw/block/vhost-user-blk.c:419
#4 0x55ad175a3707 in virtio_device_realize /mnt/sdb/qemu/hw/virtio/virtio.c:3509
#5 0x55ad176ad0d1 in device_set_realized /mnt/sdb/qemu/hw/core/qdev.c:876
#6 0x55ad1781ff9d in property_set_bool /mnt/sdb/qemu/qom/object.c:2080
#7 0x55ad178245ae in object_property_set_qobject /mnt/sdb/qemu/qom/qom-qobject.c:26
#8 0x55ad17821eb4 in object_property_set_bool /mnt/sdb/qemu/qom/object.c:1338
#9 0x55ad177aeed7 in virtio_pci_realize /mnt/sdb/qemu/hw/virtio/virtio-pci.c:1801
Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20200224041336.30790-2-pannengyuan@huawei.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 13e5468)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
A use-after-free is possible: bdrv_unref_child() leaves
bs->backing freed but not NULL. bdrv_attach_child() may enter a nested
polling loop due to drain, and then access of the freed pointer is possible.
I've produced the following crash on iotest 30 with modified code. It
does not reproduce on master, but still seems possible:
#0 __strcmp_avx2 () at /lib64/libc.so.6
#1 bdrv_backing_overridden (bs=0x55c9d3cc2060) at block.c:6350
#2 bdrv_refresh_filename (bs=0x55c9d3cc2060) at block.c:6404
#3 bdrv_backing_attach (c=0x55c9d48e5520) at block.c:1063
#4 bdrv_replace_child_noperm
(child=child@entry=0x55c9d48e5520,
new_bs=new_bs@entry=0x55c9d3cc2060) at block.c:2290
#5 bdrv_replace_child
(child=child@entry=0x55c9d48e5520,
new_bs=new_bs@entry=0x55c9d3cc2060) at block.c:2320
#6 bdrv_root_attach_child
(child_bs=child_bs@entry=0x55c9d3cc2060,
child_name=child_name@entry=0x55c9d241d478 "backing",
child_role=child_role@entry=0x55c9d26ecee0 <child_backing>,
ctx=<optimized out>, perm=<optimized out>, shared_perm=21,
opaque=0x55c9d3c5a3d0, errp=0x7ffd117108e0) at block.c:2424
#7 bdrv_attach_child
(parent_bs=parent_bs@entry=0x55c9d3c5a3d0,
child_bs=child_bs@entry=0x55c9d3cc2060,
child_name=child_name@entry=0x55c9d241d478 "backing",
child_role=child_role@entry=0x55c9d26ecee0 <child_backing>,
errp=errp@entry=0x7ffd117108e0) at block.c:5876
#8 in bdrv_set_backing_hd
(bs=bs@entry=0x55c9d3c5a3d0,
backing_hd=backing_hd@entry=0x55c9d3cc2060,
errp=errp@entry=0x7ffd117108e0)
at block.c:2576
#9 stream_prepare (job=0x55c9d49d84a0) at block/stream.c:150
#10 job_prepare (job=0x55c9d49d84a0) at job.c:761
#11 job_txn_apply (txn=<optimized out>, fn=<optimized out>) at
job.c:145
#12 job_do_finalize (job=0x55c9d49d84a0) at job.c:778
#13 job_completed_txn_success (job=0x55c9d49d84a0) at job.c:832
#14 job_completed (job=0x55c9d49d84a0) at job.c:845
#15 job_completed (job=0x55c9d49d84a0) at job.c:836
#16 job_exit (opaque=0x55c9d49d84a0) at job.c:864
#17 aio_bh_call (bh=0x55c9d471a160) at util/async.c:117
#18 aio_bh_poll (ctx=ctx@entry=0x55c9d3c46720) at util/async.c:117
#19 aio_poll (ctx=ctx@entry=0x55c9d3c46720,
blocking=blocking@entry=true)
at util/aio-posix.c:728
#20 bdrv_parent_drained_begin_single (poll=true, c=0x55c9d3d558f0)
at block/io.c:121
#21 bdrv_parent_drained_begin_single (c=c@entry=0x55c9d3d558f0,
poll=poll@entry=true)
at block/io.c:114
#22 bdrv_replace_child_noperm
(child=child@entry=0x55c9d3d558f0,
new_bs=new_bs@entry=0x55c9d3d27300) at block.c:2258
#23 bdrv_replace_child
(child=child@entry=0x55c9d3d558f0,
new_bs=new_bs@entry=0x55c9d3d27300) at block.c:2320
#24 bdrv_root_attach_child
(child_bs=child_bs@entry=0x55c9d3d27300,
child_name=child_name@entry=0x55c9d241d478 "backing",
child_role=child_role@entry=0x55c9d26ecee0 <child_backing>,
ctx=<optimized out>, perm=<optimized out>, shared_perm=21,
opaque=0x55c9d3cc2060, errp=0x7ffd11710c60) at block.c:2424
#25 bdrv_attach_child
(parent_bs=parent_bs@entry=0x55c9d3cc2060,
child_bs=child_bs@entry=0x55c9d3d27300,
child_name=child_name@entry=0x55c9d241d478 "backing",
child_role=child_role@entry=0x55c9d26ecee0 <child_backing>,
errp=errp@entry=0x7ffd11710c60) at block.c:5876
#26 bdrv_set_backing_hd
(bs=bs@entry=0x55c9d3cc2060,
backing_hd=backing_hd@entry=0x55c9d3d27300,
errp=errp@entry=0x7ffd11710c60)
at block.c:2576
#27 stream_prepare (job=0x55c9d495ead0) at block/stream.c:150
...
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200316060631.30052-2-vsementsov@virtuozzo.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 6e57963)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
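The defensive pattern behind this fix can be shown with a minimal standalone C sketch (hypothetical names, not the actual block-layer code): clear the parent's pointer when releasing the child, so re-entrant code that runs during the release can never see a dangling pointer:

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    struct Node *backing;
} Node;

static void unref_child(Node *parent)
{
    Node *child = parent->backing;
    parent->backing = NULL;  /* the fix: NULL it out, don't just free it */
    free(child);
}

static void refresh(Node *parent)
{
    /* Nested event-loop code like this used to read freed memory. */
    if (parent->backing) {
        printf("backing present\n");
    } else {
        printf("no backing\n");
    }
}

int main(void)
{
    Node *parent = calloc(1, sizeof(*parent));
    parent->backing = calloc(1, sizeof(Node));
    unref_child(parent);
    refresh(parent);  /* safe: sees NULL instead of a dangling pointer */
    free(parent);
    return 0;
}
```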
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
Both qemu and qemu-img use writeback cache mode by default, which is
already documented in qemu(1). qemu-nbd uses writethrough cache mode by
default, and the default cache mode is not documented.
According to the qemu-nbd(8):
--cache=CACHE
The cache mode to be used with the file. See the
documentation of the emulator's -drive cache=... option for
allowed values.
qemu(1) says:
The default mode is cache=writeback.
So users have no reason to assume that qemu-nbd is using writethrough
cache mode. The only hint is the painfully slow writing when using the
defaults.
Looking in git history, it seems that qemu used writethrough in the past
to support broken guests that did not flush data properly, or could not
flush due to limitations in qemu. But qemu-nbd clients can use
NBD_CMD_FLUSH to flush data, so using writethrough does not help anyone.
Change the default cache mode to writeback, and document the default and
available values properly in the online help and manual.
With this change converting image via qemu-nbd is 3.5 times faster.
$ qemu-img create dst.img 50g
$ qemu-nbd -t -f raw -k /tmp/nbd.sock dst.img
Before this change:
$ hyperfine -r3 "./qemu-img convert -p -f raw -O raw -T none -W fedora34.img nbd+unix:///?socket=/tmp/nbd.sock"
Benchmark #1: ./qemu-img convert -p -f raw -O raw -T none -W fedora34.img nbd+unix:///?socket=/tmp/nbd.sock
Time (mean ± σ): 83.639 s ± 5.970 s [User: 2.733 s, System: 6.112 s]
Range (min … max): 76.749 s … 87.245 s 3 runs
After this change:
$ hyperfine -r3 "./qemu-img convert -p -f raw -O raw -T none -W fedora34.img nbd+unix:///?socket=/tmp/nbd.sock"
Benchmark #1: ./qemu-img convert -p -f raw -O raw -T none -W fedora34.img nbd+unix:///?socket=/tmp/nbd.sock
Time (mean ± σ): 23.522 s ± 0.433 s [User: 2.083 s, System: 5.475 s]
Range (min … max): 23.234 s … 24.019 s 3 runs
Users can avoid the issue by using --cache=writeback[1] but the defaults
should give good performance for the common use case.
[1] https://bugzilla.redhat.com/1990656
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Message-Id: <20210813205519.50518-1-nsoffer@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit 0961525)
Signed-off-by: Michael Roth <michael.roth@amd.com>
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
PCI resource reserve capability should use LE format as all other PCI things. If we don't then seabios won't boot:

=== PCI new allocation pass #1 ===
PCI: check devices
PCI: QEMU resource reserve cap: size 10000000000000 type io
PCI: secondary bus 1 size 10000000000000 type io
PCI: secondary bus 1 size 00200000 type mem
PCI: secondary bus 1 size 00200000 type prefmem
=== PCI new allocation pass #2 ===
PCI: out of I/O address space

This became more important since we started reserving IO by default; previously no one noticed.

Fixes: e2a6290 ("hw/pcie-root-port: Fix hotplug for PCI devices requiring IO")
Cc: marcel.apfelbaum@gmail.com
Fixes: 226263f ("hw/pci: add QEMU-specific PCI capability to the Generic PCI Express Root Port")
Cc: zuban32s@gmail.com
Fixes: 6755e61 ("hw/pci: add PCI resource reserve capability to legacy PCI bridge")
Cc: jing2.liu@linux.intel.com
Tested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
(cherry picked from commit 0e464f7)
Signed-off-by: Michael Roth <michael.roth@amd.com>
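The byte-order bug is easy to sketch in standalone C (hypothetical helper, not the actual QEMU code): storing a multi-byte field explicitly in little-endian byte order produces the same on-the-wire bytes on every host, where a raw host-order store would not:

```c
#include <stdio.h>
#include <stdint.h>

static void store_le32(uint8_t *p, uint32_t v)
{
    p[0] = v & 0xff;
    p[1] = (v >> 8) & 0xff;
    p[2] = (v >> 16) & 0xff;
    p[3] = (v >> 24) & 0xff;
}

int main(void)
{
    uint8_t cap[4];
    uint32_t mem_reservation = 0x00200000;  /* 2 MiB, as in the log above */

    /* Before the fix, the field was written in host byte order, so a
     * big-endian host emitted bytes seabios could not parse. */
    store_le32(cap, mem_reservation);

    for (int i = 0; i < 4; i++) {
        printf("%02x ", cap[i]);  /* 00 00 20 00 on every host */
    }
    printf("\n");
    return 0;
}
```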
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
Without the previous commit, when running 'make check-qtest-i386'
with QEMU configured with '--enable-sanitizers' we get:
AddressSanitizer:DEADLYSIGNAL
=================================================================
==287878==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000344
==287878==The signal is caused by a WRITE memory access.
==287878==Hint: address points to the zero page.
#0 0x564b2e5bac27 in blk_inc_in_flight block/block-backend.c:1346:5
#1 0x564b2e5bb228 in blk_pwritev_part block/block-backend.c:1317:5
#2 0x564b2e5bcd57 in blk_pwrite block/block-backend.c:1498:11
#3 0x564b2ca1cdd3 in fdctrl_write_data hw/block/fdc.c:2221:17
#4 0x564b2ca1b2f7 in fdctrl_write hw/block/fdc.c:829:9
#5 0x564b2dc49503 in portio_write softmmu/ioport.c:201:9
Add the reproducer for CVE-2021-20196.
Suggested-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20211124161536.631563-4-philmd@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit cc20926)
Signed-off-by: Michael Roth <michael.roth@amd.com>
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
If interface_count is NO_INTERFACE_INFO, let's not access the arrays
out-of-bounds.
==994==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x625000243930 at pc 0x5642068086a8 bp 0x7f0b6f9ffa50 sp 0x7f0b6f9ffa40
READ of size 1 at 0x625000243930 thread T0
#0 0x5642068086a7 in usbredir_check_bulk_receiving /home/elmarco/src/qemu/hw/usb/redirect.c:1503
#1 0x56420681301c in usbredir_post_load /home/elmarco/src/qemu/hw/usb/redirect.c:2154
#2 0x5642068a56c2 in vmstate_load_state /home/elmarco/src/qemu/migration/vmstate.c:168
#3 0x56420688e2ac in vmstate_load /home/elmarco/src/qemu/migration/savevm.c:829
#4 0x5642068980cb in qemu_loadvm_section_start_full /home/elmarco/src/qemu/migration/savevm.c:2211
#5 0x564206899645 in qemu_loadvm_state_main /home/elmarco/src/qemu/migration/savevm.c:2395
#6 0x5642068998cf in qemu_loadvm_state /home/elmarco/src/qemu/migration/savevm.c:2467
#7 0x56420685f3e9 in process_incoming_migration_co /home/elmarco/src/qemu/migration/migration.c:449
#8 0x564207106c47 in coroutine_trampoline /home/elmarco/src/qemu/util/coroutine-ucontext.c:115
#9 0x7f0c0604e37f (/lib64/libc.so.6+0x4d37f)
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Reviewed-by: Li Qiang <liq3ea@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807084048.4258-1-marcandre.lureau@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 7b84b90)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
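The bounds-check pattern can be sketched in standalone C (hypothetical sentinel value and fields, not the actual usbredir structures): when the element count carries a sentinel meaning "no info", skip the loop over the per-interface arrays entirely instead of indexing with it:

```c
#include <stdio.h>
#include <stdint.h>

#define MAX_INTERFACES     32
#define NO_INTERFACE_INFO  0xff  /* hypothetical sentinel value */

typedef struct {
    uint8_t interface_count;
    uint8_t interface_class[MAX_INTERFACES];
} InterfaceInfo;

static int has_bulk_interface(const InterfaceInfo *info)
{
    if (info->interface_count == NO_INTERFACE_INFO) {
        return 0;  /* the fix: never loop up to the sentinel value */
    }
    for (uint8_t i = 0; i < info->interface_count && i < MAX_INTERFACES; i++) {
        if (info->interface_class[i] == 0x08) {  /* mass storage */
            return 1;
        }
    }
    return 0;
}

int main(void)
{
    InterfaceInfo info = { .interface_count = NO_INTERFACE_INFO };
    printf("%d\n", has_bulk_interface(&info));  /* 0, no OOB read */
    return 0;
}
```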
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
When creating the admin queue in nvme_init() the variable that holds the number of queues created is modified before actual queue creation. This is a problem because if creating the queue fails then the variable is left in an inconsistent state. This was actually observed when I tried to hotplug an nvme disk. The control got to nvme_file_open(), which called nvme_init(), which failed; thus nvme_close() was called, which in turn called nvme_free_queue_pair() with queue being NULL. This led to an instant crash:

#0 0x000055d9507ec211 in nvme_free_queue_pair (bs=0x55d952ddb880, q=0x0) at block/nvme.c:164
#1 0x000055d9507ee180 in nvme_close (bs=0x55d952ddb880) at block/nvme.c:729
#2 0x000055d9507ee3d5 in nvme_file_open (bs=0x55d952ddb880, options=0x55d952bb1410, flags=147456, errp=0x7ffd8e19e200) at block/nvme.c:781
#3 0x000055d9507629f3 in bdrv_open_driver (bs=0x55d952ddb880, drv=0x55d95109c1e0 <bdrv_nvme>, node_name=0x0, options=0x55d952bb1410, open_flags=147456, errp=0x7ffd8e19e310) at block.c:1291
#4 0x000055d9507633d6 in bdrv_open_common (bs=0x55d952ddb880, file=0x0, options=0x55d952bb1410, errp=0x7ffd8e19e310) at block.c:1551
#5 0x000055d950766881 in bdrv_open_inherit (filename=0x0, reference=0x0, options=0x55d952bb1410, flags=32768, parent=0x55d9538ce420, child_role=0x55d950eaade0 <child_file>, errp=0x7ffd8e19e510) at block.c:3063
#6 0x000055d950765ae4 in bdrv_open_child_bs (filename=0x0, options=0x55d9541cdff0, bdref_key=0x55d950af33aa "file", parent=0x55d9538ce420, child_role=0x55d950eaade0 <child_file>, allow_none=true, errp=0x7ffd8e19e510) at block.c:2712
#7 0x000055d950766633 in bdrv_open_inherit (filename=0x0, reference=0x0, options=0x55d9541cdff0, flags=0, parent=0x0, child_role=0x0, errp=0x7ffd8e19e908) at block.c:3011
#8 0x000055d950766dba in bdrv_open (filename=0x0, reference=0x0, options=0x55d953d00390, flags=0, errp=0x7ffd8e19e908) at block.c:3156
#9 0x000055d9507cb635 in blk_new_open (filename=0x0, reference=0x0, options=0x55d953d00390, flags=0, errp=0x7ffd8e19e908) at block/block-backend.c:389
#10 0x000055d950465ec5 in blockdev_init (file=0x0, bs_opts=0x55d953d00390, errp=0x7ffd8e19e908) at blockdev.c:602

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Message-id: 927aae40b617ba7d4b6c7ffe74e6d7a2595f8e86.1562770546.git.mprivozn@redhat.com
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 95667c3)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
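The state-consistency rule behind this fix can be shown with a minimal standalone C sketch (hypothetical names, not the actual block/nvme.c code): bump the counter only after the queue was actually created, so a failed creation leaves cleanup with nothing stale to free:

```c
#include <stdlib.h>

#define MAX_QUEUES 4

typedef struct {
    void *queues[MAX_QUEUES];
    int nr_queues;
} Ctrl;

static void *create_queue(int idx)
{
    (void)idx;
    return NULL;  /* simulate admin queue creation failing */
}

static int ctrl_init(Ctrl *c)
{
    void *q = create_queue(c->nr_queues);
    if (!q) {
        /* Before the fix: nr_queues was bumped before creation, so
         * cleanup walked one slot too many and hit a NULL queue. */
        return -1;
    }
    c->queues[c->nr_queues++] = q;  /* bump only on success */
    return 0;
}

static void ctrl_close(Ctrl *c)
{
    for (int i = 0; i < c->nr_queues; i++) {
        free(c->queues[i]);  /* only ever sees queues that exist */
    }
}

int main(void)
{
    Ctrl c = {0};
    if (ctrl_init(&c) < 0) {
        ctrl_close(&c);  /* safe: nr_queues is still 0 */
    }
    return 0;
}
```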
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
In the previous commit we fixed a crash when the guest reads a register that pops from an empty FIFO. By auditing the repository, we found another similar use with an easy way to reproduce:
$ qemu-system-aarch64 -M xlnx-zcu102 -monitor stdio -S
QEMU 4.0.50 monitor - type 'help' for more information
(qemu) xp/b 0xfd4a0134
Aborted (core dumped)
(gdb) bt
#0  0x00007f6936dea57f in raise () at /lib64/libc.so.6
#1  0x00007f6936dd4895 in abort () at /lib64/libc.so.6
#2  0x0000561ad32975ec in xlnx_dp_aux_pop_rx_fifo (s=0x7f692babee70) at hw/display/xlnx_dp.c:431
#3  0x0000561ad3297dc0 in xlnx_dp_read (opaque=0x7f692babee70, offset=77, size=4) at hw/display/xlnx_dp.c:667
#4  0x0000561ad321b896 in memory_region_read_accessor (mr=0x7f692babf620, addr=308, value=0x7ffe05c1db88, size=4, shift=0, mask=4294967295, attrs=...) at memory.c:439
#5  0x0000561ad321bd70 in access_with_adjusted_size (addr=308, value=0x7ffe05c1db88, size=1, access_size_min=4, access_size_max=4, access_fn=0x561ad321b858 <memory_region_read_accessor>, mr=0x7f692babf620, attrs=...) at memory.c:569
#6  0x0000561ad321e9d5 in memory_region_dispatch_read1 (mr=0x7f692babf620, addr=308, pval=0x7ffe05c1db88, size=1, attrs=...) at memory.c:1420
#7  0x0000561ad321ea9d in memory_region_dispatch_read (mr=0x7f692babf620, addr=308, pval=0x7ffe05c1db88, size=1, attrs=...) at memory.c:1447
#8  0x0000561ad31bd742 in flatview_read_continue (fv=0x561ad69c04f0, addr=4249485620, attrs=..., buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", len=1, addr1=308, l=1, mr=0x7f692babf620) at exec.c:3385
#9  0x0000561ad31bd895 in flatview_read (fv=0x561ad69c04f0, addr=4249485620, attrs=..., buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", len=1) at exec.c:3423
#10 0x0000561ad31bd90b in address_space_read_full (as=0x561ad5bb3020, addr=4249485620, attrs=..., buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", len=1) at exec.c:3436
#11 0x0000561ad33b1c42 in address_space_read (len=1, buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", attrs=..., addr=4249485620, as=0x561ad5bb3020) at include/exec/memory.h:2131
#12 0x0000561ad33b1c42 in memory_dump (mon=0x561ad59c4530, count=1, format=120, wsize=1, addr=4249485620, is_physical=1) at monitor/misc.c:723
#13 0x0000561ad33b1fc1 in hmp_physical_memory_dump (mon=0x561ad59c4530, qdict=0x561ad6c6fd00) at monitor/misc.c:795
#14 0x0000561ad37b4a9f in handle_hmp_command (mon=0x561ad59c4530, cmdline=0x561ad59d0f22 "/b 0x00000000fd4a0134") at monitor/hmp.c:1082
Fix by checking that the FIFO is not empty before popping from it (the same guard pattern sketched above). The datasheet is not clear about the reset value of this register, so we choose to return '0'.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20190709113715.7761-4-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit a09ef50)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
This fixes a heap-use-after-free found by clang's ASAN.
Control flow of this use-after-free:
main_thread:
* Gets SIGTERM and completes the main loop
* Calls migration_shutdown
- migrate_fd_cancel (so, migration_thread begins to complete)
- object_unref(OBJECT(current_migration));
migration_thread:
* migration_iteration_finish -> schedule cleanup bh
* object_unref(OBJECT(s)); (Now, current_migration is freed)
* exits
main_thread:
* Calls vm_shutdown -> drain bdrvs -> main loop
-> cleanup_bh -> use after free
If you want to reproduce it, these two sleeps will help:
vl.c:4613:
migration_shutdown();
+ sleep(2);
migration.c:3269:
+ sleep(1);
trace_migration_thread_after_loop();
migration_iteration_finish(s);
Original output:
qemu-system-x86_64: terminating on signal 15 from pid 31980 (<unknown process>)
=================================================================
==31958==ERROR: AddressSanitizer: heap-use-after-free on address 0x61900001d210
at pc 0x555558a535ca bp 0x7fffffffb190 sp 0x7fffffffb188
READ of size 8 at 0x61900001d210 thread T0 (qemu-vm-0)
#0 0x555558a535c9 in migrate_fd_cleanup migration/migration.c:1502:23
#1 0x5555594fde0a in aio_bh_call util/async.c:90:5
#2 0x5555594fe522 in aio_bh_poll util/async.c:118:13
#3 0x555559524783 in aio_poll util/aio-posix.c:725:17
#4 0x555559504fb3 in aio_wait_bh_oneshot util/aio-wait.c:71:5
#5 0x5555573bddf6 in virtio_blk_data_plane_stop
hw/block/dataplane/virtio-blk.c:282:5
#6 0x5555589d5c09 in virtio_bus_stop_ioeventfd hw/virtio/virtio-bus.c:246:9
#7 0x5555589e9917 in virtio_pci_stop_ioeventfd hw/virtio/virtio-pci.c:287:5
#8 0x5555589e22bf in virtio_pci_vmstate_change hw/virtio/virtio-pci.c:1072:9
#9 0x555557628931 in virtio_vmstate_change hw/virtio/virtio.c:2257:9
#10 0x555557c36713 in vm_state_notify vl.c:1605:9
#11 0x55555716ef53 in do_vm_stop cpus.c:1074:9
#12 0x55555716eeff in vm_shutdown cpus.c:1092:12
#13 0x555557c4283e in main vl.c:4617:5
#14 0x7fffdfdb482f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
#15 0x555556ecb118 in _start (x86_64-softmmu/qemu-system-x86_64+0x1977118)
0x61900001d210 is located 144 bytes inside of 952-byte region
[0x61900001d180,0x61900001d538)
freed by thread T6 (live_migration) here:
#0 0x555556f76782 in __interceptor_free
/tmp/final/llvm.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:124:3
#1 0x555558d5fa94 in object_finalize qom/object.c:618:9
#2 0x555558d57651 in object_unref qom/object.c:1068:9
#3 0x555558a55588 in migration_thread migration/migration.c:3272:5
#4 0x5555595393f2 in qemu_thread_start util/qemu-thread-posix.c:502:9
#5 0x7fffe057f6b9 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x76b9)
previously allocated by thread T0 (qemu-vm-0) here:
#0 0x555556f76b03 in __interceptor_malloc
/tmp/final/llvm.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:146:3
#1 0x7ffff6ee37b8 in g_malloc (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x4f7b8)
#2 0x555558d58031 in object_new qom/object.c:640:12
#3 0x555558a31f21 in migration_object_init migration/migration.c:139:25
#4 0x555557c41398 in main vl.c:4320:5
#5 0x7fffdfdb482f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
Thread T6 (live_migration) created by T0 (qemu-vm-0) here:
#0 0x555556f5f0dd in pthread_create
/tmp/final/llvm.src/projects/compiler-rt/lib/asan/asan_interceptors.cc:210:3
#1 0x555559538cf9 in qemu_thread_create util/qemu-thread-posix.c:539:11
#2 0x555558a53304 in migrate_fd_connect migration/migration.c:3332:5
#3 0x555558a72bd8 in migration_channel_connect migration/channel.c:92:5
#4 0x555558a6ef87 in exec_start_outgoing_migration migration/exec.c:42:5
#5 0x555558a4f3c2 in qmp_migrate migration/migration.c:1922:9
#6 0x555558bb4f6a in qmp_marshal_migrate qapi/qapi-commands-migration.c:607:5
#7 0x555559363738 in do_qmp_dispatch qapi/qmp-dispatch.c:131:5
#8 0x555559362a15 in qmp_dispatch qapi/qmp-dispatch.c:174:11
#9 0x5555571bac15 in monitor_qmp_dispatch monitor.c:4124:11
#10 0x55555719a22d in monitor_qmp_bh_dispatcher monitor.c:4207:9
#11 0x5555594fde0a in aio_bh_call util/async.c:90:5
#12 0x5555594fe522 in aio_bh_poll util/async.c:118:13
#13 0x5555595201e0 in aio_dispatch util/aio-posix.c:460:5
#14 0x555559503553 in aio_ctx_dispatch util/async.c:261:5
#15 0x7ffff6ede196 in g_main_context_dispatch (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x4a196)
SUMMARY: AddressSanitizer: heap-use-after-free migration/migration.c:1502:23
in migrate_fd_cleanup
Shadow bytes around the buggy address:
0x0c327fffb9f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c327fffba00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c327fffba10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c327fffba20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c327fffba30: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
=>0x0c327fffba40: fd fd[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fffba50: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fffba60: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fffba70: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fffba80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fffba90: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==31958==ABORTING
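The message stops at the report, but the shape of the cure follows from it: the scheduled cleanup BH must own a reference to the MigrationState so a concurrent object_unref() cannot free it underneath the BH. A sketch of that pattern using QOM reference counting (names are a sketch of the idea; see the cherry-picked commit fd392cf for the actual change):
/* Sketch: keep the object alive until the deferred cleanup has run. */
static void migrate_fd_cleanup_bh(void *opaque)
{
    MigrationState *s = opaque;
    migrate_fd_cleanup(s);
    object_unref(OBJECT(s));    /* drop the reference taken below */
}

static void migrate_fd_cleanup_schedule(MigrationState *s)
{
    object_ref(OBJECT(s));      /* the BH now owns a reference */
    qemu_bh_schedule(s->cleanup_bh);
}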
Signed-off-by: Yury Kotov <yury-kotov@yandex-team.ru>
Message-Id: <20190408113343.2370-1-yury-kotov@yandex-team.ru>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Fixed up comment formatting
(cherry picked from commit fd392cf)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
When 'system_reset' is called, the main loop clears the memory
region cache before the BH has a chance to execute. Later, when
the deferred function is called, some assumptions that were
made when scheduling it are no longer true when it actually
executes.
This is what happens using a virtio-blk device (fresh RHEL7.8 install):
$ (sleep 12.3; echo system_reset; sleep 12.3; echo system_reset; sleep 1; echo q) \
| qemu-system-x86_64 -m 4G -smp 8 -boot menu=on \
-device virtio-blk-pci,id=image1,drive=drive_image1 \
-drive file=/var/lib/libvirt/images/rhel78.qcow2,if=none,id=drive_image1,format=qcow2,cache=none \
-device virtio-net-pci,netdev=net0,id=nic0,mac=52:54:00:c4:e7:84 \
-netdev tap,id=net0,script=/bin/true,downscript=/bin/true,vhost=on \
-monitor stdio -serial null -nographic
(qemu) system_reset
(qemu) system_reset
(qemu) qemu-system-x86_64: hw/virtio/virtio.c:225: vring_get_region_caches: Assertion `caches != NULL' failed.
Aborted
(gdb) bt
Thread 1 (Thread 0x7f109c17b680 (LWP 10939)):
#0 0x00005604083296d1 in vring_get_region_caches (vq=0x56040a24bdd0) at hw/virtio/virtio.c:227
#1 0x000056040832972b in vring_avail_flags (vq=0x56040a24bdd0) at hw/virtio/virtio.c:235
#2 0x000056040832d13d in virtio_should_notify (vdev=0x56040a240630, vq=0x56040a24bdd0) at hw/virtio/virtio.c:1648
#3 0x000056040832d1f8 in virtio_notify_irqfd (vdev=0x56040a240630, vq=0x56040a24bdd0) at hw/virtio/virtio.c:1662
#4 0x00005604082d213d in notify_guest_bh (opaque=0x56040a243ec0) at hw/block/dataplane/virtio-blk.c:75
#5 0x000056040883dc35 in aio_bh_call (bh=0x56040a243f10) at util/async.c:90
#6 0x000056040883dccd in aio_bh_poll (ctx=0x560409161980) at util/async.c:118
#7 0x0000560408842af7 in aio_dispatch (ctx=0x560409161980) at util/aio-posix.c:460
#8 0x000056040883e068 in aio_ctx_dispatch (source=0x560409161980, callback=0x0, user_data=0x0) at util/async.c:261
#9 0x00007f10a8fca06d in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#10 0x0000560408841445 in glib_pollfds_poll () at util/main-loop.c:215
#11 0x00005604088414bf in os_host_main_loop_wait (timeout=0) at util/main-loop.c:238
#12 0x00005604088415c4 in main_loop_wait (nonblocking=0) at util/main-loop.c:514
#13 0x0000560408416b1e in main_loop () at vl.c:1923
#14 0x000056040841e0e8 in main (argc=20, argv=0x7ffc2c3f9c58, envp=0x7ffc2c3f9d00) at vl.c:4578
Fix this by cancelling the BH when the virtio dataplane is stopped.
[This version of the patch was modified as discussed with Philippe on
the mailing list thread.
--Stefan]
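The one-line idea behind the fix: a deferred notification must not survive the dataplane that scheduled it. A minimal sketch, assuming the dataplane keeps its notifier BH in an s->bh field (not the literal patch):
/* Sketch: cancel the pending guest-notify BH at dataplane stop, so it
 * cannot run after system_reset has cleared the vring caches. */
static void dataplane_cancel_notify(VirtIOBlockDataPlane *s)
{
    qemu_bh_cancel(s->bh);
}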
Reported-by: Yihuang Yu <yihyu@redhat.com>
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Fixes: https://bugs.launchpad.net/qemu/+bug/1839428
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190816171503.24761-1-philmd@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit ebb6ff2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
Spotted by ASAN, during make check...
Direct leak of 40 byte(s) in 1 object(s) allocated from:
#0 0x7f8e27262c48 in malloc (/lib64/libasan.so.5+0xeec48)
#1 0x7f8e26a5f3c5 in g_malloc (/lib64/libglib-2.0.so.0+0x523c5)
#2 0x555ab67078a8 in qstring_from_str /home/elmarco/src/qq/qobject/qstring.c:67
#3 0x555ab67071e4 in qstring_new /home/elmarco/src/qq/qobject/qstring.c:24
#4 0x555ab6713fbf in qstring_from_escaped_str /home/elmarco/src/qq/qobject/json-parser.c:144
#5 0x555ab671738c in parse_literal /home/elmarco/src/qq/qobject/json-parser.c:506
#6 0x555ab67179c3 in parse_value /home/elmarco/src/qq/qobject/json-parser.c:569
#7 0x555ab6715123 in parse_pair /home/elmarco/src/qq/qobject/json-parser.c:306
#8 0x555ab6715483 in parse_object /home/elmarco/src/qq/qobject/json-parser.c:357
#9 0x555ab671798b in parse_value /home/elmarco/src/qq/qobject/json-parser.c:561
#10 0x555ab6717a6b in json_parser_parse_err /home/elmarco/src/qq/qobject/json-parser.c:592
#11 0x555ab4fd4dcf in handle_qmp_command /home/elmarco/src/qq/monitor.c:4257
#12 0x555ab6712c4d in json_message_process_token /home/elmarco/src/qq/qobject/json-streamer.c:105
#13 0x555ab67e01e2 in json_lexer_feed_char /home/elmarco/src/qq/qobject/json-lexer.c:323
#14 0x555ab67e0af6 in json_lexer_feed /home/elmarco/src/qq/qobject/json-lexer.c:373
#15 0x555ab6713010 in json_message_parser_feed /home/elmarco/src/qq/qobject/json-streamer.c:124
#16 0x555ab4fd58ec in monitor_qmp_read /home/elmarco/src/qq/monitor.c:4337
#17 0x555ab6559df2 in qemu_chr_be_write_impl /home/elmarco/src/qq/chardev/char.c:175
#18 0x555ab6559e95 in qemu_chr_be_write /home/elmarco/src/qq/chardev/char.c:187
#19 0x555ab6560127 in fd_chr_read /home/elmarco/src/qq/chardev/char-fd.c:66
#20 0x555ab65d9c73 in qio_channel_fd_source_dispatch /home/elmarco/src/qq/io/channel-watch.c:84
#21 0x7f8e26a598ac in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x4c8ac)
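The trace shows a QString the parser allocated and nobody released; with QObject code the remedy is reference-count hygiene: whoever keeps ownership must eventually drop it. A generic sketch using the qobject_unref() API (purely illustrative; the actual one-liner lives in the cherry-picked commit cb9ec42):
/* Sketch: balanced QString ownership. */
QString *qs = qstring_from_str("example");
/* ... use qs ... */
qobject_unref(qs);   /* without this, ASAN reports exactly such a leak */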
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180809114417.28718-4-marcandre.lureau@redhat.com>
[Screwed up in commit b273145]
Cc: qemu-stable@nongnu.org
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
(cherry picked from commit cb9ec42)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
Let's start from the beginning: Commit b9e413d (in 2.9) "block: explicitly acquire aiocontext in aio callbacks that need it" added pairs of aio_context_acquire/release to mirror_write_complete and mirror_read_complete, when they were aio callbacks for blk_aio_* calls. Then, commit 2e1990b (in 3.0) "block/mirror: Convert to coroutines" dropped these blk_aio_* calls, so mirror_write_complete and mirror_read_complete are no longer callbacks and don't need the additional aio context acquiring. Furthermore, mirror_read_complete calls blk_co_pwritev inside this pair of aio_context_acquire/release, which leads to the following dead-lock with mirror:
(gdb) info thr
  Id  Target Id                              Frame
  3   Thread (LWP 145412) "qemu-system-x86" syscall ()
  2   Thread (LWP 145416) "qemu-system-x86" __lll_lock_wait ()
* 1   Thread (LWP 145411) "qemu-system-x86" __lll_lock_wait ()
(gdb) bt
#0  __lll_lock_wait ()
#1  _L_lock_812 ()
#2  __GI___pthread_mutex_lock
#3  qemu_mutex_lock_impl (mutex=0x561032dce420 <qemu_global_mutex>, file=0x5610327d8654 "util/main-loop.c", line=236) at util/qemu-thread-posix.c:66
#4  qemu_mutex_lock_iothread_impl
#5  os_host_main_loop_wait (timeout=480116000) at util/main-loop.c:236
#6  main_loop_wait (nonblocking=0) at util/main-loop.c:497
#7  main_loop () at vl.c:1892
#8  main
Printing the contents of qemu_global_mutex, I see that "__owner = 145416", so thr1 is the main loop, and now it wants the BQL, which is owned by thr2.
(gdb) thr 2
(gdb) bt
#0  __lll_lock_wait ()
#1  _L_lock_870 ()
#2  __GI___pthread_mutex_lock
#3  qemu_mutex_lock_impl (mutex=0x561034d25dc0, ...
#4  aio_context_acquire (ctx=0x561034d25d60)
#5  dma_blk_cb
#6  dma_blk_io
#7  dma_blk_read
#8  ide_dma_cb
#9  bmdma_cmd_writeb
#10 bmdma_write
#11 memory_region_write_accessor
#12 access_with_adjusted_size
#15 flatview_write
#16 address_space_write
#17 address_space_rw
#18 kvm_handle_io
#19 kvm_cpu_exec
#20 qemu_kvm_cpu_thread_fn
#21 qemu_thread_start
#22 start_thread
#23 clone ()
Printing the mutex in fr 2, I see "__owner = 145411", so thr2 wants the aio context mutex, which is owned by thr1. Classic dead-lock.
Then, let's check that the aio context is held by the mirror coroutine: just print the coroutine stack of the first tracked request in the mirror job target:
(gdb) [...]
(gdb) qemu coroutine 0x561035dd0860
#0  qemu_coroutine_switch
#1  qemu_coroutine_yield
#2  qemu_co_mutex_lock_slowpath
#3  qemu_co_mutex_lock
#4  qcow2_co_pwritev
#5  bdrv_driver_pwritev
#6  bdrv_aligned_pwritev
#7  bdrv_co_pwritev
#8  blk_co_pwritev
#9  mirror_read_complete () at block/mirror.c:232
#10 mirror_co_read () at block/mirror.c:370
#11 coroutine_trampoline
#12 __start_context
Yes, it is mirror_read_complete calling blk_co_pwritev after acquiring the aio context.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit d12ade5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
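The resolution follows directly from the analysis: once mirror_read_complete() runs in coroutine context, the leftover aio_context_acquire()/aio_context_release() pair is not just unnecessary but deadlock-prone, because blk_co_pwritev() can yield while the lock is held. A simplified sketch of the fixed shape, with the error path condensed (see the cherry-picked commit d12ade5 for the real diff):
/* Sketch: in a coroutine, call blk_co_pwritev() directly, without
 * wrapping it in aio_context_acquire()/aio_context_release(). */
static void coroutine_fn mirror_read_complete(MirrorOp *op, int ret)
{
    MirrorBlockJob *s = op->s;

    if (ret < 0) {
        mirror_iteration_done(op, ret);   /* condensed error handling */
        return;
    }
    ret = blk_co_pwritev(s->target, op->offset,
                         op->qiov.size, &op->qiov, 0);
    mirror_write_complete(op, ret);
}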
stephanosio pushed a commit that referenced this pull request on Dec 27, 2021
When using the 9P2000.u version of the protocol, the following shell
command line in the guest can cause QEMU to crash:
while true; do rm -rf aa; mkdir -p a/b & touch a/b/c & mv a aa; done
With 9P2000.u, file renaming is handled by the WSTAT command. The
v9fs_wstat() function calls v9fs_complete_rename(), which calls
v9fs_fix_path() for every fid whose path is affected by the change.
The involved calls to v9fs_path_copy() may race with any other access
to the fid path performed by some worker thread, causing a crash like
the one shown below:
Thread 12 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
0x0000555555a25da2 in local_open_nofollow (fs_ctx=0x555557d958b8, path=0x0,
flags=65536, mode=0) at hw/9pfs/9p-local.c:59
59 while (*path && fd != -1) {
(gdb) bt
#0 0x0000555555a25da2 in local_open_nofollow (fs_ctx=0x555557d958b8,
path=0x0, flags=65536, mode=0) at hw/9pfs/9p-local.c:59
#1 0x0000555555a25e0c in local_opendir_nofollow (fs_ctx=0x555557d958b8,
path=0x0) at hw/9pfs/9p-local.c:92
#2 0x0000555555a261b8 in local_lstat (fs_ctx=0x555557d958b8,
fs_path=0x555556b56858, stbuf=0x7fff84830ef0) at hw/9pfs/9p-local.c:185
#3 0x0000555555a2b367 in v9fs_co_lstat (pdu=0x555557d97498,
path=0x555556b56858, stbuf=0x7fff84830ef0) at hw/9pfs/cofile.c:53
#4 0x0000555555a1e9e2 in v9fs_stat (opaque=0x555557d97498)
at hw/9pfs/9p.c:1083
#5 0x0000555555e060a2 in coroutine_trampoline (i0=-669165424, i1=32767)
at util/coroutine-ucontext.c:116
#6 0x00007fffef4f5600 in __start_context () at /lib64/libc.so.6
#7 0x0000000000000000 in ()
(gdb)
The fix is to take the path write lock when calling v9fs_complete_rename(),
like in v9fs_rename().
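The fix is a straightforward application of the existing rename_lock helpers: the WSTAT path takes the same write lock v9fs_rename() already takes before rewriting fid paths. A condensed sketch (v9fs_path_write_lock()/v9fs_path_unlock() are the existing 9pfs helpers; the surrounding function body is omitted):
/* Sketch: serialize the path rewrite against concurrent fid users. */
v9fs_path_write_lock(s);
err = v9fs_complete_rename(pdu, fidp, -1, &v9stat.name);
v9fs_path_unlock(s);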
Impact: DoS triggered by unprivileged guest users.
Fixes: CVE-2018-19489
Cc: P J P <ppandit@redhat.com>
Reported-by: zhibin hu <noirfate@gmail.com>
Reviewed-by: Prasad J Pandit <pjp@fedoraproject.org>
Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 1d20398)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
nashif pushed a commit that referenced this pull request on May 16, 2025
ASAN detected a leak when running the ahci-test
/ahci/io/dma/lba28/retry:
Direct leak of 35 byte(s) in 1 object(s) allocated from:
#0 in malloc
#1 in __vasprintf_internal
#2 in vasprintf
#3 in g_vasprintf
#4 in g_strdup_vprintf
#5 in g_strdup_printf
#6 in object_get_canonical_path ../qom/object.c:2096:19
#7 in blk_get_attached_dev_id_or_path ../block/block-backend.c:1033:12
#8 in blk_get_attached_dev_path ../block/block-backend.c:1047:12
#9 in send_qmp_error_event ../block/block-backend.c:2140:36
#10 in blk_error_action ../block/block-backend.c:2172:9
#11 in ide_handle_rw_error ../hw/ide/core.c:875:5
#12 in ide_dma_cb ../hw/ide/core.c:894:13
#13 in dma_complete ../system/dma-helpers.c:107:9
#14 in dma_blk_cb ../system/dma-helpers.c:129:9
#15 in blk_aio_complete ../block/block-backend.c:1552:9
#16 in blk_aio_write_entry ../block/block-backend.c:1619:5
#17 in coroutine_trampoline ../util/coroutine-ucontext.c:175:9
Plug the leak by freeing the device path string.
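The '[PMD: Use g_autofree]' note below refers to GLib's cleanup attribute, which frees the string automatically when it leaves scope, covering every return path. A minimal sketch of the pattern (the surrounding function is illustrative, not the patched code):
/* Sketch: g_autofree releases the device path string on scope exit. */
static void log_device_path(BlockBackend *blk)
{
    g_autofree char *path = blk_get_attached_dev_path(blk);
    fprintf(stderr, "I/O error on device %s\n", path);
}   /* path is freed here automatically */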
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20241111145214.8261-1-farosas@suse.de>
[PMD: Use g_autofree]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20241111170333.43833-3-philmd@linaro.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 23ea425)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This helps with looking at emulator outputs (from both running
locally and from sanitycheck), as clearing the screen and disabling line
wrapping affect the developer's terminal.
Fixes zephyrproject-rtos/zephyr#11717
The code can be found at https://github.com/dcpleung/seabios/tree/zephyr