qemu-m68k: fix '-strace' not to accept arguments #5
Merged
Conversation
Noticed today when run in strace mode:

    $ qemu-m68k-git -L /usr/m68k-unknown-linux-gnu/ -strace /tmp/a +RTS -D{s,i,w,G,g,b,S,t,p,a,l,m,z,c,r}
    Error while loading +RTS: No such file or directory

Upstream handles the option correctly:

    $ qemu-m68k -L /usr/m68k-unknown-linux-gnu/ -strace /tmp/a +RTS -D{s,i,w,G,g,b,S,t,p,a,l,m,z,c,r}
    qemu: fatal: Illegal instruction: ebc0 @ f67e36a8

Signed-off-by: Sergei Trofimovich <siarheit@google.com>
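The underlying bug class is an option-table entry that wrongly consumes the next argument, so `-strace` swallows the guest binary path that follows it. A minimal, self-contained sketch of the pattern (hypothetical table and names, not QEMU's actual linux-user parser):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct opt {
    const char *name;
    bool has_arg;      /* does this option consume the next argv element? */
};

/* '-strace' is a plain flag: marking it has_arg would make the parser
 * skip the guest program path that follows it on the command line. */
static const struct opt opts[] = {
    { "-L",      true  },   /* takes a directory argument */
    { "-strace", false },   /* takes no argument */
};

int main(int argc, char **argv)
{
    int i = 1;
    while (i < argc && argv[i][0] == '-') {
        bool matched = false;
        for (size_t j = 0; j < sizeof(opts) / sizeof(opts[0]); j++) {
            if (strcmp(argv[i], opts[j].name) == 0) {
                printf("option %s%s\n", opts[j].name,
                       opts[j].has_arg ? " (with argument)" : "");
                i += opts[j].has_arg ? 2 : 1;   /* advance past any argument */
                matched = true;
                break;
            }
        }
        if (!matched) {
            break;   /* first non-option: the guest binary */
        }
    }
    if (i < argc) {
        printf("guest binary: %s\n", argv[i]);
    }
    return 0;
}
```

With `has_arg` wrongly set to true for `-strace`, the scan would skip `/tmp/a` and try to load `+RTS` as the program, matching the error above.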
vivier added a commit that referenced this pull request on Mar 5, 2016
qemu-m68k: fix '-strace' not to accept arguments
Thanks for the fix. If you agree, I will merge it into the commit (59082aa) that introduced the problem on my next rebase.
Sure. No problem!
vivier pushed a commit that referenced this pull request on Mar 20, 2016
"name" is freed after visiting options, instead use the first NetClientState name. Adds a few assert() for clarifying and checking some impossible states. READ of size 1 at 0x602000000990 thread T0 #0 0x7f6b251c570c (/lib64/libasan.so.2+0x4770c) #1 0x5566dc380600 in qemu_find_net_clients_except net/net.c:824 #2 0x5566dc39bac7 in net_vhost_user_event net/vhost-user.c:193 #3 0x5566dbee862a in qemu_chr_be_event /home/elmarco/src/qemu/qemu-char.c:201 #4 0x5566dbef2890 in tcp_chr_disconnect /home/elmarco/src/qemu/qemu-char.c:2790 #5 0x5566dbef2d0b in tcp_chr_sync_read /home/elmarco/src/qemu/qemu-char.c:2835 #6 0x5566dbee8a99 in qemu_chr_fe_read_all /home/elmarco/src/qemu/qemu-char.c:295 #7 0x5566dc39b964 in net_vhost_user_watch net/vhost-user.c:180 #8 0x5566dc5a06c7 in qio_channel_fd_source_dispatch io/channel-watch.c:70 #9 0x7f6b1aa2ab87 in g_main_dispatch /home/elmarco/src/gnome/glib/glib/gmain.c:3154 #10 0x7f6b1aa2b9cb in g_main_context_dispatch /home/elmarco/src/gnome/glib/glib/gmain.c:3769 #11 0x5566dc475ed4 in glib_pollfds_poll /home/elmarco/src/qemu/main-loop.c:212 #12 0x5566dc476029 in os_host_main_loop_wait /home/elmarco/src/qemu/main-loop.c:257 #13 0x5566dc476165 in main_loop_wait /home/elmarco/src/qemu/main-loop.c:505 #14 0x5566dbf08d31 in main_loop /home/elmarco/src/qemu/vl.c:1932 #15 0x5566dbf16783 in main /home/elmarco/src/qemu/vl.c:4646 #16 0x7f6b180bb57f in __libc_start_main (/lib64/libc.so.6+0x2057f) #17 0x5566dbbf5348 in _start (/home/elmarco/src/qemu/x86_64-softmmu/qemu-system-x86_64+0x3f9348) 0x602000000990 is located 0 bytes inside of 5-byte region [0x602000000990,0x602000000995) freed by thread T0 here: #0 0x7f6b2521666a in __interceptor_free (/lib64/libasan.so.2+0x9866a) #1 0x7f6b1aa332a4 in g_free /home/elmarco/src/gnome/glib/glib/gmem.c:189 #2 0x5566dc5f416f in qapi_dealloc_type_str qapi/qapi-dealloc-visitor.c:134 #3 0x5566dc5f3268 in visit_type_str qapi/qapi-visit-core.c:196 #4 0x5566dc5ced58 in visit_type_Netdev_fields /home/elmarco/src/qemu/qapi-visit.c:5936 #5 0x5566dc5cef71 in visit_type_Netdev /home/elmarco/src/qemu/qapi-visit.c:5960 #6 0x5566dc381a8d in net_visit net/net.c:1049 #7 0x5566dc381c37 in net_client_init net/net.c:1076 #8 0x5566dc3839e2 in net_init_netdev net/net.c:1473 #9 0x5566dc63cc0a in qemu_opts_foreach util/qemu-option.c:1112 #10 0x5566dc383b36 in net_init_clients net/net.c:1499 #11 0x5566dbf15d86 in main /home/elmarco/src/qemu/vl.c:4397 #12 0x7f6b180bb57f in __libc_start_main (/lib64/libc.so.6+0x2057f) Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
vivier pushed a commit that referenced this pull request on Jun 22, 2016
Back in the 2.3.0 release we declared qcow[2] encryption as deprecated, warning people that it would be removed in a future release.

    commit a1f688f
    Author: Markus Armbruster <armbru@redhat.com>
    Date: Fri Mar 13 21:09:40 2015 +0100

        block: Deprecate QCOW/QCOW2 encryption

The code still exists today, but by a (happy?) accident we entirely broke the ability to use qcow[2] encryption in the system emulators in the 2.4.0 release due to

    commit 8336aaf
    Author: Daniel P. Berrange <berrange@redhat.com>
    Date: Tue May 12 17:09:18 2015 +0100

        qcow2/qcow: protect against uninitialized encryption key

This commit was designed to prevent future coding bugs which might cause QEMU to read/write data on an encrypted block device in plain text mode before a decryption key is set. It turns out this preventative measure was a little too good, because we already had a long standing bug where QEMU read encrypted data in plain text mode during system emulator startup, in order to guess disk geometry:

Thread 10 (Thread 0x7fffd3fff700 (LWP 30373)):
#0 0x00007fffe90b1a28 in raise () at /lib64/libc.so.6
#1 0x00007fffe90b362a in abort () at /lib64/libc.so.6
#2 0x00007fffe90aa227 in __assert_fail_base () at /lib64/libc.so.6
#3 0x00007fffe90aa2d2 in () at /lib64/libc.so.6
#4 0x000055555587ae19 in qcow2_co_readv (bs=0x5555562accb0, sector_num=0, remaining_sectors=1, qiov=0x7fffffffd260) at block/qcow2.c:1229
#5 0x000055555589b60d in bdrv_aligned_preadv (bs=bs@entry=0x5555562accb0, req=req@entry=0x7fffd3ffea50, offset=offset@entry=0, bytes=bytes@entry=512, align=align@entry=512, qiov=qiov@entry=0x7fffffffd260, flags=0) at block/io.c:908
#6 0x000055555589b8bc in bdrv_co_do_preadv (bs=0x5555562accb0, offset=0, bytes=512, qiov=0x7fffffffd260, flags=<optimized out>) at block/io.c:999
#7 0x000055555589c375 in bdrv_rw_co_entry (opaque=0x7fffffffd210) at block/io.c:544
#8 0x000055555586933b in coroutine_thread (opaque=0x555557876310) at coroutine-gthread.c:134
#9 0x00007ffff64e1835 in g_thread_proxy (data=0x5555562b5590) at gthread.c:778
#10 0x00007ffff6bb760a in start_thread () at /lib64/libpthread.so.0
#11 0x00007fffe917f59d in clone () at /lib64/libc.so.6

Thread 1 (Thread 0x7ffff7ecab40 (LWP 30343)):
#0 0x00007fffe91797a9 in syscall () at /lib64/libc.so.6
#1 0x00007ffff64ff87f in g_cond_wait (cond=cond@entry=0x555555e085f0 <coroutine_cond>, mutex=mutex@entry=0x555555e08600 <coroutine_lock>) at gthread-posix.c:1397
#2 0x00005555558692c3 in qemu_coroutine_switch (co=<optimized out>) at coroutine-gthread.c:117
#3 0x00005555558692c3 in qemu_coroutine_switch (from_=0x5555562b5e30, to_=to_@entry=0x555557876310, action=action@entry=COROUTINE_ENTER) at coroutine-gthread.c:175
#4 0x0000555555868a90 in qemu_coroutine_enter (co=0x555557876310, opaque=0x0) at qemu-coroutine.c:116
#5 0x0000555555859b84 in thread_pool_completion_bh (opaque=0x7fffd40010e0) at thread-pool.c:187
#6 0x0000555555859514 in aio_bh_poll (ctx=ctx@entry=0x5555562953b0) at async.c:85
#7 0x0000555555864d10 in aio_dispatch (ctx=ctx@entry=0x5555562953b0) at aio-posix.c:135
#8 0x0000555555864f75 in aio_poll (ctx=ctx@entry=0x5555562953b0, blocking=blocking@entry=true) at aio-posix.c:291
#9 0x000055555589c40d in bdrv_prwv_co (bs=bs@entry=0x5555562accb0, offset=offset@entry=0, qiov=qiov@entry=0x7fffffffd260, is_write=is_write@entry=false, flags=flags@entry=(unknown: 0)) at block/io.c:591
#10 0x000055555589c503 in bdrv_rw_co (bs=bs@entry=0x5555562accb0, sector_num=sector_num@entry=0, buf=buf@entry=0x7fffffffd2e0 "\321,", nb_sectors=nb_sectors@entry=21845, is_write=is_write@entry=false, flags=flags@entry=(unknown: 0)) at block/io.c:614
#11 0x000055555589c562 in bdrv_read_unthrottled (nb_sectors=21845, buf=0x7fffffffd2e0 "\321,", sector_num=0, bs=0x5555562accb0) at block/io.c:622
#12 0x000055555589c562 in bdrv_read_unthrottled (bs=0x5555562accb0, sector_num=sector_num@entry=0, buf=buf@entry=0x7fffffffd2e0 "\321,", nb_sectors=nb_sectors@entry=21845) at block/io.c:634
nb_sectors@entry=1) at block/block-backend.c:504
#14 0x0000555555752e9f in guess_disk_lchs (blk=blk@entry=0x5555562a5290, pcylinders=pcylinders@entry=0x7fffffffd52c, pheads=pheads@entry=0x7fffffffd530, psectors=psectors@entry=0x7fffffffd534) at hw/block/hd-geometry.c:68
#15 0x0000555555752ff7 in hd_geometry_guess (blk=0x5555562a5290, pcyls=pcyls@entry=0x555557875d1c, pheads=pheads@entry=0x555557875d20, psecs=psecs@entry=0x555557875d24, ptrans=ptrans@entry=0x555557875d28) at hw/block/hd-geometry.c:133
#16 0x0000555555752b87 in blkconf_geometry (conf=conf@entry=0x555557875d00, ptrans=ptrans@entry=0x555557875d28, cyls_max=cyls_max@entry=65536, heads_max=heads_max@entry=16, secs_max=secs_max@entry=255, errp=errp@entry=0x7fffffffd5e0) at hw/block/block.c:71
#17 0x0000555555799bc4 in ide_dev_initfn (dev=0x555557875c80, kind=IDE_HD) at hw/ide/qdev.c:174
#18 0x0000555555768394 in device_realize (dev=0x555557875c80, errp=0x7fffffffd640) at hw/core/qdev.c:247
#19 0x0000555555769a81 in device_set_realized (obj=0x555557875c80, value=<optimized out>, errp=0x7fffffffd730) at hw/core/qdev.c:1058
#20 0x00005555558240ce in property_set_bool (obj=0x555557875c80, v=<optimized out>, opaque=0x555557875de0, name=<optimized out>, errp=0x7fffffffd730) at qom/object.c:1514
#21 0x0000555555826c87 in object_property_set_qobject (obj=obj@entry=0x555557875c80, value=value@entry=0x55555784bcb0, name=name@entry=0x55555591cb3d "realized", errp=errp@entry=0x7fffffffd730) at qom/qom-qobject.c:24
#22 0x0000555555825760 in object_property_set_bool (obj=obj@entry=0x555557875c80, value=value@entry=true, name=name@entry=0x55555591cb3d "realized", errp=errp@entry=0x7fffffffd730) at qom/object.c:905
#23 0x000055555576897b in qdev_init_nofail (dev=dev@entry=0x555557875c80) at hw/core/qdev.c:380
#24 0x0000555555799ead in ide_create_drive (bus=bus@entry=0x555557629630, unit=unit@entry=0, drive=0x5555562b77e0) at hw/ide/qdev.c:122
#25 0x000055555579a746 in pci_ide_create_devs (dev=dev@entry=0x555557628db0, hd_table=hd_table@entry=0x7fffffffd830) at hw/ide/pci.c:440
#26 0x000055555579b165 in pci_piix3_ide_init (bus=<optimized out>, hd_table=0x7fffffffd830, devfn=<optimized out>) at hw/ide/piix.c:218
#27 0x000055555568ca55 in pc_init1 (machine=0x5555562960a0, pci_enabled=1, kvmclock_enabled=<optimized out>) at /home/berrange/src/virt/qemu/hw/i386/pc_piix.c:256
#28 0x0000555555603ab2 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4249

So the safety net is correctly preventing QEMU reading cipher text as if it were plain text, during startup and aborting QEMU to avoid bad usage of this data. For added fun this bug only happens if the encrypted qcow2 file happens to have data written to the first cluster, otherwise the cluster won't be allocated and so qcow2 would not try the decryption routines at all, just return all 0's.

That no one even noticed, let alone reported, this bug that has shipped in 2.4.0, 2.5.0 and 2.6.0 shows that the number of actual users of encrypted qcow2 is approximately zero.

So rather than fix the crash, and backport it to stable releases, just go ahead with what we have warned users about and disable any use of qcow2 encryption in the system emulators. qemu-img/qemu-io/qemu-nbd are still able to access qcow2 encrypted images for the sake of data conversion.

In the future, qcow2 will gain support for the alternative luks format, but when this happens it'll be using the '-object secret' infrastructure for getting keys, which avoids this problematic scenario entirely.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
vivier pushed a commit that referenced this pull request on Jul 21, 2016
It turns out qemu is calling exit() in various places from various threads without taking much care of resource state. The atexit() cleanup handlers cannot easily destroy resources that are in use (by the same thread or another).

Since c1111a2, TCG arm guests run into the following abort() when running tests: the chardev mutex is locked during the write, so qemu_mutex_destroy() returns an error:

#0 0x00007fffdbb806f5 in raise () at /lib64/libc.so.6
#1 0x00007fffdbb822fa in abort () at /lib64/libc.so.6
#2 0x00005555557616fe in error_exit (err=<optimized out>, msg=msg@entry=0x555555c38c30 <__func__.14622> "qemu_mutex_destroy") at /home/drjones/code/qemu/util/qemu-thread-posix.c:39
#3 0x0000555555b0be20 in qemu_mutex_destroy (mutex=mutex@entry=0x5555566aa0e0) at /home/drjones/code/qemu/util/qemu-thread-posix.c:57
#4 0x00005555558aab00 in qemu_chr_free_common (chr=0x5555566aa0e0) at /home/drjones/code/qemu/qemu-char.c:4029
#5 0x00005555558b05f9 in qemu_chr_delete (chr=<optimized out>) at /home/drjones/code/qemu/qemu-char.c:4038
#6 0x00005555558b05f9 in qemu_chr_delete (chr=<optimized out>) at /home/drjones/code/qemu/qemu-char.c:4044
#7 0x00005555558b062c in qemu_chr_cleanup () at /home/drjones/code/qemu/qemu-char.c:4557
#8 0x00007fffdbb851e8 in __run_exit_handlers () at /lib64/libc.so.6
#9 0x00007fffdbb85235 in () at /lib64/libc.so.6
#10 0x00005555558d1b39 in testdev_write (testdev=0x5555566aa0a0) at /home/drjones/code/qemu/backends/testdev.c:71
#11 0x00005555558d1b39 in testdev_write (chr=<optimized out>, buf=0x7fffc343fd9a "", len=0) at /home/drjones/code/qemu/backends/testdev.c:95
#12 0x00005555558adced in qemu_chr_fe_write (s=0x5555566aa0e0, buf=buf@entry=0x7fffc343fd98 "0q", len=len@entry=2) at /home/drjones/code/qemu/qemu-char.c:282

Instead of using an atexit() handler, only run the chardev cleanup as initially proposed at the end of main(), where there are fewer chances (hic) of conflicts or other races.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reported-by: Andrew Jones <drjones@redhat.com>
Message-Id: <20160704153823.16879-1-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
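Why qemu_mutex_destroy() fails here: destroying a locked mutex is undefined behaviour in POSIX, and glibc typically reports EBUSY, which QEMU's wrapper turns into error_exit(). A small standalone demonstration using an error-checking pthread mutex (build with -pthread):

```c
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

int main(void)
{
    pthread_mutex_t m;
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&m, &attr);

    pthread_mutex_lock(&m);

    /* Destroying a locked mutex is UB per POSIX; glibc with an
     * error-checking mutex refuses with EBUSY - the error that
     * qemu_mutex_destroy() turns into the abort() above. */
    int r = pthread_mutex_destroy(&m);
    printf("destroy while locked: %s\n",
           r == EBUSY ? "EBUSY (refused)" : "succeeded");

    if (r != 0) {
        pthread_mutex_unlock(&m);
        pthread_mutex_destroy(&m);   /* now legal: unlocked */
    }
    pthread_mutexattr_destroy(&attr);
    return 0;
}
```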
vivier pushed a commit that referenced this pull request on Sep 4, 2016
When VNC is started with '-vnc none' there will be no listener socket present. When we try to populate the VncServerInfo we'll crash accessing a NULL 'lsock' field.

#0 qio_channel_socket_get_local_address (ioc=0x0, errp=errp@entry=0x7ffd5b8aa0f0) at io/channel-socket.c:33
#1 0x00007f4b9a297d6f in vnc_init_basic_info_from_server_addr (errp=0x7ffd5b8aa0f0, info=0x7f4b9d425460, ioc=<optimized out>) at ui/vnc.c:146
#2 vnc_server_info_get (vd=0x7f4b9e858000) at ui/vnc.c:223
#3 0x00007f4b9a29d318 in vnc_qmp_event (vs=0x7f4b9ef82000, vs=0x7f4b9ef82000, event=QAPI_EVENT_VNC_CONNECTED) at ui/vnc.c:279
#4 vnc_connect (vd=vd@entry=0x7f4b9e858000, sioc=sioc@entry=0x7f4b9e8b3a20, skipauth=skipauth@entry=true, websocket=websocket@entry=false) at ui/vnc.c:2994
#5 0x00007f4b9a29e8c8 in vnc_display_add_client (id=<optimized out>, csock=<optimized out>, skipauth=<optimized out>) at ui/vnc.c:3825
#6 0x00007f4b9a18d8a1 in qmp_marshal_add_client (args=<optimized out>, ret=<optimized out>, errp=0x7ffd5b8aa230) at qmp-marshal.c:123
#7 0x00007f4b9a0b53f5 in handle_qmp_command (parser=<optimized out>, tokens=<optimized out>) at /usr/src/debug/qemu-2.6.0/monitor.c:3922
#8 0x00007f4b9a348580 in json_message_process_token (lexer=0x7f4b9c78dfe8, input=0x7f4b9c7350e0, type=JSON_RCURLY, x=111, y=59) at qobject/json-streamer.c:94
#9 0x00007f4b9a35cfeb in json_lexer_feed_char (lexer=lexer@entry=0x7f4b9c78dfe8, ch=125 '}', flush=flush@entry=false) at qobject/json-lexer.c:310
#10 0x00007f4b9a35d0ae in json_lexer_feed (lexer=0x7f4b9c78dfe8, buffer=<optimized out>, size=<optimized out>) at qobject/json-lexer.c:360
#11 0x00007f4b9a348679 in json_message_parser_feed (parser=<optimized out>, buffer=<optimized out>, size=<optimized out>) at qobject/json-streamer.c:114
#12 0x00007f4b9a0b3a1b in monitor_qmp_read (opaque=<optimized out>, buf=<optimized out>, size=<optimized out>) at /usr/src/debug/qemu-2.6.0/monitor.c:3938
#13 0x00007f4b9a186751 in tcp_chr_read (chan=<optimized out>, cond=<optimized out>, opaque=0x7f4b9c7add40) at qemu-char.c:2895
#14 0x00007f4b92b5c79a in g_main_context_dispatch () from /lib64/libglib-2.0.so.0
#15 0x00007f4b9a2bb0c0 in glib_pollfds_poll () at main-loop.c:213
#16 os_host_main_loop_wait (timeout=<optimized out>) at main-loop.c:258
#17 main_loop_wait (nonblocking=<optimized out>) at main-loop.c:506
#18 0x00007f4b9a0835cf in main_loop () at vl.c:1934
#19 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4667

Do an upfront check for a NULL lsock and report an error to the caller, which matches behaviour from before

    commit 04d2529
    Author: Daniel P. Berrange <berrange@redhat.com>
    Date: Fri Feb 27 16:20:57 2015 +0000

        ui: convert VNC server to use QIOChannelSocket

where getsockname() would be given a FD value -1 and thus report an error to the caller.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1470134726-15697-2-git-send-email-berrange@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
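The shape of the fix is a plain upfront NULL check that turns a crash into a reportable error. A minimal sketch (hypothetical types; the real code deals with QIOChannelSocket and Error **):

```c
#include <stdio.h>

struct listener { int fd; };

/* Report an error instead of dereferencing a missing listener
 * (the '-vnc none' case, where no socket was ever created). */
static int server_local_port(struct listener *lsock, char *err, size_t len)
{
    if (!lsock) {
        snprintf(err, len, "no listener socket available");
        return -1;
    }
    return 5900 + lsock->fd;   /* stand-in for getsockname() */
}

int main(void)
{
    char err[64];
    if (server_local_port(NULL, err, sizeof(err)) < 0) {
        printf("error: %s\n", err);
    }
    return 0;
}
```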
vivier pushed a commit that referenced this pull request on Sep 4, 2016
The vnc_server_info_get will allocate the VncServerInfo struct and then call vnc_init_basic_info_from_server_addr to populate the basic fields. If this returns an error though, the qapi_free_VncServerInfo call will then crash because the VncServerInfo struct instance was not properly NULL-initialized and thus contains random stack garbage.

#0 0x00007f1987c8e6f5 in raise () at /lib64/libc.so.6
#1 0x00007f1987c902fa in abort () at /lib64/libc.so.6
#2 0x00007f1987ccf600 in __libc_message () at /lib64/libc.so.6
#3 0x00007f1987cd7d4a in _int_free () at /lib64/libc.so.6
#4 0x00007f1987cdb2ac in free () at /lib64/libc.so.6
#5 0x00007f198b654f6e in g_free () at /lib64/libglib-2.0.so.0
#6 0x0000559193cdcf54 in visit_type_str (v=v@entry=0x5591972f14b0, name=name@entry=0x559193de1e29 "host", obj=obj@entry=0x5591961dbfa0, errp=errp@entry=0x7fffd7899d80) at qapi/qapi-visit-core.c:255
#7 0x0000559193cca8f3 in visit_type_VncBasicInfo_members (v=v@entry=0x5591972f14b0, obj=obj@entry=0x5591961dbfa0, errp=errp@entry=0x7fffd7899dc0) at qapi-visit.c:12307
#8 0x0000559193ccb523 in visit_type_VncServerInfo_members (v=v@entry=0x5591972f14b0, obj=0x5591961dbfa0, errp=errp@entry=0x7fffd7899e00) at qapi-visit.c:12632
#9 0x0000559193ccb60b in visit_type_VncServerInfo (v=v@entry=0x5591972f14b0, name=name@entry=0x0, obj=obj@entry=0x7fffd7899e48, errp=errp@entry=0x0) at qapi-visit.c:12658
#10 0x0000559193cb53d8 in qapi_free_VncServerInfo (obj=<optimized out>) at qapi-types.c:3970
#11 0x0000559193c1e6ba in vnc_server_info_get (vd=0x7f1951498010) at ui/vnc.c:233
#12 0x0000559193c24275 in vnc_connect (vs=0x559197b2f200, vs=0x559197b2f200, event=QAPI_EVENT_VNC_CONNECTED) at ui/vnc.c:284
#13 0x0000559193c24275 in vnc_connect (vd=vd@entry=0x7f1951498010, sioc=sioc@entry=0x559196bf9c00, skipauth=skipauth@entry=true, websocket=websocket@entry=false) at ui/vnc.c:3039
#14 0x0000559193c25806 in vnc_display_add_client (id=<optimized out>, csock=<optimized out>, skipauth=<optimized out>) at ui/vnc.c:3877
#15 0x0000559193a90c28 in qmp_marshal_add_client (args=<optimized out>, ret=<optimized out>, errp=0x7fffd7899f90) at qmp-marshal.c:105
#16 0x000055919399c2b7 in handle_qmp_command (parser=<optimized out>, tokens=<optimized out>) at /home/berrange/src/virt/qemu/monitor.c:3971
#17 0x0000559193ce3307 in json_message_process_token (lexer=0x559194ab0838, input=0x559194a6d940, type=JSON_RCURLY, x=111, y=12) at qobject/json-streamer.c:105
#18 0x0000559193cfa90d in json_lexer_feed_char (lexer=lexer@entry=0x559194ab0838, ch=125 '}', flush=flush@entry=false) at qobject/json-lexer.c:319
#19 0x0000559193cfaa1e in json_lexer_feed (lexer=0x559194ab0838, buffer=<optimized out>, size=<optimized out>) at qobject/json-lexer.c:369
#20 0x0000559193ce33c9 in json_message_parser_feed (parser=<optimized out>, buffer=<optimized out>, size=<optimized out>) at qobject/json-streamer.c:124
#21 0x000055919399a85b in monitor_qmp_read (opaque=<optimized out>, buf=<optimized out>, size=<optimized out>) at /home/berrange/src/virt/qemu/monitor.c:3987
#22 0x0000559193a87d00 in tcp_chr_read (chan=<optimized out>, cond=<optimized out>, opaque=0x559194a7d900) at qemu-char.c:2895
#23 0x00007f198b64f703 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#24 0x0000559193c484b3 in main_loop_wait () at main-loop.c:213
#25 0x0000559193c484b3 in main_loop_wait (timeout=<optimized out>) at main-loop.c:258
#26 0x0000559193c484b3 in main_loop_wait (nonblocking=<optimized out>) at main-loop.c:506
#27 0x0000559193964c55 in main () at vl.c:1908
#28 0x0000559193964c55 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4603

This was introduced in

    commit 98481bf
    Author: Eric Blake <eblake@redhat.com>
    Date: Mon Oct 26 16:34:45 2015 -0600

        vnc: Hoist allocation of VncBasicInfo to callers

which added error reporting for vnc_init_basic_info_from_server_addr but didn't change the g_malloc calls to g_malloc0.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1470134726-15697-3-git-send-email-berrange@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
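The one-line nature of the fix (g_malloc to g_malloc0) hides a general rule: any struct handed to a generic deallocator that walks pointer fields must start out zeroed. A standalone sketch using calloc, the libc analogue of g_malloc0:

```c
#include <stdlib.h>
#include <string.h>

struct server_info {
    char *host;     /* must start NULL so cleanup can free() safely */
    char *service;
};

static void server_info_free(struct server_info *info)
{
    /* a generic dealloc walks every pointer field - stack/heap
     * garbage here means free() on a wild pointer and an abort */
    free(info->host);
    free(info->service);
    free(info);
}

int main(void)
{
    /* calloc (like g_malloc0) zeroes the struct, so freeing a
     * half-populated instance after an early error is safe */
    struct server_info *info = calloc(1, sizeof(*info));
    info->host = strdup("127.0.0.1");
    /* ...error happens before 'service' is filled in... */
    server_info_free(info);
    return 0;
}
```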
vivier pushed a commit that referenced this pull request on Sep 4, 2016
ahci-test /x86_64/ahci/io/dma/lba28/retry triggers the following leak:

Direct leak of 16 byte(s) in 1 object(s) allocated from:
#0 0x7fc4b2a25e20 in malloc (/lib64/libasan.so.3+0xc6e20)
#1 0x7fc4993bce58 in g_malloc (/lib64/libglib-2.0.so.0+0x4ee58)
#2 0x556a187d4b34 in ahci_populate_sglist hw/ide/ahci.c:896
#3 0x556a187d8237 in ahci_dma_prepare_buf hw/ide/ahci.c:1367
#4 0x556a187b5a1a in ide_dma_cb hw/ide/core.c:844
#5 0x556a187d7eec in ahci_start_dma hw/ide/ahci.c:1333
#6 0x556a187b650b in ide_start_dma hw/ide/core.c:921
#7 0x556a187b61e6 in ide_sector_start_dma hw/ide/core.c:911
#8 0x556a187b9e26 in cmd_write_dma hw/ide/core.c:1486
#9 0x556a187bd519 in ide_exec_cmd hw/ide/core.c:2027
#10 0x556a187d71c5 in handle_reg_h2d_fis hw/ide/ahci.c:1204
#11 0x556a187d7681 in handle_cmd hw/ide/ahci.c:1254
#12 0x556a187d168a in check_cmd hw/ide/ahci.c:510
#13 0x556a187d0afc in ahci_port_write hw/ide/ahci.c:314
#14 0x556a187d105d in ahci_mem_write hw/ide/ahci.c:435
#15 0x556a1831d959 in memory_region_write_accessor /home/elmarco/src/qemu/memory.c:525
#16 0x556a1831dc35 in access_with_adjusted_size /home/elmarco/src/qemu/memory.c:591
#17 0x556a18323ce3 in memory_region_dispatch_write /home/elmarco/src/qemu/memory.c:1262
#18 0x556a1828cf67 in address_space_write_continue /home/elmarco/src/qemu/exec.c:2578
#19 0x556a1828d20b in address_space_write /home/elmarco/src/qemu/exec.c:2635
#20 0x556a1828d92b in address_space_rw /home/elmarco/src/qemu/exec.c:2737
#21 0x556a1828daf7 in cpu_physical_memory_rw /home/elmarco/src/qemu/exec.c:2746
#22 0x556a183068d3 in cpu_physical_memory_write /home/elmarco/src/qemu/include/exec/cpu-common.h:72
#23 0x556a18308194 in qtest_process_command /home/elmarco/src/qemu/qtest.c:382
#24 0x556a18309999 in qtest_process_inbuf /home/elmarco/src/qemu/qtest.c:573
#25 0x556a18309a4a in qtest_read /home/elmarco/src/qemu/qtest.c:585
#26 0x556a18598b85 in qemu_chr_be_write_impl /home/elmarco/src/qemu/qemu-char.c:387
#27 0x556a18598c52 in qemu_chr_be_write /home/elmarco/src/qemu/qemu-char.c:399
#28 0x556a185a2afa in tcp_chr_read /home/elmarco/src/qemu/qemu-char.c:2902
#29 0x556a18cbaf52 in qio_channel_fd_source_dispatch io/channel-watch.c:84

Follow John Snow's recommendation:

    Everywhere else ncq_err is used, it is accompanied by a list cleanup except for ncq_cb, which is the case you are fixing here. Move the sglist destruction inside of ncq_err and then delete it from the other two locations to keep it tidy.

    Call dma_buf_commit in ide_dma_cb after the early return. Though, this is also a little wonky because this routine does more than clear the list, but it is at the moment the centralized "we're done with the sglist" function and none of the other side effects that occur in dma_buf_commit will interfere with the reset that occurs from ide_restart_bh, I think.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
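The recommendation amounts to centralizing cleanup in the one error helper every path already calls, so no caller can leak the scatter-gather list. A minimal model of that refactoring (hypothetical names apart from ncq_err, which the commit itself names):

```c
#include <stdio.h>
#include <stdlib.h>

struct req {
    void *sglist;       /* buffer every finished request must release */
    int   error;
};

/* Centralise the cleanup in the error helper so no caller can forget it. */
static void ncq_err(struct req *r)
{
    r->error = 1;
    free(r->sglist);    /* previously duplicated at two call sites */
    r->sglist = NULL;   /* and missing entirely at a third - the leak */
}

int main(void)
{
    struct req r = { .sglist = malloc(16), .error = 0 };
    ncq_err(&r);
    printf("error=%d sglist=%p\n", r.error, r.sglist);
    return 0;
}
```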
vivier pushed a commit that referenced this pull request on Sep 4, 2016
Since aa5cb7f, the chardevs are being cleaned up when leaving qemu. However, the monitor still has references to them, which may lead to crashes when running atexit() and trying to send monitor events:

#0 0x00007fffdb18f6f5 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1 0x00007fffdb1912fa in __GI_abort () at abort.c:89
#2 0x0000555555c263e7 in error_exit (err=22, msg=0x555555d47980 <__func__.13537> "qemu_mutex_lock") at util/qemu-thread-posix.c:39
#3 0x0000555555c26488 in qemu_mutex_lock (mutex=0x5555567a2420) at util/qemu-thread-posix.c:66
#4 0x00005555558c52db in qemu_chr_fe_write (s=0x5555567a2420, buf=0x55555740dc40 "{\"timestamp\": {\"seconds\": 1470041716, \"microseconds\": 989699}, \"event\": \"SPICE_DISCONNECTED\", \"data\": {\"server\": {\"port\": \"5900\", \"family\": \"ipv4\", \"host\": \"127.0.0.1\"}, \"client\": {\"port\": \"40272\", \"f"..., len=240) at qemu-char.c:280
#5 0x0000555555787cad in monitor_flush_locked (mon=0x5555567bd9e0) at /home/elmarco/src/qemu/monitor.c:311
#6 0x0000555555787e46 in monitor_puts (mon=0x5555567bd9e0, str=0x5555567a44ef "") at /home/elmarco/src/qemu/monitor.c:353
#7 0x00005555557880fe in monitor_json_emitter (mon=0x5555567bd9e0, data=0x5555567c73a0) at /home/elmarco/src/qemu/monitor.c:401
#8 0x00005555557882d2 in monitor_qapi_event_emit (event=QAPI_EVENT_SPICE_DISCONNECTED, qdict=0x5555567c73a0) at /home/elmarco/src/qemu/monitor.c:472
#9 0x000055555578838f in monitor_qapi_event_queue (event=QAPI_EVENT_SPICE_DISCONNECTED, qdict=0x5555567c73a0, errp=0x7fffffffca88) at /home/elmarco/src/qemu/monitor.c:497
#10 0x0000555555c15541 in qapi_event_send_spice_disconnected (server=0x5555571139d0, client=0x5555570d0db0, errp=0x5555566c0428 <error_abort>) at qapi-event.c:1038
#11 0x0000555555b11bc6 in channel_event (event=3, info=0x5555570d6c00) at ui/spice-core.c:248
#12 0x00007fffdcc9983a in adapter_channel_event (event=3, info=0x5555570d6c00) at reds.c:120
#13 0x00007fffdcc99a25 in reds_handle_channel_event (reds=0x5555567a9d60, event=3, info=0x5555570d6c00) at reds.c:324
#14 0x00007fffdcc7d4c4 in main_dispatcher_self_handle_channel_event (self=0x5555567b28b0, event=3, info=0x5555570d6c00) at main-dispatcher.c:175
#15 0x00007fffdcc7d5b1 in main_dispatcher_channel_event (self=0x5555567b28b0, event=3, info=0x5555570d6c00) at main-dispatcher.c:194
#16 0x00007fffdcca7674 in reds_stream_push_channel_event (s=0x5555570d9910, event=3) at reds-stream.c:354
#17 0x00007fffdcca749b in reds_stream_free (s=0x5555570d9910) at reds-stream.c:323
#18 0x00007fffdccb5dad in snd_disconnect_channel (channel=0x5555576a89a0) at sound.c:229
#19 0x00007fffdccb9e57 in snd_detach_common (worker=0x555557739720) at sound.c:1589
#20 0x00007fffdccb9f0e in snd_detach_playback (sin=0x5555569fe3f8) at sound.c:1602
#21 0x00007fffdcca3373 in spice_server_remove_interface (sin=0x5555569fe3f8) at reds.c:3387
#22 0x00005555558ff6e2 in line_out_fini (hw=0x5555569fe370) at audio/spiceaudio.c:152
#23 0x00005555558f909e in audio_atexit () at audio/audio.c:1754
#24 0x00007fffdb1941e8 in __run_exit_handlers (status=0, listp=0x7fffdb5175d8 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true) at exit.c:82
#25 0x00007fffdb194235 in __GI_exit (status=<optimized out>) at exit.c:104
#26 0x00007fffdb17b738 in __libc_start_main (main=0x5555558d7874 <main>, argc=67, argv=0x7fffffffcf48, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffcf38) at ../csu/libc-start.c:323

Add a monitor_cleanup() function to remove all the monitors before cleaning up the chardevs. Note that we are "losing" some events that used to be sent during atexit().

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20160801112343.29082-2-marcandre.lureau@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
vivier pushed a commit that referenced this pull request on Nov 4, 2016
The cssid 255 is reserved but still valid from an architectural point of view. However, feeding a bogus schid of 0xffffffff into the virtio hypercall will lead to a crash:

Stack trace of thread 138363:
#0 0x00000000100d168c css_find_subch (qemu-system-s390x)
#1 0x00000000100d3290 virtio_ccw_hcall_notify
#2 0x00000000100cbf60 s390_virtio_hypercall
#3 0x000000001010ff7a handle_hypercall
#4 0x0000000010079ed4 kvm_cpu_exec (qemu-system-s390x)
#5 0x00000000100609b4 qemu_kvm_cpu_thread_fn
#6 0x000003ff8b887bb4 start_thread (libpthread.so.0)
#7 0x000003ff8b78df0a thread_start (libc.so.6)

This is because the css array was only allocated for 0..254 instead of 0..255. Let's fix this by bumping MAX_CSSID to 255 and fencing off the reserved cssid of 255 during css image allocation.

Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
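The root cause is a classic off-by-one in sizing an array indexed by an inclusive id range. A small standalone sketch of the fixed sizing (MAX_CSSID as in the commit; the rest is illustrative):

```c
#include <assert.h>
#include <stdio.h>

#define MAX_CSSID 255   /* highest architecturally valid id */

/* An id range of 0..MAX_CSSID needs MAX_CSSID + 1 slots; sizing the
 * array for 0..254 lets id 255 index one element past the end -
 * the css_find_subch crash above. */
static void *css_images[MAX_CSSID + 1];

static void *css_find(unsigned int cssid)
{
    assert(cssid <= MAX_CSSID);   /* fence off anything larger */
    return css_images[cssid];
}

int main(void)
{
    printf("slot for 255: %p\n", css_find(255));
    return 0;
}
```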
vivier pushed a commit that referenced this pull request on Nov 4, 2016
Running cpuid instructions with a simple run like:

    i386-linux-user/qemu-i386 tests/tcg/sha1-i386

results in the following assert:

#0 0x00007ffff64246f5 in raise () from /lib64/libc.so.6
#1 0x00007ffff64262fa in abort () from /lib64/libc.so.6
#2 0x00007ffff7937ec5 in g_assertion_message () from /lib64/libglib-2.0.so.0
#3 0x00007ffff7937f5a in g_assertion_message_expr () from /lib64/libglib-2.0.so.0
#4 0x000055555561b54c in apicid_bitwidth_for_count (count=0) at /home/elmarco/src/qemu/include/hw/i386/topology.h:58
#5 0x000055555561b58a in apicid_smt_width (nr_cores=0, nr_threads=0) at /home/elmarco/src/qemu/include/hw/i386/topology.h:67
#6 0x000055555561b5c3 in apicid_core_offset (nr_cores=0, nr_threads=0) at /home/elmarco/src/qemu/include/hw/i386/topology.h:82
#7 0x000055555561b5e3 in apicid_pkg_offset (nr_cores=0, nr_threads=0) at /home/elmarco/src/qemu/include/hw/i386/topology.h:89
#8 0x000055555561dd86 in cpu_x86_cpuid (env=0x555557999550, index=4, count=3, eax=0x7fffffffcae8, ebx=0x7fffffffcaec, ecx=0x7fffffffcaf0, edx=0x7fffffffcaf4) at /home/elmarco/src/qemu/target-i386/cpu.c:2405
#9 0x0000555555638e8e in helper_cpuid (env=0x555557999550) at /home/elmarco/src/qemu/target-i386/misc_helper.c:106
#10 0x000055555599dc5e in static_code_gen_buffer ()
#11 0x00005555555952f8 in cpu_tb_exec (cpu=0x5555579912d0, itb=0x7ffff4371ab0) at /home/elmarco/src/qemu/cpu-exec.c:166
#12 0x0000555555595c8e in cpu_loop_exec_tb (cpu=0x5555579912d0, tb=0x7ffff4371ab0, last_tb=0x7fffffffd088, tb_exit=0x7fffffffd084, sc=0x7fffffffd0a0) at /home/elmarco/src/qemu/cpu-exec.c:517
#13 0x0000555555595e50 in cpu_exec (cpu=0x5555579912d0) at /home/elmarco/src/qemu/cpu-exec.c:612
#14 0x00005555555c065b in cpu_loop (env=0x555557999550) at /home/elmarco/src/qemu/linux-user/main.c:297
#15 0x00005555555c25b2 in main (argc=2, argv=0x7fffffffd848, envp=0x7fffffffd860) at /home/elmarco/src/qemu/linux-user/main.c:4803

The fields are set in qemu_init_vcpu() with softmmu, but it's a stub with linux-user.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
vivier pushed a commit that referenced this pull request on Nov 4, 2016
Segfault happens when leaving qemu with the msmouse backend:

#0 0x00007fa8526ac975 in raise () at /lib64/libc.so.6
#1 0x00007fa8526add8a in abort () at /lib64/libc.so.6
#2 0x0000558be78846ab in error_exit (err=16, msg=0x558be799da10 ...
#3 0x0000558be7884717 in qemu_mutex_destroy (mutex=0x558be93be750) at ...
#4 0x0000558be7549951 in qemu_chr_free_common (chr=0x558be93be750) at ...
#5 0x0000558be754999c in qemu_chr_free (chr=0x558be93be750) at ...
#6 0x0000558be7549a20 in qemu_chr_delete (chr=0x558be93be750) at ...
#7 0x0000558be754a8ef in qemu_chr_cleanup () at qemu-char.c:4643
#8 0x0000558be755843e in main (argc=5, argv=0x7ffe925d7118, ...

The chr was freed by the msmouse close callback before chardev cleanup, and then qemu_mutex_destroy triggered raise(). Because freeing chr is handled by qemu_chr_free_common, remove the free from msmouse_chr_close to avoid the double free.

Fixes: c1111a2
Cc: qemu-stable@nongnu.org
Signed-off-by: Lin Ma <lma@suse.com>
Message-Id: <20160915143158.4796-1-lma@suse.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
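The rule the fix enforces: a backend close callback releases only what the backend owns, because the common teardown path frees the chardev struct itself. A standalone model of that ownership split (hypothetical names):

```c
#include <stdio.h>
#include <stdlib.h>

struct chardev {
    void (*close)(struct chardev *);
    char *buf;                  /* backend-owned state */
};

static void mouse_close(struct chardev *c)
{
    /* release only what the backend owns; freeing 'c' here would
     * double-free once the common teardown below runs */
    free(c->buf);
    c->buf = NULL;
}

static void chardev_free(struct chardev *c)
{
    if (c->close) {
        c->close(c);            /* backend-specific cleanup first */
    }
    free(c);                    /* the common path owns the struct */
}

int main(void)
{
    struct chardev *c = malloc(sizeof(*c));
    c->close = mouse_close;
    c->buf = malloc(8);
    chardev_free(c);
    puts("clean teardown");
    return 0;
}
```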
vivier pushed a commit that referenced this pull request on Nov 4, 2016
Commit 0ed93d8 ("linux-aio: process completions from ioq_submit()") added an optimization that processes completions each time ioq_submit() returns with requests in flight. This commit introduces a "Co-routine re-entered recursively" error which can be triggered with -drive format=qcow2,aio=native.

Fam Zheng <famz@redhat.com>, Kevin Wolf <kwolf@redhat.com>, and I debugged the following backtrace:

(gdb) bt
#0 0x00007ffff0a046f5 in raise () at /lib64/libc.so.6
#1 0x00007ffff0a062fa in abort () at /lib64/libc.so.6
#2 0x0000555555ac0013 in qemu_coroutine_enter (co=0x5555583464d0) at util/qemu-coroutine.c:113
#3 0x0000555555a4b663 in qemu_laio_process_completions (s=s@entry=0x555557e2f7f0) at block/linux-aio.c:218
#4 0x0000555555a4b874 in ioq_submit (s=s@entry=0x555557e2f7f0) at block/linux-aio.c:331
#5 0x0000555555a4ba12 in laio_do_submit (fd=fd@entry=13, laiocb=laiocb@entry=0x555559d38ae0, offset=offset@entry=2932727808, type=type@entry=1) at block/linux-aio.c:383
#6 0x0000555555a4bbd3 in laio_co_submit (bs=<optimized out>, s=0x555557e2f7f0, fd=13, offset=2932727808, qiov=0x555559d38e20, type=1) at block/linux-aio.c:402
#7 0x0000555555a4fd23 in bdrv_driver_preadv (bs=bs@entry=0x55555663bcb0, offset=offset@entry=2932727808, bytes=bytes@entry=8192, qiov=qiov@entry=0x555559d38e20, flags=0) at block/io.c:804
#8 0x0000555555a52b34 in bdrv_aligned_preadv (bs=bs@entry=0x55555663bcb0, req=req@entry=0x555559d38d20, offset=offset@entry=2932727808, bytes=bytes@entry=8192, align=align@entry=512, qiov=qiov@entry=0x555559d38e20, flags=0) at block/io.c:1041
#9 0x0000555555a52db8 in bdrv_co_preadv (child=<optimized out>, offset=2932727808, bytes=8192, qiov=qiov@entry=0x555559d38e20, flags=flags@entry=0) at block/io.c:1133
#10 0x0000555555a29629 in qcow2_co_preadv (bs=0x555556635890, offset=6178725888, bytes=8192, qiov=0x555557527840, flags=<optimized out>) at block/qcow2.c:1509
#11 0x0000555555a4fd23 in bdrv_driver_preadv (bs=bs@entry=0x555556635890, offset=offset@entry=6178725888, bytes=bytes@entry=8192, qiov=qiov@entry=0x555557527840, flags=0) at block/io.c:804
#12 0x0000555555a52b34 in bdrv_aligned_preadv (bs=bs@entry=0x555556635890, req=req@entry=0x555559d39000, offset=offset@entry=6178725888, bytes=bytes@entry=8192, align=align@entry=1, qiov=qiov@entry=0x555557527840, flags=0) at block/io.c:1041
#13 0x0000555555a52db8 in bdrv_co_preadv (child=<optimized out>, offset=offset@entry=6178725888, bytes=bytes@entry=8192, qiov=qiov@entry=0x555557527840, flags=flags@entry=0) at block/io.c:1133
#14 0x0000555555a4515a in blk_co_preadv (blk=0x5555566356d0, offset=6178725888, bytes=8192, qiov=0x555557527840, flags=0) at block/block-backend.c:783
#15 0x0000555555a45266 in blk_aio_read_entry (opaque=0x5555577025e0) at block/block-backend.c:991
#16 0x0000555555ac0cfa in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at util/coroutine-ucontext.c:78

It turned out that re-entrant ioq_submit() and completion processing between three requests caused this error. The following check is not sufficient to prevent recursively entering coroutines:

    if (laiocb->co != qemu_coroutine_self()) {
        qemu_coroutine_enter(laiocb->co);
    }

As the following coroutine backtrace shows, not just the current coroutine (self) can be entered. There might also be other coroutines that are currently entered and transferred control due to the qcow2 lock (CoMutex):

(gdb) qemu coroutine 0x5555583464d0
#0 0x0000555555ac0c90 in qemu_coroutine_switch (from_=from_@entry=0x5555583464d0, to_=to_@entry=0x5555572f9890, action=action@entry=COROUTINE_ENTER) at util/coroutine-ucontext.c:175
#1 0x0000555555abfe54 in qemu_coroutine_enter (co=0x5555572f9890) at util/qemu-coroutine.c:117
#2 0x0000555555ac031c in qemu_co_queue_run_restart (co=co@entry=0x5555583462c0) at util/qemu-coroutine-lock.c:60
#3 0x0000555555abfe5e in qemu_coroutine_enter (co=0x5555583462c0) at util/qemu-coroutine.c:119
#4 0x0000555555a4b663 in qemu_laio_process_completions (s=s@entry=0x555557e2f7f0) at block/linux-aio.c:218
#5 0x0000555555a4b874 in ioq_submit (s=s@entry=0x555557e2f7f0) at block/linux-aio.c:331
#6 0x0000555555a4ba12 in laio_do_submit (fd=fd@entry=13, laiocb=laiocb@entry=0x55555a338b40, offset=offset@entry=2911477760, type=type@entry=1) at block/linux-aio.c:383
#7 0x0000555555a4bbd3 in laio_co_submit (bs=<optimized out>, s=0x555557e2f7f0, fd=13, offset=2911477760, qiov=0x55555a338e80, type=1) at block/linux-aio.c:402
#8 0x0000555555a4fd23 in bdrv_driver_preadv (bs=bs@entry=0x55555663bcb0, offset=offset@entry=2911477760, bytes=bytes@entry=8192, qiov=qiov@entry=0x55555a338e80, flags=0) at block/io.c:804
#9 0x0000555555a52b34 in bdrv_aligned_preadv (bs=bs@entry=0x55555663bcb0, req=req@entry=0x55555a338d80, offset=offset@entry=2911477760, bytes=bytes@entry=8192, align=align@entry=512, qiov=qiov@entry=0x55555a338e80, flags=0) at block/io.c:1041
#10 0x0000555555a52db8 in bdrv_co_preadv (child=<optimized out>, offset=2911477760, bytes=8192, qiov=qiov@entry=0x55555a338e80, flags=flags@entry=0) at block/io.c:1133
#11 0x0000555555a29629 in qcow2_co_preadv (bs=0x555556635890, offset=6157475840, bytes=8192, qiov=0x5555575df720, flags=<optimized out>) at block/qcow2.c:1509
#12 0x0000555555a4fd23 in bdrv_driver_preadv (bs=bs@entry=0x555556635890, offset=offset@entry=6157475840, bytes=bytes@entry=8192, qiov=qiov@entry=0x5555575df720, flags=0) at block/io.c:804
#13 0x0000555555a52b34 in bdrv_aligned_preadv (bs=bs@entry=0x555556635890, req=req@entry=0x55555a339060, offset=offset@entry=6157475840, bytes=bytes@entry=8192, align=align@entry=1, qiov=qiov@entry=0x5555575df720, flags=0) at block/io.c:1041
#14 0x0000555555a52db8 in bdrv_co_preadv (child=<optimized out>, offset=offset@entry=6157475840, bytes=bytes@entry=8192, qiov=qiov@entry=0x5555575df720, flags=flags@entry=0) at block/io.c:1133
#15 0x0000555555a4515a in blk_co_preadv (blk=0x5555566356d0, offset=6157475840, bytes=8192, qiov=0x5555575df720, flags=0) at block/block-backend.c:783
#16 0x0000555555a45266 in blk_aio_read_entry (opaque=0x555557231aa0) at block/block-backend.c:991
#17 0x0000555555ac0cfa in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at util/coroutine-ucontext.c:78

Use the new qemu_coroutine_entered() function instead of comparing against qemu_coroutine_self(). This is correct because:

1. If a coroutine is not entered then it must have yielded to wait for I/O completion. It is therefore safe to enter.

2. If a coroutine is entered then it must be in ioq_submit()/qemu_laio_process_completions() because otherwise it would be yielded while waiting for I/O completion. Therefore it will check laio->ret and return from ioq_submit() instead of yielding, i.e. it's guaranteed not to hang.

Reported-by: Fam Zheng <famz@redhat.com>
Tested-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 1474989516-18255-4-git-send-email-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
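An abstract model of the new check, not QEMU's coroutine implementation: each coroutine carries an "entered" flag, and the caller asks whether the coroutine is entered at all rather than whether it happens to be the current one:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct coroutine {
    bool entered;   /* true while somebody is running this coroutine */
};

static bool coroutine_entered(struct coroutine *co)
{
    return co->entered;
}

static void coroutine_enter(struct coroutine *co)
{
    /* recursive entry is exactly the abort in the trace above */
    assert(!co->entered);
    co->entered = true;
    /* ... run until the coroutine yields ... */
    co->entered = false;
}

int main(void)
{
    struct coroutine co = { .entered = false };

    /* The old check (co != self) misses coroutines that are entered
     * but not current; asking "is it entered at all?" closes the gap. */
    if (!coroutine_entered(&co)) {
        coroutine_enter(&co);
    }
    puts("safe enter");
    return 0;
}
```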
vivier pushed a commit that referenced this pull request on Nov 4, 2016
The vhost-user & colo code is poking at the QemuOpts instance in the CharDriverState struct, not realizing that it is valid for this to be NULL. e.g. the following crash shows a codepath where it will be NULL: Program terminated with signal SIGSEGV, Segmentation fault. #0 0x000055baf6ab4adc in qemu_opt_foreach (opts=0x0, func=0x55baf696b650 <net_vhost_chardev_opts>, opaque=0x7ffc51368c00, errp=0x7ffc51368e48) at util/qemu-option.c:617 617 QTAILQ_FOREACH(opt, &opts->head, next) { [Current thread is 1 (Thread 0x7f1d4970bb40 (LWP 6603))] (gdb) bt #0 0x000055baf6ab4adc in qemu_opt_foreach (opts=0x0, func=0x55baf696b650 <net_vhost_chardev_opts>, opaque=0x7ffc51368c00, errp=0x7ffc51368e48) at util/qemu-option.c:617 #1 0x000055baf696b7da in net_vhost_parse_chardev (opts=0x55baf8ff9260, errp=0x7ffc51368e48) at net/vhost-user.c:314 #2 0x000055baf696b985 in net_init_vhost_user (netdev=0x55baf8ff9250, name=0x55baf879d270 "hostnet2", peer=0x0, errp=0x7ffc51368e48) at net/vhost-user.c:360 #3 0x000055baf6960216 in net_client_init1 (object=0x55baf8ff9250, is_netdev=true, errp=0x7ffc51368e48) at net/net.c:1051 #4 0x000055baf6960518 in net_client_init (opts=0x55baf776e7e0, is_netdev=true, errp=0x7ffc51368f00) at net/net.c:1108 #5 0x000055baf696083f in netdev_add (opts=0x55baf776e7e0, errp=0x7ffc51368f00) at net/net.c:1186 #6 0x000055baf69608c7 in qmp_netdev_add (qdict=0x55baf7afaf60, ret=0x7ffc51368f50, errp=0x7ffc51368f48) at net/net.c:1205 #7 0x000055baf6622135 in handle_qmp_command (parser=0x55baf77fb590, tokens=0x7f1d24011960) at /path/to/qemu.git/monitor.c:3978 #8 0x000055baf6a9d099 in json_message_process_token (lexer=0x55baf77fb598, input=0x55baf75acd20, type=JSON_RCURLY, x=113, y=19) at qobject/json-streamer.c:105 #9 0x000055baf6abf7aa in json_lexer_feed_char (lexer=0x55baf77fb598, ch=125 '}', flush=false) at qobject/json-lexer.c:319 #10 0x000055baf6abf8f2 in json_lexer_feed (lexer=0x55baf77fb598, buffer=0x7ffc51369170 "}R\204\367\272U", size=1) at qobject/json-lexer.c:369 #11 0x000055baf6a9d13c in json_message_parser_feed (parser=0x55baf77fb590, buffer=0x7ffc51369170 "}R\204\367\272U", size=1) at qobject/json-streamer.c:124 #12 0x000055baf66221f7 in monitor_qmp_read (opaque=0x55baf77fb530, buf=0x7ffc51369170 "}R\204\367\272U", size=1) at /path/to/qemu.git/monitor.c:3994 #13 0x000055baf6757014 in qemu_chr_be_write_impl (s=0x55baf7610a40, buf=0x7ffc51369170 "}R\204\367\272U", len=1) at qemu-char.c:387 #14 0x000055baf6757076 in qemu_chr_be_write (s=0x55baf7610a40, buf=0x7ffc51369170 "}R\204\367\272U", len=1) at qemu-char.c:399 #15 0x000055baf675b3b0 in tcp_chr_read (chan=0x55baf90244b0, cond=G_IO_IN, opaque=0x55baf7610a40) at qemu-char.c:2927 #16 0x000055baf6a5d655 in qio_channel_fd_source_dispatch (source=0x55baf7610df0, callback=0x55baf675b25a <tcp_chr_read>, user_data=0x55baf7610a40) at io/channel-watch.c:84 #17 0x00007f1d3e80cbbd in g_main_context_dispatch () from /usr/lib64/libglib-2.0.so.0 #18 0x000055baf69d3720 in glib_pollfds_poll () at main-loop.c:213 #19 0x000055baf69d37fd in os_host_main_loop_wait (timeout=126000000) at main-loop.c:258 #20 0x000055baf69d38ad in main_loop_wait (nonblocking=0) at main-loop.c:506 #21 0x000055baf676587b in main_loop () at vl.c:1908 #22 0x000055baf676d3bf in main (argc=101, argv=0x7ffc5136a6c8, envp=0x7ffc5136a9f8) at vl.c:4604 (gdb) p opts $1 = (QemuOpts *) 0x0 The crash occurred when attaching vhost-user net via QMP: { "execute": "chardev-add", "arguments": { "id": "charnet2", "backend": { "type": "socket", "data": { "addr": { "type": "unix", "data": { 
"path": "/var/run/openvswitch/vhost-user1" } }, "wait": false, "server": false } } }, "id": "libvirt-19" } { "return": { }, "id": "libvirt-19" } { "execute": "netdev_add", "arguments": { "type": "vhost-user", "chardev": "charnet2", "id": "hostnet2" }, "id": "libvirt-20" } Code using chardevs should not be poking at the internals of the CharDriverState struct. What vhost-user wants is a chardev that is operating as reconnectable network service, along with the ability to do FD passing over the connection. The colo code simply wants a network service. Add a feature concept to the char drivers so that chardev users can query the actual features they wish to have supported. The QemuOpts member is removed to prevent future mistakes in this area. Signed-off-by: Daniel P. Berrange <berrange@redhat.com> Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
vivier pushed a commit that referenced this pull request on Nov 4, 2016
ASAN complains about buffer overflow when running: aarch64-softmmu/qemu-system-aarch64 -machine xilinx-zynq-a9

==476==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000035e38 at pc 0x000000f75253 bp 0x7ffc597e0ec0 sp 0x7ffc597e0eb0
READ of size 8 at 0x602000035e38 thread T0
#0 0xf75252 in xilinx_spips_realize hw/ssi/xilinx_spips.c:623
#1 0xb9ef6c in device_set_realized hw/core/qdev.c:918
#2 0x129ae01 in property_set_bool qom/object.c:1854
#3 0x1296e70 in object_property_set qom/object.c:1088
#4 0x129dd1b in object_property_set_qobject qom/qom-qobject.c:27
#5 0x1297168 in object_property_set_bool qom/object.c:1157
#6 0xb9aeac in qdev_init_nofail hw/core/qdev.c:358
#7 0x78a5bf in zynq_init_spi_flashes /home/elmarco/src/qemu/hw/arm/xilinx_zynq.c:125
#8 0x78af60 in zynq_init /home/elmarco/src/qemu/hw/arm/xilinx_zynq.c:238
#9 0x998eac in main /home/elmarco/src/qemu/vl.c:4534
#10 0x7f96ed692730 in __libc_start_main (/lib64/libc.so.6+0x20730)
#11 0x41d0a8 in _start (/home/elmarco/src/qemu/aarch64-softmmu/qemu-system-aarch64+0x41d0a8)

0x602000035e38 is located 0 bytes to the right of 8-byte region [0x602000035e30,0x602000035e38)
allocated by thread T0 here:
#0 0x7f970b014e60 in malloc (/lib64/libasan.so.3+0xc6e60)
#1 0x7f96f15b0e18 in g_malloc (/lib64/libglib-2.0.so.0+0x4ee18)
#2 0xb9ef6c in device_set_realized hw/core/qdev.c:918
#3 0x129ae01 in property_set_bool qom/object.c:1854
#4 0x1296e70 in object_property_set qom/object.c:1088
#5 0x129dd1b in object_property_set_qobject qom/qom-qobject.c:27
#6 0x1297168 in object_property_set_bool qom/object.c:1157
#7 0xb9aeac in qdev_init_nofail hw/core/qdev.c:358
#8 0x78a5bf in zynq_init_spi_flashes /home/elmarco/src/qemu/hw/arm/xilinx_zynq.c:125
#9 0x78af60 in zynq_init /home/elmarco/src/qemu/hw/arm/xilinx_zynq.c:238
#10 0x998eac in main /home/elmarco/src/qemu/vl.c:4534
#11 0x7f96ed692730 in __libc_start_main (/lib64/libc.so.6+0x20730)

s->spi is allocated with the size of num_busses which may be 1 (by default). Change to use a loop up to s->num_busses also for the call to ssi_auto_connect_slaves().

Reported-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
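The fix boils down to using the runtime bus count as the loop bound instead of a larger fixed maximum. A standalone sketch of the bound (hypothetical struct; the real code loops over SSI buses):

```c
#include <stdio.h>
#include <stdlib.h>

struct spi_state {
    int   num_busses;
    int **spi;          /* allocated with num_busses elements */
};

int main(void)
{
    struct spi_state s = { .num_busses = 1 };
    s.spi = calloc(s.num_busses, sizeof(*s.spi));

    /* iterate to the allocated count, not a compile-time maximum -
     * a fixed bound of 2 here would read one element past the array */
    for (int i = 0; i < s.num_busses; i++) {
        printf("bus %d: %p\n", i, (void *)s.spi[i]);
    }

    free(s.spi);
    return 0;
}
```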
vivier pushed a commit that referenced this pull request on Nov 29, 2016
QEMU crashes on the source side while migrating, after starting the ipmi service inside the VM.

    ./x86_64-softmmu/qemu-system-x86_64 --enable-kvm -smp 4 -m 4096 \
    -drive file=/work/suse/suse11_sp3_64_vt,format=raw,if=none,id=drive-virtio-disk0,cache=none \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 \
    -vnc :99 -monitor vc -device ipmi-bmc-sim,id=bmc0 -device isa-ipmi-kcs,bmc=bmc0,ioport=0xca2

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffec4268700 (LWP 7657)]
__memcpy_ssse3_back () at ../sysdeps/x86_64/multiarch/memcpy-ssse3-back.S:2757
(gdb) bt
#0 __memcpy_ssse3_back () at ../sysdeps/x86_64/multiarch/memcpy-ssse3-back.S:2757
#1 0x00005555559ef775 in memcpy (__len=3, __src=0xc1421c, __dest=<optimized out>) at /usr/include/bits/string3.h:51
#2 qemu_put_buffer (f=0x555557a97690, buf=0xc1421c <Address 0xc1421c out of bounds>, size=3) at migration/qemu-file.c:346
#3 0x00005555559eef66 in vmstate_save_state (f=f@entry=0x555557a97690, vmsd=0x555555f8a5a0 <vmstate_ISAIPMIKCSDevice>, opaque=0x555557231160, vmdesc=vmdesc@entry=0x55555798cc40) at migration/vmstate.c:333
#4 0x00005555557cfe45 in vmstate_save (f=f@entry=0x555557a97690, se=se@entry=0x555557231de0, vmdesc=vmdesc@entry=0x55555798cc40) at /mnt/sdb/zyy/qemu/migration/savevm.c:720
#5 0x00005555557d2be7 in qemu_savevm_state_complete_precopy (f=0x555557a97690, iterable_only=iterable_only@entry=false) at /mnt/sdb/zyy/qemu/migration/savevm.c:1128
#6 0x00005555559ea102 in migration_completion (start_time=<synthetic pointer>, old_vm_running=<synthetic pointer>, current_active_state=<optimized out>, s=0x5555560eaa80 <current_migration.44078>) at migration/migration.c:1707
#7 migration_thread (opaque=0x5555560eaa80 <current_migration.44078>) at migration/migration.c:1855
#8 0x00007ffff3900dc5 in start_thread (arg=0x7ffec4268700) at pthread_create.c:308
#9 0x00007fffefc6c71d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Signed-off-by: Zhuang Yanying <ann.zhuangyanying@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
vivier pushed a commit that referenced this pull request on Jan 28, 2017
QEMU will crash with the following backtrace if the newly created thread exits before we call qemu_thread_set_name() for it.

(gdb) bt
#0 0x00007f9a68b095d7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007f9a68b0acc8 in __GI_abort () at abort.c:90
#2 0x00007f9a69cda389 in PAT_abort () from /usr/lib64/libuvpuserhotfix.so
#3 0x00007f9a69cdda0d in patchIllInsHandler () from /usr/lib64/libuvpuserhotfix.so
#4 <signal handler called>
#5 pthread_setname_np (th=140298470549248, name=name@entry=0x8cc74a "io-task-worker") at ../nptl/sysdeps/unix/sysv/linux/pthread_setname.c:49
#6 0x00000000007f5f20 in qemu_thread_set_name (thread=thread@entry=0x7ffd2ac09680, name=name@entry=0x8cc74a "io-task-worker") at util/qemu_thread_posix.c:459
#7 0x00000000007f679e in qemu_thread_create (thread=thread@entry=0x7ffd2ac09680, name=name@entry=0x8cc74a "io-task-worker", start_routine=start_routine@entry=0x7c1300 <qio_task_thread_worker>, arg=arg@entry=0x7f99b8001720, mode=mode@entry=1) at util/qemu_thread_posix.c:498
#8 0x00000000007c15b6 in qio_task_run_in_thread (task=task@entry=0x7f99b80033d0, worker=worker@entry=0x7bd920 <qio_channel_socket_connect_worker>, opaque=0x7f99b8003370, destroy=0x7c6220 <qapi_free_SocketAddress>) at io/task.c:133
#9 0x00000000007bda04 in qio_channel_socket_connect_async (ioc=0x7f99b80014c0, addr=0x37235d0, callback=callback@entry=0x54ad00 <qemu_chr_socket_connected>, opaque=opaque@entry=0x38118b0, destroy=destroy@entry=0x0) at io/channel_socket.c:191
#10 0x00000000005487f6 in socket_reconnect_timeout (opaque=0x38118b0) at qemu_char.c:4402
#11 0x00007f9a6a1533b3 in g_timeout_dispatch () from /usr/lib64/libglib-2.0.so.0
#12 0x00007f9a6a15299a in g_main_context_dispatch () from /usr/lib64/libglib-2.0.so.0
#13 0x0000000000747386 in glib_pollfds_poll () at main_loop.c:227
#14 0x0000000000747424 in os_host_main_loop_wait (timeout=404000000) at main_loop.c:272
#15 0x0000000000747575 in main_loop_wait (nonblocking=nonblocking@entry=0) at main_loop.c:520
#16 0x0000000000557d31 in main_loop () at vl.c:2170
#17 0x000000000041c8b7 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:5083

Let's detach the new thread after calling qemu_thread_set_name().

Signed-off-by: Caoxinhua <caoxinhua@huawei.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Message-Id: <1483493521-9604-1-git-send-email-zhang.zhanghailiang@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
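The fixed ordering: create the thread joinable so its handle stays valid even if it exits immediately, set the name, and only then detach. A standalone pthread sketch of that sequence (build with -pthread; pthread_setname_np is a glibc extension):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    return NULL;    /* may exit immediately - that's the race */
}

int main(void)
{
    pthread_t th;

    /* Create joinable so 'th' stays valid even if worker() has already
     * finished; naming a thread that was created detached and already
     * gone is what crashed in the trace above. */
    pthread_create(&th, NULL, worker, NULL);
#ifdef __linux__
    pthread_setname_np(th, "io-task-worker");
#endif
    pthread_detach(th);   /* only detach after the name is set */

    puts("thread named and detached safely");
    pthread_exit(NULL);   /* let the detached worker finish */
}
```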
vivier pushed a commit that referenced this pull request on Jan 28, 2017
This reverts commit aff8fd1. Both virtio-net and virtio-crypto do not balance virtio_queue_set_notification() enable and disable calls. This makes the notifications_disabled counter unreliable and Doug Goldstein reported the following assertion failure:

#3 0x00007ffff44d1c62 in __GI___assert_fail (assertion=assertion@entry=0x555555ae8e8a "vq->notification_disabled > 0", file=file@entry=0x555555ae89c0 "/home/doug/work/qemu/hw/virtio/virtio.c", line=line@entry=215, function=function@entry=0x555555ae9630 <__PRETTY_FUNCTION__.43707> "virtio_queue_set_notification") at assert.c:101
#4 0x00005555557f25d6 in virtio_queue_set_notification (vq=0x55555666aa90, enable=enable@entry=1) at /home/doug/work/qemu/hw/virtio/virtio.c:215
#5 0x00005555557dc311 in virtio_net_has_buffers (q=<optimized out>, q=<optimized out>, bufsize=102) at /home/doug/work/qemu/hw/net/virtio-net.c:1008
#6 virtio_net_receive (nc=<optimized out>, buf=0x555557386b88 "", size=102) at /home/doug/work/qemu/hw/net/virtio-net.c:1148
#7 0x00005555559cad33 in nc_sendv_compat (flags=<optimized out>, iovcnt=1, iov=0x7fffead746d0, nc=0x55555788b340) at net/net.c:705
#8 qemu_deliver_packet_iov (sender=<optimized out>, flags=<optimized out>, iov=0x7fffead746d0, iovcnt=1, opaque=0x55555788b340) at net/net.c:732
#9 0x00005555559cd929 in qemu_net_queue_deliver (size=<optimized out>, data=<optimized out>, flags=<optimized out>, sender=<optimized out>, queue=0x55555788b550) at net/queue.c:164
#10 qemu_net_queue_flush (queue=0x55555788b550) at net/queue.c:261

This patch is safe to revert since it's just an optimization for virtqueue polling. The next patch will improve the situation again without resorting to nesting.

Reported-by: Doug Goldstein <cardoe@cardoe.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Richard Henderson <rth@twiddle.net>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
vivier pushed a commit that referenced this pull request on Jan 28, 2017
AioHandlers marked ->is_external must be skipped when aio_node_check() fails. bdrv_drained_begin() needs this to prevent dataplane from submitting new I/O requests while another thread accesses the device and relies on it being quiesced.

This patch fixes the following segfault:

Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00005577f6127dad in bdrv_io_plug (bs=0x5577f7ae52f0) at qemu/block/io.c:2650
2650 bdrv_io_plug(child->bs);
[Current thread is 1 (Thread 0x7ff5c4bd1c80 (LWP 10917))]
(gdb) bt
#0 0x00005577f6127dad in bdrv_io_plug (bs=0x5577f7ae52f0) at qemu/block/io.c:2650
#1 0x00005577f6114363 in blk_io_plug (blk=0x5577f7b8ba20) at qemu/block/block-backend.c:1561
#2 0x00005577f5d4091d in virtio_blk_handle_vq (s=0x5577f9ada030, vq=0x5577f9b3d2a0) at qemu/hw/block/virtio-blk.c:589
#3 0x00005577f5d4240d in virtio_blk_data_plane_handle_output (vdev=0x5577f9ada030, vq=0x5577f9b3d2a0) at qemu/hw/block/dataplane/virtio-blk.c:158
#4 0x00005577f5d88acd in virtio_queue_notify_aio_vq (vq=0x5577f9b3d2a0) at qemu/hw/virtio/virtio.c:1304
#5 0x00005577f5d8aaaf in virtio_queue_host_notifier_aio_poll (opaque=0x5577f9b3d308) at qemu/hw/virtio/virtio.c:2134
#6 0x00005577f60ca077 in run_poll_handlers_once (ctx=0x5577f79ddbb0) at qemu/aio-posix.c:493
#7 0x00005577f60ca268 in try_poll_mode (ctx=0x5577f79ddbb0, blocking=true) at qemu/aio-posix.c:569
#8 0x00005577f60ca331 in aio_poll (ctx=0x5577f79ddbb0, blocking=true) at qemu/aio-posix.c:601
#9 0x00005577f612722a in bdrv_flush (bs=0x5577f7c20970) at qemu/block/io.c:2403
#10 0x00005577f60c1b2d in bdrv_close (bs=0x5577f7c20970) at qemu/block.c:2322
#11 0x00005577f60c20e7 in bdrv_delete (bs=0x5577f7c20970) at qemu/block.c:2465
#12 0x00005577f60c3ecf in bdrv_unref (bs=0x5577f7c20970) at qemu/block.c:3425
#13 0x00005577f60bf951 in bdrv_root_unref_child (child=0x5577f7a2de70) at qemu/block.c:1361
#14 0x00005577f6112162 in blk_remove_bs (blk=0x5577f7b8ba20) at qemu/block/block-backend.c:491
#15 0x00005577f6111b1b in blk_remove_all_bs () at qemu/block/block-backend.c:245
#16 0x00005577f60c1db6 in bdrv_close_all () at qemu/block.c:2382
#17 0x00005577f5e60cca in main (argc=20, argv=0x7ffea6eb8398, envp=0x7ffea6eb8440) at qemu/vl.c:4684

The key thing is that bdrv_close() uses bdrv_drained_begin() and virtio_queue_host_notifier_aio_poll() must not be called.

Thanks to Fam Zheng <famz@redhat.com> for identifying the root cause of this crash.

Reported-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Tested-by: Alberto Garcia <berto@igalia.com>
Message-id: 20170124095350.16679-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
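The missing check can be modelled as a predicate consulted on every handler before dispatch. A minimal sketch (hypothetical structs; QEMU's real check is aio_node_check() against the handler's ->is_external flag):

```c
#include <stdbool.h>
#include <stdio.h>

struct handler {
    const char *name;
    bool is_external;   /* driven by the guest (dataplane I/O) */
};

/* During a drained section, external handlers must not run. */
static void poll_handlers(struct handler *h, int n, bool allow_external)
{
    for (int i = 0; i < n; i++) {
        if (h[i].is_external && !allow_external) {
            continue;   /* the missing skip behind the segfault above */
        }
        printf("dispatch %s\n", h[i].name);
    }
}

int main(void)
{
    struct handler hs[] = {
        { "bdrv_flush completion", false },
        { "virtio-blk notifier",   true  },
    };
    poll_handlers(hs, 2, /*allow_external=*/false);
    return 0;
}
```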
vivier pushed a commit that referenced this pull request on Apr 1, 2017
Running postcopy-test with ASAN produces the following error:

QTEST_QEMU_BINARY=ppc64-softmmu/qemu-system-ppc64 tests/postcopy-test ...
=================================================================
==23641==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7f1556600000 at pc 0x55b8e9d28208 bp 0x7f1555f4d3c0 sp 0x7f1555f4d3b0
READ of size 8 at 0x7f1556600000 thread T6
#0 0x55b8e9d28207 in htab_save_first_pass /home/elmarco/src/qq/hw/ppc/spapr.c:1528
#1 0x55b8e9d2939c in htab_save_iterate /home/elmarco/src/qq/hw/ppc/spapr.c:1665
#2 0x55b8e9beae3a in qemu_savevm_state_iterate /home/elmarco/src/qq/migration/savevm.c:1044
#3 0x55b8ea677733 in migration_thread /home/elmarco/src/qq/migration/migration.c:1976
#4 0x7f15845f46c9 in start_thread (/lib64/libpthread.so.0+0x76c9)
#5 0x7f157d9d0f7e in clone (/lib64/libc.so.6+0x107f7e)

0x7f1556600000 is located 0 bytes to the right of 2097152-byte region [0x7f1556400000,0x7f1556600000)
allocated by thread T0 here:
#0 0x7f159bb76980 in posix_memalign (/lib64/libasan.so.3+0xc7980)
#1 0x55b8eab185b2 in qemu_try_memalign /home/elmarco/src/qq/util/oslib-posix.c:106
#2 0x55b8eab186c8 in qemu_memalign /home/elmarco/src/qq/util/oslib-posix.c:122
#3 0x55b8e9d268a8 in spapr_reallocate_hpt /home/elmarco/src/qq/hw/ppc/spapr.c:1214
#4 0x55b8e9d26e04 in ppc_spapr_reset /home/elmarco/src/qq/hw/ppc/spapr.c:1261
#5 0x55b8ea12e913 in qemu_system_reset /home/elmarco/src/qq/vl.c:1697
#6 0x55b8ea13fa40 in main /home/elmarco/src/qq/vl.c:4679
#7 0x7f157d8e9400 in __libc_start_main (/lib64/libc.so.6+0x20400)

Thread T6 created by T0 here:
#0 0x7f159bae0488 in __interceptor_pthread_create (/lib64/libasan.so.3+0x31488)
#1 0x55b8eab1d9cb in qemu_thread_create /home/elmarco/src/qq/util/qemu-thread-posix.c:465
#2 0x55b8ea67874c in migrate_fd_connect /home/elmarco/src/qq/migration/migration.c:2096
#3 0x55b8ea66cbb0 in migration_channel_connect /home/elmarco/src/qq/migration/migration.c:500
#4 0x55b8ea678f38 in socket_outgoing_migration /home/elmarco/src/qq/migration/socket.c:87
#5 0x55b8eaa5a03a in qio_task_complete /home/elmarco/src/qq/io/task.c:142
#6 0x55b8eaa599cc in gio_task_thread_result /home/elmarco/src/qq/io/task.c:88
#7 0x7f15823e38e6 (/lib64/libglib-2.0.so.0+0x468e6)

SUMMARY: AddressSanitizer: heap-buffer-overflow /home/elmarco/src/qq/hw/ppc/spapr.c:1528 in htab_save_first_pass

index seems to be wrongly incremented, unless I miss something that would be worth a comment.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
vivier
added a commit
that referenced
this pull request
Apr 1, 2017
If, once the kernel has booted, we try to remove memory that was hotplugged while the kernel was not yet started, QEMU crashes on an assert:

qemu-system-ppc64: hw/virtio/vhost.c:651: vhost_commit: Assertion `r >= 0' failed.
...
#4 in vhost_commit
#5 in memory_region_transaction_commit
#6 in pc_dimm_memory_unplug
#7 in spapr_memory_unplug
#8 spapr_machine_device_unplug
#9 in hotplug_handler_unplug
#10 in spapr_lmb_release
#11 in detach
#12 in set_allocation_state
#13 in rtas_set_indicator
...

If we take a closer look at the guest kernel log, we can see, at the moment we try to unplug the memory:

pseries-hotplug-mem: Attempting to hot-add 4 LMB(s)

What happens:
1- The kernel ignored the memory hotplug event because it was not started when the event was generated.
2- When we hot-unplug the memory, QEMU starts to remove the memory, generates a hot-unplug event, and notifies the kernel of the new incoming event.
3- As the kernel is now started, on the QEMU signal it reads the event list, decodes the hotplug event and tries to finish the hotplugging.
4- QEMU receives the hotplug notification while it is trying to hot-unplug the memory. This moves the memory DRC to an invalid state.

This patch prevents this by not allowing the allocation state to be set to USABLE while the DRC is awaiting release.

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1432382
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
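A minimal sketch of the guard this describes, in the DRC allocation-state handler (the field and constant names follow the spapr DRC code of that period and are assumptions here, not the verbatim patch):

    /* Refuse to mark the LMB usable again while an unplug (release)
     * is still pending, so the DRC cannot reach an invalid state */
    if (state == SPAPR_DR_ALLOCATION_STATE_USABLE && drc->awaiting_release) {
        return RTAS_OUT_NO_SUCH_INDICATOR;  /* reject the transition */
    }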
vivier
pushed a commit
that referenced
this pull request
Apr 28, 2017
During block job completion, nothing is preventing block_job_defer_to_main_loop_bh from being called in a nested aio_poll(), which causes trouble, such as in this code path:

qmp_block_commit
 commit_active_start
  bdrv_reopen
   bdrv_reopen_multiple
    bdrv_reopen_prepare
     bdrv_flush
      aio_poll
       aio_bh_poll
        aio_bh_call
         block_job_defer_to_main_loop_bh
          stream_complete
           bdrv_reopen

block_job_defer_to_main_loop_bh is the last step of the stream job, which should have been "paused" by the bdrv_drained_begin/end in bdrv_reopen_multiple, but that pause does not take effect because the completion runs in the form of a main loop BH.

Just as block jobs should be paused between drained_begin and drained_end, the BHs they schedule must be excluded as well. To achieve this, this patch forces the BH to be drained in BDRV_POLL_WHILE.

As a side effect this fixes a hang in block_job_detach_aio_context during system_reset when a block job is ready:

#0 0x0000555555aa79f3 in bdrv_drain_recurse
#1 0x0000555555aa825d in bdrv_drained_begin
#2 0x0000555555aa8449 in bdrv_drain
#3 0x0000555555a9c356 in blk_drain
#4 0x0000555555aa3cfd in mirror_drain
#5 0x0000555555a66e11 in block_job_detach_aio_context
#6 0x0000555555a62f4d in bdrv_detach_aio_context
#7 0x0000555555a63116 in bdrv_set_aio_context
#8 0x0000555555a9d326 in blk_set_aio_context
#9 0x00005555557e38da in virtio_blk_data_plane_stop
#10 0x00005555559f9d5f in virtio_bus_stop_ioeventfd
#11 0x00005555559fa49b in virtio_bus_stop_ioeventfd
#12 0x00005555559f6a18 in virtio_pci_stop_ioeventfd
#13 0x00005555559f6a18 in virtio_pci_reset
#14 0x00005555559139a9 in qdev_reset_one
#15 0x0000555555916738 in qbus_walk_children
#16 0x0000555555913318 in qdev_walk_children
#17 0x0000555555916738 in qbus_walk_children
#18 0x00005555559168ca in qemu_devices_reset
#19 0x000055555581fcbb in pc_machine_reset
#20 0x00005555558a4d96 in qemu_system_reset
#21 0x000055555577157a in main_loop_should_exit
#22 0x000055555577157a in main_loop
#23 0x000055555577157a in main

The rationale is that the loop in block_job_detach_aio_context cannot make any progress in pausing/completing the job, because bs->in_flight is 0, so bdrv_drain doesn't process the block_job_defer_to_main_loop BH. With this patch, it does.

Reported-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20170418143044.12187-3-famz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Tested-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
vivier
pushed a commit
that referenced
this pull request
May 28, 2017
If the trace backend is set to TRACE_NOP, trace_get_vcpu_event_count returns 0, causing the bitmap_new call to abort. The abort can be triggered as follows:

$ ./configure --enable-trace-backend=nop --target-list=x86_64-softmmu
$ gdb ./x86_64-softmmu/qemu-system-x86_64 -M q35,accel=kvm -m 1G
(gdb) bt
#0 0x00007ffff04e25f7 in raise () from /lib64/libc.so.6
#1 0x00007ffff04e3ce8 in abort () from /lib64/libc.so.6
#2 0x00005555559de905 in bitmap_new (nbits=<optimized out>) at /home/root/git/qemu2.git/include/qemu/bitmap.h:96
#3 cpu_common_initfn (obj=0x555556621d30) at qom/cpu.c:399
#4 0x0000555555a11869 in object_init_with_type (obj=0x555556621d30, ti=0x55555656bbb0) at qom/object.c:341
#5 0x0000555555a11869 in object_init_with_type (obj=0x555556621d30, ti=0x55555656bd30) at qom/object.c:341
#6 0x0000555555a11efc in object_initialize_with_type (data=data@entry=0x555556621d30, size=76560, type=type@entry=0x55555656bd30) at qom/object.c:376
#7 0x0000555555a12061 in object_new_with_type (type=0x55555656bd30) at qom/object.c:484
#8 0x0000555555a121c5 in object_new (typename=typename@entry=0x555556550340 "qemu64-x86_64-cpu") at qom/object.c:494
#9 0x00005555557f6e3d in pc_new_cpu (typename=typename@entry=0x555556550340 "qemu64-x86_64-cpu", apic_id=0, errp=errp@entry=0x5555565391b0 <error_fatal>) at /home/root/git/qemu2.git/hw/i386/pc.c:1101
#10 0x00005555557fa33e in pc_cpus_init (pcms=pcms@entry=0x5555565f9690) at /home/root/git/qemu2.git/hw/i386/pc.c:1184
#11 0x00005555557fe0f6 in pc_q35_init (machine=0x5555565f9690) at /home/root/git/qemu2.git/hw/i386/pc_q35.c:121
#12 0x000055555574fbad in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4562

Signed-off-by: Anthony Xu <anthony.xu@intel.com>
Message-id: 1494369432-15418-1-git-send-email-anthony.xu@intel.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
vivier
pushed a commit
that referenced
this pull request
Jun 4, 2017
Spotted by ASAN: /x86_64/hmp/pc-0.12: ================================================================= ==22538==ERROR: LeakSanitizer: detected memory leaks Direct leak of 224 byte(s) in 1 object(s) allocated from: #0 0x7f0f63cdee60 in malloc (/lib64/libasan.so.3+0xc6e60) #1 0x556f11ff32d7 in tcp_newtcpcb /home/elmarco/src/qemu/slirp/tcp_subr.c:250 #2 0x556f11fdb1d1 in tcp_listen /home/elmarco/src/qemu/slirp/socket.c:688 #3 0x556f11fca9d5 in slirp_add_hostfwd /home/elmarco/src/qemu/slirp/slirp.c:1052 #4 0x556f11f8db41 in slirp_hostfwd /home/elmarco/src/qemu/net/slirp.c:506 #5 0x556f11f8dd83 in hmp_hostfwd_add /home/elmarco/src/qemu/net/slirp.c:535 There might be a better way to fix this, but calling slirp tcp_close() doesn't work. Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
vivier
pushed a commit
that referenced
this pull request
Jun 4, 2017
Since commit d4c19cd ("virtio-serial: add missing virtio_detach_element() call") the following commands may cause QEMU to segfault:

$ qemu -M accel=kvm -cpu host -m 1G \
 -drive if=virtio,file=test.img,format=raw \
 -device virtio-serial-pci,id=virtio-serial0 \
 -chardev socket,id=channel1,path=/tmp/chardev.sock,server,nowait \
 -device virtserialport,chardev=channel1,bus=virtio-serial0.0,id=port1
$ nc -U /tmp/chardev.sock
^C
(guest)$ cat /dev/zero >/dev/vport0p1

The segfault is non-deterministic: if the event loop notices the socket has been closed then there is no crash. The disconnect has to happen right before QEMU attempts to write data to the socket.

The backtrace is as follows:

Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
0x00005555557e0698 in do_flush_queued_data (port=0x5555582cedf0, vq=0x7fffcc854290, vdev=0x55555807b1d0) at hw/char/virtio-serial-bus.c:180
180 for (i = port->iov_idx; i < port->elem->out_num; i++) {
#1 0x000055555580d363 in virtio_queue_notify_vq (vq=0x7fffcc854290) at hw/virtio/virtio.c:1524
#2 0x000055555580d363 in virtio_queue_host_notifier_read (n=0x7fffcc8542f8) at hw/virtio/virtio.c:2430
#3 0x0000555555b3482c in aio_dispatch_handlers (ctx=ctx@entry=0x5555566b8c80) at util/aio-posix.c:399
#4 0x0000555555b350d8 in aio_dispatch (ctx=0x5555566b8c80) at util/aio-posix.c:430
#5 0x0000555555b3212e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:261
#6 0x00007fffde71de52 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#7 0x0000555555b34353 in glib_pollfds_poll () at util/main-loop.c:213
#8 0x0000555555b34353 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:261
#9 0x0000555555b34353 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:517
#10 0x0000555555773207 in main_loop () at vl.c:1917
#11 0x0000555555773207 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4751

The do_flush_queued_data() function does not anticipate chardev close events during vsc->have_data(). It expects port->elem to remain non-NULL for the duration of its for loop.

The fix is simply to return from do_flush_queued_data() if the port closes, because the close event already frees port->elem and drains the virtqueue - there is nothing left for do_flush_queued_data() to do.

Reported-by: Sitong Liu <siliu@redhat.com>
Reported-by: Min Deng <mdeng@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
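A minimal sketch of the fix described above; the exact placement inside the flush loop is illustrative, not the verbatim patch:

    /* Inside do_flush_queued_data(), after calling vsc->have_data():
     * the chardev may have closed the port, in which case the close
     * handler has already freed port->elem and drained the virtqueue */
    if (!port->elem) {
        return;  /* nothing left to flush */
    }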
vivier
pushed a commit
that referenced
this pull request
Jun 15, 2017
…ered

Submission of requests on linux aio is a bit tricky and can lead to request completions on the submission path:

44713c9 ("linux-aio: Handle io_submit() failure gracefully")
0ed93d8 ("linux-aio: process completions from ioq_submit()")

That means that any coroutine which has yielded in order to wait for completion can be resumed from the submission path and eventually be terminated (freed). The following use-after-free crash was observed when IO throttling was enabled:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f5813dff700 (LWP 56417)]
virtqueue_unmap_sg (elem=0x7f5804009a30, len=1, vq=<optimized out>) at virtio.c:252
(gdb) bt
#0 virtqueue_unmap_sg (elem=0x7f5804009a30, len=1, vq=<optimized out>) at virtio.c:252
   ^^^^^^^^^^^^^^ remember the address
#1 virtqueue_fill (vq=0x5598b20d21b0, elem=0x7f5804009a30, len=1, idx=0) at virtio.c:282
#2 virtqueue_push (vq=0x5598b20d21b0, elem=elem@entry=0x7f5804009a30, len=<optimized out>) at virtio.c:308
#3 virtio_blk_req_complete (req=req@entry=0x7f5804009a30, status=status@entry=0 '\000') at virtio-blk.c:61
#4 virtio_blk_rw_complete (opaque=<optimized out>, ret=0) at virtio-blk.c:126
#5 blk_aio_complete (acb=0x7f58040068d0) at block-backend.c:923
#6 coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at coroutine-ucontext.c:78

(gdb) p * elem
$8 = {index = 77, out_num = 2, in_num = 1, in_addr = 0x7f5804009ad8, out_addr = 0x7f5804009ae0, in_sg = 0x0, out_sg = 0x7f5804009a50}

'in_sg' and 'out_sg' are invalid; e.g. it is impossible that 'in_sg' is zero, instead its value must be equal to:

(gdb) p/x 0x7f5804009ad8 + sizeof(elem->in_addr[0]) + 2 * sizeof(elem->out_addr[0])
$26 = 0x7f5804009af0

It seems 'elem' was corrupted. Meanwhile, another thread raised an abort:

Thread 12 (Thread 0x7f57f2ffd700 (LWP 56426)):
#0 raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 abort () from /lib/x86_64-linux-gnu/libc.so.6
#2 qemu_coroutine_enter (co=0x7f5804009af0) at qemu-coroutine.c:113
#3 qemu_co_queue_run_restart (co=0x7f5804009a30) at qemu-coroutine-lock.c:60
#4 qemu_coroutine_enter (co=0x7f5804009a30) at qemu-coroutine.c:119
   ^^^^^^^^^^^^^^^^^^ WTF?? this is equal to elem from the crashed thread
#5 qemu_co_queue_run_restart (co=0x7f57e7f16ae0) at qemu-coroutine-lock.c:60
#6 qemu_coroutine_enter (co=0x7f57e7f16ae0) at qemu-coroutine.c:119
#7 qemu_co_queue_run_restart (co=0x7f5807e112a0) at qemu-coroutine-lock.c:60
#8 qemu_coroutine_enter (co=0x7f5807e112a0) at qemu-coroutine.c:119
#9 qemu_co_queue_run_restart (co=0x7f5807f17820) at qemu-coroutine-lock.c:60
#10 qemu_coroutine_enter (co=0x7f5807f17820) at qemu-coroutine.c:119
#11 qemu_co_queue_run_restart (co=0x7f57e7f18e10) at qemu-coroutine-lock.c:60
#12 qemu_coroutine_enter (co=0x7f57e7f18e10) at qemu-coroutine.c:119
#13 qemu_co_enter_next (queue=queue@entry=0x5598b1e742d0) at qemu-coroutine-lock.c:106
#14 timer_cb (blk=0x5598b1e74280, is_write=<optimized out>) at throttle-groups.c:419

The crash can be explained by the access of the 'co' object in the loop inside qemu_co_queue_run_restart():

while ((next = QSIMPLEQ_FIRST(&co->co_queue_wakeup))) {
    QSIMPLEQ_REMOVE_HEAD(&co->co_queue_wakeup, co_queue_next);
    ^^^^^^^^^^^^^^^^^^^^ on each iteration 'co' is accessed, but 'co' can already have been freed
    qemu_coroutine_enter(next);
}

When the 'next' coroutine is resumed (entered), it can in turn resume 'co' and eventually free it. That's why we see that 'co' (which was freed) has the same address as 'elem' from the first backtrace.
The fix is obvious: use a temporary queue and do not touch the coroutine after the first qemu_coroutine_enter() is invoked.

The issue is quite rare and happens every ~12 hours under very high IO and CPU load (building a Linux kernel with -j512 inside the guest) when IO throttling is enabled. With the fix applied, the guest has been running for ~35 hours and is still alive so far.

Signed-off-by: Roman Pen <roman.penyaev@profitbricks.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20170601160847.23720-1-roman.penyaev@profitbricks.com
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Fam Zheng <famz@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
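A minimal sketch of the temporary-queue pattern the fix describes, assuming the QSIMPLEQ helpers from qemu/queue.h (illustrative, not the verbatim patch):

    void qemu_co_queue_run_restart(Coroutine *co)
    {
        Coroutine *next;
        QSIMPLEQ_HEAD(, Coroutine) tmp_queue_wakeup =
            QSIMPLEQ_HEAD_INITIALIZER(tmp_queue_wakeup);

        /* Move the wakeup list aside in one step: after the first
         * qemu_coroutine_enter() below, 'co' may already be freed,
         * so it must not be dereferenced again. */
        QSIMPLEQ_CONCAT(&tmp_queue_wakeup, &co->co_queue_wakeup);

        while ((next = QSIMPLEQ_FIRST(&tmp_queue_wakeup))) {
            QSIMPLEQ_REMOVE_HEAD(&tmp_queue_wakeup, co_queue_next);
            qemu_coroutine_enter(next);
        }
    }

The key design point is that all bookkeeping on 'co' happens before the first coroutine is entered; afterwards only the local queue head is touched.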
vivier
pushed a commit
that referenced
this pull request
Feb 11, 2021
When the reconnect in NBD client is in progress, the iochannel used for NBD connection doesn't exist. Therefore an attempt to detach it from the aio_context of the parent BlockDriverState results in a NULL pointer dereference. The problem is triggerable, in particular, when an outgoing migration is about to finish, and stopping the dataplane tries to move the BlockDriverState from the iothread aio_context to the main loop. If the NBD connection is lost before this point, and the NBD client has entered the reconnect procedure, QEMU crashes: #0 qemu_aio_coroutine_enter (ctx=0x5618056c7580, co=0x0) at /build/qemu-6MF7tq/qemu-5.0.1/util/qemu-coroutine.c:109 #1 0x00005618034b1b68 in nbd_client_attach_aio_context_bh ( opaque=0x561805ed4c00) at /build/qemu-6MF7tq/qemu-5.0.1/block/nbd.c:164 #2 0x000056180353116b in aio_wait_bh (opaque=0x7f60e1e63700) at /build/qemu-6MF7tq/qemu-5.0.1/util/aio-wait.c:55 #3 0x0000561803530633 in aio_bh_call (bh=0x7f60d40a7e80) at /build/qemu-6MF7tq/qemu-5.0.1/util/async.c:136 #4 aio_bh_poll (ctx=ctx@entry=0x5618056c7580) at /build/qemu-6MF7tq/qemu-5.0.1/util/async.c:164 #5 0x0000561803533e5a in aio_poll (ctx=ctx@entry=0x5618056c7580, blocking=blocking@entry=true) at /build/qemu-6MF7tq/qemu-5.0.1/util/aio-posix.c:650 #6 0x000056180353128d in aio_wait_bh_oneshot (ctx=0x5618056c7580, cb=<optimized out>, opaque=<optimized out>) at /build/qemu-6MF7tq/qemu-5.0.1/util/aio-wait.c:71 #7 0x000056180345c50a in bdrv_attach_aio_context (new_context=0x5618056c7580, bs=0x561805ed4c00) at /build/qemu-6MF7tq/qemu-5.0.1/block.c:6172 #8 bdrv_set_aio_context_ignore (bs=bs@entry=0x561805ed4c00, new_context=new_context@entry=0x5618056c7580, ignore=ignore@entry=0x7f60e1e63780) at /build/qemu-6MF7tq/qemu-5.0.1/block.c:6237 #9 0x000056180345c969 in bdrv_child_try_set_aio_context ( bs=bs@entry=0x561805ed4c00, ctx=0x5618056c7580, ignore_child=<optimized out>, errp=<optimized out>) at /build/qemu-6MF7tq/qemu-5.0.1/block.c:6332 #10 0x00005618034957db in blk_do_set_aio_context (blk=0x56180695b3f0, new_context=0x5618056c7580, update_root_node=update_root_node@entry=true, errp=errp@entry=0x0) at /build/qemu-6MF7tq/qemu-5.0.1/block/block-backend.c:1989 #11 0x00005618034980bd in blk_set_aio_context (blk=<optimized out>, new_context=<optimized out>, errp=errp@entry=0x0) at /build/qemu-6MF7tq/qemu-5.0.1/block/block-backend.c:2010 #12 0x0000561803197953 in virtio_blk_data_plane_stop (vdev=<optimized out>) at /build/qemu-6MF7tq/qemu-5.0.1/hw/block/dataplane/virtio-blk.c:292 #13 0x00005618033d67bf in virtio_bus_stop_ioeventfd (bus=0x5618056d9f08) at /build/qemu-6MF7tq/qemu-5.0.1/hw/virtio/virtio-bus.c:245 #14 0x00005618031c9b2e in virtio_vmstate_change (opaque=0x5618056d9f90, running=0, state=<optimized out>) at /build/qemu-6MF7tq/qemu-5.0.1/hw/virtio/virtio.c:3220 #15 0x0000561803208bfd in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_FINISH_MIGRATE) at /build/qemu-6MF7tq/qemu-5.0.1/softmmu/vl.c:1275 #16 0x0000561803155c02 in do_vm_stop (state=RUN_STATE_FINISH_MIGRATE, send_stop=<optimized out>) at /build/qemu-6MF7tq/qemu-5.0.1/cpus.c:1032 #17 0x00005618033e3765 in migration_completion (s=0x5618056e6960) at /build/qemu-6MF7tq/qemu-5.0.1/migration/migration.c:2914 #18 migration_iteration_run (s=0x5618056e6960) at /build/qemu-6MF7tq/qemu-5.0.1/migration/migration.c:3275 #19 migration_thread (opaque=opaque@entry=0x5618056e6960) at /build/qemu-6MF7tq/qemu-5.0.1/migration/migration.c:3439 #20 0x0000561803536ad6 in qemu_thread_start (args=<optimized out>) at 
/build/qemu-6MF7tq/qemu-5.0.1/util/qemu-thread-posix.c:519 #21 0x00007f61085d06ba in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0 #22 0x00007f610830641d in sysctl () from /lib/x86_64-linux-gnu/libc.so.6 #23 0x0000000000000000 in ?? () Fix it by checking that the iochannel is non-null before trying to detach it from the aio_context. If it is null, no detaching is needed, and it will get reattached in the proper aio_context once the connection is reestablished. Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20210129073859.683063-2-rvkagan@yandex-team.ru> Signed-off-by: Eric Blake <eblake@redhat.com>
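A minimal sketch of the check, assuming the NBD client state keeps its channel in an s->ioc field (illustrative, not the verbatim patch):

    /* In the NBD client's detach-aio-context path */
    if (s->ioc) {
        qio_channel_detach_aio_context(QIO_CHANNEL(s->ioc));
    }
    /* else: reconnect in progress, there is no channel to detach; it
     * will be attached to the right context once reconnection succeeds */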
vivier
pushed a commit
that referenced
this pull request
Feb 11, 2021
When an NBD block driver state is moved from one aio_context to another (e.g. when doing a drain in a migration thread), nbd_client_attach_aio_context_bh is executed that enters the connection coroutine. However, the assumption that ->connection_co is always present here appears incorrect: the connection may have encountered an error other than -EIO in the underlying transport, and thus may have decided to quit rather than keep trying to reconnect, and therefore it may have terminated the connection coroutine. As a result an attempt to reassign the client in this state (NBD_CLIENT_QUIT) to a different aio_context leads to a null pointer dereference: #0 qio_channel_detach_aio_context (ioc=0x0) at /build/qemu-gYtjVn/qemu-5.0.1/io/channel.c:452 #1 0x0000562a242824b3 in bdrv_detach_aio_context (bs=0x562a268d6a00) at /build/qemu-gYtjVn/qemu-5.0.1/block.c:6151 #2 bdrv_set_aio_context_ignore (bs=bs@entry=0x562a268d6a00, new_context=new_context@entry=0x562a260c9580, ignore=ignore@entry=0x7feeadc9b780) at /build/qemu-gYtjVn/qemu-5.0.1/block.c:6230 #3 0x0000562a24282969 in bdrv_child_try_set_aio_context (bs=bs@entry=0x562a268d6a00, ctx=0x562a260c9580, ignore_child=<optimized out>, errp=<optimized out>) at /build/qemu-gYtjVn/qemu-5.0.1/block.c:6332 #4 0x0000562a242bb7db in blk_do_set_aio_context (blk=0x562a2735d0d0, new_context=0x562a260c9580, update_root_node=update_root_node@entry=true, errp=errp@entry=0x0) at /build/qemu-gYtjVn/qemu-5.0.1/block/block-backend.c:1989 #5 0x0000562a242be0bd in blk_set_aio_context (blk=<optimized out>, new_context=<optimized out>, errp=errp@entry=0x0) at /build/qemu-gYtjVn/qemu-5.0.1/block/block-backend.c:2010 #6 0x0000562a23fbd953 in virtio_blk_data_plane_stop (vdev=<optimized out>) at /build/qemu-gYtjVn/qemu-5.0.1/hw/block/dataplane/virtio-blk.c:292 #7 0x0000562a241fc7bf in virtio_bus_stop_ioeventfd (bus=0x562a260dbf08) at /build/qemu-gYtjVn/qemu-5.0.1/hw/virtio/virtio-bus.c:245 #8 0x0000562a23fefb2e in virtio_vmstate_change (opaque=0x562a260dbf90, running=0, state=<optimized out>) at /build/qemu-gYtjVn/qemu-5.0.1/hw/virtio/virtio.c:3220 #9 0x0000562a2402ebfd in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_FINISH_MIGRATE) at /build/qemu-gYtjVn/qemu-5.0.1/softmmu/vl.c:1275 #10 0x0000562a23f7bc02 in do_vm_stop (state=RUN_STATE_FINISH_MIGRATE, send_stop=<optimized out>) at /build/qemu-gYtjVn/qemu-5.0.1/cpus.c:1032 #11 0x0000562a24209765 in migration_completion (s=0x562a260e83a0) at /build/qemu-gYtjVn/qemu-5.0.1/migration/migration.c:2914 #12 migration_iteration_run (s=0x562a260e83a0) at /build/qemu-gYtjVn/qemu-5.0.1/migration/migration.c:3275 #13 migration_thread (opaque=opaque@entry=0x562a260e83a0) at /build/qemu-gYtjVn/qemu-5.0.1/migration/migration.c:3439 #14 0x0000562a2435ca96 in qemu_thread_start (args=<optimized out>) at /build/qemu-gYtjVn/qemu-5.0.1/util/qemu-thread-posix.c:519 #15 0x00007feed31466ba in start_thread (arg=0x7feeadc9c700) at pthread_create.c:333 #16 0x00007feed2e7c41d in __GI___sysctl (name=0x0, nlen=608471908, oldval=0x562a2452b138, oldlenp=0x0, newval=0x562a2452c5e0 <__func__.28102>, newlen=0) at ../sysdeps/unix/sysv/linux/sysctl.c:30 #17 0x0000000000000000 in ?? () Fix it by checking that the connection coroutine is non-null before trying to enter it. If it is null, no entering is needed, as the connection is probably going down anyway. 
Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-Id: <20210129073859.683063-3-rvkagan@yandex-team.ru> Signed-off-by: Eric Blake <eblake@redhat.com>
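The corresponding guard, sketched minimally; the bottom half receives the BlockDriverState in 'opaque' as in the backtrace above, but the body is illustrative, not the verbatim patch:

    static void nbd_client_attach_aio_context_bh(void *opaque)
    {
        BlockDriverState *bs = opaque;
        BDRVNBDState *s = bs->opaque;

        if (s->connection_co) {
            /* Resume the connection coroutine in the new AioContext */
            qemu_aio_coroutine_enter(bdrv_get_aio_context(bs),
                                     s->connection_co);
        }
        /* else: the client is in NBD_CLIENT_QUIT and the connection is
         * going down anyway; nothing to enter */
    }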
vivier
pushed a commit
that referenced
this pull request
Feb 11, 2021
With the current code, the address space is destroyed without proper removal of its listeners. They are expected to be removed in virtio_device_instance_finalize [1], but qemu calls it through object_deinit, after the address_space_destroy call made through device_set_realized [2].

Move the removal to virtio_device_unrealize, which is called before device_set_realized [3], making it symmetric with the memory_listener_register in virtio_device_realize.

v2: Delete no-op call of virtio_device_instance_finalize. Add backtraces.

[1]
#0 virtio_device_instance_finalize (obj=0x555557de5120) at /home/qemu/include/hw/virtio/virtio.h:71
#1 0x0000555555b703c9 in object_deinit (type=0x555556639860, obj=<optimized out>) at ../qom/object.c:671
#2 object_finalize (data=0x555557de5120) at ../qom/object.c:685
#3 object_unref (objptr=0x555557de5120) at ../qom/object.c:1184
#4 0x0000555555b4de9d in bus_free_bus_child (kid=0x555557df0660) at ../hw/core/qdev.c:55
#5 0x0000555555c65003 in call_rcu_thread (opaque=opaque@entry=0x0) at ../util/rcu.c:281

Queued by:
#0 bus_remove_child (bus=0x555557de5098, child=child@entry=0x555557de5120) at ../hw/core/qdev.c:60
#1 0x0000555555b4ee31 in device_unparent (obj=<optimized out>) at ../hw/core/qdev.c:984
#2 0x0000555555b70465 in object_finalize_child_property (obj=<optimized out>, name=<optimized out>, opaque=0x555557de5120) at ../qom/object.c:1725
#3 0x0000555555b6fa17 in object_property_del_child (child=0x555557de5120, obj=0x555557ddcf90) at ../qom/object.c:645
#4 object_unparent (obj=0x555557de5120) at ../qom/object.c:664
#5 0x0000555555b4c071 in bus_unparent (obj=<optimized out>) at ../hw/core/bus.c:147
#6 0x0000555555b70465 in object_finalize_child_property (obj=<optimized out>, name=<optimized out>, opaque=0x555557de5098) at ../qom/object.c:1725
#7 0x0000555555b6fa17 in object_property_del_child (child=0x555557de5098, obj=0x555557ddcf90) at ../qom/object.c:645
#8 object_unparent (obj=0x555557de5098) at ../qom/object.c:664
#9 0x0000555555b4ee19 in device_unparent (obj=<optimized out>) at ../hw/core/qdev.c:981
#10 0x0000555555b70465 in object_finalize_child_property (obj=<optimized out>, name=<optimized out>, opaque=0x555557ddcf90) at ../qom/object.c:1725
#11 0x0000555555b6fa17 in object_property_del_child (child=0x555557ddcf90, obj=0x55555685da10) at ../qom/object.c:645
#12 object_unparent (obj=0x555557ddcf90) at ../qom/object.c:664
#13 0x00005555558dc331 in pci_for_each_device_under_bus (opaque=<optimized out>, fn=<optimized out>, bus=<optimized out>) at ../hw/pci/pci.c:1654

[2] The optimizer omits pci_qdev_unrealize (called by device_set_realized) and do_pci_unregister_device (called by pci_qdev_unrealize, and itself the caller of address_space_destroy).
#0 address_space_destroy (as=0x555557ddd1b8) at ../softmmu/memory.c:2840 #1 0x0000555555b4fc53 in device_set_realized (obj=0x555557ddcf90, value=<optimized out>, errp=0x7fffeea8f1e0) at ../hw/core/qdev.c:850 #2 0x0000555555b6eaa6 in property_set_bool (obj=0x555557ddcf90, v=<optimized out>, name=<optimized out>, opaque=0x555556650ba0, errp=0x7fffeea8f1e0) at ../qom/object.c:2255 #3 0x0000555555b70e07 in object_property_set ( obj=obj@entry=0x555557ddcf90, name=name@entry=0x555555db99df "realized", v=v@entry=0x7fffe46b7500, errp=errp@entry=0x5555565bbf38 <error_abort>) at ../qom/object.c:1400 #4 0x0000555555b73c5f in object_property_set_qobject ( obj=obj@entry=0x555557ddcf90, name=name@entry=0x555555db99df "realized", value=value@entry=0x7fffe44f6180, errp=errp@entry=0x5555565bbf38 <error_abort>) at ../qom/qom-qobject.c:28 #5 0x0000555555b71044 in object_property_set_bool ( obj=0x555557ddcf90, name=0x555555db99df "realized", value=<optimized out>, errp=0x5555565bbf38 <error_abort>) at ../qom/object.c:1470 #6 0x0000555555921cb7 in pcie_unplug_device (bus=<optimized out>, dev=0x555557ddcf90, opaque=<optimized out>) at /home/qemu/include/hw/qdev-core.h:17 #7 0x00005555558dc331 in pci_for_each_device_under_bus ( opaque=<optimized out>, fn=<optimized out>, bus=<optimized out>) at ../hw/pci/pci.c:1654 [3] #0 virtio_device_unrealize (dev=0x555557de5120) at ../hw/virtio/virtio.c:3680 #1 0x0000555555b4fc63 in device_set_realized (obj=0x555557de5120, value=<optimized out>, errp=0x7fffee28df90) at ../hw/core/qdev.c:850 #2 0x0000555555b6eab6 in property_set_bool (obj=0x555557de5120, v=<optimized out>, name=<optimized out>, opaque=0x555556650ba0, errp=0x7fffee28df90) at ../qom/object.c:2255 #3 0x0000555555b70e17 in object_property_set ( obj=obj@entry=0x555557de5120, name=name@entry=0x555555db99ff "realized", v=v@entry=0x7ffdd8035040, errp=errp@entry=0x5555565bbf38 <error_abort>) at ../qom/object.c:1400 #4 0x0000555555b73c6f in object_property_set_qobject ( obj=obj@entry=0x555557de5120, name=name@entry=0x555555db99ff "realized", value=value@entry=0x7ffdd8035020, errp=errp@entry=0x5555565bbf38 <error_abort>) at ../qom/qom-qobject.c:28 #5 0x0000555555b71054 in object_property_set_bool ( obj=0x555557de5120, name=name@entry=0x555555db99ff "realized", value=value@entry=false, errp=0x5555565bbf38 <error_abort>) at ../qom/object.c:1470 #6 0x0000555555b4edc5 in qdev_unrealize (dev=<optimized out>) at ../hw/core/qdev.c:403 #7 0x0000555555b4c2a9 in bus_set_realized (obj=<optimized out>, value=<optimized out>, errp=<optimized out>) at ../hw/core/bus.c:204 #8 0x0000555555b6eab6 in property_set_bool (obj=0x555557de5098, v=<optimized out>, name=<optimized out>, opaque=0x555557df04c0, errp=0x7fffee28e0a0) at ../qom/object.c:2255 #9 0x0000555555b70e17 in object_property_set ( obj=obj@entry=0x555557de5098, name=name@entry=0x555555db99ff "realized", v=v@entry=0x7ffdd8034f50, errp=errp@entry=0x5555565bbf38 <error_abort>) at ../qom/object.c:1400 #10 0x0000555555b73c6f in object_property_set_qobject ( obj=obj@entry=0x555557de5098, name=name@entry=0x555555db99ff "realized", value=value@entry=0x7ffdd8020630, errp=errp@entry=0x5555565bbf38 <error_abort>) at ../qom/qom-qobject.c:28 #11 0x0000555555b71054 in object_property_set_bool ( obj=obj@entry=0x555557de5098, name=name@entry=0x555555db99ff "realized", value=value@entry=false, errp=0x5555565bbf38 <error_abort>) at ../qom/object.c:1470 #12 0x0000555555b4c725 in qbus_unrealize ( bus=bus@entry=0x555557de5098) at ../hw/core/bus.c:178 #13 0x0000555555b4fc00 in device_set_realized 
(obj=0x555557ddcf90, value=<optimized out>, errp=0x7fffee28e1e0) at ../hw/core/qdev.c:844 #14 0x0000555555b6eab6 in property_set_bool (obj=0x555557ddcf90, v=<optimized out>, name=<optimized out>, opaque=0x555556650ba0, errp=0x7fffee28e1e0) at ../qom/object.c:2255 #15 0x0000555555b70e17 in object_property_set ( obj=obj@entry=0x555557ddcf90, name=name@entry=0x555555db99ff "realized", v=v@entry=0x7ffdd8020560, errp=errp@entry=0x5555565bbf38 <error_abort>) at ../qom/object.c:1400 #16 0x0000555555b73c6f in object_property_set_qobject ( obj=obj@entry=0x555557ddcf90, name=name@entry=0x555555db99ff "realized", value=value@entry=0x7ffdd8020540, errp=errp@entry=0x5555565bbf38 <error_abort>) at ../qom/qom-qobject.c:28 #17 0x0000555555b71054 in object_property_set_bool ( obj=0x555557ddcf90, name=0x555555db99ff "realized", value=<optimized out>, errp=0x5555565bbf38 <error_abort>) at ../qom/object.c:1470 #18 0x0000555555921cb7 in pcie_unplug_device (bus=<optimized out>, dev=0x555557ddcf90, opaque=<optimized out>) at /home/qemu/include/hw/qdev-core.h:17 #19 0x00005555558dc331 in pci_for_each_device_under_bus ( opaque=<optimized out>, fn=<optimized out>, bus=<optimized out>) at ../hw/pci/pci.c:1654 Fixes: c611c76 ("virtio: add MemoryListener to cache ring translations") Buglink: https://bugs.launchpad.net/qemu/+bug/1912846 Signed-off-by: Eugenio Pérez <eperezma@redhat.com> Message-Id: <20210125192505.390554-1-eperezma@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
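A minimal sketch of the move described above, assuming the listener field added by the Fixes commit lives in VirtIODevice (illustrative, not the verbatim patch):

    static void virtio_device_unrealize(DeviceState *dev)
    {
        VirtIODevice *vdev = VIRTIO_DEVICE(dev);

        /* Moved here from virtio_device_instance_finalize so it runs
         * before device_set_realized tears down the address space */
        memory_listener_unregister(&vdev->listener);

        /* ... rest of the existing unrealize steps ... */
    }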
vivier
pushed a commit
that referenced
this pull request
Mar 9, 2021
Commit v5.2.0-190-g0546c0609c ("vl: split various early command line options to a separate function") moved the trace backend init code to qemu_process_early_options(), which is now being called before os_daemonize() via qemu_maybe_daemonize().

It turns out that this change of order causes a problem when executing QEMU in daemon mode and with CONFIG_TRACE_SIMPLE. The trace thread is now being created by the parent, and the parent is left waiting for a trace file flush that was registered via st_init(). The result is that the parent process never exits.

To reproduce, fire up a QEMU process with -daemonize and with CONFIG_TRACE_SIMPLE enabled. Two QEMU processes will be left on the host:

$ sudo ./x86_64-softmmu/qemu-system-x86_64 -S -no-user-config -nodefaults \
 -nographic -machine none,accel=kvm:tcg -daemonize

$ ps axf | grep qemu
529710 pts/3 S+ 0:00 | \_ grep --color=auto qemu
529697 ? Ssl 0:00 \_ ./x86_64-softmmu/qemu-system-x86_64 -S -no-user-config -nodefaults -nographic -machine none,accel=kvm:tcg -daemonize
529699 ? Sl 0:00 \_ ./x86_64-softmmu/qemu-system-x86_64 -S -no-user-config -nodefaults -nographic -machine none,accel=kvm:tcg -daemonize

The parent process is hung in flush_trace_file:

$ sudo gdb ./x86_64-softmmu/qemu-system-x86_64 529697
(..)
(gdb) bt
#0 0x00007f9dac6a137d in syscall () at /lib64/libc.so.6
#1 0x00007f9dacc3c4f3 in g_cond_wait () at /lib64/libglib-2.0.so.0
#2 0x0000555d12f952da in flush_trace_file (wait=true) at ../trace/simple.c:140
#3 0x0000555d12f95b4c in st_flush_trace_buffer () at ../trace/simple.c:383
#4 0x00007f9dac5e43a7 in __run_exit_handlers () at /lib64/libc.so.6
#5 0x00007f9dac5e4550 in on_exit () at /lib64/libc.so.6
#6 0x0000555d12d454de in os_daemonize () at ../os-posix.c:255
#7 0x0000555d12d0bd5c in qemu_maybe_daemonize (pid_file=0x0) at ../softmmu/vl.c:2408
#8 0x0000555d12d0e566 in qemu_init (argc=8, argv=0x7fffc594d9b8, envp=0x7fffc594da00) at ../softmmu/vl.c:3459
#9 0x0000555d128edac1 in main (argc=8, argv=0x7fffc594d9b8, envp=0x7fffc594da00) at ../softmmu/main.c:49
(gdb)

Aside from the 'zombie' process on the host, this is directly impacting Libvirt. Libvirt waits for the parent process to exit to be sure that the QMP monitor is available in the daemonized process to fetch QEMU capabilities, and as it is now, Libvirt hangs at daemon start waiting for the parent process to exit.

The fix is simple: just move the trace backend related code back to be executed after daemonizing.

Fixes: 0546c06
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20210105181437.538366-2-danielhb413@gmail.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
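A minimal sketch of the corrected ordering; the function names come from the backtrace above, but the exact call sites in vl.c are illustrative:

    /* In qemu_init(): fork/daemonize first, so the trace writer thread
     * and its exit-time flush handler belong to the child, not the parent */
    qemu_maybe_daemonize(pid_file);

    /* Only now start the trace backend (for CONFIG_TRACE_SIMPLE this
     * spawns the writer thread and registers the flush via st_init()) */
    if (!trace_init_backends()) {
        exit(1);
    }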
vivier
pushed a commit
that referenced
this pull request
Apr 10, 2021
We can't know whether the caller read enough data into the memory pointed to by ext_hdr to cast it as an ip6_ext_hdr_routing. Declare rt_hdr on the stack and fill it again from the iovec. Since we already checked there is enough data in the iovec buffer, simply add an assert() call to consume the bytes_read variable.

This fixes a 2-byte buffer overrun in eth_parse_ipv6_hdr() reported by the QEMU fuzzer:

$ cat << EOF | ./qemu-system-i386 -M pc-q35-5.0 \
 -accel qtest -monitor none \
 -serial none -nographic -qtest stdio
outl 0xcf8 0x80001010
outl 0xcfc 0xe1020000
outl 0xcf8 0x80001004
outw 0xcfc 0x7
write 0x25 0x1 0x86
write 0x26 0x1 0xdd
write 0x4f 0x1 0x2b
write 0xe1020030 0x4 0x190002e1
write 0xe102003a 0x2 0x0807
write 0xe1020048 0x4 0x12077cdd
write 0xe1020400 0x4 0xba077cdd
write 0xe1020420 0x4 0x190002e1
write 0xe1020428 0x4 0x3509d807
write 0xe1020438 0x1 0xe2
EOF
=================================================================
==2859770==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffdef904902 at pc 0x561ceefa78de bp 0x7ffdef904820 sp 0x7ffdef904818
READ of size 1 at 0x7ffdef904902 thread T0
#0 0x561ceefa78dd in _eth_get_rss_ex_dst_addr net/eth.c:410:17
#1 0x561ceefa41fb in eth_parse_ipv6_hdr net/eth.c:532:17
#2 0x561cef7de639 in net_tx_pkt_parse_headers hw/net/net_tx_pkt.c:228:14
#3 0x561cef7dbef4 in net_tx_pkt_parse hw/net/net_tx_pkt.c:273:9
#4 0x561ceec29f22 in e1000e_process_tx_desc hw/net/e1000e_core.c:730:29
#5 0x561ceec28eac in e1000e_start_xmit hw/net/e1000e_core.c:927:9
#6 0x561ceec1baab in e1000e_set_tdt hw/net/e1000e_core.c:2444:9
#7 0x561ceebf300e in e1000e_core_write hw/net/e1000e_core.c:3256:9
#8 0x561cef3cd4cd in e1000e_mmio_write hw/net/e1000e.c:110:5

Address 0x7ffdef904902 is located in stack of thread T0 at offset 34 in frame
#0 0x561ceefa320f in eth_parse_ipv6_hdr net/eth.c:486

This frame has 1 object(s):
[32, 34) 'ext_hdr' (line 487) <== Memory access at offset 34 overflows this variable

HINT: this may be a false positive if your program uses some custom stack unwind mechanism, swapcontext or vfork (longjmp and C++ exceptions *are* supported)
SUMMARY: AddressSanitizer: stack-buffer-overflow net/eth.c:410:17 in _eth_get_rss_ex_dst_addr

Shadow bytes around the buggy address:
0x10003df188d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10003df188e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10003df188f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10003df18900: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10003df18910: 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1
=>0x10003df18920:[02]f3 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00
0x10003df18930: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10003df18940: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10003df18950: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10003df18960: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10003df18970: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Stack left redzone: f1
Stack right redzone: f3
==2859770==ABORTING

Add the corresponding qtest case with the fuzzer reproducer.
FWIW GCC 11 similarly reported: net/eth.c: In function 'eth_parse_ipv6_hdr': net/eth.c:410:15: error: array subscript 'struct ip6_ext_hdr_routing[0]' is partly outside array bounds of 'struct ip6_ext_hdr[1]' [-Werror=array-bounds] 410 | if ((rthdr->rtype == 2) && (rthdr->segleft == 1)) { | ~~~~~^~~~~~~ net/eth.c:485:24: note: while referencing 'ext_hdr' 485 | struct ip6_ext_hdr ext_hdr; | ^~~~~~~ net/eth.c:410:38: error: array subscript 'struct ip6_ext_hdr_routing[0]' is partly outside array bounds of 'struct ip6_ext_hdr[1]' [-Werror=array-bounds] 410 | if ((rthdr->rtype == 2) && (rthdr->segleft == 1)) { | ~~~~~^~~~~~~~~ net/eth.c:485:24: note: while referencing 'ext_hdr' 485 | struct ip6_ext_hdr ext_hdr; | ^~~~~~~ Cc: qemu-stable@nongnu.org Buglink: https://bugs.launchpad.net/qemu/+bug/1879531 Reported-by: Alexander Bulekov <alxndr@bu.edu> Reported-by: Miroslav Rezanina <mrezanin@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Miroslav Rezanina <mrezanin@redhat.com> Fixes: eb70002 ("net_pkt: Extend packet abstraction as required by e1000e functionality") Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
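A minimal sketch of the stack-copy approach described above; the iovec parameter and offset names are illustrative:

    /* Read the routing extension header out of the iovec into a
     * correctly-sized stack variable instead of casting ext_hdr */
    struct ip6_ext_hdr_routing rt_hdr;
    size_t bytes_read = iov_to_buf(pkt, pkt_frags, ext_hdr_offset,
                                   &rt_hdr, sizeof(rt_hdr));
    /* The caller already verified this much data is available */
    assert(bytes_read == sizeof(rt_hdr));

    if ((rt_hdr.rtype == 2) && (rt_hdr.segleft == 1)) {
        /* ... extract the RSS destination address ... */
    }

Copying through iov_to_buf() bounds the read to the iovec contents, so no access can run past the caller's 2-byte ext_hdr variable.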
vivier
pushed a commit
that referenced
this pull request
Apr 10, 2021
Incoming enabled bitmaps are busy, because we do bdrv_dirty_bitmap_create_successor() for them. But disabled bitmaps being migrated are not marked busy, and the user can remove them during the incoming migration. Then we may crash in cancel_incoming_locked() when we try to remove a bitmap that was already removed by the user, like this:

#0 qemu_mutex_lock_impl (mutex=0x5593d88c50d1, file=0x559680554b20 "../block/dirty-bitmap.c", line=64) at ../util/qemu-thread-posix.c:77
#1 bdrv_dirty_bitmaps_lock (bs=0x5593d88c0ee9) at ../block/dirty-bitmap.c:64
#2 bdrv_release_dirty_bitmap (bitmap=0x5596810e9570) at ../block/dirty-bitmap.c:362
#3 cancel_incoming_locked (s=0x559680be8208 <dbm_state+40>) at ../migration/block-dirty-bitmap.c:918
#4 dirty_bitmap_load (f=0x559681d02b10, opaque=0x559680be81e0 <dbm_state>, version_id=1) at ../migration/block-dirty-bitmap.c:1194
#5 vmstate_load (f=0x559681d02b10, se=0x559680fb5810) at ../migration/savevm.c:908
#6 qemu_loadvm_section_part_end (f=0x559681d02b10, mis=0x559680fb4a30) at ../migration/savevm.c:2473
#7 qemu_loadvm_state_main (f=0x559681d02b10, mis=0x559680fb4a30) at ../migration/savevm.c:2626
#8 postcopy_ram_listen_thread (opaque=0x0) at ../migration/savevm.c:1871
#9 qemu_thread_start (args=0x5596817ccd10) at ../util/qemu-thread-posix.c:521
#10 start_thread () at /lib64/libpthread.so.0
#11 clone () at /lib64/libc.so.6

Note the bs pointer taken from the bitmap: it is definitely badly aligned. That's because this is a use-after-free; the bitmap has already been freed.

So, let's make disabled bitmaps (being migrated) busy during incoming migration.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20210322094906.5079-2-vsementsov@virtuozzo.com>
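A minimal sketch of the approach, assuming a bdrv_dirty_bitmap_set_busy()-style helper as used elsewhere in the dirty-bitmap code (illustrative, not the verbatim patch):

    /* On the incoming side, when a migrated-but-disabled bitmap is
     * instantiated: mark it busy so the user cannot delete it while
     * migration is still loading its data */
    bdrv_dirty_bitmap_set_busy(bitmap, true);

    /* ... and release it once loading finishes or is canceled */
    bdrv_dirty_bitmap_set_busy(bitmap, false);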
vivier
pushed a commit
that referenced
this pull request
Apr 10, 2021
When building with --enable-sanitizers we get: Direct leak of 32 byte(s) in 2 object(s) allocated from: #0 0x5618479ec7cf in malloc (qemu-system-aarch64+0x233b7cf) #1 0x7f675745f958 in g_malloc (/lib64/libglib-2.0.so.0+0x58958) #2 0x561847f02ca2 in usb_packet_init hw/usb/core.c:531:5 #3 0x561848df4df4 in usb_ehci_init hw/usb/hcd-ehci.c:2575:5 #4 0x561847c119ac in ehci_sysbus_init hw/usb/hcd-ehci-sysbus.c:73:5 #5 0x56184a5bdab8 in object_init_with_type qom/object.c:375:9 #6 0x56184a5bd955 in object_init_with_type qom/object.c:371:9 #7 0x56184a5a2bda in object_initialize_with_type qom/object.c:517:5 #8 0x56184a5a24d5 in object_initialize qom/object.c:536:5 #9 0x56184a5a2f6c in object_initialize_child_with_propsv qom/object.c:566:5 #10 0x56184a5a2e60 in object_initialize_child_with_props qom/object.c:549:10 #11 0x56184a5a3a1e in object_initialize_child_internal qom/object.c:603:5 #12 0x561849542d18 in npcm7xx_init hw/arm/npcm7xx.c:427:5 Similarly to commit d710e1e ("usb: ehci: fix memory leak in ehci"), fix by calling usb_ehci_finalize() to free the USBPacket. Fixes: 7341ea0 Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Thomas Huth <thuth@redhat.com> Message-Id: <20210323183701.281152-1-f4bug@amsat.org> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
vivier
pushed a commit
that referenced
this pull request
Apr 10, 2021
When building with --enable-sanitizers we get: Direct leak of 16 byte(s) in 1 object(s) allocated from: #0 0x5618479ec7cf in malloc (qemu-system-aarch64+0x233b7cf) #1 0x7f675745f958 in g_malloc (/lib64/libglib-2.0.so.0+0x58958) #2 0x561847c2dcc9 in xlnx_dp_init hw/display/xlnx_dp.c:1259:5 #3 0x56184a5bdab8 in object_init_with_type qom/object.c:375:9 #4 0x56184a5a2bda in object_initialize_with_type qom/object.c:517:5 #5 0x56184a5a24d5 in object_initialize qom/object.c:536:5 #6 0x56184a5a2f6c in object_initialize_child_with_propsv qom/object.c:566:5 #7 0x56184a5a2e60 in object_initialize_child_with_props qom/object.c:549:10 #8 0x56184a5a3a1e in object_initialize_child_internal qom/object.c:603:5 #9 0x5618495aa431 in xlnx_zynqmp_init hw/arm/xlnx-zynqmp.c:273:5 The RX/TX FIFOs are created in xlnx_dp_init(), add xlnx_dp_finalize() to destroy them. Fixes: 58ac482 ("introduce xlnx-dp") Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20210323182958.277654-1-f4bug@amsat.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
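A minimal sketch of the missing finalizer, assuming the FIFO field names created in xlnx_dp_init() (illustrative):

    static void xlnx_dp_finalize(Object *obj)
    {
        XlnxDPState *s = XLNX_DP(obj);

        /* Free the FIFOs allocated by fifo8_create() in xlnx_dp_init() */
        fifo8_destroy(&s->rx_fifo);
        fifo8_destroy(&s->tx_fifo);
    }

The function is then wired up as the type's instance_finalize hook, mirroring how instance_init points at xlnx_dp_init().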
vivier
pushed a commit
that referenced
this pull request
Apr 10, 2021
g_hash_table_add always retains ownership of the pointer passed in as the key. Its return status merely indicates whether the added entry was new, or replaced an existing entry. Thus key must never be freed after this method returns. Spotted by ASAN: ==2407186==ERROR: AddressSanitizer: heap-use-after-free on address 0x6020003ac4f0 at pc 0x7ffff766659c bp 0x7fffffffd1d0 sp 0x7fffffffc980 READ of size 1 at 0x6020003ac4f0 thread T0 #0 0x7ffff766659b (/lib64/libasan.so.6+0x8a59b) #1 0x7ffff6bfa843 in g_str_equal ../glib/ghash.c:2303 #2 0x7ffff6bf8167 in g_hash_table_lookup_node ../glib/ghash.c:493 #3 0x7ffff6bf9b78 in g_hash_table_insert_internal ../glib/ghash.c:1598 #4 0x7ffff6bf9c32 in g_hash_table_add ../glib/ghash.c:1689 #5 0x5555596caad4 in module_load_one ../util/module.c:233 #6 0x5555596ca949 in module_load_one ../util/module.c:225 #7 0x5555596ca949 in module_load_one ../util/module.c:225 #8 0x5555596cbdf4 in module_load_qom_all ../util/module.c:349 Typical C bug... Fixes: 9062912 ("module: use g_hash_table_add()") Cc: qemu-stable@nongnu.org Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Message-Id: <20210316134456.3243102-1-marcandre.lureau@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
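For illustration, a sketch of the correct ownership pattern with GLib (the set owns its keys via the destroy function passed to g_hash_table_new_full()):

    /* A string set where the table owns the keys */
    GHashTable *loaded = g_hash_table_new_full(g_str_hash, g_str_equal,
                                               g_free, NULL);

    char *name = g_strdup("some-module");
    g_hash_table_add(loaded, name);
    /* 'name' now belongs to the table; freeing it here, regardless of
     * what g_hash_table_add() returned, produces exactly the
     * use-after-free shown above */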
vivier
pushed a commit
that referenced
this pull request
Jan 16, 2022
Multifd with an unsupported protocol will cause a segmentation fault:

(gdb) bt
#0 0x0000563b4a93faf8 in socket_connect (addr=0x0, errp=0x7f7f02675410) at ../util/qemu-sockets.c:1190
#1 0x0000563b4a797a03 in qio_channel_socket_connect_sync (ioc=0x563b4d16e8c0, addr=0x0, errp=0x7f7f02675410) at ../io/channel-socket.c:145
#2 0x0000563b4a797abf in qio_channel_socket_connect_worker (task=0x563b4cd86c30, opaque=0x0) at ../io/channel-socket.c:168
#3 0x0000563b4a792631 in qio_task_thread_worker (opaque=0x563b4cd86c30) at ../io/task.c:124
#4 0x0000563b4a91da69 in qemu_thread_start (args=0x563b4c44bb80) at ../util/qemu-thread-posix.c:541
#5 0x00007f7fe9b5b3f9 in ?? ()
#6 0x0000000000000000 in ?? ()

It's enough to check migrate_multifd_is_allowed() in the multifd cleanup() and setup() paths, even though there are many other places using migrate_use_multifd().

Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
vivier
pushed a commit
that referenced
this pull request
Jan 16, 2022
This patch fixes the following deadlock:

Thread 1 (Thread 0x7f34ee738d80 (LWP 11212)):
#0 __pthread_clockjoin_ex (threadid=139847152957184, thread_return=0x7f30b1febf30, clockid=<optimized out>, abstime=<optimized out>, block=<optimized out>) at pthread_join_common.c:145
#1 0x0000563401998e36 in qemu_thread_join (thread=0x563402d66610) at util/qemu-thread-posix.c:587
#2 0x00005634017a79fa in process_incoming_migration_co (opaque=0x0) at migration/migration.c:502
#3 0x00005634019b59c9 in coroutine_trampoline (i0=63395504, i1=22068) at util/coroutine-ucontext.c:115
#4 0x00007f34ef860660 in ?? () at ../sysdeps/unix/sysv/linux/x86_64/__start_context.S:91 from /lib/x86_64-linux-gnu/libc.so.6
#5 0x00007f30b21ee730 in ?? ()
#6 0x0000000000000000 in ?? ()

Thread 13 (Thread 0x7f30b3dff700 (LWP 11747)):
#0 __lll_lock_wait (futex=futex@entry=0x56340218ffa0 <qemu_global_mutex>, private=0) at lowlevellock.c:52
#1 0x00007f34efa000a3 in __GI___pthread_mutex_lock (mutex=0x56340218ffa0 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2 0x0000563401997f99 in qemu_mutex_lock_impl (mutex=0x56340218ffa0 <qemu_global_mutex>, file=0x563401b7a80e "migration/colo.c", line=806) at util/qemu-thread-posix.c:78
#3 0x0000563401407144 in qemu_mutex_lock_iothread_impl (file=0x563401b7a80e "migration/colo.c", line=806) at /home/workspace/colo-qemu/cpus.c:1899
#4 0x00005634017ba8e8 in colo_process_incoming_thread (opaque=0x563402d664c0) at migration/colo.c:806
#5 0x0000563401998b72 in qemu_thread_start (args=0x5634039f8370) at util/qemu-thread-posix.c:519
#6 0x00007f34ef9fd609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#7 0x00007f34ef924293 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

The QEMU main thread is holding the lock:

(gdb) p qemu_global_mutex
$1 = {lock = {__data = {__lock = 2, __count = 0, __owner = 11212, __nusers = 9, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = "\002\000\000\000\000\000\000\000\314+\000\000\t", '\000' <repeats 26 times>, __align = 2}, file = 0x563401c07e4b "util/main-loop.c", line = 240, initialized = true}

From the call trace, we can see this is a deadlock bug: the QEMU main thread holds the global mutex while waiting for the COLO thread to end, and the COLO thread wants to acquire the global mutex, which causes a deadlock. So, we should release the qemu_global_mutex before waiting for the COLO thread to end.

Signed-off-by: Lei Rao <lei.rao@intel.com>
Reviewed-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
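A minimal sketch of the fix; the placement in process_incoming_migration_co() follows the backtrace above, while the thread field name is an assumption:

    /* Drop the BQL while joining, so colo_process_incoming_thread can
     * take it, finish its work, and actually exit */
    qemu_mutex_unlock_iothread();
    qemu_thread_join(&mis->colo_incoming_thread);
    qemu_mutex_lock_iothread();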
vivier
pushed a commit
that referenced
this pull request
Jan 16, 2022
Without the previous commit, when running 'make check-qtest-i386' with QEMU configured with '--enable-sanitizers' we get: AddressSanitizer:DEADLYSIGNAL ================================================================= ==287878==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000344 ==287878==The signal is caused by a WRITE memory access. ==287878==Hint: address points to the zero page. #0 0x564b2e5bac27 in blk_inc_in_flight block/block-backend.c:1346:5 #1 0x564b2e5bb228 in blk_pwritev_part block/block-backend.c:1317:5 #2 0x564b2e5bcd57 in blk_pwrite block/block-backend.c:1498:11 #3 0x564b2ca1cdd3 in fdctrl_write_data hw/block/fdc.c:2221:17 #4 0x564b2ca1b2f7 in fdctrl_write hw/block/fdc.c:829:9 #5 0x564b2dc49503 in portio_write softmmu/ioport.c:201:9 Add the reproducer for CVE-2021-20196. Suggested-by: Alexander Bulekov <alxndr@bu.edu> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20211124161536.631563-4-philmd@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>
vivier
pushed a commit
that referenced
this pull request
Jan 20, 2022
If a DMA request arrives while no data is available, simply wait for data to be queued; do not abort. This fixes:

$ cat << EOF | \
 qemu-system-i386 -nographic -M q35,accel=qtest -serial none \
 -monitor none -qtest stdio -trace lsi* \
 -drive if=none,id=drive0,file=null-co://,file.read-zeroes=on,format=raw \
 -device lsi53c895a,id=scsi0 -device scsi-hd,drive=drive0,bus=scsi0.0,channel=0,scsi-id=0,lun=0
lsi_reset Reset
lsi_reg_write Write reg DSP2 0x2e = 0xff
lsi_reg_write Write reg DSP3 0x2f = 0xff
lsi_execute_script SCRIPTS dsp=0xffff0000 opcode 0x184a3900 arg 0x4a8b2d75
qemu-system-i386: hw/scsi/lsi53c895a.c:624: lsi_do_dma: Assertion `s->current' failed.

(gdb) bt
#5 0x00007ffff4e8a3a6 in __GI___assert_fail (assertion=0x5555560accbc "s->current", file=0x5555560acc28 "hw/scsi/lsi53c895a.c", line=624, function=0x5555560adb18 "lsi_do_dma") at assert.c:101
#6 0x0000555555aa33b9 in lsi_do_dma (s=0x555557805ac0, out=1) at hw/scsi/lsi53c895a.c:624
#7 0x0000555555aa5042 in lsi_execute_script (s=0x555557805ac0) at hw/scsi/lsi53c895a.c:1250
#8 0x0000555555aa757a in lsi_reg_writeb (s=0x555557805ac0, offset=47, val=255 '\377') at hw/scsi/lsi53c895a.c:1984
#9 0x0000555555aa875b in lsi_mmio_write (opaque=0x555557805ac0, addr=47, val=255, size=1) at hw/scsi/lsi53c895a.c:2095

Cc: qemu-stable@nongnu.org
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Vadim Rozenfeld <vrozenfe@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Reported-by: Jérôme Poulin <jeromepoulin@gmail.com>
Reported-by: Ruhr-University <bugs-syssec@rub.de>
Reported-by: Gaoning Pan <pgn@zju.edu.cn>
Reported-by: Cheolwoo Myung <cwmyung@snu.ac.kr>
Fixes: b96a0da ("lsi: move dma_len+dma_buf into lsi_request")
BugLink: https://bugs.launchpad.net/qemu/+bug/697510
BugLink: https://bugs.launchpad.net/qemu/+bug/1905521
BugLink: https://bugs.launchpad.net/qemu/+bug/1908515
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/84
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/305
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/552
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20211123111732.83137-2-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
vivier
pushed a commit
that referenced
this pull request
Jan 20, 2022
Example output of `uname -a` on an initial Gentoo LA64 port, running the upstream submission version of Linux (with some very minor patches not influencing output here): > Linux <hostname> 5.14.0-10342-g37a00851b145 #5 SMP PREEMPT Tue Aug 10 12:56:24 PM CST 2021 loongarch64 GNU/Linux And the same on the vendor-supplied Loongnix 20 system, with an early in-house port of Linux, and using the old-world ABI: > Linux <hostname> 4.19.167-rc5.lnd.1-loongson-3 #1 SMP Sat Apr 17 07:32:32 UTC 2021 loongarch64 loongarch64 loongarch64 GNU/Linux So a name of "loongarch64" matches both, fortunately. Signed-off-by: WANG Xuerui <git@xen0n.name> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20211221054105.178795-31-git@xen0n.name> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
vivier
pushed a commit
that referenced
this pull request
Jan 20, 2022
When closing the QEMU Gtk display window, it can occasionally warn:

qemu-system-x86_64: Gtk: gtk_clipboard_set_with_data: assertion 'targets != NULL' failed

#3 0x00007ffff4f02f22 in gtk_clipboard_set_with_data (clipboard=<optimized out>, targets=<optimized out>, n_targets=<optimized out>, get_func=<optimized out>, clear_func=<optimized out>, user_data=<optimized out>) at /usr/src/debug/gtk3-3.24.30-4.fc35.x86_64/gtk/gtkclipboard.c:672
#4 0x00007ffff552cd75 in gd_clipboard_update_info (gd=0x5555579a9e00, info=0x555557ba4b50) at ../ui/gtk-clipboard.c:98
#5 0x00007ffff552ce00 in gd_clipboard_notify (notifier=0x5555579aaba8, data=0x7fffffffd720) at ../ui/gtk-clipboard.c:128
#6 0x000055555603e0ff in notifier_list_notify (list=0x555556657470 <clipboard_notifiers>, data=0x7fffffffd720) at ../util/notify.c:39
#7 0x000055555594e8e0 in qemu_clipboard_update (info=0x555557ba4b50) at ../ui/clipboard.c:54
#8 0x000055555594e840 in qemu_clipboard_peer_release (peer=0x55555684a5b0, selection=QEMU_CLIPBOARD_SELECTION_PRIMARY) at ../ui/clipboard.c:40
#9 0x000055555594e786 in qemu_clipboard_peer_unregister (peer=0x55555684a5b0) at ../ui/clipboard.c:19
#10 0x000055555595f044 in vdagent_disconnect (vd=0x55555684a400) at ../ui/vdagent.c:852
#11 0x000055555595f262 in vdagent_chr_fini (obj=0x55555684a400) at ../ui/vdagent.c:908

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20211216083233.1166504-1-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
vivier
pushed a commit
that referenced
this pull request
Jun 25, 2022
The code in pcibus_get_fw_dev_path contained the potential for a stack buffer overflow of 1 byte, potentially writing an extra NUL byte to the stack. This overflow could happen if the PCI slot is >= 0x10000000 and the PCI function is >= 0x10000000, due to the size parameter of snprintf being incorrectly calculated in the call:

if (PCI_FUNC(d->devfn))
    snprintf(path + off, sizeof(path) + off, ",%x", PCI_FUNC(d->devfn));

since the off obtained from a previous call to snprintf is added to, instead of subtracted from, the total available size of the buffer.

Without the accurate size guard from snprintf, we end up writing in the worst case:

name (32) + "@" (1) + SLOT (8) + "," (1) + FUNC (8) + term NUL (1) = 51 bytes

In order to provide something more robust, replace all of the code in pcibus_get_fw_dev_path with a single call to g_strdup_printf, so there is no need to rely on manual calculations.

Found by compiling QEMU with FORTIFY_SOURCE=3 as the error:

*** buffer overflow detected ***: terminated

Thread 1 "qemu-system-x86" received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffff642c380 (LWP 121307)]
0x00007ffff71ff55c in __pthread_kill_implementation () from /lib64/libc.so.6
(gdb) bt
#0 0x00007ffff71ff55c in __pthread_kill_implementation () at /lib64/libc.so.6
#1 0x00007ffff71ac6f6 in raise () at /lib64/libc.so.6
#2 0x00007ffff7195814 in abort () at /lib64/libc.so.6
#3 0x00007ffff71f279e in __libc_message () at /lib64/libc.so.6
#4 0x00007ffff729767a in __fortify_fail () at /lib64/libc.so.6
#5 0x00007ffff7295c36 in () at /lib64/libc.so.6
#6 0x00007ffff72957f5 in __snprintf_chk () at /lib64/libc.so.6
#7 0x0000555555b1c1fd in pcibus_get_fw_dev_path ()
#8 0x0000555555f2bde4 in qdev_get_fw_dev_path_helper.constprop ()
#9 0x0000555555f2bd86 in qdev_get_fw_dev_path_helper.constprop ()
#10 0x00005555559a6e5d in get_boot_device_path ()
#11 0x00005555559a712c in get_boot_devices_list ()
#12 0x0000555555b1a3d0 in fw_cfg_machine_reset ()
#13 0x0000555555bf4c2d in pc_machine_reset ()
#14 0x0000555555c66988 in qemu_system_reset ()
#15 0x0000555555a6dff6 in qdev_machine_creation_done ()
#16 0x0000555555c79186 in qmp_x_exit_preconfig.part ()
#17 0x0000555555c7b459 in qemu_init ()
#18 0x0000555555960a29 in main ()

Found-by: Dario Faggioli <dfaggioli@suse.com>
Found-by: Martin Liška <martin.liska@suse.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20220531114707.18830-1-cfontana@suse.de>
Reviewed-by: Ani Sinha <ani@anisinha.ca>
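A sketch of the g_strdup_printf approach, assuming a pci_dev_fw_name()-style helper supplies the name part as in the snprintf version (illustrative, not the verbatim patch):

    static char *pcibus_get_fw_dev_path(DeviceState *dev)
    {
        PCIDevice *d = PCI_DEVICE(dev);
        char name[33];

        if (PCI_FUNC(d->devfn)) {
            return g_strdup_printf("%s@%x,%x",
                                   pci_dev_fw_name(dev, name, sizeof(name)),
                                   PCI_SLOT(d->devfn), PCI_FUNC(d->devfn));
        }
        return g_strdup_printf("%s@%x",
                               pci_dev_fw_name(dev, name, sizeof(name)),
                               PCI_SLOT(d->devfn));
    }

Since g_strdup_printf sizes its own allocation, there is no buffer length to miscalculate.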
vivier
pushed a commit
that referenced
this pull request
Jun 25, 2022
In two places in gdbstub.c we look at gdbserver_state.init to decide whether we're going to do a semihosting syscall via the gdb remote protocol: * when setting up, if the user didn't explicitly select either native semihosting or gdb semihosting, we autoselect, with the intended behaviour "use gdb if gdb is connected" * when the semihosting layer attempts to do a syscall via gdb, we silently ignore it if the gdbstub wasn't actually set up However, if the user's commandline sets up the gdbstub but tells QEMU to start rather than waiting for a GDB to connect (eg using '-s' but not '-S'), then we will have gdbserver_state.init true but no actual connection; an attempt to use gdb syscalls will then crash because we try to use gdbserver_state.c_cpu when it hasn't been set up: #0 0x00007ffff6803ba8 in qemu_cpu_kick (cpu=0x0) at ../../softmmu/cpus.c:457 #1 0x00007ffff6c03913 in gdb_do_syscallv (cb=0x7ffff6c19944 <common_semi_cb>, fmt=0x7ffff7573b7e "", va=0x7ffff56294c0) at ../../gdbstub.c:2946 #2 0x00007ffff6c19c3a in common_semi_gdb_syscall (cs=0x7ffff83fe060, cb=0x7ffff6c19944 <common_semi_cb>, fmt=0x7ffff7573b75 "isatty,%x") at ../../semihosting/arm-compat-semi.c:494 #3 0x00007ffff6c1a064 in gdb_isattyfn (cs=0x7ffff83fe060, gf=0x7ffff86a3690) at ../../semihosting/arm-compat-semi.c:636 #4 0x00007ffff6c1b20f in do_common_semihosting (cs=0x7ffff83fe060) at ../../semihosting/arm-compat-semi.c:967 #5 0x00007ffff693a037 in handle_semihosting (cs=0x7ffff83fe060) at ../../target/arm/helper.c:10316 You can probably also get into this state via some odd corner cases involving connecting a GDB and then telling it to detach from all the vCPUs. Abstract out the test into a new gdb_attached() function which returns true only if there's actually a GDB connected to the debug stub and attached to at least one vCPU. Reported-by: Liviu Ionescu <ilg@livius.net> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Luc Michel <luc@lmichel.fr> Message-id: 20220526190053.521505-2-peter.maydell@linaro.org
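The predicate itself is small; a minimal sketch following the commit's description (illustrative):

    /* True only if a GDB client is actually connected to the debug stub
     * and attached to at least one vCPU */
    static bool gdb_attached(void)
    {
        return gdbserver_state.init && gdbserver_state.c_cpu;
    }

Both call sites then test gdb_attached() instead of gdbserver_state.init alone, so '-s' without a connected debugger no longer routes semihosting syscalls at a NULL c_cpu.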
vivier
pushed a commit
that referenced
this pull request
Jul 7, 2022
There is an assertion failure when running a Linux kernel:

tcg_handle_interrupt: assertion failed: (qemu_mutex_iothread_locked()).

Calling stack:

#0  in raise () at /lib64/libc.so.6
#1  in abort () at /lib64/libc.so.6
#2  in g_assertion_message_expr.cold () at /lib64/libglib-2.0.so.0
#3  in g_assertion_message_expr () at /lib64/libglib-2.0.so.0
#4  in tcg_handle_interrupt (cpu=0x632000030800, mask=2) at ../accel/tcg/tcg-accel-ops.c:79
#5  in cpu_interrupt (cpu=0x632000030800, mask=2) at ../softmmu/cpus.c:248
#6  in loongarch_cpu_set_irq (opaque=0x632000030800, irq=11, level=0) at ../target/loongarch/cpu.c:100
#7  in helper_csrwr_ticlr (env=0x632000039440, val=1) at ../target/loongarch/csr_helper.c:85
#8  in code_gen_buffer ()
#9  in cpu_tb_exec (cpu=0x632000030800, itb=0x7fff946ac280, tb_exit=0x7ffe4fcb6c30) at ../accel/tcg/cpu-exec.c:358

Take the iothread mutex around loongarch_cpu_set_irq() in csrwr_ticlr() to fix the bug.

Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220701093407.2150607-10-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
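A hedged sketch of the fix described above, assuming the TCG helper runs without the iothread lock held (as helpers generally do); names follow the backtrace:

    /* csr_helper.c sketch: cpu_interrupt() asserts the iothread lock,
     * so take it around the IRQ update. */
    qemu_mutex_lock_iothread();
    loongarch_cpu_set_irq(env_cpu(env), IRQ_TIMER, 0);
    qemu_mutex_unlock_iothread();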
vivier
pushed a commit
that referenced
this pull request
Sep 21, 2022
As soon as virtio_scsi_data_plane_start() attaches host notifiers the IOThread may start virtqueue processing. There is a race between IOThread virtqueue processing and virtio_scsi_data_plane_start() because it only assigns s->dataplane_started after attaching host notifiers.

When a virtqueue handler function in the IOThread calls virtio_scsi_defer_to_dataplane() it may see !s->dataplane_started and attempt to start dataplane even though we're already in the IOThread:

#0 0x00007f67b360857c __pthread_kill_implementation (libc.so.6 + 0xa257c)
#1 0x00007f67b35bbd56 raise (libc.so.6 + 0x55d56)
#2 0x00007f67b358e833 abort (libc.so.6 + 0x28833)
#3 0x00007f67b358e75b __assert_fail_base.cold (libc.so.6 + 0x2875b)
#4 0x00007f67b35b4cd6 __assert_fail (libc.so.6 + 0x4ecd6)
#5 0x000055ca87fd411b memory_region_transaction_commit (qemu-kvm + 0x67511b)
#6 0x000055ca87e17811 virtio_pci_ioeventfd_assign (qemu-kvm + 0x4b8811)
#7 0x000055ca87e14836 virtio_bus_set_host_notifier (qemu-kvm + 0x4b5836)
#8 0x000055ca87f8e14e virtio_scsi_set_host_notifier (qemu-kvm + 0x62f14e)
#9 0x000055ca87f8dd62 virtio_scsi_dataplane_start (qemu-kvm + 0x62ed62)
#10 0x000055ca87e14610 virtio_bus_start_ioeventfd (qemu-kvm + 0x4b5610)
#11 0x000055ca87f8c29a virtio_scsi_handle_ctrl (qemu-kvm + 0x62d29a)
#12 0x000055ca87fa5902 virtio_queue_host_notifier_read (qemu-kvm + 0x646902)
#13 0x000055ca882c099e aio_dispatch_handler (qemu-kvm + 0x96199e)
#14 0x000055ca882c1761 aio_poll (qemu-kvm + 0x962761)
#15 0x000055ca880e1052 iothread_run (qemu-kvm + 0x782052)
#16 0x000055ca882c562a qemu_thread_start (qemu-kvm + 0x96662a)

This patch assigns s->dataplane_started before attaching host notifiers so that virtqueue handler functions that run in the IOThread before virtio_scsi_data_plane_start() returns correctly identify that dataplane does not need to be started. This fix is taken from the virtio-blk dataplane code and it's worth adding a comment in virtio-blk as well to explain why it works.

Note that s->dataplane_started does not need the AioContext lock because it is set before attaching host notifiers and cleared after detaching host notifiers. In other words, the IOThread always sees the value true and the main loop thread does not modify it while the IOThread is active.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2099541
Reported-by: Qing Wang <qinwang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220808162134.240405-1-stefanha@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
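The ordering fix, sketched from the description above (variable names and loop bounds are assumptions, not copied from the patch):

    /* virtio_scsi_dataplane_start() sketch: publish the flag before
     * any host notifier can fire in the IOThread. */
    s->dataplane_starting = false;
    s->dataplane_started = true;          /* set first ... */

    for (i = 0; i < vs->conf.num_queues + 2; i++) {
        /* ... then attach; a handler that runs immediately now sees
         * dataplane_started == true and does not re-enter setup */
        virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), i, true);
    }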
vivier
pushed a commit
that referenced
this pull request
Sep 21, 2022
The DMA engine is started by I/O access and then itself accesses the I/O registers, triggering a reentrancy bug. The following log can reveal it:

==5637==ERROR: AddressSanitizer: stack-overflow
#0 0x5595435f6078 in tulip_xmit_list_update qemu/hw/net/tulip.c:673
#1 0x5595435f204a in tulip_write qemu/hw/net/tulip.c:805:13
#2 0x559544637f86 in memory_region_write_accessor qemu/softmmu/memory.c:492:5
#3 0x5595446379fa in access_with_adjusted_size qemu/softmmu/memory.c:554:18
#4 0x5595446372fa in memory_region_dispatch_write qemu/softmmu/memory.c
#5 0x55954468b74c in flatview_write_continue qemu/softmmu/physmem.c:2825:23
#6 0x559544683662 in flatview_write qemu/softmmu/physmem.c:2867:12
#7 0x5595446833f3 in address_space_write qemu/softmmu/physmem.c:2963:18
#8 0x5595435fb082 in dma_memory_rw_relaxed qemu/include/sysemu/dma.h:87:12
#9 0x5595435fb082 in dma_memory_rw qemu/include/sysemu/dma.h:130:12
#10 0x5595435fb082 in dma_memory_write qemu/include/sysemu/dma.h:171:12
#11 0x5595435fb082 in stl_le_dma qemu/include/sysemu/dma.h:272:1
#12 0x5595435fb082 in stl_le_pci_dma qemu/include/hw/pci/pci.h:910:1
#13 0x5595435fb082 in tulip_desc_write qemu/hw/net/tulip.c:101:9
#14 0x5595435f7e3d in tulip_xmit_list_update qemu/hw/net/tulip.c:706:9
#15 0x5595435f204a in tulip_write qemu/hw/net/tulip.c:805:13

Fix this bug by restricting the DMA engine to memory regions.

Signed-off-by: Zheyu Ma <zheyuma97@gmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
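A hedged sketch of the idea (the exact functions touched differ in the real patch): do descriptor DMA through the memory address space only, so writes can never loop back into the device's own MMIO window:

    /* tulip_desc_write() sketch: target guest RAM directly, not the
     * PCI address space that also contains the device's registers. */
    dma_memory_write(&address_space_memory, addr, desc, sizeof(*desc),
                     MEMTXATTRS_UNSPECIFIED);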
vivier
pushed a commit
that referenced
this pull request
Sep 26, 2022
Setting up an ARM virtual machine of the virt machine type and executing the QMP command "query-acpi-ospm-status" causes a segmentation fault with the following stack dump:

#1  0x0000aaaaab64235c in qmp_query_acpi_ospm_status (errp=errp@entry=0xfffffffff030) at ../monitor/qmp-cmds.c:312
#2  0x0000aaaaabfc4e20 in qmp_marshal_query_acpi_ospm_status (args=<optimized out>, ret=0xffffea4ffe90, errp=0xffffea4ffe88) at qapi/qapi-commands-acpi.c:63
#3  0x0000aaaaabff8ba0 in do_qmp_dispatch_bh (opaque=0xffffea4ffe98) at ../qapi/qmp-dispatch.c:128
#4  0x0000aaaaac02e594 in aio_bh_call (bh=0xffffe0004d80) at ../util/async.c:150
#5  aio_bh_poll (ctx=ctx@entry=0xaaaaad0f6040) at ../util/async.c:178
#6  0x0000aaaaac00bd40 in aio_dispatch (ctx=ctx@entry=0xaaaaad0f6040) at ../util/aio-posix.c:421
#7  0x0000aaaaac02e010 in aio_ctx_dispatch (source=0xaaaaad0f6040, callback=<optimized out>, user_data=<optimized out>) at ../util/async.c:320
#8  0x0000fffff76f6884 in g_main_context_dispatch () at /usr/lib64/libglib-2.0.so.0
#9  0x0000aaaaac0452d4 in glib_pollfds_poll () at ../util/main-loop.c:297
#10 os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:320
#11 main_loop_wait (nonblocking=nonblocking@entry=0) at ../util/main-loop.c:596
#12 0x0000aaaaab5c9e50 in qemu_main_loop () at ../softmmu/runstate.c:734
#13 0x0000aaaaab185370 in qemu_main (argc=argc@entry=47, argv=argv@entry=0xfffffffff518, envp=envp@entry=0x0) at ../softmmu/main.c:38
#14 0x0000aaaaab16f99c in main (argc=47, argv=0xfffffffff518) at ../softmmu/main.c:47

Fixes: ebb6207 ("hw/acpi: Add ACPI Generic Event Device Support")
Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20220816094957.31700-1-zhukeqian1@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
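The crash points at the GED device leaving an AcpiDeviceIf hook unset; a hedged sketch of the shape such a fix can take (names assumed from hw/acpi/generic_event_device.c, not copied from the patch):

    /* Sketch: provide the missing hook so qmp_query_acpi_ospm_status()
     * no longer calls through a NULL pointer. */
    static void acpi_ged_ospm_status(AcpiDeviceIf *adev,
                                     ACPIOSTInfoList ***list)
    {
        AcpiGedState *s = ACPI_GED(adev);

        acpi_memory_ospm_status(&s->memhp_state, list);
    }

    /* and in acpi_ged_class_init() (sketch): */
    adevc->ospm_status = acpi_ged_ospm_status;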
vivier
pushed a commit
that referenced
this pull request
Jan 8, 2023
Fix warnings such as:

disas/nanomips.c:3251:64: warning: format specifies type 'char *' but the argument has type 'int64' (aka 'long long') [-Wformat]
    return img_format("CACHE 0x%" PRIx64 ", %s(%s)", op_value, s_value, rs);
                                            ~~                 ^~~~~~~
                                                               %lld

To avoid crashes such as (kernel from commit f375ad6):

$ qemu-system-mipsel -cpu I7200 -d in_asm -kernel generic_nano32r6el_page4k
...
----------------
IN: __bzero
0x805c6084:  20c4 6950      ADDU r13, a0, a2
0x805c6088:  9089           ADDIU a0, 1

Process 70261 stopped
* thread #6, stop reason = EXC_BAD_ACCESS (code=1, address=0xfffffffffffffff0)
    frame #0: 0x00000001bfe38864 libsystem_platform.dylib`_platform_strlen + 4
libsystem_platform.dylib`:
->  0x1bfe38864 <+4>:  ldr   q0, [x1]
    0x1bfe38868 <+8>:  adr   x3, #-0xc8    ; ___lldb_unnamed_symbol314
    0x1bfe3886c <+12>: ldr   q2, [x3], #0x10
    0x1bfe38870 <+16>: and   x2, x0, #0xf
Target 0: (qemu-system-mipsel) stopped.
(lldb) bt
* thread #6, stop reason = EXC_BAD_ACCESS (code=1, address=0xfffffffffffffff0)
  * frame #0: 0x00000001bfe38864 libsystem_platform.dylib`_platform_strlen + 4
    frame #1: 0x00000001bfce76a0 libsystem_c.dylib`__vfprintf + 4544
    frame #2: 0x00000001bfd158b4 libsystem_c.dylib`_vasprintf + 280
    frame #3: 0x0000000101c22fb0 libglib-2.0.0.dylib`g_vasprintf + 28
    frame #4: 0x0000000101bfb7d8 libglib-2.0.0.dylib`g_strdup_vprintf + 32
    frame #5: 0x000000010000fb70 qemu-system-mipsel`img_format(format=<unavailable>) at nanomips.c:103:14 [opt]
    frame #6: 0x0000000100018868 qemu-system-mipsel`SB_S9_(instruction=<unavailable>, info=<unavailable>) at nanomips.c:12616:12 [opt]
    frame #7: 0x000000010000f90c qemu-system-mipsel`print_insn_nanomips at nanomips.c:589:28 [opt]

Fixes: 4066c15 ("disas/nanomips: Remove IMMEDIATE functions")
Reported-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20221101114458.25756-2-philmd@linaro.org>
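A hedged sketch of the kind of change involved (not the exact committed lines): make the format specifier match the 64-bit argument instead of consuming it through %s:

    /* before: s_value is an int64 but is consumed by a %s specifier */
    return img_format("CACHE 0x%" PRIx64 ", %s(%s)", op_value, s_value, rs);

    /* after (sketch): format it as a 64-bit hex value */
    return img_format("CACHE 0x%" PRIx64 ", 0x%" PRIx64 "(%s)",
                      op_value, s_value, rs);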
vivier
pushed a commit
that referenced
this pull request
Jan 27, 2023
This is the memory leak detected when quitting qemu-system-x86_64 (with vhost-scsi-pci):

(qemu) quit

=================================================================
==15568==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 40 byte(s) in 1 object(s) allocated from:
#0 0x7f00aec57917 in __interceptor_calloc (/lib64/libasan.so.6+0xb4917)
#1 0x7f00ada0d7b5 in g_malloc0 (/lib64/libglib-2.0.so.0+0x517b5)
#2 0x5648ffd38bac in vhost_scsi_start ../hw/scsi/vhost-scsi.c:92
#3 0x5648ffd38d52 in vhost_scsi_set_status ../hw/scsi/vhost-scsi.c:131
#4 0x5648ffda340e in virtio_set_status ../hw/virtio/virtio.c:2036
#5 0x5648ff8de281 in virtio_ioport_write ../hw/virtio/virtio-pci.c:431
#6 0x5648ff8deb29 in virtio_pci_config_write ../hw/virtio/virtio-pci.c:576
#7 0x5648ffe5c0c2 in memory_region_write_accessor ../softmmu/memory.c:493
#8 0x5648ffe5c424 in access_with_adjusted_size ../softmmu/memory.c:555
#9 0x5648ffe6428f in memory_region_dispatch_write ../softmmu/memory.c:1515
#10 0x5648ffe8613d in flatview_write_continue ../softmmu/physmem.c:2825
#11 0x5648ffe86490 in flatview_write ../softmmu/physmem.c:2867
#12 0x5648ffe86d9f in address_space_write ../softmmu/physmem.c:2963
#13 0x5648ffe86e57 in address_space_rw ../softmmu/physmem.c:2973
#14 0x5648fffbfb3d in kvm_handle_io ../accel/kvm/kvm-all.c:2639
#15 0x5648fffc0e0d in kvm_cpu_exec ../accel/kvm/kvm-all.c:2890
#16 0x5648fffc90a7 in kvm_vcpu_thread_fn ../accel/kvm/kvm-accel-ops.c:51
#17 0x56490042400a in qemu_thread_start ../util/qemu-thread-posix.c:505
#18 0x7f00ac3b6ea4 in start_thread (/lib64/libpthread.so.0+0x7ea4)

Free the vsc->inflight at the 'stop' path.

Fixes: b82526c ("vhost-scsi: support inflight io track")
Cc: Joe Jin <joe.jin@oracle.com>
Cc: Li Feng <fengli@smartx.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Message-Id: <20230104160433.21353-1-dongli.zhang@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
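The fix amounts to a hedged two-liner on the teardown path, mirroring the g_malloc0() allocation seen in frame #2 of the leak report:

    /* vhost_scsi_stop() sketch: release what vhost_scsi_start()
     * allocated, and clear the pointer so a restart reallocates. */
    g_free(vsc->inflight);
    vsc->inflight = NULL;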
vivier
pushed a commit
that referenced
this pull request
Apr 5, 2023
Fixes the appended use-after-free. The root cause is that during tb invalidation we use CPU_FOREACH, and therefore to safely free a vCPU we must wait for an RCU grace period to elapse.

$ x86_64-linux-user/qemu-x86_64 tests/tcg/x86_64-linux-user/munmap-pthread
=================================================================
==1800604==ERROR: AddressSanitizer: heap-use-after-free on address 0x62d0005f7418 at pc 0x5593da6704eb bp 0x7f4961a7ac70 sp 0x7f4961a7ac60
READ of size 8 at 0x62d0005f7418 thread T2
#0 0x5593da6704ea in tb_jmp_cache_inval_tb ../accel/tcg/tb-maint.c:244
#1 0x5593da6704ea in do_tb_phys_invalidate ../accel/tcg/tb-maint.c:290
#2 0x5593da670631 in tb_phys_invalidate__locked ../accel/tcg/tb-maint.c:306
#3 0x5593da670631 in tb_invalidate_phys_page_range__locked ../accel/tcg/tb-maint.c:542
#4 0x5593da67106d in tb_invalidate_phys_range ../accel/tcg/tb-maint.c:614
#5 0x5593da6a64d4 in target_munmap ../linux-user/mmap.c:766
#6 0x5593da6dba05 in do_syscall1 ../linux-user/syscall.c:10105
#7 0x5593da6f564c in do_syscall ../linux-user/syscall.c:13329
#8 0x5593da49e80c in cpu_loop ../linux-user/x86_64/../i386/cpu_loop.c:233
#9 0x5593da6be28c in clone_func ../linux-user/syscall.c:6633
#10 0x7f496231cb42 in start_thread nptl/pthread_create.c:442
#11 0x7f49623ae9ff (/lib/x86_64-linux-gnu/libc.so.6+0x1269ff)

0x62d0005f7418 is located 28696 bytes inside of 32768-byte region [0x62d0005f0400,0x62d0005f8400)
freed by thread T148 here:
#0 0x7f49627b6460 in __interceptor_free ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:52
#1 0x5593da5ac057 in cpu_exec_unrealizefn ../cpu.c:180
#2 0x5593da81f851 (/home/cota/src/qemu/build/qemu-x86_64+0x484851)

Signed-off-by: Emilio Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230111151628.320011-2-cota@braap.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230124180127.1881110-27-alex.bennee@linaro.org>
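The general pattern, sketched under the assumption that the freed structure embeds an rcu_head (as QEMU's RCU helpers expect; the field name here is illustrative):

    /* Sketch: defer the free until after a grace period, so concurrent
     * CPU_FOREACH walkers in tb-maint.c never see freed memory. */
    struct CPUJumpCache {
        struct rcu_head rcu;
        /* cached translation-block pointers follow */
    };

    g_free_rcu(cpu->tb_jmp_cache, rcu);   /* instead of g_free() */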
vivier
pushed a commit
that referenced
this pull request
Apr 5, 2023
Fixes this tsan crash, easy to reproduce with any large enough program:

$ tests/unit/test-qht
1..2
ThreadSanitizer: CHECK failed: sanitizer_deadlock_detector.h:67 "((n_all_locks_)) < (((sizeof(all_locks_with_contexts_)/sizeof((all_locks_with_contexts_)[0]))))" (0x40, 0x40) (tid=1821568)
#0 __tsan::CheckUnwind() ../../../../src/libsanitizer/tsan/tsan_rtl.cpp:353 (libtsan.so.2+0x90034)
#1 __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) ../../../../src/libsanitizer/sanitizer_common/sanitizer_termination.cpp:86 (libtsan.so.2+0xca555)
#2 __sanitizer::DeadlockDetectorTLS<__sanitizer::TwoLevelBitVector<1ul, __sanitizer::BasicBitVector<unsigned long> > >::addLock(unsigned long, unsigned long, unsigned int) ../../../../src/libsanitizer/sanitizer_common/sanitizer_deadlock_detector.h:67 (libtsan.so.2+0xb3616)
#3 __sanitizer::DeadlockDetectorTLS<__sanitizer::TwoLevelBitVector<1ul, __sanitizer::BasicBitVector<unsigned long> > >::addLock(unsigned long, unsigned long, unsigned int) ../../../../src/libsanitizer/sanitizer_common/sanitizer_deadlock_detector.h:59 (libtsan.so.2+0xb3616)
#4 __sanitizer::DeadlockDetector<__sanitizer::TwoLevelBitVector<1ul, __sanitizer::BasicBitVector<unsigned long> > >::onLockAfter(__sanitizer::DeadlockDetectorTLS<__sanitizer::TwoLevelBitVector<1ul, __sanitizer::BasicBitVector<unsigned long> > >*, unsigned long, unsigned int) ../../../../src/libsanitizer/sanitizer_common/sanitizer_deadlock_detector.h:216 (libtsan.so.2+0xb3616)
#5 __sanitizer::DD::MutexAfterLock(__sanitizer::DDCallback*, __sanitizer::DDMutex*, bool, bool) ../../../../src/libsanitizer/sanitizer_common/sanitizer_deadlock_detector1.cpp:169 (libtsan.so.2+0xb3616)
#6 __tsan::MutexPostLock(__tsan::ThreadState*, unsigned long, unsigned long, unsigned int, int) ../../../../src/libsanitizer/tsan/tsan_rtl_mutex.cpp:200 (libtsan.so.2+0xa3382)
#7 __tsan_mutex_post_lock ../../../../src/libsanitizer/tsan/tsan_interface_ann.cpp:384 (libtsan.so.2+0x76bc3)
#8 qemu_spin_lock /home/cota/src/qemu/include/qemu/thread.h:259 (test-qht+0x44a97)
#9 qht_map_lock_buckets ../util/qht.c:253 (test-qht+0x44a97)
#10 do_qht_iter ../util/qht.c:809 (test-qht+0x45f33)
#11 qht_iter ../util/qht.c:821 (test-qht+0x45f33)
#12 iter_check ../tests/unit/test-qht.c:121 (test-qht+0xe473)
#13 qht_do_test ../tests/unit/test-qht.c:202 (test-qht+0xe473)
#14 qht_test ../tests/unit/test-qht.c:240 (test-qht+0xe7c1)
#15 test_default ../tests/unit/test-qht.c:246 (test-qht+0xe828)
#16 <null> <null> (libglib-2.0.so.0+0x7daed)
#17 <null> <null> (libglib-2.0.so.0+0x7d80a)
#18 <null> <null> (libglib-2.0.so.0+0x7d80a)
#19 g_test_run_suite <null> (libglib-2.0.so.0+0x7dfe9)
#20 g_test_run <null> (libglib-2.0.so.0+0x7e055)
#21 main ../tests/unit/test-qht.c:259 (test-qht+0xd2c6)
#22 __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58 (libc.so.6+0x29d8f)
#23 __libc_start_main_impl ../csu/libc-start.c:392 (libc.so.6+0x29e3f)
#24 _start <null> (test-qht+0xdb44)

Signed-off-by: Emilio Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230111151628.320011-5-cota@braap.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230124180127.1881110-30-alex.bennee@linaro.org>
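The crash is TSAN's fixed 64-lock budget being exhausted when qht locks every bucket at once; one hedged sketch of a striping workaround (hypothetical names, not the committed code) maps many buckets onto a small fixed pool of locks:

    /* Sketch: bound the number of distinct locks ever held at once.
     * QHT_TSAN_NR_STRIPES is a hypothetical stripe count. */
    #define QHT_TSAN_NR_STRIPES 16

    static QemuSpin stripe_lock[QHT_TSAN_NR_STRIPES];

    static inline QemuSpin *qht_bucket_lock(const struct qht_bucket *b)
    {
        size_t idx = ((uintptr_t)b / sizeof(*b)) % QHT_TSAN_NR_STRIPES;

        return &stripe_lock[idx];
    }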
vivier
pushed a commit
that referenced
this pull request
Apr 5, 2023
After live migration with a virtio block device, QEMU crashes at:

#0  0x000055914f46f795 in object_dynamic_cast_assert (obj=0x559151b7b090, typename=0x55914f80fbc4 "qio-channel", file=0x55914f80fb90 "/images/testvfe/sw/qemu.gerrit/include/io/channel.h", line=30, func=0x55914f80fcb8 <__func__.17257> "QIO_CHANNEL") at ../qom/object.c:872
#1  0x000055914f480d68 in QIO_CHANNEL (obj=0x559151b7b090) at /images/testvfe/sw/qemu.gerrit/include/io/channel.h:29
#2  0x000055914f4812f8 in qio_net_listener_set_client_func_full (listener=0x559151b7a720, func=0x55914f580b97 <tcp_chr_accept>, data=0x5591519f4ea0, notify=0x0, context=0x0) at ../io/net-listener.c:166
#3  0x000055914f580059 in tcp_chr_update_read_handler (chr=0x5591519f4ea0) at ../chardev/char-socket.c:637
#4  0x000055914f583dca in qemu_chr_be_update_read_handlers (s=0x5591519f4ea0, context=0x0) at ../chardev/char.c:226
#5  0x000055914f57b7c9 in qemu_chr_fe_set_handlers_full (b=0x559152bf23a0, fd_can_read=0x0, fd_read=0x0, fd_event=0x0, be_change=0x0, opaque=0x0, context=0x0, set_open=false, sync_state=true) at ../chardev/char-fe.c:279
#6  0x000055914f57b86d in qemu_chr_fe_set_handlers (b=0x559152bf23a0, fd_can_read=0x0, fd_read=0x0, fd_event=0x0, be_change=0x0, opaque=0x0, context=0x0, set_open=false) at ../chardev/char-fe.c:304
#7  0x000055914f378caf in vhost_user_async_close (d=0x559152bf21a0, chardev=0x559152bf23a0, vhost=0x559152bf2420, cb=0x55914f2fb8c1 <vhost_user_blk_disconnect>) at ../hw/virtio/vhost-user.c:2725
#8  0x000055914f2fba40 in vhost_user_blk_event (opaque=0x559152bf21a0, event=CHR_EVENT_CLOSED) at ../hw/block/vhost-user-blk.c:395
#9  0x000055914f58388c in chr_be_event (s=0x5591519f4ea0, event=CHR_EVENT_CLOSED) at ../chardev/char.c:61
#10 0x000055914f583905 in qemu_chr_be_event (s=0x5591519f4ea0, event=CHR_EVENT_CLOSED) at ../chardev/char.c:81
#11 0x000055914f581275 in char_socket_finalize (obj=0x5591519f4ea0) at ../chardev/char-socket.c:1083
#12 0x000055914f46f073 in object_deinit (obj=0x5591519f4ea0, type=0x5591519055c0) at ../qom/object.c:680
#13 0x000055914f46f0e5 in object_finalize (data=0x5591519f4ea0) at ../qom/object.c:694
#14 0x000055914f46ff06 in object_unref (objptr=0x5591519f4ea0) at ../qom/object.c:1202
#15 0x000055914f4715a4 in object_finalize_child_property (obj=0x559151b76c50, name=0x559151b7b250 "char3", opaque=0x5591519f4ea0) at ../qom/object.c:1747
#16 0x000055914f46ee86 in object_property_del_all (obj=0x559151b76c50) at ../qom/object.c:632
#17 0x000055914f46f0d2 in object_finalize (data=0x559151b76c50) at ../qom/object.c:693
#18 0x000055914f46ff06 in object_unref (objptr=0x559151b76c50) at ../qom/object.c:1202
#19 0x000055914f4715a4 in object_finalize_child_property (obj=0x559151b6b560, name=0x559151b76630 "chardevs", opaque=0x559151b76c50) at ../qom/object.c:1747
#20 0x000055914f46ef67 in object_property_del_child (obj=0x559151b6b560, child=0x559151b76c50) at ../qom/object.c:654
#21 0x000055914f46f042 in object_unparent (obj=0x559151b76c50) at ../qom/object.c:673
#22 0x000055914f58632a in qemu_chr_cleanup () at ../chardev/char.c:1189
#23 0x000055914f16c66c in qemu_cleanup () at ../softmmu/runstate.c:830
#24 0x000055914eee7b9e in qemu_default_main () at ../softmmu/main.c:38
#25 0x000055914eee7bcc in main (argc=86, argv=0x7ffc97cb8d88) at ../softmmu/main.c:48

In char_socket_finalize(), after s->listener is freed, the event callback vhost_user_blk_event() is invoked to handle CHR_EVENT_CLOSED. vhost_user_blk_event() calls qio_net_listener_set_client_func_full(), which still uses s->listener.

Setting s->listener = NULL after object_unref(OBJECT(s->listener)) solves this issue.

Signed-off-by: Yajun Wu <yajunw@nvidia.com>
Acked-by: Jiri Pirko <jiri@nvidia.com>
Message-Id: <20230214021430.3638579-1-yajunw@nvidia.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
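The one-line fix in context, as a hedged sketch of char_socket_finalize() (surrounding calls simplified, not copied from the patch):

    /* char_socket_finalize() sketch: drop the dangling pointer so a
     * late CHR_EVENT_CLOSED handler cannot reuse it. */
    if (s->listener) {
        qio_net_listener_set_client_func_full(s->listener, NULL, NULL,
                                              NULL, chr->gcontext);
        object_unref(OBJECT(s->listener));
        s->listener = NULL;   /* the fix */
    }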
vivier
pushed a commit
that referenced
this pull request
Apr 5, 2023
This leakage can be seen through test-io-channel-tls:

$ ../configure --target-list=aarch64-softmmu --enable-sanitizers
$ make ./tests/unit/test-io-channel-tls
$ ./tests/unit/test-io-channel-tls

Indirect leak of 104 byte(s) in 1 object(s) allocated from:
#0 0x7f81d1725808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144
#1 0x7f81d135ae98 in g_malloc (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x57e98)
#2 0x55616c5d4c1b in object_new_with_propv ../qom/object.c:795
#3 0x55616c5d4a83 in object_new_with_props ../qom/object.c:768
#4 0x55616c5c5415 in test_tls_creds_create ../tests/unit/test-io-channel-tls.c:70
#5 0x55616c5c5a6b in test_io_channel_tls ../tests/unit/test-io-channel-tls.c:158
#6 0x7f81d137d58d (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7a58d)

Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7f81d1725a06 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:153
#1 0x7f81d1472a20 in gnutls_dh_params_init (/lib/x86_64-linux-gnu/libgnutls.so.30+0x46a20)
#2 0x55616c6485ff in qcrypto_tls_creds_x509_load ../crypto/tlscredsx509.c:634
#3 0x55616c648ba2 in qcrypto_tls_creds_x509_complete ../crypto/tlscredsx509.c:694
#4 0x55616c5e1fea in user_creatable_complete ../qom/object_interfaces.c:28
#5 0x55616c5d4c8c in object_new_with_propv ../qom/object.c:807
#6 0x55616c5d4a83 in object_new_with_props ../qom/object.c:768
#7 0x55616c5c5415 in test_tls_creds_create ../tests/unit/test-io-channel-tls.c:70
#8 0x55616c5c5a6b in test_io_channel_tls ../tests/unit/test-io-channel-tls.c:158
#9 0x7f81d137d58d (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7a58d)

...

SUMMARY: AddressSanitizer: 49143 byte(s) leaked in 184 allocation(s).

The docs for `g_source_add_child_source(source, child_source)` say "source will hold a reference on child_source while child_source is attached to it." Therefore, we should unreference the child source at `qio_channel_tls_read_watch()` after attaching it to `source`.

With this change, ./tests/unit/test-io-channel-tls shows no leakages.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
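The resulting pattern, as a short sketch (the glib calls behave as documented; variable names follow the commit message):

    /* After this, `source` holds the reference that keeps the child
     * alive; drop ours so both are freed together. */
    g_source_add_child_source(source, child_source);
    g_source_unref(child_source);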
vivier
pushed a commit
that referenced
this pull request
Apr 5, 2023
xbzrle_encode_buffer_avx512() checks for overflows too scarcely in its outer loop, causing out-of-bounds writes:

$ ../configure --target-list=aarch64-softmmu --enable-sanitizers --enable-avx512bw
$ make tests/unit/test-xbzrle && ./tests/unit/test-xbzrle

==5518==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62100000b100 at pc 0x561109a7714d bp 0x7ffed712a440 sp 0x7ffed712a430
WRITE of size 1 at 0x62100000b100 thread T0
#0 0x561109a7714c in uleb128_encode_small ../util/cutils.c:831
#1 0x561109b67f6a in xbzrle_encode_buffer_avx512 ../migration/xbzrle.c:275
#2 0x5611099a7428 in test_encode_decode_overflow ../tests/unit/test-xbzrle.c:153
#3 0x7fb2fb65a58d (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7a58d)
#4 0x7fb2fb65a333 (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7a333)
#5 0x7fb2fb65aa79 in g_test_run_suite (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7aa79)
#6 0x7fb2fb65aa94 in g_test_run (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7aa94)
#7 0x5611099a3a23 in main ../tests/unit/test-xbzrle.c:218
#8 0x7fb2fa78c082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082)
#9 0x5611099a608d in _start (/qemu/build/tests/unit/test-xbzrle+0x28408d)

0x62100000b100 is located 0 bytes to the right of 4096-byte region [0x62100000a100,0x62100000b100) allocated by thread T0 here:
#0 0x7fb2fb823a06 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:153
#1 0x7fb2fb637ef0 in g_malloc0 (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x57ef0)

Fix that by performing the overflow check in the inner loop, instead.

Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
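The shape of the check, as a self-contained hedged sketch (a minimal ULEB128-style emitter, not the xbzrle code itself): verify space before each byte written in the inner loop, and fail with -1 on overflow:

    /* Sketch: bounds-check inside the byte-emitting loop rather than
     * once per outer iteration. */
    static int encode_small(uint8_t *dst, int dlen, int d, uint64_t n)
    {
        do {
            if (d >= dlen) {
                return -1;            /* would overflow the buffer */
            }
            /* low 7 bits, continuation bit if more bits remain */
            dst[d++] = (n & 0x7f) | (n >= 0x80 ? 0x80 : 0);
            n >>= 7;
        } while (n);
        return d;
    }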
vivier
pushed a commit
that referenced
this pull request
Apr 5, 2023
For example, when resetting the xlnx-zcu102 machine:

(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x50)
  * frame #0: 0x10020a740 gd_vc_send_chars(vc=0x000000000) at gtk.c:1759:41 [opt]
    frame #1: 0x100636264 qemu_chr_fe_accept_input(be=<unavailable>) at char-fe.c:159:9 [opt]
    frame #2: 0x1000608e0 cadence_uart_reset_hold [inlined] uart_rx_reset(s=0x10810a960) at cadence_uart.c:158:5 [opt]
    frame #3: 0x1000608d4 cadence_uart_reset_hold(obj=0x10810a960) at cadence_uart.c:530:5 [opt]
    frame #4: 0x100580ab4 resettable_phase_hold(obj=0x10810a960, opaque=0x000000000, type=<unavailable>) at resettable.c:0 [opt]
    frame #5: 0x10057d1b0 bus_reset_child_foreach(obj=<unavailable>, cb=(resettable_phase_hold at resettable.c:162), opaque=0x000000000, type=RESET_TYPE_COLD) at bus.c:97:13 [opt]
    frame #6: 0x1005809f8 resettable_phase_hold [inlined] resettable_child_foreach(rc=0x000060000332d2c0, obj=0x0000600002c1c180, cb=<unavailable>, opaque=0x000000000, type=RESET_TYPE_COLD) at resettable.c:96:9 [opt]
    frame #7: 0x1005809d8 resettable_phase_hold(obj=0x0000600002c1c180, opaque=0x000000000, type=RESET_TYPE_COLD) at resettable.c:173:5 [opt]
    frame #8: 0x1005803a0 resettable_assert_reset(obj=0x0000600002c1c180, type=<unavailable>) at resettable.c:60:5 [opt]
    frame #9: 0x10058027c resettable_reset(obj=0x0000600002c1c180, type=RESET_TYPE_COLD) at resettable.c:45:5 [opt]

While the chardev is created early, the VirtualConsole is only associated later, during qemu_init_displays().

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230220072251.3385878-1-marcandre.lureau@redhat.com>
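One way to make the callback robust is an early-out guard; a hedged sketch (the signature is simplified here, not copied from gtk.c, and the committed fix may take a different route):

    /* Sketch: the chardev can fire before qemu_init_displays() has
     * associated a VirtualConsole with it; do nothing in that case. */
    static gboolean gd_vc_send_chars(void *opaque)
    {
        VirtualConsole *vc = opaque;

        if (!vc) {
            return G_SOURCE_REMOVE;   /* no console attached yet */
        }
        /* the original transmit path follows in the real function */
        return G_SOURCE_CONTINUE;
    }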
vivier
pushed a commit
that referenced
this pull request
Apr 5, 2023
blk_get_geometry() eventually calls bdrv_nb_sectors(), which is a co_wrapper_mixed_bdrv_rdlock. This means that when it is called from coroutine context, it is already assumed to hold the graph lock.

However, virtio_blk_sect_range_ok() in block/export/virtio-blk-handler.c (used by vhost-user-blk and VDUSE exports) runs in a coroutine, but doesn't take the graph lock - blk_*() functions are generally expected to do that internally. This causes an assertion failure when accessing an export for the first time if it runs in an iothread.

This is an example of the crash:

$ ./storage-daemon/qemu-storage-daemon --object iothread,id=th0 --blockdev file,filename=/home/kwolf/images/hd.img,node-name=disk --export vhost-user-blk,addr.type=unix,addr.path=/tmp/vhost.sock,node-name=disk,id=exp0,iothread=th0

qemu-storage-daemon: ../block/graph-lock.c:268: void assert_bdrv_graph_readable(void): Assertion `qemu_in_main_thread() || reader_count()' failed.

(gdb) bt
#0  0x00007ffff6eafe5c in __pthread_kill_implementation () from /lib64/libc.so.6
#1  0x00007ffff6e5fa76 in raise () from /lib64/libc.so.6
#2  0x00007ffff6e497fc in abort () from /lib64/libc.so.6
#3  0x00007ffff6e4971b in __assert_fail_base.cold () from /lib64/libc.so.6
#4  0x00007ffff6e58656 in __assert_fail () from /lib64/libc.so.6
#5  0x00005555556337a3 in assert_bdrv_graph_readable () at ../block/graph-lock.c:268
#6  0x00005555555fd5a2 in bdrv_co_nb_sectors (bs=0x5555564c5ef0) at ../block.c:5847
#7  0x00005555555ee949 in bdrv_nb_sectors (bs=0x5555564c5ef0) at block/block-gen.c:256
#8  0x00005555555fd6b9 in bdrv_get_geometry (bs=0x5555564c5ef0, nb_sectors_ptr=0x7fffef7fedd0) at ../block.c:5884
#9  0x000055555562ad6d in blk_get_geometry (blk=0x5555564cb200, nb_sectors_ptr=0x7fffef7fedd0) at ../block/block-backend.c:1624
#10 0x00005555555ddb74 in virtio_blk_sect_range_ok (blk=0x5555564cb200, block_size=512, sector=0, size=512) at ../block/export/virtio-blk-handler.c:44
#11 0x00005555555dd80d in virtio_blk_process_req (handler=0x5555564cbb98, in_iov=0x7fffe8003830, out_iov=0x7fffe8003860, in_num=1, out_num=0) at ../block/export/virtio-blk-handler.c:189
#12 0x00005555555dd546 in vu_blk_virtio_process_req (opaque=0x7fffe8003800) at ../block/export/vhost-user-blk-server.c:66
#13 0x00005555557bf4a1 in coroutine_trampoline (i0=-402635264, i1=32767) at ../util/coroutine-ucontext.c:177
#14 0x00007ffff6e75c20 in ?? () from /lib64/libc.so.6
#15 0x00007fffefffa870 in ?? ()
#16 0x0000000000000000 in ?? ()

Fix this by creating a new blk_co_get_geometry() that takes the lock, and changing blk_get_geometry() to be a co_wrapper_mixed around it.

To make the resulting code cleaner, virtio-blk-handler.c can directly call the coroutine version now (though that wouldn't be necessary for fixing the bug, taking the lock in blk_co_get_geometry() is what fixes it).

Fixes: 8ab8140
Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230327113959.60071-1-kwolf@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
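Sketched from the commit description (names assumed; the real signature and wrapper annotations may differ slightly), the new coroutine version takes the reader lock itself:

    /* block-backend.c sketch: the coroutine version takes the graph
     * read lock, so coroutine callers such as virtio-blk-handler.c
     * need no extra locking of their own. */
    void coroutine_fn blk_co_get_geometry(BlockBackend *blk,
                                          uint64_t *nb_sectors_ptr)
    {
        BlockDriverState *bs = blk_bs(blk);

        GRAPH_RDLOCK_GUARD();
        *nb_sectors_ptr = bs ? bdrv_co_nb_sectors(bs) : 0;
    }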