
compile error #6

Closed

wants to merge 14 commits into from

Conversation

@Cool-Joe commented Mar 4, 2014

Got a fresh copy from git today... and won't compile:

/data/qemu-buserror/target-arm/cpu64.c: In Funktion »aarch64_cpu_register_types«:
/data/qemu-buserror/target-arm/cpu64.c:124:5: Fehler: Vergleich eines vorzeichenlosen Ausdrucks < 0 ist stets »unwahr« [-Werror=type-limits]
for (i = 0; i < ARRAY_SIZE(aarch64_cpus); i++) {
^
cc1: Alle Warnungen werden als Fehler behandelt
make[1]: *** [target-arm/cpu64.o] Fehler 1
make: *** [subdir-aarch64-softmmu] Fehler 2

(Translation of the error message: "Comparison of an unsigned expression < 0 is always false"; all warnings are treated as errors.)

Compiled on ArchLinux 3.13.5-x86

PS: My temporary bugfix: Changed
for (i = 0; i < ARRAY_SIZE(aarch64_cpus); i++) {
into
for (i = 0; i < (int) ARRAY_SIZE(aarch64_cpus); i++) {
-> compiles... don't know if it's ok...
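
For context, here is a minimal standalone reproducer of the warning and of the cast workaround above. It is my own reconstruction of the likely cause, not the actual QEMU code: if the aarch64_cpus[] array is empty in this configuration, ARRAY_SIZE() evaluates to a size_t constant 0, the int index is converted to unsigned for the comparison, and -Werror=type-limits rejects the always-false "unsigned < 0" test.

/* build with: gcc -Wall -Werror=type-limits -c repro.c */
#include <stddef.h>

#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

struct cpu_info { const char *name; };

/* empty array (GCC extension), so ARRAY_SIZE() evaluates to (size_t)0 */
static const struct cpu_info aarch64_cpus[] = {};

void register_types(void)
{
    int i;

    /* warns: comparison of unsigned expression < 0 is always false */
    for (i = 0; i < ARRAY_SIZE(aarch64_cpus); i++) {
        (void)aarch64_cpus[i].name;
    }

    /* the reported workaround: the cast keeps the comparison signed,
     * so the type-limits warning no longer fires */
    for (i = 0; i < (int)ARRAY_SIZE(aarch64_cpus); i++) {
        (void)aarch64_cpus[i].name;
    }
}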

Support for a Dallas/Maxim onewire sensor, enough of it to
fool Linux's w1-gpio driver.

Signed-off-by: Michel Pollet <buserror@gmail.com>
Header file containing most of the base addresses and IO registers
needed for the Freescale mxs/imx23 SoC emulation.
Also contains a generic helper to implement the SET/AND/OR/XOR trick
shared by pretty much all of the IO blocks on this SoC.

Signed-off-by: Michel Pollet <buserror@gmail.com>
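
For readers unfamiliar with the SoC, the SET/AND/OR/XOR trick mentioned in the commit above can be sketched roughly as follows; the names are illustrative, not the helper from the patch. Each mxs IO register is shadowed at offsets +4, +8 and +0xC, and a write to a shadow offset ORs, clears or toggles bits of the base register instead of overwriting it.

#include <stdint.h>

enum {
    MXS_REG_VAL = 0x0,  /* plain write: new = value        */
    MXS_REG_SET = 0x4,  /* OR:          new = old | value  */
    MXS_REG_CLR = 0x8,  /* AND-NOT:     new = old & ~value */
    MXS_REG_TOG = 0xc,  /* XOR:         new = old ^ value  */
};

/* fold a write to any of the four shadow offsets back into the base register */
static uint32_t mxs_reg_write(uint32_t old, uint32_t value, unsigned offset)
{
    switch (offset & 0xc) {
    case MXS_REG_SET:
        return old | value;
    case MXS_REG_CLR:
        return old & ~value;
    case MXS_REG_TOG:
        return old ^ value;
    default:
        return value;
    }
}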
Allows selective compilation of the MXS bits

Signed-off-by: Michel Pollet <buserror@gmail.com>
Prototype driver for the mxs/imx23 uart IO block. This has no
real 'uart' functional code, apart from letting itself be
initialized by linux without generating a timeout error.

Signed-off-by: Michel Pollet <buserror@gmail.com>
This driver works sufficiently well that linux can use it to access
the SD card via the SD->DMA->SSI->SD path. It hasn't been tested for
much else.

Signed-off-by: Michel Pollet <buserror@gmail.com>
Implements the interrupt collector IO block

Signed-off-by: Michel Pollet <buserror@gmail.com>
This implements just enough of the digctl IO block to allow
linux to believe it's running on (currently only) an imx23.

Signed-off-by: Michel Pollet <buserror@gmail.com>
Implements the pinctrl and GPIO block for the imx23
It handles GPIO output, and GPIO input from qemu translated
into pin values and interrupts, if appropriate.

Signed-off-by: Michel Pollet <buserror@gmail.com>
This implements the SSP port(s) of the mxs. Currently hardcoded for
the SD card interface, but as a TODO it could rather easily be made
generic and support 'generic' SPI too.
It is geared toward working with DMA, as the Linux driver uses it that
way.

Signed-off-by: Michel Pollet <buserror@gmail.com>
Support for the RTC IO block on the imx23, enough to return
a valid date and make guest linux happy.

Signed-off-by: Michel Pollet <buserror@gmail.com>
Support for the timer IO block of the mxs/imx23. Does not support
any of the fancy functions, just the 32kHz-based timers used by
linux.

Signed-off-by: Michel Pollet <buserror@gmail.com>
Add the USB IO block, and the USB PHY IO block. This just wraps
an ehci instance, and supports some of the 'extra' mxs registers.

Signed-off-by: Michel Pollet <buserror@gmail.com>
This adds support for creating an imx23 instance. This also contains
some of the more minor IO blocks, and a 'catchall' driver that helps
debugging access to undocumented IO registers.
Currently the instance can boot a linux kernel, but does not support
booting from the 'special' signed Freescale binary blobs.

Signed-off-by: Michel Pollet <buserror@gmail.com>
Adds support for creating a basic imx23 dev board from Olimex, with
a few peripherals, a bitbang i2c bus with an RTC attached, a DS18S20
thermal sensor, and a rather crude 'relay' that increases/decreases
the thermal sensor temperature.

Basically, it's a complete emulation of the hardware used for my real
life boiler controller system, but it's also a nice starting point for
any other imx233 board prototyping.
https://plus.google.com/111387094029238541867/posts/Smwc7yFK3Vk

Signed-off-by: Michel Pollet <buserror@gmail.com>
@crobinso (Contributor) commented Apr 2, 2014

QEMU does not track GitHub pull requests; please report this through the normal channels: http://wiki.qemu.org/Contribute/ReportABug

pipo added a commit that referenced this pull request May 13, 2014
The docs for glfs_init suggest that the function sets errno on every
failure. In fact it doesn't. As other functions, such as
qemu_gluster_open() in the gluster block code, report their errors based
on this assumption, we need to make sure that errno is set on each failure.

This fixes a crash of qemu-img/qemu when a gluster brick isn't
accessible from the given host while the server serving the volume
description is.

Thread 1 (Thread 0x7ffff7fba740 (LWP 203880)):
 #0  0x00007ffff77673f8 in glfs_lseek () from /usr/lib64/libgfapi.so.0
 #1  0x0000555555574a68 in qemu_gluster_getlength ()
 #2  0x0000555555565742 in refresh_total_sectors ()
 #3  0x000055555556914f in bdrv_open_common ()
 #4  0x000055555556e8e8 in bdrv_open ()
 #5  0x000055555556f02f in bdrv_open_image ()
 #6  0x000055555556e5f6 in bdrv_open ()
 #7  0x00005555555c5775 in bdrv_new_open ()
 #8  0x00005555555c5b91 in img_info ()
 #9  0x00007ffff62c9c05 in __libc_start_main () from /lib64/libc.so.6
 #10 0x00005555555648ad in _start ()

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
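
The guard the commit describes can be sketched like this (a hypothetical wrapper, not the actual block/gluster.c hunk; only glfs_init() itself is from the public libgfapi API): if the library fails without touching errno, force a sane value so callers that return -errno don't report success.

#include <errno.h>
#include <glusterfs/api/glfs.h>

static int init_gluster_volume(glfs_t *fs)
{
    int ret;

    errno = 0;
    ret = glfs_init(fs);
    if (ret) {
        /* glfs_init() does not reliably set errno on failure */
        if (errno == 0) {
            errno = EINVAL;
        }
        return -errno;
    }
    return 0;
}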
agraf pushed a commit to agraf/qemu that referenced this pull request Jul 14, 2014
VirtIOBlockReq is freed later by virtio_blk_free_request() in
hw/block/virtio-blk.c.  Remove this extraneous g_slice_free().

This patch fixes the following segfault:

  0x00005555556373af in virtio_blk_rw_complete (opaque=0x5555565ff5e0, ret=0) at hw/block/virtio-blk.c:99
  99          bdrv_acct_done(req->dev->bs, &req->acct);
  (gdb) print req
  $1 = (VirtIOBlockReq *) 0x5555565ff5e0
  (gdb) print req->dev
  $2 = (VirtIOBlock *) 0x0
  (gdb) bt
  #0  0x00005555556373af in virtio_blk_rw_complete (opaque=0x5555565ff5e0, ret=0) at hw/block/virtio-blk.c:99
  qemu#1  0x0000555555840ebe in bdrv_co_em_bh (opaque=0x5555566152d0) at block.c:4675
  qemu#2  0x000055555583de77 in aio_bh_poll (ctx=ctx@entry=0x5555563a8150) at async.c:81
  qemu#3  0x000055555584b7a7 in aio_poll (ctx=0x5555563a8150, blocking=blocking@entry=true) at aio-posix.c:188
  qemu#4  0x00005555556e520e in iothread_run (opaque=0x5555563a7fd8) at iothread.c:41
  qemu#5  0x00007ffff42ba124 in start_thread () from /usr/lib/libpthread.so.0
  qemu#6  0x00007ffff16d14bd in clone () from /usr/lib/libc.so.6

Reported-by: Max Reitz <mreitz@redhat.com>
Cc: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
mitake pushed a commit to mitake/qemu that referenced this pull request Jul 31, 2014
$ ~/usr/bin/qemu-system-x86_64 -enable-kvm -m 1024 -drive if=none,id=drive0,cache=none,aio=native,format=raw,file=/root/Image/centos-6.4.raw -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on,config-wce=on # make dataplane fail to initialize
qemu-system-x86_64: -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on,config-wce=on: device is incompatible with x-data-plane, use config-wce=off
*** glibc detected *** /root/usr/bin/qemu-system-x86_64: free(): invalid pointer: 0x00007f001fef12f8 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x7d776)[0x7f00153a5776]
/root/usr/bin/qemu-system-x86_64(+0x2c34ec)[0x7f001cf5b4ec]
/root/usr/bin/qemu-system-x86_64(+0x342f9a)[0x7f001cfdaf9a]
/root/usr/bin/qemu-system-x86_64(+0x33694e)[0x7f001cfce94e]
....................

 (gdb) bt
 #0  0x00007f3bf3a12015 in raise () from /lib64/libc.so.6
 #1  0x00007f3bf3a1348b in abort () from /lib64/libc.so.6
 #2  0x00007f3bf3a51a4e in __libc_message () from /lib64/libc.so.6
 #3  0x00007f3bf3a57776 in malloc_printerr () from /lib64/libc.so.6
 qemu#4  0x00007f3bfb60d4ec in free_and_trace (mem=0x7f3bfe0129f8) at vl.c:2786
 qemu#5  0x00007f3bfb68cf9a in virtio_cleanup (vdev=0x7f3bfe0129f8) at /root/Develop/QEMU/qemu/hw/virtio.c:900
 qemu#6  0x00007f3bfb68094e in virtio_blk_device_init (vdev=0x7f3bfe0129f8) at /root/Develop/QEMU/qemu/hw/virtio-blk.c:666
 qemu#7  0x00007f3bfb68dadf in virtio_device_init (qdev=0x7f3bfe0129f8) at /root/Develop/QEMU/qemu/hw/virtio.c:1092
 qemu#8  0x00007f3bfb50da46 in device_realize (dev=0x7f3bfe0129f8, err=0x7fff479c9258) at hw/qdev.c:176
.............................

In virtio_blk_device_init(), the memory which vdev points to is a static
member of "struct VirtIOBlkPCI", not heap memory, and it does not
get freed. So we should use virtio_common_cleanup() to clean this VirtIODevice
rather than virtio_cleanup(), which attempts to free the vdev.

This error was recently introduced by commit 05ff686.

Signed-off-by: Dunrong Huang <huangdr@cloud-times.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
aliguori pushed a commit that referenced this pull request Nov 25, 2014
If the PCI bridge enters the error flow as part of
the init process, it will only delete the shpc mmio
subregion but not remove it from the properties list,
resulting in a segmentation fault when the bridge runs
the exit function.

Example: add a pci bridge without specifying the chassis number:
    <qemu-bin> ... -device pci-bridge,id=p1
Result:
    (qemu) qemu-system-x86_64: -device pci-bridge,id=p1: Bridge chassis not specified. Each bridge is required to be assigned a unique chassis id > 0.
    qemu-system-x86_64: -device pci-bridge,id=p1: Device
    initialization failed.
    Segmentation fault (core dumped)

    if (child->class->unparent) {
    #0  0x00005555558d629b in object_finalize_child_property (obj=0x555556d2e830, name=0x555556d30630 "shpc-mmio[0]", opaque=0x555556a42fc8) at qom/object.c:1078
    #1  0x00005555558d4b1f in object_property_del_all (obj=0x555556d2e830) at qom/object.c:367
    #2  0x00005555558d4ca1 in object_finalize (data=0x555556d2e830) at qom/object.c:412
    #3  0x00005555558d55a1 in object_unref (obj=0x555556d2e830) at qom/object.c:720
    #4  0x000055555572c907 in qdev_device_add (opts=0x5555563544f0) at qdev-monitor.c:566
    #5  0x0000555555744f16 in device_init_func (opts=0x5555563544f0, opaque=0x0) at vl.c:2213
    #6  0x00005555559cf5f0 in qemu_opts_foreach (list=0x555555e0f8e0 <qemu_device_opts>, func=0x555555744efa <device_init_func>, opaque=0x0, abort_on_failure=1) at util/qemu-option.c:1057
    #7  0x000055555574a11b in main (argc=16, argv=0x7fffffffdde8, envp=0x7fffffffde70) at vl.c:423

Unparent the shpc mmio region as part of shpc cleanup.

Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
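
The cleanup the commit describes boils down to unparenting the region. A rough sketch follows; the field name shpc->mmio is an assumption, not copied from the patch.

#include "qemu/osdep.h"
#include "qom/object.h"
#include "hw/pci/shpc.h"

static void shpc_free_mmio(SHPCDevice *shpc)
{
    /* dropping the region's child property here means a later
     * object_property_del_all() no longer touches freed memory */
    object_unparent(OBJECT(&shpc->mmio));
}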
lalrae pushed a commit to lalrae/qemu that referenced this pull request Dec 17, 2014
stsquad added a commit to stsquad/qemu that referenced this pull request Jun 24, 2015
While I was testing multi-threaded TCG I discovered one consequence of
using locking around memory_region_dispatch: virtio transactions
could deadlock trying to grab the main mutex. This is due to the
virtio driver writing data back into system memory:

    #0  0x00007ffff119dcc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
    qemu#1  0x00007ffff11a10d8 in __GI_abort () at abort.c:89
    qemu#2  0x00005555555f9b24 in error_exit (err=<optimised out>, msg=msg@entry=0x5555559f3710 <__func__.6011> "qemu_mutex_lock") at util/qemu-thread-posix.c:48
    qemu#3  0x000055555594d630 in qemu_mutex_lock (mutex=mutex@entry=0x555555e62e60 <qemu_global_mutex>) at util/qemu-thread-posix.c:79
    qemu#4  0x0000555555631a84 in qemu_mutex_lock_iothread () at /home/alex/lsrc/qemu/qemu.git/cpus.c:1128
    qemu#5  0x000055555560dd1a in stw_phys_internal (endian=DEVICE_LITTLE_ENDIAN, val=1, addr=<optimised out>, as=0x555555e08060 <address_space_memory>) at /home/alex/lsrc/qemu/qemu.git/exec.c:3010
    qemu#6  stw_le_phys (as=as@entry=0x555555e08060 <address_space_memory>, addr=<optimised out>, val=1) at /home/alex/lsrc/qemu/qemu.git/exec.c:3024
    qemu#7  0x0000555555696ae5 in virtio_stw_phys (vdev=<optimised out>, value=<optimised out>, pa=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/include/hw/virtio/virtio-access.h:61
    qemu#8  vring_avail_event (vq=0x55555648dc00, vq=0x55555648dc00, vq=0x55555648dc00, val=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/hw/virtio/virtio.c:214
    qemu#9  virtqueue_pop (vq=0x55555648dc00, elem=elem@entry=0x7fff1403fd98) at /home/alex/lsrc/qemu/qemu.git/hw/virtio/virtio.c:472
    qemu#10 0x0000555555653cd1 in virtio_blk_get_request (s=0x555556486830) at /home/alex/lsrc/qemu/qemu.git/hw/block/virtio-blk.c:122
    qemu#11 virtio_blk_handle_output (vdev=<optimised out>, vq=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/hw/block/virtio-blk.c:446
    qemu#12 0x00005555556414e1 in access_with_adjusted_size (addr=addr@entry=80, value=value@entry=0x7fffa93052b0, size=size@entry=4, access_size_min=<optimised out>,
        access_size_max=<optimised out>, access=0x5555556413e0 <memory_region_write_accessor>, mr=0x555556b80388) at /home/alex/lsrc/qemu/qemu.git/memory.c:461
    qemu#13 0x00005555556471b7 in memory_region_dispatch_write (size=4, data=0, addr=80, mr=0x555556b80388) at /home/alex/lsrc/qemu/qemu.git/memory.c:1103
    qemu#14 io_mem_write (mr=mr@entry=0x555556b80388, addr=80, val=<optimised out>, size=size@entry=4) at /home/alex/lsrc/qemu/qemu.git/memory.c:2003
    qemu#15 0x000055555560ad6b in address_space_rw_internal (as=<optimised out>, addr=167788112, buf=buf@entry=0x7fffa9305380 "", len=4, is_write=is_write@entry=true, unlocked=<optimised out>,
        unlocked@entry=false) at /home/alex/lsrc/qemu/qemu.git/exec.c:2318
    qemu#16 0x000055555560aea8 in address_space_rw (is_write=true, len=<optimised out>, buf=0x7fffa9305380 "", addr=<optimised out>, as=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/exec.c:2392
    qemu#17 address_space_write (len=<optimised out>, buf=0x7fffa9305380 "", addr=<optimised out>, as=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/exec.c:2404
    qemu#18 subpage_write (opaque=<optimised out>, addr=<optimised out>, value=<optimised out>, len=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/exec.c:1963
    qemu#19 0x00005555556414e1 in access_with_adjusted_size (addr=addr@entry=592, value=value@entry=0x7fffa9305420, size=size@entry=4, access_size_min=<optimised out>,
        access_size_max=<optimised out>, access=0x5555556413e0 <memory_region_write_accessor>, mr=0x555556bfca20) at /home/alex/lsrc/qemu/qemu.git/memory.c:461
    qemu#20 0x00005555556471b7 in memory_region_dispatch_write (size=4, data=0, addr=592, mr=0x555556bfca20) at /home/alex/lsrc/qemu/qemu.git/memory.c:1103
    qemu#21 io_mem_write (mr=mr@entry=0x555556bfca20, addr=addr@entry=592, val=val@entry=0, size=size@entry=4) at /home/alex/lsrc/qemu/qemu.git/memory.c:2003
    qemu#22 0x000055555564ce16 in io_writel (retaddr=140736492182797, addr=4027616848, val=0, physaddr=592, env=0x55555649e9b0) at /home/alex/lsrc/qemu/qemu.git/softmmu_template.h:386
    qemu#23 helper_le_stl_mmu (env=0x55555649e9b0, addr=<optimised out>, val=0, mmu_idx=<optimised out>, retaddr=140736492182797) at /home/alex/lsrc/qemu/qemu.git/softmmu_template.h:426
    qemu#24 0x00007fffc49f9d0f in code_gen_buffer ()
    qemu#25 0x00005555556109dc in cpu_tb_exec (tb_ptr=0x7fffc49f9c60 <code_gen_buffer+8371296> "A\213n\374\205\355\017\205\233\001", cpu=0x555556496750)
        at /home/alex/lsrc/qemu/qemu.git/cpu-exec.c:179
    qemu#26 cpu_arm_exec (env=env@entry=0x55555649e9b0) at /home/alex/lsrc/qemu/qemu.git/cpu-exec.c:524
    qemu#27 0x0000555555630f28 in tcg_cpu_exec (env=0x55555649e9b0) at /home/alex/lsrc/qemu/qemu.git/cpus.c:1344
    qemu#28 tcg_exec_all (cpu=0x555556496750) at /home/alex/lsrc/qemu/qemu.git/cpus.c:1392
    qemu#29 qemu_tcg_cpu_thread_fn (arg=0x555556496750) at /home/alex/lsrc/qemu/qemu.git/cpus.c:1037
    qemu#30 0x00007ffff1534182 in start_thread (arg=0x7fffa9306700) at pthread_create.c:312
    qemu#31 0x00007ffff126147d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

The fix in this patch makes the global/iothread mutex recursive (only if
called from the same thread-id). As other threads are still blocked,
memory is still consistent all the way through.

This seems neater than having to do a trylock each time.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
stsquad added a commit to stsquad/qemu that referenced this pull request Jun 24, 2015
While I was testing multi-threaded TCG I discovered one consequence of
using locking around memory_region_dispatch: virtio transactions
could deadlock trying to grab the main mutex. This is due to the
virtio driver writing data back into system memory:

    #0  0x00007ffff119dcc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
    qemu#1  0x00007ffff11a10d8 in __GI_abort () at abort.c:89
    qemu#2  0x00005555555f9b24 in error_exit (err=<optimised out>, msg=msg@entry=0x5555559f3710 <__func__.6011> "qemu_mutex_lock") at util/qemu-thread-posix.c:48
    qemu#3  0x000055555594d630 in qemu_mutex_lock (mutex=mutex@entry=0x555555e62e60 <qemu_global_mutex>) at util/qemu-thread-posix.c:79
    qemu#4  0x0000555555631a84 in qemu_mutex_lock_iothread () at /home/alex/lsrc/qemu/qemu.git/cpus.c:1128
    qemu#5  0x000055555560dd1a in stw_phys_internal (endian=DEVICE_LITTLE_ENDIAN, val=1, addr=<optimised out>, as=0x555555e08060 <address_space_memory>) at /home/alex/lsrc/qemu/qemu.git/exec.c:3010
    qemu#6  stw_le_phys (as=as@entry=0x555555e08060 <address_space_memory>, addr=<optimised out>, val=1) at /home/alex/lsrc/qemu/qemu.git/exec.c:3024
    qemu#7  0x0000555555696ae5 in virtio_stw_phys (vdev=<optimised out>, value=<optimised out>, pa=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/include/hw/virtio/virtio-access.h:61
    qemu#8  vring_avail_event (vq=0x55555648dc00, vq=0x55555648dc00, vq=0x55555648dc00, val=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/hw/virtio/virtio.c:214
    qemu#9  virtqueue_pop (vq=0x55555648dc00, elem=elem@entry=0x7fff1403fd98) at /home/alex/lsrc/qemu/qemu.git/hw/virtio/virtio.c:472
    qemu#10 0x0000555555653cd1 in virtio_blk_get_request (s=0x555556486830) at /home/alex/lsrc/qemu/qemu.git/hw/block/virtio-blk.c:122
    qemu#11 virtio_blk_handle_output (vdev=<optimised out>, vq=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/hw/block/virtio-blk.c:446
    qemu#12 0x00005555556414e1 in access_with_adjusted_size (addr=addr@entry=80, value=value@entry=0x7fffa93052b0, size=size@entry=4, access_size_min=<optimised out>,
        access_size_max=<optimised out>, access=0x5555556413e0 <memory_region_write_accessor>, mr=0x555556b80388) at /home/alex/lsrc/qemu/qemu.git/memory.c:461
    qemu#13 0x00005555556471b7 in memory_region_dispatch_write (size=4, data=0, addr=80, mr=0x555556b80388) at /home/alex/lsrc/qemu/qemu.git/memory.c:1103
    qemu#14 io_mem_write (mr=mr@entry=0x555556b80388, addr=80, val=<optimised out>, size=size@entry=4) at /home/alex/lsrc/qemu/qemu.git/memory.c:2003
    qemu#15 0x000055555560ad6b in address_space_rw_internal (as=<optimised out>, addr=167788112, buf=buf@entry=0x7fffa9305380 "", len=4, is_write=is_write@entry=true, unlocked=<optimised out>,
        unlocked@entry=false) at /home/alex/lsrc/qemu/qemu.git/exec.c:2318
    qemu#16 0x000055555560aea8 in address_space_rw (is_write=true, len=<optimised out>, buf=0x7fffa9305380 "", addr=<optimised out>, as=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/exec.c:2392
    qemu#17 address_space_write (len=<optimised out>, buf=0x7fffa9305380 "", addr=<optimised out>, as=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/exec.c:2404
    qemu#18 subpage_write (opaque=<optimised out>, addr=<optimised out>, value=<optimised out>, len=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/exec.c:1963
    qemu#19 0x00005555556414e1 in access_with_adjusted_size (addr=addr@entry=592, value=value@entry=0x7fffa9305420, size=size@entry=4, access_size_min=<optimised out>,
        access_size_max=<optimised out>, access=0x5555556413e0 <memory_region_write_accessor>, mr=0x555556bfca20) at /home/alex/lsrc/qemu/qemu.git/memory.c:461
    qemu#20 0x00005555556471b7 in memory_region_dispatch_write (size=4, data=0, addr=592, mr=0x555556bfca20) at /home/alex/lsrc/qemu/qemu.git/memory.c:1103
    qemu#21 io_mem_write (mr=mr@entry=0x555556bfca20, addr=addr@entry=592, val=val@entry=0, size=size@entry=4) at /home/alex/lsrc/qemu/qemu.git/memory.c:2003
    qemu#22 0x000055555564ce16 in io_writel (retaddr=140736492182797, addr=4027616848, val=0, physaddr=592, env=0x55555649e9b0) at /home/alex/lsrc/qemu/qemu.git/softmmu_template.h:386
    qemu#23 helper_le_stl_mmu (env=0x55555649e9b0, addr=<optimised out>, val=0, mmu_idx=<optimised out>, retaddr=140736492182797) at /home/alex/lsrc/qemu/qemu.git/softmmu_template.h:426
    qemu#24 0x00007fffc49f9d0f in code_gen_buffer ()
    qemu#25 0x00005555556109dc in cpu_tb_exec (tb_ptr=0x7fffc49f9c60 <code_gen_buffer+8371296> "A\213n\374\205\355\017\205\233\001", cpu=0x555556496750)
        at /home/alex/lsrc/qemu/qemu.git/cpu-exec.c:179
    qemu#26 cpu_arm_exec (env=env@entry=0x55555649e9b0) at /home/alex/lsrc/qemu/qemu.git/cpu-exec.c:524
    qemu#27 0x0000555555630f28 in tcg_cpu_exec (env=0x55555649e9b0) at /home/alex/lsrc/qemu/qemu.git/cpus.c:1344
    qemu#28 tcg_exec_all (cpu=0x555556496750) at /home/alex/lsrc/qemu/qemu.git/cpus.c:1392
    qemu#29 qemu_tcg_cpu_thread_fn (arg=0x555556496750) at /home/alex/lsrc/qemu/qemu.git/cpus.c:1037
    qemu#30 0x00007ffff1534182 in start_thread (arg=0x7fffa9306700) at pthread_create.c:312
    qemu#31 0x00007ffff126147d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

The fix in this patch makes the global/iothread mutex recursive.
However, as condition variables are involved, we deal with this in our
own code by tracking the number of locks held by the thread.

This seems neater than having to do a trylock each time.

Tested-by: Alex Bennée <alex.bennee@linaro.org>

---
v2
  - don't actually use pthread recursion mechanism
  - convert rest of qemu_mutex_lock(&global_global_mutex) to iothread
  - emulate recursion with thread local tracking
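
The "thread local tracking" mentioned in the v2 notes can be sketched in isolation like this (illustrative names, not the actual cpus.c change): only the first acquisition per thread takes the real mutex, and nested acquisitions just bump a per-thread depth counter.

#include <pthread.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
static __thread unsigned big_lock_depth;   /* per-thread recursion count */

void big_lock_acquire(void)
{
    if (big_lock_depth++ == 0) {
        pthread_mutex_lock(&big_lock);     /* outermost acquisition only */
    }
}

void big_lock_release(void)
{
    if (--big_lock_depth == 0) {
        pthread_mutex_unlock(&big_lock);   /* released when depth drops to 0 */
    }
}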
ehabkost added a commit to ehabkost/qemu that referenced this pull request Jul 22, 2015
This fixes the following crash, introduced by commit
49d2e64:

  $ gdb --args qemu-system-x86_64 -machine pc,mem-merge=off -object memory-backend-ram,id=ram-node0,size=1024
  [...]
  Program received signal SIGABRT, Aborted.
  (gdb) bt
  #0  0x00007ffff253b8c7 in raise () at /lib64/libc.so.6
  qemu#1  0x00007ffff253d52a in abort () at /lib64/libc.so.6
  qemu#2  0x00007ffff253446d in __assert_fail_base () at /lib64/libc.so.6
  qemu#3  0x00007ffff2534522 in  () at /lib64/libc.so.6
  qemu#4  0x00005555558bb80a in qemu_opt_get_bool_helper (opts=0x55555621b650, name=name@entry=0x5555558ec922 "mem-merge", defval=defval@entry=true, del=del@entry=false) at qemu/util/qemu-option.c:388
  qemu#5  0x00005555558bbb5a in qemu_opt_get_bool (opts=<optimized out>, name=name@entry=0x5555558ec922 "mem-merge", defval=defval@entry=true) at qemu/util/qemu-option.c:398
  qemu#6  0x0000555555720a24 in host_memory_backend_init (obj=0x5555562ac970) at qemu/backends/hostmem.c:226

Instead of using qemu_opt_get_bool(), which hasn't worked with
qemu_machine_opts for a long time, we can use the corresponding
MachineState fields.

Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
huth pushed a commit to huth/qemu that referenced this pull request Feb 4, 2020
It's not a big deal, but 'check qtest-ppc/ppc64' runs fail if sanitizers are enabled.
The memory leak stack is as follows:

Direct leak of 128 byte(s) in 4 object(s) allocated from:
    #0 0x7f11756f5970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
    qemu#1 0x7f1174f2549d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
    qemu#2 0x556af05aa7da in mm_fw_cfg_init /mnt/sdb/qemu/tests/libqos/fw_cfg.c:119
    qemu#3 0x556af059f4f5 in read_boot_order_pmac /mnt/sdb/qemu/tests/boot-order-test.c:137
    qemu#4 0x556af059efe2 in test_a_boot_order /mnt/sdb/qemu/tests/boot-order-test.c:47
    qemu#5 0x556af059f2c0 in test_boot_orders /mnt/sdb/qemu/tests/boot-order-test.c:59
    qemu#6 0x556af059f52d in test_pmac_oldworld_boot_order /mnt/sdb/qemu/tests/boot-order-test.c:152
    qemu#7 0x7f1174f46cb9  (/lib64/libglib-2.0.so.0+0x73cb9)
    qemu#8 0x7f1174f46b73  (/lib64/libglib-2.0.so.0+0x73b73)
    qemu#9 0x7f1174f46b73  (/lib64/libglib-2.0.so.0+0x73b73)
    qemu#10 0x7f1174f46f71 in g_test_run_suite (/lib64/libglib-2.0.so.0+0x73f71)
    qemu#11 0x7f1174f46f94 in g_test_run (/lib64/libglib-2.0.so.0+0x73f94)

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Message-Id: <20200203025935.36228-1-pannengyuan@huawei.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
qemu-deploy pushed a commit that referenced this pull request Feb 6, 2020
When removing dup_fd in the monitor_fdset_dup_fd_find_remove() function,
we need to free mon_fdset_fd_dup. ASAN shows the memory leak stack:

Direct leak of 96 byte(s) in 3 object(s) allocated from:
    #0 0xfffd37b033b3 in __interceptor_calloc (/lib64/libasan.so.4+0xd33b3)
    #1 0xfffd375c71cb in g_malloc0 (/lib64/libglib-2.0.so.0+0x571cb)
    #2 0xaaae25bf1c17 in monitor_fdset_dup_fd_add /qemu/monitor/misc.c:1724
    #3 0xaaae265cfd8f in qemu_open /qemu/util/osdep.c:315
    #4 0xaaae264e2b2b in qmp_chardev_open_file_source /qemu/chardev/char-fd.c:122
    #5 0xaaae264e47cf in qmp_chardev_open_file /qemu/chardev/char-file.c:81
    #6 0xaaae264e118b in qemu_char_open /qemu/chardev/char.c:237
    #7 0xaaae264e118b in qemu_chardev_new /qemu/chardev/char.c:964
    #8 0xaaae264e1543 in qemu_chr_new_from_opts /qemu/chardev/char.c:680
    #9 0xaaae25e12e0f in chardev_init_func /qemu/vl.c:2083
    #10 0xaaae26603823 in qemu_opts_foreach /qemu/util/qemu-option.c:1170
    #11 0xaaae258c9787 in main /qemu/vl.c:4089
    #12 0xfffd35b80b9f in __libc_start_main (/lib64/libc.so.6+0x20b9f)
    #13 0xaaae258d7b63  (/qemu/build/aarch64-softmmu/qemu-system-aarch64+0x8b7b63)

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20200115072016.167252-1-kuhn.chenqun@huawei.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
qemu-deploy pushed a commit that referenced this pull request Feb 6, 2020
If we call the QMP 'query-block' command while QEMU is working on
'block-commit', it will cause memory leaks; the leak stack is as
follows:

Indirect leak of 12360 byte(s) in 3 object(s) allocated from:
    #0 0x7f80f0b6d970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
    #1 0x7f80ee86049d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
    #2 0x55ea95b5bb67 in qdict_new /mnt/sdb/qemu-4.2.0-rc0/qobject/qdict.c:29
    #3 0x55ea956cd043 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6427
    #4 0x55ea956cc950 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6399
    #5 0x55ea956cc950 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6399
    #6 0x55ea956cc950 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6399
    #7 0x55ea958818ea in bdrv_block_device_info /mnt/sdb/qemu-4.2.0-rc0/block/qapi.c:56
    #8 0x55ea958879de in bdrv_query_info /mnt/sdb/qemu-4.2.0-rc0/block/qapi.c:392
    #9 0x55ea9588b58f in qmp_query_block /mnt/sdb/qemu-4.2.0-rc0/block/qapi.c:578
    #10 0x55ea95567392 in qmp_marshal_query_block qapi/qapi-commands-block-core.c:95

Indirect leak of 4120 byte(s) in 1 object(s) allocated from:
    #0 0x7f80f0b6d970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
    #1 0x7f80ee86049d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
    #2 0x55ea95b5bb67 in qdict_new /mnt/sdb/qemu-4.2.0-rc0/qobject/qdict.c:29
    #3 0x55ea956cd043 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6427
    #4 0x55ea956cc950 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6399
    #5 0x55ea956cc950 in bdrv_refresh_filename /mnt/sdb/qemu-4.2.0-rc0/block.c:6399
    #6 0x55ea9569f301 in bdrv_backing_attach /mnt/sdb/qemu-4.2.0-rc0/block.c:1064
    #7 0x55ea956a99dd in bdrv_replace_child_noperm /mnt/sdb/qemu-4.2.0-rc0/block.c:2283
    #8 0x55ea956b9b53 in bdrv_replace_node /mnt/sdb/qemu-4.2.0-rc0/block.c:4196
    #9 0x55ea956b9e49 in bdrv_append /mnt/sdb/qemu-4.2.0-rc0/block.c:4236
    #10 0x55ea958c3472 in commit_start /mnt/sdb/qemu-4.2.0-rc0/block/commit.c:306
    #11 0x55ea94b68ab0 in qmp_block_commit /mnt/sdb/qemu-4.2.0-rc0/blockdev.c:3459
    #12 0x55ea9556a7a7 in qmp_marshal_block_commit qapi/qapi-commands-block-core.c:407

Fixes: bb808d5
Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Message-id: 20200116085600.24056-1-pannengyuan@huawei.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
qemu-deploy pushed a commit that referenced this pull request Feb 13, 2020
gtk_widget_get_window() returns NULL if the widget's window is not
realized, and QEMU crashes. Example under gtk 3.22.30 (mate 1.20.1):

  qemu-system-x86_64: Gdk: gdk_window_get_origin: assertion 'GDK_IS_WINDOW (window)' failed
  (gdb) bt
  #0  0x00007ffff496cf70 in gdk_window_get_origin () from /usr/lib64/libgdk-3.so.0
  #1  0x00007ffff49582a0 in gdk_display_get_monitor_at_window () from /usr/lib64/libgdk-3.so.0
  #2  0x0000555555bb73e2 in gd_refresh_rate_millihz (window=0x5555579d6280) at ui/gtk.c:1973
  #3  gd_vc_gfx_init (view_menu=0x5555579f0590, group=0x0, idx=0, con=<optimized out>, vc=0x5555579d4a90, s=0x5555579d49f0) at ui/gtk.c:2048
  #4  gd_create_menu_view (s=0x5555579d49f0) at ui/gtk.c:2149
  #5  gd_create_menus (s=0x5555579d49f0) at ui/gtk.c:2188
  #6  gtk_display_init (ds=<optimized out>, opts=0x55555661ed80 <dpy>) at ui/gtk.c:2256
  #7  0x000055555583d5a0 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4358

Fixes: c4c0092 and 28b58f1 (display/gtk: get proper refreshrate)
Reported-by: Jan Kiszka <jan.kiszka@web.de>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Jan Kiszka <jan.kiszka@web.de>
Message-id: 20200208161048.11311-3-f4bug@amsat.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
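
The guard the patch describes amounts to checking the realized window before asking GDK for a monitor; a hedged sketch using public GTK3 API (the function name is illustrative, not the actual ui/gtk.c hunk):

#include <gtk/gtk.h>

static int widget_refresh_rate_millihz(GtkWidget *widget)
{
    GdkWindow *win = gtk_widget_get_window(widget);

    if (win == NULL) {
        return 0;   /* widget not realized yet, nothing to query */
    }

    GdkDisplay *dpy = gtk_widget_get_display(widget);
    GdkMonitor *monitor = gdk_display_get_monitor_at_window(dpy, win);

    return gdk_monitor_get_refresh_rate(monitor);   /* in millihertz */
}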
qemu-deploy pushed a commit that referenced this pull request Feb 13, 2020
There is a mismatch between g_strsplit and g_free, which causes a memory leak as follows:

[root@localhost]# ./aarch64-softmmu/qemu-system-aarch64 -accel help
Accelerators supported in QEMU binary:
tcg
kvm
=================================================================
==1207900==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 8 byte(s) in 2 object(s) allocated from:
    #0 0xfffd700231cb in __interceptor_malloc (/lib64/libasan.so.4+0xd31cb)
    #1 0xfffd6ec57163 in g_malloc (/lib64/libglib-2.0.so.0+0x57163)
    #2 0xfffd6ec724d7 in g_strndup (/lib64/libglib-2.0.so.0+0x724d7)
    #3 0xfffd6ec73d3f in g_strsplit (/lib64/libglib-2.0.so.0+0x73d3f)
    #4 0xaaab66be5077 in main /mnt/sdc/qemu-master/qemu-4.2.0-rc0/vl.c:3517
    #5 0xfffd6e140b9f in __libc_start_main (/lib64/libc.so.6+0x20b9f)
    #6 0xaaab66bf0f53  (./build/aarch64-softmmu/qemu-system-aarch64+0x8a0f53)

Direct leak of 2 byte(s) in 2 object(s) allocated from:
    #0 0xfffd700231cb in __interceptor_malloc (/lib64/libasan.so.4+0xd31cb)
    #1 0xfffd6ec57163 in g_malloc (/lib64/libglib-2.0.so.0+0x57163)
    #2 0xfffd6ec7243b in g_strdup (/lib64/libglib-2.0.so.0+0x7243b)
    #3 0xfffd6ec73e6f in g_strsplit (/lib64/libglib-2.0.so.0+0x73e6f)
    #4 0xaaab66be5077 in main /mnt/sdc/qemu-master/qemu-4.2.0-rc0/vl.c:3517
    #5 0xfffd6e140b9f in __libc_start_main (/lib64/libc.so.6+0x20b9f)
    #6 0xaaab66bf0f53  (./build/aarch64-softmmu/qemu-system-aarch64+0x8a0f53)

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Message-Id: <20200110091710.53424-2-pannengyuan@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
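
The mismatch is a generic GLib pattern; here is a small illustration (not the actual vl.c hunk). g_strsplit() allocates both the vector and every element, so the result must be released with g_strfreev(); calling g_free() on the vector alone leaks each string, as in the report.

#include <glib.h>

static void list_accels(const char *optarg)
{
    gchar **accel_list = g_strsplit(optarg, ":", 0);

    for (gchar **tmp = accel_list; tmp && *tmp; tmp++) {
        g_print("accelerator: %s\n", *tmp);
    }

    g_strfreev(accel_list);   /* frees the strings and the array */
    /* g_free(accel_list) alone would leak every element */
}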
qemu-deploy pushed a commit that referenced this pull request Feb 14, 2020
It's easy to reproduce as follows:
virsh qemu-monitor-command vm1 --pretty '{"execute": "device-list-properties",
"arguments":{"typename":"exynos4210.uart"}}'

ASAN shows memory leak stack:
  #1 0xfffd896d71cb in g_malloc0 (/lib64/libglib-2.0.so.0+0x571cb)
  #2 0xaaad270beee3 in timer_new_full /qemu/include/qemu/timer.h:530
  #3 0xaaad270beee3 in timer_new /qemu/include/qemu/timer.h:551
  #4 0xaaad270beee3 in timer_new_ns /qemu/include/qemu/timer.h:569
  #5 0xaaad270beee3 in exynos4210_uart_init /qemu/hw/char/exynos4210_uart.c:677
  #6 0xaaad275c8f4f in object_initialize_with_type /qemu/qom/object.c:516
  #7 0xaaad275c91bb in object_new_with_type /qemu/qom/object.c:684
  #8 0xaaad2755df2f in qmp_device_list_properties /qemu/qom/qom-qmp-cmds.c:152

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200213025603.149432-1-kuhn.chenqun@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
dgibson pushed a commit to dgibson/qemu that referenced this pull request Feb 17, 2020
The 'fdt' is not cleaned up for either e500 or pnv when we call 'system_reset' on ppc;
this patch fixes it. The leak stacks are as follows:

Direct leak of 4194304 byte(s) in 4 object(s) allocated from:
    #0 0x7fafe37dd970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
    qemu#1 0x7fafe2e3149d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
    qemu#2 0x561876f7f80d in create_device_tree /mnt/sdb/qemu-new/qemu/device_tree.c:40
    qemu#3 0x561876b7ac29 in ppce500_load_device_tree /mnt/sdb/qemu-new/qemu/hw/ppc/e500.c:364
    qemu#4 0x561876b7f437 in ppce500_reset_device_tree /mnt/sdb/qemu-new/qemu/hw/ppc/e500.c:617
    qemu#5 0x56187718b1ae in qemu_devices_reset /mnt/sdb/qemu-new/qemu/hw/core/reset.c:69
    qemu#6 0x561876f6938d in qemu_system_reset /mnt/sdb/qemu-new/qemu/vl.c:1412
    qemu#7 0x561876f6a25b in main_loop_should_exit /mnt/sdb/qemu-new/qemu/vl.c:1645
    qemu#8 0x561876f6a398 in main_loop /mnt/sdb/qemu-new/qemu/vl.c:1679
    qemu#9 0x561876f7da8e in main /mnt/sdb/qemu-new/qemu/vl.c:4438
    qemu#10 0x7fafde16b812 in __libc_start_main ../csu/libc-start.c:308
    qemu#11 0x5618765c055d in _start (/mnt/sdb/qemu-new/qemu/build/ppc64-softmmu/qemu-system-ppc64+0x2b1555d)

Direct leak of 1048576 byte(s) in 1 object(s) allocated from:
    #0 0x7fc0a6f1b970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
    qemu#1 0x7fc0a656f49d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
    qemu#2 0x55eb05acd2ca in pnv_dt_create /mnt/sdb/qemu-new/qemu/hw/ppc/pnv.c:507
    qemu#3 0x55eb05ace5bf in pnv_reset /mnt/sdb/qemu-new/qemu/hw/ppc/pnv.c:578
    qemu#4 0x55eb05f2f395 in qemu_system_reset /mnt/sdb/qemu-new/qemu/vl.c:1410
    qemu#5 0x55eb05f43850 in main /mnt/sdb/qemu-new/qemu/vl.c:4403
    qemu#6 0x7fc0a18a9812 in __libc_start_main ../csu/libc-start.c:308
    qemu#7 0x55eb0558655d in _start (/mnt/sdb/qemu-new/qemu/build/ppc64-softmmu/qemu-system-ppc64+0x2b1555d)

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Message-Id: <20200214033206.4395-1-pannengyuan@huawei.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
rth7680 pushed a commit to rth7680/qemu that referenced this pull request Feb 18, 2020
Coverity reports:

  *** CID 1419387:  Memory - illegal accesses  (OVERRUN)
  /hw/hppa/dino.c: 267 in dino_chip_read_with_attrs()
  261             val = s->ilr & s->imr & s->icr;
  262             break;
  263         case DINO_TOC_ADDR:
  264             val = s->toc_addr;
  265             break;
  266         case DINO_GMASK ... DINO_TLTIM:
  >>>     CID 1419387:  Memory - illegal accesses  (OVERRUN)
  >>>     Overrunning array "s->reg800" of 12 4-byte elements at element index 12 (byte offset 48) using index "(addr - 2048UL) / 4UL" (which evaluates to 12).
  267             val = s->reg800[(addr - DINO_GMASK) / 4];
  268             if (addr == DINO_PAMR) {
  269                 val &= ~0x01;  /* LSB is hardwired to 0 */
  270             }
  271             if (addr == DINO_MLTIM) {
  272                 val &= ~0x07;  /* 3 LSB are hardwired to 0 */

  *** CID 1419393:  Memory - corruptions  (OVERRUN)
  /hw/hppa/dino.c: 363 in dino_chip_write_with_attrs()
  357             /* These registers are read-only.  */
  358             break;
  359
  360         case DINO_GMASK ... DINO_TLTIM:
  361             i = (addr - DINO_GMASK) / 4;
  362             val &= reg800_keep_bits[i];
  >>>     CID 1419393:  Memory - corruptions  (OVERRUN)
  >>>     Overrunning array "s->reg800" of 12 4-byte elements at element index 12 (byte offset 48) using index "i" (which evaluates to 12).
  363             s->reg800[i] = val;
  364             break;
  365
  366         default:
  367             /* Controlled by dino_chip_mem_valid above.  */
  368             g_assert_not_reached();

  *** CID 1419394:  Memory - illegal accesses  (OVERRUN)
  /hw/hppa/dino.c: 362 in dino_chip_write_with_attrs()
  356         case DINO_IRR1:
  357             /* These registers are read-only.  */
  358             break;
  359
  360         case DINO_GMASK ... DINO_TLTIM:
  361             i = (addr - DINO_GMASK) / 4;
  >>>     CID 1419394:  Memory - illegal accesses  (OVERRUN)
  >>>     Overrunning array "reg800_keep_bits" of 12 4-byte elements at element index 12 (byte offset 48) using index "i" (which evaluates to 12).
  362             val &= reg800_keep_bits[i];
  363             s->reg800[i] = val;
  364             break;
  365
  366         default:
  367             /* Controlled by dino_chip_mem_valid above.  */

Indeed the array should contain 13 entries, the undocumented
register 0x82c is missing. Fix by increasing the array size
and adding the missing register.

CID 1419387 can be verified with:

  $ echo x 0xfff80830 | hppa-softmmu/qemu-system-hppa -S -monitor stdio -display none
  QEMU 4.2.50 monitor - type 'help' for more information
  (qemu) x 0xfff80830
  qemu/hw/hppa/dino.c:267:15: runtime error: index 12 out of bounds for type 'uint32_t [12]'
  SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /home/phil/source/qemu/hw/hppa/dino.c:267:15 in
  00000000fff80830: 0x00000000

and CID 1419393/1419394 with:

  $ echo writeb 0xfff80830 0x69 \
    | hppa-softmmu/qemu-system-hppa -S -accel qtest -qtest stdio -display none
  [I 1581634452.654113] OPENED
  [R +4.105415] writeb 0xfff80830 0x69
  qemu/hw/hppa/dino.c:362:16: runtime error: index 12 out of bounds for type 'const uint32_t [12]'
  SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior qemu/hw/hppa/dino.c:362:16 in
  =================================================================
  ==29607==ERROR: AddressSanitizer: global-buffer-overflow on address 0x5577dae32f30 at pc 0x5577d93f2463 bp 0x7ffd97ea11b0 sp 0x7ffd97ea11a8
  READ of size 4 at 0x5577dae32f30 thread T0
      #0 0x5577d93f2462 in dino_chip_write_with_attrs qemu/hw/hppa/dino.c:362:16
      #1 0x5577d9025664 in memory_region_write_with_attrs_accessor qemu/memory.c:503:12
      #2 0x5577d9024920 in access_with_adjusted_size qemu/memory.c:539:18
      #3 0x5577d9023608 in memory_region_dispatch_write qemu/memory.c:1482:13
      qemu#4 0x5577d8e3177a in flatview_write_continue qemu/exec.c:3166:23
      qemu#5 0x5577d8e20357 in flatview_write qemu/exec.c:3206:14
      qemu#6 0x5577d8e1fef4 in address_space_write qemu/exec.c:3296:18
      qemu#7 0x5577d8e20693 in address_space_rw qemu/exec.c:3306:16
      qemu#8 0x5577d9011595 in qtest_process_command qemu/qtest.c:432:13
      qemu#9 0x5577d900d19f in qtest_process_inbuf qemu/qtest.c:705:9
      qemu#10 0x5577d900ca22 in qtest_read qemu/qtest.c:717:5
      qemu#11 0x5577da8c4254 in qemu_chr_be_write_impl qemu/chardev/char.c:183:9
      qemu#12 0x5577da8c430c in qemu_chr_be_write qemu/chardev/char.c:195:9
      qemu#13 0x5577da8cf587 in fd_chr_read qemu/chardev/char-fd.c:68:9
      qemu#14 0x5577da9836cd in qio_channel_fd_source_dispatch qemu/io/channel-watch.c:84:12
      qemu#15 0x7faf44509ecc in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x4fecc)
      qemu#16 0x5577dab75f96 in glib_pollfds_poll qemu/util/main-loop.c:219:9
      qemu#17 0x5577dab74797 in os_host_main_loop_wait qemu/util/main-loop.c:242:5
      qemu#18 0x5577dab7435a in main_loop_wait qemu/util/main-loop.c:518:11
      qemu#19 0x5577d9514eb3 in main_loop qemu/vl.c:1682:9
      qemu#20 0x5577d950699d in main qemu/vl.c:4450:5
      qemu#21 0x7faf41a87f42 in __libc_start_main (/lib64/libc.so.6+0x23f42)
      qemu#22 0x5577d8cd4d4d in _start (qemu/build/sanitizer/hppa-softmmu/qemu-system-hppa+0x1256d4d)

  0x5577dae32f30 is located 0 bytes to the right of global variable 'reg800_keep_bits' defined in 'qemu/hw/hppa/dino.c:87:23' (0x5577dae32f00) of size 48
  SUMMARY: AddressSanitizer: global-buffer-overflow qemu/hw/hppa/dino.c:362:16 in dino_chip_write_with_attrs
  Shadow bytes around the buggy address:
    0x0aaf7b5be590: 00 f9 f9 f9 f9 f9 f9 f9 00 02 f9 f9 f9 f9 f9 f9
    0x0aaf7b5be5a0: 07 f9 f9 f9 f9 f9 f9 f9 07 f9 f9 f9 f9 f9 f9 f9
    0x0aaf7b5be5b0: 07 f9 f9 f9 f9 f9 f9 f9 00 00 00 00 00 00 00 00
    0x0aaf7b5be5c0: 00 00 00 02 f9 f9 f9 f9 00 00 00 00 00 00 00 00
    0x0aaf7b5be5d0: 00 00 00 00 00 00 00 00 00 00 00 03 f9 f9 f9 f9
  =>0x0aaf7b5be5e0: 00 00 00 00 00 00[f9]f9 f9 f9 f9 f9 00 00 00 00
    0x0aaf7b5be5f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    0x0aaf7b5be600: 00 00 01 f9 f9 f9 f9 f9 00 00 00 00 07 f9 f9 f9
    0x0aaf7b5be610: f9 f9 f9 f9 00 00 00 00 00 00 00 00 00 00 00 00
    0x0aaf7b5be620: 00 00 00 05 f9 f9 f9 f9 00 00 00 00 07 f9 f9 f9
    0x0aaf7b5be630: f9 f9 f9 f9 00 00 f9 f9 f9 f9 f9 f9 07 f9 f9 f9
  Shadow byte legend (one shadow byte represents 8 application bytes):
    Addressable:           00
    Partially addressable: 01 02 03 04 05 06 07
    Heap left redzone:       fa
    Freed heap region:       fd
    Stack left redzone:      f1
    Stack mid redzone:       f2
    Stack right redzone:     f3
    Stack after return:      f5
    Stack use after scope:   f8
    Global redzone:          f9
    Global init order:       f6
    Poisoned by user:        f7
    Container overflow:      fc
    Array cookie:            ac
    Intra object redzone:    bb
    ASan internal:           fe
    Left alloca redzone:     ca
    Right alloca redzone:    cb
    Shadow gap:              cc
  ==29607==ABORTING

Fixes: Coverity CID 1419387 / 1419393 / 1419394 (commit 1809259)
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20200218063355.18577-3-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
dgibson pushed a commit to dgibson/qemu that referenced this pull request Feb 20, 2020
The 'fdt' is not cleaned up for either e500 or pnv when we call 'system_reset' on ppc;
this patch fixes it. The leak stacks are as follows:

Direct leak of 4194304 byte(s) in 4 object(s) allocated from:
    #0 0x7fafe37dd970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
    qemu#1 0x7fafe2e3149d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
    qemu#2 0x561876f7f80d in create_device_tree /mnt/sdb/qemu-new/qemu/device_tree.c:40
    qemu#3 0x561876b7ac29 in ppce500_load_device_tree /mnt/sdb/qemu-new/qemu/hw/ppc/e500.c:364
    qemu#4 0x561876b7f437 in ppce500_reset_device_tree /mnt/sdb/qemu-new/qemu/hw/ppc/e500.c:617
    qemu#5 0x56187718b1ae in qemu_devices_reset /mnt/sdb/qemu-new/qemu/hw/core/reset.c:69
    qemu#6 0x561876f6938d in qemu_system_reset /mnt/sdb/qemu-new/qemu/vl.c:1412
    qemu#7 0x561876f6a25b in main_loop_should_exit /mnt/sdb/qemu-new/qemu/vl.c:1645
    qemu#8 0x561876f6a398 in main_loop /mnt/sdb/qemu-new/qemu/vl.c:1679
    qemu#9 0x561876f7da8e in main /mnt/sdb/qemu-new/qemu/vl.c:4438
    qemu#10 0x7fafde16b812 in __libc_start_main ../csu/libc-start.c:308
    qemu#11 0x5618765c055d in _start (/mnt/sdb/qemu-new/qemu/build/ppc64-softmmu/qemu-system-ppc64+0x2b1555d)

Direct leak of 1048576 byte(s) in 1 object(s) allocated from:
    #0 0x7fc0a6f1b970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
    qemu#1 0x7fc0a656f49d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
    qemu#2 0x55eb05acd2ca in pnv_dt_create /mnt/sdb/qemu-new/qemu/hw/ppc/pnv.c:507
    qemu#3 0x55eb05ace5bf in pnv_reset /mnt/sdb/qemu-new/qemu/hw/ppc/pnv.c:578
    qemu#4 0x55eb05f2f395 in qemu_system_reset /mnt/sdb/qemu-new/qemu/vl.c:1410
    qemu#5 0x55eb05f43850 in main /mnt/sdb/qemu-new/qemu/vl.c:4403
    qemu#6 0x7fc0a18a9812 in __libc_start_main ../csu/libc-start.c:308
    qemu#7 0x55eb0558655d in _start (/mnt/sdb/qemu-new/qemu/build/ppc64-softmmu/qemu-system-ppc64+0x2b1555d)

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Message-Id: <20200214033206.4395-1-pannengyuan@huawei.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
qemu-deploy pushed a commit that referenced this pull request Feb 27, 2020
In the current implementation there will be a memory leak when
nbd_client_connect() returns an error status. Here is an easy way to
reproduce it:

1. run qemu-iotests as follows and check the result with ASAN:
    ./check -raw 143

Following is the ASAN output backtrace:
Direct leak of 40 byte(s) in 1 object(s) allocated from:
    #0 0x7f629688a560 in calloc (/usr/lib64/libasan.so.3+0xc7560)
    #1 0x7f6295e7e015 in g_malloc0  (/usr/lib64/libglib-2.0.so.0+0x50015)
    #2 0x56281dab4642 in qobject_input_start_struct  /mnt/sdb/qemu-4.2.0-rc0/qapi/qobject-input-visitor.c:295
    #3 0x56281dab1a04 in visit_start_struct  /mnt/sdb/qemu-4.2.0-rc0/qapi/qapi-visit-core.c:49
    #4 0x56281dad1827 in visit_type_SocketAddress  qapi/qapi-visit-sockets.c:386
    #5 0x56281da8062f in nbd_config   /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1716
    #6 0x56281da8062f in nbd_process_options /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1829
    #7 0x56281da8062f in nbd_open /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1873

Direct leak of 15 byte(s) in 1 object(s) allocated from:
    #0 0x7f629688a3a0 in malloc (/usr/lib64/libasan.so.3+0xc73a0)
    #1 0x7f6295e7dfbd in g_malloc (/usr/lib64/libglib-2.0.so.0+0x4ffbd)
    #2 0x7f6295e96ace in g_strdup (/usr/lib64/libglib-2.0.so.0+0x68ace)
    #3 0x56281da804ac in nbd_process_options /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1834
    #4 0x56281da804ac in nbd_open /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1873

Indirect leak of 24 byte(s) in 1 object(s) allocated from:
    #0 0x7f629688a3a0 in malloc (/usr/lib64/libasan.so.3+0xc73a0)
    #1 0x7f6295e7dfbd in g_malloc (/usr/lib64/libglib-2.0.so.0+0x4ffbd)
    #2 0x7f6295e96ace in g_strdup (/usr/lib64/libglib-2.0.so.0+0x68ace)
    #3 0x56281dab41a3 in qobject_input_type_str_keyval /mnt/sdb/qemu-4.2.0-rc0/qapi/qobject-input-visitor.c:536
    #4 0x56281dab2ee9 in visit_type_str /mnt/sdb/qemu-4.2.0-rc0/qapi/qapi-visit-core.c:297
    #5 0x56281dad0fa1 in visit_type_UnixSocketAddress_members qapi/qapi-visit-sockets.c:141
    #6 0x56281dad17b6 in visit_type_SocketAddress_members qapi/qapi-visit-sockets.c:366
    #7 0x56281dad186a in visit_type_SocketAddress qapi/qapi-visit-sockets.c:393
    #8 0x56281da8062f in nbd_config /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1716
    #9 0x56281da8062f in nbd_process_options /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1829
    #10 0x56281da8062f in nbd_open /mnt/sdb/qemu-4.2.0-rc0/block/nbd.c:1873

Fixes: 8f071c9
Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Cc: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <1575517528-44312-3-git-send-email-pannengyuan@huawei.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
qemu-deploy pushed a commit that referenced this pull request Feb 27, 2020
Similar to other virtio devices (https://patchwork.kernel.org/patch/11399237/), the virtio queues are not deleted in unrealize, nor in the error path in realize; this patch fixes these memory leaks. The leak stack is as follows:
Direct leak of 57344 byte(s) in 1 object(s) allocated from:
    #0 0x7f15784fb970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
    #1 0x7f157790849d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
    #2 0x55587a1bf859 in virtio_add_queue /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio.c:2333
    #3 0x55587a2071d5 in vuf_device_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/vhost-user-fs.c:212
    #4 0x55587a1ae360 in virtio_device_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio.c:3531
    #5 0x55587a63fb7b in device_set_realized /mnt/sdb/qemu-new/qemu_test/qemu/hw/core/qdev.c:891
    #6 0x55587acf03f5 in property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:2238
    #7 0x55587acfce0d in object_property_set_qobject /mnt/sdb/qemu-new/qemu_test/qemu/qom/qom-qobject.c:26
    #8 0x55587acf5c8c in object_property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:1390
    #9 0x55587a8e22a2 in pci_qdev_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/pci/pci.c:2095
    #10 0x55587a63fb7b in device_set_realized /mnt/sdb/qemu-new/qemu_test/qemu/hw/core/qdev.c:891
    #11 0x55587acf03f5 in property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:2238
    #12 0x55587acfce0d in object_property_set_qobject /mnt/sdb/qemu-new/qemu_test/qemu/qom/qom-qobject.c:26
    #13 0x55587acf5c8c in object_property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:1390
    #14 0x55587a496d65 in qdev_device_add /mnt/sdb/qemu-new/qemu_test/qemu/qdev-monitor.c:679

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20200225075554.10835-2-pannengyuan@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
qemu-deploy pushed a commit that referenced this pull request Feb 27, 2020
Similar to other virtio devices, ctrl_vq is not deleted in virtio_crypto_device_unrealize; this patch fixes it.
This device already maintains vq pointers, so we use the new virtio_delete_queue function directly to do the cleanup.

The leak stack:
Direct leak of 10752 byte(s) in 3 object(s) allocated from:
    #0 0x7f4c024b1970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
    #1 0x7f4c018be49d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
    #2 0x55a2f8017279 in virtio_add_queue /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio.c:2333
    #3 0x55a2f8057035 in virtio_crypto_device_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio-crypto.c:814
    #4 0x55a2f8005d80 in virtio_device_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio.c:3531
    #5 0x55a2f8497d1b in device_set_realized /mnt/sdb/qemu-new/qemu_test/qemu/hw/core/qdev.c:891
    #6 0x55a2f8b48595 in property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:2238
    #7 0x55a2f8b54fad in object_property_set_qobject /mnt/sdb/qemu-new/qemu_test/qemu/qom/qom-qobject.c:26
    #8 0x55a2f8b4de2c in object_property_set_bool /mnt/sdb/qemu-new/qemu_test/qemu/qom/object.c:1390
    #9 0x55a2f80609c9 in virtio_crypto_pci_realize /mnt/sdb/qemu-new/qemu_test/qemu/hw/virtio/virtio-crypto-pci.c:58

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Cc: "Gonglei (Arei)" <arei.gonglei@huawei.com>
Message-Id: <20200225075554.10835-5-pannengyuan@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
qemu-deploy pushed a commit that referenced this pull request Feb 27, 2020
The virtio queues are not deleted in unrealize, nor in the error path in
realize; this patch fixes these memory leaks. The leak stack is as follows:

Direct leak of 114688 byte(s) in 16 object(s) allocated from:
    #0 0x7f24024fdbf0 in calloc (/lib64/libasan.so.3+0xcabf0)
    #1 0x7f2401642015 in g_malloc0 (/lib64/libglib-2.0.so.0+0x50015)
    #2 0x55ad175a6447 in virtio_add_queue /mnt/sdb/qemu/hw/virtio/virtio.c:2327
    #3 0x55ad17570cf9 in vhost_user_blk_device_realize /mnt/sdb/qemu/hw/block/vhost-user-blk.c:419
    #4 0x55ad175a3707 in virtio_device_realize /mnt/sdb/qemu/hw/virtio/virtio.c:3509
    #5 0x55ad176ad0d1 in device_set_realized /mnt/sdb/qemu/hw/core/qdev.c:876
    #6 0x55ad1781ff9d in property_set_bool /mnt/sdb/qemu/qom/object.c:2080
    #7 0x55ad178245ae in object_property_set_qobject /mnt/sdb/qemu/qom/qom-qobject.c:26
    #8 0x55ad17821eb4 in object_property_set_bool /mnt/sdb/qemu/qom/object.c:1338
    #9 0x55ad177aeed7 in virtio_pci_realize /mnt/sdb/qemu/hw/virtio/virtio-pci.c:1801

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20200224041336.30790-2-pannengyuan@huawei.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
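
In code, the rule these virtio unrealize fixes apply looks roughly like this (hypothetical device and guessed field names; only virtio_add_queue()/virtio_delete_queue() are the real QEMU helpers named in the messages): every queue added in realize gets deleted again in unrealize and in the realize error path.

#include "qemu/osdep.h"
#include "hw/virtio/virtio.h"

/* hypothetical device state holding its queue pointer */
typedef struct ExampleDev {
    VirtIODevice parent_obj;
    VirtQueue *ctrl_vq;
} ExampleDev;

static void example_device_unrealize(ExampleDev *d)
{
    /* undo the virtio_add_queue() done in realize, otherwise the
     * queue bookkeeping allocated for the device leaks */
    virtio_delete_queue(d->ctrl_vq);
    d->ctrl_vq = NULL;
}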
qemu-deploy pushed a commit that referenced this pull request Feb 27, 2020
The virtqueue code sets up MemoryRegionCaches to access the virtqueue
guest RAM data structures.  The code currently assumes that
VRingMemoryRegionCaches is initialized before device emulation code
accesses the virtqueue.  An assertion will fail in
vring_get_region_caches() when this is not true.  Device fuzzing found a
case where this assumption is false (see below).

Virtqueue guest RAM addresses can also be changed from a vCPU thread
while an IOThread is accessing the virtqueue.  This breaks the same
assumption but this time the caches could become invalid partway through
the virtqueue code.  The code fetches the caches RCU pointer multiple
times so we will need to validate the pointer every time it is fetched.

Add checks each time we call vring_get_region_caches() and treat invalid
caches as a nop: memory stores are ignored and memory reads return 0.

The fuzz test failure is as follows:

  $ qemu -M pc -device virtio-blk-pci,id=drv0,drive=drive0,addr=4.0 \
         -drive if=none,id=drive0,file=null-co://,format=raw,auto-read-only=off \
         -drive if=none,id=drive1,file=null-co://,file.read-zeroes=on,format=raw \
         -display none \
         -qtest stdio
  endianness
  outl 0xcf8 0x80002020
  outl 0xcfc 0xe0000000
  outl 0xcf8 0x80002004
  outw 0xcfc 0x7
  write 0xe0000000 0x24 0x00ffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab5cffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffabffffffab0000000001
  inb 0x4
  writew 0xe000001c 0x1
  write 0xe0000014 0x1 0x0d

The following error message is produced:

  qemu-system-x86_64: /home/stefanha/qemu/hw/virtio/virtio.c:286: vring_get_region_caches: Assertion `caches != NULL' failed.

The backtrace looks like this:

  #0  0x00007ffff5520625 in raise () at /lib64/libc.so.6
  #1  0x00007ffff55098d9 in abort () at /lib64/libc.so.6
  #2  0x00007ffff55097a9 in _nl_load_domain.cold () at /lib64/libc.so.6
  #3  0x00007ffff5518a66 in annobin_assert.c_end () at /lib64/libc.so.6
  #4  0x00005555559073da in vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:286
  #5  vring_get_region_caches (vq=<optimized out>) at qemu/hw/virtio/virtio.c:283
  #6  0x000055555590818d in vring_used_flags_set_bit (mask=1, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
  #7  virtio_queue_split_set_notification (enable=0, vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:398
  #8  virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:451
  #9  0x0000555555908512 in virtio_queue_set_notification (vq=vq@entry=0x5555575ceea0, enable=enable@entry=0) at qemu/hw/virtio/virtio.c:444
  #10 0x00005555558c697a in virtio_blk_handle_vq (s=0x5555575c57e0, vq=0x5555575ceea0) at qemu/hw/block/virtio-blk.c:775
  #11 0x0000555555907836 in virtio_queue_notify_aio_vq (vq=0x5555575ceea0) at qemu/hw/virtio/virtio.c:2244
  #12 0x0000555555cb5dd7 in aio_dispatch_handlers (ctx=ctx@entry=0x55555671a420) at util/aio-posix.c:429
  #13 0x0000555555cb67a8 in aio_dispatch (ctx=0x55555671a420) at util/aio-posix.c:460
  #14 0x0000555555cb307e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
  #15 0x00007ffff7bbc510 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
  #16 0x0000555555cb5848 in glib_pollfds_poll () at util/main-loop.c:219
  #17 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
  #18 main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
  #19 0x00005555559b20c9 in main_loop () at vl.c:1683
  #20 0x0000555555838115 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4441

Reported-by: Alexander Bulekov <alxndr@bu.edu>
Cc: Michael Tsirkin <mst@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20200207104619.164892-1-stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
qemu-deploy pushed a commit that referenced this pull request Feb 28, 2020
'list' is never freed at the end of dump_vmstate_json_to_file(). The function is only called once, but freeing the list keeps the code clean.

Fix the following leak (a sketch of the fix follows the leak stacks below):
Direct leak of 16 byte(s) in 1 object(s) allocated from:
    #0 0x7fb946abd768 in __interceptor_malloc (/lib64/libasan.so.5+0xef768)
    #1 0x7fb945eca445 in g_malloc (/lib64/libglib-2.0.so.0+0x52445)
    #2 0x7fb945ee2066 in g_slice_alloc (/lib64/libglib-2.0.so.0+0x6a066)
    #3 0x7fb945ee3139 in g_slist_prepend (/lib64/libglib-2.0.so.0+0x6b139)
    #4 0x5585db591581 in object_class_get_list_tramp /mnt/sdb/qemu-new/qemu/qom/object.c:1084
    #5 0x5585db590f66 in object_class_foreach_tramp /mnt/sdb/qemu-new/qemu/qom/object.c:1028
    #6 0x7fb945eb35f7 in g_hash_table_foreach (/lib64/libglib-2.0.so.0+0x3b5f7)
    #7 0x5585db59110c in object_class_foreach /mnt/sdb/qemu-new/qemu/qom/object.c:1038
    #8 0x5585db5916b6 in object_class_get_list /mnt/sdb/qemu-new/qemu/qom/object.c:1092
    #9 0x5585db335ca0 in dump_vmstate_json_to_file /mnt/sdb/qemu-new/qemu/migration/savevm.c:638
    #10 0x5585daa5bcbf in main /mnt/sdb/qemu-new/qemu/vl.c:4420
    #11 0x7fb941204812 in __libc_start_main ../csu/libc-start.c:308
    #12 0x5585da29420d in _start (/mnt/sdb/qemu-new/qemu/build/x86_64-softmmu/qemu-system-x86_64+0x27f020d)

Indirect leak of 7472 byte(s) in 467 object(s) allocated from:
    #0 0x7fb946abd768 in __interceptor_malloc (/lib64/libasan.so.5+0xef768)
    #1 0x7fb945eca445 in g_malloc (/lib64/libglib-2.0.so.0+0x52445)
    #2 0x7fb945ee2066 in g_slice_alloc (/lib64/libglib-2.0.so.0+0x6a066)
    #3 0x7fb945ee3139 in g_slist_prepend (/lib64/libglib-2.0.so.0+0x6b139)
    #4 0x5585db591581 in object_class_get_list_tramp /mnt/sdb/qemu-new/qemu/qom/object.c:1084
    #5 0x5585db590f66 in object_class_foreach_tramp /mnt/sdb/qemu-new/qemu/qom/object.c:1028
    #6 0x7fb945eb35f7 in g_hash_table_foreach (/lib64/libglib-2.0.so.0+0x3b5f7)
    #7 0x5585db59110c in object_class_foreach /mnt/sdb/qemu-new/qemu/qom/object.c:1038
    #8 0x5585db5916b6 in object_class_get_list /mnt/sdb/qemu-new/qemu/qom/object.c:1092
    #9 0x5585db335ca0 in dump_vmstate_json_to_file /mnt/sdb/qemu-new/qemu/migration/savevm.c:638
    #10 0x5585daa5bcbf in main /mnt/sdb/qemu-new/qemu/vl.c:4420
    #11 0x7fb941204812 in __libc_start_main ../csu/libc-start.c:308
    #12 0x5585da29420d in _start (/mnt/sdb/qemu-new/qemu/build/x86_64-softmmu/qemu-system-x86_64+0x27f020d)
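
A minimal sketch of the fix, assuming the list comes from object_class_get_list() as the stacks show; the GSList nodes (the direct and indirect leaks above) are released with g_slist_free() once the dump is written:

    void dump_vmstate_json_to_file(FILE *out_file)
    {
        GSList *list, *elt;

        list = object_class_get_list(TYPE_DEVICE, true);
        for (elt = list; elt; elt = elt->next) {
            /* ... dump the vmstate description of one device class ... */
        }
        g_slist_free(list);     /* free the list nodes themselves */
        fclose(out_file);
    }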

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
qemu-deploy pushed a commit that referenced this pull request Mar 12, 2020
'crypto_opts' is never freed in qcow2_close(); this patch fixes the leak shown in the stack below (a sketch of the fix follows it):

Direct leak of 24 byte(s) in 1 object(s) allocated from:
    #0 0x7f0edd81f970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
    #1 0x7f0edc6d149d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
    #2 0x55d7eaede63d in qobject_input_start_struct /mnt/sdb/qemu-new/qemu_test/qemu/qapi/qobject-input-visitor.c:295
    #3 0x55d7eaed78b8 in visit_start_struct /mnt/sdb/qemu-new/qemu_test/qemu/qapi/qapi-visit-core.c:49
    #4 0x55d7eaf5140b in visit_type_QCryptoBlockOpenOptions qapi/qapi-visit-crypto.c:290
    #5 0x55d7eae43af3 in block_crypto_open_opts_init /mnt/sdb/qemu-new/qemu_test/qemu/block/crypto.c:163
    #6 0x55d7eacd2924 in qcow2_update_options_prepare /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:1148
    #7 0x55d7eacd33f7 in qcow2_update_options /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:1232
    #8 0x55d7eacd9680 in qcow2_do_open /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:1512
    #9 0x55d7eacdc55e in qcow2_open_entry /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:1792
    #10 0x55d7eacdc8fe in qcow2_open /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:1819
    #11 0x55d7eac3742d in bdrv_open_driver /mnt/sdb/qemu-new/qemu_test/qemu/block.c:1317
    #12 0x55d7eac3e990 in bdrv_open_common /mnt/sdb/qemu-new/qemu_test/qemu/block.c:1575
    #13 0x55d7eac4442c in bdrv_open_inherit /mnt/sdb/qemu-new/qemu_test/qemu/block.c:3126
    #14 0x55d7eac45c3f in bdrv_open /mnt/sdb/qemu-new/qemu_test/qemu/block.c:3219
    #15 0x55d7ead8e8a4 in blk_new_open /mnt/sdb/qemu-new/qemu_test/qemu/block/block-backend.c:397
    #16 0x55d7eacde74c in qcow2_co_create /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:3534
    #17 0x55d7eacdfa6d in qcow2_co_create_opts /mnt/sdb/qemu-new/qemu_test/qemu/block/qcow2.c:3668
    #18 0x55d7eac1c678 in bdrv_create_co_entry /mnt/sdb/qemu-new/qemu_test/qemu/block.c:485
    #19 0x55d7eb0024d2 in coroutine_trampoline /mnt/sdb/qemu-new/qemu_test/qemu/util/coroutine-ucontext.c:115
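
A minimal sketch of the fix, assuming crypto_opts is the QAPI-allocated QCryptoBlockOpenOptions stored in BDRVQcow2State (qapi_free_QCryptoBlockOpenOptions() is the generated deallocator for that type):

    static void qcow2_close(BlockDriverState *bs)
    {
        BDRVQcow2State *s = bs->opaque;

        /* ... existing cache/refcount/crypto teardown ... */
        qcrypto_block_free(s->crypto);
        s->crypto = NULL;

        qapi_free_QCryptoBlockOpenOptions(s->crypto_opts);  /* plug the leak */
        s->crypto_opts = NULL;
    }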

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200227012950.12256-2-pannengyuan@huawei.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
olafhering pushed a commit to olafhering/qemu that referenced this pull request Mar 12, 2020
'type' and 'id' are never freed in qmp_object_add(); this patch fixes that (a sketch follows the leak stacks below).

The leak stack:
Direct leak of 84 byte(s) in 6 object(s) allocated from:
    #0 0x7fe2a5ebf768 in __interceptor_malloc (/lib64/libasan.so.5+0xef768)
    #1 0x7fe2a5044445 in g_malloc (/lib64/libglib-2.0.so.0+0x52445)
    #2 0x7fe2a505dd92 in g_strdup (/lib64/libglib-2.0.so.0+0x6bd92)
    #3 0x56344954e692 in qmp_object_add /mnt/sdb/qemu-new/qemu_test/qemu/qom/qom-qmp-cmds.c:258
    #4 0x563449960f5a in do_qmp_dispatch /mnt/sdb/qemu-new/qemu_test/qemu/qapi/qmp-dispatch.c:132
    #5 0x563449960f5a in qmp_dispatch /mnt/sdb/qemu-new/qemu_test/qemu/qapi/qmp-dispatch.c:175
    #6 0x563449498a30 in monitor_qmp_dispatch /mnt/sdb/qemu-new/qemu_test/qemu/monitor/qmp.c:145
    #7 0x56344949a64f in monitor_qmp_bh_dispatcher /mnt/sdb/qemu-new/qemu_test/qemu/monitor/qmp.c:234
    #8 0x563449a92a3a in aio_bh_call /mnt/sdb/qemu-new/qemu_test/qemu/util/async.c:136

Direct leak of 54 byte(s) in 6 object(s) allocated from:
    #0 0x7fe2a5ebf768 in __interceptor_malloc (/lib64/libasan.so.5+0xef768)
    #1 0x7fe2a5044445 in g_malloc (/lib64/libglib-2.0.so.0+0x52445)
    #2 0x7fe2a505dd92 in g_strdup (/lib64/libglib-2.0.so.0+0x6bd92)
    #3 0x56344954e6c4 in qmp_object_add /mnt/sdb/qemu-new/qemu_test/qemu/qom/qom-qmp-cmds.c:267
    #4 0x563449960f5a in do_qmp_dispatch /mnt/sdb/qemu-new/qemu_test/qemu/qapi/qmp-dispatch.c:132
    #5 0x563449960f5a in qmp_dispatch /mnt/sdb/qemu-new/qemu_test/qemu/qapi/qmp-dispatch.c:175
    #6 0x563449498a30 in monitor_qmp_dispatch /mnt/sdb/qemu-new/qemu_test/qemu/monitor/qmp.c:145
    #7 0x56344949a64f in monitor_qmp_bh_dispatcher /mnt/sdb/qemu-new/qemu_test/qemu/monitor/qmp.c:234
    #8 0x563449a92a3a in aio_bh_call /mnt/sdb/qemu-new/qemu_test/qemu/util/async.c:136
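
A minimal sketch of the fix: the g_strdup()'d strings are released on every return path of qmp_object_add() (signature and control flow are simplified here):

    void qmp_object_add(QDict *qdict, QObject **ret_data, Error **errp)
    {
        char *type = g_strdup(qdict_get_try_str(qdict, "qom-type"));
        char *id = g_strdup(qdict_get_try_str(qdict, "id"));

        /* ... build the props dict and call user_creatable_add_type() ... */

        g_free(id);     /* previously leaked */
        g_free(type);   /* previously leaked */
    }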

Fixes: 5f07c4d
Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Message-Id: <20200310064640.5059-1-pannengyuan@huawei.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
qemu-deploy pushed a commit that referenced this pull request Mar 24, 2020
A use-after-free is possible: bdrv_unref_child() leaves
bs->backing freed but not NULL. bdrv_attach_child() may then run a nested
polling loop due to drain, at which point the freed pointer can be accessed.

I produced the following crash on iotest 30 with modified code. It
does not reproduce on master, but still seems possible (a sketch of the
guard appears after the backtrace):

    #0  __strcmp_avx2 () at /lib64/libc.so.6
    #1  bdrv_backing_overridden (bs=0x55c9d3cc2060) at block.c:6350
    #2  bdrv_refresh_filename (bs=0x55c9d3cc2060) at block.c:6404
    #3  bdrv_backing_attach (c=0x55c9d48e5520) at block.c:1063
    #4  bdrv_replace_child_noperm
        (child=child@entry=0x55c9d48e5520,
        new_bs=new_bs@entry=0x55c9d3cc2060) at block.c:2290
    #5  bdrv_replace_child
        (child=child@entry=0x55c9d48e5520,
        new_bs=new_bs@entry=0x55c9d3cc2060) at block.c:2320
    #6  bdrv_root_attach_child
        (child_bs=child_bs@entry=0x55c9d3cc2060,
        child_name=child_name@entry=0x55c9d241d478 "backing",
        child_role=child_role@entry=0x55c9d26ecee0 <child_backing>,
        ctx=<optimized out>, perm=<optimized out>, shared_perm=21,
        opaque=0x55c9d3c5a3d0, errp=0x7ffd117108e0) at block.c:2424
    #7  bdrv_attach_child
        (parent_bs=parent_bs@entry=0x55c9d3c5a3d0,
        child_bs=child_bs@entry=0x55c9d3cc2060,
        child_name=child_name@entry=0x55c9d241d478 "backing",
        child_role=child_role@entry=0x55c9d26ecee0 <child_backing>,
        errp=errp@entry=0x7ffd117108e0) at block.c:5876
    #8  in bdrv_set_backing_hd
        (bs=bs@entry=0x55c9d3c5a3d0,
        backing_hd=backing_hd@entry=0x55c9d3cc2060,
        errp=errp@entry=0x7ffd117108e0)
        at block.c:2576
    #9  stream_prepare (job=0x55c9d49d84a0) at block/stream.c:150
    #10 job_prepare (job=0x55c9d49d84a0) at job.c:761
    #11 job_txn_apply (txn=<optimized out>, fn=<optimized out>) at
        job.c:145
    #12 job_do_finalize (job=0x55c9d49d84a0) at job.c:778
    #13 job_completed_txn_success (job=0x55c9d49d84a0) at job.c:832
    #14 job_completed (job=0x55c9d49d84a0) at job.c:845
    #15 job_completed (job=0x55c9d49d84a0) at job.c:836
    #16 job_exit (opaque=0x55c9d49d84a0) at job.c:864
    #17 aio_bh_call (bh=0x55c9d471a160) at util/async.c:117
    #18 aio_bh_poll (ctx=ctx@entry=0x55c9d3c46720) at util/async.c:117
    #19 aio_poll (ctx=ctx@entry=0x55c9d3c46720,
        blocking=blocking@entry=true)
        at util/aio-posix.c:728
    #20 bdrv_parent_drained_begin_single (poll=true, c=0x55c9d3d558f0)
        at block/io.c:121
    #21 bdrv_parent_drained_begin_single (c=c@entry=0x55c9d3d558f0,
        poll=poll@entry=true)
        at block/io.c:114
    #22 bdrv_replace_child_noperm
        (child=child@entry=0x55c9d3d558f0,
        new_bs=new_bs@entry=0x55c9d3d27300) at block.c:2258
    #23 bdrv_replace_child
        (child=child@entry=0x55c9d3d558f0,
        new_bs=new_bs@entry=0x55c9d3d27300) at block.c:2320
    #24 bdrv_root_attach_child
        (child_bs=child_bs@entry=0x55c9d3d27300,
        child_name=child_name@entry=0x55c9d241d478 "backing",
        child_role=child_role@entry=0x55c9d26ecee0 <child_backing>,
        ctx=<optimized out>, perm=<optimized out>, shared_perm=21,
        opaque=0x55c9d3cc2060, errp=0x7ffd11710c60) at block.c:2424
    #25 bdrv_attach_child
        (parent_bs=parent_bs@entry=0x55c9d3cc2060,
        child_bs=child_bs@entry=0x55c9d3d27300,
        child_name=child_name@entry=0x55c9d241d478 "backing",
        child_role=child_role@entry=0x55c9d26ecee0 <child_backing>,
        errp=errp@entry=0x7ffd11710c60) at block.c:5876
    #26 bdrv_set_backing_hd
        (bs=bs@entry=0x55c9d3cc2060,
        backing_hd=backing_hd@entry=0x55c9d3d27300,
        errp=errp@entry=0x7ffd11710c60)
        at block.c:2576
    #27 stream_prepare (job=0x55c9d495ead0) at block/stream.c:150
    ...
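
A minimal sketch of the guard implied by the description above: clear the parent's pointer immediately after bdrv_unref_child(), before anything that can poll (an illustration of the idea, not the literal upstream patch):

    /* Sketch of bdrv_set_backing_hd(); the permission handling and error
     * paths of the real function are omitted. */
    void bdrv_set_backing_hd(BlockDriverState *bs, BlockDriverState *backing_hd,
                             Error **errp)
    {
        if (bs->backing) {
            bdrv_unref_child(bs, bs->backing);
            bs->backing = NULL;     /* never leave a dangling pointer here */
        }
        if (backing_hd) {
            /* bdrv_attach_child() can run a nested aio_poll() via drain;
             * bs->backing must already be NULL at this point. */
            bs->backing = bdrv_attach_child(bs, backing_hd, "backing",
                                            &child_backing, errp);
        }
    }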

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200316060631.30052-2-vsementsov@virtuozzo.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
philmd added a commit to philmd/qemu that referenced this pull request Mar 31, 2020
Similarly to commit 158b659 for the APB PnP registers, guests
can crash QEMU when writing to the AHB PnP registers:

  $ echo 'writeb 0xfffff042 69' | qemu-system-sparc -M leon3_generic -S -bios /etc/magic -qtest stdio
  [I 1571938309.932255] OPENED
  [R +0.063474] writeb 0xfffff042 69
  Segmentation fault (core dumped)

  (gdb) bt
  #0  0x0000000000000000 in  ()
  #1  0x0000562999110df4 in memory_region_write_with_attrs_accessor
      (mr=mr@entry=0x56299aa28ea0, addr=66, value=value@entry=0x7fff6abe13b8, size=size@entry=1, shift=<optimized out>, mask=mask@entry=255, attrs=...) at memory.c:503
  #2  0x000056299911095e in access_with_adjusted_size
      (addr=addr@entry=66, value=value@entry=0x7fff6abe13b8, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=access_fn@entry=
      0x562999110d70 <memory_region_write_with_attrs_accessor>, mr=0x56299aa28ea0, attrs=...) at memory.c:539
  #3  0x0000562999114fba in memory_region_dispatch_write (mr=mr@entry=0x56299aa28ea0, addr=66, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...) at memory.c:1482
  #4  0x00005629990c0860 in flatview_write_continue
      (fv=fv@entry=0x56299aa7d8a0, addr=addr@entry=4294963266, attrs=..., ptr=ptr@entry=0x7fff6abe1540, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x56299aa28ea0)
      at include/qemu/host-utils.h:164
  #5  0x00005629990c0a76 in flatview_write (fv=0x56299aa7d8a0, addr=4294963266, attrs=..., buf=0x7fff6abe1540, len=1) at exec.c:3165
  #6  0x00005629990c4c1b in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., attrs@entry=..., buf=buf@entry=0x7fff6abe1540, len=len@entry=1) at exec.c:3256
  #7  0x000056299910f807 in qtest_process_command (chr=chr@entry=0x5629995ee920 <qtest_chr>, words=words@entry=0x56299acfcfa0) at qtest.c:437

Instead of crashing, log the access as unimplemented.
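
A minimal sketch of the handler change described above: instead of dispatching through an absent write callback, the access is logged as unimplemented (the handler name follows the AHB PnP region in the backtrace; the register file layout is omitted):

    static void grlib_ahb_pnp_write(void *opaque, hwaddr addr,
                                    uint64_t value, unsigned size)
    {
        /* The PnP area is read-only for guests: just log the attempt. */
        qemu_log_mask(LOG_UNIMP,
                      "%s: unimplemented write 0x%" PRIx64
                      " to offset 0x%" HWADDR_PRIx "\n",
                      __func__, value, addr);
    }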

Signed-off-by: Laurent Vivier <laurent@vivier.eu>
qemu-deploy pushed a commit that referenced this pull request Mar 31, 2020
The tulip network card emulation has an OOB issue in
'tulip_copy_tx_buffers' when the guest provides a malformed descriptor.
This test triggers an ASAN heap-overflow crash. To trigger the
issue we can construct the data as follows:

1. Construct a 'tulip_descriptor' with its control field set to
'0x7ff | 0x7ff << 11', so that 'tulip_copy_tx_buffers' computes both
'len1' and 'len2' as 0x7ff (2047). 'len1 + len2' then overflows the
'TULIPState' 'tx_frame' field. The descriptor's 'buf_addr1' and
'buf_addr2' should be set to a guest address.

2. Write this descriptor to the tulip device's CSR4 register. This sets
the 'TULIPState' 'current_tx_desc' field.

3. Write 'CSR6_ST' to the tulip device's CSR6 register. This triggers
'tulip_xmit_list_update', which eventually calls 'tulip_copy_tx_buffers'.

The following shows the backtrace of the crash (a sketch of the bounds check appears after it):

==31781==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x628000007cd0 at pc 0x7fe03c5a077a bp 0x7fff05b46770 sp 0x7fff05b45f18
WRITE of size 2047 at 0x628000007cd0 thread T0
    #0 0x7fe03c5a0779  (/usr/lib/x86_64-linux-gnu/libasan.so.4+0x79779)
    #1 0x5575fb6daa6a in flatview_read_continue /home/test/qemu/exec.c:3194
    #2 0x5575fb6daccb in flatview_read /home/test/qemu/exec.c:3227
    #3 0x5575fb6dae66 in address_space_read_full /home/test/qemu/exec.c:3240
    #4 0x5575fb6db0cb in address_space_rw /home/test/qemu/exec.c:3268
    #5 0x5575fbdfd460 in dma_memory_rw_relaxed /home/test/qemu/include/sysemu/dma.h:87
    #6 0x5575fbdfd4b5 in dma_memory_rw /home/test/qemu/include/sysemu/dma.h:110
    #7 0x5575fbdfd866 in pci_dma_rw /home/test/qemu/include/hw/pci/pci.h:787
    #8 0x5575fbdfd8a3 in pci_dma_read /home/test/qemu/include/hw/pci/pci.h:794
    #9 0x5575fbe02761 in tulip_copy_tx_buffers hw/net/tulip.c:585
    #10 0x5575fbe0366b in tulip_xmit_list_update hw/net/tulip.c:678
    #11 0x5575fbe04073 in tulip_write hw/net/tulip.c:783
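
A minimal sketch of the kind of bounds check implied above, keeping the copied data within 'tx_frame' (field and helper names follow the backtrace and may differ slightly from the actual patch):

    /* Sketch: refuse descriptor fragments that would overflow tx_frame. */
    static void tulip_copy_tx_buffers(TULIPState *s, struct tulip_descriptor *desc)
    {
        int len1 = desc->control & 0x7ff;          /* 11-bit buffer 1 length */
        int len2 = (desc->control >> 11) & 0x7ff;  /* 11-bit buffer 2 length */

        if (len1 && s->tx_frame_len + len1 <= sizeof(s->tx_frame)) {
            pci_dma_read(&s->dev, desc->buf_addr1,
                         s->tx_frame + s->tx_frame_len, len1);
            s->tx_frame_len += len1;
        }
        if (len2 && s->tx_frame_len + len2 <= sizeof(s->tx_frame)) {
            pci_dma_read(&s->dev, desc->buf_addr2,
                         s->tx_frame + s->tx_frame_len, len2);
            s->tx_frame_len += len2;
        }
    }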

Signed-off-by: Li Qiang <liq3ea@163.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
qemu-deploy pushed a commit that referenced this pull request Mar 31, 2020
We neglect to free port->bh on the error paths. Fix that (a sketch of the fix follows the leak stack below).
Reproducer:
    {'execute': 'device_add', 'arguments': {'id': 'virtio_serial_pci0', 'driver': 'virtio-serial-pci', 'bus': 'pci.0', 'addr': '0x5'}, 'id': 'yVkZcGgV'}
    {'execute': 'device_add', 'arguments': {'id': 'port1', 'driver': 'virtserialport', 'name': 'port1', 'chardev': 'channel1', 'bus': 'virtio_serial_pci0.0', 'nr': 1}, 'id': '3dXdUgJA'}
    {'execute': 'device_add', 'arguments': {'id': 'port2', 'driver': 'virtserialport', 'name': 'port2', 'chardev': 'channel2', 'bus': 'virtio_serial_pci0.0', 'nr': 1}, 'id': 'qLzcCkob'}
    {'execute': 'device_add', 'arguments': {'id': 'port2', 'driver': 'virtserialport', 'name': 'port2', 'chardev': 'channel2', 'bus': 'virtio_serial_pci0.0', 'nr': 2}, 'id': 'qLzcCkob'}

The leak stack:
Direct leak of 40 byte(s) in 1 object(s) allocated from:
    #0 0x7f04a8008ae8 in __interceptor_malloc (/lib64/libasan.so.5+0xefae8)
    #1 0x7f04a73cf1d5 in g_malloc (/lib64/libglib-2.0.so.0+0x531d5)
    #2 0x56273eaee484 in aio_bh_new /mnt/sdb/backup/qemu/util/async.c:125
    #3 0x56273eafe9a8 in qemu_bh_new /mnt/sdb/backup/qemu/util/main-loop.c:532
    #4 0x56273d52e62e in virtser_port_device_realize /mnt/sdb/backup/qemu/hw/char/virtio-serial-bus.c:946
    #5 0x56273dcc5040 in device_set_realized /mnt/sdb/backup/qemu/hw/core/qdev.c:891
    #6 0x56273e5ebbce in property_set_bool /mnt/sdb/backup/qemu/qom/object.c:2238
    #7 0x56273e5e5a9c in object_property_set /mnt/sdb/backup/qemu/qom/object.c:1324
    #8 0x56273e5ef5f8 in object_property_set_qobject /mnt/sdb/backup/qemu/qom/qom-qobject.c:26
    #9 0x56273e5e5e6a in object_property_set_bool /mnt/sdb/backup/qemu/qom/object.c:1390
    #10 0x56273daa40de in qdev_device_add /mnt/sdb/backup/qemu/qdev-monitor.c:680
    #11 0x56273daa53e9 in qmp_device_add /mnt/sdb/backup/qemu/qdev-monitor.c:805
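
A minimal sketch of the fix inside virtser_port_device_realize(): the bottom half is deleted again on the error paths ('flush_bh' stands in for the real callback name):

    static void virtser_port_device_realize(DeviceState *dev, Error **errp)
    {
        VirtIOSerialPort *port = VIRTIO_SERIAL_PORT(dev);
        Error *err = NULL;

        port->bh = qemu_bh_new(flush_bh, port);   /* flush_bh: placeholder name */

        /* ... later checks (duplicate nr, missing chardev, ...) ... */
        if (err) {
            qemu_bh_delete(port->bh);   /* previously leaked on this path */
            port->bh = NULL;
            error_propagate(errp, err);
            return;
        }
    }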

Fixes: 199646d
Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Amit Shah <amit@kernel.org>
Message-Id: <20200309021738.30072-1-pannengyuan@huawei.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
qemu-deploy pushed a commit that referenced this pull request Mar 31, 2020
virtio_vqs is not freed on the error path in realize(). Fix that (a sketch follows the asan stack below).

The asan stack:
Direct leak of 14336 byte(s) in 1 object(s) allocated from:
    #0 0x7f58b93fd970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
    #1 0x7f58b858249d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
    #2 0x5562cc627f49 in virtio_add_queue /mnt/sdb/qemu/hw/virtio/virtio.c:2413
    #3 0x5562cc4b524a in virtio_blk_device_realize /mnt/sdb/qemu/hw/block/virtio-blk.c:1202
    #4 0x5562cc613050 in virtio_device_realize /mnt/sdb/qemu/hw/virtio/virtio.c:3615
    #5 0x5562ccb7a568 in device_set_realized /mnt/sdb/qemu/hw/core/qdev.c:891
    #6 0x5562cd39cd45 in property_set_bool /mnt/sdb/qemu/qom/object.c:2238
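
A minimal sketch of the fix in virtio_blk_device_realize(): queues added with virtio_add_queue() are deleted again and the device is cleaned up when a later step fails (the failing step is only indicated by a placeholder condition):

    static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
    {
        VirtIODevice *vdev = VIRTIO_DEVICE(dev);
        VirtIOBlock *s = VIRTIO_BLK(dev);
        unsigned i;

        for (i = 0; i < s->conf.num_queues; i++) {
            virtio_add_queue(vdev, s->conf.queue_size, virtio_blk_handle_output);
        }

        if (0 /* placeholder: some later initialization step failed */) {
            for (i = 0; i < s->conf.num_queues; i++) {
                virtio_del_queue(vdev, i);
            }
            virtio_cleanup(vdev);   /* releases vdev->vq, previously leaked */
            return;
        }
    }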

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Message-Id: <20200328005705.29898-2-pannengyuan@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
qemu-deploy pushed a commit that referenced this pull request Apr 3, 2020
Since commit 8c6b035 ("util/async:
make bh_aio_poll() O(1)"), migration-test reveals a leak:

QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64
tests/qtest/migration-test  -p /x86_64/migration/postcopy/recovery
tests/qtest/libqtest.c:140: kill_qemu() tried to terminate QEMU
process but encountered exit status 1 (expected 0)

=================================================================
==2082571==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 40 byte(s) in 1 object(s) allocated from:
    #0 0x7f25971dfc58 in __interceptor_malloc (/lib64/libasan.so.5+0x10dc58)
    #1 0x7f2596d08358 in g_malloc (/lib64/libglib-2.0.so.0+0x57358)
    #2 0x560970d006f8 in qemu_bh_new /home/elmarco/src/qemu/util/main-loop.c:532
    #3 0x5609704afa02 in migrate_fd_connect
/home/elmarco/src/qemu/migration/migration.c:3407
    #4 0x5609704b6b6f in migration_channel_connect
/home/elmarco/src/qemu/migration/channel.c:92
    #5 0x5609704b2bfb in socket_outgoing_migration
/home/elmarco/src/qemu/migration/socket.c:108
    #6 0x560970b9bd6c in qio_task_complete /home/elmarco/src/qemu/io/task.c:196
    #7 0x560970b9aa97 in qio_task_thread_result
/home/elmarco/src/qemu/io/task.c:111
    #8 0x7f2596cfee3a  (/lib64/libglib-2.0.so.0+0x4de3a)
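
The message does not quote the fix itself; for orientation, a QEMUBH allocated with qemu_bh_new() has to be paired with qemu_bh_delete(), roughly as below (purely illustrative, not the upstream patch):

    /* Illustrative pairing of bottom-half allocation and release. */
    QEMUBH *bh = qemu_bh_new(cleanup_cb, opaque);   /* cleanup_cb, opaque: placeholders */
    qemu_bh_schedule(bh);
    /* ... after the bottom half has run and is no longer needed ... */
    qemu_bh_delete(bh);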

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20200325184723.2029630-2-marcandre.lureau@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
qemu-deploy pushed a commit that referenced this pull request Apr 3, 2020
Direct leak of 4120 byte(s) in 1 object(s) allocated from:
    #0 0x7fa114931887 in __interceptor_calloc (/lib64/libasan.so.6+0xb0887)
    #1 0x7fa1144ad8f0 in g_malloc0 (/lib64/libglib-2.0.so.0+0x588f0)
    #2 0x561e3c9c8897 in qmp_object_add /home/elmarco/src/qemu/qom/qom-qmp-cmds.c:291
    #3 0x561e3cf48736 in qmp_dispatch /home/elmarco/src/qemu/qapi/qmp-dispatch.c:155
    #4 0x561e3c8efb36 in monitor_qmp_dispatch /home/elmarco/src/qemu/monitor/qmp.c:145
    #5 0x561e3c8f09ed in monitor_qmp_bh_dispatcher /home/elmarco/src/qemu/monitor/qmp.c:234
    #6 0x561e3d08c993 in aio_bh_call /home/elmarco/src/qemu/util/async.c:136
    #7 0x561e3d08d0a5 in aio_bh_poll /home/elmarco/src/qemu/util/async.c:164
    #8 0x561e3d0a535a in aio_dispatch /home/elmarco/src/qemu/util/aio-posix.c:380
    #9 0x561e3d08e3ca in aio_ctx_dispatch /home/elmarco/src/qemu/util/async.c:298
    #10 0x7fa1144a776e in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x5276e)

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20200325184723.2029630-3-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
@repo-lockdown

repo-lockdown bot commented Apr 8, 2020

Thank you for your interest in the QEMU project.

This repository is a read-only mirror of the project's master
repositories hosted on https://git.qemu.org/git/qemu.git.
The project does not process merge requests filed on GitHub.

QEMU welcomes contributions of code (either fixing bugs or adding new
functionality). However, we get a lot of patches, and so we have some
guidelines about contributing on the project website:
https://www.qemu.org/contribute/

@repo-lockdown repo-lockdown bot locked and limited conversation to collaborators Apr 8, 2020