qemu crashes with signal 6 when running qemu-iotests 194 with "vhdx" format #22

Closed
nasastry opened this issue Oct 12, 2017 · 2 comments

nasastry commented Oct 12, 2017

Mirrored with LTC bug https://bugzilla.linux.ibm.com/show_bug.cgi?id=160031

  1. Get tests/qemu-iotests from the QEMU source tree,
    then configure and compile it

  2. Point to the system supplied qemu binary
    export QEMU_PROG=/usr/bin/qemu-system-ppc64

  3. Run test scenario from 194
    # cd qemu/tests/qemu-iotests
    # ./check -vhdx 194

QEMU          -- "/usr/bin/qemu-system-ppc64" -nodefaults -machine accel=qtest
QEMU_IMG      -- "/home/nasastry/qemu/qemu-img"
QEMU_IO       -- "/home/nasastry/qemu/qemu-io"  --cache writeback -f vhdx
QEMU_NBD      -- "/home/nasastry/qemu/qemu-nbd"
IMGFMT        -- vhdx
IMGPROTO      -- file
PLATFORM      -- Linux/ppc64le zzfp365-lp1 4.13.0-4.rel.git49564cb.el7.centos.ppc64le
TEST_DIR      -- /home/nasastry/qemu/tests/qemu-iotests/scratch
SOCKET_SCM_HELPER -- /home/nasastry/qemu/tests/qemu-iotests/socket_scm_helper

194         [failed, exit status 1] - output mismatch (see 194.out.bad)
--- /home/nasastry/qemu/tests/qemu-iotests/194.out	2017-10-09 14:09:04.272726282 +0530
+++ /home/nasastry/qemu/tests/qemu-iotests/194.out.bad	2017-10-12 20:26:11.540909292 +0530
@@ -1,18 +1,16 @@
+WARNING:qemu:qemu received signal -6: /usr/bin/qemu-system-ppc64 -chardev socket,id=mon,path=/home/nasastry/qemu/tests/qemu-iotests/scratch/qemudest-44872-monitor.sock -mon chardev=mon,mode=control -display none -vga none -qtest unix:path=/home/nasastry/qemu/tests/qemu-iotests/scratch/qemudest-44872-qtest.sock -machine accel=qtest -nodefaults -machine accel=qtest -drive if=virtio,id=drive0,file=/home/nasastry/qemu/tests/qemu-iotests/scratch/44872-dest.img,format=vhdx,cache=writeback -incoming unix:/home/nasastry/qemu/tests/qemu-iotests/scratch/44872-migration.sock
 Launching VMs...
-Launching NBD server on destination...
-{u'return': {}}
-{u'return': {}}
-Starting `drive-mirror` on source...
-{u'return': {}}
-Waiting for `drive-mirror` to complete...
-{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'device': u'mirror-job0', u'type': u'mirror', u'speed': 0, u'len': 1073741824, u'offset': 1073741824}, u'event': u'BLOCK_JOB_READY'}
-Starting migration...
-{u'return': {}}
-{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'setup'}, u'event': u'MIGRATION'}
-{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'active'}, u'event': u'MIGRATION'}
-{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'status': u'completed'}, u'event': u'MIGRATION'}
-Gracefully ending the `drive-mirror` job on source...
-{u'return': {}}
-{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'device': u'mirror-job0', u'type': u'mirror', u'speed': 0, u'len': 1073741824, u'offset': 1073741824}, u'event': u'BLOCK_JOB_COMPLETED'}
-Stopping the NBD server on destination...
-{u'return': {}}
+Traceback (most recent call last):
+  File "194", line 42, in <module>
+    .add_incoming('unix:{0}'.format(migration_sock_path))
+  File "/home/nasastry/qemu/tests/qemu-iotests/../../scripts/qemu.py", line 206, in launch
+    self._post_launch()
+  File "/home/nasastry/qemu/tests/qemu-iotests/../../scripts/qtest.py", line 100, in _post_launch
+    super(QEMUQtestMachine, self)._post_launch()
+  File "/home/nasastry/qemu/tests/qemu-iotests/../../scripts/qemu.py", line 184, in _post_launch
+    self._qmp.accept()
+  File "/home/nasastry/qemu/tests/qemu-iotests/../../scripts/qmp/qmp.py", line 157, in accept
+    return self.__negotiate_capabilities()
+  File "/home/nasastry/qemu/tests/qemu-iotests/../../scripts/qmp/qmp.py", line 72, in __negotiate_capabilities
+    raise QMPConnectError
+qmp.qmp.QMPConnectError
Failures: 194
Failed 1 of 1 tests
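
For context, test 194 live-migrates a guest while mirroring its disk to the
destination over NBD.  Reconstructed from the expected 194.out side of the
diff above (so the exact arguments are illustrative, not copied from the
test), the QMP flow it drives is roughly:

    # destination: expose the target image over NBD
    { "execute": "nbd-server-start", "arguments": { "addr": { ... } } }
    { "execute": "nbd-server-add", "arguments": { "device": "drive0", "writable": true } }

    # source: mirror the disk into that export, wait for BLOCK_JOB_READY
    { "execute": "drive-mirror", "arguments": { "job-id": "mirror-job0",
        "device": "drive0", "sync": "full", "target": "nbd+unix:..." } }

    # source: migrate; after MIGRATION status "completed", gracefully end
    # the mirror job and stop the NBD server
    { "execute": "migrate", "arguments": { "uri": "unix:..." } }
    { "execute": "block-job-cancel", "arguments": { "device": "mirror-job0" } }
    { "execute": "nbd-server-stop" }

Here the crash happens before any of that: per the WARNING line and the
Python traceback, the destination QEMU aborts while still being launched
with -incoming on the vhdx image, so launch() never gets a QMP greeting and
raises QMPConnectError.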

From gdb

[root@zzfp365-lp1 qemu-iotests]# ls core*
core.44881
[root@zzfp365-lp1 qemu-iotests]# gdb /usr/bin/qemu-system-ppc64 core.44881
...
Core was generated by `/usr/bin/qemu-system-ppc64 -chardev socket,id=mon,path=/home/nasastry/qemu/test'.
Program terminated with signal 6, Aborted.
#0  0x00007fff8659eff0 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install qemu-system-ppc-2.10.0-2.rel.gitc334a4e.el7.centos.ppc64le
(gdb) bt
#0  0x00007fff8659eff0 in raise () from /lib64/libc.so.6
#1  0x00007fff865a136c in abort () from /lib64/libc.so.6
#2  0x00007fff86594c44 in __assert_fail_base () from /lib64/libc.so.6
#3  0x00007fff86594d34 in __assert_fail () from /lib64/libc.so.6
#4  0x0000000127ae4df0 in bdrv_co_pwritev (child=0x162f090b0, offset=65536, bytes=<optimized out>, qiov=0x7ffff5f21700, flags=<optimized out>) at block/io.c:1537
#5  0x0000000127ae4f00 in bdrv_rw_co_entry (opaque=0x7ffff5f21680) at block/io.c:597
#6  0x0000000127bab958 in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at util/coroutine-ucontext.c:79
#7  0x00007fff865b2b9c in makecontext () from /lib64/libc.so.6
#8  0x0000000000000000 in ?? ()

The QEMU version is qemu-2.10.0-2.rel.gitc334a4e.el7.centos.ppc64le.
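
The abort is an assertion failure inside bdrv_co_pwritev() at block/io.c:1537.
In the 2.10 code base that function rejects writes to inactive images, along
these lines (a sketch quoted from memory, not the literal 2.10 source; the
signature matches frame #4 above):

    int coroutine_fn bdrv_co_pwritev(BdrvChild *child, int64_t offset,
                                     unsigned int bytes, QEMUIOVector *qiov,
                                     BdrvRequestFlags flags)
    {
        BlockDriverState *bs = child->bs;
        ...
        /* The destination of an incoming migration keeps its images
         * inactive until it takes ownership; any write issued before
         * that point trips this assertion. */
        assert(!(bs->open_flags & BDRV_O_INACTIVE));
        ...
    }

A plausible reading, consistent with the next comment, is that the vhdx
driver issues such a write (e.g. a header update or log replay) while the
image is opened on the migration destination, where it is still inactive.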


------- Comment From alexey@au1.ibm.com 2017-10-25 02:25:14 EDT-------
The test only fails on vhdx, and incoming migration has been failing on vhdx for a long time already (I reported the bug against QEMU anyway). It is very low priority, as we do not use Hyper-V images (which is where vhdx comes from) and qcow2 works fine.

aik pushed a commit that referenced this issue Jan 31, 2018
Direct leak of 160 byte(s) in 4 object(s) allocated from:
    #0 0x55ed7678cda8 in calloc (/home/elmarco/src/qq/build/x86_64-softmmu/qemu-system-x86_64+0x797da8)
    #1 0x7f3f5e725f75 in g_malloc0 /home/elmarco/src/gnome/glib/builddir/../glib/gmem.c:124
    #2 0x55ed778aa3a7 in query_option_descs /home/elmarco/src/qq/util/qemu-config.c:60:16
    #3 0x55ed778aa307 in get_drive_infolist /home/elmarco/src/qq/util/qemu-config.c:140:19
    #4 0x55ed778a9f40 in qmp_query_command_line_options /home/elmarco/src/qq/util/qemu-config.c:254:36
    #5 0x55ed76d4868c in qmp_marshal_query_command_line_options /home/elmarco/src/qq/build/qmp-marshal.c:3078:14
    #6 0x55ed77855dd5 in do_qmp_dispatch /home/elmarco/src/qq/qapi/qmp-dispatch.c:104:5
    #7 0x55ed778558cc in qmp_dispatch /home/elmarco/src/qq/qapi/qmp-dispatch.c:131:11
    #8 0x55ed768b592f in handle_qmp_command /home/elmarco/src/qq/monitor.c:3840:11
    #9 0x55ed7786ccfe in json_message_process_token /home/elmarco/src/qq/qobject/json-streamer.c:105:5
    #10 0x55ed778fe37c in json_lexer_feed_char /home/elmarco/src/qq/qobject/json-lexer.c:323:13
    #11 0x55ed778fdde6 in json_lexer_feed /home/elmarco/src/qq/qobject/json-lexer.c:373:15
    #12 0x55ed7786cd83 in json_message_parser_feed /home/elmarco/src/qq/qobject/json-streamer.c:124:12
    #13 0x55ed768b559e in monitor_qmp_read /home/elmarco/src/qq/monitor.c:3882:5
    #14 0x55ed77714f29 in qemu_chr_be_write_impl /home/elmarco/src/qq/chardev/char.c:167:9
    #15 0x55ed77714fde in qemu_chr_be_write /home/elmarco/src/qq/chardev/char.c:179:9
    #16 0x55ed7772ffad in tcp_chr_read /home/elmarco/src/qq/chardev/char-socket.c:440:13
    #17 0x55ed7777113b in qio_channel_fd_source_dispatch /home/elmarco/src/qq/io/channel-watch.c:84:12
    #18 0x7f3f5e71d90b in g_main_dispatch /home/elmarco/src/gnome/glib/builddir/../glib/gmain.c:3182
    #19 0x7f3f5e71e7ac in g_main_context_dispatch /home/elmarco/src/gnome/glib/builddir/../glib/gmain.c:3847
    #20 0x55ed77886ffc in glib_pollfds_poll /home/elmarco/src/qq/util/main-loop.c:214:9
    #21 0x55ed778865fd in os_host_main_loop_wait /home/elmarco/src/qq/util/main-loop.c:261:5
    #22 0x55ed77886222 in main_loop_wait /home/elmarco/src/qq/util/main-loop.c:515:11
    #23 0x55ed76d2a4df in main_loop /home/elmarco/src/qq/vl.c:1995:9
    #24 0x55ed76d1cb4a in main /home/elmarco/src/qq/vl.c:4914:5
    #25 0x7f3f555f6039 in __libc_start_main (/lib64/libc.so.6+0x21039)

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180104160523.22995-14-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
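
The trace pins the allocation in query_option_descs(), called from
get_drive_infolist().  The shape of a fix for this leak class (an
illustrative sketch, not the literal patch; the early-out condition
`wanted` is hypothetical) is to release the QAPI list on every path that
does not hand ownership onward:

    /* query_option_descs() builds a heap-allocated QAPI list; the
     * g_malloc0() calls behind it are what ASAN reports above. */
    CommandLineParameterInfoList *params = query_option_descs(desc);

    if (!wanted) {
        /* An exit path like this is where such a list gets dropped;
         * freeing it here is the fix. */
        qapi_free_CommandLineParameterInfoList(params);
        return;
    }

    /* Otherwise ownership moves into the result and the list is
     * released later together with it. */
    info->parameters = params;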
aik pushed a commit that referenced this issue Jan 31, 2018
Spotted thanks to ASAN:

==25226==ERROR: AddressSanitizer: global-buffer-overflow on address 0x556715a1f120 at pc 0x556714b6f6b1 bp 0x7ffcdfac1360 sp 0x7ffcdfac1350
READ of size 1 at 0x556715a1f120 thread T0
    #0 0x556714b6f6b0 in init_disasm /home/elmarco/src/qemu/disas/s390.c:219
    #1 0x556714b6fa6a in print_insn_s390 /home/elmarco/src/qemu/disas/s390.c:294
    #2 0x55671484d031 in monitor_disas /home/elmarco/src/qemu/disas.c:635
    #3 0x556714862ec0 in memory_dump /home/elmarco/src/qemu/monitor.c:1324
    #4 0x55671486342a in hmp_memory_dump /home/elmarco/src/qemu/monitor.c:1418
    #5 0x5567148670be in handle_hmp_command /home/elmarco/src/qemu/monitor.c:3109
    #6 0x5567148674ed in qmp_human_monitor_command /home/elmarco/src/qemu/monitor.c:613
    #7 0x556714b00918 in qmp_marshal_human_monitor_command /home/elmarco/src/qemu/build/qmp-marshal.c:1704
    #8 0x556715138a3e in do_qmp_dispatch /home/elmarco/src/qemu/qapi/qmp-dispatch.c:104
    #9 0x556715138f83 in qmp_dispatch /home/elmarco/src/qemu/qapi/qmp-dispatch.c:131
    #10 0x55671485cf88 in handle_qmp_command /home/elmarco/src/qemu/monitor.c:3839
    #11 0x55671514e80b in json_message_process_token /home/elmarco/src/qemu/qobject/json-streamer.c:105
    #12 0x5567151bf2dc in json_lexer_feed_char /home/elmarco/src/qemu/qobject/json-lexer.c:323
    #13 0x5567151bf827 in json_lexer_feed /home/elmarco/src/qemu/qobject/json-lexer.c:373
    #14 0x55671514ee62 in json_message_parser_feed /home/elmarco/src/qemu/qobject/json-streamer.c:124
    #15 0x556714854b1f in monitor_qmp_read /home/elmarco/src/qemu/monitor.c:3881
    #16 0x556715045440 in qemu_chr_be_write_impl /home/elmarco/src/qemu/chardev/char.c:172
    #17 0x556715047184 in qemu_chr_be_write /home/elmarco/src/qemu/chardev/char.c:184
    #18 0x55671505a8e6 in tcp_chr_read /home/elmarco/src/qemu/chardev/char-socket.c:440
    #19 0x5567150943c3 in qio_channel_fd_source_dispatch /home/elmarco/src/qemu/io/channel-watch.c:84
    #20 0x7fb90292b90b in g_main_dispatch ../glib/gmain.c:3182
    #21 0x7fb90292c7ac in g_main_context_dispatch ../glib/gmain.c:3847
    #22 0x556715162eca in glib_pollfds_poll /home/elmarco/src/qemu/util/main-loop.c:214
    #23 0x556715163001 in os_host_main_loop_wait /home/elmarco/src/qemu/util/main-loop.c:261
    #24 0x5567151631fa in main_loop_wait /home/elmarco/src/qemu/util/main-loop.c:515
    #25 0x556714ad6d3b in main_loop /home/elmarco/src/qemu/vl.c:1950
    #26 0x556714ade329 in main /home/elmarco/src/qemu/vl.c:4865
    #27 0x7fb8fe5c9009 in __libc_start_main (/lib64/libc.so.6+0x21009)
    #28 0x5567147af4d9 in _start (/home/elmarco/src/qemu/build/s390x-softmmu/qemu-system-s390x+0xf674d9)

0x556715a1f120 is located 32 bytes to the left of global variable 'char_hci_type_info' defined in '/home/elmarco/src/qemu/hw/bt/hci-csr.c:493:23' (0x556715a1f140) of size 104
0x556715a1f120 is located 8 bytes to the right of global variable 's390_opcodes' defined in '/home/elmarco/src/qemu/disas/s390.c:860:33' (0x556715a15280) of size 40600

This fix is based on Andreas Arnez's <arnez@linux.vnet.ibm.com> upstream
commit:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commitdiff;h=9ace48f3d7d80ce09c5df60cccb433470410b11b

2014-08-19  Andreas Arnez  <arnez@linux.vnet.ibm.com>

       * s390-dis.c (init_disasm): Simplify initialization of
       opc_index[].  This also fixes an access after the last element
       of s390_opcodes[].

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180104160523.22995-19-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
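
The underlying bug is a classic read one past the end of a table while
building a first-byte index over it.  The fixed initialization, following
the referenced binutils-gdb commit, is along these lines (a simplified
sketch, not the exact file contents):

    /* opc_index[b] = index of the first entry in s390_opcodes[]
     * whose first opcode byte is b. */
    memset(opc_index, 0, sizeof(opc_index));

    /* Walking the table backwards keeps every access in bounds and
     * leaves each slot pointing at the FIRST matching entry.  The old
     * forward scan over runs of equal first bytes could dereference
     * s390_opcodes[s390_num_opcodes], one element past the end -- the
     * read "8 bytes to the right" of s390_opcodes in the ASAN report. */
    for (i = s390_num_opcodes; i-- > 0; ) {
        opc_index[s390_opcodes[i].opcode[0]] = i;
    }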

------- Comment From nasastry@in.ibm.com 2018-02-15 04:03:46 EDT-------
With qemu-system-ppc-2.11.0-1.rel.gite7153e0.el7.centos.ppc64le,
the test failure is no longer seen.

[root@zzfp365-lp1 scripts]# export QEMU_PROG=/usr/bin/qemu-system-ppc64
[root@zzfp365-lp1 scripts]# cd ../tests/qemu-iotests
[root@zzfp365-lp1 qemu-iotests]# ./check -vhdx 194
QEMU -- "/usr/bin/qemu-system-ppc64" -nodefaults -machine accel=qtest
QEMU_IMG -- "/home/nasastry/qemu/qemu-img"
QEMU_IO -- "/home/nasastry/qemu/qemu-io" --cache writeback -f vhdx
QEMU_NBD -- "/home/nasastry/qemu/qemu-nbd"
IMGFMT -- vhdx
IMGPROTO -- file
PLATFORM -- Linux/ppc64le zzfp365-lp1 4.14.0-3.git68b4afb.el7.centos.ppc64le
TEST_DIR -- /home/nasastry/qemu/tests/qemu-iotests/scratch
SOCKET_SCM_HELPER -- /home/nasastry/qemu/tests/qemu-iotests/socket_scm_helper

194 [not run] not suitable for this image format: vhdx
Not run: 194
Passed all 0 tests

This bugzilla can be closed.
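
Worth noting: the 2.11 result is a skip, not a pass.  Test 194 evidently
now declares which image formats it supports, so the harness refuses to
run it with vhdx; on its own this run does not show that the underlying
vhdx incoming-migration assertion was fixed.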

aik pushed a commit that referenced this issue May 15, 2018
Eric Auger reported the problem days ago that OOB broke ARM when running
with libvirt:

http://lists.gnu.org/archive/html/qemu-devel/2018-03/msg06231.html

The problem was that the monitor dispatcher bottom half was bound to
qemu_aio_context now, which could be polled unexpectedly in block code.
We should keep the dispatchers run in iohandler_ctx just like what we
did before the Out-Of-Band series (chardev uses qio, and qio binds
everything with iohandler_ctx).

Without this change, the QMP dispatcher might run even before reaching the
main loop, from a block I/O path, for example in a stack like this (the ARM
case, where the "cont" command handler ran during the machine init phase):

        #0  qmp_cont ()
        #1  0x00000000006bd210 in qmp_marshal_cont ()
        #2  0x0000000000ac05c4 in do_qmp_dispatch ()
        #3  0x0000000000ac07a0 in qmp_dispatch ()
        #4  0x0000000000472d60 in monitor_qmp_dispatch_one ()
        #5  0x000000000047302c in monitor_qmp_bh_dispatcher ()
        #6  0x0000000000acf374 in aio_bh_call ()
        #7  0x0000000000acf428 in aio_bh_poll ()
        #8  0x0000000000ad5110 in aio_poll ()
        #9  0x0000000000a08ab8 in blk_prw ()
        #10 0x0000000000a091c4 in blk_pread ()
        #11 0x0000000000734f94 in pflash_cfi01_realize ()
        #12 0x000000000075a3a4 in device_set_realized ()
        #13 0x00000000009a26cc in property_set_bool ()
        #14 0x00000000009a0a40 in object_property_set ()
        #15 0x00000000009a3a08 in object_property_set_qobject ()
        #16 0x00000000009a0c8c in object_property_set_bool ()
        #17 0x0000000000758f94 in qdev_init_nofail ()
        #18 0x000000000058e190 in create_one_flash ()
        #19 0x000000000058e2f4 in create_flash ()
        #20 0x00000000005902f0 in machvirt_init ()
        #21 0x00000000007635cc in machine_run_board_init ()
        #22 0x00000000006b135c in main ()

Actually the problem is more severe than that.  After the switch to the
QEMU AIO handler, the monitor dispatcher code can even be called from a
nested aio_poll(), i.e. an explicit aio_poll() inside another main loop
aio_poll(), which can be racy too, breaking code like TPM and 9p that use
nested event loops.

Switch to use the iohandler_ctx for monitor dispatchers.

My sincere thanks to Eric Auger, who offered great help in both debugging
and verifying the problem.  The ARM test was carried out by applying this
patch on top of QEMU 2.12.0-rc0, and the problem is gone with the patch.

A quick test of mine shows that with this patch applied we can pass all
raw iotests even with OOB on by default.

CC: Eric Blake <eblake@redhat.com>
CC: Markus Armbruster <armbru@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Fam Zheng <famz@redhat.com>
Reported-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180410044942.17059-1-peterx@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
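
The shape of the fix, per the message above, is to create the QMP
dispatcher bottom half in iohandler_ctx instead of the main AioContext, so
that nested aio_poll() calls in block code never dispatch it (a sketch of
the change under assumed variable names in monitor.c, not the verbatim
patch):

    /* before: the dispatcher BH runs from any aio_poll() on the main
     * context, including nested polls inside block I/O paths */
    bh = aio_bh_new(qemu_get_aio_context(),
                    monitor_qmp_bh_dispatcher, NULL);

    /* after: bound to iohandler_ctx, which is only polled from the
     * main loop proper */
    bh = aio_bh_new(iohandler_get_aio_context(),
                    monitor_qmp_bh_dispatcher, NULL);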