
QEMU-1.2.0 based TLMu #4

Open
yTakatsukasa opened this issue Nov 1, 2012 · 2 comments

Comments

@yTakatsukasa

Hi Edgar,
I have rebased TLMu onto QEMU-1.2.0.
The git rebase command did not help, because the QEMU core has changed quite a lot. Of course, you already know that :)
I had to rewrite the memory access hooks.

Now tlm_mem is treated as ramd, which is almost the same as QEMU's romd.
I have added another mem_map function, tlmu_map_ram_nosync().
Memories specified with tlmu_map_ram_nosync() are accessed via DMI without any timing calculation.
The search walk over TLMRegisterRamEntry entries no longer happens.

Linux on ARM now boots in 3 seconds, which is 7 times faster than before.

I have only tested ARM functionality, so there may be new bugs in the timing calculation and syncing.

I am planning to keep up with mainline QEMU releases.

If you are interested, please try the tlmu-1.2.0 branch of our repository:
https://github.com/hdlab/tlmu/tree/tlmu-1.2.0

Best regards,
Yutetsu.

@edgarigl
Owner

edgarigl commented Nov 1, 2012


Hi,

This looks like cool stuff! I'm going to give it a try, but I'm pretty
busy for the next two weeks or so, sorry.

A few questions:

  1. The testsuite that was in place before, with MIPS, ARM and CRIS
    guests running in parallel on the same virtual system: do these still
    work?
  2. Could you explain the system you test with a little more?
    Does the TLM world initiate accesses into TLMu as well?
  3. Can the TLM world access memory areas in TLMu mapped as turbo mode?

Thanks for working on this!

Best regards,
Edgar

@yTakatsukasa
Author

Hi,

1. The testsuite that was in place before, with MIPS, ARM and CRIS guests running in parallel on the same virtual system: do these still work?

Yes, but the output has changed. On my computer it looks like the output below; the guests appear to run sequentially.
I guess the reason for this change is that the pthread usage in QEMU has changed. The TCG thread? I'm not sure.

Hello, I am the ARM
ARM: STOP: 0
Hello, I am the CRIS
CRIS: STOP: 0
Hello, I am the MIPSEL
MIPS: STOP: 0

2. Could you explain the system you test with a little more? Does the TLM world initiate accesses into TLMu as well?

No.
Only the ARM926 is in TLMu; other modules, such as memories and timers, are in the TLM (SystemC) world.

The whole environment can be downloaded from http://www.hdlab.co.jp/web/a050consulting/b009armcpumodel/
I am sorry to say that registration is required and everything is written only in Japanese.

3. Can the TLM world access memory areas in TLMu mapped as turbo mode?

Not tested, but it should work.
Accessing TLMu from the TLM world invokes cpu_physical_memory_rw() or cpu_physical_memory_rw_debug().
These functions are always used when a CPU in TLMu accesses QEMU-internal devices or the TLM world.
So it should be OK.

Let me summarize the memory access handling in my implementation.

  • If devices or memory are mapped with tlmu_map_ram():
    The mapped devices are treated as ramd.
    They are accessed via tlm_read() or tlm_write(), called from cpu_physical_memory_rw().
  • If memory is mapped with tlmu_map_ram_nosync():
    The DMI pointer is passed directly to QEMU's memory_region_init_ram_ptr().
    The memory is treated as normal QEMU memory, not as ramd.
    It is accessible via cpu_physical_memory_rw().
    If DMI read/write access is not allowed for the memory, it falls back to tlmu_map_ram().

Regards,
Yutetsu.

edgarigl pushed a commit that referenced this issue Nov 27, 2014
$ ~/usr/bin/qemu-system-x86_64 -enable-kvm -m 1024 -drive if=none,id=drive0,cache=none,aio=native,format=raw,file=/root/Image/centos-6.4.raw -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on,config-wce=on # make dataplane fail to initialize
qemu-system-x86_64: -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on,config-wce=on: device is incompatible with x-data-plane, use config-wce=off
*** glibc detected *** /root/usr/bin/qemu-system-x86_64: free(): invalid pointer: 0x00007f001fef12f8 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x7d776)[0x7f00153a5776]
/root/usr/bin/qemu-system-x86_64(+0x2c34ec)[0x7f001cf5b4ec]
/root/usr/bin/qemu-system-x86_64(+0x342f9a)[0x7f001cfdaf9a]
/root/usr/bin/qemu-system-x86_64(+0x33694e)[0x7f001cfce94e]
....................

 (gdb) bt
 #0  0x00007f3bf3a12015 in raise () from /lib64/libc.so.6
 #1  0x00007f3bf3a1348b in abort () from /lib64/libc.so.6
 #2  0x00007f3bf3a51a4e in __libc_message () from /lib64/libc.so.6
 #3  0x00007f3bf3a57776 in malloc_printerr () from /lib64/libc.so.6
 #4  0x00007f3bfb60d4ec in free_and_trace (mem=0x7f3bfe0129f8) at vl.c:2786
 #5  0x00007f3bfb68cf9a in virtio_cleanup (vdev=0x7f3bfe0129f8) at /root/Develop/QEMU/qemu/hw/virtio.c:900
 #6  0x00007f3bfb68094e in virtio_blk_device_init (vdev=0x7f3bfe0129f8) at /root/Develop/QEMU/qemu/hw/virtio-blk.c:666
 #7  0x00007f3bfb68dadf in virtio_device_init (qdev=0x7f3bfe0129f8) at /root/Develop/QEMU/qemu/hw/virtio.c:1092
 #8  0x00007f3bfb50da46 in device_realize (dev=0x7f3bfe0129f8, err=0x7fff479c9258) at hw/qdev.c:176
.............................

In virtio_blk_device_init(), the memory vdev points to is a static
member of "struct VirtIOBlkPCI", not heap memory, so it must not be
freed. We should therefore use virtio_common_cleanup() to clean up this
VirtIODevice rather than virtio_cleanup(), which attempts to free the vdev.

This error was recently introduced by commit 05ff686.

Signed-off-by: Dunrong Huang <huangdr@cloud-times.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
edgarigl pushed a commit that referenced this issue Nov 27, 2014
…d region

The memory API allows a MemoryRegion's size to be 2^64, as a special
case (otherwise the size always fits in a 64 bit integer). This meant
that attempts to access address zero in a 2^64 sized region would
assert in address_space_translate():

  #3  0x00007ffff3e4d192 in __GI___assert_fail (assertion=0x555555a43f32
    "!a.hi", file=0x555555a43ef0 "include/qemu/int128.h", line=18,
    function=0x555555a4439f "int128_get64") at assert.c:103
  #4  0x0000555555877642 in int128_get64 (a=...)
    at include/qemu/int128.h:18
  #5  0x00005555558782f2 in address_space_translate (as=0x55555668d140,
    addr=0, xlat=0x7fffafac9918, plen=0x7fffafac9920, is_write=false)
    at exec.c:221

Fix this by doing the 'min' operation in 128 bit arithmetic
rather than 64 bit arithmetic (we know the result of the 'min'
definitely fits in 64 bits because one of the inputs did).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
edgarigl pushed a commit that referenced this issue Nov 27, 2014
The docs for glfs_init suggest that the function sets errno on every
failure. In fact it doesn't. Since other functions, such as
qemu_gluster_open() in the gluster block code, report their errors based
on this assumption, we need to make sure that errno is set on each failure.

This fixes a crash of qemu-img/qemu when a gluster brick isn't
accessible from the given host while the server serving the volume
description is.

Thread 1 (Thread 0x7ffff7fba740 (LWP 203880)):
 #0  0x00007ffff77673f8 in glfs_lseek () from /usr/lib64/libgfapi.so.0
 #1  0x0000555555574a68 in qemu_gluster_getlength ()
 #2  0x0000555555565742 in refresh_total_sectors ()
 #3  0x000055555556914f in bdrv_open_common ()
 #4  0x000055555556e8e8 in bdrv_open ()
 #5  0x000055555556f02f in bdrv_open_image ()
 #6  0x000055555556e5f6 in bdrv_open ()
 #7  0x00005555555c5775 in bdrv_new_open ()
 #8  0x00005555555c5b91 in img_info ()
 #9  0x00007ffff62c9c05 in __libc_start_main () from /lib64/libc.so.6
 #10 0x00005555555648ad in _start ()

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
edgarigl pushed a commit that referenced this issue Nov 27, 2014
If libusb_get_device_list() fails, the uninitialized local variable
libusb_device would be passed to libusb_free_device_list(), which
would cause a crash, like:
(gdb) bt
 #0  0x00007fbbb4bafc10 in pthread_mutex_lock () from /lib64/libpthread.so.0
 #1  0x00007fbbb233e653 in libusb_unref_device (dev=0x6275682d627375)
     at core.c:902
 #2  0x00007fbbb233e739 in libusb_free_device_list (list=0x7fbbb6e8436e,
     unref_devices=<optimized out>) at core.c:653
 #3  0x00007fbbb6cd80a4 in usb_host_auto_check (unused=unused@entry=0x0)
     at hw/usb/host-libusb.c:1446
 #4  0x00007fbbb6cd8525 in usb_host_initfn (udev=0x7fbbbd3c5670)
     at hw/usb/host-libusb.c:912
 #5  0x00007fbbb6cc123b in usb_device_init (dev=0x7fbbbd3c5670)
     at hw/usb/bus.c:106
 ...

So initialize libusb_device at declaration time.

Signed-off-by: Jincheng Miao <jmiao@redhat.com>
Reviewed-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
edgarigl pushed a commit that referenced this issue Nov 27, 2014
VirtIOBlockReq is freed later by virtio_blk_free_request() in
hw/block/virtio-blk.c.  Remove this extraneous g_slice_free().

This patch fixes the following segfault:

  0x00005555556373af in virtio_blk_rw_complete (opaque=0x5555565ff5e0, ret=0) at hw/block/virtio-blk.c:99
  99          bdrv_acct_done(req->dev->bs, &req->acct);
  (gdb) print req
  $1 = (VirtIOBlockReq *) 0x5555565ff5e0
  (gdb) print req->dev
  $2 = (VirtIOBlock *) 0x0
  (gdb) bt
  #0  0x00005555556373af in virtio_blk_rw_complete (opaque=0x5555565ff5e0, ret=0) at hw/block/virtio-blk.c:99
  #1  0x0000555555840ebe in bdrv_co_em_bh (opaque=0x5555566152d0) at block.c:4675
  #2  0x000055555583de77 in aio_bh_poll (ctx=ctx@entry=0x5555563a8150) at async.c:81
  #3  0x000055555584b7a7 in aio_poll (ctx=0x5555563a8150, blocking=blocking@entry=true) at aio-posix.c:188
  #4  0x00005555556e520e in iothread_run (opaque=0x5555563a7fd8) at iothread.c:41
  #5  0x00007ffff42ba124 in start_thread () from /usr/lib/libpthread.so.0
  #6  0x00007ffff16d14bd in clone () from /usr/lib/libc.so.6

Reported-by: Max Reitz <mreitz@redhat.com>
Cc: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
edgarigl pushed a commit that referenced this issue Nov 27, 2014
If the PCI bridge enters the error flow as part
of the init process, it only deletes the shpc MMIO
subregion but does not remove it from the properties list,
resulting in a segmentation fault when the bridge runs
its exit function.

Example: add a PCI bridge without specifying the chassis number:
    <qemu-bin> ... -device pci-bridge,id=p1
Result:
    (qemu) qemu-system-x86_64: -device pci-bridge,id=p1: Bridge chassis not specified. Each bridge is required to be assigned a unique chassis id > 0.
    qemu-system-x86_64: -device pci-bridge,id=p1: Device
    initialization failed.
    Segmentation fault (core dumped)

    if (child->class->unparent) {
    #0  0x00005555558d629b in object_finalize_child_property (obj=0x555556d2e830, name=0x555556d30630 "shpc-mmio[0]", opaque=0x555556a42fc8) at qom/object.c:1078
    #1  0x00005555558d4b1f in object_property_del_all (obj=0x555556d2e830) at qom/object.c:367
    #2  0x00005555558d4ca1 in object_finalize (data=0x555556d2e830) at qom/object.c:412
    #3  0x00005555558d55a1 in object_unref (obj=0x555556d2e830) at qom/object.c:720
    #4  0x000055555572c907 in qdev_device_add (opts=0x5555563544f0) at qdev-monitor.c:566
    #5  0x0000555555744f16 in device_init_func (opts=0x5555563544f0, opaque=0x0) at vl.c:2213
    #6  0x00005555559cf5f0 in qemu_opts_foreach (list=0x555555e0f8e0 <qemu_device_opts>, func=0x555555744efa <device_init_func>, opaque=0x0, abort_on_failure=1) at util/qemu-option.c:1057
    #7  0x000055555574a11b in main (argc=16, argv=0x7fffffffdde8, envp=0x7fffffffde70) at vl.c:423

Unparent the shpc mmio region as part of shpc cleanup.

Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>