atmega2560 emulation #2
|
I know it is not an issue, but it would be good to have some additional description of how to actually use the tool. Is it possible to run the virtual machine with a hex/bin file and communicate with it through a serial port? Which chips does the project support?
But the general question for me: can I emulate software for any 8-bit chip, like the atmega2560?
Thanks.
|
Hi systemmind.
This project was about adding AVR 8-bit CPUs to QEMU; I had no intention of
emulating any boards or peripherals.
There is only the CPU and a simple sample board.
1. Here
<https://github.com/michaelrolnik/qemu-avr/blob/master/target/avr/cpu.c#L500-L515>
is the list of all supported CPUs.
2. And this
<https://github.com/michaelrolnik/qemu-avr/blob/master/hw/avr/sample.c>
is the sample board.
You can try to build it and then (assuming your code is in a prog.c file):
Compile it and create a binary file:
avr-gcc -O0 -mmcu=avr5 prog.c -o prog.avr5.out -g -ggdb
avr-objcopy -O binary prog.avr5.out prog.avr5.bin
Then run it:
qemu-system-avr -bios prog.avr5.bin
If you want to debug the test, run:
qemu-system-avr -bios prog.avr5.bin -s -S
(-s starts a gdb server on TCP port 1234, and -S freezes the CPU until the debugger tells it to continue)
and in another window run:
avr-gdb prog.avr5.out
(gdb) target remote :1234
Regards,
Michael
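For reference, a minimal prog.c for this kind of smoke test might look like the sketch below. This is a hypothetical example, not from the project: since only the CPU is emulated, about all a test program can do is keep a counter that gdb can inspect.

/* prog.c: minimal test program for the CPU-only AVR target. */
volatile unsigned int counter;   /* volatile so gdb always sees fresh values */

int main(void)
{
    for (;;) {
        counter++;               /* in gdb: watch counter, then continue */
    }
}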
|
Thanks Michael, your answer was very helpful. It looks like I have run the machine correctly, but if I don't use the -S option the monitor gets stuck and I cannot do anything with the machine anymore. I can't even stop it with the Ctrl+C key combination and have to kill it with sudo killall -9 qemu-system-avr. The same thing happens if I run qemu with the -S argument but then execute cont in the qemu monitor. Is this the correct behavior of the emulation?
And also, what do you think about peripherals? How much effort might it take to add some? Maybe I could write a patch for that; does that make sense at all?
|
Hi,
btw, what is your name?
1. Sorry, I don't know why qemu gets stuck; maybe it's because this is just the
CPU and QEMU is not built for this.
2. Peripherals are not hard. However, peripherals on AVR boards do not have
separate register regions (some peripherals share the same registers), so I was
not sure how to implement them. But once you know how to declare/define the
correct register ranges, I assume it should take several days to implement a
simple AVR board with all peripherals. It could be a student project. Actually,
this is what I hoped would happen: I implement the CPU, and if there is
interest, people will add peripherals. However, it turned out there was no
interest.
Michael
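As a sketch of the register-sharing issue Michael describes: because AVR peripherals interleave their registers inside one small I/O space, one workaround is to let a single memory region own the whole window and dispatch per address in its read/write handlers. This is an illustration against QEMU's generic memory API, not code from this project; the state fields and register offsets are hypothetical.

#include "qemu/osdep.h"
#include "qemu/log.h"
#include "exec/memory.h"

typedef struct AVRIoState {
    MemoryRegion io;      /* one region covering the shared I/O window */
    uint8_t portb;        /* hypothetical GPIO data register           */
    uint8_t tcnt0;        /* hypothetical timer counter                */
} AVRIoState;

static uint64_t avr_io_read(void *opaque, hwaddr addr, unsigned size)
{
    AVRIoState *s = opaque;

    switch (addr) {               /* fan out by offset, not by device */
    case 0x05: return s->portb;
    case 0x26: return s->tcnt0;
    default:
        qemu_log_mask(LOG_UNIMP, "avr-io: read of 0x%02x\n", (int)addr);
        return 0;
    }
}

static void avr_io_write(void *opaque, hwaddr addr, uint64_t val,
                         unsigned size)
{
    AVRIoState *s = opaque;

    switch (addr) {
    case 0x05: s->portb = val; break;
    case 0x26: s->tcnt0 = val; break;
    default:
        qemu_log_mask(LOG_UNIMP, "avr-io: write of 0x%02x\n", (int)addr);
    }
}

static const MemoryRegionOps avr_io_ops = {
    .read = avr_io_read,
    .write = avr_io_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
    .impl = { .min_access_size = 1, .max_access_size = 1 },
};

/* at board init time:
 *   memory_region_init_io(&s->io, owner, &avr_io_ops, s, "avr-io", 0x40);
 *   memory_region_add_subregion(io_space, 0x20, &s->io);
 */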
|
Hi again. My name is Illia (Elias is the English variant, so you may call me by that name too if you want). I have built this project on macOS, and I think it would be good for me to test it on some other machine; maybe it would work better there.
I am interested in this project because I work remotely and am currently modifying an Arduino project built around an atmega2560 CPU, and I am looking for a nice way to set up regression/integration automation tests for it. There are other ways to do this (including simulation with simavr), but I would like an independent framework that can test the compiled hex/bin/elf file, and QEMU with peripherals matches my requirements, I guess.
It is interesting for me to understand how QEMU works, and I think I have some extra time to spend working on peripheral features. I also have a working background with the atmega series, and I have a junior developer assistant who could also take on some tasks.
So maybe you can describe the actual problem with register mapping in more detail and point me to some references (if they exist) where I can read about QEMU's architecture and how to extend its features. Maybe you have some other ideas on how we could do this? I believe that if QEMU gets full support for the AVR architecture, it will become more popular among developers.
Regards.
P.S. How do you fold the quote? :)
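One hedged sketch of the kind of regression harness Illia describes, using only what the thread establishes (the -s/-S gdb server and the file names from Michael's example; everything else is illustrative): boot the image, attach avr-gdb in batch mode, and turn a gdb check into a test exit code.

#!/bin/sh
# Smoke test: boot the image under QEMU and verify that execution
# reaches main(). Assumes prog.avr5.out/.bin were built as shown above.
qemu-system-avr -bios prog.avr5.bin -s -S &
qemu_pid=$!
sleep 1    # give QEMU a moment to open the gdb server port

# GNU coreutils 'timeout' guards against a hang if the breakpoint
# is never hit (may need to be installed separately on macOS).
timeout 10 avr-gdb -batch prog.avr5.out \
    -ex 'target remote :1234' \
    -ex 'break main' \
    -ex 'continue' \
    -ex 'echo REACHED_MAIN\n' | grep -q REACHED_MAIN
status=$?

kill "$qemu_pid"
exit "$status"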
|
Hi Illia. Look at the sparc32_dma device.
This basically means that a device is assigned a range of memory addresses, and this range cannot be shared between several devices.
|
Following is my question to the QEMU community some time ago,
|
and the following is the answer I got from Peter Maydell:
|
> then a higher level SoC container device can map those into the right places

This solution looks more acceptable and logical to me too. I will investigate how it can be implemented in QEMU.
|
right.
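A rough sketch of what that suggestion could look like in QEMU terms: each peripheral exports its registers as its own MemoryRegion, and an SoC container device maps those regions at the datasheet offsets. All type names, fields, and offsets below are hypothetical.

#include "qemu/osdep.h"
#include "hw/sysbus.h"

#define TYPE_MYAVR_SOC "myavr-soc"
#define MYAVR_SOC(obj) OBJECT_CHECK(MyAVRSoCState, (obj), TYPE_MYAVR_SOC)

typedef struct MyAVRSoCState {
    SysBusDevice parent_obj;
    MemoryRegion io;          /* container for the whole I/O window */
    SysBusDevice *timer0;     /* child peripherals (hypothetical)   */
    SysBusDevice *usart0;
} MyAVRSoCState;

static void myavr_soc_realize(DeviceState *dev, Error **errp)
{
    MyAVRSoCState *s = MYAVR_SOC(dev);

    /* A pure container region, sized for the AVR I/O space. */
    memory_region_init(&s->io, OBJECT(dev), "myavr.io", 0x200);

    /* Each child exported its registers with sysbus_init_mmio();
     * the container decides where they land. */
    memory_region_add_subregion(&s->io, 0x44,
                                sysbus_mmio_get_region(s->timer0, 0));
    memory_region_add_subregion(&s->io, 0xC0,
                                sysbus_mmio_get_region(s->usart0, 0));
}

The point of the indirection is that interleaved register maps become the container's problem, so individual peripheral models stay self-contained and reusable.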
|
Please have a look at the patch by S.E.Harris@kent.ac.uk (https://patchwork.kernel.org/project/qemu-devel) |