
ASAN reports UAF + SEGV when fuzzing exposed PIO with Hypercube guest VM. #6147

Closed
shuox opened this issue Jun 4, 2021 · 2 comments
Labels: status: new

Comments


shuox commented Jun 4, 2021

==468==ERROR: AddressSanitizer: heap-use-after-free on address 0x6180000200e0 at pc 0x559e3c1f0895 bp 0x7ffdd5f07de0 sp 0x7ffdd5f07dd0
READ of size 4 at 0x6180000200e0 thread T0 (mevent)
#0 0x559e3c1f0894 in timer_handler core/timer.c:45
#1 0x559e3c1c42bb in mevent_handle core/mevent.c:203
#2 0x559e3c1c754e in mevent_dispatch core/mevent.c:441
#3 0x559e3c1e527f in main core/main.c:1091
#4 0x7f49155a609a in __libc_start_main ../csu/libc-start.c:308
#5 0x559e3bebbdfd in _start (/usr/bin/acrn-dm+0x1eadfd)

0x6180000200e0 is located 96 bytes inside of 784-byte region [0x618000020080,0x618000020390)
freed by thread T81 (vcpu 0) here:
#0 0x559e3bfa6a4f in __interceptor_free (/usr/bin/acrn-dm+0x2d5a4f)
#1 0x559e3c15ac40 in virtio_rnd_deinit hw/pci/virtio/virtio_rnd.c:514
#2 0x559e3c0ea7a5 in pci_emul_deinit hw/pci/core.c:1048
#3 0x559e3c0eb241 in deinit_pci hw/pci/core.c:1620
#4 0x559e3c1e117d in vm_reset_vdevs core/main.c:581
#5 0x559e3c1e13dc in vm_system_reset core/main.c:635
#6 0x559e3c1e2545 in vm_loop core/main.c:712
#7 0x559e3c1e3562 in start_thread core/main.c:249
#8 0x7f4916b25fa2 in start_thread /build/glibc-vjB4T1/glibc-2.28/nptl/pthread_create.c:486

previously allocated by thread T81 (vcpu 0) here:
#0 0x559e3bfa7046 in calloc (/usr/bin/acrn-dm+0x2d6046)
#1 0x559e3c15c7de in virtio_rnd_init hw/pci/virtio/virtio_rnd.c:394
#2 0x559e3c0d8c54 in pci_emul_init hw/pci/core.c:1034
#3 0x559e3c0ebb25 in init_pci hw/pci/core.c:1419
#4 0x559e3c1e1280 in vm_reset_vdevs core/main.c:595
#5 0x559e3c1e13dc in vm_system_reset core/main.c:635
#6 0x559e3c1e2545 in vm_loop core/main.c:712
#7 0x559e3c1e3562 in start_thread core/main.c:249
#8 0x7f4916b25fa2 in start_thread /build/glibc-vjB4T1/glibc-2.28/nptl/pthread_create.c:486

Thread T81 (vcpu 0) created by T0 (mevent) here:
#0 0x559e3bed3ab5 in pthread_create (/usr/bin/acrn-dm+0x202ab5)
#1 0x559e3c1e06e0 in add_cpu core/main.c:275
#2 0x559e3c1e525d in main core/main.c:1079
#3 0x7f49155a609a in __libc_start_main ../csu/libc-start.c:308

SUMMARY: AddressSanitizer: heap-use-after-free core/timer.c:45 in timer_handler
Shadow bytes around the buggy address:
0x0c307fffbfc0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffbfd0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffbfe0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffbff0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffc000: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c307fffc010: fd fd fd fd fd fd fd fd fd fd fd fd[fd]fd fd fd
0x0c307fffc020: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc030: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc040: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc050: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc060: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc

==468==ERROR: AddressSanitizer: heap-use-after-free on address 0x6180000200f0 at pc 0x559e3c1f09af bp 0x7ffdd5f07de0 sp 0x7ffdd5f07dd0
READ of size 8 at 0x6180000200f0 thread T0 (mevent)
#0 0x559e3c1f09ae in timer_handler core/timer.c:58
#1 0x559e3c1c42bb in mevent_handle core/mevent.c:203
#2 0x559e3c1c754e in mevent_dispatch core/mevent.c:441
#3 0x559e3c1e527f in main core/main.c:1091
#4 0x7f49155a609a in __libc_start_main ../csu/libc-start.c:308
#5 0x559e3bebbdfd in _start (/usr/bin/acrn-dm+0x1eadfd)

0x6180000200f0 is located 112 bytes inside of 784-byte region [0x618000020080,0x618000020390)
freed by thread T81 (vcpu 0) here:
#0 0x559e3bfa6a4f in __interceptor_free (/usr/bin/acrn-dm+0x2d5a4f)
#1 0x559e3c15ac40 in virtio_rnd_deinit hw/pci/virtio/virtio_rnd.c:514
#2 0x559e3c0ea7a5 in pci_emul_deinit hw/pci/core.c:1048
#3 0x559e3c0eb241 in deinit_pci hw/pci/core.c:1620
#4 0x559e3c1e117d in vm_reset_vdevs core/main.c:581
#5 0x559e3c1e13dc in vm_system_reset core/main.c:635
#6 0x559e3c1e2545 in vm_loop core/main.c:712
#7 0x559e3c1e3562 in start_thread core/main.c:249
#8 0x7f4916b25fa2 in start_thread /build/glibc-vjB4T1/glibc-2.28/nptl/pthread_create.c:486

previously allocated by thread T81 (vcpu 0) here:
#0 0x559e3bfa7046 in calloc (/usr/bin/acrn-dm+0x2d6046)
#1 0x559e3c15c7de in virtio_rnd_init hw/pci/virtio/virtio_rnd.c:394
#2 0x559e3c0d8c54 in pci_emul_init hw/pci/core.c:1034
#3 0x559e3c0ebb25 in init_pci hw/pci/core.c:1419
#4 0x559e3c1e1280 in vm_reset_vdevs core/main.c:595
#5 0x559e3c1e13dc in vm_system_reset core/main.c:635
#6 0x559e3c1e2545 in vm_loop core/main.c:712
#7 0x559e3c1e3562 in start_thread core/main.c:249
#8 0x7f4916b25fa2 in start_thread /build/glibc-vjB4T1/glibc-2.28/nptl/pthread_create.c:486

SUMMARY: AddressSanitizer: heap-use-after-free core/timer.c:58 in timer_handler
Shadow bytes around the buggy address:
0x0c307fffbfc0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffbfd0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffbfe0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffbff0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffc000: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c307fffc010: fd fd fd fd fd fd fd fd fd fd fd fd fd fd[fd]fd
0x0c307fffc020: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc030: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc040: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc050: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc060: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd

==468==ERROR: AddressSanitizer: heap-use-after-free on address 0x6180000200f8 at pc 0x559e3c1f09e4 bp 0x7ffdd5f07de0 sp 0x7ffdd5f07dd0
READ of size 8 at 0x6180000200f8 thread T0 (mevent)
#0 0x559e3c1f09e3 in timer_handler core/timer.c:59
#1 0x559e3c1c42bb in mevent_handle core/mevent.c:203
#2 0x559e3c1c754e in mevent_dispatch core/mevent.c:441
#3 0x559e3c1e527f in main core/main.c:1091
#4 0x7f49155a609a in __libc_start_main ../csu/libc-start.c:308
#5 0x559e3bebbdfd in _start (/usr/bin/acrn-dm+0x1eadfd)

0x6180000200f8 is located 120 bytes inside of 784-byte region [0x618000020080,0x618000020390)
freed by thread T81 (vcpu 0) here:
#0 0x559e3bfa6a4f in __interceptor_free (/usr/bin/acrn-dm+0x2d5a4f)
#1 0x559e3c15ac40 in virtio_rnd_deinit hw/pci/virtio/virtio_rnd.c:514
#2 0x559e3c0ea7a5 in pci_emul_deinit hw/pci/core.c:1048
#3 0x559e3c0eb241 in deinit_pci hw/pci/core.c:1620
#4 0x559e3c1e117d in vm_reset_vdevs core/main.c:581
#5 0x559e3c1e13dc in vm_system_reset core/main.c:635
#6 0x559e3c1e2545 in vm_loop core/main.c:712
#7 0x559e3c1e3562 in start_thread core/main.c:249
#8 0x7f4916b25fa2 in start_thread /build/glibc-vjB4T1/glibc-2.28/nptl/pthread_create.c:486

previously allocated by thread T81 (vcpu 0) here:
#0 0x559e3bfa7046 in calloc (/usr/bin/acrn-dm+0x2d6046)
#1 0x559e3c15c7de in virtio_rnd_init hw/pci/virtio/virtio_rnd.c:394
#2 0x559e3c0d8c54 in pci_emul_init hw/pci/core.c:1034
#3 0x559e3c0ebb25 in init_pci hw/pci/core.c:1419
#4 0x559e3c1e1280 in vm_reset_vdevs core/main.c:595
#5 0x559e3c1e13dc in vm_system_reset core/main.c:635
#6 0x559e3c1e2545 in vm_loop core/main.c:712
#7 0x559e3c1e3562 in start_thread core/main.c:249
#8 0x7f4916b25fa2 in start_thread /build/glibc-vjB4T1/glibc-2.28/nptl/pthread_create.c:486

SUMMARY: AddressSanitizer: heap-use-after-free core/timer.c:59 in timer_handler
Shadow bytes around the buggy address:
0x0c307fffbfc0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffbfd0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffbfe0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffbff0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffc000: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c307fffc010: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd[fd]
0x0c307fffc020: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc030: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc040: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc050: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc060: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd

==468==ERROR: AddressSanitizer: heap-use-after-free on address 0x618000020080 at pc 0x559e3c00aed1 bp 0x7ffdd5f07d90 sp 0x7ffdd5f07d80
READ of size 8 at 0x618000020080 thread T0 (mevent)
#0 0x559e3c00aed0 in virtio_poll_timer hw/pci/virtio/virtio.c:82
#1 0x559e3c1f0832 in timer_handler core/timer.c:59
#2 0x559e3c1c42bb in mevent_handle core/mevent.c:203
#3 0x559e3c1c754e in mevent_dispatch core/mevent.c:441
#4 0x559e3c1e527f in main core/main.c:1091
#5 0x7f49155a609a in __libc_start_main ../csu/libc-start.c:308
#6 0x559e3bebbdfd in _start (/usr/bin/acrn-dm+0x1eadfd)

0x618000020080 is located 0 bytes inside of 784-byte region [0x618000020080,0x618000020390)
freed by thread T81 (vcpu 0) here:
#0 0x559e3bfa6a4f in __interceptor_free (/usr/bin/acrn-dm+0x2d5a4f)
#1 0x559e3c15ac40 in virtio_rnd_deinit hw/pci/virtio/virtio_rnd.c:514
#2 0x559e3c0ea7a5 in pci_emul_deinit hw/pci/core.c:1048
#3 0x559e3c0eb241 in deinit_pci hw/pci/core.c:1620
#4 0x559e3c1e117d in vm_reset_vdevs core/main.c:581
#5 0x559e3c1e13dc in vm_system_reset core/main.c:635
#6 0x559e3c1e2545 in vm_loop core/main.c:712
#7 0x559e3c1e3562 in start_thread core/main.c:249
#8 0x7f4916b25fa2 in start_thread /build/glibc-vjB4T1/glibc-2.28/nptl/pthread_create.c:486

previously allocated by thread T81 (vcpu 0) here:
#0 0x559e3bfa7046 in calloc (/usr/bin/acrn-dm+0x2d6046)
#1 0x559e3c15c7de in virtio_rnd_init hw/pci/virtio/virtio_rnd.c:394
#2 0x559e3c0d8c54 in pci_emul_init hw/pci/core.c:1034
#3 0x559e3c0ebb25 in init_pci hw/pci/core.c:1419
#4 0x559e3c1e1280 in vm_reset_vdevs core/main.c:595
#5 0x559e3c1e13dc in vm_system_reset core/main.c:635
#6 0x559e3c1e2545 in vm_loop core/main.c:712
#7 0x559e3c1e3562 in start_thread core/main.c:249
#8 0x7f4916b25fa2 in start_thread /build/glibc-vjB4T1/glibc-2.28/nptl/pthread_create.c:486

SUMMARY: AddressSanitizer: heap-use-after-free hw/pci/virtio/virtio.c:82 in virtio_poll_timer
Shadow bytes around the buggy address:
0x0c307fffbfc0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffbfd0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffbfe0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffbff0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c307fffc000: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c307fffc010:[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc020: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc030: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc040: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc050: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c307fffc060: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd

==468==ERROR: AddressSanitizer: SEGV on unknown address 0x559e3d000001 (pc 0x559e3c00ad72 bp 0x7ffdd5f07de0 sp 0x7ffdd5f07da0 T0)
==468==The signal is caused by a READ memory access.
#0 0x559e3c00ad71 in virtio_poll_timer hw/pci/virtio/virtio.c:83
#1 0x559e3c1f0832 in timer_handler core/timer.c:59
#2 0x559e3c1c42bb in mevent_handle core/mevent.c:203
#3 0x559e3c1c754e in mevent_dispatch core/mevent.c:441
#4 0x559e3c1e527f in main core/main.c:1091
#5 0x7f49155a609a in __libc_start_main ../csu/libc-start.c:308
#6 0x559e3bebbdfd in _start (/usr/bin/acrn-dm+0x1eadfd)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV hw/pci/virtio/virtio.c:83 in virtio_poll_timer

shuox added the status: new label on Jun 4, 2021

shuox commented Jun 4, 2021

[External_System_ID] ACRN-7105

shuox pushed a commit to shuox/acrn-hypervisor that referenced this issue Jun 4, 2021
With virtio polling mode enabled, a timer runs in the virtio
backend service. That timer still fires if the frontend driver
did not reset the device during shutdown, so the polling timer
handler ends up accessing a freed virtio device.

Invoke the virtio reset() callback explicitly to clear the polling
timer before the device is freed.

Tracked-On: projectacrn#6147
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>

liudlong commented Jun 4, 2021

Verified the patch on Yaag; it works OK.

wenlingz pushed a commit that referenced this issue Jun 4, 2021
yonghuah pushed a commit to yonghuah/acrn-hypervisor that referenced this issue Jun 30, 2021
yonghuah pushed a commit to yonghuah/acrn-hypervisor that referenced this issue Jul 1, 2021
yonghuah added a commit to yonghuah/acrn-hypervisor that referenced this issue Jul 1, 2021
wenlingz pushed a commit that referenced this issue Jul 2, 2021
wenlingz pushed a commit that referenced this issue Jul 2, 2021
wenlingz pushed a commit that referenced this issue Jul 2, 2021
yonghuah pushed a commit to yonghuah/acrn-hypervisor that referenced this issue Jul 14, 2021
wenlingz pushed a commit that referenced this issue Jul 15, 2021