adsp: cavs: remove irq set at clear mask #11
Conversation
There will be an unhandled IRQ when we enable the IRQ by clearing the mask. Remove the IRQ set. Signed-off-by: Pan Xiuli <xiuli.pan@linux.intel.com>
@xiulipan is this still valid after our conversation at OSTS? Apologies for the delay, I forgot about this. Please always ping me in the future if I don't get back within 24 hrs.
Btw, we will still need to add the breaks where needed.
@lgirdwood
@xiulipan is this PR still valid? I'm trying to boot the APL FW and I don't see the correct value in the IPC register with or without this PR.
@lgirdwood @ranj063 It is weird that we do not need this PR anymore, as it would cause an IRQ that no one cares about in the old FW. I think the FW has fixed the issue, so we do not need this fix anymore. I will open another PR for the original intent.
@lgirdwood any idea how to make sure there is no IRQ at the beginning of FW load? I tried removing this line to avoid an IRQ that no one cares about at startup, but it seems I also blocked normal IRQs. Can you take a look and give some advice on how to get this right?
Currently offloads disabled by the guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET command are not preserved on VM migration. Instead, all offloads reported by the guest features (via VIRTIO_PCI_GUEST_FEATURES) get enabled.

What happens is: first VirtIONet::curr_guest_offloads gets restored and the offloads are set correctly:

#0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
#1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
#2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
#3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:168
#4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
#5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
#6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
#7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
#8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
#9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
#10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
#11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449

However, later on the features are restored, and the offloads get reset to everything supported by the features:

#0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
#1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
#2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
#3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
#4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
#5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
#6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
#7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
#8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
#9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
#10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
#11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449

Fix this by preserving the state in the saved_guest_offloads field and pushing out offload initialization to the new post-load hook.

Cc: qemu-stable@nongnu.org
Signed-off-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>