
apicv: cancel event injection if vcpu is scheduled out #8

Merged
merged 1 commit into from Mar 15, 2018

Conversation

fyin1-zz

And re-inject the event after the vcpu is scheduled in.

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
@@ -147,6 +147,9 @@ static void context_switch_out(struct vcpu *vcpu)
if (vcpu == NULL)
return;

/* cancel event(int, gp, nmi and exception) injection */
Contributor
Maybe the vcpu in context_switch_out does not own the current VMCS. In that case exec_vmwrite won't work as expected. It would be better to call exec_vmptrld first, for safety.
Author
Hi Yakui,
Thanks for the comments.
Right now we only have the CPU partitioning feature (no CPU sharing yet), so from a performance perspective we try our best to avoid reloading the VMCS. There is a comment at line 168 explaining this and what should be done if we want to support CPU sharing.

@yakuizhao yakuizhao left a comment
LGTM.

@lijinxia
Contributor

verified+1

@jren1 jren1 merged commit 9848000 into projectacrn:master Mar 15, 2018
gvancuts referenced this pull request in gvancuts/acrn-hypervisor May 9, 2018
yonghuah added a commit to yonghuah/acrn-hypervisor that referenced this pull request Feb 21, 2022
 'mevent_lmutex' is initialized as the default type;
 attempting to recursively lock this kind of
 mutex results in undefined behaviour.

 A recursive lock on 'mevent_lmutex' can be detected
 in the mevent thread when the user tries to trigger a system
 reset from the user VM; in this case, the user VM reboot hangs.

 The backtrace for this issue:
  #1 in mevent_qlock () at core/mevent.c:93
  #2 in mevent_delete_even at core/mevent.c:357
    ===> recursive LOCK
  #3 in mevent_delete_close at core/mevent.c:387
  #4 in acrn_timer_deinit at core/timer.c:106
  #5 in virtio_reset_dev at hw/pci/virtio/virtio.c:171
  #6 in virtio_console_reset at
     hw/pci/virtio/virtio_console.c:196
  #7 in virtio_console_destroy at
    hw/pci/virtio/virtio_console.c:1015
  #8 in virtio_console_teardown_backend at
    hw/pci/virtio/virtio_console.c:1042
  #9 in mevent_drain_del_list () at
    core/mevent.c:348 ===> 1st LOCK
  #10 in mevent_dispatch () at core/mevent.c:472
  #11 in main at core/main.c:1110

  So the root cause is:
  the mevent mutex lock is recursively locked by the mevent thread
  itself (#9 for the first lock and #2 for the recursive lock),
  which is not allowed for a mutex with the default attribute.

  This patch changes the mutex type of 'mevent_lmutex'
  from default to PTHREAD_MUTEX_RECURSIVE, because
  recursive locking shall be allowed, as users of mevent
  may call mevent functions (where the mutex lock may be required)
  in teardown callbacks.

Tracked-On: #7133
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Yu Wang <yu1.wang@intel.com>
yonghuah added a commit to yonghuah/acrn-hypervisor that referenced this pull request Feb 21, 2022
(same commit message as above)
acrnsi-robot pushed a commit that referenced this pull request Feb 21, 2022
(same commit message as above)
6 participants