Race condition when using wait/cont on target in combination with hooks #54
@mariusmue I see you self-assigned this issue. I have a deadline coming up and need this code to be working. Of course, I can't and don't have any expectations of you (either fixing this within a certain time period or fixing it at all): this is an open-source project and is delivered as is. However, I still need this, and would be willing to fix it myself. Could you give me some pointers so I can look into it? Where do you think the issue is? Could you describe, at a high level, how to fix this?
Hi @redfast00. In any case, if you need a hotfix to get the code working for your assignment, here is what you can do:
Will try that, thanks for the quick reply!
How did it go? I just realized that a workaround requiring even fewer changes in your program is inserting a small sleep after the cont() or the wait() (see the sketch below). While this is certainly no solution to the underlying problem, it may help in your specific case. Meanwhile, I'm trying to figure out how to fix this behaviour.
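For concreteness, a minimal sketch of that workaround, assuming `target` is an avatar2 target exposing cont() and wait(); the helper names and the 0.1-second delay are arbitrary choices, not part of avatar2:

```python
import time

def cont_and_settle(target, settle=0.1):
    """Continue the target, then sleep briefly so watchmen callbacks
    get a chance to finish before the caller inspects the target state."""
    target.cont()
    time.sleep(settle)

def wait_and_settle(target, settle=0.1):
    """Wait for the target to stop, then sleep briefly for the same reason."""
    target.wait()
    time.sleep(settle)
```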
* guard waiting for watchmen via events: On rare occasions, a race can occur where watchmen callbacks are still running while the main thread already consumes the update-state message. This commit guards those callbacks with an additional event on which the target's wait() waits. However, there may still be a small TOCTOU window on that event, as the waiting uses a timeout (for state updates which do not generate watchmen callbacks). Hence, in the long run, the full code dealing with state updates could benefit from refactoring.
* added smoketest for #54 (thanks @redfast00)
* Remove manual keystone-library copy from CI: It seems that keystone version 0.9.2 (as on PyPI) has fixed the issues regarding the installation path of keystone.
* Various fixes found by CI
* Add processing_callbacks to dictify-ignore list: Event() objects cannot be serialized, so it should either be private or part of the ignore list. As the watchmen access this Event independently, it is added to the ignore list.
* Fix assert statement in smoketest to be py2-compliant
* Set environment variables for the smoketest
I rolled out a fix with #55. Let me know if you still encounter any problems. For now, I'm closing this issue.
@mariusmue The issue is still present in my codebase; unfortunately, I can't share the binary and code needed to reproduce it (the binary is proprietary, and the code contains parts of the proprietary binary). However, this line (c03fc2b#diff-ac6fa11e7e5c294778098e85088305f3R456) looks suspicious: the return value of the wait is not checked, so even if the event has not been set, execution will progress past it anyway after the timeout.
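For illustration only (hypothetical names, not the avatar2 code at that line): threading.Event.wait(timeout) returns False when the timeout expires without the event being set, so discarding that return value means execution continues even though the event was never set.

```python
import threading

processing_callbacks = threading.Event()  # hypothetical stand-in for the real event

# Pattern being criticised: the return value is discarded, so after the
# timeout we fall through even if the event was never set.
processing_callbacks.wait(0.1)

# Pattern that honours the event: keep waiting until wait() returns True,
# i.e. until the event really has been set.
while not processing_callbacks.wait(0.1):
    pass  # timed out without the event being set; keep waiting
```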
This seems to work, but I'm scared of it breaking unexpectedly: with a 0.01 sleep after cont() as suggested, it crashes; with 0.1, it works but is slow. Edit: s/it works but is slow/it works sometimes but is slow/
I modified the code to be:

```python
@watch('TargetWait')
def wait(self, state=TargetStates.STOPPED | TargetStates.EXITED):
    while True:
        self.processing_callbacks.wait(.1)
        if (self.state & state != 0) and self.processing_callbacks.is_set():
            break
        elif (self.state & state != 0) and not self.processing_callbacks.is_set():
            print("not set, but we would continue")
```

It then gets stuck in an infinite loop, where it keeps printing:

```
not set, but we would continue
```

This shows that the processing_callbacks event never gets set in this case.
I can confirm that with the original testcase I published and the modified code from the post above, it also keeps printing.
It's not just a glorified sleep issue. If you check the commit message, I am well aware that there is still a risk of a race. The current problem is that not all events causing a state transition are handled via the watchmen state-transition guard. Hence, the event is not set/cleared in all cases, which is of course suboptimal - that is why I resorted to a timeout - which, as we see, does not produce the intended behaviour in your case (see the sketch below).
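To make the failure mode concrete, here is a toy model of what is being described (simplified, hypothetical code, not avatar2's implementation): if some state transitions bypass the guard that sets the event, a waiter that requires both the target state and the event can spin forever, which matches the output above.

```python
import threading

class ToyTarget:
    """Simplified model of the race: two code paths update `state`, but only
    one of them goes through the guard that sets `processing_callbacks`."""

    def __init__(self):
        self.state = 'RUNNING'
        self.processing_callbacks = threading.Event()

    def guarded_update(self, new_state):
        # Path that runs watchmen callbacks: clear, run callbacks, then set.
        self.processing_callbacks.clear()
        # ... watchmen callbacks would run here ...
        self.state = new_state
        self.processing_callbacks.set()

    def unguarded_update(self, new_state):
        # Path that bypasses the guard: the event is never set, so a waiter
        # that also checks the event is never released.
        self.state = new_state

    def wait(self, wanted='STOPPED'):
        while True:
            self.processing_callbacks.wait(0.1)
            if self.state == wanted and self.processing_callbacks.is_set():
                break
            elif self.state == wanted:
                # Spins forever after unguarded_update(): state matches,
                # but the event was never set.
                print("not set, but we would continue")
```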
Oh, my bad, you did mention this in your commit message; my apologies for not reading it.
I am willing to do this refactor. Is there high-level documentation of the architecture of the avatar2 codebase somewhere? Would the original paper be a good read for this?
Still, as the problem persists, I reopened the issue for now and will look for further solutions.
Oh, if you really want to do this refactor, I guess the fastest way to get an overview is scheduling a (screen-shared) call with me, or joining the avatar2 Slack for further synchronization. In any case, this would need some more thought on how to get a clean synchronization model - and may not be feasible within the limited time until your deadline. I also have an independent commit which would solve your issue for callbacks after breakpoints (by turning a breakpoint hit into an event with two state transitions, where the callbacks run during the first one; see the sketch below). //edit: The avatar2 paper unfortunately does not describe the implementation details in depth.
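A rough sketch of that two-tier-breakpoint idea (an illustration of the concept with hypothetical names, not the actual commit): the breakpoint hit first produces an intermediate transition during which the synchronous callbacks run, and only the second transition releases wait().

```python
import threading

# Hypothetical state names for the two-tier breakpoint model.
BREAKPOINT_HIT = 'BREAKPOINT_HIT'   # tier 1: callbacks run here
STOPPED = 'STOPPED'                 # tier 2: wait() is released here

class TwoTierTarget:
    def __init__(self):
        self.state = 'RUNNING'
        self._stopped = threading.Event()
        self.callbacks = []

    def on_breakpoint(self):
        # Tier 1: enter an intermediate state and run the synchronous
        # callbacks while wait() is still blocked.
        self.state = BREAKPOINT_HIT
        for cb in self.callbacks:
            cb(self)
        # Tier 2: only now perform the transition that wakes up wait().
        self.state = STOPPED
        self._stopped.set()

    def wait(self):
        self._stopped.wait()
```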
Yes, that's the main issue for me: beating the deadline. I'm currently blocked on this issue (all other parts are done), and a large part of my codebase was written against Avatar2 (making switching to another approach hard), so it makes sense for me to do whatever it takes to get this resolved, either using your independent commit or refactoring Avatar2. I only need breakpoint hooks working: hooks on memory reads and writes would be nice if I want to polish my work later on, but for now the BreakpointHit event will suffice.
I will spend some more hours on this today. If nothing succeeds, expect an extra branch with a two-tier-breakpoint-based approach by tomorrow morning.
In the meantime, for testing, feel free to just increase the timeout on the wait. In my tests, it's pretty easy to see that in many cases the event mechanism seems to succeed:
I pushed you this branch: https://github.com/avatartwo/avatar2/tree/54/two-tier-bp The more I think about the issue, the more evident it becomes that we need a code refactor and a better state-update model. As of now, the state update on a target is triggered via the avatar2 fastqueue, since state updates can come in at any time, asynchronously (a simplified model of this flow is sketched below). Edit: Updated the linked branch to the correct one.
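A simplified model of the asynchronous state-update flow being described (hypothetical names; not the actual fastqueue code): a consumer thread applies state updates and then runs callbacks, so the main thread can observe the new state before those callbacks have finished.

```python
import queue
import threading

update_queue = queue.Queue()   # stand-in for the avatar2 fastqueue
state = {'value': 'RUNNING'}

def run_callbacks(new_state):
    pass                       # placeholder for watchmen callbacks

def consumer():
    # Separate thread: applies incoming state updates, then runs callbacks.
    # The main thread may already see state == 'STOPPED' while the callbacks
    # for that transition are still executing - that is the race.
    while True:
        new_state = update_queue.get()
        state['value'] = new_state
        run_callbacks(new_state)

threading.Thread(target=consumer, daemon=True).start()
```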
The code on the 54/two-tier-bp branch …
This fix has made it into main by now; hence, I'm closing this issue.
Circumstances
I am using the GDBTarget target to connect to QEMU. I can't use the built-in QEMU target because QEMU gets spawned from another process. There seems to be an issue with synchronous hooks behaving asynchronously. I have the following testcase to reproduce the issue:
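The original testcase snippet is not preserved in this thread excerpt; below is a minimal sketch of the kind of setup being described, loosely based on the avatar2 examples. Details such as the breakpoint address, the callback signature, and the exact GDBTarget parameters are assumptions and may need adjusting.

```python
from avatar2 import Avatar, ARM, GDBTarget, TargetStates

# Address to break on inside qemu_arm_test; placeholder value.
BREAKPOINT_ADDR = 0x00000000

def breakpoint_hook(avatar, message, **kwargs):
    # A synchronous hook: per the report, the target should already be
    # STOPPED here, but sometimes it is not.
    target = message.origin
    print("state in hook:", target.state)
    assert target.state == TargetStates.STOPPED

avatar = Avatar(arch=ARM)
# QEMU is started externally with "-gdb tcp::1234 -S", so only a GDB
# connection is attached here.
target = avatar.add_target(GDBTarget, gdb_ip='127.0.0.1', gdb_port=1234)
avatar.init_targets()

avatar.watchmen.add_watchman('BreakpointHit', when='after',
                             callback=breakpoint_hook)

target.set_breakpoint(BREAKPOINT_ADDR)
target.cont()
target.wait()

avatar.shutdown()
```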
This results in this crash:

See also the following trace with `gdb_verbose_mi` enabled (around messages 15-16):

Steps to reproduce this
Run `qemu-system-arm -machine virt -gdb tcp::1234 -S -nographic -bios qemu_arm_test` in a separate terminal. The `qemu_arm_test` binary is in `avatar2/tests/binaries` of this repository.

Expected behaviour
Machine state is `STOPPED` in synchronous hooks.

Actual behaviour
Synchronous hooks act like async hooks?