corelog chokes coredevice #986
That's probably because of moninj. Moninj, session, and corelog all compete for CPU time, and since we use a round-robin scheduler without any kind of readiness notification mechanism, adding one more thread increases the average latency by poll time plus schedule time plus (in the case of corelog) two context switch times.
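The latency argument above can be sketched numerically. This is a toy model, not ARTIQ's actual scheduler: the cycle costs (`POLL`, `SCHEDULE`, `CTX_SWITCH`) and the function name are hypothetical, chosen only to show that without readiness notification every added thread lengthens the polling rotation and therefore the average wakeup latency.

```python
POLL = 1000        # hypothetical cycles to poll one thread for readiness
SCHEDULE = 500     # hypothetical cycles for the scheduler to pick a thread
CTX_SWITCH = 2000  # hypothetical cycles per context switch

def avg_wakeup_latency(n_threads):
    # Round-robin without readiness notification: an event waits, on average,
    # half a full rotation (every thread polled once) before its thread is
    # reached, plus one scheduling decision and two context switches
    # (switch in to handle the event, switch back out).
    full_rotation = n_threads * (POLL + SCHEDULE)
    return full_rotation / 2 + SCHEDULE + 2 * CTX_SWITCH
```

Under this model, each extra thread adds a fixed `(POLL + SCHEDULE) / 2` to the average latency, which is why starting one more controller measurably slows everything else down.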
And poll time plus schedule time plus two ctx switches are 400k cycles? Or is that the scheduler tick?
The scheduler is cooperative so it doesn't have ticks. It switches to the next thread once the current one blocks on something.
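The tickless cooperative model described above can be sketched with generators. This is purely illustrative and not ARTIQ's implementation: each "thread" is a generator that yields when it would block, and the scheduler just rotates through the ready queue with no timer interrupts at all.

```python
import collections

def scheduler(threads):
    # Cooperative round-robin: run each thread until it blocks (yields),
    # then rotate it to the back of the queue. No preemption, no ticks.
    ready = collections.deque(threads)
    trace = []
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))  # run until the thread blocks
            ready.append(t)        # re-queue it behind the others
        except StopIteration:
            pass                   # thread finished, drop it
    return trace

def worker(name, steps):
    # A toy thread: "blocks" (yields) after each unit of work.
    for i in range(steps):
        yield f"{name}:{i}"
```

Running `scheduler([worker("a", 2), worker("b", 2)])` interleaves the two workers strictly in turn, which is exactly why a slow or busy thread directly delays every other thread's next slice.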
The 400k cycles were spent rotating the log ring buffer in place. I am now observing the exact same performance with or without aqctl_corelog.
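To see why an in-place rotation is so expensive, compare it with simply tracking a head index. This sketch is illustrative only (not ARTIQ's buffer code): rotating moves every element so the oldest entry lands at index 0, whereas keeping a head index makes the rotation itself free and defers the O(n) copy to the moment the log is actually read out.

```python
def rotate_in_place(buf, head):
    # O(n) every time: physically moves every element so the oldest
    # entry sits at index 0.
    return buf[head:] + buf[:head]

def view_without_rotation(buf, head):
    # The buffer is never moved; readers start at `head` and wrap around.
    # Materializing the view still costs O(n), but only when reading.
    return [buf[(head + i) % len(buf)] for i in range(len(buf))]
```

Both produce the same logical ordering, so a reader sees identical output either way; the difference is purely where the memory traffic happens.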
@whitequark Please fix this in release-3 as well. |
Ah sorry, didn't notice the milestone. |
On a freshly started coredevice with `artiq_run` and nothing else running:

Now starting `aqctl_corelog` and reattempting the experiment:

Persists after killing `aqctl_corelog`. But after a while (in this case ~5 minutes) it recovers:

`artiq_master` and `artiq_run` are also different. On `artiq_master` the RPCs take ~5ms while on `artiq_run` they are around 2.5ms. It may be that `break_realtime()` and/or the fact that `scheduler.check_pause()` is used as an RPC matter.