hpet and timer rework #169
Conversation
…ds with shift/mask
@@ -38,5 +38,5 @@ void init_net(kernel_heaps kh)
     rprintf("init net page alloc %p\n", backed);
     lwip_heap = allocate_mcache(h, backed, 5, 11, PAGESIZE);
     lwip_init();
-    register_periodic_timer(milliseconds(500), closure(h, timeout));
+    register_periodic_timer(milliseconds(10), closure(h, timeout));
We can take this out; it does nothing. In fact, if you remove this timer altogether you get a 10x perf boost, i.e. 1000 requests in 1.2 sec vs 1000 requests in 9.8 sec.
Just to add: the usermode perf issue was mostly because of runloop stalling, and now that we've switched to HPET, that goes away.
- Yes, I did observe that the running timer seems to slow things down somewhat (although much less at 100 Hz than at 2 Hz). However, according to lwip/doc/rawapi.txt:
sys_check_timeouts()
When the system is running, you have to periodically call
sys_check_timeouts() which will handle all timers for all protocols in
the stack; add this to your main loop or equivalent.
So I don't think the solution is to just remove it, but rather to find out why it's slowing things down. Without this call (or LWIP_TIMERS_CUSTOM, should we choose to support it), LWIP has no way to handle timeout-based events.
- I suspect there is a deeper issue with interrupt handling, and turning up the runloop timer frequency is kind of masking it. I'm looking into that now w.r.t. main-loop: WARNING: I/O thread spun for 1000 iterations #114.
x86_64/service.c (Outdated)
    // which should limit the skew
-   timer_check();
+   u64 timeout = MIN(timer_check(), milliseconds(100));
Make milliseconds(100) a const outside the loop; any decent compiler would do it automatically, but just in case...
These changes give us proper HPET support as well as an interval timer to be used for the runloop. Previously, we depended on a single, periodic LAPIC timer which was incorrectly set to a fixed and very long period (~1 s ... configure_timer() was completely ignoring the rate!). This appears to be related to much of the slowness seen with LWIP connection processing (see below).
These changes address improper access of HPET registers (by specification they only allow 32- and 64-bit access, and qemu accordingly ignores smaller accesses), implement HPET one-shot and periodic timers, and replace the use of the periodic LAPIC timer with an interval timer within the runloop, based on the nearest timeout in the timer heap (or 100 ms, whichever is less). While we're at it, crank up the LWIP periodic timer to 100 Hz.
with master:
with timer_fix: