
hpet and timer rework #169

Merged
merged 8 commits into from
Oct 26, 2018

Conversation

wjhun
Contributor

@wjhun wjhun commented Oct 26, 2018

These changes give us proper HPET support as well as an interval timer to be used for the runloop. Previously, we depended on a single, periodic LAPIC timer which was incorrectly set to a fixed and very long period (~1 s; configure_timer() was completely ignoring the rate!). This appears to be related to much of the slowness seen with LWIP connection processing (see below).

These changes address improper access of HPET registers (the specification only allows 32- and 64-bit access, and qemu was accordingly ignoring smaller accesses), implement HPET one-shot and periodic timers, and replace the use of the periodic LAPIC timer with an interval timer within the runloop, based on the nearest timeout in the timer heap (or 100 ms, whichever is less). While we're at it, crank up the LWIP periodic timer to 100 Hz.

with master:

$ time test/network_test 127.0.0.1:8080 -connections 100 -requests 1
c: 100 active: 0 req: 300 resp: 100

real	1m40.539s
user	0m0.028s
sys	0m0.000s

with timer_fix:

$ time test/network_test 127.0.0.1:8080 -connections 100 -requests 1
c: 100 active: 0 req: 300 resp: 100

real    0m1.060s
user    0m0.000s
sys     0m0.028s

@wjhun wjhun requested a review from a team October 26, 2018 14:55
@@ -38,5 +38,5 @@ void init_net(kernel_heaps kh)
rprintf("init net page alloc %p\n", backed);
lwip_heap = allocate_mcache(h, backed, 5, 11, PAGESIZE);
lwip_init();
register_periodic_timer(milliseconds(500), closure(h, timeout));
register_periodic_timer(milliseconds(10), closure(h, timeout));
Contributor

We can take this out; this does nothing. In fact, if you remove this timer altogether you get a 10x perf boost, i.e. 1000 requests in 1.2 s vs. 1000 requests in 9.8 s.

Contributor

Just to add: the usermode perf issue was mostly caused by the runloop stalling, and now that we've switched to HPET that goes away.

Contributor Author

Yes, I did observe that the running timer seems to slow things down somewhat (although much less at 100 Hz vs. 2 Hz). However, according to lwip/doc/rawapi.txt, on sys_check_timeouts():

    When the system is running, you have to periodically call
    sys_check_timeouts() which will handle all timers for all protocols in
    the stack; add this to your main loop or equivalent.

So I don't think the solution is to simply remove it, but rather to find out why it's slowing things down. Without this call (or LWIP_TIMERS_CUSTOM, should we choose to support it), LWIP has no way to handle timeout-based events.

x86_64/service.c Outdated
// which should limit the skew
timer_check();

u64 timeout = MIN(timer_check(), milliseconds(100));
Contributor

Make milliseconds(100) a const outside the loop; any decent compiler would do this automatically, but just in case.
