kernel - Major vm_page, lwkt thread, and other changes
* Remove the rest of the LWKT fairq code, it may be added back in a different
  form later.  Go back to the strict priority model with round-robining
  of same-priority LWKT threads.

  Currently the model scans gd_tdrunq for sort insertion, which is probably
  a bit too inefficient.
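
  A minimal sketch of that model, assuming a singly-linked run queue; the
  names and layout are hypothetical, not the actual lwkt code:

      #include <stddef.h>

      struct thread {
              int            td_pri;        /* higher value = higher priority */
              struct thread *td_next;
      };

      /*
       * Sorted insertion, descending priority.  A thread is placed after
       * any queued threads of equal priority, which is what round-robins
       * same-priority threads.  The linear scan is the inefficiency noted
       * above.
       */
      static void
      runq_insert(struct thread **headp, struct thread *td)
      {
              struct thread **scan = headp;

              while (*scan != NULL && (*scan)->td_pri >= td->td_pri)
                      scan = &(*scan)->td_next;
              td->td_next = *scan;
              *scan = td;
      }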

* Refactor the LWKT scheduler clock.  The round-robining is now based on
  the head of gd->gd_tdrunq and the lwkt_schedulerclock() function will
  move it.  When a thread not on the head is selected to run (because
  the head is contending on a token), the round-robin tick will force a
  resched on the next tick.  As before, we never reschedule-ahead the
  kernel scheduler helper thread or threads that have already dropped
  to a user priority.
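
  A sketch of the per-tick decision, reusing the hypothetical runq_insert()
  above; the real lwkt_schedulerclock() differs in detail (for instance it
  also applies the helper/user-priority exemptions just mentioned):

      static int need_resched;      /* stand-in for need_lwkt_resched() */

      static void
      scheduler_tick(struct thread *curtd, struct thread **headp)
      {
              if (curtd == *headp) {
                      /*
                       * The head received its tick: unlink it and re-insert
                       * it behind any other runnable threads of the same
                       * priority (round-robin).
                       */
                      *headp = curtd->td_next;
                      runq_insert(headp, curtd);
              } else {
                      /*
                       * A non-head thread is running because the head is
                       * contending on a token; force a resched next tick.
                       */
                      need_resched = 1;
              }
      }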

* The token code now tries a little harder to acquire the token before
  giving up, controllable with lwkt.token_spin and lwkt.token_delay
  (token_spin is the number of times to try and token_delay is the delay
  between tries, in nanoseconds).
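
  The behavior can be modeled in userland roughly as follows; the default
  values are made up, and the real tunables are the lwkt.token_spin and
  lwkt.token_delay sysctls:

      #include <stdatomic.h>
      #include <time.h>

      static int  token_spin  = 10;         /* attempts before giving up */
      static long token_delay = 500;        /* nanoseconds between attempts */

      static int
      token_try_hard(atomic_flag *tok)
      {
              struct timespec ts = { 0, token_delay };
              int i;

              for (i = 0; i < token_spin; ++i) {
                      if (!atomic_flag_test_and_set(tok))
                              return 1;             /* token acquired */
                      nanosleep(&ts, NULL);         /* stand-in for an
                                                     * in-kernel delay */
              }
              return 0;             /* give up; caller must block and retry */
      }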

* Fix a serious bug in usched_bsd4.c which improperly reassigned the 'dd'
  variable and caused the scheduler helper to monitor the wrong dd
  structure.
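
  The broken code is not quoted here, but the bug class is easy to show;
  the following is a purely hypothetical reconstruction, not the actual
  usched_bsd4.c code:

      struct dd_info { int runqcount; };        /* hypothetical */
      struct dd_info pcpu_dd[64];               /* hypothetical per-cpu data */

      static void
      sched_helper(int mycpu)
      {
              struct dd_info *dd = &pcpu_dd[mycpu];  /* the helper's own dd */
              int n;

              for (;;) {
                      for (n = 0; n < 64; ++n) {
                              dd = &pcpu_dd[n];      /* BUG: clobbers dd; a
                                                      * separate scan variable
                                                      * should be used */
                      }
                      /*
                       * dd no longer points at &pcpu_dd[mycpu] here, so the
                       * helper monitors the wrong cpu's structure.
                       */
              }
      }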

* Refactor the vm_page coloring code.  On SMP systems we now use the
  coloring code to implement cpu localization when allocating pages.
  The pages are still 'twisted' based on their physical address so both
  functions are served, but cpu localization is now the more important
  function.
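
  One way to read this: the page queues are partitioned into per-cpu bands,
  and the physical-address twist is preserved within each band.  A
  hypothetical index computation (the real vm_page.c logic differs):

      #define PQ_L2_SIZE 256    /* number of page queues; illustrative value */

      static int
      page_queue_index(unsigned long phys_pindex, int cpuid, int ncpus)
      {
              int band = PQ_L2_SIZE / ncpus;  /* queues per cpu; assumes
                                               * ncpus divides PQ_L2_SIZE */

              /*
               * Cpu-local band first (localization), physical-address
               * twist within the band (cache coloring).
               */
              return cpuid * band + (int)(phys_pindex % band);
      }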

* Implement NON-OBJECT vm_page allocations.  NULL may now be passed, which
  allocates a VM page unassociated with any VM object.  This will be
  used by the pmap code.
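
  In kernel code the new case would be used roughly as follows; the flag
  choice is illustrative only:

      #include <vm/vm_page.h>           /* kernel-only header */

      static vm_page_t
      alloc_unassociated_page(void)
      {
              /*
               * NULL object, pindex 0: the returned page carries no VM
               * object association (m->object == NULL), as the pmap
               * code wants.
               */
              return (vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL));
      }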

* Implement cpu localization for zalloc() and friends.  This removes a major
  contention point when handling concurrent VM faults.  The only major
  contention point left is the PQ_INACTIVE vm_page_queues[] queue.
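
  A minimal model of the idea, with one free list per cpu so the common
  path never touches another cpu's list; this is a hypothetical userland
  sketch, not the kernel's zalloc():

      #include <stdlib.h>

      #define NCPU_MAX 64

      struct zitem { struct zitem *next; };

      struct zone_model {
              struct zitem *pcpu_free[NCPU_MAX];    /* per-cpu free lists */
              size_t        itemsize;
      };

      static void *
      zalloc_model(struct zone_model *z, int cpuid)
      {
              struct zitem *it = z->pcpu_free[cpuid];

              if (it != NULL) {
                      /*
                       * Cpu-local pop: no lock needed if this list is only
                       * ever touched from its own cpu.
                       */
                      z->pcpu_free[cpuid] = it->next;
                      return it;
              }
              return malloc(z->itemsize);   /* stand-in for a slab refill */
      }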

* Temporarily remove the VM_ALLOC_ZERO request.  This will probably be
  reenabled in a later commit.

* Remove MSGF_NORESCHED (it is not being used) and simplify related
  lwkt scheduler functions.

* schedcpu_stats() and schedcpu_resource() no longer stall the callout
  kernel threads when scanning allproc: if proc->p_token cannot be
  acquired, the process is simply skipped (see the kern_synch.c diff
  below).

* Move the need_lwkt_resched() from hardclock() to lwkt_schedulerclock()
  (which hardclock() calls).
Matthew Dillon committed Oct 26, 2011
1 parent d7f4c45 commit 85946b6
Showing 16 changed files with 490 additions and 390 deletions.
sys/kern/kern_clock.c (11 changes: 1 addition & 10 deletions)
@@ -543,22 +543,13 @@ hardclock(systimer_t info, int in_ipi __unused, struct intrframe *frame)
         /*
          * lwkt thread scheduler fair queueing
          */
-        lwkt_fairq_schedulerclock(curthread);
+        lwkt_schedulerclock(curthread);

         /*
          * softticks are handled for all cpus
          */
         hardclock_softtick(gd);

-        /*
-         * The LWKT scheduler will generally allow the current process to
-         * return to user mode even if there are other runnable LWKT threads
-         * running in kernel mode on behalf of a user process. This will
-         * ensure that those other threads have an opportunity to run in
-         * fairly short order (but not instantly).
-         */
-        need_lwkt_resched();
-
         /*
          * ITimer handling is per-tick, per-cpu.
          *
sys/kern/kern_synch.c (10 changes: 8 additions & 2 deletions)
@@ -212,7 +212,10 @@ schedcpu_stats(struct proc *p, void *data __unused)
                 return(0);

         PHOLD(p);
-        lwkt_gettoken(&p->p_token);
+        if (lwkt_trytoken(&p->p_token) == FALSE) {
+                PRELE(p);
+                return(0);
+        }

         p->p_swtime++;
         FOREACH_LWP_IN_PROC(lp, p) {
@@ -249,7 +252,10 @@ schedcpu_resource(struct proc *p, void *data __unused)
                 return(0);

         PHOLD(p);
-        lwkt_gettoken(&p->p_token);
+        if (lwkt_trytoken(&p->p_token) == FALSE) {
+                PRELE(p);
+                return(0);
+        }

         if (p->p_stat == SZOMB || p->p_limit == NULL) {
                 lwkt_reltoken(&p->p_token);
sys/kern/lwkt_msgport.c (9 changes: 2 additions & 7 deletions)
@@ -238,18 +238,13 @@ _lwkt_initport(lwkt_port_t port,
  * Schedule the target thread. If the message flags contains MSGF_NORESCHED
  * we tell the scheduler not to reschedule if td is at a higher priority.
  *
- * This routine is called even if the thread is already scheduled so messages
- * without NORESCHED will cause the target thread to be rescheduled even if
- * prior messages did not.
+ * This routine is called even if the thread is already scheduled.
  */
 static __inline
 void
 _lwkt_schedule_msg(thread_t td, int flags)
 {
-        if (flags & MSGF_NORESCHED)
-                lwkt_schedule_noresched(td);
-        else
-                lwkt_schedule(td);
+        lwkt_schedule(td);
 }

 /*