Commits on Oct 17, 2010
  1. Defined as tsk_seruntime

    Decad3nce committed Oct 17, 2010
  2. Add 1.2GHz and 600MHz speed scaling steps, cap speed at 1GHz on boot;
     stock voltages and timings are retained for existing modes. (An
     illustrative frequency-table sketch follows this commit list.)
    
    Signed-off-by: Wes Garner <wesgarner@gmail.com>
    Unhelpful committed with nullghost Aug 22, 2010
  3. disable CONFIG_PREEMPT as it causes a panic on boot

    nullghost committed Oct 17, 2010
  4. Squashed set of commits for mutex adaptive spin

    for 2.6.29 (Peter Zijlstra) -- this comes by default in 2.6.31+.
    This is a combination of 6 commits. (Illustrative C sketches of the key
    changes follow this commit list.)
    mutex: small cleanup
    
    Remove a local variable by combining an assignment and test in one.
    
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Wes Garner <wesgarner@gmail.com>
    
    mutex: preemption fixes
    
    The problem is that dropping the spinlock right before schedule is a voluntary
    preemption point and can cause a schedule, right after which we schedule again.
    
    Fix this inefficiency by keeping preemption disabled until we schedule;
    do this by explicitly disabling preemption and providing a schedule()
    variant that assumes preemption is already disabled.
    
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Wes Garner <wesgarner@gmail.com>
    
    mutex: implement adaptive spinning
    
    Change mutex contention behaviour such that it will sometimes busy wait on
    acquisition - moving its behaviour closer to that of spinlocks.
    
    This concept got ported to mainline from the -rt tree, where it was originally
    implemented for rtmutexes by Steven Rostedt, based on work by Gregory Haskins.
    
    Testing with Ingo's test-mutex application (http://lkml.org/lkml/2006/1/8/50)
    gave a 345% boost for VFS scalability on my testbox:
    
     # ./test-mutex-shm V 16 10 | grep "^avg ops"
     avg ops/sec:               296604
    
     # ./test-mutex-shm V 16 10 | grep "^avg ops"
     avg ops/sec:               85870
    
    The key criteria for the busy wait is that the lock owner has to be running on
    a (different) cpu. The idea is that as long as the owner is running, there is a
    fair chance it'll release the lock soon, and thus we'll be better off spinning
    instead of blocking/scheduling.
    
    Since regular mutexes (as opposed to rtmutexes) do not atomically track the
    owner, we add the owner in a non-atomic fashion and deal with the races in
    the slowpath.
    
    Furthermore, to ease testing of the performance impact of this new code,
    there is a means to disable this behaviour at runtime (without having to
    reboot the system) when scheduler debugging is enabled
    (CONFIG_SCHED_DEBUG=y), by issuing the following command:
    
     # echo NO_OWNER_SPIN > /debug/sched_features
    
    This command re-enables spinning again (this is also the default):
    
     # echo OWNER_SPIN > /debug/sched_features
    
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Wes Garner <wesgarner@gmail.com>
    
    mutex: set owner in mutex_lock() only
    
    mutex_lock() sets the lock owner, no need to set it upfront in
    __mutex_lock_common().
    
    Inside __mutex_lock_common() we can cope with the case where the
    successful acquirer got preempted by us before setting the owner
    field: there is an explicit check in the spinning part where we read
    the owner field optimistically.
    
    The sleeping path won't use that field at all.
    
    The debug code does owner checks only on unlock, where the owner field
    is guaranteed to be set.
    
    [a.p.zijlstra@chello.nl: same for trylock]
    Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Wes Garner <wesgarner@gmail.com>
    
    mutex: adaptive spin for debug too
    
    Johannes' argumentation shows the flaw in mine and testing does indeed confirm
    that the adaptive spin works just fine with the debug code.
    
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Wes Garner <wesgarner@gmail.com>
    
    mutex: adaptive spin performance tweaks
    
    Ok, numbers first, incremental below:
    
    * dbench 50 (higher is better):
    spin        1282MB/s
    v10         548MB/s
    v10 no wait 1868MB/s
    
    * 4k creates (numbers in files/second higher is better):
    spin        avg 200.60 median 193.20 std 19.71 high 305.93 low 186.82
    v10         avg 180.94 median 175.28 std 13.91 high 229.31 low 168.73
    v10 no wait avg 232.18 median 222.38 std 22.91 high 314.66 low 209.12
    
    * File stats (numbers in seconds, lower is better):
    spin        2.27s
    v10         5.1s
    v10 no wait 1.6s
    
    This patch brings v10 up to v10 no wait.  The changes are smaller than
    they look; I just moved the need_resched checks in __mutex_lock_common
    after the cmpxchg.
    
    Signed-off-by: Chris Mason <chris.mason@oracle.com>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Wes Garner <wesgarner@gmail.com>
    Peter Zijlstra committed with nullghost Jan 14, 2009
  5. update defconfig for interactive cpu governor

    nullghost committed Oct 17, 2010
  6. integrate interactive cpufreq governor

    darchstar committed with nullghost Aug 17, 2010
  7. update defconfig with bfq iosched

    nullghost committed Oct 17, 2010
  8. integrate bfq iosched

    darchstar committed with nullghost Aug 17, 2010
  9. clean some junk files

    nullghost committed Oct 17, 2010
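
The frequency-table sketch referenced from commit 2: a minimal, compilable
user-space model of the kind of frequency/voltage table that change extends.
The real change lives in the kernel's SoC clock/cpufreq code; every struct
name, frequency step and voltage below is a placeholder, not the repository's
actual data.

    /* Compilable illustration only: a frequency table with the two added
     * steps and a boot-time cap, not the kernel's real tables. */
    #include <stdio.h>

    struct freq_step {
        unsigned int khz;      /* CPU frequency in kHz             */
        unsigned int vdd_mv;   /* core voltage in mV (placeholder) */
    };

    /* 600 MHz and 1.2 GHz are the newly added steps; per the commit message,
     * pre-existing steps keep their stock voltages and timings. */
    static const struct freq_step steps[] = {
        {  245000,  950 },   /* placeholder stock step        */
        {  600000, 1050 },   /* added: 600 MHz                */
        {  998400, 1075 },   /* placeholder stock ~1 GHz step */
        { 1190400, 1250 },   /* added: 1.2 GHz                */
    };

    #define BOOT_CAP_KHZ 998400   /* cap speed at ~1 GHz on boot */

    int main(void)
    {
        for (unsigned i = 0; i < sizeof(steps) / sizeof(steps[0]); i++)
            printf("%7u kHz  %4u mV  enabled at boot: %s\n",
                   steps[i].khz, steps[i].vdd_mv,
                   steps[i].khz <= BOOT_CAP_KHZ ? "yes" : "no");
        return 0;
    }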
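
From commit 4, "mutex: small cleanup": a tiny compilable illustration of
combining an assignment and its test so the local variable disappears. The
field name and values are stand-ins and may not match the kernel's exact code.

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int count = 1;   /* 1 = unlocked, as in kernel mutexes */

    int main(void)
    {
        /* Before the cleanup (shape only):
         *     int old_val = atomic_exchange(&count, -1);
         *     if (old_val == 1) { ... }
         */

        /* After the cleanup: assignment and test combined, local removed. */
        if (atomic_exchange(&count, -1) == 1)
            puts("fast path: lock was uncontended");
        return 0;
    }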
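
From commit 4, "mutex: preemption fixes": a sketch of the ordering the fix
establishes (preemption off before the wait_lock is dropped, then a schedule()
variant that assumes preemption is already disabled), so the unlock itself
cannot become an extra reschedule point. Every function below is a stub
standing in for the kernel primitive it names, not real kernel code.

    #include <stdio.h>

    static void preempt_disable(void)        { puts("preempt_disable()"); }
    static void preempt_enable(void)         { puts("preempt_enable()"); }
    static void spin_unlock_wait_lock(void)  { puts("spin_unlock(&lock->wait_lock)"); }
    static void spin_lock_wait_lock(void)    { puts("spin_lock(&lock->wait_lock)"); }

    /* Stand-in for the schedule() variant that assumes preemption is
     * already disabled, so the waiter blocks exactly once. */
    static void schedule_with_preempt_off(void) { puts("__schedule()"); }

    int main(void)
    {
        preempt_disable();            /* fix: preemption off before the unlock   */
        spin_unlock_wait_lock();      /* previously a voluntary preemption point */
        schedule_with_preempt_off();  /* the intended, single schedule           */
        spin_lock_wait_lock();
        preempt_enable();
        return 0;
    }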
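
From commit 4, "mutex: implement adaptive spinning": a compilable user-space
model of the decision the message describes: keep retrying the cmpxchg while
the observed owner is running on a CPU and no reschedule is needed, otherwise
fall back to blocking. Identifiers such as on_cpu and adaptive_spin are
stand-ins, not the kernel's exact names.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct task { atomic_bool on_cpu; };

    struct mutex {
        atomic_int count;                 /* 1 = unlocked, 0 = locked           */
        _Atomic(struct task *) owner;     /* set non-atomically in the kernel;  */
    };                                    /* races are handled in the slow path */

    static bool need_resched(void) { return false; }   /* stub */

    static bool owner_running(struct mutex *lock)
    {
        struct task *owner = atomic_load(&lock->owner);
        /* No owner recorded (or it just released the lock): retry the trylock. */
        return owner == NULL || atomic_load(&owner->on_cpu);
    }

    /* Returns true if the lock was taken by spinning, false if the caller
     * should block on the wait queue instead. */
    static bool adaptive_spin(struct mutex *lock)
    {
        while (owner_running(lock) && !need_resched()) {
            int unlocked = 1;
            if (atomic_compare_exchange_strong(&lock->count, &unlocked, 0))
                return true;              /* owner released it while we spun */
            /* cpu_relax() in the kernel; plain retry here */
        }
        return false;                     /* owner is off-CPU or a resched is
                                             pending: blocking is the better bet */
    }

    int main(void)
    {
        struct mutex m = { .count = 1, .owner = NULL };
        printf("acquired by spinning: %s\n", adaptive_spin(&m) ? "yes" : "no");
        return 0;
    }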
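
From commit 4, "mutex: set owner in mutex_lock() only": a sketch of the
resulting shape, with the owner recorded once after the lock is actually held
rather than up front in the common slow path; a spinner that reads the field
during the short unset window simply sees no owner and retries its trylock.
The helpers below are stubs, not kernel code.

    #include <stdatomic.h>
    #include <stddef.h>

    struct task;

    struct mutex {
        atomic_int count;
        _Atomic(struct task *) owner;
    };

    static struct task *current_task(void) { return NULL; }   /* stub for "current" */
    static void fast_or_slow_path_lock(struct mutex *m)       /* stub acquisition   */
    {
        atomic_store(&m->count, 0);
    }

    /* After the change: the owner is set here, once, after acquisition;
     * the slow path no longer sets it before the lock is actually owned. */
    static void mutex_lock_model(struct mutex *m)
    {
        fast_or_slow_path_lock(m);
        atomic_store(&m->owner, current_task());
    }

    int main(void)
    {
        struct mutex m = { .count = 1, .owner = NULL };
        mutex_lock_model(&m);
        return 0;
    }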
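
From commit 4, "mutex: adaptive spin performance tweaks": a sketch of the
reordering described there, with the cmpxchg attempt placed before the
need_resched() check so a waiter that could take a just-freed lock does so
instead of bailing out to the scheduler first. Stubs only, not the kernel's
actual control flow.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool need_resched(void) { return true; }   /* pretend a resched is pending */

    static bool try_acquire(atomic_int *count)
    {
        int unlocked = 1;
        return atomic_compare_exchange_strong(count, &unlocked, 0);
    }

    int main(void)
    {
        atomic_int count = 1;   /* the lock happens to be free */

        /* Before the tweak (shape only): need_resched() was checked first,
         * so this waiter would have given up even though the lock is free. */

        /* After the tweak: try the cmpxchg first, then consult need_resched(). */
        if (try_acquire(&count))
            puts("lock taken before bailing out to the scheduler");
        else if (need_resched())
            puts("give up spinning and reschedule");
        return 0;
    }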