Commits on Aug 13, 2012
  1. kthread: Implement park/unpark facility

    Thomas Gleixner authored
    To avoid the full teardown/setup of per cpu kthreads in the case of
    cpu hot(un)plug, provide a facility which allows putting the kthread
    into a parked position and unparking it when the cpu comes online again.
    Signed-off-by: Thomas Gleixner <>
    Reviewed-by: Namhyung Kim <>
    Cc: Peter Zijlstra <>
    Reviewed-by: Srivatsa S. Bhat <>
    Cc: Rusty Russell <>
    Reviewed-by: Paul E. McKenney <>
    Signed-off-by: Thomas Gleixner <>
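The parked state can be illustrated with a userspace pthread sketch. This is only an analogy, not the kernel code: `park()`, `unpark()` and `parkme()` are hypothetical names mirroring kthread_park()/kthread_unpark()/kthread_parkme(), with a condition variable standing in for the kernel's wakeup machinery. The point is that the thread blocks in place across "hotplug" instead of being torn down and re-created.

```c
#include <pthread.h>
#include <stdbool.h>
#include <time.h>

struct parkable {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    bool should_park;   /* controller asks the thread to park */
    bool parked;        /* thread acknowledges it is parked */
    bool should_stop;
    long work_done;     /* progress counter, for observation */
};

/* Worker-side, like kthread_parkme(): block until unparked or stopped. */
static void parkme(struct parkable *p)
{
    pthread_mutex_lock(&p->lock);
    while (p->should_park && !p->should_stop) {
        p->parked = true;
        pthread_cond_broadcast(&p->cond);      /* wake the waiting park() */
        pthread_cond_wait(&p->cond, &p->lock); /* sleep until unpark/stop */
    }
    p->parked = false;
    pthread_mutex_unlock(&p->lock);
}

static void *worker(void *arg)
{
    struct parkable *p = arg;
    for (;;) {
        pthread_mutex_lock(&p->lock);
        bool stop = p->should_stop, want_park = p->should_park;
        if (!stop && !want_park)
            p->work_done++;                    /* the "per cpu work" */
        pthread_mutex_unlock(&p->lock);
        if (stop) return NULL;
        if (want_park) parkme(p);
    }
}

/* Controller-side, like kthread_park(): request and wait for the ack. */
static void park(struct parkable *p)
{
    pthread_mutex_lock(&p->lock);
    p->should_park = true;
    while (!p->parked)
        pthread_cond_wait(&p->cond, &p->lock);
    pthread_mutex_unlock(&p->lock);
}

/* Like kthread_unpark(): resume the thread without re-creating it. */
static void unpark(struct parkable *p)
{
    pthread_mutex_lock(&p->lock);
    p->should_park = false;
    pthread_cond_broadcast(&p->cond);
    pthread_mutex_unlock(&p->lock);
}

/* One park/unpark cycle; returns 0 if the thread really paused and resumed. */
int run_park_demo(void)
{
    struct parkable p = { .lock = PTHREAD_MUTEX_INITIALIZER,
                          .cond = PTHREAD_COND_INITIALIZER };
    pthread_t t;
    if (pthread_create(&t, NULL, worker, &p)) return 1;

    park(&p);                                  /* blocked, not torn down */
    pthread_mutex_lock(&p.lock);
    long before = p.work_done;
    pthread_mutex_unlock(&p.lock);
    struct timespec ts = { 0, 50 * 1000 * 1000 };
    nanosleep(&ts, NULL);                      /* give a buggy park a chance */
    pthread_mutex_lock(&p.lock);
    long after = p.work_done;
    pthread_mutex_unlock(&p.lock);

    unpark(&p);
    for (;;) {                                 /* wait until progress resumes */
        pthread_mutex_lock(&p.lock);
        long now = p.work_done;
        pthread_mutex_unlock(&p.lock);
        if (now > after) break;
    }
    pthread_mutex_lock(&p.lock);
    p.should_stop = true;
    pthread_cond_broadcast(&p.cond);
    pthread_mutex_unlock(&p.lock);
    pthread_join(t, NULL);
    return before == after ? 0 : 1;
}
```

The park request is acknowledged before `park()` returns, which is also how the kernel version guarantees the thread is quiescent before the CPU goes down.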
Commits on Jul 22, 2012
  1. kthread_worker: reimplement flush_kthread_work() to allow freeing the work item being executed

    Tejun Heo authored
    kthread_worker provides minimalistic workqueue-like interface for
    users which need a dedicated worker thread (e.g. for realtime
    priority).  It has basic queue, flush_work, flush_worker operations
    which mostly match the workqueue counterparts; however, due to the way
    flush_work() is implemented, it has a noticeable difference of not
    allowing work items to be freed while being executed.
    While the current users of kthread_worker are okay with the current
    behavior, the restriction does impede some valid use cases.  Also,
    removing this difference isn't difficult and actually makes the code
    easier to understand.
    This patch reimplements flush_kthread_work() such that it uses a
    flush_work item instead of queue/done sequence numbers.
    Signed-off-by: Tejun Heo <>
  2. kthread_worker: reorganize to prepare for flush_kthread_work() reimplementation

    Tejun Heo authored
    Make the following two non-functional changes.
    * Separate out insert_kthread_work() from queue_kthread_work().
    * Relocate struct kthread_flush_work and kthread_flush_work_fn()
      definitions above flush_kthread_work().
    v2: Added lockdep_assert_held() in insert_kthread_work() as suggested
        by Andy Walls.
    Signed-off-by: Tejun Heo <>
    Acked-by: Andy Walls <>
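The flush_kthread_work() reimplementation above can be sketched in userspace with a toy kthread_worker-like queue. All names here are illustrative, not the kernel API: flush queues a dedicated flush work item and waits for it, and because the worker never touches a work item after entering its function, items are free to release themselves while executing.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct kwork { struct kwork *next; void (*fn)(struct kwork *); };
struct kworker {
    pthread_mutex_t lock;
    pthread_cond_t  more;
    struct kwork   *head, *tail;
    bool            stop;
};

static void queue_work(struct kworker *w, struct kwork *work)
{
    pthread_mutex_lock(&w->lock);
    work->next = NULL;
    if (w->tail) w->tail->next = work; else w->head = work;
    w->tail = work;
    pthread_cond_broadcast(&w->more);
    pthread_mutex_unlock(&w->lock);
}

static void *worker_fn(void *arg)
{
    struct kworker *w = arg;
    for (;;) {
        pthread_mutex_lock(&w->lock);
        while (!w->head && !w->stop)
            pthread_cond_wait(&w->more, &w->lock);
        struct kwork *work = w->head;
        if (work && !(w->head = work->next)) w->tail = NULL;
        pthread_mutex_unlock(&w->lock);
        if (!work) return NULL;   /* stop requested and queue drained */
        work->fn(work);           /* never touched again: fn may free it */
    }
}

/* Flush by queueing a dedicated flush work item and waiting for it to run:
   everything queued before it must have run by then. */
struct flush_work {
    struct kwork    work;         /* must be first, so we can cast back */
    pthread_mutex_t lock;
    pthread_cond_t  done_cond;
    bool            done;
};

static void flush_fn(struct kwork *work)
{
    struct flush_work *f = (struct flush_work *)work;
    pthread_mutex_lock(&f->lock);
    f->done = true;
    pthread_cond_broadcast(&f->done_cond);
    pthread_mutex_unlock(&f->lock);
}

static void flush_worker(struct kworker *w)
{
    struct flush_work f = {
        .work = { .next = NULL, .fn = flush_fn },
        .lock = PTHREAD_MUTEX_INITIALIZER,
        .done_cond = PTHREAD_COND_INITIALIZER,
    };
    queue_work(w, &f.work);
    pthread_mutex_lock(&f.lock);
    while (!f.done)
        pthread_cond_wait(&f.done_cond, &f.lock);
    pthread_mutex_unlock(&f.lock);
}

static int executed;              /* read only after flush_worker() */

static void self_freeing_fn(struct kwork *work)
{
    executed++;
    free(work);                   /* freeing the executing item is fine */
}

int run_flush_demo(void)
{
    struct kworker w = { .lock = PTHREAD_MUTEX_INITIALIZER,
                         .more = PTHREAD_COND_INITIALIZER };
    pthread_t t;
    if (pthread_create(&t, NULL, worker_fn, &w)) return -1;
    for (int i = 0; i < 3; i++) {
        struct kwork *work = malloc(sizeof(*work));
        work->fn = self_freeing_fn;
        queue_work(&w, work);
    }
    flush_worker(&w);             /* all three items have run (and freed) */
    pthread_mutex_lock(&w.lock);
    w.stop = true;
    pthread_cond_broadcast(&w.more);
    pthread_mutex_unlock(&w.lock);
    pthread_join(t, NULL);
    return executed;
}
```

Because flushing waits on the sentinel's own completion rather than on sequence numbers stored in the flushed item, nothing ever dereferences the flushed item after its function starts.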
Commits on Nov 23, 2011
  1. freezer: kill unused set_freezable_with_signal()

    Tejun Heo authored
    There's no in-kernel user of set_freezable_with_signal() left.  Mixing
    TIF_SIGPENDING with kernel threads can lead to nasty corner cases as
    kernel threads never travel the signal delivery path on their own.
    e.g. the current implementation is buggy in the cancelation path of
    __thaw_task().  It calls recalc_sigpending_and_wake() in an attempt to
    clear TIF_SIGPENDING but the function never clears it regardless of
    sigpending state.  This means that signallable freezable kthreads may
    continue executing with !freezing() && stuck TIF_SIGPENDING, which can
    be troublesome.
    This patch removes set_freezable_with_signal() along with
    PF_FREEZER_NOSIG and recalc_sigpending*() calls in freezer.  User
    tasks get TIF_SIGPENDING, kernel tasks get woken up and the spurious
    sigpending is dealt with in the usual signal delivery path.
    Signed-off-by: Tejun Heo <>
    Acked-by: Oleg Nesterov <>
Commits on Nov 21, 2011
  1. freezer: implement and use kthread_freezable_should_stop()

    Tejun Heo authored
    Writeback and thinkpad_acpi have been using thaw_process() to prevent
    deadlock between the freezer and kthread_stop(); unfortunately, this
    is inherently racy - nothing prevents freezing from happening between
    thaw_process() and kthread_stop().
    This patch implements kthread_freezable_should_stop() which enters the
    refrigerator if necessary but is guaranteed to return if
    kthread_stop() is invoked.  Both thaw_process() users are converted to
    use the new function.
    Note that this deadlock condition exists for many freezable
    kthreads.  They need to be converted to use the new should_stop or a
    freezable workqueue.
    Tested with synthetic test case.
    Signed-off-by: Tejun Heo <>
    Acked-by: Henrique de Moraes Holschuh <>
    Cc: Jens Axboe <>
    Cc: Oleg Nesterov <>
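The guarantee can be sketched in userspace (hypothetical names; the real API is kthread_freezable_should_stop() plus the kernel freezer): the "frozen" wait condition also watches the stop flag, so a stop request always wakes the thread even while it is frozen, and the freeze/stop deadlock cannot occur.

```c
#include <pthread.h>
#include <stdbool.h>

struct freeze_ctl {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    bool freezing;      /* freezer asks threads to block */
    bool frozen;        /* thread acknowledges it is blocked */
    bool stopping;      /* kthread_stop() analog */
    long iterations;
};

/* Analog of kthread_freezable_should_stop(): blocks while frozen, but is
   guaranteed to return once stop is requested. */
static bool freezable_should_stop(struct freeze_ctl *c)
{
    bool stop;
    pthread_mutex_lock(&c->lock);
    while (c->freezing && !c->stopping) {
        c->frozen = true;
        pthread_cond_broadcast(&c->cond);   /* ack to the freezer */
        pthread_cond_wait(&c->cond, &c->lock);
    }
    c->frozen = false;
    stop = c->stopping;
    pthread_mutex_unlock(&c->lock);
    return stop;
}

static void *freezable_worker(void *arg)
{
    struct freeze_ctl *c = arg;
    while (!freezable_should_stop(c)) {
        pthread_mutex_lock(&c->lock);
        c->iterations++;
        pthread_mutex_unlock(&c->lock);
    }
    return NULL;
}

/* Freeze, then stop the thread *while it is frozen*; with the racy
   thaw_process()+kthread_stop() scheme this is the deadlock case. */
int run_freeze_demo(void)
{
    struct freeze_ctl c = { .lock = PTHREAD_MUTEX_INITIALIZER,
                            .cond = PTHREAD_COND_INITIALIZER };
    pthread_t t;
    long a, b;
    if (pthread_create(&t, NULL, freezable_worker, &c)) return 1;

    pthread_mutex_lock(&c.lock);
    c.freezing = true;
    while (!c.frozen)                   /* wait for acknowledgement */
        pthread_cond_wait(&c.cond, &c.lock);
    a = c.iterations;
    b = c.iterations;                   /* frozen: cannot have advanced */
    c.stopping = true;                  /* stop while still frozen */
    pthread_cond_broadcast(&c.cond);
    pthread_mutex_unlock(&c.lock);

    pthread_join(t, NULL);              /* returns: no freezer deadlock */
    return (a == b) ? 0 : 1;
}
```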
Commits on Oct 31, 2011
  1. kernel: Map most files to use export.h instead of module.h

    Paul Gortmaker authored
    The changed files were only including linux/module.h for the
    EXPORT_SYMBOL infrastructure, and nothing else.  Revector them
    onto the isolated export header for faster compile times.
    Nothing to see here but a whole lot of instances of:
      -#include <linux/module.h>
      +#include <linux/export.h>
    This commit is only changing the kernel dir; next targets
    will probably be mm, fs, the arch dirs, etc.
    Signed-off-by: Paul Gortmaker <>
Commits on May 28, 2011
  1. cpuset: Fix cpuset_cpus_allowed_fallback(), don't update tsk->rt.nr_cpus_allowed

    kosaki authored Ingo Molnar committed
    The rule is, we have to update tsk->rt.nr_cpus_allowed if we change
    tsk->cpus_allowed. Otherwise the RT scheduler may get confused.
    Signed-off-by: KOSAKI Motohiro <>
    Cc: Oleg Nesterov <>
    Signed-off-by: Peter Zijlstra <>
    Signed-off-by: Ingo Molnar <>
Commits on Mar 31, 2011
  1. Fix common misspellings

    lucasdemarchi authored
    Fixes generated by 'codespell' and manually reviewed.
    Signed-off-by: Lucas De Marchi <>
Commits on Mar 23, 2011
  1. kthread: NUMA aware kthread_create_on_node()

    Eric Dumazet authored torvalds committed
    Since all kthreads are created from a single helper task, they all use memory
    from a single node for their kernel stack and task struct.
    This patch suite creates kthread_create_on_node(), adding a 'cpu' parameter
    to the parameters already used by kthread_create().
    This parameter is used to allocate memory for the new kthread on its
    memory node if possible.
    Signed-off-by: Eric Dumazet <>
    Acked-by: David S. Miller <>
    Reviewed-by: Andi Kleen <>
    Acked-by: Rusty Russell <>
    Cc: Tejun Heo <>
    Cc: Tony Luck <>
    Cc: Fenghua Yu <>
    Cc: David Howells <>
    Cc: <>
    Signed-off-by: Andrew Morton <>
    Signed-off-by: Linus Torvalds <>
Commits on Jan 7, 2011
  1. sched: Constify function scope static struct sched_param usage

    Peter Zijlstra authored Ingo Molnar committed
    Function-scope statics are discouraged because they are
    easily overlooked and can cause subtle bugs/races due to
    their global (non-SMP safe) nature.
    Linus noticed that we did this for sched_param - at minimum
    make it const.
    Suggested-by: Linus Torvalds <>
    Signed-off-by: Peter Zijlstra <>
    LKML-Reference: Message-ID: <>
    Signed-off-by: Ingo Molnar <>
Commits on Jan 5, 2011
  1. Merge commit 'v2.6.37' into sched/core

    Ingo Molnar authored
    Merge reason: Merge the final .37 tree.
    Signed-off-by: Ingo Molnar <>
Commits on Dec 22, 2010
  1. kthread_work: make lockdep happy

    yongzhang authored Tejun Heo committed
    The spinlock in kthread_worker and the wait_queue_head in kthread_work both
    should be lockdep sensible, so change the interface to make it
    suitable for CONFIG_LOCKDEP.
    tj: comment update
    Reported-by: Nicolas <>
    Signed-off-by: Yong Zhang <>
    Signed-off-by: Andy Walls <>
    Tested-by: Andy Walls <>
    Cc: Tejun Heo <>
    Cc: Andrew Morton <>
    Signed-off-by: Tejun Heo <>
Commits on Oct 23, 2010
  1. sched: Make sched_param argument static in sched_setscheduler() callers

    kosaki authored Ingo Molnar committed
    Andrew Morton pointed out almost all sched_setscheduler() callers are
    using fixed parameters and can be converted to static.  It reduces runtime
    memory use a little.
    Signed-off-by: KOSAKI Motohiro <>
    Reported-by: Andrew Morton <>
    Acked-by: James Morris <>
    Cc: Ingo Molnar <>
    Cc: Steven Rostedt <>
    Signed-off-by: Andrew Morton <>
    Signed-off-by: Thomas Gleixner <>
    Signed-off-by: Ingo Molnar <>
Commits on Jun 29, 2010
  1. kthread: implement kthread_data()

    Tejun Heo authored
    Implement kthread_data() which takes @task pointing to a kthread and
    returns the @data specified when creating the kthread.  The caller is
    responsible for ensuring the validity of @task when calling this function.
    Signed-off-by: Tejun Heo <>
  2. kthread: implement kthread_worker

    Tejun Heo authored
    Implement a simple work processor for kthread.  This is to ease using
    kthread.  A single-thread workqueue used to be used for things like this,
    but workqueues no longer guarantee a fixed kthread association, in order
    to enable worker sharing.
    This can be used in cases where specific kthread association is
    necessary, for example, when it should have RT priority or be assigned
    to certain cgroup.
    Signed-off-by: Tejun Heo <>
    Cc: Andrew Morton <>
Commits on Mar 24, 2010
  1. cpuset: fix the problem that cpuset_mem_spread_node() returns an offline node

    miaoxie authored torvalds committed
    cpuset_mem_spread_node() returns an offline node, and causes an oops.
    This patch fixes it by initializing task->mems_allowed to
    node_states[N_HIGH_MEMORY], and updating task->mems_allowed when doing
    memory hotplug.
    Signed-off-by: Miao Xie <>
    Acked-by: David Rientjes <>
    Reported-by: Nick Piggin <>
    Tested-by: Nick Piggin <>
    Cc: Paul Menage <>
    Cc: Li Zefan <>
    Cc: Ingo Molnar <>
    Cc: <>
    Signed-off-by: Andrew Morton <>
    Signed-off-by: Linus Torvalds <>
Commits on Feb 9, 2010
  1. kthread, sched: Remove reference to kthread_create_on_cpu

    antonblanchard authored Ingo Molnar committed
    kthread_create_on_cpu doesn't exist so update a comment in
    kthread.c to reflect this.
    Signed-off-by: Anton Blanchard <>
    Acked-by: Rusty Russell <>
    Cc: Peter Zijlstra <>
    LKML-Reference: <20100209040740.GB3702@kryten>
    Signed-off-by: Ingo Molnar <>
Commits on Dec 16, 2009
  1. sched: Move kthread_bind() back to kthread.c

    Peter Zijlstra authored Ingo Molnar committed
    Since kthread_bind() lost its dependencies on sched.c, move it
    back where it came from.
    Signed-off-by: Peter Zijlstra <>
    Cc: Mike Galbraith <>
    LKML-Reference: <>
    Signed-off-by: Ingo Molnar <>
Commits on Nov 3, 2009
  1. sched: Fix kthread_bind() by moving the body of kthread_bind() to sched.c

    Mike Galbraith authored Ingo Molnar committed
    Eric Paris reported that commit
    f685cea causes boot time
    PREEMPT_DEBUG complaints.
     [    4.590699] BUG: using smp_processor_id() in preemptible [00000000] code: rmmod/1314
     [    4.593043] caller is task_hot+0x86/0xd0
    Since kthread_bind() messes with scheduler internals, move the
    body to sched.c, and lock the runqueue.
    Reported-by: Eric Paris <>
    Signed-off-by: Mike Galbraith <>
    Tested-by: Eric Paris <>
    Cc: Peter Zijlstra <>
    LKML-Reference: <>
    [ v2: fix !SMP build and clean up ]
    Signed-off-by: Ingo Molnar <>
Commits on Sep 9, 2009
  1. sched: Keep kthreads at default priority

    Mike Galbraith authored Ingo Molnar committed
    Removes the kthread/workqueue priority boost, as it increases worst-case
    desktop latencies.
    Signed-off-by: Mike Galbraith <>
    Acked-by: Peter Zijlstra <>
    LKML-Reference: <>
    Signed-off-by: Ingo Molnar <>
Commits on Jul 27, 2009
  1. update the comment in kthread_stop()

    utrace authored torvalds committed
    Commit 6370617 ("kthreads: rework
    kthread_stop()") removed the limitation that the thread function must
    not call do_exit() itself, but forgot to update the comment.
    Since that commit it is OK to use kthread_stop() even if the kthread can
    exit itself.
    Signed-off-by: Oleg Nesterov <>
    Signed-off-by: Rusty Russell <>
    Signed-off-by: Linus Torvalds <>
Commits on Jun 18, 2009
  1. kthreads: rework kthread_stop()

    utrace authored torvalds committed
    Based on Eric's patch which in turn was based on my patch.
    kthread_stop() has several nasty problems:
    - it runs unpredictably long with the global semaphore held.
    - it deadlocks if the kthread itself does kthread_stop() before it obeys
      the kthread_should_stop() request.
    - it is not usable if the kthread exits on its own, see for example the
      ugly "wait_to_die:" hack in migration_thread()
    - it is not possible to just tell the kthread it should stop, we must always
      wait for its exit.
    With this patch kthread() allocates all necessary data (struct kthread) on
    its own stack, and the kthread_stop_xxx globals are deleted.  ->vfork_done is used
    as a pointer into "struct kthread", which means kthread_stop() can easily
    wait for the kthread's exit.
    Signed-off-by: Oleg Nesterov <>
    Cc: Christoph Hellwig <>
    Cc: "Eric W. Biederman" <>
    Cc: Ingo Molnar <>
    Cc: Pavel Emelyanov <>
    Cc: Rusty Russell <>
    Cc: Vitaliy Gusev <>
    Signed-off-by: Andrew Morton <>
    Signed-off-by: Linus Torvalds <>
  2. kthreads: simplify the startup synchronization

    utrace authored torvalds committed
    We use two completions to create the kernel thread, which is a bit ugly.
    kthread() wakes up create_kthread() via ->started, then create_kthread()
    wakes up the caller kthread_create() via ->done.  But create_kthread() does
    not need to wait for kthread(), it can just return.  Instead kthread() itself
    can wake up the caller of kthread_create().
    Kill kthread_create_info->started; ->done is enough.  This improves the
    scalability a bit and simplifies the code.
    The only problem is if kernel_thread() fails; in that case create_kthread()
    must do complete(&create->done).
    Signed-off-by: Oleg Nesterov <>
    Cc: Christoph Hellwig <>
    Cc: "Eric W. Biederman" <>
    Cc: Ingo Molnar <>
    Cc: Pavel Emelyanov <>
    Cc: Rusty Russell <>
    Cc: Vitaliy Gusev <>
    Signed-off-by: Andrew Morton <>
    Signed-off-by: Linus Torvalds <>
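The single-completion handshake can be sketched with pthreads (illustrative names, not the kernel code): the new thread signals the creator's completion directly, so the intermediate started/done pair collapses to one wait. The failure path is handled before ever waiting, which is the analog of completing ->done when kernel_thread() fails.

```c
#include <pthread.h>
#include <stdbool.h>

/* Before: kthread -> (->started) -> create_kthread() -> (->done) -> caller.
   After:  the new thread completes ->done itself; one completion is enough. */
struct create_info {
    pthread_mutex_t lock;
    pthread_cond_t  done_cond;  /* the single remaining "completion" */
    bool            done;
    unsigned long   result;     /* e.g. the new thread's id */
};

static void *new_thread(void *arg)
{
    struct create_info *ci = arg;
    pthread_mutex_lock(&ci->lock);
    ci->result = (unsigned long)pthread_self();
    ci->done = true;
    pthread_cond_broadcast(&ci->done_cond);  /* wake the creator directly */
    pthread_mutex_unlock(&ci->lock);
    /* ...the thread would run its real payload from here on... */
    return NULL;
}

int run_create_demo(void)
{
    struct create_info ci = { .lock = PTHREAD_MUTEX_INITIALIZER,
                              .done_cond = PTHREAD_COND_INITIALIZER };
    pthread_t t;
    /* Failure path: return before ever waiting on ->done. */
    if (pthread_create(&t, NULL, new_thread, &ci)) return 1;
    pthread_mutex_lock(&ci.lock);
    while (!ci.done)
        pthread_cond_wait(&ci.done_cond, &ci.lock);
    pthread_mutex_unlock(&ci.lock);
    pthread_join(t, NULL);
    return ci.done ? 0 : 1;
}
```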
Commits on Jun 17, 2009
  1. cpuset,mm: update tasks' mems_allowed in time

    miaoxie authored torvalds committed
    Fix allocating a page cache/slab object on a disallowed node when memory
    spread is set, by updating tasks' mems_allowed after the cpuset's mems is
    changed.
    In order to update tasks' mems_allowed in time, we must modify the memory
    policy code, because the memory policy was originally applied in the
    process's own context.  After applying this patch, one task directly
    manipulates another's mems_allowed, and we use alloc_lock in the
    task_struct to protect the task's mems_allowed and memory policy.
    But in the fast path, we don't use a lock to protect them, because adding a
    lock may lead to a performance regression.  If we don't add a lock, though, the
    task might see no nodes when changing cpuset's mems_allowed to some
    non-overlapping set.  In order to avoid it, we set all new allowed nodes,
    then clear newly disallowed ones.
      The rework of mpol_new() to extract the adjusting of the node mask to
      apply cpuset and mpol flags "context" breaks set_mempolicy() and mbind()
      with MPOL_PREFERRED and a NULL nodemask--i.e., explicit local
      allocation.  Fix this by adding the check for MPOL_PREFERRED and empty
      node mask to mpol_new_mpolicy().
      Remove the now unneeded 'nodes = NULL' from mpol_new().
      Note that mpol_new_mempolicy() is always called with a non-NULL
      'nodes' parameter now that it has been removed from mpol_new().
      Therefore, we don't need to test nodes for NULL before testing it for
      'empty'.  However, just to be extra paranoid, add a VM_BUG_ON() to
      verify this assumption.
      I don't think the function name 'mpol_new_mempolicy' is descriptive
      enough to differentiate it from mpol_new().
      This function applies cpuset set context, usually constraining nodes
      to those allowed by the cpuset.  However, when the 'RELATIVE_NODES' flag
      is set, it also translates the nodes.  So I settled on
      'mpol_set_nodemask()', because the comment block for mpol_new() mentions
      that we need to call this function to "set nodes".
      Some additional minor line length, whitespace and typo cleanup.
    Signed-off-by: Miao Xie <>
    Cc: Ingo Molnar <>
    Cc: Peter Zijlstra <>
    Cc: Christoph Lameter <>
    Cc: Paul Menage <>
    Cc: Nick Piggin <>
    Cc: Yasunori Goto <>
    Cc: Pekka Enberg <>
    Cc: David Rientjes <>
    Signed-off-by: Lee Schermerhorn <>
    Signed-off-by: Andrew Morton <>
    Signed-off-by: Linus Torvalds <>
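The "set all new allowed nodes, then clear newly disallowed ones" trick in the commit above can be shown with a plain bitmask. A hedged sketch: the kernel operates on nodemask_t with its own helpers; the names below are illustrative.

```c
typedef unsigned long nodemask_t;   /* stand-in for the kernel's nodemask_t */

/* Rewrite *mask to newmask in two steps so that a lock-free reader never
   observes an empty mask: after step 1 it sees the union of old and new,
   after step 2 exactly the new mask - both are non-empty sets. */
static void update_nodemask(nodemask_t *mask, nodemask_t newmask,
                            nodemask_t *observed_between)
{
    *mask |= newmask;               /* step 1: set all newly allowed nodes */
    *observed_between = *mask;      /* what a racing reader could see here */
    *mask &= newmask;               /* step 2: clear newly disallowed ones */
}

int run_nodemask_demo(void)
{
    nodemask_t mask = 0x3;          /* old mems_allowed: nodes 0,1 */
    nodemask_t between = 0;
    update_nodemask(&mask, 0xC, &between);  /* non-overlapping: nodes 2,3 */
    /* a naive clear-then-set would expose an empty mask in between */
    return (between != 0 && mask == 0xC) ? 0 : 1;
}
```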
Commits on Apr 15, 2009
  1. tracing/events: move trace point headers into include/trace/events

    Steven Rostedt authored rostedt committed
    Impact: clean up
    Create a sub directory in include/trace called events to keep the
    trace point headers in their own separate directory. Only headers that
    declare trace points should be defined in this directory.
    Cc: Peter Zijlstra <>
    Cc: Thomas Gleixner <>
    Cc: Neil Horman <>
    Cc: Zhao Lei <>
    Cc: Eduard - Gabriel Munteanu <>
    Cc: Pekka Enberg <>
    Signed-off-by: Steven Rostedt <>
Commits on Apr 14, 2009
  1. tracing: create automated trace defines

    Steven Rostedt authored rostedt committed
    This patch lowers the number of places a developer must modify to add
    new tracepoints. The current method to add a new tracepoint
    into an existing system is to write the trace point macro in the
    trace header with one of the macros TRACE_EVENT, TRACE_FORMAT or
    DECLARE_TRACE, then they must add the same named item into the C file
    with the macro DEFINE_TRACE(name) and then add the trace point.
    This change cuts out the need to add the DEFINE_TRACE(name).
    Every file that uses the tracepoint must still include the trace/<type>.h
    file, but the one C file must also add a define before including
    that file.
     #define CREATE_TRACE_POINTS
     #include <trace/mytrace.h>
    This will cause the trace/mytrace.h file to also produce the C code
    necessary to implement the trace point.
    Note, if more than one trace/<type>.h is used to create the C code
    it is best to list them all together.
     #define CREATE_TRACE_POINTS
     #include <trace/foo.h>
     #include <trace/bar.h>
     #include <trace/fido.h>
    Thanks to Mathieu Desnoyers and Christoph Hellwig for coming up with
    the cleaner solution of the define above the includes over my first
    design to have the C code include a "special" header.
    This patch converts sched, irq, lockdep and skb to use this new method.
    Cc: Peter Zijlstra <>
    Cc: Thomas Gleixner <>
    Cc: Neil Horman <>
    Cc: Zhao Lei <>
    Cc: Eduard - Gabriel Munteanu <>
    Cc: Pekka Enberg <>
    Signed-off-by: Steven Rostedt <>
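A single-file toy of the define-before-include idea (the real machinery is CREATE_TRACE_POINTS plus the TRACE_EVENT headers; this X-macro version only mirrors the shape): one macro list yields declarations by default, and definitions in the one place that sets the guard.

```c
/* A tracepoint list, playing the role of a trace/<type>.h header. */
#define TRACEPOINT_LIST \
    TP(sched_switch)    \
    TP(sched_wakeup)

/* Default expansion: declarations only - this is all that files which
   merely *use* the tracepoints would get from the header. */
#define TP(name) void trace_##name(void);
TRACEPOINT_LIST
#undef TP

/* One C file defines the guard (like CREATE_TRACE_POINTS) before pulling
   the list in again, and the same list now emits the definitions.
   (Here the guard is trivially set, to keep the demo in one file.) */
#define CREATE_TRACE_POINTS
#ifdef CREATE_TRACE_POINTS
static int tp_calls;   /* counts firings, standing in for real trace code */
#define TP(name) void trace_##name(void) { tp_calls++; }
TRACEPOINT_LIST
#undef TP
#endif

int tracepoint_call_count(void)
{
    return tp_calls;
}
```

Every caller sees only the declarations; the definitions are generated once, which is exactly the duplication DEFINE_TRACE(name) used to require by hand.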
Commits on Apr 9, 2009
  1. kthread: move sched-related initialization from kthreadd context

    utrace authored rustyrussell committed
    kthreadd is the single thread which implements the "create" request; move
    sched_setscheduler/etc from create_kthread() to kthread_create() to
    improve the scalability.
    We should be careful with sched_setscheduler(), use the _nocheck helper.
    Signed-off-by: Oleg Nesterov <>
    Cc: Christoph Hellwig <>
    Cc: "Eric W. Biederman" <>
    Cc: Ingo Molnar <>
    Cc: Pavel Emelyanov <>
    Cc: Vitaliy Gusev <>
    Signed-off-by: Andrew Morton <>
    Signed-off-by: Rusty Russell <>
  2. kthread: Don't look for a task in create_kthread() #2

    Vitaliy Gusev authored rustyrussell committed
    Remove the unnecessary find_task_by_pid_ns(). kthread() can just
    use "current" to get the same result.
    Signed-off-by: Vitaliy Gusev <>
    Acked-by: Oleg Nesterov <>
    Signed-off-by: Rusty Russell <>
Commits on Mar 30, 2009
  1. cpumask: remove dangerous CPU_MASK_ALL_PTR, &CPU_MASK_ALL

    rustyrussell authored
    Impact: cleanup
    (Thanks to Al Viro for reminding me of this, via Ingo)
    CPU_MASK_ALL is the (deprecated) "all bits set" cpumask, defined as so:
    	#define CPU_MASK_ALL (cpumask_t) { { ... } }
    Taking the address of such a temporary is questionable at best;
    unfortunately 321a8e9 (cpumask: add CPU_MASK_ALL_PTR macro) added
    CPU_MASK_ALL_PTR, which formalizes this practice.  One day gcc could bite
    us over this usage (though we seem to have gotten away with it so far).
    So replace everywhere that used &CPU_MASK_ALL or CPU_MASK_ALL_PTR
    with the modern "cpu_all_mask" (a real const struct cpumask *).
    Signed-off-by: Rusty Russell <>
    Acked-by: Ingo Molnar <>
    Reported-by: Al Viro <>
    Cc: Mike Travis <>
Commits on Nov 16, 2008
  1. tracepoints: add DECLARE_TRACE() and DEFINE_TRACE()

    Mathieu Desnoyers authored Ingo Molnar committed
    Impact: API *CHANGE*. Must update all tracepoint users.
    Add DEFINE_TRACE() to tracepoints to let them declare the tracepoint
    structure in a single spot for all the kernel. It helps reduce memory
    consumption, especially when declaring a lot of tracepoints, e.g. for
    kmalloc tracing.
    *API CHANGE WARNING*: now, DECLARE_TRACE() must be used in headers for
    tracepoint declarations rather than DEFINE_TRACE(). This is the sane way
    to do it. The name previously used was misleading.
    Updates scheduler instrumentation to follow this API change.
    Signed-off-by: Mathieu Desnoyers <>
    Signed-off-by: Ingo Molnar <>
Commits on Oct 20, 2008
  1. Merge branch 'tracing-v28-for-linus' of git://…

    torvalds authored
    * 'tracing-v28-for-linus' of git:// (131 commits)
      tracing/fastboot: improve help text
      tracing/stacktrace: improve help text
      tracing/fastboot: fix initcalls disposition in
      tracing/fastboot: fix initcall name regexp
      tracing/fastboot: fix issues and improve output of
      tracepoints: synchronize unregister static inline
      tracepoints: tracepoint_synchronize_unregister()
      ftrace: make ftrace_test_p6nop disassembler-friendly
      markers: fix synchronize marker unregister static inline
      tracing/fastboot: add better resolution to initcall debug/tracing
      trace: add build-time check to avoid overrunning hex buffer
      ftrace: fix hex output mode of ftrace
      tracing/fastboot: fix initcalls disposition in
      tracing/fastboot: fix printk format typo in boot tracer
      ftrace: return an error when setting a nonexistent tracer
      ftrace: make some tracers reentrant
      ring-buffer: make reentrant
      ring-buffer: move page indexes into page headers
      tracing/fastboot: only trace non-module initcalls
      ftrace: move pc counter in irqtrace
    Manually fix conflicts:
     - init/main.c: initcall tracing
     - kernel/module.c: verbose level vs tracepoints
     - scripts/ fallout from cherry-picking commits.
  2. kthread_bind: use wait_task_inactive(TASK_UNINTERRUPTIBLE)

    Oleg Nesterov authored torvalds committed
    Now that wait_task_inactive(task, state) checks task->state == state,
    we can simplify the code and make this debugging check more robust.
    Signed-off-by: Oleg Nesterov <>
    Cc: Roland McGrath <>
    Cc: Ingo Molnar <>
    Signed-off-by: Andrew Morton <>
    Signed-off-by: Linus Torvalds <>
Commits on Oct 14, 2008
  1. tracing, sched: LTTng instrumentation - scheduler

    Mathieu Desnoyers authored Ingo Molnar committed
    Instrument the scheduler activity (sched_switch, migration, wakeups,
    wait for a task, signal delivery) and process/thread
    creation/destruction (fork, exit, kthread stop). Actually, kthread
    creation is not instrumented in this patch because it is architecture
    dependent. It allows connecting tracers such as ftrace which detect
    scheduling latencies and good/bad scheduler decisions. Tools like LTTng can
    export this scheduler information along with instrumentation of the rest
    of the kernel activity to perform post-mortem analysis on the scheduler.
    About the performance impact of tracepoints (which is comparable to
    markers), even without immediate values optimizations, tests done by
    Hideo Aoki on ia64 show no regression. His test case was using hackbench
    on a kernel where scheduler instrumentation (about 5 events in core
    scheduler code) was added. See the "Tracepoints" patch header for
    performance result details.
    Changelog :
    - Change instrumentation location and parameter to match ftrace
      instrumentation, previously done with kernel markers.
    [ conflict resolutions ]
    Signed-off-by: Mathieu Desnoyers <>
    Acked-by: 'Peter Zijlstra' <>
    Signed-off-by: Ingo Molnar <>
Commits on Jul 26, 2008
  1. tracehook: wait_task_inactive

    Roland McGrath authored torvalds committed
    This extends wait_task_inactive() with a new argument so it can be used in
    a "soft" mode where it will check for the task changing state unexpectedly
    and back off.  There is no change to existing callers.  This lays the
    groundwork to allow robust, noninvasive tracing that can try to sample a
    blocked thread but back off safely if it wakes up.
    Signed-off-by: Roland McGrath <>
    Cc: Oleg Nesterov <>
    Reviewed-by: Ingo Molnar <>
    Signed-off-by: Andrew Morton <>
    Signed-off-by: Linus Torvalds <>
Commits on Jul 18, 2008
  1. kthread: reduce stack pressure in create_kthread and kthreadd

    Mike Travis authored Ingo Molnar committed
      * Replace:
      	set_cpus_allowed(..., CPU_MASK_ALL)
        with:
      	set_cpus_allowed_ptr(..., CPU_MASK_ALL_PTR)
        to remove excessive stack requirements when NR_CPUS=4096.
    Signed-off-by: Mike Travis <>
    Cc: Andrew Morton <>
    Signed-off-by: Ingo Molnar <>