Commits on Aug 24, 2013
  1. cgroup: fix RCU accesses to task->cgroups

    commit 14611e51a57df10240817d8ada510842faf0ec51 upstream.
    task->cgroups is an RCU pointer pointing to struct css_set.  A task
    switches to a different css_set on cgroup migration but a css_set
    doesn't change once created and its pointers to cgroup_subsys_states
    aren't RCU protected.
    task_subsys_state[_check]() is the macro to acquire css given a task
    and subsys_id pair.  It RCU-dereferences task->cgroups->subsys[] not
    task->cgroups, so the RCU pointer task->cgroups ends up being
    dereferenced without read_barrier_depends() after it.  It's broken.
    Fix it by introducing task_css_set[_check]() which does
    RCU-dereference on task->cgroups.  task_subsys_state[_check]() is
    reimplemented to directly dereference ->subsys[] of the css_set
    returned from task_css_set[_check]().
    This removes some of the sparse RCU warnings in cgroup.
    v2: Fixed unbalanced parenthesis and there's no need to use
        rcu_dereference_raw() when !CONFIG_PROVE_RCU.  Both spotted by Li.
    Signed-off-by: Tejun Heo <>
    Reported-by: Fengguang Wu <>
    Acked-by: Li Zefan <>
    [bwh: Backported to 3.2:
     - Adjust context
     - Remove CONFIG_PROVE_RCU condition
     - s/lockdep_is_held(&cgroup_mutex)/cgroup_lock_is_held()/]
    Signed-off-by: Ben Hutchings <>
    htejun committed Jun 25, 2013
Commits on Jul 25, 2013
  1. Revert "video: tegra: dc: Fix the check of dirty window."

    This reverts commit c87f1c9.
    committed Jul 25, 2013
  2. Revert "video: tegra: dc: add tracing information"

    This reverts commit a60c525119bfbe1b4f2fa5655be0d087dfa4017b.
    committed Jul 24, 2013
Commits on Jul 14, 2013
  1. net: force a reload of first item in hlist_nulls_for_each_entry_rcu

    [ Upstream commit c87a124a5d5e8cf8e21c4363c3372bcaf53ea190 ]
    Roman Gushchin discovered that udp4_lib_lookup2() was not reloading
    the first item in the RCU-protected list in case the loop was restarted.
    This produced soft lockups.
    rcu_dereference(X)/ACCESS_ONCE(X) seem not to work as intended if X is
    ptr->field: in some cases, gcc caches the value of ptr->field in a
    register.
    Use a barrier() to disallow such caching, as documented in
    Documentation/atomic_ops.txt line 114.
    Thanks a lot to Roman for providing analysis and numerous patches.
    Diagnosed-by: Roman Gushchin <>
    Signed-off-by: Eric Dumazet <>
    Reported-by: Boris Zhmurov <>
    Signed-off-by: Roman Gushchin <>
    Acked-by: Paul E. McKenney <>
    Signed-off-by: David S. Miller <>
    Signed-off-by: Ben Hutchings <>
    Eric Dumazet committed May 29, 2013
Commits on Jul 13, 2013
  1. drivers: modem_if: omit cbp71_force_crash_exit for P4LTE

    -set STATE_CRASH_EXIT if disconnected and not enumerated for some time.
    committed Jul 12, 2013
  2. Revert "drivers: modem_if: don't wait_enumeration in if_usb_disconnect for P4LTE"

    This reverts commit e0c09f5.
    committed Jul 12, 2013
Commits on Jul 10, 2013
  1. perf: Treat attr.config as u64 in perf_swevent_init()

    Trinity discovered that we fail to check all 64 bits of
    attr.config passed by user space, resulting in out-of-bounds
    access of the perf_swevent_enabled array in
    Introduced in commit b0a873ebb ("perf: Register PMU
    Signed-off-by: Tommi Rantala <>
    Cc: Peter Zijlstra <>
    Cc: Paul Mackerras <>
    Cc: Arnaldo Carvalho de Melo <>
    Signed-off-by: Ingo Molnar <>
    rantala committed Apr 13, 2013
Commits on Jun 19, 2013
  1. swap: avoid read_swap_cache_async() race to deadlock while waiting on discard I/O completion

    commit cbab0e4eec299e9059199ebe6daf48730be46d2b upstream.
    read_swap_cache_async() can race against get_swap_page(), and stumble
    across a SWAP_HAS_CACHE entry in the swap map whose page wasn't brought
    into the swapcache yet.
    This swap_map state is expected to be transitory, but the actual
    placement of discard at scan_swap_map() inserts a wait for I/O
    completion, thus making the thread at read_swap_cache_async() loop
    around its -EEXIST case while the other end, at get_swap_page(), is
    scheduled away at scan_swap_map().  This can leave the system deadlocked
    if the I/O completion happens to be waiting on the CPU waitqueue where
    read_swap_cache_async() is busy looping and !CONFIG_PREEMPT.
    This patch introduces a cond_resched() call to make the aforementioned
    read_swap_cache_async() busy loop bail out when necessary,
    thus avoiding the subtle race window.
    Signed-off-by: Rafael Aquini <>
    Acked-by: Johannes Weiner <>
    Acked-by: KOSAKI Motohiro <>
    Acked-by: Hugh Dickins <>
    Cc: Shaohua Li <>
    Signed-off-by: Andrew Morton <>
    Signed-off-by: Linus Torvalds <>
    Signed-off-by: Ben Hutchings <>
    aquini committed Jun 12, 2013
  2. Revert "sched/debug: Limit sd->*_idx range on sysctl"

    This reverts commit df7f308.
    committed Jun 19, 2013
Commits on Jun 18, 2013
  1. drivers: modem_if: p4lte: use memcpy in dpram_download for header

    -using inline _memcpy for this triggers an internal compiler error in
     GCC 4.6.
    committed Jun 17, 2013
Commits on Jun 17, 2013
  1. Revert "drivers: modem_if: use memcpy in dpram_download for header"

    -limiting this to only P4LTE
    This reverts commit 0dc15f5.
    committed Jun 17, 2013
Commits on Jun 16, 2013
  1. drivers: modem_if: replace mif P4 with mif P5 to fix l2_hsic issue

     drivers: modem_if: use memcpy in dpram_download for header
    grzwolf committed Jun 10, 2013
  2. Clear the head of descriptor for the endpoint of gser and acm port in the unbind function.

    In the unbind function, freeing the descriptors causes the endpoint's
    descriptor list to point to a wrong address.  Therefore, set the head of
    the descriptor list to NULL so it is re-configured in the set_alt function.
    Change-Id: Ieaab1562c2e90dde17845cafa39a5abd7fadcb61
    yi-hsin_hung committed Sep 6, 2012
  3. block, bfq: add Early Queue Merge (EQM) to BFQ-v6r2 for 3.1

    A set of processes may happen to perform interleaved reads, i.e., requests
    whose union would give rise to a sequential read pattern.  There are two
    typical cases: in the first case, processes read fixed-size chunks of
    data at a fixed distance from each other, while in the second case
    processes may read variable-size chunks at variable distances.  The latter
    case occurs for example with KVM, which splits the I/O generated by the
    guest into multiple chunks, and lets these chunks be served by a pool of
    cooperating processes, iteratively assigning the next chunk of I/O to the
    first available process.  CFQ uses actual queue merging for the first type
    of processes, whereas it uses preemption to get a sequential read pattern
    out of the read requests performed by the second type of processes.  In
    the end it uses two different mechanisms to achieve the same goal:
    boosting the throughput with interleaved I/O.
    This patch introduces Early Queue Merge (EQM), a unified mechanism to get
    a sequential read pattern with both types of processes.  The main idea is
    checking newly arrived requests against the next request of the active
    queue, both in case of actual request insert and in case of request merge.
    By doing so, both types of processes can be handled by just merging their
    queues.  EQM is then simpler and more compact than the pair of mechanisms
    used in CFQ.
    Finally, EQM also preserves the typical low-latency properties of BFQ, by
    properly restoring the weight-raising state of a queue when it gets back
    to a non-merged state.
    Signed-off-by: Mauro Andreolini <>
    Signed-off-by: Arianna Avanzini <>
    Reviewed-by: Paolo Valente <>
    ariava committed Jun 15, 2013
  4. block: introduce the BFQ-v6r2 I/O sched for 3.1

    Add the BFQ-v6r2 I/O scheduler to 3.1.
    The general structure is borrowed from CFQ, as is much of the code.  A
    (bfq_)queue is associated with each task doing I/O on a device, and each
    time a scheduling decision has to be made a queue is selected and served
    until it expires.
        - Slices are given in the service domain: tasks are assigned budgets,
          measured in number of sectors.  Once granted access to the disk, a
          task must however consume its assigned budget within a configurable
          maximum time (by default, the maximum possible value of the budgets
          is automatically computed to comply with this timeout).  This allows
          the desired latency vs "throughput boosting" tradeoff to be set.
        - Budgets are scheduled according to a variant of WF2Q+, implemented
          using an augmented rb-tree to take eligibility into account while
          preserving an O(log N) overall complexity.
        - A low-latency tunable is provided; if enabled, both interactive and soft
          real-time applications are guaranteed very low latency.
        - Latency guarantees are preserved also in the presence of NCQ.
        - Also with flash-based devices, a high throughput is achieved while
          still preserving latency guarantees.
        - Useful features borrowed from CFQ: cooperating-queues merging (with
          some additional optimizations with respect to the original CFQ version),
          static fallback queue for OOM.
        - BFQ supports full hierarchical scheduling, exporting a cgroups
          interface.  Each node has a full scheduler, so each group can
          be assigned its own ioprio (mapped to a weight, see next point)
          and an ioprio_class.
        - If the cgroups interface is used, weights can be explicitly assigned,
          otherwise ioprio values are mapped to weights using the relation
          weight = IOPRIO_BE_NR - ioprio.
        - ioprio classes are served in strict priority order, i.e., lower
          priority queues are not served as long as there are higher priority
          queues.  Among queues in the same class the bandwidth is distributed
          in proportion to the weight of each queue. A very thin extra bandwidth
          is however guaranteed to the Idle class, to prevent it from starving.
    Signed-off-by: Paolo Valente <>
    Signed-off-by: Arianna Avanzini <>
    ariava committed Jun 15, 2013
  5. block: cgroups, kconfig, build bits for BFQ-v6r2-3.1

    Add a Kconfig option and do the related Makefile changes to compile
    the BFQ I/O scheduler.  Also add the bfqio controller to the cgroups
    subsystem.
    Signed-off-by: Paolo Valente <>
    Signed-off-by: Arianna Avanzini <>
    ariava committed Jun 15, 2013
  6. block: prepare I/O context code for BFQ-v6r2 for 3.1

    BFQ uses struct cfq_io_context to store its per-process per-device data,
    reusing the same code for cic handling of CFQ.  The code is not shared
    ATM to minimize the impact of these patches.
    This patch introduces a new hlist to each io_context to store all the
    cic's allocated by BFQ to allow calling the right destructor on module
    unload; the radix tree used for cic lookup needs to be duplicated
    because it can contain dead keys inserted by a scheduler and later
    retrieved by the other one.
    Update the io_context exit and free paths to take care also of
    the BFQ cic's.
    Change the type of cfqq inside struct cfq_io_context to void *
    to use it also for BFQ per-queue data.
    A new bfq-specific ioprio_changed field is necessary, too, to avoid
    clobbering cfq's one, so switch ioprio_changed to a bitmap, with one
    element per scheduler.
    Signed-off-by: Paolo Valente <>
    Signed-off-by: Arianna Avanzini <>
    ariava committed Jun 15, 2013
  7. Revert "block: prepare I/O context code for BFQ-v6r1 for 3.1"

    This reverts commit d51968e.
    committed Jun 16, 2013
  8. Revert "block: cgroups, kconfig, build bits for BFQ-v6r1-3.1"

    This reverts commit 3355d0f.
    committed Jun 16, 2013
  9. Revert "block: introduce the BFQ-v6r1 I/O sched for 3.1"

    This reverts commit cf1989c.
    committed Jun 16, 2013
Commits on Jun 8, 2013
  1. wireless: allow 40 MHz on world roaming channels 12/13

    commit 43c771a1963ab461a2f194e3c97fded1d5fe262f upstream.
    When in world roaming mode, allow 40 MHz to be used
    on channels 12 and 13 so that an AP that is, e.g.,
    using HT40+ on channel 9 (in the UK) can be used.
    Reported-by: Eddie Chapman <>
    Tested-by: Eddie Chapman <>
    Acked-by: Luis R. Rodriguez <>
    Signed-off-by: Johannes Berg <>
    Signed-off-by: Greg Kroah-Hartman <>
    jmberg committed Nov 12, 2012
  2. wireless: drop invalid mesh address extension frames

    commit 7dd111e8ee10cc6816669eabcad3334447673236 upstream.
    The mesh header can have address extension by a 4th
    or a 5th and 6th address, but never both. Drop such
    frames in 802.11 -> 802.3 conversion along with any
    frames that have the wrong extension.
    Reviewed-by: Javier Cardona <>
    Signed-off-by: Johannes Berg <>
    Signed-off-by: Greg Kroah-Hartman <>
    jmberg committed Oct 25, 2012
  3. cfg80211: fix antenna gain handling

    commit c4a9fafc77a5318f5ed26c509bbcddf03e18c201 upstream.
    No driver initializes chan->max_antenna_gain to something sensible, and
    the only place where it is being used right now is inside ath9k. This
    leads to ath9k potentially using less tx power than it can use, which can
    decrease performance/range in some rare cases.
    Rather than going through every single driver, this patch initializes
    chan->orig_mag in wiphy_register(), ignoring whatever value the driver
    left in there. If a driver for some reason wishes to limit it independent
    from regulatory rulesets, it can do so internally.
    Signed-off-by: Felix Fietkau <>
    Signed-off-by: Johannes Berg <>
    Signed-off-by: Greg Kroah-Hartman <>
    Felix Fietkau committed Oct 17, 2012
  4. cfg80211: fix possible circular lock on reg_regdb_search()

    commit a85d0d7f3460b1a123b78e7f7e39bf72c37dfb78 upstream.
    When call_crda() is called we kick off a witch hunt search
    for the same regulatory domain on our internal regulatory
    database, and that work gets kicked off on a workqueue while
    the cfg80211_mutex is held.  If that workqueue kicks off it
    will first lock reg_regdb_search_mutex and later cfg80211_mutex,
    but to ensure two CPUs will not contend against cfg80211_mutex
    the right thing to do is to have the reg_regdb_search() work
    wait until the cfg80211_mutex is let go.
    The lockdep report is pasted below.
    cfg80211: Calling CRDA to update world regulatory domain
    [ INFO: possible circular locking dependency detected ]
    3.3.8 #3 Tainted: G           O
    kworker/0:1/235 is trying to acquire lock:
     (cfg80211_mutex){+.+...}, at: [<816468a4>] set_regdom+0x78c/0x808 [cfg80211]
    but task is already holding lock:
     (reg_regdb_search_mutex){+.+...}, at: [<81646828>] set_regdom+0x710/0x808 [cfg80211]
    which lock already depends on the new lock.
    the existing dependency chain (in reverse order) is:
    -> #2 (reg_regdb_search_mutex){+.+...}:
           [<800a8384>] lock_acquire+0x60/0x88
           [<802950a8>] mutex_lock_nested+0x54/0x31c
           [<81645778>] is_world_regdom+0x9f8/0xc74 [cfg80211]
    -> #1 (reg_mutex#2){+.+...}:
           [<800a8384>] lock_acquire+0x60/0x88
           [<802950a8>] mutex_lock_nested+0x54/0x31c
           [<8164539c>] is_world_regdom+0x61c/0xc74 [cfg80211]
    -> #0 (cfg80211_mutex){+.+...}:
           [<800a77b8>] __lock_acquire+0x10d4/0x17bc
           [<800a8384>] lock_acquire+0x60/0x88
           [<802950a8>] mutex_lock_nested+0x54/0x31c
           [<816468a4>] set_regdom+0x78c/0x808 [cfg80211]
    other info that might help us debug this:
    Chain exists of:
      cfg80211_mutex --> reg_mutex#2 --> reg_regdb_search_mutex
     Possible unsafe locking scenario:
           CPU0                    CPU1
           ----                    ----
      lock(reg_regdb_search_mutex);
                                  lock(reg_mutex#2);
                                  lock(reg_regdb_search_mutex);
      lock(cfg80211_mutex);
     *** DEADLOCK ***
    3 locks held by kworker/0:1/235:
     #0:  (events){.+.+..}, at: [<80089a00>] process_one_work+0x230/0x460
     #1:  (reg_regdb_work){+.+...}, at: [<80089a00>] process_one_work+0x230/0x460
     #2:  (reg_regdb_search_mutex){+.+...}, at: [<81646828>] set_regdom+0x710/0x808 [cfg80211]
    stack backtrace:
    Call Trace:
    [<80290fd4>] dump_stack+0x8/0x34
    [<80291bc4>] print_circular_bug+0x2ac/0x2d8
    [<800a77b8>] __lock_acquire+0x10d4/0x17bc
    [<800a8384>] lock_acquire+0x60/0x88
    [<802950a8>] mutex_lock_nested+0x54/0x31c
    [<816468a4>] set_regdom+0x78c/0x808 [cfg80211]
    Reported-by: Felix Fietkau <>
    Tested-by: Felix Fietkau <>
    Signed-off-by: Luis R. Rodriguez <>
    Reviewed-by: Johannes Berg <>
    Signed-off-by: John W. Linville <>
    Signed-off-by: Greg Kroah-Hartman <>
    mcgrof committed Sep 14, 2012