Commits on Jan 13, 2012
  1. S5PC11X : FIMC apply v4l2 standard for asynchronous dequeue/queue

    To support frame dropping in the camera HAL layer, FIMC should
    follow the V4L2 standard.
    
    Signed-off-by: Song Youngmok <ym.song@samsung.com>
    
    (Re-added for video recording fix)
    (cherry picked from commit cbb71a5)
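
    As context, the asynchronous cycle the HAL uses to drop a frame is
    just a dequeue followed by an immediate requeue. A minimal userspace
    sketch (buffer setup and error handling omitted; the function name is
    hypothetical):

        #include <fcntl.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <linux/videodev2.h>

        /* One step of the streaming loop; REQBUFS/STREAMON setup omitted. */
        static void drop_one_frame(int fd)
        {
                struct v4l2_buffer buf;

                memset(&buf, 0, sizeof(buf));
                buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                buf.memory = V4L2_MEMORY_MMAP;

                if (ioctl(fd, VIDIOC_DQBUF, &buf) == 0)  /* take filled buffer */
                        ioctl(fd, VIDIOC_QBUF, &buf);    /* requeue: frame dropped */
        }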
    pawitp committed Jan 1, 2012
Commits on Nov 26, 2011
  1. Revert "Add AIO"

    This reverts commit 1817542.
    committed Nov 26, 2011
Commits on Nov 25, 2011
  1. Add AIO

    committed Nov 25, 2011
Commits on Nov 22, 2011
  1. vmscan: abort reclaim/compaction if compaction can proceed

    If compaction can proceed, shrink_zones() stops doing any work but its
    callers still call shrink_slab() which raises the priority and potentially
    sleeps.  This is unnecessary and wasteful so this patch aborts direct
    reclaim/compaction entirely if compaction can proceed.
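
    A sketch of the control-flow change (not the literal patch; the loop
    is paraphrased from the 3.0-era do_try_to_free_pages()):

        for (priority = DEF_PRIORITY; priority >= 0; priority--) {
                /* shrink_zones() now reports when compaction can proceed */
                if (shrink_zones(priority, zonelist, sc))
                        break;  /* abort reclaim: no shrink_slab(), no sleep */
                /* otherwise fall through to shrink_slab() and congestion
                 * waits at ever higher priority, as before */
        }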
    
    Signed-off-by: Mel Gorman <mgorman@suse.de>
    Acked-by: Rik van Riel <riel@redhat.com>
    Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
    Acked-by: Johannes Weiner <jweiner@redhat.com>
    Cc: Josh Boyer <jwboyer@redhat.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Mel Gorman committed Nov 1, 2011
  2. vmscan: limit direct reclaim for higher order allocations

    When suffering from memory fragmentation due to unfreeable pages, THP page
    faults will repeatedly try to compact memory.  Due to the unfreeable
    pages, compaction fails.
    
    Needless to say, at that point page reclaim also fails to create free
    contiguous 2MB areas.  However, that doesn't stop the current code from
    trying, over and over again, and freeing a minimum of 4MB (2UL <<
    sc->order pages) at every single invocation.
    
    This resulted in my 12GB system having 2-3GB free memory, a corresponding
    amount of used swap and very sluggish response times.
    
    This can be avoided by having the direct reclaim code not reclaim from
    zones that already have plenty of free memory available for compaction.
    
    If compaction still fails due to unmovable memory, doing additional
    reclaim will only hurt the system, not help.
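
    The shape of the zone check, sketched (compaction_ready() here is a
    stand-in for the real suitability test):

        /* In the shrink_zones() loop: for a costly allocation, skip any
         * zone that compaction could already service. */
        if (COMPACTION_BUILD &&
            sc->order > PAGE_ALLOC_COSTLY_ORDER &&
            compaction_ready(zone, sc->order))
                continue;       /* reclaiming here would only hurt */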
    
    [jweiner@redhat.com: change comment to explain the order check]
    Signed-off-by: Rik van Riel <riel@redhat.com>
    Acked-by: Johannes Weiner <jweiner@redhat.com>
    Acked-by: Mel Gorman <mgorman@suse.de>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
    Signed-off-by: Johannes Weiner <jweiner@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    rikvanriel committed Nov 1, 2011
  3. vmscan: add barrier to prevent evictable page in unevictable list

    When a race between putback_lru_page() and shmem_lock() with lock=0
    happens, program execution order is as follows, but the clear_bit in
    processor #1 could be reordered right before the spin_unlock of
    processor #1.  Then, the page would be stranded on the unevictable
    list.
    
    spin_lock
    SetPageLRU
    spin_unlock
                                    clear_bit(AS_UNEVICTABLE)
                                    spin_lock
                                    if PageLRU()
                                            if !test_bit(AS_UNEVICTABLE)
                                            	move evictable list
    smp_mb
    if !test_bit(AS_UNEVICTABLE)
            move evictable list
                                    spin_unlock
    
    But pagevec_lookup() in scan_mapping_unevictable_pages() has
    rcu_read_[un]lock(), which happens to prevent the reordering before
    test_bit(AS_UNEVICTABLE) is reached on processor #1, so in practice
    the problem never occurs.  But that is an unexpected side effect and
    we should solve this problem properly.
    
    This patch adds a barrier after mapping_clear_unevictable.
    
    I didn't hit this problem myself; I just found it during review.
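
    A sketch of where the barrier lands, assuming the shmem_lock() path
    (the comment paraphrases the reasoning above):

        mapping_clear_unevictable(mapping);
        /* Make the cleared flag visible before later stores, so the
         * test_bit(AS_UNEVICTABLE) path on the other CPU cannot act on
         * stale state and strand the page on the unevictable list. */
        smp_mb();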
    
    Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
    Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
    Cc: Mel Gorman <mel@csn.ul.ie>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
    Acked-by: Johannes Weiner <jweiner@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Minchan Kim committed Nov 1, 2011
  4. mm: vmscan: do not writeback filesystem pages in direct reclaim

    Testing from the XFS folk revealed that there is still too much I/O from
    the end of the LRU in kswapd.  Previously it was considered acceptable by
    VM people for a small number of pages to be written back from reclaim with
    testing generally showing about 0.3% of pages reclaimed were written back
    (higher if memory was low).  That writing back a small number of pages is
    ok has been heavily disputed for quite some time and Dave Chinner
    explained it well;
    
    	It doesn't have to be a very high number to be a problem. IO
    	is orders of magnitude slower than the CPU time it takes to
    	flush a page, so the cost of making a bad flush decision is
    	very high. And single page writeback from the LRU is almost
    	always a bad flush decision.
    
    To complicate matters, filesystems respond very differently to requests
    from reclaim according to Christoph Hellwig;
    
    	xfs tries to write it back if the requester is kswapd
    	ext4 ignores the request if it's a delayed allocation
    	btrfs ignores the request
    
    As a result, each filesystem has different performance characteristics
    when under memory pressure and there are many pages being dirtied.  In
    some cases, the request is ignored entirely so the VM cannot depend on the
    IO being dispatched.
    
    The objective of this series is to reduce writing of filesystem-backed
    pages from reclaim, play nicely with writeback that is already in progress
    and throttle reclaim appropriately when writeback pages are encountered.
    The assumption is that the flushers will always write pages faster than if
    reclaim issues the IO.
    
    A secondary goal is to avoid the problem whereby direct reclaim splices
    two potentially deep call stacks together.
    
    There is a potential new problem as reclaim has less control over how
    long before a page in a particular zone or container is cleaned and
    direct reclaimers depend on kswapd or flusher threads to do the
    necessary work.
    However, as filesystems sometimes ignore direct reclaim requests already,
    it is not expected to be a serious issue.
    
    Patch 1 disables writeback of filesystem pages from direct reclaim
    	entirely. Anonymous pages are still written.
    
    Patch 2 removes dead code in lumpy reclaim as it is no longer able
    	to synchronously write pages. This hurts lumpy reclaim but
    	there is an expectation that compaction is used for hugepage
    	allocations these days and lumpy reclaim's days are numbered.
    
    Patches 3-4 add warnings to XFS and ext4 if called from
    	direct reclaim. With patch 1, this "never happens" and is
    	intended to catch regressions in this logic in the future.
    
    Patch 5 disables writeback of filesystem pages from kswapd unless
    	the priority is raised to the point where kswapd is considered
    	to be in trouble.
    
    Patch 6 throttles reclaimers if too many dirty pages are being
    	encountered and the zones or backing devices are congested.
    
    Patch 7 invalidates dirty pages found at the end of the LRU so they
    	are reclaimed quickly after being written back rather than
    	waiting for a reclaimer to find them
    
    I consider this series to be orthogonal to the writeback work but it is
    worth noting that the writeback work affects the viability of patch 8 in
    particular.
    
    I tested this on ext4 and xfs using fs_mark, a simple writeback test based
    on dd and a micro benchmark that does a streaming write to a large mapping
    (exercises use-once LRU logic) followed by streaming writes to a mix of
    anonymous and file-backed mappings.  The command line for fs_mark when
    booted with 512M looked something like
    
    ./fs_mark -d  /tmp/fsmark-2676  -D  100  -N  150  -n  150  -L  25  -t  1  -S0  -s  10485760
    
    The number of files was adjusted depending on the amount of available
    memory so that the total size of files created was about 3xRAM.  For multiple threads,
    the -d switch is specified multiple times.
    
    The test machine is x86-64 with an older generation of AMD processor with
    4 cores.  The underlying storage was 4 disks configured as RAID-0 as this
    was the best configuration of storage I had available.  Swap is on a
    separate disk.  Dirty ratio was tuned to 40% instead of the default of
    20%.
    
    Testing was run with and without monitors to both verify that the patches
    were operating as expected and that any performance gain was real and not
    due to interference from monitors.
    
    Here is a summary of results based on testing XFS.  In each pair of
    figures, the first column is the vanilla kernel and the second is
    patched.
    
    512M1P-xfs           Files/s  mean                 32.69 ( 0.00%)     34.44 ( 5.08%)
    512M1P-xfs           Elapsed Time fsmark                    51.41     48.29
    512M1P-xfs           Elapsed Time simple-wb                114.09    108.61
    512M1P-xfs           Elapsed Time mmap-strm                113.46    109.34
    512M1P-xfs           Kswapd efficiency fsmark                 62%       63%
    512M1P-xfs           Kswapd efficiency simple-wb              56%       61%
    512M1P-xfs           Kswapd efficiency mmap-strm              44%       42%
    512M-xfs             Files/s  mean                 30.78 ( 0.00%)     35.94 (14.36%)
    512M-xfs             Elapsed Time fsmark                    56.08     48.90
    512M-xfs             Elapsed Time simple-wb                112.22     98.13
    512M-xfs             Elapsed Time mmap-strm                219.15    196.67
    512M-xfs             Kswapd efficiency fsmark                 54%       56%
    512M-xfs             Kswapd efficiency simple-wb              54%       55%
    512M-xfs             Kswapd efficiency mmap-strm              45%       44%
    512M-4X-xfs          Files/s  mean                 30.31 ( 0.00%)     33.33 ( 9.06%)
    512M-4X-xfs          Elapsed Time fsmark                    63.26     55.88
    512M-4X-xfs          Elapsed Time simple-wb                100.90     90.25
    512M-4X-xfs          Elapsed Time mmap-strm                261.73    255.38
    512M-4X-xfs          Kswapd efficiency fsmark                 49%       50%
    512M-4X-xfs          Kswapd efficiency simple-wb              54%       56%
    512M-4X-xfs          Kswapd efficiency mmap-strm              37%       36%
    512M-16X-xfs         Files/s  mean                 60.89 ( 0.00%)     65.22 ( 6.64%)
    512M-16X-xfs         Elapsed Time fsmark                    67.47     58.25
    512M-16X-xfs         Elapsed Time simple-wb                103.22     90.89
    512M-16X-xfs         Elapsed Time mmap-strm                237.09    198.82
    512M-16X-xfs         Kswapd efficiency fsmark                 45%       46%
    512M-16X-xfs         Kswapd efficiency simple-wb              53%       55%
    512M-16X-xfs         Kswapd efficiency mmap-strm              33%       33%
    
    Up until 512M-4X, the FSmark improvements were statistically significant.
    For the 4X and 16X tests the results were within standard deviations but
    just barely.  The time to completion for all tests is improved which is an
    important result.  In general, kswapd efficiency is not affected by
    skipping dirty pages.
    
    1024M1P-xfs          Files/s  mean                 39.09 ( 0.00%)     41.15 ( 5.01%)
    1024M1P-xfs          Elapsed Time fsmark                    84.14     80.41
    1024M1P-xfs          Elapsed Time simple-wb                210.77    184.78
    1024M1P-xfs          Elapsed Time mmap-strm                162.00    160.34
    1024M1P-xfs          Kswapd efficiency fsmark                 69%       75%
    1024M1P-xfs          Kswapd efficiency simple-wb              71%       77%
    1024M1P-xfs          Kswapd efficiency mmap-strm              43%       44%
    1024M-xfs            Files/s  mean                 35.45 ( 0.00%)     37.00 ( 4.19%)
    1024M-xfs            Elapsed Time fsmark                    94.59     91.00
    1024M-xfs            Elapsed Time simple-wb                229.84    195.08
    1024M-xfs            Elapsed Time mmap-strm                405.38    440.29
    1024M-xfs            Kswapd efficiency fsmark                 79%       71%
    1024M-xfs            Kswapd efficiency simple-wb              74%       74%
    1024M-xfs            Kswapd efficiency mmap-strm              39%       42%
    1024M-4X-xfs         Files/s  mean                 32.63 ( 0.00%)     35.05 ( 6.90%)
    1024M-4X-xfs         Elapsed Time fsmark                   103.33     97.74
    1024M-4X-xfs         Elapsed Time simple-wb                204.48    178.57
    1024M-4X-xfs         Elapsed Time mmap-strm                528.38    511.88
    1024M-4X-xfs         Kswapd efficiency fsmark                 81%       70%
    1024M-4X-xfs         Kswapd efficiency simple-wb              73%       72%
    1024M-4X-xfs         Kswapd efficiency mmap-strm              39%       38%
    1024M-16X-xfs        Files/s  mean                 42.65 ( 0.00%)     42.97 ( 0.74%)
    1024M-16X-xfs        Elapsed Time fsmark                   103.11     99.11
    1024M-16X-xfs        Elapsed Time simple-wb                200.83    178.24
    1024M-16X-xfs        Elapsed Time mmap-strm                397.35    459.82
    1024M-16X-xfs        Kswapd efficiency fsmark                 84%       69%
    1024M-16X-xfs        Kswapd efficiency simple-wb              74%       73%
    1024M-16X-xfs        Kswapd efficiency mmap-strm              39%       40%
    
    All FSMark tests up to 16X had statistically significant improvements.
    For the most part, tests are completing faster with the exception of the
    streaming writes to a mixture of anonymous and file-backed mappings,
    which were slower in two cases.
    
    In the cases where the mmap-strm tests were slower, there was more
    swapping due to dirty pages being skipped.  The number of additional pages
    swapped is almost identical to the fewer number of pages written from
    reclaim.  In other words, roughly the same number of pages were reclaimed
    but swapping was slower.  As the test is a bit unrealistic and stresses
    memory heavily, the small shift is acceptable.
    
    4608M1P-xfs          Files/s  mean                 29.75 ( 0.00%)     30.96 ( 3.91%)
    4608M1P-xfs          Elapsed Time fsmark                   512.01    492.15
    4608M1P-xfs          Elapsed Time simple-wb                618.18    566.24
    4608M1P-xfs          Elapsed Time mmap-strm                488.05    465.07
    4608M1P-xfs          Kswapd efficiency fsmark                 93%       86%
    4608M1P-xfs          Kswapd efficiency simple-wb              88%       84%
    4608M1P-xfs          Kswapd efficiency mmap-strm              46%       45%
    4608M-xfs            Files/s  mean                 27.60 ( 0.00%)     28.85 ( 4.33%)
    4608M-xfs            Elapsed Time fsmark                   555.96    532.34
    4608M-xfs            Elapsed Time simple-wb                659.72    571.85
    4608M-xfs            Elapsed Time mmap-strm               1082.57   1146.38
    4608M-xfs            Kswapd efficiency fsmark                 89%       91%
    4608M-xfs            Kswapd efficiency simple-wb              88%       82%
    4608M-xfs            Kswapd efficiency mmap-strm              48%       46%
    4608M-4X-xfs         Files/s  mean                 26.00 ( 0.00%)     27.47 ( 5.35%)
    4608M-4X-xfs         Elapsed Time fsmark                   592.91    564.00
    4608M-4X-xfs         Elapsed Time simple-wb                616.65    575.07
    4608M-4X-xfs         Elapsed Time mmap-strm               1773.02   1631.53
    4608M-4X-xfs         Kswapd efficiency fsmark                 90%       94%
    4608M-4X-xfs         Kswapd efficiency simple-wb              87%       82%
    4608M-4X-xfs         Kswapd efficiency mmap-strm              43%       43%
    4608M-16X-xfs        Files/s  mean                 26.07 ( 0.00%)     26.42 ( 1.32%)
    4608M-16X-xfs        Elapsed Time fsmark                   602.69    585.78
    4608M-16X-xfs        Elapsed Time simple-wb                606.60    573.81
    4608M-16X-xfs        Elapsed Time mmap-strm               1549.75   1441.86
    4608M-16X-xfs        Kswapd efficiency fsmark                 98%       98%
    4608M-16X-xfs        Kswapd efficiency simple-wb              88%       82%
    4608M-16X-xfs        Kswapd efficiency mmap-strm              44%       42%
    
    Unlike the other tests, the fsmark results are not statistically
    significant but the min and max times are both improved and for the most
    part, tests completed faster.
    
    There are other indications that this is an improvement as well.  For
    example, in the vast majority of cases, there were fewer pages scanned by
    direct reclaim implying in many cases that stalls due to direct reclaim
    are reduced.  KSwapd is scanning more due to skipping dirty pages which
    is unfortunate but the CPU usage is still acceptable.
    
    In an earlier set of tests, I used blktrace and in almost all cases
    throughput throughout the entire test was higher.  However, I ended up
    discarding those results as recording blktrace data was too heavy for my
    liking.
    
    On a laptop, I plugged in a USB stick and ran a similar set of tests
    using it as backing storage.  A desktop environment was running and for
    the entire duration of the tests, firefox and gnome terminal were
    launching and exiting to vaguely simulate a user.
    
    1024M-xfs            Files/s  mean               0.41 ( 0.00%)        0.44 ( 6.82%)
    1024M-xfs            Elapsed Time fsmark               2053.52   1641.03
    1024M-xfs            Elapsed Time simple-wb            1229.53    768.05
    1024M-xfs            Elapsed Time mmap-strm            4126.44   4597.03
    1024M-xfs            Kswapd efficiency fsmark              84%       85%
    1024M-xfs            Kswapd efficiency simple-wb           92%       81%
    1024M-xfs            Kswapd efficiency mmap-strm           60%       51%
    1024M-xfs            Avg wait ms fsmark                5404.53     4473.87
    1024M-xfs            Avg wait ms simple-wb             2541.35     1453.54
    1024M-xfs            Avg wait ms mmap-strm             3400.25     3852.53
    
    The mmap-strm results were hurt because firefox launching had a tendency
    to push the test out of memory.  On the positive side, firefox launched
    marginally faster with the patches applied.  Time to completion for many
    tests was faster but more importantly - the "Avg wait" time as measured by
    iostat was far lower implying the system would be more responsive.  It was
    also the case that "Avg wait ms" on the root filesystem was lower.  I
    tested it manually and while the system felt slightly more responsive
    while copying data to a USB stick, it was marginal enough that it could be
    my imagination.
    
    This patch: do not writeback filesystem pages in direct reclaim.
    
    When kswapd is failing to keep zones above the min watermark, a process
    will enter direct reclaim in the same manner kswapd does.  If a dirty page
    is encountered during the scan, this page is written to backing storage
    using mapping->writepage.
    
    This causes two problems.  First, it can result in very deep call stacks,
    particularly if the target storage or filesystem are complex.  Some
    filesystems ignore write requests from direct reclaim as a result.  The
    second is that a single-page flush is inefficient in terms of IO.  While
    there is an expectation that the elevator will merge requests, this does
    not always happen.  Quoting Christoph Hellwig;
    
    	The elevator has a relatively small window it can operate on,
    	and can never fix up a bad large scale writeback pattern.
    
    This patch prevents direct reclaim writing back filesystem pages by
    checking if current is kswapd.  Anonymous pages are still written to swap
    as there is not the equivalent of a flusher thread for anonymous pages.
    If the dirty pages cannot be written back, they are placed back on the LRU
    lists.  There is now a direct dependency on dirty page balancing to
    prevent too many pages in the system being dirtied which would prevent
    reclaim making forward progress.
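
    The core check, sketched against the 3.0-era shrink_page_list()
    (placement approximate):

        if (PageDirty(page) && page_is_file_cache(page) &&
            !current_is_kswapd()) {
                /* Direct reclaim must not write back file pages; leave
                 * the page for kswapd or the flusher threads. */
                goto keep_locked;
        }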
    
    Signed-off-by: Mel Gorman <mgorman@suse.de>
    Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
    Cc: Dave Chinner <david@fromorbit.com>
    Cc: Christoph Hellwig <hch@infradead.org>
    Cc: Johannes Weiner <jweiner@redhat.com>
    Cc: Wu Fengguang <fengguang.wu@intel.com>
    Cc: Jan Kara <jack@suse.cz>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Mel Gorman <mgorman@suse.de>
    Cc: Alex Elder <aelder@sgi.com>
    Cc: Theodore Ts'o <tytso@mit.edu>
    Cc: Chris Mason <chris.mason@oracle.com>
    Cc: Dave Hansen <dave@linux.vnet.ibm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    gormanm committed Nov 1, 2011
  5. ext4: Remove kernel_lock annotations

    The BKL is gone, these annotations are useless.
    
    Signed-off-by: Richard Weinberger <richard@nod.at>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
    richardweinberger committed Nov 7, 2011
  6. ext4: ignore journalled data options on remount if fs has no journal

    This avoids a confusing failure in the init scripts when the
    /etc/fstab has data=writeback or data=journal but the file system does
    not have a journal.  So check for this case explicitly, and warn the
    user that we are ignoring the (pointless, since they have no journal)
    data=* mount option.
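
    Roughly, the added check in ext4_remount() (the predicate and message
    text are sketched, not quoted):

        if (!EXT4_SB(sb)->s_journal && test_opt(sb, DATA_FLAGS))
                ext4_msg(sb, KERN_WARNING, "Remounting file system with "
                         "no journal; ignoring journalled data option");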
    
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
    tytso committed Nov 7, 2011
Commits on Nov 19, 2011
  1. Bluetooth: Drop rfcomm session reference only once for incoming session

    Move the decision to drop the reference for an incoming session into
    rfcomm_session_close() to get clearer
    rfcomm_session_hold()/rfcomm_session_put() pairs.
    
    Rebase by: Jaikumar Ganesh <jaikumarg@android.com>
    
    Change-Id: Ie23c02178453cbb04f3fb3ee07ffc88552a94612
    Signed-off-by: Ville Tervo <ville.tervo@nokia.com>
    Signed-off-by: Jaikumar Ganesh <jaikumarg@android.com>
    Ville Tervo committed Feb 26, 2010
Commits on Nov 16, 2011
  1. Update version in GIT

    committed Nov 16, 2011
  2. Change SD driver settings to match stock kernel on I9000 devices

    Fixes SD card corrupting on some (rare) devices.
    
    Change-Id: I6ed62f24d915d91cdb79cd65a97ec0b58a35c4ce
    pawitp committed Nov 6, 2011
  3. net: wireless: bcmdhd: Call init_ioctl() only if was started properly for WEXT
    
    Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
    Dmitry Shmidt committed Nov 9, 2011
  4. net: wireless: bcmdhd: Call init_ioctl() only if was started properly

    Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
    Dmitry Shmidt committed Nov 9, 2011
  5. net: wireless: bcmdhd: Fix possible memory leak in escan/iscan

    Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
    Dmitry Shmidt committed Nov 4, 2011
  6. cpufreq: interactive governor: default 20ms timer

    Change-Id: Ie9952f07b38667f2932474090044195c57976faa
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
    toddpoynor committed Nov 10, 2011
  7. cpufreq: interactive governor: go to intermediate hi speed before max

    * Add attribute hispeed_freq, which defaults to max.
    
    * Rename go_maxspeed_load to go_hispeed_load.
    
    * If go_hispeed_load is hit while at min speed, go to hispeed_freq;
      if it is hit while already above min speed, go to max speed (see
      the sketch below).
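
    A sketch of the resulting ramp-up decision (the per-CPU field names
    follow the governor's usual style and are assumptions):

        if (cpu_load >= go_hispeed_load) {
                if (pcpu->target_freq <= pcpu->policy->min)
                        new_freq = hispeed_freq;        /* intermediate hop */
                else
                        new_freq = pcpu->policy->max;   /* already ramping  */
        }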
    
    Change-Id: I1050dec5f013fc1177387352ba787a7e1c68703e
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
    toddpoynor committed Nov 9, 2011
  8. cpufreq: interactive governor: scale to max only if at min speed

    Change-Id: Ieffb2aa56b5290036285c948718be7be0d3af9e8
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
    toddpoynor committed Nov 4, 2011
  9. cpufreq: interactive governor: apply intermediate load on current speed

    Calculate intermediate speed by applying CPU load to current speed, not
    max speed.
    
    Change-Id: Idecf598b9a203b07c989c5d9e9c6efc67a1afc2e
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
    toddpoynor committed Oct 28, 2011
  10. input: gpio_input: don't print debounce message unless flag is set

    Change-Id: I29ccb32e795c5c3e4c51c3d3a209f5b55dfd7d94
    Signed-off-by: Dima Zavin <dima@android.com>
    Dima Zavin committed Nov 8, 2011
  11. net: wireless: bcm4329: Skip dhd_bus_stop() if bus is already down

    Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
    Dmitry Shmidt committed Nov 4, 2011
  12. net: wireless: bcmdhd: Skip dhd_bus_stop() if bus is already down

    Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
    Dmitry Shmidt committed Nov 4, 2011
  13. net: wireless: bcmdhd: Improve suspend/resume processing

    Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
    Dmitry Shmidt committed Nov 2, 2011
  14. net: wireless: bcmdhd: Check if FW is Ok for internal FW call

    Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
    Dmitry Shmidt committed Nov 2, 2011
  15. tcp: Don't nuke connections for the wrong protocol

    Currently, calling tcp_nuke_addr to reset IPv6 connections
    resets IPv4 connections as well, because all Android
    framework sockets are dual-stack (i.e., IPv6) sockets, and
    we don't check the source address to see if the connection
    was in fact an IPv4 connection.
    
    Fix this by checking the source address and not resetting
    the connection if it's a mapped address.
    
    Also slightly tweak the IPv4 code path, which doesn't check
    for mapped addresses either. This was not causing any
    problems because tcp_is_local normally always returns true
    for LOOPBACK4_IPV6 (127.0.0.6), because the loopback
    interface is configured as 127.0.0.0/8. However,
    checking explicitly for LOOPBACK4_IPV6 makes the code a bit
    more robust.
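
    The essence of the fix, sketched (field access is approximate for a
    3.0-era kernel):

        /* Dual-stack socket carrying IPv4 traffic: the source is a
         * v4-mapped address, so skip it when resetting IPv6. */
        if (family == AF_INET6 &&
            ipv6_addr_v4mapped(&inet6_sk(sk)->rcv_saddr))
                continue;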
    
    Bug: 5535055
    Change-Id: I4d6ed3497c5b8643c864783cf681f088cf6b8d2a
    Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
    lcolitti committed Nov 4, 2011
  16. net: wireless: Skip connect warning for CONFIG_CFG80211_ALLOW_RECONNECT

    Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
    Dmitry Shmidt committed Oct 28, 2011
  17. mm: avoid livelock on !__GFP_FS allocations

    Under the following conditions, __alloc_pages_slowpath can loop
    forever:
    gfp_mask & __GFP_WAIT is true
    gfp_mask & __GFP_FS is false
    reclaim and compaction make no progress
    order <= PAGE_ALLOC_COSTLY_ORDER
    
    The gfp conditions are normally invalid, because !__GFP_FS
    disables most of the reclaim methods that __GFP_WAIT would
    wait for.  However, these conditions happen very often during
    suspend and resume, when pm_restrict_gfp_mask() effectively
    converts all GFP_KERNEL allocations into __GFP_WAIT.
    
    The oom killer is not run because gfp_mask & __GFP_FS is false,
    but should_alloc_retry will always return true when order is less
    than PAGE_ALLOC_COSTLY_ORDER.  __alloc_pages_slowpath will
    loop forever between the rebalance label and should_alloc_retry,
    unless another thread happens to release enough pages to satisfy
    the allocation.
    
    Add a check to detect when PM has disabled __GFP_FS, and do not
    retry if reclaim is not making any progress.
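
    A sketch of the bail-out in __alloc_pages_slowpath() (the test for
    "PM masked out __GFP_FS" is paraphrased):

        /* Caller wanted __GFP_FS but PM stripped it via gfp_allowed_mask;
         * reclaim cannot make progress, so fail instead of looping. */
        if (!did_some_progress &&
            (gfp_mask & __GFP_FS) && !(gfp_allowed_mask & __GFP_FS))
                goto nopage;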
    
    [taken from patch on lkml by Mel Gorman, commit message by ccross]
    Change-Id: I864a24e9d9fd98bd0e3d6e9c1e85b6c1b766850e
    Signed-off-by: Colin Cross <ccross@android.com>
    Mel Gorman committed Oct 24, 2011
  18. net: wireless: bcm4329: Prohibit FW access in case of FW crash

    Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
    Dmitry Shmidt committed Oct 26, 2011
  19. net: wireless: bcmdhd: Adjust scan parameters for wl_cfg80211_connect()

    Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
    Dmitry Shmidt committed Oct 26, 2011
  20. net: wireless: bcmdhd: Update to version 5.90.125.94

    - Fix WFD interface removal
    - Fix profile update
    - Keep same mode for softap or WFD during early suspend
    - Add dhd_console_ms parameter access
    
    Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
    Dmitry Shmidt committed Oct 25, 2011
  21. Linux 3.0.8

    gregkh committed Oct 25, 2011
  22. crypto: ghash - Avoid null pointer dereference if no key is set

    commit 7ed47b7 upstream.
    
    The ghash_update function passes a pointer to gf128mul_4k_lle which will
    be NULL if ghash_setkey is not called or if the most recent call to
    ghash_setkey failed to allocate memory.  This causes an oops.  Fix this
    up by returning an error code in the null case.
    
    This is trivially triggered from unprivileged userspace through the
    AF_ALG interface by simply writing to the socket without setting a key.
    
    The ghash_final function has a similar issue, but triggering it requires
    a memory allocation failure in ghash_setkey _after_ at least one
    successful call to ghash_update.
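
    The guard itself is a one-liner in both ghash_update() and
    ghash_final(); a sketch (the context field is named after the table
    it points to):

        if (!ctx->gf128)
                return -ENOKEY; /* no key set: gf128mul_4k_lle() would
                                 * dereference NULL (see oops below) */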
    
      BUG: unable to handle kernel NULL pointer dereference at 00000670
      IP: [<d88c92d4>] gf128mul_4k_lle+0x23/0x60 [gf128mul]
      *pde = 00000000
      Oops: 0000 [#1] PREEMPT SMP
      Modules linked in: ghash_generic gf128mul algif_hash af_alg nfs lockd nfs_acl sunrpc bridge ipv6 stp llc
    
      Pid: 1502, comm: hashatron Tainted: G        W   3.1.0-rc9-00085-ge9308cf #32 Bochs Bochs
      EIP: 0060:[<d88c92d4>] EFLAGS: 00000202 CPU: 0
      EIP is at gf128mul_4k_lle+0x23/0x60 [gf128mul]
      EAX: d69db1f0 EBX: d6b8ddac ECX: 00000004 EDX: 00000000
      ESI: 00000670 EDI: d6b8ddac EBP: d6b8ddc8 ESP: d6b8dda4
       DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
      Process hashatron (pid: 1502, ti=d6b8c000 task=d6810000 task.ti=d6b8c000)
      Stack:
       00000000 d69db1f0 00000163 00000000 d6b8ddc8 c101a520 d69db1f0 d52aa000
       00000ff0 d6b8dde8 d88d310f d6b8a3f8 d52aa000 00001000 d88d502c d6b8ddfc
       00001000 d6b8ddf4 c11676ed d69db1e8 d6b8de24 c11679ad d52aa000 00000000
      Call Trace:
       [<c101a520>] ? kmap_atomic_prot+0x37/0xa6
       [<d88d310f>] ghash_update+0x85/0xbe [ghash_generic]
       [<c11676ed>] crypto_shash_update+0x18/0x1b
       [<c11679ad>] shash_ahash_update+0x22/0x36
       [<c11679cc>] shash_async_update+0xb/0xd
       [<d88ce0ba>] hash_sendpage+0xba/0xf2 [algif_hash]
       [<c121b24c>] kernel_sendpage+0x39/0x4e
       [<d88ce000>] ? 0xd88cdfff
       [<c121b298>] sock_sendpage+0x37/0x3e
       [<c121b261>] ? kernel_sendpage+0x4e/0x4e
       [<c10b4dbc>] pipe_to_sendpage+0x56/0x61
       [<c10b4e1f>] splice_from_pipe_feed+0x58/0xcd
       [<c10b4d66>] ? splice_from_pipe_begin+0x10/0x10
       [<c10b51f5>] __splice_from_pipe+0x36/0x55
       [<c10b4d66>] ? splice_from_pipe_begin+0x10/0x10
       [<c10b6383>] splice_from_pipe+0x51/0x64
       [<c10b63c2>] ? default_file_splice_write+0x2c/0x2c
       [<c10b63d5>] generic_splice_sendpage+0x13/0x15
       [<c10b4d66>] ? splice_from_pipe_begin+0x10/0x10
       [<c10b527f>] do_splice_from+0x5d/0x67
       [<c10b6865>] sys_splice+0x2bf/0x363
       [<c129373b>] ? sysenter_exit+0xf/0x16
       [<c104dc1e>] ? trace_hardirqs_on_caller+0x10e/0x13f
       [<c129370c>] sysenter_do_call+0x12/0x32
      Code: 83 c4 0c 5b 5e 5f c9 c3 55 b9 04 00 00 00 89 e5 57 8d 7d e4 56 53 8d 5d e4 83 ec 18 89 45 e0 89 55 dc 0f b6 70 0f c1 e6 04 01 d6 <f3> a5 be 0f 00 00 00 4e 89 d8 e8 48 ff ff ff 8b 45 e0 89 da 0f
      EIP: [<d88c92d4>] gf128mul_4k_lle+0x23/0x60 [gf128mul] SS:ESP 0068:d6b8dda4
      CR2: 0000000000000670
      ---[ end trace 4eaa2a86a8e2da24 ]---
      note: hashatron[1502] exited with preempt_count 1
      BUG: scheduling while atomic: hashatron/1502/0x10000002
      INFO: lockdep is turned off.
      [...]
    
    Signed-off-by: Nick Bowler <nbowler@elliptictech.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
    Nick Bowler committed Oct 20, 2011
  23. mm: fix race between mremap and removing migration entry

    commit 486cf46 upstream.
    
    I don't usually pay much attention to the stale "? " addresses in
    stack backtraces, but this lucky report from Pawel Sikora hints that
    mremap's move_ptes() has inadequate locking against page migration.
    
     3.0 BUG_ON(!PageLocked(p)) in migration_entry_to_page():
     kernel BUG at include/linux/swapops.h:105!
     RIP: 0010:[<ffffffff81127b76>]  [<ffffffff81127b76>]
                           migration_entry_wait+0x156/0x160
      [<ffffffff811016a1>] handle_pte_fault+0xae1/0xaf0
      [<ffffffff810feee2>] ? __pte_alloc+0x42/0x120
      [<ffffffff8112c26b>] ? do_huge_pmd_anonymous_page+0xab/0x310
      [<ffffffff81102a31>] handle_mm_fault+0x181/0x310
      [<ffffffff81106097>] ? vma_adjust+0x537/0x570
      [<ffffffff81424bed>] do_page_fault+0x11d/0x4e0
      [<ffffffff81109a05>] ? do_mremap+0x2d5/0x570
      [<ffffffff81421d5f>] page_fault+0x1f/0x30
    
    mremap's down_write of mmap_sem, together with i_mmap_mutex or lock,
    and pagetable locks, were good enough before page migration (with its
    requirement that every migration entry be found) came in, and enough
    while migration always held mmap_sem; but not enough nowadays, when
    there's memory hotremove and compaction.
    
    The danger is that move_ptes() lets a migration entry dodge around
    behind remove_migration_pte()'s back, so it's in the old location when
    looking at the new, then in the new location when looking at the old.
    
    Either mremap's move_ptes() must additionally take anon_vma lock(), or
    migration's remove_migration_pte() must stop peeking for is_swap_entry()
    before it takes pagetable lock.
    
    Consensus chooses the latter: we prefer to add overhead to migration
    than to mremapping, which gets used by JVMs and by exec stack setup.
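
    A sketch of the chosen direction, not the literal patch: only look at
    the entry once the pagetable lock is held:

        ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
        pte = *ptep;
        /* Decide only under the lock, where move_ptes() can no longer
         * slide the migration entry out from underneath us. */
        if (is_swap_pte(pte) && is_migration_entry(pte_to_swp_entry(pte))) {
                /* ... restore the pte from the migration entry ... */
        }
        pte_unmap_unlock(ptep, ptl);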
    
    Reported-and-tested-by: Paweł Sikora <pluto@agmk.net>
    Signed-off-by: Hugh Dickins <hughd@google.com>
    Acked-by: Andrea Arcangeli <aarcange@redhat.com>
    Acked-by: Mel Gorman <mgorman@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
    Hugh Dickins committed Oct 19, 2011