Commits on Apr 29, 2012
  1. Revert "touchscreen: atmel_mxt1386: re-enable MXT_FACTORY_TEST and EN…

    pershoot committed Apr 29, 2012
    …ABLE_NOISE_TEST_MODE"
    
    This reverts commit cd8aaa9.
  2. Revert "touchscreen: atmel_mxt1386: backport/sync mxt_ta_worker from I-9100 (GSII), Update 4"

    pershoot committed Apr 29, 2012
    
    This reverts commit 9e2bd0e.
  3. Revert "touchscreen: atmel_mxt1386: add mxt_fhe_worker from GSII (I-9100), Update 4"

    pershoot committed Apr 29, 2012
    
    This reverts commit 6926fbc.
  4. Revert "touchscreen: atmel_mxt1386: remove set_mode_for_ta from conditional (in resume)"

    pershoot committed Apr 29, 2012
    
    This reverts commit 3e1bf1e.
  5. ARM: SMP: use a timing out completion for cpu hotplug

    Russell King authored and pershoot committed Jan 20, 2012
    Rather than open-coding the jiffy-based wait, and polling for the
    secondary CPU to come online, use a completion instead.  This
    removes the need to poll, instead we will be notified when the
    secondary CPU has initialized.
    
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
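    
    A minimal sketch of the pattern this commit describes (the real change is in arch/arm/kernel/smp.c; the function names here are illustrative):
    
        /* Boot CPU: block until the secondary signals, instead of
         * polling cpu_online() in a jiffy loop. */
        static DECLARE_COMPLETION(cpu_running);
    
        static int wait_for_secondary(unsigned int cpu)
        {
                if (!wait_for_completion_timeout(&cpu_running,
                                                 msecs_to_jiffies(1000))) {
                        pr_crit("CPU%u: failed to come online\n", cpu);
                        return -EIO;
                }
                return 0;
        }
    
        /* Secondary CPU: notify the waiter once initialization is done. */
        static void secondary_init_done(void)
        {
                complete(&cpu_running);
        }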
  6. ARM: fix rcu stalls on SMP platforms

    Russell King authored and pershoot committed Jan 19, 2012
    We can stall RCU processing on SMP platforms if a CPU sits in its idle
    loop for a long time.  This happens because we don't call irq_enter()
    and irq_exit() around generic_smp_call_function_interrupt() and
    friends.  Add the necessary calls, and remove the one from within
    ipi_timer(), so that they're all in a common place.
    
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
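    
    A sketch of the fix as described: the cross-CPU call handlers get wrapped in irq_enter()/irq_exit() so RCU sees the CPU leave its idle quiescent state while servicing the IPI (handler name is illustrative):
    
        static void handle_call_function_ipi(void)
        {
                irq_enter();
                generic_smp_call_function_interrupt();
                irq_exit();     /* lets RCU note the quiescent transition */
        }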
  7. cpufreq: interactive: remove unused target_validate_time_in_idle

    toddpoynor authored and pershoot committed Apr 24, 2012
    Change-Id: I37c5085b91318242612440dfd775ad762996612f
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
  8. cpufreq: interactive: Boost frequency on touchscreen input

    toddpoynor authored and pershoot committed Apr 25, 2012
    Based on previous patches by Tero Kristo <tero.kristo@nokia.com>,
    Brian Steuer <bsteuer@codeaurora.org>,
    David Ng <dave@codeaurora.org>,
    Antti P Miettinen <amiettinen@nvidia.com>, and
    Thomas Renninger <trenn@suse.de>
    
    Change-Id: Ic55fedcf6f9310f43a7022fb88e23b0392122769
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
    
    Conflicts:
    
    	drivers/cpufreq/cpufreq_interactive.c
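    
    Conceptually, the governor registers an input handler on touch devices and jumps to hispeed_freq when events arrive. An illustrative sketch, assuming a helper like request_interactive_boost() (not the exact driver symbols):
    
        static void boost_on_input_event(struct input_handle *handle,
                                         unsigned int type, unsigned int code,
                                         int value)
        {
                /* On a completed touch report, ask the governor to raise
                 * all CPUs to at least hispeed_freq. */
                if (type == EV_SYN && code == SYN_REPORT)
                        request_interactive_boost();
        }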
  9. cpufreq: Separate speed target revalidate time and initial set time

    toddpoynor authored and pershoot committed Apr 20, 2012
    Allow speed drop after min_sample_time elapses from the time the
    current speed was last re-validated as appropriate for the
    current load / input boost.
    
    Allow speed bump after min_sample_time (or above_hispeed_delay)
    elapses from the time the current speed was originally set.
    
    Change-Id: Ic25687a7a53d25e6544c30c47d7ab6f27a47bee8
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
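    
    A sketch of the two-timestamp rule described above (struct, field, and helper names are illustrative, not the exact driver symbols):
    
        static bool speed_change_allowed(struct cpu_state *p,
                                         unsigned int new_freq, u64 now)
        {
                /* Drop: gate on when the speed was last re-validated. */
                if (new_freq < p->target_freq)
                        return now - p->target_validate_time >= min_sample_time;
                /* Bump: gate on when the speed was originally set. */
                if (new_freq > p->target_freq)
                        return now - p->target_set_time >= min_sample_time;
                return true;
        }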
  10. cpufreq: interactive: base hispeed bump on target freq, not actual

    toddpoynor authored and pershoot committed Apr 19, 2012
    For systems that set a common speed for all CPUs, checking current
    speed here could bypass the intermediate hispeed bump decision for
    this CPU when another CPU was already at hispeed.  This could
    result in an overly high setting (for all CPUs) in situations
    where all CPUs were about to drop to load levels that map to
    hispeed or below.
    
    Change-Id: I186f23dcfc5e2b6336cab8b0327f0c8a9a4482bc
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
  11. cpufreq: interactive: adjust code and documentation to match

    toddpoynor authored and pershoot committed Apr 18, 2012
    Change-Id: If59c668d514a29febe5c35404fd9d01df8548eb1
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
    
    Conflicts:
    
    	Documentation/cpu-freq/governors.txt
  12. cpufreq: interactive: configurable delay before raising above hispeed

    toddpoynor authored and pershoot committed Apr 14, 2012
    Change-Id: I4d6ac40b23a3790d48e30c37408284e9f955e8fa
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
  13. cpufreq: interactive: don't drop speed if recently at higher load

    toddpoynor authored and pershoot committed Apr 7, 2012
    Apply min_sample_time to the last time the current target speed
    was originally requested or re-validated as appropriate for the
    current load, not to the time since the current speed was
    originally set.  Avoids periodic dips in speed during bursty
    loads.
    
    Change-Id: I250bda657985de60373f9897cc41f480664d51a1
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
    
    Conflicts:
    
    	drivers/cpufreq/cpufreq_interactive.c
  14. cpufreq: interactive: set at least hispeed when above hispeed load

    toddpoynor authored and pershoot committed Apr 7, 2012
    If load is above go_hispeed_load, always go to at least hispeed_freq,
    even when reducing speed from a higher speed, not just when jumping
    up from minimum speed.  Avoids running at a lower than intended
    speed after a burst of even higher load.
    
    Change-Id: I5b9d2a15ba25ce609b21bac7c724265cf6838dee
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
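    
    The clamp described above, as a hedged sketch (variable names approximate the governor's, and the surrounding logic is simplified):
    
        if (cpu_load >= go_hispeed_load) {
                new_freq = pcpu->target_freq * cpu_load / 100;
                /* Never pick a target below hispeed_freq here, even when
                 * coming down from a higher speed. */
                if (new_freq < hispeed_freq)
                        new_freq = hispeed_freq;
        }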
  15. cpufreq: interactive: apply intermediate load to max speed not current

    toddpoynor authored and pershoot committed Apr 6, 2012
    Evaluate spikes in load (below go_hispeed_load) against the maximum
    speed supported by the device, not the current speed (which tends to
    make it too difficult to raise speed to intermediate levels until
    very busy).
    
    Change-Id: Ib937006abf8bedb60891a739acd733e89b732ae0
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
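    
    A sketch of the change: intermediate loads are mapped onto the maximum supported speed rather than the current one (illustrative, simplified):
    
        if (cpu_load < go_hispeed_load)
                /* Scaling against policy->max keeps intermediate speeds
                 * reachable even when the current speed is low. */
                new_freq = pcpu->policy->max * cpu_load / 100;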
  16. cpufreq: interactive: Choose greater of short-term load or long-term load

    pershoot committed Apr 26, 2012
    
    -reference github.com/arco
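    
    The idea in brief, as a sketch in the governor's style:
    
        /* cpu_load: load since the last timer sample (short-term).
         * load_since_change: load since the speed last changed (long-term).
         * Taking the greater avoids dropping speed on a momentary dip. */
        if (load_since_change > cpu_load)
                cpu_load = load_since_change;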
  17. cpufreq interactive governor: event tracing

    toddpoynor authored and pershoot committed Feb 17, 2012
    Change-Id: Ic13614a3da2faa2d4bd215ca3eb7191614f0cf66
    Signed-off-by: Todd Poynor <toddpoynor@google.com>
  18. ARM: cache-v7: Disable preemption when reading CCSIDR (no tracing)

    rabinv authored and pershoot committed Feb 13, 2012
    This patch (http://goo.gl/iYyln) breaks the kernel boot when lockdep is enabled.
    
    v7_setup (called before the MMU is enabled) calls v7_flush_dcache_all,
    and the save_and_disable_irqs added by this patch ends up calling
    into lockdep C code (trace_hardirqs_off()) when we are in no position
    to execute it (no stack, no MMU).
    
    The following fixes it.
    
    Original Patch:
     Stephen Boyd <sboyd@codeaurora.org>
    
    -Reference:
     https://lkml.org/lkml/2012/2/13/269
     dorimanx
  19. ARM: 7321/1: cache-v7: Disable preemption when reading CCSIDR

    bebarino authored and pershoot committed Mar 19, 2012
    Conflicts:
    
    	arch/arm/mm/cache-v7.S
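    
    Why the pair of accesses must be atomic, sketched in C (the actual change is in the assembly of arch/arm/mm/cache-v7.S; the non-tracing irq save avoids the lockdep calls mentioned above):
    
        unsigned long flags, ccsidr;
    
        raw_local_irq_save(flags);      /* nothing may run between the two */
        /* CSSELR selects which cache CCSIDR describes... */
        asm volatile("mcr p15, 2, %0, c0, c0, 0" : : "r" (level));
        isb();
        /* ...so CCSIDR must be read before anything can reselect it. */
        asm volatile("mrc p15, 1, %0, c0, c0, 0" : "=r" (ccsidr));
        raw_local_irq_restore(flags);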
  20. lib/crc: add slice by 8 algorithm to crc32.c

    fzsfw authored and pershoot committed Nov 1, 2011
    Add support for slice by 8 to existing crc32 algorithm.  Also modify
    gen_crc32table.c to only produce table entries that are actually used.
    The parameters CRC_LE_BITS and CRC_BE_BITS determine the number of bits in
    the input array that are processed during each step.  Generally the more
    bits the faster the algorithm is but the more table data required.
    
    Using an x86_64 Opteron machine running at 2100MHz the following table was
    collected with a pre-warmed cache by computing the crc 1000 times on a
    buffer of 4096 bytes.
    
    BITS    Size   LE Cycles/byte   BE Cycles/byte
    ----------------------------------------------
       1     873       41.65            34.60
       2    1097       25.43            29.61
       4    1057       13.29            15.28
       8    2913        7.13             8.19
      32    9684        2.80             2.82
      64   18178        1.53             1.53
    
    BITS is the value of CRC_LE_BITS or CRC_BE_BITS, and Size is the
    size of crc32.o's text segment (code plus table data) when both
    the LE and BE versions are set to BITS. The old default was 8,
    which actually selected the 32-bit algorithm. In this version the
    value 8 selects the standard 8-bit algorithm, and two new values,
    32 and 64, are introduced to select the slice-by-4 and slice-by-8
    algorithms respectively.
    
    The current version of crc32.c by default uses the slice by 4 algorithm
    which requires about 2.8 cycles per byte.  The slice by 8 algorithm is
    roughly 2X faster and enables packet processing at over 1GB/sec on a
    typical 2-3GHz system.
    
    Signed-off-by: Bob Pearson <rpearson@systemfabricworks.com>
    Cc: Roland Dreier <roland@kernel.org>
    Cc: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Ezekeel <notezekeel@googlemail.com>
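    
    For reference, the heart of a little-endian slice-by-8 loop looks like this (a generic sketch, not the exact crc32.c code; t0..t7 stand for the eight 256-entry lookup tables):
    
        while (len >= 8) {
                u32 lo = crc ^ get_unaligned_le32(p);
                u32 hi = get_unaligned_le32(p + 4);
    
                /* Eight independent table lookups replace 64 serial
                 * bit-at-a-time steps per 8 bytes of input. */
                crc = t7[lo & 0xff] ^ t6[(lo >> 8) & 0xff] ^
                      t5[(lo >> 16) & 0xff] ^ t4[lo >> 24] ^
                      t3[hi & 0xff] ^ t2[(hi >> 8) & 0xff] ^
                      t1[(hi >> 16) & 0xff] ^ t0[hi >> 24];
                p += 8;
                len -= 8;
        }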
  21. arm: remove stale export of 'sha_transform'

    torvalds authored and pershoot committed Aug 19, 2011
    The generic library code already exports the generic function, this was
    left-over from the ARM-specific version that just got removed.
    
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. [PATCH v1] tegra: remove the clock sleepable WARNING

    Wei Ni authored and pershoot committed Feb 13, 2012
  23. arm: Allow CPU-supported unaligned accesses

    Brent DeGraaf authored and pershoot committed Apr 18, 2011
    This change reconfigures the CPU to allow CPU-supported unaligned
    accesses, which are generally faster than software-only fixups,
    resulting in fewer alignment exceptions.
    
    Signed-off-by: Brent DeGraaf <bdegraaf@codeaurora.org>
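    
    On ARMv7 this amounts to clearing the A (alignment check) bit in the System Control Register; a hedged sketch (the function name is illustrative):
    
        static inline void cpu_allow_unaligned(void)
        {
                unsigned long sctlr;
    
                asm volatile("mrc p15, 0, %0, c1, c0, 0" : "=r" (sctlr));
                sctlr &= ~(1 << 1);     /* clear A: no fault on unaligned access */
                asm volatile("mcr p15, 0, %0, c1, c0, 0" : : "r" (sctlr));
        }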
  24. binder: Quiet binder

    labbott authored and pershoot committed Jun 17, 2011
    The majority of the binder messages are not very informative
    or useful to the reader. Make them available via debug
    mechanisms instead.
    
    Change-Id: Ie0dbd924e583bf045a99a04d812b52a668b6cbb2
    Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
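    
    The mechanism, roughly: unconditional prints become mask-gated debug calls, e.g. (an illustrative example; binder_debug() is the driver's existing mask-checking macro):
    
        binder_debug(BINDER_DEBUG_TRANSACTION,
                     "binder: %d:%d transaction complete\n",
                     proc->pid, thread->pid);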
  25. arm: remove "optimized" SHA1 routines

    torvalds authored and pershoot committed Aug 19, 2011
    Since commit 1eb19a12bd22 ("lib/sha1: use the git implementation of
    SHA-1"), the ARM SHA1 routines no longer work.  The reason? They
    depended on the larger 320-byte workspace, and now the sha1 workspace is
    just 16 words (64 bytes).  So the assembly version would overwrite the
    stack randomly.
    
    The optimized asm version is also probably slower than the new improved
    C version, so there's no reason to keep it around.  At least that was
    the case in git, where what appears to be the same assembly language
    version was removed two years ago because the optimized C BLK_SHA1 code
    was faster.
    
    Reported-and-tested-by: Joachim Eastwood <manabian@gmail.com>
    Cc: Andreas Schwab <schwab@linux-m68k.org>
    Cc: Nicolas Pitre <nico@fluxnic.net>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  26. lib/sha1: use the git implementation of SHA-1

    Mandeep Singh Baines authored and pershoot committed Aug 19, 2011
    For ChromiumOS, we use SHA-1 to verify the integrity of the root
    filesystem.  The speed of the kernel sha-1 implementation has a major
    impact on our boot performance.
    
    To improve boot performance, we investigated using the heavily optimized
    sha-1 implementation used in git.  With the git sha-1 implementation, we
    see an 11.7% improvement in boot time.
    
    10 reboots, remove slowest/fastest.
    
    Before:
    
      Mean: 6.58 seconds Stdev: 0.14
    
    After (with git sha-1, this patch):
    
      Mean: 5.89 seconds Stdev: 0.07
    
    The other cool thing about the git SHA-1 implementation is that it only
    needs 64 bytes of stack for the workspace while the original kernel
    implementation needed 320 bytes.
    
    Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
    Cc: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
    Cc: Nicolas Pitre <nico@cam.org>
    Cc: Herbert Xu <herbert@gondor.apana.org.au>
    Cc: David S. Miller <davem@davemloft.net>
    Cc: linux-crypto@vger.kernel.org
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
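    
    A usage sketch for the lib/sha1.c API of this era (the caller owns the workspace, which this patch shrinks to 16 words; `data` is assumed to point at one input block):
    
        #include <linux/cryptohash.h>
    
        __u32 digest[SHA_DIGEST_WORDS];         /* 5 words */
        __u32 ws[SHA_WORKSPACE_WORDS];          /* 16 words = 64 bytes */
    
        sha_init(digest);
        sha_transform(digest, data, ws);        /* one 64-byte input block */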
  27. lib: Introduce some memory copy macros and functions

    Miao Xie authored and pershoot committed Jun 5, 2011
    The kernel's memcpy and memmove are very inefficient, while the
    glibc versions are quite fast; in some cases they are 10 times
    faster than the kernel versions. So introduce some of glibc's
    memory copy macros and functions to improve the kernel versions'
    performance.
    
    The strategy of the memory functions is:
    1. Copy bytes until the destination pointer is aligned.
    2. Copy words in unrolled loops. If the source and destination
       are not aligned in the same way, use word memory operations,
       but shift and merge two read words before writing.
    3. Copy the few remaining bytes.
    
    [ported to 3.0]
    
    Conflicts:
    
    	lib/Makefile
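    
    A minimal sketch of the three-step strategy above (word-aligned case only; not the actual glibc-derived code):
    
        static void *memcpy_sketch(void *dst, const void *src, size_t n)
        {
                unsigned char *d = dst;
                const unsigned char *s = src;
    
                /* 1. Byte copies until the destination is word-aligned. */
                while (n && ((unsigned long)d & (sizeof(long) - 1))) {
                        *d++ = *s++;
                        n--;
                }
    
                /* 2. Word copies (the shift-and-merge path for a
                 *    misaligned source is omitted here). */
                if (!((unsigned long)s & (sizeof(long) - 1))) {
                        while (n >= sizeof(long)) {
                                *(long *)d = *(const long *)s;
                                d += sizeof(long);
                                s += sizeof(long);
                                n -= sizeof(long);
                        }
                }
    
                /* 3. The few remaining bytes. */
                while (n--)
                        *d++ = *s++;
                return dst;
        }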
  28. netfilter: xt_qtaguid: fix ipv6 protocol lookup

    wing-github authored and pershoot committed Apr 17, 2012
    When updating the stats for a given uid, the code would incorrectly
    assume IPv4 and pick up the wrong protocol for IPv6 packets.
    
    Change-Id: Iea4a635012b4123bf7aa93809011b7b2040bb3d5
    Signed-off-by: JP Abgrall <jpa@google.com>
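    
    Conceptually, the IPv6 transport protocol must be found by walking the extension-header chain rather than reading a fixed field as in IPv4. A simplified sketch (the kernel provides helpers such as ipv6_find_hdr() for the full job, including fragment and AH handling):
    
        u8 nexthdr = ipv6_hdr(skb)->nexthdr;
        unsigned int offset = sizeof(struct ipv6hdr);
    
        while (ipv6_ext_hdr(nexthdr) && nexthdr != NEXTHDR_NONE) {
                const struct ipv6_opt_hdr *hp =
                        (const void *)(skb->data + offset);
                nexthdr = hp->nexthdr;
                offset += ipv6_optlen(hp);
        }
        /* nexthdr now holds the transport protocol (TCP, UDP, ...). */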
  29. netfilter: qtaguid: initialize a local var to keep compiler happy.

    wing-github authored and pershoot committed Apr 14, 2012
    There was a case where it might have seemed that new_tag_stat was
    used while uninitialized.
    Added a comment explaining why that is impossible, and a BUG()
    in case the logic ever changes.
    
    Change-Id: I1eddd1b6f754c08a3bf89f7e9427e5dce1dfb081
    Signed-off-by: JP Abgrall <jpa@google.com>
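    
    The defensive pattern described, in outline (illustrative; found_existing_entry and create_entry() are placeholder names, not the driver's code):
    
        struct tag_stat *new_tag_stat = NULL;
    
        if (found_existing_entry)
                new_tag_stat = existing;        /* path A */
        else
                new_tag_stat = create_entry();  /* path B covers the rest */
    
        /* Unreachable by the logic above; traps it if that ever changes. */
        if (!new_tag_stat)
                BUG();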