Commits on Jun 16, 2010
  1. Merge branch 'configs-2.6.34'

    Oleksandr Natalenko authored
  2. configs-2.6.34: update config for Dell Inspiron 1525

    Oleksandr Natalenko authored
  3. Merge branch 'version-2.6.34'

    Oleksandr Natalenko authored
  4. version-2.6.34: bump to v2.6.34-pf4

    Oleksandr Natalenko authored
  5. fix merge conflict

    Oleksandr Natalenko authored
  6. fix revert commit conflict

    Oleksandr Natalenko authored
  7. fix revert conflict

    Oleksandr Natalenko authored
  8. Merge remote branch 'l7filter'

    Oleksandr Natalenko authored
  9. fix ext4 merge conflict

    Oleksandr Natalenko authored
  10. Merge remote branch 'misc-2.6.34'

    Oleksandr Natalenko authored
  11. Merge remote branch 'march-native'

    Oleksandr Natalenko authored
  12. Merge remote branch 'expose-processor-select'

    Oleksandr Natalenko authored
  13. Merge remote branch 'acpi-fixes'

    Oleksandr Natalenko authored
  14. Merge remote branch 'acpi-dsdt'

    Oleksandr Natalenko authored
  15. @damentz
  16. @damentz
  17. @damentz

    slub: move kmem_cache_node into its own cacheline

    Alexander Duyck authored damentz committed
    This patch is meant to improve the performance of SLUB by moving the local
    kmem_cache_node lock into its own cacheline, separate from kmem_cache.
    This is accomplished by simply removing the local_node when NUMA is enabled.
    
    On my system with 2 nodes I saw around a 5% performance increase, with
    hackbench times dropping from 6.2 seconds to 5.9 seconds on average.  I
    suspect the performance gain would increase as the number of nodes
    increases, but I do not currently have the data to back that up.
    
    Bugzilla-Reference: http://bugzilla.kernel.org/show_bug.cgi?id=15713
    Cc: <stable@kernel.org>
    Reported-by: Alex Shi <alex.shi@intel.com>
    Tested-by: Alex Shi <alex.shi@intel.com>
    Acked-by: Yanmin Zhang <yanmin_zhang@linux.intel.com>
    Acked-by: Christoph Lameter <cl@linux-foundation.org>
    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
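
    As a rough illustration of the idea (a toy, user-space sketch; the real
    kmem_cache and kmem_cache_node are far larger, and these field names are
    stand-ins), dropping the embedded node keeps the node's lock off the
    cachelines that hold the cache's own hot fields:

        #include <stdio.h>

        #define MAX_NUMNODES 4  /* toy value */

        struct kmem_cache_node { int lock; unsigned long nr_partial; };

        /* Before: node 0 state embedded in the cache, so its lock can share a
         * cacheline with frequently read fields of kmem_cache itself. */
        struct kmem_cache_embedded {
                unsigned long flags;
                int size;
                struct kmem_cache_node local_node;
                struct kmem_cache_node *node[MAX_NUMNODES];
        };

        /* After: every node, including node 0, is allocated separately, so
         * each node lock lives in its own cacheline away from kmem_cache. */
        struct kmem_cache_separate {
                unsigned long flags;
                int size;
                struct kmem_cache_node *node[MAX_NUMNODES];
        };

        int main(void)
        {
                printf("embedded: %zu bytes, separate: %zu bytes\n",
                       sizeof(struct kmem_cache_embedded),
                       sizeof(struct kmem_cache_separate));
                return 0;
        }
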
  18. @fenrus75 @damentz

    ondemand: Make the iowait-is-busy-time a sysfs tunable

    fenrus75 authored damentz committed
    Pavel Machek pointed out that not all CPUs idle efficiently at high
    frequency. Specifically, older Intel and various AMD CPUs would see
    higher power usage when copying files from USB.
    
    Mike Chan pointed out that the same is true for various ARM chips
    as well.
    
    Thomas Renninger suggested making this a sysfs tunable with a
    reasonable default.
    
    This patch adds a sysfs tunable for the new behavior, and uses
    a very simple function to determine a reasonable default, depending
    on the CPU vendor/type.
    
    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Acked-by: Rik van Riel <riel@redhat.com>
    Acked-by: Pavel Machek <pavel@ucw.cz>
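
    A minimal sketch of the approach (user-space toy; the vendor/model cutoff
    and the default_io_is_busy/store_io_is_busy names are illustrative
    assumptions, not the exact kernel code):

        #include <stdio.h>
        #include <string.h>

        struct tuners { int io_is_busy; };  /* 1 = count iowait as busy */

        /* Pick a reasonable default from the CPU type: CPUs assumed to idle
         * efficiently at high frequency (modelled here as newer Intel parts)
         * default to on, everything else to off. The exact cutoff is a guess. */
        static int default_io_is_busy(const char *vendor, int family, int model)
        {
                if (!strcmp(vendor, "GenuineIntel") && family >= 6 && model >= 15)
                        return 1;
                return 0;
        }

        /* What a sysfs store handler boils down to: parse the written string
         * and clamp it to a boolean. */
        static int store_io_is_busy(struct tuners *t, const char *buf)
        {
                unsigned int input;

                if (sscanf(buf, "%u", &input) != 1)
                        return -1;
                t->io_is_busy = !!input;
                return 0;
        }

        int main(void)
        {
                struct tuners t = {
                        .io_is_busy = default_io_is_busy("GenuineIntel", 6, 23),
                };

                printf("default io_is_busy = %d\n", t.io_is_busy);
                store_io_is_busy(&t, "0\n");        /* echo 0 > io_is_busy */
                printf("after writing 0:    %d\n", t.io_is_busy);
                return 0;
        }
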
  19. @fenrus75 @damentz

    ondemand: Solve a big performance issue by counting IOWAIT time as busy

    fenrus75 authored damentz committed
    The ondemand cpufreq governor uses CPU busy time (i.e., not-idle time) as
    a measure for scaling the CPU frequency up or down.
    If the CPU is busy, the frequency scales up; if it is idle, the frequency
    scales down. Effectively, it uses the CPU busy time as a proxy
    variable for the more nebulous "how critical is performance right now"
    question.
    
    This algorithm falls flat on its face for workloads that alternate
    between being disk and CPU bound, such as the ever-popular "git grep",
    but also things like program startup and maildir-using email
    clients... much to the chagrin of Andrew Morton.
    
    This patch changes the ondemand algorithm to count iowait time as busy,
    not idle, time. As the cases above show, iowait is often performance
    critical, and by counting iowait, the proxy variable becomes a more
    accurate representation of the "how critical is performance" question.
    
    The problem and fix are both verified with the "perf timechart" tool.
    
    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Dave Jones <davej@redhat.com>
    Reviewed-by: Rik van Riel <riel@redhat.com>
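
    To make the effect concrete, here is a toy calculation (not the governor's
    actual code) of how the load estimate changes once iowait is charged as
    busy time; the idle figure is assumed to include iowait, as the governor's
    idle accounting does:

        #include <stdio.h>

        /* Per-sample time deltas in arbitrary ticks; idle includes iowait. */
        struct sample { unsigned long wall, idle, iowait; };

        /* Toy version of the governor's load estimate. With io_is_busy set,
         * time spent waiting on I/O no longer counts as idle, so an
         * alternating disk/CPU workload keeps the estimated load (and hence
         * the chosen frequency) high. */
        static unsigned int estimated_load(const struct sample *s, int io_is_busy)
        {
                unsigned long idle = s->idle;

                if (io_is_busy && idle >= s->iowait)
                        idle -= s->iowait;

                return s->wall ? 100 * (s->wall - idle) / s->wall : 0;
        }

        int main(void)
        {
                /* 100 ticks of wall time: 50 running, 50 idle (40 of it iowait). */
                struct sample s = { .wall = 100, .idle = 50, .iowait = 40 };

                printf("iowait counted as idle: load = %u%%\n",
                       estimated_load(&s, 0));
                printf("iowait counted as busy: load = %u%%\n",
                       estimated_load(&s, 1));
                return 0;
        }
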
  20. @fenrus75 @damentz

    sched: Introduce get_cpu_iowait_time_us()

    fenrus75 authored damentz committed
    For the ondemand cpufreq governor, it is desirable that iowait
    time be micro-accounted in the same way as idle time is.
    
    This patch introduces the infrastructure to account and expose
    this information via the get_cpu_iowait_time_us() function.
    
    [akpm@linux-foundation.org: fix CONFIG_NO_HZ=n build]
    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Reviewed-by: Rik van Riel <riel@redhat.com>
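
    A self-contained sketch of the accounting idea (a toy model of the per-CPU
    state, not the tick code itself; names and fields are approximations):

        #include <stdio.h>

        /* Toy per-CPU accounting state, loosely modelled on the tick code. */
        struct cpu_stats {
                unsigned long long idle_us;     /* plain idle time  */
                unsigned long long iowait_us;   /* iowait idle time */
                unsigned long long last_update_us;
        };

        /* When an idle period ends, charge it to the iowait bucket if a task
         * on this CPU was blocked on I/O, otherwise to the plain-idle bucket. */
        static void account_idle_exit(struct cpu_stats *st,
                                      unsigned long long entered_us,
                                      unsigned long long now_us, int was_iowait)
        {
                unsigned long long delta = now_us - entered_us;

                if (was_iowait)
                        st->iowait_us += delta;
                else
                        st->idle_us += delta;
                st->last_update_us = now_us;
        }

        /* Analogue of get_cpu_iowait_time_us(): return the accumulated iowait
         * time in microseconds and, optionally, when it was last updated. */
        static unsigned long long get_cpu_iowait_time_us(const struct cpu_stats *st,
                                                         unsigned long long *last_update)
        {
                if (last_update)
                        *last_update = st->last_update_us;
                return st->iowait_us;
        }

        int main(void)
        {
                struct cpu_stats st = { 0 };
                unsigned long long last;

                account_idle_exit(&st, 500, 2000, 1);   /* 1500 us of iowait     */
                account_idle_exit(&st, 3000, 5000, 0);  /* 2000 us of plain idle */
                printf("iowait = %llu us, last update at %llu us\n",
                       get_cpu_iowait_time_us(&st, &last), last);
                return 0;
        }
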
  21. @fenrus75 @damentz

    sched: Eliminate the ts->idle_lastupdate field

    fenrus75 authored damentz committed
    Now that the only user of ts->idle_lastupdate is update_ts_time_stats(),
    the entire field can be eliminated.
    
    In update_ts_time_stats(), idle_lastupdate is first set to "now",
    and a few lines later, the only user is an if() statement that
    assigns a variable either to "now" or to ts->idle_lastupdate,
    which has the value of "now" at that point.
    
    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Reviewed-by: Rik van Riel <riel@redhat.com>
  22. @fenrus75 @damentz

    sched: Fold updating of the last_update_time_info into update_ts_time_stats()

    fenrus75 authored damentz committed
    
    This patch folds the updating of the last_update_time into the
    update_ts_time_stats() function, and updates the callers.
    
    This allows for further cleanups that are done in the next patch.
    
    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Reviewed-by: Rik van Riel <riel@redhat.com>
  23. @fenrus75 @damentz

    sched: Update the idle statistics in get_cpu_idle_time_us()

    fenrus75 authored damentz committed
    Right now, get_cpu_idle_time_us() only reports the idle statistics
    up to the point at which the CPU last entered idle, not what is valid right now.
    
    This patch adds an update of the idle statistics to get_cpu_idle_time_us(),
    so that calling this function always returns statistics that are accurate
    at the point of the call.
    
    This includes resetting the start of the idle time for accounting purposes
    to avoid double accounting.
    
    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Reviewed-by: Rik van Riel <riel@redhat.com>
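
    The behavioural change can be sketched as follows (toy code with made-up
    field names; the point is that the in-progress idle period is folded in
    and restarted on every query, so nothing is counted twice):

        #include <stdio.h>

        /* Toy idle-accounting state for one CPU. */
        struct idle_stats {
                unsigned long long idle_us;        /* completed idle time        */
                unsigned long long idle_entry_us;  /* start of current idle span */
                int idle_active;                   /* CPU currently idle?        */
        };

        /* Fold the in-progress idle period into the total and restart it at
         * 'now', so the same span is never accounted twice. */
        static void update_idle_stats(struct idle_stats *st, unsigned long long now_us)
        {
                if (st->idle_active) {
                        st->idle_us += now_us - st->idle_entry_us;
                        st->idle_entry_us = now_us;
                }
        }

        /* Analogue of the patched get_cpu_idle_time_us(): update first, so the
         * caller sees a value accurate at the time of the call, not at the
         * last idle entry or exit. */
        static unsigned long long get_idle_time_us(struct idle_stats *st,
                                                   unsigned long long now_us)
        {
                update_idle_stats(st, now_us);
                return st->idle_us;
        }

        int main(void)
        {
                struct idle_stats st = { .idle_active = 1, .idle_entry_us = 1000 };

                printf("query at t=4000: %llu us idle\n", get_idle_time_us(&st, 4000));
                printf("query at t=6000: %llu us idle\n", get_idle_time_us(&st, 6000));
                return 0;
        }
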
  24. @fenrus75 @damentz

    sched: Introduce a function to update the idle statistics

    fenrus75 authored damentz committed
    Currently, two places update the idle statistics (and more to
    come later in this series).
    
    This patch creates a helper function for updating these statistics.
    
    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Reviewed-by: Rik van Riel <riel@redhat.com>
  25. @fenrus75 @damentz

    sched: Add a comment to get_cpu_idle_time_us()

    fenrus75 authored damentz committed
    The exported function get_cpu_idle_time_us() has no comment
    describing it; add a kerneldoc comment.
    
    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Reviewed-by: Rik van Riel <riel@redhat.com>
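
    For reference, a kerneldoc block for such a function looks roughly like
    this (the wording below is illustrative, not the exact comment that was
    merged):

        /**
         * get_cpu_idle_time_us - get the total idle time of a CPU
         * @cpu: CPU number to query
         * @last_update_time: variable to store the update time in
         *
         * Return the cumulative idle time (since boot) for a given CPU, in
         * microseconds. The statistics are updated before they are returned,
         * so the value is accurate at the moment of the call.
         *
         * This function returns -1 if NOHZ is not enabled.
         */
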
  26. @jankara @damentz

    mm: Stop background writeback if there is other work queued for the thread

    jankara authored damentz committed
    
    Signed-off-by: Jan Kara <jack@suse.cz>
  27. @damentz

    writeback: limit write_cache_pages integrity scanning to current EOF

    Dave Chinner authored damentz committed
    sync can currently take a really long time if a concurrent writer is
    extending a file. The problem is that the dirty pages on the address
    space grow in the same direction as write_cache_pages scans, so if
    the writer keeps ahead of writeback, the writeback will not
    terminate until the writer stops adding dirty pages.
    
    For a data integrity sync, we only need to write the pages dirty at
    the time we start the writeback, so we can stop scanning once we get
    to the page that was at the end of the file at the time the scan
    started.
    
    This prevents operations like copying a large file from keeping
    sync from completing, as sync will not write back pages that were
    dirtied after it was started. This does not impact the
    existing integrity guarantees, as any dirty page (old or new)
    within the EOF range at the start of the scan will still be
    captured.
    
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
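
    The termination logic amounts to the following (a self-contained toy with
    simplified names; the real code captures the EOF page index from i_size
    before the scan starts):

        #include <stdio.h>

        #define PAGE_SIZE 4096UL

        /* Toy file whose dirty range keeps growing during the sync. */
        struct toy_file {
                unsigned long size_bytes;   /* i_size, extended by the writer */
        };

        /* Simulate a data-integrity writeback pass: capture the last page
         * index once, up front, and never scan past it, even though the
         * writer keeps extending the file while we work. */
        static unsigned long sync_writeback(struct toy_file *f)
        {
                unsigned long end = (f->size_bytes - 1) / PAGE_SIZE; /* EOF at start */
                unsigned long index, written = 0;

                for (index = 0; index <= end; index++) {
                        written++;                      /* "write" the page   */
                        f->size_bytes += PAGE_SIZE;     /* writer races ahead */
                }
                return written;
        }

        int main(void)
        {
                struct toy_file f = { .size_bytes = 16 * PAGE_SIZE };

                printf("pages written by sync: %lu\n", sync_writeback(&f));
                printf("file size afterwards:  %lu pages\n",
                       f.size_bytes / PAGE_SIZE);
                return 0;
        }
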
  28. @damentz

    xfs: remove nr_to_write writeback windup.

    Dave Chinner authored damentz committed
    Now that the background flush code has been fixed, we shouldn't need to
    silently multiply the wbc->nr_to_write to get good writeback. Remove
    that code.
    
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
  29. @damentz

    writeback: pay attention to wbc->nr_to_write in write_cache_pages

    Dave Chinner authored damentz committed
    If a filesystem writes more than one page in ->writepage, write_cache_pages
    fails to notice this and continues to attempt writeback when wbc->nr_to_write
    has gone negative - this trace was captured from XFS:
    
        wbc_writeback_start: towrt=1024
        wbc_writepage: towrt=1024
        wbc_writepage: towrt=0
        wbc_writepage: towrt=-1
        wbc_writepage: towrt=-5
        wbc_writepage: towrt=-21
        wbc_writepage: towrt=-85
    
    This has adverse effects on filesystem writeback behaviour. write_cache_pages()
    needs to terminate after a certain number of pages are written, not after a
    certain number of calls to ->writepage are made.  This is a regression
    introduced by 17bc6c3, but cannot be reverted
    directly due to subsequent bug fixes that have gone in on top of it.
    
    This commit adds a ->writepage tracepoint inside write_cache_pages() (how the
    above trace was generated) and does the revert manually, leaving the subsequent
    bug fixes intact. ext4 is not affected by this, as a previous commit in the
    series stops ext4 from using the generic function.
    
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
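
    The fix boils down to checking the remaining page budget after each
    ->writepage call, as in this toy model (simplified names, not the actual
    write_cache_pages() code):

        #include <stdio.h>

        /* Toy writeback control. */
        struct wbc { long nr_to_write; };

        /* A filesystem that clusters writeback: one ->writepage call may push
         * several contiguous dirty pages and charges them all to nr_to_write,
         * the way XFS does. */
        static void toy_writepage(struct wbc *wbc, long pages_in_cluster)
        {
                wbc->nr_to_write -= pages_in_cluster;
        }

        /* Toy write_cache_pages() loop: terminate on the number of pages
         * written (nr_to_write reaching zero or going negative), not on the
         * number of ->writepage calls made. */
        static long writeback_pass(struct wbc *wbc, const long *clusters, int n)
        {
                long calls = 0;
                int i;

                for (i = 0; i < n; i++) {
                        toy_writepage(wbc, clusters[i]);
                        calls++;
                        if (wbc->nr_to_write <= 0)      /* the fixed check */
                                break;
                }
                return calls;
        }

        int main(void)
        {
                struct wbc wbc = { .nr_to_write = 1024 };
                const long clusters[] = { 1024, 4, 16, 64, 256 };
                long calls = writeback_pass(&wbc, clusters, 5);

                printf("->writepage calls: %ld, nr_to_write left: %ld\n",
                       calls, wbc.nr_to_write);
                return 0;
        }
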
  30. @tytso @damentz

    ext4: Use our own write_cache_pages()

    tytso authored damentz committed
    Make a copy of write_cache_pages() for the benefit of
    ext4_da_writepages().  This allows us to simplify the code some, and
    will allow us to further customize the code in future patches.
    
    There are some nasty hacks in write_cache_pages(), which Linus has
    (correctly) characterized as vile.  I've just copied it into
    write_cache_pages_da(), without trying to clean those bits up lest I
    break something in the ext4's delalloc implementation, which is a bit
    fragile right now.  This will allow Dave Chinner to clean up
    write_cache_pages() in mm/page-writeback.c, without worrying about
    breaking ext4.  Eventually write_cache_pages_da() will go away when I
    rewrite ext4's delayed allocation and create a general
    ext4_writepages() which is used for all of ext4's writeback.  Until
    then, this is the lowest-risk way to clean up the core
    write_cache_pages() function.
    
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
  31. REVERT EVERYTHING BELOW! AVOID CONFLICTS, THAT'S WHY I DID IT

    Brandon Berhent authored
  32. pretty much revert all the fedora stuff except writeback ones because they're shit

    Brandon Berhent authored
  33. fix autosuspend stuff

    Brandon Berhent authored
  34. linux-2.6-driver-level-usb-autosuspend.diff

    Brandon Berhent authored
    linux-2.6-enable-btusb-autosuspend.patch
    linux-2.6-usb-uvc-autosuspend.diff
Commits on Jun 15, 2010
  1. remove ck block of code from makefile

    Brandon Berhent authored