Commits on Apr 4, 2015
  1. msm: HTC: Enable f2fs support for m4

    mdmower committed Apr 4, 2015
    Change-Id: I62f805a3a7b8901696ba9d654fc851cb74f4c2c7
  2. msm: HTC: Enable f2fs support for s4/m7 devices

    jrior001 authored and mdmower committed Jun 26, 2014
    elite, fighter, jet, m7, ville
    
    Change-Id: I048b6fcefb6ec8e1e8badda915d339b9175af468
  3. F2FS import for 3.4 kernel

    jrior001 authored and mdmower committed Mar 10, 2015
    Squashed import of the latest 3.20 mainline f2fs code from
    https://kernel.googlesource.com/pub/scm/linux/kernel/git/jaegeuk/f2fs/
    (3.4 branch), which includes the necessary modifications for the 3.4 kernel.
    
    Change-Id: I75179ebca9f83c83a923fe150a68f986ba712d68
Commits on Mar 31, 2015
  1. msm: HTC: Regenerate defconfigs

    jrior001 authored and mdmower committed Mar 31, 2015
    Update for LZ4 kernel compression. The defconfigs for elite, fighter, jet,
    m4, m7, t6, and ville were regenerated; only these devices are affected.
    
    Change-Id: I75fb1b7b1ef38d85493bb4328216f160da0d2661
  2. arm: Add support for LZ4-compressed kernel

    Kyungsik Lee authored and intervigilium committed Aug 9, 2013
    Date	Tue, 26 Feb 2013 15:24:29 +0900
    
    This patch integrates the LZ4 decompression code into the arm pre-boot code.
    It depends on the two patches below:
    
    lib: Add support for LZ4-compressed kernel
    decompressor: Add LZ4 decompressor module
    
    Signed-off-by: Kyungsik Lee <kyungsik.lee@lge.com>
    
    v2:
    - Apply -Os in CFLAGS for decompress.o to improve decompression
      performance during the boot-up process
    
    Change-Id: Ic2fa8712ce919abc2cd0767672e97bd2c9177e25
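
    For context, the arm pre-boot integration typically amounts to conditionally
    pulling the generic LZ4 decompressor into the compressed-kernel stub; a
    minimal sketch of what arch/arm/boot/compressed/decompress.c gains (the -Os
    tweak mentioned in v2 lives in the Makefile and is not shown here):

        /* arch/arm/boot/compressed/decompress.c -- illustrative excerpt only */
        #ifdef CONFIG_KERNEL_LZ4
        #include "../../../../lib/decompress_unlz4.c"  /* LZ4 decompress entry point for the stub */
        #endif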
  3. msm: HTC: zara: Moduleless defconfig

    mdmower committed Mar 31, 2015
    ... and regenerate (note removal of S5K4E5YX cam driver upon
    regeneration is fine since zara does not use it).
    
    Change-Id: I430f4956bbb9d787fdb076c2d5528025d2027b70
  4. msm: HTC: t6: Remove HAC AMP

    Flyhalf205 authored and Gerrit Code Review committed Mar 20, 2015
    Change-Id: I669825379a602d2cd1836162fd851e945a7936a9
  5. Revert "msm: HTC: t6: merge display on commands"

    Flyhalf205 committed Jan 28, 2015
    This reverts commit 47fef4f.
    
    Change-Id: If82bd301f34fa916681f11c5c77fe8123e61a86a
    (cherry picked from commit a87c99a)
Commits on Mar 19, 2015
  1. power: pm8921-charger: Add DLXP support

    brinlyau committed Mar 19, 2015
    Change-Id: I8425a2a6751643b79c183885a6cf9c16ef97129b
  2. ASoC: wcd9304: Add HTC DLXP support

    brinlyau authored and intervigilium committed Mar 19, 2015
    Change-Id: Ic1e6afbc97a2bb8754bee2769e3bbe62c70165c3
Commits on Mar 17, 2015
  1. msm: HTC: zara: kill off more dead config

    brinlyau committed Mar 17, 2015
    Change-Id: Id5a6857080342c7a26e24913cc7882f1d6fa3b70
Commits on Mar 15, 2015
  1. msm: HTC: Regenerate defconfig and enable zram/zsmalloc/lz4

    jrior001 authored and intervigilium committed Mar 13, 2015
    Change-Id: Idf6e8db31d4a4be6817332367c95ab69c2cd8735
  2. zram: switch to lz4 as default compressor

    jrior001 authored and intervigilium committed Mar 13, 2015
    Change-Id: I6a5ed146ef436546e0d9362d305ac43ad4334899
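
    In the zram driver of this era the default backend is selected by a single
    string in drivers/block/zram/zram_drv.c, so a change like this plausibly
    boils down to the following (a sketch, not the verified diff):

        /* drivers/block/zram/zram_drv.c -- illustrative */
        static const char *default_compressor = "lz4";  /* was "lzo" */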
Commits on Mar 13, 2015
  1. lz4: add overrun checks to lz4_uncompress_unknownoutputsize()

    gregkh authored and jrior001 committed Jul 3, 2014
    Jan points out that I forgot to make the needed fixes to the
    lz4_uncompress_unknownoutputsize() function to mirror the changes done
    in lz4_decompress() with regards to potential pointer overflows.
    
    The only in-kernel user of this function is the zram code, which only
    takes data from a valid compressed buffer that it made itself, so it's
    not a big issue.  But due to external kernel modules using this
    function, it's better to be safe here.
    
    Reported-by: Jan Beulich <JBeulich@suse.com>
    Cc: "Don A. Bailey" <donb@securitymouse.com>
    Cc: stable <stable@vger.kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. lz4: fix another possible overrun

    gregkh authored and jrior001 committed Jun 24, 2014
    There is one other possible overrun in the lz4 code as implemented by
    Linux at this point in time (which differs from the upstream lz4
    codebase, but will get synced in a future kernel release).  As
    pointed out by Don, we also need to check for overflow in the data
    itself.
    
    While we are at it, replace the odd error return value with just a
    "simple" -1 value as the return value is never used for anything other
    than a basic "did this work or not" check.
    
    Reported-by: "Don A. Bailey" <donb@securitymouse.com>
    Reported-by: Willy Tarreau <w@1wt.eu>
    Cc: stable <stable@vger.kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  3. lz4: ensure length does not wrap

    gregkh authored and jrior001 committed Jun 21, 2014
    Given some pathologically compressed data, lz4 could possibly decide to
    wrap a few internal variables, causing unknown things to happen.  Catch
    this before the wrapping happens and abort the decompression.
    
    Reported-by: "Don A. Bailey" <donb@securitymouse.com>
    Cc: stable <stable@vger.kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
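
    The three LZ4 hardening fixes above all add the same class of defensive
    check: before copying a decoded run, verify that the run length cannot wrap
    and that it fits inside both the remaining input and the remaining output.
    A minimal sketch of that idea (variable names follow common LZ4 decoder
    conventions; this is not the exact kernel diff):

        /* ip/iend bound the compressed input, op/oend bound the output buffer */
        if (unlikely(length > (size_t)(oend - op)))
                return -1;      /* would overrun the output buffer */
        if (unlikely(length > (size_t)(iend - ip)))
                return -1;      /* would read past the end of the input */
        memcpy(op, ip, length);
        op += length;
        ip += length;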
  4. lz4: fix compression/decompression signedness mismatch

    sergey-senozhatsky authored and jrior001 committed Sep 11, 2013
    The LZ4 compression and decompression functions differ in the signedness
    of their input/output parameters: unsigned char for compression and
    signed char for decompression.
    
    Change decompression API to require "(const) unsigned char *".
    
    Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Cc: Kyungsik Lee <kyungsik.lee@lge.com>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Yann Collet <yann.collet.73@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
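
    After this change the in-kernel decompression entry points (the old,
    pre-4.11 lz4.h API) look approximately like the prototypes below; treat
    the exact parameter names as illustrative:

        /* include/linux/lz4.h -- approximate, old kernel LZ4 API */
        int lz4_decompress(const unsigned char *src, size_t *src_len,
                           unsigned char *dest, size_t actual_dest_len);
        int lz4_decompress_unknownoutputsize(const unsigned char *src,
                                             size_t src_len, unsigned char *dest,
                                             size_t *dest_len);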
  5. lib/lz4: correct the LZ4 license

    rlaager authored and jrior001 committed Aug 22, 2013
    The LZ4 code is listed as using the "BSD 2-Clause License".
    
    Signed-off-by: Richard Laager <rlaager@wiktel.com>
    Acked-by: Kyungsik Lee <kyungsik.lee@lge.com>
    Cc: Chanho Min <chanho.min@lge.com>
    Cc: Richard Yao <ryao@gentoo.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    [ The 2-clause BSD can be just converted into GPL, but that's rude and
      pointless, so don't do it   - Linus ]
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. lib: add lz4 compressor module

    Chanho Min authored and jrior001 committed Jul 8, 2013
    This patchset adds support for LZ4 compression and for the crypto API
    using it.
    
    As shown below, the compressed data is slightly larger, but compression
    is faster when unaligned memory access is enabled.  LZ4 de/compression
    can also be used through the crypto API, and it will be useful for other
    potential users of LZ4 compression.
    
    lz4 Compression Benchmark:
    Compiler: ARM gcc 4.6.4
    ARMv7, 1 GHz based board
       Kernel: linux 3.4
       Uncompressed data Size: 101 MB
             Compressed Size  compression Speed
       LZO   72.1MB		  32.1MB/s, 33.0MB/s(UA)
       LZ4   75.1MB		  30.4MB/s, 35.9MB/s(UA)
       LZ4HC 59.8MB		   2.4MB/s,  2.5MB/s(UA)
    - UA: Unaligned memory Access support
    - Latest patch set for LZO applied
    
    This patch:
    
    Add support for LZ4 compression in the Linux Kernel.  LZ4 Compression APIs
    for kernel are based on LZ4 implementation by Yann Collet and were changed
    for kernel coding style.
    
    LZ4 homepage : http://fastcompression.blogspot.com/p/lz4.html
    LZ4 source repository : http://code.google.com/p/lz4/
    svn revision : r90
    
    Two APIs are added:
    
    lz4_compress() provides basic LZ4 compression, whereas lz4hc_compress()
    provides high compression: CPU performance is lower but the compression
    ratio is higher.  Both require pre-allocated working memory of the
    defined size, and the destination buffer must be allocated with the size
    given by lz4_compressbound().
    
    [akpm@linux-foundation.org: make lz4_compresshcctx() static]
    Signed-off-by: Chanho Min <chanho.min@lge.com>
    Cc: "Darrick J. Wong" <djwong@us.ibm.com>
    Cc: Bob Pearson <rpearson@systemfabricworks.com>
    Cc: Richard Weinberger <richard@nod.at>
    Cc: Herbert Xu <herbert@gondor.hengli.com.au>
    Cc: Yann Collet <yann.collet.73@gmail.com>
    Cc: Kyungsik Lee <kyungsik.lee@lge.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
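
    A minimal usage sketch of the two APIs described above, based on the old
    (pre-4.11) lz4.h; the wrapper function name and error handling are
    illustrative, not part of the patch:

        #include <linux/kernel.h>
        #include <linux/lz4.h>
        #include <linux/vmalloc.h>

        /* Compress src_len bytes from src; illustrative error handling only. */
        static int example_lz4_pack(const unsigned char *src, size_t src_len)
        {
                size_t dst_len = lz4_compressbound(src_len);    /* worst-case output size */
                unsigned char *dst = vmalloc(dst_len);
                void *wrkmem = vmalloc(LZ4_MEM_COMPRESS);       /* LZ4HC_MEM_COMPRESS for lz4hc */
                int ret = -ENOMEM;

                if (!dst || !wrkmem)
                        goto out;

                ret = lz4_compress(src, src_len, dst, &dst_len, wrkmem);
                if (!ret)
                        pr_info("compressed %zu bytes into %zu\n", src_len, dst_len);
                /* lz4hc_compress(src, src_len, dst, &dst_len, wrkmem) trades CPU for ratio */
        out:
                vfree(wrkmem);
                vfree(dst);
                return ret;
        }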
  7. lib: add support for LZ4-compressed kernel

    Kyungsik Lee authored and jrior001 committed Jul 8, 2013
    Add support for extracting LZ4-compressed kernel images, as well as
    LZ4-compressed ramdisk images in the kernel boot process.
    
    Signed-off-by: Kyungsik Lee <kyungsik.lee@lge.com>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Russell King <rmk@arm.linux.org.uk>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Florian Fainelli <florian@openwrt.org>
    Cc: Yann Collet <yann.collet.73@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. decompressor: add LZ4 decompressor module

    Kyungsik Lee authored and jrior001 committed Jul 8, 2013
    Add support for LZ4 decompression in the Linux Kernel.  LZ4 Decompression
    APIs for kernel are based on LZ4 implementation by Yann Collet.
    
    Benchmark Results(PATCH v3)
    Compiler: Linaro ARM gcc 4.6.2
    
    1. ARMv7, 1.5GHz based board
       Kernel: linux 3.4
       Uncompressed Kernel Size: 14MB
            Compressed Size  Decompression Speed
       LZO  6.7MB            20.1MB/s, 25.2MB/s(UA)
       LZ4  7.3MB            29.1MB/s, 45.6MB/s(UA)
    
    2. ARMv7, 1.7GHz based board
       Kernel: linux 3.7
       Uncompressed Kernel Size: 14MB
            Compressed Size  Decompression Speed
       LZO  6.0MB            34.1MB/s, 52.2MB/s(UA)
       LZ4  6.5MB            86.7MB/s
    - UA: Unaligned memory Access support
    - Latest patch set for LZO applied
    
    This patch set adds support for an LZ4-compressed kernel.  LZ4 is a very
    fast lossless compression algorithm that also features an extremely fast
    decoder [1].
    
    But we already have five decompressors, and one question that does arise
    is where we stop adding new ones.  This issue was discussed and reached
    the following conclusion [2].
    
    Russell King said that we should have:
    
     - one decompressor which is the fastest
     - one decompressor for the highest compression ratio
     - one popular decompressor (eg conventional gzip)
    
    If we have a replacement one for one of these, then it should do exactly
    that: replace it.
    
    The benchmark shows an 8% increase in image size versus a 66% increase
    in decompression speed compared to LZO (previously known as the fastest
    decompressor in the kernel).  Therefore the "fast but may not be small"
    compression title has clearly been taken by LZ4 [3].
    
    [1] http://code.google.com/p/lz4/
    [2] http://thread.gmane.org/gmane.linux.kbuild.devel/9157
    [3] http://thread.gmane.org/gmane.linux.kbuild.devel/9347
    
    LZ4 homepage: http://fastcompression.blogspot.com/p/lz4.html
    LZ4 source repository: http://code.google.com/p/lz4/
    
    Signed-off-by: Kyungsik Lee <kyungsik.lee@lge.com>
    Signed-off-by: Yann Collet <yann.collet.73@gmail.com>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Russell King <rmk@arm.linux.org.uk>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Florian Fainelli <florian@openwrt.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. zram: fix incorrect stat with failed_reads

    chaseyu authored and jrior001 committed Aug 29, 2014
    Since we allocate a temporary buffer in zram_bvec_read to handle partial
    page operations in commit 924bd88 ("Staging: zram: allow partial
    page operations"), our ->failed_reads value may be incorrect as we do
    not increase its value when failing to allocate the temporary buffer.
    
    Let's fix this issue and correct the annotation of failed_reads.
    
    Signed-off-by: Chao Yu <chao2.yu@samsung.com>
    Acked-by: Minchan Kim <minchan@kernel.org>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Acked-by: Jerome Marchand <jmarchan@redhat.com>
    Acked-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
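
    The fix is essentially bookkeeping: every early exit from the read path,
    including the temporary-buffer allocation failure, must bump the
    failed_reads counter.  A sketch of the idea (exact placement inside
    zram_bvec_read may differ from the real patch):

        uncmem = kmalloc(PAGE_SIZE, GFP_NOIO);
        if (!uncmem) {
                pr_err("Unable to allocate temp memory\n");
                atomic64_inc(&zram->stats.failed_reads);        /* was missing */
                return -ENOMEM;
        }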
  10. zram: replace global tb_lock with fine grain lock

    Weijie Yang authored and jrior001 committed Mar 13, 2015
    Currently, we use a rwlock tb_lock to protect concurrent access to the
    whole zram meta table.  However, according to the actual access model,
    there is only a small chance for upper user to access the same
    table[index], so the current lock granularity is too big.
    
    The idea of optimization is to change the lock granularity from whole
    meta table to per table entry (table -> table[index]), so that we can
    protect concurrent access to the same table[index], meanwhile allow the
    maximum concurrency.
    
    With this in mind, several kinds of locks which could be used as a
    per-entry lock were tested and compared:
    
    Test environment:
    x86-64 Intel Core2 Q8400, system memory 4GB, Ubuntu 12.04,
    kernel v3.15.0-rc3 as base, zram with 4 max_comp_streams LZO.
    
    iozone test:
    iozone -t 4 -R -r 16K -s 200M -I +Z
    (1GB zram with ext4 filesystem, take the average of 10 tests, KB/s)
    
          Test       base      CAS    spinlock    rwlock   bit_spinlock
    -------------------------------------------------------------------
     Initial write  1381094   1425435   1422860   1423075   1421521
           Rewrite  1529479   1641199   1668762   1672855   1654910
              Read  8468009  11324979  11305569  11117273  10997202
           Re-read  8467476  11260914  11248059  11145336  10906486
      Reverse Read  6821393   8106334   8282174   8279195   8109186
       Stride read  7191093   8994306   9153982   8961224   9004434
       Random read  7156353   8957932   9167098   8980465   8940476
    Mixed workload  4172747   5680814   5927825   5489578   5972253
      Random write  1483044   1605588   1594329   1600453   1596010
            Pwrite  1276644   1303108   1311612   1314228   1300960
             Pread  4324337   4632869   4618386   4457870   4500166
    
    To increase the likelihood of concurrent access to the same table[index],
    zram is given a small disksize (10MB) and the threads run with a large
    loop count.
    
    fio test:
    fio --bs=32k --randrepeat=1 --randseed=100 --refill_buffers
    --scramble_buffers=1 --direct=1 --loops=3000 --numjobs=4
    --filename=/dev/zram0 --name=seq-write --rw=write --stonewall
    --name=seq-read --rw=read --stonewall --name=seq-readwrite
    --rw=rw --stonewall --name=rand-readwrite --rw=randrw --stonewall
    (10MB zram raw block device, take the average of 10 tests, KB/s)
    
        Test     base     CAS    spinlock    rwlock  bit_spinlock
    -------------------------------------------------------------
    seq-write   933789   999357   1003298    995961   1001958
     seq-read  5634130  6577930   6380861   6243912   6230006
       seq-rw  1405687  1638117   1640256   1633903   1634459
      rand-rw  1386119  1614664   1617211   1609267   1612471
    
    All of the optimization methods show higher performance than the base;
    however, it is hard to say which method is the most appropriate.
    
    On the other hand, zram is mostly used on small embedded systems, so we
    don't want to increase the memory footprint.
    
    This patch picks the bit_spinlock method and packs the object size and
    page flags into an unsigned long table.value, so as not to add any memory
    overhead on either 32-bit or 64-bit systems.
    
    Finally, even though the different kinds of locks perform differently, we
    can ignore the difference: if zram is used as a swap device, the swap
    subsystem prevents concurrent access to the same swap slot; if zram is
    used as a block device with a filesystem on it, the upper filesystem and
    the page cache mostly prevent concurrent access to the same block.  So
    the performance differences among the locks can be ignored.
    
    Change-Id: I85b58fc94a794a85be4713988b218ef6e52f2a27
    Acked-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Reviewed-by: Davidlohr Bueso <davidlohr@hp.com>
    Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
    Signed-off-by: Minchan Kim <minchan@kernel.org>
    Cc: Jerome Marchand <jmarchan@redhat.com>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
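
    The packing described above can be pictured as follows, in a sketch
    modeled on the zram code of this era (field layout, flag name, and helper
    names are illustrative rather than quoted from the patch): the low bits
    of `value` hold the compressed object size, the high bits hold per-entry
    flags, and one flag bit doubles as the per-entry lock via bit_spin_lock().

        #include <linux/bit_spinlock.h>

        struct zram_table_entry {
                unsigned long handle;   /* zsmalloc handle */
                unsigned long value;    /* size in low bits, flags (incl. lock bit) above */
        };

        static void zram_lock_table(struct zram_table_entry *t)
        {
                bit_spin_lock(ZRAM_ACCESS, &t->value);  /* ZRAM_ACCESS: flag bit reused as lock */
        }

        static void zram_unlock_table(struct zram_table_entry *t)
        {
                bit_spin_unlock(ZRAM_ACCESS, &t->value);
        }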
  11. zram: use size_t instead of u16

    minchank authored and jrior001 committed Aug 6, 2014
    Some architectures (e.g., hexagon and PowerPC) can use a PAGE_SHIFT of 16
    or more.  In those cases u16 is not large enough to represent a compressed
    page's size: with PAGE_SHIFT = 16, an incompressible page stored as-is
    occupies PAGE_SIZE = 65536 bytes, one more than U16_MAX.  Use size_t
    instead.
    
    Signed-off-by: Minchan Kim <minchan@kernel.org>
    Reported-by: Weijie Yang <weijie.yang@samsung.com>
    Acked-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Cc: Jerome Marchand <jmarchan@redhat.com>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. zram: remove unused SECTOR_SIZE define

    sergey-senozhatsky authored and jrior001 committed Aug 6, 2014
    Drop SECTOR_SIZE define, because it's not used.
    
    Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Cc: Weijie Yang <weijie.yang@samsung.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. zram: rename struct `table' to `zram_table_entry'

    sergey-senozhatsky authored and jrior001 committed Aug 6, 2014
    Andrew Morton has recently noted that `struct table' actually represents
    table entry and, thus, should be renamed.  Rename to `zram_table_entry'.
    
    Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Cc: Weijie Yang <weijie.yang@samsung.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. zram: avoid lockdep splat by revalidate_disk

    minchank authored and jrior001 committed Jul 23, 2014
    Sasha reported a lockdep warning [1] introduced by [2].
    
    It can be fixed by doing the disk revalidation outside of init_lock.  That
    is okay because the disk capacity change is protected by init_lock, so
    revalidate_disk always sees an up-to-date value and there is no race.
    
    [1] https://lkml.org/lkml/2014/7/3/735
    [2] zram: revalidate disk after capacity change
    
    Fixes 2e32bae ("zram: revalidate disk after capacity change").
    
    Signed-off-by: Minchan Kim <minchan@kernel.org>
    Reported-by: Sasha Levin <sasha.levin@oracle.com>
    Cc: "Alexander E. Patrakov" <patrakov@gmail.com>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Cc: Jerome Marchand <jmarchan@redhat.com>
    Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    CC: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
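
    The shape of the fix is a reordering in disksize_store(): set the new
    capacity while holding init_lock, then drop the lock before calling
    revalidate_disk().  A sketch of that ordering (not the literal diff):

        down_write(&zram->init_lock);
        /* install meta, comp, zram->disksize under the lock */
        set_capacity(zram->disk, zram->disksize >> SECTOR_SHIFT);
        up_write(&zram->init_lock);

        /* done outside init_lock to avoid the lockdep-reported inversion */
        revalidate_disk(zram->disk);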
  15. zram: revalidate disk after capacity change

    minchank authored and jrior001 committed Jul 2, 2014
    Alexander reported that mkswap on /dev/zram0 fails if another process has
    the block device file open.
    
    The steps to reproduce are as follows:
    
    0. Reset the unused zram device.
    1. Use a program that opens /dev/zram0 with O_RDWR and sleeps
       until killed.
    2. While that program sleeps, echo the correct value to
       /sys/block/zram0/disksize.
    3. Verify (e.g. in /proc/partitions) that the disk size is applied
       correctly. It is.
    4. While that program still sleeps, attempt to mkswap /dev/zram0.
       This fails: mkswap: error: swap area needs to be at least 40 KiB
    
    When I investigated, the size obtained by mkswap via
    ioctl(fd, BLKGETSIZE64, xxx) on the block device was zero, although zram0
    had the right size after step 2.
    
    The reason is that zram didn't revalidate the disk after changing its
    capacity, so the size in the block device's inode is not up to date until
    every open file referencing it is closed.
    
    This patch should fix the BUG.
    
    Signed-off-by: Minchan Kim <minchan@kernel.org>
    Reported-by: Alexander E. Patrakov <patrakov@gmail.com>
    Tested-by: Alexander E. Patrakov <patrakov@gmail.com>
    Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Acked-by: Jerome Marchand <jmarchan@redhat.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
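
    The reported observation can be reproduced from userspace with the
    standard BLKGETSIZE64 ioctl, which is the same query mkswap performs
    (a small standalone check; the file name is illustrative):

        /* blkgetsize.c: print the byte size the kernel reports for /dev/zram0 */
        #include <fcntl.h>
        #include <linux/fs.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        int main(void)
        {
                uint64_t bytes = 0;
                int fd = open("/dev/zram0", O_RDONLY);

                if (fd < 0 || ioctl(fd, BLKGETSIZE64, &bytes) < 0) {
                        perror("BLKGETSIZE64");
                        return 1;
                }
                printf("%llu\n", (unsigned long long)bytes);
                close(fd);
                return 0;
        }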
  16. zram: correct offset usage in zram_bio_discard

    Weijie Yang authored and jrior001 committed Jun 4, 2014
    We want to skip the physical block (PAGE_SIZE) that is only partially
    covered by the discard bio, so we check the remaining size and subtract
    the partial head when we need to move on to the next physical block.
    
    The current offset usage in zram_bio_discard is incorrect and will break
    the filesystem above it.  Consider the following scenario:
    
    On some architectures or configs PAGE_SIZE is 64K, and a filesystem is set
    up on the zram disk without PAGE_SIZE alignment.  A discard bio arrives
    with offset = 4K and size = 72K.  Normally it should not discard any
    physical block, since it only partially covers two physical blocks.
    However, with the current offset usage it will discard the second physical
    block and free its memory, which breaks the filesystem.
    
    This patch corrects the offset usage in zram_bio_discard.
    
    Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Cc: Bob Liu <bob.liu@oracle.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
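
    Working through the example above: with PAGE_SIZE = 64K, offset = 4K and
    n = 72K, the bio covers bytes 4K..76K, i.e. the tail of block 0 and the
    head of block 1, so nothing may be discarded.  The corrected alignment
    step skips the PAGE_SIZE - offset = 60K partial head, leaving 12K, which
    is less than a full block.  A sketch of that logic (close in shape to the
    fixed code, but illustrative rather than the literal diff):

        if (offset) {
                if (n <= (PAGE_SIZE - offset))
                        return;                 /* bio never covers a whole block */
                n -= (PAGE_SIZE - offset);      /* skip the partial head */
                index++;
        }

        while (n >= PAGE_SIZE) {
                zram_free_page(zram, index);    /* discard only fully covered blocks */
                index++;
                n -= PAGE_SIZE;
        }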
  17. zram: support REQ_DISCARD

    JoonsooKim authored and jrior001 committed Apr 7, 2014
    zram is a RAM-based block device and can be used as the backing device
    for a filesystem.  When a filesystem deletes a file, it normally doesn't
    touch the file's data blocks; it only updates the file's metadata.  This
    behavior is fine on a disk-based block device, but it is a problem on a
    RAM-based block device, since we can't free the memory used for those
    data blocks.  To overcome this disadvantage, there is the REQ_DISCARD
    functionality: if the block device supports REQ_DISCARD and the
    filesystem is mounted with the discard option, the filesystem sends
    REQ_DISCARD to the block device whenever data blocks are discarded.  All
    we have to do is handle this request.
    
    This patch sets QUEUE_FLAG_DISCARD and handles REQ_DISCARD requests.
    With it, we can free memory held by zram once it is no longer used.
    
    [akpm@linux-foundation.org: tweak comments]
    Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Cc: Jerome Marchand <jmarchan@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    
    Change-Id: I9831047aa7ab4162bf761fdd8897e2d1e7a4b34f
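
    Advertising discard support boils down to marking the request queue at
    device-creation time; a sketch of that setup, modeled on the block-layer
    API of 3.x kernels (the exact limits chosen here are illustrative):

        /* in zram's disk/queue setup */
        zram->disk->queue->limits.discard_granularity = PAGE_SIZE;
        zram->disk->queue->limits.max_discard_sectors = UINT_MAX;
        queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, zram->disk->queue);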
  18. zram: use scnprintf() in attrs show() methods

    sergey-senozhatsky authored and jrior001 committed Apr 7, 2014
    sysfs.txt documentation lists the following requirements:
    
     - The buffer will always be PAGE_SIZE bytes in length. On i386, this
       is 4096.
    
     - show() methods should return the number of bytes printed into the
       buffer. This is the return value of scnprintf().
    
     - show() should always use scnprintf().
    
    Use scnprintf() in show() functions.
    
    Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Acked-by: Minchan Kim <minchan@kernel.org>
    Cc: Jerome Marchand <jmarchan@redhat.com>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
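
    A representative show() method in the style the commit describes (the
    attribute and field are taken from zram, but treat the snippet as a
    sketch rather than the exact driver code):

        static ssize_t disksize_show(struct device *dev,
                        struct device_attribute *attr, char *buf)
        {
                struct zram *zram = dev_to_zram(dev);

                /* scnprintf() never writes past PAGE_SIZE and returns the
                 * number of bytes actually printed, as sysfs expects. */
                return scnprintf(buf, PAGE_SIZE, "%llu\n", zram->disksize);
        }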
  19. zram: propagate error to user

    minchank authored and jrior001 committed Apr 7, 2014
    When zcomp is initialized with the single-stream backend, we can't change
    max_comp_streams without a zram reset, but the current interface doesn't
    report any error to the user; it even changes max_comp_streams's value
    without any effect, which is very confusing.
    
    This patch prevents changing max_comp_streams when zcomp was initialized
    as a single-stream zcomp and reports the error to the user (e.g., to
    echo).
    
    [akpm@linux-foundation.org: don't return with the lock held, per Sergey]
    [fengguang.wu@intel.com: fix coccinelle warnings]
    Signed-off-by: Minchan Kim <minchan@kernel.org>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Cc: Jerome Marchand <jmarchan@redhat.com>
    Acked-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
    Cc: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
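
    The visible behavior is that the sysfs write now fails, so
    `echo 4 > /sys/block/zram0/max_comp_streams` reports "Invalid argument"
    instead of silently succeeding.  A minimal sketch of the store() side
    (zram_backend_is_single() is a hypothetical helper standing in for the
    driver's actual check):

        static ssize_t max_comp_streams_store(struct device *dev,
                        struct device_attribute *attr, const char *buf, size_t len)
        {
                struct zram *zram = dev_to_zram(dev);
                int num;

                if (kstrtoint(buf, 0, &num) || num < 1)
                        return -EINVAL;

                down_write(&zram->init_lock);
                if (zram_backend_is_single(zram) && num != 1) { /* hypothetical helper */
                        up_write(&zram->init_lock);     /* don't return with the lock held */
                        return -EINVAL;                 /* propagated to echo */
                }
                zram->max_comp_streams = num;
                up_write(&zram->init_lock);
                return len;
        }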
  20. zram: return error-valued pointer from zcomp_create()

    sergey-senozhatsky authored and jrior001 committed Apr 7, 2014
    Instead of returning just NULL, return an ERR_PTR from zcomp_create() if
    creating the compression backend fails: ERR_PTR(-EINVAL) for an
    unsupported compression algorithm request, ERR_PTR(-ENOMEM) for an
    allocation (zcomp or compression stream) error.
    
    Perform an IS_ERR() check on the value returned from zcomp_create() in
    disksize_store() and set the return code with PTR_ERR().
    
    Change suggested by Jerome Marchand.
    
    [akpm@linux-foundation.org: clean up error recovery flow]
    Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Reported-by: Jerome Marchand <jmarchan@redhat.com>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    
    Change-Id: I3a6907a5cecbf087facea2251b60df75ec70bb21
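
    The ERR_PTR/IS_ERR/PTR_ERR trio from <linux/err.h> encodes an errno in an
    otherwise-invalid pointer value, so a single return carries either a
    valid object or a precise error.  A sketch of both sides of the pattern
    (internal helper names are illustrative, not quoted from zcomp.c):

        struct zcomp *zcomp_create(const char *compress, int max_strm)
        {
                struct zcomp *comp;

                if (!find_backend(compress))            /* illustrative lookup */
                        return ERR_PTR(-EINVAL);        /* unsupported algorithm */

                comp = kzalloc(sizeof(*comp), GFP_KERNEL);
                if (!comp)
                        return ERR_PTR(-ENOMEM);
                /* set up compression streams, etc. */
                return comp;
        }

        /* caller side, e.g. in disksize_store() */
        comp = zcomp_create(zram->compressor, zram->max_comp_streams);
        if (IS_ERR(comp))
                return PTR_ERR(comp);   /* -EINVAL or -ENOMEM reaches the user */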
  21. zram: move comp allocation out of init_lock

    sergey-senozhatsky authored and jrior001 committed Apr 7, 2014
    While fixing the ->init_lock lockdep spew reported by Sasha Levin [1],
    Minchan Kim noted [2] that it's better to move the compression backend
    allocation (which uses GFP_KERNEL) out of the ->init_lock lock, the same
    way as with zram_meta_alloc(), in order to prevent the same lockdep spew.
    
    [1] https://lkml.org/lkml/2014/2/27/337
    [2] https://lkml.org/lkml/2014/3/3/32
    
    Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Reported-by: Minchan Kim <minchan@kernel.org>
    Acked-by: Minchan Kim <minchan@kernel.org>
    Cc: Sasha Levin <sasha.levin@oracle.com>
    Acked-by: Jerome Marchand <jmarchan@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. zram: add lz4 algorithm backend

    sergey-senozhatsky authored and jrior001 committed Apr 7, 2014
    Introduce an LZ4 compression backend and make it available for selection.
    LZ4 support is optional and requires the user to set the
    ZRAM_LZ4_COMPRESS config option.  The default compression backend is LZO.
    
    TEST
    
    (x86_64, core i5, 2 cores + 2 hyperthreading, zram disk size 1G,
    ext4 file system, 3 compression streams)
    
    iozone -t 3 -R -r 16K -s 60M -I +Z
    
           Test           LZO           LZ4
    ----------------------------------------------
      Initial write   1642744.62    1317005.09
            Rewrite   2498980.88    1800645.16
               Read   3957026.38    5877043.75
            Re-read   3950997.38    5861847.00
       Reverse Read   2937114.56    5047384.00
        Stride read   2948163.19    4929587.38
        Random read   3292692.69    4880793.62
     Mixed workload   1545602.62    3502940.38
       Random write   2448039.75    1758786.25
             Pwrite   1670051.03    1338329.69
              Pread   2530682.00    5097177.62
             Fwrite   3232085.62    3275942.56
              Fread   6306880.25    6645271.12
    
    So on my system LZ4 is slower in write-only tests, while it performs
    better in read-only and mixed (reads + writes) tests.
    
    Official LZ4 benchmarks are available at http://code.google.com/p/lz4/
    (the Linux kernel uses revision r90).
    
    Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Acked-by: Minchan Kim <minchan@kernel.org>
    Cc: Jerome Marchand <jmarchan@redhat.com>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
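
    Structurally, a zcomp backend of this era is a small ops-and-name bundle
    whose compress hook wraps the lib/lz4 API shown earlier; the sketch below
    reflects that shape (function names are illustrative, not quoted from
    zcomp_lz4.c):

        static void *lz4_backend_create(void)
        {
                return kzalloc(LZ4_MEM_COMPRESS, GFP_KERNEL);   /* per-stream workmem */
        }

        static void lz4_backend_destroy(void *private)
        {
                kfree(private);
        }

        static int lz4_backend_compress(const unsigned char *src,
                        unsigned char *dst, size_t *dst_len, void *private)
        {
                /* private is the per-stream LZ4_MEM_COMPRESS work buffer */
                return lz4_compress(src, PAGE_SIZE, dst, dst_len, private);
        }

        static int lz4_backend_decompress(const unsigned char *src,
                        size_t src_len, unsigned char *dst)
        {
                size_t dst_len = PAGE_SIZE;

                return lz4_decompress_unknownoutputsize(src, src_len, dst, &dst_len);
        }

        struct zcomp_backend zcomp_lz4 = {
                .compress = lz4_backend_compress,
                .decompress = lz4_backend_decompress,
                .create = lz4_backend_create,
                .destroy = lz4_backend_destroy,
                .name = "lz4",
        };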