commit 27d20fd upstream.

Salman Qazi describes the following radix-tree bug:

In the following case, we can get a deadlock:

0. The radix tree contains two items, one having the index 0.
1. The reader (in this case find_get_pages) takes the rcu_read_lock.
2. The reader acquires slot(s) for item(s) including the index 0 item.
3. The non-zero index item is deleted, and as a consequence the other item is moved to the root of the tree. The place where it used to be is queued for deletion after the readers finish.
3b. The zero item is deleted, removing it from the direct slot; it remains in the rcu-delayed indirect node.
4. The reader looks at the index 0 slot, and finds that the page has 0 ref count.
5. The reader looks at it again, hoping that the item will either be freed or the ref count will increase. This never happens, as the slot it is looking at will never be updated. Also, this slot can never be reclaimed because the reader is holding rcu_read_lock and is in an infinite loop.

The fix is to re-use the same "indirect" pointer case that requires a slot lookup retry as a general "retry the lookup" bit.

Signed-off-by: Nick Piggin <firstname.lastname@example.org>
Reported-by: Salman Qazi <email@example.com>
Signed-off-by: Andrew Morton <firstname.lastname@example.org>
Signed-off-by: Linus Torvalds <email@example.com>
Signed-off-by: Greg Kroah-Hartman <firstname.lastname@example.org>
Signed-off-by: Andi Kleen <email@example.com>
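The "retry the lookup" bit works like the radix tree's existing indirect-pointer flag: a low bit of the slot value marks the slot as stale so a reader restarts the lookup instead of spinning on a dead slot. A minimal userspace sketch of that tagging scheme (names are illustrative, not the kernel's actual radix-tree API):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch of low-bit pointer tagging used to tell
 * lock-free readers "retry this lookup"; item pointers are at
 * least 2-byte aligned, so bit 0 is free to carry the flag. */
#define RETRY_TAG 1UL

static void *tag_retry(void *slot)
{
    return (void *)((uintptr_t)slot | RETRY_TAG);
}

static int slot_needs_retry(const void *slot)
{
    return ((uintptr_t)slot & RETRY_TAG) != 0;
}

static void *slot_untag(void *slot)
{
    return (void *)((uintptr_t)slot & ~RETRY_TAG);
}
```

When an item is moved to the root, the deleter would tag the old slot; a reader that sees `slot_needs_retry()` drops back to the top of the lookup instead of re-reading the same slot forever.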
net: wireless: bcm4329: Disable wake irq at driver stop
net: wireless: bcm4329: Allocate skb with GFP_KERNEL flag if possible
net: wireless: bcm4329: Reduce listen interval to 10 (from 20)
net: wireless: bcm4329: compile wifi driver as Os
net: wireless: bcm4329: Fix scan timeout for abg case
bcm4329: updates from samsung infuse source
Revert "bcm4329: updates from samsung infuse source"
bcm4329: driver update from infuse, minus the part that breaks mobileAP
bcm4329: include only the part of the infuse wifi patch that fixes wifi
This reverts commit 68a665e.
This reverts commit 340d456.
net/netfilter/nf_conntrack_netlink.c: In function 'ctnetlink_parse_tuple':
net/netfilter/nf_conntrack_netlink.c:832:11: warning: comparison between 'enum ctattr_tuple' and 'enum ctattr_type'

Use ctattr_type for the 'type' parameter since that's the type of all attributes passed to this function.

Signed-off-by: Patrick McHardy <firstname.lastname@example.org>
vmscan: prevent background aging of anon page in no swap system
mm: page allocator: adjust the per-cpu counter threshold when memory is low
…sion

The generic versions of memcpy and memmove are very inefficient; this patch improves them.
The kernel's memcpy and memmove are very inefficient, while the glibc versions are quite fast; in some cases the glibc version is 10 times faster than the kernel's. So I introduce some memory copy macros and functions from glibc to improve the kernel version's performance.

The strategy of the memory functions is:

1. Copy bytes until the destination pointer is aligned.
2. Copy words in unrolled loops. If the source and destination are not aligned in the same way, use word memory operations, but shift and merge two read words before writing.
3. Copy the few remaining bytes.

Signed-off-by: Miao Xie <miaox@...fujitsu.com>
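A simplified userspace sketch of that strategy (not the actual patch; the shift-and-merge path for mismatched alignment is elided and replaced by per-word memcpy() loads, which stay legal on a misaligned source):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Illustrative word-at-a-time copy: byte-copy until the destination
 * is word aligned, then copy whole words, then the tail bytes. */
static void *word_memcpy(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;

    /* 1. Bytes until the destination pointer is word aligned. */
    while (n && ((uintptr_t)d & (sizeof(unsigned long) - 1))) {
        *d++ = *s++;
        n--;
    }

    /* 2. Whole words. A real implementation unrolls this loop and,
     * when the source alignment differs, shifts and merges two read
     * words before each write. */
    while (n >= sizeof(unsigned long)) {
        unsigned long w;
        memcpy(&w, s, sizeof(w));
        memcpy(d, &w, sizeof(w));
        d += sizeof(w);
        s += sizeof(w);
        n -= sizeof(w);
    }

    /* 3. The few remaining bytes. */
    while (n--)
        *d++ = *s++;
    return dst;
}
```

The win comes from step 2 moving a machine word per iteration instead of a byte; steps 1 and 3 only handle the unaligned head and short tail.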
This reverts commit 88588b1.
No .xz encoder creates files with empty LZMA2 streams, but such files would still be valid and decompressors must accept them. Note that empty .xz files are a different thing than empty LZMA2 streams. This bug didn't affect typical .xz files that had no uncompressed data.
This implements the API defined in <linux/decompress/generic.h>, which is used for kernel, initramfs, and initrd decompression. This patch together with the first patch is enough for XZ-compressed initramfs and initrd; an XZ-compressed kernel will need arch-specific changes.

The buffering requirements described in decompress_unxz.c are stricter than with gzip, so the relevant changes should be made to the arch-specific code when adding support for an XZ-compressed kernel. Similarly, the heap size in arch-specific pre-boot code may need to be increased (30 KiB is enough).

The XZ decompressor needs memmove(), memeq() (memcmp() == 0), and memzero() (memset(ptr, 0, size)), which aren't available in all arch-specific pre-boot environments. I'm including simple versions in decompress_unxz.c, but a cleaner solution would naturally be nicer.

Signed-off-by: Lasse Collin <email@example.com>
Cc: "H. Peter Anvin" <firstname.lastname@example.org>
Cc: Alain Knaff <email@example.com>
Cc: Albin Tonnerre <firstname.lastname@example.org>
Cc: Phillip Lougher <email@example.com>
Signed-off-by: Andrew Morton <firstname.lastname@example.org>
Signed-off-by: Linus Torvalds <email@example.com>
xz_dec_run() could incorrectly return XZ_BUF_ERROR if all of the following were true:

- The caller knows how many bytes of output to expect and provides only that much output space.
- When the last output bytes are decoded, the caller-provided input buffer ends right before the LZMA2 end-of-payload marker. So LZMA2 won't provide more output anymore, but it won't know it yet and thus won't return XZ_STREAM_END yet.
- A BCJ filter is in use and it hasn't left any unfiltered bytes in the temp buffer. This can happen with any BCJ filter, but in practice it's more likely with filters other than the x86 BCJ.

This fixes <https://bugzilla.redhat.com/show_bug.cgi?id=735408> where Squashfs thinks that a valid file system is corrupt.

This also fixes a similar bug in single-call mode where the uncompressed size of a block using BCJ + LZMA2 was 0 bytes and the caller provided no output space. Many empty .xz files don't contain any blocks and thus don't trigger this bug.

This also tweaks a closely related detail: xz_dec_bcj_run() could call xz_dec_lzma2_run() to decode into the temp buffer when that was known to be useless. This was harmless, although it wasted a minuscule number of CPU cycles.

Signed-off-by: Lasse Collin <firstname.lastname@example.org>
Cc: stable <email@example.com>
Signed-off-by: Linus Torvalds <firstname.lastname@example.org>
<linux/kernel.h> is needed for min_t. The old version happened to work on x86 because <asm/unaligned.h> indirectly includes <linux/kernel.h>, but it didn't work on ARM. <linux/kernel.h> includes <asm/byteorder.h>, so it's not necessary to include that explicitly anymore.

Signed-off-by: Lasse Collin <email@example.com>
Cc: stable <firstname.lastname@example.org>
Signed-off-by: Linus Torvalds <email@example.com>
In userspace, the .lzma format has become mostly a legacy file format that was superseded by the .xz format. Similarly, LZMA Utils was superseded by XZ Utils.

These patches add support for XZ decompression into the kernel. Most of the code is as-is from XZ Embedded <http://tukaani.org/xz/embedded.html>. It was written for the Linux kernel but is usable in other projects too.

Advantages of XZ over the current LZMA code in the kernel:

- A nice API that can be used by other kernel modules; it's not limited to kernel, initramfs, and initrd decompression.
- Integrity check support (CRC32).
- BCJ filters improve compression of executable code on certain architectures. Together with LZMA2, these can produce a few percent smaller kernel or Squashfs images than plain LZMA without making the decompression slower.

This patch: Add the main decompression code (xz_dec), a testing module (xz_dec_test), a wrapper script (xz_wrap.sh) for the xz command line tool, and documentation. The xz_dec module is enough to have a usable XZ decompressor, e.g. for Squashfs.

Signed-off-by: Lasse Collin <firstname.lastname@example.org>
Cc: "H. Peter Anvin" <email@example.com>
Cc: Alain Knaff <firstname.lastname@example.org>
Cc: Albin Tonnerre <email@example.com>
Cc: Phillip Lougher <firstname.lastname@example.org>
Signed-off-by: Andrew Morton <email@example.com>
Signed-off-by: Linus Torvalds <firstname.lastname@example.org>
The SD 2.0 specification says the maximum timeout for writes is 250 ms for SDHC cards. But when the card is almost full and there are too few free sectors, the garbage collector inside the SD card may take more time, delaying the write beyond 250 ms. This typically happens with cards that don't adhere to the specification. Based on this, the timeout was increased to 300 ms, but for some cards even this is not sufficient. A few SanDisk class-4 SD cards violate the spec and may take more time than expected when the card is full. So increase the data timeout to 800 ms; otherwise we might see file system corruption due to loss of important FS metadata on timeouts.

Change-Id: Ie8f0616dbbb2a15d7dbe95cd49eeb2308a5badd1
Signed-off-by: Sujith Reddy Thumma <email@example.com>
Reduce the number of variables modified by the loop in do_csum() by one, which seems like a good idea. On Nios II (a RISC CPU with a 3-operand instruction set) it reduces the loop from 7 to 6 instructions, including the conditional branch.

Signed-off-by: Ian Abbott <firstname.lastname@example.org>
Signed-off-by: David S. Miller <email@example.com>
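The shape of the change can be illustrated in userspace (this is a sketch, not the actual Nios II do_csum): folding the loop counter into a precomputed end pointer leaves one fewer variable live in the loop body, which on a 3-operand RISC saves one instruction per iteration.

```c
#include <stddef.h>

/* Illustrative partial-sum loop: instead of maintaining both an
 * index and a remaining-count, compare the walking pointer against
 * a bound computed once before the loop. */
static unsigned int sum_words(const unsigned short *p, size_t n)
{
    const unsigned short *end = p + n;  /* bound computed once */
    unsigned int sum = 0;

    while (p < end)     /* only p and sum change inside the loop */
        sum += *p++;
    return sum;
}
```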
If the attempt to map a page for DMA fails (e.g. because we're out of mapping space) then we must not hold on to the page we allocated for DMA; doing so will result in a memory leak.

Cc: <firstname.lastname@example.org>
Reported-by: Bryan Phillippe <email@example.com>
Tested-by: Bryan Phillippe <firstname.lastname@example.org>
Signed-off-by: Russell King <email@example.com>
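The error-path rule being fixed is the classic "free what you allocated before bailing out" pattern. A hedged userspace sketch (malloc/free stand in for page allocation, and map_for_dma() is a hypothetical stand-in for a dma_map_single()-style call; the flag only exists so the failure path can be exercised):

```c
#include <stdlib.h>
#include <stddef.h>

/* Stand-in for a DMA mapping call; failure models running out of
 * mapping space. */
static int map_for_dma(void *buf, int simulate_failure)
{
    (void)buf;
    return simulate_failure ? -1 : 0;
}

static void *alloc_mapped_buffer(size_t len, int simulate_map_failure)
{
    void *buf = malloc(len);
    if (!buf)
        return NULL;
    if (map_for_dma(buf, simulate_map_failure) < 0) {
        free(buf);      /* the fix: release the buffer, don't leak it */
        return NULL;
    }
    return buf;
}
```

Without the `free(buf)` on the mapping-failure path, every failed mapping attempt would leak one buffer, which is exactly the leak described above.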
Enforce an explicit upper limit for the buddy index during memory free operations. This index is generated during memory free operations of the buddy bestfit allocator; it points to each free buddy that should be coalesced into a larger free memory area. The upper limit now enforced is the number of entries in the current PMEM region.

Without this fix, pmem_buddy_bestfit_free would continue to corrupt bytes at ever higher addresses, by powers of 2, until stopped by the high bit being set in the currently examined byte. Unless the high bit was set immediately at the first invalid byte, this could have caused memory corruption past the current pmem region metadata in kernel memory space.

One manifestation of this problem was a corner case where non-existent buddies were incorrectly coalesced. In a 12MB region, the buddy free for 8MB tried to coalesce the freed buddy metadata with a non-existent 8MB buddy which actually belonged to the following region. This caused the order number in the metadata belonging to the following region to be incorrectly incremented past the region boundary. Soon after, there was a subsequent invalid allocation in that following region past the region boundary.

Signed-off-by: Stephen Biggs <firstname.lastname@example.org>
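In a binary buddy allocator the buddy of a block at `index` with a given order sits at `index ^ (1 << order)`; in a region that is not a power-of-2 number of entries (like the 12MB case above), that computed buddy can lie partly or wholly past the region's end. A hedged sketch of the kind of bound check described, with hypothetical names rather than the pmem driver's actual code:

```c
#include <stddef.h>

/* Illustrative buddy lookup with the upper limit enforced: if the
 * computed buddy would extend past num_entries (the number of
 * entries in the region), there is no buddy to coalesce with. */
static int buddy_index(size_t index, unsigned order, size_t num_entries)
{
    size_t buddy = index ^ ((size_t)1 << order);

    /* The check: the buddy block must fit entirely inside the
     * region, otherwise stop coalescing instead of walking into
     * the following region's metadata. */
    if (buddy + ((size_t)1 << order) > num_entries)
        return -1;
    return (int)buddy;
}
```

With 12 one-MB entries, an order-3 (8MB) block at index 0 gets buddy index 8, whose 8 entries would run out to index 16, past the region; the check rejects it, matching the 12MB corner case described above.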