This repository was archived by the owner on Sep 24, 2020. It is now read-only.
Comparing changes
base repository: coreos/linux
base: v4.14.11
head repository: coreos/linux
compare: v4.14.11-coreos
  • 16 commits
  • 14 files changed
  • 11 contributors

Commits on Jan 2, 2018

  1. kbuild: derive relative path for KBUILD_SRC from CURDIR

    This enables relocating source and build trees to different roots,
    provided they stay reachable relative to one another.  Useful for
    builds done within a sandbox where the eventual root is prefixed
    by some undesirable path component.
    Vito Caputo authored and Jenkins OS committed Jan 2, 2018
    7c25b75
  2. Add arm64 coreos verity hash

    Signed-off-by: Geoff Levand <geoff@infradead.org>
    glevand authored and Jenkins OS committed Jan 2, 2018
    20720b9
  3. dccp: CVE-2017-8824: use-after-free in DCCP code

    Whenever the sock object is in DCCP_CLOSED state, dccp_disconnect()
    must free dccps_hc_tx_ccid and dccps_hc_rx_ccid and set them to NULL.
    
    Signed-off-by: Mohamed Ghannam <simo.ghannam@gmail.com>
    Reviewed-by: Eric Dumazet <edumazet@google.com>
    0x36 authored and Jenkins OS committed Jan 2, 2018
    d60268f
  4. block: factor out __blkdev_issue_zero_pages()

    blkdev_issue_zeroout() will use this in !BLKDEV_ZERO_NOFALLBACK case.
    
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    idryomov authored and Jenkins OS committed Jan 2, 2018
    a893a5e
  5. block: cope with WRITE ZEROES failing in blkdev_issue_zeroout()

    sd_config_write_same() ignores ->max_ws_blocks == 0 and resets it to
    permit trying WRITE SAME on older SCSI devices, unless ->no_write_same
    is set.  Because REQ_OP_WRITE_ZEROES is implemented in terms of WRITE
    SAME, blkdev_issue_zeroout() may fail with -EREMOTEIO:
    
      $ fallocate -zn -l 1k /dev/sdg
      fallocate: fallocate failed: Remote I/O error
      $ fallocate -zn -l 1k /dev/sdg  # OK
      $ fallocate -zn -l 1k /dev/sdg  # OK
    
    The following calls succeed because sd_done() sets ->no_write_same in
    response to a sense that would become BLK_STS_TARGET/-EREMOTEIO, causing
    __blkdev_issue_zeroout() to fall back to generating ZERO_PAGE bios.
    
    This means blkdev_issue_zeroout() must cope with WRITE ZEROES failing
    and fall back to manually zeroing, unless BLKDEV_ZERO_NOFALLBACK is
    specified.  For BLKDEV_ZERO_NOFALLBACK case, return -EOPNOTSUPP if
    sd_done() has just set ->no_write_same thus indicating lack of offload
    support.
    
    Fixes: c20cfc2 ("block: stop using blkdev_issue_write_same for zeroing")
    Cc: Hannes Reinecke <hare@suse.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    idryomov authored and Jenkins OS committed Jan 2, 2018
    0d68ba2

Commits on Jan 3, 2018

  1. Merge pull request #128 from coreosbot/v4.14.11-coreos

    Rebase patches onto 4.14.11
    bgilbert authored Jan 3, 2018
    e2b917f

Commits on Jan 4, 2018

  1. x86/cpu, x86/pti: Do not enable PTI on AMD processors

    AMD processors are not subject to the types of attacks that the kernel
    page table isolation feature protects against.  The AMD microarchitecture
    does not allow memory references, including speculative references, that
    access higher privileged data when running in a lesser privileged mode
    when that access would result in a page fault.
    
    Disable page table isolation by default on AMD processors by not setting
    the X86_BUG_CPU_INSECURE feature, which controls whether X86_FEATURE_PTI
    is set.
    
    Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Borislav Petkov <bp@suse.de>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: stable@vger.kernel.org
    Link: https://lkml.kernel.org/r/20171227054354.20369.94587.stgit@tlendack-t1.amdoffice.net
    tlendacky authored and bgilbert committed Jan 4, 2018
    65e80d5
  2. x86/pti: Make sure the user/kernel PTEs match

    Meelis reported that his K8 Athlon64 emits MCE warnings when PTI is
    enabled:
    
    [Hardware Error]: Error Addr: 0x0000ffff81e000e0
    [Hardware Error]: MC1 Error: L1 TLB multimatch.
    [Hardware Error]: cache level: L1, tx: INSN
    
    The address is in the entry area, which is mapped into kernel _AND_ user
    space. That's special because we switch CR3 while we are executing
    there. 
    
    User mapping:
    0xffffffff81e00000-0xffffffff82000000           2M     ro         PSE     GLB x  pmd
    
    Kernel mapping:
    0xffffffff81000000-0xffffffff82000000          16M     ro         PSE         x  pmd
    
    So the K8 is complaining that the TLB entries differ. They differ in the
    GLB bit.
    
    Drop the GLB bit when installing the user shared mapping.
    
    Fixes: 6dc72c3 ("x86/mm/pti: Share entry text PMD")
    Reported-by: Meelis Roos <mroos@linux.ee>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Meelis Roos <mroos@linux.ee>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: stable@vger.kernel.org
    Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801031407180.1957@nanos
    KAGA-KOKO authored and bgilbert committed Jan 4, 2018
    612c0c9
  3. x86/pti: Switch to kernel CR3 early in entry_SYSCALL_compat()

    The preparation for PTI which added CR3 switching to the entry code
    misplaced the CR3 switch in entry_SYSCALL_compat().
    
    With PTI enabled the entry code tries to access a per cpu variable after
    switching to kernel GS. This fails because that variable is not mapped to
    user space. This results in a double fault and in the worst case a kernel
    crash.
    
    Move the switch ahead of the access and clobber RSP which has been saved
    already.
    
    Fixes: 8a09317 ("x86/mm/pti: Prepare the x86/entry assembly code for entry/exit CR3 switching")
    Reported-by: Lars Wendler <wendler.lars@web.de>
    Reported-by: Laura Abbott <labbott@redhat.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Greg KH <gregkh@linuxfoundation.org>
    Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Cc: Juergen Gross <jgross@suse.com>
    Cc: stable@vger.kernel.org
    Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801031949200.1957@nanos
    KAGA-KOKO authored and bgilbert committed Jan 4, 2018
    6224d8f
  4. x86/process: Define cpu_tss_rw in same section as declaration

    cpu_tss_rw is declared with DECLARE_PER_CPU_PAGE_ALIGNED
    but then defined with DEFINE_PER_CPU_SHARED_ALIGNED
    leading to section mismatch warnings.
    
    Use DEFINE_PER_CPU_PAGE_ALIGNED consistently. This is necessary because
    it's mapped to the cpu entry area and must be page aligned.
    
    [ tglx: Massaged changelog a bit ]
    
    Fixes: 1a935bc ("x86/entry: Move SYSENTER_stack to the beginning of struct tss_struct")
    Suggested-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: thomas.lendacky@amd.com
    Cc: Borislav Petkov <bpetkov@suse.de>
    Cc: tklauser@distanz.ch
    Cc: minipli@googlemail.com
    Cc: me@kylehuey.com
    Cc: namit@vmware.com
    Cc: luto@kernel.org
    Cc: jpoimboe@redhat.com
    Cc: tj@kernel.org
    Cc: cl@linux.com
    Cc: bp@suse.de
    Cc: thgarnie@google.com
    Cc: kirill.shutemov@linux.intel.com
    Cc: stable@vger.kernel.org
    Link: https://lkml.kernel.org/r/20180103203954.183360-1-ndesaulniers@google.com
    nickdesaulniers authored and bgilbert committed Jan 4, 2018
    2183883
  5. x86/mm: Set MODULES_END to 0xffffffffff000000

    Since f06bdd4 ("x86/mm: Adapt MODULES_END based on fixmap section size")
    kasan_mem_to_shadow(MODULES_END) could be not aligned to a page boundary.
    
    So passing a page-unaligned address to kasan_populate_zero_shadow() has
    two possible effects:
    
    1) It may leave a one-page hole in the area that is supposed to be
      populated. After commit 2150652 ("x86/kasan/64: Teach KASAN about the
      cpu_entry_area") that hole happens to be in the shadow covering the
      fixmap area and leads to a crash:
    
     BUG: unable to handle kernel paging request at fffffbffffe8ee04
     RIP: 0010:check_memory_region+0x5c/0x190
    
     Call Trace:
      <NMI>
      memcpy+0x1f/0x50
      ghes_copy_tofrom_phys+0xab/0x180
      ghes_read_estatus+0xfb/0x280
      ghes_notify_nmi+0x2b2/0x410
      nmi_handle+0x115/0x2c0
      default_do_nmi+0x57/0x110
      do_nmi+0xf8/0x150
      end_repeat_nmi+0x1a/0x1e
    
    Note, the crash likely disappeared after commit 92a0f81, which changed
    the kasan_populate_zero_shadow() call back to the way it was before
    commit 2150652.
    
    2) Attempt to load module near MODULES_END will fail, because
       __vmalloc_node_range() called from kasan_module_alloc() will hit the
       WARN_ON(!pte_none(*pte)) in the vmap_pte_range() and bail out with error.
    
    To fix this we need to make kasan_mem_to_shadow(MODULES_END) page aligned
    which means that MODULES_END should be 8*PAGE_SIZE aligned.
    
    The whole point of commit f06bdd4 was to move MODULES_END down when
    NR_CPUS is big, since the cpu_entry_area then takes a lot of space.
    But since 92a0f81 ("x86/cpu_entry_area: Move it out of the fixmap")
    the cpu_entry_area is no longer in fixmap, so we could just set
    MODULES_END to a fixed 8*PAGE_SIZE aligned address.
    
    Fixes: f06bdd4 ("x86/mm: Adapt MODULES_END based on fixmap section size")
    Reported-by: Jakub Kicinski <kubakici@wp.pl>
    Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: stable@vger.kernel.org
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Thomas Garnier <thgarnie@google.com>
    Link: https://lkml.kernel.org/r/20171228160620.23818-1-aryabinin@virtuozzo.com
    aryabinin authored and bgilbert committed Jan 4, 2018
    a058d76
  6. x86/mm: Map cpu_entry_area at the same place on 4/5 level

    There is no reason for 4 and 5 level pagetables to have a different
    layout. It just makes determining vaddr_end for KASLR harder than
    necessary.
    
    Fixes: 92a0f81 ("x86/cpu_entry_area: Move it out of the fixmap")
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Benjamin Gilbert <benjamin.gilbert@coreos.com>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: stable <stable@vger.kernel.org>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Garnier <thgarnie@google.com>,
    Cc: Alexander Kuleshov <kuleshovmail@gmail.com>
    Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801041320360.1771@nanos
    KAGA-KOKO authored and bgilbert committed Jan 4, 2018
    c131254

Commits on Jan 5, 2018

  1. x86/kaslr: Fix the vaddr_end mess

    vaddr_end for KASLR is only documented in the KASLR code itself and is
    adjusted depending on config options. So it's not surprising that a change
    of the memory layout causes KASLR to have the wrong vaddr_end. This can map
    arbitrary stuff into other areas causing hard to understand problems.
    
    Remove the whole ifdef magic and define the start of the cpu_entry_area to
    be the end of the KASLR vaddr range.
    
    Add documentation to that effect.
    
    Fixes: 92a0f81 ("x86/cpu_entry_area: Move it out of the fixmap")
    Reported-by: Benjamin Gilbert <benjamin.gilbert@coreos.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Benjamin Gilbert <benjamin.gilbert@coreos.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: stable <stable@vger.kernel.org>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Garnier <thgarnie@google.com>,
    Cc: Alexander Kuleshov <kuleshovmail@gmail.com>
    Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801041320360.1771@nanos
    KAGA-KOKO authored and bgilbert committed Jan 5, 2018
    3842b86
  2. x86/events/intel/ds: Use the proper cache flush method for mapping ds buffers
    
    Thomas reported the following warning:
    
     BUG: using smp_processor_id() in preemptible [00000000] code: ovsdb-server/4498
     caller is native_flush_tlb_single+0x57/0xc0
     native_flush_tlb_single+0x57/0xc0
     __set_pte_vaddr+0x2d/0x40
     set_pte_vaddr+0x2f/0x40
     cea_set_pte+0x30/0x40
     ds_update_cea.constprop.4+0x4d/0x70
     reserve_ds_buffers+0x159/0x410
     x86_reserve_hardware+0x150/0x160
     x86_pmu_event_init+0x3e/0x1f0
     perf_try_init_event+0x69/0x80
     perf_event_alloc+0x652/0x740
     SyS_perf_event_open+0x3f6/0xd60
     do_syscall_64+0x5c/0x190
    
    set_pte_vaddr is used to map the ds buffers into the cpu entry area, but
    there are two problems with that:
    
     1) The resulting flush is not supposed to be called in preemptible context
    
     2) The cpu entry area is supposed to be per CPU, but the debug store
        buffers are mapped for all CPUs so these mappings need to be flushed
        globally.
    
    Add the necessary preemption protection across the mapping code and flush
    TLBs globally.
    
    Fixes: c1961a4 ("x86/events/intel/ds: Map debug buffers in cpu_entry_area")
    Reported-by: Thomas Zeitlhofer <thomas.zeitlhofer+lkml@ze-it.at>
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Thomas Zeitlhofer <thomas.zeitlhofer+lkml@ze-it.at>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: stable@vger.kernel.org
    Link: https://lkml.kernel.org/r/20180104170712.GB3040@hirez.programming.kicks-ass.net
    Peter Zijlstra authored and bgilbert committed Jan 5, 2018
    ac76d02
  3. x86/tlb: Drop the _GPL from the cpu_tlbstate export

    The recent changes for PTI touch cpu_tlbstate from various tlb_flush
    inlines. cpu_tlbstate is exported as a GPL symbol, so this causes a
    regression when building the most beloved out-of-tree drivers for
    certain graphics cards.
    
    Aside from that, the export has been wrong since it was introduced, as
    it should have been EXPORT_PER_CPU_SYMBOL_GPL().
    
    Use the correct PER_CPU export and drop the _GPL to restore the previous
    state which allows users to utilize the cards they paid for. I'm always
    happy to make this kind of change to support our #friends (or however
    this hot hashtag is named today) from the closed-source graphics world.
    
    Fixes: 1e02ce4 ("x86: Store a per-cpu shadow copy of CR4")
    Fixes: 6fd166a ("x86/mm: Use/Fix PCID to optimize user/kernel switches")
    Reported-by: Kees Cook <keescook@google.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: stable@vger.kernel.org
    KAGA-KOKO authored and bgilbert committed Jan 5, 2018
    0c74b4e
  4. Merge KPTI fixes based on pull request #129 from bgilbert/v4.14.11-coreos
    
    KPTI fixes for 4.14.11.  Modified from pull request #129 to fix
    typos in "x86/kaslr: Fix the vaddr_end mess".
    bgilbert committed Jan 5, 2018
    a9e46b7