Commits on Jul 18, 2021

  1. drm/msm/kms: drop set_encoder_mode callback

    set_encoder_mode callback is completely unused now. Drop it from
    msm_kms_func().
    
    Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
    Reviewed-by: Abhinav Kumar <abhinavk@codeaurora.org>
    lumag authored and intel-lab-lkp committed Jul 18, 2021
  2. drm/msm/dsi: stop calling set_encoder_mode callback

    None of the display drivers now implement set_encoder_mode callback.
    Stop calling it from the modeset init code.
    
    Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
    Reviewed-by: Abhinav Kumar <abhinavk@codeaurora.org>
    lumag authored and intel-lab-lkp committed Jul 18, 2021
  3. drm/msm/dp: stop calling set_encoder_mode callback

    None of the display drivers now implement set_encoder_mode callback.
    Stop calling it from the modeset init code.
    
    Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
    Reviewed-by: Abhinav Kumar <abhinavk@codeaurora.org>
    lumag authored and intel-lab-lkp committed Jul 18, 2021
  4. drm/msm/mdp5: move mdp5_encoder_set_intf_mode after msm_dsi_modeset_init

    Move a call to mdp5_encoder_set_intf_mode() after
    msm_dsi_modeset_init(), removing set_encoder_mode callback.
    
    Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
    Reviewed-by: Abhinav Kumar <abhinavk@codeaurora.org>
    lumag authored and intel-lab-lkp committed Jul 18, 2021
  5. drm/msm/dpu: support setting up two independent DSI connectors

    Move setting up encoders from set_encoder_mode to
    _dpu_kms_initialize_dsi() / _dpu_kms_initialize_displayport(). This
    allows us to support not only "single DSI" and "bonded DSI" but also "two
    independent DSI" configurations. In the future this would also help add
    support for multiple DP connectors.
    
    Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
    Reviewed-by: Abhinav Kumar <abhinavk@codeaurora.org>
    lumag authored and intel-lab-lkp committed Jul 18, 2021
  6. drm/msm/dsi: add three helper functions

    Add three helper functions to be used by display drivers for setting up
    encoders.
    
    Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
    Reviewed-by: Abhinav Kumar <abhinavk@codeaurora.org>
    lumag authored and intel-lab-lkp committed Jul 18, 2021
  7. drm/msm/dsi: rename dual DSI to bonded DSI

    We are preparing to support two independent DSI hosts in the DSI/DPU
    code. To remove possible confusion (as both configurations can be
    referenced as dual DSI) let's rename old "dual DSI" (two DSI hosts
    driving single device, with clocks being locked) to "bonded DSI".
    
    Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
    Reviewed-by: Abhinav Kumar <abhinavk@codeaurora.org>
    lumag authored and intel-lab-lkp committed Jul 18, 2021

Commits on Jul 16, 2021

  1. Revert "Makefile: Enable -Wimplicit-fallthrough for Clang"

    This reverts commit b7eb335.
    
    It turns out that the problem with the clang -Wimplicit-fallthrough
    warning is not about the kernel source code, but about clang itself, and
    that the warning is unusable until clang fixes its broken ways.
    
    In particular, when you enable this warning for clang, you not only get
    warnings about implicit fallthroughs.  You also get this:
    
       warning: fallthrough annotation in unreachable code [-Wimplicit-fallthrough]
    
    which is completely broken because it
    
     (a) doesn't even tell you where the problem is (seriously: no line
         numbers, no filename, no nothing).
    
     (b) is fundamentally broken anyway, because there are perfectly valid
         reasons to have a fallthrough statement even if it turns out that
         it can perhaps not be reached.
    
    In the kernel, an example of that second case is code in the scheduler:
    
                    switch (state) {
                    case cpuset:
                            if (IS_ENABLED(CONFIG_CPUSETS)) {
                                    cpuset_cpus_allowed_fallback(p);
                                    state = possible;
                                    break;
                            }
                            fallthrough;
                    case possible:
    
    where if CONFIG_CPUSETS is enabled you actually never hit the
    fallthrough case at all.  But that in no way makes the fallthrough
    wrong.
    
    So the warning is completely broken, and enabling it for clang is a very
    bad idea.
    
    In the meantime, we can keep the gcc option enabled, and make the gcc
    build use
    
        -Wimplicit-fallthrough=5
    
    which means that we will at least continue to require a proper
    fallthrough statement, and that gcc won't silently accept the magic
    comment versions. Because gcc does this all correctly, and while the odd
    "=5" part is kind of obscure, it's documented in [1]:
    
      "-Wimplicit-fallthrough=5 doesn’t recognize any comments as
       fallthrough comments, only attributes disable the warning"
    
    so if clang ever fixes its bad behavior we can try enabling it there again.
    
    Link: https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html [1]
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
    Cc: Nathan Chancellor <nathan@kernel.org>
    Cc: Nick Desaulniers <ndesaulniers@google.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    torvalds committed Jul 16, 2021
  2. Merge tag 'configfs-5.13-1' of git://git.infradead.org/users/hch/configfs
    
    Pull configfs fix from Christoph Hellwig:
    
     - fix the read and write iterators (Bart Van Assche)
    
    * tag 'configfs-5.13-1' of git://git.infradead.org/users/hch/configfs:
      configfs: fix the read and write iterators
    torvalds committed Jul 16, 2021
  3. Merge tag 'pwm/for-5.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/thierry.reding/linux-pwm
    
    Pull pwm fixes from Thierry Reding:
     "A couple of fixes from Uwe that I missed for v5.14-rc1"
    
    * tag 'pwm/for-5.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/thierry.reding/linux-pwm:
      pwm: ep93xx: Ensure configuring period and duty_cycle isn't wrongly skipped
      pwm: berlin: Ensure configuring period and duty_cycle isn't wrongly skipped
      pwm: tiecap: Ensure configuring period and duty_cycle isn't wrongly skipped
      pwm: spear: Ensure configuring period and duty_cycle isn't wrongly skipped
      pwm: sprd: Ensure configuring period and duty_cycle isn't wrongly skipped
    torvalds committed Jul 16, 2021

Commits on Jul 15, 2021

  1. Merge tag 'Wimplicit-fallthrough-clang-5.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux
    
    Pull fallthrough fixes from Gustavo Silva:
     "This fixes many fall-through warnings when building with Clang and
      -Wimplicit-fallthrough, and also enables -Wimplicit-fallthrough for
      Clang, globally.
    
      It's also important to notice that since we have adopted the use of
      the pseudo-keyword macro fallthrough, we also want to avoid having
      more /* fall through */ comments being introduced. Contrary to GCC,
      Clang doesn't recognize any comments as implicit fall-through markings
      when the -Wimplicit-fallthrough option is enabled.
    
      So, in order to avoid having more comments being introduced, we use
      the option -Wimplicit-fallthrough=5 for GCC, which, similar to Clang,
      will cause a warning in case a code comment is intended to be used as
      a fall-through marking. The patch for Makefile also enforces this.
    
      We had almost 4,000 of these issues for Clang in the beginning, and
      there might be a couple more out there when building some
      architectures with certain configurations. However, with the recent
      fixes I think we are in good shape and it is now possible to enable
      the warning for Clang"
    
    * tag 'Wimplicit-fallthrough-clang-5.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux: (27 commits)
      Makefile: Enable -Wimplicit-fallthrough for Clang
      powerpc/smp: Fix fall-through warning for Clang
      dmaengine: mpc512x: Fix fall-through warning for Clang
      usb: gadget: fsl_qe_udc: Fix fall-through warning for Clang
      powerpc/powernv: Fix fall-through warning for Clang
      MIPS: Fix unreachable code issue
      MIPS: Fix fall-through warnings for Clang
      ASoC: Mediatek: MT8183: Fix fall-through warning for Clang
      power: supply: Fix fall-through warnings for Clang
      dmaengine: ti: k3-udma: Fix fall-through warning for Clang
      s390: Fix fall-through warnings for Clang
      dmaengine: ipu: Fix fall-through warning for Clang
      iommu/arm-smmu-v3: Fix fall-through warning for Clang
      mmc: jz4740: Fix fall-through warning for Clang
      PCI: Fix fall-through warning for Clang
      scsi: libsas: Fix fall-through warning for Clang
      video: fbdev: Fix fall-through warning for Clang
      math-emu: Fix fall-through warning
      cpufreq: Fix fall-through warning for Clang
      drm/msm: Fix fall-through warning in msm_gem_new_impl()
      ...
    torvalds committed Jul 15, 2021
  2. Merge branch 'akpm' (patches from Andrew)

    Merge misc fixes from Andrew Morton:
     "13 patches.
    
      Subsystems affected by this patch series: mm (kasan, pagealloc, rmap,
      hmm, and hugetlb), and hfs"
    
    * emailed patches from Andrew Morton <akpm@linux-foundation.org>:
      mm/hugetlb: fix refs calculation from unaligned @vaddr
      hfs: add lock nesting notation to hfs_find_init
      hfs: fix high memory mapping in hfs_bnode_read
      hfs: add missing clean-up in hfs_fill_super
      lib/test_hmm: remove set but unused page variable
      mm: fix the try_to_unmap prototype for !CONFIG_MMU
      mm/page_alloc: further fix __alloc_pages_bulk() return value
      mm/page_alloc: correct return value when failing at preparing
      mm/page_alloc: avoid page allocator recursion with pagesets.lock held
      Revert "mm/page_alloc: make should_fail_alloc_page() static"
      kasan: fix build by including kernel.h
      kasan: add memzero init for unaligned size at DEBUG
      mm: move helper to check slub_debug_enabled
    torvalds committed Jul 15, 2021
  3. EDAC/igen6: fix core dependency AGAIN

    My previous patch had a typo/thinko which prevents this driver
    from being enabled: change X64_64 to X86_64.
    
    Fixes: 0a9ece9 ("EDAC/igen6: fix core dependency")
    Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
    Cc: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
    Cc: linux-edac@vger.kernel.org
    Cc: bowsingbetee <bowsingbetee@protonmail.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Tony Luck <tony.luck@intel.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    rddunlap authored and torvalds committed Jul 15, 2021
  4. Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

    Pull kvm fixes from Paolo Bonzini:
    
     - Allow again loading KVM on 32-bit non-PAE builds
    
     - Fixes for host SMIs on AMD
    
     - Fixes for guest SMIs on AMD
    
     - Fixes for selftests on s390 and ARM
    
     - Fix memory leak
    
     - Enforce no-instrumentation area on vmentry when hardware breakpoints
       are in use.
    
    * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (25 commits)
      KVM: selftests: smm_test: Test SMM enter from L2
      KVM: nSVM: Restore nested control upon leaving SMM
      KVM: nSVM: Fix L1 state corruption upon return from SMM
      KVM: nSVM: Introduce svm_copy_vmrun_state()
      KVM: nSVM: Check that VM_HSAVE_PA MSR was set before VMRUN
      KVM: nSVM: Check the value written to MSR_VM_HSAVE_PA
      KVM: SVM: Fix sev_pin_memory() error checks in SEV migration utilities
      KVM: SVM: Return -EFAULT if copy_to_user() for SEV mig packet header fails
      KVM: SVM: add module param to control the #SMI interception
      KVM: SVM: remove INIT intercept handler
      KVM: SVM: #SMI interception must not skip the instruction
      KVM: VMX: Remove vmx_msr_index from vmx.h
      KVM: X86: Disable hardware breakpoints unconditionally before kvm_x86->run()
      KVM: selftests: Address extra memslot parameters in vm_vaddr_alloc
      kvm: debugfs: fix memory leak in kvm_create_vm_debugfs
      KVM: x86/pmu: Clear anythread deprecated bit when 0xa leaf is unsupported on the SVM
      KVM: mmio: Fix use-after-free Read in kvm_vm_ioctl_unregister_coalesced_mmio
      KVM: SVM: Revert clearing of C-bit on GPA in #NPF handler
      KVM: x86/mmu: Do not apply HPA (memory encryption) mask to GPAs
      KVM: x86: Use kernel's x86_phys_bits to handle reduced MAXPHYADDR
      ...
    torvalds committed Jul 15, 2021
  5. Merge tag 'iommu-fixes-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
    
    Pull iommu fixes from Joerg Roedel:
    
     - Revert a patch which caused boot failures with QCOM IOMMU
    
     - Two fixes for Intel VT-d context table handling
    
     - Physical address decoding fix for Rockchip IOMMU
    
     - Add a reviewer for AMD IOMMU
    
    * tag 'iommu-fixes-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
      MAINTAINERS: Add Suravee Suthikulpanit as Reviewer for AMD IOMMU (AMD-Vi)
      iommu/rockchip: Fix physical address decoding
      iommu/vt-d: Fix clearing real DMA device's scalable-mode context entries
      iommu/vt-d: Global devTLB flush when present context entry changed
      iommu/qcom: Revert "iommu/arm: Cleanup resources in case of probe error path"
    torvalds committed Jul 15, 2021
  6. mm/hugetlb: fix refs calculation from unaligned @vaddr

    Commit 82e5d37 ("mm/hugetlb: refactor subpage recording")
    refactored the count of subpages but missed an edge case when @vaddr is
    not aligned to PAGE_SIZE e.g.  when close to vma->vm_end.  It would then
    erroneously set @refs to 0 and record_subpages_vmas() wouldn't set the
    @pages array element to its value, consequently causing the reported
    null-deref by syzbot.
    
    Fix it by aligning down @vaddr by PAGE_SIZE in @refs calculation.
    
    Link: https://lkml.kernel.org/r/20210713152440.28650-1-joao.m.martins@oracle.com
    Fixes: 82e5d37 ("mm/hugetlb: refactor subpage recording")
    Reported-by: syzbot+a3fcd59df1b372066f5a@syzkaller.appspotmail.com
    Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
    Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    jpemartins authored and torvalds committed Jul 15, 2021
  7. hfs: add lock nesting notation to hfs_find_init

    Syzbot reports a possible recursive lock in [1].
    
    This happens due to missing lock nesting information.  From the logs, we
    see that a call to hfs_fill_super is made to mount the hfs filesystem.
    While searching for the root inode, the lock on the catalog btree is
    grabbed.  Then, when the parent of the root isn't found, a call to
    __hfs_bnode_create is made to create the parent of the root.  This
    eventually leads to a call to hfs_ext_read_extent which grabs a lock on
    the extents btree.
    
    Since the order of locking is catalog btree -> extents btree, this lock
    hierarchy does not lead to a deadlock.
    
    To tell lockdep that this locking is safe, we add nesting notation to
    distinguish between catalog btrees, extents btrees, and attributes
    btrees (for HFS+).  This has already been done in hfsplus.
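
    The notation amounts to giving each btree class its own lockdep subclass
    and taking the tree lock with mutex_lock_nested(). A sketch of the
    approach, modeled on the existing hfsplus code (exact identifiers are
    assumptions, not necessarily the patch verbatim):

```c
/* Sketch modeled on the hfsplus equivalent; names are assumptions. */
enum hfs_btree_mutex_classes {
	CATALOG_BTREE_MUTEX,
	EXTENTS_BTREE_MUTEX,
	ATTR_BTREE_MUTEX,
};

/* In hfs_find_init(), pick the subclass from the btree being locked so
 * lockdep can tell catalog -> extents nesting from a real self-deadlock: */
switch (tree->cnid) {
case HFS_CAT_CNID:
	mutex_lock_nested(&tree->tree_lock, CATALOG_BTREE_MUTEX);
	break;
case HFS_EXT_CNID:
	mutex_lock_nested(&tree->tree_lock, EXTENTS_BTREE_MUTEX);
	break;
}
```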
    
    Link: https://syzkaller.appspot.com/bug?id=f007ef1d7a31a469e3be7aeb0fde0769b18585db [1]
    Link: https://lkml.kernel.org/r/20210701030756.58760-4-desmondcheongzx@gmail.com
    Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx@gmail.com>
    Reported-by: syzbot+b718ec84a87b7e73ade4@syzkaller.appspotmail.com
    Tested-by: syzbot+b718ec84a87b7e73ade4@syzkaller.appspotmail.com
    Reviewed-by: Viacheslav Dubeyko <slava@dubeyko.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
    Cc: Shuah Khan <skhan@linuxfoundation.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    desmondcheongzx authored and torvalds committed Jul 15, 2021
  8. hfs: fix high memory mapping in hfs_bnode_read

    Pages that we read in hfs_bnode_read need to be kmapped into kernel
    address space.  However, currently only the 0th page is kmapped.  If the
    given offset + length exceeds this 0th page, then we have an invalid
    memory access.
    
    To fix this, we kmap relevant pages one by one and copy their relevant
    portions of data.
    
    An example of invalid memory access occurring without this fix can be seen
    in the following crash report:
    
      ==================================================================
      BUG: KASAN: use-after-free in memcpy include/linux/fortify-string.h:191 [inline]
      BUG: KASAN: use-after-free in hfs_bnode_read+0xc4/0xe0 fs/hfs/bnode.c:26
      Read of size 2 at addr ffff888125fdcffe by task syz-executor5/4634
    
      CPU: 0 PID: 4634 Comm: syz-executor5 Not tainted 5.13.0-syzkaller #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
      Call Trace:
       __dump_stack lib/dump_stack.c:79 [inline]
       dump_stack+0x195/0x1f8 lib/dump_stack.c:120
       print_address_description.constprop.0+0x1d/0x110 mm/kasan/report.c:233
       __kasan_report mm/kasan/report.c:419 [inline]
       kasan_report.cold+0x7b/0xd4 mm/kasan/report.c:436
       check_region_inline mm/kasan/generic.c:180 [inline]
       kasan_check_range+0x154/0x1b0 mm/kasan/generic.c:186
       memcpy+0x24/0x60 mm/kasan/shadow.c:65
       memcpy include/linux/fortify-string.h:191 [inline]
       hfs_bnode_read+0xc4/0xe0 fs/hfs/bnode.c:26
       hfs_bnode_read_u16 fs/hfs/bnode.c:34 [inline]
       hfs_bnode_find+0x880/0xcc0 fs/hfs/bnode.c:365
       hfs_brec_find+0x2d8/0x540 fs/hfs/bfind.c:126
       hfs_brec_read+0x27/0x120 fs/hfs/bfind.c:165
       hfs_cat_find_brec+0x19a/0x3b0 fs/hfs/catalog.c:194
       hfs_fill_super+0xc13/0x1460 fs/hfs/super.c:419
       mount_bdev+0x331/0x3f0 fs/super.c:1368
       hfs_mount+0x35/0x40 fs/hfs/super.c:457
       legacy_get_tree+0x10c/0x220 fs/fs_context.c:592
       vfs_get_tree+0x93/0x300 fs/super.c:1498
       do_new_mount fs/namespace.c:2905 [inline]
       path_mount+0x13f5/0x20e0 fs/namespace.c:3235
       do_mount fs/namespace.c:3248 [inline]
       __do_sys_mount fs/namespace.c:3456 [inline]
       __se_sys_mount fs/namespace.c:3433 [inline]
       __x64_sys_mount+0x2b8/0x340 fs/namespace.c:3433
       do_syscall_64+0x37/0xc0 arch/x86/entry/common.c:47
       entry_SYSCALL_64_after_hwframe+0x44/0xae
      RIP: 0033:0x45e63a
      Code: 48 c7 c2 bc ff ff ff f7 d8 64 89 02 b8 ff ff ff ff eb d2 e8 88 04 00 00 0f 1f 84 00 00 00 00 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
      RSP: 002b:00007f9404d410d8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
      RAX: ffffffffffffffda RBX: 0000000020000248 RCX: 000000000045e63a
      RDX: 0000000020000000 RSI: 0000000020000100 RDI: 00007f9404d41120
      RBP: 00007f9404d41120 R08: 00000000200002c0 R09: 0000000020000000
      R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003
      R13: 0000000000000003 R14: 00000000004ad5d8 R15: 0000000000000000
    
      The buggy address belongs to the page:
      page:00000000dadbcf3e refcount:0 mapcount:0 mapping:0000000000000000 index:0x1 pfn:0x125fdc
      flags: 0x2fffc0000000000(node=0|zone=2|lastcpupid=0x3fff)
      raw: 02fffc0000000000 ffffea000497f748 ffffea000497f6c8 0000000000000000
      raw: 0000000000000001 0000000000000000 00000000ffffffff 0000000000000000
      page dumped because: kasan: bad access detected
    
      Memory state around the buggy address:
       ffff888125fdce80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
       ffff888125fdcf00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
      >ffff888125fdcf80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
                                                                      ^
       ffff888125fdd000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
       ffff888125fdd080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
      ==================================================================
    
    Link: https://lkml.kernel.org/r/20210701030756.58760-3-desmondcheongzx@gmail.com
    Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx@gmail.com>
    Reviewed-by: Viacheslav Dubeyko <slava@dubeyko.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
    Cc: Shuah Khan <skhan@linuxfoundation.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    desmondcheongzx authored and torvalds committed Jul 15, 2021
  9. hfs: add missing clean-up in hfs_fill_super

    Patch series "hfs: fix various errors", v2.
    
    This series ultimately aims to address a lockdep warning in
    hfs_find_init reported by Syzbot [1].
    
    The work done for this led to the discovery of another bug, and the
    Syzkaller repro test also reveals an invalid memory access error after
    clearing the lockdep warning.  Hence, this series is broken up into
    three patches:
    
    1. Add a missing call to hfs_find_exit for an error path in
       hfs_fill_super
    
    2. Fix memory mapping in hfs_bnode_read by fixing calls to kmap
    
    3. Add lock nesting notation to tell lockdep that the observed locking
       hierarchy is safe
    
    This patch (of 3):
    
    Before exiting hfs_fill_super, the struct hfs_find_data used in
    hfs_find_init should be passed to hfs_find_exit to be cleaned up, and to
    release the lock held on the btree.
    
    The call to hfs_find_exit is missing from an error path.  We add it back
    in by consolidating calls to hfs_find_exit for error paths.
    
    Link: https://syzkaller.appspot.com/bug?id=f007ef1d7a31a469e3be7aeb0fde0769b18585db [1]
    Link: https://lkml.kernel.org/r/20210701030756.58760-1-desmondcheongzx@gmail.com
    Link: https://lkml.kernel.org/r/20210701030756.58760-2-desmondcheongzx@gmail.com
    Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx@gmail.com>
    Reviewed-by: Viacheslav Dubeyko <slava@dubeyko.com>
    Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Shuah Khan <skhan@linuxfoundation.org>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    desmondcheongzx authored and torvalds committed Jul 15, 2021
  10. lib/test_hmm: remove set but unused page variable

    The HMM selftests use atomic_check_access() to check atomic access to a
    page has been revoked.  It doesn't matter if the page mapping has been
    removed from the mirrored page tables as that also implies atomic access
    has been revoked.  Therefore remove the unused page variable to fix this
    compiler warning:
    
      lib/test_hmm.c:631:16: warning: variable `page' set but not used [-Wunused-but-set-variable]
    
    Link: https://lkml.kernel.org/r/20210706025603.4059-1-apopple@nvidia.com
    Fixes: b659bae ("mm: selftests for exclusive device memory")
    Signed-off-by: Alistair Popple <apopple@nvidia.com>
    Reported-by: Hulk Robot <hulkci@huawei.com>
    Reported-by: kernel test robot <oliver.sang@intel.com>
    Reported-by: Yang Yingliang <yangyingliang@huawei.com>
    Acked-by: Souptick Joarder <jrdr.linux@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Alistair Popple authored and torvalds committed Jul 15, 2021
  11. mm: fix the try_to_unmap prototype for !CONFIG_MMU

    Adjust the nommu stub of try_to_unmap to match the changed prototype for the
    full version.  Turn it into an inline instead of a macro to generally
    improve the type checking.
    
    Link: https://lkml.kernel.org/r/20210705053944.885828-1-hch@lst.de
    Fixes: 1fb08ac ("mm: rmap: make try_to_unmap() void function")
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Yang Shi <shy828301@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Christoph Hellwig authored and torvalds committed Jul 15, 2021
  12. mm/page_alloc: further fix __alloc_pages_bulk() return value

    The author of commit b3b64eb ("mm/page_alloc: do bulk array
    bounds check after checking populated elements") was possibly
    confused by the mixture of return values throughout the function.
    
    The API contract is clear that the function "Returns the number of pages
    on the list or array." It does not list zero as a unique return value with
    a special meaning.  Therefore zero is a plausible return value only if
    @nr_pages is zero or less.
    
    Clean up the return logic to make it clear that the returned value is
    always the total number of pages in the array/list, not the number of
    pages that were allocated during this call.
    
    The only change in behavior with this patch is the value returned if
    prepare_alloc_pages() fails.  To match the API contract, the number of
    pages currently in the array/list is returned in this case.
    
    The call site in __page_pool_alloc_pages_slow() also seems to be confused
    on this matter.  It should be attended to by someone who is familiar with
    that code.
    
    [mel@techsingularity.net: Return nr_populated if 0 pages are requested]
    
    Link: https://lkml.kernel.org/r/20210713152100.10381-4-mgorman@techsingularity.net
    Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
    Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
    Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
    Cc: Desmond Cheong Zhi Xi <desmondcheongzx@gmail.com>
    Cc: Zhang Qiang <Qiang.Zhang@windriver.com>
    Cc: Yanfei Xu <yanfei.xu@windriver.com>
    Cc: Matteo Croce <mcroce@microsoft.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    chucklever authored and torvalds committed Jul 15, 2021
  13. mm/page_alloc: correct return value when failing at preparing

    If the array passed in is already partially populated, we should return
    "nr_populated" even when failing at the argument preparation stage.
    
    Link: https://lkml.kernel.org/r/20210713152100.10381-3-mgorman@techsingularity.net
    Signed-off-by: Yanfei Xu <yanfei.xu@windriver.com>
    Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
    Link: https://lore.kernel.org/r/20210709102855.55058-1-yanfei.xu@windriver.com
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Yanfei Xu authored and torvalds committed Jul 15, 2021
  14. mm/page_alloc: avoid page allocator recursion with pagesets.lock held

    Syzbot is reporting potential deadlocks due to pagesets.lock when
    PAGE_OWNER is enabled.  One example from Desmond Cheong Zhi Xi is as
    follows
    
      __alloc_pages_bulk()
        local_lock_irqsave(&pagesets.lock, flags) <---- outer lock here
        prep_new_page():
          post_alloc_hook():
            set_page_owner():
              __set_page_owner():
                save_stack():
                  stack_depot_save():
                    alloc_pages():
                      alloc_page_interleave():
                        __alloc_pages():
                          get_page_from_freelist():
                            rmqueue():
                              rmqueue_pcplist():
                                local_lock_irqsave(&pagesets.lock, flags);
                                *** DEADLOCK ***
    
    Zhang, Qiang also reported
    
      BUG: sleeping function called from invalid context at mm/page_alloc.c:5179
      in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 1, name: swapper/0
      .....
      __dump_stack lib/dump_stack.c:79 [inline]
      dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:96
      ___might_sleep.cold+0x1f1/0x237 kernel/sched/core.c:9153
      prepare_alloc_pages+0x3da/0x580 mm/page_alloc.c:5179
      __alloc_pages+0x12f/0x500 mm/page_alloc.c:5375
      alloc_page_interleave+0x1e/0x200 mm/mempolicy.c:2147
      alloc_pages+0x238/0x2a0 mm/mempolicy.c:2270
      stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
      save_stack+0x15e/0x1e0 mm/page_owner.c:120
      __set_page_owner+0x50/0x290 mm/page_owner.c:181
      prep_new_page mm/page_alloc.c:2445 [inline]
      __alloc_pages_bulk+0x8b9/0x1870 mm/page_alloc.c:5313
      alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
      vm_area_alloc_pages mm/vmalloc.c:2775 [inline]
      __vmalloc_area_node mm/vmalloc.c:2845 [inline]
      __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2947
      __vmalloc_node mm/vmalloc.c:2996 [inline]
      vzalloc+0x67/0x80 mm/vmalloc.c:3066
    
    There are a number of ways it could be fixed.  The page owner code could
    be audited to strip GFP flags that allow sleeping but it'll impair the
    functionality of PAGE_OWNER if allocations fail.  The bulk allocator could
    add a special case to release/reacquire the lock for prep_new_page and
    lookup PCP after the lock is reacquired at the cost of performance.  The
    pages requiring prep could be tracked using the least significant bit and
    looping through the array although it is more complicated for the list
    interface.  The options are relatively complex and the second one still
    incurs a performance penalty when PAGE_OWNER is active so this patch takes
    the simple approach -- disable bulk allocation if PAGE_OWNER is active.
    The caller will be forced to allocate one page at a time incurring a
    performance penalty but PAGE_OWNER is already a performance penalty.
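
    A sketch of the shape of that fix (the predicate name is hypothetical
    and the actual patch differs in detail):

```c
/* Sketch only: bail out of the bulk path before taking pagesets.lock
 * when PAGE_OWNER is collecting stacks, so the caller falls back to
 * one-page-at-a-time allocation where sleeping is safe.
 * page_owner_active() is a hypothetical predicate, not the kernel API. */
if (IS_ENABLED(CONFIG_PAGE_OWNER) && page_owner_active())
	goto failed;

local_lock_irqsave(&pagesets.lock, flags);
/* ... per-CPU list allocation that must not recurse into the allocator ... */
```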
    
    Link: https://lkml.kernel.org/r/20210708081434.GV3840@techsingularity.net
    Fixes: dbbee9d ("mm/page_alloc: convert per-cpu list protection to local_lock")
    Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
    Reported-by: Desmond Cheong Zhi Xi <desmondcheongzx@gmail.com>
    Reported-by: "Zhang, Qiang" <Qiang.Zhang@windriver.com>
    Reported-by: syzbot+127fd7828d6eeb611703@syzkaller.appspotmail.com
    Tested-by: syzbot+127fd7828d6eeb611703@syzkaller.appspotmail.com
    Acked-by: Rafael Aquini <aquini@redhat.com>
    Cc: Shuah Khan <skhan@linuxfoundation.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    gormanm authored and torvalds committed Jul 15, 2021
  15. Revert "mm/page_alloc: make should_fail_alloc_page() static"

    This reverts commit f717309.
    
    Fix an unresolved symbol error when CONFIG_DEBUG_INFO_BTF=y:
    
        LD      vmlinux
        BTFIDS  vmlinux
      FAILED unresolved symbol should_fail_alloc_page
      make: *** [Makefile:1199: vmlinux] Error 255
      make: *** Deleting file 'vmlinux'
    
    Link: https://lkml.kernel.org/r/20210708191128.153796-1-mcroce@linux.microsoft.com
    Fixes: f717309 ("mm/page_alloc: make should_fail_alloc_page() static")
    Signed-off-by: Matteo Croce <mcroce@microsoft.com>
    Acked-by: Mel Gorman <mgorman@techsingularity.net>
    Tested-by: John Hubbard <jhubbard@nvidia.com>
    Cc: Michal Hocko <mhocko@kernel.org>
    Cc: David Hildenbrand <david@redhat.com>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Cc: Dan Streetman <ddstreet@ieee.org>
    Cc: Yang Shi <shy828301@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    teknoraver authored and torvalds committed Jul 15, 2021
  16. kasan: fix build by including kernel.h

    The <linux/kasan.h> header relies on _RET_IP_ being defined, and had been
    receiving that definition via inclusion of bug.h which includes kernel.h.
    However, since f39650d ("kernel.h: split out panic and oops helpers")
    that is no longer the case, and we get the following build error when building
    CONFIG_KASAN_HW_TAGS on arm64:
    
      In file included from arch/arm64/mm/kasan_init.c:10:
      include/linux/kasan.h: In function 'kasan_slab_free':
      include/linux/kasan.h:230:39: error: '_RET_IP_' undeclared (first use in this function)
        230 |   return __kasan_slab_free(s, object, _RET_IP_, init);
    
    Fix it by including kernel.h from kasan.h.
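    A minimal userspace sketch of the definition kernel.h supplies (the
    `record_ret_ip()` helper is hypothetical; only the `_RET_IP_` macro
    mirrors the kernel's):

```c
#include <assert.h>

/* kernel.h defines _RET_IP_ as the caller's return address; kasan.h
 * relies on this definition being visible. */
#define _RET_IP_ ((unsigned long)__builtin_return_address(0))

/* A callee can record its caller's address, as kasan_slab_free()
 * passes _RET_IP_ to __kasan_slab_free() for stack attribution. */
__attribute__((noinline))
static unsigned long record_ret_ip(void)
{
    return _RET_IP_;
}
```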
    
    Link: https://lkml.kernel.org/r/20210705072716.2125074-1-elver@google.com
    Fixes: f39650d ("kernel.h: split out panic and oops helpers")
    Signed-off-by: Marco Elver <elver@google.com>
    Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
    Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
    Cc: Alexander Potapenko <glider@google.com>
    Cc: Dmitry Vyukov <dvyukov@google.com>
    Cc: Peter Collingbourne <pcc@google.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
    Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    melver authored and torvalds committed Jul 15, 2021
  17. kasan: add memzero init for unaligned size at DEBUG

    Issue: when SLUB debug is on, hwtag kasan_unpoison() would overwrite the
    redzone of an object with unaligned size.
    
    An additional memzero_explicit() path is added, replacing init by hwtag
    instructions for objects with unaligned size in SLUB debug mode.
    
    The penalty is acceptable since it is incurred only in debug mode, not in
    production builds.  A comment block is added for explanation.
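    The idea can be modeled in userspace C (a sketch under simplifying
    assumptions; `unpoison_and_init()` and the granule handling are
    illustrative, not the kernel's actual kasan code):

```c
#include <assert.h>
#include <string.h>

#define KASAN_GRANULE_SIZE 16ul

/*
 * Hypothetical model: hw-tag init can only zero whole granules, so
 * rounding the size up would clobber the redzone that follows the
 * object. In debug mode, zero the aligned part "by hardware" and the
 * unaligned tail explicitly in software, stopping exactly at `size`.
 */
static void unpoison_and_init(unsigned char *obj, size_t size)
{
    size_t aligned = size & ~(KASAN_GRANULE_SIZE - 1);

    memset(obj, 0, aligned);                  /* models init via hw tags */
    memset(obj + aligned, 0, size - aligned); /* explicit tail zeroing   */
}
```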
    
    Link: https://lkml.kernel.org/r/20210705103229.8505-3-yee.lee@mediatek.com
    Signed-off-by: Yee Lee <yee.lee@mediatek.com>
    Suggested-by: Andrey Konovalov <andreyknvl@gmail.com>
    Suggested-by: Marco Elver <elver@google.com>
    Reviewed-by: Marco Elver <elver@google.com>
    Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
    Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
    Cc: Alexander Potapenko <glider@google.com>
    Cc: Dmitry Vyukov <dvyukov@google.com>
    Cc: Nicholas Tang <nicholas.tang@mediatek.com>
    Cc: Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>
    Cc: Chinwen Chang <chinwen.chang@mediatek.com>
    Cc: Matthew Wilcox <willy@infradead.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    yeeleemtk authored and torvalds committed Jul 15, 2021
  18. mm: move helper to check slub_debug_enabled

    Move the helper to check slub_debug_enabled, so that we can confine the
    use of #ifdef outside slub.c as well.
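    The pattern can be sketched as follows (hypothetical bodies; the real
    helper tests the slub_debug static key rather than returning a constant):

```c
#include <stdbool.h>

/*
 * Sketch: the helper confines the #ifdef to a single definition, so
 * callers simply test slub_debug_enabled() without their own guards.
 */
#ifdef CONFIG_SLUB_DEBUG
static inline bool slub_debug_enabled(void)
{
    return true; /* stand-in for the slub_debug static-key test */
}
#else
static inline bool slub_debug_enabled(void)
{
    return false;
}
#endif
```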
    
    Link: https://lkml.kernel.org/r/20210705103229.8505-2-yee.lee@mediatek.com
    Signed-off-by: Marco Elver <elver@google.com>
    Signed-off-by: Yee Lee <yee.lee@mediatek.com>
    Suggested-by: Matthew Wilcox <willy@infradead.org>
    Cc: Alexander Potapenko <glider@google.com>
    Cc: Andrey Konovalov <andreyknvl@gmail.com>
    Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
    Cc: Chinwen Chang <chinwen.chang@mediatek.com>
    Cc: Dmitry Vyukov <dvyukov@google.com>
    Cc: Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>
    Cc: Nicholas Tang <nicholas.tang@mediatek.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    melver authored and torvalds committed Jul 15, 2021
  19. KVM: selftests: smm_test: Test SMM enter from L2

    Two additional tests are added:
    - SMM triggered from L2 does not corrupt L1 host state.
    - Save/restore during SMM triggered from L2 does not corrupt guest/host
      state.
    
    Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
    Message-Id: <20210628104425.391276-7-vkuznets@redhat.com>
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    vittyvk authored and bonzini committed Jul 15, 2021
  20. KVM: nSVM: Restore nested control upon leaving SMM

    If the VM was migrated while in SMM, no nested state was saved/restored,
    and therefore svm_leave_smm has to load both save and control area
    of the vmcb12. The save area is already loaded from the HSAVE area,
    so now load the control area as well from the vmcb12.
    
    Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
    Message-Id: <20210628104425.391276-6-vkuznets@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    vittyvk authored and bonzini committed Jul 15, 2021
  21. KVM: nSVM: Fix L1 state corruption upon return from SMM

    VMCB split commit 4995a36 ("KVM: SVM: Use a separate vmcb for the
    nested L2 guest") broke return from SMM when we entered there from guest
    (L2) mode. Gen2 WS2016/Hyper-V is known to do this on boot. The problem
    manifests itself like this:
    
      kvm_exit:             reason EXIT_RSM rip 0x7ffbb280 info 0 0
      kvm_emulate_insn:     0:7ffbb280: 0f aa
      kvm_smm_transition:   vcpu 0: leaving SMM, smbase 0x7ffb3000
      kvm_nested_vmrun:     rip: 0x000000007ffbb280 vmcb: 0x0000000008224000
        nrip: 0xffffffffffbbe119 int_ctl: 0x01020000 event_inj: 0x00000000
        npt: on
      kvm_nested_intercepts: cr_read: 0000 cr_write: 0010 excp: 40060002
        intercepts: fd44bfeb 0000217f 00000000
      kvm_entry:            vcpu 0, rip 0xffffffffffbbe119
      kvm_exit:             reason EXIT_NPF rip 0xffffffffffbbe119 info
        200000006 1ab000
      kvm_nested_vmexit:    vcpu 0 reason npf rip 0xffffffffffbbe119 info1
        0x0000000200000006 info2 0x00000000001ab000 intr_info 0x00000000
        error_code 0x00000000
      kvm_page_fault:       address 1ab000 error_code 6
      kvm_nested_vmexit_inject: reason EXIT_NPF info1 200000006 info2 1ab000
        int_info 0 int_info_err 0
      kvm_entry:            vcpu 0, rip 0x7ffbb280
      kvm_exit:             reason EXIT_EXCP_GP rip 0x7ffbb280 info 0 0
      kvm_emulate_insn:     0:7ffbb280: 0f aa
      kvm_inj_exception:    #GP (0x0)
    
    Note: the return to L2 succeeded, but upon first exit to L1 its RIP points
    to the 'RSM' instruction even though we're no longer in SMM.
    
    The problem appears to be that VMCB01 gets irreversibly destroyed during
    SMM execution. Previously, we used to have 'hsave' VMCB where regular
    (pre-SMM) L1's state was saved upon nested_svm_vmexit() but now we just
    switch to VMCB01 from VMCB02.
    
    Pre-split (working) flow looked like:
    - SMM is triggered during L2's execution
    - L2's state is pushed to SMRAM
    - nested_svm_vmexit() restores L1's state from 'hsave'
    - SMM -> RSM
    - enter_svm_guest_mode() switches to L2 but keeps 'hsave' intact so we have
      pre-SMM (and pre L2 VMRUN) L1's state there
    - L2's state is restored from SMRAM
    - upon first exit L1's state is restored from 'hsave'.
    
    This was always broken with regards to svm_get_nested_state()/
    svm_set_nested_state(): 'hsave' was never part of what's being
    saved and restored, so migration happening during SMM triggered from L2 would
    never restore L1's state correctly.
    
    Post-split flow (broken) looks like:
    - SMM is triggered during L2's execution
    - L2's state is pushed to SMRAM
    - nested_svm_vmexit() switches to VMCB01 from VMCB02
    - SMM -> RSM
    - enter_svm_guest_mode() switches from VMCB01 to VMCB02 but pre-SMM VMCB01
      is already lost.
    - L2's state is restored from SMRAM
    - upon first exit L1's state is restored from VMCB01 but it is corrupted
      (it reflects the state during 'RSM' execution).
    
    VMX doesn't have this problem because unlike VMCB, VMCS keeps both guest
    and host state so when we switch back to VMCS02 L1's state is intact there.
    
    To resolve the issue we need to save L1's state somewhere. We could've
    created a third VMCB for SMM but that would require us to modify saved
    state format. L1's architectural HSAVE area (pointed to by MSR_VM_HSAVE_PA)
    seems appropriate: L0 is free to save any (or none) of L1's state there.
    Currently, KVM does 'none'.
    
    Note, for nested state migration to succeed, both source and destination
    hypervisors must have the fix. We, however, don't need to create a new
    flag indicating the fact that HSAVE area is now populated as migration
    during SMM triggered from L2 was always broken.
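    The fixed flow can be modeled with a toy save/restore (all structures
    and function names here are greatly simplified stand-ins for the SVM
    state involved, not KVM's actual code):

```c
#include <assert.h>

/* Simplified stand-in for a VMCB save area. */
struct vmcb_save_area {
    unsigned long rip, rsp, cr3;
};

struct vm {
    struct vmcb_save_area vmcb01; /* L1 state; clobbered while in SMM  */
    struct vmcb_save_area hsave;  /* L1's architectural HSAVE page     */
};

/* On SMM entry from L2: stash pre-SMM L1 state in the HSAVE area. */
static void enter_smm(struct vm *vm)
{
    vm->hsave = vm->vmcb01;
}

/* On RSM back to L2: restore L1 state from HSAVE before re-entering L2,
 * so the first exit to L1 sees uncorrupted state. */
static void leave_smm(struct vm *vm)
{
    vm->vmcb01 = vm->hsave;
}
```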
    
    Fixes: 4995a36 ("KVM: SVM: Use a separate vmcb for the nested L2 guest")
    Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    vittyvk authored and bonzini committed Jul 15, 2021
  22. KVM: nSVM: Introduce svm_copy_vmrun_state()

    Separate the code setting non-VMLOAD-VMSAVE state from
    svm_set_nested_state() into its own function. This is going to be
    re-used from svm_enter_smm()/svm_leave_smm().
    
    Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
    Message-Id: <20210628104425.391276-4-vkuznets@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    vittyvk authored and bonzini committed Jul 15, 2021
  23. KVM: nSVM: Check that VM_HSAVE_PA MSR was set before VMRUN

    APM states that "The address written to the VM_HSAVE_PA MSR, which holds
    the address of the page used to save the host state on a VMRUN, must point
    to a hypervisor-owned page. If this check fails, the WRMSR will fail with
    a #GP(0) exception. Note that a value of 0 is not considered valid for the
    VM_HSAVE_PA MSR and a VMRUN that is attempted while the HSAVE_PA is 0 will
    fail with a #GP(0) exception."
    
    svm_set_msr() already checks that the supplied address is valid, so only
    the check for '0' is missing. Add it to nested_svm_vmrun().
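    A minimal model of the added check (illustrative; the real code injects
    #GP(0) from nested_svm_vmrun() when this condition fails):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Per the APM, a VMRUN attempted while VM_HSAVE_PA is 0 must fail
 * with #GP(0). A false return here models injecting that exception
 * instead of entering the L2 guest.
 */
static bool nested_vmrun_allowed(unsigned long long hsave_pa)
{
    return hsave_pa != 0;
}
```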
    
    Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
    Message-Id: <20210628104425.391276-3-vkuznets@redhat.com>
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    vittyvk authored and bonzini committed Jul 15, 2021
  24. KVM: nSVM: Check the value written to MSR_VM_HSAVE_PA

    APM states that #GP is raised upon write to MSR_VM_HSAVE_PA when
    the supplied address is not page-aligned or is outside of "maximum
    supported physical address for this implementation".
    The page_address_valid() check seems suitable. Also, forcefully page-align
    the address when it is written by the VMM.
    
    Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
    Message-Id: <20210628104425.391276-2-vkuznets@redhat.com>
    Cc: stable@vger.kernel.org
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    [Add comment about behavior for host-provided values. - Paolo]
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    vittyvk authored and bonzini committed Jul 15, 2021
  25. KVM: SVM: Fix sev_pin_memory() error checks in SEV migration utilities

    Use IS_ERR() instead of checking for a NULL pointer when querying for
    sev_pin_memory() failures.  sev_pin_memory() always returns an error code
    cast to a pointer, or a valid pointer; it never returns NULL.
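    The ERR_PTR()/IS_ERR() convention the fix relies on can be modeled in
    userspace (a simplified re-implementation for illustration; lowercase
    names are used to avoid clashing with the kernel macros):

```c
#include <assert.h>

/*
 * The kernel encodes error codes in the top MAX_ERRNO values of the
 * pointer range, so a returned pointer is either valid or an encoded
 * errno -- never NULL. Checking for NULL therefore misses failures.
 */
#define MAX_ERRNO 4095

static void *err_ptr(long error)
{
    return (void *)error;
}

static int is_err(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static long ptr_err(const void *ptr)
{
    return (long)ptr;
}
```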
    
    Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
    Cc: Steve Rutherford <srutherford@google.com>
    Cc: Brijesh Singh <brijesh.singh@amd.com>
    Cc: Ashish Kalra <ashish.kalra@amd.com>
    Fixes: d3d1af8 ("KVM: SVM: Add KVM_SEND_UPDATE_DATA command")
    Fixes: 15fb7de ("KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command")
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-Id: <20210506175826.2166383-3-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    sean-jc authored and bonzini committed Jul 15, 2021