Commits on Dec 15, 2021

  1. powerpc/mm: Convert to default topdown mmap layout

    Select CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and
    remove arch/powerpc/mm/mmap.c
    
    This change reuses the generic framework added by
    commit 67f3977 ("arm64, mm: move generic mmap layout
    functions to mm") without any functional change.
    
    Comparison between powerpc implementation and the generic one:
    - mmap_is_legacy() is identical.
    - arch_mmap_rnd() does exactly the same although it's written
    slightly differently.
    - MIN_GAP and MAX_GAP are identical.
    - mmap_base() does the same but uses STACK_RND_MASK which provides
    the same values as stack_maxrandom_size().
    - arch_pick_mmap_layout() is identical.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    chleroy authored and intel-lab-lkp committed Dec 15, 2021
  2. powerpc/mm: Enable full randomisation of memory mappings

    As most other architectures do, provide randomisation also for
    "legacy" memory mappings, by adding the random factor to
    mm->mmap_base in arch_pick_mmap_layout().
    
    See commit 8b8addf ("x86/mm/32: Enable full randomization on
    i386 and X86_32") for all explanations and benefits of that mmap
    randomisation.
    
    At the moment, slice_find_area_bottomup() doesn't use mm->mmap_base
    but uses the fixed TASK_UNMAPPED_BASE instead.
    Because slice_find_area_bottomup() is used as a fallback by
    slice_find_area_topdown(), it can't use mm->mmap_base
    directly.
    
    Instead of always using TASK_UNMAPPED_BASE as the base address, leave
    it to the caller. When called from slice_find_area_topdown(),
    TASK_UNMAPPED_BASE is used; otherwise mm->mmap_base is used.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    chleroy authored and intel-lab-lkp committed Dec 15, 2021
  3. powerpc/mm: Move get_unmapped_area functions to slice.c

    hugetlb_get_unmapped_area() is now identical to the
    generic version if only RADIX is enabled, so move it
    to slice.c and let it fall back on the generic one
    when the HASH MMU is not compiled in.
    
    Do the same with arch_get_unmapped_area() and
    arch_get_unmapped_area_topdown().
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    chleroy authored and intel-lab-lkp committed Dec 15, 2021
  4. powerpc/mm: Use generic_hugetlb_get_unmapped_area()

    Use the generic version of arch_hugetlb_get_unmapped_area(),
    which is now available at all times.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    chleroy authored and intel-lab-lkp committed Dec 15, 2021
  5. powerpc/mm: Use generic_get_unmapped_area() and call it from arch_get_unmapped_area()
    
    Use the generic version of arch_get_unmapped_area(), which
    is now available at all times, instead of its copy
    radix__arch_get_unmapped_area().
    
    To allow that for PPC64, add arch_get_mmap_base() and
    arch_get_mmap_end() macros.
    
    Instead of setting mm->get_unmapped_area() to either
    arch_get_unmapped_area() or generic_get_unmapped_area(),
    always set it to arch_get_unmapped_area() and call
    generic_get_unmapped_area() from there when radix is enabled.
    
    Do the same with radix__arch_get_unmapped_area_topdown()
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    chleroy authored and intel-lab-lkp committed Dec 15, 2021
  6. powerpc/mm: Remove CONFIG_PPC_MM_SLICES

    CONFIG_PPC_MM_SLICES is always selected by hash book3s/64.
    CONFIG_PPC_MM_SLICES is never selected by other platforms.
    
    Remove it.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
    chleroy authored and intel-lab-lkp committed Dec 15, 2021
  7. powerpc/mm: Make slice specific to book3s/64

    Since commit 555904d ("powerpc/8xx: MM_SLICE is not needed
    anymore") only book3s/64 selects CONFIG_PPC_MM_SLICES.
    
    Move slice.c into mm/book3s64/
    
    Move the necessary definitions into asm/book3s/64/slice.h and
    remove asm/slice.h
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
    chleroy authored and intel-lab-lkp committed Dec 15, 2021
  8. powerpc/mm: Move vma_mmu_pagesize()

    vma_mmu_pagesize() is only required for slices,
    otherwise there is a generic weak version doing the
    exact same thing.
    
    Move it to slice.c
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
    chleroy authored and intel-lab-lkp committed Dec 15, 2021
  9. mm: Add len and flags parameters to arch_get_mmap_end()

    Powerpc needs flags and len to make a decision in arch_get_mmap_end().
    
    So add them as parameters to arch_get_mmap_end().
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will@kernel.org>
    chleroy authored and intel-lab-lkp committed Dec 15, 2021
  10. mm, hugetlbfs: Allow an arch to always use generic versions of get_unmapped_area functions
    
    Unlike most architectures, powerpc can only decide at runtime
    whether it is going to use the generic arch_get_unmapped_area() or not.
    
    Today, powerpc has a copy of the generic arch_get_unmapped_area(),
    because when HAVE_ARCH_UNMAPPED_AREA is selected the generic
    arch_get_unmapped_area() is not available.
    
    Rename it generic_get_unmapped_area() and make it independent of
    HAVE_ARCH_UNMAPPED_AREA.
    
    Do the same for arch_get_unmapped_area_topdown() versus
    HAVE_ARCH_UNMAPPED_AREA_TOPDOWN.
    
    Do the same for hugetlb_get_unmapped_area() versus
    HAVE_ARCH_HUGETLB_UNMAPPED_AREA.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
    chleroy authored and intel-lab-lkp committed Dec 15, 2021
  11. mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
    
    Commit e7142bf ("arm64, mm: make randomization selected by
    generic topdown mmap layout") introduced a default version of
    arch_randomize_brk() provided when
    CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT is selected.
    
    powerpc could select CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
    but needs to provide its own arch_randomize_brk().
    
    In order to allow that, define the generic version of arch_randomize_brk()
    as a __weak symbol.
    
    Cc: Alexandre Ghiti <alex@ghiti.fr>
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    chleroy authored and intel-lab-lkp committed Dec 15, 2021
  12. Merge branch 'topic/ppc-kvm' into next

    Bring in some more KVM commits from our KVM topic branch.
    mpe committed Dec 15, 2021

Commits on Dec 14, 2021

  1. KVM: PPC: Book3S HV P9: Use kvm_arch_vcpu_get_wait() to get rcuwait object
    
    Use kvm_arch_vcpu_get_wait() to get a vCPU's rcuwait object instead of
    using vcpu->wait directly in kvmhv_run_single_vcpu().  Functionally, this
    is a nop as vcpu->arch.waitp is guaranteed to point at vcpu->wait.  But
    that is not obvious at first glance, and a future change coming in via
    the KVM tree, commit 510958e ("KVM: Force PPC to define its own
    rcuwait object"), will hide vcpu->wait from architectures that define
    __KVM_HAVE_ARCH_WQP to prevent generic KVM from attempting to wake a vCPU
    with the wrong rcuwait object.
    
    Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Tested-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20211213174556.3871157-1-seanjc@google.com
    sean-jc authored and mpe committed Dec 14, 2021

Commits on Dec 9, 2021

  1. powerpc/powermac: Add additional missing lockdep_register_key()

    Commit df1f679 ("powerpc/powermac: Add missing
    lockdep_register_key()") fixed a problem that was causing a WARNING.
    
    There are two other places in the same file with the same problem
    originating from commit 9e607f7 ("i2c_powermac: shut up lockdep
    warning").
    
    Add the missing lockdep_register_key() calls.
    
    Fixes: 9e607f7 ("i2c_powermac: shut up lockdep warning")
    Reported-by: Erhard Furtner <erhard_f@mailbox.org>
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Depends-on: df1f679 ("powerpc/powermac: Add missing lockdep_register_key()")
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://bugzilla.kernel.org/show_bug.cgi?id=200055
    Link: https://lore.kernel.org/r/2c7e421874e21b2fb87813d768cf662f630c2ad4.1638984999.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  2. powerpc/fadump: Fix inaccurate CPU state info in vmcore generated with panic
    
    In panic path, fadump is triggered via a panic notifier function.
    Before calling panic notifier functions, smp_send_stop() gets called,
    which stops all CPUs except the panic'ing CPU. Commit 8389b37
    ("powerpc: stop_this_cpu: remove the cpu from the online map.") and
    again commit bab2623 ("powerpc: Offline CPU in stop_this_cpu()")
    started marking CPUs as offline while stopping them. So, if a kernel
    has either of the above commits, a vmcore captured with fadump via the
    panic path would lack processed register data for all CPUs except the
    panic'ing CPU. Sample output of crash-utility with such a vmcore:
    
      # crash vmlinux vmcore
      ...
            KERNEL: vmlinux
          DUMPFILE: vmcore  [PARTIAL DUMP]
              CPUS: 1
              DATE: Wed Nov 10 09:56:34 EST 2021
            UPTIME: 00:00:42
      LOAD AVERAGE: 2.27, 0.69, 0.24
             TASKS: 183
          NODENAME: XXXXXXXXX
           RELEASE: 5.15.0+
           VERSION: #974 SMP Wed Nov 10 04:18:19 CST 2021
           MACHINE: ppc64le  (2500 Mhz)
            MEMORY: 8 GB
             PANIC: "Kernel panic - not syncing: sysrq triggered crash"
               PID: 3394
           COMMAND: "bash"
              TASK: c0000000150a5f80  [THREAD_INFO: c0000000150a5f80]
               CPU: 1
             STATE: TASK_RUNNING (PANIC)
    
      crash> p -x __cpu_online_mask
      __cpu_online_mask = $1 = {
        bits = {0x2, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}
      }
      crash>
      crash>
      crash> p -x __cpu_active_mask
      __cpu_active_mask = $2 = {
        bits = {0xff, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}
      }
      crash>
    
    While this has been the case since fadump was introduced, the issue
    was not identified for two probable reasons:
    
      - In general, the bulk of the vmcores analyzed were from crash
        due to exception.
    
      - The above did change once commit 8341f2f ("sysrq: Use
        panic() to force a crash") started using panic() instead of
        dereferencing a NULL pointer to force a kernel crash. But then
        commit de6e5d3 ("powerpc: smp_send_stop do not offline
        stopped CPUs") stopped marking CPUs as offline, until commit
        bab2623 ("powerpc: Offline CPU in stop_this_cpu()")
        reverted that change.
    
    To ensure post-processing of register data of all other CPUs happens
    as intended, let the panic() function take the crash-friendly path
    (read: crash_smp_send_stop()) with the help of the
    crash_kexec_post_notifiers option. Also, as register data for all CPUs
    is captured by firmware, skip the IPI callbacks here for fadump, to
    avoid any complications in finding the right backtraces.
    
    Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20211207103719.91117-2-hbathini@linux.ibm.com
    hbathini authored and mpe committed Dec 9, 2021
  3. powerpc: handle kdump appropriately with crash_kexec_post_notifiers option
    
    Kdump can be triggered after panic_notifers since commit f06e515
    ("kernel/panic.c: add "crash_kexec_post_notifiers" option for kdump
    after panic_notifers") introduced crash_kexec_post_notifiers option.
    But using this option would mean smp_send_stop(), which marks all other
    CPUs as offline, gets called before kdump is triggered. As a result,
    kdump routines fail to save other CPUs' registers. To fix this, the
    kdump-friendly crash_smp_send_stop() function was introduced with
    kernel commit 0ee5941 ("x86/panic: replace smp_send_stop() with kdump
    friendly version in panic path"). Override this kdump-friendly weak
    function to handle the crash_kexec_post_notifiers option appropriately
    on powerpc.
    
    Reported-by: kernel test robot <lkp@intel.com>
    Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
    [Fixed signature of crash_stop_this_cpu() - reported by lkp@intel.com]
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20211207103719.91117-1-hbathini@linux.ibm.com
    hbathini authored and mpe committed Dec 9, 2021
  4. selftests/powerpc/spectre_v2: Return skip code when miss_percent is high

    A mismatch between the reported and actual mitigation is not restricted
    to the Vulnerable case. The guest might also report the mitigation as
    "Software count cache flush" while the host still mitigates with the
    branch cache disabled.
    
    So, instead of skipping depending on the detected mitigation, simply skip
    whenever the detected miss_percent is the expected one for a fully
    mitigated system, that is, above 95%.
    
    Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20211207130557.40566-1-cascardo@canonical.com
    Thadeu Lima de Souza Cascardo authored and mpe committed Dec 9, 2021
  5. powerpc/cell: Fix clang -Wimplicit-fallthrough warning

    Clang warns:
    
    arch/powerpc/platforms/cell/pervasive.c:81:2: error: unannotated fall-through between switch labels
            case SRR1_WAKEEE:
            ^
    arch/powerpc/platforms/cell/pervasive.c:81:2: note: insert 'break;' to avoid fall-through
            case SRR1_WAKEEE:
            ^
            break;
    1 error generated.
    
    Clang is more pedantic than GCC, which does not warn when falling
    through to a case that is just break or return. Clang's version is more
    in line with the kernel's own stance in deprecated.rst. Add the missing
    break to silence the warning.
    
    Fixes: 6e83985 ("powerpc/cbe: Do not process external or decremeter interrupts from sreset")
    Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
    Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
    Reviewed-by: Nathan Chancellor <nathan@kernel.org>
    Reviewed-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20211207110228.698956-1-anders.roxell@linaro.org
    roxell authored and mpe committed Dec 9, 2021
  6. macintosh: Add const to of_device_id

    struct of_device_id should normally be const.
    
    Signed-off-by: Xiang wangx <wangxiang@cdjrlc.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20211205130925.28389-1-wangxiang@cdjrlc.com
    Xiang wangx authored and mpe committed Dec 9, 2021
  7. powerpc/inst: Optimise copy_inst_from_kernel_nofault()

    copy_inst_from_kernel_nofault() uses copy_from_kernel_nofault() to
    copy one or two 32-bit words. This means calling an out-of-line
    function which itself calls back copy_from_kernel_nofault_allowed(),
    then performs a generic copy with loops.
    
    Rewrite copy_inst_from_kernel_nofault() to do everything at a
    single place and use __get_kernel_nofault() directly to perform
    single accesses without loops.
    
    Although the generic function uses pagefault_disable(), it is not
    required on powerpc because do_page_fault() bails out earlier when a
    kernel mode fault happens on a kernel address.
    
    As the function has now become very small, inline it.
    
    With this change, on an 8xx the time spent in the loop in
    ftrace_replace_code() is reduced by 23% at function tracer activation
    and 27% at nop tracer activation.
    The overall time to activate function tracer (measured with shell
    command 'time') is 570ms before the patch and 470ms after the patch.
    
    Even vmlinux size is reduced (by 152 instructions).
    
    Before the patch:
    
    	00000018 <copy_inst_from_kernel_nofault>:
    	  18:	94 21 ff e0 	stwu    r1,-32(r1)
    	  1c:	7c 08 02 a6 	mflr    r0
    	  20:	38 a0 00 04 	li      r5,4
    	  24:	93 e1 00 1c 	stw     r31,28(r1)
    	  28:	7c 7f 1b 78 	mr      r31,r3
    	  2c:	38 61 00 08 	addi    r3,r1,8
    	  30:	90 01 00 24 	stw     r0,36(r1)
    	  34:	48 00 00 01 	bl      34 <copy_inst_from_kernel_nofault+0x1c>
    				34: R_PPC_REL24	copy_from_kernel_nofault
    	  38:	2c 03 00 00 	cmpwi   r3,0
    	  3c:	40 82 00 0c 	bne     48 <copy_inst_from_kernel_nofault+0x30>
    	  40:	81 21 00 08 	lwz     r9,8(r1)
    	  44:	91 3f 00 00 	stw     r9,0(r31)
    	  48:	80 01 00 24 	lwz     r0,36(r1)
    	  4c:	83 e1 00 1c 	lwz     r31,28(r1)
    	  50:	38 21 00 20 	addi    r1,r1,32
    	  54:	7c 08 03 a6 	mtlr    r0
    	  58:	4e 80 00 20 	blr
    
    After the patch (before inlining):
    
    	00000018 <copy_inst_from_kernel_nofault>:
    	  18:	3d 20 b0 00 	lis     r9,-20480
    	  1c:	7c 04 48 40 	cmplw   r4,r9
    	  20:	7c 69 1b 78 	mr      r9,r3
    	  24:	41 80 00 14 	blt     38 <copy_inst_from_kernel_nofault+0x20>
    	  28:	81 44 00 00 	lwz     r10,0(r4)
    	  2c:	38 60 00 00 	li      r3,0
    	  30:	91 49 00 00 	stw     r10,0(r9)
    	  34:	4e 80 00 20 	blr
    
    	  38:	38 60 ff de 	li      r3,-34
    	  3c:	4e 80 00 20 	blr
    	  40:	38 60 ff f2 	li      r3,-14
    	  44:	4e 80 00 20 	blr
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    [mpe: Add clang workaround, with version check as suggested by Nathan]
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/0d5b12183d5176dd702d29ad94c39c384e51c78f.1638208156.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  8. powerpc/inst: Move ppc_inst_t definition in asm/reg.h

    Because of a circular inclusion with asm/hw_breakpoint.h, we
    need to move the definition of ppc_inst_t into asm/reg.h
    so that asm/hw_breakpoint.h gets it without including
    asm/inst.h
    
    Also remove asm/inst.h from asm/uprobes.h as it's not
    needed anymore.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/4b79f1491118af96b1ac0735e74aeca02ea4c04e.1638208156.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  9. powerpc/inst: Define ppc_inst_t as u32 on PPC32

    Unlike the PPC64 ABI, the PPC32 ABI uses the stack to pass a parameter
    defined as a struct, even when the struct has a single simple element.
    
    To avoid that, define ppc_inst_t as u32 on PPC32.
    
    Keep it as 'struct ppc_inst' when __CHECKER__ is defined so that
    sparse can perform type checking.
    
    Also revert commit 511eea5 ("powerpc/kprobes: Fix Oops by passing
    ppc_inst as a pointer to emulate_step() on ppc32") as now the
    instruction to be emulated is passed as a register to emulate_step().
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/c6d0c46f598f76ad0b0a88bc0d84773bd921b17c.1638208156.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  10. powerpc/inst: Define ppc_inst_t

    In order to stop using 'struct ppc_inst' on PPC32,
    define a ppc_inst_t typedef.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/fe5baa2c66fea9db05a8b300b3e8d2880a42596c.1638208156.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  11. powerpc/inst: Refactor ___get_user_instr()

    PPC64 version of ___get_user_instr() can be used for PPC32 as well,
    by simply disabling the suffix part with IS_ENABLED(CONFIG_PPC64).
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/1f0ede830ccb33a659119a55cb590820c27004db.1638208156.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  12. powerpc/32s: Allocate one 256k IBAT instead of two consecutive 128k IBATs
    
    Today we have the following IBATs allocated:
    
    	---[ Instruction Block Address Translation ]---
    	0: 0xc0000000-0xc03fffff 0x00000000         4M Kernel   x     m
    	1: 0xc0400000-0xc05fffff 0x00400000         2M Kernel   x     m
    	2: 0xc0600000-0xc06fffff 0x00600000         1M Kernel   x     m
    	3: 0xc0700000-0xc077ffff 0x00700000       512K Kernel   x     m
    	4: 0xc0780000-0xc079ffff 0x00780000       128K Kernel   x     m
    	5: 0xc07a0000-0xc07bffff 0x007a0000       128K Kernel   x     m
    	6:         -
    	7:         -
    
    The two 128K should be a single 256K instead.
    
    When _etext is not aligned to 128Kbytes, the system will allocate
    all necessary BATs to the lower 128Kbytes boundary, then allocate
    an additional 128Kbytes BAT for the remaining block.
    
    Instead, align the top to 128Kbytes so that the function directly
    allocates a 256Kbytes last block:
    
    	---[ Instruction Block Address Translation ]---
    	0: 0xc0000000-0xc03fffff 0x00000000         4M Kernel   x     m
    	1: 0xc0400000-0xc05fffff 0x00400000         2M Kernel   x     m
    	2: 0xc0600000-0xc06fffff 0x00600000         1M Kernel   x     m
    	3: 0xc0700000-0xc077ffff 0x00700000       512K Kernel   x     m
    	4: 0xc0780000-0xc07bffff 0x00780000       256K Kernel   x     m
    	5:         -
    	6:         -
    	7:         -
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/ab58b296832b0ec650e2203200e060adbcb2677d.1637930421.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  13. powerpc: Remove CONFIG_PPC_HAVE_KUAP and CONFIG_PPC_HAVE_KUEP

    All platforms now have KUAP and KUEP so remove CONFIG_PPC_HAVE_KUAP
    and CONFIG_PPC_HAVE_KUEP.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/a3c007ad0951965199e6ab2ef1035966bc66e771.1634627931.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  14. powerpc/kuap: Wire-up KUAP on book3e/64

    This adds KUAP support to book3e/64.
    This is done by reading the content of SPRN_MAS1 and checking
    the TID at the time user pgtable is loaded.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/e2c2c9375afd4bbc06aa904d0103a5f5102a2b1a.1634627931.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  15. powerpc/kuap: Wire-up KUAP on 85xx in 32 bits mode.

    This adds KUAP support to 85xx in 32-bit mode.
    This is done by reading the content of SPRN_MAS1 and checking
    the TID at the time user pgtable is loaded.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/f8696f8980ca1532ada3a2f0e0a03e756269c7fe.1634627931.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  16. powerpc/kuap: Wire-up KUAP on 40x

    This adds KUAP support to 40x. This is done by checking
    the content of SPRN_PID at the time user pgtable is loaded.
    
    40x doesn't have KUEP, but KUAP implies KUEP because when the
    PID doesn't match the page's PID, the page cannot be read nor
    executed.
    
    So KUEP is now automatically selected when KUAP is selected and
    disabled when KUAP is disabled.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/aaefa91897ddc42ac11019dc0e1d1a525bd08e90.1634627931.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  17. powerpc/kuap: Wire-up KUAP on 44x

    This adds KUAP support to 44x. This is done by checking
    the content of SPRN_PID at the time it is read and written
    into SPRN_MMUCR.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/7d6c3f1978a26feada74b084f651e8cf1e3b3a47.1634627931.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  18. powerpc: Add KUAP support for BOOKE and 40x

    On booke/40x we don't have segments like book3s/32.
    On booke/40x we don't have access protection groups like 8xx.
    
    Use the PID register to provide user access protection.
    Kernel address space can be accessed with any PID.
    User address space has to be accessed with the PID of the user.
    The user PID is never zero.
    
    Every time the kernel is entered, set the PID register to 0 and
    restore the PID register when returning to user.
    
    Every time the kernel needs to access user data, the PID is restored
    for the access.
    
    In TLB miss handlers, check the PID and bail out to data storage
    exception when PID is 0 and accessed address is in user space.
    
    Note that this also forbids execution of user text by the kernel except
    when user access is unlocked. But this shouldn't be a problem,
    as the kernel is never supposed to run user text.
    
    This patch prepares the infrastructure but the real activation of KUAP
    is done by following patches for each processor type one by one.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/5d65576a8e31e9480415785a180c92dd4e72306d.1634627931.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  19. powerpc/kuap: Make PPC_KUAP_DEBUG depend on PPC_KUAP only

    PPC_KUAP_DEBUG is supported by all platforms implementing PPC_KUAP;
    it doesn't depend on Radix on book3s/64.
    
    This will avoid adding one more dependency when implementing
    KUAP on book3e/64.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/a5ff6228a36e51783b83d8c10d058db76e450f63.1634627931.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  20. powerpc/kuap: Prepare for supporting KUAP on BOOK3E/64

    Also call kuap_lock() and kuap_save_and_lock() from
    interrupt functions when CONFIG_PPC64 is set.
    
    For book3s/64 we keep them empty as it is done in assembly.
    
    Also do the locked assert when switching task unless it is
    book3s/64.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/1cbf94e26e6d6e2e028fd687588a7e6622d454a6.1634627931.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  21. powerpc/config: Add CONFIG_BOOKE_OR_40x

    We have many functionalities common to 40x and BOOKE, which leads to
    many places with #if defined(CONFIG_BOOKE) || defined(CONFIG_40x).
    
    We are going to add a few more with KUAP for booke/40x, so create
    a new symbol which is defined when either BOOKE or 40x is defined.
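    A symbol like this would presumably take a shape along these lines in
    Kconfig (an illustrative fragment, not the exact hunk from the patch):

```
config BOOKE_OR_40x
	def_bool y
	depends on BOOKE || 40x
```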
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/9a3dbd60924cb25c9f944d3d8205ac5a0d15e229.1634627931.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021
  22. powerpc/nohash: Move setup_kuap out of 8xx.c

    In order to reuse it on booke/4xx, move the KUAP
    setup routine out of 8xx.c.
    
    Make it usable on SMP by removing the __init tag,
    as it is called for each CPU.
    
    And use __prevent_user_access() instead of hard coding
    the initial lock.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/ae35eec3426509efc2b8ae69586c822e2fe2642a.1634627931.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Dec 9, 2021