Commits on Apr 16, 2021

  1. powerpc/kexec_file: use current CPU info while setting up FDT

    kexec_file_load uses initial_boot_params when setting up the
    device-tree for the kernel to be loaded. Though initial_boot_params
    holds info about the CPUs at the time of boot, it doesn't account
    for hot-added CPUs.
    
    So, kexec'ing with the kexec_file_load syscall would leave the
    kexec'ed kernel with inaccurate CPU info. Also, if the kdump kernel
    is loaded with the kexec_file_load syscall and the system crashes on
    a hot-added CPU, the capture kernel hangs, failing to identify the
    boot CPU.
    
     Kernel panic - not syncing: sysrq triggered crash
     CPU: 24 PID: 6065 Comm: echo Kdump: loaded Not tainted 5.12.0-rc5upstream torvalds#54
     Call Trace:
     [c0000000e590fac0] [c0000000007b2400] dump_stack+0xc4/0x114 (unreliable)
     [c0000000e590fb00] [c000000000145290] panic+0x16c/0x41c
     [c0000000e590fba0] [c0000000008892e0] sysrq_handle_crash+0x30/0x40
     [c0000000e590fc00] [c000000000889cdc] __handle_sysrq+0xcc/0x1f0
     [c0000000e590fca0] [c00000000088a538] write_sysrq_trigger+0xd8/0x178
     [c0000000e590fce0] [c0000000005e9b7c] proc_reg_write+0x10c/0x1b0
     [c0000000e590fd10] [c0000000004f26d0] vfs_write+0xf0/0x330
     [c0000000e590fd60] [c0000000004f2aec] ksys_write+0x7c/0x140
     [c0000000e590fdb0] [c000000000031ee0] system_call_exception+0x150/0x290
     [c0000000e590fe10] [c00000000000ca5c] system_call_common+0xec/0x278
     --- interrupt: c00 at 0x7fff905b9664
     NIP:  00007fff905b9664 LR: 00007fff905320c4 CTR: 0000000000000000
     REGS: c0000000e590fe80 TRAP: 0c00   Not tainted  (5.12.0-rc5upstream)
     MSR:  800000000280f033 <SF,VEC,VSX,EE,PR,FP,ME,IR,DR,RI,LE>  CR: 28000242
           XER: 00000000
     IRQMASK: 0
     GPR00: 0000000000000004 00007ffff5fedf30 00007fff906a7300 0000000000000001
     GPR04: 000001002a7355b0 0000000000000002 0000000000000001 00007ffff5fef616
     GPR08: 0000000000000001 0000000000000000 0000000000000000 0000000000000000
     GPR12: 0000000000000000 00007fff9073a160 0000000000000000 0000000000000000
     GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
     GPR20: 0000000000000000 00007fff906a4ee0 0000000000000002 0000000000000001
     GPR24: 00007fff906a0898 0000000000000000 0000000000000002 000001002a7355b0
     GPR28: 0000000000000002 00007fff906a1790 000001002a7355b0 0000000000000002
     NIP [00007fff905b9664] 0x7fff905b9664
     LR [00007fff905320c4] 0x7fff905320c4
     --- interrupt: c00
    
    To avoid this, extract the current CPU info from the of_root device
    node and use it when setting up the FDT in the kexec_file_load case.
    
    Fixes: 6ecd016 ("powerpc/kexec_file: Add appropriate regions for memory reserve map")
    
    Signed-off-by: Sourabh Jain <sourabhjain@linux.ibm.com>
    sourabhjains authored and intel-lab-lkp committed Apr 16, 2021

Commits on Apr 14, 2021

  1. powerpc/mm/radix: Make radix__change_memory_range() static

    The lkp bot pointed out that with W=1 we get:
    
      arch/powerpc/mm/book3s64/radix_pgtable.c:183:6: error: no previous
      prototype for 'radix__change_memory_range'
    
    Which is really saying that it could be static, make it so.
    
    Reported-by: kernel test robot <lkp@intel.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    mpe committed Apr 14, 2021
  2. powerpc/vdso: Add support for time namespaces

    This patch adds the necessary glue to provide time namespaces.
    
    Things are mainly copied from ARM64.
    
    __arch_get_timens_vdso_data() calculates timens vdso data position
    based on the vdso data position, knowing it is the next page in vvar.
    This avoids having to redo the mflr/bcl/mflr/mtlr dance to locate
    the page relative to running code position.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> # vDSO parts
    Acked-by: Andrei Vagin <avagin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/1a15495f80ec19a87b16cf874dbf7c3fa5ec40fe.1617209142.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Apr 14, 2021
  3. powerpc/vdso: Separate vvar vma from vdso

    Since commit 511157a ("powerpc/vdso: Move vdso datapage up front")
    the VVAR page is in front of the VDSO area. As a result it breaks
    CRIU (Checkpoint Restore In Userspace) [1], where CRIU expects
    "[vdso]" from /proc/../maps to point at the ELF/vdso image, rather
    than at the VVAR data page. Laurent made a patch to keep CRIU working
    (by reading the aux vector). But I think it still makes sense to
    separate the two mappings into different VMAs. It will also make
    ppc64 less "special" for userspace and, as a side-bonus, make the
    VVAR page un-writable by debuggers (which previously would COW the
    page, which can be unexpected).
    
    I opportunistically Cc stable on it: I understand that usually such
    stuff isn't stable material, but it will allow us in CRIU to have
    one less workaround that is needed just for one release (v5.11) on
    one platform (ppc64), which we would otherwise have to maintain.
    I wouldn't go as far as to say that commit 511157a is an ABI
    regression, as no other userspace got broken, but I'd really
    appreciate it if it gets backported to v5.11 after v5.12 is
    released, so as not to complicate the already non-simple CRIU-vdso
    code. Thanks!
    
    [1]: checkpoint-restore/criu#1417
    
    Cc: stable@vger.kernel.org # v5.11
    Signed-off-by: Dmitry Safonov <dima@arista.com>
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> # vDSO parts.
    Acked-by: Andrei Vagin <avagin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/f401eb1ebc0bfc4d8f0e10dc8e525fd409eb68e2.1617209142.git.christophe.leroy@csgroup.eu
    0x7f454c46 authored and mpe committed Apr 14, 2021
  4. lib/vdso: Add vdso_data pointer as input to __arch_get_timens_vdso_data()
    
    For the same reason as commit e876f0b ("lib/vdso: Allow
    architectures to provide the vdso data pointer"), powerpc wants to
    avoid calculation of relative position to code.
    
    As timens_vdso_data is in the page following vdso_data, provide the
    vdso_data pointer to __arch_get_timens_vdso_data() in order to ease
    the calculation on powerpc in the following patches.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
    Acked-by: Andrei Vagin <avagin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/539c4204b1baa77c55f758904a1ea239abbc7a5c.1617209142.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Apr 14, 2021
  5. lib/vdso: Mark do_hres_timens() and do_coarse_timens() __always_inline()

    In the same spirit as commit c966533 ("lib/vdso: Mark do_hres()
    and do_coarse() as __always_inline"), mark do_hres_timens() and
    do_coarse_timens() __always_inline.
    
    The measurements below are on a non-timens process, i.e. on the fastest path.
    
    On powerpc32, without the patch:
    
    clock-gettime-monotonic-raw:    vdso: 1155 nsec/call
    clock-gettime-monotonic-coarse:    vdso: 813 nsec/call
    clock-gettime-monotonic:    vdso: 1076 nsec/call
    
    With the patch:
    
    clock-gettime-monotonic-raw:    vdso: 1100 nsec/call
    clock-gettime-monotonic-coarse:    vdso: 667 nsec/call
    clock-gettime-monotonic:    vdso: 1025 nsec/call
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/90dcf45ebadfd5a07f24241551c62f619d1cb930.1617209142.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Apr 14, 2021
  6. powerpc: move norestart trap flag to bit 0

    Compact the trap flags down to use the low 4 bits of regs.trap.
    
    A few 64e interrupt trap numbers set bit 4. Although they tended to
    be trivial, so it wasn't a real problem [*], it is not the right
    thing to do, and it is confusing.
    
    [*] E.g., the 0x310 hypercall goes to unknown_exception, which
        prints regs->trap directly, so 0x310 will appear fine, and only
        the syscall interrupt tests norestart, so it won't be confused
        by 0x310.
    
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210316104206.407354-12-npiggin@gmail.com
    npiggin authored and mpe committed Apr 14, 2021
  7. powerpc: remove partial register save logic

    All subarchitectures always save all GPRs to pt_regs interrupt frames
    now. Remove FULL_REGS and associated bits.
    
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210316104206.407354-11-npiggin@gmail.com
    npiggin authored and mpe committed Apr 14, 2021
  8. powerpc: clean up do_page_fault

    search_exception_tables + __bad_page_fault can be substituted with
    bad_page_fault, do_page_fault no longer needs to return a value
    to asm for any sub-architecture, and __bad_page_fault can be static.
    
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210316104206.407354-10-npiggin@gmail.com
    npiggin authored and mpe committed Apr 14, 2021
  9. powerpc/64e/interrupt: handle bad_page_fault in C

    With non-volatile registers saved on interrupt, bad_page_fault
    can now be called by do_page_fault.
    
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210316104206.407354-9-npiggin@gmail.com
    npiggin authored and mpe committed Apr 14, 2021
  10. powerpc/64e/interrupt: Use new interrupt context tracking scheme

    With the new interrupt exit code, context tracking can be managed
    more precisely, so remove the last of the 64e workarounds and switch
    to the new context tracking code already used by 64s.
    
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210316104206.407354-8-npiggin@gmail.com
    npiggin authored and mpe committed Apr 14, 2021
  11. powerpc/64e/interrupt: reconcile irq soft-mask state in C

    Use existing 64s interrupt entry wrapper code to reconcile irqs in C.
    
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210316104206.407354-7-npiggin@gmail.com
    npiggin authored and mpe committed Apr 14, 2021
  12. powerpc/64e/interrupt: NMI save irq soft-mask state in C

    64e non-maskable interrupts save the state of the irq soft-mask in
    asm. This can be done in C in interrupt wrappers as 64s does.
    
    I haven't been able to test this with qemu because it doesn't seem
    to cause FSL bookE WDT interrupts.
    
    This makes WatchdogException an NMI interrupt, which affects 32-bit
    as well (okay, or create a new handler?)
    
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210316104206.407354-6-npiggin@gmail.com
    npiggin authored and mpe committed Apr 14, 2021
  13. powerpc/64e/interrupt: use new interrupt return

    Update the new C and asm interrupt return code to account for 64e
    specifics, switch over to use it.
    
    The now-unused old ret_from_except code, that was moved to 64e after the
    64s conversion, is removed.
    
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210316104206.407354-5-npiggin@gmail.com
    npiggin authored and mpe committed Apr 14, 2021
  14. powerpc/interrupt: update common interrupt code for

    This makes adjustments to 64-bit asm and common C interrupt return
    code to be usable by the 64e subarchitecture.
    
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210316104206.407354-4-npiggin@gmail.com
    npiggin authored and mpe committed Apr 14, 2021
  15. powerpc/64e/interrupt: always save nvgprs on interrupt

    In order to use the C interrupt return, nvgprs must always be saved.
    
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210316104206.407354-3-npiggin@gmail.com
    npiggin authored and mpe committed Apr 14, 2021
  16. powerpc/syscall: switch user_exit_irqoff and trace_hardirqs_off order

    user_exit_irqoff() -> __context_tracking_exit -> vtime_user_exit
    warns in __seqprop_assert due to lockdep thinking preemption is enabled
    because trace_hardirqs_off() has not yet been called.
    
    Switch the order of these two calls, which matches their ordering in
    interrupt_enter_prepare.
    
    Fixes: 5f0b6ac ("powerpc/64/syscall: Reconcile interrupts")
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210316104206.407354-2-npiggin@gmail.com
    npiggin authored and mpe committed Apr 14, 2021
  17. powerpc/perf: Infrastructure to support checking of attr.config*

    Introduce code to support checking attr.config* for values which are
    reserved for a given platform. Performance Monitoring Unit (PMU)
    configuration registers have fields that are reserved, and some
    specific values for bit fields are reserved. For example,
    MMCRA[61:62] is Random Sampling Mode (SM), and the value 0b11 for
    this field is reserved.
    
    Writing non-zero or invalid values in these fields will have
    unpredictable behaviour.
    
    This patch adds a generic callback function "check_attr_config" in
    "struct power_pmu", to be called in event_init to check the
    attr.config* values for a given platform.
    
    Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210408074504.248211-1-maddy@linux.ibm.com
    maddy-kerneldev authored and mpe committed Apr 14, 2021
  18. powerpc/fadump: make symbol 'rtas_fadump_set_regval' static

    Fix sparse warnings:
    
    arch/powerpc/platforms/pseries/rtas-fadump.c:250:6: warning:
     symbol 'rtas_fadump_set_regval' was not declared. Should it be static?
    
    Signed-off-by: Pu Lehui <pulehui@huawei.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210408062012.85973-1-pulehui@huawei.com
    Pu Lehui authored and mpe committed Apr 14, 2021
  19. powerpc/mem: Use kmap_local_page() in flushing functions

    Flushing functions don't rely on preemption being disabled, so
    use kmap_local_page() instead of kmap_atomic().
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/b6a880ea0ec7886b51edbb4979c188be549231c0.1617895813.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Apr 14, 2021
  20. powerpc/mem: Inline flush_dcache_page()

    flush_dcache_page() is only a few lines, it is worth
    inlining.
    
    ia64, csky, mips, openrisc and riscv have a similar
    flush_dcache_page() and inline it.
    
    On pmac32_defconfig, we get a small size reduction.
    On ppc64_defconfig, we get a very small size increase.
    
    In both case that's in the noise (less than 0.1%).
    
    text		data	bss	dec		hex	filename
    18991155	5934744	1497624	26423523	19330e3	vmlinux64.before
    18994829	5936732	1497624	26429185	1934701	vmlinux64.after
    9150963		2467502	 184548	11803013	 b41985	vmlinux32.before
    9149689		2467302	 184548	11801539	 b413c3	vmlinux32.after
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/21c417488b70b7629dae316539fb7bb8bdef4fdd.1617895813.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Apr 14, 2021
  21. powerpc/mem: Help GCC realise __flush_dcache_icache() flushes single pages
    
    'And' the given page address with PAGE_MASK to help GCC.
    
    With the patch:
    
    	00000024 <__flush_dcache_icache>:
    	  24:	54 63 00 26 	rlwinm  r3,r3,0,0,19
    	  28:	39 40 00 40 	li      r10,64
    	  2c:	7c 69 1b 78 	mr      r9,r3
    	  30:	7d 49 03 a6 	mtctr   r10
    	  34:	7c 00 48 6c 	dcbst   0,r9
    	  38:	39 29 00 20 	addi    r9,r9,32
    	  3c:	7c 00 48 6c 	dcbst   0,r9
    	  40:	39 29 00 20 	addi    r9,r9,32
    	  44:	42 00 ff f0 	bdnz    34 <__flush_dcache_icache+0x10>
    	  48:	7c 00 04 ac 	hwsync
    	  4c:	39 20 00 40 	li      r9,64
    	  50:	7d 29 03 a6 	mtctr   r9
    	  54:	7c 00 1f ac 	icbi    0,r3
    	  58:	38 63 00 20 	addi    r3,r3,32
    	  5c:	7c 00 1f ac 	icbi    0,r3
    	  60:	38 63 00 20 	addi    r3,r3,32
    	  64:	42 00 ff f0 	bdnz    54 <__flush_dcache_icache+0x30>
    	  68:	7c 00 04 ac 	hwsync
    	  6c:	4c 00 01 2c 	isync
    	  70:	4e 80 00 20 	blr
    
    Without the patch:
    
    	00000024 <__flush_dcache_icache>:
    	  24:	54 6a 00 34 	rlwinm  r10,r3,0,0,26
    	  28:	39 23 10 1f 	addi    r9,r3,4127
    	  2c:	7d 2a 48 50 	subf    r9,r10,r9
    	  30:	55 29 d9 7f 	rlwinm. r9,r9,27,5,31
    	  34:	41 82 00 94 	beq     c8 <__flush_dcache_icache+0xa4>
    	  38:	71 28 00 01 	andi.   r8,r9,1
    	  3c:	38 c9 ff ff 	addi    r6,r9,-1
    	  40:	7d 48 53 78 	mr      r8,r10
    	  44:	7d 27 4b 78 	mr      r7,r9
    	  48:	40 82 00 6c 	bne     b4 <__flush_dcache_icache+0x90>
    	  4c:	54 e7 f8 7e 	rlwinm  r7,r7,31,1,31
    	  50:	7c e9 03 a6 	mtctr   r7
    	  54:	7c 00 40 6c 	dcbst   0,r8
    	  58:	39 08 00 20 	addi    r8,r8,32
    	  5c:	7c 00 40 6c 	dcbst   0,r8
    	  60:	39 08 00 20 	addi    r8,r8,32
    	  64:	42 00 ff f0 	bdnz    54 <__flush_dcache_icache+0x30>
    	  68:	7c 00 04 ac 	hwsync
    	  6c:	71 28 00 01 	andi.   r8,r9,1
    	  70:	39 09 ff ff 	addi    r8,r9,-1
    	  74:	40 82 00 2c 	bne     a0 <__flush_dcache_icache+0x7c>
    	  78:	55 29 f8 7e 	rlwinm  r9,r9,31,1,31
    	  7c:	7d 29 03 a6 	mtctr   r9
    	  80:	7c 00 57 ac 	icbi    0,r10
    	  84:	39 4a 00 20 	addi    r10,r10,32
    	  88:	7c 00 57 ac 	icbi    0,r10
    	  8c:	39 4a 00 20 	addi    r10,r10,32
    	  90:	42 00 ff f0 	bdnz    80 <__flush_dcache_icache+0x5c>
    	  94:	7c 00 04 ac 	hwsync
    	  98:	4c 00 01 2c 	isync
    	  9c:	4e 80 00 20 	blr
    	  a0:	7c 00 57 ac 	icbi    0,r10
    	  a4:	2c 08 00 00 	cmpwi   r8,0
    	  a8:	39 4a 00 20 	addi    r10,r10,32
    	  ac:	40 82 ff cc 	bne     78 <__flush_dcache_icache+0x54>
    	  b0:	4b ff ff e4 	b       94 <__flush_dcache_icache+0x70>
    	  b4:	7c 00 50 6c 	dcbst   0,r10
    	  b8:	2c 06 00 00 	cmpwi   r6,0
    	  bc:	39 0a 00 20 	addi    r8,r10,32
    	  c0:	40 82 ff 8c 	bne     4c <__flush_dcache_icache+0x28>
    	  c4:	4b ff ff a4 	b       68 <__flush_dcache_icache+0x44>
    	  c8:	7c 00 04 ac 	hwsync
    	  cc:	7c 00 04 ac 	hwsync
    	  d0:	4c 00 01 2c 	isync
    	  d4:	4e 80 00 20 	blr
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/23030822ea5cd0a122948b10226abe56602dc027.1617895813.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Apr 14, 2021
  22. powerpc/mem: flush_dcache_icache_phys() is for HIGHMEM pages only

    __flush_dcache_icache() is usable for non HIGHMEM pages on
    every platform.
    
    It is only for HIGHMEM pages that BOOKE needs kmap() and
    BOOK3S needs flush_dcache_icache_phys().
    
    So make flush_dcache_icache_phys() dependent on CONFIG_HIGHMEM and
    call it only when it is a HIGHMEM page.
    
    We could make flush_dcache_icache_phys() available at all time,
    but as it is declared NOKPROBE_SYMBOL(), GCC doesn't optimise
    it out when it is not used.
    
    So define a stub for !CONFIG_HIGHMEM in order to remove the #ifdef in
    flush_dcache_icache_page() and use IS_ENABLED() instead.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/79ed5d7914f497cd5fcd681ca2f4d50a91719455.1617895813.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Apr 14, 2021
  23. powerpc/mem: Optimise flush_dcache_icache_hugepage()

    flush_dcache_icache_hugepage() is a static function, with
    only one caller. That caller calls it when PageCompound() is true,
    so bugging on !PageCompound() is useless if we can trust the
    compiler a little. Remove the BUG_ON(!PageCompound()).
    
    The number of elements of a page won't change over time, but
    GCC doesn't know about it, so it gets the value at every iteration.
    
    To avoid that, call compound_nr() outside the loop and save it in
    a local variable.
    
    Whether the page is a HIGHMEM page or not doesn't change over time.
    
    But GCC doesn't know it so it does the test on every iteration.
    
    Do the test outside the loop.
    
    When the page is not a HIGHMEM page, page_address() will fallback on
    lowmem_page_address(), so call lowmem_page_address() directly and
    don't suffer the call to page_address() on every iteration.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/ab03712b70105fccfceef095aa03007de9295a40.1617895813.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Apr 14, 2021
  24. powerpc/mem: Call flush_coherent_icache() at higher level

    flush_coherent_icache() doesn't need the address anymore,
    so it can be called immediately when entering the public
    functions and doesn't need to be disseminated among
    lower level functions.
    
    And use page_to_phys() instead of open coding the calculation
    of phys address to call flush_dcache_icache_phys().
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/5f063986e325d2efdd404b8f8c5f4bcbd4eb11a6.1617895813.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Apr 14, 2021
  25. powerpc/mem: Remove address argument to flush_coherent_icache()

    flush_coherent_icache() can use any valid address, as mentioned in
    the comment.
    
    Use PAGE_OFFSET as base address. This allows removing the
    user access stuff.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/742b6360ae4f344a1c6ecfadcf3b6645f443fa7a.1617895813.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Apr 14, 2021
  26. powerpc/mem: Declare __flush_dcache_icache() static

    __flush_dcache_icache() is only used in mem.c.
    
    Move it before the functions that use it and declare it static.
    
    And also fix the name of the parameter in the comment.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/3fa903eb5a10b2bc7d99a8c559ffdaa05452d8e0.1617895813.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Apr 14, 2021
  27. powerpc/mem: Move cache flushing functions into mm/cacheflush.c

    Cache flushing functions are in the middle of completely
    unrelated stuff in mm/mem.c
    
    Create a dedicated mm/cacheflush.c for those functions.
    
    Also clean up the list of included headers.
    
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/7bf6f1600acad146e541a4e220940062f2e5b03d.1617895813.git.christophe.leroy@csgroup.eu
    chleroy authored and mpe committed Apr 14, 2021
  28. powerpc/powernv: make symbol 'mpipl_kobj' static

    The sparse tool complains as follows:
    
    arch/powerpc/platforms/powernv/opal-core.c:74:16: warning:
     symbol 'mpipl_kobj' was not declared.
    
    This symbol is not used outside of opal-core.c, so mark it static.
    
    Reported-by: Hulk Robot <hulkci@huawei.com>
    Signed-off-by: Bixuan Cui <cuibixuan@huawei.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210409063855.57347-1-cuibixuan@huawei.com
    Bixuan Cui authored and mpe committed Apr 14, 2021
  29. powerpc/xmon: Make symbol 'spu_inst_dump' static

    Fix sparse warning:
    
    arch/powerpc/xmon/xmon.c:4216:1: warning:
     symbol 'spu_inst_dump' was not declared. Should it be static?
    
    This symbol is not used outside of xmon.c, so make it static.
    
    Signed-off-by: Pu Lehui <pulehui@huawei.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210409070151.163424-1-pulehui@huawei.com
    Pu Lehui authored and mpe committed Apr 14, 2021
  30. powerpc/perf/hv-24x7: Make some symbols static

    The sparse tool complains as follows:
    
    arch/powerpc/perf/hv-24x7.c:229:1: warning:
     symbol '__pcpu_scope_hv_24x7_txn_flags' was not declared. Should it be static?
    arch/powerpc/perf/hv-24x7.c:230:1: warning:
     symbol '__pcpu_scope_hv_24x7_txn_err' was not declared. Should it be static?
    arch/powerpc/perf/hv-24x7.c:236:1: warning:
     symbol '__pcpu_scope_hv_24x7_hw' was not declared. Should it be static?
    arch/powerpc/perf/hv-24x7.c:244:1: warning:
     symbol '__pcpu_scope_hv_24x7_reqb' was not declared. Should it be static?
    arch/powerpc/perf/hv-24x7.c:245:1: warning:
     symbol '__pcpu_scope_hv_24x7_resb' was not declared. Should it be static?
    
    These symbols are not used outside of hv-24x7.c, so this commit
    marks them static.
    
    Reported-by: Hulk Robot <hulkci@huawei.com>
    Signed-off-by: Bixuan Cui <cuibixuan@huawei.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210409090124.59492-1-cuibixuan@huawei.com
    Bixuan Cui authored and mpe committed Apr 14, 2021
  31. powerpc/perf: Make symbol 'isa207_pmu_format_attr' static

    The sparse tool complains as follows:
    
    arch/powerpc/perf/isa207-common.c:24:18: warning:
     symbol 'isa207_pmu_format_attr' was not declared. Should it be static?
    
    This symbol is not used outside of isa207-common.c, so this
    commit marks it static.
    
    Reported-by: Hulk Robot <hulkci@huawei.com>
    Signed-off-by: Bixuan Cui <cuibixuan@huawei.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210409090119.59444-1-cuibixuan@huawei.com
    Bixuan Cui authored and mpe committed Apr 14, 2021
  32. powerpc/pseries/pmem: Make symbol 'drc_pmem_match' static

    The sparse tool complains as follows:
    
    arch/powerpc/platforms/pseries/pmem.c:142:27: warning:
     symbol 'drc_pmem_match' was not declared. Should it be static?
    
    This symbol is not used outside of pmem.c, so this
    commit marks it static.
    
    Reported-by: Hulk Robot <hulkci@huawei.com>
    Signed-off-by: Bixuan Cui <cuibixuan@huawei.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210409090114.59396-1-cuibixuan@huawei.com
    Bixuan Cui authored and mpe committed Apr 14, 2021
  33. powerpc/pseries: Make symbol '__pcpu_scope_hcall_stats' static

    The sparse tool complains as follows:
    
    arch/powerpc/platforms/pseries/hvCall_inst.c:29:1: warning:
     symbol '__pcpu_scope_hcall_stats' was not declared. Should it be static?
    
    This symbol is not used outside of hvCall_inst.c, so this
    commit marks it static.
    
    Reported-by: Hulk Robot <hulkci@huawei.com>
    Signed-off-by: Bixuan Cui <cuibixuan@huawei.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210409090109.59347-1-cuibixuan@huawei.com
    Bixuan Cui authored and mpe committed Apr 14, 2021
  34. powerpc/iommu: Enable remaining IOMMU Pagesizes present in LoPAR

    According to LoPAR, the ibm,query-pe-dma-window output named "IO
    Page Sizes" will let the OS know all possible page sizes that can be
    used for creating a new DDW.
    
    Currently Linux will only try using 3 of the 8 available options:
    4K, 64K and 16M. According to LoPAR, the hypervisor may also offer
    32M, 64M, 128M, 256M and 16G.
    
    Enabling bigger pages would be interesting for direct mapping
    systems with a lot of RAM, while using fewer TCE entries.
    
    Signed-off-by: Leonardo Bras <leobras.c@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210408201915.174217-1-leobras.c@gmail.com
    LeoBras authored and mpe committed Apr 14, 2021