
Commits on Nov 17, 2021

  1. x86: snapshot thread flags

    Some thread flags can be set remotely, and so even when IRQs are
    disabled, the flags can change under our feet. Generally this is
    unlikely to cause a problem in practice, but it is somewhat unsound, and
    KCSAN will legitimately warn that there is a data race.
    
    To avoid such issues, a snapshot of the flags has to be taken prior to
    using them. Some places already use READ_ONCE() for that, others do not.
    
    Convert them all to the new flag accessor helpers.
    
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Paul E. McKenney <paulmck@kernel.org>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Ingo Molnar <mingo@redhat.com>
    Mark Rutland authored and intel-lab-lkp committed Nov 17, 2021
  2. powerpc: snapshot thread flags

    Some thread flags can be set remotely, and so even when IRQs are
    disabled, the flags can change under our feet. Generally this is
    unlikely to cause a problem in practice, but it is somewhat unsound, and
    KCSAN will legitimately warn that there is a data race.
    
    To avoid such issues, a snapshot of the flags has to be taken prior to
    using them. Some places already use READ_ONCE() for that, others do not.
    
    Convert them all to the new flag accessor helpers.
    
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Paul E. McKenney <paulmck@kernel.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Paul Mackerras <paulus@samba.org>
    Mark Rutland authored and intel-lab-lkp committed Nov 17, 2021
  3. powerpc: avoid discarding flags in system_call_exception()

    Some thread flags can be set remotely, and so even when IRQs are
    disabled, the flags can change under our feet. Thus, when setting flags
    we must use an atomic operation rather than a plain read-modify-write
    sequence, as a plain read-modify-write may discard flags which are
    concurrently set by a remote thread, e.g.
    
    	// task A			// task B
    	tmp = A->thread_info.flags;
    					set_tsk_thread_flag(A, NEWFLAG_B);
    	tmp |= NEWFLAG_A;
    	A->thread_info.flags = tmp;
    
    In arch/powerpc/kernel/interrupt.c's system_call_exception(), we set
    _TIF_RESTOREALL in the thread info flags with a read-modify-write, which
    may result in other flags being discarded.
    
    Elsewhere in the file we use clear_bits() to atomically remove flag
    bits, so let's use set_bits() here for consistency with those.
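The hazard and the fix can be sketched in userspace C (a minimal illustration, not the powerpc code itself; C11 atomics stand in for the kernel's set_bits()):

```c
#include <stdatomic.h>

/* A plain read-modify-write can discard a flag set concurrently by a
 * remote task between the load and the store: */
static void set_flags_racy(unsigned long *flags, unsigned long mask)
{
    unsigned long tmp = *flags;     /* a remote set after this load... */
    *flags = tmp | mask;            /* ...is silently overwritten here */
}

/* An atomic OR, analogous to powerpc's set_bits(), performs the whole
 * update as one indivisible operation, so concurrent remote updates
 * to other bits survive. */
static void set_flags_atomic(atomic_ulong *flags, unsigned long mask)
{
    atomic_fetch_or(flags, mask);
}
```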
    
    I presume there may be reasons (e.g. instrumentation) that prevent the
    use of set_thread_flag() and clear_thread_flag() here, which would
    otherwise be preferable.
    
    Fixes: ae7aaec ("powerpc/64s: system call rfscv workaround for TM bugs")
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Eirik Fuller <efuller@redhat.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Nicholas Piggin <npiggin@gmail.com>
    Mark Rutland authored and intel-lab-lkp committed Nov 17, 2021
  4. openrisc: snapshot thread flags

    Some thread flags can be set remotely, and so even when IRQs are
    disabled, the flags can change under our feet. Generally this is
    unlikely to cause a problem in practice, but it is somewhat unsound, and
    KCSAN will legitimately warn that there is a data race.
    
    To avoid such issues, a snapshot of the flags has to be taken prior to
    using them. Some places already use READ_ONCE() for that, others do not.
    
    Convert them all to the new flag accessor helpers.
    
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Stafford Horne <shorne@gmail.com>
    Acked-by: Paul E. McKenney <paulmck@kernel.org>
    Cc: Jonas Bonn <jonas@southpole.se>
    Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
    Mark Rutland authored and intel-lab-lkp committed Nov 17, 2021
  5. microblaze: snapshot thread flags

    Some thread flags can be set remotely, and so even when IRQs are
    disabled, the flags can change under our feet. Generally this is
    unlikely to cause a problem in practice, but it is somewhat unsound, and
    KCSAN will legitimately warn that there is a data race.
    
    To avoid such issues, a snapshot of the flags has to be taken prior to
    using them. Some places already use READ_ONCE() for that, others do not.
    
    Convert them all to the new flag accessor helpers.
    
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Paul E. McKenney <paulmck@kernel.org>
    Tested-by: Michal Simek <michal.simek@xilinx.com>
    Mark Rutland authored and intel-lab-lkp committed Nov 17, 2021
  6. arm64: snapshot thread flags

    Some thread flags can be set remotely, and so even when IRQs are
    disabled, the flags can change under our feet. Generally this is
    unlikely to cause a problem in practice, but it is somewhat unsound, and
    KCSAN will legitimately warn that there is a data race.
    
    To avoid such issues, a snapshot of the flags has to be taken prior to
    using them. Some places already use READ_ONCE() for that, others do not.
    
    Convert them all to the new flag accessor helpers.
    
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Will Deacon <will@kernel.org>
    Acked-by: Paul E. McKenney <paulmck@kernel.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Mark Rutland authored and intel-lab-lkp committed Nov 17, 2021
  7. arm: snapshot thread flags

    Some thread flags can be set remotely, and so even when IRQs are
    disabled, the flags can change under our feet. Generally this is
    unlikely to cause a problem in practice, but it is somewhat unsound, and
    KCSAN will legitimately warn that there is a data race.
    
    To avoid such issues, a snapshot of the flags has to be taken prior to
    using them. Some places already use READ_ONCE() for that, others do not.
    
    Convert them all to the new flag accessor helpers.
    
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Paul E. McKenney <paulmck@kernel.org>
    Cc: Russell King <linux@armlinux.org.uk>
    Mark Rutland authored and intel-lab-lkp committed Nov 17, 2021
  8. alpha: snapshot thread flags

    Some thread flags can be set remotely, and so even when IRQs are
    disabled, the flags can change under our feet. Generally this is
    unlikely to cause a problem in practice, but it is somewhat unsound, and
    KCSAN will legitimately warn that there is a data race.
    
    To avoid such issues, a snapshot of the flags has to be taken prior to
    using them. Some places already use READ_ONCE() for that, others do not.
    
    Convert them all to the new flag accessor helpers.
    
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Paul E. McKenney <paulmck@kernel.org>
    Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Richard Henderson <rth@twiddle.net>
    Mark Rutland authored and intel-lab-lkp committed Nov 17, 2021
  9. sched: snapshot thread flags

    Some thread flags can be set remotely, and so even when IRQs are
    disabled, the flags can change under our feet. Generally this is
    unlikely to cause a problem in practice, but it is somewhat unsound, and
    KCSAN will legitimately warn that there is a data race.
    
    To avoid such issues, a snapshot of the flags has to be taken prior to
    using them. Some places already use READ_ONCE() for that, others do not.
    
    Convert them all to the new flag accessor helpers.
    
    The READ_ONCE(ti->flags) .. cmpxchg(ti->flags) loop in
    set_nr_if_polling() is left as-is for clarity.
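The READ_ONCE()..cmpxchg() pattern mentioned above can be sketched in userspace C (hypothetical flag values and helper name; C11 atomics model the kernel primitives):

```c
#include <stdatomic.h>

/* Snapshot the flags, then retry the compare-and-swap until no remote
 * update intervenes between the read and the write. */
#define TIF_POLLING  0x1UL
#define TIF_RESCHED  0x2UL

static int set_resched_if_polling(atomic_ulong *flags)
{
    unsigned long val = atomic_load(flags);       /* snapshot */
    for (;;) {
        if (!(val & TIF_POLLING))
            return 0;                             /* target not polling */
        /* on failure, 'val' is reloaded with the current value */
        if (atomic_compare_exchange_weak(flags, &val, val | TIF_RESCHED))
            return 1;
    }
}
```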
    
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Paul E. McKenney <paulmck@kernel.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Juri Lelli <juri.lelli@redhat.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Vincent Guittot <vincent.guittot@linaro.org>
    Mark Rutland authored and intel-lab-lkp committed Nov 17, 2021
  10. entry: snapshot thread flags

    Some thread flags can be set remotely, and so even when IRQs are
    disabled, the flags can change under our feet. Generally this is
    unlikely to cause a problem in practice, but it is somewhat unsound, and
    KCSAN will legitimately warn that there is a data race.
    
    To avoid such issues, a snapshot of the flags has to be taken prior to
    using them. Some places already use READ_ONCE() for that, others do not.
    
    Convert them all to the new flag accessor helpers.
    
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Paul E. McKenney <paulmck@kernel.org>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Mark Rutland authored and intel-lab-lkp committed Nov 17, 2021
  11. thread_info: add helpers to snapshot thread flags

    In <linux/thread_info.h> there are helpers to manipulate individual
    thread flags, but where code wants to check several flags at once, it
    must open code reading current_thread_info()->flags and operating on a
    snapshot.
    
    As some flags can be set remotely it's necessary to use READ_ONCE() to
    get a consistent snapshot even when IRQs are disabled, but some code
    forgets to do this. Generally this is unlikely to cause a problem in
    practice, but it is somewhat unsound, and KCSAN will legitimately warn
    that there is a data race.
    
    To make it easier to do the right thing, and to highlight that
    concurrent modification is possible, add new helpers to snapshot the
    flags, which should be used in preference to plain reads. Subsequent
    patches will move existing code to use the new helpers.
    
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Marco Elver <elver@google.com>
    Acked-by: Paul E. McKenney <paulmck@kernel.org>
    Cc: Boqun Feng <boqun.feng@gmail.com>
    Cc: Dmitry Vyukov <dvyukov@google.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Will Deacon <will@kernel.org>
    Mark Rutland authored and intel-lab-lkp committed Nov 17, 2021

Commits on Nov 8, 2021

  1. arm64: pgtable: make __pte_to_phys/__phys_to_pte_val inline functions

    gcc warns about undefined behavior in the vmalloc code when building
    with CONFIG_ARM64_PA_BITS_52, when the 'idx++' in the argument to
    __phys_to_pte_val() is evaluated twice:
    
    mm/vmalloc.c: In function 'vmap_pfn_apply':
    mm/vmalloc.c:2800:58: error: operation on 'data->idx' may be undefined [-Werror=sequence-point]
     2800 |         *pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot));
          |                                                 ~~~~~~~~~^~
    arch/arm64/include/asm/pgtable-types.h:25:37: note: in definition of macro '__pte'
       25 | #define __pte(x)        ((pte_t) { (x) } )
          |                                     ^
    arch/arm64/include/asm/pgtable.h:80:15: note: in expansion of macro '__phys_to_pte_val'
       80 |         __pte(__phys_to_pte_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
          |               ^~~~~~~~~~~~~~~~~
    mm/vmalloc.c:2800:30: note: in expansion of macro 'pfn_pte'
     2800 |         *pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot));
          |                              ^~~~~~~
    
    I have no idea why this never showed up earlier, but the safest
    workaround appears to be changing those macros into inline functions
    so the arguments get evaluated only once.
    
    Cc: Matthew Wilcox <willy@infradead.org>
    Fixes: 75387b9 ("arm64: handle 52-bit physical addresses in page table entries")
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Link: https://lore.kernel.org/r/20211105075414.2553155-1-arnd@kernel.org
    Signed-off-by: Will Deacon <will@kernel.org>
    arndb authored and willdeacon committed Nov 8, 2021
  2. arm64: Track no early_pgtable_alloc() for kmemleak

    After switching the page size from 64KB to 4KB on several arm64 servers,
    kmemleak starts to run out of early memory pool due to a huge number of
    those early_pgtable_alloc() calls:
    
      kmemleak_alloc_phys()
      memblock_alloc_range_nid()
      memblock_phys_alloc_range()
      early_pgtable_alloc()
      init_pmd()
      alloc_init_pud()
      __create_pgd_mapping()
      __map_memblock()
      paging_init()
      setup_arch()
      start_kernel()
    
    Increasing the default value of DEBUG_KMEMLEAK_MEM_POOL_SIZE by 4 times
    won't be enough for a server with 200GB+ memory. There isn't much
    interest in checking memory leaks for those early page tables, and
    those early memory mappings should not reference other memory. Hence,
    there are no kmemleak false positives, and we can safely skip tracking
    those early allocations from kmemleak, as we did in commit fed84c7
    ("mm/memblock.c: skip kmemleak for kasan_init()"), without needing to
    introduce complications to automatically scale the value depending on
    the runtime memory size etc. After the patch, the default value of
    DEBUG_KMEMLEAK_MEM_POOL_SIZE becomes sufficient again.
    
    Signed-off-by: Qian Cai <quic_qiancai@quicinc.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
    Link: https://lore.kernel.org/r/20211105150509.7826-1-quic_qiancai@quicinc.com
    Signed-off-by: Will Deacon <will@kernel.org>
    qcsde authored and willdeacon committed Nov 8, 2021
  3. arm64: mte: change PR_MTE_TCF_NONE back into an unsigned long

    This constant was previously an unsigned long, but was changed
    into an int in commit 433c38f ("arm64: mte: change ASYNC and
    SYNC TCF settings into bitfields"). This ended up causing spurious
    unsigned-signed comparison warnings in expressions such as:
    
    (x & PR_MTE_TCF_MASK) != PR_MTE_TCF_NONE
    
    Therefore, change it back into an unsigned long to silence these
    warnings.
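The class of warning being silenced can be sketched as follows (illustrative stand-in names, not the real prctl constants): with -Wsign-compare (enabled by -Wextra), comparing an unsigned long expression against a plain-int constant macro can warn in some contexts, while suffixing the constant with UL keeps both operands the same type.

```c
#define TCF_MASK  0x3UL
#define TCF_NONE  0UL             /* unsigned long, matching the mask */

/* Both operands of != are unsigned long, so no sign-compare warning. */
static int tcf_is_set(unsigned long sctlr)
{
    return (sctlr & TCF_MASK) != TCF_NONE;
}
```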
    
    Link: https://linux-review.googlesource.com/id/I07a72310db30227a5b7d789d0b817d78b657c639
    Signed-off-by: Peter Collingbourne <pcc@google.com>
    Link: https://lore.kernel.org/r/20211105230829.2254790-1-pcc@google.com
    Signed-off-by: Will Deacon <will@kernel.org>
    pcc authored and willdeacon committed Nov 8, 2021
  4. arm64: vdso: remove -nostdlib compiler flag

    The -nostdlib option requests the compiler to not use the standard
    system startup files or libraries when linking. It is effective only
    when $(CC) is used as a linker driver.
    
    Since commit 691efbe ("arm64: vdso: use $(LD) instead of $(CC)
    to link VDSO"), $(LD) is directly used, hence -nostdlib is unneeded.
    
    Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
    Link: https://lore.kernel.org/r/20211107161802.323125-1-masahiroy@kernel.org
    Signed-off-by: Will Deacon <will@kernel.org>
    masahir0y authored and willdeacon committed Nov 8, 2021
  5. arm64: arm64_ftr_reg->name may not be a human-readable string

    The id argument of ARM64_FTR_REG_OVERRIDE() is used for two purposes:
    one as the system register encoding (used for the sys_id field of
    __ftr_reg_entry), and the other as the register name (stringified
    and used for the name field of arm64_ftr_reg), which is debug
    information. The id argument is supposed to be a macro that
    indicates an encoding of the register (e.g. SYS_ID_AA64PFR0_EL1).
    
    ARM64_FTR_REG(), which also has the same id argument,
    uses ARM64_FTR_REG_OVERRIDE() and passes the id to the macro.
    Since the id argument is completely macro-expanded before it is
    substituted into a macro body of ARM64_FTR_REG_OVERRIDE(),
    the stringified id in the body of ARM64_FTR_REG_OVERRIDE() is not
    a human-readable register name, but a string of numeric bitwise
    operations.
    
    Fix this so that human-readable register names are available as
    debug information.
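The preprocessor behaviour behind the bug can be demonstrated in isolation (SYS_REG and both macro names below are made-up): an argument stringified directly with # is not macro-expanded, but an argument passed through a second macro (as ARM64_FTR_REG() passes id to ARM64_FTR_REG_OVERRIDE()) is fully expanded first, so the inner stringification sees the expansion.

```c
#define SYS_REG  (3 << 19)

#define NAME_DIRECT(id)  #id             /* stringifies the token itself */
#define NAME_VIA(id)     NAME_DIRECT(id) /* id expands before reaching # */
```

NAME_DIRECT(SYS_REG) yields the readable "SYS_REG", whereas NAME_VIA(SYS_REG) yields the expansion "(3 << 19)", which is the kind of unreadable string the fix avoids.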
    
    Fixes: 8f266a5 ("arm64: cpufeature: Add global feature override facility")
    Signed-off-by: Reiji Watanabe <reijiw@google.com>
    Reviewed-by: Oliver Upton <oupton@google.com>
    Acked-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20211101045421.2215822-1-reijiw@google.com
    Signed-off-by: Will Deacon <will@kernel.org>
    reijiw-kvm authored and willdeacon committed Nov 8, 2021

Commits on Oct 29, 2021

  1. Merge branch 'for-next/fixes' into for-next/core

    Merge for-next/fixes to resolve conflicts in arm64_hugetlb_cma_reserve().
    
    * for-next/fixes:
      acpi/arm64: fix next_platform_timer() section mismatch error
      arm64/hugetlb: fix CMA gigantic page order for non-4K PAGE_SIZE
    willdeacon committed Oct 29, 2021
  2. Merge branch 'for-next/vdso' into for-next/core

    * for-next/vdso:
      arm64: vdso32: require CROSS_COMPILE_COMPAT for gcc+bfd
      arm64: vdso32: suppress error message for 'make mrproper'
      arm64: vdso32: drop test for -march=armv8-a
      arm64: vdso32: drop the test for dmb ishld
    willdeacon committed Oct 29, 2021
  3. Merge branch 'for-next/trbe-errata' into for-next/core

    * for-next/trbe-errata:
      arm64: errata: Add detection for TRBE write to out-of-range
      arm64: errata: Add workaround for TSB flush failures
      arm64: errata: Add detection for TRBE overwrite in FILL mode
      arm64: Add Neoverse-N2, Cortex-A710 CPU part definition
    willdeacon committed Oct 29, 2021
  4. Merge branch 'for-next/sve' into for-next/core

    * for-next/sve:
      arm64/sve: Fix warnings when SVE is disabled
      arm64/sve: Add stub for sve_max_virtualisable_vl()
      arm64/sve: Track vector lengths for tasks in an array
      arm64/sve: Explicitly load vector length when restoring SVE state
      arm64/sve: Put system wide vector length information into structs
      arm64/sve: Use accessor functions for vector lengths in thread_struct
      arm64/sve: Rename find_supported_vector_length()
      arm64/sve: Make access to FFR optional
      arm64/sve: Make sve_state_size() static
      arm64/sve: Remove sve_load_from_fpsimd_state()
      arm64/fp: Reindent fpsimd_save()
    willdeacon committed Oct 29, 2021
  5. Merge branch 'for-next/scs' into for-next/core

    * for-next/scs:
      scs: Release kasan vmalloc poison in scs_free process
    willdeacon committed Oct 29, 2021
  6. Merge branch 'for-next/pfn-valid' into for-next/core

    * for-next/pfn-valid:
      arm64/mm: drop HAVE_ARCH_PFN_VALID
      dma-mapping: remove bogus test for pfn_valid from dma_map_resource
    willdeacon committed Oct 29, 2021
  7. Merge branch 'for-next/perf' into for-next/core

    * for-next/perf:
      drivers/perf: Improve build test coverage
      drivers/perf: thunderx2_pmu: Change data in size tx2_uncore_event_update()
      drivers/perf: hisi: Fix PA PMU counter offset
    willdeacon committed Oct 29, 2021
  8. Merge branch 'for-next/mte' into for-next/core

    * for-next/mte:
      kasan: Extend KASAN mode kernel parameter
      arm64: mte: Add asymmetric mode support
      arm64: mte: CPU feature detection for Asymm MTE
      arm64: mte: Bitfield definitions for Asymm MTE
      kasan: Remove duplicate of kasan_flag_async
      arm64: kasan: mte: move GCR_EL1 switch to task switch when KASAN disabled
    willdeacon committed Oct 29, 2021
  9. Merge branch 'for-next/mm' into for-next/core

    * for-next/mm:
      arm64: mm: update max_pfn after memory hotplug
      arm64/mm: Add pud_sect_supported()
      arm64: mm: Drop pointless call to set_max_mapnr()
    willdeacon committed Oct 29, 2021
  10. Merge branch 'for-next/misc' into for-next/core

    * for-next/misc:
      arm64: Select POSIX_CPU_TIMERS_TASK_WORK
      arm64: Document boot requirements for FEAT_SME_FA64
      arm64: ftrace: use function_nocfi for _mcount as well
      arm64: asm: setup.h: export common variables
      arm64/traps: Avoid unnecessary kernel/user pointer conversion
    willdeacon committed Oct 29, 2021
  11. Merge branch 'for-next/kselftest' into for-next/core

    * for-next/kselftest:
      selftests: arm64: Factor out utility functions for assembly FP tests
      selftests: arm64: Add coverage of ptrace flags for SVE VL inheritance
      selftests: arm64: Verify that all possible vector lengths are handled
      selftests: arm64: Fix and enable test for setting current VL in vec-syscfg
      selftests: arm64: Remove bogus error check on writing to files
      selftests: arm64: Fix printf() format mismatch in vec-syscfg
      selftests: arm64: Move FPSIMD in SVE ptrace test into a function
      selftests: arm64: More comprehensively test the SVE ptrace interface
      selftests: arm64: Verify interoperation of SVE and FPSIMD register sets
      selftests: arm64: Clarify output when verifying SVE register set
      selftests: arm64: Document what the SVE ptrace test is doing
      selftests: arm64: Remove extraneous register setting code
      selftests: arm64: Don't log child creation as a test in SVE ptrace test
      selftests: arm64: Use a define for the number of SVE ptrace tests to be run
    willdeacon committed Oct 29, 2021
  12. Merge branch 'for-next/kexec' into for-next/core

    * for-next/kexec:
      arm64: trans_pgd: remove trans_pgd_map_page()
      arm64: kexec: remove cpu-reset.h
      arm64: kexec: remove the pre-kexec PoC maintenance
      arm64: kexec: keep MMU enabled during kexec relocation
      arm64: kexec: install a copy of the linear-map
      arm64: kexec: use ld script for relocation function
      arm64: kexec: relocate in EL1 mode
      arm64: kexec: configure EL2 vectors for kexec
      arm64: kexec: pass kimage as the only argument to relocation function
      arm64: kexec: Use dcache ops macros instead of open-coding
      arm64: kexec: skip relocation code for inplace kexec
      arm64: kexec: flush image and lists during kexec load time
      arm64: hibernate: abstract ttbr0 setup function
      arm64: trans_pgd: hibernate: Add trans_pgd_copy_el2_vectors
      arm64: kernel: add helper for booted at EL2 and not VHE
    willdeacon committed Oct 29, 2021
  13. Merge branch 'for-next/extable' into for-next/core

    * for-next/extable:
      arm64: vmlinux.lds.S: remove `.fixup` section
      arm64: extable: add load_unaligned_zeropad() handler
      arm64: extable: add a dedicated uaccess handler
      arm64: extable: add `type` and `data` fields
      arm64: extable: use `ex` for `exception_table_entry`
      arm64: extable: make fixup_exception() return bool
      arm64: extable: consolidate definitions
      arm64: gpr-num: support W registers
      arm64: factor out GPR numbering helpers
      arm64: kvm: use kvm_exception_table_entry
      arm64: lib: __arch_copy_to_user(): fold fixups into body
      arm64: lib: __arch_copy_from_user(): fold fixups into body
      arm64: lib: __arch_clear_user(): fold fixups into body
    willdeacon committed Oct 29, 2021
  14. Merge branch 'for-next/8.6-timers' into for-next/core

    * for-next/8.6-timers:
      arm64: Add HWCAP for self-synchronising virtual counter
      arm64: Add handling of CNTVCTSS traps
      arm64: Add CNT{P,V}CTSS_EL0 alternatives to cnt{p,v}ct_el0
      arm64: Add a capability for FEAT_ECV
      clocksource/drivers/arm_arch_timer: Move workaround synchronisation around
      clocksource/drivers/arm_arch_timer: Fix masking for high freq counters
      clocksource/drivers/arm_arch_timer: Drop unnecessary ISB on CVAL programming
      clocksource/drivers/arm_arch_timer: Remove any trace of the TVAL programming interface
      clocksource/drivers/arm_arch_timer: Work around broken CVAL implementations
      clocksource/drivers/arm_arch_timer: Advertise 56bit timer to the core code
      clocksource/drivers/arm_arch_timer: Move MMIO timer programming over to CVAL
      clocksource/drivers/arm_arch_timer: Fix MMIO base address vs callback ordering issue
      clocksource/drivers/arm_arch_timer: Drop _tval from erratum function names
      clocksource/drivers/arm_arch_timer: Move system register timer programming over to CVAL
      clocksource/drivers/arm_arch_timer: Extend write side of timer register accessors to u64
      clocksource/drivers/arm_arch_timer: Drop CNT*_TVAL read accessors
      clocksource/arm_arch_timer: Add build-time guards for unhandled register accesses
    willdeacon committed Oct 29, 2021

Commits on Oct 28, 2021

  1. arm64: Select POSIX_CPU_TIMERS_TASK_WORK

    With 6caa581 ("KVM: arm64: Use generic KVM xfer to guest work
    function") all arm64 exit paths are properly equipped to handle the
    POSIX timers' task work.
    
    Deferring timer callbacks to thread context not only limits the amount
    of time spent in hard interrupt context, but is also a safer
    implementation[1], and will allow PREEMPT_RT setups to use KVM[2].
    
    So let's enable POSIX_CPU_TIMERS_TASK_WORK on arm64.
    
    [1] https://lore.kernel.org/all/20200716201923.228696399@linutronix.de/
    [2] https://lore.kernel.org/linux-rt-users/87v92bdnlx.ffs@tglx/
    
    Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20211018144713.873464-1-nsaenzju@redhat.com
    Signed-off-by: Will Deacon <will@kernel.org>
    vianpl authored and willdeacon committed Oct 28, 2021
  2. arm64: Document boot requirements for FEAT_SME_FA64

    The EAC1 release of the SME specification adds the FA64 feature which
    requires enablement at higher ELs before lower ELs can use it. Document
    what we require from higher ELs in our boot requirements.
    
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Link: https://lore.kernel.org/r/20211026111802.12853-1-broonie@kernel.org
    Signed-off-by: Will Deacon <will@kernel.org>
    broonie authored and willdeacon committed Oct 28, 2021

Commits on Oct 26, 2021

  1. arm64/sve: Fix warnings when SVE is disabled

    In configurations where SVE is disabled we define but never reference the
    functions for retrieving the default vector length, causing warnings. Fix
    this by moving the ifdef up, marking get_default_vl() inline since it is
    referenced from code guarded by an IS_ENABLED() check, and doing the same
    for the other accessors for consistency.
    
    Reported-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Link: https://lore.kernel.org/r/20211022141635.2360415-3-broonie@kernel.org
    Signed-off-by: Will Deacon <will@kernel.org>
    broonie authored and willdeacon committed Oct 26, 2021
  2. arm64/sve: Add stub for sve_max_virtualisable_vl()

    Fixes build problems for configurations with KVM enabled but SVE disabled.
    
    Reported-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Link: https://lore.kernel.org/r/20211022141635.2360415-2-broonie@kernel.org
    Signed-off-by: Will Deacon <will@kernel.org>
    broonie authored and willdeacon committed Oct 26, 2021

Commits on Oct 21, 2021

  1. arm64: errata: Add detection for TRBE write to out-of-range

    Arm Neoverse-N2 and Cortex-A710 cores are affected by an erratum where
    the TRBE, under some circumstances, might write up to 64 bytes to an
    address after the Limit as programmed by TRBLIMITR_EL1.LIMIT.
    This might:
      - Corrupt a page in the ring buffer, which may corrupt trace from a
        previous session, consumed by userspace.
      - Hit the guard page at the end of the vmalloc area and raise a fault.
    
    To keep the handling simpler, we always leave the last page of the
    range which the TRBE is allowed to write. This can be achieved by
    ensuring that we always have more than a PAGE worth of space in the
    range while calculating the LIMIT for the TRBE. The LIMIT pointer can
    then be adjusted to leave that PAGE (TRBLIMITR.LIMIT -= PAGE_SIZE) out
    of the TRBE range while enabling it. This makes sure that the TRBE will
    only write to an area within its allowed limit (i.e., [head, head+size))
    and we do not have to handle address faults within the driver.
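The limit arithmetic described above can be sketched as a small helper (hypothetical function, not the driver's actual code): require more than one page of space, then pull the programmed LIMIT back by PAGE_SIZE so a stray write of up to 64 bytes past the Limit still lands inside the buffer.

```c
#define PAGE_SIZE  4096UL

/* Returns the LIMIT to program for a buffer of 'size' bytes at 'base',
 * or 0 if the buffer is too small to reserve the trailing page. */
static unsigned long trbe_limit(unsigned long base, unsigned long size)
{
    if (size <= PAGE_SIZE)
        return 0;                       /* cannot reserve a page */
    return base + size - PAGE_SIZE;     /* TRBLIMITR.LIMIT -= PAGE_SIZE */
}
```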
    
    Cc: Anshuman Khandual <anshuman.khandual@arm.com>
    Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
    Cc: Mike Leach <mike.leach@linaro.org>
    Cc: Leo Yan <leo.yan@linaro.org>
    Cc: Will Deacon <will@kernel.org>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Link: https://lore.kernel.org/r/20211019163153.3692640-5-suzuki.poulose@arm.com
    Signed-off-by: Will Deacon <will@kernel.org>
    Suzuki K Poulose authored and willdeacon committed Oct 21, 2021