Merge branch 'akpm' (patches from Andrew)
Merge yet more updates from Andrew Morton:

 - More MM work. 100ish more to go. Mike Rapoport's "mm: remove
   __ARCH_HAS_5LEVEL_HACK" series should fix the current ppc issue

 - Various other little subsystems

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (127 commits)
  lib/ubsan.c: fix gcc-10 warnings
  tools/testing/selftests/vm: remove duplicate headers
  selftests: vm: pkeys: fix multilib builds for x86
  selftests: vm: pkeys: use the correct page size on powerpc
  selftests/vm/pkeys: override access right definitions on powerpc
  selftests/vm/pkeys: test correct behaviour of pkey-0
  selftests/vm/pkeys: introduce a sub-page allocator
  selftests/vm/pkeys: detect write violation on a mapped access-denied-key page
  selftests/vm/pkeys: associate key on a mapped page and detect write violation
  selftests/vm/pkeys: associate key on a mapped page and detect access violation
  selftests/vm/pkeys: improve checks to determine pkey support
  selftests/vm/pkeys: fix assertion in test_pkey_alloc_exhaust()
  selftests/vm/pkeys: fix number of reserved powerpc pkeys
  selftests/vm/pkeys: introduce powerpc support
  selftests/vm/pkeys: introduce generic pkey abstractions
  selftests: vm: pkeys: use the correct huge page size
  selftests/vm/pkeys: fix alloc_random_pkey() to make it really random
  selftests/vm/pkeys: fix assertion in pkey_disable_set/clear()
  selftests/vm/pkeys: fix pkey_disable_clear()
  selftests: vm: pkeys: add helpers for pkey bits
  ...
torvalds committed Jun 5, 2020
2 parents 5bfea2d + 469cbd0 commit 886d7de
Showing 199 changed files with 3,312 additions and 2,163 deletions.
17 changes: 9 additions & 8 deletions Documentation/dev-tools/kcov.rst
@@ -217,14 +217,15 @@ This allows to collect coverage from two types of kernel background
threads: the global ones, that are spawned during kernel boot in a limited
number of instances (e.g. one USB hub_event() worker thread is spawned per
USB HCD); and the local ones, that are spawned when a user interacts with
some kernel interface (e.g. vhost workers).
some kernel interface (e.g. vhost workers); as well as from soft
interrupts.

To enable collecting coverage from a global background thread, a unique
global handle must be assigned and passed to the corresponding
kcov_remote_start() call. Then a userspace process can pass a list of such
handles to the KCOV_REMOTE_ENABLE ioctl in the handles array field of the
kcov_remote_arg struct. This will attach the used kcov device to the code
sections, that are referenced by those handles.
To enable collecting coverage from a global background thread or from a
softirq, a unique global handle must be assigned and passed to the
corresponding kcov_remote_start() call. Then a userspace process can pass
a list of such handles to the KCOV_REMOTE_ENABLE ioctl in the handles
array field of the kcov_remote_arg struct. This will attach the used kcov
device to the code sections, that are referenced by those handles.

Since there might be many local background threads spawned from different
userspace processes, we can't use a single global handle per annotation.
@@ -242,7 +243,7 @@ handles as they don't belong to a particular subsystem. The bytes 4-7 are
currently reserved and must be zero. In the future the number of bytes
used for the subsystem or handle ids might be increased.

When a particular userspace proccess collects coverage by via a common
When a particular userspace proccess collects coverage via a common
handle, kcov will collect coverage for each code section that is annotated
to use the common handle obtained as kcov_handle from the current
task_struct. However non common handles allow to collect coverage
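The kcov.rst hunk above describes the remote-coverage flow in prose only. As a quick illustration, here is a condensed userspace sketch of that flow (open /sys/kernel/debug/kcov, KCOV_INIT_TRACE, mmap, KCOV_REMOTE_ENABLE). Assumptions: struct kcov_remote_arg, the KCOV_* ioctls and the KCOV_SUBSYSTEM_* constants come from the uapi <linux/kcov.h>; the local kcov_remote_handle() helper and the instance ids 0x1 and 0x42 are purely illustrative, following the fuller example in kcov.rst.

#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kcov.h>

#define COVER_SIZE (64 << 10)

/* Pack the subsystem byte and the 4-byte instance id; the reserved
 * middle bytes stay zero, as the text above requires. */
static uint64_t kcov_remote_handle(uint64_t subsys, uint64_t inst)
{
	return subsys | inst;
}

int main(void)
{
	struct kcov_remote_arg *arg;
	unsigned long *cover;
	int fd = open("/sys/kernel/debug/kcov", O_RDWR);

	if (fd == -1 || ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
		exit(1);
	cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
		     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (cover == MAP_FAILED)
		exit(1);

	/* One global handle (USB, instance 1) plus a common handle for
	 * local threads spawned on behalf of this process. */
	arg = calloc(1, sizeof(*arg) + sizeof(uint64_t));
	if (!arg)
		exit(1);
	arg->trace_mode = KCOV_TRACE_PC;
	arg->area_size = COVER_SIZE;
	arg->num_handles = 1;
	arg->handles[0] = kcov_remote_handle(KCOV_SUBSYSTEM_USB, 0x1);
	arg->common_handle = kcov_remote_handle(KCOV_SUBSYSTEM_COMMON, 0x42);
	if (ioctl(fd, KCOV_REMOTE_ENABLE, arg))
		exit(1);

	/* ... trigger kernel activity; cover[0] then holds the PC count ... */
	return 0;
}
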
34 changes: 34 additions & 0 deletions Documentation/features/debug/debug-vm-pgtable/arch-support.txt
@@ -0,0 +1,34 @@
#
# Feature name: debug-vm-pgtable
# Kconfig: ARCH_HAS_DEBUG_VM_PGTABLE
# description: arch supports pgtable tests for semantics compliance
#
-----------------------
| arch |status|
-----------------------
| alpha: | TODO |
| arc: | ok |
| arm: | TODO |
| arm64: | ok |
| c6x: | TODO |
| csky: | TODO |
| h8300: | TODO |
| hexagon: | TODO |
| ia64: | TODO |
| m68k: | TODO |
| microblaze: | TODO |
| mips: | TODO |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
| powerpc: | ok |
| riscv: | TODO |
| s390: | ok |
| sh: | TODO |
| sparc: | TODO |
| um: | TODO |
| unicore32: | TODO |
| x86: | ok |
| xtensa: | TODO |
-----------------------
1 change: 1 addition & 0 deletions arch/arc/Kconfig
@@ -6,6 +6,7 @@
config ARC
def_bool y
select ARC_TIMERS
select ARCH_HAS_DEBUG_VM_PGTABLE
select ARCH_HAS_DMA_PREP_COHERENT
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_SETUP_DMA_OPS
18 changes: 0 additions & 18 deletions arch/arc/include/asm/highmem.h
@@ -25,33 +25,15 @@
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
#define PKMAP_NR(virt) (((virt) - PKMAP_BASE) >> PAGE_SHIFT)

#define kmap_prot PAGE_KERNEL


#include <asm/cacheflush.h>

extern void *kmap(struct page *page);
extern void *kmap_high(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void kunmap_high(struct page *page);

extern void kmap_init(void);

static inline void flush_cache_kmaps(void)
{
flush_cache_all();
}

static inline void kunmap(struct page *page)
{
BUG_ON(in_interrupt());
if (!PageHighMem(page))
return;
kunmap_high(page);
}


#endif

#endif
28 changes: 5 additions & 23 deletions arch/arc/mm/highmem.c
@@ -49,38 +49,23 @@
extern pte_t * pkmap_page_table;
static pte_t * fixmap_page_table;

void *kmap(struct page *page)
{
BUG_ON(in_interrupt());
if (!PageHighMem(page))
return page_address(page);

return kmap_high(page);
}
EXPORT_SYMBOL(kmap);

void *kmap_atomic(struct page *page)
void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
{
int idx, cpu_idx;
unsigned long vaddr;

preempt_disable();
pagefault_disable();
if (!PageHighMem(page))
return page_address(page);

cpu_idx = kmap_atomic_idx_push();
idx = cpu_idx + KM_TYPE_NR * smp_processor_id();
vaddr = FIXMAP_ADDR(idx);

set_pte_at(&init_mm, vaddr, fixmap_page_table + idx,
mk_pte(page, kmap_prot));
mk_pte(page, prot));

return (void *)vaddr;
}
EXPORT_SYMBOL(kmap_atomic);
EXPORT_SYMBOL(kmap_atomic_high_prot);

void __kunmap_atomic(void *kv)
void kunmap_atomic_high(void *kv)
{
unsigned long kvaddr = (unsigned long)kv;

@@ -102,11 +87,8 @@ void __kunmap_atomic(void *kv)

kmap_atomic_idx_pop();
}

pagefault_enable();
preempt_enable();
}
EXPORT_SYMBOL(__kunmap_atomic);
EXPORT_SYMBOL(kunmap_atomic_high);

static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr)
{
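Context for the arch/arc hunks above: they are part of the kmap consolidation merged here, which hoists the architecture-independent steps (the PageHighMem() short-circuit, preempt and pagefault bracketing) into generic code, leaving the arch to implement only kmap_atomic_high_prot() and kunmap_atomic_high(). A simplified sketch of the generic wrapper this relies on (paraphrased from memory of include/linux/highmem.h as of this merge, not verbatim):

/* Generic side of the split: the checks deleted from arch/arc above
 * now run here for every architecture; only the highmem fixmap work
 * remains arch-specific. */
static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
{
	preempt_disable();
	pagefault_disable();
	if (!PageHighMem(page))		/* lowmem is permanently mapped */
		return page_address(page);
	return kmap_atomic_high_prot(page, prot);
}
#define kmap_atomic(page)	kmap_atomic_prot(page, kmap_prot)
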
9 changes: 0 additions & 9 deletions arch/arm/include/asm/highmem.h
@@ -10,8 +10,6 @@
#define PKMAP_NR(virt) (((virt) - PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

#define kmap_prot PAGE_KERNEL

#define flush_cache_kmaps() \
do { \
if (cache_is_vivt()) \
@@ -20,9 +18,6 @@

extern pte_t *pkmap_page_table;

extern void *kmap_high(struct page *page);
extern void kunmap_high(struct page *page);

/*
* The reason for kmap_high_get() is to ensure that the currently kmap'd
* page usage count does not decrease to zero while we're using its
@@ -63,10 +58,6 @@ static inline void *kmap_high_get(struct page *page)
* when CONFIG_HIGHMEM is not set.
*/
#ifdef CONFIG_HIGHMEM
extern void *kmap(struct page *page);
extern void kunmap(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
#endif

1 change: 0 additions & 1 deletion arch/arm/include/asm/pgtable.h
@@ -17,7 +17,6 @@

#else

#define __ARCH_USE_5LEVEL_HACK
#include <asm-generic/pgtable-nopud.h>
#include <asm/memory.h>
#include <asm/pgtable-hwdef.h>
7 changes: 6 additions & 1 deletion arch/arm/lib/uaccess_with_memcpy.c
@@ -24,6 +24,7 @@ pin_page_for_write(const void __user *_addr, pte_t **ptep, spinlock_t **ptlp)
{
unsigned long addr = (unsigned long)_addr;
pgd_t *pgd;
p4d_t *p4d;
pmd_t *pmd;
pte_t *pte;
pud_t *pud;
@@ -33,7 +34,11 @@ pin_page_for_write(const void __user *_addr, pte_t **ptep, spinlock_t **ptlp)
if (unlikely(pgd_none(*pgd) || pgd_bad(*pgd)))
return 0;

pud = pud_offset(pgd, addr);
p4d = p4d_offset(pgd, addr);
if (unlikely(p4d_none(*p4d) || p4d_bad(*p4d)))
return 0;

pud = pud_offset(p4d, addr);
if (unlikely(pud_none(*pud) || pud_bad(*pud)))
return 0;

2 changes: 1 addition & 1 deletion arch/arm/mach-sa1100/assabet.c
@@ -633,7 +633,7 @@ static void __init map_sa1100_gpio_regs( void )
int prot = PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_DOMAIN(DOMAIN_IO);
pmd_t *pmd;

pmd = pmd_offset(pud_offset(pgd_offset_k(virt), virt), virt);
pmd = pmd_offset(pud_offset(p4d_offset(pgd_offset_k(virt), virt), virt), virt);
*pmd = __pmd(phys | prot);
flush_pmd_entry(pmd);
}
29 changes: 23 additions & 6 deletions arch/arm/mm/dump.c
@@ -207,6 +207,7 @@ struct pg_level {
static struct pg_level pg_level[] = {
{
}, { /* pgd */
}, { /* p4d */
}, { /* pud */
}, { /* pmd */
.bits = section_bits,
@@ -308,7 +309,7 @@ static void walk_pte(struct pg_state *st, pmd_t *pmd, unsigned long start,

for (i = 0; i < PTRS_PER_PTE; i++, pte++) {
addr = start + i * PAGE_SIZE;
note_page(st, addr, 4, pte_val(*pte), domain);
note_page(st, addr, 5, pte_val(*pte), domain);
}
}

@@ -350,14 +351,14 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
addr += SECTION_SIZE;
pmd++;
domain = get_domain_name(pmd);
note_page(st, addr, 3, pmd_val(*pmd), domain);
note_page(st, addr, 4, pmd_val(*pmd), domain);
}
}
}

static void walk_pud(struct pg_state *st, pgd_t *pgd, unsigned long start)
static void walk_pud(struct pg_state *st, p4d_t *p4d, unsigned long start)
{
pud_t *pud = pud_offset(pgd, 0);
pud_t *pud = pud_offset(p4d, 0);
unsigned long addr;
unsigned i;

@@ -366,7 +367,23 @@ static void walk_pud(struct pg_state *st, pgd_t *pgd, unsigned long start)
if (!pud_none(*pud)) {
walk_pmd(st, pud, addr);
} else {
note_page(st, addr, 2, pud_val(*pud), NULL);
note_page(st, addr, 3, pud_val(*pud), NULL);
}
}
}

static void walk_p4d(struct pg_state *st, pgd_t *pgd, unsigned long start)
{
p4d_t *p4d = p4d_offset(pgd, 0);
unsigned long addr;
unsigned i;

for (i = 0; i < PTRS_PER_P4D; i++, p4d++) {
addr = start + i * P4D_SIZE;
if (!p4d_none(*p4d)) {
walk_pud(st, p4d, addr);
} else {
note_page(st, addr, 2, p4d_val(*p4d), NULL);
}
}
}
@@ -381,7 +398,7 @@ static void walk_pgd(struct pg_state *st, struct mm_struct *mm,
for (i = 0; i < PTRS_PER_PGD; i++, pgd++) {
addr = start + i * PGDIR_SIZE;
if (!pgd_none(*pgd)) {
walk_pud(st, pgd, addr);
walk_p4d(st, pgd, addr);
} else {
note_page(st, addr, 1, pgd_val(*pgd), NULL);
}
7 changes: 6 additions & 1 deletion arch/arm/mm/fault-armv.c
@@ -91,6 +91,7 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
{
spinlock_t *ptl;
pgd_t *pgd;
p4d_t *p4d;
pud_t *pud;
pmd_t *pmd;
pte_t *pte;
@@ -100,7 +101,11 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
if (pgd_none_or_clear_bad(pgd))
return 0;

pud = pud_offset(pgd, address);
p4d = p4d_offset(pgd, address);
if (p4d_none_or_clear_bad(p4d))
return 0;

pud = pud_offset(p4d, address);
if (pud_none_or_clear_bad(pud))
return 0;

22 changes: 14 additions & 8 deletions arch/arm/mm/fault.c
@@ -43,19 +43,21 @@ void show_pte(const char *lvl, struct mm_struct *mm, unsigned long addr)
printk("%s[%08lx] *pgd=%08llx", lvl, addr, (long long)pgd_val(*pgd));

do {
p4d_t *p4d;
pud_t *pud;
pmd_t *pmd;
pte_t *pte;

if (pgd_none(*pgd))
p4d = p4d_offset(pgd, addr);
if (p4d_none(*p4d))
break;

if (pgd_bad(*pgd)) {
if (p4d_bad(*p4d)) {
pr_cont("(bad)");
break;
}

pud = pud_offset(pgd, addr);
pud = pud_offset(p4d, addr);
if (PTRS_PER_PUD != 1)
pr_cont(", *pud=%08llx", (long long)pud_val(*pud));

@@ -405,6 +407,7 @@ do_translation_fault(unsigned long addr, unsigned int fsr,
{
unsigned int index;
pgd_t *pgd, *pgd_k;
p4d_t *p4d, *p4d_k;
pud_t *pud, *pud_k;
pmd_t *pmd, *pmd_k;

@@ -419,13 +422,16 @@
pgd = cpu_get_pgd() + index;
pgd_k = init_mm.pgd + index;

if (pgd_none(*pgd_k))
p4d = p4d_offset(pgd, addr);
p4d_k = p4d_offset(pgd_k, addr);

if (p4d_none(*p4d_k))
goto bad_area;
if (!pgd_present(*pgd))
set_pgd(pgd, *pgd_k);
if (!p4d_present(*p4d))
set_p4d(p4d, *p4d_k);

pud = pud_offset(pgd, addr);
pud_k = pud_offset(pgd_k, addr);
pud = pud_offset(p4d, addr);
pud_k = pud_offset(p4d_k, addr);

if (pud_none(*pud_k))
goto bad_area;
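Several of the arm hunks above (uaccess_with_memcpy.c, dump.c, fault-armv.c, fault.c) are the same mechanical conversion from Mike Rapoport's __ARCH_HAS_5LEVEL_HACK removal mentioned in the commit message: pud_offset() now takes a p4d_t * rather than a pgd_t *, so every page-table walker gains an explicit p4d step. As a reference for the pattern, a sketch of the resulting canonical five-level walk (walk_example is a made-up name; the *_offset() and *_none_or_clear_bad() helpers are the standard kernel API):

/* Hypothetical walker showing the five-level descent; on architectures
 * with fewer levels the unused *_offset() steps fold away at compile
 * time. Caller must pte_unmap() the returned entry. */
static pte_t *walk_example(struct mm_struct *mm, unsigned long addr)
{
	pgd_t *pgd = pgd_offset(mm, addr);
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;

	if (pgd_none_or_clear_bad(pgd))
		return NULL;
	p4d = p4d_offset(pgd, addr);	/* the new level between pgd and pud */
	if (p4d_none_or_clear_bad(p4d))
		return NULL;
	pud = pud_offset(p4d, addr);	/* now takes p4d_t *, not pgd_t * */
	if (pud_none_or_clear_bad(pud))
		return NULL;
	pmd = pmd_offset(pud, addr);
	if (pmd_none_or_clear_bad(pmd))
		return NULL;
	return pte_offset_map(pmd, addr);
}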
