mm: Remove HARDENED_USERCOPY_FALLBACK
The HARDENED_USERCOPY_FALLBACK option has served its purpose and is no
longer used.  All usercopy violations appear to have been handled by
now; any remaining instances (or new bugs) will cause copies to be
rejected.

This isn't a direct revert of commit 2d891fb ("usercopy: Allow
strict enforcement of whitelists"); since usercopy_fallback is
effectively 0, the fallback handling is removed too.

This also removes the usercopy_fallback module parameter on slab_common.

Link: https://github.com/KSPP/linux/issues/153
Link: https://lkml.kernel.org/r/20210921061149.1091163-1-steve@sk2.org
Signed-off-by: Stephen Kitt <steve@sk2.org>
Suggested-by: Kees Cook <keescook@chromium.org>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>	[defconfig change]
Acked-by: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E . Hallyn" <serge@hallyn.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
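
For context: a usercopy whitelist is the (useroffset, usersize) window that a
slab cache declares as the only part of its objects that copy_to_user() /
copy_from_user() may touch. Caches declare it with the real
kmem_cache_create_usercopy() API; a minimal sketch, where struct foo and
foo_cache are hypothetical names:

    #include <linux/slab.h>
    #include <linux/spinlock.h>
    #include <linux/stddef.h>

    /* Hypothetical object: only data[] is ever copied to/from userspace. */
    struct foo {
            spinlock_t lock;
            char data[64];
    };

    static struct kmem_cache *foo_cache;

    static int __init foo_cache_init(void)
    {
            foo_cache = kmem_cache_create_usercopy("foo", sizeof(struct foo),
                            0, SLAB_HWCACHE_ALIGN,
                            offsetof(struct foo, data),     /* useroffset */
                            sizeof_field(struct foo, data), /* usersize */
                            NULL);
            return foo_cache ? 0 : -ENOMEM;
    }

With the fallback gone, any copy that leaves this window is rejected outright.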
skitt authored and henricaodopao1 committed May 11, 2024
1 parent 6a0e4d4 commit cb296d9
Showing 5 changed files with 0 additions and 51 deletions.
2 changes: 0 additions & 2 deletions include/linux/slab.h
@@ -137,8 +137,6 @@ struct mem_cgroup;
 void __init kmem_cache_init(void);
 bool slab_is_available(void);
 
-extern bool usercopy_fallback;
-
 struct kmem_cache *kmem_cache_create(const char *name, unsigned int size,
 			unsigned int align, slab_flags_t flags,
 			void (*ctor)(void *));
13 changes: 0 additions & 13 deletions mm/slab.c
@@ -4446,19 +4446,6 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 	    n <= cachep->useroffset - offset + cachep->usersize)
 		return;
 
-	/*
-	 * If the copy is still within the allocated object, produce
-	 * a warning instead of rejecting the copy. This is intended
-	 * to be a temporary method to find any missing usercopy
-	 * whitelists.
-	 */
-	if (usercopy_fallback &&
-	    offset <= cachep->object_size &&
-	    n <= cachep->object_size - offset) {
-		usercopy_warn("SLAB object", cachep->name, to_user, offset, n);
-		return;
-	}
-
 	usercopy_abort("SLAB object", cachep->name, to_user, offset, n);
 }
 #endif /* CONFIG_HARDENED_USERCOPY */
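
The surviving check is pure window arithmetic. A standalone sketch of the same
test (the kernel performs it on cachep->useroffset/usersize; the values below
are made up for the worked example):

    #include <stdbool.h>

    static bool in_whitelist(unsigned int offset, unsigned int n,
                             unsigned int useroffset, unsigned int usersize)
    {
            return offset >= useroffset &&
                   offset - useroffset <= usersize &&   /* guards the unsigned wrap below */
                   n <= useroffset - offset + usersize; /* = usersize - (offset - useroffset) */
    }

    /* With useroffset = 16, usersize = 64 (window: bytes 16..79):
     *   offset = 20, n = 32:  32 <= 16 - 20 + 64 = 60  -> allowed
     *   offset = 20, n = 64:  64 >  60                 -> usercopy_abort()
     */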
8 changes: 0 additions & 8 deletions mm/slab_common.c
@@ -32,14 +32,6 @@ LIST_HEAD(slab_caches);
 DEFINE_MUTEX(slab_mutex);
 struct kmem_cache *kmem_cache;
 
-#ifdef CONFIG_HARDENED_USERCOPY
-bool usercopy_fallback __ro_after_init =
-		IS_ENABLED(CONFIG_HARDENED_USERCOPY_FALLBACK);
-module_param(usercopy_fallback, bool, 0400);
-MODULE_PARM_DESC(usercopy_fallback,
-		"WARN instead of reject usercopy whitelist violations");
-#endif
-
 static LIST_HEAD(slab_caches_to_rcu_destroy);
 static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work);
 static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
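
As background on the removed knob: module_param() in built-in code registers a
boot parameter namespaced by the defining file's module name, which is where
the "slab_common.usercopy_fallback=Y/N" spelling in the Kconfig help came
from. A minimal sketch of the pattern, with a hypothetical parameter name:

    #include <linux/init.h>
    #include <linux/moduleparam.h>

    /* Settable once on the command line ("slab_common.example_knob=0");
     * mode 0400 leaves it root-readable under
     * /sys/module/slab_common/parameters/. */
    static bool example_knob __ro_after_init = true;
    module_param(example_knob, bool, 0400);
    MODULE_PARM_DESC(example_knob, "example boolean knob");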
14 changes: 0 additions & 14 deletions mm/slub.c
@@ -3955,7 +3955,6 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 {
 	struct kmem_cache *s;
 	unsigned int offset;
-	size_t object_size;
 
 	ptr = kasan_reset_tag(ptr);
 
@@ -3984,19 +3983,6 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 	    n <= s->useroffset - offset + s->usersize)
 		return;
 
-	/*
-	 * If the copy is still within the allocated object, produce
-	 * a warning instead of rejecting the copy. This is intended
-	 * to be a temporary method to find any missing usercopy
-	 * whitelists.
-	 */
-	object_size = slab_ksize(s);
-	if (usercopy_fallback &&
-	    offset <= object_size && n <= object_size - offset) {
-		usercopy_warn("SLUB object", s->name, to_user, offset, n);
-		return;
-	}
-
 	usercopy_abort("SLUB object", s->name, to_user, offset, n);
 }
 #endif /* CONFIG_HARDENED_USERCOPY */
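
The user-visible change, in a hypothetical caller (struct foo and foo_cache as
sketched above; note hardened usercopy only checks copies whose size is not a
compile-time constant):

    #include <linux/slab.h>
    #include <linux/uaccess.h>

    static long foo_read(struct foo *p, void __user *ubuf, size_t len)
    {
            /* OK: stays entirely inside the whitelisted data[] window. */
            if (len <= sizeof(p->data))
                    return copy_to_user(ubuf, p->data, len) ? -EFAULT : 0;

            /* Starts at the non-whitelisted lock field: previously a
             * one-off usercopy_warn() under usercopy_fallback, now
             * __check_heap_object() calls usercopy_abort() (BUG). */
            return copy_to_user(ubuf, p, len) ? -EFAULT : 0;
    }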
14 changes: 0 additions & 14 deletions security/Kconfig
@@ -163,20 +163,6 @@ config HARDENED_USERCOPY
 	  or are part of the kernel text. This kills entire classes
 	  of heap overflow exploits and similar kernel memory exposures.
 
-config HARDENED_USERCOPY_FALLBACK
-	bool "Allow usercopy whitelist violations to fallback to object size"
-	depends on HARDENED_USERCOPY
-	default y
-	help
-	  This is a temporary option that allows missing usercopy whitelists
-	  to be discovered via a WARN() to the kernel log, instead of
-	  rejecting the copy, falling back to non-whitelisted hardened
-	  usercopy that checks the slab allocation size instead of the
-	  whitelist size. This option will be removed once it seems like
-	  all missing usercopy whitelists have been identified and fixed.
-	  Booting with "slab_common.usercopy_fallback=Y/N" can change
-	  this setting.
-
 config HARDENED_USERCOPY_PAGESPAN
 	bool "Refuse to copy allocations that span multiple pages"
 	depends on HARDENED_USERCOPY
