[scudo][standalone] Implement TSD registry disabling
Summary:
In order to implement `malloc_{enable|disable}`, we were only disabling
(or rather, locking) the Primary and the Secondary. That meant that
allocations could still be serviced from the TSD as long as the cache
didn't have to be refilled from the Primary.

This wasn't working out for Android tests, so this change implements
registry disabling (that is, locking) so that `getTSDAndLock` doesn't
return a TSD if the allocator is disabled. This also means that the
Primary doesn't have to be disabled in this situation.

For the Shared Registry, we loop through all the TSDs and lock them.
For the Exclusive Registry, we add a `Disabled` boolean to the Registry
that forces `getTSDAndLock` to use the Fallback TSD instead of the
thread-local one. Disabling the Registry is then done by locking the
Fallback TSD and setting the boolean in question (I don't think this
strictly needs an atomic variable, but I might be wrong).
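The exclusive-registry scheme described above can be modeled with standard C++ primitives. This is a toy sketch with invented names (`ToyExclusiveRegistry`, plain `int`s standing in for TSDs), not the actual Scudo code:

```cpp
#include <atomic>
#include <mutex>

// Toy model of the exclusive-registry disable scheme; all names illustrative.
struct ToyExclusiveRegistry {
  std::atomic<bool> Disabled{false};
  std::mutex FallbackLock; // stands in for the Fallback TSD's mutex
  int ThreadTSD = 1;       // stands in for the thread-local TSD
  int FallbackTSD = 2;     // stands in for the shared fallback TSD

  // Fast path: hand out the thread-local TSD unless the registry is disabled.
  int *getTSDAndLock(bool *UnlockRequired) {
    if (!Disabled.load(std::memory_order_acquire)) {
      *UnlockRequired = false;
      return &ThreadTSD;
    }
    // Slow path: every thread funnels through the fallback TSD; since
    // disable() holds FallbackLock, they all block here until enable().
    FallbackLock.lock();
    *UnlockRequired = true;
    return &FallbackTSD;
  }

  void disable() {
    FallbackLock.lock();                             // block the slow path
    Disabled.store(true, std::memory_order_release); // divert the fast path
  }

  void enable() {
    Disabled.store(false, std::memory_order_release);
    FallbackLock.unlock();
  }
};
```

The ordering in `disable()` matters: the fallback lock is taken before the flag flips, so any thread diverted to the slow path is guaranteed to block.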

I clang-formatted the whole thing as usual, hence the couple of extra
whitespace changes in this CL.

Reviewers: cferris, pcc, hctim, morehouse, eugenis

Subscribers: jfb, #sanitizers, llvm-commits

Tags: #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D71719
Kostya Kortchinsky committed Dec 20, 2019
1 parent a9c8453 commit 77e906a
Showing 5 changed files with 47 additions and 8 deletions.
12 changes: 6 additions & 6 deletions compiler-rt/lib/scudo/standalone/combined.h
@@ -22,7 +22,7 @@
 #include "tsd.h"
 
 #ifdef GWP_ASAN_HOOKS
-# include "gwp_asan/guarded_pool_allocator.h"
+#include "gwp_asan/guarded_pool_allocator.h"
 // GWP-ASan is declared here in order to avoid indirect call overhead. It's also
 // instantiated outside of the Allocator class, as the allocator is only
 // zero-initialised. GWP-ASan requires constant initialisation, and the Scudo
@@ -414,19 +414,19 @@ template <class Params> class Allocator {
     return NewPtr;
   }
 
-  // TODO(kostyak): while this locks the Primary & Secondary, it still allows
-  //                pointers to be fetched from the TSD. We ultimately want to
-  //                lock the registry as well. For now, it's good enough.
+  // TODO(kostyak): disable() is currently best-effort. There are some small
+  //                windows of time when an allocation could still succeed after
+  //                this function finishes. We will revisit that later.
   void disable() {
     initThreadMaybe();
-    Primary.disable();
+    TSDRegistry.disable();
     Secondary.disable();
   }
 
   void enable() {
     initThreadMaybe();
     Secondary.enable();
-    Primary.enable();
+    TSDRegistry.enable();
   }
 
   // The function returns the amount of bytes required to store the statistics,
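The `disable()`/`enable()` pair above follows a standard discipline: subsystems are locked in a fixed order and unlocked in the reverse order, which avoids lock-order inversions between the two calls. A minimal sketch with stand-in types (`ToyAllocator` and `std::mutex` are illustrative, not the Scudo classes):

```cpp
#include <mutex>

// Sketch of the lock-ordering contract between disable() and enable().
struct ToyAllocator {
  std::mutex Registry;  // stands in for TSDRegistry
  std::mutex Secondary; // stands in for the Secondary allocator

  void disable() {
    Registry.lock();  // stop TSDs from being handed out first...
    Secondary.lock(); // ...then stop large allocations
  }

  void enable() {
    Secondary.unlock(); // reverse order of disable()
    Registry.unlock();
  }
};
```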
2 changes: 1 addition & 1 deletion compiler-rt/lib/scudo/standalone/flags.cpp
@@ -40,7 +40,7 @@ void registerFlags(FlagParser *Parser, Flags *F) {
 
 #ifdef GWP_ASAN_HOOKS
 #define GWP_ASAN_OPTION(Type, Name, DefaultValue, Description) \
-  Parser->registerFlag("GWP_ASAN_"#Name, Description, FlagType::FT_##Type, \
+  Parser->registerFlag("GWP_ASAN_" #Name, Description, FlagType::FT_##Type, \
                        reinterpret_cast<void *>(&F->GWP_ASAN_##Name));
 #include "gwp_asan/options.inc"
 #undef GWP_ASAN_OPTION
15 changes: 15 additions & 0 deletions compiler-rt/lib/scudo/standalone/tests/wrappers_c_test.cpp
@@ -284,6 +284,21 @@ TEST(ScudoWrappersCTest, MallocIterateBoundary) {
   free(P);
 }
 
+// We expect heap operations within a disable/enable scope to deadlock.
+TEST(ScudoWrappersCTest, MallocDisableDeadlock) {
+  EXPECT_DEATH(
+      {
+        void *P = malloc(Size);
+        EXPECT_NE(P, nullptr);
+        free(P);
+        malloc_disable();
+        alarm(1);
+        P = malloc(Size);
+        malloc_enable();
+      },
+      "");
+}
+
 #if !SCUDO_FUCHSIA
 TEST(ScudoWrappersCTest, MallocInfo) {
   char Buffer[64];
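The death test above works because `malloc()` blocks forever once the allocator is disabled; `alarm(1)` then delivers `SIGALRM`, the process dies, and `EXPECT_DEATH` counts that death as a pass. The same "no allocation while disabled" property can be checked non-destructively with a toy model (illustrative names, an atomic flag instead of the real locks):

```cpp
#include <atomic>

// Toy model of a heap that can be disabled; avoids an actual deadlock by
// reporting, rather than blocking, when allocation is impossible.
struct ToyHeap {
  std::atomic<bool> Disabled{false};

  void disable() { Disabled.store(true); } // Scudo takes real locks here
  void enable() { Disabled.store(false); }

  // Returns whether an allocation could proceed right now; a real malloc()
  // would block instead of returning false.
  bool canAllocate() const { return !Disabled.load(); }
};
```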
16 changes: 15 additions & 1 deletion compiler-rt/lib/scudo/standalone/tsd_exclusive.h
@@ -48,7 +48,8 @@ template <class Allocator> struct TSDRegistryExT {
   }
 
   ALWAYS_INLINE TSD<Allocator> *getTSDAndLock(bool *UnlockRequired) {
-    if (LIKELY(State == ThreadState::Initialized)) {
+    if (LIKELY(State == ThreadState::Initialized &&
+               !atomic_load(&Disabled, memory_order_acquire))) {
       *UnlockRequired = false;
       return &ThreadTSD;
     }
@@ -58,6 +59,18 @@ template <class Allocator> struct TSDRegistryExT {
     return FallbackTSD;
   }
 
+  // To disable the exclusive TSD registry, we effectively lock the fallback
+  // TSD and force all threads to attempt to use it instead of their local one.
+  void disable() {
+    FallbackTSD->lock();
+    atomic_store(&Disabled, 1U, memory_order_release);
+  }
+
+  void enable() {
+    atomic_store(&Disabled, 0U, memory_order_release);
+    FallbackTSD->unlock();
+  }
+
 private:
   void initOnceMaybe(Allocator *Instance) {
     ScopedLock L(Mutex);
@@ -81,6 +94,7 @@ template <class Allocator> struct TSDRegistryExT {
 
   pthread_key_t PThreadKey;
   bool Initialized;
+  atomic_u8 Disabled;
   TSD<Allocator> *FallbackTSD;
   HybridMutex Mutex;
   static THREADLOCAL ThreadState State;
10 changes: 10 additions & 0 deletions compiler-rt/lib/scudo/standalone/tsd_shared.h
@@ -72,6 +72,16 @@ template <class Allocator, u32 MaxTSDCount> struct TSDRegistrySharedT {
     return getTSDAndLockSlow(TSD);
   }
 
+  void disable() {
+    for (u32 I = 0; I < NumberOfTSDs; I++)
+      TSDs[I].lock();
+  }
+
+  void enable() {
+    for (u32 I = 0; I < NumberOfTSDs; I++)
+      TSDs[I].unlock();
+  }
+
 private:
   ALWAYS_INLINE void setCurrentTSD(TSD<Allocator> *CurrentTSD) {
 #if _BIONIC
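The shared registry's `disable()`/`enable()` above simply takes every TSD's mutex, so no thread can acquire a TSD until `enable()` releases them. A toy model with stand-in types (`std::mutex` instead of Scudo's `HybridMutex`, invented names):

```cpp
#include <mutex>
#include <vector>

// Toy model of the shared-registry disable scheme: lock every TSD's mutex.
struct ToySharedRegistry {
  std::vector<std::mutex> TSDLocks; // one per TSD

  explicit ToySharedRegistry(size_t NumberOfTSDs) : TSDLocks(NumberOfTSDs) {}

  void disable() {
    for (auto &M : TSDLocks)
      M.lock();
  }

  void enable() {
    for (auto &M : TSDLocks)
      M.unlock();
  }
};
```

Note that unlocking in the same (forward) order is fine: only the *acquisition* order of multiple locks can deadlock, so `enable()` need not mirror `disable()` here.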