
[scudo] Don't preserve space for regions at init() (DO NOT MERGE) #74531

Closed · wants to merge 1 commit (from the non-preserve-regions branch)

Conversation

ChiaHungDuan (Contributor)

This is only a proof-of-concept and will only be used in certain scenarios.

@ChiaHungDuan (Contributor, Author)

FYI, @cferris1000

@llvmbot (Collaborator) commented Dec 5, 2023

@llvm/pr-subscribers-compiler-rt-sanitizer

Author: None (ChiaHungDuan)

Changes

This is only a proof-of-concept and will only be used in certain scenarios.


Full diff: https://github.com/llvm/llvm-project/pull/74531.diff

1 file affected:

  • (modified) compiler-rt/lib/scudo/standalone/primary64.h (+32)
diff --git a/compiler-rt/lib/scudo/standalone/primary64.h b/compiler-rt/lib/scudo/standalone/primary64.h
index d1929ff7212f4..000364a810702 100644
--- a/compiler-rt/lib/scudo/standalone/primary64.h
+++ b/compiler-rt/lib/scudo/standalone/primary64.h
@@ -119,11 +119,13 @@ template <typename Config> class SizeClassAllocator64 {
     SmallerBlockReleasePageDelta =
         PagesInGroup * (1 + MinSizeClass / 16U) / 100;
 
+#if 0
     // Reserve the space required for the Primary.
     CHECK(ReservedMemory.create(/*Addr=*/0U, PrimarySize,
                                 "scudo:primary_reserve"));
     PrimaryBase = ReservedMemory.getBase();
     DCHECK_NE(PrimaryBase, 0U);
+#endif
 
     u32 Seed;
     const u64 Time = getMonotonicTimeFast();
@@ -133,12 +135,14 @@ template <typename Config> class SizeClassAllocator64 {
     for (uptr I = 0; I < NumClasses; I++) {
       RegionInfo *Region = getRegionInfo(I);
 
+#if 0
       // The actual start of a region is offset by a random number of pages
       // when PrimaryEnableRandomOffset is set.
       Region->RegionBeg = (PrimaryBase + (I << RegionSizeLog)) +
                           (Config::Primary::EnableRandomOffset
                                ? ((getRandomModN(&Seed, 16) + 1) * PageSize)
                                : 0);
+#endif
       Region->RandState = getRandomU32(&Seed);
       // Releasing small blocks is expensive, set a higher threshold to avoid
       // frequent page releases.
@@ -148,9 +152,11 @@ template <typename Config> class SizeClassAllocator64 {
         Region->TryReleaseThreshold = PageSize;
       Region->ReleaseInfo.LastReleaseAtNs = Time;
 
+#if 0
       Region->MemMapInfo.MemMap = ReservedMemory.dispatch(
           PrimaryBase + (I << RegionSizeLog), RegionSize);
       CHECK(Region->MemMapInfo.MemMap.isAllocated());
+#endif
     }
     shuffle(RegionInfoArray, NumClasses, &Seed);
 
@@ -165,6 +171,10 @@ template <typename Config> class SizeClassAllocator64 {
   void unmapTestOnly() NO_THREAD_SAFETY_ANALYSIS {
     for (uptr I = 0; I < NumClasses; I++) {
       RegionInfo *Region = getRegionInfo(I);
+      MemMapT &MemMap = Region->MemMapInfo.MemMap;
+      if (MemMap.isAllocated())
+        MemMap.unmap(MemMap.getBase(), MemMap.getCapacity());
+
       *Region = {};
     }
     if (PrimaryBase)
@@ -586,9 +596,16 @@ template <typename Config> class SizeClassAllocator64 {
   }
 
   uptr getRegionBaseByClassId(uptr ClassId) {
+    RegionInfo *Region = getRegionInfo(ClassId);
+    Region->MMLock.assertHeld();
+    if (!Region->MemMapInfo.MemMap.isAllocated())
+      return 0U;
+    return Region->MemMapInfo.MemMap.getBase();
+#if 0
     return roundDown(getRegionInfo(ClassId)->RegionBeg - PrimaryBase,
                      RegionSize) +
            PrimaryBase;
+#endif
   }
 
   static CompactPtrT compactPtrInternal(uptr Base, uptr Ptr) {
@@ -982,6 +999,21 @@ template <typename Config> class SizeClassAllocator64 {
     const uptr Size = getSizeByClassId(ClassId);
     const u16 MaxCount = CacheT::getMaxCached(Size);
 
+    MemMapT &RegionMemMap = Region->MemMapInfo.MemMap;
+    if (UNLIKELY(!RegionMemMap.isAllocated())) {
+      DCHECK_EQ(Region->RegionBeg, 0U);
+      ReservedMemoryT ReservedMemory;
+      ReservedMemory.create(/*Addr=*/0U, RegionSize, "scudo:primary_reserve");
+      RegionMemMap = ReservedMemory.dispatch(ReservedMemory.getBase(),
+                                             ReservedMemory.getCapacity());
+      CHECK(RegionMemMap.isAllocated());
+
+      Region->RegionBeg = RegionMemMap.getBase() +
+                          (Config::Primary::EnableRandomOffset
+                               ? ((getRandomModN(&Region->RandState, 16) + 1) *
+                                  getPageSizeCached())
+                               : 0);
+    }
     const uptr RegionBeg = Region->RegionBeg;
     const uptr MappedUser = Region->MemMapInfo.MappedUser;
     const uptr TotalUserBytes =
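
The hunks above remove the single up-front reservation and instead map each region the first time the allocation path needs it. Below is a minimal standalone sketch of that lazy-reservation idea, using plain POSIX mmap rather than Scudo's MemMap API; Region, nextRandom, and reserveRegionOnDemand are illustrative names, not the PR's actual code.

#include <sys/mman.h>
#include <cstdint>
#include <cstdio>

namespace {

constexpr uintptr_t RegionSize = 1UL << 28; // illustrative 256 MiB per region

struct Region {
  uintptr_t Base = 0;               // 0 until the region is reserved
  uintptr_t RegionBeg = 0;          // Base plus an optional random page offset
  uint32_t RandState = 0x9e3779b9u; // illustrative seed
};

// Small xorshift PRNG, standing in for getRandomModN().
uint32_t nextRandom(uint32_t *State) {
  uint32_t X = *State;
  X ^= X << 13;
  X ^= X >> 17;
  X ^= X << 5;
  return *State = X;
}

// Reserve the region lazily, on the first allocation that needs it,
// instead of carving it out of one big reservation at init().
bool reserveRegionOnDemand(Region *R, uintptr_t PageSize) {
  if (R->Base != 0)
    return true; // already reserved
  void *P = mmap(nullptr, RegionSize, PROT_NONE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
  if (P == MAP_FAILED)
    return false; // VA space exhausted; caller decides how to report it
  R->Base = reinterpret_cast<uintptr_t>(P);
  // The OS already randomizes the mapping's base, but keep the in-region
  // offset to mirror the EnableRandomOffset behavior in the diff.
  R->RegionBeg = R->Base + (nextRandom(&R->RandState) % 16 + 1) * PageSize;
  return true;
}

} // namespace

int main() {
  Region R;
  if (!reserveRegionOnDemand(&R, 4096)) {
    fprintf(stderr, "region reservation failed\n");
    return 1;
  }
  printf("region begins at %#lx\n", static_cast<unsigned long>(R.RegionBeg));
  return 0;
}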

Review thread on the on-demand reservation hunk in primary64.h:

    MemMapT &RegionMemMap = Region->MemMapInfo.MemMap;
    if (UNLIKELY(!RegionMemMap.isAllocated())) {
      DCHECK_EQ(Region->RegionBeg, 0U);
      ReservedMemoryT ReservedMemory;
      ReservedMemory.create(/*Addr=*/0U, RegionSize, "scudo:primary_reserve");
Contributor:
Does this only reserve the memory for this region?

Can it be modified to allow failure and return nullptr? The person trying to use this sees a failure here, but it could mean that the allowed VA space has been exhausted.
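
For illustration, here is one possible shape for that suggestion: a sketch with simplified stand-in types rather than Scudo's real ReservedMemory/MemMap API, in which the AllowNoMem parameter plays the role of Scudo's MAP_ALLOWNOMEM flag.

#include <sys/mman.h>
#include <cstdint>
#include <cstdlib>

// Sketch only: a simplified stand-in for Scudo's ReservedMemory type.
// With AllowNoMem set, a failed reservation returns false instead of
// aborting, so the allocation path can surface VA exhaustion as an OOM.
struct ReservedMemorySketch {
  uintptr_t Base = 0;
  size_t Capacity = 0;

  bool create(uintptr_t Addr, size_t Size, bool AllowNoMem) {
    void *P = mmap(reinterpret_cast<void *>(Addr), Size, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (P == MAP_FAILED) {
      if (AllowNoMem)
        return false; // report failure to the caller
      abort();        // fatal path, akin to CHECK() in the PoC
    }
    Base = reinterpret_cast<uintptr_t>(P);
    Capacity = Size;
    return true;
  }
};

// At the call site sketched above, the failure would then propagate as a
// null result rather than a crash, e.g.:
//
//   ReservedMemorySketch ReservedMemory;
//   if (!ReservedMemory.create(/*Addr=*/0U, RegionSize, /*AllowNoMem=*/true))
//     return nullptr; // allowed VA space exhausted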

Contributor Author:

You're right. Let me do it.

Contributor Author:

Done. BTW, I did push -f to avoid having to cherry-pick all fix-ups.

@ChiaHungDuan force-pushed the non-preserve-regions branch 2 times, most recently from e702f9e to 71d3d25 on December 11, 2023 at 22:15
@ChiaHungDuan (Contributor, Author)

The latest revision allows region reservation to fail.

Review thread on ReservedMemoryLinux::createImpl:

@@ -129,7 +129,7 @@ void MemMapLinux::releaseAndZeroPagesToOSImpl(uptr From, uptr Size) {
 bool ReservedMemoryLinux::createImpl(uptr Addr, uptr Size, const char *Name,
                                      uptr Flags) {
   ReservedMemoryLinux::MemMapT MemMap;
   if (!MemMap.map(Addr, Size, Name, Flags | MAP_NOACCESS))
Contributor:

I think the choice of whether to pass MAP_ALLOWNOMEM or not can be left to the caller, like we do in ReservedMemoryFuchsia::createImpl
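
Sketched with hypothetical names (not Scudo's exact signatures), the suggestion amounts to createImpl adding only the bits it must and forwarding the rest, so the caller decides whether a failed reservation is fatal:

#include <sys/mman.h>
#include <cstdint>
#include <cstdlib>

// Illustrative flag bits; Scudo defines its own MAP_* flags internally.
constexpr uint32_t kMapNoAccess = 1u << 0;
constexpr uint32_t kMapAllowNoMem = 1u << 1;

// Hypothetical platform primitive standing in for MemMap::map().
bool platformMap(uintptr_t Addr, size_t Size, const char * /*Name*/,
                 uint32_t Flags) {
  const int Prot =
      (Flags & kMapNoAccess) ? PROT_NONE : (PROT_READ | PROT_WRITE);
  void *P = mmap(reinterpret_cast<void *>(Addr), Size, Prot,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (P == MAP_FAILED) {
    if (Flags & kMapAllowNoMem)
      return false; // caller opted in to handle the failure
    abort();        // otherwise fatal, like a CHECK()
  }
  return true;
}

// createImpl adds only the no-access bit and forwards the caller's flags,
// in the spirit of ReservedMemoryFuchsia::createImpl:
bool createReservation(uintptr_t Addr, size_t Size, const char *Name,
                       uint32_t CallerFlags) {
  return platformMap(Addr, Size, Name, CallerFlags | kMapNoAccess);
}

// A caller that can tolerate VA exhaustion would then write:
//   createReservation(0, Size, "scudo:primary_reserve", kMapAllowNoMem);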

Contributor Author:

As mentioned in the description, this is just a proof-of-concept, so most of the changes are quick ones. The formal CL will review all the logic later.

@ChiaHungDuan force-pushed the non-preserve-regions branch 2 times, most recently from 4ac1127 to e157950 on February 26, 2024 at 23:02
This removes the requirement of reserving the memory for all regions at init(). That reservation needs a huge amount of contiguous pages, which may be a challenge in certain cases. Therefore, this adds a new flag, PreserveAllRegions, to indicate whether to reserve all regions up front or to allocate each region on demand.

Note that when regions are allocated on demand (PreserveAllRegions disabled), EnableRandomOffset becomes irrelevant because the base of each region is already random.
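
As one way to picture how such a knob could be consumed, here is a minimal sketch assuming a hypothetical compile-time config; Scudo's real config plumbing and the final flag spelling may differ.

// Hypothetical config in the spirit of Scudo's allocator configs.
struct ExampleConfig {
  static constexpr bool PreserveAllRegions = false; // reserve on demand
  static constexpr bool EnableRandomOffset = true;  // ignored when lazy
};

template <typename Config> struct SizeClassAllocatorSketch {
  void init() {
    if constexpr (Config::PreserveAllRegions) {
      // Eager path: one big contiguous reservation, regions carved out
      // here, with EnableRandomOffset perturbing each region's start.
      reserveAllRegionsAtInit();
    }
    // Lazy path: nothing to reserve here. Each region is mapped on first
    // use, and its base is already randomized by the OS, which is why
    // EnableRandomOffset becomes irrelevant in that mode.
  }
  void reserveAllRegionsAtInit() { /* elided */ }
};

// Instantiation picks the policy at compile time:
//   SizeClassAllocatorSketch<ExampleConfig> Allocator;
//   Allocator.init();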