[bmalloc] freeableMemory and footprint of Heap are completely broken
https://bugs.webkit.org/show_bug.cgi?id=230245

Reviewed by Geoffrey Garen.

This was introduced in r279922. That commit changed the physical size of a newly allocated range from zero
to the size of the range, and that change made these numbers wrong. The change itself is the correct fix,
because the range has physical pages attached; it simply activated a bug that had been there for a long time.

I've added a correction that adjusts both numbers when a region is newly allocated. I've also added an
assertion that checks those values for overflow, and an option to log changes to those values to the console.

Fortunately, these numbers are used only for debugging. The Scavenger dumps them to stderr in its verbose
mode; no practical cases are affected by this bug.

Here is an example of footprint logging before the fix:

        >>> footprint: 18446744073709535232 (-16384) scavenge
        footprint: 18446744073709518848 (-16384) scavenge
        footprint: 18446744073709502464 (-16384) scavenge
        footprint: 18446744073709486080 (-16384) scavenge
        footprint: 18446744073709469696 (-16384) scavenge
        footprint: 18446744073709453312 (-16384) scavenge
        ...

The log begins with a negative change that wraps around to a huge unsigned value. Here is the same logging with the fix:

        footprint: 1048576 (1048576) allocateLarge
        footprint: 2097152 (1048576) allocateLarge
        footprint: 3145728 (1048576) allocateLarge
        footprint: 4194304 (1048576) allocateLarge
        footprint: 5242880 (1048576) allocateLarge
        footprint: 6291456 (1048576) allocateLarge
        >>> footprint: 6275072 (-16384) scavenge
        footprint: 6258688 (-16384) scavenge
        footprint: 6242304 (-16384) scavenge
        footprint: 6225920 (-16384) scavenge
        footprint: 6209536 (-16384) scavenge
        footprint: 6193152 (-16384) scavenge
        ...

* bmalloc/Heap.cpp:
(bmalloc::Heap::adjustStat):
(bmalloc::Heap::logStat):
(bmalloc::Heap::adjustFreeableMemory):
(bmalloc::Heap::adjustFootprint):
(bmalloc::Heap::decommitLargeRange):
(bmalloc::Heap::scavenge):
(bmalloc::Heap::allocateSmallChunk):
(bmalloc::Heap::allocateSmallPage):
(bmalloc::Heap::deallocateSmallLine):
(bmalloc::Heap::splitAndAllocate):
(bmalloc::Heap::allocateLarge):
(bmalloc::Heap::deallocateLarge):
(bmalloc::Heap::externalCommit):
(bmalloc::Heap::externalDecommit):
* bmalloc/Heap.h:


Canonical link: https://commits.webkit.org/241981@main
git-svn-id: https://svn.webkit.org/repository/webkit/trunk@282850 268f45cc-cd09-0410-ab3c-d52691b4dbfc
basuke committed Sep 22, 2021
1 parent 2852c82 commit 74463d497f6834b67b80bc226d557593dcb7117c
Showing with 120 additions and 20 deletions.
  1. +61 −0 Source/bmalloc/ChangeLog
  2. +54 −20 Source/bmalloc/bmalloc/Heap.cpp
  3. +5 −0 Source/bmalloc/bmalloc/Heap.h
@@ -1,3 +1,64 @@
2021-09-21 Basuke Suzuki <basuke.suzuki@sony.com>

2021-09-16 Filip Pizlo <fpizlo@apple.com>

Stub out the footprint() API when libpas is in use
@@ -99,19 +99,51 @@ size_t Heap::footprint()
return m_footprint;
}

+BINLINE void Heap::adjustStat(size_t& value, ssize_t amount)
+{
+auto result = value + amount;
+BASSERT((amount >= 0 && result >= value) || (amount < 0 && result < value));
+value = result;
+}
+
+BINLINE void Heap::logStat(size_t value, ssize_t amount, const char* label, const char* note)
+{
+fprintf(stderr, "%s: %lu (%ld) %s\n", label, value, amount, note);
+}
+
+BINLINE void Heap::adjustFreeableMemory(UniqueLockHolder&, ssize_t amount, const char* note)
+{
+constexpr bool verbose = false;
+
+adjustStat(m_freeableMemory, amount);
+
+if constexpr (verbose)
+logStat(m_freeableMemory, amount, "freeableMemory", note);
+}
+
+BINLINE void Heap::adjustFootprint(UniqueLockHolder&, ssize_t amount, const char* note)
+{
+constexpr bool verbose = false;
+
+adjustStat(m_footprint, amount);
+
+if constexpr (verbose)
+logStat(m_footprint, amount, "footprint", note);
+}

void Heap::markAllLargeAsEligibile(const LockHolder&)
{
m_largeFree.markAllAsEligibile();
m_hasPendingDecommits = false;
m_condition.notify_all();
}

-void Heap::decommitLargeRange(UniqueLockHolder&, LargeRange& range, BulkDecommit& decommitter)
+void Heap::decommitLargeRange(UniqueLockHolder& lock, LargeRange& range, BulkDecommit& decommitter)
{
BASSERT(range.hasPhysicalPages());

-m_footprint -= range.totalPhysicalSize();
-m_freeableMemory -= range.totalPhysicalSize();
+adjustFootprint(lock, -range.totalPhysicalSize(), "decommitLargeRange");
+adjustFreeableMemory(lock, -range.totalPhysicalSize(), "decommitLargeRange");
decommitter.addLazy(range.begin(), range.physicalEnd() - range.begin());
m_hasPendingDecommits = true;
range.setStartPhysicalSize(0);
@@ -139,8 +171,8 @@ void Heap::scavenge(UniqueLockHolder& lock, BulkDecommit& decommitter, size_t& d

size_t pageSize = bmalloc::pageSize(&list - &m_freePages[0]);
size_t decommitSize = physicalPageSizeSloppy(page->begin()->begin(), pageSize);
-m_freeableMemory -= decommitSize;
-m_footprint -= decommitSize;
+adjustFootprint(lock, -decommitSize, "scavenge");
+adjustFreeableMemory(lock, -decommitSize, "scavenge");
decommitter.addEager(page->begin()->begin(), pageSize);
page->setHasPhysicalPages(false);
#if ENABLE_PHYSICAL_PAGE_MAP
@@ -205,7 +237,7 @@ void Heap::allocateSmallChunk(UniqueLockHolder& lock, size_t pageClass, FailureA
chunk->freePages().push(page);
});

-m_freeableMemory += chunkSize;
+adjustFreeableMemory(lock, chunkSize, "allocateSmallChunk");

m_scavenger->schedule(0);

@@ -275,10 +307,10 @@ SmallPage* Heap::allocateSmallPage(UniqueLockHolder& lock, size_t sizeClass, Lin
size_t pageSize = bmalloc::pageSize(pageClass);
size_t physicalSize = physicalPageSizeSloppy(page->begin()->begin(), pageSize);
if (page->hasPhysicalPages())
-m_freeableMemory -= physicalSize;
+adjustFreeableMemory(lock, -physicalSize, "allocateSmallPage");
else {
m_scavenger->scheduleIfUnderMemoryPressure(pageSize);
-m_footprint += physicalSize;
+adjustFootprint(lock, physicalSize, "allocateSmallPage");
vmAllocatePhysicalPagesSloppy(page->begin()->begin(), pageSize);
page->setHasPhysicalPages(true);
#if ENABLE_PHYSICAL_PAGE_MAP
@@ -314,7 +346,7 @@ void Heap::deallocateSmallLine(UniqueLockHolder& lock, Object object, LineCache&

size_t pageClass = m_constants.pageClass(page->sizeClass());

-m_freeableMemory += physicalPageSizeSloppy(page->begin()->begin(), pageSize(pageClass));
+adjustFreeableMemory(lock, physicalPageSizeSloppy(page->begin()->begin(), pageSize(pageClass)), "deallocateSmallLine");

List<SmallPage>::remove(page); // 'page' may be in any thread's line cache.

@@ -490,7 +522,7 @@ LargeRange Heap::splitAndAllocate(UniqueLockHolder& lock, LargeRange& range, siz

if (range.startPhysicalSize() < range.size()) {
m_scavenger->scheduleIfUnderMemoryPressure(range.size());
-m_footprint += range.size() - range.totalPhysicalSize();
+adjustFootprint(lock, range.size() - range.totalPhysicalSize(), "splitAndAllocate");
vmAllocatePhysicalPagesSloppy(range.begin() + range.startPhysicalSize(), range.size() - range.startPhysicalSize());
range.setStartPhysicalSize(range.size());
range.setTotalPhysicalSize(range.size());
@@ -501,12 +533,12 @@ LargeRange Heap::splitAndAllocate(UniqueLockHolder& lock, LargeRange& range, siz
}

if (prev) {
-m_freeableMemory += prev.totalPhysicalSize();
+adjustFreeableMemory(lock, prev.totalPhysicalSize(), "splitAndAllocate.prev");
m_largeFree.add(prev);
}

if (next) {
-m_freeableMemory += next.totalPhysicalSize();
+adjustFreeableMemory(lock, next.totalPhysicalSize(), "splitAndAllocate.next");
m_largeFree.add(next);
}

@@ -552,10 +584,12 @@ void* Heap::allocateLarge(UniqueLockHolder& lock, size_t alignment, size_t size,
ASSERT_OR_RETURN_ON_FAILURE(range);

m_largeFree.add(range);
+adjustFreeableMemory(lock, range.totalPhysicalSize(), "allocateLarge");
+adjustFootprint(lock, range.totalPhysicalSize(), "allocateLarge");

range = m_largeFree.remove(alignment, size);
}

-m_freeableMemory -= range.totalPhysicalSize();
+adjustFreeableMemory(lock, -range.totalPhysicalSize(), "allocateLarge.reuse");

void* result = splitAndAllocate(lock, range, alignment, size).begin();
ASSERT_OR_RETURN_ON_FAILURE(result);
@@ -605,11 +639,11 @@ void Heap::shrinkLarge(UniqueLockHolder& lock, const Range& object, size_t newSi
m_scavenger->schedule(size);
}

-void Heap::deallocateLarge(UniqueLockHolder&, void* object)
+void Heap::deallocateLarge(UniqueLockHolder& lock, void* object)
{
size_t size = m_largeAllocated.remove(object);
m_largeFree.add(LargeRange(object, size, size, size, static_cast<char*>(object) + size));
-m_freeableMemory += size;
+adjustFreeableMemory(lock, size, "deallocateLarge");
m_scavenger->schedule(size);
}

@@ -619,11 +653,11 @@ void Heap::externalCommit(void* ptr, size_t size)
externalCommit(lock, ptr, size);
}

-void Heap::externalCommit(UniqueLockHolder&, void* ptr, size_t size)
+void Heap::externalCommit(UniqueLockHolder& lock, void* ptr, size_t size)
{
BUNUSED_PARAM(ptr);

-m_footprint += size;
+adjustFootprint(lock, size, "externalCommit");
#if ENABLE_PHYSICAL_PAGE_MAP
m_physicalPageMap.commit(ptr, size);
#endif
@@ -635,11 +669,11 @@ void Heap::externalDecommit(void* ptr, size_t size)
externalDecommit(lock, ptr, size);
}

-void Heap::externalDecommit(UniqueLockHolder&, void* ptr, size_t size)
+void Heap::externalDecommit(UniqueLockHolder& lock, void* ptr, size_t size)
{
BUNUSED_PARAM(ptr);

-m_footprint -= size;
+adjustFootprint(lock, -size, "externalDecommit");
#if ENABLE_PHYSICAL_PAGE_MAP
m_physicalPageMap.decommit(ptr, size);
#endif
@@ -117,6 +117,11 @@ class Heap {
LargeRange tryAllocateLargeChunk(size_t alignment, size_t);
LargeRange splitAndAllocate(UniqueLockHolder&, LargeRange&, size_t alignment, size_t);

+inline void adjustFootprint(UniqueLockHolder&, ssize_t, const char* note);
+inline void adjustFreeableMemory(UniqueLockHolder&, ssize_t, const char* note);
+inline void adjustStat(size_t& value, ssize_t);
+inline void logStat(size_t value, ssize_t amount, const char* label, const char* note);

HeapKind m_kind;
HeapConstants& m_constants;
