Copying should be generational
https://bugs.webkit.org/show_bug.cgi?id=126555

Reviewed by Geoffrey Garen.

This patch adds support for copying to our generational collector. Eden collections
always trigger copying. Full collections use our normal fragmentation-based heuristics.
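
As a rough sketch of that policy (the names and the fragmentation threshold below are illustrative, not JSC's actual interfaces):

    #include <cstddef>

    // Hypothetical policy sketch: eden collections always copy; full
    // collections copy only when the copied space is fragmented enough.
    enum class CollectionType { Eden, Full };

    static bool shouldDoCopyingPhase(CollectionType type, std::size_t fragmentedBytes, std::size_t capacity)
    {
        if (type == CollectionType::Eden)
            return true;
        return fragmentedBytes > capacity / 2; // illustrative threshold only
    }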

CopiedSpace now tracks two sets of CopiedBlocks: an old generation and a new generation. During
each mutator cycle, new CopiedSpace allocations reside in the new generation. When a collection
occurs, those blocks are promoted to the old generation.
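
A minimal sketch of that two-generation bookkeeping, assuming a simple set container (the real
CopiedSpace uses a CopiedGeneration helper with block lists, bloom filters, and locks):

    #include <unordered_set>

    struct CopiedBlock; // stand-in for JSC's CopiedBlock

    struct TwoGenerationCopiedSpace {
        std::unordered_set<CopiedBlock*> newGen; // mutator allocations since the last collection
        std::unordered_set<CopiedBlock*> oldGen; // blocks that survived a prior collection

        // At the end of a collection, everything allocated during the last
        // mutator cycle is promoted into the old generation.
        void promoteNewGeneration()
        {
            oldGen.insert(newGen.begin(), newGen.end());
            newGen.clear();
        }
    };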

One key thing to remember is that both new- and old-generation objects in the MarkedSpace can
refer to old- or new-generation allocations in CopiedSpace. This is why we must fire a write
barrier when assigning to an old (MarkedSpace) object's Butterfly.
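
The barrier requirement can be illustrated with stand-in types (the real code uses JSC's Heap,
JSObject, and Butterfly; the remembered-set call below is a placeholder, not JSC's actual barrier):

    struct Butterfly;
    struct JSObject { Butterfly* butterfly { nullptr }; bool isOld { false }; };
    struct Heap {
        // Record an old cell so the next eden collection re-visits it and
        // finds the new-generation backing store it now points at.
        void addToRememberedSet(JSObject*) { }
    };

    inline void setButterflyWithBarrier(Heap& heap, JSObject& owner, Butterfly* newStorage)
    {
        owner.butterfly = newStorage;
        if (owner.isOld)
            heap.addToRememberedSet(&owner);
    }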

* heap/CopiedAllocator.h:
(JSC::CopiedAllocator::tryAllocateDuringCopying):
* heap/CopiedBlock.h:
(JSC::CopiedBlock::CopiedBlock):
(JSC::CopiedBlock::didEvacuateBytes):
(JSC::CopiedBlock::isOld):
(JSC::CopiedBlock::didPromote):
* heap/CopiedBlockInlines.h:
(JSC::CopiedBlock::reportLiveBytes):
(JSC::CopiedBlock::reportLiveBytesDuringCopying):
* heap/CopiedSpace.cpp:
(JSC::CopiedSpace::CopiedSpace):
(JSC::CopiedSpace::~CopiedSpace):
(JSC::CopiedSpace::init):
(JSC::CopiedSpace::tryAllocateOversize):
(JSC::CopiedSpace::tryReallocateOversize):
(JSC::CopiedSpace::doneFillingBlock):
(JSC::CopiedSpace::didStartFullCollection):
(JSC::CopiedSpace::doneCopying):
(JSC::CopiedSpace::size):
(JSC::CopiedSpace::capacity):
(JSC::CopiedSpace::isPagedOut):
* heap/CopiedSpace.h:
(JSC::CopiedSpace::CopiedGeneration::CopiedGeneration):
* heap/CopiedSpaceInlines.h:
(JSC::CopiedSpace::contains):
(JSC::CopiedSpace::recycleEvacuatedBlock):
(JSC::CopiedSpace::allocateBlock):
(JSC::CopiedSpace::startedCopying):
* heap/CopyVisitor.cpp:
(JSC::CopyVisitor::copyFromShared):
* heap/CopyVisitorInlines.h:
(JSC::CopyVisitor::allocateNewSpace):
(JSC::CopyVisitor::allocateNewSpaceSlow):
* heap/GCThreadSharedData.cpp:
(JSC::GCThreadSharedData::didStartCopying):
* heap/Heap.cpp:
(JSC::Heap::copyBackingStores):
* heap/SlotVisitorInlines.h:
(JSC::SlotVisitor::copyLater):
* heap/TinyBloomFilter.h:
(JSC::TinyBloomFilter::add):


Canonical link: https://commits.webkit.org/144997@main
git-svn-id: https://svn.webkit.org/repository/webkit/trunk@162017 268f45cc-cd09-0410-ab3c-d52691b4dbfc
Mark Hahnenberg committed Jan 14, 2014
1 parent 0e9a7e3 commit 4c18225e23c06df0853207dbfd80d2286d64a36b
Showing 13 changed files with 367 additions and 128 deletions.
@@ -38,6 +38,7 @@ class CopiedAllocator {

bool fastPathShouldSucceed(size_t bytes) const;
CheckedBoolean tryAllocate(size_t bytes, void** outPtr);
CheckedBoolean tryAllocateDuringCopying(size_t bytes, void** outPtr);
CheckedBoolean tryReallocate(void *oldPtr, size_t oldBytes, size_t newBytes);
void* forceAllocate(size_t bytes);
CopiedBlock* resetCurrentBlock();
@@ -93,6 +94,14 @@ inline CheckedBoolean CopiedAllocator::tryAllocate(size_t bytes, void** outPtr)
return true;
}

inline CheckedBoolean CopiedAllocator::tryAllocateDuringCopying(size_t bytes, void** outPtr)
{
if (!tryAllocate(bytes, outPtr))
return false;
m_currentBlock->reportLiveBytesDuringCopying(bytes);
return true;
}

inline CheckedBoolean CopiedAllocator::tryReallocate(
void* oldPtr, size_t oldBytes, size_t newBytes)
{
@@ -49,10 +49,14 @@ class CopiedBlock : public HeapBlock<CopiedBlock> {
void pin();
bool isPinned();

bool isOld();
bool isOversize();
void didPromote();

unsigned liveBytes();
void reportLiveBytes(JSCell*, CopyToken, unsigned);
bool shouldReportLiveBytes(SpinLockHolder&, JSCell* owner);
void reportLiveBytes(SpinLockHolder&, JSCell*, CopyToken, unsigned);
void reportLiveBytesDuringCopying(unsigned);
void didSurviveGC();
void didEvacuateBytes(unsigned);
bool shouldEvacuate();
@@ -81,20 +85,20 @@ class CopiedBlock : public HeapBlock<CopiedBlock> {

bool hasWorkList();
CopyWorkList& workList();
SpinLock& workListLock() { return m_workListLock; }

private:
CopiedBlock(Region*);
void zeroFillWilderness(); // Can be called at any time to zero-fill to the end of the block.

void checkConsistency();

#if ENABLE(PARALLEL_GC)
SpinLock m_workListLock;
#endif
OwnPtr<CopyWorkList> m_workList;

size_t m_remaining;
uintptr_t m_isPinned;
bool m_isPinned : 1;
bool m_isOld : 1;
unsigned m_liveBytes;
#ifndef NDEBUG
unsigned m_liveObjects;
@@ -130,14 +134,13 @@ inline CopiedBlock::CopiedBlock(Region* region)
: HeapBlock<CopiedBlock>(region)
, m_remaining(payloadCapacity())
, m_isPinned(false)
, m_isOld(false)
, m_liveBytes(0)
#ifndef NDEBUG
, m_liveObjects(0)
#endif
{
#if ENABLE(PARALLEL_GC)
m_workListLock.Init();
#endif
ASSERT(is8ByteAligned(reinterpret_cast<void*>(m_remaining)));
}

@@ -156,6 +159,7 @@ inline void CopiedBlock::didSurviveGC()
inline void CopiedBlock::didEvacuateBytes(unsigned bytes)
{
ASSERT(m_liveBytes >= bytes);
ASSERT(m_liveObjects);
checkConsistency();
m_liveBytes -= bytes;
#ifndef NDEBUG
@@ -188,6 +192,16 @@ inline bool CopiedBlock::isPinned()
return m_isPinned;
}

inline bool CopiedBlock::isOld()
{
return m_isOld;
}

inline void CopiedBlock::didPromote()
{
m_isOld = true;
}

inline bool CopiedBlock::isOversize()
{
return region()->isCustomSize();
@@ -26,21 +26,33 @@
#ifndef CopiedBlockInlines_h
#define CopiedBlockInlines_h

#include "ClassInfo.h"
#include "CopiedBlock.h"
#include "Heap.h"
#include "MarkedBlock.h"

namespace JSC {

inline void CopiedBlock::reportLiveBytes(JSCell* owner, CopyToken token, unsigned bytes)
inline bool CopiedBlock::shouldReportLiveBytes(SpinLockHolder&, JSCell* owner)
{
// We want to add to live bytes if the owner isn't part of the remembered set or
// if this block was allocated during the last cycle.
// If we always added live bytes we would double count for elements in the remembered
// set across collections.
// If we didn't always add live bytes to new blocks, we'd get too few.
bool ownerIsRemembered = MarkedBlock::blockFor(owner)->isRemembered(owner);
return !ownerIsRemembered || !m_isOld;
}

inline void CopiedBlock::reportLiveBytes(SpinLockHolder&, JSCell* owner, CopyToken token, unsigned bytes)
{
#if ENABLE(PARALLEL_GC)
SpinLockHolder locker(&m_workListLock);
#endif
#ifndef NDEBUG
checkConsistency();
#ifndef NDEBUG
m_liveObjects++;
#endif
m_liveBytes += bytes;
checkConsistency();
ASSERT(m_liveBytes <= CopiedBlock::blockSize);

if (isPinned())
return;
@@ -56,6 +68,19 @@ inline void CopiedBlock::reportLiveBytes(JSCell* owner, CopyToken token, unsigne
m_workList->append(CopyWorklistItem(owner, token));
}

inline void CopiedBlock::reportLiveBytesDuringCopying(unsigned bytes)
{
checkConsistency();
// This doesn't need to be locked because the thread that calls this function owns the current block.
m_isOld = true;
#ifndef NDEBUG
m_liveObjects++;
#endif
m_liveBytes += bytes;
checkConsistency();
ASSERT(m_liveBytes <= CopiedBlock::blockSize);
}

} // namespace JSC

#endif // CopiedBlockInlines_h
