Use std::atomic for CodeBlock::m_visitAggregateHasBeenCalled.
<https://webkit.org/b/142640>

Reviewed by Mark Hahnenberg.

We used to spin our own compare and swap on a uint8_t.  Now that we can
use C++11, let's use std::atomic instead.

* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::visitAggregate):
- The CAS here needs std::memory_order_acquire ordering because it
  requires lock acquisition semantics to visit the CodeBlock.

* bytecode/CodeBlock.h:
(JSC::CodeBlockSet::mark):
* heap/CodeBlockSet.cpp:
(JSC::CodeBlockSet::clearMarksForFullCollection):
(JSC::CodeBlockSet::clearMarksForEdenCollection):
- These can go with relaxed ordering because they are all done before
  the GC starts parallel marking.


Canonical link: https://commits.webkit.org/160666@main
git-svn-id: https://svn.webkit.org/repository/webkit/trunk@181456 268f45cc-cd09-0410-ab3c-d52691b4dbfc
Mark Lam committed Mar 12, 2015
1 parent ec30ef3 commit f43ee2d7cdba05d37959484fe2fead3d105e964c
@@ -1,3 +1,26 @@
+2015-03-12  Mark Lam  <mark.lam@apple.com>
+
+        Use std::atomic for CodeBlock::m_visitAggregateHasBeenCalled.
+        <https://webkit.org/b/142640>
+
+        Reviewed by Mark Hahnenberg.
+
+        We used to spin our own compare and swap on a uint8_t.  Now that we can
+        use C++11, let's use std::atomic instead.
+
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::visitAggregate):
+        - The CAS here needs std::memory_order_acquire ordering because it
+          requires lock acquisition semantics to visit the CodeBlock.
+
+        * bytecode/CodeBlock.h:
+        (JSC::CodeBlockSet::mark):
+        * heap/CodeBlockSet.cpp:
+        (JSC::CodeBlockSet::clearMarksForFullCollection):
+        (JSC::CodeBlockSet::clearMarksForEdenCollection):
+        - These can go with relaxed ordering because they are all done before
+          the GC starts parallel marking.
+
 2015-03-12  Csaba Osztrogonác  <ossy@webkit.org>
 
         [cmake] Fix the incremental build issue revealed by r181419
@@ -2195,26 +2195,12 @@ void CodeBlock::visitAggregate(SlotVisitor& visitor)
 {
 #if ENABLE(PARALLEL_GC)
     // I may be asked to scan myself more than once, and it may even happen concurrently.
-    // To this end, use a CAS loop to check if I've been called already. Only one thread
-    // may proceed past this point - whichever one wins the CAS race.
-    unsigned oldValue;
-    do {
-        oldValue = m_visitAggregateHasBeenCalled;
-        if (oldValue) {
-            // Looks like someone else won! Return immediately to ensure that we don't
-            // trace the same CodeBlock concurrently. Doing so is hazardous since we will
-            // be mutating the state of ValueProfiles, which contain JSValues, which can
-            // have word-tearing on 32-bit, leading to awesome timing-dependent crashes
-            // that are nearly impossible to track down.
-
-            // Also note that it must be safe to return early as soon as we see the
-            // value true (well, (unsigned)1), since once a GC thread is in this method
-            // and has won the CAS race (i.e. was responsible for setting the value true)
-            // it will definitely complete the rest of this method before declaring
-            // termination.
-            return;
-        }
-    } while (!WTF::weakCompareAndSwap(&m_visitAggregateHasBeenCalled, 0, 1));
+    // To this end, use an atomic operation to check (and set) if I've been called already.
+    // Only one thread may proceed past this point - whichever one wins the atomic set race.
+    bool expected = false;
+    bool setByMe = m_visitAggregateHasBeenCalled.compare_exchange_strong(expected, true, std::memory_order_acquire);
+    if (!setByMe)
+        return;
 #endif // ENABLE(PARALLEL_GC)
 
     if (!!m_alternative)
@@ -1065,7 +1065,7 @@ class CodeBlock : public ThreadSafeRefCounted<CodeBlock>, public UnconditionalFi
     bool m_isStrictMode;
     bool m_needsActivation;
     bool m_mayBeExecuting;
-    uint8_t m_visitAggregateHasBeenCalled;
+    std::atomic<bool> m_visitAggregateHasBeenCalled;
 
     RefPtr<SourceProvider> m_source;
     unsigned m_sourceOffset;
@@ -1301,7 +1301,7 @@ inline void CodeBlockSet::mark(CodeBlock* codeBlock)
 
     codeBlock->m_mayBeExecuting = true;
     // We might not have cleared the marks for this CodeBlock, but we need to visit it.
-    codeBlock->m_visitAggregateHasBeenCalled = false;
+    codeBlock->m_visitAggregateHasBeenCalled.store(false, std::memory_order_relaxed);
 #if ENABLE(GGC)
     m_currentlyExecuting.append(codeBlock);
 #endif
@@ -65,7 +65,7 @@ void CodeBlockSet::clearMarksForFullCollection()
 {
     for (CodeBlock* codeBlock : m_oldCodeBlocks) {
         codeBlock->m_mayBeExecuting = false;
-        codeBlock->m_visitAggregateHasBeenCalled = false;
+        codeBlock->m_visitAggregateHasBeenCalled.store(false, std::memory_order_relaxed);
     }
 
     // We promote after we clear marks on the old generation CodeBlocks because
@@ -82,7 +82,7 @@ void CodeBlockSet::clearMarksForEdenCollection(const Vector<const JSCell*>& reme
             continue;
         executable->forEachCodeBlock([](CodeBlock* codeBlock) {
             codeBlock->m_mayBeExecuting = false;
-            codeBlock->m_visitAggregateHasBeenCalled = false;
+            codeBlock->m_visitAggregateHasBeenCalled.store(false, std::memory_order_relaxed);
         });
     }
 }
