InternCache - remove synchronized (#1251)
pjfanning committed Mar 28, 2024
1 parent 695ca3f commit 1fd8cb1
Showing 2 changed files with 18 additions and 7 deletions.
4 changes: 4 additions & 0 deletions release-notes/VERSION-2.x
@@ -16,6 +16,10 @@ a pure JSON library.

2.18.0 (not yet released)

#1251: `InternCache` replace synchronized with `ReentrantLock` - the cache
 size limit is no longer strictly enforced for performance reasons but
 we should never go far above the limit
 (contributed by @pjfanning)
#1252: `ThreadLocalBufferManager` replace synchronized with `ReentrantLock`
(contributed by @pjfanning)

21 changes: 14 additions & 7 deletions src/main/java/com/fasterxml/jackson/core/util/InternCache.java
@@ -1,6 +1,7 @@
package com.fasterxml.jackson.core.util;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

/**
* Singleton class that adds a simple first-level cache in front of
@@ -29,7 +30,7 @@ public final class InternCache
* cases where multiple threads might try to concurrently
* flush the map.
*/
private final Object lock = new Object();
private final ReentrantLock lock = new ReentrantLock();

public InternCache() { this(MAX_ENTRIES, 0.8f, 4); }

@@ -47,13 +48,19 @@ public String intern(String input) {
* we are simply likely to keep on clearing same, commonly used entries.
*/
if (size() >= MAX_ENTRIES) {
/* Not incorrect wrt well-known double-locking anti-pattern because underlying
* storage gives close enough answer to real one here; and we are
* more concerned with flooding than starvation.
/* As of 2.18, the limit is not strictly enforced, but we do try to
 * clear entries once we have reached the limit. We do not expect to
 * go much over the limit, and if we do, it is not a big problem.
 * If another thread holds the lock, we skip the clear; the lock is
 * not held for long, so a later call should be able to clear soon.
*/
synchronized (lock) {
if (size() >= MAX_ENTRIES) {
clear();
if (lock.tryLock()) {
try {
if (size() >= MAX_ENTRIES) {
clear();
}
} finally {
lock.unlock();
}
}
}
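
For readers skimming the diff, here is a minimal self-contained sketch of the pattern this commit introduces: a `ConcurrentHashMap`-backed intern cache whose size limit is enforced on a best-effort basis via `ReentrantLock.tryLock()` instead of a `synchronized` block. The class name, the `MAX_ENTRIES` value, and the omitted constructor details are illustrative assumptions, not the exact jackson-core code.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical, simplified illustration -- not the actual jackson-core class.
public class BestEffortInternCache extends ConcurrentHashMap<String, String> {
    // Illustrative limit; the real InternCache defines its own MAX_ENTRIES constant.
    private static final int MAX_ENTRIES = 180;

    private final ReentrantLock lock = new ReentrantLock();

    public String intern(String input) {
        String result = get(input);
        if (result != null) {
            return result;
        }
        if (size() >= MAX_ENTRIES) {
            // Best-effort flush: only the thread that wins tryLock() clears the map;
            // other threads continue without blocking, so the size may briefly
            // exceed MAX_ENTRIES.
            if (lock.tryLock()) {
                try {
                    if (size() >= MAX_ENTRIES) {
                        clear();
                    }
                } finally {
                    lock.unlock();
                }
            }
        }
        result = input.intern();
        putIfAbsent(result, result);
        return result;
    }
}

With tryLock() a contended thread never parks waiting to flush the cache; the trade-off is that MAX_ENTRIES becomes a soft limit rather than a hard one, which the release note above calls out.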
