STF-383: Fix off-heap memory growth in FileMode.MEMORY #368
Conversation
`FileChannel.read(ByteBuffer)` with a heap-backed buffer causes the JDK to substitute a temporary direct buffer obtained from a per-thread cache (`sun.nio.ch.Util.BufferCache`). With chunk sizes near `Integer.MAX_VALUE`, a single MEMORY-mode database load leaves up to ~2 GB of direct memory cached on the loading thread for that thread's lifetime. Repeated loads on different threads compound the growth.

Open the database via `FileInputStream` and delegate to the existing chunked `InputStream` read path. `FileInputStream.read(byte[])` is implemented natively without going through the NIO buffer cache, so it avoids the leak entirely. The MMAP path is unchanged, since `FileChannel.map()` does not use the cache.

Note: `Files.readAllBytes()` and `Files.newInputStream()` would NOT fix this, as both are backed by `Channels.newInputStream(FileChannel)` internally and still trigger the cache.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
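The replacement read path described above can be sketched roughly as follows. This is an illustrative sketch, not the library's actual code: the class name `ChunkedLoadSketch`, the `readFully` helper, and the method signatures are all made up for the example.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

// Sketch: load a file into heap-backed ByteBuffer chunks via FileInputStream,
// avoiding FileChannel.read() and therefore the per-thread direct-buffer
// cache (sun.nio.ch.Util.BufferCache).
final class ChunkedLoadSketch {
    static ByteBuffer[] load(String path, int chunkSize) throws IOException {
        try (FileInputStream stream = new FileInputStream(path)) {
            long size = stream.getChannel().size(); // fstat on the open fd
            int fullChunks = (int) (size / chunkSize);
            int remainder = (int) (size % chunkSize);
            int totalChunks = fullChunks + (remainder > 0 ? 1 : 0);
            ByteBuffer[] buffers = new ByteBuffer[totalChunks];
            for (int i = 0; i < totalChunks; i++) {
                int len = (i == totalChunks - 1 && remainder > 0) ? remainder : chunkSize;
                byte[] chunk = new byte[len];
                readFully(stream, chunk); // InputStream.read(byte[]) never touches NIO
                buffers[i] = ByteBuffer.wrap(chunk).asReadOnlyBuffer();
            }
            return buffers;
        }
    }

    // Retry until the array is full; InputStream.read may return short counts.
    static void readFully(InputStream in, byte[] dst) throws IOException {
        int off = 0;
        while (off < dst.length) {
            int n = in.read(dst, off, dst.length - off);
            if (n == -1) {
                throw new IOException("Unexpected end of stream");
            }
            off += n;
        }
    }
}
```

The key property is that every byte moves through `InputStream.read(byte[])`, so no heap buffer is ever handed to a `FileChannel` and nothing is parked in the per-thread cache.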
Manages local Java and Maven versions via mise, matching the setup in minfraud-api-java. CI is unaffected since the GitHub Actions workflows use setup-java directly. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The existing chunkSizes parametrized matrix in ReaderTest only routed through `Reader(File, int chunkSize)`, which hardcodes `FileMode.MEMORY_MAPPED`. As a result the chunked file-MEMORY load path in `BufferHolder` had no integration coverage: a wrong remainder-chunk-size regression or an EOF-handling change could ship silently. Add a package-private `Reader(File, FileMode, int chunkSize)` constructor and a `testMemoryMode(int chunkSize)` test that mirrors `test(int)` but in MEMORY mode. With chunk sizes 512/2048 against the test DBs (1285 and 2794 bytes), the multi-chunk + remainder branch is now exercised end to end. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Code Review
This pull request addresses an issue with unbounded off-heap memory growth in FileMode.MEMORY by replacing FileChannel.read() with FileInputStream. This change prevents the JDK from caching large direct ByteBuffers in per-thread storage (sun.nio.ch.Util.BufferCache). The update also includes a version bump to 4.1.0, the addition of mise configuration for development environment management, and new tests for memory mode. Feedback was provided regarding potential integer overflow when calculating chunk counts for extremely large databases, which could lead to an incorrectly sized buffers array.
```java
var fullChunks = (int) (size / chunkSize);
var remainder = (int) (size % chunkSize);
var totalChunks = fullChunks + (remainder > 0 ? 1 : 0);
var buffers = new ByteBuffer[totalChunks];
```
The calculation of `fullChunks` and `totalChunks` uses an explicit cast from `long` to `int` without checking for overflow. If the database size is extremely large relative to the `chunkSize` (e.g., a multi-terabyte database with a small custom `chunkSize`), `fullChunks` could overflow, leading to an incorrectly sized `buffers` array and potential data corruption or `IndexOutOfBoundsException` during the reading loop. While MaxMind databases are typically within `int` range for chunk counts, it is safer to validate this or use `Math.toIntExact`.
```java
var fullChunksLong = size / chunkSize;
var remainder = (int) (size % chunkSize);
var totalChunksLong = fullChunksLong + (remainder > 0 ? 1 : 0);
if (totalChunksLong > Integer.MAX_VALUE) {
    throw new IOException("Database size exceeds supported chunk count for the given chunkSize");
}
var totalChunks = (int) totalChunksLong;
var fullChunks = (int) fullChunksLong;
var buffers = new ByteBuffer[totalChunks];
```
(Claude replying on Greg's behalf.)
Thanks for flagging — leaving the cast unchanged in this PR for a few reasons:
- Pre-existing, not introduced by this PR. The same `(int) (size / chunkSize)` pattern (and the parallel `(int) (size % chunkSize)`) was in the prior multi-chunk MEMORY path before the refactor, so this isn't a regression we're shipping.
- Not practically reachable. A real MaxMind DB at multi-TB sizes doesn't exist; well before this cast would overflow we'd hit `byte[]` size limits (`Integer.MAX_VALUE`), `ByteBuffer.allocate(int)`'s `int` cap, and JVM heap limits.
- Project guidance (CLAUDE.md) says to not add validation for scenarios that can't happen; at this layer, `chunkSize` is set by the library, not external input.
If/when very-large-DB support beyond ~2GB chunks becomes a goal, the right move is probably a broader audit of the `int` casts and array indexing throughout `BufferHolder` / `MultiBuffer`, not just `Math.toIntExact` in this one spot. Out of scope for the current PR.
```java
var totalRead = 0L;
for (var buffer : buffers) {
    var read = channel.read(buffer);
```
Interesting:
1. Pre-existing short-read bug, now fixed (worth calling out). The old multi-chunk MEMORY path had a latent bug:
```java
for (var buffer : buffers) {
    var read = channel.read(buffer); // may be a *short* read
    if (read == -1) { break; }
    totalRead += read;
    buffer.flip(); // flips at the short position
}
```
`FileChannel.read(ByteBuffer)` is not contractually obligated to fully fill the buffer; a short read would leave the chunk partially populated and `flip()` would expose only the bytes that were actually read, silently truncating the in-memory database. The new readFully loop handles this correctly. Good catch (even if accidental).
Not sure if we'd want to mention in the changelog.
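For reference, the kind of fill loop the `FileChannel.read` contract demands can be sketched as follows. This is an illustrative sketch only; `ChannelReadFullySketch` is a hypothetical name, not the PR's actual helper.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Sketch: FileChannel.read(ByteBuffer) may return fewer bytes than
// buffer.remaining(), so a correct fill loop must retry until the buffer
// is full, and fail loudly on premature EOF instead of truncating.
final class ChannelReadFullySketch {
    static void readFully(FileChannel channel, ByteBuffer buffer) throws IOException {
        while (buffer.hasRemaining()) {
            if (channel.read(buffer) == -1) {
                throw new IOException("Unexpected end of stream");
            }
        }
        buffer.flip(); // flip only once the chunk is fully populated
    }
}
```

The contrast with the old loop is the `while (buffer.hasRemaining())` retry and the hard failure on EOF: flipping only after the buffer is full is what rules out silent truncation.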
(Claude replying on Greg's behalf.)
Done in 232dd73. Traced the introducing commit via `git log -S`: the buggy `for (var buffer : buffers) { channel.read(buffer); buffer.flip(); }` loop was added in 2a1fa7f "Make buffers read-only for all paths" and first shipped in v4.0.0, so 4.0.0/4.0.1/4.0.2 are all affected.
Added a second bullet under the 4.1.0 section noting:
- The silent-truncation risk specifically for the multi-chunk path.
- That it only affects databases larger than ~2GB (the default chunk size).
- That the new chunked read loop retries until each chunk is fully populated.
The single-chunk MEMORY path has the same `FileChannel.read(ByteBuffer)` contract concern but at least throws when `channel.read(buffer) != size`, so only the multi-chunk path silently truncates; that's the one worth calling out.
```java
// mode that would mean chunkSize bytes of off-heap memory held per loader
// thread for the JVM's lifetime.
try (FileInputStream stream = new FileInputStream(database)) {
    long size = database.length();
```
2. Minor robustness regression: size source no longer matches the open fd. `BufferHolder.java:33-34`:

```java
try (FileInputStream stream = new FileInputStream(database)) {
    long size = database.length();
```
`database.length()` is a path-based `stat(2)`, not a query against the open fd. The old code used `channel.size()` on the already-open channel, which is atomic with the open. Theoretical race: if the file is replaced between `new FileInputStream(...)` and `database.length()`, you could get a size that doesn't match the bytes the stream will deliver, and then readFully either throws "Unexpected end of stream" (truncation) or silently stops short of the file's real end (extension/replacement with a larger file). Same general class of race as before, but slightly wider window.
Suggestion: use `stream.getChannel().size()` instead. Lazily fetching the channel doesn't populate the NIO buffer cache; only `channel.read(heapBuffer)` does that, and we're not calling it. This keeps the size lookup consistent with the open file descriptor without reintroducing the leak.
(Claude replying on Greg's behalf.)
Fixed in 2624387: `database.length()` → `stream.getChannel().size()`. Verified the suggestion doesn't reintroduce the cache: `FileInputStream.getChannel()` is lazy and `.size()` is an fstat-only syscall, no `IOUtil.read` / `Util.getTemporaryDirectBuffer`. Added a one-line comment noting both why (fd-atomic) and that `getChannel().size()` is cache-safe.
`database.length()` is a path-based `stat(2)`, which can race with file replacement between the `FileInputStream` open and the size lookup — the bytes the stream then delivers may not match the size we read. Switch to `stream.getChannel().size()`, which is an `fstat` on the already-open file descriptor and therefore atomic with the open. `FileChannelImpl.getChannel()` is lazy and `.size()` is a stat-only syscall, so this does not reintroduce the per-thread direct ByteBuffer cache that the previous commit was fixing — that cache only grows when `FileChannel.read(heapByteBuffer)` is called. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The prior multi-chunk MEMORY load loop (introduced in 4.0.0 by 2a1fa7f, "Make buffers read-only for all paths") called FileChannel.read(ByteBuffer) and trusted it to fully fill the destination — a short read would have silently truncated the in-memory database. The new readFully helper retries until each chunk is full, so this is incidentally fixed. Worth noting in the changelog so anyone investigating a >2GB MEMORY-mode corruption can trace it back. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
- `FileChannel.read()` into a heap buffer populates `sun.nio.ch.Util.BufferCache`, a per-thread cache that retains the largest direct buffer ever requested for the JVM lifetime. Under chunked MEMORY mode that meant `chunkSize` × loading-threads of off-heap memory pinned permanently. Switching to `FileInputStream` bypasses the cache. `FileMode.MEMORY_MAPPED` is unaffected (it uses `FileChannel.map`, not `read`).
- `Reader(File, FileMode, int)` package-private ctor and a `testMemoryMode` parametrized over the existing `chunkSizes()` provider, so the multi-chunk + remainder branch in `BufferHolder` now has integration coverage.
- `mise.toml`/`mise.lock` for local Java + Maven version management, matching minfraud-api-java. CI uses `actions/setup-java` directly and is unaffected.

Note: the linked Linear ticket suggested `Files.readAllBytes()` as the fix, but that wouldn't work: `Files.readAllBytes` is implemented as `Channels.newInputStream(FileChannel)` and still routes through `FileChannel.read` → `BufferCache`. Only `java.io.FileInputStream` (whose native `read(byte[])` copies via `SetByteArrayRegion` without going through NIO) actually bypasses the cache. This was previously commented on STF-383.

Test plan
- `mvn test` (199/199 pass, including 3 new chunked MEMORY-mode test runs)
- `mvn checkstyle:check` clean

🤖 Generated with Claude Code