Possible memory leak #48
Hm - thanks for the report. That's a worry. It probably won't matter but - what versions of nodejs / fdb are you using, and on what OS?
Ah, right, sorry. I'm using …
FDB server and client are of version …
Yep, confirmed it leaks. It leaks way faster without the tn.get() call:

```js
const fdb = require('foundationdb')
fdb.setAPIVersion(620)

const db = fdb.open()

;(async () => {
  while (true) {
    await db.doTn(async tn => {
    });
  }
})()
```

I've tracked down the issue and seem to have fixed it. I wasn't freeing a temporary heap-allocated structure after transactions were passed to libfdb_c.
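A minimal sketch (not from the thread) of how the growth can be watched from inside the process: the same empty-transaction loop plus a timer that logs process.memoryUsage(). If rss keeps climbing while heapUsed stays roughly flat, the leak is on the native side rather than in the V8 heap. The 5-second interval is an arbitrary choice.

```js
const fdb = require('foundationdb')
fdb.setAPIVersion(620)

const db = fdb.open()

// Periodically log process-level (rss) vs V8-heap (heapUsed) memory.
setInterval(() => {
  const { rss, heapUsed, external } = process.memoryUsage()
  console.log(
    `rss=${(rss / 1e6).toFixed(1)}MB heapUsed=${(heapUsed / 1e6).toFixed(1)}MB external=${(external / 1e6).toFixed(1)}MB`
  )
}, 5000)

;(async () => {
  while (true) {
    await db.doTn(async tn => {
    });
  }
})()
```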
📦 foundationdb@1.0.3. See if that fixes things!
With … Without the …
Can you try checking out from source to see if you can reproduce the behaviour? That sounds like the behaviour I was seeing in 1.0.2. Maybe something went wrong in the publishing process? (I rewrote the node module publishing scripts yesterday.)

I'm still seeing a (much slower) memory leak in 1.0.3. Without the tn.get() call, in about a minute process memory usage goes from 18 MB to 24 MB.

Valgrind doesn't show the leak, regardless of how many iterations I run the process for. I'm wondering if maybe there are some internal FDB structures we aren't cleaning up properly, and the foundationdb C library is deallocating that memory automatically when its thread is terminated (hence those allocations not showing up in valgrind).

These were the valgrind stats from running 10M iterations (with no …):
Here's what I did:
Checked that the code would crash, then rebuilt the binaries using …
I didn't see any change after that, though I did some more experimenting:
My hypothesis is that under heavy load NodeJS triggers GC less often, and retained NodeJS objects keep their attached native objects alive as well, making some internal FDB client buffer grow. Anyway, let me test … Thank you for your time!
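One way to probe that hypothesis (a sketch, not from the thread): run the repro with node --expose-gc and force a collection periodically. If process memory stops growing when GC is forced, retained JS objects (and the native objects attached to them) explain the growth; if it keeps growing, the leak is elsewhere. The key name and the 10,000-iteration interval below are made up.

```js
// Run with: node --expose-gc repro.js
const fdb = require('foundationdb')
fdb.setAPIVersion(620)

const db = fdb.open()

;(async () => {
  let i = 0
  while (true) {
    await db.doTn(async tn => {
      await tn.get('some key') // hypothetical key, just to exercise a read
    })
    // Every 10,000 transactions (arbitrary), force GC and log resident memory.
    if (global.gc && ++i % 10000 === 0) {
      global.gc()
      console.log('forced gc, rss =', process.memoryUsage().rss)
    }
  }
})()
```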
Huh, yeah - maybe something like that. I don't think memory usage should grow as it is though. You're definitely using 1.0.3? (Check …)

I bet if I rewrote the same test in raw C it would sit at a constant amount of memory. ... I might try tracking the number of allocated FDB Futures and see what happens to that number over time.
Yep, did just that.
After running …, it can be seen that the major leak went away. The memory still grows, but really slowly. NodeJS heap: (screenshot)

Another observation: during load bursts, the consumed memory sometimes grows and is not released afterwards: (screenshot)

This may have something to do with the FDB client's Fast Allocator (see https://forums.foundationdb.org/t/memory-leak-in-c-api/554/2), so it may be expected.
I’m glad that helped. The memory leak I fixed for 1.0.3 looked pretty significant, and I was surprised the other day when it sounded like it didn’t make much difference. And hm, it could be to do with the allocator. But I’d expect those simple loops to sit on a single fdb transaction and reuse it every time. That’s definitely not the full story.
Hi! We noticed a memory leak when using …

Starting profiling: (screenshot)

End profiling after 10 min: (screenshot)

The situation is similar to what happened with … I use macOS with Rosetta, but we see the same memory leak in our services in Kubernetes.
We noticed a memory leak in our services that use node-foundationdb, which appears to be somewhere in native code, because the NodeJS heap size stays at consistent levels but the total process memory grows.

I made a repro, see https://github.com/aikoven/node-foundationdb-memory-leak-repro
This repro makes simple read-only transactions in a loop:
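A sketch of what such a loop presumably looks like, based on the snippets discussed elsewhere in the thread (the key name here is made up; the actual repro lives in the linked repository):

```js
const fdb = require('foundationdb')
fdb.setAPIVersion(620)

const db = fdb.open()

;(async () => {
  while (true) {
    // One read-only transaction per iteration.
    await db.doTn(async tn => {
      await tn.get('some-key') // hypothetical key; the real repro key isn't shown here
    })
  }
})()
```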
After about an hour, the process consumed about 700MB of memory, while NodeJS heap usage was consistently ~90MB.