Memleak #3
Without a complete reproducible example (including some input to test it on), I really can't help you. The most likely explanation is that the freeing logic in these bindings has some problem with it. So you might consider debugging that. This FFI binding isn't terribly complicated. Another explanation is that you're just holding on to the captures (or regexes). But as I said, it's impossible for me to know whether this is the case because you haven't provided a reproducible example.
I wrote a small program to reproduce the issue:

Directory structure:

Run:

Results:
It seems the problem is the way I set a timeout for the parsing process:
Sometimes it takes too long to parse a text, so I need to set a timeout for the parse. If this is the problem, can you suggest the right way to implement a timeout? I need to get all the captured fields when a parse succeeds. Thank you!
I found out that
Will this timeout implementation slow down my grok? Performance is a key factor of my program. An off-topic question: is using Iter() the best way to get all the captured fields? Or are there parts of RureParseTypedCompiled() that can be improved? Thank you!
I'm sorry, but I don't have the time to debug programs like this. It sounds like this isn't a memory leak with this library, so I'm going to close this issue. I think your question might be better suited for StackOverflow or some other general help forum.
As should be clear from the API docs, no, you cannot.
Is it the simplest? Sure. Is it the fastest? No. Because every call to match incurs FFI overhead. Does this matter? I don't know. You need to make that decision for yourself. If performance is your first and only priority then you probably shouldn't be using Go.
Uh... I mean, you're calling
I finally found out which line causes the memory leak: this line allocates memory for Captures every time an event passes through RureParseTypedCompiled(), but the memory is not freed when RureParseTypedCompiled() exits.
Please provide a minimal program that reproduces the problem. If you look at the source (lines 322 to 327 in 0338b65):
Sure you can. Just call
Is there any way to free memory used by a Go slice? Nope. Welcome to life with a GC. :-) The only way such things get freed is by not using it and having the GC run.
When I run my program on the test server, the EPS is very high, so NewCaptures() is called rapidly and continuously. One way to reproduce this problem is to call NewCaptures() as fast as possible in a loop. Creating a Captures once and reusing it many times would solve the problem. It would be much better for me if I could free the Captures before RureParseTypedCompiled() returns, because in my use case it is possible, but very complicated, to create once and reuse later. :D
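For situations like the one described above, where threading a single reusable value through the code is awkward, Go's `sync.Pool` can amortize allocations across many events without restructuring the call sites. This is a generic sketch, not rure-go's API: `buffer` is a hypothetical stand-in for an expensive object such as a Captures.

```go
package main

import (
	"fmt"
	"sync"
)

// buffer is a hypothetical stand-in for an expensive-to-allocate
// object (in the thread's scenario, a rure Captures).
type buffer struct{ data []byte }

// bufPool hands out recycled buffers; New is only called when the
// pool is empty, so steady-state traffic allocates almost nothing.
var bufPool = sync.Pool{
	New: func() interface{} { return &buffer{data: make([]byte, 0, 4096)} },
}

func handleEvent(line string) string {
	b := bufPool.Get().(*buffer)
	defer bufPool.Put(b) // return the buffer instead of dropping it each event
	b.data = append(b.data[:0], line...)
	return fmt.Sprintf("parsed %d bytes", len(b.data))
}

func main() {
	fmt.Println(handleEvent("GET / 200"))
}
```

One caution under the stated assumption: if the pooled object wraps C-allocated memory (as a Captures would), the pool only reduces churn; it does not by itself free the C side, so a finalizer or explicit close path is still needed for values the pool eventually drops.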
Unless you can show me evidence of a bug, I'm not making any changes here and I'm not going to add an explicit free function, sorry. If you need deterministic memory management, then don't use a language with a GC.
https://github.com/thangld322/rure-go-mem-test Please check out this code. It reproduces the memory problem: memory usage increases continuously and dramatically. Code details: I have 20,000 nginx access raw plaintext logs and 8 patterns to match against these logs.
It looks like this might be a poor interaction between Go's runtime and the thread local storage cache that
and then rebuild |
I've been bitten by this as well in a situation where rure-go is used to process huge amounts of data each day. In some situations all memory on the host would be consumed, triggering the OOM killer. The patch to disable the perf-cache feature did the trick. |
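For reference, the workaround the two comments above describe amounts to rebuilding the rure C library with regex's perf-cache feature (the thread_local-backed scratch cache) disabled. A hypothetical sketch of the dependency change — the exact feature list depends on the regex version in use:

```toml
# Hypothetical Cargo.toml fragment for the rure crate: depend on regex
# without its default features so perf-cache stays off, then re-enable
# the other commonly wanted features explicitly.
[dependencies.regex]
version = "1"
default-features = false
features = ["std", "unicode", "perf-inline", "perf-literal", "perf-dfa"]
```

After this change, rebuilding rure and relinking the Go program picks up the cache-free configuration, trading some per-match speed for bounded memory.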
This commit removes the thread_local dependency (even as an optional dependency) and replaces it with a more purpose-driven memory pool. The comments in src/pool.rs explain this in more detail, but the short story is that thread_local seems to be at the root of some memory leaks happening in certain usage scenarios.

The great thing about thread_local, though, is how fast it is. Using a simple Mutex<Vec<T>> is easily at least twice as slow. We work around that a bit by coding a simplistic fast path for the "owner" of a pool. This does require one new use of `unsafe`, which we extensively document.

This now makes the 'perf-cache' feature a no-op. We of course retain it for compatibility purposes (and perhaps it will be used again in the future), but for now, we always use the same pool.

As for benchmarks, it is likely that *some* cases will get a hair slower, but there shouldn't be any dramatic difference. A careful review of micro-benchmarks, in addition to more holistic (albeit ad hoc) benchmarks via ripgrep, seems to confirm this.

Now that we have more explicit control over the memory pool, we also clean stuff up with respect to RefUnwindSafe.

Fixes #362, Fixes #576
Ref BurntSushi/rure-go#3
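The Mutex<Vec<T>> design the commit message describes can be transliterated into a short Go sketch for illustration (the real implementation is Rust, in src/pool.rs, and adds an unsafe owner fast path that is omitted here; `scratch` is a hypothetical stand-in for the per-thread regex cache):

```go
package main

import (
	"fmt"
	"sync"
)

// scratch is a hypothetical stand-in for the regex scratch cache the
// commit message discusses.
type scratch struct{ hits int }

// pool is a rough Go transliteration of the Mutex<Vec<T>> design: a
// locked free list that values are checked out of and returned to,
// giving deterministic reuse without thread-local storage.
type pool struct {
	mu   sync.Mutex
	free []*scratch
}

// get pops a cached value off the free list, or allocates a fresh one
// when the list is empty.
func (p *pool) get() *scratch {
	p.mu.Lock()
	defer p.mu.Unlock()
	if n := len(p.free); n > 0 {
		s := p.free[n-1]
		p.free = p.free[:n-1]
		return s
	}
	return &scratch{}
}

// put returns a value to the free list for later reuse.
func (p *pool) put(s *scratch) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.free = append(p.free, s)
}

func main() {
	p := &pool{}
	s := p.get()
	s.hits++
	p.put(s)
	fmt.Println(p.get().hits) // the same value comes back out
}
```

The design trade-off matches the commit message: every get/put takes a lock, so it is slower than thread-local storage, but the pool's lifetime is explicit, which is what makes the leak go away.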
Hi,
I used this library to implement a grok filter to parse plaintext logs.
First, I compile the pattern to use later.
Every time a log passes by, I call this function:
After just one minute of running, the system's memory is full and the program crashes with these logs:
Can you tell me what is wrong please?