runtime: heap size reached GOMEMLIMIT but no GC activity triggered #56764
To be clear, are you passing `GOMEMLIMIT=1GB` exactly? (`GB` is not one of the recognized unit suffixes; `GiB` is.) However, if you're seeing that it uses the memory anyway, that's not good, though I can't trivially reproduce this locally. Do you have a reproducer by any chance?
FWIW, I would expect this behavior if you were using a version of Go that didn't support `GOMEMLIMIT` (it was added in Go 1.19).
Also, how are you confirming there is no GC activity? What's the output of running your program with `GODEBUG=gctrace=1`?
I've been looking into this more since it's potentially pretty serious, but I haven't found any leads. Plus, we're regularly testing this behavior in the runtime and this memory limit functionality is used within Google, and we haven't seen any serious out of memory issues in production as a result of it (yet). If you have any more information or a way to reproduce, please let me know! Putting this into WaitingForInfo for now.
I used GOMEMLIMIT=1GiB instead of 1GB. Actually, I use debug.SetMemoryLimit(1<<30).
I used go1.16.5 to compile and benchmarked it under the same pressure.
P.S. Why do I want to replace the ballast? Initializing a ballast is tricky: the allocated memory may be zeroed if its pages are reused, and then the ballast takes up a full 1 GB of RSS.
Yes, I used GODEBUG=gctrace=1 and there was no output of GC activity. I also read the memstats every second and printed NumGC; it was zero. But HeapAlloc wasn't zero: it was a big value, about 4 GB+, before the process was OOM killed.
OK, I will try to provide a reproducible example later. P.S. I now have a guess: the GC is triggered at the goal calculated from GOMEMLIMIT, but it does not complete. During the mark phase, newly allocated objects are marked reachable, so RSS increases. This happened under an HTTP benchmark where the pressure is high; RSS grew too far and the process was finally killed. It's a guess, I will test it later :)
Sorry, I'm not sure I follow. If your application allocates around 6.3 GiB in 2 minutes (as per your original post), then that's an allocation rate of about 53 MiB/s. If you have a 1 GiB ballast and your total live heap stays around 2 GiB, that suggests to me that your live heap is small, on the order of MiB. The GC should have no problem keeping up in this scenario; I would expect the mark phase to be really short. (Even then, newly allocated memory marked live during a mark phase becomes eligible for reclamation in the next cycle, provided it's not referenced by the next mark phase.) If the application is actually getting to a mark phase, the GC is programmed to become more aggressive as the goal is neared and begins to be exceeded. It'll force the allocating goroutines to assist until it can finish. I think a reproducer at this point would be the most useful thing.
@mknyszek Thanks very much. My bad: when I merged the code, there were some conflicts and I dropped one line of code. I read and modified mgc.go, added some debugging messages, and tested again. I found that if this problem were caused by a bug, it would be a very, very apparent one. Then I checked my own code. It's a little awkward :) I'll close this issue. Thank you, @mknyszek.
Oh, haha. It happens. :) Well, thanks for checking and for trying to reproduce! |
What version of Go are you using (`go version`)?

Does this issue reproduce with the latest release?
Yes.

What operating system and processor architecture are you using (`go env`)?

`go env` Output

What did you do?
Previously, we used a Go memory ballast to avoid frequent GC when the live heap is small. After upgrading to Go 1.19, I wanted to test whether we should use GOMEMLIMIT instead when deploying one service per host.
I tested with GOGC=off and GOMEMLIMIT=1GB; our host has 4 cores and 8 GB of memory, and no other services compete for the memory.
I ran some HTTP benchmarking against the service:
The test lasted nearly 2 minutes. Even though there is no forced GC (because GOGC=off), I expected the GC to be triggered by GOMEMLIMIT=1GB, but it was not.
What did you expect to see?
I expected to see at least some GC activity when the heap reached the soft memory limit.
What did you see instead?
I didn't see any GC activity. The heap grew beyond the soft memory limit (1GB) to 6.3GB, and then the process was OOM killed.
Actually, I know there are some differences between a ballast and the soft memory limit. I'm just curious why no GC activity was triggered when the heap grew beyond the limit.