Memory leak in instance creation #42
This isn't actually a memory leak, but an artifact of how Go manages memory. It consistently "allocates" a lot of virtual memory that it doesn't let go, which isn't really a problem because virtual memory is exactly that: virtual. See the Go docs for more. The important point there is that the memory is local to the Go process and doesn't starve other processes. If you check the resident memory instead, the numbers should be far smaller.

For what it's worth, this also caught me off guard! I have a project using wasmtime-go that constantly creates new wasm instances, and I saw virtual memory climbing into the terabytes.
See also this section in the wasmtime docs, which explains why each instance allocates so much.
I understand Go's memory management, and I do think there's a bit more going on here than that: I never see `goFinalizeWrap` invoked, which means `wrapMapEntry` instances are never released to the Go GC (and consequently, neither are `Store` instances). To test this, I patched v0.21.0 to execute `goFinalizeWrap` for the indexes allocated during the lifetime of the `Store` instance, and with that change the memory usage is stable, not growing at all.
To clarify further: without the patch described above, I do see RSS (RES) increasing, not just the virtual memory. With the patch, RSS remains stable.
Whoa, thanks for digging in. I stand corrected. I did some experimentation myself a couple of months ago and didn't see anything getting out of sorts; the growth looked in line with the 6GB per instance that I would expect. But you clearly know more about it than I do. I hope I didn't come off as condescending.
Haha, not at all, and do let me know if there's something I could help with. While I have a local workaround for the time being, it would be good to have a proper fix in place upstream.
So I think we'll have to wait for @alexcrichton to weigh in here unless you submit a patch. I don't know enough about how the C API for wasmtime works (or about cgo, or C, or programming in general...) to make a serious effort.
Thanks for the report! It does indeed look like I've created a cycle between the Rust heap and the Go heap which can't be collected. I'll look into figuring out how to break this.
The global map for storing `*Func` objects previously stored `*Store`, but that creates a reference cycle between the Rust and the Go heaps which can't be cleaned up. Instead, this commit updates the logic to use a "thread local" variable to temporarily store the current freelist during a call, moving the storage out of the global variables. This should allow everything to get garbage collected as usual. Closes bytecodealliance#42
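Schematically, the pattern the commit describes might look like the following. This is a hedged sketch under assumed names (`store`, `freelist`, `withStore` are all hypothetical, not wasmtime-go's real identifiers): instead of parking the `*Store` in a global map, a package-level "current" variable is populated only for the duration of a call and cleared afterwards, so no global state keeps the store reachable.

```go
package main

import "fmt"

type freelist struct{ toFree []int }

// currentFreelist stands in for the commit's "thread local": it is set
// only while a call is in flight, so package-level state never pins a
// store's freelist after the call returns.
var currentFreelist *freelist

type store struct{ fl freelist }

// withStore installs the store's freelist as current, runs f, then
// restores the previous value so that nested calls also behave.
func withStore(s *store, f func()) {
	prev := currentFreelist
	currentFreelist = &s.fl
	defer func() { currentFreelist = prev }()
	f()
}

func main() {
	s := &store{}
	withStore(s, func() {
		// During the call, code can reach the freelist through the
		// temporary variable rather than a long-lived global map.
		currentFreelist.toFree = append(currentFreelist.toFree, 42)
	})
	// After the call, no global reference to the store remains.
	fmt.Println(currentFreelist == nil, s.fl.toFree)
}
```

The design point is that the global's lifetime is scoped to a call, not to the store, which is what breaks the cross-heap cycle.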
Ok @koponen-styra, can you confirm that #44 works for you?
Yes, I can confirm, the memory usage looks stable with your patch. Great!
Thanks for confirming!
Great patch
Running the following to see the memory leak in action:
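The snippet itself didn't survive extraction. A reproduction along the following lines would match the symptoms in the thread; treat it as a sketch, since the wasmtime-go API names (`NewEngine`, `Wat2Wasm`, `NewModule`, `NewStore`, `NewInstance`) and their exact signatures varied across releases around v0.21:

```go
package main

import (
	"fmt"
	"runtime"

	"github.com/bytecodealliance/wasmtime-go"
)

func main() {
	engine := wasmtime.NewEngine()
	wasm, err := wasmtime.Wat2Wasm(`(module)`)
	if err != nil {
		panic(err)
	}
	module, err := wasmtime.NewModule(engine, wasm)
	if err != nil {
		panic(err)
	}
	// Constantly create short-lived stores and instances; without the
	// fix, the per-store state pinned by the global map accumulates.
	for i := 0; ; i++ {
		store := wasmtime.NewStore(engine)
		if _, err := wasmtime.NewInstance(store, module, nil); err != nil {
			panic(err)
		}
		if i%1000 == 0 {
			var m runtime.MemStats
			runtime.ReadMemStats(&m)
			fmt.Printf("iter %d: Sys=%d HeapAlloc=%d\n", i, m.Sys, m.HeapAlloc)
		}
	}
}
```

With the unpatched library, both the printed figures and the process RSS grow without bound; with the fix, they level off.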