Memory leak in renderToStringWithData #2126
Comments
We are seeing the same thing, with the same versions; however, we use …
We are also trying to profile a memory leak in our app that appears to involve a branch of … Here it is with …
We have been seeing this in production for a while. One other thing to note: a corresponding CPU "leak" accompanies the increased memory usage. Note the drop where a deploy restarted all nodes. I suppose this should be a suspect with any JS memory leak, but we might want to start by looking at anything explicitly passed to the next event frame (such as with …).
I'm behind from chasing my tail on all sorts of upstream issues today, so I can't really dive into this any more, but my current suspicion is this setInterval, which is never cleared internally or by react-apollo. Everything in its scope is going to stay in memory indefinitely, and we're going to add a new scheduler every time we create an Apollo Client instance. If this is correct, it also affects browser environments, but isn't problematic there since browser sessions are so much shorter lived.
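To illustrate the suspected pattern (the class and property names below are purely illustrative, not the actual apollo-client internals): a scheduler created per client instance starts a `setInterval` that nothing ever clears, so its closure, and everything reachable from it, stays in memory for the life of the process.

```javascript
// Illustrative sketch of the suspected leak: an interval that is never
// cleared keeps its closure (and the query store it references) alive.
class LeakyScheduler {
  constructor(queryStore) {
    this.queryStore = queryStore; // retained forever via the interval's closure
    this.timer = setInterval(() => this.poll(), 10);
  }
  poll() { /* check queryStore for in-flight polling queries */ }
}

// The fix is a teardown hook that clears the interval when the client
// instance is disposed:
class FixedScheduler {
  constructor(queryStore) {
    this.queryStore = queryStore;
    this.timer = setInterval(() => this.poll(), 10);
  }
  poll() {}
  stop() {
    clearInterval(this.timer);
    this.timer = null;
  }
}

const scheduler = new FixedScheduler({ queries: [] });
scheduler.stop(); // without this, the timer also keeps the Node process alive
console.log(scheduler.timer === null); // true
```

Note that in Node, an uncleared `setInterval` doesn't just retain memory; it also keeps the event loop from draining, which fits the server-side symptoms reported here.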
Thanks for the excellent reproduction @amannn! When I zoom in on small slices of the tail of my allocation profile, the newly created objects are almost all copies of query documents, so I suspect there's some accidental cloning of queries happening somewhere, possibly compounded by subscription-related closures hanging around too long, like the …
May help with apollographql/react-apollo#2126, since this code was only making a shallow copy of defaultQueryInfo, so there's a risk that defaultQueryInfo.listeners and .subscriptions could have grown over time, without ever getting garbage collected.
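The shallow-copy hazard described above can be sketched as follows (the names `defaultQueryInfo`, `listeners`, and `subscriptions` are taken from the comment; the helper functions are illustrative): spreading an object copies the *references* to its arrays, so every "copy" shares the same `listeners` and `subscriptions` arrays, which can then only ever grow.

```javascript
// A shared template object, as described in the referenced commit.
const defaultQueryInfo = { listeners: [], subscriptions: [] };

// Shallow copy: the arrays are still the ones on defaultQueryInfo.
function shallowCopy(info) {
  return { ...info };
}

// Safe copy: give each clone fresh arrays so nothing accumulates
// on the shared default.
function safeCopy(info) {
  return { ...info, listeners: [], subscriptions: [] };
}

const a = shallowCopy(defaultQueryInfo);
a.listeners.push(() => {});
console.log(defaultQueryInfo.listeners.length); // 1 — leaked into the shared default

const b = safeCopy(defaultQueryInfo);
b.subscriptions.push({});
console.log(defaultQueryInfo.subscriptions.length); // 0 — isolated
```

Because every listener pushed onto the shared array is reachable from the long-lived default object, none of those closures can ever be garbage collected, which matches the slow, steady growth shown in the plots.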
In addition to trying to create fewer clones of query documents (see commits referenced above), I believe this issue may be contributing to the leak, since the reproduction repository uses …
I'm affected by this issue. In my case, I'm only relying on … This issue is quite significant, as it will force me to set up my Node servers to restart once in a while to avoid slow GC cycles and to prevent them from running out of memory.
I can confirm we see the memory leak too, and our containers are killed and restarted once every ~24 hours. I am using … Simply not calling …
Bump. Is anyone working on this or looking into it?
+1 on @RamIdeas's comment. We are really looking forward to an update on this issue at my office.
We see the exact same problem. Would really like some feedback on this issue.
Can any of you either put together a reproduction, or start Node with …? Some notes:
In order to make any progress on this issue, we need to have some idea where the persistent allocations are coming from. In other words, I take the most recent comments above as volunteering to help track down the problem. Reports of seeing the same problem (by what definition of sameness?) have limited value unless you stick around and participate in the debugging. I don't mean to single anyone out, but I hope I can encourage/empower you to dig a little deeper.
@benjamn Thanks for your help! My initial post includes a reproduction repository and instructions on how to surface the issue. Does that help?
I'm not sure whether my process manager solution that kills processes caused my heap snapshots to be inconclusive, but it will take me more time to provide meaningful data.
@amannn Yes, that repository was very helpful, though I believe we've made significant progress on the issues it highlighted. Here's the latest plot (with updated dependencies): it starts a little higher, but the slope of the line is much gentler, and as far as I can tell there's nothing Apollo-specific accumulating over time, like there was before.

I should also mention: you can now use … I don't think that difference points to any long-term memory leaks; it merely reflects the absence of some object recycling techniques used by the result caching system.

My suspicion is that the more recent commenters are seeing something more problematic than what these plots reflect, so I'm still interested in new reproductions. 🙏
After checking out the development branch, compiling, and configuring my server so I could analyze how the memory leaks (disabled cluster mode, no full-page cache, to allow re-rendering the same page over and over), I tested it with queries sent from … If a file with these statistics is needed, I might be able to provide it.
Hi, guys! Is there any update on this? |
I created https://github.com/Quramy/reproduce-apollo-react-memleak to reproduce and investigate this issue. I made 10,000 SSR requests using a simple GraphQL query and measured the size of the heap. The heap grows by about 80 bytes/request, but on the other hand, I measured without … It seems that react-apollo and apollo-client are not related to the leak. Or am I wrong in my use of react-dom/server? I'm not sure... I've also attempted to detect objects which remain on the heap by comparing 2 heap snapshots, but I couldn't... :( FYI, I'm using:
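A minimal sketch of this kind of per-request heap measurement, assuming Node is started with `--expose-gc` so `global.gc()` is available (without the flag it falls back to an un-collected reading, which is noisier); the `work` callback is a placeholder where a real measurement would call `renderToStringWithData(...)`:

```javascript
// Sample heapUsed after forcing a collection, when possible.
function heapUsed() {
  if (typeof global.gc === 'function') global.gc();
  return process.memoryUsage().heapUsed;
}

// Run `work` many times and report average bytes retained per iteration.
async function measure(iterations, work) {
  const before = heapUsed();
  for (let i = 0; i < iterations; i++) {
    await work();
  }
  const after = heapUsed();
  return (after - before) / iterations;
}

// Usage: swap the no-op for the SSR call under test.
measure(1000, async () => {}).then((bytesPerIteration) => {
  console.log(`~${bytesPerIteration.toFixed(1)} bytes/iteration retained`);
});
```

Averaging over thousands of iterations, as done in the reproduction above, is what makes a figure as small as ~80 bytes/request visible over GC noise.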
As it turns out, we may have run into the exact same problem. When can we expect a fix?
Any updates? Thanks.
Have you tried the 3.0 pre-release?
We've had issues every time we've tried updating Apollo and its dependencies. It's so bad that we use …
Hello! Sorry to bother you, but has there been any progress on this issue?
Hi, is there any update on this thread? I have a similar problem with …
Hi, same problem here. Using …
Intended outcome:

I have a service which uses the `renderToStringWithData` function from react-apollo. I expect the function to not have any memory leaks.

Actual outcome:

The service crashes every now and then, because it runs out of memory. I read a tutorial on how to diagnose memory leaks and I think I found one that is caused by the mentioned function. I have the same issue with another Node app which uses `renderToStringWithData`.

This graph shows the growing memory consumption of my reproduction with GitHunt:

![](https://raw.githubusercontent.com/amannn/GitHunt-React/memory-profiling/githunt-memory-profile.png)

How to reproduce the issue:

1. Check out the `memory-profiling` branch.
2. `yarn build`
3. `yarn start`
4. `ab -n 100 http://127.0.0.1:3000/` (sends a request to the server 100 times)
5. … `stats.json`
6. Run `python plot.py` to see the graph.

The mentioned tutorial also outlines a method for getting more information about which objects are allocated and never released by garbage collection.
Version