AsyncIterator causes memory leaks in production #124
Hey @TimSusa! Thanks for opening this issue. I have been following #114, and it seems I would rather keep this package in line with the graphql-subscriptions package. The PR over there was never merged either: apollographql/graphql-subscriptions#147. If that change ends up fixing the issues you had, let me know and I'll gladly merge it. I would love to help try to reproduce your issue with this package, and if it does reproduce, I will give it my full attention. Thanks again and best of luck!
I am having the same issue, by the looks of it. I think it does have to do with how you use the …
From reading through tc39/proposal-async-iteration#126, I believe this issue is exposed if you are using async-generator syntax sugar, e.g.:
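A minimal sketch of the async-generator sugar being described (the names here are illustrative; jedwards1211 posts a concrete version further down in this thread):

```js
// Illustrative only: wrapping a pubsub async iterator in an async generator.
// When the consumer stops iterating, return() may not propagate to the
// inner iterator until a pending next() resolves.
async function* subscribe() {
  for await (const event of pubsub.asyncIterator('SOME_TOPIC')) {
    yield { someField: event };
  }
}
```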
In this case the …
Hi, first of all I am sorry for the delay. The following points came up on our side in the meantime:

1. We found we could improve the situation by sending keep-alive messages to the client. This helped reduce the staggering number of dangling connections. Furthermore, an AWS load balancer caused a lot of trouble by cancelling every connection after a fixed time; we now pay attention to that.
2. We gave our Node process the "--optimize_for_size" flag, following this article: https://medium.com/@snird/do-not-use-node-js-optimization-flags-blindly-3cc8dfdf76fd. This made the garbage collector run more often, which helps because we receive new data frequently.
3. We improved our open-source project to use unit tests throughout and converted it to TypeScript: https://github.com/axelspringer/graphql-google-pubsub/commits/master

For now we will keep an eye on how it evolves. Best, Tim Susa
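For reference, a minimal sketch of the keep-alive setup described, assuming subscriptions-transport-ws (whose SubscriptionServer accepts a keepAlive interval in milliseconds; `schema` and `httpServer` are application-specific):

```js
const { SubscriptionServer } = require('subscriptions-transport-ws');
const { execute, subscribe } = require('graphql');

// Send a keep-alive message to each client every 10 seconds so that
// half-open connections get detected and cleaned up.
SubscriptionServer.create(
  { schema, execute, subscribe, keepAlive: 10000 },
  { server: httpServer, path: '/graphql' }
);
```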
I have the same issue in production: every time a message is received, memory increases.
You might have to move your call to …
Unfortunately, the same result. I use similar code in another project, but with the graphql-rabbitmq-subscriptions pubsub instead (also based on AsyncIterator), and it does not leak.
@groundmuffin @dobesv @TimSusa Please have a look at v2.1.2 and let us know if that fixed your issue. If it does, praise @jedwards1211 for the fix!
It probably doesn't... I discovered that the main cause of memory leaks in our application was using async generators for things like:

```js
async function* subscribe() {
  await checkUserPermissions()
  for await (const event of redisPubSub.asyncIterator('foo')) {
    yield { Foo: event }
  }
}
```

I'm considering making a Babel plugin to fix this use case, but right now the only solution is to not use async generators.
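A minimal sketch of that workaround (not jedwards1211's exact code): skip the generator wrapper, return the pubsub iterator directly, and do the mapping in the resolver, so the caller's return() reaches the underlying iterator immediately:

```js
// Hypothetical equivalent without an async generator, assuming a
// graphql-js style subscribe/resolve pair on the subscription field.
const fooSubscription = {
  subscribe: async (root, args, context) => {
    await checkUserPermissions()
    // Hand the pubsub iterator straight to the GraphQL executor;
    // unsubscribing now calls its return() directly.
    return redisPubSub.asyncIterator('foo')
  },
  // Shape each raw event into the payload the schema expects.
  resolve: (event) => ({ Foo: event }),
}
```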
@jedwards1211 Not sure I am following, should we roll back the fix? Or are you saying that those are not directly related issues?
The fix might be helpful, but it won't fix the issue fully. The async generator and `for await` loop still won't pass through the fact that the caller is no longer iterating.
No rollback needed; the issue I'm talking about is a separate thing. My PR still lowers the risk of memory leaks.
Having the same issue in production: when there are many subscribers, the service keeps getting restarted.
@ursualexandr are you using any `for await` loops?
No, I don't think so.
Does it matter if my …
Definitely doesn't matter about … So this might be causing subs to pile up if you were expecting your error to terminate the subscription.
I've tried to return …
Any update on this? We also have the issue with the following code in production on our chat server (Node.js v13.2.0):

```js
{
  joinedRoomWatcher: {
    subscribe: withFilter(
      (root, args, context) =>
        pubsub.asyncIterator(
          `${config.get('pubsub.prefix')}PARTICIPANT_JOINED_ROOM`
        ),
      async (payload, args, { identity }, info) => {
        // Assert identity matches w/ company rooms
        const { participant_id: participantId } = payload;
        const companiesIds = (await identity.getUser).companiesIds;
        return participantId && companiesIds.indexOf(participantId) !== -1;
      }
    ),
    resolve: (payload) => new Room(payload.room),
  }
}
```

Scenario:
=> Leaky result: after calling the same GraphQL subscription 10 times by refreshing the same window before closing it, PubSubAsyncIterator has retainers in memory that are never evicted, even after GC was forced before creating snapshots 1 & 2.

Chrome devtools analysis below.

Memory consumption in production below.

UPDATE: apparently this had something to do with the websocket payload originating from the client not specifying a fixed ID when opening the subscription. Adding this ID (as in the image below) results in PubSubAsyncIterator no longer retaining memory after GC.

Note that I was not using a particular GraphQL client library on the frontend for this leaky case, but the raw WebSocket API. Clients such as Apollo auto-add these IDs to subscriptions.
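For context, a hedged sketch of what a start message with an explicit ID looks like over the raw WebSocket, assuming the subscriptions-transport-ws protocol (the URL and query here are placeholders):

```js
// Hypothetical raw-WebSocket client; 'graphql-ws' is the subprotocol
// used by subscriptions-transport-ws.
const ws = new WebSocket('wss://example.com/graphql', 'graphql-ws');
ws.onopen = () => {
  ws.send(JSON.stringify({ type: 'connection_init', payload: {} }));
  ws.send(JSON.stringify({
    id: '1', // the fixed subscription ID that was missing in the leaky case
    type: 'start',
    payload: { query: 'subscription { joinedRoomWatcher { id } }' },
  }));
};
```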
Hi,
This is not an issue directly related to your repository, but as a contributor to https://github.com/axelspringer/graphql-google-pubsub I can say we use a very similar version of the AsyncIterator to yours. I wonder if you have run into similar issues?
We are observing huge problems with memory leaks in production, where about 100 users are on the system in parallel. It takes about one hour for the memory to reach its limit, which triggers a restart of the container. We only noticed this because of our monitoring; otherwise nobody would have taken notice.
Heap dumps led us to suspect that the async iterator is the culprit here:
Here are my questions:
By the way, I wonder why you are not interested in this pull request, which seems to address the problem described: #114
--> We will try that out tomorrow and come back with results.