
Seneca TCP transport consumes memory regularly, never frees it #146

Open
SzybkiSasza opened this issue Nov 22, 2016 · 3 comments
SzybkiSasza commented Nov 22, 2016

For a considerable time we have noticed that our Seneca instances using TCP transport as the main means of communication consume memory in a very regular manner. Not long ago, due to other incompatibility/stability problems, we decided to migrate from Seneca 2.1.0 to Seneca 3.x. However, this did not solve our core problem - we still see a growing memory footprint on our instances, and it happens only with the TCP transport:

[Screenshot: memory usage graph, captured 2016-11-22 13:35]

On the attached graph you can see two services sharing exactly the same codebase, initialised only with different transports. One, queue-based (our own proprietary SQS transport, which will probably be open-sourced soon ;)), is perfectly stable, whereas the second one, TCP-based, behaves in a strange yet pretty regular fashion. The drops are obviously restarts.

On 11/18 we introduced Seneca 3.x to this particular service - you can see that for some reason the saw-toothed shape got... smoothed :) Still, memory increases super-regularly, by the same amount every hour.

Instances are hosted in EC2 containers. The same behaviour is observed locally in long-term runs.
I'd like to add that our microservice isn't very complicated and all the act handlers are stateless (apart from the MongoDB connection, but that one is present on the non-TCP instances as well).


panva commented May 2, 2017

Hello @SzybkiSasza, were you able to resolve this issue whilst still using TCP transport?


andybar2 commented Aug 5, 2017

I'm facing this same issue. Is there a solution without changing the transport?

@wzrdtales (Contributor) commented

Hi everyone,

I'm currently working on a new transport, and while doing so I'm reusing some bits from this driver. It seems to me that this issue has the same root cause as bug #114: for every send, a new reconnect gets created and never destroyed, which is, one, the reason for the memory leak and, two, the reason for the memory duplication.
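The reconnect-per-send pattern described above can be sketched in isolation. This is a minimal illustration with hypothetical names, not the actual seneca-transport source: the leaky variant allocates a new "reconnect" object for every send and retains it forever, while the fixed variant caches one connection per destination and reuses it.

```javascript
// Leaky variant: every send creates a new reconnect object that is
// retained and never destroyed, so memory grows with each send.
function makeLeakySender() {
  const liveReconnects = [];
  return function send(msg) {
    const reconnect = { msg, timer: null }; // stands in for a reconnect instance
    liveReconnects.push(reconnect);         // never cleaned up -> grows forever
    return liveReconnects.length;           // number of live reconnects
  };
}

// Fixed variant: one connection per destination, reused across sends.
function makeCachedSender() {
  const connections = new Map();
  return function send(dest, msg) {
    let conn = connections.get(dest);
    if (!conn) {
      conn = { dest, sent: 0 };             // created once per destination
      connections.set(dest, conn);
    }
    conn.sent += 1;
    return connections.size;                // stays at the distinct-destination count
  };
}

const leaky = makeLeakySender();
for (let i = 0; i < 1000; i++) leaky('hello');
// 1000 sends -> 1000 retained reconnect objects

const cached = makeCachedSender();
for (let i = 0; i < 1000; i++) cached('10.0.0.1:9000', 'hello');
// 1000 sends to one destination -> 1 retained connection
```

Under this reading, the fix referenced in the commits below amounts to moving from the first shape to the second: tearing down or reusing the reconnect instead of allocating a fresh one per message.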

wzrdtales added a commit to wzrdtales/seneca-transport that referenced this issue Mar 3, 2018
Fixes senecajs#114
Fixes senecajs#146

Signed-off-by: Tobias Gurtzick <magic@wizardtales.com>
wzrdtales added a commit to wzrdtales/seneca-transport that referenced this issue Mar 3, 2018
Fixes senecajs#114
Refers senecajs#146

Signed-off-by: Tobias Gurtzick <magic@wizardtales.com>
@rjrodger rjrodger self-assigned this Feb 16, 2019
@rjrodger rjrodger added the bug label Feb 16, 2019