There seems to be a memory leak with constantly growing memory consumption #448

Closed
3 tasks done
dadrus opened this issue Jan 18, 2023 · 1 comment · Fixed by #449
Labels
bug Something isn't working

Comments

dadrus (Owner) commented Jan 18, 2023

Preflight checklist

  • I agree to follow this project's Code of Conduct.
  • I have read and am following this repository's Contribution Guidelines.
  • I could not find a solution in the existing issues, docs, nor discussions.

Describe the bug

Every request to heimdall increases its memory consumption by a small amount.

How can the bug be reproduced

To properly observe the memory consumption, you'll need an observability stack, e.g. one based on Grafana, Prometheus and Loki.

  1. Install heimdall in demo mode, e.g. via `helm install heimdall --namespace heimdall --create-namespace --set demo.enable=true dadrus/heimdall`
  2. Install a pod monitor (an example is available in the heimdall documentation)
  3. Wait until the setup is up and running
  4. Run something like `for i in {1..100000}; do curl -H "Host: demo-app" 127.0.0.1/heimdall-demo/anonymous; done` (an equivalent Go load generator is sketched after this list)
  5. Check the pod metrics to see how the used memory keeps growing
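
In case a small load generator is preferred over the shell loop in step 4, the sketch below does the same thing in Go. It is only an illustration and assumes, just like the curl command, that the demo setup from step 1 is reachable on 127.0.0.1 (e.g. via a port-forward or the demo ingress).

```go
package main

import (
	"io"
	"net/http"
)

func main() {
	client := &http.Client{}
	for i := 0; i < 100000; i++ {
		// Same request as the curl command: GET /heimdall-demo/anonymous
		// on 127.0.0.1 with the Host header set to "demo-app".
		req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/heimdall-demo/anonymous", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "demo-app"
		resp, err := client.Do(req)
		if err != nil {
			continue // ignore transient errors; the goal is only to generate load
		}
		io.Copy(io.Discard, resp.Body) // drain the body so the connection can be reused
		resp.Body.Close()
	}
}
```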

Relevant log output

No response

Relevant configuration

No response

Version

0.5.0-alpha

On which operating system are you observing this issue?

Linux

In which environment are you deploying?

Kubernetes with Helm

Additional Context

It looks like this issue was introduced with #359, which basically creates a new counter for each and every request.
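
To illustrate what such a pattern does to memory, here is a minimal sketch in terms of the Prometheus Go client. It is not the actual heimdall code; the `handleRequest` hook and the `trace_id` label are assumptions used purely for illustration of a per-request counter.

```go
// Illustrative sketch only: a brand-new counter (i.e. a new time series) is
// created and registered for every request, here labelled with a hypothetical
// per-request trace ID. The registry retains all of these collectors for its
// whole lifetime, so memory grows with the number of handled requests and is
// never reclaimed.
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

var registry = prometheus.NewRegistry()

// handleRequest stands for any per-request hook wiring metrics to traces.
func handleRequest(traceID string) {
	c := prometheus.NewCounter(prometheus.CounterOpts{
		Name:        "requests_total",
		Help:        "Total number of handled requests.",
		ConstLabels: prometheus.Labels{"trace_id": traceID}, // unbounded label cardinality
	})
	registry.MustRegister(c) // kept alive by the registry forever
	c.Inc()
}

func main() {
	for i := 0; i < 100000; i++ {
		handleRequest(fmt.Sprintf("trace-%d", i))
	}
	mfs, _ := registry.Gather()
	// Still one metric family, but 100000 distinct series backing it.
	fmt.Println("series:", len(mfs[0].GetMetric()))
}
```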

@dadrus dadrus added the bug Something isn't working label Jan 18, 2023
dadrus (Owner, Author) commented Jan 18, 2023

The profiling information available with #446 clearly shows that the issue is related to the implementation done in #359. After removing the correlation between metrics and traces, everything is stable and there are no memory consumption issues anymore.

[image: pod memory consumption over time]
The image above also clearly shows how updating the aforementioned implementation (removing the correlation) stabilized the memory consumption.
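
For comparison, here is a minimal sketch of the stable approach, again not the literal change made in heimdall: the counter is created and registered exactly once with a bounded label set and only incremented in the request path, so the series count, and with it the memory held by the registry, stays flat under load.

```go
package main

import "github.com/prometheus/client_golang/prometheus"

var (
	registry = prometheus.NewRegistry()

	// Registered once; only label values with bounded cardinality, no per-request IDs.
	requestsTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "requests_total",
			Help: "Total number of handled requests.",
		},
		[]string{"code"},
	)
)

func init() {
	registry.MustRegister(requestsTotal)
}

// handleRequest only increments the pre-registered counter; no new collectors
// are created per request.
func handleRequest(statusCode string) {
	requestsTotal.WithLabelValues(statusCode).Inc()
}

func main() {
	for i := 0; i < 100000; i++ {
		handleRequest("200")
	}
}
```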
