daprd occupies too much memory #6581
Comments
It is worth adding that the request volume of this application is also very small: on average 5-10 requests every 10 seconds. The memory usage above keeps increasing with running time (more like a memory leak) and does not decrease significantly.
Which features/APIs in Dapr are you using?
I don't use any Dapr API or DaprClient in this app's code.
You stated you are loading Redis state and RabbitMQ pub/sub. If you remove these components from the namespace, do you observe any change in memory consumption? Also, can you please paste the logs of the daprd container?
I can't remove the components because this is a live environment. I checked this pod on dapr-dashboard: neither container has any logs (log level is warn), and the kubectl logs command shows nothing either.
Is the only usage of the
In my cluster, Dapr includes some components (RabbitMQ, Redis, etc.). The example pod in this case (iaas-config-axe) did not make any calls to the Dapr components. iaas-config-axe only injects the Dapr sidecar, so that other services (pods) can call it via service invocation through the Dapr SDK.
A similar problem has arisen again.
This pod has only been running for 23 hours, and the sidecar (daprd) already occupies almost five times the memory of the app.
This pod uses service invocation and state storage.
I tried turning off metrics, and memory usage is now normal.
The following pods show their memory occupancy after running for 18 hours (including peak traffic).
So, are metrics accumulated in the sidecar's memory and only released when they are scraped? My cluster does not use Prometheus, and metrics are enabled by default.
@Hsuwen The metrics collector does require additional memory. It's very interesting that it's having such a large impact for you, however. Probably something we should investigate.
I will continue to observe the resource usage of the pods.
@Hsuwen do you have high cardinality URLs in your system? for example |
We are getting similar reports from other users. I am thinking of re-opening this as it seems to be having a broad impact. CC: @yaron2
I'm getting similar reports |
Is there something that can be done from the app side to remove the effect? For example, rewriting URLs to use query params instead of path segments (say, /orders?id=12345 instead of /orders/12345, so each unique ID does not create a new metric label)?
It would help a lot if we could rule out high-cardinality metrics. If you can disable metrics altogether and report whether memory then exhibits normal usage patterns, that would be great.
I can provide all necessary information without affecting the normal use of the production environment. Please tell me exactly how to do it. |
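For completeness (and with the caveat that the app id and port below are placeholders, not values taken from this cluster), per-application metrics can be switched off with a sidecar annotation on the Deployment's pod template:

```yaml
# Sketch of a pod template; only the Dapr annotations are relevant here.
template:
  metadata:
    annotations:
      dapr.io/enabled: "true"
      dapr.io/app-id: "my-app"         # placeholder app id
      dapr.io/app-port: "8080"         # placeholder app port
      dapr.io/enable-metrics: "false"  # turn off the sidecar's metrics collection and endpoint
```

Metrics can also be disabled for every app that references a given Dapr Configuration by setting spec.metrics.enabled to false in that Configuration resource.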
About this issue, I found that the docs have been updated. Several details are still not very clear:
Regarding point 2:
This issue has been automatically marked as stale because it has not had activity in the last 60 days. It will be closed in the next 7 days unless it is tagged (pinned, good first issue, help wanted or triaged/resolved) or other activity occurs. Thank you for your contributions.
Still active, should be fixed by #6723
This issue has been automatically marked as stale because it has not had activity in the last 60 days. It will be closed in the next 7 days unless it is tagged (pinned, good first issue, help wanted or triaged/resolved) or other activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had activity in the last 67 days. If this issue is still valid, please ping a maintainer and ask them to label it as pinned, good first issue, help wanted or triaged/resolved. Thank you for your contributions.
In my environment, everything runs normally with Dapr, but the sidecar (daprd) occupies a lot of memory, much more than 5 times the memory of my application.
My environment:
Server: AWS EKS version 1.27
Dapr CLI: 1.11
Dapr Runtime: 1.11
Dapr SDK: dot-net 1.10
Dapr Components: redis(statestore), rabbitmq(pubsub)
Other: zipkin, middleware.http.ratelimit, middleware.http.routeralias
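For reference, the Redis state store in a setup like this is declared as a Dapr Component; a minimal sketch, with placeholder name, namespace, host, and secret rather than the real values from this cluster:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore        # placeholder component name
  namespace: default      # placeholder namespace
spec:
  type: state.redis
  version: v1
  metadata:
    - name: redisHost
      value: redis-master.default.svc.cluster.local:6379  # placeholder host
    - name: redisPassword
      secretKeyRef:
        name: redis            # placeholder Kubernetes secret
        key: redis-password
```

The RabbitMQ pub/sub component follows the same pattern, with type pubsub.rabbitmq.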
For example, for one of the applications, iaas-config-axe, the sidecar occupies 661Mi of memory while my program uses only 89Mi. I think this is abnormal. Usually, whenever my application processes a request, daprd also accepts and processes a request, and the number of daprd requests does not exceed five times the number of requests in my program.
Moreover, this application does not use service invocation, state storage, or event publish/subscribe at all. I have never used DaprClient in my code; I only inject the Dapr sidecar into the pod.
Of course, I have already set the relevant annotations for this application by following the production guidelines.
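For illustration, the sidecar resource annotations recommended by the production guidelines look roughly like this (the limits below are placeholders, not the exact values used in my cluster):

```yaml
# Sketch of the resource-related sidecar annotations on the pod template.
annotations:
  dapr.io/enabled: "true"
  dapr.io/sidecar-cpu-request: "100m"
  dapr.io/sidecar-cpu-limit: "300m"
  dapr.io/sidecar-memory-request: "250Mi"
  dapr.io/sidecar-memory-limit: "1000Mi"  # the sidecar is OOM-killed if it grows past this
```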
This application's sidecar is not the only one with memory issues; there are others. I have given one of the most representative examples here.
Do you have any suggestions regarding this issue? How should I investigate and solve it?