I have set up a StatsD node that sends metrics to a cluster-wide carbon relay server (only carbon-relay is installed on that Linux server). The relay forwards the metrics to 3 Graphite nodes (version 0.9.15), and finally I render the graphs in Grafana.
Users are sending data that takes a long time to render in Grafana, because they send a huge number of sub-metrics under a single data source name (for example Test.numerous.*.count, where * expands to a huge number of metrics within Test).
Could you please advise:
How can I track the data sources? It is very difficult to know who is sending the data.
How can I block metrics matching a certain pattern at the StatsD or cluster-wide carbon relay level?
Thanks in advance,
Kiran
Hi @kira510,
Sorry for the late response, I somehow missed your question.
First, it's unfortunately quite hard to track connections in a Graphite cluster. You can enable access logging, and the log will then show the metrics requested and the source IP of each requestor.
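Once access logging is on, a quick way to see who generates the heavy queries is to count render requests per source IP. A minimal sketch in Python, assuming a common Apache-style log format (the exact format depends on your graphite-web front-end configuration, so adjust the regex to match your logs):

```python
import re
from collections import Counter

# Hypothetical sample lines; real entries depend on your web server's log format.
SAMPLE_LOG = """\
10.0.0.5 - - [01/Jan/2017:10:00:01] "GET /render?target=Test.numerous.*.count HTTP/1.1" 200
10.0.0.5 - - [01/Jan/2017:10:00:02] "GET /render?target=Test.numerous.*.count HTTP/1.1" 200
10.0.0.9 - - [01/Jan/2017:10:00:03] "GET /render?target=other.metric HTTP/1.1" 200
"""

def requests_by_ip(log_text):
    """Count /render requests per source IP address."""
    counts = Counter()
    for line in log_text.splitlines():
        # First field is the client IP; capture it only for render requests.
        m = re.match(r'(\S+) .*"GET /render\?', line)
        if m:
            counts[m.group(1)] += 1
    return counts

print(requests_by_ip(SAMPLE_LOG))
```

The same idea works for the carbon side: grep the listener log for connection lines and tally the peer addresses to find the senders, not just the readers.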
For the second question: yes, what @wolfzhaoshuai said. Graphite has a blacklist feature too, but for cluster-wide blocking you may prefer to do it at the relay level. The third-party carbon-c-relay and graphite-ng relays can do that.
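For example, carbon-c-relay can drop metrics matching a pattern before they reach the cluster by routing them to its built-in blackhole target. A sketch of such a rule, assuming the Test.numerous prefix from the question is what you want to block:

```
# Drop everything under Test.numerous before it is forwarded anywhere.
match ^Test\.numerous\.
    send to blackhole
    stop
    ;
```

Graphite's own blacklist works similarly on each carbon node: enable it in carbon.conf (USE_WHITELIST = True) and put one regex per line, such as ^Test\.numerous\., into blacklist.conf.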