
[Custom request] Metrics & Dashboard #118

Open
SkylerLutz opened this issue Jul 6, 2023 · 6 comments

@SkylerLutz

Hi, I would like to learn more about "Metrics & Dashboard". Unfortunately, there is no documentation. What functionality exists today? What integrations are required? Is there a GUI? Thank you

@SkylerLutz
Author

@AbdelrhmanHamouda for vis

@AbdelrhmanHamouda
Owner

AbdelrhmanHamouda commented Jul 10, 2023

Hi @SkylerLutz,
Thank you for showing interest in the project and actively participating. I have been meaning to write proper documentation for this part of the operator for a while, but time keeps eluding me on that front. I may use this issue as a reason to do so.

However, in a nutshell:

  • Every running test gets an automatically configured locust exporter container. This is a very stable and robust Go-based project. In our production environment, where we use the operator with thousands of tests, we have never encountered an issue with this exporter.
  • The container exposes Prometheus-compatible metrics that can be scraped by any Prometheus server and visualised in any metrics provider (e.g. Grafana, New Relic, Datadog, etc.).

One of the nice things about this approach is that it is plug and play with any observability solution already configured in your cluster for other services, so nothing special needs to be done for this. Once a test runs and the observability solution is configured, everything becomes automatically available for building a dashboard or querying the data.

I know it is not a super detailed answer, but does that satisfy your question?

@PaulRudin

So if you have Prometheus and Grafana running in your k8s cluster, what more do you need to do in order to get data into a Grafana dashboard? I've imported the locust-exporter dashboard, but it just shows no data after a test job has run, so presumably there's more configuration necessary...

@dennrich

You at least need some servicemonitor or podmonitor to make Prometheus scrape the metrics. But the service lacks labels, which are the usual way to configure the selector in a servicemonitor.

@AbdelrhmanHamouda How do you scrape the metrics? Do you have some example config you can share?

@AbdelrhmanHamouda
Owner

Hello all, thank you for your extreme patience with me on this. I'll provide a detailed answer in a future (very very soon) comment. But in a nutshell, we have the setup below, which ran with NewRelic and is currently running with Datadog with 100% reliability.
General idea (very standard approach):

  • Set the Prometheus agent/server to scrape pods annotated with prometheus.io/scrape: true
  • Instruct Prometheus to target the correct path & port via the annotations it expects, prometheus.io/path and prometheus.io/port -> the operator already sets all of these annotations for you (a sketch of such a scrape config follows this list)
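For anyone wiring this up from scratch, a minimal sketch of that annotation-based scrape job is below. This is the standard kubernetes_sd_configs + relabelling pattern, not something copied from the operator's docs; the job name is arbitrary.

```yaml
# Minimal sketch: annotation-based pod discovery for Prometheus.
# Only pods annotated prometheus.io/scrape: "true" are kept; the
# prometheus.io/path and prometheus.io/port annotations (which the operator
# sets on the test pods) override the metrics path and scrape target port.
scrape_configs:
  - job_name: kubernetes-pods   # arbitrary name
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```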

Newrelic setup:

  • We used a standard Prometheus agent that was configured to send its scraped metrics to NewRelic (one way to wire that up is sketched below). The configuration was exactly as mentioned in the "General idea" section above.
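One common way to do that last hop (plain Prometheus agent -> New Relic) is Prometheus remote_write against New Relic's metric API. This is only a hedged sketch of that option, not necessarily the exact mechanism used above; YOUR_LICENSE_KEY and the prometheus_server value are placeholders.

```yaml
# Hedged sketch: forward scraped metrics to New Relic via remote_write.
# YOUR_LICENSE_KEY is a placeholder; prometheus_server is an arbitrary
# name used to identify this Prometheus instance in New Relic.
remote_write:
  - url: https://metric-api.newrelic.com/prometheus/v1/write?prometheus_server=locust-tests
    authorization:
      credentials: YOUR_LICENSE_KEY
```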

Datadog setup:

  • For Datadog, they provide their own scraper (part of their Datadog agent) that basically accepts a "Prometheus like" config, which we configured to do exactly the same thing => target pods with prometheus.io/scrape: true and use prometheus.io/path & prometheus.io/port to know which port and endpoint to hit on that pod (see the sketch below).
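To make that concrete: if the Datadog agent is installed through the official Helm chart, its built-in Prometheus/OpenMetrics scraping of annotated pods can be turned on roughly as below. Treat the option names as a sketch to verify against the chart version you deploy, not as the exact config used above.

```yaml
# Hedged sketch: Datadog Helm chart values enabling the agent's
# Prometheus/OpenMetrics autodiscovery of pods annotated with
# prometheus.io/scrape, prometheus.io/path and prometheus.io/port.
datadog:
  prometheusScrape:
    enabled: true            # scrape workloads carrying prometheus.io/* annotations
    serviceEndpoints: false  # pods only; set true to also scrape service endpoints
```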

We don't set up anything at the k8s Service level.

@dennrich

We are using kube-prometheus-stack, which is by default configured to use servicemonitors or podmonitors to discover additional targets; it doesn't discover targets based on annotations. I can't change the Prometheus configuration because it is common infra code from our DevOps team.

An additional option in the operator to create a servicemonitor would be nice. But labels on the service would be more than enough for a start, so people can create their own servicemonitor.

As far as I am aware, servicemonitors are the standard way, so I think lots of people would benefit from this integration.
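For anyone who ends up creating their own, a rough sketch of what such a servicemonitor could look like under kube-prometheus-stack is below. The selector label, port name, and release label are hypothetical placeholders, since (as noted above) the operator-created Service currently carries no labels to select on.

```yaml
# Rough sketch of a kube-prometheus-stack ServiceMonitor. The selector label
# and port name are HYPOTHETICAL placeholders; the operator does not set them
# today, which is exactly the gap discussed in this thread.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: locust-tests
  labels:
    release: kube-prometheus-stack   # must match your Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/managed-by: locust-k8s-operator   # hypothetical label
  endpoints:
    - port: metrics                  # hypothetical port name on the Service
      interval: 15s
```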
