Following Instructions but no Config File #13
The exporter is deliberately built without any configuration file and with a very minimal set of configuration parameters, so there is nothing you need to pass it in order for it to work.
I have set up that configuration file and placed it in a temp folder, but how do I get the Docker container to see that file? Or is there a specific place I need to put it so the container sees it automatically?
Not being able to take advantage of the labeling seems like a bad idea. Is there a reason both exporters, FA and FB, were created this way?
I ran this command and the container still isn't seeing the config file:
@andrewm659 I completely understand your point, but the issue arises from the fact that providing authentication/authorization credentials or tokens as query parameters is not a good practice, so we decided to remove that possibility. Prometheus provides the authorization config key for that specific purpose. This approach unfortunately requires defining a job for each target FlashArray, as it is not possible to create a unique API token for a pool of arrays.
@genegr Does it look like I have the command correct? When it loads, it doesn't seem to be seeing the config.
@ApickettLGA The command you are using runs the exporter properly, but the additional volume you are trying to pass to the container is not understood or used by the exporter. The exporter does not use any config file, and it has only the few options that are shown when it is executed with the -h/--help flag.
All the configuration happens on the Prometheus side: in the Prometheus config file you add the target array as a query parameter of the scraped endpoint, like this:
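A minimal sketch of such a job, assuming the exporter listens on its default port 9490 (the job name, addresses, and token below are illustrative placeholders, not a confirmed configuration):

```yaml
# Hypothetical prometheus.yml fragment: one scrape job per FlashArray.
# The array address is passed as the `endpoint` query parameter and the
# FA API token is supplied via the `authorization` key.
scrape_configs:
  - job_name: purefa_array               # illustrative job name
    metrics_path: /metrics/array
    authorization:
      credentials: <your-FA-API-token>   # placeholder token
    params:
      endpoint: ['10.6.100.71']          # FlashArray management address
    static_configs:
      - targets: ['10.6.25.132:9490']    # exporter host:port
```

Because each array needs its own API token, each array gets its own job stanza like the one above.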
@genegr I now see the server itself needs this installed and then the scrape can be configured, but I did that and it's still not loading the config:

```yaml
# my global config
global:
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label
```
I was hoping I could get some more assistance, as I know this is close, but I'm not sure what else might be missing.
@genegr or anyone: I configured Prometheus on the Ubuntu server and configured the scrape as above, but the Pure exporter container is still not showing the correct information. Is there something I'm missing or can check to troubleshoot further?
@ApickettLGA it's a little hard to read your prometheus.yaml config as it's not in a code block and markdown has reformatted it. It looks like Prometheus is not picking up your authorization credentials, since YAML requires correct indentation. We recently posted some additional content on how to deploy Prometheus and Grafana with overview dashboards for FA/FB. There is a troubleshooting section there for Prometheus; try running the check described there to prove your config works.
Also, I notice you are running the query against /metrics. While this will work, it is an expensive query; it may take longer than the timeout and therefore may fail. It is recommended to configure a scrape job per metric endpoint: /metrics/array, /metrics/volumes, /metrics/hosts, etc. I have formatted your YAML configuration, included jobs for /metrics/array, /metrics/volumes and /metrics/hosts, and run it through an online YAML validator to ensure it conforms to YAML formatting.
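For reference, a sketch of what such a per-endpoint layout might look like (the job names, token, timeout, and addresses are placeholders, not the poster's actual configuration):

```yaml
global:
  scrape_timeout: 30s                    # illustrative; raise if scrapes time out

scrape_configs:
  - job_name: purefa_array
    metrics_path: /metrics/array
    authorization:
      credentials: <your-FA-API-token>   # placeholder token
    params:
      endpoint: ['10.6.100.71']          # FlashArray management address
    static_configs:
      - targets: ['10.6.25.132:9490']    # exporter host:port

  - job_name: purefa_volumes
    metrics_path: /metrics/volumes
    authorization:
      credentials: <your-FA-API-token>
    params:
      endpoint: ['10.6.100.71']
    static_configs:
      - targets: ['10.6.25.132:9490']

  - job_name: purefa_hosts
    metrics_path: /metrics/hosts
    authorization:
      credentials: <your-FA-API-token>
    params:
      endpoint: ['10.6.100.71']
    static_configs:
      - targets: ['10.6.25.132:9490']
```

Splitting the scrape this way keeps each request small enough to finish within the scrape timeout.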
There are some troubleshooting steps in the README.md which might help you with each component. For example, to check the exporter is working we need to pass the bearer token.
@james-laing Thank you for all your help, it's really appreciated. I have taken your information and applied it, and it seems to be passing the check; I can run the curl, but when I go to the page I'm still not getting any results back.

Prometheus check:

```shell
promtool check config /etc/prometheus/prometheus.yml
```

And the curl check looks fine:

```shell
curl -H 'Authorization: Bearer abc-blah' -X GET http://10.6.25.132:9490/metrics/array?endpoint=10.6.100.71
```

```
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.9432e-05
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 93
```

Browser check:
@ApickettLGA The browser can't pass an authorisation token, which is why the browser check is failing. The curl output you provided looks truncated; you need to scroll down and check you can see the expected metrics. It looks like you are almost there, so please try the troubleshooting section of the setup README.md, and if there is an issue please provide feedback. If you still require consultation to pull data from FlashArray, Pure Storage Professional Services offer a Monitoring and Observability service specifically for this; just get in touch with your sales representative.
@ApickettLGA Did you see targets listed in Prometheus, and can you see results with a PromQL query?
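As a quick sanity check (the job name here is an illustrative placeholder), a query like this in the Prometheus expression browser should return 1 for every target that is being scraped successfully:

```promql
up{job="purefa_array"}
```

The Status > Targets page in the Prometheus UI shows the same information, including the last scrape error for any target that is down.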
@james-laing Thank you for your response; sorry, we have had a ton of outages, so I couldn't get back to this.
So I know it is connecting with the API key. I'm going to try troubleshooting with Prometheus now. Thank you.
@james-laing I was able to connect to Prometheus, and I'm able to run active queries against my Pure Storage and see live data.
@ApickettLGA great news, excellent! Are you planning to point Grafana at your Prometheus TSDB to view broader and historical metrics?
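If so, a minimal sketch of a Grafana datasource provisioning file, assuming Prometheus on its default port 9090 and a placeholder hostname:

```yaml
# Hypothetical /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://<prometheus-host>:9090   # placeholder address
```

The same datasource can also be added by hand in the Grafana UI under Connections; provisioning is just the repeatable way to do it.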
@james-laing - Well, right now I'm able to see the data, which is very good, but I'm still having an issue with the container web page; I'm still getting "Target authorization token is missing" when I visit it. The plan is to have our system monitoring software, Datadog, query the page and pull the information onto their page. That way all the stats will be collected, but I need the container page to work. It seems like I have all the pieces working except the container.
@ApickettLGA just out of curiosity, why not use the Pure FlashArray Datadog integration that we already have?
Funny enough, Datadog sent us down the rabbit hole of using this exporter to send data to DD. I'm trying the link you sent to see how that works. Thank you.
@sdodsley Reading the Datadog steps, it does require me to have the container set up, so this leads me back to my page not working correctly. If I can get the container web page to work, then I'd be done with this step. I have Prometheus working, but it doesn't seem like the page is working from the container.
@ApickettLGA if your objective is to get data into Datadog, you would simplify your solution by using the Datadog integration for Pure FA. It works with both the deprecated pure-exporter and the purefa_openmetrics-exporter. Configure the
Restart Datadog and check the metrics are being pulled with
If the data is already in Prometheus and you wish to continue with this solution, you will need to point Datadog at the Prometheus instance, not the exporter. In your previous comments you've stated you are trying to connect to the purefa-openmetrics-exporter from the browser. This won't work, as the browser cannot pass the bearer token the way cURL, Prometheus and Datadog can. Hope this helps.
That makes sense; I didn't realize it wouldn't work from my browser. I have configured my DD Agent and am now waiting for my networking team to open the port for the connection, and then I think I might be good to go! Thank you all for your assistance!
To correct @james-laing's comment - the current published I'm working on the
@james-laing, thank you for all your help! @chrroberts-pure I finally have everything up and running and we are getting some metrics into Datadog, but we aren't seeing all of them, and now Datadog is asking us to upgrade to 1.1.0, but I don't see this version published. Can I upgrade to the latest version to get more metrics, or do you know when 1.1 will be released?
Hi @ApickettLGA - the Datadog PureFA integration v1.1 was released on Feb 28, 2023 (DataDog/integrations-extras#1750). https://docs.datadoghq.com/integrations/purefa/ also lists the version as v1.1.0.
Also, feel free to reach out in the Observability channel of our Slack workspace; I'll be in there if you'd like to connect.
Closing this as the latest OME release and DD integration provide all the fixes. |
I'm new to Docker and containers, but I have the basic configuration running and have gained access to the page, so I know the container is running. Which example is the best configuration file for pulling basic metrics, and how do I tell the Docker build, or which configuration file do I need to modify, to use the config file I created?