
Duplicate data collected #5489

Closed
SteveYf opened this Issue Apr 19, 2019 · 2 comments

SteveYf commented Apr 19, 2019

Hi, dear developers. I ran into trouble when I used Prometheus to monitor many APIs.
Prometheus version: 2.9.0
Exporter: prometheus_client

I have a server with four services running on it, and each service is independent. I need to monitor these four services at the same time.

The scrape_configs section of my prometheus.yml:

......
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  #- job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

  #  static_configs:
  #  - targets: ['localhost:9090']

  # target
  - job_name: 'test1'
    static_configs:
    - targets: ['192.168.175.22:29090']

  # target
  - job_name: 'test2'
    static_configs:
    - targets: ['192.168.175.22:29016']

  # target
  - job_name: 'test3'
    static_configs:
    - targets: ['192.168.175.22:29008']

  # target
  - job_name: 'test4'
    static_configs:
    - targets: ['192.168.175.22:29009']

The configuration runs normally and metrics are collected, but every metric is collected four times, no matter what type of metric it is. This is very bad for me, because I want to use aggregation operations to process the data.

I read the relevant documentation, which says:

A scrape_config section specifies a set of targets and parameters describing how to scrape them. In the general case, one scrape configuration specifies a single job. In advanced configurations, this may change.

What is the so-called advanced configuration? Is it possible to monitor multiple services on a single machine without the data being duplicated?
I also looked at dynamic management of the configuration file, but I don't think it can solve this problem.
What I expect is to monitor multiple services on a single machine separately, with the metrics obtained being independent of each other.
Is my configuration file wrong? Or does the current Prometheus not support this yet?
Thank you very much for your help.
Steve Bailey.
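
A quick way to check whether the duplication comes from the exposed endpoints themselves (rather than from the scrape configuration) is to fetch each target's /metrics and compare the metric names they return; if all four endpoints expose the same series, Prometheus will ingest each of them four times, once per job. A minimal check, using only the targets listed in the configuration above and the Python standard library:

import urllib.request

# The four targets from the scrape configuration above.
targets = [
    "http://192.168.175.22:29090/metrics",
    "http://192.168.175.22:29016/metrics",
    "http://192.168.175.22:29008/metrics",
    "http://192.168.175.22:29009/metrics",
]

for url in targets:
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    # Keep just the metric name from each sample line of the text format,
    # skipping HELP/TYPE comment lines.
    names = {
        line.split("{", 1)[0].split(" ", 1)[0]
        for line in body.splitlines()
        if line and not line.startswith("#")
    }
    print(url, "->", len(names), "metric names")

If the four sets of names are largely identical, the services are exposing each other's metrics, and the fix belongs on the exporter side rather than in prometheus.yml.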


simonpasquier commented Apr 19, 2019

Thanks for your report. It looks as if this is actually a question about usage and not development.

To make your question, and all replies, easier to find, we suggest you move this over to our user mailing list, which you can also search. If you prefer more interactive help, join us on our IRC channel, #prometheus on irc.freenode.net. Please be aware that our IRC channel has no logs, is not searchable, and that people might not answer quickly if they are busy or asleep. If in doubt, you should choose the mailing list.

Once your questions have been answered, please add a short line pointing to relevant replies in case anyone stumbles here via a search engine in the future.


SteveYf commented Apr 19, 2019

OK, I have solved this problem: I set the multi-process variable so that its scope is the app level.
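
In case someone finds this via a search engine later: assuming the "multi-process variable" refers to the Python prometheus_client multiprocess mode (its prometheus_multiproc_dir setting), scoping it per app means each service points that variable at its own directory and exposes its metrics through a MultiProcessCollector, so its /metrics endpoint only contains that service's own data. A rough sketch of such an endpoint (the directory path and setup are illustrative, not taken from this issue):

# Each service sets its own directory before starting its workers, e.g.:
#   export prometheus_multiproc_dir=/var/run/test1_metrics
from prometheus_client import (
    CONTENT_TYPE_LATEST,
    CollectorRegistry,
    generate_latest,
    multiprocess,
)

def metrics_app(environ, start_response):
    # Build a fresh registry and let the multiprocess collector aggregate
    # the files written by this app's worker processes.
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    data = generate_latest(registry)
    start_response("200 OK", [
        ("Content-Type", CONTENT_TYPE_LATEST),
        ("Content-Length", str(len(data))),
    ])
    return [data]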
