
Prometheus scrape taking too long #2863

Closed
ganeshghube opened this Issue Jun 20, 2017 · 2 comments


ganeshghube commented Jun 20, 2017

Prometheus scrapes are taking a long time. The scrape configuration itself looks fine.

What did you expect to see?
Expected scraping to work normally.

What did you see instead? Under which circumstances?

Environment

  • System information:

    4.4.41-35.53.amzn1.x86_64 #1 SMP Mon Jan 9 23:00:57 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

  • Prometheus version:

prometheus, version 1.7.1 (branch: master, revision: 3afb3ff)
build user: root@0aa1b7fc430d
build date: 20170612-11:44:05
go version: go1.8.3

  • Alertmanager version:

alertmanager, version 0.5.1 (branch: master, revision: 0ea1cac51e6a620ec09d053f0484b97932b5c902)
build user: root@fb407787b8bf
build date: 20161125-08:14:40
go version: go1.7.3

  • Prometheus configuration file (see the note after this list):

    global:
      scrape_interval: 15s
      scrape_timeout: 10s
      evaluation_interval: 15s
      external_labels:
        monitor: AWS

    rule_files:
      - sqartm.rules

    scrape_configs:
      - job_name: prometheus
        scrape_interval: 15s
        scrape_timeout: 10s
        metrics_path: /metrics
        scheme: http
        static_configs:
          - targets:
              - localhost:9100
            labels:
              host: prometheus

      - job_name: aws tag
        scrape_interval: 15s
        scrape_timeout: 10s
        metrics_path: /metrics
        scheme: http
        ec2_sd_configs:
          - region: us-east-1
            refresh_interval: 1m
            port: 9100
        relabel_configs:
          - source_labels: [__meta_ec2_tag_Name]
            separator: ;
            regex: aws tag
            replacement: $1
            action: keep
  • Alertmanager configuration file:
    No change

  • Logs:
    time="2017-06-20T12:01:45Z" level=info msg="Completed full maintenance sweep through 20861 in-memory fingerprints in 134.155137ms." source="storage.go:1398"
    time="2017-06-20T12:01:46Z" level=info msg="Completed full maintenance sweep through 20861 in-memory fingerprints in 132.868738ms." source="storage.go:1398"
    time="2017-06-20T12:01:47Z" level=info msg="Completed full maintenance sweep through 20861 in-memory fingerprints in 132.615854ms." source="storage.go:1398"

brian-brazil (Member) commented Jul 14, 2017

(This comment has been minimized.)


lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited the conversation to collaborators on Mar 23, 2019
