
panic: runtime error: invalid memory address or nil pointer dereference #2382

Closed
nordbranch opened this Issue Jan 31, 2017 · 7 comments

nordbranch commented Jan 31, 2017

What did you do?
Upgraded from prometheus-1.3.1.linux-amd64 to prometheus-1.5.0.linux-amd64 (installed under /opt/prometheus-1.5.0.linux-amd64/).
The service runs for slightly less than 24 hours before hitting this panic. Little else was changed in going from 1.3.1 to 1.5.0, and the problem did not occur with 1.3.1.

What did you expect to see?
Prometheus continuing to run, as it did under 1.3.1.

What did you see instead? Under which circumstances?
A nil pointer panic in the EC2 service discovery (stack traces below), roughly once a day.

Environment

  • System information:

AWS EC2 c4.4xlarge
Linux 3.13.0-48-generic x86_64

  • Prometheus version:

./prometheus -version
prometheus, version 1.5.0 (branch: master, revision: d840f2c)
build user: root@a04ed5b536e3
build date: 20170123-13:56:24
go version: go1.7.4

  • Alertmanager version:

(not likely relevant, but for completeness)
./alertmanager -version
alertmanager, version 0.4.2 (branch: master, revision: 9a5ab2fa63dd7951f4f202b0846d4f4d8e9615b0)
build user: root@2811d2f42616
build date: 20160902-15:33:13
go version: go1.6.3

  • Prometheus configuration file:
global:
  scrape_interval: 30s

rule_files:
  - /etc/prometheus/prometheus.rules
  - /etc/prometheus/aerospike.rules
  - /etc/prometheus/kafka.rules
  - /etc/prometheus/zookeeper.rules

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.

  - job_name: aerospike_nodeexporter
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: (.*aerospike.*)
      - source_labels: [__meta_ec2_tag_environment]
        action: keep
        regex: prod
      - action: labelmap
        regex: __meta_(.+)

  - job_name: aerospike
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: (.*aerospike.*)
      - source_labels: [__meta_ec2_tag_environment]
        action: keep
        regex: prod
      - action: labelmap
        regex: __meta_(.+)
  - job_name: zookeeper_nodeexporter
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: (.*[zZ]ookeeper.*)
        
      - source_labels: [__meta_ec2_tag_environment]
        action: keep
        regex: prod
      - action: labelmap
        regex: __meta_(.+)

  - job_name: zookeeper
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: (.*[zZ]ookeeper.*)
        
      - source_labels: [__meta_ec2_tag_environment]
        action: keep
        regex: prod
      - action: labelmap
        regex: __meta_(.+)
  - job_name: druid_nodeexporter
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: (.*druid.*)
      - source_labels: [__meta_ec2_tag_environment]
        action: keep
        regex: prod
      - action: labelmap
        regex: __meta_(.+)

  
  - job_name: kafka_nodeexporter
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: (.*kafka.*)
      - source_labels: [__meta_ec2_tag_environment]
        action: keep
        regex: prod
      - action: labelmap
        regex: __meta_(.+)

  - job_name: kafka
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: (.*kafka.*)
      - source_labels: [__meta_ec2_tag_environment]
        action: keep
        regex: prod
      - action: labelmap
        regex: __meta_(.+)
  - job_name: prometheus_nodeexporter
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: (.*prometheus.*)
      - source_labels: [__meta_ec2_tag_environment]
        action: keep
        regex: prod
      - action: labelmap
        regex: __meta_(.+)

  - job_name: prometheus
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: (.*prometheus.*)
      - source_labels: [__meta_ec2_tag_environment]
        action: keep
        regex: prod
      - action: labelmap
        regex: __meta_(.+)
  - job_name: spark_nodeexporter
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: (.*spark.*)
      - source_labels: [__meta_ec2_tag_environment]
        action: keep
        regex: prod
      - action: labelmap
        regex: __meta_(.+)

  
  - job_name: webhook-application_nodeexporter
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: (.*webhook-application.*)
      - source_labels: [__meta_ec2_tag_environment]
        action: keep
        regex: prod
      - action: labelmap
        regex: __meta_(.+)

  

  - job_name: webhook-application
    metrics_path: '/metrics/prometheus'
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: (.*webhook-application.*)
      - action: labelmap
        regex: __meta_(.+)

  - job_name: 'kubernetes_nodes'
    basic_auth:
      username: [REDACTED]
      password: [REDACTED]
    tls_config:
      insecure_skip_verify: [REDACTED]
      ca_file: [REDACTED]
    scheme: https
    kubernetes_sd_configs:
      - api_server: https://[REDACTED_BUT_SANE]/
        basic_auth:
          username: [REDACTED]
          password: [REDACTED]
        tls_config:
          insecure_skip_verify: [REDACTED]
        role: node
    scrape_interval: 30s

  - job_name: 'kubernetes_nodeexporter'
    basic_auth:
      username: [REDACTED]
      password: [REDACTED]
    tls_config:
      insecure_skip_verify: [REDACTED]
      ca_file: [REDACTED]
    scheme: http
    kubernetes_sd_configs:
      - api_server: https://[REDACTED_BUT_SANE]/
        basic_auth:
          username: [REDACTED]
          password: [REDACTED]
        tls_config:
          insecure_skip_verify: [REDACTED]
        role: node
    scrape_interval: 30s
    relabel_configs:
      - source_labels: [__address__]
        action: replace
        regex: (.+):(?:\d+)
        replacement: ${1}:9100
        target_label: __address__

  - job_name: 'kafka_offsets'
    static_configs:
    - targets: ['localhost:[REDACTED]']

  - job_name: 'push_gateway'
    honor_labels: true
    static_configs:
    - targets: ['localhost:[REDACTED]']

  - job_name: 'k8s_state_new'
    static_configs:
    - targets: ['[REDACTED_BUT_SANE:PORT]']

  - job_name: 'k8s_state_old'
    static_configs:
    - targets: ['[REDACTED_BUT_SANE:REDACTED]']

  - job_name: 'k8s_federate'
    honor_labels: true
    scrape_interval: 15s
    params:
      'match[]':
        - '{job=~"skopos"}'
    static_configs:
    - targets: ['[REDACTED_BUT_SANE:REDACTED]']
    relabel_configs:
      - source_labels: [__metrics_path__]
        replacement: /federate
        target_label: __metrics_path__

  - job_name: 'k8s_federate2'
    honor_labels: true
    params:
      'match[]':
        - '{job=~"pods"}'
    static_configs:
    - targets: ['[REDACTED_BUT_SANE:REDACTED]']
    relabel_configs:
      - source_labels: [__metrics_path__]
        replacement: /federate
        target_label: __metrics_path__

  - job_name: nodejs_api1
    metrics_path: '/metrics/prometheus'
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: api
      - source_labels: [__meta_ec2_tag_Stage]
        action: keep
        regex: Live
      - action: labelmap
        regex: __meta_(.+)
      - action: labeldrop
        regex: ec2(.*)

  - job_name: nodejs_api2
    metrics_path: '/metrics/prometheus'
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: api
      - source_labels: [__meta_ec2_tag_Stage]
        action: keep
        regex: Live
      - action: labelmap
        regex: __meta_(.+)
    #  - action: labelmap
     #   regex: ec2_(.*)

  - job_name: nodejs_api3
    metrics_path: '/metrics/prometheus'
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: api
      - source_labels: [__meta_ec2_tag_Stage]
        action: keep
        regex: Live
      - action: labelmap
        regex: __meta_(.+)

  - job_name: nodejs_api4
    metrics_path: '/metrics/prometheus'
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        action: keep
        regex: api
      - source_labels: [__meta_ec2_tag_Stage]
        action: keep
        regex: Live
      - action: labelmap
        regex: __meta_(.+)

  - job_name: nodejs_link1
    metrics_path: '/metrics/prometheus'
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        target_label: role
        action: keep
        regex: link
      - source_labels: [__meta_ec2_tag_Stage]
        action: keep
        regex: Live
      - action: labelmap
        regex: __meta_(.+)

  - job_name: nodejs_link2
    metrics_path: '/metrics/prometheus'
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        target_label: role
        action: keep
        regex: link
      - source_labels: [__meta_ec2_tag_Stage]
        action: keep
        regex: Live
      - action: labelmap
        regex: __meta_(.+)

  - job_name: nodejs_link3
    metrics_path: '/metrics/prometheus'
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        target_label: role
        action: keep
        regex: link
      - source_labels: [__meta_ec2_tag_Stage]
        action: keep
        regex: Live
      - action: labelmap
        regex: __meta_(.+)

  - job_name: nodejs_link4
    metrics_path: '/metrics/prometheus'
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        target_label: role
        action: keep
        regex: link
      - source_labels: [__meta_ec2_tag_Stage]
        action: keep
        regex: Live
      - action: labelmap
        regex: __meta_(.+)

  - job_name: nodejs_link5
    metrics_path: '/metrics/prometheus'
    ec2_sd_configs:
      - region: us-west-1
        access_key: [REDACTED]
        secret_key: [REDACTED]
        port: [REDACTED_BUT_SANE]
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      - source_labels: [__meta_ec2_tag_Role]
        target_label: role
        action: keep
        regex: link
      - source_labels: [__meta_ec2_tag_Stage]
        action: keep
        regex: Live
      - action: labelmap
        regex: __meta_(.+)

  • Alertmanager configuration file:
(omitted; not likely relevant to this issue)
  • Logs:

We're aware of the sample-discard messages; we plan to use relabeling to deal with some of them. (They were occurring under 1.3.1 as well, without this issue.) Log captures from two crash events follow.

EVENT ONE:
time="2017-01-30T16:35:32Z" level=info msg="Checkpointing in-memory metrics and chunks..." source="persistence.go:611"
time="2017-01-30T16:35:54Z" level=warning msg="Error on ingesting out-of-order samples" numDropped=206 source="scrape.go:517"
time="2017-01-30T16:35:54Z" level=warning msg="Scrape health sample discarded" error="sample timestamp out of order" sample=up{instance="api-asg", job="nodejs_api2", role="api"} => 1 @[1485794154.879] source="scrape.go:570"
time="2017-01-30T16:35:54Z" level=warning msg="Scrape duration sample discarded" error="sample timestamp out of order" sample=scrape_duration_seconds{instance="api-asg", job="nodejs_api2", role="api"} => 0.043731663000000004 @[1485794154.879] source="scrape.go:573"
time="2017-01-30T16:35:54Z" level=warning msg="Scrape sample count sample discarded" error="sample timestamp out of order" sample=scrape_duration_seconds{instance="api-asg", job="nodejs_api2", role="api"} => 0.043731663000000004 @[1485794154.879] source="scrape.go:576"
time="2017-01-30T16:35:54Z" level=warning msg="Scrape sample count post-relabeling sample discarded" error="sample timestamp out of order" sample=scrape_duration_seconds{instance="api-asg", job="nodejs_api2", role="api"} => 0.043731663000000004 @[1485794154.879] source="scrape.go:579"
time="2017-01-30T16:35:56Z" level=info msg="Done checkpointing in-memory metrics and chunks in 23.40928269s." source="persistence.go:638"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x82f818]

goroutine 191 [running]:
panic(0x18b1120, 0xc420012040)
	/usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/prometheus/prometheus/discovery/ec2.(*EC2Discovery).refresh.func2(0xc4baa981a0, 0x1, 0x0)
	/go/src/github.com/prometheus/prometheus/discovery/ec2/ec2.go:179 +0x9b8
github.com/prometheus/prometheus/vendor/github.com/aws/aws-sdk-go/service/ec2.(*EC2).DescribeInstancesPages.func1(0x1907100, 0xc4baa981a0, 0x1, 0xc4baa981c0)
	/go/src/github.com/prometheus/prometheus/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go:6785 +0x49
github.com/prometheus/prometheus/vendor/github.com/aws/aws-sdk-go/aws/request.(*Request).EachPage(0xc57a8b6e00, 0xc5fb79bb38, 0x2, 0x2)
	/go/src/github.com/prometheus/prometheus/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go:98 +0x90
github.com/prometheus/prometheus/vendor/github.com/aws/aws-sdk-go/service/ec2.(*EC2).DescribeInstancesPages(0xc4dea956b8, 0x0, 0xc5fb79bc58, 0x0, 0x0)
	/go/src/github.com/prometheus/prometheus/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go:6786 +0x116
github.com/prometheus/prometheus/discovery/ec2.(*EC2Discovery).refresh(0xc567c3c9f0, 0xc4e42fbc20, 0x0, 0x0)
	/go/src/github.com/prometheus/prometheus/discovery/ec2/ec2.go:199 +0x347
github.com/prometheus/prometheus/discovery/ec2.(*EC2Discovery).Run(0xc567c3c9f0, 0x7f88bbf72030, 0xc5435ac880, 0xc538bd2d20)
	/go/src/github.com/prometheus/prometheus/discovery/ec2/ec2.go:114 +0x1b8
created by github.com/prometheus/prometheus/discovery.(*TargetSet).updateProviders
	/go/src/github.com/prometheus/prometheus/discovery/discovery.go:242 +0x2b4

EVENT TWO:

time="2017-01-31T13:33:17Z" level=warning msg="Error on ingesting out-of-order samples" numDropped=273 source="scrape.go:517"
time="2017-01-31T13:33:17Z" level=warning msg="Scrape health sample discarded" error="sample timestamp out of order" sample=up{instance="api-asg", job="nodejs_api1"} => 1 @[1485869597.057] source="scrape.go:570"
time="2017-01-31T13:33:17Z" level=warning msg="Scrape duration sample discarded" error="sample timestamp out of order" sample=scrape_duration_seconds{instance="api-asg", job="nodejs_api1"} => 0.049201368 @[1485869597.057] source="scrape.go:573"
time="2017-01-31T13:33:17Z" level=warning msg="Scrape sample count sample discarded" error="sample timestamp out of order" sample=scrape_duration_seconds{instance="api-asg", job="nodejs_api1"} => 0.049201368 @[1485869597.057] source="scrape.go:576"
time="2017-01-31T13:33:17Z" level=warning msg="Scrape sample count post-relabeling sample discarded" error="sample timestamp out of order" sample=scrape_duration_seconds{instance="api-asg", job="nodejs_api1"} => 0.049201368 @[1485869597.057] source="scrape.go:579"
time="2017-01-31T13:33:19Z" level=info msg="Checkpointing in-memory metrics and chunks..." source="persistence.go:611"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x82f4b5]

goroutine 692839 [running]:
panic(0x18b1120, 0xc420012040)
	/usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/prometheus/prometheus/discovery/ec2.(*EC2Discovery).refresh.func2(0xc48961c480, 0x1, 0x0)
	/go/src/github.com/prometheus/prometheus/discovery/ec2/ec2.go:192 +0x655
github.com/prometheus/prometheus/vendor/github.com/aws/aws-sdk-go/service/ec2.(*EC2).DescribeInstancesPages.func1(0x1907100, 0xc48961c480, 0x1, 0xc48961c4a0)
	/go/src/github.com/prometheus/prometheus/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go:6785 +0x49
github.com/prometheus/prometheus/vendor/github.com/aws/aws-sdk-go/aws/request.(*Request).EachPage(0xc5bb36b880, 0xc590f59b38, 0x2, 0x2)
	/go/src/github.com/prometheus/prometheus/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go:98 +0x90
github.com/prometheus/prometheus/vendor/github.com/aws/aws-sdk-go/service/ec2.(*EC2).DescribeInstancesPages(0xc5a97e33b0, 0x0, 0xc590f59c58, 0x0, 0x0)
	/go/src/github.com/prometheus/prometheus/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go:6786 +0x116
github.com/prometheus/prometheus/discovery/ec2.(*EC2Discovery).refresh(0xc51d631b30, 0xc6060e30e0, 0x0, 0x0)
	/go/src/github.com/prometheus/prometheus/discovery/ec2/ec2.go:199 +0x347
github.com/prometheus/prometheus/discovery/ec2.(*EC2Discovery).Run(0xc51d631b30, 0x7f0591e3ab80, 0xc587f65400, 0xc5d291a420)
	/go/src/github.com/prometheus/prometheus/discovery/ec2/ec2.go:114 +0x1b8
created by github.com/prometheus/prometheus/discovery.(*TargetSet).updateProviders
	/go/src/github.com/prometheus/prometheus/discovery/discovery.go:242 +0x2b4

Thanks!

fabxc commented Feb 1, 2017

Thanks for reporting.

Both instances seem rather odd, as the API response appears to contain nil interfaces and tags.
The code didn't change at all with respect to version 1.3.1.

That can only really mean that AWS suddenly changed the contents of their API responses to have these fields zeroed out. Any chance you can run two small Prometheus servers, 1.3.1 and 1.5.0, in parallel and check whether one crashes while the other keeps going?
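
For context, the EC2 discovery builds targets from the reservations and tags that DescribeInstancesPages hands to its page callback, and every field in the aws-sdk-go structs is a pointer. Purely as an illustration (a simplified sketch, not the actual Prometheus discovery code), the kind of nil guards that would tolerate a zeroed-out response look roughly like this:

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-1")))
	svc := ec2.New(sess)

	err := svc.DescribeInstancesPages(&ec2.DescribeInstancesInput{},
		func(page *ec2.DescribeInstancesOutput, lastPage bool) bool {
			for _, r := range page.Reservations {
				for _, inst := range r.Instances {
					// All SDK struct fields are pointers; if the API returns
					// them zeroed out, a blind dereference panics exactly like
					// the traces above, so skip anything that is nil.
					if inst == nil || inst.PrivateIpAddress == nil {
						continue
					}
					addr := *inst.PrivateIpAddress
					for _, t := range inst.Tags {
						if t == nil || t.Key == nil || t.Value == nil {
							continue
						}
						fmt.Printf("%s %s=%s\n", addr, *t.Key, *t.Value)
					}
				}
			}
			return true // keep paging
		})
	if err != nil {
		log.Fatal(err)
	}
}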

nordbranch commented Feb 1, 2017

Yeah, I have a staging env where I can run 1.3.1. I haven't spent much time with Go, unfortunately. Would it make more sense to have the API call dump its payload to a file (assuming all the data is pulled before parsing, and not parsed per tokenized page)? And naturally, the problem hasn't reproduced since I posted this. ;)
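
If it helps, such a dump could look roughly like the sketch below (hypothetical debugging code against aws-sdk-go, not something in Prometheus; the file naming is made up). Note the SDK parses the response per page, so there is no single pre-parse payload to capture; each page would be written out as it arrives:

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-1")))
	svc := ec2.New(sess)

	pageNum := 0
	err := svc.DescribeInstancesPages(&ec2.DescribeInstancesInput{},
		func(page *ec2.DescribeInstancesOutput, lastPage bool) bool {
			pageNum++
			// Serialize the already-parsed page; on a crash, the last file
			// written shows what the API actually returned for that page.
			data, err := json.MarshalIndent(page, "", "  ")
			if err != nil {
				log.Printf("marshal page %d: %v", pageNum, err)
				return true
			}
			name := fmt.Sprintf("describe-instances-page-%03d.json", pageNum)
			if err := ioutil.WriteFile(name, data, 0644); err != nil {
				log.Printf("write %s: %v", name, err)
			}
			return true // keep paging
		})
	if err != nil {
		log.Fatal(err)
	}
}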

fabxc commented Feb 2, 2017

It might have been a one-time artifact. The discovery code should still handle it gracefully, though.

nordbranch commented Feb 3, 2017

The 1.3.1 instance hasn't been crashing, but it's also not pulling the same set of data from AWS, unless Prometheus does a full describe-instances pull instead of querying individual assets.

brian-brazil commented May 15, 2017

Can you confirm this is no longer happening?

brian-brazil commented Jul 14, 2017

I presume from the non-response that this is resolved.

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 23, 2019
