Counter values are zero #1

Open
davecaplinger opened this issue Jul 22, 2019 · 4 comments

@davecaplinger
I've been playing around with batch-importing historical data using your tool, and for some reason the resulting counter values in my Prometheus 2.8.0 remote-write backend (InfluxDB 1.7.7) are all zero. I think remote-write itself is working, because I can see other series in InfluxDB that Prometheus put there. Any ideas on what I may be doing wrong?

To reproduce:

Push test-data from sender:

[sender] $ cat test-data
# HELP frotz_processed_bytes Number of bytes processed
# TYPE frotz_processed_bytes counter
frotz_processed_bytes{host="192.168.10.55"} 22800 1563321600000
frotz_processed_bytes{host="192.168.10.55"} 45600 1563325200000
frotz_processed_bytes{host="192.168.10.55"} 68400 1563328800000

[sender] $ curl -XPUT metric-importer:9099/ -H 'Content-type: text/plain' --data-binary @test-data

On the receiving InfluxDB instance, the metric values are all zero:

[influx] $ influx
> use prometheus
Using database prometheus

> show series
key
---
frotz_processed_bytes,__name__=frotz_processed_bytes,host=192.168.10.55
scrape_duration_seconds,__name__=scrape_duration_seconds,instance=server01.test.internal.net:9121,job=example,site=test
scrape_samples_post_metric_relabeling,__name__=scrape_samples_post_metric_relabeling,instance=server01.test.internal.net:9121,job=example,site=test
scrape_samples_scraped,__name__=scrape_samples_scraped,instance=server01.test.internal.net:9121,job=example,site=test
up,__name__=up,instance=server01.test.internal.net:9121,job=example,site=test

> select * from frotz_processed_bytes
name: frotz_processed_bytes
time                __name__              host          value
----                --------              ----          -----
1563321600000000000 frotz_processed_bytes 192.168.10.55 0
1563325200000000000 frotz_processed_bytes 192.168.10.55 0
1563328800000000000 frotz_processed_bytes 192.168.10.55 0

Logs from the metric-importer seem to indicate that the metrics are received properly,
but maybe the counter values are lost in mergeMetrics() before writeRequest() sends them to the InfluxDB host (I'm guessing, because the writeRequest log entries show only timestamps and no values); see the rough sketch after the logs below:

I0722 19:16:19.944959       1 push.go:29] handler.HandlePush: PUT /
I0722 19:16:19.945034       1 push.go:35] handler.HandlePush: map[frotz_processed_bytes:name:"frotz_processed_bytes" help:"Number of bytes processed" type:COUNTER metric:<label:<name:"host" value:"192.168.10.55" > counter:<value:22800 > timestamp_ms:1563321600000 > metric:<label:<name:"host" value:"192.168.10.55" > counter:<value:45600 > timestamp_ms:1563325200000 > metric:<label:<name:"host" value:"192.168.10.55" > counter:<value:68400 > timestamp_ms:1563328800000 > ]
I0722 19:16:19.945168       1 util.go:50] metricFamilies: {
  "frotz_processed_bytes": {
    "name": "frotz_processed_bytes",
    "help": "Number of units processed",
    "type": 0,
    "metric": [
      {
        "label": [
          {
            "name": "host",
            "value": "192.168.10.55"
          }
        ],
        "counter": {
          "value": 22800
        },
        "timestamp_ms": 1563321600000
      },
      {
        "label": [
          {
            "name": "host",
            "value": "192.168.10.55"
          }
        ],
        "counter": {
          "value": 45600
        },
        "timestamp_ms": 1563325200000
      },
      {
        "label": [
          {
            "name": "host",
            "value": "192.168.10.55"
          }
        ],
        "counter": {
          "value": 68400
        },
        "timestamp_ms": 1563328800000
      }
    ]
  }
}
I0722 19:16:19.945186       1 push.go:92] handler.mergeMetrics: name = frotz_processed_bytes
I0722 19:16:19.945198       1 push.go:94] handler.mergeMetrics: s.GetLabel() = [name:"host" value:"192.168.10.55" ]
I0722 19:16:19.945212       1 push.go:96] handler.mergeMetrics: k = frotz_processed_bytes�host�192.168.10.55
I0722 19:16:19.945220       1 push.go:109] handler.mergeMetrics: ts = labels:<name:"host" value:"192.168.10.55" > labels:<name:"__name__" value:"frotz_processed_bytes" > samples:<timestamp:1563321600000 >
I0722 19:16:19.945246       1 push.go:94] handler.mergeMetrics: s.GetLabel() = [name:"host" value:"192.168.10.55" ]
I0722 19:16:19.945257       1 push.go:96] handler.mergeMetrics: k = frotz_processed_bytes�host�192.168.10.55
I0722 19:16:19.945264       1 push.go:109] handler.mergeMetrics: ts = labels:<name:"host" value:"192.168.10.55" > labels:<name:"__name__" value:"frotz_processed_bytes" > samples:<timestamp:1563321600000 > samples:<timestamp:1563325200000 >
I0722 19:16:19.945290       1 push.go:94] handler.mergeMetrics: s.GetLabel() = [name:"host" value:"192.168.10.55" ]
I0722 19:16:19.945301       1 push.go:96] handler.mergeMetrics: k = frotz_processed_bytes�host�192.168.10.55
I0722 19:16:19.945307       1 push.go:109] handler.mergeMetrics: ts = labels:<name:"host" value:"192.168.10.55" > labels:<name:"__name__" value:"frotz_processed_bytes" > samples:<timestamp:1563321600000 > samples:<timestamp:1563325200000 > samples:<timestamp:1563328800000 >
I0722 19:16:19.945335       1 push.go:111] handler.mergeMetrics:
 labelsToSeries = map[frotz_processed_bytes�host�192.168.10.55:labels:<name:"host" value:"192.168.10.55" > labels:<name:"__name__" value:"frotz_processed_bytes" > samples:<timestamp:1563321600000 > samples:<timestamp:1563325200000 > samples:<timestamp:1563328800000 > ]

I0722 19:16:19.945394       1 util.go:50] labelsToSeries: {
  "frotz_processed_bytes\ufffdhost\ufffd192.168.10.55": {
    "labels": [
      {
        "name": "host",
        "value": "192.168.10.55"
      },
      {
        "name": "__name__",
        "value": "frotz_processed_bytes"
      }
    ],
    "samples": [
      {
        "timestamp": 1563321600000
      },
      {
        "timestamp": 1563325200000
      },
      {
        "timestamp": 1563328800000
      }
    ]
  }
}
I0722 19:16:19.945416       1 push.go:52] handler.ProcessSeries: http://influxdb:8086/api/v1/prom/write?u=prom&p=prom&db=prometheus
I0722 19:16:19.945435       1 push.go:80] handler.SeriesToWriteRequest: serie labels:<name:"host" value:"192.168.10.55" > labels:<name:"__name__" value:"frotz_processed_bytes" > samples:<timestamp:1563321600000 > samples:<timestamp:1563325200000 > samples:<timestamp:1563328800000 >
I0722 19:16:19.945473       1 util.go:50] writeRequest: {
  "timeseries": [
    {
      "labels": [
        {
          "name": "host",
          "value": "192.168.10.55"
        },
        {
          "name": "__name__",
          "value": "frotz_processed_bytes"
        }
      ],
      "samples": [
        {
          "timestamp": 1563321600000
        },
        {
          "timestamp": 1563325200000
        },
        {
          "timestamp": 1563328800000
        }
      ]
    }
  ]
}
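
For comparison, here is roughly what I'd expect each sample to carry before it is written out. This is just my guess using the client_model and prompb types; toExpectedSample is my own name for it, not anything from push.go:

package sketch

import (
	dto "github.com/prometheus/client_model/go"
	"github.com/prometheus/prometheus/prompb"
)

// toExpectedSample shows what I'd expect mergeMetrics to produce for one
// counter metric: the original timestamp *and* the counter value. In the
// logs above, the value never seems to make it into the sample.
func toExpectedSample(m *dto.Metric) prompb.Sample {
	return prompb.Sample{
		Timestamp: m.GetTimestampMs(),
		Value:     m.GetCounter().GetValue(), // e.g. 22800 for the first test point
	}
}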
@pgillich (Owner)
Sorry for the late answer, I was on vacation.
It's a bug, and I have corrected it in the "Correct Counter and Gauge types" commit. The Docker image has also been updated (still under the latest tag, so you may need to force a re-pull of the image).
Your test data has also been added to the test directory.
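
The gist of the fix is to pick the sample value according to the metric family type instead of leaving it at its zero value. A simplified sketch of the idea (not the exact code from the commit):

package sketch

import (
	dto "github.com/prometheus/client_model/go"
	"github.com/prometheus/prometheus/prompb"
)

// sampleFor illustrates the idea behind the fix: read the value from the
// field that matches the metric type (Counter, Gauge, or Untyped) and keep
// the original timestamp.
func sampleFor(t dto.MetricType, m *dto.Metric) prompb.Sample {
	var v float64
	switch t {
	case dto.MetricType_COUNTER:
		v = m.GetCounter().GetValue()
	case dto.MetricType_GAUGE:
		v = m.GetGauge().GetValue()
	default:
		v = m.GetUntyped().GetValue()
	}
	return prompb.Sample{Value: v, Timestamp: m.GetTimestampMs()}
}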

Thanks for the notice!

@davecaplinger (Author)
Thanks for the update; I confirmed that it appears to work! I say "appears" because I can't see the data in Prometheus, but I can see it directly in InfluxDB via 'show series' and 'select * from ...'. Most likely this is something I have misconfigured in the remote-read settings in p8s, or perhaps the historical metrics I bulk-imported into the InfluxDB backend have no corresponding "current" metrics on the p8s side, so p8s can't find them. In any case, the bulk importer is working fine. Thanks for your help!

pgillich reopened this Jul 31, 2019
@pgillich (Owner)
I don't have much time for troubleshooting these days, but if you have time, could you try filling InfluxDB the official way (via p8s remote_write) and then reading that data back into p8s via remote_read, please?

You could use Pushgateway to send data for a while. You can find an example of starting Pushgateway with docker-compose here: https://github.com/stefanprodan/dockprom/blob/master/docker-compose.yml#L108

In order to rule out the native, short-term p8s database, its retention time should be decreased with --storage.tsdb.retention.time, see: https://prometheus.io/docs/prometheus/latest/storage/#operational-aspects
Here you can find an example of setting the retention time: https://github.com/stefanprodan/dockprom/blob/master/docker-compose.yml#L24
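
For reference, this is roughly the kind of prometheus.yml fragment I have in mind; the URL and credentials are just copied from the write URL in your logs, so treat it as a sketch and adjust it to your setup:

# prometheus.yml (fragment) -- a sketch, not a tested config
remote_write:
  - url: "http://influxdb:8086/api/v1/prom/write?u=prom&p=prom&db=prometheus"

remote_read:
  - url: "http://influxdb:8086/api/v1/prom/read?u=prom&p=prom&db=prometheus"
    # Also query the remote store for ranges that local storage still covers.
    read_recent: true

Combined with a short --storage.tsdb.retention.time on the Prometheus side, any older data you then see in query results has to be coming from InfluxDB via remote_read.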

@davecaplinger (Author)
Sure, I'll see what I can do in the next few days. Thanks again.
