
influxdb error: missing tag value. labels empty? #1320

Closed · jonaz opened this issue Sep 29, 2016 · 9 comments

Labels: kind/bug, lifecycle/rotten, sink/influxdb

Comments

@jonaz (Contributor) commented Sep 29, 2016:

Hi

Just installed heapster 1.1.0 in our development cluster and tried to send metrics to InfluxDB.

InfluxDB is version 1.0.0. I have not tried older versions.

This is the error I get:

'cpu/limit,host_id=192.168.3.57,hostname=192.168.3.57,labels=,namespace_id=24430b99-0de2-11e6-8afb-005056885071,namespace_name=kube-system,nodename=192.168.3.57,pod_id=8673b010-6dd8-11e6-bffc-005056880f6d,pod_name=kube-controller-manager-192.168.3.57,pod_namespace=kube-system,type=pod value=0 1475130540000000000': missing tag value"

According to influxdata/influxdb#4421 this is expected behavior: InfluxDB rejects a point whose line protocol contains a tag key with an empty value, which is what the empty labels= tag produces here.
So metrics/sinks/influxdb/influxdb.go should not add the labels tag when it is empty.
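
For reference, here is a minimal sketch of what I mean, written in Go. It is not the actual heapster sink code; the function name filterEmptyTags and the plain map[string]string representation of tags are assumptions for illustration. It just drops empty tag values before a point is built, so the line protocol never contains a bare "labels=":

package main

import "fmt"

// filterEmptyTags drops tags whose value is empty, since InfluxDB
// rejects line protocol that contains a tag key with no value
// ("missing tag value").
func filterEmptyTags(tags map[string]string) map[string]string {
	filtered := make(map[string]string, len(tags))
	for k, v := range tags {
		if v != "" {
			filtered[k] = v
		}
	}
	return filtered
}

func main() {
	tags := map[string]string{
		"hostname":       "192.168.3.57",
		"labels":         "", // empty value would produce "labels=" and trigger the error
		"namespace_name": "kube-system",
	}
	fmt.Println(filterEmptyTags(tags))
	// map[hostname:192.168.3.57 namespace_name:kube-system]
}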

This is my heapster deployment:


---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster-v1.1.0
  namespace: kube-system
  labels:
    k8s-app: heapster
    kubernetes.io/cluster-service: "true"
    version: v1.1.0
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: heapster
      version: v1.1.0
  template:
    metadata:
      labels:
        k8s-app: heapster
        version: v1.1.0
    spec:
      containers:
        - image: gcr.io/google_containers/heapster:v1.1.0
          name: heapster
          resources:
            # keep request = limit to keep this container in guaranteed class
            limits:
              cpu: 100m
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          command:
            - /heapster
            - --source=kubernetes.summary_api:''
            - --sink=influxdb:http://dev-influx01.domain.com:8086?user=k8s&pw=asdf
        - image: gcr.io/google_containers/heapster:v1.1.0
          name: eventer
          resources:
            # keep request = limit to keep this container in guaranteed class
            limits:
              cpu: 100m
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          command:
            - /eventer
            - --source=kubernetes:''
            - --sink=influxdb:http://dev-influx01.domain.com:8086?user=k8s&pw=asdf
        - image: gcr.io/google_containers/addon-resizer:1.3
          name: heapster-nanny
          resources:
            limits:
              cpu: 50m
              memory: 100Mi
            requests:
              cpu: 50m
              memory: 100Mi
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          command:
            - /pod_nanny
            - --cpu=100m
            - --extra-cpu=0.5m
            - --memory=140Mi
            - --extra-memory=4Mi
            - --threshold=5
            - --deployment=heapster-v1.1.0
            - --container=heapster
            - --poll-period=300000
            - --estimator=exponential
        - image: gcr.io/google_containers/addon-resizer:1.3
          name: eventer-nanny
          resources:
            limits:
              cpu: 50m
              memory: 100Mi
            requests:
              cpu: 50m
              memory: 100Mi
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          command:
            - /pod_nanny
            - --cpu=100m
            - --extra-cpu=0m
            - --memory=200Mi
            - --extra-memory=500Ki
            - --threshold=5
            - --deployment=heapster-v1.1.0
            - --container=eventer
            - --poll-period=300000
            - --estimator=exponential

@DirectXMan12 (Contributor) commented:

I don't think we officially support InfluxDB 1.0.0 yet, but this does look somewhat like a bug nonetheless.
Please try with an older version of InfluxDB and see if it works.

@jonaz (Contributor, Author) commented Sep 30, 2016:

According to the InfluxDB issue tracker this has been the same since 0.9.4, and it is the same issue on 0.12.0.

I've added labels to all my pods to work around the problem for now.

@DirectXMan12 (Contributor) commented:

hmm... this probably needs a fix in the InfluxDB sink, then

cc @mwielgus

@fejta-bot commented:

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Dec 17, 2017
@fejta-bot commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 16, 2018
@jonaz (Contributor, Author) commented Jan 21, 2018:

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label on Jan 21, 2018
@fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 22, 2018
@fejta-bot commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 22, 2018
@k8s-ci-robot added the kind/bug label and removed the bug label on Jun 5, 2018
@fejta-bot commented:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
