
Error connecting to datasource: Data source connected, but no labels received. Verify that Loki and Promtail is configured properly. #271

Closed
leeypp opened this issue Feb 1, 2019 · 27 comments


@leeypp

leeypp commented Feb 1, 2019

[screenshot of the Grafana datasource error]

What should I do?
I just updated promtail-local-config.yaml and restarted the promtail service, and now it no longer works normally. Before the update it worked fine.

@robmuze

robmuze commented Feb 1, 2019

Hi, I had a similar issue. It appears to be the latest image tag, i.e. image: master, that always seems to break the Loki datasource.

Image:
tag: master-ffe1093
Last update: 3 days ago, and it works.

@daixiang0
Contributor

Refer to the docs.

@negbie
Contributor

negbie commented Feb 2, 2019

Make sure you use a recent Loki config:
https://github.com/grafana/loki/blob/master/cmd/loki/loki-local-config.yaml

@wilful

wilful commented Feb 9, 2019

I have the same issue; nothing has helped so far. I pulled the latest loki and promtail images.
In the latest Promtail messages there is not a word about an established connection or a failure:

promtail_1 | level=info ts=2019-02-09T17:27:45.378030372Z caller=main.go:47 msg="Starting Promtail" version="(version=master-58d2d21, branch=master, revision=58d2d21)"
promtail_1 | level=info ts=2019-02-09T17:27:50.37733498Z caller=filetargetmanager.go:165 msg="Adding target" key="{job="varlogs"}"
promtail_1 | level=info ts=2019-02-09T17:27:50.377931998Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/backupninja.log
promtail_1 | level=info ts=2019-02-09T17:27:50.3782893Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/boot.log
promtail_1 | level=info ts=2019-02-09T17:27:50.378583606Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/maillog
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/backupninja.log - &{Offset:18304 Whence:0}
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/lastlog - &{Offset:0 Whence:0}
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/boot.log - &{Offset:95 Whence:0}
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/tallylog - &{Offset:0 Whence:0}
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/maillog - &{Offset:0 Whence:0}
promtail_1 | level=info ts=2019-02-09T17:27:50.379321674Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/tallylog
promtail_1 | level=info ts=2019-02-09T17:27:50.381192173Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/lastlog
promtail_1 | level=info ts=2019-02-09T17:27:50.381746355Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/test.log
promtail_1 | level=info ts=2019-02-09T17:27:50.381849633Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/yum.log
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/yum.log - &{Offset:22494 Whence:0}
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/test.log - &{Offset:5 Whence:0}

@daixiang0
Contributor

daixiang0 commented Feb 11, 2019

@wilful please share the Loki log. The promtail part seems normal.

@wilful

wilful commented Feb 11, 2019

No message is displayed on the Loki side when Promtail starts.

loki_1      | level=info ts=2019-02-11T06:03:35.982717042Z caller=loki.go:122 msg=initialising module=server
loki_1      | level=info ts=2019-02-11T06:03:35.983179243Z caller=gokit.go:36 http=[::]:3100 grpc=[::]:9095 msg="server listening on addresses"
loki_1      | level=info ts=2019-02-11T06:03:35.983718039Z caller=loki.go:122 msg=initialising module=overrides
loki_1      | level=info ts=2019-02-11T06:03:35.983750208Z caller=override.go:33 msg="per-tenant overides disabled"
loki_1      | level=info ts=2019-02-11T06:03:35.983787937Z caller=loki.go:122 msg=initialising module=store
loki_1      | level=info ts=2019-02-11T06:03:35.985754972Z caller=loki.go:122 msg=initialising module=ingester
loki_1      | level=info ts=2019-02-11T06:03:35.987309309Z caller=lifecycler.go:358 msg="entry not found in ring, adding with no tokens"
loki_1      | level=info ts=2019-02-11T06:03:35.987747435Z caller=lifecycler.go:288 msg="auto-joining cluster after timeout"
loki_1      | level=info ts=2019-02-11T06:03:36.004831721Z caller=loki.go:122 msg=initialising module=ring
loki_1      | level=info ts=2019-02-11T06:03:36.005003778Z caller=loki.go:122 msg=initialising module=querier
loki_1      | level=info ts=2019-02-11T06:03:36.005738165Z caller=loki.go:122 msg=initialising module=distributor
loki_1      | level=info ts=2019-02-11T06:03:36.005817828Z caller=loki.go:122 msg=initialising module=all
loki_1      | level=info ts=2019-02-11T06:03:36.005849317Z caller=main.go:45 msg="Starting Loki" version="(version=master-58d2d21, branch=master, revision=58d2d21)"

docker-compose exec promtail sh -c 'cat /etc/promtail/docker-config.yaml'
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

client:
  url: http://loki:3100/api/prom/push

scrape_configs:
- job_name: system
  entry_parser: raw
  static_configs:
  - targets:
      - localhost
    labels:
      job: varlogs
      __path__: /var/log/*log

docker-compose exec promtail sh -c 'ping loki'
PING loki (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.129 ms

@zajca

zajca commented Mar 4, 2019

I'm having the same issue.
Here is my complete setup: https://github.com/zajca/docker-server-explore
The loki IP resolves fine with docker-compose -f ... exec grafana sh -c 'getent hosts loki', and the same goes for the loki IP from promtail. Yet I'm getting the error: Data source connected, but no labels received. Verify that Loki and Promtail is configured properly.

If I access the loki IP on port 3100 at /metrics, the data is there.
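
Since /metrics responds, the label endpoint itself can be checked the same way. A rough check, assuming the pre-1.0 API path that matches the /api/prom/push client URL (substitute the resolved loki IP):

# Ask Loki directly which label names it currently knows about
curl -s http://<loki-ip>:3100/api/prom/label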

@Sergellll

Hello,
I have this problem too.
I started my prom + loki + grafana stack and for some time everything is fine (about 15-20 minutes).
After that my log lines stop refreshing and events get duplicated.
After 20-30 minutes I get this error:

Error connecting to datasource: Data source connected, but no labels received. Verify that Loki and Promtail is configured properly.

After restarting the prom container everything is fine again.

@wilful

wilful commented Mar 11, 2019

After 2 weeks of working fine, the problem returned. I have no idea what happened. Update: there is nothing in the logs.

@shprotobaza

Hi all.

You must mount the volume containing the logs described in /etc/promtail/docker-config.yaml:

  job: varlogs
  __path__: /var/log/*log

like this:

  promtail:
    image: grafana/promtail:master
    container_name: promtail
    volumes:
      - /var/log:/var/log:ro

Check it!
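
For example, you can confirm the mount from inside the container (assuming the compose service is named promtail as above):

# The files matched by __path__ must be visible here, otherwise promtail has nothing to tail
docker-compose exec promtail sh -c 'ls -l /var/log/*log'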

@leops

leops commented Mar 11, 2019

I seem to be hitting the same problem. For me it seems that if nothing has been logged for some time (maybe about 20 minutes), all the logs disappear: Grafana shows the "no labels received" error, and API calls such as GET /api/prom/label return nothing where there was data a few minutes earlier.
I am sending logs to Loki using both Promtail (installed directly as a binary on virtual machines) and direct pushes (POST /api/prom/push) from some applications.
Edit: I should add that if new log lines get pushed, all the logs reappear, including what was sent previously, so the data isn't actually lost.
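
For reference, the label call that comes back empty after the idle period is roughly this (substitute your real Loki host):

# Returns the known label names; after the idle period this comes back empty
curl -s http://<loki-host>:3100/api/prom/label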

@vanhtuan0409

vanhtuan0409 commented Mar 27, 2019

Ref #430

It looks like only the label data was lost.

This error also happened with Loki Cloud (UserID: 2315).

@davkal
Contributor

davkal commented Apr 1, 2019

Your first stop when you see this issue is the troubleshooting guide.

If you're testing things on your laptop and restart loki or promtail often, you'll face the problem that your low-volume logs were already consumed before loki was ready to receive them (start promtail a bit after loki), or promtail already pushed all the logs (delete the positions file to force a new push), or loki did not have time to flush what it had indexed before you restarted it (you need to push the logs again, probably by deleting the positions file). We're still working on making this single-binary use case a bit smoother.

It's worth noting that these issues won't affect production use of Loki: once it's running and replicated it can handle restarts without data loss. And if your apps keep producing logs, there will be labels.
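
If you want to force that re-push in a docker-compose setup, a rough sequence (assuming the service names used in this thread, and that the positions file sits at /tmp/positions.yaml inside the promtail container rather than on a volume) is:

docker-compose up -d loki        # make sure loki is up and ready first
docker-compose stop promtail
docker-compose rm -f promtail    # recreating the container drops /tmp/positions.yaml
docker-compose up -d promtail    # promtail re-reads the log files and pushes them again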

@slim-bean
Collaborator

Closing this issue; it seems to be related to some instability in earlier versions of promtail/loki (which should hopefully be gone now) and misconfiguration (for which there should now be better docs and support).

@cmanzi

cmanzi commented Sep 27, 2019

I'm still seeing this type of issue on a fresh install (using loki-stack chart v0.16.5). Sometimes the labels disappear, sometimes the logs disappear as well. It seems sporadic; I didn't see anything in the logs to explain it.

@litaxc

litaxc commented Oct 18, 2019

Still seeing the same issue with loki/promtail v0.3.0.

@hellodudu

Same issue here: #1173

Is there any config that makes Loki keep the labels for a long time?

@chbloemer

chbloemer commented Oct 28, 2019

Seeing this issue also with loki/promtail v0.4.0.
On Friday I got this:

curl -G -s  "http://someserver:3100/loki/api/v1/query" --data-urlencode 'query=sum(rate({job="varlogs"}[10m])) by (level)'
{"status":"success","data":{"resultType":"vector","result":[{"metric":{},"value":[1572013966.797,"48.016666666666666"]}]}}

On Monday I got this:

curl -G -s  "http://someserver:3100/loki/api/v1/query" --data-urlencode 'query=sum(rate({job="varlogs"}[10m])) by (level)'
{"status":"success","data":{"resultType":"vector","result":[]}}

Grafana says: "Error connecting to datasource: Data source connected, but no labels received. Verify that Loki and Promtail is configured properly."
No restarts happened.

Service config from compose file:

  loki:
    image: grafana/loki:v0.4.0
    volumes:
      - ./config/loki/local-config.yaml:/etc/loki/local-config.yaml
      - ./data/loki:/tmp/loki/
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml

  promtail:
    image: grafana/promtail:v0.4.0
    volumes:
      - /var/log:/var/log
    command: -config.file=/etc/promtail/docker-config.yaml

loki local-config.yaml:

auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
  max_transfer_retries: 1

schema_config:
  configs:
  - from: 2018-04-15
    store: boltdb
    object_store: filesystem
    schema: v9
    index:
      prefix: index_
      period: 168h

storage_config:
  boltdb:
    directory: /tmp/loki/index

  filesystem:
    directory: /tmp/loki/chunks

limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h

chunk_store_config:
  max_look_back_period: 0

table_manager:
  chunk_tables_provisioning:
    inactive_read_throughput: 0
    inactive_write_throughput: 0
    provisioned_read_throughput: 0
    provisioned_write_throughput: 0
  index_tables_provisioning:
    inactive_read_throughput: 0
    inactive_write_throughput: 0
    provisioned_read_throughput: 0
    provisioned_write_throughput: 0
  retention_deletes_enabled: false
  retention_period: 0

@cyriltovena
Contributor

When that happens, can you use logcli to query for recent logs? It looks like a problem with docker-compose or the local config; let's move the discussion to the other issue, #1173. Please give as many details as possible: logs, config, how you send logs, etc.
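
A minimal logcli check could look like this (assuming logcli is installed locally and Loki is reachable on port 3100 as in the curl examples above; adjust the address and label selector to your setup):

# List the labels Loki currently has, then pull a few recent lines for one stream
logcli --addr=http://someserver:3100 labels
logcli --addr=http://someserver:3100 query --limit=20 '{job="varlogs"}'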

@rasple

rasple commented Nov 22, 2019

I seem to be hitting the same problem. For me it seems that if nothing has been logged for some time (maybe about 20 minutes), all the logs disappear: Grafana shows the "no labels received" error, and API calls such as GET /api/prom/label return nothing where there was data a few minutes earlier.

@leops I have the same problem with the latest docker images of loki and promtail on a swarm.

curl -G -s "http://<host>:3100/loki/api/v1/label" | jq .

It returns {} when there have not been any new logs for some time, and the correct labels when there are new logs (for 5 minutes or so).

@haohaifeng002

haohaifeng002 commented Oct 15, 2020

I'm hitting the same problem with loki+promtail v1.6.1. After replacing it with loki+promtail v1.6.0, it disappeared.
I use Grafana v7.2.1 (72a6c64532)

@adaszko

adaszko commented Sep 7, 2022

For posterity: what fixed it for me was making sure I had tenant_id set to tenant1 in the promtail config file:

clients:
  - url: http://REDACTED:3100/loki/api/v1/push
    tenant_id: tenant1
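
If Loki runs in multi-tenant mode (auth_enabled: true), queries have to name the same tenant. A quick way to verify, assuming the same host and port as the push URL and a header value matching tenant_id:

# Labels pushed as "tenant1" are only visible to requests made as that tenant
curl -s -H 'X-Scope-OrgID: tenant1' http://REDACTED:3100/loki/api/v1/labels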

@haohaifeng002

haohaifeng002 commented Sep 7, 2022 via email

@srikrishnanr

For posterity: what fixed it for me was making sure I had tenant_id set to tenant1 in the promtail config file:

clients:
  - url: http://REDACTED:3100/loki/api/v1/push
    tenant_id: tenant1

Thank you. The tenant_id should match the "X-Scope-OrgID" header in Grafana.

@haohaifeng002

haohaifeng002 commented Feb 12, 2023 via email

@blackliner

I used the wrong namespace for promtail, so Loki was never filled with data. -> grafana/helm-charts#1162 (comment)
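
A quick sanity check for that kind of mismatch (the label selector here is only a guess; match it to whatever your promtail chart actually sets):

# promtail pods must exist and be Running in the namespace your Loki/Grafana setup expects
kubectl get pods --all-namespaces -l app.kubernetes.io/name=promtail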

@haohaifeng002

haohaifeng002 commented Mar 9, 2023 via email

btaani pushed a commit to btaani/loki that referenced this issue Apr 18, 2024
Update from upstream repository