
promtail logs no such file or directory #429

Closed
matti opened this issue Mar 27, 2019 · 16 comments
Labels
stale A stale issue or PR that will automatically be closed.

Comments


matti commented Mar 27, 2019

installed with:

curl -fsS https://raw.githubusercontent.com/grafana/loki/master/tools/promtail.sh | sh -s 1234 asdfasdfasdfasdfasdf= logs-us-west1.grafana.net default | kubectl apply --namespace=default -f  -

promtail logs are getting filled with:

level=error ts=2019-03-27T09:05:14.8651359Z caller=filetarget.go:247 msg="failed to tail file, stat failed" error="stat /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log: no such file or directory" filename=/var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log
level=error ts=2019-03-27T09:05:24.8647833Z caller=filetarget.go:300 msg="failed to stat matched file, cannot report size" /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log=error
level=error ts=2019-03-27T09:05:24.864907Z caller=filetarget.go:300 msg="failed to stat matched file, cannot report size" /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log=error
level=error ts=2019-03-27T09:05:24.8649644Z caller=filetarget.go:247 msg="failed to tail file, stat failed" error="stat /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log: no such file or directory" filename=/var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log
level=error ts=2019-03-27T09:05:24.8650161Z caller=filetarget.go:247 msg="failed to tail file, stat failed" error="stat /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log: no such file or directory" filename=/var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log
level=error ts=2019-03-27T09:05:34.8644351Z caller=filetarget.go:300 msg="failed to stat matched file, cannot report size" /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log=error
level=error ts=2019-03-27T09:05:34.8645299Z caller=filetarget.go:300 msg="failed to stat matched file, cannot report size" /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log=error
level=error ts=2019-03-27T09:05:34.8645833Z caller=filetarget.go:247 msg="failed to tail file, stat failed" error="stat /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log: no such file or directory" filename=/var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log
level=error ts=2019-03-27T09:05:34.8646217Z caller=filetarget.go:247 msg="failed to tail file, stat failed" error="stat /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log: no such file or directory" filename=/var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log
level=error ts=2019-03-27T09:05:44.8303715Z caller=filetarget.go:300 msg="failed to stat matched file, cannot report size" /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log=error
level=error ts=2019-03-27T09:05:44.8304966Z caller=filetarget.go:300 msg="failed to stat matched file, cannot report size" /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log=error
level=error ts=2019-03-27T09:05:44.8305617Z caller=filetarget.go:247 msg="failed to tail file, stat failed" error="stat /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log: no such file or directory" filename=/var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log
level=error ts=2019-03-27T09:05:44.8306023Z caller=filetarget.go:247 msg="failed to tail file, stat failed" error="stat /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log: no such file or directory" filename=/var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log
level=error ts=2019-03-27T09:05:54.8299935Z caller=filetarget.go:300 msg="failed to stat matched file, cannot report size" /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log=error
level=error ts=2019-03-27T09:05:54.8300764Z caller=filetarget.go:300 msg="failed to stat matched file, cannot report size" /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log=error
level=error ts=2019-03-27T09:05:54.8301243Z caller=filetarget.go:247 msg="failed to tail file, stat failed" error="stat /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log: no such file or directory" filename=/var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log
level=error ts=2019-03-27T09:05:54.8301608Z caller=filetarget.go:247 msg="failed to tail file, stat failed" error="stat /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log: no such file or directory" filename=/var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log
level=error ts=2019-03-27T09:06:04.8302938Z caller=filetarget.go:300 msg="failed to stat matched file, cannot report size" /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log=error
level=error ts=2019-03-27T09:06:04.8304231Z caller=filetarget.go:300 msg="failed to stat matched file, cannot report size" /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log=error
level=error ts=2019-03-27T09:06:04.8304807Z caller=filetarget.go:247 msg="failed to tail file, stat failed" error="stat /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log: no such file or directory" filename=/var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/1.log
level=error ts=2019-03-27T09:06:04.830531Z caller=filetarget.go:247 msg="failed to tail file, stat failed" error="stat /var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log: no such file or directory" filename=/var/log/pods/3e18ac30-4998-11e9-bfbf-025000000001/kubedns/2.log
@daixiang0 (Contributor)

Have you customized the Docker data root? If so, you need to add an option for it:

usage: ./tools/promtail.sh <instanceId> <apiKey> <url> [<namespace> [<container_root_path> [<parser>]]]
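A quick way to see whether the data root was customized is to read `data-root` from Docker's daemon.json (a sketch; the config is inlined here as a sample string, and the parsing is deliberately naive):

```shell
# Sketch: extract "data-root" from daemon.json-style config (normally found
# at /etc/docker/daemon.json; inlined here for illustration). If it differs
# from /var/lib/docker, pass it as <container_root_path> to promtail.sh.
cfg='{"data-root": "/home/docker"}'
root=$(echo "$cfg" | grep -o '"data-root"[^,}]*' | cut -d'"' -f4)
echo "${root:-/var/lib/docker}"
```

If the key is absent, Docker uses the default `/var/lib/docker` and no extra argument is needed.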

@Serrvosky

If you are running this on a Kubernetes cluster, check that you haven't forgotten to mount the host log directories:

      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: promtail-conf
        configMap:
          name: promtail-conf

and

        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: promtail-conf
          mountPath: /etc/promtail

I also had this problem because I had forgotten the varlibdockercontainers mount.

@slim-bean (Collaborator)

I've also seen this locally with Docker for Mac. It feels like a bug where the symlink in /var/log/pods still exists but points to a file that no longer exists. I usually reset the Kubernetes environment in Docker for Mac to clean things up.

@matti, are you still having this issue? Did either @Serrvosky's suggestion or mine help?

@javefang

I'm also having this problem. It does look like the symlinks in /var/log/pods are not cleaned up after the actual log files under /var/lib/docker/containers have been removed by the Docker daemon (our setting keeps 3 log files per container). This behaviour seems pretty normal, and fluent-bit doesn't complain about it. Maybe it is safe to make promtail ignore this error (or log it at level=debug)?
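The failure mode described above can be reproduced with plain coreutils in a throwaway directory (the layout below is illustrative, standing in for /var/lib/docker/containers and /var/log/pods):

```shell
# Reproduce the dangling-symlink case: the link survives after its
# target is removed, so any stat through it fails like promtail's does.
tmp=$(mktemp -d)
mkdir -p "$tmp/containers" "$tmp/pods"
echo "log line" > "$tmp/containers/1.log"
ln -s "$tmp/containers/1.log" "$tmp/pods/1.log"
rm "$tmp/containers/1.log"   # docker rotates/removes the real file
# -L: the link itself exists; -e follows it and finds nothing
if [ -L "$tmp/pods/1.log" ] && [ ! -e "$tmp/pods/1.log" ]; then
  echo "dangling symlink: stat through it will fail"
fi
rm -rf "$tmp"
```

This matches the "stat ... no such file or directory" errors in the issue: promtail stats the symlink path, which resolves to a deleted target.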

@stale

stale bot commented Sep 12, 2019

This issue has been automatically marked as stale because it has not had any activity in the past 30 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale A stale issue or PR that will automatically be closed. label Sep 12, 2019
@stale stale bot closed this as completed Sep 19, 2019
@chenpfeisoo

@Serrvosky, I have this configuration, but the issue persists.

@chenpfeisoo

[screenshot omitted]

@Serrvosky

Hello @chenpfeisoo. Actually, I haven't worked with promtail for a long time, but this was my implementation at the time:

volumes:
      - hostPath:
          path: /var/log
        name: varlog
      - hostPath:
          path: /var/lib/docker/containers
        name: varlibdockercontainers
...
volumeMounts:
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true

I think I used this configuration with Kubernetes 1.13. Which version are you using?

@chenpfeisoo

@Serrvosky Thanks for your reply. The version of my cluster is 1.15.0.
I have another issue, #1329, with a complete record of my problems.

raksonibs added a commit to raksonibs/homer7-docker that referenced this issue Dec 4, 2019
Without *log, the error seen here grafana/loki#429 will occur
@per-lind

per-lind commented Jun 4, 2020

We have also run into this; it would be nice to have a way to suppress the error. We are hitting around 1000 errors/s of unnecessary logging.

@jumping

jumping commented Jul 14, 2020

We have similar issues: the logs under /var/log/pods/*/*/*.log are symlinks whose destinations are under /var/lib/docker/containers/*/*.log.

@ningyougang

I changed it as shown below, and it worked well for me.
[screenshot omitted]

@data-dude

This error message is flooding my logs and it's not a real problem.

@TChinaBen

@ningyougang maybe you changed the Docker root dir to /home/docker.

@zbum

zbum commented May 9, 2022

Have you customized the Docker data root and installed promtail via Helm? If so, you need to add values as below.
Make sure that your data-root is mounted as a volume; I recommend not changing the pods mount.

  • helm-loki-stack-values.yaml (/dooray is the data-root in /etc/docker/daemon.json)
## helm-loki-stack-values.yaml
loki:
  enabled: true
  persistence:
    enabled: true
    storageClassName: nfs-client
    size: 1Gi

promtail:
  enabled: true
  extraVolumes:
    - name: dataroot
      hostPath:
        path: /dooray
  extraVolumeMounts:
    - name: dataroot
      mountPath: /dooray
      readOnly: true
  • command
 helm install loki-stack grafana/loki-stack --values helm-loki-stack-values.yaml 

@spencerdcarlson

spencerdcarlson commented Jul 6, 2023

I am having an issue that seems very similar to this; I'm not sure if I should open a new issue. Please direct me to do so if that's preferred.

I am using the grafana-agent on Linux to ship logs to Loki. I used to export a log file (/root/.pm2/logs/server-out-0.log) that the agent had access to, but now it does not. The agent is filling my system logs with a permission-denied message:

Jul  6 15:03:33 localhost grafana-agent[11276]: ts=2023-07-06T15:03:33.342184934Z caller=positions.go:206 level=warn component=logs logs_config=integrations msg="could not determine if log file still exists while cleaning positions file" error="stat /root/.pm2/logs/server-out-0.log: permission denied"

This started after I moved the location of the log file to a more universal location and updated the job spec:

- job_name: pm2
  static_configs:
    - labels:
        instance: <INSTANCE>
        job: pm2
        __path__: /var/log/pm2/server-{out,error}-[0-9].log
  pipeline_stages:
    - multiline:
        firstline: '\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}:\s*'
    - regex:
        expression: '(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}):\s(?P<log>(?s:.*))'
    - timestamp:
        source: timestamp
        format: RFC3339
    - drop:
        expression: "^ *$"
        drop_counter_reason: "drop empty lines"
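The firstline pattern in the multiline stage above can be sanity-checked against a sample line (the sample is illustrative, assuming the pm2 timestamp format; this uses GNU grep's -P for the \d classes):

```shell
# Check that a typical pm2 log line matches the multiline firstline regex.
echo '2023-07-06T15:03:33: server started' \
  | grep -Pq '^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}:\s*' && echo "firstline matches"
```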

__path__ used to point to /root/.pm2/logs/server-out-0.log

After I made the change I restarted the agent.

Is there a way to tell the agent to forget about the old log file and its checkpoints?

[UPDATE]
I also posted on the Grafana Labs Forums.

I was able to resolve my issue by removing the entry from the positions.yaml file. Note that I had to stop the agent while editing the positions file because it is constantly being updated. Also, I needed to act as the grafana-agent user to edit the file.

systemctl stop grafana-agent.service
sudo su -l grafana-agent -s /bin/bash
vim /tmp/positions.yaml
# remove lines pointing to unused log files:
#   /root/.pm2/logs/server-error-0.log: "10901"
#   /root/.pm2/logs/server-out-0.log: "8404847"
exit   # return to the original user before restarting the service
systemctl start grafana-agent.service
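For context, positions.yaml maps each tailed file to the byte offset the agent has read up to; deleting an entry makes the agent forget that file. A hypothetical fragment, using the offsets quoted above:

```yaml
# positions.yaml (location depends on the agent's positions config;
# /tmp/positions.yaml in this setup)
positions:
  /root/.pm2/logs/server-error-0.log: "10901"   # delete to forget this file
  /root/.pm2/logs/server-out-0.log: "8404847"
```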
