Filebeat stops harvesting logs #13607
Comments
After restarting the filebeat docker service, logs are populated correctly. |
I have the same issue on versions from 7.2 through 7.6.2. I have 3 similar containers with filebeat in Nomad. One of them has a high load (~5000 messages per minute), and it gets stuck once a day. Rebooting the filebeat container helps. |
I am also facing this problem with filebeat 7.0.1; it runs and harvests for a while and eventually stops with those messages. |
I am also facing the same problem with filebeat 7.7.1. It harvested logs after a restart, but only once. |
I've been facing the same issue on 7.6.1, 7.8.1, 7.8.2-SNAPSHOT and now 7.9.0, even after a lot of PRs trying to fix it. My issue is related to k8s autodiscover. |
Happens to me as well (7.9.0, docker autodiscover). Filebeat closes and reopens the missing container log file several times due to inactivity and then simply stops watching the file after a while. Just curious, do you guys have log rotation enabled for docker (I do...)?
$ cat /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
} |
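Since several of the reports above combine docker autodiscover with json-file log rotation, here is roughly what such a setup looks like. This is a generic illustration, not anyone's actual configuration; the paths and the close_inactive/clean_removed values are assumptions about a typical deployment.

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - config:
            - type: container
              paths:
                # Docker's json-file driver writes the rotated log set per container here
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
              # Close idle file handles; the harvester is expected to reopen the file
              # when new lines arrive (the behaviour this thread reports as failing)
              close_inactive: 5m
              # Drop registry state for files that were removed by rotation
              clean_removed: true

With max-size: 100m and max-file: 3, rotation renames the file Filebeat is currently reading, which is exactly the window where the harvester's state handling matters.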
Pinging @elastic/integrations-platforms (Team:Platforms) |
The original issue reported in 7.3.0, of inputs being incorrectly stopped on container restarts, was fixed by #13127 in 7.4.0. |
@pauvos could you share the autodiscover configuration you are using? |
Just happened again on 7.10.1. I'm no longer using Logstash, and the problem was with the nginx (swag) container. My config:
|
I have the same problem, but with Elastic Agent and Filebeat managed by Fleet Server. After some time (randomly, from time to time), filebeat just stops sending logs to my Elasticsearch cluster. I'm using:
I perform all configuration of every Elastic Agent on all servers through the Fleet Management tab inside Kibana. In the log files from filebeat, I see entries like:
{"log.level":"info","@timestamp":"2022-01-13T06:00:24.459+0100","log.logger":"input.harvester","log.origin":{"file.name":"log/harvester.go","file.line":336},"message":"Reader was closed. Closing.","service.name":"filebeat","input_id":"6f476387-59f8-4ccc-8baa-38f0c646fd3d","source":"/var/log/apt/history.log.1.gz","state_id":"native::142434-64768","finished":false,"os_id":"142434-64768","old_source":"/var/log/apt/history.log.1.gz","old_finished":true,"old_os_id":"142434-64768","harvester_id":"4a471e46-b22a-42af-8d83-df53309cc81f","ecs.version":"1.6.0"}
Until today I've just reinstalled the Elastic Agents with an Ansible playbook whenever that happened. |
I had a similar issue. It helped me to reduce the logging level to warning:
|
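For reference, lowering Filebeat's log verbosity to warning is a top-level setting in filebeat.yml; a minimal sketch (not the commenter's exact configuration, which was not captured above):

logging:
  # Valid levels are debug, info, warning, and error
  level: warning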
I'm experiencing the same issue with filebeat 8.4.3 with Kubernetes as the autodiscover provider. Are there any updates? |
We have the same problem with autodiscover; filebeat is installed on the host machine and reads the container logs. filebeat version 7.16.2 (amd64), libbeat 7.16.2 [3c518f4 built 2021-12-18 21:04:19 +0000 UTC] |
This does not solve the issue |
This seems to be related to #34388, probably the same issue caused by autodiscover with the Kubernetes provider. @MonicaMagoniCom Could you provide your configuration and some debug logs? A logging configuration like this should give enough information (I'm making some assumptions about your configuration here) and will not log any events ingested by Filebeat:
logging:
  level: debug
  selectors:
    - autodiscover
    - autodiscover.bus-filebeat
    - autodiscover.pod
    - beat
    - cfgwarn
    - crawler
    - hints.builder
    - input
    - input.filestream
    - input.harvester
    - kubernetes
    - modules
    - seccomp
    - service |
Regarding an update on this issue: we are aware of issues with autodiscover like the one I linked above, and it is on our backlog. |
We removed the use of Kubernetes autodiscover from our filebeat configuration, since we were experiencing the issue. We replaced it with plain filebeat inputs and it is working correctly. So yes, the issue seems to be related to the Kubernetes autodiscover provider. |
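For anyone who wants to try the same workaround, a static input that tails the kubelet's container-log symlinks directly (no autodiscover) can look roughly like this. The paths, the NODE_NAME variable, and the add_kubernetes_metadata processor are assumptions about a typical on-cluster deployment, not the configuration used above:

filebeat.inputs:
  - type: container
    paths:
      # The kubelet maintains per-container symlinks to the runtime's log files here
      - /var/log/containers/*.log
    processors:
      # Re-attach pod/namespace metadata, since autodiscover is no longer doing it;
      # NODE_NAME is assumed to be injected via the Kubernetes downward API
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"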
We have the same issue with docker autodiscover |
Could you provide some debug logs following this configuration:
logging:
  level: debug
  selectors:
    - autodiscover
    - autodiscover.bus-filebeat
    - autodiscover.pod
    - beat
    - cfgwarn
    - crawler
    - hints.builder
    - input
    - input.filestream
    - input.harvester
    - kubernetes
    - docker
    - modules
    - seccomp
    - service
Or at least look at your debug logs and see if you find a message like this:
|
Yes, when we had the problem we were seeing this error with debug logs. |
@belimawr yes, I will watch it next week and I will tell you. |
Hi, is there any progress? We are using Beats version 8.4.3 and experiencing the same problem. |
@toms-place the issue I mentioned (#34388) has been fixed in Regarding the |
We're a paying customer and have a support ticket open with elastic. We've been running 8.8.2 for a while now, sent debug logs from 8.8.2 about a month ago, and still have the problem. |
@belimawr -> I will report back when we have updated our prod systems. Regarding |
FWIW we've been on filebeat 8.10.2 for a month now. Haven't seen any instances of harvesting stopping. |
Only with debug logs will I be able to dig into it. A few things that can cause Filebeat to miss some logs:
|
Hi! We're labeling this issue as |
Note that logstash correctly receives logs from other containers harvested by filebeat.
Filebeat harvester didn't recover even after a few hours.
Filebeat configuration:
Docker compose service:
My docker container that is not being harvested has id: 3252b7646a23293b6728941769f0412e2bd4b74b801ee09ab747c7cdfa74550c
This container was also restarted at 2019-09-11 15:19:03,983.
Last log entry correctly processed has timestamp 2019-09-11 15:19:37,235.
Next log entry with timestamp 2019-09-11 15:20:15,276 and next ones are missing.
Filebeat relevant log:
Logstash restarted timeline log:
Filebeat registry entry:
Log file stats:
Hope I provided enough details to investigate this problem.
Elastic stack 7.3.0