Bug Report
Describe the bug
A large log record leads to high CPU usage (high system CPU time) and high RAM usage (close to 1 GB), and both stay high until I restart the service.
I set Skip_Long_Lines true in the Input section, but CPU & RAM usage is still not tolerable.
I know such large log records are not ideal, but sometimes they show up regardless of my wishes.
Also, it is not acceptable that the high CPU & RAM usage persists until Fluent Bit is restarted.
To Reproduce
Rubular link if applicable: see the regexes in Filters and plugins below
Example log message if applicable: too large to post; most of the offending records are around 235 KB per single log record
Steps to reproduce the problem:
Install td-agent-bit on my CentOS 7.4 VM, then enable & start it.
Wait for my app to generate logs, some of which are longer than 200 KB.
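The oversized input can be simulated with a short script. The record layout below is inferred from the app-firstline/app parser regexes in my config; the path, timestamp, and field values are made up for illustration:

```python
# Append one log record longer than 200 KB to the tailed file.
# Format is inferred from the firstline/app parser regexes; values are illustrative.
payload = "x" * 220_000  # ~220 KB message body, well past the 200 KB threshold

line = (
    "[ 2021-11-01 12:00:00.000 +0000 INFO req-1 12345 ] "
    "[ file=app.py line=42 func=main thread=MainThread ] "
    f"<<{payload}>>"
)

with open("app.log", "a") as f:  # stand-in for <my_app_path>/log/app.log
    f.write(line + "\n")
```

Running this a few times while td-agent-bit tails the file is enough to trigger the CPU/RAM growth on my machine.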
Expected behavior
Screenshots
Your Environment
Version used: Fluent Bit v1.8.9
Configuration:
Environment name and version (e.g. Kubernetes? What version?): linux vm
Server type and version: CentOS Linux release 7.4.1708 (Core)
Operating System and version: Linux version 3.10.0-693.2.2.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Tue Sep 12 22:26:13 UTC 2017
Filters and plugins:
[Input]
Name tail
Path <my_app_path>/log/app.log
Path_Key source
Refresh_Interval 10
Skip_Long_Lines true
DB.Sync Normal
Buffer_Max_Size 5M
Buffer_Chunk_Size 1M
Mem_Buf_Limit 500MB
Ignore_Older 1h
DB /etc/td-agent-bit/tail/app.db
Tag app
Parser_Firstline app-firstline
Multiline true
[Filter]
Name parser
Match app
Key_Name log
Parser app
Reserve_Data true
[PARSER]
Name app
Format regex
Regex ^(?<loglevel>[^ ]*) (?<request_id>[^ ]*) (?<process>[^ ]*) \] \[ file=(?<filename>[^ ]*) line=(?<lineno>[^ ]*) func=(?<funcName>[^ ]*) thread=(?<threadName>[^ ]*) \] <<(?<info>[^$].*)>>(?<traceback>.*)
Time_Key logdate
Time_Format %Y-%m-%d %H:%M:%S.%L %z
Time_Keep true
[PARSER]
Name app-firstline
Format regex
Regex ^\[ (?<logdate>[^([A-Z])]*) (?<log>.*)
[OUTPUT]
Name es
Type _doc
Host <my_es_host>
Port 9200
Generate_ID On
Logstash_Format true
Time_Key @timestamp
Match app
Logstash_Prefix test-python
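For reference, the app parser regex above can be checked against a sample record with a rough Python translation (Fluent Bit uses Onigmo-style `(?<name>…)` groups, which become `(?P<name>…)` in Python; the sample record and its field values are made up):

```python
import re

# Python translation of the "app" parser regex from the config above.
APP_RE = re.compile(
    r"^(?P<loglevel>[^ ]*) (?P<request_id>[^ ]*) (?P<process>[^ ]*) \] "
    r"\[ file=(?P<filename>[^ ]*) line=(?P<lineno>[^ ]*) "
    r"func=(?P<funcName>[^ ]*) thread=(?P<threadName>[^ ]*) \] "
    r"<<(?P<info>[^$].*)>>(?P<traceback>.*)"
)

# A made-up record shaped like what the firstline parser leaves in the "log" key.
sample = ("INFO req-1 12345 ] [ file=app.py line=42 "
          "func=main thread=MainThread ] <<request handled>>")

m = APP_RE.match(sample)
print(m.group("loglevel"), m.group("filename"), m.group("info"))
```

Note the trailing greedy `.*` groups: on a 200 KB+ record the `<<…>>…` tail forces long scans/backtracking, which may be related to the CPU spike.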
Additional context