Fluent Bit lets you collect log events and metrics from different sources, process them, and deliver them to a variety of backends such as Fluentd, Elasticsearch, Splunk, Datadog, Kafka, New Relic, Azure services, AWS services, Google services, NATS, InfluxDB, or any custom HTTP endpoint.
Fluent Bit comes with full SQL Stream Processing capabilities: data manipulation and analytics using SQL queries.
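As a sketch of what this looks like, the following stream processor query computes a five-second average of CPU usage; the stream name `cpu` and the field `cpu_p` are assumptions for illustration, based on the records produced by the `cpu` input plugin:

```sql
-- Create a new stream of averaged CPU usage from records tagged 'cpu'.
-- Results are re-ingested into the pipeline as new records.
CREATE STREAM cpu_avg AS
  SELECT AVG(cpu_p)
  FROM STREAM:cpu
  WINDOW TUMBLING (5 SECOND);
```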
Fluent Bit runs on x86_64, x86, arm32v7 and arm64v8 architectures.
- High Performance
- Data Parsing
- Reliability and Data Integrity
- Security: built-in TLS/SSL support
- Asynchronous I/O
- Pluggable Architecture and Extensibility: Inputs, Filters and Outputs
- Monitoring: expose internal metrics over HTTP in JSON and Prometheus format
- Stream Processing: Perform data selection and transformation using simple SQL queries
- Create new streams of data using query results
- Aggregation Windows
- Data analysis and prediction: Timeseries forecasting
- Portable: runs on Linux, macOS, Windows and BSD systems
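For example, the monitoring feature above can be enabled from the `[SERVICE]` section of the configuration; this minimal sketch uses the documented default port `2020`:

```
[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020
```

With the HTTP server enabled, internal metrics are exposed at `/api/v1/metrics` (JSON) and `/api/v1/metrics/prometheus` (Prometheus format).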
Fluent Bit in Production
Fluent Bit is widely used in production environments. In 2020, Fluent Bit was deployed more than 220 million times, and it continues to be deployed over 1 million times a day. The following is a preview of who uses Fluent Bit heavily in production:
If your company uses Fluent Bit and is not listed, feel free to open a GitHub issue and we will add the logo.
Build from Scratch
If you want to build Fluent Bit from source, start with the following commands:
```shell
cd build
cmake ..
make
bin/fluent-bit -i cpu -o stdout -f 1
```
If you are interested in more details, please refer to the Build & Install section.
We provide packages for most common Linux distributions:
Linux / Docker Container Images
Our Linux container images are the most common deployment model; thousands of new installations happen every day. Learn more about the available images and tags here.
Fluent Bit is fully supported on Windows environments; get started with these instructions.
Plugins: Inputs, Filters and Outputs
Fluent Bit is built on a pluggable architecture in which different plugins play a major role in the data pipeline:
Input Plugins

| Plugin | Name | Description |
| ------ | ---- | ----------- |
| collectd | Collectd | Listen for UDP packets from Collectd. |
| cpu | CPU Usage | Measure total CPU usage of the system. |
| disk | Disk Usage | Measure disk I/O. |
| dummy | Dummy | Generate dummy events. |
| exec | Exec | Execute external programs and collect event logs. |
| forward | Forward | Fluentd forward protocol. |
| head | Head | Read the first part of files. |
| health | Health | Check the health of TCP services. |
| kmsg | Kernel Log Buffer | Read Linux kernel log buffer messages. |
| mem | Memory Usage | Measure the total amount of memory used on the system. |
| mqtt | MQTT | Start an MQTT server and receive publish messages. |
| netif | Network Traffic | Measure network traffic. |
| proc | Process | Check the health of a process. |
| random | Random | Generate random samples. |
| serial | Serial Interface | Read data from the serial interface. |
| stdin | Standard Input | Read data from the standard input. |
| syslog | Syslog | Read syslog messages from a Unix socket. |
| systemd | Systemd | Read logs from Systemd/Journald. |
| tail | Tail | Tail log files. |
| tcp | TCP | Listen for JSON messages over TCP. |
| thermal | Thermal | Measure system temperature(s). |
Filter Plugins

| Plugin | Name | Description |
| ------ | ---- | ----------- |
| aws | AWS Metadata | Enrich logs with AWS metadata. |
| expect | Expect | Validate that records match certain structural criteria. |
| grep | Grep | Match or exclude specific records by patterns. |
| kubernetes | Kubernetes | Enrich logs with Kubernetes metadata. |
| lua | Lua | Filter records using Lua scripts. |
| record_modifier | Record Modifier | Modify records. |
| rewrite_tag | Rewrite Tag | Re-emit records under a new tag. |
| stdout | Stdout | Print records to the standard output interface. |
| throttle | Throttle | Apply a rate limit to the event flow. |
| nest | Nest | Nest records under a specified key. |
| modify | Modify | Apply modifications to records. |
Output Plugins

| Plugin | Name | Description |
| ------ | ---- | ----------- |
| azure | Azure Log Analytics | Ingest records into Azure Log Analytics. |
| bigquery | BigQuery | Ingest records into Google BigQuery. |
| counter | Count Records | Simple record counter. |
| datadog | Datadog | Ingest logs into Datadog. |
| es | Elasticsearch | Flush records to an Elasticsearch server. |
| file | File | Flush records to a file. |
| forward | Forward | Fluentd forward protocol. |
| gelf | GELF | Flush records to Graylog. |
| http | HTTP | Flush records to an HTTP endpoint. |
| influxdb | InfluxDB | Flush records to the InfluxDB time series database. |
| kafka | Apache Kafka | Flush records to Apache Kafka. |
| kafka-rest | Kafka REST Proxy | Flush records to a Kafka REST Proxy server. |
| loki | Loki | Flush records to a Loki server. |
| nats | NATS | Flush records to a NATS server. |
| null | NULL | Throw away events. |
| s3 | S3 | Flush records to Amazon S3. |
| stackdriver | Google Stackdriver Logging | Flush records to the Google Stackdriver Logging service. |
| stdout | Standard Output | Flush records to the standard output. |
| splunk | Splunk | Flush records to a Splunk Enterprise service. |
| tcp | TCP & TLS | Flush records to a TCP server. |
| td | Treasure Data | Flush records to the Treasure Data cloud service for analytics. |
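As an illustration of how inputs, filters and outputs fit together, here is a minimal sketch of a pipeline that tails a log file, keeps only records whose `log` field matches `error`, and prints the result to standard output. The file path and the key name `log` are assumptions for this example:

```
[SERVICE]
    Flush        1

# Collect lines appended to a log file.
[INPUT]
    Name         tail
    Path         /var/log/app.log

# Keep only records whose 'log' field matches the regex 'error'.
[FILTER]
    Name         grep
    Match        *
    Regex        log error

# Print the surviving records to standard output.
[OUTPUT]
    Name         stdout
    Match        *
```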
Fluent Bit is an open project; several individuals and companies contribute in different ways, including coding, documenting, testing and spreading the word at events, among others. If you want to learn more about contributing opportunities, please reach out to us through our Community Channels.
If you are interested in contributing to Fluent Bit with bug fixes, new features or coding in general, please refer to the CONTRIBUTING guidelines. You can also refer to the Beginners Guide to contributing to Fluent Bit here.
Community & Contact
Feel free to join us on our Slack channel, Mailing List or IRC:
This program is under the terms of the Apache License v2.0.