Fluent Bit Plugin for Amazon Kinesis Firehose

NOTE: A new higher performance Fluent Bit Firehose Plugin has been released. Check out our official guidance.

A Fluent Bit output plugin for Amazon Kinesis Data Firehose.

Security disclosures

If you think you’ve found a potential security issue, please do not post it in the Issues. Instead, please follow the instructions here or email AWS security directly at


Run `make` to build `./bin/`. Then use with Fluent Bit:

```shell
./fluent-bit -e ./ -i cpu \
    -o firehose \
    -p "region=us-west-2" \
    -p "delivery_stream=example-stream"
```

To build Windows binaries, install mingw-w64 for cross-compilation:

```shell
sudo apt-get install -y gcc-multilib gcc-mingw-w64
```

After this step, run `make windows-release` to build `./bin/firehose.dll`. Then use with Fluent Bit on Windows:

```powershell
./fluent-bit.exe -e ./firehose.dll -i dummy `
    -o firehose `
    -p "region=us-west-2" `
    -p "delivery_stream=example-stream"
```

Plugin Options

  • region: The AWS region that your Firehose delivery stream is in.
  • delivery_stream: The name of the delivery stream that you want log records sent to.
  • data_keys: By default, the whole log record will be sent to Kinesis. If you specify key name(s) with this option, then only those keys and values will be sent to Kinesis. For example, if you are using the Fluentd Docker log driver, you can specify data_keys log and only the log message will be sent to Kinesis. If you specify multiple keys, they should be comma delimited.
  • log_key: By default, the whole log record will be sent to Firehose. If you specify a key name with this option, then only the value of that key will be sent to Firehose. For example, if you are using the Fluentd Docker log driver, you can specify log_key log and only the log message will be sent to Firehose.
  • role_arn: ARN of an IAM role to assume (for cross account access).
  • endpoint: Specify a custom endpoint for the Kinesis Firehose API.
  • sts_endpoint: Specify a custom endpoint for the STS API; used to assume your custom role provided with role_arn.
  • time_key: Add the timestamp to the record under this key. By default the timestamp from Fluent Bit will not be added to records sent to Kinesis.
  • time_key_format: strftime compliant format string for the timestamp; for example, %Y-%m-%dT%H:%M:%S%z. This option is used with time_key. You can also use %L for milliseconds and %f for microseconds. If you are using ECS FireLens, make sure you are running Amazon ECS Container Agent v1.42.0 or later, otherwise the timestamps associated with your container logs will only have second precision.
  • replace_dots: Replace dot characters in key names with the value of this option. For example, if you add replace_dots _ in your config then all occurrences of . will be replaced with an underscore. By default, dots will not be replaced.
  • simple_aggregation: Allows the plugin to pack multiple log events into a single Firehose record, as long as the record does not exceed the maximum record size (1 MiB). It joins together as many log events as possible and delimits them with newlines. This is useful when your destination supports aggregation, such as S3. Defaults to false; set to true to enable.
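As one illustration, several of these options can be combined in a single output section. The stream name below is a placeholder:

```
[OUTPUT]
    Name               firehose
    Match              *
    region             us-west-2
    delivery_stream    my-stream
    data_keys          log
    time_key           timestamp
    time_key_format    %Y-%m-%dT%H:%M:%S%z
    simple_aggregation true
```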


The plugin requires firehose:PutRecordBatch permissions.
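A minimal identity-based IAM policy granting this permission might look like the following sketch; the account ID, region, and delivery stream name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "firehose:PutRecordBatch",
      "Resource": "arn:aws:firehose:us-west-2:123456789012:deliverystream/example-stream"
    }
  ]
}
```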


This plugin uses the AWS SDK Go, and uses its default credential provider chain. If you are using the plugin on Amazon EC2 or Amazon ECS or Amazon EKS, the plugin will use your EC2 instance role or ECS Task role permissions or EKS IAM Roles for Service Accounts for pods. The plugin can also retrieve credentials from a shared credentials file, or from the standard AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN environment variables.
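For example, when running outside AWS, the default credential provider chain can pick up credentials from a shared credentials file at `~/.aws/credentials`. The values below are placeholders:

```ini
[default]
aws_access_key_id     = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
```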

Environment Variables

  • FLB_LOG_LEVEL: Set the log level for the plugin. Valid values are: debug, info, and error (case insensitive). Default is info. Note: Setting log level in the Fluent Bit Configuration file using the Service key will not affect the plugin log level (because the plugin is external).
  • SEND_FAILURE_TIMEOUT: Allows you to configure a timeout if the plugin cannot send logs to Firehose. The timeout is specified as a Golang duration, for example: 5m30s. If the plugin has failed to make any progress for the given period of time, it will exit and kill Fluent Bit. This is useful in scenarios where you want your logging solution to fail fast if it has been misconfigured (e.g., the network or credentials have not been set up to allow it to send to Firehose).
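For example, both variables can be set in the environment before launching Fluent Bit; the launch command is the one shown in the build section above and is commented out here:

```shell
# Raise plugin verbosity and fail fast after 5 minutes 30 seconds of no progress
export FLB_LOG_LEVEL=debug
export SEND_FAILURE_TIMEOUT=5m30s
# ./fluent-bit -c fluent-bit.conf   # then launch Fluent Bit with these settings
```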

New Higher Performance Core Fluent Bit Plugin

In the summer of 2020, we released a new higher performance Kinesis Firehose plugin named kinesis_firehose.

That plugin has almost all of the features of this older, lower performance plugin. Check out its documentation.

Do you plan to deprecate this older plugin?

This plugin will continue to be supported. However, we are pausing development on it and will focus on the high performance version instead.

Which plugin should I use?

If the features of the higher performance plugin are sufficient for your use cases, please use it. It can achieve higher throughput and will consume less CPU and memory.

As time goes on, we expect new features to be added to the C plugin only; however, this is determined on a case-by-case basis. There is a small feature gap between the two plugins. Please consult the C plugin documentation and this document for the features offered by each plugin.

How can I migrate to the higher performance plugin?

For many users, you can simply replace the plugin name firehose with the new name kinesis_firehose. At the time of writing, the only feature missing from the high performance version is the replace_dots option. Check out its documentation.
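Under that assumption, migrating the example output section from this document would only change the plugin name (with replace_dots removed, since it is not supported by the high performance version):

```
[OUTPUT]
    Name   kinesis_firehose
    Match  *
    region us-west-2
    delivery_stream my-stream
```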

Do you accept contributions to both plugins?

Yes. The high performance plugin is written in C, and this plugin is written in Golang. We understand that Go is an easier language for new contributors to write code in; that is the primary reason we are continuing to maintain this repo.

However, if you can write code in C, please consider contributing new features to the higher performance plugin.

Fluent Bit Versions

This plugin has been tested with Fluent Bit 1.2.0+. It may not work with older Fluent Bit versions. We recommend using the latest version of Fluent Bit as it will contain the newest features and bug fixes.

Example Fluent Bit Config File

```
[INPUT]
    Name        forward
    Port        24224

[OUTPUT]
    Name   firehose
    Match  *
    region us-west-2
    delivery_stream my-stream
    replace_dots _
```

AWS for Fluent Bit

We distribute a container image with Fluent Bit and these plugins.


Amazon ECR Public Gallery


Our images are available in the Amazon ECR Public Gallery. You can pull images with different tags using the following command:

```shell
docker pull<tag>
```

For example, you can pull the image with the latest version:

```shell
docker pull
```

If you see errors for image pull limits, try logging into public ECR with your AWS credentials:

```shell
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin
```

See the Amazon ECR Public official documentation for more details.

Docker Hub


Amazon ECR

You can use our SSM Public Parameters to find the Amazon ECR image URI in your region:

```shell
aws ssm get-parameters-by-path --path /aws/service/aws-for-fluent-bit/
```

For more see our docs.


This library is licensed under the Apache 2.0 License.