
Commit eaaafa4

edsiper authored and gitbook-bot committed

GitBook: [1.6] 19 pages and one asset modified

1 parent 3bd8dec commit eaaafa4

File tree

20 files changed, +100 -88 lines changed

[Modified asset preview: .gitbook/assets/logo_documentation_1.6.png, 25.4 KB]

README.md
Lines changed: 2 additions & 2 deletions

@@ -2,9 +2,9 @@
 description: High Performance Logs Processor
 ---
 
-# Fluent Bit v1.5 Documentation
+# Fluent Bit v1.6 Documentation
 
-![](imgs/logo_documentation_1.6.png)
+![](.gitbook/assets/logo_documentation_1.6.png)
 
 [Fluent Bit](http://fluentbit.io) is a Fast and Lightweight Log Processor, Stream Processor and Forwarder for Linux, OSX, Windows and BSD family operating systems. It has been made with a strong focus on performance to allow the collection of events from different sources without complexity.
 

SUMMARY.md
Lines changed: 2 additions & 1 deletion

@@ -1,6 +1,6 @@
 # Table of contents
 
-* [Fluent Bit v1.5 Documentation](README.md)
+* [Fluent Bit v1.6 Documentation](README.md)
 
 ## About
 
@@ -159,3 +159,4 @@
 * [Ingest Records Manually](development/ingest-records-manually.md)
 * [Golang Output Plugins](development/golang-output-plugins.md)
 * [Developer guide for beginners on contributing to Fluent Bit](development/developer-guide.md)
+

administration/configuring-fluent-bit/record-accessor.md
Lines changed: 14 additions & 14 deletions

@@ -4,27 +4,27 @@ description: A full feature set to access content of your records
 
 # Record Accessor
 
-Fluent Bit works internally with structured records and it can be composed of an unlimited number of keys and values. Values can be anything like a number, string, array, or a map. 
+Fluent Bit works internally with structured records and it can be composed of an unlimited number of keys and values. Values can be anything like a number, string, array, or a map.
 
-Having a way to select a specific part of the record is critical for certain core functionalities or plugins, this feature is called _Record Accessor._ 
+Having a way to select a specific part of the record is critical for certain core functionalities or plugins, this feature is called _Record Accessor._
 
 > consider Record Accessor a simple grammar to specify record content and other miscellaneus values.
 
-### Format
+## Format
 
 A _record accessor_ rule starts with the character `$`. Using the structured content above as an example the following table describes how to access a record:
 
 ```javascript
 {
-    "log": "some message",
-    "stream": "stdout",
-    "labels": {
-        "color": "blue",
-        "unset": null,
-        "project": {
-            "env": "production"
-        }
-    }
+  "log": "some message",
+  "stream": "stdout",
+  "labels": {
+     "color": "blue",
+     "unset": null,
+     "project": {
+         "env": "production"
+     }
+  }
 }
 ```
 
@@ -38,9 +38,9 @@ The following table describe some accessing rules and the expected returned valu
 | $labels\['unset'\] | null |
 | $labels\['undefined'\] | |
 
-If the accessor key does not exist in the record like the last example `$labels['undefined']` , the operation is simply omitted, no exception will occur. 
+If the accessor key does not exist in the record like the last example `$labels['undefined']` , the operation is simply omitted, no exception will occur.
 
-### Usage Example
+## Usage Example
 
 The feature is enabled on a per plugin basis, not all plugins enable this feature. As an example consider a configuration that aims to filter records using [grep](../../pipeline/filters/grep.md) that only matches where labels have a color blue:
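
For context, the usage example that follows in this file pairs grep with a record accessor rule along these lines (a minimal sketch based on the rule syntax shown above; it is not part of this commit):

```text
[FILTER]
    Name   grep
    Match  *
    # Keep only records whose nested labels.color key equals "blue"
    Regex  $labels['color'] ^blue$
```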

administration/security.md
Lines changed: 2 additions & 1 deletion

@@ -27,7 +27,7 @@ The following **output** plugins can take advantage of the TLS feature:
 * [BigQuery](../pipeline/outputs/bigquery.md)
 * [Datadog](../pipeline/outputs/datadog.md)
 * [Elasticsearch](../pipeline/outputs/elasticsearch.md)
-* [Forward]()
+* [Forward](security.md)
 * [GELF](../pipeline/outputs/gelf.md)
 * [HTTP](../pipeline/outputs/http.md)
 * [InfluxDB](../pipeline/outputs/influxdb.md)
 
@@ -93,3 +93,4 @@ Fluent Bit supports [TLS server name indication](https://en.wikipedia.org/wiki/S
     tls.ca_file  /etc/certs/fluent.crt
     tls.vhost    fluent.example.com
 ```
+
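
For context, the tls.vhost lines in the hunk above normally sit next to the plugin's basic TLS switches. A minimal sketch for the Forward output, assuming the standard tls, tls.verify, tls.ca_file and tls.vhost parameters (the host and certificate paths are placeholders, not part of this commit):

```text
[OUTPUT]
    Name         forward
    Match        *
    Host         fluent.example.com
    Port         24224
    tls          On
    tls.verify   On
    # CA certificate used to validate the server, as in the SNI example above
    tls.ca_file  /etc/certs/fluent.crt
    tls.vhost    fluent.example.com
```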

installation/kubernetes.md
Lines changed: 12 additions & 11 deletions

@@ -102,18 +102,18 @@ When deploying Fluent Bit to Kubernetes, there are three log files that you need
 
 `C:\k\kubelet.err.log`
 
-* This is the error log file from kubelet daemon running on host.
-* You will need to retain this file for future troubleshooting (to debug deployment failures etc.)
+* This is the error log file from kubelet daemon running on host.
+* You will need to retain this file for future troubleshooting \(to debug deployment failures etc.\)
 
 `C:\var\log\containers\<pod>_<namespace>_<container>-<docker>.log`
 
-* This is the main log file you need to watch. Configure Fluent Bit to follow this file.
-* It is actually a symlink to the Docker log file in `C:\ProgramData\`, with some additional metadata on its file name.
+* This is the main log file you need to watch. Configure Fluent Bit to follow this file.
+* It is actually a symlink to the Docker log file in `C:\ProgramData\`, with some additional metadata on its file name.
 
 `C:\ProgramData\Docker\containers\<docker>\<docker>.log`
 
-* This is the log file produced by Docker.
-* Normally you don't directly read from this file, but you need to make sure that this file is visible from Fluent Bit.
+* This is the log file produced by Docker.
+* Normally you don't directly read from this file, but you need to make sure that this file is visible from Fluent Bit.
 
 Typically, your deployment yaml contains the following volume configuration.
 
@@ -185,17 +185,18 @@ parsers.conf: |
 
 ### Mitigate unstable network on Windows pods
 
-Windows pods often lack working DNS immediately after boot ([#78479](https://github.com/kubernetes/kubernetes/issues/78479)). To mitigate this issue, `filter_kubernetes` provides a built-in mechanism to wait until the network starts up:
+Windows pods often lack working DNS immediately after boot \([\#78479](https://github.com/kubernetes/kubernetes/issues/78479)\). To mitigate this issue, `filter_kubernetes` provides a built-in mechanism to wait until the network starts up:
 
-* `DNS_Retries` - Retries N times until the network start working (6)
-* `DNS_Wait_Time` - Lookup interval between network status checks (30)
+* `DNS_Retries` - Retries N times until the network start working \(6\)
+* `DNS_Wait_Time` - Lookup interval between network status checks \(30\)
 
-By default, Fluent Bit waits for 3 minutes (30 seconds x 6 times). If it's not enough for you, tweak the configuration as follows.
+By default, Fluent Bit waits for 3 minutes \(30 seconds x 6 times\). If it's not enough for you, tweak the configuration as follows.
 
-```
+```text
 [filter]
     Name kubernetes
     ...
     DNS_Retries 10
     DNS_Wait_Time 30
 ```
+
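
As a side note on the main log file described in the first hunk, a tail input following that Windows path would look roughly like this (a sketch; the tag and parser choices are illustrative assumptions, not part of this commit):

```text
[INPUT]
    Name   tail
    Tag    kube.*
    # Main per-container log location on Windows nodes, per the list above
    Path   C:\var\log\containers\*.log
    Parser docker
```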

installation/sources/build-and-install.md
Lines changed: 2 additions & 1 deletion

@@ -112,7 +112,7 @@ The _input plugins_ provides certain features to gather information from a speci
 | [FLB\_IN\_DISK](../../pipeline/inputs/disk-io-metrics.md) | Enable Disk I/O Metrics input plugin | On |
 | [FLB\_IN\_DOCKER](../docker.md) | Enable Docker metrics input plugin | On |
 | [FLB\_IN\_EXEC](../../pipeline/inputs/exec.md) | Enable Exec input plugin | On |
-| [FLB\_IN\_FORWARD]() | Enable Forward input plugin | On |
+| [FLB\_IN\_FORWARD](build-and-install.md) | Enable Forward input plugin | On |
 | [FLB\_IN\_HEAD](../../pipeline/inputs/head.md) | Enable Head input plugin | On |
 | [FLB\_IN\_HEALTH](../../pipeline/inputs/health.md) | Enable Health input plugin | On |
 | [FLB\_IN\_KMSG](../../pipeline/inputs/kernel-logs.md) | Enable Kernel log input plugin | On |
 
@@ -182,3 +182,4 @@ The _output plugins_ gives the capacity to flush the information to some externa
 | [FLB\_OUT\_STDOUT](build-and-install.md) | Enable STDOUT output plugin | On |
 | FLB\_OUT\_TCP | Enable TCP/TLS output plugin | On |
 | [FLB\_OUT\_TD](../../pipeline/outputs/treasure-data.md) | Enable [Treasure Data](http://www.treasuredata.com) output plugin | On |
+
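
These FLB_* options are passed to cmake at configure time. For instance, turning the Forward input off in a custom build would look like this (a sketch using the standard -D toggle; the build directory layout is assumed):

```bash
# From a build/ directory inside the Fluent Bit source tree
cmake -DFLB_IN_FORWARD=Off ../
```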

pipeline/filters/aws-metadata.md
Lines changed: 3 additions & 2 deletions

@@ -11,7 +11,7 @@ The plugin supports the following configuration parameters:
 | imds\_version | Specify which version of the instance metadata service to use. Valid values are 'v1' or 'v2'. | v2 |
 | az | The [availability zone](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html); for example, "us-east-1a". | true |
 | ec2\_instance\_id | The EC2 instance ID. | true |
-| ec2\_instance\_type | The EC2 instance type. | false |
+| ec2\_instance\_type | The EC2 instance type. | false |
 | private\_ip | The EC2 instance private ip. | false |
 | ami\_id | The EC2 instance image id. | false |
 | account\_id | The account ID for current EC2 instance. | false |
 
@@ -53,4 +53,5 @@ $ bin/fluent-bit -c /PATH_TO_CONF_FILE/fluent-bit.conf
 [OUTPUT]
     Name stdout
     Match *
-```
+```
+
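
To exercise the parameters listed in the first hunk, a filter stanza along these lines would additionally expose the instance type (a sketch using only the documented keys; `aws` is the filter name used by these docs):

```text
[FILTER]
    Name              aws
    Match             *
    imds_version      v2
    az                true
    ec2_instance_id   true
    # Off by default per the table above; enabled here for illustration
    ec2_instance_type true
```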

pipeline/filters/tensorflow.md
Lines changed: 21 additions & 17 deletions

@@ -1,41 +1,44 @@
 # Tensorflow
 
-_Tensorflow Filter_ allows running Machine Learning inference tasks on the records of data coming from
-input plugins or stream processor. This filter uses [Tensorflow Lite](https://www.tensorflow.org/lite/)
-as the inference engine, and **requires Tensorflow Lite shared library to be present during build and at runtime**.
+## Tensorflow
 
-Tensorflow Lite is a lightweight open-source deep learning framework that is used for mobile and IoT applications. Tensorflow Lite only handles inference (not training), therefore, it loads pre-trained models (`.tflite` files) that are converted into Tensorflow Lite format (`FlatBuffer`). You can read more on converting Tensorflow models [here](https://www.tensorflow.org/lite/convert)
+_Tensorflow Filter_ allows running Machine Learning inference tasks on the records of data coming from input plugins or stream processor. This filter uses [Tensorflow Lite](https://www.tensorflow.org/lite/) as the inference engine, and **requires Tensorflow Lite shared library to be present during build and at runtime**.
 
-## Configuration Parameters
+Tensorflow Lite is a lightweight open-source deep learning framework that is used for mobile and IoT applications. Tensorflow Lite only handles inference \(not training\), therefore, it loads pre-trained models \(`.tflite` files\) that are converted into Tensorflow Lite format \(`FlatBuffer`\). You can read more on converting Tensorflow models [here](https://www.tensorflow.org/lite/convert)
+
+### Configuration Parameters
 
 The plugin supports the following configuration parameters:
 
 | Key | Description | Default |
 | :--- | :--- | :--- |
-| input_field | Specify the name of the field in the record to apply inference on. | |
-| model_file | Path to the model file (`.tflite`) to be loaded by Tensorflow Lite. | |
-| include_input_fields | Include all input filed in filter's output | True |
-| normalization_value | Divide input values to normalization_value | |
+| input\_field | Specify the name of the field in the record to apply inference on. | |
+| model\_file | Path to the model file \(`.tflite`\) to be loaded by Tensorflow Lite. | |
+| include\_input\_fields | Include all input filed in filter's output | True |
+| normalization\_value | Divide input values to normalization\_value | |
 
-## Creating Tensorflow Lite shared library
+### Creating Tensorflow Lite shared library
 
 Clone [Tensorflow repository](https://github.com/tensorflow/tensorflow), install bazel package manager, and run the following command in order to create the shared library:
+
 ```bash
 $ bazel build -c opt //tensorflow/lite/c:tensorflowlite_c # see https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/c
 ```
-The script creates the shared library `bazel-bin/tensorflow/lite/c/libtensorflowlite_c.so`. You need to copy the library to a location (such as `/usr/lib`) that can be used by Fluent Bit.
 
-## Building Fluent Bit with Tensorflow filter plugin
+The script creates the shared library `bazel-bin/tensorflow/lite/c/libtensorflowlite_c.so`. You need to copy the library to a location \(such as `/usr/lib`\) that can be used by Fluent Bit.
+
+### Building Fluent Bit with Tensorflow filter plugin
+
+Tensorflow filter plugin is disabled by default. You need to build Fluent Bit with Tensorflow plugin enabled. In addition, it requires access to Tensorflow Lite header files to compile. Therefore, you also need to pass the address of the Tensorflow source code on your machine to the [build script](https://github.com/fluent/fluent-bit#build-from-scratch):
 
-Tensorflow filter plugin is disabled by default. You need to build Fluent Bit with Tensorflow plugin enabled. In addition, it requires access to Tensorflow Lite header files to compile.
-Therefore, you also need to pass the address of the Tensorflow source code on your machine to the [build script](https://github.com/fluent/fluent-bit#build-from-scratch):
 ```bash
 cmake -DFLB_FILTER_TENSORFLOW=On -DTensorflow_DIR=<AddressOfTensorflowSourceCode> ...
 ```
 
-### Command line
+#### Command line
 
 If Tensorflow plugin initializes correctly, it reports successful creation of the interpreter, and prints a summary of model's input/output types and dimensions.
+
 ```bash
 $ bin/fluent-bit -i mqtt -p 'tag=mqtt.data' -F tensorflow -m '*' -p 'input_field=image' -p 'model_file=/home/user/model.tflite' -p 'include_input_fields=false' -p 'normalization_value=255' -o stdout
 [2020/08/04 20:00:00] [ info] Tensorflow Lite interpreter created!
 
@@ -45,7 +48,7 @@ $ bin/fluent-bit -i mqtt -p 'tag=mqtt.data' -F tensorflow -m '*' -p 'input_field
 [2020/08/04 20:00:00] [ info] [tensorflow] type: FLOAT32 dimensions: {1, 2}
 ```
 
-### Configuration File
+#### Configuration File
 
 ```text
 [SERVICE]
 
@@ -70,7 +73,8 @@ $ bin/fluent-bit -i mqtt -p 'tag=mqtt.data' -F tensorflow -m '*' -p 'input_field
     Match *
 ```
 
-# Limitations
+## Limitations
 
 1. Currently supports single-input models
 2. Uses Tensorflow 2.3 header files
+

pipeline/inputs/forward.md
Lines changed: 3 additions & 2 deletions

@@ -10,8 +10,8 @@ The plugin supports the following configuration parameters:
 | :--- | :--- | :--- |
 | Listen | Listener network interface. | 0.0.0.0 |
 | Port | TCP port to listen for incoming connections. | 24224 |
-| Buffer\_Max\_Size | Specify the maximum buffer memory size used to receive a Forward message. The value must be according to the [Unit Size](../administration/configuring-fluent-bit/unit-sizes.md) specification. | _Buffer\_Chunk\_Size_ |
-| Buffer\_Chunk\_Size | By default the buffer to store the incoming Forward messages, do not allocate the maximum memory allowed, instead it allocate memory when is required. The rounds of allocations are set by _Buffer\_Chunk\_Size_. The value must be according to the [Unit Size](../administration/configuring-fluent-bit/unit-sizes.md) specification. | 32KB |
+| Buffer\_Max\_Size | Specify the maximum buffer memory size used to receive a Forward message. The value must be according to the [Unit Size](https://github.com/fluent/fluent-bit-docs/tree/07822c841d38702ae88b58bf3b3c07c9f1e35434/pipeline/administration/configuring-fluent-bit/unit-sizes.md) specification. | _Buffer\_Chunk\_Size_ |
+| Buffer\_Chunk\_Size | By default the buffer to store the incoming Forward messages, do not allocate the maximum memory allowed, instead it allocate memory when is required. The rounds of allocations are set by _Buffer\_Chunk\_Size_. The value must be according to the [Unit Size](https://github.com/fluent/fluent-bit-docs/tree/07822c841d38702ae88b58bf3b3c07c9f1e35434/pipeline/administration/configuring-fluent-bit/unit-sizes.md) specification. | 32KB |
 
 ## Getting Started
 
@@ -69,3 +69,4 @@ Copyright (C) Treasure Data
 [2016/10/07 21:49:40] [ info] [in_fw] binding 0.0.0.0:24224
 [0] my_tag: [1475898594, {"key 1"=>123456789, "key 2"=>"abcdefg"}]
 ```
+
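
For reference, the two buffer parameters from the table combine like this in a listener (a sketch; the sizes are illustrative Unit Size values, not part of this commit):

```text
[INPUT]
    Name              forward
    Listen            0.0.0.0
    Port              24224
    # Allocate in 64KB rounds, never holding more than 256KB per message
    Buffer_Chunk_Size 64KB
    Buffer_Max_Size   256KB
```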

pipeline/inputs/tail.md
Lines changed: 1 addition & 0 deletions

@@ -117,3 +117,4 @@ By default SQLite client tool do not format the columns in a human read-way, so
 File rotation is properly handled, including logrotate's _copytruncate_ mode.
 
 Note that the `Path` patterns **cannot** match the rotated files. Otherwise, the rotated file would be read again and lead to duplicate records.
+
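
In practice the note above means the glob should match only live files; for example (a sketch with a hypothetical path):

```text
[INPUT]
    Name tail
    # Safe: matches only the live file. A pattern such as /var/log/app.log*
    # would also match rotated files like app.log.1 and duplicate records.
    Path /var/log/app.log
```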

pipeline/outputs/azure_blob.md
Lines changed: 1 addition & 1 deletion

@@ -23,7 +23,7 @@ We expose different configuration properties. The following table lists all the
 | account\_name | Azure Storage account name. This configuration property is mandatory | |
 | shared\_key | Specify the Azure Storage Shared Key to authenticate against the service. This configuration property is mandatory. | |
 | container\_name | Name of the container that will contain the blobs. This configuration property is mandatory | |
-| blob\_type | Specify the desired blob type. Fluent Bit supports `appendblob` and `blockblob`. | appendblob |
+| blob\_type | Specify the desired blob type. Fluent Bit supports `appendblob` and `blockblob`. | appendblob |
 | auto\_create\_container | If `container_name` does not exist in the remote service, enabling this option will handle the exception and auto-create the container. | on |
 | path | Optional path to store your blobs. If your blob name is `myblob`, you can specify sub-directories where to store it using path, so setting path to `/logs/kubernetes` will store your blob in `/logs/kubernetes/myblob`. | |
 | emulator\_mode | If you want to send data to an Azure emulator service like [Azurite](https://github.com/Azure/Azurite), enable this option so the plugin will format the requests to the expected format. | off |
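
Putting the mandatory properties from the table together, a minimal output stanza might look like this (a sketch; the account, key, and container values are placeholders):

```text
[OUTPUT]
    Name                  azure_blob
    Match                 *
    account_name          myaccount
    shared_key            MyStorageSharedKey==
    container_name        logs
    blob_type             appendblob
    auto_create_container on
```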
