Commit f689765

Reindent configuration file example, improve readability, etc. (#1441)
* pipeline: inputs: opentelemetry: tabs -> 4 spaces
* pipeline: inputs: prometheus-remote-write: tabs -> 4 spaces
* pipeline: filters: kubernetes: yaml syntax highlight
* pipeline: filters: kubernetes: reindent config file
* pipeline: filters: lua: reindent config
* pipeline: outputs: chronicle: remove dashes which cause a markdown break
* pipeline: outputs: oci-logging-analytics: add newlines
* pipeline: outputs: oci-logging-analytics: reindent json fields
* pipeline: outputs: s3: reindent with 4 spaces
* pipeline: outputs: vivo-exporter: remove unusual line terminator LSEP (U+2028)
* pipeline: outputs: websocket: add newline

Signed-off-by: Seonghyeon Cho <seonghyeoncho96@gmail.com>
1 parent: fb9458a

File tree: 9 files changed, +120 −114 lines

pipeline/filters/kubernetes.md (23 additions, 23 deletions)

@@ -271,7 +271,7 @@ There are some configuration setup needed for this feature.
 
 Role Configuration for Fluent Bit DaemonSet Example:
 
-```text
+```yaml
 ---
 apiVersion: v1
 kind: ServiceAccount

@@ -314,34 +314,34 @@ The difference is that kubelet need a special permission for resource `nodes/proxy`
 Fluent Bit Configuration Example:
 
 ```text
-[INPUT]
-    Name tail
-    Tag kube.*
-    Path /var/log/containers/*.log
-    DB /var/log/flb_kube.db
-    Parser docker
-    Docker_Mode On
-    Mem_Buf_Limit 50MB
-    Skip_Long_Lines On
-    Refresh_Interval 10
-
-[FILTER]
-    Name kubernetes
-    Match kube.*
-    Kube_URL https://kubernetes.default.svc.cluster.local:443
-    Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
-    Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
-    Merge_Log On
-    Buffer_Size 0
-    Use_Kubelet true
-    Kubelet_Port 10250
+[INPUT]
+    Name tail
+    Tag kube.*
+    Path /var/log/containers/*.log
+    DB /var/log/flb_kube.db
+    Parser docker
+    Docker_Mode On
+    Mem_Buf_Limit 50MB
+    Skip_Long_Lines On
+    Refresh_Interval 10
+
+[FILTER]
+    Name kubernetes
+    Match kube.*
+    Kube_URL https://kubernetes.default.svc.cluster.local:443
+    Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+    Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
+    Merge_Log On
+    Buffer_Size 0
+    Use_Kubelet true
+    Kubelet_Port 10250
 ```
 
 So for fluent bit configuration, you need to set the `Use_Kubelet` to true to enable this feature.
 
 DaemonSet config Example:
 
-```text
+```yaml
 ---
 apiVersion: apps/v1
 kind: DaemonSet
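
Note on the second hunk: the surrounding doc text says kubelet access requires a special permission on the `nodes/proxy` resource. A minimal sketch of a ClusterRole granting it could look like the following; the role name and the exact resource list are illustrative assumptions, not part of this diff:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read   # illustrative name, not taken from this commit
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces", "nodes", "nodes/proxy"]   # nodes/proxy is the kubelet-specific piece
    verbs: ["get", "list", "watch"]
```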

pipeline/filters/lua.md (23 additions, 23 deletions)

@@ -196,12 +196,12 @@ We want to extract the `sandboxbsh` name and add it to our record as a special key
 {% tabs %}
 {% tab title="fluent-bit.conf" %}
 ```
-[FILTER]
-    Name lua
-    Alias filter-iots-lua
-    Match iots_thread.*
-    Script filters.lua
-    Call set_landscape_deployment
+[FILTER]
+    Name lua
+    Alias filter-iots-lua
+    Match iots_thread.*
+    Script filters.lua
+    Call set_landscape_deployment
 ```
 {% endtab %}
 

@@ -358,23 +358,23 @@ Configuration to get istio logs and apply response code filter to them.
 {% tabs %}
 {% tab title="fluent-bit.conf" %}
 ```ini
-[INPUT]
-    Name tail
-    Path /var/log/containers/*_istio-proxy-*.log
-    multiline.parser docker, cri
-    Tag istio.*
-    Mem_Buf_Limit 64MB
-    Skip_Long_Lines Off
-
-[FILTER]
-    Name lua
-    Match istio.*
-    Script response_code_filter.lua
-    call cb_response_code_filter
-
-[Output]
-    Name stdout
-    Match *
+[INPUT]
+    Name tail
+    Path /var/log/containers/*_istio-proxy-*.log
+    multiline.parser docker, cri
+    Tag istio.*
+    Mem_Buf_Limit 64MB
+    Skip_Long_Lines Off
+
+[FILTER]
+    Name lua
+    Match istio.*
+    Script response_code_filter.lua
+    call cb_response_code_filter
+
+[Output]
+    Name stdout
+    Match *
 ```
 {% endtab %}
 
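The istio example registers `cb_response_code_filter` from `response_code_filter.lua`, but the script body sits outside this diff. A minimal sketch of such a callback, assuming records carry a `response_code` field (the field name and the drop threshold are assumptions):

```lua
-- Hypothetical body for response_code_filter.lua, not part of this commit.
-- Fluent Bit calls the function as f(tag, timestamp, record) and expects
-- (code, timestamp, record) back: -1 drops the record, 0 keeps it unchanged.
function cb_response_code_filter(tag, timestamp, record)
    local code = tonumber(record["response_code"])  -- assumed field name
    if code ~= nil and code < 400 then
        return -1, 0, 0                             -- drop 1xx/2xx/3xx responses
    end
    return 0, timestamp, record                     -- keep errors and unmatched records
end
```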

pipeline/inputs/opentelemetry.md (5 additions, 5 deletions)

@@ -79,13 +79,13 @@ pipeline:
 {% tab title="fluent-bit.conf" %}
 ```
 [INPUT]
-    name opentelemetry
-    listen 127.0.0.1
-    port 4318
+    name opentelemetry
+    listen 127.0.0.1
+    port 4318
 
 [OUTPUT]
-    name stdout
-    match *
+    name stdout
+    match *
 ```
 {% endtab %}
 
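The `pipeline:` context in the hunk header appears to come from the YAML tab of the same example, which this hunk does not touch. For comparison, a YAML-mode equivalent of the classic-mode snippet would look roughly like this (a sketch, not part of the diff):

```yaml
pipeline:
  inputs:
    - name: opentelemetry
      listen: 127.0.0.1
      port: 4318
  outputs:
    - name: stdout
      match: '*'
```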

pipeline/inputs/prometheus-remote-write.md (13 additions, 13 deletions)

@@ -26,14 +26,14 @@ A sample config file to get started will look something like the following:
 {% tab title="fluent-bit.conf" %}
 ```
 [INPUT]
-    name prometheus_remote_write
-    listen 127.0.0.1
-    port 8080
-    uri /api/prom/push
+    name prometheus_remote_write
+    listen 127.0.0.1
+    port 8080
+    uri /api/prom/push
 
 [OUTPUT]
-    name stdout
-    match *
+    name stdout
+    match *
 ```
 {% endtab %}
 

@@ -65,13 +65,13 @@ Communicating with TLS, you will need to use the tls related parameters:
 
 ```
 [INPUT]
-    Name prometheus_remote_write
-    Listen 127.0.0.1
-    Port 8080
-    Uri /api/prom/push
-    Tls On
-    tls.crt_file /path/to/certificate.crt
-    tls.key_file /path/to/certificate.key
+    Name prometheus_remote_write
+    Listen 127.0.0.1
+    Port 8080
+    Uri /api/prom/push
+    Tls On
+    tls.crt_file /path/to/certificate.crt
+    tls.key_file /path/to/certificate.key
 ```
 
 Now, you should be able to send data over TLS to the remote write input.
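
To exercise the TLS example above from the sending side, a Prometheus server can point its remote write at this input. A minimal sketch of the relevant `prometheus.yml` fragment; the skip-verify flag assumes a self-signed certificate and is not something this commit prescribes:

```yaml
remote_write:
  - url: https://127.0.0.1:8080/api/prom/push
    tls_config:
      insecure_skip_verify: true   # assumption: self-signed cert on the Fluent Bit side
```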

pipeline/outputs/chronicle.md (0 additions, 2 deletions)

@@ -1,5 +1,3 @@
----
-
 # Chronicle
 
 The Chronicle output plugin allows ingesting security logs into [Google Chronicle](https://chronicle.security/) service. This connector is designed to send unstructured security logs.

pipeline/outputs/oci-logging-analytics.md (13 additions, 6 deletions)

@@ -86,11 +86,13 @@ In case of multiple inputs, where oci_la_* properties can differ, you can add th
 [INPUT]
     Name dummy
     Tag dummy
+
 [Filter]
     Name modify
     Match *
     Add oci_la_log_source_name <LOG_SOURCE_NAME>
     Add oci_la_log_group_id <LOG_GROUP_OCID>
+
 [Output]
     Name oracle_log_analytics
     Match *

@@ -109,6 +111,7 @@ You can attach certain metadata to the log events collected from various inputs.
 [INPUT]
     Name dummy
     Tag dummy
+
 [Output]
     Name oracle_log_analytics
     Match *

@@ -138,12 +141,12 @@ The above configuration will generate a payload that looks like this
         "metadata": {
             "key1": "value1",
             "key2": "value2"
-        },
-        "logSourceName": "example_log_source",
-        "logRecords": [
-            "dummy"
-        ]
-    }
+        },
+        "logSourceName": "example_log_source",
+        "logRecords": [
+            "dummy"
+        ]
+    }
 ]
 }
 ```

@@ -156,23 +159,27 @@ With oci_config_in_record option set to true, the metadata key-value pairs will
 [INPUT]
     Name dummy
     Tag dummy
+
 [FILTER]
     Name Modify
     Match *
     Add olgm.key1 val1
     Add olgm.key2 val2
+
 [FILTER]
     Name nest
     Match *
     Operation nest
     Wildcard olgm.*
     Nest_under oci_la_global_metadata
     Remove_prefix olgm.
+
 [Filter]
     Name modify
     Match *
     Add oci_la_log_source_name <LOG_SOURCE_NAME>
     Add oci_la_log_group_id <LOG_GROUP_OCID>
+
 [Output]
     Name oracle_log_analytics
     Match *
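
In the last hunk, the `nest` filter groups every `olgm.*` key under `oci_la_global_metadata` and strips the `olgm.` prefix. Inferred from those options (not captured output), a dummy record should come out of the filter chain shaped roughly like this; the `message` field name is the dummy input's default and an assumption here:

```json
{
  "message": "dummy",
  "oci_la_global_metadata": {
    "key1": "val1",
    "key2": "val2"
  }
}
```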

pipeline/outputs/s3.md (41 additions, 41 deletions)

@@ -198,13 +198,13 @@ The following settings are recommended for this use case:
 
 ```
 [OUTPUT]
-    Name s3
-    Match *
-    bucket your-bucket
-    region us-east-1
-    total_file_size 1M
-    upload_timeout 1m
-    use_put_object On
+    Name s3
+    Match *
+    bucket your-bucket
+    region us-east-1
+    total_file_size 1M
+    upload_timeout 1m
+    use_put_object On
 ```
 
 ## S3 Multipart Uploads

@@ -252,14 +252,14 @@ Example:
 
 ```
 [OUTPUT]
-    Name s3
-    Match *
-    bucket your-bucket
-    region us-east-1
-    total_file_size 1M
-    upload_timeout 1m
-    use_put_object On
-    workers 1
+    Name s3
+    Match *
+    bucket your-bucket
+    region us-east-1
+    total_file_size 1M
+    upload_timeout 1m
+    use_put_object On
+    workers 1
 ```
 
 If you enable a single worker, you are enabling a dedicated thread for your S3 output. We recommend starting without workers, evaluating the performance, and then enabling a worker if needed. For most users, the plugin can provide sufficient throughput without workers.

@@ -274,10 +274,10 @@ Example:
 
 ```
 [OUTPUT]
-    Name s3
-    Match *
-    bucket your-bucket
-    endpoint http://localhost:9000
+    Name s3
+    Match *
+    bucket your-bucket
+    endpoint http://localhost:9000
 ```
 
 Then, the records will be stored into the MinIO server.

@@ -300,27 +300,27 @@ In your main configuration file append the following _Output_ section:
 
 ```
 [OUTPUT]
-    Name s3
-    Match *
-    bucket your-bucket
-    region us-east-1
-    store_dir /home/ec2-user/buffer
-    total_file_size 50M
-    upload_timeout 10m
+    Name s3
+    Match *
+    bucket your-bucket
+    region us-east-1
+    store_dir /home/ec2-user/buffer
+    total_file_size 50M
+    upload_timeout 10m
 ```
 
 An example that using PutObject instead of multipart:
 
 ```
 [OUTPUT]
-    Name s3
-    Match *
-    bucket your-bucket
-    region us-east-1
-    store_dir /home/ec2-user/buffer
-    use_put_object On
-    total_file_size 10M
-    upload_timeout 10m
+    Name s3
+    Match *
+    bucket your-bucket
+    region us-east-1
+    store_dir /home/ec2-user/buffer
+    use_put_object On
+    total_file_size 10M
+    upload_timeout 10m
 ```
 
 ## AWS for Fluent Bit

@@ -387,15 +387,15 @@ Once compiled, Fluent Bit can upload incoming data to S3 in Apache Arrow format.
 
 ```
 [INPUT]
-    Name cpu
+    Name cpu
 
 [OUTPUT]
-    Name s3
-    Bucket your-bucket-name
-    total_file_size 1M
-    use_put_object On
-    upload_timeout 60s
-    Compression arrow
+    Name s3
+    Bucket your-bucket-name
+    total_file_size 1M
+    use_put_object On
+    upload_timeout 60s
+    Compression arrow
 ```
 
 As shown in this example, setting `Compression` to `arrow` makes Fluent Bit to convert payload into Apache Arrow format.
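
As a quick way to verify the MinIO example in the third hunk, the bucket can be listed with the AWS CLI pointed at the same endpoint (a sketch; assumes credentials for the MinIO instance are already configured):

```sh
# List the objects Fluent Bit has uploaded to the MinIO-backed bucket.
aws --endpoint-url http://localhost:9000 s3 ls s3://your-bucket/ --recursive
```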

pipeline/outputs/vivo-exporter.md (1 addition, 1 deletion)

@@ -25,7 +25,7 @@ Here is a simple configuration of Vivo Exporter, note that this example is not b
     match *
     empty_stream_on_read off
     stream_queue_size 20M
-    http_cors_allow_origin *
+    http_cors_allow_origin *
 ```
 
 ### How it works
