diff --git a/pipeline/inputs/http.md b/pipeline/inputs/http.md
index 52150a24b..f701c5135 100644
--- a/pipeline/inputs/http.md
+++ b/pipeline/inputs/http.md
@@ -12,7 +12,7 @@ description: The HTTP input plugin allows you to send custom records to an HTTP
 | port | The port for Fluent Bit to listen on | 9880 |
 | tag_key | Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key. | |
 | buffer_max_size | Specify the maximum buffer size in KB to receive a JSON message. | 4M |
-| buffer_chunk_size | This sets the chunk size for incoming incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
+| buffer_chunk_size | This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
 | successful_response_code | It allows to set successful response code. `200`, `201` and `204` are supported. | 201 |
 | success_header | Add an HTTP header key/value pair on success. Multiple headers can be set. Example: `X-Custom custom-answer` | |
 | threaded | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
@@ -34,7 +34,7 @@ The http input plugin allows Fluent Bit to open up an HTTP port that you can the
 #### How to set tag
 
 The tag for the HTTP input plugin is set by adding the tag to the end of the request URL. This tag is then used to route the event through the system.
-For example, in the following curl message below the tag set is `app.log**. **` because the end end path is `/app_log`:
+For example, in the following curl message the tag set is `app.log` because the end path is `/app_log`:
 
 ### Curl request
diff --git a/pipeline/inputs/prometheus-remote-write.md b/pipeline/inputs/prometheus-remote-write.md
index b149977b7..1d711a43d 100644
--- a/pipeline/inputs/prometheus-remote-write.md
+++ b/pipeline/inputs/prometheus-remote-write.md
@@ -13,7 +13,7 @@ This input plugin allows you to ingest a payload in the Prometheus remote-write
 | listen | The address to listen on | 0.0.0.0 |
 | port | The port for Fluent Bit to listen on | 8080 |
 | buffer\_max\_size | Specify the maximum buffer size in KB to receive a JSON message. | 4M |
-| buffer\_chunk\_size | This sets the chunk size for incoming incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
+| buffer\_chunk\_size | This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
 |successful\_response\_code | It allows to set successful response code. `200`, `201` and `204` are supported.| 201 |
 | tag\_from\_uri | If true, tag will be created from uri, e.g. api\_prom\_push from /api/prom/push, and any tag specified in the config will be ignored. If false then a tag must be provided in the config for this input. | true |
 | uri | Specify an optional HTTP URI for the target web server listening for prometheus remote write payloads, e.g: /api/prom/push | |
diff --git a/pipeline/inputs/splunk.md b/pipeline/inputs/splunk.md
index 38a7fcd75..b091b2e4b 100644
--- a/pipeline/inputs/splunk.md
+++ b/pipeline/inputs/splunk.md
@@ -10,7 +10,7 @@ The **splunk** input plugin handles [Splunk HTTP HEC](https://docs.splunk.com/Do
 | port | The port for Fluent Bit to listen on | 9880 |
 | tag_key | Specify the key name to overwrite a tag. If set, the tag will be overwritten by a value of the key. | |
 | buffer_max_size | Specify the maximum buffer size in KB to receive a JSON message. | 4M |
-| buffer_chunk_size | This sets the chunk size for incoming incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
+| buffer_chunk_size | This sets the chunk size for incoming JSON messages. These chunks are then stored/managed in the space available by buffer_max_size. | 512K |
 | successful_response_code | It allows to set successful response code. `200`, `201` and `204` are supported. | 201 |
 | splunk\_token | Specify a Splunk token for HTTP HEC authentication. If multiple tokens are specified (with commas and no spaces), usage will be divided across each of the tokens. | |
 | store\_token\_in\_metadata | Store Splunk HEC tokens in the Fluent Bit metadata. If set false, they will be stored as normal key-value pairs in the record data. | true |
diff --git a/pipeline/inputs/standard-input.md b/pipeline/inputs/standard-input.md
index 9715ff685..d65fb1c9d 100644
--- a/pipeline/inputs/standard-input.md
+++ b/pipeline/inputs/standard-input.md
@@ -17,7 +17,7 @@ If no parser is configured for the stdin plugin, it expects *valid JSON* input d
 1. A JSON object with one or more key-value pairs: `{ "key": "value", "key2": "value2" }`
 3. A 2-element JSON array in [Fluent Bit Event](../../concepts/key-concepts.md#event-or-record) format, which may be:
    * `[TIMESTAMP, { "key": "value" }]` where TIMESTAMP is a floating point value representing a timestamp in seconds; or
-   * from Fluent Bit v2.1.0, `[[TIMESTAMP, METADATA], { "key": "value" }]` where TIMESTAMP has the same meaning as above and and METADATA is a JSON object.
+   * from Fluent Bit v2.1.0, `[[TIMESTAMP, METADATA], { "key": "value" }]` where TIMESTAMP has the same meaning as above and METADATA is a JSON object.
 
 Multi-line input JSON is supported.
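The standard-input.md hunk above describes the two JSON event shapes the stdin plugin accepts when no parser is configured. As a quick sanity check, this sketch (not part of the patch; the `record` and `METADATA` values are made-up placeholders) builds both shapes and prints them as JSON lines:

```python
import json
import time

record = {"key": "value"}

# Shape 1: [TIMESTAMP, record] -- TIMESTAMP is a float of seconds
simple_event = [time.time(), record]

# Shape 2: [[TIMESTAMP, METADATA], record] -- accepted from Fluent Bit v2.1.0,
# where METADATA is a JSON object (placeholder content here)
event_with_metadata = [[time.time(), {"source": "demo"}], record]

for event in (simple_event, event_with_metadata):
    print(json.dumps(event))
```

The printed lines could then be piped into something like `fluent-bit -i stdin -o stdout` to exercise the plugin, assuming a local Fluent Bit install.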
diff --git a/pipeline/outputs/postgresql.md b/pipeline/outputs/postgresql.md
index 16eac7ffc..6ce6d1b2e 100644
--- a/pipeline/outputs/postgresql.md
+++ b/pipeline/outputs/postgresql.md
@@ -12,7 +12,7 @@ According to the parameters you have set in the configuration file, the plugin w
 > **NOTE:** If you are not familiar with how PostgreSQL's users and grants system works, you might find useful reading the recommended links in the "References" section at the bottom.
 
-A typical installation normally consists of a self-contained database for Fluent Bit in which you can store the output of one or more pipelines. Ultimately, it is your choice to to store them in the same table, or in separate tables, or even in separate databases based on several factors, including workload, scalability, data protection and security.
+A typical installation normally consists of a self-contained database for Fluent Bit in which you can store the output of one or more pipelines. Ultimately, it is your choice to store them in the same table, or in separate tables, or even in separate databases based on several factors, including workload, scalability, data protection and security.
 
 In this example, for the sake of simplicity, we use a single table called `fluentbit` in a database called `fluentbit` that is owned by the user `fluentbit`. Feel free to use different names. Preferably, for security reasons, do not use the `postgres` user \(which has `SUPERUSER` privileges\).
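The postgresql.md hunk above assumes a dedicated `fluentbit` role owning a `fluentbit` database. A minimal sketch of that setup, run as a PostgreSQL superuser (e.g. via `psql`); the role and database names come from the doc's example, and the password is a placeholder to replace:

```sql
-- Sketch only: dedicated role and database for the Fluent Bit output,
-- mirroring the names used in the postgresql.md example.
CREATE ROLE fluentbit LOGIN PASSWORD 'change-me';  -- placeholder password
CREATE DATABASE fluentbit OWNER fluentbit;
```

Keeping the pipeline's data under its own non-superuser role follows the doc's advice to avoid the `postgres` account.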