diff --git a/libbeat/outputs/elasticsearch/docs/elasticsearch.asciidoc b/libbeat/outputs/elasticsearch/docs/elasticsearch.asciidoc
index 3b010b7ed36..0f7d7364985 100644
--- a/libbeat/outputs/elasticsearch/docs/elasticsearch.asciidoc
+++ b/libbeat/outputs/elasticsearch/docs/elasticsearch.asciidoc
@@ -666,10 +666,8 @@ endif::[]
 The maximum number of events to bulk in a single Elasticsearch bulk API index request.
 The default is 1600.
 
-Events can be collected into batches. When using the memory queue with `queue.mem.flush.min_events`
-set to a value greater than `1`, the maximum batch is is the value of `queue.mem.flush.min_events`.
-{beatname_uc} will split batches read from the queue which are larger than `bulk_max_size` into
-multiple batches.
+Events can be collected into batches. {beatname_uc} will split batches read from the queue which are
+larger than `bulk_max_size` into multiple batches.
 
 Specifying a larger batch size can improve performance by lowering the overhead of sending events.
 However big batch sizes can also increase processing times, which might result in
diff --git a/libbeat/outputs/logstash/docs/logstash.asciidoc b/libbeat/outputs/logstash/docs/logstash.asciidoc
index 5fa2fc5a028..d5e2e2741a6 100644
--- a/libbeat/outputs/logstash/docs/logstash.asciidoc
+++ b/libbeat/outputs/logstash/docs/logstash.asciidoc
@@ -381,10 +381,8 @@ endif::[]
 The maximum number of events to bulk in a single {ls} request. The
 default is 2048.
 
-Events can be collected into batches. When using the memory queue with `queue.mem.flush.min_events`
-set to a value greater than `1`, the maximum batch is is the value of `queue.mem.flush.min_events`.
-{beatname_uc} will split batches read from the queue which are larger than `bulk_max_size` into
-multiple batches.
+Events can be collected into batches. {beatname_uc} will split batches read from the queue which are
+larger than `bulk_max_size` into multiple batches.
 
 Specifying a larger batch size can improve performance by lowering the overhead of sending events.
 However big batch sizes can also increase processing times, which might result in
diff --git a/libbeat/outputs/redis/docs/redis.asciidoc b/libbeat/outputs/redis/docs/redis.asciidoc
index 0b758e524cb..366d3cb832a 100644
--- a/libbeat/outputs/redis/docs/redis.asciidoc
+++ b/libbeat/outputs/redis/docs/redis.asciidoc
@@ -216,10 +216,8 @@ endif::[]
 The maximum number of events to bulk in a single Redis request or pipeline. The
 default is 2048.
 
-Events can be collected into batches. When using the memory queue with `queue.mem.flush.min_events`
-set to a value greater than `1`, the maximum batch is is the value of `queue.mem.flush.min_events`.
-{beatname_uc} will split batches read from the queue which are larger than `bulk_max_size` into
-multiple batches.
+Events can be collected into batches. {beatname_uc} will split batches read from the queue which are
+larger than `bulk_max_size` into multiple batches.
 
 Specifying a larger batch size can improve performance by lowering the overhead of sending events.
 However big batch sizes can also increase processing times,
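
For context, a minimal sketch of how `bulk_max_size` appears in a Beats configuration for each
affected output, using the defaults stated in the docs above. The `hosts` values are illustrative
placeholders, not part of this change:

[source,yaml]
----
# Illustrative only: bulk_max_size per output, with the documented defaults.
# A Beats configuration enables a single output at a time; the alternatives
# are shown commented out.

output.elasticsearch:
  hosts: ["localhost:9200"]    # placeholder host
  # Batches read from the queue that are larger than this value are split
  # into multiple Elasticsearch bulk API requests.
  bulk_max_size: 1600          # documented default

#output.logstash:
#  hosts: ["localhost:5044"]   # placeholder host
#  bulk_max_size: 2048         # documented default; larger batches are split

#output.redis:
#  hosts: ["localhost:6379"]   # placeholder host
#  bulk_max_size: 2048         # documented default; larger batches are split
----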