peer review updates
abrennan89 committed Sep 1, 2023
1 parent e69c5f5 commit 93f7839
Showing 2 changed files with 6 additions and 6 deletions.
4 changes: 2 additions & 2 deletions logging/cluster-logging-loki.adoc
@@ -6,9 +6,9 @@ include::_attributes/common-attributes.adoc[]

toc::[]

-In {logging} documentation, _LokiStack_ refers to the {logging} supported combination of Loki, and web proxy with {product-title} authentication integration. LokiStack's proxy uses {product-title} authentication to enforce multi-tenancy. _Loki_ refers to the log store as either the individual component or an external store.
+In {logging} documentation, _LokiStack_ refers to the {logging} supported combination of Loki and web proxy with {product-title} authentication integration. LokiStack's proxy uses {product-title} authentication to enforce multi-tenancy. _Loki_ refers to the log store as either the individual component or an external store.

-Loki is a horizontally scalable, highly available, multi-tenant log aggregation system currently offered as an alternative to Elasticsearch as a log store for the {logging}. Elasticsearch indexes incoming log records completely during ingestion. Loki only indexes a few fixed labels during ingestion, and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly. You can query Loki by using the link:https://grafana.com/docs/loki/latest/logql/[LogQL log query language].
+Loki is a horizontally scalable, highly available, multi-tenant log aggregation system currently offered as an alternative to Elasticsearch as a log store for the {logging}. Elasticsearch indexes incoming log records completely during ingestion. Loki only indexes a few fixed labels during ingestion and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly. You can query Loki by using the link:https://grafana.com/docs/loki/latest/logql/[LogQL log query language].
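For illustration, a LogQL query against a LokiStack log store might look like the following minimal sketch; the `log_type` and `kubernetes_namespace_name` stream labels are assumed typical {logging} labels and are not defined in this diff.

[source,logql]
----
{log_type="application", kubernetes_namespace_name="my-namespace"} |= "error"
----

The stream selector in braces matches only the indexed labels, while the `|=` line filter is evaluated against the stored log lines at query time, which is the deferred parsing described above.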

include::modules/loki-deployment-sizing.adoc[leveloffset=+1]

8 changes: 4 additions & 4 deletions modules/loki-rate-limit-errors.adoc
@@ -6,11 +6,11 @@
[id="loki-rate-limit-errors_{context}"]
= Troubleshooting Loki rate limit errors

-If the Log Forwarder API forwards a large block of messages to Loki that exceeds the rate limit, Loki generates rate limit (`429`) errors.
+If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (`429`) errors.

These errors can occur during normal operation. For example, when adding {ocp-logging} to a cluster that already has some logs, rate limit errors might occur while {ocp-logging} tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention.

-In cases where the rate limit errors do not resolve themselves, you can fix the issue by modifying the `LokiStack` custom resource (CR).
+In cases where the rate limit errors continue to occur, you can fix the issue by modifying the `LokiStack` custom resource (CR).

[IMPORTANT]
====
@@ -79,5 +79,5 @@ spec:
ingestionRate: 8 # <2>
# ...
----
-<1> The `ingestionBurstSize` field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit, and should be set to at least the maximum logs size expected in a single push request. No single request is permitted that is larger than the `ingestionBurstSize` value.
-<2> The `ingestionRate` field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs, and as long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention.
+<1> The `ingestionBurstSize` field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the `ingestionBurstSize` value are not permitted.
+<2> The `ingestionRate` field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention.
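The hunk above shows only the tail of the example. A fuller sketch of where these fields sit might look like the following, assuming the `spec.limits.global.ingestion` layout of the `LokiStack` CR; the resource name, namespace, and the `ingestionBurstSize` value are placeholders, not values from this commit.

[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki # placeholder resource name
  namespace: openshift-logging
spec:
  limits:
    global:
      ingestion:
        ingestionBurstSize: 16 # hard limit: maximum sample size per push request, in MB
        ingestionRate: 8 # soft limit: maximum ingested samples per second, in MB
----

The burst size is the hard per-request cap, while the rate is the sustained average that the retried traffic must eventually fit under for the errors to resolve.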
