Update from upstream repository #298
Conversation
Signed-off-by: Michel Hollands <michel.hollands@gmail.com>
This code allows us to preprocess generic logs, replacing highly variable dynamic data (timestamps, IPs, numbers, UUIDs, hex values, byte sizes and durations) with static placeholders. This makes pattern extraction easier and matching by the Drain algorithm more efficient and user-friendly. Additionally, there is logic that splits generic log lines into discrete tokens that produce better results with Drain than naively splitting on every whitespace: the tokenizer counts quotes and emits quoted strings as part of a single token, and it also handles likely JSON logs without any whitespace in them by trying to split `{"key":value}` pairs (without actually parsing the JSON). All of this is done without regular expressions and without parsing the log lines in any specific format. That is why it is very efficient in terms of CPU usage and allocations, and should handle all log formats, as well as unformatted logs, equally well.
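The approach described above can be illustrated with a minimal Go sketch. This is not the actual Loki code: the function names (`tokenize`, `placeholder`, `preprocess`), the placeholder spellings (`<NUM>`, `<IP>`, `<HEX>`), and the classification heuristics are all hypothetical, but the sketch shows the two key ideas: quote-aware tokenization and regex-free replacement of variable tokens with static placeholders.

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// tokenize splits a log line on whitespace but keeps quoted strings
// together as a single token, by tracking whether we are inside quotes.
func tokenize(line string) []string {
	var tokens []string
	var b strings.Builder
	inQuote := false
	for _, r := range line {
		switch {
		case r == '"':
			inQuote = !inQuote
			b.WriteRune(r)
		case unicode.IsSpace(r) && !inQuote:
			if b.Len() > 0 {
				tokens = append(tokens, b.String())
				b.Reset()
			}
		default:
			b.WriteRune(r)
		}
	}
	if b.Len() > 0 {
		tokens = append(tokens, b.String())
	}
	return tokens
}

// placeholder classifies highly variable tokens (numbers, IPv4 addresses,
// long hex strings) with simple character counting instead of regexes.
func placeholder(tok string) string {
	trimmed := strings.Trim(tok, "\",")
	if trimmed == "" {
		return tok
	}
	digits, hex, dots := 0, 0, 0
	for _, r := range trimmed {
		switch {
		case r >= '0' && r <= '9':
			digits++
			hex++
		case (r >= 'a' && r <= 'f') || (r >= 'A' && r <= 'F'):
			hex++
		case r == '.':
			dots++
		}
	}
	n := len(trimmed)
	switch {
	case digits == n:
		return "<NUM>"
	case digits > 0 && dots == 3 && digits+dots == n:
		return "<IP>"
	case hex == n && digits > 0 && n >= 8:
		return "<HEX>"
	}
	return tok
}

// preprocess tokenizes a line and replaces variable tokens with placeholders.
func preprocess(line string) string {
	toks := tokenize(line)
	for i, t := range toks {
		toks[i] = placeholder(t)
	}
	return strings.Join(toks, " ")
}

func main() {
	fmt.Println(preprocess(`request from 10.0.0.1 took 532 ms id=abc msg="user login failed"`))
	// → request from <IP> took <NUM> ms id=abc msg="user login failed"
}
```

Because every token is classified in a single pass over its characters, the whole preprocessing step stays allocation-light and works the same on JSON, logfmt, and completely unstructured lines.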
…MRoleName for lambda-promtail CloudFormation template (#12728)
…age (#12740) Co-authored-by: J Stickler <julie.stickler@grafana.com>
Signed-off-by: Owen Diehl <ow.diehl@gmail.com>
Signed-off-by: Michel Hollands <michel.hollands@gmail.com>
Co-authored-by: Michel Hollands <42814411+MichelHollands@users.noreply.github.com>
Followup to #12806, which exposes skipped pages more explicitly than as an error.
* refactors skip logic for bloom pages that are too large
* s/Seek/LoadOffset/ for LazyBloomIter
* removes unused code
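The "skipped rather than errored" idea can be sketched as follows. This is a hypothetical illustration, not the real `LazyBloomIter`: the `page`, `lazyIter`, and `maxSize` names are invented, but the shape shows how an iterator can record that a page was skipped for being too large instead of surfacing it as an error.

```go
package main

import "fmt"

// page is a stand-in for a bloom page; only its encoded size matters here.
type page struct{ size int }

// lazyIter iterates pages lazily; oversized pages are skipped, and that
// fact is exposed explicitly rather than reported as an error.
type lazyIter struct {
	pages   []page
	maxSize int
	cur     int
	skipped bool // true if the current page was skipped for size
}

// Next advances the iterator; it returns false when exhausted.
func (it *lazyIter) Next() bool {
	it.cur++
	if it.cur >= len(it.pages) {
		return false
	}
	it.skipped = it.pages[it.cur].size > it.maxSize
	return true
}

func main() {
	it := &lazyIter{pages: []page{{10}, {500}, {20}}, maxSize: 100, cur: -1}
	for it.Next() {
		fmt.Println(it.cur, "skipped:", it.skipped)
	}
}
```

Exposing the skip as state lets callers count or log skipped pages while still processing the rest, which an error return would prevent.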
…12807) This PR aims for full de-duplication of chunks and series in filter requests from the index gateway to the bloom gateway. Whenever we merge/de-duplicate slices, the inputs need to be sorted, but it appears that the Removals (chunks) from the v1.Output are not guaranteed to be sorted. When comparing ShortRefs, all of From, Through, and Checksum need to be used. Signed-off-by: Christian Haudum <christian.haudum@gmail.com>
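A minimal sketch of the ordering requirement described above, assuming a simplified `ShortRef` (the real Loki type uses `model.Time` fields; the `less` and `sortAndDedupe` helpers here are illustrative, not the actual implementation):

```go
package main

import (
	"fmt"
	"sort"
)

// ShortRef is a simplified stand-in for a chunk reference.
type ShortRef struct {
	From, Through int64 // timestamps, simplified to int64
	Checksum      uint32
}

// less orders refs by From, then Through, then Checksum. Using all three
// fields is what makes de-duplication of adjacent equal elements correct.
func less(a, b ShortRef) bool {
	if a.From != b.From {
		return a.From < b.From
	}
	if a.Through != b.Through {
		return a.Through < b.Through
	}
	return a.Checksum < b.Checksum
}

// sortAndDedupe sorts refs and drops exact duplicates, which after
// sorting are guaranteed to be adjacent.
func sortAndDedupe(refs []ShortRef) []ShortRef {
	sort.Slice(refs, func(i, j int) bool { return less(refs[i], refs[j]) })
	out := refs[:0]
	for _, r := range refs {
		if len(out) == 0 || r != out[len(out)-1] {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	refs := []ShortRef{{2, 3, 9}, {1, 2, 7}, {1, 2, 7}, {1, 2, 5}}
	fmt.Println(sortAndDedupe(refs))
	// → [{1 2 5} {1 2 7} {2 3 9}]
}
```

If the comparator ignored any of the three fields, refs differing only in that field would compare equal, duplicates would not be adjacent after sorting, and merge-based de-duplication would silently drop or keep the wrong entries.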
Signed-off-by: thorker <th.kerber+github@gmail.com> Co-authored-by: Michel Hollands <42814411+MichelHollands@users.noreply.github.com>
#12838) The bloom shipper uses metas to resolve available blocks. Metas are fetched from cache and, if not available there, from object storage. If fetching metas from the cache fails, e.g. with a timeout, the request should not fail, but should proceed as if no metas were available. Signed-off-by: Christian Haudum <christian.haudum@gmail.com>
Co-authored-by: J Stickler <julie.stickler@grafana.com> Co-authored-by: Michel Hollands <42814411+MichelHollands@users.noreply.github.com>
Signed-off-by: Callum Styan <callumstyan@gmail.com> Co-authored-by: J Stickler <julie.stickler@grafana.com>
We've seen a few cases where creating the ULID failed for unknown reasons, and the ID is not really used. It was only useful early on in the development for debugging. Signed-off-by: Christian Haudum <christian.haudum@gmail.com>
There is a time window between listing metas and fetching them from object storage, which can lead to a race condition where a meta is not found in object storage because it was deleted and superseded by a newer meta. This can happen when querying recent bloom data that is still subject to updates, and results in an error like this:

```
rpc error: code = Unknown desc = failed to get meta file bloom/tsdb_index_19843/XXXX/metas/18fbdc8500000000-1921d15dffffffff-270affee.json: storage: object doesn't exist (Trace ID: 4fe28d32cfa3e3df9495c3a5d4a683fb)
```

Signed-off-by: Christian Haudum <christian.haudum@gmail.com>
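One way to tolerate this list/fetch race is to treat "object doesn't exist" as a benign skip during the fetch loop, since a missing meta has by definition been superseded. A hedged sketch, with invented names (`fetchAll`, `errNotFound`), not the actual fix:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound mimics the storage backend's object-missing error.
var errNotFound = errors.New("storage: object doesn't exist")

// fetchAll fetches each listed key, skipping objects that were deleted
// between the list and the get; any other error is still propagated.
func fetchAll(keys []string, get func(string) (string, error)) ([]string, error) {
	var out []string
	for _, k := range keys {
		v, err := get(k)
		if errors.Is(err, errNotFound) {
			continue // meta was superseded and deleted after listing; skip it
		}
		if err != nil {
			return nil, err
		}
		out = append(out, v)
	}
	return out, nil
}

func main() {
	get := func(k string) (string, error) {
		if k == "gone.json" {
			return "", errNotFound
		}
		return "meta:" + k, nil
	}
	out, err := fetchAll([]string{"a.json", "gone.json", "b.json"}, get)
	fmt.Println(out, err)
}
```

The key design choice is distinguishing the expected not-found case from genuine storage failures, so transient races stop surfacing as query errors without masking real problems.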
Signed-off-by: Michel Hollands <michel.hollands@gmail.com> Co-authored-by: J Stickler <julie.stickler@grafana.com>
Signed-off-by: Christian Haudum <christian.haudum@gmail.com>
Trying to remove some cruft from traces.
Adds some timing information to pre-existing spans to help better understand bloom read path latency responsibility
…and Promtail (#12741) From https://systemd.io/NETWORK_ONLINE/:

**How do I make sure that my service starts after the network is really online?**

That depends on your setup and the services you plan to run after it (see above). If you need to delay your service until network connectivity has been established, include

```systemd
After=network-online.target
Wants=network-online.target
```

in the `.service` file. This will delay boot until the network management software says the network is “up”. For details, see the next question.

Signed-off-by: Christian Haudum <christian.haudum@gmail.com>
… other) (#12868) Signed-off-by: Michel Hollands <michel.hollands@gmail.com> Co-authored-by: Vladyslav Diachenko <82767850+vlad-diachenko@users.noreply.github.com> Co-authored-by: Michel Hollands <42814411+MichelHollands@users.noreply.github.com> Co-authored-by: Michel Hollands <michel.hollands@gmail.com>
Co-authored-by: J Stickler <julie.stickler@grafana.com>
Co-authored-by: J Stickler <julie.stickler@grafana.com>
Co-authored-by: J Stickler <julie.stickler@grafana.com>
@periklis: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: JoaoBraveCoding, periklis The full list of commands accepted by this bot can be found here. The pull request process is described here