Download the logs #409
Hi, I have logs from various microservices which are sent to a Loki server using Promtail. Now in Grafana I can see the logs from the various components.
What's a realistic limit for volume when downloading the results? 1 MB, 10 MB, 100 MB?
I have a simpler use case with this requirement. We plan to have short-lived containers run as Kubernetes jobs, logging to stdout. We then want to use some selector to return all the stdout as text for the job. Probably 1 MB is a reasonable size limit. I haven't given much thought yet to the labels. FYI, we will also be using Grafana. This use case is to mimic the current behavior we have, where users can get the full stdout for a specific job as a simple text file. It looks like the CLI could be used for this, but are there other ways?
You can definitely use the HTTP API for this.
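For example, a minimal sketch against the query_range endpoint, assuming a Loki instance at localhost:3100; the selector, time range, and limit are placeholders, and jq is used only to pull the raw log lines out of the JSON response:

```bash
# A sketch, not a verified recipe: adjust host, selector, and time range.
curl -sG "http://localhost:3100/loki/api/v1/query_range" \
  --data-urlencode 'query={job="my-k8s-job"}' \
  --data-urlencode 'start=2019-09-03T00:00:00Z' \
  --data-urlencode 'end=2019-09-03T01:00:00Z' \
  --data-urlencode 'limit=5000' \
  --data-urlencode 'direction=forward' \
  | jq -r '.data.result[].values[][1]' > job-stdout.txt
```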
Is there a way to do this, please?
I've written a small service that queries logs from Loki hourly and pushes them to S3. The details: fetching logs from Loki is sequential at the moment, but downloading, transforming, and uploading 40 MB (3.4 MB gzipped) of logs to S3 takes about 3 seconds, with Loki running on another host. I've only seen high CPU usage on Loki when I exported three months of data (on a fairly small machine running both Prometheus and Loki).
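For reference, a minimal sketch of that kind of hourly export, assuming logcli is already configured (e.g. LOKI_ADDR set), the AWS CLI is installed, and GNU date is available; the selector, bucket, and limit are placeholders:

```bash
#!/usr/bin/env bash
# Export the previous hour's logs from Loki and stream them to S3.
# Placeholders throughout: adjust selector, bucket, and limits.
set -euo pipefail

FROM=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:00:00Z)  # GNU date syntax
TO=$(date -u +%Y-%m-%dT%H:00:00Z)

logcli query '{job="my-service"}' \
  --from="$FROM" --to="$TO" \
  --limit=1000000 --output=raw --quiet \
  | gzip \
  | aws s3 cp - "s3://my-log-bucket/loki/${FROM}.log.gz"
```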
Nice one, @marcbachmann! |
Very useful feature, any update? |
Same issue in the Grafana project: grafana/grafana#28752 |
This was closed without much information, but to answer @davkal:
the result of a log query, yes
all of them, over a given time range
the same options as logcli offers in general? Raw logs at least.
It could be multiple gigabytes; the point is that one may want traces over a large period of time for statistical-significance purposes. At the moment this is more or less (as in rather "less" than "more") achievable by raising --limit, but it is still clearly outside the use case -- it would be nice to have a first-class export option. |
@Tristan971 I don't know if you already got your answer, but the attached task has an answer (probably for your problem as well). Downloading is implemented via the "inspector" in Grafana. Check this out: |
@AyrtonRicardo Unfortunately (unless that changed recently), you're limited to the number of data points loaded in the current panel. For Loki datasources that's a number of lines, which IIRC can't be increased beyond 9999 (might be lower, YMMV). So if I want to export millions of lines efficiently, I'm sort of screwed. |
logcli now supports batching requests, so you can request arbitrarily large amounts of log lines from the CLI, and it will fetch them in batches so as not to exceed the max-lines limit. If there is a desire to do something similar from the UI, an issue should be created in the Grafana repo to add batching support to the download option over there as well. |
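A sketch of what a batched CLI query could look like; the exact flags (notably --batch and its default) may vary across logcli versions, and the selector and time range are placeholders:

```bash
# Fetch up to a million lines; logcli pages through the server's
# max-lines limit in batches rather than failing on one big request.
logcli query '{namespace="prod"}' \
  --from="2020-11-01T00:00:00Z" \
  --to="2020-12-01T00:00:00Z" \
  --limit=1000000 \
  --batch=1000 \
  --output=raw > prod-logs.txt
```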
How can I download several gigabytes of query logs using logcli? |
Unfortunately, there is no good solution at the moment, which is why I raised #6840 as a follow-up... |
Alright, I wrote a shell script, using the logcli tool, that loops to download logs over a period of time. Uh... it looks like it works; I've downloaded a log stream covering the past month, about 50 GB. But it takes time. |
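For anyone wanting to do the same, a rough sketch of such a loop, assuming GNU date and a placeholder selector; splitting the range into one-day windows keeps individual queries manageable:

```bash
#!/usr/bin/env bash
# Walk a month in one-day windows so no single query gets too large.
# Selector, dates, and limits are placeholders; flags may vary by version.
set -euo pipefail

start="2022-06-01"
for day in $(seq 0 29); do
  from=$(date -u -d "$start + $day day" +%Y-%m-%dT00:00:00Z)
  to=$(date -u -d "$start + $((day + 1)) day" +%Y-%m-%dT00:00:00Z)
  logcli query '{job="my-service"}' \
    --from="$from" --to="$to" \
    --limit=10000000 --batch=5000 --output=raw --quiet \
    >> logs-month.txt
done
```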
Same here
I'm using a script like this, but since I have GBs of logs it takes an eternity :( I'm looking to use syslog or similar to collect the logs into text files (used for backup and more complex data analysis), with Loki on the side for precise queries. It would be nice to have one system instead of two... |
Is your feature request related to a problem? Please describe.
I need to download the logs from Loki. How can I do that?