
[elasticsearch/logging] Add slowlogs config option. #58086

Open
Tracked by #134169
tylersmalley opened this issue Feb 20, 2020 · 12 comments
Labels
Feature:elasticsearch, Feature:Logging, Team:Core

Comments

@tylersmalley
Contributor

tylersmalley commented Feb 20, 2020

For multiple reasons, it's helpful to log the queries Kibana performs against Elasticsearch. This is often needed for auditing or for inspecting slow queries.

Configuration would include an option to only log queries slower than a specified amount of time, producing something similar to slowlog. Setting this to 0 would essentially log everything.
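A minimal sketch of how that could surface in kibana.yml, assuming the existing elasticsearch.logQueries flag; the threshold setting name below is hypothetical and only illustrates the idea:

elasticsearch.logQueries: true
# Hypothetical threshold, not an existing setting: only log queries slower than this.
# A value of 0 would log every query.
elasticsearch.slowLogThresholdMs: 2000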

For audit purposes, it's important to associate the query with a user and an action in Kibana (e.g., which visualization, health check, or task manager task). We will need to come up with a solution to provide the ES proxy with this information so it can be included in the logs.

Identify whether a query timeout limit was hit. If so, which timeout was responsible: elasticsearch.requestTimeout, the ES search_timeout in the request body, or something else?

In 8.0 we intend to package Filebeat with the Kibana package. Doing so could allow us to easily ingest this data and enable a UI view like the one described in #51224.

@kobelb and @joshdover, I wanted to ping you both here to make sure this isn't already being done or tracked anywhere, since I could see it overlapping.

@tylersmalley added the Team:Operations label Feb 20, 2020
@elasticmachine
Contributor

Pinging @elastic/kibana-operations (Team:Operations)

@tylersmalley
Contributor Author

@epixa I don't think we created an issue after our discussion at EAH. Is there anything you want to add here or that needs correction?

@epixa
Contributor

epixa commented Feb 20, 2020

I have nothing to add

@joshdover
Member

I think there is some overlap here with #57546 (we haven't started work on that yet). In general, I'd just like to see that we're leveraging the new logging config. Maybe this means adding a new layout type for the elasticsearch logging context that is prepopulated with this more detailed information?

@restrry any thoughts?

@mshustov
Contributor

mshustov commented Feb 21, 2020

> In general, I'd just like to see that we're leveraging the new logging config. Maybe this means adding a new layout type for the elasticsearch logging context that is prepopulated with this more detailed information?

log4j allows populating logs with custom data via MDC. We use metadata in plain-object form to provide data to the logger, and then we need to render that metadata somehow. I'd expect us to ship a preconfigured appender for the elasticsearch.client context with a pattern that formats the metadata. For this, we need to extend the %meta conversion pattern syntax to support key lookups. An example of a custom pattern for ES:

[%date][%context]  user:%meta{username} url:%meta{url} transferred:%meta{bytes}  - %meta{duration}
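Wiring that pattern into a preconfigured appender might look roughly like the sketch below, following the general shape of the new logging config. The appender name, file path, and elasticsearch.client logger context are illustrative, and the %meta{key} lookup is the proposed extension rather than an existing feature:

logging:
  appenders:
    es-client-log:
      type: file
      fileName: ./logs/es_client.log
      layout:
        type: pattern
        # %meta{key} is the proposed per-key metadata lookup
        pattern: "[%date][%logger] user:%meta{username} url:%meta{url} transferred:%meta{bytes} - %meta{duration}"
  loggers:
    - name: elasticsearch.client
      level: debug
      appenders: [es-client-log]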

Another interesting question: how does the network layer access application-layer data, such as a username? We will have to extend the ElasticsearchService/KibanaRequest API, I suppose.

@spalger
Contributor

spalger commented Feb 21, 2020

I've been wondering how easy it would be to attach elasticsearch query logs to every response automatically, and to then automatically harvest them from kfetch, the http service, etc. on the front end.

@mshustov
Contributor

@spalger that's a good idea! In theory, the elasticsearch client can attach request metadata to a response header, and the elasticsearch client in the browser can log the response metadata to the console. Later, when we have the client-side logging service (#33796), we can reuse the same pattern layout on the client.

@spalger
Contributor

spalger commented Feb 25, 2020

Thanks. The thing I like about this approach is that it puts the data in the hands of the user and doesn't require storing it anywhere, so it could be enabled for certain types of users, or always on in OSS.

@mshustov
Contributor

mshustov commented Mar 3, 2020

@kobelb @jportner Does this task overlap in any way with #52125?
As I understand it, this task is about a performance audit of ES queries, while #52125 records on behalf of which user an operation was performed. Correct me if I'm wrong, please.

@jportner
Contributor

jportner commented Mar 3, 2020

That's correct, so it sounds like there's some overlap.
#52125 is a security audit log -- it's concerned with who initiated the action and what the outcome was. So:

  • We don't want any "performance audit logs" to have user data attached
  • It would be nice if the security audit log could include any additional info that becomes available (such as "which timeout was responsible?", as described above)

@jbudz
Member

jbudz commented Nov 2, 2020

Current status

Migration to the new elasticsearch client is done, and all queries are logged when logging.verbose: true and elasticsearch.logQueries are set.
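For reference, a minimal kibana.yml snippet for the output below, using exactly the two settings named above:

elasticsearch.logQueries: true
logging.verbose: true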

stdout

server    log   [06:16:46.727] [debug][data][elasticsearch][query] 200
PUT /_template/.management-beats
{"index_patterns":[".management-beats"],...

json

{"type":"log","@timestamp":"2020-11-02T06:21:04-06:00","tags":["debug","elasticsearch","data","query"],"p
id":4058,"message":"200\nPOST /_bulk?refresh=false&_source_includes=originId\n{\"update\":{\"_id\":\"task
:endpoint:user-artifact-packager:1.0.0\",\"_index\":\".kibana_task_manager\"}}\n{\"doc\":{\"task\":{\"run
At\":\"2020-11-02T12:22:04.687Z\",\"state\":\"{}\",\"attempts\":0,\"status\":\"idle\",\"startedAt\":null,
\"retryAt\":null,\"ownerId\":null,\"schedule\":{\"interval\":\"60s\"},\"taskType\":\"endpoint:user-artifa
ct-packager\",\"scope\":[\"securitySolution\"],\"params\":\"{\\\"version\\\":\\\"1.0.0\\\"}\",\"scheduled
At\":\"2020-11-02T12:16:51.317Z\"},\"updated_at\":\"2020-11-02T12:21:04.739Z\"}}\n"}

Recap from comments

Remaining for this issue, feel free to edit:

  • slow log settings
  • response time from elasticsearch
  • x-opaque-id

Do we want these logged to a separate file? Should slow log settings hold off until #57546 to avoid any configuration conflicts?
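On the separate-file question, one possible shape with the new logging config is to route the query context to a dedicated appender. In this sketch the elasticsearch.query logger name, the appender name, and the file path are assumptions for illustration:

logging:
  appenders:
    es-query-file:
      type: file
      fileName: ./logs/kibana_es_queries.log
      layout:
        type: json
  loggers:
    - name: elasticsearch.query
      level: debug
      appenders: [es-query-file]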

@joshdover added this to 7.13 - Tentative in kibana-core [DEPRECATED] Dec 2, 2020
@lukeelmers changed the title from Elasticsearch query log to [elasticsearch/logging] Add slowlogs config option. Dec 16, 2020
@lukeelmers added the Feature:elasticsearch and Team:Core labels Dec 16, 2020
@elasticmachine
Contributor

Pinging @elastic/kibana-core (Team:Core)

@lukeelmers removed the Team:Operations label Dec 16, 2020
@joshdover moved this from 7.13 - Tentative to 7.14 - Tentative in kibana-core [DEPRECATED] Feb 25, 2021