diff --git a/deploy-manage/maintenance/start-stop-services/start-stop-elasticsearch.md b/deploy-manage/maintenance/start-stop-services/start-stop-elasticsearch.md index e8205096d5..a6359ba1dc 100644 --- a/deploy-manage/maintenance/start-stop-services/start-stop-elasticsearch.md +++ b/deploy-manage/maintenance/start-stop-services/start-stop-elasticsearch.md @@ -1,7 +1,7 @@ --- mapped_urls: - - https://www.elastic.co/guide/en/{{es}}/reference/current/starting-elasticsearch.html - - https://www.elastic.co/guide/en/{{es}}/reference/current/stopping-elasticsearch.html + - https://www.elastic.co/guide/en/elasticsearch/reference/current/starting-elasticsearch.html + - https://www.elastic.co/guide/en/elasticsearch/reference/current/stopping-elasticsearch.html applies_to: deployment: self: diff --git a/deploy-manage/monitor.md b/deploy-manage/monitor.md index 4b82269337..ca6cf49903 100644 --- a/deploy-manage/monitor.md +++ b/deploy-manage/monitor.md @@ -74,6 +74,13 @@ Out of the box logs and metrics tools, including ECH preconfigured logs and metr To learn more about the health and performance tools in {{ecloud}}, refer to [](/deploy-manage/monitor/cloud-health-perf.md). +## {{kib}} task manager monitoring + +```{applies_to} +stack: preview +``` +The {{kib}} [task manager](/deploy-manage/distributed-architecture/kibana-tasks-management.md) has an internal monitoring mechanism to keep track of a variety of metrics, which can be consumed with either the health monitoring API or the {{kib}} server log. [Learn how to configure thresholds and consume health stats related to the {{kib}} task manager](/deploy-manage/monitor/kibana-task-manager-health-monitoring.md). + ## Monitoring your orchestrator ```{applies_to} deployment: @@ -81,11 +88,17 @@ deployment: eck: ``` -TODO +In addition to monitoring your cluster or deployment health and performance, you need to monitor your orchestrator. Monitoring is especially important for orchestrators hosted on infrastructure that you control.
-## Logging +Learn how to enable monitoring of your orchestrator: + +* [ECK operator metrics](/deploy-manage/monitor/orchestrators/eck-metrics-configuration.md): Open and secure a metrics endpoint that can be used to monitor the operator’s performance and health. This endpoint can be scraped by third-party Kubernetes monitoring tools. +* [ECE platform monitoring](/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md): Learn how ECE collects monitoring data for your installation in the `logging-and-metrics` deployment, and how to access monitoring data. -TODO +:::{admonition} Monitoring {{ecloud}} +Elastic monitors [{{ecloud}}](/deploy-manage/deploy/elastic-cloud.md) service metrics and performance as part of [our shared responsibility](https://www.elastic.co/cloud/shared-responsibility). We provide service availability information on our [service status page](/deploy-manage/cloud-organization/service-status.md). +::: -% * [*Elasticsearch application logging*](../../../deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md) +## Logging +You can configure several types of logs in {{stack}} that can help you to gain insight into {{stack}} operations, diagnose issues, and track certain types of events. [Learn about the types of logs available, where to find them, and how to configure them](/deploy-manage/monitor/logging-configuration.md).
diff --git a/deploy-manage/monitor/kibana-task-manager-health-monitoring.md b/deploy-manage/monitor/kibana-task-manager-health-monitoring.md index 71459d5388..cb81fe7b02 100644 --- a/deploy-manage/monitor/kibana-task-manager-health-monitoring.md +++ b/deploy-manage/monitor/kibana-task-manager-health-monitoring.md @@ -3,13 +3,9 @@ navigation_title: "Kibana task manager monitoring" mapped_pages: - https://www.elastic.co/guide/en/kibana/current/task-manager-health-monitoring.html applies_to: - deployment: - self: preview + stack: preview --- - - - # Kibana task manager health monitoring [task-manager-health-monitoring] @@ -18,7 +14,7 @@ This functionality is in technical preview and may be changed or removed in a fu :::: -The Task Manager has an internal monitoring mechanism to keep track of a variety of metrics, which can be consumed with either the health monitoring API or the {{kib}} server log. +The {{kib}} [Task Manager](/deploy-manage/distributed-architecture/kibana-tasks-management.md) has an internal monitoring mechanism to keep track of a variety of metrics, which can be consumed with either the health monitoring API or the {{kib}} server log. The health monitoring API provides a reliable endpoint that can be monitored. Consuming this endpoint doesn’t cause additional load, but rather returns the latest health checks made by the system. This design enables consumption by external monitoring services at a regular cadence without additional load to the system. @@ -59,13 +55,19 @@ xpack.task_manager.monitored_task_execution_thresholds: ## Consuming health stats [task-manager-consuming-health-stats] -The health API is best consumed by via the `/api/task_manager/_health` endpoint. +The health API is best consumed using the `/api/task_manager/_health` endpoint. 
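Where it helps, the endpoint's JSON response can be post-processed outside {{kib}}, for example by an external monitoring job. The Python sketch below parses an abbreviated, assumed response body; the top-level section names follow this page, but `status` and the nested field names are illustrative assumptions, not a documented schema:

```python
import json

# Abbreviated /api/task_manager/_health response body. The shape is assumed
# for illustration; nested field names here are hypothetical.
raw = """
{
  "id": "kibana-node-1",
  "status": "OK",
  "stats": {
    "configuration": {"value": {"poll_interval": 3000, "max_workers": 10}},
    "workload": {"value": {"count": 26}},
    "runtime": {"value": {"drift": {"p99": 310}}}
  }
}
"""

health = json.loads(raw)

# Alert if the task manager's self-evaluation is anything other than OK.
if health["status"] != "OK":
    print(f"task manager degraded: {health['status']}")

# Pull out a dynamic configuration value worth trending over time.
poll_interval = health["stats"]["configuration"]["value"]["poll_interval"]
print(f"poll interval: {poll_interval} ms")
```

Because the endpoint returns the latest cached health checks, a scraper like this can run on a fixed schedule without adding load to {{kib}}.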
Additionally, there are two ways to consume these metrics: -**Debug logging** +### Debug logging +```{applies_to} +deployment: + self: + ece: + eck: +``` -The metrics are logged in the {{kib}} `DEBUG` logger at a regular cadence. To enable Task Manager debug logging in your {{kib}} instance, add the following to your `kibana.yml`: +In self-managed deployments, you can configure health stats to be logged in the {{kib}} `DEBUG` logger at a regular cadence. To enable Task Manager debug logging in your {{kib}} instance, add the following to your `kibana.yml`: ```yaml logging: @@ -77,7 +79,7 @@ logging: These stats are logged based on the number of milliseconds set in your [`xpack.task_manager.poll_interval`](kibana://reference/configuration-reference/task-manager-settings.md#task-manager-settings) setting, which could add substantial noise to your logs. Only enable this level of logging temporarily. -**Automatic logging** +### Automatic logging By default, the health API runs at a regular cadence, and each time it runs, it attempts to self evaluate its performance. If this self evaluation yields a potential problem, a message will log to the {{kib}} server log. In addition, the health API will look at how long tasks have waited to start (from when they were scheduled to start). If this number exceeds a configurable threshold ([`xpack.task_manager.monitored_stats_health_verbose_log.warn_delayed_task_start_in_seconds`](kibana://reference/configuration-reference/task-manager-settings.md#task-manager-settings)), the same message as above will log to the {{kib}} server log. 
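For instance, the delayed-task threshold can be adjusted in `kibana.yml`. The 60-second value below is only an example to tune for your workload, and enabling the verbose health log alongside it is shown as an assumption about your setup:

```yaml
# kibana.yml — log a warning when a task waits more than 60 seconds to start
# (60 is an illustrative value, not a recommendation)
xpack.task_manager.monitored_stats_health_verbose_log.enabled: true
xpack.task_manager.monitored_stats_health_verbose_log.warn_delayed_task_start_in_seconds: 60
```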
@@ -92,9 +94,9 @@ If this message appears, set [`xpack.task_manager.monitored_stats_health_verbose ## Making sense of Task Manager health stats [making-sense-of-task-manager-health-stats] -The health monitoring API exposes three sections: `configuration`, `workload` and `runtime`: +The health monitoring API exposes the following sections: -| | | +| Section | Description | | --- | --- | | Configuration | This section summarizes the current configuration of Task Manager. This includes dynamic configurations that change over time, such as `poll_interval` and `max_workers`, which can adjust in reaction to changing load on the system. | | Workload | This section summarizes the work load across the cluster, including the tasks in the system, their types, and current status. | diff --git a/deploy-manage/monitor/logging-configuration.md b/deploy-manage/monitor/logging-configuration.md index 19ab67334b..5ce3b1cfa0 100644 --- a/deploy-manage/monitor/logging-configuration.md +++ b/deploy-manage/monitor/logging-configuration.md @@ -6,49 +6,120 @@ applies_to: eck: all self: all --- -# Logging configuration +# Logging -% What needs to be done: Write from scratch +You can configure several types of logs in {{stack}} that can help you to gain insight into {{stack}} operations, diagnose issues, and track certain types of events. -% GitHub issue: https://github.com/elastic/docs-projects/issues/350 +The following logging features are available: -⚠️ **This page is a work in progress.** ⚠️ +## For {{es}} [extra-logging-features-elasticsearch] +* **Application and component logging**: Logs messages related to running {{es}}. + + You can [configure the log level for {{es}}](/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md), and, in self-managed clusters, [configure underlying Log4j settings](/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md) to customize logging behavior. 
+* [Deprecation logging](/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md): Deprecation logs record a message to the {{es}} log directory when you use deprecated {{es}} functionality. You can use the deprecation logs to update your application before upgrading {{es}} to a new major version. +* [Audit logging](/deploy-manage/security/logging-configuration/enabling-audit-logs.md): Logs security-related events on your deployment. +* [Slow query and index logging](elasticsearch://reference/elasticsearch/index-settings/slow-log.md): Helps find and debug slow queries and indexing. -## Logging features [ECE/ECH] [extra-logging-features] +## For {{kib}} [extra-logging-features-kibana] -When shipping logs to a monitoring deployment there are more logging features available to you. These features include: +* **Application and component logging**: Logs messages related to running {{kib}}. + + You can [configure the log level for {{kib}}](/deploy-manage/monitor/logging-configuration/kibana-log-levels.md), and, in self-managed, ECE, or ECK deployments, [configure advanced settings](/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md) to customize logging behavior. +* [Audit logging](/deploy-manage/security/logging-configuration/enabling-audit-logs.md): Logs security-related events on your deployment. -### For {{es}} [extra-logging-features-elasticsearch] +## Access {{kib}} and {{es}} logs -* [Audit logging](/deploy-manage/security/logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment -* [Slow query and index logging](elasticsearch://reference/elasticsearch/index-settings/slow-log.md) - helps find and debug slow queries and indexing -* Verbose logging - helps debug stack issues by increasing component logs +The way that you access your logs differs depending on your deployment method. 
-After you’ve enabled log delivery on your deployment, you can [add the Elasticsearch user settings](/deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) to enable these features. +### Orchestrated deployments +Access your logs using one of the following options: -### For {{kib}} [extra-logging-features-kibana] +* All orchestrated deployments: [](/deploy-manage/monitor/stack-monitoring.md) +* {{ech}}: [Preconfigured logs and metrics](/deploy-manage/monitor/cloud-health-perf.md#ec-es-health-preconfigured) +* {{ece}}: [Platform monitoring](/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md) -* [Audit logging](/deploy-manage/security/logging-configuration/enabling-audit-logs.md) - logs security-related events on your deployment +### Self-managed deployments -After you’ve enabled log delivery on your deployment, you can [add the {{kib}} user settings](/deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) to enable this feature. +#### {{kib}} +If you run {{kib}} as a service, the default location of the logs varies based on your platform and installation method: -### Other components [extra-logging-features-enterprise-search] :::::::{tab-set} -Enabling log collection also supports collecting and indexing the following types of logs from other components in your deployments: +::::::{tab-item} Docker +On [Docker](/deploy-manage/deploy/self-managed/install-elasticsearch-with-docker.md), log messages go to the console and are handled by the configured Docker logging driver. To access logs, run `docker logs`. +:::::: -**APM** +::::::{tab-item} Debian (APT) and RPM +For [Debian](/deploy-manage/deploy/self-managed/install-elasticsearch-with-debian-package.md) and [RPM](/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md) installations, {{kib}} writes logs to `/var/log/kibana`.
+:::::: + +::::::{tab-item} macOS and Linux +For [macOS and Linux `.tar.gz`](/deploy-manage/deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md) installations, {{kib}} writes logs to `$KIBANA_HOME/logs`. + +Files in `$KIBANA_HOME` risk deletion during an upgrade. In production, you should configure a [different location for your logs](/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md). +:::::: + +::::::{tab-item} Windows .zip +For [Windows `.zip`](/deploy-manage/deploy/self-managed/install-elasticsearch-with-zip-on-windows.md) installations, {{kib}} writes logs to `%KIBANA_HOME%\logs`. + +Files in `%KIBANA_HOME%` risk deletion during an upgrade. In production, you should configure a [different location for your logs](/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md). +:::::: + +::::::: + +If you run {{kib}} from the command line, {{kib}} prints logs to the standard output (`stdout`). + +You can also consume logs using [stack monitoring](/deploy-manage/monitor/stack-monitoring/kibana-monitoring-self-managed.md). + +#### {{es}} + +If you run {{es}} as a service, the default location of the logs varies based on your platform and installation method: + +:::::::{tab-set} + +::::::{tab-item} Docker +On [Docker](/deploy-manage/deploy/self-managed/install-elasticsearch-with-docker.md), log messages go to the console and are handled by the configured Docker logging driver. To access logs, run `docker logs`. +:::::: + +::::::{tab-item} Debian (APT) and RPM +For [Debian](/deploy-manage/deploy/self-managed/install-elasticsearch-with-debian-package.md) and [RPM](/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md) installations, {{es}} writes logs to `/var/log/elasticsearch`. +:::::: + +::::::{tab-item} macOS and Linux +For [macOS and Linux `.tar.gz`](/deploy-manage/deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md) installations, {{es}} writes logs to `$ES_HOME/logs`.
+ +Files in `$ES_HOME` risk deletion during an upgrade. In production, we strongly recommend you set `path.logs` to a location outside of `$ES_HOME`. See [Path settings](/deploy-manage/deploy/self-managed/important-settings-configuration.md#path-settings). +:::::: + +::::::{tab-item} Windows .zip +For [Windows `.zip`](/deploy-manage/deploy/self-managed/install-elasticsearch-with-zip-on-windows.md) installations, {{es}} writes logs to `%ES_HOME%\logs`. + +Files in `%ES_HOME%` risk deletion during an upgrade. In production, we strongly recommend you set `path.logs` to a location outside of `%ES_HOME%`. See [Path settings](/deploy-manage/deploy/self-managed/important-settings-configuration.md#path-settings). +:::::: + +::::::: + +If you run {{es}} from the command line, {{es}} prints logs to the standard output (`stdout`). + +You can also consume logs using [stack monitoring](/deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md). + +## Other components [extra-logging-features-enterprise-search] + +You can also collect and index the following types of logs from other components in your deployments: + +[**APM**](/solutions/observability/apps/configure-logging.md) * `apm*.log*` -**Fleet and Elastic Agent** +[**Fleet and Elastic Agent**](/reference/ingestion-tools/fleet/monitor-elastic-agent.md) * `fleet-server-json.log-*` * `elastic-agent-json.log-*` The `*` indicates that we also index the archived files of each type of log. -Check the respective product documentation for more information about the logging capabilities of each product. \ No newline at end of file +In {{ech}} and {{ece}}, these types of logs are automatically ingested when [stack monitoring](/deploy-manage/monitor/stack-monitoring.md) is enabled.
\ No newline at end of file diff --git a/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md b/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md index d1d86b254e..6b3e5d9820 100644 --- a/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md +++ b/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md @@ -11,241 +11,38 @@ applies_to: # Elasticsearch deprecation logs [logging] -You can use {{es}}'s application logs to monitor your cluster and diagnose issues. If you run {{es}} as a service, the default location of the logs varies based on your platform and installation method: +{{es}} writes deprecation logs to the [log directory](/deploy-manage/monitor/logging-configuration.md#access-kib-and-es-logs). These logs record a message when you use deprecated {{es}} functionality. You can use the deprecation logs to update your application before upgrading {{es}} to a new major version. -:::::::{tab-set} +:::{tip} +You can also access deprecation warnings in the [upgrade assistant](/deploy-manage/upgrade/prepare-to-upgrade/upgrade-assistant.md). +::: -::::::{tab-item} Docker -On [Docker](../../deploy/self-managed/install-elasticsearch-with-docker.md), log messages go to the console and are handled by the configured Docker logging driver. To access logs, run `docker logs`. -:::::: - -::::::{tab-item} Debian (APT) -For [Debian installations](../../deploy/self-managed/install-elasticsearch-with-debian-package.md), {{es}} writes logs to `/var/log/elasticsearch`. -:::::: - -::::::{tab-item} RPM -For [RPM installations](../../deploy/self-managed/install-elasticsearch-with-rpm.md), {{es}} writes logs to `/var/log/elasticsearch`. -:::::: - -::::::{tab-item} macOS -For [macOS `.tar.gz`](../../deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md) installations, {{es}} writes logs to `$ES_HOME/logs`. - -Files in `$ES_HOME` risk deletion during an upgrade. 
In production, we strongly recommend you set `path.logs` to a location outside of `$ES_HOME`. See [Path settings](../../deploy/self-managed/important-settings-configuration.md#path-settings). -:::::: - -::::::{tab-item} Linux -For [Linux `.tar.gz`](../../deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md) installations, {{es}} writes logs to `$ES_HOME/logs`. - -Files in `$ES_HOME` risk deletion during an upgrade. In production, we strongly recommend you set `path.logs` to a location outside of `$ES_HOME`. See [Path settings](../../deploy/self-managed/important-settings-configuration.md#path-settings). -:::::: - -::::::{tab-item} Windows .zip -For [Windows `.zip`](../../deploy/self-managed/install-elasticsearch-with-zip-on-windows.md) installations, {{es}} writes logs to `%ES_HOME%\logs`. - -Files in `%ES_HOME%` risk deletion during an upgrade. In production, we strongly recommend you set `path.logs` to a location outside of `%ES_HOME%``. See [Path settings](../../deploy/self-managed/important-settings-configuration.md#path-settings). -:::::: - -::::::: -If you run {{es}} from the command line, {{es}} prints logs to the standard output (`stdout`). - - -## Logging configuration [logging-configuration] - -::::{important} -Elastic strongly recommends using the Log4j 2 configuration that is shipped by default. -:::: - - -Elasticsearch uses [Log4j 2](https://logging.apache.org/log4j/2.x/) for logging. Log4j 2 can be configured using the log4j2.properties file. Elasticsearch exposes three properties, `${sys:es.logs.base_path}`, `${sys:es.logs.cluster_name}`, and `${sys:es.logs.node_name}` that can be referenced in the configuration file to determine the location of the log files. 
The property `${sys:es.logs.base_path}` will resolve to the log directory, `${sys:es.logs.cluster_name}` will resolve to the cluster name (used as the prefix of log filenames in the default configuration), and `${sys:es.logs.node_name}` will resolve to the node name (if the node name is explicitly set). - -For example, if your log directory (`path.logs`) is `/var/log/elasticsearch` and your cluster is named `production` then `${sys:es.logs.base_path}` will resolve to `/var/log/elasticsearch` and `${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log` will resolve to `/var/log/elasticsearch/production.log`. - -```properties -####### Server JSON ############################ -appender.rolling.type = RollingFile <1> -appender.rolling.name = rolling -appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.json <2> -appender.rolling.layout.type = ECSJsonLayout <3> -appender.rolling.layout.dataset = elasticsearch.server <4> -appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.json.gz <5> -appender.rolling.policies.type = Policies -appender.rolling.policies.time.type = TimeBasedTriggeringPolicy <6> -appender.rolling.policies.time.interval = 1 <7> -appender.rolling.policies.time.modulate = true <8> -appender.rolling.policies.size.type = SizeBasedTriggeringPolicy <9> -appender.rolling.policies.size.size = 256MB <10> -appender.rolling.strategy.type = DefaultRolloverStrategy -appender.rolling.strategy.fileIndex = nomax -appender.rolling.strategy.action.type = Delete <11> -appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path} -appender.rolling.strategy.action.condition.type = IfFileName <12> -appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-* <13> -appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize <14> 
-appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB <15> -################################################ -``` - -1. Configure the `RollingFile` appender -2. Log to `/var/log/elasticsearch/production_server.json` -3. Use JSON layout. -4. `dataset` is a flag populating the `event.dataset` field in a `ECSJsonLayout`. It can be used to distinguish different types of logs more easily when parsing them. -5. Roll logs to `/var/log/elasticsearch/production-yyyy-MM-dd-i.json`; logs will be compressed on each roll and `i` will be incremented -6. Use a time-based roll policy -7. Roll logs on a daily basis -8. Align rolls on the day boundary (as opposed to rolling every twenty-four hours) -9. Using a size-based roll policy -10. Roll logs after 256 MB -11. Use a delete action when rolling logs -12. Only delete logs matching a file pattern -13. The pattern is to only delete the main logs -14. Only delete if we have accumulated too many compressed logs -15. The size condition on the compressed logs is 2 GB - - -```properties -####### Server - old style pattern ########### -appender.rolling_old.type = RollingFile -appender.rolling_old.name = rolling_old -appender.rolling_old.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.log <1> -appender.rolling_old.layout.type = PatternLayout -appender.rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n -appender.rolling_old.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.old_log.gz -``` - -1. The configuration for `old style` pattern appenders. These logs will be saved in `*.log` files and if archived will be in `* .log.gz` files. Note that these should be considered deprecated and will be removed in the future. 
- - -::::{note} -Log4j’s configuration parsing gets confused by any extraneous whitespace; if you copy and paste any Log4j settings on this page, or enter any Log4j configuration in general, be sure to trim any leading and trailing whitespace. -:::: - - -Note than you can replace `.gz` by `.zip` in `appender.rolling.filePattern` to compress the rolled logs using the zip format. If you remove the `.gz` extension then logs will not be compressed as they are rolled. - -If you want to retain log files for a specified period of time, you can use a rollover strategy with a delete action. - -```properties -appender.rolling.strategy.type = DefaultRolloverStrategy <1> -appender.rolling.strategy.action.type = Delete <2> -appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path} <3> -appender.rolling.strategy.action.condition.type = IfFileName <4> -appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-* <5> -appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified <6> -appender.rolling.strategy.action.condition.nested_condition.age = 7D <7> -``` - -1. Configure the `DefaultRolloverStrategy` -2. Configure the `Delete` action for handling rollovers -3. The base path to the Elasticsearch logs -4. The condition to apply when handling rollovers -5. Delete files from the base path matching the glob `${sys:es.logs.cluster_name}-*`; this is the glob that log files are rolled to; this is needed to only delete the rolled Elasticsearch logs but not also delete the deprecation and slow logs -6. A nested condition to apply to files matching the glob -7. Retain logs for seven days - - -Multiple configuration files can be loaded (in which case they will get merged) as long as they are named `log4j2.properties` and have the Elasticsearch config directory as an ancestor; this is useful for plugins that expose additional loggers. The logger section contains the java packages and their corresponding log level. 
The appender section contains the destinations for the logs. Extensive information on how to customize logging and all the supported appenders can be found on the [Log4j documentation](https://logging.apache.org/log4j/2.x/manual/configuration.html). - - -## Configuring logging levels [configuring-logging-levels] - -Log4J 2 log messages include a *level* field, which is one of the following (in order of increasing verbosity): - -* `FATAL` -* `ERROR` -* `WARN` -* `INFO` -* `DEBUG` -* `TRACE` - -By default {{es}} includes all messages at levels `INFO`, `WARN`, `ERROR` and `FATAL` in its logs, but filters out messages at levels `DEBUG` and `TRACE`. This is the recommended configuration. Do not filter out messages at `INFO` or higher log levels or else you may not be able to understand your cluster’s behaviour or troubleshoot common problems. Do not enable logging at levels `DEBUG` or `TRACE` unless you are following instructions elsewhere in this manual which call for more detailed logging, or you are an expert user who will be reading the {{es}} source code to determine the meaning of the logs. - -Messages are logged by a hierarchy of loggers which matches the hierarchy of Java packages and classes in the [{{es}} source code](https://github.com/elastic/elasticsearch/). Every logger has a corresponding [dynamic setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) which can be used to control the verbosity of its logs. The setting’s name is the fully-qualified name of the package or class, prefixed with `logger.`. - -You may set each logger’s verbosity to the name of a log level, for instance `DEBUG`, which means that messages from this logger at levels up to the specified one will be included in the logs. You may also use the value `OFF` to suppress all messages from the logger. 
- -For example, the `org.elasticsearch.discovery` package contains functionality related to the [discovery](../../distributed-architecture/discovery-cluster-formation/discovery-hosts-providers.md) process, and you can control the verbosity of its logs with the `logger.org.elasticsearch.discovery` setting. To enable `DEBUG` logging for this package, use the [Cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) as follows: +By default, {{es}} rolls and compresses deprecation logs at 1GB. The default configuration preserves a maximum of five log files: four rolled logs and an active log. -```console -PUT /_cluster/settings -{ - "persistent": { - "logger.org.elasticsearch.discovery": "DEBUG" - } -} -``` +{{es}} emits deprecation log messages at the `CRITICAL` level. These messages indicate that a deprecated feature in use will be removed in the next major version. Deprecation log messages at the `WARN` level indicate that a less critical feature was used; it won’t be removed in the next major version, but might be removed in the future. -To reset this package’s log verbosity to its default level, set the logger setting to `null`: +To stop writing deprecation log messages, change the logging level: ```console PUT /_cluster/settings { "persistent": { - "logger.org.elasticsearch.discovery": null + "logger.org.elasticsearch.deprecation": "OFF" } } ``` -Other ways to change log levels include: - -1. `elasticsearch.yml`: - - ```yaml - logger.org.elasticsearch.discovery: DEBUG - ``` - - This is most appropriate when debugging a problem on a single node. - -2. `log4j2.properties`: - - ```properties - logger.discovery.name = org.elasticsearch.discovery - logger.discovery.level = debug - ``` - - This is most appropriate when you already need to change your Log4j 2 configuration for other reasons. For example, you may want to send logs for a particular logger to another file. However, these use cases are rare.
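If you later want to resume deprecation logging, you can reset the logger to its default level by setting it to `null`:

```console
PUT /_cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.deprecation": null
  }
}
```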
- - -::::{important} -{{es}}'s application logs are intended for humans to read and interpret. Different versions of {{es}} may report information in these logs in different ways, perhaps adding extra detail, removing unnecessary information, formatting the same information in different ways, renaming the logger or adjusting the log level for specific messages. Do not rely on the contents of the application logs remaining precisely the same between versions. -:::: - - -::::{note} -To prevent leaking sensitive information in logs, {{es}} suppresses certain log messages by default even at the highest verbosity levels. To disable this protection on a node, set the Java system property `es.insecure_network_trace_enabled` to `true`. This feature is primarily intended for test systems which do not contain any sensitive information. If you set this property on a system which contains sensitive information, you must protect your logs from unauthorized access. -:::: - - - -## Deprecation logging [deprecation-logging] - -{{es}} also writes deprecation logs to the log directory. These logs record a message when you use deprecated {{es}} functionality. You can use the deprecation logs to update your application before upgrading {{es}} to a new major version. - -By default, {{es}} rolls and compresses deprecation logs at 1GB. The default configuration preserves a maximum of five log files: four rolled logs and an active log. - -{{es}} emits deprecation log messages at the `CRITICAL` level. Those messages are indicating that a used deprecation feature will be removed in a next major version. Deprecation log messages at the `WARN` level indicates that a less critical feature was used, it won’t be removed in next major version, but might be removed in the future. 
- -To stop writing deprecation log messages, set `logger.deprecation.level` to `OFF` in `log4j2.properties` : +Alternatively, in self-managed clusters, you can set `logger.deprecation.level` to `OFF` in `log4j2.properties`: ```properties logger.deprecation.level = OFF ``` -Alternatively, you can change the logging level dynamically: - -```console -PUT /_cluster/settings -{ - "persistent": { - "logger.org.elasticsearch.deprecation": "OFF" - } -} -``` - -Refer to [Configuring logging levels](elasticsearch-log4j-configuration-self-managed.md#configuring-logging-levels). +For more information on the available log levels, refer to [Configuring logging levels](/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md). You can identify what is triggering deprecated functionality if `X-Opaque-Id` was used as an HTTP header. The user ID is included in the `X-Opaque-ID` field in deprecation JSON logs. -```js +```json { "type": "deprecation", "timestamp": "2019-08-30T12:07:07,126+02:00", @@ -260,36 +57,9 @@ You can identify what is triggering deprecated functionality if `X-Opaque-Id` wa } ``` -Deprecation logs can be indexed into `.logs-deprecation.elasticsearch-default` data stream `cluster.deprecation_indexing.enabled` setting is set to true. +Deprecation logs can be indexed into the `.logs-deprecation.elasticsearch-default` data stream when the `cluster.deprecation_indexing.enabled` setting is set to `true`. ### Deprecation logs throttling [_deprecation_logs_throttling] -Deprecation logs are deduplicated based on a deprecated feature key and x-opaque-id so that if a feature is repeatedly used, it will not overload the deprecation logs. This applies to both indexed deprecation logs and logs emitted to log files.
You can disable the use of `x-opaque-id` in throttling by changing `cluster.deprecation_indexing.x_opaque_id_used.enabled` to false, refer to this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/RateLimitingFilter.html) for more details. - - -## JSON log format [json-logging] - -To make parsing Elasticsearch logs easier, logs are now printed in a JSON format. This is configured by a Log4J layout property `appender.rolling.layout.type = ECSJsonLayout`. This layout requires a `dataset` attribute to be set which is used to distinguish logs streams when parsing. - -```properties -appender.rolling.layout.type = ECSJsonLayout -appender.rolling.layout.dataset = elasticsearch.server -``` - -Each line contains a single JSON document with the properties configured in `ECSJsonLayout`. See this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/ESJsonLayout.html) for more details. However if a JSON document contains an exception, it will be printed over multiple lines. The first line will contain regular properties and subsequent lines will contain the stacktrace formatted as a JSON array. - -::::{note} -You can still use your own custom layout. To do that replace the line `appender.rolling.layout.type` with a different layout. 
See sample below: -:::: - - -```properties -appender.rolling.type = RollingFile -appender.rolling.name = rolling -appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.log -appender.rolling.layout.type = PatternLayout -appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %.-10000m%n -appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz -``` - +Deprecation logs are deduplicated based on a deprecated feature key and `x-opaque-id` so that if a feature is repeatedly used, it will not overload the deprecation logs. This applies to both indexed deprecation logs and logs emitted to log files. You can disable the use of `x-opaque-id` in throttling by changing `cluster.deprecation_indexing.x_opaque_id_used.enabled` to false. Refer to this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/RateLimitingFilter.html) for more details. \ No newline at end of file diff --git a/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md b/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md index 4277d3ca26..689d0c51f3 100644 --- a/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md +++ b/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md @@ -6,57 +6,20 @@ applies_to: self: all --- -# Elasticsearch log4j configuration [logging] - -You can use {{es}}'s application logs to monitor your cluster and diagnose issues. 
If you run {{es}} as a service, the default location of the logs varies based on your platform and installation method: - -:::::::{tab-set} - -::::::{tab-item} Docker -On [Docker](../../deploy/self-managed/install-elasticsearch-with-docker.md), log messages go to the console and are handled by the configured Docker logging driver. To access logs, run `docker logs`. -:::::: - -::::::{tab-item} Debian (APT) -For [Debian installations](../../deploy/self-managed/install-elasticsearch-with-debian-package.md), {{es}} writes logs to `/var/log/elasticsearch`. -:::::: - -::::::{tab-item} RPM -For [RPM installations](../../deploy/self-managed/install-elasticsearch-with-rpm.md), {{es}} writes logs to `/var/log/elasticsearch`. -:::::: - -::::::{tab-item} macOS -For [macOS `.tar.gz`](../../deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md) installations, {{es}} writes logs to `$ES_HOME/logs`. - -Files in `$ES_HOME` risk deletion during an upgrade. In production, we strongly recommend you set `path.logs` to a location outside of `$ES_HOME`. See [Path settings](../../deploy/self-managed/important-settings-configuration.md#path-settings). -:::::: - -::::::{tab-item} Linux -For [Linux `.tar.gz`](../../deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md) installations, {{es}} writes logs to `$ES_HOME/logs`. - -Files in `$ES_HOME` risk deletion during an upgrade. In production, we strongly recommend you set `path.logs` to a location outside of `$ES_HOME`. See [Path settings](../../deploy/self-managed/important-settings-configuration.md#path-settings). -:::::: - -::::::{tab-item} Windows .zip -For [Windows `.zip`](../../deploy/self-managed/install-elasticsearch-with-zip-on-windows.md) installations, {{es}} writes logs to `%ES_HOME%\logs`. - -Files in `%ES_HOME%` risk deletion during an upgrade. In production, we strongly recommend you set `path.logs` to a location outside of `%ES_HOME%``. 
See [Path settings](../../deploy/self-managed/important-settings-configuration.md#path-settings). -:::::: - -::::::: -If you run {{es}} from the command line, {{es}} prints logs to the standard output (`stdout`). - - -## Logging configuration [logging-configuration] +# {{es}} log4j configuration [logging] ::::{important} Elastic strongly recommends using the Log4j 2 configuration that is shipped by default. :::: - -Elasticsearch uses [Log4j 2](https://logging.apache.org/log4j/2.x/) for logging. Log4j 2 can be configured using the log4j2.properties file. Elasticsearch exposes three properties, `${sys:es.logs.base_path}`, `${sys:es.logs.cluster_name}`, and `${sys:es.logs.node_name}` that can be referenced in the configuration file to determine the location of the log files. The property `${sys:es.logs.base_path}` will resolve to the log directory, `${sys:es.logs.cluster_name}` will resolve to the cluster name (used as the prefix of log filenames in the default configuration), and `${sys:es.logs.node_name}` will resolve to the node name (if the node name is explicitly set). +{{es}} uses [Log4j 2](https://logging.apache.org/log4j/2.x/) for logging. Log4j 2 can be configured using the log4j2.properties file. {{es}} exposes three properties, `${sys:es.logs.base_path}`, `${sys:es.logs.cluster_name}`, and `${sys:es.logs.node_name}` that can be referenced in the configuration file to determine the location of the log files. The property `${sys:es.logs.base_path}` will resolve to the log directory, `${sys:es.logs.cluster_name}` will resolve to the cluster name (used as the prefix of log filenames in the default configuration), and `${sys:es.logs.node_name}` will resolve to the node name (if the node name is explicitly set). 
For example, if your log directory (`path.logs`) is `/var/log/elasticsearch` and your cluster is named `production` then `${sys:es.logs.base_path}` will resolve to `/var/log/elasticsearch` and `${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log` will resolve to `/var/log/elasticsearch/production.log`. +:::{tip} +To learn how to configure logging levels, refer to [](/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md). +::: + ```properties ####### Server JSON ############################ appender.rolling.type = RollingFile <1> @@ -133,148 +96,27 @@ appender.rolling.strategy.action.condition.nested_condition.age = 7D <7> 1. Configure the `DefaultRolloverStrategy` 2. Configure the `Delete` action for handling rollovers -3. The base path to the Elasticsearch logs +3. The base path to the {{es}} logs 4. The condition to apply when handling rollovers -5. Delete files from the base path matching the glob `${sys:es.logs.cluster_name}-*`; this is the glob that log files are rolled to; this is needed to only delete the rolled Elasticsearch logs but not also delete the deprecation and slow logs +5. Delete files from the base path matching the glob `${sys:es.logs.cluster_name}-*`; this is the glob that log files are rolled to; this is needed to only delete the rolled {{es}} logs but not also delete the deprecation and slow logs 6. A nested condition to apply to files matching the glob 7. Retain logs for seven days -Multiple configuration files can be loaded (in which case they will get merged) as long as they are named `log4j2.properties` and have the Elasticsearch config directory as an ancestor; this is useful for plugins that expose additional loggers. The logger section contains the java packages and their corresponding log level. The appender section contains the destinations for the logs. 
Extensive information on how to customize logging and all the supported appenders can be found on the [Log4j documentation](https://logging.apache.org/log4j/2.x/manual/configuration.html). - - -## Configuring logging levels [configuring-logging-levels] - -Log4J 2 log messages include a *level* field, which is one of the following (in order of increasing verbosity): - -* `FATAL` -* `ERROR` -* `WARN` -* `INFO` -* `DEBUG` -* `TRACE` - -By default {{es}} includes all messages at levels `INFO`, `WARN`, `ERROR` and `FATAL` in its logs, but filters out messages at levels `DEBUG` and `TRACE`. This is the recommended configuration. Do not filter out messages at `INFO` or higher log levels or else you may not be able to understand your cluster’s behaviour or troubleshoot common problems. Do not enable logging at levels `DEBUG` or `TRACE` unless you are following instructions elsewhere in this manual which call for more detailed logging, or you are an expert user who will be reading the {{es}} source code to determine the meaning of the logs. - -Messages are logged by a hierarchy of loggers which matches the hierarchy of Java packages and classes in the [{{es}} source code](https://github.com/elastic/elasticsearch/). Every logger has a corresponding [dynamic setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) which can be used to control the verbosity of its logs. The setting’s name is the fully-qualified name of the package or class, prefixed with `logger.`. - -You may set each logger’s verbosity to the name of a log level, for instance `DEBUG`, which means that messages from this logger at levels up to the specified one will be included in the logs. You may also use the value `OFF` to suppress all messages from the logger. 
- -For example, the `org.elasticsearch.discovery` package contains functionality related to the [discovery](../../distributed-architecture/discovery-cluster-formation/discovery-hosts-providers.md) process, and you can control the verbosity of its logs with the `logger.org.elasticsearch.discovery` setting. To enable `DEBUG` logging for this package, use the [Cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) as follows: - -```console -PUT /_cluster/settings -{ - "persistent": { - "logger.org.elasticsearch.discovery": "DEBUG" - } -} -``` - -To reset this package’s log verbosity to its default level, set the logger setting to `null`: - -```console -PUT /_cluster/settings -{ - "persistent": { - "logger.org.elasticsearch.discovery": null - } -} -``` - -Other ways to change log levels include: - -1. `elasticsearch.yml`: - - ```yaml - logger.org.elasticsearch.discovery: DEBUG - ``` - - This is most appropriate when debugging a problem on a single node. - -2. `log4j2.properties`: - - ```properties - logger.discovery.name = org.elasticsearch.discovery - logger.discovery.level = debug - ``` - - This is most appropriate when you already need to change your Log4j 2 configuration for other reasons. For example, you may want to send logs for a particular logger to another file. However, these use cases are rare. - - -::::{important} -{{es}}'s application logs are intended for humans to read and interpret. Different versions of {{es}} may report information in these logs in different ways, perhaps adding extra detail, removing unnecessary information, formatting the same information in different ways, renaming the logger or adjusting the log level for specific messages. Do not rely on the contents of the application logs remaining precisely the same between versions. 
-:::: - - -::::{note} -To prevent leaking sensitive information in logs, {{es}} suppresses certain log messages by default even at the highest verbosity levels. To disable this protection on a node, set the Java system property `es.insecure_network_trace_enabled` to `true`. This feature is primarily intended for test systems which do not contain any sensitive information. If you set this property on a system which contains sensitive information, you must protect your logs from unauthorized access. -:::: - - - -## Deprecation logging [deprecation-logging] - -{{es}} also writes deprecation logs to the log directory. These logs record a message when you use deprecated {{es}} functionality. You can use the deprecation logs to update your application before upgrading {{es}} to a new major version. - -By default, {{es}} rolls and compresses deprecation logs at 1GB. The default configuration preserves a maximum of five log files: four rolled logs and an active log. - -{{es}} emits deprecation log messages at the `CRITICAL` level. Those messages are indicating that a used deprecation feature will be removed in a next major version. Deprecation log messages at the `WARN` level indicates that a less critical feature was used, it won’t be removed in next major version, but might be removed in the future. - -To stop writing deprecation log messages, set `logger.deprecation.level` to `OFF` in `log4j2.properties` : - -```properties -logger.deprecation.level = OFF -``` - -Alternatively, you can change the logging level dynamically: - -```console -PUT /_cluster/settings -{ - "persistent": { - "logger.org.elasticsearch.deprecation": "OFF" - } -} -``` - -Refer to [Configuring logging levels](#configuring-logging-levels). - -You can identify what is triggering deprecated functionality if `X-Opaque-Id` was used as an HTTP header. The user ID is included in the `X-Opaque-ID` field in deprecation JSON logs. 
- -```js -{ - "type": "deprecation", - "timestamp": "2019-08-30T12:07:07,126+02:00", - "level": "WARN", - "component": "o.e.d.r.a.a.i.RestCreateIndexAction", - "cluster.name": "distribution_run", - "node.name": "node-0", - "message": "[types removal] Using include_type_name in create index requests is deprecated. The parameter will be removed in the next major version.", - "x-opaque-id": "MY_USER_ID", - "cluster.uuid": "Aq-c-PAeQiK3tfBYtig9Bw", - "node.id": "D7fUYfnfTLa2D7y-xw6tZg" -} -``` - -Deprecation logs can be indexed into `.logs-deprecation.elasticsearch-default` data stream `cluster.deprecation_indexing.enabled` setting is set to true. - - -### Deprecation logs throttling [_deprecation_logs_throttling] - -Deprecation logs are deduplicated based on a deprecated feature key and x-opaque-id so that if a feature is repeatedly used, it will not overload the deprecation logs. This applies to both indexed deprecation logs and logs emitted to log files. You can disable the use of `x-opaque-id` in throttling by changing `cluster.deprecation_indexing.x_opaque_id_used.enabled` to false, refer to this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/RateLimitingFilter.html) for more details. +Multiple configuration files can be loaded (in which case they will get merged) as long as they are named `log4j2.properties` and have the {{es}} config directory as an ancestor; this is useful for plugins that expose additional loggers. The logger section contains the java packages and their corresponding log level. The appender section contains the destinations for the logs. Extensive information on how to customize logging and all the supported appenders can be found on the [Log4j documentation](https://logging.apache.org/log4j/2.x/manual/configuration.html). ## JSON log format [json-logging] -To make parsing Elasticsearch logs easier, logs are now printed in a JSON format. 
This is configured by a Log4J layout property `appender.rolling.layout.type = ECSJsonLayout`. This layout requires a `dataset` attribute to be set which is used to distinguish logs streams when parsing. +To make parsing {{es}} logs easier, logs are now printed in a JSON format. This is configured by a Log4J layout property `appender.rolling.layout.type = ECSJsonLayout`. This layout requires a `dataset` attribute to be set which is used to distinguish logs streams when parsing. ```properties appender.rolling.layout.type = ECSJsonLayout appender.rolling.layout.dataset = elasticsearch.server ``` +Each line contains a single JSON document with the properties configured in `ECSJsonLayout`. See this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/ESJsonLayout.html) for more details. -Each line contains a single JSON document with the properties configured in `ECSJsonLayout`. See this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/ESJsonLayout.html) for more details. However if a JSON document contains an exception, it will be printed over multiple lines. The first line will contain regular properties and subsequent lines will contain the stacktrace formatted as a JSON array. +If a JSON document contains an exception, it will be printed over multiple lines. The first line will contain regular properties and subsequent lines will contain the stacktrace formatted as a JSON array. ::::{note} You can still use your own custom layout. To do that replace the line `appender.rolling.layout.type` with a different layout. 
See sample below: diff --git a/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md b/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md new file mode 100644 index 0000000000..0a703fce57 --- /dev/null +++ b/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md @@ -0,0 +1,418 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/kibana/current/_cli_configuration.html +applies_to: + deployment: + self: + ece: + eck: +--- + +# Advanced {{kib}} logging settings + +You do not need to configure any additional settings to use the logging features in {{kib}}. Logging is enabled by default, and will log at info level using the `pattern` layout, which outputs logs to `stdout`. + +If you are planning to ingest your logs using {{es}} or another tool, we recommend using the `json` layout, which produces logs in ECS format. In general, `pattern` layout is recommended when raw logs will be read by a human, and `json` layout when logs will be read by a machine. + +:::{note} +You can't configure these settings in an {{ech}} deployment. +::: + +The {{kib}} logging system has three main components: *loggers*, *appenders* and *layouts*. + +* **Loggers** define what logging settings should be applied to a particular logger. +* [Appenders](#logging-appenders) define where log messages are displayed (for example, stdout or console) and stored (for example, file on the disk). +* [Layouts](#logging-layouts) define how log messages are formatted and what type of information they include. + + +These components allow us to log messages according to message type and level, to control how these messages are formatted and where the final logs will be displayed or stored. 
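For example, the three components come together in a `kibana.yml` entry along these lines (a minimal sketch; the appender name `console_appender` is arbitrary):

```yaml
logging:
  appenders:
    console_appender:    # appender: where log messages go
      type: console
      layout:            # layout: how each record is formatted
        type: pattern
  loggers:
    - name: http.server.response   # logger: which messages get this configuration
      appenders: [console_appender]
      level: debug
```

The logger entry selects which records are affected, the appender determines where they are written, and the layout controls their format.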
+
+* [Log level](#log-level)
+* [Layouts](#logging-layouts)
+* [Logger hierarchy](#logger-hierarchy)
+
+:::{tip}
+For additional information about the available logging settings, refer to the [{{kib}} configuration reference](kibana://reference/configuration-reference/logging-settings.md).
+:::
+
+## Log level [log-level]
+
+{{kib}} logging supports the following log levels: `off`, `fatal`, `error`, `warn`, `info`, `debug`, `trace`, `all`.
+
+Levels are ordered, so `off` > `fatal` > `error` > `warn` > `info` > `debug` > `trace` > `all`.
+
+A log record is written by a logger if the record's level is equal to or higher than the level configured for that logger; otherwise, the record is ignored. For example, if an API call's output is logged at the `info` level and its parameters are logged at `debug`, then with a global logging configuration of `debug` in `kibana.yml`, both the output *and* the parameters are logged. If the log level is set to `info`, the `debug` records are ignored, meaning that you'll only get a record for the API output and *not* for the parameters.
+
+Logging set at a plugin level is always respected, regardless of the `root` logger level. In other words, if the `root` logger is set to `fatal` and `pluginA` logging is set to `debug`, `debug` logs are shown only for `pluginA`, while other loggers report only on `fatal`.
+
+The `all` and `off` levels can only be used in configuration, and are handy shortcuts that allow you to log every log record or disable logging entirely for a specific logger. These levels can also be specified using [CLI arguments](#logging-cli-migration).
+
+
+## Layouts [logging-layouts]
+
+Every appender should know exactly how to format log messages before they are written to the console or file on the disk. This behavior is controlled by the layouts and configured through the `appender.layout` configuration property for every custom appender. Currently there is no default layout for custom appenders, so you must always choose one explicitly.
+
+There are two types of layout supported at the moment: [`pattern`](#pattern-layout) and [`json`](#json-layout).
+
+### Pattern layout [pattern-layout]
+
+With the `pattern` layout, it’s possible to define a string pattern with special placeholders `%conversion_pattern` that will be replaced with data from the actual log message. By default, the following pattern is used: `[%date][%level][%logger] %message`.
+
+::::{note}
+The `pattern` layout uses a subset of the [log4j 2 pattern syntax](https://logging.apache.org/log4j/2.x/manual/layouts.html#PatternLayout) and doesn’t implement all `log4j 2` capabilities.
+::::
+
+The following conversions are provided out of the box:
+
+* **level**: Outputs the [level](/deploy-manage/monitor/logging-configuration/kibana-log-levels.md) of the logging event. Example of `%level` output: `TRACE`, `DEBUG`, `INFO`.
+
+* **logger**: Outputs the name of the logger that published the logging event. Example of `%logger` output: `server`, `server.http`, `server.http.kibana`.
+
+* **message**: Outputs the application-supplied message associated with the logging event.
+
+* **meta**: Outputs the entries of the `meta` object data in **json** format, if one is present in the event. Example of `%meta` output:
+
+  ```bash
+  // Meta{from: 'v7', to: 'v8'}
+  '{"from":"v7","to":"v8"}'
+  // Meta empty object
+  '{}'
+  // no Meta provided
+  ''
+  ```
+
+$$$date-format$$$
+* **date**: Outputs the date of the logging event. The date conversion specifier may be followed by a set of braces containing the name of a predefined date format and a canonical timezone name. The timezone name is expected to be one from the [TZ database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). The timezone defaults to the host timezone when not explicitly specified.
Example of `%date` output: + + $$$date-conversion-pattern-examples$$$ + + | Conversion pattern | Example | + | --- | --- | + | `%date` | `2012-02-01T14:30:22.011Z` uses `ISO8601` format by default | + | `%date{{ISO8601}}` | `2012-02-01T14:30:22.011Z` | + | `%date{{ISO8601_TZ}}` | `2012-02-01T09:30:22.011-05:00` `ISO8601` with timezone | + | `%date{{ISO8601_TZ}}{America/Los_Angeles}` | `2012-02-01T06:30:22.011-08:00` | + | `%date{{ABSOLUTE}}` | `09:30:22.011` | + | `%date{{ABSOLUTE}}{America/Los_Angeles}` | `06:30:22.011` | + | `%date{{UNIX}}` | `1328106622` | + | `%date{{UNIX_MILLIS}}` | `1328106622011` | + +* **pid**: Outputs the process ID. + +The pattern layout also offers a `highlight` option that allows you to highlight some parts of the log message with different colors. Highlighting is quite handy if log messages are forwarded to a terminal with color support. + + +### JSON layout [json-layout] + +With `json` layout log messages will be formatted as JSON strings in [ECS format](ecs://reference/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself. + + +## Logger hierarchy [logger-hierarchy] + +Every logger has a unique name that follows a hierarchical naming rule. The logger is considered to be an ancestor of another logger if its name followed by a `.` is a prefix of the descendant logger. For example, a logger named `a.b` is an ancestor of logger `a.b.c`. All top-level loggers are descendants of a special `root` logger at the top of the logger hierarchy. The `root` logger always exists, is fully configured and logs to `info` level by default. The `root` logger must also be configured if any other logging configuration is specified in your `kibana.yml`. + +You can configure *[log level](/deploy-manage/monitor/logging-configuration/kibana-log-levels.md)* and *appenders* for a specific logger. 
If a logger only has a *log level* configured, then the *appenders* configuration applied to the logger is inherited from the ancestor logger, up to the `root` logger.
+
+::::{note}
+In the current implementation we *don’t support* so-called *appender additivity*, in which log messages are forwarded to *every* distinct appender within the ancestor chain, including `root`. That means that log messages are only forwarded to appenders that are configured for a particular logger. If a logger doesn’t have any appenders configured, the configuration of that particular logger will be inherited from its closest ancestor.
+::::
+
+### Dedicated loggers [dedicated-loggers]
+
+#### Root
+
+The `root` logger has a dedicated configuration node since this logger is special and should always exist. By default, `root` is configured with the `info` level and the `default` appender, which is also always available. This is the configuration that all custom loggers will use unless they’re re-configured explicitly.
+
+For example, to see *all* log messages that fall back on the `root` logger configuration, just add one line to the configuration:
+
+```yaml
+logging.root.level: all
+```
+
+Or disable logging entirely with `off`:
+
+```yaml
+logging.root.level: off
+```
+
+#### Metrics logs
+
+The `metrics.ops` logger is configured with the `debug` level and will automatically output sample system and process information at a regular interval.
The metrics that are logged are a subset of the data collected and are formatted in the log message as follows: + +| Ops formatted log property | Location in metrics service | Log units | +| --- | --- | --- | +| memory | process.memory.heap.used_in_bytes | [depends on the value](http://numeraljs.com/#format), typically MB or GB | +| uptime | process.uptime_in_millis | HH:mm:ss | +| load | os.load | [ "load for the last 1 min" "load for the last 5 min" "load for the last 15 min"] | +| delay | process.event_loop_delay | ms | + +The log interval is the same as the interval at which system and process information is refreshed and is configurable under `ops.interval`: + +```yaml +ops.interval: 5000 +``` + +The minimum interval is 100ms and defaults to 5000ms. + + +#### Request and response logs [request-response-logger] + +The `http.server.response` logger is configured with `debug` level and will automatically output data about http requests and responses occurring on the {{kib}} server. The message contains some high-level information, and the corresponding log meta contains the following: + +| Meta property | Description | Format | +| --- | --- | --- | +| client.ip | IP address of the requesting client | ip | +| http.request.method | http verb for the request (uppercase) | string | +| http.request.mime_type | (optional) mime as specified in the headers | string | +| http.request.referrer | (optional) referrer | string | +| http.request.headers | request headers | object | +| http.response.body.bytes | (optional) Calculated response payload size in bytes | number | +| http.response.status_code | status code returned | number | +| http.response.headers | response headers | object | +| http.response.responseTime | (optional) Calculated response time in ms | number | +| url.path | request path | string | +| url.query | (optional) request query string | string | +| user_agent.original | raw user-agent string provided in request headers | string | + + +## Appenders 
[logging-appenders]
+
+
+### Rolling file appender [rolling-file-appender]
+
+Similar to Log4j’s `RollingFileAppender`, this appender will log into a file, and rotate it following a rolling strategy when the configured policy triggers.
+
+
+#### Triggering policies [_triggering_policies]
+
+The triggering policy determines when a rollover should occur.
+
+There are currently two policies supported: `size-limit` and `time-interval`.
+
+##### Size-limit triggering policy [size-limit-triggering-policy]
+
+This policy will rotate the file when it reaches a predetermined size.
+
+```yaml
+logging:
+  appenders:
+    rolling-file:
+      type: rolling-file
+      fileName: /var/logs/kibana.log
+      policy:
+        type: size-limit
+        size: 50mb
+      strategy:
+        # ...
+      layout:
+        type: pattern
+```
+
+The options are:
+
+* `size`: The maximum size the log file should reach before a rollover should be performed. The default value is `100mb`.
+
+
+##### Time-interval triggering policy [time-interval-triggering-policy]
+
+This policy will rotate the file every given interval of time.
+
+```yaml
+logging:
+  appenders:
+    rolling-file:
+      type: rolling-file
+      fileName: /var/logs/kibana.log
+      policy:
+        type: time-interval
+        interval: 10s
+        modulate: true
+      strategy:
+        # ...
+      layout:
+        type: pattern
+```
+
+The options are:
+
+* `interval`: How often a rollover should occur. The default value is `24h`.
+
+* `modulate`: Whether the interval should be adjusted to cause the next rollover to occur on the interval boundary.
+
+  For example, if `modulate` is `true`, the interval is `4h`, and the current hour is 3 am, then the first rollover will occur at 4 am, and the next ones at 8 am, noon, 4 pm, and so on. The default value is `true`.
+
+
+#### Rolling strategies [_rolling_strategies]
+
+The rolling strategy determines how the rollover should occur: both the naming of the rolled files, and their retention policy.
+
+There is currently one strategy supported: `numeric`.
+
+**Numeric rolling strategy**
+
+This strategy will suffix the file with a given pattern when rolling, and will retain a fixed number of rolled files.
+
+```yaml
+logging:
+  appenders:
+    rolling-file:
+      type: rolling-file
+      fileName: /var/logs/kibana.log
+      policy:
+        # ...
+      strategy:
+        type: numeric
+        pattern: '-%i'
+        max: 2
+      layout:
+        type: pattern
+```
+
+For example, with this configuration:
+
+* During the first rollover `kibana.log` is renamed to `kibana-1.log`. A new `kibana.log` file is created and starts being written to.
+* During the second rollover `kibana-1.log` is renamed to `kibana-2.log` and `kibana.log` is renamed to `kibana-1.log`. A new `kibana.log` file is created and starts being written to.
+* During the third and subsequent rollovers, `kibana-2.log` is deleted, `kibana-1.log` is renamed to `kibana-2.log` and `kibana.log` is renamed to `kibana-1.log`. A new `kibana.log` file is created and starts being written to.
+
+The options are:
+
+* `pattern`: The suffix to append to the file path when rolling. Must include `%i`, as this is the value that will be converted to the file index.
+
+  For example, with `fileName: /var/logs/kibana.log` and `pattern: '-%i'`, the rolling files created will be `/var/logs/kibana-1.log`, `/var/logs/kibana-2.log`, and so on. The default value is `-%i`.
+
+* `max`: The maximum number of files to keep. Once this number is reached, the oldest files will be deleted. The default value is `7`.
+
+
+### Rewrite appender [rewrite-appender]
+
+::::{warning}
+This appender is currently considered experimental and is not intended for public consumption. The API is subject to change at any time.
+::::
+
+
+Similar to log4j’s `RewriteAppender`, this appender serves as a sort of middleware, modifying the provided log events before passing them along to another appender.
+
+```yaml
+logging:
+  appenders:
+    my-rewrite-appender:
+      type: rewrite
+      appenders: [console, file] # name of "destination" appender(s)
+      policy:
+        # ...
+```
+
+The most common use case for the `RewriteAppender` is when you want to filter or censor sensitive data that may be contained in a log entry. In fact, with a default configuration, {{kib}} will automatically redact any `authorization`, `cookie`, or `set-cookie` headers when logging HTTP requests and responses.
+
+To configure additional rewrite rules, you’ll need to specify a [`RewritePolicy`](#rewrite-policies).
+
+
+#### Rewrite policies [rewrite-policies]
+
+Rewrite policies exist to indicate which parts of a log record can be modified within the rewrite appender.
+
+##### Meta
+
+The `meta` rewrite policy can read and modify any data contained in the `LogMeta` before passing it along to a destination appender.
+
+Meta policies must specify one of the following modes, which indicates the action to perform on the configured properties:
+
+* `update` updates an existing property at the provided `path`.
+* `remove` removes an existing property at the provided `path`.
+
+The `properties` are listed as a `path` and `value` pair, where `path` is the dot-delimited path to the target property in the `LogMeta` object, and `value` is the value to add or update in that target property. When using the `remove` mode, a `value` is not necessary.
+
+Here’s an example of how you would replace any `cookie` header values with `[REDACTED]`:
+
+```yaml
+logging:
+  appenders:
+    my-rewrite-appender:
+      type: rewrite
+      appenders: [console]
+      policy:
+        type: meta # indicates that we want to rewrite the LogMeta
+        mode: update # will update an existing property only
+        properties:
+          - path: "http.request.headers.cookie" # path to property
+            value: "[REDACTED]" # value to replace at path
+```
+
+Rewrite appenders can even be passed to other rewrite appenders to apply multiple filter policies/modes, as long as it doesn’t create a circular reference. Each rewrite appender is applied sequentially (one after the other).
+ +```yaml +logging: + appenders: + remove-request-headers: + type: rewrite + appenders: [censor-response-headers] # redirect to the next rewrite appender + policy: + type: meta + mode: remove + properties: + - path: "http.request.headers" # remove all request headers + censor-response-headers: + type: rewrite + appenders: [console] # output to console + policy: + type: meta + mode: update + properties: + - path: "http.response.headers.set-cookie" + value: "[REDACTED]" +``` + + +##### Rewrite appender configuration example [_complete_example_for_rewrite_appender] + +```yaml +logging: + appenders: + custom_console: + type: console + layout: + type: pattern + highlight: true + pattern: "[%date][%level][%logger] %message %meta" + file: + type: file + fileName: ./kibana.log + layout: + type: json + censor: + type: rewrite + appenders: [custom_console, file] + policy: + type: meta + mode: update + properties: + - path: "http.request.headers.cookie" + value: "[REDACTED]" + loggers: + - name: http.server.response + appenders: [censor] # pass these logs to our rewrite appender + level: debug +``` + +## Logging configuration using the CLI [logging-cli-migration] + +You can specify your logging configuration using the CLI. For convenience, the `--verbose` and `--silent` flags exist as shortcuts and will continue to be supported beyond v7. + +If you wish to override these flags, you can always do so by passing your preferred logging configuration directly to the CLI. 
For example, with the following configuration: + +```yaml +logging: + appenders: + custom: + type: console + layout: + type: pattern + pattern: "[%date][%level] %message" + root: + level: warn + appenders: [custom] +``` + +you can override the root logging level using the following flags: + +| Legacy logging | {{kib}} platform logging | CLI shortcuts | +| --- | --- | --- | +| --verbose | --logging.root.level=debug | --verbose | +| --silent | --logging.root.level=off | --silent | diff --git a/deploy-manage/monitor/logging-configuration/kibana-log-levels.md b/deploy-manage/monitor/logging-configuration/kibana-log-levels.md new file mode 100644 index 0000000000..68968f55c2 --- /dev/null +++ b/deploy-manage/monitor/logging-configuration/kibana-log-levels.md @@ -0,0 +1,20 @@ +--- +applies_to: + deployment: + self: + eck: + ece: + ess: +--- + +# Set global log levels for {{kib}} + +{{kib}} logging supports the following log levels: `off`, `fatal`, `error`, `warn`, `info`, `debug`, `trace`, `all`. + +Levels are ordered, so `off` > `fatal` > `error` > `warn` > `info` > `debug` > `trace` > `all`. + +A record will be logged by the logger if its level is higher than or equal to the level of its logger. For example: If the output of an API call is configured to log at the `info` level and the parameters passed to the API call are set to `debug`, with a global logging configuration in `kibana.yml` set to `debug`, both the output *and* parameters are logged. If the log level is set to `info`, the debug logs are ignored, meaning that you’ll only get a record for the API output and *not* for the parameters. + +To set the log level, add the `logging.root.level` setting to `kibana.yml`, specifying the log level that you want. `logging.root.level` defaults to `info`. 
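For example, the following `kibana.yml` fragment (illustrative; revert to `info` when you finish troubleshooting) raises the global log level:

```yaml
# kibana.yml — raise the global log level for troubleshooting
logging.root.level: debug
```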
+ +In a self-managed cluster, these levels can also be specified using [CLI arguments](/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md#logging-cli-migration), and different log levels can be set for various loggers. [Learn more](/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md). \ No newline at end of file diff --git a/deploy-manage/monitor/logging-configuration/kibana-logging-cli-configuration.md b/deploy-manage/monitor/logging-configuration/kibana-logging-cli-configuration.md deleted file mode 100644 index 2662239479..0000000000 --- a/deploy-manage/monitor/logging-configuration/kibana-logging-cli-configuration.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/kibana/current/_cli_configuration.html -applies_to: - deployment: - self: all ---- - -# Cli configuration [_cli_configuration] - - -## Logging configuration via CLI [logging-cli-migration] - -As is the case for any of Kibana’s config settings, you can specify your logging configuration via the CLI. For convenience, the `--verbose` and `--silent` flags exist as shortcuts and will continue to be supported beyond v7. - -If you wish to override these flags, you can always do so by passing your preferred logging configuration directly to the CLI. 
For example, with the following configuration: - -```yaml -logging: - appenders: - custom: - type: console - layout: - type: pattern - pattern: "[%date][%level] %message" - root: - level: warn - appenders: [custom] -``` - -you can override the root logging level with: - -| legacy logging | {{kib}} Platform logging | cli shortcuts | -| --- | --- | --- | -| --verbose | --logging.root.level=debug | --verbose | -| --silent | --logging.root.level=off | --silent | - diff --git a/deploy-manage/monitor/logging-configuration/kibana-logging.md b/deploy-manage/monitor/logging-configuration/kibana-logging.md index b8508cd616..6780693d45 100644 --- a/deploy-manage/monitor/logging-configuration/kibana-logging.md +++ b/deploy-manage/monitor/logging-configuration/kibana-logging.md @@ -3,398 +3,73 @@ mapped_pages: - https://www.elastic.co/guide/en/kibana/current/logging-configuration.html applies_to: deployment: - self: all + self: + eck: + ece: + ess: --- -% this might not be valid for all deployment types. needs review. -% certain topics like LEVELS are valid for all deployment types, but not all - # Kibana logging [logging-configuration] -The {{kib}} logging system has three main components: *loggers*, *appenders* and *layouts*. These components allow us to log messages according to message type and level, to control how these messages are formatted and where the final logs will be displayed or stored. - -* [Loggers, Appenders and Layouts](#loggers-appenders-layout) -* [Log level](#log-level) -* [Layouts](#logging-layouts) -* [Logger hierarchy](#logger-hierarchy) - - -## Loggers, Appenders and Layouts [loggers-appenders-layout] - -*Loggers* define what logging settings should be applied to a particular logger. - -*[Appenders](#logging-appenders)* define where log messages are displayed (eg. stdout or console) and stored (eg. file on the disk). - -*[Layouts](#logging-layouts)* define how log messages are formatted and what type of information they include. 
- - -## Log level [log-level] - -Currently we support the following log levels: *off*, *fatal*, *error*, *warn*, *info*, *debug*, *trace*, *all*. - -Levels are ordered, so *off* > *fatal* > *error* > *warn* > *info* > *debug* > *trace* > *all*. - -A log record will be logged by the logger if its level is higher than or equal to the level of its logger. For example: If the output of an API call is configured to log at the `info` level and the parameters passed to the API call are set to `debug`, with a global logging configuration in `kibana.yml` set to `debug`, both the output *and* parameters are logged. If the log level is set to `info`, the debug logs are ignored, meaning that you’ll only get a record for the API output and *not* for the parameters. - -Logging set at a plugin level is always respected, regardless of the `root` logger level. In other words, if root logger is set to fatal and pluginA logging is set to `debug`, debug logs are only shown for pluginA, with other logs only reporting on `fatal`. - -The *all* and *off* levels can only be used in configuration and are handy shortcuts that allow you to log every log record or disable logging entirely for a specific logger. These levels can also be specified using [cli arguments](kibana-logging-cli-configuration.md#logging-cli-migration). - - -## Layouts [logging-layouts] - -Every appender should know exactly how to format log messages before they are written to the console or file on the disk. This behavior is controlled by the layouts and configured through `appender.layout` configuration property for every custom appender. Currently we don’t define any default layout for the custom appenders, so one should always make the choice explicitly. - -There are two types of layout supported at the moment: [`pattern`](#pattern-layout) and [`json`](#json-layout). 
- - -### Pattern layout [pattern-layout] - -With `pattern` layout it’s possible to define a string pattern with special placeholders `%conversion_pattern` that will be replaced with data from the actual log message. By default the following pattern is used: `[%date][%level][%logger] %message`. - -::::{note} -The `pattern` layout uses a sub-set of [log4j 2 pattern syntax](https://logging.apache.org/log4j/2.x/manual/layouts.html#PatternLayout) and **doesn’t implement** all `log4j 2` capabilities. -:::: - - -The conversions that are provided out of the box are: - -**level** Outputs the [level](#log-level) of the logging event. Example of `%level` output: `TRACE`, `DEBUG`, `INFO`. - -**logger** Outputs the name of the logger that published the logging event. Example of `%logger` output: `server`, `server.http`, `server.http.kibana`. - -**message** Outputs the application supplied message associated with the logging event. - -**meta*** Outputs the entries of `meta` object data in ***json** format, if one is present in the event. Example of `%meta` output: - -```bash -// Meta{from: 'v7', to: 'v8'} -'{"from":"v7","to":"v8"}' -// Meta empty object -'{}' -// no Meta provided -'' -``` - -$$$date-format$$$ -**date** Outputs the date of the logging event. The date conversion specifier may be followed by a set of braces containing a name of predefined date format and canonical timezone name. Timezone name is expected to be one from [TZ database name](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Timezone defaults to the host timezone when not explicitly specified. 
Example of `%date` output: - -$$$date-conversion-pattern-examples$$$ - -| Conversion pattern | Example | -| --- | --- | -| `%date` | `2012-02-01T14:30:22.011Z` uses `ISO8601` format by default | -| `%date{{ISO8601}}` | `2012-02-01T14:30:22.011Z` | -| `%date{{ISO8601_TZ}}` | `2012-02-01T09:30:22.011-05:00` `ISO8601` with timezone | -| `%date{{ISO8601_TZ}}{America/Los_Angeles}` | `2012-02-01T06:30:22.011-08:00` | -| `%date{{ABSOLUTE}}` | `09:30:22.011` | -| `%date{{ABSOLUTE}}{America/Los_Angeles}` | `06:30:22.011` | -| `%date{{UNIX}}` | `1328106622` | -| `%date{{UNIX_MILLIS}}` | `1328106622011` | - -**pid** Outputs the process ID. - -The pattern layout also offers a `highlight` option that allows you to highlight some parts of the log message with different colors. Highlighting is quite handy if log messages are forwarded to a terminal with color support. - - -### JSON layout [json-layout] - -With `json` layout log messages will be formatted as JSON strings in [ECS format](ecs://reference/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself. - - -## Logger hierarchy [logger-hierarchy] - -Every logger has a unique name that follows a hierarchical naming rule. The logger is considered to be an ancestor of another logger if its name followed by a `.` is a prefix of the descendant logger. For example, a logger named `a.b` is an ancestor of logger `a.b.c`. All top-level loggers are descendants of a special `root` logger at the top of the logger hierarchy. The `root` logger always exists, is fully configured and logs to `info` level by default. The `root` logger must also be configured if any other logging configuration is specified in your `kibana.yml`. - -You can configure *[log level](#log-level)* and *appenders* for a specific logger. 
If a logger only has a *log level* configured, then the *appenders* configuration applied to the logger is inherited from the ancestor logger, up to the `root` logger. - -::::{note} -In the current implementation we *don’t support* so called *appender additivity* when log messages are forwarded to *every* distinct appender within the ancestor chain including `root`. That means that log messages are only forwarded to appenders that are configured for a particular logger. If a logger doesn’t have any appenders configured, the configuration of that particular logger will be inherited from its closest ancestor. -:::: - - - -#### Dedicated loggers [dedicated-loggers] - -**Root** - -The `root` logger has a dedicated configuration node since this logger is special and should always exist. By default `root` is configured with `info` level and `default` appender that is also always available. This is the configuration that all custom loggers will use unless they’re re-configured explicitly. - -For example to see *all* log messages that fall back on the `root` logger configuration, just add one line to the configuration: - -```yaml -logging.root.level: all -``` - -Or disable logging entirely with `off`: - -```yaml -logging.root.level: off -``` - -**Metrics Logs** - -The `metrics.ops` logger is configured with `debug` level and will automatically output sample system and process information at a regular interval. 
The metrics that are logged are a subset of the data collected and are formatted in the log message as follows: - -| Ops formatted log property | Location in metrics service | Log units | -| --- | --- | --- | -| memory | process.memory.heap.used_in_bytes | [depends on the value](http://numeraljs.com/#format), typically MB or GB | -| uptime | process.uptime_in_millis | HH:mm:ss | -| load | os.load | [ "load for the last 1 min" "load for the last 5 min" "load for the last 15 min"] | -| delay | process.event_loop_delay | ms | - -The log interval is the same as the interval at which system and process information is refreshed and is configurable under `ops.interval`: - -```yaml -ops.interval: 5000 -``` - -The minimum interval is 100ms and defaults to 5000ms. - -$$$request-response-logger$$$ -**Request and Response Logs** - -The `http.server.response` logger is configured with `debug` level and will automatically output data about http requests and responses occurring on the {{kib}} server. The message contains some high-level information, and the corresponding log meta contains the following: - -| Meta property | Description | Format | -| --- | --- | --- | -| client.ip | IP address of the requesting client | ip | -| http.request.method | http verb for the request (uppercase) | string | -| http.request.mime_type | (optional) mime as specified in the headers | string | -| http.request.referrer | (optional) referrer | string | -| http.request.headers | request headers | object | -| http.response.body.bytes | (optional) Calculated response payload size in bytes | number | -| http.response.status_code | status code returned | number | -| http.response.headers | response headers | object | -| http.response.responseTime | (optional) Calculated response time in ms | number | -| url.path | request path | string | -| url.query | (optional) request query string | string | -| user_agent.original | raw user-agent string provided in request headers | string | - - -## Appenders 
[logging-appenders] - - -### Rolling File Appender [rolling-file-appender] - -Similar to Log4j’s `RollingFileAppender`, this appender will log into a file, and rotate it following a rolling strategy when the configured policy triggers. - - -##### Triggering Policies [_triggering_policies] - -The triggering policy determines when a rollover should occur. - -There are currently two policies supported: `size-limit` and `time-interval`. - -$$$size-limit-triggering-policy$$$ -**Size-limit triggering policy** - -This policy will rotate the file when it reaches a predetermined size. - -```yaml -logging: - appenders: - rolling-file: - type: rolling-file - fileName: /var/logs/kibana.log - policy: - type: size-limit - size: 50mb - strategy: - //... - layout: - type: pattern -``` - -The options are: - -* `size` - -The maximum size the log file should reach before a rollover should be performed. The default value is `100mb` - +$$$pattern-layout$$$ $$$time-interval-triggering-policy$$$ -**Time-interval triggering policy** - -This policy will rotate the file every given interval of time. - -```yaml -logging: - appenders: - rolling-file: - type: rolling-file - fileName: /var/logs/kibana.log - policy: - type: time-interval - interval: 10s - modulate: true - strategy: - //... - layout: - type: pattern -``` - -The options are: - -* `interval` - -How often a rollover should occur. The default value is `24h` - -* `modulate` - -Whether the interval should be adjusted to cause the next rollover to occur on the interval boundary. - -For example, if modulate is true and the interval is `4h`, if the current hour is 3 am then the first rollover will occur at 4 am and then next ones will occur at 8 am, noon, 4pm, etc. The default value is `true`. - - -#### Rolling strategies [_rolling_strategies] - -The rolling strategy determines how the rollover should occur: both the naming of the rolled files, and their retention policy. - -There is currently one strategy supported: `numeric`. 
- -**Numeric rolling strategy** - -This strategy will suffix the file with a given pattern when rolling, and will retains a fixed amount of rolled files. - -```yaml -logging: - appenders: - rolling-file: - type: rolling-file - fileName: /var/logs/kibana.log - policy: - // ... - strategy: - type: numeric - pattern: '-%i' - max: 2 - layout: - type: pattern -``` - -For example, with this configuration: - -* During the first rollover kibana.log is renamed to kibana-1.log. A new kibana.log file is created and starts being written to. -* During the second rollover kibana-1.log is renamed to kibana-2.log and kibana.log is renamed to kibana-1.log. A new kibana.log file is created and starts being written to. -* During the third and subsequent rollovers, kibana-2.log is deleted, kibana-1.log is renamed to kibana-2.log and kibana.log is renamed to kibana-1.log. A new kibana.log file is created and starts being written to. - -The options are: - -* `pattern` - -The suffix to append to the file path when rolling. Must include `%i`, as this is the value that will be converted to the file index. - -For example, with `fileName: /var/logs/kibana.log` and `pattern: '-%i'`, the rolling files created will be `/var/logs/kibana-1.log`, `/var/logs/kibana-2.log`, and so on. The default value is `-%i` - -* `max` - -The maximum number of files to keep. Once this number is reached, oldest files will be deleted. The default value is `7` +$$$size-limit-triggering-policy$$$ +$$$logging-appenders$$$ +$$$dedicated-loggers$$$ +You do not need to configure any additional settings to use the logging features in Kibana. Logging is enabled by default. -### Rewrite appender [rewrite-appender] +In all deployment types, you might want to change the log level for {{kib}}. In a self-managed, ECE, or ECK deployment, you might want to further customize your logging settings to define where log messages are displayed, stored, and formatted, or provide granular settings for different loggers. 
-::::{warning} -This appender is currently considered experimental and is not intended for public consumption. The API is subject to change at any time. -:::: +* [](/deploy-manage/monitor/logging-configuration/kibana-log-levels.md) +* [](/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md) +You can also configure [{{kib}} task manager health monitoring](/deploy-manage/monitor/kibana-task-manager-health-monitoring.md) using logging settings. -Similar to log4j’s `RewriteAppender`, this appender serves as a sort of middleware, modifying the provided log events before passing them along to another appender. +:::{tip} +For additional information about the available logging settings, refer to the [{{kib}} configuration reference](kibana://reference/configuration-reference/logging-settings.md). +::: -```yaml -logging: - appenders: - my-rewrite-appender: - type: rewrite - appenders: [console, file] # name of "destination" appender(s) - policy: - # ... -``` +## Access {{kib}} logs -The most common use case for the `RewriteAppender` is when you want to filter or censor sensitive data that may be contained in a log entry. In fact, with a default configuration, {{kib}} will automatically redact any `authorization`, `cookie`, or `set-cookie` headers when logging http requests & responses. +The way that you access your logs differs depending on your deployment method. -To configure additional rewrite rules, you’ll need to specify a [`RewritePolicy`](#rewrite-policies). 
+
+### Orchestrated deployments
+Access your logs using one of the following options:
+
-##### Rewrite policies [rewrite-policies]
+* All orchestrated deployments: [](/deploy-manage/monitor/stack-monitoring.md)
+* {{ech}}: [Preconfigured logs and metrics](/deploy-manage/monitor/cloud-health-perf.md#ec-es-health-preconfigured)
+* {{ece}}: [Platform monitoring](/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md)
+
-Rewrite policies exist to indicate which parts of a log record can be modified within the rewrite appender.
+### Self-managed deployments
+
-**Meta**
+If you run {{kib}} as a service, the default location of the logs varies based on your platform and installation method:
+
-The `meta` rewrite policy can read and modify any data contained in the `LogMeta` before passing it along to a destination appender.
+:::::::{tab-set}
+
-Meta policies must specify one of three modes, which indicate which action to perform on the configured properties: - `update` updates an existing property at the provided `path`. - `remove` removes an existing property at the provided `path`.
+::::::{tab-item} Docker
+On [Docker](/deploy-manage/deploy/self-managed/install-elasticsearch-with-docker.md), log messages go to the console and are handled by the configured Docker logging driver. To access logs, run `docker logs`.
+::::::
+
-The `properties` are listed as a `path` and `value` pair, where `path` is the dot-delimited path to the target property in the `LogMeta` object, and `value` is the value to add or update in that target property. When using the `remove` mode, a `value` is not necessary.
+::::::{tab-item} Debian (APT) and RPM
+For [Debian](/deploy-manage/deploy/self-managed/install-elasticsearch-with-debian-package.md) and [RPM](/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md) installations, {{kib}} writes logs to `/var/log/kibana`.
+::::::
+
-Here’s an example of how you would replace any `cookie` header values with `[REDACTED]`:
+::::::{tab-item} macOS and Linux
+For [macOS and Linux `.tar.gz`](/deploy-manage/deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md) installations, {{kib}} writes logs to `$KIBANA_HOME/logs`.
+
-```yaml
-logging:
-  appenders:
-    my-rewrite-appender:
-      type: rewrite
-      appenders: [console]
-      policy:
-        type: meta # indicates that we want to rewrite the LogMeta
-        mode: update # will update an existing property only
-        properties:
-          - path: "http.request.headers.cookie" # path to property
-            value: "[REDACTED]" # value to replace at path
-```
+Files in `$KIBANA_HOME` risk deletion during an upgrade. In production, you should configure a [different location for your logs](/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md).
+::::::
+
-Rewrite appenders can even be passed to other rewrite appenders to apply multiple filter policies/modes, as long as it doesn’t create a circular reference. Each rewrite appender is applied sequentially (one after the other).
+::::::{tab-item} Windows .zip
+For [Windows `.zip`](/deploy-manage/deploy/self-managed/install-elasticsearch-with-zip-on-windows.md) installations, {{kib}} writes logs to `%KIBANA_HOME%\logs`.
+
+Files in `%KIBANA_HOME%` risk deletion during an upgrade. In production, you should configure a [different location for your logs](/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md).
+::::::
+:::::::
+
-##### Complete Example For Rewrite Appender [_complete_example_for_rewrite_appender]
+If you run {{kib}} from the command line, {{kib}} prints logs to the standard output (`stdout`).
+
-```yaml
-logging:
-  appenders:
-    custom_console:
-      type: console
-      layout:
-        type: pattern
-        highlight: true
-        pattern: "[%date][%level][%logger] %message %meta"
-    file:
-      type: file
-      fileName: ./kibana.log
-      layout:
-        type: json
-    censor:
-      type: rewrite
-      appenders: [custom_console, file]
-      policy:
-        type: meta
-        mode: update
-        properties:
-          - path: "http.request.headers.cookie"
-            value: "[REDACTED]"
-  loggers:
-    - name: http.server.response
-      appenders: [censor] # pass these logs to our rewrite appender
-      level: debug
-```
+You can also consume logs using [stack monitoring](/deploy-manage/monitor/stack-monitoring/kibana-monitoring-self-managed.md).
diff --git a/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md b/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md
index 92b3909817..df67b0780c 100644
--- a/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md
+++ b/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md
@@ -11,134 +11,10 @@ applies_to:

 # Update Elasticsearch logging levels [logging]

-You can use {{es}}'s application logs to monitor your cluster and diagnose issues. If you run {{es}} as a service, the default location of the logs varies based on your platform and installation method:
+$$$deprecation-logging$$$
+$$$_deprecation_logs_throttling$$$

-:::::::{tab-set}
-
-::::::{tab-item} Docker
-On [Docker](../../deploy/self-managed/install-elasticsearch-with-docker.md), log messages go to the console and are handled by the configured Docker logging driver. To access logs, run `docker logs`.
-:::::: - -::::::{tab-item} Debian (APT) and RPM -For [Debian](../../deploy/self-managed/install-elasticsearch-with-debian-package.md) and [RPM](../../deploy/self-managed/install-elasticsearch-with-rpm.md) installations, {{es}} writes logs to `/var/log/elasticsearch`. -:::::: - -::::::{tab-item} macOS and Linux -For [macOS and Linux `.tar.gz`](../../deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md) installations, {{es}} writes logs to `$ES_HOME/logs`. - -Files in `$ES_HOME` risk deletion during an upgrade. In production, we strongly recommend you set `path.logs` to a location outside of `$ES_HOME`. See [Path settings](../../deploy/self-managed/important-settings-configuration.md#path-settings). -:::::: - -::::::{tab-item} Windows .zip -For [Windows `.zip`](../../deploy/self-managed/install-elasticsearch-with-zip-on-windows.md) installations, {{es}} writes logs to `%ES_HOME%\logs`. - -Files in `%ES_HOME%` risk deletion during an upgrade. In production, we strongly recommend you set `path.logs` to a location outside of `%ES_HOME%``. See [Path settings](../../deploy/self-managed/important-settings-configuration.md#path-settings). -:::::: - -::::::: -If you run {{es}} from the command line, {{es}} prints logs to the standard output (`stdout`). - - -## Logging configuration [logging-configuration] - -::::{important} -Elastic strongly recommends using the Log4j 2 configuration that is shipped by default. -:::: - - -Elasticsearch uses [Log4j 2](https://logging.apache.org/log4j/2.x/) for logging. Log4j 2 can be configured using the log4j2.properties file. Elasticsearch exposes three properties, `${sys:es.logs.base_path}`, `${sys:es.logs.cluster_name}`, and `${sys:es.logs.node_name}` that can be referenced in the configuration file to determine the location of the log files. 
The property `${sys:es.logs.base_path}` will resolve to the log directory, `${sys:es.logs.cluster_name}` will resolve to the cluster name (used as the prefix of log filenames in the default configuration), and `${sys:es.logs.node_name}` will resolve to the node name (if the node name is explicitly set). - -For example, if your log directory (`path.logs`) is `/var/log/elasticsearch` and your cluster is named `production` then `${sys:es.logs.base_path}` will resolve to `/var/log/elasticsearch` and `${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log` will resolve to `/var/log/elasticsearch/production.log`. - -```properties -####### Server JSON ############################ -appender.rolling.type = RollingFile <1> -appender.rolling.name = rolling -appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.json <2> -appender.rolling.layout.type = ECSJsonLayout <3> -appender.rolling.layout.dataset = elasticsearch.server <4> -appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.json.gz <5> -appender.rolling.policies.type = Policies -appender.rolling.policies.time.type = TimeBasedTriggeringPolicy <6> -appender.rolling.policies.time.interval = 1 <7> -appender.rolling.policies.time.modulate = true <8> -appender.rolling.policies.size.type = SizeBasedTriggeringPolicy <9> -appender.rolling.policies.size.size = 256MB <10> -appender.rolling.strategy.type = DefaultRolloverStrategy -appender.rolling.strategy.fileIndex = nomax -appender.rolling.strategy.action.type = Delete <11> -appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path} -appender.rolling.strategy.action.condition.type = IfFileName <12> -appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-* <13> -appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize <14> 
-appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB <15> -################################################ -``` - -1. Configure the `RollingFile` appender -2. Log to `/var/log/elasticsearch/production_server.json` -3. Use JSON layout. -4. `dataset` is a flag populating the `event.dataset` field in a `ECSJsonLayout`. It can be used to distinguish different types of logs more easily when parsing them. -5. Roll logs to `/var/log/elasticsearch/production-yyyy-MM-dd-i.json`; logs will be compressed on each roll and `i` will be incremented -6. Use a time-based roll policy -7. Roll logs on a daily basis -8. Align rolls on the day boundary (as opposed to rolling every twenty-four hours) -9. Using a size-based roll policy -10. Roll logs after 256 MB -11. Use a delete action when rolling logs -12. Only delete logs matching a file pattern -13. The pattern is to only delete the main logs -14. Only delete if we have accumulated too many compressed logs -15. The size condition on the compressed logs is 2 GB - - -```properties -####### Server - old style pattern ########### -appender.rolling_old.type = RollingFile -appender.rolling_old.name = rolling_old -appender.rolling_old.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.log <1> -appender.rolling_old.layout.type = PatternLayout -appender.rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n -appender.rolling_old.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.old_log.gz -``` - -1. The configuration for `old style` pattern appenders. These logs will be saved in `*.log` files and if archived will be in `* .log.gz` files. Note that these should be considered deprecated and will be removed in the future. 
- - -::::{note} -Log4j’s configuration parsing gets confused by any extraneous whitespace; if you copy and paste any Log4j settings on this page, or enter any Log4j configuration in general, be sure to trim any leading and trailing whitespace. -:::: - - -Note than you can replace `.gz` by `.zip` in `appender.rolling.filePattern` to compress the rolled logs using the zip format. If you remove the `.gz` extension then logs will not be compressed as they are rolled. - -If you want to retain log files for a specified period of time, you can use a rollover strategy with a delete action. - -```properties -appender.rolling.strategy.type = DefaultRolloverStrategy <1> -appender.rolling.strategy.action.type = Delete <2> -appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path} <3> -appender.rolling.strategy.action.condition.type = IfFileName <4> -appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-* <5> -appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified <6> -appender.rolling.strategy.action.condition.nested_condition.age = 7D <7> -``` - -1. Configure the `DefaultRolloverStrategy` -2. Configure the `Delete` action for handling rollovers -3. The base path to the Elasticsearch logs -4. The condition to apply when handling rollovers -5. Delete files from the base path matching the glob `${sys:es.logs.cluster_name}-*`; this is the glob that log files are rolled to; this is needed to only delete the rolled Elasticsearch logs but not also delete the deprecation and slow logs -6. A nested condition to apply to files matching the glob -7. Retain logs for seven days - - -Multiple configuration files can be loaded (in which case they will get merged) as long as they are named `log4j2.properties` and have the Elasticsearch config directory as an ancestor; this is useful for plugins that expose additional loggers. The logger section contains the java packages and their corresponding log level. 
The appender section contains the destinations for the logs. Extensive information on how to customize logging and all the supported appenders can be found on the [Log4j documentation](https://logging.apache.org/log4j/2.x/manual/configuration.html). - - -## Configuring logging levels [configuring-logging-levels] - -Log4J 2 log messages include a *level* field, which is one of the following (in order of increasing verbosity): +Log4j 2 log messages include a *level* field, which is one of the following (in order of increasing verbosity): * `FATAL` * `ERROR` @@ -147,11 +23,15 @@ Log4J 2 log messages include a *level* field, which is one of the following (in * `DEBUG` * `TRACE` -By default {{es}} includes all messages at levels `INFO`, `WARN`, `ERROR` and `FATAL` in its logs, but filters out messages at levels `DEBUG` and `TRACE`. This is the recommended configuration. Do not filter out messages at `INFO` or higher log levels or else you may not be able to understand your cluster’s behaviour or troubleshoot common problems. Do not enable logging at levels `DEBUG` or `TRACE` unless you are following instructions elsewhere in this manual which call for more detailed logging, or you are an expert user who will be reading the {{es}} source code to determine the meaning of the logs. +By default, {{es}} includes all messages at levels `INFO`, `WARN`, `ERROR` and `FATAL` in its logs, but filters out messages at levels `DEBUG` and `TRACE`. This is the recommended configuration. + +Do not filter out messages at `INFO` or higher log levels, or else you may not be able to understand your cluster’s behavior or troubleshoot common problems. + +Do not enable logging at levels `DEBUG` or `TRACE` unless you are following instructions elsewhere in this manual which call for more detailed logging, or you are an expert user who will be reading the {{es}} source code to determine the meaning of the logs. 
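+
+If you do temporarily enable `DEBUG` or `TRACE` logging while troubleshooting, remember to return to the default configuration afterwards. As an illustrative sketch (the `org.elasticsearch.discovery` logger is used here only as an example), you can reset a logger to its default level by setting its cluster setting to `null`:
+
+```console
+PUT /_cluster/settings
+{
+  "persistent": {
+    "logger.org.elasticsearch.discovery": null
+  }
+}
+```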
Messages are logged by a hierarchy of loggers which matches the hierarchy of Java packages and classes in the [{{es}} source code](https://github.com/elastic/elasticsearch/). Every logger has a corresponding [dynamic setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) which can be used to control the verbosity of its logs. The setting’s name is the fully-qualified name of the package or class, prefixed with `logger.`. -You may set each logger’s verbosity to the name of a log level, for instance `DEBUG`, which means that messages from this logger at levels up to the specified one will be included in the logs. You may also use the value `OFF` to suppress all messages from the logger. +You can set each logger’s verbosity to the name of a log level, for instance `DEBUG`, which means that messages from this logger at levels up to the specified one will be included in the logs. You can also use the value `OFF` to suppress all messages from the logger. For example, the `org.elasticsearch.discovery` package contains functionality related to the [discovery](../../distributed-architecture/discovery-cluster-formation/discovery-hosts-providers.md) process, and you can control the verbosity of its logs with the `logger.org.elasticsearch.discovery` setting. To enable `DEBUG` logging for this package, use the [Cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) as follows: @@ -185,7 +65,7 @@ Other ways to change log levels include: This is most appropriate when debugging a problem on a single node. -2. `log4j2.properties`: +2. `log4j2.properties` (self-managed clusters only): ```properties logger.discovery.name = org.elasticsearch.discovery @@ -196,90 +76,9 @@ Other ways to change log levels include: ::::{important} -{{es}}'s application logs are intended for humans to read and interpret. 
Different versions of {{es}} may report information in these logs in different ways, perhaps adding extra detail, removing unnecessary information, formatting the same information in different ways, renaming the logger or adjusting the log level for specific messages. Do not rely on the contents of the application logs remaining precisely the same between versions.
+{{es}}'s application logs are intended for humans to read and interpret. Different versions of {{es}} might report information in these logs in different ways. For example, they might add extra detail, remove unnecessary information, format the same information in different ways, rename the logger, or adjust the log level for specific messages. Do not rely on the contents of the application logs remaining exactly the same between versions.
 ::::

-
 ::::{note}
-To prevent leaking sensitive information in logs, {{es}} suppresses certain log messages by default even at the highest verbosity levels. To disable this protection on a node, set the Java system property `es.insecure_network_trace_enabled` to `true`. This feature is primarily intended for test systems which do not contain any sensitive information. If you set this property on a system which contains sensitive information, you must protect your logs from unauthorized access.
+To prevent leaking sensitive information in logs, {{es}} suppresses certain log messages by default even at the highest verbosity levels. To disable this protection on a node, set the Java system property `es.insecure_network_trace_enabled` to `true`. This feature is primarily intended for test systems that do not contain any sensitive information. If you set this property on a system that contains sensitive information, you must protect your logs from unauthorized access.
 ::::
-
-
-
-## Deprecation logging [deprecation-logging]
-
-{{es}} also writes deprecation logs to the log directory. These logs record a message when you use deprecated {{es}} functionality. 
You can use the deprecation logs to update your application before upgrading {{es}} to a new major version. - -By default, {{es}} rolls and compresses deprecation logs at 1GB. The default configuration preserves a maximum of five log files: four rolled logs and an active log. - -{{es}} emits deprecation log messages at the `CRITICAL` level. Those messages are indicating that a used deprecation feature will be removed in a next major version. Deprecation log messages at the `WARN` level indicates that a less critical feature was used, it won’t be removed in next major version, but might be removed in the future. - -To stop writing deprecation log messages, set `logger.deprecation.level` to `OFF` in `log4j2.properties` : - -```properties -logger.deprecation.level = OFF -``` - -Alternatively, you can change the logging level dynamically: - -```console -PUT /_cluster/settings -{ - "persistent": { - "logger.org.elasticsearch.deprecation": "OFF" - } -} -``` - -Refer to [Configuring logging levels](elasticsearch-log4j-configuration-self-managed.md#configuring-logging-levels). - -You can identify what is triggering deprecated functionality if `X-Opaque-Id` was used as an HTTP header. The user ID is included in the `X-Opaque-ID` field in deprecation JSON logs. - -```js -{ - "type": "deprecation", - "timestamp": "2019-08-30T12:07:07,126+02:00", - "level": "WARN", - "component": "o.e.d.r.a.a.i.RestCreateIndexAction", - "cluster.name": "distribution_run", - "node.name": "node-0", - "message": "[types removal] Using include_type_name in create index requests is deprecated. The parameter will be removed in the next major version.", - "x-opaque-id": "MY_USER_ID", - "cluster.uuid": "Aq-c-PAeQiK3tfBYtig9Bw", - "node.id": "D7fUYfnfTLa2D7y-xw6tZg" -} -``` - -Deprecation logs can be indexed into `.logs-deprecation.elasticsearch-default` data stream `cluster.deprecation_indexing.enabled` setting is set to true. 
- - -### Deprecation logs throttling [_deprecation_logs_throttling] - -Deprecation logs are deduplicated based on a deprecated feature key and x-opaque-id so that if a feature is repeatedly used, it will not overload the deprecation logs. This applies to both indexed deprecation logs and logs emitted to log files. You can disable the use of `x-opaque-id` in throttling by changing `cluster.deprecation_indexing.x_opaque_id_used.enabled` to false, refer to this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/RateLimitingFilter.html) for more details. - - -## JSON log format [json-logging] - -To make parsing Elasticsearch logs easier, logs are now printed in a JSON format. This is configured by a Log4J layout property `appender.rolling.layout.type = ECSJsonLayout`. This layout requires a `dataset` attribute to be set which is used to distinguish logs streams when parsing. - -```properties -appender.rolling.layout.type = ECSJsonLayout -appender.rolling.layout.dataset = elasticsearch.server -``` - -Each line contains a single JSON document with the properties configured in `ECSJsonLayout`. See this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/ESJsonLayout.html) for more details. However if a JSON document contains an exception, it will be printed over multiple lines. The first line will contain regular properties and subsequent lines will contain the stacktrace formatted as a JSON array. - -::::{note} -You can still use your own custom layout. To do that replace the line `appender.rolling.layout.type` with a different layout. 
See sample below: -:::: - - -```properties -appender.rolling.type = RollingFile -appender.rolling.name = rolling -appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.log -appender.rolling.layout.type = PatternLayout -appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %.-10000m%n -appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz -``` - diff --git a/deploy-manage/monitor/orchestrators.md b/deploy-manage/monitor/orchestrators.md index d5fe00adbe..977d05cba7 100644 --- a/deploy-manage/monitor/orchestrators.md +++ b/deploy-manage/monitor/orchestrators.md @@ -7,10 +7,17 @@ applies_to: # Monitoring orchestrators -% What needs to be done: Write from scratch +Your [orchestrator](/deploy-manage/deploy.md#about-orchestration) is an important part of your Elastic architecture. It automates the deployment and management of multiple Elastic clusters, handling tasks like scaling, upgrades, and monitoring. Like your cluster or deployment, you need to monitor your orchestrator to ensure that it is healthy and performant. Monitoring is especially important for orchestrators hosted on infrastructure that you control. -% GitHub issue: https://github.com/elastic/docs-projects/issues/350 +In this section, you'll learn how to enable monitoring of your orchestrator. -% Scope notes: Landing page to monitoring orchestrators (not deployments) +* [ECK operator metrics](/deploy-manage/monitor/orchestrators/eck-metrics-configuration.md): Open and secure a metrics endpoint that can be used to monitor the operator’s performance and health. This endpoint can be scraped by third-party Kubernetes monitoring tools. 
+* [ECE platform monitoring](/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md): Learn about how {{ece}} collects monitoring data for your installation in the `logging-and-metrics` deployment, and how to access monitoring data.

-⚠️ **This page is a work in progress.** ⚠️
\ No newline at end of file
+:::{admonition} Monitoring {{ecloud}}
+Elastic monitors {{ecloud}} service metrics and performance as part of [our shared responsibility](https://www.elastic.co/cloud/shared-responsibility). We provide service availability information on our [service status page](/deploy-manage/cloud-organization/service-status.md).
+:::
+
+:::{note}
+Orchestrator monitoring can sometimes augment cluster or deployment monitoring, but doesn't replace it. For information about monitoring your cluster or deployment, refer to [](/deploy-manage/monitor.md).
+:::
\ No newline at end of file
diff --git a/deploy-manage/monitor/orchestrators/ece-monitoring-ece-access.md b/deploy-manage/monitor/orchestrators/ece-monitoring-ece-access.md
index 75cc639feb..fe854796ff 100644
--- a/deploy-manage/monitor/orchestrators/ece-monitoring-ece-access.md
+++ b/deploy-manage/monitor/orchestrators/ece-monitoring-ece-access.md
@@ -6,52 +6,47 @@ applies_to:
   ece: all
 ---

-# Access logs and metrics [ece-monitoring-ece-access]
+# Platform monitoring deployment logs and metrics [ece-monitoring-ece-access]

-To access logs and metrics for your deployment:
-
-1. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md).
-2. From the **Deployments** page, select your deployment.
-
-    Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters.
-
-3. 
On the **Elasticsearch** page, the following logs and metrics are available: - - Elasticsearch logs - : Detailed logs related to cluster state +On the **{{es}}** page, the following logs and metrics are available: - Elasticsearch metrics - : Detailed metrics for CPU and memory usage, running processes, networking and file system performance, and more +| Log or metric | Description | +| --- | --- | +| {{es}} logs | Detailed logs related to cluster state. | +| {{es}} metrics | Detailed metrics for CPU and memory usage, running processes, networking and file system performance, and more. | +| Proxy logs | Search and indexing requests that proxies have sent to the {{es}} cluster. | - Proxy logs - : Search and indexing requests that proxies have sent to the Elasticsearch cluster +If a {{kib}} instance exists for the deployment, the following {{kib}} logs and metrics are also available from the **{{kib}}** page: - If a Kibana instance exists for the deployment, the following Kibana logs and metrics are also available from the **Kibana** page: +| Log or metric | Description | +| --- | --- | +| {{kib}} logs | Detailed logs related to instance state. | +| {{kib}} metrics | Detailed metrics for CPU and memory usage, running processes, networking and file system performance, and more. | +| Proxy logs | Requests that proxies have sent to the {{kib}} instance. | - Kibana logs - : Detailed logs related to instance state +## Access logs and metrics - Kibana metrics - : Detailed metrics for CPU and memory usage, running processes, networking and file system performance, and more +To access logs and metrics for your deployment: - Proxy logs - : Requests that proxies have sent to the Kibana instance +1. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). +2. From the **Deployments** page, select your deployment. + Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. 
-::::{tip}
-If you have a license from 2018 or earlier, you might receive a warning that your cluster license is about to expire. Don’t panic, it isn’t really. Elastic Cloud Enterprise manages the cluster licenses so that you don’t have to. In rare cases, such as when a cluster is overloaded, it can take longer for Elastic Cloud Enterprise to reapply the cluster license. If you have a license from 2019 and later, you’ll receive a warning only when your full platform license is about to expire, which you’ll need to renew.
-::::
+3. Depending on the logs or metrics that you want to access, go to the **{{es}}** or **{{kib}}** page.
+    ::::{tip}
+    If you have a license from 2018 or earlier, you might receive a warning that your cluster license is about to expire. Don’t panic, it isn’t really. Elastic Cloud Enterprise manages the cluster licenses so that you don’t have to. In rare cases, such as when a cluster is overloaded, it can take longer for Elastic Cloud Enterprise to reapply the cluster license. If you have a license from 2019 and later, you’ll receive a warning only when your full platform license is about to expire, which you’ll need to renew.
+    ::::

-1. Select one of the links and log in with the `elastic` user. If you do not know the password, you can [reset it first](../../users-roles/cluster-or-deployment-auth/built-in-users.md).
+4. Select one of the links and log in with the `elastic` user of the `logging-and-metrics` deployment. If you do not know the password, you can [reset it first](../../users-roles/cluster-or-deployment-auth/built-in-users.md).

    ::::{tip}
    The password you specify must be for the `elastic` user on the `logging-and-metrics` cluster, where the logging and metrics indices are collected. If you need to reset the password for the user, make sure you reset for the `logging-and-metrics` cluster. 
    ::::

-    After you select one of the links, Kibana opens and shows you a view of the monitoring metrics for the logs or metrics that you selected.
+    After you select one of the links, {{kib}} opens and shows you a view of the monitoring metrics for the logs or metrics that you selected.

 If you are looking for an {{es}} or {{kib}} diagnostic to share with Elastic support, go to the **Operations** page for the deployment and download the diagnostic bundle to attach to your ticket. If logs or an ECE diagnostic are requested by Elastic support, please [run the ECE diagnostics tool](../../../troubleshoot/deployments/cloud-enterprise/run-ece-diagnostics-tool.md).

-
diff --git a/deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md b/deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md
index ba73a0b6fc..7bcef52379 100644
--- a/deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md
+++ b/deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md
@@ -18,7 +18,7 @@ You might need to adjust the retention period for one of the following reasons:

 To customize the retention period, set up a custom lifecycle policy for logs and metrics indices:

 1. [Create a new index lifecycle management (ILM) policy](../../../manage-data/lifecycle/index-lifecycle-management/configure-lifecycle-policy.md) in the logging and metrics cluster.
-2. Create a new, legacy-style, index template that matches the data view (formerly *index pattern*) that you wish to customize lifecycle for.
+2. Create a new legacy-style index template that matches the data view (formerly *index pattern*) that you want to customize the lifecycle for.
 3. Specify a lifecycle policy in the index template settings.
 4. Choose a higher `order` for the template so the specified lifecycle policy will be used instead of the default. 
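+
+As a sketch of steps 2 through 4 (the template name, index pattern, and policy name here are placeholders for illustration), the legacy index template might look like the following when created in the `logging-and-metrics` cluster:
+
+```console
+PUT _template/custom-metrics-retention
+{
+  "order": 10,
+  "index_patterns": ["metricbeat-*"],
+  "settings": {
+    "index.lifecycle.name": "custom-retention-policy"
+  }
+}
+```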
diff --git a/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md b/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md index 4a8fc79a00..7fa1064ca9 100644 --- a/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md +++ b/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md @@ -8,22 +8,23 @@ applies_to: # ECE platform monitoring [ece-monitoring-ece] -Elastic Cloud Enterprise by default collects monitoring data for your installation using Filebeat and Metricbeat. This data gets sent as monitoring indices to a dedicated `logging-and-metrics` deployment that is created whenever you install Elastic Cloud Enterprise on your first host. Data is collected on every host that is part of your Elastic Cloud Enterprise installation and includes: +By default, {{ece}} collects monitoring data for your installation using Filebeat and Metricbeat. This data gets sent as monitoring indices to a dedicated `logging-and-metrics` deployment that is created whenever you install {{ece}} on your first host. 
Data is collected on every host that is part of your {{ece}} installation and includes: -* Logs for all core services that are a part of Elastic Cloud Enterprise and monitoring metrics for core Elastic Cloud Enterprise services, system processes on the host, and any third-party software -* Logs and monitoring metrics for Elasticsearch clusters and for Kibana instances +* Logs for all core services that are a part of {{ece}} +* Monitoring metrics for core {{ece}} services, system processes on the host, and any third-party software +* Logs and monitoring metrics for {{es}} clusters and for {{kib}} instances -These monitoring indices are collected in addition to the [monitoring you might have enabled for specific clusters](/deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md), which also provides monitoring metrics that you can access in Kibana (note that the `logging-and-metrics` deployment is used for monitoring data from system deployments only; for non-system deployments, monitoring data must be sent to a deployment other than `logging-and-metrics`). +These monitoring indices are collected in addition to the [stack monitoring you might have enabled for specific clusters](/deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md), which also provides monitoring metrics that you can access in {{kib}}. -In this section: +In this section, you'll learn the following about using ECE platform monitoring: -* [Access logs and metrics](ece-monitoring-ece-access.md) - Where to find the logs and metrics that are collected. -* [Capture heap dumps](../../../troubleshoot/deployments/cloud-enterprise/heap-dumps.md) - Troubleshoot instances that run out of memory. -* [Capture thread dumps](../../../troubleshoot/deployments/cloud-enterprise/thread-dumps.md) - Troubleshoot instances that are having thread or CPU issues. 
-* [Set the Retention Period for Logging and Metrics Indices](ece-monitoring-ece-set-retention.md) - Set the retention period for the indices that Elastic Cloud Enterprise collects.
+* [](ece-monitoring-ece-access.md): The types of logs and metrics that are collected for deployments, and where to find them.
+% where do we find logs and metrics for the installation itself?
+* [](/deploy-manage/monitor/orchestrators/ece-proxy-log-fields.md): The fields that are included in proxy logs. Proxy logs capture the search and indexing requests that proxies have sent to the {{es}} cluster, and requests that proxies have sent to the {{kib}} instance.
+* [](ece-monitoring-ece-set-retention.md): How to set the retention period for the indices that {{ece}} collects.
+
+For information about troubleshooting {{ece}} using these metrics, and guidance on capturing other diagnostic information like heap dumps and thread dumps, refer to [](/troubleshoot/deployments/cloud-enterprise/cloud-enterprise.md).

 ::::{important}
 The `logging-and-metrics` deployment is for use by your ECE installation only. You must not use this deployment to index monitoring data from your own Elasticsearch clusters or use it to index data from Beats and Logstash. Always create a separate, dedicated monitoring deployment for your own use.
-::::
-
-
+::::
\ No newline at end of file
diff --git a/deploy-manage/monitor/orchestrators/ece-proxy-log-fields.md b/deploy-manage/monitor/orchestrators/ece-proxy-log-fields.md
index a752ca8d90..eaf3086932 100644
--- a/deploy-manage/monitor/orchestrators/ece-proxy-log-fields.md
+++ b/deploy-manage/monitor/orchestrators/ece-proxy-log-fields.md
@@ -6,44 +6,46 @@ applies_to:
   ece: all
 ---

-# Proxy Log Fields [ece-proxy-log-fields]
+# Proxy log fields [ece-proxy-log-fields]
+
+Proxy logs capture data for search and indexing requests that proxies have sent to the {{es}} cluster, and requests that proxies have sent to the {{kib}} instance. 
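+
+For illustration, a proxy log entry, as indexed into the `logging-and-metrics` deployment, includes fields like the following. All values in this sketch are hypothetical:
+
+```js
+{
+  "request_method": "POST",
+  "request_path": "/my-index/_search",
+  "status_code": 200,
+  "proxy_ip": "10.0.0.5",
+  "client_ip": "203.0.113.10",
+  "request_start": 1714060800000,
+  "request_end": 1714060800150,
+  "response_time": 150,
+  "backend_response_time": 120,
+  "cluster_type": "elasticsearch",
+  "request_id": "d6a1b2c3d4e5f6a7"
+}
+```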
::::{note} -These fields *are* subject to change, though the vast majority of them are generic for HTTP requests and should be relatively stable. +These fields are subject to change. However, most of these fields are generic for HTTP requests and should be relatively stable. :::: | Field | Description | | --- | --- | -| `proxy_ip` | the IP on the connection, i.e. a proxy IP if the request has been proxied | -| `request_end` | the time the request was returned in ms since unix epoch | -| `status_code` | the HTTP status returned to the client | -| `handling_instance` | the product instance name the request was forwarded to | -| `handling_server` | the allocator IP address the request was forwarded to | -| `request_length` | the length of the request body, a value of `-1` means streaming/continuing | -| `request_path` | the request path from the url | -| `instance_capacity` | the total capacity of the handling instance | -| `response_time` | the total time taken for the request in milliseconds `ms`. `response_time` includes `backend_response_time`. | -| `backend_response_time` | the total time spent processing the upstream request with the backend instance (Elasticsearch, Kibana, and so on), including the initial connection, time the component is processing the request, and time streaming the response back to the calling client. The proxy latency is `backend_response_time` - `response_time`. `backend_response_time` minus `backend_response_body_time` indicates the time spent making the initial connection to the backend instance as well as the time for the backend instance to actually process the request. `backend_response_time` includes `backend_response_body_time`. | -| `backend_response_body_time` | the total time spent streaming the response from the backend instance to the calling client. 
| -| `auth_user` | the authenticated user for the request (only supported for basic authentication) | -| `capacity` | the total capacity of the handling cluster | -| `request_host` | the `Host` header from the request | -| `client_ip` | the client IP for the request (may differ from proxy ip if `X-Forwarded-For` or proxy protocol is configured) | -| `availability_zones` | the number of availablity zones supported by the target cluster | -| `response_length` | the number of bytes written in the response body | -| `connection_id` | a unique ID represented a single client connecition, multiple requests may use a single connection | -| `status_reason` | an optional reason to explain the response code - e.g. `BLOCKED_BY_TRAFFIC_FILTER` | -| `request_start` | the time the request was received in milliseconds `ms` since unix epoch | -| `request_port` | the port used for the request | -| `request_scheme` | the scheme (HTTP/HTTPS) used for the request | -| `message` | an optoinal message associated with a proxy error | -| `action` | the type of elasticsearch request (e.g. search/bulk etc) | -| `handling_cluster` | the cluster the request was forwarded to | -| `request_id` | a unique ID for each request (returned on the response as `X-Cloud-Request-Id` - can be used to correlate client requests with proxy logs) | -| `tls_version` | a code indicating the TLS version used for the request - `1.0 769`,`1.1 770`,`1.2 771`,`1.3 772` | -| `instance_count` | the number of instances in the target cluster | -| `cluster_type` | the type of cluster the request was routed to (e.g. elasticsearch, kibana, apm etc) | -| `request_method` | the HTTP method for the request | -| `backend_connection_id` | a unique ID for the upstream request to the product, the proxy maintains connection pools so this should be re-used | +| `proxy_ip` | The IP on the connection, i.e. 
a proxy IP if the request has been proxied |
+| `request_end` | The time the request was returned, in milliseconds (ms) since the Unix epoch |
+| `status_code` | The HTTP status returned to the client |
+| `handling_instance` | The product instance name the request was forwarded to |
+| `handling_server` | The allocator IP address the request was forwarded to |
+| `request_length` | The length of the request body; a value of `-1` means streaming/continuing |
+| `request_path` | The request path from the URL |
+| `instance_capacity` | The total capacity of the handling instance |
+| `response_time` | The total time taken for the request, in milliseconds (ms). `response_time` includes `backend_response_time`. |
+| `backend_response_time` | The total time spent processing the upstream request with the backend instance ({{es}}, {{kib}}, and so on), including the initial connection, time the component is processing the request, and time streaming the response back to the calling client. The proxy latency is `response_time` minus `backend_response_time`. `backend_response_time` minus `backend_response_body_time` indicates the time spent making the initial connection to the backend instance as well as the time for the backend instance to actually process the request. `backend_response_time` includes `backend_response_body_time`. |
+| `backend_response_body_time` | The total time spent streaming the response from the backend instance to the calling client. 
| +| `auth_user` | The authenticated user for the request (only supported for basic authentication) | +| `capacity` | The total capacity of the handling cluster | +| `request_host` | The `Host` header from the request | +| `client_ip` | The client IP for the request (may differ from the proxy IP if `X-Forwarded-For` or the proxy protocol is configured) | +| `availability_zones` | The number of availability zones supported by the target cluster | +| `response_length` | The number of bytes written in the response body | +| `connection_id` | A unique ID representing a single client connection; multiple requests may use a single connection | +| `status_reason` | An optional reason to explain the response code, for example `BLOCKED_BY_TRAFFIC_FILTER` | +| `request_start` | The time the request was received in milliseconds (`ms`) since the Unix epoch | +| `request_port` | The port used for the request | +| `request_scheme` | The scheme (HTTP/HTTPS) used for the request | +| `message` | An optional message associated with a proxy error | +| `action` | The type of {{es}} request (for example, search or bulk) | +| `handling_cluster` | The cluster the request was forwarded to | +| `request_id` | A unique ID for each request (returned on the response as `X-Cloud-Request-Id`; can be used to correlate client requests with proxy logs) | +| `tls_version` | A code indicating the TLS version used for the request: `769` (TLS 1.0), `770` (TLS 1.1), `771` (TLS 1.2), or `772` (TLS 1.3) | +| `instance_count` | The number of instances in the target cluster | +| `cluster_type` | The type of cluster the request was routed to (e.g. 
{{es}}, {{kib}}, APM) | +| `request_method` | The HTTP method for the request | +| `backend_connection_id` | A unique ID for the upstream request to the product; the proxy maintains connection pools, so a single backend connection may be reused by multiple requests | diff --git a/deploy-manage/monitor/orchestrators/eck-metrics-configuration.md b/deploy-manage/monitor/orchestrators/eck-metrics-configuration.md index 907c743bde..e6c656507a 100644 --- a/deploy-manage/monitor/orchestrators/eck-metrics-configuration.md +++ b/deploy-manage/monitor/orchestrators/eck-metrics-configuration.md @@ -6,19 +6,18 @@ applies_to: eck: all --- -# ECK metrics configuration [k8s-configure-operator-metrics] +# ECK operator metrics [k8s-configure-operator-metrics] -The ECK operator provides a metrics endpoint that can be used to monitor the operator’s performance and health. By default, the metrics endpoint is not enabled and is not secured. The following sections describe how to enable it, secure it and the associated Prometheus requirements: +% todo: what metrics? what to watch for? + +The ECK operator provides a metrics endpoint that can be used to monitor the operator’s performance and health. By default, the metrics endpoint is not enabled. In ECK version 2.16 and lower, the metrics endpoint is also not secured. + +The following sections describe how to enable and secure the metrics endpoint. If you use [Prometheus](https://prometheus.io/) to consume the monitoring data, you need to perform additional configuration in Prometheus. * [Enabling the metrics endpoint](k8s-enabling-metrics-endpoint.md) -* [Securing the metrics endpoint](k8s-securing-metrics-endpoint.md) +* [Securing the metrics endpoint](k8s-securing-metrics-endpoint.md) (ECK 2.16 and lower) * [Prometheus requirements](k8s-prometheus-requirements.md) ::::{note} -The ECK operator metrics endpoint will be secured by default beginning in version 3.0.0. +The ECK operator metrics endpoint is secured by default beginning in version 3.0.0. 
:::: - - - - - diff --git a/deploy-manage/monitor/orchestrators/k8s-enabling-metrics-endpoint.md b/deploy-manage/monitor/orchestrators/k8s-enabling-metrics-endpoint.md index d72ff49bb5..f530ae73dc 100644 --- a/deploy-manage/monitor/orchestrators/k8s-enabling-metrics-endpoint.md +++ b/deploy-manage/monitor/orchestrators/k8s-enabling-metrics-endpoint.md @@ -12,97 +12,101 @@ The metrics endpoint is not enabled by default. To enable the metrics endpoint, ## Using the operator Helm chart [k8s_using_the_operator_helm_chart] -If you installed ECK through the Helm chart commands listed in [Install ECK using the Helm chart](../../deploy/cloud-on-k8s/install-using-helm-chart.md), you can now set `config.metrics.port` to a value greater than 0 in your values file and the metrics endpoint will be enabled. +If you installed ECK through the Helm chart commands listed in [Install ECK using the Helm chart](../../deploy/cloud-on-k8s/install-using-helm-chart.md), you can set `config.metrics.port` to a value greater than 0 in your values file and the metrics endpoint will be enabled. ## Using the operator manifests [k8s_using_the_operator_manifests] -If you installed ECK using the manifests using the commands listed in [*Deploy ECK in your Kubernetes cluster*](../../deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md) some additional changes will be required to enable the metrics endpoint. - -Enable the metrics endpoint in the `ConfigMap`. - -```shell -cat < - name: tls-certificate - readOnly: true - volumes: - - name: conf - configMap: - name: elastic-operator - - name: cert - secret: - defaultMode: 420 - secretName: elastic-webhook-server-cert - - name: tls-certificate - secret: - defaultMode: 420 - secretName: eck-metrics-tls-certificate -EOF -``` - -1. If mounting the TLS secret to a different directory the `metrics-cert-dir` setting in the operator configuration has to be adjusted accordingly. - - -Potentially patch the `ServiceMonitor`. 
This will only need to be done if you are adjusting the `insecureSkipVerify` field to `false`. - -```shell -kubectl patch servicemonitor -n elastic-system elastic-operator --patch-file=/dev/stdin <<-EOF -spec: - endpoints: - - port: https - path: /metrics - scheme: https - interval: 30s - tlsConfig: - insecureSkipVerify: false - caFile: /etc/prometheus/secrets/{secret-name}/ca.crt <1> - serverName: elastic-operator-metrics.elastic-system.svc - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token -EOF -``` + template: + spec: + containers: + - name: manager + volumeMounts: + - mountPath: "/tmp/k8s-metrics-server/serving-certs" <1> + name: tls-certificate + readOnly: true + volumes: + - name: conf + configMap: + name: elastic-operator + - name: cert + secret: + defaultMode: 420 + secretName: elastic-webhook-server-cert + - name: tls-certificate + secret: + defaultMode: 420 + secretName: eck-metrics-tls-certificate + EOF + ``` + + 1. If you're mounting the TLS secret to a different directory, adjust the `metrics-cert-dir` setting in the operator configuration accordingly. + + +3. Patch the `ServiceMonitor` if needed. This step is required only if you set the `insecureSkipVerify` field to `false`. + + ```shell + kubectl patch servicemonitor -n elastic-system elastic-operator --patch-file=/dev/stdin <<-EOF + spec: + endpoints: + - port: https + path: /metrics + scheme: https + interval: 30s + tlsConfig: + insecureSkipVerify: false + caFile: /etc/prometheus/secrets/{secret-name}/ca.crt <1> + serverName: elastic-operator-metrics.elastic-system.svc + bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token + EOF + ``` + + 1. See [](/deploy-manage/monitor/orchestrators/k8s-prometheus-requirements.md) for more information on creating the CA secret. -1. See the [Prometheus requirements section](k8s-prometheus-requirements.md) for more information on creating the CA secret. 
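For readers who consume the endpoint without the Prometheus operator, the `ServiceMonitor` settings above map onto a plain Prometheus scrape job. The following is only a sketch under stated assumptions: the job name and the target port `8080` are assumptions, and the `{secret-name}` placeholder must match the CA secret mounted into Prometheus as described on the Prometheus requirements page.

```yaml
scrape_configs:
  - job_name: eck-operator   # assumed job name
    scheme: https
    metrics_path: /metrics
    scrape_interval: 30s
    tls_config:
      # CA secret mounted into the Prometheus pod; {secret-name} is a placeholder
      ca_file: /etc/prometheus/secrets/{secret-name}/ca.crt
      server_name: elastic-operator-metrics.elastic-system.svc
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    static_configs:
      - targets: ["elastic-operator-metrics.elastic-system.svc:8080"]  # port is an assumption
```

The `tls_config` and `bearer_token_file` entries mirror the `tlsConfig` and `bearerTokenFile` fields of the `ServiceMonitor` patch shown above.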
diff --git a/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md b/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md index 143c0e6f95..6335ecdb98 100644 --- a/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md +++ b/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md @@ -29,7 +29,7 @@ If you’re using {{agent}}, do not deploy {{filebeat}} for log collection. Inst 2. Identify which logs you want to monitor. - The {{filebeat}} {{es}} module can handle [audit logs](../../security/logging-configuration/logfile-audit-output.md), [deprecation logs](../logging-configuration/elasticsearch-log4j-configuration-self-managed.md#deprecation-logging), [gc logs](elasticsearch://reference/elasticsearch/jvm-settings.md#gc-logging), [server logs](../logging-configuration/elasticsearch-log4j-configuration-self-managed.md), and [slow logs](elasticsearch://reference/elasticsearch/index-settings/slow-log.md). For more information about the location of your {{es}} logs, see the [path.logs](../../deploy/self-managed/important-settings-configuration.md#path-settings) setting. + The {{filebeat}} {{es}} module can handle [audit logs](../../security/logging-configuration/logfile-audit-output.md), [deprecation logs](/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md), [gc logs](elasticsearch://reference/elasticsearch/jvm-settings.md#gc-logging), [server logs](../logging-configuration/elasticsearch-log4j-configuration-self-managed.md), and [slow logs](elasticsearch://reference/elasticsearch/index-settings/slow-log.md). For more information about the location of your {{es}} logs, see the [path.logs](../../deploy/self-managed/important-settings-configuration.md#path-settings) setting. ::::{important} If there are both structured (`*.json`) and unstructured (plain text) versions of the logs, you must use the structured logs. 
Otherwise, they might not appear in the appropriate context in {{kib}}. diff --git a/deploy-manage/toc.yml b/deploy-manage/toc.yml index 94af289c0a..06c0b6683e 100644 --- a/deploy-manage/toc.yml +++ b/deploy-manage/toc.yml @@ -685,6 +685,7 @@ toc: children: - file: monitor/access-performance-metrics-on-elastic-cloud.md - file: monitor/ec-memory-pressure.md + - file: monitor/kibana-task-manager-health-monitoring.md - file: monitor/orchestrators.md children: - file: monitor/orchestrators/eck-metrics-configuration.md @@ -697,7 +698,6 @@ toc: - file: monitor/orchestrators/ece-monitoring-ece-access.md - file: monitor/orchestrators/ece-proxy-log-fields.md - file: monitor/orchestrators/ece-monitoring-ece-set-retention.md - - file: monitor/kibana-task-manager-health-monitoring.md - file: monitor/logging-configuration.md children: - file: monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md @@ -705,8 +705,10 @@ toc: - file: monitor/logging-configuration/elasticsearch-deprecation-logs.md - file: monitor/logging-configuration/kibana-logging.md children: - - file: monitor/logging-configuration/kibana-log-settings-examples.md - - file: monitor/logging-configuration/kibana-logging-cli-configuration.md + - file: monitor/logging-configuration/kibana-log-levels.md + - file: monitor/logging-configuration/kib-advanced-logging.md + children: + - file: monitor/logging-configuration/kibana-log-settings-examples.md - file: kibana-reporting-configuration.md - file: cloud-organization.md children: diff --git a/deploy-manage/tools/snapshot-and-restore/s3-repository.md b/deploy-manage/tools/snapshot-and-restore/s3-repository.md index 920eebbf64..6a10c6ba8f 100644 --- a/deploy-manage/tools/snapshot-and-restore/s3-repository.md +++ b/deploy-manage/tools/snapshot-and-restore/s3-repository.md @@ -388,7 +388,7 @@ There are many systems, including some from very well-known storage vendors, whi You can perform some basic checks of the suitability of your storage system 
using the [repository analysis API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-repository-analyze). If this API does not complete successfully, or indicates poor performance, then your storage system is not fully compatible with AWS S3 and therefore unsuitable for use as a snapshot repository. However, these checks do not guarantee full compatibility. -Most storage systems can be configured to log the details of their interaction with {{es}}. If you are investigating a suspected incompatibility with AWS S3, it is usually simplest to collect these logs and provide them to the supplier of your storage system for further analysis. If the incompatibility is not clear from the logs emitted by the storage system, configure {{es}} to log every request it makes to the S3 API by [setting the logging level](../../monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md#configuring-logging-levels) of the `com.amazonaws.request` logger to `DEBUG`. +Most storage systems can be configured to log the details of their interaction with {{es}}. If you are investigating a suspected incompatibility with AWS S3, it is usually simplest to collect these logs and provide them to the supplier of your storage system for further analysis. If the incompatibility is not clear from the logs emitted by the storage system, configure {{es}} to log every request it makes to the S3 API by [setting the logging level](/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md) of the `com.amazonaws.request` logger to `DEBUG`. To prevent leaking sensitive information such as credentials and keys in logs, {{es}} rejects configuring this logger at high verbosity unless [insecure network trace logging](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#http-rest-request-tracer) is enabled. 
To do so, you must explicitly enable it on each node by setting the system property `es.insecure_network_trace_enabled` to `true`. diff --git a/redirects.yml b/redirects.yml index 0d4a3f31d9..316c0f61c0 100644 --- a/redirects.yml +++ b/redirects.yml @@ -27,6 +27,7 @@ redirects: 'deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md': '!deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md' 'deploy-manage/monitor/stack-monitoring/ece-stack-monitoring.md': '!deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md' 'deploy-manage/deploy/kibana-reporting-configuration.md': '!deploy-manage/kibana-reporting-configuration.md' + 'deploy-manage/monitor/logging-configuration/kibana-logging-cli-configuration.md': '!deploy-manage/monitor/logging-configuration/kib-advanced-logging.md' ## audit logging movement to security section 'deploy-manage/monitor/logging-configuration/configuring-audit-logs.md': 'deploy-manage/security/logging-configuration/configuring-audit-logs.md' diff --git a/troubleshoot/elasticsearch/security/security-trb-roles.md b/troubleshoot/elasticsearch/security/security-trb-roles.md index 4fba2d9a0d..48948dd42d 100644 --- a/troubleshoot/elasticsearch/security/security-trb-roles.md +++ b/troubleshoot/elasticsearch/security/security-trb-roles.md @@ -53,7 +53,7 @@ mapped_pages: logger.authc.level = DEBUG ``` - Refer to [configuring logging levels](../../../deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md#configuring-logging-levels) for more information. + Refer to [](/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md) for more information. A successful authentication should produce debug statements that list groups and role mappings. 
diff --git a/troubleshoot/elasticsearch/security/trb-security-saml.md b/troubleshoot/elasticsearch/security/trb-security-saml.md index 82d8acf383..66b4f9dc0a 100644 --- a/troubleshoot/elasticsearch/security/trb-security-saml.md +++ b/troubleshoot/elasticsearch/security/trb-security-saml.md @@ -198,5 +198,5 @@ logger.saml.name = org.elasticsearch.xpack.security.authc.saml logger.saml.level = DEBUG ``` -Refer to [configuring logging levels](../../../deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md#configuring-logging-levels) for more information. +Refer to [configuring logging levels](/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md) for more information.
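Both troubleshooting pages above raise logger verbosity through `log4j2.properties`. As the linked logging-levels page describes, the same logger can usually also be adjusted at runtime through the cluster settings API, which avoids a restart (a sketch; set the value back to `null` to reset it):

```console
PUT /_cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.xpack.security.authc.saml": "DEBUG"
  }
}
```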