diff --git a/docs/cse/troubleshoot/troubleshoot-parsers.md b/docs/cse/troubleshoot/troubleshoot-parsers.md index 1dba073dfd..07db789ba5 100644 --- a/docs/cse/troubleshoot/troubleshoot-parsers.md +++ b/docs/cse/troubleshoot/troubleshoot-parsers.md @@ -32,7 +32,7 @@ The recommended method is to set `_siemForward = true` and `_parser =
-| parse regex "\"(?<sourcecategory>[^\"]*)\"\:(?<data>\{[^\}]*\})" multi
+| parse regex "\"(?<_sourcecategory>[^\"]*)\"\:(?<data>\{[^\}]*\})" multi
| json field=data "sizeInBytes", "count" as bytes, count | timeslice 1h | bytes/1024/1024/1024 as gbytes -| sum(gbytes) as gbytes by sourcecategory, _timeslice -| where !(sourceCategory matches "*_volume") +| sum(gbytes) as gbytes by _sourcecategory, _timeslice +| where !(_sourceCategory matches "*_volume") | compare timeshift -1w 4 max | if(isNull(gbytes_4w_max), 0, gbytes_4w_max) as gbytes_4w_max | ((gbytes - gbytes_4w_max) / gbytes) * 100 as pct_increase diff --git a/docs/manage/partitions/data-tiers/create-edit-partition.md b/docs/manage/partitions/data-tiers/create-edit-partition.md index 7c1f904e32..b46dab1e7e 100644 --- a/docs/manage/partitions/data-tiers/create-edit-partition.md +++ b/docs/manage/partitions/data-tiers/create-edit-partition.md @@ -48,7 +48,7 @@ If you have a Sumo Logic Enterprise Suite account, you can take advantage of th When designing partitions, keep the following in mind: * **Avoid using queries that are subject to change**. In order to benefit from using Partitions, they should be used for long-term message organization. * **Make the query as specific as possible**. Making the query specific will reduce the amount of data in the Partition, which increases search performance. -* **Keep the query flexible**. Use a flexible query, such as `sourceCategory=*Apache*`, so that metadata can be adjusted without breaking the query. +* **Keep the query flexible**. Use a flexible query, such as `_sourceCategory=*Apache*`, so that metadata can be adjusted without breaking the query. 
* **Group data together that is most often used together**. For example, create Partitions for categories such as web data, security data, or errors. * **Group data together that is used by teams**. Partitions are an excellent way to organize messages by role and teams within your organization. * **Avoid including too much data in your partition**. Send between 2% and 20% of your data to a Partition. Including 90% of the data in your index in a Partition won’t improve search performance. diff --git a/docs/manage/partitions/flex/create-edit-partition-flex.md b/docs/manage/partitions/flex/create-edit-partition-flex.md index 1c01bd25ee..0ce6d66247 100644 --- a/docs/manage/partitions/flex/create-edit-partition-flex.md +++ b/docs/manage/partitions/flex/create-edit-partition-flex.md @@ -44,7 +44,7 @@ To create or edit a Partition, you must be an account Administrator or have th When designing partitions, keep the following in mind: * **Avoid using queries that are subject to change**. In order to benefit from using Partitions, they should be used for long-term message organization. * **Make the query as specific as possible**. Making the query specific will reduce the amount of data in the Partition, which increases search performance. -* **Keep the query flexible**. Use a flexible query, such as `sourceCategory=*Apache*`, so that metadata can be adjusted without breaking the query. +* **Keep the query flexible**. Use a flexible query, such as `_sourceCategory=*Apache*`, so that metadata can be adjusted without breaking the query. * **Group data together that is most often used together**. For example, create Partitions for categories such as web data, security data, or errors. * **Group data together that is used by teams**. Partitions are an excellent way to organize messages by role and teams within your organization. * **Avoid including too much data in your partition**. Send between 2% and 20% of your data to a Partition. 
Including 90% of the data in your index in a Partition won’t improve search performance. diff --git a/docs/search/get-started-with-search/build-search/keyword-search-expressions.md b/docs/search/get-started-with-search/build-search/keyword-search-expressions.md index 8e79b3704c..130279257c 100644 --- a/docs/search/get-started-with-search/build-search/keyword-search-expressions.md +++ b/docs/search/get-started-with-search/build-search/keyword-search-expressions.md @@ -143,7 +143,7 @@ It is '\u001F', U+001F UNIT SEPARATOR For example, in the following query, there are multiple space characters present in `"VM Periodic" and "Task Thread"`, but normalization returns the same result as a single space whitespace character. ```sql - sourceCategory=stream_thread_dumps "VM Periodic_____Task Thread" + _sourceCategory=stream_thread_dumps "VM Periodic_____Task Thread" ``` :::note @@ -154,7 +154,7 @@ It is '\u001F', U+001F UNIT SEPARATOR For example, in the following query there is a tab character present in `"VM Periodic" and "Task Thread"`, but normalization returns the same result as a single space whitespace character. ```sql - sourceCategory=stream_thread_dumps "VM Periodic_Task Thread" + _sourceCategory=stream_thread_dumps "VM Periodic_Task Thread" ``` :::note @@ -165,7 +165,7 @@ It is '\u001F', U+001F UNIT SEPARATOR For example, in the following query, there is a new line after the string `Task`, but normalization returns the same result as a single space whitespace character. This shows that a query string with a single space can match a log line that has a new line character. ```sql - sourceCategory=stream_thread_dumps "VM Periodic Task\nThread" + _sourceCategory=stream_thread_dumps "VM Periodic Task\nThread" ``` :::note @@ -176,7 +176,7 @@ It is '\u001F', U+001F UNIT SEPARATOR For example, in the following query, there is a new line and tab character after the string `Task`, but normalization returns the same result as a single space whitespace character. 
This shows that a query string with a single space can match a log line that has a new line and a tab whitespace character. ```sql - sourceCategory=stream_thread_dumps "VM Periodic Task\n\tThread" + _sourceCategory=stream_thread_dumps "VM Periodic Task\n\tThread" ``` :::note The character `\n\t` is used to describe the new line + tab whitespace characters. @@ -184,5 +184,5 @@ It is '\u001F', U+001F UNIT SEPARATOR All of the above queries containing various whitespace characters will accept a single space whitespace character by default and return the desired results. See the query below. ```sql -sourceCategory=stream_thread_dumps "VM Periodic Task Thread" +_sourceCategory=stream_thread_dumps "VM Periodic Task Thread" ``` diff --git a/docs/search/optimize-search-partitions.md b/docs/search/optimize-search-partitions.md index c965e3b9a1..7ce802b976 100644 --- a/docs/search/optimize-search-partitions.md +++ b/docs/search/optimize-search-partitions.md @@ -123,9 +123,9 @@ It may prevent you from searching horizontally without OR’ing partitions toget This helps users easily identify the correct partition to use. -### Keep your partition broadly scoped with sourceCategory and avoid keywords +### Keep your partition broadly scoped with _sourceCategory and avoid keywords -Use sourceCategory in your partitions definitions and avoid keywords to keep your partition broadly scoped. You can always narrow down the scope of your search when you query your partition. +Use `_sourceCategory` in your partition definitions and avoid keywords to keep your partition broadly scoped. You can always narrow down the scope of your search when you query your partition. 
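To illustrate the guidance above, a broadly scoped partition routing expression built only on `_sourceCategory` stays valid as metadata is adjusted. A minimal sketch (the category names here are hypothetical, not taken from the docs in this diff):

```sql
_sourceCategory=prod/web/*
```

At query time, you can then narrow the scope within the partition, for example with `_sourceCategory=prod/web/apache error`, rather than baking keywords into the partition definition itself.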
### Group similar data together diff --git a/docs/search/optimize-search-performance.md b/docs/search/optimize-search-performance.md index fb1d67e970..d0b030d084 100644 --- a/docs/search/optimize-search-performance.md +++ b/docs/search/optimize-search-performance.md @@ -60,10 +60,10 @@ Here's a quick look at how to choose the right indexed search optimization tool. | :-- | :-- | :-- | | Run queries against a certain set of data | Choose if the quantity of data to be indexed is more than 2% of the total data. | Choose if the quantity of data to be indexed is less than 2% of the total data. | | Use data to identify long-term trends |   | Yes | -| Segregate data by sourceCategory | Yes |   | +| Segregate data by _sourceCategory | Yes |   | | Have aggregate data ready to query |   | Yes | | Use RBAC to deny or grant access to the data set | Yes | Yes | -| Reuse the fields that I'm parsing for other searches against this same sourceCategory |   |   | +| Reuse the fields that I'm parsing for other searches against this same _sourceCategory |   |   | ## How is data added to Partitions and Scheduled Views? diff --git a/docs/search/search-cheat-sheets/log-operators.md b/docs/search/search-cheat-sheets/log-operators.md index 0283782d7d..cac8a2db1e 100644 --- a/docs/search/search-cheat-sheets/log-operators.md +++ b/docs/search/search-cheat-sheets/log-operators.md @@ -182,7 +182,7 @@ This section provides detailed syntax, rules, and examples for Sumo Logic Opera The backshift operator compares values as they change over time. Backshift can be used with rollingstd, smooth, or any other operators whose results could be affected by spikes of data (where a spike could possibly throw off future results). _backshift Can be used in Dashboard Panels, but in the search they must be included after the first group-by phrase. - _sourcecategory=katta
| timeslice by 1m
| count by _timeslice,_sourcehost
| sort + _timeslice
| backshift _count,1 by _sourcehost
+ _sourceCategory=katta
| timeslice by 1m
| count by _timeslice,_sourcehost
| sort + _timeslice
| backshift _count,1 by _sourcehost
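As a sketch of how the shifted value is consumed (assuming, per the note above, that the operator writes its result to the `_backshift` field), the previous timeslice's count can be compared against the current one:

```sql
_sourceCategory=katta
| timeslice by 1m
| count by _timeslice,_sourcehost
| sort + _timeslice
| backshift _count,1 by _sourcehost
| _count - _backshift as change_from_previous
```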
base64Decode diff --git a/docs/search/search-query-language/search-operators/lookup.md b/docs/search/search-query-language/search-operators/lookup.md index fa328a1e22..c3953c1440 100644 --- a/docs/search/search-query-language/search-operators/lookup.md +++ b/docs/search/search-query-language/search-operators/lookup.md @@ -61,12 +61,12 @@ You can only perform a lookup using fields defined as primary keys. If the key c * `srcDevice_ip` * `eventTime` -* `sourceCategory` +* `_sourceCategory` your lookup query scope must include: ```sql -... on srcDevice_ip=srcDevice_ip and eventTime=eventTime and sourceCategory=sourceCategory +... on srcDevice_ip=srcDevice_ip and eventTime=eventTime and _sourceCategory=_sourceCategory ``` ## Syntax  diff --git a/docs/send-data/collect-from-other-data-sources/docker-collection-methods.md b/docs/send-data/collect-from-other-data-sources/docker-collection-methods.md index 4d56ab34bb..beb1126100 100644 --- a/docs/send-data/collect-from-other-data-sources/docker-collection-methods.md +++ b/docs/send-data/collect-from-other-data-sources/docker-collection-methods.md @@ -56,7 +56,7 @@ You can bake the Collector into an image, install it manually, or use automat * Host Metrics (Sumo's [Host Metrics source](/docs/send-data/installed-collectors/sources/host-metrics-source.md) is required.) * Logs are cached locally, so if a source is throttled by Sumo, you won’t drop data.   * You can bake Installed Collectors into AMIs to allow for consistent deployments across all your hosts. - * Configurable metadata. You can use variables available from Docker and the Docker host to configure the sourceCategory and sourceHost for a Docker log source or a Docker stats. For more information, see Configure sourceCategory and sourceHost using variables. + * Configurable metadata. You can use variables available from Docker and the Docker host to configure the `_sourceCategory` and `sourceHost` for a Docker log source or a Docker Stats source. 
For more information, see Configure `_sourceCategory` and `sourceHost` using variables. * **Cons** * Maintaining AMIs can be tricky if the process is not automated, so this might be a disadvantage, depending on your situation and resources.  * It’s not as easy to set up this method to monitor selected containers on a host, as opposed to all containers. You might need to configure multiple sources to achieve this goal. @@ -70,7 +70,7 @@ Logic collector. * No need to bake into any AMIs. Can be fully automated depending on your automation tooling around Docker. * The Collector will cache the files in the container, so if a Source is throttled by Sumo, you won’t drop data. Ensure that you have ample space, or use persistent storage. * Easy to upgrade: it’s a container, just deploy a new one! - * Configurable metadata. You can use variables available from Docker and the Docker host to configure the sourceCategory and sourceHost for a Docker log source or a Docker stats. For more information, see Configure sourceCategory and sourceHost using variables. + * Configurable metadata. You can use variables available from Docker and the Docker host to configure the `_sourceCategory` and `sourceHost` for a Docker log source or a Docker Stats source. For more information, see Configure `_sourceCategory` and `sourceHost` using variables. * **Cons** * With this method, you cannot collect host metrics from the Docker host. The Collector must be installed on the Docker host to get the host metrics. You can still collect container logs, container metrics and host logs. * It’s not as easy to set up this method to monitor selected containers on a host, as opposed to all containers. You might need to configure multiple sources to achieve this goal. 
diff --git a/docs/send-data/collection/search-collector-or-source.md b/docs/send-data/collection/search-collector-or-source.md index 8e5e027543..3d176e0ceb 100644 --- a/docs/send-data/collection/search-collector-or-source.md +++ b/docs/send-data/collection/search-collector-or-source.md @@ -6,7 +6,7 @@ description: Search for a Collector or Source on the Manage Collection page. Many Sumo Logic customers have hundreds of collectors and sources installed and configured. But even with only 10 Collectors, sometimes it can be hard to find the one you need in the list. -On the **Collection** page, a search field allows you to search for collectors and sources by name or sourceCategory using complete keywords. +On the **Collection** page, a search field allows you to search for collectors and sources by name or `_sourceCategory` using complete keywords. To match partial keywords use a wildcard. For example, use "**apache\***" to match "apacheprod".
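Taken together, the changes in this diff enforce a single convention: built-in Sumo Logic metadata fields are written with a leading underscore. A minimal query showing the corrected form (the category value below is hypothetical):

```sql
_sourceCategory=apache/prod* "error" | count by _sourceHost
```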