Merged
2 changes: 1 addition & 1 deletion docs/cse/troubleshoot/troubleshoot-parsers.md
Original file line number Diff line number Diff line change
@@ -32,7 +32,7 @@ The recommended method is to set `_siemForward = true` and `_parser = <path/to/p
* At the [source](/docs/cse/ingestion/ingestion-sources-for-cloud-siem/). Logs from an entire source will be forwarded to Cloud SIEM and the specified parser.
* At the [collector](/docs/send-data/installed-collectors/). Logs from the collector and its child sources will be forwarded to Cloud SIEM and the specified parser.
* Using a [Field Extraction Rule (FER)](/docs/manage/field-extractions/create-field-extraction-rule/).
- * Often used to specify SIEM forwarding and the parser path by `sourceCategory`, but can also be used to filter specific subsets of logs for forwarding to Cloud SIEM (or not forwarded).
+ * Often used to specify SIEM forwarding and the parser path by `_sourceCategory`, but can also be used to filter specific subsets of logs for forwarding to Cloud SIEM (or not forwarded).
* Sending subsets of logs to Cloud SIEM is useful as not all log data is useful from a security context.

Many [Cloud-To-Cloud (C2C)](/docs/send-data/hosted-collectors/cloud-to-cloud-integration-framework/) sources set the `_parser` and `_siemForward` metadata within the parser, bypassing the need to manually specify for these sources.
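Where a Field Extraction Rule is used for forwarding, the parse expression typically sets both fields. A minimal sketch, assuming a hypothetical scope and parser path (neither comes from this page):

```sql
// FER scope (hypothetical): _sourceCategory=prod/firewall/asa
// Parse expression: flag matching logs for Cloud SIEM and point them at a parser
| "true" as _siemForward
| "/Parsers/System/Cisco/Cisco ASA" as _parser
```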
2 changes: 1 addition & 1 deletion docs/integrations/saas-cloud/citrix-cloud.md
@@ -55,7 +55,7 @@ This Citrix Cloud App uses [SystemLog](https://developer.cloud.com/citrix-cloud/
### Sample queries

```sql="Active Team Members"
- sourceCategory="citrixCloudSource"
+ _sourceCategory="citrixCloudSource"
| json "eventType","targetDisplayName","targetEmail","beforeChanges.AccessType","afterChanges.AccessType","actorType","message.en-US" as event_type,name, email, access_type_before, access_type_after, actor, message nodrop
| where event_type matches("*platform/administrator/create*")
| where actor matches "{{actor}}"
Original file line number Diff line number Diff line change
@@ -21,7 +21,7 @@ The first rule is generic and matches all messages:
**Scope:**

```sql
- sourceCategory=networking/cisco/fwsm
+ _sourceCategory=networking/cisco/fwsm
```

**Extraction Rule:**
Original file line number Diff line number Diff line change
@@ -155,12 +155,12 @@ This hourly alert is generated when both of the following occur:

```
_index=sumologic_volume sizeInBytes _sourceCategory="sourcecategory_volume"
- | parse regex "\"(?<sourcecategory>[^\"]*)\"\:(?<data>\{[^\}]*\})" multi
+ | parse regex "\"(?<_sourcecategory>[^\"]*)\"\:(?<data>\{[^\}]*\})" multi
| json field=data "sizeInBytes", "count" as bytes, count
| timeslice 1h
| bytes/1024/1024/1024 as gbytes
- | sum(gbytes) as gbytes by sourcecategory, _timeslice
- | where !(sourceCategory matches "*_volume")
+ | sum(gbytes) as gbytes by _sourcecategory, _timeslice
+ | where !(_sourceCategory matches "*_volume")
| compare timeshift -1w 4 max
| if(isNull(gbytes_4w_max), 0, gbytes_4w_max) as gbytes_4w_max
| ((gbytes - gbytes_4w_max) / gbytes) * 100 as pct_increase
Original file line number Diff line number Diff line change
@@ -48,7 +48,7 @@ If you have a Sumo Logic Enterprise Suite account, you can take advantage of th
When designing partitions, keep the following in mind:
* **Avoid using queries that are subject to change**. In order to benefit from using Partitions, they should be used for long-term message organization.
* **Make the query as specific as possible**. Making the query specific will reduce the amount of data in the Partition, which increases search performance.
- * **Keep the query flexible**. Use a flexible query, such as `sourceCategory=*Apache*`, so that metadata can be adjusted without breaking the query.
+ * **Keep the query flexible**. Use a flexible query, such as `_sourceCategory=*Apache*`, so that metadata can be adjusted without breaking the query.
* **Group data together that is most often used together**. For example, create Partitions for categories such as web data, security data, or errors.
* **Group data together that is used by teams**. Partitions are an excellent way to organize messages by role and teams within your organization.
* **Avoid including too much data in your partition**. Send between 2% and 20% of your data to a Partition. Including 90% of the data in your index in a Partition won’t improve search performance.
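A routing expression that follows these guidelines might look like the following sketch (the category values are hypothetical):

```sql
_sourceCategory=prod/web/apache* or _sourceCategory=dr/web/apache*
```

The wildcard keeps the scope flexible while the category prefix keeps the Partition specific to web data.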
Original file line number Diff line number Diff line change
@@ -44,7 +44,7 @@ To create or edit a Partition, you must be an account Administrator or have th
When designing partitions, keep the following in mind:
* **Avoid using queries that are subject to change**. In order to benefit from using Partitions, they should be used for long-term message organization.
* **Make the query as specific as possible**. Making the query specific will reduce the amount of data in the Partition, which increases search performance.
- * **Keep the query flexible**. Use a flexible query, such as `sourceCategory=*Apache*`, so that metadata can be adjusted without breaking the query.
+ * **Keep the query flexible**. Use a flexible query, such as `_sourceCategory=*Apache*`, so that metadata can be adjusted without breaking the query.
* **Group data together that is most often used together**. For example, create Partitions for categories such as web data, security data, or errors.
* **Group data together that is used by teams**. Partitions are an excellent way to organize messages by role and teams within your organization.
* **Avoid including too much data in your partition**. Send between 2% and 20% of your data to a Partition. Including 90% of the data in your index in a Partition won’t improve search performance.
Original file line number Diff line number Diff line change
@@ -143,7 +143,7 @@ It is '\u001F', U+001F UNIT SEPARATOR

For example, in the following query there are multiple space characters between `"VM Periodic"` and `"Task Thread"`, but normalization returns the same result as a single space character.
```sql
- sourceCategory=stream_thread_dumps "VM Periodic_____Task Thread"
+ _sourceCategory=stream_thread_dumps "VM Periodic_____Task Thread"
```

:::note
@@ -154,7 +154,7 @@ It is '\u001F', U+001F UNIT SEPARATOR

For example, in the following query there is a tab character between `"VM Periodic"` and `"Task Thread"`, but normalization returns the same result as a single space character.
```sql
- sourceCategory=stream_thread_dumps "VM Periodic_Task Thread"
+ _sourceCategory=stream_thread_dumps "VM Periodic_Task Thread"
```

:::note
@@ -165,7 +165,7 @@ It is '\u001F', U+001F UNIT SEPARATOR

For example, in the following query, there is a new line after the string `Task`, but normalization returns the same result as a single space whitespace character. This shows that a query string with a single space can match a log line that has a new line character.
```sql
- sourceCategory=stream_thread_dumps "VM Periodic Task\nThread"
+ _sourceCategory=stream_thread_dumps "VM Periodic Task\nThread"
```

:::note
@@ -176,13 +176,13 @@ It is '\u001F', U+001F UNIT SEPARATOR

For example, in the following query, there are new line and tab characters after the string `Task`, but normalization returns the same result as a single space character. This shows that a query string with a single space can match a log line that has a new line and a tab whitespace character.
```sql
- sourceCategory=stream_thread_dumps "VM Periodic Task\n\tThread"
+ _sourceCategory=stream_thread_dumps "VM Periodic Task\n\tThread"
```
:::note
The character `\n\t` is used to describe the new line + tab whitespace characters.
:::

All of the above queries, despite containing various whitespace characters, are normalized to a single space character by default and return the desired results. See the query below.
```sql
- sourceCategory=stream_thread_dumps "VM Periodic Task Thread"
+ _sourceCategory=stream_thread_dumps "VM Periodic Task Thread"
```
4 changes: 2 additions & 2 deletions docs/search/optimize-search-partitions.md
@@ -123,9 +123,9 @@ It may prevent you from searching horizontally without OR’ing partitions toget

This helps users easily identify the correct partition to use.

-### Keep your partition broadly scoped with sourceCategory and avoid keywords
+### Keep your partition broadly scoped with `_sourceCategory` and avoid keywords

-Use sourceCategory in your partitions definitions and avoid keywords to keep your partition broadly scoped. You can always narrow down the scope of your search when you query your partition.
+Use `_sourceCategory` in your partition definitions and avoid keywords to keep your partition broadly scoped. You can always narrow down the scope of your search when you query your partition.

### Group similar data together

4 changes: 2 additions & 2 deletions docs/search/optimize-search-performance.md
@@ -60,10 +60,10 @@ Here's a quick look at how to choose the right indexed search optimization tool.
| :-- | :-- | :-- |
| Run queries against a certain set of data | Choose if the quantity of data to be indexed is more than 2% of the total data. | Choose if the quantity of data to be indexed is less than 2% of the total data. |
| Use data to identify long-term trends |   | Yes |
- | Segregate data by sourceCategory | Yes |   |
+ | Segregate data by _sourceCategory | Yes |   |
| Have aggregate data ready to query |   | Yes |
| Use RBAC to deny or grant access to the data set | Yes | Yes |
- | Reuse the fields that I'm parsing for other searches against this same sourceCategory |   |   |
+ | Reuse the fields that I'm parsing for other searches against this same _sourceCategory |   |   |

## How is data added to Partitions and Scheduled Views?

2 changes: 1 addition & 1 deletion docs/search/search-cheat-sheets/log-operators.md
@@ -182,7 +182,7 @@ This section provides detailed syntax, rules, and examples for Sumo Logic Opera
<td>The backshift operator compares values as they change over time. Backshift can be used with rollingstd, smooth, or any other operators whose results could be affected by spikes of data (where a spike could possibly throw off future results).</td>
<td>_backshift</td>
<td>Can be used in Dashboard Panels, but in the search they must be included after the first <code>group-by</code> phrase.</td>
- <td><code>_sourcecategory=katta <br/>| timeslice by 1m <br/>| count by _timeslice,_sourcehost <br/>| sort + _timeslice <br/>| backshift _count,1 by _sourcehost</code></td>
+ <td><code>_sourceCategory=katta <br/>| timeslice by 1m <br/>| count by _timeslice,_sourcehost <br/>| sort + _timeslice <br/>| backshift _count,1 by _sourcehost</code></td>
</tr>
<tr>
<td><a href="/docs/search/search-query-language/search-operators/base64decode">base64Decode</a></td>
Original file line number Diff line number Diff line change
@@ -61,12 +61,12 @@ You can only perform a lookup using fields defined as primary keys. If the key c

* `srcDevice_ip`
* `eventTime`
- * `sourceCategory`
+ * `_sourceCategory`

your lookup query scope must include:

```sql
- ... on srcDevice_ip=srcDevice_ip and eventTime=eventTime and sourceCategory=sourceCategory
+ ... on srcDevice_ip=srcDevice_ip and eventTime=eventTime and _sourceCategory=sourceCategory
```
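Putting the scope together, a full lookup query could be sketched as follows (the search scope, table path, and the `action` output column are hypothetical, not values from this page):

```sql
_sourceCategory=prod/app
| lookup action from path://"/Library/Users/me@example.com/device_actions"
  on srcDevice_ip=srcDevice_ip and eventTime=eventTime and _sourceCategory=sourceCategory
```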

## Syntax 
Original file line number Diff line number Diff line change
@@ -56,7 +56,7 @@ You can bake the Collector into an image, install it manually, or use automat
* Host Metrics (Sumo's [Host Metrics source](/docs/send-data/installed-collectors/sources/host-metrics-source.md) is required.)
* Logs are cached locally, so if a source is throttled by Sumo, you won’t drop data.  
* You can bake Installed Collectors into AMIs to allow for consistent deployments across all your hosts.
- * Configurable metadata. You can use variables available from Docker and the Docker host to configure the sourceCategory and sourceHost for a Docker log source or a Docker stats. For more information, see Configure sourceCategory and sourceHost using variables.
+ * Configurable metadata. You can use variables available from Docker and the Docker host to configure the `_sourceCategory` and `sourceHost` for a Docker log source or a Docker stats source. For more information, see Configure `_sourceCategory` and `sourceHost` using variables.
* **Cons**
* Maintaining AMIs can be tricky if the process is not automated, so this might be a disadvantage, depending on your situation and resources. 
* It’s not as easy to set up this method to monitor selected containers on a host, as opposed to all containers. You might need to configure multiple sources to achieve this goal.
@@ -70,7 +70,7 @@ Logic collector.
* No need to bake into any AMIs. Can be fully automated depending on your automation tooling around Docker.
* The Collector will cache the files in the container, so if a Source is throttled by Sumo, you won’t drop data. Ensure that you have ample space, or use persistent storage.
* Easy to upgrade: it’s a container, just deploy a new one!
- * Configurable metadata. You can use variables available from Docker and the Docker host to configure the sourceCategory and sourceHost for a Docker log source or a Docker stats. For more information, see Configure sourceCategory and sourceHost using variables.
+ * Configurable metadata. You can use variables available from Docker and the Docker host to configure the `_sourceCategory` and `sourceHost` for a Docker log source or a Docker stats source. For more information, see Configure `_sourceCategory` and `sourceHost` using variables.
* **Cons**
* With this method, you cannot collect host metrics from the Docker host. The Collector must be installed on the Docker host to get the host metrics. You can still collect container logs, container metrics and host logs.
* It’s not as easy to set up this method to monitor selected containers on a host, as opposed to all containers. You might need to configure multiple sources to achieve this goal.
2 changes: 1 addition & 1 deletion docs/send-data/collection/search-collector-or-source.md
@@ -6,7 +6,7 @@ description: Search for a Collector or Source on the Manage Collection page.

Many Sumo Logic customers have hundreds of collectors and sources installed and configured. But even with only 10 Collectors, sometimes it can be hard to find the one you need in the list.

- On the **Collection** page, a search field allows you to search for collectors and sources by name or sourceCategory using complete keywords.
+ On the **Collection** page, a search field allows you to search for collectors and sources by name or `_sourceCategory` using complete keywords.

To match partial keywords, use a wildcard. For example, use "**apache\***" to match "apacheprod".
