Merge pull request #4379 from newrelic/DOC7418-plinko
chore(Logs management): Get started plinko
barbnewrelic committed Oct 19, 2021
2 parents c03223b + 3bd66b8 commit 2a1162a
Showing 18 changed files with 169 additions and 89 deletions.
@@ -8,15 +8,7 @@ tags:
metaDescription: "Create, query, and manage data partition rules with NerdGraph, the New Relic GraphQL explorer."
---

-You can use NerdGraph at [api.newrelic.com/graphiql](https://api.newrelic.com/graphiql) to create, query, and manage your [data partition rules](/docs/logs/log-management/ui-data/data-partitions/) for logs. NerdGraph is our GraphQL-format API explorer.
-
-This document includes:
-
-* [The data partition rule schema](#data-partition-schema)
-* [An example query of data partition rules](#query-data-partition-rules)
-* [How to create a data partition rule](#create-data-partition-rules)
-* [How to update a data partition rule](#update-data-partition-rules)
-* [How to delete a data partition rule](#delete-data-partition-rules)
+You can use NerdGraph at [api.newrelic.com/graphiql](https://api.newrelic.com/graphiql) to create, query, and manage your [data partition rules](https://docs.newrelic.com/docs/logs/log-management/ui-data/data-partitions/) for logs. NerdGraph is our GraphQL-format API explorer.

## Data partition rule schema [#data-partition-schema]

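For orientation while reading this file's diff: data partition rules are managed through the `logConfigurations` namespace in NerdGraph. Below is a minimal query sketch; the field names (`dataPartitionRules`, `targetDataPartition`, and the placeholder account ID) are assumptions drawn from these docs, so verify them against the live schema in the GraphiQL explorer:

```
{
  actor {
    account(id: 1234567) {  # placeholder account ID
      logConfigurations {
        dataPartitionRules {
          id
          targetDataPartition
          description
          enabled
        }
      }
    }
  }
}
```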
@@ -8,15 +8,7 @@ tags:
metaDescription: How to create, query, and manage log parsing rules with NerdGraph, the New Relic GraphQL explorer.
---

-You can use NerdGraph at [api.newrelic.com/graphiql](https://api.newrelic.com/graphiql) to create, query, and manage your [parsing rules](/docs/logs/log-management/ui-data/parsing/) for logs. NerdGraph is our GraphQL-format API explorer.
-
-This document includes:
-
-* [The parsing rule schema](#data-partition-schema)
-* [An example query of parsing rules](#query-data-partition-rules)
-* [How to create a parsing rule](#create-data-partition-rules)
-* [How to update a parsing rule](#update-data-partition-rules)
-* [How to delete a parsing rule](#delete-data-partition-rules)
+You can use NerdGraph at [api.newrelic.com/graphiql](https://api.newrelic.com/graphiql) to create, query, and manage your [parsing rules](https://docs.newrelic.com/docs/logs/log-management/ui-data/parsing/) for logs. NerdGraph is our GraphQL-format API explorer.

## Data parsing schema [#parsing-schema]
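A NerdGraph query for parsing rules follows the same `logConfigurations` pattern as the data partition example above. This is a minimal sketch; the `parsingRules` field and its sub-fields mirror the schema table in this hunk but are assumptions to confirm in the GraphiQL explorer:

```
{
  actor {
    account(id: 1234567) {  # placeholder account ID
      logConfigurations {
        parsingRules {
          id
          description
          grok
          lucene
          enabled
        }
      }
    }
  }
}
```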

@@ -54,7 +46,7 @@ Available parsing rule fields include:
</tr>
<tr>
<td>lucene</td>
-<td>The search value used for your Grok pattern from the New Relic UI; for example, `logtype:alb`.
+<td>The search value used from the New Relic UI; for example, `logtype:alb`. For more information about valid Lucene functions in the New Relic UI, see our documentation about [logs query syntax](https://docs.newrelic.com/docs/logs/log-management/ui-data/query-syntax-logs/).
</td>
</tr>
<tr>
@@ -164,7 +156,7 @@ The response returned will look similar to this:
...
```

-## Create parsing rules [#create-data-partition-rules]
+## Create parsing rules [#create-parsing-rules]

This example creates a new log parsing rule. Before creating the rule, be sure to review the documentation about [log parsing](/docs/logs/log-management/ui-data/parsing/) and [built-in parsing rules](/docs/logs/log-management/ui-data/built-log-parsing-rulesets/).

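The full example body is elided in this hunk. As a purely illustrative sketch of the shape such a create mutation takes — the mutation name `logConfigurationsCreateParsingRule`, its input fields, and the Grok pattern are all assumptions to confirm against the schema in the GraphiQL explorer before use:

```
mutation {
  logConfigurationsCreateParsingRule(
    accountId: 1234567  # placeholder account ID
    rule: {
      description: "Parse ALB access logs"
      enabled: true
      lucene: "logtype:alb"  # which logs the rule applies to
      grok: "%{IP:client_ip} %{NUMBER:status_code}"  # hypothetical Grok pattern
    }
  ) {
    rule {
      id
    }
    errors {
      message
    }
  }
}
```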
@@ -16,50 +16,58 @@ redirects:
- /docs/logs/enable-log-management-new-relic/new-relic-logs/introduction-log-monitoring
---

-As applications move towards the cloud, microservices architecture is becoming more dispersed, making the ability to monitor logs essential. New Relic offers a fast, scalable log management platform so you can connect your logs with the rest of your telemetry and infrastructure data in a single place.
+As applications move towards the cloud, microservices architecture is becoming more dispersed, making the ability to monitor logs essential. New Relic offers a fast, scalable log management platform so you can connect your logs with the rest of your telemetry and infrastructure data in a single place. See how it works with this video (approx. 2 minutes).

+<Video
+  type="wistia"
+  id="3j7spmzlhc"
+/>

-Our log management solution provides deeper visibility into application and infrastructure performance data (events and errors) to reduce mean-time-to-resolve (MTTR) and quickly troubleshoot production incidents. It does this by providing super-fast searching capabilities, alerts, and co-location of application, infrastructure, and log data, while visualizing everything from a single place.
+Our log management solution provides deeper visibility into application and infrastructure performance data (events and errors) to reduce mean-time-to-resolve (MTTR) and quickly troubleshoot production incidents.

## Find problems faster, reduce context switching [#logs-definition]

-Log management provides a way to connect your log data with the rest of your application and infrastructure data, allowing you to get to the root cause of problems quickly, without losing context switching between tools.
+Log management provides a way to connect your log data with the rest of your application and infrastructure data. You can get to the root cause of problems quickly, without losing context by switching between tools.

Log management features include:

* Instantly search through your logs.
-* Visualize your log data directly from the [Logs UI](/docs/logs/new-relic-logs/ui-data/explore-your-data-new-relic-logs-ui).
-* Use logging data to create custom [charts](/docs/chart-builder/use-chart-builder/get-started/introduction-chart-builder), [dashboards](/docs/dashboards/new-relic-one-dashboards/get-started/introduction-new-relic-one-dashboards), and [alerts](/docs/alerts/new-relic-alerts/getting-started/introduction-new-relic-alerts).
+* Visualize your log data directly from the Logs UI.
+* Use logging data to create custom charts, dashboards, and alerts.
* Troubleshoot performance issues without switching between tools.
* Visualize everything in a single place.

## Bring in your logging data [#integrate-logs]

-To bring your log data into New Relic, you can:
+To forward your log data to New Relic, you can:

-* Use our [infrastructure monitoring agent](/docs/logs/forward-logs/forward-your-logs-using-infrastructure-agent/) as a lightweight data collector, without having to install additional software.
-* Select from a wide range of [log forwarding plugins](/docs/logs/forward-logs/enable-log-management-new-relic/), including Amazon, Microsoft, Fluentd, Fluent Bit, Kubernetes, Logstash, and more.
-* Use our [OpenTelemetry](/docs/integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-concepts/#logs) solutions.
-* Send your log data by using the [Log API](/docs/logs/log-management/log-api/) or [TCP endpoint](/docs/logs/log-management/log-api/use-tcp-endpoint-forward-logs-new-relic/).
+* Use our infrastructure monitoring agent as a lightweight data collector, without having to install additional software.
+* Select from a wide range of log forwarding plugins, including Amazon, Microsoft, Fluentd, Fluent Bit, Kubernetes, Logstash, and more.
+* Use our OpenTelemetry solutions.
+* Send your log data by using the Log API or TCP endpoint.

-Once log management is enabled, you can also connect your logs with your APM agent, Kubernetes clusters, or distributed tracing to get additional contextual logging data with our [logs in context extensions](/docs/logs/logs-context/configure-logs-context-apm-agents/).
+Once log management is enabled, you can also connect your logs with your APM agent, Kubernetes clusters, or distributed tracing to get additional contextual logging data with our logs in context extensions.

## View your logging data in New Relic [#find-data]

You can explore your logging data in the UI or by API:

* Logs UI at [one.newrelic.com](https://one.newrelic.com)
-* Logs UI for [EU region data center](/docs/using-new-relic/welcome-new-relic/get-started/our-eu-us-region-data-centers/) if applicable: [one.eu.newrelic.com](https://one.eu.newrelic.com)
+* Logs UI for EU region data center if applicable: [one.eu.newrelic.com](https://one.eu.newrelic.com)

-You can also query the `Log` data type. For example, use [NRQL](/docs/query-data/nrql-new-relic-query-language/getting-started/introduction-nrql) to run:
+You can also query the `Log` data type. For example, use NRQL to run:

```
SELECT * FROM Log
```

-You can also use [NerdGraph](/docs/apis/nerdgraph/get-started/introduction-new-relic-nerdgraph/), our GraphQL-format API, to request the exact data you need.
+You can also use NerdGraph, our GraphQL-format API, to request the exact data you need.

-For more information, see our documentation about [query options](/docs/query-your-data/explore-query-data/get-started/introduction-querying-new-relic-data/) in New Relic.
+## What's next [#what-next]
+
+Ready to get started with our log management solutions?
+
+1. If you don't have one already, [create a New Relic account](https://newrelic.com/signup). It's free, forever.
+2. [Forward your logs](/docs/logs/forward-logs/enable-log-management-new-relic/) to New Relic. **Recommendation:** Use our [infrastructure agent](https://docs.newrelic.com/docs/logs/forward-logs/forward-your-logs-using-infrastructure-agent/) as your log forwarder, so you can get logs in context of your platform and services directly in our UI.
+3. For apps monitored by a New Relic APM agent, configure [logs in context](/docs/logs/logs-context/configure-logs-context-apm-agents/).
+4. Explore the logging data across your platform with our [Logs UI](/docs/logs/log-management/ui-data/use-logs-ui/) in New Relic One, where you can add alerts, query your data, and create dashboards.
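Tying together the two query paths this file mentions: NerdGraph can also run NRQL on your behalf, so the same `Log` query can be issued through the GraphQL API. A minimal sketch, assuming a placeholder account ID:

```
{
  actor {
    account(id: 1234567) {  # placeholder account ID
      nrql(query: "SELECT * FROM Log") {
        results
      }
    }
  }
}
```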
@@ -13,7 +13,7 @@ redirects:
- /docs/logs/log-management/get-started/new-relic-log-management-security-privacy
---

-With our [log management](/docs/logs/log-management/get-started/get-started-log-management/) solution, you have direct control over what data is reported to New Relic. To ensure data privacy, and to limit the types of information New Relic receives, no customer data is captured except what you supply in API calls or log forwarder configuration. All data for the logs service is then reported to New Relic over HTTPS.
+With our log management solution, you have direct control over what data is reported to New Relic. To ensure data privacy, and to limit the types of information New Relic receives, no customer data is captured except what you supply in API calls or log forwarder configuration. All data for the logs service is then reported to New Relic over HTTPS.

This document describes additional security considerations for your logging data. For more information about New Relic's security measures:

@@ -16,7 +16,7 @@ Typically logs have a `message` field and level or severity, but we do not have

## Data storage [#events]

-Log records are stored by default in the `Log` event type. You can create additional event types by defining a custom data partition in Logs. The resulting types will always be prefaced with `Log_`. For detailed information, see our [data partitions documentation](/docs/logs/log-management/ui-data/data-partitions/).
+Log records are stored by default in the `Log` event type. You can create additional event types by defining a custom data partition in our Logs UI. The resulting types will always be prefaced with `Log_`. For detailed information, see our [data partitions documentation](/docs/logs/log-management/ui-data/data-partitions/).
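As a hypothetical illustration of that naming rule: a partition created as `Log_alerts` (an invented example name) would be queried like any other event type:

```
SELECT * FROM Log_alerts
```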

**Attributes:**

@@ -18,12 +18,10 @@ Often the lines that come before or after the log data that you queried can help
1. Go to **[one.newrelic.com > Logs](https://one.newrelic.com)**.
2. Enter a query to focus on the type of log info you need, then click **Query logs**.
3. Click the log line you want to examine in more detail.
-4. From the attributes list, click the **Show surrounding logs** <Icon name="fe-eye"/>
-icon.
+4. From the attributes list, click the **Show surrounding logs** <Icon name="fe-eye"/> icon.

Your selected (highlighted) log line is fixed so that you can still see it while you scroll the surrounding logs. To change the selected log, do any of the following:

-* Click the **Hide surrounding logs** <Icon name="eye-off" />
-icon.
+* Click the **Hide surrounding logs** <Icon name="eye-off" /> icon.
* Select another log line.
* Run a different query.
@@ -13,13 +13,13 @@ redirects:

## Problem

-When JSON content is sent in the log's message field, it's not automatically parsed, and it's not stored as attributes (key-value pairs). Instead, the content remains in the message. It also may be truncated if the message exceeds the [character limit](/docs/logs/log-management/troubleshooting/log-message-truncated/).
+When JSON content is sent in the log's message field, it's not automatically parsed, and it's not stored as attributes (key/value pairs). Instead, the content remains in the message. It also may be truncated if the message exceeds the [character limit](https://docs.newrelic.com/docs/logs/log-management/troubleshooting/log-message-truncated/).

## Solution

Reasons this may be happening:

* If the content is not valid JSON, it won't be parsed. Instead, it will be stored as a string and truncated if it exceeds the character limit.
-* If the content is valid JSON, it may have been "stringified" with escape characters. If that's the case, it will first be evaluated as a string, meaning that it will be truncated to 4096 characters before being evaluated as JSON. The result of the truncation will be invalid JSON, and the data will be stored as a string.
+* If the content is valid JSON, it may have been "stringified" with escape characters. If that's the case, it will first be evaluated as a string, meaning that it will be truncated to 4,096 characters before being evaluated as JSON. The result of the truncation will be invalid JSON, and the data will be stored as a string.

To solve this problem, send messages containing JSON that haven't been converted to a string. This content will be parsed even if the total length exceeds the character limit. If the JSON contains arrays, they'll be flattened and stored as unparsed strings.
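As a hypothetical illustration of the difference: the first payload below arrives as a JSON object and is parsed into attributes; the second is the same object "stringified", so it's treated as one long string and is subject to truncation:

```
{"message": {"user": "alice", "status": 200}}

{"message": "{\"user\": \"alice\", \"status\": 200}"}
```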
@@ -17,10 +17,62 @@ Not all log data in a message or for a specific attribute is being displayed. Th

## Solution

-This occurs because the New Relic's logs datastore limits field length to 4096 characters. Any data longer than that is truncated during ingestion.
+This occurs because the logs datastore in New Relic limits the field length to 4,096 characters. Any data longer than that is truncated during ingestion.

-If you have values exceeding the character limit, here are some options to explore:
+If you have values exceeding the character limit, here are some options to try:

-* Parse your log message into shorter key-value pairs. A common example is a single log line from an NGINX access log. That log message can be parsed using built-in parsing via [Logstash](https://www.elastic.co/guide/en/logstash/current/logstash-config-for-filebeat-modules.html#parsing-nginx), [Fluentd](https://docs.fluentd.org/parser/nginx), or [Fluent Bit](https://fluentbit.io/documentation/0.12/parser/).
-* Use JSON as an output format instead of plain text. JSON log messages will automatically be parsed into key/value pairs, which makes it much less likely to hit the character limit.
-* Split a long section of text across multiple fields, instead of it all being a part of the message field. For information on how to do this, contact your New Relic account representative or support.
+<table>
+<thead>
+<tr>
+<th style={{ width: "200px" }}>
+Troubleshooting tips
+</th>
+<th>
+Comments
+</th>
+</tr>
+</thead>
+
+<tbody>
+<tr>
+<td>
+Parse long messages
+</td>
+<td>
+Parse your log message into shorter key/value pairs. A common example is a single log line from an NGINX access log. That log message can be parsed using built-in parsing via [Logstash](https://www.elastic.co/guide/en/logstash/7.9/logstash-config-for-filebeat-modules.html), [Fluentd](https://docs.fluentd.org/parser/nginx), or [Fluent Bit](https://fluentbit.io/documentation/0.12/parser/). For more information, see our documentation about [parsing log data](https://docs.newrelic.com/docs/logs/log-management/ui-data/parsing/).
+</td>
+</tr>
+
+<tr>
+<td>
+Use JSON output
+</td>
+<td>
+Use JSON as an output format instead of plain text. JSON log messages will automatically be parsed into key/value pairs, which makes it much less likely to hit the character limit.
+</td>
+</tr>
+
+<tr>
+<td>
+Expand blob data
+</td>
+<td>
+The first 4,094 characters in a log message are stored as a string. The next 128,000 bytes are stored as a `blob`.
+
+To query for any log data in New Relic, run the following query:
+
+```
+SELECT * FROM Log
+```
+
+To expand the blob data, run the following query, using `message` or any other attribute. Be sure to enclose the blob's attribute with backticks. For example:
+
+```
+SELECT message, <var>another-attribute</var>, blob(`newrelic.ext.message`), blob(`newrelic.ext.<var>another-attribute</var>`) FROM Log
+```
+
+For more information, see our documentation about [long messages stored as blobs](https://docs.newrelic.com/docs/logs/log-management/ui-data/long-logs-blobs/).
+</td>
+</tr>
+</tbody>
+</table>
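As a sketch of the "parse long messages" tip above: a Grok pattern can break an NGINX access-log line into short key/value pairs. The pattern below uses standard Grok pattern names (`IP`, `USER`, `HTTPDATE`, `NUMBER`, and so on), but the log line and the exact pattern are hypothetical — adjust both to your log format:

```
# Hypothetical NGINX access-log line:
#   192.0.2.10 - alice [10/Oct/2021:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 512
%{IP:remote_addr} - %{USER:remote_user} \[%{HTTPDATE:time_local}\] "%{WORD:method} %{URIPATHPARAM:path} HTTP/%{NUMBER:http_version}" %{NUMBER:status} %{NUMBER:bytes_sent}
```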
