feat(sonic): implement copyedits
bradleycamacho committed May 26, 2023
1 parent aeec87b commit ee3f49e
Showing 3 changed files with 65 additions and 49 deletions.
40 changes: 21 additions & 19 deletions src/content/docs/journey-large-logs/filter.mdx
@@ -1,6 +1,6 @@
---
title: Filter log ingest
metaDescription: "Test"
title: Reduce log complexity and cost by filtering
metaDescription: "Reduce the complexity and cost of your log management by filtering your logs with drop rules"
---

import logsIngestPipeline from 'images/logs_diagram_ingest-pipeline.webp'
@@ -14,13 +14,13 @@ import logsDrop from 'images/logs_screenshot-drop-log.webp'
import logsIntro from 'images/logs_screenshot-full_intro.webp'


As we've talked about before, modern systems create massive amounts of logs. Not all of those are useful. In fact, there's a high chance when you look at your logs you'd find *most* aren't useful.
Modern systems create massive amounts of logs. Not all of those are useful. In fact, there's a high chance when you look at your logs you'd find *most* aren't useful. You might have a service that spews logs for every page load, or a backup service whose logs you never need to monitor.


<SideBySide>
<Side>

New Relic provides the ability to create rule sets that look at your logs and ignore logs that you haven't selected for ingest. This has a few key benefits:
With New Relic you can create drop rules that look at your logs and ignore logs that you haven't selected for ingest. This has a few key benefits:


* Lower costs by storing only the logs relevant to your account.
@@ -42,18 +42,18 @@ New Relic provides the ability to create rule sets that look at your logs and ig

## How drop filter rules work [#how-it-works]

A drop filter rule matches data based on a query. When triggered, the drop filter rule removes the matching data from the ingestion pipeline before it is written to the New Relic database (NRDB).
A drop filter rule matches data based on a query. When triggered, the drop filter rule removes the matching data from the ingestion pipeline before it is written to [the New Relic database (NRDB)](/docs/data-apis/get-started/nrdb-horsepower-under-hood/).

This creates an explicit demarcation between the logs being forwarded from your domain and the data that New Relic collects. Since the data removed by the drop filter rule doesn't reach our backend, it cannot be queried: the data is gone and cannot be restored.
This creates a clear distinction between the logs being forwarded from your domain and the data that New Relic collects. Since the data removed by the drop filter rule doesn't reach our backend, it can't be queried: the data is gone and cannot be restored.
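Conceptually, a drop filter applies its query as a predicate to every incoming record and discards matches before anything is written. A minimal sketch of that behavior (the record shape and predicate are hypothetical illustrations, not New Relic's internal pipeline):

```python
# Conceptual sketch of a drop filter in an ingest pipeline.
# Each predicate plays the role of one drop rule's query; matching
# records are discarded before they ever reach storage.

def apply_drop_filters(records, drop_predicates):
    """Yield only the records that no drop predicate matches."""
    for record in records:
        if any(pred(record) for pred in drop_predicates):
            continue  # matched a drop rule: never written, never queryable
        yield record

logs = [
    {"logtype": "node", "message": "GET /healthcheck 200"},
    {"logtype": "payments", "message": "charge failed"},
]

# Drop everything from the noisy legacy Node.js app (logtype=node).
kept = list(apply_drop_filters(logs, [lambda r: r.get("logtype") == "node"]))
print(kept)  # only the payments log survives
```

Note that the dropped record is simply never yielded: once filtered, there is nothing left to restore.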

## Decide which logs to drop [#decide]

Deciding which logs to keep and which logs to drop is a highly specific decision for each team and organization. Logs valuable to one organization may not be valuable to another. Regardless, here are a few suggestions on how to decide which logs are valuable and which to drop:

* **What logs do your team already use**: If your team already manually reviews a subset of logs regularly, that indicates those logs are valuable and should not be dropped. Likewise, if there is a set of lgos your team never looks at that might indicate they should be dropped.
* **What apps and systems are producing the most logs**: an app or system that creates a large amount of logs indicates you should spend time deciding what to do with those logs. Is it a valuable and widely used app which indicates you should keep most of the logs? Is it a redundent system which is spewing logs with minimal value?
* **What logs does your team rely on today?**: If your team already manually reviews a subset of logs regularly, that indicates those logs are valuable and should not be dropped. Likewise, if there is a set of logs your team never looks at, that might indicate they should be dropped.
* **What apps and systems produce the most logs?**: An app or system that creates a large amount of logs indicates you should spend time deciding what to do with those logs. Is it a valuable and widely used app, which indicates you should keep most of its logs? Is it a redundant system spewing logs with minimal value?

Do take note that while an app or system may be rarely used, that doesn't mean logs are always not valuable. You would hate to drop logs from an application that is barely used only for that application to go down in a few months with no easy way to troubleshoot.
Note that while an app or system may be rarely used, that doesn't mean its logs have no value. You would hate to drop logs from a rarely used application only for that application to go down in a few months with no easy way to troubleshoot.
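One way to answer "which systems produce the most logs" is to total approximate volume per source. A rough sketch of that aggregation (the `service` attribute is a hypothetical example; use whatever attribute your forwarder actually sets, such as `logtype`):

```python
from collections import Counter

def volume_by_source(records, key="service"):
    """Rank log sources by approximate volume, using the serialized
    size of each record as a rough proxy for ingest bytes."""
    totals = Counter()
    for record in records:
        totals[record.get(key, "unknown")] += len(str(record))
    return totals.most_common()

logs = [
    {"service": "legacy-node", "message": "x" * 400},
    {"service": "legacy-node", "message": "y" * 400},
    {"service": "payments", "message": "charge ok"},
]
print(volume_by_source(logs))  # legacy-node dominates the volume
```

A ranking like this makes it obvious where a single drop rule would have the biggest effect.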

<img
title="Logs architecture for drop filters in New Relic"
@@ -62,15 +62,15 @@ Do take note that while an app or system may be rarely used, that doesn't mean l
/>

<figcaption>
During the ingestion process, customer log data can be parsed, transformed, or dropped before being stored in the New Relic database (NRDB).
During ingest, customer log data can be parsed, transformed, or dropped before being stored in the New Relic database (NRDB).
</figcaption>

## Filter your log ingest [#filter-steps]

The following steps will guide you through how to drop logs in the New Relic UI.


Let's say Acme Corp creates 2TB of logs each day. They decide this too many logs to ingest for both cost and usability reasons. They take a look at their logs and realize over half of their daily logs are from a legacy Node.js application. While they need to keep the app around, they don't care for the logs created by this app. They decided to drop all logs created from the Node app.
Let's say Acme Corp creates 2TB of logs each day. They decide this is too many logs to ingest for both cost and usability reasons. They take a look at their logs and realize over half of their daily logs are from a legacy Node.js application. While they need to keep the app around, they don't care for the logs it creates, so they decide to drop all logs created by the Node.js app.


<Steps>
@@ -82,18 +82,18 @@ Let's say Acme Corp creates 2TB of logs each day. They decide this too many logs
<Step>
### Create your drop rule

Filter or query to the specific set of logs that contain the data to be dropped.
Filter or query to the specific set of logs that contain the data you want to drop.

There are a few ways to do this, but the easiest is to query for the logs you want to drop. In this case, ACME Corp would do the following:
There are a few ways to do this, but the easiest is to query for the logs you want to drop. In this case, you would do the following:


<SideBySide>
<Side>

1. Select **All partitions** near the search bar
2. Enter their query. In this case `logtype=node`
1. Select **All partitions** near the search bar.
2. Enter your query. In this case, `logtype=node`.
3. Press enter and confirm the correct logs appear.
4. Once the query is active, from **Manage data** on the left nav of the Logs UI, click **Create drop filter**.
4. Once the query is active, click **Create drop filter** on the left nav.
5. Give the drop rule a meaningful name.
6. Save the drop filter rule.
</Side>
@@ -111,7 +111,7 @@ Let's say Acme Corp creates 2TB of logs each day. They decide this too many logs
<Step>
### Drop attributes

ACME Corp still wants to reduce their ingest. They decided that they don't need certain attributes in their stored logs, so they decide to drop attributes such as `purchase_order`.
Acme Corp still wants to reduce their ingest. They decide that they don't need certain attributes in their stored logs, so they drop attributes such as `purchase_order`.
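Unlike a data drop rule, an attribute drop keeps the log line and strips only the named fields before storage. A minimal sketch of the effect (record shape is a hypothetical illustration):

```python
def drop_attributes(records, attributes):
    """Return copies of the records with the given attributes removed.
    The log lines themselves are kept; only the named fields are dropped."""
    attrs = set(attributes)
    return [
        {k: v for k, v in record.items() if k not in attrs}
        for record in records
    ]

logs = [
    {"logtype": "checkout", "purchase_order": "PO-1234", "message": "order placed"},
]

cleaned = drop_attributes(logs, ["purchase_order"])
print(cleaned)  # the message survives; purchase_order is gone
```

This is why attribute drops are a gentler cost lever than full drop rules: the record still lands in storage and stays queryable, just smaller.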

<SideBySide>
<Side>
@@ -133,11 +133,13 @@ Let's say Acme Corp creates 2TB of logs each day. They decide this too many logs
</Step>
</Steps>

Repeate the above steps as many times as required until you're happy with your log ingest. If you need help querying for logs and attributes, [check out our doc on log specific syntax](/docs/logs/ui-data/query-syntax-logs/).
Repeat the above steps as many times as required until you're happy with your log ingest. If you need help querying for logs and attributes, [check out our doc on log-specific syntax](/docs/logs/ui-data/query-syntax-logs/) or our doc on [more complex log filtering](/docs/logs/ui-data/drop-data-drop-filter-rules/).
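Drop rules can also be managed programmatically through NerdGraph's `nrqlDropRulesCreate` mutation. A sketch of building that request in Python — the account ID is a placeholder, and you should confirm the mutation's current shape in the NerdGraph API explorer before relying on it:

```python
import json

# Sketch: build a NerdGraph request that creates a drop rule.
# ACCOUNT_ID is a placeholder; the request would be POSTed to
# https://api.newrelic.com/graphql with an `API-Key` header.
ACCOUNT_ID = 1234567

mutation = """
mutation($accountId: Int!, $rules: [NrqlDropRulesCreateDropRuleInput!]!) {
  nrqlDropRulesCreate(accountId: $accountId, rules: $rules) {
    successes { id }
    failures { error { reason description } }
  }
}
"""

payload = {
    "query": mutation,
    "variables": {
        "accountId": ACCOUNT_ID,
        "rules": [{
            "action": "DROP_DATA",
            "nrql": "SELECT * FROM Log WHERE logtype = 'node'",
            "description": "Drop legacy Node.js logs",
        }],
    },
}

print(json.dumps(payload)[:80])
```

Scripting the mutation is useful when you manage many accounts or want drop rules under version control instead of clicking through the UI.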

<DocTiles numbered>
<DocTile title='Get started' path="/docs/journey-large-logs/get-started" ></DocTile>
<DocTile title='Filter and reduce your log ingest' label={{text: 'You are here', color: '#FCD672'}} path="/docs/journey-large-logs/filter" ></DocTile>
<DocTile title='Organize your logs' number='3' path="/docs/journey-large-logs/organize" ></DocTile>
</DocTiles>

<DocTiles>
<DocTile title='Organize your logs' number='3' path="/docs/journey-large-logs/organize" ></DocTile>
</DocTiles>
28 changes: 20 additions & 8 deletions src/content/docs/journey-large-logs/get-started.mdx
@@ -1,15 +1,16 @@
---
title: How to manage large log ingest
metaDescription: "Test"
metaDescription: "Learn how to use New Relic to handle a large amount of log data to reduce toil and save on cost"
---

import logsParsing from 'images/logs_screenshot_full-parsing.webp'
import logsPartition from 'images/logs_screenshot_full-partition.webp'


Modern systems create large amounts of log data. You might be dealing with hundreds of gigabytes to dozens of terabytes today, and the amount will continue to increase as your system scales. Anything past a few gigabytes of logs means hours of toil while you search through logs when need arises. Log management solutions, like New Relic, provide the tools to handle large sets of logs and make them managable — and more important, valuable.
Modern systems create large amounts of log data. You might be dealing with hundreds of gigabytes to dozens of terabytes today, and the amount will continue to increase as your system scales. When you need to search through your logs, you'll face hours of toil trying to uncover the valuable and relevant ones. Sending all your logs to a log management tool can help reduce this toil, but you'll quickly encounter organizational hurdles and rising costs as you ingest more logs. New Relic solves this problem by providing tools to ingest only valuable logs to reduce cost, a unified UI to correlate your logs to your services, and various ways to organize your logs before you drown in them.

This tutorial walks you through how to us New Relic to manage a large amount of log ingest. You'll start by forwarding your logs to New Relic, which means sending your log data to New Relic automatically. You'll then identify what logs to ingest and which to drop. Finally you'll organize your logs through partitions and parsing.

This tutorial walks you through how to use New Relic to manage a large amount of log ingest. You'll start by forwarding your logs to New Relic, which means sending your log data to New Relic automatically. You'll then identify what logs to ingest and which to drop. Finally you'll organize your logs through partitions and parsing.


<img
@@ -23,10 +24,11 @@ This tutorial walks you through how to us New Relic to manage a large amount of

Once you've identified you have a problem with managing logs, it's time to choose a log management platform. There are many platforms out there. Some focus on quick automation but sacrifice ease-of-use. Others focus on complex features, but obscure their pricing.

New Relic's philosphy when it comes to log management focuses on three things: we want out logs solution to be **flexible, transparent, and usage based**. Let's quickly talk about what these mean:
New Relic's philosophy when it comes to log management focuses on three things: we want our logs solution to be **flexible, transparent, and usage-based**. Let's quickly talk about what these mean:

* **Flexible**: everyone needs different things from their logs. Some may need to ingest a large amount for record keeping while some may need to ingest a small amount. Some may need to heavily parse their logs while other may barely parse their logs at all. New Relic provides a platform flexible enough to meet everyone's needs.
* **Transparent and usage-based**: only pay for logs you ingest. Not all logs are valuable, so there's no use in ingesting and paying for logs you will never use. In this tutorial we'll explore how to selectively ingest logs in an affordable and effective manner.
* **Flexible**: Everyone needs different things from their logs. Some may need to ingest a large amount for record keeping while others may need to ingest a small amount. Some may need to heavily parse their logs while others may barely parse their logs at all. Our log management platform gives you tools to manage what you send us.
* **Transparent**: There are no surprises in billing. New Relic charges you only for the data you ingest at a fixed price per gigabyte.
* **Usage-based**: Only pay for logs you ingest. Not all logs are valuable, so there's no use in ingesting and paying for logs you will never use. In this tutorial we'll explore how to selectively ingest logs in an affordable and effective manner.
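Because billing is per gigabyte ingested, the effect of dropping logs on your bill is simple arithmetic. A sketch of that estimate — the per-GB rate below is a placeholder for illustration, not an actual New Relic price:

```python
def monthly_log_cost(gb_per_day, price_per_gb, drop_fraction=0.0, days=30):
    """Estimate monthly ingest cost after dropping a fraction of logs."""
    ingested_gb = gb_per_day * (1.0 - drop_fraction) * days
    return ingested_gb * price_per_gb

# Example: 2 TB (2000 GB) per day, then dropping ~50% of logs
# from a noisy legacy app. The $0.30/GB rate is a placeholder.
before = monthly_log_cost(2000, 0.30)
after = monthly_log_cost(2000, 0.30, drop_fraction=0.5)
print(before, after)  # dropping half the logs halves the ingest bill
```

This is the core of the usage-based model: every gigabyte you choose not to ingest comes straight off the bill.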

## Let's begin: forward your logs [#forward]

@@ -53,7 +55,10 @@ To forward your log data to New Relic, choose one or more of these options:
APM agent
</td>
<td>
By default, our APM agents do two things: add metadata to your logs, which gives you logs in context (ability to see logs data in various relevant places in our platform) and forward your logs to New Relic.
By default, our APM agents do three things:
* Add metadata to your logs, which gives you logs in context (ability to see logs data in various relevant places in our platform)
* Forward your logs to New Relic.
* Report performance metrics for your application. [Read more about our APM capabilities](/introduction-apm/).

This is a popular option for DevOps teams and smaller organizations because it lets you easily report application logs, with no additional third-party solutions required. [Learn more about APM logs.](/docs/apm/new-relic-apm/getting-started/get-started-logs-context)
</td>
@@ -125,7 +130,14 @@ With our infrastructure agent, you can capture any logs present on your host, in
Compared to using an APM agent to report logs, this can take a little more setting up but gives you much more powerful options (for example, ability to collect custom attributes, which you can't do with <InlinePopover type="apm" /> agents).
</td>
<td>
[Install the infrastructure agent](/docs/logs/forward-logs/forward-your-logs-using-infrastructure-agent/)
<TechTileGrid>
<TechTile
name="Infrastructure agent"
icon="logo-newrelic"
to="/docs/logs/forward-logs/forward-your-logs-using-infrastructure-agent/"
/>

</TechTileGrid>
</td>
</tr>
<tr>
