feat(aim): See description
- added compatibility section to intro
- replaced webp with gif in intro
- updated content for intro
- updated yml to move UI pages in UI folder
- added configuration doc for reference matmerial
- updated install intro screenshot
- updated some titles
akristen committed Feb 4, 2024
1 parent 597bcde commit 58552d9
Showing 8 changed files with 191 additions and 62 deletions.
Empty file.
89 changes: 63 additions & 26 deletions src/content/docs/ai-observability/drop-sensitive-data.mdx
@@ -6,9 +6,9 @@ freshnessValidatedDate: never

import apmAimDropFilters from 'images/apm_screenshot-crop_aim-drop-filters.webp'

Introducing an AI product into your environment can raise security concerns when end users chat with your AI assistant. Let's say an end user prompts an AI assistant to retrieve their own personal data, like an address or a credit card on file. As your AI model gathers context to respond to your end user, it carries the prompt through several events, and AI Monitoring captures data about each of them.

After AI Monitoring captures event data, you can direct the agent to drop sensitive information so it isn't stored in NRDB. Drop filters apply regex to NRQL queries so AI Monitoring can match sensitive data, then drop it. Even if your AI assistant posts sensitive information to your end user, the details of that conversation won't appear in our database. This doc walks you through how to set up drop filters.

<img
title="Drop filters page"
@@ -24,51 +24,86 @@ You can set up AI Monitoring to drop sensitive information before it's stored in

An AI assistant's message to an end user can be stored in NRDB under five separate event types:

* `LlmChatCompletion`
* `LlmTool`
* `LlmChain`
* `LlmChatCompletionSummary`
* `LlmChatCompletionMessage`

This means that the contents of the message are replicated across five different event types. For example, if an end user prompts the AI for credit card information, that data can be recorded when:

* The message passes through the completion API to the model
* The model calls a tool to gather context
* The message moves through the steps of a chain
* AI Monitoring summarizes the completion
* AI Monitoring records each individual completion message

<table>

<tbody>
<tr>
<td>
`LlmChatCompletion`
</td>
<td>
Captures the overall completion exchange between your end user and the model, so sensitive data in the prompt or response can appear here.
</td>
</tr>
<tr>
<td>
`LlmTool`
</td>
<td>
Captures the calls your model makes to external tools as it gathers context, so sensitive data passed to a tool can appear here.
</td>
</tr>
<tr>
<td>
`LlmChain`
</td>
<td>
Captures the intermediate steps your toolchain runs to assemble a response, so sensitive data carried through those steps can appear here.
</td>
</tr>
<tr>
<td>
`LlmChatCompletionSummary`
</td>
<td>
Captures high-level data about a completion, so sensitive data from the underlying messages can surface here.
</td>
</tr>
<tr>
<td>
`LlmChatCompletionMessage`
</td>
<td>
Captures the content of each individual prompt and response message, so sensitive data appears here verbatim.
</td>
</tr>
</tbody>

</table>

When you create a drop filter for one kind of sensitive data, it's critical that you apply the rule to the four other events. This ensures that no artifact of that sensitive data persists in our database.

## Create drop filters [#create]

Follow these procedures carefully as you create your first drop filter. Let's say you want to drop any instance of a customer's personal address whenever it appears in a prompt or response.

<Steps>
<Step>

## Go to the drop filters page [#filters-page]

Go to **[one.newrelic.com > All Capabilities > AI Monitoring > Drop filters](https://onenr.io/0PoR8KlvYwG)**, then click **Create drop filter**.

</Step>
<Step>

## Prepare your NRQL query [#nrql]

Drop filters use NRQL queries with regex to locate and drop sensitive data: the query targets an event type, and the regex matches the sensitive values you want to scrub from it.

There's a section at the end of this procedure with regex samples to get you started with drop filters.

</Step>
<Step>

## Add drop filter to four additional tables


New Relic stores its data in five different tables. When you create a drop filter to drop one kind of data (say, a birth date), you need to repeat the process four additional times to account for those five tables.
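Because the same rule needs to cover all five tables, it can help to template one regex across each event type. This sketch is purely illustrative: the NRQL shape shown here is an assumption for demonstration, so check the drop filters UI for the exact query syntax, and swap in your own regex.

```python
# Hypothetical sketch: reuse one regex across all five LLM event tables.
# The NRQL string below is illustrative only, not guaranteed drop-filter syntax.
EVENT_TYPES = [
    "LlmChatCompletion",
    "LlmTool",
    "LlmChain",
    "LlmChatCompletionSummary",
    "LlmChatCompletionMessage",
]

# Example pattern: 13-16 digits optionally separated by spaces or dashes
# (a rough credit-card-number shape).
CARD_REGEX = r"(?:\d[ -]?){13,16}"

queries = [
    f"SELECT * FROM {event} WHERE content RLIKE r'{CARD_REGEX}'"
    for event in EVENT_TYPES
]

for query in queries:
    print(query)
```

Generating all five queries up front makes it harder to forget one of the tables when you repeat the procedure.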

</Step>
@@ -121,6 +156,7 @@ REGEX INTRO HERE
title="Email address"
>
**Expression:**

```
([a-zA-Z0-9!#$'*+?^_`{|}~.-]+(?:@|%40)(?:[a-zA-Z0-9-]+\.)+[a-zA-Z0-9-]+)
```
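As a quick sanity check before pasting this expression into a drop filter, you can confirm what it matches using Python's `re` module (illustrative only; the drop filter itself evaluates the regex server-side, not in Python):

```python
import re

# The email expression from the sample above; it matches both a literal "@"
# and its URL-encoded form "%40".
EMAIL_RE = re.compile(
    r"([a-zA-Z0-9!#$'*+?^_`{|}~.-]+(?:@|%40)(?:[a-zA-Z0-9-]+\.)+[a-zA-Z0-9-]+)"
)

sample = "Reach me at jane.doe@example.com or jane%40example.com."
matches = EMAIL_RE.findall(sample)
print(matches)  # ['jane.doe@example.com', 'jane%40example.com']
```

Testing the pattern against sample prompts like this helps you verify it catches both plain and URL-encoded addresses before any real data reaches NRDB.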
@@ -230,3 +266,4 @@ REGEX INTRO HERE

## What's next? [#whats-next]

129 changes: 108 additions & 21 deletions src/content/docs/ai-observability/intro-to-ai-monitoring.mdx
@@ -4,39 +4,126 @@ metaDescription: 'AI Monitoring lets you observe the AI-layer of your tech stack
freshnessValidatedDate: never
---

import apmAiMonitoringAllCapabilities from 'images/apm_screenshot-full_ai-monitoring-all-capabilities.gif'

Your AI-powered app introduces new technologies into your environment that are hard to monitor, let alone understand. AI Monitoring bridges unfamiliar technologies with familiar solutions, deepening your knowledge about your AI's behavior.

<img
title="View your app's AI data"
alt="A gif that shows where to find your AI data in New Relic"
src={apmAiMonitoringAllCapabilities}
/>

<figcaption>
Go to **[one.newrelic.com](https://one.newrelic.com) > All Capabilities > AI Monitoring**. Choose your entity from the AI entities page to view your data.
</figcaption>

With AI Monitoring, you can:

* **Evaluate errors and bugs at the code level.** Use our trace view to find where errors and bugs appear in your vector databases, embedding processes, and completion APIs.
* **View feedback from your end users.** AI Monitoring correlates prompts, AI responses, and user feedback with transaction IDs so you can see where your AI assistant breaks down.
* **Track your AI's token usage.** See when token usage spikes, then optimize your AI-powered app to keep costs down.
* **Compare costs across different models.** Compare the performance, quality, and usage of different models within your toolchain.
## Get started with AI Monitoring [#start-aim]

AI Monitoring exposes your AI data as a subset of APM data. When an end user prompts your AI assistant, the assistant makes calls to collect context from your LLM and its supporting services and databases. This process of gathering context is called a harvest cycle, and it's the heart of your AI data. To get started, you need to:

* [Create a New Relic account](https://newrelic.com/signup)
* [Install one of our supported APM agents](/install/aim)

## Check app compatibility [#compatibility]

AI Monitoring supports specific languages and libraries. Refer to the table below to determine whether AI Monitoring is compatible with your AI-powered app. For example:

* Our Python agent supports instrumentation for Amazon Bedrock, OpenAI, and LangChain libraries.
* Our Java agent supports instrumentation for Amazon Bedrock, but not OpenAI or LangChain.
* If your AI-powered app makes calls to other providers not listed here, then AI Monitoring won't capture data about your app.

<table>
<thead>
<tr>
<th>

</th>
<th>
Amazon Bedrock
</th>
<th>
OpenAI
</th>
<th>
LangChain
</th>
</tr>
</thead>
<tbody>
<tr>
<td>
**Go**
</td>
<td>
<Icon style={{color: '#14E812'}} name="fe-check"/>
</td>
<td>
<Icon style={{color: '#14E812'}} name="fe-check"/>
</td>
<td>
<Icon style={{color: '#FA492B'}} name="fe-x-circle"/>
</td>
</tr>
<tr>
<td>
**Java**
</td>
<td>
<Icon style={{color: '#14E812'}} name="fe-check"/>
</td>
<td>
<Icon style={{color: '#FA492B'}} name="fe-x-circle"/>
</td>
<td>
<Icon style={{color: '#FA492B'}} name="fe-x-circle"/>
</td>
</tr>
<tr>
<td>
**Node.js**
</td>
<td>
<Icon style={{color: '#14E812'}} name="fe-check"/>
</td>
<td>
<Icon style={{color: '#14E812'}} name="fe-check"/>
</td>
<td>
<Icon style={{color: '#14E812'}} name="fe-check"/>
</td>
</tr>
<tr>
<td>
**Python**
</td>
<td>
<Icon style={{color: '#14E812'}} name="fe-check"/>
</td>
<td>
<Icon style={{color: '#14E812'}} name="fe-check"/>
</td>
<td>
<Icon style={{color: '#14E812'}} name="fe-check"/>
</td>
</tr>
<tr>
<td>
**Ruby**
</td>
<td>
<Icon style={{color: '#14E812'}} name="fe-check"/>
</td>
<td>
<Icon style={{color: '#FA492B'}} name="fe-x-circle"/>
</td>
<td>
<Icon style={{color: '#FA492B'}} name="fe-x-circle"/>
</td>
</tr>
</tbody>
</table>
12 changes: 6 additions & 6 deletions src/install/aim/intro.mdx
@@ -3,18 +3,18 @@ headingText: Introduction to AIM install process
componentType: default
---

import apmAimTraceWaterfallView from 'images/apm_screenshot-crop_aim-trace-waterfall-view.webp'
import apmAIResponsesDefaultPage from 'images/apm_screenshot-full_AI-Responses-default-page.webp'

AI Monitoring (AIM) lets you track the performance of the different apps, services, and databases that make up your AI assistant. When your AI responds to a prompt, the response contains a payload, which is made up of bits of data and decision points that New Relic can monitor. Instrumenting your AI-powered app allows us to ingest that data, letting you monitor and improve your AI assistant.

Setting up involves installing one of our language agents and enabling AI Monitoring at the config level. When you're done, your AI data will appear in our UI.
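As a sketch of that config step, the Python agent accepts an `ai_monitoring.enabled` flag in `newrelic.ini`; setting names vary by agent, so treat this as an assumption and confirm against your agent's configuration reference:

```ini
; newrelic.ini — illustrative sketch; verify the setting name for your agent
[newrelic]
ai_monitoring.enabled = true
```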

<img
title="AI Responses page for AIM"
alt="A screenshot of the AI Responses page for AI Monitoring"
src={apmAIResponsesDefaultPage}
/>

<figcaption>
Go to **[one.newrelic.com](https://one.newrelic.com) > All Capabilities > AI Monitoring**
</figcaption>
23 changes: 14 additions & 9 deletions src/nav/ai-observability.yml
@@ -1,15 +1,20 @@
title: AI Monitoring
path: ai-observability
pages:
- title: Introduction to AI Monitoring
  path: /docs/ai-observability/intro-to-ai-monitoring
- title: Install AI Monitoring
  path: /docs/install/aim
- title: Configure APM agents for AI Monitoring
  path: /docs/ai-observability/configure-for-aim
- title: View AI data
  pages:
    - title: Evaluate AI response data
      path: /docs/ai-observability/view-ai-data/ai-response-performance
    - title: Compare performance between models
      path: /docs/ai-observability/view-ai-data/compare-models
    - title: Correlate user feedback with AI responses
      path: /docs/ai-observability/user-feedback
- title: Filter out sensitive data
  path: /docs/ai-observability/drop-sensitive-data
