
Commit

feat(aim): added jupyter notebook install branch, updated screenshot, updated intro per PM review, fixed ruby config doc
akristen committed Mar 18, 2024
1 parent 84cca5d commit 387f09c
Showing 10 changed files with 97 additions and 17 deletions.
@@ -8,6 +8,8 @@ AI monitoring allows agents to recognize and capture AI data. AI monitoring has

## Compatible AI libraries [#compatibility]

Below you can check if you're using the correct agent version for AI monitoring. You can also confirm whether our APM agents can collect data from the libraries or frameworks you use in your AI-powered app.

<table>
<thead>
<tr>
@@ -45,7 +47,7 @@ AI monitoring allows agents to recognize and capture AI data. AI monitoring has
</td>
<td>
* [OpenAI](https://pypi.org/project/openai/) library versions 1.13.3 and above
* [Boto3 AWS SDK for Python](https://docs.aws.amazon.com/pythonsdk/) versions ELEPHANT and above
* [Boto3 AWS SDK for Python](https://docs.aws.amazon.com/pythonsdk/)
* [LangChain](https://pypi.org/project/langchain/0.0.300/) versions 0.1.17 and above
</td>
</tr>
@@ -54,12 +56,14 @@ AI monitoring allows agents to recognize and capture AI data. AI monitoring has
[Ruby version 9.8.0 and above](/docs/apm/agents/ruby-agent/getting-started/ruby-agent-requirements-supported-frameworks/#digital-intelligence-platform)
</td>
<td>
[`ruby_openai` gem](https://github.com/alexrudall/ruby-openai) version 3.4.0 and above
[OpenAI gem](https://github.com/alexrudall/ruby-openai) version 3.4.0 and above
</td>
</tr>
</tbody>
</table>

Some SDKs support certain AI models. For example, Amazon Bedrock supports Anthropic models. If an SDK in the table supports a model that the table doesn't list itself, AI monitoring may still collect data about that model.

## What's next? [#whats-next]

* [Install AI monitoring](/install/ai-monitoring).
16 changes: 9 additions & 7 deletions src/content/docs/ai-monitoring/configure-ai-monitoring.mdx
@@ -17,7 +17,7 @@ Update default agent behavior for AI monitoring at these agent configuration doc

## Enable token count API [#enable-token]

If you've opted to remove message and input content from LLM event data, you can still forward token count information to New Relic. If you haven't disabled content from being recorded by the agent, you don't need to invoke the token count API.
If you've set `ai_monitoring.record_content.enabled` to `false`, you may still want to know how many tokens your app uses in an interaction. If you haven't disabled content from being recorded by the agent, you don't need to invoke the token count API.

Update your app code with the below code snippets:

@@ -60,26 +60,28 @@ Update your app code with the below code snippets:
```ruby
# Write a proc to calculate token counts
# The proc should accept a hash as an argument with two keys:
# :model => [String] The name of the LLM model
# :content => [String] The message content/prompt or
# :model => [String]
# The name of the LLM model
# :content => [String]
# The message content/prompt or
# embedding input
#
# It should return an Integer representing the token count for
# It should return an integer representing the token count for
# the content.
callback_proc = proc do |hash|
# your calculation behavior
hash[:model] + hash[:content]
10
end

# Register this callback proc using the API
# If you're using Rails, this might be placed in an initializer
#register this callback proc using the API
#if you're using Rails, this might be placed in an initializer
NewRelic::Agent.set_llm_token_count_callback(callback_proc)
```
</Collapser>
</CollapserGroup>
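For comparison, here's a minimal sketch of the same callback registered through the Python agent, assuming it exposes an equivalent `newrelic.agent.set_llm_token_count_callback` hook; the word-split count below is only a stand-in for whatever tokenizer matches your model:

```py
import newrelic.agent

# The callback receives the model name and the message content or prompt,
# and must return an integer token count
def token_count_callback(model, content):
    # Stand-in calculation; swap in the tokenizer that matches your model
    return len(content.split())

# Register the callback with the agent
newrelic.agent.set_llm_token_count_callback(token_count_callback)
```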

## Enable user feedback [#enable-feedback]
## Enable user feedback API [#enable-feedback]

AI monitoring can correlate trace IDs between a generated message from your AI and the feedback an end user submitted. When you update your code to correlate messages with feedback, you need to take the trace ID and pass it to the feedback call, as these two events occur at two different endpoints.
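For illustration, a minimal sketch of that flow with the Python agent, assuming its `current_trace_id` and `record_llm_feedback_event` APIs; how you carry the trace ID between the two endpoints is up to your app:

```py
import newrelic.agent

# At the endpoint that generates the AI response, capture the trace ID
# while the LLM transaction is still active
trace_id = newrelic.agent.current_trace_id()
# ...return the AI response to the user along with trace_id...

# At the endpoint that later receives the user's feedback, pass the same
# trace ID so the feedback correlates with the generated message
newrelic.agent.record_llm_feedback_event(
    trace_id=trace_id,
    rating="good",  # illustrative rating value
    category="informative",
    message="User found the answer helpful",
)
```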

10 changes: 5 additions & 5 deletions src/content/docs/ai-monitoring/intro-to-ai-monitoring.mdx
@@ -8,9 +8,9 @@ import aiTraceViewIntroPage from 'images/ai_screenshot-full_trace-view-intro-pag

import aiAIResponsesOverview from 'images/ai_screenshot-full_AI-responses-overview.webp'

When people talk about artificial intelligence, they can mean different things. At New Relic, when we say AI, we mean the layer of your environment that uses a large language model (LLM) to generate a response when it receives an end user prompt.
When people talk about artificial intelligence, they can mean different things. At New Relic, when we say AI, we mean the layer of your environment that uses a large language model (LLM) to generate a response when it receives an end user prompt. AI monitoring is an APM solution that gives you end-to-end visibility into your AI-powered app.

With AI monitoring, you can measure the performance of the engine powering your AI assistant, so that you can ensure your users have the best possible experience. To get started, all you need is to install one of our APM agents and enable AI monitoring.
With AI monitoring, you can measure the performance of the engine powering your AI app, so that you can ensure your users have the best possible experience. To get started, all you need is to install one of our APM agents and enable AI monitoring.

<img
title="Trace waterfall for AI monitoring"
@@ -42,9 +42,9 @@ Enabling AI monitoring allows the agent to recognize AI metadata associated with

AI monitoring can help you answer critical questions about AI app performance: are your end users waiting too long for a response? Is there a recent spike in token usage? Are there patterns of negative user feedback around certain topics? With AI monitoring, you can see data specific to the AI layer:

* Refer to the responses table to identify errors in specific prompt and response interactions. If an error occurs, open the trace waterfall view to scope to the methods and calls your AI-powered app makes when generating a response.
* Did your prompt engineers update the prompt parameters for your AI? You can track whether token usage spiked and dropped after this change, helping you make decisions that keep costs down.
* Maybe you're adjusting the logic behind an app in development, but you want to ensure that it's cost efficient before it goes into production. If you're using different models in different app environments, you can compare the cost and performance of apps against each other.
* Identify errors in specific prompt and response interactions from the response table. If an error occurs, open the trace waterfall view to scope to the methods and calls your AI-powered app makes when generating a response.
* Did your prompt engineers update prompt parameters for your AI? Track whether token usage spiked and dropped after this change, then make decisions that keep costs down.
* Maybe you're fine-tuning your app in development, but you want to confirm it's cost efficient before it goes to production. If you're using different models in different app environments, you can compare the cost and performance of your apps before deploying.

## Get started with AI monitoring [#get-started-ai-monitoring]

@@ -757,7 +757,7 @@ For information on ignored and expected errors, [see this page on Error Analytic
This section includes Ruby agent configurations for setting up AI monitoring.

<Callout variant="important">
You need to set [distributed_tracing](/docs/apm/agents/nodejs-agent/installation-configuration/nodejs-agent-configuration/#dt-main) to `true` to capture trace and feedback data. It is turned on by default in Node.js agents 8.3.0 and higher.
You need to enable distributed tracing to capture trace and feedback data. It is turned on by default in Ruby agents 8.0.0 and higher.
</Callout>

<CollapserGroup>
Binary file not shown.
32 changes: 32 additions & 0 deletions src/install/ai-monitoring/agent-lang/python-jupyter-config.mdx
@@ -0,0 +1,32 @@
---
componentType: default
headingText: Update your code for the Python agent
freshnessValidatedDate: never
---

1. Update your code so that the agent initializes when you spin up your script or notebook. Keep in mind that you need to define your app name for the agent to initialize:

```py
import newrelic.agent

# Initialize the agent from your config file, then register with New Relic
newrelic.agent.initialize("newrelic.ini")
newrelic.agent.register_application(timeout=10)
```

2. Add your LLM calls to a method in your code. For example, the calls below invoke several Amazon Bedrock models through helper functions (a hypothetical sketch of one such helper follows the snippet):

```py
import boto3

# Create a Bedrock runtime client; the agent instruments boto3 calls
bedrock_runtime = boto3.client("bedrock-runtime", "us-east-1")

# Helper functions, defined elsewhere in your code, that invoke each model
runTitan(bedrock_runtime)
runAnthropic(bedrock_runtime)
runAi21(bedrock_runtime)
runCohere(bedrock_runtime)
runMeta(bedrock_runtime)
runTitanEmbedding(bedrock_runtime)
runCohereEmbedding(bedrock_runtime)
```
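The `run*` helpers above aren't defined in this snippet. As a purely hypothetical sketch, one of them might look like the following, assuming the Amazon Titan Text Express model; adjust the model ID and request body to the models you actually call:

```py
import json

def runTitan(bedrock_runtime):
    # Hypothetical helper: invoke a Titan text model through the
    # instrumented boto3 client so the agent can capture the LLM call
    body = json.dumps({"inputText": "What can you tell me about observability?"})
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-text-express-v1",
        body=body,
        contentType="application/json",
        accept="application/json",
    )
    print(json.loads(response["body"].read()))
```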

3. Add this code snippet to ensure that the agent sends any remaining data to New Relic when you shut down your script or notebook. The argument is a timeout: the agent waits up to 60 seconds for the upload to finish:

```py
# Wait up to 60 seconds for the agent to finish sending remaining data
newrelic.agent.shutdown_agent(60)
```
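Putting the three steps together, a minimal end-to-end sketch of what a notebook or script might run (the `runTitan` helper is the same hypothetical one sketched above):

```py
import boto3
import newrelic.agent

# 1. Initialize and register the agent (the app name comes from your config)
newrelic.agent.initialize("newrelic.ini")
newrelic.agent.register_application(timeout=10)

# 2. Make LLM calls through the instrumented boto3 client
bedrock_runtime = boto3.client("bedrock-runtime", "us-east-1")
runTitan(bedrock_runtime)

# 3. Send any remaining data before the kernel or script stops
newrelic.agent.shutdown_agent(60)
```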
29 changes: 29 additions & 0 deletions src/install/ai-monitoring/agent-lang/python-jupyter-install.mdx
@@ -0,0 +1,29 @@
---
componentType: default
optionType: agent-lang
headingText: Register the Python agent in your code
---

1. In your Jupyter notebook or Python script, install the Python agent (in a notebook cell, prefix the command with `!` to run it as a shell command):

```python
pip install git+https://github.com/newrelic/newrelic-python-agent
```

This version of the Python agent already enables the `nr-openai-observability` library.

2. If you haven't updated your `newrelic.ini` file manually, then add these environment variables to your notebook:

```python
import os

# Application name that appears in New Relic
os.environ["NEW_RELIC_APP_NAME"] = "openai-example"
# Raise the custom event attribute length limit so long prompts and responses aren't truncated
os.environ["NEW_RELIC_CUSTOM_INSIGHTS_EVENTS_MAX_ATTRIBUTE_VALUE"] = "100000"
# Capture application logs and forward them to New Relic
os.environ["NEW_RELIC_APPLICATION_LOGGING_ENABLED"] = "true"
os.environ["NEW_RELIC_APPLICATION_LOGGING_FORWARDING_ENABLED"] = "true"
# Distributed tracing is required to capture trace and feedback data
os.environ["NEW_RELIC_DISTRIBUTED_TRACING_ENABLED"] = "true"
```

You can refer to the [Python configuration doc](/docs/apm/agents/python-agent/configuration/python-agent-configuration/#ai-monitoring) to learn more about AI monitoring configurations.
* Refer to the [event harvest section](/docs/apm/agents/python-agent/configuration/python-agent-configuration/#harvest-limits-span-event-data) for `event_harvest_config.harvest_limits.span_event_data`
* Refer to the [custom events section](/docs/apm/agents/python-agent/configuration/python-agent-configuration/#harvest-limits-custom-event-data) for `event_harvest_config.harvest_limits.custom_event_data`
Binary file added src/install/assets/install-logos/jupyter.png
13 changes: 13 additions & 0 deletions src/install/config/ai-monitoring.yaml
@@ -17,6 +17,9 @@ appInfo:
- value: 'python'
displayName: 'Python'
logo: 'python'
- value: 'jupyter'
displayName: 'Python (Jupyter Notebook or script)'
logo: 'jupyter'
- value: 'ruby'
displayName: 'Ruby'
logo: 'ruby'
@@ -40,6 +43,11 @@ steps:
- optionType: agent-lang
options:
- value: 'python'
- filePath: 'src/install/ai-monitoring/agent-lang/python-jupyter-install.mdx'
selectedOptions:
- optionType: agent-lang
options:
- value: 'jupyter'
- filePath: 'src/install/ai-monitoring/agent-lang/nodejs-install.mdx'
selectedOptions:
- optionType: agent-lang
@@ -67,6 +75,11 @@ steps:
- optionType: agent-lang
options:
- value: 'python'
- filePath: 'src/install/ai-monitoring/agent-lang/python-jupyter-config.mdx'
selectedOptions:
- optionType: agent-lang
options:
- value: 'jupyter'
- filePath: 'src/install/ai-monitoring/agent-lang/ruby-config.mdx'
selectedOptions:
- optionType: agent-lang
4 changes: 2 additions & 2 deletions src/nav/ai-monitoring.yml
@@ -3,10 +3,10 @@ path: /docs/ai-monitoring
pages:
- title: Introduction to AI monitoring
path: /docs/ai-monitoring/intro-to-ai-monitoring
- title: Install AI monitoring
path: /install/ai-monitoring
- title: Compatibility and requirements
path: /docs/ai-monitoring/compatibility-requirements-ai-monitoring
- title: Install AI monitoring
path: /install/ai-monitoring
- title: Configure AI monitoring
path: /docs/ai-monitoring/configure-ai-monitoring
- title: View AI data in New Relic
