Merge pull request #16715 from lchockalingam/whats-new-03-28-aimonitoringga

akristen committed Mar 28, 2024
2 parents cad70b4 + dafa280, commit 8abc2bc
Showing 2 changed files with 20 additions and 0 deletions.
20 changes: 20 additions & 0 deletions src/content/whats-new/2024/03/whats-new-03-28-aimonitoringga.md
@@ -0,0 +1,20 @@
---
title: 'New Relic AI monitoring is now generally available'
summary: 'Gain in-depth insights across your AI application stack to improve performance, quality, and cost'
releaseDate: '2024-03-26'
learnMoreLink: ''
getStartedLink: 'https://docs.newrelic.com/docs/ai-monitoring/intro-to-ai-monitoring/'
---

We’re happy to announce that the industry’s first APM for AI, New Relic AI monitoring, is now available to all our customers.

![AI monitoring screenshot](./images/aimonitoringga.png "A screenshot that shows the tracing view")

New Relic AI monitoring gives you deep insights and unprecedented visibility across your entire AI stack, so you can build and run AI applications with confidence. You can now take advantage of:

* **Auto instrumentation:** New Relic agents come equipped with all AI monitoring capabilities, including full AI stack visibility, response tracing, model comparison, and simplified setup for popular AI frameworks and services like OpenAI, Bedrock, and LangChain in Python, Node.js, Ruby, and Go.
* **Full AI stack visibility:** Get a holistic view across your application, infrastructure, and the AI layer, including AI metrics like response quality and token counts, displayed alongside APM golden signals.
* **LLM response overview with end-user feedback:** Quickly identify trends and outliers in LLM responses with a consolidated view. Sentiment analysis and actual user feedback are now displayed alongside AI responses, empowering you to prioritize areas for improvement, ensure unbiased outputs, and maintain user trust.
* **Deep trace insights for every response:** Trace the lifecycle of complex LLM responses built with tools like LangChain to fix performance issues and quality problems such as bias, toxicity, and hallucination.
* **Enhanced data security:** Safeguard sensitive data (PII) sent to your AI application with new drop filter functionality, which lets you selectively exclude specific data types from monitoring to ensure compliance and protect user privacy.
* **Optimized model performance and cost:** Compare performance and cost across models or services in a single view to choose the model that best fits your needs.
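As a rough sketch of getting started with auto instrumentation: in the Python agent, AI monitoring is an opt-in agent configuration setting. The exact setting names and defaults can vary by agent and version, so treat the snippet below as illustrative and check the linked docs for your agent.

```ini
[newrelic]
license_key = YOUR_LICENSE_KEY
app_name = My LLM App

# Opt in to AI monitoring (disabled by default)
ai_monitoring.enabled = true

# Optional: skip recording raw prompt/completion content
# if responses may contain sensitive data
ai_monitoring.record_content.enabled = false
```

With this enabled, calls made through supported libraries such as the OpenAI client are instrumented automatically, with no code changes required in the application itself.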
