
Analytics

As soon as you integrate Portkey, you can view detailed, real-time analytics on cost, latency, and accuracy across all your LLM requests.

The analytics dashboard provides an interactive interface for understanding your LLM application. Here, you can see graphs and metrics covering requests to different LLMs, costs, latencies, tokens, user activity, feedback, cache hits, errors, and much more.

The metrics in the Analytics section help you understand the overall efficiency of your application, discover patterns, and identify areas for optimization.

Analytics Overview Dashboard

Charts

The dashboard provides insights into your users, errors, cache, feedback and also summarizes information by metadata.

Overview

The overview tab gives a 70,000-foot view of your application's performance. It highlights cost, tokens used, mean latency, and request volume, along with information on your users and top models.

This is a good starting point to then dive deeper.

Users

The users tab provides an overview of the user information associated with your Portkey requests. This data is derived from the user parameter in OpenAI SDK requests or the special _user key in the Portkey metadata header.

{% hint style="info" %} Portkey currently does not provide analytics on usage patterns for individual team members in your Portkey organization. The users tab is designed to track end-user behavior in your application, not internal team usage. {% endhint %}
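As a sketch, tagging a request with an end-user ID might look like the following. The header name and the user ID are illustrative, so verify the exact metadata header against Portkey's request documentation:

```python
import json

# A stable end-user identifier from your application (illustrative value).
user_id = "customer-1234"

# The special `_user` key in the Portkey metadata header attributes the
# request to this end user, which then surfaces on the Users tab.
headers = {"x-portkey-metadata": json.dumps({"_user": user_id})}

# Alternatively, the OpenAI SDK's standard `user` request parameter
# serves the same purpose, e.g. completions.create(..., user=user_id).
print(headers)
```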

Errors

Portkey automatically captures both API and accuracy errors. The charts give you a quick sense of error rates so you can debug further when needed.

The dashboard also shows you the number of requests rescued by Portkey through the various AI gateway strategies.
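One such strategy is a fallback across providers. Below is a minimal sketch of a gateway config with a fallback strategy, where a request that fails on the first target is retried on the second; the virtual key names are placeholders, and the exact config schema should be checked against the Configs documentation:

```python
import json

# Sketch of a Portkey gateway config using the fallback strategy.
# Virtual key names here are illustrative placeholders.
config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "openai-virtual-key"},     # primary provider
        {"virtual_key": "anthropic-virtual-key"},  # rescue provider
    ],
}
print(json.dumps(config))
```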

Error Analytics Dashboard

Cache

When you enable caching through the AI gateway, you can view data on the latency improvements and cost savings that caching delivers.
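For context, enabling the cache is done in the gateway config. The field names below follow Portkey's config schema as I understand it (verify against the cache documentation), and the virtual key is a placeholder:

```python
# Sketch of a gateway config with caching enabled.
# "simple" caches exact-match requests; "semantic" also matches
# similar prompts. max_age (seconds) is an assumed optional field.
config = {
    "cache": {"mode": "simple", "max_age": 3600},
    "virtual_key": "openai-virtual-key",  # illustrative placeholder
}
```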

Feedback

Portkey allows you to collect feedback on LLM requests through the logs dashboard or via API. You can view analytics on the collected feedback in this dashboard.
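As a sketch of the API route, a feedback payload ties a score to a previously logged request via its trace ID. The field names here are assumptions based on the Feedback API docs (the trace ID is a placeholder, not a real value):

```python
import json

# Sketch of a feedback payload for Portkey's feedback API.
# Field names are assumed; check the Feedback API reference.
payload = {
    "trace_id": "REQUEST_TRACE_ID",  # placeholder: the request being rated
    "value": 1,                      # e.g. a thumbs-up score
}
print(json.dumps(payload))
```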

Metadata Summary

Group your request data by metadata parameters to unlock usage insights. Select a metadata property from the dropdown to view request data grouped by that property's values.

This lets you answer questions like:

  1. Which users are we spending the most on?
  2. Which organisations have the highest latency?
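To make such groupings possible, attach the relevant metadata to each request. A minimal sketch, assuming the same metadata header as above (the key names and values are your own and purely illustrative here):

```python
import json

# Custom metadata keys you choose; the Metadata Summary can then
# group requests by any of them, e.g. by organisation.
metadata = {"organisation": "acme-inc", "environment": "production"}
headers = {"x-portkey-metadata": json.dumps(metadata)}
print(headers)
```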

Metadata Analytics