diff --git a/_config.yml b/_config.yml index 256902799..508931e71 100644 --- a/_config.yml +++ b/_config.yml @@ -104,7 +104,7 @@ navigation: title: "Configuring the Output Formats" interactivity: position: 80 - title: "Interactivity" + title: "Interactivity & AI" interactivity/bookmarks: position: 10 title: "Bookmarks" diff --git a/interactivity/AI-powered-insights.md b/interactivity/AI-powered-insights.md deleted file mode 100644 index 34c2d6bf2..000000000 --- a/interactivity/AI-powered-insights.md +++ /dev/null @@ -1,410 +0,0 @@ ---- -title: AI-Powered Insights -page_title: AI-Powered Insights in Report Preview -description: "Learn how to implement an AI-powered prompt UI as part of any web-based report viewer." -slug: telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights -tags: telerik, reporting, ai -published: True -position: 1 ---- - -# AI-Powered Insights Overview - -**AI Insights** is an AI-powered feature available during the report preview. It enables users to execute predefined or custom prompts on the core data of the previewed report, uncovering valuable insights, generating summaries, or answering specific questions. The feature also supports fine-tuning of the embedded Retrieval-Augmented Generation (RAG) algorithms, optimizing them to deliver accurate responses while minimizing token consumption. - ->tip For a working example of this functionality, check the [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights). - -![The UI of the AI system after configuration.](images/angular-report-viewer-with-ai-insights.png) - -## Feature Concept - -To bring the power of Generative AI (GenAI) into reporting workflows, we are introducing an **AI Prompt** dialog that integrates seamlessly in the report viewers. The dialog provides a convenient UI for sending predefined or custom prompts to an AI model, configured in the Reporting REST Service. The prompts and responses returned from the AI model are displayed in the Output panel of the dialog, allowing for easier tracking of the conversation. - -The AI conversation maintains context throughout user's interaction with a specific report. All previous questions and responses are preserved and sent to the AI model as context, enabling more coherent and contextually relevant conversations. However, this context is automatically cleared when report parameters are changed or when navigating to a different report, ensuring that each report session starts with a fresh conversation thread. - -The feature is supported by all [web report viewers]({%slug telerikreporting/using-reports-in-applications/display-reports-in-applications/web-application/html5-report-viewer/overview%}) and by the [WPF Report Viewer]({%slug telerikreporting/using-reports-in-applications/display-reports-in-applications/wpf-application/overview%}) connected to a remote Reporting REST Service. - -### Key Features: - -- **Retrieval-Augmented Generation (RAG)** - When enabled, the option activates an algorithm that filters out the irrelevant report data, producing accurate responses with reduced token usage. By default, the feature is enabled. - - When enabled, you may configure the RAG through the [AIClient ragSettings element]({%slug telerikreporting/aiclient-element%}##attributes-and-elements). - - You can disable the feature by setting the _AIClient allowRAG_ attribute to _false_. 
- -- **Predefined Summary Prompts** - Users can choose from a set of predefined prompts tailored for common tasks like summarization, explanation, and data insights—boosting efficiency with minimal effort. - -- **Custom AI Prompts** - Besides the predefined prompts, users can create and use custom prompts through the UI. - -- **End-User Consent for Data Sharing** - To ensure transparency and compliance, the AI Prompt requests explicit consent from users before sharing any data with GenAI services. - -Image of the Prompt UI - -## User Consent - -Before using the AI Prompt dialog, users must give consent for the AI to process their provided text. This ensures transparency and user control over their data. - -User Consent for AI Summaries - -## Configuration - -To enable the AI-powered insights functionality, you must provide a valid configuration that defines the AI client, model, and other essential details such as authentication credentials. This configuration also allows you to customize various aspects of the AI functionality, including user consent requirements, custom prompt permissions, and Retrieval-Augmented Generation (RAG) settings. The AI configuration is managed through the [report engine configuration]({%slug telerikreporting/using-reports-in-applications/export-and-configure/configure-the-report-engine/overview%}). For a complete list of available settings, check the table below. For an example configuration, check the [Example](#example) section. - -| Setting | Description | -| ------ | ------ | -|friendlyName|This setting specifies the name corresponding to the type of AI client you wish to use. For example, setting friendlyName to "MicrosoftExtensionsAzureOpenAI" indicates that the Azure OpenAI client is being utilized.| -|model|This setting specifies the AI model to be used for generating responses. For example, setting the model to "gpt-4o-mini" indicates that the GPT-4o mini model variant is being utilized.| -|endpoint|This setting specifies the URL of the AI service endpoint.| -|credential|This setting specifies the authentication credentials required to access the AI service. It ensures that the AI client can securely connect to the specified endpoint.| -|requireConsent|A boolean configuration option that determines whether users must explicitly consent to the use of AI models before the AI report insights features can be utilized within the application.| -|allowCustomPrompts|This setting is set to true by default. If you set it to `false`, users will only be able to use the predefined prompts and will not be allowed to ask custom prompts.| -|predefinedPrompts|This setting specifies a list of predefined prompts that the AI client can use. Each prompt is defined by a text attribute, which contains the prompt's content.| -|allowRAG|This setting specifies whether the [Retrieval-Augmented Generation (RAG)](https://en.wikipedia.org/wiki/Retrieval-augmented_generation) is allowed. The default value is _true_. Available only on projects targeting .NET8 or higher.| -|ragSettings|These settings specify the configuration of the [Retrieval-Augmented Generation (RAG)](https://en.wikipedia.org/wiki/Retrieval-augmented_generation) when allowed by the _allowRAG_ setting. 
Available only on projects targeting .NET8 or higher.| - -__AI clients__ - -There are four available options for the `friendlyName` setting: - -| Client Type | Friendly Name | -| ------ | ------ | -|Microsoft.Extensions.AI.AzureAIInference|"MicrosoftExtensionsAzureAIInference"| -|Microsoft.Extensions.AI.OpenAI + Azure.AI.OpenAI|"MicrosoftExtensionsAzureOpenAI"| -|Microsoft.Extensions.AI.Ollama|"MicrosoftExtensionsOllama"| -|Microsoft.Extensions.AI.OpenAI|"MicrosoftExtensionsOpenAI"| - -Depending on which option will be used, a corresponding `Telerik.Reporting.Telerik.Reporting.AI.Microsoft.Extensions.{name}` NuGet package must be installed in the project. In other words, please install one of the following packages before continuing with the configuration: - -- `Telerik.Reporting.AI.Microsoft.Extensions.AzureAIInference` -- `Telerik.Reporting.AI.Microsoft.Extensions.AzureOpenAI` -- `Telerik.Reporting.AI.Microsoft.Extensions.Ollama` -- `Telerik.Reporting.AI.Microsoft.Extensions.OpenAI` - -### Example - -Below is an example of how to configure the project for the `AzureOpenAI` option. - -````JSON -{ - "telerikReporting": { - "AIClient": { - "friendlyName": "MicrosoftExtensionsAzureOpenAI", - "model": "gpt-4o-mini", - "endpoint": "https://ai-explorations.openai.azure.com/", - "credential": "...", - "requireConsent": false, - "allowCustomPrompts": false, - "allowRAG": true, - "predefinedPrompts": [ - { "text": "Generate a summary of the report." }, - { "text": "Translate the report into German." } - ], - "ragSettings": { - "tokenizationEncoding": "Set Encoding Name Here", - "modelMaxInputTokenLimit": 15000, - "maxNumberOfEmbeddingsSent": 15, - "maxTokenSizeOfSingleEmbedding": 0, - "splitTables": true - } - } - } -} -```` -````XML - - - - - - - - -```` - -## Customization - -The workflow of instantiating the AI client and passing a request to it can be customized by overriding the following methods of the [ReportsController](/api/telerik.reporting.services.webapi.reportscontrollerbase) class: -* [CreateAIThread(string, string, ClientReportSource)](/api/telerik.reporting.services.webapi.reportscontrollerbase#Telerik_Reporting_Services_WebApi_ReportsControllerBase_CreateAIThread_System_String_System_String_Telerik_Reporting_Services_WebApi_ClientReportSource_) - called when the AI Prompt dialog is to be displayed. In this method, the AI client is instantiated either using the settings provided in the application configuration file, or by using the `AIClientFactory` instance provided with the Reporting REST Service Configuration (see [Extensibility]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%}#extensibility) below). Providing custom logic in the method allows to control the UI properties of the AI Prompt dialog: changing or disabling the consent message, enabling/disabling custom prompts, etc. This logic can be based on the currently previewed report, represented by the property `ClientReportSource`. - - * .NET - - ````C# -/// - /// Overrides the default , adding verification depending on the passed parameter. 
- /// - /// - public override IActionResult CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) - { - if (reportSource.Report == "report-with-disabled-ai-insights.trdp") - { - return StatusCode( - StatusCodes.Status403Forbidden, - new - { - message = "An error has occurred.", - exceptionMessage = "AI Insights functionality is not allowed for this report.", - exceptionType = "Exception", - stackTrace = (string?)null - } - ); - } - - return base.CreateAIThread(clientID, instanceID, reportSource); - } -```` - - - * .NET Framework - - ````C# -/// - /// Overrides the default , adding verification depending on the passed parameter. - /// - /// - public override HttpResponseMessage CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource) - { - if (reportSource.Report == "SampleReport.trdp") - { - var errorResponse = new - { - message = "An error has occurred.", - exceptionMessage = "AI Insights functionality is not allowed for this report.", - exceptionType = "Exception", - stackTrace = (string)null - }; - - return this.Request.CreateResponse(HttpStatusCode.Forbidden, errorResponse); - } - - return base.CreateAIThread(clientID, instanceID, reportSource); -} -```` - - -* [UpdateAIPrompts(ClientReportSource, AIThreadInfo)](/api/telerik.reporting.services.webapi.reportscontrollerbase#collapsible-Telerik_Reporting_Services_WebApi_ReportsControllerBase_UpdateAIPrompts_Telerik_Reporting_Services_WebApi_ClientReportSource_Telerik_Reporting_Services_Engine_AIThreadInfo_) - called internally during the execution of the `CreateAIThread()` method. Provides easier access to the predefined prompts, allowing to alter or disable them based on custom logic like the role of the currently logged user, or on the currently previewed report, represented by the property `ClientReportSource`. - - * .NET - - ````C# -/// - /// Modifies the collection of predefined prompts before displaying it in the AI Insights dialog. - /// - /// - /// - protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo) - { - if (reportSource.Report == "report-suitable-for-markdown-output.trdp") - { - aiThreadInfo.PredefinedPrompts.Add("Create a summary of the report in Markdown (.md) format."); - } - - base.UpdateAIPrompts(reportSource, aiThreadInfo); - } -```` - - - * .NET Framework - - ````C# -/// - /// Modifies the collection of predefined prompts before displaying it in the AI Insights dialog. - /// - /// - /// - protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo) - { - if (reportSource.Report == "report-suitable-for-markdown-output.trdp") - { - aiThreadInfo.PredefinedPrompts.Add("Create a summary of the report in Markdown (.md) format."); - } - - base.UpdateAIPrompts(reportSource, aiThreadInfo); -} -```` - - -* [GetAIResponse(string, string, string, string, AIQueryArgs)](/api/telerik.reporting.services.webapi.reportscontrollerbase#Telerik_Reporting_Services_WebApi_ReportsControllerBase_GetAIResponse_System_String_System_String_System_String_System_String_Telerik_Reporting_Services_Engine_AIQueryArgs_) - called every time when a prompt is sent to the AI model. 
Allows for examining or altering the prompt sent from the client, inspecting the state of the RAG optimization, or checking the estimated amount of tokens that the prompt will consume, by implementing a callback function assigned to the [ConfirmationCallback](/api/telerik.reporting.services.engine.aiqueryargs#collapsible-Telerik_Reporting_Services_Engine_AIQueryArgs_ConfirmationCallBack) property. Below, you will find several examples of how to override the `GetAIResponse` method to handle different scenarios. - - * .NET - - ````C# -/// - /// Modifies the prompt sent from the client before passing it to the LLM. - /// - /// - public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) - { - args.Query += $"{Environment.NewLine}Keep your response concise."; - - return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); - } -```` - - - ````C# -/// - /// Examines the approximate tokens count and determines whether the prompt should be sent to the LLM. - /// - /// - public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) - { - const int MAX_TOKEN_COUNT = 500; - args.ConfirmationCallBack = (AIRequestInfo info) => - { - if (info.EstimatedTokensCount > MAX_TOKEN_COUNT) - { - return ConfirmationResult.CancelResult($"The estimated token count exceeds the allowed limit of {MAX_TOKEN_COUNT} tokens."); - } - - return ConfirmationResult.ContinueResult(); - }; - - return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); - } -```` - - - ````C# -/// - /// Examines whether the RAG optimization is applied for the current prompt. - /// - /// - public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) - { - args.ConfirmationCallBack = (AIRequestInfo info) => - { - if (info.Origin == AIRequestInfo.AIRequestOrigin.Client) - { - System.Diagnostics.Trace.TraceInformation($"RAG optimization is {info.RAGOptimization} for this prompt."); - } - - return ConfirmationResult.ContinueResult(); - }; - - return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); - } -```` - - - * .NET Framework - - ````C# -/// - /// Modifies the prompt sent from the client before passing it to the LLM. - /// - /// - public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) - { - args.Query += $"{Environment.NewLine}Keep your response concise."; - - return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); - } -```` - - - ````C# -/// - /// Examines the approximate tokens count and determines whether the prompt should be sent to the LLM. - /// - /// - public override async Task GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args) - { - const int MAX_TOKEN_COUNT = 500; - args.ConfirmationCallBack = (AIRequestInfo info) => - { - if (info.EstimatedTokensCount > MAX_TOKEN_COUNT) - { - return ConfirmationResult.CancelResult($"The estimated token count exceeds the allowed limit of {MAX_TOKEN_COUNT} tokens."); - } - - return ConfirmationResult.ContinueResult(); - }; - - return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args); - } -```` - - - ````C# -/// - /// Examines whether the RAG optimization is applied for the current prompt. 
/// </summary>
public override async Task<HttpResponseMessage> GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args)
{
    args.ConfirmationCallBack = (AIRequestInfo info) =>
    {
        if (info.Origin == AIRequestInfo.AIRequestOrigin.Client)
        {
            System.Diagnostics.Trace.TraceInformation($"RAG optimization is {info.RAGOptimization} for this prompt.");
        }

        return ConfirmationResult.ContinueResult();
    };

    return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args);
}
````

## Extensibility

If necessary, the Reporting engine can use a custom `Telerik.Reporting.AI.IClient` implementation, which can be registered in the Reporting REST Service configuration:

* .NET

````C#
builder.Services.TryAddSingleton<IReportServiceConfiguration>(sp => new ReportServiceConfiguration
{
    HostAppId = "MyApp",
    AIClientFactory = GetCustomAIClient,
    // ...
});

static Telerik.Reporting.AI.IClient GetCustomAIClient()
{
    return new MyCustomAIClient(...);
}
````

* .NET Framework

````C#
public class CustomResolverReportsController : ReportsControllerBase
{
    static ReportServiceConfiguration configurationInstance;

    static CustomResolverReportsController()
    {
        configurationInstance = new ReportServiceConfiguration
        {
            HostAppId = "MyApp",
            AIClientFactory = GetCustomAIClient,
            // ...
        };
    }

    static Telerik.Reporting.AI.IClient GetCustomAIClient()
    {
        return new MyCustomAIClient(...);
    }
}
````

## See Also

* [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights)
* [AIClient Element Overview]({%slug telerikreporting/aiclient-element%})
* [Interface IClient](https://docs.telerik.com/reporting/api/telerik.reporting.ai.iclient)

diff --git a/interactivity/ai-powered-insights-overview.md b/interactivity/ai-powered-insights-overview.md
new file mode 100644
index 000000000..33c10b0c5
--- /dev/null
+++ b/interactivity/ai-powered-insights-overview.md
@@ -0,0 +1,58 @@
---
title: AI-Powered Insights Overview
page_title: AI-Powered Insights in Report Preview
description: "Learn about the AI Insights feature of Reporting, which allows users to execute predefined or custom prompts on the core data of the previewed report, receiving responses from an AI model."
slug: telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights
tags: telerik, reporting, ai
published: True
position: 1
---

# AI-Powered Insights Overview

**AI Insights** is an AI-powered feature available during the report preview. It enables users to execute predefined or custom prompts on the core data of the previewed report, uncovering valuable insights, generating summaries, or answering specific questions through an AI model. The feature also supports fine-tuning of the embedded Retrieval-Augmented Generation (RAG) algorithms, optimizing them to deliver accurate responses while minimizing token consumption.

>tip For a working example of this functionality, check the [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights).

![The UI of the AI system after configuration.](images/angular-report-viewer-with-ai-insights.png)

## How Does It Work?

To bring the power of Generative AI (GenAI) into reporting workflows, we are introducing an **AI Prompt** dialog that integrates seamlessly into the report viewers. The dialog provides a convenient UI for sending predefined or custom prompts to an external AI model (for example, GPT-5), configured in the Reporting REST Service.
The prompts and responses returned from the AI model are displayed in the **Output** panel of the dialog, allowing you to follow the conversation with ease.

The AI conversation maintains context throughout the user's interaction with a specific report. All previous questions and responses are preserved and sent to the AI model as context, enabling more coherent and contextually relevant conversations. However, this context is automatically cleared when report parameters are changed or when navigating to a different report, ensuring that each report session starts with a fresh conversation thread.

The feature is supported by all [web report viewers]({%slug telerikreporting/using-reports-in-applications/display-reports-in-applications/web-application/html5-report-viewer/overview%}) and by the [WPF Report Viewer]({%slug telerikreporting/using-reports-in-applications/display-reports-in-applications/wpf-application/overview%}) connected to a remote Reporting REST Service.

### Key Features

- Retrieval-Augmented Generation (RAG)—When enabled, the AI insights feature uses an algorithm that filters out the irrelevant report data, producing more accurate responses with reduced token usage.

- Predefined Summary Prompts—Users can choose from a set of predefined prompts tailored for common tasks like summarization, explanation, and data insights—boosting efficiency with minimal effort.

- Custom AI Prompts—Besides the predefined prompts, users can create custom prompts to ask more specific queries.

    *Image of the Prompt UI*

- End-User Consent for Data Sharing—To ensure transparency and compliance, the AI Prompt requests explicit consent from users before sending their prompts to the AI models.

    *User Consent for AI Summaries*

## Next Steps

To enable AI-Powered Insights in your application, choose one of these two implementation approaches:

- Use built-in AI client—For supported LLM providers (Azure OpenAI, OpenAI, Azure AI Foundry, or Ollama), follow the [Enable AI-Powered Insights with Built-in AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-builtin-client%}) guide.

- Create custom AI client—For unsupported LLM providers or when you need custom logic (like token usage tracking), refer to [Enable AI-Powered Insights with Custom AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-custom-client%}).

Once you have enabled the functionality, you can optionally:

- Customize the experience—Fine-tune settings like user consent, predefined prompts, and RAG optimization using the [Customizing AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}) article.

## See Also

* [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights)
* [Enable AI-Powered Insights with Built-in AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-builtin-client%})
* [Enable AI-Powered Insights with Custom AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-custom-client%})
* [Customizing AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%})

diff --git a/interactivity/built-in-client-integration-ai-insights.md b/interactivity/built-in-client-integration-ai-insights.md
new file mode 100644
index 000000000..4a915c235
--- /dev/null
+++ b/interactivity/built-in-client-integration-ai-insights.md
@@ -0,0 +1,83 @@
---
title: Enable AI-Powered Insights with Built-in AI Client
page_title: How to Enable AI-Powered Insights with Built-in AI Client
description: "Learn how to enable AI-powered insights using built-in support for popular LLM providers like Azure OpenAI, OpenAI, Azure AI Foundry, and Ollama."
slug: telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-builtin-client
tags: telerik, reporting, ai, rest
published: True
position: 2
---

# Enable AI-Powered Insights with Built-in AI Client

This tutorial shows how to enable and configure AI-powered insights using built-in support for popular LLM providers, such as Azure OpenAI, OpenAI, Azure AI Foundry, and Ollama, so that end users can run predefined or custom prompts against the data behind the currently previewed report and receive responses from an LLM.

> If you use a [Telerik Report Server](https://docs.telerik.com/report-server/introduction) rather than a standalone Telerik Reporting REST service, check the Report Server article [AI-Powered Features Settings](https://docs.telerik.com/report-server/implementer-guide/configuration/ai-settings) instead.

## Prerequisites

To follow the steps from this tutorial, you must have:

- A running application that hosts a [Telerik Reporting REST service]({%slug telerikreporting/using-reports-in-applications/host-the-report-engine-remotely/telerik-reporting-rest-services/overview%}).
- A report viewer connected to that REST service.
- An active subscription (or local runtime) for an LLM provider with API access. The providers supported out of the box are:
    - [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/deployments-overview)
    - [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/overview#how-do-i-get-access-to-azure-openai)
    - [OpenAI](https://platform.openai.com/docs/models)
    - [Ollama](https://docs.ollama.com/quickstart)

>tip You can also connect to LLM providers that are not supported out of the box. To do this, create a custom `Telerik.Reporting.AI.IClient` implementation to integrate the provider into Reporting and enable the AI-powered insights functionality. For more details, refer to the article [Enable AI-Powered Insights with Custom AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-custom-client%}).

## Using AI-Powered Insights with a REST Service

To enable the AI-powered insights functionality, follow these steps:

1. Install exactly one of the following NuGet packages, depending on the LLM provider you use:

    - `Telerik.Reporting.AI.Microsoft.Extensions.AzureAIInference`—for Azure AI Foundry
    - `Telerik.Reporting.AI.Microsoft.Extensions.AzureOpenAI`—for Azure OpenAI resources
    - `Telerik.Reporting.AI.Microsoft.Extensions.OpenAI`—for OpenAI
    - `Telerik.Reporting.AI.Microsoft.Extensions.Ollama`—for Ollama

1. Add the [AIClient element]({%slug telerikreporting/aiclient-element%}) to the report engine configuration in your application's configuration file. This element allows you to specify the AI model, endpoint, and authentication credentials. The following example demonstrates a basic Azure OpenAI configuration:

````JSON
{
  "telerikReporting": {
    "AIClient": {
      "friendlyName": "MicrosoftExtensionsAzureOpenAI",
      "model": "gpt-4o-mini",
      "endpoint": "https://ai-explorations.openai.azure.com/",
      "credential": "YOUR_API_KEY"
    }
  }
}
````
````XML
<!-- The XML counterpart of the JSON example above. The attribute names mirror the JSON settings; check the AIClient Element reference for the complete schema. -->
<Telerik.Reporting>
  <AIClient friendlyName="MicrosoftExtensionsAzureOpenAI"
            model="gpt-4o-mini"
            endpoint="https://ai-explorations.openai.azure.com/"
            credential="YOUR_API_KEY" />
</Telerik.Reporting>
````

>tip If you haven't configured the report engine previously, make sure to check the article [Report Engine Configuration Overview]({%slug telerikreporting/using-reports-in-applications/export-and-configure/configure-the-report-engine/overview%}) to get familiar with this topic.

The `friendlyName` attribute identifies the LLM provider to the report engine. Each provider has specific configuration requirements:

- Azure OpenAI: Use `MicrosoftExtensionsAzureOpenAI`. Requires `model`, `endpoint`, and `credential`.
- Azure AI Foundry: Use `MicrosoftExtensionsAzureAIInference`. Requires `model`, `endpoint`, and `credential`.
- OpenAI: Use `MicrosoftExtensionsOpenAI`. Requires only `model` and `credential` (uses the default OpenAI API endpoint).
- Ollama: Use `MicrosoftExtensionsOllama`. Requires only `model` and `endpoint` (no authentication needed for local deployments).
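
For example, a minimal configuration for a local Ollama setup could look like the following sketch. The model name is illustrative and the endpoint assumes Ollama's default local address; adjust both to match your environment:

````JSON
{
  "telerikReporting": {
    "AIClient": {
      "friendlyName": "MicrosoftExtensionsOllama",
      "model": "llama3.1",
      "endpoint": "http://localhost:11434"
    }
  }
}
````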

## See Also

* [AI-Powered Insights Overview]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%})
* [Enable AI-Powered Insights with Custom AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-custom-client%})
* [Customizing AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%})
* [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights)

diff --git a/interactivity/custom-client-integration-ai-insights.md b/interactivity/custom-client-integration-ai-insights.md
new file mode 100644
index 000000000..7c649f1eb
--- /dev/null
+++ b/interactivity/custom-client-integration-ai-insights.md
@@ -0,0 +1,306 @@
---
title: Enable AI-Powered Insights with Custom AI Client
page_title: How to Enable AI-Powered Insights with Custom AI Client
description: "Learn how to enable AI-powered insights by creating a custom IClient implementation to integrate unsupported LLM providers or implement custom logic."
slug: telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-custom-client
tags: telerik, reporting, ai, custom, implementation
published: True
position: 3
---

# Enable AI-Powered Insights with Custom AI Client

While Telerik Reporting provides built-in support for popular LLM providers like Azure OpenAI, OpenAI, Azure AI Foundry, and Ollama, you may need to integrate with other AI services or implement custom logic, such as token usage tracking. This article shows how to enable AI-powered insights by creating a custom `IClient` implementation to connect any LLM provider.

## Prerequisites

To follow the steps from this tutorial, you must have:

- A running application that hosts a [Telerik Reporting REST service]({%slug telerikreporting/using-reports-in-applications/host-the-report-engine-remotely/telerik-reporting-rest-services/overview%}).
- A report viewer connected to that REST service.
- An active subscription (or local runtime) for an LLM provider with API access.

## Enabling Custom AI Client

To enable a custom AI client implementation, follow these steps:

1. Create a class that implements the `Telerik.Reporting.AI.IClient` interface. The following example demonstrates an Azure OpenAI integration for illustration purposes, though you can use any LLM provider:

    * .NET

    ````C#
    using Azure.AI.OpenAI;
    using Microsoft.Extensions.AI;
    using System.ClientModel;
    using Telerik.Reporting.AI;

    namespace WebApplication1.AI;

    public class CustomAIClient : IClient
    {
        public string Model { get; } = "gpt-4o-mini";

        public bool SupportsSystemPrompts => false;

        private readonly IChatClient chatClient;

        public CustomAIClient()
        {
            string endpoint = "https://ai-explorations.openai.azure.com/";
            string credential = "YOUR_API_KEY";
            string model = "gpt-4o-mini";

            chatClient = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(credential))
                .GetChatClient(model)
                .AsIChatClient();
        }

        public async Task<IReadOnlyCollection<IMessage>> GetResponseAsync(IReadOnlyCollection<IMessage> query, CancellationToken cancellationToken)
        {
            // Convert Telerik.Reporting.AI IMessage to Microsoft.Extensions.AI ChatMessage
            var chatMessages = new List<ChatMessage>();
            foreach (var message in query)
            {
                ChatRole chatRole = message.Role switch
                {
                    MessageRole.System => ChatRole.System,
                    MessageRole.Assistant => ChatRole.Assistant,
                    MessageRole.User => ChatRole.User,
                    _ => throw new ArgumentException($"Invalid MessageRole: {message.Role}")
                };

                // Convert text contents from Telerik.Reporting.AI to Microsoft.Extensions.AI
                var textContents = message.Contents
                    .OfType<Telerik.Reporting.AI.TextContent>()
                    .Select(textContent => new Microsoft.Extensions.AI.TextContent(textContent.Text))
                    .Cast<AIContent>()
                    .ToList();

                chatMessages.Add(new ChatMessage(chatRole, textContents));
            }

            // Call Azure OpenAI
            var response = await chatClient.GetResponseAsync(chatMessages, new ChatOptions(), cancellationToken);

            // Convert the response back to Telerik.Reporting.AI IMessage
            var resultMessages = new List<IMessage>();
            foreach (var responseMessage in response.Messages)
            {
                MessageRole messageRole = responseMessage.Role.Value switch
                {
                    "system" => MessageRole.System,
                    "assistant" => MessageRole.Assistant,
                    "user" => MessageRole.User,
                    _ => throw new ArgumentException($"Invalid ChatRole: {responseMessage.Role}")
                };

                // Convert back to Telerik.Reporting.AI content
                var contents = responseMessage.Contents
                    .OfType<Microsoft.Extensions.AI.TextContent>()
                    .Select(tc => new Telerik.Reporting.AI.TextContent(tc.Text))
                    .Cast<IContent>()
                    .ToList();

                resultMessages.Add(new Message(messageRole, contents));
            }

            return resultMessages;
        }

        public static IClient GetCustomAIClient()
        {
            return new CustomAIClient();
        }
    }
    ````

    * .NET Framework

    ````C#
    using Azure.AI.OpenAI;
    using Microsoft.Extensions.AI;
    using System;
    using System.ClientModel;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;
    using Telerik.Reporting.AI;

    namespace WebApplication1.AI
    {
        public class CustomAIClient : IClient
        {
            public string Model { get; } = "gpt-4o-mini";

            public bool SupportsSystemPrompts => false;

            private readonly IChatClient chatClient;

            public CustomAIClient()
            {
                string endpoint = "https://ai-explorations.openai.azure.com/";
                string credential = "YOUR_API_KEY";
                string model = "gpt-4o-mini";

                chatClient = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(credential))
                    .GetChatClient(model)
                    .AsIChatClient();
            }

            public async Task<IReadOnlyCollection<IMessage>> GetResponseAsync(IReadOnlyCollection<IMessage> query, CancellationToken cancellationToken)
            {
                // Convert Telerik.Reporting.AI IMessage to Microsoft.Extensions.AI ChatMessage
                var chatMessages = new List<ChatMessage>();
                foreach (var message in query)
                {
                    ChatRole chatRole;
                    switch (message.Role)
                    {
                        case MessageRole.System:
                            chatRole = ChatRole.System;
                            break;
                        case MessageRole.Assistant:
                            chatRole = ChatRole.Assistant;
                            break;
                        case MessageRole.User:
                            chatRole = ChatRole.User;
                            break;
                        default:
                            throw new ArgumentException($"Invalid MessageRole: {message.Role}");
                    }

                    // Convert text contents from Telerik.Reporting.AI to Microsoft.Extensions.AI
                    var textContents = message.Contents
                        .OfType<Telerik.Reporting.AI.TextContent>()
                        .Select(textContent => new Microsoft.Extensions.AI.TextContent(textContent.Text))
                        .Cast<AIContent>()
                        .ToList();

                    chatMessages.Add(new ChatMessage(chatRole, textContents));
                }

                // Call Azure OpenAI
                var response = await chatClient.GetResponseAsync(chatMessages, new ChatOptions(), cancellationToken);

                // Convert the response back to Telerik.Reporting.AI IMessage
                var resultMessages = new List<IMessage>();
                foreach (var responseMessage in response.Messages)
                {
                    MessageRole messageRole;
                    switch (responseMessage.Role.Value)
                    {
                        case "system":
                            messageRole = MessageRole.System;
                            break;
                        case "assistant":
                            messageRole = MessageRole.Assistant;
                            break;
                        case "user":
                            messageRole = MessageRole.User;
                            break;
                        default:
                            throw new ArgumentException($"Invalid ChatRole: {responseMessage.Role}");
                    }

                    // Convert back to Telerik.Reporting.AI content
                    var contents = responseMessage.Contents
                        .OfType<Microsoft.Extensions.AI.TextContent>()
                        .Select(tc => new Telerik.Reporting.AI.TextContent(tc.Text))
                        .Cast<IContent>()
                        .ToList();

                    resultMessages.Add(new Telerik.Reporting.AI.Message(messageRole, contents));
                }

                return resultMessages;
            }

            public static IClient GetCustomAIClient()
            {
                return new CustomAIClient();
            }
        }
    }
    ````

    > This Azure OpenAI example uses `Azure.AI.OpenAI` version `2.2.0-beta.4` and `Microsoft.Extensions.AI.OpenAI` version `9.4.3-preview.1.25230.7` for demonstration purposes. For your implementation, you will typically use different packages specific to your LLM provider. Focus on the implementation structure, which is further detailed in the [Understanding the IClient Interface](#understanding-the-iclient-interface) section.

1. Register the custom client in your `ReportServiceConfiguration`:

    * .NET

    ````C#
    builder.Services.TryAddSingleton<IReportServiceConfiguration>(sp => new ReportServiceConfiguration
    {
        HostAppId = "MyApp",
        AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient,
        // ...
    });
    ````

    * .NET Framework

    ````C#
    public class CustomResolverReportsController : ReportsControllerBase
    {
        static ReportServiceConfiguration configurationInstance;

        static CustomResolverReportsController()
        {
            configurationInstance = new ReportServiceConfiguration
            {
                HostAppId = "MyApp",
                AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient,
                // ...
            };
        }
    }
    ````

You can further customize the AI client to enable additional features like RAG optimization, predefined prompts, and user consent settings. For more details, refer to [Customizing AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}).

## Understanding the IClient Interface

The `Telerik.Reporting.AI.IClient` interface defines the contract for AI service integration:

````C#
public interface IClient
{
    string Model { get; }
    bool SupportsSystemPrompts { get; }
    Task<IReadOnlyCollection<IMessage>> GetResponseAsync(IReadOnlyCollection<IMessage> query, CancellationToken cancellationToken);
}
````

### Key Properties and Methods

- **Model**—Specifies the model name used for tokenization encoding. This should match the actual model being used for accurate token counting. For more information on its impact, check the `tokenizationEncoding` option in the [RAG Configuration]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}#retrieval-augmented-generation-rag-configuration) section.
- **SupportsSystemPrompts**—Indicates whether the LLM supports system role messages. When `false`, all messages in the `query` argument of the `GetResponseAsync` method are converted to the user role to prevent invalid message types from being unintentionally passed to the LLM client during type conversion.
- **GetResponseAsync**—The core method that processes AI queries and returns responses.

### Implementation Details

The `IChatClient` in the [example above](#enabling-custom-ai-client) is not mandatory—it is used to simplify interaction with the Azure OpenAI service. You can implement the interface using any client that communicates with your chosen LLM provider.

When RAG (Retrieval-Augmented Generation) is enabled via the `allowRAG` configuration option, the `GetResponseAsync` method is called twice per user prompt:

1. **RAG Evaluation Call**—Determines if the prompt is suitable for RAG optimization. The `query` parameter contains instructions for the RAG applicability assessment and the user's question.
1. **Main Query Call**—Processes the request with the report data. The `query` parameter includes response instructions, report metadata (may be filtered based on the RAG evaluation), and the user's question.

This dual-call approach optimizes token usage by first determining RAG suitability, then filtering the report data only when the evaluation indicates RAG optimization is beneficial.

When RAG is disabled, the method is called only once, without the report metadata being pre-filtered.

> RAG is available only in .NET and .NET Standard.
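
If you need cross-cutting logic such as token usage tracking, one approach is to wrap another `IClient` implementation in a thin decorator that intercepts every call. The following sketch is a minimal illustration built only on the interface members shown above; the `LoggingAIClient` name and the `ILogger`-based logging are assumptions, so substitute the telemetry mechanism your application actually uses:

````C#
using Microsoft.Extensions.Logging;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Telerik.Reporting.AI;

// A minimal sketch: delegates all IClient calls to an inner client and logs basic usage data.
public class LoggingAIClient : IClient
{
    private readonly IClient inner;
    private readonly ILogger logger;
    private int callCount;

    public LoggingAIClient(IClient inner, ILogger logger)
    {
        this.inner = inner;
        this.logger = logger;
    }

    public string Model => this.inner.Model;

    public bool SupportsSystemPrompts => this.inner.SupportsSystemPrompts;

    public async Task<IReadOnlyCollection<IMessage>> GetResponseAsync(IReadOnlyCollection<IMessage> query, CancellationToken cancellationToken)
    {
        // With RAG enabled, expect two calls per user prompt (evaluation call + main query call).
        int current = Interlocked.Increment(ref this.callCount);
        this.logger.LogInformation("AI call #{Count}: {MessageCount} input messages.", current, query.Count);

        var response = await this.inner.GetResponseAsync(query, cancellationToken);

        this.logger.LogInformation("AI call #{Count}: {ResponseCount} response messages.", current, response.Count);
        return response;
    }
}
````

Such a decorator can then be returned from the `AIClientFactory` delegate, for example, `AIClientFactory = () => new LoggingAIClient(new CustomAIClient(), logger)`, assuming a `logger` instance is available where the `ReportServiceConfiguration` is created.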

## See Also

* [AI-Powered Insights Overview]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%})
* [Enable AI-Powered Insights with Built-in AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-builtin-client%})
* [Customizing AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%})
* [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights)

diff --git a/interactivity/customizing-ai-powered-insights.md b/interactivity/customizing-ai-powered-insights.md
new file mode 100644
index 000000000..117dd8e80
--- /dev/null
+++ b/interactivity/customizing-ai-powered-insights.md
@@ -0,0 +1,341 @@
---
title: Customizing AI-Powered Insights
page_title: How to Customize the AI-Powered Insights
description: "Learn how to configure the AI-powered insights functionality to handle both common and advanced use cases."
slug: telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights
tags: telerik, reporting, ai, configuration
published: True
position: 4
---

# Customizing AI-Powered Insights

This article explains how to customize the AI-powered insights functionality for different use cases. There are two distinct ways to achieve this:

- [Configuring the Report Engine](#configuring-the-report-engine)—Declarative configuration through application settings.
- [Overriding ReportsControllerBase Methods](#overriding-reportscontrollerbase-methods)—Programmatic customization with custom logic.

## Configuring the Report Engine

The declarative configuration approach handles most common customization scenarios through the [AIClient element]({%slug telerikreporting/aiclient-element%}) in your application's configuration file. It allows you to customize user consent, custom and predefined prompts, and RAG optimization without writing any code.

>tip If you haven't configured the report engine previously, make sure to check the article [Report Engine Configuration Overview]({%slug telerikreporting/using-reports-in-applications/export-and-configure/configure-the-report-engine/overview%}) to get familiar with this topic.

### User Consent Configuration

By default, the **AI Prompt** dialog requests explicit consent from users before sending prompts to the AI model. This ensures transparency about the data being sent to external AI services and gives users control over their data privacy.

*User Consent for AI Summaries*

In enterprise environments where AI usage policies are already established or when working with trusted internal models, you may want to streamline the user experience by disabling this consent requirement. In these cases, you can set the `requireConsent` option to `false`:

````JSON
{
  "telerikReporting": {
    "AIClient": {
      "requireConsent": false
    }
  }
}
````
````XML
<!-- The XML counterpart of the JSON example above; the attribute names mirror the JSON settings. -->
<Telerik.Reporting>
  <AIClient requireConsent="false" />
</Telerik.Reporting>
````

### Prompts Configuration

By default, users can create their own custom prompts to ask any questions about their reports. While this provides maximum flexibility, it can lead to unpredictable token usage costs and potentially inconsistent results. In these cases, you can provide users with predefined prompts that are designed to handle specific tasks.

To restrict users to predefined prompts only, set `allowCustomPrompts` to `false` and add the predefined prompts through the `predefinedPrompts` option:

````JSON
{
  "telerikReporting": {
    "AIClient": {
      "allowCustomPrompts": false,
      "predefinedPrompts": [
        { "text": "Generate a summary of the report." },
        { "text": "Translate the report into German." }
      ]
    }
  }
}
````
````XML
<!-- The XML counterpart of the JSON example above. The element and attribute names mirror the JSON settings; check the AIClient Element reference for the exact schema. -->
<Telerik.Reporting>
  <AIClient allowCustomPrompts="false">
    <predefinedPrompts>
      <predefinedPrompt text="Generate a summary of the report." />
      <predefinedPrompt text="Translate the report into German." />
    </predefinedPrompts>
  </AIClient>
</Telerik.Reporting>
````

You can also add predefined prompts without disabling custom ones, giving users both curated options and the flexibility to create their own queries.

### Retrieval-Augmented Generation (RAG) Configuration

By default, the AI-powered insights functionality uses a [Retrieval-Augmented Generation (RAG)](https://aws.amazon.com/what-is/retrieval-augmented-generation/) algorithm to filter out the irrelevant report data before sending it to the AI model. This approach significantly improves the accuracy and relevance of the AI-generated response while optimizing token usage.

> RAG is available only in .NET and .NET Standard. Therefore, the options that are listed below are not supported in .NET Framework configurations.

If needed, you can disable this algorithm by setting `allowRAG` to `false`.

You can also configure the RAG behavior through the `ragSettings` option:

- `modelMaxInputTokenLimit`—Limits the maximum input tokens the AI model can process in a single request. The default value is `15000`.
- `maxNumberOfEmbeddingsSent`—Limits how many embeddings (chunks of retrieved content) are sent to the model in a single request. The default value is `15`.
- `maxTokenSizeOfSingleEmbedding`—Limits the token size of each individual embedding, which prevents large chunks from dominating the prompt. The default value is `0` (no limit).
- `tokenizationEncoding`—Specifies the tokenization scheme used to estimate token usage before sending the request to the LLM. By default, the encoding is determined automatically based on the specified model, which is recommended to ensure accurate token counting. An incorrect encoding may lead to miscalculations in token limits, causing either premature truncation of context or exceeding the model’s input capacity.
- `splitTables`—Indicates whether tables should be split during Retrieval-Augmented Generation (RAG) processing. When splitting is allowed, only the relevant table cells are taken into account, significantly reducing the number of tokens. The default value is `true`.

Below is an example that takes advantage of the table splitting and automatic encoding inference but reduces the token limits:

````JSON
{
  "telerikReporting": {
    "AIClient": {
      "ragSettings": {
        "modelMaxInputTokenLimit": 12000,
        "maxNumberOfEmbeddingsSent": 10,
        "maxTokenSizeOfSingleEmbedding": 2000
      }
    }
  }
}
````

For a complete reference of all available `AIClient` options, check the article [AIClient Element Overview]({%slug telerikreporting/aiclient-element%}).

## Overriding ReportsControllerBase Methods

While the [declarative configuration](#configuring-the-report-engine) handles most common scenarios, some advanced use cases require programmatic customization. You can achieve this by overriding specific methods of the [ReportsControllerBase](/api/telerik.reporting.services.webapi.reportscontrollerbase) class in your `ReportsController`. This approach allows you to implement dynamic logic based on user context, report properties, or business rules.
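
All overrides shown in the following sections belong in the controller that exposes your Reporting REST service. The sketch below illustrates where they live in a .NET project; the route and constructor follow the standard Telerik Reporting service template, and the commented signatures are placeholders for the overrides discussed next:

````C#
using Microsoft.AspNetCore.Mvc;
using Telerik.Reporting.Services;
using Telerik.Reporting.Services.WebApi;

[Route("api/reports")]
public class ReportsController : ReportsControllerBase
{
    public ReportsController(IReportServiceConfiguration reportServiceConfiguration)
        : base(reportServiceConfiguration)
    {
    }

    // Place the overrides from the following sections here, for example:
    // public override IActionResult CreateAIThread(...)
    // protected override void UpdateAIPrompts(...)
    // public override Task<IActionResult> GetAIResponse(...)
}
````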

You can override the methods described in the following sections and customize different aspects of the AI-powered insights workflow.

### CreateAIThread(string, string, ClientReportSource)

The [CreateAIThread(string, string, ClientReportSource)](/api/telerik.reporting.services.webapi.reportscontrollerbase#Telerik_Reporting_Services_WebApi_ReportsControllerBase_CreateAIThread_System_String_System_String_Telerik_Reporting_Services_WebApi_ClientReportSource_) method is called when the AI Prompt dialog is about to be displayed. You can override this method to disable the AI-powered insights functionality entirely. The logic can be tailored based on the currently previewed report, which is represented by the `ClientReportSource` parameter. For modifying dialog properties like consent messages or predefined prompts, use the [UpdateAIPrompts](#updateaipromptsclientreportsource-aithreadinfo) method instead, which provides direct access to the `AIThreadInfo` object.

#### .NET

````C#
/// <summary>
/// Disables the AI-powered insights functionality dynamically depending on the passed parameter.
/// </summary>
public override IActionResult CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource)
{
    if (reportSource.Report == "report-with-disabled-ai-insights.trdp")
    {
        return StatusCode(
            StatusCodes.Status403Forbidden,
            new
            {
                message = "An error has occurred.",
                exceptionMessage = "AI Insights functionality is not allowed for this report.",
                exceptionType = "Exception",
                stackTrace = (string?)null
            }
        );
    }

    return base.CreateAIThread(clientID, instanceID, reportSource);
}
````

#### .NET Framework

````C#
/// <summary>
/// Disables the AI-powered insights functionality dynamically depending on the passed parameter.
/// </summary>
public override HttpResponseMessage CreateAIThread(string clientID, string instanceID, ClientReportSource reportSource)
{
    if (reportSource.Report == "SampleReport.trdp")
    {
        var errorResponse = new
        {
            message = "An error has occurred.",
            exceptionMessage = "AI Insights functionality is not allowed for this report.",
            exceptionType = "Exception",
            stackTrace = (string)null
        };

        return this.Request.CreateResponse(HttpStatusCode.Forbidden, errorResponse);
    }

    return base.CreateAIThread(clientID, instanceID, reportSource);
}
````

### UpdateAIPrompts(ClientReportSource, AIThreadInfo)

The [UpdateAIPrompts(ClientReportSource, AIThreadInfo)](/api/telerik.reporting.services.webapi.reportscontrollerbase#collapsible-Telerik_Reporting_Services_WebApi_ReportsControllerBase_UpdateAIPrompts_Telerik_Reporting_Services_WebApi_ClientReportSource_Telerik_Reporting_Services_Engine_AIThreadInfo_) method is called internally during the execution of `CreateAIThread()`. This is the recommended method for modifying dialog properties like consent messages and predefined prompts, as it provides direct access to the `AIThreadInfo` object without requiring type casting or result checking.

#### .NET

````Changing the Consent Message
/// <summary>
/// Overrides the default user consent message.
/// </summary>
protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo)
{
    aiThreadInfo.ConsentMessage = "By using this AI functionality, you authorize the processing of any data you provide, including your prompt, for the purposes of delivering the service to you. Your use of this functionality is governed by the Progress privacy policy, available at: Privacy Policy - Progress (https://www.progress.com/legal/privacy-policy).";

    base.UpdateAIPrompts(reportSource, aiThreadInfo);
}
````
````Setting Predefined Prompts Dynamically
/// <summary>
/// Modifies the collection of predefined prompts.
/// </summary>
protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo)
{
    if (reportSource.Report == "report-suitable-for-markdown-output.trdp")
    {
        aiThreadInfo.PredefinedPrompts.Add("Create a summary of the report in Markdown (.md) format.");
    }

    base.UpdateAIPrompts(reportSource, aiThreadInfo);
}
````

#### .NET Framework

````Changing the Consent Message
/// <summary>
/// Overrides the default user consent message.
/// </summary>
protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo)
{
    aiThreadInfo.ConsentMessage = "By using this AI functionality, you authorize the processing of any data you provide, including your prompt, for the purposes of delivering the service to you. Your use of this functionality is governed by the Progress privacy policy, available at: Privacy Policy - Progress (https://www.progress.com/legal/privacy-policy).";

    base.UpdateAIPrompts(reportSource, aiThreadInfo);
}
````
````Setting Predefined Prompts Dynamically
/// <summary>
/// Modifies the collection of predefined prompts.
/// </summary>
protected override void UpdateAIPrompts(ClientReportSource reportSource, AIThreadInfo aiThreadInfo)
{
    if (reportSource.Report == "report-suitable-for-markdown-output.trdp")
    {
        aiThreadInfo.PredefinedPrompts.Add("Create a summary of the report in Markdown (.md) format.");
    }

    base.UpdateAIPrompts(reportSource, aiThreadInfo);
}
````

### GetAIResponse(string, string, string, string, AIQueryArgs)

The [GetAIResponse(string, string, string, string, AIQueryArgs)](/api/telerik.reporting.services.webapi.reportscontrollerbase#Telerik_Reporting_Services_WebApi_ReportsControllerBase_GetAIResponse_System_String_System_String_System_String_System_String_Telerik_Reporting_Services_Engine_AIQueryArgs_) method is called every time a prompt is sent to the AI model. This method provides control over the AI request workflow, allowing you to intercept, modify, and validate requests before they reach the LLM. Below are examples of common customization scenarios.

#### .NET

````Modifying Outgoing Prompts
/// <summary>
/// Modifies the prompt sent from the client before passing it to the LLM.
/// </summary>
public override async Task<IActionResult> GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args)
{
    args.Query += $"{Environment.NewLine}Keep your response concise.";

    return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args);
}
````
````Token Usage Validation
/// <summary>
/// Examines the approximate tokens count and determines whether the prompt should be sent to the LLM.
/// </summary>
public override async Task<IActionResult> GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args)
{
    const int MAX_TOKEN_COUNT = 500;
    args.ConfirmationCallBack = (AIRequestInfo info) =>
    {
        if (info.EstimatedTokensCount > MAX_TOKEN_COUNT)
        {
            return ConfirmationResult.CancelResult($"The estimated token count exceeds the allowed limit of {MAX_TOKEN_COUNT} tokens.");
        }

        return ConfirmationResult.ContinueResult();
    };

    return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args);
}
````
````RAG Optimization Monitoring
/// <summary>
/// Examines whether the RAG optimization is applied for the current prompt.
/// </summary>
public override async Task<IActionResult> GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args)
{
    args.ConfirmationCallBack = (AIRequestInfo info) =>
    {
        if (info.Origin == AIRequestInfo.AIRequestOrigin.Client)
        {
            System.Diagnostics.Trace.TraceInformation($"RAG optimization is {info.RAGOptimization} for this prompt.");
        }

        return ConfirmationResult.ContinueResult();
    };

    return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args);
}
````

#### .NET Framework

> The RAG Optimization Monitoring example is not included in this section because RAG functionality is available only in .NET and .NET Standard configurations.

````Modifying Outgoing Prompts
/// <summary>
/// Modifies the prompt sent from the client before passing it to the LLM.
/// </summary>
public override async Task<HttpResponseMessage> GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args)
{
    args.Query += $"{Environment.NewLine}Keep your response concise.";

    return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args);
}
````
````Token Usage Validation
/// <summary>
/// Examines the approximate tokens count and determines whether the prompt should be sent to the LLM.
/// </summary>
public override async Task<HttpResponseMessage> GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args)
{
    const int MAX_TOKEN_COUNT = 500;
    args.ConfirmationCallBack = (AIRequestInfo info) =>
    {
        if (info.EstimatedTokensCount > MAX_TOKEN_COUNT)
        {
            return ConfirmationResult.CancelResult($"The estimated token count exceeds the allowed limit of {MAX_TOKEN_COUNT} tokens.");
        }

        return ConfirmationResult.ContinueResult();
    };

    return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args);
}
````

## See Also

* [AI-Powered Insights Overview]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%})
* [Enable AI-Powered Insights with Built-in AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-builtin-client%})
* [Enable AI-Powered Insights with Custom AI Client]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights-custom-client%})
* [AI Insights Report Demo](https://demos.telerik.com/reporting/ai-insights)