Replies: 1 comment
Yes, it is possible to integrate LiteLLM, and it is actually a very good fit. LiteLLM provides an OpenAI-compatible API layer that can proxy requests to multiple LLM providers (OpenAI, Azure OpenAI, Anthropic, Gemini, Mistral, etc.), and the proxy itself is exposed as an OpenAI-compatible endpoint.

The key point is that no dedicated LiteLLM client is required. We keep using the official OpenAI client and simply override the endpoint so that it points to the LiteLLM proxy. Since LiteLLM is fully OpenAI-API compatible, the integration is transparent to the application, which means the existing abstractions (such as the `IChatClient` factory shown below) remain untouched.

### Example: Using LiteLLM as the Chat Gateway

**How LiteLLM is used**
```csharp
public async Task<IChatClient> CreateChatClientAsync(
    CancellationToken cancellationToken = default)
{
    // LiteLLM endpoint (OpenAI-compatible)
    string[] endpoints = configuration
        .GetSection("Services:LiteLlm:https")
        .Get<string[]>();

    string host = endpoints?.FirstOrDefault()
        ?? throw new InvalidOperationException("LiteLLM endpoint not configured.");

    var endpoint = new Uri($"https://{host}");

    // LiteLLM API key (master key or per-tenant virtual key)
    string apiKey = await settingManager.GetOrNullForCurrentTenantAsync(ApiKey);
    string model = await settingManager.GetOrNullForCurrentTenantAsync(ChatModel);

    var openAiClient = new OpenAIClient(
        new ApiKeyCredential(apiKey),
        new OpenAIClientOptions
        {
            Endpoint = endpoint
        });

    return openAiClient
        .GetChatClient(model)
        .AsIChatClient();
}
```

The usage remains exactly the same:

```csharp
var chatClient = await chatClientFactory
    .CreateChatClientAsync(CancellationToken);

var agente = new ChatClientAgent(chatClient, new ChatClientAgentOptions
{
    Name = "my-agent",
    Description = """
        My Description
        """,
    ChatOptions = new ChatOptions
    {
        ResponseFormat = ChatResponseFormat.ForJsonSchema<MyResult>(),
        Instructions = "My Prompt"
    }
});

var response = await agente.RunAsync<MyResult>("My message");
return response.Result;
```

### Production usage and multi-tenancy

We are already using this approach in production, including in multi-tenant setups.
From the application’s point of view, each tenant simply receives its own API key and model name. LiteLLM takes care of routing, authentication, and provider selection behind the scenes. Overall, this keeps the codebase clean while giving us much more flexibility and control over model providers.
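As an aside, the per-tenant virtual keys mentioned above can be created programmatically through LiteLLM's key-management endpoint (`/key/generate`). The sketch below is illustrative rather than taken from our codebase: the class and parameter names are made up, it assumes the `HttpClient` base address already points at the LiteLLM proxy, and it assumes key management is enabled with a master key.

```csharp
// Illustrative sketch (not part of the original post): provisioning a per-tenant
// LiteLLM virtual key via the proxy's key-management endpoint (/key/generate).
// The tenant-specific values (allowed models, metadata) are assumptions for the example.
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Text.Json;

public sealed class LiteLlmKeyProvisioner(HttpClient httpClient, string masterKey)
{
    public async Task<string> CreateTenantKeyAsync(
        string tenantId,
        string[] allowedModels,
        CancellationToken cancellationToken = default)
    {
        using var request = new HttpRequestMessage(HttpMethod.Post, "/key/generate")
        {
            Content = JsonContent.Create(new
            {
                models = allowedModels,              // models this tenant may call
                metadata = new { tenant = tenantId } // useful for cost tracking in LiteLLM
            })
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", masterKey);

        using var response = await httpClient.SendAsync(request, cancellationToken);
        response.EnsureSuccessStatusCode();

        using var payload = JsonDocument.Parse(
            await response.Content.ReadAsStringAsync(cancellationToken));

        // LiteLLM returns the generated virtual key in the "key" field.
        return payload.RootElement.GetProperty("key").GetString()!;
    }
}
```

The returned key could then be stored as the tenant's `ApiKey` setting, so that the factory above picks it up without any further changes.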
---
Is it possible to integrate LiteLLM to simplify the project setup and gain broader support for models?